\begin{document}
\title{Resonant quantum kicked rotor with two internal levels} \author{Guzm\'an Hern\'andez and Alejandro Romanelli} \altaffiliation{[email protected]} \affiliation{Instituto de F\'{\i}sica, Facultad de Ingenier\'{\i}a\\ Universidad de la Rep\'ublica\\ C.C. 30, C.P. 11300, Montevideo, Uruguay} \date{\today }
\begin{abstract} We develop a system consisting of a quantum kicked rotor with an additional degree of freedom. This models a single two-level atom with internal ground and excited states, and it is characterized by its quantum resonances with ballistic spreading and by the entanglement between the internal and momentum degrees of freedom. These behaviors establish an equivalence between our model and the usual quantum walk on the line. \end{abstract}
\pacs{03.67-a, 32.80Qk, 05.45Mt} \maketitle
\section{Introduction}
Advances in technology during the last decades have made it possible to obtain samples of atoms at temperatures in the $nK$ range \cite{Cohen} (optical molasses) using resonant or quasiresonant exchanges of momentum and energy between atoms and laser light. The experimental progress that has made it possible to construct and preserve quantum states has also opened the possibility of building quantum computing devices \cite{Dur,Sanders,Du,Berman} and has led the scientific community to think that quantum computers could be a reality in the near future. This progress has been accompanied by the development of the interdisciplinary fields of quantum computation and quantum information. In this scientific framework, the study of simple quantum systems such as the quantum kicked rotor (QKR) \cite{Casati0,Izrailev} and the quantum walk (QW) \cite{Kempe} may be useful to understand the quantum behavior of atoms in optical molasses.
The QKR is considered the paradigm of periodically driven systems in the study of chaos at the quantum level \cite{Casati0}. This system shows behaviors without classical equivalent, such as quantum resonance and dynamical localization, which have posed interesting challenges both theoretically and experimentally \cite{Nielssen}. The occurrence of quantum resonance or dynamical localization depends on whether the kick period $T$ is a rational or irrational multiple of $4\pi$. For rational multiples the behavior of the system is resonant, while for irrational multiples the average energy of the system grows in a diffusive manner for a short time, after which the diffusion stops and localization appears. From a theoretical point of view, the two types of values of $T$ determine the spectral properties of the Hamiltonian: for irrational multiples the energy spectrum is purely discrete, and for rational multiples it contains a continuous part. Both resonance and localization can be seen as interference phenomena, the first being a constructive interference effect and the second a destructive one. The QKR has been used as a theoretical model for several experimental situations dealing with atomic traps \cite{Moore0,Kanem,Chaudhury,Moore1,Robinson0,Robinson1,Bharucha,Oskay} and is a matter of permanent attention \cite{Schomerus0,Schomerus1,alejo,alejo0,alejo1,alejo2,alejo3,alejo4,alejo5}.
The quantum walk has been introduced \cite{Aharonov,Meyer,Watrous,Ambainis,Kempe,Kendon1,Kendon2,Konno,Salvador} as a natural generalization of the classical random walk in relation with quantum computation and quantum information processing. In both cases there is a walker and a coin; at every time step the coin is tossed and the walker moves depending on the toss output. In the classical random walk the walker moves to the right or to the left, while in the QW coherent superpositions of right/left and head/tail occur. This feature endows the QW with outstanding properties, such as the linear growth with time of the standard deviation of the position of an initially localized walker, as compared with its classical counterpart, where this growth goes as $t^{1/2}$. This has strong implications in terms of the realization of algorithms based on QWs and is one of the reasons why they have received so much attention. It has been suggested \cite{Childs} that the QW can be used for universal quantum computation. Some possible experimental implementations of the QW have been proposed by a number of authors \cite{Dur,Travaglione,Sanders,Knight,Bouwmeester,Do,Chandrashekar}. In particular, the development of techniques to trap samples of atoms using resonant exchanges of momentum and energy between atoms and laser light may also provide a realistic frame to implement quantum computers \cite{Cirac}.
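The linear spreading of the QW can be illustrated with a short simulation. The following Python sketch (our illustration, not part of the cited works) runs the standard Hadamard walk on the line from the symmetric initial coin state $(|L\rangle+i|R\rangle)/\sqrt{2}$ and returns the standard deviation of the position after $t$ steps, which grows linearly in $t$ rather than as $t^{1/2}$.

```python
import numpy as np

def hadamard_walk_sigma(t):
    """Standard deviation of the position of a discrete-time Hadamard
    walk on the line after t steps, started at the origin with the
    symmetric coin state (|L> + i|R>)/sqrt(2)."""
    N = t + 1                                # lattice radius: walker reaches at most |x| = t
    L = np.zeros(2 * N + 1, dtype=complex)   # left-moving amplitudes, index x + N
    R = np.zeros(2 * N + 1, dtype=complex)   # right-moving amplitudes
    L[N] = 1 / np.sqrt(2)
    R[N] = 1j / np.sqrt(2)
    h = 1 / np.sqrt(2)
    for _ in range(t):
        newL = h * (L + R)                   # Hadamard coin toss
        newR = h * (L - R)
        L = np.roll(newL, -1)                # left movers shift to x - 1
        R = np.roll(newR, +1)                # right movers shift to x + 1
    x = np.arange(-N, N + 1)
    P = np.abs(L) ** 2 + np.abs(R) ** 2
    m1 = np.sum(x * P)
    m2 = np.sum(x ** 2 * P)
    return float(np.sqrt(m2 - m1 ** 2))

s100 = hadamard_walk_sigma(100)
s200 = hadamard_walk_sigma(200)
```

Doubling the number of steps roughly doubles the standard deviation, whereas a classical random walk would only gain a factor of $\sqrt{2}$.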
A parallelism between the behavior of the QKR and a generalized form of the QW was developed in Refs. \cite{alejo0,alejo1} showing that these models have similar dynamics. In those papers, the modified QW was mapped into a one-dimensional Anderson model \cite{Anderson}, as had been previously done for the QKR \cite{Grempel}. In the present paper, following the work of Saunders \emph{et al.} \cite{Saunders1,Saunders2}, we propose a modification of the QKR. We study some properties of this new version of the QKR and establish a novel equivalence between this new QKR and the QW. Essentially, the new QKR has an additional degree of freedom which describes the internal ground and excited states of a two-level atom.
We call this new system the two-level quantum kicked rotor (2L-QKR). In this system the internal atomic levels are coupled with the momentum of the particle. This coupling produces an entanglement between the internal degrees of freedom and the momentum of the system.
The rest of the paper is organized as follows: in the next section we present the 2L-QKR system. In the third section we obtain the time evolution of the moments. In the fourth section the entanglement between the internal degrees of freedom and momentum is studied. In the last section some conclusions are drawn.
\section{Two-level quantum kicked rotor}
We consider a Hamiltonian that describes a single two-level atom of mass $M$
with center-of-mass momentum described by the operator $\widehat{P}$. Its internal ground state is denoted by the vector $|g\rangle$ and its excited state by the vector $|e\rangle$. The internal atomic levels are coupled by two equal-frequency laser traveling waves with a controllable phase difference. Following \cite{Saunders1}, after a shift of the energy values, the 2L-QKR Hamiltonian can be written as \begin{eqnarray}
\widehat{H} &=&\frac{\widehat{P}^{2}}{2M}+\hbar \Delta |e\rangle \langle e| \notag \\
&&+K\delta _{T}(t)\cos (k_{L}\widehat{z})(|e\rangle \langle g|+|g\rangle
\langle e|). \label{ec_ham_qkr_2niveles} \end{eqnarray} Here $\Delta $ is the detuning between the laser frequency and the atomic transition frequency, and $K$ is proportional to the Rabi frequency; we shall refer to it as the strength parameter. \begin{equation} \delta _{T}(t)=\sum_{n=0}^{\infty }\delta (t-nT) \label{Dirac} \end{equation} is a periodic train of Dirac deltas applied at times $t=nT$, with $n$ integer and $T$ the kick period. $\widehat{z}$ is the operator of the atom's center-of-mass position. Finally, $k_{L}$ is the laser wave-vector magnitude along the $z$ direction.
Unlike the QKR, in the 2L-QKR the conjugate position and momentum operators have discrete and continuous components, \emph{i.e.} \begin{equation} \widehat{z}=\frac{1}{k_{L}}(2\pi \widehat{l}+\widehat{\theta }) \label{z} \end{equation} \begin{equation} \widehat{P}=\hbar {k_{L}}(\widehat{k}+\widehat{\beta }) \label{p} \end{equation} where the eigenvalues of $\widehat{l}$ and $\widehat{k}$ are integers, while the eigenvalues of $\widehat{\theta }$ lie in $\lbrack -\pi ,\pi )$ and those of the quasimomentum $\widehat{\beta }$ in $\lbrack -1/2,1/2)$. It is important to point out that the operator $\widehat{\beta }$ commutes with both $\widehat{k}$ and $\widehat{\theta }$. Using Eqs.(\ref{z},\ref{p}) to substitute $\widehat{z}$ and $\widehat{P}$ in Eq.(\ref{ec_ham_qkr_2niveles}) yields \begin{eqnarray} \widehat{H} &=&\frac{\left[ \hbar {k_{L}}(\widehat{k}+\widehat{\beta })
\right] ^{2}}{2M}+\hbar \Delta |e\rangle \langle e| \notag \\
&&+K\delta _{T}(t)\cos (\widehat{\theta })(|e\rangle \langle g|+|g\rangle
\langle e|). \label{ec_ham_qkr_2nivele} \end{eqnarray}
It must be noted that Eq.(\ref{ec_ham_qkr_2nivele}) does not depend on the operator $\widehat{l}$ and therefore $\widehat{\beta }$ is a preserved quantity. Then if the initial condition belongs to a subspace corresponding to a well defined eigenvalue of $\widehat{\beta }$, the dynamics is such that the system remains in said subspace and the evolution of the system will be only determined by the conjugate operators $\widehat{\theta }$ and $ \widehat{k}$. Therefore we may restrict ourselves to the study of the evolution constrained to a subspace corresponding to a given eigenvalue of $ \beta$. In this case the composite Hilbert space for the Hamiltonian Eq.(\ref {ec_ham_qkr_2nivele}) is the tensor product $\mathcal{H} _{s}\otimes
\mathcal{H}_{c}$. $\mathcal{H}_{s}$ is the Hilbert space associated to the discrete momentum on the line and it is spanned by the set $\{|k\rangle\}$. $
\mathcal{H}_{c}$ is the chirality (or coin) Hilbert space spanned by two orthogonal vectors $\{|g\rangle, |e\rangle\}$. In this composite space the system evolves, at discrete time steps $t\in \mathbb{N}$, along a one-dimensional lattice of sites $k\in \mathbb{Z}$. The direction of motion depends on the state of the chirality. Taking this into account it is clear that the Hilbert space of the 2L-QKR (with the preceding restriction) is identical to that of the usual QW on the line.
The evolution of the system is governed by the Hamiltonian given by Eq.(\ref {ec_ham_qkr_2nivele}), so that, as is the case for the usual QKR, the unitary time evolution operator for one temporal period $T$ can be written as the application of two operators, one representing the unitary operator due to the kick and another being the unitary operator of the free evolution \cite{Saunders1} \begin{equation}
\widehat{U}=e^{-i\left[ T\Delta |e\rangle \langle e|+\tau (\widehat{k}+ \widehat{\beta })^{2}\right] }e^{i\kappa \cos {\widehat{\theta }}\sigma _{x}} \label{evolu0} \end{equation} where $\sigma _{x}$ is the Pauli matrix in the $x$ direction, \begin{equation} \tau =\frac{k_{L}^{2}\hbar }{2M}T, \label{tau} \end{equation} and \begin{equation} \kappa =\frac{K}{\hbar }. \label{kappa} \end{equation}
The unitary operator Eq.(\ref{evolu0}) in the momentum representation and in the chirality base $\left\{ |e\rangle ,|g\rangle \right\} $ takes the following form \begin{eqnarray} U(\beta )_{jk} &=&f_{jk}(\beta ,\kappa ,\tau ) \label{uu} \\ &&\times \left( \begin{array}{cc} e^{-i\widetilde{\Delta }}\delta _{k-j\text{ }2l} & e^{-i\widetilde{\Delta } }\delta _{k-j\text{ }2l+1} \\ \delta _{k-j\text{ }2l+1} & \delta _{k-j\text{ }2l} \end{array} \right) , \notag \end{eqnarray} where \begin{equation} f_{jk}(\beta ,\kappa ,\tau )=i^{k-j}J_{k-j}\left( \kappa \right) e^{-i(j+\beta )^{2}\tau }, \label{efe} \end{equation} $\delta _{kj}$ is the Kronecker delta, $l$ is an integer number and \begin{equation} \widetilde{\Delta }=T\Delta =\frac{2M}{k_{L}^{2}\hbar }\tau \Delta . \label{delta} \end{equation} The state vector in the momentum representation can be expressed as the spinor \begin{eqnarray}
|\Psi (t)\rangle &\equiv &\left( \begin{array}{c}
|\Psi ^{e}(t)\rangle \\
|\Psi ^{g}(t)\rangle \end{array} \right) \label{psi} \\ &=&\sum_{k=-\infty }^{\infty }\int_{-\frac{1}{2}}^{\frac{1}{2}}\left( \begin{array}{c} a_{k+\beta ^{\prime }}(t) \\ b_{k+\beta ^{\prime }}(t) \end{array}
\right) \delta (\beta -\beta ^{\prime })|k+\beta ^{\prime }\rangle d\beta ^{\prime }, \notag \end{eqnarray} where $\beta $ is the value of $\beta ^{\prime }$ for the chosen subspace and \begin{equation} \left( \begin{array}{c} a_{k+\beta }(t) \\ b_{k+\beta }(t) \end{array} \right) =\left( \begin{array}{c}
\langle {k+\beta }|\Psi ^{e}(t)\rangle \\
\langle {k+\beta }|\Psi ^{g}(t)\rangle \end{array} \right) , \label{evolu} \end{equation} are the upper and lower components that correspond to the left and right chirality of the QW.
The discrete quantum map is obtained using Eqs.(\ref{uu},\ref{psi}) \begin{equation} \left( \begin{array}{c} a_{k+\beta }(t+T) \\ b_{k+\beta }(t+T) \end{array} \right) =\sum_{j=-\infty }^{\infty }U(\beta )_{kj}\left( \begin{array}{c} a_{j+\beta }(t) \\ b_{j+\beta }(t) \end{array} \right) . \label{mapa} \end{equation} The dynamical evolution of the system up to $t=nT$ is obtained applying the above rule Eq.(\ref{mapa}) $n$ times.
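The map Eq.(\ref{mapa}) can be iterated numerically on a truncated momentum lattice. The Python sketch below is our illustration (the truncation radius $N$ is a numerical parameter we introduce, not part of the model): it builds the truncated matrix of Eq.(\ref{uu}) from Bessel functions, applies it repeatedly to the localized initial state $|k=0\rangle|g\rangle$, and conserves the total probability.

```python
import numpy as np
from scipy.special import jv  # Bessel function J_nu of the first kind

def build_U(kappa, tau, beta, Delta_t, N):
    """Truncated one-period evolution operator U(beta)_{kj} of Eq. (uu)
    on momentum sites k = -N..N; the spinor components (a_k, b_k) are
    interleaved.  Delta_t plays the role of the dimensionless detuning."""
    ks = np.arange(-N, N + 1)
    dim = ks.size
    U = np.zeros((2 * dim, 2 * dim), dtype=complex)
    for i, k in enumerate(ks):          # output momentum k
        for j, m in enumerate(ks):      # input momentum j
            f = (1j) ** (k - m) * jv(k - m, kappa) * np.exp(-1j * (m + beta) ** 2 * tau)
            if (k - m) % 2 == 0:        # even momentum transfer: chirality preserved
                U[2 * i, 2 * j] = np.exp(-1j * Delta_t) * f
                U[2 * i + 1, 2 * j + 1] = f
            else:                       # odd momentum transfer: chirality flipped
                U[2 * i, 2 * j + 1] = np.exp(-1j * Delta_t) * f
                U[2 * i + 1, 2 * j] = f
    return U, ks

N = 40
U, ks = build_U(kappa=1.0, tau=2 * np.pi, beta=0.0, Delta_t=0.97 * np.pi, N=N)
psi = np.zeros(2 * (2 * N + 1), dtype=complex)
psi[2 * N + 1] = 1.0                    # b_{k=0} = 1, i.e. the ground state at k = 0
for _ in range(10):                     # ten kick periods
    psi = U @ psi
```

The truncation is harmless as long as the wave packet never reaches the lattice boundary, since $J_{d}(\kappa)$ decays rapidly for $|d|\gg\kappa$.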
\subsection{Resonance $\protect\tau=2\protect\pi$ in the $\protect\beta=0$ subspace with $\widetilde{\Delta }=2m\protect\pi$}
In this subsection we solve analytically the evolution of the system given by the map Eq.(\ref{mapa}). We consider here the principal resonance $\tau =2\pi $ in the subspace $\beta =0$. Due to the quasimomentum conservation the value of $\beta $ does not change. Therefore the accessible momentum spectrum is discrete and from now on the theoretical development is similar to that of the usual QKR in resonance. Additionally we choose $\widetilde{ \Delta }=2m\pi $ with $m$ integer in order to obtain the wave function analytically. We will show afterwards, using numerical calculation, that the qualitative behavior will be similar for arbitrary $\widetilde{\Delta }$. With these conditions the matrix of Eq.(\ref{uu}) only depends on $j-k$. In order to simplify the notation we define \begin{equation} U_{k_{a}k_{b}}(\kappa )=f_{k_{a}k_{b}}(0,\kappa ,2\pi )\left( \begin{array}{cc} \delta _{k_{a}-k_{b}\text{ }2l} & \delta _{k_{a}-k_{b}\text{ }2l+1} \\ \delta _{k_{a}-k_{b}\text{ }2l+1} & \delta _{k_{a}-k_{b}\text{ }2l} \end{array} \right) . \label{simp} \end{equation} Using Eq.(\ref{mapa}) the initial condition is connected with the wave function at the time $t=nT$ by the equation \begin{eqnarray} \left( \begin{array}{c} a_{k_{n}}(nT) \\ b_{k_{n}}(nT) \end{array} \right) &=&\sum_{k_{n-1}}\sum_{k_{n-2}}\sum_{k_{n-3}}... \notag \\ &&...\sum_{k_{2}}\sum_{k_{1}}\sum_{k_{0}}U_{k_{n}k_{n-1}}(\kappa )U_{k_{n-1}k_{n-2}}(\kappa )... \notag \\ &&...U_{k_{2}k_{1}}(\kappa )U_{k_{1}k_{0}}(\kappa )\left( \begin{array}{c} a_{k_{0}}^{0} \\ b_{k_{0}}^{0} \end{array} \right) , \label{mtotal0} \end{eqnarray} where $a_{k_{0}}^{0}=a_{k_{0}}(0)$ and $b_{k_{0}}^{0}=b_{k_{0}}(0)$.
Using the relation, \begin{eqnarray} \sum_{k_{n-1}}U_{k_{n}k_{n-1}}(\kappa _{{1}})U_{k_{n-1}k_{n-2}}(\kappa _{{2} }) &=&U_{k_{n}k_{n-2}}(\kappa _{{1}}+\kappa _{{2}}), \notag \\ && \label{rela0} \end{eqnarray} obtained in Appendix A, Eq.(\ref{mtotal0}) is reduced to \begin{eqnarray} \left( \begin{array}{c} a_{k}(nT) \\ b_{k}(nT) \end{array} \right) &=&\sum_{j}i^{j-k}J_{j-k}\left( n\kappa \right) \left\{ \delta _{j-k \text{ }2l}\left( \begin{array}{c} a_{j}^{0} \\ b_{j}^{0} \end{array} \right) \right. \notag \\ &&+\delta _{j-k\text{ }2l+1}\left. \left( \begin{array}{c} b_{j}^{0} \\ a_{j}^{0} \end{array} \right) \right\} ,\ \label{sol0} \end{eqnarray} where $l$ is now an arbitrary integer number.
\subsection{Antiresonance $\protect\tau=2\protect\pi$ in the $\protect\beta =0 $ subspace with $\widetilde{\Delta }=(2m+1)\protect\pi$}
We now find the time evolution of the wave function for $\widetilde{\Delta } =(2m+1)\pi$. Eq.(\ref{uu}) shows that in this case the matrix $U(\beta = 0)_{jk}$ satisfies the relation \begin{equation} \sum_{k_{n-1}}U_{k_{n}k_{n-1}}(\kappa )U_{k_{n-1}k_{n-2}}(\kappa )=\delta _{k_{n}\text{ }k_{n-2}}I, \label{rela2} \end{equation} where $I$ is the identity matrix. This last expression together with Eq.(\ref{mapa}) implies that \begin{eqnarray} \left( \begin{array}{c} a_{k}(nT) \\ b_{k}(nT) \end{array} \right) &=&\text{\ }\delta _{n\text{ }2l+1}\sum_{j}U_{kj}(\kappa )\left( \begin{array}{c} a_{j}^{0} \\ b_{j}^{0} \end{array} \right) \label{periodico} \\ &&+\text{ \ }\delta _{n\text{ }2l}\text{ }\left( \begin{array}{c} a_{k}^{0} \\ b_{k}^{0} \end{array} \right) . \notag \end{eqnarray} Then it is clear that the 2L-QKR shows a periodic behavior when the parameters of the system take the values considered here. This behavior has no analog in the usual QKR since the parameter $\widetilde{\Delta }$ does not exist in that system. Furthermore, it is interesting to point out that this anti-resonance occurs for $\tau=2\pi$, a value for which the usual QKR is in resonance and does not present periodic behavior.
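The period-two dynamics of Eq.(\ref{periodico}) is easy to confirm numerically. The sketch below (our illustration; the truncation radius $N$ is a numerical parameter we introduce) builds the one-period operator of Eq.(\ref{uu}) for $\tau=2\pi$, $\beta=0$ and $\widetilde{\Delta}=\pi$, and checks that a localized state returns to itself after two kicks.

```python
import numpy as np
from scipy.special import jv  # Bessel function J_nu of the first kind

def one_period_U(kappa, Delta_t, N):
    """Truncated operator of Eq. (uu) for tau = 2*pi, beta = 0, where
    exp(-i j^2 tau) = 1 for every integer j; the spinor components
    (a_k, b_k) are interleaved on momentum sites k = -N..N."""
    ks = np.arange(-N, N + 1)
    dim = ks.size
    U = np.zeros((2 * dim, 2 * dim), dtype=complex)
    for i, k in enumerate(ks):
        for j, m in enumerate(ks):
            f = (1j) ** (k - m) * jv(k - m, kappa)
            if (k - m) % 2 == 0:        # even transfer: chirality preserved
                U[2 * i, 2 * j] = np.exp(-1j * Delta_t) * f
                U[2 * i + 1, 2 * j + 1] = f
            else:                       # odd transfer: chirality flipped
                U[2 * i, 2 * j + 1] = np.exp(-1j * Delta_t) * f
                U[2 * i + 1, 2 * j] = f
    return U

N = 25
U = one_period_U(kappa=1.0, Delta_t=np.pi, N=N)  # antiresonance: Delta_t = (2m+1)*pi
psi0 = np.zeros(2 * (2 * N + 1), dtype=complex)
psi0[2 * N + 1] = 1.0                            # |k=0>|g>
psi2 = U @ (U @ psi0)                            # two kick periods
```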
\section{Probability distribution of momentum}
The evolution of the variance, $\sigma ^{2}=m_{2}-m_{1}^{2}$, of the probability distribution of momentum is a distinctive feature of the QKR in resonance. It is known that it increases quadratically in time in the quantum case, but only linearly in the classical case. In this section we study the evolution of the variance of the 2L-QKR, once again restricting ourselves to the $\beta =0$ subspace and taking $\tau =2\pi$, which corresponds to the primary resonance of the usual QKR model. We will obtain the variance from the evolution of the first and second moments, defined as $ m_{1}(t)=\sum_{k}kP_{k}(t)$ and $m_{2}(t)=\sum_{k}k^{2}P_{k}(t)$
respectively, where $P_{k}(t) = |a_k(t)|^2 + |b_k(t)|^2$ is the probability to find the particle with momentum $p = \hbar k_L k$ at time $t$.
We first consider the resonance defined by $\widetilde{\Delta} =2m\pi $. In this case we are able to calculate the first and second moments analytically using Eq.(\ref{sol0}) and the properties of the Bessel functions (see Appendix B), obtaining: \begin{equation} m_{1}(n)=\kappa n\sum_{j=-\infty }^{\infty }\Im \left[ a_{j}^{0}b_{j+1}^{0 \ast }-a_{j}^{0}b_{j-1}^{0\ast }\right] +m_{1}(0)\text{,} \label{mome0} \end{equation} \begin{eqnarray} m_{2}(n) &=&\frac{(\kappa n)^{2}}{2}\left( 1+\sum_{j=-\infty }^{\infty }\Re \left[ a_{j}^{0}a_{j+2}^{0\ast }+b_{j}^{0}b_{j+2}^{0\ast }\right] \right) \notag \\ &&+\kappa n\sum_{j=-\infty }^{\infty }(2j+1)\Im \left[ a_{j}^{0}b_{j+1}^{0 \ast }+a_{j}^{0}b_{j-1}^{0\ast }\right] \text{ } \notag \\ &&+m_{2}(0)\text{,} \label{mome} \end{eqnarray} where $\Re \left[ x\right] $ and $\Im \left[ x\right] $ are respectively the real part and imaginary part of $x$, and $m_{1}(0)$ and $m_{2}(0)$ are the moments at time $t=0$. These last equations show that the variance $\sigma^2 = m_{2}-m_{1}^{2}$ has a quadratic time dependence irrespective of the initial conditions taken. \newline \indent When $\widetilde{\Delta} = (2m + 1)\pi$, it was shown in the previous section that the 2L-QKR has a periodic dynamics and therefore the behavior of the statistical moments will be periodic as well.\newline \indent The case $\widetilde{\Delta} \neq 2m\pi$ is cumbersome to solve analytically, so we restrict ourselves to a numerical study. The evolution of the second statistical moment was obtained for different values of $\widetilde{\Delta}$ through numerical iterations of the map given by Eq.(\ref{mapa}). It was found, for all the considered values of $\widetilde{\Delta} \neq 2m\pi$, that the long-time behavior of the second moment (and therefore of the variance) is quadratic after an initial transient. The duration of the initial transient depends on the initial conditions and the value of $\widetilde{\Delta}$. 
These features can be appreciated in Fig.(\ref{f1}). \begin{figure}
\caption{The dimensionless second moment for $\widetilde{\Delta}=0.97 \protect\pi$ as a function of the dimensionless time.}
\label{f1}
\end{figure}
The figure shows the time evolution of the second moment for the initial conditions $|\Psi(0)\rangle = |k=0\rangle|g\rangle$. It can be appreciated that the second moment approaches a quadratic behavior after an oscillatory transient. It was found that the nearer the parameter $\widetilde{\Delta}$ is to $(2n+1)\pi $, the more pronounced this oscillation is.
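The quadratic law admits a simple closed-form check in the resonant case $\widetilde{\Delta}=2m\pi$: for the initial state $|k=0\rangle|g\rangle$, Eq.(\ref{sol0}) gives the momentum distribution $P_k(n)=J_k(n\kappa)^2$, so that Eq.(\ref{mome}) reduces to $m_2(n)=(\kappa n)^2/2$. The short Python sketch below (our illustration, not from the original text) evaluates this sum numerically.

```python
import numpy as np
from scipy.special import jv  # Bessel function J_nu of the first kind

def m2_resonance(n, kappa):
    """Second moment at t = nT for the principal resonance (tau = 2*pi,
    beta = 0, Delta_t = 2*m*pi) with initial state |k=0>|g>, where
    Eq. (sol0) gives the distribution P_k(n) = J_k(n*kappa)**2."""
    N = int(np.ceil(2 * n * kappa)) + 30   # truncate well beyond the Bessel support
    ks = np.arange(-N, N + 1)
    return float(np.sum(ks ** 2 * jv(ks, n * kappa) ** 2))
```

The result reproduces the identity $\sum_k k^2 J_k(x)^2 = x^2/2$, i.e. the ballistic growth $m_2(n)=(\kappa n)^2/2$.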
\section{Entanglement}
In the context of QWs several authors \cite {Carneiro,Abal,Annabestani,Omar,Pathak,Liu,Venegas,Endrejat,Bracken,Ellinas,Maloyer,alejo2010,alejo2012} have been investigating the relationship between the asymptotic coin-position entanglement and the initial conditions of the walk. In order to compare the model considered in this paper with the QW, we investigate the asymptotic chirality-momentum entanglement in the 2L-QKR. The unitary evolution of the 2L-QKR generates entanglement between chirality and momentum degrees of freedom. This entanglement will be characterized \cite {Carneiro,alejo2010} by the von Neumann entropy of the reduced density operator, called entropy of entanglement. The quantum analog of the Gibbs entropy is the von Neumann entropy \begin{equation} S_{N}(\rho )=-\text{tr}(\rho \log \rho ), \label{entropy} \end{equation}
where $\rho =|\Psi (t)\rangle \left\langle \Psi (t)\right\vert $ is the density matrix of the quantum system. Owing to the unitary dynamics of the 2L-QKR, the system remains in a pure state, and this entropy vanishes. In spite of this, chirality and momentum are entangled, and the entanglement can be quantified by the associated von Neumann entropy for the reduced density operator: \begin{equation} S=-\text{tr}(\rho _{c}\log_{2} \rho _{c}), \label{entroredu} \end{equation} where $\rho _{c}=$tr$_k(\rho)$ is the reduced density matrix that results from taking the partial trace over the momentum space. The reduced density operator can be explicitly obtained using the wave function Eq.(\ref{psi}) in the subspace $\beta =0$ and its normalization properties \begin{equation} \rho _{c}=\left( \begin{array}{cc} P_{g}(n) & Q(n) \\ Q^{\ast }(n) & P_{e}(n) \end{array} \right) , \label{rc} \end{equation} where \begin{equation} P_{g}(n)=\sum_{k=-\infty }^{\infty }\left\vert a_{k}(nT)\right\vert ^{2} \text{,} \label{pe} \end{equation} \begin{equation} P_{e}(n)=\sum_{k=-\infty }^{\infty }\left\vert b_{k}(nT)\right\vert ^{2} \text{,} \label{peb} \end{equation} \begin{equation} Q(n)=\sum_{k=-\infty }^{\infty }a_{k}(nT)b_{k}^{\ast }(nT)\text{.} \label{qu} \end{equation} $P_{e}(n)$ and $P_{g}(n)$ may be interpreted as the time-dependent probabilities for the system to be in the excited and the ground states respectively. 
In order to investigate the entanglement dependence on the initial conditions, we consider the localized case, that is the initial state of the rotor is assumed to be sharply localized with vanishing momentum and arbitrary chirality, thus \begin{equation} \left( \begin{array}{c} {a_{k}(0)} \\ {b_{k}(0)} \end{array} \right) =\left( \begin{array}{c} \cos {\frac{\gamma }{2}} \\ \exp i\varphi \text{ }\sin {\frac{\gamma }{2}} \end{array} \right) \delta _{k0}, \label{sol} \end{equation} where $\gamma \in \left[ 0,\pi \right] $ and $\varphi \in \left[ 0,2\pi \right] $ define a point on the unit three-dimensional Bloch sphere. Eq.(\ref {sol0}) takes the following form \begin{eqnarray} \left( \begin{array}{c} a_{k}(nT) \\ b_{k}(nT) \end{array} \right) &=&i^{k}J_{k}\left( n\kappa \right) \left\{ \delta _{k\text{ } 2l}\left( \begin{array}{c} \cos {\frac{\gamma }{2}} \\ \exp i\varphi \text{ }\sin {\frac{\gamma }{2}} \end{array} \right) \right. \notag \\ &&+\delta _{k\text{ }2l+1}\left. \left( \begin{array}{c} \exp i\varphi \text{ }\sin {\frac{\gamma }{2}} \\ \cos {\frac{\gamma }{2}} \end{array} \right) \right\} . 
\label{wave} \end{eqnarray} Substituting Eq.(\ref{wave}) into Eqs.(\ref{pe},\ref{peb},\ref{qu}) and using the properties of the Bessel functions, we obtain: \begin{equation} P_{g}(n)=\frac{1}{2}\left[1+ J_0(2n\kappa)\cos\gamma\right] \text{,} \label{pe2} \end{equation} \begin{equation} P_{e}(n)=\frac{1}{2}\left[1- J_0(2n\kappa)\cos\gamma\right] \text{,} \label{peb2} \end{equation} \begin{equation} Q(n)=\frac{\sin\gamma}{2}\left[\cos\varphi-i\sin\varphi J_0(2n\kappa)\right] \text{.} \label{qu2} \end{equation} The eigenvalues of the density operator $\rho_{c}$, Eq.(\ref{rc}), as functions of $P_{g}(n)$, $P_{e}(n)$ and $Q(n)$ are \begin{equation} \lambda _{\pm}=\frac{1}{2}\left[ 1\pm \sqrt{1-4\left( P_g(n)\,P_e(n)-\left\vert Q(n)\right\vert ^{2}\right) }\right], \label{lam} \end{equation} and the reduced entropy as a function of these eigenvalues is \begin{equation} S(n)=-\lambda_{+}\log_{2} \lambda_{+}-\lambda_{-}\log_{2} \lambda_{-}. \label{ttres} \end{equation} Therefore the dependence of the entropy on the initial conditions is expressed through the angular parameters $\varphi$ and $\gamma$. This means that, given certain initial conditions, the degree of entanglement of the chirality and momentum degrees of freedom is determined.\newline \indent It is seen from Eqs.(\ref{pe2},\ref{peb2},\ref{qu2}) that the occupation probabilities and the coherence $Q$ tend to a certain limit when $ n\rightarrow\infty$. In this limit $J_0(2n\kappa) \rightarrow0$ and both of the occupation probabilities tend to $1/2$, irrespective of the initial conditions. However, in the asymptotic regime, dependence on the initial conditions is still maintained by $Q$, and therefore by the entropy as well. 
Thus, in the asymptotic regime we have \begin{equation} \lambda _{\pm}\rightarrow\Lambda _{\pm}=\frac{1}{2}\left[ 1\pm \cos\varphi \sin\gamma\right], \label{lamasi} \end{equation} and the asymptotic value of the entropy, $S(n)\rightarrow S_{0}$, is \begin{equation} S_{0}=-\Lambda_{+}\log_{2} \Lambda_{+}-\Lambda_{-}\log_{2} \Lambda_{-}. \label{ttresi} \end{equation} For the initial condition $\varphi= \pi/2$ and/or $\gamma= \pi$ on the Bloch sphere, $Q\rightarrow0$ and both eigenvalues are $\Lambda_{\pm}= 1/2$. In this case the asymptotic entanglement entropy Eq.(\ref{ttresi}) has its maximum value $S_{0} = 1$. Finally, for sharply localized initial conditions with zero momentum, Fig.\ref{f2} shows the dependence of the asymptotic entanglement entropy on the parameters $\varphi$ and $\gamma$. \begin{figure}
\caption{The dimensionless entanglement entropy as a function of the dimensionless initial conditions, see Eq.(\protect\ref{sol}) . The grayscale (color online) corresponds to different values of the entropy between zero and one.}
\label{f2}
\end{figure}
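The asymptotic entropy of Eqs.(\ref{lamasi},\ref{ttresi}) is straightforward to evaluate. The following Python sketch (our illustration, not from the original text) computes $S_0$ as a function of the Bloch angles of the localized initial condition Eq.(\ref{sol}).

```python
import numpy as np

def asymptotic_entropy(gamma, phi):
    """Asymptotic entanglement entropy S0 of Eqs. (lamasi)-(ttresi) for
    the localized initial condition Eq. (sol): the limiting eigenvalues
    are Lambda_pm = (1 +/- cos(phi) * sin(gamma)) / 2."""
    lam = 0.5 * (1.0 + np.cos(phi) * np.sin(gamma))
    S = 0.0
    for p in (lam, 1.0 - lam):
        if p > 0.0:                 # 0 * log2(0) is taken as 0
            S -= p * np.log2(p)
    return S
```

As stated in the text, $\varphi=\pi/2$ or $\gamma=\pi$ gives the maximal value $S_0=1$, while $\varphi=0$, $\gamma=\pi/2$ gives a pure reduced state with $S_0=0$.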
\section{Conclusion}
We developed a new QKR model with an additional degree of freedom, the 2L-QKR. This system exhibits quantum resonances with a ballistic spreading of the variance of the momentum distribution, and entanglement between the internal and momentum degrees of freedom that depends only on the initial conditions. These results were established analytically and numerically for different values in the parameter space of this system that correspond to the primary resonance of the usual QKR model. The above two behaviors also characterize the QW on the line and hence establish again an equivalence between the QW and the 2L-QKR. This suggests that experiments related to each of the two models should also carry some kind of physical equivalence between them. We have also found that, although our system exhibits characteristics similar to those found in the usual QKR model, there are still novel features, such as the existence of the anti-resonance described in section II B, which have no analogue in the simple QKR model. These characteristics of the 2L-QKR render the system an interesting candidate for further study within the framework of quantum computation.
We acknowledge stimulating discussions with V\'{\i}ctor Micenmacher, and the support of PEDECIBA and ANII.
\appendix \section{} Starting from Eq.(\ref{simp}) the following expression is obtained \begin{equation} \sum_{k_{1}}U_{k_{1}k_{2}}(\kappa )U_{k_{0}k_{1}}(\kappa )=i^{k_{2}-k_{0}}\sum_{k_{1}}J_{\nu _{2}}\left( \kappa \right) J_{\nu _{1}}\left( \kappa \right) \left( \begin{array}{cc} E_{1} & E_{2} \\ E_{3} & E_{4} \end{array} \right) \label{s0}, \end{equation} where \begin{eqnarray*} E_{1} &=&e^{-i2\widetilde{\Delta }}\delta _{\nu _{1}\text{ }2l}\text{ } \delta _{\nu _{2}\text{ }2l^{\prime }}+e^{-i\widetilde{\Delta }}\left( 1-\delta _{\nu _{1}\text{ }2l}\right) \left( 1-\delta _{\nu _{2}\text{ } 2l^{\prime }}\right) , \\ E_{2} &=&e^{-i2\widetilde{\Delta }}\delta _{\nu _{1}\text{ }2l}\text{ } \left( 1-\delta _{\nu _{2}\text{ }2l^{\prime }}\right) +e^{-i\widetilde{ \Delta }}\text{ }\delta _{\nu _{2}\text{ }2l^{\prime }}\left( 1-\delta _{\nu _{1}\text{ }2l}\right) , \\ E_{3} &=&e^{-i\widetilde{\Delta }}\text{ }\delta _{\nu _{2}\text{ } 2l^{\prime }}\text{ }\left( 1-\delta _{\nu _{1}\text{ }2l}\right) +\delta _{\nu _{1}\text{ }2l}\left( 1-\delta _{\nu _{2}\text{ }2l^{\prime }}\right) , \\ E_{4} &=&e^{-i\widetilde{\Delta }}\text{ }\left( 1-\delta _{\nu _{1}\text{ } 2l}\right) \text{ }\left( 1-\delta _{\nu _{2}\text{ }2l^{\prime }}\right) +\delta _{\nu _{1}\text{ }2l}\delta _{\nu _{2}\text{ }2l^{\prime }}, \end{eqnarray*} and with $\nu _{1}=k_{1}-k_{0}$, $\nu _{2}=k_{2}-k_{1}$. In the above equations, three different types of sums are involved, which can be carried out using the properties of the Bessel functions (Ref.\cite{Gradshteyn}, p. 992, Eq. \textbf{8.530}). 
\begin{eqnarray} \sum_{k_{1}}J_{k_{2}-k_{1}}\left( \kappa \right) J_{k_{1}-k_{0}}\left( \kappa \right) &=&J_{\mu _{2}}\left( 2\kappa \right) , \notag \\ && \label{a1} \end{eqnarray} \begin{equation} \sum_{k_{1}}J_{k_{2}-k_{1}}\left( \kappa \right) J_{k_{1}-k_{0}}\left( \kappa \right) \delta _{k_{1}-k_{0}\text{ }2l}= \notag \end{equation} \begin{equation} \frac{1}{2}\left[ J_{\mu _{2}}\left( 2\kappa \right) \right. \left. +\delta _{k_{2}k_{0}}\right] , \label{a2} \end{equation} \begin{equation*} \sum_{k_{1}}J_{k_{2}-k_{1}}\left( \kappa \right) J_{k_{1}-k_{0}}\left( \kappa \right) \delta _{k_{1}-k_{0}\text{ }2l}\delta _{k_{2}-k_{1}\text{ } 2l^{\prime }}= \end{equation*} \begin{equation} \frac{1}{2}\delta _{\mu _{2}\text{ }2\left( l+l^{\prime }\right) }\left[ J_{\mu _{2}}\left( 2\kappa \right) \right. \left. +\delta _{k_{2}k_{0}} \right] , \label{a3} \end{equation} where $\mu _{2}=k_{2}-k_{0}$. Substituting the above equations into Eq.(\ref{s0}) and defining $p=l+l^{\prime }$ \begin{equation} \sum_{k_{1}}U_{k_{1}k_{2}}(\kappa _{{1}})U_{k_{0}k_{1}}(\kappa _{{2}})=\frac{ e^{-i\widetilde{\Delta }}}{2}\left[ \left( \begin{array}{cc} F_{1} & F_{2} \\ F_{3} & F_{4} \end{array} \right) +\left( \begin{array}{cc} G_{1} & 0 \\ 0 & G_{2} \end{array} \right) \right] \end{equation} where \begin{eqnarray*} F_{1} &=&i^{\mu _{2}}J_{\mu _{2}}\left( 2\kappa \right) \text{ }\delta _{\mu _{2}\text{ }2p}\left( 1+e^{-i\widetilde{\Delta }}\right) , \\ F_{2} &=&i^{\mu _{2}}J_{\mu _{2}}\left( 2\kappa \right) \left( 1+e^{-i \widetilde{\Delta }}\right) \left( 1-\text{ }\delta _{\mu _{2}\text{ } 2p}\right) , \\ F_{3} &=&i^{\mu _{2}}J_{\mu _{2}}\left( 2\kappa \right) \left( 1+e^{i \widetilde{\Delta }}\right) \left( 1-\text{ }\delta _{\mu _{2}\text{ } 2p}\right) , \\ F_{4} &=&i^{\mu _{2}}J_{\mu _{2}}\left( 2\kappa \right) \delta _{\mu _{2} \text{ }2p}\left( 1+e^{i\widetilde{\Delta }}\right) , \\ G_{1} &=&\delta _{k_{2}k_{0}}\left( e^{-i\widetilde{\Delta }}-1\right) , \\ G_{2} &=&\delta 
_{k_{2}k_{0}}\left( e^{i\widetilde{\Delta }}-1\right) . \end{eqnarray*} \section{} The probability $P_{k}(n)$ of finding the system with momentum $k$ at a time $t=nT$ is obtained using Eq.(\ref{sol0}). \begin{equation}
P_{k}(n)=|a_{k}(n)|^{2}+|b_{k}(n)|^{2}= \notag \end{equation} \begin{equation} =\frac{1}{2}\sum\limits_{j,l}f_{jl}[a_{j}^{0}a_{l}^{0\ast }+b_{j}^{0}b_{l}^{0\ast }]+\frac{1}{2}\sum\limits_{j,l}\Re \left\{ f_{jl}[a_{j}^{0}b_{l}^{0\ast }]\right\} \label{eqbess} \end{equation} where \begin{equation} f_{jl}=i^{l-j}\left[ J_{k-j}(n\kappa )J_{k-l}(n\kappa )+J_{k-j}(-n\kappa )J_{k-l}(-n\kappa )\right] \notag \end{equation} and $a_{k}^{0}$ and $b_{k}^{0}$ are given by the initial conditions of the system. To calculate the moments $m_{1}(n)$ and $m_{2}(n)$ we need the following sums \begin{eqnarray} I_{jl}^{(1)} &=&i^{l-j}\sum_{k=-\infty }^{\infty }kJ_{k-j}(\kappa )J_{k-l}(\kappa ) \notag \\ &=&j\delta _{jl}-\frac{i\kappa }{2}(\delta _{lj+1}-\delta _{lj-1}) \label{g0a} \end{eqnarray} \noindent and \begin{eqnarray} I_{jl}^{(2)} &=&i^{l-j}\sum_{k=-\infty }^{\infty }k^{2}J_{k-j}(\kappa )J_{k-l}(\kappa ) \notag \\ &=&\frac{\kappa ^{2}}{2}(\delta _{lj}-\frac{1}{2}(\delta _{lj+2}+\delta _{lj-2})) \notag \\ &&+i\kappa \lbrack \frac{1}{2}(\delta _{lj+1}+\delta _{lj-1})+j(\delta _{lj+1}-\delta _{lj-1})]+l^{2}\delta _{jl}. \notag \\ && \end{eqnarray} Using these expressions together with Eq.(\ref{eqbess}) and the definition of the moments we obtain the first and second moments Eqs.(\ref{mome0},\ref{mome}).
\end{document} |
\begin{document}
\title{Calculus on symplectic manifolds} \author[Michael Eastwood]{Michael Eastwood} \address{\hskip-\parindent School of Mathematical Sciences\\ University of Adelaide\\ SA 5005\\ Australia} \email{[email protected]} \author[Jan Slov\'ak]{Jan Slov\'ak} \address{\hskip-\parindent Department of Mathematics and Statistics\\ Masaryk University,\newline 611 37 Brno, Czech Republic} \email{[email protected]} \subjclass{53D05, 53B35} \thanks{This research was supported by the Czech Grant Agency. The authors would like to thank the Agency for their generous support under Grant P201/12/G028.} \thanks{This work was also supported by the Simons Foundation grant 346300 and the Polish Government MNiSW 2015--2019 matching fund. It was completed whilst the authors were visiting the Banach Centre at IMPAN in Warsaw for the Simons Semester `Symmetry and Geometric Structures.'} \begin{abstract} On a symplectic manifold, there is a natural elliptic complex replacing the de~Rham complex. It can be coupled to a vector bundle with connection and, when the curvature of this connection is constrained to be a multiple of the symplectic form, we find a new complex. In particular, on complex projective space with its Fubini--Study form and connection, we can build a series of differential complexes akin to the Bernstein--Gelfand--Gelfand complexes from parabolic differential geometry. \end{abstract} \renewcommand{\subjclassname}{\textup{2010} Mathematics Subject Classification} \maketitle \section{Introduction} Throughout this article $M$ will be a smooth manifold of dimension $2n$ equipped with a symplectic form $J_{ab}$. Here, we are using Penrose's abstract index notation~\cite{OT} and non-degeneracy of this $2$-form says that there is a skew contravariant $2$-form $J^{ab}$ such that $J_{ab}J^{ac}=\delta_b{}^c$ where $\delta_b{}^c$ is the canonical pairing between vectors and co-vectors.
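As a concrete illustration (ours, not part of the argument): for the standard symplectic form on ${\mathbb{R}}^{4}$, the matrix representing $J^{ab}$ in the convention $J_{ab}J^{ac}=\delta_b{}^c$ coincides with the matrix representing $J_{ab}$, since $(J^{\mathrm{T}})^{-1}=J$ when $J^{2}=-\operatorname{id}$ as a matrix. A minimal check in Python:

```python
# Standard symplectic form on R^{2n} for n = 2: J_{ab} is the block matrix
# [[0, I], [-I, 0]].  In this basis the contravariant J^{ab} fixed by
# J_{ab} J^{ac} = delta_b^c is represented by the SAME matrix.
n = 2
dim = 2 * n
J = [[0.0] * dim for _ in range(dim)]
for i in range(n):
    J[i][n + i] = 1.0
    J[n + i][i] = -1.0
for b in range(dim):
    for c in range(dim):
        s = sum(J[a][b] * J[a][c] for a in range(dim))   # J_{ab} J^{ac}
        assert abs(s - (1.0 if b == c else 0.0)) < 1e-12
print("J_{ab} J^{ac} = delta_b^c verified")
```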
Let $\Wedge^k$ denote the bundle of $k$-forms on~$M$. The homomorphism $$\Wedge^k\to\Wedge^{k-2}\enskip\mbox{given by}\enskip \phi_{abc\cdots d}\mapsto J^{ab}\phi_{abc\cdots d}$$ is surjective for $2\leq k\leq n$ with non-trivial kernel, corresponding to the irreducible representation $$\rule[-10pt]{20pt}{0pt}\begin{picture}(145,10) \put(5,3){\line(1,0){20}} \put(5,2.6){\makebox(0,0){$\bullet$}} \put(20,2.6){\makebox(0,0){$\bullet$}} \put(36,2.6){\makebox(0,0){$\cdots$}} \put(50,2.6){\makebox(0,0){$\bullet$}} \put(45,3){\line(1,0){40}} \put(65,2.6){\makebox(0,0){$\bullet$}} \put(80,2.6){\makebox(0,0){$\bullet$}} \put(96,2.6){\makebox(0,0){$\cdots$}} \put(105,3){\line(1,0){20}} \put(110,2.6){\makebox(0,0){$\bullet$}} \put(125,2.6){\makebox(0,0){$\bullet$}} \put(125,5){\line(1,0){15}} \put(125,1){\line(1,0){15}} \put(140,2.6){\makebox(0,0){$\bullet$}} \put(132.5,3){\makebox(0,0){$\langle$}} \put(5,5){\makebox(0,0)[b]{\scriptsize$\vphantom{(}0$}} \put(20,5){\makebox(0,0)[b]{\scriptsize$\vphantom{(}0$}} \put(50,5){\makebox(0,0)[b]{\scriptsize$\vphantom{(}0$}} \put(65,5){\makebox(0,0)[b]{\scriptsize$\vphantom{(}1$}} \put(80,5){\makebox(0,0)[b]{\scriptsize$\vphantom{(}0$}} \put(110,5){\makebox(0,0)[b]{\scriptsize$\vphantom{(}0$}} \put(125,5){\makebox(0,0)[b]{\scriptsize$\vphantom{(}0$}} \put(140,5){\makebox(0,0)[b]{\scriptsize$\vphantom{(}0$}} \put(65,-10){\vector(0,1){8}} \put(70,-6){\makebox(0,0)[l]{\scriptsize{$k^{\mathrm{th}}$ node}}} \end{picture}\quad\mbox{of}\quad {\mathrm{Sp}}(2n,{\mathbb{R}})\subset{\mathrm{GL}}(2n,{\mathbb{R}}).$$ Denoting this bundle by $\Wedge_\perp^k$, there is a canonical splitting of the short exact sequence $$0\to\Wedge_\perp^k\raisebox{-6.3pt}{$\begin{array}{c} \rightleftarrows\\[-8pt] \mbox{\scriptsize$\pi$}\end{array}$} \Wedge^k\to\Wedge^{k-2}\to 0$$ and an elliptic complex~\cite{BEGN,E,ES,S,TY} \begin{equation} \label{RScomplex}\addtolength{\arraycolsep}{-1pt}\begin{array}{rcccccccccccc} 
0&\to&\Wedge^0&\stackrel{d}{\longrightarrow}&\Wedge^1 &\stackrel{d_\perp}{\longrightarrow}&\Wedge_\perp^2 &\stackrel{d_\perp}{\longrightarrow}&\Wedge_\perp^3 &\stackrel{d_\perp}{\longrightarrow}&\cdots &\stackrel{d_\perp}{\longrightarrow}&\Wedge_\perp^{n}\\[2pt] &&&&&&&&&&&&\big\downarrow\makebox[0pt][l]{\scriptsize$d_\perp^2$}\\ 0&\leftarrow&\Wedge^0&\stackrel{d_\perp}{\longleftarrow}&\Wedge^1 &\stackrel{d_\perp}{\longleftarrow}&\Wedge_\perp^2 &\stackrel{d_\perp}{\longleftarrow}&\Wedge_\perp^3 &\stackrel{d_\perp}{\longleftarrow}&\cdots &\stackrel{d_\perp}{\longleftarrow}&\Wedge_\perp^{n} \end{array}\end{equation} where \begin{itemize} \item $d:\Wedge^0\to\Wedge^1$ is the exterior derivative, \item for $1\leq k< n$, the operator $d_\perp:\Wedge_\perp^k\to\Wedge_\perp^{k+1}$ is the composition $$\Wedge_\perp^k\hookrightarrow\Wedge^k\xrightarrow{\,d\,}\Wedge^{k+1} \xrightarrow{\,\pi\,}\Wedge_\perp^{k+1},$$ a first order operator, \item $d_\perp:\Wedge_\perp^{k+1}\to\Wedge_\perp^k$ are canonically defined first order operators, which may be seen as adjoint to $d_\perp:\Wedge_\perp^k\to\Wedge_\perp^{k+1}$, \item $d_\perp^2:\Wedge_\perp^n\to\Wedge_\perp^n$ is the composition
$$\Wedge_\perp^n\xrightarrow{\,d_\perp\,}\Wedge_\perp^{n-1} \xrightarrow{\,d_\perp\,}\Wedge_\perp^n,$$
a second order operator. \end{itemize} More explicitly, formul{\ae} for these operators may be given as follows. Firstly, it is convenient to choose a {\em symplectic connection\/}~$\nabla_a$, namely a torsion-free connection such that $\nabla_aJ_{bc}=0$, equivalently $\nabla_aJ^{bc}=0$. As shown in~\cite{GRS}, for example, such connections always exist and if $\nabla_a$ is one such, then the general symplectic connection is
$$\hat\nabla_a\phi_b=\nabla_a\phi_b+J^{cd}\Xi_{abc}\phi_d\quad\mbox{where} \enskip\Xi_{abc}=\Xi_{(abc)}.$$
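Torsion-freeness of $\hat\nabla_a$ is immediate from $\Xi_{abc}=\Xi_{(abc)}$, and a short computation using $J^{ed}J_{dc}=-\delta_c{}^e$ shows that $\hat\nabla_aJ_{bc}=0$ as well. The following Python sketch (our own coordinate check, with $n=2$, the standard basis, and a random totally symmetric $\Xi_{abc}$) verifies this numerically:

```python
import itertools
import random

n = 2
dim = 2 * n
# Standard symplectic form; in this basis J^{ab} has the same matrix as J_{ab}.
J = [[0.0] * dim for _ in range(dim)]
for i in range(n):
    J[i][n + i], J[n + i][i] = 1.0, -1.0

# A random totally symmetric Xi_{abc}
raw = [[[random.random() for _ in range(dim)] for _ in range(dim)] for _ in range(dim)]
Xi = [[[sum(raw[p[0]][p[1]][p[2]] for p in itertools.permutations((a, b, c))) / 6.0
        for c in range(dim)] for b in range(dim)] for a in range(dim)]

# hat-nabla_a J_{bc} - nabla_a J_{bc}
#   = J^{ed} Xi_{abe} J_{dc} + J^{ed} Xi_{ace} J_{bd}   (should vanish)
for a in range(dim):
    for b in range(dim):
        for c in range(dim):
            t = sum(J[e][d] * (Xi[a][b][e] * J[d][c] + Xi[a][c][e] * J[b][d])
                    for e in range(dim) for d in range(dim))
            assert abs(t) < 1e-12
print("hat-nabla also annihilates J")
```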
Then, for $1\leq k<n$, the operator $d_\perp:\Wedge_\perp^k\to\Wedge_\perp^{k+1}$ is given by \begin{equation}\label{early}\textstyle\phi_{def\cdots g}\longmapsto \nabla_{[c}\phi_{def\cdots g]} -\frac{k}{2(n+1-k)}J^{ab}(\nabla_a\phi_{b[ef\cdots g})J_{cd]}\end{equation}
and $d_\perp:\Wedge_\perp^{k+1}\to\Wedge_\perp^k$ is given by \begin{equation}\label{late} \psi_{cdef\cdots g}\longmapsto J^{bc}\nabla_b\psi_{cdef\cdots g}.\end{equation}
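The trace term removed in (\ref{early}) implements the projection $\pi:\Wedge^{k+1}\to\Wedge_\perp^{k+1}$; for a $2$-form it reads $\pi(\phi)_{cd}=\phi_{cd}-\frac{1}{2n}(J^{ab}\phi_{ab})J_{cd}$, using $J^{ab}J_{ab}=2n$. A minimal coordinate check (Python, our own sketch, $n=2$):

```python
import random

# pi : Lambda^2 -> Lambda^2_perp removes the J-trace:
#   pi(phi)_{cd} = phi_{cd} - (1/2n) (J^{ab} phi_{ab}) J_{cd}.
# Coordinate check for n = 2 (standard basis, where the matrices of
# J^{ab} and J_{ab} coincide).
n = 2
dim = 2 * n
J = [[0.0] * dim for _ in range(dim)]
for i in range(n):
    J[i][n + i], J[n + i][i] = 1.0, -1.0

phi = [[0.0] * dim for _ in range(dim)]          # a random 2-form
for a in range(dim):
    for b in range(a):
        phi[b][a] = random.random()
        phi[a][b] = -phi[b][a]

trace = sum(J[a][b] * phi[a][b] for a in range(dim) for b in range(dim))
pi_phi = [[phi[a][b] - trace / (2 * n) * J[a][b] for b in range(dim)]
          for a in range(dim)]
new_trace = sum(J[a][b] * pi_phi[a][b] for a in range(dim) for b in range(dim))
print(abs(new_trace))  # ~0: pi(phi) lands in Lambda^2_perp
```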
Now suppose $E$ is a smooth vector bundle on $M$ and $\nabla:E\to\Wedge^1\otimes E$ is a connection. Choosing any torsion-free connection on $\Wedge^1$ induces a connection on $\Wedge^1\otimes E$ and, as is well-known, the composition $$\Wedge^1\otimes E\to
\Wedge^1\otimes\Wedge^1\otimes E\to\Wedge^2\otimes E$$ does not depend on
this choice. (It is the second in a well-defined sequence of differential
operators \begin{equation}\label{coupled_de_Rham}
E\xrightarrow{\,\nabla\,}\Wedge^1\otimes E
\xrightarrow{\,\nabla\,}\Wedge^2\otimes E \xrightarrow{\,\nabla\,}\cdots
\xrightarrow{\,\nabla\,}\Wedge^{2n-1}\otimes E
\xrightarrow{\,\nabla\,}\Wedge^{2n}\otimes E\end{equation} known as the {\em
coupled de~Rham sequence\/}.) In particular, we may define a homomorphism
$\Theta:E\to E$ by
$$\textstyle\Theta\Sigma=\frac1{2n}J^{ab}\nabla_a\nabla_b\Sigma
\quad\mbox{for}\enskip\Sigma\in\Gamma(E).$$ It is part of the curvature
of~$\nabla$ and if this is the only curvature, then
\begin{equation}\label{Theta_in_the_symplectically_flat_case}
(\nabla_a\nabla_b-\nabla_b\nabla_a)\Sigma=2J_{ab}\Theta\Sigma,\end{equation}
and we shall say that $\nabla$ is {\em symplectically flat\/}. Looking back
at (\ref{RScomplex}), it is easy to see that there are coupled operators
$$E \begin{array}c\scriptstyle\nabla\\[-8pt] \longrightarrow\\[-10pt]
\longleftarrow\\[-9pt] \scriptstyle{}\enskip\nabla_\perp\end{array}
\Wedge^1\otimes E \begin{array}c\scriptstyle\;\nabla\!{}_\perp\\[-8pt]
\longrightarrow\\[-10pt] \longleftarrow\\[-9pt]
\scriptstyle{}\;\nabla\!{}_\perp\end{array} \Wedge_\perp^2\otimes E
\begin{array}c\scriptstyle\;\nabla\!{}_\perp\\[-8pt]
\longrightarrow\\[-10pt] \longleftarrow\\[-9pt]
\scriptstyle{}\;\nabla\!{}_\perp\end{array} \cdots
\begin{array}c\scriptstyle\;\nabla\!{}_\perp\\[-8pt]
\longrightarrow\\[-10pt] \longleftarrow\\[-9pt]
\scriptstyle{}\;\nabla\!{}_\perp\end{array} \Wedge_\perp^{n-1}\otimes E
\begin{array}c\scriptstyle\;\nabla\!{}_\perp\\[-8pt]
\longrightarrow\\[-10pt] \longleftarrow\\[-9pt]
\scriptstyle{}\;\nabla\!{}_\perp\end{array} \Wedge_\perp^n\otimes E,$$
explicit formul\ae\ for which are just as in the uncoupled cases
(\ref{early}) and~(\ref{late}).
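A simple example of a symplectically flat connection (our own illustration): on the trivial complex line bundle over ${\mathbb{R}}^2$ with $J=dx\wedge dy$, set $\nabla_x=\partial_x-\mathrm{i}cy$ and $\nabla_y=\partial_y+\mathrm{i}cx$; then $(\nabla_a\nabla_b-\nabla_b\nabla_a)\Sigma=2J_{ab}\Theta\Sigma$ with $\Theta=\mathrm{i}c$, a constant multiple of the identity. A finite-difference check in Python:

```python
import cmath

c = 0.7
h = 1e-3

def f(x, y):                       # an arbitrary smooth section (trivialised)
    return cmath.exp(1j * x * y) * (1 + x - 2 * y)

def Dx(g):                         # nabla_x = d/dx - i c y, central differences
    return lambda x, y: (g(x + h, y) - g(x - h, y)) / (2 * h) - 1j * c * y * g(x, y)

def Dy(g):                         # nabla_y = d/dy + i c x
    return lambda x, y: (g(x, y + h) - g(x, y - h)) / (2 * h) + 1j * c * x * g(x, y)

x0, y0 = 0.3, -0.4
comm = Dx(Dy(f))(x0, y0) - Dy(Dx(f))(x0, y0)
print(abs(comm - 2j * c * f(x0, y0)))  # small: O(h^2) discretisation error
```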
To complete the coupled version of (\ref{RScomplex}) let us use \begin{equation}\label{middle_operator} \textstyle\nabla^2_\perp-\frac2n\Theta: \Wedge_\perp^n\otimes E\longrightarrow\Wedge_\perp^n\otimes E\end{equation} for the middle operator. It is evident that $$E\stackrel{\nabla}{\longrightarrow}\Wedge^1\otimes E \xrightarrow{\,\nabla_\perp\,}\Wedge_\perp^2\otimes E$$ is a complex if and only if $\nabla$ is symplectically flat. The reason for the curvature term in (\ref{middle_operator}) is that this feature propagates as follows. \begin{thm}\label{one} Suppose $E\xrightarrow{\,\nabla\,}\Wedge^1\otimes E$ is a symplectically flat connection and define $\Theta:E\to E$ by~\eqref{Theta_in_the_symplectically_flat_case}. Then the coupled version of \eqref{RScomplex} $$\addtolength{\arraycolsep}{-1pt}\begin{array}{rcccccccccc} 0&\to&E&\stackrel{\nabla}{\longrightarrow}&\Wedge^1\otimes E &\stackrel{\nabla_\perp}{\longrightarrow}&\Wedge_\perp^2\otimes E &\stackrel{\nabla_\perp}{\longrightarrow}&\cdots &\stackrel{\nabla_\perp}{\longrightarrow}&\Wedge_\perp^n\otimes E\\[2pt] &&&&&&&&&& \big\downarrow \makebox[0pt][l]{\scriptsize$\nabla_\perp^2-\frac2{n}\Theta$}\\ 0&\leftarrow&E&\stackrel{\nabla_\perp}{\longleftarrow}&\Wedge^1\otimes E &\stackrel{\nabla_\perp}{\longleftarrow}&\Wedge_\perp^2\otimes E &\stackrel{\nabla_\perp}{\longleftarrow}&\cdots &\stackrel{\nabla_\perp}{\longleftarrow}&\Wedge_\perp^n\otimes E \end{array}\quad$$ is a complex. It is locally exact except near the beginning where $$\ker\nabla:E\to\Wedge^1\otimes E\quad\mbox{and}\quad \frac{\ker\nabla_\perp:\Wedge^1\otimes E\to\Wedge_\perp^2\otimes E} {\operatorname{im}\nabla:E\to\Wedge^1\otimes E}$$ may be identified with the kernel and cokernel, respectively, of\/ $\Theta$ as locally constant sheaves. \end{thm} \noindent More precision and a proof of Theorem~\ref{one} will be provided in \S\ref{rumin_seshadri}. Our next theorem yields some natural symplectically flat connections. 
\begin{thm}\label{two} Suppose $M$ is a\/ $2n$-dimensional symplectic manifold with symplectic connection~$\nabla_a$. Then there is a natural vector bundle\/ ${\mathcal{T}}$ on $M$ of rank $2n+2$ equipped with a connection, which is symplectically flat if and only if the curvature $R_{ab}{}^c{}_d$ of $\nabla_a$ has the form
\begin{equation}\label{Vis0} R_{ab}{}^c{}_d=\delta_a{}^c\Phi_{bd}-\delta_b{}^c\Phi_{ad} +J_{ad}\Phi_{be}J^{ce}-J_{bd}\Phi_{ae}J^{ce}+2J_{ab}\Phi_{de}J^{ce}, \end{equation} for some symmetric tensor~$\Phi_{ab}$. \end{thm} \noindent In particular, the Fubini--Study connection on complex projective space is symplectic for the standard K\"ahler form and its curvature is of the form (\ref{Vis0}) for $\Phi_{ab}=g_{ab}$, the standard metric.
More generally, if the symplectic connection $\nabla_a$ arises from a K\"ahler metric, then we shall see that (\ref{Vis0}) holds precisely in the case of constant holomorphic sectional curvature.
After proving Theorems~\ref{one} and~\ref{two}, the remainder of this article is concerned with the consequences of Theorem~\ref{one} for the vector bundle ${\mathcal{T}}$ and those bundles, such as~$\bigodot^k\!{\mathcal{T}}$, induced from it. In particular, these consequences apply on complex projective space, where we shall find a series of elliptic complexes closely following the Bernstein--Gelfand--Gelfand complexes on the sphere $S^{2n+1}$, viewed as a homogeneous space for the Lie group ${\mathrm{Sp}}(2n+2,{\mathbb{R}})$.
This article is based on our earlier work~\cite{ES} but here we focus on the simpler case where we are given a symplectic structure as background. This results in fewer technicalities and in this article we include more detail, especially in constructing the BGG-like complexes in~\S\ref{BGG-like}.
\section{The Rumin--Seshadri complex}\label{rumin_seshadri} By the {\em Rumin--Seshadri complex\/}, we mean the differential complex (\ref{RScomplex}) after~\cite{S}. However, the $4$-dimensional case is due to R.T.~Smith~\cite{Sm} and the general case is also independently due to Tseng and Yau~\cite{TY}. In this section we shall derive the coupled version of this complex as in Theorem~\ref{one}, our proof of which includes (\ref{RScomplex}) as a special case. The following lemma is also the key step in~\cite{ES}. \begin{lemma}\label{key_lemma} Suppose $E$ is a vector bundle on $M$ with symplectically flat connection $\nabla:E\to\Wedge^1\otimes E$. Define $\Theta:E\to E$ by~\eqref{Theta_in_the_symplectically_flat_case}. Then $\Theta$ has constant rank and the bundles $\ker\Theta$ and $\operatorname{coker}\Theta$ acquire from~$\nabla$, flat connections defining locally constant sheaves \underbar{$\ker\Theta$} and \underbar{$\operatorname{coker}\Theta$}, respectively. There is an elliptic complex $$\begin{array}{cccccccccc}E &\stackrel{\nabla}{\longrightarrow}&\Wedge^1\otimes E &\stackrel{\nabla}{\longrightarrow}&\Wedge^2\otimes E &\stackrel{\nabla}{\longrightarrow}&\Wedge^3\otimes E &\stackrel{\nabla}{\longrightarrow}&\Wedge^4\otimes E\\ &\begin{picture}(0,0)(0,-3) \put(-9,6){\vector(3,-2){18}}\end{picture} &\oplus &\begin{picture}(0,0)(0,-3) \put(-9,-6){\vector(3,2){18}} \put(-9,6){\vector(3,-2){18}}\end{picture} &\oplus &\begin{picture}(0,0)(0,-3) \put(-9,-6){\vector(3,2){18}} \put(-9,6){\vector(3,-2){18}}\end{picture} &\oplus &\begin{picture}(0,0)(0,-3) \put(-9,-6){\vector(3,2){18}} \put(-9,6){\vector(3,-2){18}}\end{picture} &\oplus&\cdots, \\ &&E&\longrightarrow&\Wedge^1\otimes E &\longrightarrow&\Wedge^2\otimes E &\longrightarrow&\Wedge^3\otimes E \end{array}$$ where the differentials are given by $$\Sigma\!\mapsto\!\left[\!\begin{array}{c}\nabla\Sigma\\ \Theta\Sigma\end{array}\!\right] \quad \left[\!\begin{array}{c}\phi\\ \eta\end{array}\!\right] 
\!\mapsto\!\left[\!\begin{array}{c}\nabla\phi-J\otimes\eta\\ \nabla\eta-\Theta\phi\end{array}\!\right] \quad\left[\!\begin{array}{c}\omega\\ \psi\end{array}\!\right] \!\mapsto\!\left[\!\begin{array}{c}\nabla\omega+J\wedge\psi\\ \nabla\psi+\Theta\omega\end{array}\!\right]\enskip\cdots.$$ It is locally exact save for the zeroth and first cohomologies, which may be identified with \underbar{$\ker\Theta$} and \underbar{$\operatorname{coker}\Theta$}, respectively. \end{lemma} \begin{proof} {From} (\ref{Theta_in_the_symplectically_flat_case}) the Bianchi identity for $\nabla$ reads $$0=\nabla_{[a}\big(J_{bc]}\Theta\big)=J_{[ab}\nabla_{c]}\Theta$$ and non-degeneracy of $J_{ab}$ implies that $\nabla_a\Theta=0$. Consequently, the homomorphism $\Theta$ has constant rank and the following diagram with exact rows commutes $$\addtolength{\arraycolsep}{-2pt}\begin{array}{ccccccccccc} 0&\to&\ker\Theta&\to&E&\xrightarrow{\,\Theta\,}&E&\to&\operatorname{coker}\Theta&\to&0\\ &&&&\downarrow\!\makebox[0pt][l]{\scriptsize$\nabla$} &&\downarrow\!\makebox[0pt][l]{\scriptsize$\nabla$}\\ 0&\to&\Wedge^1\otimes\ker\Theta&\to&\Wedge^1\otimes E&\xrightarrow{\,\Theta\,} &\Wedge^1\otimes E&\to&\Wedge^1\otimes\operatorname{coker}\Theta&\to&0 \end{array}$$ and yields the desired connections on $\ker\Theta$ and~$\operatorname{coker}\Theta$, which are easily seen to be flat. Ellipticity of the given complex is readily verified and, by definition, the kernel of its first differential is~\underbar{$\ker\Theta$}. To identify the higher local cohomology of this complex the key observation is that locally we may choose a $1$-form $\tau$ such that $d\tau=J$ and, having done this, the connection $$\Gamma(E)\ni\Sigma\stackrel{\tilde\nabla}{\longmapsto} \nabla\Sigma-\tau\otimes\Theta\Sigma\in \Gamma(\Wedge^1\otimes E)$$ is flat. 
The rest of the proof is diagram chasing, using exactness of $$E\xrightarrow{\,\tilde\nabla\,}\Wedge^1\otimes E \xrightarrow{\,\tilde\nabla\,}\Wedge^2\otimes E \xrightarrow{\,\tilde\nabla\,}\Wedge^3\otimes E \xrightarrow{\,\tilde\nabla\,}\Wedge^4\otimes E \xrightarrow{\,\tilde\nabla\,}\cdots.$$ If needed, the details are in~\cite{ES}. \end{proof}
\noindent{\em Proof of Theorem~\ref{one}}. In \cite{ES}, the corresponding result \cite[Theorem~4]{ES} is proved by invoking a spectral sequence. Here, we shall, instead, prove two typical cases `by hand,' leaving the rest of the proof to the reader.
For our first case, let us suppose $n\geq 3$ and prove local exactness of $$\Wedge^1\otimes E\xrightarrow{\,\nabla_\perp\,}\Wedge_\perp^2\otimes E \xrightarrow{\,\nabla_\perp\,}\Wedge_\perp^3\otimes E.$$ Thus, we are required to show that if $\omega_{ab}$ has values in $E$ and $$\textstyle\omega_{ab}=\omega_{[ab]}\qquad J^{ab}\omega_{ab}=0\qquad \nabla_{[c}\omega_{de]}=\frac1{n-1}J^{ab}(\nabla_a\omega_{b[c})J_{de]},$$ then locally there is $\phi_{a}\in\Gamma(\Wedge^1\otimes E)$ such that $$\textstyle\omega_{cd} =\nabla_{[c}\phi_{d]}-\frac1{2n}J^{ab}(\nabla_a\phi_b)J_{cd}.$$ If we set $\psi_c\equiv-\frac1{n-1}J^{ab}\nabla_a\omega_{bc}$, then $\nabla_{[c}\omega_{de]}+J_{[cd}\psi_{e]}=0$ so $$0=\nabla_{[b}\nabla_c\omega_{de]}+J_{[bc}\nabla_d\psi_{e]} =J_{[bc}\Theta\omega_{de]}+J_{[bc}\nabla_d\psi_{e]}$$ and since $J\wedge\underbar{\enskip}:\Wedge^2\to\Wedge^4$ is injective it follows that $$\nabla_{[c}\psi_{d]}+\Theta\omega_{cd}=0.$$ In other words, we have shown that $$\begin{array}{rcl} \nabla\omega+J\wedge\psi&=&0\\ \nabla\psi+\Theta\omega&=&0\end{array}$$ and Lemma~\ref{key_lemma} locally yields $\phi_a\in\Gamma(\Wedge^1\otimes E)$ and $\eta\in\Gamma(E)$ such that $$\begin{array}{rcl}\nabla_{[a}\phi_{b]}-J_{ab}\eta&=&\omega_{ab}\\ \nabla_a\eta-\Theta\phi_a&=&\psi_a\end{array}$$ In particular, $$J^{ab}\nabla_a\phi_b-2n\eta=J^{ab}\big(\nabla_a\phi_b-J_{ab}\eta\big) =J^{ab}\omega_{ab}=0$$ and, therefore, $$\textstyle\nabla_{[c}\phi_{d]}-\frac1{2n}J^{ab}(\nabla_a\phi_b)J_{cd} =\nabla_{[c}\phi_{d]}-\eta J_{cd}=\omega_{cd},$$ as required.
Our second case is more involved. It is to show that \begin{equation}\label{complex} \Wedge_\perp^n\otimes E\xrightarrow{\,\nabla_\perp^2-\frac2n\Theta\,} \Wedge_\perp^n\otimes E\xrightarrow{\,\nabla_\perp\,} \Wedge_\perp^{n-1}\otimes E\end{equation} is locally exact. As regards $\nabla_\perp:\Wedge_\perp^n\otimes E\to\Wedge_\perp^{n-1}\otimes E$, notice that $$\textstyle J^{bc}\nabla_b\psi_{cdef\cdots g} =\frac{n+1}2J^{bc}\nabla_{[b}\psi_{cdef\cdots g]}$$ and that if $\phi_{def\cdots g}\in\Gamma(\Wedge^{k}\otimes E)$, then \begin{equation}\label{combinatorics} \textstyle J^{bc}J_{[bc}\phi_{def\cdots g]}= \frac{4(n-k)}{(k+1)(k+2)}\phi_{def\cdots g}+ \frac{k(k-1)}{(k+1)(k+2)}J_{[de}\phi_{f\cdots g]bc}J^{bc}\end{equation} so if $\phi_{def\cdots g}\in\Gamma(\Wedge_\perp^{n-1}\otimes E)$, then $$\textstyle J^{bc}J_{[bc}\phi_{def\cdots g]}= \frac4{n(n+1)}\phi_{def\cdots g}.$$ Therefore, $\nabla_\perp\psi\in\Gamma(\Wedge_\perp^{n-1}\otimes E)$ is characterised by \begin{equation}\label{trick} \textstyle J\wedge\nabla_\perp\psi=\frac2n\nabla\psi\end{equation} as an equation in $\Wedge^{n+1}\otimes E$. In particular, in $\Wedge^{n+2}\otimes E$ we find $$\textstyle J\wedge\nabla\nabla_\perp\psi =\nabla(J\wedge\nabla_\perp\psi)=\frac2n\nabla^2\psi =J\wedge\Theta\psi=0$$ whence $\nabla\nabla_\perp\psi$ already lies in $\Wedge^n\otimes E$ and there is no need to remove the trace as in (\ref{early}) to form $\nabla_\perp^2\psi$. 
Therefore, invoking (\ref{trick}) once again, the composition $$\Wedge_\perp^n\otimes E\xrightarrow{\,\nabla_\perp\,} \Wedge_\perp^{n-1}\otimes E \xrightarrow{\,\nabla_\perp\,}\Wedge_\perp^n\otimes E \xrightarrow{\,\nabla_\perp\,}\Wedge_\perp^{n-1}\otimes E$$ is characterised by $$\textstyle J\wedge\nabla_\perp^3\psi=\frac2n\nabla\nabla_\perp^2\psi =\frac2n\nabla^2\nabla_\perp\psi=\frac2nJ\wedge\Theta\nabla_\perp\psi =\frac2nJ\wedge\nabla_\perp\Theta\psi$$ and, since $J\wedge\underbar{\enskip}:\Wedge^{n-1}\to\Wedge^{n+1}$ is an isomorphism, we conclude that $\nabla_\perp^3\psi=\frac2n\nabla_\perp\Theta\psi$, equivalently that (\ref{complex}) is a complex.
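The purely algebraic identity (\ref{combinatorics}) lends itself to a numerical spot check; the Python sketch below (ours) verifies it for $n=2$, $k=2$ with a random $2$-form, antisymmetrising with the weighted bracket convention:

```python
import itertools
import math
import random

n = 2
dim = 2 * n
J = [[0.0] * dim for _ in range(dim)]
for i in range(n):
    J[i][n + i], J[n + i][i] = 1.0, -1.0   # J_{ab}; J^{ab} has the same matrix here

def perm_sign(p):
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def antisym(T, r):
    """Weighted antisymmetrisation of a rank-r tensor {index tuple: value}."""
    return {idx: sum(perm_sign(p) * T[tuple(idx[i] for i in p)]
                     for p in itertools.permutations(range(r))) / math.factorial(r)
            for idx in itertools.product(range(dim), repeat=r)}

# k = 2: phi a random 2-form; check
#   J^{bc} J_{[bc} phi_{de]} = (4(n-k)/((k+1)(k+2))) phi_{de}
#                              + (k(k-1)/((k+1)(k+2))) J_{de} (J^{bc} phi_{bc})
phi = [[0.0] * dim for _ in range(dim)]
for a in range(dim):
    for b in range(a):
        phi[b][a] = random.random()
        phi[a][b] = -phi[b][a]
T = {idx: J[idx[0]][idx[1]] * phi[idx[2]][idx[3]]
     for idx in itertools.product(range(dim), repeat=4)}
A = antisym(T, 4)
tr = sum(J[b][c] * phi[b][c] for b in range(dim) for c in range(dim))
k = 2
for d in range(dim):
    for e in range(dim):
        lhs = sum(J[b][c] * A[(b, c, d, e)] for b in range(dim) for c in range(dim))
        rhs = (4 * (n - k) * phi[d][e] + k * (k - 1) * J[d][e] * tr) / ((k + 1) * (k + 2))
        assert abs(lhs - rhs) < 1e-12
print("identity (combinatorics) verified for n = 2, k = 2")
```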
Before proceeding, let us remark on another consequence of (\ref{combinatorics}), namely that for $\nu_{cdef\cdots g}\in\Gamma(\Wedge^n\otimes E)$,
\begin{equation}\label{algebra} J_{[ab}\nu_{cdef\cdots g]}=0\iff J^{cd}\nu_{cdef\cdots g}=0. \end{equation} Now to establish local exactness, suppose $\nu\in\Gamma(\Wedge_\perp^n\otimes E)$ satisfies $\nabla_\perp\nu=0$. Equivalently, according to (\ref{trick}) and~(\ref{algebra}) $$\nu\in\Gamma(\Wedge^n\otimes E)\quad\mbox{satisfies}\enskip \nabla\nu=0\enskip\mbox{and}\enskip J\wedge\nu=0.$$ Lemma~\ref{key_lemma} implies that locally there are $$\begin{array}{l}\phi\in\Gamma(\Wedge^n\otimes E)\\ \eta\in\Gamma(\Wedge^{n-1}\otimes E)\end{array}\enskip\mbox{such that}\enskip \begin{array}{rcl}\nabla\phi-J\wedge\eta&=&0\\ \nabla\eta-\Theta\phi&=&\nu.\end{array}$$
Since $$0\to\Wedge^{n-2}\xrightarrow{\,J\wedge\underbar{\enskip}\,}\Wedge^n \to\Wedge_\perp^n\to0$$ is exact, we can write $\phi$ uniquely as $$\phi=\psi+J\wedge\tau,$$ where $\psi\in\Gamma(\Wedge_\perp^n\otimes E)$ and $\tau\in\Gamma(\Wedge^{n-2}\otimes E)$. We conclude that $$\begin{array}{rcl}\nabla\psi-J\wedge\hat\eta&=&0\\ \nabla\hat\eta-\Theta\psi&=&\nu,\end{array}\enskip\mbox{(where}\enskip \hat\eta=\eta-\nabla\tau).$$ However, as discussed above, these equations say exactly that $$\textstyle\nabla_\perp^2\psi-\frac2n\Theta\psi=\nu,$$ and exactness is shown.
$\square$
\section{Tractor bundles}\label{tractors}\label{tractor_bundles}
For the rest of the article we suppose that we are given, not only a manifold $M$ with symplectic form~$J_{ab}$, but also a torsion-free connection $\nabla_a$ on the tangent bundle (and hence on all other tensor bundles) such that $\nabla_aJ_{bc}=0$. This is sometimes called a {\em Fedosov structure}~\cite{GRS} on~$M$. The curvature $R_{ab}{}^c{}_d$ of~$\nabla_a$, characterised by $$(\nabla_a\nabla_b-\nabla_b\nabla_a)X^c=R_{ab}{}^c{}_dX^d,$$ satisfies $$R_{ab}{}^c{}_d=R_{[ab]}{}^c{}_d\qquad R_{[ab}{}^c{}_{d]}=0\qquad R_{ab}{}^c{}_dJ_{ce}=R_{ab}{}^c{}_eJ_{cd}$$ and enjoys the following decomposition into irreducible parts $$R_{ab}{}^c{}_d=V_{ab}{}^c{}_d+\delta_a{}^c\Phi_{bd}-\delta_b{}^c\Phi_{ad} +J_{ad}\Phi_{be}J^{ce}-J_{bd}\Phi_{ae}J^{ce}+2J_{ab}\Phi_{de}J^{ce},$$ for some symmetric~$\Phi_{ab}$, where $V_{ab}{}^a{}_d=0$ (reflecting the branching $$\begin{picture}(15,10) \put(0,0){\line(1,0){5}} \put(0,5){\line(1,0){15}} \put(0,10){\line(1,0){15}} \put(0,0){\line(0,1){10}} \put(5,0){\line(0,1){10}} \put(10,5){\line(0,1){5}} \put(15,5){\line(0,1){5}} \end{picture}\enskip=\enskip\begin{picture}(17,10) \put(0,0){\line(1,0){5}} \put(0,5){\line(1,0){15}} \put(0,10){\line(1,0){15}} \put(0,0){\line(0,1){10}} \put(5,0){\line(0,1){10}} \put(10,5){\line(0,1){5}} \put(15,5){\line(0,1){5}} \put(17,3){\makebox(0,0){$\scriptstyle\perp$}} \end{picture}\enskip\oplus\enskip\begin{picture}(10,5) \put(0,0){\line(1,0){10}} \put(0,5){\line(1,0){10}} \put(0,0){\line(0,1){5}} \put(5,0){\line(0,1){5}} \put(10,0){\line(0,1){5}} \end{picture}$$ of representations under ${\mathrm{GL}}(2n,{\mathbb{R}})\supset{\mathrm{Sp}}(2n,{\mathbb{R}})$). Notice that \begin{equation}\label{Phi}\textstyle\Phi_{bd}=\frac1{2(n+1)}R_{ab}{}^a{}_d =\frac1{4(n+1)}J^{ae}R_{ae}{}^c{}_bJ_{cd}.\end{equation} We define the {\em standard tractor bundle\/} to be the rank $2n+2$ vector bundle ${\mathcal{T}}\equiv\Wedge^0\oplus\Wedge^1\oplus\Wedge^0$ with its {\em tractor connection\/} $$\textstyle\nabla_a\! 
\left[\!\begin{array}c\sigma\\ \mu_b\\ \rho\end{array}\!\right]= \left[\!\begin{array}c\nabla_a\sigma-\mu_a\\ \nabla_a\mu_b+J_{ab}\rho+\Phi_{ab}\sigma\\ \nabla_a\rho-\Phi_{ab}J^{bc}\mu_c+S_a\sigma \end{array}\!\right]\!,\enskip\mbox{where}\enskip S_a\equiv\frac1{2n+1}J^{bc}\nabla_c\Phi_{ab}.$$ Readers familiar with conformal differential geometry may recognise the form of this connection as following the tractor connection in that setting~\cite{BEG}. If needs be, we shall write {\em symplectic tractor connection\/} to distinguish the connection just defined from any alternatives. We shall need the following curvature identities. \begin{lemma}\label{curvature_identities} Let $Y_{abc}\equiv\frac1{2n+1}\nabla_cV_{ab}{}^c{}_d$. Then \begin{equation}\label{contracted_Bianchi} Y_{abc}=2\nabla_{[a}\Phi_{b]c}-2J_{c[a}S_{b]}+2J_{ab}S_c\end{equation} and \begin{equation}\label{nablaY} \begin{array}{rcr}J^{ad}\nabla_aY_{bcd} &=&J^{ad}V_{bc}{}^e{}_a\Phi_{ed} +4n(J^{ad}\Phi_{ba}\Phi_{cd}-\nabla_{[b}S_{c]})\qquad\\[3pt] &&{}+2J_{bc}J^{ad}(\nabla_aS_d-J^{ef}\Phi_{ae}\Phi_{df}). \end{array}\end{equation} \end{lemma} \begin{proof} Writing the Bianchi identity $\nabla_{[a}R_{bc]}{}^d{}_e=0$ in terms of $V_{ab}{}^c{}_d$ and $\Phi_{ab}$ yields $$\nabla_{[a}V_{bc]}{}^d{}_e=-2\delta_{[b}{}^d\nabla_a\Phi_{c]e} +2J^{df}J_{e[b}\nabla_a\Phi_{c]f}-2J^{df}J_{[bc}\nabla_{a]}\Phi_{ef}.$$ and contracting over ${}_a{}^d$ gives $$\begin{array}{rcr} \frac13\nabla_aV_{bc}{}^a{}_e &=&\frac{4(n-1)}3\nabla_{[b}\Phi_{c]e} +\frac23\big[\nabla_{[b}\Phi_{c]e}-(2n+1)J_{e[b}S_{c]}\big]\qquad\\[3pt] &&{}+\frac23\big[(2n+1)J_{bc}S_e+2\nabla_{[b}\Phi_{c]e}\big], \end{array}$$ which is easily rearranged as~(\ref{contracted_Bianchi}).
For (\ref{nablaY}), firstly notice that $$J^{ad}R_{ab}{}^e{}_d=J^{ed}R_{ab}{}^a{}_d=2(n+1)J^{ed}\Phi_{bd}$$ and the Bianchi symmetry may be written as $R_{a[b}{}^e{}_{c]}=-\frac12R_{bc}{}^e{}_a$. Thus, $$\begin{array}{rcl}J^{ad}\nabla_a\nabla_b\Phi_{cd} &\!\!=\!\!&\nabla_bJ^{ad}\nabla_a\Phi_{cd} -J^{ad}R_{ab}{}^e{}_c\Phi_{ed}-J^{ad}R_{ab}{}^e{}_d\Phi_{ce}\\[3pt] &\!\!=\!\!&-(2n+1)\nabla_bS_c -J^{ad}R_{ab}{}^e{}_c\Phi_{ed}+2(n+1)J^{de}\Phi_{bd}\Phi_{ce} \end{array}$$ and so $$\textstyle J^{ad}\nabla_a\nabla_{[b}\Phi_{c]d}=-(2n+1)\nabla_{[b}S_{c]} +\frac12J^{ad}R_{bc}{}^e{}_a\Phi_{ed}+2(n+1)J^{de}\Phi_{bd}\Phi_{ce}.$$ {From} (\ref{contracted_Bianchi}) we see that $$J^{ad}\nabla_aY_{bcd}=2J^{ad}\nabla_a\nabla_{[b}\Phi_{c]d} +2\nabla_{[b}S_{c]}+2J_{bc}J^{ad}\nabla_aS_d.$$ Therefore, $$J^{ad}\nabla_aY_{bcd} =J^{ad}R_{bc}{}^e{}_a\Phi_{ed}-4n\nabla_{[b}S_{c]} +4(n+1)J^{de}\Phi_{bd}\Phi_{ce}+2J_{bc}J^{ad}\nabla_aS_d.$$ Finally, $$J^{ad}R_{bc}{}^e{}_a\Phi_{ed} =J^{ad}V_{bc}{}^e{}_a\Phi_{ed} -4J^{ad}\Phi_{ba}\Phi_{cd} -2J_{bc}J^{ad}J^{ef}\Phi_{ae}\Phi_{df},$$ so $$\begin{array}{rcl}J^{ad}\nabla_aY_{bcd} &=&J^{ad}V_{bc}{}^e{}_a\Phi_{ed} +4nJ^{ad}\Phi_{ba}\Phi_{cd} -2J_{bc}J^{ad}J^{ef}\Phi_{ae}\Phi_{df}\\[3pt] &&\quad{}-4n\nabla_{[b}S_{c]} +2J_{bc}J^{ad}\nabla_aS_d, \end{array}$$ which may be rearranged as~(\ref{nablaY}). \end{proof} \begin{prop}\label{tractor_curvature} The tractor connection ${\mathcal{T}}\to\Wedge^1\otimes{\mathcal{T}}$ preserves the non-degenerate skew form $$\left\langle\left[\!\begin{array}c\sigma\\ \mu_b\\ \rho\end{array}\!\right], \left[\!\begin{array}c\tilde\sigma\\ \tilde\mu_c\\ \tilde\rho\end{array}\!\right]\right\rangle\equiv \sigma\tilde\rho+J^{bc}\mu_b\tilde\mu_c-\rho\tilde\sigma$$ and its curvature is given by
$$\setlength{\arraycolsep}{1pt}\begin{array}{rcl} (\nabla_a\nabla_b-\nabla_b\nabla_a)\!\! \left[\!\begin{array}c\sigma\\ \mu_d\\ \rho\end{array}\!\right] &=&\left[\!\begin{array}{c}0\\ -V_{ab}{}^c{}_d\mu_c +Y_{abd}\sigma\\ -Y_{abc}J^{cd}\mu_d +\frac1{2n}(J^{cd}V_{ab}{}^e{}_c\Phi_{de}-J^{cd}\nabla_cY_{abd})\sigma \end{array}\!\right]\\[20pt] &&+2J_{ab}\!\left[\!\begin{array}{c} \rho\\ J^{ce}\Phi_{cd}\mu_e-S_d\sigma\\ S_cJ^{cd}\mu_d +\frac1{2n}J^{cd}(\nabla_cS_d-J^{ef}\Phi_{ce}\Phi_{df})\sigma \end{array}\!\right]\!\!. \end{array}$$ \end{prop} \begin{proof}We expand $$\left\langle \nabla_a\!\left[\!\begin{array}c\sigma\\ \mu_b\\ \rho\end{array}\!\right], \left[\!\begin{array}c\tilde\sigma\\ \tilde\mu_c\\ \tilde\rho\end{array}\!\right]\right\rangle+\left\langle \left[\!\begin{array}c\sigma\\ \mu_b\\ \rho\end{array}\!\right], \nabla_a\!\left[\!\begin{array}c\tilde\sigma\\ \tilde\mu_c\\ \tilde\rho\end{array}\!\right]\right\rangle$$ to obtain $$\begin{array}{l}(\nabla_a\sigma-\mu_a)\tilde\rho +\sigma(\nabla_a\tilde\rho-\Phi_{ab}J^{bc}\tilde\mu_c+S_a\tilde\sigma)\\ \enskip{}+J^{bc}(\nabla_a\mu_b+J_{ab}\rho+\Phi_{ab}\sigma)\tilde\mu_c +J^{bc}\mu_b(\nabla_a\tilde\mu_c+J_{ac}\tilde\rho+\Phi_{ac}\tilde\sigma)\\ \quad{}-(\nabla_a\rho-\Phi_{ab}J^{bc}\mu_c+S_a\sigma)\tilde\sigma -\rho(\nabla_a\tilde\sigma-\tilde\mu_a)\end{array}$$ in which all terms cancel save for $$(\nabla_a\sigma)\tilde\rho +\sigma\nabla_a\tilde\rho +J^{bc}(\nabla_a\mu_b)\tilde\mu_c +J^{bc}\mu_b\nabla_a\tilde\mu_c -(\nabla_a\rho)\tilde\sigma -\rho\nabla_a\tilde\sigma,$$ which reduces to $$\nabla_a\big(\sigma\tilde\rho+J^{bc}\mu_b\tilde\mu_c-\rho\tilde\sigma\big),$$ as required. For the curvature, we readily compute
$$\nabla_{[a}\nabla_{b]}\! \left[\!\begin{array}c\sigma\\ \mu_d\\ \rho\end{array}\!\right] =\left[\!\begin{array}l\nabla_{[a}\nabla_{b]}\sigma-J_{ba}\rho\\ \nabla_{[a}\nabla_{b]}\mu_d +J_{d[a}\Phi_{b]c}J^{ce}\mu_e -\Phi_{d[a}\mu_{b]} +T_{abd}\sigma\\ \nabla_{[a}\nabla_{b]}\rho- T_{abc}J^{cd}\mu_d +(\nabla_{[a}S_{b]}-J^{cd}\Phi_{ac}\Phi_{bd})\sigma \end{array}\!\right],$$ where $T_{abc}\equiv\nabla_{[a}\Phi_{b]c}-J_{c[a}S_{b]}$. Lemma~\ref{curvature_identities}, however, states that $$\textstyle T_{abc}=\frac12Y_{abc}-J_{ab}S_c$$
and $$\begin{array}{rcl}4n(\nabla_{[a}S_{b]}-J^{cd}\Phi_{ac}\Phi_{bd}) &=&J^{cd}V_{ab}{}^e{}_c\Phi_{de}-J^{cd}\nabla_cY_{abd}\\[3pt] &&{}+2J_{ab}J^{cd}(\nabla_cS_d-J^{ef}\Phi_{ce}\Phi_{df}). \end{array}$$ Therefore, $$\begin{array}{rcl}\nabla_{[a}\nabla_{b]}\! \left[\!\begin{array}c\sigma\\ \mu_d\\ \rho\end{array}\!\right] &=&\left[\!\begin{array}{c}0\\ \nabla_{[a}\nabla_{b]}\mu_d +J_{d[a}\Phi_{b]c}J^{ce}\mu_e -\Phi_{d[a}\mu_{b]} +\frac12Y_{abd}\sigma\\ -\frac12Y_{abc}J^{cd}\mu_d +\frac1{4n}(J^{cd}V_{ab}{}^e{}_c\Phi_{de}-J^{cd}\nabla_cY_{abd})\sigma \end{array}\!\right]\\[20pt] &&\enskip{}+J_{ab}\!\left[\!\begin{array}{c} \rho\\ -S_d\sigma\\ S_cJ^{cd}\mu_d +\frac1{2n}J^{cd}(\nabla_cS_d-J^{ef}\Phi_{ce}\Phi_{df})\sigma \end{array}\!\right]. \end{array}$$ Finally, $$R_{ab}{}^c{}_d\mu_c=V_{ab}{}^c{}_d\mu_c-2\Phi_{d[a}\mu_{b]} +2J_{d[a}\Phi_{b]c}J^{ce}\mu_e +2J_{ab}\Phi_{de}J^{ce}\mu_c,$$ so $$\textstyle\nabla_{[a}\nabla_{b]}\mu_d +J_{d[a}\Phi_{b]c}J^{ce}\mu_e -\Phi_{d[a}\mu_{b]} =-\frac12V_{ab}{}^c{}_d\mu_c-J_{ab}\Phi_{de}J^{ce}\mu_c$$ whence $$\begin{array}{rcl}\nabla_{[a}\nabla_{b]}\! \left[\!\begin{array}c\sigma\\ \mu_d\\ \rho\end{array}\!\right] &=&\left[\!\begin{array}{c}0\\ -\frac12V_{ab}{}^c{}_d\mu_c +\frac12Y_{abd}\sigma\\ -\frac12Y_{abc}J^{cd}\mu_d +\frac1{4n}(J^{cd}V_{ab}{}^e{}_c\Phi_{de}-J^{cd}\nabla_cY_{abd})\sigma \end{array}\!\right]\\[20pt] &&\enskip{}+J_{ab}\!\left[\!\begin{array}{c} \rho\\ J^{ce}\Phi_{cd}\mu_e-S_d\sigma\\ S_cJ^{cd}\mu_d +\frac1{2n}J^{cd}(\nabla_cS_d-J^{ef}\Phi_{ce}\Phi_{df})\sigma \end{array}\!\right], \end{array}$$ as required. \end{proof}
\begin{cor}\label{symplectically_flat_tractors} The tractor connection is symplectically flat if and only if the curvature tensor $V_{ab}{}^c{}_d$ vanishes. \end{cor}
\section{K\"ahler geometry} K\"ahler manifolds provide a familiar source of symplectic manifolds equipped with a compatible torsion-free connection as in~\S\ref{tractors}. In this case, the connection $\nabla_a$ is the Levi-Civita connection of a metric $g_{ab}$ and $J_a{}^b\equiv J_{ac}g^{bc}$ is an almost complex structure on~$M$ whose integrability is equivalent to the vanishing of~$\nabla_aJ_{bc}$. In K\"ahler geometry, the Riemann curvature tensor decomposes into three irreducible parts: \begin{equation}\label{Bochner_in_real_money} \begin{array}{l} R_{ab}{}^c{}_d=U_{ab}{}^c{}_d\\ \enskip{}+\delta_a{}^c\Xi_{bd}-\delta_b{}^c\Xi_{ad} -g_{ad}\Xi_b{}^c+g_{bd}\Xi_a{}^c\\ \quad{}+J_a{}^c\Sigma_{bd} -J_b{}^c\Sigma_{ad} -J_{ad}\Sigma_b{}^c +J_{bd}\Sigma_a{}^c +2J_{ab}\Sigma^c{}_d +2J^c{}_d\Sigma_{ab}\\ \enskip\quad{}+\Lambda(\delta_a{}^cg_{bd}-\delta_b{}^cg_{ad} +J_a{}^cJ_{bd} -J_b{}^cJ_{ad} +2J_{ab}J^c{}_d), \end{array}\end{equation} where indices have been raised using $g^{ab}$ and \begin{itemize} \item $U_{ab}{}^c{}_d$ is totally trace-free with respect to $g^{ab}$, $J_a{}^b$, and $J^{ab}$, \item $\Xi_{ab}$ is trace-free symmetric whilst $\Sigma_{ab}\equiv J_a{}^c\Xi_{bc}$ is skew. \end{itemize} Computing the Ricci curvature from this decomposition, we find $$R_{bd}\equiv R_{ab}{}^a{}_d=2(n+2)\Xi_{bd}+2(n+1)\Lambda g_{bd}$$ and therefore from (\ref{Phi}) conclude that $$\textstyle\Phi_{ab}=\frac{n+2}{n+1}\Xi_{ab}+\Lambda g_{ab}.$$ Hence
$$\begin{array}{rcl}J_c{}^aR_{ab}{}^c{}_d &=&J_c{}^aV_{ab}{}^c{}_d -J_{bd}\Phi_a{}^a -2J_b{}^a\Phi_{da}\\[3pt] &=&J_c{}^aV_{ab}{}^c{}_d -2\frac{n+2}{n+1}\Sigma_{bd} -2(n+1)\Lambda J_{bd}.\end{array}$$ On the other hand, from (\ref{Bochner_in_real_money}) we find
$$J_c{}^aR_{ab}{}^c{}_d= -2(n+2)\Sigma_{bd}-2(n+1)\Lambda J_{bd}$$ and, comparing these two expressions gives $$\textstyle J_c{}^aV_{ab}{}^c{}_d-2\frac{n+2}{n+1}\Sigma_{bd}=-2(n+2)\Sigma_{bd}$$ and we have established the following. \begin{prop}\label{Kaehler_consequence} Concerning the symplectic curvature decomposition on a K\"ahler manifold, $$\textstyle J_c{}^aV_{ab}{}^c{}_d
=-2\frac{n(n+2)}{n+1}\Sigma_{bd}.$$ \end{prop} \begin{cor}\label{whenKaehlerVvanishes} The symplectic tractor connection on a K\"ahler manifold is symplectically flat if and only if the metric has constant holomorphic sectional curvature. \end{cor} \begin{proof} According to Corollary~\ref{symplectically_flat_tractors}, we have to interpret the constraint $V_{ab}{}^c{}_d=0$ in the K\"ahler case. {From} (\ref{Bochner_in_real_money}) it is already clear that $U_{ab}{}^c{}_d=0$ and Proposition~\ref{Kaehler_consequence} implies that also $\Sigma_{ab}=0$ so (\ref{Bochner_in_real_money}) reduces to $$R_{ab}{}^c{}_d=\Lambda(\delta_a{}^cg_{bd}-\delta_b{}^cg_{ad} +J_a{}^cJ_{bd} -J_b{}^cJ_{ad} +2J_{ab}J^c{}_d),$$ which is exactly the constancy of holomorphic sectional curvature. \end{proof}
\section{BGG-like complexes on ${\mathbb{CP}}_n$}\label{BGG-like} Fix a real vector space ${\mathfrak{g}}_{-1}$ of dimension $2n$, let ${\mathfrak{g}}_1$ denote its dual, and fix a non-degenerate $2$-form $J_{ab}\in\Wedge^2{\mathfrak{g}}_1$. The $(2n+1)$-dimensional Heisenberg Lie algebra may be realised as $${\mathfrak{h}}={\mathbb{R}}\oplus{\mathfrak{g}}_{-1},$$ where the first summand is the $1$-dimensional centre of ${\mathfrak{h}}$ and the Lie bracket on ${\mathfrak{g}}_{-1}$ is given by $$[X,Y]=2J_{ab}X^aY^b\in{\mathbb{R}}\hookrightarrow{\mathfrak{h}}.$$ We should admit right away that the reason for this seemingly arcane notation is that we shall soon have occasion to write \begin{equation}\label{sp(2n+2,R)} \addtolength{\arraycolsep}{-2pt}{\mathfrak{sp}}(2n+2,{\mathbb{R}}) =\begin{array}[t]{ccccccccc}{\mathfrak{g}}_{-2}&\oplus&{\mathfrak{g}}_{-1} &\oplus&{\mathfrak{g}}_0&\oplus &{\mathfrak{g}}_1&\oplus&{\mathfrak{g}}_2\\
\|&&&&\|&&&&\|\\ {\mathbb{R}} &&&&\makebox[0pt]{${\mathfrak{sp}}(2n,{\mathbb{R}})\oplus{\mathbb{R}}$\quad} &&&&{\mathbb{R}}\end{array}\end{equation}
(a $|2|$-graded Lie algebra as in \cite[\S4.2.6]{CS}) and, in particular, regard ${\mathfrak{h}}={\mathbb{R}}\oplus{\mathfrak{g}}_{-1} ={\mathfrak{g}}_{-2}\oplus{\mathfrak{g}}_{-1}$ as a Lie subalgebra of ${\mathfrak{sp}}(2n+2,{\mathbb{R}})$. Be that as it may, let us suppose that ${\mathbb{V}}$ is a finite-dimensional representation of~${\mathfrak{h}}$. The Lie algebra cohomology $H^r({\mathfrak{h}},{\mathbb{V}})$ may be realised as the cohomology of the Chevalley-Eilenberg complex \begin{equation}\label{C-E} 0\to{\mathbb{V}}\to{\mathfrak{h}}^*\otimes{\mathbb{V}}\to \cdots\to\Wedge^r{\mathfrak{h}}^*\otimes{\mathbb{V}}\to \Wedge^{r+1}{\mathfrak{h}}^*\otimes{\mathbb{V}}\to\cdots\end{equation} as, for example, in~\cite[Chapter~IV]{Knapp}. We shall require, however, the following alternative realisation. \begin{lemma}\label{LieAlgBGG} There is a complex \begin{equation}\addtolength{\arraycolsep}{-2pt}\label{HeisenbergBGG} \begin{array}{rcccccccccc} 0&\to&{\mathbb{V}}&\stackrel{\partial}{\longrightarrow}& {\mathfrak{g}}_1\otimes{\mathbb{V}} &\stackrel{\partial_\perp}{\longrightarrow}& \Wedge_\perp^2{\mathfrak{g}}_1\otimes{\mathbb{V}} &\stackrel{\partial_\perp}{\longrightarrow}&\cdots &\stackrel{\partial_\perp}{\longrightarrow}& \Wedge_\perp^n{\mathfrak{g}}_1\otimes{\mathbb{V}}\\[2pt] &&&&&&&&&& \big\downarrow\\ 0&\leftarrow&{\mathbb{V}}&\stackrel{\partial_\perp}{\longleftarrow}& {\mathfrak{g}}_1\otimes{\mathbb{V}} &\stackrel{\partial_\perp}{\longleftarrow}& \Wedge_\perp^2{\mathfrak{g}}_1\otimes{\mathbb{V}} &\stackrel{\partial_\perp}{\longleftarrow}&\cdots &\stackrel{\partial_\perp}{\longleftarrow}& \Wedge_\perp^n{\mathfrak{g}}_1\otimes{\mathbb{V}}
\end{array}\end{equation} whose cohomology realises $H^r({\mathfrak{h}},{\mathbb{V}})$. Here, we are writing $$\Wedge_\perp^r{\mathfrak{g}}_1 \equiv\{\omega_{abc\cdots d}\in\Wedge^r{\mathfrak{g}}_1\mid J^{ab}\omega_{abc\cdots d}=0\},$$ where $J^{ab}\in\Wedge^2{\mathfrak{g}}_{-1}$ is the inverse of $J_{ab}\in\Wedge^2{\mathfrak{g}}_1$ (let's say normalised so that $J_{ab}J^{ac}=\delta_b{}^c$).
\end{lemma} \begin{proof} Notice that any representation $\rho:{\mathfrak{h}}\to\operatorname{End}({\mathbb{V}})$ is determined by its restriction to~${\mathfrak{g}}_{-1}\subset{\mathfrak{h}}$. Indeed, writing $\partial_a:{\mathfrak{g}}_{-1}\to\operatorname{End}({\mathbb{V}})$ for this restriction, to say that $\rho$ is a representation of ${\mathfrak{h}}$ is to say that \begin{equation}\label{rep_of_h}\begin{array}{rcl} (\partial_a\partial_b-\partial_b\partial_a)v &=&2J_{ab}\theta v\\ (\partial_a\theta-\theta\partial_a)v&=&0\end{array} \raisebox{2pt}{$\Big\}\quad\forall\,v\in{\mathbb{V}}$}, \end{equation} where $\theta\in\operatorname{End}({\mathbb{V}})$ is $\rho(1)$ for $1\in{\mathbb{R}}\subset{\mathfrak{h}}$.
The splitting ${\mathfrak{h}}^*={\mathfrak{g}}_1\oplus{\mathbb{R}}$ allows us to write (\ref{C-E}) as \begin{equation}\label{C-E_rewritten} \addtolength{\arraycolsep}{-2pt}\begin{array}{cccccccccc} {\mathbb{V}}&\longrightarrow&{\mathfrak{h}}^*\otimes{\mathbb{V}} &\longrightarrow&\Wedge^2{\mathfrak{h}}^*\otimes{\mathbb{V}} &\longrightarrow&\Wedge^3{\mathfrak{h}}^*\otimes{\mathbb{V}} &\longrightarrow&\cdots\\
\|&&\|&&\|&&\|\\ {\mathbb{V}}&\longrightarrow&{\mathfrak{g}}_1\otimes{\mathbb{V}} &\longrightarrow&\Wedge^2{\mathfrak{g}}_1\otimes {\mathbb{V}} &\longrightarrow&\Wedge^3{\mathfrak{g}}_1\otimes {\mathbb{V}} &\longrightarrow&\cdots\,,\\ &\begin{picture}(0,0)(0,-3) \put(-9,6){\vector(3,-2){18}}\end{picture} &\oplus &\begin{picture}(0,0)(0,-3) \put(-9,-6){\vector(3,2){18}} \put(-9,6){\vector(3,-2){18}}\end{picture} &\oplus &\begin{picture}(0,0)(0,-3) \put(-9,-6){\vector(3,2){18}} \put(-9,6){\vector(3,-2){18}}\end{picture} &\oplus &\begin{picture}(0,0)(0,-3) \put(-9,-6){\vector(3,2){18}} \put(-9,6){\vector(3,-2){18}}\end{picture}\\ &&{\mathbb{V}}&\longrightarrow &{\mathfrak{g}}_1\otimes{\mathbb{V}} &\longrightarrow &\Wedge^2{\mathfrak{g}}_1\otimes{\mathbb{V}} &\longrightarrow &\cdots \end{array}\end{equation} where the differentials are given by $$v\!\mapsto\!\left[\!\begin{array}{c}\partial_av\\ \theta v\end{array}\!\right] \quad \left[\!\begin{array}{c}\phi_a\\ \eta\end{array}\!\right] \!\mapsto\!\left[\!\begin{array}{c}\partial_{[a}\phi_{b]}-J_{ab}\eta\\ \partial_a\eta-\theta\phi_a\end{array}\!\right] \quad\left[\!\begin{array}{c}\omega_{ab}\\ \psi_a\end{array}\!\right] \!\mapsto\! \left[\!\begin{array}{c}\partial_{[a}\omega_{bc]}+J_{[ab}\psi_{c]}\\ \partial_{[a}\psi_{b]}+\theta\omega_{ab}\end{array}\!\right]$$ et cetera. In particular, notice that the homomorphisms \begin{equation}\label{Jwedge} \Wedge^{r-1}{\mathfrak{g}}_1\ni\psi\longmapsto \pm J\wedge\psi\in\Wedge^{r+1}{\mathfrak{g}}_1\end{equation} are \begin{itemize} \item independent of the representation on~${\mathbb{V}}$, \item injective for $1\leq r<n$, \item an isomorphism for $r=n$, \item surjective for $n<r\leq 2n-1$. \end{itemize} Note that $\Wedge_\perp^{r+1}{\mathfrak{g}}_1$ is complementary to the image of (\ref{Jwedge}) for $1\leq r<n$. 
Also note the isomorphisms $$\Wedge^{2n+1-r}{\mathfrak{g}}_1 \xrightarrow{\,J\wedge J\wedge\cdots\wedge J\,} \Wedge^{r-1}{\mathfrak{g}}_1,\quad\mbox{for}\enskip n<r\leq 2n+1,$$ under which the kernel of (\ref{Jwedge}) may be identified with $$\Wedge^{2n+1-r}_\perp{\mathfrak{g}}_1, \quad\mbox{for}\enskip n<r\leq 2n-1.$$ Diagram chasing in (\ref{C-E_rewritten}) (or the spectral sequence of a filtered complex) finishes the proof. \end{proof} \noindent{\bf Remark.} Evidently, the equations (\ref{rep_of_h}) are algebraic versions of $$\begin{array}{rcl} (\nabla_a\nabla_b-\nabla_b\nabla_a)\Sigma &=&2J_{ab}\Theta \Sigma\\ (\nabla_a\Theta-\Theta\nabla_a)\Sigma&=&0\end{array} \raisebox{2pt}{$\Big\}\quad\forall\,\Sigma\in\Gamma(E)$},$$ which hold for a symplectically flat connection $\nabla_a$ on a smooth vector bundle $E$ over~$M$. Also (\ref{C-E_rewritten}) is the evident algebraic counterpart to the differential complex of Lemma~\ref{key_lemma}. It follows that explicit formul{\ae} for the operators $\partial_\perp$ in the complex (\ref{HeisenbergBGG}) mirror the differential versions (\ref{early}) and (\ref{late}), with $\Wedge_\perp^n{\mathfrak{g}}_1\otimes{\mathbb{V}}\to \Wedge_\perp^n{\mathfrak{g}}_1\otimes{\mathbb{V}}$ being given by $\partial_\perp^2-\frac2n\theta$.
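For orientation, it may help to record the lowest-dimensional case $n=1$ explicitly. Here ${\mathfrak{g}}_1$ is $2$-dimensional, so $\Wedge^2{\mathfrak{g}}_1$ is spanned by $J_{ab}$ and $\Wedge_\perp^2{\mathfrak{g}}_1=0$. The complex (\ref{HeisenbergBGG}) therefore reads
$$0\to{\mathbb{V}}\stackrel{\partial}{\longrightarrow}
{\mathfrak{g}}_1\otimes{\mathbb{V}}
\xrightarrow{\,\partial_\perp^2-2\theta\,}
{\mathfrak{g}}_1\otimes{\mathbb{V}}
\stackrel{\partial_\perp}{\longrightarrow}{\mathbb{V}}\to 0,$$
with a second-order operator in the middle, in accordance with the Rumin complex on the $3$-dimensional Heisenberg group.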
Let us now consider the tractor connection on ${\mathbb{CP}}_n$. According to Theorem~\ref{two}, the remarks following its statement, and the discussions in~\S\ref{tractor_bundles}, this is the connection on ${\mathcal{T}}=\Wedge^0\oplus\Wedge^1\oplus\Wedge^0$ given by $$\nabla_a\!\left[\!\begin{array}c\sigma\\ \mu_b\\ \rho\end{array}\!\right]= \left[\!\begin{array}c\nabla_a\sigma-\mu_a\\ \nabla_a\mu_b+J_{ab}\rho+g_{ab}\sigma\\ \nabla_a\rho-J_a{}^b\mu_b \end{array}\!\right]= \left[\!\begin{array}c\nabla_a\sigma\\ \nabla_a\mu_b+g_{ab}\sigma\\ \nabla_a\rho-J_a{}^b\mu_b \end{array}\!\right] +\left[\!\begin{array}c-\mu_a\\ J_{ab}\rho\\ 0\end{array}\!\right]\!.$$ The induced operator $\nabla:\Wedge^1\otimes{\mathcal{T}}\to\Wedge^2\otimes{\mathcal{T}}$ is $$\left[\!\begin{array}c\sigma_b\\ \mu_{bc}\\ \rho_b\end{array}\!\right]\longmapsto
\left[\!\begin{array}c\nabla_{[a}\sigma_{b]}\\ \nabla_{[a}\mu_{b]c}+g_{c[a}\sigma_{b]}\\ \nabla_{[a}\rho_{b]}-J_{[a}{}^c\mu_{b]c} \end{array}\!\right] +\left[\!\begin{array}c\mu_{[ab]}\\ -J_{c[a}\rho_{b]}\\ 0\end{array}\!\right]\!$$ but Corollary~\ref{whenKaehlerVvanishes} says the tractor connection on ${\mathbb{CP}}_n$ is symplectically flat so we should contemplate $\nabla_\perp:\Wedge^1\otimes{\mathcal{T}}\to\Wedge_\perp^2\otimes{\mathcal{T}}$ from Theorem~\ref{one}, viz. $$\left[\!\begin{array}c\sigma_b\\ \mu_{bc}\\ \rho_b\end{array}\!\right]\longmapsto \left[\!\begin{array}c \nabla_{[a}\sigma_{b]}-\frac1{2n}J^{cd}\nabla_c\sigma_dJ_{ab}\\ \ldots\\ \ldots\end{array}\!\right] +\left[\!\begin{array}c\mu_{[ab]}-\frac1{2n}J^{cd}\mu_{cd}J_{ab}\\ -J_{c[a}\rho_{b]}-\frac1{2n}\rho_cJ_{ab}\\ 0\end{array}\!\right]\!.$$ From these formul{\ae}, let us focus attention on the homomorphisms \begin{equation}\label{attention_is_focussed} \makebox[0pt]{$\begin{array}{ccccccccc} 0&\to&{\mathcal{T}}&\to&\Wedge^1\otimes{\mathcal{T}}&\to &\Wedge_\perp^2\otimes{\mathcal{T}}&\to&\cdots\\[4pt] &&\left[\!\begin{array}c\sigma\\ \mu_b\\ \rho\end{array}\!\right]&\mapsto &\left[\!\begin{array}c-\mu_a\\ J_{ab}\rho\\ 0\end{array}\!\right]\\[22pt] &&&&\left[\!\begin{array}c\sigma_b\\ \mu_{bc}\\ \rho_b\end{array}\!\right] &\mapsto &\left[\!\begin{array}c\mu_{[ab]}-\frac1{2n}J^{cd}\mu_{cd}J_{ab}\\ -J_{c[a}\rho_{b]}-\frac1{2n}\rho_cJ_{ab}\\ 0\end{array}\!\right] \end{array}$}\end{equation} It is evident that this is a complex and that its cohomology so far is $$\textstyle\Wedge^0\mbox{ in degree }0\quad\mbox{and}\quad \bigodot^2\!\Wedge^1\mbox{ in degree }1.$$ On the other hand, one may check that the defining representation of the Lie algebra ${\mathfrak{sp}}(2n+2,{\mathbb{R}})$ on ${\mathbb{R}}^{2n+2}={\mathbb{R}}\oplus{\mathbb{R}}^{2n}\oplus{\mathbb{R}}$ restricts via (\ref{sp(2n+2,R)}) to a representation of the Heisenberg Lie algebra ${\mathfrak{h}}={\mathbb{R}}\oplus{\mathfrak{g}}_{-1}$, 
given explicitly by $$\begin{array}[t]{ccc} {\mathbb{R}}^{2n+2}&\xrightarrow{\,\theta\,} &{\mathbb{R}}^{2n+2}\\ \left[\!\begin{array}c\sigma\\ \mu_b\\ \rho\end{array}\!\right]&\longmapsto&\left[\!\begin{array}c\rho\\ 0\\ 0\end{array}\!\right] \end{array}\mbox{\quad and\quad}\begin{array}[t]{ccc} {\mathbb{R}}^{2n+2}&\xrightarrow{\,\partial_a\,} &{\mathfrak{g}}_{1}\otimes{\mathbb{R}}^{2n+2}\\ \left[\!\begin{array}c\sigma\\ \mu_b\\ \rho\end{array}\!\right]&\longmapsto&\left[\!\begin{array}c-\mu_a\\ J_{ab}\rho\\ 0\end{array}\!\right] \end{array}$$ (noticing that equations (\ref{rep_of_h}) hold, as they must: indeed, $\partial_a\partial_b$ maps the displayed tractor to the column whose sole non-zero entry is the top entry $J_{ab}\rho$, whence $(\partial_a\partial_b-\partial_b\partial_a)=2J_{ab}\theta$, whilst $\theta\partial_a$ and $\partial_a\theta$ both vanish identically). We may also find $\theta$ as part of the curvature of the tractor connection on~${\mathbb{CP}}_n$. Specifically, the formula from Proposition~\ref{tractor_curvature} reduces to \begin{equation}\label{tractor_curvature_on_CPn} (\nabla_a\nabla_b-\nabla_b\nabla_a)\! \left[\!\begin{array}c\sigma\\ \mu_d\\ \rho\end{array}\!\right] =2J_{ab}\!\left[\!\begin{array}{c} \rho\\ J_d{}^e\mu_e\\ -\sigma \end{array}\!\right]\end{equation} and we find $\theta$ as the top component of $\Theta:{\mathcal{T}}\to{\mathcal{T}}$ where $\Theta$ is defined by~(\ref{Theta_in_the_symplectically_flat_case}). If we now consider the entire complex from Theorem~\ref{one}, with filtration induced by $$\begin{array}{ccccccc}\Wedge^0&\subset&\Wedge^1\oplus\Wedge^0 &\subset&\Wedge^0\oplus\Wedge^1\oplus\Wedge^0&=&{\mathcal{T}}\\ \left[\!\begin{array}c 0\\ 0\\ \rho\end{array}\!\right] &&\left[\!\begin{array}c 0\\ \mu_b\\ \rho\end{array}\!\right] &&\left[\!\begin{array}c\sigma\\ \mu_b\\ \rho\end{array}\!\right] \end{array}$$ of ${\mathcal{T}}$, then the associated spectral sequence (or corresponding diagram chasing) yields (\ref{attention_is_focussed}) continuing as in (\ref{HeisenbergBGG}) including the middle operator $\nabla_\perp^2-\frac2n\theta:\Wedge_\perp^n\to\Wedge_\perp^n$.
The same reasoning pertains for any Fedosov structure with $V_{ab}{}^c{}_d=0$ as in Corollary~\ref{symplectically_flat_tractors}. Evidently, this sequence of vector bundle homomorphisms is induced by the complex (\ref{HeisenbergBGG}) and, together with Lemma~\ref{LieAlgBGG}, the spectral sequence of a filtered complex (or the appropriate diagram chasing) immediately yields the following. \begin{thm}\label{towardsBGG} Suppose $\nabla_a$ is a torsion-free connection on a symplectic manifold $(M,J_{ab})$, such that $\nabla_aJ_{bc}=0$ and so that the corresponding curvature tensor $V_{ab}{}^c{}_d$ vanishes. Fix a finite-dimensional representation ${\mathbb{E}}$ of\/ ${\mathrm{Sp}}(2n+2,{\mathbb{R}})$ and let $E$ denote the associated `tractor bundle' induced from the standard tractor bundle and the representation~${\mathbb{E}}$ (so that the standard representation of\/ ${\mathrm{Sp}}(2n+2,{\mathbb{R}})$ on\/ ${\mathbb{R}}^{2n+2}$ yields the standard tractor bundle). In accordance with Corollary~\ref{symplectically_flat_tractors}, the induced `tractor connection' $\nabla:E\to\Wedge^1\otimes E$ is symplectically flat and we may define $\Theta:E\to E$ by~\eqref{Theta_in_the_symplectically_flat_case}. Having done this, there are complexes of differential operators $$\addtolength{\arraycolsep}{-3pt}\begin{array}{rcccccccccc} 0&\to&H^0({\mathfrak{h}},E)&\to&H^1({\mathfrak{h}},E) &\to&H^2({\mathfrak{h}},E) &\to&\cdots &\to&H^n({\mathfrak{h}},E)\\[2pt] &&&&&&&&&& \big\downarrow\\ 0&\leftarrow&H^{2n+1}({\mathfrak{h}},E)&\leftarrow &H^{2n}({\mathfrak{h}},E)&\leftarrow &H^{2n-1}({\mathfrak{h}},E) &\leftarrow&\cdots &\leftarrow&H^{n+1}({\mathfrak{h}},E) \end{array}$$ where $H^r({\mathfrak{h}},E)$ denotes the tensor bundle on $M$ that is induced by the cohomology $H^r({\mathfrak{h}},{\mathbb{E}})$ as a representation of ${\mathrm{Sp}}(2n,{\mathbb{R}})$. 
This complex is locally exact except near the beginning where $$\ker:H^0({\mathfrak{h}},E)\to H^1({\mathfrak{h}},E) \quad\mbox{and}\quad \frac{\ker:H^1({\mathfrak{h}},E)\to H^2({\mathfrak{h}},E)} {\operatorname{im}:H^0({\mathfrak{h}},E)\to H^1({\mathfrak{h}},E)}$$ may be identified with the locally constant sheaves \underbar{$\ker\Theta$} and \underbar{$\operatorname{coker}\Theta$}, respectively. In particular, for\/ ${\mathbb{CP}}_n$ with its Fubini--Study connection, these sheaves vanish and the complex is locally exact everywhere. \end{thm} \begin{proof} It remains only to observe that for the Fubini--Study connection we see from (\ref{tractor_curvature_on_CPn}) that $\Theta$ is given by $(\sigma,\mu_d,\rho)\mapsto(\rho,J_d{}^e\mu_e,-\sigma)$, whence $\Theta^2=-\operatorname{id}$ and $\Theta:{\mathcal{T}}\to{\mathcal{T}}$ is an isomorphism. \end{proof} \noindent The main point about Theorem~\ref{towardsBGG}, however, is that if the representation ${\mathbb{E}}$ of ${\mathrm{Sp}}(2n+2,{\mathbb{R}})$ is irreducible, then the representations $H^r({\mathfrak{h}},{\mathbb{E}})$ of ${\mathrm{Sp}}(2n,{\mathbb{R}})$ are also irreducible and are computed by a theorem due to Kostant~\cite{K}. 
Specifically, if we denote the irreducible representations of ${\mathrm{Sp}}(2n+2,{\mathbb{R}})$ and ${\mathrm{Sp}}(2n,{\mathbb{R}})$ by writing the highest weight as a linear combination of fundamental weights and recording the coefficients over the corresponding nodes of the Dynkin diagrams for $C_{n+1}$ and $C_n$, as is often done, then Kostant's Theorem says that $$\begin{array}{ccl}H^0({\mathfrak{h}},\begin{picture}(84,5) \put(4,1.5){\line(1,0){42}} \put(4,1.2){\makebox(0,0){$\bullet$}} \put(16,1.2){\makebox(0,0){$\bullet$}} \put(28,1.2){\makebox(0,0){$\bullet$}} \put(40,1.2){\makebox(0,0){$\bullet$}} \put(55,1.2){\makebox(0,0){$\cdots$}} \put(62,1.5){\line(1,0){6}} \put(68,1.2){\makebox(0,0){$\bullet$}} \put(68,0.5){\line(1,0){12}} \put(68,2.5){\line(1,0){12}} \put(74,1.5){\makebox(0,0){$\langle$}} \put(80,1.2){\makebox(0,0){$\bullet$}} \put(4,6){\makebox(0,0)[b]{$\scriptstyle a$}} \put(16,6){\makebox(0,0)[b]{$\scriptstyle b$}} \put(28,6){\makebox(0,0)[b]{$\scriptstyle c$}} \put(40,6){\makebox(0,0)[b]{$\scriptstyle d$}} \put(68,6){\makebox(0,0)[b]{$\scriptstyle e$}} \put(80,6){\makebox(0,0)[b]{$\scriptstyle f$}} \end{picture})&=&\begin{picture}(72,5) \put(4,1.5){\line(1,0){30}} \put(4,1.2){\makebox(0,0){$\bullet$}} \put(16,1.2){\makebox(0,0){$\bullet$}} \put(28,1.2){\makebox(0,0){$\bullet$}} \put(43,1.2){\makebox(0,0){$\cdots$}} \put(50,1.5){\line(1,0){6}} \put(56,1.2){\makebox(0,0){$\bullet$}} \put(56,0.5){\line(1,0){12}} \put(56,2.5){\line(1,0){12}} \put(62,1.5){\makebox(0,0){$\langle$}} \put(68,1.2){\makebox(0,0){$\bullet$}} \put(4,6){\makebox(0,0)[b]{$\scriptstyle b$}} \put(16,6){\makebox(0,0)[b]{$\scriptstyle c$}} \put(28,6){\makebox(0,0)[b]{$\scriptstyle d$}} \put(56,6){\makebox(0,0)[b]{$\scriptstyle e$}} \put(68,6){\makebox(0,0)[b]{$\scriptstyle f$}} \end{picture}\\[4pt] H^1({\mathfrak{h}},\begin{picture}(84,5) \put(4,1.5){\line(1,0){42}} \put(4,1.2){\makebox(0,0){$\bullet$}} \put(16,1.2){\makebox(0,0){$\bullet$}} \put(28,1.2){\makebox(0,0){$\bullet$}} 
\put(40,1.2){\makebox(0,0){$\bullet$}} \put(55,1.2){\makebox(0,0){$\cdots$}} \put(62,1.5){\line(1,0){6}} \put(68,1.2){\makebox(0,0){$\bullet$}} \put(68,0.5){\line(1,0){12}} \put(68,2.5){\line(1,0){12}} \put(74,1.5){\makebox(0,0){$\langle$}} \put(80,1.2){\makebox(0,0){$\bullet$}} \put(4,6){\makebox(0,0)[b]{$\scriptstyle a$}} \put(16,6){\makebox(0,0)[b]{$\scriptstyle b$}} \put(28,6){\makebox(0,0)[b]{$\scriptstyle c$}} \put(40,6){\makebox(0,0)[b]{$\scriptstyle d$}} \put(68,6){\makebox(0,0)[b]{$\scriptstyle e$}} \put(80,6){\makebox(0,0)[b]{$\scriptstyle f$}} \end{picture})&=&\begin{picture}(88,5)(-8,0) \put(4,1.5){\line(1,0){38}} \put(4,1.2){\makebox(0,0){$\bullet$}} \put(24,1.2){\makebox(0,0){$\bullet$}} \put(36,1.2){\makebox(0,0){$\bullet$}} \put(51,1.2){\makebox(0,0){$\cdots$}} \put(58,1.5){\line(1,0){6}} \put(64,1.2){\makebox(0,0){$\bullet$}} \put(64,0.5){\line(1,0){12}} \put(64,2.5){\line(1,0){12}} \put(70,1.5){\makebox(0,0){$\langle$}} \put(76,1.2){\makebox(0,0){$\bullet$}} \put(4,6){\makebox(0,0)[b]{$\scriptstyle a+b+1$}} \put(24,6){\makebox(0,0)[b]{$\scriptstyle c$}} \put(36,6){\makebox(0,0)[b]{$\scriptstyle d$}} \put(64,6){\makebox(0,0)[b]{$\scriptstyle e$}} \put(76,6){\makebox(0,0)[b]{$\scriptstyle f$}} \end{picture}\\[4pt] H^2({\mathfrak{h}},\begin{picture}(84,5) \put(4,1.5){\line(1,0){42}} \put(4,1.2){\makebox(0,0){$\bullet$}} \put(16,1.2){\makebox(0,0){$\bullet$}} \put(28,1.2){\makebox(0,0){$\bullet$}} \put(40,1.2){\makebox(0,0){$\bullet$}} \put(55,1.2){\makebox(0,0){$\cdots$}} \put(62,1.5){\line(1,0){6}} \put(68,1.2){\makebox(0,0){$\bullet$}} \put(68,0.5){\line(1,0){12}} \put(68,2.5){\line(1,0){12}} \put(74,1.5){\makebox(0,0){$\langle$}} \put(80,1.2){\makebox(0,0){$\bullet$}} \put(4,6){\makebox(0,0)[b]{$\scriptstyle a$}} \put(16,6){\makebox(0,0)[b]{$\scriptstyle b$}} \put(28,6){\makebox(0,0)[b]{$\scriptstyle c$}} \put(40,6){\makebox(0,0)[b]{$\scriptstyle d$}} \put(68,6){\makebox(0,0)[b]{$\scriptstyle e$}} \put(80,6){\makebox(0,0)[b]{$\scriptstyle f$}} 
\end{picture})&=&\begin{picture}(88,5)(0,0) \put(4,1.5){\line(1,0){46}} \put(4,1.2){\makebox(0,0){$\bullet$}} \put(24,1.2){\makebox(0,0){$\bullet$}} \put(44,1.2){\makebox(0,0){$\bullet$}} \put(59,1.2){\makebox(0,0){$\cdots$}} \put(66,1.5){\line(1,0){6}} \put(72,1.2){\makebox(0,0){$\bullet$}} \put(72,0.5){\line(1,0){12}} \put(72,2.5){\line(1,0){12}} \put(78,1.5){\makebox(0,0){$\langle$}} \put(84,1.2){\makebox(0,0){$\bullet$}} \put(4,6){\makebox(0,0)[b]{$\scriptstyle a$}} \put(24,6){\makebox(0,0)[b]{$\scriptstyle b+c+1$}} \put(44,6){\makebox(0,0)[b]{$\scriptstyle d$}} \put(72,6){\makebox(0,0)[b]{$\scriptstyle e$}} \put(84,6){\makebox(0,0)[b]{$\scriptstyle f$}} \end{picture}\\[4pt] H^3({\mathfrak{h}},\begin{picture}(84,5) \put(4,1.5){\line(1,0){42}} \put(4,1.2){\makebox(0,0){$\bullet$}} \put(16,1.2){\makebox(0,0){$\bullet$}} \put(28,1.2){\makebox(0,0){$\bullet$}} \put(40,1.2){\makebox(0,0){$\bullet$}} \put(55,1.2){\makebox(0,0){$\cdots$}} \put(62,1.5){\line(1,0){6}} \put(68,1.2){\makebox(0,0){$\bullet$}} \put(68,0.5){\line(1,0){12}} \put(68,2.5){\line(1,0){12}} \put(74,1.5){\makebox(0,0){$\langle$}} \put(80,1.2){\makebox(0,0){$\bullet$}} \put(4,6){\makebox(0,0)[b]{$\scriptstyle a$}} \put(16,6){\makebox(0,0)[b]{$\scriptstyle b$}} \put(28,6){\makebox(0,0)[b]{$\scriptstyle c$}} \put(40,6){\makebox(0,0)[b]{$\scriptstyle d$}} \put(68,6){\makebox(0,0)[b]{$\scriptstyle e$}} \put(80,6){\makebox(0,0)[b]{$\scriptstyle f$}} \end{picture})&=&\begin{picture}(88,5)(0,0) \put(4,1.5){\line(1,0){46}} \put(4,1.2){\makebox(0,0){$\bullet$}} \put(16,1.2){\makebox(0,0){$\bullet$}} \put(36,1.2){\makebox(0,0){$\bullet$}} \put(59,1.2){\makebox(0,0){$\cdots$}} \put(66,1.5){\line(1,0){6}} \put(72,1.2){\makebox(0,0){$\bullet$}} \put(72,0.5){\line(1,0){12}} \put(72,2.5){\line(1,0){12}} \put(78,1.5){\makebox(0,0){$\langle$}} \put(84,1.2){\makebox(0,0){$\bullet$}} \put(4,6){\makebox(0,0)[b]{$\scriptstyle a$}} \put(16,6){\makebox(0,0)[b]{$\scriptstyle b$}} \put(36,6){\makebox(0,0)[b]{$\scriptstyle 
c+d+1$}} \put(72,6){\makebox(0,0)[b]{$\scriptstyle e$}} \put(84,6){\makebox(0,0)[b]{$\scriptstyle f$}} \end{picture}\\ \vdots\\ H^n({\mathfrak{h}},\begin{picture}(84,5) \put(4,1.5){\line(1,0){42}} \put(4,1.2){\makebox(0,0){$\bullet$}} \put(16,1.2){\makebox(0,0){$\bullet$}} \put(28,1.2){\makebox(0,0){$\bullet$}} \put(40,1.2){\makebox(0,0){$\bullet$}} \put(55,1.2){\makebox(0,0){$\cdots$}} \put(62,1.5){\line(1,0){6}} \put(68,1.2){\makebox(0,0){$\bullet$}} \put(68,0.5){\line(1,0){12}} \put(68,2.5){\line(1,0){12}} \put(74,1.5){\makebox(0,0){$\langle$}} \put(80,1.2){\makebox(0,0){$\bullet$}} \put(4,6){\makebox(0,0)[b]{$\scriptstyle a$}} \put(16,6){\makebox(0,0)[b]{$\scriptstyle b$}} \put(28,6){\makebox(0,0)[b]{$\scriptstyle c$}} \put(40,6){\makebox(0,0)[b]{$\scriptstyle d$}} \put(68,6){\makebox(0,0)[b]{$\scriptstyle e$}} \put(80,6){\makebox(0,0)[b]{$\scriptstyle f$}} \end{picture})&=& \begin{picture}(84,5) \put(4,1.5){\line(1,0){30}} \put(4,1.2){\makebox(0,0){$\bullet$}} \put(16,1.2){\makebox(0,0){$\bullet$}} \put(28,1.2){\makebox(0,0){$\bullet$}} \put(43,1.2){\makebox(0,0){$\cdots$}} \put(50,1.5){\line(1,0){6}} \put(56,1.2){\makebox(0,0){$\bullet$}} \put(56,0.5){\line(1,0){22}} \put(56,2.5){\line(1,0){22}} \put(67,1.5){\makebox(0,0){$\langle$}} \put(78,1.2){\makebox(0,0){$\bullet$}} \put(4,6){\makebox(0,0)[b]{$\scriptstyle a$}} \put(16,6){\makebox(0,0)[b]{$\scriptstyle b$}} \put(28,6){\makebox(0,0)[b]{$\scriptstyle c$}} \put(56,6){\makebox(0,0)[b]{$\scriptstyle $}} \put(78,6){\makebox(0,0)[b]{$\scriptstyle e+f+1$}} \end{picture} \end{array}$$ and for $r\geq n+1$, there are isomorphisms $H^r({\mathfrak{h}},{\mathbb{E}})=H^{2n+1-r}({\mathfrak{h}},{\mathbb{E}})$. 
Using the same notation for the bundles $H^r({\mathfrak{h}},E)$, the complexes of Theorem~\ref{towardsBGG} become $$\begin{array}{rl}\begin{picture}(72,5) \put(4,1.5){\line(1,0){30}} \put(4,1.2){\makebox(0,0){$\bullet$}} \put(16,1.2){\makebox(0,0){$\bullet$}} \put(28,1.2){\makebox(0,0){$\bullet$}} \put(43,1.2){\makebox(0,0){$\cdots$}} \put(50,1.5){\line(1,0){6}} \put(56,1.2){\makebox(0,0){$\bullet$}} \put(56,0.5){\line(1,0){12}} \put(56,2.5){\line(1,0){12}} \put(62,1.5){\makebox(0,0){$\langle$}} \put(68,1.2){\makebox(0,0){$\bullet$}} \put(4,6){\makebox(0,0)[b]{$\scriptstyle b$}} \put(16,6){\makebox(0,0)[b]{$\scriptstyle c$}} \put(28,6){\makebox(0,0)[b]{$\scriptstyle d$}} \put(56,6){\makebox(0,0)[b]{$\scriptstyle e$}} \put(68,6){\makebox(0,0)[b]{$\scriptstyle f$}} \end{picture} &\xrightarrow{\,\nabla^{a+1}\,} \begin{picture}(88,5)(-8,0) \put(4,1.5){\line(1,0){38}} \put(4,1.2){\makebox(0,0){$\bullet$}} \put(24,1.2){\makebox(0,0){$\bullet$}} \put(36,1.2){\makebox(0,0){$\bullet$}} \put(51,1.2){\makebox(0,0){$\cdots$}} \put(58,1.5){\line(1,0){6}} \put(64,1.2){\makebox(0,0){$\bullet$}} \put(64,0.5){\line(1,0){12}} \put(64,2.5){\line(1,0){12}} \put(70,1.5){\makebox(0,0){$\langle$}} \put(76,1.2){\makebox(0,0){$\bullet$}} \put(4,6){\makebox(0,0)[b]{$\scriptstyle a+b+1$}} \put(24,6){\makebox(0,0)[b]{$\scriptstyle c$}} \put(36,6){\makebox(0,0)[b]{$\scriptstyle d$}} \put(64,6){\makebox(0,0)[b]{$\scriptstyle e$}} \put(76,6){\makebox(0,0)[b]{$\scriptstyle f$}} \end{picture}\\ &\enskip{}\xrightarrow{\,\nabla^{b+1}\,} \begin{picture}(88,5)(0,0) \put(4,1.5){\line(1,0){46}} \put(4,1.2){\makebox(0,0){$\bullet$}} \put(24,1.2){\makebox(0,0){$\bullet$}} \put(44,1.2){\makebox(0,0){$\bullet$}} \put(59,1.2){\makebox(0,0){$\cdots$}} \put(66,1.5){\line(1,0){6}} \put(72,1.2){\makebox(0,0){$\bullet$}} \put(72,0.5){\line(1,0){12}} \put(72,2.5){\line(1,0){12}} \put(78,1.5){\makebox(0,0){$\langle$}} \put(84,1.2){\makebox(0,0){$\bullet$}} \put(4,6){\makebox(0,0)[b]{$\scriptstyle a$}} 
\put(24,6){\makebox(0,0)[b]{$\scriptstyle b+c+1$}} \put(44,6){\makebox(0,0)[b]{$\scriptstyle d$}} \put(72,6){\makebox(0,0)[b]{$\scriptstyle e$}} \put(84,6){\makebox(0,0)[b]{$\scriptstyle f$}} \end{picture}\\ &\quad{}\xrightarrow{\,\nabla^{c+1}\,} \begin{picture}(88,5)(0,0) \put(4,1.5){\line(1,0){46}} \put(4,1.2){\makebox(0,0){$\bullet$}} \put(16,1.2){\makebox(0,0){$\bullet$}} \put(36,1.2){\makebox(0,0){$\bullet$}} \put(59,1.2){\makebox(0,0){$\cdots$}} \put(66,1.5){\line(1,0){6}} \put(72,1.2){\makebox(0,0){$\bullet$}} \put(72,0.5){\line(1,0){12}} \put(72,2.5){\line(1,0){12}} \put(78,1.5){\makebox(0,0){$\langle$}} \put(84,1.2){\makebox(0,0){$\bullet$}} \put(4,6){\makebox(0,0)[b]{$\scriptstyle a$}} \put(16,6){\makebox(0,0)[b]{$\scriptstyle b$}} \put(36,6){\makebox(0,0)[b]{$\scriptstyle c+d+1$}} \put(72,6){\makebox(0,0)[b]{$\scriptstyle e$}} \put(84,6){\makebox(0,0)[b]{$\scriptstyle f$}} \end{picture}\\[-4pt] &\qquad\vdots\\ &\qquad{}\xrightarrow{\,\nabla^{e+1}\,} \begin{picture}(84,5) \put(4,1.5){\line(1,0){30}} \put(4,1.2){\makebox(0,0){$\bullet$}} \put(16,1.2){\makebox(0,0){$\bullet$}} \put(28,1.2){\makebox(0,0){$\bullet$}} \put(43,1.2){\makebox(0,0){$\cdots$}} \put(50,1.5){\line(1,0){6}} \put(56,1.2){\makebox(0,0){$\bullet$}} \put(56,0.5){\line(1,0){22}} \put(56,2.5){\line(1,0){22}} \put(67,1.5){\makebox(0,0){$\langle$}} \put(78,1.2){\makebox(0,0){$\bullet$}} \put(4,6){\makebox(0,0)[b]{$\scriptstyle a$}} \put(16,6){\makebox(0,0)[b]{$\scriptstyle b$}} \put(28,6){\makebox(0,0)[b]{$\scriptstyle c$}} \put(56,6){\makebox(0,0)[b]{$\scriptstyle $}} \put(78,6){\makebox(0,0)[b]{$\scriptstyle e+f+1$}} \end{picture}\\ &\enskip\qquad{}\xrightarrow{\,\nabla^{2f+2}\,} \begin{picture}(84,5) \put(4,1.5){\line(1,0){30}} \put(4,1.2){\makebox(0,0){$\bullet$}} \put(16,1.2){\makebox(0,0){$\bullet$}} \put(28,1.2){\makebox(0,0){$\bullet$}} \put(43,1.2){\makebox(0,0){$\cdots$}} \put(50,1.5){\line(1,0){6}} \put(56,1.2){\makebox(0,0){$\bullet$}} \put(56,0.5){\line(1,0){22}} 
\put(56,2.5){\line(1,0){22}} \put(67,1.5){\makebox(0,0){$\langle$}} \put(78,1.2){\makebox(0,0){$\bullet$}} \put(4,6){\makebox(0,0)[b]{$\scriptstyle a$}} \put(16,6){\makebox(0,0)[b]{$\scriptstyle b$}} \put(28,6){\makebox(0,0)[b]{$\scriptstyle c$}} \put(56,6){\makebox(0,0)[b]{$\scriptstyle $}} \put(78,6){\makebox(0,0)[b]{$\scriptstyle e+f+1$}} \end{picture}\\ &\qquad{}\xrightarrow{\,\nabla^{e+1}\,}\cdots\\ &\qquad\vdots\\ &\xrightarrow{\,\nabla^{a+1}\,} \begin{picture}(72,5) \put(4,1.5){\line(1,0){30}} \put(4,1.2){\makebox(0,0){$\bullet$}} \put(16,1.2){\makebox(0,0){$\bullet$}} \put(28,1.2){\makebox(0,0){$\bullet$}} \put(43,1.2){\makebox(0,0){$\cdots$}} \put(50,1.5){\line(1,0){6}} \put(56,1.2){\makebox(0,0){$\bullet$}} \put(56,0.5){\line(1,0){12}} \put(56,2.5){\line(1,0){12}} \put(62,1.5){\makebox(0,0){$\langle$}} \put(68,1.2){\makebox(0,0){$\bullet$}} \put(4,6){\makebox(0,0)[b]{$\scriptstyle b$}} \put(16,6){\makebox(0,0)[b]{$\scriptstyle c$}} \put(28,6){\makebox(0,0)[b]{$\scriptstyle d$}} \put(56,6){\makebox(0,0)[b]{$\scriptstyle e$}} \put(68,6){\makebox(0,0)[b]{$\scriptstyle f$}} \end{picture}, \end{array}$$ for arbitrary non-negative integers $a,b,c,d,\cdots,e,f$. When all these integers are zero, this is the Rumin--Seshadri complex. Just the first three terms in this complex, in the special case when only $a$ is non-zero, are already essential in~\cite{EG}. For example, if $a=1$, then the first two differential operators are $$\sigma\mapsto\nabla_a\nabla_b\sigma+\Phi_{ab}\sigma \quad\mbox{and}\quad \phi_{bc}\mapsto \big(\nabla_a\phi_{bc}-\nabla_b\phi_{ac})_\perp$$ where $\phi_{bc}$ is symmetric and $(\enskip)_\perp$ means to take the trace-free part with respect to $J_{ab}$.
{From} the curvature decomposition and Bianchi identity we find that their composition is $$\sigma\longmapsto V_{ab}{}^d{}_c\nabla_d\sigma+Y_{abc}\sigma,$$
which vanishes in case $V_{ab}{}^c{}_d=0$. In case $\Theta$ is invertible, as for the Fubini--Study connection, we conclude that this sequence of differential operators is locally exact.
\end{document}
\begin{document}
\title{Perfectly Matched Sets in Graphs: Parameterized and Exact Computation}
\begin{abstract}
In an undirected graph $G=(V,E)$, we say that $(A,B)$ is a pair of perfectly matched sets if $A$ and $B$ are disjoint subsets of $V$ and every vertex in $A$ (resp.\ $B$) has exactly one neighbor in $B$ (resp.\ $A$). The size of a pair of perfectly matched sets $(A,B)$ is $|A|=|B|$. The PERFECTLY MATCHED SETS (PMS) problem is to decide whether a given graph $G$ has a pair of perfectly matched sets of size $k$. We show that PMS is $W[1]$-hard when parameterized by solution size $k$, even when restricted to split graphs and bipartite graphs. We observe that PMS is FPT when parameterized by clique-width, and give FPT algorithms with respect to the parameters distance to cluster, distance to co-cluster and treewidth. Complementing these FPT results, we show that PMS does not admit a polynomial kernel when parameterized by vertex cover number unless $\text{NP}\subseteq \text{coNP/poly}$. We also provide an exact exponential algorithm running in time $O^*(1.966^n)$. Finally, considering graphs with structural assumptions, we show that PMS remains NP-hard on planar graphs.
\keywords{perfectly matched sets \and fixed parameter tractable \and algorithms \and perfect matching} \end{abstract}
\section{Introduction}
Consider the following communication problem: we have an undirected graph, each of whose nodes can send or receive messages. We wish to assign some nodes as transmitters or receivers, and then test the fidelity of transmission between transmitter-receiver pairs under the following assumptions: (a) there is no interference, i.e.\ a receiver does not get a message from more than one sender; (b) each sender can send at most one message at a time. What is the maximum number of pairs that can be tested simultaneously? This question was first studied in \cite{SV82}, where the underlying abstract problem was called TR-matching and shown to be NP-complete on 3-regular graphs.
We first formally define the problem PERFECTLY MATCHED SETS. \begin{tcolorbox} { PERFECTLY MATCHED SETS (PMS): \newline \textit{Input:} An instance $I$ = $(G,k)$, where $G=(V,E)$ is an undirected graph, and $k\in \mathbb{N}$.\newline \textit{Output:} YES, if $G$ contains two disjoint sets $A$ and $B$ of size $k$ each such that every vertex in $A$ (resp. $B$) has exactly one neighbor in $B$ (resp. $A$); NO otherwise. } \end{tcolorbox} The above definition is the same as that of the TR-matching problem, as introduced in \cite{SV82}; however, we have renamed it because of its relation to two recently well-studied problems: MATCHING CUT and PERFECT MATCHING CUT.
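To make the definition concrete, it can be sketched directly in code. The following Python sketch is for illustration only: the function names and the adjacency-dict graph representation are our own choices, and the brute-force decision procedure is exponential, not one of the algorithms developed in this paper.

```python
from itertools import combinations

def is_perfectly_matched(adj, A, B):
    """Check that disjoint vertex sets A and B are perfectly matched:
    every vertex of A has exactly one neighbor in B and vice versa.
    `adj` maps each vertex to its set of neighbors."""
    A, B = set(A), set(B)
    if A & B:
        return False
    return (all(len(adj[v] & B) == 1 for v in A)
            and all(len(adj[v] & A) == 1 for v in B))

def has_pms_of_size(adj, k):
    """Brute-force PMS decision: try all pairs of disjoint k-subsets."""
    V = list(adj)
    for A in combinations(V, k):
        rest = [v for v in V if v not in A]
        for B in combinations(rest, k):
            if is_perfectly_matched(adj, A, B):
                return True
    return False
```

For example, on the path $v_1v_2v_3v_4$ the pair $(\{v_1,v_4\},\{v_2,v_3\})$ is perfectly matched, so the brute-force check answers YES for $k=2$.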
\subsection{Our results}
We revisit this problem in the context of designing parameterized and exact algorithms. In parameterized complexity, the most natural parameter for PMS is the solution size. In Section \ref{W1-hard}, we start by showing that PMS is $W[1]$-hard when parameterized by solution size, even when restricted to split graphs and bipartite graphs. This naturally motivates the study of PMS with respect to other structural parameters to obtain tractability. In Section \ref{clique_width}, we observe that PMS is FPT when parameterized by clique-width, using Courcelle's Theorem \cite{CourcelleLinear}. This positive result motivated us to look for efficient FPT algorithms for PMS with other structural parameters. In Sections \ref{sec_cluster}, \ref{sec_co-cluster} and \ref{section-tw}, we obtain FPT algorithms for PMS with parameters distance to cluster, distance to co-cluster and tree-width; these parameters are unrelated to each other and are among the most widely used structural parameters. On the kernelization side, in Section \ref{section-kernel-lb}, we show that there does not exist a polynomial kernel for PMS when parameterized by vertex cover number unless $\text{NP}\subseteq \text{coNP/poly}$. This kernelization lower bound contrasts with PMS being FPT when parameterized by vertex cover number, which follows from PMS being FPT when parameterized by distance to cluster (a generalization of vertex cover). In Section \ref{sec_exact}, using a result of \cite{LT2020}, we present an exact algorithm for PMS which runs in time $O^*(1.966^n)$. Finally, focusing on restricted graph classes, in Section \ref{section-planar-hardness} we show that PMS remains NP-hard when restricted to planar graphs.
We remark that we are also interested in the optimization version of our problem, i.e. finding subsets $(A,B)$ of maximum size such that $A$ and $B$ are perfectly matched; our exact and FPT (distance to cluster, distance to co-cluster and tree-width) algorithms solve the optimization version.
\subsection{Related work} Given a graph $G$, a partition $(A,B)$ of $V(G)$ is a {\it matching cut} if every vertex in $A$ (resp $B$) has at most one neighbor in $B$ (resp $A$). The MATCHING CUT problem is then to decide whether a given graph has a matching cut or not.
The MATCHING CUT problem has been extensively studied. Graham \cite{graham1970primitive} discussed graphs with a matching cut under the name of decomposable graphs. Chv{\'{a}}tal \cite{chvatal1984recognizing} proved that MATCHING CUT is NP-complete for graphs with maximum degree 4. Bonsma \cite{Bonsma} proved that MATCHING CUT is NP-complete for planar graphs with maximum degree 4 and with girth 5. Kratsch and Le \cite{kratsch16} provided an exact algorithm with running time $O^*(1.4143^n)$\footnote{We use the $O^*$ notation, which suppresses polynomial factors.}. Komusiewicz, Kratsch and Le \cite{DBLP:journals/dam/KomusiewiczKL20} provided a deterministic exact algorithm with running time $O^*(1.328^n)$. The MATCHING CUT problem has also been studied in the parameterized realm with respect to various parameters \cite{kratsch16,DBLP:journals/dam/KomusiewiczKL20,aravind2017structural,Gomes-Sau}. Hardness and polynomial-time results have also been obtained under various structural assumptions \cite{DBLP:conf/isaac/LeL16,DBLP:journals/tcs/LeL19,DBLP:conf/cocoon/HsiehLLP19}. Recently, the enumeration version of MATCHING CUT has also been studied \cite{DBLP:conf/stacs/GolovachKKL21}.
A special case of matching cut in which, for the partition $(A,B)$, every vertex in $A$ (resp. $B$) has exactly one neighbor in $B$ (resp. $A$), called a perfect matching cut, was studied by Heggernes and Telle \cite{DBLP:journals/njc/HeggernesT98}, who proved that the PERFECT MATCHING CUT problem is NP-complete. Recently, Le and Telle \cite{LT2020} revisited the PERFECT MATCHING CUT problem and showed that it remains NP-complete even when restricted to bipartite graphs with maximum degree $3$ and arbitrarily large girth. They also obtained an exact algorithm running in time $O^*(1.2721^n)$ for PERFECT MATCHING CUT \cite{LT2020}.
Observe that the PERFECT MATCHING CUT problem is more closely related to the PERFECTLY MATCHED SETS problem; indeed the latter (with inputs $G,k$) is equivalent to deciding whether the given graph $G$ contains an induced subgraph on $2k$ vertices that has a perfect matching cut.
For a graph $G$, a matching $M\subseteq E(G)$ is an induced matching if $(V(M),M)$ is an induced subgraph of $G$.
The problem of finding a maximum induced matching in a graph is INDUCED MATCHING, and it can also be considered a problem related to PMS. Stockmeyer and Vazirani \cite{SV82} discussed INDUCED MATCHING under the name of the `risk-free' marriage problem. Since then, INDUCED MATCHING has been extensively studied. Hardness and polynomial-time solvability results have been obtained under various structural assumptions \cite{IMS1,IMS2,IMS3,IMS4,IMS5,IMS6,IMS7,IMS8}. Exact algorithms \cite{IMEX1,IMEX2}, and FPT and kernelization results \cite{IMP1,IMP2,IMP3} have also been obtained.
\section{Preliminaries}
We use $[n]$ to denote the set $\{1,2,\ldots,n\}$, and $[0]$ denotes the empty set. For a function $f:X\to Y$ and an element $e\in Y$, $f^{-1}(e)$ is defined to be the set of all elements of $X$ that map to $e$. Formally, $f^{-1}(e)=\{x\mid x\in X \land f(x)=e\}$.
\subsection{Graph Notations} All the graphs that we refer to are simple and finite. We mostly use standard notations and terminologies. We use $G=(V,E)$ to denote a graph with vertex set $V$ and edge set $E$. $E(G)$ denotes the set of edges of graph $G$, and $V(G)$ denotes the set of vertices of $G$. For $E'\subseteq E$, $V(E')$ denotes the set of all vertices of $G$ with at least one edge in $E'$ incident on it. For a vertex set $X\subseteq V$, $G[X]$ denotes the induced subgraph of $G$ on vertex set $X$, and $G-X$ denotes the graph $G[V\setminus X]$.
For an edge set $E'\subseteq E$, $G[E']$ denotes the subgraph of $G$ on edge set $E'$ i.e. $G[E']=(V(E'),E')$.
For disjoint vertex sets $A\subseteq V$ and $B \subseteq V$, $E(A,B)$ denotes the set of all the edges of $G$ with one endpoint in $A$ and another in $B$. For a vertex $v\in V$, we use $N(v)$ to denote the open neighborhood of $v$, i.e. set of all vertices adjacent to $v$ in $G$. We use $N[v]$ to denote the closed neighborhood of $v$, i.e. $N(v)\cup \{v\}$.
A graph is a \textit{cluster graph} if it is a vertex disjoint union of cliques. The maximal cliques of a cluster graph are called cliques or clusters. A graph is a \textit{co-cluster graph} if it is the complement of a cluster graph, or equivalently, a complete multipartite graph.
\subsection{Parameterized Complexity}
For details on parameterized complexity, we refer to \cite{DBLP:books/sp/CyganFKLMPPS15,DBLP:series/txcs/DowneyF13}, and recall some definitions here.
\begin{definition}[\cite{DBLP:books/sp/CyganFKLMPPS15}]
A \textit{parameterized problem} is a language $L \subseteq \Sigma^* \times \mathbb{N}$, where $\Sigma$ is a fixed and finite alphabet. For an instance $I=(x,k) \in \Sigma^* \times \mathbb{N}$, $k$ is called the parameter. A parameterized problem is called \textit{fixed-parameter tractable} (FPT) if there exist an algorithm $\cal A$ (called a \textit{fixed-parameter algorithm}), a computable function $f:\mathbb{N} \to \mathbb{N}$, and a constant $c$ such that the algorithm $\cal A$ correctly decides whether $(x,k)\in L$ in time bounded by $f(k)\cdot |(x,k)|^c$. The complexity class containing all fixed-parameter tractable problems is called FPT. \end{definition}
In the above definition, $|(x,k)|$ denotes the size of the input for a problem instance $(x,k)$. Informally, a $W[1]$-hard problem is unlikely to be fixed parameter tractable, see \cite{DBLP:books/sp/CyganFKLMPPS15} for details on complexity class $W[1]$. \begin{definition}[\cite{DBLP:books/sp/CyganFKLMPPS15}] Let $P,Q$ be two parameterized problems. A parameterized reduction from $P$ to $Q$ is an algorithm which for an instance $(x,k)$ of $P$ outputs an instance $(x',k')$ of $Q$ such that: \begin{itemize}
\item $(x,k)$ is yes instance of $P$ if and only if $(x',k')$ is a yes instance of $Q$,
\item $k'\leq g(k)$ for some computable function $g$, and
\item the reduction algorithm takes time $f(k)\cdot |x|^{O(1)}$ for some computable function $f$. \end{itemize} \end{definition}
\begin{theorem}[\cite{DBLP:books/sp/CyganFKLMPPS15}]\label{def-parameterized reduction} If there is a parameterized reduction from $P$ to $Q$ and $Q$ is fixed parameter tractable then $P$ is also fixed parameter tractable. \end{theorem}
For details on \textit{kernelization}, we refer to \cite{DBLP:books/sp/CyganFKLMPPS15} and recall the basic definition of a kernel here. A \textit{kernel} for a parameterized problem $P$ is an algorithm $A$ that, given an instance $(x,k)$ of $P$, takes polynomial time and outputs an instance $(x',k')$ of $P$ such that (i) $(x,k)$ is a yes instance of $P$ if and only if $(x',k')$ is a yes instance of $P$, and (ii) $|x'|+k' \leq f(k)$ for some computable function $f$. If $f(k)$ is polynomial in $k$, then we call it a polynomial kernel.
\section{Parameterized lower bounds}\label{W1-hard}
\subsection{W[1]-Hardness for Split Graphs }\label{section-whard-split}
In this section, we will prove the following theorem. \begin{theorem}\label{split_hardness} PMS is W[1]-hard for split graphs when parameterized by solution size $k$. \end{theorem}
IRREDUNDANT SET is known to be $W[1]$-complete when parameterized by the number of vertices in the set \cite{DOWNEY2000155}. We will give a parameterized reduction from IRREDUNDANT SET to PMS with solution size as the parameter. We also note that our construction in the reduction is similar to the one given in \cite{IMP1}. \begin{definition}[\cite{DOWNEY2000155}]
A set of vertices $I\subseteq V$ in a graph $G=(V,E)$ is called irredundant, if every vertex $v\in I$ has a private closed neighbor $p(v)$ in $V$ satisfying the following conditions:
\begin{enumerate}
\item $v=p(v)$ or $p(v)$ is adjacent to $v$,
\item no other vertex in $I$ is adjacent to $p(v)$.
\end{enumerate} \end{definition}
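Since the private-closed-neighbor condition is easy to misread, a direct check may help. The following Python sketch (the function name and the adjacency-dict representation are our own, for illustration) tests irredundance exactly as in the definition above.

```python
def is_irredundant(adj, I):
    """Check that I is an irredundant set in the graph given by `adj`
    (a dict mapping each vertex to its set of neighbors): every v in I
    must have a private closed neighbor p, i.e. some p in N[v] that
    neither equals nor is adjacent to any member of I - {v}."""
    I = set(I)
    for v in I:
        closed = adj[v] | {v}            # closed neighborhood N[v]
        others = I - {v}
        if not any(all(p != u and p not in adj[u] for u in others)
                   for p in closed):
            return False                 # v has no private closed neighbor
    return True
```

For instance, in the star with center $c$ and leaves $a,b,d$, the set $\{a,b\}$ is irredundant (each leaf is its own private closed neighbor), while $\{a,c\}$ is not, since both closed neighbors of $a$ are also closed neighbors of $c$.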
\begin{figure}
\caption{Reduction from IRREDUNDANT SET to PMS: vertex set $Y$ forms a clique and $Z$ forms an independent set. A vertex $z_i$ is connected to a vertex $y_j\in Y$ if $v_j\in N[v_i]$ in the input graph $G$.}
\end{figure}
Let $(G=(V,E),k)$ be an instance of IRREDUNDANT SET and let $V=\{v_1,v_2,...,v_n\}$. We construct a new graph $G'$ as follows. \begin{itemize}
\item Create a vertex set $Y=\{y_i\mid v_i\in V\}$ and connect every vertex $y_i\in Y$ to every $y_j\in Y$ with $y_i\neq y_j$, so that $Y$ forms a clique; let $E_y$ be the set of these edges.
\item Create a vertex set $Z=\{z_i\mid v_i\in V\}$ and connect every vertex $z_i\in Z$ to $y_j\in Y$ if $v_j\in N[v_i]$ in the input graph $G$; let $E'$ be the set of these edges. \end{itemize}
We define $G'=(Y\cup Z, E_y\cup E')$, which can be constructed in time polynomial in $|V|$, and $G'$ is a split graph.
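The construction of $G'$ is mechanical, and a short sketch may help the reader check it. The Python below (representation and names are ours, for illustration) builds the split graph from an adjacency dict.

```python
def split_reduction(adj):
    """Build the split graph G' of the reduction from a graph given as
    a dict mapping vertices to neighbor sets: Y = {('y', i)} forms a
    clique, Z = {('z', i)} is an independent set, and ('z', i) is
    adjacent to ('y', j) exactly when v_j lies in N[v_i]."""
    V = list(adj)
    Gp = {('y', i): set() for i in V}
    Gp.update({('z', i): set() for i in V})
    for i in V:                      # make Y a clique (edge set E_y)
        for j in V:
            if i != j:
                Gp[('y', i)].add(('y', j))
    for i in V:                      # edges E' between Z and Y
        for j in adj[i] | {i}:       # j ranges over N[v_i]
            Gp[('z', i)].add(('y', j))
            Gp[('y', j)].add(('z', i))
    return Gp
```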
\begin{proposition}\label{B'inV'} $G$ has an irredundant set of size $k$ if and only if $G'$ has a pair of perfectly matched sets of size $k$. \end{proposition} \begin{proof} For the forward direction, let $I\subseteq V$ be an irredundant set of size $k$ in $G$. For every vertex $v_i\in I$, pick exactly one private closed neighbor $v_{i'}$ of $v_i$ in $G$ and let $J$ be the set of these picked vertices. Let $I_a= \{y_{i}\mid y_{i}\in Y\land v_{i}\in I \}$, and let $J_b= \{z_{i'}\mid z_{i'}\in Z \land v_{i'}\in J\}$. We claim that $(I_a,J_b)$ is a pair of perfectly matched sets in $G'$. Since for every $v_i\in I$, $v_{i'}$ is a private closed neighbor of $v_i$ in $G$, by the construction of $G'$, every $z_{i'}\in J_b$ is adjacent to only $y_i$ in $I_a$, and every $y_{i}\in I_a$ is adjacent to only $z_{i'}$ in $J_b$.
For the other direction, let $(A,B)$ be a pair of perfectly matched sets in $G'$ of size $k$. If $k = 1$, then let $y_i$ or $z_j$ be the only vertex in $A$; then $\{v_i\}$ or $\{v_j\}$, respectively, is an irredundant set of size one in $G$. If $k\geq 2$, then either $A\subseteq Y$ and $B\subseteq Z$, or $A\subseteq Z$ and $B\subseteq Y$; this is due to the fact that $Y$ forms a clique and $Z$ forms an independent set in $G'$, and if both $A\cap Y$ and $B\cap Y$ were non-empty, then $|E(A,B)|$ would be 1. Without loss of generality, assume that $A\subseteq Y$ and $B\subseteq Z$. In this case, let $I^*=\{v_i\mid v_i\in V\land y_i\in A\}$. Since $(A,B)$ is a pair of perfectly matched sets in $G'$, for every $y_i\in A$, there must be a vertex $z_{i'}$ in $B$ such that $y_i$ is the only neighbor of $z_{i'}$ in $A$. Therefore, every $v_i\in I^*$ has a private closed neighbor $v_{i'}$ in $G$, and $I^*$ is an irredundant set of size $k$ in $G$. This finishes the proof. \end{proof}
\subsection{ W[1]-Hardness for Bipartite Graph }\label{section-whard-bipart}
In this section, we prove the following theorem. \begin{theorem}\label{bi_hardness} PMS is W[1]-hard for bipartite graphs when parameterized by solution size $k$. \end{theorem}
We give a reduction from PMS on general graphs to PMS on bipartite graphs.
\begin{figure}
\caption{For every vertex $v_i$ create two vertices $v'_i$ and $v''_i$, and connect $v'_i$ to $v''_j$ if $v_j\in N(v_i)$.}
\label{fig:fig_bip_red}
\end{figure}
Let $(G=(V,E),k)$ be an instance of PMS on general graphs, and let $V=\{v_1,v_2,...,v_n\}$. We construct $G'$ as follows. \begin{itemize}
\item Create two copies of $V$ and call them $V'$ and $V''$; we refer to the copy of a vertex $v_i\in V$ in $V'$ as $v'_i$ and to its copy in $V''$ as $v''_i$.
\item Connect $v'_i$ to $v''_j$ if $v_i$ is adjacent to $v_j$ in $G$. Let $E'$ be the set of all these edges. \end{itemize}
We define $G'=(V'\cup V'',E')$, which can be constructed in time polynomial in $|V|$. \begin{proposition}\label{bi_hard_correctness}
$G$ has a pair of perfectly matched sets of size $k$ if and only if $G'$ has a pair of perfectly matched sets of size $2k$. \end{proposition} \begin{proof}
For the forward direction, let $(A,B)$ be a pair of perfectly matched sets in $G$ such that $|E(A,B)|=k$. Let $A'=\{v'_i\ |\ v_i\in A\}$, $A''=\{v''_i\ |\ v_i\in A\}$, $B'=\{v'_i\ |\ v_i\in B\}$, and $B''=\{v''_i\ |\ v_i\in B\}$. Observe that both $(A',B'')$ and $(A'',B')$ are pairs of perfectly matched sets of $G'$. Further, $(A''\cup A', B''\cup B')$ is a pair of perfectly matched sets of $G'$, as no vertex in $A'$ has a neighbor in $B'$ in $G'$ and similarly no vertex in $A''$ has a neighbor in $B''$ in $G'$. We further have $ |E(A''\cup A', B''\cup B')| =2k$.
For the other direction, let $(A,B)$ be a pair of perfectly matched sets in $G'$ such that $|E(A,B)|= 2k$. Due to the construction of $G'$, any vertex in $V'$ can only be matched to a vertex in $V''$ and vice versa. Thus there are $2k$ vertices from $V'$ in $A\cup B$, and hence either $|A\cap V'|\geq k$ or $|B\cap V'|\geq k$. W.l.o.g., let $|A\cap V'|\geq k$; let $A'=A\cap V'$ and let $B''\subseteq B$ be the set of vertices that are matched to $A'$ in $(A,B)$; clearly $B''\subseteq V''$ due to the construction. Let $A^*=\{ v_i \mid v'_i \in A'\}$ and $B^*=\{ v_i \mid v''_i \in B''\}$; then $(A^*, B^*)$ is a pair of perfectly matched sets in $G$ such that $|E(A^*, B^*)|\geq k$. \end{proof}
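The graph $G'$ above is precisely the bipartite double cover of $G$, which can be sketched in a few lines of Python (names and representation are ours, for illustration).

```python
def bipartite_double_cover(adj):
    """Build G' from the reduction: two copies (v, 1) and (v, 2) of
    each vertex v, with (v, 1) adjacent to (u, 2) iff u and v are
    adjacent in the input graph (given as a dict of neighbor sets)."""
    Gp = {}
    for v in adj:
        Gp[(v, 1)] = {(u, 2) for u in adj[v]}
        Gp[(v, 2)] = {(u, 1) for u in adj[v]}
    return Gp
```

Every edge of the resulting graph joins the two copies, so $G'$ is bipartite by construction.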
\section{Kernelization Lower Bounds} \label{section-kernel-lb}
We refer to the work of Bodlaender, Thomassé and Yeo \cite{BODLAENDER_PPT} for the details on polynomial time parameter transformation, and recall its definition here.
\begin{definition}[\cite{BODLAENDER_PPT}] For two parameterized problems $P$ and $Q$, we say that there exists a polynomial time parameter transformation (ppt) from $P$ to $Q$, denoted by $P\leq_{ppt }Q$, if there exist a polynomial time computable function $f: \{0,1\}^* \times \mathbb{N} \to \{0,1\}^* \times \mathbb{N}$ and a polynomial $p: \mathbb{N}\to \mathbb{N}$ such that for all $x\in \{0,1\}^*$ and $k\in \mathbb{N}$, if $f((x,k))=(x',k')$, then the following hold. \begin{itemize}
\item $(x,k)\in P$ if and only if $(x',k')\in Q$, and
\item $k'\leq p(k)$. \end{itemize} Here, $f$ is called the polynomial time parameter transformation. \end{definition}
\begin{theorem}[\cite{BODLAENDER_PPT}] For two parameterized problems $P$ and $Q$, let $P'$ and $Q'$ be their derived classical problems. Suppose that $P'$ is NP-complete, $Q'\in \text{NP}$, and there exists a polynomial time parameter transformation from $P$ to $Q$. Then, if $Q$ has a polynomial kernel, $P$ also has a polynomial kernel. \end{theorem}
In the remaining part of this section, we will prove the following theorem.
\begin{theorem}\label{theorem-kernel} PMS does not admit a polynomial kernel parameterized by vertex cover number unless $\text{NP}\subseteq \text{coNP/poly}$. \end{theorem}
\begin{figure}
\caption{Reduction from CLIQUE to PMS: vertex sets $L$ and $R$ form cliques. A vertex $v'_{k+i}$ is connected to a vertex $v''_j\in R$ if $v_j$ is not adjacent to $v_{k+i}$ in the input graph $G$.}
\end{figure}
One can observe that $\text{PMS} \in \text{NP}$, as the solution certificate $(A,B)$ can be easily verified in polynomial time. Further, CLIQUE (an NP-complete problem) does not admit a polynomial kernel when parameterized by the size of a vertex cover unless $\text{NP}\subseteq \text{coNP/poly}$ \cite{Bodlaender_cross}. Thus, it will suffice to obtain a polynomial time parameter transformation from CLIQUE to PMS with parameter vertex cover number.
To this end, let $(G,l,V_c)$ be an instance of CLIQUE, where we need to decide whether the input graph $G$, given with a vertex cover $V_c$, contains a clique of size $l$; here the parameter is the size $k=|V_c|$ of the vertex cover. Note that we consider a vertex cover of $G$ of size $k$ to be part of the input. However, we do not depend on this assumption, as there is a well-known polynomial time algorithm that finds a $2$-approximation of a minimum vertex cover.
For notational simplicity, let $V=\{v_1,v_2,...,v_n\}$ be the vertex set of $G$ and let $V_c=\{v_1,v_2,...,v_k\}$ be a vertex cover of $G$. The rest of the vertices of $G$ are $\{v_{k+1},v_{k+2},...,v_n\}$. Let $\bar{E}$ be the set of all the non-edges in $G$, i.e. $\bar{E}=\{v_iv_j\ |\ v_i\neq v_j \land v_iv_j\not \in E(G)\}$. Further, let $\bar{E}(V_c)$ be all the non-edges with both the endpoints in $V_c$. We now construct a graph $G'$ as follows. \vskip 0.1cm \begin{itemize}
\item Create a vertex set $L=\{v'_i|\ v_i\in V_c\}\cup \{a_1,a_2\}$, and for every distinct $x,y\in L$, connect $x$ and $y$ to each other; this way $L$ forms a clique.
\item Create a vertex set $R=\{v''_i|\ v_i\in V_c\}\cup \{b_1,b_2,u\}$, and for every distinct $x,y\in R$, connect $x$ and $y$ to each other; this way $R$ forms a clique.
\item Connect $a_1$ to $b_1$, $a_2$ to $b_2$, and $v'_{i}$ to $v''_{i}$ for every $i\in [k]$.
\item Create a vertex set $F= \{v'_{k+i}|\ v_{k+i}\in V(G)\setminus V_c\}$, and connect every vertex of $F$ to $u$.
\vskip 0.1cm
\item Connect a vertex $v'_{k+i}\in F$ to $v''_j\in R$ if $v_{k+i}$ and ${v_j}$ are not connected in $G$.
\vskip 0.1cm
\item Create a vertex set $X_{\bar{e}}= \{e_{ij}^1,e_{ij}^2,e_{ij}^3,e_{ij}^4|\ v_iv_j\in \bar{E}(V_c) \land (i<j)\}$. Further, for every non-edge $v_iv_j\in \bar{E}(V_c)$ where $i<j$, connect $e_{ij}^1$ to $e_{ij}^2$, $e_{ij}^2$ to $e_{ij}^3$, and $e_{ij}^3$ to $e_{ij}^4$, that is we create a path on 4 vertices, and connect $v''_{i}$ to $e_{ij}^1$ and $e_{ij}^4$, and connect $v''_j$ to $e_{ij}^2$ and $e_{ij}^3$. \end{itemize}
Observe that the construction can be carried out in time polynomial in $|V|$. Further, the vertex set $V(G')\setminus F$ is a vertex cover of $G'$ of size at most $|L| + |R| + 4\cdot |\bar{E}(V_c)|$, which is $O(k^2)$. In the following proposition, we state and then prove the correctness of the reduction.
\begin{proposition}\label{popo:kernel-correctness}
$G$ has a clique of size $l$ if and only if $G'$ has a pair of perfectly matched sets of size $2+2\cdot |\bar{E}(V_c)|+l$. \end{proposition} \begin{proof}
For the forward direction, let $C$ be a clique of size $l$ in $G$. Recall that the size of a clique in $G$ can be at most $1$ more than the size of a vertex cover of $G$; thus $l$ can be at most $k+1$. We construct two sets $A$ and $B$ as follows: for every vertex $v_i$ in $C$ which also belongs to $V_c$, we put $v'_i$ in $A$ and $v''_i$ in $B$; if there is a vertex $v_{k+j}$ in $C$ which is not in $V_c$, we put $v'_{k+j}$ in $A$ and $u$ in $B$ (observe that at most one such vertex can be in $C$). Further, for every non-edge $v_iv_j\in \bar{E}(V_c)$ such that $i<j$, if $v_i\in C$ (then certainly $v_j\not \in C$), we put $e_{ij}^1$ and $e_{ij}^4$ in $B$ and $e_{ij}^2$ and $e_{ij}^3$ in $A$; otherwise, if $v_i\not \in C$, we put $e_{ij}^1$ and $e_{ij}^4$ in $A$, and $e_{ij}^2$ and $e_{ij}^3$ in $B$. Lastly, we put $a_1$ and $a_2$ in $A$ and $b_1$ and $b_2$ in $B$. A direct check verifies that $(A,B)$ is a pair of perfectly matched sets in $G'$ and $|E(A,B)|= 2+2\cdot |\bar{E}(V_c)|+l$.
For the other direction, let $(A,B)$ be a pair of perfectly matched sets in $G'$ such that $|E(A,B)|= 2+2\cdot |\bar{E}(V_c)|+l$. For the proof of this direction, we will modify the set $A$ and/or $B$ to obtain desired vertices in each set while maintaining that $(A,B)$ remains a pair of perfectly matched sets and that $|E(A,B)|$ does not decrease.
For a pair $(A,B)$ of perfectly matched sets, we say a vertex $x\in A$ (resp. $B$) is \textit{matched} to $y\in B$ (resp. $A$) if $x$ and $y$ are neighbors. We apply the following modifications sequentially, each time first verifying that the modification is applicable; the modifications take precedence in the order in which they are described.
\begin{itemize}
\item \textbf{M1:} Suppose there exist two distinct vertices $x,y\in L$ such that $x$ is in $A$ and $y$ is in $B$, as well as two distinct vertices $x',y'\in R$ such that $x'$ is in $A$ and $y'$ is in $B$. Then we set $A=(A\setminus\{x,x'\}) \cup \{a_1,a_2\}$ and $B= (B\setminus \{y,y'\}) \cup \{b_1,b_2\}$.
Observe that $(A,B)$ remains a pair of perfectly matched sets and $|E(A,B)|$ remains unchanged.
This is due to the fact that both $L$ and $R$ form cliques and thus no other vertices from $L$ except $x,y$ and no other vertices from $R$ except $x',y'$ could belong to $A$ or $B$.
\item \textbf{M2:} Suppose there exist two distinct vertices $x,y\in L$ such that $x$ is in $A$ and $y$ is in $B$, and either $ A\cap R = \emptyset$ or $B\cap R = \emptyset$. In this case, neither $A\setminus\{x\}$ nor $B\setminus\{y\}$ contains any vertex from $L$ (as $L$ is a clique). Further, if $A\cap R=\emptyset$, then $b_1$ cannot belong to $B$, as it cannot be matched to any vertex in $A$ ($x$ and $y$ are matched to each other). In this case, we set $A=(A\setminus\{x\}) \cup \{a_1\}$ and $B=(B\setminus \{y\}) \cup \{b_1\}$. Similarly, if $B\cap R=\emptyset$, then $b_1$ cannot belong to $A$, as it cannot be matched to any vertex in $B$. In this case, we set $A=(A\setminus\{x\}) \cup \{b_1\}$ and $B=(B\setminus \{y\}) \cup \{a_1\}$. In both cases, $(A,B)$ remains a pair of perfectly matched sets and $|E(A,B)|$ remains unchanged.
\item \textbf{M3:} Suppose there exist two distinct vertices $x,y\in R$ such that $x$ is in $A$ and $y$ is in $B$, and either $ A\cap L = \emptyset$ or $B\cap L = \emptyset$. In this case, neither $A\setminus\{x\}$ nor $B\setminus\{y\}$ contains any vertex from $R$ (as $R$ is a clique). Further, if $A\cap L=\emptyset$, then $a_1$ cannot belong to $B$, as it cannot be matched to any vertex in $A$ ($x$ and $y$ are matched to each other). In this case, we set $A= (A\setminus\{x\}) \cup \{b_1\}$ and $B= (B\setminus \{y\}) \cup \{a_1\}$. Similarly, if $B\cap L=\emptyset$, then $a_1$ cannot belong to $A$, as it cannot be matched to any vertex in $B$. In this case, we set $A= (A\setminus\{x\}) \cup \{a_1\}$ and $B= (B\setminus \{y\}) \cup \{b_1\}$.
In both the cases, $(A,B)$ remains a pair of perfectly matched sets and $|E(A,B)|$ remains unchanged. \end{itemize}
After applying the above modifications, we may assume that for a pair of perfectly matched sets $(A,B)$, either $L\cap B = \emptyset$ and $R\cap A = \emptyset$, or $R\cap B = \emptyset$ and $L\cap A = \emptyset$. For the simplicity of arguments, we assume that $L\cap B = \emptyset$ and $R\cap A = \emptyset$ and proceed as follows: \begin{itemize}
\item \textbf{M4:} For every vertex $v''_i\in (R\cap B)\setminus \{u\}$, if the only neighbor of $v''_i$ in $A$ is $x$ and $x\neq v'_i$, then we remove $x$ from $A$ and add $v'_i$ to $A$. Observe that it is safe to do so, as $v'_i$ is adjacent only to $v''_i$ outside $L$, and $L$ is disjoint from $B$. Also, this modification does not change the size of $E(A,B)$. \end{itemize}
To this end, after applying the above modification exhaustively, we may also assume that in $(A,B)$, every vertex of $(R\cap B) \setminus \{u\}$ is matched to a vertex in $L$. Observe that if two distinct vertices $v''_i,v''_j\in B$ are such that $v_i$ and $v_j$ are not connected in $G$, $v''_i$ is matched to $v'_i$, and $v''_j$ is matched to $v'_j$ in $(A,B)$, then none of the vertices from $\{e^1_{ij},e^2_{ij},e^3_{ij},e^4_{ij}\}$ can belong to $A$ without violating the property of perfectly matched sets, and they cannot be matched by $v''_i$ or $v''_j$; hence none of them belongs to either $A$ or $B$. Thus, we modify $(A,B)$ as follows:
\begin{itemize}
\item \textbf{M5:} If two distinct vertices $v''_i,v''_j\in B$ are such that $v''_i$ is matched to $v'_i$ and $v''_j$ is matched to $v'_j$ in $(A,B)$ and $v_i$ and $v_j$ are not connected in $G$, then we remove $v''_i,v''_j$ from $B$ and remove $v'_i,v'_j$ from $A$, we then put $e^2_{ij},e^3_{ij}$ in $B$ and $e^1_{ij},e^4_{ij}$ in $A$. Observe that this modification maintains that $(A,B)$ remains a pair of perfectly matched sets and $|E(A,B)|$ remains unchanged. \end{itemize}
Recall that $|E(A,B)|$ is $2+2\cdot |\bar{E}(V_c)|+l$. Applying the above modification exhaustively, we also ensure that none of the vertices in $R\cap B$ is matched to a vertex from $X_{\bar{e}}$. Thus, every vertex of $X_{\bar{e}}\cap (A\cup B)$ must be matched to a vertex of $X_{\bar{e}}\cap (A\cup B)$ in $(A,B)$. Since there are at most $4\cdot |\bar{E}(V_c)|$ such vertices, they contribute at most $2\cdot |\bar{E}(V_c)|$ to $|E(A,B)|$. If we consider $a_1,a_2$ and $b_1,b_2$ to be part of $A$ and $B$ respectively, this leaves us with $l$ remaining edges in $E(A,B)$; the endpoints of these edges which belong to $B$ must be from $R\setminus \{b_1,b_2\}$. Let $C$ be the set of these $l$ vertices. If $C$ contains $u$, then let $v'_{k+p}$ be the only neighbor of $u$ in $A$. We define $C'=\{ v_i \mid v''_i\in C\}\cup \{v_{k+p}\}$ if $C$ contains $u$, and $C'=\{ v_i \mid v''_i\in C\}$ otherwise. Observe that $|C'|=l$. We claim that $C'$ is a clique in $G$. To prove the claim, recall modification M5, which ensures that every two distinct $v_i,v_j\in C'\cap V_c$ are adjacent. Moreover, if $C'$ contains a vertex $v_{k+p}$ outside $V_c$, then $v_{k+p}$ is adjacent to every vertex of $V_c\cap C'$ in $G$; otherwise, recalling the construction of $G'$, $v'_{k+p}$ would be connected to at least one vertex in $C\setminus \{u\}$, violating the perfectly matched sets property of $(A,B)$. This finishes the proof. \end{proof}
The above proposition concludes that the reduction is a polynomial time parameter transformation. This finishes the proof of Theorem \ref{theorem-kernel}. \section{Parameterized Algorithms} \input{CW} \input{pms-cluster4} \input{pms-co-cluster} \input{pms-treewidth} \section{Exact Algorithm}\label{sec_exact}
In this section, we prove the following result. \begin{theorem}\label{thm-exact-PMS} There is an algorithm that accepts a graph $G$ on $n$ vertices and finds a largest pair of perfectly matched sets of $G$ in time $O^*(1.966^n)$. \end{theorem}
We shall prove Theorem \ref{thm-exact-PMS} by making use of the algorithms in the following two lemmas. \begin{lemma}\label{PMS-size-lemma-one} There is an $O^*\left(\dbinom{n}{k}\right)$-time algorithm to test whether $G$ has a pair of perfectly matched sets of size $k$, and to find one such pair if it exists. \end{lemma} \begin{proof}
The algorithm is the following: for each subset $A$ of size $k$, first find $C(A)$ which we define to be the set of vertices in $V \setminus A$ with exactly one neighbor in $A$. If every $v \in A$ has at least one neighbor in $C(A)$, then mark one such neighbor for each $v \in A$ as $f(v)$ and let $B=\{f(v)|v \in A\}$. Then $(A,B)$ forms a pair of perfectly matched sets. If for no $A$ can we find a corresponding $B$, then $G$ has no pair of perfectly matched sets of size $k$. \end{proof}
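The procedure of the lemma can be sketched in a few lines of Python (the graph representation and names are our own). Note that the choice of $f(v)$ is automatically injective: every vertex of $C(A)$ has exactly one neighbor in $A$, so it can serve as $f(v)$ for only one $v$.

```python
from itertools import combinations

def pms_of_size(adj, k):
    """Sketch of the O*(C(n, k)) scheme: guess the side A, compute
    C(A) = the vertices outside A with exactly one neighbor in A, then
    pick some f(v) in C(A) for every v in A.  Returns a pair (A, B) of
    perfectly matched sets of size k, or None if none exists."""
    V = list(adj)
    for A in combinations(V, k):
        As = set(A)
        C = {v for v in V if v not in As and len(adj[v] & As) == 1}
        B = []
        for v in A:
            candidates = adj[v] & C      # possible choices of f(v)
            if not candidates:
                break                    # this A admits no partner set
            B.append(min(candidates))    # f(v): any neighbor in C(A)
        else:
            return As, set(B)
    return None
```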
\begin{lemma}\label{PMS-size-lemma-two} If there is an $O^*(\alpha^n)$-time algorithm to solve PERFECT MATCHING CUT on graphs of size $n$, then there is an $O^*\left(\dbinom{n}{2k}{\alpha}^{2k}\right)$-time algorithm to test whether $G$ has a pair of perfectly matched sets of size $k$, and to find one if it exists. In particular, by the result of \cite{LT2020}, there is an $O^*\left(\dbinom{n}{2k}{1.2721}^{2k}\right)$-time algorithm to do so. \end{lemma}
\begin{proof} The algorithm consists of picking every subset $S$ of size $2k$ and checking if $G[S]$ has a perfect matching cut. \end{proof}
\begin{proof}[Proof of Theorem \ref{thm-exact-PMS}] The main idea is to play off the bounds of Lemma \ref{PMS-size-lemma-one} and Lemma \ref{PMS-size-lemma-two}: when $k$ is close to $n/2$, the algorithm of Lemma \ref{PMS-size-lemma-two} is faster, while for slightly smaller $k$, the algorithm of Lemma \ref{PMS-size-lemma-one} is faster.
Let $\varepsilon <1/2$, which we will fix shortly. For $1 \leq k \leq \varepsilon n$, we run the algorithm of Lemma \ref{PMS-size-lemma-one} and for $\varepsilon n <k \leq n$, we run the algorithm of Lemma \ref{PMS-size-lemma-two}. Thus we can find a largest pair of perfectly matched sets in time $O^*(T(n))$, for the following value of $T(n)$. \begin{align*} T(n)&=\sum_{k=0}^{\varepsilon n} \dbinom{n}{k} + \sum_{k=\varepsilon n}^{n/2} \dbinom{n}{2k}{\alpha}^{2k} \\ &\leq 2^{nH(\varepsilon)} + n\dbinom{n}{2 \varepsilon n}{\alpha}^{2\varepsilon n} \\ & \leq 2^{nH(\varepsilon)}+2^{nH(1-2\varepsilon)}{\alpha}^{2\varepsilon n}. \end{align*}
The second line follows from the fact that the terms of the second sum are decreasing in $k$ for $\varepsilon \geq \dfrac{1}{2+\alpha}$; we also use the upper bound of $2^{nH(t)}$ on $\sum_{k=1}^{tn}\dbinom{n}{k}$ for $t \leq 1/2$.
The optimal value of $\varepsilon$ that minimizes the right-hand side is found by equating the two terms, i.e. by solving the equation $H(\varepsilon)=H(1-2\varepsilon)+ (2\log_2 \alpha)\, \varepsilon$. For $\alpha=1.2721$, this yields $\varepsilon \approx 0.4072$ and $2^{H(\varepsilon)} \approx 1.96565$. Thus, we obtain $T(n)=O(1.966^n)$. \end{proof}
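The numerical values can be verified with a short bisection; the following Python sketch is ours (with $H$ the binary entropy function used above) and only reproduces the computation.

```python
from math import log2

def H(x):
    """Binary entropy function."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * log2(x) - (1 - x) * log2(1 - x)

def balancing_eps(alpha, lo=1/3, hi=1/2, iters=100):
    """Bisection for the root of H(e) = H(1 - 2e) + 2*log2(alpha)*e,
    which equates the two terms of the running-time bound."""
    g = lambda e: H(e) - H(1 - 2 * e) - 2 * log2(alpha) * e
    for _ in range(iters):
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

eps = balancing_eps(1.2721)   # ~ 0.4072
base = 2 ** H(eps)            # ~ 1.9657, whence the O*(1.966^n) bound
```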
\section{PMS for Planar Graphs }\label{section-planar-hardness}
In this section, we prove the following theorem. \begin{theorem}\label{planarNP_hardness} PMS is NP-hard for planar graphs. \end{theorem}
It is known that INDEPENDENT SET is NP-hard on planar graphs. We give a polynomial time reduction from INDEPENDENT SET to PMS.
Let $(G,k)$ be an instance of INDEPENDENT SET, where $G$ is a planar graph and $k\in \mathbb{N}$, and we need to decide whether $G$ contains an independent set of size $k$. Let $V=\{v_1,v_2,...,v_n\}$ be the vertices of $G$; we construct a graph $G'$ as follows. \begin{figure}
\caption{Reduction from INDEPENDENT SET to PMS; both the input graph and the constructed graph are planar.}
\label{fig:planar_reduction1}
\end{figure}
\begin{itemize}
\item Create a vertex set $V'=\{v'_i\mid v_i\in V\}$, and connect every $v'_i$ to $v'_j$ if $v_iv_j\in E(G)$.
\item Create a vertex set $V''=\{v''_i\mid v_i\in V\}$, and connect $v'_i$ to $v''_i$ for every $i\in[n]$.
\item Create a vertex set $X_{e}= \{e_{ij}^1,e_{ij}^2,e_{ij}^3,e_{ij}^4\mid v_iv_j\in E(G) \land (i<j)\}$. Further, for every edge $v_iv_j\in E(G)$ where $i<j$, connect $e_{ij}^1$ to $e_{ij}^2$, $e_{ij}^2$ to $e_{ij}^3$, and $e_{ij}^3$ to $e_{ij}^4$, i.e. create a path on 4 vertices. Further, connect $v'_{i}$ to $e_{ij}^1$ and $e_{ij}^4$, and connect $v'_j$ to $e_{ij}^2$ and $e_{ij}^3$. \end{itemize}
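For concreteness, the construction can be sketched in code. This is an illustrative sketch, not part of the proof; the string labels such as "v'1" are hypothetical names for the vertices of $G'$:

```python
def build_reduction(n, edges):
    """Build the edge set of G' from G = ({1..n}, edges); edges are pairs (i, j), i < j."""
    E = set()
    add = lambda u, v: E.add(frozenset((u, v)))
    for i, j in edges:                      # V' copies the adjacency of G
        add(f"v'{i}", f"v'{j}")
    for i in range(1, n + 1):               # pendant vertices V''
        add(f"v'{i}", f"v''{i}")
    for i, j in edges:                      # path gadget e^1-e^2-e^3-e^4 per edge
        e1, e2, e3, e4 = (f"e_{i}{j}^{p}" for p in (1, 2, 3, 4))
        add(e1, e2); add(e2, e3); add(e3, e4)
        add(f"v'{i}", e1); add(f"v'{i}", e4)   # v'_i sees e^1 and e^4
        add(f"v'{j}", e2); add(f"v'{j}", e3)   # v'_j sees e^2 and e^3
    return E

# G is the path v1 - v2 - v3; G' has n + 8m = 3 + 16 = 19 edges
print(len(build_reduction(3, [(1, 2), (2, 3)])))  # 19
```

Each edge of $G$ contributes $8$ edges to $G'$ (one copy, three path edges, four attachments), and each vertex contributes one pendant edge, so $|E(G')| = n + 8|E(G)|$.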
Observe that if $G$ is embedded in the plane without crossing edges, then none of the edges introduced in $G'$ creates a crossing, and hence $G'$ is a planar graph. Further, the above construction takes time polynomial in the size of $G$. The following proposition proves the correctness of the reduction.
\begin{proposition}
$G$ has an independent set of size $k$ if and only if $G'$ has a pair of perfectly matched sets of size $k+2\cdot |E(G)|$. \end{proposition} \begin{proof} For the forward direction, let $I$ be an independent set of size $k$ in $G$. We construct $(A,B)$ as follows. \begin{enumerate}
\item For every $v_i\in I$, put $v'_i$ in $A$ and $v''_i$ in $B$.
\item For every edge $v_iv_j \in E(G)$ such that $i<j$, if $v_i$ is in $I$ (then certainly $v_j$ is not in $I$), then add $e^1_{ij}, e^4_{ij}$ to $A$ and $e^2_{ij}, e^3_{ij}$ to $B$, else if $v_i$ is not in $I$, then add $e^1_{ij}, e^4_{ij}$ to $B$ and $e^2_{ij}, e^3_{ij}$ to $A$. \end{enumerate}
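The two steps above translate directly into code; a minimal sketch under the same illustrative naming as before (labels are hypothetical):

```python
def pms_from_independent_set(edges, I):
    """Build the pair (A, B) of the proof from an independent set I of G.

    edges: pairs (i, j) with i < j; returns the two vertex sets as sets of labels.
    """
    A, B = set(), set()
    for i in I:                       # step 1: v'_i to A, v''_i to B
        A.add(f"v'{i}")
        B.add(f"v''{i}")
    for i, j in edges:                # step 2: orient each gadget by membership of v_i
        if i in I:
            A |= {f"e_{i}{j}^1", f"e_{i}{j}^4"}
            B |= {f"e_{i}{j}^2", f"e_{i}{j}^3"}
        else:
            B |= {f"e_{i}{j}^1", f"e_{i}{j}^4"}
            A |= {f"e_{i}{j}^2", f"e_{i}{j}^3"}
    return A, B

# path v1 - v2 - v3 with I = {1, 3}: |A| = |I| + 2|E(G)| = 2 + 4 = 6
A, B = pms_from_independent_set([(1, 2), (2, 3)], {1, 3})
print(len(A), len(B))  # 6 6
```

The sizes match the count in the proof: $|A| = |I| + 2\cdot|E(G)|$.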
We claim that $(A,B)$ is a pair of perfectly matched sets and $|E(A,B)|= k+2\cdot |E(G)|$. For the proof of the claim, consider the following arguments.
\begin{itemize}
\item Consider the vertices of $V'$; in the construction, they can only be added to $A$. For a vertex $v'_i$ in $V'$, if $v_iv_j$ is an edge in $E(G)$ where $i<j$, then $v'_i$ has neighbors $e^1_{ij}, e^4_{ij}$, and if $v'_i$ is added to $A$, then $e^1_{ij}, e^4_{ij}$ are also added to $A$ (step 2). Further, if $v_jv_i$ is an edge in $E(G)$ where $j<i$, then $v'_i$ has neighbors $e^2_{ji}, e^3_{ji}$; if $v'_i$ is added to $A$, then its neighbor $v'_j$ is not added to $A$ (step 1), and in step 2 we added $e^2_{ji}, e^3_{ji}$ to $A$.
Further, if a vertex $v'_i$ is added to $A$, then we also added its neighbor $v''_i$ to $B$, and every vertex $v'_i$ in $V'$ has only one neighbor $v''_i$ in $V''$. Thus, if a vertex of $V'$ is added to $A$, then it has exactly one neighbor in $B$.
\item Consider the vertices of $V''$; in the construction, they can only be added to $B$. We added $v''_i$ to $B$ whenever we added its neighbor $v'_i$ to $A$. All the vertices of $V''$ have degree $1$. Thus, if a vertex of $V''$ is added to $B$, it has exactly one neighbor in $A$.
\item Consider the vertices of $X_e$; no vertex in $X_e$ is connected to a vertex in $V''$. For every $v_iv_j$ in $E(G)$ where $i<j$, the vertices $e^1_{ij}$ and $e^4_{ij}$ are connected only to $v'_i$ in $V'$, and $e^2_{ij}$ and $e^3_{ij}$ are connected only to $v'_j$ in $V'$. If $v'_i$ was added to $A$ (so $v'_j$ was not added to $A$), then we added both $e^1_{ij}$ and $e^4_{ij}$ to $A$ (step 2), and $e^2_{ij}$ and $e^3_{ij}$ to $B$. Else, if $v'_i$ was not added to $A$ (while $v'_j$ may have been added to $A$), then we added $e^1_{ij}$ and $e^4_{ij}$ to $B$, and $e^2_{ij}$ and $e^3_{ij}$ to $A$. This way no vertex of $X_e$ has a neighbor from $V'$ in its opposite set in $(A,B)$. Further, for every edge $v_iv_j\in E(G)$, when $e^1_{ij}$, $e^4_{ij}$ are added to $A$ (resp. $B$), then $e^2_{ij}$, $e^3_{ij}$ are added to $B$ (resp. $A$). Thus, $e^1_{ij}$ and $e^2_{ij}$ are the only neighbors of each other in opposite sets, and similarly $e^3_{ij}$ and $e^4_{ij}$ are the only neighbors of each other in opposite sets. Thus, every vertex in $X_e$ has exactly one neighbor in its opposite set in $(A,B)$.
\end{itemize}
The above arguments show that every vertex in $A$ (resp. $B$) has exactly one neighbor in $B$ (resp. $A$), so $(A,B)$ is a pair of perfectly matched sets. To bound $|E(A,B)|$, it suffices to bound $|A|$, since $(A,B)$ is a pair of perfectly matched sets. Observe that we add $|I|=k$ vertices from $V'$ to $A$ (step 1), and for every edge $v_iv_j$ in $E(G)$, we add two vertices from $X_e$ to $A$ (step 2). Thus, $|A|= k+2\cdot |E(G)|$.
For the other direction, let $(A,B)$ be a pair of perfectly matched sets in $G'$ such that $|E(A,B)|= k+2\cdot |E(G)|$. For the proof of this direction, we will modify the sets $A$ and/or $B$ while maintaining that $(A,B)$ remains a pair of perfectly matched sets and that $|E(A,B)|$ does not decrease. For a pair $(A,B)$ of perfectly matched sets, we say a vertex $x\in A$ (resp. $B$) is \textit{matched} to $y\in B$ (resp. $A$) if $x$ and $y$ are neighbors. Consider the following modifications, which we apply in the order in which they are described; at each step, a modification is applied exhaustively.
\begin{itemize}
\item \textbf{M1:} If there exists a vertex $v'_i\in V'\cap A$ which is matched to a vertex $v'_j\in V'\cap B$, then remove $v'_j$ from $B$ and add $v''_i$ to $B$. Observe that it is safe to do so, since $v''_i$ is connected to only $v'_i$ in $G'$ and $|E(A,B)|$ remains unchanged.
\item \textbf{M2:} If there exists a vertex $v'_i\in V'\cap A$ (resp. $V'\cap B$) which is matched to a vertex $e^p_{ij}\in X_e \cap B$ (resp. $X_e \cap A$) where $p\in [4]$, then remove $e^p_{ij}$ from $B$ (resp. $A$) and add $v''_i$ to $B$ (resp. $A$). Observe that it is safe to do so, since $v''_i$ is connected to only $v'_i$ in $G'$ and $|E(A,B)|$ remains unchanged. \end{itemize}
After applying the above modifications exhaustively, for every vertex $v'_i\in V'$, if $v'_i$ is in $A\cup B$, then $v'_i$ is matched to $v''_i$, and thus no two neighbors in $V'$ can belong to opposite sets in $(A,B)$. Further, if two distinct vertices $v'_i,v'_j\in V'\cap B$ (resp. $V'\cap A$) are neighbors in $G'$, then none of the vertices from $\{e^1_{ij},e^2_{ij},e^3_{ij},e^4_{ij}\}$ can belong to $A$ (resp. $B$) without violating a property of perfectly matched sets, and they cannot be matched to $v'_i$ or $v'_j$; hence none of them belongs to either $A$ or $B$. Thus, we further modify $(A,B)$ as follows:
\begin{itemize}
\item \textbf{M3:} If two distinct vertices $v'_i,v'_j\in V'\cap A$ (resp $V'\cap B)$ are neighbors, then we remove $v'_i,v'_j$ from $A$ (resp. $B$) and remove $v''_i,v''_j$ from $B$ (resp. $A$), we then put $e^2_{ij},e^3_{ij}$ in $B$ and $e^1_{ij},e^4_{ij}$ in $A$. Observe that this modification maintains that $(A,B)$ remains perfectly matched sets and $|E(A,B)|$ remains unchanged. \end{itemize}
Exhaustive application of the above modification ensures that no two neighbors in $V'$ belong to $A\cup B$. Recall that $|E(A,B)|=k+2\cdot |E(G)|$, and that any vertex from $X_e$ that belongs to $A\cup B$ is matched to a vertex from $X_e$. Since there are at most $4\cdot |E(G)|$ vertices in $X_e$, they contribute at most $2\cdot |E(G)|$ to $|E(A,B)|$. Thus, there must be at least $k$ vertices from $V'$ in $A\cup B$, as vertices of $V''$ can only be matched to vertices of $V'$. As discussed earlier, no two neighbors in $V'$ belong to $A\cup B$. Thus, let $I^*= \{v_i \mid v'_i \in V'\cap (A\cup B)\}$. Recalling that $v'_i$ and $v'_j$ are adjacent in $G'$ if and only if $v_i$ and $v_j$ are adjacent in $G$, this ensures that $I^*$ is an independent set in $G$ with $|I^*|\geq k$. This finishes the proof. \end{proof} \section{Conclusions} We showed PMS to be $W[1]$-hard with respect to solution size, while obtaining FPT algorithms with respect to several structural parameters. We leave open the problem of obtaining an exponential-time algorithm with time complexity significantly lower than our bound of $O^*(1.966^n)$. \newline \newline \textbf{Acknowledgements.} We thank the anonymous reviewers for their feedback and useful suggestions.
\end{document} |
\begin{document}
\title{\textbf{Decomposing planar cubic graphs}}
\author{ Arthur Hoffmann-Ostenhof\thanks{Technical University of Vienna, Austria. Email: {\tt [email protected]}}\thanks{This work was supported by the Austrian Science Fund (FWF): P 26686.}\and
Tom\'{a}\v{s} Kaiser\thanks{Department of Mathematics, Institute for
Theoretical Computer Science (CE-ITI), and European Centre of
Excellence NTIS (New Technologies for the Information Society),
University of West Bohemia, Pilsen, Czech Republic. Email:
\texttt{[email protected]}}\thanks{Supported by project
GA14-19503S of the Czech Science Foundation.}\and
Kenta Ozeki\thanks{Faculty of Environment and Information Sciences, Yokohama National University,
Japan. e-mail: {\tt [email protected]} }\thanks{ This work was supported by JST ERATO Grant Number JPMJER1201, Japan.}}
\date{} \maketitle
\begin{abstract}
The 3-Decomposition Conjecture states that every connected cubic
graph can be decomposed into a spanning tree, a $2$-regular subgraph
and a matching. We show that this conjecture holds for the
class of connected plane cubic graphs. \end{abstract}
\noindent {\bf Keywords:} cubic graph, 3-regular graph, spanning tree, decomposition, separating cycle
\section{Introduction} \label{sec:intro}
All graphs considered here are finite and without loops. A \emph{decomposition} of a graph $G$ is a set of subgraphs whose edge sets partition the edge set of $G$. Any of these subgraphs may equal the empty graph --- that is, a graph whose vertex set is empty --- unless this is excluded by additional requirements (such as being a spanning tree). We regard matchings in decompositions as $1$-regular subgraphs.
The \textit{3-Decomposition
Conjecture} (3DC) by the first author \cite{C, Hof1} states that every connected cubic graph has a decomposition into a spanning tree, a $2$-regular subgraph and a matching. For an example, see the graph on the left in Figure~\ref{fig:decomp}. The $2$-regular subgraph in such a decomposition is necessarily nonempty whereas the matching can be empty.
The 3DC was proved for planar and projective-planar 3-edge-connected cubic graphs in \cite{Oz}. It is also known that the conjecture holds for all hamiltonian cubic graphs. For a survey on the 3DC, see \cite{Hof2}.
We call a cycle $C$ in a connected graph $G$ \emph{separating} if $G-E(C)$ is disconnected. The 3DC was shown in \cite{Hof2} to be equivalent to the following conjecture, called the \textit{2-Decomposition Conjecture} (2DC). (See Proposition \ref{p:2D3D} at the end of this paper.)
\begin{con}[2DC]
Let $G$ be a connected graph with vertices of degree two and three
only such that every cycle of $G$ is separating. Then $G$ can be
decomposed into a spanning tree and a nonempty matching. \end{con}
For an example, see the graph on the right in Figure~\ref{fig:decomp}. The main result of this paper, Theorem~\ref{main}, shows that the 2DC is true in the planar case. Call a graph \emph{subcubic} if its maximum degree is at most $3$.
\begin{figure}
\caption{Decomposition of a cubic and a subcubic graph into a spanning tree (thick lines), a $2$-regular subgraph (dotted lines), and a nonempty matching (thin lines). }
\label{fig:decomp}
\end{figure}
\begin{thm}\label{main}
Every connected subcubic plane graph in which every cycle is
separating has a decomposition into a spanning tree and a matching. \end{thm}
Note that the matching in Theorem \ref{main} is empty if and only if the subcubic graph is a tree.
It follows that the 2DC holds for the planar case. Finally, we will prove that Theorem \ref{main} implies the planar case of the 3DC:
\begin{cor}\label{imply2}
Every connected cubic plane graph can be decomposed into a
spanning tree, a nonempty $2$-regular subgraph and a matching. \end{cor}
\section{Preliminary observations}
Before we establish some facts needed for the proof of Theorem~\ref{main}, we introduce some terminology and notation. We refer to \cite{Bo, West} for additional information.
A cycle is a connected $2$-regular graph. Moreover, a $2$-cycle is a cycle with precisely two edges. A $vw$-path is a path with endvertices $v$ and $w$. For $k\in\Setx{2,3}$, a \emph{$k$-vertex} of a graph $G$ is a vertex of degree $k$. Similarly, for $k,\ell\in\Setx{2,3}$, a \emph{$(k,\ell)$-edge} is one with endvertices of degrees $k$ and $\ell$. We let $V_2(G)$ and $V_3(G)$ denote the set of vertices of degree $2$ and $3$, respectively.
\begin{definition} Let $\subc$ be the class of all connected plane graphs with each vertex of degree $2$ or $3$. Let $\sep$ be the class of all graphs $G$ in $\subc$, such that each cycle in $G$ is separating. \end{definition}
If a vertex $v$ of $G$ belongs to the boundary of a face $F$, we say that $v$ is \emph{incident} with $F$ or simply that it is a vertex of $F$. If $A$ is a set of edges of $G$ and $e$ is an edge, we abbreviate $A\cup\Setx{e}$ to $A+e$ and $A\setminus\Setx{e}$ to $A-e$.
When contracting an edge, any resulting parallel edges are retained. The contraction of a parallel edge is not allowed. \emph{Suppressing} a $2$-vertex (with two different neighbours) means contracting one of its incident edges. If $e \in E(G)$, then $G/e$ denotes the graph obtained from $G$ by contracting $e$.
The graph with two vertices and three edges joining them is denoted by $\Theta$.
Recall that an \emph{edge-cut} $C$ in a connected graph $G$ is an inclusionwise minimal set of edges whose removal disconnects $G$. By the minimality, $G-C$ has exactly two components. The edge-cut $C$ is \emph{cyclic} if both components of $G-C$ contain cycles. The graph $G$ is said to be \emph{cyclically $k$-edge-connected} (where $k$ is a positive integer) if it contains no cyclic edge-cuts of size less than $k$. Note that cycles, trees and subdivisions of $\Theta$ or of $K_4$ are cyclically $k$-edge-connected for every $k$.
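As a sanity check of these definitions, one can test mechanically whether a given edge set is a cyclic edge-cut. The sketch below is illustrative (function names are not from the paper); it uses the fact that a connected graph contains a cycle if and only if it has at least as many edges as vertices:

```python
from collections import defaultdict

def _components(vertices, edges):
    # depth-first search; returns (vertex set, edge count) per connected component
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, comps = set(), []
    for s in vertices:
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            u = stack.pop()
            if u not in comp:
                comp.add(u)
                stack.extend(adj[u] - comp)
        seen |= comp
        comps.append((comp, sum(1 for u, v in edges if u in comp)))
    return comps

def is_cyclic_edge_cut(vertices, edges, cut):
    cutset = {frozenset(c) for c in cut}
    rest = [e for e in edges if frozenset(e) not in cutset]
    comps = _components(vertices, rest)
    # cyclic: exactly two components, each containing a cycle (#edges >= #vertices)
    return len(comps) == 2 and all(m >= len(vs) for vs, m in comps)

# two triangles joined by two edges: the joining pair is a cyclic 2-edge-cut
V = list("abcdef")
E = [("a","b"),("b","c"),("c","a"),("d","e"),("e","f"),("f","d"),("a","d"),("b","e")]
print(is_cyclic_edge_cut(V, E, [("a","d"),("b","e")]))  # True
```

Removing the two joining edges leaves two triangles, each of which contains a cycle, so the cut is cyclic; removing the two edges at a single triangle vertex instead isolates that vertex, which is acyclic.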
In this paper, the end of a proof is marked by $\Box$, and the end of the proof of a claim (within a more complicated proof) is marked by $\triangle$.
The following lemma is a useful sufficient condition for a $2$-edge-cut to be cyclic: \begin{lemma}\label{l:cyclic}
Let $C$ be a $2$-edge-cut in a $2$-edge-connected graph
$G\in\subc$. If no component of $G-C$ is a path, then $C$ is a
cyclic edge-cut. \end{lemma} \begin{proof}
Let $K$ be a component of $G-C$ and let $u$ and $v$ be the
endvertices of the edges of $C$ in $K$. Note that since $G$ is
subcubic and $2$-edge-connected, $C$ is a matching and thus $u\neq v$. Suppose
that $K$ is acyclic. Since it is not a path, it is a tree with at
least $3$ leaves, one of which is different from $u,v$ and so its
degree in $G$ is $1$. Since $G\in\subc$, this is
impossible. Consequently, each component of $G-C$ contains a cycle
and $C$ is cyclic. \end{proof}
\begin{lemma}\label{l:bridge-parallel}
Every cyclically $3$-edge-connected graph $G \in \subc$ is
bridgeless. Furthermore, $G$ contains no pair of parallel edges
unless $G$ is a $2$-cycle or a subdivision of $\Theta$. \end{lemma} \begin{proof}
Suppose that $e$ is a bridge in $G$ and $K$ is a component of
$G-e$. Since $G \in \subc$, $K$ has at least two vertices. If $K$
contains no cycle, then $K$ is a tree and it has a leaf not incident
with $e$. This contradicts the assumption that $G\in\subc$. Thus,
$\Setx e$ is a cyclic edge-cut, a contradiction.
Suppose that $x,y$ are two vertices in $G$ joined by a pair of parallel
edges and that $G$ is neither a $2$-cycle nor a subdivision of
$\Theta$. Since $G$ is bridgeless, both $x$ and $y$ are of degree
$3$. Let $C$ consist of the two edges incident with just one of
$x,y$. If the component of $G-C$ not containing $x$ were acyclic, it
would be a tree with exactly two leaves, i.e., a path or a single vertex, and $G$ would
be a subdivision of $\Theta$. Hence, $C$ is a cyclic $2$-edge-cut of $G$
contradicting the assumption that $G$ is cyclically $3$-edge-connected.
\end{proof}
\begin{observation}\label{obs:sd}
Every cyclically $3$-edge-connected graph in $\subc$ is a cycle or a
subdivision of a $3$-edge-connected cubic graph. \end{observation} \begin{proof}
Suppose that $G\in\subc$ is cyclically $3$-edge-connected and
different from a cycle. Let the cubic graph $G'$ be obtained by
suppressing each vertex of degree two. (Since $G$ is bridgeless by
Lemma~\ref{l:bridge-parallel}, this does not involve contracting a
parallel edge.) If $C$ is a $2$-edge-cut in $G'$, then each
component of $G'-C$ contains a $3$-vertex or is a $2$-cycle.
Lemma~\ref{l:cyclic} implies that $C$
corresponds to a cyclic $2$-edge-cut in $G$ which is a contradiction.
\end{proof}
\begin{lemma}\label{l:face-two}
Let $G\in\subc$. If each face of $G$ is incident with a $2$-vertex,
then $G\in\sep$. Moreover, if $G$ is cyclically $3$-edge-connected,
then $G\in\sep$ if and only if each face of $G$ is incident with a
$2$-vertex. \end{lemma} \begin{proof}
In a graph in $\sep$, any cycle that is not a facial cycle is
separating. Thus, if $G\in\subc$ and each face is incident with a
$2$-vertex, then $G\in\sep$. The second assertion is trivially true
if $G$ is a cycle. Suppose thus, using Observation~\ref{obs:sd},
that $G$ is a subdivision of a $3$-edge-connected cubic graph. It is
well known that in a $3$-edge-connected plane graph, facial cycles are
exactly the non-separating cycles. Thus, if $G\in\sep$, then every
face is incident with a $2$-vertex. \end{proof}
Graphs $G\in\sep$ with cyclic $2$-edge-cuts may have faces which are not incident with $2$-vertices. We will use in the next section the following subset of $\sep$.
\begin{definition} Let $\sepf$ be the class of all connected plane graphs $G \in \sep$ such that each face of $G$ is incident with a $2$-vertex. \end{definition}
The next lemma will be used in the proof of Theorem~\ref{t:induction}.
\begin{lemma}\label{l:nbr}
Let $G \in \subc$ be cyclically $3$-edge-connected and
let $u$ be a $2$-vertex of the outer face, with distinct neighbours
$x$ and $y$ of degree $3$ (see Figure~\ref{fig:verts}). Let the other neighbours of
$y$ be denoted by $a,b$ and the other neighbours of $x$ by $c,d$, such that
the clockwise order of the neighbours of $y$ ($x$) is $uba$ ($udc$,
respectively). Then all of the following conditions hold, unless $G$
is a subdivision of $\Theta$ or of $K_4$:
\begin{enumerate}
\item[{\upshape (1)}] $\Setx{a,b,c,d} \cap \Setx{x,y} = \emptyset$,
\item[{\upshape (2)}] $\Setx{a,d} \cap \Setx{b,c} = \emptyset$, and
\item[{\upshape (3)}] $b\neq c$ or $a\neq d$.
\end{enumerate} \end{lemma} \begin{proof}
We prove (1). Consider the vertex $x$ and suppose that $x=a$.
Then $c$ or $d$ is $y$ otherwise $x$ would have degree $4$.
Therefore $y=d$ since $y=c$ would imply that $xd$ is a bridge, contradicting
Lemma~\ref{l:bridge-parallel}. Then the set of edges $C=\Setx{xc,yb}$ is
a $2$-edge-cut. Lemma~\ref{l:cyclic} implies that the component of $G-C$ not
containing $x$ is a path. Hence, $G$ is a subdivision of $\Theta$ which is a
contradiction. Thus, $x\neq a$. Essentially the same argument shows that $x\neq
b$. Trivially, $c\neq x\neq d$, so $x \notin\Setx{a,b,c,d}$. By symmetry, we conclude that (1)
holds.
To prove (2), note that $a\neq b$ by
Lemma~\ref{l:bridge-parallel}. If $a=c$, then $yb$ or $xd$ would be
a bridge by a planarity argument, contradicting
Lemma~\ref{l:bridge-parallel}. Thus, $a\notin\Setx{b,c}$, and by
symmetry, $d\notin\Setx{b,c}$.
Finally, we prove (3). Suppose that $b=c$ and $a=d$. If both $a$ and
$b$ are $2$-vertices, then $G$ is a subdivision of
$\Theta$. Otherwise, they must both be $3$-vertices as $G$ would
otherwise contain a bridge. If they are adjacent, then $G$ is a
subdivision of $K_4$ contrary to the assumption.
Thus, we may assume that there
is a $2$-edge-cut $C$ such that one edge in $C$ is incident with $a$
and the other one with $b$, and none of these edges is incident with
$x$ nor $y$. Since $G$ is cyclically $3$-edge-connected, the
component of $G-C$ not containing $a$ is a path, so $G$ is a
subdivision of $K_4$, which is a contradiction. \end{proof}
\begin{figure}
\caption{The situation in Lemma~\ref{l:nbr}. The dotted line
indicates part of the boundary of the outer face. A priori, some
of the vertices $a$, $b$, $c$, $d$ may coincide and $b$, $c$ may
be incident with the outer face.}
\label{fig:verts}
\end{figure}
\section{Decomposition into a forest and a matching with prescribed edges}
To find a decomposition of a connected graph into a spanning tree and a matching, it is clearly sufficient to decompose it into a forest and a matching. Thus, we define a \emph{$2$-decomposition} of a graph $G$ as a decomposition $E(G) = E(F)\cup E(M)$ such that $F$ is a forest and $M$ is a matching (called the \emph{forest part} and the \emph{matching
part} of the decomposition, respectively). If $B$ is a set of edges of $G$, then a \emph{$B$-$2$-decomposition} (abbreviated \emph{$B$-2D}) of $G$ is a $2$-decomposition whose forest part contains $B$. Obviously, if $B$ contains all edges of a cycle, then $G$ cannot have a $B$-2D. Note also that there are graphs in $\sep$ without a $B$-2D where $B$ consists only of a few $(2,3)$-edges; for an example see Figure~\ref{fig:counter}. Let us define $B(2,3)$ as the set of $(2,3)$-edges of $B$ and call a vertex \emph{sensitive} if it is a $2$-vertex incident with an edge in $B(2,3)$.
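The notion of a $B$-$2$-decomposition is easy to verify mechanically. The following sketch (helper names are illustrative; acyclicity is checked with union-find) tests the forest, matching, and containment conditions:

```python
def is_forest(vertices, edges):
    # union-find: the edges form a forest iff no edge joins two vertices
    # that are already in the same component
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False   # this edge would close a cycle
        parent[ru] = rv
    return True

def is_matching(edges):
    ends = [x for e in edges for x in e]
    return len(ends) == len(set(ends))  # no vertex covered twice

def is_B_2D(vertices, F, M, B):
    # (F, M) is a B-2-decomposition iff F is a forest containing B and M is a matching
    return is_forest(vertices, F) and is_matching(M) and set(B) <= set(F)

# 4-cycle a-b-c-d: three edges form the forest part, the fourth the matching part
V = list("abcd")
F = [("a","b"), ("b","c"), ("c","d")]
print(is_B_2D(V, F, [("d","a")], B=[("a","b")]))  # True
```

As the theorem's preamble notes, no such decomposition exists when $B$ contains all edges of a cycle: adding the fourth edge of the $4$-cycle to $F$ makes `is_forest` fail.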
The following theorem is the main statement needed to prove Theorem~\ref{main}. Examples in Figure~\ref{fig:counter} show some limitations to relaxing the conditions in Theorem~\ref{t:induction}.
\begin{figure}
\caption{Two graphs $G\in\sepf$ and edge sets $B$ (bold) such that
$G$ admits no $B$-2D. Left: example showing that condition (a) in
Theorem~\ref{t:induction} cannot be relaxed to allow
$\size{B(2,3)} > 1$. Right: example showing that
condition~\ref{i:another} cannot be dropped.}
\label{fig:counter}
\end{figure}
\begin{thm}
\label{t:induction}
Let $G \in \sepf$ be $2$-edge-connected and not a cycle. Let $F_0$ be
the outer face of $G$, and let $B$ be a set of edges contained in
the boundary of $F_0$. Suppose that either
\begin{enumerate}[label=(\alph*)]
\item\label{i:3conn} $G$ is cyclically $3$-edge-connected and
$\size{B(2,3)} \leq 1$, or
\item\label{i:n3conn} $G$ contains a cyclic $2$-edge-cut
and there are distinct vertices $v,w$ incident with $F_0$ such
that $v$ is a $2$-vertex and all of the following hold:
\begin{enumerate}[label=(b\arabic*)]
\item\label{i:sep} $v,w$ are separated by every cyclic
$2$-edge-cut of $G$,
\item\label{i:path} all edges in $B$ are contained in a
$vw$-subpath of the boundary of $F_0$,
\item\label{i:another} if $v$ is a sensitive vertex, then the
inner face of $G$ incident with $v$ is incident with another
$2$-vertex, and
\item\label{i:two}
every sensitive vertex which is not $v$ is either $w$ or adjacent to $w$.
\end{enumerate}
\end{enumerate}
Then $G$ admits a $B$-2D. \end{thm}
Note that if $G$ in Theorem \ref{t:induction} has a cyclic $2$-edge-cut, then conditions (b2) and (b4) imply that $|B(2,3)| \leq 2$. Before we start with the proof, we explain how we use contraction in this section. Suppose we contract an edge $e=vw$ in a graph $H$ into the vertex $v$; then $w \not\in V(H/e)$, $v \in V(H/e)$, and each vertex of $H/e-v$ has the same vertex-label as the corresponding vertex in $H-v-w$. For the proof it will be essential that every edge of $H/e$ corresponds to an edge of $H-e$ and vice versa. We will use this edge-correspondence between the graphs $H/e$ and $H$ for edges other than $e$ and for edge-sets that do not contain $e$, without referring to it. To avoid later confusion, note that an edge $vx \in E(H/e)$ can correspond to an edge in $H$ with other endvertices than in $H/e$, namely $wx$. \\
\begin{proof}
Suppose by contradiction that $G$ is a counterexample
with $\size{V(G)}$ minimum. Moreover, let $B$ be a set of edges satisfying the
assumptions of the theorem, such that $G$ has no $B$-2D and $\size B$ is maximum.
We begin with a technical claim:
\begin{claim}\label{cl:contract}
Let $rs$ be an edge of a graph $H\in\subc$ where $\degree H r = 2$
and both neighbours of $r$ are distinct. Let $H'$ be obtained from $H$ by contracting $rs$ into $r$ and
let $B' \subseteq E(H')$. If $H'$ has a $B'$-2D, then $H$ admits a $(B'+rs)$-2D.
\end{claim}
\begin{claimproof}
Let $(F',M')$ be a $B'$-2D of $H'$. Let $F = F'+rs$ and let $z$
denote the neighbour of $r$ in $H$ distinct from $s$.
Whether or not $rz\in F'$, $F$ is a forest of $H$; in fact, $F$ is the forest part of a $(B'+rs)$-2D of $H$, whose matching part is $E(H)-E(F)$.
\end{claimproof}
We distinguish two main cases.
\begin{xcase}{I}{$G$ satisfies condition (a) in the theorem}
We start with the following claim:
\begin{claim}\label{cl:no22}
$G$ contains no $(2,2)$-edge.
\end{claim}
\begin{claimproof}
For contradiction, suppose that $f$ is such an edge; contracting
$f$, we obtain a $2$-edge-connected graph in $\sepf$ satisfying
condition (a) of the theorem. By the minimality of $G$, the
resulting graph admits a
$(B-f)$-2D. Then Claim~\ref{cl:contract} implies a $B$-2D of $G$, a contradiction.
\end{claimproof}
Using Claim \ref{cl:no22} it is straightforward to verify that when $G$ is
a subdivision of $\Theta$ or of $K_4$, then $G$ has a $B$-2D. Thus, we may assume that $G$ is not a subdivision of either of these graphs.
Note that we often refer to edges of $G$ only by their endvertices
(for example, $xc$). This is sufficient, since by
Lemma~\ref{l:bridge-parallel}, $G$ contains no parallel edges.
Since $G \in \sepf$, the outer face is incident with a $2$-vertex,
which is by Claim~\ref{cl:no22} incident with a $(2,3)$-edge.
If $B(2,3)= \emptyset$, then we can add such a $(2,3)$-edge to $B$, preserving condition (a) in Theorem \ref{t:induction}.
Then by the maximality of $B$, we obtain a $B$-2D, a contradiction.
Therefore, we may assume that $\size{B(2,3)}=1$.
Let $e=ux$ denote the unique edge in $B(2,3)$,
with $u\in V_2(G)$, and let the neighbour of $u$ other than $x$ be
denoted by $y$, see Figure~\ref{fig:verts}. Note that $x,y\in V_3(G)$.
Label the neighbours of $x,y$ distinct from $u$ by $a,b,c,d$ as in
Lemma~\ref{l:nbr}. Since $G$ is neither a subdivision of $\Theta$ nor
of $K_4$, we may assume by Lemma~\ref{l:nbr} that the vertices
$a,b,c,d,x,y,u$ are all distinct, except that possibly $a=d$ or
$b=c$ (but not both).
Let $G'$ be the graph obtained from $G$ by removing $u$ and
contracting the edge $yb$ into $y$.
\begin{claim}\label{cl:not-3-conn}
$G'$ is not cyclically $3$-edge-connected.
\end{claim}
\begin{claimproof}
For the sake of a contradiction, suppose that $G'$ is cyclically
$3$-edge-connected. Let $B' \subseteq E(G')$ with $B'=B-ux+ya$. Assume first that
$\size{B'(2,3)} \leq 1$.\\
Using the fact that $G\in\sepf$ and since $x$ is a
$2$-vertex of the outer face of $G'$, it is
not difficult to verify that $G'\in\sepf$. It follows that $(G',B')$
satisfies the conditions of the theorem, so $G'$ admits a
$B'$-2D by the minimality of $G$. Adding the edges $yb$ and $ux$
to its forest part and the edge $uy$ to its matching part, we obtain a $B$-2D of $G$, a contradiction.
Thus, $\size{B'(2,3)} \geq 2$. Since $B(2,3)=\Setx{ux}$, $B'(2,3) = \Setx{ya,xd}$ implying $xd \in B$, and
since $|B(2,3)|=1$,
we have $\degree{G}d = \degree{G'}d = 3$. Furthermore, since $ya \in B'(2,3)$ either $\degree{G}a = 2$ and $\degree{G}b = 3$
or vice versa.
We distinguish two cases according to $\degree{G}c$. If $\degree{G} c = 3$, we let $G''$ be the graph obtained from $G'$
by contracting $xc$ into $x$, and let $B'' = B'$. Then
$\size{B''(2,3)} = 1$, $G''\in\sepf$ and $G''$ is cyclically $3$-edge-connected.
By the minimality of $G$, $G''$ admits a $B''$-2D. To obtain a $B$-2D of $G$,
it suffices to add $ux$, $xc$ and $yb$ to the forest part,
and $uy$ to the matching part, respectively, of the
$B''$-2D. This contradicts the choice of $G$.
It remains to discuss the case $\degree G c = 2$. In this
case, we let $G'' = G'$ and $B'' = B'-xd$ implying
$\size{B''(2,3)} = 1$. By the minimality of $G$, there is a
$B''$-2D of $G''$, say $(F'',M'')$, where $F''$ is a forest and
$M''$ is a matching. Consider the 2-decomposition $(F,M)$ of
$G$, where $F=F''+yb+ux$ and $M=M''+uy$. We must have
$xd\notin F$, for otherwise this would be a $B$-2D. In fact, $F+xd$ must
contain a cycle $Z$.
Since $uy\notin F$ and since $\degree G c = 2$, $Z$ contains both edges incident
with $c$. It follows that $F+xd-xc$ is acyclic and that
$(F+xd-xc,M+xc-xd)$ is a $B$-2D of $G$, a contradiction.
\end{claimproof}
Let $B' \subseteq E(G')$ and let $B'=B-ux+ya$. We will show that $(G',B')$ satisfies condition (b).
Then, by the minimality of $G$, $G'$ will have a $B'$-2D implying a $B$-2D of $G$,
which will finish Case I. Firstly, $G'$ contains a cyclic 2-edge-cut by
Claim~\ref{cl:not-3-conn}. Comparing faces of $G'$ to those of
$G$, we conclude that every face of $G'$ is incident with a
$2$-vertex. Thus, $G'\in\sepf$. Let $v=x$ and $w=y$. We check conditions (b1)--(b4), starting with
(b1). Any cyclic $2$-edge-cut of $G'$ not separating $x$ from $y$
would be a cyclic $2$-edge-cut in $G$, contrary to the assumption
that $G$ is cyclically $3$-edge-connected. Condition (b2) follows
from the fact that all edges of $B$ are edges of the boundary of the
outer face of $G$, and all of this boundary (except for the edges
$ux$ and $uy$) is covered by an $xy$-path in the boundary of the
outer face of $G'$. As for condition (b3), $x$ is indeed a
$2$-vertex of $G'$, and since $\degree G x=3$ and $G\in\sepf$,
the inner face of $G'$ incident with $x$
is also incident with some other $2$-vertex.
Finally, we consider condition (b4). Since for every vertex
$z$ of $G'$, $\degree{G'}z = \degree G z$ except if
$z\in\Setx{x,y}$, and since $B(2,3) = \Setx{ux}$ and all edges in
$B$ are contained in the boundary of the outer face of $G$, we
have $B'(2,3)\subseteq\Setx{xd,ya}$. Then condition (b4) follows.
Hence, $G'$ satisfies condition (b) and thus admits a
$B'$-2D, say $(F',M')$. Then $(F'+ux+yb,M'+uy)$ is a $B$-2D of
$G$, a contradiction to the choice of $G$ which finishes the
discussion of Case I.
\end{xcase}
\begin{xcase}{II}{$G$ satisfies condition (b) in the theorem}
Let $C=\Setx{e_1,e_2}$ be a cyclic $2$-edge-cut of $G$ such that
the component $K_1$ of $G-C$ containing $v$ is inclusionwise
minimal, i.e., there is no other cyclic $2$-edge-cut $C'$ such
that the component of $G-C'$ containing $v$ is contained in
$K_1$. We refer to this property of $C$ as the \emph{minimality}.
Let $K_2$ be the other component of $G-C$; note that
$w\in V(K_2)$. For $i=1,2$, let $G_i$ denote the graph obtained
from $G$ by contracting all edges of $K_{3-i}$. The vertex of $G_i$
incident with $e_1$ and $e_2$ is denoted by $u_i$. Thus, $G_1$
contains $v$ and $u_1$, while $G_2$ contains $w$ and $u_2$.
By property~\ref{i:path}, $B$ is contained in a $vw$-path in the
boundary of the outer face of $G$; since $C$ separates $v$ from
$w$, we may henceforth assume that $e_1\notin B$.
For $i=1,2$, let $B_i = B \cap E(G_i)$.
Let $G^*_1$ be the graph obtained from $G_1$ by contracting $e_1$.
The following claim will sometimes be used without explicit
reference:
\begin{claim}\label{cl:conn}
The following hold:
\begin{enumerate}[label=(\roman*)]
\item the graphs $G_1$, $G_2$ and $G^*_1$ are
$2$-edge-connected,
\item the endvertices of $e_1$ and $e_2$ in $G_1$ other than
$u_1$ have degree $3$, and
\item the graphs $G_1$, $G^*_1$ are cyclically
$3$-edge-connected and $G_1 \in \sepf$.
\end{enumerate}
\end{claim}
\begin{claimproof}
Part (i) follows from the fact that edge contraction preserves
the property of being $2$-edge-connected. Part (ii) is a
consequence of the minimality of $C$.
Part (iii): suppose
by contradiction that $G_1$ has a cyclic $2$-edge-cut $C_1$. Then $C_1$
does not separate $v$ from $u_1$ by the minimality of $C$.
Hence one component of $G_1-C_1$ contains $v$ and $u_1$. Thus, $C_1$ in $G$ does
not separate $v$ from $w$, which contradicts (b1). Finally, $G_1 \in \sepf$ follows from
the fact that $G \in \sepf$.
\end{claimproof}
If the inner facial cycle of $G_1$ containing $u_1$ has no other $2$-vertex, then $G^*_1 \not\in \sepf$, and by Lemma~\ref{l:face-two}, even $G^*_1 \not\in \sep$ holds.
\begin{claim}\label{cl:right}
The following hold:
\begin{enumerate}[label=(\roman*)]
\item The graph $G_1$ admits a $(B_1+e_2)$-2D.
\item If $G^*_1\in\sep$, then $G_1$ admits a $(B_1+e_1+e_2)$-2D.
\end{enumerate}
\end{claim}
\begin{claimproof}
(i) If $v$ is not sensitive, then the desired decomposition is
easy to obtain by noting that the pair $(G_1,B_1+e_2)$ satisfies
condition~\ref{i:3conn} in the theorem. Suppose thus that $v$ is
sensitive, and let $v'$ be the unique neighbour of $v$ in $G_1$
such that $vv'\in B_1(2,3)$. Let $G'_1$ be obtained from $G_1$
by contracting $vv'$ into $v$. Then $G'_1\in\sep$ thanks to
property~\ref{i:another} of $G$, $G'_1$ is cyclically
3-edge-connected and $B_1+e_2$ contains at most one
$(2,3)$-edge, so condition~\ref{i:3conn} is satisfied for
$(G'_1,B_1+e_2 - vv')$. Consequently, there is a
$(B_1+e_2-vv')$-2D of
$G'_1$. By Claim~\ref{cl:contract}, $G_1$ admits a
$(B_1+e_2)$-2D.
(ii) Suppose that $G^*_1\in\sep$ and consider the set of edges
$B^*_1 = B_1 + e_2$ in $G^*_1$. (Note that $e_2$ is an edge of
$G^*_1$ while $e_1$ has been contracted in its construction.)
By Claim~\ref{cl:conn}(iii) and Lemma~\ref{l:face-two},
$G^*_1\in\sepf$. By property~\ref{i:two} and the fact that $e_2$
is a $(3,3)$-edge in $G^*_1$, any \ttedge in $B^*_1$ is incident
with $v$. By property~\ref{i:path}, there is at most one such
edge. Thus, the pair $(G^*_1,B^*_1)$ satisfies condition (a),
and consequently $G^*_1$ admits a $B^*_1$-2D by the minimality
of $G$. By Claim~\ref{cl:contract}, $G_1$ admits a
$(B_1+e_1+e_2)$-2D.
\end{claimproof}
\begin{claim}\label{cl:left}
The following hold:
\begin{enumerate}[label=(\roman*)]
\item The graph $G_2$ admits a $(B_2-e_2)$-2D.
\item If $G^*_1\notin\sep$, then $G_2$ admits a $(B_2+e_2)$-2D.
\end{enumerate}
\end{claim}
\begin{claimproof}
(i) Suppose first that $G_2$ contains at least one cyclic
$2$-edge-cut. Since $G_2$ arises by contracting all edges of
$K_1$ `into' the vertex $u_2$, it is straightforward to check
that the pair $(G_2,B_2-e_2)$ satisfies condition~\ref{i:n3conn}
in the theorem with $u_2$ playing the role of $v$. (In relation
to property~\ref{i:another}, note that $u_2$ is not sensitive
with respect to $B_2-e_2$.) Thus, a $(B_2-e_2)$-2D of $G_2$
exists by the minimality of $G$.
If $G_2$ is cyclically $3$-edge-connected, then by
properties~\ref{i:path} and \ref{i:two} of $(G,B)$, $B_2-e_2$
contains at most one \ttedge (incident with $w$ if such an edge
exists). Therefore, $(G_2,B_2-e_2)$ satisfies condition~\ref{i:3conn} in
the theorem. The minimality of $G$ implies that $G_2$ has a $(B_2-e_2)$-2D.
(ii) Let us consider possible reasons why
$G^*_1\notin\sep$. Since $\sepf\subseteq\sep$, there is a face
of $G^*_1$ not incident with a 2-vertex.
Since $G_1 \in \sepf$ (Claim \ref{cl:conn} (iii)) and since
the 2-vertex $v$ is contained in the outer face of $G^*_1$,
there is only one such face, namely
the inner face whose boundary contains $e_2$. Let $Q$ be the
inner face of $G$ whose boundary contains the edge-cut
$C$. Since $G\in\sepf$, $Q$ is incident with a 2-vertex $z$.
Since $G^*_1 \not\in \sepf$,
$z$ and $u_2$ are both incident with the same inner face in $G_2$.
Suppose first that $G_2$ contains a cyclic $2$-edge-cut. The
existence of the vertex $z$ proves property~\ref{i:another} for
the pair $(G_2,B_2+e_2)$ with $u_2$ playing the role of $v$ (note that $u_2$ is sensitive). The
other parts of condition~\ref{i:n3conn} are straightforward to
check. By the minimality of $G$, the desired $(B_2+e_2)$-2D of
$G_2$ exists.
It remains to consider that $G_2$ is cyclically
$3$-edge-connected.
If $e_2$ is the unique $(2,3)$-edge in $B_2 + e_2$, then $(G_2, B_2 + e_2)$ satisfies condition (a) in the theorem,
and hence the minimality of $G$ implies that $G_2$ admits a $(B_2+e_2)$-2D.
Therefore, we may assume that there is another $(2,3)$-edge in $B_2 + e_2$, and in particular, there is a
sensitive vertex $z'$ incident with the outer face of $G_2$ with $z' \not= u_2$.
By condition (b4), $z'$ has to be either $w$ or a vertex adjacent to $w$.
Let $G^*_2$ be obtained from $G_2$ by
contracting $e_2$ into $u_2$. Since $G \in \sepf$ and since
$z$ and $z'$ are $2$-vertices, $G^*_2\in \sepf$. Hence the pair $(G^*_2,B_2-e_2)$
satisfies condition~\ref{i:3conn} of the theorem. By the
minimality of $G$, there is a $(B_2-e_2)$-2D of
$G^*_2$. Claim~\ref{cl:contract} implies a
$(B_2+e_2)$-2D of $G_2$.
\end{claimproof}
By the above claims we obtain the sought
contradiction. Suppose first that $G^*_1\in\sep$. By
Claims~\ref{cl:right}(ii) and \ref{cl:left}(i), there is a
$(B_1+e_1+e_2)$-2D $(F_1,M_1)$ of $G_1$ and a $(B_2-e_2)$-2D
$(F_2,M_2)$ of $G_2$. Since $e_1,e_2\in E(F_1)$, $F_1\cup F_2$ is
acyclic, regardless of whether $e_1,e_2\in E(F_2)$. Clearly, $M_1\cup (M_2 - e_1 - e_2)$ is a matching in
$G$, so we obtain a $B$-2D of $G$, contradicting the choice of
$G$.
Thus, $G^*_1\notin\sep$. By Claims~\ref{cl:right}(i) and
\ref{cl:left}(ii), there exists a $(B_1+e_2)$-2D $(F'_1,M'_1)$ of
$G_1$ and a $(B_2+e_2)$-2D $(F'_2,M'_2)$ of $G_2$. Since $e_2$ is
contained in both $F'_1$ and $F'_2$, the $2$-decompositions
combined produce a $B$-2D $(F'_1\cup F'_2, M'_1\cup M'_2)$ if
$e_1 \not\in E(F'_1 \cup F'_2)$, or
$(F'_1\cup F'_2, M'_1\cup M'_2 - e_1)$ if
$e_1 \in E(F'_1 \cup F'_2)$, a contradiction.
\end{xcase} \end{proof}
\begin{cor}\label{sep}
If $G \in \sep$ is $2$-edge-connected and $e \in E(G)$ is a
$(2,3)$-edge, then $G$ admits an $\Setx{e}$-2D. \end{cor}
\begin{proof}
We proceed by induction on the order of $G$. By choosing a suitable
embedding of $G$, we may assume that $e$ is contained in the
boundary of the outer face. If $G$ is cyclically $3$-edge-connected,
then $G\in\sepf$ by Lemma~\ref{l:face-two}, and the existence of a
2-decomposition follows from Theorem~\ref{t:induction} (with
$B = \Setx{e}$). Hence, we assume that $G$ contains a cyclic
$2$-edge-cut $C = \Setx{e_1,e_2}$. Let $K_1$ and $K_2$ be the
components of $G - C$. Just as in Case II of the proof of
Theorem~\ref{t:induction}, we contract all edges in $K_1$ or $K_2$
to obtain the smaller graphs $G_1$ and $G_2$ with new vertices $u_1$
and $u_2$, respectively. Note that $G_i \in \sep$, $i=1,2$. We may assume that
$e$ is contained in $G_1$.
By induction, there is an $\Setx{e}$-2D $(F_1, M_1)$ of $G_1$.
First, suppose that $e \not\in \Setx{e_1,e_2}$.
Since $M_1$ is a matching, we may assume that $e_1 \in
E(F_1)$. Again by induction, there is an $\Setx{e_1}$-2D
$(F_2, M_2)$ of $G_2$. Since each of $F_1$ and $F_2$ contains
$e_1$, $G$ has an $\Setx{e}$-2D $(F_1 \cup F_2, M_1 \cup M_2)$ if
$e_2 \not\in E(F_1 \cup F_2)$ and an $\Setx{e}$-2D
$(F_1 \cup F_2, M_1 \cup M_2 - e_2)$ if $e_2 \in E(F_1 \cup F_2)$.
In the remaining case that $e \in \Setx{e_1,e_2}$, we assume without
loss of generality that $e=e_1$ and proceed as above.
\end{proof}
Recall that a 2-decomposition of a connected graph implies a decomposition into a spanning tree and a matching.
Theorem~\ref{main} now follows by induction: since Theorem~\ref{main} holds for cycles and Corollary~\ref{sep} implies the $2$-edge-connected case, it remains to show that every graph $G$ satisfying the conditions of the theorem, with a bridge $e$, has a 2-decomposition. By combining 2-decompositions of the components of $G-e$ (found by induction), we obtain an $\Setx{e}$-2D of $G$, which completes the proof of Theorem~\ref{main}.
\begin{cor}\label{imply}
Every connected subcubic plane graph can be decomposed into a
spanning tree, a $2$-regular subgraph and a matching. \end{cor}
\begin{proof}
Let $G$ be a connected subcubic plane graph and let
$\Setx{C_1,\dots,C_k}$ be a maximal collection of disjoint cycles
such that $G':=G-\bigcup_{i=1}^k E(C_i)$ is connected. Thus, $G'$ is
a connected subcubic plane graph in which every cycle is separating,
so $G'$ is decomposed into a spanning tree and a matching by
Theorem~\ref{main}. Adding the union of $C_1,\dots,C_k$, we obtain
the desired decomposition of $G$. \end{proof}
Finally, for the sake of completeness, we prove the following statement.
\begin{prop}\label{p:2D3D} The 3DC and the 2DC are equivalent conjectures. \end{prop}
\begin{proof}
The proof of Corollary~\ref{imply}, which applies for an arbitrary
(not necessarily plane) connected subcubic graph, effectively shows that the 2DC
implies the 3DC. Therefore, it suffices to prove the converse direction.
Let $H$ be a connected graph such that every cycle of $H$ is
separating and each vertex of $H$ has degree $2$ or $3$. Let $X$
denote the graph resulting from the graph $\Theta$ by subdividing
one edge of $\Theta$ precisely once, i.e. $|V_2(X)|=1$. We construct
from $H$ a cubic graph $G$ by adding $|V_2(H)|$ many copies of $X$
to $H$ and by connecting each $2$-vertex of $H$ by an edge with a
$2$-vertex of a copy of $X$. By the 3DC, there is a
$3$-decomposition of $G$. The edges connecting $H$ to copies of $X$
are obviously bridges of $G$ and are thus contained in the tree
part, say $T$, of the $3$-decomposition. Since every cycle of $H$ is
separating, every cycle of $G$ which is not separating is contained
in some copy of $X$. Hence, we obtain a $2$-decomposition of $H$ in
which $T \cap H$ is the tree part and the matching part consists of
the remaining edges of $H$. \end{proof}
\section*{Acknowledgments} We thank Adam Kabela for interesting discussions of the 3-Decomposition Conjecture. Part of the work on this paper was done during the ``8th Workshop on the Matthews-Sumner Conjecture and Related Problems'' in Pilsen. The first and the third author appreciate the hospitality of the organizers of the workshop.
\end{document} |
\begin{document}
\title{Stability and busy periods in a multiclass queue with
state-dependent arrival rates
}
\begin{abstract} We introduce a multiclass single-server queueing system in which the arrival rates depend on the current job in service. The system is characterized by a matrix of arrival rates in lieu of a vector of arrival rates. Our proposed model departs from existing state-dependent queueing models, in which the parameters depend primarily on the number of jobs in the system rather than on the job in service. We formulate the queueing model and its corresponding fluid model, and obtain necessary and sufficient conditions for stability via fluid-model arguments. Utilizing the natural connection with multitype Galton-Watson processes, we derive the Laplace-Stieltjes transform of busy periods in the system. We conclude with tail asymptotics for the busy period when the service time distributions are heavy-tailed, focusing on the regularly varying case. \end{abstract} Keywords: Busy periods; fluid models; multiclass queues; regular variation; stability; state-dependent arrival rates.
\section{Introduction} \setcounter{equation}{0} We introduce a multiclass single-server queueing system in which the arrival rates depend on the current job in service. The system is characterized by a matrix of arrival rates instead of a vector of arrival rates. The proposed model departs from existing state-dependent models in the literature in which the parameters depend primarily on the number of jobs in the system (see Bekker et al.~\cite{Bekker}, Cruz and Smith~\cite{Cruz}, Jain and Smith~\cite{Jain}, Perry et al.~\cite{Perry}, and Yuhaski and Smith~\cite{Yuh}, among other sources) rather than on the job in service. \\ \indent Our model is motivated by two practical queueing considerations. The first is a multiclass queueing system in which the arriving customer can observe only the class of the customer in service and no other characteristics of the queue. This information informs the customer's decision to join or leave the queue. The second concerns local area networks with a central server in which $K$ clients generate requests at individual Poisson rates $\mu_i$. Often, a client does not generate requests when a previous request is being handled by the server. Further, it is conceivable that groups of clients working together may influence each other's Poisson rate. To the best of our knowledge, this simple yet potentially very useful queueing model has never appeared in the literature. This serves as our primary motivation for the manuscript.\\ \indent The remainder of the work is structured as follows. We formulate the queueing model in Section \ref{sec2} and its corresponding fluid model in Section \ref{sec3}. In Section \ref{sec4}, we obtain the necessary and sufficient conditions for stability via fluid models. Through the natural connection with the multitype Galton-Watson processes, we characterize the Laplace-Stieltjes transform of busy periods in the system in Sections \ref{sec5} and \ref{sec6}. 
Section \ref{sec7} concerns tail asymptotics on the busy period in the case of heavy-tailed service time distributions. Section \ref{sec8} offers a brief conclusion and presents ideas for future work.
\section{The queueing model} \label{sec2}
Consider a multiclass single-server queue with $K$ classes of jobs, each arriving according to independent counting processes. We assume that only one job may be serviced at a time. Let the arrival rate depend \textit{on the class of the job in service}: if the server is serving a job of class $i$, the arrival rate of class $j$ jobs is $\lambda_{ij}$, $i,j=1,\dots,K$. The matrix of arrival rates is defined as $\mathbf{\Lambda} = (\lambda_{ij})$, $i,j = 1, \ldots, K$. If there is no job in service, then the arrival rate of class $j$ jobs is defined as $\lambda_{0j}$, $j=1,\dots,K$. The arrival mechanism is described more precisely with dynamical equations in Section \ref{sec3}.
We proceed to set notation. Let $\bar{\lambda}^i = \sum_{j=1}^K \lambda_{ij}$ for each $i=1,\dots,K$. Service times for class $i$ jobs are assumed to be i.i.d.\ with distribution function $F_i$, $i=1,\dots, K$. Let $S_i$ be a generic service time for class $i$ jobs, with $\mathbb{E}[S_i] = m_i =\mu_i^{-1}$, $i=1,\dots, K$ and $\mathbf{G}=\text{diag}(\mu_1,\mu_2,\ldots,\mu_K)$. We define the ``mean offspring matrix'' to be $\mathbf{M} = \mathbf{G}^{-1} \mathbf{\Lambda} $ (here, the $ij$th element $\lambda_{ij}m_i$ is the mean number of arriving class $j$ customers during service of a class $i$ customer). By definition, all the elements of $\mathbf{M}$ are non-negative, and this is enough to ensure that the dominant eigenvalue $\rho(\mathbf{M})$ is real and positive, cf.~\cite{Gant}. For some results, more restrictive conditions on $\mathbf{M}$ will be required. Further, let $\psi_i$ denote the Laplace-Stieltjes transform (LST) of $S_i$, $i=1,\dots, K$, respectively, that is, $\psi_i(s) = \mathbb{E}[{\mathrm{e}}^{-s S_i}] = \int_0^{\infty} {\mathrm{e}}^{-st} {\mathrm{d}} F_i(t)$ for $s>0$. We let $Q_i$ denote the steady-state number of class $i$ jobs in the system, $i=1,\dots, K$, and let $\mathbf{Q}= (Q_1,\dots, Q_K)$. Each state of the system takes nonnegative integer values, that is, $\mathbf{x} = (x_1,\dots,x_K) \in \mathbb{Z}^K_+$.
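As an illustrative numerical sketch (the rates $\lambda_{ij}$ and $\mu_i$ below are hypothetical, chosen only for this example), the mean offspring matrix $\mathbf{M}=\mathbf{G}^{-1}\mathbf{\Lambda}$ and its dominant eigenvalue $\rho(\mathbf{M})$ can be computed in a few lines; since $\mathbf{M}$ is non-negative, power iteration recovers $\rho(\mathbf{M})$:

```python
# Sketch: build the mean offspring matrix M = G^{-1} Lambda and estimate
# its dominant eigenvalue rho(M) by power iteration (valid since M is
# non-negative). All rates below are hypothetical.

def offspring_matrix(Lam, mu):
    """M[i][j] = lambda_ij / mu_i: mean number of class-j arrivals
    during one class-i service."""
    K = len(mu)
    return [[Lam[i][j] / mu[i] for j in range(K)] for i in range(K)]

def spectral_radius(M, iters=500):
    """Power iteration started from a positive vector."""
    K = len(M)
    x = [1.0] * K
    rho = 0.0
    for _ in range(iters):
        y = [sum(M[i][j] * x[j] for j in range(K)) for i in range(K)]
        rho = max(y)
        if rho == 0.0:
            return 0.0
        x = [v / rho for v in y]
    return rho

Lam = [[0.3, 0.2],   # arrival rates while a class-1 job is in service
       [0.1, 0.4]]   # arrival rates while a class-2 job is in service
mu = [1.0, 1.0]      # service rates mu_1, mu_2

M = offspring_matrix(Lam, mu)
rho = spectral_radius(M)
print(rho)  # 0.5
```

Here $\rho(\mathbf{M})=0.5<1$, anticipating the stability condition derived in Section \ref{sec4}.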
The service disciplines we consider are non-idling, i.e., jobs must be served using the full capacity of the server whenever there are jobs in the system.
Our results on stability and the busy period are independent of the particular (non-idling) scheduling policy employed in the system.
\section{Queueing and fluid dynamics} \label{sec3}\setcounter{equation}{0} \subsection{Queueing dynamical equations}
We now precisely define the arrival mechanism. For $i \in \{1, \ldots, K\}$ and $t \ge 0$, $Q_i(t)$ denotes the number of class $i$ jobs in the system at time $t$, whether in service or in the queue. Similarly, let $T_i(t)$ denote the amount of time that has been devoted to serving class $i$ jobs in $[0,t]$. Further, let $A_i(t)$ and $D_i(t)$ be, respectively, the total number of class $i$ jobs that have arrived and departed from the system in $[0,t]$. We then have the following input-output equation for each class $i$ job \begin{equation} Q_i(t) = Q_i(0) + A_i(t) - D_i(t). \end{equation}
For each class $i$, the counting process $\mathcal{E}^j_i(t)$ is the number of class $i$ jobs that arrive during the first $t$ time units devoted to processing class $j$, and $\mathcal{E}^0_i(t)$ counts the number of class $i$ arrivals during the first $t$ time units in which no job is being processed at the server. The total number of class $i$ arrivals in $[0,t]$ is then given by \begin{equation} A_i(t) = \mathcal{E}^0_i\bigl(T_0(t)\bigr) + \sum_{j=1}^K \mathcal{E}^j_i \bigl(T_j(t)\bigr), \end{equation} where $T_0(t)$ denotes the cumulative idle time in $[0,t]$ and the counting processes $\mathcal{E}^j_i$ for $i=1,\dots, K,$ and $j = 0, \dots, K$ are assumed to be mutually independent.
As for the service processes, for each $i$, $1 \le i \le K$, and positive integer $n$, we let $V_i(n)$ denote the total service requirement of the first $n$ class $i$ jobs. Assuming a head-of-the-line (HL) service discipline, we have that \begin{equation} V_i(D_i(t)) \le T_i(t) \le V_i(D_i(t)+1) \end{equation} for each $t \ge 0$ and $1 \le i \le K$.
We define the workload in the system at time $t$ to be \begin{equation} W(t) = \sum_{i=1}^K V_i(A_i(t) + Q_i(0)) - \sum_{i=1}^K T_i(t), \end{equation} and the cumulative idle time process to be \begin{equation} Y(t) = t - \sum_{i=1}^K T_i(t). \end{equation} It is important to note that $Y$ is a non-decreasing function. We assume that the queueing policy is non-idling, which specifically means that $Y$ can increase only when $W(t)=0$. More precisely, $$ \int_0^{\infty} W(t) \, dY(t) = 0.$$
\subsection{Fluid model} For purposes of determining the stability conditions of a more general version of our model, we formulate a fluid network version of the model. For references to important definitions and results in the fluid model literature, we refer the reader to Bramson~\cite{Bramson} and Gamarnik~\cite{Gamarnik}.
For $i \in \{1, \ldots, K\}$ and $t \ge 0$, let $Q_i(t)$ denote the amount of fluid of class $i$ in the system at time $t$. Similarly, let $T_i(t)$ denote the amount of time that has been devoted to serving class $i$ fluid in $[0,t]$. We also define $A_i(t)$ and $D_i(t)$ which are, respectively, the total amount of class $i$ fluid that has arrived and departed from the system in $[0,t]$. We then have the following standard equation: \begin{equation} Q_i(t) = Q_i(0) + A_i(t) - D_i(t), \end{equation} for each $i \in \{1, \ldots, K\}$ and $t \ge 0$. The departure processes in this system also obey the standard relation $D_i(t) = \mu_i T_i(t)$ for all $t \ge 0$.
The unusual feature of our model lies in the arrival process, which is dependent on the current class in service. In the queueing model, processor sharing is not allowed. Hence, there is (at most) one class in service at any given time and ``the customer in service'' is defined unambiguously. Here, we provide a more general formulation that reduces to the queueing model presented in earlier sections, under appropriate restrictions on the allowable queueing disciplines. First, we recall the usual condition \begin{equation} \sum_{i=1}^K \dot{T}_i(t) \le 1, \end{equation} which simply indicates that the server cannot devote more than 100\% of its time to serving fluids of all classes. Since we assume that the queueing discipline is non-idling, $\sum_{i=1}^K \dot{T}_i(t) = 1$ whenever there is a positive amount of fluid in the system. We also define the idle time in $[0,t]$ to be: $$ Y(t) = t - \sum_{i=1}^K T_i(t).$$
Note that the current arrival rate of class $j$ fluid is given by $\dot{A}_j(t)$. In the queueing model, if a job of class $i$ is in service then the arrival rate of class $j$ jobs is $\lambda_{ij}$. Let $\mathbf{\lambda}_j$ be the column vector $(\lambda_{1j}, \ldots, \lambda_{Kj})^\bot$ and let $\dot{\mathbf{T}}(t)$ be the column vector $\bigl(\dot{T}_1(t), \ldots, \dot{T}_K(t)\bigr)^\bot$, where ${}^\bot$ means transposition. We define the fluid arrival rate of class $j$ to be \begin{equation} \label{arrivalrep} \dot{A}_j(t) = \lambda_{0j} \dot{Y}(t) + \lambda^\bot_j \dot{\mathbf{T}}(t). \end{equation} In particular, when there is fluid in the system, the class $j$ arrival rate is a convex combination of the elements of $\mathbf{\lambda}_j$. If we restrict to policies in which only one class can be served at any time, then equation (\ref{arrivalrep}) assigns an arrival rate of $\lambda_{ij}$ to class $j$ fluid when class $i$ fluid is in service. Note that this concurs with the queueing model formulation. Combining the above, we have
\begin{eqnarray} \label{eq9} Q_j(t) & = & Q_j(0) + \int_0^t (\lambda_{0j} \dot{Y}(u) + \lambda^\bot_j \dot{\mathbf{T}}(u)) \; du - \mu_j T_j(t) \\ & = & Q_j(0) + \lambda_{0j} {Y}(t) + \lambda^\bot_j \mathbf{T}(t) - \mu_j T_j(t). \label{eq10} \end{eqnarray} Writing equations (\ref{eq9}) and (\ref{eq10}) in matrix form yields \begin{equation} \label{dynamics} \mathbf{Q}(t) = \mathbf{Q}(0)+ (\mathbf{M}^\bot-\mathbf{I})\mathbf{D}(t) +Y(t) \mathbf{\lambda_0}. \end{equation} We define the vector of fluid work in the system at time $t$ to be \begin{equation} \label{work} \mathbf{W}(t) = \mathbf{G}^{-1}\mathbf{Q}(t). \end{equation}
\subsubsection{Fluid Limits}
Thus far we have described a fluid model but it remains to show that the fluid limits of the queueing model satisfy the fluid model equations. In this subsection only, we use a bar to denote a fluid limit. As usual, we define the fluid limit of the queue-length processes to be $$ \bar{Q}_i(t) = \lim_{n \to \infty} \frac{Q_i(nt)}{n},$$ with other fluid limits defined in an analogous manner. We make the usual assumptions on the stochastic primitives and initial conditions, i.e., for all $i$ and $j$ \begin{eqnarray}
\lim_{n \to \infty} \frac{\mathcal{E}^j_i(nt)}{n} & = & \lambda_{ji}t \label{fslln1} \\
\lim_{n \to \infty} \frac{\mathcal{E}^0_i(nt)}{n} & = & \lambda_{0i}t \label{fslln2} \\
\lim_{n \to \infty} \frac{V_i(n)}{n} & = & m_i \\
\lim_{n \to \infty} \frac{Q_i(0)}{n} & = & \bar{Q}_i(0), \end{eqnarray} where the convergence holds almost surely, uniformly on compact sets. Under these assumptions, the fluid model equations can be derived in a straightforward way from the queueing dynamical equations, since all processes other than the arrival process are identical to those of the standard multiclass queueing network model. For the arrival process, we have \begin{eqnarray*} \bar{A}_i(t) & = & \lim_{n \to \infty} \frac{A_i(nt)}{n} \ = \ \lim_{n \to \infty} \frac{\mathcal{E}^0_i(T_0(nt))}{n} + \lim_{n \to \infty} \sum_{j=1}^K \frac{\mathcal{E}^j_i (T_j(nt))}{n} \\ & = & \lambda_{0i} \bar{Y}(t) + \lambda^\bot_i \bar{\mathbf{T}}(t). \end{eqnarray*}
The last equality follows from assumptions (\ref{fslln1}) and (\ref{fslln2}) and arguments similar to those of Proposition 4.12 in \cite{Bramson}. Finally, the connection between fluid stability and queueing network stability follows from a straightforward modification of existing stability results, under the usual assumptions that the interarrival times for all job classes are unbounded and spread out. We refer the reader to Chapter 4 of Bramson \cite{Bramson} for full details.
\section{Stability results for fluid model} \label{sec4}\setcounter{equation}{0}
In this section, we prove a number of results regarding the stability, or instability, of the fluid model. The proofs rely on the following two observations. \begin{enumerate} \item $T_i(\cdot)$ is Lipschitz continuous for each $i$ and hence so is any linear function $f$ of $(T_1, \ldots, T_K)$. Thus, $f$ is absolutely continuous and its derivative exists almost everywhere. \item If $\dot{f}(t)$ exists at some $t >0$, then $t$ is called a regular point. \end{enumerate} We define $\mathbf{e} = (1, \ldots, 1)^\bot$ and assume this column vector is of size $K$. Finally, we set $\mathbf{H} = \mathbf{G}\mathbf{M}\mathbf{G}^{-1}.$
\begin{theorem} If $\rho(\textbf{M}) <1$, then $f(t) = \mathbf{e}^\bot (\textbf{I}-\textbf{H}^\bot)^{-1} \textbf{G}^{-1}Q(t)$ is a Lyapunov function for the fluid model. \end{theorem}
\begin{proof} Note that $\rho(\textbf{H})= \rho(\textbf{M}) <1$. Hence, $\textbf{I}-\textbf{H}$ is an $\mathcal{M}$-matrix. Therefore $\textbf{I}-\textbf{H}$ is invertible with a non-negative inverse.
Let us assume that the fluid system starts from a non-empty state, i.e., $\mathbf{Q}(0) \not= \mathbf{0}$. By the continuity of $\mathbf{Q}$, $\mathbf{Q}(t)\not= 0$ for all $t$ in some interval $[0,s)$.
Then we have $Y(t) =0$ for all $t \in [0, s)$. Using equations (\ref{dynamics}) and (\ref{work}) we have $$\mathbf{W}(t) = \mathbf{W}(0) - (\textbf{I}-\textbf{H}^\bot)\mathbf{T}(t),$$ for $t \in [0, s)$. Multiplying by $\mathbf{e}^\bot (\textbf{I}-\textbf{H}^\bot)^{-1}$ yields $$ \mathbf{e}^\bot (\textbf{I}-\textbf{H}^\bot)^{-1}\mathbf{W}(t) = \mathbf{e}^\bot (\textbf{I}-\textbf{H}^\bot)^{-1} \mathbf{W}(0) - \mathbf{e}^\bot \mathbf{T}(t).$$ As in the statement of the theorem, set $$ f(t) = \mathbf{e}^\bot(\textbf{I}-\textbf{H}^\bot)^{-1} \textbf{G}^{-1}\mathbf{Q}(t)$$ and note that $f(t)=0$ if and only if $\mathbf{Q}(t) =\mathbf{0}$. Then we have: \begin{eqnarray} f(t) & = & \mathbf{e}^\bot (\textbf{I}-\textbf{H}^\bot)^{-1} \mathbf{W}(t) \\ & = & \mathbf{e}^\bot (\textbf{I}-\textbf{H}^\bot)^{-1} \mathbf{W}(0) - \mathbf{e}^\bot \mathbf{T}(t). \end{eqnarray}
Taking derivatives, we obtain
$$ \dot{f}(t) = -\mathbf{e}^\bot \dot{\mathbf{T}}(t) = -1,$$
at every regular point $t \in (0,s)$.
Therefore, the draining time of the system under
any feasible policy is $$f(0) = \mathbf{e}^\bot (\textbf{I}-\textbf{H}^\bot)^{-1} \mathbf{W}(0),$$ which can be interpreted as the initial unfinished ``potential'' work, defined as the work due to the current workload and work generated in the future by the initial workload's ``offspring.'' The above argument implies that $\dot{f}(t)=-1$ whenever $\mathbf{W}(t) \not= \mathbf{0}$ and thus the system stays drained once $\mathbf{Q}(t) =\mathbf{0}$. This completes the proof.
\end{proof} The corollary below now immediately follows. \begin{corollary} The fluid model is globally stable if $\rho(\textbf{M}) <1$. \end{corollary}
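The drain time $f(0) = \mathbf{e}^\bot (\textbf{I}-\textbf{H}^\bot)^{-1} \mathbf{W}(0)$ can be evaluated numerically via the Neumann series $(\textbf{I}-\textbf{H}^\bot)^{-1}=\sum_{n\ge 0}(\textbf{H}^\bot)^n$, which converges because $\rho(\textbf{H})=\rho(\textbf{M})<1$. A minimal sketch with hypothetical rates (note that, entrywise, $H_{ij}=\lambda_{ij}/\mu_j$):

```python
# Sketch: evaluate the drain time f(0) = e^T (I - H^T)^{-1} W(0) via the
# Neumann series sum_n (H^T)^n, which converges because rho(H) < 1.
# Entrywise H[i][j] = lambda_ij / mu_j; all rates are hypothetical.

def drain_time(Lam, mu, W0, tol=1e-12):
    K = len(mu)
    H = [[Lam[i][j] / mu[j] for j in range(K)] for i in range(K)]
    total, w = 0.0, list(W0)           # w holds (H^T)^n W(0)
    while sum(w) > tol:
        total += sum(w)                # accumulates e^T (H^T)^n W(0)
        w = [sum(H[i][j] * w[i] for i in range(K)) for j in range(K)]
    return total

Lam = [[0.3, 0.2], [0.1, 0.4]]
mu = [1.0, 1.0]
t_drain = drain_time(Lam, mu, [1.0, 1.0])
print(t_drain)  # approximately 4.0
```

For these rates and initial workload $\mathbf{W}(0)=(1,1)^\bot$, the series sums to (approximately) $4.0$, matching a direct inversion of $\textbf{I}-\textbf{H}^\bot$.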
\subsection{Weak instability} Next we show that the fluid model is weakly unstable if $\rho(\textbf{M})>1$. We begin with the following lemma. \begin{lemma} \label{hithere} Suppose $\rho(\textbf{M})=\rho(\textbf{H})>1$ and that each row of $\mathbf{M}$ has at least one strictly positive element. Then for every componentwise positive vector $\mathbf{T}(t)>\mathbf{0}$, the vector $\mathbf{V}(t)=(\textbf{I}-\textbf{H})\mathbf{T}(t)$ satisfies $V_{i}(t)<0$ for some $i\in\{1,\dots,K\}$. \end{lemma}
\begin{proof} Note that each row of $\textbf{H}$ has at least one strictly positive element, by the same assumption on $\textbf{M}$. Also, since $\rho(\textbf{H})>1$, there is $\alpha \in (0,1)$ with $\rho(\alpha \textbf{H}) = 1$, namely $\alpha = 1/\rho(\textbf{H})$. For the sake of contradiction, assume that there exists a vector $\mathbf{T}(t)>0$ s.t.\ $\mathbf{V}(t)=(\textbf{I}-\textbf{H})\mathbf{T}(t)\geq0$. Further, define $\mathbf{V}^{'}(t) = (\textbf{I}- \alpha \textbf{H})\mathbf{T}(t)$. We now consider:
\begin{eqnarray*} \mathbf{V}(t)-\mathbf{V}^{'}(t)=(\textbf{I}-\textbf{H})\mathbf{T}(t)-(\textbf{I}- \alpha \textbf{H})\mathbf{T}(t)=(\alpha \textbf{H}-\textbf{H})\mathbf{T}(t)<0, \end{eqnarray*} where the strict inequality holds componentwise because each row of $\textbf{H}$ contains a strictly positive entry and $\mathbf{T}(t)>0$, so $\textbf{H}\mathbf{T}(t)>0$. The above implies that $\mathbf{V}^{'}(t)>\mathbf{V}(t)\geq 0$ and thus that there exists some $\mathbf{T}(t)>0$ with $(\textbf{I}- \alpha \textbf{H})\mathbf{T}(t)>0$. Thus $(\textbf{I}-\alpha \textbf{H})$ is semipositive, and by condition $I_{27}$ in Chapter 6 of Berman~\cite{Berman}, $(\textbf{I}-\alpha \textbf{H})$ is a non-singular $\mathcal{M}$-matrix. This implies that $\rho(\alpha \textbf{H}) < 1,$ yielding a contradiction. \end{proof}
We are now ready to prove Theorem \ref{thm2}, the main result of this subsection. \begin{theorem} \label{thm2} The fluid model is weakly unstable if $\rho(\mathbf{M})>1$ and each row of $\mathbf{M}$ has at least one strictly positive element. \end{theorem} \begin{proof} Assume $\mathbf{Q}(0)=\mathbf{W}(0)=\mathbf{0}$. Then for any $t >0$ we have by (\ref{dynamics}) and (\ref{work}) that
\begin{eqnarray*} \mathbf{W}(t) \ge \mathbf{W}(0) - (\textbf{I}-\textbf{H}^\bot)\mathbf{T}(t)=(\textbf{H}^\bot-\textbf{I})\mathbf{T}(t). \end{eqnarray*} By Lemma \ref{hithere}, there exists some component of $\mathbf{W}(t)$ s.t.\ $W_{i}(t)>0$. This implies that $\mathbf{Q}(t)\neq \mathbf{0}$ for all $t >0$. Thus the fluid model is weakly unstable. \end{proof}
\subsection{Weak Stability}
\begin{theorem} \label{implemma} Suppose that $\mathbf{M}$ is an irreducible non-negative matrix. Then the fluid model is weakly stable if $\rho(\mathbf{M}) \le 1$. \end{theorem} \begin{proof} It suffices to show the result for the case $\rho(\bold{M})=1$, since we have already shown that the fluid model is ``strongly'' stable when $\rho(\bold{M})<1$.
Let $\mathbf{Q}(0)=0$. Suppose, for the sake of contradiction, that $\mathbf{Q}(t) \not= \mathbf{0}$ for some $t >0$. Then, since $\mathbf{Q}$ is continuous, there must be an interval $(t_1, t_2)$ with $t_2 > t_1$, for which $\|\mathbf{Q}(t)\| > 0$ for all $t \in (t_1, t_2)$,
$\mathbf{Q}(t_1) =0$ and $\|\mathbf{Q}(t_2) \| >0$. In particular, we may set $t_1 = \inf\{t: \mathbf{Q}(t) \not=\mathbf{0} \}$. Now, recall that \begin{equation} \label{dynam1} \mathbf{Q}(t)=(\mathbf{M}^\bot-\mathbf{I})\bold{D}(t)+Y(t)\mathbf{\lambda_0}. \end{equation} Since $\bold{M}$ is irreducible and non-negative with $\rho(\mathbf{M})=1$, it follows by the Perron-Frobenius Theorem that there exists a positive row vector $\bold{w}$ with $\bold{w}\mathbf{M}^\bot=\bold{w}$ (that is, $\bold{w}^\bot$ is a right eigenvector of $\mathbf{M}$), $w_i >0 $ for $i \in \{1,...,K\}$. Multiplying both sides of (\ref{dynam1}) by $\bold{w}$ we obtain $$ \bold{w}\mathbf{Q}(t)=\bold{w}\bigl[(\mathbf{M}^\bot-\mathbf{I})\bold{D}(t)+Y(t)\mathbf{\lambda_0}\bigr]\ =\ \bold{w}Y(t)\mathbf{\lambda_0},$$
for all $t \ge 0$. Recalling $\mathbf{Q}(t_1)=\mathbf{0}$ and $\|\mathbf{Q}(t_2)\| >0$ we have $$ \bold{w}\bigl(Y(t_2)-Y(t_1)\bigr) \mathbf{\lambda_0} \ =\ \bold{w}\bigl(\mathbf{Q}(t_2)-\mathbf{Q}(t_1)\bigr) > 0.$$ This implies $Y(t_2) > Y(t_1)$, so there is positive idle time in $(t_1,t_2)$. However, since the fluid level is positive in this entire interval, this violates the non-idling condition; hence no such fluid solution is feasible. This contradiction concludes the proof.
\end{proof}
\section{Branching process connection} \label{sec5}\setcounter{equation}{0}
In the remainder of the paper, we investigate a special case of the multiclass model discussed so far. In particular, we now assume that arrivals to each class occur according to a Poisson process, i.e., the model is an $M/G/1$ multiclass queue, rather than a $GI/G/1$ queue. Although more general stability conditions for the $GI/G/1$ case were proven in Section \ref{sec4}, we begin by reproving them in the Poisson setting, by making a connection to branching processes. There are two reasons to do this. First, the stability results arise in a somewhat more intuitive manner using this methodology. Second, we find the connection to branching processes illuminating and useful in later sections.
A classical tool for the simple $M/G/1$ queue and related systems is to interpret customers as individuals in a branching process, in which the children of a customer are the customers arriving during his or her service. This is useful because the stability condition for the queueing system is the same as the condition for almost sure extinction. Carrying out the same idea for our multiclass system leads to a $K$-type Crump-Mode-Jagers branching process $\{ \mathbf{Z}_n=\bigl(Z^{(1)}_n,\dots, Z^{(K)}_n\bigr): n \ge 1\}$, such that the lifetime of an individual of type $j$ has the same distribution as $S_j$. In the results below, we consider a branching process with a single ancestor of type $i$; whenever $\mathbb{E}_i$ and $\mathbb{P}_i$ are used, they refer to the probability measure induced by such a single ancestor. Since the number of type $j$ children of a type $i$ individual is the number of class $j$ arrivals during a class $i$ service, the offspring mechanism is described by the probabilities \begin{equation} p_{ij}(k) = \mathbb{P}_i\big(Z_{1}^{(j)}= k\big) = \mathbb{P}\bigl({\text{Pois}(\lambda_{ij}S_i) = k}\bigr) = \int_0^{\infty} \frac{(\lambda_{ij}s)^k {\mathrm{e}}^{-\lambda_{ij}s}}{k!} {\mathrm{d}} F_i(s)\,. \end{equation} The offspring matrix $\mathbf{M} = (M_{ij})_{i,j= 1,\dots, K}$ is given by $M_{ij}=\mathbb{E}_i\bigl[Z_1^{(j)}\bigr]=\lambda_{ij}/\mu_{i}$ and is assumed irreducible. Thus Perron-Frobenius theory applies to $\mathbf{M}$ and, as before, $\rho=\rho(\mathbf{M})$ denotes its largest eigenvalue. Note that the $ij$th element of the matrix $\sum_{n=0}^\infty \mathbf{M}^n$ gives the expected number of type $j$ progeny of an individual of type $i$; of course, when $\rho<1$, we have $\sum_{n=0}^\infty \mathbf{M}^n=(\mathbf{I}-\mathbf{M})^{-1}$.
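The branching-process view can be checked by direct simulation. The sketch below uses hypothetical rates, exponential service times assumed purely for the illustration, and helper functions (`poisson`, `extinct`) that are ours, not part of the model; it runs the $K$-type process with $\text{Pois}(\lambda_{ij}S_i)$ offspring and estimates the extinction frequency:

```python
import math
import random

# Simulation sketch of the K-type branching process: a type-i individual
# lives for a service time S_i and begets Pois(lambda_ij * S_i) children
# of type j. Exponential service times and all rates are hypothetical
# choices for this illustration only.

def poisson(lam, rng):
    """Knuth's Poisson sampler; adequate for the small means used here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def extinct(Lam, mu, start_type, rng, max_pop=10**5):
    """One multitype Galton-Watson realization; True if it dies out."""
    K = len(mu)
    gen = [0] * K
    gen[start_type] = 1
    while sum(gen) > 0:
        if sum(gen) > max_pop:
            return False               # treat explosion as survival
        nxt = [0] * K
        for i in range(K):
            for _ in range(gen[i]):
                S = rng.expovariate(mu[i])      # lifetime of the individual
                for j in range(K):
                    nxt[j] += poisson(Lam[i][j] * S, rng)
        gen = nxt
    return True

rng = random.Random(1)
Lam = [[0.3, 0.2], [0.1, 0.4]]   # rho(M) = 0.5 < 1: extinction is certain
mu = [1.0, 1.0]
runs = [extinct(Lam, mu, 0, rng) for _ in range(500)]
print(sum(runs) / len(runs))     # should be 1.0 in the subcritical case
```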
\subsection{Stability Conditions}
Let $|\mathbf{Z}_n|=\sum_{j=1}^KZ_n^{(j)}$ denote the total number of individuals in the $n$th generation and $T^*$ the extinction time. Let $\mathbb{P}_i(T^*<\infty)$ be the extinction probability of the branching process started from a single type-$i$ ancestor. Then, by classical results, we have the following theorem:
\begin{theorem}\label{thmHA}
\begin{eqnarray} \label{Harris} \mathbb{P}_i(T^*<\infty) =1,\,\, i=1,\ldots,K,\, \text{if and only if} \,\, \rho \leq 1. \end{eqnarray} \end{theorem} \begin{proof}
By the classical result for the extinction time of branching processes~\cite[Chap.~II, Theorem 7.1]{Harris63}, the total number of generations is finite with probability $1$ if and only if $\rho \leq 1$; in that case $\sum_n |\mathbf{Z}_n| < \infty$, which in turn implies $\mathbb{P}_i(T^*<\infty) = 1 $ for every $i$.
\end{proof} \noindent Consider $K=2$. Straightforward algebra gives that $\rho \leq 1$ is equivalent to
\begin{eqnarray} \frac{\frac{\lambda_{11}}{\mu_1}+\frac{\lambda_{22}}{\mu_2}+\sqrt{\parens{\frac{\lambda_{11}}{\mu_1}-\frac{\lambda_{22}}{\mu_2}}^2+\frac{4\lambda_{12}\lambda_{21}}{\mu_1\mu_2}}}{2} \leq 1. \end{eqnarray}
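As a numerical illustration (the rates below are made up for the example), the closed form above can be checked against an independent power-iteration computation of $\rho(\mathbf{M})$:

```python
# Hypothetical rates for a K = 2 instance.
lam = [[0.3, 0.2],
       [0.1, 0.4]]
mu = [1.0, 2.0]

M = [[lam[i][j] / mu[i] for j in range(2)] for i in range(2)]

# Closed form for the largest eigenvalue of a 2x2 matrix.
a, b, c, d = M[0][0], M[0][1], M[1][0], M[1][1]
rho_closed = (a + d + ((a - d) ** 2 + 4 * b * c) ** 0.5) / 2

# Power iteration as an independent check (M is nonnegative and irreducible).
v = [1.0, 1.0]
for _ in range(200):
    w = [M[0][0] * v[0] + M[0][1] * v[1],
         M[1][0] * v[0] + M[1][1] * v[1]]
    norm = max(w)
    v = [x / norm for x in w]
rho_power = norm

print(rho_closed, rho_power, "stable" if rho_closed <= 1 else "unstable")
```

For these rates $\rho(\mathbf{M})\approx 0.36 < 1$, so the stability condition holds.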
\begin{theorem}\label{thmHA2} $\mathbb{E}_iT^*<\infty$ for all $i$ if and only if $\rho < 1$. \end{theorem} \begin{proof} For a simple proof of sufficiency, assume $\rho<1$, let $S_j(m;n)$ denote the lifetime of the $m$th individual of type $j$ in the $n$th generation, and let $\underline\mu=\min_{1\le j\le K}\mu_j$. Then \begin{align*}\mathbb{E}_iT^*\ &= \mathbb{E}_i\sum_{n=0}^\infty\sum_{j=1}^K \sum_{m=1}^{Z_n^{(j)}}S_j(m;n)\ =\
\mathbb{E}_i\sum_{n=0}^\infty\sum_{j=1}^K\frac{Z_n^{(j)}}{\mu_j}\\ &\le\ {\underline\mu}^{-1} \mathbb{E}_i\sum_{n=0}^\infty\sum_{j=1}^KZ_n^{(j)}\ =\ {\underline\mu}^{-1} \sum_{n=0}^\infty\sum_{j=1}^KM^n_{ij}\ <\ \infty, \end{align*} where the second step uses that $S_j(m;n)$ is independent of $\mathbf{Z}_0,\ldots,\mathbf{Z}_n$ (but not of $\mathbf{Z}_{n+1},\mathbf{Z}_{n+2},\ldots$). Further, the strict inequality \begin{equation} \sum_{n=0}^\infty\sum_{j=1}^K M^n_{ij}<\infty \end{equation} follows from $\rho<1$. To prove the necessity, let $\bar{\mu}=\max_{1\le j\le K}\mu_j$. Then by the same reasoning we get $\mathbb{E}_iT^* \geq \bar{\mu}^{-1} \sum_{n=0}^\infty\sum_{j=1}^K M^n_{ij} = \infty$ for $\rho \geq 1$ (see Berman and Plemmons \cite{Berman}). Hence $\rho < 1$ is also necessary for $\mathbb{E}_iT^* < \infty$. \end{proof}
\noindent We now have the following corollary to Theorem \ref{thmHA}. \begin{corollary} The busy period $T<\infty$ w.p.1 if and only if the matrix $\mathbf{M}$ given by
\begin{eqnarray*} M_{ij}=\frac{\lambda_{ij}}{\mu_i},\,\, i,j=1,\ldots, K, \end{eqnarray*} has largest eigenvalue $\rho(\mathbf{M})\leq 1$. \end{corollary} \noindent Similarly, a corollary to Theorem \ref{thmHA2} is stated below. \begin{corollary} For the busy period $T$, $\mathbb{E} T < \infty$ if and only if $\rho < 1$. \end{corollary}
\subsection{Further applications}
Let $B_{i;z}$ denote the length of the busy period initiated by a class $i$ customer with service requirement $z$ (so that $B_i$, the standard busy period initiated by a class $i$ customer, corresponds to taking $z=S_i$). Let further \begin{equation}\label{12.5d}\tau_j =\ \mathbb{E}_i\sum_{n=0}^\infty \sum_{m=1}^{Z_n^{(j)}}S_j(m;n) \end{equation} be the expected total time in $[0,B_i)$ during which the customer being served is of class $j$. As before, $\mathbf{G}$ is the diagonal matrix with the $\mu_i$ on the diagonal.
\begin{lemma}\label{Lemma22.7a} Assume $\rho<1$. Then:\\ {\rm (i)} $(\tau_j)_{i,j=1,\ldots,K}\ =\ (\mathbf{I}-\mathbf{M})^{-1}\mathbf{G}^{-1}\,; $ {\rm (ii)} $\mathbb{E} B_i\ =\ \mathbf{e}^\top_i(\mathbf{I}-\mathbf{M})^{-1}\mathbf{G}^{-1}\mathbf{e}\,;$\\ {\rm (iii)} $\mathbb{E} B_{i;z}\ =\ z\beta_i$ where $\beta_i\, =\,\mathbf{e}^\top_i\mathbf{\Lambda}(\mathbf{I}-\mathbf{M})^{-1}\mathbf{G}^{-1}\mathbf{e}\,;$\\ {\rm (iv)} $ B_{i;z}/z\to\beta_i$ in probability as $z\to\infty$. \end{lemma} \begin{proof} (i) follows immediately since the $ij$th element of $(\mathbf{I}-\mathbf{M})^{-1}\mathbf{G}^{-1}$ is \[\sum_{n=0}^\infty M^n_{ij}/\mu_j\ =\ \mathbb{E}_i\sum_{n=0}^\infty Z_n^{(j)}/\mu_j\ =\ \tau_j,\] and (ii) follows from (i) by summing over $j$. For (iii) and (iv), we may (by work conservation) assume that the discipline is preemptive-resume. The workload process during service of a class $i$ customer evolves as a standard compound Poisson process with arrival rate $\bar{\lambda}^i = \sum_{j=1}^K \lambda_{ij}$ and with cumulative distribution function \[\sum_{j=1}^K\frac{\lambda_{ij}}{\bar{\lambda}^i}\mathbb{P}(B_j\le x)\] for the jumps. For this system, the rate of arriving work is $\bar{\lambda}^i \sum_{j=1}^K \bigl(\lambda_{ij}/\bar{\lambda}^i\bigr)\, \mathbb{E} B_j = \sum_{j=1}^K\lambda_{ij}\,\mathbb{E} B_j$, which is the same as $\beta_i$. Now we may simply appeal to standard compound Poisson results to obtain (iii) and (iv). \end{proof}
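Part (ii) of the lemma is easy to evaluate numerically. The sketch below uses hypothetical rates: a symmetric two-class example with unit service rates. By symmetry the two classes aggregate to an $M/M/1$ queue with arrival rate $0.3$ and service rate $1$, whose classical busy-period mean $1/(\mu-\lambda)=10/7$ should agree with the lemma's formula.

```python
# Hypothetical symmetric K = 2 instance with unit service rates.
lam = [[0.2, 0.1],
       [0.1, 0.2]]
mu = [1.0, 1.0]

M = [[lam[i][j] / mu[i] for j in range(2)] for i in range(2)]

# (I - M)^{-1} for a 2x2 matrix via the adjugate formula.
a, b = 1 - M[0][0], -M[0][1]
c, d = -M[1][0], 1 - M[1][1]
det = a * d - b * c
inv = [[d / det, -b / det], [-c / det, a / det]]

# Lemma (ii): E B_i = e_i^T (I - M)^{-1} G^{-1} e, with G = diag(mu).
EB = [sum(inv[i][j] / mu[j] for j in range(2)) for i in range(2)]
print(EB)

# Aggregated M/M/1 check: lam = 0.3, mu = 1 gives E B = 1/(1 - 0.3) = 10/7,
# matching both entries of EB for this symmetric instance.
```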
\section{Busy period results} \label{secBP}\setcounter{equation}{0} In this section, we begin by assuming $\rho(\mathbf{M})\leq 1$. Let $B_\mathbf{x}$ denote the busy period when the system starts from the state $\mathbf{x} \in \mathbb{Z}^K_+$, that is, the time period until the system becomes empty. In particular, when $\mathbf{x}$ consists of a single customer of class $i$, we denote the busy period as $B_i$, and $B_{i,s}$ is the busy period when his remaining service is $s$. Define $g_\mathbf{x}$ to be the LST of $B_\mathbf{x}$, i.e., $g_\mathbf{x}(\theta) = \mathbb{E}_\mathbf{x}[{\mathrm{e}}^{-\theta B_\mathbf{x}}]$ for $\mathbf{x} \in \mathbb{Z}^K_+$, and similarly for $g_i,g_{i,s}$.
\subsection{The busy period Laplace transform} \label{sec6}
For the $M/G/1$ queue, when $K=1$, it is well known that the LST of the busy period $B$ is given by \begin{equation} g(\theta) = \psi(\theta+ \lambda - \lambda g(\theta)), \end{equation} where $\psi$ is the LST of the service time and $\lambda$ is the arrival rate. See, for example, Neuts~\cite{Neuts} or Wolff~\cite{Wolff}. We shall use the branching process connection to derive a similar fixed point equation for our model.
We first observe that the busy period of the system corresponding to an arbitrary initial state $\mathbf{x} = (x_1,\dots,x_K) \in \mathbb{Z}^K_+$ is a sum of independent busy periods, each of which corresponds to the branching process starting with a single customer. This gives immediately that \begin{equation}\label{19.1a} g_\mathbf{x}(\theta)\ =\ g_1^{x_1}(\theta)\cdots g_K^{x_K}(\theta)\quad\text{when } \mathbf{x} = (x_1,\dots,x_K) \in \mathbb{Z}^K_+\,. \end{equation} Hence, it is sufficient to calculate $g_i,g_{i,s}$. Recall that $\psi_i$ is the LST of the service time distribution $F_i$ of a class $i$ customer.
\begin{theorem} \label{thm-LT} For $\theta\ge 0$, \begin{equation}\label{19.1b} g_{i,s}(\theta)\ =\ {\rm exp} \Bigl\{-s\Bigl(\theta+\bar{\lambda}^i-\sum_{j=1}^K\lambda_{ij} g_j(\theta)\Bigr)\Bigr\}. \end{equation} Further, \begin{equation}\label{19.1c} g_i(\theta)\ =\ \psi_i \Bigl(\theta+\bar{\lambda}^i-\sum_{j=1}^K\lambda_{ij} g_j(\theta)\Bigr)\,,\quad i=1,\ldots,K, \end{equation} and the vector $\bigl( g_1(\theta),\ldots, g_K(\theta)\bigr)$ is the minimal non-negative, non-increasing solution of this system of equations.
\end{theorem} \begin{proof} Clearly, $B_{i,s}$ is the service time $s$ plus the busy periods of all customers arriving during service. But the number of such customers of class $j$ is Poisson$( \lambda_{ij} s)$, and so their busy periods add up to a compound Poisson random variable with LST ${\rm exp}\bigl\{\lambda_{ij} s \bigl(g_j(\theta)-1\bigr)\bigr\}$. The independence for different $j$ then gives \[g_{i,s}(\theta)\ =\ {\mathrm{e}}^{-\theta s}\prod_{j=1}^K {\rm exp}\bigl\{\lambda_{ij}s\bigl(g_j(\theta)-1\bigr)\bigr\},\] which is the same as \eqref{19.1b}. Integrating with respect to $F_i({\mathrm{d}} s)$ then gives \eqref{19.1c}.
Now consider another non-negative solution $\bigl(\widetilde g_1(\theta),\ldots,\widetilde g_K(\theta)\bigr)$ of \eqref{19.1c}. Define the depth $D$ of the multitype Galton-Watson family tree as $D\,=\, \max\bigl\{n\ge 0:\,\mathbf{Z}_n\ne\mathbf{0} \bigr\}$ and let $g_i^{(n)}(\theta)=\mathbb{E}[{\mathrm{e}}^{-\theta B_i};\,D\le n]$. Here $D=0$ means no arrivals during service; this occurs with probability ${\mathrm{e}}^{-\bar{\lambda}^iS_i}$ given $S_i$, and so $g_i^{(0)}(\theta)=\psi_i(\theta+\bar{\lambda}^i)$. The assumptions on the $\widetilde g_j(\theta)$ then give $\widetilde g_i(\theta)\ge g_i^{(0)}(\theta)$. Further, the same reasoning as that leading to \eqref{19.1c} gives \[ g_i^{(n+1)}(\theta)\ =\ \psi_i \Bigl(\theta+\bar{\lambda}^i-\sum_{j=1}^K\lambda_{ij} g_j^{(n)}(\theta)\Bigr)\,.\] By induction starting from $g_i^{(0)}(\theta)\le \widetilde g_i(\theta)$ we then get $g_i^{(n)}(\theta)\le \widetilde g_i(\theta)$ for all $n$. The proof is completed by observing that $\rho(\mathbf{M})\le 1$ implies $D<\infty$ almost surely and hence $g_i^{(n)}(\theta)\uparrow g_i(\theta)$. \end{proof}
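The monotone iteration used in the proof is also a practical way to compute the transforms numerically. The sketch below (exponential services and hypothetical rates) iterates $g_i^{(n+1)}(\theta)=\psi_i\bigl(\theta+\bar{\lambda}^i-\sum_j\lambda_{ij}g_j^{(n)}(\theta)\bigr)$ from $g\equiv 0$; for $K=1$ it recovers the classical $M/M/1$ busy-period LST $g(\theta)=\bigl(u-\sqrt{u^2-4\lambda\mu}\bigr)/(2\lambda)$ with $u=\mu+\theta+\lambda$.

```python
def busy_lst(lam, mu, theta, iters=500):
    """Iterate g_i <- psi_i(theta + lam_bar_i - sum_j lam_ij g_j) from g = 0,
    with exponential services, so psi_i(u) = mu_i / (mu_i + u)."""
    K = len(mu)
    lam_bar = [sum(lam[i]) for i in range(K)]
    g = [0.0] * K
    for _ in range(iters):
        g = [mu[i] / (mu[i] + theta + lam_bar[i]
                      - sum(lam[i][j] * g[j] for j in range(K)))
             for i in range(K)]
    return g

theta = 0.5

# K = 1 check against the classical M/M/1 busy-period LST.
lam1, mu1 = 0.5, 1.0
g = busy_lst([[lam1]], [mu1], theta)[0]
u = mu1 + theta + lam1
g_exact = (u - (u * u - 4 * lam1 * mu1) ** 0.5) / (2 * lam1)
print(g, g_exact)

# A hypothetical K = 2 instance; the iterates settle at a fixed point.
g2 = busy_lst([[0.0, 0.3], [0.2, 0.0]], [1.0, 1.5], theta)
print(g2)
```

The iterates increase monotonically to the minimal solution, mirroring the argument in the proof.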
\begin{example} Consider a network with $K=2$ classes, $\lambda_{11}=\lambda_{22}=0$, and $F_i$ exponential$(\mu_i)$. Then \eqref{19.1c} has the form \[ g_1\ =\ \frac{\mu_1}{\mu_1+\theta+\lambda_{12}-\lambda_{12}g_2}\,,\quad
g_2\ =\ \frac{\mu_2}{\mu_2+\theta+\lambda_{21}-\lambda_{21}g_1}\] where for brevity $g_i$ means $g_i(\theta)$.
This gives \[ g_1\ =\ \frac{ \mu_1 \lambda_{21} - \mu_2 \lambda_{12} + (\mu_1 + \theta + \lambda_{12} )(\mu_2 + \theta + \lambda_{21} ) - \sqrt{\Delta} }{2 \lambda_{21} (\mu_1 + \theta + \lambda_{12} ) }, \] \[ g_2\ =\ \frac{
- \mu_1 \lambda_{21} + \mu_2 \lambda_{12} + (\mu_1 + \theta + \lambda_{12} )(\mu_2 + \theta + \lambda_{21} ) - \sqrt{\Delta} }{2 \lambda_{12} (\mu_2 + \theta + \lambda_{21} ) }, \] where \[ \Delta = [ \mu_1 \mu_2 + \lambda_{12} \lambda_{21} + \theta^2 + \theta(\mu_1 + \mu_2 + \lambda_{12} + \lambda_{21} ) ]^2 - 4 \mu_1 \mu_2 \lambda_{12} \lambda_{21} . \]
\end{example}
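The explicit solution can be sanity-checked numerically: for any parameter values (those below are hypothetical), the closed forms should satisfy the fixed-point system \eqref{19.1c} and lie in $(0,1)$ for $\theta>0$.

```python
# Hypothetical parameter values for the example's explicit solution.
mu1, mu2 = 1.0, 2.0
l12, l21 = 0.3, 0.4
theta = 0.25

A = mu1 + theta + l12
B = mu2 + theta + l21
Delta = (mu1 * mu2 + l12 * l21 + theta ** 2
         + theta * (mu1 + mu2 + l12 + l21)) ** 2 - 4 * mu1 * mu2 * l12 * l21

# The closed-form expressions from the example (minus root = minimal solution).
g1 = (mu1 * l21 - mu2 * l12 + A * B - Delta ** 0.5) / (2 * l21 * A)
g2 = (-mu1 * l21 + mu2 * l12 + A * B - Delta ** 0.5) / (2 * l12 * B)

# Residuals of the fixed-point system (19.1c); both should vanish.
res1 = g1 - mu1 / (mu1 + theta + l12 - l12 * g2)
res2 = g2 - mu2 / (mu2 + theta + l21 - l21 * g1)
print(g1, g2, res1, res2)
```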
\subsection{Busy period asymptotics}\label{sec7}
In this section, we offer some observations on the tail asymptotics of the busy period in the case of heavy-tailed service time distributions. For light-tailed service time distributions, we refer the reader to the recent work of Palmowski and Rolski~\cite{Rolski}. For the current case of heavy tails, we refer the reader to Zwart~\cite{Bert}, Jelenkovi\'c and Momcilovi\'c~\cite{Predrag} and Denisov and Shneer~\cite{DenisSeva}.
The key idea in both Jelenkovi\'c and Momcilovi\'c \cite{Predrag} and in Zwart \cite{Bert} (as in many other instances of heavy-tailed behavior) is the principle of \emph{one big jump}. For busy periods, this leads us to expect a large busy period to occur as a consequence of one large service time. For concreteness, consider the standard $M/G/1$ queue with $\rho<1$ and suppose there is a single large service time of size $S=z$. The workload after the large jump is $u+z$ for some small or moderate $u$. The workload then decreases at rate $1-\rho$ until it reaches 0 and the busy period terminates. By the Law of Large Numbers (LLN), the time of termination is approximately $(z+u)/(1-\rho)$. Since the time before the big jump can be neglected, we have $B>x$ essentially if and only if $z>(1-\rho)x$. Both Asmussen~\cite{SA98} and Foss and Zachary~\cite{FossZ} show that the probability of this large jump is asymptotically equal to $\mathbb{P}\bigl(S>(1-\rho)x\bigr) \mathbb{E}\sigma$ for large $x$, where $\sigma$ is the number of customers served in a busy period. But $\mathbb{E}\sigma= \sum_{n=0}^\infty \rho^n = 1/(1-\rho)$. Indeed, 1 corresponds to the customer initiating the busy period, $\rho$ is the expected number of customers arriving while he is in service (the first generation), $\rho^2$ is the expected number of customers arriving while they are in service, and so forth; in the framework of branching processes, $\rho^n$ is the expected number of individuals in the $n$th generation. These considerations lead to \begin{align} \label{22.7e} \mathbb{P}(B>x)\ &\sim\ \frac{1}{1-\rho} \mathbb{P}\bigl(S>(1-\rho)x\bigr), \end{align} which Jelenkovi\'c and Momcilovi\'c \cite{Predrag} show to be the correct asymptotics if the service time distribution is subexponential and square root insensitive, i.e.\ with a heavier tail than ${\mathrm{e}}^{-\sqrt{x}}$.
Generalizing this approach to our multiclass system, we recall that $\beta_i\, =\,\sum_{j=1}^K\lambda_{ij}\mathbb{E} B_j$ and we
introduce a subexponential and square root insensitive reference distribution $F$ for which the individual service time distributions are related as \begin{equation}\label{0509a} {\overline F}_i\bigl(x/(1+\beta_i)\bigr)\ \sim\ c_i{\overline F}(x). \end{equation} In practice, one chooses ${\overline F}(x)$ as $\sup_i{\overline F}_i\bigl(x/(1+\beta_i)\bigr)$. This is common in heavy-tailed studies involving distributions with different degrees of heavy-tailedness. In particular, it allows some $F_j$ to be light-tailed ($c_j=0$). \\ \indent Recalling the interpretation of $\beta_i$ as the rate of arriving work while a class $i$ customer is in service, a big service time $S_i$ of a class $i$ customer will lead to $B_i>x$ roughly when $S_i(1+\beta_i)>x$. Using the same reasoning as for~\eqref{22.7e}, we first note that $(\mathbf{M}^n)_{ij}$ is the expected number of type-$j$ individuals in the $n$th generation of the progeny of a type $i$ ancestor. Hence if $\rho(\mathbf{M}) < 1$, the probability that one of these large service times occurs in $[0,B_i)$ is approximately \begin{align*}\sum_{n=0}^\infty \sum_{j=1}^K (\mathbf{M}^n)_{ij}{\overline F}_j\bigl(x/(1+\beta_j)\bigr)\ \sim\ d_i{\overline F}(x),\\ \intertext{where}\
d_i\,=\, \sum_{n=0}^\infty \sum_{j=1}^K (\mathbf{M}^n)_{ij}c_j
\,=\, \sum_{j=1}^K (\mathbf{I}-\mathbf{M})_{ij}^{-1}c_j. \end{align*}
Equivalently, the $d_i$ solve
\begin{align}\label{30.7b} d_i\ &=\ c_i+\sum_{j=1}^K M_{ij}d_j\,. \end{align} As for the standard $M/G/1$ queue, it is straightforward to verify that this is an asymptotic lower bound. \begin{proposition}\label{Prop:0509a} Assume that $F$ in \eqref{0509a} is subexponential with finite mean, that $c_k>0$ for some $k$, and that $\rho(\mathbf{M}) < 1$. Then for each $i=1,\ldots,K$, \begin{equation}\label{0509b} \liminf_{x\to\infty}\frac{\mathbb{P}(B_i>x)}{{\overline F}(x)}\ \ge\ d_i. \end{equation} \end{proposition}
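Computing the constants $d_i$ is a small linear-algebra exercise. The sketch below (with a hypothetical offspring matrix and $c$-vector, one regularly varying class with $c_1=1$ and one light-tailed class with $c_2=0$) evaluates $d=(\mathbf{I}-\mathbf{M})^{-1}c$ and verifies the equivalent linear system \eqref{30.7b}.

```python
# Hypothetical example: regularly varying F_1 (c_1 = 1), light-tailed F_2 (c_2 = 0).
M = [[0.3, 0.2],
     [0.1, 0.4]]
c = [1.0, 0.0]

# d = (I - M)^{-1} c via the 2x2 adjugate formula.
a, b = 1 - M[0][0], -M[0][1]
cc, dd = -M[1][0], 1 - M[1][1]
det = a * dd - b * cc
d = [(dd * c[0] - b * c[1]) / det,
     (-cc * c[0] + a * c[1]) / det]

# Equivalent characterization (30.7b): d_i = c_i + sum_j M_ij d_j.
res = [d[i] - c[i] - sum(M[i][j] * d[j] for j in range(2)) for i in range(2)]
print(d, res)
```

For these values $d=(1.5,\,0.25)$: the light-tailed class still inherits a regularly varying busy-period tail through the offspring structure.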
\begin{remark} \rm Square root insensitivity of $F$ is not needed for Proposition~\ref{Prop:0509a}. The assumption \begin{equation}\label{0509avar} {\overline F}_i(x) \sim\ \widetilde c_i{\overline F}_0(x) \end{equation} may a priori be more appealing than \eqref{0509a} since it does not involve evaluation of the $\beta_i$. However, it is closely related. The reason is that if $F$ is regularly varying with ${\overline F}(x)=L(x)/x^\alpha$, then \eqref{0509a} and \eqref{0509avar} with $F_0=F$ are equivalent, with the constants related by $c_i=\widetilde c_i(1+\beta_i)^\alpha$. For $F_0$ lognormal or Weibull with tail ${\mathrm{e}}^{-x^\delta}$ (with $\delta<1/2$ in the square root insensitive case), one has ${\overline F}_0(\gamma_1x)=o\bigl({\overline F}_0(\gamma_2x)\bigr)$ for $\gamma_1>\gamma_2$. Hence if \eqref{0509avar} holds, we may define $\beta^*=\max_{1\le j\le K}\beta_j$ and take ${\overline F}(x)={\overline F}_0\bigl(x/(1+\beta^*)\bigr)$, where $c_j=1$ if $\beta_j=\beta^*$ and $c_j=0$ if $\beta_j<\beta^*$. \end{remark}
\newcommand{\eqdistr}{\stackrel{{\footnotesize \cal D}}{=}}
\indent The $M/G/1$ literature leads to the conjecture that further contributions to $\mathbb{P}(B_i>x)$ can be neglected, i.e.\ that $\mathbb{P}(B_i>x)\sim d_i{\overline F}(x)$ in the square-root insensitive case. However, the upper bound is much more difficult (even in the single-class $M/G/1$ setting) and follows in the regularly varying case from more general results recently established in Asmussen \& Foss~\cite{SASF17}: \begin{theorem}\label{Th:29.7a} Assume, in addition to the conditions of Proposition~\ref{Prop:0509a}, that $F$ is regularly varying. Then $\mathbb{P}(B_i>x)\sim d_i {\overline F}(x)$ for each $i=1,\ldots,K$. \end{theorem} In the proof, we need: \begin{lemma}\label{L:30.7a} Let $S$ be subexponential and let the conditional distribution of $N$ given $S=s$ be Poisson$(\lambda s)$. Then $\mathbb{P}(S+N>x)\sim\mathbb{P}\bigl(S(1+\lambda)>x\bigr)$ as $x\to\infty$. Further, the conditional distribution of $(S,N)/(S+N)$ given $S+N>x$ converges to the one-point distribution at $\bigl(1/(1+\lambda),\lambda/(1+\lambda)\bigr)$. \end{lemma} \begin{proof} The argument is standard, with the key intuition being that the variation in $S$ dominates that of the Poisson distribution, so that $N$ can be replaced by its conditional expectation $\lambda S$ given $S$.
Firstly, note that if $x$ is so large that $x-x^{1/2}>2\lambda x^{1/2}$ and $N(x^{1/2})$ is Poisson$(\lambda x^{1/2})$, then \begin{align*}\mathbb{P}(S+N>x, S<x^{1/2})\ &\le\ \mathbb{P}\bigl(x^{1/2}+N(x^{1/2})>x\bigr)\ \le\ \mathbb{P}\bigl(N(x^{1/2})> 2\lambda x^{1/2}\bigr), \end{align*} which (by large deviations theory) tends to zero faster than ${\mathrm{e}}^{-\delta x^{1/2}}$ for some $\delta>0$, and hence faster than $\mathbb{P}\bigl(S(1+\lambda) > x\bigr)$. Secondly, $N/S\to \lambda$ in probability as $y\to\infty$ given $S>y$, and so \begin{align*}\mathbb{P}(S+N>x, S\ge x^{1/2})\ &\sim\ \mathbb{P}\bigl(S(1+\lambda)>x, S\ge x^{1/2} \bigr), \end{align*} the latter equaling $ \mathbb{P}\bigl(S(1+\lambda)>x\bigr)$ for large $x$. This proves the first statement, and the second follows since (asymptotically) only large values of $S$ contribute to large values of $S+N$, and in this regime $N/S\sim\lambda$. \end{proof}
The set-up of~\cite{SASF17} is a set of random variables $(B_1,\ldots,B_K)$ satisfying \begin{equation}\label{AF30.6a}B_i \ \eqdistr\ S_i+ \sum_{j=1}^K\sum_{m=1}^{N_{j;i}}B_{m;j}.\end{equation} The assumptions for \eqref{AF30.6a} are that all $B_{m;j}$ are independent of the vector $(S_i,N_{1;i},\ldots,N_{K;i})$, that they are mutually independent, and that $B_{m;j}\eqdistr B_j$. Further, all random variables are non-negative. In our multiclass queue, $B_i$ is the length of the busy period initiated by a class $i$ customer, $S_i$ is the service time, and $N_{j;i}$ is the number of class $j$ customers arriving during his service. In the following, we omit the index $i$ and instead express the dependence on $i$ in terms of a governing probability measure $\mathbb{P}_i$.
\begin{proof}[Proof of Theorem~\ref{Th:29.7a}] To apply the results of \cite{SASF17}, we first need to verify a condition of multivariate regular variation (see~\cite{Elephant} for background) on the vector $(S,N_1,\ldots,N_K)$. Its first part is that $\mathbb{P}(S+N_1+\cdots+N_K>x)$ $\sim b_i{\overline F}(x)$ for some $b_i$. This is immediate from Lemma~\ref{L:30.7a} by taking $N=N_1+\cdots+N_K$, $\lambda=\bar{\lambda}^i$ and $b_i=\widetilde c_i(1+\bar{\lambda}^i)^\alpha$. A minor extension of the proof of Lemma~\ref{L:30.7a} further yields that, given $S+N_1+\cdots+N_K>x$,
\begin{eqnarray} \label{eqangular} \frac{1}{S+N_1+\cdots+N_K}\bigl(S, N_1, \ldots ,N_K\bigr)\ \to \frac{1}{1+\bar{\lambda}^i}\bigl(1,\lambda_{i1}, \ldots , \lambda_{iK}\bigr), \end{eqnarray} where the limit is taken as $x \rightarrow \infty$.
This establishes the second part, namely the existence of the so-called angular measure (in this case a one-point distribution at the right-hand side of (\ref{eqangular})).
It now follows from~\cite{SASF17} that $\mathbb{P}(B_i>x)\sim d_i^* {\overline F}(x)$, where the $d_i^*$ solve the set of linear equations \begin{align}\label{30.7a} d_i^*\ &=\ c_i^*+\sum_{j=1}^K M_{ij}d_j^*,\end{align} and \[c_i^*\ =\ \lim_{x\to\infty}\frac{1}{{\overline F}(x)}\mathbb{P}_i(S+N_1\overline r_1+\cdots+N_K\overline r_K>x)\quad \text{with } \overline r_j=\mathbb{E} B_j. \] Comparing with \eqref{30.7b}, we see that we need only check that $c_i^*=c_i$. But by arguments similar to those above, \begin{align*} &\mathbb{P}_i(S+N_1\overline r_1+\cdots+N_K\overline r_K>x)\ \sim\ \mathbb{P}\bigl(S(1+\lambda_{i1}\overline r_1+\cdots+\lambda_{iK} \overline r_K ) >x\bigr) \\ &=\ \mathbb{P}\bigl(S(1+\beta_i)>x\bigr)\ \sim\ \widetilde c_i(1+\beta_i)^\alpha{\overline F}(x)\ =\ c_i{\overline F}(x),\end{align*} where parts (ii) and (iii) of Lemma~\ref{Lemma22.7a} are employed in the second step. \end{proof}
\begin{remark} \rm The general subexponential case seems much more difficult. One obstacle is that theory and applications of multivariate subexponentiality is much less developed than for the regular varying case. See, however, Samorodnitsky and Sun~\cite{Genna} for a recent contribution and for further references. \end{remark}
\section{Conclusion} \label{sec8}\setcounter{equation}{0} We have introduced a multiclass single-server queueing system in which the arrival rates depend on the class of the job currently in service. The model departs from existing state-dependent models in the literature, in which the parameters depend primarily on the number of jobs in the system rather than on the job in service. \\ \indent The main contributions of this paper can be summarized as follows. Firstly, we formulate the multiclass queueing model and its corresponding fluid model, and provide motivation for its practical importance. The necessary and sufficient conditions for stability of the queueing system are obtained via the corresponding fluid model. Secondly, by appealing to the natural connection with multitype Galton-Watson processes, we utilize Laplace-Stieltjes transforms to characterize the busy period of the queueing system. Thirdly, we present a preliminary study of busy period tail asymptotics for heavy-tailed service time distributions and give a complete set of results for the regularly varying case, using recent results of Asmussen \& Foss~\cite{SASF17}. Tail asymptotics in our multiclass setting for non-regularly varying heavy-tailed service time distributions, as well as for light-tailed service time distributions, are much more difficult and will be attempted in a separate manuscript.\\
\noindent \textbf{Acknowledgments} We are very grateful to a referee for pointing out a problem in our initial proof of the upper bound in Section~\ref{sec7}. We also thank a second referee and an associate editor for many useful suggestions. The first author thanks Dr.\ Quan Zhou and Professor Guodong Pang for helpful conversations.
\end{document} |
\begin{document}
\title{Addressing the underrepresentation of women in mathematics conferences} \author[Greg Martin]{Greg Martin} \address{Department of Mathematics \\ University of British Columbia \\ Room 121, 1984 Mathematics Road \\ Canada V6T 1Z2} \email{[email protected]} \subjclass[2010]{01A80} \maketitle \thispagestyle{empty}
\begin{abstract} Despite significant improvements over the last few generations, the discipline of mathematics still counts a disproportionately small number of women among its practitioners. These women are underrepresented as conference speakers, even more so than the underrepresentation of women among PhD-earners as a whole. This underrepresentation is the result of implicit biases present within all of us, which cause us (on average) to perceive and treat women and men differently and unfairly. These mutually reinforcing biases begin in primary school, remain active through university study, and continue to oppose women's careers through their effects on hiring, evaluation, awarding of prizes, and inclusion in journal editorial boards and conference organization committees. Underrepresentation of women as conference speakers is a symptom of these biases, but it also serves to perpetuate them; therefore, addressing the inequity at conferences is valuable and necessary for countering this underrepresentation. We describe in detail the biases against women in mathematics, knowing that greater awareness of them leads to a better ability to mitigate them. Finally, we make explicit suggestions for organizing conferences in ways that are equitable for female mathematicians. \end{abstract}
\section{Introduction}
In the context of mathematics conferences, the subject of gender is something of a taboo. Certainly, bringing up the subject at all during a conference would be deemed outside the norm. And yet: in our graduate programs, women are still noticeably in the minority. We have a significant shortage of female mathematicians in our departments, particularly in senior positions. And where conferences are concerned, it is unfortunately quite common to have so few female speakers that they stand out from the homogeneously male pack. The pace of progress seems to be slowing, if we are continuing to improve at all. In short, we have too few women among the speakers at our conferences, and there is good reason to doubt whether the problem is simply going to fix itself any time soon.
The purpose of this article is to examine the issue of underrepresentation of women as speakers in mathematics conferences, as well as related disparities in conference organization, mathematics department composition, and recipients of mathematical prizes. We believe that it is our ethical responsibility to equitably represent all members of our profession\footnote{By ``our profession'', we are referring to academic mathematics; by ``our responsibility'', we mean the responsibility of all academic mathematicians, regardless of gender. This article is targeted most directly to American mathematicians, but it is relevant for mathematics around the world.} and to dismantle any obstacles to advancement in that profession, particularly when those obstacles disproportionately burden a minority group. In particular, we argue that it should be an explicit priority for any organizers of mathematics conferences to address appropriate representation of women in their lists of speakers; we further assert that we are not currently succeeding at meeting that priority.
Inviting speakers to conferences is about more than just rewarding a few already established people: we want to enrich the research of attendees and speakers alike. And one aspect of that enrichment is to expose ourselves to as many new and different viewpoints as possible; limiting our speaker pool (however unintentionally) is directly at odds with this goal. Research has shown that demographic diversity\footnote{In this article, we address gender diversity directly; however, much of what we say applies equally well to other aspects of diversity. We will include references to bias unrelated to gender, for example, when it illuminates the points we are making. Whenever we use the unmodified word ``diversity'', we intend the statement to be valid whether it is read as a statement about gender diversity specifically or about diversity in general.} has measurable positive effects on the outcomes of group enterprises; conversely, lack of diversity, in addition to perpetuating harmful stereotypes about mathematics, actually diminishes our ability to evaluate unfamiliar ideas. Once we include gender diversity among our explicit goals for conference organization, we become motivated to recognize the problem that currently exists and to proactively seek solutions.
Shortfalls of female speakers are, unfortunately, extremely common in all areas of science, technology, engineering, and mathematics (STEM).\footnote{Similarly, we address academic mathematics directly herein, but much of what we say applies equally well to other STEM fields and types of careers. We certainly make relevant arguments using data from STEM fields besides mathematics; indeed, we find corroborating evidence from disciplines even further afield.} In mathematics, just as in other STEM fields, American graduate schools have been producing a steady stream of female PhDs for a generation, but that level of representation has failed to persist in many aspects of our discipline. We will display the overall underrepresentation of women, as well as additional differential underrepresentation in prestigious conference roles, in two recent major conferences: the 2014 ICM and the 2014 Joint Meetings of the AMS/MAA. The same disappointing trends occur in employment statistics, on editorial boards, and in lists of prize-winners. Clearly, our current system is not living up to our standards where gender diversity is concerned; what are the causes of this shortfall?
Certainly there is no genetic predisposition that favors males over females in STEM fields (despite how often such claims are made). Girls and boys have always performed comparably on measures other than standardized tests; on these tests, the gap has dramatically decreased, to a nearly insignificant size, over the last generation. Furthermore, gaps in standardized test scores are significantly correlated to measures of gender inequality in the students' cultures. These effects manifest themselves in the set of high-achieving mathematics students as well as in the entire population (refuting ``more males at the top'' theories as well). None of this is consistent with innate gender-based differences in mathematics ability.
What, then, can be causing this underrepresentation of female mathematicians? It arises, in fact, from an assemblage of deeply entrenched biases that have been surreptitiously inserted into our perceptions and reactions. These implicit biases cause us to internally associate STEM careers---and, for that matter, positions of authority---with male defaults. Our culture's insertion of these biases into our subconscious, sadly, begins extremely early in our lives.
Through unconscious differences in the way they respond to girls and boys, schoolteachers reinforce assertive behavior and (over)confidence in boys, but passiveness and math anxiety in girls.\footnote{It is worth remarking that statements of this type are statistical statements, about large-scale trends of behavior. Of course there are individual exceptions to any such trend. However, unlike a proof in an axiomatic system, the existence of specific counterexamples does not invalidate the larger trends described in this article. Scientific research involving human behavior in complex societies does not look like theoretical mathematics, but it is completely appropriate for its subject.} Girls become less likely to volunteer for mathematical enrichment than boys, and children are unwittingly trained to perceive women as less mathematically able than men. These erroneous attitudes are compounded by a pervasive categorization of mathematical ability as being fixed and innate, rather than malleable and able to be strengthened---a categorization that our professional community unintentionally perpetuates.
Once these implicit biases are in place in all of us, they lead to further measurable discrimination that happens right under our noses. Experiments with dual versions of applications, resumes, and promotion files, identical except for the gender of the name, consistently demonstrate that women are rated lower than men even when there is literally no difference between them. Teaching evaluations display the same gender differential, as do multiple other evaluation instruments both inside and outside STEM; the vaguer and less concrete the evaluation criteria, the more our unconscious biases manifest.
The gender-related implicit biases of those around us also socialize us into choosing different behavior patterns. Without realizing it, we interpret a man's assertive demeanor as confidence but a woman's assertive demeanor as abrasiveness; we notice when women interrupt men but not when men interrupt women. The corresponding negative reinforcement indoctrinates women into dismissing their own ability (and socializes men into overestimating their own). Particularly in contexts that are stereotyped as male, such as STEM, this socialization of women leads to internal experiences of impostor phenomenon and stereotype threat, which degrade their ability to succeed. We cannot judge the choices that individual female scientists make in isolation from this social context.
The fact that biases exist at every stage of students' and professionals' careers causes a ``leaky pipeline'', where fewer and fewer women find success in advancing to progressively higher levels of achievement. Well-documented examples of this in the business world include the persistent wage gap between women and men and the poor record of top companies promoting women to the executive level. The same attrition can be seen in mathematics when looking at grant funding, tenure decisions, and awardee selection.
We are mostly unable to perceive, in individual situations, this pervasive pattern of invisible discrimination (that is the definition of invisible!); as a result, we fool ourselves into thinking that academic mathematics is a pure meritocracy. Once we take a closer look, however, we see that the current system (academic and societal) has been sullied with extraneous features that consistently discount merit where disadvantaged populations are concerned. Making an effort to address underrepresentation of women in mathematics, therefore, is not some extra component that introduces injustice---rather, it is an attempt to recognize and do away with the injustice that is already present.
It behooves us, then, to consider explicit actions we can take to mitigate the current unfairness in our discipline. Underrepresentation of women at conferences is a symptom of this unfairness, but it also contributes to perpetuating it; for this reason, we find it extremely important for this particular symptom to be treated (in conjunction with efforts to point out the larger inequities). Our goal is to put into practice guidelines for compensating for all the bias inherent in the system, with the hope that conscious attention to those biases will also help reduce them in the future.
To that end, we should plan our conferences with equitable gender representation in mind from the very start, and explicitly communicate with other conference organizers our expectation of meeting this goal. We should be extremely attentive to the way we select speakers, particularly keeping in mind that we are prone to misevaluating academic records of women and to overlooking qualified female candidates. We should recognize that mundane logistical choices can make conferences less welcoming to women if we are not careful. Moreover, we should publicly commit to equitable gender representation at our conferences, display this commitment visibly in conference materials and through our actions during the conference, and track over time how well we are (or are not) succeeding. Finally, we should simply talk more openly about underrepresentation of female mathematicians, not only in the context of conferences but in all aspects of the academic career; and we should have our words (and our attention to others' words) reflect the reality that mathematics is for women as well as for men.
We now go into more detail about the facts described in this introduction. We provide ample references to the primary research literature in sociology and psychology; we also include articles and informative summaries published by research organizations, web sites of relevant institutions, and well-written blog posts. In Section 2 we discuss the priorities for organizers of mathematics conferences to address, and we support the assertion that we are not currently succeeding with appropriate representation of women among speakers at conferences. We demonstrate in Section 3 that the underrepresentation of women in mathematics, far from being genetic in origin, in fact arises from an assemblage of deeply entrenched biases that have been surreptitiously inserted into our perceptions and reactions. With this perspective in mind, we conclude in Section 4 with lists of guidelines for conference organizers committed to equitable representation of female speakers.
\section{Recognizing our responsibility}
Dialogue about how to undertake any enterprise must start with a necessarily philosophical inspection of the priorities of that enterprise. We begin by discussing the various goals of organizing a conference, and in particular placing the priority of diversity itself---why it is important, what benefits it provides, and what harms are done by its absence---on equal footing with our other priorities. We go on to verify our empirical observation that this diversity priority is not being successfully met when it comes to women speaking at mathematics conferences. At the end of this section, we quickly review the research that refutes any attempt to ascribe these empirical observations to genetic differences between the sexes, in preparation for an examination of the actual causes of the disparity.
\subsection{Appropriate representation of women as an explicit priority}
Every project comes with a set of goals and priorities; organizing a conference and selecting its speakers is no different. Even though we tend to leave these goals unspoken, stating priorities always improves the quality and appropriateness of the end result. So let us articulate what we hope to accomplish when we choose a set of speakers for our conferences.
We know that giving a talk at a conference is not simply a reward for having published the most papers or won the most awards---otherwise every conference would feature the same prolific writers or the set of living Fields Medalists (or alternately, by induction, the speakers who have given the most talks at conferences!). Rather, the main purpose of a conference is to enrich the research and careers of those in attendance. In service to that goal, we try to include people working in newly hot topics, to give exposure to up-and-coming colleagues, and even to invite mathematicians who give accessible and entertaining talks~\cite{G}. Of course, inviting speakers is beneficial for the speakers' careers as well, and we value providing that benefit equitably to a representative cross-section of successful practitioners of our field. In addition, providing speaking opportunities to more mathematicians increases the community's awareness of them, leading to a greater likelihood of subsequent invitations to speak beyond the individual conference we are organizing. Thus the benefit extends to those future conferences as well, by enlarging the pool of known candidates to invite as speakers or even to organize events around~\cite{AWM}.
In addition to these already laudable goals, much of what we want to accomplish in our conference organization involves diversity. We instinctively accept the desire for diversity when it comes to seeking a spectrum of mathematical fields, for example: a Joint Meeting of the AMS and MAA, or an International Congress of Mathematicians, would definitely appear strange if it had, say, no number theory whatsoever. Depending on the scope of our conference, we might also include as a goal the representation of scientists at various stages of their academic careers, and perhaps speakers from different geographical regions as well~\cite{VS}. In all these ways, we understand that diversity is good for its own sake, as the practical implementation of the desire to represent the entirety of our community. It stands to reason, then, that {\em appropriate representation of women as speakers is a valid and sensible priority for our conferences (indeed, for all STEM conferences).} Indeed, we deem it an even more crucial priority: while complementary thematic conferences can favor scientists from different topics and regions without any one population being worse off on average, gender differences consistently manifest as underrepresentation of the same population, namely women---a population, moreover, defined in terms of a characteristic completely unrelated to the subject matter of the conference.\footnote{Conferences specifically designed to be exclusively for female mathematicians exist, of course, but do not undermine this argument---indeed, such conferences are a specific reaction to the underrepresentation of women at mathematics conferences and the biases that female mathematicians face. If there were no gender bias in the discipline of mathematics, then organizing conferences for women only would not be necessary.}
It is worth pointing out that not only is diversity a noble ideal, but it also has measurable positive effects. Groups that are more diverse have been shown to benefit from the greater range of perspectives present and from a more inclusive environment that encourages people to contribute their individual views \cite{Su}. Including a diverse set of participants ensures that the best minds are represented, affirming the event's commitment to honoring merit---and merit-centered processes elevate the performance of everyone involved \cite{Ries}. With respect to gender, efforts to seek out submissions to tech conferences by women have resulted in noticeably more submissions at the highest end of the quality range \cite{JSConf}; as a related example, startup companies tend to be more successful when founded by women than when founded by men \cite{Ries}.
On the other side of the same coin, lack of diversity has measurable detrimental effects. Math, in our society, has plenty of negative stereotypes associated with it already; presenting nearly all-male conference speaker lineups perpetuates the stereotype of math as for men alone. This stereotype perpetuation occurs in other fields too: in philosophy, for instance, it ``undermines the self-confidence of women who aspire to become professional philosophers, or to remain in this exceptionally competitive profession [and] feeds the conscious or unconscious biases against women of the people who decide the fate of those who aspire to become or remain in the profession'' \cite{G}.
Psychological studies have shown that people unconsciously evaluate women less favorably in settings where they make up a small fraction of the participants, all the more so when gender-typing (the process in which our society trains us to associate certain activities or qualities with a single gender) is present \cite{H}. Thus an underrepresentation of women also causes them to be judged more harshly. The simple fact that a conference is overwhelmingly male makes it less welcoming for women to apply for and join (as it does for businesses \cite{Ries}), because the imbalance makes their gender a salient characteristic in a situation where it oughtn't be; the artificial salience of gender is a major component of ``stereotype threat'' \cite{FP:G}, about which we will say more later. Lack of gender diversity also harms men: homogeneous groups promote a sense of entitlement and complacency, an atmosphere where feedback and outside opinion are less welcome, and a susceptibility to ``groupthink''. As E.\ Ries summarizes \cite{Ries}: ``That's why I care a lot about diversity: not for its own sake, but because it is a source of strength for teams that have it, and a symptom of dysfunction for those that don't.''
With this list of priorities in hand, it is clear that we would not be satisfied with a conference whose speakers were all in stagnant mathematical fields, or uniformly dull and inaccessible, or (thematic conferences aside) all in their 20s or all from Texas. No organizer of a Joint Meetings or an ICM would say, ``Gosh, every time we plan our conference, we always end up with practically no number theory at all \dots\ oh well, them's the breaks!''---because an outcome that fails to achieve an explicit priority is not a matter to shrug off, but rather motivation to examine and improve the process leading to the outcome. Similarly, the reasonable reaction to chronic underrepresentation of women at math conferences---which is a failure to achieve an even more important explicit priority--- is not ``Gosh \dots\ them's the breaks'', but rather a concerted effort to identify and overcome the flaws in the process.
Of course, concentrating on gender carries with it the danger of failing to acknowledge other existing inequities present to various extents in the US and other countries (relating, for example, to ethnicity, sexual orientation, physical ability, socio-economic status, country of origin, religion, or political ideology); for that matter, we are starting to better understand that our society's binary gender construct does not fully represent every individual's self-identification (nor even their biology). However, we should not let the perfect be the enemy of the good.
There have been hundreds of math PhDs earned by women in North America alone, every year for many years running \cite[Table GS.2]{CMR}. We quite plainly are in a position to ensure appropriate female representation at math conferences; our inability to deal immediately with all inequities does not give us an excuse to ignore the important inequities we do have the power to correct. That being said, most of the discussion in this article can be immediately applied {\em mutatis mutandis} to related priorities such as better inclusion of underrepresented ethnicities.
\subsection{The present shortfall of female speakers} \label{miao}
Once we agree that proper representation by female scientists is a priority for us, it is depressingly easy to see that our priority is not being met. This failure is not unique to mathematics: even in scientific disciplines with greater gender parity, such as physical anthropology, primatology, and microbiology, women are less likely than men to be invited to speak, particularly when the organizing team does not include women \cite{IYH,CH}. In tech conferences, proposed talks are submitted far less often by women than by men, unless herculean efforts are made by the organizers to solicit proposals from women \cite{St}---even though the proposals that are eventually submitted by women tend to be of significantly higher quality on average than those submitted by men \cite{JSConf}.
Returning to mathematics, let us fix a conservative measuring standard and then apply it to two recent highly prestigious conferences. For this discussion, we will use 24\% as a minimum estimate of an appropriate proportion of female speakers in mathematical conferences. Every year since 1991, the percentage of PhDs in mathematics granted by US institutions to women has been 24\% or greater, with a peak as high as 34\%~\cite{AMSsurvey}. While these percentages would ideally be closer to 50\%, they still represent thousands of female mathematicians---mathematicians who have used these two decades and more to complete successful postdocs, amass tens of thousands of publications, earn tenure, be promoted to full professor, train PhD students of their own, and even lose their eligibility for the Fields Medal on account of their age. In particular, suggestions to the effect that not enough time has passed for women to work their way through the system cannot be taken seriously.
The first of the two conferences we examine is the 2014 International Congress of Mathematicians (ICM), held in August 2014 in Seoul, South Korea. Counting directly from the program on the ICM's official website, we find that only one of the twenty plenary speakers (5\%) was female. There were 20 sessions of invited speakers, 17 of which had at least six invited speakers; more than half of those 17 sessions included either a single woman among the speakers or no women at all. Overall, among the plenary and invited speakers, there were 35 female speakers out of a total of 237, or 14.8\%. (All the data gathered for this section can be examined in more detail in the author's supplement \cite{Martin} to this article.)
Human beings' notoriously poor sense of probability tempts us to believe that such underrepresentation of women might just be the result of chance. Indeed, it is a general trait known to psychologists that people tend to ascribe outcomes they expect to stable causes, while chalking up unexpected outcomes to temporary causes \cite{CI}. We mathematicians are well equipped, however, to perform the easy calculations showing how wrong this instinct would be. The appropriate null hypothesis is ``the ICM speakers were selected independently of gender from among the pool of people who have received PhDs in mathematics in the last 25 years''. Under our conservative 24\% assumption from above, the observation of nineteen male plenary speakers and one female plenary speaker rejects ($p<0.031$) this null hypothesis. Indeed, it is 18 times as likely that we would have seen an ``overrepresentation'' of female plenary speakers (five or more, since $20\times24\%=4.8$) by chance than to have seen at most one.
We might hope to discount this one observation as an anomaly (despite the fact that this statistical test is designed precisely to tell us not to do so); however, we have much more data at our disposal. The data on invited ICM speakers as a whole soundly rejects ($p<0.0004$) the null hypothesis that speakers were included independently of their genders; in this case, it is almost 1600 times as likely that chance would have led to more than 24\% women than at most the 14.8\% we observe. The only rational conclusion is that there has been bias somewhere along the line. One wonders, of course, where this bias takes place: in the process of producing the PhDs in the first place, or during postdoctoral positions, or faculty hiring and promotion, or evaluation of research records, or selection processes for conferences. One of the main purposes of this article is to demonstrate the existence of {\em gender biases at every one of these stages}.
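The binomial-tail calculations in the two preceding paragraphs are easy to reproduce. The following sketch (plain Python, standard library only; the function names are ours) checks both the plenary-speaker test and the combined plenary-and-invited test under the conservative 24\% assumption:

```python
from math import comb

def binom_pmf(n, k, p):
    """P(exactly k successes in n independent trials, each succeeding with probability p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def binom_cdf(n, k, p):
    """P(at most k successes): the lower tail of the binomial distribution."""
    return sum(binom_pmf(n, j, p) for j in range(k + 1))

P_FEMALE = 0.24  # conservative share of recent US math PhDs earned by women

# ICM 2014 plenary speakers: 1 woman out of 20.
p_low = binom_cdf(20, 1, P_FEMALE)        # P(at most 1 female plenary speaker)
p_high = 1 - binom_cdf(20, 4, P_FEMALE)   # P(5 or more, i.e. "overrepresentation")
print(f"P(X <= 1)  = {p_low:.4f}")        # about 0.030, hence p < 0.031
print(f"ratio      = {p_high / p_low:.1f}")  # about 18

# ICM 2014 plenary + invited speakers together: 35 women out of 237.
p_invited = binom_cdf(237, 35, P_FEMALE)
print(f"P(X <= 35) = {p_invited:.1e}")    # below 0.0004
```

Exact binomial arithmetic suffices here; for larger counts one would typically use a library routine (e.g.\ an exact binomial test) rather than summing terms directly.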
The second conference we examine is the 2014 Joint Mathematics Meetings of the AMS and the MAA, held in January 2014 in Baltimore, Maryland. At first glance, the news seems to be better here: with 746 female participants (speakers and organizers) out of a total of 2,708, the percentage of female participants was 27.5\%, which at least exceeds our conservative measuring standard of 24\%. As we look a little more closely, however, we see some alarming disparities in the distribution of those participants. The percentage of speakers in contributed paper sessions who were female was 36.8\%, while the percentage of invited speakers who were female was only 25.6\%; this echoes the lower percentage of invited participants observed in other sciences \cite{IYH}. When we restrict to AMS-organized invited sessions, there were only 24.8\% female speakers and 22.0\% female organizers.
Among the JMM sessions that had their organizers listed explicitly, sessions with at least one female organizer had an average of 38.3\% female speakers, yet sessions with no female organizers had only 19.8\% female speakers, or half as many. Using a binomial regression, the null hypothesis (that the proportion of organizers who are female is not associated with the proportion of speakers who are female) is dramatically rejected: the probability of seeing a result at least as extreme as the data is less than $10^{-12}$. Again, this disparity echoes observations in other fields~\cite{CH}.
Differential representation of women can be found in employment statistics as well \cite{CMR}: as we pass from part-time faculty to full-time non-tenure-track faculty, to tenured faculty at non-PhD-granting institutions, to tenured faculty at PhD-granting institutions, the percentage of women decreases steadily. Where mathematics journals are concerned, a sample of ten of the most prestigious ones yields only 6.6\% of editors who are female; six of these ten journals have no women at all on their editorial boards. (Again, details can be found in \cite{Martin}.) And while it was legitimately exciting when M.\ Mirzakhani became the first woman to be awarded the Fields Medal in August 2014, it is hard not to wonder, given the fact that less than 2\% of all Fields Medalists and 0\% of all Abel Prize winners to date are female, how many outstanding female mathematicians have not had their work sufficiently recognized.
The AMS website has a page listing eighteen prizes and awards which have been given, as of the end of 2014, to 374 recipients; only 25 of them, or 6.7\%, have been women. But even that is not the complete story: one of these prizes is the Ruth Lyttle Satter Prize in Mathematics, for which only women are eligible. Once this prize is removed, the percentage of AMS prize and award recipients who are female plummets to a paltry 3.3\%. Rather embarrassingly for us, in the history of the AMS, more women have won the Satter Prize (thirteen) than have won the seventeen other AMS prizes and awards combined.
It is certainly not rare for us to experience firsthand the underrepresentation of women at conference after conference, award announcement after award announcement. The very fact that we can find, on the internet, a web page \cite{P} devoted to debunking the claim that underrepresentation of women at STEM conferences is due to chance, as well as a ``bingo card'' \cite{F} full of anticipated excuses for not having more female speakers at a STEM conference, is dark-humor evidence that underrepresentation of women in science is both widespread and widely dismissed. Even single instances of underrepresentation we see are too skewed to rationally explain away as due to chance; in aggregate, there is simply no way to ignore the reality of bias. Despite the part of human nature that doesn't like to examine our imperfections closely, we simply must acknowledge the truth: the system we have in place now is causing a failure to meet our gender-diversity priority. And in addition to missing out on the positive effects of diversity, we need to question our default assumption that we really are choosing the most qualified speakers available. As Ries writes \cite{Ries}: ``[W]hen a team lacks diversity, that's a bad sign. What are the odds that the decisions that were made to create that team were really meritocratic?''
As scientists, after observing the existence of gender-based inequities in our discipline, our natural next step is to investigate their causes.
\subsection{Invalidity of genetic explanations}
A common interjection at this point in a discussion of women in STEM disciplines is to propose that there are neurological or ``hard-wired'' differences between female and male brains that result in men being better, on average, at mathematics than women. The simplest such hypothesis is that the distribution of males' mathematical ability (however that might be measured and quantified) is of similar shape to, but with a significantly higher mean than, the distribution of females' mathematical ability. A variant of this hypothesis is the just-so story that the two distributions have similar means, but the males' distribution has a significantly higher variance than the females' distribution; somehow the assertion that there are exceptionally math-hopeless men in the population seems soothing to those proposing that only men can be exceptionally math-savvy in great numbers.
Many researchers have made explicit various hypotheses that might explain overrepresentation of men in STEM fields \cite{Bar,EHL,KM,LHPL}, some genetic and some environmental. Any hypothesis, regardless of the motivations of its proponents, can stake its claim as a possibility for the truth and have its validity tested by the usual scientific method. As it happens, however, these genetic hypotheses have been thoroughly refuted by scientific studies. Indeed, such a refutation has appeared recently in the pages of these Notices \cite{KM}, so we restrict ourselves to a brief summary here.
First, the hypotheses that are genetic in nature are usually supported by girls' and boys' scores on standardized tests, even though restricting to this single instrument of measurement of children's mathematical skill is problematic \cite{AAUW,HLLEW,NV} and not particularly consistent with other data. (For example, girls consistently get better grades than boys in math classes at all levels of primary and secondary school \cite{EHL}.) It is true that a generation ago, boys did perform notably better than girls on standardized tests, but that gender gap has been repeatedly shown to be practically nonexistent today \cite{AGKM, EHL,GMSZ,HLLEW,HM,KM,OECD}; such a rapid change is utterly inconsistent with any genetic basis for differential mathematical skill. Moreover, several studies of students in dozens of countries \cite{EHL,GMSZ,HM} have shown that when gaps between boys' and girls' mathematical performance do exist, they are significantly correlated with measures of gender inequality in the country's employment, government, and culture.
The hypothesis that men exhibit greater variability in mathematical skill than women, and hence that men are appropriately represented in the subset of people with extremely high mathematical achievement, also does not stand up to scrutiny \cite{AGKM,GMSZ,HLLEW,HM,NV}. The correlations with measures of gender inequality mentioned above persist even when restricting to only the best mathematics students. It is worth noting that this ``greater male variability hypothesis'' would predict that the overrepresentation of men in scientific fields would be extremely large in the most math-intensive careers but would diminish as one examined jobs that required less and less mathematics for success; this prediction is certainly inconsistent with the empirical observation that underrepresentation of women is uniform over STEM disciplines and careers.
In response to a question to this effect at a panel discussion, N.\ deGrasse Tyson \cite{Tyson} described the many ways in which his environment discouraged him, an African--American, from pursuing science as a career, and speculated that the experiences of girls interested in scientific opportunities might be quite similar. ``So before we start talking about genetic differences,'' he said, ``you gotta come up with a system where there's equal opportunity. {\em Then} we can have that conversation.'' Even though there seems little need, given the evidence summarized above, to continue the conversation about genetic differences, it is definitely true that there is no point in looking elsewhere if the system itself is not equitable to women and men. Accordingly, we turn now to an honest examination of the ways our society and our academic system treat women and men who are interested in mathematics.
\section{Assessing the obstacles}
We've debunked the arguments that there are genetic reasons for women to constitute such a small percentage of speakers at mathematics conferences. The opposing hypotheses assert, in one form or another, that the underrepresentation comes from environmental factors (aspects of our society, our academic system, and so on); these hypotheses must clearly be taken seriously. As Ries writes \cite{Ries}, ``Demographic diversity is an indicator. It's a reasonable inference that a group that is homogeneous in appearance was probably chosen by a biased selector.''
To be clear, we are not suggesting that academic institutions in the US are prohibiting women from joining their universities or degree programs, as was the case several decades ago; we are not even suggesting that significant numbers of individual faculty members intentionally discriminate against women any more. ``The word `bias' here is not meant to imply deliberate bias. Although there may be deliberate cases, those are not the ones we are concerned about. Rather, we are concerned about the subtle, unintentional examples'' \cite{VS}. More specifically, we are proposing that typical mathematicians (indeed, typical human beings), both female and male, carry inside them unconscious habits and patterns of thought that add up to a significant difference in how female and male mathematicians are perceived and treated.
Of course, we are scientists: even though we have seen plentiful arguments against genetic hypotheses, we also need to see arguments supporting this new hypothesis. If we are to believe that the underrepresentation of women at our mathematics conferences is unknowingly caused by our community of well-meaning mathematics professionals as a whole, what are the mechanisms at work?
Fortunately for our argument, but most unfortunately for our discipline, the mechanisms causing bias against women in mathematics are both numerous and thoroughly documented.
\subsection{Ubiquity of implicit biases} \label{implicit bias sec}
We are complicated human beings in a complicated society, and as it turns out, we actually carry with us implicit biases hidden in the blind spots of our self-perception. Colored by these biases, our brains associate words like ``scientist'', ``boss'', ``doctor'', ``leader'', ``genius'' with male defaults. Mathematics, specifically, is stereotyped as a discipline for men, as has been measured by tests of people's implicit attitudes \cite{GRD,LHPL}. (We encourage the reader to try talking about colleagues or students without using gender pronouns: watch how quickly the assumptions of the listeners kick in.) These biases cause us to think of male colleagues more readily than female colleagues, ``leading to more invitations to men, leading to greater visibility for men, leading to yet easier availability of men's names'' \cite{VS}; they even cause us to evaluate files differently when the name is changed between female and male, with everything else kept equal. These biases, furthermore, are present in both women and men.
This sounds too terrible to be true (and indeed, those of us who have experienced success in our discipline are understandably resistant to the assertion of unfairness in the system). However, the biases summarized above are an extremely well-chronicled phenomenon.
\subsubsection*{Implicit biases in schools}
As early as primary school, our society trains girls to believe that they are not as valued as boys in an educational environment. Girls receive significantly less attention from teachers than boys \cite{AAUW}. Specifically, boys assert themselves by raising their hands, switching the arm they have raised, even jumping up and down until they get their teacher's attention, while girls soon put their hands down when they are not called on. These behaviors are reinforced by the reactions they provoke: boys call out answers and contributions to the discussion out of turn and are not reprimanded, but when girls do the same, they are told to remember proper etiquette. When girls do answer questions, they are more likely to receive a brief response that merely recognizes the fact that they answered, whereas boys are more often given follow-up questions or time to expand upon their answer \cite{W}.\footnote{This unconscious differential treatment manifests with ethnicity as well as gender: for example, even though African--American girls attempt to participate more, they actually receive less attention than Caucasian girls from teachers \cite{AAUW}.}
With mathematics specifically, this situation is exacerbated by the fact that most primary school mathematics teachers are not math specialists; indeed, many primary school math teachers have math anxiety, as measured by psychological instruments such as surveys. Studies have shown that the math achievement scores of girls (but not boys) at the end of a school year are correlated with the level of math anxiety of their teacher, even though it was uncorrelated at the beginning of the school year \cite{BGRL}---presumably because witnessing their predominantly female teachers' anxiety reinforced the stereotype that boys are better at mathematics than girls. We are literally teaching girls to have math anxiety before they even leave elementary school.
Not surprisingly, these constant micro-inequities accumulate to form significant barriers. Girls assess their own mathematical abilities as worse than they actually are, in stark contrast to boys. Consequently, girls are less likely to volunteer for math competitions and programs for mathematically gifted children, and are more likely to underperform in standard testing environments, particularly when their gender is made salient \cite{NV}. While young children may not evaluate their peers' math ability differently according to gender, they do hold the attitude that adult men are better at math than adult women \cite{LHPL}. When rating their very own children, parents estimate the mathematical IQ of their sons as ten points higher on average than that of their daughters \cite{LHPL}.
A compounding factor is the dissonance between two theories of intelligence that people can implicitly hold: an ``entity theory'', which asserts that intelligence is ``a fixed quantity that cannot be changed very much by effort and learning'', and an ``incremental theory'', which asserts that intelligence ``is malleable and expandable'' \cite{MD}. In general, people who give credence to the entity theory (or who hear it asserted) are more likely to judge people's ability based on stereotypes than people who give credence to the incremental theory. Moreover, when comparing organizations or disciplines whose statements endorse fixed or malleable views of intelligence, people are more drawn to the latter kind and expect to feel more comfortable and accepted there \cite{MD}.
Believing math to be a fixed trait can ``turn students away from challenges that might undermine their belief that they have high ability'', while believing math to be a malleable quality can make students ``seek challenges that can result in better learning'' and ``remain highly strategic and effective in the face of setbacks, even showing enhanced motivation and performance'' \cite{GRD}. The entity theory, and the corresponding emphasis on innate ability, seems to be particularly strong in American culture: it has been shown to partially explain the difference in math achievement between students of American parents and students of Asian parents, who are more likely to subscribe to the incremental theory and emphasize effort and working through mistakes \cite{U}. The one-two stereotype punch of ``boys are better than girls at math'' and ``math ability is innate and fixed'' is actually what constantly erodes the confidence of female students as they develop mathematically; emphasizing a description of math ability as malleable and acquirable has been shown to mitigate the effect of the ``boys better than girls'' stereotype \cite{GRD}.
Not only do most of us mathematics instructors presumably hold an incremental theory of math ability (why else would we bother teaching at all?), we have surely all bemoaned many times the fact that our students can label themselves as innately incapable of understanding math, and wished we could make them understand that effort and persistence will pay off. But despite what we want our students to believe, our discipline seems to have a strong view of mathematical research ``power'' as a fixed aspect of intelligence. We gossip about our fields' wunderkinds and the {\it a priori} inevitability of their successes, we celebrate our Beautiful Minds and Good Will Huntings, and we prohibit mathematicians over 40 from winning the Fields Medal on the suspicion that anyone capable of producing spectacular breakthroughs would have done so by then.\footnote{Whatever its origins or traditions, the rule prohibiting people over 40 years of age from receiving the Fields Medal is another form of inequity---in this case, age discrimination. Again we note how the implicit (and questionable) assumption ``people over 40 are less capable of mathematical innovation than people under 40'' teams up with ``math ability is innate and fixed'' to produce this bias. In this case, the rule does not just exclude older mathematicians; it also causes a bias against people whose careers might have been delayed due to any number of reasons---parental leave, political unrest, bad job markets, taking care of elderly parents, settling into a tenured job later because of a two-body problem---unrelated to mathematical ability.} Furthermore, we fear making public mistakes (as evidenced by the comments of participants in the Polymath 8 project \cite{Polymath}) lest others judge us to be {\em inherently} incapable of serious mathematics.
In reality, our mathematical heroines and heroes, past and present, have all spent huge amounts of time learning new things, struggling to understand them, and feeling proud when they sort the dissonances out, just like us. It is important to explicitly remind ourselves of the acquirable nature of math research ability---particularly since we are not just instructors, but also personnel evaluators, and thus gatekeepers of the academic mathematical world.
\subsubsection*{Implicit biases in academia and business}
After receiving an entire childhood of unintentional training in the supposed differences between female and male math ability (and other cultural myths), we enter our careers burdened with implicit biases that manifest in a whole new set of ways. For example, professors at US universities who are contacted by students interested in their doctoral program respond more frequently to men than to women (and, for that matter, more frequently to Caucasians than to applicants of other ethnicities)---and this propensity is exaggerated in more lucrative fields and at more prestigious institutions \cite{MAC}. Both female and male faculty members rate students' application materials differently when the applicant is female or male: even with identical files, the female applicant is judged to be less competent, and male applicants are offered a 14\% higher starting salary and more mentoring on average than female applicants \cite{MDBGH}.
Similar disparities are present in situations other than authority figures evaluating younger personnel. Teaching evaluations of female professors by students are lower than teaching evaluations of male professors---all the more so when the professor teaches a technical course or requires a large amount of work from the students, or when the student evaluator has been given negative feedback by the professor or is male \cite{Kas,N,SK}. Topics for conferences are often chosen based on the strengths and specialties of the first star researchers who come to the organizers' mind---and those researchers are overwhelmingly male---and so even the very choice of conference topics can suffer from a bias towards men over women \cite{VS}.
In addition to these and many other examples in STEM fields (see also \cite[Chapter 8]{DK} and \cite{Newsome}), biases of this sort manifest in nearly every arena imaginable. In law school, traditional pedagogical structure depresses the grades of female students, causing a gender gap that disappears when the structure of courses changes~\cite{HK}. Orchestras started hiring dramatically more women when they began to place auditionees behind screens so that their gender could not be ascertained \cite{GR}. Prospective investors listening to entrepreneurial pitches pledge more funding to male presenters than female presenters, even when the content is the same \cite{BHKM}. Particularly in stereotypically male arenas, successful women are liked less and belittled more than successful men, to the detriment of their evaluations and careers \cite{HWFT}. Female parents are perceived as being less competent in the workplace and are given lower salaries than male parents \cite{DK}; managers display bias against women's requests for flexible work schedules, interpreting them for example as revealing less dedication to their career, unlike similar requests from men \cite{BGS,Munsch,RP}.
A.\ Phillips (as quoted in \cite{G}) speaks of ``a cluster of vaguer characteristics which can override the stricter numerical hierarchy of grades or publications or degrees, always moderated by additional criteria. These more qualitative criteria [(]`personality', `character', whether the candidates will `fit in') often favour those who are most like the people conducting the interview: more starkly, they often favour the men.'' In other words, our biases find ways to manifest even when we are addressing objective, quantifiable data related to people. For example, observers tasked with judging people's attributes such as height, weight, and (startlingly) income from photographs consistently overestimate these attributes in men and underestimate them in women, even when objective comparison tools (such as a door frame of fixed height) are present \cite{BMN}. More saliently, when evaluating the research records of female and male scientists by their number of publications and the journals in which they appear, evaluators devalued the work of the women to the extent that a woman's file had to contain $2.5$ times as much productivity and impact as a man's file for the woman to be considered as competent as the man \cite{WW}. Sadly, the more subjective our criteria for research excellence are, the more our unconscious biases manifest and skew our evaluations.
Although we are most concerned with bringing unconscious sexism into the light where it can be examined and addressed, we must point out that explicit sexism is sadly not absent from STEM communities. Sexual harassment is so common at tech conferences, unfortunately, as to make it necessary to include explicit anti-harassment statements in conference materials \cite{GF:C}; such harassment is present in academic departments as well \cite[Fostering Success for Women in Science and Engineering, pages 6--8]{WISELI:o} and, in horrifyingly graphic form, on the internet~\cite{Hess}. B.\ Barres, a transgender scientist who was born genetically female and began identifying as male to his colleagues during graduate school, reports \cite{Bar} that when he was outwardly female, professors would not give him credit for solving difficult math problems, fellowships were given to less-qualified male applicants, and he was frequently interrupted in conversation; one person, after Barres became outwardly male, even declared his research to be ``much better than his sister's''.
In summary: we don't want to have implicit biases, but, rather demoralizingly, we all do. This realization surely motivates us to consider what steps we can take to counteract our biases (and Section~\ref{process} contains many explicit suggestions). But before we get there, we need to further examine the outcomes that these implicit biases generate in our society and in our profession.
\subsection{Gender-based socialization differences}
We have seen how our perceptions of other people are tainted by implicit biases; but even our perceptions of ourselves are not immune. Much of our own behavior and self-evaluation is influenced by socialization---the inductive training we receive, from how members of our society typically react to certain actions from certain types of people, in how to act in the way society deems acceptable. Girls, for example, are socialized from an early age to place others' needs over their own interests, and men are socialized to expect women to act that way. Women who violate this social norm are deemed demanding and malicious instead of ``nice'' and likable \cite{BLGS,BBL,EK,HWFT}; ``what appears assertive, self-confident, or entrepreneurial in a man often looks abrasive, arrogant, or self-promoting in a woman'' \cite{EIK}, and indeed the word ``abrasive'' itself seems to be {\em de facto} reserved for women in performance reviews \cite{Snyder}.
Analogously to how boys and girls are (as described earlier) socialized in primary school classrooms, adult men speak more often and more forcefully, adopt more dominant body language, and interrupt other speakers more often than women. These habits of assertiveness are negatively reinforced in women, who are again disliked and perceived as untrustworthy when they adopt such patterns \cite{R}. When women and men give feedback to others, ``the evaluation of women depended more on the favorability of the feedback they provided than was the case for men'', and women (but not men) who gave negative feedback were judged less competent by the people they criticized~\cite{SK}.
Differential socialization of women and men means that corresponding behaviors are selectively reinforced. When conferences require submission of proposals, for instance, women are quick to dismiss their own ability to submit high-quality proposals, tending to submit in much smaller numbers in the absence of specific invitations. (We scientists detect patterns, after all: when a conference has startlingly few women year after year, and when women face greater scrutiny and criticism than men for equivalent work, it's logical for a woman to conclude that it might not be worth applying to speak!) On the other hand, men are very quick to submit---even when the quality of their proposals is well below average for the conference---because they overestimate their own abilities \cite{JSConf,St}. Analogously, in the business world, even a scrupulously fair boss will end up making decisions that are biased against women, simply by failing to account for men's greater propensity to brag about their achievements \cite{RSZ}.\footnote{This sentence, containing the word ``boss'', probably evoked a quick image of a person in the reader's mind. What gender was that person?}
Many women have ``internalized into a self-stereotype the societal sex-role stereotype that they are not considered competent'' \cite{CI}. For instance, ``[g]irls and boys with the same math test scores have very different assessments of their relative ability\dots. Conditional on math performance, boys are more overconfident than girls, and this gender gap is greatest among gifted children''; and socialized differences, such as men's overconfidence in their own abilities and women's reluctance to enter into competitive situations, are made more extreme when women are inadequately represented in the situation in question \cite{NV}. ``In line with their lower expectancies, women tend to attribute their successes to temporary causes, such as luck or effort, in contrast to men who are much more likely to attribute their successes to the internal, stable factor of ability'' \cite{CI}.
In fact, the aggregate effects of these socialization biases are so powerful that they manifest in a psychologically significant way. The phrase ``impostor phenomenon'' was used by Clance and Imes \cite{CI} to ``designate an internal experience of intellectual [phoniness], which appears to be particularly prevalent and intense among a select sample of high achieving women. ... Despite outstanding academic and professional accomplishments, women who experience the imposter phenomenon persists [sic] in believing that they are really not bright and have fooled anyone who thinks otherwise.'' This phenomenon affects a large and diverse group of women with fantastic career accomplishments \cite{Kap}.
Another internalized obstacle to success for women (and other minorities) is ``stereotype threat'', described by Spencer, Steele, and Quinn \cite{SSQ} in this way: ``When women perform math, unlike men, they risk being judged by the negative stereotype that women have weaker math ability. We call this predicament stereotype threat and hypothesize that the apprehension it causes may disrupt women's math performance.'' The effect of stereotype threat on actual measurable performance has been pointed out multiple times: for example, ``ability-impugning stereotypes such as these can trigger psychological processes that can undermine the performance of stereotyped individuals, including females in math'' \cite{GRD}. It has even been shown that when administering math tests, the likelihood of a girl ``choking'' on the test is noticeably diminished when the test is explicitly described in advance as not having displayed any gender difference in performance \cite{NV}; this effect has been observed in girls as young as five years old \cite{LHPL}. Seemingly innocuous environmental cues, such as references to words like ``pink'' or ``Barbie'', can prime stereotypic beliefs to the point of changing what actions people deem acceptable, thus breeding more biased beliefs about themselves and more biased responses from others \cite{UC}.
We all have our behaviors affected by these socializations, but we certainly didn't choose them: they were inculcated into us by the culture in which we were raised, without our consent. If our society irrationally taught everyone continuously that blue-eyed people are less intelligent than other people, it would be the blue-eyed who experienced the impostor phenomenon and suffered from stereotype threat. And it would be foolish to think that this society's problems could be fixed just by blue-eyed people deciding to act or think differently: destroying the ``blue-eyed bias'' would be the responsibility of everybody whose default attitudes had been trained by this society---that is, each individual person, regardless of eye color. Similarly, in our current American society, with its (non-hypothetical) gender-based implicit biases, it is naive to think that women can simply change the way they react to their environment and dissolve all this inequity themselves. There are deeply entrenched reasons why resolving our underrepresentation problem will not be possible until all of us, not just the affected population, decide to devote effort towards recognizing and addressing the causes.
\subsection{Vicious circle of underrepresentation}
Without such efforts, the cumulative effect of biases throughout the system leads to the observation known as the ``leaky pipeline'': the higher the academic rank, the smaller the percentage of women (see \cite{CMR} and \cite[Advancing women in science and engineering]{WISELI:o}). We have already discussed how this leaky pipeline cannot be attributed to a shortfall of time for women to gain seniority in the academic world; the result must be due to aspects of our system that are biased against women. Inequities of this type have been categorized in sociology literature on the ``theory of cumulative advantage''~\cite{DE}.
In business, the fact that successful women are disliked and viewed as less competent than equally successful men prevents women from advancing---in rank, salary, and authority---as quickly as men \cite{HWFT}. For example, women are less likely to negotiate than men for raises and promotions; but when they do, the culture of many companies punishes them \cite{BLGS,BBL}, leading to a cycle of wage discrimination which, although having decreased somewhat over time, still persists today in the form of a significant gender wage gap \cite{EK,HWHH}. In addition to being implicitly regarded as unsuitable for leadership positions, women have fewer female contacts in positions of authority, which means that they are disadvantaged by having less influential networks \cite{EIK}. When they do actually become managers and leaders, ``[t]he mismatch between qualities attributed to women and qualities thought necessary for leadership places women leaders in a double bind and subjects them to a double standard'' \cite{EIK}; they have higher expectations placed upon them yet are perceived as less competent, and thus ``a woman manager's efforts to assert authority over others is subtly undercut by continuing, implicit assumptions that she is not quite as competent in the role as a man would be'' \cite{R}. Performance evaluations for women are far more likely to contain critical comments, and overwhelmingly more likely to have negative personality criticism attached to the comments on their performance, than those for men \cite{Snyder}. As a result, the leaky pipeline manifests itself in, say, disappointingly little change in the proportion of CEOs of top companies who are female \cite{CE}.
We have already seen biases when evaluating younger members of our discipline \cite{MAC,MDBGH,RSZ,WW} and when evaluating instructors \cite{Kas,N,SK}, both of which can obviously suppress the affected population from advancing in their careers; unsurprisingly, these biases manifest even when peers are appraising peers. Experiments in which identical conference abstracts were attributed to women or men show that peers (of both genders) perceived higher scientific merit, and were more likely to want to collaborate, when a male name was attached. Similarly, actual recommendation letters of contemporaries in STEM fields have been shown to exhibit different patterns of language usage in a way that benefits men over women \cite{KGH}. In medicine, for instance, letters of recommendation written for female faculty, in comparison to those written for male faculty, are shorter, mention high-status terms less and doubt-raising phrases more, lack basic features of recommendation letters more frequently, and tend to depict women as students and teachers while depicting men as professionals and researchers \cite{TP}. Even when a woman's tenure file is evaluated positively, the evaluators are four times as likely to volunteer ``cautionary comments'', saying that they would need to be given additional information to make a final judgment, as for a man's file \cite{SAR}. Once the leaky pipeline gets going, the mere fact of lower representation of women can exacerbate these biases: when MBA students assessed applicants for managerial positions, ``personnel decisions [by] both males and females were significantly more unfavorable towards women when they represented 25\% or less of the total pool'' \cite{H}.
These biased aspects of evaluation further disadvantage women in award competitions, leading to future disparities that, naturally, measurably compound themselves thereafter. For example, biased evaluations lead to smaller grants for women, which lead to somewhat curtailed research opportunities, which lead to artificially diminished research records that penalize them further for the next grant applications \cite{CGFSH,KGH}. Tenure cases are more harshly judged for female professors than for male professors \cite{SAR}. Male faculty members, who have benefited from these systemic biases enough to be overrepresented in elite universities, tend to take on fewer female graduate students and postdocs; this bias leads to fewer women being trained at these elite universities, which leads to female applicants being underrepresented in the next round of faculty hiring \cite{SS}.
The fact of the matter is: when a woman achieves the same level of accomplishment as a man (even if we were skilled at deciding who really was at ``the same level''), it is most probably a sign of much higher skill and potential in the woman, due to the cumulative disadvantages impairing her progress at every stage.
\subsection{The status quo as a distortion}
The above examples, and plenty of others from the literature, put the lie to the pleasant fiction that academia is a pure meritocracy where rewards always correspond to ability and achievement and nothing else \cite{G,KGH}. Instead, implicit biases and gender-based socialization sustain a persistent pattern of invisible discrimination against female scientists. That being said, each one of us might hope that we, personally, are objective and free from biases of this sort; unfortunately, that is extremely unlikely to be the case. Studies consistently document a universal tendency for individuals to believe themselves much less biased than others; and when our biases are hidden in our blind spots, our human nature comes up with all kinds of rationalizations about why the unsuccessful outcomes were not under our control. As L.\ Bacon writes \cite{Bac}: ``The defining feature of a blind spot is that we don't know it's there. And it's hard to notice it until we're challenged on it. We see this again and again with all-male speaker lineups at tech conferences. I certainly don't believe the organizers of those conferences are rabid misogynists; they just have a blind spot when it comes to gender, and frequently don't notice the lack of women until it's pointed out to them.''
The fact that we are aware of our thoughts and introspections yet do not recognize subliminal biases among them, psychologists theorize, convinces us that they must be absent even when they are surely present \cite{PGR}. And it takes courage, certainly, to look inwards and acknowledge our own imperfection (in this or any arena). It can be an eye-opening experience to take an implicit biases test \cite{GBN} and see that we are not a perfect specimen of objectivity. Indeed, being aware of our personal biases is far superior to the alternative, since people who consider themselves extremely objective can actually be more prone to act in a biased fashion \cite{UC}.
Even more unfairly, our biases against women operate on multiple levels in society as a whole. Even before getting to the point where science can display its biases and business can discriminate with its wages, girls have to deal with social pressures pushing them away from cerebral subjects, athletic pursuits, and male-typed careers; they grow up learning a version of history and science consisting almost entirely of the contributions of white males; they are expected to contribute more towards childcare, home maintenance, and so on outside of work yet are not credited for this ``second shift'' \cite{HoMa}; they also, unfortunately, can never ignore the very real physical danger of sexual assault \cite{AAUW,AGKM,BMN,LHPL}. ``What is interesting about the age old gender system in Western society,'' write C.\ Ridgeway and S.\ Correll \cite{RC}, ``is not that it never changes but that it sustains itself by continually redefining who men and women are and what they do while preserving the fundamental assumption that whatever the differences are, on balance, they imply that men are rightly more powerful.'' And it is difficult and frustrating to engage with this demoralizing reality. Not engaging with it, however, is tantamount to perpetuating it. In an ideal world, every part of our society would be gender-blind when it came to opportunities and rewards; but in today's actual society, being gender-blind effectively means ``if the status quo is biased, then we're ignoring that''.
Regarding mathematics conferences in particular, organizers might bemoan that there was ``nothing more'' they could have done to increase the number of female speakers, which suggests that our current system is neutral and that efforts to include adequate representation of women would be add-ons of some sort. But in fact, we have documented how our current system already includes a powerful collection of add-ons that consistently skew our decisions unfairly in favor of male mathematicians over female mathematicians. In other words, we are not simply trying to react to a perceived shortage of female mathematicians---we are, unintentionally and against our wishes, maintaining the shortage ourselves. So let us frame the issue of appropriate representation of women in mathematics, not in terms of some additional constraints that we must add in, but rather in terms of how to take out (or at least circumvent) the extraneous biases that are already there.
\section{Improving our process} \label{process}
As pointed out earlier, we would not accept consistently failing to meet other priorities, such as the representation of all mathematical areas, or inclusion of speakers from all geographical regions represented by the event: if a major conference's scientific structure were causing such a failure, we would augment or adapt its structure in response. While we might prefer to ``make it up as we go'' each time, the simple fact (as we saw near the end of Section~\ref{implicit bias sec}) is that such a strategy is inherently biased against success. Once we assert gender diversity as one of our priorities and acknowledge that the current system hinders our ability to fulfill that priority, it becomes clear that we must consider an explicit process for our conferences to assure adequate representation of female speakers.
We have gathered in this section some guidelines for meeting this priority. Many of them are common sense, especially now that we understand the causes and ubiquity of underrepresentation. While some of these steps might be labeled drastic if adequate representation of women at our conferences were in the ``Gosh \dots\ them's the breaks'' category, they are all perfectly natural ways of addressing a recognized failure to meet an important goal.\footnote{In a hypothetical situation where a national conference repeatedly left out a subject area such as number theory, the analogues of these steps would be natural ways of ensuring that number theorists were included. We reiterate, however, that remedying gender inequity is more important than such a hypothetical situation: while number theorists might be occasionally undervalued, female mathematicians are {\em consistently} underrepresented, based on a factor completely unrelated to the practice of mathematics.} While many of these action items will be useful no matter how the conference is structured, some of them make more sense for invited speakers, others for contributed lectures.
We repeat that our suggestions are aimed at mitigating existing unfairness in the current system of conference organization (unfairness that none of us wants to be there). This unfairness can be reduced and eventually eliminated, both by taking deliberate steps to fully include women in our scientific activities and by focusing attention critically on the unfairness itself. Moreover, in addition to resulting in appropriate inclusion of female mathematicians, we believe that thoughtful adoption of these guidelines will quite simply lead to better conferences, independently of speakers' genders.
\subsection{Plan from the beginning}
\begin{itemize} \item The conference's scientific committee must have women adequately represented. This is a no-brainer. A group that cannot find women at this early, controlled stage of the process will not magically be able to balance genders at their conference later on. Recall our observations about the correlation between female organizers and female speakers \cite{CH,Martin}. \item Plan far ahead of time, so that women, who tend to have more non-work responsibilities \cite{HoMa}, have adequate time to make arrangements for travel. \item Don't automatically structure a conference (or part of it) around an eminent man, but consider building it around a woman---her expertise, her dates of availability. \item Some conferences are built around a slate of invited speakers, while others consist predominantly of speakers selected from submitted proposals. The challenges to achieving gender diversity are somewhat different for these two types of conferences. Be aware of, and plan to address, the challenges particular to the chosen format. \end{itemize}
\subsection{Communicate expectations and goals among planners}
\begin{itemize} \item As a planning committee, articulate all the priorities of the conference, including proper representation of women. \item Set explicit targets---for example, that 30\% of speakers should be female. Note the subtle but important difference between ``targets'', which connote the quantification of a goal we already wish to achieve, and ``quotas'', which carry overtones of the myth of excluding more qualified men. Remember that having fewer women participate means that those who are present are judged more harshly \cite{H}. \item Make the selection criteria explicit and unambiguous within the scientific committee, to minimize the effect of biases that give more credence to candidates who conform to irrelevant stereotypes. \item Explicitly inform all decision-makers that letters of recommendation, and statements of self-promotion, tend to bias the process against women. A detailed list of suggestions and practices can be found in \cite{Valian}. \end{itemize}
\subsection{Come up with names}
\begin{itemize} \item Be explicitly prepared for the fact that men's names will come to mind more readily than women's names. \item Do a literature search for women's names in the relevant areas. For that matter, make sure the relevant areas are broad enough to capture a significant cross-section of scientists. \item Consult a database of female mathematicians. For example, if a conference is affiliated with any of the mathematical institutes sponsored in part by the National Science Foundation, the organizers should have access to their expertise database \cite{NSF}. \item For that matter, organizers of recurring specialist conferences can band together with others in the field to build their own databases of minority scientists, such as the Women in Number Theory database~\cite{WIN}. \item Have each organizer ask five respected colleagues to suggest female speakers for the conference, as part of the initial planning process. \item Since women are overrepresented at lower prestige institutions, don't stop searching once the people at high-prestige institutions have been exhausted. Don't dismiss possible presenters because of quick initial impressions that they aren't on a high enough level---remember that implicit biases color these impressions. \item Proactively seek personal communication with potential female speakers, brainstorming their possible submissions. \item Add a ``suggest a speaker'' form or email link to the conference proposal website. Include nearby a link to a conference diversity statement (see below). \end{itemize}
\subsection{Select speakers attentively}
\begin{itemize} \item Give scientific panels and submission evaluators ample time to consider their decisions. Include live conversations in those deliberations, in which diversity priorities are discussed. \item Structure the submission process so that proposal materials will be evaluated without seeing the submitter's gender. \item Invite the entire first round of speakers, with women proportionally represented, at once---don't invite a round of speakers and then add a few last-minute invitations to women later. \item Check that diversity is reflected in prestigious conference positions (plenary talks, invited talks) rather than only in contributed talks and poster sessions. \item Communicate individually with invitees, rather than through form letters. \end{itemize}
\subsection{Create equitable logistics}
\begin{itemize} \item Set aside some of the budget specifically to offer to defray the costs of female speakers---after all, they have already been implicitly biased against for funding opportunities so far in their careers. Indeed, apply for grants from scientific funding agencies specifically for these costs, or partner with mathematical institutes or hosting institutions to acquire such funding. The Association for Women in Mathematics (AWM) advertises its own funding source, the Travel Grants for Women Researchers, as well \cite{AWM}. \item For those in a position to grant funding to organized scientific events, make it a condition of acceptance that the organizers have an explicit process for ensuring proper representation of women at their event. \item Be aware that travel logistics can be extremely difficult and stressful for parents of young children. Make arrangements for childcare and nursing mothers at conferences, and communicate that information to prospective participants. The official statement of the Association for Women in Mathematics on childcare \cite{Sa} is a good place to start. Follow the lead of the 2015 Joint Meetings of the AMS and MAA \cite{JMM}, for instance, or the case studies of conferences offering childcare arrangements in \cite[Chapters 5--6]{DK} and \cite{FP:C}. Particularly if one has never nursed an infant before, one should consider firsthand suggestions for how to make conferences welcoming for nursing mothers~\cite{Lalin}. \item Pay attention to ``extracurricular'' details, so that all aspects of the conference are both inviting and safe for women. For example, make sure that women's restrooms are as convenient and well-maintained as men's. As Geek Feminism points out \cite{GF:T}, ``conference dinners with 90\% or more men and free alcohol are not welcoming or safe''---a fact that many men, with the privilege of not having to worry about harassment, might overlook. 
Similarly, going out for evening drinks, at a location far from the conference venue or hotel, is simply not comfortable for a woman when the rest of the group consists only of men. Having to avoid social opportunities because of unsafe circumstances means missing mathematical discussions and networking opportunities, which has real professional consequences. \end{itemize}
\subsection{Walk the walk}
\begin{itemize} \item Track the diversity of a conference's speakers. Make the results publicly accessible. \item If there has been a lack of diversity, step forward and admit it---and make a public pledge to do better. \item Have a diversity statement (such as \cite{OM}) prominently featured in conference materials, including the web site, and include it in calls for speakers and communications with potential participants. Even aside from helping to generate more female speakers, such statements can help women feel more comfortable attending the conference \cite{Ries,RM}. \item As a conference organizer, attend sessions by female speakers (as well as speakers from underrepresented minorities), and initiate conversations with female participants about their research. \item As a chair of a conference session, call on both women and men for questions that follow talks (and notice how men speak without being called on more often than women do). \item Also include an anti-harassment statement or code of conduct (such as \cite{GF:C}) in conference materials. \item Particularly for those who are part of a population with the privilege of rarely being the target of discrimination or harassment, proactively seek information about past experiences by surveying conference attendees, with such questions as: when you were deciding whether to attend this conference, what factors weighed in your decision? How inclusive do you find this conference in terms of gender, ethnicity, sexual orientation, people with disabilities, etc.\ for both speakers and participants? Have you experienced any harassment at this conference, including harassment based on gender, sexual orientation, or ethnicity? \end{itemize}
\subsection{Talk the talk}
\begin{itemize} \item Talk openly about underrepresentation of women at mathematics conferences (and in mathematics in general). Make discussing the issue, and whether our community is adequately addressing it, the norm, so that ignoring the issue becomes the controversial stance instead of the safe option. \item Introduce good management practices, such as those described in \cite{Keyfitz}, for equitable hiring, retention, and work environment in the mathematics department. \item Put on professional web pages a statement (such as \cite{VS}) endorsing diversity, or pledging participation only in events that make diversity an explicit priority. \item Thoughtfully monitor interactions with other mathematicians. Do we mention the physical appearance of some graduate students but not others? Do we ask some postdocs about their research but others merely who their supervisor was? Do we interrupt some colleagues in conversation but not others? Are there any correlations between these differences and the other mathematicians' genders? \item Practice representing mathematics as suitable for girls and boys, for women and men. Practice using gender-neutral speech patterns when speaking about mathematicians. \item Speak about mathematics skill as a malleable quality rather than a fixed quantity \cite{GRD,MD,U}---not just to students, but among ourselves. \end{itemize}
\section{Considering the issue further}
We all know that excellence in mathematics requires hard work, mental focus, and self-awareness, as well as an understanding of what details to focus those virtues upon. Excellence in conference organization is no different. We accept the necessity of this effort, both in research and in interactions with our scientific community, because we understand both the value of the end product and how introspection and attention to detail improves that end product.
It is an unfortunate reality that mathematics still has a gender inequity problem, despite the improvements we have made over the generations. There is good news, though: not only do we understand quite a bit about what causes underrepresentation of women---as well as actions we can take to rectify it---but also, we can make our discipline's challenges easier to successfully address simply by talking openly about them.
For those wanting to learn more about the facts and guidelines included in this article, its list of references has been expanded into an annotated bibliography \cite{Martin} with remarks and quotes to help describe the content of those sources. We emphasize here several sources rich in further references to the relevant literature on the disappearing gender gap on standardized tests \cite{AGKM,EHL,GMSZ,KM}, biases and barriers to advancement for women \cite{DE,EIK,KGH,N,R,WISELI:o}, psychology theory and research \cite{EHL,GRD,PGR}, and action items to address gender bias in schools \cite{AAUW,W} and in academia \cite{DK,FP:G,WISELI:o}.
We have made the conscious choice to include only initials and last names in the bibliography and throughout this article. We have observed a tendency to be curious about the gender of the authors of the research we cite, and perhaps to involuntarily wonder how the authors' genders should affect our evaluation of their conclusions. These reflexive speculations, we believe, tellingly illuminate the depth to which these implicit biases about gender are ingrained in us, even though we rationally know that possessing one gender or another does not affect a person's objectivity.
We hope this article has been thought-provoking in ways that we as a community will carry with us into future planning. Only by examining our current practices, and carrying the resulting knowledge forward, will our conferences (and our departments) improve in this regard. Being socialized to have biases is not our fault; but preventing our biases from negatively affecting the world around us is nonetheless our responsibility.
We conclude with a challenge to each reader: find a female colleague who is willing to donate some of her time, and ask her about her experiences as a scientist in training and as a woman in today's society. Female readers will probably find commonalities of experience, while male readers might well be surprised at the injustice some mathematicians have had to deal with. The more concretely we apprehend the inequities that still exist, the better equipped we will be to remove them.
And finally, let us completely reverse the taboo: let us make it the norm to talk about gender in conferences, so that overlooking the issue becomes a dubious exception. None of us want to perpetuate prejudice, but we are doing so nonetheless. Let's change that.
\section*{Acknowledgments}
We thank J. Bryan for statistical analysis of the data from the 2014 JMM and W. Miao for gathering the data in Section~\ref{miao} and for locating several of the papers cited herein. We also thank T. Gowers and L. Walling for helping to crystallize our ideas at the early stages of conceiving this article, and L. Addario--Berry, J. Gordon, T. Gowers, C. Hagan, E. Jones, M. Lalin, W. Miao, A. Mottahed, R. Pries, and R. Scheidler for their helpful feedback on a draft of the manuscript. We are also grateful to other friends and colleagues, too numerous to list here, for their encouragement and inspiration.
\end{document}
\begin{document}
\title{Divisor-Bounded Multiplicative Functions in Short Intervals} \author{Alexander P. Mangerel} \address{Centre de Recherches Math\'{e}matiques, Universit\'{e} de Montr\'{e}al, Montr\'{e}al, Qu\'{e}bec} \email{[email protected]}
\begin{abstract}
We extend the Matom\"{a}ki-Radziwi\l\l{} theorem to a large collection of unbounded multiplicative functions that are uniformly bounded, but not necessarily bounded by 1, on the primes. Our result allows us to estimate averages of such a function $f$ in typical intervals of length $h(\log X)^c$, with $h = h(X) \rightarrow \infty$ and where $c = c_f \geq 0$ is determined by the distribution of $\{|f(p)|\}_p$ in an explicit way. We give three applications.\\
First, we show that the classical Rankin-Selberg-type asymptotic formula for partial sums of $|\lambda_f(n)|^2$, where $\{\lambda_f(n)\}_n$ is the sequence of normalized Fourier coefficients of a primitive non-CM holomorphic cusp form, persists in typical short intervals of length $h\log X$, if $h = h(X) \rightarrow \infty$. We also generalize this result to sequences $\{|\lambda_{\pi}(n)|^2\}_n$, where $\lambda_{\pi}(n)$ is the $n$th coefficient of the standard $L$-function of an automorphic representation $\pi$ with unitary central character for $GL_m$, $m \geq 2$, provided $\pi$ satisfies the generalized Ramanujan conjecture. \\
Second, using recent developments in the theory of automorphic forms we estimate the variance of averages of all positive real moments $\{|\lambda_f(n)|^{\alpha}\}_n$ over intervals of length $h(\log X)^{c_{\alpha}}$, with $c_{\alpha} > 0$ explicit, for any $\alpha > 0$, as $h = h(X) \rightarrow \infty$.\\ Finally, we show that the (non-multiplicative) Hooley $\Delta$-function has average value $\gg \log\log X$ in typical short intervals of length $(\log X)^{1/2+\eta}$, where $\eta >0$ is fixed. \end{abstract}
\maketitle
\section{Introduction and Main Results} \label{sec:MRDB}
\subsection{The Matom\"{a}ki-Radziwi\l\l{} theorem for bounded multiplicative functions} The Matom\"{a}ki-Radziwi\l\l{} theorem, in its various incarnations, gives estimates for the error term in approximating the average of a bounded multiplicative function in a typical short interval by a corresponding long interval average. In the breakthrough paper \cite{MR}, the authors showed that, uniformly over all real-valued multiplicative functions $f: \mathbb{N} \rightarrow [-1,1]$, for any $1 \leq h \leq X$ such that $h = h(X) \rightarrow \infty$ as $X \rightarrow \infty$, \begin{equation}\label{eq:MRqual} \frac{1}{h}\sum_{x-h < n \leq x} f(n) = \frac{2}{X} \sum_{X/2 < n \leq X} f(n) + o(1) \end{equation} for all but $o(X)$ integers $x \in [X/2,X]$. A key feature of this result is that the interval length $h$ can grow arbitrarily slowly as a function of $X$. This result has had countless applications to a variety of problems across mathematics, including to partial results towards Chowla's conjecture on correlations of the Liouville function \cite{Tao}, \cite{TaoTer}, the resolution of the famous Erd\H{o}s discrepancy problem \cite{EDP}, and progress on Sarnak's M\"{o}bius disjointness conjecture (e.g., \cite{TaoSar}, \cite{FranHost}; see \cite{SarSur} for a more exhaustive list).
Since \cite{MR}, the result has been extended and generalized in various directions. In \cite{MRT}, a corresponding short interval result was given for non-pretentious complex-valued multiplicative functions $f:\mathbb{N} \rightarrow \mathbb{U}$, where $\mathbb{U} := \{z \in \mathbb{C} : |z| \leq 1\}$. To be precise, if we define $$
D_f(X;T) := \min_{|t| \leq T} \mathbb{D}(f,n^{it};X)^2 := \min_{|t| \leq T}\sum_{p \leq X} \frac{1-\text{Re}(f(p)p^{-it})}{p}, $$ where $\mathbb{D}$ denotes the Granville-Soundararajan pretentious distance, they showed that if $f: \mathbb{N} \rightarrow \mathbb{U}$ satisfies $D_f(X;X) \rightarrow \infty$ then $$
\left|\frac{1}{h} \sum_{x-h < n \leq x} f(n)\right| = o(1) $$ for all but $o(X)$ integers $x \in [X/2,X]$, whenever $h = h(X) \rightarrow \infty$. In a different direction, exploring the heuristic relationship between the distributions of arithmetic functions in short intervals and in short arithmetic progressions, Klurman, the author and Ter\"{a}v\"{a}inen \cite{KMT} obtained an analogue of \eqref{eq:MRqual} for typical\footnote{Complications arise concerning both the prime divisors of the modulus $q$ as well as the distribution of zeros of Dirichlet $L$-functions $\pmod{q}$, so the theorem proven in \cite{KMT} is qualitatively weaker than \eqref{eq:MRqual} unconditionally in general.} short arithmetic progressions.
In the recent paper \cite{MRII}, a widely generalized version of the results of \cite{MR} was developed, which among other things extended the work of \cite{MRT}. The authors showed that for a general complex-valued multiplicative function $f: \mathbb{N} \rightarrow \mathbb{U}$, if $t_0 = t_0(f,X)$ is a minimizer in the definition of $D_f(X;X)$ then for all but $o(X)$ integers $x \in [X/2,X]$, one obtains an asymptotic formula with main term of the form \begin{equation}\label{eq:MRqualCV} \frac{1}{h}\sum_{x-h < n \leq x} f(n) = \frac{1}{h}\int_{x-h}^x u^{it_0} du \cdot \frac{2}{X}\sum_{X/2 < n \leq X} f(n)n^{-it_0} + o(1), \end{equation} with a better quantitative dependence of the bound for the exceptional set on the interval length $h$ than in \cite{MR}.
By Shiu's theorem (Lemma \ref{lem:Shiu} below), we have \[
\frac{1}{X}\sum_{n \leq X} |f(n)| \ll \prod_{p \leq X} \left(1+ \frac{|f(p)|-1}{p}\right), \]
so \eqref{eq:MRqualCV} is trivial whenever $\sum_{p \leq X} \frac{1-|f(p)|}{p} \rightarrow \infty$, for instance if $f(p) = 0$ significantly often on the primes. Rectifying this weakness, Matom\"{a}ki and Radziwi\l\l{} improved the quality of the $o(1)$ error term for a large collection of $1$-bounded functions with \emph{sparse} prime support. Specifically, they showed that if there are constants $A > 0$ and $\theta \in (0,1]$ such that the sieve-type lower bound condition \begin{equation}\label{eq:sieveforF}
\sum_{z < p \leq w} \frac{|f(p)|}{p} \geq A\sum_{z < p \leq w} \frac{1}{p} - O\left(\frac{1}{\log z}\right) \text{ holds for all } 2 \leq z \leq w \leq X^{\theta} \end{equation} then one can improve the $o(1)$ term to $$
o\left( \prod_{p \leq X} \left(1+\frac{|f(p)|-1}{p}\right)\right). $$ This savings comes at a natural cost, namely that the length of the interval $h$ can no longer grow arbitrarily slowly as a function of $X$, but must grow in a manner that depends on the sparseness of the support\footnote{As pointed out in \cite[p. 8]{MRII}, it is generally unclear what the least size of such intervals must be for a given bounded multiplicative function.} of $f$.
Precisely, the main result of \cite{MRII} may be stated as follows. In the sequel, for a multiplicative function $f: \mathbb{N} \rightarrow \mathbb{U}$ we write $$
H(f;X) := \prod_{p \leq X} \left(1+\frac{(|f(p)|-1)^2}{p}\right). $$ \begin{thm1}[Matom\"{a}ki-Radziwi\l\l{}, \cite{MRII} Thm. 1.9] Let $A > 0$ and $\theta \in (0,1]$. Let $f: \mathbb{N} \rightarrow \mathbb{U}$ be a multiplicative function such that \eqref{eq:sieveforF} holds for all $2 \leq z\leq w \leq X^{\theta}$. Let $2 \leq h_0 \leq X^{\theta}$ and put $h := h_0H(f;X)$. Also, set $t_0 = t_0(f,X)$. Then there are constants\footnote{In \cite{MRII} they obtained the explicit constant $\rho_A = A/3 - \frac{2}{3\pi} \sin(\pi A/2)$.} $C = C(\theta) > 1$, $\rho_A > 0$ such that for any $\delta \in (0,1/1000)$ and $0 < \rho < \rho_A$, \begin{align*}
&\left|\frac{1}{h} \sum_{x-h < n \leq x} f(n) - \frac{1}{h}\int_{x-h}^x u^{it_0} du \cdot \frac{2}{X}\sum_{X/2 < n\leq X} f(n)n^{-it_0}\right| \\
&\leq \left(\delta + C\left(\frac{\log\log h_0}{\log h_0}\right)^A + (\log X)^{-A\rho/36}\right) \prod_{p \leq X} \left(1+\frac{|f(p)|-1}{p}\right), \end{align*} for all $x \in [X/2,X]$ outside of a set of size $$ \ll_{\theta} X\left(h^{-(\delta/2000)^{1/A}} + X^{-\theta^3(\delta/2000)^{6/A}}\right). $$ \end{thm1}
\subsection{Divisor-bounded multiplicative functions} \label{sec:DBHist} Let $B \geq 1$. We define the \emph{generalized $B$-divisor function} $d_B(n)$ via $$ \zeta(s)^B = \sum_{n \geq 1} \frac{d_B(n)}{n^s} \text{ for } \text{Re}(s) > 1. $$ It can be deduced that $d_B(n)$ is multiplicative, and moreover $d_B(p^k) = \binom{B+k-1}{k}$, for all $k \geq 1$. In particular, $d_B(p) = B$. For integer values of $B$ this coincides with the usual $B$-fold divisor functions, e.g., when $B = 2$ we have $d_B(n) = d(n)$, and when $B = 1$ we have $d_B(n) \equiv 1$.
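The binomial coefficient formula is a one-line consequence of the Euler product: by the generalized binomial theorem, the local factor of $\zeta(s)^B$ at a prime $p$ expands as
\[
\left(1-p^{-s}\right)^{-B} \;=\; \sum_{k \geq 0} \binom{-B}{k}\left(-p^{-s}\right)^k \;=\; \sum_{k \geq 0} \binom{B+k-1}{k} p^{-ks},
\]
and comparing coefficients of $p^{-ks}$ gives $d_B(p^k) = \binom{B+k-1}{k}$.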
We say that a multiplicative function $f: \mathbb{N} \rightarrow \mathbb{C}$ is \emph{divisor-bounded} if there is a $B \geq 1$ such that $|f(n)| \leq d_B(n)$ for all $n$. When $B = 2$, for example, this includes functions such as the twisted divisor function $d(n,\theta) := \sum_{d|n} d^{i\theta}$ for $\theta \in \mathbb{R}$, as well as $r(n)/4$, where $r(n) := |\{(a,b) \in \mathbb{Z}^2 : a^2+b^2 =n\}|$.
There is a rich literature about mean values of general, 1-bounded multiplicative functions. The works of Wirsing \cite{WirMV} and Hal\'{a}sz \cite{Hal} are fundamental, with noteworthy developments by Montgomery \cite{MonMV} and Tenenbaum \cite[Thm. III.4.7]{Ten}. The theory has recently undergone an important change in perspective, due in large part to the extensive, pioneering works of Granville and Soundararajan (e.g., \cite{GSDec}, \cite{GSPret}). This well-formed theory significantly informs the results of \cite{MR} and \cite{MRII}.
In comparison, the study of long averages of general \emph{unbounded} multiplicative functions has only garnered significant interest more recently. Granville, Harper and Soundararajan \cite{GHS}, in developing a new proof of a quantitative form of Hal\'{a}sz' theorem, were able to obtain bounds for averages of multiplicative functions $f: \mathbb{N} \rightarrow \mathbb{C}$ for which the coefficients of the Dirichlet series\footnote{Implicitly, it is assumed that $-\frac{L'}{L}(s,f)$ is well-defined in $\text{Re}(s) > 1$.} \begin{equation}\label{eq:LambdafDef} -\frac{L'}{L}(s,f) = \sum_{n \geq 1} \frac{\Lambda_f(n)}{n^s}, \text{ where } L(s,f) := \sum_{n \geq 1} \frac{f(n)}{n^s} \text{ for } \text{Re}(s) > 1, \end{equation}
satisfy the bound $|\Lambda_f(n)| \leq B \Lambda(n)$ uniformly over $n \in \mathbb{N}$ for some $B \geq 1$, where $\Lambda(n)$ is the von Mangoldt function. Such functions satisfy $|f(n)| \leq d_B(n)$ for all $n$. In \cite{TenVM}, Tenenbaum, improving on qualitative results due to Elliott \cite{EllMV}, established quantitative upper bounds and asymptotic formulae for the ratios $|\sum_{n \leq X} f(n)|/(\sum_{n \leq X} |f(n)|)$, assuming $f$ is uniformly bounded on the primes, not too large on average at prime powers, and satisfies a hypothesis like \eqref{eq:sieveforF}. See also \cite[Ch. 2]{ManThe} for results of a similar kind under stronger hypotheses.
On the basis of these developments, it is reasonable to ask whether the results of \cite{MR} and \cite{MRII} can be extended to divisor-bounded functions of a certain type. This was hinted at in \cite[p. 9]{MRII} but, as far as the author is aware, no such extension yet exists in the literature. The purpose of this paper is to establish such extensions for a broad class of divisor-bounded multiplicative functions, among other unbounded functions.
In the following subsection we provide three examples that motivate our main theorem, Theorem \ref{thm:MRFull}. Besides the applications we give here, this result will also be applied in \cite{ManErd} to study short interval averages of general additive functions.
\subsection{Applications} \label{subsec:apps}
\subsubsection{Rankin-Selberg estimates for $GL_m$ in typical short intervals} Let $f$ be a fixed even weight $k \geq 2$, level $1$ primitive, Hecke-normalized holomorphic cusp form without complex multiplication, and write its Fourier expansion at $\infty$ as $$ f(z) = \sum_{n \geq 1} \lambda_f(n)n^{\frac{k-1}{2}}e(nz), \quad \text{Im}(z) > 0, $$ with $\lambda_f(1) = 1$. Set also $$
g_f(n) := \sum_{d^2|n} |\lambda_f(n/d^2)|^2. $$ By the Hecke relation $$
\lambda_f(m)\lambda_f(n) = \sum_{d|(m,n)} \lambda_f\left(\frac{mn}{d^2}\right), \quad m,n \in \mathbb{N}, $$
$|\lambda_f|^2$ and thus also $g_f$ are multiplicative functions.
Deligne showed that $|\lambda_f(p)| \leq 2$ for all primes $p$, and in general $|\lambda_f(n)|^2 \leq d(n)^2$. Thus $|\lambda_f|^2$ is bounded by a power of a divisor-function, and the same can be shown for $g_f$.
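Both assertions are quick to verify. If $(m,n)=1$, the Hecke relation collapses to $\lambda_f(mn) = \lambda_f(m)\lambda_f(n)$, so $|\lambda_f(mn)|^2 = |\lambda_f(m)|^2|\lambda_f(n)|^2$, and the multiplicativity of $g_f$ follows from that of $|\lambda_f|^2$ since $g_f$ is the Dirichlet convolution of $|\lambda_f|^2$ with the indicator function of the squares. Moreover, writing $\lambda_f(p) = \alpha_p + \overline{\alpha_p}$ with $|\alpha_p| = 1$ (as Deligne's theorem permits for level $1$ forms), the Hecke relations give
\[
\lambda_f(p^k) \;=\; \sum_{j=0}^{k} \alpha_p^{\,j}\,\overline{\alpha_p}^{\,k-j}, \qquad \text{so} \qquad |\lambda_f(p^k)| \;\leq\; k+1 \;=\; d(p^k),
\]
whence $|\lambda_f(n)| \leq d(n)$ for all $n$ by multiplicativity.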
The classical Rankin-Selberg method shows \cite[Sec. 14.9]{IK} that asymptotic formulae \begin{align*}
\frac{1}{X}\sum_{n \leq X} |\lambda_f(n)|^2 = c_f + O(X^{-2/5}), \quad \quad \frac{1}{X}\sum_{n \leq X} g_f(n) = d_f + O(X^{-2/5}), \end{align*} hold as $X \rightarrow \infty$, where $c_f,d_f > 0$ are constants depending on $f$. The Rankin-Selberg problem is equivalent to asking for an improvement of the error term $X^{-2/5}$ in both of these estimates, but this is not our point of interest here.
One can ask whether the above asymptotic formulae continue to hold in short intervals. Ivi\'{c} \cite{Ivic} considered
the variance of the error term in short interval averages. Specifically, he showed \cite[Cor. 2]{Ivic} on the Lindel\"{o}f hypothesis that \begin{equation}\label{eq:Ivic} \frac{1}{X} \int_X^{2X} \left(\frac{1}{h} \sum_{x < n \leq x+h} g_f(n) - d_f\right)^2 dx = o(1), \end{equation} as long as $h \geq X^{2/5-\varepsilon}$, indeed with a power-saving error term in that range. \\ At the expense of the quality of the error term, we obtain the following improvement on the range in which \eqref{eq:Ivic} holds. \begin{cor}\label{cor:RankinSelberg} Let $10 \leq h_0 \leq X/(10\log X)$ and set $h := h_0 \log X$. Then there is a constant $\theta > 0$ such that $$
\frac{1}{X}\int_X^{2X} \left(\frac{1}{h} \sum_{x < n \leq x+h} |\lambda_f(n)|^2 - c_f\right)^2 dx \ll \frac{\log\log h_0}{\log h_0} + \frac{\log\log X}{(\log X)^{\theta}}. $$
The same estimate holds when $|\lambda_f|^2$ and $c_f$ are replaced by $g_f$ and $d_f$, respectively.
\end{cor} This corollary might appear surprising, given our currently incomplete understanding of the shifted convolution problem $$
\sum_{X < n \leq 2X} |\lambda_f(n)|^2|\lambda_f(n+r)|^2, \quad\quad 1 \leq |r| \leq h. $$
Actually, our proof of Corollary \ref{cor:RankinSelberg} relies only on the multiplicativity of $|\lambda_f|^2$, Deligne's theorem and the prime number theorem for Rankin-Selberg $L$-functions (see e.g., Lemma \ref{lem:PNTRS}). This suggests\footnote{We would like to thank Maksym Radziwi\l\l{} and Jesse Thorner for pointing this out.} that a generalization to coefficients of automorphic $L$-functions for $GL_n$ should be possible, provided that these satisfy the generalized Ramanujan conjecture and hence are divisor-bounded. \\ To be more precise, let $m \geq 2$, let $\mathbb{A}$ be the ring of adeles of $\mathbb{Q}$, and let $\pi$ be a cuspidal automorphic representation of $GL_m(\mathbb{A})$ with unitary central character that acts trivially on the diagonally embedded copy of $\mathbb{R}^+$. We let $q_{\pi}$ denote the conductor of $\pi$. The finite part of $\pi$ factors as a tensor product $\pi = \otimes_p \pi_p$, with local representations $\pi_p$ at each prime $p$. The local $L$-function at $p$ takes the form $$ L(s,\pi_p) = \prod_{1 \leq j \leq m} \left(1-\frac{\alpha_{j,\pi}(p)}{p^s}\right)^{-1} := \sum_{l \geq 0} \frac{\lambda_{\pi}(p^l)}{p^{ls}}, $$ where $\{\alpha_{1,\pi}(p),\ldots,\alpha_{m,\pi}(p)\} \subset \mathbb{C}$ are the Satake parameters of $\pi_p$. The standard $L$-function of $\pi$ is then $$ L(s,\pi) := \prod_p L(s,\pi_p) = \sum_{n \geq 1} \frac{\lambda_{\pi}(n)}{n^s}, $$ which converges absolutely when $\text{Re}(s) > 1$. The sequence of coefficients $\lambda_{\pi}(n)$ thus defined is multiplicative, with the property that $$ \lambda_{\pi}(p^r) = \sum_{\substack{r_1,\ldots,r_m \geq 0 \\ r_1 + \cdots + r_m = r}} \prod_{1 \leq j \leq m} \alpha_{j,\pi}(p)^{r_j}. $$
The generalized Ramanujan conjecture (GRC) implies that for all $1 \leq j \leq m$, $|\alpha_{j,\pi}(p)|= 1$ whenever $p \nmid q_{\pi}$ and otherwise $|\alpha_{j,\pi}(p)| \leq 1$. It follows that if $\pi$ satisfies GRC then $$
|\lambda_{\pi}(p^r)| \leq \sum_{\substack{r_1,\ldots,r_m \geq 0 \\ r_1 + \cdots + r_m = r}} 1 = \binom{m+r-1}{r} = d_m(p^r), $$
and therefore that $|\lambda_{\pi}(n)| \leq d_m(n)$. As a consequence of these properties we will prove the following. \begin{thm}\label{thm:genAutForms} Let $m \geq 2$ and let $\pi$ be a fixed cuspidal automorphic representation for $GL_m(\mathbb{A})$ as above. Assume that $\pi$ satisfies GRC. Let $10 \leq h_0 \leq X/(10(\log X)^{m^2-1})$ and let $h:= h_0 (\log X)^{m^2-1}$. Then there is a constant $\theta = \theta(m) > 0$ such that $$
\frac{1}{X}\int_X^{2X} \left(\frac{1}{h}\sum_{x< n \leq x + h} |\lambda_{\pi}(n)|^2 - \frac{1}{X}\sum_{X < n \leq 2X} |\lambda_{\pi}(n)|^2 \right)^2 dx \ll \frac{\log\log h_0}{\log h_0} + \frac{\log\log X}{(\log X)^{\theta}}. $$ \end{thm} \begin{rem}
In the case $m =2$ the parameter $h$ must grow faster than $(\log X)^3$ in Theorem \ref{thm:genAutForms}, whereas Corollary \ref{cor:RankinSelberg} allows any $h$ growing faster than $\log X$. This is due to the fact that the range of $h$ in these estimates depends on the size of $\sum_{p \leq X} |\lambda_{\pi}(p)|^4/p$. When $\pi = \pi_f$ for a cusp form $f$ on $GL_2(\mathbb{A})$ we may estimate this sum using the well-known expression $$
|\lambda_f(p)|^4 = 2 + 3\lambda_{\text{Sym}^2 f}(p) + \lambda_{\text{Sym}^4 f}(p) $$
for all primes $p$, since $\text{Sym}^r f$ is cuspidal automorphic for $r = 2,4$ and thus $\sum_{p \leq X} \lambda_{\text{Sym}^r f}(p)/p = O(1)$. When $m \geq 3$ such data for $|\lambda_{\pi}(p)|^4$ is not available unconditionally in general, to the best of the author's knowledge. Assuming the validity of Langlands' functoriality conjecture, a (likely more complicated) expression would follow from the factorization of the standard $L$-function $L(s,\Pi)$ of the representation $\Pi = \pi \otimes \tilde{\pi} \otimes \pi \otimes \tilde{\pi}$, where $\tilde{\pi}$ is the contragredient representation of $\pi$. Using GRC alone, we cheaply obtain the simple upper bound \[
\sum_{p \leq X} \frac{|\lambda_{\pi}(p)|^4}{p} \leq m^2 \sum_{p \leq X} \frac{|\lambda_{\pi}(p)|^2}{p} = m^2 \log\log X + O(1), \] from Rankin-Selberg theory, and this is the source of the exponent $m^2$ in the range of $h$. \\ We will instead deduce Corollary \ref{cor:RankinSelberg} from Theorem \ref{thm:momCusp} below, which is tailored to $GL_2$ cusp forms. \end{rem}
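For completeness, the fourth-moment identity quoted in the remark above can be verified through the Sato-Tate parametrization: writing $\lambda_f(p) = 2\cos\theta_p$, one has $\lambda_{\text{Sym}^r f}(p) = U_r(\cos\theta_p)$, where $U_r$ is the $r$th Chebyshev polynomial of the second kind. Since
\[
U_2(\cos\theta) = 4\cos^2\theta - 1, \qquad U_4(\cos\theta) = 16\cos^4\theta - 12\cos^2\theta + 1,
\]
it follows that
\[
2 + 3\lambda_{\text{Sym}^2 f}(p) + \lambda_{\text{Sym}^4 f}(p) = 16\cos^4\theta_p = (2\cos\theta_p)^4 = |\lambda_f(p)|^4.
\]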
\subsubsection{Moments of coefficients of $GL_2$ cusp forms in typical short intervals} \label{sec:RS}
Our next application concerns short interval averages of the moments $n \mapsto |\lambda_f(n)|^{\alpha}$, for any $\alpha > 0$, with the notation of the previous subsection. This generalizes Corollary \ref{cor:RankinSelberg}.
\begin{thm}\label{thm:momCusp} Let $\alpha > 0$ and define \[ c_{\alpha} := \frac{2^{\alpha}}{\sqrt{\pi}} \frac{\Gamma\left(\frac{\alpha+1}{2}\right)}{\Gamma(\alpha/2 + 2)}, \quad d_{\alpha} := c_{2\alpha}-2c_{\alpha} + 1. \] Let $10 \leq h_0 \leq X/(10 (\log X)^{d_{\alpha}})$ and put $h := h_0 (\log X)^{d_{\alpha}}$. There is a constant $\theta = \theta(\alpha) > 0$ such that \[
\frac{1}{X}\int_X^{2X} \left(\frac{1}{h} \sum_{x < n \leq x+h} |\lambda_f(n)|^{\alpha} - \frac{1}{X}\sum_{X < n \leq 2X} |\lambda_f(n)|^{\alpha}\right)^2 dx \ll_{\alpha} \left(\left(\frac{\log\log h_0}{\log h_0}\right)^{c_{\alpha}} + \frac{\log\log X}{(\log X)^{\theta}}\right) (\log X)^{2(c_{\alpha}-1)}. \] \end{thm} When $\alpha \neq 2$, the Rankin-Selberg theory is no longer available. In its place, a crucial role in the proof of this result is played by a quantitative version of the Sato-Tate theorem for non-CM cusp forms, due to Thorner \cite{Tho}, which uses the deep results of Newton and Thorne \cite{NeTh}; see Section \ref{sec:RSProof} for the details. \begin{rem}
Using the Sato-Tate theorem and \cite[Thm. 1.2.4]{ManThe} it can be shown that $\frac{1}{X}\sum_{X < n \leq 2X} |\lambda_f(n)|^{\alpha} \gg_{\alpha} (\log X)^{c_{\alpha}-1}$, so the estimate in Theorem \ref{thm:momCusp} is indeed non-trivial. \end{rem}
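It may also help to record where the exponents $c_{\alpha}$ and $d_{\alpha}$ come from: they are moments against the Sato-Tate measure $d\mu_{\text{ST}}(\theta) := \frac{2}{\pi}\sin^2\theta \, d\theta$ on $[0,\pi]$. A computation with the Beta function shows that
\[
c_{\alpha} = \int_0^{\pi} (2|\cos\theta|)^{\alpha}\, d\mu_{\text{ST}}(\theta), \qquad d_{\alpha} = \int_0^{\pi} \left((2|\cos\theta|)^{\alpha} - 1\right)^2 d\mu_{\text{ST}}(\theta),
\]
the latter since $\mu_{\text{ST}}$ is a probability measure. In particular, by the quantitative Sato-Tate theorem one expects $\sum_{p \leq X} (|\lambda_f(p)|^{\alpha}-1)^2/p = d_{\alpha}\log\log X + O(1)$, so that $H(|\lambda_f|^{\alpha};X) \asymp (\log X)^{d_{\alpha}}$; this is precisely the normalization appearing in the interval length $h = h_0(\log X)^{d_{\alpha}}$.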
\subsubsection{Hooley's $\Delta$-function in short intervals} The distribution of divisors of a typical positive integer is a topic of classical interest, and a source of many difficult problems. Given an integer $n \in \mathbb{N}$, let $$
\mathcal{D}_n(v) := \frac{1}{d(n)}\sum_{\substack{d|n \\ d \leq e^v}} 1, \quad\quad \text{ for } v \in \mathbb{R}. $$ This is a distribution function on the divisors of $n$.
A concentration function for $\mathcal{D}_n(v)$, in the sense of probability theory, can be given by $$
Q(n) := \max_{u \in \mathbb{R}} |\mathcal{D}_n(u+1)-\mathcal{D}_n(u)| = \max_{u \in \mathbb{R}}\frac{1}{d(n)} \sum_{\substack{d|n \\ e^u < d \leq e^{u+1}}} 1. $$ Hooley \cite{Hoo} considered the unnormalized variant $$
\Delta(n) := d(n) Q(n) = \max_{u \in \mathbb{R}} \sum_{\substack{d|n \\ e^u < d \leq e^{u+1}}} 1, $$ now known as Hooley's $\Delta$-function, and used it to attack various problems related, among other things, to inhomogeneous Diophantine approximation by squares, as well as Waring's problem for cubes. Clearly, $0 \leq \Delta(n) \leq d(n)$, but one seeks more refined data about this function. For example, Erd\H{o}s \cite{ErdHoo} conjectured in 1948 that, except on a set of natural density 0, $\Delta(n) > 1$.\\ Many authors have investigated the average and almost sure behaviour of $\Delta$. Maier and Tenenbaum \cite{MaiTen} proved Erd\H{o}s' conjecture in a quantitative form. A significant portion of Hall and Tenenbaum's book \cite{HallTenBook} is devoted to the $\Delta$-function, including the currently best known upper bound for its mean value (see also \cite{HallTen}). For a partial survey of these results, see \cite{TenHooSur}. \\ Much less has been done concerning the local behaviour of the $\Delta$-function. To the author's knowledge, the only result about its short interval behaviour was worked out in the setting of polynomials over a finite field by Gorodetsky \cite[Cor. 1.5]{Gor}. \\ By relating $\Delta(n)$ to integral averages of the characteristic function of $\mathcal{D}_n$ (which is multiplicative), we can deduce the following lower bound for $\Delta$ on average over typical short intervals of length $(\log X)^{1/2+\eta}$, for $\eta \in (0,1/2]$. \begin{cor} \label{cor:Hooley} Fix $\delta \in (0,1]$, and let $10 \leq h_0 \leq \frac{X}{10(\log X)^{(1+\delta)/2}}$ and set $h = h_0 (\log X)^{(1+\delta)/2}$. Then for all but $o_{h_0 \rightarrow \infty}(X)$ integers $x \in [X/2,X]$ we have $$ \frac{1}{h}\sum_{x-h < n \leq x} \Delta(n) \gg \delta \log\log X. $$ \end{cor}
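Though no computation of this kind is needed in the sequel, the definition of $\Delta$ is easy to experiment with numerically. The following Python sketch (an illustration only; the function name is ours) computes $\Delta(n)$ by sliding a window of multiplicative width $e$ over the sorted divisors of $n$.

```python
import math

def hooley_delta(n):
    """Compute Hooley's Delta-function: the maximum number of divisors of n
    lying in an interval (e^u, e^{u+1}] as u ranges over the reals.

    Since e is irrational, a block of divisors d_lo <= ... <= d_hi fits in
    such an interval exactly when d_hi < e * d_lo, so a two-pointer sweep
    over the sorted divisors suffices."""
    divisors = sorted(d for d in range(1, n + 1) if n % d == 0)
    best, lo = 0, 0
    for hi in range(len(divisors)):
        # advance lo until the window again fits inside a ratio-e interval
        while divisors[hi] > math.e * divisors[lo]:
            lo += 1
        best = max(best, hi - lo + 1)
    return best
```

For instance, the divisors of $12$ are $1,2,3,4,6,12$, and the interval $(1.9,\, 1.9e]$ captures $2,3,4$, so $\Delta(12) = 3$.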
\subsection{Statement of main results} We fix $B,C \geq 1$, $0 < A \leq B$, and for $X$ large we define $\mathcal{M}(X;A,B,C)$ to denote the set of multiplicative functions $f : \mathbb{N} \rightarrow \mathbb{C}$ such that: \begin{enumerate}[(i)]
\item $|f(p)| \leq B$ for all primes $p \leq X$,
\item $|f(n)| \leq d_B(n)^C \text{ for all } n\leq X$,
\item for all $z_0 \leq z\leq w \leq X$, we have \begin{equation}\label{eq:hyp3}
\sum_{z < p \leq w} \frac{|f(p)|}{p} \geq A \sum_{z < p \leq w} \frac{1}{p} - O\left(\frac{1}{\log z}\right). \end{equation} \end{enumerate}
As described above, the work \cite{MRII} treats 1-bounded multiplicative functions $f \in \mathcal{M}(X;A,1,1)$. We are interested in generalizing the results from \cite{MRII} to be applicable to the collection $\mathcal{M}(X;A,B,C)$, with $B \geq 1$. For the purpose of applications, we further extend $\mathcal{M}(X;A,B,C)$ as follows.
Fixing $\gamma > 0$ and $0 < \sigma \leq A$, we define $\mathcal{M}(X;A,B,C;\gamma,\sigma)$ to be the collection of multiplicative functions $f: \mathbb{N} \rightarrow \mathbb{C}$ satisfying (i) and (ii), as well as the additional hypotheses \begin{itemize} \item[(iii')] for all $z_0 \leq z \leq w \leq X$, we have \begin{equation}\label{eq:hyp3'}
\sum_{z < p \leq w} \frac{|f(p)|}{p} \geq A \sum_{z < p \leq w} \frac{1}{p} - O\left(\frac{1}{(\log z)^{\gamma}}\right), \end{equation} \item[(iv)] letting $t_0 \in [-X,X]$ be a minimizer on $[-X,X]$ of the map $$
t \mapsto \rho(f,n^{it};X)^2 := \sum_{p \leq X} \frac{|f(p)|-\text{Re}(f(p)p^{-it})}{p}, $$ we have for all $t \in [-2X,2X]$ that \begin{equation}\label{eq:hyp4}
\rho(f,n^{it};X)^2 \geq \sigma \min\{ \log\log X, \log(1+|t-t_0|\log X)\} - O_{A,B}(1). \end{equation} \end{itemize} Condition (iii') is a weaker form of (iii). The full strength of (iii) is needed in \cite[Lem. A.1]{MRII} to obtain (iv) for all $0 < \sigma < \sigma_A$ with a particular constant $\sigma_A > 0$, which is crucial to the proof of \cite[Theorem 1.9]{MRII}. We will show below, as a consequence of \cite[Lem. 5.1(i)]{MRII}, that if $f \in \mathcal{M}(X;A,B,C)$ then condition (iv) here holds for any $0 < \sigma < \sigma_{A,B}$, where
\begin{equation} \label{eq:sgAB} \sigma_{A,B} := \frac{A}{3} \left(1- \text{sinc}\left(\frac{\pi A}{2B}\right)\right), \quad\quad \text{sinc}(t) := \frac{\sin t}{t} \text{ for } t \neq 0. \end{equation} In particular, for any $0 < \sigma < \sigma_{A,B}$, $\mathcal{M}(X;A,B,C) \subseteq \mathcal{M}(X;A,B,C;1,\sigma)$. In proving Corollary \ref{cor:RankinSelberg}, for instance, it is profitable to assume (iii') rather than (iii), given currently available quantitative versions of the Sato-Tate theorem (see \eqref{eq:quantST} below).
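For orientation (this numerical example is ours, not taken from \cite{MRII}), in the $1$-bounded case $A = B = 1$ the constant \eqref{eq:sgAB} evaluates to

```latex
\sigma_{1,1} = \frac{1}{3}\left(1 - \operatorname{sinc}\left(\frac{\pi}{2}\right)\right)
             = \frac{1}{3}\left(1 - \frac{2}{\pi}\right) = 0.1211\ldots.
```

Note that $\text{sinc}(t) \rightarrow 1$ as $t \rightarrow 0^+$, so for fixed $A$ the constant $\sigma_{A,B}$ tends to $0$ as $B$ grows; the quality of the exponent thus degrades for classes that are far from $1$-bounded.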
In the sequel, fix $B,C \geq 1$, $0 < A \leq B$, $\gamma > 0$ and $0 < \sigma \leq A$. We define \begin{equation}\label{eq:params} \hat{\sigma} := \min\{1,\sigma\}, \quad \quad \kappa := \frac{\hat{\sigma}}{8B+21}. \end{equation}
Given $T \geq 1$ we set $$
M_f(X;T) := \min_{|t| \leq T} \rho(f,n^{it};X)^2 = \min_{|t| \leq T} \sum_{p \leq X} \frac{|f(p)|-\text{Re}(f(p)p^{-it})}{p}. $$ We select $t_0(f,T)$ to be a real number $t \in [-T,T]$ that gives the minimum in the definition of $M_f(X;T)$. \\ Finally, for a multiplicative function $f: \mathbb{N} \rightarrow \mathbb{C}$ we recall that $$
H(f;X) := \prod_{p \leq X} \left(1+\frac{(|f(p)|-1)^2}{p}\right), $$
observing for future reference that whenever $|f(p)| \leq B$ for all $p \leq X$, \begin{equation} \label{eq:HfXBd}
H(f;X) \asymp_B \prod_{p \leq X} \left(1+\frac{|f(p)|^2-1}{p}\right) \left(1+\frac{|f(p)|-1}{p}\right)^{-2}. \end{equation} The main result of this paper is the following. \begin{thm}\label{thm:MRFull}
Let $X \geq 100$. Let $f \in \mathcal{M}(X;A,B,C;\gamma,\sigma)$, and put $t_0 = t_0(f,X)$. Let $10 \leq h_0 \leq X/(10H(f;X))$, and set $h := h_0H(f;X)$. Then \begin{align*}
&\frac{2}{X}\int_{X/2}^X \left|\frac{1}{h}\sum_{x-h < n \leq x} f(n) - \frac{1}{h} \int_{x-h}^x u^{it_0} du \cdot \frac{2}{X} \sum_{X/2 < n \leq X} f(n)n^{-it_0}\right|^2 dx \\
&\ll_{A,B,C} \left(\left(\frac{\log\log h_0}{\log h_0}\right)^A + \left(\frac{\log\log X}{(\log X)^{\kappa}}\right)^{\min\{1,A\}}\right)\prod_{p \leq X} \left(1+ \frac{|f(p)|-1}{p}\right)^2. \end{align*} \end{thm}
\begin{cor}\label{cor:MRVers}
Let $X \geq 100$. Let $f \in \mathcal{M}(X;A,B,C)$ and put $t_0 = t_0(f,X)$. Let $10 \leq h_0 \leq X/(10H(f;X))$, and set $h := h_0H(f;X)$. Then \begin{align*}
&\frac{2}{X}\int_{X/2}^X \left|\frac{1}{h}\sum_{x-h < n \leq x} f(n) - \frac{1}{h} \int_{x-h}^x u^{it_0} du \cdot \frac{2}{X} \sum_{X/2 < n \leq X} f(n)n^{-it_0}\right|^2 dx \\
&\ll_{A,B,C} \left(\left(\frac{\log\log h_0}{\log h_0}\right)^A + \left(\frac{\log\log X}{(\log X)^{\kappa}}\right)^{\min\{1,A\}}\right)\prod_{p \leq X} \left(1+ \frac{|f(p)|-1}{p}\right)^2, \end{align*} for any $0 < \kappa < \kappa_{A,B} := \frac{\min\{1,\sigma_{A,B}\}}{16B+21}$. \end{cor}
\begin{rem} \label{rem:trivBd} By Shiu's theorem (Lemma \ref{lem:Shiu} below), it is easy to show that the square of the long sum term on the left-hand side is $$
\ll \frac{1}{X^2} \left(\sum_{X/3 < n \leq X} |f(n)|\right)^2 \ll_B \prod_{p \leq X}\left(1+\frac{|f(p)|-1}{p}\right)^2. $$ Thus, Theorem \ref{thm:MRFull} shows that the variance is smaller than the square of this ``trivial'' bound for the long sum by a factor tending to 0, provided $h_0 = h_0(X) \rightarrow \infty$ as $X \rightarrow \infty$. \end{rem}
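To make the normalizations concrete (this example is ours), consider $f = d$, the divisor function, which satisfies (i)-(iii) with $A = B = 2$ and $C = 1$ (indeed $d(p) = 2$, $d = d_2$, and (iii) holds with equality and no error term). By Mertens' theorem,

```latex
H(d;X) = \prod_{p \leq X}\left(1+\frac{(2-1)^2}{p}\right) \asymp \log X,
\qquad
\prod_{p \leq X}\left(1+\frac{2-1}{p}\right)^{2} \asymp (\log X)^{2},
```

so the variance bound applies on short intervals of length $h = h_0 \log X$ and improves on the ``trivial'' size $(\log X)^2$ as soon as $h_0 \rightarrow \infty$.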
\section{Outline of the proof of Theorem \ref{thm:MRFull}}
To prove Theorem \ref{thm:MRFull} we will establish two estimates.
The first compares typical short averages of $f \in \mathcal{M}(X;A,B,C;\gamma,\sigma)$ to typical medium-length ones (i.e., of length $X/(\log X)^c$ for $c = c(\sigma) > 0$ small). The techniques involved were developed in \cite{MR}, together with corresponding refinements introduced in \cite{MRII}. \begin{thm}\label{thm:MRDB} Let $B,C \geq 1$, $0 < A \leq B$, $\gamma > 0$ and $0 < \sigma \leq A$. Assume that $f \in \mathcal{M}(X;A,B,C;\gamma,\sigma)$. Let $10 \leq h_0 \leq X/(10H(f;X))$, set $h_1 := h_0 H(f;X)$ and $h_2 := X/(\log X)^{\hat{\sigma}/2}$, and assume that $h_1 \leq h_2$. Finally, put $t_0 = t_0(f,X)$. Then \begin{align*}
&\frac{2}{X}\int_{X/2}^{X} \left|\frac{1}{h_1} \sum_{x-h_1 < m \leq x} f(m) - \frac{1}{h_1} \int_{x-h_1}^x u^{it_0} du \cdot \frac{1}{h_2} \sum_{x-h_2 < m \leq x} f(m)m^{-it_0} \right|^2 dx \\
&\ll_{A,B,C} \left(\left(\frac{\log\log h_0}{\log h_0}\right)^A + \left(\frac{\log\log X}{(\log X)^{\kappa}}\right)^{\min\{1,A\}}\right)\prod_{p \leq X} \left(1+ \frac{|f(p)|-1}{p}\right)^2. \end{align*} \end{thm} Essential to the treatment of Theorem \ref{thm:MRDB} are strong pointwise upper bounds for Dirichlet polynomials \[ \sum_{X/3 < n \leq X} \frac{a(n)f(n)}{n^{1+it}}, \] where $\{a(n)\}_n \subset [0,1]$ is a particular sequence of weights, $f \in \mathcal{M}(X;A,B,C;\gamma,\sigma)$ and $t \in [-X,X]$. To obtain these estimates we will apply some of the recent results about unbounded multiplicative functions described in Section \ref{sec:DBHist}. This is carried out at the beginning of Section \ref{sec:multPart}.
The second estimate we require towards Theorem \ref{thm:MRFull} is a ``Lipschitz'' bound, approximating the averages of a multiplicative function $f$ on any sufficiently long medium-length interval by a long interval average of $f$. The techniques involved are different from those used in the proof of Theorem \ref{thm:MRDB}, and largely follow the work of Granville, Harper and Soundararajan \cite{GHS}; see Section \ref{sec:Lip} for the details. \begin{thm}\label{thm:compLongSums} Let $B,C \geq 1$, $0 < A \leq B$, $\gamma > 0$ and $0 < \sigma \leq A$. Let $X/(\log X)^{\hat{\sigma}/2} \leq h \leq X/10$, and let $x \in [X/2,X]$. Assume that $f \in \mathcal{M}(X;A,B,C;\gamma,\sigma)$. Then the following bounds hold: \begin{align*}
\frac{1}{h}\sum_{x-h < n \leq x} f(n)n^{-it_0} &= \frac{2}{X}\sum_{X/2 < n \leq X} f(n)n^{-it_0} + O_{A,B,C}\left( \frac{(\log\log X)^{\hat{\sigma}+1}}{(\log X)^{\hat{\sigma}/2}} \prod_{p \leq X} \left(1+\frac{|f(p)|-1}{p}\right) \right), \\ \frac{1}{h} \sum_{x-h < n \leq x} f(n) &= \frac{1}{h}\int_{x-h}^x u^{it_0} du \cdot \frac{2}{X}\sum_{X/2 < n \leq X} f(n)n^{-it_0} \\
&+ O_{A,B,C}\left( \frac{(\log\log X)^{\hat{\sigma}+1}}{(\log X)^{\hat{\sigma}/2}} \prod_{p \leq X} \left(1+\frac{|f(p)|-1}{p}\right) \right). \end{align*} \end{thm}
\begin{proof}[Proof of Theorem \ref{thm:MRFull} assuming Theorem \ref{thm:MRDB} and Theorem \ref{thm:compLongSums}] Assume the hypotheses of Theorem \ref{thm:MRFull}, and set $h' := X/(\log X)^{\hat{\sigma}/2}$. If $h > h'$ then Theorem \ref{thm:MRFull} follows immediately from the second estimate in Theorem \ref{thm:compLongSums}. Thus, we may assume that $h \leq h'$. Applying Cauchy-Schwarz, we get \begin{align*}
&\frac{2}{X}\int_{X/2}^X\left|\frac{1}{h}\sum_{x-h < n \leq x} f(n) - \frac{1}{h}\int_{x-h}^x u^{it_0} du \cdot \frac{2}{X}\sum_{X/2 < n \leq X} f(n)n^{-it_0}\right|^2 dx \\
&\ll \frac{2}{X}\int_{X/2}^X\left|\frac{1}{h}\sum_{x-h < n \leq x} f(n) - \frac{1}{h}\int_{x-h}^x u^{it_0} du \cdot \frac{1}{h'}\sum_{x-h' < n \leq x} f(n)n^{-it_0}\right|^2 dx \\
&+ \sup_{X/2 < x \leq X} \left|\frac{1}{h'} \sum_{x-h' < n \leq x} f(n)n^{-it_0} - \frac{2}{X} \sum_{X/2 < n \leq X} f(n)n^{-it_0} \right|^2 \\
&=: T_1 + T_2, \end{align*}
upon trivially bounding $h^{-1} |\int_{x-h}^x u^{it_0} du| \leq 1$. By Theorem \ref{thm:MRDB} and the first estimate of Theorem \ref{thm:compLongSums}, \begin{align*}
T_1 &\ll_{A,B,C} \left(\left(\frac{\log\log h_0}{\log h_0}\right)^A + \left(\frac{\log\log X}{(\log X)^{\kappa}}\right)^{\min\{1,A\}}\right) \prod_{p \leq X} \left(1+\frac{|f(p)|-1}{p}\right)^2 \\
T_2 &\ll_{A,B,C} \frac{(\log\log X)^{\hat{\sigma}+1}}{(\log X)^{\hat{\sigma}/2}} \prod_{p \leq X} \left(1+\frac{|f(p)|-1}{p}\right)^2. \end{align*} Combining these bounds proves the claim. \end{proof}
\section{Averages of Divisor-Bounded Multiplicative Functions and the proof of Theorem \ref{thm:compLongSums}} \label{sec:multPart} In the sequel we will require control over various averages of multiplicative functions $f \in \mathcal{M}(X;A,B,C;\gamma,\sigma)$; in this and the next subsection we derive or record such bounds. \\ First, we will require some general pointwise estimates for prime power values of $f \in \mathcal{M}(X;A,B,C;\gamma,\sigma)$.
In preparation, define $$
P(s) := \sum_{\substack{n \geq 1 \\ p^k||n \Rightarrow p^k \leq X}} \frac{f(n)}{n^s} = \prod_{p \leq X} \left(1+\sum_{\substack{k \geq 1 \\ p^k \leq X}} \frac{f(p^k)}{p^{ks}}\right), \quad \text{Re}(s) > 1. $$
Wherever $P$ is non-zero we may also write the logarithmic derivative Dirichlet series $$ -\frac{P'}{P}(s) = \sum_{n \geq 1} \frac{\Lambda_f(n)}{n^s}. $$ \begin{lem} \label{lem:ppBdswithLambda}
Suppose $f: \mathbb{N} \rightarrow \mathbb{C}$ is multiplicative and satisfies $|f(n)| \leq d_B(n)^C$ for all $n \leq X$. \\ a) For any prime power $p^{\nu} \leq X$ we have $$
|f(p^{\nu})| \ll_{B,C} \begin{cases} \left(\frac{5}{4}\right)^{\nu}\left(1+\frac{\nu}{B-1}\right)^{(B-1)C}, &\text{ if $B > 1$,} \\ 1 &\text{ if $B = 1$.}\end{cases} $$ b) For any $\eta \in [0,1/2)$ we have $$
\sum_{\substack{p^{\nu} \leq X \\ \nu \geq 2}} \frac{|f(p^{\nu})|}{p^{(1-\eta)\nu}} \ll_{\eta,B,C} 1. $$
c) $\Lambda_f(n) = 0$ unless $n = p^{\nu}$ for some prime power $p^{\nu}$. In particular, if $p^{\nu} \leq X$ we have $|\Lambda_f(p)| \leq B \log p$ when $\nu = 1$ and otherwise $|\Lambda_f(p^{\nu})| \ll_{\varepsilon,B,C} p^{\varepsilon\nu}$.
\end{lem} \begin{proof} a) If $B = 1$ then the claim is obvious since $d_B \equiv 1$. Thus, we may assume that $B > 1$. We may also assume that $\nu$ is large relative to $B,C$, for otherwise the estimate is trivial for a suitably large implicit constant.\\
Given these assumptions, observe that by Stirling's approximation, \begin{align*}
|f(p^{\nu})| &\leq d_B(p^{\nu})^C = \binom{\nu + B-1}{\nu}^C \ll_{B,C} \left(\frac{\sqrt{2\pi (\nu+B-1)}}{2\pi \sqrt{\nu(B-1)}} \left(1+\frac{B-1}{\nu}\right)^{\nu} \cdot \left(1+\frac{\nu}{B-1}\right)^{B-1}\right)^C \\ &\ll_{B,C} \left(\frac{5}{4}\right)^{\nu} \left(1+\frac{\nu}{B-1}\right)^{C(B-1)}, \end{align*} provided that $\nu$ is large enough that $\left(1+\frac{B-1}{\nu}\right)^C \leq \frac{5}{4}$. This proves a).\\ b) Let $\delta := \frac{1}{2}-\eta$. For each $2 \leq p \leq X$ we have $p^{1/2} > 5/4$, and thus by a), $$
\sum_{\substack{\nu \geq 2: \\ p^{\nu} \leq X}} \frac{|f(p^{\nu})|}{p^{(1-\eta)\nu}} \ll_{B,C} \sum_{\nu \geq 2} \left(1+\frac{\nu}{B-1}\right)^{BC} \left(\frac{5}{4p^{1/2+\delta}}\right)^{\nu} \ll_{B,C,\delta} \sum_{\nu \geq 2} \left(\frac{5}{4p^{(1+\delta)/2}}\right)^{\nu} \ll p^{-1-\delta}. $$ We deduce b) upon summing over $p \leq X$.\\ c) We begin by giving an expression for $\Lambda_f(p^{\nu})$. \\ In light of a), we may deduce that there is $\sigma_1 = \sigma_1(B,C) > 1$ such that when $\text{Re}(s) \geq \sigma_1$, \begin{equation}\label{eq:smallatPrimes}
\left|\sum_{\substack{\nu \geq 1 \\ p^{\nu} \leq X}} \frac{f(p^{\nu})}{p^{\nu s}}\right| \leq \frac{1}{2} \text{ for all $2 \leq p \leq X$.} \end{equation} It follows from the Euler product representation of $P(s)$ that $P(s) \neq 0$ in the half-plane $\text{Re}(s) \geq \sigma_1$. Thus, $-P'(s)/P(s)$ is also analytic in this half-plane. \\ Integrating $-P'/P$ term-by-term from $s$ to $\infty$ along a line contained in the half-plane $\text{Re}(s) \geq \sigma_1$, we deduce that $$ \sum_{n \geq 1} \frac{\Lambda_f(n)}{n^s \log n} = \log P(s) = \sum_{p \leq X} \log\left(1+\sum_{\substack{\nu \geq 1 \\ p^{\nu} \leq X}} \frac{f(p^{\nu})}{p^{\nu s}}\right). $$ Given \eqref{eq:smallatPrimes}, we obtain the Taylor expansion \begin{align*} \sum_{n \geq 1} \frac{\Lambda_f(n)}{n^s\log n} &= \sum_{p \leq X} \sum_{k \geq 1} \frac{(-1)^{k-1}}{k} \sum_{ \substack{\nu_1,\ldots,\nu_k \geq 1 \\ p^{\nu_i} \leq X \forall i}} \frac{f(p^{\nu_1}) \cdots f(p^{\nu_k})}{p^{(\nu_1+\cdots + \nu_k)s}} \\ &= \sum_{\substack{p^{\nu} \\ p \leq X}} \frac{1}{p^{\nu s} \log p^{\nu}}\left(\log p^{\nu} \cdot \sum_{1 \leq k \leq \nu} \frac{(-1)^{k-1}}{k} \sum_{\substack{\nu_1 + \cdots + \nu_k = \nu \\ \nu_1,\ldots,\nu_k \geq 1 \\ p^{\nu_i} \leq X \forall i}} \prod_{1 \leq i \leq k} f(p^{\nu_i})\right). \end{align*} By the identity theorem for Dirichlet series, we thus find that $\Lambda_f(n) = 0$ unless $n = p^{\nu}$ for some prime power $p^{\nu}$ with $p \leq X$, in which case \begin{equation}\label{eq:exact} \Lambda_f(p^{\nu}) = \log p^{\nu} \cdot \sum_{1 \leq k \leq \nu} \frac{(-1)^{k-1}}{k} \sum_{\substack{\nu_1 + \cdots + \nu_k = \nu \\ \nu_1,\ldots,\nu_k \geq 1 \\ p^{\nu_i} \leq X \forall i}} \prod_{1 \leq i \leq k} f(p^{\nu_i}). \end{equation}
When $\nu = 1$ we get the expression $\Lambda_f(p) = f(p)\log p$, so that $|\Lambda_f(p)| \leq B \log p$. \\ For $\nu \geq 2$ we simply note using the uniform bound $d_B(n)^C \ll_{B,C,\varepsilon} n^{\varepsilon}$ and the triangle inequality in \eqref{eq:exact} that $$
|\Lambda_f(p^{\nu})| \ll_{B,C,\varepsilon} \sum_{1 \leq k \leq \nu} \frac{1}{k} \sum_{\substack{\nu_1 + \cdots + \nu_k = \nu \\ \nu_1,\ldots,\nu_k \geq 1}} \prod_{1 \leq i \leq k} p^{\nu_i \varepsilon} \leq p^{\nu\varepsilon} \mathfrak{p}(\nu), $$ where $\mathfrak{p}(\nu)$ denotes the number of partitions of the positive integer $\nu$. By a classical bound of Hardy-Ramanujan \cite[Sec. 2.3]{HarRam}, there is an absolute constant $c > 0$ such that $$ \mathfrak{p}(\nu) \ll e^{c\sqrt{\nu}} \ll_{\varepsilon} p^{\nu \varepsilon}, $$ which implies the claim. \end{proof}
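As a consistency check on \eqref{eq:exact} (this verification is ours), taking $f \equiv 1$ recovers the classical von Mangoldt function: the inner sum counts compositions of $\nu$ into $k$ parts, of which there are $\binom{\nu-1}{k-1}$, so for $p^{\nu} \leq X$,

```latex
\Lambda_1(p^{\nu})
= \nu \log p \sum_{1 \leq k \leq \nu} \frac{(-1)^{k-1}}{k}\binom{\nu-1}{k-1}
= \log p \sum_{1 \leq k \leq \nu} (-1)^{k-1}\binom{\nu}{k}
= \log p \left(1 - (1-1)^{\nu}\right) = \log p = \Lambda(p^{\nu}),
```

using $\frac{1}{k}\binom{\nu-1}{k-1} = \frac{1}{\nu}\binom{\nu}{k}$ and the binomial theorem.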
We will use the following upper bound for non-negative functions repeatedly in the sequel. \begin{lem}[P. Shiu; \cite{Shiu}, Thm. 1] \label{lem:Shiu}
Let $f: \mathbb{N} \rightarrow \mathbb{C}$ be a multiplicative function satisfying $|f(n)| \leq d_B(n)^C$ for all $n \leq X$. Let $\sqrt{X} < Y \leq X$, $\delta \in (0,1)$ and let $Y^{\delta} \leq y \leq Y$. Then $$
\sum_{Y-y < n \leq Y} |f(n)| \ll_{B,C,\delta} y \mathcal{P}_f(X). $$ \end{lem} \begin{proof}
The hypotheses required to apply \cite[Thm. 1]{Shiu} are more precisely that $|f(n)| \ll_{\varepsilon} n^{\varepsilon}$ for all $n \leq X$, and that there is a constant $A \geq 1$ such that $|f(p^{\nu})| \leq A^{\nu}$ for all $p^{\nu} \leq X$. The first hypothesis is obvious from $d_B(n)^C \leq d(n)^{\lceil B\rceil C} \ll_{B,C,\varepsilon} n^{\varepsilon}$, while the second is implied by Lemma \ref{lem:ppBdswithLambda} a). \end{proof}
\begin{lem} \label{lem:HalType} Let $f \in \mathcal{M}(X;A,B,C;\gamma,\sigma)$ and let $t_0 = t_0(f,X)$. Let $X^{1/5} \leq Y \leq X$, and let $2 \leq P \leq Q \leq \exp\left(\frac{\log X}{\log\log X}\right)$. Then for any $1 \leq Z \leq \log X$, $$
\sup_{Z < |u| \leq X/2} \left|\sum_{\substack{ n \leq Y \\ p|n \Rightarrow p \notin [P,Q]}} f(n)n^{-i(t_0+u)}\right| \ll_{A,B,C} Y \mathcal{P}_f(X) \left( \left(\frac{\log Q}{\log P}\right)^{2B}\frac{\log\log X}{(\log X)^{\sigma}} + \frac{1}{\sqrt{Z}} \right). $$ \end{lem} \begin{proof}
Define $\beta(n) := f(n)1_{p|n \Rightarrow p \notin [P,Q]}$, and for $t \in \mathbb{R}$ set $\beta_{t}(n) := \beta(n)n^{-it}$. Note that $|\beta_t(n)| \leq |f(n)|$ for all $n$ and $t \in \mathbb{R}$. As $f \in \mathcal{M}(X;A,B,C;\gamma,\sigma)$, we have that \begin{enumerate}
\item $\max_{p \leq X} |f(p)|\leq B$
\item $\sum_{\substack{p^k\leq X \\ k \geq 2}} \frac{|f(p^k)|\log p^k}{p^k} \ll_{B,C} 1$ by Lemma \ref{lem:ppBdswithLambda}b), and
\item $\sum_{y < p \leq X} \frac{|f(p)|}{p} \geq A \log\left(\frac{\log X}{\log y}\right) - O_{A,B}(1)$ uniformly in $2 \leq y \leq X$, \end{enumerate}
Thus, for every $t \in \mathbb{R}$ the hypotheses of \cite[Cor. 2.1]{TenVM} are fulfilled with $r = |f|$. Applying that result gives, for every $Z < |u| \leq X/2$, $$
\left|\sum_{n \leq Y} \beta_{t_0+u}(n) \right| \ll_{A,B,C} \left(\sum_{n \leq Y} |f(n)|\right) \left(\left(1+M_{\beta_{t_0+u}}(Y;Z/2)\right)e^{-M_{\beta_{t_0+u}}(Y;Z/2)} + \frac{1}{\sqrt{Z}}\right). $$ Let $t = t(u) \in [-Z/2,Z/2]$ be the minimizer implicit in $M_{\beta_{t_0+u}}(Y;Z/2)$, so that $$ M_{\beta_{t_0+u}}(Y;Z/2) = \rho(\beta,n^{i(t_0+u+t)};Y)^2 \leq 2B \log\log X. $$ As $X^{1/5} \leq Y \leq X$, $$ \rho(\beta,n^{i(t_0+u+t)}; Y)^2 = \rho(\beta,n^{i(t_0+u+t)}; X)^2 - O_B(1) \geq \rho(f,n^{i(t_0+u+t)}; X)^2 - 2B \log\left(\frac{\log Q}{\log P}\right) - O_B(1). $$
Since $f \in \mathcal{M}(X;A,B,C;\gamma,\sigma)$ and $|t_0 + u+t| \leq 2X$ and $|u+t| > Z/2$, we have by \eqref{eq:hyp4} that $$
\rho(f,n^{i(t_0+u+t)};X)^2 \geq \sigma \min\{\log\log X, \log(1+|u+t|\log X)\} - O_{A,B}(1) \geq \sigma \log\log X - O_{A,B}(1). $$ It thus follows that $$
\max_{Z < |u| \leq X/2} \left|\sum_{n \leq Y} \beta_{t_0+u}(n) \right| \ll_{A,B,C} \left(\sum_{n \leq Y} |f(n)|\right) \left( \left(\frac{\log Q}{\log P} \right)^{2B} \frac{\log\log X}{(\log X)^{\sigma}} + \frac{1}{\sqrt{Z}}\right). $$ Finally, applying Lemma \ref{lem:Shiu} together with Mertens' theorem, we obtain $$
\sum_{n \leq Y} |f(n)| \ll_{B,C} Y \prod_{p \leq Y} \left(1+ \frac{|f(p)|-1}{p}\right) \ll_B Y \mathcal{P}_f(X), $$ and the claim follows. \end{proof}
We need the following estimate for certain divisor-bounded functions on $y$-smooth\footnote{By a \emph{$y$-smooth} or \emph{$y$-friable} integer we mean a positive integer $n$ such that $p|n \Rightarrow p \leq y$.} integers, which is essentially due to Tenenbaum and Wu \cite[Cor. 2.2]{TeWu}, improving on work of Song \cite{Song}. An important r\^{o}le is played by the function $\rho_k(u)$ for $k \in \mathbb{N}$ and $u \geq 0$, which is a generalization of the classical Dickman-de Bruijn function, defined by the differential delay equation $$ u\rho_k'(u) = (k-1)\rho_k(u)-k\rho_k(u-1) \text{ if } u \geq 1, $$ and $\rho_k(u) := u^{k-1}/\Gamma(k)$ for $0\leq u < 1$. \begin{lem} \label{lem:Song} Let $g: \mathbb{N} \rightarrow \mathbb{R}$ be a non-negative multiplicative function for which there are real constants $\delta \in (0,1)$, $\eta \in (0,1/2)$ and $D > 0$, and an integer $k \geq 1$ such that \begin{align} &\sum_{p \leq z} g(p)\log p = k z + O(z/(\log z)^{\delta}) \text{ for all } z \geq 2, \label{eq:asympKap} \\ &\sum_{p^{\nu}, \nu \geq 2} \frac{g(p^{\nu})}{p^{(1-\eta)\nu}} \leq D. \label{eq:ppBd} \end{align} Let $x \geq 3$ and let $\exp\left((\log x \log\log x)^{2/(2+\delta)}\right) \leq y \leq x$. Set $u := \frac{\log x}{\log y}$. Then $$ \sum_{\substack{n \leq x \\ P^+(n) \leq y}} g(n) = e^{-\gamma k} x \rho_{k}(u) \frac{G(1,y)}{\log y} \left(1+O\left(\frac{\log(u+1)}{(\log y)^{\delta/2}}\right)\right), $$ where $G(s,y) := \sum_{P^+(n) \leq y} g(n)n^{-s}$ for $\text{Re}(s) > 0$, and $P^+(n)$ denotes the largest prime factor of $n$. \\ In particular, for $y$ in the given range and such that $u \rightarrow \infty$ we have $$ \sum_{\substack{n \leq x \\ P^+(n) \leq y}} g(n) \ll_{k,D,\delta} x(\log y)^{k-1} \exp\left(-\frac{1}{3}u\log u \right). $$ \end{lem} \begin{proof} The first claim is a special case of \cite[Cor. 2.2]{TeWu}. \\ For the second claim, we note that $$ G(1,y)
\leq \exp\left(\sum_{p \leq y} \frac{g(p)}{p} + \sum_{\substack{p \leq y \\ \nu \geq 2}} \frac{g(p^{\nu})}{p^{\nu}}\right) \ll_D \exp\left(\sum_{p \leq y} \frac{g(p)}{p}\right) \ll_{\delta} (\log y)^{k}, $$ where the penultimate estimate follows from \eqref{eq:ppBd}, and the last estimate is obtained by partial summation from \eqref{eq:asympKap}. Furthermore, by \cite[(3.10)]{Smida} and well-known upper bounds for the Dickman-de Bruijn function (e.g., \cite[(1.6)]{GraSmooth}), we have $$ \rho_k(u) = k^{u+O(u/\log(1+u))} \rho(u) \leq \exp\left(2u \log k - \frac{1}{2}u\log u \right) \leq \exp\left(-\frac{1}{3}u\log u\right), $$ whenever $u$ is large enough in terms of $k$, and the claim follows. \end{proof}
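As a sanity check on the definition above (our remark), the case $k = 1$ recovers the classical Dickman-de Bruijn function:

```latex
u\rho_1'(u) = (1-1)\rho_1(u) - \rho_1(u-1) = -\rho_1(u-1) \quad (u \geq 1),
\qquad \rho_1(u) = \frac{u^{0}}{\Gamma(1)} = 1 \quad (0 \leq u < 1),
```

which is exactly the defining system for Dickman's $\rho$, consistent with the relation $\rho_k(u) = k^{u+O(u/\log(1+u))}\rho(u)$ invoked in the proof.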
By combining the last two lemmas, we may deduce the following upper bound for Dirichlet polynomials of a special type (cf. \cite[Lem. 3]{MR}). \begin{cor}\label{cor:Hal} Let $10 \leq P \leq Q \leq \exp\left(\frac{\log X}{\log\log X}\right)$, and let $1 \leq Z \leq \log X$. Assume the hypotheses of Lemma \ref{lem:HalType}, and assume $X \geq X_0(B,C)$. Then for any $\sqrt{X} \leq Y \leq X$, \begin{align*}
&\sup_{Z < |u| \leq X/2} \left|\sum_{Y/3 < n \leq Y} \frac{f(n)}{n^{1+i(t_0+u)}(1+\omega_{[P,Q]}(n))}\right| \\ &\ll_{A,B,C} \mathcal{P}_{f}(X) \left( \left(\frac{\log Q}{\log P}\right)^{3B}\frac{\log\log X}{(\log X)^{\sigma}} + \left(\frac{\log Q}{\log P}\right)^{B}\frac{1}{\sqrt{Z}}\right), \end{align*}
where $\omega_{[P,Q]}(n) := \sum_{\substack{p|n \\ P \leq p \leq Q}} 1$. \end{cor} \begin{proof} Fix $u$ with $Z < |u| \leq X/2$ and set $t := t_0 + u$. Write $f = \alpha \ast \beta$, where $\alpha$ and $\beta$ are multiplicative functions with $\alpha(p^k) = f(p^k)$ whenever $P \leq p \leq Q$ and $p^k \leq X$, and $\beta(p^k) = f(p^k)$ for all other prime powers $p^k \leq X$. We apply the hyperbola method with $M = \sqrt{Y}$ to get \begin{align*}
\left|\sum_{Y/3 < n \leq Y} \frac{f(n)}{n^{1+it}(1+\omega_{[P,Q]}(n))}\right| &\ll \sum_{a \leq M} \frac{|\alpha(a)|}{a} \left|\sum_{Y/(3a) < b \leq Y/a} \frac{\beta(b)b^{-it}}{b}\right| + \sum_{b \leq Y/M} \frac{|\beta(b)|}{b}\sum_{\max\{M,Y/(3b)\} < a \leq Y/b} \frac{|\alpha(a)|}{a} \\ &=: \mathcal{R}_1 + \mathcal{R}_2. \end{align*} To bound $\mathcal{R}_1$ we apply partial summation and Lemma \ref{lem:HalType} to obtain, uniformly in $u$, \begin{align*}
\sum_{Y/(3a) < b \leq Y/a} \frac{\beta(b)b^{-it}}{b} &\ll_{A,B,C} \sup_{Y/(3a) \leq y \leq Y/a} \frac{1}{y} \left|\sum_{b \leq y} \beta(b)b^{-it}\right| \\ &\ll \mathcal{P}_f(X)\left(\left(\frac{\log Q}{\log P}\right)^{2B} \frac{\log\log X}{(\log X)^{\sigma}} + \frac{1}{\sqrt{Z}}\right). \end{align*} Given the prime power support of $\alpha$, we have $$
\sum_{a \leq M} \frac{|\alpha(a)|}{a} \ll_{B,C} \prod_{P \leq p \leq Q} \left(1+\frac{|f(p)|}{p}\right) \ll_B \left(\frac{\log Q}{\log P}\right)^B, $$ so that on combining this with the previous estimate, we obtain $$ \mathcal{R}_1 \ll_{A,B,C} \mathcal{P}_{f}(X) \left(\left(\frac{\log Q}{\log P}\right)^{3B} \frac{\log\log X}{(\log X)^{\sigma}} + \left(\frac{\log Q}{\log P}\right)^{B}\frac{1}{\sqrt{Z}}\right). $$
Next, consider $\mathcal{R}_2$. Note that $\alpha(n) = f(n)1_{p|n \Rightarrow p \in [P,Q]}$, and so since $|f(n)| \leq d_{B}(n)^C$ uniformly over $n \leq X$ we have $$
\sum_{\max\{M,Y/(3b)\} < a \leq Y/b} \frac{|\alpha(a)|}{a} \ll \frac{b}{Y} \sum_{\substack{n \leq Y/b \\ P^+(n) \leq Q}} g(n), $$ where $g(n) := d_{\lceil B\rceil}(n)^{\lceil C\rceil}$. Note that $g(n)$ takes integer values, and in particular taking $k := \lceil B\rceil^{\lceil C\rceil} \in \mathbb{Z}$, we have $$ \sum_{p \leq z} g(p)\log p = k \sum_{p \leq z} \log p = k z + O(z/(\log z)^{1/2}), $$ say, by the prime number theorem. Furthermore, that $g$ satisfies \eqref{eq:ppBd} with some $\eta \in (0,1/2)$ and $D = O_{B,C}(1)$ is the content of Lemma \ref{lem:ppBdswithLambda}b). Hence, as $u_b := \log (Y/b)/\log Q \geq \frac{\log X}{4\log Q} \geq \frac{1}{4}\log\log X$, Lemma \ref{lem:Song} implies that when $X$ is sufficiently large in terms of $B$ and $C$ we obtain $$ \frac{b}{Y} \sum_{\substack{n \leq Y/b \\ P^+(n) \leq Q}} g(n) \ll (\log Q)^{k-1} \exp\left(-\frac{1}{6} u_b \log u_b \right) \ll_{B,C} (\log X)^{-200}. $$ Combining this with the bound $$
\sum_{b \leq X/M} \frac{|\beta(b)|}{b} \leq \sum_{b \leq X/M} \frac{|f(b)|}{b} \ll (\log X) \mathcal{P}_{f}(X), $$ which again follows from partial summation and Lemma \ref{lem:Shiu}, we obtain $\mathcal{R}_2 \ll (\log X)^{-100}$. Altogether, we conclude that $$
\left|\sum_{Y/3 < n \leq Y} \frac{f(n)}{n^{1+i(t_0+u)}(1+\omega_{[P,Q]}(n))}\right| \ll_{B,C} \mathcal{P}_{f}(X) \left( (\log X)^{-100} + \left(\frac{\log Q}{\log P}\right)^{3B} \frac{\log\log X}{(\log X)^{\sigma}} + \left(\frac{\log Q}{\log P}\right)^B \frac{1}{\sqrt{Z}}\right), $$
uniformly over $Z < |u| \leq X/2$, and the claim follows since $Z^{-1/2} \geq (\log X)^{-1/2}$. \end{proof}
\subsection{Lipschitz Bounds and Main Terms} \label{sec:Lip}
In this subsection we derive a slight refinement of the Lipschitz bounds found in \cite[Thm. 1.5]{GHS}. Specifically, our estimates are sensitive to the distribution of the values $|f(p)|$, which will allow us to obtain Theorem \ref{thm:compLongSums}. See also \cite{Mat} for related Lipschitz-type bounds for unbounded multiplicative functions that overlap with the general result obtained here. \begin{thm}[Relative Lipschitz bounds] \label{thm:Lipschitz} Let $1 \leq w \leq X^{1/3}$ and let $f \in \mathcal{M}(X;A,B,C;\gamma,\sigma)$. Set $t_0 = t_0(f,X)$. Then \begin{align*}
&\left|\frac{w}{X} \sum_{n \leq X/w} f(n)n^{-it_0} - \frac{1}{X} \sum_{n \leq X} f(n)n^{-it_0}\right| \ll_{A,B,C} \log\left(\frac{\log X}{\log(ew)}\right) \left(\frac{\log(ew) + \log\log X}{\log X}\right)^{\hat{\sigma}}\mathcal{P}_f(X), \end{align*} where $\hat{\sigma} := \min\{1,\sigma\}$. The same bound holds for the quantity $$
\left|\left(\frac{w}{X}\right)^{1+it_0} \sum_{n \leq X/w} f(n) - \frac{1}{X^{1+it_0}} \sum_{n \leq X} f(n)\right|. $$ \end{thm} To this end, we need to introduce some notation that is consistent with the notation from \cite{GHS}. For $\text{Re}(s) > 1$ we write $$
\mathcal{F}(s) = \sum_{\substack{n \geq 1 \\ p^k||n \Rightarrow p^k \leq X}} \frac{f(n)n^{-it_0}}{n^s}. $$ For each prime power $p^k \leq X$, $k \geq 1$, define $$ s(p^k) := \begin{cases} f(p^k)p^{-ikt_0} &\text{ if $p \leq y$} \\ 0 &\text{ if $p > y$} \end{cases} \quad\quad \ell(p^k) := \begin{cases} 0 &\text{ if $p \leq y$} \\ f(p^k)p^{-ikt_0} &\text{ if $p > y$} \end{cases}, $$
where $y \geq 2$ is a large parameter to be chosen later. We extend $s$ and $\ell$ multiplicatively to all $n \in \mathbb{N}$ with $p^k||n \Rightarrow p^k \leq X$, and set $s(n) = \ell(n) = 0$ otherwise. For $\text{Re}(s) > 1$, also define $$ \mathcal{S}(s) = \sum_{n \geq 1} \frac{s(n)}{n^s}, \quad \quad \mathcal{L}(s) := \sum_{n \geq 1} \frac{\ell(n)}{n^s}. $$ We recall that $\Lambda_{\ell}(n)$ is the $n$th coefficient of the Dirichlet series $-\mathcal{L}'(s)/\mathcal{L}(s)$, the logarithmic derivative of $\mathcal{L}(s)$, which is well-defined for all $\text{Re}(s) > 1$ whenever $y \geq y_0(B,C)$ by Lemma \ref{lem:ppBdswithLambda} c).
\begin{lem} \label{lem:LfnBd} Let $f \in \mathcal{M}(X;A,B,C;\gamma,\sigma)$. Let $t \in \mathbb{R}$ and $\xi \geq 1/\log X$. Then $$
|\mathcal{F}(1+\xi + it)| \ll_{A,B,C} \xi^{-1} (1+\xi \log X)^{1-A} \mathcal{P}_f(X) e^{-\rho(fn^{-it_0},n^{it};e^{1/\xi})^2} \ll_B (\log X)^B. $$ \end{lem} \begin{proof}
By Lemma \ref{lem:ppBdswithLambda}b) and $\max_{p \leq X} |f(p)|\leq B$, we deduce that \begin{align*}
|\mathcal{F}(1+\xi+it)| &\ll_{B} \prod_{p \leq X} \left|1+ \frac{f(p)p^{-i(t_0+t)}}{p^{1+\xi}}\right| \exp\left(\sum_{\substack{p^k \leq X \\ k \geq 2}} \frac{|f(p^k)|}{p^{k(1+\xi)}}\right) \\
&\ll_{B,C} \prod_{p\leq X} \left(1+\frac{|f(p)|}{p^{1+\xi}}\right)\left(1+ \frac{\text{Re}(f(p)p^{-i(t_0+t)})-|f(p)|}{p^{1+\xi}}\right). \end{align*} By partial summation and the prime number theorem, the estimates \begin{align}\label{eq:tails} \sum_{p > e^{1/\xi}} \frac{\alpha_p}{p^{1+\xi}} &\ll_B \int_{e^{1/\xi}}^{\infty} e^{-\xi v} \frac{dv}{v} \ll_B 1 \nonumber \\ \sum_{p \leq e^{1/\xi}} \left(\frac{\alpha_p}{p}-\frac{\alpha_p}{p^{1\pm \xi}}\right) &\ll B\xi \sum_{p \leq e^{1/\xi}} \frac{\log p}{p} \ll_B 1, \end{align}
are valid for any sequence $\{\alpha_p\}_p \subset \mathbb{C}$ with $\max_p |\alpha_p| \ll_B 1$. It follows that \begin{align*}
|\mathcal{F}(1+\xi+it)| &\ll_{B,C} \prod_{p \leq e^{1/\xi}} \left(1+\frac{|f(p)|}{p}\right) \exp\left(-\sum_{p \leq e^{1/\xi}} \frac{|f(p)|-\text{Re}(f(p)p^{-i(t_0+t)})}{p}\right) \\
&\ll_B \xi^{-1} \mathcal{P}_f(X) \exp\left(\sum_{e^{1/\xi} < p \leq X} \frac{1-|f(p)|}{p}\right) e^{-\rho(fn^{-it_0},n^{it}; e^{1/\xi})^2}. \end{align*} Since $f \in \mathcal{M}(X;A,B,C;\gamma,\sigma)$, we have $$
\sum_{e^{1/\xi} < p \leq X} \frac{1-|f(p)|}{p} \leq (1-A) \sum_{e^{1/\xi} < p \leq X} \frac{1}{p} +O(\xi^{\gamma}) = (1-A) \log(\xi \log X) + O_A(1). $$ We thus obtain $$
|\mathcal{F}(1+\xi+it)| \ll_{A,B,C} \xi^{-1} (1+\xi \log X)^{1-A} \mathcal{P}_f(X)e^{-\rho(fn^{-it_0},n^{it}; e^{1/\xi})^2}, $$ which proves the first claimed estimate.\\
To obtain the second, note that $\rho(fn^{-it_0},n^{it};Y)^2 \geq 0$ for all $Y \geq 2$, and so using $|f(p)| \leq B$, $A > 0$ and $\xi \geq 1/\log X$ we obtain the further bound \begin{align*}
|\mathcal{F}(1+\xi+it)| \ll \xi^{-1}(\xi\log X)^{1-A} \mathcal{P}_f(X) \ll (\log X) \mathcal{P}_f(X) &\ll_B \exp\left(\sum_{p \leq X} \frac{|f(p)|}{p} \right) \ll (\log X)^B, \end{align*} as claimed. \end{proof}
To bound certain error terms in the proof of Theorem \ref{thm:Lipschitz} we require the following estimate, whose proof largely follows that of \cite[Lem. 2.4]{GHS}. \begin{lem}\label{lem:24ref} Let $1 \leq w \leq X^{1/3}$, $w \leq y \leq \sqrt{X}$ and $\eta := 1/\log y$. Then for any $X/w \leq Z \leq X$, $$
\sum_{mn \leq Z} |s(m)|\frac{|\ell(n)|}{n^{\eta}} + \int_0^{\eta}\sum_{mkn \leq Z} |s(m)|\frac{|\Lambda_{\ell}(k)||\ell(n)|}{k^{\alpha}n^{2\eta + \alpha}}d\alpha \ll_{A,B,C} Z\left(\frac{\log y}{\log Z}\right)^A \mathcal{P}_f(X). $$ \end{lem} \begin{proof}
In the first sum the summands arise from a Dirichlet convolution of multiplicative functions $|s| \ast |\ell|n^{-\eta}$, and we clearly have $|s(n)|,|\ell(n)| \leq d_B(n)^C$ for all $n \leq X$. By Lemma \ref{lem:Shiu} and \eqref{eq:tails}, the first sum is therefore $$
\ll_{B,C} \frac{Z}{\log Z} \exp\left(\sum_{p \leq y} \frac{|f(p)|}{p} + \sum_{y < p \leq Z} \frac{|f(p)|}{p^{1+\eta}}\right) \ll_B Z\frac{\log y}{\log X} \mathcal{P}_f(X) \exp\left(\sum_{y < p \leq X} \frac{1-|f(p)|}{p}\right). $$ Arguing as in the previous lemma, since $f \in \mathcal{M}(X;A,B,C;\gamma,\sigma)$ this is bounded by $$ \ll_A Z\frac{\log y}{\log X} \cdot \left(\frac{\log X}{\log y}\right)^{1-A} \mathcal{P}_f(X) \ll Z\left(\frac{\log y}{\log X}\right)^A \mathcal{P}_f(X). $$
For the second term, we use Lemma \ref{lem:ppBdswithLambda}c), which shows that $|\Lambda_{\ell}(p)| \leq B \log p$ and $|\Lambda_{\ell}(p^{\nu})| \ll_{\varepsilon,B,C} p^{\nu\varepsilon}$. It follows that for $1 \leq K \leq X$, $$
\left|\sum_{n \leq K} \Lambda_{\ell}(n)n^{-\alpha}\right| \leq B\sum_{p \leq K} p^{-\alpha} \log p + O_{B,C} \left(\sum_{\substack{p^{\nu} \leq K \\ \nu \geq 2}} p^{\nu/6} \right) \ll_{B,C} K^{1-\alpha} + K^{2/3}, $$ say, which is acceptable.
Therefore taking $K = Z/mn$, the $\alpha$ integral in the statement is $$
\ll_{B,C} \int_0^{\eta} Z^{1-\alpha}\sum_{mn \leq Z} \frac{|s(m)|}{m^{1-\alpha}}\frac{|\ell(n)|}{n^{1+2\eta}} d\alpha. $$
Extending the inner sum by positivity to all products $mn$ with $p^k||mn \Rightarrow p^k \leq Z$ and using the estimates \eqref{eq:tails} once again, we may bound the integral using Euler products as \begin{align*}
&\ll_{B,C} \int_0^{\eta} Z^{1-\alpha} \cdot \prod_{p \leq y} \left(1+\frac{|f(p)|}{p^{1-\alpha}}\right) \prod_{y < p \leq X} \left(1+\frac{|f(p)|}{p^{1+2\eta}}\right) d\alpha \\
&\ll_B Z\prod_{p \leq y} \left(1+\frac{|f(p)|}{p}\right) \int_0^{\eta} Z^{-\alpha} d\alpha \ll_B Z \frac{\log y}{\log Z} \mathcal{P}_f(X)\exp\left(\sum_{y < p \leq X} \frac{1-|f(p)|}{p}\right) \\ &\ll_{A,B} Z\left(\frac{\log y}{\log X}\right)^A\mathcal{P}_f(X). \end{align*} This completes the proof. \end{proof}
\begin{proof}[Proof of Theorem \ref{thm:Lipschitz}] The proof follows that of \cite[Thm. 1.5]{GHS}, and we principally highlight the differences. \\ We begin with the first claim of the theorem. Let $T := (\log X)^{B+1}$ and $y := \max\{ew,T^2\}$. Fix $\eta := 1/\log y$, $c_0 := 1+1/\log X$. Then \cite[Lem. 2.2]{GHS} (replacing $\beta$ by $\beta/2$) and Lemma \ref{lem:24ref} combine to show that \begin{align*} &\frac{1}{X}\sum_{n \leq X} f(n)n^{-it_0} - \frac{w}{X}\sum_{n \leq X/w} f(n)n^{-it_0} \\ &= \int_0^{\eta}\int_0^{\eta} \frac{1}{\pi i} \int_{c_0-i\infty}^{c_0+i\infty} \mathcal{S}(s) \mathcal{L}(s+\alpha) \frac{\mathcal{L}'}{\mathcal{L}}(s+\alpha)\frac{\mathcal{L}'}{\mathcal{L}}(s+\alpha+2\beta) \frac{X^{s-1} (1-w^{1-s})}{s} ds d\beta d\alpha \\ &+ O_{A,B,C}\left(\left(\frac{\log y}{\log X}\right)^A\mathcal{P}_f(X)\right). \end{align*} Consider the inner integral over $s$. Shifting $s \mapsto s - \alpha-\beta$ and applying \cite[Lem. 2.5]{GHS}, this is \begin{align*} \frac{1}{\pi i} \int_{c_0-iT}^{c_0+iT} \mathcal{S}(s-\alpha-\beta) \mathcal{L}(s+\beta) \left(\sum_{y < m < X/y} \frac{\Lambda_{\ell}(m)}{m^{s-\beta}}\right)\left(\sum_{y < n < X/y} \frac{\Lambda_{\ell}(n)}{n^{s+\beta}}\right) &\frac{X^{s-1-\alpha-\beta}(1-w^{1+\alpha + \beta-s})}{s-\alpha-\beta} ds \\ &+ O\left(\frac{1}{\log X}\right). \end{align*}
Extracting the maximum over $|t| \leq T$, then applying Cauchy-Schwarz and \cite[Lem. 2.6]{GHS}, the main term for the $s$-integral is bounded above by \begin{align*}
&\ll X^{-\alpha-\beta} \left(\max_{|t| \leq T} \frac{|\mathcal{S}(c_0-\alpha-\beta+it)\mathcal{L}(c_0+\beta+it)||1-w^{1+\alpha+\beta-c_0-it}|}{|c_0-\alpha-\beta+it|}\right) \\
&\cdot \left(\int_{-T}^{T} \left|\sum_{y < m < X/y} \frac{\Lambda_{\ell}(m)}{m^{c_0-\beta-it}}\right|^2 dt\right)^{1/2}\left(\int_{-T}^{T} \left|\sum_{y < m < X/y} \frac{\Lambda_{\ell}(m)}{m^{c_0+\beta-it}}\right|^2 dt\right)^{1/2} \\
&\ll_{B,C} X^{-\alpha-\beta} \left(\max_{|t| \leq T} \frac{|\mathcal{S}(c_0-\alpha-\beta+it)\mathcal{L}(c_0+\beta+it)||1-w^{1+\alpha+\beta-c_0-it}|}{|c_0-\alpha-\beta+it|}\right) \\ &\cdot \left(\sum_{y < p < X/y} \frac{\log p}{p^{c_0 -2\beta}} + y^{-1/2+2\beta } \right)^{1/2}\left(\sum_{y < p < X/y} \frac{\log p}{p^{c_0 +2\beta}} + y^{-1/2-2\beta}\right)^{1/2} \\
&\ll X^{-\alpha-\beta} \left(\frac{X}{y}\right)^{\beta}\min\{\log X, 1/\beta\} \left(\max_{|t| \leq T} \frac{|\mathcal{S}(c_0-\alpha-\beta+it)\mathcal{L}(c_0+\beta+it)||1-w^{1+\alpha+\beta-c_0-it}|}{|c_0-\alpha-\beta+it|}\right). \end{align*} Furthermore, by \eqref{eq:tails} and $\alpha,\beta \leq \eta = 1/\log y$, we see that for any $t \in \mathbb{R}$, \begin{align*}
X^{-\alpha-\beta}\left(\frac{X}{y}\right)^{\beta}|\mathcal{S}(c_0-\alpha-\beta+it)\mathcal{L}(c_0+\beta+it)| &\ll_{B,C} X^{-\alpha}y^{-\beta}|\mathcal{S}(c_0+\beta+it)\mathcal{L}(c_0+\beta+it)| \\
&\ll X^{-\alpha} |\mathcal{F}(c_0+\beta+it)|. \end{align*} Thus, we have so far shown that \begin{align}
&\left|\frac{1}{X}\sum_{n \leq X} f(n)n^{-it_0} - \frac{w}{X}\sum_{n \leq X/w} f(n)n^{-it_0}\right| \nonumber\\
&\ll_{A,B,C} \int_0^{\eta}\int_0^{\eta} X^{-\alpha} \min\{\log X,1/\beta\} \max_{|t| \leq T} \frac{|\mathcal{F}(c_0+\beta+it)(1-w^{1+\alpha+\beta-c_0-it})|}{|c_0+\beta+it|} d\beta d\alpha \label{eq:intstoBd}\\ &+ \left(\frac{\log\log X + \log(ew)}{\log X}\right)^A\mathcal{P}_f(X) + \frac{1}{\log X}. \nonumber \end{align}
Observe next that $$
|w^{-\beta-it} - w^{1+\alpha+\beta-c_0-it}| \ll \left(\alpha + \beta + \frac{1}{\log X}\right) \log (ew), $$ so that we may rewrite the integral expression in \eqref{eq:intstoBd} as \begin{align*}
&\int_0^{\eta} \left(\int_0^{\eta} X^{-\alpha} d\alpha\right) \min\{\log X,1/\beta\} \max_{|t| \leq T} \frac{|\mathcal{F}(c_0+\beta+it)(1-w^{-\beta-it})|}{|c_0+\beta+it|} d\beta \\
&+ (\log (ew))\int_0^{\eta}\max_{|t| \leq T} |\mathcal{F}(c_0+\beta+it)| \min\{\log X , \beta^{-1}\}\int_0^{\eta} X^{-\alpha} \left(\alpha+\beta + 1/\log X\right) d\alpha d\beta \\ &=: T_1 + T_2. \end{align*} We first estimate $T_2$. The integral over $\alpha$ is $$ \ll \left(\beta + \frac{1}{\log X}\right) \int_0^{\eta} X^{-\alpha}d\alpha + \int_0^{\eta} \alpha X^{-\alpha} d\alpha \ll \frac{1}{\log X} \left(\beta+ \frac{1}{\log X}\right). $$ Applying Lemma \ref{lem:LfnBd} (with $\xi = 1/\log X + \beta$) and $\rho(fn^{-it_0},n^{it};Y)^2 \geq 0$ for all $Y \geq 2$, \begin{align*} T_2 &\ll_{A,B,C} (\log (ew))\mathcal{P}_f(X)\int_0^{\eta} \min\{1,(\beta \log X)^{-1}\}(1+\beta\log X)^{1-A} d\beta.
\end{align*}
Splitting the $\beta$-integral at $1/\log X$ and evaluating, we obtain \begin{align*} T_2
&\ll_{A,B,C} (\log (ew))\mathcal{P}_f(X) \left(\frac{1}{\log X} + \frac{1_{A = 1} \log(\eta \log X) + 1}{(\log X)^A}\left((\log X)^{A-1} + \eta^{1-A}\right)\right) \\ &\ll_B \mathcal{P}_f(X)\left(\frac{(\log (ew))(1+1_{A = 1} \log(\log X/\log(ew)))}{\log X} + \left(\frac{\log (ew) + \log\log X}{\log X}\right)^A\right) \\ &\ll \mathcal{P}_f(X) \left(\frac{\log (ew) + \log\log X}{\log X}\right)^{\min\{1,A\}} \left(1+1_{A = 1} \log\left(\frac{\log X}{\log(ew)}\right)\right). \end{align*} We now turn to $T_1$. By evaluating the $\alpha$ integral, we have $$
T_1 \ll \frac{1}{\log X} \int_0^{\eta} \min\{\log X,1/\beta\} \max_{|t| \leq T} \frac{|\mathcal{F}(c_0+\beta+it)||1-w^{-\beta-it}|}{|c_0+\beta+it|} d\beta. $$
Put $T' := \frac{1}{2}(\log X)^B$. If the maximum occurs at some $|t| > T'$ then, using the second estimate in Lemma \ref{lem:LfnBd}, we obtain
$$ T_1 \ll_{B,C} \frac{1}{\log X} \int_0^{\eta} \min\{\log X, 1/\beta\} \cdot \frac{(\log X)^B}{T'} d\beta \ll \frac{1}{\log X} \left((\log X) \cdot \frac{1}{\log X } + \log\left(\eta \log X\right)\right) \ll \frac{\log\log X}{\log X}. $$
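The evaluation of the $\beta$-integral here, used again below for $T_1$, is the elementary splitting at $\beta = 1/\log X$:
$$
\int_0^{\eta} \min\{\log X, 1/\beta\}\, d\beta = \int_0^{1/\log X} \log X \, d\beta + \int_{1/\log X}^{\eta} \frac{d\beta}{\beta} = 1 + \log(\eta \log X),
$$
and $\log(\eta\log X) = \log(\log X/\log y) \leq \log(\log X/\log(ew))$ since $y \geq ew$.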
Thus, suppose the maximum occurs with $|t| \leq T'$. Applying \cite[Lem. 3.1]{GHS}, we get \begin{align*}
\max_{|t| \leq T'} |\mathcal{F}(c_0 + \beta + it)(1-w^{-\beta-it})| &\leq \max_{|t| \leq (\log X)^B} |\mathcal{F}(c_0 + it)(1-w^{-it})| + O\left(\frac{\beta}{(\log X)^B} \sum_{n \leq X} \frac{|f(n)|}{n^{1+1/\log X}}\right) \\
&= \max_{|t| \leq (\log X)^B} |\mathcal{F}(c_0 + it)(1-w^{-it})| + O_{B,C}\left(\frac{\beta}{(\log X)^{B-1}} \mathcal{P}_f(X)\right). \end{align*} Inserting this into the $\beta$ integral yields, in this case, \begin{align*}
T_1 &\ll_B \max_{|t| \leq (\log X)^B} |\mathcal{F}(c_0+it)(1-w^{-it})| \cdot \frac{1}{\log X} \int_0^{\eta} \min\{\log X, 1/\beta\} d\beta + \frac{\mathcal{P}_f(X)}{(\log X)^B}\int_0^{\eta} \beta \min\{\log X,1/\beta\} d\beta \\
&\ll_B \max_{|t| \leq (\log X)^B} |\mathcal{F}(1+1/\log X+it)(1-w^{-it})| \cdot \frac{\log(\log X/\log (ew))}{\log X} + \frac{\mathcal{P}_f(X)}{(\log X)^B}. \end{align*}
Finally, we focus on the maximum here. Note that $|1-w^{-it}| \ll \min\{1,|t|\log (ew)\}$, so combining Lemma \ref{lem:LfnBd} with our hypothesis \eqref{eq:hyp4}, we obtain \begin{align*}
&\max_{|t| \leq (\log X)^B} |\mathcal{F}(1+1/\log X + it)(1-w^{-it})| \\
&\ll_{A,B,C} (\log X) \mathcal{P}_f(X) \cdot \max_{|t| \leq (\log X)^B}\min\{1,|t| \log (ew)\} e^{-\rho(f,n^{i(t_0+t)};X)^2} \\
&\ll_{A,B} (\log X) \mathcal{P}_f(X) \max_{|t| \leq (\log X)^B} \min\{1,|t|\log (ew)\} \left(\frac{1}{(\log X)^{\sigma}} + \frac{1}{(1+|t|\log X)^{\sigma}}\right) \\ &\ll (\log X) \mathcal{P}_f(X) \cdot \left(\frac{\log (ew)}{\log X}\right)^{\min\{1,\sigma\}}. \end{align*} Hence, as $\hat{\sigma} = \min\{1,\sigma\} \leq A \leq B$, and $(\log X)^{-1} \ll_A (\log X)^{-A} \mathcal{P}_f(X)$ by \eqref{eq:hyp3'} we get \begin{align*} T_1 &\ll_{A,B,C} \log\left(\frac{\log X}{\log(ew)}\right) \left(\frac{\log (ew)}{\log X}\right)^{\hat{\sigma}} \mathcal{P}_f(X) + \frac{\mathcal{P}_f(X)}{(\log X)^B} + \frac{\log\log X}{\log X} \\ &\ll \log\left(\frac{\log X}{\log(ew)}\right) \left(\frac{\log (ew) + \log\log X}{\log X}\right)^{\hat{\sigma}} \mathcal{P}_f(X). \end{align*} Combining all of these bounds and inserting them into \eqref{eq:intstoBd}, we thus find that \begin{align*}
&\left|\frac{1}{X}\sum_{n \leq X} f(n)n^{-it_0} - \frac{w}{X}\sum_{n \leq X/w} f(n)n^{-it_0}\right| \\ &\ll_{A,B,C} \mathcal{P}_f(X) \log\left(\frac{\log X}{\log(ew)}\right)\left(\left(\frac{\log (ew) + \log\log X}{\log X}\right)^{\min\{1,A\}}+ \left(\frac{\log (ew) + \log\log X}{\log X}\right)^{\hat{\sigma}}\right) \\
&\ll \mathcal{P}_f(X) \log\left(\frac{\log X}{\log(ew)}\right) \left(\frac{\log(ew) + \log\log X}{\log X}\right)^{\hat{\sigma}}. \end{align*} This proves the first claim. \\ The second claim can be deduced similarly, since (using the same notation as above) in the first step we have (after shifting $s \mapsto s-it_0$) \begin{align*} &\frac{1}{X^{1+it_0}}\sum_{n \leq X} f(n) - \left(\frac{w}{X}\right)^{1+it_0}\sum_{n \leq X/w} f(n) \\ &= \int_0^{\eta}\int_0^{\eta} \frac{1}{\pi i} \int_{c_0-i\infty}^{c_0+i\infty} \mathcal{S}(s-\alpha-\beta) \mathcal{L}(s+\beta) \frac{\mathcal{L}'}{\mathcal{L}}(s+\alpha)\frac{\mathcal{L}'}{\mathcal{L}}(s+\alpha+2\beta) \frac{X^{s-1} (1-w^{1-s})}{s+it_0} ds d\beta d\alpha \\ &+ O_{A,B,C}\left(\left(\frac{\log y}{\log X}\right)^A\mathcal{P}_f(X)\right), \end{align*}
which simply localizes the argument above to the range $|t+t_0| \leq T$ instead. \end{proof}
Theorem \ref{thm:Lipschitz} may be directly applied to obtain the first estimate in Theorem \ref{thm:compLongSums}. To obtain the second, we will use the following corollary of Theorem \ref{thm:Lipschitz} that allows us to pass from $n^{-it_0}$-twisted sums to untwisted sums of $f(n)$ on long intervals.
\begin{cor}\label{cor:tShift} Let $t_0 = t_0(f,X)$ be as above. Then for any $x \in (X/2,X]$, $$
\frac{1}{x}\sum_{n \leq x} f(n)n^{-it_0} = \frac{1+it_0}{x^{1+it_0}} \sum_{n \leq x} f(n) + O_{A,B,C}\left(|t_0| \mathcal{P}_f(X) \frac{(\log\log X)^{\hat{\sigma}+1}}{(\log X)^{\hat{\sigma}}}\right). $$
\end{cor} \begin{proof} By partial summation, we have \begin{equation}\label{eq:PSLipschitz} \frac{1}{x}\sum_{n \leq x} f(n)n^{-it_0} = \frac{1}{x}\int_1^x u^{-it_0} d\left(\sum_{n \leq u} f(n)\right) = \frac{1}{x^{1+it_0}} \sum_{n \leq x} f(n) +\frac{it_0}{x} \int_1^x \frac{1}{u^{1+it_0}}\sum_{n \leq u} f(n) du. \end{equation} We split the integral over $u$ at $x/(\log X)^2$. In the first range we use the trivial bound together with Lemma \ref{lem:Shiu}, obtaining $$
\leq \frac{|t_0|}{x} \int_1^{x/(\log X)^2} \left(\frac{1}{u} \sum_{n \leq u} |f(n)|\right) du \ll_{B,C} \frac{|t_0|}{(\log X)^2} \prod_{p \leq X} \left(1+\frac{|f(p)|}{p}\right) \ll \frac{|t_0|}{\log X}\mathcal{P}_f(X). $$ In the remaining range $x/(\log X)^2 < u \leq x$ we apply the second claim in Theorem \ref{thm:Lipschitz} (with $1 \leq w \leq (\log X)^2$), which gives \begin{align*} &\frac{it_0}{x} \int_{x/(\log X)^2}^x \left(\frac{1}{x^{1+it_0}} \sum_{n \leq x} f(n) + O_{A,B,C} \left(\mathcal{P}_f(X) \frac{(\log\log X)^{\hat{\sigma}+1}}{(\log X)^{\hat{\sigma}}}\right)\right) du \\
&= \left(\frac{1}{x^{1+it_0}}\sum_{n \leq x} f(n) \right) \cdot \frac{it_0}{x} \int_{x/(\log X)^2}^x du + O_{A,B,C} \left(|t_0| \mathcal{P}_f(X) \frac{(\log\log X)^{\hat{\sigma} + 1}}{(\log X)^{\hat{\sigma}}}\right) \\
&= \frac{it_0}{x^{1+it_0}} \sum_{n \leq x} f(n) + O_{A,B,C} \left(|t_0| \mathcal{P}_f(X) \frac{(\log\log X)^{\hat{\sigma} + 1}}{(\log X)^{\hat{\sigma}}}\right). \end{align*} Combining this with the estimate from $1 \leq u \leq x/(\log X)^2$, then inserting this into \eqref{eq:PSLipschitz}, we prove the claim. \end{proof}
\begin{proof}[Proof of Theorem \ref{thm:compLongSums}] We begin by proving the first estimate in the statement of the theorem. Let $X/(\log X)^{\hat{\sigma}/2} \leq h \leq X/10$. Note that by writing $x-h = x/w$, where $w := (1-h/x)^{-1} \in [1,2]$, the LHS is \begin{equation}\label{eq:hsumf} \frac{1}{h} \left( \sum_{n \leq x} f(n)n^{-it_0} -\sum_{n \leq x/w} f(n)n^{-it_0}\right), \end{equation} and similarly the main term in the RHS is \begin{equation}\label{eq:Xsumf} \frac{2}{X}\left(\sum_{n \leq X} f(n)n^{-it_0}-\sum_{n \leq X/2}f(n)n^{-it_0}\right). \end{equation} By Theorem \ref{thm:Lipschitz}, \eqref{eq:hsumf} becomes \begin{align*} &\frac{1}{h}\left( \left(1-\frac{1}{w}\right) \sum_{n \leq x} f(n)n^{-it_0} \right) + O_{A,B,C} \left(\frac{X}{h} \frac{(\log\log X)^{\hat{\sigma}+1}}{(\log X)^{\hat{\sigma}}}\mathcal{P}_f(X)\right) \\ &= \frac{1}{x}\sum_{n \leq x} f(n)n^{-it_0} + O_{A,B,C} \left(\frac{(\log\log X)^{\hat{\sigma}+1}}{(\log X)^{\hat{\sigma}/2}} \mathcal{P}_f(X)\right). \end{align*}
Similarly, applying Theorem \ref{thm:Lipschitz} twice to \eqref{eq:Xsumf}, we also find that \begin{align} \frac{2}{X}\sum_{X/2 < n \leq X} f(n)n^{-it_0} &= \frac{2}{X}\left(1-\frac{1}{2}\right) \sum_{n \leq X} f(n)n^{-it_0} + O_{A,B,C} \left(\frac{(\log\log X)^{\hat{\sigma}+1}}{(\log X)^{\hat{\sigma}}}\mathcal{P}_f(X)\right) \nonumber\\ &= \frac{1}{x} \sum_{n \leq x} f(n)n^{-it_0} + O_{A,B,C}\left(\frac{(\log\log X)^{\hat{\sigma}+1}}{(\log X)^{\hat{\sigma}}}\mathcal{P}_f(X)\right), \label{eq:xtoX} \end{align} viewing $x = X/u$ for some $u \in [1,2]$ in the last step. Combined with the previous estimate, we deduce the first claimed estimate of the theorem.
To prove the second claimed estimate we apply Corollary \ref{cor:tShift} to obtain \begin{align*} \sum_{x-h < n \leq x} f(n) = \frac{x^{it_0}}{1+it_0} \sum_{n \leq x} f(n)n^{-it_0} - \frac{(x-h)^{it_0}}{1+it_0} \sum_{n \leq x-h} f(n)n^{-it_0} + O\left(X \mathcal{P}_f(X) \frac{(\log\log X)^{\hat{\sigma} + 1}}{(\log X)^{\hat{\sigma}}}\right). \end{align*} Setting $w := (1-h/x)^{-1}$ as in \eqref{eq:hsumf}, we have \begin{align*} &\frac{x^{it_0}}{1+it_0} \sum_{n \leq x} f(n)n^{-it_0} - \frac{(x-h)^{it_0}}{1+it_0} \sum_{n \leq x-h} f(n)n^{-it_0} \\ &= \left(\frac{x^{it_0}}{1+it_0} - \frac{1}{w} \frac{(x-h)^{it_0}}{1+it_0}\right) \sum_{n \leq x} f(n)n^{-it_0} + O\left(X \mathcal{P}_f(X) \frac{(\log\log X)^{\hat{\sigma} + 1}}{(\log X)^{\hat{\sigma}}}\right) \\ &= \frac{x^{1+it_0} - (x-h)^{1+it_0}}{1+it_0} \cdot \frac{1}{x} \sum_{n \leq x} f(n)n^{-it_0} + O\left(X \mathcal{P}_f(X) \frac{(\log\log X)^{\hat{\sigma} + 1}}{(\log X)^{\hat{\sigma}}}\right), \end{align*} using that $1/w = (x-h)/x$ in the last step. By \eqref{eq:xtoX}, the main term here is $$ \int_{x-h}^x u^{it_0} du \cdot \frac{2}{X}\sum_{X/2 < n \leq X} f(n)n^{-it_0} + O\left(h \mathcal{P}_f(X) \frac{(\log\log X)^{\hat{\sigma} + 1}}{(\log X)^{\hat{\sigma}}}\right). $$ Combining these estimates, we thus obtain $$ \frac{1}{h} \sum_{x-h < n \leq x} f(n) = \frac{1}{h}\int_{x-h}^x u^{it_0} du \cdot \frac{2}{X} \sum_{X/2 < n \leq X} f(n)n^{-it_0} + O\left(\frac{X}{h} \mathcal{P}_f(X)\frac{(\log\log X)^{\hat{\sigma} + 1}}{(\log X)^{\hat{\sigma}}}\right), $$ and so the second claimed estimate of the theorem follows from $h \geq X/(\log X)^{\hat{\sigma}/2}$. \end{proof}
\section{Applying the Matom\"{a}ki-Radziwi\l\l{} Method} In this section, which broadly follows the lines of the proof of \cite[Thm. 3]{MR}, we set out the key elements of the proof of Theorem \ref{thm:MRDB}. \subsection{Large sieve estimates} The content of this subsection can essentially all be found in \cite[Sec. 6 and 7]{MR} and in \cite[Sec. 3]{MRII}.
In what follows, a set $\mathcal{T} \subset \mathbb{R}$ is said to be \emph{well-spaced} if $|t_1-t_2| \geq 1$ for any distinct $t_1,t_2 \in \mathcal{T}$. \begin{lem}[Sparse large sieve for multiplicative sequences] \label{lem:LSInts} Let $T \geq 1$ and $2 \leq N \leq X$. Let $\{a_n\}_{n \leq N}$ be a sequence of complex numbers. Let $\mathcal{T} \subset [-T,T]$ be well-spaced. The following bounds hold: \begin{enumerate}[(a)] \item ($L^2$ mean value theorem, sparse version) \begin{align*}
\int_{-T}^T |\sum_{n \leq N} a_n n^{-it}|^2 dt &\ll T\sum_{n \leq N} |a_n|^2 + T\sum_{n \leq N} \sum_{1 \leq |m| \leq n/T} |a_na_{m+n}|. \end{align*}
\item ($L^2$ mean value theorem with multiplicative majorant) Let $1 \leq M \leq N$, and let $c > 0$. Assume there is a multiplicative function $f: \mathbb{N} \rightarrow \mathbb{C}$ satisfying $|f(n)| \leq d_B(n)^C$ such that $|a_n| \leq c|f(n)|$ for all $n\leq N$. Then \begin{align*}
\int_{-T}^T |\sum_{N-M < n \leq N} \frac{a_nn^{-it}}{n}|^2 dt
&\ll_{B,C} c^2\left(\frac{TM}{N^2} \mathcal{P}_{f^2}(N) + \frac{M}{N}\mathcal{P}_f(N)^2\right). \end{align*} \item (Discrete mean value theorem) $$
\sum_{t \in \mathcal{T}} |\sum_{N/3 < n \leq N} \frac{a_nn^{-it}}{n}|^2 \ll \min\left\{\left(1+\frac{T}{N}\right) \log(2N) , \left(1+|\mathcal{T}|\frac{T^{1/2}}{N}\right)\log(2T)\right\} \frac{1}{N} \sum_{N/3 < n \leq N} |a_n|^2. $$ \end{enumerate} \end{lem}
\begin{proof} Part (a) is \cite[Lem. 3.2]{MRII}, part (b) is proven in the same way as\footnote{The technique used there relies on the main result of \cite{Hen}, which is valid generally for multiplicative functions that are bounded by a power of the divisor function.} \cite[Lem. 3.4]{MRII} and part (c) is a combination of \cite[Lem. 7 and 9]{MR}. \end{proof}
\begin{lem}[Large Sieve with Prime Support] \label{lem:LSPrim}
Let $B,T \geq 1$, $P \geq 10$. Let $\{a_p\}_{P < p \leq 2P}$ be a sequence with $\max_{P < p \leq 2P}|a_p| \leq B$. Let $P(s) := \sum_{P < p \leq 2P} a_pp^{-s}$, for $s \in \mathbb{C}$ and let $\mathcal{T} \subset [-T,T]$ be a well-spaced set. \begin{enumerate}[(a)] \item (Hal\'{a}sz-Montgomery estimate for primes) $$
\sum_{t \in \mathcal{T}} |P(1+it)|^2 \ll_B \frac{1}{(\log P)^2} \left(1+ |\mathcal{T}|(\log T)^2 \exp\left(-\frac{\log P}{(\log T)^{2/3+\varepsilon}}\right)\right). $$
\item (Large values estimate) If $\mathcal{T}$ consists only of $t \in [-T,T]$ with $|P(1+it)| \geq V^{-1}$ then $$
|\mathcal{T}| \ll_B T^{2\frac{\log V}{\log P}} V^2 \exp\left(2B\frac{\log T}{\log P}\log\log T\right). $$ \end{enumerate} \end{lem} \begin{proof}
Part (a) is \cite[Lem. 11]{MR}, while part (b) is proven precisely as in \cite[Lem. 8]{MR}, keeping track of the upper bound condition $|a_p| \leq B$. \end{proof}
\subsection{Dirichlet Polynomial Decomposition} The following is a variant of \cite[Lem. 12]{MR} tailored to elements of $\mathcal{M}(X;A,B,C;\gamma,\sigma)$. \begin{lem}\label{lem:decomp} Let $f \in \mathcal{M}(X;A,B,C;\gamma,\sigma)$. Let $T \geq 1$, $1 \leq H \leq X^{1/2}$ and $1 \leq P \leq Q \leq X^{1/2}$. Set $\mathcal{I}$ to be the interval of integers $\left\lfloor H\log P \right\rfloor \leq v \leq H \log Q$, and let $\mathcal{T} \subset [-T,T]$. Then \begin{align*}
\int_{\mathcal{T}} &\left|\sum_{\substack{X/3 < n \leq X \\ \omega_{[P,Q]}(n) \geq 1}} \frac{f(n)}{n^{1+it}}\right|^2 dt \ll_{B,C} H \log(Q/P) \sum_{v \in \mathcal{I}} \int_{\mathcal{T}} |Q_{v,H}(1+it)R_{v,H}(1+it)|^2 dt \\ &+ \left(\frac{1}{H}+\frac{1}{P}\right) \left(\frac{T}{X}\mathcal{P}_{f^2}(X) + \mathcal{P}_f(X)^2\right), \end{align*} where for $v \in \mathcal{I}$ and $s \in \mathbb{C}$ we have set \begin{align*} Q_{v,H}(s) &:= \sum_{\substack{P \leq p \leq Q \\ v/H \leq \log p \leq (v+1)/H}} f(p)p^{-s} \\ R_{v,H}(s) &:= \sum_{Xe^{-v/H}/3 \leq m \leq Xe^{-v/H}} \frac{f(m)}{m^s(\omega_{[P,Q]}(m) + 1)}. \end{align*} \end{lem} \begin{proof} The proof is the same as that of \cite[Lem. 12]{MR} (with $a_n = f(n)1_{\omega_{[P,Q]} \geq 1}(n)$, $b_m = f(m)$ and $c_p = f(p)$ for $P \leq p \leq Q$), with appropriate appeal to Lemma \ref{lem:LSInts} in place of the usual mean value theorem. For example (as on \cite[top of p.20]{MR}), for $Y \in \{X/(3Q),X/P\}$ we have \begin{align*}
\int_{\mathcal{T}}\left|\sum_{m \in [Ye^{-1/H},Ye^{1/H}]} f(m)m^{-1-it}\right|^2 dt
&\ll_{B,C} \frac{T}{XH} \mathcal{P}_{f^2}(X) + \frac{1}{H}\mathcal{P}_f(X)^2.
\end{align*}
\end{proof}
\begin{lem} \label{lem:MixedMom} Let $f \in \mathcal{M}(X;A,B,C;\gamma,\sigma)$. Let $T,Y_1,Y_2 \geq 1$, $1 \leq X' \leq X/Y_1$ and let $\ell := \lceil \frac{\log Y_2}{\log Y_1}\rceil$. Define
$$
Q(s) := \sum_{Y_1/2 < p \leq Y_1} c_pp^{-s}, \quad \quad A(s) := \sum_{X'/(2Y_2) < m \leq X'/Y_2} f(m)m^{-s},
$$
where $|c_p| \leq B$ for all $Y_1/2<p \leq Y_1$. Finally, let $\mathcal{T} \subseteq [-T,T]$. Then
$$
\int_{\mathcal{T}} |Q(1+it)^{\ell}A(1+it)|^2 dt \ll_{B,C} B^{2\ell} (\ell!)^2 \left(\frac{T}{X} \mathcal{P}_{f^2}(X) + \mathcal{P}_f(X)^2\right).
$$
\end{lem}
\begin{proof}
Writing $c_p := B c_p'$, where now $|c_p'| \leq 1$ and letting $\tilde{Q}(s)$ denote the Dirichlet polynomial with $c_p$ replaced by $c_p'$ for all $p \in (Y_1/2,Y_1]$, the LHS in the statement is \[
\ll B^{2\ell} \int_{\mathcal{T}} |\tilde{Q}(1+it) A(1+it)|^2 dt. \]
The rest of the proof is essentially the same as that of \cite[Lem. 7.1]{MRII}, save that the function $g^{\ast}$ is replaced by $|f|$ (this does not affect the proof, which depends on our Lemma \ref{lem:Shiu} and \cite[Thm. 3]{Hen}, both of which also apply to $|f|$).
\end{proof}
\subsection{Integral Averages of Dirichlet Polynomials} As above, we write $t_0 = t_0(f,X)$ to denote an element of $[-X,X]$ that minimizes the map $$
t \mapsto \sum_{p \leq X} \frac{|f(p)| - \text{Re}(f(p)p^{-it})}{p} = \rho(f, n^{it};X)^2. $$ \begin{prop} \label{prop:Pars} Let $1/\log X \leq \delta \leq 1$, and put $I_{\delta}:= [t_0-\delta^{-1},t_0 + \delta^{-1}]$. Let $\{a_n\}_{n \leq X}$ be a sequence of complex numbers. Then \begin{align*}
&\frac{2}{X}\int_{X/2}^X \left|\frac{1}{h}\sum_{\substack{x-h < m \leq x}} a_m - \frac{1}{2\pi} \int_{I_{\delta}} A(1+it) \frac{x^{1+it}-(x-h)^{1+it}}{h(1+it)}dt\right|^2dx \\
&\ll \int_{[-X/h, X/h] \backslash I_{\delta}} |A(1+it)|^2 dt + \max_{T \geq X/h} \frac{X}{hT}\int_T^{2T} |A(1+it)|^2 dt, \end{align*} where we have set $$ A(s) := \sum_{\substack{X/3 < n \leq X }} \frac{a(n)}{n^s}, \quad\quad s \in \mathbb{C}. $$ \end{prop} \begin{proof}
For each $x \in [X/2,X]$, Perron's formula gives $$ \frac{1}{h}\sum_{x-h < m \leq x} a_m = \frac{1}{2\pi} \int_{\mathbb{R}} A(1+it) \frac{x^{1+it}-(x-h)^{1+it}}{h(1+it)} dt. $$
Subtracting the contribution from $|t-t_0| \leq \delta^{-1}$, the LHS in the statement becomes $$
\frac{2}{X}\int_{X/2}^X\left|\frac{1}{2\pi} \int_{\mathbb{R} \backslash I_{\delta}} A(1+it) \frac{x^{1+it}-(x-h)^{1+it}}{h(1+it)} dt\right|^2 dx. $$ The remainder of the proof is identical to that of \cite[Lemma 14]{MR} (which only uses the boundedness of the coefficients $\{a_n\}_n$ there for the corresponding contribution of $I_{\delta}$). \end{proof}
\subsection{Restricting to a ``nicely factored'' set} We fix parameters $\eta \in (0,1/12)$, $Q_1 := h_0$, $P_1 := (\log h_0)^{40B/\eta}$ and $P_j := \exp\left(j^{4j-2} (\log Q_1)^{j-1}(\log P_1)\right)$, $Q_j := \exp\left(j^{4j} (\log Q_1)^j\right)$, for $1 \leq j \leq J$, where $J$ is maximal with $Q_J \leq \exp\left(\sqrt{\log X}\right)$. We highlight the different choice of $P_1$, with all other choices being the same as in \cite[Sec. 2 and 8]{MR}. We also let $$ \mathcal{S} = \mathcal{S}_{X,P_1,Q_1} := \{n \leq X : \omega_{[P_j,Q_j]}(n) \geq 1 \text{ for all } 1 \leq j \leq J\}. $$ \begin{rem} \label{rem:params} The following properties may be verified directly, as long as $h_0$ is sufficiently large (in terms of $B$): \begin{enumerate} \item $\log P_j \geq
\frac{8Bj^2}{\eta} \log\log (2BQ_{j+1}) \text{ for all $1 \leq j \leq J$}$ \item $\log Q_j \leq 2^{4j} (\log Q_{j-1})(\log Q_1) \leq (\log Q_{j-1})^{3} \leq Q_{j-1}^{1/24}$ \item $\frac{\log P_j}{\log Q_j} = \frac{\log P_1}{j^2 \log Q_1}$ for $2 \leq j \leq J$, so the terms $\{\log P_j/\log Q_j\}_{j \geq 1}$ are summable. \end{enumerate} We will use these in due course. \end{rem}
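For instance, property (3) is immediate from the definitions of $P_j$ and $Q_j$: for $2 \leq j \leq J$,
$$
\frac{\log P_j}{\log Q_j} = \frac{j^{4j-2} (\log Q_1)^{j-1} \log P_1}{j^{4j} (\log Q_1)^{j}} = \frac{\log P_1}{j^2 \log Q_1}, \qquad \text{whence} \qquad \sum_{j \geq 1} \frac{\log P_j}{\log Q_j} \ll \frac{\log P_1}{\log Q_1} = \frac{40B}{\eta} \cdot \frac{\log\log h_0}{\log h_0}.
$$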
We wish to reduce our work to handling short and long averages with $n$ restricted to the set $\mathcal{S}$. To handle averages of $f(n)$ for $n \notin \mathcal{S}$ we use the following result. For a set of integers $\mathcal{A}$, we write $(n,\mathcal{A}) = 1$ to mean that $(n,a) = 1$ for all $a \in \mathcal{A}$.
\begin{lem} \label{lem:contS} Let $1 < P \leq Q \leq X$, and let $f \in \mathcal{M}(X;A,B,C;\gamma,\sigma)$. Then for any $10 \leq h_0 \leq X/10H(f;X)$ and $h := h_0H(f;X)$, $$
\frac{2}{X}\int_{X/2}^{X} \left|\frac{1}{h}\sum_{\substack{x-h < m \leq x \\ (m,[P,Q]) = 1}} f(m)\right|^2 dx \ll_{A,B,C} \left(\left(\frac{\log P}{\log Q} \right)^A+ \frac{1}{h_0}\right) \mathcal{P}_f(X)^2. $$ In particular, $$
\frac{2}{X} \int_{X/2}^{X} \left|\frac{1}{h} \sum_{\substack{x-h < m \leq x \\ m \notin \mathcal{S}}} f(m)\right|^2 dx \ll_{A,B,C} \left(\frac{\log \log h_0}{\log h_0}\right)^{A} \mathcal{P}_f(X)^2. $$ \end{lem} \begin{proof} Expanding the square and applying the triangle inequality, the LHS is \begin{align*}
&\leq \frac{2}{h^2X} \sum_{\substack{X/2-h < m_1,m_2 \leq X \\ |m_1-m_2| \leq h \\ (m_1m_2,[P,Q]) = 1}} |f(m_1)\overline{f(m_2)}| \int_{X/2}^X 1_{[m_1,m_1+h)}(x) 1_{[m_2,m_2+h)}(x) dx \\
&\ll \frac{1}{hX} \sum_{\substack{X/3 < m \leq X}} |f(m)|^2 + \frac{1}{hX} \sum_{1 \leq |l| \leq h} \sum_{\substack{X/3 < m \leq X \\ (m,[P,Q]) = 1}} |f(m)||f(m+l)|. \end{align*} By Lemma \ref{lem:Shiu} and \eqref{eq:HfXBd}, the first term on the RHS is bounded as \begin{equation}\label{eq:DiagBd} \ll_{B,C} \frac{1}{h_0H(f;X)}\mathcal{P}_{f^2}(X) \ll_B \frac{1}{h_0}\mathcal{P}_f(X)^2. \end{equation} Next, to bound the correlation sums we apply\footnote{This result is stated in \cite{MRII} for bounded multiplicative functions, but the proof there works identically for divisor-bounded functions as well since it relies principally on the general setup of \cite[Thm. 3]{Hen}.} \cite[Lem. 3.3]{MRII} (with $r_1 = r_2 = 1$) to the pair of multiplicative functions $f1_{(m,[P,Q]) = 1}$ and $f$, which gives $$
\sum_{1 \leq |l| \leq h} \sum_{\substack{X/3 < n \leq X \\ (n,[P,Q]) = 1}} |f(n)f(n+l)| \ll_{B,C} hX\mathcal{P}_{f1_{(m,[P,Q]) = 1}}(X)\mathcal{P}_f(X). $$ Since $f \in \mathcal{M}(X;A,B,C;\gamma,\sigma)$, by \eqref{eq:hyp3'} we get \begin{align*}
\mathcal{P}_{f1_{(m,[P,Q]) = 1}}(X) \asymp_B \mathcal{P}_f(X) \prod_{P \leq p \leq Q}\left(1+\frac{1-|f(p)|}{p}\right)\left(1-\frac{1}{p}\right) &\ll_{A,B} \mathcal{P}_f(X) \exp\left(-A \log\frac{\log Q}{\log P}\right) \\ &= \left(\frac{\log P}{\log Q}\right)^{A} \mathcal{P}_f(X). \end{align*} It follows that $$
\frac{1}{hX}\sum_{1 \leq |l| \leq h} \sum_{\substack{X/3 < n \leq X \\ (n,[P,Q]) = 1}} |f(n)||f(n+l)| \ll_{A,B} \left(\frac{\log P}{\log Q}\right)^{A} \mathcal{P}_f(X)^2. $$ The first claim now follows upon combining this with \eqref{eq:DiagBd}.
The second claim follows similarly, save that in the argument above the term $\mathcal{P}_{f1_{(m,[P,Q]) = 1}}(X)$ is replaced by \begin{align*}
\mathcal{P}_{f1_{\mathcal{S}^c}}(X) &\ll_B \mathcal{P}_f(X)\prod_{1 \leq j \leq J} \prod_{P_j \leq p \leq Q_j} \left(1+\frac{1-|f(p)|}{p}\right)\left(1-\frac{1}{p}\right) \ll_{A,B} \mathcal{P}_f(X) \exp\left(-A \sum_{1 \leq j \leq J} \log\frac{\log Q_j}{\log P_j}\right) \\ &\ll \left(\frac{\log P_1}{\log Q_1}\right)^{A} \mathcal{P}_f(X) \ll_{A,B} \left(\frac{\log\log h_0}{\log h_0}\right)^A \mathcal{P}_f(X). \end{align*} \end{proof}
Having disposed of $n \notin \mathcal{S}$, we now concentrate on $n \in \mathcal{S}$. To prove Theorem \ref{thm:MRDB} we will apply Proposition \ref{prop:Pars} to the sequence $a_m = f(m)1_{m \in \mathcal{S}}$, in combination with the following key proposition. \\ Recall that $\kappa := \frac{\min\{1,\sigma\}}{8B + 21}$. We also define $\Delta := (2B+5)\kappa$, and $$ F(s) := \sum_{\substack{ X/3 < n \leq X \\ n \in \mathcal{S}}} \frac{f(n)}{n^s}, \quad s \in \mathbb{C}. $$ \begin{prop} \label{prop:L2DPInt} Set $\delta = (\log X)^{-\Delta}$. Then $$
\int_{[-X/h,X/h] \backslash I_{\delta}} |F(1+it)|^2 dt \ll_{A,B,C} \left(\frac{(\log Q_1)^{1/3}}{P_1^{1/6-2\eta}} + \left(\frac{\log\log X}{(\log X)^{\kappa}}\right)^{\min\{1,A\}} \right) \mathcal{P}_f(X)^2. $$
\end{prop} \begin{rem}\label{rem:2ndInt} We remark that both terms in Proposition \ref{prop:Pars} can be treated using Proposition \ref{prop:L2DPInt}. Indeed, by Lemma \ref{lem:LSInts}(b), $$
\frac{1}{T}\int_T^{2T} |F(1+it)|^2 dt \ll_{B,C} \frac{1}{X} \mathcal{P}_{f^2}(X) + \frac{1}{T}\mathcal{P}_f(X)^2,
$$ and therefore $$
\max_{T \geq X(\log h_0)^A/h} \frac{X/h}{T} \int_T^{2T} |F(1+it)|^2 dt \ll_{B,C} \frac{1}{h}\mathcal{P}_{f^2}(X) + \frac{1}{(\log h_0)^A} \mathcal{P}_{f}(X)^2 \ll \frac{1}{(\log h_0)^A} \mathcal{P}_f(X)^2, $$ which is sufficient for Theorem \ref{thm:MRDB}. We clearly also have $$
\max_{X/h < T \leq X(\log h_0)^A/h} \frac{X/h}{T} \int_T^{2T} |F(1+it)|^2 dt \leq \int_{X/h}^{X(\log h_0)^A/h} |F(1+it)|^2 dt. $$ This expression will also be bounded using Proposition \ref{prop:L2DPInt} with $h$ replaced by $h/(\log h_0)^A$, which does not change the form of the final estimates. \end{rem}
\subsection{Proof of Proposition \ref{prop:L2DPInt}} The proof follows the same lines as those in \cite[Sec. 8]{MR}, save that we apply our versions of the corresponding lemmas that address the growth of $f \in \mathcal{M}(X;A,B,C;\gamma,\sigma)$. We sketch the details here, emphasizing the main differences. \\ We select parameters $\alpha_1,\ldots,\alpha_J$ such that $\alpha_j := \frac{1}{4}-\eta \left(1+\frac{1}{2j}\right)$. For each $1 \leq j \leq J$ let $$ \mathcal{I}_j := [\left\lfloor H_j \log P_j \right\rfloor, H_j\log Q_j] \cap \mathbb{Z}, \quad \quad H_j := j^2 \frac{P_1^{1/6-\eta}}{(\log Q_1)^{1/3}}. $$ Also, with $s \in \mathbb{C}$ we let $$ Q_{v,H_j}(s) := \sum_{\substack{P_j \leq p \leq Q_j \\ e^{v/H_j} \leq p \leq e^{(v+1)/H_j}}} f(p)p^{-s}, \quad\quad R_{v,H_j}(s) := \sum_{\frac{1}{3}Xe^{-v/H_j} < m \leq Xe^{-v/H_j}} \frac{f(m)}{m^s(1+\omega_{[P_j,Q_j]}(m))}. $$
We split the set of $t \in \mathcal{X} := [-X/h,X/h] \backslash I_{\delta}$ into sets \begin{align*}
\mathcal{T}_1 &:= \{t \in \mathcal{X} : |Q_{v,H_1}(1+it)| \leq e^{-\alpha_1v/H_1} \text{ for all } v \in \mathcal{I}_1\} \\
\mathcal{T}_j &:= \{t \in \mathcal{X} : |Q_{v,H_j}(1+it)| \leq e^{-\alpha_jv/H_j} \text{ for all } v \in \mathcal{I}_j\} \backslash \bigcup_{1 \leq i \leq j-1} \mathcal{T}_i, \quad 2 \leq j \leq J, \text{ if } J \geq 2 \\
\int_{[-X/h,X/h] \backslash I_{\delta}} |F(1+it)|^2 dt = \sum_{1 \leq j \leq J} \int_{\mathcal{T}_j} |F(1+it)|^2 dt + \int_{\mathcal{U}} |F(1+it)|^2 dt. $$ We estimate the contributions from $\mathcal{T}_j$ as in \cite{MR}, save that in the applications of the large sieve inequalities we use Lemma \ref{lem:LSInts}(b); as an example we will give full details for $j = 1$ and highlight the main changes for the corresponding bounds for $2 \leq j \leq J$. \\ In each case we apply Lemma \ref{lem:decomp} to obtain $$
\int_{\mathcal{T}_j} |F(1+it)|^2 dt \ll_{B,C} \mathcal{M}_j + \mathcal{E}_j, $$ where we set \begin{align*}
\mathcal{M}_j &:= H_j\log(Q_j/P_j) \sum_{v \in \mathcal{I}_j} \int_{\mathcal{T}_j} |Q_{v,H_j}(1+it)R_{v,H_j}(1+it)|^2 dt \\ \mathcal{E}_j &:= \left(\frac{1}{H_j}+\frac{1}{P_j}\right) \left(\frac{T}{X} \mathcal{P}_{f^2}(X) + \mathcal{P}_f(X)^2\right). \end{align*} The choice of parameters gives, by \eqref{eq:DiagBd}, \begin{equation}\label{eq:Pf2toPf} \sum_{1 \leq j \leq J} \mathcal{E}_j \ll \frac{(\log Q_1)^{1/3}}{P_1^{1/6-\eta}}\left(\frac{1}{h_0H(f;X)} \mathcal{P}_{f^2}(X) + \mathcal{P}_f(X)^2\right) \ll_{B,C} \frac{(\log Q_1)^{1/3}}{P_1^{1/6-\eta}} \mathcal{P}_f(X)^2, \end{equation} since $\sum_j P_j^{-1} \ll P_1^{-1}$ and $H(f;X)^{-1} \mathcal{P}_{f^2}(X) \ll_{B,C} \mathcal{P}_f(X)^2$. \\ We next consider the contribution from the main terms. When $j = 1$, since $v/H_1 \leq \log Q_1 = \log h_0$, we have \begin{align*}
\mathcal{M}_1 &\leq H_1\log Q_1\sum_{v \in \mathcal{I}_1} e^{-2\alpha_1 v/H_1} \int_{-X/h}^{X/h} |R_{v,H_1}(1+it)|^2 dt \\ &\ll_{B,C} H_1\log Q_1\sum_{v \in \mathcal{I}_1} e^{-2\alpha_1 v/H_1} \left(\mathcal{P}_f(X)^2+ \frac{e^{v/H_1}}{h_0H(f;X)} \mathcal{P}_{f^2}(X)\right) \\ &\ll_{B,C} \frac{H_1^2\log Q_1}{P_1^{2\alpha_1}}\mathcal{P}_{f}(X)^2, \end{align*} treating $\mathcal{P}_{f^2}(X)$ using \eqref{eq:DiagBd}. Since $H_1^2\log Q_1 = P_1^{1/3-2\eta}(\log Q_1)^{1/3} \leq P_1^{1/3-\eta}$, $P_1^{2\alpha_1} \geq P_1^{1/2-3\eta}$ and $\eta \in (0,1/12)$, we get $$ \mathcal{M}_1 \ll P_1^{-1/6+2\eta} \mathcal{P}_{f}(X)^2. $$ Let now $2 \leq j \leq J$, if $J \geq 2$. By definition, we have $$ \mathcal{T}_j = \bigcup_{r \in \mathcal{I}_{j-1}} \mathcal{T}_{j,r}, $$
where $\mathcal{T}_{j,r}$ is the set of all $t \in \mathcal{T}_j$ such that $|Q_{r,H_{j-1}}(1+it)| > e^{-\alpha_{j-1} r/H_{j-1}}$.
Pointwise bounding $|Q_{v,H_j}(1+it)|$ for each $v \in \mathcal{I}_j$ leads to \begin{align*}
\mathcal{M}_j&\leq H_j \log Q_j\sum_{v \in \mathcal{I}_j} \sum_{r \in \mathcal{I}_{j-1}} e^{-2\alpha_j v/H_j} \int_{\mathcal{T}_{j,r}} |R_{v,H_j}(1+it)|^2 dt \\
&\leq (H_j \log Q_j)|\mathcal{I}_j||\mathcal{I}_{j-1}| e^{-2\alpha_j v_0/H_j + 2\ell_j \alpha_{j-1} r_0/H_{j-1}} \int_{-X/h}^{X/h} |Q_{r_0,H_{j-1}}(1+it)^{\ell_j} R_{v_0,H_j}(1+it)|^2 dt, \end{align*} where $(r_0,v_0) \in \mathcal{I}_{j-1} \times \mathcal{I}_j$ yields the maximal contribution among all such pairs, and $\ell_j := \left\lceil\frac{v_0/H_j}{r_0/H_{j-1}}\right\rceil$. Using Lemma \ref{lem:MixedMom}, we get $$ \mathcal{M}_j \ll_{B,C} (H_j \log Q_j)^3 e^{-2\alpha_j v_0/H_j + 2\ell_j \alpha_{j-1} r_0/H_{j-1}} \exp\left(2\ell_j \log(2B\ell_j)\right) \left(1 + \frac{1}{h_0}\right) \mathcal{P}_{f}(X)^2. $$ Minor modifications to the estimates in \cite[Sec. 8.2]{MR} (with $h_0$ in place of $h$ there), selecting $h_0$ sufficiently large in terms of $B$, show that $$ \mathcal{M}_j \ll_{B,C} \frac{1}{j^2P_1}\mathcal{P}_{f}(X)^2, $$ whence it follows that $$ \sum_{2 \leq j \leq J} \mathcal{M}_j \ll P_1^{-1} \mathcal{P}_{f}(X)^2 $$ (the requirements on our parameters $P_j,Q_j$ and $\alpha_j$ summarized in Remark \ref{rem:params} are sufficient for this). \\ Finally, we consider $\mathcal{U}$. Set $H := (\log X)^{\kappa}$, $P = \exp((\log X)^{1-\kappa})$ and $Q = \exp(\log X/\log\log X)$, put $\mathcal{I} := [\left\lfloor H\log P\right\rfloor, H\log Q] \cap \mathbb{Z}$, and define $Q_{v,H}$ and $R_{v,H}$ by $$ Q_{v,H}(s) := \sum_{\substack{P \leq p \leq Q \\ e^{v/H} \leq p \leq e^{(v+1)/H}}} f(p)p^{-s}, \quad\quad R_{v,H}(s) := \sum_{\frac{1}{3}Xe^{-v/H} < m \leq Xe^{-v/H}} \frac{f(m)}{m^s(1+\omega_{[P,Q]}(m))}. $$ Combining Lemma \ref{lem:decomp} with Lemma \ref{lem:LSInts}(a) and the proof of Lemma \ref{lem:contS} to control those $n$ free of prime factors in $[P,Q]$, we get that there is some $v_0 \in \mathcal{I}$ such that \begin{align} \label{eq:UInt}
&\int_{\mathcal{U}} |F(1+it)|^2 dt \nonumber\\
&\ll_{B,C} (H \log X)^2 \int_{\mathcal{U}} |Q_{v_0,H}(1+it)R_{v_0,H}(1+it)|^2 dt + \int_{-T}^T \left|\sum_{\substack{X/3 < n \leq X \\ (n,[P,Q]) = 1}} \frac{f(n)}{n^{1+it}}\right|^2 dt + \mathcal{P}_f(X)^2\left(1+\frac{1}{h_0}\right)\left(\frac{1}{H} + \frac{1}{P}\right) \nonumber\\
&\ll_{B,C} (H \log X)^2 \int_{\mathcal{U}} |Q_{v_0,H}(1+it)R_{v_0,H}(1+it)|^2 dt + \mathcal{P}_f(X)^2\left(\frac{1}{H} + \frac{1}{P}+ \left(\frac{\log P}{\log Q}\right)^A\right) \nonumber\\
&\ll_{B,C} (H \log X)^2 \int_{\mathcal{U}} |Q_{v_0,H}(1+it)R_{v_0,H}(1+it)|^2 dt + \left(\frac{\log\log X}{(\log X)^{\kappa}}\right)^{\min\{1,A\}}\mathcal{P}_f(X)^2. \end{align} As in \cite[Sec. 8.3]{MR} we may select a discrete subset $\mathcal{V} \subset \mathcal{U}$ that is well-spaced, such that $$
\int_{\mathcal{U}} |Q_{v_0,H}(1+it)R_{v_0,H}(1+it)|^2 dt \ll \sum_{t \in \mathcal{V}} |Q_{v_0,H}(1+it)R_{v_0,H}(1+it)|^2. $$
By assumption, for each $t \in \mathcal{V}$ we have $|Q_{r_0,H_J}(1+it)| > e^{-\alpha_J r_0/H_J} \geq P_J^{-\alpha_J}$, for some $r_0 \in \mathcal{I}_J$. We have $\log Q_{J+1} \geq \sqrt{\log X}$ by definition, and (as mentioned in Remark \ref{rem:params}) $\log P_J \geq \frac{4B}{\eta} \log\log Q_{J+1}$, whence $(\log X)^{2B/\eta} \leq P_J \leq Q_J \leq \exp(\sqrt{\log X})$. Applying Lemma \ref{lem:LSPrim}(b) for each $r_0 \in \mathcal{I}_J$, we thus have $$
|\mathcal{V}| \ll_B |\mathcal{I}_J|\exp\left(2\alpha_J(\log P_J) \left(1+\frac{\log X}{\log P_J}\right) + 2B \frac{\log X \log\log X}{\log P_J}\right) \leq X^{1/2-2\eta + o(1)} \cdot X^{\eta} = X^{1/2-\eta + o(1)}. $$ We now split the set $\mathcal{V}$ into the subsets \begin{align*}
\mathcal{V}_S &:= \{t \in \mathcal{V} : |Q_{v_0,H}(1+it)| \leq (\log X)^{-\frac{1}{2}B^2 - 10}\} \\
\mathcal{V}_L &:= \{t \in \mathcal{V} : |Q_{v_0,H}(1+it)| > (\log X)^{-\frac{1}{2}B^2 - 10}\};
\end{align*} the exponent $B^2/2$ is present in order to cancel the $\log X$ power that arises from $$ \mathcal{P}_{f^2}(X) \ll_B H(f;X) \mathcal{P}_f(X)^2 \ll_B (\log X)^{B^2} \mathcal{P}_f(X)^2. $$
By a pointwise bound, Lemma \ref{lem:LSInts}(c) and the above estimate for $|\mathcal{V}| \geq |\mathcal{V}_S|$, we obtain \begin{align*}
\sum_{t \in \mathcal{V}_S} |Q_{v_0,H}(1+it)R_{v_0,H}(1+it)|^2 &\ll (\log X)^{-B^2-19}(1+|\mathcal{V}_S|X^{-1/2}) \frac{e^{v_0/H}}{X}\sum_{X/(3e^{v_0/H}) < n \leq X/e^{v_0/H}} |f(n)|^2 \\ &\ll_{B,C} (\log X)^{-B^2-19} \mathcal{P}_{f^2}(X) \ll_B (\log X)^{-19} \mathcal{P}_f(X)^2. \end{align*}
Consider next the contribution from $\mathcal{V}_L$. Applying Lemma \ref{lem:LSPrim}(b) once again, this time using the condition $|Q_{v_0,H}(1+it)| > (\log X)^{-\frac{1}{2}B^2-10}$, we obtain $$
|\mathcal{V}_L| \ll_B \exp\left((B^2+20) \left(\log\log X\right) \left(1 + \frac{\log X}{\log P}\right) + 2B \frac{\log X \log\log X}{\log P}\right) = \exp\left((\log X)^{\kappa + o_B(1)}\right). $$ Recall that $\Delta = (2B+5) \kappa \in (0,1)$. Applying Lemma \ref{cor:Hal} (with $Z = 1/\delta = (\log X)^{\Delta}$) together with Lemma \ref{lem:LSPrim}(a), noting that $\kappa < 1/3-\kappa$, we obtain \begin{align*}
&\sum_{t \in \mathcal{V}_L} |Q_{v_0,H}(1+it)R_{v_0,H}(1+it)|^2 \\
&\ll_{A,B,C} \mathcal{P}_{f}(X)^2\left(\left(\frac{\log Q}{\log P}\right)^{B}\delta^{1/2} + \left(\frac{\log Q}{\log P}\right)^{3B}\frac{\log\log X}{(\log X)^{\sigma}} \right)^2 \sum_{t \in \mathcal{V}_L} |R_{v_0,H}(1+it)|^2 \\
&\ll_B \frac{\mathcal{P}_{f}(X)^2}{(\log P)^2} \left((\log X)^{2B\kappa -\Delta} + (\log X)^{6B\kappa-2\sigma+o(1)}\right) \left(1 + |\mathcal{V}_L|(\log X)^2 \exp\left(-(\log X)^{1/3-\kappa-o(1)}\right)\right)\\ &\ll_B \mathcal{P}_{f}(X)^2\left((\log X)^{2(B+1)\kappa - 2 -\Delta} + (\log X)^{(6B+2)\kappa-2-2\sigma+o(1)}\right). \end{align*} Combining this estimate with the one for $\mathcal{V}_S$, then plugging this back into our estimate \eqref{eq:UInt}, we get \begin{align*}
&\int_{\mathcal{U}}|F(1+it)|^2 dt\\
&\ll_{A,B,C} (\log X)^{2+2\kappa} \left((\log X)^{-19} + (\log X)^{2(B+1)\kappa - 2 - \Delta} + (\log X)^{(6B+2)\kappa- 2-2\sigma + o(1)}\right)\mathcal{P}_f(X)^2 \\ &+ \left(\frac{\log\log X}{(\log X)^{\kappa}}\right)^{\min\{1,A\}}\mathcal{P}_{f}(X)^2 \\
&\ll \left((\log X)^{-15} + (\log X)^{2(B+2)\kappa-\Delta} + (\log X)^{(6B+4)\kappa-2\sigma} + \left(\frac{\log\log X}{(\log X)^{\kappa}}\right)^{\min\{1,A\}}\right)\mathcal{P}_{f}(X)^2 \\ &\ll \left(\frac{\log\log X}{(\log X)^{\kappa}}\right)^{\min\{1,A\}}\mathcal{P}_{f}(X)^2, \end{align*} since $2(B+2) \kappa - \Delta = 2(B+2)\kappa - (2B+5)\kappa = -\kappa$, and since $\sigma \geq \hat{\sigma} > (3B+5/2)\kappa$ by definition, so that $(6B+4)\kappa - 2\sigma < -\kappa$. This completes the proof of Proposition \ref{prop:L2DPInt}.
\begin{proof}[Proof of Theorem \ref{thm:MRDB}] Let $f \in \mathcal{M}(X;A,B,C;\gamma,\sigma)$. Set $h := h_1/(\log h_0)^A$, and select $P_1 = (\log h)^{40B/\eta}$ and $Q_1 = h$. We may assume that $X$ is larger than any constant depending on $B$, since otherwise Theorem \ref{thm:MRDB} follows with a sufficiently large implied constant; we may also assume $h$ is larger than any constant depending on $B$, since otherwise the theorem follows (again with a large enough implied constant depending at most on $B$) from Remark \ref{rem:trivBd} and Lemma \ref{lem:contS} (taking $P = Q = 3/2$, say). \\ By Lemma \ref{lem:contS}, we have \begin{align*}
&\frac{2}{X}\int_{X/2}^{X} \left|\frac{1}{h_1} \sum_{x-h_1 < m \leq x} f(m) - \frac{1}{h_1}\int_{x-h_1}^x u^{it_0} du \cdot \frac{1}{h_2} \sum_{\substack{x-h_2 < m \leq x}} f(m)m^{-it_0}\right|^2 dx \\
&\ll_{A,B,C} \frac{2}{X}\int_{X/2}^{X}\left|\frac{1}{h_1} \sum_{\substack{x-h_1 < m \leq x \\ m \in \mathcal{S}}} f(m) - \frac{1}{h_1}\int_{x-h_1}^x u^{it_0} du \cdot \frac{1}{h_2} \sum_{\substack{x-h_2 < m \leq x \\ m \in \mathcal{S}}} f(m)m^{-it_0}\right|^2 dx\\ &+\left(\frac{\log\log h_0}{\log h_0}\right)^A \mathcal{P}_{f}(X)^2. \end{align*} Set $\delta := (\log X)^{-(2B+5)\kappa}$ once again, and note that $t_0(fn^{-it_0};X) = 0$ is admissible. Thus, $$ \sum_{\substack{X/3 < n \leq X \\ n \in \mathcal{S}}} \frac{f(n)n^{-it_0}}{n^s} = F(s+it_0). $$ By the Cauchy-Schwarz inequality, we obtain \begin{align*}
&\frac{2}{X}\int_{X/2}^{X} \left|\frac{1}{h_1} \sum_{\substack{x-h_1 < m \leq x \\ m \in \mathcal{S}}} f(m) - \frac{1}{h_1}\int_{x-h_1}^x u^{it_0} du \cdot \frac{1}{h_2} \sum_{\substack{x-h_2 < m \leq x \\ m \in \mathcal{S}}} f(m)m^{-it_0}\right|^2 dx \\
&\ll \frac{1}{X} \int_{X/2}^{X} \left|\frac{1}{h_1} \sum_{\substack{x-h_1 < m \leq x \\ m \in \mathcal{S}}} f(m) - \frac{1}{2\pi h_1}\int_{I_{\delta}} F(1+it) \frac{x^{1+it} - (x-h_1)^{1+it}}{1+it}dt \right|^2 dx\\
&+ \frac{1}{X} \int_{X/2}^{X} \left|\frac{1}{h_1}\int_{x-h_1}^x u^{it_0} du\right|^2\left|\frac{1}{h_2}\sum_{\substack{x-h_2 < m \leq x \\ m \in \mathcal{S}}} f(m)m^{-it_0} - \frac{1}{2\pi h_2}\int_{-\delta^{-1}}^{\delta^{-1}} F(1+it+it_0) \frac{x^{1+it} - (x-h_2)^{1+it}}{1+it}dt \right|^2 dx\\
&+ \small \max_{X/2 < x \leq X} \left|\frac{1}{h_1} \int_{I_{\delta}} F(1+it) \frac{x^{1+it} - (x-h_1)^{1+it}}{1+it} dt - \frac{1}{h_1}\int_{x-h_1}^x u^{it_0} du \cdot \frac{1}{h_2} \int_{-\delta^{-1}}^{\delta^{-1}} F(1+i(t+t_0)) \frac{x^{1+it}-(x-h_2)^{1+it}}{1+it}dt \right|^2 \normalsize\\ & =: \mathcal{I}_1 + \mathcal{I}_2 + \mathcal{I}_3. \end{align*}
Consider $\mathcal{I}_3$ first. Making a change of variables $w = t-t_0$ and using $u^{iw} = x^{iw} + O(\delta^{-1} h_1/X)$ for $|w| \leq \delta^{-1}$ and $u \in [x-h_1,x]$, we see that \begin{align*} &\frac{1}{h_1}\int_{I_{\delta}} F(1+it)\frac{x^{1+it}-(x-h_1)^{1+it}}{1+it} dt = \frac{1}{h_1}\int_{-\delta^{-1}}^{\delta^{-1}} F(1+i(t_0+w)) \int_{x-h_1}^x u^{it_0 + iw} du dw \\
&= \frac{1}{h_1} \int_{x-h_1}^x u^{it_0} du \cdot \int_{-\delta^{-1}}^{\delta^{-1}} F(1+i(t_0+w)) x^{iw} dw + O\left(\delta^{-2}\frac{h_1}{X}\max_{|w| \leq \delta^{-1}} |F(1+i(t_0+w))| \right). \end{align*} Similarly, we have $$
\frac{1}{h_2} \int_{-\delta^{-1}}^{\delta^{-1}} F(1+i(t+t_0)) \frac{x^{1+it}-(x-h_2)^{1+it}}{1+it}dt = \int_{-\delta^{-1}}^{\delta^{-1}} F(1+i(t+t_0)) x^{it} dt + O\left(\delta^{-2} \frac{h_2}{X} \max_{|t| \leq \delta^{-1}} |F(1+i(t+t_0))|\right). $$ Applying partial summation, dropping the condition $n \in \mathcal{S}$, and then using Lemma \ref{lem:Shiu}, we find $$
\max_{|t| \leq \delta^{-1}} |F(1+i(t+t_0))| \ll \frac{1}{X} \sum_{X/3 < n \leq X} |f(n)| \ll_{B,C} \mathcal{P}_f(X), $$ and thus as $h_1 \leq h_2 = X/(\log X)^{\hat{\sigma}/2}$ we obtain $$ \mathcal{I}_3 \ll_{B,C} \left(\delta^{-2}\frac{h_2}{X}\right)^2 \mathcal{P}_f(X)^2 \ll (\log X)^{4(2B+5) \kappa - (8 B + 21) \kappa} \mathcal{P}_{f}(X)^2 = (\log X)^{-\kappa} \mathcal{P}_{f}(X)^2. $$
Next, we treat $\mathcal{I}_1$ and $\mathcal{I}_2$. Applying Proposition \ref{prop:Pars} to each of these integrals and trivially bounding the $u$ integral in $\mathcal{I}_2$, we find \begin{align*}
\mathcal{I}_1 + \mathcal{I}_2 &\ll \int_{[-X/h_1,X/h_1] \backslash I_{\delta}} |F(1+it)|^2dt + \max_{T \geq X/h_1} \frac{X}{h_1T} \int_T^{2T} |F(1+it)|^2 dt. \end{align*} By the argument in Remark \ref{rem:2ndInt} and our choice of $h$, the latter is bounded by $$
\ll \int_{[-X/h,X/h] \backslash I_{\delta}} |F(1+it)|^2 dt + (\log h_0)^{-A} \mathcal{P}_{f}(X)^2. $$ But by Proposition \ref{prop:L2DPInt} this integral is bounded by $$ \ll_{A,B,C} \left(\frac{(\log Q_1)^{1/3}}{P_1^{1/6-\eta}} + \left(\frac{\log\log X}{(\log X)^{\kappa}}\right)^{\min\{1,A\}}\right) \mathcal{P}_{f}(X)^2 \ll \left((\log h)^{-30B} + \left(\frac{\log\log X}{(\log X)^{\kappa}}\right)^{\min\{1,A\}}\right) \mathcal{P}_f(X)^2. $$ Combining these steps, and using $A \leq B$ and $\log h \asymp \log h_1$, we obtain \begin{align*}
&\frac{2}{X}\int_{X/2}^{X} \left|\frac{1}{h_1} \sum_{x-h_1 < m \leq x} f(m) - \frac{1}{h_1}\int_{x-h_1}^x u^{it_0} du \cdot \frac{1}{h_2} \sum_{x-h_2 < m \leq x} f(m)m^{-it_0}\right|^2 dx \\ &\ll_{A,B,C} \left(\left(\frac{\log\log h_0}{\log h_0}\right)^A + \left(\frac{\log\log X}{(\log X)^{\kappa}}\right)^{\min\{1,A\}} \right)\mathcal{P}_{f}(X)^2, \end{align*} which completes the proof. \end{proof} \begin{proof}[Proof of Corollary \ref{cor:MRVers}] It suffices to show that $\mathcal{M}(X;A,B,C) \subseteq \mathcal{M}(X;A,B,C;1,\sigma)$ for any $0 < \sigma < \sigma_{A,B}$, where $\sigma_{A,B}$ is given by \eqref{eq:sgAB}, after which point the result follows from Theorem \ref{thm:MRDB}. \\ Suppose $f\in \mathcal{M}(X;A,B,C)$. By definition, $f$ is $(A,X)$ non-vanishing in the sense of \cite{MRII}, and so satisfies the condition \eqref{eq:hyp3} (or equivalently, $\gamma =1$ is admissible in \eqref{eq:hyp3'}). Observe furthermore that upon defining $\tilde{f}_B := f\cdot B^{-\Omega}$, $\tilde{f}_B$ takes values in $\mathbb{U}$ and satisfies $$
\sum_{z < p \leq w} \frac{|\tilde{f}_B(p)|}{p} \geq \frac{A}{B}\sum_{z < p \leq w} \frac{1}{p} - O_{A,B}\left(\frac{1}{\log z}\right) \text{ for all } 2 \leq z \leq w \leq X, $$ with $A/B \in (0,1]$ necessarily. By \cite[Lem. 5.1(i)]{MRII}, we get that \begin{align}
\rho(f,n^{it};X)^2 &= B\sum_{p \leq X} \frac{|\tilde{f}_B(p)| - \text{Re}(\tilde{f}_B(p)p^{-it})}{p} \nonumber\\
&\geq B \rho \min\{\log\log X, 3\log(|t-t_0| \log X+1)\} + O_{\rho,A}(1), \label{eq:lowBdRho} \end{align} for any $0 < \rho < \rho_{A/B}$ with $\rho_{\alpha}$ defined (see \cite[(14)]{MRII}) by $$ \rho_{\alpha} = \frac{\alpha}{3}\left(1-\text{sinc}(\pi \alpha/2)\right) > 0. $$ Since $\sigma_{A,B} = B \rho_{A/B}$, this implies that condition \eqref{eq:hyp4} holds with any $0 < \sigma < \sigma_{A,B}$. This completes the proof. \end{proof}
\section{Applications} \begin{comment} \subsection{Proof of Corollary \ref{cor:GCP}} \begin{proof}[Proof of Corollary \ref{cor:GCP}] Define $\tilde{r}(n) := r(n)/4$. It is well-known that $\tilde{r}(n) = 1 \ast \chi_4(n)$ for all $n \in \mathbb{N}$, where $\chi_4$ is the unique non-principal Dirichlet character modulo $4$; in particular, $\tilde{r}(n)$ is multiplicative. \\ Clearly, then, $0 \leq \tilde{r}(p) \leq 2$, and more generally $\tilde{r}(p^{\nu}) \leq \nu+1 \leq \nu 2^{\nu}$, for all $\nu \geq 2$. We observe by the prime number theorem for arithmetic progressions that for any $2 \leq z \leq w \leq X$, $$ \sum_{z < p \leq w} \frac{\tilde{r}(p)}{p} = 2\sum_{\substack{z < p \leq w \\ p \equiv 1 \pmod{4}}} \frac{1}{p} = \sum_{z <p \leq w} \frac{1}{p} + O(1/\log z); $$ in particular we have that $\sum_{p \leq X} (\tilde{r}(p)-1)/p = O(1)$. Thus, $\tilde{r} \in \mathcal{M}(X;1,2)$, for any $X$ large. \\
Next, we show that $t_0(\tilde{r};X) = 0$. Assume, to the contrary, that $t_0(\tilde{r};X) > 0$. Using \eqref{eq:lowBdRho} with $t \in [-X,X]$ satisfying $|t-t_0| \geq t_0/2$, we have $$
\sum_{p \leq X} \frac{\tilde{r}(p)(1-\cos(t\log p))}{p} \geq (\sigma_{1,2}-\varepsilon) \min\{\log\log X, 3\log(1+|t-t_0|\log X)\} \gg \log(1+|t_0|/2) > 0. $$ On the other hand, $t = 0$ is such a point $t$, and clearly $$ \sum_{p \leq X} \frac{\tilde{r}(p) - \text{Re}(\tilde{r}(p))}{p} = 0. $$ This contradiction implies that, indeed, $t_0(\tilde{r};X) =0.$\\ We deduce from Corollary \ref{cor:MRVers} that \begin{align*}
&\frac{2}{X}\int_{X/2}^X \left|\frac{1}{h} \sum_{x-h < n \leq x} r(n) - \frac{2}{X}\sum_{X/2 < n \leq X} r(n)\right|^2 \\
&= \frac{32}{X}\int_{X/2}^X \left|\frac{1}{h} \sum_{x-h < n \leq x} \tilde{r}(n) - \frac{2}{X}\sum_{X/2 < n \leq X} \tilde{r}(n)\right|^2 &\ll \left(\frac{\log\log h_0}{\log h_0} + (\log X)^{-\kappa_{1,2} + o(1)}\right) \prod_{p \leq X} \left(1+\frac{\tilde{r}(p)-1}{p}\right)^2 \ll \frac{\log\log h_0}{\log h_0} + (\log X)^{-\kappa_{1,2} + o(1)}. \end{align*} Now, by a trivial lattice point estimate we have $$ \frac{2}{X}\sum_{X/2 < n \leq X} r(n) = \pi + O(X^{-1/2}), $$ and moreover it is easy to see that $$
\sum_{x-h < n \leq x} r(n) = \sum_{x-h < n \leq x} |\{(a,b) \in \mathbb{Z}^2 : a^2 + b^2 = n\}| = N((x-h,x]). $$ Combined with the estimate above, we deduce the claimed bound $$
\frac{2}{X}\int_{X/2}^X |N((x-h,x]) - \pi h|^2 \ll h^2\left(\frac{\log\log h_0}{\log h_0} + (\log X)^{-\kappa_{1,2} + o(1)}\right). $$ The second claim of the corollary follows immediately by Chebyshev's moment inequality. \end{proof} \end{comment}
\subsection{Proof of Theorem \ref{thm:momCusp}} \label{sec:RSProof} Let $f$ be a primitive, Hecke-normalized holomorphic cusp form without complex multiplication of fixed even weight $k \geq 2$ and level 1. Write the Fourier expansion of $f$ at $\infty$ as $$ f(z) = \sum_{n \geq 1} \lambda_f(n) n^{\frac{k-1}{2}} e(nz), \quad\quad \text{Im}(z) > 0, $$ where $\{\lambda_f(n)\}_n$ is the sequence of normalized Fourier coefficients with $\lambda_f(1) = 1$. As noted in Section \ref{sec:RS}, $\lambda_f$ is a multiplicative function.
Deligne's proof of the Ramanujan conjecture for $f$ shows that $|\lambda_f(p)| \leq 2$ and $|\lambda_f(n)| \leq d(n)$ for all primes $p$ and positive integers $n$. Moreover, the quantitative Sato-Tate theorem of Thorner \cite{Tho}, which is based on the deep results of Newton and Thorne \cite{NeTh}, shows that for any $[a,b] \subseteq [-2,2]$, \begin{equation}\label{eq:quantST}
|\{p \leq X: \lambda_f(p) \in [a,b]\}| = \left(\frac{1}{\pi} \int_a^b \sqrt{1-(v/2)^2} dv\right) \int_2^X \frac{dt}{\log t} + O\left(\frac{X\log(k\log X)}{(\log X)^{3/2}}\right). \end{equation} Recall that \[ c_\alpha = \frac{2^\alpha}{\sqrt{\pi}} \frac{\Gamma\left(\frac{\alpha + 1}{2}\right)}{\Gamma(\alpha/2+2)}. \] Using this data, we will prove the following. \begin{prop}\label{prop:MABC}
Let $\alpha > 0$. There is a constant $\delta = \delta(\alpha) > 0$ such that $|\lambda_f|^{\alpha} \in \mathcal{M}(X;c_{\alpha},2^{\alpha},2;1/2-\varepsilon,\delta)$.
\end{prop}
Using Proposition \ref{prop:MABC} we will be able to apply Theorem \ref{thm:MRFull} in order to derive Corollary \ref{cor:RankinSelberg}.\\
We will check that $|\lambda_f|^\alpha$ satisfies the required hypotheses in the following lemmas. \begin{lem} \label{lem:MertST} For any $2 \leq z < w$ we have $$
\sum_{z < p \leq w} \frac{|\lambda_f(p)|^\alpha}{p} = c_\alpha \sum_{z < p \leq w} \frac{1}{p} + O((\log z)^{-1/2+o(1)}). $$ Similarly, we have $$
\sum_{p \leq X} \frac{|\lambda_f(p)|^{2\alpha}}{p} = c_{2\alpha} \log\log X + O(1). $$ \end{lem} \begin{proof} Let $\beta \in \{\alpha,2\alpha\}$. By partial summation, $$
\sum_{z < p \leq w} \frac{|\lambda_f(p)|^\beta}{p} = \int_0^2 u^{\beta}\, d\left(\sum_{\substack{z < p \leq w \\ |\lambda_f(p)| \leq u}} \frac{1}{p}\right) = 2^\beta \sum_{z < p \leq w} \frac{1}{p} - \beta\int_0^2 \left(\sum_{\substack{z < p \leq w \\ |\lambda_f(p)| \leq u}} \frac{1}{p}\right) u^{\beta-1} du. $$
Fix $u \in (0,2]$, and let $I_u := [0,u] \cup [-u,0]$ so that $|\lambda_f(p)| \leq u$ if and only if $\lambda_f(p) \in I_u$. By partial summation and \eqref{eq:quantST}, \begin{align*} \sum_{\substack{z < p \leq w \\ \lambda_f(p) \in I_u}} \frac{1}{p} &= \frac{2}{\pi} \int_0^u \sqrt{1-(v/2)^2} dv \cdot \int_z^w \frac{dy}{y \log y} + O\left(\frac{\log\log z}{(\log z)^{3/2}} + \int_z^w \frac{\log\log y \, dy}{y(\log y)^{3/2}}\right) \\ &= \left(\frac{2}{\pi} \int_0^u \sqrt{1-(v/2)^2} dv\right) \log(\log w/\log z) + O((\log z)^{-1/2+o(1)}). \end{align*} Multiplying the main term by $\beta u^{\beta-1}$ and integrating in $u$, we obtain $I_\beta \log(\log w/\log z) + O(1/\log z)$, where \begin{align*} I_\beta &:= \frac{2\beta}{\pi} \int_0^2 u^{\beta-1} \int_0^u \sqrt{1-(v/2)^2} dv du = \frac{2\beta}{\pi} \int_0^2 \left(\int_v^2 u^{\beta-1} du \right) \sqrt{1-(v/2)^2} dv \\ &= \frac{2^{\beta+1}}{\pi}\int_0^2 (1-(v/2)^\beta) \sqrt{1-(v/2)^2} dv = 2^{\beta} - \frac{2^{\beta+1}}{\pi} \int_0^2 (v/2)^{\beta}\sqrt{1-(v/2)^2} dv. \end{align*} Making the change of variables $t := (v/2)^2$, we find that \[ 2^{\beta}-I_{\beta} = \frac{2^{\beta+1}}{\pi} \int_0^1 t^{(\beta-1)/2} (1-t)^{1/2} dt = \frac{2^{\beta+1}}{\pi} \frac{\Gamma\left(\frac{\beta + 1}{2}\right) \Gamma(3/2)}{\Gamma(\beta/2 + 2)} = \frac{2^{\beta}}{\sqrt{\pi}} \frac{\Gamma\left(\frac{\beta+1}{2}\right)}{\Gamma(\beta/2+2)} = c_{\beta}. \]
It follows, therefore, that \begin{align*}
\sum_{z < p \leq w} \frac{|\lambda_f(p)|^\alpha}{p} &= (2^{\alpha} - I_{\alpha}) \sum_{z < p \leq w} \frac{1}{p} + O((\log z)^{-1/2+o(1)}) = c_{\alpha} \sum_{z < p \leq w} \frac{1}{p} + O((\log z)^{-1/2+o(1)}), \\
\sum_{z < p \leq w} \frac{|\lambda_f(p)|^{2\alpha}}{p} &= (2^{2\alpha} - I_{2\alpha})\sum_{z < p \leq w} \frac{1}{p} + O((\log z)^{-1/2+o(1)}) = c_{2\alpha}\sum_{z < p \leq w} \frac{1}{p} + O((\log z)^{-1/2+o(1)}), \end{align*} and both claims follow. \end{proof}
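As a consistency check on the constant, the computation above with $\beta = 2$ recovers the Rankin--Selberg normalization:
$$
c_2 = \frac{2^2}{\sqrt{\pi}} \frac{\Gamma(3/2)}{\Gamma(3)} = \frac{4}{\sqrt{\pi}} \cdot \frac{\sqrt{\pi}/2}{2} = 1,
$$
so that $\sum_{p \leq X} |\lambda_f(p)|^2/p = \log\log X + O(1)$, consistent with the prime number theorem for Rankin--Selberg $L$-functions invoked in the proof of Lemma \ref{lem:GoldLi} below.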
To obtain uniform lower bound estimates for $\rho(|\lambda_f|^{\alpha},n^{it};X)^2$ we will need some control over the product $|\lambda_f(p)|^\alpha (1-\cos(t\log p))$, on average over $p$. In some ranges of $t$ this is furnished by the following lemma. \begin{lem} \label{lem:GoldLi}
Let $|t| \geq 1$. There is a constant $c = c(\alpha) > 0$ such that if $Y \geq (|t|+3)^2$ and $Y$ is sufficiently large then $$
\sum_{Y < p \leq 2Y} |\lambda_f(p)|^\alpha|1-p^{it}|^2 \geq c\frac{Y}{\log Y}. $$ \end{lem} \begin{proof} We adapt an argument due to Goldfeld and Li \cite[Lem. 12.12 and 12.15]{GoldLi} and Humphries \cite[Lem. 2.1]{Hump}. Let $\eta \in (0,1/2)$ be a parameter to be chosen later. By the prime number theorem for Rankin-Selberg $L$-functions \cite[Thm. 2]{Ran} we have $$
\sum_{Y < p \leq 2Y} |\lambda_f(p)|^2 \log p = (1+o(1))Y. $$
Since $|\lambda_f(p)|^2 \leq 4$ for all $p$, invoking the usual prime number theorem we obtain \begin{align*}
(1+o(1))Y &\leq \eta^2 \sum_{\substack{Y < p \leq 2Y \\ |\lambda_f(p)| \leq \eta}} \log p + 4(\log 2Y) |\{Y < p \leq 2Y : |\lambda_f(p)| > \eta\}| \\
&\leq (\eta^2 + o(1)) Y + 4(\log 2Y) |\{Y < p \leq 2Y : |\lambda_f(p)| > \eta\}|, \end{align*} which rearranges as \begin{equation}\label{eq:lrglambda}
|\{Y < p \leq 2Y : |\lambda_f(p)| > \eta\}| \geq \left(\frac{1-\eta^2}{4} - o(1)\right) \frac{Y}{\log Y}. \end{equation} Next, we estimate the cardinality \begin{align*}
|\{Y < p \leq 2Y : |1-p^{it}| \leq \eta\}| &= |\{Y < p \leq 2Y : |\sin((t\log p)/2)| \leq \eta/2\}|. \end{align*} Set $\beta := \sin^{-1}(\eta/2)/\pi \in [0,1/2]$. Whenever $\sin(t\log p/2) \in [-\eta/2, \eta/2]$ there is $m \in \mathbb{Z}$ such that $(t \log p)/2 \in [\pi (m - \beta), \pi (m + \beta)]$. By Jordan's inequality, $\beta \leq \frac{1}{2} \sin( \pi \beta) = \frac{\eta}{4}$, and we see that \begin{align*}
|\{Y < p \leq 2Y : |1-p^{it}| \leq \eta\}| \leq |\{Y < p \leq 2Y : \|(t\log p)/(2\pi)\| \leq \eta/4\}|, \end{align*}
where $\|t\| := \min_{m \in \mathbb{Z}} |t-m|$. Splitting up the primes $Y < p \leq 2Y$ according to the nearest integer $m$ to $(t\log p)/(2\pi)$, the latter may be bounded above as $$
\leq \sum_{\frac{|t|\log Y+\eta/4}{2\pi} \leq m \leq \frac{|t|\log(2Y) - \eta/4}{2\pi}} \left(\pi\left(e^{\frac{2\pi m+\eta/4}{|t|}}\right) - \pi\left(e^{\frac{2\pi m-\eta/4}{|t|}}\right)\right). $$
For each $m$ we have $e^{\frac{2\pi m+\eta/4}{|t|}} \leq \left(1+\frac{\eta}{2|t|}\right) e^{\frac{2\pi m-\eta/4}{|t|}}$.
By the Brun-Titchmarsh theorem we get, uniformly over all $m$ in the sum, $$
\pi\left(e^{\frac{2\pi m+\eta/4}{|t|}}\right) - \pi\left(e^{\frac{2\pi m-\eta/4}{|t|}}\right) \leq \frac{\eta}{|t|} \frac{e^{2\pi(m-\eta/4)/|t|}}{2\pi(m-\eta/4)/|t|-\log(|t|/\eta)} \leq \eta \frac{2Y}{|t|(\log Y - (\log Y)/2)} \leq 8\eta\frac{Y}{|t|\log Y}, $$
for $Y$ large enough. Since there are $\leq 1+ |t|(\log(2Y)-\log Y)/(2\pi) \leq 2|t|$ integers in the range of summation, we obtain \begin{equation}\label{eq:smallpit}
|\{Y < p \leq 2Y : |1-p^{it}| \leq \eta\}| \leq 16 \eta\frac{Y}{\log Y}. \end{equation} We deduce from \eqref{eq:lrglambda} and \eqref{eq:smallpit} that \begin{align*}
|\{Y < p \leq 2Y : |\lambda_f(p)| > \eta \text{ and } |1-p^{it}| > \eta\}| &\geq |\{Y < p \leq 2Y : |\lambda_f(p)| > \eta\}| - |\{Y < p \leq 2Y : |1-p^{it}| \leq \eta\}| \\ &\geq \max\left\{0, \frac{1-64\eta-\eta^2}{4}-o(1)\right\} \cdot \frac{Y}{\log Y}. \end{align*} Selecting $\eta = \frac{1}{128}$, we get that if $Y$ is sufficiently large, $$
|\{Y < p \leq 2Y : |\lambda_f(p)| > \eta \text{ and } |1-p^{it}| > \eta\}| \geq \frac{Y}{10 \log Y}. $$ Finally, we obtain that $$
\sum_{Y < p \leq 2Y} |\lambda_f(p)|^{\alpha} |1-p^{it}|^2 > \eta^{2+\alpha}|\{Y < p \leq 2Y : |\lambda_f(p)| > \eta \text{ and } |1-p^{it}| > \eta\}| \geq \frac{\eta^{2+\alpha}}{10} \cdot \frac{Y}{\log Y}, $$ which proves the claim with $c := 2^{-10(2+\alpha)}$; indeed, with $\eta = 2^{-7}$ we have $\eta^{2+\alpha}/10 = 2^{-7(2+\alpha)}/10 \geq 2^{-7(2+\alpha)-4} \geq 2^{-10(2+\alpha)}$, since $4 \leq 3(2+\alpha)$. \end{proof}
We now obtain lower bounds for $\rho(|\lambda_f|^\alpha,n^{it};X)^2$ for all $|t| \leq X$. \begin{lem} \label{lem:check4}
There is a $\delta = \delta(\alpha) > 0$ such that whenever $|t| \leq X$ we have $$
\sum_{p \leq X} \frac{|\lambda_f(p)|^\alpha(1- \cos(t\log p))}{p} \geq \delta \min\{\log\log X, \log(1+|t|\log X)\} - O(1). $$
Moreover, we may select $t_0(|\lambda_f|^\alpha;X) = 0$. \end{lem} \begin{proof}
We may assume that $X$ is sufficiently large, else the estimate given is trivial. When $|t| \leq 1/\log X$ the claim is immediate, since the right-hand side is then $O(1)$ while the left-hand side is non-negative. Thus, we may focus on the case $1/\log X < |t| \leq X$. We consider the ranges $1/\log X < |t| \leq 1$, $1 < |t| \leq \log X$ and $\log X < |t| \leq X$ separately.
Throughout we will introduce an auxiliary parameter $2 \leq Y \leq X$, chosen case by case, and use the inequality $$
\sum_{p \leq X} \frac{|\lambda_f(p)|^\alpha(1-\cos(t\log p))}{p} \geq \sum_{Y \leq p \leq X} \frac{|\lambda_f(p)|^\alpha (1-\cos(t\log p))}{p}. $$
Suppose first that $1/\log X < |t| \leq 1$. Let $Y := e^{1/|t|}$ and write $$
R(X) := \sum_{p \leq X} |\lambda_f(p)|^{\alpha} - c_{\alpha} \pi(X), $$ so that by arguments analogous to those of Lemma \ref{lem:MertST}, $R(X) \ll X/(\log X)^{3/2-o(1)}$. By partial summation and Lemma \ref{lem:MertST}, \begin{align*}
\sum_{Y \leq p \leq X} \frac{|\lambda_f(p)|^\alpha \cos(t\log p)}{p} &= c_{\alpha} \int_Y^X \frac{\cos(t\log u)}{u} \frac{du}{\log u} + O\left(\frac{1}{\log Y}\right) + \int_Y^X R(u) \left(\cos(t\log u) + t\sin(t\log u)\right)\frac{du}{u^2} \\
&= c_{\alpha}\int_{1}^{|t|\log X} \cos( tv/|t| ) \frac{dv}{v} + O\left((1+|t|)(\log Y)^{-1/2+o(1)}\right) \ll 1, \end{align*}
the bound in the last step arising from setting $v := |t| \log u$ and integrating by parts. Thus, in light of Lemma \ref{lem:MertST} $$
\sum_{Y < p \leq X} \frac{|\lambda_f(p)|^\alpha (1-\cos(t\log p))}{p} \geq c_{\alpha}\log(1+ |t|\log X) - O(1).
$$
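For completeness, the boundedness of the oscillatory integral in the preceding computation follows from a single integration by parts (note that $\cos(tv/|t|) = \cos v$), uniformly in $T := |t|\log X \geq 1$:
$$
\left|\int_1^{T} \frac{\cos v}{v}\, dv\right| = \left|\left[\frac{\sin v}{v}\right]_1^{T} + \int_1^{T} \frac{\sin v}{v^2}\, dv\right| \leq 1 + \sin 1 + \int_1^{\infty} \frac{dv}{v^2} \leq 3.
$$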
Next, we consider the intermediate range $1 < |t| \leq \log X$. Here, we set $Y := (10\log X)^2$, employ a dyadic decomposition and apply Lemma \ref{lem:GoldLi} to obtain, when $X$ is large enough, \begin{align*}
\sum_{p \leq X} \frac{|\lambda_f(p)|^\alpha(1-\cos(t\log p))}{p} &\geq \frac{1}{2} \sum_{Y < 2^j \leq X/2} 2^{-(j+1)} \sum_{2^j <p \leq 2^{j+1}} |\lambda_f(p)|^\alpha|1-p^{it}|^2 \geq \frac{c}{4\log 2} \sum_{Y < 2^j \leq X/2} \frac{1}{j} \\ &= \frac{c}{4\log 2} \log\left(\frac{\log X}{\log Y}\right) - O(1) \geq \frac{c}{8\log 2} \log\log X - O(1), \end{align*} for some $c = c(\alpha) > 0$. \\
Finally, assume that $\log X \leq |t| \leq X$. In this case, put $Y := \exp\left((\log X)^{2/3+\varepsilon}\right)$, where $\varepsilon > 0$ is small. Let $m \geq 1$ be an integer parameter to be chosen later. By H\"{o}lder's inequality, \begin{align*}
\left|\sum_{Y < p \leq X} \frac{|\lambda_f(p)|^\alpha\cos(t\log p)}{p}\right| \leq \left(\sum_{Y < p \leq X} \frac{|\lambda_f(p)|^{\frac{2\alpha m}{2m-1}}}{p}\right)^{1-1/2m} \cdot \left(\sum_{Y < p \leq X} \frac{\cos(t\log p)^{2m}}{p}\right)^{1/2m}. \end{align*}
We can bound $|\lambda_f(p)|^{2\alpha m/(2m-1)} \leq |\lambda_f(p)|^\alpha 2^{\alpha/(2m-1)}$, so that by Lemma \ref{lem:MertST} the first sum is $$ \leq 2^{\alpha/(2m)} c_{\alpha}^{1-1/(2m)} (\log(\log X/\log Y))^{1-1/2m} + O_m(1). $$ Now, we can write $$ \cos(t\log p)^{2m} = 2^{-2m} (p^{it} + p^{-it})^{2m} = 2^{-2m} \binom{2m}{m} + 2^{-2m} \sum_{0 \leq j \leq m-1} \binom{2m}{j} (p^{2i(m-j)t} +p^{-2i(m-j)t}). $$ By the zero-free region of the Riemann zeta function (see e.g., \cite[Lem. 2]{MRLiou}) we have $$
\max_{1 \leq |l| \leq m} \left|\sum_{Y < p \leq X} \frac{1}{p^{1+ilt}}\right| \ll \frac{\log X}{1+|t|} + (\log X)^{-10} \ll 1. $$ It follows that $$ \sum_{Y < p \leq X} \frac{\cos(t \log p)^{2m}}{p} = 2^{-2m} \binom{2m}{m}\log(\log X/\log Y) + O_m(1), $$ and therefore in sum we have $$
\left|\sum_{Y < p \leq X} \frac{|\lambda_f(p)|^\alpha \cos(t\log p)}{p}\right| \leq c_{\alpha} \left(\frac{2^{\alpha}}{c_{\alpha}}\right)^{1/(2m)} \left(2^{-2m} \binom{2m}{m}\right)^{1/2m} \log\left(\frac{\log X}{\log Y}\right) + O_m(1). $$ Using the bounds $\sqrt{2\pi n} (n/e)^n \leq n! \leq 2\sqrt{2\pi n} (n/e)^n$, valid for all $n \in \mathbb{N}$ (see e.g., \cite{Rob}), we get that $$ \left(\frac{2^{\alpha}}{c_{\alpha}}\right)^{1/(2m)} \left(2^{-2m} \binom{2m}{m}\right)^{1/(2m)} \leq \left(\frac{2^{\alpha+1}}{c_{\alpha}\sqrt{\pi m}}\right)^{1/(2m)}, $$ and thus taking $m \geq m_0(\alpha)$, we obtain $$
\left|\sum_{Y < p \leq X} \frac{|\lambda_f(p)|^\alpha \cos(t\log p)}{p}\right| \leq 2^{-1/(2m)} c_{\alpha} \log(\log X/\log Y) + O(1). $$ We thus obtain in this case that $$
\sum_{p \leq X} \frac{|\lambda_f(p)|^\alpha (1-\cos(t\log p))}{p} \geq c_\alpha (1-2^{-1/(2m)}) \left(\frac{1}{3}-\varepsilon\right) \log\log X - O(1). $$ Combining our estimates from each of these ranges and putting $\delta := c_\alpha \min\{c/(8\log 2), \frac{1}{4}(1-2^{-1/(2m)})\}$, we deduce that $$
\sum_{p \leq X} \frac{|\lambda_f(p)|^\alpha(1-\cos(t\log p))}{p} \geq \delta \min\{\log\log X , \log(1+|t|\log X)\} - O(1). $$ This completes the proof of the lower bound. \\
Finally, note that $\rho(|\lambda_f|^\alpha,1;X) = 0$, so $t_0 = 0$ is a minimizer of $t\mapsto \rho(|\lambda_f|^\alpha,n^{it};X)^2$. This completes the proof. \end{proof}
\begin{proof}[Proof of Proposition \ref{prop:MABC}]
Let $X$ be large. By Deligne's theorem we have $|\lambda_f(p)|^\alpha \leq 2^\alpha$ for all $p$. In addition, we have $|\lambda_f(n)|^\alpha \leq d(n)^\alpha \leq d_{2^{\alpha+1}}(n)^{\max\{1,\alpha\}}$ for all $n$. By Lemma \ref{lem:MertST} we see that we can take $A = c_\alpha$ and any $\gamma \in (0,1/2)$ in an estimate of the form $$
\sum_{z < p \leq w} \frac{|\lambda_f(p)|^\alpha}{p} \geq c_\alpha\sum_{z < p \leq w} \frac{1}{p} - O\left(\frac{1}{(\log z)^{\gamma}}\right) \text{ for } 2 \leq z \leq w \leq X, $$ and finally by Lemma \ref{lem:check4} there is a constant $\delta > 0$ such that $$
\sum_{p \leq X} \frac{|\lambda_f(p)|^\alpha-\text{Re}(|\lambda_f(p)|^\alpha p^{-it})}{p} \geq \delta \min\{ \log\log X, \log(1+|t-t_0| \log X)\} - O(1), $$
as $t_0 =0$ is admissible. Thus, by definition, $|\lambda_f|^\alpha \in \mathcal{M}(X; c_\alpha, 2^\alpha, \max\{1,\alpha\}; 1/2-\varepsilon,\delta)$ for any $\varepsilon \in (0,1/2)$ and $X$ large, as claimed.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:momCusp}]
In view of Proposition \ref{prop:MABC} we may apply Theorem \ref{thm:MRFull} to the function $|\lambda_f|^\alpha$. We see that if $h = h_0 H(|\lambda_f|^\alpha;X)$ and $h_0 \rightarrow \infty$ then \begin{align*}
&\frac{1}{X}\int_X^{2X} \left|\frac{1}{h} \sum_{x < n \leq x+h} |\lambda_f(n)|^\alpha - \frac{1}{X}\sum_{X < n \leq 2X} |\lambda_f(n)|^\alpha \right|^2 dx \\
&\ll \left( \left(\frac{\log\log h_0}{\log h_0}\right)^{c_{\alpha}} + \frac{\log\log X}{(\log X)^{\theta}}\right) (\log X)^{2(c_{\alpha}-1)}, \end{align*}
where $\theta = \theta(\alpha) > 0$. Furthermore, by Lemma \ref{lem:MertST}, we have $$
H(|\lambda_f|^\alpha;X) \asymp \exp\left(\sum_{p \leq X} \frac{|\lambda_f(p)|^{2\alpha} - 2|\lambda_f(p)|^\alpha + 1}{p}\right) \asymp (\log X)^{d_\alpha}, $$ where $d_\alpha = c_{2\alpha} - 2c_\alpha + 1$. Thus, changing $h_0$ by a constant factor, we deduce that $h = h_0 (\log X)^{d_\alpha}$ can be taken in the above estimate. The claim follows.
\end{proof}
\begin{proof}[Proof of Corollary \ref{cor:RankinSelberg}]
When $\alpha = 2$ we have $c_2 = 1$ and $d_2 = c_4 - 2c_2 + 1 = 1$. Hence, $H(|\lambda_f|^2;X) \asymp \log X$. Since $$
c_f = \frac{1}{X}\sum_{X <n \leq 2X} |\lambda_f(n)|^2 + O(X^{-2/5}), $$
the claim for $|\lambda_f|^2$ follows immediately from Theorem \ref{thm:momCusp}.\\
It remains to consider $g_f(n) = \sum_{d^2|n} |\lambda_f(n/d^2)|^2$. Clearly, $g_f(n) \leq \sum_{e|n} d(n/e)^2 \leq d(n)^3$, and $g_f(p) = |\lambda_f(p)|^2$ for all primes $p$. By Proposition \ref{prop:MABC} we have $g_f \in \mathcal{M}(X; 1,4,3;1/2-\varepsilon,\delta)$ for any $\varepsilon \in (0,1/2)$, and $H(g_f;X) = H(|\lambda_f|^2;X)$. The result now follows in the same way as for $|\lambda_f|^2$, using $$ d_f = \frac{1}{X}\sum_{X < n \leq 2X} g_f(n) + O(X^{-2/5}) $$ in this case. \end{proof}
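The constants used in the proof above, $c_2 = 1$ and $c_4 = 2$ (hence $d_2 = 1$), are the second and fourth moments of $|\lambda_f(p)|$; writing $\lambda_f(p) = 2\cos\theta_p$, they are moments of $|2\cos\theta|$ against the Sato--Tate measure $\frac{2}{\pi}\sin^2\theta\,d\theta$. This interpretation is assumed here as a standard fact, not proved in the text; a quick numerical check of the two moments:

```python
import math

def sato_tate_moment(alpha: float, steps: int = 200_000) -> float:
    """Integrate |2 cos t|^alpha against (2/pi) sin^2 t dt on [0, pi] (midpoint rule)."""
    h = math.pi / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * h
        total += abs(2 * math.cos(t)) ** alpha * (2 / math.pi) * math.sin(t) ** 2 * h
    return total

c2 = sato_tate_moment(2.0)  # second moment, expected value 1
c4 = sato_tate_moment(4.0)  # fourth moment, expected value 2
assert abs(c2 - 1.0) < 1e-3 and abs(c4 - 2.0) < 1e-3
# hence d_2 = c4 - 2*c2 + 1 = 1, consistent with H(|lambda_f|^2; X) ≍ log X
```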
\subsection{Proof of Theorem \ref{thm:genAutForms}} Fix $m \geq 2$, let $\mathbb{A}$ denote the adeles over $\mathbb{Q}$, and let $\pi$ be a fixed cuspidal automorphic representation for $GL_m(\mathbb{A})$ with unitary central character normalized so that it is trivial on the diagonally embedded copy of $\mathbb{R}^+$. We write $q_{\pi}$ to denote the conductor of $\pi$. We assume that $\pi$ satisfies GRC, and we write $\lambda_{\pi}(n)$ to denote the $n$th coefficient of the standard $L$-function of $\pi$. The key result of this subsection is the following analogue of Proposition \ref{prop:MABC}. \begin{prop}\label{prop:MABCGen}
With the above notation, we have $|\lambda_{\pi}|^2 \in \mathcal{M}(X; 1, m^2, 2)$ whenever $X$ is sufficiently large in terms of $q_{\pi}$, and $t_0(|\lambda_{\pi}|^2,X) = 0$ is admissible. \end{prop}
For primes $p \nmid q_{\pi}$ we have $|\lambda_{\pi}(p)|^2 = \lambda_{\pi \otimes \tilde{\pi}}(p)$, where $\tilde{\pi}$ denotes the contragredient representation of $\pi$ and $\pi \otimes \tilde{\pi}$ is the Rankin-Selberg convolution of $\pi$ and $\tilde{\pi}$. As $\pi$ is fixed, the primes $p|q_{\pi}$ will cause no harm to our estimates. \begin{lem}\label{lem:PNTRS} There is a constant $c = c(m) > 0$ such that $$
\sum_{p \leq X} |\lambda_\pi(p)|^2 \log p = X + O_{\pi}(X e^{-c\sqrt{\log X}}) \text{ as $X \rightarrow \infty$.} $$
In particular, we have $\sum_{z < p \leq w} \frac{|\lambda_{\pi}(p)|^2}{p} = \sum_{z < p \leq w} \frac{1}{p} + O_{\pi}(\frac{1}{\log z})$ for any $2 \leq z \leq w \leq X$. \end{lem} \begin{proof} Assume $X$ is sufficiently large relative to $q_{\pi}$. Set $f := \pi \otimes \tilde{\pi}$, and write $\Lambda_f(n)$ to denote the $n$th coefficient of the logarithmic derivative $-\frac{L'}{L}(s,f)$. Combining \cite[Thm. 5.13]{IK} (see the remarks that follow the statement for a discussion relevant to the case of a Rankin-Selberg convolution) with \cite[Exer. 6]{IK}, we deduce that $$
\sum_{p \leq X} |\lambda_{\pi}(p)|^2 \log p = \sum_{n \leq X} \Lambda_f(n) + O_{\pi}(\sqrt{X} \log^2 X) = X + O_{\pi}\left(X \exp\left(-c'm^{-4} \sqrt{\log X}\right)\right), $$ for some absolute constant $c' > 0$ (note that the exceptional zero plays no role when $X$ is large enough). This implies the first claim. The second follows immediately from the first statement by partial summation. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:MABCGen}]
Lemma \ref{lem:PNTRS} implies that \eqref{eq:hyp3} holds with $A = 1$, and by GRC we have $|\lambda_{\pi}(n)|^2 \leq d_m(n)^2$. It thus follows that $|\lambda_{\pi}|^2 \in \mathcal{M}(X;1,m^2,2)$, as claimed. That the choice $t_0(|\lambda_{\pi}|^2,X) = 0$ is admissible is obvious, as in the proof of Proposition \ref{prop:MABC}. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:genAutForms}] Recall that $m$ and $\pi$ are fixed. A direct application of Corollary \ref{cor:MRVers} shows (after replacing $X$ by $2X$ and $x-h$ by $x$) that there is $\kappa = \kappa(m) > 0$ such that \begin{align*}
&\frac{1}{X}\int_{X}^{2X} \left|\frac{1}{h}\sum_{x < n \leq x+h} |\lambda_{\pi}(n)|^2 - \frac{1}{X}\sum_{X < n \leq 2X} |\lambda_{\pi}(n)|^2\right|^2 dx \ll_m \left(\frac{\log\log h_0}{\log h_0} + \frac{\log\log X}{(\log X)^{\kappa}}\right) \mathcal{P}_{|\lambda_{\pi}|^2}(X)^2, \end{align*}
where $h = h_0 H(|\lambda_{\pi}|^2;X)$ and $10 \leq h_0 \leq X/(10 H(|\lambda_{\pi}|^2;X))$.
By Lemma \ref{lem:PNTRS}, $\mathcal{P}_{|\lambda_{\pi}|^2}(X) \ll_m 1$. We also have $$
H(|\lambda_{\pi}|^2;X) \asymp_m \exp\left(\sum_{p \leq X} \frac{|\lambda_{\pi}(p)|^4 - 2|\lambda_{\pi}(p)|^2 + 1}{p}\right) \ll \exp\left(\sum_{p \leq X} \frac{m^2 |\lambda_{\pi}(p)|^2 - 1}{p}\right) \ll (\log X)^{m^2-1}. $$ It follows that our variance estimate holds if $h \geq h_0 (\log X)^{m^2-1}$ and $10 \leq h_0 \leq X/(10(\log X)^{m^2-1})$, and the proof of the theorem is complete. \end{proof}
\subsection{Proof of Corollary \ref{cor:Hooley}} Recall that \[
\Delta(n) := \max_{u \in \mathbb{R}} \sum_{ \substack{ d|n \\ e^u \leq d < e^{u+1}}} 1. \] Given $\theta \in \mathbb{R}$, write also $$
d(n,\theta) := \sum_{d|n} d^{i\theta}. $$ This is clearly a multiplicative function, with $d(n) = d(n,0)$. In probabilistic terms, it is also $d(n)$ times the characteristic function of the distribution function \[
\mathcal{D}_n(v) := \frac{1}{d(n)} \sum_{ \substack{ d|n \\ d \leq e^v} } 1, \quad v \in \mathbb{R}, \] introduced in Section \ref{subsec:apps}. The following general bounds for concentration functions in terms of characteristic functions allow us to relate $\Delta(n)$ with integral averages of $d(n,\theta)$. \begin{lem} \label{lem:TenExer} There are constants $c_2 > c_1 > 0$ such that, uniformly in $n \in \mathbb{N}$, $$
c_1\frac{1}{d(n)} \int_0^1 |d(n,\theta)|^2 d\theta \leq \Delta(n) \leq c_2\int_0^1 |d(n,\theta)| d\theta. $$ \end{lem} \begin{proof} This is a special case of \cite[Lem. 30.2]{HallTenBook}. \end{proof}
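Since the counting function in the definition of $\Delta(n)$ changes only when $e^u$ crosses a divisor, its maximum is attained when the left endpoint of the window sits at a divisor of $n$. A brute-force computation based on this observation (illustrative only; the divisor enumeration is naive):

```python
import math

def divisors(n: int) -> list[int]:
    # naive enumeration, fine for small n
    return [d for d in range(1, n + 1) if n % d == 0]

def hooley_delta(n: int) -> int:
    """Delta(n) = max_u #{d | n : e^u <= d < e^{u+1}}.
    The maximum is attained with the window's left edge at a divisor."""
    ds = divisors(n)
    return max(sum(1 for e in ds if d <= e < d * math.e) for d in ds)

assert hooley_delta(6) == 2      # divisors 1,2,3,6
assert hooley_delta(12) == 3     # the window [2, 2e) captures 2, 3, 4
assert hooley_delta(2**10) == 2  # powers of 2: each window holds only d and 2d
```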
By Lemma \ref{lem:TenExer} we find that for any $x \in [X/2,X]$ and $10 \leq h \leq X$, \begin{equation} \label{eq:lowBdDelta}
\sum_{x-h < n \leq x} \Delta(n) \gg \int_0^1 \sum_{x-h < n \leq x} \frac{|d(n,\theta)|^2}{d(n)} d\theta. \end{equation}
For $\theta \in \mathbb{R}$ and $n \in \mathbb{N}$ we write $f_{\theta}(n) := |d(n,\theta)|^2/d(n)$. \begin{cor} \label{cor:fthetaApp} Let $\theta \in (1/\log X,1]$. Let $10 \leq h_0 \leq X/\log X$ and put $h := h_0 (\theta^{-1} \log X)^{1/2}$. Then there is a constant $\kappa_{1,2} > 0$ such that for any $0 < \kappa < \kappa_{1,2}$, $$
\frac{2}{X}\int_{X/2}^X \left|\frac{1}{h}\sum_{x-h < n \leq x} f_{\theta}(n) - \frac{2}{X} \sum_{X/2 < n \leq X} f_{\theta}(n)\right|^2 dx \ll \left(\frac{\log\log h_0}{\log h_0} + \frac{\log\log X}{(\log X)^{\kappa}}\right). $$ The implicit constant is independent of $\theta$. \end{cor}
To prove Corollary \ref{cor:fthetaApp}, we show that $f_{\theta} \in \mathcal{M}(X;1,2,1)$ for all $1/\log X < \theta \leq 1$; this is the purpose of the following lemmas. \begin{lem}\label{lem:primeftheta} Let $\theta \in (0, 1]$, and let $\beta := \min\{1/\theta,\log X\}$. Then $$ \sum_{p \leq X} \frac{f_{\theta}(p)}{p} = \log(\beta \log X) + O(1). $$ Similarly, $$ H(f_{\theta};X) \asymp (\beta \log X)^{1/2}. $$ \end{lem} \begin{proof} Observe that for each $p$, $$
f_{\theta}(p) = \frac{1}{2}|1+p^{i\theta}|^2 = 1+\cos(\theta \log p). $$ Put $Y := \min\{X,\exp(1/\theta)\} = e^{\beta}$. For $p \leq Y$ we have $\cos(\theta \log p) = 1 +O(\theta^2(\log p)^2)$, so that by the prime number theorem, $$ \sum_{p \leq Y} \frac{f_{\theta}(p)}{p} = \sum_{p \leq Y} \frac{2}{p} + O\left(\theta^2 \sum_{p \leq e^{1/\theta}} \frac{(\log p)^2}{p}\right) = 2 \log(\min\{1/\theta,\log X\}) + O(1). $$ This proves the first claim if $0 \leq \theta \leq 1/\log X$, so assume now that $1/\log X < \theta \leq 1$. By partial summation and the prime number theorem, we have \begin{align*} \sum_{Y < p \leq X} \frac{1+\cos(\theta \log p)}{p} = \int_1^{\theta \log X} (1+\cos v) \frac{dv}{v} + O(1) &= \left(\frac{1}{2\pi} \int_0^{2\pi} (1+\cos u) du\right) \log(\theta \log X) + O(1) \\ &= \log(\theta \log X) + O(1). \end{align*} We thus deduce that $$ \sum_{p \leq X} \frac{f_{\theta}(p)}{p} = 2\log(1/\theta) + \log(\theta \log X) = \log(\theta^{-1} \log X), $$ and the first claim follows for all $\frac{1}{\log X} < \theta \leq 1$ as well. \\ For the second claim, we simply note that $$ H(f_{\theta};X) \asymp \exp\left(\sum_{p \leq X} \frac{(f_{\theta}(p)-1)^2}{p}\right) = \exp\left(\sum_{p \leq X} \frac{\cos(\theta \log p)^2}{p}\right). $$ A similar partial summation argument shows that \begin{align*} \sum_{p \leq X} \frac{\cos(\theta \log p)^2}{p} &= \log(\min\{1/\theta,\log X\}) + \left(\frac{1}{2\pi} \int_0^{2\pi} (\cos u)^2 du\right) \log(1+\theta \log X) + O(1) \\ &= \frac{1}{2}\log(\min\{\log X,1/\theta\}\log X) + O(1), \end{align*} and the claim follows. \end{proof}
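The first asymptotic of the lemma can be observed numerically, up to the $O(1)$ term; a rough check with a small prime sieve (the tolerance below is heuristic, not a proven error bound):

```python
import math

def primes_up_to(n: int) -> list[int]:
    # simple sieve of Eratosthenes
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [p for p in range(2, n + 1) if sieve[p]]

X, theta = 10**6, 0.2
# sum over primes of f_theta(p)/p = (1 + cos(theta log p))/p
s = sum((1 + math.cos(theta * math.log(p))) / p for p in primes_up_to(X))
predicted = math.log(math.log(X) / theta)  # log(theta^{-1} log X)
assert abs(s - predicted) < 2.0  # agreement up to the O(1) term
```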
\begin{lem} \label{lem:check3} Let $\theta \in (0,1]$. Let $2 \leq z \leq w \leq X$. Then $$ \sum_{z < p \leq w} \frac{f_{\theta}(p)}{p} \geq \sum_{z < p \leq w} \frac{1}{p} + O(1/\log z). $$ \end{lem} \begin{proof} Set $Y := \min\{X,e^{1/\theta}\}$ once again. If $w \leq Y$ then $\cos(\theta\log p) \geq \cos(1) \geq 0$ for all $z < p \leq w$, and thus
$$ \sum_{z < p \leq w} \frac{f_{\theta}(p)}{p} \geq \sum_{z < p \leq w} \frac{1}{p} + O(1/\log z). $$ On the other hand, if $Y \leq z$ (so that $\theta > 1/\log X$) then by the same partial summation argument as in Lemma \ref{lem:primeftheta} we find that $$ \sum_{z < p \leq w} \frac{f_{\theta}(p)}{p} = \left(\frac{1}{2\pi} \int_0^{2\pi} (1+\cos u) du\right) \log(\log w/\log z) + O(1/\log z) = \sum_{z < p \leq w} \frac{1}{p} + O(1/\log z). $$ Finally, suppose $z < Y < w$. In this case, we split the interval into the segments $(z,Y]$ and $(Y,w]$ and apply the arguments in each of the previous two cases to obtain $$ \sum_{z < p \leq w} \frac{f_{\theta}(p)}{p} \geq \sum_{z < p \leq Y} \frac{1}{p} + \sum_{Y < p \leq w} \frac{1}{p} + O(1/\log z) \geq \sum_{z < p \leq w} \frac{1}{p} + O(1/\log z), $$ as claimed. \end{proof}
\begin{proof}[Proof of Corollary \ref{cor:fthetaApp}]
Let $h = h_0 (\theta^{-1} \log X)^{1/2}$. Note that $|d(n,\theta)|^2/d(n) \leq d(n)$ uniformly over $n$, so that combined with Lemma \ref{lem:check3} we have that $f_{\theta} \in \mathcal{M}(X;1,2,1)$ for all $\theta \in (1/\log X,1]$. As in Lemma \ref{lem:check4}, $t_0(f_{\theta}, X) = 0$ is admissible for all $\theta \in (1/\log X,1]$. Since $H(f_{\theta};X) \ll (\theta^{-1} \log X)^{1/2}$ uniformly over all $\theta \in (1/\log X,1]$ by Lemma \ref{lem:primeftheta}, the claim follows from Corollary \ref{cor:MRVers}. \end{proof}
\begin{proof}[Proof of Corollary \ref{cor:Hooley}] Let $\delta \in (0,1]$, set $Y := \exp\left((\log X)^{\delta}\right)$ and put $h = h_0 (\log X)^{(1+\delta)/2}$, with $10 \leq h_0 \leq X/(10(\log X)^{(1+\delta)/2})$. By Lemma \ref{lem:primeftheta} we have $H(f_{\theta};X) \gg (\log X)^{(1+\delta)/2}$ for all $1/\log Y < \theta \leq 1$. By Fubini's theorem, the Cauchy-Schwarz inequality and Corollary \ref{cor:fthetaApp}, we thus see that \begin{align*}
&\frac{2}{X}\int_{X/2}^X \left[\int_{1/\log Y}^1 \left|\frac{1}{h}\sum_{x-h < n \leq x} f_{\theta}(n) - \frac{2}{X}\sum_{X/2 < n \leq X} f_{\theta}(n)\right| d\theta \right] dx \\
&\leq \int_{1/\log Y}^1 \left(\frac{2}{X}\int_{X/2}^X \left|\frac{1}{h}\sum_{x- h < n \leq x} f_{\theta}(n)-\frac{2}{X}\sum_{X/2 < n \leq X} f_{\theta}(n)\right|^2 dx\right)^{1/2} d\theta \\ &\ll \left(\sqrt{\frac{\log\log h_0}{\log h_0}} + \left(\frac{\log\log X}{(\log X)^{\kappa}}\right)^{\frac{1}{2}} \right) \int_{1/\log Y}^1 \mathcal{P}_{f_{\theta}}(X) d\theta, \end{align*} for any $0 < \kappa < \kappa_{1,2}$. By Lemma \ref{lem:primeftheta} we have $$ \int_{1/\log Y}^1 \mathcal{P}_{f_{\theta}}(X) d\theta \ll \int_{1/\log Y}^1 \exp\left(\sum_{p \leq X} \frac{f_{\theta}(p)-1}{p}\right) d\theta \ll \int_{1/\log Y}^1 \frac{d\theta}{\theta} = \log\log Y. $$ We thus deduce that for all but $o_{h_0 \rightarrow \infty}(X)$ exceptional integers $x \in [X/2,X]$, we have that $$
\int_{1/\log Y}^1 \left|\frac{1}{h}\sum_{x-h < n \leq x} f_{\theta}(n) - \frac{2}{X}\sum_{X/2 < n \leq X} f_{\theta}(n)\right| d\theta = o_{h_0 \rightarrow \infty}(\log\log Y). $$ For any of the non-exceptional $x$, we apply \eqref{eq:lowBdDelta} to give \begin{align} &\frac{1}{h} \sum_{x-h < n \leq x} \Delta(n) \geq \int_0^1 \left(\frac{1}{h} \sum_{x-h < n \leq x} f_{\theta}(n)\right) d\theta \nonumber \\
&\geq \int_{1/\log Y}^1 \left(\frac{2}{X}\sum_{X/2 < n \leq X} f_{\theta}(n)\right) d\theta - \int_{1/\log Y}^1 \left|\frac{1}{h} \sum_{x-h < n \leq x} f_{\theta}(n) - \frac{2}{X}\sum_{X/2 < n \leq X}f_{\theta}(n)\right| d\theta \nonumber\\ &= \int_{1/\log Y}^1 \left(\frac{2}{X}\sum_{X/2 < n \leq X} f_{\theta}(n)\right) d\theta - o_{h_0 \rightarrow \infty}(\log\log Y). \label{eq:lowBdHooley} \end{align} On the other hand, by \cite[Exer. 208]{Ten} we find that when $1/\log X \leq \theta \leq 1$, $$
\frac{2}{X}\sum_{X/2 < n \leq X} \frac{|d(n,\theta)|^2}{d(n)} \geq \frac{2}{X}\sum_{X/2 < n \leq X} \frac{\mu^2(n)|d(n,\theta)|^2}{d(n)} = |\zeta(1+i\theta)| H_{\theta}(1) + O(|\theta^{3/2}|/\sqrt{\log X}), $$ where for $\text{Re}(s) > 3/4$, $H_{\theta}(s)$ is some convergent Dirichlet series satisfying $H_{\theta}(1) \gg 1$ uniformly in $\theta \in [1/\log X,1]$. Integrating over $\theta \in [1/\log Y,1]$ and using the Laurent expansion $\zeta(1+i\theta) = (i\theta)^{-1} + O(1)$, we deduce that \begin{equation}\label{eq:longHooSum} \int_{1/\log Y}^1 \left(\frac{2}{X}\sum_{X/2 < n \leq X} f_{\theta}(n)\right) d\theta \gg \int_{1/\log Y}^1 \frac{d\theta}{\theta} + O(1) = \delta \log\log X + O(1). \end{equation} We thus have obtained $$ \frac{1}{h}\sum_{x-h < n \leq x} \Delta(n) \gg \delta \log\log X $$ for all but $o_{h_0 \rightarrow \infty}(X)$ integers $x \in [X/2,X]$, and the claim follows. \end{proof}
\section*{Acknowledgments} The author warmly thanks
Oleksiy Klurman and Aled Walker for helpful suggestions about improving the exposition of the paper, as well as for their encouragement. He is also grateful to Maksym Radziwi\l\l{} and Jesse Thorner for helpful conversations and suggestions regarding the applications to automorphic forms. Finally, he would like to thank G\'{e}rald Tenenbaum for useful comments and references. Much of this paper was written while the author held a Junior Fellowship at the Mittag-Leffler institute for mathematical research during the Winter of 2021. He would like to thank the institute for its support.
\end{document}
\begin{document}
\title{Local distinguishability of quantum states in bipartite systems} \author{Xiaoqian Zhang} \thanks{These authors contributed equally to this work.} \affiliation{College of Information Science and Technology, Jinan University, Guangzhou 510632, China} \author{Cheng Guo} \thanks{These authors contributed equally to this work.} \affiliation{Institute for Advanced Study, Tsinghua University, Beijing 100084, China} \author{Weiqi Luo} \email{[email protected]} \affiliation{College of Information Science and Technology, Jinan University, Guangzhou 510632, China} \author{Xiaoqing Tan} \affiliation{College of Information Science and Technology, Jinan University, Guangzhou 510632, China}
\date{\today} \pacs{03.67.Lx, 03.67.Dd, 03.65.Ud.}
\begin{abstract}
In this article, we show a sufficient and necessary condition for locally distinguishable bipartite states via one-way local operations and classical communication (LOCC). With this condition, we present some minimal structures of one-way LOCC indistinguishable quantum state sets. As long as an indistinguishable subset exists in a state set, the set is not distinguishable. We also list several distinguishable sets as instances. \end{abstract}
\maketitle
\section{Introduction} Quantum entanglement is an important manifestation of quantum nonlocality \cite{Einstein1935}. Quantum entangled states, especially maximally entangled states, play an important role in quantum information theory \cite{Ekert91,Bennett1992,Bennett1993,Barenco1996}. Maximally entangled states have attracted considerable attention in recent years, since they help us manipulate quantum information and understand the fundamental principles of quantum mechanics. A given set of quantum states is distinguishable if we can identify each state in the set unambiguously; otherwise, the set is said to be indistinguishable. Many scholars have studied distinguishable \cite{Tian2015,Cao2004,Ghosh2004,Zhang015,Zhang5,Bandyopa11} and indistinguishable \cite{Zhang014,Wang2016,Yu12,Li2015,Yu2015} maximally entangled states. In \cite{Bandyopa11}, Bandyopadhyay \emph{et al.} established an important conclusion: unilaterally transformable quantum states are distinguishable by one-way local operations and classical communication (LOCC). With this conclusion, they proved the existence of maximally entangled states that are indistinguishable by one-way LOCC in $C^d\otimes C^d$ ($d=4, 5, 6$). Zhang \emph{et al.} pointed out two classes of maximally entangled states that are indistinguishable by one-way LOCC in $C^d\otimes C^d$ ($d \leq 10$) \cite{Zhang014}. The definitions of one-way LOCC and two-way LOCC are given as follows.
The different subsystems of a quantum state may be far apart, so the following two restrictions are reasonable: (1) (local operations) each party can only perform operations on its own subsystem, while global operations on the composite system are not allowed, and (2) (classical communication) each party can transmit its measurement outcomes to the other parties through classical channels in order to coordinate \cite{Cohen07}. Protocols obeying these two rules are called ``local operations and classical communication'' protocols, abbreviated as LOCC. LOCC protocols can be further refined according to how classical communication is used. In one-way LOCC, classical communication is only transmitted from one party (Alice) to the other (Bob), and no information is allowed to flow in the opposite direction (Bob to Alice). In two-way LOCC, Alice and Bob may communicate over arbitrarily many rounds, each conditioning the next local operation on the outcomes received so far. In the following, LOCC refers to two-way LOCC unless otherwise specified.
Quantum nonlocality of orthogonal product states has been widely studied, including both local distinguishability \cite{Duan09,Bandy2012,Yang2013} and local indistinguishability \cite{Duan2010,Feng09,Zhang14,Wang15,Zhang15,Zhan15}. Duan \emph{et al.} exhibited orthogonal product states that are distinguishable by separable operations in $C^3\otimes C^3$ and $C^2\otimes C^2\otimes C^2$ quantum systems, respectively \cite{Duan09}. However, this result does not always carry over to LOCC protocols, because LOCC operations are weaker than separable operations. Orthogonal product states that are indistinguishable by LOCC exhibit quantum nonlocality without entanglement \cite{Duan2010,Feng09,Zhang14,Wang15,Zhang15,Zhan15}. Many other interesting works on quantum distinguishability problems are presented in \cite{Walgate00,Band09,Duan07,Duan08,feng07,duan11,duan14,Nath05,Chen03,Singal2016,Nathanson2013,Horo2003}. Singal proposed a framework for distinguishing orthogonal bipartite states by one-way LOCC \cite{Singal2016}. Similarly, Nathanson showed that any set of three orthogonal maximally entangled states can be distinguished via one-way LOCC with high probability \cite{Nathanson2013}.
The distinguishability of complete orthogonal product states can be proved by using the sufficient and necessary conditions in \cite{Chen2004,Ma2014}. Walgate and Hardy also studied the problem of distinguishing bipartite orthogonal quantum states in $C^2\otimes C^n$ by LOCC \cite{Horod03}. Their sufficient and necessary condition was used to prove the local distinguishability (or indistinguishability) of a class of orthogonal product states. Zhang \emph{et al.} extended this result and proved the indistinguishability of orthogonal product states in $C^d\otimes C^d$ ($d$ is odd) \cite{Zhang14}. The distinguishability of not only pure states but also mixed states has been widely studied \cite{Duan2010,Herzog04,Feng04,Zhang06,Stojn07}. Duan \emph{et al.} proved the distinguishability of mixed states in $C^2\otimes C^n$ \cite{Duan2010}. Feng \emph{et al.} pointed out that the states in a mixed state set can be unambiguously discriminated if and only if they are mutually orthogonal \cite{Feng04}.
In this paper, we focus on the local distinguishability of bipartite quantum state sets. In our protocol, Alice is the first to perform a nontrivial measurement. We first present the sufficient and necessary condition for locally distinguishable bipartite quantum states in Sec. \ref{sec:LOCC}. Secondly, this condition is used to prove the local indistinguishability of orthogonal product states in $C^{(3l_A+1)} \otimes C^{(3l_B+1)}$ ($1\leq l_A\leq l_B$). We also analyze and prove the distinguishability of three and four orthogonal quantum states via one-way LOCC in Sec. \ref{sec:jud}. We assess the number of product states in a $C^{d_A}\otimes C^{d_B}$ set that is distinguishable via one-way LOCC when Alice goes first. Furthermore, we analyze and show the minimal structures of one-way LOCC indistinguishable orthogonal product states in Sec. \ref{sec:one}. Finally, conclusions are given in Sec. \ref{sec:con}.
\section{A sufficient and necessary condition for bipartite distinguishable states } \label{sec:LOCC} When global operations are allowed, a state set is distinguishable if and only if these quantum states in the set are orthogonal to each other. \begin{lemma}\label{A4} A given set of quantum states $\{ \rho^{1}, \rho^{2}, \cdots, \rho^{N}\}$ are distinguishable if and only if $\forall h\neq k,~ \text{tr}(\rho^h\rho^k)=0.$ \end{lemma}
By the way, when the set $\{ \rho^{1}, \rho^{2}, \cdots, \rho^{N} \}$ is a pure state set $\{|\psi_1\rangle, |\psi_2\rangle, \cdots, |\psi_N\rangle\}$, the condition $\text{tr}(\rho^h\rho^k)=0$ can be rewritten as $ \langle \psi_h|\psi_k\rangle=0.$
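For pure states the condition reduces to $\text{tr}(\rho^h\rho^k) = |\langle \psi_h|\psi_k\rangle|^2$, which a short computation illustrates (the example states below are chosen arbitrarily):

```python
# tr(rho_h rho_k) = |<psi_h|psi_k>|^2 for pure states rho = |psi><psi|
def inner(u, v):
    # <u|v> with the physics convention (conjugate the first argument)
    return sum(a.conjugate() * b for a, b in zip(u, v))

def outer(u):
    return [[a * b.conjugate() for b in u] for a in u]  # |u><u|

def trace_product(A, B):
    n = len(A)
    return sum(A[i][j] * B[j][i] for i in range(n) for j in range(n))

psi1 = [1 / 2 ** 0.5, 1 / 2 ** 0.5, 0.0]   # (|0> + |1>)/sqrt(2)
psi2 = [1 / 2 ** 0.5, -1 / 2 ** 0.5, 0.0]  # (|0> - |1>)/sqrt(2)
t = trace_product(outer(psi1), outer(psi2))
assert abs(t - abs(inner(psi1, psi2)) ** 2) < 1e-12
assert abs(t) < 1e-12  # orthogonal pure states: tr(rho1 rho2) = 0
```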
After introducing the definitions of one-way LOCC and LOCC in the introduction, we now introduce an important lemma (Lemma \ref{A2}) \cite{Singal2016}, by which one can restrict attention to orthogonality preserving rank-one positive-operator valued measures (POVMs) in one-way LOCC protocols when checking the distinguishability of pure and mixed states.
\begin{lemma} \cite{Singal2016,Nathanson2013}\label{A2} Alice can commence a one-way LOCC protocol to distinguish among $\rho^{1}_{AB}, \cdots ,\rho^{N}_{AB}$ if and only if there exists a protocol which starts with an orthogonality preserving rank-one POVM on the side of Alice. \end{lemma}
Now, one of our main results can be introduced: the sufficient and necessary condition for bipartite quantum state sets to be distinguishable via one-way LOCC. Note that a rank-one POVM can be represented via a certain basis.
\begin{theorem}
Alice and Bob share a $C^{d_A}\otimes C^{d_B}$ composite system. For any set $S = \{\rho_1, \rho_2, \cdots, \rho_N\}$ in this system, $S$ is one-way LOCC distinguishable when Alice goes first, if and only if there exists a basis $\{|1\rangle, |2\rangle, \cdots, |d_A\rangle\}_A$, such that for any element $\rho_x$, \begin{eqnarray}
\rho_x=\sum\limits_{m,n=1}^{d_A}|m\rangle\langle n|\otimes\rho_{mn}^x, \ ~(x=1, 2, \ldots, N), \end{eqnarray} where each $\rho^x_{mm}$ $(m = 1, \ldots, d_A)$ is a non-normalized state, possibly zero, and the following condition holds: \begin{eqnarray} \forall h\neq k,~ \text{tr}(\rho^h_{mm}\rho^k_{mm})=0, \ ~(m = 1, \ldots, d_A). \end{eqnarray} \end{theorem}
\emph{Proof:}
The sufficiency is obvious. For the necessity, by \emph{Lemma \ref{A2}}, when Alice commences a one-way LOCC protocol, the basis $\{|1\rangle, |2\rangle, \cdots, |d_A\rangle\}_A$ can be taken according to her orthogonality preserving rank-one POVM. The condition ``$\forall h \neq k, \text{tr}(\rho^h_{mm}\rho^k_{mm})=0$'' expresses the distinguishability of the post-measurement states on Bob's side; if it fails for some $m$, then $\rho_h$ and $\rho_k$ are indistinguishable. By \emph{Lemma \ref{A2}}, if the state set is indistinguishable for every rank-one POVM, it is indistinguishable under all one-way LOCC protocols. The proof is completed. $\square$
When $S$ is a pure state set, we can obtain the following theorem. \begin{theorem}\label{A3}
A $C^{d_A} \otimes C^{d_B}$ pure quantum state system is shared by Alice and Bob. For any set $S$ in this system, $S$ is one-way LOCC distinguishable when Alice goes first, if and only if there exists a basis $\{|1\rangle, |2\rangle, \cdots, |d_A\rangle\}_A$, such that for any element $|\psi_k\rangle$ in $S$, \begin{eqnarray}\label{B2}
|\psi_k\rangle=\sum\limits_{j=1}^{d_A} |j\rangle_A|\eta_j^k\rangle_B, ~\text{and}~
\forall h \neq k, ~\forall j, ~\langle\eta_j^h|\eta_j^k\rangle=0. \end{eqnarray} \end{theorem}
When $d_A = 2$, these two theorems reduce to the result of Walgate and Hardy \cite{Horod03} and the result of Feng, Duan and Ying \cite{Duan2010}.
\section{Bipartite distinguishable pure quantum states in $C^{d_A}\otimes C^{d_B}$ system} \label{sec:jud} Theorem \ref{A3} shows a sufficient and necessary condition for distinguishable high-dimensional state sets with the concrete forms. It is a useful tool for researchers to analyze whether concrete state sets are one-way local distinguishable or not, such as these following orthogonal product states in Eq.(\ref{B1}) (also FIG. \ref{A1}).
\begin{eqnarray}\label{B1} \begin{array}{l}
\displaystyle |\varphi_{a_tb_t}^\pm\rangle=|a_t\rangle_A(\frac{1}{2}|b_t\rangle\pm\frac{1}{\sqrt{2}}|(b_t+1)\rangle+\frac{1}{2}|(b_t+2)\rangle)_B,\\ \displaystyle \qquad \text{where} \ t=1, 2\ \text{and} \\ \displaystyle \qquad a_1=0,1, 2, 3, \cdots, l_A-1, \\ \displaystyle \qquad b_1=a_1, a_1+3, a_1+6, \cdots,3l_B-2a_1-3.\\ \displaystyle \qquad a_2=3l_A, 3l_A-2, 3l_A-4, \cdots, 3l_A-2(l_A-1),\\ \displaystyle \qquad b_2=(3l_A-a_2)/2+1, (3l_A-a_2)/2+4, (3l_A-a_2)/2+7,\\ \displaystyle \qquad\qquad \cdots,3l_B-(3l_A-a_2)-2.\\
\displaystyle |\varphi_{a_rb_r}^\pm\rangle=(\frac{1}{2}|a_r\rangle\pm\frac{1}{\sqrt{2}}|(a_r+1)\rangle+\frac{1}{2}|(a_r+2)\rangle)_A|b_r\rangle_B, \\ \displaystyle \qquad \text{where} \ r=1, 2 \ \text{and}\ \\ \displaystyle \qquad b_1=0, 1, 2, 3, \cdots, l_A-1,\\ \displaystyle \qquad a_1=b_1+1, b_1+4, b_1+7, \cdots,3l_A-2b_1-2. \\ \displaystyle \qquad b_2=3l_B, 3l_B-2, 3l_B-4, \cdots, 3l_B-2(l_A-1),\\ \displaystyle \qquad a_2=(3l_B-b_2)/2, (3l_B-b_2)/2+3, (3l_B-b_2)/2+6,\\ \displaystyle \qquad \qquad \cdots, 3l_A-(3l_B-b_2)-3.\\
\displaystyle |\varphi_{q_1c_1}\rangle=|(3l_A-1)\rangle_A|c_1\rangle_B, \ \text{where}\ c_1=1, 2, 3, \cdots, 3l_B-1.\\
\displaystyle |\varphi_{q_2c_2}\rangle=|q_2\rangle_A|(3l_B-1)\rangle_B, \ \text{where} \ q_2=1, 2, 3, \cdots, 3l_A-2. \end{array} \end{eqnarray}
\begin{figure}
\caption{The tiling structure of $C^{(3l_A+1)}\otimes C^{(3l_B+1)}$ dimension orthogonal product states $(1\leq l_A\leq l_B)$.}
\label{A1}
\end{figure} According to Theorem \ref{A3}, we have the following conclusions.
{\bf Corollary 1}
\emph{In $C^{3l_A+1}\otimes C^{3l_B+1}$ $(1\leq l_A\leq l_B)$ quantum system, there exists an indistinguishable set via one-way LOCC no matter who goes first, which contains $4l_Al_B+7l_A+3l_B-3$ orthogonal product states $|\varphi_s\rangle$ (in Eq.(\ref{B1})).}
By Lemma \ref{A2}, Corollary 1 can be proved (details in the Appendix). Note that the indistinguishability of the set in Eq. (\ref{B1}) can also be proved directly via our Theorem \ref{A3}, by considering the subspaces $C^3\otimes C^{(3l_B+1)}$ of this set. The argument extends to the general case in which the dimension of Alice's subsystem is greater than three, where Theorem \ref{A3} still applies. \\
Next, let us check how many orthogonal product states in a state set are always distinguishable via one-way LOCC. Walgate \emph{et al.} pointed out that any three orthogonal product states are always distinguishable via LOCC in $C^2 \otimes C^2$ system \cite{Horod03}. Now, we show the general case.
{\bf Corollary 2} \emph{Any three orthogonal product states are always distinguishable via one-way LOCC when Alice commences.}
\emph{Proof:} Suppose three orthogonal product states are
$$|\psi_0\rangle=|a_0b_0\rangle_{AB},\ |\psi_1\rangle=|a_1b_1\rangle_{AB},\ |\psi_2\rangle=|a_2b_2\rangle_{AB}.$$
If $\langle a_0|a_1\rangle=\langle a_0|a_2\rangle=\langle a_1|a_2\rangle = 0,$ it is obvious that Formula (\ref{B2}) holds.
If two of $|a_0\rangle, |a_1\rangle$ and $|a_2\rangle$ are not orthogonal, Formula (\ref{B2}) also holds. We give the proof as follows:
Without loss of generality, suppose $\langle a_1|a_2\rangle\neq 0$ and $|\psi_0\rangle=|a_0b_0\rangle_{AB}=|00\rangle_{AB}$; then we have $\langle b_1|b_2\rangle=0.$ There are $4$ different cases:
$1)$ $ \langle 0|a_1\rangle=\langle 0|a_2\rangle = 0,$~
$2)$ $ \langle 0|a_1\rangle=0$ but $\langle 0|a_2\rangle\neq 0,$~
$\\3)$ $ \langle 0|a_2\rangle=0$ but $ \langle 0|a_1\rangle\neq 0,$
$4)$ $\langle 0|a_1\rangle\neq 0$ and $\langle 0|a_2\rangle\neq 0.$
For case $1)$, a basis $\{|0\rangle, |a_1\rangle, |a'_2\rangle\}$ can be constructed by Gram-Schmidt orthogonalization, where $|a'_2\rangle=|a_2\rangle-\alpha|a_1\rangle$ with $\alpha=\langle a_1|a_2\rangle$, so that $|a'_2\rangle$ is orthogonal to $\{|0\rangle, |a_1\rangle\}_A$; thus we have \begin{eqnarray*} \begin{array}{l}
\displaystyle |\psi_0\rangle=|00\rangle_{AB}, \ |\psi_1\rangle=|a_1b_1\rangle_{AB},\\
\displaystyle |\psi_2\rangle=\alpha |a_1b_2\rangle_{AB}+(|a_2\rangle-\alpha|a_1\rangle)_A|b_2\rangle_B. \end{array} \end{eqnarray*}
It means that Formula $(\ref{B2})$ holds. Similarly, starting from the two elements $\{|0\rangle, |a_2\rangle\}_A$ one can construct a basis $\{|0\rangle, |a'_1\rangle, |a_2\rangle\}$ by Gram-Schmidt orthogonalization. So we have \begin{eqnarray*} \begin{array}{l}
\displaystyle |\psi_0\rangle=|00\rangle_{AB},\ |\psi_2\rangle= |a_2 b_2\rangle_{AB}, \\ \displaystyle|\psi_1\rangle=\beta|a_2b_1\rangle_{AB}+(|a_1\rangle-\beta|a_2\rangle)_A|b_1\rangle_B, \end{array} \end{eqnarray*}
where $|a'_1\rangle=|a_1\rangle-\beta|a_2\rangle$ with $\beta=\langle a_2|a_1\rangle.$ This has the form of Formula $(\ref{B2}).$
For case $2)$, we get the same basis $\{|0\rangle, |a_1\rangle, |a'_2\rangle\}_A$ as in case $1)$.

For case $3)$, we get the same basis $\{|0\rangle, |a'_1\rangle, |a_2\rangle\}_A$ as in case $1)$.
For case $4)$, orthogonality forces $\langle b_0|b_1\rangle = \langle b_0|b_2\rangle = \langle b_1|b_2\rangle = 0,$ so Alice can choose any basis.
Thus, Formula $(\ref{B2})$ always holds for any set of three orthogonal product states. Therefore, any three orthogonal product states are always distinguishable. $\square$\\
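The basis construction used in case $1)$ can be made concrete; a small numerical sketch in $C^3$ for Alice's side, with arbitrarily chosen non-orthogonal $|a_1\rangle$, $|a_2\rangle$ (hypothetical example vectors):

```python
# Case 1): <0|a1> = <0|a2> = 0 but <a1|a2> != 0.
# Gram-Schmidt on {|0>, |a1>, |a2>} yields Alice's measurement basis.
def inner(u, v):
    return sum(a.conjugate() * b for a, b in zip(u, v))  # <u|v>

def norm(u):
    s = abs(inner(u, u)) ** 0.5
    return [a / s for a in u]

a0 = [1.0, 0.0, 0.0]           # |0>
a1 = [0.0, 1.0, 0.0]           # |a1>
a2 = norm([0.0, 1.0, 1.0])     # |a2>, chosen so that <a1|a2> != 0

alpha = inner(a1, a2)                                 # component of |a2> along |a1>
a2p = norm([x - alpha * y for x, y in zip(a2, a1)])   # |a2'>, normalized

# {|0>, |a1>, |a2'>} is orthonormal, so Alice may measure in this basis
basis = [a0, a1, a2p]
for i in range(3):
    for j in range(3):
        expected = 1.0 if i == j else 0.0
        assert abs(inner(basis[i], basis[j]) - expected) < 1e-12
```

Cases $2)$ and $3)$ use the same construction with the roles of $|a_1\rangle$ and $|a_2\rangle$ exchanged.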
Generally, for one-way LOCC, if Alice and Bob can decide who performs the first quantum operation, we can prove the following corollary.
{\bf Corollary 3} \emph{Any four orthogonal product states are one-way LOCC distinguishable whoever goes first.} \begin{figure}
\caption{The structure of at least three orthogonal states in Alice's or Bob's subsystems.}
\label{C1}
\end{figure}
\emph{Proof:} Suppose $|\psi_k\rangle=|a_k\rangle_A|b_k\rangle_B, ~k=0,1,2,3$. In FIGs. \ref{C1} and \ref{C2}, we depict the orthogonality relations among the $|a_k\rangle$ and $|b_k\rangle$. Two points $k$ and $j$ are linked by a solid line if $|a_k\rangle$ and $|a_j\rangle $ are orthogonal in Alice's subsystem, and by a dotted line for the corresponding orthogonality in Bob's subsystem.
If all four of $\{|a_k\rangle\}$ (see FIG. \ref{C1}(d1)) or three of $\{|a_0\rangle,~|a_1\rangle,~|a_2\rangle,~|a_3\rangle\}$ (see FIG. \ref{C1}(d2, d3, d4)) are mutually orthogonal, Formula $(\ref{B2})$ holds. That is, Alice can always commence a one-way LOCC protocol to distinguish these four orthogonal product states.
Similarly for Bob, if three of $\{|b_0\rangle,~|b_1\rangle,~|b_2\rangle,~|b_3\rangle \}$ are orthogonal to each other, Formula $(\ref{B2})$ holds.
In the following, we check the remaining cases, which are equivalent to those in FIG. \ref{C2}. That is, up to equivalence there are only two classes of complete graphs on the $4$ points that contain neither a solid triangle nor a dotted triangle:
\begin{figure}
\caption{The structure of only two orthogonal states in Alice's or Bob's subsystems.}
\label{C2}
\end{figure}
In FIG. \ref{C2}(g), we have $\langle a_0|a_1\rangle=\langle a_0|a_2\rangle=\langle a_1|a_3\rangle=\langle a_2|a_3\rangle=0$ in Alice's side and $\langle b_0|b_3\rangle=\langle b_1|b_2\rangle=0$ in Bob's side. Alice constructs the basis with $\{|a_0\rangle, |a_1\rangle\}$ via Gram-Schmidt orthogonalization to obtain \begin{eqnarray*} \begin{array}{l}
\displaystyle |\psi_0\rangle=|a_0\rangle_A|b_0\rangle_B, \ |\psi_1\rangle=|a_1\rangle_A|b_1\rangle_B,\\
\displaystyle |\psi_2\rangle=\alpha|a_1\rangle_A|b_2\rangle_B+(|a_2\rangle-\alpha|a_1\rangle)_A|b_2\rangle_B,\\
\displaystyle |\psi_3\rangle=\beta|a_0\rangle_A|b_3\rangle_B+(|a_3\rangle-\beta|a_0\rangle)_A|b_3\rangle_B, \end{array} \end{eqnarray*}
where $\langle a_1|(|a_2\rangle-\alpha|a_1\rangle)=0$, $\langle a_0|(|a_3\rangle-\beta|a_0\rangle)=0$. After normalization, the orthonormal basis vectors are $|\bar{a}_0\rangle=\frac{|a_0\rangle}{|||a_0\rangle||}$, $|\bar{a}_1\rangle=\frac{|a_1\rangle}{|||a_1\rangle||}$, $|\bar{a}_2\rangle=\frac{|a_2\rangle-\alpha|a_1\rangle}{|||a_2\rangle-\alpha|a_1\rangle||}$ and $|\bar{a}_3\rangle=\frac{|a_3\rangle-\beta|a_0\rangle}{|||a_3\rangle-\beta|a_0\rangle||}$. Formula $(\ref{B2})$ holds.
In FIG. \ref{C2}(h), we have $\langle a_0|a_2\rangle=\langle a_0|a_3\rangle=\langle a_1|a_2\rangle=0$ in Alice's side and $\langle b_0|b_1\rangle=\langle b_1|b_3\rangle=\langle b_2|b_3\rangle=0$ in Bob's side. Alice constructs the basis with Gram-Schmidt orthogonalization from $\{|a_0\rangle, |a_2\rangle\}$ to get \begin{eqnarray*} \begin{array}{l}
\displaystyle |\psi_0\rangle=|a_0\rangle_A|b_0\rangle_B,\ |\psi_2\rangle=|a_2\rangle_A|b_2\rangle_B,\\
\displaystyle |\psi_1\rangle=\alpha|a_0\rangle_A|b_1\rangle_B+(|a_1\rangle-\alpha|a_0\rangle)_A|b_1\rangle_B,\\
\displaystyle |\psi_3\rangle=(\beta|\bar{a}_1\rangle+\gamma|a_2\rangle)_A|b_3\rangle_B+(|a_3\rangle-\beta|\bar{a}_1\rangle-\gamma|a_2\rangle)_A|b_3\rangle_B, \end{array} \end{eqnarray*}
where $\langle a_0|(|a_1\rangle-\alpha|a_0\rangle)=0$, $(\beta^*\langle \bar{a}_1|+\gamma^*\langle a_2|)(|a_3\rangle-\beta|\bar{a}_1\rangle-\gamma|a_2\rangle)=0$. After normalization, the orthonormal basis vectors are
$|\bar{a}_0\rangle=\frac{|a_0\rangle}{|||a_0\rangle||}$, $|\bar{a}_1\rangle=\frac{|a_1\rangle-\alpha|a_0\rangle}{|||a_1\rangle-\alpha|a_0\rangle||}$, $|\bar{a}_2\rangle=\frac{|a_2\rangle}{|||a_2\rangle||}$ and $|\bar{a}_3\rangle=\frac{|a_3\rangle-\beta|\bar{a}_1\rangle-\gamma|a_2\rangle}{|||a_3\rangle-\beta|\bar{a}_1\rangle-\gamma|a_2\rangle||}$. Formula (\ref{B2}) holds.
Thus, any four orthogonal product states are always distinguishable via one-way LOCC no matter who goes first. $\square$ \\
According to Theorem \ref{A3}, we can estimate the number of product states in a one-way LOCC distinguishable set in a $C^{d_A}\otimes C^{d_B}$ system when Alice goes first. The results are Corollaries 4 and 5.
{\bf Corollary 4} \emph{In a $C^{d_A}\otimes C^{d_B}$ composite system, $[\frac{d_A d_B}{2}]+k$ orthogonal states expressed as in Eq. (\ref{B2}) can be exactly one-way locally distinguished only if at least $2k$ (when $d_Ad_B$ is even) or $2k-1$ (when $d_Ad_B$ is odd) of those states are product states. Here, ``$[\,\cdot\,]$'' denotes the floor function.}
\emph{Proof:} According to Theorem \ref{A3}, the $[\frac{d_A d_B}{2}]+k$ orthogonal states can be expressed as $|\psi_k\rangle=\sum\limits_{j=1}^{d_A} |j\rangle_A|\eta_j^k\rangle_B,$ where $\langle\eta_j^h|\eta_j^k\rangle=0$ for $h \neq k$. If the set is one-way locally distinguishable, Alice first measures her particle A in the basis $\{|1\rangle,|2\rangle,\cdots,|d_A\rangle\}_A$, so that Bob's quantum states collapse. Some of the $|\eta_{j}^k\rangle$ may be zero; for each $j$, the nonzero $|\eta_{j}^k\rangle$ are mutually orthogonal in $C^{d_B}$, so there are at most $d_B$ of them. Hence the total number of nonzero pairs $|j\rangle_A |\eta_{j}^k\rangle_B$ is at most $d_A d_B$.
Suppose the number of product states in the set is $x$; then the number of non-product states is $[\frac{d_A d_B}{2}]+k-x$. Each non-product state contains at least two nonzero pairs $|j\rangle_A |\eta_{j}^k\rangle_B$, and each product state contains at least one. Counting nonzero pairs therefore gives $1\cdot x+2\cdot([\frac{d_A d_B}{2}]+k-x)\leq d_A d_B$, i.e. $ x \geq 2k + 2\cdot[\frac{d_A d_B}{2}] - d_A d_B.$ This proves the corollary. $\square$
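The final bound is easy to sanity-check numerically; a minimal sketch (the function name is ours, for illustration only):

```python
# Numeric check of the counting bound x >= 2k + 2*floor(dA*dB/2) - dA*dB,
# which equals 2k when dA*dB is even and 2k - 1 when dA*dB is odd.
def min_product_states(dA, dB, k):
    return 2 * k + 2 * ((dA * dB) // 2) - dA * dB

print(min_product_states(2, 2, 1))  # even dA*dB: bound 2k = 2
print(min_product_states(3, 3, 1))  # odd  dA*dB: bound 2k - 1 = 1
```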
The following corollary is obtained immediately from the above one.
{\bf Corollary 5} \emph{In a $C^{d_A}\otimes C^{d_B}$ composite system, $d_Ad_B$ orthogonal states can be exactly one-way locally distinguished if and only if all of them are product states.}
\section{The minimum structures of one-way LOCC indistinguishable pure states} \label{sec:one} In this section, we mainly study the minimum structures of one-way local indistinguishability of quantum pure states. We present an observation which makes it unnecessary to check the local indistinguishability of a whole set: it suffices to find one locally indistinguishable subset.
{\bf Observation 1} \emph{In a $C^{d_A}\otimes C^{d_B}$ quantum system, any subset of a locally distinguishable set is locally distinguishable. Equivalently, a set containing a locally indistinguishable subset is itself locally indistinguishable.}
Firstly, let us consider the general case: sets of entangled states. Any two orthogonal pure states are always distinguishable via LOCC \cite{Walgate00}. Now, let us prove the local indistinguishability of any three Bell states by using Theorem \ref{A3}. Note that Ghosh \emph{et al.} \cite{Ghosh01} gave a similar conclusion: any three Bell states cannot be perfectly LOCC distinguished.
{\bf Corollary 6} \emph{Any three Bell states cannot be perfectly LOCC distinguished no matter who goes first.}
\emph{Proof:} Suppose three Bell states are \begin{eqnarray} \begin{array}{l}
\displaystyle |\Phi_0\rangle=1/\sqrt{2}(|00\rangle+|11\rangle)_{AB},\\
\displaystyle |\Phi_1\rangle=1/\sqrt{2}(|00\rangle-|11\rangle)_{AB}, \\
\displaystyle |\Phi_2\rangle=1/\sqrt{2}(|01\rangle+|10\rangle)_{AB}. \end{array} \end{eqnarray}
Let $|\varphi\rangle=\cos\theta |0\rangle+e^{i\delta}\sin\theta |1\rangle$, $|\varphi^\perp\rangle=-e^{-i\delta}\sin\theta |0\rangle+\cos\theta |1\rangle$, where $\theta, \ \delta\in[0,2\pi]$. We have $|0\rangle=\cos\theta |\varphi\rangle-e^{i\delta}\sin\theta |\varphi^\perp\rangle$, $|1\rangle=e^{-i\delta}\sin\theta |\varphi\rangle+\cos\theta |\varphi^\perp\rangle.$ Therefore, we rewrite the three Bell states as follows: \begin{eqnarray*} \begin{array}{l}
\displaystyle |\Phi_0\rangle=1/\sqrt{2}\{|\varphi\rangle[(\cos^2\theta+e^{-2i\delta}\sin^2\theta)|\varphi\rangle+(e^{-i\delta}-e^{i\delta})\sin\theta \cos\theta|\varphi^\perp\rangle]\\
\displaystyle \qquad\quad +|\varphi^\perp\rangle[(e^{-i\delta}-e^{i\delta})\sin\theta \cos\theta|\varphi\rangle
+(\cos^2\theta+e^{2i\delta}\sin^2\theta)|\varphi^\perp\rangle]\},\\
\displaystyle |\Phi_1\rangle=1/\sqrt{2}\{|\varphi\rangle[(\cos^2\theta-e^{-2i\delta}\sin^2\theta)|\varphi\rangle-(e^{-i\delta}+e^{i\delta})\sin\theta \cos\theta|\varphi^\perp\rangle]\\
\displaystyle \qquad\quad -|\varphi^\perp\rangle[(e^{-i\delta}+e^{i\delta})\sin\theta \cos\theta|\varphi\rangle
+(e^{2i\delta}\sin^2\theta-\cos^2\theta)|\varphi^\perp\rangle]\},\\
\displaystyle |\Phi_2\rangle=1/\sqrt{2}[|\varphi\rangle (e^{-i\delta}\sin 2\theta|\varphi\rangle+\cos 2\theta|\varphi^\perp\rangle)+|\varphi^\perp\rangle (\cos 2\theta|\varphi\rangle\\
\displaystyle \qquad\quad -e^{i\delta}\sin 2\theta|\varphi^\perp\rangle)]. \end{array} \end{eqnarray*} \indent From the equation, we can get \begin{eqnarray*} \begin{array}{l}
\displaystyle|\eta_\varphi^0\rangle=(\cos^2\theta+e^{-2i\delta}\sin^2\theta)|\varphi\rangle+(e^{-i\delta}-e^{i\delta})\sin\theta \cos\theta|\varphi^\perp\rangle,\\
\displaystyle|\eta_\varphi^1\rangle=(\cos^2\theta-e^{-2i\delta}\sin^2\theta)|\varphi\rangle-(e^{-i\delta}+e^{i\delta})\sin\theta \cos\theta|\varphi^\perp\rangle,\\
\displaystyle |\eta_\varphi^2\rangle=e^{-i\delta}\sin 2\theta|\varphi\rangle+\cos 2\theta|\varphi^\perp\rangle \end{array} \end{eqnarray*} and \begin{eqnarray*} \begin{array}{l}
\displaystyle |\eta_{\varphi^\perp}^0\rangle=(e^{-i\delta}-e^{i\delta})\sin\theta \cos\theta|\varphi\rangle
+(\cos^2\theta+e^{2i\delta}\sin^2\theta)|\varphi^\perp\rangle,\\
\displaystyle |\eta_{\varphi^\perp}^1\rangle=-(e^{-i\delta}+e^{i\delta})\sin\theta \cos\theta|\varphi\rangle
-(e^{2i\delta}\sin^2\theta-\cos^2\theta)|\varphi^\perp\rangle,\\
\displaystyle |\eta_{\varphi^\perp}^2\rangle=\cos 2\theta|\varphi\rangle-e^{i\delta}\sin 2\theta|\varphi^\perp\rangle. \end{array} \end{eqnarray*} \\
\indent The three states $|\Phi_0\rangle, |\Phi_1\rangle, |\Phi_2\rangle$ are all entangled, so none of $|\eta_\varphi^k\rangle$ and $|\eta^k_{\varphi^\perp}\rangle$ ($k=0, 1, 2$) is zero for any $\theta, \delta\in[0, 2\pi]$ in Bob's subsystem. If this set were locally distinguishable with Alice going first, there would have to be some choice of $\{|\varphi\rangle, |\varphi^\perp\rangle\}$ such that $\langle \eta_\varphi^0|\eta_\varphi^1\rangle=\langle \eta_\varphi^1|\eta_\varphi^2\rangle=\langle \eta_\varphi^0|\eta_\varphi^2\rangle=0$ and $\langle \eta_{\varphi^\perp}^0|\eta_{\varphi^\perp}^1\rangle=\langle \eta_{\varphi^\perp}^1|\eta_{\varphi^\perp}^2\rangle=\langle \eta_{\varphi^\perp}^0|\eta_{\varphi^\perp}^2\rangle=0$. However, there is no room in Bob's two-dimensional Hilbert space for three mutually orthogonal nonzero states. It is impossible to satisfy Formula (\ref{B2}), thus the set is LOCC indistinguishable. $\square$\\
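The impossibility argument can also be checked numerically. The sketch below (helper names ours) scans a grid of $(\theta,\delta)$ and verifies that the three residual states $|\eta_\varphi^k\rangle$, with coefficients copied from the expansion above, always keep at least one non-vanishing pairwise overlap:

```python
from math import sin, cos, pi

def inner(u, v):
    return sum(a.conjugate() * b for a, b in zip(u, v))

# Coefficients of eta_phi^0, eta_phi^1, eta_phi^2 in the {phi, phi_perp}
# basis, taken from the expressions in the proof above.
def etas(t, d):
    ed = complex(cos(d), sin(d))          # e^{i delta}
    em = ed.conjugate()                   # e^{-i delta}
    c, s = cos(t), sin(t)
    eta0 = [c * c + em * em * s * s, (em - ed) * s * c]
    eta1 = [c * c - em * em * s * s, -(em + ed) * s * c]
    eta2 = [em * sin(2 * t), cos(2 * t)]
    return eta0, eta1, eta2

# Three unit vectors in C^2 can never be mutually orthogonal: for every
# sampled basis choice, some pairwise overlap stays away from zero.
for i in range(40):
    for j in range(40):
        e0, e1, e2 = etas(i * 2 * pi / 40, j * 2 * pi / 40)
        pairs = [abs(inner(e0, e1)), abs(inner(e1, e2)), abs(inner(e0, e2))]
        assert max(pairs) > 1e-6
print("no basis choice makes Bob's three residual states orthogonal")
```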
Secondly, let us consider the minimum structures of one-way LOCC indistinguishable orthogonal product states. To prepare for the proof of Theorem \ref{A6}, we first introduce Theorem \ref{A5} and Corollaries 7 and 8, which are obtained from Lemma \ref{A4}.
{\bf Corollary 7}
\emph{Two states $|\psi\rangle$ and $|\phi\rangle$ are orthogonal in $C^d$ system. The measurement $\{M_{k}\}$ can be used to distinguish $|\psi\rangle$ and $|\phi\rangle$, if and only if $\forall k,~ \langle\psi|M_{k}^\dagger M_{k}|\phi\rangle=0.$}
\begin{theorem}\label{A5}
Two states $|\psi\rangle$ and $|\phi\rangle$ are orthogonal in $C^d$ system. The rank one POVM $\{|\bar{k}\rangle\langle\bar{k}|:k=0,\cdots,d-1 \}$ can be used to distinguish $|\psi\rangle$ and $|\phi\rangle$, if and only if $\forall k,~ \langle\psi|\bar{k}\rangle\langle\bar{k}|\phi\rangle=0.$ \end{theorem}
This theorem directly implies the following corollary.
{\bf Corollary 8}
\emph{Two states $|\psi\rangle$ and $|\phi\rangle$ are orthogonal in $C^3$ system. The rank one POVM $\{|\bar{k}\rangle\langle\bar{k}|:k=0,1,2\}$ can be used to distinguish $|\psi\rangle $ and $ |\phi\rangle$, if and only if either $|\psi\rangle $ or $|\phi\rangle$ is in $\{|\bar{k}\rangle:k=0,1,2\}.$}
\emph{Proof:} Suppose $|\psi\rangle=|0\rangle$ and $|\phi\rangle=|1\rangle.$ Notice that in a $C^3$ system, by Theorem \ref{A5} the rank one POVM $\{|\bar{k} \rangle \langle \bar{k}|: k=0,1,2 \}$ can only take one of these forms: $\{|0\rangle\langle0|, |1\rangle\langle1|, |2\rangle\langle2|\}$, $\{|0\rangle\langle0|, (\alpha|1\rangle+\beta|2\rangle)(\alpha^*\langle1|+\beta^* \langle2|), (\beta|1\rangle-\alpha|2\rangle)( \beta^*\langle1|- \alpha^*\langle2|):\alpha\beta\neq0\},$ or $\{|1\rangle\langle1|, (\alpha|0\rangle+\beta|2\rangle)(\alpha^*\langle0|+\beta^*\langle2|), (\beta|0\rangle-\alpha|2\rangle)(\beta^*\langle0|-\alpha^*\langle2|):\alpha\beta\neq0\}.$ In each form, either $|\psi\rangle$ or $|\phi\rangle$ belongs to $\{|\bar{k}\rangle:k=0,1,2\}$. $\square$
\begin{figure}
\caption{ The tiling structures of orthogonal product states in $C^2\otimes C^2$ and $C^3\otimes C^2$ dimension respectively.}
\label{C3}
\end{figure}
\begin{theorem}\label{A6} The least number of orthogonal product states that are indistinguishable via one-way LOCC when Alice goes first is four. \end{theorem} \emph{Proof:} In FIG. \ref{C3}(a), these states are as follows:
$$|\psi_{1,2}\rangle=\frac{1}{\sqrt{2}}(|0\rangle\pm|1\rangle)_A|0\rangle_B,\
|\psi_{3,4}\rangle=\frac{1}{\sqrt{2}}(|1\rangle\pm|2\rangle)_A|1\rangle_B.$$
For this set of orthogonal product states, Bob cannot distinguish $|\psi_{1}\rangle$ and $|\psi_{2}\rangle$ (or $|\psi_{3}\rangle$ and $|\psi_{4}\rangle$) in his subsystem. By Theorem \ref{A3} and Corollary 8, if these four states are distinguishable, Alice's basis must contain one of $\{ \frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)_A$, $\frac{1}{\sqrt{2}}(|0\rangle-|1\rangle)_A \}$ to distinguish $|\psi_{1}\rangle $ and $|\psi_{2}\rangle$, and likewise one of $\frac{1}{\sqrt{2}}(|1\rangle+|2\rangle)_A$ and $\frac{1}{\sqrt{2}}(|1\rangle-|2\rangle)_A$.
However, $\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)_A$ and $\frac{1}{\sqrt{2}}(|1\rangle\pm|2\rangle)_A$ are not orthogonal, and $\frac{1}{\sqrt{2}}(|0\rangle-|1\rangle)_A$ and $\frac{1}{\sqrt{2}}(|1\rangle\pm|2\rangle)_A$ are not orthogonal either. This means these four states are indistinguishable via one-way LOCC when Alice goes first.
Similarly in FIG. \ref{C3}(b), the set $\{|\phi_0\rangle=|0\rangle_A|0\rangle_B,|\phi_1\rangle=|1\rangle_A|0\rangle_B,
|\phi_{2,3}\rangle=\frac{1}{\sqrt{2}}(|0\rangle\pm|1\rangle)_A|1\rangle_B\}$ is also indistinguishable via one-way LOCC when Alice goes first. In fact, Groisman and Vaidman have shown the one-way LOCC indistinguishability for the above set by a different method \cite{Groisman01}. $\square$\\
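The obstruction in this proof is easy to verify numerically; the sketch below (helper names ours) checks both the mutual orthogonality of the FIG. \ref{C3}(a) set and the failure of every candidate basis vector for Alice:

```python
from math import sqrt

def inner(u, v):
    return sum(a.conjugate() * b for a, b in zip(u, v))

def kron(u, v):
    return [a * b for a in u for b in v]

r = 1 / sqrt(2)
# The four states of FIG. C3(a) in C^3 (Alice) x C^2 (Bob)
aplus, aminus = [r, r, 0], [r, -r, 0]      # (|0> +/- |1>)/sqrt(2)
bplus, bminus = [0, r, r], [0, r, -r]      # (|1> +/- |2>)/sqrt(2)
psis = [kron(aplus, [1, 0]), kron(aminus, [1, 0]),
        kron(bplus, [0, 1]), kron(bminus, [0, 1])]

# The set is mutually orthogonal ...
for i in range(4):
    for j in range(i + 1, 4):
        assert abs(inner(psis[i], psis[j])) < 1e-12

# ... yet every basis vector Alice is forced to use for one pair fails
# to be orthogonal to those forced by the other pair.
for u in (aplus, aminus):
    for v in (bplus, bminus):
        assert abs(inner(u, v)) > 1e-6
print("orthogonal set, but Alice has no valid measurement basis")
```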
With the above conclusions, we can also immediately judge the one-way local indistinguishability of the orthogonal product states in Eq. (\ref{B1}). We choose the states
$|\psi_{1,2}\rangle=|0\rangle_A(\frac{1}{2}|0\rangle\pm\frac{1}{\sqrt{2}}|1\rangle+\frac{1}{2}|2\rangle)_B,$ $|\psi_{3,4}\rangle=(\frac{1}{2}|0\rangle\pm\frac{1}{\sqrt{2}}|1\rangle+\frac{1}{2}|2\rangle)_A|3l_B\rangle_B,$ $|\psi_5\rangle=|1\rangle_A|(3l_B-1)\rangle_B,$ $|\psi_6\rangle=|2\rangle_A|(3l_B-1)\rangle_B.$ The tiling structure is similar to FIG. \ref{C3}(a), thus this subset is one-way LOCC indistinguishable when Alice goes first. When Bob goes first, a similar structure can be found. Therefore, the set in Eq. (\ref{B1}) is one-way LOCC indistinguishable no matter who goes first.
We continue to check whether there exist five orthogonal product states that are indistinguishable by one-way LOCC. Notice that DiVincenzo \emph{et al.} found a very similar example in \cite{DiVincenzo03}. The indistinguishability of this special set can be proved in different ways.
{\bf Corollary 9} \emph{There exist five orthogonal product indistinguishable states via one-way LOCC no matter who goes first.}
\emph{Proof:} In FIG. \ref{C4}, suppose $|\Psi_k\rangle=|a_k\rangle_A|b_k\rangle_B$, specifically,
$|\Psi_0\rangle=|0\rangle_A|0\rangle_B,~|\Psi_1\rangle= \frac{1}{\sqrt{3}} |2\rangle_A(|0\rangle-|1\rangle+|2\rangle)_B,$
$|\Psi_2\rangle=\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)_A|2\rangle_B,~
|\Psi_3\rangle=\frac{1}{\sqrt{6}}(|0\rangle-|1\rangle+|2\rangle)_A(|1\rangle+|2\rangle)_B,$
$|\Psi_4\rangle=\frac{1}{2}(|1\rangle+|2\rangle)_A(|0\rangle+|1\rangle)_B.$
\begin{figure}
\caption{ (Color online) The structure of five indistinguishable orthogonal product states by one-way LOCC.}
\label{C4}
\end{figure}
If $|a_k\rangle$ and $|a_j\rangle $ are orthogonal in Alice's subsystem, a solid line links the two points $k$ and $j$; dotted lines play the same role for Bob's subsystem. Suppose the five states are one-way distinguishable when Alice goes first (the proof is similar when Bob starts); by Theorem \ref{A3}, we check which basis Alice could choose. For the convenience of the reader, set $|\Psi_5\rangle=|a_5\rangle_A|b_5\rangle_B=|a_0\rangle_A|b_0\rangle_B=|\Psi_0\rangle.$
We will use the following observation: for each $k=0,1,2,3,4$, at least one element of the orthogonal pair $\{|a_k\rangle_A, |a_{k+1}\rangle_A\}$ must belong to the chosen basis. This holds because $\langle b_{k}|b_{k+1}\rangle\neq 0$ together with Corollary 7.
However, a contradiction arises. For instance, if we choose $|a_0\rangle$, then we cannot choose $|a_1\rangle$, so we must choose $|a_2\rangle$; but $|a_0\rangle$ and $|a_2\rangle$ are not orthogonal, which is a contradiction. Otherwise, if we choose $|a_1\rangle$, then we cannot choose $|a_2\rangle$, so we must choose $|a_3\rangle$; however, $\langle a_3|a_1\rangle\neq 0,$ which again produces a contradiction. Therefore, the five orthogonal product states are indistinguishable via one-way LOCC. $\square$ \\
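These five states and the cyclic orthogonality pattern used in the proof can be verified directly; a short sketch (helper names ours, states normalized componentwise):

```python
from math import sqrt

def inner(u, v):
    return sum(a.conjugate() * b for a, b in zip(u, v))

def kron(u, v):
    return [a * b for a in u for b in v]

s2, s3 = sqrt(2), sqrt(3)
A = [[1, 0, 0],                       # a0 = |0>
     [0, 0, 1],                       # a1 = |2>
     [1/s2, 1/s2, 0],                 # a2
     [1/s3, -1/s3, 1/s3],             # a3
     [0, 1/s2, 1/s2]]                 # a4
B = [[1, 0, 0],                       # b0
     [1/s3, -1/s3, 1/s3],             # b1
     [0, 0, 1],                       # b2
     [0, 1/s2, 1/s2],                 # b3
     [1/s2, 1/s2, 0]]                 # b4
psis = [kron(A[k], B[k]) for k in range(5)]

# the five product states are mutually orthogonal ...
for i in range(5):
    for j in range(i + 1, 5):
        assert abs(inner(psis[i], psis[j])) < 1e-12

# ... with the 5-cycle pattern used in the proof:
for k in range(5):
    assert abs(inner(A[k], A[(k + 1) % 5])) < 1e-12   # solid edges (Alice)
    assert abs(inner(B[k], B[(k + 1) % 5])) > 1e-6    # <b_k|b_{k+1}> != 0
print("5-cycle structure confirmed")
```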
In $C^3\otimes C^2$ (See FIG. \ref{C3}(c)), we further find that six orthogonal product states are one-way LOCC indistinguishable whoever goes first. Their forms are as follows: \begin{eqnarray} \begin{array}{l}
\displaystyle |\Psi_1\rangle=|0\rangle_A|0\rangle_B,\ |\Psi_2\rangle=|1\rangle_A|0\rangle_B,\\
\displaystyle |\Psi_{3,4}\rangle=\frac{1}{\sqrt{2}}(|0\rangle\pm |1\rangle)_A|1\rangle_B,\\
\displaystyle |\Psi_{5,6}\rangle=|2\rangle_A\frac{1}{\sqrt{2}}(|0\rangle\pm |1\rangle)_B. \end{array} \end{eqnarray}
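A quick numeric check (helper names ours) confirms that these six states are mutually orthogonal; since $6=\dim(C^3\otimes C^2)$, they in fact form a complete orthonormal product basis:

```python
from math import sqrt

def inner(u, v):
    return sum(a.conjugate() * b for a, b in zip(u, v))

def kron(u, v):
    return [a * b for a in u for b in v]

r = 1 / sqrt(2)
states = [kron([1, 0, 0], [1, 0]),            # |0>|0>
          kron([0, 1, 0], [1, 0]),            # |1>|0>
          kron([r, r, 0], [0, 1]),            # (|0>+|1>)/sqrt(2) |1>
          kron([r, -r, 0], [0, 1]),           # (|0>-|1>)/sqrt(2) |1>
          kron([0, 0, 1], [r, r]),            # |2> (|0>+|1>)/sqrt(2)
          kron([0, 0, 1], [r, -r])]           # |2> (|0>-|1>)/sqrt(2)

# Gram matrix is the 6x6 identity: an orthonormal, complete product basis
for i in range(6):
    for j in range(6):
        g = inner(states[i], states[j])
        assert abs(g - (1 if i == j else 0)) < 1e-12
print("complete orthonormal product basis of C^3 x C^2")
```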
\section{Conclusion} \label{sec:con} We have provided necessary and sufficient conditions for distinguishing bipartite pure quantum states and mixed quantum states, respectively, by one-way LOCC. We give some applications to show that the distinguishability of a state set can be proved efficiently via these conditions. The indistinguishability of a state set can be checked by searching for one indistinguishable subset in it; sometimes, by our results, only four orthogonal product states or three orthogonal maximally entangled states need to be found. Moreover, we also provide a bound on the number of orthogonal product states in a distinguishable state set.
\begin{acknowledgments} This work was supported by National Key R\&D Plan of China (Grant No. 2017YFB0802203), National Natural Science Foundation of China (Grant Nos. U1736203, 61877029, 61872153, 61732021, 61472165, 61373158, 61672014 and 61502200), Guangdong Provincial Engineering Technology Research Center on Network Security Detection and Defence (Grant No. 2014B090904067), Guangdong Provincial Special Funds for Applied Technology Research and Development and Transformation of Important Scientific and Technological Achieve (Grant No. 2016B010124009), Natural Science Foundation of Guangdong Province (Grant No. 2016A030313090), the Zhuhai Top Discipline--Information Security, Guangzhou Key Laboratory of Data Security and Privacy Preserving, Guangdong Key Laboratory of Data Security and Privacy Preserving, National Joint Engineering Research Center of Network Security Detection and Protection Technology, National Cryptography Development Fund MMJJ20180109, and the Fundamental Research Funds for the Central Universities and Guangdong Provincial Special Funds for Applied Technology Research and Development and Transformation of Important Scientific and Technological Achieve (Grant No. 2017B010124002). \end{acknowledgments}
\nocite{*}
\section*{ Appendix} Appendix is the proof of \emph{Corollary 1}.
\emph{Corollary 1. In $C^{3l_A+1}\otimes C^{3l_B+1}$ $(1\leq l_A\leq l_B)$ quantum system, there exists an indistinguishable set via one-way LOCC no matter who goes first, which contains $4l_Al_B+7l_A+3l_B-3$ orthogonal product states $|\varphi_s\rangle$ (in Eq. (\ref{B1})).}
\emph{Proof:}
We only consider the case where Alice goes first; the case where Bob goes first is analogous. A general $C^{(3l_A+1)}\otimes C^{(3l_B+1)}$ POVM element $M_m^\dagger M_m$ in the basis $\{|0\rangle,|1\rangle,\cdots, |3l_A\rangle\}_A$ can be expressed as
$$M_m^\dagger M_m=(a_{ij}^{m}), \ \mbox{where} \ a_{ii}^{m}\geq0 \ \mbox{and} \ i, j\in\{0, 1, 2, \cdots, 3l_A\}.$$
First, we point out that the selected subsets of states supported on $\{|0\rangle, |1\rangle, |2\rangle\}_A$, $\{|1\rangle, |2\rangle, |3\rangle\}_A$, $\cdots$, and $\{|3l_A-2\rangle, |3l_A-1\rangle, |3l_A\rangle\}_A$ live in $C^3\otimes C^{(3l_B+1)}$ subspaces, and Alice cannot find an appropriate basis to express them in the form of Eq. (\ref{B2}) after she performs her measurement. Notice that the product states $|q_2\rangle|3l_B-1\rangle$ ($q_2=1, 2, 3, \cdots, 3l_A-1$) can only be distinguished if the measurement contains the elements $\{|1\rangle\langle 1|, |2\rangle\langle2|, \cdots, |3l_A-1\rangle\langle 3l_A-1|\}_A$. No superposition of $|0\rangle$ and $|3l_A\rangle$ appears in the set (see Fig. 1), so the remaining two elements of the above measurement are $|0\rangle\langle 0|$ and $|3l_A\rangle\langle 3l_A|$. The effect of this positive operator upon the states \begin{eqnarray*} \begin{array}{l}
\displaystyle |\varphi_{i_dj_d}^\pm\rangle=|i_d\rangle_A(\frac{1}{2}|j_d\rangle\pm\frac{1}{\sqrt{2}}|j_d+1\rangle+\frac{1}{2}|j_d+2\rangle)_B, \ \mbox{where}\\ \displaystyle d=1\ \mbox{and} \ i_1=0,1, 2,\ j_1=i_1, i_1+3,\cdots, 3l_B-2i_1-3,\\
\displaystyle |\varphi_{i_ej_e}^\pm\rangle=(\frac{1}{2}|i_e\rangle\pm\frac{1}{\sqrt{2}}|i_e+1\rangle+\frac{1}{2}|i_e+2\rangle)_A|j_e\rangle_B, \ \mbox{where}\\ \displaystyle e=2\ \mbox{and} \ j_2=3l_B,\ i_2=(3l_B-j_2)/2,\\
\displaystyle |\varphi_{i_fj_f}\rangle=|i_f\rangle_A|j_f\rangle_B, \ \mbox{where}\\ \displaystyle f=2 \ \mbox{and} \ i_2=1, 2, j_2=3l_B-1. \end{array} \end{eqnarray*}
is entirely specified by the elements of the submatrix of $(a_{ij}^m)$ drawn from the subspace $\{|0\rangle, |1\rangle, |2\rangle\}_A$, where $i, j\in\{0, 1, 2\}$. This means Alice cannot perform a nontrivial measurement upon the subspace $\{|0\rangle,$ $ |1\rangle,|2\rangle\}_A$; the corresponding submatrix must be proportional to the identity. Then we obtain $a_{00}=a_{11}=a_{22}=a,$ $ a_{01}=a_{02}=a_{10}=a_{20}=a_{12}=a_{21}=0$. For the states \begin{eqnarray*} \begin{array}{l}
\displaystyle |\varphi_{i_dj_d}^\pm\rangle=|i_d\rangle_A(\frac{1}{2}|j_d\rangle\pm\frac{1}{\sqrt{2}}|j_d+1\rangle+\frac{1}{2}|j_d+2\rangle)_B, \ \mbox{where}\nonumber\\ \displaystyle d=1 \ \mbox{and} \ i_1=1, 2, 3, \ j_1=i_1, i_1+3,\cdots, 3l_B-2i_1-3,\nonumber\\
\displaystyle |\varphi_{i_ej_e}^\pm\rangle=(\frac{1}{2}|i_e\rangle\pm\frac{1}{\sqrt{2}}|i_e+1\rangle+\frac{1}{2}|i_e+2\rangle)_A|j_e\rangle_B, \ \mbox{where}\nonumber\\ \displaystyle j_1=0, i_1=j_1+1,\nonumber\\
\displaystyle |\varphi_{i_fj_f}\rangle=|i_f\rangle_A|j_f\rangle_B, \ \mbox{where}\nonumber\\ \displaystyle f=2 \ \mbox{and} \ i_2=1, 2, 3,\ j_2=3l_B-1. \end{array} \end{eqnarray*}
and the subspace $\{|1\rangle, |2\rangle, |3\rangle\}_A$, we make the same argument and obtain $a_{11}=a_{22}=a_{33}=a,$ $ a_{12}=a_{13}=a_{21}=a_{31}=a_{23}=a_{32}=0$. In the same way, for the subspaces $\{|2\rangle, |3\rangle,|4\rangle\}_A$, $\{|3\rangle, |4\rangle,|5\rangle\}_A$, $\cdots$, and $\{|3l_A-2\rangle, |3l_A-1\rangle, |3l_A\rangle\}_A$, we obtain \begin{eqnarray*} &&{}a_{44}=a_{55}=\cdots=a_{3l_A,3l_A}=a, \nonumber\\ &&{}a_{45}=a_{46}=a_{56}=\cdots=a_{3l_A-1,3l_A}=a_{3l_A,3l_A-1}=0. \end{eqnarray*} Because the POVM element $M_m^\dagger M_m$ is Hermitian, $(M_m^\dagger M_m)^\dagger=M_m^\dagger M_m$ holds. Then we obtain $a^*=a, a_{30}=a_{03}^*, a_{40}=a_{04}^*,\cdots,a_{3l_A,3l_A-3}=a_{3l_A-3,3l_A}^*.$ We now consider the states \begin{eqnarray*}
&&{}|\varphi_{i_dj_d}^\pm\rangle=|i_d\rangle_A(\frac{1}{2}|j_d\rangle\pm\frac{1}{\sqrt{2}}|j_d+1\rangle+\frac{1}{2}|j_d+2\rangle)_B, \ \mbox{where} \nonumber\\ &&{}d=1\ \mbox{and} \ i_1=0, \ j_1=3l_B-2i_1-3,\nonumber\\
&&{}|\varphi_{i_fj_f}\rangle=|3\rangle_A|3l_B-1\rangle_B \end{eqnarray*}
and the subspace $\{|0\rangle, |3\rangle\}_A$. After Alice measures, either the resulting states remain orthogonal, or the measurement distinguishes them outright. If the states remain orthogonal, we demand that $\langle\varphi_{d=1}|M_m^\dagger M_m|\varphi_{f=3l_B+2}\rangle=\frac{1}{2}a_{03}=0$. So we obtain $a_{03}^*=a_{03}=0$. For the subspace $\{|0\rangle, |4\rangle\}_A$, $\cdots$, and the subspace $\{|3l_A-3\rangle, |3l_A\rangle\}_A$, we obtain \begin{eqnarray*} &&{}a_{04}=a_{04}^*=a_{05}=a_{05}^*=\cdots=a_{3l_A,3l_A-3}=a_{3l_A-3,3l_A}^*=0. \end{eqnarray*} Now $M_m^\dagger M_m$ is proportional to the identity.
However, if Alice's measurement distinguishes the states \begin{eqnarray*}
&&{}|\varphi_{i_dj_d}^\pm\rangle=|i_d\rangle_A(\frac{1}{2}|j_d\rangle\pm\frac{1}{\sqrt{2}}|j_d+1\rangle+\frac{1}{2}|j_d+2\rangle)_B, \ \mbox{where}\nonumber\\ &&{} d=1\ \mbox{and} \ i_1=0,\ j_1=3l_B-2i_1-3,\nonumber\\
&&{}|\varphi_{i_fj_f}\rangle=|3\rangle_A|3l_B-1\rangle_B, \end{eqnarray*}
then $\langle\varphi_i|M_m^\dagger M_m|\varphi_i\rangle=0$ must hold for one of them. However, direct computation gives $\langle\varphi_i|M_m^\dagger M_m|\varphi_i\rangle=\frac{a}{2}$, so $a=0$ and the measurement is trivial, which contradicts Theorem \ref{A3}. Hence every $M_m^\dagger M_m$ is proportional to the identity, and the orthogonal product states are indistinguishable. $\square$
\end{document} |
\begin{document}
\title{Relationship between the Mandelbrot Algorithm and the Platonic Solids}
\begin{abstract} This paper focuses on the dynamics of the eight tridimensional principal slices of the tricomplex Mandelbrot set: the Tetrabrot, the Arrowheadbrot, the Mousebrot, the Turtlebrot, the Hourglassbrot, the Metabrot, the Airbrot (octahedron) and the Firebrot (tetrahedron). In particular, we establish a geometrical classification of these 3D slices using the properties of some specific sets that correspond to projections of the bicomplex Mandelbrot set on various two-dimensional vector subspaces, and we prove that the Firebrot is a regular tetrahedron. Finally, we construct the so-called ``Stella octangula'' as a tricomplex dynamical system composed of the union of the Firebrot and its dual, and after defining the idempotent 3D slices of $\M3$, we show that one of them corresponds to a third Platonic solid: the cube. \end{abstract}
\noindent\textbf{AMS subject classification:} 32A30, 30G35, 00A69, 51M20 \\ \textbf{Keywords:} Generalized Mandelbrot Sets, Tricomplex Dynamics, Metatronbrot, 3D Fractals, Platonic Solids, Airbrot, Earthbrot, Firebrot, Stella Octangula
\section*{Introduction}
Quadratic polynomials iterated on hypercomplex algebras have been used to generate multidimensional Mandelbrot sets for several years \cite{bedding, BrouilletteRochon, Dang, GarantRochon, Katunin, norton, Rochon1, Senn, Wang, Wang2}. Although this approach is widespread in the literature, other attempts at generalizing the classic fractal to higher dimensions have been made \cite{Barrallo, Fishback, Garg}. While Bedding and Briggs \cite{bedding} established that possibly no interesting dynamics occur in the case of the quaternionic Mandelbrot set, the generalization given in \cite{Rochon1}, which uses the four-dimensional commutative algebra of bicomplex numbers, possesses an interesting fractal aspect reminiscent of the classical Mandelbrot set. Figure \ref{Deep_Tetrabrot} shows that phenomenon for the so-called Tetrabrot (Fig. \ref{fig:tetrabrot}).
\begin{wrapfigure}{r}{0.51\textwidth} \centering
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width=5.5cm]{Tetrabrot_01.jpg}
\label{fig:Deep_Tetrabrot_01}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width=5.5cm]{Tetrabrot_02.jpg}
\label{fig:Deep_Tetrabrot_02}
\end{subfigure} \caption{Deep zoom\\ on the Tetrabrot.} \label{Deep_Tetrabrot} \end{wrapfigure}
This bicomplex Mandelbrot set $\M2$ is proven to be connected \cite{Rochon1} and related to a bicomplex version of the Fatou-Julia theorem \cite{Rochon2}. In 2009, these results and ideas were subsequently extended to the multicomplex space for quadratic polynomials of the form $z^2+c$ over multicomplex numbers \cite{GarantRochon}. In addition, the same authors introduced an equivalence relation between the fifty-six principal 3D slices of the tricomplex Mandelbrot set $\M3$ in order to establish which slices have the same dynamics and appearance in 3D visualization software. By doing this, eight equivalence classes were identified and characterized \cite{BrouilletteRochon}, and for each of those, one particular slice was designated as a canonical representative (Fig. \ref{fig:8 slices}).
Later, the scope of this comprehensive study was broadened by the use of the polynomial $z^p+c$ where $p\geq2$. In particular, the authors of \cite{RochonParise2,RochonRansfordParise} determined the exact intervals corresponding to $\mathcal{M}^{p}\cap\ensuremath{\mathbb{R}}$, depending on whether the integer $p$ is odd or even, and showed that the tricomplex Mandelbrot set $\mathcal{M}_{3}^{3}$ generated by the cubic polynomial $z^3+c$ has only four principal 3D slices \cite{RochonParise}. Then, Brouillette and Rochon \cite{Brouillette,BrouilletteRochon} generalized this result to the multicomplex space by establishing that the multicomplex Multibrot set $\ensuremath{\mathcal{M}_n^p}$, $n\geq 2$, possesses at most nine distinct dynamics when $p$ is even, and at most four when $p$ is odd. In particular, the authors show that every principal 3D slice of $\ensuremath{\mathcal{M}_n^p}$ is equivalent to a tricomplex slice up to an affine transformation, thus establishing the optimality of the tricomplex space in the context of principal 3D slices.
Nevertheless, a few aspects of the theory have not yet been addressed. Indeed, although the principal 3D slices of the tricomplex Mandelbrot set have been classified according to their dynamics, the relationships between some of them remain largely unexplored. Moreover, surprisingly, two principal 3D slices exhibit no irregularity, in sharp contrast to the other six. In fact, one is a regular octahedron (the Airbrot, Fig. \ref{fig:airbrot}) while the other is conjectured to be a regular tetrahedron (the Firebrot, Fig. \ref{fig:firebrot}) \cite{GarantRochon}, and the underlying mechanism explaining such behavior is currently poorly understood.
Thus, the main objective of this paper is to deepen knowledge on the aspects mentioned above, and by extension, to establish a relationship between the Mandelbrot algorithm and the Platonic solids, which have become an integral part of natural sciences like chemistry and geology due to their remarkable properties and prevalence in those fields (see for example \cite{Daintith, Fragmentation, Tang}). To achieve this, we develop several geometrical characterizations for the principal 3D slices of $\M3$. However, in order to do so, we first need to recall relevant properties in the algebras of bicomplex and tricomplex numbers and introduce new ones.
Therefore, in \autoref{sec:tricomplex}, the algebras of bicomplex and tricomplex numbers are introduced and new results concerning the latter are presented. Then, we establish various geometrical characterizations for the principal 3D slices of $\M3$ in \autoref{sec:geometrical} and we also discuss some of the geometric properties that can be inferred from them, notably the fact that one slice is a regular tetrahedron. In \autoref{sec:cube}, we introduce another type of 3D projection called an \textit{idempotent tridimensional slice}, and we show that one of them is a cube. Finally, we talk about the possibility to generate, in the same 3D subspace as that of the Firebrot and its geometric dual, a regular compound called the stellated octahedron (also named the \textit{Stella octangula}) as a tricomplex dynamical system.
\section{The algebra of tricomplex numbers}\label{sec:tricomplex} \subsection{Definitions and basics} Bicomplex and tricomplex numbers are special cases of multicomplex numbers, which were first introduced by Segre \cite{Segre} in 1892. One important reference on the subject is due to Price \cite{Baley}, who studied the general multicomplex number system and provided details for the bicomplex case. A modern recursive definition for the set of multicomplex numbers of order $n$ would be \cite{BrouilletteRochon,GarantRochon,Baley}: \begin{equation}\label{eq:multicomplex} \Mu{n}:=\{\eta_1+\eta_2\im{n}:\eta_1,\eta_2\in\Mu{n-1}\} \end{equation} where $n\in\ensuremath{\mathbb{N}}$, $\im{n}^2=-1$ and $\Mu{0}:=\ensuremath{\mathbb{R}}$. Multicomplex addition and multiplication are defined in an analogous manner to that of the complex plane\footnote{Moreover, the set $\Mu{n}$ together with multicomplex addition and multiplication by real numbers is a vector space over the field $\ensuremath{\mathbb{R}}$ and is isomorphic to $\ensuremath{\mathbb{R}}^{2^{n}}$.}: \begin{align*} &(\eta_1+\eta_2\im{n}) + (\zeta_1+\zeta_2\im{n}) = (\eta_1+\zeta_1) + (\eta_2+\zeta_2)\im{n};\\ &(\eta_1+\eta_2\im{n})(\zeta_1+\zeta_2\im{n}) = (\eta_1\zeta_1 - \eta_2\zeta_2) + (\eta_1\zeta_2+\eta_2\zeta_1)\im{n}. \end{align*} Hence, we have $\Mu{1}\simeq\ensuremath{\mathbb{C}}(\im1)$, and setting $n=2$ gives the bicomplex numbers $\Mu{2}:=\ensuremath{\mathbb{B}\mathbb{C}}$, which have been studied extensively \cite{Bicomplex,Baley,RochonMaitrise,Rochon3,RochonShapiro}.\footnote{It should be noted that beginning in 1848, J. 
Cockle developed an algebra which he called the \textit{tessarines} \cite{cockle1,cockle2,cockle3,cockle4} that was later proved to be isomorphic to $\ensuremath{\mathbb{B}\mathbb{C}}$.} It follows immediately that a bicomplex number $w$ can be written as \begin{gather*} w=z_1+z_2\im2,\quad z_1,z_2\in\ensuremath{\mathbb{C}}(\im1) \intertext{and expressing both $z_1$ and $z_2$ by their real coefficients $x_i$ yields} w=x_1+x_2\im1+x_3\im2+x_4\jh1 \end{gather*} where $\jh1=\im1\im2=\im2\im1$. Note that \jh1 is called a \textit{hyperbolic} unit since it satisfies the property $\jh1^2=1$ \cite{RochonShapiro,Sobczyk}. One direct but far-reaching consequence stemming from the existence of such a unit is the presence of non-trivial idempotent elements in $\ensuremath{\mathbb{B}\mathbb{C}}$, namely $\ensuremath{\gamma_{1}}=\frac{1+\jh1}{2}$ and $\ensuremath{\overline{\gamma_{1}}}=\frac{1-\jh1}{2}$. As demonstrated in the references listed above, these two peculiar bicomplex numbers are the key to extend many classical results from the complex plane. In fact, more generally, the idempotent elements existing in $\Mu{n}$ generate various representations of a given multicomplex number, which in turn can be used to give natural extensions to concepts like holomorphy and power series \cite{Baley,RochonMaitrise,Holomorphy,vajiac}. In our context, the formal identity \begin{equation}\label{eq:idemp BC} w=(z_1-z_2\im1)\ensuremath{\gamma_{1}}+(z_1+z_2\im1)\ensuremath{\overline{\gamma_{1}}} \end{equation} which holds for all $w=z_1+z_2\im2\in\ensuremath{\mathbb{B}\mathbb{C}}$ is called the idempotent representation of $w$ and will be used in \autoref{sec:geometrical}. Remark that in this form, multiplication can be carried out term by term.
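Identity \eqref{eq:idemp BC} and the termwise multiplication it affords are easy to check numerically. The following Python sketch (the encoding is ours: $\ensuremath{\mathbb{C}}(\im1)$ is played by Python's built-in complex type and a bicomplex number is stored as the pair $(z_1,z_2)$) verifies on random samples that the map $w\mapsto(z_1-z_2\im1,\,z_1+z_2\im1)$ turns the bicomplex product into a componentwise complex product.

```python
import random

def bc_mul(a, b):
    # Bicomplex product: (a1 + a2*i2)(b1 + b2*i2) with i2**2 = -1,
    # each component living in C(i1) (Python's complex type).
    a1, a2 = a
    b1, b2 = b
    return (a1 * b1 - a2 * b2, a1 * b2 + a2 * b1)

def idem_rep(w):
    # Idempotent components of w = z1 + z2*i2, as in the identity
    # w = (z1 - z2*i1)*gamma1 + (z1 + z2*i1)*conj(gamma1).
    z1, z2 = w
    return (z1 - z2 * 1j, z1 + z2 * 1j)

random.seed(0)
rand_c = lambda: complex(random.uniform(-1, 1), random.uniform(-1, 1))

for _ in range(100):
    w = (rand_c(), rand_c())
    v = (rand_c(), rand_c())
    lhs = idem_rep(bc_mul(w, v))
    rhs = [x * y for x, y in zip(idem_rep(w), idem_rep(v))]
    assert all(abs(p - q) < 1e-12 for p, q in zip(lhs, rhs))
print("bicomplex multiplication is termwise in the idempotent representation")
```

This is a numerical sanity check, not a proof; the algebraic statement itself is established in the references above.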
We now turn our attention to the set of tricomplex numbers, which is denoted $\ensuremath{\mathbb{TC}}$ and corresponds to the case $n=3$ in \eqref{eq:multicomplex}: \[ \ensuremath{\mathbb{TC}}=\Mu3 := \{\eta=\eta_{1}+\eta_{2}\im3:\eta_{1},\eta_{2}\in\ensuremath{\mathbb{B}\mathbb{C}},\im3^{2}=-1\}. \] It follows from this definition that any tricomplex number can be expressed as a sum of two, four or eight terms, respectively having bicomplex, complex or real coefficients: \begin{align*} \eta &= \eta_{1}+\eta_{2}\im3 \\ &= z_{1}+z_{2}\im2 + z_{3}\im3+z_{4}\jh3 \\ &= x_{1} + x_{2}\im1 + x_{3}\im2 + x_{4}\jh1 + x_{5}\im3 + x_{6}\jh2 + x_{7}\jh3 + x_{8}\im4, \end{align*} where $\im4=\im1\im2\im3$ is a fourth imaginary unit and $\jh3=\im2\im3=\im3\im2,\,\jh2=\im1\im3=\im3\im1$ are other instances of hyperbolic units. Hence, there exist additional non-trivial idempotent elements in $\ensuremath{\mathbb{TC}}$, such as $\ensuremath{\gamma_{3}}=\frac{1+\jh3}{2},\ensuremath{\overline{\gamma_{3}}}=\frac{1-\jh3}{2},\gamma_{2}=\frac{1+\jh2}{2}$ and $\overline{\gamma_{2}}=\frac{1-\jh2}{2}$. Note that $\ensuremath{\gamma_{3}}\ensuremath{\overline{\gamma_{3}}}=\gamma_{2}\overline{\gamma_{2}}=0$ and $\ensuremath{\gamma_{3}}+\ensuremath{\overline{\gamma_{3}}}=\gamma_{2}+\overline{\gamma_{2}}=1$, which are the properties that make the set $\{\ensuremath{\gamma_{1}},\ensuremath{\overline{\gamma_{1}}}\}$ interesting and useful in the setting of bicomplex numbers. In fact, it is possible to derive an idempotent representation in each of the sets $\{\ensuremath{\gamma_{3}},\ensuremath{\overline{\gamma_{3}}}\}$ and $\{\gamma_{2},\overline{\gamma_{2}}\}$ akin to that of bicomplex numbers. More will be said on the subject in \autoref{subsec:reps}.
Furthermore, the existence of idempotent elements that do not cancel out when multiplied together (e.g. $\ensuremath{\gamma_{1}}$ and $\ensuremath{\gamma_{3}}$) along with the commutativity of multiplication in $\ensuremath{\mathbb{TC}}$ imply that the products $\ensuremath{\gamma_{1}}\ensuremath{\gamma_{3}},\ensuremath{\overline{\gamma_{1}}}\ensuremath{\gamma_{3}},\ensuremath{\gamma_{1}}\ensuremath{\overline{\gamma_{3}}}$ and $\ensuremath{\overline{\gamma_{1}}}\ensuremath{\overline{\gamma_{3}}}$ are other examples of idempotent elements. Note that the same could be said about the products $\gamma_{2}\gamma_{i},\overline{\gamma_{2}}\gamma_{i},\gamma_{2}\overline{\gamma_{i}}$ and $\overline{\gamma_{2}}\overline{\gamma_{i}}$ where $i\in\{1,3\}$. However, simple calculations show that these elements are all equal to $\ensuremath{\gamma_{1}}\ensuremath{\gamma_{3}},\ensuremath{\overline{\gamma_{1}}}\ensuremath{\gamma_{3}},\ensuremath{\gamma_{1}}\ensuremath{\overline{\gamma_{3}}}$ or $\ensuremath{\overline{\gamma_{1}}}\ensuremath{\overline{\gamma_{3}}}$. The following theorem establishes precisely how many distinct idempotent elements exist in the algebra of tricomplex numbers.\footnote{It is conjectured in \cite{Vallieres} that the algebra $\Mu{n},n\geq1$ contains exactly $2^{2^{n-1}}$ distinct idempotent elements.}
\begin{theorem}[See \cite{Vallieres}]\label{thm:16 idempotents} There exist exactly sixteen distinct idempotent elements in $\ensuremath{\mathbb{TC}}$: \begin{gather*} 0,\,1,\,\gamma_k = \frac{1+\jh{k}}{2},\,\overline{\gamma_k}=\frac{1-\jh{k}}{2},\quad\text{where }k=1,2,3, \shortintertext{the four products} \ensuremath{\gamma_{1}\gamma_{3}},\,\ensuremath{\overline{\gamma_{1}}\gamma_{3}},\,\ensuremath{\gamma_{1}\overline{\gamma_{3}}},\,\ensuremath{\overline{\gamma_{1}}\overline{\gamma_{3}}} \intertext{and} 1-\ensuremath{\gamma_{1}\gamma_{3}},\,1-\ensuremath{\overline{\gamma_{1}}\gamma_{3}},\,1-\ensuremath{\gamma_{1}\overline{\gamma_{3}}}\,\text{ and }\,1-\ensuremath{\overline{\gamma_{1}}\overline{\gamma_{3}}}. \end{gather*} \end{theorem} The different types of conjugation of a tricomplex number have been studied in \cite{GarantPelletier,GarantRochon}. More precisely, it is shown that any tricomplex number $\eta$ has seven different conjugates, denoted $\eta^{\ddagger_{i}},\,i=1,\dotsc,7$ (see Figure \ref{eq:7conj}), and that, together with the identity conjugation, they form a group under composition isomorphic to $\mathbb{Z}_{2}^{3}$. Analogous results in $\Mu{n}$, $n\geq1$ are also presented. In addition, the authors of \cite{Holomorphy} provided a different but equivalent definition of conjugation valid in the general case.
\begin{figure}
\caption{The seven tricomplex conjugates.}
\label{eq:7conj}
\end{figure}
Nevertheless, one aspect of the theory that currently lacks understanding in the multicomplex setting $\Mu{n},n\geq3$ is the relationship between conjugation and invertibility. Indeed, although Price \cite{Baley} proposed various invertibility criteria based on the use of idempotent representations or Cauchy-Riemann matrices, none of these methods involve multicomplex conjugates, unlike the well known formula $z^{-1}=\frac{\overline{z}}{{|z|}^{2}}$ in the complex plane. The next new results, valid in the tricomplex case, represent a first step in this direction.\footnote{The reference \cite{Vallieres} contains two conjectures extending these results to $\Mu{n},n\geq1$.} \begin{proposition}\label{prop:produit reel} Let $\eta\in\ensuremath{\mathbb{TC}}$. The multiplication of $\eta$ by its seven different conjugates always equals a non-negative real number. In other words, \[ \eta\eta^{\ddagger_{1}}\eta^{\ddagger_{2}}\eta^{\ddagger_{3}}\eta^{\ddagger_{4}}\eta^{\ddagger_{5}}\eta^{\ddagger_{6}}\eta^{\ddagger_{7}}=\eta\cdot\prod_{i=1}^{7}{\eta^{\ddagger_{i}}}\in\ensuremath{\mathbb{R}}_{\geq0}. \] \end{proposition} \begin{proof} A brute force approach is to perform straightforward but tedious calculations, which ultimately lead to the desired result. However, a more refined proof is available in \cite{Vallieres}. \end{proof} \begin{theorem} Let $\eta\in\ensuremath{\mathbb{TC}}$. Then, $\eta$ is invertible if and only if \[ \eta\cdot\prod_{i=1}^{7}{\eta^{\ddagger_{i}}}\neq 0. \] Moreover, when $\eta$ is invertible, its multiplicative inverse is given by the formula \[ \eta^{-1}=\frac{\eta^{\ddagger_{1}}\eta^{\ddagger_{2}}\eta^{\ddagger_{3}}\eta^{\ddagger_{4}}\eta^{\ddagger_{5}}\eta^{\ddagger_{6}}\eta^{\ddagger_{7}}}{\eta\cdot\prod_{i=1}^{7}{\eta^{\ddagger_{i}}}}. \] \end{theorem} \begin{proof} This is a direct consequence of proposition \ref{prop:produit reel}. 
\end{proof} \noindent It is rather interesting to note that these results also uncover a link between conjugation and Cauchy-Riemann matrices: the product of any $\eta\in\ensuremath{\mathbb{TC}}$ with its seven conjugates equals the determinant of its Cauchy-Riemann matrix with real coefficients \cite{Vallieres}. Furthermore, we have the following corollary. \begin{corollary} A non-zero tricomplex number is a zero divisor if and only if its product with one of its conjugates, or with a product of several of them, equals zero. \end{corollary}
\subsection{Idempotent representations of a tricomplex number}\label{subsec:reps} Let $n\geq2$ and consider the algebra $\Mu{n}$. Then, as in \cite{BrouilletteRochon,GarantRochon,Holomorphy,vajiac}, one can use the procedure developed by Price \cite{Baley} to construct $n-1$ sets of idempotent elements, each forming an orthogonal \textit{basis} of the space $\Mu{n}$, and find an idempotent representation in each of those bases. Thus, in the tricomplex case, two idempotent representations can be obtained this way. First, we consider the elements $\ensuremath{\gamma_{3}}$ and $\ensuremath{\overline{\gamma_{3}}}$, to which corresponds the identity \cite{Baley} \begin{equation}\label{eq:idemp tr} \eta = (\eta_1-\eta_2\im2)\ensuremath{\gamma_{3}} + (\eta_1+\eta_2\im2)\ensuremath{\overline{\gamma_{3}}} \end{equation} that we will call the \ensuremath{\gamma_{3}}-representation of $\eta=\eta_1+\eta_2\im3\in\ensuremath{\mathbb{TC}}$. The second representation can be derived from the first by noticing that the two idempotent components in \eqref{eq:idemp tr} are bicomplex numbers. 
Hence, writing $\eta_1=\eta_{11}+\eta_{12}\im2,\,\eta_2=\eta_{21}+\eta_{22}\im2$ and using \eqref{eq:idemp BC} yields \begin{equation}\label{eq:idemp 4} \eta = \eta_{\ensuremath{\gamma_{1}\gamma_{3}}}\cdot\ensuremath{\gamma_{1}\gamma_{3}} + \eta_{\ensuremath{\overline{\gamma_{1}}\gamma_{3}}}\cdot\ensuremath{\overline{\gamma_{1}}\gamma_{3}} + \eta_{\ensuremath{\gamma_{1}\overline{\gamma_{3}}}}\cdot\ensuremath{\gamma_{1}\overline{\gamma_{3}}} + \eta_{\ensuremath{\overline{\gamma_{1}}\overline{\gamma_{3}}}}\cdot\ensuremath{\overline{\gamma_{1}}\overline{\gamma_{3}}} \end{equation} where \begin{align*} \eta_{\ensuremath{\gamma_{1}\gamma_{3}}} &= (\eta_{11}+\eta_{22}) - (\eta_{12}-\eta_{21})\im1;\\ \eta_{\ensuremath{\overline{\gamma_{1}}\gamma_{3}}} &= (\eta_{11}+\eta_{22}) + (\eta_{12}-\eta_{21})\im1;\\ \eta_{\ensuremath{\gamma_{1}\overline{\gamma_{3}}}} &= (\eta_{11}-\eta_{22}) - (\eta_{12}+\eta_{21})\im1;\\ \eta_{\ensuremath{\overline{\gamma_{1}}\overline{\gamma_{3}}}} &= (\eta_{11}-\eta_{22}) + (\eta_{12}+\eta_{21})\im1. \end{align*} Remark that every idempotent component above is a complex number: thus, this process cannot be repeated. However, the $n-1$ sets of idempotent elements obtained by applying Price's method are by no means exhaustive. For example, in the algebra of tricomplex numbers, \autoref{thm:16 idempotents} states that there exist fourteen non-trivial idempotent elements, whereas only six of them are used in identities \eqref{eq:idemp tr} and \eqref{eq:idemp 4}. This suggests that additional representations involving other idempotent elements exist in $\ensuremath{\mathbb{TC}}$. 
Indeed, by solving the system of linear equations associated to the equality $\eta=a\gamma_{2} + b\overline{\gamma_{2}}$ where $a,b\in\ensuremath{\mathbb{B}\mathbb{C}}$, one obtains the solution $\{a=\eta_1-\eta_2\im1,b=\eta_1+\eta_2\im1\}$, which leads to the identity \begin{equation}\label{eq:idemp de} \eta = (\eta_1-\eta_2\im1)\gamma_{2} + (\eta_1+\eta_2\im1)\overline{\gamma_{2}}. \end{equation}
Note that the properties of the idempotent elements involved in identities \eqref{eq:idemp tr} to \eqref{eq:idemp de} ensure that the algebraic operations in $\ensuremath{\mathbb{TC}}$ can be carried out term by term in these representations. More generally, this is also true for the idempotent representations in $\Mu{n}$ for $n\geq2$, and in fact, this property is the cornerstone of several important results related to generalized Mandelbrot and filled-in Julia sets \cite{ClaudiaRochon, BrouilletteRochon,GarantRochon}. In a similar fashion, representations \eqref{eq:idemp BC} to \eqref{eq:idemp de} will often be the key to establishing the results presented in sections \ref{sec:geometrical} and \ref{sec:cube}.
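As a numerical sanity check (not a proof), representation \eqref{eq:idemp 4}, its termwise property, and \autoref{thm:16 idempotents} can all be probed by direct computation. In the Python sketch below (the encoding is ours: a tricomplex number is stored as a nested pair of complex numbers following the recursive definition \eqref{eq:multicomplex}, with $\im1$ played by `1j`), we verify that the four components of \eqref{eq:idemp 4} multiply termwise, and that the sixteen preimages of the tuples in $\{0,1\}^4$ under the component map are indeed idempotent.

```python
import itertools
import random

# A tricomplex number eta = eta1 + eta2*i3 with eta1 = eta11 + eta12*i2 and
# eta2 = eta21 + eta22*i2 is stored as ((eta11, eta12), (eta21, eta22)),
# where every eta_jk lies in C(i1) (Python's complex type).

def add(a, b):
    return a + b if isinstance(a, complex) else (add(a[0], b[0]), add(a[1], b[1]))

def sub(a, b):
    return a - b if isinstance(a, complex) else (sub(a[0], b[0]), sub(a[1], b[1]))

def mul(a, b):
    # Recursive multicomplex product: (a1 + a2*i_n)(b1 + b2*i_n), i_n**2 = -1.
    if isinstance(a, complex):
        return a * b
    (a1, a2), (b1, b2) = a, b
    return (sub(mul(a1, b1), mul(a2, b2)), add(mul(a1, b2), mul(a2, b1)))

def components(eta):
    # The four complex idempotent components of eq. (idemp 4).
    (e11, e12), (e21, e22) = eta
    return ((e11 + e22) - (e12 - e21) * 1j, (e11 + e22) + (e12 - e21) * 1j,
            (e11 - e22) - (e12 + e21) * 1j, (e11 - e22) + (e12 + e21) * 1j)

def from_components(c):
    # Inverse of `components` (obtained by solving the underlying linear system).
    c1, c2, c3, c4 = c
    s, t = (c1 + c2) / 2, (c2 - c1) / 2j
    u, v = (c3 + c4) / 2, (c4 - c3) / 2j
    return (((s + u) / 2, (t + v) / 2), ((v - t) / 2, (s - u) / 2))

random.seed(1)
rc = lambda: complex(random.uniform(-1, 1), random.uniform(-1, 1))
x = ((rc(), rc()), (rc(), rc()))
y = ((rc(), rc()), (rc(), rc()))

# Multiplication is termwise in the gamma1*gamma3-representation:
assert all(abs(p - q * r) < 1e-12
           for p, q, r in zip(components(mul(x, y)), components(x), components(y)))

# The preimages of the 16 tuples in {0,1}^4 all satisfy e*e = e:
for tup in itertools.product((0j, 1 + 0j), repeat=4):
    e = from_components(tup)
    assert all(abs(p - q) < 1e-12
               for p, q in zip(components(mul(e, e)), components(e)))
print("termwise product and 16 idempotents verified")
```

Since the component map is a linear bijection onto $\ensuremath{\mathbb{C}}(\im1)^4$ and respects products, these sixteen elements exhaust the idempotents, in agreement with \autoref{thm:16 idempotents}.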
\section{Geometrical characterizations}\label{sec:geometrical} \subsection{The tricomplex Mandelbrot set $\M3$} We begin this section by recalling relevant definitions introduced in \cite{BrouilletteRochon,GarantRochon}. Let $P_{c}(\eta)=\eta^2+c$ and denote \[ P_{c}^{(n)}(\eta)=\underbrace{(P_{c}\circ P_{c}\circ\cdots\circ P_{c})}_{n\text{ times}}(\eta). \] It follows that the classical Mandelbrot set can be defined as \[ \M{1} = \{c\in\Mu{1} : \{P_{c}^{(n)}(0)\}_{n\in\mathbb{N}}\text{ is bounded}\}. \] If we set $c\in\Mu{2}$ or $c\in\Mu{3}$ instead, we have the following generalizations. \begin{definition} The bicomplex Mandelbrot set is defined as \[ \M{2}=\{c\in\ensuremath{\mathbb{B}\mathbb{C}}:\{P_{c}^{(n)}(0)\}_{n\in\mathbb{N}}\text{ is bounded}\}. \] \end{definition} \begin{definition} The tricomplex Mandelbrot set is defined as \[ \M{3}=\{c\in\ensuremath{\mathbb{TC}}:\{P_{c}^{(n)}(0)\}_{n\in\mathbb{N}}\text{ is bounded}\}. \] \end{definition} The bicomplex Mandelbrot set was first studied by Rochon in \cite{Rochon1} while its tricomplex analog was introduced in \cite{GarantRochon}. In addition, these generalizations are particular cases of the multicomplex Multibrot sets studied in \cite{BrouilletteRochon}. For the sake of clarity and continuity, we will use the same subspaces and notation as those in the last two references. Now, since $\M3$ is an eight-dimensional object, we are only able to visualize its various projections on tridimensional subspaces of \ensuremath{\mathbb{TC}}, called 3D slices. This brings us to the following definitions. \begin{definition} Let $\im{k},\im{l},\im{m}\in\{1,\im1,\im2,\im3,\im4,\jh1,\jh2,\jh3\}$ with $\im{k}\neq\im{l},\,\im{k}\neq\im{m}$ and $\im{l}\neq\im{m}$. The space \[ \ensuremath{\mathbb{T}}(\im{k},\im{l},\im{m}) := \text{span}_{\R}\{\im{k},\im{l},\im{m}\} \] is the vector subspace of $\ensuremath{\mathbb{TC}}$ consisting of all real finite linear combinations of these three distinct units. 
\end{definition} \begin{definition} Let $\im{k},\im{l},\im{m}\in\{1,\im1,\im2,\im3,\im4,\jh1,\jh2,\jh3\}$ with $\im{k}\neq\im{l},\,\im{k}\neq\im{m}$ and $\im{l}\neq\im{m}$. We define a principal 3D slice of the tricomplex Mandelbrot set $\M3$ as \begin{align*} \ensuremath{\mathcal{T}}(\im{k},\im{l},\im{m}) &= \{c\in\ensuremath{\mathbb{T}}(\im{k},\im{l},\im{m}) : \{P_{c}^{(n)}(0)\}_{n\in\mathbb{N}}\text{ is bounded}\} \\ &= \ensuremath{\mathbb{T}}(\im{k},\im{l},\im{m})\cap\M{3}. \end{align*} \end{definition} As noted earlier, the fifty-six 3D slices corresponding to the various combinations of $\im{k},\im{l},\im{m}\in\{1,\im1,\im2,\im3,\im4,\jh1,\jh2,\jh3\}$ have been classified according to their dynamics (and, consequently, their appearance in visualization software) in \cite{GarantRochon}. \autoref{fig:8 slices} illustrates the eight principal 3D slices of the tricomplex Mandelbrot set (power 2) resulting from this characterization \cite{BrouilletteRochon}.
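In numerical experiments, membership in $\M1$ (and hence in the slices, via the idempotent decompositions used throughout this section) is approximated with a truncated escape-time test: the orbit of $0$ under $P_c$ is unbounded exactly when it leaves the closed disk of radius $2$. A minimal Python sketch follows (the iteration cap is an arbitrary choice of ours, so points extremely close to the boundary may be misclassified):

```python
def in_M1(c, max_iter=200):
    """Truncated escape-time test for the classical Mandelbrot set:
    the orbit of 0 under P_c(z) = z**2 + c is unbounded iff it ever
    leaves the closed disk of radius 2."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

# M1 meets the real axis exactly in [-2, 1/4]:
assert in_M1(-2 + 0j) and in_M1(0.25 + 0j)
assert not in_M1(-2.01 + 0j) and not in_M1(0.26 + 0j)
```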
\begin{figure}
\caption{The eight principal 3D slices of $\M3$.}
\label{fig:tetrabrot}
\label{fig:arrowheadbrot}
\label{fig:mousebrot}
\label{fig:turtlebrot}
\label{fig:hourglassbrot}
\label{fig:metabrot}
\label{fig:airbrot}
\label{fig:firebrot}
\label{fig:8 slices}
\end{figure} Among these are two principal 3D slices called the Tetrabrot (Fig.\ref{fig:tetrabrot}) and the Airbrot (Fig.\ref{fig:airbrot}), for which geometrical characterizations have been developed in \cite{Rochon1} and \cite{GarantRochon}, respectively. In the latter case, the characterization indirectly proved that the Airbrot is a regular octahedron, thus confirming the presence of a first Platonic solid within tricomplex dynamics.
\subsection{Characterizations of the principal 3D slices} Essentially, we want to develop similar characterizations for the remaining principal slices in order to confirm or explain some of their properties. One notable example is the fact that the Firebrot (Fig.\ref{fig:firebrot}) is conjectured to be a regular tetrahedron \cite{GarantPelletier}. This assertion will be the subject of \autoref{thm:tetraedre}. Unless otherwise stated, the following results are excerpted from those established in \cite{Vallieres}. \begin{proposition}[See \cite{Rochon1}]\label{Tetrabrot} The Tetrabrot can be characterized as follows: \[ \mathcal{T}(1,\im1,\im2) = \underset{y\in [-m,m]}{\bigcup}\{[(\M1-y\im1)\cap(\M1+y\im1)] + y\im2\} \] where \[ m:=\sup\{q\in\ensuremath{\mathbb{R}}:\exists p\in\ensuremath{\mathbb{R}}\text{ such that }p+q\im1\in\M1\}. \] \end{proposition}
\begin{proposition}\label{Arrowhead} The principal 3D slice $\ensuremath{\mathcal{T}}(1,\im1,\jh1)$, named the Arrowheadbrot (Fig.\ref{fig:arrowheadbrot}), can be expressed as follows: \[ \mathcal{T}(1,\im1,\jh1) = \underset{y\in \left[-\frac{9}{8},\frac{9}{8}\right]}{\bigcup}\{[(\M1-y)\cap(\M1+y)] + y\jh1\}. \] \end{proposition}
\begin{proof} Let $c\in\ensuremath{\mathcal{T}}(1,\im1,\jh1)$. Then, $c=c_{1} + c_{2}\im1 + c_{4}\jh1$ and by using identity \eqref{eq:idemp BC}, we can write \begin{align*} c &= c_{1} + c_{2}\im1 + (c_{4}\im1)\im2 \\ &= (c_{1} + c_{2}\im1 + c_{4})\ensuremath{\gamma_{1}} + (c_{1} + c_{2}\im1 - c_{4})\ensuremath{\overline{\gamma_{1}}} \\ &= (d + c_{4})\ensuremath{\gamma_{1}} + (d - c_{4})\ensuremath{\overline{\gamma_{1}}}, \end{align*} where $d=c_{1} + c_{2}\im1 \in\ensuremath{\mathbb{C}}(\im1)$. By remarking that $\{P_{c}^{(n)}(0)\}_{n\in\mathbb{N}}$ is bounded if and only if $\{P_{d+c_4}^{n}(0)\}_{n\in\ensuremath{\mathbb{N}}}$ and $\{P_{d-c_4}^{n}(0)\}_{n\in\ensuremath{\mathbb{N}}}$ are bounded, we deduce that $c\in\ensuremath{\mathcal{T}}(1,\im1,\jh1)$ if and only if $d\pm c_4\in\M1$. Then, since \[ \M{1}-z=\{c\in\ensuremath{\mathbb{C}}(\im1):\{P_{c+z}^{n}(0)\}_{n\in\ensuremath{\mathbb{N}}}\text{ is bounded}\}\,\forall z\in\ensuremath{\mathbb{C}}(\im1), \] we have $d\pm c_4\in\M1$ if and only if $d\in(\M{1}-c_{4})\cap(\M{1}+c_{4})$. Therefore, \begin{align*} \ensuremath{\mathcal{T}}(1,\im1,\jh1) &= \{c\in\mathbb{T}(1,\im1,\jh1):\{P_{c}^{(n)}(0)\}_{n\in\mathbb{N}}\text{ is bounded}\} \\ &= \{c_{1} + c_{2}\im1 + c_{4}\jh1:c_{1} + c_{2}\im1\in(\M{1}-c_{4})\cap(\M{1}+c_{4})\}\\ &=\underset{y\in\ensuremath{\mathbb{R}}}{\bigcup}\{[(\M{1}-y)\cap(\M{1}+y)] + y\jh1\}. \end{align*} It is possible to be more precise with the last expression by using the well known property $\M{1}\cap\ensuremath{\mathbb{R}} = \left[-2,\frac{1}{4}\right]$ (see \cite{Carleson, RochonRansfordParise}). Indeed, we infer from it that $(\M1-y)\cap(\M1+y)=\emptyset$ whenever $y\in{\left[-\frac{9}{8},\frac{9}{8}\right]}^{c}$. 
Thus, \[ \ensuremath{\mathcal{T}}(1,\im1,\jh1) = \underset{y\in \left[-\frac{9}{8},\frac{9}{8}\right]}{\bigcup}\{[(\M1-y)\cap(\M1+y)] + y\jh1\}, \] and since $\M{1}\cap\ensuremath{\mathbb{R}}$ is an interval, $(\M1-y)\cap(\M1+y)\neq\emptyset,\forall y\in\left[-\frac{9}{8},\frac{9}{8}\right]$. \end{proof} The above results show that both the Tetrabrot and the Arrowheadbrot can be described as the union of intersections of two translated classical Mandelbrot sets (along the imaginary axis for the former, and along the real axis for the latter). In other words, these tridimensional fractals can be obtained by using multiple copies of a two-dimensional related fractal. It is rather interesting to note that this is the case for each of the eight principal 3D slices of $\M3$, although the required 2D subsets vary. In fact, for a given principal 3D slice, the properties of a related subspace called its “iterates space” determine the type and number of 2D subsets involved in its geometrical characterizations.\footnote{The \textit{iterates space} of a principal slice was first introduced in \cite{Brouillette,BrouilletteRochon} for other purposes.} A detailed analysis on the subject is available in \cite{Vallieres}.
\begin{proposition}\label{Metabrot} The principal slice $\ensuremath{\mathcal{T}}(\im1,\im2,\im3)$, called the Metabrot (Fig.\ref{fig:metabrot}), can be characterized as follows: \[ \ensuremath{\mathcal{T}}(\im1,\im2,\im3) = \underset{y\in[-m,m]}{\bigcup}\{[(A-y\im2)\cap(A+y\im2)] + y\im3\} \] where \[ A:=\{a\in\text{span}_{\R}\{\im1,\im2\}:\{P_{a}^{(n)}(0)\}_{n\in\ensuremath{\mathbb{N}}}\text{ is bounded }\forall n\in\ensuremath{\mathbb{N}}\} \] and \[ m:=\sup\{q\in\ensuremath{\mathbb{R}}:\exists p\in\ensuremath{\mathbb{R}}\text{ such that }p+q\im1\in\M1\}. \] \end{proposition} \begin{proof} Let $c\in\ensuremath{\mathcal{T}}(\im1,\im2,\im3)\subset\mathbb{T}(\im1,\im2,\im3)$. Using identity \eqref{eq:idemp tr} yields \begin{align*} c &= (c_{2}\im1 + c_{3}\im2) + (c_{5})\im3 \\ &= ((c_{2}\im1 + c_{3}\im2) - c_{5}\im2)\ensuremath{\gamma_{3}} + ((c_{2}\im1 + c_{3}\im2) + c_{5}\im2)\ensuremath{\overline{\gamma_{3}}}\\ &= (d - c_{5}\im2)\ensuremath{\gamma_{3}} + (d + c_{5}\im2)\ensuremath{\overline{\gamma_{3}}}, \end{align*} where $d=c_{2}\im1 + c_{3}\im2$. It is not too difficult to see that the sequence $\{P_{c}^{(n)}(0)\}_{n\in\mathbb{N}}$ is bounded if and only if the sequences $\{P_{d-c_{5}\im2}^{(n)}(0)\}_{n\in\ensuremath{\mathbb{N}}}$ and $\{P_{d+c_{5}\im2}^{(n)}(0)\}_{n\in\ensuremath{\mathbb{N}}}$ are bounded. Then, consider the set \begin{align}\label{eq: ensemble A} A:\!&=\{a\in\text{span}_{\R}\{\im1,\im2\}:\{P_{a}^{(n)}(0)\}_{n\in\ensuremath{\mathbb{N}}}\text{ is bounded }\forall n\in\ensuremath{\mathbb{N}}\} \notag\\ &= \M{2}\cap\mathbb{T}(\im1,\im2). \end{align} It follows that $\{P_{d-c_{5}\im2}^{(n)}(0)\}_{n\in\ensuremath{\mathbb{N}}}$ and $\{P_{d+c_{5}\im2}^{(n)}(0)\}_{n\in\ensuremath{\mathbb{N}}}$ are bounded if and only if $d\mp c_{5}\im2\in A$, and that is the case if and only if $d\in(A-c_{5}\im2)\cap(A+c_{5}\im2)$. 
Consequently, \begin{align*} \ensuremath{\mathcal{T}}(\im1,\im2,\im3) &= \{c_{2}\im1 + c_{3}\im2 + c_5\im3:c_{2}\im1 + c_{3}\im2\in(A-c_{5}\im2)\cap(A+c_{5}\im2)\}\\ &= \underset{y\in\ensuremath{\mathbb{R}}}{\bigcup}\{[(A-y\im2)\cap(A+y\im2)] + y\im3\}. \end{align*}
In order to be more precise with the last expression, we need to remark that $A\subset\{c_{2}\im1 + c_{3}\im2:|c_3|\leq m\}$. Indeed, let $a=a_2\im1+a_3\im2\in A$. Then, identity \eqref{eq:idemp BC} implies we can write $a=(a_2-a_3)\im1\ensuremath{\gamma_{1}} + (a_2+a_3)\im1\ensuremath{\overline{\gamma_{1}}}$. In addition, the Mandelbrot set $\M1$ is symmetric along the real axis, meaning that $\forall x\in\ensuremath{\mathbb{R}}, x\im1\in\M{1} \Leftrightarrow -x\im1\in\M{1}$. It follows that \begin{align}\label{eq: carac A} a\in A &\Leftrightarrow \{P_{a}^{n}(0)\}_{n\in\ensuremath{\mathbb{N}}}\text{ is bounded} \notag\\ &\Leftrightarrow (a_2 \pm a_3)\im1 \in \M{1} \notag\\ &\Leftrightarrow (\pm a_2 \pm a_3)\im1 \in \M{1}, \end{align} hence the aforementioned property. We conclude that $(A+y\im2)\cap(A-y\im2)=\emptyset$ whenever $y\in[-m,m]^c$, whence \[ \ensuremath{\mathcal{T}}(\im1,\im2,\im3) = \underset{y\in[-m,m]}{\bigcup}\{[(A-y\im2)\cap(A+y\im2)] + y\im3\}. \qedhere \] \end{proof} \begin{figure}
\caption{The set $A\subset\text{span}_{\R}\{\im1,\im2\}$.}
\label{fig:Ensemble A}
\end{figure}
\autoref{fig:Ensemble A} illustrates that the set $A$ looks like a filled-in square with a fractal perimeter. Although this might seem intriguing, equivalence \eqref{eq: carac A} provides a simple explanation for this phenomenon. Indeed, since $a_2,a_3\in\ensuremath{\mathbb{R}}$, the visual appearance of the set $A$ is entirely determined by the peculiar dynamics of the classical Mandelbrot set $\M1$ along the imaginary axis.
Furthermore, as equation \eqref{eq: ensemble A} points out, we can view the set $A$ as a projection of the bicomplex Mandelbrot set $\M2$ onto the vector subspace $\mathbb{T}(\im1,\im2)$, just like the classical Mandelbrot set can be viewed as a projection of $\M2$ onto the complex plane $\mathbb{T}(1,\im1)$. It is then natural to ask whether the projection of $\M2$ onto the subspace $\mathbb{T}(1,\jh1)$, which is the vector space of hyperbolic numbers, plays a significant role within tricomplex dynamics.
Let us start by recalling the relevant definition. The set of hyperbolic numbers (see \cite{Sobczyk, vajiac2}) is defined as \[ \mathbb{D}:=\{x+y\jh1:x,y\in\ensuremath{\mathbb{R}}\text{ and }\jh1^2=1\}. \] From this, it is evident that $\mathbb{D} \subset \ensuremath{\mathbb{B}\mathbb{C}}$. In 1990, Senn \cite{Senn} was the first to apply the Mandelbrot algorithm to hyperbolic numbers, generating another 2D set which turned out to be a simple filled-in square. This property was later proved by Metzler \cite{Metzler}. The hyperbolic Mandelbrot set (or Hyperbrot) is defined as follows: \[ \ensuremath{\mathcal{H}} := \{c\in\mathbb{D} : \{P_{c}^{(n)}(0)\}_{n\in\mathbb{N}}\text{ is bounded}\} \] and Metzler obtained the following characterization: \[
\ensuremath{\mathcal{H}} = \left\{(a,b)\in{\ensuremath{\mathbb{R}}}^{2}:\left|a+\frac{7}{8}\right|+|b|\leq\frac{9}{8}\right\}. \] In terms of tricomplex dynamics, the Hyperbrot is the key to generate Platonic solids in specific three-dimensional subspaces of \ensuremath{\mathbb{TC}}. Indeed, it can be shown that two principal 3D slices of $\M3$ can be characterized as such, and that is the content of the next two propositions. \begin{proposition}[See \cite{GarantRochon}]\label{Airbrot} The Airbrot admits the following representation: \[ \ensuremath{\mathcal{T}}(1,\jh1,\jh2)= \underset{y\in [-\frac{9}{8},\frac{9}{8}]}{\bigcup}\{[(\ensuremath{\mathcal{H}}-y\jh1)\cap(\ensuremath{\mathcal{H}}+y\jh1)] + y\jh2\}. \] \end{proposition}
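Metzler's formula can itself be double-checked against the idempotent decomposition of a hyperbolic number: writing $c=a+b\jh1$, the orbit of $0$ under $P_c$ splits into the real orbits of $a+b$ and $a-b$, and each of these is bounded exactly when it lies in $\M1\cap\ensuremath{\mathbb{R}}=\left[-2,\frac{1}{4}\right]$. A Python sketch of this comparison (the sampling window is ours):

```python
import random

def hyperbrot_via_components(a, b):
    # c = a + b*j1 in D: the orbit of 0 under P_c splits into the real
    # orbits of a + b and a - b, each bounded iff it lies in [-2, 1/4].
    return (-2 <= a + b <= 0.25) and (-2 <= a - b <= 0.25)

def hyperbrot_metzler(a, b):
    # Metzler's closed form: a filled square (diamond) in the (a, b)-plane.
    return abs(a + 7 / 8) + abs(b) <= 9 / 8

random.seed(3)
for _ in range(10000):
    a, b = random.uniform(-3, 1), random.uniform(-2, 2)
    assert hyperbrot_via_components(a, b) == hyperbrot_metzler(a, b)
print("Metzler characterization confirmed on random samples")
```

The agreement is in fact exact: the two interval conditions $-2\leq a\pm b\leq\frac{1}{4}$ describe precisely the diamond $\left|a+\frac{7}{8}\right|+|b|\leq\frac{9}{8}$.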
As stated in \cite{GarantRochon}, proposition \ref{Airbrot} and Metzler's characterization establish that the Airbrot is a regular octahedron of edge length equal to $\frac{9}{8}\sqrt{2}$. We wish to prove a similar result for the Firebrot, whose geometrical shape also seems regular. \begin{proposition}\label{Firebrot characterization} The principal slice $\ensuremath{\mathcal{T}}(\jh1,\jh2,\jh3)$, named the Firebrot (Fig.\ref{fig:firebrot}), can be characterized as follows: \[ \ensuremath{\mathcal{T}}(\jh1,\jh2,\jh3) = \underset{y \in [-\frac{1}{4},\frac{1}{4}]}{\bigcup}\{[(\ensuremath{\mathcal{H}'}+y\jh1)\cap(-\ensuremath{\mathcal{H}'}-y\jh1)] + y\jh2\} \] where $\ensuremath{\mathcal{H}'} := \{c_{7}\jh3+c_{4}\jh1:c_{7}+c_{4}\jh1 \in \ensuremath{\mathcal{H}}\}$. \end{proposition}
\begin{proof} For any $c\in\ensuremath{\mathcal{T}}(\jh1,\jh2,\jh3)\subset\mathbb{T}(\jh1,\jh2,\jh3)$, we have \begin{align*} c &= c_{4}\jh1 + c_{6}\jh2 + c_{7}\jh3\\ &= (d-c_{6}\jh1)\ensuremath{\gamma_{3}} + (-\overline{d}+c_{6}\jh1)\ensuremath{\overline{\gamma_{3}}}, \end{align*} where $d=c_{7} + c_{4}\jh1\in\mathbb{D}$ and $-\overline{d}=-c_{7} + c_{4}\jh1$.\footnote{The hyperbolic conjugate of any $z=x+y\jh1\in\mathbb{D}$ is defined as $\overline{z}=x-y\jh1$.} Again, since $\{P_{c}^{(n)}(0)\}_{n\in\mathbb{N}}$ is bounded if and only if $\{P_{d-c_{6}\jh1}^{(n)}(0)\}_{n\in\ensuremath{\mathbb{N}}}$ and $\{P_{-\overline{d}+c_{6}\jh1}^{(n)}(0)\}_{n\in\ensuremath{\mathbb{N}}}$ are bounded, we have $c\in\ensuremath{\mathcal{T}}(\jh1,\jh2,\jh3)$ if and only if $d\in(\ensuremath{\mathcal{H}}+c_{6}\jh1)\cap(-\ensuremath{\mathcal{H}}-c_{6}\jh1)$. Therefore, \[ \ensuremath{\mathcal{T}}(\jh1,\jh2,\jh3) = \{c_{4}\jh1 + c_{6}\jh2 + c_{7}\jh3:c_{7}+c_{4}\jh1\in(\ensuremath{\mathcal{H}}+c_{6}\jh1)\cap(-\ensuremath{\mathcal{H}}-c_{6}\jh1)\}. \] Notice that $\ensuremath{\mathcal{H}}\not\subset\mathbb{T}(\jh1,\jh2,\jh3)\!\supset\!\ensuremath{\mathcal{T}}(\jh1,\jh2,\jh3)$. In order to establish a characterization analogous to that of the previous 3D slices, we must make sure that the 2D sets involved are in the right subspace. This can be achieved by setting \[ \ensuremath{\mathcal{H}'} := \{c_{7}\jh3+c_{4}\jh1:c_{7}+c_{4}\jh1 \in \ensuremath{\mathcal{H}}\}, \] where $\ensuremath{\mathcal{H}'}$ is a duplicate of the Hyperbrot in the subspace $\mathbb{T}(\jh1,\jh3)$. This way, we may write \begin{align*} \ensuremath{\mathcal{T}}(\jh1,\jh2,\jh3) &= \{c_{4}\jh1 + c_{6}\jh2 + c_{7}\jh3:c_{7}\jh3+c_{4}\jh1\in(\ensuremath{\mathcal{H}'}+c_{6}\jh1)\cap(-\ensuremath{\mathcal{H}'}-c_{6}\jh1)\}\\ &= \underset{y\in\ensuremath{\mathbb{R}}}{\bigcup}\{[(\ensuremath{\mathcal{H}'}+y\jh1)\cap(-\ensuremath{\mathcal{H}'}-y\jh1)] + y\jh2\}. 
\end{align*} To complete the proof, remark that $(\ensuremath{\mathcal{H}'}+y\jh1)\cap(-\ensuremath{\mathcal{H}'}-y\jh1)=\emptyset,\,\forall y\in{\left[-\frac{1}{4},\frac{1}{4}\right]}^{c}$. Therefore, \[ \ensuremath{\mathcal{T}}(\jh1,\jh2,\jh3) = \underset{y \in \left[-\frac{1}{4},\frac{1}{4}\right]}{\bigcup}\{[(\ensuremath{\mathcal{H}'}+y\jh1)\cap(-\ensuremath{\mathcal{H}'}-y\jh1)] + y\jh2\}. \] Finally, one can verify that for every $y\in\left[-\frac{1}{4},\frac{1}{4}\right]$, the set $(\ensuremath{\mathcal{H}'}+y\jh1)\cap(-\ensuremath{\mathcal{H}'}-y\jh1)$ is a non-empty rectangle, hence the result. \end{proof}
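The emptiness threshold $|y|=\frac{1}{4}$ invoked in the proof above can be corroborated with Metzler's characterization: a point $c_7\jh3+c_4\jh1$ lies in $(\ensuremath{\mathcal{H}'}+y\jh1)\cap(-\ensuremath{\mathcal{H}'}-y\jh1)$ exactly when $(c_7,c_4-y)\in\ensuremath{\mathcal{H}}$ and $(-c_7,-c_4-y)\in\ensuremath{\mathcal{H}}$. A brute-force Python sketch over a grid (the window and resolution are arbitrary choices of ours):

```python
def in_H(a, b):
    # Metzler's characterization of the Hyperbrot H (point a + b*j1).
    return abs(a + 7 / 8) + abs(b) <= 9 / 8

def slice_nonempty(y, n=200):
    # Is (H' + y*j1) ∩ (-H' - y*j1) non-empty?  A point c7*j3 + c4*j1 lies
    # in the intersection iff (c7, c4 - y) ∈ H and (-c7, -c4 - y) ∈ H.
    for i in range(n + 1):
        c7 = -1.5 + 3 * i / n
        for k in range(n + 1):
            c4 = -1.5 + 3 * k / n
            if in_H(c7, c4 - y) and in_H(-c7, -c4 - y):
                return True
    return False

assert slice_nonempty(0.0) and slice_nonempty(0.25) and slice_nonempty(-0.25)
assert not slice_nonempty(0.26) and not slice_nonempty(-0.3)
```

The grid scan only supports the claim; the emptiness for $|y|>\frac{1}{4}$ follows rigorously from summing the two diamond inequalities, which forces $|c_4-y|+|c_4+y|\leq\frac{1}{2}$.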
Propositions \ref{Airbrot} and \ref{Firebrot characterization} are interesting and similar in that they state how both the Airbrot and the Firebrot are Platonic solids that can be generated by the union of intersections of translated squares (Hyperbrots). The former can be generated using vertically translated Hyperbrots (that is, a translation along the hyperbolic axis), which ensures each intersection is square shaped. For the latter, a reflection along the hyperbolic axis is applied to one of the Hyperbrots prior to the vertical translation, meaning that every non-empty intersection will be rectangular. This particularity is what accounts for the Firebrot's tetrahedral shape.
\begin{theorem}\label{thm:tetraedre} $\ensuremath{\mathcal{T}}(\jh1,\jh2,\jh3)$ is a regular tetrahedron with edge length of $\frac{\sqrt{2}}{2}$. \end{theorem}
Although proposition \ref{Firebrot characterization} provides relevant insight into the Firebrot's geometrical shape, the assertion above is proven using a more direct approach.
\begin{proof} Suppose that $c\in\ensuremath{\mathcal{T}}(\jh1,\jh2,\jh3)$ with $c=c_{4}\jh1+c_{6}\jh2+c_{7}\jh3$. Using identity \eqref{eq:idemp 4} yields \begingroup \addtolength{\jot}{0.5em} \begin{align*} c &= (0c_1 + 0c_2\im1) + (0c_3 + c_4\im1)\im2 + [(0c_5+c_6\im1) + (c_7 + 0c_8\im1)\im2]\im3 \\
&= \begin{multlined}[t] (c_{7} - (c_{4}\im1-c_{6}\im1)\im1)\ensuremath{\gamma_{1}\gamma_{3}} + (c_{7} + (c_{4}\im1-c_{6}\im1)\im1)\ensuremath{\overline{\gamma_{1}}\gamma_{3}} \\ + (-c_{7} - (c_{4}\im1+c_{6}\im1)\im1)\ensuremath{\gamma_{1}\overline{\gamma_{3}}} +(-c_{7} + (c_{4}\im1+c_{6}\im1)\im1)\ensuremath{\overline{\gamma_{1}}\overline{\gamma_{3}}} \end{multlined} \\
&= \begin{multlined}[t] (c_{4}-c_{6}+c_{7})\ensuremath{\gamma_{1}\gamma_{3}} + (-c_{4} + c_{6}+c_{7})\ensuremath{\overline{\gamma_{1}}\gamma_{3}} \\ + (c_{4} + c_{6}-c_{7})\ensuremath{\gamma_{1}\overline{\gamma_{3}}} +(-c_{4} - c_{6}-c_{7})\ensuremath{\overline{\gamma_{1}}\overline{\gamma_{3}}} \end{multlined} \\ &= (a_1)\ensuremath{\gamma_{1}\gamma_{3}} + (a_2)\ensuremath{\overline{\gamma_{1}}\gamma_{3}} + (a_3)\ensuremath{\gamma_{1}\overline{\gamma_{3}}} +(a_4)\ensuremath{\overline{\gamma_{1}}\overline{\gamma_{3}}}, \end{align*} \endgroup where we denote every component of the last equality by $a_i$ for convenience. It follows that $\{P_{c}^{(n)}(0)\}_{n\in\mathbb{N}}$ is bounded if and only if $\{P_{a_{i}}^{(n)}(0)\}_{n\in\mathbb{N}}$ is bounded, $i=1,\dotsc,4$. Since $a_{i}\in\ensuremath{\mathbb{R}}$ $\forall i\in\{1,\dotsc,4\}$ and $\M{1}\cap\ensuremath{\mathbb{R}}=\left[-2,\frac{1}{4}\right]$, we deduce that \begin{align*} a_{1},a_{2},a_{3},a_{4}\in\M{1} \,
&\Leftrightarrow\, \left\{
\begin{array}{l}
c_{4} - c_{6} + c_{7} \leq \frac{1}{4} \\ [1pt]
-c_{4}+ c_{6} - c_{7} \leq 2 \\[1pt]
-c_{4} + c_{6}+ c_{7} \leq \frac{1}{4} \\[1pt]
c_{4} - c_{6} - c_{7} \leq 2\\[1pt]
c_{4} + c_{6}-c_{7} \leq \frac{1}{4} \\[1pt]
-c_{4} - c_{6} + c_{7} \leq 2 \\[1pt]
-c_{4} - c_{6}-c_{7} \leq \frac{1}{4} \\[1pt]
c_{4} + c_{6} + c_{7} \leq 2.
\end{array}
\right. \end{align*}
By using Fourier-Motzkin elimination (see \cite{Vallieres}, annex A), it is possible to show that $c\in\ensuremath{\mathcal{T}}(\jh1,\jh2,\jh3)\Rightarrow|c_6|\leq\frac{1}{4}$. In turn, this means that the second, fourth, sixth and eighth inequalities of the last system are redundant constraints: they can be removed without changing the set of solutions. Doing so reduces the system to four inequalities: \begin{equation*} \begin{array}{l} c_{4}-c_{6}+c_{7} \leq \frac{1}{4} \\ -c_{4} + c_{6}+c_{7} \leq \frac{1}{4} \\ c_{4} + c_{6}-c_{7} \leq \frac{1}{4} \\ -c_{4} - c_{6}-c_{7} \leq \frac{1}{4}. \end{array} \end{equation*} The rest of the proof is carried out by determining the $(\jh1,\jh2,\jh3)$ coordinates of the four extreme points (vertices) of this system, and then calculating the pairwise distances between them. \end{proof}
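As a complement to the proof above, the following sketch (ours, not part of the original argument) solves each triple of the four reduced inequalities at equality, keeps the feasible intersection points, and checks that the resulting four vertices are pairwise at distance $\frac{\sqrt{2}}{2}$, in agreement with Theorem \ref{thm:tetraedre}.

```python
# Recover the vertices of the reduced inequality system and check that
# they form a regular tetrahedron with edge length sqrt(2)/2.
import itertools
import numpy as np

# Coefficients of (c4, c6, c7) in the four inequalities  a . c <= 1/4.
A = np.array([[ 1., -1.,  1.],
              [-1.,  1.,  1.],
              [ 1.,  1., -1.],
              [-1., -1., -1.]])
b = np.full(4, 0.25)

vertices = []
for rows in itertools.combinations(range(4), 3):
    v = np.linalg.solve(A[list(rows)], b[list(rows)])  # three planes at equality
    if np.all(A @ v <= b + 1e-9):                      # keep feasible points only
        vertices.append(v)

dists = [np.linalg.norm(u - w) for u, w in itertools.combinations(vertices, 2)]
# Four vertices, six equal edges of length sqrt(2)/2.
```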
We now turn our attention to the remaining principal slices, which are called the Mousebrot (Fig.\ref{fig:mousebrot}), the Turtlebrot (Fig.\ref{fig:turtlebrot}) and the Hourglassbrot (Fig.\ref{fig:hourglassbrot}). Although the three of them admit geometrical characterizations similar to those presented at the beginning of the current section, the emphasis is placed on their interrelationships. \begin{proposition}\label{prop:turtlebrot} Consider the principal 3D slice $\ensuremath{\mathcal{T}}(\im1,\im2,\jh2)$. We have \begin{gather*} \ensuremath{\mathcal{T}}(\im1,\im2,\jh2) = \T^{\ast}(\im1,\im2,-\jh1)\mcap\T^{\ast}(\im1,\im2,\jh1) \shortintertext{where} \T^{\ast}(\im1,\im2,\jh1) := \{c_2\im1+c_3\im2+c_6\jh2:c_2\im1+c_3\im2+c_6\jh1 \in \ensuremath{\mathcal{T}}(\im1,\im2,\jh1)\}. \end{gather*} \end{proposition} \begin{proof} Suppose that $c\in\ensuremath{\mathcal{T}}(\im1,\im2,\jh2)\subset\mathbb{T}(\im1,\im2,\jh2)$. Applying identity \eqref{eq:idemp tr} to $c$ yields \begin{align*} c &= c_2\im1+c_3\im2+(c_6\im1)\im3 \\ &= (c_2\im1+c_3\im2-c_6\jh1)\ensuremath{\gamma_{3}} + (c_2\im1+c_3\im2+c_6\jh1)\ensuremath{\overline{\gamma_{3}}} \\ &= (-d^{\ddagger_4})\ensuremath{\gamma_{3}} + (d)\ensuremath{\overline{\gamma_{3}}}, \end{align*} where $d=c_2\im1+c_3\im2+c_6\jh1 \Rightarrow -d^{\ddagger_4}=c_2\im1+c_3\im2-c_6\jh1$.\footnote{For any $d\in\ensuremath{\mathbb{TC}}$, the symbol $d^{\ddagger_4}$ denotes its fourth tricomplex conjugate \cite{GarantRochon}.} Since $\{P_{c}^{(n)}(0)\}_{n\in\mathbb{N}}$ is bounded if and only if $\{P_{-d^{\ddagger_4}}^{n}(0)\}_{n\in\ensuremath{\mathbb{N}}}$ and $\{P_{d}^{n}(0)\}_{n\in\ensuremath{\mathbb{N}}}$ are bounded, we have \begin{align*} c\in\ensuremath{\mathcal{T}}(\im1,\im2,\jh2) &\Leftrightarrow -d^{\ddagger_4}\in \ensuremath{\mathcal{T}}(\im1,\im2,\jh1)\text{ and }d\in\ensuremath{\mathcal{T}}(\im1,\im2,\jh1) \\ &\Leftrightarrow d\in\ensuremath{\mathcal{T}}(\im1,\im2,-\jh1)\text{ and }d\in \ensuremath{\mathcal{T}}(\im1,\im2,\jh1) 
\\ &\Leftrightarrow d\in\ensuremath{\mathcal{T}}(\im1,\im2,-\jh1)\mcap\ensuremath{\mathcal{T}}(\im1,\im2,\jh1). \end{align*} Observe that $\ensuremath{\mathcal{T}}(\im1,\im2,\pm\jh1)\not\subset\mathbb{T}(\im1,\im2,\jh2)$. In an analogous manner to that of proposition \ref{Firebrot characterization}, we define \[ \T^{\ast}(\im1,\im2,\jh1) := \{c_2\im1+c_3\im2+c_6\jh2:c_2\im1+c_3\im2+c_6\jh1 \in \ensuremath{\mathcal{T}}(\im1,\im2,\jh1)\} \] to make sure the 3D sets involved are in the right subspace. Thus, \begin{align*} \ensuremath{\mathcal{T}}(\im1,\im2,\jh2) &= \{c\in\mathbb{T}(\im1,\im2,\jh2) : \{P_{c}^{(n)}(0)\}_{n\in\mathbb{N}} \text{ is bounded}\}\\ &= \begin{multlined}[t] \{c=c_2\im1+c_3\im2+c_6\jh2:\\ c_2\im1+c_3\im2+c_6\jh1\in\ensuremath{\mathcal{T}}(\im1,\im2,-\jh1)\mcap\ensuremath{\mathcal{T}}(\im1,\im2,\jh1) \} \end{multlined}\\ &=\{c\in\mathbb{T}(\im1,\im2,\jh2):c\in\T^{\ast}(\im1,\im2,-\jh1)\mcap\T^{\ast}(\im1,\im2,\jh1) \}. \qedhere \end{align*} \end{proof} The above result reveals that the Turtlebrot $\ensuremath{\mathcal{T}}(\im1,\im2,\jh2)$ can be expressed as the intersection of two other 3D slices. Moreover, in the right subspace, one of these slices is none other than a duplicate of the Mousebrot (Fig.\ref{fig:mousebrot}), while the second is obtained by applying a reflection along the plane $z\jh2=0$ to the first.\footnote{The reference \cite{Vallieres} contains another characterization of the same principal slice involving the Tetrabrot $\ensuremath{\mathcal{T}}(1,\im1,\im2)$ with all the details.} Interestingly, only one other principal 3D slice possesses the same property: the Hourglassbrot. \begin{proposition}\label{prop:hourglassbrot} Consider the principal 3D slice $\ensuremath{\mathcal{T}}(\im1,\jh1,\jh2)$. 
We have \begin{gather*} \ensuremath{\mathcal{T}}(\im1,\jh1,\jh2) = \T^{\ast}(1,\im1,\jh1)\mcap\T^{\ast}(-1,\im1,\jh1) \shortintertext{where} \T^{\ast}(1,\im1,\jh1) := \{c_2\im1+c_4\jh1+c_6\jh2:c_2\im1+c_4\jh1+c_6 \in \ensuremath{\mathcal{T}}(1,\im1,\jh1)\}. \end{gather*} \end{proposition} \begin{proof} We begin by setting $c=c_2\im1+c_4\jh1+c_6\jh2\in\ensuremath{\mathcal{T}}(\im1,\jh1,\jh2)$ and then using identity \eqref{eq:idemp de}. The rest of the proof is similar to that of Proposition \ref{prop:turtlebrot}. \end{proof} Thus, in the right subspace, the Hourglassbrot $\ensuremath{\mathcal{T}}(\im1,\jh1,\jh2)$ can be expressed as the intersection of two 3D slices: a copy of the Arrowheadbrot (Fig.\ref{fig:arrowheadbrot}), and the slice obtained by applying a reflection along the plane $z\jh2=0$ to it.
\section{The cube and the stellated octahedron}\label{sec:cube} The main reason why the principal slices $\ensuremath{\mathcal{T}}(1,\jh1,\jh2)$ and $\ensuremath{\mathcal{T}}(\jh1,\jh2,\jh3)$ are Platonic solids is that in both cases, the iterates calculated when applying the Mandelbrot algorithm to numbers of the form $c_1+c_4\jh1+c_6\jh2$ and $c_4\jh1+c_6\jh2+c_7\jh3$ stay in a particular four-dimensional subspace of $\ensuremath{\mathbb{TC}}$ called the \textit{biduplex numbers} \cite{Vallieres}. Indeed, it is easily verified that the set of biduplex numbers \[ \mathbb{D}(2) := \{c_1+c_4\jh1+c_6\jh2+c_7\jh3:c_1,c_4,c_6,c_7\in\ensuremath{\mathbb{R}}\text{ and }\jh1^2=\jh2^2=\jh3^2=1\} \] together with tricomplex addition and multiplication forms a proper subring of $\ensuremath{\mathbb{TC}}$, and is thus closed under these operations. Moreover, in the right idempotent basis \eqref{eq:idemp 4}, every biduplex number can be expressed through four real components. It follows that the boundedness of the sequence $\{P_{c}^{(n)}(0)\}_{n\in\mathbb{N}}$ specific to any $c\in\ensuremath{\mathcal{T}}(1,\jh1,\jh2)$ or $c\in\ensuremath{\mathcal{T}}(\jh1,\jh2,\jh3)$ is entirely dependent on the dynamics of the classical Mandelbrot set $\M1$ along the real axis, the intersection of which corresponds to a single interval, hence the regularity.
This suggests another approach to generate regular polyhedra within tricomplex dynamics: to define and visualize 3D slices in a basis that is directly linked to the simple dynamics of the real line. Since identity \eqref{eq:idemp 4} indicates that the set $\{\ensuremath{\gamma_{1}\gamma_{3}},\ensuremath{\overline{\gamma_{1}}\gamma_{3}},\ensuremath{\gamma_{1}\overline{\gamma_{3}}},\ensuremath{\overline{\gamma_{1}}\overline{\gamma_{3}}}\}$ is a basis of the vector space of $\ensuremath{\mathbb{TC}}$ with complex coefficients, the set $\{\ensuremath{\gamma_{1}\gamma_{3}},\ensuremath{\overline{\gamma_{1}}\gamma_{3}},\ensuremath{\gamma_{1}\overline{\gamma_{3}}},\ensuremath{\overline{\gamma_{1}}\overline{\gamma_{3}}},\im1\ensuremath{\gamma_{1}\gamma_{3}},\im1\ensuremath{\overline{\gamma_{1}}\gamma_{3}},\im1\ensuremath{\gamma_{1}\overline{\gamma_{3}}},\im1\ensuremath{\overline{\gamma_{1}}\overline{\gamma_{3}}}\}$ is a basis of the same space, but with real coefficients. This brings us to the following definitions. \begin{definition} Let $\alpha,\beta,\delta$ be three distinct elements taken in $\{\ensuremath{\gamma_{1}\gamma_{3}},\ensuremath{\overline{\gamma_{1}}\gamma_{3}},\ensuremath{\gamma_{1}\overline{\gamma_{3}}},\allowbreak\ensuremath{\overline{\gamma_{1}}\overline{\gamma_{3}}},\im1\ensuremath{\gamma_{1}\gamma_{3}},\im1\ensuremath{\overline{\gamma_{1}}\gamma_{3}},\im1\ensuremath{\gamma_{1}\overline{\gamma_{3}}},\im1\ensuremath{\overline{\gamma_{1}}\overline{\gamma_{3}}}\}$. The space \[ \ensuremath{\mathbb{T}}(\alpha,\beta,\delta) := \text{span}_{\R}\{\alpha,\beta,\delta\} \] is the vector subspace of $\ensuremath{\mathbb{TC}}$ consisting of all real finite linear combinations of these three distinct units. 
\end{definition} \begin{definition} Let $\alpha,\beta,\delta$ be three distinct elements taken in $\{\ensuremath{\gamma_{1}\gamma_{3}},\ensuremath{\overline{\gamma_{1}}\gamma_{3}},\ensuremath{\gamma_{1}\overline{\gamma_{3}}},\allowbreak\ensuremath{\overline{\gamma_{1}}\overline{\gamma_{3}}},\im1\ensuremath{\gamma_{1}\gamma_{3}},\im1\ensuremath{\overline{\gamma_{1}}\gamma_{3}},\im1\ensuremath{\gamma_{1}\overline{\gamma_{3}}},\im1\ensuremath{\overline{\gamma_{1}}\overline{\gamma_{3}}}\}$. We define an idempotent 3D slice of the tricomplex Mandelbrot set $\M3$ as \begin{align*} \ensuremath{\T_{e}}(\alpha,\beta,\delta) &= \{c\in\ensuremath{\mathbb{T}}(\alpha,\beta,\delta) : \{P_{c}^{(n)}(0)\}_{n\in\mathbb{N}}\text{ is bounded}\} \\ &= \ensuremath{\mathbb{T}}(\alpha,\beta,\delta)\cap\M{3}. \end{align*} \end{definition} Although there are still fifty-six possible slices in total, it is obvious that few distinct 3D dynamics actually occur in this basis. For the sake of brevity, we will introduce the only 3D slice that is of interest regarding our objective. Figure \ref{fig:earthbrot} illustrates the idempotent 3D slice $\ensuremath{\mathcal{T}}_{e}(\ensuremath{\gamma_{1}\gamma_{3}},\ensuremath{\overline{\gamma_{1}}\gamma_{3}},\ensuremath{\gamma_{1}\overline{\gamma_{3}}})$, called the \textit{Earthbrot}. \begin{figure}\label{fig:earthbrot}
\end{figure} \begin{proposition} The Earthbrot is a cube with edge length of $\frac{9}{4}$. \end{proposition} \begin{proof} Take $x=x_1\ensuremath{\gamma_{1}\gamma_{3}}+x_2\ensuremath{\overline{\gamma_{1}}\gamma_{3}}+x_3\ensuremath{\gamma_{1}\overline{\gamma_{3}}}\in\ensuremath{\mathbb{T}}(\ensuremath{\gamma_{1}\gamma_{3}},\ensuremath{\overline{\gamma_{1}}\gamma_{3}},\ensuremath{\gamma_{1}\overline{\gamma_{3}}})$ such that $\{P_{x}^{n}(0)\}_{n\in\ensuremath{\mathbb{N}}}$ is bounded. Then, the properties of the idempotent representation \eqref{eq:idemp 4} imply that this is the case if and only if the sequences $\{P_{x_{i}}^{n}(0)\}_{n\in\ensuremath{\mathbb{N}}}$ are bounded, and since $x_i\in\ensuremath{\mathbb{R}}$, we must have $x_i\in\left[-2,\,\frac{1}{4}\right], i=1,2,3$. Thus, in the basis $\{\ensuremath{\gamma_{1}\gamma_{3}},\ensuremath{\overline{\gamma_{1}}\gamma_{3}},\ensuremath{\gamma_{1}\overline{\gamma_{3}}}\}$, we obtain the equality \[ \ensuremath{\mathcal{T}}_{e}(\ensuremath{\gamma_{1}\gamma_{3}},\ensuremath{\overline{\gamma_{1}}\gamma_{3}},\ensuremath{\gamma_{1}\overline{\gamma_{3}}}) = \left[-2,\,\frac{1}{4}\right] \times \left[-2,\,\frac{1}{4}\right] \times \left[-2,\,\frac{1}{4}\right], \] where $\times$ denotes the standard Cartesian product. \end{proof} It seems highly unlikely that the two remaining Platonic solids, the dodecahedron and the icosahedron, can be visualized through tricomplex dynamics. However, it is possible to generate other types of polyhedra, like regular compounds and some specific Archimedean solids. To conclude this section, we wish to give a basic example of the first type. The stellated octahedron can be seen as a regular dual compound made of two regular tetrahedra. As such, it can be regarded as the simplest polyhedral compound. 
In our context, Theorem \ref{thm:tetraedre} states that the principal 3D slice $\ensuremath{\mathcal{T}}(\jh1,\jh2,\jh3)$ is a regular tetrahedron, while several proofs in section \ref{sec:geometrical} hinted that tricomplex conjugation can have a fundamental geometric meaning in specific 3D subspaces. More precisely, its effect can be equivalent to that of a reflection along a certain plane. Combining these ideas gives us a simple way, within tricomplex dynamics, to generate a stellated octahedron as the union of the slice $\ensuremath{\mathcal{T}}(\jh1,\jh2,\jh3)$ and its geometric dual.
By proposition \ref{Firebrot characterization}, it is not too hard to see that the latter can be obtained by applying a reflection along the plane $y\jh2=0$ to the Firebrot. Moreover, since $c=c_{4}\jh1 + c_{6}\jh2 + c_{7}\jh3\,\Leftrightarrow\,-(c^{\ddagger_{5}})=c_{4}\jh1 - c_{6}\jh2 + c_{7}\jh3$, we deduce that the operation $-(c^{\ddagger_{5}})$ corresponds to the desired reflection. Therefore, the 3D slice \[
\ensuremath{\mathcal{T}}(\jh1,-\jh2,\jh3):=\{-(c^{\ddagger_{5}})\,|\,c\in\ensuremath{\mathcal{T}}(\jh1,\jh2,\jh3)\} \] must coincide with the geometric dual of the slice $\ensuremath{\mathcal{T}}(\jh1,\jh2,\jh3)$. Generating these simultaneously results in the polyhedron illustrated in Figure \ref{fig:starbrot}. \begin{figure}
\caption{The stellated octahedron (also called Starbrot) with various divergence layers.}
\label{fig:starbrot}
\end{figure}
\section*{Conclusion} In this article, we first established new results in the algebra of tricomplex numbers related to idempotent elements and invertibility, thus paving the way for interesting extensions valid in the multicomplex setting $\Mu{n},n\geq3$. Then, we presented various geometrical characterizations for the principal 3D slices of the tricomplex Mandelbrot set $\M3$, allowing these to be classified according to their connections to 2D or 3D related sets. In the process, we also confirmed the presence of three Platonic solids within the tricomplex dynamics associated with the Mandelbrot algorithm.
In subsequent works, it could be interesting to extend the classification of the principal slices of $\M3$ to arbitrary powers $p\geq2$. Doing so would probably expand the list of convex polyhedra found among the principal slices since, for $p=8$, the Firebrot strongly resembles a truncated tetrahedron, which is an Archimedean solid (Figure \ref{fig:archibrot}). \begin{figure}\label{fig:archibrot}
\end{figure}
In addition, by considering the algebra $\Mu{n},n\geq3$, the search for regular convex polytopes could be generalised to $n$-dimensional slices. In fact, it is worth noting that the method used in section \ref{sec:cube} provides a straightforward way to establish that the idempotent 4D slice $\ensuremath{\mathcal{T}}_{e}(\ensuremath{\gamma_{1}\gamma_{3}},\ensuremath{\overline{\gamma_{1}}\gamma_{3}},\ensuremath{\gamma_{1}\overline{\gamma_{3}}},\ensuremath{\overline{\gamma_{1}}\overline{\gamma_{3}}})$ is a tesseract (also called hypercube), that is, a four-dimensional regular convex polytope. Furthermore, the approach in Theorem \ref{thm:tetraedre} can probably be used to prove that in the usual basis, at least one specific 4D slice corresponds to a regular four-dimensional cross-polytope (also called hyperoctahedron). Together, these examples provide sufficient information to allow us to state a conjecture. \begin{conjecture} Let $n\geq3$. The multicomplex Mandelbrot set $\M{n}$ contains exactly three regular convex $n$-polytopes among all possible principal or idempotent $n$-dimensional slices. \end{conjecture} In view of what has been mentioned above, it appears that the easiest way to begin proving this conjecture is to show that both the $n$-dimensional hypercube and the $n$-dimensional hyperoctahedron exist within multicomplex dynamics when $n\geq3$. This is supported by the fact that the number of equations needed to describe these $n$-polytopes elegantly matches the number of equations provided by the extended representation \eqref{eq:idemp 4} in either of the two bases considered herein. However, note that this is not the case for the $n$-dimensional simplex. Thus, proving its existence for every value of $n\geq3$ promises to be a more arduous task. Finally, from a more applied perspective, it could be relevant to consider this theory in relation to the natural geometry of fragmentation \cite{Fragmentation}.
\end{document}
\begin{document}
\title{The Communication Value of a Quantum Channel}
\author{Eric Chitambar, Ian George, Brian Doolittle, Marius Junge \thanks{E. Chitambar and I. George are with the Department of Electrical and Computer Engineering, University of Illinois Urbana-Champaign, Urbana, IL, 61801 USA, e-mail: [email protected].} \thanks{B. Doolittle is with the Department of Physics, University of Illinois Urbana-Champaign, Urbana, IL, 61801 USA.} \thanks{M. Junge is with the Department of Mathematics, University of Illinois Urbana-Champaign, Urbana, IL, 61801 USA.}}
\maketitle
\begin{abstract} There are various ways to quantify the communication capabilities of a quantum channel. In this work we study the communication value (cv) of a channel, which describes the optimal success probability of transmitting a randomly selected classical message over the channel. The cv also offers a dual interpretation as the classical communication cost for zero-error channel simulation using non-signaling resources. We first provide an entropic characterization of the cv as a generalized conditional min-entropy over the cone of separable operators. Additionally, the logarithm of a channel's cv is shown to be equivalent to its max-Holevo information, which can further be related to channel capacity. We evaluate the cv exactly for all qubit channels and the Werner-Holevo family of channels. While all classical channels are multiplicative under tensor product, this is no longer true for quantum channels in general. We provide a family of qutrit channels for which the cv is non-multiplicative. On the other hand, we prove that any pair of qubit channels have multiplicative cv when used in parallel. Even stronger, all entanglement-breaking channels and the partially depolarizing channel are shown to have multiplicative cv when used in parallel with any channel. We then turn to the entanglement-assisted cv and prove that it is equivalent to the conditional min-entropy of the Choi matrix of the channel. Combining with previous work on zero-error channel simulation, this implies that the entanglement-assisted cv is the classical communication cost for perfectly simulating a channel using quantum non-signaling resources. A final component of this work investigates relaxations of the channel cv to other cones such as the set of operators having a positive partial transpose (PPT). The PPT cv is analytically and numerically investigated for well-known channels such as the Werner-Holevo family and the dephrasure family of channels.
\end{abstract}
\section{Introduction}
A noisy communication channel prohibits perfect transmission of messages from the sender (Alice) to the receiver (Bob). While there are a number of ways to quantify the noise of a channel, perhaps the simplest is in terms of a guessing game. Suppose that with uniform probability Alice randomly chooses a channel input and sends it to Bob over the channel. Based on the channel output, Bob tries to guess Alice's input with the greatest probability of success. In this game, Bob's optimal strategy is to perform maximum likelihood estimation based on the channel's transition probabilities. To be concrete, suppose that $\mathbf{P}:[n]\to[n']$ is a channel mapping set $[n]:=\{1,\cdots,n\}$ to set $[n']$ with transition probabilities $P(y|x)$. We define the channel's \textit{communication value} (cv) to be \begin{equation} \label{Eq:Cv}
\text{cv}(\mathbf{P})=\sum_{y\in[n']}\max_{x\in[n]}P(y|x). \end{equation} It is then straightforward to see that $\frac{1}{n}\text{cv}(\mathbf{P})$ is the largest success probability of correctly identifying the input $x$ based on the output $y$, when $x$ is drawn uniformly from $[n]$. The quantity $\text{cv}(\mathbf{P})$ is thus a natural measure for how well a channel $\mathbf{P}$ transmits data on the single-copy level. The goal of this paper is to better understand the channel cv in different communication settings.
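For concreteness, here is a small sketch (our example, not from the paper) computing $\text{cv}(\mathbf{P})$ from Eq. \eqref{Eq:Cv}; the binary symmetric channel and the flip probability $p=0.1$ are illustrative choices.

```python
# cv(P) = sum_y max_x P(y|x) for a classical channel, illustrated on a
# binary symmetric channel with flip probability p (illustrative choice).
import numpy as np

def cv(P):
    """P[y, x] = P(y|x); each column of P is a conditional distribution."""
    return P.max(axis=1).sum()

p = 0.1
bsc = np.array([[1 - p, p],
                [p, 1 - p]])

# cv(bsc) = 2 * max(p, 1 - p) = 1.8, so maximum-likelihood guessing of a
# uniformly random input succeeds with probability cv / n = 0.9.
```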
The channel cv also emerges in the problem of zero-error channel simulation \cite{Cubitt-2011a, Duan-2016a, Wang-2016a, Fang-2020a}. In the general task of channel simulation, Alice and Bob attempt to generate one channel $\mathbf{P}$ using another channel $\mathbf{Q}$ combined with pre- and post-processing \cite{Bennett-2002a, Bennett-2014a, Berta-2011a, Heinosaari-2019a, Heinosaari-2020a}. Interesting variations to this problem arise when different types of resources are used to coordinate the pre- and post-processing of $\mathbf{Q}$. For example, these resources could be shared randomness \cite{Bennett-2014a, Frenkel-2015a}, shared quantum entanglement \cite{Berta-2013a, Wang-2018a, Wilde-2018a, Gour-2021a, Frenkel-2021a}, or a non-signaling side-channel \cite{Cubitt-2011a, Duan-2016a, Fang-2020a}. The latter refers to a general bipartite channel that prohibits communication from one party to the other. When $\mathbf{Q}=\textrm{id}_r$ is the identity map on $[r]$, then the goal is to perfectly simulate $\mathbf{P}$ using $r$ noiseless messages from Alice to Bob, along with any auxiliary resource. For a given class of resource, the smallest number $r$ needed to accomplish this simulation is called the communication cost of $\mathbf{P}$ (also referred to as the signaling dimension of $\mathbf{P}$ in Refs. \cite{Dall'Arno-2017a, Doolittle-2021a}). It turns out that $\lceil\text{cv}(\mathbf{P})\rceil$ is a lower bound on the communication cost when Alice and Bob have access to shared randomness \cite{Doolittle-2021a}. In fact, this lower bound is tight when Alice and Bob are allowed to use non-signaling resources \cite{Cubitt-2011a}. Combining this discussion with the previous paragraph, we thus have two dual interpretations of the communication value: $\frac{1}{n}\text{cv}(\mathbf{P})$ as an optimal guessing probability and $\text{cv}(\mathbf{P})$ as an optimal simulation cost. This is no coincidence since Eq. 
\eqref{Eq:Cv} is the dual formulation of the linear program characterizing the communication cost for perfectly simulating $\mathbf{P}$ using classical non-signaling resources (see Section \ref{Sect:Capacity-NS} for more details).
The goal of this paper is to understand the communication value of quantum channels. Formally, a quantum channel is described by a completely positive trace-preserving (CPTP) map $\mathcal{N}$ mapping density operators $\rho^A$ on Hilbert space $\mathcal{H}^A$ to density operators $\mathcal{N}(\rho)$ on Hilbert space $\mathcal{H}^B$. Every quantum channel is able to generate a family of classical channels by encoding classical data into quantum objects. Namely, for each $x\in[n]$, Alice prepares a quantum state $\rho_x$ and sends it through the channel to Bob's side. Upon receiving $\mathcal{N}(\rho_x)$, Bob performs a quantum measurement, described by a general positive operator-valued measure (POVM) $\{\Pi_y\}_{y\in[n']}$, and regards his measurement outcome as the decoded classical data. The induced classical channel then has the form \begin{equation}
P(y|x)=\textrm{Tr}[\Pi_y\mathcal{N}(\rho_x)]. \end{equation} How noisy this channel will be depends on the state encoding $\{\rho_x\}_{x\in[n]}$ and measurement decoding $\{\Pi_y\}_{y\in[n']}$, and ideally one chooses the states and measurement to minimize the error in data transmission. We define the cv of $\mathcal{N}$ in terms of the classical channels it can generate. \begin{definition} \label{Defn:cv} The $[n]\to[n']$ communication value (cv) of a quantum channel $\mathcal{N}\in\text{CPTP}(A\to B)$ is \begin{equation}
\text{cv}^{n\to n'}(\mathcal{N})=\max_{\substack{\{\Pi_{y}\}_{y=1}^{n'}\\\{\rho_x\}_{x=1}^n}}\{\text{cv}(\mathbf{P})\;|\; P(y|x)=\textrm{Tr}[\Pi_y\mathcal{N}(\rho_x)]\}, \end{equation} and the cv of $\mathcal{N}$ is defined as \begin{equation} \text{cv}(\mathcal{N})=\sup_{n,n'\in\mathbb{N}}\text{cv}^{n\to n'}(\mathcal{N}). \end{equation} \end{definition} \noindent Analogous to the classical case, $\frac{1}{n}\text{cv}^{n\to n'}(\mathcal{N})$ quantifies the largest success probability attainable in an $n$-input guessing game using the channel $\mathcal{N}$. The quantity $\text{cv}(\mathcal{N})$ also has a dual interpretation as the classical communication cost for simulating any classical channel generated by $\mathcal{N}$ when Alice and Bob have access to non-signaling resources.
By taking multiple copies of the channel, one can consider the cv capacity, defined as \begin{align} \mathcal{CV}(\mathbf{P})&=\lim_{k\to\infty}\frac{1}{k}\log\text{cv}(\mathbf{P}^{k}),\notag\\ \mathcal{CV}(\mathcal{N})&=\lim_{k\to\infty}\frac{1}{k}\log\text{cv}(\mathcal{N}^{\otimes k}) \end{align} in the classical and quantum cases, respectively. It is not difficult to see that $\log \text{cv}(\mathbf{P})$ is an additive quantity and so $\mathcal{CV}(\mathbf{P})=\log\text{cv}(\mathbf{P})$. On the other hand, as we show below, $\log\text{cv}(\mathcal{N})$ is non-additive in general for quantum channels. A primary objective of this paper is to understand when additivity of $\log\text{cv}(\mathcal{N})$ (equivalently multiplicativity of $\text{cv}(\mathcal{N})$) holds and when it does not. One of our main results is that multiplicativity always holds for qubit channels, whereas it does not for qutrits.
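The additivity of $\log\text{cv}$ for classical channels is easy to confirm numerically: since all entries are non-negative, the maximum over product inputs factorizes under the Kronecker product. A quick sketch (ours), with random column-stochastic matrices standing in for arbitrary channels:

```python
# Multiplicativity of cv for classical channels under the tensor
# (Kronecker) product: entries are non-negative, so the maximum over
# product inputs factorizes and cv(P ⊗ Q) = cv(P) * cv(Q).
import numpy as np

def cv(P):
    return P.max(axis=1).sum()

rng = np.random.default_rng(0)
P = rng.random((3, 4))
Q = rng.random((2, 5))
P /= P.sum(axis=0)   # normalize columns so that sum_y P(y|x) = 1
Q /= Q.sum(axis=0)

assert np.isclose(cv(np.kron(P, Q)), cv(P) * cv(Q))
```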
One can also study the cv of channels that are enhanced by auxiliary resources shared between the sender and receiver. In the quantum setting, it is natural to consider entanglement-assisted channel communication, as depicted in Fig. \ref{fig:ea_cv}. This is precisely the setup in the well-known quantum superdense coding protocol \cite{Bennett-1992a}. Letting $\text{cv}^*(\mathcal{N})$ denote the entanglement-assisted cv of $\mathcal{N}$, another main result of ours is that $\text{cv}^*(\mathcal{N})$ equals the conditional min-entropy \cite{Konig-2009a} of the Choi matrix of $\mathcal{N}$. Combining with the results of Duan and Winter \cite{Duan-2016a}, this implies that $\lceil\text{cv}^*(\mathcal{N})\rceil$ captures the zero-error classical communication cost for simulating $\mathcal{N}$ when Alice and Bob have \textit{quantum} non-signaling resources. Note that by additivity of the min-entropy, it follows that $\log\text{cv}^*(\mathcal{N})$ is additive and therefore $\mathcal{CV}^*(\mathcal{N})=\log\text{cv}^*(\mathcal{N})$. A summary of the relation between channel cv and zero-error channel simulation is given in Table \ref{table:cv-and-chan-sim}.
\begin{table}[ht]
\centering
\caption{Channel cv and zero-error channel simulation.}
\label{table:cv-and-chan-sim}
\begin{tabular}{|p{2cm}|p{6cm}|}\hline
$\lceil\text{cv}(\mathcal{N})\rceil$ & Classical communication cost to perfectly simulate every classical channel induced by $\mathcal{N}$ using classical non-signaling resources. \\ \hline
$\lceil\text{cv}^*(\mathcal{N})\rceil$ \Tstrut & Classical communication cost to perfectly simulate $\mathcal{N}$ using quantum non-signaling resources.\\ \hline \end{tabular}
\end{table}
This paper is structured as follows. We begin in Section \ref{Sect:Notation} by introducing the notation used in this manuscript and reviewing some preliminary concepts. Section \ref{Sect:cv-characterization} takes a deeper dive into the definition of channel communication value and relates it to the geometric measure of entanglement and other information-theoretic quantities such as the conditional min-entropy. Section \ref{Sect:qubits} focuses on qubit channels and provides an analytic expression for cv in terms of the correlation matrix of the channel's Choi matrix. The Werner-Holevo family of channels is introduced in Section \ref{Sect:Werner-Holevo} and the cv is computed. The question of cv multiplicativity is taken up in Section \ref{Sect:Multiplicativity} with examples of both multiplicativity and non-multiplicativity being presented. Notably, the cv capacity is shown to take a single-letter form for entanglement-breaking channels, Pauli qubit channels, and the general depolarizing channel. Section \ref{sec:entanglement_assisted_cv} introduces the notion of entanglement-assisted communication value and relates it to the conditional min-entropy of the Choi matrix. Different relaxations of the communication value are considered in Section \ref{sec:relaxations_of_cv}, with a particular focus on the PPT communication value and the computable examples it admits. In Section \ref{Sect:Numerics} we describe a procedure for numerically estimating the cv of a given channel, and we provide a link to the software package we developed to perform this estimation. Finally, Section \ref{Sect:Capacity-NS} provides a discussion of our results as they relate to channel capacity and zero-error channel simulation.
\section{Notation and Preliminaries}
\label{Sect:Notation}
This paper considers exclusively finite-dimensional quantum systems represented by Hilbert spaces $\mathcal{H}^A, \mathcal{H}^B,\cdots$ etc. The collection of positive operators acting on Hilbert space $\mathcal{H}^A$ will be denoted by $\mathrm{Pos}(A)$, which consists of all hermitian operators $\text{Herm}(A)$ acting on $A$ with a non-negative eigenvalue spectrum. The subset of these operators having unit trace constitute the collection of density operators for system $A$, and we denote this set by $\mathcal{D}(A)$. We write $\Vert\cdot\Vert_\infty$ and $\Vert\cdot\Vert_1$ to indicate the spectral and trace norms of elements in $\mathrm{Pos}(A)$, respectively. For bipartite systems, an operator $\Omega\in \mathrm{Pos}(AB)$ is called separable if it can be expressed as a positive combination of product states, $\Omega=\sum_it_i\op{\alpha_i}{\alpha_i}\otimes\op{\beta_i}{\beta_i}$, with $t_i\geq 0$, $\op{\alpha_i}{\alpha_i}\in\mathcal{D}(A)$, and $\op{\beta_i}{\beta_i}\in\mathcal{D}(B)$, and we let $\text{SEP}(A:B)$ denote the set of all separable operators on systems $A$ and $B$. Classical systems can be incorporated into this framework by demanding that the density matrix of every classical state be diagonal in a fixed basis. In general, we will label a classical system by $X$ or $Y$.
Quantum channels provide the basic building blocks of any dynamical system. Mathematically, they are represented by completely positive trace-preserving (CPTP) maps, and we denote the set of CPTP maps from system $A$ to $B$ by $\text{CPTP}(A\to B)$. The set $\text{CPTP}(A\to B)$ is isomorphic to the subset of $\mathrm{Pos}(AB)$ consisting of operators whose reduced density operator on system $A$ is the identity. Specifically, for every $\mathcal{N}\in\text{CPTP}(A\to B)$ its Choi matrix is the associated operator $J_{\mathcal{N}}\in\mathrm{Pos}(AB)$ given by \[J_{\mathcal{N}}=\textrm{id}\otimes\mathcal{N}(\phi^+_{d_A}),\] where $\textrm{id}$ is the identity map and $\phi^+_{d_A}=\op{\phi^+_{d_A}}{\phi^+_{d_A}}=\sum_{i,j=1}^{d_A}\op{ii}{jj}$. Note that $\ket{\phi^+_{d_A}}$ is proportional to the normalized $d_A$-dimensional maximally entangled state, and we write the latter as $\ket{\Phi^+_{d_A}}:=\frac{1}{\sqrt{d_A}}\ket{\phi^+_{d_A}}$. The fact that $\mathcal{N}$ is completely positive assures that $J_{\mathcal{N}}\geq 0$, and the trace-preserving condition means that $\textrm{Tr}_B J_{\mathcal{N}}=\mathbb{I}^A$, where $\mathbb{I}$ is the identity operator. On the other hand, if $\textrm{Tr}_A J_{\mathcal{N}}=\mathbb{I}^B$ then $\mathcal{N}$ is a unital map, meaning that $\mathcal{N}(\mathbb{I}^A)=\mathbb{I}^B$. More generally, we say a map is sub-unital if $\mathcal{N}(\mathbb{I}^A)\leq \mathbb{I}^B$.
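As a concrete illustration (not part of the formal development), the Choi matrix convention above can be checked numerically. The following NumPy sketch builds $J_{\mathcal{N}}=\sum_{ij}\op{i}{j}\otimes\mathcal{N}(\op{i}{j})$ from a set of Kraus operators and verifies the trace-preserving condition $\textrm{Tr}_B J_{\mathcal{N}}=\mathbb{I}^A$; the qubit dephasing channel used here is our own illustrative choice.

```python
import numpy as np

def choi_matrix(kraus_ops, d_in):
    """Unnormalized Choi matrix J_N = sum_{ij} |i><j| (x) N(|i><j|),
    following the convention J_N = (id (x) N)(phi+), phi+ = sum_{ij} |ii><jj|."""
    d_out = kraus_ops[0].shape[0]
    J = np.zeros((d_in * d_out, d_in * d_out), dtype=complex)
    for i in range(d_in):
        for j in range(d_in):
            Eij = np.zeros((d_in, d_in), dtype=complex)
            Eij[i, j] = 1.0
            N_Eij = sum(K @ Eij @ K.conj().T for K in kraus_ops)
            J += np.kron(Eij, N_Eij)
    return J

def partial_trace_B(J, d_A, d_B):
    """Tr_B of an operator on A(x)B, with the kron(A, B) index ordering."""
    return np.einsum('iaja->ij', J.reshape(d_A, d_B, d_A, d_B))

# Example: qubit dephasing channel with Kraus operators sqrt(1-p) I, sqrt(p) Z.
p = 0.3
K0 = np.sqrt(1 - p) * np.eye(2)
K1 = np.sqrt(p) * np.diag([1.0, -1.0])
J = choi_matrix([K0, K1], d_in=2)

# Complete positivity: J >= 0; trace preservation: Tr_B J = I_A.
assert np.min(np.linalg.eigvalsh(J)) > -1e-9
assert np.allclose(partial_trace_B(J, 2, 2), np.eye(2))
```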
An important subclass of channels is known as entanglement-breaking (EB). These are characterized by the property that $\mathcal{N}^{A\to B}\otimes \textrm{id}^C(\rho^{AC})\in\text{SEP}(B:C)$ for all $\rho^{AC}\in\mathrm{Pos}(AC)$. It is not difficult to see that $\mathcal{N}\in\text{CPTP}(A\to B)$ is EB if and only if $J_{\mathcal{N}}\in\text{SEP}(A:B)$. For any subset $S\subset \text{Herm}(A)$, we let
\[S^*=\{\omega\in\text{Herm}(A)\;|\;\langle \omega,\tau\rangle:=\textrm{Tr}[\omega\tau]\geq 0\;\;\forall \tau\in S\}\] denote the dual cone of $S$. As a final bit of notation, we write $\exp(x)$ and $\log(x)$ to mean $2^x$ and $\log_2x$, respectively.
\section{Characterizing the Communication Value}
\label{Sect:cv-characterization}
Let us begin with a few remarks regarding Definition \ref{Defn:cv}. First, every choice of optimal POVM $\{\Pi_y\}_{y=1}^{n'}$ and set of signal states $\{\rho_x\}_{x=1}^n$ is characterized by a labeling function $f:[n']\to[n]$ such that $\max_{x\in[n]}\textrm{Tr}[\Pi_y\rho_x]=\textrm{Tr}[\Pi_y\rho_{f(y)}]$. If the range of $f$ is strictly contained in $[n]$, then we can replace the set $\{\rho_x\}_{x=1}^n$ with a smaller set of signal states such that the map $f$ is surjective. Similarly, if $f$ is not one-to-one, then we can coarse-grain the $n'$ POVM elements so that each outcome uniquely identifies a signal state. Hence, without changing the cv, we can assume $f:[m]\to[m]$ is a bijection with $m=\min\{n,n'\}$, and so \begin{equation} \text{cv}^{n\to n'}(\mathcal{N})=\text{cv}^{m\to m}(\mathcal{N})\qquad\text{for}\qquad m=\min\{n,n'\}. \end{equation}
Another observation is that \begin{equation} \text{cv}^{m\to m}(\mathcal{N})\leq \text{cv}^{m'\to m'}(\mathcal{N})\qquad\text{for}\qquad m\leq m', \end{equation} which follows from the fact that we can always trivially split a POVM element to increase the number of outcomes, $\Pi\to \frac{1}{2}\Pi+\frac{1}{2}\Pi$, and we can always increase the size of our input set $\{\rho_x\}_{x=1}^m$ by adding the same state $\rho_x$ multiple times. Finally, we note that \begin{equation} \label{Eq:cv-carethedory} \text{cv}(\mathcal{N})=\text{cv}^{d_B^2\to d_B^2}(\mathcal{N}), \end{equation} where $d_B$ is the dimension of the output system. This follows from the fact that any POVM on a $d_B$-dimensional system can always be decomposed into a convex combination of extremal POVMs, each with at most $d_B^2$ outcomes \cite{Davies-1978a}, and the cv can always be attained with one of these extremal measurements.
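For a purely classical channel with transition matrix $p(y|x)$, the optimization in Definition \ref{Defn:cv} reduces to choosing the decoder $f(y)=\arg\max_x p(y|x)$, giving $\text{cv}=\sum_y\max_x p(y|x)$. The following sketch (with an illustrative transition matrix of our own choosing) confirms this formula by brute force over all deterministic decoders.

```python
import numpy as np
from itertools import product

# P[y, x] = p(y|x); each column is a probability distribution over outputs y.
P = np.array([[0.7, 0.1, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
n_x, n_y = P.shape[1], P.shape[0]
assert np.allclose(P.sum(axis=0), 1.0)

# Claimed value: decode each y to the most likely input x.
cv_formula = P.max(axis=1).sum()

# Brute-force check over all deterministic decoders f: [n_y] -> [n_x].
best = max(sum(P[y, f[y]] for y in range(n_y))
           for f in product(range(n_x), repeat=n_y))

assert np.isclose(best, cv_formula)
assert 1.0 <= cv_formula <= min(n_x, n_y) + 1e-9   # Eq. (cv bounds)
```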
It is also not difficult to see that \begin{align} \label{Eq:cv-bounds}
1\leq \text{cv}(\mathcal{N})\leq \min\{d_A,d_B\} \end{align} for any channel $\mathcal{N}$. The lower bound holds by considering a constant input $\rho_x=\rho$ (for all $x$) so that $\sum_x\textrm{Tr}[\Pi_x\mathcal{N}(\rho_x)]=\sum_x\textrm{Tr}[\Pi_x\mathcal{N}(\rho)]=1$, since $\mathcal{N}$ is trace preserving and $\sum_x\Pi_x=\mathbb{I}_{d_B}$. Similarly, the upper bound follows from the inequalities \begin{align}
\sum_x\textrm{Tr}[\Pi_x\mathcal{N}(\rho_x)]&\leq \sum_x\textrm{Tr}[\Pi_x]=\textrm{Tr}[\mathbb{I}_{d_B}]=d_B;\\
\sum_x\textrm{Tr}[\Pi_x\mathcal{N}(\rho_x)]&= \sum_x\textrm{Tr}[\mathcal{N}^\dagger(\Pi_x)\rho_x]\notag\\
&\leq \sum_x\textrm{Tr}[\mathcal{N}^\dagger(\Pi_x)]=\textrm{Tr}[\mathbb{I}_{d_A}]=d_A, \end{align} where $\mathcal{N}^\dagger:B\to A$ is the adjoint map of $\mathcal{N}$, and so $\{\mathcal{N}^\dagger(\Pi_x)\}_x$ will always be a valid POVM on Alice's system.
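Both bounds in Eq. \eqref{Eq:cv-bounds} are saturated by simple strategies, which can be checked directly. In this illustrative sketch, the identity channel with orthonormal signal states and a projective readout attains $\text{cv}=d$, while a constant encoding attains the lower bound of $1$.

```python
import numpy as np

d = 3

# Identity channel N(rho) = rho; signal states = computational basis states.
states = [np.outer(e, e) for e in np.eye(d)]
# POVM: projective measurement in the same basis.
povm = [np.outer(e, e) for e in np.eye(d)]

# sum_x Tr[Pi_x N(rho_x)] = d, saturating the upper bound min{d_A, d_B}.
score = sum(np.trace(P @ rho).real for P, rho in zip(povm, states))
assert np.isclose(score, d)

# A constant encoding rho_x = rho yields sum_x Tr[Pi_x N(rho)] = 1.
rho = np.eye(d) / d
score_const = sum(np.trace(P @ rho).real for P in povm)
assert np.isclose(score_const, 1.0)
```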
Notice that when $\text{cv}(\mathcal{N})=1$, Bob can do no better than randomly guessing Alice's input. The following proposition characterizes the type of channel for which this is the case. \begin{proposition} $\text{cv}(\mathcal{N})=1$ iff $\mathcal{N}$ is a replacer channel; i.e. there exists a fixed state $\sigma$ such that $\mathcal{N}(\rho)=\sigma$ for all states $\rho$. \end{proposition} \begin{proof} If $\mathcal{N}$ is a replacer channel, then clearly $\text{cv}(\mathcal{N})=1$. On the other hand, suppose that $\mathcal{N}$ is not a replacer channel. This means there exist two inputs $\rho_1$ and $\rho_2$ such that $\Delta=\mathcal{N}(\rho_1)-\mathcal{N}(\rho_2)\not=0$. Hence by performing a Helstrom measurement on the channel output (i.e. projecting onto the $\pm$ parts of $\Delta$) \cite{Helstrom-1976a}, one obtains $\text{cv}(\mathcal{N})\geq 1+\tfrac{1}{2}\Vert\Delta\Vert_1>1$. \end{proof}
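The Helstrom lower bound $\text{cv}(\mathcal{N})\geq 1+\tfrac{1}{2}\Vert\Delta\Vert_1$ from the proof above can be reproduced numerically. The sketch below uses an amplitude damping channel (our own illustrative example) with the two basis-state inputs, and checks that the $\pm$ eigenspace projectors of $\Delta$ achieve exactly this score.

```python
import numpy as np

gamma = 0.25
# Amplitude damping channel via its Kraus operators.
K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1 - gamma)]])
K1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])
channel = lambda rho: K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

rho1 = np.diag([1.0, 0.0])
rho2 = np.diag([0.0, 1.0])
Delta = channel(rho1) - channel(rho2)

# Helstrom measurement: project onto the +/- eigenspaces of Delta.
vals, vecs = np.linalg.eigh(Delta)
P_plus = sum(np.outer(vecs[:, k], vecs[:, k].conj())
             for k in range(2) if vals[k] > 0)
P_minus = np.eye(2) - P_plus

score = (np.trace(P_plus @ channel(rho1)).real
         + np.trace(P_minus @ channel(rho2)).real)
trace_norm = np.abs(vals).sum()          # ||Delta||_1

assert np.isclose(score, 1 + trace_norm / 2)
assert score > 1                          # hence cv(N) > 1
```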
\subsection{Communication Value via Conic Optimization}
In general $\text{cv}(\mathcal{N})$ is difficult to compute. This can be seen more explicitly by casting $\text{cv}(\mathcal{N})$ as an optimization over the separable cone, $\text{SEP}(A:B)$, whose membership is NP-Hard to decide \cite{Gurvits-2003a}. Nevertheless, expressing $\text{cv}(\mathcal{N})$ as an optimization over $\text{SEP}(A:B)$ leads to computable upper bounds since there are well-known relaxations to the set $\text{SEP}(A:B)$ that are easier to handle analytically. \begin{proposition} \label{Prop:Proposition-cv} For $\mathcal{N}\in\text{CPTP}(A\to B)$, \begin{align} \text{cv}(\mathcal{N})=\max &\;\;\textrm{Tr}[\Omega^{AB}J_\mathcal{N}]\notag\\ \text{subject to}\;&\;\;\textrm{Tr}_A[\Omega^{AB}]=\mathbb{I}^B;\notag\\ &\;\;\Omega^{AB}\in\text{SEP}(A:B). \label{Eq:Proposition-cv} \end{align} \end{proposition} \begin{proof} By Eq. \eqref{Eq:cv-carethedory} we have \begin{align} \text{cv}(\mathcal{N})&=\max_{\{\Pi_x\}, \{\rho_x\}}\sum_{x=1}^{d_B^2}\textrm{Tr}[\Pi_x\mathcal{N}(\rho_x)]\notag\\ &=\max_{\{\Pi_x\}, \{\rho_x\}}\sum_{x=1}^{d_B^2}\textrm{Tr}[\rho_x^T\otimes\Pi_x(J_{\mathcal{N}})]\notag\\ &=\max \textrm{Tr}[\Omega^{AB}J_{\mathcal{N}}], \end{align} where $\Omega^{AB}=\sum_{x=1}^{d_B^2}\rho_x^T\otimes\Pi_x$ satisfies the conditions of Eq. \eqref{Eq:Proposition-cv}. Conversely, any $\Omega^{AB}\in\text{SEP}(A:B)$ can be written as $\Omega^{AB}=\sum_{x}\op{\psi_x}{\psi_x}^A\otimes\omega_x^B$ with $\ket{\psi_x}$ being a pure state. The condition $\textrm{Tr}_A[\Omega^{AB}]=\mathbb{I}^B$ implies that $\{\omega_x\}_x$ constitutes a POVM. \end{proof} \noindent Note that strong duality holds for the conic program here, and so Proposition \ref{Prop:Proposition-cv} can be cast in dual form as \begin{align} \text{cv}(\mathcal{N})=\min &\;\;\textrm{Tr}[Z^{B}]\notag\\ \text{subject to}&\;\;\mathbb{I}^{A}\otimes Z^{B}-J_{\mathcal{N}}^{AB}\in\text{SEP}^*(A:B). \label{Eq:cv-dual} \end{align}
In Section \ref{Sect:Capacity-NS}, we will explore different relaxations to this problem by considering outer approximations of $\text{SEP}(A:B)$. For example, the cone $\text{PPT}(A:B)$, which consists of all bipartite positive operators having a positive partial transpose, contains $\text{SEP}(A:B)$ \cite{Peres-1996a}. Replacing $\text{SEP}(A:B)$ with $\text{PPT}(A:B)$ in Eq. \eqref{Eq:Proposition-cv} gives us a semi-definite program (SDP). Furthermore, since $\text{SEP}(A:B)=\text{PPT}(A:B)$ whenever $d_Ad_B\leq 6$ \cite{Horodecki-1996a}, we thus obtain the following. \begin{corollary} \label{cor:Proposition-PPT} Suppose $\mathcal{N}\in\text{CPTP}(A\to B)$ with $d_Ad_B\leq 6$. Then \begin{align} \text{cv}(\mathcal{N})=\max &\;\;\textrm{Tr}[\Omega^{AB}J_\mathcal{N}]\notag\\ \text{subject to}\;&\;\;\textrm{Tr}_A[\Omega^{AB}]=\mathbb{I}^B\notag\\ &\;\;\Omega^{AB}\in\text{PPT}(A:B). \label{Eq:Proposition-PPT} \end{align} \end{corollary}
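The containment $\text{SEP}(A:B)\subset\text{PPT}(A:B)$ underlying Corollary \ref{cor:Proposition-PPT} is easy to probe numerically via the partial transpose. As an illustrative sketch: the Choi matrix of the qubit identity channel (maximally entangled, so its channel is far from entanglement-breaking) fails the PPT test, while the Choi matrix of a replacer channel, which is separable, passes it.

```python
import numpy as np

def partial_transpose_A(X, d_A, d_B):
    """Partial transpose on system A for an operator on A(x)B (kron order)."""
    X4 = X.reshape(d_A, d_B, d_A, d_B)
    return X4.transpose(2, 1, 0, 3).reshape(d_A * d_B, d_A * d_B)

d = 2
# Choi matrix of the identity channel (paper convention): phi+ = sum_ij |ii><jj|.
phi_plus = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        phi_plus[i * d + i, j * d + j] = 1.0

# Maximally entangled Choi matrix: NPT (partial transpose has a negative eigenvalue).
assert np.min(np.linalg.eigvalsh(partial_transpose_A(phi_plus, d, d))) < -1e-9

# Replacer channel N(rho) = I/d: its Choi matrix I (x) I/d is separable, hence PPT.
J_replacer = np.kron(np.eye(d), np.eye(d) / d)
assert np.min(np.linalg.eigvalsh(partial_transpose_A(J_replacer, d, d))) > -1e-9
```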
\subsection{An Entropic Characterization of Communication Value} \label{sec:entropic-characterization-of-cv}
\subsubsection{The Conditional Separable Min-Entropy}
An alternative but related manner of characterizing the communication value is in terms of the min-entropy or variations of it. Equation \eqref{Eq:cv-dual} might strike the reader as closely resembling the conditional min-entropy of $J_\mathcal{N}$. Recall that the conditional min-entropy of a positive bipartite operator $\omega^{AB}$ is given by \begin{align}
H_{\min}(A|B)_{\omega}=-\min_{\sigma^B\in\mathcal{D}(B)}D_{\max}(\omega\Vert \mathbb{I}^A\otimes \sigma^B), \end{align}
where $D_{\max}(\mu\Vert\nu)=\min\{\lambda\;|\;\mu\leq 2^\lambda \nu\}$ \cite{Konig-2009a}. Here $\leq$ denotes a generalized inequality over the convex cone of positive-semidefinite operators; i.e. $X\leq Y$ iff $Y-X\in \mathrm{Pos}(AB)$. Equivalently, we can combine the two minimizations in the definition of $H_{\min}$ to write \begin{align}
\exp[-H_{\min}(A|B)_{\omega}]=\min &\;\;\textrm{Tr}[Z^{B}]\notag\\ \text{subject to}&\;\;\mathbb{I}^{A}\otimes Z^{B}-\omega^{AB}\in\mathrm{Pos}(AB). \end{align} Comparing with Eq. \eqref{Eq:cv-dual}, we see that cv is recovered by changing the cone from $\mathrm{Pos}(AB)$ to $\text{SEP}^*(A:B)$. Let us denote the cone inequality over $\text{SEP}^*(A:B)$ by $\leq_{\text{SEP}^*}$ such that $X\leq_{\text{SEP}^*}\! Y$ iff $Y-X\in\text{SEP}^*(A:B)$. Then we can introduce a restricted conditional min-entropy. \begin{definition}\label{defn:sep-min-entropy} The conditional \textit{separable} min-entropy of a positive bipartite operator $\omega^{AB}$ is defined as \begin{equation}\label{eqn:sep-min-ent-def}
H_{\min}^{\text{sep}}(A|B)_{\omega}=-\min_{\sigma^B\in\mathcal{D}(B)}D^{\text{sep}}_{\max}(\omega\Vert \mathbb{I}^A\otimes \sigma^B), \end{equation}
where $D^{\text{sep}}_{\max}(\mu\Vert\nu)=\min\{\lambda\;|\;\mu\leq_{\text{SEP}^*}\! 2^\lambda \nu\}$. \end{definition} \noindent By Eq. \eqref{Eq:cv-dual}, we therefore have \begin{equation} \label{eqn:cv-sep-min-relation}
\text{cv}(\mathcal{N})=\exp[-H^{\text{sep}}_{\min}(A|B)_{J_\mathcal{N}}]. \end{equation}
The separable min-entropy enjoys a data-processing inequality under one-way LOCC from Bob to Alice. The latter consists of any bipartite map $\Phi\in \text{CPTP}(AB \to A'B')$ having the form $\Phi=\sum_i\mathcal{N}_i\otimes\mathcal{M}_i$, where $\mathcal{N}_i\in\text{CPTP}(A\to A')$ and $\sum_i\mathcal{M}_i\in \text{CPTP}(B\to B')$ with each individual $\mathcal{M}_i$ being CP. In fact we can prove the data-processing inequality under an even larger class of operations. \begin{proposition}\label{eqn:sep-min-DPI}
Let $\Phi:\mathrm{Pos}(AB)\to\mathrm{Pos}(A'B')$ be any positive map whose adjoint is non-entangling (i.e. $\Phi^\dagger:\text{SEP}(A':B')\to\text{SEP}(A:B)$) and that further satisfies $\Phi(\mathbb{I}^A\otimes \sigma^{B})\leq\mathbb{I}^{A'}\otimes\phi(\sigma^{B})$ for some trace-preserving map $\phi:\mathrm{Pos}(B)\to\mathrm{Pos}(B')$. Then $H^{\text{sep}}_{\min}(A|B)_{P} \leq H^{\text{sep}}_{\min}(A'|B')_{\Phi(P)}$ for all $P \in \mathrm{Pos}(AB)$. \end{proposition} \begin{proof} Since $\Phi^\dagger$ preserves separability, we must have $\Phi(Q)\in\text{SEP}^*(A':B')$ for all $Q\in \text{SEP}^*(A:B)$. Hence, \begin{align}
2^{\lambda}\mathbb{I} \otimes \sigma-P\geq_{\text{SEP}^*} 0\;&\Rightarrow\;2^{\lambda}\Phi(\mathbb{I} \otimes \sigma)-\Phi(P)\geq_{\text{SEP}^*} 0\notag\\
&\Rightarrow\;2^{\lambda}\mathbb{I} \otimes \phi(\sigma)-\Phi(P)\geq_{\text{SEP}^*} 0.\notag \end{align}
In other words, any feasible pair $(\sigma,\lambda)$ in the minimization of $H_{\min}^{\text{sep}}(A|B)_P$ also leads to a feasible pair for $H_{\min}^{\text{sep}}(A'|B')_{\Phi(P)}$. The minus sign in the definition of $H_{\min}^{\text{sep}}$ then implies the proposition. \end{proof} \noindent The maps of Proposition \ref{eqn:sep-min-DPI} include those of the form $\Phi=\mathcal{M}\otimes\mathcal{N}$, where $\mathcal{M}$ is sub-unital and $\mathcal{N}$ is CPTP. These maps are known to satisfy the data-processing inequality for the standard min-entropy \cite{Tomamichel-2015}. However, we suspect that Proposition \ref{eqn:sep-min-DPI} includes maps for which the standard min-entropy data-processing inequality does \textit{not} hold.
We can apply Proposition \ref{eqn:sep-min-DPI} to the processing of Choi matrices. However, in this case not all maps $\Phi$ satisfying the conditions of Proposition \ref{eqn:sep-min-DPI} are physically meaningful. Specifically, we require the additional condition that $\textrm{Tr}_{B'}\Phi(P)=\mathbb{I}^{A'}$ for all operators $P$ for which $\textrm{Tr}_{B}P=\mathbb{I}^A$. This assures that $\Phi$ maps Choi matrices to Choi matrices. One particular class of maps having this form consists of products of a positive unital map and a CPTP map, i.e. $\Phi=\mathcal{E}_{\text{pre}}^\dagger \otimes\mathcal{E}_{\text{post}}$. In this case, $\mathcal{E}_{\text{pre}}$ and $\mathcal{E}_{\text{post}}$ are pre- and post-processing maps for a given channel, respectively. As a consequence of Proposition \ref{eqn:sep-min-DPI} we therefore observe the following corollary, which can also be seen directly from the definition of communication value. \begin{corollary}\label{corr:pre-post-proc} Communication value is non-increasing under pre- and post-processing of the channel. \end{corollary}
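Corollary \ref{corr:pre-post-proc} can be checked numerically in the quantum-to-classical setting, where cv takes the closed form $\sum_y\Vert\Pi_y\Vert_\infty$ (see Eq. \eqref{Eq:cv-qc} and the surrounding discussion). In this sketch, post-processing the classical output with a stochastic map $T$ mixes the POVM elements into $\Pi'_z=\sum_y T(z|y)\Pi_y$, and the cv can only decrease; the random POVM construction is our own illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(3)
d, n = 3, 3

# Build a random POVM {Pi_y}: normalize random positive operators A_y by S^{-1/2}.
raw = []
for _ in range(n):
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    raw.append(G @ G.conj().T)
S = sum(raw)
w, V = np.linalg.eigh(S)
S_mh = V @ np.diag(w ** -0.5) @ V.conj().T        # S^{-1/2}
povm = [S_mh @ A @ S_mh for A in raw]
assert np.allclose(sum(povm), np.eye(d))

# Classical post-processing: column-stochastic T with T[z, y] = T(z|y).
T = rng.random((n, n))
T /= T.sum(axis=0, keepdims=True)
post = [sum(T[z, y] * povm[y] for y in range(n)) for z in range(n)]

# cv of a qc channel: sum of largest eigenvalues of the POVM elements.
cv_before = sum(np.linalg.eigvalsh(P)[-1] for P in povm)
cv_after = sum(np.linalg.eigvalsh(P)[-1] for P in post)
assert cv_after <= cv_before + 1e-9
```

The inequality follows from convexity of the largest eigenvalue together with $\sum_z T(z|y)=1$.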
Note that for classical systems $X$ and $Y$, we have $\mathrm{Pos}(XB)=\text{SEP}(X:B)$ and $\mathrm{Pos}(AY)=\text{SEP}(A:Y)$. These correspond to classical-to-quantum and quantum-to-classical channels, respectively, and in these cases Eq. \eqref{eqn:cv-sep-min-relation} reduces to \begin{align}
\text{cv}(\mathcal{N}^{X\to B})&=\exp[-H_{\min}(X|B)_{J_\mathcal{N}}]\label{Eq:cv-cq}\\
\text{cv}(\mathcal{N}^{A\to Y})&=\exp[-H_{\min}(A|Y)_{J_\mathcal{N}}]\label{Eq:cv-qc}. \end{align}
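For a quantum-to-classical channel $\mathcal{N}(\rho)=\sum_y\textrm{Tr}[\Pi_y\rho]\op{y}{y}$, the Choi matrix is block diagonal in $Y$, and Eq. \eqref{Eq:cv-qc} reduces to $\text{cv}(\mathcal{N})=\sum_y\Vert\Pi_y\Vert_\infty$. The sketch below (with a randomly generated POVM of our own choosing) confirms achievability: encoding message $y$ as the top eigenvector of $\Pi_y$ and reading the classical register attains this value.

```python
import numpy as np

rng = np.random.default_rng(7)
d, n = 3, 3

# Build a random POVM {Pi_y} by normalizing random positive operators.
raw = []
for _ in range(n):
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    raw.append(G @ G.conj().T)
S = sum(raw)
w, V = np.linalg.eigh(S)
S_mh = V @ np.diag(w ** -0.5) @ V.conj().T        # S^{-1/2}
povm = [S_mh @ A @ S_mh for A in raw]
assert np.allclose(sum(povm), np.eye(d))

# cv formula for a qc channel: sum of operator norms of the POVM elements.
cv_formula = sum(np.linalg.eigvalsh(P)[-1] for P in povm)

# Achievability: send the top eigenvector of Pi_y for message y, read out y.
score = 0.0
for P in povm:
    vals, vecs = np.linalg.eigh(P)
    psi = vecs[:, -1]                              # top eigenvector
    score += (psi.conj() @ P @ psi).real

assert np.isclose(score, cv_formula)
assert cv_formula <= d + 1e-9                      # consistent with cv <= min{d_A, d_B}
```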
\subsubsection{The max-Holevo Information}
The cv can be further related to the max-Holevo information of a channel, $\chi_{\max}(\mathcal{N})$. This quantity has been introduced in the study of ``sandwiched'' R\'{e}nyi divergences \cite{Wilde-2014a, Beigi-2013a} and is defined as \begin{align*}
\chi_{\max}(\mathcal{N}) = \max_{\rho^{XA}} \min_{\sigma^{B}} D_{\max}(\rho^{XB}||\rho^{X} \otimes \sigma^{B}), \end{align*} where the maximization is taken over all cq states $\rho^{XA}=\sum_xp(x)\op{x}{x}\otimes\rho_x^A$ and \[\rho^{XB}:=\sum_xp(x)\op{x}{x}\otimes\mathcal{N}(\rho_x^A).\] In fact, since $D_{\max}$ is quasi-convex (i.e. $D_{\max}(\sum_ip(i)\rho_i\Vert\sum_ip(i)\sigma_i)\leq\max_i D_{\max}(\rho_i\Vert\sigma_i)$ \cite{Datta-2009a}), it follows that we can restrict attention to pure $\rho_x^A=\op{\psi_x}{\psi_x}^A$ in the definition of $\chi_{\max}$. Letting $U$ be the unitary such that $U\ket{x}=\ket{\psi_x}$, the maximization over $\rho^{XA}$ can then be replaced by a maximization over $U$ such that \begin{equation} \label{Eq:cq-state-simplify}
\rho^{XA}=(\mathbb{I}\otimes U)\sum_{x}p(x)\op{xx}{xx}(\mathbb{I}\otimes U)^\dagger. \end{equation}
We use this simplification to prove a relationship between channel $\chi_{\max}$ and conditional $H^{\text{sep}}_{\min}$. \begin{theorem} \label{Thm:chi-max-hmin} For any channel $\mathcal{N}^{A\to B}$, \begin{equation}
\chi_{\max}(\mathcal{N})=\log\text{cv}(\mathcal{N}). \end{equation} \end{theorem} \begin{proof} Using Eq. \eqref{Eq:cq-state-simplify} we have \[\rho^{XB}=\sum_xp(x)\op{x}{x}\otimes\rho_x,\] where $\rho_x=\mathcal{N}(U\op{x}{x}U^\dagger)$. Since $\rho^X=\sum_xp(x)\op{x}{x}$, the definition of $D_{\max}$ yields \begin{align}
D_{\max}(\rho^{XB}\Vert\rho^X\otimes\sigma^B)&=\min\{\lambda\;|\;\rho_x\leq 2^\lambda \sigma,\;\forall x\}\notag\\
&=D_{\max}(\widetilde{\rho}^{XB}\Vert\mathbb{I}\otimes\sigma^B)\notag\\
&=D_{\max}^{\text{sep}}(\widetilde{\rho}^{XB}\Vert\mathbb{I}\otimes\sigma^B),\label{Eq:Dmax-ineq} \end{align} where \begin{align}
\widetilde{\rho}^{XB}&=\sum_x\op{x}{x}\otimes\mathcal{N}(U\op{x}{x}U^\dagger) \notag\\
&=\Delta_{U^T}\otimes\textrm{id}^B(J_\mathcal{N}), \end{align} and $\Delta_{U^T}(\tau)=\sum_{x}\op{x}{x}U^T(\tau) U^*\op{x}{x}$ is a completely dephasing map after applying the rotation $U^T$. The last equality in Eq. \eqref{Eq:Dmax-ineq} follows from the fact that $\mathrm{Pos}(XB)=\text{SEP}(X:B)$, as noted above. Then by data-processing (Proposition \ref{eqn:sep-min-DPI}), we have \begin{align}
D_{\max}^{\text{sep}}(\Delta_{U^T}\otimes\textrm{id}^B(J_\mathcal{N})\Vert \mathbb{I}\otimes\sigma^B)\leq D_{\max}^{\text{sep}}(J_\mathcal{N}\Vert \mathbb{I}\otimes\sigma^B) \notag \end{align} for any $\sigma^B$ and unitary $U$ on system $A$. Hence from the definitions it follows that \begin{equation}
\chi_{\max}(\mathcal{N})\leq -H_{\min}^{\text{sep}}(A|B)_{J_{\mathcal{N}}}=\log\text{cv}(\mathcal{N}). \end{equation}
To prove the reverse inequality, for arbitrary $\sigma^B$ let $\lambda_0=D_{\max}^{\text{sep}}(J_\mathcal{N}\Vert\mathbb{I}\otimes\sigma^B)$. Hence \begin{equation}
2^{\lambda_0}\mathbb{I}\otimes\sigma^B-J_{\mathcal{N}}\in\text{SEP}^*(A:B), \end{equation} and since $\lambda_0$ is a minimizer, there must exist some product state $\ket{\alpha}\ket{\beta}$ such that \begin{equation}
2^{\lambda_0}\bra{\beta}\sigma^B\ket{\beta}=\bra{\beta}\mathcal{N}(\op{\alpha^*}{\alpha^*})\ket{\beta}. \end{equation} Let $U$ be any unitary that rotates $\{\ket{x}\}_{x=1}^{d_A}$ such that $U\ket{1}=\ket{\alpha^*}$. Therefore \begin{align}
2^{\lambda_0}\bra{\alpha,\beta}\mathbb{I}\otimes\sigma^B\ket{\alpha,\beta}= \bra{\alpha,\beta}\Delta_{U^T}\otimes\textrm{id}^B(J_\mathcal{N})\ket{\alpha,\beta}, \end{align} which means that $D_{\max}^{\text{sep}}(\Delta_{U^T}\otimes\textrm{id}^B(J_\mathcal{N})\Vert \mathbb{I}\otimes\sigma^B)$ can be no less than $\lambda_0=D_{\max}^{\text{sep}}(J_\mathcal{N}\Vert\mathbb{I}\otimes\sigma^B)$. Since this holds for all $\sigma^B$ and we are maximizing over $U$, we have \begin{equation}
\chi_{\max}(\mathcal{N})\geq -H_{\min}^{\text{sep}}(A|B)_{J_{\mathcal{N}}}=\log\text{cv}(\mathcal{N}). \end{equation} \end{proof}
We close this section by providing an alternative proof of Theorem \ref{Thm:chi-max-hmin}. Instead of going through the Choi matrix, the following argument relies on a characterization of cv in terms of maximizing the min-entropy over encodings. In some sense this is intuitive, as the communication value optimizes minimum error discrimination, which the min-entropy characterizes \cite{Konig-2009a}. For this reason, the conceptual underpinning of this alternative derivation may be of interest in other applications.
Let $\{\rho_{x}^{A}\}$ denote a subset of states for some alphabet $\mathcal{X}$, $\rho^{XA}$ be a cq state defined using $\{\rho_{x}^{A}\}$, $\rho_{U}$ be the maximally mixed state on the relevant space, and $\rho^{XB} := (\mathrm{id}^{X} \otimes \mathcal{N})(\rho^{XA})$. Starting from \eqref{Eq:cv-dual},
\begin{align}
\text{cv}(\mathcal{N})
=& \min\{\textrm{Tr}[Z^{B}] : \mathbb{I}^{A} \otimes Z^{B} \geq_{\text{SEP}^{*}} J_{\mathcal{N}} \} \notag \\
=& \sup_{\{\rho_{x}^{A}\}} \min\{\textrm{Tr}[Z^{B}] : Z^{B} \geq \mathcal{N}(\rho_{x}^{A}) \, \forall x \in \mathcal{X} \} \notag \\
=& \sup_{\substack{\rho^{XA}: \\ \rho^{X} = \rho_{U}}} |\mathcal{X}| \min\{\textrm{Tr}[\widetilde{Z}^{B}] : \mathbb{I}^{A} \otimes \widetilde{Z}^{B} \geq \rho^{XB} \} \notag \\
=& \sup_{\substack{\rho^{XA}: \\ \rho^{X} = \rho_{U}}} |\mathcal{X}| \exp(-H_{\min}(X|B)_{\rho^{XB}}) \notag \\
=& \sup_{\{\rho_{x}^{A}\}} \min_{\sigma^B} \lambda_{\max}(\sum_{x} \op{x}{x} \otimes \sigma^{-1/2 } \rho^{B}_{x} \sigma^{-1/2}) \notag \\
=& \sup_{\rho^{XA}} \min_{\sigma^{B}} \exp( D_{\max}(\rho^{XB}||\rho^{X} \otimes \sigma^{B})) \notag \\
=& \exp(\chi_{\max}(\mathcal{N})) \label{eqn:cv-max-holevo-alt} \ ,
\end{align}
where the second equality uses $X^{AB} \in \text{SEP}^{*} \Leftrightarrow \bra{\alpha}\bra{\beta}X\ket{\alpha}\ket{\beta}\geq 0$ for all unit vectors $\ket{\alpha},\ket{\beta}$, together with the action of the channel in terms of the Choi matrix; the third uses the uniform probability distribution on $\mathcal{X}$; the fourth is by definition of the min-entropy; the fifth uses $D_{\max}(\rho||\sigma) = \log\lambda_{\max}(\sigma^{-1/2}\rho \sigma^{-1/2})$; the sixth uses $p(x)^{-1/2}p(x)p(x)^{-1/2} = 1$; and the final equality is by definition.
\subsection{Communication Value in Terms of Singlet Fraction}
Another advantage of viewing $\text{cv}(\mathcal{N})$ in terms of a restricted min-entropy is that it provides an alternative operational interpretation of the communication value in terms of the singlet fraction, which it inherits from the min-entropy conic program. Recall that for a bipartite density matrix $\omega^{AB}$, its $d_A$-dimensional singlet fraction is defined as \begin{equation}
F^+_{d_A}(\omega)=\max_U\bra{\Phi^+_{d_A}}(\mathbb{I}^A\otimes U^B)\omega^{AB}(\mathbb{I}^A\otimes U^B)^\dagger\ket{\Phi^+_{d_A}},\notag \end{equation} where the maximization is taken over all unitaries applied to system $B$ \cite{Bennett-1996a}. \begin{proposition} $\text{cv}(\mathcal{N})$ is the maximum singlet fraction achievable using an entanglement-breaking channel after the action of $\mathcal{N}$ on $\ket{\Phi^+_{d_A}}$. \end{proposition} \begin{proof} This follows the proof of the operational interpretation of the min-entropy \cite{Konig-2009a}, and we walk through the argument again here to exemplify that the only change is in restricting to the separable cone. Proposition \ref{Prop:Proposition-cv} shows that $\text{cv}(\mathcal{N})$ is the maximum value $\textrm{Tr}[\Omega^{AB}J_{\mathcal{N}}]$, where $\Omega^{AB}$ is the Choi matrix of a unital (entanglement-breaking) map, i.e. $\Omega^{AB} = J_{\mathcal{M}}$ for some entanglement-breaking unital map $\mathcal{M}$. Thus, \begin{align*} \text{cv}(\mathcal{N})&= \langle J_{\mathcal{M}}, J_{\mathcal{N}} \rangle \\ =& d_{A}^{2}\langle (\textrm{id} \otimes \mathcal{M})(\widehat{\Phi}^{+}), (\textrm{id} \otimes \mathcal{N})(\widehat{\Phi}^{+}) \rangle \\ =& d_{A}^{2} \langle \widehat{\Phi}^{+}, (\textrm{id} \otimes \mathcal{M}^\ast \circ \mathcal{N})(\widehat{\Phi}^{+}) \rangle \ , \end{align*} where we have used the definitions of the Choi matrix and adjoint map, and $\widehat{\Phi}^{+}$ is the normalized maximally entangled state.
Noting the adjoint of an entanglement-breaking map is entanglement-breaking, and the adjoint of a unital map is trace-preserving, we have \begin{align} \text{cv}(\mathcal{N}) &=d_{A}^2\max_{\mathcal{E}\in\text{EB}(B\to A)}F^+_{d_A}\left(\textrm{id} \otimes \mathcal{E} \circ \mathcal{N}(\Phi_{d_A}^+)\right) \notag\\ &=d_{A}^2\max_{\mathcal{E}\in\text{EB}(B\to A)}\bra{\Phi^+_{d_A}}\textrm{id} \otimes \mathcal{E} \circ \mathcal{N}(\Phi_{d_A}^+)\ket{\Phi^+_{d_A}},\label{Eq:eb-recoverability} \end{align} where the last line follows from the fact that the maximization over local unitaries in the definition of $F^+_{d_A}$ can be included in the maximization over entanglement-breaking channels. \end{proof} \begin{figure}
\caption{The communication value is a measure of the maximum singlet fraction achievable using an entanglement-breaking (EB) channel to recover the singlet after the action of $\mathcal{N}$. In this setting, non-multiplicativity means that the optimal EB channel $\Psi$ changes when using $\mathcal{N}$ in parallel such that the achievable singlet fraction increases. This suggests that $\text{cv}(\mathcal{N})$ is a measure of entanglement preservation.}
\label{fig:CVMeasure}
\end{figure}
Equation \eqref{Eq:eb-recoverability} yields the interpretation of $\text{cv}(\mathcal{N})$ as a measure of how well a maximally entangled state can be recovered after Alice sends one half of $\ket{\Phi^+_{d_A}}$ to Bob over the channel $\mathcal{N}$, when Bob is limited to performing an entanglement-breaking channel as post-processing error correction (see Fig. \ref{fig:CVMeasure}). In Section \ref{sec:entanglement_assisted_cv}, it will be shown that the entanglement-assisted communication value is characterized by $\exp(-H_{\min}(A|B)_{J_{\mathcal{N}}})$, and it thus has a similar operational interpretation except that Bob is now able to perform an arbitrary quantum channel to try to recover the maximally entangled state. Moreover, in Section \ref{sec:relaxations_of_cv}, when we consider relaxations of cv to other cones, we will find that the PPT min-entropy $H_{\min}^{\text{PPT}}$ also retains this operational interpretation, but with recovery being relaxed to the use of co-positive maps.
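A minimal sketch of Eq. \eqref{Eq:eb-recoverability} for the qubit identity channel: here the EB recovery map is taken to be measure-and-prepare in the computational basis (an assumed, illustrative choice, which is optimal in this case since it reproduces $\text{cv}(\textrm{id})=d_A$), and $d_A^2\, F^+_{d_A}$ of the resulting state indeed equals $d_A$.

```python
import numpy as np

d = 2
# Normalized maximally entangled state |Phi+>.
phi = np.zeros(d * d)
for i in range(d):
    phi[i * d + i] = 1.0 / np.sqrt(d)
Phi = np.outer(phi, phi)

def mp_on_B(omega, d):
    """Measure-and-prepare (EB) channel on Bob's half:
    rho -> sum_y (I (x) |y><y|) rho (I (x) |y><y|)."""
    out = np.zeros_like(omega)
    for y in range(d):
        P = np.kron(np.eye(d), np.diag(np.eye(d)[y]))
        out += P @ omega @ P
    return out

omega = mp_on_B(Phi, d)                          # identity channel, then EB recovery
singlet_fraction = (phi.conj() @ omega @ phi).real

# d_A^2 * F+ reproduces cv(id) = d_A.
assert np.isclose(d ** 2 * singlet_fraction, d)
```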
\subsection{The Geometric Measure of Entanglement and Maximum Output Purity}
The channel cv is closely related to the geometric measure of entanglement (GME) \cite{Shimony-1995a, Wei-2003a}. For a bipartite positive operator $\omega^{AB}$, its GME is defined as $G(\omega)=-\log\Lambda^2(\omega)$, where \begin{align} \Lambda^2(\omega)=\max_{\ket{\alpha}\ket{\beta}}\bra{\alpha,\beta}\omega\ket{\alpha,\beta}. \end{align} We can phrase this as the SDP \begin{align} \Lambda^2(\omega)=\max &\;\;\textrm{Tr}[\sigma^{AB}\omega]\notag\\ \text{subject to}&\;\;\textrm{Tr}[\sigma^{AB}]=1\notag\\ &\;\;\sigma^{AB}\in\text{SEP}(A:B). \label{Eq:GME-SDP} \end{align}
For a channel $\mathcal{N}$ with Choi matrix $J_\mathcal{N}$, this SDP can be expressed in dual form as \begin{align} \Lambda^2(J_{\mathcal{N}})=\min &\;\;\lambda\notag\\ \text{subject to}&\;\;\lambda\mathbb{I}^{AB}-J_{\mathcal{N}}^{AB}\in\text{SEP}^*(A:B). \label{Eq:GM-dual} \end{align} For any dual feasible $\lambda$ in Eq. \eqref{Eq:GM-dual}, we can take $Z=\lambda\mathbb{I}^B$ in Eq. \eqref{Eq:cv-dual} to obtain \begin{align} \text{cv}(\mathcal{N})\leq d_B\Lambda^2(J_\mathcal{N}). \end{align} On the other hand, suppose that $Z$ is dual feasible in Eq. \eqref{Eq:cv-dual}. Then since $\mathbb{I}^A\otimes (\lambda\mathbb{I}^B-Z)\in\text{SEP}^*(A:B)$ for $\lambda=\Vert Z\Vert_\infty$, we have that \begin{align} \lambda\mathbb{I}^A\otimes \mathbb{I}^B-J_{\mathcal{N}}^{AB}&=\mathbb{I}^A\otimes (\lambda\mathbb{I}-Z)^B+\mathbb{I}^A\otimes Z^B-J_{\mathcal{N}}^{AB}\notag\\ &\in\text{SEP}^*(A:B). \end{align} Hence $\lambda \mathbb{I}^{AB}$ is dual feasible in Eq. \eqref{Eq:GM-dual}. Since $\lambda\leq \textrm{Tr}[Z]$ for every $Z$, we conclude \begin{align} \label{Eq:cv-GM} \Lambda^2(J_{\mathcal{N}})\leq\text{cv}(\mathcal{N})\leq d_B\Lambda^2(J_\mathcal{N}). \end{align} While these relationships are somewhat obvious from the formulation of the problem, it is nice to see them explicitly falling out of the two conic programs.
It is known that $\Lambda^2(J_\mathcal{N})$ is equal to the maximum output purity of the channel $\mathcal{N}$ \cite{Werner-2002a, Zhu-2011a}, which is defined as
\[\nu_{\infty}(\mathcal{N}) := \underset{\rho \in \mathcal{D}(A)}{\sup} \|\mathcal{N}(\rho)\|_{\infty}.\] To see the equivalence, first note that this supremum is attained for a pure-state input due to convexity of the operator norm. Then \begin{align}
\nu_{\infty}(\mathcal{N})&= \underset{\ket{\alpha}}{\sup} \|\mathcal{N}(\op{\alpha}{\alpha})\|_{\infty}\notag\\
&= \underset{\ket{\alpha}}{\sup}\sup_{\ket{\beta}}\bra{\beta}\mathcal{N}(\op{\alpha}{\alpha})\ket{\beta}\notag\\
&=\sup_{\ket{\alpha},\ket{\beta}}\bra{\alpha^*,\beta}J_{\mathcal{N}}\ket{\alpha^*,\beta}\notag\\
&=\Lambda^2(J_{\mathcal{N}}). \label{eqn:GME-max-output-purity-equiv} \end{align} An alternative way to prove this equality uses \cite{Werner-2002a}. In that work it is established that $\nu_{\infty}(\mathcal{N}) = \Lambda^{2}(\ket{\mathcal{N}})$, where $\ket{\mathcal{N}}$ is an un-normalized vector induced by the Kraus representation of $\mathcal{N}$. One can in fact show that $\ket{\mathcal{N}}$ is a purification of the Choi matrix, though to the best of our knowledge this has not been stated previously. As the GME of a pure state is the same as the GME of the pure state with a single register traced out \cite{Jung-2008}, one can conclude $\nu_{\infty}(\mathcal{N}) = \Lambda^{2}(J_{\mathcal{N}})$. Although $\Lambda^2(J_\mathcal{N})=\nu_\infty(\mathcal{N})$ is a lower bound on $\text{cv}(\mathcal{N})$ by Eq. \eqref{Eq:cv-GM}, in general this bound is not tight. Hence the communication value captures a property of a quantum channel that is distinct from maximum output purity. In fact, we have the following. \begin{proposition} $\nu_\infty(\mathcal{N})=\text{cv}(\mathcal{N})$ if and only if $\mathcal{N}$ is a replacer channel. \end{proposition} \begin{proof} If $\mathcal{N}$ is a replacer channel, say, $\mathcal{N}(\rho)=\op{\beta}{\beta}$ $\forall \rho$, then clearly $\nu_\infty(\mathcal{N})=\text{cv}(\mathcal{N})=1$. On the other hand, suppose that $\nu_\infty(\mathcal{N})=\text{cv}(\mathcal{N})$ and let $\ket{\alpha}\ket{\beta} := \mathrm{argmax}(\Lambda^{2}(J_{\mathcal{N}}))$. Then for an arbitrary state $\rho^A\in\mathcal{D}(A)$ consider the operator \begin{align}\notag
\Omega^{AB}=\op{\alpha}{\alpha}\otimes\op{\beta}{\beta}+\rho\otimes(\mathbb{I}-\op{\beta}{\beta}). \end{align} Note that since $\textrm{Tr}_A\Omega^{AB}=\mathbb{I}^B$ and $\Omega^{AB}\in\text{SEP}(A:B)$, it is a feasible solution for the optimization of $\text{cv}(\mathcal{N})$ in Proposition \ref{Prop:Proposition-cv}. Hence for all $\rho$ we have \begin{align}
\text{cv}(\mathcal{N})&\geq \textrm{Tr}[\Omega^{AB} J_{\mathcal{N}}]\notag\\
&=\bra{\alpha,\beta}J_{\mathcal{N}}\ket{\alpha,\beta}+\textrm{Tr}[\mathcal{N}(\rho^*)(\mathbb{I}-\op{\beta}{\beta})]\notag\\&=\nu_\infty(\mathcal{N})+\textrm{Tr}[\mathcal{N}(\rho^*)(\mathbb{I}-\op{\beta}{\beta})]. \end{align} But by the assumption $\nu_\infty(\mathcal{N})=\text{cv}(\mathcal{N})$, the second term must vanish for all $\rho$. This means that $\mathcal{N}$ is a replacer channel, outputting $\op{\beta}{\beta}$ for all its trace-one inputs. \end{proof}
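As a quick numerical illustration (our own sketch, not part of the original derivation), the key step in Eq. \eqref{eqn:GME-max-output-purity-equiv}, namely $\bra{\alpha^*,\beta}J_{\mathcal{N}}\ket{\alpha^*,\beta}=\bra{\beta}\mathcal{N}(\op{\alpha}{\alpha})\ket{\beta}$, can be checked directly on random product states. We use an amplitude damping channel with an illustrative parameter $\gamma=0.4$; since $\ket{0}$ is a fixed point of this channel and every channel output is a density matrix, $\nu_\infty(\mathcal{N})=\Lambda^2(J_{\mathcal{N}})=1$, attained by the product state $\ket{0,0}$.

```python
import numpy as np

rng = np.random.default_rng(0)
g = 0.4  # illustrative damping parameter (our choice)

# Kraus operators of the amplitude damping channel
K0 = np.array([[1, 0], [0, np.sqrt(1 - g)]], dtype=complex)
K1 = np.array([[0, np.sqrt(g)], [0, 0]], dtype=complex)

def channel(rho):
    return K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

def ketbra(i, j, d=2):
    M = np.zeros((d, d), dtype=complex)
    M[i, j] = 1
    return M

# Choi matrix in the normalization Tr_B[J] = I_A
J = sum(np.kron(ketbra(i, j), channel(ketbra(i, j)))
        for i in range(2) for j in range(2))

def rand_ket(d=2):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

# <alpha*, beta| J |alpha*, beta> equals <beta| N(|alpha><alpha|) |beta>
for _ in range(200):
    a, b = rand_ket(), rand_ket()
    ab = np.kron(a.conj(), b)
    lhs = (ab.conj() @ J @ ab).real
    rhs = (b.conj() @ channel(np.outer(a, a.conj())) @ b).real
    assert abs(lhs - rhs) < 1e-10

# |0> is a fixed point, so |0,0> attains the maximal overlap of 1
e00 = np.kron([1, 0], [1, 0]).astype(complex)
print((e00.conj() @ J @ e00).real)  # 1.0
```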
\section{Examples}
Having characterized the channel cv in a variety of different ways, we now focus on the problem of computing it. In general, this is a challenging task. Here we provide closed-form solutions for arbitrary qubit channels and the family of Werner-Holevo channels. The latter will also provide a useful case study when we study relaxations of the communication value in Section \ref{sec:relaxations_of_cv}.
\subsection{Qubit Channels}
\label{Sect:qubits}
Every qubit channel $\mathcal{N}$ induces an affine transformation on the Bloch vector of the input state. In more detail, every positive operator $\rho$ can be written as $\rho=\gamma(\mathbb{I}+\mathbf{r}\cdot\vec{\sigma})$, where $\gamma\geq 0$ and $\mathbf{r}\in\mathbb{R}^3$ has norm no greater than one. Then when $\mathcal{N}$ acts on $\rho$, it induces an affine transformation $\mathbf{r}\mapsto A\mathbf{r}+\mathbf{c}$, with $A$ being some $3\times 3$ matrix and $\mathbf{c}\in\mathbb{R}^3$. Now, let $\sigma=\sum_{k}\alpha^T_k\otimes \beta_k$ be an arbitrary two-qubit separable operator with $\textrm{Tr}[\alpha_k]=1$ and $\sum_k\beta_k=\mathbb{I}$. We give these Bloch sphere representations $\alpha_k=\frac{1}{2}(\mathbb{I}+\mathbf{a}_k\cdot\vec{\sigma})$ and $\beta_k=\gamma_k(\mathbb{I}+\mathbf{b}_k\cdot\vec{\sigma})$ so that \begin{align} \textrm{Tr}[\sigma J_\mathcal{N}]&=\sum_k\textrm{Tr}[\beta_k\mathcal{N}(\alpha_k)]\notag\\ &=\sum_k\frac{\gamma_k}{2}\textrm{Tr}[(\mathbb{I}+\mathbf{b}_k\cdot\vec{\sigma})(\mathbb{I}+(A\mathbf{a}_k+\mathbf{c})\cdot\vec{\sigma})]\notag\\ &=1+\sum_k\gamma_k\mathbf{b}_k\cdot (A\mathbf{a}_k+\mathbf{c})\notag\\ &=1+\sum_k\gamma_k\mathbf{b}_k^TA\mathbf{a}_k, \end{align} where the last equality follows from the fact that $\sum_k\gamma_k\mathbf{b}_k=0$ since $\sum_k\beta_k=\mathbb{I}$. Our task then is to maximize $\sum_k\gamma_k\mathbf{b}_k^TA\mathbf{a}_k$ under the constraints that (i) $\sum_k\gamma_k\mathbf{b}_k=0$, (ii) $\sum_k\gamma_k=1$, and (iii) $\Vert\mathbf{b}_k\Vert,\Vert\mathbf{a}_k\Vert\leq 1$. It is easy to see that this maximization is attained by taking $\mathbf{b}_1$ and $\mathbf{b}_2$ to be anti-parallel unit vectors aligned with the left singular vector of $A$ corresponding to its largest singular value, and likewise $\mathbf{a}_1$ and $\mathbf{a}_2$ to be anti-parallel unit vectors aligned with the corresponding right singular vector. Additionally, taking $\gamma_k=\frac{1}{2}$ for $k=1,2$ satisfies all the conditions. Hence we have the following.
\begin{theorem} \label{Thm:qubit-cv} For a qubit channel $\mathcal{N}$, let $A$ be the $3\times 3$ correlation matrix of $J_\mathcal{N}$; i.e. $A_{ij}=\frac{1}{2}\textrm{Tr}[(\sigma_i\otimes\sigma_j) J_\mathcal{N}]$. Then \begin{equation} \text{cv}(\mathcal{N})=1+\sigma_{\max}(A) \end{equation} where $\sigma_{\max}(A)$ is the largest singular value of $A$. \end{theorem}
\begin{remark} For a unital channel $\mathcal{N}$, the Bloch vector $\mathbf{c}$ is zero, so the maximum overlap $\Lambda^2(J_{\mathcal{N}})=\max_{\mathbf{a},\mathbf{b}}\frac{1}{2}\left(1+\mathbf{b}^TA\mathbf{a}\right)=\frac{1}{2}(1+\sigma_{\max}(A))$ is attained by a single product state. It follows that \begin{equation} \label{Eq:unital-GM-tight} \text{cv}(\mathcal{N})=2\Lambda^2(J_{\mathcal{N}}); \end{equation} i.e. the upper bound in Eq. \eqref{Eq:cv-GM} is tight. \end{remark}
\noindent \textit{Example: Pauli Channels.} As a nice example of Theorem \ref{Thm:qubit-cv}, consider the family of Pauli channels, which consists of any qubit channel having the form \begin{equation} \mathcal{N}(\rho)=p_0 \rho+ p_1X\rho X+ p_2 Y\rho Y +p_3 Z\rho Z, \end{equation} where $\{X,Y,Z\}$ are the standard Pauli matrices and $\sum_{i=0}^3p_i=1$. We can write the Choi matrix as \begin{equation} J_{\mathcal{N}}=2\sum_{i=0}^3p_i\op{\Phi^+_i}{\Phi^+_i}, \end{equation} where the $\ket{\Phi^+_i}$ denote the four Bell states. It is easy to see that the correlation matrix of $J_{\mathcal{N}}$ is diagonal with entries $\{p_0+p_1-p_2-p_3, -p_0+p_1-p_2+p_3, p_0-p_1-p_2+p_3\}$. Therefore, by Theorem \ref{Thm:qubit-cv} we can conclude that \begin{align} \label{Eq:cv-Pauli} \text{cv}(\mathcal{N})=2(p^\downarrow_{3}+p^\downarrow_{2}), \end{align} where $p^\downarrow_{3}$ and $p^\downarrow_{2}$ are the two largest of the probabilities $p_i$.
Notice that $\text{cv}(\mathcal{N})$ will equal its largest value of two if and only if there are no more than two Pauli gates applied with nonzero probability in $\mathcal{N}$. In particular, when $p^\downarrow_{3}=p^\downarrow_{2}=\frac{1}{2}$ the channel is entanglement-breaking; in fact it is a classical channel. Hence, this example shows that a channel's communication value captures a property distinct from its ability to transmit entanglement.
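Theorem \ref{Thm:qubit-cv} and Eq. \eqref{Eq:cv-Pauli} are easy to cross-check numerically. The following sketch (ours; the probability vector is an arbitrary illustrative choice, with gates ordered $\mathbb{I},X,Y,Z$ in the code) builds the Choi matrix of a Pauli channel, extracts the correlation matrix, and compares $1+\sigma_{\max}(A)$ against twice the sum of the two largest probabilities.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

p = [0.5, 0.3, 0.15, 0.05]   # illustrative (p0, p1, p2, p3) for gates (I, X, Y, Z)
gates = [I2, X, Y, Z]

def channel(rho):
    return sum(pi * G @ rho @ G.conj().T for pi, G in zip(p, gates))

def ketbra(i, j):
    M = np.zeros((2, 2), dtype=complex)
    M[i, j] = 1
    return M

# Choi matrix in the normalization Tr_B[J] = I_A
J = sum(np.kron(ketbra(i, j), channel(ketbra(i, j)))
        for i in range(2) for j in range(2))

# Correlation matrix A_ij = (1/2) Tr[(sigma_i x sigma_j) J]
paulis = [X, Y, Z]
A = np.array([[0.5 * np.trace(np.kron(si, sj) @ J).real for sj in paulis]
              for si in paulis])

cv_theorem = 1 + np.linalg.svd(A, compute_uv=False)[0]  # cv = 1 + sigma_max(A)
cv_formula = 2 * sum(sorted(p)[-2:])                    # cv = 2 (two largest p_i)
print(cv_theorem, cv_formula)
```

For this probability vector both expressions give the same value, and the correlation matrix comes out diagonal, as claimed above.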
\subsection{Werner-Holevo Channels}
\label{Sect:Werner-Holevo}
The Werner-Holevo family of channels \cite{Werner-2002a,Leung-2015a} is defined by \begin{align}\label{eqn:WH-defn} \mathcal{W}_{d,\lambda}(X) := \lambda \Phi_{0}(X) + (1-\lambda) \Phi_{1}(X) \ , \end{align} where \begin{align*} \Phi_{0}(X) &= \frac{1}{d+1} \left((\textrm{Tr}(X))\mathbb{I} + X^{\textrm{T}} \right) \\ \Phi_{1}(X) &=\frac{1}{d-1}\left((\textrm{Tr}(X))\mathbb{I} - X^{\textrm{T}} \right) \ . \end{align*} This implies that the Choi matrix is given by \begin{align}
J_{\mathcal{W}_{d,\lambda}} =\lambda \frac{2}{d+1}\Pi_+ + (1-\lambda)\frac{2}{d-1}\Pi_-, \end{align} where $\Pi_+ = \frac{1}{2}\left(\mathbb{I} + \mathbb{F}\right)$ and $\Pi_- = \mathbb{I} - \Pi_+$.
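As a consistency check (a sketch of ours, with illustrative values $d=3$ and $\lambda=0.7$), the Choi matrix computed directly from the channel action in Eq. \eqref{eqn:WH-defn}, with the dimension taken to be $d$, agrees with the projector expression above and satisfies the normalization $\textrm{Tr}_B[J]=\mathbb{I}_A$.

```python
import numpy as np

d, lam = 3, 0.7  # illustrative dimension and mixing parameter

def W(Xm):
    # Werner-Holevo channel W_{d,lam} applied to a matrix Xm
    t = np.trace(Xm)
    return (lam * (t * np.eye(d) + Xm.T) / (d + 1)
            + (1 - lam) * (t * np.eye(d) - Xm.T) / (d - 1))

def ketbra(i, j):
    M = np.zeros((d, d), dtype=complex)
    M[i, j] = 1
    return M

# Choi matrix from the channel action, normalization Tr_B[J] = I_A
J = sum(np.kron(ketbra(i, j), W(ketbra(i, j)))
        for i in range(d) for j in range(d))

# Choi matrix from the projector formula
F = sum(np.kron(ketbra(i, j), ketbra(j, i))
        for i in range(d) for j in range(d))       # swap operator
Pp = (np.eye(d * d) + F) / 2
Pm = np.eye(d * d) - Pp
J_proj = lam * 2 / (d + 1) * Pp + (1 - lam) * 2 / (d - 1) * Pm

print(np.allclose(J, J_proj))                       # the two expressions agree
TrB = np.trace(J.reshape(d, d, d, d), axis1=1, axis2=3)
print(np.allclose(TrB, np.eye(d)))                  # Tr_B[J] = I_A
```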
\begin{proposition}\label{prop:cv-WH-1-copy}
The communication value of the Werner-Holevo channel is given by
\begin{align*}
\text{cv}(\mathcal{W}_{d,\lambda})
=
\begin{cases}
\frac{d(d+1-2\lambda)}{d^{2}-1} & \lambda \leq \frac{1+d}{2d} \\
\frac{2d\lambda}{1+d} & \lambda > \frac{1+d}{2d}
\end{cases}
\end{align*} \end{proposition} \begin{proof} Let $\mathcal{U}(\mathbb{C}^d)$ denote the group of $d\times d$ unitary operators. Since $J_{\mathcal{W}_{d,\lambda}}$ enjoys $U\otimes U$ invariance under conjugation for every $U\in\mathcal{U}(\mathbb{C}^d)$ \cite{Werner-1989a}, we can apply the ``twirling map'' \begin{equation} \label{Eq:UU-twirling}
\mathcal{T}_{UU}(X)=\int_{\mathcal{U}} (U\otimes U) X(U\otimes U)^\dagger d U \end{equation} to the optimizer $\Omega^{AB}$ while leaving the objective value invariant: \[\textrm{Tr}[ \mathcal{T}_{UU}(J_{\mathcal{W}}) \Omega^{AB} ] = \textrm{Tr}[J_{\mathcal{W}} \mathcal{T}_{UU}(\Omega^{AB})].\] Furthermore, since $\mathcal{T}_{UU}$ preserves the constraints on $\Omega^{AB}$, we can conclude without loss of generality that the optimizer is given by $X := \mathcal{T}_{UU}(\Omega^{AB}) = x\mathbb{I}^{AB} + y \mathbb{F}$ for some choice of $x,y$, i.e. it is an element of the $U\otimes U$-invariant space of operators.
As the spaces of PPT and SEP $UU$-invariant operators coincide \cite{Vollbrecht-2001a}, we can relax the optimization program to require only $X \in \text{PPT}$. As shown in \eqref{eqn:PPTPrimal}, this means we require that $X$ satisfy $X \geq 0$, $\Gamma^{B}(X) \geq 0$, and $\textrm{Tr}_{A}[X] = \mathbb{I}^{B}$. We now convert these constraints into linear constraints on $x,y$.
Note that $\{\Pi_{+},\Pi_{-}\}$ define an orthogonal basis for the space spanned by $\{\mathbb{I},\mathbb{F}\}$. Therefore we can write $X = (x+y)\Pi_{+} + (x-y)\Pi_{-}$, and the positivity constraints on $X$ are given by \begin{align}
x \pm y \geq 0 \ . \end{align} Similarly, $\Gamma^{B}(X) = x\mathbb{I}^{AB} + y\Phi^{+}$, where $\Phi^{+}$ is the unnormalized maximally entangled state. An orthogonal basis for the space spanned by $\{\mathbb{I}^{AB},\Phi^{+}\}$ is given by $\{\Phi^{\perp} := d\mathbb{I}-\Phi^{+},\Phi^{+}\}$. Therefore, $\Gamma^{B}(X) = \frac{x}{d}(\Phi^{\perp}+\Phi^{+}) + y\Phi^{+}$. It follows that the partial-transpose positivity constraints simplify to \begin{align}
x \geq 0 \hspace{1cm} x+yd \geq 0 \end{align}
The objective function is given by \begin{align}
& \textrm{Tr}[J_{\mathcal{W}_{d,\lambda}}X] \notag \\
= & \textrm{Tr}\left[\left(\frac{2\lambda}{d+1}\Pi_+ + \frac{2(1-\lambda)}{d-1}\Pi_{-}\right)X\right] \notag \\
= & \lambda d (x+y) + (1-\lambda)d(x-y) \notag \\
= & d(x+(2\lambda - 1)y) \ . \end{align} Lastly, the trace condition is given by \begin{align}
x\textrm{Tr}_{A}[\mathbb{I}^{AB}] + y\textrm{Tr}_{A}[\mathbb{F}] = \mathbb{I}^{B}
\Rightarrow xd + y = 1 \ . \end{align} Combining these, we have the linear program \begin{align}
\text{maximize} \quad & d(x+(2\lambda - 1)y) \notag \\
\text{subject to} \quad & xd + y = 1 \label{eqn:WH-LP-trace} \\
& x+yd \geq 0 \label{eqn:WH-LP-3} \\
& x+y \geq 0 \label{eqn:WH-LP-2} \\
& x-y \geq 0 \label{eqn:WH-LP-1} \\
& x \geq 0 \label{eqn:WH-LP-4} \ . \end{align} \eqref{eqn:WH-LP-trace} implies $y = 1 -xd$. \eqref{eqn:WH-LP-3} and \eqref{eqn:WH-LP-1} imply $\frac{1}{d+1} \leq x \leq \frac{d}{d^{2}-1}$. \eqref{eqn:WH-LP-2} implies $x \leq \frac{1}{d-1}$, which is always satisfied when $x \leq \frac{d}{d^{2}-1}$. Finally, \eqref{eqn:WH-LP-4} is automatically satisfied, since $x \geq \frac{1}{d+1} > 0$. Thus we have reduced the LP to \begin{align}
\text{maximize} \quad & d(x+(2\lambda - 1)(1-xd)) \label{eqn:WH-LP-simplified-obj} \\
\text{subject to} \quad & \frac{1}{d+1} \leq x \leq \frac{d}{d^{2}-1} \notag \ . \end{align} The objective is linear in $x$ with slope $d(1-(2\lambda-1)d)$, which is nonnegative precisely when $\lambda \leq \frac{1+d}{2d}$. Therefore, \begin{align*} x^{*} =
\begin{cases}
\frac{d}{d^{2}-1} & \lambda \leq \frac{1+d}{2d} \\
\frac{1}{d+1} & \text{ otherwise} \ .
\end{cases} \end{align*} Plugging $x^{*}$ into \eqref{eqn:WH-LP-simplified-obj} completes the proof. \end{proof} \noindent In Section \ref{sec:relaxations_of_cv}, we generalize this derivation to determine the PPT relaxation of cv for the $n$-fold Werner-Holevo channels.
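Since the reduced problem is a one-variable LP, its optimum sits at an endpoint of the feasible interval $\left[\frac{1}{d+1},\frac{d}{d^{2}-1}\right]$ implied by \eqref{eqn:WH-LP-3} and \eqref{eqn:WH-LP-1}. The following pure-Python sketch (ours) evaluates the objective, with $y=1-xd$ substituted, at both endpoints and confirms that the maximum matches the closed form of Proposition \ref{prop:cv-WH-1-copy} for a few illustrative choices of $d$ and $\lambda$.

```python
def cv_wh(d, lam):
    # Closed form from Proposition cv-WH-1-copy
    if lam <= (1 + d) / (2 * d):
        return d * (d + 1 - 2 * lam) / (d**2 - 1)
    return 2 * d * lam / (1 + d)

def cv_wh_lp(d, lam):
    # Reduced LP: maximize d(x + (2*lam - 1)*y) with y = 1 - x*d.
    # A linear objective is maximized at one of the two interval endpoints.
    obj = lambda x: d * (x + (2 * lam - 1) * (1 - x * d))
    return max(obj(1 / (d + 1)), obj(d / (d**2 - 1)))

for d in (2, 3, 5, 10):
    for lam in (0.0, 0.3, 0.5, (1 + d) / (2 * d), 0.9, 1.0):
        assert abs(cv_wh(d, lam) - cv_wh_lp(d, lam)) < 1e-12
print("closed form matches LP endpoints")
```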
\section{Multiplicativity of cv }
\label{Sect:Multiplicativity}
We next consider how the communication value behaves when we combine two or more channels. The cv is multiplicative for two channels $\mathcal{N}$ and $\mathcal{M}$ if \begin{equation} \text{cv}(\mathcal{N}\otimes\mathcal{M})=\text{cv}(\mathcal{N})\text{cv}(\mathcal{M}). \end{equation} When multiplicativity holds, an optimal strategy for guessing channel inputs uses uncorrelated inputs and measurements across the two channels. A concrete example of non-multiplicativity is given by the Werner-Holevo family of channels, as proven in Section \ref{Sect:non-multiplicativity}. In general, it is a hard problem to decide whether two channels have a multiplicative communication value. More progress can be made when relaxing this problem to the PPT cone, and we conduct such an analysis in Section \ref{sec:relaxations_of_cv}. Here we resolve the question of multiplicativity in a few special cases.
\subsection{Entanglement-Breaking Channels}
Our first result shows that non-multiplicativity arises only if the channel is capable of transmitting entanglement.
\begin{theorem} \label{Thm:EBC-mult} If $\mathcal{N}$ is an entanglement-breaking channel, then $\text{cv}(\mathcal{N}\otimes\mathcal{M})=\text{cv}(\mathcal{N})\text{cv}(\mathcal{M})$ for an arbitrary channel $\mathcal{M}$. \end{theorem} \begin{proof} Since $\mathcal{N}$ is EB, its Choi matrix has the form $J_{\mathcal{N}}=\sum_x\Pi_x\otimes\rho_x$ for some POVM $\{\Pi_x\}_x$. The dual optimization of cv (i.e. Eq. \eqref{Eq:cv-dual}) can then be expressed as \begin{align} \label{Eq:EB-multiplicative} &\text{cv}(\mathcal{N}\otimes\mathcal{M})=\min \;\;\textrm{Tr}[Z^{BB'}]\notag\\ &\text{subject to}\;\; Z^{BB'}\geq \sum_x\textrm{Tr}[\Pi_x\rho^A]\op{x}{x}\otimes\mathcal{M}(\sigma_x) \notag\\ & \qquad\sigma_x:=\textrm{Tr}_A[(\Pi^A_x\otimes\mathbb{I}^{A'})\rho^{AA'}]/\textrm{Tr}[\Pi_x\rho^A],\;\; \forall \rho^{AA'}. \end{align} Suppose that $Z^B\geq \mathcal{N}(\rho)$ for all $\rho$ and $Z^{B'}\geq\mathcal{M}(\sigma)$ for all $\sigma$. Then \begin{align}
\sum_x\textrm{Tr}[\Pi_x\rho^A]\op{x}{x}\otimes\mathcal{M}(\sigma_x)&\leq \sum_x\textrm{Tr}[\Pi_x\rho^A]\op{x}{x}\otimes Z^{B'}\notag\\
&\leq Z^B\otimes Z^{B'}.\notag \end{align} Thus $Z^B\otimes Z^{B'}$ is feasible in Eq. \eqref{Eq:EB-multiplicative}. By choosing $Z^B$ and $Z^{B'}$ to be the dual optimizers for $\mathcal{N}$ and $\mathcal{M}$, respectively, we have $\text{cv}(\mathcal{N}\otimes\mathcal{M})\leq \text{cv}(\mathcal{N})\text{cv}(\mathcal{M})$. Since the opposite inequality trivially holds, we have the equality $\text{cv}(\mathcal{N}\otimes\mathcal{M})= \text{cv}(\mathcal{N})\text{cv}(\mathcal{M})$. \end{proof} \noindent By applying Theorem \ref{Thm:EBC-mult} iteratively across $n$ copies of an entanglement-breaking channel, we obtain a single-letter formulation of the cv capacity. \begin{corollary} If $\mathcal{N}$ is an entanglement-breaking channel, then \begin{equation}
\mathcal{CV}(\mathcal{N})=\text{cv}(\mathcal{N}). \end{equation} \end{corollary}
\begin{comment}
Our first result shows that non-multiplicativity is a fully quantum effect. \begin{lemma}\label{lemma:non-mult-fully-quantum-effect} If $\mathcal{N}$ is either a classical-to-quantum or quantum-to-classical channel, then $\text{cv}(\mathcal{N}\otimes\mathcal{M})=\text{cv}(\mathcal{N})\text{cv}(\mathcal{M})$ for an arbitrary channel $\mathcal{M}$. \end{lemma} \begin{proof} First suppose that $\mathcal{N}$ is classical-to-quantum so that $J_\mathcal{N}=\sum_x\op{x}{x}^X\otimes\rho_x$ for some collection of states $\{\rho_x\}_x$. Then the dual formulation of cv (i.e. Eq. \eqref{Eq:cv-dual}) implies that \begin{align} \label{Eq:classical-multiplicativ-1} \text{cv}(\mathcal{N}\otimes\mathcal{M})&=\min \;\;\textrm{Tr}[Z^{BB'}]\notag\\ \text{subject to}&\;\; Z^{BB'}\geq \rho_x\otimes\mathcal{M}(\sigma_x) \quad\forall \{\rho_x,\sigma_x\}. \end{align} If $Z^B\geq \rho$ for all $\rho$ and $Z^{B'}\geq\mathcal{M}(\sigma)$ for all $\sigma$ then clearly $Z^B\otimes Z^{B'}$ is feasible for the optimization in Eq. \eqref{Eq:classical-multiplicativ-1}. By choosing $Z^B$ and $Z^{B'}$ to be the dual optimizers for $\mathcal{N}$ and $\mathcal{M}$, respectively, we have $\text{cv}(\mathcal{N}\otimes\mathcal{M})\leq \text{cv}(\mathcal{N})\text{cv}(\mathcal{M})$. Since the opposite inequality trivially holds, we have the equality $\text{cv}(\mathcal{N}\otimes\mathcal{M})= \text{cv}(\mathcal{N})\text{cv}(\mathcal{M})$. Next suppose that $\mathcal{N}$ is quantum-to-classical so that $J_{\mathcal{N}}=\sum_x\Pi_x\otimes\op{x}{x}$ for some POVM $\{\Pi_x\}_x$. This time the dual formulation takes the form \begin{align} \label{Eq:classical-multiplicativ-2} \text{cv}(\mathcal{N}\otimes\mathcal{M})&=\min \;\;\textrm{Tr}[Z^{BB'}]\notag\\ \text{subject to}&\;\; Z^{BB'}\geq \sum_x\textrm{Tr}[\Pi_x\rho^A]\op{x}{x}\otimes\mathcal{M}(\sigma_x) \notag\\ &\forall \rho^{AA'}, \;\sigma_x:=\textrm{Tr}_A[\Pi_x\rho^{AA'}]/\textrm{Tr}[\Pi_x\rho^A]. 
\end{align} If $Z^B\geq \mathcal{N}(\rho)$ for all $\rho$ and $Z^{B'}\geq\mathcal{M}(\sigma)$ for all $\sigma$ then \begin{align}
\sum_x\textrm{Tr}[\Pi_x\rho^A]\op{x}{x}\otimes\mathcal{M}(\sigma_x)&\leq \sum_x\textrm{Tr}[\Pi_x\rho^A]\op{x}{x}\otimes Z^{B'}\notag\\
&\leq Z^B\otimes Z^{B'}.\notag \end{align} Thus $Z^B\otimes Z^{B'}$ is feasible in Eq. \eqref{Eq:classical-multiplicativ-2}, which again implies $\text{cv}(\mathcal{N}\otimes\mathcal{M})\leq \text{cv}(\mathcal{N})\text{cv}(\mathcal{M})$ and hence $\text{cv}(\mathcal{N}\otimes\mathcal{M})= \text{cv}(\mathcal{N})\text{cv}(\mathcal{M})$. \end{proof}
\todo{***}
\todo{[[I think my improved Lemma 3 now supersedes your Proposition 7. But I'm not sure what you mean by an EB channel ``decomposing'' into q-to-c or c-to-q channels. Could you please comment on this.]] }
\ian{EDIT: I think if the Lemma 1 is converted to just being EB channels, this whole blue part can be deleted.
A related question would be the multiplicativity of the entanglement-breaking channel with an arbitrary channel. This is known to hold for some measures such as the minimum output entropy, which is known to be multiplicative for an entanglement-breaking channel ran in parallel with any channel \cite{King-2002}. While this problem remains unresolved for the communication value, we note the following bound for entanglement-breaking channels which is strong evidence that entanglement-breaking channels are multiplicative with an arbitrary channel. \begin{corollary} Let $\mathcal{N}$ be an entanglement-breaking channel and $\mathcal{M}$ be an arbitrary channel. The communication value of $\mathcal{N}$ is bounded according to Moreover, \begin{align*}
\text{cv}(\mathcal{N})cv(\mathcal{M}) \leq\text{cv}(\mathcal{N} \otimes \mathcal{M}) \leq \zeta \,\text{cv}(\mathcal{M}) \ , \end{align*} where $\zeta := \min \left\{\text{cv}(\mathcal{N}_{CQ}),\text{cv}(\mathcal{N}_{QC}) \right\}$, $\mathcal{N}_{QC}$ and $\mathcal{N}_{CQ}$ are the quantum-to-classical and classical-to-quantum channels respectively such that $\mathcal{N} = \mathcal{N}_{CQ} \circ \mathcal{N}_{QC}$. \todo{[[The first line here is just DPI and first inequality of the second line is trivial. Let's just focus on the second inequality of the second line since this is really the connection to multiplicativity.]]}\ian{Let me know if this fixes it.} \end{corollary} \begin{proof} It is well-known that any entanglement-breaking channel $\mathcal{N}$ may be implemented by some quantum-to-classical channel $\mathcal{N}_{i,QC}$ followed by a classical-to-quantum channel $\mathcal{N}_{i,CQ}$. That is, without loss of generality, $\mathcal{N} = \mathcal{N}_{CQ} \circ \mathcal{N}_{QC}$. Equivalently, an entanglement-breaking channel may be viewed as a quantum-to-classical channel with post-processing or a classical-to-quantum channel with pre-processing. By Corollary \ref{corr:pre-post-proc}, we know that communication value only decreases under pre- and post-processing. Thus $\text{cv}(\mathcal{N}) \leq \min \{\text{cv}(\mathcal{N}_{CQ},\text{cv}(\mathcal{N}_{QC}) \}$. Noting this argument is independent of running a channel $\mathcal{M}$ in parallel and applying Lemma \ref{lemma:non-mult-fully-quantum-effect} completes the proof. \end{proof} Physically, what the above proof says is that the communication value of an entanglement-breaking channel is limited by the worst component of its implementation (measurement or preparation) and that no ancillary channel can improve upon this fact. } \todo{***}
\ian{ Using the relationship between GME, maximal output purity, and cv, we can obtain the following alternative bounds for an entanglement-breaking channel composed with an arbitrary channel, which may be seen as further evidence that the cv is multiplicative when one channel is entanglement-breaking. \begin{corollary} Let $\mathcal{N}_{A\to B}$ be an entanglement-breaking channel and $\mathcal{M}_{A' \to B'}$ be an arbitrary channel. The following bounds holds: \begin{align}
\Lambda^{2}(J_{\mathcal{N}} \otimes J_{\mathcal{M}}) &= \Lambda^{2}(J_{\mathcal{N}}) \Lambda^{2}(J_{\mathcal{M}}) \label{eqn:gme-EB-mult} \\
\text{cv}(\mathcal{N}\otimes\mathcal{M}) &\leq d_{BB'}\Lambda^{2}(\mathcal{N})\Lambda^{2}(\mathcal{M}) \label{eqn:cv-GME-EB-mult} \end{align} \end{corollary} \begin{proof} \eqref{eqn:gme-EB-mult} follows from \eqref{eqn:GME-max-output-purity-equiv} and that the maximum output purity is multiplicative if one of the channels is entanglement-breaking \cite[Theorem 1]{King-2002}. \eqref{eqn:cv-GME-EB-mult} combines this point with \eqref{Eq:cv-GM}. \end{proof}
}
The question of cv multiplicativity is closely related to the question of additivity of the geometric measure of entanglement. The following proposition provides a sufficient condition under which multiplicativity of the latter implies multiplicativity of the former. \begin{proposition} \label{prop:multiplicative-sufficient} If $\text{cv}(\mathcal{M})=d\Lambda^2(J_\mathcal{M})$, $\text{cv}(\mathcal{N})=d\Lambda^2(J_\mathcal{N})$, and $\Lambda^2(J_{\mathcal{M}}\otimes J_{\mathcal{N}})=\Lambda^2(J_\mathcal{M})\Lambda^2(J_{\mathcal{N}})$ for any pair of $d$-dimensional channels $\mathcal{M}$ and $\mathcal{N}$, then $\text{cv}(\mathcal{M}\otimes\mathcal{N})=\text{cv}(\mathcal{M})\text{cv}(\mathcal{N})$. \end{proposition} \begin{proof} Under the assumptions of the proposition, we have \begin{align} \text{cv}(\mathcal{M})\text{cv}(\mathcal{N})\leq \text{cv}(\mathcal{M}\otimes\mathcal{N})\leq d^2\Lambda^2(J_{\mathcal{M}}\otimes J_{\mathcal{N}})=d^2\Lambda^2(J_\mathcal{M})\Lambda^2(J_{\mathcal{N}})=\text{cv}(\mathcal{M})\text{cv}(\mathcal{N}), \end{align} where the second inequality is the upper bound in Eq. \eqref{Eq:cv-GM}, which evidently must be tight. \end{proof}
\end{comment}
\begin{comment}
\subsection{Multiplicativity with the Identity}\label{subsec:multiplicativity-with-id}
One of the more natural situations to consider is when the given channel $\mathcal{N}$ acts on just one half of an entangled state. In this case, the overall channel on the joint input state is $\mathcal{N}\otimes \textrm{id}_d$, where $\textrm{id}_d$ is the identity on a $d$-dimensional auxiliary system. If the communication value were multiplicative for channels of this form, then it would enjoy a type of stability in the sense that it cannot be relatively increased by embedding the channel in a larger-dimensional space. While we are unable to prove multiplicativity in general, we can show that it holds for entanglement-breaking and covariant channels.
We first make a simple observation regarding the size of the second system. \begin{proposition} \label{prop:identity-dim} If $\text{cv}(\mathcal{N}\otimes\textrm{id}_d)>d\cdot\text{cv}(\mathcal{N})$, then $\text{cv}(\mathcal{N}\otimes\textrm{id}_{d'})>d'\cdot\text{cv}(\mathcal{N})$ for all $d'\geq d$. \end{proposition} \begin{proof} By direct-product encoding on a $d$-dimensional subspace of $\mathbb{C}^{d'}$ and its orthogonal complement, one always has \begin{equation}
\text{cv}(\mathcal{N}\otimes\textrm{id}_{d'})\geq \text{cv}(\mathcal{N}\otimes\textrm{id}_{d'-d})+\text{cv}(\mathcal{N}\otimes\textrm{id}_{d})\geq (d'-d)\text{cv}(\mathcal{N})+\text{cv}(\mathcal{N}\otimes\textrm{id}_d), \end{equation} from which the proposition follows. \end{proof}
\subsubsection{Entanglement-Breaking Channels}
Consider an arbitrary entanglement-breaking channel $\mathcal{N}\in\text{EB}(A\to B)$. For $A'\cong B'\cong\mathbb{C}^d$, let $\Omega^{AA':BB'}$ be a separable operator that is optimal for the cv in Proposition \ref{Prop:Proposition-cv}; i.e. $\textrm{Tr}_{AA'}(\Omega)=\mathbb{I}^{BB'}$ and $\text{cv}(\mathcal{N}\otimes\textrm{id})=\textrm{Tr}[\sigma(J_\mathcal{N}\otimes \phi_d^+)]$, where $\phi_d^+$ is the Choi matrix of $\textrm{id}_d$. Since $\phi_d^+$ enjoys $U\otimes U^*$ invariance under conjugation, we can apply the ``twirling map'' \begin{equation} \label{Eq:UU-twirling}
\mathcal{T}_{\mathcal{U}\mathcal{U}^*}(X)=\int_{\mathcal{U}} (U\otimes U^*) X(U\otimes U^*)^\dagger d U \end{equation} to the $A'B'$ systems of $\sigma$ while leaving the cv invariant: \begin{align}
\textrm{Tr}[\Omega(J_\mathcal{N}\otimes\phi^+)]=\textrm{Tr}[\textrm{id}^{AB}\otimes\mathcal{T}_{\mathcal{U} \mathcal{U}^*}^{A'B'}(\Omega)(J_\mathcal{N}\otimes\phi^+)]. \end{align} It is well-known \cite{Horodecki-1999a} that \begin{equation} \label{Eq:twirl-output}
\mathcal{T}_{\mathcal{U}\mathcal{U}^*}(X)=\frac{\textrm{Tr}[X\phi^+_d]}{d^2}\phi^+_d+\frac{\textrm{Tr}[X(d\mathbb{I}-\phi^+_d)]}{d^2(d^2-1)}(d\mathbb{I}-\phi^+_d). \end{equation} Hence without loss of generality, we can assume that $\Omega^{AA':BB'}$ has the form \begin{align} \Omega^{AA':BB'}=R^{AB}\otimes\phi_d^{+A'B'}+S^{AB}\otimes\frac{d\mathbb{I}^{A'B'}-\phi^{+A'B'}_d}{d^2-1}. \end{align} Here $R,S\geq 0$ and satisfy the condition \begin{align} \label{Eq:reduced-R+S}
\textrm{Tr}_A R+ \textrm{Tr}_A S=\mathbb{I}^B. \end{align} Furthermore, the separability of $\Omega^{AA':BB'}$ means that contracting the $AB$ systems by arbitrary product states $\ket{ab}$ should preserve separability of the $A'B'$ systems. That is, we have that \begin{align}
\bra{ab}R\ket{ab}\phi^+_d+\bra{ab}S\ket{ab}\frac{d\mathbb{I}-\phi^{+}_d}{d^2-1}\in \text{SEP}(A':B'). \end{align} By the separability conditions of isotropic states \cite{Horodecki-1999a}, this requires that \begin{align} \label{Eq:product-state-cond}
\bra{ab}R\ket{ab}\leq\frac{\bra{ab}S\ket{ab}}{d-1} \end{align} for arbitrary product states $\ket{ab}$.
So far we have not used the assumption that $\mathcal{N}$ is EB, and Eq. \eqref{Eq:product-state-cond} is a necessary condition for optimality when combining any channel with the identity. However, when restricting to EB channels multiplicativity of cv is an immediate consequence. \begin{lemma} If $\mathcal{N}$ is an entanglement-breaking channel, then \begin{equation}
\text{cv}(\mathcal{N}\otimes\textrm{id}_d)=d\cdot\text{cv}(\mathcal{N}). \end{equation} \end{lemma} \begin{proof} Since $J_{\mathcal{N}}$ is separable, Eq. \eqref{Eq:product-state-cond} implies that $\textrm{Tr}[(S-(d-1)R)J_{\mathcal{N}}]\geq 0$. Hence \begin{align}
\textrm{Tr}[\sigma(J_\mathcal{N}\otimes\phi^+)]&=d^2\textrm{Tr}[R J_{\mathcal{N}}]\notag\\
&\leq d^2\textrm{Tr}[RJ_{\mathcal{N}}]+\textrm{Tr}[(S-(d-1)R)J_{\mathcal{N}}]\notag\\
&=d\textrm{Tr}[(R+S)J_{\mathcal{N}}]\notag\\
&\leq d\cdot\text{cv}(\mathcal{N}), \end{align} where the last inequality follows from Eq. \eqref{Eq:reduced-R+S} and the fact that $R+S$ is separable. \end{proof}
\end{comment}
\subsection{Covariant Channels}
We next turn to channels that have a high degree of symmetry. To study the question of multiplicativity, it will be helpful to use the relationship between cv and GME. The following is a powerful result proven in Ref. \cite{Zhu-2011a} regarding multiplicativity of the GME. We say an operator $\rho^{AB}$ is component-wise non-negative if there exists an orthonormal product basis $\{\ket{i,j}\}_{i,j}$ such that $\bra{i,j}\rho\ket{i',j'}\geq 0$ for all $i,j,i',j'$. \begin{lemma}[\cite{Zhu-2011a}] \label{Lemma:non-negative-multiplicative} If $\rho^{AB}$ is component-wise non-negative and $\sigma^{A'B'}$ is any other density operator, then $\Lambda^2(\rho\otimes\sigma)=\Lambda^2(\rho)\Lambda^2(\sigma)$. \end{lemma}
\begin{comment}
\begin{proof} We reproduce the proof here for completeness. Let $\ket{\psi}^{AA'}$ and $\ket{\phi}^{BB'}$ be any two product states. Then we can write $\ket{\psi}=\sum_{i}a_i\ket{i}\ket{\alpha_i}$ where $a_i>0$ with $\sum_{i}a_i^2=1$, and likewise $\ket{\phi}=\sum_{j}b_j\ket{j}\ket{\beta_j}$ where $b_j>0$ with $\sum_{j}b_j^2=1$. Here, $\ket{\alpha_i}$ and $\ket{\beta_j}$ are normalized states that have absorbed any complex phase of $a_i$ and $b_j$, respectively. Then \begin{align}
|\bra{\psi,\phi}\rho\otimes\sigma\ket{\psi,\phi}|&=\left|\sum_{i,i',j,j'}a_ia_{i'}b_jb_{j'}\bra{i,j}\rho\ket{i',j'}\bra{\alpha_i,\beta_j}\sigma\ket{\alpha_{i'},\beta_{j'}}\right|\notag\\
&=\sum_{i,i',j,j'}a_ia_{i'}b_jb_{j'}\bra{i,j}\rho\ket{i',j'}\left|\bra{\alpha_i,\beta_j}\sigma\ket{\alpha_{i'},\beta_{j'}}\right|\notag\\
&\leq \sum_{i,i',j,j'}a_ia_{i'}b_jb_{j'}\bra{i,j}\rho\ket{i',j'}\sqrt{\bra{\alpha_i,\beta_j}\sigma\ket{\alpha_i,\beta_j}\bra{\alpha_{i'},\beta_{j'}}\sigma\ket{\alpha_{i'},\beta_{j'}}}\notag\\
&\leq \sum_{i,i',j,j'}a_ia_{i'}b_jb_{j'}\bra{i,j}\rho\ket{i',j'}\Lambda^2(\sigma)\notag\\
&\leq\Lambda^2(\rho)\Lambda^2(\sigma), \end{align} where the last inequality follows from the fact that $\sum_{i}a_i\ket{i}$ and $\sum_{j}b_j\ket{j}$ are normalized states. Since the converse inequality is trivial, the theorem is proven. \end{proof}
\end{comment} \noindent For example, the Choi matrix of the identity channel, $\phi^+_{d'}=J_{\textrm{id}_{d'}}$, is component-wise non-negative in the computational basis. Therefore, by the previous lemma, \[\Lambda^2(J_{\mathcal{N}}\otimes \phi^+_{d'})=\Lambda^2(J_{\mathcal{N}})\Lambda^2(\phi^+_{d'})=\Lambda^2(J_{\mathcal{N}})\] for any channel $\mathcal{N}$, since $\Lambda^2(\phi^+_{d'})=1$. As we will now show, this sort of multiplicativity can be readily extended to the communication value for channels with symmetry.
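As an independent numerical illustration (our own sketch, not part of the original analysis), $\Lambda^2$ of a bipartite operator can be estimated by an alternating ("seesaw") maximization over product states: fixing one factor, the optimal other factor is a top eigenvector. Applied to the unnormalized maximally entangled operator $\phi^+_d$, it recovers $\Lambda^2(\phi^+_d)=1$, matching the convention used in the text. The function names below are ours.

```python
import numpy as np

def gme_seesaw(rho, dA, dB, iters=100, seed=0):
    """Heuristically estimate Lambda^2(rho) = max_{|a>,|b>} <a,b|rho|a,b>
    by alternating top-eigenvector updates over the two tensor factors."""
    rng = np.random.default_rng(seed)
    b = rng.normal(size=dB) + 1j * rng.normal(size=dB)
    b /= np.linalg.norm(b)
    R = rho.reshape(dA, dB, dA, dB)
    a = None
    for _ in range(iters):
        Ma = np.einsum('ijkl,j,l->ik', R, b.conj(), b)  # <b|rho|b>, operator on A
        a = np.linalg.eigh(Ma)[1][:, -1]                # top eigenvector
        Mb = np.einsum('ijkl,i,k->jl', R, a.conj(), a)  # <a|rho|a>, operator on B
        b = np.linalg.eigh(Mb)[1][:, -1]
    return np.einsum('ijkl,i,j,k,l->', R, a.conj(), b.conj(), a, b).real

def phi_plus_choi(d):
    """Unnormalized maximally entangled operator phi+_d = sum_{ij} |ii><jj|."""
    v = np.eye(d).reshape(d * d)
    return np.outer(v, v)

# Lambda^2(phi+_d) = 1 for every d, matching Lambda^2(J_id) = 1 in the text.
lam2_d2 = gme_seesaw(phi_plus_choi(2), 2, 2)
lam2_d3 = gme_seesaw(phi_plus_choi(3), 3, 3)
print(lam2_d2, lam2_d3)
```

The seesaw is monotone but only guarantees a local optimum in general; for $\phi^+_d$ it converges to the global value $1$ within a couple of iterations.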
Let $\mathcal{G}$ be any group with an irreducible unitary representation on $\mathbb{C}^d$. Then, as we did in Eq. \eqref{Eq:UU-twirling}, let $\mathcal{T}_{UU}$ denote the bipartite group twirling map with respect to $\mathcal{G}$, \begin{align}
\mathcal{T}_{UU}(\rho^{AB})=\int_{\mathcal{G}}dU (U\otimes U)\rho(U\otimes U)^\dagger. \end{align} A channel $\mathcal{N}$ is called $\mathcal{G}$-covariant if $\mathcal{N}(U_g\rho U_g^\dagger)=U_g\mathcal{N}(\rho)U_g^\dagger$ for all $g\in\mathcal{G}$ and all $\rho$. On the level of Choi matrices, this is equivalent to $\mathcal{T}_{\overline{U}U}(J_\mathcal{N})=J_{\mathcal{N}}$, where $\overline{U}$ denotes complex conjugation. Note that $\Gamma_A\circ\mathcal{T}_{UU}\circ\Gamma_A$ is the CPTP twirling map $\mathcal{T}_{\overline{U}U}$, where $\Gamma_A$ is the partial transpose on system $A$. Likewise, the map $\Gamma\circ\mathcal{N}\circ\Gamma$ is $\mathcal{G}$-covariant if $\mathcal{T}_{UU}(J_\mathcal{N})=J_{\mathcal{N}}$, where $\Gamma$ denotes the transpose map.
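The channel-level covariance condition can be checked exactly on a concrete example. The sketch below (our own illustration) uses the qubit depolarizing channel, which is covariant with respect to every unitary, and verifies $\mathcal{N}(U\rho U^\dagger)=U\mathcal{N}(\rho)U^\dagger$ on random unitaries and states:

```python
import numpy as np

lam, d = 0.4, 2

def depol(rho):
    """Qubit depolarizing channel D_{2,lam}."""
    return lam * rho + (1 - lam) * np.trace(rho) * np.eye(d) / d

rng = np.random.default_rng(0)
ok = True
for _ in range(100):
    # Random unitary via QR of a complex Gaussian matrix
    Q = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))[0]
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    rho = np.outer(v, v.conj()) / np.linalg.norm(v) ** 2
    lhs = depol(Q @ rho @ Q.conj().T)   # N(U rho U^dagger)
    rhs = Q @ depol(rho) @ Q.conj().T   # U N(rho) U^dagger
    ok = ok and np.allclose(lhs, rhs)
print(ok)
```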
\begin{comment}
\begin{theorem}\label{thm:covariance-identity-multiplicativity} If $\mathcal{G}$ has an irreducible unitary representation on $\mathbb{C}^d$ and either $\mathcal{N}$ or $\Gamma\circ\mathcal{N}\circ\Gamma$ is $\mathcal{G}$-covariant, then \begin{align}
\text{cv}(\mathcal{N}\otimes\textrm{id}_{d'})=d' \cdot\text{cv}(\mathcal{N}). \end{align} \end{theorem} \begin{proof} We apply Lemma \ref{Lemma:non-negative-multiplicative} on the joint Choi matrix $J_{\mathcal{N}}\otimes \phi^+_{d'}$. Let $\op{\alpha}{\alpha}^A\otimes\op{\beta}{\beta}^B$ be a product operator with trace equaling $d$ and satisfying $d\Lambda^2(J_{\mathcal{N}})=\bra{\alpha,\beta}J_{\mathcal{N}}\ket{\alpha,\beta}$. Suppose now that either $\mathcal{N}$ or $\Gamma\circ\mathcal{N}\circ\Gamma$ is $\mathcal{G}$-covariant. In either case we have \begin{align}
\bra{\alpha,\beta}J_{\mathcal{N}}\ket{\alpha,\beta}&=\bra{\alpha,\beta}\mathcal{T}(J_{\mathcal{N}})\ket{\alpha,\beta}\notag\\
&=\textrm{Tr}\left[J_{\mathcal{N}}\mathcal{T}^\dagger\left(\op{\alpha,\beta}{\alpha,\beta}\right)\right]\notag\\
&=\textrm{Tr}[J_\mathcal{N}\Omega^{AB}], \end{align} where $\Omega^{AB}=\mathcal{T}^\dagger(\op{\alpha,\beta}{\alpha,\beta})$. Note that $\Omega^{AB}$ has trace equaling $d$, and since $U_g$ is an irrep, we have $\textrm{Tr}_{A}\Omega^{AB}=\mathbb{I}$. Hence $\text{cv}(\mathcal{N})\geq d\Lambda^2(J_{\mathcal{N}})$, and by Eq. \eqref{Eq:cv-GM}, this inequality must be tight. Therefore, \begin{align}
\text{cv}(\mathcal{N}\otimes\textrm{id}_{d'})&\geq d'\text{cv}(\mathcal{N})\notag\\
&=dd'\Lambda^2(J_{\mathcal{N}})\notag\\
&=dd'\Lambda^2(J_\mathcal{N})\Lambda^2(\phi^+_{d'})\notag\\
&=dd'\Lambda^2(J_{\mathcal{N}}\otimes\textrm{id}_{d'}), \end{align} since $\Lambda^2(\phi^+_{d'})=1$. Again using Eq. \eqref{Eq:cv-GM} this inequality must be tight, and so $\text{cv}(\mathcal{N}\otimes\textrm{id}_{d'})= d'\text{cv}(\mathcal{N})$. \end{proof}
\end{comment}
\begin{theorem}\label{thm:covariance-multiplicativity} Let $\mathcal{G}$ and $\mathcal{G}'$ have irreducible unitary representations on $\mathbb{C}^d$ and $\mathbb{C}^{d'}$ respectively. Suppose either $\mathcal{N}$ or $\Gamma\circ\mathcal{N}\circ\Gamma$ is $\mathcal{G}$-covariant and likewise either $\mathcal{N}'$ or $\Gamma\circ\mathcal{N}'\circ\Gamma$ is $\mathcal{G}'$-covariant. Further suppose that $J_{\mathcal{N}}$ is component-wise non-negative. Then \begin{align}
\text{cv}(\mathcal{N}\otimes\mathcal{N}')=\text{cv}(\mathcal{N})\text{cv}(\mathcal{N}'). \end{align} \end{theorem} \begin{proof} Let $\op{\alpha}{\alpha}^A\otimes\op{\beta}{\beta}^B$ be a product operator with trace equaling $d$ and satisfying $d\Lambda^2(J_{\mathcal{N}})=\bra{\alpha,\beta}J_{\mathcal{N}}\ket{\alpha,\beta}$. Suppose now that either $\mathcal{N}$ or $\Gamma\circ\mathcal{N}\circ\Gamma$ is $\mathcal{G}$-covariant. In either case we have \begin{align}
\bra{\alpha,\beta}J_{\mathcal{N}}\ket{\alpha,\beta}&=\bra{\alpha,\beta}\mathcal{T}(J_{\mathcal{N}})\ket{\alpha,\beta}\notag\\
&=\textrm{Tr}\left[J_{\mathcal{N}}\mathcal{T}^\dagger\left(\op{\alpha,\beta}{\alpha,\beta}\right)\right]\notag\\
&=\textrm{Tr}[J_\mathcal{N}\Omega^{AB}], \end{align} where $\Omega^{AB}=\mathcal{T}^\dagger(\op{\alpha,\beta}{\alpha,\beta})$. Note that $\Omega^{AB}$ has trace equaling $d$, and since $U_g$ is an irrep, we have $\textrm{Tr}_{A}\Omega^{AB}=\mathbb{I}$. Hence $\text{cv}(\mathcal{N})\geq d\Lambda^2(J_{\mathcal{N}})$, and an analogous argument for $\mathcal{N}'$ establishes that $\text{cv}(\mathcal{N}')\geq d'\Lambda^2(J_{\mathcal{N}'})$. Therefore, \begin{align}
\text{cv}(\mathcal{N}\otimes\mathcal{N}')&\geq \text{cv}(\mathcal{N})\text{cv}(\mathcal{N}')\notag\\
&\geq dd'\Lambda^2(J_{\mathcal{N}})\Lambda^2(J_{\mathcal{N}'})\notag\\
&= dd'\Lambda^2(J_{\mathcal{N}}\otimes J_{\mathcal{N}'}), \end{align} where the last equality follows from Lemma \ref{Lemma:non-negative-multiplicative}. However, by the upper bound in Eq. \eqref{Eq:cv-GM} this inequality must be tight, which implies the desired multiplicativity.
\end{proof}
Using this theorem, we can compute the cv capacity for certain channels. \begin{corollary} \label{Cor:CV-capacity-symmetry} Let $\mathcal{G}$ have irreducible unitary representation on $\mathbb{C}^d$. Suppose that either $\mathcal{N}$ or $\Gamma\circ\mathcal{N}\circ\Gamma$ is $\mathcal{G}$-covariant and $J_{\mathcal{N}}$ is component-wise non-negative. Then \begin{align}
\mathcal{CV}(\mathcal{N})=\text{cv}(\mathcal{N}). \end{align} \end{corollary} \begin{proof} It suffices to prove that $\text{cv}(\mathcal{N}^{\otimes n})=[\text{cv}(\mathcal{N})]^n$. This follows from Theorem \ref{thm:covariance-multiplicativity} by induction, letting $\mathcal{N}'=\mathcal{N}^{\otimes (n-1)}$ and $\mathcal{G}'=\mathcal{G}^{\times (n-1)}$. \end{proof}
For example, in a qubit system all Pauli channels satisfy the conditions of Corollary \ref{Cor:CV-capacity-symmetry}. These are channels of the form \begin{align}
\mathcal{N}_{\text{Pauli}}(\rho)=p_0\rho+p_1\sigma_x\rho\sigma_x+p_2\sigma_z\rho\sigma_z+p_3\sigma_y\rho\sigma_y, \end{align} and they are covariant with respect to the Pauli group. Moreover, $J_{\mathcal{N}_{\text{Pauli}}}$ can always be converted into a matrix with non-negative entries by local unitaries. Hence using Eq. \eqref{Eq:cv-Pauli} we have \begin{equation}
\mathcal{CV}(\mathcal{N}_{\text{Pauli}})=2(p_3^{\downarrow}+p_2^{\downarrow}). \end{equation}
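This value can be cross-checked against the qubit formula $\text{cv}(\mathcal{N})=1+\sigma_{\max}(N)$ derived in Section \ref{Sect:qubits}: for a Pauli channel the correlation matrix is diagonal with entries that are signed sums of the probabilities, and $1+\sigma_{\max}(N)$ equals twice the sum of the two largest $p_i$. A quick numerical check (our own sketch, using the probability ordering $(p_0,p_1,p_2,p_3)$ for $(\mathbb{I},\sigma_x,\sigma_z,\sigma_y)$ as in the text):

```python
import numpy as np

rng = np.random.default_rng(1)
max_gap = 0.0
for _ in range(1000):
    p = rng.dirichlet(np.ones(4))  # weights (p0, p1, p2, p3) for (I, X, Z, Y)
    # Diagonal entries of the correlation matrix of the Pauli channel
    m = np.array([p[0] + p[1] - p[2] - p[3],
                  p[0] - p[1] - p[2] + p[3],
                  p[0] - p[1] + p[2] - p[3]])
    cv_sigma = 1 + np.abs(m).max()          # cv = 1 + sigma_max(N)
    cv_pairs = 2 * np.sort(p)[-2:].sum()    # twice the two largest weights
    max_gap = max(max_gap, abs(cv_sigma - cv_pairs))
print(max_gap)
```

The agreement is exact because $1+|p_i+p_j-p_k-p_l|=\max\{2(p_i+p_j),\,2(p_k+p_l)\}$ and the three diagonal entries range over all pairings of the four probabilities.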
As another example, consider the $d$-dimensional partially depolarizing channel $\mathcal{D}_{d,\lambda}$ given by \begin{equation}
\mathcal{D}_{d,\lambda}(\rho):=\lambda\rho+(1-\lambda)\frac{\mathbb{I}}{d}, \qquad 0\leq\lambda\leq 1. \end{equation} The channel $\Gamma\circ\mathcal{D}_{d,\lambda}\circ\Gamma$ is $\mathcal{G}$-covariant with respect to the full unitary group on $\mathbb{C}^d$ \cite{Horodecki-1999a}. The Choi matrix is given by $J_{\mathcal{D}_{d,\lambda}}=\lambda\phi^+_d+(1-\lambda)\mathbb{I}\otimes\mathbb{I}/d$, which is clearly component-wise non-negative. Thus by Corollary \ref{Cor:CV-capacity-symmetry} we have \begin{align}
\mathcal{CV}(\mathcal{D}_{d,\lambda})=\text{cv}(\mathcal{D}_{d,\lambda})=\lambda d+(1-\lambda). \end{align}
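This closed form can be verified directly, since $\langle a,b|J_{\mathcal{D}_{d,\lambda}}|a,b\rangle=\lambda|\sum_i a_ib_i|^2+(1-\lambda)/d$ is maximized by conjugate pairs $b=\bar{a}$, giving $\Lambda^2=\lambda+(1-\lambda)/d$ and $\text{cv}=d\Lambda^2$. A numerical sketch (ours, with the unnormalized $\phi^+_d$ convention):

```python
import numpy as np

d, lam = 3, 0.7
v = np.eye(d).reshape(d * d)                               # sum_i |ii>
J = lam * np.outer(v, v) + (1 - lam) * np.eye(d * d) / d   # Choi of D_{d,lam}

def val(a, b):
    ab = np.kron(a, b)
    return (ab.conj() @ J @ ab).real

rng = np.random.default_rng(2)
best = 0.0
for _ in range(2000):
    a = rng.normal(size=d) + 1j * rng.normal(size=d); a /= np.linalg.norm(a)
    b = rng.normal(size=d) + 1j * rng.normal(size=d); b /= np.linalg.norm(b)
    best = max(best, val(a, b), val(a, a.conj()))  # conjugate pairs attain the max
cv_est = d * best
print(cv_est, lam * d + (1 - lam))
```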
We remark that the Werner-Holevo family of channels introduced in Section \ref{Sect:Werner-Holevo} fails to satisfy Corollary \ref{Cor:CV-capacity-symmetry} since their Choi matrices are not component-wise non-negative. In fact, their cv is non-multiplicative, as we will see below. Nevertheless, Theorem \ref{thm:covariance-multiplicativity} can be applied to a Werner-Holevo channel by using it in parallel with another channel whose Choi matrix is component-wise non-negative. For example, when trivially embedding $\mathcal{W}_{d,\lambda}$ into a larger system we have multiplicativity: \begin{equation}
\text{cv}(\mathcal{W}_{d,\lambda}\otimes\textrm{id}_{d'})=d'\text{cv}(\mathcal{W}_{d,\lambda}). \end{equation} This result is perhaps surprising since multiplicativity appears not to hold when we relax the cv optimization to the cone of PPT operators, as shown in Section \ref{subsec:WHID} (see Fig. \ref{fig:cv-ppt-versus-cv-wh-with-id}).
\subsection{Qubit Channels}
\label{Sect:qubits-multiplicative}
In Section \ref{Sect:qubits} we derived an explicit formula for the communication value of qubit channels. Namely, $\text{cv}(\mathcal{N})=1+\sigma_{\max}(N)$, where $N$ is the correlation matrix of $J_{\mathcal{N}}$. Here we show that the cv is multiplicative when using two qubit channels in parallel. \begin{theorem} \label{Thm:qubit-multiplicative} Suppose $\mathcal{M},\mathcal{N}\in\text{CPTP}(A\to B)$ with $d_A=d_B=2$. Then \begin{equation} \text{cv}(\mathcal{M}\otimes\mathcal{N})=\text{cv}(\mathcal{M})\text{cv}(\mathcal{N}). \end{equation} \end{theorem}
\begin{proof} For $\mathcal{M}$ and $\mathcal{N}$ consider their Choi matrices \begin{align} J_{\mathcal{M}}&=\frac{1}{2}\bigg(\mathbb{I}\otimes(\mathbb{I}+\mathbf{c}\cdot\vec{\sigma})+\sum_{i}m_i\sigma_i\otimes\sigma_i\bigg),\notag\\ J_{\mathcal{N}}&=\frac{1}{2}\bigg(\mathbb{I}\otimes(\mathbb{I}+\mathbf{d}\cdot\vec{\sigma})+\sum_in_i\sigma_i\otimes\sigma_i\bigg). \end{align} Notice that their correlation matrices $M=\text{diag}[m_1,m_2,m_3]$ and $N=\text{diag}[n_1,n_2,n_3]$ are diagonal. An arbitrary channel can always be converted into this form by performing appropriate pre- and post $SU(2)$ rotations on the channel, which do not change the communication value. Define the operator \begin{align} \label{Eq:qubit-multiplicative-dual-1} Z^{BB'}=\frac{1}{4}\bigg(&(\mathbb{I}+\mathbf{c}\cdot\vec{\sigma})\otimes(\mathbb{I}+\mathbf{d}\cdot\vec{\sigma})+(\sigma_{\max}(M)\notag\\ &+\sigma_{\max}(N)+\sigma_{\max}(M)\sigma_{\max}(N))\mathbb{I}\otimes\mathbb{I}\bigg). \end{align} Since $\textrm{Tr}[Z^{BB'}]=\text{cv}(\mathcal{M})\text{cv}(\mathcal{N})$, by the dual characterization of cv given in Eq. \eqref{Eq:cv-dual}, we will prove that $\text{cv}(\mathcal{M}\otimes\mathcal{N})\leq\text{cv}(\mathcal{M})\text{cv}(\mathcal{N})$ if we can show that \begin{equation} \label{Eq:qubit-multiplicative-dual-2} Z^{BB'}\geq \mathcal{M}\otimes\mathcal{N}(\rho^T). \end{equation} for an arbitrary two-qubit state \begin{align} \rho^{AA'}=\frac{1}{4}\bigg(\mathbb{I}\otimes\mathbb{I}+\mathbf{r}\cdot\vec{\sigma}\otimes\mathbb{I} +\mathbb{I}\otimes\mathbf{s}\cdot\vec{\sigma}+\sum_{i,j}c_{ij}\sigma_i\otimes\sigma_j\bigg).\notag \end{align} Note that we have the action $\mathcal{M}(\mathbb{I})=\mathbb{I}+\mathbf{c}\cdot\vec{\sigma}$, $\mathcal{M}(\sigma_x)=m_x\sigma_x$, $\mathcal{M}(\sigma_y)=-m_y\sigma_y$, $\mathcal{M}(\sigma_z)=m_z\sigma_z$, and likewise for the action of $\mathcal{N}$. 
Hence \begin{align} \mathcal{M}\otimes\mathcal{N}(\rho^T)=\frac{1}{4}&\bigg((\mathbb{I}+\mathbf{c}\cdot\vec{\sigma})\otimes(\mathbb{I}+\mathbf{d}\cdot\vec{\sigma})+M\mathbf{r}\cdot\vec{\sigma}\otimes\mathbb{I}\notag\\ &+\mathbb{I}\otimes N\mathbf{s}\cdot\vec{\sigma}+\sum_{i,j}m_in_jc_{ij}\sigma_i\otimes\sigma_j\bigg). \end{align} When comparing with Eq. \eqref{Eq:qubit-multiplicative-dual-1}, we see that Eq. \eqref{Eq:qubit-multiplicative-dual-2} reduces to \begin{align} \label{Eq:qubit-multiplicative-dual-3} \Vert &M\mathbf{r}\cdot\vec{\sigma}\otimes\mathbb{I}+\mathbb{I}\otimes N\mathbf{s}\cdot\vec{\sigma}+\sum_{i,j}m_in_jc_{ij}\sigma_i\otimes\sigma_j\Vert_\infty\notag\\ &\leq \sigma_{\max}(M)+\sigma_{\max}(N)+\sigma_{\max}(M)\sigma_{\max}(N). \end{align} To prove this inequality, we note that effectively the non-unital components of $\mathcal{M}$ and $\mathcal{N}$ do not appear here. That is, let $\widetilde{\mathcal{M}}$ and $\widetilde{\mathcal{N}}$ be the unital CPTP maps defined by the Choi matrices \begin{align} J_{\widetilde{\mathcal{M}}}&=\frac{1}{2}\bigg(\mathbb{I}\otimes\mathbb{I}+\sum_{i}m_i\sigma_i\otimes\sigma_i\bigg),\notag\\ J_{\widetilde{\mathcal{N}}}&=\frac{1}{2}\bigg(\mathbb{I}\otimes\mathbb{I}+\sum_in_i\sigma_i\otimes\sigma_i\bigg). \end{align} Letting $\ket{\varphi}$ denote an eigenvector of largest eigenvalue for the operator on the LHS of Eq. 
\eqref{Eq:qubit-multiplicative-dual-3}, we have \begin{align} \Vert \mathbb{I}\otimes\mathbb{I}+M\mathbf{r}\cdot\vec{\sigma}\otimes&\mathbb{I}+\mathbb{I}\otimes N\mathbf{s}\cdot\vec{\sigma}+\sum_{i,j}m_in_jc_{ij}\sigma_i\otimes\sigma_j\Vert_\infty\notag\\ &=4\bra{\varphi}\widetilde{\mathcal{M}}\otimes\widetilde{\mathcal{N}}(\rho^T)\ket{\varphi}\notag\\ &=4\textrm{Tr}\left[\rho^{AA'}\otimes\op{\varphi}{\varphi}^{BB'}J_{\widetilde{\mathcal{M}}}\otimes J_{\widetilde{\mathcal{N}}}\right]\notag\\ &\leq 4\Lambda^2(J_{\widetilde{\mathcal{M}}}\otimes J_{\widetilde{\mathcal{N}}})\notag\\ &=2\Lambda^2(J_{\widetilde{\mathcal{M}}})\cdot 2\Lambda^2(J_{\widetilde{\mathcal{N}}})\notag\\ &=(1+\sigma_{\max}(M))(1+\sigma_{\max}(N)), \end{align} where we have used the fact that the GME is multiplicative for unital qubit channels (Lemma \ref{Lemma:non-negative-multiplicative}), along with Eq. \eqref{Eq:unital-GM-tight}. This proves Eq. \eqref{Eq:qubit-multiplicative-dual-3}.
\end{proof}
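The operator inequality \eqref{Eq:qubit-multiplicative-dual-2} at the heart of this proof can be spot-checked numerically. The sketch below (our own, not from the text) restricts to the unital case $\mathbf{c}=\mathbf{d}=\mathbf{0}$ with Pauli channels, where $Z^{BB'}$ reduces to $\frac{1}{4}(1+\sigma_{\max}(M))(1+\sigma_{\max}(N))\,\mathbb{I}\otimes\mathbb{I}$, and verifies $Z^{BB'}\geq\mathcal{M}\otimes\mathcal{N}(\rho^T)$ on random channels and random pure states:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Zp = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I2, X, Zp, Y]  # ordering (I, X, Z, Y) as in the text

def sigma_max(p):
    """Largest singular value of the diagonal correlation matrix of a Pauli channel."""
    return max(abs(p[0] + p[1] - p[2] - p[3]),
               abs(p[0] - p[1] - p[2] + p[3]),
               abs(p[0] - p[1] + p[2] - p[3]))

def joint(p, q, rho):
    """Apply the Pauli channels with weights p and q to a two-qubit operator."""
    out = np.zeros_like(rho)
    for pi, P in zip(p, paulis):
        for qj, Q in zip(q, paulis):
            K = np.kron(P, Q)
            out = out + pi * qj * K @ rho @ K.conj().T
    return out

rng = np.random.default_rng(3)
min_gap = np.inf
for _ in range(300):
    p, q = rng.dirichlet(np.ones(4)), rng.dirichlet(np.ones(4))
    sM, sN = sigma_max(p), sigma_max(q)
    Zop = 0.25 * (1 + sM) * (1 + sN) * np.eye(4)  # dual operator, unital case
    v = rng.normal(size=4) + 1j * rng.normal(size=4)
    rho = np.outer(v, v.conj()) / np.linalg.norm(v) ** 2
    min_gap = min(min_gap, np.linalg.eigvalsh(Zop - joint(p, q, rho.T)).min())
print(min_gap)
```

A non-negative smallest eigenvalue of $Z - \mathcal{M}\otimes\mathcal{N}(\rho^T)$ (up to machine precision) is exactly what the dual feasibility argument requires.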
\begin{comment}
We next show that unital qubit channels demonstrate strong multiplicativity. In particular, it implies that $\text{cv}(\mathcal{N}^{\otimes n})=[\text{cv}(\mathcal{N})]^n$ for such channels. \begin{theorem} If $\mathcal{N}$ is a unital qubit channel and $\mathcal{M}$ is an arbitrary channel then \begin{equation} \text{cv}(\mathcal{N}\otimes\mathcal{M})=\text{cv}(\mathcal{N})\text{cv}(\mathcal{M}). \end{equation} \end{theorem} \begin{proof} Since $\mathcal{N}$ is unital, we have that $J_\mathcal{N}$ is Bell-diagonal up to local unitaries (which do not affect the cv of the channel). Let $J_{\mathcal{N}}=2\sum_{i=0}^3 p_i\op{\Phi^+_i}{\Phi^+_i}$ with a labeling chosen such that $p_3\geq p_2\geq p_1\geq p_0$. For any separable $\sigma^{AA':BB'}$ we have \begin{align} \textrm{Tr}[\sigma(J_\mathcal{N}\otimes J_{\mathcal{M}})]&=2\sum_{i=0}^3 p_i\textrm{Tr}[\sigma(\op{\Phi^+_i}{\Phi^+_i}\otimes J_{\mathcal{M}})]\notag\\ &\leq 2p_3\textrm{Tr}[\sigma((\op{\Phi_3^+}{\Phi_3^+}+\op{\Phi_1^+}{\Phi_1^+})\otimes J_\mathcal{M})]\notag\\ &\quad + 2p_2\textrm{Tr}[\sigma((\op{\Phi_2^+}{\Phi_2^+}+\op{\Phi_0^+}{\Phi_0^+})\otimes J_\mathcal{M})]. \end{align} A key property is that a uniform mixture of any two Bell diagonal states is equivalent to the fully classical state $\frac{1}{2}(\op{00}{00}+\op{11}{11})$, up to local unitaries. Hence $\op{\Phi_3^+}{\Phi_3^+}+\op{\Phi_1^+}{\Phi_1^+}$ and $\op{\Phi_2^+}{\Phi_2^+}+\op{\Phi_0^+}{\Phi_0^+}$ are both Choi matrices for classical channels, and we can use Lemma \ref{Lem:Classical-multiplicativity} \end{proof}
\end{comment}
A natural question is whether Theorem \ref{Thm:qubit-multiplicative} can be generalized to the case in which only one of the channels is a qubit channel. Unfortunately, the proof of Theorem \ref{Thm:qubit-multiplicative} relies heavily on the Pauli representation of qubit channels, and we therefore only conjecture that qubit channels possess an even stronger form of multiplicativity.
\noindent\textit{Conjecture.} If $\mathcal{M}\in\text{CPTP}(A\to B)$ with $d_A=d_B=2$, then \[\text{cv}(\mathcal{M}\otimes\mathcal{N})=\text{cv}(\mathcal{M})\text{cv}(\mathcal{N})\] for any other channel $\mathcal{N}$.
\subsection{Non-Multiplicativity in Qutrits}
\label{Sect:non-multiplicativity}
In the previous sections we identified examples of channels for which the communication value is multiplicative. We now provide an example of channels that demonstrate non-multiplicativity. Our construction uses the Werner-Holevo channels, which are known to exemplify non-additivity of a channel's maximal output purity \cite{Werner-2002a,Zhu-2011a}. Specifically, the channel $\mathcal{W}_{d,0}$ has a Choi matrix proportional to the anti-symmetric subspace projector, \begin{equation}\label{eqn:anti-sym-choi} J_{\mathcal{W}_{d,0}}=\frac{1}{d-1}(\mathbb{I}\otimes\mathbb{I}-\mathbb{F}). \end{equation} The entanglement properties of this operator have been well-studied \cite{Christandl-2012a, Hubener-2009a, Zhu-2011a}. In particular, Zhu \textit{et al.} have computed its one- and two-copy geometric measures of entanglement to be \begin{align} \max_{\ket{\alpha}^{A}\ket{\beta}^B}\bra{\alpha,\beta}J_{\mathcal{W}_{d,0}}\ket{\alpha,\beta}&=\frac{1}{d-1} \label{Eq:asym-projGM}\\ \max_{\ket{\alpha}^{AA'}\ket{\beta}^{BB'}}\bra{\alpha,\beta}J_{\mathcal{W}_{d,0}}\otimes J_{\mathcal{W}_{d,0}}\ket{\alpha,\beta}&=\frac{2}{d(d-1)}.\label{Eq:asym-projGM2} \end{align} Equation \eqref{Eq:asym-projGM2} is strictly larger than the square of Eq. \eqref{Eq:asym-projGM} whenever $d\geq 3$. Furthermore, the maximization in Eq. \eqref{Eq:asym-projGM2} is attained whenever $\ket{\alpha}^{AA'}$ and $\ket{\beta}^{BB'}$ are maximally entangled states. Thus, we consider the separable operator \begin{equation} \sigma^{AA':BB'}=\sum_{k=1}^{d^2}\op{\varphi_k^+}{\varphi_k^+}^{AA'}\otimes\op{\varphi_k^+}{\varphi_k^+}^{BB'}, \end{equation} where $\{\ket{\varphi_k^+}\}_{k=1}^{d^2}$ is an orthonormal basis consisting of maximally entangled states for $\mathbb{C}^d\otimes\mathbb{C}^d$. This satisfies the conditions of Proposition \ref{Prop:Proposition-cv}. 
Hence, we conclude \begin{align} \text{cv}(\mathcal{W}_{d,0})&=\frac{d}{d-1} \label{eqn:anti-sym-cv} \\ \text{cv}(\mathcal{W}_{d,0}\otimes\mathcal{W}_{d,0})&=\frac{2d}{d-1}, \end{align} which yields $\text{cv}(\mathcal{W}_{d,0})^2<\text{cv}(\mathcal{W}_{d,0}\otimes\mathcal{W}_{d,0})$ when $d\geq 3$. Most notably, for $d=3$ we have $\text{cv}(\mathcal{W}_{d,0}\otimes\mathcal{W}_{d,0})=3$ while $\text{cv}(\mathcal{W}_{d,0})^2=2.25$.
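These values can be reproduced numerically. The sketch below (our own) builds $J_{\mathcal{W}_{3,0}}$, evaluates the product state $\ket{0,1}$, which attains the single-copy maximum $1/(d-1)$, and sums the contributions of a generalized Bell basis to the separable operator $\sigma$, recovering $\text{cv}(\mathcal{W}_{3,0}\otimes\mathcal{W}_{3,0})=3>2.25=\text{cv}(\mathcal{W}_{3,0})^2$:

```python
import numpy as np

d = 3
F = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        F[i * d + j, j * d + i] = 1
J = (np.eye(d * d) - F) / (d - 1)          # Choi matrix of W_{d,0}

# Single copy: |0,1> attains <0,1|J|0,1> = 1/(d-1), so cv = d/(d-1)
e = np.eye(d)
ab = np.kron(e[0], e[1])
cv_single = d * (ab @ J @ ab)

# Orthonormal maximally entangled basis |phi_{m,n}> = (U_{m,n} x I)|Phi+>
phi_plus = np.eye(d).reshape(d * d) / np.sqrt(d)
bigJ = np.kron(J, J)
total = 0.0
for m in range(d):
    for n in range(d):
        U = np.zeros((d, d), dtype=complex)
        for k in range(d):
            U[(k + n) % d, k] = np.exp(2j * np.pi * m * k / d)
        phi = np.kron(U, np.eye(d)) @ phi_plus   # used on AA' and on BB'
        vec = np.kron(phi, phi)                  # system order (A, A', B, B')
        # Permute to (A, B, A', B') to match J^{AB} (x) J^{A'B'}
        vec = vec.reshape(d, d, d, d).transpose(0, 2, 1, 3).reshape(d ** 4)
        total += (vec.conj() @ bigJ @ vec).real
print(cv_single, total)
```

Each Bell-basis term contributes $2/(d(d-1))=1/3$, and the $d^2=9$ terms sum to $2d/(d-1)=3$.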
\begin{remark}
In Section \ref{sec:relaxations_of_cv}, we use the fact that the spaces of PPT and SEP $UUVV$-invariant operators coincide \cite{Vollbrecht-2001a} to numerically determine the range of $\lambda$ for which $\text{cv}(\mathcal{W}_{d,\lambda} \otimes \mathcal{W}_{d,\lambda})$ is non-multiplicative. \end{remark}
\section{Entanglement-Assisted CV} \label{sec:entanglement_assisted_cv}
\begin{figure}
\caption{Entanglement-assisted communication value scenario.}
\label{fig:ea_cv}
\end{figure}
We next generalize the communication scenario and allow the sender and receiver to share entanglement. Remarkably, this added resource simplifies the problem immensely. In what follows, we will allow Alice and Bob to share an entangled state $\varphi^{A'B'}$ that can be used to increase the channel cv. The most general entanglement-assisted protocol is as follows (see Fig. \ref{fig:ea_cv}). For input $x\in[n]$, Alice performs a CPTP map $\mathcal{E}_x\in\text{CPTP}(A'\to A)$ on her half of the entangled state $\varphi^{A'B'}$. System $A$ is then fed into the channel, and Bob finally performs a POVM $\{\Pi_y^{BB'}\}_{y\in[n']}$ on systems $BB'$. The induced channel has transition probabilities given by \begin{equation} \label{Eq:Ea-cv}
P(y|x)=\textrm{Tr}\left[\Pi_y^{BB'}\left(\mathcal{N}^{A\to B}\circ\mathcal{E}_x^{A'\to A}\otimes\textrm{id}^{B'}[\varphi^{A'B'}]\right)\right]. \end{equation} Note that this scenario corresponds to the one used in superdense coding \cite{Bennett-1992a}. The entanglement-assisted channel cv can now be defined. \begin{definition} The entanglement-assisted communication value (ea cv) of a quantum channel $\mathcal{N}\in\text{CPTP}(A\to B)$, denoted $\text{cv}^{*}(\mathcal{N})$, is \begin{align}
\sup_{\varphi^{A'B'}}\max\{\text{cv}(\mathbf{P})\;|\;\text{$P(y|x)$ given by Eq. \eqref{Eq:Ea-cv}}\} \, , \end{align} where the supremum is taken over all entangled states $\varphi^{A'B'}$ (and all dimensions $A'B'$), while the maximization considers all $n,n'\in\mathbb{N}$ along with arbitrary states $\{\rho_x\}_{x\in[n]}$ and POVMs $\{\Pi_y\}_{y\in[n']}$. \end{definition} \begin{theorem} \label{Thm:ea-cv} For an arbitrary channel $\mathcal{N}\in\text{CPTP}(A\to B)$, \begin{align} \text{cv}^*(\mathcal{N})=\max &\;\;\textrm{Tr}[\sigma^{AB}J_\mathcal{N}]\notag\\ &\;\;\textrm{Tr}_A[\sigma^{AB}]=\mathbb{I}^B\notag\\ &\;\;\sigma^{AB}\in\text{Pos}(A:B), \label{Eq:theorem-ea-cv} \end{align} where $\text{Pos}(A:B)$ denotes the positive cone on $AB$. Moreover, $\text{cv}^*(\mathcal{N})$ is attained using a $(d_A)$-dimensional maximally entangled state. \end{theorem} \noindent In other words, the restriction of $\sigma^{AB}$ to the separable cone (\textit{cf} Eq. \eqref{Eq:Proposition-cv}) is removed when considering the entanglement-assisted problem. \begin{proof}
It is clear that we need only consider pure states in the supremum since $\text{cv}(\mathbf{P})$ is convex-linear w.r.t. $\varphi^{A'B'}$. Let $\ket{\varphi}^{A'B'}$ be arbitrary. We first show that without loss of generality we can take $\ket{\varphi}$ to be maximally entangled. Recall Nielsen's Theorem \cite{Nielsen-1999a} (see also \cite{Lo-2001a}), which ensures the existence of an LOCC transformation $\ket{\Phi^+_{d_{A'}}}^{A'\tilde{A'}}\to\ket{\varphi}^{A'B'}$, where $\ket{\Phi^+_{d_{A'}}}^{A'\tilde{A'}}$ is a maximally entangled state on $A'\tilde{A'}$, with $\tilde{A'}\cong A'$. Explicitly, there exists a measurement on Bob's side with Kraus operators $\{M_k\}_k$ and correcting unitaries $\{U_k\}_k$ on Alice's side such that $U^{A'}_k\otimes M^{\tilde{A'}\to B'}_k\ket{\Phi^+_{d_{A'}}}=\sqrt{p(k)}\ket{\varphi_{k}}$. Using that the cv is achieved by minimal-error discrimination, i.e. $\text{cv}^*(\mathbf{P}) = \sum_{x} P(x|x)$, we have \begin{align} \label{Eq:Ea-cv-1} & \text{cv}^*(\mathbf{P})\notag\\ =&\sum_{x,k}\textrm{Tr}\left[\Pi_x^{BB'}\left(\mathcal{N}^{A\to B}\circ\mathcal{E}_x^{A'\to A}\otimes\textrm{id}^{B'}\left[p(k)\op{\varphi_{k}}{\varphi_{k}}\right]\right)\right], \end{align} where by construction $$ p(k)\op{\varphi_{k}}{\varphi_{k}} = (U_k\otimes M_k)\Phi^{+A'\tilde{A'}}(U_k\otimes M_k)^\dagger \ . $$ Notice that $\{(\mathbb{I}\otimes M_k)^\dagger\Pi_x(\mathbb{I}\otimes M_k)\}_{k,x}$ constitutes a set of POVM elements on $B\tilde{A'}$. This follows from the fact that $\{M_k\}_k$ are Kraus operators for a CPTP map, and so the dual of this map, $X\to \sum_kM_k^\dagger(X)M_k$, is unital. Likewise, letting $\mathcal{U}_k(\cdot):=U_k(\cdot) U_k^\dagger$ denote a unitary channel, the collection $\{\mathcal{E}_x\circ \mathcal{U}_k\}_{x,k}$ forms a family of encoding maps. Therefore, we can express Eq. 
\eqref{Eq:Ea-cv-1} as \begin{align} \label{Eq:Ea-cv-2} \text{cv}^*(\mathbf{P})&=\sum_z\textrm{Tr}\!\left[\hat{\Pi}_z^{B\tilde{A'}}\!\left(\mathcal{N}^{A\to B}\circ\hat{\mathcal{E}}_z^{A'\to A}\otimes\textrm{id}^{\tilde{A'}}[\Phi^{+A'\tilde{A'}}]\right)\right], \end{align} where the $\hat{\mathcal{E}}_z$ and $\hat{\Pi}_z$ are the concatenated encoders and decoder. This shows that we can restrict attention just to shared maximally entangled states. Furthermore, without loss of generality, we can assume that $d_{A'}\geq d_{A}$. The reason is that the transformation $\ket{\Phi^+_{d_{A''}}}^{A''\tilde{A''}}\to\ket{\varphi}^{A'B'}$ is always possible for any $d_{A''}\geq d_{A'}$; so we could have just as well used the same argument with system $A''$ and arrived at $\Phi^{+A''\tilde{A''}}$ in Eq. \eqref{Eq:Ea-cv-2}.
We next take Kraus-operator decompositions $\mathcal{E}_z(\cdot)=\sum_{k}N_{z,k}(\cdot)N_{z,k}^\dagger$ with each $N_{z,k}:\mathbb{C}^{d_{A'}}\to\mathbb{C}^{d_A}$. Since $d_{A'}\geq d_A$, we can use the ``ricochet'' property $N_{z,k}\otimes\mathbb{I}\ket{\phi_{d_{A'}}^+}^{A'\tilde{A'}}=\mathbb{I}\otimes N_{z,k}^T\ket{\phi_{d_A}^+}^{A\tilde{A}}$ to obtain \begin{align} \label{Eq:Ea-cv-3} \text{cv}^*(\mathbf{P}) = &\frac{1}{d_{A'}}\sum_z\sum_k\textrm{Tr}\left[\widetilde{P}_{z,k}\left(\mathcal{N}^{A\to B}\otimes\textrm{id}^{\tilde{A}}[\phi_{d_A}^{+A\tilde{A}}]\right)\right]\notag\\ = & \textrm{Tr}[\Omega^{AB} J_\mathcal{N}^{AB}], \end{align} where $\widetilde{P}_{z,k} := (\mathbb{I}^B\otimes N_{z,k}^*)\hat{\Pi}_z^{B\tilde{A'}}(\mathbb{I}^B\otimes N_{z,k}^T)$, we have swapped the ordering of the systems to match earlier notation, and \begin{align} \Omega^{AB}&=\frac{1}{d_{A'}}\sum_z\sum_k(N_{z,k}^*\otimes\mathbb{I}^B )\hat{\Pi}_z^{A'B}(N_{z,k}^T\otimes\mathbb{I}^B)\notag\\ &=\frac{1}{d_{A'}}\sum_z\mathcal{E}^{*A'\to A}_z\otimes\textrm{id}^B\left(\hat{\Pi}_z^{A'B}\right), \end{align} in which $\mathcal{E}^*_z(\cdot):=\sum_{k}N^*_{z,k}(\cdot)N_{z,k}^T$. Since each $\mathcal{E}_z^*$ is trace-preserving, we have \begin{align} \textrm{Tr}_A\Omega^{AB}&=\frac{1}{d_{A'}}\sum_z\textrm{Tr}_A\left[\mathcal{E}^{*A'\to A}_z\otimes\textrm{id}^B\left(\hat{\Pi}_z^{A'B}\right)\right]\notag\\ &=\frac{1}{d_{A'}}\sum_z\textrm{Tr}_{A'}\left(\hat{\Pi}_z^{A'B}\right)\notag\\ &=\frac{1}{d_{A'}}\textrm{Tr}_{A'}\left(\mathbb{I}^{A'}\otimes\mathbb{I}^B\right)=\mathbb{I}^B. \end{align} Hence $\textrm{Tr}_A\Omega^{AB}=\mathbb{I}^B$ is a necessary condition on the operator $\Omega^{AB}$ such that $\text{cv}^*(\mathbf{P})=\textrm{Tr}[\Omega^{AB}J_{\mathcal{N}}^{AB}]$. Let us now show that it is also sufficient.
Consider any positive operator $\Omega^{AB}$ such that $\textrm{Tr}_A\Omega^{AB}=\mathbb{I}^B$. Introduce the generalized Pauli operators on system $A$, explicitly given by $U_{m,n}=\sum_{k=0}^{d_A-1}e^{i mk 2\pi/d_A}\op{k\oplus n}{k}$, where $m,n=0,\dots,d_A-1$ and addition is taken modulo $d_A$. It is easy to see that $\Delta(\cdot):=\frac{1}{d_A}\sum_{m,n}U_{m,n}(\cdot)U_{m,n}^\dagger$ is a completely depolarizing map; i.e. $\Delta(X)=\textrm{Tr}[X]\mathbb{I}$. Hence, \[\Delta^A\otimes\textrm{id}^B[\Omega^{AB}]=\mathbb{I}^A\otimes \textrm{Tr}_A\Omega^{AB}=\mathbb{I}^A\otimes\mathbb{I}^B.\] This implies that the elements $\{\mathcal{U}^A_{m,n}\otimes\textrm{id}^B(\Omega^{AB})\}_{m,n}$ form a valid POVM on $AB$. Therefore, we can construct an entanglement-assisted protocol as follows. Let Alice and Bob share a maximally entangled state $\ket{\Phi^+_{d_A}}^{\tilde{A}A}$. Alice applies the unitary encoding map on system $A$ given by $\mathcal{U}_{m,n}^T(\cdot):=U_{m,n}^T(\cdot) U_{m,n}^*$, and sends her system through the channel $\mathcal{N}$. When Bob performs the POVM just described on systems $\tilde{A}B$, the obtained score is \begin{align}
& \sum_{m,n}P(m,n|m,n) \notag \\ =& \frac{1}{d_A}\sum_{m,n}\textrm{Tr}\bigg[\left(\mathcal{U}^{\tilde{A}}_{m,n}\otimes\textrm{id}^B\left[\Omega^{\tilde{A}B}\right]\right)(\textrm{id}^{\tilde{A}}\otimes\mathcal{N}) \notag \\ & \hspace{3.75cm} \left(\textrm{id}^{\tilde{A}}\otimes\mathcal{U}^{TA}_{m,n}\left[\Phi^{+ \tilde{A}A}_{d_A}\right]\right)\bigg]\notag\\ =& \frac{1}{(d_A)^2}\sum_{m,n}\textrm{Tr}\left[\Omega^{\tilde{A}B}\left(\textrm{id}^{\tilde{A}}\otimes\mathcal{N}\left[\phi_{d_A}^{+\tilde{A}A}\right]\right)\right]\notag\\ =& \textrm{Tr}[\Omega J_{\mathcal{N}}]. \end{align} The key idea in this equation is that the unitary encoding $U_{m,n}$ performed on Alice's side is canceled by exactly one POVM element on Bob's side. This completes the proof of Theorem \ref{Thm:ea-cv}. \end{proof}
\begin{remark} The achievability protocol in the previous proof is essentially the original superdense coding protocol applied on a $d_A$-dimensional input channel. \end{remark}
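As a quick numerical sanity check (a minimal sketch; the dimension $d=3$ and the random test operator are arbitrary choices), one can verify the two facts the proof relies on: the twirling map $\Delta$ with normalization $1/d_A$ acts as $\Delta(X)=\textrm{Tr}[X]\mathbb{I}$, and distinct generalized Pauli operators are trace-orthogonal, which is what lets each POVM element single out exactly one encoding in the superdense-coding protocol.

```python
import numpy as np

def weyl(m, n, d):
    # Generalized Pauli U_{m,n} = sum_k e^{2*pi*i*m*k/d} |k+n mod d><k|
    U = np.zeros((d, d), dtype=complex)
    for k in range(d):
        U[(k + n) % d, k] = np.exp(2j * np.pi * m * k / d)
    return U

d = 3
rng = np.random.default_rng(7)
X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))

# Delta(X) = (1/d) * sum_{m,n} U_{m,n} X U_{m,n}^dag = Tr[X] * I
delta_X = sum(weyl(m, n, d) @ X @ weyl(m, n, d).conj().T
              for m in range(d) for n in range(d)) / d
assert np.allclose(delta_X, np.trace(X) * np.eye(d))

# Trace-orthogonality: Tr[U_{m,n}^dag U_{m',n'}] = d * delta_{mm'} delta_{nn'}
for (m1, n1) in [(0, 1), (1, 0), (2, 2)]:
    for (m2, n2) in [(0, 1), (1, 2)]:
        overlap = np.trace(weyl(m1, n1, d).conj().T @ weyl(m2, n2, d))
        expected = d if (m1, n1) == (m2, n2) else 0.0
        assert abs(overlap - expected) < 1e-12
```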
Theorem \ref{Thm:ea-cv} shows that the ea cv can be computed using semi-definite programming. Here we provide a family of channels for which it can be computed even more easily. \begin{corollary} Let $\mathcal{N}\in\text{CPTP}(A\to B)$ be any channel such that $J_\mathcal{N}$ has an eigenvector $\ket{\varphi}^{AB}$ with largest eigenvalue $\lambda_{\max}(J_{\mathcal{N}})$ such that $\varphi^B=\mathbb{I}/d_B$. Then \begin{equation} \text{cv}^*(\mathcal{N})=d_B\lambda_{\max}(J_{\mathcal{N}}). \end{equation} \end{corollary} \begin{proof} Choose $\Omega^{AB}=d_B\op{\varphi}{\varphi}^{AB}$ in Theorem \ref{Thm:ea-cv}. This choice is feasible since $\varphi^B=\mathbb{I}/d_B$, and it is optimal since $\textrm{Tr}[\Omega^{AB}J_{\mathcal{N}}]\leq\lambda_{\max}(J_{\mathcal{N}})\textrm{Tr}[\Omega^{AB}]=d_B\lambda_{\max}(J_{\mathcal{N}})$ for every feasible $\Omega^{AB}$. \end{proof} In addition, a solution can easily be deduced for all qubit channels. \begin{theorem} \label{Thm:qubit-ea-cv} For a qubit channel $\mathcal{N}$, let $A$ be the $3\times 3$ correlation matrix of $J_\mathcal{N}$; i.e. $A_{ij}=\frac{1}{2}\textrm{Tr}[(\sigma_i\otimes\sigma_j) J_\mathcal{N}]$. Then \begin{equation} \text{cv}^*(\mathcal{N})=1+\Vert A\Vert_1 \end{equation} where $\Vert A\Vert_1=\textrm{Tr}\sqrt{A^\dagger A}$. \end{theorem} \begin{proof} Using Theorem \ref{Thm:ea-cv}, we can write $\Omega=\frac{1}{2}\left((\mathbb{I}+\mathbf{r}\cdot\vec{\sigma})\otimes\mathbb{I}+\sum_{i,j}t_{ij}\sigma_i\otimes\sigma_j\right)$. On the other hand, up to local unitaries, the Choi matrix of a channel $\mathcal{N}$ can be expressed as $J_\mathcal{N}=\frac{1}{2}(\mathbb{I}\otimes(\mathbb{I}+\mathbf{s}\cdot\vec{\sigma})+\allowbreak\sum_{i}a_{i}\sigma_i\otimes\sigma_i)$. Hence \begin{align}
\textrm{Tr}[\Omega J_\mathcal{N}]=1+\sum_{i=1}^3a_i t_{ii}\leq 1+\sum_{i=1}^3|a_i|, \end{align}
where the last inequality follows from the fact that $|t_{ii}|\leq 1$ since $\Omega\geq 0$. The theorem is proven by recalling that the $|a_i|$ are the singular values of the correlation matrix $A$. \end{proof}
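To illustrate Theorem \ref{Thm:qubit-ea-cv} numerically (a minimal sketch; the qubit depolarizing channel $\mathcal{N}_p(X)=(1-p)X+p\,\textrm{Tr}[X]\mathbb{I}/2$ and the value $p=1/4$ are illustrative choices, not taken from the text), one can check that the formula $1+\Vert A\Vert_1$ agrees with the corollary's value $d_B\lambda_{\max}(J_\mathcal{N})$, since the top eigenvector of this channel's Choi matrix is maximally entangled.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [sx, sy, sz]

p = 0.25
# Unnormalized Choi matrix of N_p: J = (1-p)|Phi+><Phi+| + (p/2) I_4,
# with |Phi+> = |00> + |11> (unnormalized)
phi = np.array([1.0, 0.0, 0.0, 1.0]).reshape(4, 1)
J = ((1 - p) * (phi @ phi.T) + (p / 2) * np.eye(4)).astype(complex)

# Correlation matrix A_ij = (1/2) Tr[(sigma_i (x) sigma_j) J]
A = np.array([[0.5 * np.trace(np.kron(si, sj) @ J).real
               for sj in paulis] for si in paulis])

cv_star = 1 + np.linalg.svd(A, compute_uv=False).sum()  # 1 + ||A||_1
assert np.isclose(cv_star, 4 - 3 * p)                   # = 1 + 3(1-p)

# Corollary: top eigenvector of J is maximally entangled, so cv* = d_B * lambda_max(J)
assert np.isclose(cv_star, 2 * np.linalg.eigvalsh(J)[-1])

# Unassisted qubit value cv = 1 + sigma_max(A); verify cv* <= 2 cv
cv = 1 + np.linalg.svd(A, compute_uv=False).max()
assert cv_star <= 2 * cv + 1e-12
```

The last assertion previews the factor-of-two comparison drawn below the theorem.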
It is worthwhile to compare Theorems \ref{Thm:qubit-cv} and \ref{Thm:qubit-ea-cv}. Since $\Vert A\Vert_1\leq 3\sigma_{\max}(A)$ and $\sigma_{\max}(A)\leq 1$, we have \begin{align} \text{cv}^*(\mathcal{N})=1+\Vert A\Vert_1\leq 2+2\sigma_{\max}(A)=2 \text{cv}(\mathcal{N}). \end{align} Hence, for qubit channels, shared entanglement between sender and receiver cannot enhance the cv by more than a factor of two, the output dimension. In general, we conjecture the following.
\noindent\textit{Conjecture}: For any channel $\mathcal{N}\in\text{CPTP}(A\to B)$, \begin{equation} \text{cv}^*(\mathcal{N})\leq d_B\cdot \text{cv}(\mathcal{N}). \end{equation}
\section{Relaxations on the Communication Value}\label{sec:relaxations_of_cv} In previous sections we have made use of the fact that the communication value can be expressed as a conic optimization problem (Proposition \ref{Prop:Proposition-cv}). It was noted that in general this problem is hard to solve, but that if the dimension of the Choi matrix is sufficiently small, we can relax the cone $\text{SEP}(A:B)$ to the PPT cone $\text{PPT}(A:B)$ and still determine $\text{cv}(\mathcal{N})$ (Corollary \ref{cor:Proposition-PPT}). Moreover, in Section \ref{sec:entropic-characterization-of-cv}, we used the optimization program of $H_{\min}$ to justify characterizing $\text{cv}(\mathcal{N})$ by a restricted min-entropy, and in Section \ref{sec:entanglement_assisted_cv} we saw the relationship between $H_{\min}$ and $\text{cv}^{*}$. In all of these cases, we have considered the same optimization program and simply varied the cone to which the variable was restricted. That is, we have considered the general conic program \begin{equation}
\begin{aligned}\label{eqn:conicPrimal}
\text{maximize:}\quad & \textrm{Tr}[X \Omega^{AB}] \\
\text{subject to:}\quad & \textrm{Tr}_{A}(\Omega^{AB}) = \mathbb{I}^{B} \\
& \Omega^{AB} \in \mathcal{K}
\end{aligned} \end{equation} where $\text{cv}(\mathcal{N})$ corresponds to $\mathcal{K}=\text{SEP}(A:B)$ and $\text{cv}^{*}$ corresponds to $\mathcal{K} = \mathrm{Pos}(A \otimes B)$. It follows that whenever we pick a cone $\mathcal{K}$ such that $\text{SEP}(A:B) \subset \mathcal{K}$, we obtain an upper bound on $\text{cv}(\mathcal{N})$. Throughout the rest of this section, when considering a relaxation $\text{SEP}(A:B) \subset \mathcal{K}$, we denote the value of the optimization program by $\text{cv}^{\mathcal{K}}(\mathcal{N})$. In this section we primarily consider the PPT relaxation, $\mathcal{K}=\text{PPT}(A:B)$. We also discuss the relaxation to the $k$-symmetric cone, which is known to converge to the separable cone as $k$ goes to infinity \cite{Doherty-2004}, making it particularly relevant.
\subsection{Multiplicativity of Tensored PPT Operators over the PPT cone} We begin with the relaxation to the PPT cone. The primary advantage of this relaxation is that the problem becomes a semidefinite program and so pre-existing software may be used to find the optimal value. One may derive the primal and dual problems to be: \begin{center}
\emph{Primal problem}\\[-5mm]
\begin{equation}
\begin{aligned}\label{eqn:PPTPrimal}
\text{maximize:}\quad & \textrm{Tr}[X\Omega^{AB}]\\
\text{subject to:}\quad & \textrm{Tr}_{A}(\Omega^{AB}) = \mathbb{I}^{B} \\
& \Gamma(\Omega^{AB}) \geq 0 \\
& \Omega^{AB} \geq 0 \ .
\end{aligned}
\end{equation}
\\
\emph{Dual problem}\\[-5mm]
\begin{equation}
\begin{aligned}\label{eqn:PPTDual}
\text{minimize:}\quad & \textrm{Tr}(Y_{1}) \\
\text{subject to:}\quad & \mathbb{I}^{A} \otimes Y_{1} - \Gamma^{B}(Y_{2}) \geq X \\
& Y_{2} \geq 0 \\
& Y_{1} \in \mathrm{Herm}(B) \ ,
\end{aligned}
\end{equation} \end{center} where $\Gamma^{B}$ is the partial transpose map on the $B$ space. This SDP satisfies strong duality as can be verified using Slater's condition.
With this established, we will now present a special multiplicativity property of the PPT relaxation, $\text{cv}^{\text{PPT}}$. \begin{theorem}\label{thm:ppt-multiplicativity} Let $R \in \text{PPT}(A_{1}:B_{1}), Q \in \text{PPT}(A_{2}:B_{2})$. Then $$\text{cv}^{\text{PPT}}(R\otimes Q) = \text{cv}^{\text{PPT}}(R) \, \text{cv}^{\text{PPT}}(Q) \ . $$ \end{theorem} \begin{proof} Let $R \in \text{PPT}(A_{1}:B_{1}), Q \in \text{PPT}(A_{2}:B_{2})$. Let $(Y_{1},Y_{2}),(\overline{Y}_{1},\overline{Y}_{2})$ be the dual optimizers for $R,Q$ respectively. From \eqref{eqn:PPTDual}, we have \begin{equation}\label{eqn:ppt_mult_deriv_1}
\begin{aligned}
\mathbb{I}^{A_{1}} \otimes Y_{1} \geq R + \Gamma^{B_{1}}(Y_{2}) \\ \mathbb{I}^{A_{2}} \otimes \overline{Y}_{1} \geq Q + \Gamma^{B_{2}}(\overline{Y}_{2}) \ .
\end{aligned} \end{equation} Define $R' := \Gamma^{B_{1}}(R), \, Q' := \Gamma^{B_{2}}(Q)$, which are both positive operators by assumption. Then we have \begin{align*}
& (\mathbb{I}^{A_{1}} \otimes Y_{1}) \otimes (\mathbb{I}^{A_{2}} \otimes \overline{Y}_{1}) \\
\geq &(R + \Gamma^{B_{1}}(Y_{2})) \otimes (Q + \Gamma^{B_{2}}(\overline{Y}_{2})) \\
= &R \otimes Q + R \otimes \Gamma^{B_{2}}(\overline{Y}_{2}) \notag\\
&\quad +\Gamma^{B_{1}}(Y_{2}) \otimes Q + \Gamma^{B_{1}}(Y_{2}) \otimes \Gamma^{B_{2}}(\overline{Y}_{2}) \\
= &R \otimes Q + \Gamma^{B_{1}B_{2}}(R' \otimes \overline{Y}_{2})\notag\\
&\quad+ \Gamma^{B_{1}B_{2}}(Y_{2} \otimes Q') + \Gamma^{B_{1}B_{2}}(Y_{2} \otimes \overline{Y}_{2}) \\
= &R \otimes Q + \Gamma^{B_{1}B_{2}}(R' \otimes \overline{Y}_{2} + Y_{2} \otimes Q' + Y_{2} \otimes \overline{Y}_{2}) \ , \end{align*} where the first line follows from \eqref{eqn:ppt_mult_deriv_1}, the third is because of how the partial transpose over multiple systems may be decomposed, and the fourth is by linearity. Note that $R',Q'$ are positive as $R,Q$ are PPT. Moreover $Y_{2},\overline{Y}_{2}$ are positive by \eqref{eqn:PPTDual}. Thus the whole argument of $\Gamma^{B_{1}B_{2}}$ is a positive semidefinite operator. Therefore $(Y_{1,new} = Y_{1} \otimes \overline{Y}_{1}, Y_{2,new} = R' \otimes \overline{Y}_{2} + Y_{2} \otimes Q' + Y_{2} \otimes \overline{Y}_{2})$ is a feasible point of the dual problem for $R \otimes Q$, and it achieves the value $\textrm{Tr}(Y_{1})\textrm{Tr}(\overline{Y}_{1})$. If we let $X_{1},X_{2}$ be the optimizers for the primal problem for $R,Q$ respectively, then $X_{1} \otimes X_{2}$ is clearly a feasible point for the primal problem for $R \otimes Q$ that achieves the value $\textrm{Tr}(RX_{1})\textrm{Tr}(QX_{2})$. By strong duality of the individual programs, $\textrm{Tr}(RX_{1}) = \textrm{Tr}(Y_{1})$ and $\textrm{Tr}(QX_{2}) = \textrm{Tr}(\overline{Y}_{1})$, so the primal and dual values for $R \otimes Q$ coincide; hence both proposed points are optimal, and this completes the proof. \end{proof} \begin{corollary}\label{corr:ppt-multiplicativity} Given any two co-positive maps, $\mathcal{N}, \mathcal{M}$, $\text{cv}^{\text{PPT}}(\mathcal{N} \otimes \mathcal{M}) = \text{cv}^{\text{PPT}}(\mathcal{N}) \text{cv}^{\text{PPT}}(\mathcal{M})$. \end{corollary} It is interesting to note that we do not know whether $\text{cv}^{\text{PPT}}$ remains multiplicative when only one of the channels is co-positive, which would be a stronger claim.
This is relevant because we conjecture that $\text{cv}(\mathcal{N} \otimes \mathcal{M})$ is multiplicative if either of the channels is entanglement breaking, a property known to hold for maximal $p$-norms with $p \geq 1$ \cite{King-2002}. Even the weaker case of multiplicativity in which both channels are entanglement breaking remains open; it would mirror Corollary \ref{corr:ppt-multiplicativity}, but for separable Choi matrices and optimization over the separable cone.
\subsubsection*{Relation to $k$-Symmetric Extendable Cone} Given the multiplicativity of tensors of PPT operators for $\text{cv}^{\text{PPT}}$, one might hope this property extends to $\text{cv}^{\text{Sym}_{k}}$ with $k$-symmetrically extendable operators, where an operator $R \in \mathrm{Pos}(A \otimes B)$ is $k$-symmetrically extendable if there exists $\tilde{R}^{AB^{k}_{1}} \in \mathrm{Pos}(A \otimes B^{\otimes k})$ such that \begin{enumerate}
\item $\tilde{R} = (\mathbb{I}_{A} \otimes W_{\pi})\tilde{R}(\mathbb{I}_{A} \otimes W_{\pi})^{\ast}$ for all $\pi \in \mathcal{S}_{k}$
\item $\textrm{Tr}_{B^{k}_{2}}(\tilde{R}) = R$
\item $(\mathbb{I}_{A} \otimes T^{B} \otimes \mathbb{I}_{\overline{B}^{k}_{2}})\tilde{R} \geq 0$ \ . \end{enumerate} Note that the $k$-symmetrically extendable operators form a cone defined by semidefinite constraints. Moreover, it is known that $\underset{k \to \infty}{\lim} \text{Sym}_{k} = \text{SEP}(A:B)$ \cite{Doherty-2004}. One can then attempt to extend Theorem \ref{thm:ppt-multiplicativity} to this setting by deriving the dual program for $\text{cv}^{\text{Sym}_{k}}$: \begin{equation}\label{eqn:SimplekSymDual} \begin{aligned}
\text{min:}\quad & \textrm{Tr}(W) \\
& \mathbb{I}_{A} \otimes W \otimes \mathbb{I}_{\overline{B}^{k}_{2}} + \sum_{j=1}^{k!} \left(Y_{j} - \Phi_{\pi^{-1}_{j-1}}(Y_{j})\right) \\
& \hspace{3cm} \succeq X \otimes \mathbb{I}_{B_{2}^{k}} + \Gamma^{B_{1}}(Z)\\
& Y_{j} \in \mathrm{Herm}(AB_{1}^{k}) \quad \forall j \in [k!]\\
& Z \geq 0 \\
& W \in \text{Herm}(B) \ , \end{aligned} \end{equation} where the indexing of $\pi_{j}$ is given by a chosen bijection between the index set $[k!]$ and the permutations in $\mathcal{S}_{k}$. However, the proof method for Theorem \ref{thm:ppt-multiplicativity} does not seem to naturally extend due to the permutations of the spaces.
\subsection{Numerical Evaluation of the Communication Value}
\label{Sect:Numerics}
To numerically support this work, we developed the CVChannel.jl software package which is publicly available on GitHub \cite{CVChannel2021}. This Julia \cite{bezanson2017julia} software package provides tools for bounding the communication value of quantum channels and certifying their non-multiplicativity. Our software is built upon the disciplined convex programming package Convex.jl \cite{convexjl2014}, and our numerical results are produced using the splitting conic solver (SCS) \cite{scs2019}. For more details, the curious reader should review the software documentation and source code found in our GitHub repository \cite{CVChannel2021}.
The communication value is difficult to compute in general, but it can be bounded relatively efficiently. CVChannel.jl provides the following methods for bounding $\text{cv}(\mathcal{N})$. An upper bound on $\text{cv}(\mathcal{N})$ is computed via the dual formulation of the PPT relaxation of the communication value, Eq. \eqref{eqn:PPTDual}, \begin{equation}
\text{cv}(\mathcal{N})\leq \text{cv}^{\text{PPT}}(\mathcal{N}). \end{equation} While $\text{cv}^{\text{PPT}}(\mathcal{N})$ is a natural upper bound of $\text{cv}(\mathcal{N})$, we consider the dual specifically so that we take a conservative approach to numerical error. That is, numerical error in minimizing the dual will result in a looser upper bound. In general, when considering upper bounds we work with the dual problem and when considering lower bounds we work with the primal problem. While the SDP satisfies strong duality, this convention minimizes the risk of false positives caused by numerical error.
For a lower bound on $\text{cv}(\mathcal{N})$, we take a biconvex optimization approach to the problem \begin{equation} \text{cv}(\mathcal{N})=\max_{\{\Pi_x\}, \{\rho_x\}}\sum_{x=1}^{d_B^2}\textrm{Tr}[\Pi_x\mathcal{N}(\rho_x)]. \end{equation} This ``see-saw" technique is applied to similar problems in \cite{Reimpell2005,Kosut2009}, although our implementation remains distinct. To begin, an ensemble of pure quantum states $\{\rho_x\}_{x=1}^{d_B^2}$ is initialized at random according to the Haar measure. Then, the following procedure is iterated: \begin{enumerate}
\item With the states fixed, the POVM measurement is numerically optimized as a semidefinite program
\begin{equation}
\max_{\{\Pi_y\}_{y=1}^{d_B^2}}\sum_{x=1}^{d_B^2}\textrm{Tr}[\Pi_x\mathcal{N}(\rho_x)].
\end{equation}
\item With the optimal measurement $\{\Pi_y^{\star}\}$, we compute the optimal ensemble of quantum states $\{\rho_x^{\star}\}_{x=1}^{d_B^2}$ as
\begin{equation}
\rho_x^{\star} = \op{\phi_x}{\phi_x},
\end{equation}
where $\ket{\phi_x}$ is an eigenvector of $\mathcal{N}^{\dagger}(\Pi_x^{\star})$ associated with its largest eigenvalue $||\mathcal{N}^{\dagger}(\Pi_x^{\star})||_{\infty}$, $\mathcal{N}^{\dagger}$ being the adjoint channel and $||\cdot||_{\infty}$ denoting the largest eigenvalue. \end{enumerate} Repeating this procedure results in a set of optimized states $\{\rho_x^{\star}\}_{x=1}^{d_B^2}$ and measurement $\{\Pi_y^{\star}\}_{y=1}^{d_B^2}$ such that \begin{equation}
\text{cv}^{SeeSaw}(\mathcal{N})=\sum_{x=1}^{d_B^2}\textrm{Tr}[\Pi_x^{\star}\mathcal{N}(\rho_x^{\star})] \leq \text{cv}(\mathcal{N}). \end{equation} To improve the see-saw optimization, the procedure is simply performed many times with randomly initialized states. Combining these techniques, we numerically bound the communication value, \begin{equation}
\text{cv}^{SeeSaw}(\mathcal{N}) \leq \text{cv}(\mathcal{N})\leq \text{cv}^{\text{PPT}}(\mathcal{N}, \;\text{dual}). \end{equation}
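The state-update step can be sketched in a few lines of numpy (a minimal illustration, not the CVChannel.jl implementation itself: the qubit depolarizing channel and the hand-picked decoder are illustrative assumptions, and the measurement SDP step is skipped by fixing the POVM):

```python
import numpy as np

p = 0.3
def channel(X):
    # Qubit depolarizing channel; its Kraus operators are Hermitian, so N^dagger = N
    return (1 - p) * X + p * np.trace(X) * np.eye(2) / 2

# A fixed decoder POVM (the remaining elements of the size-d_B^2 POVM are zero)
povm = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]

score = 0.0
for Pi in povm:
    vals, vecs = np.linalg.eigh(channel(Pi))   # spectral decomposition of N^dag(Pi)
    v = vecs[:, -1:]                           # eigenvector of the largest eigenvalue
    rho_star = v @ v.conj().T                  # optimal signal state for this Pi
    score += np.trace(Pi @ channel(rho_star)).real

# For this channel and decoder, the see-saw lower bound already equals cv = 2 - p
assert np.isclose(score, 2 - p)
```

Each state update contributes $||\mathcal{N}^{\dagger}(\Pi_x)||_{\infty}$ to the score, which is why the update uses the top eigenvector.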
To numerically certify that quantum channels $\mathcal{N}$ and $\mathcal{M}$ are non-multiplicative, we need to compute a lower bound on $\text{cv}(\mathcal{N}\otimes\mathcal{M})$ and upper bounds on $\text{cv}(\mathcal{N})$ and $\text{cv}(\mathcal{M})$. CVChannel.jl computes the lower bound as $\text{cv}^{SeeSaw}(\mathcal{N}\otimes\mathcal{M})\leq \text{cv}(\mathcal{N}\otimes\mathcal{M})$ and the upper bound as $\text{cv}(\mathcal{N})\leq \text{cv}^{\text{PPT}}(\mathcal{N}, \; \text{dual})$. Non-multiplicativity is numerically confirmed when \begin{equation}
\text{cv}^{SeeSaw}(\mathcal{N}\otimes\mathcal{M}) - \text{cv}^{\text{PPT}}(\mathcal{N})\text{cv}^{\text{PPT}}(\mathcal{M}) > \varepsilon \end{equation} where $\varepsilon>0$ is a conservative bound on the numerical error. One drawback of this procedure is its susceptibility to false negatives due to the fact that the PPT relaxation is a loose upper bound of the communication value.
\subsection{Examples} Having established properties of the $\text{PPT}$ relaxation of the communication value, we investigate channels which are known in other settings to admit non-multiplicative behaviour. In particular, we look at the family of Werner-Holevo channels \cite{Werner-2002a}, the dephrasure channel \cite{Leditzky-2018}, and the Siddhu channel \cite{Siddhu-2020}. We see that the Werner-Holevo channel is not multiplicative over a range of parameters, but the dephrasure and Siddhu channels, which are known for their superactivation of coherent information, are always multiplicative for the communication value. In some sense this should not be surprising, as the communication value captures a notion of using the quantum channel to transmit classical information whereas the coherent information measures the ability to transfer quantum information. However, it exemplifies how different the coherent information and communication value are as measures. \subsubsection*{Werner-Holevo Channels} In Section \ref{Sect:Werner-Holevo}, we showed how to determine the $\text{cv}$ for the Werner-Holevo channels. In this section we extend the method used to obtain this result, constructing a linear program (LP) for determining $\text{cv}^{\text{PPT}}$ for $n$ Werner-Holevo channels run in parallel. We then use this to show non-multiplicativity of $\text{cv}(\mathcal{W}_{d,\lambda} \otimes \mathcal{W}_{d,\lambda})$ as a function of $\lambda$, as well as the non-multiplicativity of $\text{cv}^{\text{PPT}}$ for more copies of the channel. We note our derivation assumes the dimension is the same for all channels, but a generalization is straightforward. \begin{proposition} Considering $n$ Werner-Holevo channels, there is a linear program $$ \max\{\langle a , c \rangle \, : \, Ac \geq 0 , \, Bc \geq 0, \, \langle g , c \rangle = 1 \} \ , $$ which obtains the value of $\text{cv}^{\text{PPT}}(\otimes_{i=1}^{n} J(\mathcal{W}_{d,\lambda_{i}}))$.
Moreover, there exists an algorithm to generate the constraints $a,A,B,g$ which takes at most $\mathcal{O}(n2^{2n})$ steps. \end{proposition} \begin{proof}[Derivation of Constraints] Let $\Pi_{0} := \Pi_+$, $\Pi_{1} := \Pi_-$. This labelling will simplify notation. We are interested in $\text{cv}^{\text{PPT}}(\bigotimes_{i=1}^{n} J(\mathcal{W}_{d,\lambda_{i}}))$. Recalling the objective function of \eqref{eqn:PPTPrimal} is $$\textrm{Tr}[\bigotimes_{i=1}^{n} J(\mathcal{W}_{d,\lambda_{i}}) \Omega^{A^{n}B^{n}}] \ ,$$ we can twirl $\Omega$ by moving the symmetry of the Werner-Holevo channels onto $\Omega$. This results in $\Omega = \sum_{s \in \{0,1\}^{n}} c_{s} R_{s}$ where $$ R_{s} = \bigotimes_{i=1}^{n} \mathbb{F}^{s(i)} = \sum_{j \in \{0,1\}^{n}} \left( \bigotimes_{i \in [n]} (-1)^{s(i)\wedge j(i)} \Pi_{j(i)} \right) \ , $$ where the sign pattern arises because $\mathbb{F}^0 = \mathbb{I} = \Pi_0 + \Pi_1$ and $\mathbb{F}^1 = \Pi_0 - \Pi_1$, so a term is negative iff $s(i)=j(i)=1$. Combining these, we can express $\Omega$ as a linear combination of orthogonal subspaces with coefficients stored in a vector $c$: \begin{align}\label{eq:general-wh-lp-optimizer-decomp}
\Omega = \sum_{s \in \{0,1\}^{n}} c_{s} \sum_{j \in \{0,1\}^{n}} \left( \bigotimes_{i \in [n]} (-1)^{s(i)\wedge j(i)} \Pi_{j(i)} \right) \ . \end{align} With the state simplified into mutually orthogonal subspaces, we just need to convert the constraints of \eqref{eqn:PPTPrimal} to constraints on $c \in \mathbb{R}^{2^{n}}$.
Guaranteeing positivity of $\Omega$ is equivalent to guaranteeing the weight of each orthogonal subspace in \eqref{eq:general-wh-lp-optimizer-decomp} is non-negative. As multiple elements of $c$ can have weight on multiple subspaces, the constraint is that the relevant linear combination of $c$ is non-negative for each subspace. Thus the positivity constraints may be written as $Ac \geq 0$, where $A \in \mathbb{R}^{2^{n} \times 2^{n}}$ is the matrix storing the sign information $(-1)^{s(i)\wedge j(i)}$ for all $s$,$j$.
The PPT constraints correspond to $\Omega^{\Gamma} \geq 0$. Noting that $\mathbb{F}^{\Gamma} = \Phi^{+}$, the unnormalized maximally entangled state, we have $ \Omega^{\Gamma} = \sum_{s \in \{0,1\}^{n}} c_{s} \bigotimes_{i \in [n]} X^{s(i)} \ , $ where $$ X^{s(i)} := \begin{cases} d^{-1}(\Phi^{\perp}+\Phi^{+}) & s(i) = 0 \\ \Phi^{+} & s(i) = 1 \end{cases} \ , $$ where $\Phi^{\perp} = d\mathbb{I} - \Phi^{+}$. In other words, we have decomposed $\Omega^{\Gamma}$ into linear combinations of a set of orthogonal subspaces.\footnote{We note this implies $d^{-1}(\Phi^{\perp}+\Phi^{+}) = \mathbb{I}$. The choice of presentation is to make it clear we are considering two orthogonal subspaces.} Again, we only need to store the constraints on $c$, which in this case amount to the power of $d$ and whether the coefficient vanishes. By the definition of $X^{s(i)}$, there is no weight on a subspace for $c_{s}$ iff the $i^{th}$ element in the tensor is $\Phi^{\perp}$ and $s(i) = 1$, and otherwise the weight is given by $d^{-(n-w(s))}$ where $w(\cdot)$ is the Hamming weight of the string $s$. Thus the PPT constraints may be written as $Bc \geq 0$ where $B \in \mathbb{R}^{2^{n} \times 2^{n}}$.
Recalling $\Omega = \sum_{s \in \{0,1\}^{n}} c_{s} R_{s}$ and $\textrm{Tr}_{A}(\mathbb{F}) = \mathbb{I}^{B}$, $\textrm{Tr}_{A}(\mathbb{I}) = d\mathbb{I}^{B}$, the partial trace condition is reduced to $\langle g , c \rangle = 1$ where $g \in \mathbb{R}^{2^{n}}$ and $g(s) = d^{n-w(s)}$.
Finally, we have the objective function. We write $\bigotimes_{i=1}^{n} J(\mathcal{W}_{d,\lambda_{i}}) = \sum_{s \in \{0,1\}^{n}} \left(\bigotimes_{i} \zeta_{i}(s(i))\Pi_{s({i})} \right)$, where $\zeta_{i}(s(i)) = \begin{cases} \lambda_{i} f_0 & s(i) = 0 \\ (1-\lambda_{i})f_{1} & s(i) = 1 \end{cases}$, with $f_{i}$ the normalization constant in front of the projector. Calculating $\textrm{Tr}[\bigotimes_{i=1}^{n} J(\mathcal{W}_{d,\lambda_{i}})\Omega]$ using the above expression along with \eqref{eq:general-wh-lp-optimizer-decomp}, one can simplify the objective function to $$ \sum_{s \in \{0,1\}^{n}} c_{s} \left( \sum_{j \in \{0,1\}^{n}}\left[ \prod_{i \in [n]} (-1)^{s(i)\wedge j(i)} \varphi_{i}(j(i)) \right] \right) \ , $$ where $\varphi_{i}(j(i))$ is the same as $\zeta_{i}(j(i))$, except without the normalization constant. Thus we may define $a$ as the argument of the large parentheses. This completes the derivation of the LP. Finally, we note that to construct the constraints one needs to run through nested loops of sizes $2^{n},2^{n},n$, which results in the $\mathcal{O}(n2^{2n})$ steps of the algorithm. \end{proof} Using these numerics, we can look at the behaviour of the PPT relaxation of the communication value of the $n$-fold Werner-Holevo channel (Fig. \ref{fig:multi-copy-werner-holevo}). We can see that the non-multiplicativity over the PPT cone grows exponentially with the number of copies (Fig. \ref{fig:non-multiplicativity}) and that all non-multiplicativity dies out at $\lambda=0.3$ in all cases. We note it is known that for the tensor product of two Werner states, the space of PPT operators is the same as the space of separable operators. In this case, we see the non-multiplicativity of the true communication value for the Werner-Holevo channels. \begin{figure}\label{fig:multi-copy-werner-holevo}
\end{figure} \begin{figure}\label{fig:non-multiplicativity}
\end{figure}
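For concreteness, the single-copy ($n=1$) instance of the LP above can be written out and solved directly (a sketch assuming scipy is available; the choices $d=3$ and $\lambda=0$ are illustrative). Here $\Omega=c_0\mathbb{I}+c_1\mathbb{F}$, positivity on $\Pi_\pm$ gives $c_0\pm c_1\geq0$, the PPT constraint on $\Omega^{\Gamma}=c_0\mathbb{I}+c_1\Phi^+$ gives $c_0\geq0$ and $c_0+dc_1\geq0$, and the partial-trace condition reads $dc_0+c_1=1$:

```python
import numpy as np
from scipy.optimize import linprog

d, lam = 3, 0.0

# Swap operator F and the symmetric/antisymmetric projectors on C^d (x) C^d
F = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        F[i * d + j, j * d + i] = 1.0
Pp, Pm = (np.eye(d * d) + F) / 2, (np.eye(d * d) - F) / 2
J = lam * 2 / (d + 1) * Pp + (1 - lam) * 2 / (d - 1) * Pm   # Tr[J] = d

# Omega = c0*I + c1*F; objective Tr[J Omega] = d*(c0 + (2*lam - 1)*c1)
obj = np.array([d, d * (2 * lam - 1)])
ineqs = np.array([[1.0, 1], [1, -1], [1, 0], [1, d]])  # each row must be >= 0
res = linprog(-obj, A_ub=-ineqs, b_ub=np.zeros(4),
              A_eq=[[float(d), 1.0]], b_eq=[1.0], bounds=[(None, None)] * 2)
c0, c1 = res.x
Omega = c0 * np.eye(d * d) + c1 * F

# Verify feasibility of the recovered Omega, and the value d/(d-1) at lam = 0
Gamma = Omega.reshape(d, d, d, d).transpose(0, 3, 2, 1).reshape(d * d, d * d)
assert np.linalg.eigvalsh(Omega).min() > -1e-8          # Omega >= 0
assert np.linalg.eigvalsh(Gamma).min() > -1e-8          # Omega^Gamma >= 0
assert np.allclose(np.einsum('ijil->jl', Omega.reshape(d, d, d, d)), np.eye(d))
assert np.isclose(np.trace(J @ Omega), -res.fun) and np.isclose(-res.fun, d / (d - 1))
```

The final assertion, $\text{cv}^{\text{PPT}}(\mathcal{W}_{3,0})=3/2$, follows from the PPT constraint $c_0+dc_1\geq0$ becoming tight at the optimum.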
\subsubsection{PPT Relaxation of Werner-Holevo with the Identity}\label{subsec:WHID} An immediate corollary of Theorem \ref{thm:covariance-multiplicativity} is that the Werner-Holevo channel run in parallel with the identity channel of any dimension is multiplicative. That is, $\text{cv}(\mathcal{W}_{d,\lambda} \otimes \textrm{id}_{d'}) = d' \cdot\text{cv}(\mathcal{W}_{d,\lambda})$. However, here we find that this is not the case for $\text{cv}^{\text{PPT}}$, which is non-multiplicative, exhibiting a clear separation between the $\text{cv}$ and its relaxation. This separation is shown for $\mathcal{W}_{d,0} \otimes \textrm{id}_{d'}$ in Fig. \ref{fig:cv-ppt-versus-cv-wh-with-id}. It is determined using the following proposition. \begin{proposition} The PPT communication value of the Werner-Holevo channel run in parallel with an identity channel, $\text{cv}^{\text{PPT}}(\mathcal{W}_{d,\lambda} \otimes\textrm{id}_{d'})$, is given by the linear program \begin{equation} \begin{aligned} \max & \;\; dd'[w+yd'+(2\lambda-1)(x+zd')]\\ &\;\; 0\leq w-x+d'y-d'z\\ &\;\; 0\leq w-x\\ &\;\; 0\leq w+x+d'y+d'z\\ &\;\; 0\leq w+x\\ &\;\; 0\leq w+dx-y-dz\\ &\;\; 0\leq w-y \\ &\;\; 0\leq w+dx+y+dz \\ &\;\; 0\leq w+y\\ &\;\; 1=dd'w+d'x+dy+z. \end{aligned} \end{equation} \end{proposition} \begin{proof}[Derivation] The derivation is similar to the previous $\text{cv}^{\text{PPT}}$ LP derivations; we additionally consider $\overline{V}V$ covariance for the identity channel. Let us consider the channel $\mathcal{W}_{d,\lambda} \otimes\textrm{id}_{d'}$, where $\mathcal{W}_{d,\lambda}$ is defined in \eqref{eqn:WH-defn}. Then $J_{\mathcal{W}}\otimes \phi^+_{d'}$ is $UU\overline{V}V$-covariant, and so for any feasible operator $\sigma^{AB}$ in the $\text{cv}^{\text{PPT}}$ SDP, we have \begin{align}
& \textrm{Tr}[\sigma^{AA':BB'}J_{\mathcal{W}}\otimes J_{\textrm{id}_{d'}}] \notag \\ = & \textrm{Tr}[\sigma^{AA':BB'}\mathcal{T}_{UU}(J_{\mathcal{W}})\otimes\mathcal{T}_{\overline{V}V}(\phi^+_{d'})]\notag\\
=& \textrm{Tr}[\mathcal{T}_{UU}\otimes\mathcal{T}_{\overline{V}V}(\sigma^{AA':BB'})J_{\mathcal{W}}\otimes \phi^+_{d'}].\notag \end{align} Note that $\mathcal{T}_{UU}\otimes\mathcal{T}_{\overline{V}V}(\sigma)$ is still a feasible operator, and so without loss of generality we can assume that $\sigma^{AA':BB'}$ is itself $UU\overline{V}V$-covariant. Thus, we can parametrize $\sigma$ as \begin{align*}
w\mathbb{I}^{AB}\otimes\mathbb{I}^{A'B'}+x\mathbb{F}_d\otimes\mathbb{I}+y\mathbb{I}\otimes\phi^+_{d'}+z\mathbb{F}_d\otimes\phi^+_{d'}. \end{align*} The space of $UU\overline{V}V$ operators is spanned by the set of four orthogonal operators \begin{equation*}
\left\{
\begin{array}{cc}
\Pi_d^-\otimes\phi_{d'}^+ & \Pi_d^-\otimes(d'\mathbb{I}-\phi_{d'}^+) \\[2mm]
\Pi_d^+\otimes\phi_{d'}^+ & \Pi_d^+\otimes(d'\mathbb{I}-\phi_{d'}^+)
\end{array}
\right\} \end{equation*} Positivity then amounts to the conditions \begin{equation}\label{eqn:WHID-Pos-cond}
\begin{aligned}
w-x+d'y-d'z&\geq 0 \\
w-x&\geq 0 \\
w+x+d'y+d'z&\geq 0 \\
w+x&\geq 0.
\end{aligned} \end{equation} The partial transpose of $\sigma$, $\sigma^{\Gamma_{BB'}}$ is given by \begin{align*}
w\mathbb{I}^{AB}\otimes\mathbb{I}^{A'B'}+x\phi^+_d\otimes\mathbb{I}+y\mathbb{I}\otimes\mathbb{F}_{d'}+z\phi^+_d\otimes\mathbb{F}_{d'}. \end{align*} To check positivity, we now just need to swap the orthogonal basis operators: \begin{equation*}
\left\{
\begin{array}{cc}
\phi_{d}^+\otimes\Pi_{d'}^- & (d\mathbb{I}-\phi_{d}^+)\otimes\Pi_{d'}^- \\[2mm] \phi_{d}^+\otimes\Pi_{d'}^+ & (d\mathbb{I}-\phi_{d}^+)\otimes\Pi_{d'}^+
\end{array}
\right\} \end{equation*} This yields the conditions \begin{equation}\label{eqn:WHID-PPT-cond}
\begin{aligned}
w+dx-y-dz&\geq 0 \\
w-y&\geq 0 \\
w+dx+y+dz&\geq 0 \\
w+y&\geq 0.
\end{aligned} \end{equation} Finally, we compute the objective function \begin{align} & \textrm{Tr}[\sigma^{AA':BB'}J_\mathcal{W}\otimes\phi_{d'}^+] \notag\\ =&d'(w\textrm{Tr}[J_{\mathcal{W}}]+x\textrm{Tr}[\mathbb{F}J_{\mathcal{W}}])+{d'}^2(y\textrm{Tr}[J_{\mathcal{W}}]+z\textrm{Tr}[\mathbb{F}J_{\mathcal{W}}])\notag\\ =&d'(wd+xd(2\lambda-1))+{d'}^2(yd+zd(2\lambda-1))\notag\\ =&dd'[w+yd'+(2\lambda-1)(x+zd')], \label{eqn:WHID-obj-func} \end{align} and the partial trace condition \begin{align}\label{eqn:WHID-tr-cond}
\textrm{Tr}_{AA'}[\sigma^{AA':BB'}]&=(dd' w+d'x+dy+z)\mathbb{I}^{BB'}. \end{align} Combining \eqref{eqn:WHID-Pos-cond} -- \eqref{eqn:WHID-tr-cond} completes the derivation. \end{proof}
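The LP above is small enough to solve directly (a sketch assuming scipy is available; the parameters $d=3$, $d'=2$, $\lambda=0$ are illustrative choices). Since $\text{cv}^{\text{PPT}}(\mathcal{W}_{d,\lambda}\otimes\textrm{id}_{d'})\geq\text{cv}(\mathcal{W}_{d,\lambda}\otimes\textrm{id}_{d'})=d'\,\text{cv}(\mathcal{W}_{d,\lambda})\geq d'$, the optimal value must be at least $d'$:

```python
import numpy as np
from scipy.optimize import linprog

d, dp, lam = 3, 2, 0.0
# Variables (w, x, y, z); objective d*d'*[w + y*d' + (2*lam - 1)*(x + z*d')]
obj = d * dp * np.array([1.0, 2 * lam - 1, dp, (2 * lam - 1) * dp])
ineqs = np.array([        # each row must be >= 0
    [1.0, -1,  dp, -dp],  # w - x + d'y - d'z
    [1.0, -1,   0,   0],  # w - x
    [1.0,  1,  dp,  dp],  # w + x + d'y + d'z
    [1.0,  1,   0,   0],  # w + x
    [1.0,  d,  -1,  -d],  # w + dx - y - dz
    [1.0,  0,  -1,   0],  # w - y
    [1.0,  d,   1,   d],  # w + dx + y + dz
    [1.0,  0,   1,   0],  # w + y
])
res = linprog(-obj, A_ub=-ineqs, b_ub=np.zeros(8),
              A_eq=[[float(d * dp), dp, d, 1.0]], b_eq=[1.0],
              bounds=[(None, None)] * 4)
val = -res.fun
assert res.status == 0
assert np.all(ineqs @ res.x >= -1e-8)   # recovered point satisfies all constraints
assert val >= dp - 1e-6                 # cv_PPT >= d' (see lead-in)
```

Scanning $\lambda$ with this routine reproduces the separation plotted in Fig. \ref{fig:cv-ppt-versus-cv-wh-with-id}.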
\begin{figure}\label{fig:cv-ppt-versus-cv-wh-with-id}
\end{figure}
\subsubsection*{Dephrasure Channel} We next consider the dephrasure channel, $$ \mathcal{N}_{p,q}(X) := (1-q)\left((1-p)X + pZXZ \right) + q \textrm{Tr}(X) \dyad{e} \ , $$ where $p,q \in [0,1]$. The interesting aspect of the dephrasure channel is that in some parameter regime it admits superadditivity of coherent information \cite{Leditzky-2018}. We first present its communication value. \begin{lemma} $\text{cv}(\mathcal{N}_{p,q}) = 2-q$. \end{lemma} \begin{proof} We are going to prove this by constructing feasible operators in the primal and dual which achieve this value. First we note the Choi matrix: \begin{align*} J(\mathcal{N}_{p,q}) =& (1-q)\left(\op{00}{00}+\op{11}{11}\right) \\ & + \gamma\left(\ket{00}\bra{11} + \ket{11}\bra{00}\right) \\ & \hspace{1cm} + q\left(\op{0e}{0e} + \dyad{1e}\right) \ , \end{align*} where $\gamma := (1-q)(1-2p)$. Then for the primal problem, we may choose $$X =\dyad{00} + \dyad{11} + 1/2\left(\dyad{0e} + \dyad{1e}\right) \ . $$ This clearly satisfies $\textrm{Tr}_{A}(X) = \mathbb{I}^{B}$, it is PPT as it is diagonal, and $\langle X , J(\mathcal{N}_{p,q}) \rangle = 2 -q$. For the dual problem, let \begin{align*} Y_{1} &= (1-q)(\dyad{0}+\dyad{1}) + q\dyad{e} \\ Y_{2} &= \kappa (\dyad{01} + \dyad{10}) - \gamma(\ket{01}\bra{10}+\ket{10}\bra{01}) \ , \end{align*}
where $(1-q) \geq \kappa \geq |\gamma| = (1-q)|(1-2p)|$. Note this interval is never empty as $|(1-2p)|\in[0,1]$ for all $p \in [0,1]$.
Then $Y_{1}$ is clearly Hermitian, and $Y_{2} \succeq 0$ as its eigenvalues are $\kappa \pm \gamma \geq |\gamma| \pm \gamma \geq 0$ and $0$ with multiplicity $4$. Then, one may calculate from these expressions that \begin{align*} & \mathbb{I}_{A} \otimes Y_{1} - \Gamma(Y_{2}) - J(\mathcal{N}_{p,q}) \\ =& (1-q-\kappa) \left[\dyad{01} + \dyad{10}\right] \ , \end{align*} which is positive semidefinite since $\kappa \leq 1-q$. Therefore we have constructed a feasible choice. Finally, $\textrm{Tr}(Y_{1}) = 2-q$ completes the proof. \end{proof} Note that the above implies the `dephasing' property of the dephrasure channel is irrelevant. This is in some sense intuitive, as the dephasing cannot hurt the classical information if the optimal strategy is sending data in the classical basis. Indeed, it is easy to see the above value may be achieved by using the signal states $\{\dyad{0},\dyad{1}\}$ and the decoder $\{\dyad{0}+1/2\dyad{e},\dyad{1}+1/2\dyad{e}\}$, since for either signal state the probability of guessing correctly, conditioned on the state sent, is $(1-q)+q/2$. As one might expect, in such a situation the communication value of the channel would be multiplicative with itself. As we require an upper bound, we verify this by an exhaustive numerical search using the dual problem of $\text{cv}^{\text{PPT}}$.
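The constructions in this proof are easy to verify numerically (a minimal sketch; the values $p=0.3$, $q=0.4$, and the choice $\kappa=1-q$ from the allowed interval are illustrative). We build the dephrasure Choi matrix from a Kraus representation, check the primal value $2-q$, and check feasibility of the dual pair:

```python
import numpy as np

p, q = 0.3, 0.4
E = np.zeros((3, 2)); E[0, 0] = E[1, 1] = 1.0      # embed qubit into span{|0>,|1>} of B
Z = np.diag([1.0, -1.0])
e = np.eye(3)[:, 2:]                               # erasure flag |e>
kraus = [np.sqrt((1 - q) * (1 - p)) * E,
         np.sqrt((1 - q) * p) * E @ Z,
         np.sqrt(q) * e @ np.array([[1.0, 0.0]]),  # sqrt(q)|e><0|
         np.sqrt(q) * e @ np.array([[0.0, 1.0]])]  # sqrt(q)|e><1|
assert np.allclose(sum(K.T @ K for K in kraus), np.eye(2))   # trace preserving

# Choi matrix J = sum_ij |i><j|_A (x) N(|i><j|), ordering A then B in {0,1,e}
J = np.zeros((6, 6))
for i in range(2):
    for j in range(2):
        Eij = np.zeros((2, 2)); Eij[i, j] = 1.0
        J[3 * i:3 * i + 3, 3 * j:3 * j + 3] = sum(K @ Eij @ K.T for K in kraus)

# Primal point X = |00><00| + |11><11| + (|0e><0e| + |1e><1e|)/2 achieves 2 - q
X = np.diag([1.0, 0, 0.5, 0, 1.0, 0.5])
assert np.isclose(np.trace(X @ J), 2 - q)

# Dual pair with gamma = (1-q)(1-2p) and kappa = 1 - q
gam, kap = (1 - q) * (1 - 2 * p), 1 - q
Y1 = np.diag([1 - q, 1 - q, q])
Y2 = np.zeros((6, 6))
Y2[1, 1] = Y2[3, 3] = kap                          # kappa(|01><01| + |10><10|)
Y2[1, 3] = Y2[3, 1] = -gam                         # -gamma(|01><10| + h.c.)
G = Y2.reshape(2, 3, 2, 3).transpose(0, 3, 2, 1).reshape(6, 6)   # PT on B
M = np.kron(np.eye(2), Y1) - G - J
assert np.linalg.eigvalsh(M).min() > -1e-9         # dual feasibility
assert np.isclose(np.trace(Y1), 2 - q)             # matching dual value
```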
\begin{theorem} $\text{cv}(\mathcal{N}_{p,q}^{\otimes 2}) =\text{cv}(\mathcal{N}_{p,q})^{2}$, i.e. the dephrasure channel's communication value is multiplicative. \end{theorem} \begin{proof} A search over the dual problem $\text{cv}^{\text{PPT}}(\mathcal{N}_{p,q}^{\otimes 2})$ for $p,q$ on the grid $\{0, 0.01, \ldots, 1\}$ is always within numerical error of $\text{cv}(\mathcal{N}_{p,q})^{2}$. As the dual problem always obtains an upper bound on $\text{cv}^{\text{PPT}}$, and $\text{cv}^{\text{PPT}}$ is an upper bound on $\text{cv}$, we may conclude that the dephrasure channel is multiplicative. \end{proof}
\subsubsection*{Siddhu Channel} Finally we consider the following family of channels: \begin{align*}
\mathcal{N}_{s}(X) := \sum_{i=0}^{1} K_{i}XK_{i}^{\dagger} \ , \end{align*} where \begin{align*}
K_{0} = \sqrt{s} \op{0}{0} + \op{2}{1} \quad K_{1} = \sqrt{1-s} \op{1}{0} + \op{2}{2} \ , \end{align*} where $s \in [0,1/2]$. This channel is known to have non-additive coherent information over its entire parameter range when tensored with itself. However, we will now show the communication value of the channel is multiplicative with itself over the whole range. \begin{lemma} $\text{cv}(\mathcal{N}_{s}) = 2$ for all $s \in [0,1/2]$. \end{lemma} \begin{proof} Like the dephrasure channel, we prove this by constructing upper and lower bounds that are the same. \\
For a lower bound on $\text{cv}(\mathcal{N}_{s})$, consider the encoding $\{\dyad{0},\dyad{1},\dyad{2}\}$ and the decoding $\{\dyad{0}+\dyad{1},\dyad{2}\}$. Note that for all $s \in [0,1/2]$, $\mathcal{N}_{s}(\dyad{0}) = s\dyad{0} + (1-s)\dyad{1}$ and $\mathcal{N}_{s}(\dyad{1}) = \mathcal{N}_{s}(\dyad{2}) = \dyad{2}$. Thus, labeling inputs by the signal states and outputs by the two decoder elements, this encoding and decoding induce the conditional probability distribution $1 = \mathbf{P}(0|0) = \mathbf{P}(1|1) = \mathbf{P}(1|2)$ and zero otherwise. Thus we have $2 \leq \text{cv}^{3 \to 2}(\mathcal{N}_{s}) \leq \text{cv}(\mathcal{N}_{s})$.
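The induced classical channel in this lower-bound argument can be reproduced numerically; a minimal numpy sketch (function names are ours):

```python
import numpy as np

def siddhu_kraus(s):
    """Kraus operators K0 = sqrt(s)|0><0| + |2><1|, K1 = sqrt(1-s)|1><0| + |2><2|."""
    K0 = np.zeros((3, 3)); K0[0, 0] = np.sqrt(s);     K0[2, 1] = 1.0
    K1 = np.zeros((3, 3)); K1[1, 0] = np.sqrt(1 - s); K1[2, 2] = 1.0
    return [K0, K1]

def channel(Ks, rho):
    return sum(K @ rho @ K.conj().T for K in Ks)

s = 0.3
Ks = siddhu_kraus(s)
assert np.allclose(sum(K.conj().T @ K for K in Ks), np.eye(3))  # trace preserving

eye = np.eye(3)
proj = lambda i: np.outer(eye[i], eye[i])
povm = [proj(0) + proj(1), proj(2)]  # the decoder from the proof
# induced classical channel p(y|x) for the encoder {|0>,|1>,|2>}
P = np.array([[np.trace(E @ channel(Ks, proj(x))).real for E in povm]
              for x in range(3)])
cv_lower = sum(P[:, y].max() for y in range(2))  # sum_y max_x p(y|x)
assert np.isclose(cv_lower, 2.0)
```

The value $2$ is attained for every $s \in [0,1/2]$ since the rows of $P$ are deterministic regardless of $s$.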
For an upper bound, we consider the dual problem of $\text{cv}^{\text{PPT}}$ \eqref{eqn:PPTDual}. First note that we can write the Choi matrix as: \begin{align*}
J(\mathcal{N}_{s})
= \begin{bmatrix}
s & 0 & 0 & 0 & 0 & \sqrt{s} & 0 & 0 & 0 \\
0 & 1-s & 0 & 0 & 0 & 0 & 0 & 0 & \sqrt{1-s} \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\sqrt{s} & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & \sqrt{1-s} & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
\end{bmatrix} \ . \end{align*} Then we let $Y_{1} = s \dyad{0} + (1-s) \dyad{1} + \dyad{2}$ and \begin{align*}
Y_{2} =
\begin{bmatrix}
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & -\alpha & 0 & 0 & 0 & -\beta & 0 \\
0 & 0 & -\alpha & s & 0 & 0 & 0 & \gamma & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & -\beta & \gamma & 0 & 0 & 0 & 1-s & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
\end{bmatrix} \ , \end{align*} where $\alpha = \sqrt{s}, \beta = \sqrt{1-s}, \gamma = \sqrt{s(1-s)}$, which is positive semidefinite as it has eigenvalues $2$, $1$ (with multiplicity two), and $0$ (with multiplicity six). It is then easy to determine \begin{align*}
& \mathbb{I}_{A} \otimes Y_{1} - \Gamma(Y_{2}) - J(\mathcal{N}_{s}) \\
= &\begin{bmatrix}
0 & 0 & 0 & 0 & 0 & \delta & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \epsilon \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1-s & 0 & -\gamma & 0 & 0 \\
\delta & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & -\gamma & 0 & s & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & \epsilon & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\end{bmatrix} \ , \end{align*} where $\delta = \alpha - \sqrt{s} = 0$ and $\epsilon = \beta - \sqrt{1-s} = 0$. One may verify that this has eigenvalues $1$ and $0$ with multiplicity eight. Thus it is feasible and $\textrm{Tr}(Y_{1}) = 2$. Noting that $\text{cv}(\mathcal{N}_{s}) \leq \text{cv}^{\text{PPT}}(\mathcal{N}_{s})$, we have $2 \leq\text{cv}(\mathcal{N}_{s}) \leq 2$, which completes the proof. \end{proof} \begin{theorem} For all $s \in [0,1/2]$, $\text{cv}(\mathcal{N}_{s}^{\otimes 2}) =\text{cv}(\mathcal{N}_{s})^{2}$, i.e. the communication value is always multiplicative. \end{theorem} \begin{proof} A numerical search of $\text{cv}^{\text{PPT}}(\mathcal{N}_{s}^{\otimes 2})$ for $s$ on the grid $\{0, 0.01, \ldots, 0.5\}$ finds the value equals four within an error of $\leq 3 \times 10^{-6}$. As $\text{cv}^{\text{PPT}}$ upper bounds $\text{cv}$, the channel is multiplicative. \end{proof}
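The spectra invoked in the dual certificate can be checked numerically; a numpy sketch at the sample point $s = 0.3$ (our choice), assembling $Y_2$ and the displayed slack matrix directly from their entries (with $\delta = \epsilon = 0$):

```python
import numpy as np

s = 0.3
a, b, g = np.sqrt(s), np.sqrt(1 - s), np.sqrt(s * (1 - s))

# Y2 from the 9x9 matrix in the proof (basis ordering |00>,|01>,...,|22>)
Y2 = np.zeros((9, 9))
Y2[2, 2], Y2[3, 3], Y2[7, 7] = 1.0, s, 1 - s
Y2[2, 3] = Y2[3, 2] = -a
Y2[2, 7] = Y2[7, 2] = -b
Y2[3, 7] = Y2[7, 3] = g
Y2[5, 5] = Y2[8, 8] = 1.0
ev = np.sort(np.linalg.eigvalsh(Y2))
# the nontrivial 3x3 block is vv^T with v = (1, -sqrt(s), -sqrt(1-s)), |v|^2 = 2
assert np.allclose(ev, [0] * 6 + [1, 1, 2])

# slack matrix I_A (x) Y1 - Gamma(Y2) - J(N_s), with delta = epsilon = 0
D = np.zeros((9, 9))
D[4, 4], D[6, 6] = 1 - s, s
D[4, 6] = D[6, 4] = -g
evD = np.sort(np.linalg.eigvalsh(D))
assert np.allclose(evD, [0] * 8 + [1])  # PSD, hence the dual choice is feasible
```

Both spectra are independent of $s$ over the parameter range, matching the claimed feasibility for all $s \in [0,1/2]$.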
\section{Relationship to Capacities and No-Signalling}
\label{Sect:Capacity-NS}
As noted in the introduction, $\lceil\text{cv}(\mathcal{N}) \rceil$ captures the classical communication cost to perfectly simulate every classical channel induced by $\mathcal{N}$ using non-signalling (NS) resources. This is because for a classical channel $\mathbf{P}$, the one-shot classical communication cost for zero-error simulation with classical NS, $\kappa_0^{\text{NS}}$, is given by $\lceil \sum_{y} \max_{x} p(y|x)\rceil$ \cite[Theorem 16]{Cubitt-2011a}. Noting that $\text{cv}(\mathbf{P}) = \sum_{y} \max_{x} p(y|x)$, it follows that $\kappa_0^{\text{NS}} = \lceil \text{cv}(\mathbf{P}) \rceil$. Furthermore, due to the multiplicativity of cv for classical channels, the no-signalling assisted zero-error simulation capacity is also given by $\kappa_{0}^{\text{NS}}$, as was remarked in the original paper. Moreover, it is easy to show the classical capacity of a classical channel is bounded by $\text{cv}(\mathbf{P})$ \cite[Remark 17]{Cubitt-2011a}: $$ C(\mathbf{P}) \leq \log\text{cv}(\mathbf{P}) = \chi_{\max}(\mathbf{P}) \ , $$ where we have used Theorem \ref{Thm:chi-max-hmin} in the last equality. At the cost of the single-letter property, this bound generalizes to arbitrary quantum channels via the Holevo-Schumacher-Westmoreland theorem \cite{Holevo-1998a,Schumacher-1997a}, \begin{align*} C(\mathcal{N}) =& \underset{k\to \infty}{\lim} \frac{1}{k} \chi(\mathcal{N}^{\otimes k}) \\ \leq& \underset{k\to \infty}{\lim} \frac{1}{k} \chi_{\max}(\mathcal{N}^{\otimes k})\\ =& \underset{k\to \infty}{\lim} \frac{1}{k} \log(\text{cv}(\mathcal{N}^{\otimes k})) \ , \end{align*} and whenever $\mathcal{N}$ satisfies weak multiplicativity for $\text{cv}$, such as for entanglement-breaking channels, this reduces to a single-letter upper bound.
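For a concrete classical instance, the quantities above can be computed directly; a minimal numpy sketch for a binary symmetric channel (our choice of example, not taken from the text):

```python
import numpy as np

def cv_classical(P):
    """cv(P) = sum_y max_x p(y|x); rows of P are indexed by the input x."""
    return P.max(axis=0).sum()

# binary symmetric channel with flip probability f
f = 0.1
P = np.array([[1 - f, f],
              [f, 1 - f]])
cv = cv_classical(P)             # = 2(1 - f)
kappa0_NS = int(np.ceil(cv))     # one-shot zero-error NS simulation cost
# Shannon capacity of the BSC: C = 1 - h(f)
h = -f * np.log2(f) - (1 - f) * np.log2(1 - f)
capacity = 1 - h
assert capacity <= np.log2(cv)   # C(P) <= log cv(P)
```

Here $\text{cv}(\mathbf{P}) = 2(1-f) = 1.8$, so $\kappa_0^{\text{NS}} = 2$: one noiseless bit suffices to simulate the channel with NS assistance, and $\log_2 1.8 \approx 0.85$ indeed upper bounds the capacity $1 - h(0.1) \approx 0.53$.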
In the entanglement-assisted regime, these relationships persist. First we recall that the SDP for min-entropy is multiplicative, and so $\mathcal{CV}^{*}(\mathcal{N}) = \text{cv}^{*}(\mathcal{N})$ for an arbitrary quantum channel $\mathcal{N}$. This aligns with the fact that the entanglement-assisted capacity of a quantum channel, $C_{E}(\mathcal{N})$, is single-letter but the unassisted capacity is not. Continuing the parallels, $\lceil \text{cv}^{*}(\mathcal{N}) \rceil$ gives the classical communication cost to perfectly simulate $\mathcal{N}$ with a quantum no-signalling resource \cite{Duan-2016a}. Given the above, a natural question is whether one can find bounds on the entanglement-assisted capacity, $C_{E}(\mathcal{N})$, in terms of $\text{cv}^*(\mathcal{N})$. Indeed, this can be done by using the definition of $\text{cv}^{*}$ and the fact that $\text{cv}^{*}$ is characterized by minimal error discrimination (as in Eq. \eqref{eqn:cv-max-holevo-alt}),
$$ \text{cv}^{*}(\mathcal{N}) = \underset{\rho_{XAA'}}{\sup} |\mathcal{X}| \exp(-H_{\min}(X|BC)_{(id_{X} \otimes \mathcal{N} \otimes id_{A'})(\rho)}) \ , $$ where the supremum is over $\rho_{XAA'}$ such that $\rho_{X}$ is uniform and the state is homogeneous on register $A'$ \cite{Holevo-2002a,Watrous-2018a}. It follows by the same manipulations used in Eq. \eqref{eqn:cv-max-holevo-alt} that \begin{equation}
\log \text{cv}^{*}(\mathcal{N}) = \chi_{E,\max}(\mathcal{N}), \end{equation} where $\chi_{E,\max}$ is the entanglement-assisted $\max$-Holevo information, which is straightforward to define following \cite{Holevo-2002a,Watrous-2018a,Beigi-2013a}. Since the entanglement-assisted capacity equals the regularized entanglement-assisted Holevo information, we can conclude that $$ C_{E}(\mathcal{N}) \leq \log \text{cv}^{*}(\mathcal{N}) \ , $$ where the regularization disappears because $\text{cv}^*(\mathcal{N})$ is always multiplicative.
\end{document}
\begin{document}
\title{\textbf{Sensitivity analysis in longitudinal clinical trials via distributional imputation}}
\author{Siyi Liu$^1$, Shu Yang$^1$, Yilong Zhang$^2$, Guanghan (Frank) Liu$^{2}$}
\date{
}
\maketitle \begin{center} $^1$Department of Statistics, North Carolina State University, Raleigh, NC, USA
$^2$Merck \& Co., Inc., Kenilworth, NJ, USA \end{center}
\spacingset{1.5} \begin{abstract} Missing data is inevitable in longitudinal clinical trials. Conventionally, the missing at random assumption is adopted to handle missingness, which however is empirically unverifiable. Thus, sensitivity analysis is critically important to assess the robustness of the study conclusions against untestable assumptions. Toward this end, regulatory agencies often request using imputation models such as return-to-baseline, control-based, and washout imputation. Multiple imputation is popular in sensitivity analysis; however, it may be inefficient and result in unsatisfactory interval estimates by Rubin's combining rule. We propose distributional imputation (DI) in sensitivity analysis, which imputes each missing value by samples from its target imputation model given the observed data. Drawing on the idea of Monte Carlo integration, the DI estimator solves the mean estimating equations of the imputed dataset. It is fully efficient with theoretical guarantees. Moreover, we propose a weighted bootstrap procedure to obtain a consistent variance estimator, taking into account the variabilities due to model parameter estimation and target parameter estimation. The finite-sample performance of DI inference is assessed in a simulation study. We apply the proposed framework to an antidepressant longitudinal clinical trial involving missing data to investigate the robustness of the treatment effect. Our proposed DI approach detects a statistically significant treatment effect in both the primary analysis and the sensitivity analysis under certain prespecified sensitivity models in terms of the average treatment effect, the risk difference, and the quantile treatment effect in lower quantiles of the response, uncovering the benefit of the test drug for curing depression.
\noindent \textbf{keywords:} Longitudinal clinical trial, missing data, distributional imputation, multiple imputation, sensitivity analysis \end{abstract}
\section{Introduction}
In longitudinal clinical trials, participants are likely to deviate from the protocol, which causes missing data. The deviations from the protocol may include poor compliance with the treatment or loss to follow-up. \citet{rubin1976inference} develops a framework to handle missingness in data, distinguishing three missingness mechanisms: missing completely at random (MCAR), missing at random (MAR), and missing not at random (MNAR). Missingness that is not related to any components of the data, e.g., participants dropping out of the trial due to work or family considerations, is categorized as MCAR. In most clinical studies involving patients with missing outcomes, however, the missingness likely depends on the health status of the patients. For example, individuals with severe outcomes are more likely to drop out from the study or switch to certain rescue therapies. MAR is typically assumed in longitudinal clinical trials for the primary analysis; it posits that the conditional outcome distribution stays the same between the participants who remain in the study and the ones who drop out, i.e., the participants are assumed to take the assigned treatment even after the occurrence of missingness. However, the MAR assumption is not verifiable and may be violated for some drugs with a short half-life, where the treatment effect quickly fades away once the individuals discontinue the active treatment, leading to an MNAR scenario. Therefore, it is vital to conduct sensitivity analyses to explore the robustness of results to alternative MNAR-related assumptions, as recommended by the US Food and Drug Administration (FDA) and the National Research Council \citep{little2012prevention}.
The importance of defining an appropriate treatment effect estimand in the presence of missing data has been put forward by the ICH E9(R1) working group. Following the instructions in \citet{international2019addendum}, the estimand should give a precise description of the treatment effect of interest from a population perspective, and account for the intercurrent events such as the discontinuation of treatment.
In the primary analysis of the treatment effect estimand, we can assume MAR under an envisioned condition that participants with the treatment discontinuation still follow the assigned therapy throughout the study \citep{carpenter2013analysis}. For sensitivity or supplemental analyses, we evaluate the treatment effect under scenarios that deviate from MAR and call these settings sensitivity analyses for simplicity throughout the paper.
In sensitivity analyses, we consider several plausible missingness scenarios under MNAR based on the pattern-mixture model (PMM; \citealp{little1993pattern}) framework, which we call the ``sensitivity models''. Our main focus in this paper is on the jump-to-reference (J2R) scenario proposed by \citet{carpenter2013analysis}, which assumes that the missing outcomes in both treatment groups will have the same distributional profile as those in the control group with the same covariates. We also briefly introduce other sensitivity models such as return-to-baseline (RTB) and washout imputation, which have been used in the FDA statistical review and evaluation reports for certain treatments (e.g., \citealp{rtb2016tresiba}).
Although we focus on specific sensitivity models, our framework can be extended readily to other imputation mechanisms and the mixture of imputation strategies in sensitivity analyses.
To handle missingness in sensitivity analyses, the likelihood-based method and multiple imputation (MI) are the two most common approaches. The likelihood-based method typically utilizes the ignorability of the missing mechanism under MAR to draw valid maximum-likelihood inferences given variation independence, i.e., the parameters that control the missing mechanism and the model parameters are separable. For longitudinal clinical trials with continuous responses, one can fit a mixed model with repeated measures (MMRM) and incorporate the missing information to obtain inferences (e.g., \citealp{mehrotra2017missing}; \citealp{zhang2020likelihood}). While it is efficient, the analytical form for the likelihood-based method is only feasible to derive under restrictive scenarios such as normality or mean-type estimands, and it requires rederivation if the missingness pattern changes. MI, developed by \citet{rubin2004multiple}, resorts to computational techniques to ease the analytical requirements of the likelihood-based method. The FDA and the National Research Council \citep{little2012prevention} highly recommend the use of MI and Rubin's combining rules for inference due to their flexibility and simplicity. However, \citet{wang1998large} reveal that the MI estimator is not efficient in general. Moreover, the inefficiency of MI can be more severe in terms of interval estimation, where the variance estimation using Rubin's rule may not be consistent even when the imputation and analysis models are the same and correctly specified \citep{robins2000inference}. In sensitivity analyses, overestimation of the variance using Rubin's rule is commonly detected in the literature (e.g., \citealp{lu2014analytic}; \citealp{liu2016analysis}; \citealp{yang2016note}).
The motivating example in Section \ref{sec:example} further shows an alteration of the study conclusion due to the conservative variance estimator, where the same statistically significant treatment effect fails to be detected in the sensitivity analysis, raising a dilemma for the investigators in the process of decision-making. To overcome this problem, variance estimation via the bootstrap can be applied, but it is more computationally intensive than the traditional Rubin's method since it requires re-imputing the missing components and refitting the imputation model in each bootstrap iteration.
In this paper, we propose distributional imputation (DI) based on the idea of Monte Carlo (MC) integration \citep{lepage1978new} and develop a unified framework to conduct sensitivity analyses using DI in longitudinal clinical trials. The motivation of DI is to impute the missing components from the target imputation model given the observed data and use the mean estimating equations approximated by MC integration to draw efficient inferences. The implementation consists of three major steps: first, obtain the model parameter estimator based on the observed data; second, impute the missing values from the estimated sensitivity model; and third, derive the DI estimator of the parameter of interest by jointly evaluating the entire imputed dataset through mean estimating equations. We show that the DI estimator is consistent and asymptotically normal. We also propose a weighted bootstrap procedure for variance estimation, which incorporates the uncertainty from model parameter estimation and target parameter estimation. The DI estimator drawn from our framework is fully efficient and rests on firm theoretical ground. Moreover, the weighted-bootstrap variance estimator is consistent, straightforward to implement, and avoids re-imputing the missing components, in contrast to conventional bootstrap methods. In the motivating example in Section \ref{sec:example}, DI resolves the overestimation issue of Rubin's combining rule under MI in the sensitivity analysis and detects a statistically significant benefit of using the test drug to cure depression. Our framework is applicable to a wide range of sensitivity models defined through estimating equations.
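The three steps can be illustrated on a toy univariate problem; a minimal numpy sketch under MCAR and a simple normal model (our simplification, not the MMRM implementation used later in the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
n, M = 500, 100
y = rng.normal(2.0, 1.0, size=n)
obs = rng.random(n) < 0.7          # observed indicator (MCAR here, for simplicity)

# Step 1: estimate the outcome model parameters from the observed data
mu_hat, sd_hat = y[obs].mean(), y[obs].std(ddof=1)

# Step 2: draw M imputations of each missing value from the fitted model
imp = rng.normal(mu_hat, sd_hat, size=(M, (~obs).sum()))

# Step 3: solve the MC-approximated mean estimating equation
# 0 = sum_{observed}(y_i - tau) + sum_{missing}(1/M) sum_m (y_i^(m) - tau);
# for psi(y, tau) = y - tau the root is the pooled average
tau_di = (y[obs].sum() + imp.mean(axis=0).sum()) / n
```

Unlike MI, which would solve the estimating equation separately for each of the $M$ completed datasets and combine, DI pools all imputed draws into a single mean estimating equation.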
The rest of the paper proceeds as follows. Section \ref{sec:example} uses antidepressant clinical trial data to motivate the development of an efficient imputation method. Section \ref{sec:setup} introduces the basic setup, provides notations, estimands, imputation mechanisms in sensitivity analyses, and comments on existing methods to handle missingness. Section \ref{sec:fi} presents DI and its main steps. Section \ref{sec:theory} gives the asymptotic theories for the DI estimator and proposes weighted bootstrap on variance estimation. Section \ref{sec:simulation} explores the finite-sample performance of the DI estimator via simulation. Section \ref{sec:application} returns to the motivating example and applies the proposed framework to the data. Section \ref{sec:conclude} draws the conclusion. Supplementary material contains the technical setup, proof of the theorems, and additional simulation and real-data application results.
\section{Motivating example \label{sec:example}}
An antidepressant clinical trial conducted under the auspices of the Drug Information Association evaluates the effect of an experimental medication \citep{mallinckrodt2014recent}. The study measures the longitudinal outcomes of the HAMD-17 score at baseline and weeks 1, 2, 4, 6, and 8 for 200 patients who are randomly assigned to the control and treatment groups at a 1:1 ratio. The data have a monotone missingness pattern except for one individual in the treatment group with intermittent missingness. For illustration purposes, we assume MAR for this intermittent missingness and focus on the monotone missing data.
To investigate the treatment effect from different aspects, we construct two treatment effect estimands based on different population-level summaries. The first population-level summary is the average treatment effect (ATE), defined as the between-group difference in the relative change of the HAMD-17 score from baseline at the last visit. The second population-level summary is the risk difference, defined as the between-group difference in the percentage of patients with a $50\%$ or greater improvement from the baseline HAMD-17 score at the last visit.
Figure \ref{fig:sp_real} shows the spaghetti plot of the relative change of the HAMD-17 score for the two groups. It reflects a typical missing data issue in longitudinal clinical trials, with 39 patients in the control group and 30 patients in the treatment group dropping out during the study period. The relatively high dropout rate prompts the need for imputation to utilize the information related to the missingness.
\begin{figure}
\caption{Spaghetti plots of the relative change of the HAMD-17 score separated by the two treatments. }
\label{fig:sp_real}
\end{figure}
As MI relies on the parametric modeling assumption, we begin by checking the normality of the conditional errors at each visit for model diagnosis. Figure \ref{fig:diagnosis} illustrates the normal Q-Q plots of the conditional residuals obtained by fitting the MMRM to the observed data. All the plots indicate an underlying normal distribution for the outcomes since the majority of residuals are within the confidence region, with only the conditional errors at week 8 being slightly right-skewed.
\begin{figure}
\caption{Normal Q-Q plots of the conditional residuals at each visit. }
\label{fig:diagnosis}
\end{figure}
Under the normality assumption, we apply MI to the missing components with imputation size $100$, under MAR for the primary analysis and under J2R for the sensitivity analysis, and obtain inferences using Rubin's rule. The point estimates of the ATE and the risk difference, accompanied by the $95\%$ CIs, are presented in Figure \ref{fig:ci}. In the primary analysis that assumes MAR, both the ATE and the risk difference reveal a significant treatment effect. However, the sensitivity analysis under J2R fails to capture the same significance under MI, leading to a loss of credibility of the experimental drug and potentially influencing the decision made by the investigators. The alteration of the study conclusion may result from the overestimation issue of Rubin's MI variance estimator, as detected in the literature involving sensitivity analyses (e.g., \citealp{liu2016analysis}), rather than from a loss of effectiveness of the test drug. To further explore the cause of the altered study result, it is vital to overcome the overestimation issue arising from MI and develop an efficient imputation approach that yields a consistent variance estimator without an expensive computational cost.
\begin{figure}
\caption{Estimation results of the ATE and the risk difference under MAR and J2R, accompanied with the $95\%$ CIs. }
\label{fig:ci}
\end{figure}
\section{Basic setup \label{sec:setup}}
\subsection{Notations and estimands \label{sec:notation}}
Let $Y_{ik}$ be the continuous response for patient $i$ at time $t_{k}$, where $i=1,\cdots,n$, and $k=1,\cdots,T$. Denote the baseline $p$-dimensional fully-observed covariate vector as $X_{i}$, the group indicator as $G_{i}$ ranging from 1 to $J$ to represent $J$ distinct treatment groups, and the observed indicator as $R_{i}=(R_{i1},\cdots,R_{iT})^{\intercal}$ for patient $i$, where $R_{ik}=1$ if $Y_{ik}$ is observed, $R_{ik}=0$ otherwise. Without loss of generality, we consider $J=2$, where $G_{i}=1,2$ represents the $i$th patient is in the control or active treatment group, respectively. Let $Y_{i}=(Y_{i1},\cdots,Y_{iT})^{\intercal}$ be a $T$-dimensional longitudinal vector containing history and current information. A monotone missingness pattern is assumed, i.e., if missingness begins at time $t^{*}$, we have $R_{ik}=1$ for $t_{k}<t^{*}$ and $R_{ik}=0$ for $t_{k}\geq t^{*}$. We can partition each response as $Y_{i}=(Y_{\text{obs},i},Y_{\text{mis},i})^{\intercal}$, where $Y_{\text{obs},i}$ and $Y_{\text{mis},i}$ are the observed and missing components. Denote $Z_{i}=(Y_{i},X_{i},G_{i},R_{i})$ as the full data for patient $i$. In the presence of missing data, denote $Z_{\text{obs},i}=(Y_{\text{obs},i},X_{i},G_{i},R_{i})$ as the observed data. Then, $Z_{i}=(Z_{\text{obs},i},Z_{\text{mis},i})$ corresponds to the combination of the observed and missing parts.
For the treatment comparison, we consider different treatment effect estimands based on different population-level summaries defined through estimating equations as $\tau=\tau_{2}-\tau_{1}$, where $\tau_{j}$ is the value such that $\mathbb{E}\left\{ \psi_{j}(Z_{i},\tau_{j})\right\} =0$ and $\psi_{j}(Z,\tau_{j})$ is a function on $\mathbb{R}^{d}\times\Omega$, with $\Omega$ a compact subset of a Euclidean space, where $\psi_{j}(Z,\tau_{j})$ is continuous in $\tau_{j}$ for each vector $Z$ and measurable in $Z$ for each $\tau_{j}$.
The expectation $\mathbb{E}\left\{ \psi_{j}(Z_{i},\tau_{j})\right\} $ prompts the determination of the full-data distribution. Under MNAR, we use the PMM framework to describe the data distribution as $f(Z)=\int f(Z\mid R)f(R)dR$, where $Z_{1},\cdots,Z_{n}$ are i.i.d. from a parametric model $f(Z,\theta_{0})$ with the support free of $\theta_{0}$. Here, $Z\in\mathbb{R}^{d}$ is a $d$-dimensional vector, $\Theta$ is a $q$-dimensional Euclidean space, and $\theta_{0}$ is the true model parameter lying in the interior of $\Theta$. Moreover, let $s(Z,\theta)$ be the score function of $f(Z,\theta)$, and assume $s(Z,\theta)$ to be a continuous function of $\theta$ for each $Z$ and a measurable function of $Z$ for each $\theta$. The identification of the treatment effect relies on the assumption of the pattern-specific data distribution $f(Z\mid R)$ under each missingness pattern, which is prespecified in a statistical analysis plan for a clinical trial as the sensitivity models under hypothetical scenarios to address potential intercurrent events that may impact the estimation of treatment effect \citep{international2019addendum}.
The most popular full-data model in longitudinal clinical trials is the MMRM as recommended by the FDA and the National Research Council \citep{little2012prevention}, which assumes an underlying multivariate normal distribution for the longitudinal outcomes. The motivating example in Section \ref{sec:example} validates its application in practice. Therefore, throughout the paper, we assume that the continuous longitudinal outcomes $Y_{i}$ given the covariates $X_{i}$ and the group indicator $G_{i}=j$ independently follow a multivariate normal distribution as \begin{equation} Y_{i}\mid(X_{i},G_{i}=j)\sim\mathcal{N}_{T}(\mu_{ij},\Sigma^{(j)}),\label{eq:general_model} \end{equation} where $\mu_{ij}=(\mu_{ij1},\cdots,\mu_{ijT})^{\intercal}=(\Tilde{X}_{i}^{\intercal}\beta_{j1},\cdots,\Tilde{X}_{i}^{\intercal}\beta_{jT})^{\intercal}$, $\beta_{j1},\cdots,\beta_{jT}$ are $(p+1)$-dimensional group-specific vectors, $\Tilde{X}_{i}=(1,X_{i}^{\intercal})^{\intercal}$, and $\Sigma^{(j)}$ is a group-specific covariance matrix.
When no missingness is involved in the data, the only pattern corresponds to $R=\mathbf{1}_{T}$, where $\mathbf{1}_{T}$ is a $T$-dimensional all-ones vector indicating that the outcomes are fully observed. The treatment effect identification boils down to the specification of the conditional distribution of $Y_{i}\mid(X_{i},G_{i}=j)$, which is given in \eqref{eq:general_model}. Under this circumstance, we present three typical population-level summaries to define the treatment effect estimands in the following example.
\begin{example}[Treatment effect estimands]\label{example1} The parameter $\tau=\tau_{2}-\tau_{1}$ can represent different types of treatment effect given the following choices of $\psi_{j}(\cdot)$ for $j=1,2$: \begin{enumerate} \item The ATE when $\tau_{j}=\mathbb{E}(Y_{iT}\mid G_{i}=j)$ for $j=1,2$: $\psi_{j}(Z_{i},\tau_{j})=I(G_{i}=j)(Y_{iT}-\tau_{j})$. \item The risk difference when $\tau_{j}=P(Y_{iT}\geq c\mid G_{i}=j)$ for $j=1,2$: $\psi_{j}(Z_{i},\tau_{j})=I(G_{i}=j)\big\{ I(Y_{iT}\geq c)-\tau_{j}\big\}$, where $c$ is a prespecified threshold. \item Distributional information of the treatment, e.g., the quantile treatment effect (QTE) for the $q$th quantile of responses when $\tau_{j,q}$ is the $q$th quantile of the distribution of outcomes at the last time point for $j=1,2$: $\psi_{j}(Z_{i},\tau_{j,q})=I(G_{i}=j)\big\{ I(Y_{iT}\leq\tau_{j,q})-q\big\}$. \end{enumerate} \end{example}
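With fully observed data, the roots of the empirical versions of these estimating equations reduce to familiar sample quantities; a toy numpy sketch (our illustration, with simulated groups and outcomes):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
g = rng.integers(1, 3, size=n)                  # group labels: 1 control, 2 treatment
y = rng.normal(loc=np.where(g == 2, 1.0, 0.0))  # toy outcomes at the last visit

# each tau_j below solves the empirical equation (1/n) sum_i psi_j(Z_i, tau_j) = 0
# psi = I(G=j)(Y - tau)            -> group mean, giving the ATE
ate = y[g == 2].mean() - y[g == 1].mean()
# psi = I(G=j){I(Y >= c) - tau}    -> group exceedance proportion, giving the risk difference
c = 0.5
risk_diff = (y[g == 2] >= c).mean() - (y[g == 1] >= c).mean()
# psi = I(G=j){I(Y <= tau) - q}    -> group q-th quantile, giving the QTE (here q = 0.5)
qte = np.quantile(y[g == 2], 0.5) - np.quantile(y[g == 1], 0.5)
```

For this location-shift simulation the ATE and median QTE both target $1$, while the risk difference at $c = 0.5$ targets $\Phi(0.5) - \Phi(-0.5) \approx 0.383$.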
Among the above treatment effect estimands, the ATE is the most widely used to describe the treatment effect in clinical trials (e.g., \citealp{carpenter2013analysis}; \citealp{liu2016analysis}; \citealp{zhang2020likelihood}). Some clinical studies also consider the risk difference, i.e., the difference across groups in the percentage of patients whose endpoint continuous outcomes are dichotomized by a certain threshold. For example, \citet{roussel2019double} take the difference in the percentage of participants with HbA1c $<7\%$ in each group as a secondary endpoint. However, the ATE may be insufficient to capture the effect of treatment under a skewed outcome distribution, where the treatment may not influence the average outcome but rather the tail of the outcome distribution. In these cases, the QTE is preferred \citep{yang2020multiply}.
\subsection{Sensitivity models in sensitivity analyses \label{sec:imp_mechanism}}
When there is more than one missingness pattern, the identification of the treatment effect is accomplished by specifying the data distribution $f(Z\mid R)$ under each pattern. Since the distribution $f(Z\mid R)$ is unobserved if $R\neq\mathbf{1}_{T}$, several plausible sensitivity models are proposed to model the missingness. \citet{international2019addendum} addresses the importance of specifying explicit MNAR assumptions that underlie the sensitivity analysis in advance of the clinical trial based on different characteristics of drugs. Here, we concentrate on the J2R assumption to assess the robustness of the study conclusions.
The J2R sensitivity model is one specific control-based imputation model, which envisions that the missing responses in the treatment group will have the same outcome profile as those in the control group with the same covariates after dropout \citep{carpenter2013analysis}.
It can be a conservative missingness assumption that assumes the average treatment effect disappears immediately after patients discontinue from active treatment, and it is a commonly used sensitivity analysis in practice for longitudinal clinical trials (\citealp{mallinckrodt2014recent}; \citealp{mallinckrodt2019estimands}).
Equipped with the normality assumption, the group-specific model for the missing outcomes is the conditional distribution given the observed data under each missingness pattern as \begin{align} Y_{\text{mis},i} & \mid(Y_{\text{obs},i},X_{i},G_{i}=j,R_{ik-1}=1,R_{ik}=0)\sim\nonumber \\
& \mathcal{N}_{T-k}\big\{\mu_{\text{mis},ij}^{(k)}+\Sigma_{21}^{(1)}\Sigma_{11}^{(1)-1}(Y_{\text{obs},i}-\mu_{\text{obs},ij}^{(k)}),\Sigma_{22}^{(1)}-\Sigma_{21}^{(1)}\Sigma_{11}^{(1)-1}\Sigma_{12}^{(1)}\big\},\label{eq:j2r_model} \end{align} where $\mu_{\text{mis},ij}^{(k)},\mu_{\text{obs},ij}^{(k)}$ are the individual-specific mean vectors and $\Sigma^{(1)}=\begin{pmatrix}\Sigma_{11}^{(1)} & \Sigma_{12}^{(1)}\\ \Sigma_{21}^{(1)} & \Sigma_{22}^{(1)} \end{pmatrix}$ is the covariance matrix for the control group partitioned corresponding to $Y_{\text{obs},i}$ and $Y_{\text{mis},i}$. Therefore, modeling the J2R sensitivity model corresponds to specifying the group-specific mean vector and covariance.
\begin{assumption}[Jump-to-reference model]\label{assump:j2r} For the control group, the missing components are MAR. The imputation model is of the form \eqref{eq:j2r_model}, with $\mu_{\text{mis},i1}^{(k)}=(\Tilde{X}_{i}^{\intercal}\beta_{1(k+1)},\cdots,\allowbreak \Tilde{X}_{i}^{\intercal}\beta_{1T})^{\intercal}$ and $\mu_{\text{obs},i1}^{(k)}=(\Tilde{X}_{i}^{\intercal}\beta_{11},\cdots,\Tilde{X}_{i}^{\intercal}\beta_{1k})^{\intercal}$.
For the treatment group, the imputation model is of the form \eqref{eq:j2r_model}, with $\mu_{\text{mis},i2}^{(k)}=(\Tilde{X}_{i}^{\intercal}\beta_{1(k+1)},\cdots,\allowbreak \Tilde{X}_{i}^{\intercal}\beta_{1T})^{\intercal}$ and $\mu_{\text{obs},i2}^{(k)}=(\Tilde{X}_{i}^{\intercal}\beta_{21},\cdots,\Tilde{X}_{i}^{\intercal}\beta_{2k})^{\intercal}$, indicating that the regression coefficients for participants after deviation ``jump to'' those of the control group with the same baseline covariates. \end{assumption}
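The conditional moments in the imputation model above follow the standard multivariate-normal conditioning formulas: the mean receives a regression adjustment and the covariance is a Schur complement. A minimal numpy sketch (the function name and array layout are our own illustrative choices, not part of the paper's implementation) is:

```python
import numpy as np

def j2r_conditional(y_obs, mu_obs, mu_mis, sigma, k):
    """Conditional moments of the missing tail given the observed responses,
    as in the J2R imputation model: mean gets a regression adjustment,
    covariance is the Schur complement of the observed block.

    y_obs  : observed responses up to dropout time k (length k)
    mu_obs : mean vector for the observed part (length k)
    mu_mis : control-group mean for the missing part (length T - k)
    sigma  : (T, T) control-group covariance, partitioned at index k
    """
    s11, s12 = sigma[:k, :k], sigma[:k, k:]
    s21, s22 = sigma[k:, :k], sigma[k:, k:]
    cond_mean = mu_mis + s21 @ np.linalg.solve(s11, y_obs - mu_obs)
    cond_cov = s22 - s21 @ np.linalg.solve(s11, s12)
    return cond_mean, cond_cov

# a single J2R draw for one patient would then come from
# np.random.default_rng().multivariate_normal(cond_mean, cond_cov)
```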
Apart from J2R as a way to quantify the deviation from MAR, we also summarize two further MNAR assumptions, RTB and washout imputation, used in FDA statistical review and evaluation reports (e.g., \citealp{rtb2016tresiba}) to represent different ways to model the missing outcomes regardless of compliance after discontinuation in sensitivity analyses. The RTB approach assumes a washout effect for the missing responses at the last time point in both groups, indicating that the outcome of interest will return to the baseline performance after dropout regardless of the prior treatment. However, the biological plausibility of the washout assumption needs to be carefully evaluated, and RTB is not necessarily conservative when missing data are imbalanced between the treatment and control groups \citep{zhang2020missing}.
\begin{assumption}[Return-to-baseline model]\label{assump:rtb}
The imputation model of the outcomes at the last time point follows the marginal baseline model $Y_{iT}\mid(X_{i},R_{iT}=0,G_{i}=j)\sim\mathcal{N}(\mu_{ij1},\Sigma_{(1,1)}^{(j)})$, where $\Sigma_{(1,1)}^{(j)}$ represents the $(1,1)$ element of $\Sigma^{(j)}$. \end{assumption}
Washout imputation also serves as a possible sensitivity model and appears in several statistical review and evaluation reports (e.g., \citealp{washout2018praluent}). It combines the ideas of the RTB and J2R assumptions by assuming a MAR pattern for the control group and an RTB pattern for the missing outcomes in the treatment group. One reason to consider washout imputation is to address the potential issue of imbalanced missing data under RTB.
\begin{assumption}[Washout model]\label{assump:washout} For the control group, the assumption for the missing responses is MAR. The model is the same as the one for control group in Assumption \ref{assump:j2r}. For the treatment group, the assumption for the missing responses is the same as Assumption \ref{assump:rtb}. \end{assumption}
Given a prespecified sensitivity model that characterizes the MNAR assumption, one can therefore determine the pattern-specific data distribution $f(Z\mid R)$ and identify the treatment effect under the PMM framework. To obtain valid treatment effect inferences, one can implement the conventional likelihood-based or imputation approach to deal with missingness in sensitivity analyses. Both methods are elaborated in the next section.
\subsection{Existing methods to handle missingness in sensitivity analyses}
The likelihood-based method and MI are two traditional approaches to handle missingness in sensitivity analyses in longitudinal clinical trials. The likelihood-based method utilizes the MMRM and the ignorability of the missing components under MAR to draw valid inferences. For the MNAR-related sensitivity models, the analytical form of the inference is obtained by averaging over the dropout patterns under the PMM framework (e.g., \citealp{liu2016analysis,mehrotra2017missing}). However, the treatment effect estimator can be infeasible to derive when the normality assumption is violated or the estimand of interest is not of mean type. For example, when we focus on the risk difference of the test drug in the antidepressant trial in Section \ref{sec:example}, complexity arises when incorporating the dropout patterns. Moreover, the likelihood-based estimator needs to be re-derived for each imputation mechanism and can be complicated when there are multiple missingness patterns.
MI provides a simple way to handle diverse types of estimands. It creates multiple complete datasets by conducting imputations based on the prespecified imputation model and uses Rubin's combining rule to obtain inferences. We illustrate one typical strategy for conducting MI with Rubin's rule, which has appeared in the literature (e.g., \citealp{carpenter2013analysis,mallinckrodt2020aligning}), for the sensitivity analysis under J2R in longitudinal clinical trials using the estimands in Example \ref{example1} as follows. \begin{description} \item [{{Step 1}.}] For the observed data in the control group, fit the MMRM and obtain the estimated sensitivity model. \item [{{Step 2}.}] Impute the missing values in both groups from the sensitivity model specified in Assumption \ref{assump:j2r}. Repeat the imputation $M$ times to create $M$ imputed datasets. \item [{{Step 3}.}] For each imputed dataset, conduct a complete-data analysis by solving the estimating equations that correspond to the estimands in Example \ref{example1} to obtain $\hat{\tau}_{\text{MI}}^{(m)}$ as the estimator from the $m$th imputed dataset, where $m=1,\cdots,M$. \item [{{Step 4}.}] Combine the estimates from the $M$ imputed datasets by Rubin's combining rule and obtain the MI estimator as $\hat{\tau}_{\text{MI}}=M^{-1}\sum_{m=1}^{M}\hat{\tau}_{\text{MI}}^{(m)}$, with the variance estimator \[ \mathbb{\hat{V}}(\hat{\tau}_{\text{MI}})=\frac{1}{M}\sum_{m=1}^{M}\mathbb{\hat{V}}(\hat{\tau}_{\text{MI}}^{(m)})+\left(1+\frac{1}{M}\right)B_{M}, \] where $B_{M}=(M-1)^{-1}\sum_{m=1}^{M}(\hat{\tau}_{\text{MI}}^{(m)}-\hat{\tau}_{\text{MI}})^{2}$ represents the between-imputation variance. \end{description} \citet{wang1998large} identify an efficiency loss in point estimation in the general MI procedure. A further loss of efficiency occurs in interval estimation, where Rubin's method may overestimate the variance in practice \citep{robins2000inference}.
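Step 4 above is purely arithmetic once the $M$ within-imputation analyses are done. A minimal numpy sketch of Rubin's combining rule (the function name is illustrative):

```python
import numpy as np

def rubin_combine(estimates, variances):
    """Rubin's combining rule (Step 4): pool the M point estimates and
    their within-imputation variance estimates into one inference."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    m = len(estimates)
    tau_mi = estimates.mean()                 # pooled point estimate
    within = variances.mean()                 # average within-imputation variance
    b_m = estimates.var(ddof=1)               # between-imputation variance B_M
    var_mi = within + (1.0 + 1.0 / m) * b_m   # Rubin's total variance
    return tau_mi, var_mi
```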
To illustrate the problem, decompose the variance of the MI estimator as \[ \mathbb{V}(\hat{\tau}_{\text{MI}})=\mathbb{V}(\hat{\tau}_{\text{MI}}-\hat{\tau})+\mathbb{V}(\hat{\tau})+2\text{cov}(\hat{\tau}_{\text{MI}}-\hat{\tau},\hat{\tau}), \] where $\hat{\tau}$ is the treatment effect estimator for the fully observed data. Rubin's method only estimates $\mathbb{V}(\hat{\tau}_{\text{MI}}-\hat{\tau})$ and $\mathbb{V}(\hat{\tau})$, and it treats $\text{cov}(\hat{\tau}_{\text{MI}}-\hat{\tau},\hat{\tau})=0$, which does not hold in sensitivity analyses that assume MNAR. For example, \citet{liu2016analysis} find that the variance estimator using Rubin's rule tends to overestimate the true variance in simulation studies under J2R. The motivating example in Section \ref{sec:example} further captures a change in the study result when using MI with Rubin's rule under J2R in the sensitivity analysis, which may result from this overestimation. One approach to this issue is to use the bootstrap to derive a replication-based variance estimate, but it is computationally intensive since the missing values have to be re-imputed $M$ times under the reconstructed sensitivity model in each bootstrap step.
Therefore, a more efficient method is needed that yields valid estimators for diverse treatment effect estimands, together with appropriate variance estimators and a simple implementation. We propose DI, based on the idea of MC integration, to obtain the inference, and a weighted bootstrap procedure to obtain a consistent variance estimator.
\section{Distributional imputation \label{sec:fi}}
We propose DI to draw inference on the treatment effect in sensitivity analyses. Given the parametric distributions of the missing components under a given sensitivity analysis setting, the key insight is to impute each missing value with samples from its conditional distribution given the observed data. Drawing on the idea of MC integration, any estimating equations applied to the imputed dataset approximate the mean estimating equations given the observed data and thus allow an efficient estimation of the target estimand.
The use of the mean estimating equations conditional on the observed data to assess the treatment effect is prevalent in the missing data literature. \citet{louis1982finding} takes advantage of the conditional mean estimating equations with the expectation-maximization algorithm to obtain valid inferences for the incomplete data. \citet{robins2000inference} also apply the idea of mean estimating equations to allow for the incompatibility between the imputation and analysis model. In the presence of missing data, one can estimate the function $\psi_{j}(Z_{i},\tau_{j})$ that characterizes the treatment effect by the conditional expectation given the observed values under certain sensitivity models that have been prespecified in the trial protocol. Therefore, a consistent estimator of $\tau_{j}$ for $j=1,2$ is the solution to \begin{equation} \sum_{i=1}^{n}\mathbb{E}\{\psi_{j}(Z_{i},\tau_{j})\mid Z_{\text{obs},i},\hat{\theta}\}=0,\label{conditional_exp} \end{equation} where $\hat{\theta}$ is a consistent estimator of an unknown modeling parameter $\theta\in\Theta$. A common choice of $\hat{\theta}$ is the pseudo maximum likelihood estimator (MLE) given the observed data, i.e., it solves the mean score equations \begin{equation} \frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\{s(Z_{i},\theta)\mid Z_{\text{obs},i}\}=0.\label{theta_score_eq} \end{equation} Note that the mean estimating equations in \eqref{conditional_exp} and \eqref{theta_score_eq} have general forms which can accommodate different sensitivity models. In longitudinal clinical trials, the commonly used mean estimating equations correspond to the score function of the MMRM for the observed data. However, even under the multivariate normal assumption, the explicit form of the estimator $\hat{\tau}_{j}$ is only feasible to obtain when the function $\psi_{j}(Z_{i},\tau_{j})$ has a linear form such as the one in Example \ref{example1} (a).
We can estimate the conditional expectation $\mathbb{E}\{\psi_{j}(Z_{i},\tau_{j})\mid Z_{\text{obs},i},\hat{\theta}\}$ using the complete data after imputation. For the missing components of the $i$th continuous response $Y_{i}$, we independently draw $Y_{\text{mis},i}^{*(1)},\cdots,Y_{\text{mis},i}^{*(M)}$ from the estimated conditional distribution $f(Z_{\text{mis},i}^{*(m)}\mid Z_{\text{obs},i},\hat{\theta})$ under a prespecified sensitivity model such as Assumptions \ref{assump:j2r}--\ref{assump:washout}.
With the imputed data, denote $Y_{i}^{*(m)}=(Y_{\text{obs},i},Y_{\text{mis},i}^{*(m)})$ as the imputed longitudinal responses and $Z_{i}^{*(m)}=(Z_{\text{obs},i},Z_{\text{mis},i}^{*(m)})$ as the full imputed data for $i$th patient. DI incorporates the idea of MC integration. When the imputation size $M$ is large, one can estimate the conditional expectation in \eqref{conditional_exp} as \begin{equation} \mathbb{E}\{\psi_{j}(Z_{i},\tau_{j})\mid Z_{\text{obs},i},\hat{\theta}\}\approx\frac{1}{M}\sum_{m=1}^{M}\psi_{j}(Z_{i}^{*(m)},\tau_{j}).\label{weighted_conditional} \end{equation}
Based on the MC approximation, we can therefore derive the DI estimator $\hat{\tau}_{\text{DI},j}$ for $j$th group by solving the estimating equations \begin{equation} \frac{1}{M}\sum_{i=1}^{n}\sum_{m=1}^{M}\psi_{j}(Z_{i}^{*(m)},\tau_{j})=0.\label{fi_ee} \end{equation}
\begin{example}[DI estimator for the treatment effect]\label{example2} For all estimands in Example \ref{example1}, the DI estimator of the treatment effect is $\hat{\tau}_{\text{DI}}=\hat{\tau}_{\text{DI},2}-\hat{\tau}_{\text{DI},1}$, where $\hat{\tau}_{\text{DI},j}$ for $j=1,2$ is derived by defining the following specific $\psi_{j}$ function and solving the following estimating equations: \begin{enumerate} \item The ATE: Set $\psi_{j}(Z_{i}^{*(m)},\tau_{j})=I(G_{i}=j)\big\{ R_{iT}Y_{\text{obs},iT}+(1-R_{iT})Y_{\text{mis},iT}^{*(m)}-\tau_{j}\big\}$, and $\hat{\tau}_{\text{DI},j}$ is the solution to \[ \sum_{i=1}^{n}I(G_{i}=j)\left\{ R_{iT}Y_{\text{obs},iT}+(1-R_{iT})(M^{-1}\sum_{m=1}^{M}Y_{\text{mis},iT}^{*(m)})-\tau_{j}\right\} =0. \] \item The risk difference of the treatment: Set \[ \psi_{j}(Z_{i}^{*(m)},\tau_{j})=I(G_{i}=j)\big\{ R_{iT}I(Y_{\text{obs},iT}\geq c)+(1-R_{iT})I(Y_{\text{mis},iT}^{*(m)}\geq c)-\tau_{j}\big\}, \] and $\hat{\tau}_{\text{DI},j}$ is the solution to \[ \sum_{i=1}^{n}I(G_{i}=j)\Big[R_{iT}I(Y_{\text{obs},iT}\geq c)+(1-R_{iT})\left\{ M^{-1}\sum_{m=1}^{M}I(Y_{\text{mis},iT}^{*(m)}\geq c)\right\} -\tau_{j}\Big]=0. \] \item Distributional information of the treatment, e.g. the QTE for the $q$th quantile of responses when $\tau_{j,q}$ is the $q$th quantile for distribution of outcomes at the last time point: Set \[ \psi_{j}(Z_{i}^{*(m)},\tau_{j,q})=I(G_{i}=j)\big\{ R_{iT}I(Y_{\text{obs},iT}\leq\tau_{j,q})+(1-R_{iT})I(Y_{\text{mis},iT}^{*(m)}\leq\tau_{j,q})-q\big\}, \] and $\hat{\tau}_{\text{DI},j}$ is the solution to \[ \sum_{i=1}^{n}I(G_{i}=j)\Big[R_{iT}I(Y_{\text{obs},iT}\leq\tau_{j,q})+(1-R_{iT})\left\{ M^{-1}\sum_{m=1}^{M}I(Y_{\text{mis},iT}^{*(m)}\leq\tau_{j,q})\right\} -q\Big]=0. \] \end{enumerate} \end{example}
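All three estimating equations in Example \ref{example2} have explicit solutions once the imputed draws are in hand: observed outcomes enter directly, while each missing outcome contributes the average of its $M$ draws. The following numpy sketch computes the group-specific DI estimates (array names, the placeholder handling, and the grid search used to invert the weighted empirical CDF for the quantile are our own illustrative choices, not the paper's implementation):

```python
import numpy as np

def di_estimators(y_obs, r, y_imp, c, q):
    """Group-specific DI estimators following Example 2.

    y_obs : (n,) last-visit outcomes (entries ignored where r == 0)
    r     : (n,) response indicators at the last visit
    y_imp : (n, M) imputed last-visit outcomes per patient
    c     : threshold for the risk difference
    q     : quantile level for the QTE
    """
    y_bar = y_imp.mean(axis=1)                     # M^{-1} sum_m Y*_m
    # ATE component: observed outcomes, imputed ones averaged over M
    ate = np.mean(np.where(r == 1, y_obs, y_bar))
    # risk component: fraction of outcomes >= c
    risk = np.mean(np.where(r == 1, (y_obs >= c), (y_imp >= c).mean(axis=1)))
    # quantile component: generalized inverse of the weighted empirical CDF
    grid = np.sort(np.concatenate([y_obs[r == 1], y_imp.ravel()]))
    cdf = [np.mean(np.where(r == 1, (y_obs <= t), (y_imp <= t).mean(axis=1)))
           for t in grid]
    quant = grid[np.searchsorted(cdf, q)]
    return ate, risk, quant
```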
\begin{remark}[Discussion of the incorporation of covariates in estimation] In sensitivity analyses, one may incorporate the covariate information to improve the efficiency of the treatment effect estimator \citep{tsiatis2007semiparametric}. For example, the ATE estimator derived from the sample average may not be the most efficient one; the one motivated by the analysis of covariance (ANCOVA) model is preferred in practice. We present an ANCOVA-motivated DI estimator $\hat{\tau}_{\text{DI},j}$ by defining the $\psi_{j}$ function and its corresponding estimating equations for $j=1,2$ as follows:
Set the $\psi_{j}$ function as \[ \psi_{j}(Z_{i}^{*(m)},\tau_{j},\gamma)=\begin{pmatrix}V_{i}\big\{ R_{iT}Y_{\text{obs},iT}+(1-R_{iT})Y_{\text{mis},iT}^{*(m)}-V_{i}^{\intercal}\gamma\big\}\\ \begin{Bmatrix}\Tilde{X}_{i}^{\intercal} & I(j=2)\Tilde{X}_{i}^{\intercal}\end{Bmatrix}\gamma-\tau_{j} \end{pmatrix}, \] where $V_{i}=\begin{Bmatrix}\Tilde{X}_{i}^{\intercal} & I(G_{i}=2)\Tilde{X}_{i}^{\intercal}\end{Bmatrix}^{\intercal}$, and $\gamma$ is the vector of joint regression coefficients in the two treatment groups. $\hat{\tau}_{\text{DI},j}$ is the solution to \[ \sum_{i=1}^{n}\begin{pmatrix}V_{i}\big\{ R_{iT}Y_{\text{obs},iT}+(1-R_{iT})(M^{-1}\sum_{m=1}^{M}Y_{\text{mis},iT}^{*(m)})-V_{i}^{\intercal}\gamma\big\}\\ \begin{Bmatrix}\Tilde{X}_{i}^{\intercal} & I(j=2)\Tilde{X}_{i}^{\intercal}\end{Bmatrix}\gamma-\tau_{j} \end{pmatrix}=0. \]
Note that the ANCOVA-motivated estimator for the ATE replaces the treatment-specific covariate mean with the overall covariate mean to gain efficiency, while this information is not applicable to the risk difference and the QTE. Without this information, the estimating functions presented in Example \ref{example2} yield the most efficient estimators of the ATE, the risk difference and the QTE. One can use the augmented inverse propensity weighted (AIPW) type of estimators to incorporate additional information in the propensity score and outcome regression model \citep{zhang2012causal}; however, AIPW does not improve the efficiency of the simple estimators (supported by simulation studies not shown). \end{remark}
To summarize, the DI procedure under specific sensitivity models is as follows: \begin{description} \item [{{Step 1}.}] For each group, fit an MMRM from a population-averaged perspective for the observed data and obtain the model parameter estimator $\hat{\theta}$ by solving the estimating equations \eqref{theta_score_eq}. \item [{{Step 2}.}] Impute the missing values $M$ times from the estimated imputation model $f(Z_{\text{mis},i}^{*(m)}\mid Z_{\text{obs},i},\hat{\theta})$ based on prespecified imputation mechanisms such as Assumptions \ref{assump:j2r}--\ref{assump:washout} for each group. \item [{{Step 3}.}] Derive the DI estimator $\hat{\tau}_{\text{DI},j}$ by solving the estimating equations \eqref{fi_ee} and get the treatment effect DI estimator $\hat{\tau}_{\text{DI}}$. \end{description} The theoretical asymptotic properties of the DI estimator and the variance estimation procedure are given in Section \ref{sec:theory}.
\begin{remark}[Computation complexity of DI and MI] DI and MI both use $M^{-1}$ as the weight for each imputed dataset. However, the approaches to conduct the full-data analysis after the imputation procedure are different. For MI, we conduct separate analyses for each imputed dataset and use Rubin's MI combining rule to get inference; while for DI, the analysis is done jointly based on the entire imputed dataset, with the inference derived from the mean estimating equations \eqref{fi_ee}. In terms of the computation complexity after imputation, DI is more computationally efficient than MI for point estimation. For example, if linear models are fitted in the analysis stage, MI fits $M$ linear models separately, with the total computational cost $O(Mnp^{2}+Mp^{3})$; DI fits one linear model for the pooled imputed dataset, with the total computational cost $O(Mnp^{2}+p^{3})$ \citep{friedman2001elements}. \end{remark}
\begin{remark}[Connection with fractional imputation] DI is similar to parametric fractional imputation (FI; \citealp{kim2011parametric}; \citealp{yang2016fractional}), where the $M$ imputed datasets are pooled and the full-data analysis is conducted jointly by solving the estimating equations. Our proposed DI utilizes the distributional behavior of the missing components by imputing them directly from the estimated conditional distribution given the observed data under a prespecified sensitivity analysis setting, and thus avoids the importance sampling that FI requires to generate imputed data from a proposal distribution, which simplifies implementation. \end{remark}
\begin{remark}[Choice of the imputation size $M$] DI utilizes the idea of MC integration to approximate the conditional expectation in \eqref{conditional_exp}. Based on the MC approximation theory \citep{geweke1989bayesian}, the MC error rate is $O(M^{-1/2})$ for any dimension. If the model is not computationally intensive, a larger $M$ can be selected to further reduce the MC error. As shown in the simulation studies, the inferences are not sensitive to the selection of the imputation size $M$. We observe a decent performance of the DI estimator even with a small imputation size (e.g., $M=5$). \end{remark}
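The $O(M^{-1/2})$ rate is easy to see numerically. In the toy experiment below (our own illustration, unrelated to the paper's data), the target is $\mathbb{E}(Z)=0$ for $Z\sim\mathcal{N}(0,1)$, and multiplying the number of draws by 100 shrinks the mean absolute MC error by roughly a factor of 10:

```python
import numpy as np

rng = np.random.default_rng(1)
mean_abs_err = {}
for m_size in (5, 500):
    # 2000 repeated MC estimates of E(Z) = 0, each based on m_size draws
    draws = rng.standard_normal((2000, m_size))
    mean_abs_err[m_size] = np.abs(draws.mean(axis=1)).mean()
# mean_abs_err[5] is roughly 10x mean_abs_err[500], matching O(M^{-1/2})
```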
\section{Theoretical properties and variance estimation \label{sec:theory}}
\subsection{Asymptotic properties of the DI estimator}
We establish the consistency and asymptotic normality of the DI estimator. For simplicity, we consider the inference for one group and omit the group subscript $j$; the extension to multiple groups is straightforward. Denote $\tau_{0}$ as the true parameter such that $\mathbb{E}\big\{\psi(Z_{i},\tau_{0})\big\}=0$. The comprehensive regularity conditions and technical proofs are given in Sections \ref{appen:thm1} and \ref{appen:thm2} in the supplementary material.
\begin{theorem}\label{consistency_thm} Under the regularity conditions listed in Section \ref{appen:thm1} in the supplementary material, the DI estimator $\hat{\tau}_{\text{DI}}\xrightarrow{\mathbb{P}}\tau_{0}$ as the imputation size $M\rightarrow\infty$ and sample size $n\rightarrow\infty$. \end{theorem}
\begin{theorem}\label{theo:asym_normal} Under the regularity conditions listed in Section \ref{appen:thm2} in the supplementary material, as the imputation size $M\rightarrow\infty$ and sample size $n\rightarrow\infty$, \[ \sqrt{n}(\hat{\tau}_{\text{DI}}-\tau_{0})\xrightarrow{d}\mathcal{N}[0,A(\tau_{0},\theta_{0})^{-1}B(\tau_{0},\theta_{0})\{A(\tau_{0},\theta_{0})^{-1}\}^{\intercal}], \] where \begin{align*} A(\tau_{0},\theta_{0}) & =\mathbb{E}\Big[\frac{\partial\psi(Z_{i},\tau_{0})}{\partial\tau}+\big\{\frac{\partial\Gamma(\tau_{0},\theta_{0})}{\partial\tau}\big\}\bar{s}_{i}(\theta_{0})\Big];\\ B(\tau_{0},\theta_{0}) & =\mathbb{V}\big\{\psi_{i}^{*}(\tau_{0},\theta_{0})+\Gamma(\tau_{0},\theta_{0})^{\intercal}\bar{s}_{i}(\theta_{0})\big\}. \end{align*} Here $\psi_{i}^{*}(\tau,\theta)=M^{-1}\sum_{m=1}^{M}\psi(Z_{i}^{*(m)},\tau)$, $\bar{s}_{i}(\theta)=\mathbb{E}\{s(Z_{i},\theta)\mid Z_{\text{obs},i}\}$, $\Gamma(\tau,\theta_{0})=\mathbf{I}_{\text{obs}}(\theta_{0})^{-1}\mathbf{I}_{\psi,\text{mis}}(\tau,\theta_{0})$, where $\mathbf{I}_{\text{obs}}(\theta)=\mathbb{E}\{-\partial\bar{s}_{i}(\theta)/\partial\theta\}$ and $\mathbf{I}_{\psi,\text{mis}}(\tau,\theta)=\mathbb{E}[\{s(Z_{i},\theta)-\bar{s}_{i}(\theta)\}\psi(Z_{i},\tau)]$. \end{theorem}
\subsection{Variance estimation}
From the result of asymptotic normality of $\hat{\tau}_{\text{DI}}$ in Theorem \ref{theo:asym_normal}, one consistent variance estimator of $\hat{\tau}_{\text{DI}}$ under a large imputation size $M$ is \[ \hat{\mathbb{V}}_{1}(\hat{\tau}_{\text{DI}})=A_{n}(Z,\hat{\tau}_{\text{DI}},\hat{\theta})^{-1}\big\{\frac{1}{n}B_{n}(Z,\hat{\tau}_{\text{DI}},\hat{\theta})\big\}\big\{ A_{n}(Z,\hat{\tau}_{\text{DI}},\hat{\theta})^{-1}\big\}^{\intercal}, \] where \begin{align*} A_{n}(Z,\hat{\tau}_{\text{DI}},\hat{\theta}) & =\frac{1}{nM}\sum_{i=1}^{n}\sum_{m=1}^{M}\frac{\partial\psi(Z_{i}^{*(m)},\hat{\tau}_{\text{DI}})}{\partial\tau},\\ B_{n}(Z,\hat{\tau}_{\text{DI}},\hat{\theta}) & =\frac{1}{n}\sum_{i=1}^{n}\big\{\psi_{i}^{*}(\hat{\tau}_{\text{DI}},\hat{\theta})+\hat{\Gamma}(\hat{\tau}_{\text{DI}},\hat{\theta})^{\intercal}\bar{s}_{i}^{*}(\hat{\theta})\big\}\big\{\psi_{i}^{*}(\hat{\tau}_{\text{DI}},\hat{\theta})+\hat{\Gamma}(\hat{\tau}_{\text{DI}},\hat{\theta})^{\intercal}\bar{s}_{i}^{*}(\hat{\theta})\big\}^{\intercal}, \end{align*} and $\bar{s}_{i}^{*}(\theta)=M^{-1}\sum_{m=1}^{M}s(Z_{i}^{*(m)},\theta)$. Here \[ \hat{\Gamma}(\hat{\tau}_{\text{DI}},\hat{\theta})=\big\{\frac{1}{n}\sum_{i=1}^{n}\bar{s}_{i}^{*}(\hat{\theta})\bar{s}_{i}^{*}(\hat{\theta})^{\intercal}\big\}^{-1}\Big[\frac{1}{nM}\sum_{i=1}^{n}\sum_{m=1}^{M}\big\{ s(Z_{i}^{*(m)},\hat{\theta})-\bar{s}_{i}^{*}(\hat{\theta})\big\}\psi(Z_{i}^{*(m)},\hat{\tau}_{\text{DI}})\Big]. \]
The sandwich form of $\mathbb{\hat{V}}_{1}(\hat{\tau}_{\text{DI}})$ involves the analytical form of the estimated score function $s(Z_{i}^{*(m)},\hat{\theta})$, which is difficult to compute in longitudinal settings. Replication-based variance estimation is preferred for its simplicity and is commonly obtained by the nonparametric bootstrap, but it requires considerable computation and re-imputation of the missing components under the refitted imputation model, especially in a large-scale clinical trial with numerous participants. We propose a weighted bootstrap to obtain a consistent variance estimator without having to re-impute the missing values at each bootstrap step.
The weighted bootstrap procedure parallels the DI procedure. However, caution is needed when deriving the replicated DI estimator. To preserve the imputation model in DI, we draw on the idea of importance sampling \citep{geweke1989bayesian} to incorporate the variability of the current replicated model parameter estimator $\hat{\theta}^{(b)}$ and target parameter estimator $\hat{\tau}_{\text{DI}}^{(b)}$ in each bootstrap iteration $b=1,\cdots,B$, where $B$ is the total number of bootstrap replicates. A standard recommendation for the total number of bootstrap replicates for variance estimation is
$B=100$ \citep{boos2013essential}. In this way, one can approximate the conditional expectation $\mathbb{E}\big\{\psi(Z_{i},\tau)|Z_{\text{obs},i},\hat{\theta}^{(b)}\big\}$ by a weighted sum as \begin{align*} \mathbb{E}\big\{\psi(Z_{i},\tau)\mid Z_{\text{obs},i},\hat{\theta}^{(b)}\big\}\approx\sum_{m=1}^{M}w_{i}^{(m)}(\hat{\theta}^{(b)})\psi(Z_{i}^{*(m)},\tau), \end{align*} where the importance weights are computed as \begin{equation} w_{i}^{(m)}(\hat{\theta}^{(b)})\propto\frac{f(Z_{i}^{*(m)}\mid Z_{\text{obs},i},\hat{\theta}^{(b)})}{f(Z_{i}^{*(m)}\mid Z_{\text{obs},i},\hat{\theta})},\label{importance_weight} \end{equation} with the constraint $\sum_{m=1}^{M}w_{i}^{(m)}(\hat{\theta}^{(b)})=1$ for all $i$.
To summarize, conduct the weighted bootstrap procedure in each iteration as follows: \begin{description} \item [{{Step 1}.}] Generate the i.i.d. bootstrap weights $u_{i}^{(b)}$ such that $\mathbb{E}(u_{i}^{(b)})=1,\mathbb{V}(u_{i}^{(b)})=1$ with $u_{i}^{(b)}\geq0$ for each individual. Obtain the model parameter estimator $\hat{\theta}^{(b)}$ by solving the estimating equations \begin{equation} \sum_{i=1}^{n}u_{i}^{(b)}\mathbb{E}\{s(Z_{i},\theta)\mid Z_{\text{obs},i}\}=0.\label{bootstrap_score} \end{equation} \item [{{Step 2}.}] Update the importance weights $w_{i}^{(m)}(\hat{\theta}^{(b)})$ such that it satisfies \eqref{importance_weight} with a constraint $\allowbreak \sum_{m=1}^{M}w_{i}^{(m)}(\hat{\theta}^{(b)})=1$ for all $i$, under the prespecified imputation settings such as Assumptions \ref{assump:j2r}--\ref{assump:washout}. \item [{{Step 3}.}] Obtain the DI estimator $\hat{\tau}_{\text{DI}}^{(b)}$ by solving the estimating equations \begin{equation} \sum_{i=1}^{n}\sum_{m=1}^{M}u_{i}^{(b)}w_{i}^{(m)}(\hat{\theta}^{(b)})\psi(Z_{i}^{*(m)},\tau)=0.\label{bootstrap_ee} \end{equation} \end{description} Repeat Steps 1--3 $B$ times, and get the replication variance estimator of the DI estimator as \[ \hat{\mathbb{V}}_{2}(\hat{\tau}_{\text{DI}})=\frac{1}{B-1}\sum_{b=1}^{B}(\hat{\tau}_{\text{DI}}^{(b)}-\hat{\tau}_{\text{DI}})^{2}. \]
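The two distinctive ingredients of a bootstrap iteration, the normalized importance weights in \eqref{importance_weight} and the weighted estimating equation in Step 3, can be sketched in numpy as follows for the simplest case of a mean-type estimand (function names and the log-density inputs are our own illustrative choices; a full implementation would evaluate the densities from the fitted MMRM):

```python
import numpy as np

def importance_weights(logf_new, logf_old):
    """Normalized importance weights w_i^{(m)}: density ratio of the imputed
    draws under the replicate estimate theta^(b) versus the original
    theta-hat, computed on the log scale for numerical stability.
    logf_new, logf_old: (n, M) arrays of log densities."""
    log_ratio = logf_new - logf_old
    log_ratio -= log_ratio.max(axis=1, keepdims=True)  # guard against overflow
    w = np.exp(log_ratio)
    return w / w.sum(axis=1, keepdims=True)            # rows sum to 1

def replicate_mean(y_imp, u, w):
    """Replicate DI estimator of a mean (Step 3): the root of
    sum_i sum_m u_i w_im (y_im - tau) = 0."""
    return (u[:, None] * w * y_imp).sum() / u.sum()
```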
\begin{remark}[Choice of the weight distribution] There are many candidate distributions for generating the bootstrap weights $u_{i}^{(b)}$, for example, an exponential distribution with rate parameter 1, denoted $\text{Exp}(1)$, or a discrete distribution such as the Poisson distribution with mean 1. The variance estimation is not sensitive to the choice of the weight distribution. We adopt $\text{Exp}(1)$ in the simulation studies. \end{remark}
Theorem \ref{theo:wb} shows the asymptotic validity of the above weighted bootstrap method, with the proof in Section \ref{appen:thm3} in the supplementary material.
\begin{theorem}\label{theo:wb} Under regularity conditions listed in Sections \ref{appen:thm1} and \ref{appen:thm2} in the supplementary material, with the bootstrap weights $u_{1}^{(b)},\cdots,u_{n}^{(b)}$ i.i.d. satisfying $\mathbb{E}(u_{i}^{(b)})=1,\mathbb{V}(u_{i}^{(b)})=1\text{ with }u_{i}^{(b)}\geq0$, $\hat{\mathbb{V}}_{2}(\hat{\tau}_{\text{DI}})$ is a consistent estimator of $\mathbb{V}(\hat{\tau}_{\text{DI}})$. \end{theorem}
\section{Simulation study \label{sec:simulation}}
We conduct simulation studies to assess the finite-sample validity of our proposed framework using DI and the weighted bootstrap for sensitivity analyses in longitudinal clinical trials. Consider a clinical trial with two groups and five visits. The baseline covariates $X$ are generated from the standard normal distribution with $p=3$ dimensions. The longitudinal responses $Y_{i}$ of the $i$th individual are generated independently from a multivariate normal distribution as the full-data model \eqref{eq:general_model}, where for the control and treatment groups, the group-specific coefficients $\beta_{j1},\cdots,\beta_{j5}$ and covariance matrices $\Sigma^{(j)}$ for $j=1,2$ are presented in Section \ref{appen:sim_result} in the supplementary material.
Consider the missing mechanism as MAR with a monotone missingness pattern. More precisely, assume all the baseline responses are observed, i.e. $R_{i1}=1$. For visit $k>1$, if $R_{ik-1}=0$, then $R_{il}=0$
for $l=k,\cdots,T$; otherwise let $R_{ik}\sim\text{Bernoulli}(\pi_{ik})$. We model the observed probability $\pi_{ik}$ at visit $k>1$ as a function of the observed information, $\text{logit}(\pi_{ik}\mid G_{i}=j)=\phi_{1j}+\phi_{2j}Y_{ik-1}$, where $\phi_{1j},\phi_{2j}$ are tuning parameters for the observed probabilities. We set $\phi_{11}=-3.2,\phi_{12}=-4.0,\phi_{21}=\phi_{22}=0.2$, which yields observed probabilities of 0.7865 and 0.7938 for the control and treatment groups, respectively.
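The monotone MAR dropout mechanism above can be sketched directly (a minimal numpy illustration with illustrative names; the paper's generating code is not shown):

```python
import numpy as np

def simulate_dropout(y, phi1, phi2, rng):
    """Monotone MAR dropout: R_1 = 1; given R_{k-1} = 1,
    R_k ~ Bernoulli(pi_k) with logit(pi_k) = phi1 + phi2 * y_{k-1}."""
    n, T = y.shape
    r = np.ones((n, T), dtype=int)
    for k in range(1, T):
        pi = 1.0 / (1.0 + np.exp(-(phi1 + phi2 * y[:, k - 1])))
        stay = rng.random(n) < pi
        r[:, k] = r[:, k - 1] * stay   # monotone: once 0, stays 0
    return r
```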
Different types of treatment effect estimands, including the ATE, the risk difference, and the QTE, are assessed. When the primary interest is the ATE, we use the ATE estimator motivated by ANCOVA since it is more efficient than the one obtained by directly taking the sample average. When the risk difference is of interest, we set a threshold $c=4.5$ and target $\tau=P(Y_{iT}\geq4.5\mid G_{i}=2)-P(Y_{iT}\geq4.5\mid G_{i}=1)$. When the QTE is of interest, we set $q=0.5$ to examine the behavior of the median. We focus on the sensitivity analysis under J2R to describe the deviation from MAR, which is consistent with our motivating example. For illustration, we only present the result for the ATE under J2R. Simulation results for sensitivity analyses under other sensitivity models such as the RTB and washout imputation, along with other treatment effect estimands under J2R, are given in Section \ref{appen:sim_result} in the supplementary material. We select the number of bootstrap replicates $B=100$, consider a common per-group sample size $N\in\{100,500,1000\}$ and an imputation size $M\in\{5,10,100\}$, and generate the bootstrap weights from $\text{Exp}(1)$.
We compare our proposed DI with MI in the simulation study. Rubin's method and the weighted bootstrap are applied to the MI and DI estimators, respectively, to obtain the variance estimates over 1000 MC simulations. The estimators are assessed using the point estimate (Point est), the MC variance denoted as the true variance (True var), the variance estimate (Var est), the relative bias of the variance estimate computed as $\Big[\mathbb{E}\big\{\mathbb{\hat{V}}(\hat{\tau})\big\}-\mathbb{V}(\hat{\tau})\Big]/\mathbb{V}(\hat{\tau})$, the coverage rate of the $95\%$ confidence interval (CI), and the mean CI length. We use the $95\%$ Wald CI $\big(\hat{\tau}-1.96\mathbb{\hat{V}}^{1/2}(\hat{\tau}),\hat{\tau}+1.96\mathbb{\hat{V}}^{1/2}(\hat{\tau})\big)$.
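These evaluation summaries are mechanical given the per-replication point and variance estimates. A minimal numpy sketch (function names are illustrative) of the Wald CI and the table metrics:

```python
import numpy as np

def wald_ci(tau_hat, var_hat):
    """95% Wald confidence interval."""
    half = 1.96 * np.sqrt(var_hat)
    return tau_hat - half, tau_hat + half

def simulation_metrics(tau_hats, var_hats, tau_true):
    """Summaries reported in the simulation table: MC (true) variance,
    mean variance estimate, relative bias, coverage rate, mean CI length."""
    tau_hats, var_hats = np.asarray(tau_hats), np.asarray(var_hats)
    true_var = tau_hats.var(ddof=1)           # MC variance across replications
    var_est = var_hats.mean()
    rel_bias = (var_est - true_var) / true_var
    lo, hi = wald_ci(tau_hats, var_hats)
    coverage = np.mean((lo <= tau_true) & (tau_true <= hi))
    mean_len = np.mean(hi - lo)
    return true_var, var_est, rel_bias, coverage, mean_len
```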
Table \ref{rbi_mean_table} shows the simulation results for the ATE estimator. The point estimates from both DI and MI move closer to the true value as the sample size increases, and their MC variances shrink, indicating that the MI and DI estimators are consistent and have comparable performance in point estimation. The efficiency of the estimator increases as the imputation size $M$ grows. For variance estimation, Rubin's method ends up overestimating the true variance as expected, with a larger relative bias and a conservative coverage rate. The variance estimate using the weighted bootstrap in DI is close to the true variance, with a well-controlled relative bias and a coverage rate close to the nominal level. For other types of estimands, the variance estimates of the QTE in MI and DI overestimate the true variance when the sample size is small. But as the sample size grows, the results from DI improve markedly, with the variance estimate close to the true value. Therefore, a relatively large sample size is suggested when estimating the QTE.
\begin{table}[!htbp] \centering \caption{Simulation results under J2R of the ATE estimator. Here the true value $\tau=1.5400$.}
\scalebox{1}{ \resizebox{\textwidth}{!}{ \begin{tabular}{ccccccccccccccccccc} \hline
& & \multicolumn{2}{c}{Point est} & & \multicolumn{2}{c}{True var} & & \multicolumn{2}{c}{Var est} & & \multicolumn{2}{c}{Relative bias} & & \multicolumn{2}{c}{Coverage rate} & & \multicolumn{2}{c}{Mean CI length}\tabularnewline
& & \multicolumn{2}{c}{($\times10^{-2}$)} & & \multicolumn{2}{c}{($\times10^{-2}$)} & & \multicolumn{2}{c}{($\times10^{-2}$)} & & \multicolumn{2}{c}{($\%$)} & & \multicolumn{2}{c}{($\%$)} & & \multicolumn{2}{c}{($\times10^{-2}$)}\tabularnewline \cline{3-4} \cline{4-4} \cline{6-7} \cline{7-7} \cline{9-10} \cline{10-10} \cline{12-13} \cline{13-13} \cline{15-16} \cline{16-16} \cline{18-19} \cline{19-19} N & M & MI & DI & & MI & DI & & MI & DI & & MI & DI & & MI & DI & & MI & DI\tabularnewline \hline
& 5 & 150.84 & 150.36 & & 14.86 & 14.60 & & 20.92 & 14.70 & & 40.85 & 0.74 & & 97.90 & 94.90 & & 178.76 & 149.61\tabularnewline 100 & 10 & 150.95 & 150.48 & & 14.27 & 14.25 & & 20.63 & 14.43 & & 44.59 & 1.25 & & 97.90 & 94.90 & & 177.64 & 148.19\tabularnewline
& 100 & 150.74 & 150.76 & & 14.00 & 13.91 & & 20.37 & 14.21 & & 45.44 & 2.17 & & 97.80 & 95.10 & & 176.61 & 147.05\tabularnewline \hline
& 5 & 153.52 & 153.13 & & 3.06 & 3.07 & & 4.08 & 3.03 & & 33.44 & -1.25 & & 98.00 & 94.50 & & 79.09 & 67.99\tabularnewline 500 & 10 & 153.42 & 153.45 & & 3.02 & 3.02 & & 4.01 & 2.97 & & 33.09 & -1.43 & & 97.80 & 94.50 & & 78.48 & 67.38\tabularnewline
& 100 & 153.39 & 153.43 & & 2.98 & 2.99 & & 3.97 & 2.92 & & 33.17 & -2.27 & & 98.20 & 94.10 & & 78.07 & 66.75\tabularnewline \hline
& 5 & 154.27 & 154.06 & & 1.46 & 1.47 & & 2.03 & 1.52 & & 39.23 & 3.48 & & 97.60 & 94.90 & & 55.84 & 48.14\tabularnewline 1000 & 10 & 154.09 & 154.06 & & 1.45 & 1.44 & & 2.00 & 1.49 & & 38.63 & 3.54 & & 97.70 & 94.80 & & 55.45 & 47.70\tabularnewline
& 100 & 154.12 & 154.11 & & 1.43 & 1.42 & & 1.98 & 1.46 & & 38.55 & 2.99 & & 97.90 & 94.20 & & 55.18 & 47.26\tabularnewline \hline \end{tabular}} } \label{rbi_mean_table} \end{table}
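The mechanics of Rubin's combining rule, whose variance overestimation is reported above, can be reproduced in miniature. The following sketch is illustrative only: the toy estimates and the function name are our own assumptions, not the paper's simulation design. It pools $M$ imputed-data point estimates and combines the within- and between-imputation variances.

```python
import numpy as np

rng = np.random.default_rng(0)

def rubins_rule(estimates, variances):
    """Combine M imputed-data analyses with Rubin's rule."""
    M = len(estimates)
    point = np.mean(estimates)              # pooled point estimate
    within = np.mean(variances)             # average within-imputation variance
    between = np.var(estimates, ddof=1)     # between-imputation variance
    total = within + (1 + 1 / M) * between  # Rubin's total variance
    return point, total

# Toy example: M = 10 imputed-data estimates of a treatment effect.
est = rng.normal(1.54, 0.05, size=10)
var = np.full(10, 0.14)
point, total = rubins_rule(est, var)
```

The between-imputation component is always added on top of the within-imputation variance, which is why the rule is conservative when, as under control-based imputation, the imputation and analysis models are uncongenial.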
For each prespecified sensitivity analysis setting, DI and MI deliver comparable point estimates for every type of estimand. The variance estimation using the weighted bootstrap in DI outperforms Rubin's MI combining rule in all cases, with far smaller relative biases and better coverage probabilities. The same interpretation of the results applies to the other settings, as shown in Section \ref{appen:sim_result} in the supplementary material.
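The weighted (multiplier) bootstrap used by DI for variance estimation can be sketched as follows. This is a minimal illustration with a sample-mean estimating equation and exponential multiplier weights; the data, weight distribution, and function names are assumptions for the demo, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
z = rng.normal(1.5, 1.0, size=500)  # stand-in for imputed outcomes

def weighted_bootstrap_var(z, B=200, rng=rng):
    """Variance of the mean estimator via the weighted bootstrap."""
    n = len(z)
    reps = np.empty(B)
    for b in range(B):
        u = rng.exponential(1.0, size=n)     # i.i.d. weights with mean 1
        reps[b] = np.sum(u * z) / np.sum(u)  # solves the weighted estimating eq.
    return np.var(reps, ddof=1)

v_boot = weighted_bootstrap_var(z)
v_true = np.var(z, ddof=1) / len(z)  # analytic variance of the sample mean
```

Because only the weights are redrawn, no re-imputation of the missing data is needed within each replicate, which is the computational advantage emphasized in the paper.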
\section{Return to the motivating example \label{sec:application}}
We apply the proposed DI framework to the motivating example in Section \ref{sec:example} to assess the treatment effect in sensitivity analyses. Apart from comparing the performance of MI with Rubin's rule and DI with the proposed weighted bootstrap for the ATE and the risk difference, we also explore the QTE, defined by the quantile difference of the relative change of the HAMD-17 score, in both the primary analysis under MAR and the sensitivity analysis under J2R. \textcolor{black}{The three treatment effect estimands are estimated through the estimating equations in Example \ref{example2} after imputation. For the QTE, we do not restrict attention to one specific quantile; instead, we present the estimated cumulative distribution function (CDF) of the relative change from baseline for each group. In the implementation of MI and DI, the imputation size is $M=100$ and the number of bootstrap replicates is $B=100$. }
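As a concrete illustration of the quantile-based estimand, the QTE at level $q$ is the difference between the two groups' marginal $q$-quantiles. A sketch with toy data (the distributions and variable names are our assumptions, not the trial's data):

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy relative change from baseline (more negative = larger improvement).
treated = rng.normal(-0.4, 0.3, size=1000)
control = rng.normal(-0.2, 0.3, size=1000)

def qte(treated, control, q):
    """Quantile treatment effect: difference of marginal q-quantiles."""
    return np.quantile(treated, q) - np.quantile(control, q)

# Evaluate the QTE across several quantile levels, as in the CDF plots.
effects = {q: qte(treated, control, q) for q in (0.25, 0.5, 0.75)}
```

Presenting the whole estimated CDF, as done in the paper, amounts to tracing this difference over all quantile levels $q$ rather than fixing one.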
Tables \ref{tab:real_reg_nonpara} and \ref{tab:real_prop_nonpara} present the analysis results using MI and DI. The primary analysis in the ``MAR'' rows indicates similar performance of MI and DI, with close point and variance estimates. Whereas MI with Rubin's variance estimator alters the study conclusion under J2R in terms of the ATE and the risk difference, DI with the weighted bootstrap procedure preserves a significant treatment effect by producing smaller standard errors and narrower CIs. Applying DI with the weighted bootstrap thus resolves the doubt about the effectiveness of the experimental drug, since it guarantees consistent variance estimation of the treatment effect.
\begin{table}[!htbp] \centering{}\centering \caption{Analysis of the HAMD-17 data of the ATE.\label{tab:real_reg_nonpara}} \begin{tabular}{ccccccccc} \hline
& \multicolumn{2}{c}{Point estimation} & & \multicolumn{2}{c}{Standard error} & & \multicolumn{2}{c}{P-value}\tabularnewline \cline{2-3} \cline{3-3} \cline{5-6} \cline{6-6} \cline{8-9} \cline{9-9} Setting & \multicolumn{1}{c}{MI} & DI & & \multicolumn{1}{c}{MI} & DI & & \multicolumn{1}{c}{MI} & DI\tabularnewline
& ($95\%$ CI) & ($95\%$ CI) & & & & & & \tabularnewline \hline MAR & -2.42 & -2.30 & & 1.06 & 1.11 & & 0.022 & 0.038\tabularnewline
& (-4.49, -0.35) & (-4.47, -0.13) & & & & & & \tabularnewline J2R & -1.81 & -1.68 & & 1.07 & 0.82 & & 0.091 & 0.039\tabularnewline
& (-3.91, 0.29) & (-3.28, -0.08) & & & & & & \tabularnewline RTB & -1.23 & -1.25 & & 1.10 & 0.96 & & 0.266 & 0.192\tabularnewline
& (-3.39, 0.93) & (-3.13, 0.63) & & & & & & \tabularnewline Washout & -0.71 & -0.75 & & 1.08 & 1.04 & & 0.511 & 0.475\tabularnewline
& (-2.83, 1.41) & (-2.79, 1.39) & & & & & & \tabularnewline \hline \end{tabular} \end{table}
\begin{table}[!htbp] \centering{}\centering \caption{Analysis of the HAMD-17 data of the risk difference.\label{tab:real_prop_nonpara}} \begin{tabular}{ccccccccc} \hline
& \multicolumn{2}{c}{Point estimation ($\%$)} & & \multicolumn{2}{c}{Standard error ($\%$)} & & \multicolumn{2}{c}{P-value}\tabularnewline \cline{2-3} \cline{3-3} \cline{5-6} \cline{6-6} \cline{8-9} \cline{9-9} Setting & \multicolumn{1}{c}{MI} & DI & & \multicolumn{1}{c}{MI} & DI & & \multicolumn{1}{c}{MI} & DI\tabularnewline
& ($95\%$ CI) & ($95\%$ CI) & & & & & & \tabularnewline \hline MAR & 15.64 & 15.53 & & 7.48 & 6.89 & & 0.037 & 0.024\tabularnewline
& (0.98, 30.30) & (2.02, 29.03) & & & & & & \tabularnewline J2R & 12.81 & 12.78 & & 7.44 & 5.95 & & 0.085 & 0.032\tabularnewline
& (-1.78, 27.40) & (1.11, 24.45) & & & & & & \tabularnewline RTB & 12.87 & 13.05 & & 6.94 & 6.54 & & 0.064 & 0.046\tabularnewline
& (-0.73, 26.48) & (0.24, 25.86) & & & & & & \tabularnewline Washout & 8.40 & 8.42 & & 7.17 & 6.74 & & 0.241 & 0.211\tabularnewline
& (-5.65, 22.45) & (-4.79, 21.63) & & & & & & \tabularnewline \hline \end{tabular} \end{table}
To evaluate the distributional behavior of the data, we estimate the CDF of the relative change of the HAMD-17 score at the last time point for each group, and the QTE as a function of the quantile level $q$, under both MAR and J2R in Figures \ref{fig:primary} and \ref{fig:j2r}. Similar to the results for the ATE and the risk difference, the estimated CDF obtained from DI has a shape comparable to the one obtained from MI, while the curve from DI has a narrower $95\%$ confidence region in the sensitivity analysis compared to MI. A statistically significant treatment effect is detected for patients in the lower quantiles of the HAMD-17 score in both the primary and sensitivity analyses.
\begin{figure}
\caption{The estimated CDF and QTE of relative change from baseline at last time point via MI and DI in the primary analysis, accompanied by the point-wise $95\%$ CI in dashed lines.}
\label{fig:primary}
\end{figure}
\begin{figure}
\caption{The estimated CDF and QTE of relative change from baseline at last time point via MI and DI in the sensitivity analysis under J2R, accompanied by the point-wise $95\%$ CI in dashed lines.}
\label{fig:j2r}
\end{figure}
Our general framework provides a comprehensive evaluation of the treatment effect. With the use of DI, the experimental drug shows a significant benefit for treating depression in both the primary analysis and the sensitivity analysis under J2R. If other sensitivity models such as RTB and washout imputation are assumed in the trial protocol, we also provide the corresponding results in Tables \ref{tab:real_reg_nonpara} and \ref{tab:real_prop_nonpara}, and in Section \ref{appen:real_result} in the supplementary material, to illustrate the broad applicability of the proposed framework. Under each sensitivity model, DI produces smaller standard errors and narrower CIs than MI. One should notice, however, that the study conclusion regarding the treatment effect changes with the prespecified sensitivity model. Among those sensitivity analyses, only the result under J2R using DI preserves the statistical significance of the ATE; the result under washout imputation fails to show an improvement of the treatment with respect to the risk difference. This suggests the potential impact of different missingness assumptions on the estimated treatment effect of an experimental drug. The study statistician should carefully interpret the primary and sensitivity analysis results with investigators, based on the missingness assumptions and additional clinical knowledge of the drug.
\section{Conclusion \label{sec:conclude}}
In this paper, we propose DI based on the idea of MC integration and establish a unified framework for sensitivity analysis in longitudinal clinical trials to assess the impact of the MAR assumption in the primary analysis. Our framework is flexible enough to accommodate various sensitivity models and treatment effect estimands. We apply the proposed DI with the weighted bootstrap to an antidepressant longitudinal clinical study and detect a statistically significant treatment effect in both the primary and sensitivity analyses for the ATE, the risk difference, and the QTE, overcoming the inefficiency and variance overestimation of MI with Rubin's rule. The study results show that the experimental drug yields a significant improvement in treating depression. The DI framework has a solid theoretical guarantee and avoids re-imputation of the missing data in the variance estimation via the weighted bootstrap.
While we present DI under monotone missingness, DI can also handle intermittent missing values as long as the imputation model of the missing values given the observed values is tractable. If direct sampling from the target imputation model is difficult, one can resort to alternative sampling strategies such as importance sampling or Metropolis--Hastings. We leave this topic as a future research direction.
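When the conditional imputation model cannot be sampled directly, a random-walk Metropolis--Hastings step is one generic fallback of the kind mentioned above. Below is a self-contained sketch; the stand-in target density and all names are our own assumptions, not a model from the paper.

```python
import math
import random

random.seed(3)

def log_target(x):
    """Unnormalized log-density of a stand-in imputation model: N(1, 1)."""
    return -0.5 * (x - 1.0) ** 2

def metropolis_hastings(log_target, x0, n_draws, step=1.0):
    """Random-walk Metropolis-Hastings sampler for a 1-d target."""
    draws, x = [], x0
    for _ in range(n_draws):
        prop = x + random.gauss(0.0, step)  # symmetric random-walk proposal
        # Accept with probability min(1, target(prop) / target(x)).
        if math.log(random.random()) < log_target(prop) - log_target(x):
            x = prop
        draws.append(x)
    return draws

draws = metropolis_hastings(log_target, x0=0.0, n_draws=5000)
mean_est = sum(draws[1000:]) / len(draws[1000:])  # discard burn-in
```

Only the unnormalized conditional density of the missing values given the observed ones is needed, which is exactly the situation where direct sampling fails but the model remains tractable up to a constant.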
Although we present the framework for sensitivity analyses using continuous longitudinal outcomes, it can be extended to categorical or survival outcomes under suitable prespecified imputation mechanisms. For example, \citet{tang2018controlled} modifies the control-based imputation model for binary and ordinal responses based on the generalized linear mixed model; \citet{yang2020smim} adopts the $\delta$-adjusted and control-based imputation models for survival outcomes in sensitivity analysis. With well-motivated treatment effect estimands and suitable prespecified sensitivity assumptions, our framework can handle sensitivity analyses for different types of responses in clinical trials. \citet{guan2019unified} establish a unified framework of MI via the wild bootstrap for causal inference in observational studies; extending the proposed DI inference to this context is straightforward.
The proposed general framework for sensitivity analyses in longitudinal clinical trials relies on parametric modeling assumptions in both the imputation and analysis stages. This parametric setup originates from MI. In future work, we will develop DI under semiparametric and nonparametric models as more flexible settings for sensitivity analyses.
\section*{Supplementary material}
The supplementary materials include the setup and proofs of the theorems, additional simulation results, and real-data application results.
\begin{center} \textbf{\Large{}Supplementary material for ``Sensitivity analysis in longitudinal clinical trials via distributional imputation'' by Liu et al.}{\Large\par} \par\end{center}
\pagenumbering{arabic} \renewcommand*{\thepage}{S\arabic{page}}
\setcounter{lemma}{0} \global\long\def\thelemma{\textup{S}\arabic{lemma}} \setcounter{equation}{0} \global\long\def\theequation{S\arabic{equation}} \setcounter{section}{0} \global\long\def\thesection{S\arabic{section}} \global\long\def\thesubsection{S\arabic{section}.\arabic{subsection}} \setcounter{table}{0} \global\long\def\thetable{\textup{S}\arabic{table}} \setcounter{figure}{0} \global\long\def\thefigure{\textup{S}\arabic{figure}} \setcounter{thm}{0} \global\long\def\thethm{\textup{S}\arabic{thm}} \setcounter{corollary}{0} \global\long\def\thecorollary{\textup{S}\arabic{corollary}}
This supplementary material contains technical details, additional simulation studies, and results for the real-data application. Section \ref{appen:thm} gives the regularity conditions and proof of the theorems. Section \ref{appen:sim_result} presents additional simulation results under RTB and washout imputation mechanisms. Section \ref{appen:real_data} contains additional analysis results regarding the QTE under RTB and washout imputation and the model diagnosis of the underlying modeling assumptions in the real-data application.
\section{Setup and proof of the theorems}\label{appen:thm} To emphasize that the DI estimator depends on the sample size, we add the subscript $n$ and write $\hat\tau_{DI,n}$ in the following theorems.
\subsection{Setup and proof of Theorem \ref{consistency_thm}}\label{appen:thm1}
Let $s(Z, \theta), \psi(Z, \tau)$ satisfy the conditions listed in Section \ref{sec:notation}. Denote $$\bar s(\theta) = n^{-1} \sum_{i=1}^n \mathbb{E}\{ s(Z_i, \theta)|Z_{\text{obs},i} \},$$and $$\bar \psi_n(\tau, \theta):= n^{-1}\sum_{i=1}^n \mathbb{E}\big\{\psi(Z_i, \tau)|Z_{\text{obs},i}, \theta \big\}.$$ Assume the following regularity conditions:
\begin{enumerate} \item[C1.] $\hat{\theta}$ solves equation \eqref{theta_score_eq}, i.e., $\bar{s}(\hat{\theta})=0$, and there exists a unique $\theta_{0}$ such that $\mathbb{E}\big\{ s(Z_{i},\theta_{0})\big\}=0$. \item[C2.] $\bar{s}(\theta)$ is dominated by an integrable function $g(Z_{\text{obs}})$
for all $Z_{\text{obs}}\subset\mathbb{R}^{d}$ and all $\theta$ with respect to the conditional distribution function $f(Z_{\text{mis}}|Z_{\text{obs}},\theta)$. \item[C3.] There exists a unique $\tau_{0}$ such that $\mathbb{E}\big\{\psi(Z_{i},\tau_{0})\big\}=0$. \item[C4.] $\bar{\psi}_{n}(\tau,\theta)$ is dominated by an integrable function $g_{1}(Z_{\text{obs}})$ for all $Z_{\text{obs}}\subset\mathbb{R}^{d}$ and $\tau$ with respect to the conditional distribution function
$f(Z_{\text{mis}}|Z_{\text{obs}},\theta)$. \item[C5.] $\mathbb{V}\big\{\bar{\psi}_{n}(\tau,\theta)\big\}<\infty$. \end{enumerate} Under the regularity conditions listed above, we prove that the DI estimator $\hat{\tau}_{DI,n}\xrightarrow{\mathbb{P}}\tau_{0}$ as the imputation size $M\rightarrow\infty$ and the sample size $n\rightarrow\infty$, where $\hat{\tau}_{DI,n}$ solves equation \eqref{fi_ee}.
\begin{proof} We first show the consistency of $\hat \theta$. Since $s(Z, \theta)$ is continuous in $\theta$ and measurable in $Z$, $\bar s(\theta)$ is continuous in $\theta$ and measurable in $Z_{\text{obs}}$. Together with regularity condition C2, these observations satisfy the conditions of Theorem 2 in \cite{jennrich1969asymptotic}. Thus \begin{equation*}
\bar s(\theta) \xrightarrow{a.s.} \mathbb{E}\big\{s(Z_i, \theta)\big\} \text{ uniformly in } \theta \in \Theta. \end{equation*}
We then prove the consistency by contradiction. Suppose $\hat \theta$ does not converge to $\theta_0$ in probability, i.e., there exists a subsequence $\{\hat \theta_{n_k}\} \xrightarrow{\mathbb{P}} \theta_1 \neq \theta_0$. Then, for any $\varepsilon > 0$, by the triangle inequality we have \begin{align*}
P\Big\{\big|\bar s(\hat \theta_{n_k}) - \mathbb{E}\big\{s(Z_i, \theta_1)\big\} \big| \geq \varepsilon\Big\} & \leq P\Big\{\big|\bar s(\hat \theta_{n_k}) - \mathbb{E}\big\{s(Z_i, \hat \theta_{n_k})\big\} \big| \geq \frac{\varepsilon}{2}\Big\}\\ &\quad +P\Big\{\big|\mathbb{E}\big\{s(Z_i, \hat \theta_{n_k})\big\} - \mathbb{E}\big\{s(Z_i, \theta_{1})\big\}\big| \geq \frac{\varepsilon}{2}\Big\}. \end{align*}
By the uniform convergence of $\bar s(\theta)$ in $\theta$, the first term on the right-hand side converges to 0. By the continuity of $\mathbb{E}\big\{s(Z_i, \theta)\big\}$ in $\theta$ and $\{\hat \theta_{n_k}\} \xrightarrow{\mathbb{P}} \theta_1 \neq \theta_0$, the second term on the right-hand side also converges to 0. Thus we have $
\text{lim}_{k\rightarrow \infty} P\Big\{\big|\bar s(\hat \theta_{n_k}) - \mathbb{E}\big\{s(Z_i, \theta_1)\big\} \big| \geq \varepsilon\Big\} = 0 $, i.e., $\bar s(\hat \theta_{n_k}) \xrightarrow{\mathbb{P}} \mathbb{E}\big\{s(Z_i, \theta_1)\big\} \neq 0$, where the last inequality follows from the uniqueness of $\theta_0$ in C1. On the other hand, $\hat \theta$ solves equation \eqref{theta_score_eq}, which implies $\bar s(\hat \theta_{n_k}) \xrightarrow{\mathbb{P}} 0$, contradicting what was proved above. Therefore $\hat \theta \xrightarrow{\mathbb{P}} \theta_0$.
Denote $\bar \psi(\tau, \theta):= \mathbb{E}\big\{\psi(Z_i, \tau) | \theta \big\}$. Since $\psi(Z,\tau)$ is continuous in $\tau$ and measurable in $Z$ for each $\tau$, both $\bar \psi_n(\tau, \theta)$ and $\bar \psi(\tau, \theta)$ are continuous in $\tau$. Also, $\bar \psi_n(\tau, \theta)$ is measurable in $Z_{\text{obs}}$ for each $\tau$. By the continuous mapping theorem, for every $\tau$, $\bar \psi_n(\tau, \hat \theta) - \bar \psi_n(\tau, \theta_0) \xrightarrow{\mathbb{P}} 0$ and $\bar \psi(\tau, \hat \theta) \xrightarrow{\mathbb{P}} \bar \psi(\tau, \theta_0)$.
Note that $Z^{*(m)}_{i}$ are generated via Monte Carlo integration. By C5, $\mathbb{V}\Big[ \mathbb{E}\big\{\psi(Z_i,\tau)|Z_{\text{obs},i}, \theta \big\}\Big]$ is finite. From the asymptotic theory of Monte Carlo approximation in \cite{lepage1978new}, we have \begin{align}\label{eq:mc_integration}
\frac{1}{n}\frac{1}{M}\sum_{i=1}^n \sum_{m = 1}^M \psi(Z^{*(m)}_{i},\tau) &= \frac{1}{n}\sum_{i=1}^n \Big[\mathbb{E}\big\{\psi(Z_i, \tau)|Z_{\text{obs},i}, \theta \big\} + O_p(M^{-1/2}) \Big] \nonumber \\
& = \bar \psi_n(\tau, \theta) + O_p(M^{-1/2}). \end{align}
Since $\hat \tau_{DI,n}$ solves equation \eqref{fi_ee}, as $M \rightarrow \infty$ we have \begin{equation*}
\bar \psi_n(\hat \tau_{DI,n}, \hat \theta) \xrightarrow{\mathbb{P}} 0. \end{equation*}
Regularity condition C4, the assumptions on $\psi(Z, \tau)$, and the consistency of $\hat \theta$ together satisfy the conditions of Theorem 2 in \cite{jennrich1969asymptotic}; hence, as $n \rightarrow \infty$, \begin{equation*}
\bar \psi_n(\tau, \hat \theta) \xrightarrow{a.s.} \bar \psi(\tau, \theta_0) \text{ uniformly in } \tau \in \Omega. \end{equation*}
We again prove the consistency by contradiction. Suppose $\hat \tau_{DI,n}$ does not converge to $\tau_0$ in probability, i.e., there exists a subsequence $\{\hat \tau_{n_k}\} \xrightarrow{\mathbb{P}} \tau_1 \neq \tau_0$. Then, for any $\varepsilon > 0$, by the triangle inequality we have \begin{align*}
P\Big\{\big|\bar \psi_n(\hat \tau_{n_k}, \hat \theta) - \bar \psi(\tau_1, \theta_0)\big| \geq \varepsilon\Big\} &\leq P\Big\{\big|\bar \psi_n(\hat \tau_{n_k}, \hat \theta) - \bar \psi(\hat \tau_{n_k}, \theta_0)\big| \geq \frac{\varepsilon}{2}\Big\} \\& \quad+ P\Big\{\big|\bar \psi(\hat \tau_{n_k}, \theta_0) - \bar \psi(\tau_1, \theta_0)\big| \geq \frac{\varepsilon}{2}\Big\}. \end{align*}
By the uniform convergence of $\bar \psi_n(\tau, \hat \theta)$ in $\tau$ and the consistency of $\hat \theta$, the first term on the right-hand side converges to 0. By the continuity of $\bar \psi(\tau, \theta_0)$ and $\{\hat \tau_{n_k}\} \xrightarrow{\mathbb{P}} \tau_1 \neq \tau_0$, the second term on the right-hand side also converges to 0. Thus we have $
\text{lim}_{k\rightarrow \infty} P\Big\{\big|\bar \psi_n(\hat \tau_{n_k}, \hat \theta) - \bar \psi(\tau_1, \theta_0)\big| \geq \varepsilon\Big\} = 0 $, i.e., $\bar \psi_n(\hat \tau_{n_k}, \hat \theta) \xrightarrow{\mathbb{P}} \bar \psi(\tau_1, \theta_0) \neq 0$, where the last inequality follows from the uniqueness of $\tau_0$ in C3. This contradicts the fact that $\bar \psi_n(\hat \tau_{n_k}, \hat \theta) \xrightarrow{\mathbb{P}} 0$, as proved above. Therefore $\hat \tau_{DI,n} \xrightarrow{\mathbb{P}} \tau_0$.
\end{proof}
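The $O_p(M^{-1/2})$ Monte Carlo error invoked in \eqref{eq:mc_integration} can be checked empirically. The sketch below is an illustration only; the toy integrand $\psi(Z)=Z$ and the sampling distribution are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def mc_error(M, reps=2000):
    """RMSE of the Monte Carlo average of psi(Z) = Z over M draws of N(0,1)."""
    draws = rng.normal(0.0, 1.0, size=(reps, M))
    # True conditional expectation is 0, so the mean itself is the error.
    return np.sqrt(np.mean(draws.mean(axis=1) ** 2))

e_small, e_large = mc_error(100), mc_error(400)
ratio = e_small / e_large  # quadrupling M should roughly halve the RMSE
```

The observed ratio near 2 when $M$ is quadrupled reflects the $M^{-1/2}$ rate that makes the Monte Carlo term asymptotically negligible as $M \rightarrow \infty$.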
\subsection{Setup and proof of Theorem \ref{theo:asym_normal}}\label{appen:thm2}
Let $s(Z, \theta), \psi(Z, \tau)$ satisfy the conditions listed in Section \ref{sec:notation}. We use the same notation as in Theorem \ref{consistency_thm} and assume that the regularity conditions C1--C5 hold, together with the following additional regularity conditions: \begin{enumerate}
\item[C6.] The solution $\hat \theta$ in \eqref{theta_score_eq} satisfies $\hat \theta - \theta_0 = O_p(n^{-1/2})$.
\item[C7.] The partial derivatives of $\bar s(\theta), \bar \psi(\tau, \theta)$ with respect to $\theta$ exist and are continuous around $\theta_0$ almost everywhere. The second derivatives of $\bar s(\theta), \bar \psi(\tau, \theta)$ with respect to $\theta$ are continuous and dominated by some integrable functions.
\item[C8.] The partial derivative of $\bar s(\theta)$ satisfies
\begin{equation*}
\lVert \frac{\partial \bar s(\theta)}{\partial \theta} - \mathbb{E}\big\{\frac{\partial \bar s(\theta)}{\partial \theta}\big\} \rVert \xrightarrow{\mathbb{P}} 0 \text{ uniformly in } \theta,
\end{equation*}
and $\mathbb{E}\big\{\partial \bar s(\theta)/\partial \theta \big\}$ is continuous and nonsingular at $\theta_0$. The partial derivative of $\bar \psi(\tau, \theta)$ with respect to $\theta$ satisfies
\begin{equation*}
\lVert \frac{\partial \bar \psi(\tau, \theta)}{\partial \theta} - \mathbb{E}\big\{\frac{\partial \bar \psi(\tau, \theta)}{\partial \theta}\big\} \rVert \xrightarrow{\mathbb{P}} 0 \text{ uniformly in } \theta,
\end{equation*}
and $\mathbb{E}\big\{\partial \bar \psi(\tau, \theta)/\partial \theta \big\}$ is continuous with respect to $\theta$ at $\theta_0$.
\item[C9.] There exists $a > 0$ such that $\mathbb{E}\big\{\psi(Z_i, \tau)^{2+a} \big\} < \infty$ and $\mathbb{E}\big\{s_{j}(Z_i, \theta)^{2+a} \big\} < \infty$ where $s_{j}(Z_i, \theta) = \partial f(Z_i,\theta)/\partial \theta_j$ for $j = 1, \cdots, q$ and $\theta_j$ is the $j$th element of $\theta$.
\item[C10.] $\psi(Z, \tau)$ and its first two derivatives with respect to $\tau$ exist for all $Z$ and all $\tau$ in a neighborhood of $\tau_0$, where $\mathbb{E} \big\{\psi(Z_i, \tau_0)\big\} = 0$.
\item[C11.] For each $\tau$ in a neighborhood of $\tau_0$, there exists an integrable function $g_2(Z)$ such that for all $j,k \in \{1, \cdots, d\}$ and all $Z$,
\begin{equation*}
\big|\frac{\partial^2}{\partial \tau_j \partial \tau_k}\psi_l(Z, \tau) \big| \leq g_2(Z).
\end{equation*}
\item[C12.] $A(\tau_0, \theta_0) = \mathbb{E}\Big[\partial \psi(Z_i, \tau_0)/\partial \tau + \big\{\partial \Gamma(\tau_0, \theta_0)/\partial \tau \big\}\bar s_i(\theta_0) \Big]$ exists and is nonsingular.
\item[C13.] $B(\tau_0, \theta_0) = \mathbb{V}\big\{\psi_i^{*}(\tau_0, \theta_0) + \Gamma(\tau_0, \theta_0)^{\intercal}\bar s_i(\theta_0)\big\}$ exists and is finite, where
$\psi_i^{*}(\tau, \theta)= M^{-1}\sum_{m = 1}^M \psi(Z^{*(m)}_{i},\tau)$, $\bar s_i(\theta) = \mathbb{E}\{s(Z_i, \theta) | Z_{\text{obs},i}\}$, $\Gamma(\tau, \theta_0) = \mathbf{I}_{\text{obs}}(\theta_0)^{-1} \mathbf{I}_{\psi, \text{mis}}(\tau, \theta_0)$. Here $\mathbf{I}_{\text{obs}}(\theta) = \mathbb{E}\{-\partial \bar s_i(\theta)/\partial \theta\}$, $\mathbf{I}_{\psi, \text{mis}}(\tau, \theta) = \mathbb{E}[\{s(Z_i, \theta) - \bar s_i(\theta)\}\psi(Z_{i},\tau)]$. \end{enumerate}
Under the regularity conditions above, as the imputation size $M \rightarrow \infty$ and the sample size $n \rightarrow \infty$, we prove that \begin{equation*}
\sqrt{n}(\hat \tau_{DI,n} - \tau_0) \xrightarrow{d} \mathcal{N}(0, A(\tau_0, \theta_0)^{-1}B(\tau_0, \theta_0)\{A(\tau_0, \theta_0)^{-1}\}^{\intercal}). \end{equation*}
\begin{proof} In Theorem 1, we proved that $\hat \theta \xrightarrow{\mathbb{P}} \theta_0$. Consider a Taylor series expansion of $\bar s(\hat \theta)$ around $\theta_0$: there exists $\Tilde{\theta}$ between $\hat \theta$ and $\theta_0$ such that
\begin{equation*}
\bar s(\hat \theta) = \bar s(\theta_0) + \frac{\partial \bar s(\theta_0)}{\partial \theta^{\intercal}}(\hat \theta - \theta_0) + \frac{1}{2} (\hat \theta - \theta_0)^{\intercal} \big\{\frac{\partial^2 \bar s(\Tilde{\theta})}{\partial \theta \partial \theta^{\intercal}}\big\} (\hat \theta - \theta_0). \end{equation*}
Note that by C7, $\partial^2 \bar s(\Tilde{\theta})/(\partial \theta \partial \theta^{\intercal}) = O_p(1)$; thus by C6, the quadratic term is $o_p(n^{-1/2})$. Since $\hat \theta$ solves \eqref{theta_score_eq}, we have \begin{align}\label{eq:theta_taylor}
\hat \theta - \theta_0 & = \big\{-\frac{\partial \bar s( \theta_0)}{\partial \theta^{\intercal}}\big\}^{-1} \big\{\bar s(\theta_0) \big\} + o_p(n^{-1/2}) \nonumber \\
& = \big\{-\frac{1}{n}\sum_{i=1}^n \frac{\partial \bar s_i( \theta_0)}{\partial \theta^{\intercal}}\big\}^{-1} \big\{\frac{1}{n}\sum_{i=1}^n \bar s_i(\theta_0) \big\} + o_p(n^{-1/2}). \end{align}
Consider a Taylor series expansion of $\bar{\psi}_n(\tau, \hat \theta)$ with respect to $\theta$ around $\theta_0$: there exists $\Tilde{\theta}$ between $\theta_0$ and $\hat \theta$ such that \begin{equation*}
\bar{\psi}_n(\tau, \hat \theta) - \bar{\psi}_n(\tau, \theta_0) = \frac{\partial \bar{\psi}_n(\tau, \theta_0)}{\partial \theta^{\intercal}} (\hat \theta - \theta_0) + \frac{1}{2}(\hat \theta - \theta_0)^{\intercal}\big\{\frac{\partial^2 \bar{\psi}_n(\tau; \Tilde{\theta})}{\partial \theta\partial \theta^{\intercal}} \big\}(\hat \theta - \theta_0) + o_p(n^{-1/2}). \end{equation*}
By C7, $\partial^2 \bar{\psi}_n(\tau; \Tilde{\theta})/(\partial \theta \partial \theta^{\intercal}) = O_p(1)$; therefore by C6, the quadratic term on the right-hand side is $o_p(n^{-1/2})$. We proceed to compute the partial derivative $\partial \bar{\psi}_n(\tau, \theta_0)/\partial \theta$ as follows: \begin{align*}
\frac{\partial \bar{\psi}_n(\tau, \theta_0)}{\partial \theta} & = \frac{1}{n}\sum_{i=1}^n \int \frac{\partial f(Z_i|Z_{\text{obs},i}, \theta_0)}{\partial \theta} \psi(Z_i, \tau) dZ_i \\
& = \frac{1}{n}\sum_{i=1}^n \int \frac{\partial \log \big\{f(Z_i|Z_{\text{obs},i}, \theta_0)\big\}}{\partial \theta}f(Z_i|Z_{\text{obs},i}, \theta_0) \psi(Z_i, \tau) dZ_i \\
& = \frac{1}{n}\sum_{i=1}^n \int \Big[\frac{\partial \log \big\{f(Z_i| \theta_0)\big\}}{\partial \theta} - \frac{\partial \log \big\{f(Z_{\text{obs}, i}| \theta_0)\big\}}{\partial \theta} \Big]f(Z_i|Z_{\text{obs},i}, \theta_0) \psi(Z_i, \tau) dZ_i \\
& = \frac{1}{n}\sum_{i=1}^n \int \Big[s(Z_i, \theta_0) - \frac{\partial \log \big\{f(Z_{\text{obs}, i}| \theta_0)\big\}}{\partial \theta}\Big]f(Z_i|Z_{\text{obs},i}, \theta_0) \psi(Z_i, \tau) dZ_i \\
& = \frac{1}{n}\sum_{i=1}^n \mathbb{E}\Big[ \big\{s(Z_i, \theta_0) - \bar s_i(\theta_0)\big\}\psi(Z_i, \tau)\big| Z_{\text{obs},i}, \theta_0 \Big]. \end{align*} Note that in the first line of the equations above, we can interchange the integral and the derivative because the support of $f(Z, \theta_{0})$ does not depend on $\theta_{0}$. The remaining lines follow from standard manipulations.
Therefore, we can rewrite the Taylor expansion of $\bar \psi_n(\tau, \hat \theta)$ around $\theta_0$ based on the last equation and \eqref{eq:theta_taylor} as \begin{align*}
\bar{\psi}_n(\tau, \hat \theta) - \bar{\psi}_n(\tau, \theta_0) & = \Bigg(\frac{1}{n}\sum_{i=1}^n \mathbb{E}\Big[\big\{s(Z_i, \theta_0) - \bar s_i(\theta_0)\big\}\psi(Z_i, \tau) \big| Z_{\text{obs},i}, \theta_0 \Big]\Bigg) \\
&\quad \times \big\{-\frac{1}{n}\sum_{i=1}^n \frac{\partial \bar s_i( \theta_0)}{\partial \theta^{\intercal}}\big\}^{-1} \big\{\frac{1}{n}\sum_{i=1}^n \bar s_i(\theta_0) \big\} + o_p(n^{-1/2}). \end{align*}
By the weak law of large numbers and C8, $n^{-1}\sum_{i=1}^n \mathbb{E}\Big[\big\{s(Z_i, \theta_0) - \bar s_i(\theta_0)\big\}\psi(Z_i, \tau) \big| Z_{\text{obs},i}, \theta_0 \Big] = \mathbb{E}\Big[\big\{s(Z_i, \theta_0) - \bar s_i(\theta_0)\big\}\psi(Z_i, \tau) \Big] + o_p(1)$, and $$\big\{n^{-1}\sum_{i=1}^n \partial \bar s_i(\theta_0)/\partial \theta^{\intercal}\big\}^{-1} = \Big[\mathbb{E}\big\{\partial \bar s_i(\theta_0)/\partial \theta^{\intercal}\big\}\Big]^{-1} + o_p(1).$$ Denote $$\Gamma(\tau, \theta_0) := \mathbf{I}_{\text{obs}}(\theta_0)^{-1}\mathbf{I}_{\psi, \text{mis}}(\tau, \theta_0) = \Big[\mathbb{E}\big\{-\frac{\partial \bar s_i(\theta_0)}{\partial \theta^{\intercal}}\big\}\Big]^{-1}\mathbb{E}\Big[\big\{s(Z_i, \theta_0) - \bar s_i(\theta_0)\big\}\psi(Z_i, \tau) \Big].$$ Then we have \begin{align*}
\bar{\psi}_n(\tau, \hat \theta) - \mathbb{E}\big\{\psi(Z_i, \tau) \big\} & = \frac{1}{n}\sum_{i=1}^n\Big[\Gamma(\tau, \theta_0)^{\intercal}\bar s_i(\theta_0) + \mathbb{E}\big\{\psi(Z_i, \tau)|Z_{\text{obs,i}}, \theta_0\big\} \Big] + o_p(n^{-1/2}) \\
& = \frac{1}{n}\sum_{i=1}^n\Big[\Gamma(\tau, \theta_0)^{\intercal}\bar s_i(\theta_0) + \psi_i^{*}(\tau, \theta_0) \Big] + o_p(n^{-1/2}) + O_p(M^{-1/2}). \end{align*}
Thus for any $\tau$, as the imputation size $M \rightarrow \infty$ and sample size $n \rightarrow \infty$, \begin{equation}\label{asym_normal}
\sqrt{n}\Big[\bar{\psi}_n(\tau, \hat \theta) - \mathbb{E}\big\{\psi(Z_i, \tau) |\theta_0 \big\}\Big] \xrightarrow{d} \mathcal{N}(0, B(\tau, \theta_0)). \end{equation}
Denote \begin{align*}
\Tilde{\psi}_n(\tau, \theta) = \frac{1}{nM}\sum_{i=1}^n \sum_{m = 1}^M \psi(Z^{*(m)}_{i},\tau) + \frac{1}{n}\sum_{i=1}^n \Gamma(\tau, \theta)^{\intercal}\bar s_i(\theta). \end{align*}
Consider a component-wise Taylor series expansion of $\Tilde{\psi}_n(\hat \tau_{DI,n}, \hat \theta)$ around $\tau_0$. For $l = 1, \cdots, q$, writing $\Tilde{\psi}_{n,l}$ for the $l$th component of $\Tilde{\psi}_{n}$, there exists $\Tilde{\tau}_l^*$ between $\hat \tau_{DI,n}$ and $\tau_{0}$ such that \begin{equation*}
\Tilde{\psi}_{n,l}(\hat \tau_{DI,n}, \hat \theta) = \Tilde{\psi}_{n,l}(\tau_{0}, \hat \theta) + \frac{\partial \Tilde{\psi}_{n,l}(\tau_{0}, \hat \theta)}{\partial \tau^{\intercal}} (\hat \tau_{DI,n} - \tau_{0}) + \frac{1}{2}(\hat \tau_{DI,n} - \tau_{0})^{\intercal}\frac{\partial^2 \Tilde{\psi}_{n,l}(\Tilde{\tau}_l^*, \hat \theta)}{\partial \tau \partial \tau^{\intercal}}(\hat \tau_{DI,n} - \tau_{0}). \end{equation*}
Stacking the above $q$ equations together gives \begin{align*}
\Tilde{\psi}_{n}(\hat \tau_{DI,n}, \hat \theta) = \Tilde{\psi}_{n}(\tau_0, \hat \theta) + \big\{\frac{\partial \Tilde{\psi}_{n}(\tau_0, \hat \theta)}{\partial \tau^{\intercal}} + \frac{1}{2}(\hat \tau_{DI,n} - \tau_0)^{\intercal} \Tilde{Q}^* \big\}(\hat \tau_{DI,n} - \tau_0), \end{align*} where $\Tilde{Q}^*$ is a matrix whose $l$th row equals $\partial^2 \Tilde{\psi}_{n,l}(\Tilde{\tau}_l^*, \hat \theta)/(\partial \tau \partial \tau^{\intercal})$ applied to the corresponding component. By C11, each row is $O_p(1)$. Thus $(\hat \tau_{DI,n} - \tau_0)^{\intercal} \Tilde{Q}^* = o_p(1)$ by Theorem 1.
By the weak law of large numbers and C6, \begin{align*}
-\frac{\partial \Tilde \psi_{n}(\tau_0, \hat \theta)}{\partial \tau} & \xrightarrow{\mathbb{P}} \mathbb{E}\Big[\frac{\partial \mathbb{E}\big\{\psi(Z_i, \tau_0)|Z_{\text{obs},i} \big\}}{\partial \tau} + \frac{\partial \Gamma(\tau_0, \theta_0)}{\partial \tau}\mathbb{E}\big\{ s(Z_i, \theta_0)|Z_{\text{obs},i}\big\} \Big] \\
& = \mathbb{E}\Big[\partial \psi(Z_i, \tau_0)/\partial \tau + \big\{\partial \Gamma(\tau_0, \theta_0)/\partial \tau \big\}\bar s_i(\theta_0) \Big] = A(\tau_0, \theta_0). \end{align*} By C12, $A(\tau_0, \theta_0)$ is nonsingular. Since $\hat \tau_{DI,n}$ solves equation \eqref{fi_ee} and $\hat \theta$ solves equation \eqref{theta_score_eq}, we have $\Tilde{\psi}_{n}(\hat \tau_{DI,n}, \hat \theta) = o_p(n^{-1/2})$. Thus we can re-express the stacked form of the equations as \begin{equation*}
\hat \tau_{DI,n} - \tau_0 = A(\tau_0, \theta_0)^{-1}\Tilde{\psi}_{n}(\tau_0, \hat \theta) + o_p(n^{-1/2}). \end{equation*} Let $\tau = \tau_0$ in \eqref{asym_normal}; by C13, $\sqrt{n}\Tilde{\psi}_{n}(\tau_0, \hat \theta) \xrightarrow{d} \mathcal{N}\big(0, B(\tau_0, \theta_0)\big)$. Then by Slutsky's theorem, we have \begin{equation*}
\sqrt{n}(\hat \tau_{DI,n} - \tau_0) \xrightarrow{d} \mathcal{N}\big(0, A(\tau_0, \theta_0)^{-1}B(\tau_0, \theta_0)\{A(\tau_0, \theta_0)^{-1}\}^{\intercal}\big). \end{equation*} \end{proof}
\subsection{Proof of Theorem \ref{theo:wb}}\label{appen:thm3}
\begin{proof} Denote $w^{(m)}_i(\hat \theta) = M^{-1}$, and define \begin{equation*}
\bar \psi^{*(b)}_n (\tau, \theta) = \frac{1}{n}\sum_{i=1}^n u^{(b)}_i \sum_{m=1}^M w^{(m)}_i(\theta)\psi(Z_{i}^{*(m)}, \tau). \end{equation*}
Given the complete data, $\hat \theta^{(b)} \xrightarrow{\mathbb{P}} \hat \theta$ by the same argument as in Theorem 1. Consider a Taylor series expansion of $n^{-1}\sum_{i = 1}^n u_{i}^{(b)} \bar s_i(\hat \theta^{(b)}) = n^{-1}\sum_{i = 1}^n u_{i}^{(b)}\mathbb{E}\{s(Z_i, \hat \theta^{(b)})| Z_{\text{obs},i}\}$ around $\hat \theta$: there exists $\Tilde{\theta}^{(b)}$ between $\hat \theta^{(b)}$ and $\hat \theta$ such that \begin{align*}
\frac{1}{n}\sum_{i = 1}^n u_{i}^{(b)} \bar s_i(\hat \theta^{(b)}) &= \frac{1}{n}\sum_{i = 1}^n u_{i}^{(b)} \bar s_i(\hat \theta) + \frac{1}{n}\sum_{i=1}^n u_{i}^{(b)} \frac{\partial \bar s_i(\hat \theta)}{\partial \theta^{\intercal}}(\hat \theta^{(b)} - \hat \theta) \\& \quad + \frac{1}{2} (\hat \theta^{(b)} - \hat \theta)^{\intercal} \big\{\frac{1}{n}\sum_{i=1}^n u_{i}^{(b)} \frac{\partial^2 \bar s_i(\Tilde{\theta}^{(b)})}{\partial \theta \partial \theta^{\intercal}}\big\} (\hat \theta^{(b)} - \hat \theta). \end{align*}
Note that by C5, $n^{-1}\sum_{i=1}^n u_{i}^{(b)} \big\{\partial^2 \bar s_i(\Tilde{\theta}^{(b)})/(\partial \theta \partial \theta^{\intercal})\big\} = O_p(1)$; thus $$(\hat \theta^{(b)} - \hat \theta)^{\intercal} \Big[n^{-1}\sum_{i=1}^n u_{i}^{(b)} \big\{\partial^2 \bar s_i(\Tilde{\theta}^{(b)})/(\partial \theta \partial \theta^{\intercal})\big\}\Big] (\hat \theta^{(b)} - \hat \theta) = o_p(n^{-1/2}).$$ Since $\hat \theta^{(b)}$ solves \eqref{bootstrap_score}, we have
\begin{equation}\label{boot_thetab}
\hat \theta^{(b)} - \hat \theta = \big\{\frac{1}{n}\sum_{i=1}^n u_{i}^{(b)} \frac{\partial \bar s_i(\hat \theta)}{\partial \theta^{\intercal}}\big\}^{-1} \big\{\frac{1}{n}\sum_{i = 1}^n u_{i}^{(b)} \bar s_i(\hat \theta) \big\}. \end{equation}
Similarly, consider a Taylor series expansion of $\bar \psi^{*(b)}_n (\tau, \hat \theta^{(b)})$ around $\hat \theta$: there exists $\Tilde{\theta}^{*(b)}$ between $\hat \theta^{(b)}$ and $\hat \theta$ such that \begin{equation*}
\bar \psi^{*(b)}_n (\tau, \hat \theta^{(b)}) = \bar \psi^{*(b)}_n (\tau, \hat \theta) + \frac{\partial \bar \psi^{*(b)}_n (\tau, \hat \theta)}{\partial \theta^{\intercal}} (\hat \theta^{(b)} - \hat \theta) + \frac{1}{2} (\hat \theta^{(b)} - \hat \theta)^{\intercal} \big\{\frac{\partial^2 \psi^{*(b)}_n (\tau, \Tilde{\theta}^{*(b)})}{\partial \theta \partial \theta^{\intercal}}\big\} (\hat \theta^{(b)} - \hat \theta). \end{equation*}
By C5, $\partial^2 \psi^{*(b)}_n (\tau, \Tilde{\theta}^{*(b)})/(\partial \theta \partial \theta^{\intercal}) = O_p(1)$, so the second-order term above is $o_p(n^{-1/2})$. Plugging in \eqref{boot_thetab} and applying the same technique as in the proof of Theorem 2, we can rewrite the above Taylor expansion as \begin{align*}
\bar \psi^{*(b)}_n (\tau, \hat \theta^{(b)}) = \frac{1}{n}\sum_{i = 1}^n u_{i}^{(b)} \big\{\sum_{m=1}^M w^{(m)}_i(\hat \theta^{(b)})\psi(Z_{i}^{*(m)}, \tau) + \Gamma^{(b)}(\tau, \hat \theta)^{\intercal}\bar s_i(\hat \theta) \big\} + o_p(n^{-1/2}), \end{align*} where $\Gamma^{(b)}(\tau, \hat \theta) = \mathbf{I}^{(b)-1}_{\text{obs}}(\hat \theta)\mathbf{I}^{(b)}_{\psi, \text{mis}}(\tau, \hat \theta)$. Here $\mathbf{I}^{(b)}_{\text{obs}}(\hat \theta) = n^{-1}\sum_{i=1}^n u^{(b)}_i \big\{\partial \bar s_i(\hat\theta)/\partial \theta^{\intercal}\big\}$ and $$\mathbf{I}^{(b)}_{\psi, \text{mis}}(\tau, \hat \theta) = n^{-1}\sum_{i=1}^n u^{(b)}_i \sum_{m=1}^M w^{(m)}_i(\hat \theta)\big\{s(Z_{i}^{*(m)}, \hat \theta) - \bar s^*_i(\hat \theta)\big\}\psi(Z_{i}^{*(m)}, \tau).$$
Given both the observed and the imputed data, by the weak law of large numbers, $\mathbf{I}^{(b)}_{\text{obs}}(\hat \theta) \xrightarrow{\mathbb{P}} n^{-1}\sum_{i=1}^n \bar s^{*}_i(\hat \theta) \bar s^{*}_i(\hat \theta)^{\intercal}$ and $$\mathbf{I}^{(b)}_{\psi, \text{mis}}(\tau, \hat \theta) \xrightarrow{\mathbb{P}} \Big[n^{-1}\sum_{i=1}^n\sum_{m=1}^M w^{(m)}_{i}(\hat \theta)\big\{s(Z^{*(m)}_{i}, \hat\theta) - \bar s^{*}_i(\hat \theta)\big\}\psi(Z^{*(m)}_{i}, \tau)\Big],$$ so $\Gamma^{(b)}(\tau, \hat \theta) - \hat \Gamma(\tau, \hat \theta) = o_p(1)$. Also, by C6, $n^{-1}\sum_{i=1}^n u^{(b)}_i \bar s_i(\hat \theta) = O_p(n^{-1/2})$; then for any $\tau$,
\begin{equation*}
\bar \psi^{*(b)}_n (\tau, \hat \theta^{(b)}) = \frac{1}{n}\sum_{i = 1}^n u_{i}^{(b)} \big\{\sum_{m=1}^M w^{(m)}_i(\hat \theta)\psi(Z_{i}^{*(m)}, \tau) + \hat \Gamma(\tau, \hat \theta)^{\intercal}\bar s_i(\hat \theta) \big\} + o_p(n^{-1/2}). \end{equation*}
Then, following the proof of asymptotic normality for M-estimation in Theorem 2, the asymptotic distribution of $\hat \tau^{(b)}_{DI,n}$ given the complete data is \begin{equation*}
\sqrt n (\hat \tau^{(b)}_{DI,n} - \hat \tau_{DI,n}) \xrightarrow{d} \mathcal{N}(0, A^{(b)}(\hat \tau_{DI,n}, \hat \theta)^{-1}B^{(b)}(\hat \tau_{DI,n}, \hat \theta)\{A^{(b)}(\hat \tau_{DI,n}, \hat \theta)^{-1}\}^{\intercal}), \end{equation*} where \begin{align*}
A^{(b)}(\hat \tau, \hat \theta) & = \lim_{n \rightarrow \infty} \frac{1}{n}\sum_{i=1}^n \mathbb{E}\Big[\frac{\partial \big\{u_{i}^{(b)}\sum_{m=1}^M w_i^{(m)}(\hat \theta)\psi (Z^{*(m)}_{i}, \hat \tau) + u_{i}^{(b)}\hat \Gamma(\hat \tau, \hat \theta)^{\intercal}\bar s_i(\hat \theta) \big\}}{\partial \tau} \big| Z \Big] \\
& = \lim_{n \rightarrow \infty} \frac{1}{n}\sum_{i=1}^n \sum_{m=1}^M w_i^{(m)}(\hat \theta) \frac{\partial \psi (Z^{*(m)}_{i}, \hat \tau)} {\partial \tau} \\
& = \lim_{n \rightarrow \infty} A_n(Z,\hat \tau, \hat \theta). \end{align*} Note that the second equality holds since $\sum_{i=1}^n \bar s_i(\hat \theta) = 0$ by \eqref{theta_score_eq}, and \begin{align*}
B^{(b)}(\hat \tau, \hat \theta) & = \lim_{n \rightarrow \infty} \frac{1}{n}\sum_{i=1}^n \mathbb{V}\Big[u_{i}^{(b)}\big\{\sum_{m=1}^M w_i^{(m)}(\hat \theta)\psi (Z^{*(m)}_{i}, \hat \tau) + \hat \Gamma(\hat \tau, \hat \theta)^{\intercal}\bar s_i(\hat \theta) \big\} \big| Z \Big] \\
& = \lim_{n \rightarrow \infty} \frac{1}{n}\sum_{i=1}^n \big\{\sum_{m=1}^M w_i^{(m)}(\hat \theta)\psi (Z^{*(m)}_{i}, \hat \tau)\\ &\quad + \hat \Gamma(\hat \tau, \hat \theta)^{\intercal}\bar s_i(\hat \theta) \big\} \big\{\sum_{m=1}^M w_i^{(m)}(\hat \theta)\psi (Z^{*(m)}_{i}, \hat \tau) + \hat \Gamma(\hat \tau, \hat \theta)^{\intercal}\bar s_i(\hat \theta) \big\}^{\intercal} \\
& = \lim_{n \rightarrow \infty}B_n(Z,\hat \tau, \hat \theta). \end{align*}
Note that by construction, $\hat{\mathbb{V}}_2(\hat \tau_{DI,n})$ is a consistent estimator of $$A^{(b)}(\hat \tau_{DI,n}, \hat \theta)^{-1}\big\{\frac{1}{n} B^{(b)}(\hat \tau_{DI,n}, \hat \theta)\big\}\{A^{(b)}(\hat \tau_{DI,n}, \hat \theta)^{-1}\}^{\intercal},$$ $A^{(b)}(\hat \tau, \hat \theta)$ and $B^{(b)}(\hat \tau, \hat \theta)$ are equivalent to $A_n(Z,\hat \tau, \hat \theta)$ and $B_n(Z,\hat \tau, \hat \theta)$ when the sample size $n$ is large, and $\allowbreak A_n(Z,\hat \tau, \hat \theta), B_n(Z,\hat \tau, \hat \theta)$ are consistent estimators of $A(\tau_0, \theta_0), B(\tau_0, \theta_0)$. Therefore, $\hat{\mathbb{V}}_2(\hat \tau_{DI,n})$ is a consistent estimator of $\mathbb{V}(\hat \tau_{DI,n})$.
\end{proof}
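The weighted-bootstrap scheme analyzed above can be sketched as follows: re-solve the weighted estimating equation with i.i.d. positive multiplier weights $u_i^{(b)}$ of unit mean and variance, and take the empirical variance of the replicates. The Exp(1) weights and the toy estimating equation (whose weighted solution is a weighted mean) are illustrative assumptions, not the paper's DI solver.

```python
import numpy as np

def weighted_bootstrap_var(solve_tau, n, B=400, seed=1):
    """Multiplier-bootstrap variance: re-solve the weighted estimating
    equation with i.i.d. Exp(1) weights u_i^{(b)} for each replicate b,
    then take the empirical variance across replicates."""
    rng = np.random.default_rng(seed)
    reps = []
    for _ in range(B):
        u = rng.exponential(1.0, size=n)   # positive weights, E[u] = Var(u) = 1
        reps.append(solve_tau(u))
    return np.var(reps, ddof=1)

# toy estimating equation: sum_i u_i (z_i - tau) = 0, solved by a weighted mean
rng = np.random.default_rng(0)
z = rng.normal(0.0, 1.0, size=500)
solve = lambda u: np.average(z, weights=u)
v = weighted_bootstrap_var(solve, n=500)
# v approximates the sampling variance of the mean, Var(z)/n
```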
\section{Additional simulation results}\label{appen:sim_result}
We generate the longitudinal responses in a sequential manner. The baseline responses are generated by $Y_{i1}\mid(G_{i}=j)=\alpha_{j0}^{(1)}+X_{i}^{\intercal}\alpha_{j1}^{(1)}+\varepsilon_{ij1},$ where $\alpha_{j}^{(1)}=(\alpha_{j0}^{(1)},\alpha_{j1}^{(1)\intercal})^{\intercal}$ is set to $\alpha_{0}^{(1)}=\alpha_{1}^{(1)}=(0.5,1,-3,2)^{\intercal}$ for both groups to mimic a randomized clinical trial. At the $k$th visit for $k=2,\cdots,5$, the sequential responses are generated by $Y_{ik}\mid(G_{i}=j)=\alpha_{j0}^{(k)}+X_{i}^{\intercal}\alpha_{j1}^{(k)}+\alpha_{j2}^{(k)}Y_{i1}+\cdots+\alpha_{jk}^{(k)}Y_{ik-1}+\varepsilon_{ijk}.$ For the control group, set $\alpha_{10}^{(k)}\sim\mathcal{N}(\mu_{1k},\eta_{k}^{2})$, where $\mu_{1}:=(\mu_{12},\cdots,\mu_{15})^{\intercal}=(1,1.5,2,3)^{\intercal}$ and $\eta:=(\eta_{2},\cdots,\eta_{5})^{\intercal}=(0.5,1,1,1)^{\intercal}$; $\alpha_{1l}^{(k)}\sim\mathcal{N}(0,0.5)$ for $l=1,\cdots,k-1$; and $\alpha_{1k}^{(k)}\sim\text{Unif}(0,1)$, which induces a positive correlation with the adjacent response. For the treatment group, generate the regression parameters as $\alpha_{20}^{(k)}\sim\mathcal{N}(\mu_{2k},\eta_{k}^{2})$, where $\mu_{2}:=(\mu_{22},\cdots,\mu_{25})^{\intercal}=(1.5,3,4.5,6)^{\intercal}$ yields different longitudinal responses for the two groups; $\alpha_{2l}^{(k)}\sim\mathcal{N}(0,0.5)$ for $l=1,\cdots,k-1$; and $\alpha_{2k}^{(k)}\sim\text{Unif}(0,1)$. The error terms are independently generated by $\varepsilon_{ijl}\sim\mathcal{N}(0,\sigma_{l}^{2})$ for $l=1,\cdots,T$, where $\sigma:=(\sigma_{1},\cdots,\sigma_{5})^{\intercal}=(2.0,1.8,2.0,2.1,2.2)^{\intercal}$ imitates an increase in variation of the longitudinal outcomes in each group.
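A minimal sketch of this sequential generation scheme is given below. The sample size is a toy value, and the intercepts, covariate effects, and lag coefficients are independently redrawn stand-ins for the $\alpha$'s described above, not the exact values used in the study; only the error standard deviations are taken from the text.

```python
import numpy as np

rng = np.random.default_rng(2023)
n, T, p = 6, 5, 3                               # subjects, visits, covariates (toy sizes)
X = rng.normal(size=(n, p))
sigma = np.array([2.0, 1.8, 2.0, 2.1, 2.2])     # error SDs from the text

alpha0 = rng.normal(1.0, 0.5, size=T)           # stand-in intercepts
alpha1 = rng.normal(0.0, 0.5, size=(T, p))      # stand-in covariate effects

Y = np.empty((n, T))
# baseline response: intercept + covariates + noise
Y[:, 0] = alpha0[0] + X @ alpha1[0] + rng.normal(0.0, sigma[0], n)
for k in range(1, T):
    # effects of all earlier responses; Unif(0, 1) for the adjacent one
    lag = np.append(rng.normal(0.0, 0.5, k - 1), rng.uniform(0.0, 1.0))
    Y[:, k] = (alpha0[k] + X @ alpha1[k] + Y[:, :k] @ lag
               + rng.normal(0.0, sigma[k], n))
```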
Note that generating the longitudinal responses using the sequential regression approach is equivalent to generating them from the multivariate normal distribution in \eqref{eq:general_model}. Transforming the above sequential generating process, the coefficients $\beta_{j1},\cdots,\beta_{j5}$ used in the simulation study for the control and treatment groups are \begin{align*} \begin{pmatrix}\beta_{11}^{\intercal}\\ \beta_{12}^{\intercal}\\ \beta_{13}^{\intercal}\\ \beta_{14}^{\intercal}\\ \beta_{15}^{\intercal} \end{pmatrix}=\begin{pmatrix}0.50 & 1.00 & -3.00 & 2.00\\ 0.73 & 0.80 & -1.46 & 0.16\\ 1.55 & -0.07 & 1.31 & -0.09\\ 2.19 & -0.08 & -1.35 & 0.95\\ 4.29 & 0.62 & -1.76 & 1.30 \end{pmatrix};\begin{pmatrix}\beta_{21}^{\intercal}\\ \beta_{22}^{\intercal}\\ \beta_{23}^{\intercal}\\ \beta_{24}^{\intercal}\\ \beta_{25}^{\intercal} \end{pmatrix}=\begin{pmatrix}0.50 & 1.00 & -3.00 & 2.00\\ 2.16 & 1.08 & -2.24 & 1.23\\ 7.31 & 0.39 & -3.29 & 0.88\\ 6.45 & 1.05 & -0.22 & 0.18\\ 5.82 & 0.09 & 0.83 & -0.47 \end{pmatrix}. \end{align*}
The group specific covariance matrices $\Sigma^{(j)}$ for $j=1,2$ are \begin{align*} \Sigma^{(1)}=\begin{pmatrix}4.00 & 2.66 & -0.63 & 1.58 & 1.93\\ 2.66 & 5.01 & 0.34 & 1.10 & 1.81\\ -0.63 & 0.34 & 4.27 & 0.98 & 0.42\\ 1.58 & 1.10 & 0.98 & 5.41 & 3.09\\ 1.93 & 1.81 & 0.42 & 3.09 & 6.99 \end{pmatrix};\Sigma^{(2)}=\begin{pmatrix}4.00 & 2.91 & 2.28 & 0.12 & 0.21\\ 2.91 & 5.36 & 4.74 & 1.99 & 0.73\\ 2.28 & 4.74 & 8.23 & 2.63 & -0.22\\ 0.12 & 1.99 & 2.63 & 5.67 & 0.37\\ 0.21 & 0.73 & -0.22 & 0.37 & 5.16 \end{pmatrix}. \end{align*}
Tables \ref{rtb_mean_table} and \ref{washout_mean_table} show the simulation results of the ATE estimator under RTB and washout imputation, respectively. Similar to Table \ref{rbi_mean_table}, point estimates from MI and DI move closer to the true value with smaller Monte Carlo variances as the sample size increases, which confirms consistency. MI and DI estimators have similar point estimates and Monte Carlo variances under each imputation setting. Again, Rubin's method yields conservative variance estimates. The overestimation issue, however, is relatively moderate compared to the J2R setting, although the variance estimates are still relatively large compared to the true variances. Variance estimation using the weighted bootstrap in DI performs better, with smaller relative bias and a more precise coverage rate.
\begin{table}[!htbp] \centering \caption{Simulation results under RTB of the ATE estimator. Here the true value $\tau=1.5896$.} \resizebox{\textwidth}{!}{ \begin{tabular}{ccccccccccccccccccc} \hline
& & \multicolumn{2}{c}{Point est} & & \multicolumn{2}{c}{True var} & & \multicolumn{2}{c}{Var est} & & \multicolumn{2}{c}{Relative bias} & & \multicolumn{2}{c}{Coverage rate} & & \multicolumn{2}{c}{Mean CI length}\tabularnewline
& & \multicolumn{2}{c}{($\times10^{-2}$)} & & \multicolumn{2}{c}{($\times10^{-2}$)} & & \multicolumn{2}{c}{($\times10^{-2}$)} & & \multicolumn{2}{c}{($\%$)} & & \multicolumn{2}{c}{($\%$)} & & \multicolumn{2}{c}{($\times10^{-2}$)}\tabularnewline \cline{3-4} \cline{4-4} \cline{6-7} \cline{7-7} \cline{9-10} \cline{10-10} \cline{12-13} \cline{13-13} \cline{15-16} \cline{16-16} \cline{18-19} \cline{19-19} N & M & MI & DI & & MI & DI & & MI & DI & & MI & DI & & MI & DI & & MI & DI\tabularnewline \hline
& 5 & 156.32 & 155.70 & & 22.48 & 22.36 & & 25.88 & 21.82 & & 15.14 & -2.41 & & 96.20 & 94.60 & & 199.12 & 182.34\tabularnewline 100 & 10 & 156.13 & 155.70 & & 22.20 & 22.30 & & 25.64 & 21.77 & & 15.47 & -2.41 & & 96.30 & 94.60 & & 198.22 & 182.12\tabularnewline
& 100 & 156.08 & 156.02 & & 22.17 & 22.09 & & 25.55 & 21.72 & & 15.26 & -1.68 & & 96.20 & 94.90 & & 197.91 & 181.92\tabularnewline \hline
& 5 & 159.29 & 159.19 & & 4.38 & 4.45 & & 5.04 & 4.51 & & 15.25 & 1.22 & & 97.10 & 94.70 & & 87.99 & 82.97\tabularnewline 500 & 10 & 159.32 & 159.32 & & 4.36 & 4.41 & & 5.04 & 4.50 & & 15.63 & 2.09 & & 97.30 & 95.10 & & 87.96 & 82.89\tabularnewline
& 100 & 159.28 & 159.35 & & 4.36 & 4.35 & & 5.00 & 4.49 & & 14.65 & 3.14 & & 96.90 & 95.20 & & 87.60 & 82.82\tabularnewline \hline
& 5 & 159.19 & 159.11 & & 2.13 & 2.15 & & 2.52 & 2.26 & & 18.71 & 5.05 & & 96.10 & 95.10 & & 62.26 & 58.81\tabularnewline 1000 & 10 & 159.10 & 159.11 & & 2.15 & 2.12 & & 2.51 & 2.26 & & 16.88 & 6.62 & & 96.10 & 95.30 & & 62.07 & 58.76\tabularnewline
& 100 & 159.18 & 159.14 & & 2.12 & 2.10 & & 2.50 & 2.25 & & 17.51 & 7.16 & & 96.50 & 95.00 & & 61.91 & 58.72\tabularnewline \hline \end{tabular}} \label{rtb_mean_table} \end{table}
\begin{table}[!htbp] \centering \caption{Simulation results under washout imputation of the ATE estimator. Here the true value $\tau=0.7858$.} \resizebox{\textwidth}{!}{ \begin{tabular}{ccccccccccccccccccc} \hline
& & \multicolumn{2}{c}{Point est} & & \multicolumn{2}{c}{True var} & & \multicolumn{2}{c}{Var est} & & \multicolumn{2}{c}{Relative bias} & & \multicolumn{2}{c}{Coverage rate} & & \multicolumn{2}{c}{Mean CI length}\tabularnewline
& & \multicolumn{2}{c}{($\times10^{-2}$)} & & \multicolumn{2}{c}{($\times10^{-2}$)} & & \multicolumn{2}{c}{($\times10^{-2}$)} & & \multicolumn{2}{c}{($\%$)} & & \multicolumn{2}{c}{($\%$)} & & \multicolumn{2}{c}{($\times10^{-2}$)}\tabularnewline \cline{3-4} \cline{4-4} \cline{6-7} \cline{7-7} \cline{9-10} \cline{10-10} \cline{12-13} \cline{13-13} \cline{15-16} \cline{16-16} \cline{18-19} \cline{19-19} N & M & MI & DI & & MI & DI & & MI & DI & & MI & DI & & MI & DI & & MI & DI\tabularnewline \hline
& 5 & 75.35 & 74.55 & & 22.07 & 22.10 & & 24.22 & 21.02 & & 9.74 & -4.88 & & 96.00 & 94.40 & & 192.54 & 178.92\tabularnewline 100 & 10 & 75.05 & 74.63 & & 21.75 & 21.96 & & 23.96 & 21.18 & & 10.17 & -3.56 & & 95.80 & 94.10 & & 191.59 & 179.61\tabularnewline
& 100 & 75.01 & 74.94 & & 21.70 & 21.62 & & 23.84 & 21.33 & & 9.87 & -1.38 & & 96.30 & 94.90 & & 191.14 & 180.23\tabularnewline \hline
& 5 & 78.79 & 78.69 & & 4.38 & 4.47 & & 4.71 & 4.35 & & 7.59 & -2.71 & & 96.10 & 94.80 & & 85.00 & 81.54\tabularnewline 500 & 10 & 78.82 & 78.83 & & 4.37 & 4.39 & & 4.70 & 4.38 & & 7.61 & -0.43 & & 96.20 & 95.30 & & 84.94 & 81.76\tabularnewline
& 100 & 78.79 & 78.87 & & 4.34 & 4.35 & & 4.65 & 4.40 & & 7.07 & 1.28 & & 96.60 & 95.60 & & 84.49 & 82.00\tabularnewline \hline
& 5 & 79.01 & 78.92 & & 2.15 & 2.16 & & 2.36 & 2.19 & & 9.41 & 1.30 & & 95.90 & 94.40 & & 60.15 & 57.87\tabularnewline 1000 & 10 & 78.92 & 78.92 & & 2.16 & 2.13 & & 2.34 & 2.20 & & 8.40 & 3.53 & & 95.50 & 94.40 & & 59.91 & 58.05\tabularnewline
& 100 & 79.00 & 78.95 & & 2.13 & 2.12 & & 2.32 & 2.22 & & 8.85 & 4.71 & & 95.60 & 94.80 & & 59.73 & 58.21\tabularnewline \hline \end{tabular}} \label{washout_mean_table} \end{table}
Tables \ref{rtb_prop_table}--\ref{washout_prop_table} present the results of estimating the risk difference under RTB, J2R, and washout imputation, respectively. The performance is similar to the previous cases. Again, as in the estimation of the regression-type ATE, MI with Rubin's variance estimator overestimates the true variance, with much larger variance estimates under each imputation assumption. In most cases under the RTB and washout imputation assumptions, it is moderately conservative compared to J2R. DI with variance estimates obtained from the weighted bootstrap outperforms MI with Rubin's estimates in proximity to the true variances, with much smaller relative bias and more precise coverage probabilities in most cases under each imputation assumption. The variance estimates are close to the true variances, with relative bias controlled under $5\%$, and show no tendency of over- or underestimation. The coverage probabilities are around $95\%$, within a tolerable range.
\begin{table}[!htbp] \centering \caption{Simulation results under RTB of the risk difference estimator. Here the true value $\tau=0.2192$.} \resizebox{\textwidth}{!}{ \begin{tabular}{ccccccccccccccccccc} \hline
& & \multicolumn{2}{c}{Point est} & & \multicolumn{2}{c}{True var} & & \multicolumn{2}{c}{Var est} & & \multicolumn{2}{c}{Relative bias} & & \multicolumn{2}{c}{Coverage rate} & & \multicolumn{2}{c}{Mean CI length}\tabularnewline
& & \multicolumn{2}{c}{($\times10^{-2}$)} & & \multicolumn{2}{c}{($\times10^{-4}$)} & & \multicolumn{2}{c}{($\times10^{-4}$)} & & \multicolumn{2}{c}{($\%$)} & & \multicolumn{2}{c}{($\%$)} & & \multicolumn{2}{c}{($\times10^{-2}$)}\tabularnewline \cline{3-4} \cline{4-4} \cline{6-7} \cline{7-7} \cline{9-10} \cline{10-10} \cline{12-13} \cline{13-13} \cline{15-16} \cline{16-16} \cline{18-19} \cline{19-19} N & M & MI & DI & & MI & DI & & MI & DI & & MI & DI & & MI & DI & & MI & DI\tabularnewline \hline
& 5 & 21.71 & 21.71 & & 42.75 & 42.48 & & 51.37 & 44.37 & & 20.14 & 4.45 & & 97.20 & 95.30 & & 28.08 & 26.04\tabularnewline 100 & 10 & 21.70 & 21.67 & & 41.96 & 42.21 & & 50.99 & 44.09 & & 21.53 & 4.46 & & 97.40 & 96.00 & & 27.98 & 25.96\tabularnewline
& 100 & 21.72 & 21.72 & & 42.02 & 42.00 & & 50.72 & 43.86 & & 20.71 & 4.44 & & 97.30 & 96.00 & & 27.91 & 25.89\tabularnewline \hline
& 5 & 21.93 & 21.89 & & 9.48 & 9.64 & & 10.32 & 9.10 & & 8.91 & -5.62 & & 96.00 & 93.70 & & 12.59 & 11.80\tabularnewline 500 & 10 & 21.91 & 21.91 & & 9.35 & 9.45 & & 10.28 & 9.05 & & 9.93 & -4.28 & & 95.70 & 94.40 & & 12.56 & 11.76\tabularnewline
& 100 & 21.91 & 21.91 & & 9.35 & 9.32 & & 10.21 & 9.01 & & 9.26 & -3.39 & & 95.70 & 94.70 & & 12.53 & 11.73\tabularnewline \hline
& 5 & 21.93 & 21.93 & & 4.44 & 4.40 & & 5.17 & 4.55 & & 16.51 & 3.40 & & 96.70 & 95.90 & & 8.91 & 8.35\tabularnewline 1000 & 10 & 21.93 & 21.93 & & 4.35 & 4.32 & & 5.15 & 4.53 & & 18.35 & 4.74 & & 97.10 & 95.50 & & 8.89 & 8.32\tabularnewline
& 100 & 21.94 & 21.93 & & 4.36 & 4.33 & & 5.11 & 4.51 & & 17.34 & 4.14 & & 97.60 & 95.70 & & 8.86 & 8.31\tabularnewline \hline \end{tabular}} \label{rtb_prop_table} \end{table}
\begin{table}[!htbp] \centering \caption{Simulation results under J2R of the risk difference estimator. Here the true value $\tau=0.2197$.} \resizebox{\textwidth}{!}{ \begin{tabular}{ccccccccccccccccccc} \hline
& & \multicolumn{2}{c}{Point est} & & \multicolumn{2}{c}{True var} & & \multicolumn{2}{c}{Var est} & & \multicolumn{2}{c}{Relative bias} & & \multicolumn{2}{c}{Coverage rate} & & \multicolumn{2}{c}{Mean CI length}\tabularnewline
& & \multicolumn{2}{c}{($\times10^{-2}$)} & & \multicolumn{2}{c}{($\times10^{-4}$)} & & \multicolumn{2}{c}{($\times10^{-4}$)} & & \multicolumn{2}{c}{($\%$)} & & \multicolumn{2}{c}{($\%$)} & & \multicolumn{2}{c}{($\times10^{-2}$)}\tabularnewline \cline{3-4} \cline{4-4} \cline{6-7} \cline{7-7} \cline{9-10} \cline{10-10} \cline{12-13} \cline{13-13} \cline{15-16} \cline{16-16} \cline{18-19} \cline{19-19} N & M & MI & DI & & MI & DI & & MI & DI & & MI & DI & & MI & DI & & MI & DI\tabularnewline \hline
& 5 & 21.65 & 21.64 & & 39.02 & 38.53 & & 53.60 & 39.94 & & 37.37 & 3.67 & & 98.00 & 95.30 & & 28.66 & 24.71\tabularnewline 100 & 10 & 21.69 & 21.61 & & 37.07 & 37.75 & & 52.95 & 39.31 & & 42.84 & 4.15 & & 98.30 & 95.30 & & 28.50 & 24.51\tabularnewline
& 100 & 21.64 & 21.64 & & 37.24 & 37.08 & & 52.18 & 38.67 & & 40.14 & 4.28 & & 98.10 & 95.10 & & 28.31 & 24.31\tabularnewline \hline
& 5 & 21.86 & 21.81 & & 8.34 & 8.32 & & 10.79 & 8.15 & & 29.35 & -2.04 & & 96.60 & 94.80 & & 12.86 & 11.16\tabularnewline 500 & 10 & 21.85 & 21.85 & & 8.23 & 8.32 & & 10.61 & 8.01 & & 28.90 & -3.72 & & 97.30 & 94.70 & & 12.76 & 11.06\tabularnewline
& 100 & 21.84 & 21.85 & & 8.16 & 8.15 & & 10.49 & 7.89 & & 28.54 & -3.23 & & 97.30 & 94.50 & & 12.70 & 10.98\tabularnewline \hline
& 5 & 21.98 & 21.95 & & 4.22 & 4.20 & & 5.38 & 4.07 & & 27.35 & -2.94 & & 97.30 & 94.60 & & 9.08 & 7.89\tabularnewline 1000 & 10 & 21.97 & 21.96 & & 4.15 & 4.09 & & 5.30 & 4.00 & & 27.67 & -2.18 & & 97.80 & 94.80 & & 9.02 & 7.82\tabularnewline
& 100 & 21.96 & 21.97 & & 4.06 & 4.04 & & 5.25 & 3.94 & & 29.29 & -2.57 & & 97.70 & 94.80 & & 8.98 & 7.76\tabularnewline \hline \end{tabular}} \label{rbi_prop_table} \end{table}
\begin{table}[!htbp] \centering \caption{Simulation results under washout of the risk difference estimator. Here the true value $\tau=0.1478$.} \resizebox{\textwidth}{!}{ \begin{tabular}{ccccccccccccccccccc} \hline
& & \multicolumn{2}{c}{Point est} & & \multicolumn{2}{c}{True var} & & \multicolumn{2}{c}{Var est} & & \multicolumn{2}{c}{Relative bias} & & \multicolumn{2}{c}{Coverage rate} & & \multicolumn{2}{c}{Mean CI length}\tabularnewline
& & \multicolumn{2}{c}{($\times10^{-2}$)} & & \multicolumn{2}{c}{($\times10^{-4}$)} & & \multicolumn{2}{c}{($\times10^{-4}$)} & & \multicolumn{2}{c}{($\%$)} & & \multicolumn{2}{c}{($\%$)} & & \multicolumn{2}{c}{($\times10^{-2}$)}\tabularnewline \cline{3-4} \cline{4-4} \cline{6-7} \cline{7-7} \cline{9-10} \cline{10-10} \cline{12-13} \cline{13-13} \cline{15-16} \cline{16-16} \cline{18-19} \cline{19-19} N & M & MI & DI & & MI & DI & & MI & DI & & MI & DI & & MI & DI & & MI & DI\tabularnewline \hline
& 5 & 14.58 & 14.56 & & 44.97 & 44.85 & & 54.15 & 46.13 & & 20.41 & 2.87 & & 97.30 & 94.80 & & 28.82 & 26.56\tabularnewline 100 & 10 & 14.56 & 14.51 & & 44.06 & 44.70 & & 53.48 & 46.03 & & 21.37 & 2.99 & & 96.80 & 95.00 & & 28.66 & 26.52\tabularnewline
& 100 & 14.57 & 14.57 & & 44.16 & 44.05 & & 53.08 & 45.93 & & 20.21 & 4.27 & & 96.60 & 95.10 & & 28.56 & 26.50\tabularnewline \hline
& 5 & 14.77 & 14.78 & & 9.87 & 10.06 & & 10.87 & 9.50 & & 10.14 & -5.64 & & 95.80 & 94.00 & & 12.91 & 12.05\tabularnewline 500 & 10 & 14.76 & 14.78 & & 9.74 & 9.88 & & 10.80 & 9.46 & & 10.92 & -4.21 & & 95.30 & 94.30 & & 12.88 & 12.03\tabularnewline
& 100 & 14.77 & 14.78 & & 9.75 & 9.71 & & 10.70 & 9.45 & & 9.79 & -2.63 & & 95.70 & 94.50 & & 12.82 & 12.02\tabularnewline \hline
& 5 & 14.80 & 14.81 & & 4.82 & 4.78 & & 5.43 & 4.76 & & 12.53 & -0.36 & & 96.50 & 94.20 & & 9.13 & 8.54\tabularnewline 1000 & 10 & 14.80 & 14.80 & & 4.77 & 4.71 & & 5.41 & 4.75 & & 13.41 & 1.01 & & 96.30 & 94.40 & & 9.11 & 8.53\tabularnewline
& 100 & 14.81 & 14.80 & & 4.73 & 4.71 & & 5.35 & 4.74 & & 13.24 & 0.79 & & 96.40 & 94.70 & & 9.07 & 8.52\tabularnewline \hline \end{tabular}} \label{washout_prop_table} \end{table}
Tables \ref{rtb_quantile_table}--\ref{washout_quantile_table} show the results of estimating the QTE under RTB, J2R, and washout imputation, respectively. MI and DI estimators have similar point estimates and Monte Carlo variances under each imputation setting. In terms of variance estimation, unlike the ATE and risk difference cases, when the sample size is relatively small, the variance estimates of the QTE from both methods overestimate the true variance with large relative biases. Variance estimates using Rubin's method for the MI estimator are much larger than the true variances, resulting in large relative bias and very conservative coverage rates. Under RTB and washout imputation, the overestimation issue is less severe than under J2R as the sample size increases. However, variance estimates from DI overestimate the true variance when the sample size is $N=100$. With a small sample size, the relative bias of the DI estimator appears unsatisfactory, although the coverage rate does not show much overestimation; this may be due to the instability of the point estimates, since the quantile estimator may be skewed. Variance estimates move closer to the true values with small relative bias as the sample size grows. The majority of coverage rates are close to the nominal level, except in Table \ref{rbi_quantile_table} when $N=500$ and $M=5$, which may be due to Monte Carlo error. Based on the simulation results, a large sample size is recommended when estimating the QTE.
\begin{table}[!htbp] \centering \caption{Simulation results under RTB of the QTE estimator. Here the true value of $\tau=1.8120$.} \resizebox{\textwidth}{!}{ \begin{tabular}{ccccccccccccccccccc} \hline
& & \multicolumn{2}{c}{Point est} & & \multicolumn{2}{c}{True var} & & \multicolumn{2}{c}{Var est} & & \multicolumn{2}{c}{Relative bias} & & \multicolumn{2}{c}{Coverage rate} & & \multicolumn{2}{c}{Mean CI length}\tabularnewline
& & \multicolumn{2}{c}{($\times10^{-2}$)} & & \multicolumn{2}{c}{($\times10^{-2}$)} & & \multicolumn{2}{c}{($\times10^{-2}$)} & & \multicolumn{2}{c}{($\%$)} & & \multicolumn{2}{c}{($\%$)} & & \multicolumn{2}{c}{($\times10^{-2}$)}\tabularnewline \cline{3-4} \cline{4-4} \cline{6-7} \cline{7-7} \cline{9-10} \cline{10-10} \cline{12-13} \cline{13-13} \cline{15-16} \cline{16-16} \cline{18-19} \cline{19-19} N & M & MI & DI & & MI & DI & & MI & DI & & MI & DI & & MI & DI & & MI & DI\tabularnewline \hline
& 5 & 180.30 & 180.75 & & 30.58 & 31.11 & & 42.45 & 35.38 & & 38.83 & 13.73 & & 97.60 & 94.40 & & 254.19 & 229.74\tabularnewline 100 & 10 & 180.53 & 180.32 & & 30.04 & 30.86 & & 42.26 & 35.42 & & 40.70 & 14.79 & & 97.70 & 95.10 & & 253.71 & 229.83\tabularnewline
& 100 & 180.48 & 180.53 & & 29.84 & 30.80 & & 42.04 & 35.39 & & 40.85 & 14.90 & & 97.80 & 94.90 & & 253.07 & 229.76\tabularnewline \hline
& 5 & 181.46 & 181.14 & & 6.69 & 6.87 & & 8.05 & 6.90 & & 20.37 & 0.43 & & 96.30 & 94.70 & & 111.07 & 102.12\tabularnewline 500 & 10 & 181.27 & 181.37 & & 6.58 & 6.67 & & 7.99 & 6.87 & & 21.53 & 3.10 & & 96.20 & 94.30 & & 110.68 & 101.94\tabularnewline
& 100 & 181.31 & 181.30 & & 6.56 & 6.64 & & 7.95 & 6.84 & & 21.22 & 2.98 & & 96.30 & 94.50 & & 110.40 & 101.71\tabularnewline \hline
& 5 & 181.30 & 181.26 & & 3.29 & 3.37 & & 3.93 & 3.40 & & 19.36 & 0.87 & & 96.80 & 94.20 & & 77.63 & 71.82\tabularnewline 1000 & 10 & 181.24 & 181.29 & & 3.29 & 3.34 & & 3.92 & 3.38 & & 19.11 & 1.32 & & 97.10 & 94.00 & & 77.57 & 71.61\tabularnewline
& 100 & 181.32 & 181.27 & & 3.28 & 3.30 & & 3.90 & 3.37 & & 18.88 & 2.20 & & 96.90 & 94.20 & & 77.33 & 71.50\tabularnewline \hline \end{tabular}} \label{rtb_quantile_table} \end{table}
\begin{table}[!htbp] \centering \caption{Simulation results under J2R of the QTE estimator. Here the true value $\tau=1.5570$.} \resizebox{\textwidth}{!}{ \begin{tabular}{ccccccccccccccccccc} \hline
& & \multicolumn{2}{c}{Point est} & & \multicolumn{2}{c}{True var} & & \multicolumn{2}{c}{Var est} & & \multicolumn{2}{c}{Relative bias} & & \multicolumn{2}{c}{Coverage rate} & & \multicolumn{2}{c}{Mean CI length}\tabularnewline
& & \multicolumn{2}{c}{($\times10^{-2}$)} & & \multicolumn{2}{c}{($\times10^{-2}$)} & & \multicolumn{2}{c}{($\times10^{-2}$)} & & \multicolumn{2}{c}{($\%$)} & & \multicolumn{2}{c}{($\%$)} & & \multicolumn{2}{c}{($\times10^{-2}$)}\tabularnewline \cline{3-4} \cline{4-4} \cline{6-7} \cline{7-7} \cline{9-10} \cline{10-10} \cline{12-13} \cline{13-13} \cline{15-16} \cline{16-16} \cline{18-19} \cline{19-19} N & M & MI & DI & & MI & DI & & MI & DI & & MI & DI & & MI & DI & & MI & DI\tabularnewline \hline
& 5 & 153.43 & 153.00 & & 24.95 & 25.67 & & 39.31 & 29.32 & & 57.58 & 14.21 & & 98.60 & 95.10 & & 244.22 & 208.96\tabularnewline 100 & 10 & 153.47 & 152.83 & & 24.09 & 25.13 & & 38.57 & 29.10 & & 60.07 & 15.80 & & 98.30 & 95.20 & & 242.12 & 208.12\tabularnewline
& 100 & 153.22 & 153.17 & & 23.72 & 24.41 & & 38.15 & 28.87 & & 60.87 & 18.31 & & 98.40 & 95.20 & & 241.00 & 207.39\tabularnewline \hline
& 5 & 155.56 & 155.12 & & 5.69 & 5.80 & & 7.44 & 5.69 & & 30.80 & -1.82 & & 97.40 & 92.70 & & 106.64 & 92.84\tabularnewline 500 & 10 & 155.25 & 155.38 & & 5.66 & 5.76 & & 7.32 & 5.60 & & 29.32 & -2.62 & & 97.10 & 94.10 & & 105.88 & 92.13\tabularnewline
& 100 & 155.27 & 155.41 & & 5.59 & 5.67 & & 7.24 & 5.55 & & 29.53 & -2.11 & & 97.30 & 93.20 & & 105.36 & 91.68\tabularnewline \hline
& 5 & 155.94 & 155.80 & & 2.66 & 2.67 & & 3.64 & 2.80 & & 36.73 & 4.84 & & 98.40 & 95.80 & & 74.64 & 65.17\tabularnewline 1000 & 10 & 155.92 & 155.74 & & 2.62 & 2.62 & & 3.59 & 2.76 & & 37.00 & 5.51 & & 98.20 & 95.80 & & 74.19 & 64.76\tabularnewline
& 100 & 155.83 & 155.82 & & 2.58 & 2.57 & & 3.55 & 2.73 & & 37.79 & 6.23 & & 98.50 & 95.90 & & 73.85 & 64.37\tabularnewline \hline \end{tabular}} \label{rbi_quantile_table} \end{table}
\begin{table}[!htbp] \centering \caption{Simulation results under washout of the QTE estimator. Here the true value $\tau=1.1313$.} \resizebox{\textwidth}{!}{ \begin{tabular}{ccccccccccccccccccc} \hline
& & \multicolumn{2}{c}{Point est} & & \multicolumn{2}{c}{True var} & & \multicolumn{2}{c}{Var est} & & \multicolumn{2}{c}{Relative bias} & & \multicolumn{2}{c}{Coverage rate} & & \multicolumn{2}{c}{Mean CI length}\tabularnewline
& & \multicolumn{2}{c}{($\times10^{-2}$)} & & \multicolumn{2}{c}{($\times10^{-2}$)} & & \multicolumn{2}{c}{($\times10^{-2}$)} & & \multicolumn{2}{c}{($\%$)} & & \multicolumn{2}{c}{($\%$)} & & \multicolumn{2}{c}{($\times10^{-2}$)}\tabularnewline \cline{3-4} \cline{4-4} \cline{6-7} \cline{7-7} \cline{9-10} \cline{10-10} \cline{12-13} \cline{13-13} \cline{15-16} \cline{16-16} \cline{18-19} \cline{19-19} N & M & MI & DI & & MI & DI & & MI & DI & & MI & DI & & MI & DI & & MI & DI\tabularnewline \hline
& 5 & 111.26 & 111.43 & & 28.85 & 29.72 & & 40.59 & 33.64 & & 40.69 & 13.20 & & 97.30 & 94.80 & & 248.45 & 223.75\tabularnewline 100 & 10 & 111.53 & 111.29 & & 28.49 & 29.23 & & 40.26 & 33.97 & & 41.34 & 16.21 & & 97.00 & 94.70 & & 247.52 & 225.00\tabularnewline
& 100 & 111.41 & 111.77 & & 28.23 & 29.15 & & 40.01 & 34.07 & & 41.73 & 16.87 & & 97.20 & 95.00 & & 246.82 & 225.32\tabularnewline \hline
& 5 & 113.38 & 113.42 & & 6.60 & 6.82 & & 7.70 & 6.62 & & 16.56 & -3.01 & & 96.80 & 93.70 & & 108.53 & 100.08\tabularnewline 500 & 10 & 113.28 & 113.45 & & 6.48 & 6.70 & & 7.62 & 6.60 & & 17.75 & -1.50 & & 96.40 & 93.70 & & 108.09 & 99.98\tabularnewline
& 100 & 113.33 & 113.38 & & 6.52 & 6.63 & & 7.56 & 6.60 & & 16.10 & -0.37 & & 97.00 & 93.50 & & 107.69 & 99.99\tabularnewline \hline
& 5 & 113.43 & 113.37 & & 3.19 & 3.20 & & 3.76 & 3.28 & & 17.88 & 2.67 & & 97.40 & 95.20 & & 75.88 & 70.55\tabularnewline 1000 & 10 & 113.47 & 113.46 & & 3.18 & 3.15 & & 3.75 & 3.28 & & 18.03 & 3.98 & & 97.00 & 95.20 & & 75.85 & 70.53\tabularnewline
& 100 & 113.48 & 113.45 & & 3.13 & 3.13 & & 3.71 & 3.28 & & 18.51 & 4.78 & & 97.20 & 95.30 & & 75.44 & 70.51\tabularnewline \hline \end{tabular}} \label{washout_quantile_table} \end{table}
\section{Real data application}\label{appen:real_data}
\subsection{Additional analysis results}\label{appen:real_result}
The public dataset is available at \url{https://www.lshtm.ac.uk/research/centres-projects-groups/missing-data\#dia-missing-data}. For the ATE, we estimate it by fitting a group-specific ANCOVA model with the baseline HAMD-17 score as the baseline covariate. For the risk difference, we are interested in the difference between the control and treatment groups in the percentage of patients with $50\%$ or more improvement from the baseline HAMD-17 score at the end of the trial. We estimate it through the estimating equations in Example \ref{example2} (b). For the QTE, we do not restrict attention to one specific quantile; instead, we present the estimated cumulative distribution function (CDF) of the relative change from baseline for each group, obtained from the estimating equations in Example \ref{example2} (c). In the implementation of MI and DI, the imputation size is $M=100$ and the number of bootstrap replicates is $B=100$. For all hypothesis tests, we choose the significance level $\alpha=0.05$.
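As an illustration of the responder analysis described above, the following sketch computes a plug-in risk difference of $50\%$-or-more improvement from baseline. The scores, group assignment, and effect sizes are simulated stand-ins for the HAMD-17 data, and the plug-in contrast replaces the paper's estimating-equation machinery.

```python
import numpy as np

def risk_difference(baseline, final, group):
    """Difference between groups in the proportion of patients with
    >= 50% improvement from baseline at the end of the trial."""
    responder = (baseline - final) / baseline >= 0.5   # 50% score reduction
    p_treat = responder[group == 1].mean()
    p_ctrl = responder[group == 0].mean()
    return p_treat - p_ctrl

# hypothetical trial: treated patients improve more on average
rng = np.random.default_rng(7)
n = 200
group = rng.integers(0, 2, n)
baseline = rng.normal(20.0, 3.0, n)
final = baseline - rng.normal(6.0 + 4.0 * group, 3.0)
tau_hat = risk_difference(baseline, final, group)
```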
We present the estimated CDFs of the relative change at the last time point via MI and DI in each group, accompanied by the point-wise $95\%$ CIs, under the RTB and washout imputation mechanisms in the left panels of Figures \ref{fig:rtb} and \ref{fig:washout}, and the estimated QTEs, accompanied by the point-wise $95\%$ CIs, as a function of the quantile percentage $q$ in the right panels of Figures \ref{fig:rtb} and \ref{fig:washout}. Similar to the results in the simulation study and the results under J2R, the estimated CDF obtained from DI has a shape comparable to the one from MI, with a narrower $95\%$ confidence region, under these two imputation mechanisms. From the estimated QTE plots, a significant effect of the treatment is detected for patients in the lower quantiles of the HAMD-17 score in all three sensitivity analysis settings.
\begin{figure}
\caption{The left panel shows the estimated CDF of the relative change from baseline at the last time point via MI and DI in each group under RTB; the right panel shows the estimated QTE as a function of the quantile percentage $q$. Both plots are accompanied by the point-wise $95\%$ CI in dashed lines.}
\label{fig:rtb}
\end{figure}
\begin{figure}
\caption{The left panel shows the estimated CDF of the relative change from baseline at the last time point via MI and DI in each group under washout imputation; the right panel shows the estimated QTE as a function of the quantile percentage $q$. Both plots are accompanied by the point-wise $95\%$ CI in dashed lines.}
\label{fig:washout}
\end{figure}
\subsection{Model diagnosis \label{appen:real_diagnosis}}
In the primary analysis and the sensitivity analyses, we fit an MMRM from a population-averaged perspective based on the observed data to obtain a consistent estimator $\hat{\theta}$ of the model parameter. The underlying assumptions for the MMRM are: (i) Normality: $Y_{i}:=(Y_{i1},\cdots,Y_{iT})^{\intercal}\mid(G_{i}=j,R_{iT}=1)\sim\mathcal{N}_{T}(\mu_{ij},\Sigma^{(j)})$; (ii) the parameters controlling the missing mechanism and the model parameters satisfy the separability condition, i.e., they are variation independent. Since the second assumption is not testable, as the missing mechanism cannot be verified through the observed components alone, we focus on conducting a model diagnosis of the multivariate normality of the observed components of the HAMD-17 trial data in each treatment group.
Normal Q-Q plots for the residuals from the marginal distribution of the baseline response $Y_{1}$ and from the full conditional distributions of the subsequent responses $Y_{2},\cdots,Y_{6}$ based on the observed data are given in Figure \ref{fig:diagnosis}. They support the multivariate normality assumption, since the majority of the residuals lie within the confidence region. We observe several outlying residuals at the last time point that mildly violate the normality assumption; however, we still believe that normality holds in general.
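A light-weight version of this residual diagnosis can be sketched with the Python standard library. The snippet below is our own illustration, not the analysis code used here, and the data are simulated: it computes the correlation coefficient of a normal Q-Q plot, which is close to $1$ for approximately normal residuals.

```python
import random
from statistics import NormalDist, mean

def qq_correlation(sample):
    """Correlation between the ordered sample and standard-normal
    theoretical quantiles; values near 1 support normality."""
    n = len(sample)
    ordered = sorted(sample)
    # Theoretical quantiles at plotting positions (i - 0.5) / n
    theo = [NormalDist().inv_cdf((i - 0.5) / n) for i in range(1, n + 1)]
    mo, mt = mean(ordered), mean(theo)
    cov = sum((o - mo) * (t - mt) for o, t in zip(ordered, theo))
    so = sum((o - mo) ** 2 for o in ordered) ** 0.5
    st = sum((t - mt) ** 2 for t in theo) ** 0.5
    return cov / (so * st)

random.seed(0)
residuals = [random.gauss(0, 1) for _ in range(500)]
r = qq_correlation(residuals)
print(round(r, 3))  # close to 1 for approximately normal residuals
```

In practice one would apply such a check to the residuals of each fitted conditional model rather than to simulated data.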
\end{document} |
\begin{document}
\title[Arithmetic progressions of four squares over quadratic fields]{Arithmetic progressions of four squares over quadratic fields}
\author{Enrique Gonz{\'a}lez-Jim{\'e}nez} \address{Universidad Aut{\'o}noma de Madrid\\ Departamento de Matem{\'a}ticas \\ 28049 Madrid, Spain} \email{[email protected]}
\author{J\"orn Steuding} \address{W\"urzburg University\\ Institut f\"ur Mathematik\\ 97074 W\"urzburg, Germany} \email{[email protected]}
\keywords{Arithmetic progressions of four squares, $\theta$-congruent numbers, Euler's concordant forms, quadratic fields} \date{\today} \maketitle
\begin{abstract} Let $d$ be a squarefree integer. Do there exist four squares in arithmetic progression over $\mathbb{Q}(\sqrt{d})$? We shall give a partial answer to this question, depending on the value of $d$. In the affirmative case, we construct explicit arithmetic progressions consisting of four squares over $\mathbb{Q}(\sqrt{d})$. \end{abstract}
\tableofcontents
\section{Introduction}\label{intro}
Non-constant arithmetic progressions consisting of rational squares have been studied since ancient times. While it is not difficult to obtain an arithmetic progression of three rational squares (e.g. $1^2,5^2,7^2$), there are no four distinct rational squares in arithmetic progression, as already stated by Fermat and proved by Euler (among others). However, the situation is different over number fields. It is easy to construct four squares in arithmetic progression over a quadratic number field; e.g. $1^2,5^2,7^2,(\sqrt{73})^2$ over $\mathbb{Q}(\sqrt{73})$. It is even possible to find five squares in arithmetic progression; e.g. $7^2,13^2,17^2,(\sqrt{409})^2,23^2$ over $\mathbb{Q}(\sqrt{409})$. By Faltings' proof of the Mordell conjecture \cite{faltings}, in any finite algebraic extension of $\mathbb{Q}$ there can be at most finitely many arithmetic progressions of at least five squares. Recently Xarles \cite{xarles} has proved that six squares in arithmetic progression over quadratic number fields do not exist. The case of length five over quadratic fields has been treated by the first author and Xarles \cite{GJX}.
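Both introductory examples are easy to verify numerically. The following snippet (our own illustration, not part of the paper) checks that the listed squares form arithmetic progressions, with common differences $24$ and $120$ respectively:

```python
def is_arithmetic(seq):
    """True if consecutive differences of seq are all equal."""
    d = seq[1] - seq[0]
    return all(b - a == d for a, b in zip(seq, seq[1:]))

# 1^2, 5^2, 7^2, (sqrt(73))^2 over Q(sqrt(73)): common difference 24
four = [1**2, 5**2, 7**2, 73]
# 7^2, 13^2, 17^2, (sqrt(409))^2, 23^2 over Q(sqrt(409)): common difference 120
five = [7**2, 13**2, 17**2, 409, 23**2]
print(is_arithmetic(four), is_arithmetic(five))  # True True
```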
In this paper we consider the following natural problem:
\noindent {\it Let $d$ be a squarefree integer. Do there exist four squares in arithmetic progression over $\mathbb{Q}(\sqrt{d})$? In the affirmative case, give an algorithm to construct explicit examples.} \\
For special values of $d$ the following table indicates for which quadratic fields $\mathbb{Q}(\sqrt{d})$ there exist or do not exist non-constant arithmetic progressions of four squares. Here '?' indicates that in this case we do not know whether there exists a non-constant four term arithmetic progression of squares or not.
{\footnotesize \begin{center} \begin{table}[!h]
\begin{tabular}{|c||c|c|c|c||c|c|c|c|} \hline
$p\ge 5$ prime & \multicolumn{8}{|c|}{Is there a non-constant arithmetic progression of four squares over $\mathbb{Q}(\sqrt{d})$?} \\ \hline $p\bmod\,24$ & $d=p$ & $d=2p$ &$d=3p$ &$d=6p$ & $d=-p$ & $d=-2p$ &$d=-3p$ &$d=-6p$\\ \hline
$1$ & $?$ & $?$ & $?$ & $?$ & $?$ & $?$ & $?$ & $?$ \\ \hline
$5$ & no & $?$ & no & $?$ & $?$ & $?$ & $?$ & no \\ \hline
$7$ & no & no & $?$ & $?$ & no & $?$ & $?$ & no \\ \hline
$11$ & $?$ & $?$ & no & $?$ & no & $?$ & $?$ & $?$ \\ \hline
$13$ & $?$ & no & $?$ & $?$ & no & no & $?$ & no \\ \hline
$17$ & $?$ & $?$ & no & $?$ & $?$ & $?$ & no & no \\ \hline
$19$ & no & $?$ & no & $?$ & $?$ & no & $?$ &$?$ \\ \hline
$23$ & yes & yes & $?$ & yes & yes & yes & yes & $?$ \\ \hline \end{tabular}\\[2mm] \caption{}\label{table1} \end{table} \end{center} }
The proof of the correctness of the table will be given in Section \ref{section-explicit}.
We shall show that four squares in arithmetic progression lead to points on an elliptic curve. This approach is not new; it seems to be a folklore result. However, we shall provide our own parametrization of arithmetic progressions of four squares over a number field $k$ by $k$-rational points on the modular curve $X_0(24)$. More precisely, there exists a non-constant arithmetic progression of four squares over a number field $k$ if and only if the Mordell-Weil group of the elliptic curve $X_0(24)$ has positive rank over $k$; in this case, there exist infinitely many arithmetic progressions of four squares. For the specific case of a quadratic number field $\mathbb{Q}(\sqrt{d})$ this characterization reduces our problem to determining the rank of the quadratic $d$-twist of the underlying elliptic curve.
The above characterization allows us to link our problem with two other problems, both being generalizations of the famous congruent number problem. These problems are related to $\theta$-congruent numbers and Euler's concordant forms.
This paper is organized as follows: Section \ref{section-4squares} is devoted to constructing a parametrization of four term arithmetic progressions of squares over a number field $k$ by $k$-rational points of the elliptic curve $X_0(24)$. Here we derive some particular results when $k$ is a quadratic field. In Section \ref{section-congruent} we introduce the notion of $\theta$-congruent numbers and state some well-known results for the case when $\theta$ is equal to $\pi/3$ or $2\pi/3$, which are the relevant cases with respect to the main problem of this paper. Section \ref{section-concordant} deals with Euler's concordant forms. In Section \ref{section-explicit} we apply the results from the previous sections in order to obtain a partial answer to our problem. Here we also give examples of arithmetic progressions of four squares over $\mathbb{Q}(\sqrt{d})$ for all cases $|d|\le 40$ in which such arithmetic progressions exist. Moreover, we give another construction using Pythagorean triples and Thue equations. The last section contains some average results related to our main problem.
\section{Four squares in arithmetic progression}\label{section-4squares}
Four squares $a^2,b^2,c^2$ and $d^2$ over a field $k$ are in arithmetic progression if and only if $b^2-a^2=c^2-b^2$ and $c^2-b^2=d^2-c^2$. This is equivalent to $[a,b,c,d]\in \mathbb{P}^3(k)$ being in the intersection of the two quadric surfaces $a^2+c^2=2b^2$ and $b^2+d^2=2c^2$, resp. lying on the curve \begin{equation}\label{eqn1} C\,:\,\left\{ \begin{array}{l} a^2+c^2=2b^2,\\ b^2+d^2=2c^2. \end{array} \right. \end{equation} Therefore, $k$-rational points of $C$ parametrize arithmetic progressions of four squares over $k$. Note that the eight points $[\pm 1,\pm 1,\pm 1,\pm 1 ]$ belong to $C$, however, these points correspond to constant arithmetic progressions. The next step is to compute an explicit equation for $C$. In the generic case the intersection of two quadric surfaces in $\mathbb{P}^3$ gives an elliptic curve and, indeed, this will turn out to be true in our case. For this purpose we are going to compute a Weierstrass equation for $C$. The system of equations (\ref{eqn1}) is equivalent to $a^2+2d^2=3c^2, b^2=2c^2-d^2$. A parametrization (up to sign) of the first conic is given by $$ (a,d,c)=(2t^2-4t-1,2t^2+2t-1,2t^2+1),\quad t\in k, $$ where the inverse is given by $t=\frac{d-c}{a-c}$ if $a\neq c$ and $t=-\frac{1}{2}$ if $a=c$. Therefore, if we substitute the values of $a,d$ and $c$ in the second equation, we obtain the quartic equation \begin{equation}\label{eqn_quartic} \mathcal{Q}\,:\,b^2=4t^4-8t^3+8t^2+4t+1\,. \end{equation} Our next aim is to find a Weierstrass model for $\mathcal{Q}$. Note that there is only one point at infinity, namely $[0:1:0]$. This point is a node. We denote by $\infty_{1}$ and $\infty_{2}$ the two branches at infinity at the desingularization of $\mathcal{Q}$. 
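The computations leading to the quartic $\mathcal{Q}$ can be double-checked with exact rational arithmetic. The snippet below (our own illustration) verifies, at a range of sample values of $t$, that the parametrization satisfies $a^2+2d^2=3c^2$ and that $b^2=2c^2-d^2$ reduces to the stated quartic:

```python
from fractions import Fraction

def Q(t):  # the quartic from the text: b^2 = 4t^4 - 8t^3 + 8t^2 + 4t + 1
    return 4*t**4 - 8*t**3 + 8*t**2 + 4*t + 1

for t in [Fraction(n, 7) for n in range(-10, 11)]:
    a, d, c = 2*t**2 - 4*t - 1, 2*t**2 + 2*t - 1, 2*t**2 + 1
    assert a**2 + 2*d**2 == 3*c**2   # the parametrized conic a^2 + 2d^2 = 3c^2
    assert 2*c**2 - d**2 == Q(t)     # b^2 = 2c^2 - d^2 yields the quartic
print("parametrization checks out")
```

Since both sides of each identity are polynomials of degree at most $4$ in $t$, agreement at more than four values already proves the identity.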
A Weierstrass model for $\mathcal{Q}$ is $E:y^2=x(x+3)(x-1)$, where the isomorphism $\phi: \mathcal{Q} \rightarrow E$ is defined by $$ \phi(P)= \left(\frac{1+b+2t}{2t^2},\frac{1+b+3t+bt+4t^2-2t^3}{2t^3}\right)\quad\mbox{if} \quad P=(t,b)\neq (0,\pm 1), $$ and $\phi(0,-1)=(-1,2)$, $\phi(0,1)=[0:1:0]$, $\phi(\infty_1)=(1,0)$, $\phi(\infty_2)=(-1,-2)$. The inverse is defined by $$ \phi^{-1}(P)= \left(\frac{x+y-1}{x^2-1},\frac{x^3+5x^2+2xy-2y-x+3}{(x^2-1)(x+1)}\right)\quad \mbox{if} \quad P=(x,y),\,x\neq \pm 1. $$ Therefore, by the above construction we have proved
\begin{theorem}\label{parametrization} Let $k$ be a field with $\mbox{char($k$)}\neq 2,3$. Then arithmetic progressions of four squares in $k$ are parametrized by $k$-rational points of the elliptic curve $$ E:y^2=x(x+3)(x-1). $$ This parametrization is as follows:\\ \noindent $\bullet$ Let $[a,b,c,d]\in\mathbb{P}^3(k)$ be such that $a^2,b^2,c^2,d^2$ form an arithmetic progression. If $a\neq c$ and $d\neq c$, let $t=\frac{d-c}{a-c}$ and define $$ P=\left(\frac{1+b+2t}{2t^2},\frac{1+b+3t+bt+4t^2-2t^3}{2t^3}\right); $$ otherwise define $$ P=\left\{ \begin{array}{ccl} [0:1:0] & \mbox{if} & [a,b,c,d]=[-1,1,1,1],\\ (-1,2) & \mbox{if} & [a,b,c,d]=[-1,-1,1,1],\\ (3,-6) & \mbox{if} & [a,b,c,d]=[1,1,1,1],\\ (-3,0) & \mbox{if} & [a,b,c,d]=[1,-1,1,1],\\ (1,0) & \mbox{if} & [a,b,c,d]=[-1,1,-1,1],\\ (-1,-2) & \mbox{if} & [a,b,c,d]=[-1,-1,-1,1]. \end{array} \right. $$ Then $P\in E(k)$.\\ \noindent $\bullet$ Let $P\in E(k)$. If $P=(x,y)$, $x\neq \pm 1$, let $t=\frac{x+y-1}{x^2-1}$ and define $$ [a,b,c,d]=\left[2t^2-4t-1, \frac{x^3+5x^2+2xy-2y-x+3}{(x^2-1)(x+1)}, 2t^2+1, 2t^2+2t-1\right]; $$ otherwise define $$ [a,b,c,d]=\left\{ \begin{array}{ccl} {[-1,-1,1,1]} & \mbox{if} & P=(-1,2),\\ {[-1,-1,-1,1]} & \mbox{if} & P=(-1,-2),\\ {[-1,1,-1,1]} & \mbox{if} & P=(1,0),\\ {[-1,1,1,1]} & \mbox{if} & P=[0:1:0]. \end{array} \right. $$ Then $a^2,b^2,c^2,d^2$ form an arithmetic progression in $k$. \end{theorem}
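As a sanity check of the map $\phi$ (our own illustration, using the hand-picked rational points of $\mathcal{Q}$ with $t=-1/2$ and $b=\pm 3/2$), one can verify with exact rational arithmetic that $\phi$ lands on $E$:

```python
from fractions import Fraction

def phi(t, b):
    """The map Q -> E from the theorem, valid for (t, b) != (0, +-1)."""
    x = (1 + b + 2*t) / (2*t**2)
    y = (1 + b + 3*t + b*t + 4*t**2 - 2*t**3) / (2*t**3)
    return x, y

def on_E(x, y):  # E : y^2 = x(x+3)(x-1)
    return y**2 == x*(x + 3)*(x - 1)

# Rational points of the quartic b^2 = 4t^4 - 8t^3 + 8t^2 + 4t + 1,
# e.g. t = -1/2 with b = +-3/2 (then Q(-1/2) = 9/4).
t = Fraction(-1, 2)
for b in (Fraction(3, 2), Fraction(-3, 2)):
    assert b**2 == 4*t**4 - 8*t**3 + 8*t**2 + 4*t + 1
    assert on_E(*phi(t, b))
print(phi(t, Fraction(3, 2)))  # corresponds to the point (3, -6) of E
```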
It is natural to ask for which number fields $k$ there exist arithmetic progressions of four squares over $k$. In order to investigate this problem we shall make use of the theory of elliptic curves.
\begin{proposition}\label{prop1} Let $k$ be a number field. Then there exists a non-constant arithmetic progression of four squares over $k$ if and only if $\#E(k)>8$. Furthermore, there exist infinitely many such progressions if and only if $\rank{E(k)}\ne 0$. \end{proposition}
\begin{proof} Firstly, note that the points $[\pm 1,\pm 1,\pm 1,\pm 1]$ in $\mathbb{P}^3(k)$ give constant arithmetic progressions. Under the parametrization $\phi$, they map to eight points of $E(k)$. Hence a non-constant arithmetic progression exists if and only if $E(k)$ contains a further point, i.e., $\#E(k)>8$. Moreover, there are infinitely many such progressions if and only if $E(k)$ is infinite which, since the torsion subgroup is finite, happens if and only if $\rank{E(k)}\ne 0$. \end{proof}
As a corollary we obtain
\begin{corollary} There is no non-constant arithmetic progression of four rational squares. \end{corollary}
\noindent This statement is due to Fermat; however, the first proof is attributed to Euler, who applied Fermat's method of infinite descent. For the sake of completeness, and since some of the data that appear will be useful later, we give a short proof.
\begin{proof} Using \verb+SAGE+ \cite{sage} or \verb+MAGMA+ \cite{magma}, one can check that $E$ is the curve \verb+24A1+ in Cremona's tables \cite{cremona}, resp. \verb+24B+ in the Antwerp tables \cite{antwerp}. In other words, $E$ is the modular curve $X_0(24)$. Checking these tables or using one of the above mentioned computer algebra systems, one can prove $E(\mathbb{Q})\simeq \mathbb{Z}/2\mathbb{Z}\oplus \mathbb{Z}/4\mathbb{Z}$. There are no $\mathbb{Q}$-rational points on $E$ apart from those eight points $[\pm 1,\pm 1,\pm 1,\pm 1]$ which correspond to constant arithmetic progressions. \end{proof}
Next we shall consider the case of quadratic number fields. Here the question translates into studying the Mordell-Weil group $E(\mathbb{Q}(\sqrt{d}))$. However, instead of treating the elliptic curve $E$ over $\mathbb{Q}(\sqrt{d})$ directly, we are going to study the quadratic $d$-twist of the elliptic curve $E$ over $\mathbb{Q}$, i.e., the elliptic curve $$ E^d: dy^2=x(x+3)(x-1). $$ It should be noted that $E$ and $E^d$ are $\mathbb{Q}(\sqrt{d})$-isomorphic.
\begin{corollary} Let $d$ be a squarefree integer. Then there is a non-constant arithmetic progression of four squares over $\mathbb{Q}(\sqrt{d})$ if and only if $\rank{E^d}\ne 0$; in this case, there exist infinitely many such progressions. \end{corollary}
\begin{proof} We are going to compute the structure of $E(\mathbb{Q}(\sqrt{d}))$. Since the $2$-torsion subgroup of $E$ is defined over $\mathbb{Q}$, by applying Kwon's results \cite{kwon} we see that $E(\mathbb{Q}(\sqrt{d}))_{\tors}$ and $E(\mathbb{Q})_{\tors}$ are equal. Thus Proposition \ref{prop1} shows that if there exists a non-constant arithmetic progression of four squares over $\mathbb{Q}(\sqrt{d})$, then there exist infinitely many or, equivalently, $\rank{E(\mathbb{Q}(\sqrt{d}))}\ne 0$. Now, since \begin{equation}\label{rank-formula} \rank E(\mathbb{Q}(\sqrt{d})) = \rank E(\mathbb{Q})+ \rank E^d(\mathbb{Q}), \end{equation} the statement follows from $\rank E(\mathbb{Q})=0$. \end{proof}
Therefore, the problem to decide for which quadratic fields $\mathbb{Q}(\sqrt{d})$ there exist non-constant arithmetic progressions of four squares is reduced to the question whether the rank of $E^d(\mathbb{Q})$ is positive or not. In Section \ref{section-explicit} we give a partial solution to this problem and for some quadratic number fields we also give explicit four term arithmetic progressions of squares.
For number fields of higher degree we have the following result:
\begin{theorem} There are infinitely many cyclic cubic number fields in each of which there exist infinitely many non-constant arithmetic progressions of four squares. \end{theorem}
\begin{proof} Applying \cite[Theorem 6.1]{FKK} to the elliptic curve $E=X_0(24)$, we find $\#E(\mathbb{Q})=8>6$. Hence, for infinitely many cyclic cubic extensions $K/\mathbb{Q}$ we have $\rank E(K) > \rank E(\mathbb{Q})=0$. \end{proof}
\section{$\theta$-congruent numbers}\label{section-congruent}
A positive integer $n$ is called a congruent number if there exists a right triangle with rational sides and area equal to $n$. The problem of deciding whether a given integer is a congruent number has been studied since Diophantus. In 1983, Tunnell \cite{tunnell} found a deterministic criterion for this problem; if the Birch \& Swinnerton--Dyer conjecture is true, then his criterion can also be used to determine congruent numbers. The notion of congruent numbers was extended by Fujiwara \cite{fujiwara} to rational $\theta$-triangles, that is, triangles with rational sides in which one angle equals $\theta$. Note that for such a triangle, $\cos \theta=s/r$ for some coprime integers $r,s$ with $r>0$. It follows that $\sin\theta=\alpha_{\theta}/r$, where $\alpha_{\theta}:=\sqrt{r^2-s^2}$ is uniquely determined by $\theta$. Then $\theta$-congruent numbers are defined as follows:
\begin{definition} Let $\theta \in [0,\pi)$. A positive integer $n$ is a $\theta$-congruent number if there exists a rational $\theta$-triangle whose area is equal to $n\alpha_\theta$. \end{definition}
Therefore, $\pi/2$-congruent numbers coincide with the ordinary congruent numbers (in which case $r=1$ and $s=0$). As in the case of ordinary congruent numbers, $\theta$-congruent numbers can be characterized in terms of rational points on elliptic curves.
\begin{theorem}{\cite{fujiwara,fujiwara2}} For $\theta \in [0,\pi)$ and $n\in\mathbb{N}$, define the elliptic curve $$ E_{n,\theta} : y^2=x(x+(r+s)n)(x-(r-s)n). $$ Then, $n$ is a $\theta$-congruent number if and only if there exists a rational point on $E_{n,\theta}$ of order greater than $2$. Moreover, if $n\ne 1,2,3,6$, then $n$ is a $\theta$-congruent number if and only if $\rank E_{n,\theta}(\mathbb{Q})>0$. \end{theorem}
\noindent {\bf Remark:} Note that the elliptic curve $E_{n,\theta}$ is the $n$-twist of $E_{1,\theta}$. Therefore, whenever $n\ne 1,2,3,6$, proving that $n$ is a $\theta$-congruent number is equivalent to showing that the rank of the $n$-twist of the elliptic curve $E_{1,\theta}$ is non-zero. Another interesting remark is that $E_{n,\pi-\theta}$ is the $(-n)$-twist of $E_{1,\theta}$.
Several papers \cite{fujiwara,fujiwara2,goto,HK,kan, yoshida,yoshida2} have studied the $\theta$-congruent number problem for $\theta\ne \pi/2$. For our purposes the cases $\theta=\pi/3$ and $\theta=2\pi/3$ are of special interest (see Section \ref{section-explicit}). In these cases, $E_{1,\pi/3}$ is the curve \verb+24A1+ and $E_{1,2\pi/3}$ is the curve \verb+48A1+ in Cremona's tables, respectively.
The following table summarizes all known results on $\pi/3$-congruent and $2\pi/3$-congruent numbers (see \cite{fujiwara,HK,kan, yoshida,yoshida2}). Here '?' indicates that in this case we do not know whether $n$ is $\theta$-congruent or not.
{\footnotesize \begin{center} \begin{table}[!h]
\begin{tabular}{|c||c|c|c|c||c|c|c|c|} \hline
$p\ge 5$ prime & \multicolumn{4}{|c||}{Is $n$ a $\pi/3$-congruent number?} & \multicolumn{4}{|c|}{Is $n$ a $2\pi/3$-congruent number?}\\ \hline $p\bmod\,24$ & $n=p$ & $n=2p$ &$n=3p$ &$n=6p$ & $n=p$ & $n=2p$ &$n=3p$ &$n=6p$\\ \hline
$1$ & $?$ & $?$ & $?$ & $?$ & $?$ & $?$ & $?$ & $?$ \\ \hline
$5$ & no & $?$ & no & $?$ & $?$ & $?$ & $?$ & no \\ \hline
$7$ & no & no & $?$ & $?$ & no & $?$ & $?$ & no \\ \hline
$11$ & $?$ & $?$ & no & $?$ & no & $?$ & $?$ & $?$ \\ \hline
$13$ & $?$ & no & $?$ & $?$ & no & no & $?$ & no \\ \hline
$17$ & $?$ & $?$ & no & $?$ & $?$ & $?$ & no & no \\ \hline
$19$ & no & $?$ & no & $?$ & $?$ & no & $?$ & $?$\\ \hline
$23$ & yes & yes & $?$ & yes & yes & yes & yes & $?$ \\ \hline \end{tabular}\\[2mm] \caption{}\label{table2} \end{table} \end{center} }
Let $n$ be a squarefree positive integer from one of the residue classes $1,7$ or $13\bmod\,24$. Yoshida \cite{yoshida,yoshida2} proved along the lines of Tunnell's approach to the congruent number problem that $n$ is not a $2\pi/3$-congruent number if the numbers of representations of $n$ by the ternary quadratic forms $$ X^2+3Y^2+144Z^2\qquad\mbox{and}\qquad 3X^2+9Y^2+16Z^2 $$ with integral $X,Y,Z$ are not equal; if the conjecture of Birch and Swinnerton-Dyer is true, then also the converse implication holds, i.e., $n$ is $2\pi/3$-congruent if the numbers of representations coincide. In particular, it follows from the theory of quadratic forms that no prime $p\equiv 7$ or $13\bmod\,24$ is a $2\pi/3$-congruent number. Moreover, if $n\neq 1$ is squarefree with $n\equiv 1,7$ or $19\bmod\,24$, then he showed that $n$ is not a $\pi/3$-congruent number if the numbers of integer representations of $n$ by the ternary quadratic forms $$ X^2+12Y^2+15Z^2+12YZ\qquad\mbox{and}\qquad 3X^2+4Y^2+13Z^2+4YZ $$ are not equal; again, if the conjecture of Birch and Swinnerton-Dyer is true, then the converse implication holds as well. Here it follows that no prime $p\equiv 7\bmod\,24$ is a $\pi/3$-congruent number. For other residue classes Yoshida obtained analogous statements with, of course, different quadratic forms, which explain the 'no' entries in the above table. Note that all affirmative 'yes' entries rely on primes $p\equiv 23\bmod\,24$ by a theorem of Kan \cite{kan}. We collect these results on primes in the following theorem; for the other cases we refer to the papers mentioned above.
\begin{theorem} \cite{kan,yoshida,yoshida2} No prime number $p\equiv 7,11,13\bmod\,24$ is a $2\pi/3$-congruent number, and no prime number $p>5$ with $p\equiv 5,7,19\bmod\,24$ is a $\pi/3$-congruent number. On the contrary, any prime $p\equiv 23\bmod\,24$ is a $\theta$-congruent number for both $\theta=\pi/3$ and $\theta=2\pi/3$. \end{theorem}
A classical conjecture, supported by numerical evidence (based on computations from the 1970s), states that all squarefree positive integers congruent to $5,6$ or $7$ modulo $8$ are congruent numbers. In fact, this conjecture is a direct consequence of the Birch \& Swinnerton--Dyer conjecture. Similar to this conjecture, the following one covers the cases of $\pi/3$- and $2 \pi/3$-congruent numbers, respectively.
\begin{conjecture}\label{conjecture} Let $n$ be a squarefree positive integer. \begin{itemize} \item If $n\equiv\,11,13,17,23\,\mbox{mod $24$}$, then $n$ is a $\pi/3$-congruent number. \item If $n\equiv\,5,17,19,23\,\mbox{mod $24$}$, then $n$ is a $2\pi/3$-congruent number. \end{itemize} \end{conjecture}
\section{Euler's concordant forms}\label{section-concordant}
Another equivalent formulation of the congruent number problem is the following: a positive integer $n$ is a congruent number if and only if the system of diophantine equations $$ \left\{ \begin{array}{c} x^2 + ny^2=t^2\\ x^2 - ny^2=z^2 \end{array} \right. $$ has a solution $x,y,z,t\in\mathbb{Z}$ with $xy\ne 0$. In 1780, Euler \cite{euler} gave another generalization of the congruent number problem. He was interested in classifying those pairs of distinct non-zero integers $M$ and $N$ for which there exist $x,y,z,t\in\mathbb{Z}$ with $xy\ne 0$ such that $$ \left\{ \begin{array}{c} x^2+My^2=t^2\\ x^2+Ny^2=z^2. \end{array} \right. $$ This is known as Euler's concordant forms problem. If the above diophantine system has a solution, then the pair $(M,N)$ is said to be concordant, otherwise discordant. In the particular case $M=-N$ this yields the congruent number problem. As for the congruent number problem and its generalization to $\theta$-congruent numbers, there is a characterization due to Ono \cite{ono} of Euler's concordant forms problem in terms of rational points of an elliptic curve.
\begin{theorem}{\cite{ono}} For $M,N\in\mathbb{Z}$ such that $NM(M-N)\ne 0$, define the elliptic curve $$ E_{M,N}:y^2=x(x+M)(x+N). $$ Then, the pair $(M,N)$ is concordant if and only if there exists a rational point on $E_{M,N}$ of order $\neq 1,2$ or $4$. In particular, if $\rank E_{M,N}(\mathbb{Q})$ is positive, then $(M,N)$ is concordant. \end{theorem}
Using Waldspurger's results and Shimura's correspondence {\it \`a la} Tunnell, Ono obtained several results on the ranks of twists of $E_{M,N}$. In particular, if the numbers of integer representations of an odd positive squarefree integer $n$ by the ternary quadratic forms $$ X^2+2Y^2+12Z^2 \qquad\mbox{and}\qquad 2X^2+3Y^2+4Z^2 $$ are not equal, then $(M,N)=(6n,-18n)$ is discordant; if the conjecture of Birch and Swinnerton-Dyer is true, then also the converse implication is true. Moreover, for every odd integer $r$ with $1\leq r\leq 24$, he showed that there are infinitely many positive squarefree integers $n\equiv\,r\bmod\,24$ such that \begin{itemize} \item $(M,N)=(6n,-18n)$ is discordant. \item $(M,N)=(9n,-3n)$ is discordant, where $r\ne 7,15,23$. \end{itemize}
\section{Four squares in arithmetic progressions over quadratic fields}\label{section-explicit}
The existence of a non-constant four term arithmetic progression of squares over a quadratic field is determined by the rank of a quadratic twist of the elliptic curve $E=X_0(24)$. For a real quadratic field $\mathbb{Q}(\sqrt{d})$ the elliptic curve $E^d$ is $\mathbb{Q}$-isomorphic to $E_{d,\pi/3}$, whereas for an imaginary quadratic field $\mathbb{Q}(\sqrt{-d})$ it is $\mathbb{Q}$-isomorphic to $E_{d,2\pi/3}$.
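This identification can be checked symbolically: if $(x,y)$ lies on $E_{d,\pi/3}:y^2=x(x+3d)(x-d)$, then $(x/d,y/d^2)$ lies on $E^d:dy^2=x(x+3)(x-1)$. The snippet below (our own illustration) verifies the underlying identity, which involves only $y^2$, with exact rational arithmetic:

```python
from fractions import Fraction

# If y^2 = x(x+3d)(x-d), then (X, Y) = (x/d, y/d^2) satisfies
# d*Y^2 = X(X+3)(X-1).  Since only y^2 occurs, we may substitute its value
# and check the identity d*(y^2/d^4) == X(X+3)(X-1) for many rational x, d.
for d in [Fraction(v) for v in (-15, -5, 6, 23, 73)]:
    for x in [Fraction(n, 5) for n in range(-12, 13)]:
        y2 = x*(x + 3*d)*(x - d)
        X = x / d
        assert d*(y2 / d**4) == X*(X + 3)*(X - 1)
print("E^d is isomorphic to E_{d,pi/3} via (x, y) -> (x/d, y/d^2)")
```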
Thus, we may use the information of Table \ref{table2} to deduce information about the rank of $E^d(\mathbb{Q})$ from $E_{d,\pi/3}$ and $E_{d,2\pi/3}$, respectively, corresponding to $d$ being positive or negative. This proves the information provided by Table \ref{table1} from the introduction. Moreover, assuming Conjecture \ref{conjecture}, there exists a non-constant arithmetic progression of four squares over $\mathbb{Q}(\sqrt{d})$ if $d\equiv\,11,13,17,23\,\mbox{mod $24$}$ in the real case and $d\equiv\,1,5,7,19\,\mbox{mod $24$}$ in the imaginary case.
Further, note that for $(M,N)=(-d,3d)$ or $(3d,-d)$ we have $E_{d,\pi/3}=E_{M,N}$; moreover, if $(M,N)=(d,-3d)$ or $(-3d,d)$, then $E_{d,2\pi/3}=E_{M,N}$. In these cases we can use Ono's results \cite{ono} on the rank of $E_{M,N}(\mathbb{Q})$.
\subsection{Explicit examples}
Using \verb+MAGMA+, we may compute the rank of $E^d(\mathbb{Q})$; if this rank is positive, then, using the parametrization from Theorem \ref{parametrization}, we can also compute an explicit arithmetic progression of four squares. The following two tables list explicit examples of such progressions according to $\mathbb{Q}(\sqrt{d})$ being an imaginary or a real quadratic number field, for the range $|d|\le 40$. Each of the tables consists of three columns: the first column indicates the value of $d$, and the second and third ones give an example of $a,r\in\mathbb{Q}(\sqrt{d})$ such that $a^2,a^2+r,a^2+2r,a^2+3r$ forms an arithmetic progression of squares over $\mathbb{Q}(\sqrt{d})$, where $\alpha:=\sqrt{d}$.
\begin{center}
\begin{tabular}{|c|c|c|} \hline $d$ & $a$ & $r$\\ \hline $-5$ & $(-4 \alpha - 73)/2$ & $42 \alpha - 840$ \\ $-10$ & $(-60 \alpha - 629)/2$ & $-14322 \alpha - 55440$ \\ $-14$ & $-6 \alpha - 361$ & $8580 \alpha - 65520$ \\ $-15$ & $(\alpha - 19)/4$ & $\alpha - 15$ \\ $-17$ & $3339440 \alpha - 57973177$ & $219447603254880 \alpha - 2180390987174400$ \\ $-19$ & $(-26589360 \alpha + 21015523)/2$ & $22024049983320 \alpha + 2196495218332800$ \\ $-21$ & $-160 \alpha - 2393$ & $-590920 \alpha - 3141600$ \\ $-22$ & $-1224720 \alpha - 2179673$ & $-3235062111120 \alpha + 19986461510400$ \\ $-23$ & $(2625 \alpha + 11951)/4$ & $-1231230 \alpha + 6592950$ \\ $-29$ & $(22143940 \alpha - 361130617)/2$ & $3478556113902870 \alpha - 13110939890248200$ \\ $-33$ & $-612000 \alpha + 1945781$ & $673901512480 \alpha + 8195655283200$ \\ $-34$ & $319440 \alpha + 650807$ & $-95395404720 \alpha + 2288266041600$ \\ $-39$ & $(36 \alpha - 683)/2$ & $10450 \alpha - 51480$ \\ \hline \end{tabular} \end{center}
\begin{center}
\begin{tabular}{|c|c|c|} \hline $d$ & $a$ & $r$\\ \hline $6$ & $(2 \alpha + 1)/2$ & $25 \alpha + 60$ \\ $10$ & $(10 \alpha - 19)/2$ & $33 \alpha - 60$ \\ $11$ & $(-1320 \alpha + 2843)/2$ & $-7242060 \alpha + 23839200$ \\ $13$ & $1440 \alpha + 5183$ & $1323960 \alpha + 4773600$ \\ $17$ & $-15 \alpha - 1511$ & $-555360 \alpha + 1591200$ \\ $21$ & $-36 \alpha + 163$ & $2200 \alpha - 10080$ \\ $22$ & $(-750 \alpha + 3529)/2$ & $453705 \alpha - 2009700$ \\ $23$ & $19476668640 \alpha + 90283636367$ & \begin{tabular}{c} $\!\!\!\!\!\!\!\!\!\!\!\!\!1725783576049531078080 \alpha +$\\ $\qquad\qquad 8274631385821773772800$ \end{tabular}\\ $30$ & $(18 \alpha + 19)/2$ & $-413 \alpha + 1260$ \\ $34$ & $6860 \alpha - 12239$ & $-12575640 \alpha + 43982400$ \\ $35$ & $-84 \alpha + 487$ & $-28968 \alpha + 171360$ \\ $37$ & $38306628360 \alpha - 276487794001$ & \begin{tabular}{c} $\!\!\!\!\!\!\!\!\!\!\!\!\!10021678513795431723240 \alpha -$\\
$\qquad\qquad 24114970612028472976800$ \end{tabular}\\ $39$ & $720 \alpha + 3869$ & $7990640 \alpha + 49795200$ \\ \hline \end{tabular} \end{center}
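As a sanity check of the tables, the row $d=6$ can be verified with exact arithmetic in $\mathbb{Q}(\sqrt{6})$. The explicit square roots below are our own computation and do not appear in the tables:

```python
from fractions import Fraction as Fr

D = 6  # work in Q(sqrt(6)); a pair (u, v) represents u + v*sqrt(6)

def mul(p, q):
    (a, b), (c, d) = p, q
    return (a*c + D*b*d, a*d + b*c)

def add(p, q):
    return (p[0] + q[0], p[1] + q[1])

def smul(s, p):
    return (s*p[0], s*p[1])

# Row d = 6 of the real-quadratic table: a = (2*sqrt(6)+1)/2, r = 25*sqrt(6)+60
a = (Fr(1, 2), Fr(1))
r = (Fr(60), Fr(25))
# Claimed square roots of a^2 + k*r for k = 0, 1, 2, 3 (our own computation):
roots = [(Fr(1, 2), Fr(1)), (Fr(13, 2), Fr(2)), (Fr(17, 2), Fr(3)), (Fr(19, 2), Fr(4))]
for k, s in enumerate(roots):
    assert mul(s, s) == add(mul(a, a), smul(Fr(k), r))
print("a^2, a^2+r, a^2+2r, a^2+3r are all squares in Q(sqrt(6))")
```

In other words, the row $d=6$ encodes the progression $\bigl(\frac{1+2\alpha}{2}\bigr)^2, \bigl(\frac{13+4\alpha}{2}\bigr)^2, \bigl(\frac{17+6\alpha}{2}\bigr)^2, \bigl(\frac{19+8\alpha}{2}\bigr)^2$ with $\alpha=\sqrt{6}$.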
\subsection{Pythagorean triples}
Next we use Pythagorean triples to construct arithmetic progressions of four squares over quadratic number fields. In the following cases the arithmetic progression consists of four integers, all of them being a square over a specific quadratic number field.
It is well-known that for arbitrary $a,b\in \mathbb{Z}$ the triple $(a^2-b^2,2ab,a^2+b^2)$ defines a Pythagorean triple. Then $(a^2+b^2)^2-4n, (a^2+b^2)^2,(a^2+b^2)^2+4n$ forms an arithmetic progression of three squares where $n=ab(a^2-b^2)$. If we add a new square term, $\alpha^2=(a^2+b^2)^2+8ab(a^2-b^2)$ say, we obtain an arithmetic progression of four squares over $\mathbb{Q}(\sqrt{(a^2+b^2)^2+8ab(a^2-b^2)})$. In order to construct a quadratic number field $\mathbb{Q}(\sqrt{d})$, where $d$ is a squarefree integer, we define the Thue equation $$ F(x,y)=(x^2+y^2)^2+8xy(x^2-y^2)=d. $$
Of course, here we are interested in the set of integer solutions. Using \verb+MAGMA+, we have observed that for $|d|<100$ there exist integer solutions in the cases $d=-71, -47, -23, 73$, all of them being congruent to $1\bmod\,24$. By the above construction this yields the following arithmetic progressions:
\begin{center}
\begin{tabular}{|c|c|} \hline $d$ & Four squares in arithmetic progression over $\mathbb{Q}(\sqrt{d})$\\[.1cm] \hline $-71$ & $(\sqrt{-71})^2,7^2,13^2,17^2$\\[.1cm] $-47$ & $(\sqrt{-47})^2,17^2,25^2,31^2$\\[.1cm] $-23$ & $(\sqrt{-23})^2,1,5^2,7^2$\\[.1cm] $73$ & $1,5^2,7^2,(\sqrt{73})^2$\\[.1cm] \hline \end{tabular} \end{center}
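The integer solutions of the Thue equation reported above can be recovered by a brute-force search over small $x,y$ (our own illustration; the search range is an assumption, chosen just large enough for $|d|<100$):

```python
def F(x, y):  # the quartic Thue form from the text
    return (x**2 + y**2)**2 + 8*x*y*(x**2 - y**2)

def squarefree(n):
    n = abs(n)
    if n == 0:
        return False
    return all(n % (p*p) for p in range(2, int(n**0.5) + 1))

found = set()
for x in range(-6, 7):
    for y in range(-6, 7):
        d = F(x, y)
        # keep squarefree d with |d| < 100, excluding the trivial value d = 1
        if 0 < abs(d) < 100 and d != 1 and squarefree(d):
            found.add(d)
print(sorted(found))  # [-71, -47, -23, 73]
```

For instance, $F(1,2)=-23$, $F(2,1)=73$, $F(2,3)=-71$ and $F(3,4)=-47$.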
It is straightforward to check that $(a^2+b^2)^2+8ab(a^2-b^2) \equiv\,1\bmod\,24$ for coprime $a,b\in \mathbb{Z}$ with $a\not\equiv\,b\bmod\,2$. Therefore, our construction is restricted to the case of number fields $\mathbb{Q}(\sqrt{d})$ with $d\equiv\,1\bmod\,24$. Moreover, in all arithmetic progressions discovered by this method, the difference of any two successive members is divisible by $24$.
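Both congruence claims are easy to confirm numerically; the snippet below (our own illustration) checks, for coprime $a,b$ of opposite parity, that $F(a,b)\equiv 1\bmod 24$ and that the common difference $4n=4ab(a^2-b^2)$ of the progression is divisible by $24$:

```python
from math import gcd

def F(a, b):  # the quartic Thue form from the text
    return (a**2 + b**2)**2 + 8*a*b*(a**2 - b**2)

for a in range(-8, 9):
    for b in range(-8, 9):
        if gcd(a, b) == 1 and (a - b) % 2 == 1:  # coprime, opposite parity
            assert F(a, b) % 24 == 1                 # d = F(a, b) is 1 mod 24
            assert (4*a*b*(a**2 - b**2)) % 24 == 0   # common difference 4n
print("all congruence checks passed")
```

(Here Python's `%` returns a non-negative remainder, so e.g. $F(1,2)=-23$ gives $-23 \bmod 24 = 1$.)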
\section{Average results}
We conclude by discussing some average results for the central values of $L$-functions associated with elliptic curves. Let $E$ be an elliptic curve over $\mathbb{Q}$ and denote by $L(E,s)$ its elliptic curve $L$-function. Roughly speaking, the yet unproved Birch \& Swinnerton--Dyer conjecture states that the order of vanishing of $L(E,s)$ at $s=1$ is equal to the rank of $E$. Kolyvagin \cite{koly} has shown that if $E$ is a modular elliptic curve with $L(E,1)\neq 0$, then the rank of $E$ is equal to zero. By the proof of Wiles et al. \cite{wiles,wiles2} of the Shimura-Taniyama conjecture any elliptic curve over $\mathbb{Q}$ is modular; however, the corresponding statement for quadratic number fields is not known to be true. Note that $$ L(E(\mathbb{Q}(\sqrt{d})),s)=L(E,s)L(E^d,s), $$ where $E^d$ denotes the quadratic twist of $E$ by $d$; here it suffices to consider squarefree integers $d$. This formula directly corresponds to (\ref{rank-formula}). Hence, in our case, in order to obtain rank zero for $E(\mathbb{Q}(\sqrt{d}))$ we need the non-vanishing of $L(E^d,1)$, since $L(E,1)=0.53...\neq 0$; this computation has been carried out using \verb+SAGE+.
We may ask for the statistical behaviour as $d$ varies. Goldfeld \cite{gold} has conjectured that a positive proportion of the $d$ with $0<\vert d\vert \leq X$ have the property that $L(E^d,1)$ is non-vanishing. This has been established only in exceptional cases. For instance, Heath--Brown \cite{hb} confirmed this conjecture for the congruent number elliptic curve. Moreover, Ono \& Skinner \cite[Corollary 2]{onoskin} have proved that if $E$ is an elliptic curve over $\mathbb{Q}$ with conductor $\leq 100$, then either $E^{-p}$ or $E^p$ has rank zero for a positive proportion of primes $p$. In the special case of our elliptic curve we obtain:
\begin{corollary} For a positive proportion of primes $p$, at least one of the fields $\mathbb{Q}(\sqrt{-p})$ and $\mathbb{Q}(\sqrt{p})$ contains no non-constant arithmetic progression of four squares. \end{corollary}
If the Birch \& Swinnerton--Dyer conjecture is true, then the vanishing of $L(E^d,1)$ would imply that the rank of $E(\mathbb{Q}(\sqrt{d}))$ is positive, and so from (\ref{rank-formula}) and Proposition \ref{prop1} it would follow that there exist infinitely many non-constant arithmetic progressions of four squares over $\mathbb{Q}(\sqrt{d})$. M.R. Murty \& V.K. Murty \cite{mm} have shown that for $L(E,1)\neq 0$ there are infinitely many fundamental discriminants $d<0$ such that $L(E^d,s)$ has a simple zero at $s=1$; this result was independently obtained by Bump, Friedberg \& Hoffstein \cite{bump}. Hence, in our special case we may deduce that, subject to the truth of the Birch \& Swinnerton--Dyer conjecture, there exist infinitely many imaginary quadratic fields $\mathbb{Q}(\sqrt{d})$, each containing infinitely many non-constant arithmetic progressions of four squares.
\end{document} |
\begin{document}
\date{}\title{A note on permutation polynomials over finite fields} \begin{abstract} Permutation polynomials over finite fields constitute an active research area and have applications in many areas of science and engineering. In this paper, two conjectures on permutation polynomials proposed recently by Wu and Li \cite{WuL} are settled. Moreover, a new class of permutation trinomials of the form $x+\gamma \textup{Tr}_{q^n/q}(x^k)$ is also presented, which generalizes two examples of \cite{kyureghyan2016}.
\noindent {{\it Keywords and phrases\/}: Permutation polynomial, trinomial, trace function. }\\
\noindent {{\it Mathematics subject classifications\/}: 11T06, 11T55, 05A05.} \end{abstract}
\section{Introduction} Let $\mathbb{F}_{p^{n}}$ be a finite field with $p^{n}$ elements, where $p$ is a prime and $n$ is a positive integer. A polynomial $f(x)\in\mathbb{F}_{p^{n}}[x]$ is called a permutation polynomial (PP) if the associated mapping $f:c\mapsto f(c)$ from $\mathbb{F}_{p^{n}}$ to itself is a bijection. PPs have been intensively studied in recent years due to their important applications in cryptography, coding theory and combinatorial design theory (see \cite{Ding06,dobb1999,D99,L07,QTTL13,ST05} and the references therein). Recently, the study of PPs with few terms, especially binomials and trinomials, has attracted considerable interest due to their simple algebraic form and additional extraordinary properties.
In \cite{WuL}, Wu and Li presented several classes of permutation trinomials over $\mathbb{F}_{5^n}$ from Niho exponents of the form \begin{equation}\label{eq1.1} f(x)=x+\lambda_1 x^{s(5^k-1)+1}+\lambda_2 x^{t(5^k-1)+1}, \end{equation} where $n=2k, 1\leq s,t\leq 5^k,$ and $\lambda_1, \lambda_2\in\{1,-1\}.$ In the same paper, they also proposed the following two conjectures, which can be used to obtain two new classes of permutation trinomials of the form \eqref{eq1.1}. More recent progress on permutation trinomials can be found in \cite{DQWYY15,Gupta2016,Hou2014,Hou2015,H15,kyureghyan2016,LQC2017,Likuanquan2016New,Linian2016,LH2016,Ma2016,Zha2017}.
\begin{conjecture} The polynomial $f(x)=x\big(\frac{x^2-x+2}{x^2+x+2}\big)^2$ is a PP over $\mathbb{F}_{5^k}$ for odd $k.$ \end{conjecture}
\begin{conjecture} Let $q=5^k,$ where $k$ is an even integer. Then $g(x)=-x\big(\frac{x^2-2}{x^2+2}\big)^{2}$ permutes $\mu_{q+1}.$ \end{conjecture}
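As a quick sanity check (ours, not part of \cite{WuL}), Conjecture 1 can be verified directly in the smallest case $k=1$, i.e., over $\mathbb{F}_5$; larger odd $k$ would require extension-field arithmetic.

```python
# Brute-force check of Conjecture 1 for k = 1 (the prime field F_5).
p = 5

def inv(t):
    # Fermat inverse; t must be nonzero mod p
    return pow(t, p - 2, p)

def f(x):
    num = (x * x - x + 2) % p
    den = (x * x + x + 2) % p   # never 0 on F_5: its discriminant -7 = 3 is a non-square
    r = num * inv(den) % p
    return x * r * r % p

values = {f(x) for x in range(p)}
assert values == set(range(p))  # f permutes F_5
```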
This paper is organized as follows. In Section~\ref{preliminaries}, we introduce some basic notations and related results. In Sections~\ref{conj2} and \ref{conj1}, we prove the above two conjectures by using different tricks, such as treating the squares and non-squares separately. In Section~\ref{trapp}, we give a new class of PPs of the form $x+\gamma \textup{Tr}_{q^n/q}(x^k).$ Section~\ref{conclusion} concludes the paper.
\section{Preliminaries}\label{preliminaries} The following notations are fixed throughout this paper. \begin{itemize}
\item Let $q$ be a prime power, $n$ be an integer, and $\mathbb{F}_{q^n}$ be the finite field of order $q^{n}$.
\item Let $\textup{Tr}_{r}^{n}\ :\ \mathbb{F}_{q^{n}}\mapsto\mathbb{F}_{q^{r}}$ be the trace mapping defined by
$$\textup{Tr}_{r}^{n}(x)=x+x^{q^{r}}+x^{q^{2r}}+\cdots+x^{q^{n-r}},$$
where $r|n$. For $r=1$, we get the absolute trace function, which is denoted by $\textup{Tr}_{n}$.
\item Let $\overline{x}=x^q$ for any $x\in \mathbb{F}_{q^{2}}$. \end{itemize}
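To illustrate the trace mapping in the smallest nontrivial case (our own illustration, not from the paper), one can check that $\textup{Tr}_{1}^{2}(x)=x+x^3$ maps $\mathbb{F}_9=\mathbb{F}_3[t]/(t^2+1)$ into the base field $\mathbb{F}_3$:

```python
# GF(9) = GF(3)[t]/(t^2 + 1); elements are pairs (c0, c1) meaning c0 + c1*t.
from itertools import product

def mul(a, b):
    # (a0 + a1 t)(b0 + b1 t) with t^2 = -1
    return ((a[0] * b[0] - a[1] * b[1]) % 3,
            (a[0] * b[1] + a[1] * b[0]) % 3)

def add(a, b):
    return ((a[0] + b[0]) % 3, (a[1] + b[1]) % 3)

for x in product(range(3), repeat=2):
    x3 = mul(mul(x, x), x)   # Frobenius: x -> x^3
    tr = add(x, x3)          # Tr(x) = x + x^3
    assert tr[1] == 0        # the trace lies in the base field GF(3)
```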
Now, we recall a well-known lemma which will be needed in the following sections. \begin{lemma}\cite{Dickson1906}\label{lem2.1} $x^2+ax+b$ is irreducible over $\mathbb{F}_{p^n},$ $p>2$, if and only if its discriminant $\Delta=a^2-4b$ is a non-square element in $\mathbb{F}_{p^n}.$ \end{lemma}
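This criterion is easily confirmed by exhaustive search for small primes; the following check (ours) compares, for every monic quadratic over $\mathbb{F}_p$, the existence of a root with the quadratic character of the discriminant.

```python
# A monic quadratic over F_p (p odd) has a root iff its discriminant is a
# square (including 0); this is the contrapositive of the lemma above.
for p in (5, 7):
    squares = {(x * x) % p for x in range(p)}
    for a in range(p):
        for b in range(p):
            has_root = any((x * x + a * x + b) % p == 0 for x in range(p))
            assert has_root == ((a * a - 4 * b) % p in squares)
```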
\section{Proof of Conjecture 2}\label{conj2} In this section, we prove the following theorem, which was conjectured by Wu and Li \cite{WuL}. Throughout this section, let $\mu_{q+1}=\{x\in\mathbb{F}_{q^2}: x^{q+1}=1\},$ where $q=5^k$ with $k$ an even integer. \begin{theorem}\cite[Conjecture 2]{WuL}\label{thm3.1} Let $q=5^k,$ where $k$ is an even integer. Then $g(x)=-x\big(\frac{x^2-2}{x^2+2}\big)^{2}$ permutes $\mu_{q+1}.$ \end{theorem}
Before proving this theorem, we need the following lemmas.
\begin{lemma}\label{lem0} Let $q=5^k,$ where $k$ is an even integer. Then $\pm2$ are square elements in $\mathbb{F}_{q^2}.$ Moreover, $\sqrt{\pm2}\in \mathbb{F}_q.$ \end{lemma} \begin{proof}
Let $\mathbb{F}_{q^2}^*=\langle \omega \rangle.$ Noting that $8|(q^2-1)$ and $-1=\omega^{\frac{q^2-1}{2}}$, we have that $2=\omega^{\frac{q^2-1}{4}}$ and $-2=\omega^{\frac{3(q^2-1)}{4}}$ are two square elements in $\mathbb{F}_{q^2}.$
Write $\sqrt{2}=\omega^{\frac{q^2-1}{8}},$ $\sqrt{-2}=\omega^{\frac{3(q^2-1)}{8}}$. Note that $8|(q-1)$ since $k$ is an even integer. Thus $(\sqrt{2})^{q-1}=(-1)^{\frac{q-1}{4}}=1$ and $(\sqrt{-2})^{q-1}=(-1)^{\frac{3(q-1)}{4}}=1,$ which imply that $\sqrt{2}\in \mathbb{F}_q$ and $\sqrt{-2}\in \mathbb{F}_q$. \end{proof}
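For concreteness (our check, not part of the proof), Euler's criterion confirms the lemma numerically: an element $u\in\mathbb{F}_5^*$ is a square in $\mathbb{F}_q$ if and only if $u^{(q-1)/2}=1$, and a square root taken in $\mathbb{F}_q$ of course also lies in $\mathbb{F}_{q^2}$.

```python
# Euler's criterion: u in F_5* is a square in F_q = F_{5^k} iff u^((q-1)/2) = 1.
# Since u lies in F_5, the power can be computed modulo 5.
# Check 2 and -2 for the even exponents k = 2 and k = 4.
for k in (2, 4):
    q = 5 ** k
    for u in (2, 3):  # 3 = -2 in F_5
        assert pow(u, (q - 1) // 2, 5) == 1
```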
Let $\Omega_{+}=\{x^2:x\in\mu_{q+1}\}$, $\Omega_{-}=\{-x^2:x\in\mu_{q+1}\}$. \begin{lemma}\label{lem1} $\Omega_{+}\cap \Omega_{-}=\varnothing$, $\Omega_{+}\cup \Omega_{-}=\mu_{q+1}$. \end{lemma} \begin{proof} If $\Omega_{+}\cap \Omega_{-}\neq \varnothing,$ then there exist $x_1,x_2\in \mu_{q+1}$ with $x_1^2=-x_2^2$. We have $\big( \frac{x_1}{x_2}\big)^2=-1$, which means $\big( \frac{x_1}{x_2}\big)^4=1$. Since $\big( \frac{x_1}{x_2}\big)^{q+1}=1$, it follows that $\big( \frac{x_1}{x_2}\big)^{\gcd(4, q+1)}=1$. Hence $\big( \frac{x_1}{x_2}\big)^2=1$, which contradicts $\big( \frac{x_1}{x_2}\big)^2=-1$.
Clearly, $|\Omega_{+}|=|\Omega_{-}|=\frac{q+1}{2}$. It follows that $\Omega_{+}\cup \Omega_{-}=\mu_{q+1}$. \end{proof}
\begin{lemma}\label{lem2} $g(\Omega_{+})\subseteq \Omega_{+}$ and $g(\Omega_{-})\subseteq \Omega_{-}$. \end{lemma} \begin{proof} For any $x\in \Omega_{+}$ there exists $a\in \mu_{q+1}$ such that $x=a^2$. We need to show $g(x)=b^2$ for some $b\in \mu_{q+1}$. In fact, $g(x)=g(a^2)=-a^2\big( \frac{a^4-2}{a^4+2}\big)^2=\Big(2a\big(\frac{a^4-2}{a^4+2}\big)\Big)^2$. Let $b=2a\big(\frac{a^4-2}{a^4+2}\big)$. Noting that $\overline{a}=\frac{1}{a},$ we have \begin{align*} \overline{b}=2\overline{a}\big(\frac{\overline{a}^4-2}{\overline{a}^4+2}\big)=\frac{2}{a}\Big( \frac{(\frac{1}{a})^4-2}{(\frac{1}{a})^4+2} \Big)=\frac{1}{2a}\big(\frac{a^4+2}{a^4-2}\big)=\frac{1}{b}, \end{align*} so $b\in\mu_{q+1}$ and $g(x)\in\Omega_{+}$.
Similarly, $g(\Omega_{-})\subseteq \Omega_{-}$. Indeed, for any $x\in \Omega_{-}$ there exists $a\in \mu_{q+1}$ such that $x=-a^2$, and $g(x)=g(-a^2)=a^2\big( \frac{a^4-2}{a^4+2}\big)^2=-\Big(2a\big(\frac{a^4-2}{a^4+2}\big)\Big)^2$. Letting $b=2a\big(\frac{a^4-2}{a^4+2}\big)$ and noting that $b\in \mu_{q+1}$ as above, we obtain $g(x)=-b^2\in\Omega_{-}.$ \end{proof}
\begin{lemma}\label{lem3.1} The system of equations \begin{equation*} \begin{cases} xy=1,\\ x+y=\pm1 \end{cases} \end{equation*} has no solution with $x,y\in\Omega_{+}$, nor with $x,y\in\Omega_{-}$. \end{lemma} \begin{proof} If not, there exist $x,y\in \Omega_{+}$ (or $\Omega_{-}$) such that the equations hold. It follows that $x^2\pm x+1=0$, so $x=\pm2\pm 2\sqrt{2}\in\mathbb{F}_q$ by Lemma \ref{lem0}. Thus $x^{q-1}=1$. Since also $x^{q+1}=1$, we deduce that $x^{\gcd(q-1,q+1)}=1$, that is $x=\pm1$, which is a contradiction. \end{proof}
\begin{lemma}\label{lem3} $g(x)$ permutes $\Omega_{+}$. \end{lemma}
\begin{proof} Suppose the assertion does not hold; then there exist $x\neq y\in\Omega_{+}$ such that $g(x)=g(y)$. Let $x=a^2$, $y=b^2$, where $a,b\in \mu_{q+1}$ and $a\neq \pm b$. Since $g(x)=g(y)$, we have $-a^2\big(\frac{a^4-2}{a^4+2}\big)^2=-b^2\big(\frac{b^4-2}{b^4+2}\big)^2$, which means that $\frac{a^5-2a}{a^4+2}=\pm\frac{b^5-2b}{b^4+2}$.
Below, we consider the two cases where $\frac{a^5-2a}{a^4+2}=\frac{b^5-2b}{b^4+2}$ and $\frac{a^5-2a}{a^4+2}=-\frac{b^5-2b}{b^4+2}$.
{\bf Case 1: $\frac{a^5-2a}{a^4+2}=\frac{b^5-2b}{b^4+2}.$}
We obtain \begin{equation*} (a-b)\Big(a^4b^4+2(a-b)^4+2ab(a-b)^2-4a^2b^2-4\Big)=0, \end{equation*} which implies that $$a^4b^4+2(a-b)^4+2ab(a-b)^2-4a^2b^2-4=0,$$ since $a\neq b$. Let $c=ab$, $d=a-b$, we have $$c^4+2d^4+2cd^2-4c^2-4=0,$$ that is $$d^4+cd^2-2(c^4+c^2+1)=0.$$ Thus, we have $d^2=2c\pm 2\sqrt{-2}(c^2-1)$, since $\Delta=-2(c^2-1)^2$ is a square in $\mathbb{F}_{q^2}.$
Now, we consider the following two subcases.
{\bf Subcase 1.1 } For the case \begin{equation}\label{eq3.1} d^2=2c+2\sqrt{-2}(c^2-1), \end{equation} raising both sides of Eq. \eqref{eq3.1} to the $q$-th power and using Lemma \ref{lem0}, we get \begin{equation*} \overline{d}^2=2\overline{c}+2\sqrt{-2}(\overline{c}^2-1). \end{equation*} Since $\overline{c}=\frac{1}{c}$ and $\overline{d}=\overline{a}-\overline{b}=\frac{1}{a}-\frac{1}{b}=\frac{b-a}{ab}=\frac{-d}{c}$, we obtain $$\frac{d^2}{c^2}=\frac{2}{c}+2\sqrt{-2}(\frac{1}{c^2}-1),$$ which implies that \begin{equation}\label{eq3.2} d^2=2c+2\sqrt{-2}(1-c^2). \end{equation} Combining Eq. \eqref{eq3.1} and Eq. \eqref{eq3.2}, we have $c^2=1$. It follows that \begin{equation*} d^2=2c= \begin{cases} 2,~ \text{if $c=1$,}\\ -2,~ \text{if $c=-1$.} \end{cases} \end{equation*}
\begin{enumerate} \item When $c=1$, $d^2=2$, we have
\begin{equation}\label{eq0.1}
\begin{cases}
xy=a^2b^2=c^2=1,\\
x+y=a^2+b^2=(a-b)^2+2ab=d^2+2c=-1.
\end{cases}
\end{equation}
By Lemma \ref{lem3.1}, Eq. \eqref{eq0.1} has no solution in $\Omega_{+}$, which is a contradiction. \item When $c=-1$, $d^2=-2$, we get
\begin{equation}\label{eq0.2}
\begin{cases}
xy=1,\\
x+y=1.
\end{cases}
\end{equation}
By Lemma \ref{lem3.1}, Eq. \eqref{eq0.2} has no solution in $\Omega_{+}$, which is a contradiction. \end{enumerate}
{\bf Subcase 1.2} For the case $d^2=2c-2\sqrt{-2}(c^2-1),$ we can get a contradiction in a similar way. In fact, raising both sides of $d^2=2c-2\sqrt{-2}(c^2-1)$ to the $q$-th power and using Lemma \ref{lem0}, we get \begin{equation*} \overline{d}^2=2\overline{c}-2\sqrt{-2}(\overline{c}^2-1). \end{equation*} Noting that $\overline{c}=\frac{1}{c}$ and $\overline{d}=\frac{-d}{c}$, we obtain $$\frac{d^2}{c^2}=\frac{2}{c}-2\sqrt{-2}(\frac{1}{c^2}-1),$$ which implies that \begin{equation*} d^2=2c-2\sqrt{-2}(1-c^2). \end{equation*} It follows that $c^2=1.$ The rest of the argument is exactly the same as {\bf Subcase 1.1}.
{\bf Case 2: $\frac{a^5-2a}{a^4+2}=-\frac{b^5-2b}{b^4+2}$}. The discussion is similar with {\bf Case 1}. In fact, we have \begin{equation*} (a+b)\Big(a^4b^4+2(a+b)^4-2ab(a+b)^2-4a^2b^2-4\Big)=0, \end{equation*} which implies that $$a^4b^4+2(a+b)^4-2ab(a+b)^2-4a^2b^2-4=0,$$ since $a\neq -b$. Let $c=ab$, $d=a+b$, we have $$c^4+2d^4-2cd^2-4c^2-4=0,$$ that is $$d^4-cd^2-2(c^4+c^2+1)=0.$$ Therefore, we have $d^2=-2c\pm 2\sqrt{-2}(c^2-1)$, since $\Delta=-2(c^2-1)^2$ is a square in $\mathbb{F}_{q^2}.$
Now, we consider the following two subcases.
{\bf Subcase 2.1 } For the case \begin{equation}\label{eq3.2.1} d^2=-2c+2\sqrt{-2}(c^2-1), \end{equation} raising both sides of Eq. \eqref{eq3.2.1} to the $q$-th power and using Lemma \ref{lem0}, we get \begin{equation*} \overline{d}^2=-2\overline{c}+2\sqrt{-2}(\overline{c}^2-1). \end{equation*} Since $\overline{c}=\frac{1}{c}$ and $\overline{d}=\overline{a}+\overline{b}=\frac{1}{a}+\frac{1}{b}=\frac{a+b}{ab}=\frac{d}{c}$, we obtain $$\frac{d^2}{c^2}=\frac{-2}{c}+2\sqrt{-2}(\frac{1}{c^2}-1),$$ which implies that \begin{equation}\label{eq3.2.2} d^2=-2c+2\sqrt{-2}(1-c^2). \end{equation} Combining Eq. \eqref{eq3.2.1} and Eq. \eqref{eq3.2.2}, we have $c^2=1$. It follows that \begin{equation*} d^2=-2c= \begin{cases} -2,~ \text{if $c=1$,}\\ 2,~ \text{if $c=-1$.} \end{cases} \end{equation*}
\begin{enumerate} \item If $c=1$, $d^2=-2$, we have \begin{equation}\label{eq0.3} \begin{cases} xy=1,\\ x+y=1. \end{cases} \end{equation} By Lemma \ref{lem3.1}, Eq. \eqref{eq0.3} has no solution in $\Omega_{+}$, which is a contradiction. \item If $c=-1$, $d^2=2$, we get \begin{equation}\label{eq0.4} \begin{cases} xy=1,\\ x+y=-1. \end{cases} \end{equation} By Lemma \ref{lem3.1}, Eq. \eqref{eq0.4} has no solution in $\Omega_{+}$, which is a contradiction. \end{enumerate}
{\bf Subcase 2.2} For the case $d^2=-2c-2\sqrt{-2}(c^2-1),$ we can get a contradiction in a similar way. In fact, raising both sides of $d^2=-2c-2\sqrt{-2}(c^2-1)$ to the $q$-th power and using Lemma \ref{lem0}, we get \begin{equation*} \overline{d}^2=-2\overline{c}-2\sqrt{-2}(\overline{c}^2-1). \end{equation*} Noting that $\overline{c}=\frac{1}{c}$ and $\overline{d}=\frac{d}{c}$, we obtain $$\frac{d^2}{c^2}=\frac{-2}{c}-2\sqrt{-2}(\frac{1}{c^2}-1),$$ which implies that \begin{equation*} d^2=-2c-2\sqrt{-2}(1-c^2). \end{equation*} It follows that $c^2=1.$ The rest of the argument is exactly the same as {\bf Subcase 2.1}. \end{proof}
\begin{lemma}\label{lem4} $g(x)$ permutes $\Omega_{-}$. \end{lemma}
\begin{proof} The proof is similar to that of Lemma \ref{lem3}. Indeed, if the assertion did not hold, there would exist $x\neq y\in\Omega_{-}$ such that $g(x)=g(y)$. Let $x=-a^2$, $y=-b^2$, where $a,b\in \mu_{q+1}$ and $a\neq \pm b$. Since $g(x)=g(y)$, we have $a^2\big(\frac{a^4-2}{a^4+2}\big)^2=b^2\big(\frac{b^4-2}{b^4+2}\big)^2$, which means that $\frac{a^5-2a}{a^4+2}=\pm\frac{b^5-2b}{b^4+2}$. The rest of the argument is exactly the same as in Lemma \ref{lem3}, except that the sign of $x+y$ changes since $x+y=-(a^2+b^2)$, which does not affect the rest of the argument. \end{proof}
\begin{proof}[Proof of Theorem \ref{thm3.1}] It can be derived directly from Lemmas \ref{lem1}, \ref{lem2}, \ref{lem3} and \ref{lem4}. \end{proof}
\section{Proof of Conjecture 1}\label{conj1}
\begin{theorem}\cite[Conjecture 1]{WuL}\label{thm4.1} The polynomial $f(x)=x\big(\frac{x^2-x+2}{x^2+x+2}\big)^2$ is a PP over $\mathbb{F}_{5^k}$ for odd $k.$ \end{theorem} Before proving this theorem, we need the following lemmas. Let $\Omega_{1}=\{x^2:x\in \mathbb{F}_{5^k}^{*}\}$ and $\Omega_{2}=\{2x^2:x\in\mathbb{F}_{5^k}^{*}\}$.
\begin{lemma}\label{lemconj1.1} $f(x)$ is a PP on $\Omega_{1}.$ \end{lemma} \begin{proof} If not, there exist two distinct elements $x,y\in \Omega_1$ such that $f(x)=f(y).$ Let $x=a^2$, $y=b^2,$ where $a,b\in \mathbb{F}_{5^k}^*$, and $a\neq \pm b.$ We have $f(a^2)=f(b^2)$. That is $$a^2\big( \frac{a^4-a^2+2}{a^4+a^2+2} \big)^2=b^2\big( \frac{b^4-b^2+2}{b^4+b^2+2} \big)^2.$$ We get $$\frac{a^5-a^3+2a}{a^4+a^2+2}=\pm \frac{b^5-b^3+2b}{b^4+b^2+2}.$$
{\bf Case 1:} For the case $$\frac{a^5-a^3+2a}{a^4+a^2+2}=\frac{b^5-b^3+2b}{b^4+b^2+2}.$$
After simple computations, we have $$(a-b)\big( a^4b^4+2(a-b)^4+(a^2b^2-2ab-2)(a^2+ab+b^2)+a^3b^3-a^2b^2-2ab+4 \big)=0.$$ Let $c=ab$, $d=a-b$. Noting that $a\neq b$, we have $$c^4+2d^4+(c^2-2c-2)(d^2-2c)+c^3-c^2-2c+4=0.$$ Simplifying the above equation gives $$d^4-(2c^2+c+1)d^2-2c^4+2c^3-c^2+c+2=0.$$ Let $z=d^2\in\mathbb{F}_{5^k}^*$, we have \begin{equation}\label{eq4.1} z^2-(2c^2+c+1)z-2c^4+2c^3-c^2+c+2=0. \end{equation}
By Lemma \ref{lem2.1}, we know $z^2-(2c^2+c+1)z-2c^4+2c^3-c^2+c+2$ is irreducible over $\mathbb{F}_{5^k}$ if and only if the discriminant $\Delta$ is non-square. It is easy to obtain $\Delta=2(c^2-c-2)^2.$
{\bf Subcase 1.1:} If $c^2-c-2\neq 0,$ clearly, $\Delta$ is a non-square element in $\mathbb{F}_{5^k}$ since $2$ is non-square. Thus Eq. \eqref{eq4.1} has no solution in $\mathbb{F}_{5^k}$.
{\bf Subcase 1.2:} If $c^2-c-2=0,$ we have $c=2$ or $c=-1$.
\begin{enumerate}
\item If $c=2,$ we get $d^2=-2$. Then we have
\begin{equation}\label{eqn1}
\begin{cases}
xy=a^2b^2=c^2=-1, \\
x+y=a^2+b^2=(a-b)^2+2ab=d^2+2c=2.
\end{cases}
\end{equation}
So $x,y$ are two roots of $u^2-2u-1=0.$ However, $u^2-2u-1$ is irreducible over $\mathbb{F}_{5^k}$ by Lemma \ref{lem2.1}, since the discriminant $\Delta=3$ is non-square. We get a contradiction.
\item If $c=-1$, we have $d^2=1$. Then \begin{equation}\label{eqn2}
\begin{cases}
xy=1, \\
x+y=-1.
\end{cases}
\end{equation}
So $x,y$ are two roots of $u^2+u+1=0.$ However, $u^2+u+1$ is irreducible over $\mathbb{F}_{5^k}$ by Lemma \ref{lem2.1}, since the discriminant $\Delta=2$ is non-square. We get a contradiction. \end{enumerate}
{\bf Case 2:} For the case $$\frac{a^5-a^3+2a}{a^4+a^2+2}=-\frac{b^5-b^3+2b}{b^4+b^2+2}.$$
After simple computations, we have $$(a+b)\big( a^4b^4+2(a+b)^4+(a^2b^2+2ab-2)(a^2-ab+b^2)-a^3b^3-a^2b^2+2ab+4 \big)=0.$$ Let $c=ab$, $d=a+b$. Noting that $a\neq -b$, we have $$c^4+2d^4+(c^2+2c-2)(d^2+2c)-c^3-c^2+2c+4=0.$$ Simplifying the above equation gives $$d^4-(2c^2-c+1)d^2-2c^4-2c^3-c^2-c+2=0.$$ Let $z=d^2\in\mathbb{F}_{5^k}^*$, we have \begin{equation}\label{eq4.2} z^2-(2c^2-c+1)z-2c^4-2c^3-c^2-c+2=0. \end{equation} By Lemma \ref{lem2.1}, we know $z^2-(2c^2-c+1)z-2c^4-2c^3-c^2-c+2$ is irreducible over $\mathbb{F}_{5^k}$ if and only if the discriminant $\Delta$ is non-square. It is easy to obtain $\Delta=2(c^2+c-2)^2.$
{\bf Subcase 2.1:} If $c^2+c-2\neq 0,$ clearly, $\Delta$ is a non-square element in $\mathbb{F}_{5^k}$ since $2$ is non-square. Thus Eq. \eqref{eq4.2} has no solution in $\mathbb{F}_{5^k}$.
{\bf Subcase 2.2:} If $c^2+c-2=0,$ we have $c=-2$ or $c=1$.
\begin{enumerate}
\item If $c=-2,$ we get $d^2=-2$. Then we have
\begin{equation}\label{eqn1b}
\begin{cases}
xy=-1, \\
x+y=-1.
\end{cases}
\end{equation}
So we get $x=y=2,$ which is a contradiction.
\item If $c=1$, we have $d^2=1$. Then \begin{equation}\label{eqn2b}
\begin{cases}
xy=1, \\
x+y=-1.
\end{cases}
\end{equation}
So $x,y$ are two roots of $u^2+u+1=0.$ However, $u^2+u+1$ is irreducible over $\mathbb{F}_{5^k}$ by Lemma \ref{lem2.1}, since the discriminant $\Delta=2$ is non-square. We get a contradiction. \end{enumerate}
\end{proof}
\begin{lemma}\label{lemconj1.2} $f(x)$ permutes $\Omega_{2}.$ \end{lemma}
\begin{proof} An argument similar to that of Lemma \ref{lemconj1.1} shows that $f(x)$ permutes $\Omega_{2}.$
In fact, if not, there exist two distinct elements $x$ and $y$ such that $f(x)=f(y).$ Let $x=2a^2$, $y=2b^2,$ where $a,b\in \mathbb{F}_{5^k}^*$, and $a\neq \pm b.$ We have $f(2a^2)=f(2b^2)$. That is $$2a^2\big( \frac{4a^4-2a^2+2}{4a^4+2a^2+2} \big)^2=2b^2\big( \frac{4b^4-2b^2+2}{4b^4+2b^2+2} \big)^2.$$ We get $$\frac{4a^5-2a^3+2a}{4a^4+2a^2+2}=\pm \frac{4b^5-2b^3+2b}{4b^4+2b^2+2}.$$
Next, we consider the following two cases separately.
{\bf Case 1:} For the case $$\frac{4a^5-2a^3+2a}{4a^4+2a^2+2}=\frac{4b^5-2b^3+2b}{4b^4+2b^2+2}.$$
After simple computations, we have $$(a-b)\big( a^4b^4+3(a-b)^4+(3a^2b^2-3ab-4)(a^2+ab+b^2)+3a^3b^3-4a^2b^2-4ab+4 \big)=0.$$ Let $c=ab$, $d=a-b$. Noting that $a\neq b$, we have $$c^4+3d^4+(3c^2-3c-4)(d^2-2c)+3c^3-4c^2-4c+4=0.$$ Simplifying the above equation gives $$d^4+(c^2-c+2)d^2+2c^4-c^3-c^2-2c+3=0.$$ Let $z=d^2\in\mathbb{F}_{5^k}^*$; since $3=-2$ in $\mathbb{F}_{5^k}$, we have \begin{equation}\label{eq4.2.1} z^2+(c^2-c+2)z+2c^4-c^3-c^2-2c-2=0. \end{equation}
By Lemma \ref{lem2.1}, we know $z^2+(c^2-c+2)z+2c^4-c^3-c^2-2c-2$ is irreducible over $\mathbb{F}_{5^k}$ if and only if the discriminant $\Delta$ is non-square. It is easy to obtain $\Delta=3(c^2+2c+2)^2.$
{\bf Subcase 1.1:} If $c^2+2c+2\neq 0,$ clearly, $\Delta$ is a non-square element in $\mathbb{F}_{5^k}$ since $2$ is non-square. Thus Eq. \eqref{eq4.2.1} has no solution in $\mathbb{F}_{5^k}$.
{\bf Subcase 1.2:} If $c^2+2c+2=0,$ we have $c=2$ or $c=1$.
\begin{enumerate}
\item If $c=2,$ we get $d^2=-2$. Then we have
\begin{equation}\label{eqn2.1}
\begin{cases}
xy=1, \\
x+y=-1.
\end{cases}
\end{equation}
So $x,y$ are two roots of $u^2+u+1=0.$ However, $u^2+u+1$ is irreducible over $\mathbb{F}_{5^k}$ by Lemma \ref{lem2.1}, since the discriminant $\Delta=2$ is non-square. We get a contradiction.
\item If $c=1$, we have $d^2=-1$. Then \begin{equation}\label{eqn2.2}
\begin{cases}
xy=-1, \\
x+y=2.
\end{cases}
\end{equation}
So $x,y$ are two roots of $u^2-2u-1=0.$ However, $u^2-2u-1$ is irreducible over $\mathbb{F}_{5^k}$ by Lemma \ref{lem2.1}, since the discriminant $\Delta=3$ is non-square. We get a contradiction.
\end{enumerate}
{\bf Case 2:} For the case $$\frac{4a^5-2a^3+2a}{4a^4+2a^2+2}=- \frac{4b^5-2b^3+2b}{4b^4+2b^2+2}.$$
After simple computations, we have $$(a+b)\big( a^4b^4-2(a+b)^4-2(a^2b^2+ab+2)(a^2-ab+b^2)+2a^3b^3-4a^2b^2+4ab+4 \big)=0.$$ Let $c=ab$, $d=a+b$. Noting that $a\neq -b$, we have $$c^4-2d^4-2(c^2+c+2)(d^2+2c)+2c^3-4c^2+4c+4=0.$$ Simplifying the above equation gives $$d^4+(c^2+c+2)d^2+2c^4+c^3-c^2+2c-2=0.$$ Let $z=d^2\in\mathbb{F}_{5^k}^*$, we have \begin{equation}\label{eq4.3.2} z^2+(c^2+c+2)z+2c^4+c^3-c^2+2c-2=0. \end{equation} By Lemma \ref{lem2.1}, we know $z^2+(c^2+c+2)z+2c^4+c^3-c^2+2c-2$ is irreducible over $\mathbb{F}_{5^k}$ if and only if the discriminant $\Delta$ is non-square. It is easy to obtain $\Delta=3(c^2-2c+2)^2.$
{\bf Subcase 2.1:} If $c^2-2c+2\neq 0,$ clearly, $\Delta$ is a non-square element in $\mathbb{F}_{5^k}$ since $2$ is non-square. Thus Eq. \eqref{eq4.3.2} has no solution in $\mathbb{F}_{5^k}$.
{\bf Subcase 2.2:} If $c^2-2c+2=0,$ we have $c=-1$ or $c=3$.
\begin{enumerate}
\item If $c=-1,$ we get $d^2=-1$. Then we have
\begin{equation}\label{eqn3.1}
\begin{cases}
xy=-1, \\
x+y=2.
\end{cases}
\end{equation}
So $x,y$ are two roots of $u^2-2u-1=0.$ However, $u^2-2u-1$ is irreducible over $\mathbb{F}_{5^k}$ by Lemma \ref{lem2.1}, since the discriminant $\Delta=3$ is non-square. We get a contradiction.
\item If $c=3$, we have $d^2=-2$. Then \begin{equation}\label{eqn3.2}
\begin{cases}
xy=1, \\
x+y=-1.
\end{cases}
\end{equation}
So $x,y$ are two roots of $u^2+u+1=0.$ However, $u^2+u+1$ is irreducible over $\mathbb{F}_{5^k}$ by Lemma \ref{lem2.1}, since the discriminant $\Delta=2$ is non-square. We get a contradiction. \end{enumerate}
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm4.1}] Note that $\Omega_{1}\cap \Omega_{2}=\varnothing$ and $\Omega_{1}\cup \Omega_{2}=\mathbb{F}_{5^k}^{*}$. Clearly, $f(\Omega_{1})\subseteq \Omega_{1}$ and $f(\Omega_{2})\subseteq \Omega_{2}.$ Hence it suffices to prove that $f(x)$ permutes $\Omega_{1}$ and $\Omega_{2}$ separately, which is exactly the content of Lemmas \ref{lemconj1.1} and \ref{lemconj1.2}. \end{proof}
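The theorem can also be confirmed exhaustively in a genuine extension field. The script below (our own verification, not part of the proof) checks the case $k=3$ over $\mathbb{F}_{125}=\mathbb{F}_5[t]/(t^3+t+1)$; the modulus is irreducible since it has no root modulo $5$.

```python
# Exhaustive verification of the theorem for k = 3 over
# GF(125) = GF(5)[t]/(t^3 + t + 1), elements as coefficient triples.
from itertools import product

ONE, TWO = (1, 0, 0), (2, 0, 0)

def mul(a, b):
    c = [0] * 5
    for i in range(3):
        for j in range(3):
            c[i + j] = (c[i + j] + a[i] * b[j]) % 5
    for i in (4, 3):                  # reduce using t^3 = -t - 1 = 4t + 4
        c[i - 3] = (c[i - 3] + 4 * c[i]) % 5
        c[i - 2] = (c[i - 2] + 4 * c[i]) % 5
        c[i] = 0
    return tuple(c[:3])

def fpow(a, e):
    r = ONE
    while e:
        if e & 1:
            r = mul(r, a)
        a = mul(a, a)
        e >>= 1
    return r

def add(a, b):
    return tuple((x + y) % 5 for x, y in zip(a, b))

field = list(product(range(5), repeat=3))
images = set()
for x in field:
    x2 = mul(x, x)
    num = add(add(x2, mul((4, 0, 0), x)), TWO)   # x^2 - x + 2  (4 = -1 mod 5)
    den = add(add(x2, x), TWO)                   # x^2 + x + 2, nonzero on F_125
    r = mul(num, fpow(den, 123))                 # den^(125-2) = den^(-1)
    images.add(mul(x, mul(r, r)))                # f(x) = x * (num/den)^2
assert len(images) == 125                        # f permutes F_125
```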
\section{PPs of the form $x+\gamma \textup{Tr}_{n}(x^k)$}\label{trapp}
In \cite{kyureghyan2016}, the authors computed all PPs over fields $\mathbb{F}_{q^n}$ of the form $x+\gamma \textup{Tr}_{n}(x^k)$ with $\gamma\in \mathbb{F}_{q^n}^*,$ $n>1$ and $q^n<5000.$ They gave several families of PPs, which explain almost all PPs of this form. However, the following five examples arising in their computation are not covered: \begin{example}\label{ex5.1} $q=7,~ n=2, ~k=10, ~\gamma^4=1.$ \end{example}
\begin{example}\label{ex5.2} $q=9,~ n=2,~ k=33,~ \gamma^2-\gamma=1.$ \end{example}
\begin{example}\label{ex5.3} $q=27, ~n=2,~ k=261,~ (\gamma-1)^{13}=\gamma^{13}.$ \end{example}
\begin{example}\label{ex5.4} $q=9,~n=3, ~k=\{11, 19, 33, 57\},~ \gamma^4=-1.$ \end{example}
\begin{example}\label{ex5.5} $q=49,~ n=2,~ k=385, ~\gamma^5=-1.$ \end{example}
In the following theorem, we generalize Examples \ref{ex5.2} and \ref{ex5.3} into a new infinite class.
\begin{theorem} Let $q=3^r$ with $r \geq 2$, and $n=2,$ $k=3^{2r-1}+3^r-3^{r-1}.$ Then $f(x)=x+\gamma \textup{Tr}_{2}(x^k)$ is a PP over $\mathbb{F}_{q^{2}}$, where $\gamma\in \mathbb{F}_{q^{2}}$ satisfying $(\gamma-1)^{\frac{q-1}{2}}=\gamma^{\frac{q-1}{2}}$. \end{theorem}
\begin{proof} We will show that $f(x)=a$ has a unique solution for each $a\in \mathbb{F}_{q^2}$. That is, for the equation \begin{equation}\label{eq:pp1} x+\gamma (x^k+\overline{x}^k)=a, \end{equation} there exists a unique solution $x\in \mathbb{F}_{q^2}.$ Raising both sides of Eq. \eqref{eq:pp1} to the third power and noting that $\overline{x}^q=x^{q^2}=x,$ we get \begin{equation}\label{eq:pp2} x^{3}+\gamma^3 (x\overline{x}^2+\overline{x}x^2)=a^3. \end{equation} Since $(\gamma-1)^{\frac{q-1}{2}}=\gamma^{\frac{q-1}{2}},$ we can easily obtain $\gamma \in \mathbb{F}_q.$ Raising both sides of Eq. \eqref{eq:pp2} to the $q$-th power gives \begin{equation}\label{eq:pp3} \overline{x}^{3}+\gamma^3 (\overline{x}x^2+x\overline{x}^2)=\overline{a}^3. \end{equation} By Eq. \eqref{eq:pp2} and Eq. \eqref{eq:pp3}, we have $(x-\overline{x})^3=(a-\overline{a})^3.$ Since $\textup{gcd}(3,q^2-1)=1,$ we get \begin{equation}\label{eq:pp4} \overline{x}=x+\overline{a}-a. \end{equation} Substituting Eq. \eqref{eq:pp4} into Eq. \eqref{eq:pp2}, after some simple computation, we obtain \begin{equation}\label{eq:pp5} x^3+\Big(\frac{\gamma}{1-\gamma}\Big)^3(\overline{a}-a)^2x=\Big(\frac{a}{1-\gamma}\Big)^3. \end{equation} Therefore, $x$ is a solution of Eq. \eqref{eq:pp1} if and only if it is a solution of the following system of equations \begin{equation}\label{eqs1} \left\{ \begin{array}{rcl} x^3+\Big(\frac{\gamma}{1-\gamma}\Big)^3(\overline{a}-a)^2x&=&\Big(\frac{a}{1-\gamma}\Big)^3, \\[1.0ex] \overline{x}-x&=&\overline{a}-a. \end{array}\right. \end{equation}
Next, we will prove that Eq. \eqref{eqs1} has at most one solution. Suppose it has two distinct solutions $x_1$ and $x_2$. From $\overline{x_1}-x_1=\overline{a}-a=\overline{x_2}-x_2$ we get $\overline{x_1-x_2}=x_1-x_2,$ that is, $x_1-x_2\in \mathbb{F}_q.$ Write $c=x_1-x_2\in \mathbb{F}_{q}^*$; then $x_2+c$, $x_2$, and $x_2-c$ are three solutions of the first equation of Eq. \eqref{eqs1}. Then, we obtain $c^2=\Big(\frac{\gamma}{\gamma-1}\Big)^3(\overline{a}-a)^2.$ Note that $Y=\{\gamma\in \mathbb{F}_{q^2} \mid (\gamma-1)^{\frac{q-1}{2}}=\gamma^{\frac{q-1}{2}}\}\subset \mathbb{F}_q^{*}.$ Let $z=\frac{\gamma}{\gamma-1}$; then $Z=\{z \in \mathbb{F}_{q^2} \mid z^{\frac{q-1}{2}}=1\}=\langle \omega^{2(q+1)} \rangle \setminus\{1\}\subset \mathbb{F}_q^{*},$ where $\omega$ is a primitive element of $\mathbb{F}_{q^2}.$
Thus we have $c=\pm d(\overline{a}-a),$ where $d=\omega^{3j(q+1)} \in \mathbb{F}_q$ for some $j.$ Below, we consider the two cases $c=d(\overline{a}-a)$ and $c=-d(\overline{a}-a)$.
{\bf Case 1: } Raising both sides of $c=d(\overline{a}-a)$ to the $q$-th power, we have $\overline{c}=d(a-\overline{a})=-c,$ which leads to a contradiction since $c\in \mathbb{F}_{q}^*.$
{\bf Case 2: } Raising both sides of $c=-d(\overline{a}-a)$ to the $q$-th power, we have $\overline{c}=-d(a-\overline{a})=-c,$ which leads to a contradiction since $c\in \mathbb{F}_{q}^*.$
This completes the proof. \end{proof}
\begin{remark} When $q=9,~ n=2,~ k=33,$ the condition on $\gamma$ in the above theorem is slightly different from $\gamma^2-\gamma=1.$ Note that $(\gamma-1)^4=\gamma^4$ implies $(\gamma+1)(\gamma^2-\gamma-1)=0.$ It follows that $\gamma=-1$ is also a suitable solution, which is contained in a class of PPs proposed in \cite{kyureghyan2016}. \end{remark}
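The smallest instance $r=2$ of the theorem (i.e., $q=9$, $k=33$) can be verified exhaustively. The script below is our own check; it realizes $\mathbb{F}_{81}$ as $\mathbb{F}_3[t]/(t^4+t+2)$, a quartic one can verify is irreducible over $\mathbb{F}_3$, and confirms that every admissible $\gamma$ yields a permutation.

```python
# Exhaustive verification for q = 9, k = 3^3 + 3^2 - 3 = 33 over
# GF(81) = GF(3)[t]/(t^4 + t + 2), elements as coefficient 4-tuples.
from itertools import product

ONE = (1, 0, 0, 0)

def mul(a, b):
    c = [0] * 7
    for i in range(4):
        for j in range(4):
            c[i + j] = (c[i + j] + a[i] * b[j]) % 3
    for i in range(6, 3, -1):          # reduce using t^4 = -t - 2 = 2t + 1
        c[i - 4] = (c[i - 4] + c[i]) % 3
        c[i - 3] = (c[i - 3] + 2 * c[i]) % 3
        c[i] = 0
    return tuple(c[:4])

def fpow(a, e):
    r = ONE
    while e:
        if e & 1:
            r = mul(r, a)
        a = mul(a, a)
        e >>= 1
    return r

def add(a, b):
    return tuple((x + y) % 3 for x, y in zip(a, b))

field = list(product(range(3), repeat=4))
# gamma must satisfy (gamma - 1)^4 = gamma^4; note gamma - 1 = gamma + 2 in char 3
gammas = [g for g in field if fpow(add(g, (2, 0, 0, 0)), 4) == fpow(g, 4)]
assert gammas                                    # e.g. gamma = -1 qualifies
for g in gammas:
    images = set()
    for x in field:
        xk = fpow(x, 33)
        trace = add(xk, fpow(xk, 9))             # Tr_{81/9}(u) = u + u^9
        images.add(add(x, mul(g, trace)))
    assert len(images) == 81                     # f(x) = x + gamma*Tr(x^33) is a PP
```

The admissible $\gamma$ found by the search are exactly $\gamma=-1$ and the two roots of $\gamma^2-\gamma-1$, matching the remark above.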
\section{Conclusion}\label{conclusion} This paper demonstrates some new results on permutation polynomials. We prove two conjectures on permutation polynomials proposed recently by Wu and Li \cite{WuL}. Moreover, we give a new class of trinomial PPs of the form $x+\gamma \textup{Tr}_{n}(x^k)$, which generalizes two examples of \cite{kyureghyan2016}, namely, Examples \ref{ex5.2} and \ref{ex5.3}.
One question remains at the end of this paper: how can Examples \ref{ex5.1}, \ref{ex5.4} and \ref{ex5.5} be extended to infinite classes? If this can be done, we will have a complete understanding of PPs of the form $x+\gamma \textup{Tr}_{n}(x^k)$ over fields $\mathbb{F}_{q^n}$ with $\gamma\in \mathbb{F}_{q^n}^*,$ $n>1$ and $q^n<5000.$ Accordingly, we propose the following problem:
\begin{problem} Is it possible to extend Examples \ref{ex5.1}, \ref{ex5.4} and \ref{ex5.5} to infinite classes with the form $x+\gamma \textup{Tr}_{n}(x^k)$? \end{problem}
\end{document} |
\begin{document}
\begin{frontmatter}
\title{Dynamic modeling of enzyme controlled metabolic networks using a receding time horizon}
\thanks[footnoteinfo]{H.L. and A.-M.R are funded by ERANET for Systems Biology ERASysApp, project ROBUSTYEAST, BMBF grant
IDs 031L0017A and 031L0017B.}
\author[First]{Henning Lindhorst} \author[Second]{Alexandra-M. Reimers} \author[Third]{Steffen Waldherr}
\address[First]{Institute for Automation Engineering, Otto-von-Guericke-Universit\"{a}t Magdeburg; (e-mail: [email protected]).} \address[Second]{Department of Mathematics and Computer Science, Freie Universit\"{a}t Berlin; (email: [email protected])} \address[Third]{KU Leuven, Department of Chemical Engineering; (e-mail: [email protected])}
\begin{abstract}
Microorganisms have developed complex regulatory features controlling their response and internal adaptation to changing environments.
When modeling these organisms we usually do not have full understanding of the regulation and rely on substituting it with an
optimization problem using a biologically reasonable objective function.
The resulting constraint-based methods like the Flux Balance Analysis (FBA)
and Resource Balance Analysis (RBA)
have proven to be powerful tools to predict growth rates, by-products, and pathway usage for fixed environments.
In this work, we focus on the dynamic enzyme-cost Flux Balance Analysis (deFBA), which models the environment, biomass products, and their composition dynamically and contains reaction rate constraints based on enzyme capacity.
We extend the original deFBA formalism to include storage molecules and biomass-related maintenance costs.
Furthermore, we present a novel usage of the receding prediction horizon as used in Model Predictive Control (MPC) in the deFBA framework, which we call the short-term deFBA (sdeFBA).
This way we eliminate some mathematical artifacts arising from the formulation as an optimization problem and gain access to new applications in MPC schemes.
A major contribution of this paper is also a systematic approach for choosing the prediction horizon and identifying conditions to ensure solutions grow exponentially.
We showcase the effects of using the sdeFBA with different horizons through a numerical example. \end{abstract}
\begin{keyword} model predictive control, metabolic engineering, gene expression, linear optimization \end{keyword}
\end{frontmatter}
\section{Introduction} Microorganisms encounter a vast array of environmental conditions and have developed complex regulatory mechanisms to cope with them. Despite extensive research in this area, most regulatory features are still unknown. An effective alternative approach is to substitute the regulation with an optimization problem, as originally done in the Flux Balance Analysis (FBA) \citep{varma1994metabolic}. This method models the organism as a metabolic network in steady state and maximizes a single biomass flux. This approach led to a family of methods focusing on different aspects.
Initial steps towards dynamic models with the ability to react to changing environments were made with the dynamic FBA \citep{mahadevan2002dynamic}. However, this method still lacks a connection between reaction rates and the enzyme levels necessary to realize them.
The first optimization method to take this into account is the Resource Balance Analysis (RBA) \citep{goelzer2011cell}. In this method the growth rate of the cell is optimized to a fixed medium composition while enzymatic flux constraints limit uptake and metabolic reaction rates. The combination of these enzymatic constraints and a dynamic approach resulted in the dynamic enzyme-cost Flux Balance Analysis (deFBA) presented in \citep{waldherr2015}. The deFBA predicts all reaction rates and enzymatic levels for given nutrient dynamics on a chosen time frame. An application of the deFBA to a genome scale model can be found in \citep{reimers2017}.
In a recent study \citep{waldherr2017} we learned that the fixed end-time in the deFBA can lead to artificial solutions not usually observed in the modeled organisms. Furthermore, we plan to use the deFBA inside a model predictive controller to maximize certain biomass products by manipulating the medium composition. Thus, in this work we present the \emph{short-term deFBA} (sdeFBA), which combines the deFBA with the idea of a receding prediction horizon. This also allows us to solve problems with large end-times piecewise and in some cases reduces the computational cost of the simulation.
\section{Dynamic enzyme-cost Flux Balance Analysis}\label{sec:deFBA} \subsection{Constructing the optimization problem} In this section we present the basics of the deFBA and showcase the extensions of our current formulation in comparison to the original one \citep{waldherr2015}. At the heart of deFBA models lies a metabolic reaction network consisting of $n$ biochemical species and $m$ reactions converting the species into each other. We further classify the species depending on their physical location and their biological function as either \begin{itemize}
\item \emph{external species} $Y \in \mathbb{R}^{n_y}_{\geq 0}$ outside of the cell (carbon sources, oxygen, etc.),
\item \emph{metabolic species} $X \in \mathbb{R}^{n_x}_{\geq 0}$ which are intermediates and intracellular products of the metabolism (amino acids, ATP, etc.),
\item \emph{storage species} $C \in \mathbb{R}^{n_c}_{\geq 0}$ which are allowed to accumulate in the model (glycogen, starch, etc.),
\item \emph{macromolecules} $P \in \mathbb{R}^{n_p}_{\geq 0}$ representing biomass components (enzymes, cell walls, DNA, etc.), \end{itemize} with $n=n_y + n_x + n_p + n_c$. We measure all species in molar amounts, e.g., $[X]=$ mol.
The macromolecules $P$ represent the complete reproductive machinery of the organism and can be further divided into a catalytic part, enabling reactions via enzymes and taking care of reproduction via the ribosome, and a non-catalytic part, such as cell walls, DNA, etc. To keep the notation simple we address both kinds with $P$. Most organisms use some of the available nutrients to create an energy storage, which can be used to survive phases of starvation, e.g., production of starch during the day for consumption at night.
The storage species $C$ can either be some macromolecules or simply metabolites allowed to accumulate.
The deFBA assumes the network maximizes biomass accumulation over time. Thus, we assign the accumulating species $C, P$ their molecular weights $w_i, ~[w_i]=$ g/mol and define the \emph{total biomass} $B$ as \begin{align}
B (t) = w_C^T C(t) + w_P^T P(t), \end{align} depending on the time $t$, $[t] =$ h. As recent studies have shown \citep{waldherr2017}, the inclusion of non-catalytic biomass in the objective may lead to unexpected results if these species are very ``cheap'' to produce in comparison to their weights $w_C$. Thus, we additionally define the \emph{objective biomass} $B_o$ via the \emph{objective weights} $b_i$, which in most cases coincide with the molecular weights, but can be set to zero if necessary \begin{align}\label{eq:objective_biomass} B_o(t) = b_C^T C(t) + b_P^T P(t). \end{align}
The reactions $R$ between the species are subdivided into the following types: \begin{itemize}
\item \emph{exchange reactions} $v_Y \in \mathbb{R}^{m_y}$ exchanging matter with the outside,
\item \emph{metabolic reactions} $v_X \in \mathbb{R}^{m_x}$ transforming metabolites into one another,
\item \emph{storage reactions} $v_C \in \mathbb{R}^{m_c}$ converting metabolites in storage and vice versa,
\item \emph{biomass reactions} $v_P \in \mathbb{R}^{m_p}$ producing macromolecules, \end{itemize} with $m = m_y + m_x + m_c + m_p$. We write shortly $v = (v_Y^T, v_X^T, v_C^T, v_P^T)^T$, $[v] = $ mol/h. The dynamics of the species are then given by the \emph{stoichiometric matrix} $S \in \mathbb{R}^{n,m}$ \begin{align}\label{eq:full_dynamics} \begin{split} \frac{\mathrm{d}}{\mathrm{d}t}\begin{pmatrix} Y(t) \\ X(t) \\ C(t) \\ P(t) \end{pmatrix} & = \begin{pmatrix}
S_{Y,Y} & 0 & 0 & 0 \\
S_{X,Y} & S_{X,X} & S_{X,C} & S_{X,P} \\
0 & 0 & S_{C,C} & 0 \\
0 & 0 & 0 & S_{P,P} \end{pmatrix} \begin{pmatrix} v_Y(t) \\ v_X(t) \\ v_C(t) \\ v_P(t)\end{pmatrix}\\ & = \begin{pmatrix} S_Y \\ S_X \\ S_C \\ S_P \end{pmatrix}\begin{pmatrix} v_Y(t) \\ v_X(t) \\ v_C(t) \\ v_P(t) \end{pmatrix} = S v(t), \end{split} \end{align} with the submatrices $S_{I,J} \in \mathbb{R}^{n_I,m_J}, ~I,J \in \{ Y,X,C,P \}$. Following \citep{waldherr2015}, the metabolism is modelled to operate in quasi steady-state. This translates to the constraint \begin{align}\label{eq:qss} \begin{split} \frac{\mathrm{d}}{\mathrm{d}t}X(t) & = S_X v(t) = 0, ~ \forall t \geq 0. \\
\end{split} \end{align}
The enzymatic biomass catalyzes the reactions in the network and the maximal rates are determined by the reaction-specific \emph{catalytic constants} (or turnover numbers) $k_{{\mathrm{cat}},\pm j}$, $j \in \{1, \ldots, m\}$, $[k_{{\mathrm{cat}},\pm j}]=\mathrm{h}^{-1}$ and the amount of the respective enzyme ${P_i}$. We differentiate between the \emph{forward value} $k_{{\mathrm{cat}}, +j}$ and the \emph{backward value} $k_{{\mathrm{cat}}, -j}$.
The bounds for the reactions rates are given by \begin{align} -v_j \leq k_{{\mathrm{cat}},- j} {P_i},~ v_j \leq k_{{\mathrm{cat}},+ j} {P_i}. \end{align} Furthermore, some enzymes are capable of catalyzing multiple reactions, which we describe with the sets \begin{align}
\mathrm{cat}({P_i}) = \{ v_j~|~ {P_i} \text{ catalyzes } v_j \}. \end{align} The corresponding constraint with respect to reversibility of the reactions then reads \begin{align}\label{eq:ecc_base_constraint}
\sum_{v_j \in \mathrm{cat}({P_i})} \left| \frac{v_j(t)}{k_{\mathrm{cat},\pm j}} \right| \leq {P_i}(t),~ \forall t \geq 0. \end{align}
We call the matrix form the \emph{enzyme capacity constraint} \begin{align}\label{eq:ecc}
H_c v(t) \leq H_{e} P(t), ~ \forall t \geq 0, \end{align} with the filter matrix $H_{e}$. For more detail on the construction of these matrices see \citep{waldherr2015}. The constraint \eqref{eq:ecc} is the central constraint in deFBA as it limits growth.
In regular FBA the growth rate is constrained by biomass-independent constraints \begin{align}\label{eq:box_constraints} v_{\mathrm{min}} \leq v(t) \leq v_{\max} \end{align} derived from measured reaction rates. Because all reactions can reach arbitrarily large rates provided enough enzyme is present (cf.~\eqref{eq:ecc}), we make the following assumption. \begin{assumption}\label{as:reversibility}
The biomass-independent constraints \eqref{eq:box_constraints} are only used to define the reversibility of the reactions with $v_{\min}, v_{\max} \in \{\pm \infty, 0\}^m$. \end{assumption} Any organism needs structural macromolecules to keep working, e.g., the cell wall separating it from the outside. We express this necessity by enforcing certain fractions $\psi_s \in [0,1)$ of the total biomass $B(t)$ to be made of structural components, e.g., for a structural macromolecule ${P_s}$ \begin{align}\label{eq:biomass_composition_base} \psi_s B(t) \leq {P_s}(t),~ \forall t\geq 0. \end{align} The extension of \eqref{eq:biomass_composition_base} to the network level can be expressed by collecting the individual constraints into the \emph{biomass composition matrix} $H_b$ with \begin{align}\label{eq:biomass_composition} H_b \begin{pmatrix} C(t) \\ P(t) \end{pmatrix} \leq 0, \end{align} where the rows of $H_b$ are derived from \eqref{eq:biomass_composition_base}. We call \eqref{eq:biomass_composition} the \emph{biomass composition constraint}. Furthermore, we can enforce specific reaction rates \begin{align}\label{eq:maintenance_base} \begin{split}
v_m(t) &\geq \phi_m B(t), ~ \forall t \geq 0\\ \end{split} \end{align} with the \emph{maintenance coefficient} $\phi_m \in [0,1)$ to model maintenance reactions scaling with biomass, e.g., re-synthesis of lipids. Hence, we call \eqref{eq:maintenance} the \emph{maintenance constraint} \begin{align}\label{eq:maintenance} v(t) \geq H_m \begin{pmatrix} C(t) \\ P(t) \end{pmatrix}, \end{align} with the rows of $H_m$ corresponding to $\phi_m (w_C^T, w_P^T)$ (cf.~\eqref{eq:maintenance_base}). To construct the full deFBA problem, we introduce an \emph{end-time} $t_{\mathrm{end}}>0$ and define the objective function as accumulation of the objective biomass \eqref{eq:objective_biomass} as \begin{align} \begin{split}\label{eq:deFBA_problem}
\max_{v(t)} & \int_0^{t_{\mathrm{end}}} B_o(t)\diff t \\
\mathrm{s.t.}~& \eqref{eq:qss}, \eqref{eq:ecc},\eqref{eq:box_constraints}, \eqref{eq:biomass_composition},\eqref{eq:maintenance}; \forall t \in [0,t_{\mathrm{end}}]. \end{split} \end{align} This dynamic optimization problem can be solved by discretization with a collocation method. The result is a linear program (LP) for which efficient, specialized solvers are available.
With respect to the computational and numerical details of solving such problems, we refer the reader to \citep{waldherr2015}, and to \citep{reimers2017} for a large scale example. We provide an implementation of the deFBA model class in Python 2.7\footnote{\url{https://bitbucket.org/hlindhor/defba-python-package}}, which imports/exports models using libSBML \citep{bornstein2008libsbml} and the resource allocation modeling (RAM) annotations \citep{ram2017}. A step-by-step guide for the generation of deFBA models is described in \citep{reimers2017generating}.
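To give a concrete picture of the discretization step, the following sketch shows how an Euler-type discretization turns a deFBA-like dynamic optimization into a linear program. This is not the paper's implementation: the network is a hypothetical single-enzyme toy model (an enzyme $E$ catalyzing its own production from an unlimited nutrient, with capacity constraint $v \leq k_{\mathrm{cat}} E$), and all parameter values are assumed for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy model: dE/dt = v, subject to v(t) <= kcat * E(t).
# Maximizing  int_0^T E dt  after forward-Euler discretization is an LP.
kcat, d, N, E0 = 1.0, 0.1, 20, 1.0   # turnover number, step size, steps, E(0)

# decision vector x = [v_0 .. v_{N-1}, E_1 .. E_N]
n = 2 * N
c = np.zeros(n)
c[N:] = -d                            # maximize d * sum(E_k) -> minimize -d*sum
A_eq = np.zeros((N, n)); b_eq = np.zeros(N)
A_ub = np.zeros((N, n)); b_ub = np.zeros(N)
for k in range(N):
    # dynamics: E_{k+1} - E_k - d*v_k = 0   (E_0 is fixed to E0)
    A_eq[k, k] = -d
    A_eq[k, N + k] = 1.0
    if k == 0:
        b_eq[k] = E0
    else:
        A_eq[k, N + k - 1] = -1.0
    # enzyme capacity: v_k - kcat*E_k <= 0
    A_ub[k, k] = 1.0
    if k == 0:
        b_ub[k] = kcat * E0
    else:
        A_ub[k, N + k - 1] = -kcat

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * n)
E_end = res.x[-1]   # optimum saturates v_k = kcat*E_k, i.e. E_k = E0*(1+d*kcat)^k
```

The optimal LP solution saturates the capacity constraint in every step, reproducing the discrete analogue of exponential (balanced) growth.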
\subsection{Important growth modes} There are multiple reasons to discard the large end-time $t_{\mathrm{end}}$ in favor of a shorter prediction horizon $0 < t_{\mathrm{p}} \ll t_{\mathrm{end}}$ and to implement an iterative version of the original problem \eqref{eq:deFBA_problem}.
Foremost, the deFBA can produce \emph{linear phases}, defined as \begin{align} \diff B_o(t) / \diff t = \lambda, \end{align} with the constant linear growth rate $\lambda \geq 0$. These phases can occur if some macromolecules are very ``cheap'' in comparison to others. The model then uses all resources to produce solely the cheap molecules, regardless of their utility. These phases can be observed either when using very small end-times or as a means to top off the objective value near nutrient depletion or the end-time $t_{\mathrm{end}}$ \citep{waldherr2017}.
We regard the linear phases as mathematical artifacts of the optimization method itself as we do not know of biological examples for this behavior. Thus, one goal of the prediction horizon is to eliminate these linear arcs in the solutions.
Another important growth mode, called a \emph{balanced phase}, is defined by \begin{align}
\diff B_o(t) / \diff t = \mu_{{\mathrm{bal}}} B_o(t) , \end{align} with the constant exponential growth rate $\mu_{{\mathrm{bal}}} \in \mathbb{R}_{\geq 0}$ depending on nutrient availability and the current biomass composition. In these phases the composition of the biomass stays fixed as it is already optimal for the environment. A dynamic solution generated by the deFBA typically consists of a series of balanced growth phases and the transitions between these.
\section{Short-term deFBA} \subsection{Implementing the receding time horizon}\label{sec:receding_prediction_horizon} The implementation of the receding prediction horizon $t_{\mathrm{p}}$ is straightforward. We split the time interval $[0, t_{\mathrm{end}}]$ into intervals $[t_k, t_{k+1}]$ using the time grid $\Delta_t (t_{\mathrm{c}}) = \{ t_k = kt_{\mathrm{c}}~\vert~k \in \mathbb{N} \}$ defined by the \emph{iteration time} $t_{\mathrm{c}} \in (0, t_{\mathrm{p}})$. Then we replace the original deFBA problem \eqref{eq:deFBA_problem} with a series of small problems we call the \emph{short-term deFBA} (sdeFBA). With given values ${Y}^{t_k}$, $C^{t_k}$, $P^{t_k}$, these read \begin{subequations}\label{eq:sdeFBA_problem} \begin{align} \max_{v(t)} & \int_{t_k}^{t_{k}+t_{\mathrm{p}}} B_o(t)\diff t \label{eq:sdeFBA_problem_objective}\\ \mathrm{s.t.}\; &\forall t \in [t_k,t_{k}+t_{\mathrm{p}}] \\ & \frac{\mathrm{d}}{\mathrm{d}t} \begin{pmatrix} Y(t) \\ C(t) \\ P(t) \end{pmatrix} = \begin{pmatrix}S_{Y} \\ S_{C} \\ S_{P} \end{pmatrix} v(t) \label{eq:sdeFBA_problem_dynamics}\\ & S_X v(t) = 0 \label{eq:sdeFBA_problem_steady_state}\\ & H_c v(t) \leq H_e P(t) \label{eq:sdeFBA_problem_ecc}\\ & H_b \begin{pmatrix} C(t) \\ P(t) \end{pmatrix} \leq 0 \label{eq:sdeFBA_problem_bcc}\\ & v(t) \geq H_m \begin{pmatrix} C(t) \\ P(t) \end{pmatrix}\label{eq:sdeFBA_problem_maintenance}\\ & v_{\min} \leq v(t) \leq v_{\max} \label{eq:sdeFBA_problem_box}\\ & Y(t),\;C(t),\;P(t)\geq 0 \label{eq:sdeFBA_problem_positivity}\\ & Y(t_k) = {Y}^{t_k}, ~ C(t_k)=C^{t_k}, ~ P(t_k) = P^{t_k}. \label{eq:sdeFBA_problem_continous} \end{align} \end{subequations} For given initial values $Y_0, C_0, P_0$, we solve the problem iteratively starting at time zero and connecting the iterations via \eqref{eq:sdeFBA_problem_continous}. 
The solution trajectories $Y^*(t)$, $C^*(t)$, $P^*(t)$, $v^*(t)$, ~$0 \leq t \leq t_{\mathrm{end}}$ are generated by appending the calculated slices over the iteration time $[t_k , t_k + t_{\mathrm{c}}]$ after each iteration.
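The receding-horizon iteration can be sketched schematically as follows. The helper \texttt{solve\_horizon} is a hypothetical stand-in for solving problem \eqref{eq:sdeFBA_problem} on one horizon; here it simply returns a balanced-growth trajectory (assumed rate) so that the loop is runnable.

```python
import numpy as np

# Placeholder for solving the sdeFBA problem on [t_k, t_k + t_p]:
# returns the objective biomass along a local time grid (balanced growth).
def solve_horizon(B_k, t_grid, mu=0.5):
    return B_k * np.exp(mu * t_grid)

t_end, t_p, t_c, d = 6.0, 3.0, 1.0, 0.1   # horizon parameters (assumed values)
B = 1.0                                   # initial objective biomass
trajectory, times = [], []
t_k = 0.0
while t_k < t_end:
    horizon = np.arange(0.0, t_p + d / 2, d)   # local grid on [0, t_p]
    sol = solve_horizon(B, horizon)
    idx = int(round(t_c / d))                  # grid index of the iteration time
    trajectory.extend(sol[:idx])               # keep only the first t_c slice
    times.extend(t_k + horizon[:idx])
    B = sol[idx]                               # restart from the last kept state
    t_k += t_c
```

After the loop, \texttt{trajectory} holds the appended slices over $[0, t_{\mathrm{end}}]$, mirroring how the sdeFBA solution trajectories are assembled.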
\subsection{Choosing the prediction horizon}\label{sec:choosing_the_prediction_horizon} \begin{figure}
\caption{Illustration for choosing the prediction horizon. Upper bound on linear growth shown in red ($\circ$), balanced growth in blue ($\square$), and optimal solution in brown (x).}
\label{fig:lin_vs_exp}
\end{figure} We already stated that the native growth mode for metabolic networks is exponential growth, while linear phases are undesired. Our analysis in \citep{waldherr2017} shows that linear solutions can arise on very short time scales, as exponential solutions need a longer time horizon to outperform them. Hence, we must choose the prediction horizon $t_p$ large enough that linear solutions become sub-optimal. At the same time we want to keep $t_p$ as small as possible to minimize computational cost. We suggest determining the prediction horizon by comparing a strict upper bound on linear growth with an arbitrary balanced growth phase. The idea is sketched in Figure \ref{fig:lin_vs_exp}. This way we ensure the existence of at least piecewise exponential solutions on the time horizon $t_{\mathrm{p}}$. This calculation depends on two sets of variables: the available nutrients and the initial biomass composition $P_{\mathrm{init}}, ~C_{\mathrm{init}}$ at time zero (or $t_k$ in the sdeFBA). To eliminate the influence of nutrient availability in this first investigation we make the following assumption. \begin{assumption}\label{as:nutrients}
All external components $Y$ are limitlessly available. \end{assumption}
We define the \emph{initial objective biomass} as \begin{align} B_{{\mathrm{init}}} = b_C^T C_{\mathrm{init}} + b_P^T P_{\mathrm{init}}. \end{align}
First we identify a strict upper bound on linear growth dependent on the initial biomass amount by constructing an optimization problem inspired by the regular FBA \citep{varma1994stoichiometric}. We assume a linear growth phase $\diff B_o(t)/ \mathrm{d} t = \lambda$ and maximize the linear growth rate \begin{align} \lambda &= b^T_C S_C v_{\mathrm{lin}} + b^T_P S_P v_{\mathrm{lin}}. \end{align} Following Assumption \ref{as:nutrients}, we ignore the nutrient dynamics. The optimization problem is then constructed as \begin{subequations}\label{eq:upper_lin_bound} \begin{align} \lambda_s(B_{{\mathrm{init}}}) = \underset{v_{\mathrm{lin}},P_{\mathrm{lin}}, C_{\mathrm{lin}}}{\max}~~ & b^T_C S_{C} v_{\mathrm{lin}} + b^T_P S_{P} v_{\mathrm{lin}} \label{eq:upper_lin_bound_obj}\\ \mathrm{s.t.~~} & S_X v_{\mathrm{lin}} = 0 \label{eq:upper_lin_bound_qss}\\ & H_c v_{\mathrm{lin}} - H_e P_{\mathrm{lin}} \leq 0 \label{eq:upper_lin_bound_ecc}\\ & H_b \begin{pmatrix} C_{\mathrm{lin}} \\ P_{\mathrm{lin}} \end{pmatrix} \leq 0 \label{eq:upper_lin_bound_bcc}\\ & w_C^T C_{\mathrm{lin}} + w_P^T P_{\mathrm{lin}} = B_{\mathrm{init}} \label{eq:upper_lin_bound_biomass}\\ & v_{\mathrm{lin}} \geq H_m \begin{pmatrix} C_{\mathrm{lin}} \\ P_{\mathrm{lin}} \end{pmatrix} \label{eq:upper_lin_bound_maintenance} \\ & v_{\min} \leq v_{\mathrm{lin}} \leq v_{\max}, \label{eq:upper_lin_bound_box} \end{align} \end{subequations} with \eqref{eq:upper_lin_bound_biomass} fixing the initial amount of biomass to $B_{\mathrm{init}}$. The value of the \emph{specific growth rate} $\lambda_s(B_{\mathrm{init}})$ is dependent on the amount of biomass. Instead we use the \emph{regularized rate} \begin{align} \lambda_r = \frac{\lambda_s(B_{\mathrm{init}})}{B_{\mathrm{init}}}. \end{align} For easier reading we omit the dependency of $\lambda_s$ on the biomass.
We construct the linear solution as \begin{align} P(t) = P_{\mathrm{lin}} + S_P v_{\mathrm{lin}} t,~ C(t)=C_{\mathrm{lin}} + S_C v_{\mathrm{lin}} t.\label{eq:lin_solution} \end{align} This solution is usually not feasible for the original sdeFBA problem \eqref{eq:sdeFBA_problem} with $t_{\mathrm{p}}>0$ as violations of \eqref{eq:sdeFBA_problem_bcc} and \eqref{eq:sdeFBA_problem_maintenance} are to be expected with increase in biomass over time.
As a next step, we identify a balanced growth phase to use as a lower bound for optimal exponential growth by optimizing the static growth rate $\mu \geq 0$ at $t=0$
\begin{align} \frac{\diff}{\diff t}\begin{pmatrix} C_{\mathrm{init}} \\ P_{\mathrm{init}} \end{pmatrix} = \mu \begin{pmatrix} C_{\mathrm{init}} \\ P_{\mathrm{init}} \end{pmatrix}. \end{align}
The resulting optimization problem reads \begin{subequations}
\begin{align}
\mu_{{\mathrm{bal}}} = & \max_{v_{{\mathrm{bal}}}} \mu \\
\text{s.t.~~}& \mu \begin{pmatrix} C_{\mathrm{init}} \\ P_{\mathrm{init}} \end{pmatrix} =\begin{pmatrix} S_C \\ S_P \end{pmatrix} v_{{\mathrm{bal}}} \\
& S_X v_{{\mathrm{bal}}} = 0 \\
& H_c v_{{\mathrm{bal}}} - H_e P_{\mathrm{init}} \leq 0 \\
& v_{{\mathrm{bal}}} \geq H_m \begin{pmatrix} C_{\mathrm{init}} \\ P_{\mathrm{init}} \end{pmatrix} \\
& v_{\min} \leq v_{{\mathrm{bal}}} \leq v_{\max}.
\end{align} \end{subequations}
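Since $C_{\mathrm{init}}$ and $P_{\mathrm{init}}$ are fixed, this problem is linear in $(\mu, v_{\mathrm{bal}})$ and can be handed to any LP solver. A minimal sketch for a hypothetical one-enzyme network (not the paper's model; values assumed) using \texttt{scipy.optimize.linprog}:

```python
from scipy.optimize import linprog

# Toy balanced-growth LP: maximize mu subject to
#   mu * P_init = v      (balanced dynamics, single enzyme)
#   v <= kcat * P_init   (enzyme capacity)
# Decision vector x = (mu, v).
kcat, P_init = 1.0, 2.0
c = [-1.0, 0.0]                         # minimize -mu  <=>  maximize mu
A_eq = [[P_init, -1.0]]; b_eq = [0.0]   # mu*P_init - v = 0
A_ub = [[0.0, 1.0]];     b_ub = [kcat * P_init]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None), (0, None)])
mu_bal = res.x[0]   # for this toy network the optimum is mu_bal = kcat
```

For this one-enzyme toy the optimizer simply recovers the turnover number as the balanced growth rate, as expected from the capacity constraint.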
The trajectories of the balanced growth phase are derived by solving the initial value problem \begin{align}\label{eq:bal_ode}
\frac{\diff}{\diff t}\begin{pmatrix} C(t) \\ P(t) \end{pmatrix} & = \mu_{\mathrm{bal}} \begin{pmatrix} C(t) \\ P(t) \end{pmatrix}, \end{align} with $C(0)=C_{\mathrm{init}}$, $P(0)=P_{\mathrm{init}}$. These trajectories are realized by the rates $v(t) = v_{\mathrm{bal}} e^{\mu_{\mathrm{bal}} t}$ and represent a feasible solution to \eqref{eq:sdeFBA_problem}, if Assumption \ref{as:nutrients} holds and the initial values are feasible \begin{align}
H_b \begin{pmatrix} C_{\mathrm{init}} \\ P_{\mathrm{init}} \end{pmatrix} \leq 0. \end{align}
We can calculate a suitable prediction horizon $t_{\mathrm{p}}$ by comparing the balanced solution \eqref{eq:bal_ode} to the linear one \eqref{eq:lin_solution}.
The integral of the biomass curve for \eqref{eq:bal_ode} is derived as \begin{align} \begin{split} IB_{{\mathrm{bal}}}(t, \mu_{\mathrm{bal}}, B_{\mathrm{init}}) & = \int_0^{t} b_C^T C(t) + b_P^T P(t) ~\mathrm{d}t \\
&= \mu_{{\mathrm{bal}}}^{-1} B_{\mathrm{init}} (e^{\mu_{{\mathrm{bal}}} {t}} - 1) \label{eq:exp_solution}
\end{split} \end{align} and the corresponding integral for the linear case is \begin{align} IB_{{\mathrm{lin}}}({t},\lambda_r,B_{\mathrm{init}}) & = \int_0^{t} B_{\mathrm{lin}}(t) ~\mathrm{d}t \\
& = \frac{\lambda_r B_{\mathrm{init}}}{2} {t}^2 + B_{\mathrm{init}}~ t. \label{eq:linear_solution} \end{align} We calculate the prediction horizon by solving \begin{align}\label{eq:determine_tp} IB_{\mathrm{lin}}(t_{\mathrm{p}}, \lambda_r,B_{\mathrm{init}}) - IB_{{\mathrm{bal}}}(t_{\mathrm{p}},\mu_{{\mathrm{bal}}}, B_{\mathrm{init}})= 0 \end{align} for $t_{\mathrm{p}}$. By looking at the slopes of the biomass curves at time zero, we can deduce that such a $t_{\mathrm{p}} >0$ exists if and only if $\lambda_r > \mu_{{\mathrm{bal}}}$. Otherwise, the model does not tend to the linear solution and we can choose $t_{\mathrm{p}}$ arbitrarily. \begin{assumption}\label{as:lambdageqmu} The linear growth rate is larger than the balanced growth rate $\lambda_r > \mu_{{\mathrm{bal}}}$. \end{assumption}
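Equation \eqref{eq:determine_tp} is transcendental, but since $B_{\mathrm{init}}$ cancels it reduces to a scalar root-finding problem that is easy to solve numerically. A sketch with hypothetical rates (values assumed for illustration, $\lambda_r > \mu_{\mathrm{bal}}$):

```python
import numpy as np
from scipy.optimize import brentq

# After cancelling B_init, eq. (determine_tp) reads
#   f(t) = lam_r*t^2/2 + t - (exp(mu_bal*t) - 1)/mu_bal = 0.
lam_r, mu_bal = 1.0, 0.5   # hypothetical regularized linear / balanced rates

def f(t):
    return lam_r * t**2 / 2 + t - np.expm1(mu_bal * t) / mu_bal

# f > 0 just above 0 (since lam_r > mu_bal) while the exponential term
# dominates for large t, so a positive root exists; bracket and solve.
t_hi = 1.0
while f(t_hi) > 0:
    t_hi *= 2
t_p = brentq(f, 1e-6, t_hi)   # prediction horizon for these rates
```

The computed $t_{\mathrm{p}}$ is the time at which the balanced-growth integral overtakes the best possible linear one.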
An optimal solution of \eqref{eq:sdeFBA_problem} on $[0, t_{\mathrm{p}}]$ can only produce an objective value equal to or larger than $IB_{{\mathrm{bal}}}(t_{\mathrm{p}})$, as anything less would contradict the optimality principle. Hence, we conclude that this optimal solution must contain a superlinear (typically exponential) arc as shown in Figure \ref{fig:lin_vs_exp}.
\begin{remark}\label{rm:recalculate} Calculating $t_{\mathrm{p}}$ is strongly dependent on the initial biomass $P_{\mathrm{init}}, C_{\mathrm{init}}$. Hence, during an sdeFBA run the prediction horizon should be recalculated after each iteration step. \end{remark}
\subsection{Choosing the iteration time}\label{sec:choosing_the_control_horizon} To keep the computational cost of an sdeFBA run as small as possible, we choose the iteration time $t_{\mathrm{c}}$ as large as possible such that the solution is still of exponential form. Hence, we show that each solution of \eqref{eq:sdeFBA_problem} starts with an exponential phase. For this we assume a solution starting with a linear phase \begin{equation}\label{eq:B_mix}
B_{\mathrm{mix}}(t) = \left\{ \begin{array}{ll}
B_{\mathrm{init}} \lambda_r t +B_{\mathrm{init}} & 0 \leq t \leq t_{\mathrm{s}} \\
B_{\mathrm{init}} (\lambda_r t_{\mathrm{s}} + e^{\mu_{{\mathrm{bal}}}(t - t_{\mathrm{s}})}) & t_{\mathrm{s}} < t \leq t_{\mathrm{p}},
\end{array} \right. \end{equation} with the switching time $t_{\mathrm{s}}$ and assume Assumption \ref{as:lambdageqmu} holds. This solution is constructed on the assumption that the linear growth phase does not benefit the autocatalytic capabilities of the system.
\begin{theorem}
If Assumption \ref{as:lambdageqmu} holds, any optimal solution curve $B_{\mathrm{mix}}$ \eqref{eq:B_mix} consists only of a single linear phase with $t_{\mathrm{s}} = t_{\mathrm{p}}$. \end{theorem} \begin{pf} We identify the optimal switching time by solving \begin{align}\label{eq:max_Bmix} \max_{t_{\mathrm{s}}} \int_0^{t_{\mathrm{p}}} B_{\mathrm{mix}}(t) ~\diff t \end{align} analytically by finding local extrema via the first order derivative with respect to $t_{\mathrm{s}}$ \begin{align}\label{eq:zero_theorem_1} \begin{split}
0 & = \frac{\diff} {\diff t_{\mathrm{s}}} \int_0^{t_{\mathrm{p}}} B_{\mathrm{mix}}(t) ~\diff t\\
& = B_{\mathrm{init}}(\lambda_r(t_{\mathrm{p}} - t_{\mathrm{s}}) + 1 - e^{\mu_{{\mathrm{bal}}}(t_{\mathrm{p}}-t_{\mathrm{s}})}), \end{split} \end{align} with the obvious zero $\bar{t}_{\mathrm{s}} = t_{\mathrm{p}}$. Evaluating the second derivative at this point gives
\begin{small} \begin{align} \begin{split}
\left.\frac{\diff^2} {\diff t_{\mathrm{s}}^2} \int_0^{t_{\mathrm{p}}} B_{\mathrm{mix}} (t) ~\diff t\right|_{\bar{t}_{\mathrm{s}}} & = B_{\mathrm{init}}(\mu_{{\mathrm{bal}}}-\lambda_r) < 0, \end{split} \end{align} \end{small} \hspace*{-.2cm}with the last inequality following from Assumption \ref{as:lambdageqmu}. Hence, $\bar{t}_{\mathrm{s}}$ is a local maximum and no optimal solution of the $B_{\mathrm{mix}}$ form includes an exponential arc. For completeness, we mention that \eqref{eq:zero_theorem_1} has another zero $\bar{t}_{\mathrm{s},2} \in [0,t_{\mathrm{p}})$, which cannot be given in closed form. However, by continuity and the intermediate value theorem, $\bar{t}_{\mathrm{s},2}$ is a local minimum of \eqref{eq:max_Bmix}.
~ \qed
\end{pf} As we have chosen $t_{\mathrm{p}}$ such that the balanced growth solution \eqref{eq:exp_solution} outgrows the maximal linear one, we know that there exists a time frame $[0, t_{\mathrm{c}}]$ on which the solution of \eqref{eq:sdeFBA_problem} must at least grow exponentially. Thus, we assume the following form for the solution \begin{small}\begin{align}\label{eq:B_opt}
B_{\mathrm{opt}}(t) & = \left\{ \begin{array}{ll}
B_{\mathrm{init}} e^{\mu_{{\mathrm{bal}}}t}, & 0 \leq t \leq t_{\mathrm{s}}, \\
B_{\mathrm{init}} e^{\mu_{{\mathrm{bal}}}t_{\mathrm{s}}} ( \lambda_r (t-t_{\mathrm{s}}) + 1), & t_{\mathrm{s}} < t \leq t_{\mathrm{p}},
\end{array}\right. \end{align}\end{small} \hspace*{-.2cm}with $B_{\mathrm{init}} \lambda_r e^{\mu_{{\mathrm{bal}}}t_{\mathrm{s}}} = \lambda_s (B_{\mathrm{opt}}(t_{\mathrm{s}}))$. We want to choose $t_{\mathrm{c}}$ such that no linear phase occurs in the final solution of the sdeFBA. Otherwise, we can get faulty solutions as shown in the next section.
\begin{theorem}
If Assumption \ref{as:lambdageqmu} holds, an optimal solution $B_{\mathrm{opt}}$ \eqref{eq:B_opt} of the sdeFBA \eqref{eq:sdeFBA_problem} grows exponentially on the time frame $[0, t_{\mathrm{c}})$, with
\begin{align}\label{eq:approx_tc}
0 < t_{\mathrm{c}} < t_{\mathrm{p}} - 2\left( \frac{1}{\mu_{\mathrm{bal}}}- \frac{1}{\lambda_r}\right).
\end{align} \end{theorem} \begin{pf}
As in the previous proof, we identify the optimal switching time $t_{\mathrm{s}}$ by solving the optimization problem
\begin{align}\label{eq:theo_2_obj}
\max_{t_{\mathrm{s}}} \int_0^{t_{\mathrm{p}}} B_{\mathrm{opt}}(t) ~\diff t.
\end{align}
The zeros of the first order derivative are given by
\begin{gather}
\frac{\diff}{\diff t_{\mathrm{s}}} \int_0^{t_{\mathrm{p}}} B_{\mathrm{opt}}(t) ~\diff t = 0 \\
\Rightarrow ~ \hat{t}_{\mathrm{s},1} = t_{\mathrm{p}} - 2\left( \frac{1}{\mu_{\mathrm{bal}}}- \frac{1}{\lambda_r}\right),~ \hat{t}_{\mathrm{s},2} = t_{\mathrm{p}} .
\end{gather}
The second-order derivative evaluated at these points is
\begin{small}
\begin{align}
\begin{split}
\left.\frac{\diff^2} {\diff t_{\mathrm{s}}^2} \int_0^{t_{\mathrm{p}}} B_{\mathrm{opt}} (t) ~\diff t\right|_{\hat{t}_{\mathrm{s},1}} & = (\mu_{{\mathrm{bal}}} - \lambda_r ) B_{\mathrm{init}} e^{\mu_{{\mathrm{bal}}}t_{\mathrm{p}}}< 0, \\
\left.\frac{\diff^2} {\diff t_{\mathrm{s}}^2} \int_0^{t_{\mathrm{p}}} B_{\mathrm{opt}} (t) ~\diff t\right|_{\hat{t}_{\mathrm{s},2}} & = (\lambda_r - \mu_{{\mathrm{bal}}})B_{\mathrm{init}} e^{\mu_{{\mathrm{bal}}}t_{\mathrm{p}}}> 0.
\end{split}
\end{align}
\end{small}
Hence, $\hat{t}_{\mathrm{s},1}$ maximizes \eqref{eq:theo_2_obj} and the solution is of exponential form until $\hat{t}_{\mathrm{s},1}$.
~ \qed \end{pf}
We strongly advise choosing an iteration time smaller than the bound in \eqref{eq:approx_tc} to compensate for numerical errors. Otherwise, we might see solutions mixing linear and exponential phases as shown in Figure \ref{fig:results_1} (C).
Note that $t_{\mathrm{c}}$ also depends on the prediction horizon $t_{\mathrm{p}}$ and the initial biomass composition $B_{\mathrm{init}}$. Hence, it should be recalculated together with $t_{\mathrm{p}}$ after each iteration (cf. Remark \ref{rm:recalculate}).
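Evaluating the bound \eqref{eq:approx_tc} is a one-line computation once $t_{\mathrm{p}}$ and the two growth rates are known. A quick numeric check with hypothetical values (not the paper's example; $\lambda_r$, $\mu_{\mathrm{bal}}$, $t_{\mathrm{p}}$ assumed):

```python
# Upper bound on the iteration time from eq. (approx_tc):
#   t_c < t_p - 2*(1/mu_bal - 1/lam_r)
lam_r, mu_bal, t_p = 1.0, 0.5, 3.59   # hypothetical rates and horizon [1/h, h]
t_c_max = t_p - 2 * (1 / mu_bal - 1 / lam_r)   # = 3.59 - 2*(2 - 1) = 1.59 h
```

Any iteration time safely below this value keeps the retained slice inside the exponential part of each horizon's solution.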
\section{Numerical example}\label{sec:example_1} We present a simple model, analyzed in detail in \citep{waldherr2017}, to give the reader an idea of the impact of end-times, prediction horizons, and iteration times on the quality of the solution. In this minimal example the organism can invest nutrients either in its autocatalytic capabilities, by producing enzymes, or in non-catalytic components yielding a better nutrients-to-biomass ratio.
The three irreversible reactions of the network are \begin{subequations}
\begin{eqnarray}
v_A : & 1 ~N & \rightarrow 1~ A \label{eq:example_1_network1} \\
v_E : & 1~N + 1~A & \rightarrow 1~ E \label{eq:example_1_network2} \\
v_M : & 1~N + 1~A & \rightarrow 1~ M. \label{eq:example_1_network3}
\end{eqnarray} \end{subequations}
The external nutrient $N$ represents a collection of components necessary for growth, such as carbon, nitrogen, etc. Further processed components made from these nutrients are collected as the internal metabolite $A$. We differentiate the macromolecules into the group of enzymes $E$, collecting the whole enzymatic machinery needed for growth, and non-enzymatic macromolecules $M$. These can be interpreted as storage components such as lipids, starch, or glycogen.
Assuming unlimited nutrients, we would expect a biological system to work exclusively in the exponential phase and produce no storage $M$ at all. But the deFBA model \eqref{eq:deFBA_problem} may generate a solution containing linear phases depending on the system parameters and the end-time.
In this work we are only interested in the effects of the time variables and fix the system parameters to the values shown in Table \ref{tb:ex1_numerical_values}. The numerical results using these values were all generated with our Python deFBA package\footnote{Available at \url{bitbucket.org/hlindhor/defba-python-package}} using a discretization step size $d = 0.1$ h and the initial values $E(0) = M(0) = 0.1$ mol. \begin{table}
\caption{Values used in the numerical example}\label{tb:ex1_numerical_values}
\centering
\begin{tabular}{lccccccc}
\toprule
$b_M~[\frac{\mathrm{g}}{\mathrm{mol}}]$ & $b_E~[\frac{\mathrm{g}}{\mathrm{mol}}]$ & $k_A~[\mathrm{h}^{-1}]$ & $k_M~[\mathrm{h}^{-1}]$ & $k_E~[\mathrm{h}^{-1}]$ \\
\midrule
15 & 10 & 1.5 & 2 & 1 \\
\bottomrule
\end{tabular} \end{table}
Following \citep{waldherr2017}, we can derive the necessary condition for a single linear phase to be the optimal solution as \begin{align} t_{\mathrm{lin}} \leq \frac{2(k_M b_M - k_E b_E)}{b_M k_M k_E} \approx 1.45~\mathrm{h}. \end{align} Choosing any $t_{\mathrm{end}} > t_{\mathrm{lin}}$ results in a mixed trajectory starting with an exponential phase and ending with a linear one. This behavior can be observed in Figure \ref{fig:results_1} (A). A purely exponential solution is not attainable with the deFBA as any solution ends in a linear phase producing only $M$ to top off the objective.
But we can use the short-term deFBA to generate an exponential solution. Using the idea from Section 3 we calculate the initial prediction horizon as $t_{\mathrm{p}} \approx 3.25~\mathrm{h}$ and the iteration time as $t_{\mathrm{c}} \approx 1.45~\mathrm{h}$. The sdeFBA generates a purely exponential solution as shown in Figure \ref{fig:results_1} (B). While this is a more reasonable solution from a biological view, the objective value for this solution is slightly smaller than the one obtained by the deFBA (cf. Figure \ref{fig:results_1} (D)).
Figure \ref{fig:results_1} (C) shows an sdeFBA solution using a prediction horizon $t_{\mathrm{p}} = 2.5$ h and an iteration time $t_{\mathrm{c}} = 1.5$ h. While this $t_{\mathrm{p}}$ is capable of producing an exponential phase in each iteration, the chosen iteration time is far too large. Hence, we see a solution in which exponential and linear phases alternate on each iteration slice. This is neither optimal nor observed in nature.
\begin{figure}\label{fig:results_1}
\end{figure}
\section{Conclusion} While our presentation of the sdeFBA focuses on the quality of the solution, this method provides further advantages in comparison to the original deFBA. Foremost, we can replace the fixed time frame $[0, t_{\mathrm{end}}]$ in the original deFBA \eqref{eq:deFBA_problem} with a variable one dependent on the network's state. For example, the deFBA is not designed to handle starvation scenarios and the optimization problem may become infeasible once the nutrients deplete. In the sdeFBA, by contrast, we can simply stop iterating once the nutrients deplete or another chosen threshold is reached. Of course, this also means we can update state variables or dynamics while setting up the next iteration. Thus, we can use the sdeFBA as the predictor in an online model predictive controller, which maximizes, e.g., some biomass component by changing the nutrient composition.
Lastly, the sdeFBA can be a way to solve large-scale deFBA problems over long time scales more efficiently. The problem lies in the linear programs constructed by the deFBA, whose states can vary by several orders of magnitude due to exponential growth phases. This leads to ill-conditioned problems, which take a very long time to solve even with sophisticated commercial solvers. By breaking the problem into smaller pieces via the sdeFBA we can reduce the computational time.
\end{document} |
\begin{document}
\title{Frobenius Polytopes} \author[J. Collins]{John Collins} \author[D. Perkinson]{David Perkinson} \address{Reed College, Portland Oregon}
\date{1/21/04}
\begin{abstract} A real representation of a finite group naturally determines a polytope, generalizing the well-known Birkhoff polytope. This paper determines the structure of the polytope corresponding to the natural permutation representation of a general Frobenius group. \end{abstract}
\thanks{The authors would like to thank Rao Potluri for many useful insights. The second author would like to thank Reed College students Judy Ridenour and Hana Steinkamp.}
\maketitle \section{Introduction} The collection of $n\times n$ matrices over the real numbers is the $n^2$-dimensional Euclidean space $\mathbb{R}^{n\times n}$. Given a finite group $G$ of real $n\times n$ matrices, the convex hull of its elements in $\mathbb{R}^{n\times n}$ is a polytope $P(G)$ whose vertices are the group elements. A famous example arises when $G$ is the collection of all $n\times n$ permutation matrices. In that case, $P(G)$ is the Birkhoff polytope. Much is known about the Birkhoff polytope (\cite{Brualdi1}, \cite{Brualdi2}, \cite{Brualdi3}, \cite{Brualdi4}) but there are still open questions (\cite{Pak}, \cite{Beck}); for instance, its volume is not known in general. Our interest in polytopes associated with groups was inspired by \cite{Billera}, \cite{Brualdi0}, and \cite{Onn}.
In this paper, we consider the case of an important class of permutation groups, the Frobenius groups. In sections 2 and 3, we recall basic facts concerning Frobenius groups and polytopes, respectively. In section 4, we establish our main result, Theorem~\ref{main-theorem}, identifying the polytope associated with a Frobenius group as a free sum of simplices.
\section{Frobenius groups} \begin{dfn} A group $G$ is a {\em Frobenius group} if it has a proper subgroup $1<H<G$ such that $H\cap(xHx^{-1})=\{1\}$ for all $x\in G\setminus H$. The subgroup $H$ is called a {\em Frobenius complement}. \end{dfn} We recall some basic facts about Frobenius groups. Our references are \cite{Alperin}, \cite{Dixon}, and \cite{Huppert}.
Frobenius groups are precisely those which have representations as transitive permutation groups which are not regular---meaning there is at least one non-identity element with a fixed point---and for which only the identity has more than one fixed point. In that case, the stabilizer of any point may be taken as a Frobenius complement. On the other hand, starting with an abstract Frobenius group with complement $H$, the group $G$ acts on the collection of left-cosets $G/H$ via left-multiplication. This gives a faithful permutation representation of $G$ with the desired properties. The Frobenius complement $H$ is unique up to conjugation; hence the corresponding permutation representation is unique up to isomorphism.
A theorem of Frobenius says that if $G$ is a finite Frobenius group given as a permutation group, as above, the set consisting of the identity of $G$ and those elements with no fixed points forms a normal subgroup $N$. The group $N$ is called the {\em Frobenius kernel}. We have $G=NH$ with $N\cap H=1$, where $H$ is a Frobenius complement. Thus, $G$ is a semi-direct product $N\rtimes H$. Conversely, if $N$ and $H$ are any two finite groups, and if $\phi$ is a monomorphism of $H$ into the automorphism group of $N$ for which each $\phi(h)$ is fixed-point free, then $N\rtimes_\phi H$ is a Frobenius group with kernel $N$ and complement $H$. A theorem of J.\ G. Thompson implies that $N$ is nilpotent.
\begin{example} A few examples of Frobenius groups: \begin{enumerate} \item The most familiar class of Frobenius groups is the collection of odd dihedral groups, \[ D_n=\langle \rho,\phi\mid\rho^n=\phi^2=1, \rho\phi=\phi\rho^{n-1}\rangle,\ \mbox{$n$ odd}, \] with Frobenius complement $H=\langle\phi\rangle$ and kernel $N=\langle\rho\rangle$. The permutation representation is the usual group of symmetries of a regular $n$-gon. \item The alternating group $A_4=\langle(123),(12)(34)\rangle$ is a Frobenius group with complement $H=\langle(123)\rangle$ and kernel $N=\langle(12)(34),(13)(24)\rangle$. \item Let $p$ and $q$ be prime numbers with $p\equiv1\mod q$, and let $\phi$ be any monomorphism of $H:=\mathbb{Z}/q\mathbb{Z}$ into the automorphism group (i.e., the group of units) of $N:=\mathbb{Z}/p\mathbb{Z}$. Then $N\rtimes_\phi H$ is a Frobenius group with complement $H$ and kernel $N$. Thus, the unique non-abelian group of size $pq$ is Frobenius. \end{enumerate} \end{example}
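The first example is easy to check by machine. The following sketch (plain Python; permutations stored as tuples with $p[i]$ the image of $i$; all helper names are ours) verifies the defining Frobenius condition for $D_5$ in its action on the vertices of a regular pentagon:

```python
# Verifying the Frobenius condition for D_5 (all names ours).
n = 5
e = tuple(range(n))
r = tuple((i + 1) % n for i in range(n))   # rotation
f = tuple((-i) % n for i in range(n))      # reflection fixing 0

def compose(p, q):
    return tuple(p[q[i]] for i in range(n))

def inverse(p):
    inv = [0] * n
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

# close {r, f} under composition to obtain D_5
G = {e, r, f}
while True:
    new = {compose(a, b) for a in G for b in G} - G
    if not new:
        break
    G |= new

H = {e, f}                                 # stabilizer of the point 0
for x in G - H:
    conj = {compose(compose(x, h), inverse(x)) for h in H}
    assert H & conj == {e}                 # H meets its conjugates trivially

# only the identity fixes more than one point, and G is not regular
fixed = {g: sum(g[i] == i for i in range(n)) for g in G}
assert all(fixed[g] <= 1 for g in G if g != e)
assert any(fixed[g] == 1 for g in G if g != e)
assert len(G) == 10
```

The last two assertions confirm the permutation-theoretic characterization of Frobenius groups recalled above.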
\section{Polytopes} Here we recall basic facts we need concerning polytopes. Our main reference is~\cite{Ziegler}. The {\em convex hull} of a subset $K\subseteq\mathbb{R}^n$ is the intersection of all convex subsets of $\mathbb{R}^n$ containing $K$. A {\em polytope} in $\mathbb{R}^n$ is the convex hull of a finite set of points. If the polytope $P$ is the convex hull of points $X=\{p_1,\dots,p_t\}$, then $\dim P$, the {\em dimension} of $P$, is the dimension of the affine span of $X$, \[ \textstyle\operatorname{aff}(X):=\{x\in\mathbb{R}^n\mid x=\sum_{i=1}^ta_ip_i,\ a_i\in\mathbb{R},\ \sum_{i=1}^ta_i=1\}. \] An {\em affine relation} on $X$ is an equation $\sum_{i=1}^ta_ip_i=0$ with $\sum_{i=1}^ta_i=0$. Two such relations are {\em independent} if their vectors of coefficients are linearly independent. If~$q$ is the number of independent affine relations on $X$, then \begin{equation}\label{eq:dim} \dim P=t-q-1. \end{equation} If there are no affine relations, then $P$ is called a {\em $(t-1)$-simplex}.
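The dimension formula $\dim P=t-q-1$ can be checked on a small concrete instance. The sketch below (plain Python; the helper \texttt{rank} is ours) treats the unit square, which has $t=4$ vertices and exactly $q=1$ independent affine relation:

```python
from fractions import Fraction

def rank(rows):
    # row rank via Gaussian elimination over the rationals (name ours)
    rows = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if rows[i][c]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c]:
                factor = rows[i][c] / rows[r][c]
                rows[i] = [a - factor * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

# X = vertices of the unit square: t = 4 points, one affine relation
X = [(0, 0), (1, 0), (0, 1), (1, 1)]
t = len(X)
dim_P = rank([[p[k] - X[0][k] for k in range(2)] for p in X[1:]])
q = (t - 1) - dim_P                  # number of independent affine relations
assert dim_P == t - q - 1            # dim P = t - q - 1
# the single relation p_1 - p_2 - p_3 + p_4 = 0, coefficients summing to 0
assert all(X[0][k] - X[1][k] - X[2][k] + X[3][k] == 0 for k in range(2))
```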
A function of the form $A=A(x_1,\dots,x_n)=a_0+\sum_{i=1}^na_ix_i$ with $a_i\in\mathbb{R}$ for all $i$ is called {\em affine}. The function $A$ determines two {\em half-spaces}: $A\geq0$ and $A\leq0$. It is intuitively obvious, although not trivial to prove, that a set $P$ is a polytope if and only if it is a compact set which is the intersection of finitely many half-spaces.
Given a polytope $P\subset\mathbb{R}^n$, we say that $P$ {\em lies on one side} of the affine function $A$ if $A(p)\geq0$ for all $p\in P$ or if $A(p)\leq 0$ for all $p\in P$. In that case, we define a {\em face} of $P$ as the intersection $P\cap\{p\in\mathbb{R}^n\mid A(p)=0\}$. The {\em dimension} of the face is the dimension of its affine span. The empty set is the unique face of dimension $-1$. A vertex is a face of dimension $0$, and a {\em facet} is a face of dimension $\dim(P)-1$. The collection of faces of $P$, ordered by inclusion, forms a lattice, $\mathcal{F}(P)$. The face lattice is determined by either the facets or by the vertices in that every face is the intersection of the facets containing it and is the convex hull of the vertices it contains. Polytopes $P$ and $Q$ are {\em combinatorially equivalent} if their face lattices are isomorphic as lattices. The equivalence class of $P$ under this relation is the {\em combinatorial type} of $P$.
Polytopes $P\subset\mathbb{R}^n$ and $Q\subset\mathbb{R}^m$ are {\em isomorphic}, denoted $P\approx Q$, if there is an affine function $A\colon\mathbb{R}^n\to\mathbb{R}^m$, injective when restricted to the affine span of $P$, such that $A(P)=Q$. Isomorphic polytopes are combinatorially equivalent.
We will need the following construction,~(cf.\ \cite{Henk}). Suppose $P$ and $Q$ are polytopes in $\mathbb{R}^n$ whose relative interiors have nonempty intersection. Say $x\in\operatorname{relint}(P)\cap\operatorname{relint}(Q)$. Further, suppose that the linear spaces $\operatorname{aff}(P)-x$ and $\operatorname{aff}(Q)-x$ are orthogonal (hence, $\operatorname{aff}(P)\cap\operatorname{aff}(Q)=\{x\}$). Define the {\em free sum}, $P\oplus Q$, to be the convex hull of $P\cup Q$. The following isomorphism of lattices is well-known: \[ \mathcal{F}(P\oplus Q)\approx(\mathcal{F}(P)\times\mathcal{F}(Q))/\sim \] where $\sim$ connotes identification of $(F_1,F_2)\in \mathcal{F}(P)\times\mathcal{F}(Q)$ with $(P,Q)$ if either $F_1=P$ or $F_2=Q$. The lattice structure on the right-hand side has $(P,Q)$ as the maximal element, and if $F_1,F_1'$ are faces of $P$ not equal to $P$ and $F_2,F_2'$ are faces of $Q$ not equal to $Q$, then $(F_1,F_2)\leq (F_1',F_2')$ if $F_1\subseteq F_1'$ and $F_2\subseteq F_2'$. If $F_1\neq P$ and $F_2\neq Q$, then the face of $P\oplus Q$ corresponding to $(F_1,F_2)$ is the convex hull of $F_1\cup F_2$ and has dimension $\dim F_1+\dim F_2+1$. Otherwise, $(F_1,F_2)$ corresponds to $P\oplus Q$, itself, which has dimension $\dim P+\dim Q$. This construction and identification of lattices extends in an obvious way to the case of polytopes $P_1,\dots, P_k$ in $\mathbb{R}^n$ sharing a point $x$ in their relative interiors and such that their affine spans, when translated by $-x$, are pairwise orthogonal.
A polytope is {\em simplicial} if all of its facets (hence all of its proper faces) are simplices. For example, an octahedron is simplicial.
\section{Frobenius polytopes} From now on, let $G$ be a finite Frobenius group with kernel $N$ and complement $H$ acting as a permutation group on the left-cosets $G/H$ via left-multiplication. Our results apply to regular groups as well, which for convenience we consider to be Frobenius groups with trivial complement. In any case, the elements of $N$ serve as a set of representatives for the distinct cosets of $H$. We fix a list $\nu_1=1,\nu_2,\dots,\nu_n$ of the elements of $N$ and define an action of $G$ on $[n]:=\{1,\dots,n\}$ as follows: for $g\in G$, define $g(j)=i$ when $i,j\in[n]$ and $g\nu_jH=\nu_iH$. In this way, we identify $G$ with a subgroup of the symmetric group, $S_n$, and identify $H$ with the stabilizer, $G_1$, of $1$ in $G$.
We further identify $G$ with a collection of $n\times n$ permutation matrices. The collection of all $n\times n$ real matrices is the $n^2$-dimensional Euclidean space $\mathbb{R}^{n\times n}$ with coordinates $\{x_{ij}\}$. The value of $x_{ij}$ at any matrix $M$ is the $ij$-th entry of $M$. For $g\in G$, we take \[ x_{ij}(g)=\begin{cases} 1&\text{if $g(j)=i$},\\ 0&\text{if $g(j)\neq i$}. \end{cases} \] \begin{dfn} The {\em Frobenius polytope} corresponding to $G$ is the convex hull of $G\subset\mathbb{R}^{n\times n}$, denoted $P(G)$. \end{dfn}
\begin{prop}\label{cosets-prop} Let $G\subset\mathbb{R}^{n\times n}$ be a Frobenius group embedded in Euclidean space as above. \begin{enumerate} \item\label{first} For each $h\in H$, we have $\sum_{g\in hN}g=\mathbf{1}$, where $\mathbf{1}$ is the $n\times n$ matrix with each entry equal to $1$. \item\label{second} If $\sum_{g\in G} a_gg=\mathbf{0}\in\mathbb{R}^{n\times n}$ for some $a_g\in\mathbb{R}$, then $a_g=a_{g'}$ whenever $g$ and $g'$ lie in the same coset $hN$. \end{enumerate} \end{prop} \begin{proof} We first recall a basic property of Frobenius groups: \begin{itemize} \item[($\star$)] for all $i,j\in[n]$, there is precisely one element $g$ in each coset of $N$ such that $g(j)=i$. \end{itemize} To see this, take $H$ as a set of coset representatives of $N$ in $G$ and consider the coset $hN$ with $h\in H$. Given $i,j\in[n]$, we have $(\nu_ih\nu_j^{-1})\nu_jH=\nu_iH$. Since $N$ is normal, there exists $\nu\in N$ such that $\nu_ih\nu_j^{-1}=h\nu\in hN$, and $(h\nu)(j)=i$. Suppose there is also $\nu'\in N$ such that $(h\nu')(j)=i$. We then have $h\nu'\nu_jH=h\nu\nu_jH$, whence $(h\nu\nu_j)^{-1}(h\nu'\nu_j)\in H\cap N=\{1\}$. Therefore, $\nu=\nu'$, establishing ($\star$).
Assertion~(\ref{first}) follows immediately from ($\star$). For each $i,j$ and coset $hN$, we have $x_{ij}(\sum_{g\in hN}g)=\sharp\{g\in hN\mid g(j)=i\}=1$.
Now suppose $\sum_{g\in G}a_gg=0$, as in (\ref{second}). For each $i,j\in[n]$, applying the coordinate function $x_{ij}$, it follows that $\sum_{g\in G: g(j)=i}a_g=0$. Fix a coset $hN$ and an element $g'\in hN$. For each $j$ we have $\sum_{g\in G:g(j)=g'(j)}a_g=0$, and by ($\star$) there is precisely one element $g$ in each coset of $N$ such that $g(j)=g'(j)$. Further, since no element besides the identity has more than one fixed point, if $g(j)=g'(j)$ and $g(j')=g'(j')$, it follows that $g=g'$ or $j=j'$. Hence, \begin{eqnarray*} 0&=&\sum_{j=1}^n\sum_{\genfrac{}{}{0pt}{2}{g\in G:}{g(j)=g'(j)}}a_g =\sum_{j=1}^n\sum_{\genfrac{}{}{0pt}{2}{g\in hN:}{g(j)=g'(j)}}a_g +\sum_{j=1}^n\sum_{\genfrac{}{}{0pt}{2}{g\in G\setminus hN:}{g(j)=g'(j)}}a_g\\ &=&na_{g'}+\sum_{g\in G\setminus hN}a_g. \end{eqnarray*} Solving for $a_{g'}$, we see that its value only depends on $G\setminus hN$, and (\ref{second}) follows. \end{proof}
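The proposition is easy to verify by brute force for the smallest odd dihedral group, $G=S_3$, with $N$ the rotations and $fN$ the reflections. This plain-Python sketch (helper names are ours) checks the first assertion:

```python
# Brute-force check of the coset-sum identity for G = S_3 (names ours).
from itertools import permutations

n = 3
e = tuple(range(n))
G = list(permutations(range(n)))
# N = identity together with the fixed-point-free elements (the 3-cycles)
N = [g for g in G if g == e or all(g[i] != i for i in range(n))]
f = (0, 2, 1)                       # the reflection fixing 0; H = {e, f}
fN = [tuple(f[g[i]] for i in range(n)) for g in N]

def matrix(g):
    # permutation matrix: x_ij(g) = 1 iff g(j) = i
    return [[1 if g[j] == i else 0 for j in range(n)] for i in range(n)]

for coset in (N, fN):
    total = [[sum(matrix(g)[i][j] for g in coset) for j in range(n)]
             for i in range(n)]
    assert total == [[1] * n for _ in range(n)]   # sum over a coset = all-ones
```

Property ($\star$) is visible here as well: in each column, the three matrices of a coset contribute their single $1$ in three distinct rows.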
\begin{cor} Let $P(N)$ denote the polytope which is the convex hull of the Frobenius kernel, $N\subset\mathbb{R}^{n\times n}$. Then $P(N)$ is a simplex of dimension $|N|-1$. \end{cor} \begin{proof} Proposition~\ref{cosets-prop}~(\ref{second}) immediately implies that the elements of $N$ are affinely independent: in any affine relation on $N$ the coefficients are all equal, and since they sum to zero, they all vanish. \end{proof}
In the following theorem, for each $h\in H$, let $P(hN)$ denote the polytope which is the convex hull of the coset $hN\subset\mathbb{R}^{n\times n}$. Matrix multiplication by $h$ defines a linear automorphism of $\mathbb{R}^{n\times n}$ which restricts to an isomorphism $P(N)\approx P(hN)$. By
$P(N)^{\oplus|H|}$, we mean the convex hull of $|H|$ copies of $P(N)$ placed in pairwise orthogonal affine spaces so that the copies of $P(N)$ meet at their barycenters (vertex average). \begin{thm}\label{main-theorem} A Frobenius polytope is a free sum of simplices: \[
P(G)=\oplus_{h\in H}P(hN)\approx P(N)^{\oplus |H|}. \] \end{thm} \begin{proof}
By Proposition~\ref{cosets-prop}~\eqref{first}, we have $\sum_{g\in hN}g=\mathbf{1}$ for each $h\in H$. Hence, $\tfrac{1}{|N|}\mathbf{1}$ is in the relative interior of each $P(hN)$. Translating by this vector, we must show that
the spaces $\operatorname{aff}(P(hN))-\tfrac{1}{|N|}\mathbf{1}$, $h\in H$, are pairwise orthogonal. To this end, let $h\nu\in hN$ and $h'\nu'\in h'N$ with $h\neq h'$. We first show that the inner product of these two group elements as points in $\mathbb{R}^{n\times n}$ is $1$. To say that $\langle h\nu,h'\nu'\rangle=1$ is the same as saying that $h\nu(j)=h'\nu'(j)$ for precisely one $j$, i.e., that $\mu:=\nu^{-1}h^{-1}h'\nu'$ has exactly one fixed point. Since $\mu\neq 1$ and $G$ is a Frobenius group, the only other possibility is that $\mu$ has no fixed points and hence is an element of $N$. However, this would imply that $h^{-1}h'\in H\cap N=\{1\}$, contrary to the assumption that $h\neq h'$.
Orthogonality quickly follows: \begin{eqnarray*}
\langle h\nu-\tfrac{1}{|N|}\mathbf{1},h'\nu'-\tfrac{1}{|N|}\mathbf{1}\rangle
&=& \langle h\nu,h'\nu'\rangle-\langle h\nu,\tfrac{1}{|N|}\mathbf{1}\rangle
-\langle\tfrac{1}{|N|}\mathbf{1},h'\nu'\rangle
+\langle\tfrac{1}{|N|}\mathbf{1},\tfrac{1}{|N|}\mathbf{1}\rangle\\[5pt] &=&1-1-1+1=0. \end{eqnarray*} \end{proof} We now summarize some immediate consequences of the theorem.
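As a sanity check on the inner-product step in this proof, one can verify $\langle h\nu,h'\nu'\rangle=1$ directly for $S_3$ (rotations versus reflections), again with permutations as tuples; the helper names are ours:

```python
# Checking <h nu, h' nu'> = 1 across distinct cosets in S_3 (names ours).
rot = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]   # the coset N
ref = [(0, 2, 1), (2, 1, 0), (1, 0, 2)]   # the coset fN

def inner(g, gp):
    # inner product of permutation matrices = number of positions where
    # the two permutations agree
    return sum(a == b for a, b in zip(g, gp))

assert all(inner(g, gp) == 1 for g in rot for gp in ref)
# hence <g - (1/3)*1, g' - (1/3)*1> = 1 - 1 - 1 + 1 = 0 across cosets
```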
\begin{cor}\label{Frob-poly} Let $|N|=n$ and $|H|=h$. \begin{enumerate} \item The polytope $P(G)$ is a simplicial polytope of dimension
$|G|-|H|=(n-1)h$ with $|G|$ vertices and $n^h$ facets. \item The faces of $P(G)$ not equal to $P(G)$ itself are exactly the convex hulls of subsets $X$ of $G$ omitting at least one element from each coset of $N$. The dimension of the face corresponding to a subset $X$
is $|X|-1$. \item The complement of any set of $h$ elements of $G$, one chosen from each of the cosets of $N$, forms the set of vertices of a facet, and all facets arise in this way. \item The number of faces of dimension $k$ in $P(G)$ is the coefficient of $x^{k+1}$ in $x^{(n-1)h+1}+((1+x)^n-x^n)^h$. \end{enumerate} \end{cor} \begin{remark}
The dimension of $P(G)$ also follows immediately from Proposition~\ref{cosets-prop}. It implies that the affine relations on the elements of $G$ are exactly the affine relations on $|H|$
copies of the matrix $\mathbf{1}$. There are $|H|-1$ independent such relations; so, $\dim P(G)=|G|-|H|$ (cf.\ (\ref{eq:dim})).
The fact that each element of $G$ is a vertex of $P(G)$ also follows from a more general principle. Multiplication by any element of $G$, thought of as a permutation matrix, is a linear automorphism of $\mathbb{R}^{n\times n}$ sending $P(G)$ to itself. At least one element of
$G$ is a vertex, and since the action of $G$ on itself is transitive, all elements must be vertices. Therefore, there are $|G|$ vertices. \end{remark}
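The face counts in Corollary~\ref{Frob-poly} can be confirmed by brute force for $G=S_3$ ($n=3$, $h=2$), using the description of proper faces as subsets omitting at least one element from each coset. In this plain-Python sketch the six group elements are labelled $0,\dots,5$, with each block of three standing for a coset of $N$ (labels and names are ours):

```python
# Brute-force face counts for G = S_3 (n = 3, h = 2); labels are ours.
from itertools import combinations

n, h = 3, 2
G = list(range(n * h))                       # 6 group elements
cosets = [set(range(i * n, (i + 1) * n)) for i in range(h)]

# proper faces <-> vertex sets omitting at least one element of each coset
counts = [sum(1 for X in combinations(G, m)
              if all(not C <= set(X) for C in cosets))
          for m in range(2 * n - 1)]

# coefficients of ((1+x)^n - x^n)^h = (1 + 3x + 3x^2)^2
base = [1, 3, 3]
poly = [0] * (2 * len(base) - 1)
for i, a in enumerate(base):
    for j, b in enumerate(base):
        poly[i + j] += a * b

assert counts == poly                        # [1, 6, 15, 18, 9]
assert counts[(n - 1) * h] == n ** h         # n^h = 9 facets
```

The leftover term $x^{(n-1)h+1}=x^5$ in the generating polynomial accounts for the top face $P(G)$ itself, which the enumeration above deliberately excludes.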
\end{document} |
\begin{document}
\title{Proof of a conjecture of Bergeron, Ceballos and Labb\'e} \author{Alexander Postnikov and Darij Grinberg} \date{\today} \maketitle
\begin{abstract} The reduced expressions for a given element $w$ of a Coxeter group $\left( W,S\right) $ can be regarded as the vertices of a directed graph $\mathcal{R}\left( w\right) $; its arcs correspond to the braid moves. Specifically, an arc goes from a reduced expression $\overrightarrow{a}$ to a reduced expression $\overrightarrow{b}$ when $\overrightarrow{b}$ is obtained from $\overrightarrow{a}$ by replacing a contiguous subword of the form $stst\cdots$ (for some distinct $s,t\in S$) by $tsts\cdots$ (where both subwords have length $m_{s,t}$, the order of $st\in W$). We prove a strong bipartiteness-type result for this graph $\mathcal{R}\left( w\right) $: Not only does every cycle of $\mathcal{R}\left( w\right) $ have even length; actually, the arcs of $\mathcal{R}\left( w\right) $ can be colored (with colors corresponding to the type of braid moves used), and to every color $c$ corresponds an \textquotedblleft opposite\textquotedblright\ color $c^{\operatorname*{op}}$ (corresponding to the reverses of the braid moves with color $c$), and for any color $c$, the number of arcs in any given cycle of $\mathcal{R}\left( w\right) $ having color in $\left\{ c,c^{\operatorname*{op}}\right\} $ is even. This is a generalization and strengthening of a 2014 result by Bergeron, Ceballos and Labb\'{e}.
\end{abstract}
\section*{Introduction}
Let $\left( W,S\right) $ be a Coxeter group\footnote{All terminology and notation that appears in this introduction will later be defined in more detail.} with Coxeter matrix $\left( m_{s,s^{\prime}}\right) _{\left( s,s^{\prime}\right) \in S\times S}$, and let $w\in W$. Consider a directed graph $\mathcal{R}\left( w\right) $ whose vertices are the reduced expressions for $w$, and whose arcs are defined as follows: The graph $\mathcal{R}\left( w\right) $ has an arc from a reduced expression $\overrightarrow{a}$ to a reduced expression $\overrightarrow{b}$ whenever $\overrightarrow{b}$ can be obtained from $\overrightarrow{a}$ by replacing some contiguous subword of the form $\underbrace{\left( s,t,s,t,\ldots \right) }_{m_{s,t}\text{ letters}}$ by $\underbrace{\left( t,s,t,s,\ldots \right) }_{m_{s,t}\text{ letters}}$, where $s$ and $t$ are two distinct elements of $S$. (This replacement is called an $\left( s,t\right) $\textit{-braid move}.)
The directed graph $\mathcal{R}\left( w\right) $ (or, rather, its undirected version) has been studied many times; see, for example, \cite{ReiRoi11} and the references therein. In this note, we shall prove a bipartiteness-type result for $\mathcal{R}\left( w\right) $. Its simplest aspect (actually, a corollary) is the fact that $\mathcal{R}\left( w\right) $ is bipartite (i.e., every cycle of $\mathcal{R}\left( w\right) $ has even length); but we shall concern ourselves with stronger statements. We can regard $\mathcal{R} \left( w\right) $ as an edge-colored directed graph: Namely, whenever a reduced expression $\overrightarrow{b}$ is obtained from a reduced expression $\overrightarrow{a}$ by an $\left( s,t\right) $-braid move, we color the arc from $\overrightarrow{a}$ to $\overrightarrow{b}$ with the conjugacy class\footnote{A \textit{conjugacy class}\ here means an equivalence class under the relation $\sim$ on the set $S\times S$, which is given by \[ \left( \left( s,t\right) \sim\left( s^{\prime},t^{\prime}\right) \ \Longleftrightarrow\ \text{there exists a }q\in W\text{ such that } qsq^{-1}=s^{\prime}\text{ and }qtq^{-1}=t^{\prime}\right) . \] The conjugacy class of an $\left( s,t\right) \in S\times S$ is denoted by $\left[ \left( s,t\right) \right] $.} $\left[ \left( s,t\right) \right] $ of the pair $\left( s,t\right) \in S\times S$. Our result (Theorem \ref{thm.BCL}) then states that, for every such color $\left[ \left( s,t\right) \right] $, every cycle of $\mathcal{R}\left( w\right) $ has as many arcs colored $\left[ \left( s,t\right) \right] $ as it has arcs colored $\left[ \left( t,s\right) \right] $, and that the total number of arcs colored $\left[ \left( s,t\right) \right] $ and $\left[ \left( t,s\right) \right] $ in any given cycle is even. This generalizes and strengthens a result of Bergeron, Ceballos and Labb\'{e} \cite[Theorem 3.1]{BCL}.
\subsection*{Acknowledgments}
We thank Nantel Bergeron and Cesar Ceballos for introducing us to the problem at hand, and the referee for useful remarks.
\section{\label{sect.motivate-ex}A motivating example}
Before we introduce the general setting, let us demonstrate it on a simple example. This example is not necessary for the rest of this note (and can be skipped by the reader\footnote{All notations introduced in Section \ref{sect.motivate-ex} should be understood as local to this section; they will not be used beyond it (and often will be replaced by eponymic notations for more general objects).}); it merely provides some intuition and motivation for the definitions to come.
For this example, we fix an integer $n\geq1$, and we let $W$ be the symmetric group $S_{n}$ of the set $\left\{ 1,2,\ldots,n\right\} $. For each $i\in\left\{ 1,2,\ldots,n-1\right\} $, let $s_{i}\in W$ be the transposition which switches $i$ with $i+1$ (while leaving the remaining elements of $\left\{ 1,2,\ldots,n\right\} $ unchanged). Let $S=\left\{ s_{1} ,s_{2},\ldots,s_{n-1}\right\} \subseteq W$. The pair $\left( W,S\right) $ is an example of what is called a \textit{Coxeter group} (see, e.g., \cite[Chapter 4]{Bourbaki4-6} and \cite[\S 1]{Lusztig-Hecke}); more precisely, it is known as the Coxeter group $A_{n-1}$. In particular, $S$ is a generating set for $W$, and the group $W$ can be described by the generators $s_{1} ,s_{2},\ldots,s_{n-1}$ and the relations \begin{align} s_{i}^{2} & =\operatorname*{id}\ \ \ \ \ \ \ \ \ \ \text{for every } i\in\left\{ 1,2,\ldots,n-1\right\} ;\label{eq.exam.A3.quad}\\ s_{i}s_{j} & =s_{j}s_{i}\ \ \ \ \ \ \ \ \ \ \text{for every }i,j\in\left\{ 1,2,\ldots,n-1\right\} \text{ such that }\left\vert i-j\right\vert >1;\label{eq.exam.A3.braid1}\\ s_{i}s_{j}s_{i} & =s_{j}s_{i}s_{j}\ \ \ \ \ \ \ \ \ \ \text{for every }i,j\in\left\{ 1,2,\ldots,n-1\right\} \text{ such that }\left\vert i-j\right\vert =1. \label{eq.exam.A3.braid2} \end{align} This is known as the \textit{Coxeter presentation} of $S_{n}$, and is due to Moore (see, e.g., \cite[(6.23)--(6.25)]{CoxMos80} or \cite[Theorem 1.2.4]{Williamson}).
Given any $w\in W$, there exists a tuple $\left( a_{1},a_{2},\ldots ,a_{k}\right) $ of elements of $S$ such that $w=a_{1}a_{2}\cdots a_{k}$ (since $S$ generates $W$). Such a tuple is called a \textit{reduced expression} for $w$ if its length $k$ is minimal among all such tuples (for the given $w$). For instance, when $n=4$, the permutation $\pi\in S_{4}=W$ that is written as $\left( 3,1,4,2\right) $ in one-line notation has reduced expressions $\left( s_{2},s_{1},s_{3}\right) $ and $\left( s_{2} ,s_{3},s_{1}\right) $; in fact, $\pi=s_{2}s_{1}s_{3}=s_{2}s_{3}s_{1}$. (We are following the convention by which the product $u\circ v=uv$ of two permutations $u,v\in S_{n}$ is defined to be the permutation sending each $i$ to $u\left( v\left( i\right) \right) $.)
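This factorization is easy to check by machine. The sketch below (plain Python, $0$-indexed permutations, helper names ours) verifies $\pi=s_{2}s_{1}s_{3}=s_{2}s_{3}s_{1}$ under the stated composition convention:

```python
# Verifying pi = s2 s1 s3 = s2 s3 s1 in S_4 (0-indexed; names ours).
n = 4

def s(i):
    # the adjacent transposition s_i, shifted to 0-indexing
    p = list(range(n))
    p[i - 1], p[i] = p[i], p[i - 1]
    return tuple(p)

def compose(u, v):
    # (u o v)(i) = u(v(i)), the convention used in the text
    return tuple(u[v[i]] for i in range(n))

pi = (2, 0, 3, 1)          # one-line notation (3,1,4,2), 0-indexed
s1, s2, s3 = s(1), s(2), s(3)
assert compose(s2, compose(s1, s3)) == pi
assert compose(s2, compose(s3, s1)) == pi
```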
Given a $w\in W$, the set of reduced expressions for $w$ has an additional structure of a directed graph. Namely, the equalities (\ref{eq.exam.A3.braid1} ) and (\ref{eq.exam.A3.braid2}) show that, given a reduced expression $\overrightarrow{a}=\left( a_{1},a_{2},\ldots,a_{k}\right) $ for $w\in W$, we can obtain another reduced expression in any of the following two ways:
\begin{itemize} \item Pick some $i,j\in\left\{ 1,2,\ldots,n-1\right\} $ such that $\left\vert i-j\right\vert >1$, and pick any factor of the form $\left( s_{i},s_{j}\right) $ in $\overrightarrow{a}$ (that is, a pair of adjacent entries of $\overrightarrow{a}$, the first of which is $s_{i}$ and the second of which is $s_{j}$), provided that such a factor exists, and replace this factor by $\left( s_{j},s_{i}\right) $.
\item Alternatively, pick some $i,j\in\left\{ 1,2,\ldots,n-1\right\} $ such that $\left\vert i-j\right\vert =1$, and pick any factor of the form $\left( s_{i},s_{j},s_{i}\right) $ in $\overrightarrow{a}$, provided that such a factor exists, and replace this factor by $\left( s_{j},s_{i},s_{j}\right) $. \end{itemize}
In both cases, we obtain a new reduced expression for $w$ (provided that the respective factors exist). We say that this new expression is obtained from $\overrightarrow{a}$ by an $\left( s_{i},s_{j}\right) $\textit{-braid move}, or (when we do not want to mention $s_{i}$ and $s_{j}$) by a \textit{braid move}. For instance, the reduced expression $\left( s_{2},s_{1},s_{3}\right) $ for $\pi=\left( 3,1,4,2\right) \in S_{4}$ is obtained from the reduced expression $\left( s_{2},s_{3},s_{1}\right) $ by an $\left( s_{3} ,s_{1}\right) $-braid move, and conversely $\left( s_{2},s_{3},s_{1}\right) $ is obtained from $\left( s_{2},s_{1},s_{3}\right) $ by an $\left( s_{1},s_{3}\right) $-braid move.
Now, we can define a directed graph $\mathcal{R}_{0}\left( w\right) $ whose vertices are the reduced expressions for $w$, and which has an arc from $\overrightarrow{a}$ to $\overrightarrow{b}$ whenever $\overrightarrow{b}$ is obtained from $\overrightarrow{a}$ by a braid move (of either sort). For instance, let $n=5$, and let $w$ be the permutation written in one-line notation as $\left( 3,2,1,5,4\right) $. Then, $\mathcal{R}_{0}\left( w\right) $ looks as follows: \[
\xymatrix{ & \left(s_2,s_4,s_1,s_2\right) \arcstr[r]^{\left(s_4,s_1\right)} \arcstr[dl]^{\left(s_2,s_4\right)} & \left(s_2,s_1,s_4,s_2\right ) \arcstr[l]^{\left(s_1,s_4\right)} \arcstr[rd]^{\left(s_4,s_2\right)} \\ \left(s_4,s_2,s_1,s_2\right) \arcstr[ur]^{\left(s_4,s_2\right)} \arcstr [d]^{\left(s_2,s_1\right)} & & & \left(s_2,s_1,s_2,s_4\right) \arcstr [lu]^{\left(s_2,s_4\right)} \arcstr[d]^{\left(s_2,s_1\right)} \\ \left(s_4,s_1,s_2,s_1\right) \arcstr[u]^{\left(s_1,s_2\right)} \arcstr [dr]^{\left(s_4,s_1\right)} & & & \left(s_1,s_2,s_1,s_4\right) \arcstr [u]^{\left(s_1,s_2\right)} \arcstr[dl]^{\left(s_1,s_4\right)} \\ & \left(s_1,s_4,s_2,s_1\right) \arcstr[ul]^{\left(s_1,s_4\right)} \arcstr[r]^{\left(s_4,s_2\right)} & \left(s_1,s_2,s_4,s_1\right) \arcstr [l]^{\left(s_2,s_4\right)} \arcstr[ur]^{\left(s_4,s_1\right)} }
. \] Here, we have \textquotedblleft colored\textquotedblright\ (i.e., labelled) every arc $\left( \overrightarrow{a},\overrightarrow{b}\right) $ with the pair $\left( s_{i},s_{j}\right) $ such that $\overrightarrow{b}$ is obtained from $\overrightarrow{a}$ by an $\left( s_{i},s_{j}\right) $-braid move.
In our particular case, the graph $\mathcal{R}_{0}\left( w\right) $ consists of a single bidirected cycle. This is not true in general, but certain features persist. First, it is clear that whenever an arc from some vertex $\overrightarrow{a}$ to some vertex $\overrightarrow{b}$ has color $\left( s_{i},s_{j}\right) $, then there is an arc with color $\left( s_{j} ,s_{i}\right) $ from $\overrightarrow{b}$ to $\overrightarrow{a}$. Thus, $\mathcal{R}_{0}\left( w\right) $ can be regarded as an undirected graph (at the expense of muddying the colors of the arcs). Furthermore, every reduced expression for $w$ can be obtained from any other by a sequence of braid moves (this is the Matsumoto-Tits theorem; it appears, e.g., in \cite[Theorem 1.9]{Lusztig-Hecke}). Thus, the graph $\mathcal{R}_{0}\left( w\right) $ is strongly connected.
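The graph pictured above can be reconstructed by brute force: enumerate all words of length $4$ in $s_{1},\dots,s_{4}$ evaluating to $w$, then apply braid moves. The following plain-Python sketch (helper names ours) confirms that $w$ has exactly eight reduced expressions, each with exactly two braid-move neighbors, so $\mathcal{R}_{0}\left( w\right)$ is a single bidirected $8$-cycle:

```python
# Reconstructing R_0(w) for w = (3,2,1,5,4) in S_5 (names ours).
from itertools import product

n = 5

def s(i):
    p = list(range(n))
    p[i - 1], p[i] = p[i], p[i - 1]
    return tuple(p)

def compose(u, v):
    return tuple(u[v[i]] for i in range(n))

def evaluate(word):
    p = tuple(range(n))
    for i in word:
        p = compose(p, s(i))
    return p

w = (2, 1, 0, 4, 3)        # one-line notation (3,2,1,5,4), 0-indexed
gens = [1, 2, 3, 4]

# no shorter word gives w, so the length-4 words below are reduced
assert all(evaluate(word) != w
           for k in range(4) for word in product(gens, repeat=k))
reduced = [word for word in product(gens, repeat=4) if evaluate(word) == w]
assert len(reduced) == 8   # the eight vertices in the picture

def braid_neighbors(word):
    out = []
    word = list(word)
    for k in range(len(word) - 1):
        i, j = word[k], word[k + 1]
        if abs(i - j) > 1:                       # commutation, m_{s,t} = 2
            out.append(tuple(word[:k] + [j, i] + word[k + 2:]))
        if (k + 2 < len(word) and abs(i - j) == 1
                and word[k + 2] == i):           # long braid, m_{s,t} = 3
            out.append(tuple(word[:k] + [j, i, j] + word[k + 3:]))
    return out

# each vertex has exactly two neighbors, all of them reduced:
assert all(len(braid_neighbors(v)) == 2 for v in reduced)
assert all(set(braid_neighbors(v)) <= set(reduced) for v in reduced)
```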
What do the cycles of $\mathcal{R}_{0}\left( w\right) $ have in common? Walking down the long cycle in the graph $\mathcal{R}_{0}\left( w\right) $ for $w=\left( 3,2,1,5,4\right) \in S_{5}$ counterclockwise, we observe that the $\left( s_{1},s_{2}\right) $-braid move is used once (i.e., we traverse precisely one arc with color $\left( s_{1},s_{2}\right) $), the $\left( s_{2},s_{1}\right) $-braid move once, the $\left( s_{1},s_{4}\right) $-braid move twice, the $\left( s_{4},s_{1}\right) $-braid move once, the $\left( s_{2},s_{4}\right) $-braid move once, and the $\left( s_{4} ,s_{2}\right) $-braid move twice. In particular:
\begin{itemize} \item The total number of $\left( s_{i},s_{j}\right) $-braid moves with $\left\vert i-j\right\vert =1$ used is even (namely, $2$).
\item The total number of $\left( s_{i},s_{j}\right) $-braid moves with $\left\vert i-j\right\vert >1$ used is even (namely, $6$). \end{itemize}
This example alone is scant evidence of any general result, but both evenness patterns persist for general $n$, for any $w\in S_{n}$ and any directed cycle in $\mathcal{R}_{0}\left( w\right) $. We can simplify the statement if we change our coloring to a coarser one. Namely, let $\mathfrak{M}$ denote the subset $\left\{ \left( s,t\right) \in S\times S\ \mid\ s\neq t\right\} =\left\{ \left( s_{i},s_{j}\right) \ \mid\ i\neq j\right\} $ of $S\times S$. We define a binary relation $\sim$ on $\mathfrak{M}$ by \[ \left( \left( s,t\right) \sim\left( s^{\prime},t^{\prime}\right) \ \Longleftrightarrow\ \text{there exists a }q\in W\text{ such that } qsq^{-1}=s^{\prime}\text{ and }qtq^{-1}=t^{\prime}\right) . \] This relation $\sim$ is an equivalence relation; it thus gives rise to a quotient set $\mathfrak{M}/\sim$. It is easy to see that the quotient set $\mathfrak{M}/\sim$ has exactly two elements (for $n\geq4$): the equivalence class of all $\left( s_{i},s_{j}\right) $ with $\left\vert i-j\right\vert =1$, and the equivalence class of all $\left( s_{i},s_{j}\right) $ with $\left\vert i-j\right\vert >1$. Let us now define an edge-colored directed graph $\mathcal{R}\left( w\right) $ by starting with $\mathcal{R}_{0}\left( w\right) $, and replacing each color $\left( s_{i},s_{j}\right) $ by its equivalence class $\left[ \left( s_{i},s_{j}\right) \right] $. Thus, in $\mathcal{R}\left( w\right) $, the arcs are colored with the (at most two) elements of $\mathfrak{M}/\sim$. Now, our evenness patterns can be restated as follows: For any $n\in\mathbb{N}$, any $w\in S_{n}$ and any color $c\in\mathfrak{M}/\sim$, any directed cycle of $\mathcal{R}\left( w\right) $ has an even number of arcs with color $c$.
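The claim that $\mathfrak{M}/\sim$ has exactly two elements (and that $\left( s,t\right) \sim\left( t,s\right) $ in the symmetric group) can be checked by brute force for $n=4$, conjugating by all $24$ elements of $S_{4}$; helper names in this sketch are ours:

```python
# Computing M/~ for W = S_4 by brute-force conjugation (names ours).
from itertools import permutations

n = 4

def s(i):
    p = list(range(n))
    p[i - 1], p[i] = p[i], p[i - 1]
    return tuple(p)

def compose(u, v):
    return tuple(u[v[k]] for k in range(n))

def inverse(p):
    inv = [0] * n
    for k, pk in enumerate(p):
        inv[pk] = k
    return tuple(inv)

W = list(permutations(range(n)))
S = {i: s(i) for i in (1, 2, 3)}
M = [(i, j) for i in S for j in S if i != j]

def related(a, b):
    # (s_a, t_a) ~ (s_b, t_b): simultaneous conjugation by some q in W
    return any(compose(compose(q, S[a[0]]), inverse(q)) == S[b[0]]
               and compose(compose(q, S[a[1]]), inverse(q)) == S[b[1]]
               for q in W)

classes = []
for m in M:
    for cls in classes:
        if related(m, cls[0]):
            cls.append(m)
            break
    else:
        classes.append([m])

assert len(classes) == 2                         # exactly two classes
assert sorted(len(c) for c in classes) == [2, 4]
assert all(related(m, (m[1], m[0])) for m in M)  # c = c^op in type A
```

The two classes are exactly the pairs $(s_i,s_j)$ with $\left\vert i-j\right\vert =1$ (four pairs) and those with $\left\vert i-j\right\vert >1$ (two pairs), as stated above.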
This can be generalized further to every Coxeter group, with a minor caveat. Namely, let $\left( W,S\right) $ be a Coxeter group with Coxeter matrix $\left( m_{s,s^{\prime}}\right) _{\left( s,s^{\prime}\right) \in S\times S}$. Notions such as reduced expressions and braid moves still make sense (see below for references and definitions). We redefine $\mathfrak{M}$ as $\left\{ \left( s,t\right) \in S\times S\ \mid\ s\neq t\text{ and }m_{s,t} <\infty\right\} $ (since pairs $\left( s,t\right) $ with $m_{s,t}=\infty$ do not give rise to braid moves). Unlike in the case of $W=S_{n}$, it is not necessarily true that $\left( s,t\right) \sim\left( t,s\right) $ for every $\left( s,t\right) \in\mathfrak{M}$. We define $\left[ \left( s,t\right) \right] ^{\operatorname*{op}}=\left[ \left( t,s\right) \right] $. The evenness pattern now has to be weakened as follows: For every $w\in W$ and any color $c\in\mathfrak{M}/\sim$, any directed cycle of $\mathcal{R}\left( w\right) $ has an even number of arcs whose color belongs to $\left\{ c,c^{\operatorname*{op}}\right\} $. (For $W=S_{n}$, we have $c=c^{\operatorname*{op}}$, and thus this recovers our old evenness patterns.) This is part of the main theorem we will prove in this note -- namely, Theorem \ref{thm.BCL} \textbf{(b)}; it extends a result \cite[Theorem 3.1]{BCL} obtained by Bergeron, Ceballos and Labb\'{e} by geometric means. The other part of the main theorem (Theorem \ref{thm.BCL} \textbf{(a)}) states that any directed cycle of $\mathcal{R}\left( w\right) $ has as many arcs with color $c$ as it has arcs with color $c^{\operatorname*{op}}$.
\section{The theorem}
In the following, we shall use the notations of \cite[\S 1]{Lusztig-Hecke} concerning Coxeter groups. (These notations are compatible with those of \cite[Chapter 4]{Bourbaki4-6}, except that Bourbaki writes $m\left( s,s^{\prime}\right) $ instead of $m_{s,s^{\prime}}$, and speaks of \textquotedblleft Coxeter systems\textquotedblright\ instead of \textquotedblleft Coxeter groups\textquotedblright.)
Let us recall a brief definition of Coxeter groups and Coxeter matrices:
A \textit{Coxeter group} is a pair $\left( W,S\right) $, where $W$ is a group, and where $S$ is a finite subset of $W$ having the following property: There exists a matrix $\left( m_{s,s^{\prime}}\right) _{\left( s,s^{\prime }\right) \in S\times S}\in\left\{ 1,2,3,\ldots,\infty\right\} ^{S\times S}$ such that
\begin{itemize} \item every $s\in S$ satisfies $m_{s,s}=1$;
\item every two distinct elements $s$ and $t$ of $S$ satisfy $m_{s,t} =m_{t,s}\geq2$;
\item the group $W$ can be presented by the generators $S$ and the relations \[ \left( st\right) ^{m_{s,t}}=1\ \ \ \ \ \ \ \ \ \ \text{for all }\left( s,t\right) \in S\times S\text{ satisfying }m_{s,t}\neq\infty. \]
\end{itemize}
In this case, the matrix $\left( m_{s,s^{\prime}}\right) _{\left( s,s^{\prime}\right) \in S\times S}$ is called the \textit{Coxeter matrix} of $\left( W,S\right) $. It is well-known (see, e.g., \cite[\S 1] {Lusztig-Hecke}\footnote{See also \cite[Chapter V, n$^{\circ}$ 4.3, Corollaire]{Bourbaki4-6} for a proof of the existence of a Coxeter group corresponding to a given Coxeter matrix. Note that Bourbaki's definition of a \textquotedblleft Coxeter system\textquotedblright\ differs from our definition of a \textquotedblleft Coxeter group\textquotedblright\ in the extra requirement that $m_{s,t}$ be the order of $st\in W$; but this turns out to be a consequence of the other requirements.}) that any Coxeter group has a unique Coxeter matrix, and conversely, for every finite set $S$ and any matrix $\left( m_{s,s^{\prime}}\right) _{\left( s,s^{\prime}\right) \in S\times S}\in\left\{ 1,2,3,\ldots,\infty\right\} ^{S\times S}$ satisfying the first two of the three requirements above, there exists a unique (up to isomorphism preserving $S$) Coxeter group $\left( W,S\right) $.
We fix a Coxeter group $\left( W,S\right) $ with Coxeter matrix $\left( m_{s,s^{\prime}}\right) _{\left( s,s^{\prime}\right) \in S\times S}$. Thus, $W$ is a group, and $S$ is a set of elements of order $2$ in $W$ such that for every $\left( s,s^{\prime}\right) \in S\times S$, the element $ss^{\prime }\in W$ has order $m_{s,s^{\prime}}$. (See, e.g., \cite[Proposition 1.3(b)]{Lusztig-Hecke} for this well-known fact.)
We let $\mathfrak{M}$ denote the subset \[ \left\{ \left( s,t\right) \in S\times S\ \mid\ s\neq t\text{ and } m_{s,t}<\infty\right\} \] of $S\times S$. (This is denoted by $I$ in \cite[Chapter 4, n$^{\circ}$ 1.3]{Bourbaki4-6}.) We define a binary relation $\sim$ on $\mathfrak{M}$ by \[ \left( \left( s,t\right) \sim\left( s^{\prime},t^{\prime}\right) \ \Longleftrightarrow\ \text{there exists a }q\in W\text{ such that } qsq^{-1}=s^{\prime}\text{ and }qtq^{-1}=t^{\prime}\right) . \] It is clear that this relation $\sim$ is an equivalence relation; it thus gives rise to a quotient set $\mathfrak{M}/\sim$. For every pair $P\in\mathfrak{M}$, we denote by $\left[ P\right] $ the equivalence class of $P$ with respect to this relation $\sim$.
We set $\mathbb{N}=\left\{ 0,1,2,\ldots\right\} $.
A \textit{word} will mean a $k$-tuple for some $k\in\mathbb{N}$. A \textit{subword} of a word $\left( s_{1},s_{2},\ldots,s_{k}\right) $ will mean a word of the form $\left( s_{i_{1}},s_{i_{2}},\ldots,s_{i_{p}}\right) $, where $i_{1},i_{2},\ldots,i_{p}$ are elements of $\left\{ 1,2,\ldots ,k\right\} $ satisfying $i_{1}<i_{2}<\cdots<i_{p}$. For instance, $\left( 1\right) $, $\left( 3,5\right) $, $\left( 1,3,5\right) $, $\left( {}\right) $ and $\left( 1,5\right) $ are subwords of the word $\left( 1,3,5\right) $. A \textit{factor} of a word $\left( s_{1},s_{2},\ldots ,s_{k}\right) $ will mean a word of the form $\left( s_{i+1},s_{i+2} ,\ldots,s_{i+m}\right) $ for some $i\in\left\{ 0,1,\ldots,k\right\} $ and some $m\in\left\{ 0,1,\ldots,k-i\right\} $. For instance, $\left( 1\right) $, $\left( 3,5\right) $, $\left( 1,3,5\right) $ and $\left( {}\right) $ are factors of the word $\left( 1,3,5\right) $, but $\left( 1,5\right) $ is not.
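The distinction between subwords and factors is easy to blur; the following short Python sketch (with our own helper names) makes it concrete and reproduces the examples above.

```python
def is_subword(u, w):
    """True if the tuple u appears in w in order, not necessarily contiguously."""
    it = iter(w)
    return all(x in it for x in u)  # each `x in it` consumes the iterator up to x

def is_factor(u, w):
    """True if the tuple u appears in w as a contiguous run."""
    u, w = tuple(u), tuple(w)
    return any(w[i:i + len(u)] == u for i in range(len(w) - len(u) + 1))

print(is_subword((1, 5), (1, 3, 5)))  # True:  (1,5) is a subword of (1,3,5) ...
print(is_factor((1, 5), (1, 3, 5)))   # False: ... but not a factor
```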
We recall that a \textit{reduced expression} for an element $w\in W$ is a $k$-tuple $\left( s_{1},s_{2},\ldots,s_{k}\right) $ of elements of $S$ such that $w=s_{1}s_{2}\cdots s_{k}$, and such that $k$ is minimum (among all such tuples). The length of a reduced expression for $w$ is called the \textit{length} of $w$, and is denoted by $l\left( w\right) $. Thus, a reduced expression for an element $w\in W$ is a $k$-tuple $\left( s_{1} ,s_{2},\ldots,s_{k}\right) $ of elements of $S$ such that $w=s_{1}s_{2}\cdots s_{k}$ and $k=l\left( w\right) $.
\begin{definition} \label{def.braid}Let $w\in W$. Let $\overrightarrow{a}=\left( a_{1} ,a_{2},\ldots,a_{k}\right) $ and $\overrightarrow{b}=\left( b_{1} ,b_{2},\ldots,b_{k}\right) $ be two reduced expressions for $w$.
Let $\left( s,t\right) \in\mathfrak{M}$. We say that $\overrightarrow{b}$ is obtained from $\overrightarrow{a}$ by an $\left( s,t\right) $\textit{-braid move} if $\overrightarrow{b}$ can be obtained from $\overrightarrow{a}$ by finding a factor of $\overrightarrow{a}$ of the form $\underbrace{\left( s,t,s,t,s,\ldots\right) }_{m_{s,t}\text{ elements}}$ and replacing it by $\underbrace{\left( t,s,t,s,t,\ldots\right) }_{m_{s,t}\text{ elements}}$.
We notice that if $\overrightarrow{b}$ is obtained from $\overrightarrow{a}$ by an $\left( s,t\right) $-braid move, then $\overrightarrow{a}$ is obtained from $\overrightarrow{b}$ by a $\left( t,s\right) $-braid move. \end{definition}
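Braid moves are a purely mechanical operation on words; for illustration, here is a minimal sketch (the function name `braid_moves` is ours) that lists all words reachable by a single $\left( s,t\right) $-braid move.

```python
def braid_moves(word, s, t, m):
    """All words obtained from `word` by one (s,t)-braid move: find a
    factor (s,t,s,t,...) of length m = m_{s,t} and replace it by (t,s,t,s,...)."""
    pat = tuple(s if i % 2 == 0 else t for i in range(m))
    rep = tuple(t if i % 2 == 0 else s for i in range(m))
    word = tuple(word)
    return [word[:i] + rep + word[i + m:]
            for i in range(len(word) - m + 1)
            if word[i:i + m] == pat]

# In S_3 (generators labeled 1, 2, with m_{1,2} = 3), the reduced word
# (1,2,1) admits exactly one (1,2)-braid move, yielding (2,1,2):
print(braid_moves((1, 2, 1), 1, 2, 3))  # [(2, 1, 2)]
```

Applying a $\left( t,s\right) $-braid move to the result recovers the original word, in line with the observation at the end of Definition \ref{def.braid}.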
\begin{definition} \label{def.R}Let $w\in W$. We define an edge-colored directed graph $\mathcal{R}\left( w\right) $, whose arcs are colored with elements of $\mathfrak{M}/\sim$, as follows:
\begin{itemize} \item The vertex set of $\mathcal{R}\left( w\right) $ shall be the set of all reduced expressions for $w$.
\item The arcs of $\mathcal{R}\left( w\right) $ are defined as follows: Whenever $\left( s,t\right) \in\mathfrak{M}$, and whenever $\overrightarrow{a}$ and $\overrightarrow{b}$ are two reduced expressions for $w$ such that $\overrightarrow{b}$ is obtained from $\overrightarrow{a}$ by an $\left( s,t\right) $-braid move, we draw an arc from $s$ to $t$ with color $\left[ \left( s,t\right) \right] $. \end{itemize} \end{definition}
\begin{theorem} \label{thm.BCL}Let $w\in W$. Let $C$ be a (directed) cycle in the graph $\mathcal{R}\left( w\right) $. Let $c=\left[ \left( s,t\right) \right] \in\mathfrak{M}/\sim$ be an equivalence class with respect to $\sim$. Let $c^{\operatorname*{op}}$ be the equivalence class $\left[ \left( t,s\right) \right] \in\mathfrak{M}/\sim$. Then:
\textbf{(a)} The number of arcs colored $c$ appearing in the cycle $C$ equals the number of arcs colored $c^{\operatorname*{op}}$ appearing in the cycle $C$.
\textbf{(b)} The number of arcs whose color belongs to $\left\{ c,c^{\operatorname*{op}}\right\} $ appearing in the cycle $C$ is even. \end{theorem}
Neither of the parts \textbf{(a)} and \textbf{(b)} of Theorem \ref{thm.BCL} is a trivial consequence of the other: When $c=c^{\operatorname*{op}}$, the statement of Theorem \ref{thm.BCL} \textbf{(a)} is obvious, but it does not imply part \textbf{(b)}.
Theorem \ref{thm.BCL} \textbf{(b)} generalizes \cite[Theorem 3.1]{BCL} in two directions: First, Theorem \ref{thm.BCL} is stated for arbitrary Coxeter groups, rather than only for finite Coxeter groups as in \cite{BCL}. Second, in the terms of \cite[Remark 3.3]{BCL}, we are working with sets $Z$ that are \textquotedblleft stable by conjugation instead of automorphism\textquotedblright.
\section{Inversions and the word $\rho_{s,t}$}
We shall now introduce some notations and state some auxiliary results that will be used to prove Theorem \ref{thm.BCL}. Our strategy of proof is inspired by that used in \cite[\S 3.4]{BCL} and thus (indirectly) also by that in \cite[\S 3, and proof of Corollary 5.2]{ReiRoi11}; however, we shall avoid any use of geometry (such as roots and hyperplane arrangements), and work entirely with the Coxeter group itself.
We denote the subset $\bigcup\limits_{x\in W}xSx^{-1}$ of $W$ by $T$. The elements of $T$ are called the \textit{reflections} (of $W$). They all have order $2$. (The notation $T$ is used here in the same meaning as in \cite[\S 1]{Lusztig-Hecke} and in \cite[Chapter 4, n$^{\circ}$ 1.4] {Bourbaki4-6}.)
\begin{definition} \label{def.biset}For every $k\in\mathbb{N}$, we consider the set $W^{k}$ as a left $W$-set by the rule \[ w\left( w_{1},w_{2},\ldots,w_{k}\right) =\left( ww_{1},ww_{2},\ldots ,ww_{k}\right) , \] and as a right $W$-set by the rule \[ \left( w_{1},w_{2},\ldots,w_{k}\right) w=\left( w_{1}w,w_{2}w,\ldots ,w_{k}w\right) . \]
\end{definition}
\begin{definition} Let $s$ and $t$ be two distinct elements of $T$. Let $m_{s,t}$ denote the order of the element $st\in W$. (This extends the definition of $m_{s,t}$ for $s,t\in S$.) Assume that $m_{s,t}<\infty$. We let $D_{s,t}$ denote the subgroup of $W$ generated by $s$ and $t$. Then, $D_{s,t}$ is a dihedral group (since $s$ and $t$ are two distinct nontrivial involutions, and since any group generated by two distinct nontrivial involutions is dihedral). We denote by $\rho_{s,t}$ the word \[ \left( \left( st\right) ^{0}s,\left( st\right) ^{1}s,\ldots,\left( st\right) ^{m_{s,t}-1}s\right) =\left( s,sts,ststs,\ldots ,\underbrace{ststs\cdots s}_{2m_{s,t}-1\text{ letters}}\right) \in\left( D_{s,t}\right) ^{m_{s,t}}. \]
\end{definition}
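The word $\rho_{s,t}$ is easy to compute once $s$ and $t$ are given concretely. The sketch below (our encoding: permutations as tuples of images, `mul` composing as functions) builds $\rho_{s,t}$ for two reflections in $S_{3}$; one can also check numerically that its entries are distinct and that $\rho_{t,s}$ is its reversal, properties proved in general below.

```python
def mul(p, q):
    # group product p*q, acting as the function x -> p(q(x))
    return tuple(p[q[x]] for x in range(len(p)))

def order(p):
    e = tuple(range(len(p)))
    k, r = 1, p
    while r != e:
        r, k = mul(r, p), k + 1
    return k

def rho(s, t):
    """The word rho_{s,t} = ((st)^0 s, (st)^1 s, ..., (st)^{m-1} s), m = m_{s,t}."""
    st = mul(s, t)
    m = order(st)
    word, power = [], tuple(range(len(s)))  # power runs through (st)^k
    for _ in range(m):
        word.append(mul(power, s))
        power = mul(power, st)
    return word

# In S_3 with s = (0 1) and t = (1 2): m_{s,t} = 3, rho_{s,t} = (s, sts, ststs)
s, t = (1, 0, 2), (0, 2, 1)
print(rho(s, t))
```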
The \textit{reversal} of a word $\left( a_{1},a_{2},\ldots,a_{k}\right) $ is defined to be the word $\left( a_{k},a_{k-1},\ldots,a_{1}\right) $.
The following proposition collects some simple properties of the words $\rho_{s,t}$.
\begin{proposition} \label{prop.rhost}Let $s$ and $t$ be two distinct elements of $T$ such that $m_{s,t}<\infty$. Then:
\textbf{(a)} The word $\rho_{s,t}$ consists of reflections in $D_{s,t}$, and contains every reflection in $D_{s,t}$ exactly once.
\textbf{(b)} The word $\rho_{t,s}$ is the reversal of the word $\rho_{s,t}$.
\textbf{(c)} Let $q\in W$. Then, the word $q\rho_{t,s}q^{-1}$ is the reversal of the word $q\rho_{s,t}q^{-1}$. \end{proposition}
\begin{proof} [Proof of Proposition \ref{prop.rhost}.]\textbf{(a)} We need to prove three claims:
\textit{Claim 1:} Every entry of the word $\rho_{s,t}$ is a reflection in $D_{s,t}$.
\textit{Claim 2:} The entries of the word $\rho_{s,t}$ are distinct.
\textit{Claim 3:} Every reflection in $D_{s,t}$ is an entry of the word $\rho_{s,t}$.
\textit{Proof of Claim 1:} We must show that $\left( st\right) ^{k}s$ is a reflection in $D_{s,t}$ for every $k\in\left\{ 0,1,\ldots,m_{s,t}-1\right\} $. Thus, fix $k\in\left\{ 0,1,\ldots,m_{s,t}-1\right\} $. Then, \begin{align*} \left( st\right) ^{k}s & =\underbrace{stst\cdots s}_{2k+1\text{ letters}}= \begin{cases} \underbrace{stst\cdots t}_{k\text{ letters}}s\underbrace{tsts\cdots s}_{k\text{ letters}}, & \text{if }k\text{ is even};\\ \underbrace{stst\cdots s}_{k\text{ letters}}t\underbrace{stst\cdots s}_{k\text{ letters}}, & \text{if }k\text{ is odd} \end{cases} \\ & = \begin{cases} \underbrace{stst\cdots t}_{k\text{ letters}}s\left( \underbrace{stst\cdots t}_{k\text{ letters}}\right) ^{-1}, & \text{if }k\text{ is even};\\ \underbrace{stst\cdots s}_{k\text{ letters}}t\left( \underbrace{stst\cdots s}_{k\text{ letters}}\right) ^{-1}, & \text{if }k\text{ is odd} \end{cases} \\ & \ \ \ \ \ \ \ \ \ \ \left( \begin{array} [c]{c} \text{since }\underbrace{tsts\cdots s}_{k\text{ letters}}=\left( \underbrace{stst\cdots t}_{k\text{ letters}}\right) ^{-1}\text{ if }k\text{ is even,}\\ \text{and }\underbrace{stst\cdots s}_{k\text{ letters}}=\left( \underbrace{stst\cdots s}_{k\text{ letters}}\right) ^{-1}\text{ if }k\text{ is odd} \end{array} \right) . \end{align*} Hence, $\left( st\right) ^{k}s$ is conjugate to either $s$ or $t$ (depending on whether $k$ is even or odd). Thus, $\left( st\right) ^{k}s$ is a reflection. Also, it clearly lies in $D_{s,t}$. This proves Claim 1.
\textit{Proof of Claim 2:} The element $st$ of $W$ has order $m_{s,t}$. Thus, the elements $\left( st\right) ^{0},\left( st\right) ^{1},\ldots,\left( st\right) ^{m_{s,t}-1}$ are all distinct. Hence, the elements $\left( st\right) ^{0}s,\left( st\right) ^{1}s,\ldots,\left( st\right) ^{m_{s,t}-1}s$ are all distinct. In other words, the entries of the word $\rho_{s,t}$ are all distinct. Claim 2 is proven.
\textit{Proof of Claim 3:} The dihedral group $D_{s,t}$ has $2m_{s,t}$ elements\footnote{since it is generated by two distinct involutions $s\neq1$ and $t\neq1$ whose product $st$ has order $m_{s,t}$}, of which at most $m_{s,t}$ are reflections\footnote{\textit{Proof.} Consider the group homomorphism $\operatorname*{sgn}:W\rightarrow\left\{ 1,-1\right\} $ defined in \cite[\S 1.1]{Lusztig-Hecke}. The group homomorphism $\operatorname*{sgn} \mid_{D_{s,t}}:D_{s,t}\rightarrow\left\{ 1,-1\right\} $ sends either none or $m_{s,t}$ elements of $D_{s,t}$ to $-1$. Thus, this homomorphism $\operatorname*{sgn}\mid_{D_{s,t}}$ sends at most $m_{s,t}$ elements of $D_{s,t}$ to $-1$. Since it must send every reflection to $-1$, this shows that at most $m_{s,t}$ elements of $D_{s,t}$ are reflections. \par (Actually, we can replace \textquotedblleft at most\textquotedblright\ by \textquotedblleft exactly\textquotedblright\ here, but we won't need this.)}. But the word $\rho_{s,t}$ has $m_{s,t}$ entries, and all its entries are reflections in $D_{s,t}$ (by Claim 1); hence, it contains $m_{s,t}$ reflections in $D_{s,t}$ (by Claim 2). Since $D_{s,t}$ has only at most $m_{s,t}$ reflections, this shows that every reflection in $D_{s,t}$ is an entry of the word $\rho_{s,t}$. Claim 3 is proven.
This finishes the proof of Proposition \ref{prop.rhost} \textbf{(a)}.
\textbf{(b)} We have $\rho_{s,t}=\left( \left( st\right) ^{0}s,\left( st\right) ^{1}s,\ldots,\left( st\right) ^{m_{s,t}-1}s\right) $ and \newline$\rho_{t,s}=\left( \left( ts\right) ^{0}t,\left( ts\right) ^{1}t,\ldots,\left( ts\right) ^{m_{s,t}-1}t\right) $ (since $m_{t,s} =m_{s,t}$). Thus, in order to prove Proposition \ref{prop.rhost} \textbf{(b)}, we must merely show that $\left( st\right) ^{k}s=\left( ts\right) ^{m_{s,t}-1-k}t$ for every $k\in\left\{ 0,1,\ldots,m_{s,t}-1\right\} $.
So fix $k\in\left\{ 0,1,\ldots,m_{s,t}-1\right\} $. Then, \begin{align*} \left( st\right) ^{k}s\cdot\left( \left( ts\right) ^{m_{s,t} -1-k}t\right) ^{-1} & =\left( st\right) ^{k}s\underbrace{t^{-1}} _{=t}\underbrace{\left( \left( ts\right) ^{m_{s,t}-1-k}\right) ^{-1} }_{=\left( s^{-1}t^{-1}\right) ^{m_{s,t}-1-k}}=\underbrace{\left( st\right) ^{k}st}_{=\left( st\right) ^{k+1}}\left( \underbrace{s^{-1} }_{=s}\underbrace{t^{-1}}_{=t}\right) ^{m_{s,t}-1-k}\\ & =\left( st\right) ^{k+1}\left( st\right) ^{m_{s,t}-1-k}=\left( st\right) ^{m_{s,t}}=1, \end{align*} so that $\left( st\right) ^{k}s=\left( ts\right) ^{m_{s,t}-1-k}t$. This proves Proposition \ref{prop.rhost} \textbf{(b)}.
\textbf{(c)} Let $q\in W$. Proposition \ref{prop.rhost} \textbf{(b)} shows that the word $\rho_{t,s}$ is the reversal of the word $\rho_{s,t}$. Hence, the word $q\rho_{t,s}q^{-1}$ is the reversal of the word $q\rho_{s,t}q^{-1}$ (since the word $q\rho_{t,s}q^{-1}$ is obtained from $\rho_{t,s}$ by conjugating each letter by $q$, and the word $q\rho_{s,t}q^{-1}$ is obtained from $\rho_{s,t}$ in the same way). This proves Proposition \ref{prop.rhost} \textbf{(c)}. \end{proof}
\begin{definition} \label{def.Invsle}Let $\overrightarrow{a}=\left( a_{1},a_{2},\ldots ,a_{k}\right) \in S^{k}$. Then, $\operatorname*{Invs}\overrightarrow{a}$ is defined to be the $k$-tuple $\left( t_{1},t_{2},\ldots,t_{k}\right) \in T^{k}$, where we set \[ t_{i}=\left( a_{1}a_{2}\cdots a_{i-1}\right) a_{i}\left( a_{1}a_{2}\cdots a_{i-1}\right) ^{-1}\ \ \ \ \ \ \ \ \ \ \text{for every }i\in\left\{ 1,2,\ldots,k\right\} . \]
\end{definition}
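A small computational illustration of $\operatorname*{Invs}$ may help; the encoding below (permutations as tuples of images, `mul` composing as functions, names ours) computes the inversion word of the reduced expression $\left( s,t,s\right) $ in $S_{3}$.

```python
def mul(p, q):
    # group product p*q, acting as the function x -> p(q(x))
    return tuple(p[q[x]] for x in range(len(p)))

def inv(p):
    out = [0] * len(p)
    for x, y in enumerate(p):
        out[y] = x
    return tuple(out)

def invs(word, n):
    """Invs(a_1,...,a_k) = (t_1,...,t_k) with
    t_i = (a_1 a_2 ... a_{i-1}) a_i (a_1 a_2 ... a_{i-1})^{-1}."""
    out, prefix = [], tuple(range(n))  # prefix = a_1 a_2 ... a_{i-1}
    for a in word:
        out.append(mul(mul(prefix, a), inv(prefix)))
        prefix = mul(prefix, a)
    return out

# S_3: s = (0 1), t = (1 2); (s,t,s) is a reduced expression for the longest element
s, t = (1, 0, 2), (0, 2, 1)
print(invs([s, t, s], 3))  # the three distinct reflections s, sts, t
```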
\begin{remark} Let $w\in W$. Let $\overrightarrow{a}=\left( a_{1},a_{2},\ldots,a_{k}\right) $ be a reduced expression for $w$. The $k$-tuple $\operatorname*{Invs} \overrightarrow{a}$ is denoted by $\Phi\left( \overrightarrow{a}\right) $ in \cite[Chapter 4, n$^{\circ}$ 1.4]{Bourbaki4-6}, and is closely connected to various standard constructions in Coxeter group theory. A well-known fact states that the set of all entries of $\operatorname*{Invs}\overrightarrow{a}$ depends only on $w$ (but not on $\overrightarrow{a}$); this set is called the \textit{(left) inversion set} of $w$. The $k$-tuple $\operatorname*{Invs} \overrightarrow{a}$ contains each element of this set exactly once (see Proposition \ref{prop.Invsles} below); it thus induces a total order on this set. \end{remark}
\begin{proposition} \label{prop.Invsles}Let $w\in W$.
\textbf{(a)} If $\overrightarrow{a}$ is a reduced expression for $w$, then all entries of the tuple $\operatorname*{Invs}\overrightarrow{a}$ are distinct.
\textbf{(b)} Let $\left( s,t\right) \in\mathfrak{M}$. Let $\overrightarrow{a}$ and $\overrightarrow{b}$ be two reduced expressions for $w$ such that $\overrightarrow{b}$ is obtained from $\overrightarrow{a}$ by an $\left( s,t\right) $-braid move. Then, there exists a $q\in W$ such that $\operatorname*{Invs}\overrightarrow{b}$ is obtained from $\operatorname*{Invs}\overrightarrow{a}$ by replacing a particular factor of the form $q\rho_{s,t}q^{-1}$ by its reversal\footnotemark. \end{proposition}
\footnotetext{See Definition \ref{def.biset} for the meaning of $q\rho _{s,t}q^{-1}$.}
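Before proving Proposition \ref{prop.Invsles}, it may be instructive to watch part \textbf{(b)} happen on a small example. The sketch below (our permutation encoding) applies the braid move $\left( s_{1},s_{2},s_{1}\right) \rightarrow\left( s_{2},s_{1},s_{2}\right) $ inside a reduced word in $S_{4}$, with $p=1$ and $q=s_{3}$, and exhibits the reversed factor in the inversion word.

```python
def mul(p, q):
    return tuple(p[q[x]] for x in range(len(p)))

def inv(p):
    out = [0] * len(p)
    for x, y in enumerate(p):
        out[y] = x
    return tuple(out)

def invs(word, n):
    # Invs as in Definition def.Invsle
    out, prefix = [], tuple(range(n))
    for a in word:
        out.append(mul(mul(prefix, a), inv(prefix)))
        prefix = mul(prefix, a)
    return out

def adj(i, n=4):
    p = list(range(n))
    p[i - 1], p[i] = p[i], p[i - 1]
    return tuple(p)

s1, s2, s3 = adj(1), adj(2), adj(3)

a = [s3, s1, s2, s1]  # a reduced expression in S_4
b = [s3, s2, s1, s2]  # obtained from a by an (s1, s2)-braid move

A, B = invs(a, 4), invs(b, 4)
print(A)
print(B)
```

Here $\operatorname*{Invs}\overrightarrow{b}$ agrees with $\operatorname*{Invs}\overrightarrow{a}$ in position $1$, while positions $2$ through $4$, the factor $q\rho_{s_{1},s_{2}}q^{-1}$ with $q=s_{3}$, appear reversed.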
\begin{proof} [Proof of Proposition \ref{prop.Invsles}.]Let $\overrightarrow{a}$ be a reduced expression for $w$. Write $\overrightarrow{a}$ as $\left( a_{1} ,a_{2},\ldots,a_{k}\right) $. Then, the definition of $\operatorname*{Invs} \overrightarrow{a}$ shows that $\operatorname*{Invs}\overrightarrow{a}=\left( t_{1},t_{2},\ldots,t_{k}\right) $, where the $t_{i}$ are defined by \[ t_{i}=\left( a_{1}a_{2}\cdots a_{i-1}\right) a_{i}\left( a_{1}a_{2}\cdots a_{i-1}\right) ^{-1}\ \ \ \ \ \ \ \ \ \ \text{for every }i\in\left\{ 1,2,\ldots,k\right\} . \] Now, every $i\in\left\{ 1,2,\ldots,k\right\} $ satisfies \begin{align*} t_{i} & =\left( a_{1}a_{2}\cdots a_{i-1}\right) a_{i}\underbrace{\left( a_{1}a_{2}\cdots a_{i-1}\right) ^{-1}}_{\substack{=a_{i-1}^{-1}a_{i-2} ^{-1}\cdots a_{1}^{-1}=a_{i-1}a_{i-2}\cdots a_{1}\\\text{(since each } a_{j}\text{ belongs to }S\text{)}}}=\left( a_{1}a_{2}\cdots a_{i-1}\right) a_{i}\left( a_{i-1}a_{i-2}\cdots a_{1}\right) \\ & =a_{1}a_{2}\cdots a_{i-1}a_{i}a_{i-1}\cdots a_{2}a_{1}. \end{align*} But \cite[Proposition 1.6 (a)]{Lusztig-Hecke} (applied to $q=k$ and $s_{i}=a_{i}$) shows that the elements $a_{1},a_{1}a_{2}a_{1},a_{1}a_{2} a_{3}a_{2}a_{1},\ldots,a_{1}a_{2}\cdots a_{k-1}a_{k}a_{k-1}\cdots a_{2}a_{1}$ are distinct\footnote{This also follows from \cite[Chapter 4, n$^{\circ}$ 1.4, Lemme 2]{Bourbaki4-6}.}. In other words, the elements $t_{1},t_{2} ,\ldots,t_{k}$ are distinct (since \newline$t_{i}=a_{1}a_{2}\cdots a_{i-1}a_{i}a_{i-1}\cdots a_{2}a_{1}$ for every $i\in\left\{ 1,2,\ldots ,k\right\} $). In other words, all entries of the tuple $\operatorname*{Invs} \overrightarrow{a}$ are distinct. Proposition \ref{prop.Invsles} \textbf{(a)} is proven.
\textbf{(b)} We need to prove that there exists a $q\in W$ such that $\operatorname*{Invs}\overrightarrow{b}$ is obtained from $\operatorname*{Invs}\overrightarrow{a}$ by replacing a particular factor of the form $q\rho_{s,t}q^{-1}$ by its reversal.
We set $m=m_{s,t}$ (for the sake of brevity).
Write $\overrightarrow{a}$ as $\left( a_{1},a_{2},\ldots,a_{k}\right) $.
The word $\overrightarrow{b}$ can be obtained from $\overrightarrow{a}$ by an $\left( s,t\right) $-braid move. In other words, the word $\overrightarrow{b}$ can be obtained from $\overrightarrow{a}$ by finding a factor of $\overrightarrow{a}$ of the form $\underbrace{\left( s,t,s,t,s,\ldots\right) }_{m\text{ elements}}$ and replacing it by $\underbrace{\left( t,s,t,s,t,\ldots\right) }_{m\text{ elements}}$ (by the definition of an \textquotedblleft$\left( s,t\right) $-braid move\textquotedblright, since $m_{s,t}=m$). In other words, there exists a $p\in\left\{ 0,1,\ldots,k-m\right\} $ such that $\left( a_{p+1} ,a_{p+2},\ldots,a_{p+m}\right) =\underbrace{\left( s,t,s,t,s,\ldots\right) }_{m\text{ elements}}$, and the word $\overrightarrow{b}$ can be obtained by replacing the $\left( p+1\right) $-st through $\left( p+m\right) $-th entries of $\overrightarrow{a}$ by $\underbrace{\left( t,s,t,s,t,\ldots \right) }_{m\text{ elements}}$. Consider this $p$. Write $\overrightarrow{b}$ as $\left( b_{1},b_{2},\ldots,b_{k}\right) $ (this is possible since the tuple $\overrightarrow{b}$ has the same length as $\overrightarrow{a}$). Thus, \begin{align} \left( a_{1},a_{2},\ldots,a_{p}\right) & =\left( b_{1},b_{2},\ldots ,b_{p}\right) ,\label{pf.prop.Invsles.b.1}\\ \left( a_{p+1},a_{p+2},\ldots,a_{p+m}\right) & =\underbrace{\left( s,t,s,t,s,\ldots\right) }_{m\text{ elements}},\label{pf.prop.Invsles.b.2}\\ \left( b_{p+1},b_{p+2},\ldots,b_{p+m}\right) & =\underbrace{\left( t,s,t,s,t,\ldots\right) }_{m\text{ elements}},\label{pf.prop.Invsles.b.3}\\ \left( a_{p+m+1},a_{p+m+2},\ldots,a_{k}\right) & =\left( b_{p+m+1} ,b_{p+m+2},\ldots,b_{k}\right) . \label{pf.prop.Invsles.b.4} \end{align} Write the $k$-tuples $\operatorname*{Invs}\overrightarrow{a}$ and $\operatorname*{Invs}\overrightarrow{b}$ as $\left( \alpha_{1},\alpha _{2},\ldots,\alpha_{k}\right) $ and $\left( \beta_{1},\beta_{2},\ldots ,\beta_{k}\right) $, respectively. 
Their definitions show that \begin{equation} \alpha_{i}=\left( a_{1}a_{2}\cdots a_{i-1}\right) a_{i}\left( a_{1} a_{2}\cdots a_{i-1}\right) ^{-1} \label{pf.prop.Invsles.b.alpha} \end{equation} and \begin{equation} \beta_{i}=\left( b_{1}b_{2}\cdots b_{i-1}\right) b_{i}\left( b_{1} b_{2}\cdots b_{i-1}\right) ^{-1} \label{pf.prop.Invsles.b.beta} \end{equation} for every $i\in\left\{ 1,2,\ldots,k\right\} $.
Now, set $q=a_{1}a_{2}\cdots a_{p}$. From (\ref{pf.prop.Invsles.b.1}), we see that $q=b_{1}b_{2}\cdots b_{p}$ as well. In order to prove Proposition \ref{prop.Invsles} \textbf{(b)}, it clearly suffices to show that $\operatorname*{Invs}\overrightarrow{b}$ is obtained from $\operatorname*{Invs}\overrightarrow{a}$ by replacing a particular factor of the form $q\rho_{s,t}q^{-1}$ -- namely, the factor $\left( \alpha _{p+1},\alpha_{p+2},\ldots,\alpha_{p+m}\right) $ -- by its reversal.
So let us show this. In view of $\operatorname*{Invs}\overrightarrow{a} =\left( \alpha_{1},\alpha_{2},\ldots,\alpha_{k}\right) $ and $\operatorname*{Invs}\overrightarrow{b}=\left( \beta_{1},\beta_{2} ,\ldots,\beta_{k}\right) $, it clearly suffices to prove the following claims:
\textit{Claim 1:} We have $\beta_{i}=\alpha_{i}$ for every $i\in\left\{ 1,2,\ldots,p\right\} $.
\textit{Claim 2:} We have $\left( \alpha_{p+1},\alpha_{p+2},\ldots ,\alpha_{p+m}\right) =q\rho_{s,t}q^{-1}$.
\textit{Claim 3:} The $m$-tuple $\left( \beta_{p+1},\beta_{p+2},\ldots ,\beta_{p+m}\right) $ is the reversal of $\left( \alpha_{p+1},\alpha _{p+2},\ldots,\alpha_{p+m}\right) $.
\textit{Claim 4:} We have $\beta_{i}=\alpha_{i}$ for every $i\in\left\{ p+m+1,p+m+2,\ldots,k\right\} $.
\textit{Proof of Claim 1:} Let $i\in\left\{ 1,2,\ldots,p\right\} $. Then, (\ref{pf.prop.Invsles.b.1}) shows that $a_{g}=b_{g}$ for every $g\in\left\{ 1,2,\ldots,i\right\} $. Now, (\ref{pf.prop.Invsles.b.alpha}) becomes \begin{align*} \alpha_{i} & =\left( a_{1}a_{2}\cdots a_{i-1}\right) a_{i}\left( a_{1}a_{2}\cdots a_{i-1}\right) ^{-1}=\left( b_{1}b_{2}\cdots b_{i-1} \right) b_{i}\left( b_{1}b_{2}\cdots b_{i-1}\right) ^{-1}\\ & \ \ \ \ \ \ \ \ \ \ \left( \text{since }a_{g}=b_{g}\text{ for every } g\in\left\{ 1,2,\ldots,i\right\} \right) \\ & =\beta_{i}\ \ \ \ \ \ \ \ \ \ \left( \text{by (\ref{pf.prop.Invsles.b.beta})}\right) . \end{align*} This proves Claim 1.
\textit{Proof of Claim 2:} We have \[ \rho_{s,t}=\left( \left( st\right) ^{0}s,\left( st\right) ^{1} s,\ldots,\left( st\right) ^{m_{s,t}-1}s\right) =\left( \left( st\right) ^{0}s,\left( st\right) ^{1}s,\ldots,\left( st\right) ^{m-1}s\right) \] (since $m_{s,t}=m$). Hence, \begin{align*} q\rho_{s,t}q^{-1} & =q\left( \left( st\right) ^{0}s,\left( st\right) ^{1}s,\ldots,\left( st\right) ^{m-1}s\right) q^{-1}\\ & =\left( q\left( st\right) ^{0}sq^{-1},q\left( st\right) ^{1} sq^{-1},\ldots,q\left( st\right) ^{m-1}sq^{-1}\right) . \end{align*} Thus, in order to prove $\left( \alpha_{p+1},\alpha_{p+2},\ldots,\alpha _{p+m}\right) =q\rho_{s,t}q^{-1}$, it suffices to show that $\alpha _{p+i}=q\left( st\right) ^{i-1}sq^{-1}$ for every $i\in\left\{ 1,2,\ldots,m\right\} $. So let us fix $i\in\left\{ 1,2,\ldots,m\right\} $.
We have \[ a_{1}a_{2}\cdots a_{p+i-1}=\underbrace{\left( a_{1}a_{2}\cdots a_{p}\right) }_{=q}\underbrace{\left( a_{p+1}a_{p+2}\cdots a_{p+i-1}\right) }_{\substack{=\underbrace{stst\cdots}_{i-1\text{ letters}}\\\text{(by (\ref{pf.prop.Invsles.b.2}))}}}=q\underbrace{stst\cdots}_{i-1\text{ letters} }. \] Hence, \begin{align*} \left( a_{1}a_{2}\cdots a_{p+i-1}\right) ^{-1} & =\left( q\underbrace{stst\cdots}_{i-1\text{ letters}}\right) ^{-1}=\underbrace{\cdots t^{-1}s^{-1}t^{-1}s^{-1}}_{i-1\text{ letters}}q^{-1}\\ & =\underbrace{\cdots tsts}_{i-1\text{ letters}}q^{-1} \ \ \ \ \ \ \ \ \ \ \left( \text{since }s^{-1}=s\text{ and }t^{-1}=t\right) . \end{align*} Also, \[ \left( a_{1}a_{2}\cdots a_{p+i-1}\right) a_{p+i}=a_{1}a_{2}\cdots a_{p+i}=\underbrace{\left( a_{1}a_{2}\cdots a_{p}\right) }_{=q} \underbrace{\left( a_{p+1}a_{p+2}\cdots a_{p+i}\right) } _{\substack{=\underbrace{stst\cdots}_{i\text{ letters}}\\\text{(by (\ref{pf.prop.Invsles.b.2}))}}}=q\underbrace{stst\cdots}_{i\text{ letters}}. \] Now, (\ref{pf.prop.Invsles.b.alpha}) (applied to $p+i$ instead of $i$) yields \begin{align*} \alpha_{p+i} & =\underbrace{\left( a_{1}a_{2}\cdots a_{p+i-1}\right) a_{p+i}}_{=q\underbrace{stst\cdots}_{i\text{ letters}}}\underbrace{\left( a_{1}a_{2}\cdots a_{p+i-1}\right) ^{-1}}_{=\underbrace{\cdots tsts} _{i-1\text{ letters}}q^{-1}}=q\underbrace{\underbrace{stst\cdots}_{i\text{ letters}}\underbrace{\cdots tsts}_{i-1\text{ letters}}} _{=\underbrace{stst\cdots s}_{2i-1\text{ letters}}=\left( st\right) ^{i-1} s}q^{-1}\\ & =q\left( st\right) ^{i-1}sq^{-1}. \end{align*} This completes the proof of $\left( \alpha_{p+1},\alpha_{p+2},\ldots ,\alpha_{p+m}\right) =q\rho_{s,t}q^{-1}$. Hence, Claim 2 is proven.
\textit{Proof of Claim 3:} In our proof of Claim 2, we have shown that $\left( \alpha_{p+1},\alpha_{p+2},\ldots,\alpha_{p+m}\right) =q\rho _{s,t}q^{-1}$. The same argument (applied to $\overrightarrow{b}$, $\left( b_{1},b_{2},\ldots,b_{k}\right) $, $\left( \beta_{1},\beta_{2},\ldots ,\beta_{k}\right) $, $t$ and $s$ instead of $\overrightarrow{a}$, $\left( a_{1},a_{2},\ldots,a_{k}\right) $, $\left( \alpha_{1},\alpha_{2} ,\ldots,\alpha_{k}\right) $, $s$ and $t$) shows that $\left( \beta _{p+1},\beta_{p+2},\ldots,\beta_{p+m}\right) =q\rho_{t,s}q^{-1}$ (where we now use (\ref{pf.prop.Invsles.b.3}) instead of (\ref{pf.prop.Invsles.b.2}), and use $q=b_{1}b_{2}\cdots b_{p}$ instead of $q=a_{1}a_{2}\cdots a_{p}$).
Now, recall that the word $q\rho_{t,s}q^{-1}$ is the reversal of the word $q\rho_{s,t}q^{-1}$ (by Proposition \ref{prop.rhost} \textbf{(c)}). Since \newline$\left( \alpha_{p+1},\alpha_{p+2} ,\ldots,\alpha_{p+m}\right) =q\rho_{s,t}q^{-1}$ and $\left( \beta _{p+1},\beta_{p+2},\ldots,\beta_{p+m}\right) =q\rho_{t,s}q^{-1}$, this means that the word $\left( \beta_{p+1},\beta_{p+2},\ldots,\beta_{p+m}\right) $ is the reversal of $\left( \alpha_{p+1},\alpha_{p+2},\ldots,\alpha_{p+m}\right) $. This proves Claim 3.
\textit{Proof of Claim 4:} Since $m=m_{s,t}$, we have $\underbrace{stst\cdots }_{m\text{ letters}}=\underbrace{tsts\cdots}_{m\text{ letters}}$ (this is one of the braid relations of our Coxeter group). Let us set $x=\underbrace{stst\cdots}_{m\text{ letters}}=\underbrace{tsts\cdots}_{m\text{ letters}}$. Now, (\ref{pf.prop.Invsles.b.2}) yields $a_{p+1}a_{p+2}\cdots a_{p+m}=\underbrace{stst\cdots}_{m\text{ letters}}=x$. Similarly, from (\ref{pf.prop.Invsles.b.3}), we obtain $b_{p+1}b_{p+2}\cdots b_{p+m}=x$.
Let $i\in\left\{ p+m+1,p+m+2,\ldots,k\right\} $. Thus, \begin{align*} a_{1}a_{2}\cdots a_{i-1} & =\underbrace{\left( a_{1}a_{2}\cdots a_{p}\right) }_{=q}\underbrace{\left( a_{p+1}a_{p+2}\cdots a_{p+m}\right) }_{=x}\underbrace{\left( a_{p+m+1}a_{p+m+2}\cdots a_{i-1}\right) }_{\substack{=b_{p+m+1}b_{p+m+2}\cdots b_{i-1}\\\text{(by (\ref{pf.prop.Invsles.b.4}))}}}\\ & =qx\left( b_{p+m+1}b_{p+m+2}\cdots b_{i-1}\right) . \end{align*} Comparing this with \begin{align*} b_{1}b_{2}\cdots b_{i-1} & =\underbrace{\left( b_{1}b_{2}\cdots b_{p}\right) }_{=q}\underbrace{\left( b_{p+1}b_{p+2}\cdots b_{p+m}\right) }_{=x}\left( b_{p+m+1}b_{p+m+2}\cdots b_{i-1}\right) \\ & =qx\left( b_{p+m+1}b_{p+m+2}\cdots b_{i-1}\right) , \end{align*} we obtain $a_{1}a_{2}\cdots a_{i-1}=b_{1}b_{2}\cdots b_{i-1}$. Also, $a_{i}=b_{i}$ (by (\ref{pf.prop.Invsles.b.4})). Now, (\ref{pf.prop.Invsles.b.alpha}) becomes \begin{align*} \alpha_{i} & =\left( \underbrace{a_{1}a_{2}\cdots a_{i-1}}_{=b_{1} b_{2}\cdots b_{i-1}}\right) \underbrace{a_{i}}_{=b_{i}}\left( \underbrace{a_{1}a_{2}\cdots a_{i-1}}_{=b_{1}b_{2}\cdots b_{i-1}}\right) ^{-1}=\left( b_{1}b_{2}\cdots b_{i-1}\right) b_{i}\left( b_{1}b_{2}\cdots b_{i-1}\right) ^{-1}\\ & =\beta_{i}\ \ \ \ \ \ \ \ \ \ \left( \text{by (\ref{pf.prop.Invsles.b.beta})}\right) . \end{align*} This proves Claim 4.
Hence, all four claims are proven, and the proof of Proposition \ref{prop.Invsles} \textbf{(b)} is complete. \end{proof}
The following fact is rather easy (but will be proven in detail in the next section):
\begin{proposition} \label{prop.has}Let $w\in W$. Let $s$ and $t$ be two distinct elements of $T$ such that $m_{s,t}<\infty$. Let $\overrightarrow{a}$ be a reduced expression for $w$.
\textbf{(a)} The word $\rho_{s,t}$ appears as a subword of $\operatorname*{Invs}\overrightarrow{a}$ at most once.
\textbf{(b)} The words $\rho_{s,t}$ and $\rho_{t,s}$ cannot both appear as subwords of $\operatorname*{Invs}\overrightarrow{a}$. \end{proposition}
\begin{proof} [Proof of Proposition \ref{prop.has}.]\textbf{(a)} This follows from the fact that the word $\rho_{s,t}$ has length $m_{s,t}\geq2>0$, and from Proposition \ref{prop.Invsles} \textbf{(a)}.
\textbf{(b)} Assume the contrary. Then, both words $\rho_{s,t}$ and $\rho_{t,s}$ appear as a subword of $\operatorname*{Invs}\overrightarrow{a}$. By Proposition \ref{prop.rhost} \textbf{(b)}, this means that both the word $\rho_{s,t}$ and its reversal appear as a subword of $\operatorname*{Invs} \overrightarrow{a}$. Since the word $\rho_{s,t}$ has length $m_{s,t}\geq2$, this means that at least one letter of $\rho_{s,t}$ appears twice in $\operatorname*{Invs}\overrightarrow{a}$. This contradicts Proposition \ref{prop.Invsles} \textbf{(a)}. This contradiction concludes our proof. \end{proof}
\section{The set $\mathfrak{N}$ and subwords of inversion words}
We now let $\mathfrak{N}$ denote the subset $\bigcup\limits_{x\in W}x\mathfrak{M}x^{-1}$ of $T\times T$. Clearly, $\mathfrak{M}\subseteq \mathfrak{N}$. Moreover, for every $\left( s,t\right) \in\mathfrak{N}$, we have $s\neq t$ and $m_{s,t}<\infty$ (because $\left( s,t\right) \in\mathfrak{N}=\bigcup\limits_{x\in W}x\mathfrak{M}x^{-1}$, and because these properties are preserved by conjugation). Thus, for every $\left( s,t\right) \in\mathfrak{N}$, the word $\rho_{s,t}$ is well-defined and has exactly $m_{s,t}$ entries.
We define a binary relation $\approx$ on $\mathfrak{N}$ by \[ \left( \left( s,t\right) \approx\left( s^{\prime},t^{\prime}\right) \ \Longleftrightarrow\ \text{there exists a }q\in W\text{ such that } qsq^{-1}=s^{\prime}\text{ and }qtq^{-1}=t^{\prime}\right) . \] It is clear that this relation $\approx$ is an equivalence relation; it thus gives rise to a quotient set $\mathfrak{N}/\approx$. For every pair $P\in\mathfrak{N}$, we denote by $\left[ \left[ P\right] \right] $ the equivalence class of $P$ with respect to this relation $\approx$.
The relation $\sim$ on $\mathfrak{M}$ is the restriction of the relation $\approx$ to $\mathfrak{M}$. Hence, every equivalence class $c$ with respect to $\sim$ is a subset of an equivalence class with respect to $\approx$. We denote the latter equivalence class by $c_{\mathfrak{N}}$. Thus, $\left[ P\right] _{\mathfrak{N}}=\left[ \left[ P\right] \right] $ for every $P\in\mathfrak{M}$.
We notice that the set $\mathfrak{N}$ is invariant under switching the two elements of a pair (i.e., for every $\left( u,v\right) \in\mathfrak{N}$, we have $\left( v,u\right) \in\mathfrak{N}$). Moreover, the relation $\approx$ is preserved under switching the two elements of a pair (i.e., if $\left( s,t\right) \approx\left( s^{\prime},t^{\prime}\right) $, then $\left( t,s\right) \approx\left( t^{\prime},s^{\prime}\right) $). This shall be tacitly used in the following proofs.
\begin{definition} \label{def.has}Let $w\in W$. Let $\overrightarrow{a}$ be a reduced expression for $w$.
\textbf{(a)} For any $\left( s,t\right) \in\mathfrak{N}$, we define an element $\operatorname*{has}\nolimits_{s,t}\overrightarrow{a}\in\left\{ 0,1\right\} $ by \[ \operatorname*{has}\nolimits_{s,t}\overrightarrow{a}= \begin{cases} 1, & \text{if }\rho_{s,t}\text{ appears as a subword of }\operatorname*{Invs} \overrightarrow{a};\\ 0, & \text{otherwise} \end{cases} . \] (Keep in mind that we are speaking of subwords, not just factors, here.)
\textbf{(b)} Consider the free $\mathbb{Z}$-module $\mathbb{Z}\left[ \mathfrak{N}\right] $ with basis $\mathfrak{N}$. We define an element $\operatorname*{Has}\overrightarrow{a}\in\mathbb{Z}\left[ \mathfrak{N}\right] $ by \[ \operatorname*{Has}\overrightarrow{a}=\sum\limits_{\left( s,t\right) \in\mathfrak{N}}\operatorname*{has}\nolimits_{s,t}\overrightarrow{a}\cdot\left( s,t\right) \] (where the $\left( s,t\right) $ stands for the basis element $\left( s,t\right) \in\mathfrak{N}$ of $\mathbb{Z}\left[ \mathfrak{N}\right] $). \end{definition}
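The distinction between subwords and factors in this definition can be made concrete computationally. Here is a minimal Python sketch (an aside for the reader, not part of the formal development; the function names are our own), with the letters of a word modeled as arbitrary objects comparable by equality:

```python
def is_subword(small, big):
    """Return True if the tuple `small` appears as a subword of `big`,
    i.e., as a subsequence of not necessarily adjacent entries
    (in contrast to a factor, which must be contiguous)."""
    it = iter(big)
    # Each letter of `small` must be found in `big` strictly after
    # the position where the previous letter was found.
    return all(letter in it for letter in small)

def is_factor(small, big):
    """Return True if `small` appears as a factor (a contiguous block)
    of `big`."""
    n = len(small)
    return any(big[i:i + n] == small for i in range(len(big) - n + 1))
```

For instance, $\left( x,z\right) $ is a subword, but not a factor, of $\left( x,y,z\right) $. The one-pass iterator idiom in \texttt{is\_subword} greedily matches each letter as early as possible, which suffices for deciding whether a subword occurrence exists.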
We can now state the main result that we will use to prove Theorem \ref{thm.BCL}:
\begin{theorem} \label{thm.has}Let $w\in W$. Let $\left( s,t\right) \in\mathfrak{M}$. Let $\overrightarrow{a}$ and $\overrightarrow{b}$ be two reduced expressions for $w$ such that $\overrightarrow{b}$ is obtained from $\overrightarrow{a}$ by an $\left( s,t\right) $-braid move.
Proposition \ref{prop.Invsles} \textbf{(b)} shows that there exists a $q\in W$ such that $\operatorname*{Invs}\overrightarrow{b}$ is obtained from $\operatorname*{Invs}\overrightarrow{a}$ by replacing a particular factor of the form $q\rho_{s,t}q^{-1}$ by its reversal. Consider this $q$. Set $s^{\prime}=qsq^{-1}$ and $t^{\prime}=qtq^{-1}$; thus, $s^{\prime}$ and $t^{\prime}$ are reflections and satisfy $m_{s^{\prime},t^{\prime}} =m_{s,t}<\infty$. Also, the definitions of $s^{\prime}$ and $t^{\prime}$ yield $\left( s^{\prime},t^{\prime}\right) =q\underbrace{\left( s,t\right) }_{\in\mathfrak{M}}q^{-1}\in q\mathfrak{M}q^{-1}\subseteq\mathfrak{N}$. Similarly, $\left( t^{\prime},s^{\prime}\right) \in\mathfrak{N}$ (since $\left( t,s\right) \in\mathfrak{M}$).
Now, we have \begin{equation} \operatorname*{Has}\overrightarrow{b}=\operatorname*{Has}\overrightarrow{a} -\left( s^{\prime},t^{\prime}\right) +\left( t^{\prime},s^{\prime}\right) . \label{eq.thm.has.a} \end{equation}
\end{theorem}
Before we prove Theorem \ref{thm.has}, we first show two lemmas. The first one is a crucial property of dihedral subgroups in our Coxeter group:
\begin{lemma} \label{lem.dihindih}Let $\left( s,t\right) \in\mathfrak{M}$ and $\left( u,v\right) \in\mathfrak{N}$. Let $q\in W$. Assume that $u\in qD_{s,t}q^{-1}$ and $v\in qD_{s,t}q^{-1}$. Then, $m_{s,t}=m_{u,v}$. \end{lemma}
\begin{proof} [Proof of Lemma \ref{lem.dihindih}.]\textit{Claim 1:} Lemma \ref{lem.dihindih} holds in the case when $\left( u,v\right) \in\mathfrak{M}$.
\textit{Proof.} Assume that $\left( u,v\right) \in\mathfrak{M}$. Thus, $u,v\in S$. Let $I$ be the subset $\left\{ s,t\right\} $ of $S$. We shall use the notations of \cite[\S 9]{Lusztig-Hecke}. In particular, $l\left( r\right) $ denotes the length of any element $r\in W$.
We have $W_{I}=D_{s,t}$. Consider the coset $W_{I}q^{-1}$ of $W_{I}$. From \cite[Lemma 9.7 (a)]{Lusztig-Hecke} (applied to $a=q^{-1}$), we know that this coset $W_{I}q^{-1}$ has a unique element of minimal length. Let $w$ be this element. Thus, $w\in W_{I}q^{-1}$, so that $W_{I}w=W_{I}q^{-1}$. Now, \[ \underbrace{q}_{=\left( q^{-1}\right) ^{-1}}\underbrace{W_{I}}_{=\left( W_{I}\right) ^{-1}}=\left( q^{-1}\right) ^{-1}\left( W_{I}\right) ^{-1}=\left( \underbrace{W_{I}q^{-1}}_{=W_{I}w}\right) ^{-1}=\left( W_{I}w\right) ^{-1}=w^{-1}W_{I}. \]
Let $u^{\prime}=wuw^{-1}$ and $v^{\prime}=wvw^{-1}$.
We have $u\in q\underbrace{D_{s,t}}_{=W_{I}}q^{-1}=q\underbrace{W_{I}q^{-1} }_{=W_{I}w}=\underbrace{qW_{I}}_{=w^{-1}W_{I}}w=w^{-1}W_{I}w$. In other words, $wuw^{-1}\in W_{I}$. In other words, $u^{\prime}\in W_{I}$ (since $u^{\prime }=wuw^{-1}$). Similarly, $v^{\prime}\in W_{I}$.
We have $u^{\prime}=wuw^{-1}$, hence $u^{\prime}w=wu$. But \cite[Lemma 9.7 (b)]{Lusztig-Hecke} (applied to $a=q^{-1}$ and $y=u^{\prime}$) shows that $l\left( u^{\prime}w\right) =l\left( u^{\prime}\right) +l\left( w\right) $. Hence, \[ l\left( u^{\prime}\right) +l\left( w\right) =l\left( \underbrace{u^{\prime}w}_{=wu}\right) =l\left( wu\right) =l\left( w\right) \pm1\ \ \ \ \ \ \ \ \ \ \left( \text{since }u\in S\right) . \] Subtracting $l\left( w\right) $ from this equality, we obtain $l\left( u^{\prime}\right) =\pm1$, and thus $l\left( u^{\prime}\right) =1$, so that $u^{\prime}\in S$. Combined with $u^{\prime}\in W_{I}$, this shows that $u^{\prime}\in S\cap W_{I}=I$. Similarly, $v^{\prime}\in I$.
We have $u\neq v$ (since $\left( u,v\right) \in\mathfrak{N}$), thus $wuw^{-1}\neq wvw^{-1}$, thus $u^{\prime}=wuw^{-1}\neq wvw^{-1}=v^{\prime}$. Thus, $u^{\prime}$ and $v^{\prime}$ are two distinct elements of the two-element set $I=\left\{ s,t\right\} $. Hence, either $\left( u^{\prime },v^{\prime}\right) =\left( s,t\right) $ or $\left( u^{\prime},v^{\prime }\right) =\left( t,s\right) $. In either of these two cases, we have $m_{u^{\prime},v^{\prime}}=m_{s,t}$. But since $u^{\prime}=wuw^{-1}$ and $v^{\prime}=wvw^{-1}$, we have $m_{u^{\prime},v^{\prime}}=m_{u,v}$. Hence, $m_{s,t}=m_{u^{\prime},v^{\prime}}=m_{u,v}$. This proves Claim 1.
\textit{Claim 2:} Lemma \ref{lem.dihindih} holds in the general case.
\textit{Proof.} Consider the general case. We have $\left( u,v\right) \in\mathfrak{N}=\bigcup_{x\in W}x\mathfrak{M}x^{-1}$. Thus, there exists some $x\in W$ such that $\left( u,v\right) \in x\mathfrak{M}x^{-1}$. Consider this $x$. From $\left( u,v\right) \in x\mathfrak{M}x^{-1}$, we obtain $x^{-1}\left( u,v\right) x\in\mathfrak{M}$. In other words, $\left( x^{-1}ux,x^{-1}vx\right) \in\mathfrak{M}$. Moreover, \[ x^{-1}\underbrace{u}_{\in qD_{s,t}q^{-1}}x\in x^{-1}qD_{s,t}\underbrace{q^{-1} x}_{=\left( x^{-1}q\right) ^{-1}}=x^{-1}qD_{s,t}\left( x^{-1}q\right) ^{-1}, \] and similarly $x^{-1}vx\in x^{-1}qD_{s,t}\left( x^{-1}q\right) ^{-1}$. Hence, Claim 1 (applied to $\left( x^{-1}ux,x^{-1}vx\right) $ and $x^{-1}q$ instead of $\left( u,v\right) $ and $q$) shows that $m_{s,t}=m_{x^{-1} ux,x^{-1}vx}=m_{u,v}$. This proves Claim 2, and thus proves Lemma \ref{lem.dihindih}. \end{proof}
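As a quick computational sanity check of Lemma \ref{lem.dihindih} (an illustration only, not part of the proof; all function names are our own), one can work in the symmetric group $S_{4}$, realized as permutations of $\left\{ 0,1,2,3\right\} $ in Python. Take $s$ and $t$ to be the adjacent transpositions $\left( 0\;1\right) $ and $\left( 1\;2\right) $ (so that $m_{s,t}=3$), conjugate two distinct reflections of $D_{s,t}$ by an arbitrary $q$, and check that the order of their product is again $m_{s,t}$:

```python
def compose(p, q):
    """Product of permutations given as tuples: (p * q)(i) = p(q(i))."""
    return tuple(p[j] for j in q)

def inverse(p):
    """Inverse permutation."""
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def conj(q, x):
    """The conjugate q x q^{-1}."""
    return compose(compose(q, x), inverse(q))

def order(p):
    """Multiplicative order of a permutation."""
    identity = tuple(range(len(p)))
    power, n = p, 1
    while power != identity:
        power, n = compose(power, p), n + 1
    return n

s = (1, 0, 2, 3)                        # the transposition (0 1)
t = (0, 2, 1, 3)                        # the transposition (1 2)
m_st = order(compose(s, t))             # m_{s,t}: the order of st (here, 3)

q = (1, 2, 3, 0)                        # an arbitrary element of S_4
u = conj(q, s)                          # a reflection in q D_{s,t} q^{-1}
v = conj(q, compose(compose(s, t), s))  # conjugate of the reflection sts
m_uv = order(compose(u, v))             # the lemma predicts m_uv == m_st
```

Here $u$ and $v$ are two distinct reflections lying in $qD_{s,t}q^{-1}$, and indeed the order of $uv$ comes out equal to $m_{s,t}$.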
Next comes another lemma, bordering on the trivial:
\begin{lemma} \label{lem.GandH}Let $G$ be a group. Let $H$ be a subgroup of $G$. Let $u\in G$, $v\in G$ and $g\in\mathbb{Z}$. Assume that $\left( uv\right) ^{g-1}u\in H$ and $\left( uv\right) ^{g}u\in H$. Then, $u\in H$ and $v\in H$. \end{lemma}
\begin{proof} [Proof of Lemma \ref{lem.GandH}.]We have $\underbrace{\left( \left( uv\right) ^{g}u\right) }_{\in H}\left( \underbrace{\left( uv\right) ^{g-1}u}_{\in H}\right) ^{-1}\in HH^{-1}\subseteq H$ (since $H$ is a subgroup of $G$). Since \[ \left( \left( uv\right) ^{g}u\right) \underbrace{\left( \left( uv\right) ^{g-1}u\right) ^{-1}}_{=u^{-1}\left( \left( uv\right) ^{g-1}\right) ^{-1}}=\left( uv\right) ^{g}\underbrace{uu^{-1}}_{=1}\left( \left( uv\right) ^{g-1}\right) ^{-1}=\left( uv\right) ^{g}\left( \left( uv\right) ^{g-1}\right) ^{-1}=uv, \] this rewrites as $uv\in H$. However, $\left( uv\right) ^{-g}\left( uv\right) ^{g}u=u$, so that \[ u=\left( \underbrace{uv}_{\in H}\right) ^{-g}\underbrace{\left( uv\right) ^{g}u}_{\in H}\in H^{-g}H\subseteq H \] (since $H$ is a subgroup of $G$). Now, both $u$ and $uv$ belong to the subgroup $H$ of $G$. Thus, so does $u^{-1}\left( uv\right) $. In other words, $u^{-1}\left( uv\right) \in H$, so that $v=u^{-1}\left( uv\right) \in H$. This completes the proof of Lemma \ref{lem.GandH}. \end{proof}
\begin{proof} [Proof of Theorem \ref{thm.has}.]Conjugation by $q$ (that is, the map $W\rightarrow W,\ x\mapsto qxq^{-1}$) is a group endomorphism of $W$. Hence, for every $i\in\mathbb{N}$, we have \begin{equation} q\left( st\right) ^{i}sq^{-1}=\left( \underbrace{\left( qsq^{-1}\right) }_{=s^{\prime}}\left( \underbrace{qtq^{-1}}_{=t^{\prime}}\right) \right) ^{i}\underbrace{\left( qsq^{-1}\right) }_{=s^{\prime}}=\left( s^{\prime }t^{\prime}\right) ^{i}s^{\prime}. \label{pf.thm.has.qconj} \end{equation}
Let $m=m_{s,t}$. We have \[ \rho_{s,t}=\left( \left( st\right) ^{0}s,\left( st\right) ^{1} s,\ldots,\left( st\right) ^{m_{s,t}-1}s\right) =\left( \left( st\right) ^{0}s,\left( st\right) ^{1}s,\ldots,\left( st\right) ^{m-1}s\right) \] (since $m_{s,t}=m$) and thus \begin{align*} q\rho_{s,t}q^{-1} & =q\left( \left( st\right) ^{0}s,\left( st\right) ^{1}s,\ldots,\left( st\right) ^{m-1}s\right) q^{-1}\\ & =\left( q\left( st\right) ^{0}sq^{-1},q\left( st\right) ^{1} sq^{-1},\ldots,q\left( st\right) ^{m-1}sq^{-1}\right) \\ & =\left( \left( s^{\prime}t^{\prime}\right) ^{0}s^{\prime},\left( s^{\prime}t^{\prime}\right) ^{1}s^{\prime},\ldots,\left( s^{\prime} t^{\prime}\right) ^{m-1}s^{\prime}\right) \\ & \ \ \ \ \ \ \ \ \ \ \left( \begin{array} [c]{c} \text{since every }i\in\left\{ 0,1,\ldots,m-1\right\} \text{ satisfies}\\ q\left( st\right) ^{i}sq^{-1}=\left( s^{\prime}t^{\prime}\right) ^{i}s^{\prime}\text{ (by (\ref{pf.thm.has.qconj}))} \end{array} \right) \\ & =\left( \left( s^{\prime}t^{\prime}\right) ^{0}s^{\prime},\left( s^{\prime}t^{\prime}\right) ^{1}s^{\prime},\ldots,\left( s^{\prime} t^{\prime}\right) ^{m_{s^{\prime},t^{\prime}}-1}s^{\prime}\right) \ \ \ \ \ \ \ \ \ \ \left( \text{since }m=m_{s,t}=m_{s^{\prime},t^{\prime} }\right) \\ & =\rho_{s^{\prime},t^{\prime}}\ \ \ \ \ \ \ \ \ \ \left( \text{by the definition of }\rho_{s^{\prime},t^{\prime}}\right) . \end{align*}
The word $\overrightarrow{b}$ is obtained from $\overrightarrow{a}$ by an $\left( s,t\right) $-braid move. Hence, the word $\overrightarrow{a}$ can be obtained from $\overrightarrow{b}$ by a $\left( t,s\right) $-braid move.
From $\left( s^{\prime},t^{\prime}\right) \in\mathfrak{N}$, we obtain $s^{\prime}\neq t^{\prime}$. Hence, $\left( s^{\prime},t^{\prime}\right) \neq\left( t^{\prime},s^{\prime}\right) $.
From $s^{\prime}=qsq^{-1}$ and $t^{\prime}=qtq^{-1}$, we obtain $D_{s^{\prime },t^{\prime}}=qD_{s,t}q^{-1}$ (since conjugation by $q$ is a group endomorphism of $W$).
Proposition \ref{prop.rhost} \textbf{(c)} shows that the word $q\rho _{t,s}q^{-1}$ is the reversal of the word $q\rho_{s,t}q^{-1}$. Hence, the word $q\rho_{s,t}q^{-1}$ is the reversal of the word $q\rho_{t,s}q^{-1}$.
Recall that $\operatorname*{Invs}\overrightarrow{b}$ is obtained from $\operatorname*{Invs}\overrightarrow{a}$ by replacing a particular factor of the form $q\rho_{s,t}q^{-1}$ by its reversal. Since this latter reversal is $q\rho_{t,s}q^{-1}$ (as we have previously seen), this shows that $\operatorname*{Invs}\overrightarrow{b}$ has a factor of $q\rho_{t,s}q^{-1}$ in the place where the word $\operatorname*{Invs}\overrightarrow{a}$ had the factor $q\rho_{s,t}q^{-1}$. Hence, $\operatorname*{Invs}\overrightarrow{a}$ can, in turn, be obtained from $\operatorname*{Invs}\overrightarrow{b}$ by replacing a particular factor of the form $q\rho_{t,s}q^{-1}$ by its reversal (since the reversal of $q\rho_{t,s}q^{-1}$ is $q\rho_{s,t}q^{-1}$). Thus, our situation is symmetric with respect to $s$ and $t$; more precisely, we wind up in an analogous situation if we replace $s$, $t$, $\overrightarrow{a}$, $\overrightarrow{b}$, $s^{\prime}$ and $t^{\prime}$ by $t$, $s$, $\overrightarrow{b}$, $\overrightarrow{a}$, $t^{\prime}$ and $s^{\prime}$, respectively.
We shall prove the following claims:
\textit{Claim 1:} Let $\left( u,v\right) \in\mathfrak{N}$ be such that $\left( u,v\right) \neq\left( s^{\prime},t^{\prime}\right) $ and $\left( u,v\right) \neq\left( t^{\prime},s^{\prime}\right) $. Then, $\operatorname*{has}\nolimits_{u,v}\overrightarrow{b}=\operatorname*{has} \nolimits_{u,v}\overrightarrow{a}$.
\textit{Claim 2:} We have $\operatorname*{has}\nolimits_{s^{\prime},t^{\prime }}\overrightarrow{b}=\operatorname*{has}\nolimits_{s^{\prime},t^{\prime} }\overrightarrow{a}-1$.
\textit{Claim 3:} We have $\operatorname*{has}\nolimits_{t^{\prime},s^{\prime }}\overrightarrow{b}=\operatorname*{has}\nolimits_{t^{\prime},s^{\prime} }\overrightarrow{a}+1$.
\textit{Proof of Claim 1:} Assume the contrary. Thus, $\operatorname*{has} \nolimits_{u,v}\overrightarrow{b}\neq\operatorname*{has}\nolimits_{u,v} \overrightarrow{a}$. Hence, one of the numbers $\operatorname*{has} \nolimits_{u,v}\overrightarrow{b}$ and $\operatorname*{has}\nolimits_{u,v} \overrightarrow{a}$ equals $1$ and the other equals $0$ (since both $\operatorname*{has}\nolimits_{u,v}\overrightarrow{b}$ and $\operatorname*{has}\nolimits_{u,v}\overrightarrow{a}$ belong to $\left\{ 0,1\right\} $). Without loss of generality, we assume that $\operatorname*{has}\nolimits_{u,v}\overrightarrow{a}=1$ and $\operatorname*{has}\nolimits_{u,v}\overrightarrow{b}=0$ (because in the other case, we can replace $s$, $t$, $\overrightarrow{a}$, $\overrightarrow{b}$, $s^{\prime}$ and $t^{\prime}$ by $t$, $s$, $\overrightarrow{b}$, $\overrightarrow{a}$, $t^{\prime}$ and $s^{\prime}$, respectively).
The elements $u$ and $v$ are two distinct reflections (since $\left( u,v\right) \in\mathfrak{N}$).
Write the tuple $\operatorname*{Invs}\overrightarrow{a}$ as $\left( \alpha_{1},\alpha_{2},\ldots,\alpha_{k}\right) $. The tuple $\operatorname*{Invs}\overrightarrow{b}$ has the same length as $\operatorname*{Invs}\overrightarrow{a}$, since $\operatorname*{Invs} \overrightarrow{b}$ is obtained from $\operatorname*{Invs}\overrightarrow{a}$ by replacing a particular factor of the form $q\rho_{s,t}q^{-1}$ by its reversal. Hence, write the tuple $\operatorname*{Invs}\overrightarrow{b}$ as $\left( \beta_{1},\beta_{2},\ldots,\beta_{k}\right) $.
From $\operatorname*{has}\nolimits_{u,v}\overrightarrow{a}=1$, we obtain that $\rho_{u,v}$ appears as a subword of $\operatorname*{Invs}\overrightarrow{a}$. In other words, $\rho_{u,v}=\left( \alpha_{i_{1}},\alpha_{i_{2}} ,\ldots,\alpha_{i_{f}}\right) $ for some integers $i_{1},i_{2},\ldots,i_{f}$ satisfying $1\leq i_{1}<i_{2}<\cdots<i_{f}\leq k$. Consider these $i_{1} ,i_{2},\ldots,i_{f}$. From $\operatorname*{has}\nolimits_{u,v} \overrightarrow{b}=0$, we conclude that $\rho_{u,v}$ does not appear as a subword of $\operatorname*{Invs}\overrightarrow{b}$.
On the other hand, $\operatorname*{Invs}\overrightarrow{b}$ is obtained from $\operatorname*{Invs}\overrightarrow{a}$ by replacing a particular factor of the form $q\rho_{s,t}q^{-1}$ by its reversal. This factor has $m_{s,t}=m$ letters; thus, it has the form $\left( \alpha_{p+1},\alpha_{p+2} ,\ldots,\alpha_{p+m}\right) $ for some $p\in\left\{ 0,1,\ldots,k-m\right\} $. Consider this $p$. Thus, \[ \left( \alpha_{p+1},\alpha_{p+2},\ldots,\alpha_{p+m}\right) =q\rho _{s,t}q^{-1}=\left( \left( s^{\prime}t^{\prime}\right) ^{0}s^{\prime },\left( s^{\prime}t^{\prime}\right) ^{1}s^{\prime},\ldots,\left( s^{\prime}t^{\prime}\right) ^{m-1}s^{\prime}\right) . \] In other words, \begin{equation} \alpha_{p+i}=\left( s^{\prime}t^{\prime}\right) ^{i-1}s^{\prime }\ \ \ \ \ \ \ \ \ \ \text{for every }i\in\left\{ 1,2,\ldots,m\right\} . \label{pf.thm.has.c1.2} \end{equation}
We now summarize:
\begin{itemize} \item The word $\rho_{u,v}$ appears as the subword $\left( \alpha_{i_{1} },\alpha_{i_{2}},\ldots,\alpha_{i_{f}}\right) $ of $\operatorname*{Invs} \overrightarrow{a}$, but does not appear as a subword of $\operatorname*{Invs} \overrightarrow{b}$.
\item The word $\operatorname*{Invs}\overrightarrow{b}$ is obtained from $\operatorname*{Invs}\overrightarrow{a}$ by replacing the factor \newline$\left( \alpha_{p+1},\alpha_{p+2},\ldots,\alpha_{p+m}\right) $ by its reversal. \end{itemize}
Thus, replacing the factor $\left( \alpha_{p+1},\alpha_{p+2},\ldots ,\alpha_{p+m}\right) $ in $\operatorname*{Invs}\overrightarrow{a}$ by its reversal must disrupt the subword $\left( \alpha_{i_{1}},\alpha_{i_{2} },\ldots,\alpha_{i_{f}}\right) $ of $\operatorname*{Invs}\overrightarrow{a}$ badly enough that it no longer appears as a subword (not even in different positions). This can only happen if at least two of the integers $i_{1} ,i_{2},\ldots,i_{f}$ lie in the interval $\left\{ p+1,p+2,\ldots,p+m\right\} $. (Indeed, if at most one of them did, then the reversal would merely move a single letter of our subword to a different position within the interval $\left\{ p+1,p+2,\ldots,p+m\right\} $, while all its other letters would stay in place outside this interval; hence, the word $\rho_{u,v}$ would still appear as a subword of $\operatorname*{Invs}\overrightarrow{b}$, contradicting $\operatorname*{has}\nolimits_{u,v}\overrightarrow{b}=0$.)
Hence, at least two of the integers $i_{1},i_{2},\ldots,i_{f}$ lie in the interval $\left\{ p+1,p+2,\ldots,p+m\right\} $. In particular, there must be a $g\in\left\{ 1,2,\ldots,f-1\right\} $ such that the integers $i_{g}$ and $i_{g+1}$ lie in the interval $\left\{ p+1,p+2,\ldots,p+m\right\} $ (since $i_{1}<i_{2}<\cdots<i_{f}$). Consider this $g$.
We have $i_{g}\in\left\{ p+1,p+2,\ldots,p+m\right\} $. In other words, $i_{g}=p+r_{g}$ for some $r_{g}\in\left\{ 1,2,\ldots,m\right\} $. Consider this $r_{g}$.
We have $i_{g+1}\in\left\{ p+1,p+2,\ldots,p+m\right\} $. In other words, $i_{g+1}=p+r_{g+1}$ for some $r_{g+1}\in\left\{ 1,2,\ldots,m\right\} $. Consider this $r_{g+1}$.
We have $\left( \alpha_{i_{1}},\alpha_{i_{2}},\ldots,\alpha_{i_{f}}\right) =\rho_{u,v}=\left( \left( uv\right) ^{0}u,\left( uv\right) ^{1} u,\ldots,\left( uv\right) ^{m_{u,v}-1}u\right) $ (by the definition of $\rho_{u,v}$). Hence, $\alpha_{i_{g}}=\left( uv\right) ^{g-1}u$ and $\alpha_{i_{g+1}}=\left( uv\right) ^{g}u$. Now, \begin{align*} \left( uv\right) ^{g-1}u & =\alpha_{i_{g}}=\alpha_{p+r_{g}} \ \ \ \ \ \ \ \ \ \ \left( \text{since }i_{g}=p+r_{g}\right) \\ & =\left( s^{\prime}t^{\prime}\right) ^{r_{g}-1}s^{\prime} \ \ \ \ \ \ \ \ \ \ \left( \text{by (\ref{pf.thm.has.c1.2}), applied to }i=r_{g}\right) \\ & \in D_{s^{\prime},t^{\prime}} \end{align*} and \begin{align*} \left( uv\right) ^{g}u & =\alpha_{i_{g+1}}=\alpha_{p+r_{g+1} }\ \ \ \ \ \ \ \ \ \ \left( \text{since }i_{g+1}=p+r_{g+1}\right) \\ & =\left( s^{\prime}t^{\prime}\right) ^{r_{g+1}-1}s^{\prime} \ \ \ \ \ \ \ \ \ \ \left( \text{by (\ref{pf.thm.has.c1.2}), applied to }i=r_{g+1}\right) \\ & \in D_{s^{\prime},t^{\prime}}. \end{align*} Hence, Lemma \ref{lem.GandH} (applied to $G=W$ and $H=D_{s^{\prime},t^{\prime }}$) yields $u\in D_{s^{\prime},t^{\prime}}$ and $v\in D_{s^{\prime} ,t^{\prime}}$.
Furthermore, we have \[ \alpha_{i_{1}}=u\ \ \ \ \ \ \ \ \ \ \text{and}\ \ \ \ \ \ \ \ \ \ \alpha _{i_{f}}=v \] \footnote{\textit{Proof.} From $\left( \alpha_{i_{1}},\alpha_{i_{2}} ,\ldots,\alpha_{i_{f}}\right) =\left( \left( uv\right) ^{0}u,\left( uv\right) ^{1}u,\ldots,\left( uv\right) ^{m_{u,v}-1}u\right) $, we obtain $\alpha_{i_{1}}=\underbrace{\left( uv\right) ^{0}}_{=1}u=u$. \par We have $\left( uv\right) ^{m_{u,v}}=1$, and thus $\left( uv\right) ^{m_{u,v}-1}=\left( uv\right) ^{-1}=v^{-1}u^{-1}$. \par From $\left( \alpha_{i_{1}},\alpha_{i_{2}},\ldots,\alpha_{i_{f}}\right) =\left( \left( uv\right) ^{0}u,\left( uv\right) ^{1}u,\ldots,\left( uv\right) ^{m_{u,v}-1}u\right) $, we obtain $\alpha_{i_{f}} =\underbrace{\left( uv\right) ^{m_{u,v}-1}}_{=v^{-1}u^{-1}}u=v^{-1} u^{-1}u=v^{-1}=v$ (since $v$ is a reflection), qed.}.
Now, we have $i_{1}\in\left\{ p+1,p+2,\ldots,p+m\right\} $ (by a simple argument\footnote{\textit{Proof.} The element $u$ is a reflection and lies in $D_{s^{\prime},t^{\prime}}$. Hence, Proposition \ref{prop.rhost} \textbf{(a)} (applied to $s^{\prime}$ and $t^{\prime}$ instead of $s$ and $t$) shows that the word $\rho_{s^{\prime},t^{\prime}}$ contains $u$. Since $\rho_{s^{\prime },t^{\prime}}=q\rho_{s,t}q^{-1}=\left( \alpha_{p+1},\alpha_{p+2} ,\ldots,\alpha_{p+m}\right) $, this shows that the word $\left( \alpha _{p+1},\alpha_{p+2},\ldots,\alpha_{p+m}\right) $ contains $u$. In other words, $u=\alpha_{M}$ for some $M\in\left\{ p+1,p+2,\ldots,p+m\right\} $. Consider this $M$. \par But Proposition \ref{prop.Invsles} \textbf{(a)} shows that all entries of the tuple $\operatorname*{Invs}\overrightarrow{a}$ are distinct. In other words, the elements $\alpha_{1},\alpha_{2},\ldots,\alpha_{k}$ are pairwise distinct (since those are the entries of $\operatorname*{Invs}\overrightarrow{a}$). Hence, from $\alpha_{i_{1}}=u=\alpha_{M}$, we obtain $i_{1}=M\in\left\{ p+1,p+2,\ldots,p+m\right\} $. Qed.}) and $i_{f}\in\left\{ p+1,p+2,\ldots ,p+m\right\} $ (by a similar argument, with $v$ occasionally replacing $u$). Thus, all of the integers $i_{1},i_{2},\ldots,i_{f}$ belong to $\left\{ p+1,p+2,\ldots,p+m\right\} $ (since $i_{1}<i_{2}<\cdots<i_{f}$).
Now, recall that $f$ is the length of the word $\rho_{u,v}$ (since $\rho _{u,v}=\left( \alpha_{i_{1}},\alpha_{i_{2}},\ldots,\alpha_{i_{f}}\right) $), and thus equals $m_{u,v}$. Thus, $f=m_{u,v}$.
But $u\in D_{s^{\prime},t^{\prime}}=qD_{s,t}q^{-1}$ and $v\in D_{s^{\prime },t^{\prime}}=qD_{s,t}q^{-1}$. Hence, Lemma \ref{lem.dihindih} yields $m_{s,t}=m_{u,v}$. Since $m=m_{s,t}$ and $f=m_{u,v}$, this rewrites as $m=f$.
Recall that all of the integers $i_{1},i_{2},\ldots,i_{f}$ belong to $\left\{ p+1,p+2,\ldots,p+m\right\} $. Since $i_{1}<i_{2}<\cdots<i_{f}$ and $f=m$, these integers $i_{1},i_{2},\ldots,i_{f}$ form a strictly increasing sequence of length $m$. Thus, $\left( i_{1},i_{2},\ldots,i_{f}\right) $ is a strictly increasing sequence of length $m$ whose entries belong to $\left\{ p+1,p+2,\ldots,p+m\right\} $. But the only such sequence is $\left( p+1,p+2,\ldots,p+m\right) $ (because the set $\left\{ p+1,p+2,\ldots ,p+m\right\} $ has only $m$ elements). Thus, $\left( i_{1},i_{2} ,\ldots,i_{f}\right) =\left( p+1,p+2,\ldots,p+m\right) $. In particular, $i_{1}=p+1$ and $i_{f}=p+m$.
Now, $\alpha_{i_{1}}=u$, so that \begin{align*} u & =\alpha_{i_{1}}=\alpha_{p+1}\ \ \ \ \ \ \ \ \ \ \left( \text{since }i_{1}=p+1\right) \\ & =\underbrace{\left( s^{\prime}t^{\prime}\right) ^{1-1}}_{=1}s^{\prime }\ \ \ \ \ \ \ \ \ \ \left( \text{by (\ref{pf.thm.has.c1.2}), applied to }i=1\right) \\ & =s^{\prime}. \end{align*} Also, $\alpha_{i_{f}}=v$, so that \begin{align*} v & =\alpha_{i_{f}}=\alpha_{p+m}\ \ \ \ \ \ \ \ \ \ \left( \text{since }i_{f}=p+m\right) \\ & =\underbrace{\left( s^{\prime}t^{\prime}\right) ^{m-1}} _{\substack{=\left( s^{\prime}t^{\prime}\right) ^{-1}\\\text{(since }\left( s^{\prime}t^{\prime}\right) ^{m}=1\\\text{(since }m=m_{s,t}=m_{s^{\prime },t^{\prime}}\text{))}}}s^{\prime}\ \ \ \ \ \ \ \ \ \ \left( \text{by (\ref{pf.thm.has.c1.2}), applied to }i=m\right) \\ & =\left( s^{\prime}t^{\prime}\right) ^{-1}s^{\prime}=t^{\prime}. \end{align*} Combined with $u=s^{\prime}$, this yields $\left( u,v\right) =\left( s^{\prime},t^{\prime}\right) $, which contradicts $\left( u,v\right) \neq\left( s^{\prime},t^{\prime}\right) $. This contradiction proves that our assumption was wrong. Claim 1 is proven.
\textit{Proof of Claim 2:} The word $\operatorname*{Invs}\overrightarrow{b}$ is obtained from $\operatorname*{Invs}\overrightarrow{a}$ by replacing a particular factor of the form $q\rho_{s,t}q^{-1}$ by its reversal. Thus, the word $\operatorname*{Invs}\overrightarrow{a}$ has a factor of the form $q\rho_{s,t}q^{-1}$. Since $q\rho_{s,t}q^{-1}=\rho_{s^{\prime},t^{\prime}}$, this means that the word $\operatorname*{Invs}\overrightarrow{a}$ has a factor of the form $\rho_{s^{\prime},t^{\prime}}$. Consequently, the word $\operatorname*{Invs}\overrightarrow{a}$ has a subword of the form $\rho_{s^{\prime},t^{\prime}}$. In other words, $\operatorname*{has} \nolimits_{s^{\prime},t^{\prime}}\overrightarrow{a}=1$.
The same argument (applied to $t$, $s$, $\overrightarrow{b}$, $\overrightarrow{a}$, $t^{\prime}$ and $s^{\prime}$ instead of $s$, $t$, $\overrightarrow{a}$, $\overrightarrow{b}$, $s^{\prime}$ and $t^{\prime}$) shows that $\operatorname*{has}\nolimits_{t^{\prime},s^{\prime}} \overrightarrow{b}=1$. In other words, the word $\operatorname*{Invs} \overrightarrow{b}$ has a subword of the form $\rho_{t^{\prime},s^{\prime}}$. Hence, the word $\operatorname*{Invs}\overrightarrow{b}$ has no subword of the form $\rho_{s^{\prime},t^{\prime}}$ (because Proposition \ref{prop.has} \textbf{(b)} (applied to $\overrightarrow{b}$, $s^{\prime}$ and $t^{\prime}$ instead of $\overrightarrow{a}$, $s$ and $t$) shows that the words $\rho_{s^{\prime},t^{\prime}}$ and $\rho_{t^{\prime},s^{\prime}}$ cannot both appear as subwords of $\operatorname*{Invs}\overrightarrow{b}$). In other words, $\operatorname*{has}\nolimits_{s^{\prime},t^{\prime}}\overrightarrow{b} =0$.
Combining this with $\operatorname*{has}\nolimits_{s^{\prime},t^{\prime} }\overrightarrow{a}=1$, we immediately obtain $\operatorname*{has} \nolimits_{s^{\prime},t^{\prime}}\overrightarrow{b}=\operatorname*{has} \nolimits_{s^{\prime},t^{\prime}}\overrightarrow{a}-1$. Thus, Claim 2 is proven.
\textit{Proof of Claim 3:} Applying Claim 2 to $t$, $s$, $\overrightarrow{b}$, $\overrightarrow{a}$, $t^{\prime}$ and $s^{\prime}$ instead of $s$, $t$, $\overrightarrow{a}$, $\overrightarrow{b}$, $s^{\prime}$ and $t^{\prime}$, we obtain $\operatorname*{has}\nolimits_{t^{\prime},s^{\prime}}\overrightarrow{a} =\operatorname*{has}\nolimits_{t^{\prime},s^{\prime}}\overrightarrow{b}-1$. In other words, $\operatorname*{has}\nolimits_{t^{\prime},s^{\prime} }\overrightarrow{b}=\operatorname*{has}\nolimits_{t^{\prime},s^{\prime} }\overrightarrow{a}+1$. This proves Claim 3.
Now, our goal is to prove that $\operatorname*{Has}\overrightarrow{b} =\operatorname*{Has}\overrightarrow{a}-\left( s^{\prime},t^{\prime}\right) +\left( t^{\prime},s^{\prime}\right) $. But the definition of $\operatorname*{Has}\overrightarrow{b}$ yields \begin{align*} & \operatorname*{Has}\overrightarrow{b}\\ & =\sum\limits_{\left( u,v\right) \in\mathfrak{N}}\operatorname*{has} \nolimits_{u,v}\overrightarrow{b}\cdot\left( u,v\right) \\ & =\sum\limits_{\substack{\left( u,v\right) \in\mathfrak{N};\\\left( u,v\right) \neq\left( s^{\prime},t^{\prime}\right) ;\\\left( u,v\right) \neq\left( t^{\prime},s^{\prime}\right) }}\underbrace{\operatorname*{has}\nolimits_{u,v} \overrightarrow{b}}_{\substack{=\operatorname*{has}\nolimits_{u,v} \overrightarrow{a}\\\text{(by Claim 1)}}}\cdot\left( u,v\right) +\underbrace{\operatorname*{has}\nolimits_{s^{\prime},t^{\prime} }\overrightarrow{b}}_{\substack{=\operatorname*{has}\nolimits_{s^{\prime },t^{\prime}}\overrightarrow{a}-1\\\text{(by Claim 2)}}}\cdot\left( s^{\prime},t^{\prime}\right) +\underbrace{\operatorname*{has} \nolimits_{t^{\prime},s^{\prime}}\overrightarrow{b}} _{\substack{=\operatorname*{has}\nolimits_{t^{\prime},s^{\prime} }\overrightarrow{a}+1\\\text{(by Claim 3)}}}\cdot\left( t^{\prime},s^{\prime }\right) \\ & \ \ \ \ \ \ \ \ \ \ \left( \text{since }\left( s^{\prime},t^{\prime }\right) \neq\left( t^{\prime},s^{\prime}\right) \right) \\ & =\sum\limits_{\substack{\left( u,v\right) \in\mathfrak{N};\\\left( u,v\right) \neq\left( s^{\prime},t^{\prime}\right) ;\\\left( u,v\right) \neq\left( t^{\prime},s^{\prime}\right) }}\operatorname*{has}\nolimits_{u,v} \overrightarrow{a}\cdot\left( u,v\right) +\left( \operatorname*{has} \nolimits_{s^{\prime},t^{\prime}}\overrightarrow{a}-1\right) \cdot\left( s^{\prime},t^{\prime}\right) +\left( \operatorname*{has}\nolimits_{t^{\prime },s^{\prime}}\overrightarrow{a}+1\right) \cdot\left( t^{\prime},s^{\prime }\right) \\ & =\sum\limits_{\substack{\left( u,v\right) \in\mathfrak{N};\\\left( u,v\right) \neq\left( s^{\prime},t^{\prime}\right) ;\\\left( u,v\right) \neq\left( t^{\prime},s^{\prime}\right) }}\operatorname*{has}\nolimits_{u,v} \overrightarrow{a}\cdot\left( u,v\right) +\operatorname*{has} \nolimits_{s^{\prime},t^{\prime}}\overrightarrow{a}\cdot\left( s^{\prime },t^{\prime}\right) -\left( s^{\prime},t^{\prime}\right) +\operatorname*{has}\nolimits_{t^{\prime},s^{\prime}}\overrightarrow{a} \cdot\left( t^{\prime},s^{\prime}\right) +\left( t^{\prime},s^{\prime }\right) \\ & =\underbrace{\sum\limits_{\substack{\left( u,v\right) \in\mathfrak{N};\\\left( u,v\right) \neq\left( s^{\prime},t^{\prime}\right) ;\\\left( u,v\right) \neq\left( t^{\prime},s^{\prime}\right) }}\operatorname*{has}\nolimits_{u,v} \overrightarrow{a}\cdot\left( u,v\right) +\operatorname*{has} \nolimits_{s^{\prime},t^{\prime}}\overrightarrow{a}\cdot\left( s^{\prime },t^{\prime}\right) +\operatorname*{has}\nolimits_{t^{\prime},s^{\prime} }\overrightarrow{a}\cdot\left( t^{\prime},s^{\prime}\right) } _{\substack{=\sum\limits_{\left( u,v\right) \in\mathfrak{N}}\operatorname*{has} \nolimits_{u,v}\overrightarrow{a}\cdot\left( u,v\right) \\\text{(since }\left( s^{\prime},t^{\prime}\right) \neq\left( t^{\prime},s^{\prime }\right) \text{)}}}-\left( s^{\prime},t^{\prime}\right) +\left( t^{\prime },s^{\prime}\right) \\ & =\underbrace{\sum\limits_{\left( u,v\right) \in\mathfrak{N}}\operatorname*{has} \nolimits_{u,v}\overrightarrow{a}\cdot\left( u,v\right) } _{=\operatorname*{Has}\overrightarrow{a}}-\left( s^{\prime},t^{\prime }\right) +\left( t^{\prime},s^{\prime}\right) =\operatorname*{Has} \overrightarrow{a}-\left( s^{\prime},t^{\prime}\right) +\left( t^{\prime },s^{\prime}\right) . \end{align*} This proves Theorem \ref{thm.has}. \end{proof}
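Before moving on, here is a concrete sanity check of Theorem \ref{thm.has} (an illustration, not part of the proof; the function names and the realization of the inversion word are our own). In the symmetric group $S_{3}$, realized as permutations of $\left\{ 0,1,2\right\} $, take $s=\left( 0\;1\right) $ and $t=\left( 1\;2\right) $ (so $m_{s,t}=3$), and let $\overrightarrow{a}=\left( s,t,s\right) $ and $\overrightarrow{b}=\left( t,s,t\right) $ be the two reduced expressions for the longest element. Then $\overrightarrow{b}$ is obtained from $\overrightarrow{a}$ by an $\left( s,t\right) $-braid move with $q=1$ (so $s^{\prime}=s$ and $t^{\prime}=t$), and the theorem predicts $\operatorname*{Has}\overrightarrow{b}=\operatorname*{Has}\overrightarrow{a}-\left( s,t\right) +\left( t,s\right) $:

```python
def compose(p, q):
    """Product of permutations given as tuples: (p * q)(i) = p(q(i))."""
    return tuple(p[j] for j in q)

def inverse(p):
    """Inverse permutation."""
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def invs(word):
    """Inversion word (t_1, ..., t_k) of a reduced expression
    (a_1, ..., a_k), where t_i = (a_1 ... a_{i-1}) a_i (a_1 ... a_{i-1})^{-1}."""
    result, prefix = [], tuple(range(len(word[0])))
    for a in word:
        result.append(compose(compose(prefix, a), inverse(prefix)))
        prefix = compose(prefix, a)
    return tuple(result)

def rho(s, t, m):
    """The word rho_{s,t} = ((st)^0 s, (st)^1 s, ..., (st)^{m-1} s)."""
    st, entry, result = compose(s, t), s, []
    for _ in range(m):
        result.append(entry)
        entry = compose(st, entry)
    return tuple(result)

def has(s, t, m, word):
    """has_{s,t} of a reduced expression: 1 if rho_{s,t} appears as a
    subword of its inversion word, and 0 otherwise."""
    it = iter(invs(word))
    return int(all(letter in it for letter in rho(s, t, m)))

s, t, m = (1, 0, 2), (0, 2, 1), 3
a, b = (s, t, s), (t, s, t)

# Coordinatewise change of Has from a to b on the pairs (s, t) and (t, s):
delta = {(s, t): has(s, t, m, b) - has(s, t, m, a),
         (t, s): has(t, s, m, b) - has(t, s, m, a)}
```

Here $\operatorname*{Invs}\overrightarrow{a}$ comes out as $\left( s,sts,t\right) =\rho_{s,t}$ and $\operatorname*{Invs}\overrightarrow{b}$ as $\left( t,tst,s\right) =\rho_{t,s}$, so the computed \texttt{delta} is $-1$ on $\left( s,t\right) $ and $+1$ on $\left( t,s\right) $, in accordance with (\ref{eq.thm.has.a}).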
\section{The proof of Theorem \ref{thm.BCL}}
We are now ready to establish Theorem \ref{thm.BCL}:
\begin{proof} [Proof of Theorem \ref{thm.BCL}.]We shall use the \textit{Iverson bracket notation}: i.e., if $\mathcal{A}$ is any logical statement, then we shall write $\left[ \mathcal{A}\right] $ for the integer $ \begin{cases} 1, & \text{if }\mathcal{A}\text{ is true};\\ 0, & \text{if }\mathcal{A}\text{ is false} \end{cases} $.
For every $z\in\mathbb{Z}\left[ \mathfrak{N}\right] $ and $n\in\mathfrak{N} $, we let $\operatorname*{coord}\nolimits_{n}z\in\mathbb{Z}$ be the $n$-coordinate of $z$ (with respect to the basis $\mathfrak{N}$ of $\mathbb{Z}\left[ \mathfrak{N}\right] $).
For every $z\in\mathbb{Z}\left[ \mathfrak{N}\right] $ and $N\subseteq \mathfrak{N}$, we set $\operatorname*{coord}\nolimits_{N}z=\sum\nolimits_{n\in N}\operatorname*{coord}\nolimits_{n}z$.
We have $c=\left[ \left( s,t\right) \right] $, thus $c_{\mathfrak{N} }=\left[ \left[ \left( s,t\right) \right] \right] $ and $c^{\operatorname{op}}=\left[ \left( t,s\right) \right] $. From the latter equality, we obtain $\left( c^{\operatorname*{op}}\right) _{\mathfrak{N} }=\left[ \left[ \left( t,s\right) \right] \right] $.
Let $\overrightarrow{c_{1}},\overrightarrow{c_{2}},\ldots ,\overrightarrow{c_{k}},\overrightarrow{c_{k+1}}$ be the vertices on the cycle $C$ (listed in the order they are encountered when we traverse the cycle, starting at some arbitrarily chosen vertex on the cycle and going until we return to the starting point). Thus:
\begin{itemize} \item We have $\overrightarrow{c_{k+1}}=\overrightarrow{c_{1}}$.
\item There is an arc from $\overrightarrow{c_{i}}$ to $\overrightarrow{c_{i+1}}$ for every $i\in\left\{ 1,2,\ldots,k\right\} $. \end{itemize}
Fix $i\in\left\{ 1,2,\ldots,k\right\} $. Then, there is an arc from $\overrightarrow{c_{i}}$ to $\overrightarrow{c_{i+1}}$. In other words, there exists some $\left( s_{i},t_{i}\right) \in\mathfrak{M}$ such that $\overrightarrow{c_{i+1}}$ is obtained from $\overrightarrow{c_{i}}$ by an $\left( s_{i},t_{i}\right) $-braid move. Consider this $\left( s_{i} ,t_{i}\right) $. Thus, \begin{equation} \text{the color of the arc from }\overrightarrow{c_{i}}\text{ to }\overrightarrow{c_{i+1}}\text{ is }\left[ \left( s_{i},t_{i}\right) \right] . \label{pf.thm.BCL.a.color} \end{equation} Proposition \ref{prop.Invsles} \textbf{(b)} (applied to $\overrightarrow{c_{i} }$, $\overrightarrow{c_{i+1}}$, $s_{i}$ and $t_{i}$ instead of $\overrightarrow{a}$, $\overrightarrow{b}$, $s$ and $t$) shows that there exists a $q\in W$ such that $\operatorname*{Invs}\overrightarrow{c_{i+1}}$ is obtained from $\operatorname*{Invs}\overrightarrow{c_{i}}$ by replacing a particular factor of the form $q\rho_{s_{i},t_{i}}q^{-1}$ by its reversal. Let us denote this $q$ by $q_{i}$. Set $s_{i}^{\prime}=q_{i}s_{i}q_{i}^{-1}$ and $t_{i}^{\prime}=q_{i}t_{i}q_{i}^{-1}$. Thus, $s_{i}^{\prime}\neq t_{i} ^{\prime}$ (since $s_{i}\neq t_{i}$) and $m_{s_{i}^{\prime},t_{i}^{\prime} }=m_{s_{i},t_{i}}<\infty$ (since $\left( s_{i},t_{i}\right) \in\mathfrak{M} $). Also, the definitions of $s_{i}^{\prime}$ and $t_{i}^{\prime}$ yield $\left( s_{i}^{\prime},t_{i}^{\prime}\right) =\left( q_{i}s_{i}q_{i} ^{-1},q_{i}t_{i}q_{i}^{-1}\right) =q_{i}\underbrace{\left( s_{i} ,t_{i}\right) }_{\in\mathfrak{M}}q_{i}^{-1}\in q_{i}\mathfrak{M}q_{i} ^{-1}\subseteq\mathfrak{N}$. From $s_{i}^{\prime}=q_{i}s_{i}q_{i}^{-1}$ and $t_{i}^{\prime}=q_{i}t_{i}q_{i}^{-1}$, we obtain $\left( s_{i}^{\prime} ,t_{i}^{\prime}\right) \approx\left( s_{i},t_{i}\right) $.
We shall now show that \begin{equation} \operatorname*{coord}\nolimits_{c_{\mathfrak{N}}}\left( \operatorname*{Has} \overrightarrow{c_{i+1}}-\operatorname*{Has}\overrightarrow{c_{i}}\right) =\left[ \left[ \left( s_{i},t_{i}\right) \right] =c^{\operatorname*{op} }\right] -\left[ \left[ \left( s_{i},t_{i}\right) \right] =c\right] . \label{pf.thm.BCL.a.hasdiff1} \end{equation}
\textit{Proof of (\ref{pf.thm.BCL.a.hasdiff1}):} We have the following chain of logical equivalences: \begin{align*} & \ \left( \left( t_{i}^{\prime},s_{i}^{\prime}\right) \in \underbrace{c_{\mathfrak{N}}}_{=\left[ \left[ \left( s,t\right) \right] \right] }\right) \\ & \Longleftrightarrow\ \left( \left( t_{i}^{\prime},s_{i}^{\prime}\right) \in\left[ \left[ \left( s,t\right) \right] \right] \right) \ \Longleftrightarrow\ \left( \left( t_{i}^{\prime},s_{i}^{\prime}\right) \approx\left( s,t\right) \right) \ \Longleftrightarrow\ \left( \left( s_{i}^{\prime},t_{i}^{\prime}\right) \approx\left( t,s\right) \right) \\ & \Longleftrightarrow\ \left( \left( s_{i},t_{i}\right) \approx\left( t,s\right) \right) \ \ \ \ \ \ \ \ \ \ \left( \text{since }\left( s_{i}^{\prime},t_{i}^{\prime}\right) \approx\left( s_{i},t_{i}\right) \right) \\ & \Longleftrightarrow\ \left( \left( s_{i},t_{i}\right) \sim\left( t,s\right) \right) \ \ \ \ \ \ \ \ \ \ \left( \text{since the restriction of the relation }\approx\text{ to }\mathfrak{M}\text{ is }\sim\right) \\ & \Longleftrightarrow\ \left( \left( s_{i},t_{i}\right) \in \underbrace{\left[ \left( t,s\right) \right] }_{=c^{\operatorname*{op}} }\right) \ \Longleftrightarrow\ \left( \left( s_{i},t_{i}\right) \in c^{\operatorname*{op}}\right) \ \Longleftrightarrow\ \left( \left[ \left( s_{i},t_{i}\right) \right] =c^{\operatorname*{op}}\right) . \end{align*} Hence, \begin{equation} \left[ \left( t_{i}^{\prime},s_{i}^{\prime}\right) \in c_{\mathfrak{N} }\right] =\left[ \left[ \left( s_{i},t_{i}\right) \right] =c^{\operatorname*{op}}\right] . \label{pf.thm.BCL.a.hasdiff1.eq1} \end{equation}
Also, we have the following chain of logical equivalences: \begin{align*} & \ \left( \left( s_{i}^{\prime},t_{i}^{\prime}\right) \in \underbrace{c_{\mathfrak{N}}}_{=\left[ \left[ \left( s,t\right) \right] \right] }\right) \\ & \Longleftrightarrow\ \left( \left( s_{i}^{\prime},t_{i}^{\prime}\right) \in\left[ \left[ \left( s,t\right) \right] \right] \right) \ \Longleftrightarrow\ \left( \left( s_{i}^{\prime},t_{i}^{\prime}\right) \approx\left( s,t\right) \right) \\ & \Longleftrightarrow\ \left( \left( s_{i},t_{i}\right) \approx\left( s,t\right) \right) \ \ \ \ \ \ \ \ \ \ \left( \text{since }\left( s_{i}^{\prime},t_{i}^{\prime}\right) \approx\left( s_{i},t_{i}\right) \right) \\ & \Longleftrightarrow\ \left( \left( s_{i},t_{i}\right) \sim\left( s,t\right) \right) \ \ \ \ \ \ \ \ \ \ \left( \text{since the restriction of the relation }\approx\text{ to }\mathfrak{M}\text{ is }\sim\right) \\ & \Longleftrightarrow\ \left( \left( s_{i},t_{i}\right) \in \underbrace{\left[ \left( s,t\right) \right] }_{=c}\right) \ \Longleftrightarrow\ \left( \left( s_{i},t_{i}\right) \in c\right) \ \Longleftrightarrow\ \left( \left[ \left( s_{i},t_{i}\right) \right] =c\right) . \end{align*} Hence, \begin{equation} \left[ \left( s_{i}^{\prime},t_{i}^{\prime}\right) \in c_{\mathfrak{N} }\right] =\left[ \left[ \left( s_{i},t_{i}\right) \right] =c\right] . \label{pf.thm.BCL.a.hasdiff1.eq2} \end{equation}
Applying (\ref{eq.thm.has.a}) to $\overrightarrow{c_{i}}$, $\overrightarrow{c_{i+1}}$, $s_{i}$, $t_{i}$, $q_{i}$, $s_{i}^{\prime}$ and $t_{i}^{\prime}$ instead of $\overrightarrow{a}$, $\overrightarrow{b}$, $s$, $t$, $q$, $s^{\prime}$ and $t^{\prime}$, we obtain $\operatorname*{Has} \overrightarrow{c_{i+1}}=\operatorname*{Has}\overrightarrow{c_{i}}-\left( s_{i}^{\prime},t_{i}^{\prime}\right) +\left( t_{i}^{\prime},s_{i}^{\prime }\right) $. In other words, $\operatorname*{Has}\overrightarrow{c_{i+1} }-\operatorname*{Has}\overrightarrow{c_{i}}=\left( t_{i}^{\prime} ,s_{i}^{\prime}\right) -\left( s_{i}^{\prime},t_{i}^{\prime}\right) $. Thus, \begin{align*} & \operatorname*{coord}\nolimits_{c_{\mathfrak{N}}}\left( \operatorname*{Has}\overrightarrow{c_{i+1}}-\operatorname*{Has} \overrightarrow{c_{i}}\right) \\ & =\operatorname*{coord}\nolimits_{c_{\mathfrak{N}}}\left( \left( t_{i}^{\prime},s_{i}^{\prime}\right) -\left( s_{i}^{\prime},t_{i}^{\prime }\right) \right) =\underbrace{\operatorname*{coord} \nolimits_{c_{\mathfrak{N}}}\left( t_{i}^{\prime},s_{i}^{\prime}\right) }_{\substack{=\left[ \left( t_{i}^{\prime},s_{i}^{\prime}\right) \in c_{\mathfrak{N}}\right] \\=\left[ \left[ \left( s_{i},t_{i}\right) \right] =c^{\operatorname*{op}}\right] \\\text{(by (\ref{pf.thm.BCL.a.hasdiff1.eq1}))}}}-\underbrace{\operatorname*{coord} \nolimits_{c_{\mathfrak{N}}}\left( s_{i}^{\prime},t_{i}^{\prime}\right) }_{\substack{=\left[ \left( s_{i}^{\prime},t_{i}^{\prime}\right) \in c_{\mathfrak{N}}\right] \\=\left[ \left[ \left( s_{i},t_{i}\right) \right] =c\right] \\\text{(by (\ref{pf.thm.BCL.a.hasdiff1.eq2}))}}}\\ & =\left[ \left[ \left( s_{i},t_{i}\right) \right] =c^{\operatorname*{op}}\right] -\left[ \left[ \left( s_{i},t_{i}\right) \right] =c\right] . \end{align*} This proves (\ref{pf.thm.BCL.a.hasdiff1}).
Now, let us forget that we fixed $i$. Thus, for every $i\in\left\{ 1,2,\ldots,k\right\} $, we have defined $\left( s_{i},t_{i}\right) \in\mathfrak{M}$ satisfying (\ref{pf.thm.BCL.a.color}) and (\ref{pf.thm.BCL.a.hasdiff1}).
We have $\operatorname*{coord}\nolimits_{c_{\mathfrak{N}}}\left( \operatorname*{Has}\overrightarrow{c_{i+1}}-\operatorname*{Has} \overrightarrow{c_{i}}\right) =\operatorname*{coord} \nolimits_{c_{\mathfrak{N}}}\left( \operatorname*{Has}\overrightarrow{c_{i+1} }\right) -\operatorname*{coord}\nolimits_{c_{\mathfrak{N}}}\left( \operatorname*{Has}\overrightarrow{c_{i}}\right) $ for all $i\in\left\{ 1,2,\ldots,k\right\} $. Hence, \begin{align*} & \sum_{i=1}^{k}\operatorname*{coord}\nolimits_{c_{\mathfrak{N}}}\left( \operatorname*{Has}\overrightarrow{c_{i+1}}-\operatorname*{Has} \overrightarrow{c_{i}}\right) \\ & =\sum_{i=1}^{k}\left( \operatorname*{coord}\nolimits_{c_{\mathfrak{N}} }\left( \operatorname*{Has}\overrightarrow{c_{i+1}}\right) -\operatorname*{coord}\nolimits_{c_{\mathfrak{N}}}\left( \operatorname*{Has} \overrightarrow{c_{i}}\right) \right) =0 \end{align*} (by the telescope principle). Hence, \begin{align*} 0 & =\sum_{i=1}^{k}\operatorname*{coord}\nolimits_{c_{\mathfrak{N}}}\left( \operatorname*{Has}\overrightarrow{c_{i+1}}-\operatorname*{Has} \overrightarrow{c_{i}}\right) \\ & =\sum_{i=1}^{k}\left( \left[ \left[ \left( s_{i},t_{i}\right) \right] =c^{\operatorname*{op}}\right] -\left[ \left[ \left( s_{i},t_{i}\right) \right] =c\right] \right) \ \ \ \ \ \ \ \ \ \ \left( \text{by (\ref{pf.thm.BCL.a.hasdiff1})}\right) \\ & =\sum_{i=1}^{k}\left[ \left[ \left( s_{i},t_{i}\right) \right] =c^{\operatorname*{op}}\right] -\sum_{i=1}^{k}\left[ \left[ \left( s_{i},t_{i}\right) \right] =c\right] .
\end{align*} Comparing this with \begin{align*} & \left( \text{the number of arcs colored }c^{\operatorname*{op}}\text{ appearing in }C\right) \\ & \ \ \ \ \ \ \ \ \ \ -\left( \text{the number of arcs colored }c\text{ appearing in }C\right) \\ & =\sum_{i=1}^{k}\left[ \left( \text{the color of the arc from }\overrightarrow{c_{i}}\text{ to }\overrightarrow{c_{i+1}}\right) =c^{\operatorname*{op}}\right] \\ & \ \ \ \ \ \ \ \ \ \ -\sum_{i=1}^{k}\left[ \left( \text{the color of the arc from }\overrightarrow{c_{i}}\text{ to }\overrightarrow{c_{i+1}}\right) =c\right] \\ & =\sum_{i=1}^{k}\left[ \left[ \left( s_{i},t_{i}\right) \right] =c^{\operatorname*{op}}\right] -\sum_{i=1}^{k}\left[ \left[ \left( s_{i},t_{i}\right) \right] =c\right] \ \ \ \ \ \ \ \ \ \ \left( \text{by (\ref{pf.thm.BCL.a.color})}\right) , \end{align*} we obtain \begin{align*} & \left( \text{the number of arcs colored }c^{\operatorname*{op}}\text{ appearing in }C\right) \\ & \ \ \ \ \ \ \ \ \ \ -\left( \text{the number of arcs colored }c\text{ appearing in }C\right) \\ & =0. \end{align*} In other words, the number of arcs colored $c$ appearing in $C$ equals the number of arcs colored $c^{\operatorname*{op}}$ appearing in $C$. This proves Theorem \ref{thm.BCL} \textbf{(a)}.
\textbf{(b)} If $c\neq c^{\operatorname*{op}}$, then Theorem \ref{thm.BCL} \textbf{(b)} follows immediately from Theorem \ref{thm.BCL} \textbf{(a)}. Thus, for the rest of this proof, we may assume that $c=c^{\operatorname*{op}}$.
We have $\left[ \left( s,t\right) \right] =c=c^{\operatorname*{op} }=\left[ \left( t,s\right) \right] $, so that $\left( t,s\right) \sim\left( s,t\right) $. Hence, $\left( t,s\right) \approx\left( s,t\right) $ (since $\sim$ is the restriction of the relation $\approx$ to $\mathfrak{M}$).
Fix some total order on the set $S$. Let $d$ be the subset $\left\{ \left( u,v\right) \in c_{\mathfrak{N}}\ \mid\ u<v\right\} $ of $c_{\mathfrak{N}}$.
Fix $i\in\left\{ 1,2,\ldots,k\right\} $. We shall now show that \begin{equation} \operatorname*{coord}\nolimits_{d}\left( \operatorname*{Has} \overrightarrow{c_{i+1}}-\operatorname*{Has}\overrightarrow{c_{i}}\right) \equiv\left[ \left[ \left( s_{i},t_{i}\right) \right] =c\right] \operatorname{mod}2. \label{pf.thm.BCL.b.hasdiff1} \end{equation}
\textit{Proof of (\ref{pf.thm.BCL.b.hasdiff1}):} Define $q_{i}$, $s_{i}^{\prime}$ and $t_{i}^{\prime}$ as before. We have $s_{i}^{\prime}\neq t_{i}^{\prime}$. Hence, either $s_{i}^{\prime}<t_{i}^{\prime}$ or $t_{i}^{\prime}<s_{i}^{\prime}$.
We have the following equivalences: \begin{align} \left( \left( t_{i}^{\prime},s_{i}^{\prime}\right) \in c_{\mathfrak{N} }\right) \ & \Longleftrightarrow\ \left( \left( t_{i}^{\prime} ,s_{i}^{\prime}\right) \in\left[ \left[ \left( s,t\right) \right] \right] \right) \ \ \ \ \ \ \ \ \ \ \left( \text{since }c_{\mathfrak{N} }=\left[ \left[ \left( s,t\right) \right] \right] \right) \nonumber\\ & \Longleftrightarrow\ \left( \left( t_{i}^{\prime},s_{i}^{\prime}\right) \approx\left( s,t\right) \right) \ \Longleftrightarrow\ \left( s_{i}^{\prime},t_{i}^{\prime}\right) \approx\left( t,s\right) \ \Longleftrightarrow\ \left( \left( s_{i},t_{i}\right) \approx\left( s,t\right) \right) \nonumber\\ & \ \ \ \ \ \ \ \ \ \ \ \left( \text{since }\left( s_{i}^{\prime} ,t_{i}^{\prime}\right) \approx\left( s_{i},t_{i}\right) \text{ and }\left( t,s\right) \approx\left( s,t\right) \right) \nonumber\\ & \Longleftrightarrow\ \left( \left( s_{i},t_{i}\right) \sim\left( s,t\right) \right) \label{pf.thm.BCL.b.hasdiff1.pf.equiv1} \end{align} (since the restriction of the relation $\approx$ to $\mathfrak{M}$ is $\sim$) and \begin{align} \left( \left( s_{i}^{\prime},t_{i}^{\prime}\right) \in c_{\mathfrak{N} }\right) \ & \Longleftrightarrow\ \left( \left( s_{i}^{\prime} ,t_{i}^{\prime}\right) \in\left[ \left[ \left( s,t\right) \right] \right] \right) \ \ \ \ \ \ \ \ \ \ \left( \text{since }c_{\mathfrak{N} }=\left[ \left[ \left( s,t\right) \right] \right] \right) \nonumber\\ & \Longleftrightarrow\ \left( \left( s_{i}^{\prime},t_{i}^{\prime}\right) \approx\left( s,t\right) \right) \ \Longleftrightarrow\ \left( \left( s_{i},t_{i}\right) \approx\left( s,t\right) \right) \nonumber\\ & \Longleftrightarrow\ \left( \left( s_{i},t_{i}\right) \sim\left( s,t\right) \right) . \label{pf.thm.BCL.b.hasdiff1.pf.equiv2} \end{align}
Applying (\ref{eq.thm.has.a}) to $\overrightarrow{c_{i}}$, $\overrightarrow{c_{i+1}}$, $s_{i}$, $t_{i}$, $q_{i}$, $s_{i}^{\prime}$ and $t_{i}^{\prime}$ instead of $\overrightarrow{a}$, $\overrightarrow{b}$, $s$, $t$, $q$, $s^{\prime}$ and $t^{\prime}$, we obtain $\operatorname*{Has} \overrightarrow{c_{i+1}}=\operatorname*{Has}\overrightarrow{c_{i}}-\left( s_{i}^{\prime},t_{i}^{\prime}\right) +\left( t_{i}^{\prime},s_{i}^{\prime }\right) $. In other words, $\operatorname*{Has}\overrightarrow{c_{i+1} }-\operatorname*{Has}\overrightarrow{c_{i}}=\left( t_{i}^{\prime} ,s_{i}^{\prime}\right) -\left( s_{i}^{\prime},t_{i}^{\prime}\right) $. Thus, \begin{align*} & \operatorname*{coord}\nolimits_{d}\left( \operatorname*{Has} \overrightarrow{c_{i+1}}-\operatorname*{Has}\overrightarrow{c_{i}}\right) \\ & =\operatorname*{coord}\nolimits_{d}\left( \left( t_{i}^{\prime} ,s_{i}^{\prime}\right) -\left( s_{i}^{\prime},t_{i}^{\prime}\right) \right) =\operatorname*{coord}\nolimits_{d}\left( t_{i}^{\prime} ,s_{i}^{\prime}\right) -\operatorname*{coord}\nolimits_{d}\left( s_{i}^{\prime},t_{i}^{\prime}\right) \\ & =\left[ \left( t_{i}^{\prime},s_{i}^{\prime}\right) \in d\right] -\left[ \left( s_{i}^{\prime},t_{i}^{\prime}\right) \in d\right] \\ & \equiv\left[ \left( t_{i}^{\prime},s_{i}^{\prime}\right) \in d\right] +\left[ \left( s_{i}^{\prime},t_{i}^{\prime}\right) \in d\right] \\ & =\left[ \left( t_{i}^{\prime},s_{i}^{\prime}\right) \in c_{\mathfrak{N} }\text{ and }t_{i}^{\prime}<s_{i}^{\prime}\right] +\left[ \left( s_{i}^{\prime},t_{i}^{\prime}\right) \in c_{\mathfrak{N}}\text{ and } s_{i}^{\prime}<t_{i}^{\prime}\right] \\ & \ \ \ \ \ \ \ \ \ \ \left( \text{since a pair }\left( u,v\right) \text{ belongs to }d\text{ if and only if }\left( u,v\right) \in c_{\mathfrak{N} }\text{ and }u<v\right) \\ & =\left[ \left( s_{i},t_{i}\right) \sim\left( s,t\right) \text{ and }t_{i}^{\prime}<s_{i}^{\prime}\right] +\left[ \left( s_{i},t_{i}\right) \sim\left( s,t\right) \text{ and 
}s_{i}^{\prime}<t_{i}^{\prime}\right] \\ & \ \ \ \ \ \ \ \ \ \ \left( \text{by the equivalences (\ref{pf.thm.BCL.b.hasdiff1.pf.equiv1}) and (\ref{pf.thm.BCL.b.hasdiff1.pf.equiv2})}\right) \\ & =\left[ \left( s_{i},t_{i}\right) \sim\left( s,t\right) \right] \ \ \ \ \ \ \ \ \ \ \left( \text{because either }s_{i}^{\prime}<t_{i} ^{\prime}\text{ or }t_{i}^{\prime}<s_{i}^{\prime}\right) \\ & =\left[ \left[ \left( s_{i},t_{i}\right) \right] =\left[ \left( s,t\right) \right] \right] =\left[ \left[ \left( s_{i},t_{i}\right) \right] =c\right] \operatorname{mod}2\ \ \ \ \ \ \ \ \ \ \left( \text{since }\left[ \left( s,t\right) \right] =c\right) . \end{align*} This proves (\ref{pf.thm.BCL.b.hasdiff1}).
Now, $\operatorname*{coord}\nolimits_{d}\left( \operatorname*{Has} \overrightarrow{c_{i+1}}-\operatorname*{Has}\overrightarrow{c_{i}}\right) =\operatorname*{coord}\nolimits_{d}\left( \operatorname*{Has} \overrightarrow{c_{i+1}}\right) -\operatorname*{coord}\nolimits_{d}\left( \operatorname*{Has}\overrightarrow{c_{i}}\right) $ for each $i\in\left\{ 1,2,\ldots,k\right\} $; hence, \[ \sum_{i=1}^{k}\operatorname*{coord}\nolimits_{d}\left( \operatorname*{Has} \overrightarrow{c_{i+1}}-\operatorname*{Has}\overrightarrow{c_{i}}\right) =\sum_{i=1}^{k}\left( \operatorname*{coord}\nolimits_{d}\left( \operatorname*{Has}\overrightarrow{c_{i+1}}\right) -\operatorname*{coord} \nolimits_{d}\left( \operatorname*{Has}\overrightarrow{c_{i}}\right) \right) =0 \] (by the telescope principle). Hence, \begin{align*} 0 & =\sum_{i=1}^{k}\operatorname*{coord}\nolimits_{d}\left( \operatorname*{Has}\overrightarrow{c_{i+1}}-\operatorname*{Has} \overrightarrow{c_{i}}\right) \\ & \equiv\sum_{i=1}^{k}\left[ \left[ \left( s_{i},t_{i}\right) \right] =c\right] \ \ \ \ \ \ \ \ \ \ \left( \text{by (\ref{pf.thm.BCL.b.hasdiff1} )}\right) \\ & =\sum_{i=1}^{k}\left[ \left( \text{the color of the arc from }\overrightarrow{c_{i}}\text{ to }\overrightarrow{c_{i+1}}\right) =c\right] \ \ \ \ \ \ \ \ \ \ \left( \text{by (\ref{pf.thm.BCL.a.color})}\right) \\ & =\left( \text{the number of arcs colored }c\text{ appearing in }C\right) \operatorname{mod}2. \end{align*} Thus, the number of arcs colored $c$ appearing in $C$ is even. In other words, the number of arcs whose color belongs to $\left\{ c\right\} $ appearing in $C$ is even. In other words, the number of arcs whose color belongs to $\left\{ c,c^{\operatorname*{op}}\right\} $ appearing in $C$ is even (since $\left\{ c,\underbrace{c^{\operatorname*{op}}}_{=c}\right\} =\left\{ c,c\right\} =\left\{ c\right\} $). This proves Theorem \ref{thm.BCL} \textbf{(b)}. \end{proof}
\section{Open questions}
Theorem \ref{thm.BCL} is a statement about reduced expressions. As with all such statements, one can wonder whether a generalization to \textquotedblleft non-reduced\textquotedblright\ expressions would still be true. If $w$ is an element of $W$, then an \textit{expression} for $w$ means a $k$-tuple $\left( s_{1},s_{2},\ldots,s_{k}\right) $ of elements of $S$ such that $w=s_{1} s_{2}\cdots s_{k}$. Definition \ref{def.braid} can be applied verbatim to arbitrary expressions, leading to the concept of an $\left( s,t\right) $-braid move. Finally, for every $w\in W$, we define a directed graph $\mathcal{E}\left( w\right) $ in the same way as we defined $\mathcal{R} \left( w\right) $ in Definition \ref{def.R}, but with the word \textquotedblleft reduced\textquotedblright\ removed everywhere. This directed graph $\mathcal{E}\left( w\right) $ will be infinite (in general) and consist of many connected components (one of which is $\mathcal{R}\left( w\right) $), but we can still inquire about its cycles. We conjecture the following generalization of Theorem \ref{thm.BCL}:
\begin{conjecture} \label{conj.E(w)}Let $w\in W$. Theorem \ref{thm.BCL} is still valid if we replace $\mathcal{R}\left( w\right) $ by $\mathcal{E}\left( w\right) $. \end{conjecture}
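Graphs such as $\mathcal{R}\left( w\right) $ and $\mathcal{E}\left( w\right) $ can be explored computationally in small cases. The following Python sketch is an illustration only: it takes the symmetric group $S_{4}$ with the adjacent transpositions as $S$ (a stand-in for a general Coxeter system, where all $m_{s,t}\in\left\{ 2,3\right\} $), enumerates the reduced words of the longest element, and verifies that braid moves connect all of them (as guaranteed by the classical theorem of Matsumoto and Tits); the count of reduced words, $16$, matches the number of standard Young tableaux of staircase shape $\left( 3,2,1\right) $.

```python
from collections import deque

def compose(p, q):
    # (p o q)(x) = p(q(x)); permutations stored as tuples of 0..n-1
    return tuple(p[q[x]] for x in range(len(p)))

def adj_transposition(i, n):
    t = list(range(n))
    t[i], t[i + 1] = t[i + 1], t[i]
    return tuple(t)

def length(p):
    # Coxeter length = number of inversions
    n = len(p)
    return sum(1 for a in range(n) for b in range(a + 1, n) if p[a] > p[b])

def reduced_words(w):
    # all reduced expressions of w, as tuples of generator indices
    n = len(w)
    if length(w) == 0:
        return [()]
    out = []
    for i in range(n - 1):
        ws = compose(w, adj_transposition(i, n))
        if length(ws) < length(w):  # i is a valid last letter
            out.extend(u + (i,) for u in reduced_words(ws))
    return out

def braid_neighbors(word):
    # words reachable by one braid move (m = 2: commutation; m = 3: sts <-> tst)
    w = list(word)
    for j in range(len(w) - 1):
        if abs(w[j] - w[j + 1]) >= 2:
            yield tuple(w[:j] + [w[j + 1], w[j]] + w[j + 2:])
    for j in range(len(w) - 2):
        if w[j] == w[j + 2] and abs(w[j] - w[j + 1]) == 1:
            yield tuple(w[:j] + [w[j + 1], w[j], w[j + 1]] + w[j + 3:])

w0 = (3, 2, 1, 0)                 # longest element of S_4
words = set(reduced_words(w0))

# breadth-first search over braid moves: the graph on `words` is connected
seen = {next(iter(words))}
queue = deque(seen)
while queue:
    for nb in braid_neighbors(queue.popleft()):
        if nb not in seen:
            seen.add(nb)
            queue.append(nb)
```

Running the same enumeration with the word "reduced" removed (i.e., allowing arbitrary expressions of a fixed length) gives a finite window into the graph $\mathcal{E}\left( w\right) $ from the conjecture above.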
A further, slightly lateral, generalization concerns a kind of \textquotedblleft spin extension\textquotedblright\ of a Coxeter group:
\begin{conjecture} \label{conj.spin}For every $\left( s,t\right) \in\mathfrak{M}$, let $c_{s,t}$ be an element of $\left\{ 1,-1\right\} $. Assume that $c_{s,t}=c_{s^{\prime},t^{\prime}}$ for any two elements $\left( s,t\right) $ and $\left( s^{\prime},t^{\prime}\right) $ of $\mathfrak{M}$ satisfying $\left( s,t\right) \sim\left( s^{\prime},t^{\prime}\right) $. Assume furthermore that $c_{s,t}=c_{t,s}$ for each $\left( s,t\right) \in\mathfrak{M}$. Let $W^{\prime}$ be the group with the following generators and relations:
\textit{Generators:} the elements $s\in S$ and an extra generator $q$.
\textit{Relations:} \begin{align*} s^{2} & =1\ \ \ \ \ \ \ \ \ \ \text{for every }s\in S;\\ q^{2} & =1;\\ qs & =sq\ \ \ \ \ \ \ \ \ \ \text{for every }s\in S;\\ \left( st\right) ^{m_{s,t}} & =1\ \ \ \ \ \ \ \ \ \ \text{for every }\left( s,t\right) \in\mathfrak{M}\text{ satisfying }c_{s,t}=1;\\ \left( st\right) ^{m_{s,t}} & =q\ \ \ \ \ \ \ \ \ \ \text{for every }\left( s,t\right) \in\mathfrak{M}\text{ satisfying }c_{s,t}=-1. \end{align*}
There is clearly a surjective group homomorphism $\pi:W^{\prime}\rightarrow W$ sending each $s\in S$ to $s$, and sending $q$ to $1$. There is also an injective group homomorphism $\iota:\mathbb{Z}/2\mathbb{Z}\rightarrow W^{\prime}$ which sends the generator of $\mathbb{Z}/2\mathbb{Z}$ to $q$. Then, the sequence \begin{equation} 1\longrightarrow\mathbb{Z}/2\overset{\iota}{\longrightarrow}W^{\prime }\overset{\pi}{\longrightarrow}W\longrightarrow1 \label{eq.conj.spin.seq} \end{equation} is exact. Equivalently, $\left\vert \operatorname*{Ker}\pi\right\vert =2$. \end{conjecture}
(Note that exactness of the sequence (\ref{eq.conj.spin.seq}) at $W^{\prime}$ and at $W$ is easy.)
If Conjecture \ref{conj.spin} holds, then so does Conjecture \ref{conj.E(w)} \textbf{(b)} (that is, Theorem \ref{thm.BCL} \textbf{(b)} holds with $\mathcal{R}\left( w\right) $ replaced by $\mathcal{E}\left( w\right) $). Indeed, assume Conjecture \ref{conj.spin} to hold. Let $c\in\mathfrak{M}/\sim$ be an equivalence class. For any $\left( u,v\right) \in\mathfrak{M}$, define \[ c_{u,v}= \begin{cases} -1, & \text{if }\left( u,v\right) \in c\text{ or }\left( v,u\right) \in c;\\ 1, & \text{otherwise} \end{cases} . \] Thus, a group $W^{\prime}$ is defined. Pick any section $\mathbf{s} :W\rightarrow W^{\prime}$ (in the category of sets) of the projection $\pi:W^{\prime}\rightarrow W$. If $w\in W$, and if $\left( s_{1},s_{2} ,\ldots,s_{k}\right) $ is an expression of $w$, then the product $s_{1} s_{2}\cdots s_{k}$ formed in $W^{\prime}$ will either be $\mathbf{s}\left( w\right) $ or $q\mathbf{s}\left( w\right) $; and these latter two values are distinct (by Conjecture \ref{conj.spin}). We can then define the \textit{sign} of the expression $\left( s_{1},s_{2},\ldots,s_{k}\right) $ to be $ \begin{cases} 1, & \text{if }s_{1}s_{2}\cdots s_{k}=\mathbf{s}\left( w\right) ;\\ -1, & \text{if }s_{1}s_{2}\cdots s_{k}=q\mathbf{s}\left( w\right) \end{cases} \in\left\{ 1,-1\right\} $. The sign of an expression switches when we apply a braid move whose arc's color belongs to $\left\{ c,c^{\operatorname*{op} }\right\} $, but stays unchanged when we apply a braid move of any other color. Theorem \ref{thm.BCL} \textbf{(b)} then follows by a simple parity argument.
The construction of $W^{\prime}$ in Conjecture \ref{conj.spin} generalizes the construction of one of the two \textit{spin symmetric groups} (up to a substitution). We suspect that Conjecture \ref{conj.spin} could be proven by constructing a \textquotedblleft regular representation\textquotedblright, and this would then yield an alternative proof of Theorem \ref{thm.BCL} \textbf{(b)}.
\end{document} |
\begin{document}
\title{Multimodal Neuroimaging Data Integration and Pathway Analysis}
\thispagestyle{empty}
\begin{abstract} With fast advancements in technology, the collection of multiple types of measurements on a common set of subjects is becoming routine in science. Notable examples include multimodal neuroimaging studies for the simultaneous investigation of brain structure and function, and multi-omics studies that combine genetic and genomic information. Integrative analysis of multimodal data allows scientists to interrogate new mechanistic questions. However, data collection and the generation of integrative hypotheses are outpacing the available methodology for joint analysis of multimodal measurements. In this article, we study high-dimensional multimodal data integration in the context of mediation analysis. We aim to understand the roles that different data modalities play as possible mediators in the pathway between an exposure variable and an outcome. We propose a mediation model framework with two data types serving as separate sets of mediators, and develop a penalized optimization approach for parameter estimation. We study both the theoretical properties of the estimator through an asymptotic analysis, and its finite-sample performance through simulations. We illustrate our method with a multimodal brain pathway analysis in which both structural and functional connectivities serve as mediators in the association between sex and language processing.
\end{abstract}
\noindent \textbf{Key Words:} Brain connectivity analysis; Linear structural equation model; Mediation analysis; Multimodal data integration; Regularization.
\section{Introduction} \label{sec:introduction}
Neuroimaging technology is ever expanding with new imaging measurements, such as anatomical magnetic resonance imaging (MRI), functional magnetic resonance imaging (fMRI), diffusion tensor imaging (DTI), and positron emission tomography (PET), among many others. These imaging techniques provide crucial tools to facilitate our understanding of brain structure and function, as well as their associations with numerous neurological disorders. Most modern MRI studies are multimodal, in the sense that several types of MRI measurements are collected on the subjects in a scanning session, as different measurements can be obtained from nuclear magnetic resonance in a single scanner. Other more ambitious studies collect multiple imaging measurements for the same group of subjects from different scanners, such as MRI and PET. Intuitively, integration of such diverse, but scientifically complementary, neuroimaging information would strengthen our understanding of the brain. Although there have been some studies on integrative analysis of multimodal neuroimaging data \citep{zhang2011multimodal, dai2012discriminative, uludaug2014general, lili2018integrative}, many questions remain open and call for the development of new statistical methodology.
In this article, we study multimodal data integration in the context of mediation analysis. Mediation analysis seeks to identify and explain the mechanism, or path, that underlies an observed relationship between a treatment or exposure variable and an outcome variable, through the inclusion of an intermediary variable, known as a mediator. Such an analysis is a generalization of path analysis, and represents the starting point for many mechanistic studies. It was originally developed in the psychometric and behavioral sciences literature \citep{baron1986moderator}, but has been extensively studied in the statistics literature \citep[see, e.g.,][among many others]{pearl2001direct, van2008direct, wang2013estimation, vanderweele2014mediation, huang2016hypothesis}. Recently, mediation analysis has received increased attention in neuroimaging analysis to understand the roles of brain structure and function as possible mediators between an exposure variable and some cognitive or behavioral outcome \citep{caffo2007brain, wager2009brain, atlas2010brain, lindquist2012functional, zhao2016pathway, chen2017high, zhao2018sparsepc}. However, all existing work focused on a single imaging modality as the mediator. Herein, we consider the more complex problem of multiple high-dimensional imaging modalities as mediators.
Our motivation lies in a study of how brain structure and function mediate the relationship between sex and language processing behavior. Sex differences in language processing behavior have been consistently observed \citep[see][for a review]{pinker2007stuff}. Numerous studies have noted sex differences in structure and function of brain regions related to language \citep[e.g.,][]{shaywitz1995sex, kansaku2000sex}. To further investigate this problem, we study a dataset from the Human Connectome Project. It includes a set of $n=136$ young adult participants, of whom 65 are female and 71 are male. Each participant went through a battery of cognitive and behavioral tests. We consider the picture vocabulary test, a measure of language processing behavior, as our outcome variable. Each participant also underwent imaging, including both a DTI and a resting-state fMRI scan. DTI is an MRI technique that measures the diffusion of water molecules, an indirect measure of white matter connectivity, as water diffuses anisotropically along white matter fiber bundles. Meanwhile, fMRI is an MRI technique that measures blood oxygen level, which in turn serves as a surrogate measure of brain neural activity. Whereas DTI measures brain structural connectivity, fMRI measures functional connectivity. After preprocessing, each DTI scan is summarized in the form of a 531-dimensional vector, and each fMRI scan as a 917-dimensional vector. We explain the DTI and fMRI preprocessing in more detail in Section \ref{sec:real}. Logically, brain structural and functional connectivity must be associated. Hebb's law \citep{hebb2005organization} formalizes this notion, observing that distinct brain areas that have communicated frequently are more likely to have more direct structural connections. 
More recent research also suggests that brain structural connectivity regulates the dynamics of cortical circuits and systems captured by the functional connectivity \citep{sporns2007brain}, and there is increased interest in integrative analysis of structural and functional connectivities \citep{higgins2018integrative}. The goal of our study is to integrate brain structural and functional connectivity to identify brain pathways that associate sex with language behavior. We also comment that, although motivated by a multimodal neuroimaging problem, our method is equally applicable to a large variety of multimodal data, e.g., multi-omics data \citep{shen2013sparse,richardson2016statistical}.
Toward our study goal, we consider a mediation model framework as depicted in Figure \ref{fig:diagram}. Let $X$ denote the exposure or treatment variable, and $Y$ the outcome variable. In our case, $X$ is the participant's sex, and $Y$ is the picture vocabulary test score. Let $\mathbf{M}_1 = (M_{11},\dots,M_{1p_{1}})^\top$ denote the set of potential mediators from the first modality, and $\mathbf{M}_2 = (M_{21},\dots,M_{2p_{2}})^\top$ the set of potential mediators from the second modality. In our case, the structural connectivity measures from DTI are $\mathbf{M}_1$, and the functional connectivity measures from fMRI are $\mathbf{M}_2$, with $p_1 = 531$ and $p_2 = 917$. This order of the two sets of mediators is determined by the prior knowledge that structural connectivity shapes and constrains functional connectivity \citep{hagmann2008mapping, honey2009predicting}. We seek to uncover various pathway effects between $X$ and $Y$, including: (i) the indirect effect through some elements of $\mathbf{M}_1$ but not the rest of $\mathbf{M}_1$ nor $\mathbf{M}_2$, (ii) the indirect effect through some elements of $\mathbf{M}_2$ but not the rest of $\mathbf{M}_2$ nor $\mathbf{M}_1$, (iii) the indirect effect through some elements of both $\mathbf{M}_1$ and $\mathbf{M}_2$, and (iv) the direct effect of $X$ on $Y$ through neither $\mathbf{M}_1$ nor $\mathbf{M}_2$.
We begin with a \emph{sequential model} as depicted in Figure~\ref{fig:diagram} (a), which captures the pathway effects of interest. However, it faces two immediate challenges. First, in our study, the numbers of mediators, $p_1 = 531$ and $p_2 = 917$, both far exceed the sample size, $n=136$. To address this issue, we introduce a Lasso-type penalty \citep{tibshirani1996regression} to induce a sparse estimate of the pathway effects. Second, the ordering of the mediators within each modality is unknown. As a result, there is an intractable number of possible combinations of mediation pathways within each modality. To address this issue, we adopt the idea of \citet{zhao2016pathway} and consider a \emph{marginal model}, as depicted in Figure~\ref{fig:diagram} (b). That is, we do not aim to delineate the underlying relationships among the mediators \emph{within} the same modality. Instead, we aim to reveal the roles of the mediators \emph{between} the two sets of modalities in the treatment-outcome pathway. We achieve this by introducing the notion of the pathway effect of the mediators, which we formally define in Section \ref{sec:model}, and which we argue is of greater practical interest in scientific applications. We employ correlated errors to account for the dependence among the mediators within each modality, and relax the ordering of the mediators within the same modality. Our model, although motivated by \citet{zhao2016pathway}, is distinct in several ways. First, while \citet{zhao2016pathway} tackled single-modality pathway analysis, we focus on the multimodal scenario, for which there is no existing solution. Given the strong demand for this type of multimodal mediation analysis in brain imaging as well as many other applications, we offer a timely solution to this important family of scientific problems. 
Second, even though it is conceptually a straightforward extension, our multimodal pathway analysis is technically much more involved than the single-modality analysis of \citet{zhao2016pathway}, as we describe in detail later.
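To illustrate how a Lasso-type penalty induces sparsity when the number of variables exceeds the sample size, here is a minimal coordinate-descent sketch on synthetic data. This is a generic illustration, not the estimator developed later in the paper; the dimensions, penalty level $\lambda = 0.2$, and true coefficients are arbitrary choices.

```python
import random

def soft_threshold(z, t):
    """Soft-thresholding operator S(z, t) = sign(z) * max(|z| - t, 0)."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def lasso_cd(X, y, lam, n_sweeps=200):
    """Cyclic coordinate descent for (1/(2n)) * ||y - X b||^2 + lam * ||b||_1."""
    n, p = len(X), len(X[0])
    b = [0.0] * p
    r = list(y)  # running residual y - X b
    col_ms = [sum(X[i][j] ** 2 for i in range(n)) / n for j in range(p)]
    for _ in range(n_sweeps):
        for j in range(p):
            # rho_j = (1/n) x_j'(r + x_j b_j): correlation with partial residual
            rho = sum(X[i][j] * r[i] for i in range(n)) / n + col_ms[j] * b[j]
            new_bj = soft_threshold(rho, lam) / col_ms[j]
            if new_bj != b[j]:
                step = new_bj - b[j]
                for i in range(n):
                    r[i] -= X[i][j] * step
                b[j] = new_bj
    return b

# synthetic p > n data: only the first three coefficients are truly nonzero
random.seed(0)
n, p = 40, 100
beta_true = [3.0, -2.0, 1.5] + [0.0] * (p - 3)
X = [[random.gauss(0, 1) for _ in range(p)] for _ in range(n)]
y = [sum(X[i][j] * beta_true[j] for j in range(p)) + random.gauss(0, 0.5)
     for i in range(n)]
b_hat = lasso_cd(X, y, lam=0.2)
```

Despite $p = 100 > n = 40$, the soft-thresholding step zeroes out most coefficients, so the fitted $\widehat{b}$ is sparse, with the strong signals retained.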
\begin{figure}
\centering
\caption{Diagrams of the mediation model framework: (a) the sequential model; (b) the marginal model.}
\label{fig:diagram}
\end{figure}
The rest of the paper is organized as follows. Section \ref{sec:model} describes our model, and Section \ref{sec:method} develops the penalized optimization function and the associated estimation algorithm. Section \ref{sec:asmp} studies the asymptotic properties of our estimator. Section \ref{sec:sim} presents the simulation study, and Section \ref{sec:real} revisits our motivating multimodal brain imaging example. Section \ref{sec:discuss} gives a discussion. The supporting information collects all technical proofs and some additional results.
\section{Model} \label{sec:model}
In this section, we describe and compare in detail the sequential model and the marginal model, and then formally define the various pathway effects of interest.
We begin with the sequential model. For this model, we assume the orderings of effects of all variables in $\mathbf{M}_1$ and $\mathbf{M}_2$ are completely known, and without loss of generality, the variables in $\mathbf{M}_1$ and $\mathbf{M}_2$ are ordered accordingly. We later relax this assumption in the marginal model. For $n$ independent and identically distributed observations, let $\mathbf{X} \in \mathbb{R}^{n}$ denote the exposure vector, $\mathbf{Y} \in \mathbb{R}^{n}$ the response vector, $\mathbf{M}_{1j} \in \mathbb{R}^{n}$, and $\mathbf{M}_{2k} \in \mathbb{R}^{n}$ the two sets of mediators, $j=1,\ldots,p_1$ and $k=1,\ldots,p_2$. We adopt the linear structural equation modeling (LSEM) framework of \citet{pearl2003causality}, and have that, \begin{eqnarray}\label{eq:LSEM_full} && \mathbf{M}_{11} = \mathbf{X} \alpha_{1}+\boldsymbol{\epsilon}_{1}, \; \cdots, \; \mathbf{M}_{1p_{1}} = \mathbf{X} \alpha_{p_{1}}+\sum_{j=1}^{p_{1}-1}\mathbf{M}_{1j}\phi_{jp_{1}}+\boldsymbol{\epsilon}_{p_{1}}, \nonumber \\ &&\mathbf{M}_{21} = \mathbf{X} \gamma_{1}+\sum_{j=1}^{p_{1}}\mathbf{M}_{1j}\omega_{j1}+\boldsymbol{\eta}_{1}, \; \cdots, \; \mathbf{M}_{2p_{2}} = \mathbf{X} \gamma_{p_{2}}+\sum_{j=1}^{p_{1}}\mathbf{M}_{1j}\omega_{jp_{2}}+\sum_{k=1}^{p_{2}-1}\mathbf{M}_{2k}\psi_{kp_{2}}+\boldsymbol{\eta}_{p_{2}}, \nonumber \\ &&\mathbf{Y} = \mathbf{X}\delta+\sum_{j=1}^{p_{1}}\mathbf{M}_{1j}\theta_{j}+\sum_{k=1}^{p_{2}}\mathbf{M}_{2k}\pi_{k}+\boldsymbol{\xi}, \end{eqnarray} where $\alpha_j$, $\phi_{jj'}$, $\gamma_k$, $\omega_{jk}$, $\psi_{kk'}$, $\delta$, $\theta_j$, $\pi_k$ are all scalar coefficients, and $\boldsymbol{\epsilon}_{j} \in \mathbb{R}^{n}$, $\boldsymbol{\eta}_{k} \in \mathbb{R}^{n}$, $\boldsymbol{\xi} \in \mathbb{R}^{n}$ are independent normal random errors with zero means, $j, j' = 1,\ldots,p_1, k, k' = 1, \ldots, p_2$. 
Furthermore, the error term $\boldsymbol{\epsilon}_{j}$ is independent of $\mathbf{X}$, the error term $\boldsymbol{\eta}_{k}$ is independent of $\mathbf{X}$ and $\mathbf{M}_{1}$, and $\boldsymbol{\xi}$ is independent of $\mathbf{X}$, $\mathbf{M}_{1}$ and $\mathbf{M}_2$. We assume the data are centered at zero and thus drop the intercept terms. We depict model \eqref{eq:LSEM_full} in Figure~\ref{fig:diagram}(a). Next we stack the coefficients together, in that
\begin{eqnarray*} \boldsymbol{\alpha} = \left( \alpha_{1}, \cdots, \alpha_{p_{1}} \right)_{1 \times p_1}, \; \boldsymbol{\gamma} = \left(\gamma_{1}, \cdots, \gamma_{p_{2}} \right)_{1 \times p_2}, \; \boldsymbol{\theta} = \left( \theta_{1}, \cdots, \theta_{p_{1}} \right)^\top_{p_1 \times 1}, \; \boldsymbol{\pi} = \left( \pi_{1}, \cdots, \pi_{p_{2}} \right)^\top_{p_2 \times 1}, \end{eqnarray*} {\small \begin{eqnarray*}
\boldsymbol{\Phi}=\begin{pmatrix}
0 & \phi_{12} & \cdots & \phi_{1p_{1}} \\
& \ddots & \ddots & \vdots \\
& & \ddots & \phi_{p_{1}-1,p_{1}} \\
& & & 0
\end{pmatrix}_{p_1 \times p_1}, \;\;
\boldsymbol{\Omega}=\begin{pmatrix}
\omega_{11} & \cdots & \omega_{1p_{2}} \\
\vdots & \ddots & \vdots \\
\omega_{p_{1}1} & \cdots & \omega_{p_{1}p_{2}}
\end{pmatrix}_{p_1 \times p_2}, \;\;
\boldsymbol{\Psi}=\begin{pmatrix}
0 & \psi_{12} & \cdots & \psi_{1p_{2}} \\
& \ddots & \ddots & \vdots \\
& & \ddots & \psi_{p_{2}-1,p_{2}} \\
& & & 0
\end{pmatrix}_{p_2 \times p_2}. \end{eqnarray*} } \noindent We also stack the mediators and the error terms together, in that $\mathbf{M}_1 = (\mathbf{M}_{11}, \ldots, \mathbf{M}_{1p_1}) \in \mathbb{R}^{n \times p_1}$, $\mathbf{M}_2 = (\mathbf{M}_{21}, \ldots, \mathbf{M}_{2p_2}) \in \mathbb{R}^{n \times p_2}$, $\boldsymbol{\epsilon} = (\boldsymbol{\epsilon}_1, \ldots, \boldsymbol{\epsilon}_{p_1}) \in \mathbb{R}^{n \times p_1}$, $\boldsymbol{\eta} = (\boldsymbol{\eta}_1, \ldots, \boldsymbol{\eta}_{p_2}) \in \mathbb{R}^{n \times p_2}$. Then we can rewrite model \eqref{eq:LSEM_full} in a matrix form, \begin{eqnarray}\label{eq:LSEM_full_mat}
& & \begin{pmatrix}
\mathbf{M}_{1} & \mathbf{M}_{2} & \mathbf{Y}
\end{pmatrix} = \begin{pmatrix}
\mathbf{X} & \mathbf{M}_{1} & \mathbf{M}_{2}
\end{pmatrix}\begin{pmatrix}
\boldsymbol{\alpha} & \boldsymbol{\gamma} & \delta \\
\boldsymbol{\Phi} & \boldsymbol{\Omega} & \boldsymbol{\theta} \\
\boldsymbol{0} & \boldsymbol{\Psi} & \boldsymbol{\pi}
\end{pmatrix}+\begin{pmatrix}
\boldsymbol{\epsilon} & \boldsymbol{\eta} & \boldsymbol{\xi}
\end{pmatrix}, \end{eqnarray} where $\mathrm{vec}(\boldsymbol{\epsilon}) \sim \mathcal{N}(\boldsymbol{\mathrm{0}},\boldsymbol{\Xi}_{1}\otimes\boldsymbol{\mathrm{I}}_n)$, $\mathrm{vec}(\boldsymbol{\eta}) \sim \mathcal{N}(\boldsymbol{\mathrm{0}},\boldsymbol{\Xi}_{2}\otimes\boldsymbol{\mathrm{I}}_n)$, $\boldsymbol{\xi}\sim\mathcal{N}(\boldsymbol{\mathrm{0}},\sigma^{2}\boldsymbol{\mathrm{I}}_n)$, $\boldsymbol{\Xi}_{1} \in \mathbb{R}^{p_{1} \times p_{1}}$, $\boldsymbol{\Xi}_{2} \in \mathbb{R}^{p_{2} \times p_{2}}$ are the covariance matrices, and $\boldsymbol{\mathrm{I}}_n$ is the identity matrix with dimension $n$. Model~\eqref{eq:LSEM_full} fully characterizes the dependencies among all the variables, including the treatment, the mediators, and the outcome. The model errors $\boldsymbol{\epsilon}$, $\boldsymbol{\eta}$ and $\boldsymbol{\xi}$ are assumed to be mutually independent, and $\boldsymbol{\Xi}_{1}$ and $\boldsymbol{\Xi}_{2}$ are diagonal matrices.
Model \eqref{eq:LSEM_full} requires the knowledge of the ordering of the mediators within each modality, which is generally unknown in practice. To circumvent this challenge, we consider an alternative model of the form, \begin{eqnarray}\label{eq:LSEM_reduced} \mathbf{M}_{1j} &=& \mathbf{X} \beta_j + \boldsymbol{\varepsilon}_j, \;\; j=1, \ldots, p_1, \nonumber \\ \mathbf{M}_{2k} &=& \mathbf{X} \zeta_k + \sum_{j=1}^{p_{1}}\mathbf{M}_{1j} \lambda_{jk} + \boldsymbol{\vartheta}_k, \;\; k=1, \ldots, p_2, \nonumber \\ \mathbf{Y} & = & \mathbf{X}\delta+\sum_{j=1}^{p_{1}}\mathbf{M}_{1j}\theta_{j}+\sum_{k=1}^{p_{2}}\mathbf{M}_{2k}\pi_{k}+\boldsymbol{\xi}, \end{eqnarray} where $\beta_j$, $\zeta_k$, $\lambda_{jk}$ are scalar coefficients, $\delta$, $\theta_j$, $\pi_k$, $\boldsymbol{\xi}$ are the same as defined in model \eqref{eq:LSEM_full}, and $\boldsymbol{\varepsilon}_{j} \in \mathbb{R}^{n}$, $\boldsymbol{\vartheta}_{k} \in \mathbb{R}^{n}$ are normal random errors with zero means, $j=1,\dots,p_{1}$, $k=1,\dots,p_{2}$. The error term $\boldsymbol{\varepsilon}_{j}$ is independent of $\mathbf{X}$, and $\boldsymbol{\vartheta}_{k}$ is independent of $\mathbf{X}, \mathbf{M}_{1}$. We depict model \eqref{eq:LSEM_reduced} in Figure \ref{fig:diagram} (b). It extends that of \citet{zhao2016pathway} from a single modality of mediators to multiple modalities of mediators. Next we stack the coefficients, in that \begin{eqnarray*} \boldsymbol{\beta} = \left( \beta_{1}, \cdots, \beta_{p_{1}} \right)_{1 \times p_1}, \; \boldsymbol{\zeta} = \left(\zeta_{1}, \cdots, \zeta_{p_{2}} \right)_{1 \times p_2}, \; \boldsymbol{\Lambda}= \begin{pmatrix}
\lambda_{11} & \cdots & \lambda_{1p_{2}} \\
\vdots & \ddots & \vdots \\
\lambda_{p_{1}1} & \cdots & \lambda_{p_{1}p_{2}} \end{pmatrix}_{p_1 \times p_2}, \end{eqnarray*} and the error terms $\boldsymbol{\varepsilon} = (\boldsymbol{\varepsilon}_1, \ldots, \boldsymbol{\varepsilon}_{p_1}) \in \mathbb{R}^{n \times p_1}$ and $\boldsymbol{\vartheta} = (\boldsymbol{\vartheta}_1, \ldots, \boldsymbol{\vartheta}_{p_2}) \in \mathbb{R}^{n \times p_2}$. Then we can again rewrite model \eqref{eq:LSEM_reduced} in a matrix form, \begin{eqnarray}\label{eq:LSEM_reduced_mat}
& & \begin{pmatrix}
\mathbf{M}_{1} & \mathbf{M}_{2} & \mathbf{Y}
\end{pmatrix} = \begin{pmatrix}
\mathbf{X} & \mathbf{M}_{1} & \mathbf{M}_{2}
\end{pmatrix}\begin{pmatrix}
\boldsymbol{\beta} & \boldsymbol{\zeta} & \delta \\
\boldsymbol{0} & \boldsymbol{\Lambda} & \boldsymbol{\theta} \\
\boldsymbol{0} & \boldsymbol{0} & \boldsymbol{\pi}
\end{pmatrix}+\begin{pmatrix}
\boldsymbol{\varepsilon} & \boldsymbol{\vartheta} & \boldsymbol{\xi}
\end{pmatrix},
\end{eqnarray} where $\mathrm{vec}(\boldsymbol{\varepsilon}) \sim \mathcal{N}(\boldsymbol{\mathrm{0}},\boldsymbol{\Sigma}_{1}\otimes\boldsymbol{\mathrm{I}}_n)$, $\mathrm{vec}(\boldsymbol{\vartheta}) \sim \mathcal{N}(\boldsymbol{\mathrm{0}},\boldsymbol{\Sigma}_{2}\otimes\boldsymbol{\mathrm{I}}_n)$, and $\boldsymbol{\xi}\sim\mathcal{N}(\boldsymbol{\mathrm{0}},\sigma^{2}\boldsymbol{\mathrm{I}}_n)$.
Comparing the marginal model \eqref{eq:LSEM_reduced} to the sequential model \eqref{eq:LSEM_full}, for $\mathbf{M}_{1}$, model \eqref{eq:LSEM_reduced} can be viewed as a total effect model of $\mathbf{X}$~\citep{imai2010identification}, which avoids the explicit modeling of the relationship among the variables in $\mathbf{M}_{1}$. A similar relation holds for $\mathbf{M}_{2}$. Consequently, model \eqref{eq:LSEM_reduced} does not require the knowledge of the within-modality mediator ordering. Moreover, the parameters of the two models satisfy: \begin{eqnarray*} \boldsymbol{\beta}=\boldsymbol{\alpha}(\boldsymbol{\mathrm{I}}-\boldsymbol{\Phi})^{-1}, \quad \boldsymbol{\zeta}=\boldsymbol{\gamma}(\boldsymbol{\mathrm{I}}-\boldsymbol{\Psi})^{-1}, \quad \boldsymbol{\Lambda}=\boldsymbol{\Omega}(\boldsymbol{\mathrm{I}}-\boldsymbol{\Psi})^{-1}, \\ \boldsymbol{\Sigma}_{1}=(\boldsymbol{\mathrm{I}}-\boldsymbol{\Phi}^\top)^{-1}\boldsymbol{\Xi}_{1}(\boldsymbol{\mathrm{I}}-\boldsymbol{\Phi})^{-1}, \quad \boldsymbol{\Sigma}_{2}=(\boldsymbol{\mathrm{I}}-\boldsymbol{\Psi}^\top)^{-1}\boldsymbol{\Xi}_{2}(\boldsymbol{\mathrm{I}}-\boldsymbol{\Psi})^{-1}, \end{eqnarray*} where $\boldsymbol{\Phi}$ and $\boldsymbol{\Psi}$ are the weighted adjacency matrices of $\mathbf{M}_{1}$ and $\mathbf{M}_{2}$, respectively, which are upper-triangular matrices with zero diagonal elements describing the direct effects among the mediators, and $(\boldsymbol{\mathrm{I}}-\boldsymbol{\Phi})^{-1}$ and $(\boldsymbol{\mathrm{I}}-\boldsymbol{\Psi})^{-1}$ are the influence matrices that reveal the overall influence of one mediator on another. The term $\boldsymbol{\beta}$, which is the product of the influence matrix and the corresponding treatment effect $\boldsymbol{\alpha}$, summarizes the overall treatment effects on the mediators in $\mathbf{M}_1$. A similar interpretation applies to $\boldsymbol{\zeta}$.
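These identities can be checked numerically. The following sketch, with hypothetical coefficient values, $p_1 = 2$, and noiseless mediators, propagates a single exposure value through the sequential model and confirms that the induced marginal coefficients equal $\boldsymbol{\alpha}(\boldsymbol{\mathrm{I}}-\boldsymbol{\Phi})^{-1}$.

```python
# Sanity check of beta = alpha (I - Phi)^{-1} for p1 = 2 (hypothetical values).
# Sequential model (noiseless): M11 = X*a1, M12 = X*a2 + M11*phi12.
a1, a2, phi12 = 0.5, -0.3, 0.8

x = 2.0                      # a single exposure value
m11 = x * a1                 # first mediator
m12 = x * a2 + m11 * phi12   # second mediator, ordered after the first

# Marginal (total-effect) coefficients: M1j = X * beta_j.
beta1, beta2 = m11 / x, m12 / x

# For upper-triangular Phi with p1 = 2, (I - Phi)^{-1} = [[1, phi12], [0, 1]],
# so beta = alpha (I - Phi)^{-1} gives beta1 = a1 and beta2 = a1*phi12 + a2.
assert abs(beta1 - a1) < 1e-9
assert abs(beta2 - (a1 * phi12 + a2)) < 1e-9
```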
The $(j,k)$th element of $\boldsymbol{\Omega}$ captures the direct impact of $M_{1j}$ on $M_{2k}$, and the corresponding element in $\boldsymbol{\Lambda}$ reveals the overall effect regardless of the underlying relationships among the mediators in $\mathbf{M}_{2}$. Furthermore, under model \eqref{eq:LSEM_full}, the error terms in $\boldsymbol{\epsilon}$ and $\boldsymbol{\eta}$ are mutually independent. By contrast, under model \eqref{eq:LSEM_reduced}, the error terms in $\boldsymbol{\varepsilon}$ and $\boldsymbol{\vartheta}$ are dependent, due to the influences among the mediators. Therefore, even though model \eqref{eq:LSEM_reduced} does not explicitly model the relationship among the mediators within the same modality, it encapsulates the dependencies among the mediators through the correlations between the error terms. Next we formally define the various pathway effects of interest under model \eqref{eq:LSEM_reduced}.
\begin{definition}\label{def:patheffect} Under model~\eqref{eq:LSEM_reduced}, considering two treatment/exposure conditions $X = x$ and $X = x^{*}$, we define the following pathway effects of $X$ on the outcome $Y$. \begin{enumerate}[(i)] \item The indirect pathway effect of $X$ through path $X\rightarrow M_{1j}\rightarrow Y$ is: $\mathrm{IE}_{j}^{1}(x,x^{*})=\beta_{j}\theta_{j}(x-x^{*})$, $j=1,\dots,p_{1}$. The total indirect pathway effect of $X$ through $\mathbf{M}_{1}$ but not through $\mathbf{M}_{2}$ is: $\mathrm{IE}^{1}(x,x^{*})=\sum_{j=1}^{p_{1}}\beta_{j}\theta_{j}(x-x^{*})$.
\item The indirect pathway effect of $X$ through path $X\rightarrow M_{2k}\rightarrow Y$ is: $\mathrm{IE}_{k}^{2}(x,x^{*})=\zeta_{k}\pi_{k}(x-x^{*})$, $k=1,\dots,p_{2}$. The total indirect pathway effect of $X$ through $\mathbf{M}_{2}$ but not through $\mathbf{M}_{1}$ is $\mathrm{IE}^{2}(x,x^{*})=\sum_{k=1}^{p_{2}}\zeta_{k}\pi_{k}(x-x^{*})$.
\item The indirect pathway effect of $X$ through path $X\rightarrow M_{1j}\rightarrow M_{2k}\rightarrow Y$ is: $\mathrm{IE}_{jk}^{1,2}(x,x^{*})=\beta_{j}\lambda_{jk}\pi_{k}(x-x^{*})$, $j=1,\dots,p_{1}, k=1,\dots,p_{2}$. The total indirect effect of $X$ through both $\mathbf{M}_{1}$ and $\mathbf{M}_{2}$ is: $\mathrm{IE}^{1,2}=\sum_{j=1}^{p_{1}}\sum_{k=1}^{p_{2}}\beta_{j}\lambda_{jk}\pi_{k}(x-x^{*})$.
\item The direct effect of $X$ is: $\mathrm{DE}(x,x^{*})=\delta(x-x^{*})$. \end{enumerate} \end{definition}
\noindent By Definition~\ref{def:patheffect}, the total effect (TE) of $X$ on $Y$ is decomposed as the sum of the direct effect (DE) and the total indirect effect (IE): \begin{eqnarray} \label{eqn:total-effect-decomp} \mathrm{TE}(x,x^{*}) = \mathrm{DE}(x,x^{*}) + \mathrm{IE}(x,x^{*}) \equiv \mathrm{DE}(x,x^{*})+\mathrm{IE}^{1}(x,x^{*})+\mathrm{IE}^{2}(x,x^{*})+\mathrm{IE}^{1,2}(x,x^{*}). \end{eqnarray}
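To make the decomposition concrete, the fragment below computes each pathway effect of Definition~\ref{def:patheffect} and their sum, using hypothetical coefficients with $p_1 = p_2 = 2$.

```python
# Pathway effects of Definition 1 for hypothetical coefficients, p1 = p2 = 2.
beta  = [0.5, 0.0]          # X -> M1j
theta = [0.4, 0.2]          # M1j -> Y
zeta  = [0.3, 0.1]          # X -> M2k
pi    = [0.0, 0.6]          # M2k -> Y
lam   = [[0.7, 0.2],        # M1j -> M2k: row j, column k
         [0.0, 0.5]]
delta = 0.25                # direct effect
x, x_star = 1.0, 0.0        # the two treatment conditions

IE1  = sum(b * t for b, t in zip(beta, theta)) * (x - x_star)
IE2  = sum(z * p for z, p in zip(zeta, pi)) * (x - x_star)
IE12 = sum(beta[j] * lam[j][k] * pi[k]
           for j in range(2) for k in range(2)) * (x - x_star)
DE = delta * (x - x_star)
TE = DE + IE1 + IE2 + IE12   # TE = DE + IE^1 + IE^2 + IE^{1,2}
```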
\section{Estimation} \label{sec:method}
Our goal is to estimate the pathway effects defined in Definition~\ref{def:patheffect} under model~\eqref{eq:LSEM_reduced}. To achieve this, we define the objective function, \begin{eqnarray*} \ell(\boldsymbol{\beta},\boldsymbol{\theta},\boldsymbol{\zeta},\boldsymbol{\pi},\boldsymbol{\Lambda},\delta) & = & \textrm{trace}\left\{ (\mathbf{M}_{1}-\mathbf{X}\boldsymbol{\beta})^\top(\mathbf{M}_{1}-\mathbf{X}\boldsymbol{\beta}) \right\} \\ & & + \; \textrm{trace}\left\{ (\mathbf{M}_{2}-\mathbf{X}\boldsymbol{\zeta}-\mathbf{M}_{1}\boldsymbol{\Lambda})^\top(\mathbf{M}_{2}-\mathbf{X}\boldsymbol{\zeta}-\mathbf{M}_{1}\boldsymbol{\Lambda}) \right\} \\ & & + \; \left(\mathbf{Y}-\mathbf{X}\delta-\mathbf{M}_{1}\boldsymbol{\theta}-\mathbf{M}_{2}\boldsymbol{\pi}\right)^\top\left(\mathbf{Y}-\mathbf{X}\delta-\mathbf{M}_{1}\boldsymbol{\theta}-\mathbf{M}_{2}\boldsymbol{\pi}\right). \end{eqnarray*} This objective function conceptually sets $\boldsymbol{\Sigma}_{1}$ and $\boldsymbol{\Sigma}_{2}$ to be identity matrices. However, this simplification would not affect the consistency of our final estimators as long as all the variables are standardized to unit scale \citep{huber1967behavior, white1980heteroskedasticity}.
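As a concrete illustration of this loss, here is a minimal pure-Python version for the special case of a single mediator per modality, with hypothetical noiseless data so that the mediator terms vanish at the true coefficients.

```python
# The unpenalized loss ell(.) is a sum of three squared-error terms; a minimal
# pure-Python version for one mediator per modality (hypothetical data).
def loss(X, M1, M2, Y, beta, zeta, lam, theta, pi, delta):
    t1 = sum((m - x * beta) ** 2 for x, m in zip(X, M1))
    t2 = sum((m2 - x * zeta - m1 * lam) ** 2 for x, m1, m2 in zip(X, M1, M2))
    t3 = sum((y - x * delta - m1 * theta - m2 * pi) ** 2
             for x, m1, m2, y in zip(X, M1, M2, Y))
    return t1 + t2 + t3

X  = [1.0, -1.0, 0.5]
M1 = [0.5, -0.5, 0.25]          # exactly X * 0.5
M2 = [0.65, -0.65, 0.325]       # exactly X * 0.3 + M1 * 0.7
Y  = [0.0, 0.0, 0.0]

# With the generating coefficients, all three residual terms vanish.
assert abs(loss(X, M1, M2, Y, 0.5, 0.3, 0.7, 0.0, 0.0, 0.0)) < 1e-12
```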
Next we introduce a series of penalty functions and consider the following regularized optimization problem, \begin{eqnarray*} && \textrm{minimize}_{\boldsymbol{\beta},\boldsymbol{\theta},\boldsymbol{\zeta},\boldsymbol{\pi},\boldsymbol{\Lambda},\delta} \; \ell(\boldsymbol{\beta},\boldsymbol{\theta},\boldsymbol{\zeta},\boldsymbol{\pi},\boldsymbol{\Lambda},\delta) \\ [5pt]
& & \quad\quad\quad \textrm{ subject to } \sum_{j=1}^{p_{1}}|\beta_{j}\theta_{j}|\leq t_{1}, \sum_{k=1}^{p_{2}}|\zeta_{k}\pi_{k}|\leq t_{2}, \sum_{j=1}^{p_{1}}\sum_{k=1}^{p_{2}}|\beta_{j}\lambda_{jk}\pi_{k}|\leq t_{3}, |\delta|\leq t_{4}, \end{eqnarray*} where $t_{1},t_{2},t_{3},t_{4}\geq 0$ are the regularization parameters. Our penalty functions are not convex. The next lemma presents a convex relaxation through elastic net type penalties \citep{zou2005regularization}. We also observe that, in our study, the mediation effect through both $\mathbf{M}_{1}$ and $\mathbf{M}_{2}$ is defined as a three-way product, $\beta_{j}\lambda_{jk}\pi_{k}$. Since $\beta_{j}$ and $\pi_{k}$ are already regularized in the pathway effect through $\mathbf{M}_{1}$ but not through $\mathbf{M}_{2}$, i.e., $\beta_j \theta_j$, and that through $\mathbf{M}_{2}$ but not through $\mathbf{M}_{1}$, i.e., $\zeta_k \pi_k$, respectively, it suffices to regularize $\lambda_{jk}$ alone.
\begin{lemma}\label{lemma:convex} For any $\nu_{1},\nu_{2} \geq 1/2$, \[
\sum_{j=1}^{p_{1}}\left\{ |\beta_{j}\theta_{j}|+\nu_{1}(\beta_{j}^{2}+\theta_{j}^{2}) \right\}, \quad
\sum_{k=1}^{p_{2}}\left\{ |\zeta_{k}\pi_{k}|+\nu_{2}(\zeta_{k}^{2}+\pi_{k}^{2}) \right\}, \quad \text{and }
\sum_{j=1}^{p_{1}}\sum_{k=1}^{p_{2}}|\lambda_{jk}| \] are convex functions of $\{\boldsymbol{\beta},\boldsymbol{\theta}\}$, $\{\boldsymbol{\zeta},\boldsymbol{\pi}\}$ and $\boldsymbol{\Lambda}$, respectively. For any $t_{1},t_{2},t_{3}\geq 0$, there exist $r_{1},r_{2},r_{3}\geq 0$, such that \[ \begin{cases}
\sum_{j=1}^{p_{1}}\left\{ |\beta_{j}\theta_{j}|+\nu_{1}(\beta_{j}^{2}+\theta_{j}^{2}) \right\} \leq r_{1} \\
\sum_{k=1}^{p_{2}}\left\{ |\zeta_{k}\pi_{k}|+\nu_{2}(\zeta_{k}^{2}+\pi_{k}^{2}) \right\} \leq r_{2} \\
\sum_{j=1}^{p_{1}}\sum_{k=1}^{p_{2}}|\lambda_{jk}|\leq r_{3} \\ \end{cases} \quad \Rightarrow \quad \begin{cases}
\sum_{j=1}^{p_{1}}|\beta_{j}\theta_{j}|\leq t_{1} \\
\sum_{k=1}^{p_{2}}|\zeta_{k}\pi_{k}|\leq t_{2} \\
\sum_{j=1}^{p_{1}}\sum_{k=1}^{p_{2}}|\beta_{j}\lambda_{jk}\pi_{k}|\leq t_{3} \end{cases}. \] \end{lemma}
Based on this convex relaxation, we turn to the following optimization problem, \begin{eqnarray}\label{eq:objfunc} \textrm{minimize}_{\boldsymbol{\beta},\boldsymbol{\theta},\boldsymbol{\zeta},\boldsymbol{\pi},\boldsymbol{\Lambda},\delta} \; \left\{ \frac{1}{2} \ell(\boldsymbol{\beta},\boldsymbol{\theta},\boldsymbol{\zeta},\boldsymbol{\pi},\boldsymbol{\Lambda},\delta) + P_{1}(\boldsymbol{\beta},\boldsymbol{\theta},\boldsymbol{\zeta},\boldsymbol{\pi}) + P_{2}(\boldsymbol{\beta},\boldsymbol{\theta},\boldsymbol{\zeta},\boldsymbol{\pi}) + P_{3}(\boldsymbol{\Lambda},\delta) \right\}, \end{eqnarray} where the three penalty functions are of the form, \begin{eqnarray*}
P_{1}(\boldsymbol{\beta},\boldsymbol{\theta},\boldsymbol{\zeta},\boldsymbol{\pi}) &=& \kappa_{1} \left[ \sum_{j=1}^{p_{1}} \left\{ |\beta_{j}\theta_{j}|+\nu_{1}(\beta_{j}^{2}+\theta_{j}^{2}) \right\} \right] + \kappa_{2} \left[ \sum_{k=1}^{p_{2}} \left\{ |\zeta_{k}\pi_{k}|+\nu_{2}(\zeta_{k}^{2}+\pi_{k}^{2}) \right\} \right], \\
P_{2}(\boldsymbol{\beta},\boldsymbol{\theta},\boldsymbol{\zeta},\boldsymbol{\pi}) &=& \mu_{1} \sum_{j=1}^{p_{1}}(|\beta_{j}|+|\theta_{j}|)+\mu_{2} \sum_{k=1}^{p_{2}}(|\zeta_{k}|+|\pi_{k}|), \\
P_{3}(\boldsymbol{\Lambda},\delta) &=& \kappa_{3} \sum_{j=1}^{p_{1}}\sum_{k=1}^{p_{2}}|\lambda_{jk}|+\kappa_{4} |\delta|, \end{eqnarray*} with $\nu_{1},\nu_{2} \geq 1/2$, $\kappa_{1},\kappa_{2},\kappa_{3},\kappa_{4} \geq 0$, and $\mu_{1},\mu_{2}\geq 0$ as the tuning parameters. Here $\nu_{1},\nu_{2}$ control the level of convex relaxation, $\kappa_{1},\kappa_{2},\kappa_{3},\kappa_{4}$ control the level of penalty on the various pathway effects, and $\mu_{1},\mu_{2}$ control the level of sparsity of the individual parameters. We note that the tuning parameters $\kappa_{1},\kappa_{2},\kappa_{3},\mu_{1},\mu_{2}$ can also vary with $j$ and $k$. For simplicity, we keep them the same across $1 \leq j \leq p_1$ and $1 \leq k \leq p_2$.
The objective function in \eqref{eq:objfunc} consists of a differentiable loss function $\ell/2$ and a non-differentiable regularization function $(P_{1} + P_{2} + P_{3})$. We next develop an alternating direction method of multipliers \citep[ADMM,][]{boyd2011distributed} to solve \eqref{eq:objfunc}. The ADMM form of the optimization problem \eqref{eq:objfunc} is, \begin{eqnarray*} && \textrm{minimize}_{\boldsymbol{\beta},\boldsymbol{\theta},\boldsymbol{\zeta},\boldsymbol{\pi},\boldsymbol{\Lambda},\delta, \tilde{\boldsymbol{\beta}},\tilde{\boldsymbol{\theta}},\tilde{\boldsymbol{\zeta}},\tilde{\boldsymbol{\pi}} } \; \frac{1}{2}\ell(\boldsymbol{\beta},\boldsymbol{\theta},\boldsymbol{\zeta},\boldsymbol{\pi},\boldsymbol{\Lambda},\delta) + P_{1}(\tilde{\boldsymbol{\beta}},\tilde{\boldsymbol{\theta}},\tilde{\boldsymbol{\zeta}},\tilde{\boldsymbol{\pi}}) + P_{2}(\tilde{\boldsymbol{\beta}},\tilde{\boldsymbol{\theta}},\tilde{\boldsymbol{\zeta}},\tilde{\boldsymbol{\pi}}) + P_{3}(\boldsymbol{\Lambda},\delta), \\ & & \quad\quad\quad \textrm{ subject to } \boldsymbol{\beta}=\tilde{\boldsymbol{\beta}}, \; \boldsymbol{\theta}=\tilde{\boldsymbol{\theta}}, \; \boldsymbol{\zeta}=\tilde{\boldsymbol{\zeta}}, \; \boldsymbol{\pi}=\tilde{\boldsymbol{\pi}}, \end{eqnarray*} where $\tilde{\boldsymbol{\beta}} \in \mathbb{R}^{1\times p_{1}}$, $\tilde{\boldsymbol{\theta}} \in \mathbb{R}^{p_{1}}$, $\tilde{\boldsymbol{\zeta}} \in \mathbb{R}^{1\times p_{2}}$, and $\tilde{\boldsymbol{\pi}}\in\mathbb{R}^{p_{2}}$ are the newly introduced parameters. Let $\boldsymbol{\Upsilon}=(\boldsymbol{\beta},\boldsymbol{\theta},\boldsymbol{\zeta},\boldsymbol{\pi})$ and $\tilde{\boldsymbol{\Upsilon}}=(\tilde{\boldsymbol{\beta}},\tilde{\boldsymbol{\theta}},\tilde{\boldsymbol{\zeta}},\tilde{\boldsymbol{\pi}})$. The augmented Lagrangian function that enforces the constraints is, \begin{equation}\label{eq:ADMM_AL}
\frac{1}{2}\ell(\boldsymbol{\Upsilon},\boldsymbol{\Lambda},\delta) + P_{1}(\tilde{\boldsymbol{\Upsilon}}) + P_{2}(\tilde{\boldsymbol{\Upsilon}}) + P_{3}(\boldsymbol{\Lambda},\delta)+\sum_{r=1}^{4}\left( \langle h_{r}(\boldsymbol{\Upsilon},\tilde{\boldsymbol{\Upsilon}}),\boldsymbol{\tau}_{r}\rangle+\frac{\rho}{2}\| h_{r}(\boldsymbol{\Upsilon},\tilde{\boldsymbol{\Upsilon}})\|_{2}^{2}\right), \end{equation} where $h_{1}(\boldsymbol{\Upsilon},\tilde{\boldsymbol{\Upsilon}})=\boldsymbol{\beta}-\tilde{\boldsymbol{\beta}}$, $h_{2}(\boldsymbol{\Upsilon},\tilde{\boldsymbol{\Upsilon}})=\boldsymbol{\theta}-\tilde{\boldsymbol{\theta}}$, $h_{3}(\boldsymbol{\Upsilon},\tilde{\boldsymbol{\Upsilon}})=\boldsymbol{\zeta}-\tilde{\boldsymbol{\zeta}}$, $h_{4}(\boldsymbol{\Upsilon},\tilde{\boldsymbol{\Upsilon}})=\boldsymbol{\pi}-\tilde{\boldsymbol{\pi}}$, $\boldsymbol{\tau}_{1},\boldsymbol{\tau}_{2}\in\mathbb{R}^{p_{1}}$, $\boldsymbol{\tau}_{3},\boldsymbol{\tau}_{4}\in\mathbb{R}^{p_{2}}$, and $\rho>0$ is the augmented Lagrangian parameter. We propose to update the parameters in~\eqref{eq:ADMM_AL} iteratively, and summarize our estimation procedure in Algorithm~\ref{alg:ADMM}.
\begin{algorithm}[t!] \caption{The optimization algorithm for \eqref{eq:ADMM_AL}.} \begin{algorithmic}[1] \INPUT $(\mathbf{X},\mathbf{M}_{1},\mathbf{M}_{2},\mathbf{Y})$. \OUTPUT $\left( \hat{\boldsymbol{\beta}},\hat{\boldsymbol{\theta}},\hat{\boldsymbol{\zeta}},\hat{\boldsymbol{\pi}},\hat{\boldsymbol{\Lambda}},\hat{\delta} \right)$.
\State \textbf{initialization}: $\left\{ \boldsymbol{\beta}^{(0)},\boldsymbol{\theta}^{(0)},\boldsymbol{\zeta}^{(0)},\boldsymbol{\pi}^{(0)},\boldsymbol{\Lambda}^{(0)},\delta^{(0)},\tilde{\boldsymbol{\beta}}^{(0)},\tilde{\boldsymbol{\theta}}^{(0)},\tilde{\boldsymbol{\zeta}}^{(0)},\tilde{\boldsymbol{\pi}}^{(0)},\boldsymbol{\tau}_{1}^{(0)},\boldsymbol{\tau}_{2}^{(0)},\boldsymbol{\tau}_{3}^{(0)},\boldsymbol{\tau}_{4}^{(0)} \right\}$.
\Repeat \State update $\boldsymbol{\beta}^{(s+1)} = (\mathbf{X}^\top\mathbf{X}+\rho)^{-1} \left\{ \mathbf{X}^\top\mathbf{M}_{1}-\boldsymbol{\tau}_{1}^{(s)\top}+\rho\tilde{\boldsymbol{\beta}}^{(s)} \right\}$.
\State update $\boldsymbol{\theta}^{(s+1)} = (\mathbf{M}_{1}^\top\mathbf{M}_{1}+\rho\boldsymbol{\mathrm{I}})^{-1} \left[ \mathbf{M}_{1}^\top \left\{ \mathbf{Y}-\mathbf{X}\delta^{(s)}-\mathbf{M}_{2}\boldsymbol{\pi}^{(s)} \right\} - \boldsymbol{\tau}_{2}^{(s)}+\rho\tilde{\boldsymbol{\theta}}^{(s)} \right]$.
\State update $\boldsymbol{\zeta}^{(s+1)} = (\mathbf{X}^\top\mathbf{X}+\rho)^{-1} \left[ \mathbf{X}^\top \left\{ \mathbf{M}_{2}-\mathbf{M}_{1}\boldsymbol{\Lambda}^{(s)} \right\} - \boldsymbol{\tau}_{3}^{(s)\top}+\rho\tilde{\boldsymbol{\zeta}}^{(s)} \right]$.
\State update $\boldsymbol{\pi}^{(s+1)} = (\mathbf{M}_{2}^\top\mathbf{M}_{2} + \rho\boldsymbol{\mathrm{I}})^{-1} \left[ \mathbf{M}_{2}^\top \left\{ \mathbf{Y}-\mathbf{X}\delta^{(s)}-\mathbf{M}_{1}\boldsymbol{\theta}^{(s+1)} \right\} - \boldsymbol{\tau}_{4}^{(s)}+\rho\tilde{\boldsymbol{\pi}}^{(s)} \right]$.
\State update $\delta^{(s+1)} = (\mathbf{X}^\top\mathbf{X})^{-1} \mathrm{Soft}\left[ \mathbf{X}^\top \left\{ \mathbf{Y}-\mathbf{M}_{1}\boldsymbol{\theta}^{(s+1)}-\mathbf{M}_{2}\boldsymbol{\pi}^{(s+1)} \right\}, \kappa_{4} \right]$.
\For {$k$ = 1 to $p_2$}
\State update $\boldsymbol{\Lambda}_{k}^{(s+1)}$ by solving \eqref{eq:lambda_k}. \EndFor
\For {$j$ = 1 to $p_1$}
\State update $\left\{ \tilde{\beta}_{j}^{(s+1)},\tilde{\theta}_{j}^{(s+1)} \right\}$ by solving \eqref{eq:beta_theta_tilde}. \EndFor
\For {$k$ = 1 to $p_2$}
\State update $\left\{ \tilde{\zeta}_{k}^{(s+1)},\tilde{\pi}_{k}^{(s+1)} \right\}$ by solving \eqref{eq:zeta_pi_tilde}. \EndFor
\State update $\boldsymbol{\tau}_{r}^{(s+1)} = \boldsymbol{\tau}_{r}^{(s)}+\rho h_{r}\left\{ \boldsymbol{\Upsilon}^{(s+1)},\tilde{\boldsymbol{\Upsilon}}^{(s+1)} \right\}$, $r=1,\ldots,4$.
\Until{the objective function converges.} \end{algorithmic} \label{alg:ADMM} \end{algorithm}
A few remarks are in order. The explicit forms of Steps 3--6 of Algorithm~\ref{alg:ADMM} are derived in Section~\ref{appendix:sec:ADMM} of the supporting information. In Step 7, $\textrm{Soft}(a, b)=\operatorname{sgn}(a)\max\{|a|-b,0\}$ is the soft-thresholding function. Steps 8--10 are to update $\boldsymbol{\Lambda}$, one column at a time. Its $k$th column, $k = 1, \ldots, p_2$, can be obtained by, \begin{eqnarray}\label{eq:lambda_k}
\textrm{minimize}_{\boldsymbol{\Lambda}_k\in\mathbb{R}^{p_{1}}}~\frac{1}{2}\|\mathbf{M}_{2k}-\mathbf{X}\zeta_{k}^{(s+1)}-\mathbf{M}_{1}\boldsymbol{\Lambda}_k\|_{2}^{2}+\kappa_{3}\|\boldsymbol{\Lambda}_{k}\|_{1}. \end{eqnarray} This is a standard Lasso problem with $\left\{ \mathbf{M}_{2k}-\mathbf{X}\zeta_{k}^{(s+1)} \right\}$ as the ``outcome'' and $\mathbf{M}_{1}$ as the ``predictor''. Steps 11--13 are to update $(\tilde{\boldsymbol{\beta}},\tilde{\boldsymbol{\theta}})$, one coordinate pair at a time, and the $j$th pair $(\tilde{\beta}_j,\tilde{\theta}_j)$, $j=1,\dots,p_{1}$, can be obtained by, \begin{eqnarray}\label{eq:beta_theta_tilde} \textrm{minimize}_{(\tilde{\beta}_j,\tilde{\theta}_j)} \; v \left\{ \tilde{\beta}_j, \tilde{\theta}_j; \; \kappa_{1},\mu_{1}, 2\kappa_{1}\nu_{1}+\rho, 2\kappa_{1}\nu_{1}+\rho,\tau_{1j}^{(s)}+\rho\beta_{j}^{(s+1)}, \tau_{2j}^{(s)}+\rho\theta_{j}^{(s+1)} \right\}. \end{eqnarray} Similarly, Steps 14--15 are to update $(\tilde{\boldsymbol{\zeta}},\tilde{\boldsymbol{\pi}})$, one coordinate pair at a time, and the $k$th pair $(\tilde{\zeta}_{k},\tilde{\pi}_{k})$, $k=1,\dots,p_{2}$, can be obtained by, \begin{eqnarray}\label{eq:zeta_pi_tilde} \textrm{minimize}_{(\tilde{\zeta}_k,\tilde{\pi}_k)} \; v \left\{ \tilde{\zeta}_k,\tilde{\pi}_k; \; \kappa_{2}, \mu_{2}, 2\kappa_{2}\nu_{2}+\rho , 2\kappa_{2}\nu_{2}+\rho,\tau_{3k}^{(s)}+\rho\zeta_{k}^{(s+1)}, \tau_{4k}^{(s)}+\rho\pi_{k}^{(s+1)} \right\}. \end{eqnarray} In both \eqref{eq:beta_theta_tilde} and \eqref{eq:zeta_pi_tilde}, the function $v(a_1, a_2; \; b_1, \ldots, b_6)$ is of the form, \begin{equation}\label{eq:PathLasso_func}
v(a_1, a_2; \; b_1, b_2, b_3, b_4, b_5, b_6)=b_1 |a_1 a_2| + b_2 |a_1| + b_2|a_2| + \frac{1}{2}b_3 a_1^{2} + \frac{1}{2}b_4 a_2^{2} - b_5 a_1-b_6 a_2. \end{equation} Its optimization has a closed-form solution; see \citet[Lemma 3.2]{zhao2016pathway}.
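The two key computational primitives here are easy to sketch in Python: the soft-thresholding operator of Step 7, and a coordinate-descent solver for the Lasso subproblem of Steps 8--10. The code below is an illustrative sketch with hypothetical data, not the paper's actual implementation.

```python
# Soft-thresholding, Soft(a, b) = sgn(a) * max(|a| - b, 0), and a tiny
# coordinate-descent solver for 0.5*||R - M1 lam||_2^2 + kappa3*||lam||_1.
def soft(a, b):
    return (1 if a > 0 else -1) * max(abs(a) - b, 0.0)

def lasso_cd(R, M1, kappa3, iters=200):
    """R: residual 'outcome' (list); M1: list of predictor columns (lists)."""
    p, n = len(M1), len(R)
    lam = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # partial residual excluding coordinate j
            r = [R[i] - sum(lam[l] * M1[l][i] for l in range(p) if l != j)
                 for i in range(n)]
            num = sum(M1[j][i] * r[i] for i in range(n))
            den = sum(v * v for v in M1[j])
            lam[j] = soft(num, kappa3) / den
    return lam

assert soft(3.0, 1.0) == 2.0 and soft(-0.5, 1.0) == 0.0
# With an orthonormal design, each coordinate reduces to one soft-threshold:
lam = lasso_cd([2.0, 0.1], [[1.0, 0.0], [0.0, 1.0]], kappa3=0.5)
# -> [1.5, 0.0]
```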
Our method involves a number of tuning parameters. For $\nu_{1}$, $\nu_{2}$ and $\rho$, our simulations have found that the final estimators are not overly sensitive to their values. The same phenomenon was observed in \citet{zhao2016pathway}. We thus fix $\nu_{1}=\nu_{2}=2$ and $\rho=1$. For $(\kappa_{1},\kappa_{2},\kappa_{3},\kappa_{4},\mu_{1},\mu_{2})$, for simplicity, we set $\kappa_{1}=\kappa_{2}=\kappa_{3}=\kappa_{4}=\tilde\kappa$, and $\mu_{1}=\mu_{2}=\tilde{\mu}$. We further fix the ratio between $\tilde\kappa$ and $\tilde\mu$, following a similar tuning strategy as elastic net~\citep{zou2005regularization}. We then run a grid search to minimize a modified Bayesian information criterion (BIC), \begin{equation} \label{eqn:bic}
\mathrm{BIC} = -2\log L\left( \hat{\boldsymbol{\beta}},\hat{\boldsymbol{\theta}},\hat{\boldsymbol{\zeta}},\hat{\boldsymbol{\pi}},\hat{\boldsymbol{\Lambda}},\hat{\delta} \right) + \log(n)\left(|\hat{\mathcal{A}}_{1}|+|\hat{\mathcal{A}}_{2}|+|\hat{\mathcal{A}}_{3}|\right), \end{equation} where the estimators are obtained under a given set of tuning parameters, $\hat{\mathcal{A}}_{1} = \{j : \hat\beta_{j}\hat\theta_{j}\neq 0\}$, $\hat{\mathcal{A}}_{2} = \{k : \hat\zeta_{k}\hat\pi_{k} \neq 0\}$, and $\hat{\mathcal{A}}_{3}=\{(j,k):\hat\beta_{j}\hat\lambda_{jk}\hat\pi_{k}\neq 0\}$.
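The active sets in the BIC penalty count nonzero estimated pathway products rather than nonzero individual coefficients. A small sketch with hypothetical estimates (and a placeholder log-likelihood value) makes the bookkeeping explicit.

```python
# Active pathway sets for the modified BIC (hypothetical estimates, p1 = p2 = 2).
import math

beta_h  = [0.5, 0.0];  theta_h = [0.4, 0.2]
zeta_h  = [0.0, 0.1];  pi_h    = [0.0, 0.6]
lam_h   = [[0.7, 0.0], [0.0, 0.5]]

A1 = {j for j in range(2) if beta_h[j] * theta_h[j] != 0}
A2 = {k for k in range(2) if zeta_h[k] * pi_h[k] != 0}
A3 = {(j, k) for j in range(2) for k in range(2)
      if beta_h[j] * lam_h[j][k] * pi_h[k] != 0}

n, loglik = 136, -150.0   # hypothetical sample size and log-likelihood
bic = -2 * loglik + math.log(n) * (len(A1) + len(A2) + len(A3))
```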
\section{Theory} \label{sec:asmp}
In this section, we study the asymptotic properties of our proposed estimator. We consider two combinations of penalties. One is as in \eqref{eq:objfunc} with all three penalties, $P_{1}$, $P_{2}$ and $P_{3}$. The second involves only $P_{1}$ and $P_{3}$, since these two penalties alone suffice to select the pathways between $X$ and $Y$. We show that the two combinations achieve the same convergence rate in estimating the pathway effects as $p_{1}$, $p_{2}$ and $n$ tend to infinity. To simplify the problem and focus on the mediation pathways, we assume the direct effect $\delta$ is known, and let $V=Y-X\delta$. Let $\boldsymbol{\Theta}^{*} = (\boldsymbol{\beta}^{*},\boldsymbol{\theta}^{*},\boldsymbol{\zeta}^{*},\boldsymbol{\pi}^{*},\boldsymbol{\Lambda}^{*})$ denote the true parameters, and $\hat{\boldsymbol{\Theta}} = (\hat{\boldsymbol{\beta}},\hat{\boldsymbol{\theta}},\hat{\boldsymbol{\zeta}},\hat{\boldsymbol{\pi}},\hat{\boldsymbol{\Lambda}})$ the global minimizer of the optimization in \eqref{eq:objfunc}. Let $\boldsymbol{\varsigma}^{*}=\boldsymbol{\Lambda}^{*}\boldsymbol{\pi}^{*}\in\mathbb{R}^{p_{1}}$. Let $\mathcal{S}_{1}=\{j:\beta_{j}^{*}\neq 0\}$, $\mathcal{S}_{2}=\{j:\theta_{j}^{*}\neq 0\}$, $\mathcal{S}_{3}=\{k:\zeta_{k}^{*}\neq 0\}$, $\mathcal{S}_{4}=\{k:\pi_{k}^{*}\neq 0\}$, and $\mathcal{S}_{5}=\{j:\varsigma_{j}^{*}\neq 0\}$ denote the support of $\boldsymbol{\beta}^{*}$, $\boldsymbol{\theta}^{*}$, $\boldsymbol{\zeta}^{*}$, $\boldsymbol{\pi}^{*}$ and $\boldsymbol{\varsigma}^{*}$, respectively, and $s_l = | \mathcal{S}_l |$ the cardinality of set $\mathcal{S}_l$, $l = 1, \ldots, 5$. We first introduce a set of regularity conditions.
\begin{enumerate}[({C}1)]
\item The distribution of $X$ has a finite variance, and $|X_i| \leq c_0$ almost surely, $i=1,\dots,n$.
\item The true parameters are bounded, in that, $|\theta_{j}^{*}|\leq c_{01}$ for $j = 1, \ldots, p_1$, $|\pi_{k}^{*}|\leq c_{02}$ for $k = 1, \ldots, p_2$, and $|\varsigma_{j}^{*}| \leq c_{03}$ for $j = 1, \ldots, p_1$.
\item The penalty functions, evaluated at the true parameters, are bounded. That is, \begin{enumerate}[({C3}-1)]
\item For $P_1$, $\sum_{j=1}^{p_{1}}\left\{ |\beta_{j}^{*}\theta_{j}^{*}|+\nu_{1}(\beta_{j}^{*2}+\theta_{j}^{*2}) \right\} \leq c_{11}$, and $\sum_{k=1}^{p_{2}}\left\{ |\zeta_{k}^{*}\pi_{k}^{*}|+\nu_{2}(\zeta_{k}^{*2}+\pi_{k}^{*2}) \right\}$ $\leq c_{12}$;
\item For $P_2$, $\sum_{j=1}^{p_{1}}\left(|\beta_{j}^{*}|+|\theta_{j}^{*}|\right)\leq c_{21}$, and $\sum_{k=1}^{p_{2}}\left(|\zeta_{k}^{*}|+|\pi_{k}^{*}|\right)\leq c_{22}$;
\item For $P_3$, $\sum_{j=1}^{p_{1}}\sum_{k=1}^{p_{2}}|\lambda_{jk}^{*}|\leq c_{3}$. \end{enumerate}
\item All the entries of the error variance terms are bounded by $c_4$. \end{enumerate} Condition (C1) is a standard regularity condition on the design matrix in high-dimensional regression settings. When $X$ is binary or categorical, (C1) is satisfied. When considering all three penalties $P_{1}$, $P_{2}$ and $P_{3}$, we do not actually need (C2), as (C3-2) is sufficient; but if we only consider $P_{1}$ and $P_{3}$, we impose (C2), which bounds the true parameters. (C3-2) also implicitly regulates the sparsity in $\boldsymbol{\beta}^{*}$, $\boldsymbol{\theta}^{*}$, $\boldsymbol{\zeta}^{*}$ and $\boldsymbol{\pi}^{*}$. (C4) is the bounded variance condition on the model errors, which is again common in the literature.
We evaluate the accuracy of our estimator by the mean squared prediction error, \begin{equation} \mathrm{MSPE}=\frac{1}{n}\sum_{i=1}^{n}\left(\hat{V}_{i}-V_{i}^{*}\right)^{2}, \end{equation} where $\hat{V}_{i}$ and $V_{i}^{*}$ are the predicted pathway effects under the estimated parameter $\hat{\boldsymbol{\Theta}}$ and the true parameter $\boldsymbol{\Theta}^{*}$, respectively, for subject $i$, $i=1,\dots,n$. The predicted pathway effects under our mediation model setting are defined as follows. \begin{definition}\label{def:prediction} For a treatment condition $X=x$, define \begin{enumerate}[(1)] \item The predicted outcome through $\mathbf{M}_{1}$, but not through $\mathbf{M}_{2}$, is: $\hat{V}_{1}=x\hat{\boldsymbol{\beta}}\hat{\boldsymbol{\theta}}=x\left(\sum_{j=1}^{p_{1}}\hat{\beta}_{j}\hat{\theta}_{j}\right)$.
\item The predicted outcome through $\mathbf{M}_{2}$, but not through $\mathbf{M}_{1}$, is: $\hat{V}_{2}=x\hat{\boldsymbol{\zeta}}\hat{\boldsymbol{\pi}}=x\left(\sum_{k=1}^{p_{2}}\hat{\zeta}_{k}\hat{\pi}_{k}\right)$.
\item The prediction through both $\mathbf{M}_{1}$ and $\mathbf{M}_{2}$ is: $\hat{V}_{3}=x\hat{\boldsymbol{\beta}}\hat{\boldsymbol{\Lambda}}\hat{\boldsymbol{\pi}}=x\left(\sum_{j=1}^{p_{1}}\sum_{k=1}^{p_{2}}\hat{\beta}_{j}\hat{\lambda}_{jk}\hat{\pi}_{k}\right)$.
\item The total prediction of $V$ is: $\hat{V}=\hat{V}_{1}+\hat{V}_{2}+\hat{V}_{3}$. \end{enumerate} \end{definition}
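To make Definition~\ref{def:prediction} concrete, the following is a minimal NumPy sketch (function and variable names are ours, not from an accompanying implementation) that computes the three predicted pathway effects and the resulting MSPE:

```python
import numpy as np

def pathway_predictions(x, beta, theta, zeta, pi, Lam):
    """Predicted pathway effects for a scalar treatment x.

    V1: through M1 only; V2: through M2 only; V3: through both M1 and M2.
    """
    v1 = x * float(beta @ theta)        # x * sum_j beta_j theta_j
    v2 = x * float(zeta @ pi)           # x * sum_k zeta_k pi_k
    v3 = x * float(beta @ Lam @ pi)     # x * sum_{j,k} beta_j lambda_jk pi_k
    return v1, v2, v3, v1 + v2 + v3

def mspe(X, est, true):
    """Mean squared prediction error between total predicted pathway effects."""
    v_hat = np.array([pathway_predictions(x, *est)[3] for x in X])
    v_star = np.array([pathway_predictions(x, *true)[3] for x in X])
    return float(np.mean((v_hat - v_star) ** 2))
```

Here `est` and `true` are tuples $(\boldsymbol{\beta},\boldsymbol{\theta},\boldsymbol{\zeta},\boldsymbol{\pi},\boldsymbol{\Lambda})$ holding the estimated and true parameters, respectively.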
\begin{theorem}\label{thm:MSIPE_est} We have the following results regarding the pathway effect estimation. \begin{enumerate}[(i)] \item Under penalties $P_{1}$ and $P_{3}$, suppose (C1), (C2), (C3-1), (C3-3) and (C4) hold, then \begin{eqnarray*}\label{eq:eMSIPE_P1P3} \mathbb{E}(\mathrm{MSPE}) & \leq & 2 c_0 c_4 \left\{ c_{11} s_{2} c_{01} \sqrt{\frac{2\log(2p_{1})}{n}} + c_{12} s_{4} c_{02} \sqrt{\frac{2\log(2p_{2})}{n}} \right. \\ & & \quad\quad\quad \left. + \; c_3 \sqrt{c_{11} c_{12} / \nu_1 \nu_2} (1+s_{5} c_{03})\sqrt{\frac{2\log(2p_{1})}{n}} \right\}, \\
\|\hat{\boldsymbol{\beta}}\hat{\boldsymbol{\theta}}-\boldsymbol{\beta}^{*}\boldsymbol{\theta}^{*}\|_{2}^{2} &\leq& \frac{1}{c_{5}}\left\{ 8c_{11}^{2}c_{0}^{2}\sqrt{\frac{2\log(2)}{n}}+2c_{0}c_{4} c_{11}s_{2}c_{01}\sqrt{\frac{2\log(2p_{1})}{n}} \right\}, \label{eq:asmp_IEM1_P1P3} \\
\|\hat{\boldsymbol{\zeta}}\hat{\boldsymbol{\pi}}-\boldsymbol{\zeta}^{*}\boldsymbol{\pi}^{*}\|_{2}^{2} &\leq& \frac{1}{c_{5}}\left\{ 8c_{12}^{2}c_{0}^{2}\sqrt{\frac{2\log(2)}{n}}+2c_{0}c_{4} c_{12}s_{4}c_{02}\sqrt{\frac{2\log(2p_{2})}{n}} \right\}, \label{eq:asmp_IEM2_P1P3} \\
\|\hat{\boldsymbol{\beta}}\hat{\boldsymbol{\Lambda}}\hat{\boldsymbol{\pi}}-\boldsymbol{\beta}^{*}\boldsymbol{\Lambda}^{*}\boldsymbol{\pi}^{*}\|_{2}^{2} &\leq& \frac{1}{c_{5}}\left\{ 8c_{3}^{2}(c_{11}c_{12}/\nu_{1}\nu_{2})c_{0}^{2}\sqrt{\frac{2\log(2)}{n}} \right. \\ && \quad\quad \left. + \; 2c_{0}c_{4} c_{3}\sqrt{c_{11}c_{12}/\nu_{1}\nu_{2}}(1+s_{5}c_{03})\sqrt{\frac{2\log(2p_{1})}{n}} \right\}. \label{eq:asmp_IEM1M2_P1P3} \end{eqnarray*}
\item Under penalties $P_{1}$, $P_{2}$ and $P_{3}$, suppose (C1), (C3) and (C4) hold, then \begin{eqnarray*}\label{eq:eMSIPE_P2P3} \mathbb{E}(\mathrm{MSPE}) & \leq & 2 c_0 c_4 \left\{ c_{11} c_{21} \sqrt{\frac{2\log(2p_{1})}{n}} + c_{12} c_{22} \sqrt{\frac{2\log(2p_{2})}{n}} \right. \\ & & \quad\quad\quad \left. + \; c_3 \sqrt{c_{11} c_{12} / \nu_1 \nu_2} (1+c_{3}c_{22})\sqrt{\frac{2\log(2p_{1})}{n}} \right\}, \\
\|\hat{\boldsymbol{\beta}}\hat{\boldsymbol{\theta}}-\boldsymbol{\beta}^{*}\boldsymbol{\theta}^{*}\|_{2}^{2} &\leq& \frac{1}{c_{5}}\left\{ 8c_{11}^{2}c_{0}^{2}\sqrt{\frac{2\log(2)}{n}}+2c_{0}c_{4} c_{11}c_{21}\sqrt{\frac{2\log(2p_{1})}{n}}\right \}, \label{eq:asmp_IEM1_P1P2P3} \\
\|\hat{\boldsymbol{\zeta}}\hat{\boldsymbol{\pi}}-\boldsymbol{\zeta}^{*}\boldsymbol{\pi}^{*}\|_{2}^{2} &\leq& \frac{1}{c_{5}}\left\{ 8c_{12}^{2}c_{0}^{2}\sqrt{\frac{2\log(2)}{n}}+2c_{0}c_{4} c_{12}c_{22}\sqrt{\frac{2\log(2p_{2})}{n}} \right\}, \label{eq:asmp_IEM2_P1P2P3} \\
\|\hat{\boldsymbol{\beta}}\hat{\boldsymbol{\Lambda}}\hat{\boldsymbol{\pi}}-\boldsymbol{\beta}^{*}\boldsymbol{\Lambda}^{*}\boldsymbol{\pi}^{*}\|_{2}^{2} &\leq& \frac{1}{c_{5}}\left\{ 8c_{3}^{2}(c_{11}c_{12}/\nu_{1}\nu_{2})c_{0}^{2}\sqrt{\frac{2\log(2)}{n}} \right. \\ && \quad\quad \left. + \; 2c_{0}c_{4} c_{3}\sqrt{c_{11}c_{12}/\nu_{1}\nu_{2}}(1+c_{3}c_{22})\sqrt{\frac{2\log(2p_{1})}{n}} \right\}. \label{eq:asmp_IEM1M2_P1P2P3} \end{eqnarray*} \end{enumerate} \end{theorem}
\noindent Theorem~\ref{thm:MSIPE_est} shows that, under both penalty combinations, the mean squared prediction error converges. It also shows that all three types of total pathway effect estimators are consistent in the $\ell_{2}$-norm. Moreover, the convergence rates of the total indirect effects $\mathrm{IE}^{1}$ and $\mathrm{IE}^{2}$ are $\sqrt{\log(p_{1})/n}$ and $\sqrt{\log(p_{2})/n}$, respectively, which match the rate under a single set of mediators as studied in \citet{zhao2016pathway}. When considering the indirect effect through both $\mathbf{M}_{1}$ and $\mathbf{M}_{2}$, $\boldsymbol{\varsigma}=\boldsymbol{\Lambda}\boldsymbol{\pi}$ summarizes the post-$\mathbf{M}_{1}$ pathway effects, and the problem degenerates to the case with a single set of $p_{1}$ mediators. As such, the convergence rate is $\sqrt{\log(p_{1})/n}$, and depends on the number of nonzero elements in $\boldsymbol{\varsigma}$.
\section{Simulations} \label{sec:sim}
In this section, we investigate the finite-sample performance of our method. We generate data following Model~\eqref{eq:LSEM_reduced}. In the interest of space, we report the generative scheme and the corresponding signal-to-noise ratio in Section~\ref{appendix:sec:sim} of the supporting information. We consider two dimension settings: $p_{1}=20$, $p_{2}=30$ with the sparsity level set at $0.1$, and $p_{1}=p_{2}=100$ with the sparsity level set at $0.01$. We also consider two sample sizes, $n=50$ and $n=500$. We compare different combinations of penalty functions: (i) $P_2$ and $P_3$, which is essentially a Lasso solution (P2P3); (ii) $P_1$ and $P_3$, as we discuss in Section~\ref{sec:asmp} (P1P3); and (iii) $P_1, P_2$ and $P_3$, our proposed method. For the last case, we consider two ratios between the tuning parameters, one with $\tilde{\mu} / \tilde\kappa = 1$ (P1P2P3-1) and the other with $\tilde{\mu} / \tilde\kappa = 0.1$ (P1P2P3-2).
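To fix ideas, the reduced-form model (as implied by the gradients in the appendix: $\mathbf{M}_{1}=X\boldsymbol{\beta}+\boldsymbol{\epsilon}$, $\mathbf{M}_{2}=X\boldsymbol{\zeta}+\mathbf{M}_{1}\boldsymbol{\Lambda}+\boldsymbol{\vartheta}$, $Y=X\delta+\mathbf{M}_{1}\boldsymbol{\theta}+\mathbf{M}_{2}\boldsymbol{\pi}+\xi$) can be simulated in a few lines. The standard normal errors and binary treatment below are illustrative choices only; the actual generative scheme is given in the supplement.

```python
import numpy as np

def generate_lsem(n, p1, p2, beta, zeta, theta, pi, Lam, delta=0.0, seed=0):
    """Simulate one data set from the two-modality mediation model:
    M1 = X beta + eps,  M2 = X zeta + M1 Lam + vartheta,
    Y  = X delta + M1 theta + M2 pi + xi.
    """
    rng = np.random.default_rng(seed)
    X = rng.choice([0.0, 1.0], size=n)             # binary treatment
    M1 = np.outer(X, beta) + rng.normal(size=(n, p1))
    M2 = np.outer(X, zeta) + M1 @ Lam + rng.normal(size=(n, p2))
    Y = X * delta + M1 @ theta + M2 @ pi + rng.normal(size=n)
    return X, M1, M2, Y
```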
Figure~\ref{fig:sim-n50} reports the average simulation results based on 200 data replications with $n=50$, as the tuning parameter $\tilde\kappa$ varies. The evaluation criteria include the receiver operating characteristic (ROC) curve of the identification of all the nonzero indirect effects, the mean squared error (MSE) of estimating the total indirect effect IE as defined in \eqref{eqn:total-effect-decomp}, and the corresponding computation time in seconds. Table~\ref{tab:sim-n50} reports the results with the tuning parameter $\tilde\kappa$ selected by the BIC criterion in \eqref{eqn:bic}. The evaluation criteria include the sensitivity, specificity, MSE, and the true value and the estimate of the total indirect effect. We also report the results with $n=500$ in Section~\ref{appendix:sec:sim} of the supporting information. From these figures and tables, we see that there is a trade-off between the estimation accuracy and the selection accuracy. The method with all three penalties (P1P2P3-2) achieves a competitive overall performance.
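The selection and estimation criteria above can be computed as in the following illustrative sketch (the zero-tolerance `tol` is our choice, not from the paper):

```python
import numpy as np

def selection_metrics(est_effects, true_effects, tol=1e-8):
    """Sensitivity and specificity of indirect-effect pathway selection,
    comparing the supports of estimated and true effect vectors."""
    est_nz = np.abs(np.asarray(est_effects)) > tol
    true_nz = np.abs(np.asarray(true_effects)) > tol
    sens = float(est_nz[true_nz].mean()) if true_nz.any() else float("nan")
    spec = float((~est_nz[~true_nz]).mean()) if (~true_nz).any() else float("nan")
    return sens, spec

def total_ie_mse(ie_hats, ie_true):
    """MSE of the estimated total indirect effect across replications."""
    ie_hats = np.asarray(ie_hats, dtype=float)
    return float(np.mean((ie_hats - ie_true) ** 2))
```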
\begin{figure}
\caption{ The ROC curve of the indirect effect pathway selection, the MSE of the total indirect effect, and the computation time, as functions of the tuning parameter $\tilde\kappa$. The sample size is $n=50$.}
\label{fig:sim-n50}
\end{figure}
\begin{table} \caption{\label{tab:sim-n50}The estimate and the mean squared error of the total indirect effect, and the sensitivity and specificity of the indirect effect pathway selection, with the tuning parameter $\tilde\kappa$ selected by BIC. The sample size is $n=50$.} \begin{center} \begin{tabular}{l l R{2cm} R{2cm} R{2cm} R{2cm}} \hline Mediator dimension & Criterion & \multicolumn{1}{c}{P2P3} & \multicolumn{1}{c}{P1P3} & \multicolumn{1}{c}{P1P2P3-1} & \multicolumn{1}{c}{P1P2P3-2} \\
\hline
& Truth & \multicolumn{4}{c}{8} \\
& Estimate & 8.190 & 8.122 & 8.208 & 8.148 \\
& MSE & 29.67 & 31.35 & 29.70 & 31.01 \\
& Sensitivity & 0.620 & 0.539 & 0.560 & 0.541 \\
\multirow{-5}{*}{$p_{1}=20, p_{2}=30$} & Specificity & 0.973 & 0.965 & 0.972 & 0.966 \\
\hline
& Truth & \multicolumn{4}{c}{8} \\
& Estimate & 9.445 & 9.503 & 7.797 & 9.169 \\
& MSE & 13.72 & 13.52 & 12.06 & 11.69 \\
& Sensitivity & 0.721 & 0.417 & 0.517 & 0.439 \\
\multirow{-5}{*}{$p_{1}=p_{2}=100$} & Specificity & 0.999 & 0.999 & 0.999 & 0.999 \\ \hline \end{tabular} \end{center} \end{table}
\section{Data Analysis} \label{sec:real}
We revisit our motivating example from Section~\ref{sec:introduction}. We analyzed a set of $n=136$ participants from the recent S1200 release of the Human Connectome Project. The outcome of interest is a language behavior measure, the picture vocabulary test, which evaluates language and vocabulary comprehension. In the test, an audio recording of a word and four photographic images were presented to the participants on a screen, who responded by choosing the image that most closely matches the meaning of the word. A statistically significant sex difference was observed in this test after age adjustment under a linear model. The two sets of mediators are DTI and resting-state fMRI measures. The DTI images were preprocessed following the pipeline of \cite{zhang2019tensor}. From each DTI scan, we obtained a symmetric structural connectivity matrix, with nodes corresponding to the brain regions-of-interest based on the Desikan Atlas \citep{desikan2006automated}, and the edges recording the number of white fiber pathways \citep{zhang2019tensor}, a measure of structural connectivity between pairs of regions. We then removed the regions with zero connectivity in over 25\% of the subjects, vectorized the upper triangular connectivity matrix, and obtained a $p_1=531$-dimensional vector of DTI measures. The fMRI images were preprocessed following the pipeline of \cite{glasser2013minimal}. From each fMRI scan, we obtained a symmetric functional connectivity matrix, with nodes corresponding to the brain regions based on the Harvard-Oxford Atlas of FSL \citep{smith2004advances}, and the edges recording the $z$-transformed Pearson correlation. We then focused on the brain regions corresponding to those of DTI, vectorized the upper triangular connectivity matrix, and obtained a $p_2=917$-dimensional vector of fMRI measures.
We applied the proposed method to this data. Since P1P2P3-2 achieved the overall best performance in both selection and estimation accuracy in simulations, we employed this penalty combination. We observed a significant sex difference total effect, with a $p$-value of 0.016. This total effect can be decomposed following \eqref{eqn:total-effect-decomp}. Specifically, the penalized estimate of direct effect was zero, suggesting that the difference can be fully explained by the variations in brain connectivity. The estimated total indirect effect due to the structural connectivity alone, the functional connectivity alone, and both connectivities was 3.322, 0.297, and -0.109, respectively. Figure~\ref{fig:real_path} presents the identified brain pathways through both the structural and functional connectivities.
\begin{figure}
\caption{ The estimated pathways with picture vocabulary test performance as the outcome ($Y$) when comparing male ($X=1$) versus female ($X=0$). The nodes in purple are from brain structural connectivity, and nodes in orange are from brain functional connectivity. The edges in red indicate positive effects, and the ones in blue indicate negative effects.}
\label{fig:real_path}
\end{figure}
Among these pathways, one group involves structural connectivity between the left postcentral gyrus (\texttt{postCentral\_L}) and the left superior parietal lobule (\texttt{SupP\_L}), then functional connectivity between numerous brain regions. Figure~\ref{fig:real_brain_memory} shows some of these pathways through this structural connection and two functional connectivities, between the left superior parietal lobule and the left cuneus (\texttt{Cuneus\_L}), and between the left precuneus (\texttt{Precuneus\_L}) and the left lingual gyrus (\texttt{Lingual\_L}). These pathways are suggestive of working memory pathways. The postcentral gyrus and the cuneus have been implicated in visual processing, and the cuneus, as a mid-level visual processing area, has been found to be modulated by working memory~\citep{salmon1996regional}. The precuneus, as part of the default mode network, is involved in working memory, especially for tasks related to verbal processing~\citep{wallentin2006parallel}. The lingual gyrus, located in the occipital lobe, plays an important role in visual processing. The left lingual gyrus is found activated during memorization~\citep{kozlovskiy2014activation}, and is involved in tasks related to naming and word recognition~\citep{mechelli2000differential}.
\begin{figure}
\caption{The identified brain pathway related to working memory. (a) DTI: postcentral gyrus (Left) -- superior parietal lobule (Left), (b) fMRI: superior parietal lobule (Left) -- cuneus (Left), (c) fMRI: precuneus (Left) -- lingual gyrus (Left).}
\label{fig:real_brain_memory}
\end{figure}
The other pathway goes through the structural connectivity between the left inferior temporal gyrus (\texttt{IT\_L}) and the left precentral gyrus (\texttt{preCentral\_L}), then through the functional connectivity between the left middle temporal gyrus (\texttt{MT\_L}) and the left postcentral gyrus. Figure~\ref{fig:real_brain_language} presents this pathway, which is suggestive of a language pathway. The inferior temporal gyrus, as part of the inferior longitudinal and the inferior occipito-frontal fasciculi, is crucial for semantic processing~\citep{mandonnet2007does} and responsible for word naming~\citep{race2013area}. The middle temporal gyrus is typically viewed as part of the language network~\citep{ficek2018effect}.
\begin{figure}
\caption{The identified brain pathway related to language. (a) DTI: inferior temporal gyrus (Left) -- precentral gyrus (Left), (b) fMRI: middle temporal gyrus (Left) -- postcentral gyrus (Left).}
\label{fig:real_brain_language}
\end{figure}
\section{Discussion} \label{sec:discuss}
In this article, we proposed a method for multimodal mediation analysis. It requires an ordering of the modalities in the proposed mediation pathways, but it does not require an ordering of the potential individual mediators within each modality. We defined three types of indirect pathway effects, and employed a lasso-type regularization for estimation. We studied both the asymptotic and empirical behavior of our method.
The proposed method neatly fits within the context of brain imaging data for combining diffusion weighted MRI with functional MRI data. Abstracting the setting, the framework interrogates the idea that, across a sample of subjects, an exposure or treatment impacts neural wiring, and the consequent changes in wiring impact brain functional activity, which in turn impacts behavior. As our model is agnostic to domain, one might also consider applying the same approach where an exposure is postulated to impact epigenetic measurements, then subsequently genomic measurements, such as RNA expression, and ultimately a behavioral or clinical outcome.
In our data analysis, by integrating structural and functional imaging, we have postulated mechanistic pathways, including ones related to working memory and language, that mediate sex related differences in language behavior. However, care must be taken, as any mechanistic interpretation would be highly dependent on a variety of modeling assumptions, for instance, the path analysis ordering and the linearity assumption.
Our work also points to a number of potential extensions. In our analysis, the model vectorized the connectivity measures and did not exploit the symmetric and positive definite matrix structure. Recent developments in covariance regression \citep{sun2017store, zhao2018covariate} are potentially useful here. Also, we did not consider cases where the exposure and/or the outcome are themselves high-dimensional. We plan to pursue these lines of work in our future research.
\appendix
\renewcommand{\thefigure}{A\arabic{figure}} \renewcommand{\thetable}{A\arabic{table}} \renewcommand{\theequation}{A\arabic{equation}}
\section{Proofs}
\subsection{Proof of Lemma~\ref{lemma:convex}}
\begin{proof}
By Theorem 1 in \citet{zhao2016pathway}, the function $v(a,b)=|ab|+\nu(a^{2}+b^{2})$ is convex if and only if $\nu\geq 1/2$. This proves the convexity part of the lemma.
We next prove the second part of the lemma. Note that, since $\nu_{1}\geq 1/2$, \[
|\beta_{j}\theta_{j}|+\nu_{1}\left(\beta_{j}^{2}+\theta_{j}^{2}\right) = \frac{1}{2}\left( |\beta_{j}|+|\theta_{j}| \right)^{2} + \left( \nu_{1}-\frac{1}{2} \right)\left(\beta_{j}^{2}+\theta_{j}^{2}\right) \geq \nu_{1}\beta_{j}^{2}. \] Summing over $j$, \[
\sum_{j=1}^{p_{1}}\left\{ \frac{1}{2}\left(|\beta_{j}|+|\theta_{j}|\right)^{2} + \left(\nu_{1}-\frac{1}{2}\right)\left(\beta_{j}^{2}+\theta_{j}^{2}\right) \right\} \leq r_{1} \quad \Rightarrow \quad \sum_{j=1}^{p_{1}}\beta_{j}^{2}\leq\frac{r_{1}}{\nu_{1}}. \] Analogously, and by the Cauchy--Schwarz inequality, \begin{eqnarray*} \sum_{k=1}^{p_{2}}\pi_{k}^{2} & \leq & \frac{r_{2}}{\nu_{2}}, \\
\left(\sum_{j=1}^{p_{1}}\sum_{k=1}^{p_{2}}|\lambda_{jk}||\beta_{j}\pi_{k}|\right)^{2} & \leq & \left(\sum_{j=1}^{p_{1}}\sum_{k=1}^{p_{2}}|\lambda_{jk}|^{2}\right)\left(\sum_{j=1}^{p_{1}}\sum_{k=1}^{p_{2}}|\beta_{j}\pi_{k}|^{2}\right) \\
& \leq & \left(\sum_{j=1}^{p_{1}}\sum_{k=1}^{p_{2}}|\lambda_{jk}|\right)^{2}\left\{ \sum_{j=1}^{p_{1}}\beta_{j}^{2}\left(\sum_{k=1}^{p_{2}}\pi_{k}^{2}\right) \right\} \\
& \leq & r_{3}^{2}\left\{ \sum_{j=1}^{p_{1}}\beta_{j}^{2}\left(\frac{r_{2}}{\nu_{2}}\right) \right\} \\
& \leq & r_{3}^{2}\frac{r_{1}r_{2}}{\nu_{1}\nu_{2}}.
\end{eqnarray*}
We finish the proof by setting $r_{1}\leq t_{1}$, $r_{2}\leq t_{2}$, and $r_{3}\leq t_{3}\sqrt{\nu_{1}\nu_{2}/r_{1}r_{2}}$. \end{proof}
\subsection{Proof of Theorem~\ref{thm:MSIPE_est}}
\begin{proof} Before we prove Theorem~\ref{thm:MSIPE_est}, we briefly comment that a key difference between our method and the method in \citet{zhao2016pathway} is that the pathway effect through both $\mathbf{M}_{1}$ and $\mathbf{M}_{2}$ is decomposed as a product of three parameters. Lemma~\ref{lemma:convex} introduces a convex relaxation of the three-way product regularization. Therefore, in the following, based on the fact that the mediation effect is decomposed into three components, we prove the consistency of each component under such a convex regularization.
(i) We first consider the case with penalties $P_{1}$ and $P_{3}$. The estimator $\hat{\boldsymbol{\Theta}}$ is the solution to the optimization problem, \begin{eqnarray*} \text{minimize} && \frac{1}{2}\ell(\boldsymbol{\Theta}), \\
\text{such that} && \sum_{j=1}^{p_{1}}\left\{|\beta_{j}\theta_{j}|+\nu_{1}(\beta_{j}^{2}+\theta_{j}^{2})\right\}\leq c_{11}, \\
& & \sum_{k=1}^{p_{2}}\left\{|\zeta_{k}\pi_{k}|+\nu_{2}(\zeta_{k}^{2}+\pi_{k}^{2})\right\}\leq c_{12}, \;\;
\sum_{j=1}^{p_{1}}\sum_{k=1}^{p_{2}}|\lambda_{jk}|\leq c_{3}. \end{eqnarray*}
Let \begin{eqnarray*}
\mathcal{C}_{1} &=& \left\{X\boldsymbol{\beta}\boldsymbol{\theta}:P_{11}(\boldsymbol{\beta},\boldsymbol{\theta})\leq r_{1}\right\}, \quad \text{where} \;\; P_{11}(\boldsymbol{\beta},\boldsymbol{\theta})=\sum_{j=1}^{p_{1}}\left\{|\beta_{j}\theta_{j}|+\nu_{1}(\beta_{j}^{2}+\theta_{j}^{2})\right\}, \\
\mathcal{C}_{2} &=& \left\{X\boldsymbol{\zeta}\boldsymbol{\pi}:P_{12}(\boldsymbol{\zeta},\boldsymbol{\pi})\leq r_{2}\right\}, \quad \text{where} \;\; P_{12}(\boldsymbol{\zeta},\boldsymbol{\pi})=\sum_{k=1}^{p_{2}}\left\{|\zeta_{k}\pi_{k}|+\nu_{2}(\zeta_{k}^{2}+\pi_{k}^{2})\right\}, \\ \mathcal{C}_{3} &=& \left\{X\boldsymbol{\beta}\boldsymbol{\Lambda}\boldsymbol{\pi}:P_{11}(\boldsymbol{\beta},\boldsymbol{\theta})\leq r_{1},P_{12}(\boldsymbol{\zeta},\boldsymbol{\pi})\leq r_{2},P_{3}(\boldsymbol{\Lambda})\leq r_{3}\right\}, \\ \mathcal{C} &=& \left\{V_{1}+V_{2}+V_{3}:V_{1}\in\mathcal{C}_{1},V_{2}\in\mathcal{C}_{2},V_{3}\in\mathcal{C}_{3}\right\}. \end{eqnarray*} Let $V_{1}=\mathbf{M}_{1}\boldsymbol{\theta}=X\boldsymbol{\beta}^{*}\boldsymbol{\theta}^{*}+\boldsymbol{\epsilon}\boldsymbol{\theta}^{*}$, $V_{2}=X\boldsymbol{\zeta}^{*}\boldsymbol{\pi}^{*}+\boldsymbol{\vartheta}\boldsymbol{\pi}^{*}$, and $V_{3}=\mathbf{M}_{1}\boldsymbol{\Lambda}^{*}\boldsymbol{\pi}^{*}+\xi=X\boldsymbol{\beta}^{*}\boldsymbol{\Lambda}^{*}\boldsymbol{\pi}^{*}+\boldsymbol{\epsilon}\boldsymbol{\Lambda}^{*}\boldsymbol{\pi}^{*}+\xi$. Then $V=V_{1}+V_{2}+V_{3}$. Note that $\hat{V}_{1}$, $\hat{V}_{2}$ and $\hat{V}_{3}$ are the projections of $V_{1}$, $V_{2}$, $V_{3}$ onto $\mathcal{C}_{1}$, $\mathcal{C}_{2}$ and $\mathcal{C}_{3}$, respectively. We have, \begin{eqnarray*}
\|\hat{V}-V^{*}\|_{2}^{2} &=& \|(\hat{V}_{1}-V_{1}^{*})+(\hat{V}_{2}-V_{2}^{*})+(\hat{V}_{3}-V_{3}^{*})\|_{2}^{2} \\
&\leq& \|\hat{V}_{1}-V_{1}^{*}\|_{2}^{2}+\|\hat{V}_{2}-V_{2}^{*}\|_{2}^{2}+\|\hat{V}_{3}-V_{3}^{*}\|_{2}^{2}. \end{eqnarray*}
Next we bound the expectation of $\|\hat{V}_{l}-V_{l}^{*}\|_{2}^{2}$, $l=1,2,3$, respectively.
For $\mathbb{E}\|\hat{V}_{1}-V_{1}^{*}\|_{2}^{2}$, we note that, for any $ x\in\mathcal{C}_{1}$, $\langle V_{1}-\hat{V}_{1}, x-\hat{V}_{1} \rangle \leq 0$. Setting $x=V_{1}^{*}$, then \begin{eqnarray*}
\|\hat{V}_{1}-V_{1}^{*}\|_{2}^{2} &=& \langle \hat{V}_{1}-V_{1}^{*},\hat{V}_{1}-V_{1}^{*} \rangle \\
&=& \langle \hat{V}_{1}-V_{1},\hat{V}_{1}-V_{1}^{*} \rangle + \langle V_{1}-V_{1}^{*},\hat{V}_{1}-V_{1}^{*} \rangle \\
&\leq& \langle V_{1}-V_{1}^{*},\hat{V}_{1}-V_{1}^{*} \rangle. \end{eqnarray*} By Definition~\ref{def:prediction}, we have, \begin{eqnarray*}
\hat{V}_{1}-V_{1}^{*} &=& X\hat{\boldsymbol{\beta}}\hat{\boldsymbol{\theta}}-X\boldsymbol{\beta}^{*}\boldsymbol{\theta}^{*}=X(\hat{\boldsymbol{\beta}}\hat{\boldsymbol{\theta}}-\boldsymbol{\beta}^{*}\boldsymbol{\theta}^{*}), \\
V_{1}-V_{1}^{*} &=& (X\boldsymbol{\beta}^{*}\boldsymbol{\theta}^{*}+\boldsymbol{\epsilon}\boldsymbol{\theta}^{*})-X\boldsymbol{\beta}^{*}\boldsymbol{\theta}^{*}=\boldsymbol{\epsilon}\boldsymbol{\theta}^{*}. \end{eqnarray*} Therefore, \begin{eqnarray*}
\|\hat{V}_{1}-V_{1}^{*}\|_{2}^{2} \leq \sum_{i=1}^{n}\left(\boldsymbol{\epsilon}_{i}\boldsymbol{\theta}^{*}\right)\left\{X_{i}(\hat{\boldsymbol{\beta}}\hat{\boldsymbol{\theta}}-\boldsymbol{\beta}^{*}\boldsymbol{\theta}^{*})\right\}=(\hat{\boldsymbol{\beta}}\hat{\boldsymbol{\theta}}-\boldsymbol{\beta}^{*}\boldsymbol{\theta}^{*})\sum_{j=1}^{p_{1}}\theta_{j}^{*}\left(\sum_{i=1}^{n}\epsilon_{ij}X_{i}\right). \end{eqnarray*} Let $Q_{1j}=\sum_{i=1}^{n}\epsilon_{ij}X_{i}$. By \citet[Lemma 3]{chatterjee2013assumptionless}, we have, \begin{eqnarray*}
Q_{1j}\sim\mathcal{N}\left(0,\sigma_{1j}^{2}\sum_{i=1}^{n}X_{i}^{2}\right), \quad \text{and} \quad \mathbb{E}\left(\max_{1\leq j\leq p_{1}}|Q_{1j}|\right)\leq \sqrt{\sum_{i=1}^{n}X_{i}^{2}}\left(\max_{j}\sigma_{1j}\right)\sqrt{2\log(2p_{1})}, \end{eqnarray*}
where $\mathrm{diag}(\boldsymbol{\Sigma}_{1})=\{\sigma_{11}^{2},\dots,\sigma_{1p_{1}}^{2}\}$. Under the conditions $|X_{i}|\leq c_{0}$ and $\max_{j}\sigma_{1j}\leq c_{4}$, we have \[
\mathbb{E}\left(\max_{1\leq j\leq p_{1}}|Q_{1j}|\right)\leq c_{0}c_{4} \sqrt{2n\log(2p_{1})}. \] In addition, since $P_{11}(\hat{\boldsymbol{\beta}},\hat{\boldsymbol{\theta}})\leq c_{11}$ and $P_{11}(\boldsymbol{\beta}^{*},\boldsymbol{\theta}^{*})\leq c_{11}$, we have \[ |\hat{\boldsymbol{\beta}}\hat{\boldsymbol{\theta}}-\boldsymbol{\beta}^{*}\boldsymbol{\theta}^{*}|\leq P_{11}(\hat{\boldsymbol{\beta}},\hat{\boldsymbol{\theta}})+P_{11}(\boldsymbol{\beta}^{*},\boldsymbol{\theta}^{*}) \leq 2c_{11}. \]
As $|\theta_{j}^{*}|\leq c_{01}$, we have, \[
\mathbb{E}\|\hat{V}_{1}-V_{1}^{*}\|_{2}^{2}\leq 2c_{11}s_{2}c_{01}c_{0}c_{4}\sqrt{2n\log(2p_{1})}. \]
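As an informal numerical check of the Gaussian maximal inequality used in this step, the following Monte Carlo sketch (sample sizes and constants are illustrative, not from the paper) verifies that the empirical mean of $\max_{j}|Q_{1j}|$ stays below $c_{0}c_{4}\sqrt{2n\log(2p_{1})}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p1, c0, sigma = 200, 50, 1.0, 1.0     # illustrative sizes; c0 bounds |X_i|

# X_i in {-1, 1}, so |X_i| <= c0; epsilon_{ij} ~ N(0, sigma^2) i.i.d.
X = rng.choice([-1.0, 1.0], size=n)
max_abs_Q = []
for _ in range(2000):
    eps = rng.normal(0.0, sigma, size=(n, p1))
    Q = eps.T @ X                         # Q_{1j} = sum_i epsilon_{ij} X_i
    max_abs_Q.append(np.abs(Q).max())

# theoretical bound c0 * c4 * sqrt(2 n log(2 p1)), with c4 = sigma here
bound = c0 * sigma * np.sqrt(2 * n * np.log(2 * p1))
assert np.mean(max_abs_Q) <= bound
```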
For $\mathbb{E}\|\hat{V}_{2}-V_{2}^{*}\|_{2}^{2}$, we can bound it in an analogous way. That is, denoting $\mathrm{diag}(\boldsymbol{\Sigma}_{2})=\{\sigma_{21}^{2},\dots,\sigma_{2p_{2}}^{2}\}$, with $\max_{k}\sigma_{2k}\leq c_{4}$ and $|\pi_{k}^{*}|\leq c_{02}$, we have, \begin{eqnarray*}
\|\hat{V}_{2}-V_{2}^{*}\|_{2}^{2} & \leq & \langle V_{2}-V_{2}^{*},\hat{V}_{2}-V_{2}^{*} \rangle = (\hat{\boldsymbol{\zeta}}\hat{\boldsymbol{\pi}}-\boldsymbol{\zeta}^{*}\boldsymbol{\pi}^{*})\sum_{k=1}^{p_{2}}\pi_{k}^{*}\left(\sum_{i=1}^{n}\vartheta_{ik}X_{i}\right), \\
\mathbb{E}\|\hat{V}_{2}-V_{2}^{*}\|_{2}^{2} & \leq & 2c_{12}s_{4}c_{02}c_{0}c_{4}\sqrt{2n\log(2p_{2})}. \end{eqnarray*}
For $\mathbb{E}\|\hat{V}_{3}-V_{3}^{*}\|_{2}^{2}$, we have, \begin{eqnarray*}
\|\hat{V}_{3}-V_{3}^{*}\|_{2}^{2} &\leq& \langle V_{3}-V_{3}^{*},\hat{V}_{3}-V_{3}^{*} \rangle \\
&=& (\hat{\boldsymbol{\beta}}\hat{\boldsymbol{\Lambda}}\hat{\boldsymbol{\pi}}-\boldsymbol{\beta}^{*}\boldsymbol{\Lambda}^{*}\boldsymbol{\pi}^{*})\sum_{i=1}^{n}(\xi_{i}X_{i})+(\hat{\boldsymbol{\beta}}\hat{\boldsymbol{\Lambda}}\hat{\boldsymbol{\pi}}-\boldsymbol{\beta}^{*}\boldsymbol{\Lambda}^{*}\boldsymbol{\pi}^{*})\sum_{j=1}^{p_{1}}\varsigma_{j}^{*}\left(\sum_{i=1}^{n}\epsilon_{ij}X_{i}\right). \end{eqnarray*} Let $\tilde{Q}=\sum_{i=1}^{n}\xi_{i}X_{i}$. By \citet[Lemma 3]{chatterjee2013assumptionless}, we have, \[
\tilde{Q}\sim\mathcal{N}\left(0,\sigma^{2}\sum_{i=1}^{n}X_{i}^{2}\right), \quad \text{and} \quad \mathbb{E}|\tilde{Q}|\leq \sqrt{\sum_{i=1}^{n}X_{i}^{2}}\,\sigma\sqrt{2\log(2)}\leq c_{0}c_{4}\sqrt{2n\log(2p_{1})}. \]
Under the condition that $|\varsigma_{j}^{*}|\leq c_{03}$, and recalling the bound on $\mathbb{E}(\max_{1\leq j\leq p_{1}}|Q_{1j}|)$ above, we have \[
\mathbb{E}\|\hat{V}_{3}-V_{3}^{*}\|_{2}^{2}\leq 2c_{3}\sqrt{c_{11}c_{12}/\nu_{1}\nu_{2}}(1+s_{5}c_{03})c_{0}c_{4}\sqrt{2n\log(2p_{1})}, \] where the factor $2c_{3}\sqrt{c_{11}c_{12}/\nu_{1}\nu_{2}}$ bounds $|\hat{\boldsymbol{\beta}}\hat{\boldsymbol{\Lambda}}\hat{\boldsymbol{\pi}}-\boldsymbol{\beta}^{*}\boldsymbol{\Lambda}^{*}\boldsymbol{\pi}^{*}|$ by Lemma~\ref{lemma:convex}.
Putting the above three bounds together, we have, \begin{eqnarray*} && \mathbb{E}\left(\mathrm{MSPE}\right) = \mathbb{E}\left\{\frac{1}{n}\sum_{i=1}^{n}(\hat{V}-V^{*})^2\right\} \\
&\leq& 2c_{0}c_{4}\left\{c_{11}s_{2}c_{01}\sqrt{\frac{2\log(2p_{1})}{n}}+c_{12}s_{4}c_{02}\sqrt{\frac{2\log(2p_{2})}{n}}+c_{3}\sqrt{c_{11}c_{12}/\nu_{1}\nu_{2}}(1+s_{5}c_{03})\sqrt{\frac{2\log(2p_{1})}{n}}\right\}. \end{eqnarray*}
Next, to establish the convergence of the pathway effect estimators in Theorem~\ref{thm:MSIPE_est}, we study the expectation of $(\hat{V}_{l}-V_{l}^{*})^{2}$, $l = 1, 2, 3$, respectively.
For $\mathbb{E}(\hat{V}_{1}-V_{1}^{*})^{2}$, we have, \begin{eqnarray*} \mathbb{E}\left(\hat{V}_{1}-V_{1}^{*}\right)^{2} & = & \mathbb{E}\left\{X(\hat{\boldsymbol{\beta}}\hat{\boldsymbol{\theta}}-\boldsymbol{\beta}^{*}\boldsymbol{\theta}^{*})\right\}^{2}=\mathbb{E}X^{2}\left(\hat{\boldsymbol{\beta}}\hat{\boldsymbol{\theta}}-\boldsymbol{\beta}^{*}\boldsymbol{\theta}^{*}\right)^{2}, \\
\frac{1}{n}\|\hat{V}_{1}-V_{1}^{*}\|_{2}^{2} & = & \frac{1}{n}\sum_{i=1}^{n}\left\{X_{i}(\hat{\boldsymbol{\beta}}\hat{\boldsymbol{\theta}}-\boldsymbol{\beta}^{*}\boldsymbol{\theta}^{*})\right\}^{2}=\left(\frac{1}{n}\sum_{i=1}^{n}X_{i}^{2}\right)\left(\hat{\boldsymbol{\beta}}\hat{\boldsymbol{\theta}}-\boldsymbol{\beta}^{*}\boldsymbol{\theta}^{*}\right)^{2}. \end{eqnarray*} Therefore, \begin{eqnarray*}
\mathbb{E}\left(\hat{V}_{1}-V_{1}^{*}\right)^{2}-\frac{1}{n}\|\hat{V}_{1}-V_{1}^{*}\|_{2}^{2}=\left(\mathbb{E}X^{2}-\frac{1}{n}\sum_{i=1}^{n}X_{i}^{2}\right)\left(\hat{\boldsymbol{\beta}}\hat{\boldsymbol{\theta}}-\boldsymbol{\beta}^{*}\boldsymbol{\theta}^{*}\right)^{2}. \end{eqnarray*}
Let $Z_{i}=\mathbb{E}X^{2}-X_{i}^{2}$. Since $|X_{i}|\leq c_{0}$, $|Z_{i}|\leq 2c_{0}^{2}$, together with $\mathbb{E}X_{i}=0$, by \citet[Lemma 5]{chatterjee2013assumptionless}, we have \[
\mathbb{E}\left(e^{t\sum_{i=1}^{n}Z_{i}}\right)\leq e^{t^{2}n4c_{0}^{2}/2}. \] By \citet[Lemma 4]{chatterjee2013assumptionless}, \[
\mathbb{E}\left(|\sum_{i=1}^{n}Z_{i}|\right)\leq 2c_{0}^{2}\sqrt{2n\log(2)}, \quad \text{and} \quad \mathbb{E}\left(|\frac{1}{n}\sum_{i=1}^{n}Z_{i}|\right)\leq 2c_{0}^{2}\sqrt{\frac{2\log(2)}{n}}. \] Under condition (C3-1), $(\hat{\boldsymbol{\beta}}\hat{\boldsymbol{\theta}}-\boldsymbol{\beta}^{*}\boldsymbol{\theta}^{*})\leq 2c_{11}$. Then \[
\mathbb{E}\left(\hat{V}_{1}-V_{1}^{*}\right)^{2}-\frac{1}{n}\|\hat{V}_{1}-V_{1}^{*}\|_{2}^{2}\leq 8c_{11}^{2}c_{0}^{2}\sqrt{\frac{2\log(2)}{n}}, \] which implies that \[ \mathbb{E}\left(\hat{V}_{1}-V_{1}^{*}\right)^{2}\leq 8c_{11}^{2}c_{0}^{2}\sqrt{\frac{2\log(2)}{n}}+2c_{11}s_{2}c_{01}c_{0}c_{4}\sqrt{\frac{2\log(2p_{1})}{n}}. \]
For $\mathbb{E}(\hat{V}_{2}-V_{2}^{*})^{2}$, we have, analogously, \[ \mathbb{E}\left(\hat{V}_{2}-V_{2}^{*}\right)^{2}\leq 8c_{12}^{2}c_{0}^{2}\sqrt{\frac{2\log(2)}{n}}+2c_{12}s_{4}c_{02}c_{0}c_{4}\sqrt{\frac{2\log(2p_{2})}{n}}. \]
For $\mathbb{E}(\hat{V}_{3}-V_{3}^{*})^{2}$, similarly, \[
\mathbb{E}\left(\hat{V}_{3}-V_{3}^{*}\right)^{2}\leq 8c_{3}^{2}(c_{11}c_{12}/\nu_{1}\nu_{2})c_{0}^{2}\sqrt{\frac{2\log(2)}{n}}+2c_{3}\sqrt{c_{11}c_{12}/\nu_{1}\nu_{2}}(1+s_{5}c_{03})c_{0}c_{4}\sqrt{\frac{2\log(2p_{1})}{n}}.
\]
Combining these three bounds with the lower bound $c_{5}$ on $\mathbb{E}X^{2}$ implied by condition (C1), the convergence of the pathway effect estimators in Theorem~\ref{thm:MSIPE_est} (i) follows.
(ii) We next consider the case with penalties $P_{1}$, $P_{2}$ and $P_{3}$. The proof is similar to the case with penalties $P_{1}$ and $P_{3}$.
Specifically, let \begin{eqnarray*}
\tilde{\mathcal{C}}_{1} & = & \left\{X\boldsymbol{\beta\theta}:P_{11}(\boldsymbol{\beta},\boldsymbol{\theta})\leq c_{11},P_{21}(\boldsymbol{\beta},\boldsymbol{\theta})\leq c_{21}\right\}, \quad \text{where} \;\; P_{21}(\boldsymbol{\beta},\boldsymbol{\theta})=\sum_{j=1}^{p_{1}}\left(|\beta_{j}|+|\theta_{j}|\right), \\
\tilde{\mathcal{C}}_{2} & = & \left\{X\boldsymbol{\zeta\pi}:P_{12}(\boldsymbol{\zeta},\boldsymbol{\pi})\leq c_{12},P_{22}(\boldsymbol{\zeta},\boldsymbol{\pi})\leq c_{22}\right\}, \quad \text{where} \;\; P_{22}(\boldsymbol{\zeta},\boldsymbol{\pi})=\sum_{k=1}^{p_{2}}\left(|\zeta_{k}|+|\pi_{k}|\right), \\ \tilde{\mathcal{C}}_{3} & = & \left\{X\boldsymbol{\beta\Lambda\pi}:P_{11}(\boldsymbol{\beta},\boldsymbol{\theta})\leq c_{11},P_{21}(\boldsymbol{\beta},\boldsymbol{\theta})\leq c_{21},P_{12}(\boldsymbol{\zeta},\boldsymbol{\pi})\leq c_{12},P_{22}(\boldsymbol{\zeta},\boldsymbol{\pi})\leq c_{22},P_{3}(\boldsymbol{\Lambda})\leq c_{3}\right\}, \\ \tilde{\mathcal{C}} & = & \left\{V_{1}+V_{2}+V_{3}:V_{1}\in\tilde{\mathcal{C}}_{1},V_{2}\in\tilde{\mathcal{C}}_{2},V_{3}\in\tilde{\mathcal{C}}_{3}\right\}. \end{eqnarray*} Following similar arguments as case (i), we have, \begin{eqnarray*}
\mathbb{E}\|\hat{V}_{1}-V_{1}^{*}\|_{2}^{2} &\leq& 2c_{11}c_{21}c_{0}c_{4}\sqrt{2n\log(2p_{1})}, \\
\mathbb{E}\|\hat{V}_{2}-V_{2}^{*}\|_{2}^{2} &\leq& 2c_{12}c_{22}c_{0}c_{4}\sqrt{2n\log(2p_{2})}. \end{eqnarray*} Moreover, based on the fact that, \begin{eqnarray*}
\sum_{j=1}^{p_{1}}|\varsigma_{j}^{*}|=\sum_{j=1}^{p_{1}}\left|\sum_{k=1}^{p_{2}}\lambda_{jk}^{*}\pi_{k}^{*}\right|\leq\left(\sum_{j=1}^{p_{1}}\sum_{k=1}^{p_{2}}|\lambda_{jk}^{*}|\right)\left(\sum_{k=1}^{p_{2}}|\pi_{k}^{*}|\right)\leq c_{3}c_{22}, \end{eqnarray*} we have \[
\mathbb{E}\|\hat{V}_{3}-V_{3}^{*}\|_{2}^{2}\leq 2c_{3}\sqrt{c_{11}c_{12}/\nu_{1}\nu_{2}}(1+c_{3}c_{22})c_{0}c_{4}\sqrt{2n\log(2p_{1})}. \] Putting the above three bounds together, we have, \begin{eqnarray*}
&& \mathbb{E}\left(\mathrm{MSPE}\right) \\
&\leq& 2c_{0}c_{4}\left\{c_{11}c_{21}\sqrt{\frac{2\log(2p_{1})}{n}}+c_{12}c_{22}\sqrt{\frac{2\log(2p_{2})}{n}}+c_{3}\sqrt{c_{11}c_{12}/\nu_{1}\nu_{2}}(1+c_{3}c_{22})\sqrt{\frac{2\log(2p_{1})}{n}}\right\}. \end{eqnarray*}
Next, following the same arguments as in case (i), we have,
\begin{eqnarray*}
\mathbb{E}\left(\hat{V}_{1}-V_{1}^{*}\right)^{2} &\leq& 8c_{11}^{2}c_{0}^{2}\sqrt{\frac{2\log(2)}{n}}+2c_{11}c_{21}c_{0}c_{4}\sqrt{\frac{2\log(2p_{1})}{n}}, \\
\mathbb{E}\left(\hat{V}_{2}-V_{2}^{*}\right)^{2} &\leq& 8c_{12}^{2}c_{0}^{2}\sqrt{\frac{2\log(2)}{n}}+2c_{12}c_{22}c_{0}c_{4}\sqrt{\frac{2\log(2p_{2})}{n}}, \\ \mathbb{E}\left(\hat{V}_{3}-V_{3}^{*}\right)^{2} & \leq & 8c_{3}^{2}(c_{11}c_{12}/\nu_{1}\nu_{2})c_{0}^{2}\sqrt{\frac{2\log(2)}{n}} \\ & & +2c_{3}\sqrt{c_{11}c_{12}/\nu_{1}\nu_{2}}(1+c_{3}c_{22})c_{0}c_{4}\sqrt{\frac{2\log(2p_{1})}{n}}. \end{eqnarray*} Then the convergence of the pathway effect estimators in Theorem~\ref{thm:MSIPE_est} (ii) follows. \end{proof}
\section{Optimization} \label{appendix:sec:ADMM}
We derive here the explicit forms of the estimators in Algorithm~\ref{alg:ADMM}. Consider the augmented Lagrangian function, \begin{eqnarray*}
\mathcal{L}(\boldsymbol{\Upsilon},\boldsymbol{\Lambda},\delta,\tilde{\boldsymbol{\Upsilon}}) = \frac{1}{2}\ell(\boldsymbol{\Upsilon},\boldsymbol{\Lambda},\delta)+ P_{1}(\tilde{\boldsymbol{\Upsilon}})+ P_{2}(\tilde{\boldsymbol{\Upsilon}})+ P_{3}(\boldsymbol{\Lambda},\delta)+\sum_{r=1}^{4}\left( \langle h_{r}(\boldsymbol{\Upsilon},\tilde{\boldsymbol{\Upsilon}}),\boldsymbol{\tau}_{r}\rangle+\frac{\rho}{2}\| h_{r}(\boldsymbol{\Upsilon},\tilde{\boldsymbol{\Upsilon}})\|_{2}^{2}\right), \end{eqnarray*} where $\boldsymbol{\Upsilon}=(\boldsymbol{\beta},\boldsymbol{\theta},\boldsymbol{\zeta},\boldsymbol{\pi})$, $\tilde{\boldsymbol{\Upsilon}}=(\tilde{\boldsymbol{\beta}},\tilde{\boldsymbol{\theta}},\tilde{\boldsymbol{\zeta}},\tilde{\boldsymbol{\pi}})$, $h_{1}(\boldsymbol{\Upsilon},\tilde{\boldsymbol{\Upsilon}}) = \boldsymbol{\beta}-\tilde{\boldsymbol{\beta}}$, $h_{2}(\boldsymbol{\Upsilon},\tilde{\boldsymbol{\Upsilon}}) = \boldsymbol{\theta}-\tilde{\boldsymbol{\theta}}$, $h_{3}(\boldsymbol{\Upsilon},\tilde{\boldsymbol{\Upsilon}}) = \boldsymbol{\zeta}-\tilde{\boldsymbol{\zeta}}$, $h_{4}(\boldsymbol{\Upsilon},\tilde{\boldsymbol{\Upsilon}}) = \boldsymbol{\pi}-\tilde{\boldsymbol{\pi}}$, $\boldsymbol{\tau}_{1},\boldsymbol{\tau}_{2}\in\mathbb{R}^{p_{1}}$, and $\boldsymbol{\tau}_{3},\boldsymbol{\tau}_{4}\in\mathbb{R}^{p_{2}}$.
For $\boldsymbol{\Upsilon}=(\boldsymbol{\beta},\boldsymbol{\theta},\boldsymbol{\zeta},\boldsymbol{\pi})$, letting $\mathbf{W}_{1}=\boldsymbol{\mathrm{I}}_{p_{1}}$, $\mathbf{W}_{2}=\boldsymbol{\mathrm{I}}_{p_{2}}$ and $w=1$, we have, \begin{eqnarray*}
\frac{\partial\mathcal{L}}{\partial\boldsymbol{\beta}} &=& -\mathbf{X}^\top(\mathbf{M}_{1}-\mathbf{X}\boldsymbol{\beta})+\boldsymbol{\tau}_{1}^\top+\rho(\boldsymbol{\beta}-\tilde{\boldsymbol{\beta}}), \\
\frac{\partial\mathcal{L}}{\partial\boldsymbol{\theta}} &=& -\mathbf{M}_{1}^\top(\mathbf{Y}-\mathbf{X}\delta-\mathbf{M}_{1}\boldsymbol{\theta}-\mathbf{M}_{2}\boldsymbol\pi)+\boldsymbol{\tau}_{2}+\rho(\boldsymbol{\theta}-\tilde{\boldsymbol{\theta}}), \\
\frac{\partial\mathcal{L}}{\partial\boldsymbol{\zeta}} &=& -\mathbf{X}^\top(\mathbf{M}_{2}-\mathbf{X}\boldsymbol{\zeta}-\mathbf{M}_{1}\boldsymbol{\Lambda})+\boldsymbol{\tau}_{3}^\top+\rho(\boldsymbol{\zeta}-\tilde{\boldsymbol{\zeta}}), \\
\frac{\partial\mathcal{L}}{\partial\boldsymbol{\pi}} &=& -\mathbf{M}_{2}^\top(\mathbf{Y}-\mathbf{X}\delta-\mathbf{M}_{1}\boldsymbol{\theta}-\mathbf{M}_{2}\boldsymbol{\pi})+\boldsymbol{\tau}_{4}+\rho(\boldsymbol{\pi}-\tilde{\boldsymbol{\pi}}). \end{eqnarray*} Therefore, we have \begin{eqnarray*}
\boldsymbol{\beta} &=& (\mathbf{X}^\top\mathbf{X}+\rho)^{-1}(\mathbf{X}^\top\mathbf{M}_{1}-\boldsymbol{\tau}_{1}^\top+\rho\tilde{\boldsymbol{\beta}}) \\
\boldsymbol{\theta} &=& (\mathbf{M}_{1}^\top\mathbf{M}_{1}+\rho\boldsymbol{\mathrm{I}})^{-1}\left\{\mathbf{M}_{1}^\top(\mathbf{Y}-\mathbf{X}\delta-\mathbf{M}_{2}\boldsymbol{\pi})-\boldsymbol{\tau}_{2}+\rho\tilde{\boldsymbol{\theta}}\right\} \\
\boldsymbol{\zeta} &=&(\mathbf{X}^\top\mathbf{X}+\rho)^{-1}\left\{\mathbf{X}^\top(\mathbf{M}_{2}-\mathbf{M}_{1}\boldsymbol{\Lambda})-\boldsymbol{\tau}_{3}^\top+\rho\tilde{\boldsymbol{\zeta}}\right\} \\
\boldsymbol{\pi} &=& (\mathbf{M}_{2}^\top\mathbf{M}_{2}+\rho\boldsymbol{\mathrm{I}})^{-1}\left\{\mathbf{M}_{2}^\top(\mathbf{Y}-\mathbf{X}\delta-\mathbf{M}_{1}\boldsymbol{\theta})-\boldsymbol{\tau}_{4}+\rho\tilde{\boldsymbol{\pi}}\right\} \end{eqnarray*}
For $\delta$, we have \begin{eqnarray*} \frac{\partial\mathcal{L}}{\partial\delta}=-\mathbf{X}^\top(\mathbf{Y}-\mathbf{X}\delta-\mathbf{M}_{1}\boldsymbol{\theta}-\mathbf{M}_{2}\boldsymbol{\pi})+\kappa_{4}\operatorname{sgn}{\delta}. \end{eqnarray*} Therefore, we have \begin{eqnarray*} \quad \delta=\frac{1}{\mathbf{X}^\top\mathbf{X}}\mathcal{S}_{\kappa_{4}}\left\{\mathbf{X}^\top(\mathbf{Y}-\mathbf{M}_{1}\boldsymbol{\theta}-\mathbf{M}_{2}\boldsymbol{\pi})\right\}, \end{eqnarray*}
where $\mathcal{S}_{\kappa}(\mu)=\max\{|\mu|-\kappa,0\}\operatorname{sgn}{\mu}$ is the soft-thresholding function.
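The soft-thresholding step above can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' implementation; all variable names are illustrative and $\mathbf{X}$ is treated as an $n$-vector of exposures, so $\mathbf{X}^\top\mathbf{X}$ is a scalar.

```python
import numpy as np

def soft_threshold(mu, kappa):
    """Soft-thresholding operator: S_kappa(mu) = max(|mu| - kappa, 0) * sgn(mu)."""
    return np.maximum(np.abs(mu) - kappa, 0.0) * np.sign(mu)

def delta_update(X, Y, M1, M2, theta, pi, kappa4):
    """Closed-form delta update: soft-threshold X'(Y - M1 theta - M2 pi), scale by 1/(X'X)."""
    r = Y - M1 @ theta - M2 @ pi
    return soft_threshold(X @ r, kappa4) / (X @ X)
```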
For $\boldsymbol{\Lambda}$, it is equivalent to $p_{2}$ standard lasso problems. That is, for $k=1,\dots,p_{2}$, we seek to \begin{eqnarray*}
\text{minimize} \quad \frac{1}{2}\|\mathbf{M}_{2k}-\mathbf{X}\zeta_{k}-\mathbf{M}_{1}\boldsymbol{\Lambda}_{k}\|_{2}^{2}+\kappa_{3}\|\boldsymbol{\Lambda}_{k}\|_{1}. \end{eqnarray*} This is equivalent to a lasso problem with $(\mathbf{M}_{2k}-\mathbf{X}\zeta_{k})$ as the ``outcome'' and $\mathbf{M}_{1}$ as the ``predictor''.
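Any standard lasso solver can be used for each column $\boldsymbol{\Lambda}_{k}$. As one hedged illustration (ISTA, i.e., proximal gradient descent — our illustrative choice, not necessarily the solver used in the algorithm):

```python
import numpy as np

def lasso_ista(A, b, kappa, n_iter=500):
    """Minimize 0.5*||b - A x||_2^2 + kappa*||x||_1 by proximal gradient (ISTA)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient A'(Ax - b)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - b) / L      # gradient step on the smooth part
        x = np.maximum(np.abs(z) - kappa / L, 0.0) * np.sign(z)  # prox of kappa*||.||_1
    return x

# Column k of Lambda would then be (names illustrative):
#   Lambda_k = lasso_ista(M1, M2[:, k] - X * zeta[k], kappa3)
```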
For $\tilde{\boldsymbol{\beta}}$ and $\tilde{\boldsymbol{\theta}}$, we seek to minimize the function, \begin{eqnarray*}
\kappa_{1}\sum_{j=1}^{p_{1}}\left\{|\tilde{\beta}_{j}\tilde{\theta}_{j}|+\nu_{1}(\tilde{\beta}_{j}^{2}+\tilde{\theta}_{j}^{2})\right\}+\mu_{1}\sum_{j=1}^{p_{1}}\left(|\tilde{\beta}_{j}|+|\tilde{\theta}_{j}|\right)+\boldsymbol{\tau}_{1}(\boldsymbol{\beta}-\tilde{\boldsymbol{\beta}})+\frac{\rho}{2}\|\boldsymbol{\beta}-\tilde{\boldsymbol{\beta}}\|_{2}^{2}+\boldsymbol{\tau}_{2}^\top(\boldsymbol{\theta}-\tilde{\boldsymbol{\theta}})+\frac{\rho}{2}\|\boldsymbol{\theta}-\tilde{\boldsymbol{\theta}}\|_{2}^{2}, \end{eqnarray*} which can be minimized one element at a time. That is, for $j=1,\dots,p_{1}$, we minimize \begin{eqnarray*}
\kappa_{1}\left\{|\tilde{\beta}_{j}\tilde{\theta}_{j}|+\nu_{1}(\tilde{\beta}_{j}^{2}+\tilde{\theta}_{j}^{2})\right\}+\mu_{1}\left(|\tilde{\beta}_{j}|+|\tilde{\theta}_{j}|\right)+\tau_{1j}(\beta_{j}-\tilde{\beta}_{j})+\frac{\rho}{2}(\beta_{j}-\tilde{\beta}_{j})^{2}+\tau_{2j}(\theta_{j}-\tilde{\theta}_{j})+\frac{\rho}{2}(\theta_{j}-\tilde{\theta}_{j})^{2}, \end{eqnarray*} which is equivalent to minimizing \begin{eqnarray*}
\kappa_{1}|\tilde{\beta}_{j}\tilde{\theta}_{j}|+\mu_{1}|\tilde{\beta}_{j}|+\mu_{1}|\tilde{\theta}_{j}|+\frac{1}{2}(2\kappa_{1}\nu_{1}+\rho)\tilde{\beta}_{j}^{2}+\frac{1}{2}(2\kappa_{1}\nu_{1}+\rho)\tilde{\theta}_{j}^{2}-(\tau_{1j}+\rho\beta_{j})\tilde{\beta}_{j}-(\tau_{2j}+\rho\theta_{j})\tilde{\theta}_{j}. \end{eqnarray*} This can be solved by \citet[Lemma 3.2]{zhao2016pathway}.
For $\tilde{\boldsymbol{\zeta}}$ and $\tilde{\boldsymbol{\pi}}$, we seek to minimize the function, \begin{eqnarray*}
\kappa_{2}\sum_{k=1}^{p_{2}}\left\{|\tilde{\zeta}_{k}\tilde{\pi}_{k}|+\nu_{2}(\tilde{\zeta}_{k}^{2}+\tilde{\pi}_{k}^{2})\right\}+\mu_{2}\sum_{k=1}^{p_{2}}\left(|\tilde{\zeta}_{k}|+|\tilde{\pi}_{k}|\right)+\boldsymbol{\tau}_{3}(\boldsymbol{\zeta}-\tilde{\boldsymbol{\zeta}})+\frac{\rho}{2}\|\boldsymbol{\zeta}-\tilde{\boldsymbol{\zeta}}\|_{2}^{2}+\boldsymbol{\tau}_{4}^\top(\boldsymbol{\pi}-\tilde{\boldsymbol{\pi}})+\frac{\rho}{2}\|\boldsymbol{\pi}-\tilde{\boldsymbol{\pi}}\|_{2}^{2}, \end{eqnarray*} which can again be minimized one element at a time. That is, for $k=1,\dots,p_{2}$, we minimize \begin{eqnarray*}
\kappa_{2}\left\{|\tilde{\zeta}_{k}\tilde{\pi}_{k}|+\nu_{2}(\tilde{\zeta}_{k}^{2}+\tilde{\pi}_{k}^{2})\right\}+\mu_{2}\left(|\tilde{\zeta}_{k}|+|\tilde{\pi}_{k}|\right)+\tau_{3k}(\zeta_{k}-\tilde{\zeta}_{k})+\frac{\rho}{2}(\zeta_{k}-\tilde{\zeta}_{k})^{2}+\tau_{4k}(\pi_{k}-\tilde{\pi}_{k})+\frac{\rho}{2}(\pi_{k}-\tilde{\pi}_{k})^{2}, \end{eqnarray*} which is equivalent to minimizing \begin{eqnarray*}
\kappa_{2}|\tilde{\zeta}_{k}\tilde{\pi}_{k}|+\mu_{2}|\tilde{\zeta}_{k}|+\mu_{2}|\tilde{\pi}_{k}|+\frac{1}{2}(2\kappa_{2}\nu_{2}+\rho)\tilde{\zeta}_{k}^{2}+\frac{1}{2}(2\kappa_{2}\nu_{2}+\rho)\tilde{\pi}_{k}^{2}-(\tau_{3k}+\rho\zeta_{k})\tilde{\zeta}_{k}-(\tau_{4k}+\rho\pi_{k})\tilde{\pi}_{k}. \end{eqnarray*} This can again be solved by \citet[Lemma 3.2]{zhao2016pathway}.
Finally, for $\boldsymbol{\tau}_{1},\boldsymbol{\tau}_{2},\boldsymbol{\tau}_{3},\boldsymbol{\tau}_{4}$, we have, \begin{eqnarray*} \boldsymbol{\tau}_{1}^{(s+1)} &=& \boldsymbol{\tau}_{1}^{(s)}+\rho(\boldsymbol{\beta}-\tilde{\boldsymbol{\beta}})^\top, \\ \boldsymbol{\tau}_{2}^{(s+1)} &=& \boldsymbol{\tau}_{2}^{(s)}+\rho(\boldsymbol{\theta}-\tilde{\boldsymbol{\theta}}), \\ \boldsymbol{\tau}_{3}^{(s+1)} &=& \boldsymbol{\tau}_{3}^{(s)}+\rho(\boldsymbol{\zeta}-\tilde{\boldsymbol{\zeta}})^\top, \\ \boldsymbol{\tau}_{4}^{(s+1)} &=& \boldsymbol{\tau}_{4}^{(s)}+\rho(\boldsymbol{\pi}-\tilde{\boldsymbol{\pi}}). \end{eqnarray*}
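The closed-form primal updates and the dual ascent step above can be sketched as follows. This is a hedged NumPy sketch under the setting $\mathbf{W}_{1}=\boldsymbol{\mathrm{I}}_{p_{1}}$, $\mathbf{W}_{2}=\boldsymbol{\mathrm{I}}_{p_{2}}$, $w=1$, with $\mathbf{X}$ an $n$-vector (so $\mathbf{X}^\top\mathbf{X}$ is a scalar); only the $\boldsymbol{\beta}$ update is shown, the others being analogous.

```python
import numpy as np

def beta_update(X, M1, beta_tilde, tau1, rho):
    """Primal update: beta = (X'X + rho)^{-1} (X'M1 - tau1 + rho * beta_tilde).
    X: (n,) exposure; M1: (n, p1) mediators; names illustrative."""
    return (X @ M1 - tau1 + rho * beta_tilde) / (X @ X + rho)

def dual_update(tau, primal, primal_tilde, rho):
    """Dual ascent step: tau <- tau + rho * (primal - primal_tilde)."""
    return tau + rho * (primal - primal_tilde)
```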
\section{Simulations} \label{appendix:sec:sim}
\subsection{Simulation setting} Figure~\ref{appendix:fig:sim_setting} presents the generative scheme for the simulated data when $p_{1}=20$ and $p_{2}=30$.
\begin{figure}
\caption{The generative scheme for the simulated data when $p_{1}=20$ and $p_{2}=30$.}
\label{appendix:fig:sim_setting}
\end{figure}
\subsection{Additional results when $n=500$}
Figure~\ref{appendix:fig:sim_n500} and Table~\ref{appendix:table:sim_n500_BIC} present the average simulation results based on 200 data replications with $n=500$ and $p_{1}=p_{2}=100$. We see that, as the sample size increases, the performance of all methods improves and converges toward the truth.
\begin{figure}
\caption{The estimate and the mean squared error of the total indirect effect, and the sensitivity and specificity of the indirect effect pathway selection, with the tuning parameter $\tilde\kappa$ selected by BIC. The sample size is $n=500$.}
\label{appendix:fig:sim_n500}
\end{figure}
\begin{table}[b!] \caption{\label{appendix:table:sim_n500_BIC}The estimate and the mean squared error of the total indirect effect, and the sensitivity and specificity of the indirect effect pathway selection, with the tuning parameter $\tilde\kappa$ selected by BIC. The sample size is $n=500$.}
\begin{center}
\begin{tabular}{l l R{2cm} R{2cm} R{2cm} R{2cm}}
\hline
& & \multicolumn{1}{c}{P2P3} & \multicolumn{1}{c}{P1P3} & \multicolumn{1}{c}{P1P2P3-1} & \multicolumn{1}{c}{P1P2P3-2} \\
\hline
& Truth & \multicolumn{4}{c}{8} \\
& Estimate & 8.077 & 8.077 & 8.077 & 8.077 \\
& MSE & 1.766 & 1.766 & 1.766 & 1.766 \\
& Sensitivity & 0.568 & 0.568 & 0.568 & 0.568 \\
\multirow{-5}{*}{$p_{1}=20, p_{2}=30$} & Specificity & 0.996 & 0.996 & 0.996 & 0.996 \\
\hline
& Truth & \multicolumn{4}{c}{8} \\
& Estimate & 8.027 & 8.027 & 8.027 & 8.027 \\
& MSE & 1.223 & 1.223 & 1.223 & 1.223 \\
& Sensitivity & 0.699 & 0.699 & 0.699 & 0.699 \\
\multirow{-5}{*}{$p_{1}=p_{2}=100$} & Specificity & 0.999 & 0.999 & 0.999 & 0.999 \\
\hline
\end{tabular}
\end{center} \end{table}
\end{document} |
\begin{document}
\mainmatter
\title{Closed-form EM for Sparse Coding\\ And its Application to Source Separation }
\titlerunning{Closed-form EM for Sparse Coding and its Application to Source Separation}
\author{J\"org L\"ucke$^{\star }$ \and Abdul-Saboor Sheikh \thanks{joint first authorship} }
\authorrunning{J\"org L\"ucke and Abdul-Saboor Sheikh}
\institute{FIAS, Goethe-University Frankfurt, 60438 Frankfurt, Germany \mailsa\\ }
\toctitle{Closed-form EM for Sparse Coding and its Application to Source Separation} \tocauthor{J\"org L\"ucke \& Abdul-Saboor Sheikh} \maketitle
\begin{abstract}
We define and discuss the first sparse coding algorithm based on
closed-form EM updates and continuous latent variables. The
underlying generative model consists of a standard
`spike-and-slab' prior and a Gaussian noise model.
Closed-form solutions for E- and M-step equations are derived by
generalizing probabilistic PCA.
The resulting EM algorithm can take all modes of a potentially
multi-modal posterior into account. The computational cost of the
algorithm scales exponentially with the number of hidden dimensions.
However, with current computational resources, it is still possible
to efficiently learn model parameters for medium-scale problems.
Thus the model can be applied to the typical range of source separation tasks.
In numerical experiments on artificial data we verify likelihood
maximization and show that the derived algorithm recovers the sparse
directions of standard sparse coding distributions. On source
separation benchmarks comprised of realistic data we show that the
algorithm is competitive with other recent methods.
\end{abstract}
\begin{textblock}{9}(0.0,5.5) {\em Preprint. Final version to appear in Proc. LVA/ICA, LNCS pp. 213-221, 2012.} \end{textblock}
\section{Introduction} \label{SecIntro} Probabilistic generative models are a standard approach to model data distributions and to infer instructive information about the data generating process.
Methods like principal component analysis, factor analysis, or sparse coding (SC) (e.g., \cite{OlshausenField1996}) have all been formulated in the form of probabilistic generative models.
Moreover, independent component analysis (ICA), which is a very popular approach to blind source separation, can also be recovered from sparse coding in the limit of zero observation noise (e.g., \cite{DayanAbbott2001}).
A standard procedure to optimize parameters in generative models is the application of Expectation Maximization (EM) (e.g., \cite{NealHinton1998}). However, for many generative models the optimization using EM is analytically intractable. For stationary data only the most elementary models such as mixture models and factor analysis (which contains probabilistic PCA as special case) have closed-form solutions for E- and M-step equations. EM for more elaborate models requires approximations. In particular, sparse coding models (\cite{OlshausenField1996,LeeEtAl2007,Seeger2008} and many more) require approximations because integrals over the latent variables do not have closed-form solutions.
In this work we study a generative model that combines the Gaussian prior of probabilistic PCA (p-PCA) with a binary prior distribution.
Distributions combining binary and continuous parts have been discussed and used as priors before (e.g., \cite{MitchellBeauchamp1988}) and are commonly referred to as `spike-and-slab' distributions. Also sparse coding variants with spike-and-slab distributions have been studied previously (compare \cite{West2003,KnowlesGhahramani2007,TehGorurGhar2007,PaisleyLawrence2009,KnowlesGhahramani2010,MohamedEtAl2010}). However, in this work we show that combining binary and Gaussian latents maintains the p-PCA property of having a closed-form solution for EM optimization. We can, therefore, derive an algorithm that uses exact posteriors with potentially many modes to update model parameters.
\begin{figure}
\caption{\small Distributions generated by the GSC generative model. The left column shows the distributions generated for $\pi_h=1$ for all $h$. In this case the model generates p-PCA distributions. The middle column shows an intermediate value of $\pi_h$. The generated distributions are not Gaussians anymore but have a slight star shape. The right column shows distributions for small values of $\pi_h$. The generated distributions have a salient star shape similar to standard sparse coding distributions.}
\label{FigGSCDistributions}
\end{figure}
\section{The Gaussian Sparse Coding (GSC) model} \label{SecGSCmodel} Let us first consider a pair of $H$--dimensional i.i.d. latent vectors, a continuous $\vec{z}${}$\in${}$\mathbbm{R}^H$ and a binary $\vec{s}${}$\in${}$\{0,1\}^H$ with:
\begin{eqnarray}
\hspace{-2mm}{}p(\vec{s}\,|\Theta) &=& \prod_{h=1}^{H}\pi_h^{s_h}\,(1-\pi_h)^{1-s_h} = \small{\mathrm{Bernoulli}}(\vec{s};\vec{\pi})\label{EqnPriorS} \mbox{\ \ and\ \ }
p(\vec{z}\,|\Theta) = {\cal N}(\vec{z};\,\vec{0},\One_{H})\label{EqnPriorZ},
\end{eqnarray}
where $\pi_h$ parameterizes the probability of non-zero entries.
After generation, both hidden vectors are combined using a pointwise multiplication operator: i.e., $(\vec{s}\odot\vec{z})_h = s_h\,z_h$ for all $h$.
The resulting hidden random variable is a vector of continuous values and zeroes, and it follows a `spike-and-slab' distribution.
Given a hidden vector (which we will denote by $\vec{s}\odot\vec{z}$), we generate a $D$--dimensional observation $\vec{y}\in\mathbbm{R}^{D}$ by linearly combining a set of basis functions $W$ and adding Gaussian noise:
\begin{eqnarray}
\hspace{-2mm}\hspace{-2mm}{}p(\vec{y}\,|\,\vec{s},\vec{z},\Theta) &=& {\cal N}(\vec{y};\,W(\vec{s}\odot\vec{z}),\Sigma), \label{EqnNoise}
\end{eqnarray}
where $W\in\mathbbm{R}^{D\times{}H}$ is the matrix containing the basis functions $\vec{W}_h$ as columns, and $\Sigma\in\mathbbm{R}^{D\times{}D}$ is a covariance matrix parameterizing the data noise.
The latents' priors \refp{EqnPriorS} together with their pointwise combination and the noise distribution \refp{EqnNoise} define the generative model under consideration. As a special case, the model contains probabilistic PCA (or factor analysis). This can easily be seen by setting all $\pi_h$ equal to one.
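The generative process defined by \refp{EqnPriorS} to \refp{EqnNoise} can be sketched in a few lines of code. This is a minimal NumPy illustration under the assumption of a fixed noise covariance; the function name and parameter values are illustrative.

```python
import numpy as np

def sample_gsc(W, pi, Sigma, N, rng):
    """Draw N points from the GSC model: s ~ Bernoulli(pi), z ~ N(0, I),
    y = W (s*z) + eps with eps ~ N(0, Sigma)."""
    D, H = W.shape
    s = (rng.random((N, H)) < pi).astype(float)   # binary `spike' variables
    z = rng.normal(size=(N, H))                   # Gaussian `slab' variables
    eps = rng.multivariate_normal(np.zeros(D), Sigma, size=N)
    return (s * z) @ W.T + eps

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 2))
Y = sample_gsc(W, np.array([0.2, 0.2]), 0.01 * np.eye(2), 500, rng)
```

Small values of `pi` yield the star-shaped, sparse-coding-like distributions of Fig.\,\ref{FigGSCDistributions}.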
The model \refp{EqnPriorS} to \refp{EqnNoise} is capable of generating a broad range of distributions, including sparse-coding-like distributions. This is illustrated in Fig.\,\ref{FigGSCDistributions}, where the parameters $\pi_h$ allow for a continuous change from a PCA-like to an SC-like distribution.
While the generative model itself has been studied previously \cite{West2003,TehGorurGhar2007,PaisleyLawrence2009,KnowlesGhahramani2010}, we will show that a closed-form EM algorithm can be derived, which can be applied to blind source separation tasks. We will refer to the generative model \refp{EqnPriorS} to \refp{EqnNoise} as the {\em
Gaussian Sparse Coding} (GSC) model in order to stress that a specific spike-and-slab prior (a Gaussian slab) in conjunction with a Gaussian noise model is used. The GSC model is thus an instance of the spike-and-slab sparse coding model (also known as a {\em sparse factor analysis} model;
see e.g., \cite{West2003,TehGorurGhar2007,PaisleyLawrence2009,KnowlesGhahramani2010}).
\subsection{Expectation Maximization (EM) for Parameter Optimization} \label{SecEM}
Consider a set of $N$ independent data points \mbox{$\{\vec{y}^{\,(n)}\}_{n=1,\ldots,N}$} with $\vec{y}^{\,(n)}\in\mathbbm{R}^D$. For these data we seek parameters $\Theta=(W,\Sigma,\piVec)$ that maximize the data likelihood
${\cal L}=\prod_{n=1}^N{}p(\vec{y}^{\,(n)}\,|\,\Theta)$ under the GSC generative model.
We employ the Expectation Maximization (EM) algorithm for parameter optimization. The EM algorithm \cite{NealHinton1998} optimizes the data likelihood w.r.t.\ the parameters $\Theta$ by iteratively maximizing the free-energy given by:
\begin{eqnarray}
\hspace{-2mm} {\cal F}(\Theta^{\mathrm{old}},\Theta) &=& \textstyle
\sum\limits_{n=1}^{N} \sum\limits_{\vec{s}}\int\limits_{\vec{z}}\ p(\vec{s},\vec{z}\,|\,\vec{y}^{\,(n)},\Theta^{\mathrm{old}})
\textstyle \Big[\log\big( p(\vec{y}^{\,(n)}\,|\,\vec{s},\vec{z},\Theta)\big)+ \log \big( p(\vec{s}\,|\,\Theta) \big) \nonumber\\
\hspace{-2mm} &&\hspace{47mm}\textstyle + \log \big( p(\vec{z}\,|\,\Theta) \big) \Big]\,\mathrm{d}\zVec\,+\,H(\Theta^{\mathrm{old}})\,,
\label{EqnFreeEnergy}
\end{eqnarray}
where $H(\Theta^{\mathrm{old}})$ is an entropy term only depending on parameter values held fixed during the optimization of ${\cal F}$ w.r.t.\ $\Theta$. Note that integration over the hidden space involves an integral over the continuous part and a sum over the binary part.
Optimizing the free-energy consists of two steps: given the current parameters $\Theta^{\mathrm{old}}$ the posterior probability is computed in the E-step; and given the posterior, ${\cal F}(\Theta^{\mathrm{old}},\Theta)$ is maximized w.r.t.\ $\Theta$ in the M-step. Iteratively applying E- and M-steps locally maximizes the data likelihood.
\\[2mm] {\bf M-step parameter updates:} Let us first consider the maximization of the free-energy in the M-step before considering expectation values w.r.t.\ the posterior in the E-step. Given a generative model, conditions for a maximum of the free-energy are canonically derived by setting the derivatives of ${\cal F}(\Theta^{\mathrm{old}},\Theta)$ w.r.t.\ the second argument to zero. For the GSC model we obtain the following parameter updates:
\small
\begin{eqnarray} W &=& \big( \sum_{n=1}^{N} \vec{y}^{\,(n)} \E{\vec{s}\odot\vec{z}}^{\mathrm{T}}_n \big)
\big( \sum_{n=1}^{N} \E{(\vec{s}\odot\vec{z})(\vec{s}\odot\vec{z})^{\mathrm{T}}}_n \big)^{-1},\label{EqnMStepW}\\
\Sigma &=& \frac{1}{N}\sum_{n=1}^{N}\Big[ \vec{y}^{\,(n)}(\vec{y}^{\,(n)})^{\mathrm{T}}
- 2 \big(W \E{\vec{s}\odot\vec{z}}_n\big)(\vec{y}^{\,(n)})^{\mathrm{T}} + W\big(\E{(\vec{s}\odot\vec{z})(\vec{s}\odot\vec{z})^{\mathrm{T}}}_nW^{\mathrm{T}}\big)\Big]\label{EqnMStepSigma} \nonumber\\
& &\hspace{-7mm}\mbox{and}\hspace{2mm} \vec{\pi} = \frac{1}{N}\sum_{n=1}^{N}\E{\vec{s}}_n, \label{EqnMStepPi}
\hspace{2mm}\mbox{where}\hspace{2mm}\E{f(\vec{s},\vec{z})}_n = \sum_{\vec{s}}\int_{\vec{z}}\,p(\vec{s},\vec{z}\,|\,\vec{y}^{\,(n)},\Theta^{\mathrm{old}})\ f(\vec{s},\vec{z})\,\mathrm{d}\zVec.\label{EqPstrXpt}
\end{eqnarray}
\normalsize
Equations \refp{EqnMStepW} to \refp{EqnMStepPi} define a new set of parameter values $\Theta=(W,\Sigma,\piVec)$ given the current values $\Theta^{\mathrm{old}}$. These `old' parameters are only used to compute the sufficient statistics $\E{\vec{s}}_n$, $\E{\vec{s}\odot\vec{z}}_n$ and $\E{(\vec{s}\odot\vec{z})(\vec{s}\odot\vec{z})^{\mathrm{T}}}_n$ of the model.
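Given per-datapoint sufficient statistics, the M-step \refp{EqnMStepW} to \refp{EqnMStepPi} amounts to a few matrix operations. The following NumPy sketch transcribes the update equations directly (a sketch, not the reference implementation; array layouts are our own choice):

```python
import numpy as np

def m_step(Y, E_s, E_sz, E_szsz):
    """GSC M-step from sufficient statistics.
    Y: (N, D) data; E_s, E_sz: (N, H); E_szsz: (N, H, H)."""
    N = Y.shape[0]
    A = Y.T @ E_sz                      # sum_n y_n <s*z>_n', shape (D, H)
    B = E_szsz.sum(axis=0)              # sum_n <(s*z)(s*z)'>_n, shape (H, H)
    W = A @ np.linalg.inv(B)
    Sigma = (Y.T @ Y - 2 * W @ A.T + W @ B @ W.T) / N
    pi = E_s.mean(axis=0)
    return W, Sigma, pi
```

As a sanity check, feeding in exact statistics of noiseless data recovers the generating $W$ and a vanishing $\Sigma$.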
\\[2mm] {\bf Expectation Values:} Although the derivation of M-step equations can be analytically intricate, it is the E-step that, for most generative models, poses the major challenge.
The source of these problems are the analytically intractable integrals required to compute posterior distributions and expectation values w.r.t.\ the posterior. The true posterior is therefore often replaced by an approximate distribution (see, e.g., \cite{Bishop2006,Seeger2008}), for instance in the form of factored variational distributions \cite{JordanEtAl1999,Jaakkola2001}.
The most frequently used approximation is the maximum-a-posterior (MAP) estimate (see, e.g., \cite{OlshausenField1996,LeeEtAl2007}) which replaces the true posterior by a delta-function around the posterior's maximum value. Alternatively, analytically intractable expectation values are often approximated using sampling approaches.
Using approximations always implies, however, that many analytical properties of exact EM are not maintained. Approximate EM iterations may, for instance, decrease the likelihood or may not recover (local or global) likelihood optima in many cases.
There are, nevertheless, a limited number of models with exact EM solutions, e.g., mixture models such as the mixture-of-Gaussians, p-PCA, or factor analysis. Our work here extends the set of models with known exact EM solutions.
By following along the same lines as for the p-PCA derivations, we maintain in our E-step the analytical tractability of computing expectation values w.r.t. the posterior of the GSC model \refp{EqPstrXpt}.
\\[2mm] {\bf Posterior Probability:} First observe that the discrete latent variable $\vec{s}$ of the GSC model can be directly combined with the basis functions, i.e., $W(\vec{s}\odot\vec{z})\,=\,\tilde{W}_{\vec{s}}\,\vec{z},$ where $(\tilde{W}_{\vec{s}})_{dh}\,=\,W_{dh}s_h$.
Now we apply the Bayes' rule to write down the posterior: \begin{eqnarray}
p(\vec{s},\vec{z}\,|\,\vec{y}^{\,(n)},\Theta)
&=& \frac{{\cal N}(\vec{y}^{\,(n)};\,\tilde{W}_{\vec{s}}\,\vec{z},\Sigma)\,{\cal N}(\vec{z};\,\vec{0},\One_{H})\,p(\vec{s}\,|\,\Theta)}
{\sum_{\vec{s}^{\prime}}\int{\cal N}(\vec{y}^{\,(n)};\,\tilde{W}_{\vec{s}^{\prime}}\,\vec{z}^{\prime},\Sigma)\,{\cal N}(\vec{z}^{\prime};\,\vec{0},\One_{H})\,p(\vec{s}^{\prime}\,|\,\Theta)\,\mathrm{d}\zVec^{\prime}}. \label{EqnPstrOrig}
\end{eqnarray} Note that given a state $\vec{s}$ in \refp{EqnPstrOrig}, the Gaussian governing the observations $\vec{y}^{\,(n)}$ is only dependent on the Gaussian over the continuous latent $\vec{z}$, which is analytically independent of $\vec{s}$. We can exploit this joint relation to refactorize the Gaussians. Using Gaussian identities the posterior can be rewritten as: \begin{eqnarray}
p(\vec{s},\vec{z}\,|\,\vec{y}^{\,(n)},\Theta)
&=& \frac{{\cal N}(\vec{y}^{\,(n)};\vec{0},C_{\vec{s}})\,p(\vec{s}\,|\,\Theta)\,{\cal N}(\vec{z};\,\kappaVec^{(n)}_{\vec{s}},\Lambda_{\vec{s}})}
{\sum_{\vec{s}^{\prime}}{\cal N}(\vec{y}^{\,(n)};\vec{0},C_{\vec{s}^{\prime}})\,p(\vec{s}^{\prime}\,|\,\Theta)\,\int{\cal N}(\vec{z}^{\prime};\,\kappaVec^{(n)}_{\vec{s}^{\prime}},\Lambda_{\vec{s}^{\prime}})\,\mathrm{d}\zVec^{\prime}}\nonumber\\%[1mm]
&=& p(\vec{s}\,|\,\vec{y}^{\,(n)},\Theta)\ {\cal N}(\vec{z};\,\kappaVec^{(n)}_{\vec{s}},\Lambda_{\vec{s}}), \label{EqnPostMain} \\[-8mm]\nonumber
\label{EqnPstrRfct}
\end{eqnarray} \begin{eqnarray}
\mbox{where}\hspace{2mm}C_{\vec{s}} &=& \tilde{W}_{\vec{s}}\tilde{W}^{\mathrm{T}}_{\vec{s}}\,+\,\Sigma,\label{EqnPostC}
\hspace{8mm}\Lambda_{\vec{s}} = \big(\tilde{W}_{\vec{s}}^{T}\,\Sigma^{-1}\,\tilde{W}_{\vec{s}} + \hbox{$1\hskip -1.2pt\vrule depth 0pt height 1.6ex width 0.7pt\vrule depth 0pt height 0.3pt width 0.12em$}_H\big)^{-1} \nonumber\\[1mm]
\mbox{and}\hspace{3mm}\kappaVec^{(n)}_{\vec{s}} &=& \Lambda_{\vec{s}}\,\tilde{W}^{\mathrm{T}}_{\vec{s}}\,\Sigma^{-1}\,\vec{y}^{\,(n)} . \label{EqnPostK}
\end{eqnarray}
Equations \refp{EqnPostMain} to \refp{EqnPostK} represent the crucial result for the computation of the E-step below because, first, they show that the posterior does not involve analytically intractable integrals and, second, for fixed $\vec{s}$ and $\vec{y}^{\,(n)}$ the dependency on $\vec{z}$ follows a Gaussian distribution. This special form allows for the derivation of analytical expressions for the expectation values as required for the M-step parameter updates.
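The quantities $C_{\vec{s}}$, $\Lambda_{\vec{s}}$ and $\kappaVec^{(n)}_{\vec{s}}$ of \refp{EqnPostC} and \refp{EqnPostK} are straightforward to compute for a given binary state. A hedged NumPy transcription (names illustrative; $\Sigma$ assumed invertible):

```python
import numpy as np

def posterior_moments(y, s, W, Sigma):
    """C_s, Lambda_s and kappa_s for a fixed binary state s."""
    Ws = W * s                                     # \tilde W_s: columns with s_h = 0 zeroed out
    C = Ws @ Ws.T + Sigma                          # covariance of N(y; 0, C_s)
    Si = np.linalg.inv(Sigma)
    Lam = np.linalg.inv(Ws.T @ Si @ Ws + np.eye(len(s)))
    kappa = Lam @ Ws.T @ Si @ y                    # posterior mean of z given s, y
    return C, Lam, kappa
```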
\\[2mm] {\bf E-step Equations:} Derived from \refp{EqnPostMain}, the expectation values are computed as: \begin{eqnarray}
\E{\vec{s}}_n &=& \sum_{\vec{s}} p(\vec{s}\,|\,\vec{y}^{\,(n)},\Theta)\ \vec{s}, \label{EqnEStepS}
\hspace{6.5mm} \E{\vec{s}\odot\vec{z}}_n = \sum_{\vec{s}} p(\vec{s}\,|\,\vec{y}^{\,(n)},\Theta)\ \kappaVec^{(n)}_{\vec{s}} \ \label{EqnEStepSZ}\\
& &\hspace{-11mm}\mbox{and}\hspace{4mm}\E{(\vec{s}\odot\vec{z})(\vec{s}\odot\vec{z})^{\mathrm{T}}}_n =
\sum_{\vec{s}} p(\vec{s}\,|\,\vec{y}^{\,(n)},\Theta)\,\big(\Lambda_{\vec{s}} + \kappaVec^{(n)}_{\vec{s}}(\kappaVec^{(n)}_{\vec{s}})^{\mathrm{T}}\big). \label{EqnEStepSZSZ}
\end{eqnarray}
Note that we have to use the current values $\Theta=\Theta^{\mathrm{old}}$ for all parameters on the right-hand-side. The E-step equations \refp{EqnEStepS} to \refp{EqnEStepSZSZ} represent a closed-form solution for expectation values required for the closed-form M-step \refp{EqnMStepW} to \refp{EqnMStepPi}.
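The exact E-step enumerates all $2^H$ binary states, which makes its exponential cost explicit. The sketch below (NumPy; our own illustrative transcription, assuming $0<\pi_h<1$) computes \refp{EqnEStepS} to \refp{EqnEStepSZSZ} for a single data point:

```python
import itertools
import numpy as np

def e_step(y, W, Sigma, pi):
    """Exact GSC E-step for one data point: sum over all 2^H binary states."""
    H = W.shape[1]
    Si = np.linalg.inv(Sigma)
    logw, ks, Ls, ss = [], [], [], []
    for bits in itertools.product([0.0, 1.0], repeat=H):
        s = np.array(bits)
        Ws = W * s
        C = Ws @ Ws.T + Sigma
        _, logdet = np.linalg.slogdet(C)
        # log N(y; 0, C_s) + log Bernoulli(s; pi), dropping s-independent constants
        logw.append(-0.5 * (logdet + y @ np.linalg.solve(C, y))
                    + np.sum(s * np.log(pi) + (1 - s) * np.log(1 - pi)))
        L = np.linalg.inv(Ws.T @ Si @ Ws + np.eye(H))
        ks.append(L @ Ws.T @ Si @ y)
        Ls.append(L); ss.append(s)
    logw = np.array(logw)
    p = np.exp(logw - logw.max()); p /= p.sum()   # posterior p(s | y, Theta)
    E_s = sum(w * s for w, s in zip(p, ss))
    E_sz = sum(w * k for w, k in zip(p, ks))
    E_szsz = sum(w * (L + np.outer(k, k)) for w, L, k in zip(p, Ls, ks))
    return E_s, E_sz, E_szsz
```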
\\[2mm] {\bf Relation to the Mixture of Gaussians:}
The special form of the posterior in \refp{EqnPostMain} allows the derivation of a closed-form expression of the data likelihood:
$p(\vec{y}\,|\,\Theta) = \sum_{\vec{s}}\ \mathrm{Bernoulli}(\vec{s};\vec{\pi})\ {\cal N}(\vec{y};\vec{0},C_{\vec{s}})$. Note that in principle, this form can be reproduced by a Gaussian mixture model. However, such a model would consist of $2^H$ mixture components with strongly dependent mixing proportions and covariance matrices $C_{\vec{s}}$. Closed-form EM updates can in general not be derived for such dependencies: the standard updates for mixtures of Gaussians require independently parameterized mixing proportions and components. Therefore, the closed-form EM solution for the GSC model is not a consequence of closed-form EM for classical Gaussian mixtures.
\section{Numerical Experiments}
GSC parameter optimization is non-convex. However, as for all algorithms based on closed-form EM, the GSC algorithm always increases the data likelihood, at least to a local maximum. We first numerically investigate how frequently local optima are obtained. Later we assess the model's performance on more practical tasks. \\[2mm] {\bf Model verification:} First, we verify on artificial data that the algorithm increases the likelihood and that it can recover the parameters of the generating distribution. For this, we generated $N = 500$ data points $\vec{y}^{\,(n)}$ from the GSC generative model (\ref{EqnPriorS}) to (\ref{EqnNoise}) with $D = H = 2$. We used randomly initialized generative parameters\footnote{We obtained $W^{\mathrm{gen}}$ by independently drawing each matrix entry from a normal distribution with zero mean and standard deviation $3$. The $\pi^{\mathrm{gen}}_h$ values were drawn from a uniform distribution between $0.05$ and $1$, and $\Sigma = \sigma^{\mathrm{gen}}{\hbox{$1\hskip -1.2pt\vrule depth 0pt height 1.6ex width 0.7pt\vrule depth 0pt height 0.3pt width 0.12em$}_D}$ (where $\sigma^{\mathrm{gen}}$ was uniformly drawn between $0.05$ and $10$).}.
The algorithm was run $250$ times on the generated data, with $300$ EM iterations per run. For each run, we randomly and uniformly initialized $\pi_h$ between $0.05$ and $1$, set $\Sigma$ to the covariance across the data points, and drew the elements of $W$ independently from a normal distribution with zero mean and unit variance. In all runs the generating parameter values were recovered with high accuracy. Runs with different generating parameters produced essentially the same results.
\\[2mm] {\bf Recovery of sparse directions:} To test the model's robustness w.r.t.\ a relaxation of the GSC assumptions, we applied the GSC algorithm to data generated by standard sparse coding models. We used a standard Cauchy prior and a Gaussian noise model \cite{OlshausenField1996} for data generation. The second panel of Fig.\,\ref{FigSparseDirections} shows data generated by this sparse coding model, while the first panel shows the prior density along one of its hidden dimensions. We generated $N=500$ data points with $H=D=2$. We then applied the GSC algorithm with the same parameter initialization as in the previous experiment. We performed $100$ trials using $300$ EM iterations per trial. Again, the algorithm converged to high likelihood values in most runs (see Fig.\,\ref{FigCauchyComp}). To measure performance in this experiment, we investigated how well the heavy tails (i.e., the sparse directions) of standard SC were recovered, using the Amari index \cite{AmariEtAl1996}:
\begin{figure}
\caption{Histogram of likelihood values for 100 runs of the GSC algorithm on data generated by a SC model with Cauchy prior. Almost all runs converged to high likelihood values.}
\label{FigCauchyComp}
\end{figure} \begin{figure}
\caption{Comparison of standard sparse coding and GSC. {\bf Left panels}: Cauchy distribution (along one hidden dimension) as a standard SC prior \cite{OlshausenField1996}
and data generated by it. {\bf Right panels}: Spike-and-slab distribution (one of the hidden dimensions) inferred by the GSC algorithm along with inferred sparse directions (solid red lines) and posterior data density contours (dotted red lines).}
\label{FigSparseDirections}
\end{figure}
\begin{eqnarray}
\textstyle{}A(W) &=& \textstyle \frac{1}{2H(H-1)} \sum_{h,h^\prime=1}^{H} \Big( \frac{\vert O_{hh^\prime}\vert}{\max_{h^{\prime\prime}}\vert O_{hh^{\prime\prime}}\vert}
+ \frac{\vert O_{hh^{\prime}}\vert}{\max_{h^{\prime\prime}}\vert O_{h^{\prime\prime}h^{\prime}}\vert} \Big) - \frac{1}{H-1}
\label{EqnAmari} \end{eqnarray}
where $O_{hh'}:=\left(W^{-1}W^{\mathrm{gen}}\right)_{hh'}$.
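The Amari index \refp{EqnAmari} is zero exactly when $O$ is a rescaled permutation matrix, i.e., when the sparse directions are recovered up to permutation and scaling. A direct NumPy transcription (a sketch; assumes $W$ is invertible):

```python
import numpy as np

def amari_index(W, W_gen):
    """Amari index of O = W^{-1} W_gen; 0 iff O is a rescaled permutation matrix."""
    O = np.abs(np.linalg.inv(W) @ W_gen)
    H = O.shape[0]
    rows = (O / O.max(axis=1, keepdims=True)).sum()   # row-normalized term
    cols = (O / O.max(axis=0, keepdims=True)).sum()   # column-normalized term
    return (rows + cols) / (2 * H * (H - 1)) - 1 / (H - 1)
```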
The mean Amari index of all runs with high likelihood values was below $10^{-2}$, which shows a very accurate recovery of the sparse directions. Fig.\,\ref{FigSparseDirections} (right panel) visualizes the distribution recovered by the GSC algorithm in a typical run. The dotted red lines show the density contours of the learned distribution
$p(\vec{y}\,|\,\Theta)$. High accuracy in the recovery of the generating sparse directions (solid black lines) can be observed by comparison with the recovered directions (solid red lines). The results of the experiments are qualitatively the same if we increase the number of hidden and observed dimensions; e.g., for $H=D=4$ the algorithm converged to a high likelihood in $91$ of $100$ runs (with an average Amari index below $10^{-2}$).
Besides standard SC with a Cauchy prior, we also ran the algorithm on data generated by SC with a Laplace prior \cite{OlshausenField1996,LeeEtAl2007}. For $H=D=2$, the algorithm converged to high likelihood values in $99$ of $100$ runs, with an average Amari index of $0.06$. In the experiment with $H=D=4$ the algorithm converged to a high likelihood in $97$ of $100$ runs. The average Amari index of all runs with high likelihoods was $0.07$ in this case.
\\[2mm] {\bf Source separation:} We applied the GSC algorithm to publicly available benchmarks. We used the non-artificial benchmarks of \cite{SuzukiSugiyama2011}. The datasets mainly contain acoustic data obtained from ICALAB \cite{icalab2007}. We generated the observed data by mixing the benchmark sources using a randomly generated orthogonal mixing matrix (following \cite{SuzukiSugiyama2011}). Again, the Amari index (\ref{EqnAmari}) was used as a performance measure.
\begin{figure}
\caption{Histogram of the deviation from orthogonality of the $W$
matrix for 100 runs of the GSC algorithm on the {\tt Speech4}
benchmark ($N=500$). A clear cluster of the most orthogonal runs can
automatically be detected: the threshold of runs considered is
defined to be the minimum after the cluster (black
arrow). }
\label{FigSpeech4OrthoHist}
\end{figure} \begin{table*}[t] \footnotesize \caption{Performance of different algorithms on benchmarks for source separation. Data for NG-LICA, KICA, FICA, and JADE are taken from \cite{SuzukiSugiyama2011}. Performances are compared based on the Amari index (\ref{EqnAmari}). Bold values highlight the best performing algorithm(s).}
\label{TabLimits} \renewcommand{\arraystretch}{1.1} \begin{center}
\begin{tabular}{|c|c|cccccc|}\hline
\multicolumn{2}{|c|}{datasets} & \multicolumn{6}{c|}{Amari index (standard deviation)}\\\hline
name & N & GSC & GSC$^\mathrm{\perp}$ & NG-LICA & KICA & FICA & JADE \\\hline
10halo & 200 & 0.34(0.05) & \bf{0.29(0.03)} & \bf{0.29(0.02)} & 0.38(0.03) & 0.33(0.07) & 0.36(0.00)\\
& 500 & 0.27(0.01) & 0.27(0.01) & \bf{0.22(0.02)} & 0.37(0.03) & \bf{0.22(0.03)} & 0.28(0.00)\\\hline
Sergio7 & 200 & 0.23(0.06) & 0.20(0.06) & \bf{0.04(0.01)} & 0.38(0.04) & 0.05(0.02) & 0.07(0.00)\\
& 500 & 0.18(0.05) & 0.17(0.03) & 0.05(0.02) & 0.37(0.03) & \bf{0.04(0.01)} & \bf{0.04(0.00)}\\\hline
Speech4 & 200 & 0.25(0.05) & \bf{0.17(0.04)} & 0.18(0.03) & 0.29(0.05) & 0.20(0.03) & 0.22(0.00)\\
& 500 & 0.11(0.04) & \bf{0.05(0.01)} & 0.07(0.00) & 0.10(0.04) & 0.10(0.04) & 0.06(0.00)\\\hline
c5signals & 200 & 0.39(0.03) & 0.44(0.05) & 0.12(0.01) & 0.25(0.15) & \bf{0.10(0.02)} & 0.12(0.00)\\
& 500 & 0.41(0.05) & 0.44(0.04) & 0.06(0.04) & 0.07(0.06) & \bf{0.04(0.02)} & 0.07(0.00)\\\hline
\end{tabular} \end{center}
\end{table*}
For all the benchmarks we used $N=200$ and $N=500$ data points (as selected by \cite{SuzukiSugiyama2011}). We applied GSC to the data using the same initialization as described before. For each experiment we performed $100$ trials with a new random parameter initialization per trial. The first column of Tab.\,1 lists the average Amari indices obtained including all trials per experiment\footnote{We obtained the reported results by diagonalizing the updated $\Sigma$ in the M-step, setting $\Sigma = \sigma^2 \hbox{$1\hskip -1.2pt\vrule depth 0pt height 1.6ex width 0.7pt\vrule depth 0pt height 0.3pt width 0.12em$}_D$, where $\sigma^2 = \mathrm{Tr}(\Sigma)/D$.}. It is important to note that all the other algorithms listed in the comparison assume orthogonal mixing matrices, while the GSC algorithm does not. Therefore, in the column 'GSC$^{\perp}$' in Tab.\,1, we report statistics computed only over the runs which inferred the most orthogonal $W$ matrices. As a measure of orthogonality we used the maximal deviation from $90^{\circ}$ between any two axes. Fig.\,\ref{FigSpeech4OrthoHist} shows as an example a histogram of the maximal deviations of all trials on the {\tt Speech4} data with $N=500$. As can be observed, we obtained a clear cluster of runs with high orthogonality.
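The orthogonality measure just described (maximal deviation from $90^{\circ}$ between any two axes) can be sketched as follows; treating the columns of $W$ as the axes is our reading of the text.

```python
import math

def max_orthogonality_deviation(W):
    """Maximal deviation (in degrees) from 90 degrees between any two
    columns of the D x H matrix W (given as a list of rows)."""
    D, H = len(W), len(W[0])
    cols = [[W[d][h] for d in range(D)] for h in range(H)]

    def angle_deg(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        c = max(-1.0, min(1.0, dot / (nu * nv)))  # clamp rounding noise
        return math.degrees(math.acos(c))

    return max(abs(90.0 - angle_deg(cols[i], cols[j]))
               for i in range(H) for j in range(i + 1, H))
```

Runs whose deviation falls below the cluster threshold of Fig.\,\ref{FigSpeech4OrthoHist} would then be the ones retained for the 'GSC$^{\perp}$' column.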
We observed the worst performance of the GSC algorithm on the {\tt c5signals} dataset. However, this dataset contains sub-Gaussian sources, which in general cannot be recovered by sparse coding approaches.
\section{Discussion}
The GSC algorithm falls into the class of standard SC algorithms. However, instead of a heavy-tail prior, it uses a spike-and-slab distribution. The algorithm has the distinguishing capability of taking the whole (potentially multimodal) posterior into account for parameter optimization. This is in contrast to the MAP approximation of the posterior, a widely used approach for training SC models
(see, e.g., \cite{LeeEtAl2007,MairalBPS09}). Various sophisticated methods have been proposed to efficiently find the MAP (e.g., \cite{Tibshirani1996}). MAP-based optimizations usually also require regularization parameters that have to be inferred (e.g., by means of cross-validation). Other approximations that take more aspects of the posterior into account are also being investigated actively (e.g., \cite{Seeger2008,MohamedEtAl2010}).
However, approximations can introduce learning biases. For instance, MAP and Laplace approximations assume a unimodal posterior, which is not always the case.
Closed-form EM learning of the GSC algorithm also comes at a cost that is exponential in the number of hidden dimensions $H$. This can be seen by considering (\ref{EqnPstrRfct}), where the partition function requires a summation over all binary vectors $\vec{s}$ (similarly for expectation values w.r.t.\ the posterior).
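The exponential cost can be made explicit in a toy sketch: the exact normalizer is a sum over all $2^H$ binary vectors $\vec{s}$ (the function names below are ours, for illustration only).

```python
import math
from itertools import product

def exact_partition_sum(log_joint, H):
    """Exact normalizer over binary latent vectors: a sum over all
    2**H states s in {0,1}^H, hence exponential in H."""
    return sum(math.exp(log_joint(s)) for s in product((0, 1), repeat=H))

def bernoulli_log_prior(s, pi):
    """log p(s) for independent Bernoulli(pi) latents."""
    k = sum(s)
    return k * math.log(pi) + (len(s) - k) * math.log(1.0 - pi)
```

Already for $H=30$ this loop visits over $10^9$ states, which is why exact EM is restricted to moderate numbers of hidden dimensions.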
Nevertheless, our numerical experiments show that the algorithm is well applicable to source separation tasks of typical size.
In such domains the GSC algorithm can benefit from taking a potentially multimodal posterior into account and from inferring a whole set of model parameters, including the sparsity per latent dimension. For instance, when using a number of hidden dimensions larger than the number of actual sources, the model can discard dimensions by setting $\pi_h$ parameters to zero. The studied model could thus be considered as treating parameter inference in a more Bayesian way than, e.g., SC with MAP estimates (compare \cite{LeeEtAl2007}). A second line of research aims at a fully Bayesian description of sparse coding and emphasizes a large flexibility, using estimations of the number of hidden dimensions and by being applicable with a range of different noise models. The great challenge of these general models is the procedure of parameter estimation. For the model in \cite{MohamedEtAl2010}, for instance, the Bayesian methodology involves conjugate priors and hyperparameters in combination with Laplace approximation and different sampling schemes. While the aim, e.g.,\ in \cite{West2003,KnowlesGhahramani2007,TehGorurGhar2007,PaisleyLawrence2009,KnowlesGhahramani2010,MohamedEtAl2010} is a large flexibility, we aim at a generalization of sparse coding that maintains the closed-form EM solutions.
To summarize, we have studied a novel sparse coding algorithm and have shown its competitiveness on source separation benchmarks. Along with the reported results on source separation, the main contribution of this work is the derivation and numerical investigation of the (to the authors' knowledge) first closed-form, exact EM algorithm for spike-and-slab sparse coding.
\\[2mm] {\bf Acknowledgement.} J. L\"ucke is funded by the German Research Foundation (DFG), grant \mbox{LU 1196/4-1}; A.-S. Sheikh is funded by the German BMBF, grant 01GQ0840.
\end{document}
\begin{document}
\title{Fundamental limitation on qubit operations due to the Bloch-Siegert Oscillation } \author{M. S. Shahriar and Prabhakar Pradhan }
\address{ Department of Electrical and Computer Engineering, Northwestern University \\
Evanston, IL 60208 and \\
Research Laboratory of Electronics, Massachusetts Institute of Technology \\
Cambridge, MA 02139 } \maketitle
\abstracts{ We show that
if the Rabi frequency is comparable to the Bohr frequency, so that the rotating wave approximation is inappropriate, an extra oscillation accompanies the Rabi oscillation.
We discuss how the sensitivity of the degree of excitation to the phase of the field may pose severe constraints on precise rotations of quantum bits involving low-frequency transitions. We present a scheme for observing this effect in an atomic beam.}
It is well known that the amplitude of an atomic state is necessarily complex.
The electric or magnetic field generated by an oscillator, on the other hand, is real, composed of the sum of two complex components. In describing semiclassically the atom-field interaction involving such a field, one often side-steps this difference by making the so-called rotating wave approximation (RWA), under which only one of the two complex components is kept, and the counter-rotating part is ignored.
We discuss how the sensitivity of the degree of excitation to the phase of the field
poses severe constraints on precise rotations of quantum bits involving low-frequency transitions. We also present a scheme for observing this effect in an atomic beam.
We consider an ideal two-level system where a ground state $|0\rangle$
is coupled to an excited state $|1\rangle $, and the $0 \leftrightarrow 1$ transitions are magnetic dipolar, with a transition frequency $\omega$, and the magnetic field is of the form $B=B_0 \cos(\omega t+\phi)$. We now summarize briefly the two-level dynamics without the RWA. In the dipole approximation, the Hamiltonian can be written as:
\begin{equation} \hat{H} = \epsilon ( \sigma_0 -\sigma_z )/2 + g(t) \sigma_x \label{hmt_original} \end{equation}
where $g(t) = -g_0\left[\exp (i\omega t + i\phi)+c.c. \right] /2$, $\sigma_i$ are Pauli matrices, and $\epsilon=\omega$ corresponds to resonant excitation. The state vector is written as: \begin{equation}
|\xi(t)\rangle = \left( \begin{array} {c} C_0(t) \\ C_1(t) \end{array} \right). \label{ket_c_original} \end{equation}
We perform a rotating wave transformation by operating on $|\xi(t)\rangle$ with the unitary operator $\hat{Q}$, where: $\hat{Q} = (\sigma_0 + \sigma_z )/2 + \exp (i\omega t + i\phi)(\sigma_0 - \sigma_z)/2 $.
The Schr\"{o}dinger equation then takes the form (setting $\hbar=1$):
$\dot{ |\tilde{\xi}\rangle } = -i \tilde{H}(t) |\tilde{\xi}(t) \rangle $, where the effective Hamiltonian is given by:
\begin{equation} \tilde{H} = \alpha(t)\sigma_{+} + \alpha^{*}(t) \sigma_{-}, \label{hmt_tilde} \end{equation}
with $\alpha(t) = - (g_0/2)\left[\exp (-i2\omega t- i2\phi)+1 \right] $, and in the rotating frame the state vector is: \begin{equation}
|\tilde{\xi}(t) \rangle \equiv \hat{Q}| {\xi}(t)\rangle= \left( \begin{array}{c} \tilde{C}_0(t) \\ \tilde{C}_1(t) \end{array} \right). \label{ket_c_tilde} \end{equation}
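For completeness, Eq.\,(\ref{hmt_tilde}) follows from the standard transformation rule for a time-dependent change of frame,
\begin{equation}
\tilde{H} = \hat{Q}\hat{H}\hat{Q}^{\dagger} + i\,\dot{\hat{Q}}\hat{Q}^{\dagger},
\end{equation}
where the second term, $i\dot{\hat{Q}}\hat{Q}^{\dagger} = -\omega(\sigma_0-\sigma_z)/2$, exactly cancels the diagonal part $\epsilon(\sigma_0-\sigma_z)/2$ of Eq.\,(\ref{hmt_original}) on resonance ($\epsilon=\omega$).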
Now, by making the rotating wave approximation (RWA), corresponding to dropping the fast oscillating term in $\alpha(t)$, one ignores effects (such as the Bloch-Siegert shift) of order $(g_0/\omega)$, which can easily be observed in experiment if $g_0$ is large \cite{Corney77,Bloch40}. Conversely, by choosing $g_0$ small enough, one can justify the RWA for any value of $\omega$. Here we explore both regimes, and we find the general results without the RWA.
From Eqs.\ref{hmt_tilde} and \ref{ket_c_tilde}, we get two coupled differential equations:
\begin{eqnarray} \dot{\tilde{C}}_0(t) & = & i(g_0/2)\left[ 1+ \exp (-i2\omega t-i2\phi)\right ]\tilde{C}_1(t)\\ \label{c_diff_eqn_a} \dot{\tilde{C}}_1(t) & = & i(g_0/2)\left[ 1+ \exp (+i2\omega t+i2\phi)\right] \tilde{C}_0(t). \label{c_diff_eqn_b} \end{eqnarray}
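These two coupled equations are also straightforward to integrate numerically. The following sketch (our own, with a constant $g_0$, i.e., omitting the adiabatic switching discussed below) uses a fourth-order Runge-Kutta step, starting from $\tilde{C}_0(0)=1$.

```python
import cmath

def integrate_two_level(g0, omega, phi, t_final, steps=20000):
    """RK4 integration of the rotating-frame equations for
    C0_tilde and C1_tilde with constant coupling g0.
    Returns (C0_tilde, C1_tilde) at time t_final."""
    def deriv(t, c0, c1):
        f = cmath.exp(-2j * (omega * t + phi))  # counter-rotating term
        d0 = 0.5j * g0 * (1.0 + f) * c1
        d1 = 0.5j * g0 * (1.0 + f.conjugate()) * c0
        return d0, d1

    c0, c1 = 1.0 + 0.0j, 0.0 + 0.0j
    h = t_final / steps
    t = 0.0
    for _ in range(steps):
        k1 = deriv(t, c0, c1)
        k2 = deriv(t + h / 2, c0 + h / 2 * k1[0], c1 + h / 2 * k1[1])
        k3 = deriv(t + h / 2, c0 + h / 2 * k2[0], c1 + h / 2 * k2[1])
        k4 = deriv(t + h, c0 + h * k3[0], c1 + h * k3[1])
        c0 += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        c1 += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        t += h
    return c0, c1
```

For $g_0 \ll \omega$ the populations follow the RWA Rabi formula up to corrections of order $\eta = g_0/4\omega$, and the norm is conserved.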
With the initial condition $|C_0(0)|^2=1$, we proceed to find approximate analytical solutions of Eqs.5 and 6. Due to the periodic nature of the effective Hamiltonian, the general solution to Eqs.5 and 6 can be written in the form:
\begin{equation}
|\tilde{\xi}(t)\rangle = \sum_{n=-\infty}^{\infty} \left( \begin{array}{c} a_n \\ b_n \end{array} \right) \exp( n(-i2\omega t-i2\phi)). \label{xi_Bloch_expn} \end{equation}
Inserting Eq.\ref{xi_Bloch_expn} in Eqs.5 and 6 we get for all $n$ : \begin{eqnarray}
\dot{a}_n & = & i2n\omega a_n + i g_0(b_n +b_{n-1})/2, \\
\dot{b}_n & = & i2n\omega b_n + i g_0(a_n +a_{n+1})/2. \end{eqnarray} \label{a_n_b_n_eqn}
Here, the coupling between $a_0$ and $b_0$ is the conventional one when the RWA is made. The couplings to the nearest neighbors, $a_{\pm 1}$ and $b_{\pm 1}$, are detuned by an amount $2\omega$, and so on. To the lowest order in $(g_0/\omega)$, we can ignore
terms with $|n|>1$, thus yielding a truncated set of six equations for $\dot{a}_0,\dot{b}_0,\dot{a}_{\pm 1},\dot{b}_{\pm 1}$. Now consider $g_0$ to have a time-dependence of the form $g_0(t)=g_{0M}\left[1-\exp(-t/\tau_{sw})\right]$, where the switching time constant $\tau_{sw}$ is large compared to other characteristic time scales such as $1/\omega$ and $1/g_{0M}$. Then, these equations can be solved by the method of adiabatic elimination, which is valid to first order in $\eta\equiv(g_0/4\omega )$.
(Note that $\eta(t)=\eta_0\left[1-\exp(-t/\tau_{sw})\right]$, where $\eta_0\equiv g_{0M}/4\omega$.) To solve the set of equations above, we consider first $a_{-1}$ and $b_{-1}$. Defining $\mu_{\pm}\equiv (a_{-1}\pm b_{-1})$, one can write $\dot{\mu}_{\pm} = -i(2\omega+ g_0/2 ) \mu_{\pm} \pm i g_0 a_{0}/2 $. Adiabatic following then yields (again, to lowest order in $\eta$):
$a_{-1} \approx 0 $ and $b_{-1} \approx \eta a_{0}$.
Likewise, we can show that $a_{1} \approx -\eta b_0$ and $b_{1} \approx 0$.
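Explicitly, setting $\dot{\mu}_{\pm}\approx 0$ in the adiabatic limit gives
\begin{equation}
\mu_{\pm} \approx \pm \frac{g_0\, a_0}{2(2\omega + g_0/2)} \approx \pm \eta\, a_0
\end{equation}
to lowest order in $\eta$, from which $a_{-1}=(\mu_{+}+\mu_{-})/2 \approx 0$ and $b_{-1}=(\mu_{+}-\mu_{-})/2 \approx \eta a_0$ follow; the results for $a_{1}$ and $b_{1}$ are obtained in the same way from the combinations $a_{1}\pm b_{1}$.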
The only non-vanishing (to lowest order in $\eta $ with $|C_0(t=0)|^2=1$) terms in the solution of Eqs.5 and 6 are:
\begin{eqnarray} C_0(t) &=& \cos(g_0'(t)t/2) - i\eta e^{-i(2\omega t +2\phi)} \sin(g_0'(t)t/2), \\ C_1(t) &=& i e^{-i(\omega t + \phi)} [ \sin(g_0'(t)t/2)
- i\eta e^{+i(2\omega t +2\phi)} \cos(g_0'(t)t/2) ], \end{eqnarray} \label{c_total_eqn}
where $g_0'(t)=\frac{1}{t} \int_0^t g_{0}(t')\,dt' = g_{0M} \left[1-\frac{\tau_{sw}}{t}\left(1-\exp(-t/\tau_{sw})\right)\right].$
To lowest order in $\eta$, this solution is normalized at all times. Note that if one wants to carry this excitation on an ensemble of atoms using a $\pi/2$ pulse and measure the population of the
state $|1\rangle$ after the excitation terminates (at $t=\tau $ when $g_0'(\tau)\tau = \pi /2$, a $\pi/2$ pulse), the output signal will be: \begin{equation}
|C_{1}(g_0'(\tau),\phi)|^2 =\frac{1}{2}\left[1+2\eta \sin(2\omega\tau + 2\phi)\right], \label{population_c1} \end{equation} which contains information of both the amplitude and the phase of the driving field $B$. This clearly indicates that the Rabi transition probability depends on the total phase $\phi_{\tau}= \omega\tau + \phi$ of the driving field.\\
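A quick numerical reading of Eq.\,(\ref{population_c1}): scanning the absolute phase $\phi$ at fixed $\tau$ modulates the signal with peak-to-peak amplitude $2\eta$. The following sketch (with our own illustrative parameter choices) makes this explicit.

```python
import math

def bso_modulation_depth(g0, omega, tau, samples=4000):
    """Peak-to-peak variation of |C1|^2 = (1/2)[1 + 2*eta*sin(2*omega*tau
    + 2*phi)] as the absolute field phase phi is scanned over [0, pi);
    per the formula this equals 2*eta with eta = g0/(4*omega)."""
    eta = g0 / (4.0 * omega)
    vals = [0.5 * (1.0 + 2.0 * eta * math.sin(2.0 * omega * tau + 2.0 * phi))
            for phi in (math.pi * k / samples for k in range(samples))]
    return max(vals) - min(vals)
```

For example, $g_0/\omega = 1/50$ gives $\eta = 5\times 10^{-3}$, so the excited-state population is modulated by $1\%$ peak to peak as the field phase is varied.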
\begin{figure}
\caption{ Left: Schematic illustration of an experimental arrangement for measuring the phase
dependence of the population of the excited state $|1\rangle$: (a) The microwave field couples the ground state
($|0\rangle$) to the excited state ($|1\rangle$).
A third level, $|2\rangle$, which can be coupled
to $|1\rangle$ optically, is used to measure the
population of $|1\rangle$ via fluorescence detection. (b) The microwave field is turned on adiabatically with a switching time-constant $\tau_{sw}$, and the fluorescence is monitored after a total interaction time of $\tau$.
Right: Illustration of the Bloch-Siegert Oscillation (BSO): (a) The population of state
$|1\rangle$, as a function of the interaction time $\tau$ , showing the BSO superimposed on the conventional Rabi oscillation. (b) The BSO oscillation (amplified scale) by itself, produced by subtracting the Rabi oscillation from the plot in (a). (c) The time-dependence of the Rabi frequency. Inset: BSO as a function of the absolute phase of the field with fixed $\tau$. }
\end{figure} A physical realization of this result can best be appreciated by considering an experimental arrangement of the type illustrated in Fig.1(left). A plot of the Rabi oscillations is shown in Fig.1(right). Under the RWA, the curve of Fig.1(a)(right) represents the conventional Rabi oscillation. However, we notice some additional oscillation, which is magnified and shown separately in Fig.1(b)(right), produced by subtracting the conventional Rabi oscillation ($\sin^2(g_0'(t)t/2)$) from Fig.1(a)(right). That is, Fig.1(b)(right) corresponds to what we call the Bloch-Siegert Oscillation (BSO), given by $\eta \sin(g'_0(\tau)\tau)\sin(2\phi_{\tau})$.
These analytical results agree very closely with the results obtained via direct numerical integration of Eqs.5 and 6. Note that the BSO is at twice the frequency of the driving field, and its amplitude is enveloped by a function that vanishes when all the atoms are in a single state. This oscillation is stronger when the ratio of the Rabi frequency to the resonance frequency is large \cite{Bouwmeester00,Steane97a,Jonathan92}. One should therefore keep track of the phase of the excitation field at the location of the qubit \cite{Pradhan02a,Pradhan02b} for low-energy qubit systems \cite{Thomas82,Shahriar90,Shahriar97} with fast driving.
In conclusion, we have shown that when a two-level atomic system is driven by a strong periodic field, the Rabi oscillation is accompanied by another oscillation at twice the transition frequency. This extra oscillation can limit qubit operations based on Rabi flopping unless the parameters are properly matched. However, it has been shown that this phase dependence has potential applications in teleportation. By making use of distant entanglement, this mechanism may enable teleportation of the phase of a field that is encoded in the atomic state amplitude, with potential applications to remote frequency locking \cite{Jozsa00,Lloyd01,Levy87,Shahriar01}.
We wish to acknowledge support from DARPA grant No. F30602-01-2-0546 under the QUIST program, ARO grant No. DAAD19-001-0177 under the MURI program, and NRO grant No. NRO-000-00-C-0158.
\end{document}
\begin{document}
\author{Jeremy Berquist} \title{Singularities on Demi-Normal Varieties} \maketitle
\noindent \linebreak \textbf{Abstract. } The birational classification of varieties inevitably leads to the study of singularities. The types of singularities that occur in this context have been studied by Mori, Koll\'ar, Reid, and others, beginning in the 1980s with the introduction of the Minimal Model Program. Normal singularities that are terminal, canonical, log terminal, and log canonical, and their non-normal counterparts, are typically studied by using a resolution of singularities (or a semi-resolution), and finding numerical conditions that relate the canonical class of the variety to that of its resolution. In order to do this, it has been assumed that a variety $X$ has a $\mathbb{Q}$-Cartier canonical class: some multiple $mK_X$ of the canonical class is Cartier. In particular, this divisor can be pulled back under a resolution $f: Y \rightarrow X$ by pulling back its local sections. Then one has a relation $K_Y \sim \frac{1}{m}f^*(mK_X) + \sum a_iE_i$. It is then the coefficients of the exceptional divisors $E_i$ that determine the type of singularities that belong to $X$. It might be asked whether this $\mathbb{Q}$-Cartier hypothesis is necessary in studying singularities in birational classification. In \cite{dFH09}, de Fernex and Hacon construct a boundary divisor $\Delta$ for arbitrary normal varieties, the resulting divisor $K_X + \Delta$ being $\mathbb{Q}$-Cartier even though $K_X$ itself is not. This they call (for reasons that will be made clear) an $m$-compatible boundary for $X$, and they proceed to show that the singularities defined in terms of the pair $(X,\Delta)$ are none other than the singularities just described, when $K_X$ happens to be $\mathbb{Q}$-Cartier. Thus, a wider context exists within which one can study singularities of the above types. In the present paper, we extend the results of \cite{dFH09} still further, to include demi-normal varieties without a $\mathbb{Q}$-Cartier canonical class.
Our secondary aim is to give an introduction to the study of those non-normal varieties that appear naturally in the Minimal Model Program, culminating with the extension of de Fernex and Hacon's results to these varieties.
\tableofcontents
\begin{section}{Introduction}
Let us first recall briefly the birational classification of surfaces. Birationally, a surface $X$ is nonsingular and projective (by theorems of Nagata, Chow, and Hironaka). Certain rational curves on $X$ (the (-1)-curves) can be contracted, giving a birational morphism $X \rightarrow X'$ to another (nonsingular and projective) surface. This process stops after finitely many steps, yielding a ``relatively minimal model" of $X$. Dealing with the rational and ruled surfaces as separate cases, there exists a unique such model, called the minimal model of $X$. It is characterized uniquely by the property that $K_{X'}$ is nef (numerically effective).
The Minimal Model Program (MMP, or Mori's program) extends the birational classification of surfaces to higher dimensions; that is, it looks for a way to find a birational model $X'$ where $K_{X'}$ is nef. It is complete in dimension three, and large parts are complete in all dimensions. It is hoped that having a nef canonical divisor will allow further useful steps in birational classification.
In fact, on a different scale, MMP (if true) is used in moduli theory to reduce to cases where the canonical class is generated by its global sections. Then one reduces to cases of ``general type," where there exists a birational morphism from $X$ to its canonical model $Y := \textnormal{Proj }\oplus_{m \geq 0} H^0(X, mK_X).$ Then the Hilbert polynomial of the canonical divisor of $Y$ is the principal discrete invariant used in constructing moduli spaces.
Birationally, $X$ is smooth and projective. However, MMP introduces singularities as it finds a nef birational model. Thus the canonical model may have some singularities. These singularities are well-understood in certain cases. We attempt to resolve issues dealing with the remaining cases.
Letting $Y$ be the canonical model of $X$, $K_Y$ is used to measure how singular the canonical model is. From a resolution of singularities $f: Y' \rightarrow Y$, one looks at the coefficients of the exceptional divisors in $K_{Y'} \sim f^*K_Y + \Sigma a_iE_i.$ The discrepancy is the minimum of the coefficients $a_i$. Depending on whether the discrepancy is at least -1, greater than -1, at least 0, or greater than 0, the canonical model has log canonical, log terminal, canonical, or terminal singularities.
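To fix ideas with a standard example: the blow-up of the ordinary double point $Y = \{x^2+y^2+z^2=0\} \subset \mathbb{A}^3$ at the origin is a resolution $f: Y' \rightarrow Y$ with a single exceptional curve $E$, and $K_{Y'} \sim f^*K_Y + 0\cdot E$. The discrepancy is $0$, so this Du Val singularity is canonical but not terminal. The analogous three-fold double point $\{x^2+y^2+z^2+w^2=0\} \subset \mathbb{A}^4$ has a single exceptional divisor with coefficient $a=1$, and is therefore terminal.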
Two issues immediately arise. First, if we want to compare divisors on $Y$ with those on $Y'$, we need $f$ to be an isomorphism over the codimension one points of $Y$. For varieties where $K_Y$ makes sense (in particular, assuming Serre's $S_2$ property), this is the same as saying that $Y$ is normal. Second, provided that we can talk reasonably about $K_Y$, we still need to be able to pull it back. This is possible if $K_Y$ is $\mathbb{Q}$-Cartier, and until recently this has been a standard assumption.
These two assumptions can be relaxed independently of one another in order to deal with more general varieties. That is, one might want to deal with non-normal varieties, and one might want to dispense with the $\mathbb{Q}$-Cartier hypothesis on $K_Y$. The non-normal varieties studied in the context of MMP are the demi-normal varieties. These are varieties with properties $G_1, S_2,$ and seminormality. (Note that a normal variety is one that is $R_1$ and $S_2$, and that $G_1$, or Gorenstein in codimension one, is slightly weaker than $R_1$. We will discuss these properties in the second section of this paper.) Equivalently, these are varieties with Serre's $S_2$ property, and such that the codimension one singularities are of the simplest possible type, being double normal crossings. For the second assumption, de Fernex and Hacon have solved the problem of pulling back $K_Y$ on a normal variety when $K_Y$ is not $\mathbb{Q}$-Cartier. In fact, they construct a boundary divisor $\Delta$ called an $m$-compatible boundary and study the singularities of the pair $(Y, \Delta)$; they also show that the resulting class of singularities corresponds with the usual class when $K_Y$ is $\mathbb{Q}$-Cartier.
It is our primary goal in this paper to resolve these two issues simultaneously. In other words, we follow the lead of de Fernex and Hacon and dispense with a $\mathbb{Q}$-Cartier canonical class, in our case treating demi-normal varieties.
There are technical reasons why we cannot deal with arbitrary non-normal varieties. For instance, to be able to talk about Weil divisors on a non-normal variety, we need to assume properties $G_1$ and $S_2$. Furthermore, if we plan on somehow resolving the singularities of a non-normal variety, then it seems we need to assume seminormality as well. Hironaka's resolution of singularities is too strong for non-normal varieties, at least if we want an isomorphism over the codimension one structure. Koll\'ar solves this problem by showing that a semiresolution exists for demi-normal varieties. There are some singularities, but they are only of two types and easily described. In fact, the only singularities on a semiresolution are double points, either double normal crossings (locally, solutions of $xy = 0$) or pinch points (local solutions of $x^2 - y^2z = 0$).
We resolve both issues by following de Fernex and Hacon with Koll\'ar's semiresolutions in mind. The main result in this direction is the following generalization of a theorem of de Fernex and Hacon:
\begin{theorem} Let $X$ be a demi-normal variety. Then for any semiresolution $f: Y \rightarrow X$, $m$-compatible boundaries exist with respect to $f$, for all sufficiently divisible positive integers $m$. \end{theorem}
We first define a generalized pullback that works for all Weil divisors on a demi-normal variety. Then we show that the resulting theory is equivalent to the theory of pairs. The crux of the matter is that we can assume that a boundary divisor exists for $X$, with the property that the singularities of the resulting pair (as they are already understood) are the same as the generalized singularities of de Fernex and Hacon, interpreted for demi-normal varieties.
There is some surprising behavior in the resulting classes of singularities. In fact, we show that following the definitions properly leads to the existence of a semi-canonical variety that is not semi-log terminal. This is evidently impossible when dealing with varieties with a $\mathbb{Q}$-Cartier canonical class. Nevertheless, our example shows how new characteristics arise when relaxing this hypothesis.
The paper is organized as follows. In the second section, we review semiresolutions of demi-normal varieties and discuss why ``demi-normal" is the correct notion of a non-normal variety in the context of MMP. Next, we present the theory of divisors on varieties with properties $G_1$ and $S_2$. It turns out that if we consider only rank one reflexive sheaves, which correspond to Weil divisors on this class of varieties, we obtain enough flexibility to pull back arbitrary (not necessarily $\mathbb{Q}$-Cartier) Weil divisors. Working with sheaves turns out to be easier than working with divisors. We also discuss the standard formation of the canonical or dualizing class on a variety with the $G_1$ and $S_2$ properties. This is essential to developing a definition of singularities that extends the standard definitions in MMP. In the fourth section, we give our proof of Theorem 1.1. In the following section, we give new definitions of singularities of demi-normal varieties without a $\mathbb{Q}$-Cartier canonical class, following the definitions of de Fernex and Hacon for normal varieties without such a class. Finally, we give an example of a variety that is semi-canonical but not semi-log terminal. One may wonder whether a different definition of singularities is perhaps better, or whether this type of unexpected behavior is a result of properties of arbitrary demi-normal varieties themselves. \end{section}
\begin{section}{Semiresolutions} This section contains definitions of properties assumed to hold on the varieties that we study. At the end, we work out an example of a semiresolution, demonstrating the standard technique of finding a semiresolution by finding a resolution of the normalization and gluing along the birational transform of the conductor.
We assume that $X$ is a reduced, equidimensional, quasi-projective variety over an algebraically closed field of characteristic zero. In particular, the normalization is finite and resolution of singularities is true for $X$.
We also assume three conditions on the local rings of $X$: (i) $S_2$, (ii) $G_1$, and (iii) $SN$. Equivalently, $X$ is $S_2$ and there is an open subvariety $U$, whose complement has codimension at least two, such that for any closed singular point $x \in U$, $\mathcal{O}_{X,x}^* \cong k[[x_1, \ldots, x_n]]/(x_1x_2),$ where the star denotes completion. Such varieties are called demi-normal.
Next, we will define these properties and give examples to show how they are distinct and related to one another.
We say that a local ring is $S_n$ if its depth at the maximal ideal is at least the minimum of $n$ and its dimension: $$\textnormal{depth } \mathcal{O}_{X,x} \geq \textnormal{min } \{n, \textnormal{dim } \mathcal{O}_{X,x} \}.$$ It is Cohen-Macaulay if it is $S_n$ for all $n$, or equivalently (noting that the depth is always bounded above by the dimension), if the depth is equal to the dimension. In more advanced terms, the dualizing complex exists for our varieties, and being Cohen-Macaulay means that the dualizing complex is quasi-isomorphic to a complex concentrated in a single term. We'll say more about the dualizing complex and dualizing sheaf in the following sections. The condition $S_2$ in particular has the geometric interpretation that a function regular in codimension one is regular everywhere. It is a base condition that allows us to compare $X$ with its normalization by looking just at the codimension one structure of $X$.
A local ring is Gorenstein if it is Cohen-Macaulay and the dualizing sheaf (meaning in this case the single term in the dualizing complex) is invertible. Any smooth point has a corresponding local ring that is Gorenstein. In fact, we have the following implications: $$\textnormal{complete intersection } \implies \textnormal{Gorenstein } \implies \textnormal{Cohen-Macaulay }.$$ So hypersurfaces are also Gorenstein, and we have a large class of singular, Gorenstein local rings given by the local rings of hypersurface singularities. Also, any Gorenstein variety (one whose local rings are all Gorenstein) trivially has $S_2$. The condition $G_1$ means Gorenstein in codimension one; in other words, at any codimension one or codimension zero point, the local ring is Gorenstein.
We say that a ring extension $A \hookrightarrow B$ is subintegral if it is finite, a bijection on prime spectra, and each residue field extension $k(p) \hookrightarrow k(q)$ is an isomorphism (here $k$ is the base field described above). For any finite extension $A \hookrightarrow B$, there exists a maximal subintegral extension of $A$ in $B$, called the seminormalization of $A$ in $B$. We say that $A$ is seminormal (SN) in $B$ if it equals its seminormalization in $B$. We call a reduced ring $A$ seminormal if it is seminormal in its normalization. A variety $X$ is seminormal if its local rings are all seminormal. Equivalently, any proper bijection to $X$ is an isomorphism.
Of course, any normal variety is $S_2$, $G_1$, and seminormal. In fact, normality is characterized by the two conditions $S_2$ and $R_1$, or regular in codimension one, and clearly $R_1$ is stronger than $G_1$ and every normal ring is seminormal. The typical model for a demi-normal variety is that of a simple normal crossing divisor in a smooth variety. We think of demi-normal varieties as being of the very simplest singular type in codimension one, with regular functions in codimension one being regular everywhere. Several other examples will be given throughout this paper.
The two conditions $S_2 + G_1$ together imply that rank one reflexive sheaves determine subschemes of pure codimension one in $X$ (Weil divisors in the normal case). The conditions $S_2 + SN$ imply that the normal locus is a reduced subscheme of pure codimension one. Finally, the conditions $G_1 + SN$ imply that at each codimension one localization $R$, the maximal ideal $m$ is the radical of the normalization $\overline{R}$, and the $R/m$-dimension of $\overline{R}/m$ is exactly two.
\begin{example} For a variety that is $S_2$ and $G_1$ but not seminormal, take any hypersurface that is not seminormal. For instance, the cusp $k[x,y]/(x^2 - y^3)$. Its normalization is the affine line, and maps finitely and bijectively onto the cusp, but is clearly not an isomorphism. \end{example}
\begin{example} For a variety that is $S_2$ and SN but not $G_1$, consider the three coordinate axes in 3-dimensional affine space. This is reduced and one-dimensional, hence $S_2$ (just $S_1$ is sufficient), and seminormal by \cite{LV81}. However, the singularity at the origin (a codimension one point on a curve) is not a double normal crossing, so the local ring there is not Gorenstein. \end{example}
\begin{example} Finally, for a variety that is $G_1$ and SN but not $S_2$, consider two planes in 4-space meeting at a point. The coordinate ring is $k[x,y,z,w]/((x,y) \cap (z,w)).$ This is $G_1$ because it is smooth except at the origin, which is a point of codimension two. SN can be proved directly using the following condition: a ring $A$ is seminormal in $B$ if for every $b \in B$, $b^2, b^3 \in A$ implies $b \in A$. It is not $S_2$ because at the origin, the element $x+z$ forms a maximal regular sequence: the depth at the origin is exactly one. \end{example}
We call a variety $X$ semismooth if for any closed singular point $x$, the completion $\mathcal{O}_{X,x}^*$ is isomorphic to either $k[[x_1, \ldots, x_n]]/(x_1x_2)$ or $k[[x_1, \ldots, x_n]]/(x_1^2-x_2^2x_3).$ The latter type of singularity is called a pinch point. Pinch points are notorious because blowing up at the origin creates another pinch point; the only way to resolve the singularity is to blow up the entire double locus. Away from the origin, the local rings of a pinch point are double normal crossings, and a pinch point in dimension two can be obtained from a double normal crossing point as a quotient singularity. Both of these singularities are simple examples of partition singularities. See \cite{vS87}.
The normalization of a semismooth variety is smooth. The double locus of $X$ and its preimage in the normalization $\overline{X}$ are both smooth, and the induced morphism on the double loci is a double cover, ramified along the pinch locus. Note that a variety is smooth if all its closed points are smooth (that is, the local rings are all regular local rings), because the localization of a regular local ring is regular. Also, a local ring is regular if and only if its completion is also a regular local ring. This is why we can restrict our attention to completions of local rings at closed points in the definition of semi-smoothness.
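For the pinch point this picture can be made completely explicit; the parametrization below is one standard choice:

```latex
% R = k[x_1,x_2,x_3]/(x_1^2 - x_2^2 x_3), normalization
% \overline{R} = k[u,v] via x_1 = uv, x_2 = u, x_3 = v^2,
% which satisfies (uv)^2 = u^2 \cdot v^2.
% Double locus of X: the x_3-axis (x_1 = x_2 = 0); its preimage in
% \overline{X} = \mathbb{A}^2_{u,v} is the line (u = 0), and the induced map
\[
(u = 0) \cong \mathbb{A}^1_v \longrightarrow (x_1 = x_2 = 0) \cong \mathbb{A}^1_{x_3},
\qquad v \longmapsto v^2,
\]
% is a double cover ramified exactly over the pinch point at the origin.
```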
Given a demi-normal variety $X$ and an open subvariety $U$ where the only singularities are double normal crossings, there is a morphism $f: Y \rightarrow X$ with the following properties: \begin{itemize} \item{$f$ is proper and birational},
\item{$f|_{f^{-1} U}$ is an isomorphism}, \item{$Y$ is semismooth}, \item{no component of the double locus of $Y$ is exceptional}. \end{itemize}
We call such a morphism a semiresolution of $X$. The existence of $f$ (in fact, a stronger form of semiresolution, where we incorporate divisors on $X$, also exists, given reasonable conditions on the divisor) has been proved by Koll\'ar in his paper ``Semi Log Resolutions." Here $Y$ is not normal, but every exceptional divisor determines a discrete valuation ring at its generic point. This enables us to pull back Weil divisors on $X$ (subschemes determined by rank one reflexive sheaves) via semiresolutions by considering the order of vanishing in each such discrete valuation ring.
Since $Y$ is not smooth, it does not everywhere admit a local system of parameters. However, $\overline{Y}$ is smooth, and we obtain our model for a normal crossing divisor on $Y$ as the gluing along the double locus of a normal crossing divisor on $\overline{Y}$ that makes transverse intersections with the double locus. Equivalently, the local models for a global normal crossing divisor $D$ on a semismooth variety $Y$ are as follows:
When $Y$ is given by $x_1x_2=0$ in $\mathbb{A}^n_{x_1, \ldots, x_n}$, the local model for $D$ is $D = (\Pi_{i \in I} x_i = 0)$, for some set $I \subseteq \{3, \ldots, n \}$. For the pinch point, given locally (analytically) by $x_1^2-x_2^2x_3 = 0$, the local model is $D = (\Pi_{i \in I} x_i = 0) + D_2$, for some $I \subseteq \{4, \ldots, n \}$ and where either $D_2 = 0$ or $D_2 = (x_1 = x_3 = 0).$
The analog of a log resolution of a pair is a semiresolution with the property that the exceptional divisors, the double locus, and the birational transform of the given divisor form a global normal crossing divisor. When $X$ is demi-normal with an open subvariety $U$ as described above, and $D$ is a divisor such that $D|_U$ is smooth and disjoint from the singular locus of $U$, a semi log resolution of $(X,D)$ exists. The idea is to pass to the normalization, find a log resolution of singularities, and then glue along the birational transform of the double locus.
The main tool in constructing semiresolutions is the universal pushout. Given a variety $Y$, a closed subscheme $B$, and a finite morphism $B \rightarrow B/ \tau$ (we use the language of a quotient singularity, because that is often the form of a given finite morphism from a closed subscheme of $Y$), there is a commutative diagram $$\begin{CD} B @>>> Y \\ @VVV @VVV \\ B/\tau @>>> X \\ \end{CD}$$ where both horizontal arrows are inclusions of closed subschemes. The resulting morphism $Y \rightarrow X$ to the universal pushout is finite. It is also birational; in fact, it agrees with $B \rightarrow B/\tau$ on $B$ and is an isomorphism elsewhere. The pushout occurs naturally in connection with the normalization of a variety. In fact, when $X$ is a variety and $Y$ is its normalization, then one obtains $X$ from $Y$ as a universal pushout by gluing along the conductor loci. (When we say gluing we just mean constructing the pushout; $Y$ is ``glued" to $X$ along the morphism $B \rightarrow B/\tau.$) The universal pushout does indeed have a universal property. In fact, given morphisms $B/\tau \rightarrow X_0$ and $Y \rightarrow X_0$ that pull back to the same morphism from $B$, there exists a unique morphism $X \rightarrow X_0$ commuting with the given morphisms. Finally, the universal pushout and pullback are related in the usual scheme-theoretic way: given affine schemes, their pushout is the affine scheme whose coordinate ring is the pullback of the coordinate rings of the given schemes.
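A familiar one-dimensional instance of this gluing, included as a sanity check, identifies two points of the affine line to produce a nodal cubic:

```latex
% Y = Spec k[t], B = \{1, -1\} \subset Y, B/\tau = a single point.
% The universal pushout X has coordinate ring the pullback
\[
\mathcal{O}_X = \{ f \in k[t] : f(1) = f(-1) \}
             = k[\, t^2 - 1, \; t(t^2 - 1) \,],
\]
% and setting x = t^2 - 1, y = t(t^2 - 1) gives the single relation
\[
y^2 = t^2 (t^2 - 1)^2 = (x + 1)\, x^2,
\]
% so O_X = k[x,y]/(y^2 - x^2(x+1)): the nodal cubic.  The map Y -> X is
% finite and birational, agrees with B -> B/\tau on B, and is an
% isomorphism elsewhere, exactly as described above.
```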
We close this section with an example of a semiresolution.
\begin{example} Consider the surface defined by $k[x,y,z]/(xy)$, and let $\mathbb{Z}_2$ act on it by $x \mapsto -x, y \mapsto -y, z \mapsto -z$. The resulting quotient is a non-normal variety $X$ with two normal components. A semiresolution is obtained by resolving the components and gluing along the birational transform of the line of intersection. \end{example} By definition, the hyperquotient singularity $X$ is obtained as a quotient of a ring of invariants. If $\mathbb{Z}_2$ acts on $k[x,y,z]$ as stated, the ring of invariants is $k[x^2,xy,y^2,xz,yz,z^2]$. We get the coordinate ring of $X$ by dividing out the intersection of the ideal $(xy)$ with this ring. The intersection ideal has generators $xy,x^2y^2,xyz^2,x^2yz,$ and $xy^2z$. Therefore, in terms of generators and relations, the coordinate ring $\mathcal{O}_X$ is given by $$k[u_0,u_1,u_2,u_3,u_4,u_5]/(I + J),$$ where $$I = (u_0u_2-u_1^2, u_2u_5-u_4^2, u_0u_5-u_3^2,u_1u_3-u_0u_4, u_3u_4-u_1u_5)$$ and $$J = (u_1,u_0u_2,u_3u_4,u_0u_4,u_2u_3).$$ If we simplify, then we obtain $k[u_0,u_2,u_3,u_4,u_5]$ modulo the ideal $$(u_2u_5-u_4^2,u_0u_5-u_3^2,u_0u_2,u_3u_4,u_0u_4, u_2u_3).$$ Note that the spectrum of this ring has two components, given by $(u_0=u_3=0)$ and $(u_2=u_4=0)$. Each of these defines a quadric cone, and the cones are identified along the line $\mathbb{A}^1 \cong \textnormal{Spec } k[u_5]$. We can obtain the same ring as a pullback, where the maps $c$ and $d$ are the quotient maps: $$\begin{CD} \mathcal{O}_X @>>> k[u_0,u_3,u_5]/(u_0u_5-u_3^2)\\ @VVV @VcVV \\ k[u_2,u_4,u_5]/(u_2u_5-u_4^2) @>d>> k[u_5] \end{CD}$$ If we compute the $\mathbb{Z}_2$-quotient of each component of $k[x,y,z]/(xy)$ and then glue, we get $\mathcal{O}_X$. In fact, the quadric cone is a $\mathbb{Z}_2$-quotient of affine two-space $\mathbb{A}^2$.
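As a concrete check that the simplified ideal really cuts out the union of the two quadric cones, one can compare it with the intersection $I_1 \cap I_2$ of the component ideals, computed via the standard elimination trick $I_1 \cap I_2 = (t\,I_1 + (1-t)\,I_2) \cap k[u_0,\ldots,u_5]$. The following sketch uses SymPy's Gr\"obner bases (the variable names are ours, chosen to match the text):

```python
from sympy import symbols, groebner

t, u0, u2, u3, u4, u5 = symbols('t u0 u2 u3 u4 u5')
U = (u0, u2, u3, u4, u5)

# Ideals of the two quadric-cone components of X.
I1 = [u0, u3, u2*u5 - u4**2]   # component (u0 = u3 = 0)
I2 = [u2, u4, u0*u5 - u3**2]   # component (u2 = u4 = 0)

# Elimination trick: I1 ∩ I2 = (t*I1 + (1-t)*I2) ∩ k[u0,...,u5],
# computed from a lex Groebner basis that eliminates t first.
G = groebner([t*g for g in I1] + [(1 - t)*g for g in I2],
             t, *U, order='lex')
intersection = [g for g in G.exprs if not g.has(t)]

# The simplified ideal of O_X given in the text.
stated = [u2*u5 - u4**2, u0*u5 - u3**2,
          u0*u2, u3*u4, u0*u4, u2*u3]

G_stated = groebner(stated, *U, order='lex')
G_inter = groebner(intersection, *U, order='lex')

# Mutual containment: the stated ideal equals I1 ∩ I2.
assert all(G_stated.contains(g) for g in intersection)
assert all(G_inter.contains(g) for g in stated)
```

The two assertions confirm that the six stated generators present exactly the coordinate ring of the glued pair of cones.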
The components of $X$ are both normal, and each of their singularities is resolved by blowing up the origin. In each of these two blowups, the birational transform of the intersection line $\mathbb{A}^1$ appears in only one chart. Gluing over the corresponding ring $k[u_5']$, we get the pullback diagram $$\begin{CD} \mathcal{O}_Y @>>> k[u_3',u_5'] \\ @VVV @Vc'VV \\ k[u_4',u_5'] @>d'>> k[u_5'] \end{CD}$$ Here $\mathcal{O}_Y$ is $k[u_3',u_4',u_5']/(u_3'u_4')$. On the other charts, no gluing takes place and the variety remains smooth. Thus $Y$ has only double normal crossing singularities and is semismooth.
Globally, $Y$ is the pushout $$\begin{CD} \tilde{\mathbb{A}}^1 @>>> Y_2 \\ @VVV @VVV \\ Y_1 @>>> Y \end{CD}$$ Here $Y_1$ and $Y_2$ are the resolutions of the components $X_1 := (u_0=u_3=0)$ and $X_2 := (u_2 = u_4 =0)$ of $X$, and $\tilde{\mathbb{A}}^1$ is the birational transform of the intersection of $X_1$ and $X_2$. The universal property of the pushout implies that $f: Y \rightarrow X$ exists. The open sets of $X$ can be identified with pairs of open sets that have the same pullback to $\mathbb{A}^1$. Then $Y \rightarrow X$ is an isomorphism over $(U_1,U_2)$, where $U_i$ is the complement of the origin in $X_i$. By construction, the singular locus of $Y$ is the birational transform $\mathbb{A}^1$, which is not exceptional.
It needs to be checked that $f$ is proper; this can be done using the valuative criterion of properness. \end{section} \begin{section}{Pullbacks of Divisors} Most of the discussion in this section follows \cite{Hart94} and \cite{dFH09}. The only real novelty comes in combining those results to develop a theory of pullbacks of divisors on a demi-normal variety under semiresolutions.
The two conditions $S_2$ and $G_1$ imply that rank one reflexive sheaves correspond to subschemes of pure codimension one. If $\mathcal{I}$ is a rank one reflexive ideal sheaf, there is an exact sequence $$0 \rightarrow \mathcal{I} \rightarrow \mathcal{O}_X \rightarrow \mathcal{O}_Y \rightarrow 0,$$ where $Y$ is the subscheme determined by $\mathcal{I}$. The point is that on a scheme with $S_2$ and $G_1$, a sheaf is reflexive if and only if it is $S_2$ (the same definition as for rings, except the depth of a module over a local ring is considered); thus forming the reflexive hull is an $S_2$-ification. Then the depth at each local ring of dimension at least two is at least two. The long exact sequence in local cohomology begins with $$0 \rightarrow H^0_x(X, \mathcal{I}_x) \rightarrow H^0_x(X, \mathcal{O}_{X,x}) \rightarrow H^0_x(X, \mathcal{O}_{Y,x}) \rightarrow H^1_x(X, \mathcal{I}_x) \rightarrow \cdots.$$ Then the vanishing of the local cohomology modules in dimensions smaller than the depth implies that $Y$ has associated points of pure codimension one. In other words, $Y$ is a divisor. An arbitrary rank one reflexive sheaf embeds in the sheaf of total quotient rings, and can be multiplied by an invertible sheaf so that it becomes an ideal sheaf. Thus rank one reflexive sheaves correspond to divisors, where we interpret some of the coefficients as negative.
If $f: Y \rightarrow X$ is a birational morphism, and $D$ is a divisor on $X$, then the reflexive hull of the inverse image ideal sheaf, or $(\mathcal{O}_X(-D)\cdot \mathcal{O}_Y)^{\vee\vee}$ is again of rank one and reflexive, and the natural pullback, denoted $f^{\flat}D$, is determined by setting $\mathcal{O}_Y(-f^{\flat}D)$ equal to this sheaf. The natural pullback is additive if one of the summands is Cartier. We let addition of divisors correspond to the reflexive hull of the product of the sheaves, the product being taken in the sheaf of total quotient rings. However, we don't have additivity when one of the summands is only $\mathbb{Q}$-Cartier. This is the same problem that occurs when we try to pull back Weil divisors on normal varieties. The group laws of the group of divisors are not respected, and so we need to come up with a more general type of pullback that preserves the group structure. To remedy this problem, we make use of discrete valuation rings corresponding to generic points of divisors on a normal variety. When $X$ is only $G_1$, not every divisor corresponds to a discrete valuation ring; however, most of our birational morphisms are semiresolutions, and so every exceptional divisor does in fact correspond to such a ring (recall that no singular component on a semiresolution is ever exceptional).
Let $v$ be a discrete valuation associated to a codimension one point of $X$ (if such a point exists), and let $\mathcal{J}$ be a coherent $\mathcal{O}_X$-submodule of $\mathcal{K}_X$ that is nonzero on the unique component of $X$ containing the center of $v$. The valuation $v(\mathcal{J})$ of $\mathcal{J}$ along $v$ is defined to be $$v(\mathcal{J}) := \textnormal{min} \{v(\phi) : \phi \in \mathcal{J}(U \cap X_i), \ U \cap c_{X_i}(v) \neq \emptyset \}.$$ In particular, when $v$ is associated to an exceptional divisor $E$ of a semiresolution, the valuation of an invertible sheaf is the coefficient of $E$ in the corresponding Cartier divisor.
Let $v$ be a valuation of a component $X_i$ of $X$. Let us temporarily write $v^{\flat}(D)$ for the valuation, described above, of the rank one reflexive sheaf corresponding to a divisor $D$. If $\mathcal{J}$ is any rank one reflexive sheaf, associated to a divisor $D$ and nonzero on $X_i$, then $$\inf_{k \geq 1} \frac{v^{\flat}(kD)}{k} = \liminf_{k \rightarrow \infty} \frac{v^{\flat}(kD)}{k} = \lim_{k \rightarrow \infty} \frac{v^{\flat}(k!D)}{k!} \in \mathbb{R}.$$ The reason the limit is finite is that $\mathcal{O}_X(D) \subseteq \mathcal{O}_X(C)$ for some Cartier divisor $C$ when $X$ is quasi-projective. This suggests a definition, as in \cite{dFH09}: for any divisor $D$ and a discrete valuation $v$ associated to an appropriate codimension one point, the valuation of $v$ along $D$ is $$v(D) := \lim_{k \rightarrow \infty} \frac{v^{\flat}(k!D)}{k!} \in \mathbb{R}.$$
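A standard example, recorded here as a sketch, shows both why $v^{\flat}$ fails to be additive and why the limiting valuation is needed: take a ruling of the quadric cone.

```latex
% X = Spec k[x,y,z]/(xy - z^2), D = the ruling (x = z = 0),
% E = the exceptional divisor of the blowup of the vertex, v = ord_E.
% D is not Cartier, but 2D = div(x) is, and one computes
\[
v^{\flat}(kD) = \left\lceil \tfrac{k}{2} \right\rceil,
\qquad\text{so}\qquad
v(D) = \lim_{k \to \infty} \frac{v^{\flat}(k!\,D)}{k!} = \frac{1}{2}.
\]
% In particular v^\flat(D) + v^\flat(D) = 2 \neq 1 = v^\flat(2D),
% while the limiting valuation satisfies v(D) + v(D) = v(2D) = 1.
```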
This allows us to define a pullback under a semiresolution $f: Y \rightarrow X$: the pullback of a divisor $D$ on $X$ is the divisor $$f^*D := f^{-1}_*D + \sum_E v_E(D) \cdot E,$$ where $f^{-1}_*D$ is the birational transform of $D$ and the sum is over the exceptional divisors $E$, with the valuation $v_E$ corresponding to $E$ as just described.
This generalized pullback is additive if one of the summands is $\mathbb{Q}$-Cartier. In particular, it generalizes the notion of the pullback of a $\mathbb{Q}$-Cartier divisor; in that case, we chose a multiple $mD$ that is Cartier, pulled back the local equations, and then divided by $m$. We will be able to use this generalized pullback to define singularities on demi-normal varieties without a $\mathbb{Q}$-Cartier canonical class. Again, it should be noted that we measure the singularity of $X$ by comparing the canonical classes of $X$ and that of one of its semiresolutions. To make this comparison, it is imperative that $K_X$ can be pulled back under a semiresolution, and for our definitions to agree with the usual case where $K_X$ is $\mathbb{Q}$-Cartier, we need the pullback to be the same as the usual pullback when the canonical class is $\mathbb{Q}$-Cartier.
This prompts a question: what exactly is the canonical class $K_X$? The most general definition is as follows: a quasi-projective variety over $k$ possesses a dualizing complex. Then the lowest cohomology of this complex is a coherent sheaf. When $X$ is $S_2$, the so-called dualizing sheaf is also $S_2$ (this is a general fact that can be proved using the definition alone). If $X$ is also $G_1$, then by definition the dualizing sheaf is invertible in codimension one, and it is reflexive since on varieties with $S_2$ and $G_1$, a sheaf is $S_2$ if and only if it is reflexive. Thus the dualizing sheaf corresponds to a divisor (actually, a class of divisors, equivalence classes being defined up to multiples by elements of $\mathcal{K}_X$). We call the corresponding divisor class $K_X$. A more practical definition is to form the dualizing sheaf on an open subvariety $U$ containing all the codimension one points, that sheaf being defined by differential forms for hypersurface singularities, push the sheaf forward to $X$, and take the corresponding divisor class on $X$. In fact, with $S_2$ and $G_1$, a reflexive sheaf is completely determined by its behavior in codimension one. The point is that we know how to form the canonical class on any smooth variety, or any variety with only hypersurface singularities, such as the double normal crossing locus in a demi-normal variety, and that with appropriate conditions on $X$, we can push forward to all of $X$.
We now have all the ingredients we need to state and prove Theorem 1.1. We do this in the next section, and in the following section, we make our definitions of singularities on demi-normal varieties without a $\mathbb{Q}$-Cartier canonical class. There are three relative canonical divisors that will be useful to us; here $Y \rightarrow X$ is a semiresolution of $X$:
\begin{itemize} \item{the $m$-th limiting relative canonical divisor $K_{m, Y/X} := K_Y - \frac{1}{m}f^{\flat}(mK_X)$,} \item{the relative canonical divisor $K_{Y/X} := K_Y + f^*(-K_X)$,} \item{for a boundary divisor $\Delta$ such that $K_X + \Delta$ is $\mathbb{Q}$-Cartier, and $\Delta_Y$ its birational transform, the log relative canonical divisor $$K^{\Delta}_{Y/X} := K_Y + \Delta_Y - f^*(K_X + \Delta).$$} \end{itemize}
We may take the lim sup over $m$ of the coefficients of $K_{m,Y/X}$ to obtain the divisor $K^{-}_{Y/X}.$ Then for each $m$ such that $m(K_X + \Delta)$ is Cartier, and all sufficiently large $q$, we have $$K^{\Delta}_{Y/X} \leq K_{m,Y/X} \leq K_{mq, Y/X} \leq K^{-}_{Y/X} \leq K_{Y/X}.$$ See \cite{dFH09} for a proof of these facts in the normal case; the demi-normal case is proved in the same way. \end{section}
\begin{section}{Existence of $m$-Compatible Boundaries} In this section, we prove Theorem 1.1. In order to do so, we must first show that semi log resolutions exist for suitable pairs. Koll\'ar discusses existence of semi log resolutions in his paper ``Semi Log Resolutions." In particular, given a demi-normal variety $X$ with double normal crossing locus $U$, a divisor $D$ must be smooth and disjoint from the singular locus on $U$ in order for semi log resolutions to exist. In our case, we begin only with a collection of rank one reflexive sheaves, invertible in codimension one, and it is not immediately clear that we can obtain a semi log resolution for the associated divisor on $X$. However, by choosing sections appropriately, we can arrange for a semi log resolution to exist. In proving the main theorem, we follow the proof of \cite{dFH09}, modified suitably for demi-normal varieties. The difficult part is to pass to the normalization, apply typical results for normal varieties, and then glue along the conductor. It is important that the normalization of a semismooth variety is smooth. In fact, given $X$ and a semiresolution $Y \rightarrow X$, the induced morphism on normalizations $\overline{Y} \rightarrow \overline{X}$ is a resolution of singularities. This gives us the freedom to work with normal and smooth varieties and glue in order to obtain results on demi-normal and semismooth varieties.
We start with a lemma.
\begin{lemma} Suppose that $\mathcal{J}$ is a divisorial sheaf, invertible in codimension one. Then there exists a semi log resolution of $(X, \mathcal{J})$. \proof The point here is that the associated divisor needs to have a nice restriction to an open semismooth subscheme. So we need to be able to choose a sufficiently general global section to make everything work.
Choose a very ample sheaf $\mathcal{L}$ such that $\mathcal{L} \otimes \mathcal{J}^{\vee}$ is generated by global sections. Dualizing via a nondegenerate global section shows that $\mathcal{L}^{\vee} \otimes \mathcal{J}$ is isomorphic to an ideal sheaf. We start by blowing up this ideal sheaf. Then $(\mathcal{L}^{\vee} \otimes \mathcal{J}) \cdot \mathcal{O}_Y$ is invertible. In particular, since $\mathcal{L}$ is invertible, so is $\mathcal{J} \cdot \mathcal{O}_Y$. The blowup is an isomorphism in codimension one by the hypothesis on $\mathcal{J}$. Moreover, any subsequent morphisms will preserve the invertibility condition. We would like to replace $\mathcal{J}$ by $\mathcal{J} \cdot \mathcal{O}_Y$, thereby assuming that $\mathcal{J}$ is invertible. Let $f: Y \rightarrow X$ be this first blowup. If $V \subset Y$ is an open set with complement of codimension at least two, then $f(V^c)$ is also closed and with codimension at least two. Thus, we can shrink $U$ if necessary so that it contains the codimension one points and so that $f^{-1}(U) \subset V$. Then if we obtain an isomorphism in codimension one starting from $Y$, then we will have the desired morphism by composing with $f$.
We denote by $U$ an open set in $X$ that is semismooth and contains the codimension one points of $X$, and we let $U^{sm}$ be the smooth locus of $U$. Choose a very ample sheaf $\mathcal{L}$ such that $\mathcal{L} \otimes \mathcal{J}^{\vee}$ is globally generated. Then $(\mathcal{L} \otimes \mathcal{J}^{\vee})|_{U^{sm}}$ is generated by images of global sections in $\Gamma(X, \mathcal{L} \otimes \mathcal{J}^{\vee})$. We would like to apply Bertini's theorem to these global sections to conclude that a general section gives a smooth divisor on $U^{sm}$. By [Hart, III.10.9.2], the theorem (stated for projective varieties) holds for quasi-projective varieties when the system is finite-dimensional. Considering $X$ as an open subset of a projective variety $\overline{X}$, the inclusions $X \stackrel{i}{\rightarrow} \overline{X} \stackrel{j}{\rightarrow} \mathbb{P}$ determine $\mathcal{L} = i^*(j^*\mathcal{O}_{\mathbb{P}}(1))$ as the restriction of a very ample divisor on $\overline{X}$. Similarly, the coherent sheaf $\mathcal{J}^{\vee}$ lifts to a coherent sheaf on $\overline{X}$, by [Hart, II.Ex.5.15]. Now on any projective variety, the global sections of a coherent sheaf form a finite-dimensional vector space [Hart, II.5.19]. Putting this all together, we may choose a very ample sheaf on $\overline{X}$ so that its tensor product with a lifting of $\mathcal{J}^{\vee}$ is globally generated. The vector space of global sections is finite-dimensional, hence so is the image of this vector space in $\Gamma(X, \mathcal{L} \otimes \mathcal{J}^{\vee})$. The image generates the sheaf on $X$. The same thing holds when restricting further to the open subscheme $U^{sm}$. We denote by $V \subset \Gamma(X, \mathcal{L} \otimes \mathcal{J}^{\vee})$ this finite-dimensional vector space of global sections that generate the sheaf $\mathcal{L} \otimes \mathcal{J}^{\vee}$.
Since $U^{sm}$ is smooth, Bertini's theorem implies that a general element of a finite-dimensional linear system without base points gives a smooth divisor on $U^{sm}$. But the restriction of $V$ to $\Gamma(U^{sm}, (\mathcal{L} \otimes \mathcal{J}^{\vee})|_{U^{sm}})$ is such a system. We conclude that there is a dense open subset of $\mathbb{P}(V)$ corresponding to divisors whose restrictions to $U^{sm}$ are smooth.
There are finitely many points of $X$ that are either generic points or singular codimension one points. For each such point $p$, the subspace $W \subset V$ of sections vanishing at $p$ is a proper subspace because the sections in $V$ generate the sheaf. The associated linear subspaces have as their union a proper subvariety of the projective space $\mathbb{P}(V)$ (a finite union of proper linear subspaces is not the entire space, since the ground field is infinite). Thus, we have two dense open subsets of $\mathbb{P}(V)$, which must therefore intersect. Replacing $U$ by a smaller open subset (still containing the codimension one points) if necessary, it follows that there exist general global sections of $\mathcal{L} \otimes \mathcal{J}^{\vee}$ whose associated divisors are smooth and disjoint from the singular locus of $U$, after restriction to $U$.
Choosing such a section $s$, dualize $\mathcal{O}_X \stackrel{s}{\rightarrow} \mathcal{L} \otimes \mathcal{J}^{\vee}$. Let $D$ be the cosupport of the resulting ideal sheaf $\mathcal{L}^{\vee} \otimes \mathcal{J}$. Following Koll\'ar, there is a semiresolution $f: Y \rightarrow X$ such that the local models of $f^{-1}_*D + \textnormal{Ex}(f)$ are as in Section 2. The inverse image ideal sheaf $(\mathcal{L}^{\vee} \otimes \mathcal{J}) \cdot \mathcal{O}_Y$ has cosupport equal to $f^{-1}D$. Thus the inverse image ideal sheaf is invertible and has the correct normal form.
We do the same construction for the invertible ideal sheaf $\mathcal{L}^{\vee}$. We can choose $\mathcal{L}$ so that it is generated by global sections. Furthermore, we can choose a general section so that the associated divisor is smooth and disjoint from both Sing$(U)$ and the cosupport of $\mathcal{L}^{\vee} \otimes \mathcal{J}$. This is possible once we have fixed a section of $\mathcal{L} \otimes \mathcal{J}^{\vee}$, since a general section of $\mathcal{L}$ is nonvanishing at a finite set of codimension one points. Thus, after possibly shrinking $U$ (and keeping all the codimension one points), we can assume that $\mathcal{L}^{\vee}$ and $\mathcal{L}^{\vee} \otimes \mathcal{J}$ are ideal sheaves whose divisors are smooth and disjoint from one another and from Sing$(U)$ on $U$.
Then the inverse images $$(\mathcal{L}^{\vee} \otimes \mathcal{J}) \cdot \mathcal{O}_Y = \mathcal{L}^{\vee} \cdot \mathcal{O}_Y \otimes \mathcal{J} \cdot \mathcal{O}_Y$$ and $\mathcal{L}^{\vee} \cdot \mathcal{O}_Y$ are both invertible. Hence $\mathcal{J} \cdot \mathcal{O}_Y$ is invertible. It corresponds to a (not necessarily effective) divisor $E$ whose components are smooth and intersect Ex$(f)$ and $C_Y$ transversally.\qed \end{lemma}
We would like a semi log resolution of a pair $(X, I)$ when all of the summands of $I$ are divisorial sheaves. If each of these is invertible in codimension one, then the same moving argument used in (4.1) implies that a semi log resolution exists. In fact, when we include more than one divisorial sheaf in $I$, we can proceed inductively to obtain sections that do not vanish at any of the generic points, the singular codimension one points of $U$, or the codimension one points of inductively-defined divisors corresponding to components of $I$. Note that for an arbitrary fractional ideal sheaf, we can always blow up to make the sheaf invertible, but the resulting morphism might not satisfy $f_*\mathcal{O}_Y = \mathcal{O}_X$.
We say that the pair $(X, I)$ is \textit{effective} if the summands of $I$ are ideal sheaves and the coefficients are nonnegative rationals.
\begin{definition} Let $(X, I)$ be an effective pair, and fix an integer $m \geq 2$. Given a semi log resolution $f: Y \rightarrow X$ of $(X, I + \mathcal{O}_X(-mK_X))$, a boundary $\Delta$ on $X$ is said to be $m$-compatible for $(X, I)$ with respect to $f$ if:
(i) $m\Delta$ is integral and $\left\lfloor \Delta \right\rfloor = 0$,
(ii) no component of $\Delta$ is contained in the cosupport of $I$,
(iii) $f$ is a semi log resolution for $(X,\Delta; I + \mathcal{O}_X(-mK_X))$, and
(iv) $K^{\Delta}_{Y/X} = K_{m,Y/X}.$ \end{definition}
Once the sheaf $\mathcal{O}_X(-mK_X)$ is invertible, pulling back commutes with composition of morphisms. Our definitions of singularities need to be independent of the semiresolution chosen, and for that we need to be able to compose semiresolutions without affecting the pullback.
If there are $m$-compatible boundaries with respect to a semi log resolution of $(X, I + \mathcal{O}_X(-mK_X))$, then we say that the pair $(X, I)$ \textit{admits $m$-compatible boundaries}. Before proving the existence of $m$-compatible boundaries, we prove the semismooth version of Bertini's theorem, which will be used in the proof that follows.
\begin{lemma} Suppose $\mathcal{J}$ is an invertible sheaf on a semismooth variety $Y$, generated by a finite-dimensional subspace $V \subseteq \Gamma(Y, \mathcal{J})$. Let $D$ be a given global normal crossing divisor on $Y$. Then a general section in $V$ produces a divisor that forms global normal crossings with $D$. \proof Let $p: \overline{Y} \rightarrow Y$ be the normalization. Since $\mathcal{J}$ is generated by global sections in $V$, there is a surjection $\bigoplus_{v \in V} \mathcal{O}_Y \rightarrow \mathcal{J}.$ Applying $p^*$ and following with the obvious surjection, we have the surjection $$\bigoplus_{p^*v} \mathcal{O}_{\overline{Y}} \rightarrow p^*\mathcal{J} \rightarrow \mathcal{J} \cdot \mathcal{O}_{\overline{Y}}.$$ Thus $\mathcal{J} \cdot \mathcal{O}_{\overline{Y}}$ is generated by the images of the $p^*v$. Given a general combination of vectors in $V$, we obtain a morphism $\mathcal{O}_Y \rightarrow \mathcal{J}$. Then ``the same" combination of the pullbacks of those vectors produces the morphism obtained by applying $p^*$ and following with the obvious surjection.
Bertini's theorem for the smooth variety $\overline{Y}$ implies that a general combination of the $p^*v$ determines a divisor that makes simple normal crossings with the preimage of $D$. Likewise, a general combination of the $v$ exhibits an effective divisor associated to $\mathcal{J}$. Since the intersection of two general conditions is still general, we obtain a divisor $D'$ on $Y$ whose preimage makes simple normal crossings with the preimage of $D$. (We need that the cosupport of $\mathcal{I} \cdot \mathcal{O}_{\overline{Y}}$ is just the preimage of the cosupport of $\mathcal{I}$, for any ideal sheaf $\mathcal{I}$.) This implies that $D'$ makes global normal crossings with $D$.\qed \end{lemma}
\begin{theorem}\textnormal{(Theorem 1.1)} Every effective pair $(X,\mathcal{I})$ where the components of $\mathcal{I}$ are rank one reflexive sheaves, invertible in codimension one, admits $m$-compatible boundaries for any $m \geq 2$. \proof We choose a very ample sheaf $\mathcal{L}$ such that $\mathcal{L}^{\vee} \otimes \omega_X$ is an ideal sheaf. By hypothesis, $\omega_X$ is invertible in codimension one, and hence the same is true for this ideal sheaf. Let $D$ be its associated effective divisor. By (4.1), there is a semi log resolution of $(X, \mathcal{O}_X(-mK_X) + \mathcal{I} + \mathcal{O}_X(-mD))$. Then $\mathcal{O}_X(-mD) \cdot \mathcal{O}_Y$ is invertible. Let $E = f^{\flat}(mD)$ be the associated effective divisor. Then $$f^{\flat}(mK_X) = f^{\flat}(mK_X -mD + mD) = f^{\flat}(mK_X -mD) + E,$$ and since $K_X -D$ is Cartier, $$K_{m,Y/X} = K_Y - f^*(K_X - D) - \frac{1}{m}E.$$
Next, we choose a very ample sheaf $\mathcal{N}$ such that $\mathcal{N} \otimes \mathcal{O}_X(-mD)$ is globally generated. A general section is also a section of $\mathcal{N}$ and produces an isomorphism $\mathcal{N}^{\vee} \otimes \mathcal{O}_X(mD) \cong \mathcal{J}$ for some ideal sheaf $\mathcal{J}$. On the divisor level, we have $G = M + mD$, where $G$ is an effective Cartier divisor associated to $\mathcal{N}^{\vee}$ and $M$ is an effective divisor associated to $\mathcal{J}$. Then $$f^*\mathcal{N}^{\vee} \cong \mathcal{N}^{\vee} \cdot \mathcal{O}_Y = (\mathcal{J} \cdot \mathcal{O}_Y) \otimes (\mathcal{O}_X(-mD) \cdot \mathcal{O}_Y).$$ Thus $\mathcal{J} \cdot \mathcal{O}_Y$ is invertible. By definition of the inverse image of a divisorial sheaf, we have $(\mathcal{J} \cdot \mathcal{O}_Y)^{\vee} \cong \mathcal{J}^{\vee} \cdot \mathcal{O}_Y$.
It follows that $f^*G = f^{\flat}(M) + f^{\flat}(mD)$, where we have used the fact that additivity holds if both pullbacks are invertible. Being an image of the sheaf $f^*(\mathcal{N} \otimes \mathcal{O}_X(-mD))$, $\mathcal{J}^{\vee} \cdot \mathcal{O}_Y$ is generated by global sections. We conclude that $f^{\flat}(M)$ corresponds to a sheaf generated by its global sections.
Write $\mathcal{M} = \mathcal{J}^{\vee} \cdot \mathcal{O}_Y$, so that $\mathcal{M}$ is an invertible sheaf, generated by global sections. As in the proof of (4.1), we may assume that $\mathcal{N}$ is the restriction of a very ample divisor on the projective closure of $X$, so that there is a finite-dimensional space of global sections generating $\mathcal{N} \otimes \mathcal{O}_X(-mD)$. Take the inverse image $\mathcal{M} \cdot \mathcal{O}_{\overline{Y}}$ and apply Bertini's theorem. A general section gives a divisor whose image in $Y$ makes global normal crossings with everything already in global normal crossing position (including the conductor). We can choose such a section sufficiently general that the corresponding divisor in $\overline{Y}$ also has no components in common with the preimage of $D$ or with the preimages of the components of $I$. Then in $X$ we necessarily have no components in common with $D$ or the components of $I$.
Therefore, we let $\Delta = \frac{1}{m}M$. Then $m\Delta$ is integral and $\left\lfloor \Delta \right\rfloor = 0$ since $M$ can be chosen reduced. Moreover, $K_X + \Delta = K_X - D + \frac{1}{m}G$ is $\mathbb{Q}$-Cartier. If again we choose a sufficiently general section and look at the normalization, then $f$ is a log resolution of the log pair $(X, \Delta; \mathcal{O}_X(-mK_X) + I)$. The final condition for $m$-compatibility follows from the computation $$K^{\Delta}_{Y/X} = K_Y + \Delta_Y - f^*(K_X + \Delta)$$ $$= K_Y + \Delta_Y - f^*(K_X + \Delta - \frac{1}{m}G) - \frac{1}{m}f^*G$$ $$= K_Y - f^*(K_X-D) - \frac{1}{m}E = K_{m,Y/X}.$$ This completes the proof.\qed \end{theorem} \end{section} \begin{section}{Singularities on Demi-Normal Varieties} Let $f: Y \rightarrow X$ be a semiresolution of a demi-normal variety $X$ (possibly without $\mathbb{Q}$-Cartier canonical class), and let $E$ be an exceptional prime divisor. For any $m \geq 0$, define $$a_{m,E} := \textnormal{ord}_E(K_{m,Y/X}) + 1 - \textnormal{val}_E(Z).$$ The pair $(X, Z)$ is semi log canonical (resp., semi log terminal) if there exists an integer $m_0$ such that $a_{m_0, E} \geq 0$ (resp., $> 0$) for every exceptional prime divisor $E$ over $X$. Define also $a_E(X,Z) : = \textnormal{ord}_E(K_{Y/X}) + 1 - \textnormal{val}_E(Z).$ The pair $(X,Z)$ is semi canonical if $a_E(X,Z) \geq 1$ for every exceptional prime divisor over $X$.
The reason for the ``discrepancy'' between the two definitions is that the important properties of the resulting class of singularities should be as in the case where $K_X$ is $\mathbb{Q}$-Cartier. We have the following results, stated without proof.
A pair $(X,Z)$ admitting semi log resolutions is semi log canonical (resp., semi log terminal) if and only if there is a boundary $\Delta$ such that $(X,\Delta;Z)$ is semi log canonical (resp., semi log terminal). In other words, we look at the order of $E$ in the log relative canonical divisor $K^{\Delta}_{Y/X}$ in place of $K_{m,Y/X}$.
Suppose $f: Y \rightarrow X$ is a semiresolution of $X$, with the property that $\mathcal{O}_X(mK_X)\cdot \mathcal{O}_Y$ is invertible for some integer $m$. Then $\textnormal{ord}_E(K^{-}_{Y/X}) > -1$ for every exceptional prime divisor $E$ of $f$ if and only if $X$ is semi log terminal.
A pair $(X,Z)$ is semi log canonical (resp., semi log terminal) only if the pair $(\overline{X}, Z \cdot \mathcal{O}_{\overline{X}})$ is log canonical (resp., log terminal) in the generalized sense of de Fernex and Hacon.
Let $X$ be demi-normal, and suppose that $Z = \Sigma a_k \cdot Z_k$ is an effective $\mathbb{Q}$-linear combination of effective Cartier divisors on $X$. Then the pair $(X,Z)$ is semi canonical if and only if for all sufficiently divisible $m \geq 1$ (in particular, we ask that $ma_k \in \mathbb{Z}$ for every $k$), and for every semi log resolution of $(X, Z + \mathcal{O}_X(mK_X))$, there is an inclusion $$\mathcal{O}_X(m(K_X + Z)) \cdot \mathcal{O}_Y \subseteq \mathcal{O}_Y(m(K_Y + Z_Y))$$ as sub-$\mathcal{O}_Y$-modules of $\mathcal{K}_Y$, where $Z_Y$ is the birational transform of $Z$.
Suppose that $f: Y \rightarrow X$ is a semi log resolution of $(X,0)$. If $\textnormal{ord}_E(K_{Y/X}) \geq 0$ for every exceptional divisor of $f$, then $X$ is semi canonical.
If the log pair $(\overline{X}, C_{\overline{X}})$ is canonical, then $X$ is semi canonical.
The proofs of these facts are not difficult. They either follow from the same proofs in \cite{dFH09}, or else from a comparison of $X$ with its normalization. Note that when $f: Y \rightarrow X$ is a semiresolution, then the induced morphism on normalizations $\overline{Y} \rightarrow \overline{X}$ is a resolution of singularities.
In the final section of this paper, we explore an example which shows that, when following the definitions given above, one can have a semi canonical demi-normal variety that is not semi log terminal. This distinction evidently arises only in the case where the canonical divisor is not $\mathbb{Q}$-Cartier. Whether it suggests that a different set of definitions is necessary is not something we investigate. We include it mainly because it illustrates the use of tools covered in the present paper. Future research will be needed to show whether there is anything inherently wrong with the present definitions. \end{section} \begin{section}{An Example} Suppose that $Y$ is a projectively-embedded variety whose canonical class is Cartier, and with semi canonical singularities. The projectivized cone $X$ has $\mathbb{Q}$-Cartier canonical class if and only if $mK_Y = nH$ for some nonzero integers $m$ and $n$, where $H$ is the very ample divisor giving the embedding of $Y$.
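As a quick check of this criterion in the situation considered below, take $Y = \mathbb{P}^1\times\mathbb{P}^1$ embedded by $H = \mathcal{O}(1,3)$; since $\textnormal{Pic}(\mathbb{P}^1\times\mathbb{P}^1) \cong \mathbb{Z}^2$ and $K_Y = \mathcal{O}(-2,-2)$, the condition $mK_Y = nH$ becomes a pair of linear equations on bidegrees:
$$m(-2,-2) = n(1,3) \iff -2m = n \ \textnormal{ and } \ -2m = 3n,$$
which forces $n = 3n$, hence $n = 0$ and then $m = 0$. The projectivized cone over this embedding therefore does not have $\mathbb{Q}$-Cartier canonical class.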
Now let $X := \mathbb{P}^1 \times \mathbb{P}^1$ be embedded by the very ample sheaf $\mathcal{M} = \mathcal{O}_X(1,3)$. We consider first a double covering of $X$ ramified over a general nonsingular curve, the curve being given by a general section of the sheaf $\mathcal{L}^{\otimes 2} $, where $\mathcal{L}= \mathcal{O}_X(0,2)$. Such curves exist by Bertini's Theorem. The double covering can be described as $$ W = \textbf{\textnormal{Spec}}_X(\mathcal{O}_X \oplus \mathcal{L}^{\vee}).$$ Next, we let $Y$ be a smooth curve in the linear system corresponding to $\mathcal{M}$, and we consider the induced double cover $$Z = \textbf{\textnormal{Spec}}_Y(\mathcal{O}_Y \oplus i^*\mathcal{L}^{\vee}).$$ There is a diagram $$\begin{CD} Z @>j>> W \\ @VqVV @VpVV \\ Y @>i>> X \\ \end{CD}.$$ Note that both $Z$ and $W$ are smooth, by \cite{KM98} 2.51, and that $j$ is a closed immersion because closed immersions are stable under base extension.
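For later use we record the canonical class of $W$. By the standard formula for a smooth double cover $p \colon W \rightarrow X$ branched along a divisor in $|\mathcal{L}^{\otimes 2}|$, we have $K_W = p^*(K_X + L)$, where $L$ denotes the divisor class of $\mathcal{L}$; in the present case this gives
$$K_W = p^*\big(\mathcal{O}_X(-2,-2) \otimes \mathcal{O}_X(0,2)\big) = p^*\mathcal{O}_X(-2,0), \qquad -K_W = p^*\mathcal{O}_X(2,0).$$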
The ample sheaf $p^*\mathcal{M}$ is in fact very ample on $W$. By this result, we have a commutative diagram $$\begin{CD} W @>p^*\mathcal{M}>> \mathbb{P}^{M+N+1} \\ @VpVV @VVV \\ X @>\mathcal{M}>> \mathbb{P}^M \\ \end{CD}.$$ All horizontal morphisms are closed immersions, all objects are smooth, and the hyperplane sections $Z$ and $Y$ are one-dimensional. The rightmost vertical arrow is a linear projection. Moreover, $W$ and $X$ embed as projectively normal varieties, since both very ample sheaves have degree six on a smooth variety. In particular, the cone is normal at the vertex.
Since the diagram commutes, there is an induced diagram on the projectivized cones with respect to these embeddings, namely $$\begin{CD} C(Z) @>>> C(W) \\ @VVV @VVV \\ C(Y) @>>> C(X) \\ \end{CD}.$$ There is an involution $\tau$ on $Z$ that interchanges elements in the fibers of $Z$ over points of $Y$. Since $Z$ and $Y$ are smooth projective curves and the base field has characteristic zero, $Z \rightarrow Y$ is ramified at only finitely many points, so the fixed point locus of $\tau$ is a finite set. The fixed point locus is nonempty because $Y$ has intersection number 4 with the ramifying curve for $W \rightarrow X$.
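Indeed, intersection numbers on $X = \mathbb{P}^1\times\mathbb{P}^1$ are read off from bidegrees: divisor classes of bidegrees $(a,b)$ and $(c,d)$ satisfy $(a,b)\cdot(c,d) = ad + bc$. With $Y \in |\mathcal{M}| = |\mathcal{O}_X(1,3)|$ and branch curve $B \in |\mathcal{L}^{\otimes 2}| = |\mathcal{O}_X(0,4)|$, we get
$$Y \cdot B = 1\cdot 4 + 3\cdot 0 = 4.$$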
There is an induced involution on $C(Z)$, obtained as follows. Letting $\pi: C(Z) - P \rightarrow Z$ be the projection from the vertex $P$, we see that $Z$ is covered by open sets $U_i$ such that $\pi$ is given by $U_i \times \mathbb{A}^1 \rightarrow U_i$. In other words, the fibers of the projection are isomorphic to $\mathbb{A}^1$, and we send one line isomorphically to the other if the corresponding points of $Z$ are interchanged by $\tau$. Sending the vertex to itself, we thus obtain an involution on $C(Z)$ whose fixed point locus is nonempty and of pure codimension one.
Now we can describe the variety whose singularities we want to compute. It is the universal pushout $$\begin{CD} C(Z) @>>> C(W) \\ @VVV @VVV \\ C(Z)/\tau @>>> V \\ \end{CD}.$$ We claim that $V$ has a canonical class which is not $\mathbb{Q}$-Cartier, and that $V$ is demi-normal. Its normalization is $C(W)$.
We know that $C(W) \rightarrow V$ is finite and birational. Moreover, $C(W)$ is normal because away from the vertex it is smooth (a resolution is given by blowing up the vertex and is isomorphic to a $\mathbb{P}^1$-bundle over $W$) and at the vertex it is normal because $W$ is projectively normal. Thus $C(W)$ is the normalization of $V$.
That $V$ has a canonical class that is not $\mathbb{Q}$-Cartier follows from the fact that $C(W)$ has the same property. This is due to the property we mentioned at the beginning of this section. Note that the conductor is given by the hyperplane section $C(Z)$ and hence is a Cartier divisor.
To show that $V$ has the three properties $G_1$, $S_2$, and SN, we look at a similar pushout diagram involving $\mathbb{P}^1$-bundles over $Z$ and $W$: $$\begin{CD} \mathbb{P}(Z) @>>> \mathbb{P}(W) \\ @VVV @VVV \\ \mathbb{P}(Z)/\tau @>>> V' \\ \end{CD}.$$ The involution on $\mathbb{P}(Z)$ is obtained as before. In particular, since $\mathbb{P}(Z)$ and $\mathbb{P}(W)$ are smooth and the fixed point locus has pure codimension one, $V'$ is semismooth. The universal property of the pushout gives a morphism $f: V' \rightarrow V$, which is proper and birational. We claim that $C(W)$ is $S_3$. Then $C(Z)$ is $S_2$, hence normal. By Zariski's Main Theorem, the proper birational morphisms $\mathbb{P}(Z) \rightarrow C(Z)$ (and likewise for $C(W)$) induce isomorphisms of structure sheaves. Since the structure sheaf of the pushout is the pullback of the structure sheaves, we find that in fact $f_*\mathcal{O}_{V'} = \mathcal{O}_V.$ Since $V$ is $S_2$, this implies that $f$ is an isomorphism over codimension one points of $V$, and hence $V$ is also $G_1$ and SN (with $S_2$, SN in codimension one implies SN; see \cite{GT80}), because $V'$ is semismooth and such varieties are always demi-normal.
We now show that the normal variety $C(W)$ is not log terminal, and that the pair $(C(W),C(Z))$ is canonical. It follows from the results presented in the previous section that $V$ is semi canonical but not semi log terminal.
Recall that the canonical divisor on the projective bundle over $W$ is given by $K_{\mathbb{P}} \sim \pi^*K_W -2W_0 + \pi^*(-L)$, where $L$ is the very ample divisor giving the embedding of $W$. Pushing forward by the resolution $g: \mathbb{P}(W) \rightarrow C(W)$, we get $K_{C(W)} = g_*K_{\mathbb{P}} = C_{K_W} - C_L.$ On the cone, we are looking for a boundary $\Gamma = C_{\Delta}$ such that $K_{C(W)} + C_{\Delta}$ is $\mathbb{Q}$-Cartier. However, a divisor on the cone is $\mathbb{Q}$-Cartier if and only if the associated divisor on $W$ is linearly equivalent to a rational multiple $kL$ of $L$. This follows from a previous remark that a multiple of a hyperplane section of $W$ lifts to a multiple of a hyperplane section of $C(W)$. Any other hypersurface section of $W$ will require two generators at the vertex. Therefore, we are looking for effective divisors of the form $\Delta = sL - K_W$, where $s = k+1$.
For semi log terminal singularities, we must compute the $m$-th limiting relative canonical divisor. It suffices to calculate the order of $W_0$ in $K^-_{\mathbb{P}/C(W)}$. We know that $K^{\Gamma}_{\mathbb{P}/C(W)} \leq K_{m,\mathbb{P}/C(W)}$ for all boundaries $\Gamma$ and all values of $m$, and by the main result, there exists a boundary divisor such that this is an equality, for any given $m$. Thus we want to compute the supremum of the numbers $$\textnormal{ord}_{W_0}(K^{\Gamma}_{\mathbb{P}/C(W)}),$$ where $\Gamma$ is a boundary. We calculate that $$K^{\Gamma}_{\mathbb{P}/C(W)} = K_{C(W)} + g^{-1}_*\Gamma - g^*(K_{C(W)} + \Gamma)$$ $$= \pi^*K_W - 2W_0 + \pi^*(-L) + g^{-1}_*\Gamma - \pi^*(K_W-L+\Delta) - (s-1)W_0$$ $$ = -(s+1)W_0 + g^{-1}_*\Gamma - \pi^*\Delta.$$ So, if $t$ is the infimum of values $s$ such that $sL-K_W$ is effective, then the value of the relative canonical divisor at $W_0$ is $-(1+t).$
For semi canonical singularities, we need to calculate the valuation along $W_0$ of the relative canonical divisor $K_{\mathbb{P}/C(W)}$. The proof of the main theorem can be modified to show that $$\textnormal{val}_{W_0}(K_{\mathbb{P}/C(W)}) = \textnormal{sup}\{\textnormal{ord}_{W_0}(K_{\mathbb{P}} + g^*(-K_{C(W)}+ \Gamma'))\},$$ where $-K_{C(W)} + \Gamma'$ is $\mathbb{Q}$-Cartier and $\Gamma'$ is effective. Here $\Gamma'$ corresponds to $\Delta'=rL+K_W$ with $r$ rational, which is the condition for divisors on the cone to be $\mathbb{Q}$-Cartier. Thus if $t$ is the smallest value of $r$ such that $rL+K_W$ is effective, then the valuation of $K_{\mathbb{P}/C(W)}$ along $W_0$ will be $t-1$.
Let $L$ be the divisor corresponding to the sheaf $\mathcal{M}$. To determine whether $C(W)$ has log terminal singularities, we need to compute the smallest value of $s$ such that $sp^*L-K_W$ is linearly equivalent to an effective divisor. If $s=0$, we have $-K_W$, which corresponds to the sheaf $p^*\mathcal{O}_X(2,0)$, which clearly has global sections. In fact, its pushforward is $\mathcal{O}_X(2,0) \oplus \mathcal{O}_X(2,-2)$. Any nonzero global section of the first summand will work. For negative $s$, $sp^*L-K_W$ is linearly equivalent to an effective divisor if and only if one of its positive integer multiples is. The corresponding sheaf is $p^*\mathcal{O}_X(r(s+2), 3rs)$ for a positive integer $r$. The pushforward is $\mathcal{O}_X(r(s+2), 3rs) \oplus \mathcal{O}_X(r(s+2), 3rs-2)$. Neither summand has nonzero global sections, since the second components are negative. The infimum of admissible values of $s$ is therefore $t = 0$, and the valuation of the relative canonical divisor $K^-_{\mathbb{P}/C(W)}$ along the exceptional divisor is $-(1+t) = -1$. In particular, $C(W)$ is not log terminal.
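The pushforwards used above are instances of the projection formula. Since $p_*\mathcal{O}_W = \mathcal{O}_X \oplus \mathcal{L}^{\vee}$ with $\mathcal{L} = \mathcal{O}_X(0,2)$, we have
$$p_*\,p^*\mathcal{O}_X(a,b) \cong \mathcal{O}_X(a,b) \otimes p_*\mathcal{O}_W \cong \mathcal{O}_X(a,b) \oplus \mathcal{O}_X(a,b-2),$$
and global sections over $W$ are computed summand by summand over $X$; for example, $h^0(-K_W) = h^0(\mathcal{O}_X(2,0)) + h^0(\mathcal{O}_X(2,-2)) = 3 + 0 = 3$.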
For canonical singularities, we look for the smallest value of $r$ such that $rp^*L+K_W$ is effective. The corresponding sheaf is $p^*\mathcal{O}_X(r-2,3r)$. As in the previous part of the proof, we see that when $r=2$, there are nonzero global sections, while if $r<2$ there can be none. Thus the relative canonical divisor has value $r-1=1$ along the exceptional divisor. We claim that the value of the conductor $C(Z)$ along the exceptional divisor is 1. In fact, $C(Z)$ is a hyperplane section of $C(W)$, so its multiplicity at the vertex is exactly one. When we pull back via $g$, we get exactly one copy of the exceptional divisor. It follows that the pair $(C(W),C(Z))$ is canonical.
\end{section} \nocite{Art70} \nocite{Kol13} \nocite{Hart94} \nocite{Hart87} \nocite{BH98} \nocite{GT80} \nocite{Kov99} \nocite{LV81} \nocite{Reid94} \nocite{dFH09} \nocite{KSS09} \nocite{Sch09} \nocite{vS87} \nocite{Berq14} \nocite{GR70} \nocite{Berq14b} \nocite{KM98} \nocite{K00}
\end{document} |
\begin{document}
\title[Rough differential equations on homogeneous spaces]{Planarly branched rough paths and\\ rough differential equations on homogeneous spaces}
\author[C.~Curry]{C.~Curry} \address{Dept.~of Mathematical Sciences,
Norwegian University of Science and Technology (NTNU),
7491 Trondheim, Norway.} \email{[email protected]}
\author[K.~Ebrahimi-Fard]{K.~Ebrahimi-Fard} \address{Dept.~of Mathematical Sciences,
Norwegian University of Science and Technology (NTNU),
7491 Trondheim, Norway.}
\email{[email protected]}
\urladdr{https://folk.ntnu.no/kurusche/}
\author[D.~Manchon]{D.~Manchon} \address{Univ. Blaise Pascal,
C.N.R.S.-UMR 6620,
63177 Aubi\`ere, France}
\email{[email protected]}
\urladdr{http://math.univ-bpclermont.fr/~manchon/}
\author[H.~Z.~Munthe-Kaas]{H.~Z.~Munthe-Kaas} \address{Dept.~of Mathematics,
University of Bergen,
Postbox 7800,
N-5020 Bergen, Norway}
\email{[email protected]}
\urladdr{http://hans.munthe-kaas.no}
\begin{abstract} This work studies rough differential equations (RDEs) on homogeneous spaces. We provide an explicit expansion of the solution at each point of the real line using decorated planar forests. The notion of planarly branched rough path is developed, following Gubinelli's branched rough paths; the main difference is the replacement of the Butcher--Connes--Kreimer Hopf algebra of non-planar rooted trees by the Munthe-Kaas--Wright Hopf algebra of planar rooted forests. The latter underlies the extension of Butcher's $B$-series to the Lie--Butcher series known from Lie group integration theory. Planarly branched rough paths permit the study of RDEs on homogeneous spaces, in the same way that Gubinelli's branched rough paths are used for RDEs on finite-dimensional vector spaces. An analogue of Lyons' extension theorem is proven. Under analyticity assumptions on the coefficients and when the H\"older index of the driving path is one, we show convergence of the planar forest expansion on a small time interval. \end{abstract}
\maketitle
\noindent {\footnotesize{\bf Keywords}: rough paths; rough differential equations; homogeneous spaces; Lie--Butcher series; Hopf algebras; post-Lie algebras; planar rooted forests.}
\noindent {\footnotesize{\bf MSC Classification}: (Primary) 16T05; 16T15; 34A34 (Secondary) 16T30; 60H10.}
\tableofcontents
\section{Introduction} \label{sec:intro}
Given a set of vector fields $\{f_i\}_{i=1}^d$ on some $n$-dimensional smooth manifold $\Cal M$, we are interested in the controlled differential equation: \begin{align}\label{eq:control1}
dY_{st} &= \sum_{i=1}^d f_i(Y_{st}) dX_t^i, \end{align} with initial condition $Y_{ss} =y$, where the controls $t\mapsto X_t^i$ are differentiable, or even only H\"older-continuous real-valued functions\footnote{We have chosen any real $s$ as initial time rather than zero, whence the two-variable notation. Derivation is always understood with respect to the variable $t$, the first variable $s$ remaining inert. The two positive integers $n$ and $d$ are a priori unrelated.}. When $\Cal M$ is an affine space $\mathbb R^n$, rough path theory on $\mathbb R^d$, together with its branched version introduced by M.~Gubinelli \cite{Gubi2010}, is the correct setting to express the solutions of \eqref{eq:control1} when the controls are not differentiable. An important case of the latter situation is given by Brownian motion on $\mathbb R^d$, of which sample paths are almost surely nowhere differentiable\footnote{Brownian motion is almost surely of H\"older regularity $\gamma$ for any $\gamma <1/2$.}. The existence of a solution in a small interval around the point $s$ has been proven by Gubinelli, using the notion of controlled path in the branched setting \cite[Section 8]{Gubi2010} (see also \cite[Section 3]{HaiKel2015}). The Taylor expansion of such a solution at any point is expressed by means of choosing a branched rough path $\mathbb X$ over the driving path $X=(X^1,\ldots, X^d)$. See, for example, the introduction of reference \cite{HaiKel2015} by M.~Hairer and D.~Kelly for a concise account.
The theory of rough paths was introduced and developed by T.~Lyons \cite{L98}. It is based on Chen's theory of iterated integrals \cite{Chen77} and provides an integration theory for solving differential equations driven by irregular signals. The intuitive idea of prescribing the path together with its iterated integrals is encapsulated by the definition of a rough path as a two-parameter family of Hopf algebra characters of the shuffle Hopf algebra $\Cal H_{{\shuffle}}^A$ over the finite alphabet $A=\{a_1,\ldots,a_d\}$, subject to precise estimates as well as to Chen's lemma. The latter is a lifting of the chain rule for integration. Gubinelli's branched rough paths are based on J.~Butcher's $B$-series from numerical integration theory, and are defined similarly to Lyons' rough paths, with the exception that the Hopf algebra at hand is the Butcher--Connes--Kreimer Hopf algebra $\Cal H_{\smop{BCK}}^A$ of $A$-decorated non-planar rooted trees.
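To illustrate Chen's lemma at the lowest nontrivial level, consider a differentiable path $X$ and the second-order iterated integral $\mathbb X^{ij}_{st} := \int_s^t (X^i_r - X^i_s)\,dX^j_r$; this standard computation is included only as an illustration. Splitting the domain of integration at an intermediate time $u \in [s,t]$ yields
$$\mathbb X^{ij}_{st} = \mathbb X^{ij}_{su} + \mathbb X^{ij}_{ut} + (X^i_u - X^i_s)(X^j_t - X^j_u),$$
i.e., the increment over $[s,t]$ is recovered from the increments over $[s,u]$ and $[u,t]$ together with products of lower-order terms; Chen's lemma lifts this relation to all words.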
In a first step, this article introduces and develops the theory of rough paths on $\mathbb R^d$ for any connected graded Hopf algebra fulfilling rather mild assumptions with respect to its combinatorics. An analogue of Lyons' extension theorem is proven (Theorem \ref{extension-generalized}), using the Sewing Lemma as in the classical case (Proposition \ref{prop:fdpsewing}). In particular, following Gubinelli's approach we use the notion of Lie--Butcher series from Lie group integration theory to define \textsl{planarly branched rough paths} on $\mathbb R^d$ as rough paths in that generalised sense, for which the Hopf algebra at hand is the Hopf algebra of Lie group integrators $\Cal H_{\smop{MKW}}^A$ introduced in \cite{MunWri2008} by W.~Wright and one of the current authors. In a nutshell, this combinatorial Hopf algebra is
linearly spanned by planar ordered rooted forests, possibly with decorations on the vertices. The product in this commutative Hopf algebra is the shuffle product of the forests, which are considered as words with planar rooted trees as letters. The coproduct is based on the notion of \textsl{left admissible cuts} on forests. We then argue that planarly branched rough paths provide the correct setting for understanding controlled differential equations on a homogeneous space, i.e., a manifold acted upon transitively by a finite-dimensional Lie group. To be more precise, it provides the means to write the Taylor expansion of a solution at each time, particularly suited to the underlying geometric setting.
We conclude with a first discussion of the analytic aspects of differential equations driven by planarly branched rough paths. In this article, we restrict to considering the convergence of the full Taylor series on a small time interval (Corollary \ref{Taylor-cv}). This necessarily assumes analyticity of the vector fields, and makes use of Cauchy estimates in a similar manner to \cite[Section 5]{Gubi2010}. On the other hand, this method is limited to considering driving paths for which the H\"older index $\gamma$ of the control path $X$ is equal to one (Lipschitz case). A much more promising approach is to consider instead truncations of the Taylor expansion with controlled remainder, following Davie \cite{D08}, see also \cite{CW17} for the extension of this method to Lie series for the pullback flow. The main obstacle to this technique is the lack of results showing that iterative applications of approximate flows can be concatenated to give an approximation of controlled error on a larger compact time interval. This is equivalent to the existence of global error estimates for Lie group integrators. Such results are established in the forthcoming work \cite{CS2018}, the ramifications of which will be explored in a future sequel on the existence and uniqueness of solutions under much less restrictive assumptions.\\
The paper is organised as follows: in Section \ref{sect:fs} we write down the Taylor expansion of the solution of \eqref{eq:control1} on a homogeneous space in the case of differentiable controls, using a Picard iteration. We then introduce a suitable class of combinatorial Hopf algebras in Section \ref{sect:comb-fact}, defining a notion of factorial adapted to this general setting. Then we define in Section \ref{sect:H-rough} a functorial notion of $\gamma$-regular rough path associated to any combinatorial Hopf algebra in the above sense, and we prove Lyons' extension theorem in this setting, along the lines of reference \cite{FP2006}. After giving a brief account of Lie--Butcher theory in Section \ref{sec:flows}, we recall the Munthe-Kaas--Wright Hopf algebra $\Cal H_{\smop{MKW}}^A$ of Lie group integrators. We recall in Section \ref{sec:MKWHA} two relevant combinatorial notions associated with planar forests, namely three partial orders on the set of vertices \cite{A14}, and the planar forest factorial $\sigma\mapsto \sigma!$ of \cite{MF17}, which matches the general notion of factorial mentioned above. Planarly branched rough paths are then defined as rough paths associated to the particular Hopf algebra $\Cal H_{\smop{MKW}}^A$. Section \ref{sec:quotient} is devoted to a canonical surjective Hopf algebra morphism $\mathfrak a_\ll$ from $\Cal H_{\smop{MKW}}^A$ onto the shuffle Hopf algebra $\Cal H^A_{\joinrel{\,\scriptscriptstyle\amalg\hskip -2.5pt\amalg}\,}$, the \textsl{planar arborification}, in the sense of J.~Ecalle's notion of arborification. A contracting version of planar arborification is also given, where the shuffle Hopf algebra is replaced by a quasi-shuffle Hopf algebra. Finally, Section \ref{sec:RDEHom} deals with rough differential equations on homogeneous spaces driven by a H\"older-continuous path $X$. Any planarly branched rough path above $X$ yields a corresponding formal solution. 
Following the lines of thought of \cite[Section 5]{Gubi2010} (see also \cite[Proposition 1.8]{BS2016}), we prove convergence of the planar forest expansion in a small interval at each time, under an appropriate analyticity assumption on the coefficients $f_i$, when the driving path is Lipschitz, i.e., of H\"older regularity $\gamma=1$. An account of the sewing lemma is given in the Appendix.\\
\noindent\textbf{Acknowledgements:} We would like to thank Lorenzo Zambotti and Ilya Chevyrev for crucial discussions and comments which led to substantial improvements of this paper, in particular by pointing us to the recent article \cite{B17}. We also thank Igor Mencattini, Alexander Schmeding and Rosa Preiss for helpful comments. The third author gratefully acknowledges the warm hospitality and stimulating working conditions which he experienced at NTNU in Trondheim and at Bergen University during his visits in May 2017. He also would like to thank Fr\'ed\'eric Fauvet for illuminating discussions on J.~Ecalle's notion of arborification. Finally, we thank the referee for very pertinent suggestions and remarks. The article received support from Campus France, PHC Aurora 40946NM.
\section{Formal series expansion of the solution} \label{sect:fs}
The theory of numerical integration algorithms on Lie groups and manifolds \cite{Iserles00} has been developed over the last two decades. In this context new algebraic structures were revealed which combine Butcher's $B$-series \cite{HWL_02} and Lie series into Lie--Butcher series on manifolds \cite{LMK_13}. Brouder's work \cite{Brouder_00} showed that Hopf and pre-Lie algebras of non-planar rooted trees provide the algebraic foundation of $B$-series. For Lie--Butcher series, the new concepts of post-Lie algebras and the Munthe-Kaas--Wright Hopf algebra are the foundations. These are examples of algebraic combinatorial structures which arise naturally from the geometry of connections on homogeneous spaces.
We rewrite the differential equation \eqref{eq:control1} in the following form: \begin{equation} \label{rde}
dY_{st}=\sum_{i=1}^d \# f_i(Y_{st})\,dX_t^i \end{equation} with initial condition $Y_{ss}=y$, where the unknown is a path $Y_s \colon t \mapsto Y_{st}$ in a homogeneous space $\Cal M$, with transitive action $(g,x)\mapsto g.x$ of a Lie group $G$ on it. The control path $X \colon t \mapsto X_t = (X_t^1,\ldots,X_t^d)$ with values in $\mathbb R^d$ is given, and the $f_i$'s are smooth maps from $\Cal M$ into the Lie algebra $\frak g=\mop{Lie}(G)$, which in turn define smooth vector fields $x \mapsto \# f_i(x)$ on $\Cal M$: \begin{equation*}
\# f_i(x):=\frac{d}{dt}{\restr{t=0}}\exp \big(tf_i(x)\big).x\in T_x \Cal M. \end{equation*} In the language of Lie algebroids, considering the tangent vector bundle and the trivial vector bundle $E=\Cal M\times\mathfrak g$, the map $\# \colon C^\infty(\Cal M,\mathfrak g)\to\chi(\Cal M)$ is the composition on the left with the anchor map $\rho \colon E\to T\Cal M$ defined by $\rho (x,X):=\frac{d}{dt}{\restr{t=0}}(\exp tX).x$.
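As a concrete illustration of the map $\#$ (a standard example from Lie group integration, with notation chosen here only for this sketch), take $G = SO(3)$ acting on $\Cal M = S^2 \subset \mathbb{R}^3$ and identify $\mathfrak{so}(3) \cong \mathbb{R}^3$ via the hat map, $\widehat{\xi}\,v = \xi \times v$. A smooth map $f \colon S^2 \rightarrow \mathbb{R}^3 \cong \mathfrak{so}(3)$ then induces the vector field
$$\# f(x) = \frac{d}{dt}{\restr{t=0}}\exp\big(t\,\widehat{f(x)}\big).x = \widehat{f(x)}\,x = f(x)\times x \in T_xS^2,$$
so that equation \eqref{rde} becomes the controlled equation $dY_{st} = \sum_{i=1}^d f_i(Y_{st}) \times Y_{st}\,dX^i_t$ on the sphere.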
The central point of our approach is based on formally lifting the differential equation \eqref{rde} to the space $C^\infty\big(\Cal M,\mathcal U(\frak g)\big)[[h]]$, where $\mathcal U(\mathfrak g)$ is the universal enveloping algebra of $\mathfrak g$. This is achieved as follows: setting $t=s+h$, we denote by $\varphi_{st}$ the formal diffeomorphism defined by $\varphi_{st}(Y_{ss}):=Y_{st}$, where $t\mapsto Y_{st}$ is the solution of the initial value problem \eqref{rde}. This formal diffeomorphism can be expressed as: \begin{equation*}
\varphi_{st}=\# \bm Y_{st} \end{equation*} with $\bm Y_{st} \in C^\infty\big(\Cal M,\mathcal U(\frak g)\big)[[h]]$. It turns out that there exists a non-commutative associative product $*$ on $C^\infty\big(\Cal M,\mathcal U(\frak g)\big)$, distinct from the pointwise product in $\Cal U(\mathfrak g)$, which reflects the composition product of differential operators on $\Cal M$, in the sense that: \begin{equation*}
\#(u*v)=\# u\circ\# v \end{equation*} for any $u,v\in C^\infty\big(\Cal M,\mathcal U(\frak g)\big)$. See reference \cite{MunWri2008} for details. The unit is the constant function $\bm 1$ on $\Cal M$ equal to $1\in\Cal U(\mathfrak g)$, and $\#\bm 1$ is the identity operator. The existence of this product is a direct consequence of the post-Lie algebra structure on $C^\infty\big(\Cal M,\frak g)$. The reader may consult \cite{KAH2015} for details. Extending this product to formal series, our lifting of \eqref{rde} is written as: \begin{equation} \label{rde-lifted}
d\bm Y_{st} = \sum_{i=1}^d \bm Y_{st}*f_i\,dX_t^i \end{equation} with initial condition $\bm Y_{ss}=\bm 1$. The non-commutative product $*$ is the extension of the Grossman--Larson product on the post-associative algebra $C^\infty \big(\Cal M,\mathcal U(\frak g)\big)$ to formal series, which reflects the composition of differential operators \cite{MunWri2008}. A full account of the post-Lie algebra structure on $C^\infty \big(\Cal M,\frak g)$ and the post-associative algebra structure on $C^\infty \big(\Cal M,\mathcal U(\frak g)\big)$ will be provided further below in Section \ref{sec:flows}. Let us just mention at this stage that for any $f,g\in C^\infty \big(\Cal M,\frak g)$ we have (Leibniz' rule): \begin{equation} \label{GL-1}
f*g=fg+f\rhd g, \end{equation} where $fg$ stands for the pointwise product in $C^\infty \big(\Cal M,\mathcal U(\frak g)\big)$, and where $f\rhd g$ stands for $\#(f).g$. The solution of \eqref{rde-lifted} is a formal diffeomorphism, i.e., it satisfies $\bm Y_{st}\rhd(\phi\psi)=(\bm Y_{st}\rhd \phi)(\bm Y_{st}\rhd\psi)$ for any $\phi,\psi\in C^\infty(\Cal M)$. The formal path $Y_{st}$ solving the initial value problem \eqref{rde}, with initial condition $Y_{ss}=y$, is then the character of $C^\infty(\Cal M)$ with values in $\mathbb R[[h]]$ given for any $\psi \in C^\infty(\Cal M)$ by: \begin{eqnarray} \label{evaluation}
Y_{st} \colon C^\infty(\Cal M)&\longrightarrow &\mathbb R[[h]]\nonumber\\
\psi &\longmapsto &\psi(Y_{st})=(\bm Y_{st}\rhd\psi)(y). \end{eqnarray} Plugging \eqref{rde-lifted} into \eqref{evaluation} yields: \allowdisplaybreaks \begin{eqnarray*}
\frac{d}{dt}\psi(Y_{st})
&=&\frac{d}{dt}(\bm Y_{st}\rhd\psi)(y)\\
&=&\big(\big(\bm Y_{st}*F(t)\big)\rhd\psi\big)(y)\\
&=&\big(\bm Y_{st}\rhd\big(F(t)\rhd\psi\big)\big)(y)\\
&=&\big(F(t)\rhd\psi\big)(Y_{st}), \end{eqnarray*} which proves this assertion, and therefore justifies viewing \eqref{rde-lifted} as a lift of \eqref{rde}. We refer to $\psi(Y_{st})$ as the evaluation of $\psi$ on the formal path $Y_{st}$. Equation \eqref{rde-lifted} can be written in integral form: \begin{eqnarray} \label{rde-gl-int}
\bm Y_{st} &=&\bm 1+\int_s^t \bm Y_{su}*F(u)\,du\nonumber\\
&=&\bm 1+\sum_{i=1}^d\int_s^t \bm Y_{su}*f_i\,dX^i_u. \end{eqnarray} A simple Picard iteration gives the formal expansion: \begin{align}
\bm Y_{st}
&=\bm 1+\sum_{n\ge 1}\,\sum_{1\le i_1,\ldots,i_n\le d}\left(\int\cdots
\int_{s\le t_n\le\cdots\le t_1\le t}f_{i_n}*\cdots *f_{i_1}\,dX^{i_1}_{t_1}\cdots dX^{i_n}_{t_n}\right) \nonumber\\
&=\bm 1+\sum_{n\ge 1}\,\sum_{1\le i_1,\ldots,i_n\le d}\left(\int\cdots
\int_{s\le t_n\le\cdots\le t_1\le t}dX^{i_1}_{t_1}\cdots dX^{i_n}_{t_n}\right)f_{i_n}* \cdots * f_{i_1}. \label{formalsol} \end{align} Using word notation, where $f_w$ stands for the monomial $f_{i_n}*\cdots *f_{i_1}$ when the word $w$ is given by $a_{i_1}\cdots a_{i_n}$, the formal expansion \eqref{formalsol} will be written as a word series \begin{equation} \label{expansion}
\bm Y_{st}=\sum_{w\in A^*}\langle\mathbb X_{st},w\rangle f_w. \end{equation} Using \eqref{GL-1}, the first terms of the expansion are: \allowdisplaybreaks \begin{align*}
\lefteqn{\bm Y_{st}=\bm 1+\sum_{i=1}^d \langle\mathbb X_{st},a_i\rangle f_i
+\sum_{i,j=1}^d\langle\mathbb X_{st},a_ia_j\rangle (f_jf_i+f_j\rhd f_i)}\\
&+\sum_{i,j,k=1}^d\langle\mathbb X_{st},a_ia_ja_k\rangle
\Big(f_kf_jf_i
+(f_k\rhd f_j)f_i
+f_k(f_j\rhd f_i)
+f_j(f_k\rhd f_i)
+(f_kf_j)\rhd f_i
+(f_k\rhd f_j)\rhd f_i\Big)\\
&+O(h^4). \end{align*} We observe that the number of components in the term of order three on the right-hand side can be reduced from six to five: \begin{align} \label{six-to-five}
\sum_{i,j,k=1}^d
&\bigg[\langle\mathbb X_{st},a_ia_ja_k\rangle \Big(f_kf_jf_i
+(f_k\rhd f_j)f_i+(f_kf_j)\rhd f_i+(f_k\rhd f_j)\rhd f_i\Big)\nonumber\\
&+\langle\mathbb X_{st},a_ia_ja_k+a_ia_ka_j\rangle f_j(f_k\rhd f_i)\bigg], \end{align} which corresponds to the five planar rooted decorated forests with three vertices, displayed in the following order: \begin{equation} \label{five-forests}
\racine_k\racine_j\racine_i\hskip 8mm
\arbrea_j^k\racine_i\hskip 8mm
\arbrebb_i^{\,j\hskip -7mm k}\hskip 8mm
{\arbreba_i^j}^{\hskip -4.5pt k}\hskip 8mm
\racine_j\arbrea_i^k. \end{equation} The appearance of planar rooted forests relates to a natural further step in abstraction, namely using the \textsl{Lie--Butcher series formalism}. It consists in an additional lifting of equation \eqref{rde-lifted} to the free post-associative algebra, i.e., the universal enveloping algebra over the free post-Lie algebra in $d$ generators, more precisely to its completion $(\Cal H_{\smop{MKW}}^A)^*$. We obtain then the so-called fundamental differential equation: \begin{equation} \label{rde-postlifted}
d\mathbb Y_{st} = \sum_{i=1}^d \mathbb Y_{st}*\racine_i\,dX_t^i \end{equation} with initial condition $\mathbb Y_{ss}=\bm 1$, where $*$ is now the non-commutative convolution (Grossman--Larson) product of two linear forms on $\Cal H_{\smop{MKW}}^A$. Suppose for the moment that the path $X$ in $\mathbb{R}^d$ is differentiable. Equation \eqref{rde-postlifted} can then be re-written as: \begin{equation} \label{rde-gl}
\dot{\mathbb Y}_{st}=\frac{d}{dt}{\mathbb Y}_{st}=\sum_{i=1}^d\dot X_t^i\mathbb Y_{st}*\racine_i, \end{equation} with initial condition $\mathbb Y_{ss}=\bm 1$. For any $s,t$ the so-called fundamental solution $\mathbb Y_{st}$ of \eqref{rde-postlifted} is given by \begin{equation} \label{expansion-word}
\mathbb Y_{st}=\sum_{\ell\ge 0}\sum_{w=a_1\cdots a_\ell\in A^*}\langle\mathbb X_{st},w\rangle \racine_{a_\ell}*\cdots*\racine_{a_1}. \end{equation}
The coefficient of the last component in \eqref{six-to-five} is obtained by integrating $dX^{i_1}_{t_1}dX^{i_2}_{t_2}dX^{i_3}_{t_3}$ on the union of two simplices $\{(t_1,t_2,t_3),\ s\le t_3\le t_1,t_2\le t\}$. This domain is associated to the decorated forest $\racine_j\arbrea_i^k$ by means of a partial order $\ll$ on the vertices described in Subsection \ref{ssec:partialorders}, which is closely related to the notion of left-admissible cuts for the coproduct in $\Cal H_{\smop{MKW}}^A$. The order $\ll$ is total on the four other planar forests of degree three appearing in \eqref{five-forests}, hence the corresponding coefficients are obtained by integrating on a single simplex. Integrating over these domains lifts $\mathbb X_{st}$ to a two-parameter family of characters of the Hopf algebra $\Cal H_{\smop{MKW}}^A$, which still verifies Chen's lemma. This calls for considering rough differential equations defined on the homogeneous space $\Cal M$ driven by planarly branched rough paths.
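The count of five forests of degree three is an instance of a general enumeration: undecorated planar rooted forests with $n$ vertices are counted by the Catalan numbers. The following Python sketch of the standard Catalan recursion is purely illustrative and not part of the formal development:

```python
def catalan(n):
    """Number of (undecorated) planar rooted forests with n vertices,
    i.e., the Catalan number C_n, computed via the recursion
    C_0 = 1,  C_{m+1} = sum_{i=0}^{m} C_i * C_{m-i}."""
    c = [1]
    for m in range(n):
        c.append(sum(c[i] * c[m - i] for i in range(m + 1)))
    return c[n]
```

For $n=3$ this gives $5$, matching the five forests displayed above; with $d$ possible decorations on each vertex the count becomes $5d^3$, in agreement with the triple sum over $i,j,k$.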
Further below we will use J.~Ecalle's notion of arborification to write the Taylor expansion of the solution \eqref{expansion}, or rather its abstract counterpart \eqref{expansion-word} in its planar arborified form: \begin{equation} \label{expansion-lie-butcher}
\mathbb Y_{st}=\sum_{\sigma\in F^A_{\smop{pl}}}\langle\wt\mathbb X_{st},\sigma\rangle \sigma \end{equation} with $\wt\mathbb X_{st}:=\mathbb X_{st}\circ \mathfrak a_\ll$ where $(\mathbb X_{st})_{s,t\in\mathbb R}$ is the signature of the path $X$, and where $ F^A_{\smop{pl}}$ stands for the set of $A$-decorated planar rooted forests.
\section{Factorials in combinatorial Hopf algebras} \label{sect:comb-fact}
We consider the notion of factorial in the context of a fairly general class of combinatorial Hopf algebras. This concept encompasses the usual factorial of positive integers, the tree and forest factorials, as well as a planar version of the latter.
\subsection{Inverse-factorial characters in connected graded Hopf algebras} \label{sect:inv-fact}
Let $\mathcal H = \bigoplus_{n \ge 0} \mathcal H_n$ be any connected graded Hopf algebra over some field $\mathbf k$ of characteristic zero, and let $\alpha \colon \mathcal H_1\to \mathbf{k}$ be a nonzero linear map. The degree of an element $x \in \mathcal H$ is denoted $|x|$. The \textsl{inverse-factorial character} $q_\alpha$ associated to these data is defined by \begin{itemize}
\item $q_\alpha(\mathbf{1})=1$,
\item $q_\alpha\restr{\mathcal H_1}=\alpha$,
\item $q_\alpha*q_\alpha(x)=2^{|x|}q_\alpha(x)$ for any $x\in\mathcal H$. \end{itemize}
It is indeed given for any homogeneous $x$ of degree $|x| \ge 2$ by the recursive procedure: \begin{equation}\label{rec-inv-fact}
q_\alpha(x)=\frac{1}{2^{|x|}-2}\sideset{}{'}\sum_{(x)}q_\alpha(x')q_\alpha(x''). \end{equation}
See \cite[Section 7]{Gubi2010} for the particular case of the Butcher--Connes--Kreimer Hopf algebra. Here $\Delta'(x):=\sideset{}{'}\sum_{\!\!(x)}x' \otimes x''$ denotes the reduced coproduct of $\mathcal H$ (in Sweedler's notation), and the full coproduct is $\Delta(x)=\Delta'(x)+x\otimes \mathbf{1} + \mathbf{1} \otimes x=\sum_{(x)}x_{1}\otimes x_{2}$. The multiplicativity property $q_\alpha(xy)=q_\alpha(x)q_\alpha(y)$ is verified recursively with respect to $\ell=|x|+|y|\ge 2$ (the cases $\ell=0$ and $\ell=1$ are immediately checked): \begin{eqnarray*}
q_\alpha(xy)
&=&\frac{1}{2^{|x|+|y|}}\sum_{(x),(y)}q_\alpha(x_{1}y_{1})q_\alpha(x_{2}y_{2})\\
&=&\frac{1}{2^{|x|+|y|}}\left(\sum_{(x),(y)}q_\alpha(x_{1})q_\alpha(y_{1})q_\alpha(x_{2})q_\alpha(y_{2})
-2q_\alpha(x)q_\alpha(y)+2q_\alpha(xy)\right)\\
&=&q_\alpha(x)q_\alpha(y)-\frac{1}{2^{|x|+|y|-1}}\Big(q_\alpha(x)q_\alpha(y)-q_\alpha(xy)\Big), \end{eqnarray*} hence $q_\alpha(x)q_\alpha(y)-q_\alpha(xy)=0$.
In general the linear form $\alpha$ is fixed once and for all, and $q_\alpha$ will be abbreviated to $q$. We remark that in concrete situations there is a natural linear basis for the degree one component $\mathcal H_1$ (and more generally for $\mathcal H$, see Paragraph \ref{sect:cha} below for a precise setting), and $\alpha$ will be the linear form on $\mathcal H_1$ which takes the value $1$ on each element of the basis. Taking as a simple example the shuffle algebra on an alphabet $A$, the binomial formula: \begin{equation*}
\frac{2^n}{n!}=\sum_{p=0}^n\frac{1}{p!}\frac{1}{(n-p)!} \end{equation*}
shows that $q_\alpha(w)=1/|w|!$ where $\alpha(a)=1$ for each letter $a\in A$. This example justifies the terminology chosen.
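The recursion \eqref{rec-inv-fact} is easy to run on the shuffle Hopf algebra, where the reduced coproduct of a word is deconcatenation into proper splittings. A minimal Python sketch (words encoded as strings; the encoding and the function name are ours, chosen for illustration):

```python
from fractions import Fraction

def q(word):
    """Inverse-factorial character of the shuffle Hopf algebra.
    The reduced (deconcatenation) coproduct of a word w sums over the
    proper splittings w = w' w'', so the recursion (rec-inv-fact) reads:
    q(w) = (sum over proper splittings of q(w') q(w'')) / (2^|w| - 2)."""
    n = len(word)
    if n <= 1:
        return Fraction(1)  # q(1) = 1, and q = alpha on degree one
    return sum(q(word[:p]) * q(word[p:]) for p in range(1, n)) / (2**n - 2)
```

One checks for instance that $q$ on a word of length three equals $1/6$, in agreement with $q_\alpha(w)=1/|w|!$.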
\begin{proposition} For any $h\in\mathbf k$ the $h$-th power convolution of $q=q_\alpha$ makes sense as a character of $\mathcal H$, and admits the following explicit expression: \begin{equation}\label{q-to-h}
q^{*h}(x)=h^{|x|}q(x). \end{equation} \end{proposition}
\begin{proof} One can express $q$ as $\varepsilon+\kappa$ with $\kappa(\mathbf 1)=0$. Then we have for any $x\in\mathcal H$ \begin{equation*}
q^{*h}(x)=(\varepsilon+\kappa)^{*h}(x)=\sum_{p\ge 0}{h\choose p}\kappa^{*p}(x). \end{equation*}
The right-hand side is a finite sum, owing to the co-nilpotence of the coproduct. The expression $q^{*h}(xy)-q^{*h}(x)q^{*h}(y)$ is polynomial in $h$ and vanishes at any non-negative integer $h$, hence vanishes identically. Similarly, the expression $q^{*h}(x)-h^{|x|}q(x)$ is polynomial in $h$ and vanishes at any $h=2^N$ where $N$ is a non-negative integer, hence vanishes identically. \end{proof}
The following corollary generalises both the binomial formula and Gubinelli's branched binomial formula \cite[Lemma 4.4]{Gubi2010}.
\begin{corollary}\label{hopf-binom} For any $h,k\in \mathbf k$ and for any homogeneous element $x\in\mathcal H$, the following Hopf-algebraic binomial formula holds: \begin{equation*}
q(x)(h+k)^{|x|}=\sum_{(x)}q(x_1)q(x_2)h^{|x_1|}k^{|x_2|}. \end{equation*} \end{corollary}
\begin{proof} It is a straightforward application of \eqref{q-to-h} together with the group property $q^{*h}*q^{*k}=q^{*(h+k)}$. \end{proof}
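Corollary \ref{hopf-binom} can likewise be tested on words, where the full deconcatenation coproduct includes the two trivial splittings. A sketch in exact rational arithmetic (all names are ours):

```python
from fractions import Fraction

def q(word):
    # inverse-factorial character on the shuffle Hopf algebra (see above)
    n = len(word)
    if n <= 1:
        return Fraction(1)
    return sum(q(word[:p]) * q(word[p:]) for p in range(1, n)) / (2**n - 2)

def binomial_identity_holds(word, h, k):
    """Check q(w)(h+k)^{|w|} = sum over all splittings w = w1 w2 of
    q(w1) q(w2) h^{|w1|} k^{|w2|}, the sum running over the FULL
    deconcatenation coproduct (p = 0, ..., |w|)."""
    n = len(word)
    lhs = q(word) * (h + k)**n
    rhs = sum(q(word[:p]) * q(word[p:]) * h**p * k**(n - p)
              for p in range(n + 1))
    return lhs == rhs
```

On words this is just the classical binomial formula divided by $|w|!$, but the same check makes sense in any combinatorial Hopf algebra.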
Inverse-factorial characters are functorial, that is, if $\Cal H$ and $\Cal H'$ are two connected graded Hopf algebras and if $\Phi:\Cal H\to\Cal H'$ is a morphism of Hopf algebras preserving the degree, then for any linear map $\alpha'\colon\Cal H'_1\to \mathbf{k}$ we have: \begin{equation} \label{funct}
q_{\alpha}=q_{\alpha'}\circ \Phi \end{equation} where $\alpha:=\alpha'\circ \Phi\restr{\mathcal H_1}$.
\subsection{A suitable category of combinatorial Hopf algebras} \label{sect:cha}
Although the theory of combinatorial Hopf algebras constitutes an active field of research, with duly acknowledged applications in discrete mathematics, analysis, probability, control, and quantum field theory, no general consensus has yet emerged on a proper definition of these Hopf algebras. That said, we propose here a definition suited to our purpose, namely providing estimates which ensure convergence of the formal solutions of our singular differential equations in some particular cases. A different proposal for a definition of combinatorial Hopf algebras can also be found in \cite{DS2016} (see Definition 3.1 therein). In both definitions, a privileged linear basis is part of the initial data.\\
\begin{definition} A combinatorial Hopf algebra is a graded connected Hopf algebra $\mathcal H=\bigoplus_{n\ge 0}\mathcal H_n$ over a field $\mathbf{k}$ of characteristic zero, together with a basis $\mathcal B=\bigsqcup_{n\ge 0}\mathcal B_n$ of homogeneous elements, such that \begin{enumerate}
\item There exist two positive constants $B$ and $C$ such that the dimension of $\mathcal H_n$ is bounded by $BC^n$ (in other words, the Poincar\'e--Hilbert series of $\mathcal H$ converges in a small disk around the origin).
\item The structure constants $c_{\sigma\tau}^\rho$ and $c_{\rho}^{\sigma\tau}$ of the product and the coproduct, defined for any $\sigma,\tau,\rho \in \mathcal B$ respectively by $$
\sigma\tau=\sum_{\rho\,\in\mathcal B}c_{\sigma\tau}^\rho \rho,\hskip 12mm \Delta \rho
=\sum_{\sigma,\tau\,\in\mathcal B}c_{\rho}^{\sigma\tau}\sigma\otimes\tau$$
are non-negative integers (which vanish unless $|\sigma|+|\tau|=|\rho|$). \end{enumerate} \end{definition}
In any combinatorial Hopf algebra in the above sense, the inverse-factorial character $q$ will be chosen such that $q(\tau)=1$ for any $\tau\in \mathcal B$ of degree one. We adopt the natural shorthand notation: \begin{equation}\label{gen-factorial}
\tau!=\frac{1}{q(\tau)} \end{equation} for any $\tau\in \mathcal B$. The two main examples are the shuffle Hopf algebra $\mathcal H^A_{\joinrel{\,\scriptscriptstyle\amalg\hskip -2.5pt\amalg}\,}$ and the Butcher--Connes--Kreimer Hopf algebra $\mathcal H^A_{\smop{BCK}}$ over a finite alphabet $A$. In the former case the basis $\mathcal B$ is given by words with letters in $A$, whereas in the latter case we have non-planar rooted forests decorated by $A$. On $\mathcal H^A_{\joinrel{\,\scriptscriptstyle\amalg\hskip -2.5pt\amalg}\,}$ the corresponding factorial is the usual factorial of the length of a word. On $\mathcal H^A_{\smop{BCK}}$ it is the usual forest factorial \cite[Lemma 4.4]{Gubi2010}. A third major example is the Hopf algebra $\mathcal H^A_{\smop{MKW}}$ of Lie group integrators described in Paragraph \ref{sect:hmkw} below.
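For concreteness, the forest factorial on $\mathcal H^A_{\smop{BCK}}$ admits the classical recursive description: a single vertex has factorial $1$, a tree $t$ with root subtrees $t_1,\ldots,t_k$ satisfies $t!=|t|\,t_1!\cdots t_k!$, and the factorial of a forest is the product of the factorials of its trees. A Python sketch (trees encoded as nested tuples of subtrees; decorations are omitted since they do not affect the factorial):

```python
def tree_size(t):
    """Number of vertices of a rooted tree given as a tuple of subtrees."""
    return 1 + sum(tree_size(s) for s in t)

def tree_factorial(t):
    """t! = |t| * t_1! * ... * t_k! for a tree t with root subtrees t_i."""
    f = tree_size(t)
    for s in t:
        f *= tree_factorial(s)
    return f

def forest_factorial(forest):
    """The factorial of a forest is the product of its tree factorials."""
    f = 1
    for t in forest:
        f *= tree_factorial(t)
    return f
```

The ladder with $n$ vertices has factorial $n!$, which recovers the word factorial on $\mathcal H^A_{\joinrel{\,\scriptscriptstyle\amalg\hskip -2.5pt\amalg}\,}$ under the usual map from words to ladders.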
Let $(\mathcal H,\mathcal B)$ and $(\mathcal H',\mathcal B')$ be two combinatorial Hopf algebras in the above sense. A Hopf algebra morphism $\Phi:(\mathcal H,\mathcal B)\to (\mathcal H',\mathcal B')$ is \textsl{combinatorial} if it is of degree zero and if, for any $\tau\in\mathcal B$, the element $\Phi(\tau)\in \mathcal H'$ is a linear combination of elements of the basis $\mathcal B'$ with non-negative integer coefficients. Combinatorial Hopf algebras in the above sense together with combinatorial morphisms form a category. The forgetful functor $(\mathcal H,\mathcal B)\mapsto \mathcal H$ into the category of connected graded Hopf algebras is given by forgetting the basis.
\begin{remark}\label{nondeg}\rm The inverse-factorial character $q$ may vanish on some elements $\tau$ of the basis, yielding $\tau!=+\infty$. This happens if and only if $\tau$ is primitive of degree $n\ge 2$. We therefore call a combinatorial Hopf algebra \textsl{non-degenerate} if \begin{equation}
\mathcal B\cap\mop{Prim}(\mathcal H)=\mathcal B_1. \end{equation} The three combinatorial Hopf algebras $\mathcal H^A_{\joinrel{\,\scriptscriptstyle\amalg\hskip -2.5pt\amalg}\,}$, $\mathcal H^A_{\smop{BCK}}$ and $\mathcal H^A_{\smop{MKW}}$ happen to be non-degenerate. Examples of degenerate combinatorial Hopf algebras can easily be found among Hopf algebras of Feynman graphs, as primitive multiloop Feynman graphs do exist. \end{remark}
\section{Rough paths and connected graded Hopf algebras} \label{sect:H-rough}
We show that Lyons' definition of rough paths \cite{L98} extends straightforwardly when the shuffle Hopf algebra is replaced by any commutative connected graded Hopf algebra. In particular, a naturally extended version of the extension theorem \cite[Theorem 2.2.1]{L98} is available.
\subsection{Chen iterated integrals and rough paths}
\noindent Let $d$ be a positive integer, and let us consider a smooth path in $\mathbb R^d$ \allowdisplaybreaks \begin{eqnarray*}
X:\mathbb R &\longrightarrow
& \mathbb R^d\\
t &\longmapsto & X(t)=\big(X_1(t),\ldots,X_d(t)\big). \end{eqnarray*} Let $\Cal H^A$ be the algebra of the free monoid $A^*$ generated by the alphabet $A:=\{a_1,\ldots,a_d\}$, and augmented with the empty word $\bm 1$ as unit. Let $\mathbb X_{st}\in(\Cal H^A)^\star$ be defined for any $s,t\in\mathbb R$ and word $w=a_{j_1}\cdots a_{j_n} \in A^*$ by $n$-fold iterated integrals: \begin{eqnarray}\label{signature}
\langle \mathbb X_{st},\,a_{j_1}\cdots a_{j_n}\rangle
&:=&\int\cdots\int_{s\le t_n\le\cdots \le t_1\le t}\,\dot X_{j_1}(t_1)\cdots \dot X_{j_n}(t_n)\,dt_1\cdots dt_n\nonumber\\
&=&\int\cdots\int_{s\le t_n\le\cdots \le t_1\le t}\,dX_{j_1}(t_1)\cdots dX_{j_n}(t_n). \end{eqnarray}
This is extended to the empty word $\bm 1$ by $\langle \mathbb X_{st},\bm 1\rangle:=1$. Suppose moreover that the derivative $\dot X$ is bounded, i.e., $\mop{sup}_{1\le j\le d}\mop{sup}_{t\in\mathbb R}|\dot X_j(t)|=C<+\infty$. The volume of the simplex $$
\Delta^n_{[s,t]}:=\{(t_1,\ldots, t_n),\ s \le t_n \le \cdots \le t_1 \le t\} $$
over which the iterated integration \eqref{signature} of length $n$ is performed is equal to $|t-s|^n/n!$, which yields the following estimate for any word $w\in A^*$: \begin{equation} \label{sp-estimates}
\mop{sup}_{s\neq t}\frac{\vert\langle \mathbb X_{st},\,w\rangle\vert}{\vert t-s\vert^{\vert w\vert}}\le\frac{C^{|w|}}{|w|!}, \end{equation} where $\vert w\vert$ stands for the length of the word $w$, i.e., its number of letters. It turns out \cite{Chen54, Chen57, Chen67, Chen77, Ree58} that $\mathbb X_{st}$ is a two-parameter family of characters with respect to the shuffle product of words, namely: \begin{equation}\label{shuffle}
\langle \mathbb X_{st},\,v\rangle\langle \mathbb X_{st},\,w\rangle=\langle \mathbb X_{st},\,v\joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\, w\rangle. \end{equation} The shuffle product $\joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\,$ is defined inductively by $w \joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\, \bm 1 =\bm 1\joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\, w=w$ and \begin{equation} \label{shuffleproduct}
(a_i v) \joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\, (a_j w)=a_i(v\joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\, a_j w) + a_j(a_i v \joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\, w), \end{equation} for all words $v,w \in A^\ast$ and letters $a_i,a_j\in A$. The resulting shuffle algebra is denoted $\Cal H_{\joinrel{\,\scriptscriptstyle\amalg\hskip -2.5pt\amalg}\,}^A$. For instance, $a_i \joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\, a_j = a_ia_j + a_ja_i$ and \begin{equation*}
a_{i_1} a_{i_2}\joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\, a_{i_3}a_{i_4} = a_{i_1}a_{i_2}a_{i_3}a_{i_4}
+ a_{i_1}a_{i_3}a_{i_2}a_{i_4}+a_{i_1}a_{i_3}a_{i_4}a_{i_2}
+ a_{i_3}a_{i_1}a_{i_2}a_{i_4} + a_{i_3}a_{i_1}a_{i_4}a_{i_2} + a_{i_3}a_{i_4}a_{i_1}a_{i_2}. \end{equation*} Moreover, the following property, now widely referred to as ``Chen's lemma'', is verified: \begin{equation}\label{chen-lemma}
\langle\mathbb X_{st},\,v\rangle=\sum_{v'v''=v}\langle\mathbb X_{su},\,v'\rangle\langle\mathbb X_{ut},\,v''\rangle. \end{equation} The sum on the right-hand side extends over all splittings of the word $v \in A^*$ into two words, $v'$ and $v''$, such that the concatenation $v'v''$ equals $v$. Both properties are easily shown by a suitable decomposition of the integration domain into smaller pieces with Lebesgue-negligible mutual intersections: a product of two simplices is written as a union of simplices for proving \eqref{shuffle}, and a simplex of size $t-s$ is written as a union of products of simplices of respective size $u-s$ and $t-u$ for proving \eqref{chen-lemma} when $s\le u\le t$. Chen's lemma \eqref{chen-lemma} is advantageously re-written in terms of the convolution product associated to the deconcatenation coproduct $\Delta: \Cal H_{\joinrel{\,\scriptscriptstyle\amalg\hskip -2.5pt\amalg}\,}^A \to \Cal H_{\joinrel{\,\scriptscriptstyle\amalg\hskip -2.5pt\amalg}\,}^A \otimes \Cal H_{\joinrel{\,\scriptscriptstyle\amalg\hskip -2.5pt\amalg}\,}^A$ defined on words by $v\mapsto \Delta(v)=\sum_{v'v''=v}v'\otimes v''$. This coproduct turns the shuffle algebra $\Cal H_{\joinrel{\,\scriptscriptstyle\amalg\hskip -2.5pt\amalg}\,}^A$ into a connected graded commutative Hopf algebra $(\Cal H_{\joinrel{\,\scriptscriptstyle\amalg\hskip -2.5pt\amalg}\,}^A,\joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\,,\Delta)$ with convolution product defined on the dual $({\Cal H_{\joinrel{\,\scriptscriptstyle\amalg\hskip -2.5pt\amalg}\,}^A})^\star$: \begin{equation}\label{chen-lemma-hopf}
\mathbb X_{st}=\mathbb X_{su}\ast\mathbb X_{ut}=m_{\mathbb{R}}(\mathbb X_{su}\otimes\mathbb X_{ut})\Delta. \end{equation} K.~T.~Chen proposed long ago to call ``generalized path'' \cite{Chen67} any two-parameter family of characters of the shuffle algebra $\Cal H_{\joinrel{\,\scriptscriptstyle\amalg\hskip -2.5pt\amalg}\,}^A$ verifying \eqref{shuffle} and \eqref{chen-lemma} together with a mild continuity assumption. Lyons introduced the seminal notion of rough path \cite{L98}, which can be defined as follows \cite[Definition 1.2]{HaiKel2015}: a \textsl{geometric rough path\footnote{To be precise: \textsl{weak geometric rough path}, see \cite[Remark 1.3]{HaiKel2015}.} of regularity $\gamma$}, with $0<\gamma\le 1$, is a generalized path in the sense of Chen, satisfying moreover the estimates: \begin{equation}\label{rp-estimates}
\mop{sup}_{s\neq t}\frac{\vert\langle \mathbb X_{st},\,w\rangle\vert}{\vert t-s\vert^{\gamma\vert w\vert}}<C(w) \end{equation} for any word $w$ of length $\vert w\vert$, where $C(w)$ is some positive constant. The evaluation on length one words is then given by the increments of a $\gamma$-H\"older continuous path: \begin{equation}
\langle\mathbb X_{st},\,a_j\rangle:=X_j(t)-X_j(s) \end{equation} with $X_j(t):=\langle\mathbb X_{t_0t},\,a_j\rangle$ for some arbitrary choice of $t_0\in\mathbb R$. The iterated integrals \eqref{signature} cannot be given any sense for $n\ge 2$ if the path is only of regularity $\gamma\le 1/2$. Lyons' extension theorem \cite{L98,HaiKel2015}, however, states that the collection of coefficients $\langle \mathbb X_{st},\,w\rangle$ for the words $w$ of length up to $[1/\gamma]$ completely determines the $\gamma$-regular rough path $\mathbb X$. This result is a particular case of Theorem \ref{extension-generalized} below, the proof of which also uses the sewing lemma (Proposition \ref{prop:fdpsewing}).
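For a path with polynomial components, the iterated integrals \eqref{signature} can be computed exactly, and the shuffle identity \eqref{shuffle} checked explicitly. A Python sketch with the illustrative path $X(t)=(t,t^2)$ on $[0,1]$ (the path and all names are our choices):

```python
from fractions import Fraction as Fr

# Polynomials with rational coefficients, as coefficient lists [c0, c1, ...].
def pmul(p, q):
    r = [Fr(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def pint(p):
    """Antiderivative vanishing at 0."""
    return [Fr(0)] + [c / (i + 1) for i, c in enumerate(p)]

def peval(p, x):
    return sum((c * x**i for i, c in enumerate(p)), Fr(0))

# Derivatives of the components of the illustrative path X(t) = (t, t^2).
dX = {'1': [Fr(1)], '2': [Fr(0), Fr(2)]}

def iterated(word):
    """<X_{01}, w>: the first letter of w indexes the outermost variable t_1,
    so we build the nested integral from the innermost (last) letter out."""
    F = [Fr(1)]
    for letter in reversed(word):
        F = pint(pmul(dX[letter], F))
    return peval(F, Fr(1))

def shuffle(v, w):
    """All shuffles of the words v and w, with multiplicities."""
    if not v: return [w]
    if not w: return [v]
    return [v[0] + u for u in shuffle(v[1:], w)] + \
           [w[0] + u for u in shuffle(v, w[1:])]
```

One verifies for instance $\langle\mathbb X_{01},a_1\rangle\langle\mathbb X_{01},a_2\rangle=\langle\mathbb X_{01},a_1a_2\rangle+\langle\mathbb X_{01},a_2a_1\rangle$, i.e., the decomposition of a product of simplices into simplices.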
\subsection{Rough paths generalized to commutative combinatorial Hopf algebras} \label{sect:rpha}
We have briefly indicated in the Introduction how to adapt the notion of rough path to any connected graded Hopf algebra. Here is the precise definition:
\begin{definition}\label{rough} Let $\Cal H=\bigoplus_{n\ge 0}\Cal H_n$ be a commutative graded Hopf algebra with unit $\bm 1$, connected in the sense that $\Cal H_0$ is one-dimensional, and let $\gamma\in]0,1]$. We suppose that $\mathcal H$ is endowed with a homogeneous basis $\mathcal B$ making it combinatorial and non-degenerate in the sense of Section \ref{sect:cha}. A {\rm $\gamma$-regular $\Cal H$-rough path\/} is a two-parameter family $\mathbb X=(\mathbb X_{st})_{s,t\in\mathbb R}$ of linear forms on $\Cal H$ such that $\langle \mathbb X_{st},\bm 1\rangle=1$ and \begin{enumerate}[I)] \item \label{lyons-one} for any $s,t\in\mathbb R$ and for any $\sigma,\tau$ in $\Cal H$, the following equality holds $$
\langle\mathbb X_{st},\,\sigma\tau\rangle=\langle\mathbb X_{st},\,\sigma \rangle\langle\mathbb X_{st},\, \tau\rangle, $$
\item \label{lyons-two} for any $s,t,u\in\mathbb R$, Chen's lemma holds $$
\mathbb X_{su}*\mathbb X_{ut}=\mathbb X_{st}, $$ where the convolution $\ast$ is the usual one defined in terms of the coproduct on $\Cal H$,
\item \label{lyons-three} for any $n\ge 0$ and for any $\sigma \in \Cal B_n$, we have the estimates \begin{equation}\label{plrp-estimates}
\mop{sup}_{s\neq t}\frac{\vert\langle \mathbb X_{st},\,\sigma\rangle\vert}{\vert t-s\vert^{\gamma |\sigma|}}<C(\sigma). \end{equation} \end{enumerate} \end{definition}
The notion of $\gamma$-regular branched rough path \cite{Gubi2010, HaiKel2015} is recovered by choosing for $\Cal H$ the Butcher--Connes--Kreimer Hopf algebra $\Cal H_{\smop{BCK}}^A$ of $A$-decorated (non-planar) rooted forests. Recall that the product in this Hopf algebra is given by the disjoint union of rooted trees. \begin{remark}\rm Theorem \ref{extension-generalized} below will make it possible to give precise expressions for the constants $C(w)$ and $C(\sigma)$ in Estimates \eqref{rp-estimates} and \eqref{plrp-estimates}, respectively. \end{remark}
\noindent The truncated counterpart of $\Cal H$-rough paths is defined as follows.
\begin{definition}\label{rough-truncated} Let $N$ be a positive integer and let $\Cal H^{(N)}:=\bigoplus_{k=0}^N \Cal H_k$. Let $\gamma\in]0,1]$. A {\rm $\gamma$-regular $N$-truncated $\Cal H$-rough path\/} is a two-parameter family $\mathbb X=(\mathbb X_{st})_{s,t\in\mathbb R}$ of linear forms on $\Cal H^{(N)}$ such that: \begin{enumerate}[i)] \item \label{item-one-truncated} the multiplicativity property \eqref{lyons-one} above holds for any $\sigma\in \Cal H_{p}$ and $\tau \in \Cal H_{q}$ with $p+q\le N$,
\item \label{item-two-truncated} Chen's lemma \eqref {lyons-two} holds, where the convolution refers to the restriction of the coproduct to $\Cal H^{(N)}$,
\item \label{item-three-truncated} the estimates \eqref{lyons-three} hold for any $\sigma \in \Cal H_n$ with $n\le N$. \end{enumerate} \end{definition}
For later use we also recall Sweedler's notation $\Delta(\sigma)=\sum_{(\sigma)}\sigma_{1}\otimes \sigma_{2}$ for the full coproduct $\Delta$ in $\Cal H$, as well as its iterated versions $\Delta^{(k-1)}(\sigma)=\sum_{(\sigma)}\sigma_1\otimes\cdots\otimes \sigma_k$. For the reduced coproduct, we also adopt a Sweedler-type notation: $$
\Delta'(\sigma) := \Delta(\sigma) - \sigma \otimes\bm 1 - \bm 1 \otimes \sigma = \sideset{}{'}\sum_{(\sigma)} \sigma' \otimes \sigma''. $$
Lyons' extension theorem \cite[Theorem 2.2.1]{L98} can be generalised to this setting, with basically the same proof:
\begin{theorem}\label{extension-generalized} Let $\gamma\in]0,1]$, and let $N:=[1/\gamma]$. Any $\gamma$-regular $N$-truncated $\Cal H$-rough path admits a unique extension to a $\gamma$-regular $\Cal H$-rough path. Moreover, there exists a positive constant $c$ such that the following estimate holds: \begin{equation}\label{estimate-gamma-rough}
\vert\langle\mathbb X_{st},\sigma\rangle\vert\le c^{|\sigma|}q_\gamma(\sigma)|t-s|^{\gamma|\sigma|} \end{equation}
for any $\sigma\in\mathcal B$, with $q_\gamma(\sigma)=q(\sigma)$ for $|\sigma|\le N$ and \begin{equation}\label{recursive-q-gamma}
q_\gamma(\sigma):=\frac{1}{2^{\gamma|\sigma|}-2}\sideset{}{'}\sum_{(\sigma)}q_\gamma(\sigma')q_\gamma(\sigma'') \end{equation}
for $|\sigma|\ge N+1$. \end{theorem}
\begin{proof} Notice that $\varepsilon:=\gamma(N+1)-1$ is (strictly) positive. If the element $\sigma\in\Cal H$ is homogeneous of degree $n$, then $\sigma'$ and $\sigma''$ in its reduced coproduct, $\Delta'(\sigma)$, can be taken homogeneous with respective degree $p$ and $q$ with $p+q=n$ and $p,q\le n-1$. Now let $(\mathbb X_{st})_{s,t\in\mathbb R}$ be a $\gamma$-regular $N$-truncated $\Cal H$-rough path. We extend it trivially to $\Cal H^{(N+1)}$ by setting $\langle\mathbb X_{st},\,\sigma\rangle=0$ for any $\sigma\in\Cal H_{N+1}$.\\
\noindent Now let $\sigma\in \Cal B_{N+1}$. Fix an arbitrary $o\in\mathbb R$, and consider the function of two real variables: \begin{equation}
\mu(s,t):=\sideset{}{'}\sum_{(\sigma)}\langle\mathbb X_{os},\sigma'\rangle\langle\mathbb X_{st},\sigma''\rangle. \end{equation} Let $s,t,u\in\mathbb R$. A simple computation yields: \begin{align} \lefteqn{\mu(s,u)+\mu(u,t)-\mu(s,t)
=\sideset{}{'}\sum_{(\sigma)}-\langle\mathbb X_{os},\sigma'\rangle\langle\mathbb X_{st},\sigma''\rangle+\langle\mathbb X_{os},\sigma'\rangle\langle\mathbb X_{su},\sigma''\rangle
+ \langle\mathbb X_{ou},\sigma'\rangle\langle\mathbb X_{ut},\sigma''\rangle} \nonumber\\
&=-\sideset{}{'}\sum_{(\sigma)}\langle\mathbb X_{os},\sigma'\rangle\langle\mathbb X_{su},\sigma''\rangle\langle\mathbb X_{ut},\sigma'''\rangle
-\sideset{}{'}\sum_{(\sigma)}\langle\mathbb X_{os},\sigma'\rangle\langle\mathbb X_{su},\sigma''\rangle
-\sideset{}{'}\sum_{(\sigma)}\langle\mathbb X_{os},\sigma'\rangle\langle\mathbb X_{ut},\sigma''\rangle \nonumber\\
&\hskip 10mm +\sideset{}{'}\sum_{(\sigma)}\langle\mathbb X_{os},\sigma'\rangle\langle\mathbb X_{su},\sigma''\rangle\nonumber\\
&\hskip 10mm +\sideset{}{'}\sum_{(\sigma)}\langle\mathbb X_{os},\sigma'\rangle\langle\mathbb X_{su},\sigma''\rangle\langle\mathbb X_{ut},\sigma'''\rangle
+\sideset{}{'}\sum_{(\sigma)}\langle\mathbb X_{os},\sigma'\rangle\langle\mathbb X_{ut},\sigma''\rangle
+\sideset{}{'}\sum_{(\sigma)}\langle\mathbb X_{su},\sigma'\rangle\langle\mathbb X_{ut},\sigma''\rangle \nonumber\\
&=\sideset{}{'}\sum_{(\sigma)}\langle\mathbb X_{su},\sigma'\rangle\langle\mathbb X_{ut},\sigma''\rangle.\label{mu} \end{align} Hence if $s\le u\le t$ or $t\le u\le s$, we can estimate, using \eqref{recursive-q-gamma}: \begin{eqnarray*}
\vert\mu(s,t)-\mu(s,u)-\mu(u,t)\vert
&\le& \sideset{}{'}\sum_{(\sigma)}\vert\langle\mathbb X_{su},\sigma'\rangle\langle\mathbb X_{ut},\sigma''\rangle\vert\\
&\le& \sideset{}{'}\sum_{(\sigma)}q_\gamma(\sigma')c^{|\sigma'|}|u-s|^{\gamma|\sigma'|}q_\gamma(\sigma'')c^{|\sigma''|}|t-u|^{\gamma|\sigma''|}\\
&\le& c^{|\sigma|}\big(2^{\gamma|\sigma|}-2\big)q_\gamma(\sigma)|t-s|^{\gamma|\sigma|}, \end{eqnarray*} where the last step uses $|u-s|,|t-u|\le |t-s|$, $|\sigma'|+|\sigma''|=|\sigma|$ and the recursive definition \eqref{recursive-q-gamma}.
Here we have chosen the constant $c$ such that the estimates \eqref{estimate-gamma-rough} hold for any $\sigma\in\mathcal B_n,\, n\le N$. This is possible since the sets $\mathcal B_n$ are finite. It follows from $\gamma|\sigma|=\gamma(N+1)=1+\varepsilon$ and the Sewing Lemma (Proposition \ref{prop:fdpsewing}) that there exists a unique map $\varphi$ defined on $\mathbb R$, up to an additive constant, such that: \begin{align} \label{estimate-phi}
\vert\varphi(t)-\varphi(s)-\mu(s,t)\vert
&\le \frac{c^{|\sigma|}}{2^{\gamma|\sigma|}-2}\sideset{}{'}\sum_{(\sigma)}q_\gamma(\sigma')q_\gamma(\sigma'')|t-s|^{\gamma|\sigma|}\\
&=c^{|\sigma|}q_\gamma(\sigma)|t-s|^{\gamma|\sigma|}. \end{align} Now defining: \begin{eqnarray}
\langle\wt\mathbb X_{st},\sigma\rangle &:=&
\begin{cases}
\langle\mathbb X_{st},\sigma\rangle \hbox{,\ for }\sigma\in\Cal H_n,\, n\le N,\\
\varphi(t)-\varphi(s)-\mu(s,t) \hbox{,\ for }\sigma\in\Cal H_{N+1},\label{def-xtilde}
\end{cases} \end{eqnarray} we immediately get from \eqref{mu}: \begin{equation} \label{chen-N-plus-one}
\langle\wt\mathbb X_{st}-\wt\mathbb X_{su}-\wt\mathbb X_{ut},\,\sigma\rangle
=\sideset{}{'}\sum_{(\sigma)}\langle\mathbb X_{su},\sigma'\rangle\langle\mathbb X_{ut},\sigma''\rangle. \end{equation} From \eqref{estimate-phi} and \eqref{def-xtilde} we then have \begin{equation} \label{estimate-wtxst}
\vert\langle\wt\mathbb X_{st},\sigma\rangle\vert\le c^{|\sigma|}q_\gamma(\sigma)|t-s|^{\gamma|\sigma|}. \end{equation}
Let us now check item \eqref{item-one-truncated} in Definition \ref{rough-truncated} for $\wt\mathbb X_{st}$ for any $\sigma\in\Cal H_p$ and $\tau\in\Cal H_q$ with $p+q=N+1$. We split the interval $[s,t]$ (or $[t,s]$) into $k$ sub-intervals $[s_j,s_{j+1}]$ of equal length $|t-s|/k$, with $s_0:=\mop{inf}(s,t)$ and $s_k:=\mop{sup}(s,t)$. From Chen's lemma up to degree $N+1$ stemming from \eqref{chen-N-plus-one}, we can compute, supposing $s\le t$ here: \begin{align*}
\lefteqn{\langle\wt\mathbb X_{st},\sigma\tau\rangle-\langle\wt\mathbb X_{st},\sigma\rangle\langle\wt\mathbb X_{st},\tau\rangle}\\
&=\langle\wt\mathbb X_{s_0s_1}\ast\cdots\ast\wt\mathbb X_{s_{k-1}s_k},\,\sigma\tau\rangle
-\langle\wt\mathbb X_{s_0s_1}\ast\cdots\ast\wt\mathbb X_{s_{k-1}s_k},
\,\sigma\rangle\langle\wt\mathbb X_{s_0s_1}\ast\cdots\ast\wt\mathbb X_{s_{k-1}s_k},\,\tau\rangle\\
&=\sum_{(\sigma),(\tau)}\bigg(\prod_{j=0}^{k-1}\langle \wt\mathbb X_{s_js_{j+1}},\sigma_j\tau_j\rangle-\prod_{j=0}^{k-1}\langle
\wt\mathbb X_{s_js_{j+1}},\sigma_j\rangle\langle \wt\mathbb X_{s_js_{j+1}},\tau_j\rangle\bigg). \end{align*} The term under the summation sign vanishes unless there is a $j \in \{0,\ldots,k-1\}$ such that $\sigma_j=\sigma$ and $\tau_j=\tau$, in which case we have $\sigma_i=\tau_i=\bm 1$ for $i\neq j$. Hence, \begin{equation*}
\langle\wt\mathbb X_{st},\sigma\tau\rangle-\langle\wt\mathbb X_{st},\sigma\rangle\langle\wt\mathbb X_{st},\tau\rangle
=\sum_{j=0}^{k-1}
\langle\wt\mathbb X_{s_js_{j+1}},\sigma\tau\rangle-\langle\wt\mathbb X_{s_js_{j+1}},\sigma\rangle\langle\wt\mathbb X_{s_js_{j+1}},\tau\rangle. \end{equation*} From \eqref{estimate-wtxst} and item \eqref{item-three-truncated} of Definition \ref{rough-truncated} we get \begin{equation*}
\vert\langle\wt\mathbb X_{st},\sigma\tau\rangle-\langle\wt\mathbb X_{st},\sigma\rangle\langle\wt\mathbb X_{st},\tau\rangle\vert
\le kc^{|\sigma|}q_\gamma(\sigma)\left \vert \frac{t-s}{k}\right\vert^{1+\varepsilon}
= k^{-\varepsilon} c^{|\sigma|}q_\gamma(\sigma)\vert t-s\vert^{1+\varepsilon}. \end{equation*} Hence $\langle\wt\mathbb X_{st},\sigma\tau\rangle=\langle\wt\mathbb X_{st},\sigma\rangle\langle\wt\mathbb X_{st},\tau\rangle$ by letting $k$ go to $+\infty$. The same argument works \textsl{mutatis mutandis} in the case $s>t$. Hence, estimate \eqref{estimate-gamma-rough} is proven for $\langle \wt\mathbb X,\sigma\rangle$ for any $\sigma\in\mathcal B_n,\, n\le N+1$.\\
Uniqueness can be proven by a similar argument. Indeed, suppose that $\overline \mathbb X$ is another $(N+1)$-truncated $\gamma$-regular $\Cal H$-rough path extending $\mathbb X$, and let $\delta_{st}:=\wt\mathbb X_{st}-\overline\mathbb X_{st}$ for any $s,t\in\mathbb R$. For any $\sigma\in\Cal H_{N+1}$ we have then the following: \allowdisplaybreaks \begin{eqnarray*}
\langle \delta_{st},\sigma\rangle
&=&\langle \wt\mathbb X_{ss_1}\ast\cdots\ast \wt\mathbb X_{s_{k-1}t}
-\overline\mathbb X_{ss_1}\ast\cdots\ast \overline\mathbb X_{s_{k-1}t},\,\sigma\rangle\\
&=&\langle \delta_{ss_1}+\cdots+\delta_{s_{k-1}t},\,\sigma\rangle. \end{eqnarray*} As we have $\vert\langle \delta_{s_js_{j+1}},\sigma\rangle\vert\le \overline C \vert \frac{t-s}{k}\vert^{1+\varepsilon}$ for some constant $\overline C$, we get $\vert\langle \delta_{st},\sigma\rangle\vert\le \overline C \vert t-s\vert^{1+\varepsilon}k^{-\varepsilon}$, hence $\langle \delta_{st},\sigma\rangle=0$ by letting $k$ go to infinity.
Iterating this process at any order finally yields a fully-fledged $\gamma$-regular $\Cal H$-rough path $\wt\mathbb X$ extending $\mathbb X$. \end{proof}
\begin{remark}\rm Considering the striking similarity of the map $q_\gamma$ with the inverse-factorial character, which is nothing but $q_\gamma$ for $\gamma=1$, Gubinelli conjectured \cite[Remark 7.4]{Gubi2010} the following comparison, in the special case of the Butcher--Connes--Kreimer Hopf algebra (corresponding to branched rough paths): \begin{equation}\label{estimate-gubi}
q_\gamma(\sigma)\le \frac{BC^{|\sigma|}}{(\sigma!)^\gamma} \end{equation} for any $\sigma$ in $\mathcal B$ (i.e., any decorated forest in this particular case of $\Cal H_{\smop{BCK}}^A$), where $B$ and $C$ are positive constants. This conjecture has recently been proven by H.~Boedihardjo \cite{B17}. In the case of the shuffle Hopf algebra (corresponding to geometric rough paths), it happens to be a consequence of Lyons' neoclassical inequality (\cite[Theorem 2.1.1]{L98}, see also \cite[Remark 7.4]{Gubi2010}). It would be interesting to prove a similar result for a general class of combinatorial Hopf algebras, in particular for the Hopf algebra of Lie group integrators $\Cal H_{\smop{MKW}}^A$ defined in Paragraph \ref{sect:hmkw} below, corresponding to the notion of rough paths we call \textsl{planarly branched rough paths}, defined in Paragraph \ref{sect:pbrp-def}. Our definition of $q_\gamma$ differs from Gubinelli's in the initial conditions: $q_\gamma(\sigma)=q(\sigma)$ versus $q_\gamma^{\smop{Gub}}(\sigma)=1$ for any $\sigma\in \mathcal B_n$, $\,n\le N$. This choice is dictated by the functorial considerations of Paragraph \ref{sect:rpcha} below. In practice, one very often has $q(\sigma)\le 1$ for any element $\sigma\in \mathcal B_n,$ $n\le N$, which yields $q_\gamma(\sigma)\le q_\gamma^{\smop{Gub}}(\sigma)$ for any $\sigma\in\mathcal B$ by induction. Hence the upper bounds for $q_\gamma^{\smop{Gub}}$ obtained in \cite{B17} in the branched case also hold for our $q_\gamma$. \end{remark}
\subsection{Factorial decay estimates} \label{ssect:factdecay}
The linear map $q_\gamma$ defined in the statement of Theorem \ref{extension-generalized} is uniquely determined by $q_\gamma(\sigma)=1$ for any $\sigma\in\mathcal B_1\cup\{\mathbf 1\}$ and the recursive equations \begin{eqnarray*}
q_\gamma(\sigma):=
\begin{cases}
\frac{1}{2^{|\sigma|}}q_\gamma* q_\gamma(\sigma) \hbox{,\ for }2\le|\sigma|\le N,\\
\frac{1}{2^{\gamma|\sigma|}}q_\gamma* q_\gamma(\sigma)\hbox{,\ for }|\sigma|\ge N+1.
\end{cases} \end{eqnarray*} As a consequence, $q_\gamma$ has the same functorial properties as the inverse-factorial character $q$: namely, if $(\Cal H,\Cal B)$ and $(\Cal H', \Cal B')$ are two connected graded Hopf algebras and if $\Phi:\Cal H\to\Cal H'$ is a morphism of Hopf algebras preserving the degree, then we have for any $\gamma\in]0,1]$, with self-explanatory notations: \begin{equation} \label{funct-qgamma}
q_{\gamma}=q'_{\gamma}\circ \Phi. \end{equation}
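For instance, take the shuffle Hopf algebra on an alphabet containing two letters $a$ and $b$, with $N=1$. The deconcatenation coproduct of the word $ab$ is $ab\otimes\mathbf 1+a\otimes b+\mathbf 1\otimes ab$, so the second recursive equation above reads $2^{2\gamma}\,q_\gamma(ab)=2\,q_\gamma(ab)+q_\gamma(a)q_\gamma(b)$, whence
\begin{equation*}
q_\gamma(ab)=\frac{1}{2^{2\gamma}-2},
\end{equation*}
recovering the inverse-factorial value $q(ab)=\frac{1}{2!}$ for $\gamma=1$.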
\begin{proposition}\cite[Remark 7.4]{Gubi2010} Let $\Cal H^A_{\joinrel{\,\scriptscriptstyle\amalg\hskip -2.5pt\amalg}\,}$ be the shuffle Hopf algebra on a finite alphabet $A$. Then the following estimate holds for any word $w\in A^*$: \begin{equation} \label{estimate-qgamma-shuffle}
|q_\gamma(w)|\le \frac{C_\gamma^{|w|-1}}{(|w|!)^\gamma} \end{equation} where $C_\gamma$ is a positive real number depending only on $\gamma$. \end{proposition}
\begin{proof} Recall Gubinelli's variant of Lyons' neoclassical inequality: there exists $c_\gamma>0$ such that \begin{equation} \label{neoclassical}
\sum_{k=0}^n\frac{a^{\gamma k}b^{\gamma(n-k)}}{(k!)^\gamma[(n-k)!]^\gamma}
\le c_\gamma\frac{(a+b)^{n\gamma}}{(n!)^\gamma}. \end{equation} Now set $$
C_\gamma:=\mop{sup}\left(1,\,\frac{2^{\gamma(N+1)}}{2^{\gamma(N+1)}-2}c_\gamma\right), $$
and proceed by induction on the length of the word $w$. The case $|w|\le N$ being obvious, suppose $|w|\ge N+1$. We can compute, using \eqref{neoclassical} in the particular case $a=b=1$: \begin{eqnarray*}
q_\gamma(w)
&=&\frac{1}{2^{\gamma|w|}-2}\sideset{}{'}\sum_{(w)}q_\gamma(w')q_\gamma(w'')\\
&\le&\frac{1}{2^{\gamma|w|}-2}\sideset{}{'}\sum_{(w)}\frac{C_\gamma^{|w'|-1}}{(|w'|!)^\gamma}\frac{C_\gamma^{|w''|-1}}{(|w''|!)^\gamma}\\
&\le&\frac{1}{2^{\gamma|w|}-2}\sum_{(w)}\frac{C_\gamma^{|w'|-1}}{(|w'|!)^\gamma}\frac{C_\gamma^{|w''|-1}}{(|w''|!)^\gamma}\\
&\le& \frac{C_\gamma^{|w|-2}}{2^{\gamma|w|}-2}\, c_\gamma \frac{2^{|w|\gamma}}{(|w|!)^\gamma}\\
&\le & \frac{C_\gamma^{|w|-1}}{(|w|!)^\gamma}. \end{eqnarray*} \end{proof}
\begin{corollary}\label{estimate-qgamma-comb} Let $(\Cal H,\Cal B)$ be a combinatorial Hopf algebra endowed with a combinatorial morphism $\Phi:(\Cal H,\Cal B)\to (\Cal H^A_{\joinrel{\,\scriptscriptstyle\amalg\hskip -2.5pt\amalg}\,}, \Cal B')$, where $A$ is a finite alphabet and $\Cal B'=A^*$ is the standard basis of words. Then for any $\sigma\in\Cal B$ the following estimate holds: \begin{equation}\label{estimate-qgamma-general}
|q_\gamma(\sigma)|\le C_\gamma^{|\sigma|-1}\frac{(|\sigma|!)^{1-\gamma}}{\sigma!} \end{equation} with the same $C_\gamma$ as above. \end{corollary}
\begin{proof} For any $\sigma\in\Cal B$ we have $\Phi(\sigma)=\sum_{w\in A^*}b_w^\sigma w$, and by functoriality of the inverse-factorial character: \begin{equation*}
\sum_{w\in A^*}b_w^\sigma=\frac{|\sigma|!}{\sigma!}. \end{equation*} The proof relies on a simple computation using functoriality of $q_\gamma$ as well as the non-negativity of the coefficients $b_w^\sigma$, together with the fact that the $L^p$ norms are nondecreasing with respect to $p\in]0,1]$ for probability measures: \allowdisplaybreaks \begin{eqnarray*}
|q_\gamma(\sigma)|^{1/\gamma}
&=&\left(\sum_{w\in A^*}b_w^\sigma q_\gamma(w)\right)^{1/\gamma}\\
&=&\left(\frac{|\sigma|!}{\sigma!}\right)^{1/\gamma}\left(\frac{\sigma!}{|\sigma|!}\sum_{w\in A^*}b_w^\sigma q_\gamma(w)\right)^{1/\gamma}\\
&\le & \left(\frac{|\sigma|!}{\sigma!}\right)^{1/\gamma-1}\sum_{w\in A^*}b_w^\sigma q_\gamma(w)^{1/\gamma}\\
&\le& \left(\frac{|\sigma|!}{\sigma!}\right)^{1/\gamma-1}\sum_{w\in A^*}C_\gamma^{(|w|-1)/\gamma}\frac{b_w^\sigma}{|w|!}\\
&\le&C_\gamma^{(|\sigma|-1)/\gamma}\left(\frac{|\sigma|!}{\sigma!}\right)^{1/\gamma-1}\frac{1}{\sigma!}\\
&\le&\left(C_\gamma^{|\sigma|-1}\frac{(|\sigma|!)^{1-\gamma}}{\sigma!}\right)^{1/\gamma}. \end{eqnarray*} \end{proof} We remark that H.~Boedihardjo recently obtained a much better estimate in the context of Gubinelli's branched rough paths, i.e., for the Butcher--Connes--Kreimer Hopf algebra, see \cite[Theorem 4]{B17}.
\subsection{Rough paths and combinatorial Hopf algebras} \label{sect:rpcha}
We shall examine further properties of rough paths in the generalised sense given in Paragraph \ref{sect:rpha}, i.e., when the Hopf algebra at hand is combinatorial.
\begin{proposition}\label{funct-rough} \begin{enumerate}
\item Let $(\mathcal H,\mathcal B)$ be a combinatorial Hopf algebra in the sense of Paragraph \ref{sect:cha} and let $q$ be the associated inverse factorial character. Then $q(x)=\frac{1}{x!}$ is a (possibly vanishing) non-negative rational number for any $x\in\mathcal B$.
\item Let $(\mathcal H,\mathcal B)$ and $(\mathcal H',\mathcal B')$ be two combinatorial Hopf algebras, and let $\Phi:(\mathcal H,\mathcal B)\to(\mathcal H',\mathcal B')$ be a combinatorial Hopf algebra morphism. Then the pull-back $\mathbb X_{st}:=\mathbb X'_{st}\circ\Phi$ of any $\gamma$-regular $\mathcal H'$-rough path $\mathbb X'_{st}$ is a $\gamma$-regular $\mathcal H$-rough path. \end{enumerate} \end{proposition}
\begin{proof} Recall that $q(x)=1$ for any $x\in \mathcal B_1$. The first assertion is then recursively derived from equation \eqref{rec-inv-fact}. Multiplicativity as well as Chen's Lemma are immediate consequences of the fact that $\Phi$ is a Hopf algebra morphism. We now check the estimate for any $x\in\mathcal B_n$ with $n\ge 0$: \begin{eqnarray*}
\vert\langle\mathbb X_{st},\,x\rangle \vert
&=&\vert\langle\mathbb X'_{st},\,\Phi(x)\rangle\vert\\
&\le&\sum_{y\in\mathcal B'_n}b^x_y\vert\langle\mathbb X'_{st},\,y\rangle\vert\\
&\le&C^n\sum_{y\in\mathcal B'_n}q_\gamma(y)b^x_y\vert t-s\vert^{\gamma n}. \end{eqnarray*} The proof of functoriality of the inverse factorial character \eqref{funct} can be easily adapted to its counterpart $q_\gamma$, although it is generally not a character when $\gamma$ differs from $1$. Hence we can derive the desired estimate: \begin{equation}
\vert\langle\mathbb X_{st},\,x\rangle \vert\le {C^n}q_\gamma(x)\vert t-s\vert^{\gamma n}. \end{equation}
Note that we have used the non-negativity of the coefficients $b^x_y$ of the matrix of $\Phi$ expressed in the bases $\mathcal B$ and $\mathcal B'$. \end{proof}
\section{Lie--Butcher theory} \label{sec:flows}
Butcher's \emph{B-series} are a special form of Taylor expansion indexed by trees. They have become a fundamental tool for analysing numerical integration algorithms. The numerical analysis of general Lie group methods requires the generalisation of the $B$-series theory to so-called Lie--Butcher series, which are based on planar rooted forests, possibly decorated.
\subsection{Rooted trees and forests} \label{ssect:trees}
For any positive integer $n$, a rooted tree of degree $n$ is a finite oriented tree with $n$ vertices. One of them, called the root, is a distinguished vertex without any outgoing edge. Any vertex can have arbitrarily many incoming edges, and any vertex other than the root has exactly one outgoing edge. Vertices with no incoming edges are called leaves. A planar rooted tree is a rooted tree together with an embedding in the plane. A planar rooted forest is a finite ordered collection of planar rooted trees. Here are the planar rooted forests up to four vertices: $$\emptyset\hskip 7mm
\racine\hskip 7mm
\arbrea\hskip 3mm
\racine\racine\hskip 7mm
\arbreba\hskip 3mm
\arbrebb\hskip 3mm
\arbrea\racine\hskip 3mm
\racine\arbrea\hskip 3mm
\racine\racine\racine\hskip 7mm
\arbreca\hskip 3mm
\arbrecb\hskip 3mm
\arbrecc\hskip 3mm
\arbreccc\hskip 3mm
\arbrecd\hskip 3mm
\arbreba\racine\hskip 3mm
\racine\arbreba\hskip 3mm
\arbrebb\racine\hskip 3mm
\racine\arbrebb\hskip 3mm
\arbrea\arbrea\hskip 3mm
\arbrea\racine\racine\hskip 3mm
\racine\arbrea\racine\hskip 3mm
\racine\racine\arbrea\hskip 3mm
\racine\racine\racine\racine $$ Let $A$ be any set. An $A$-decorated planar rooted forest is a pair $\sigma=(\overline\sigma,\varphi)$ where $\overline\sigma$ is a planar forest, and where $\varphi$ is a map from the vertex set $V(\overline\sigma)$ into $A$. We denote by $T_A^{\smop{pl}}$ (respectively $F_A^{\smop{pl}}$) the set of all $A$-decorated planar rooted trees (respectively forests), and by $\Cal{T}_A^{\smop{pl}}$ (respectively $\Cal{F}_A^{\smop{pl}}$) the linear space spanned by the elements of $T_A^{\smop{pl}}$ (respectively $F_A^{\smop{pl}}$).\\
$A$-decorated non-planar rooted forests are denoted by $\wt\sigma=(\overline{\wt\sigma},\varphi)$, where $\varphi$ is the decoration and $\overline{\wt\sigma}$ is the underlying non-planar forest. When $A$ is reduced to one element the notion of $A$-decoration is superfluous. Hence any $A$-decorated forest can be identified with its overlined counterpart.\\
\noindent Every $A$-decorated planar rooted tree can be written as: \begin{equation} \label{B+}
\sigma = B_a^+(\sigma_1\, \cdots \,\sigma_k), \end{equation} where $B_a^+$ is the operation on forests which grafts each connected component $\sigma_i$ of a planar rooted forest $\sigma_1 \cdots\sigma_k$ on a common root decorated by $a\in A$. Note that in numerical analysis the bracket notation $\sigma = [\sigma_1\, \cdots \,\sigma_k]_a$ is often used instead of the $B_a^+$ operator.
\subsection{Post-Lie and post-associative algebras} \label{ssect:postLie}
A \textsl{left post-Lie algebra} \cite{V2007, KAH2015} is a vector space $A$ (over some field $\mathbf{k}$) together with two bilinear maps $[-,-]$ and $\rhd$ from $A\otimes A$ to $A$ such that \begin{itemize}
\item $[-,-]$ is a Lie bracket, i.e., it is antisymmetric and verifies the Jacobi identity.
\item For any $a,b,c\in A$ we have $$
a\rhd [b,c]=[a\rhd b,c]+[a,b\rhd c]. $$
\item For any $a,b,c\in A$ we have $$
[a,b]\rhd c=a\rhd(b\rhd c)-(a\rhd b)\rhd c-b\rhd(a\rhd c)+(b\rhd a)\rhd c. $$ \end{itemize} \textcolor{blue}{The bracket $[\![-,-]\!]$ defined by $[\![a,b]\!]:=[a,b]+a\rhd b-b\rhd a$ is another Lie bracket on $A$.} The particular case when the Lie bracket $[-,-]$ vanishes on $A$ is referred to as a \textsl{left pre-Lie algebra}. See \cite{Manchon2011} for details. Associative counterparts of post-Lie algebras are referred to as \textsl{post-associative algebras}. They first appeared under the terminology ``D-algebras'' in \cite{MunWri2008}. A post-associative algebra is a vector space $B$ endowed with two linear maps $\cdot$ and $\rhd$ from $B\otimes B$ to $B$, a filtration $B^0=\mathbf{k.1}\subset B^1\subset B^2\subset\cdots$ with $B=\bigcup_j B^j$, and an augmentation $\varepsilon: B\to\hskip -3mm\to \mathbf{k}$ such that \begin{enumerate}
\item $L_{\mathbf{1}}=\mop{Id}_B$, where $L_a b:=a\rhd b$, and $a\rhd \mathbf{1}=0$ for any $a\in\mop{Ker}\varepsilon$.
\item The product $\cdot$ is associative with unit $\mathbf{1}$, and $B^p\cdot B^q\subset B^{p+q}$ for any $p,q\ge 0$.
\item $A:=B^1\cap\mop{Ker}\varepsilon$ is stable \textcolor{blue}{under the product $\rhd$ as well as} under the Lie bracket obtained by anti-symmetrisation of the associative product, and generates the unital associative algebra $(B,\cdot)$.\label{d-alg-gen}
\item\label{d-alg-der} For any $a,b,c\in B$ with $a\in A$ we have $$
a\rhd (b \cdot c)=(a\rhd b)\cdot c+b\cdot (a\rhd c). $$
\item\label{d-alg-induction} For any $a,b,c\in B$ with $a\in A$ we have $$
(a \cdot b)\rhd c=a\rhd(b\rhd c)-(a\rhd b)\rhd c. $$ \end{enumerate} In particular, $A$ is a post-Lie algebra. Conversely, the enveloping algebra of a post-Lie algebra is a post-associative algebra.
\noindent The \textsl{Grossman--Larson product} on a post-associative algebra is \textcolor{blue}{characterised} by the identity: \begin{equation} \label{gl-product}
(a*b)\rhd c=a\rhd(b\rhd c) \end{equation} for any $a,b,c\in B$, in other words $L_{a*b}=L_a\circ L_b$. \textcolor{blue}{It is defined as follows: the map $M:A\to \mop{Der}\mathcal U(A,[-,-])$ defined by $M_ab:=ab+a\rhd b$ is easily seen to verify $$
[M_a,M_b]=M_{[\hskip -1pt [a,b]\hskip -1pt]}. $$ Hence $M$ yields an associative algebra morphism, still denoted by $M$, from $\mathcal U(A,[\![-,-]\!])$ to $\mop{End}\mathcal U(A,[-,-])$. Using for example a Poincar\'e--Birkhoff--Witt basis, one can see that the morphism \begin{eqnarray*}
\Theta:\mathcal U(A,[\![-,-]\!])&\longrightarrow &\mathcal U(A,[-,-])\\
v&\longmapsto& M_v.\mathbf{1} \end{eqnarray*} of $\mathcal U(A,[\![-,-]\!])$-modules is bijective, which yields a new associative product $*$ on $\mathcal U(A,[-,-])$ given by $u*v:=\Theta\big(\Theta^{-1}(u).\Theta^{-1}(v)\big)$. Now the identity of $A$ extends to a unique surjective post-associative algebra morphism $\kappa:\mathcal U(A,[-,-])\to\hskip -5pt \to B$. The ideal $\mop{Ker}\kappa$ is stable under both products $\cdot$ and $\rhd$, hence under the product $*$, which thus descends to the quotient $B$.} In particular, for any $a,b,c\in B^1$ we have $a*b=a \cdot b+a\rhd b,$ and $$
a*b*c=a \cdot b \cdot c+(a\rhd b) \cdot c+b \cdot (a\rhd c)+a \cdot (b\rhd c)+(a \cdot b)\rhd c+(a\rhd b)\rhd c. $$
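As a minimal illustration in terms of the decorated planar forests of Paragraph \ref{ssect:freepostLie} below, for two one-vertex trees decorated by letters $a$ and $b$, the Grossman--Larson product reads
\begin{equation*}
\racine_a*\racine_b=\racine_a\,\racine_b+B_b^+(\racine_a),
\end{equation*}
namely the concatenation of the two trees plus the grafting of $\racine_a$ onto the root decorated by $b$.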
An important example of post-Lie algebra is given by $C^\infty(\Cal M,\mathfrak g)$. We suppose that the Lie group $G$, with Lie algebra $\mathfrak g$, acts transitively on the smooth manifold $\Cal M$. Any smooth map $f\in C^\infty(\Cal M,\mathfrak g)$ defines a smooth vector field $\# f$ on $\Cal M$ (i.e., a derivation of $C^\infty(\Cal M)$) via: \begin{equation} \label{vector-field}
\# f(g)(x):=\frac{d}{dt}{\restr{t=0}}\,g\Big(\exp \big(tf(x)\big).x\Big). \end{equation} In the language of Lie algebroids, considering the tangent vector bundle and the trivial vector bundle $E=\Cal M\times\mathfrak g$, the map $\#:C^\infty(\Cal M,\mathfrak g)\to\mop{Der} C^\infty(\Cal M)$ is the composition on the left with the anchor map $\rho: E\to T\Cal M$ defined by $\rho (x,X):=\frac{d}{dt}{\restr{t=0}}(\exp tX).x$.\\
Formula \eqref{vector-field} also makes sense for $g\in C^\infty(\Cal M,\mathfrak g)$ or $g\in C^\infty\big(\Cal M,\mathcal U(\mathfrak g)\big)$. It is shown in \cite{MunWri2008} that $C^\infty\big(\Cal M,\mathcal U(\mathfrak g)\big)$, endowed with the pointwise product in $\mathcal U(\mathfrak g)$ as well as the product $\rhd$ given by $f\rhd g:=\# f(g)$ is a post-associative algebra. The Grossman--Larson product reflects the composition of differential operators, in the sense that we have $$
\#(f*g)=\#f\circ\#g. $$ Similarly $C^\infty(\Cal M,\mathfrak g)$, endowed with the pointwise Lie bracket in $\mathfrak g$ and the product $\rhd$ given by $f\rhd g:=\# f(g)$ is a post-Lie algebra.
\subsection{Free post-Lie algebras} \label{ssect:freepostLie}
It is proven in \cite{MunWri2008} that the free post-associative algebra $\mathcal D_A$ generated by the set $A$ is the algebra of $A$-decorated planar forests endowed with concatenation and left grafting. The latter is defined for any $A$-decorated planar rooted tree $\sigma$ and forest $\tau$ by: \begin{equation} \label{searrow}
\sigma \rhd \tau = \sum_{v\ \smop{vertex of}\ \tau}{\sigma \searrow_{v} \tau}, \end{equation} where $\sigma \searrow_{v} \tau$ is the decorated forest obtained by grafting the planar tree $\sigma$ on the vertex $v$ of the planar forest $\tau$, such that $\sigma$ becomes the leftmost branch, starting from vertex $v$, of this new tree. It is also well-known that the usual grafting product ``$\to$'' given for \textsl{non-planar} rooted trees $\wt\tau$ by the same formula \eqref{searrow}, satisfies the left pre-Lie identity: $$
\wt\sigma_1 \to ({\wt\sigma}_2 \to \wt\tau) - (\wt\sigma_1 \to {\wt\sigma}_2)\to \wt\tau
= {\wt\sigma}_2 \to ({\wt\sigma}_1 \to \wt\tau) - ({\wt\sigma}_2 \to \wt\sigma_1)\to \wt\tau. $$ \textcolor{blue}{On the other hand, the linear span of $A$-decorated planar rooted trees endowed with the operation $\rhd$ of \eqref{searrow} is the free magmatic algebra\footnote{The free magma $(\mathcal M_A,*)$ on a set $A$ is the set of well-parenthesised words with letters in $A$. The binary operation $*$ consists in putting each component between an extra pair of parentheses and concatenating them, e.g. $ab*c(de)=(ab)\big(c(de)\big)$. The free magmatic algebra $\langle \mathcal M_A\rangle$ is the vector space freely generated by $\mathcal M_A$, endowed with the bilinear extension of the product $*$. It is determined up to isomorphism by the following universal property: for any vector space $V$ endowed with a bilinear map $\#:V\times V\to V$ and for any set map $f:A\to V$, there is a unique linear map $\overline f:\langle \mathcal M_A\rangle\to V$ respecting both bilinear products.}, see \cite{EM14}}. The definition of $\sigma \rhd \tau$ when $\sigma$ is a decorated forest is given recursively with respect to the number of connected components of $\sigma$, using axiom \eqref{d-alg-induction}.\\
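For instance, grafting a one-vertex tree decorated by $a$ onto the two-vertex tree $B_b^+(\racine_c)$ produces one term for each of the two vertices; with the leftmost-branch convention above,
\begin{equation*}
\racine_a\rhd B_b^+(\racine_c)=B_b^+(\racine_a\,\racine_c)+B_b^+\big(B_c^+(\racine_a)\big).
\end{equation*}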
As a result, the free post-Lie algebra $\mathcal P_A$ generated by $A$ is the free Lie algebra generated by the linear span of $A$-decorated planar rooted trees (see also \cite{V2007}).
\subsection{Lie--Butcher series} \label{ssect:LieButcher}
Let $\Cal M$ be a homogeneous space under the action of a Lie group $G$ with Lie algebra $\mathfrak g$, and let $f:=\{f_i\}_{i\in A}$ be a collection of smooth maps from $\Cal M$ to $\mathfrak g$ indexed by a set $A$. By the freeness property, there is a unique post-Lie algebra morphism $\mathcal F_{f}: \mathcal P_A\to C^\infty(\Cal M,\mathfrak g)$ such that $\mathcal F_f(\racine_i)=f_i$. The vector fields $\#\mathcal F_f(\sigma)$, where $\sigma$ is a planar rooted $A$-decorated tree, are the so-called \textsl{elementary differentials}. Similarly, $\mathcal F_f$ extends uniquely to a post-associative algebra morphism $\mathcal F_f: \mathcal D_A\to C^\infty\big(\Cal M,\mathcal U(\mathfrak g)\big)$. This extended morphism also respects the Grossman--Larson product on both sides.
\noindent A \textsl{Lie--Butcher series} is an element of $C^\infty\big(\Cal M,\mathcal U(\mathfrak g)\big)[[h]]$ given by \begin{equation}
LB(\alpha,hf):=\sum_{k\ge 0}\ \sum_{\sigma \in F_{A,k}^{\smop{pl}}}\alpha(\sigma)\mathcal F_{hf}(\sigma)=\sum_{k\ge 0}\ \sum_{\sigma \in F_{A,k}^{\smop{pl}}}h^k\alpha(\sigma)\mathcal F_{f}(\sigma), \end{equation} where $F_{A,k}^{\smop{pl}}$ is the set of $A$-decorated planar rooted forests with $k$ vertices and $\alpha$ is a linear map from the linear span $\Cal F_A^{\smop{pl}}$ of $A$-decorated planar rooted forests to the field $\mathbf k$.
\subsection{Three partial orders on planar forests} \label{ssec:partialorders}
Let $\overline\sigma$ be any planar rooted forest, and $v$, $w$ be two elements in its vertex set $V(\overline\sigma)$. Define a partial order $<$ on $V(\overline\sigma)$ as follows: $v<w$ if there is a path from one root to $w$ passing through $v$. Roots are the minimal elements, and leaves are the maximal elements. \ignore{The rooted forest $\wt{\sigma}$ (discarding its planar structure) is the Hasse diagram of the poset $(V(\overline\sigma),<)$.}\\
Following \cite{A14}, we define a refinement $\ll$ of this order to be the transitive closure of the relation $R$ defined by: $vRw$ if $v<w$, or both $v$ and $w$ are linked to a third vertex $u \in V(\overline\sigma)$, such that $v$ lies on the right of $w$, like this: $$
\arbrebbLab, $$ or both $v$ and $w$ are roots, with $v$ on the right of $w$. \vskip 2mm A further refinement $\lll$ on $V(\overline\sigma)$ is the total order defined as follows: $v \lll w$ if and only if $v$ occurs before $w$ on a path exploring the rooted forest from right to left, starting from the root of the rightmost connected component: \vskip 6mm \begin{eqnarray*}
&\arbreebz& \end{eqnarray*} \centerline{\small{A planar rooted tree with its vertices labelled according to total order\ $\!\lll\!$.}}
\subsection{The Hopf algebra of Lie group integrators} \label{sect:hmkw}
The universal enveloping algebra over the free post-Lie algebra $\mathcal P_A$ endowed with the Grossman--Larson product and deshuffle coproduct\footnote{Uniquely determined by the fact that any $A$-decorated planar rooted tree is primitive.} is a connected Hopf algebra, graded by the number of vertices. Its graded dual is the Hopf algebra of Lie group integrators $\mathcal H_{\smop{MKW}}^A$ introduced by Munthe-Kaas and Wright \cite{MunWri2008}. The convolution product on $\mathcal L(\mathcal H_{\smop{MKW}}^A,\mathbf k)$ is then the Grossman--Larson product naturally extended to series. The product of $\mathcal H_{\smop{MKW}}^A$ is the shuffle product of planar forests (where the trees are the letters), and the coproduct is given in terms of left-admissible cuts \cite{MunWri2008}: \begin{equation}
\Delta(\tau)=\sum_{V'\sqcup V''= V(\tau) \atop V''\ll V'}(\tau\restr{V'})^{\joinrel{\,\scriptscriptstyle\amalg\hskip -2.5pt\amalg}\,}\otimes \tau\restr{V''}. \end{equation} Here $(\tau\restr{V'})^{\joinrel{\,\scriptscriptstyle\amalg\hskip -2.5pt\amalg}\,}$ is the shuffle product of the connected components of the poset $(V', \ll\restr{V'})$. Note that the restriction of the partial order $\ll$ to $V'$ is generally weaker than the partial order $\ll$ of the forest $\tau\restr {V'}$: the latter makes the poset $V'$ connected, which is generally not the case for the former.
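For instance, for the undecorated two-vertex tree $\arbrea$, the only admissible decompositions are $V''=\emptyset$, $V''=\{\hbox{root}\}$ and $V''=V(\arbrea)$, so that
\begin{equation*}
\Delta(\arbrea)=\arbrea\otimes\mathbf 1+\racine\otimes\racine+\mathbf 1\otimes\arbrea.
\end{equation*}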
We note that the Hopf algebra of Lie group integrators $\mathcal H_{\smop{MKW}}^A$, endowed with the basis of $A$-decorated planar forests, is a combinatorial Hopf algebra in the sense of Paragraph \ref{sect:cha}.
\section{Planarly branched rough paths} \label{sec:MKWHA}
We prove in this section the non-degeneracy of the combinatorial Hopf algebra $\mathcal H_{\smop{MKW}}^A$ of Lie group integrators endowed with the basis of $A$-decorated planar rooted forests, and we define planarly branched rough paths as $\mathcal H_{\smop{MKW}}^A$-rough paths.
\subsection{Tree and forest factorials, volume computations} \label{ssect:factorials}
For any $s\le t\in \mathbb R$ and any finite poset $P$ we consider the domain $$
\Omega^{st}_P:=\big\{(t_v)_{v\in P},\, s\le t_v\le t \hbox{ and } t_v\ge t_w
\hbox{ for } v < w\big\}\subset\mathbb R^P. $$ The factorial of the poset $P$ is uniquely determined by \begin{equation} \label{poset-factorial}
\mop{Volume}(\Omega^{st}_P)=\frac{|t-s|^{|P|}}{P!}. \end{equation} For any \textsl{planar} rooted forest $\tau$ (decorated or not), we set \cite{MF17}: \begin{equation*}
\tau!:=\big(V(\tau),\ll\big)!\,. \end{equation*} Note that the factorial of a poset is the product of the factorials of its connected components, as the volume in \eqref{poset-factorial} is multiplicative over disjoint unions of posets. Our definition differs, however, from L.~Foissy's definition given in \cite[Definition 33]{F16}: in particular, our notion of factorial is invariant under the canonical involution reversing the partial order, which is not the case for the poset factorial of \cite{F16}.
The factorial ${\wt\tau}!$ of a \textsl{non-planar} rooted forest ${\wt\tau}$ is the factorial of the underlying poset $\big(V({\wt\tau}),<\big)$. The notations will always make clear whether a planar or non-planar forest factorial is considered, hence the common notation $-!$ should not cause any confusion.
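For instance, for the cherry tree $B^+(\racine\,\racine)$ on three vertices, the two notions differ: the two leaves are incomparable for the non-planar order $<$, so that
\begin{equation*}
\mop{Vol}\big(\Omega^{st}_{V,<}\big)=\int_s^t(z-s)^2\,dz=\frac{(t-s)^3}{3},
\end{equation*}
and the non-planar factorial equals $3$, whereas the order $\ll$ of any planar representative is total, giving the planar factorial $3!=6$.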
\begin{lemma}[\cite{MF17}]\label{factorials-volumes} Let $\wt\tau$ (respectively $\sigma$) be a non-planar (respectively planar) rooted forest. Then: \begin{enumerate}
\item For any $s \le t \in \mathbb R$, the volume of the domain $\Omega^{st}_{{\wt\tau}}:=\Omega^{st}_{V({\wt\tau}),<}$ is equal to $\frac{1}{{\wt\tau}!}(t-s)^{\vert {\wt\tau}\vert}$.
\item \label{deux} For any $s \le t \in \mathbb R$, the volume of the domain $\Omega^{st}_{V(\sigma),\ll}$ is equal to $\frac{1}{\sigma !}(t-s)^{\vert \sigma\vert}$.
\item \label{inv-fac-functorial} The following identity holds: \begin{equation*}
\frac{1}{{\wt\tau}!} = \mop{Sym}(\wt\tau)\sum_{\sigma\surj{4}{\wt\tau}}\frac{1}{\sigma !}, \end{equation*} where the sum runs over all the planar representatives $\sigma$ of ${\wt\tau}$, and where $\mop{Sym}(\wt\tau)$ is the symmetry factor of the non-planar rooted forest ${\wt\tau}$, i.e., the order of its automorphism group. \end{enumerate} \end{lemma}
\begin{proof} The volume of $\Omega_{{\wt\tau}}^{st}$ is multiplicative, i.e., it is the product of the volumes of $\Omega_c^{st}$ where $c$ runs over the connected components of ${\wt\tau}$. The inverse of the forest factorial shares the same property. Hence it is sufficient to check the result on trees. We proceed by induction on the number of vertices. The case of one vertex boils down to: $$
\mop{Vol}(\Omega_{\racine}^{st})=t-s=\frac{1}{\racine!}(t-s). $$ Suppose that ${\wt\tau}=B^+(f)$ is a tree with at least two vertices. From \eqref{poset-factorial} we get: $$
\frac{1}{{\wt\tau}!}=\frac{1}{\vert {\wt\tau}\vert}\frac{1}{f!}. $$ On the other hand, using the induction hypothesis, we have \begin{eqnarray*}
\mop{Vol}(\Omega_{{\wt\tau}}^{st})
&=&\int_{s}^t\mop{Vol}(\Omega_{f}^{sz})\,dz\\
&=&\frac{1}{f!}\int_{s}^t (z-s)^{\vert f\vert}\, dz\\
&=&\frac{1}{f!}\frac{1}{\vert {\wt\tau}\vert}(t-s)^{\vert {\wt\tau}\vert}\\
&=&\frac{1}{{\wt\tau}!}(t-s)^{\vert {\wt\tau}\vert}. \end{eqnarray*} Now let $\sigma$ be a planar rooted forest. The volume of $\Omega_{V(\sigma),\ll}^{st}$ is multiplicative for the shuffle product but not for the concatenation. However, any planar rooted forest $\sigma$ admits a natural unique decomposition: \begin{equation*}
\sigma=\sigma'\times\sigma''=\sigma'B^+(\sigma''), \end{equation*} where $\sigma'$ and $\sigma''$ are again (possibly empty) planar rooted forests. The poset $(V(\sigma),\ll)$ is obtained by taking the disjoint union of the two posets $(V(\sigma'),\ll)$ and $(V(\sigma''),\ll)$, and adding an extra element (the root of $\sigma$) smaller than any other element. From \eqref{poset-factorial} again we get: \begin{equation} \label{rec-fac-one}
\sigma !=\vert\sigma\vert\,\sigma'!\,\sigma''!. \end{equation} The second assertion is then proved recursively with a computation analogous to the one above: \begin{eqnarray*}
\mop{Vol}(\Omega_{V(\sigma),\ll}^{st})
&=&\int_{z=s}^t\mop{Vol}(\Omega_{V(\sigma'),\ll}^{sz})\mop{Vol}(\Omega_{V(\sigma''),\ll}^{sz})\,dz\\
&=&\frac{1}{\sigma'!\sigma''!}\int_{z=s}^t (z-s)^{\vert \sigma'\vert+\vert\sigma''\vert}\, dz\\
&=&\frac{1}{\vert\sigma\vert\sigma'!\sigma''!}(t-s)^{\vert \sigma\vert}\\
&=&\frac{1}{\sigma !}(t-s)^{\vert {\sigma}\vert}. \end{eqnarray*}
Now let $\wt\tau$ be a non-planar rooted forest. For any $v \in V(\wt\tau)$, let $\mop{St}(v)$ be the set of vertices immediately above $v$, let $S_v$ be the set of total orders on $\mop{St}(v)$, and finally let $S_{\wt\tau}$ be the product of the sets $S_v$ for $v \in V(\wt\tau)$. Any element $\prec\ \in S_{\wt\tau}$ obviously defines a binary relation on $V(\wt\tau)$, also denoted by $\prec$: to be precise, $w' \prec w''$ if and only if there exists $v\in V(\wt\tau)$ such that $w',w''\in\mop{St}(v)$ and $w'\prec w''$ inside $\mop{St}(v)$. For any element $\prec\ \in S_{\wt\tau}$, let $\Cal R_\prec$ be the binary relation on $V(\wt\tau)$ defined by: \begin{equation*}
w'\Cal R_\prec w'' \hbox{ if and only if } w'<w'' \hbox{ or } w'\prec w'', \end{equation*} and let $\ll_\prec$ be the transitive closure of $\Cal R_\prec$. This is a partial order refining the forest order $<$, which, by ordering the branches at any vertex, defines a unique planar representative $\sigma_\prec$ of $\wt\tau$.
The third assertion then comes from the following fact: the domain $\Omega_{{\wt\tau}}^{st}$ is the union of the domains $\Omega_{V(\wt\tau),\ll_\prec}^{st}$ (mutually disjoint apart from a Lebesgue-negligible intersection) where $\prec$ runs over $S_{\wt\tau}$. Now two elements $\prec$ and $\prec'$ give rise to the same planar representative $\sigma$ if and only if the unique permutation of $V(\wt\tau)$ which induces, for each $v\in V(\wt\tau)$, an increasing map from $(\mop{St}(v),\prec)$ onto $(\mop{St}(v),\prec')$ is an automorphism of $\wt \tau$. \end{proof}
\noindent As a consequence, we easily obtain an analogue of Lemma 4.4 in reference \cite{Gubi2010}:
\begin{corollary}\label{planar-forest-binomial} For any rooted planar forest $\tau$ and for any $h,k \ge 0$ the following holds: \begin{equation*}
(h+k)^{\vert\tau\vert} = \sum_{V' \sqcup V'' = V(\tau),\atop V''\ll V'}\frac{\tau !}
{(V',\ll\restr{V'})!\,\tau\restr{V''}!}h^{\vert V'\vert}k^{\vert V''\vert}. \end{equation*} \end{corollary}
\begin{proof} Let $s\le u\le t$ be three real numbers, with $u-s=h$ and $t-u=k$. From the definition of the domain $\Omega_{V(\tau),\ll}^{st}$ we can immediately express it as the following union with Lebesgue-negligible pairwise intersections: \begin{equation*}
\Omega_{V(\tau),\ll}^{st}=\bigcup_{V(\tau)=V'\sqcup V'',\atop V''\ll V'}
\Omega^{su}_{(V',\ll\srestr{V'})}\times\Omega^{ut}_{(V'',\ll\srestr{V''})}. \end{equation*} The conclusion then follows from item \ref{deux} of Lemma \ref{factorials-volumes}. \end{proof}
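As a check, take for $\tau$ the planar cherry tree on three vertices, for which $\ll$ is a total order and $\tau!=6$. The admissible decompositions correspond to the lower subsets $V''$ of this chain, of cardinality $0,1,2,3$, and the right-hand side reads
\begin{equation*}
\tfrac{6}{6}\,h^3+\tfrac{6}{2}\,h^2k+\tfrac{6}{2}\,hk^2+\tfrac{6}{6}\,k^3=(h+k)^3.
\end{equation*}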
\begin{remark}\label{gen-posets}\rm The inversion of the order in the definition of $\Omega_P^{st}$ is not really necessary, as this inversion amounts to a change of variables $t_v\mapsto s+t-t_v$, which does not change the volume. But it makes the proof slightly more direct. \end{remark}
\begin{remark}\rm Applying Corollary \ref{planar-forest-binomial} in the special case $h=k=1$ shows that $\tau\mapsto 1/\tau!$ extends linearly to the unique inverse-factorial character of the Hopf algebra $\mathcal H_{\smop{MKW}}^A$ taking value $1$ on the letters of $A$. As a consequence, the combinatorial Hopf algebra $\mathcal H^A_{\smop{MKW}}$ endowed with the decorated forest basis is non-degenerate. The analogue is true for non-planar forests and the $A$-decorated Butcher--Connes--Kreimer Hopf algebra, due to Lemma 4.4 in \cite{Gubi2010}. As a consequence, assertion \eqref{inv-fac-functorial} of Lemma \ref{factorials-volumes} can be derived in a purely algebraic way, using the Hopf algebra morphism $\Omega: \mathcal H_{\smop{BCK}}^A \to \mathcal H_{\smop{MKW}}^A$ of \cite{MunWri2008} given by \eqref{omega} below, as well as the functoriality of inverse-factorial characters. \end{remark} \textcolor{blue} { \begin{remark}\rm Another interesting example comes from the extraction-contraction Hopf algebra $\mathcal H^A_{\smop{EC}}$ of reference \cite{CEM2011}, in which the grading is given by the number of edges. In view of the recursive definition of the inverse factorial character given in Paragraph \ref{sect:inv-fact}, and due to the fact that the coproduct of a forest with $n$ edges contains exactly $2^n$ elements, the inverse factorial character is identically equal to $1$ on any forest. It is however not clear whether one can consider the rather exotic corresponding notion of rough path as a driving object for some kind of rough differential equation. \end{remark} } \noindent By a straightforward iteration of \eqref{rec-fac-one} one obtains another recursive formula for the planar factorial:
\begin{proposition} Let $\sigma=\sigma_1\cdots \sigma_k$ be a planar forest, decorated or not, with connected components $\sigma_j=B_+(\tau_j),\,j=1,\ldots,k$. Then \begin{equation} \label{rec-fac-two}
\sigma!=|\sigma_1|\cdot|\sigma_1\sigma_2|\cdots |\sigma_1\cdots\sigma_k|\,\tau_1!\cdots\tau_k!. \end{equation} \end{proposition}
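In small degrees, recursion \eqref{rec-fac-two} is easily checked by machine. The following Python sketch is an illustration only, under an assumed encoding: a bare planar tree is the tuple of its subtrees (the single vertex being the empty tuple) and a planar forest is a tuple of trees; decorations are irrelevant for the factorial.

```python
def size(tree):
    """Number of vertices of a bare planar tree (a tuple of subtrees)."""
    return 1 + sum(size(child) for child in tree)

def planar_factorial(forest):
    """Planar factorial via the recursion
    sigma! = |sigma_1| |sigma_1 sigma_2| ... |sigma_1...sigma_k| tau_1! ... tau_k!,
    where sigma_j = B+(tau_j), so the children tuple of sigma_j is tau_j."""
    result, partial = 1, 0
    for tree in forest:
        partial += size(tree)                        # |sigma_1 ... sigma_j|
        result *= partial * planar_factorial(tree)   # tree, as a tuple, is tau_j
    return result
```

The recursion gives $n!$ both for the ladder with $n$ vertices and for the forest of $n$ bare vertices, and $6$ for the corolla on three vertices.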
\subsection{Planarly branched rough paths} \label{sect:pbrp-def}
The notion of planarly branched rough paths is given in the next definition. It is motivated by a Hopf-algebraic point of view; its significance for controlled rough differential equations will become clear below.
\begin{definition}\label{def:planarBRP} Let $\gamma\in]0,1]$ and let $A$ be a finite alphabet. A $\gamma$-regular planarly branched rough path is a $\gamma$-regular $\mathcal H_{\smop{MKW}}^A$-rough path. \end{definition}
\section{Simple and contracting arborification in the planar setting} \label{sec:quotient}
\subsection{A projection onto the shuffle Hopf algebra: planar arborification} \label{sect:planar-arb}
Let $A$ be an alphabet, and let $\Cal H_{\joinrel{\,\scriptscriptstyle\amalg\hskip -2.5pt\amalg}\,}^A$ (resp.~$\Cal H_{\smop{MKW}}^A$) be the shuffle Hopf algebra with letters in $A$ (resp.~the Hopf algebra of $A$-decorated planar forests). The \textsl{planar arborification map} $\frak a_{\ll}:\Cal H_{\smop{MKW}}^A\to\Cal H_{\joinrel{\,\scriptscriptstyle\amalg\hskip -2.5pt\amalg}\,}^A$ sends any planar decorated forest to the sum of its linear extensions. It is defined for any degree $n$ planar $A$-decorated forest $\tau=(\overline\tau,\varphi)$ as follows: \begin{equation*}
\frak a_{\ll}(\overline\tau,\varphi):=\sum_{\alpha:(\Cal V(\overline\tau),\ll)\nearrow \{1,\ldots,n\}}
\varphi\circ\alpha^{-1}(1)\cdots \varphi\circ\alpha^{-1}(n), \end{equation*} where the sum runs over the increasing bijections from the poset $\big(\Cal V(\overline\tau),\ll)$ onto $\{1,\ldots,n\}$. As an example, we have: \begin{equation*}
\frak a_{\ll}(\arbreadec ab\ \racine^c)=bac,\hskip 12mm \frak a_{\ll}(\racine^c\arbreadec ab\;)=bca+cba. \end{equation*}
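The two example values above can be checked mechanically. The following Python sketch is an illustration only and rests on an assumed encoding: a decorated planar tree is a pair \texttt{(label, children)} and a forest is a tuple of trees. It computes $\frak a_{\ll}$ through the canonical decomposition $\tau=\tau'\times_a\tau''$ and the shuffle recursion of Lemma \ref{planar-decomp} below.

```python
def shuffle(u, v):
    """All shuffles of the words u and v (strings), with multiplicity."""
    if not u:
        return [v]
    if not v:
        return [u]
    return ([u[0] + w for w in shuffle(u[1:], v)]
            + [v[0] + w for w in shuffle(u, v[1:])])

def arborify(forest):
    """Planar arborification: the linear extensions of a decorated planar
    forest, listed as words.  A tree is (label, children), a forest a tuple
    of trees; the recursion is a(tau' x_a tau'') = [a(tau') sh a(tau'')] a."""
    if not forest:
        return ['']
    *front, (label, children) = forest      # forest = tau' x_a tau''
    words = []
    for w1 in arborify(tuple(front)):       # a(tau')
        for w2 in arborify(children):       # a(tau'')
            words += [w + label for w in shuffle(w1, w2)]
    return words
```

On the two forests displayed above it returns \texttt{['bac']} and, up to ordering, \texttt{['bca', 'cba']}, in agreement with the example.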
This definition is directly inspired from the simple arborification map $\frak a:\Cal H_{\smop{BCK}}^A\to\Cal H_{\joinrel{\,\scriptscriptstyle\amalg\hskip -2.5pt\amalg}\,}^A$ which sends any (non-planar) decorated forest to the sum of its linear extensions \cite{E92, FM17, F02}. It is defined for any degree $n$ non-planar $A$-decorated forest by: \begin{equation*}
\frak a(f,\varphi):=\sum_{\alpha:(\Cal V(f),<)\nearrow \{1,\ldots,n\} }
\varphi\circ\alpha^{-1}(1)\cdots \varphi\circ\alpha^{-1}(n), \end{equation*} where the sum runs over the increasing bijections from the poset $\big(\Cal V(f),<)$ onto $\{1,\ldots,n\}$.
\begin{lemma}\label{planar-decomp} (Canonical decomposition of planar forests) \begin{enumerate} \item Any non-empty $A$-decorated planar forest $\tau$ admits a unique decomposition: \begin{equation*}
\tau=\tau'\times_a \tau'', \end{equation*} where $\tau'$ and $\tau''$ are $A$-decorated planar forests, $a \in A$ and $\tau'\times_a \tau''$ stands for $\tau'B_a^+(\tau'')$.
\item The planar arborification map can be recursively defined by $\frak a_{\ll}(\bm 1)=\bm 1$ and \begin{equation*}
\frak a_{\ll}(\tau'\times_a \tau'')=[\frak a_{\ll}(\tau')\joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\,\frak a_{\ll}(\tau'')]a. \end{equation*} \end{enumerate} \end{lemma}
\begin{proof} The first assertion is straightforward, the second is a direct consequence of the poset structure of $V(\tau'\times_a \tau'')=V(\tau')\sqcup V(\tau'')\sqcup \{a\}$ under the partial order $\ll$, which is entirely determined by the fact that \begin{enumerate} \item the restriction of $\ll$ to $V(\tau')$ is the partial order $\ll$ determined by the planar forest $\tau'$, and similarly for $\tau''$,
\item the new root, decorated by $a$, is the unique maximum,
\item vertices of $\tau'$ are incomparable with vertices of $\tau''$. \end{enumerate} \end{proof}
\begin{theorem}\label{planar-arb} The planar arborification map $\frak a_{\ll}$ is a surjective Hopf algebra morphism, combinatorial if the alphabet $A$ is finite, and the diagram below commutes. \diagramme{ \xymatrix{ \Cal H_{\smop{BCK}}^A \ar[rr]^\Omega \ar@{>>}[ddrr]_{\frak a} && \Cal H_{\smop{MKW}}^A\ar@{>>}[dd]^{\frak a_{\ll}}\\ &&\\ &&\Cal H_{\joinrel{\,\scriptscriptstyle\amalg\hskip -2.5pt\amalg}\,}^A } } \noindent where $\Omega$ is the symmetrization map \cite[Definition 8]{MunWri2008}. \end{theorem}
\begin{proof} It is well-known that $\frak a$ and $\Omega$ are Hopf algebra morphisms \cite{F02,MunWri2008}. The map $\Omega$ is given by: \begin{equation} \label{omega}
{\Omega(f)={\mop{Sym}}(f)\sum_{\tau\to\hskip -6.5pt\to f}\tau}, \end{equation} from which the commutation of the diagram easily follows. It only remains to prove by direct checking that $\frak a_{\ll}$ respects the Hopf algebra structures. For any $A$-decorated planar forests $\tau,\omega$ which admit canonical decompositions $\tau=\tau'\times_a \tau''$ and $\omega=\omega'\times_b \omega''$ according to Lemma \ref{planar-decomp}, we compute, using induction on the sum of degrees $\vert \tau\vert +\vert \omega\vert$: \allowdisplaybreaks \begin{eqnarray*}
\frak a_{\ll}(\tau\joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\, \omega)
&=&\frak a_{\ll}\big((\tau'\times_a \tau'')\joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\, (\omega'\times_b \omega'')\big)\\
&=&\frak a_{\ll}\big(\tau'B^+_a(\tau'')\joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\, \omega' B^+_b(\omega'')\big)\\
&=&\frak a_{\ll}\Big([\tau'\joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\, \omega'B^+_b(\omega'')]B^+_a(\tau'')
+[\tau'B^+_a(\tau'')\joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\, \omega']B^+_b(\omega'')\Big)\\
&=&\frak a_{\ll}\Big([\tau'\joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\,(\omega'\times_b \omega'')]\times_a \tau''
+[(\tau'\times_a\tau'')\joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\, \omega']\times_b \omega''\Big)\\
&=&\Big[\frak a_{\ll}\big(\tau'\joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\,(\omega'\times_b \omega'')\big)\joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\, \frak a_{\ll}(\tau'')\Big]a
+\Big[\frak a_{\ll}\big((\tau'\times_a \tau'')\joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\, \omega'\big)\joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\,\frak a_{\ll}(\omega'')\Big]b\\
&=&\Big[\frak a_{\ll}(\tau')\joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\,\frak a_{\ll}(\omega'\times_b \omega'')\joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\,\frak a_{\ll}(\tau'')\Big]a
+\Big[\frak a_{\ll}(\tau'\times_a \tau'')\joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\,\frak a_{\ll}(\omega')\joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\,\frak a_{\ll}(\omega'')\Big]b\\
&=&\Big[\frak a_{\ll}(\tau')\joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\,\big[\frak a_{\ll}(\omega')\joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\, \frak a_{\ll}(\omega'')\big]b\joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\,\frak a_{\ll}(\tau'')\Big]a
+\Big[\big[\frak a_{\ll}(\tau')\joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\,\frak a_{\ll} (\tau'')\big]a\joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\,\frak a_{\ll}(\omega')\joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\,\frak a_{\ll}(\omega'')\Big]b\\
&=&\Big[\frak a_{\ll}(\tau')\joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\,\frak a_{\ll}(\tau'')\Big]a\,\joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\,\,\Big[\frak a_{\ll}(\omega')\joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\,\frak a_{\ll}(\omega'')\Big]b\\
&=&\frak a_{\ll}(\tau)\joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\,\frak a_{\ll}(\omega). \end{eqnarray*} To check compatibility with coproducts, we introduce the linear operator $L_a:\Cal H_{\joinrel{\,\scriptscriptstyle\amalg\hskip -2.5pt\amalg}\,}^A\to\Cal H_{\joinrel{\,\scriptscriptstyle\amalg\hskip -2.5pt\amalg}\,}^A$ of concatenation by the letter $a$, defined by $L_a(w)=wa$ for any word $w\in A^*$. It clearly satisfies: \begin{equation*}
L_a\circ\frak a_{\ll}=\frak a_{\ll}\circ B^+_a. \end{equation*} \noindent For any $A$-decorated planar forest $\tau=\tau'\times_a \tau''$ we compute, using induction on the degree $\vert \tau\vert$: \begin{eqnarray*}
\Delta\frak a_{\ll}(\tau)
&=&\Delta\Big(\big[\frak a_{\ll}(\tau')\joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\,\frak a_{\ll}(\tau'')\big]a\Big)\\
&=&(\mop{Id}\otimes L_a)\Delta\Big(\frak a_{\ll}(\tau')\joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\,\frak a_{\ll}(\tau'')\Big)
+\frak a_{\ll}(\tau)\otimes\bm 1\\
&=&(\mop{Id}\otimes L_a)\Big(\Delta\frak a_{\ll}(\tau')\joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\,\Delta\frak a_{\ll}(\tau'')\Big)
+\frak a_{\ll}(\tau)\otimes\bm 1\\
&=&(\mop{Id}\otimes L_a)\Big((\frak a_{\ll}\otimes\frak a_{\ll})\Delta \tau'\joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\,(\frak a_{\ll}\otimes\frak a_{\ll})
\Delta \tau''\Big)+\frak a_{\ll}(\tau)\otimes\bm 1\\
&=&(\frak a_{\ll}\otimes\frak a_{\ll})\Big((\mop{Id}\otimes B^+_a)(\Delta \tau'\joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\,\Delta \tau'')
+\tau\otimes\bm 1\Big)\\
&=&(\frak a_{\ll}\otimes\frak a_{\ll})\Big((\mop{Id}\otimes B^+_a)\big(\Delta (\tau'\joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\, \tau'')\big)
+\tau\otimes\bm 1\Big)\\
&=&(\frak a_{\ll}\otimes\frak a_{\ll})(\Delta \tau). \end{eqnarray*} Compatibility with units and co-units is immediate, and compatibility with antipodes comes for free due to connectedness of both Hopf algebras. \end{proof}
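A quantitative byproduct of Lemma \ref{planar-decomp} is the recursion $L(\tau'\times_a\tau'')=\binom{|\tau'|+|\tau''|}{|\tau'|}\,L(\tau')\,L(\tau'')$ for the number $L(\tau)$ of linear extensions of $\big(\Cal V(\tau),\ll\big)$, since shuffling two words of lengths $|\tau'|$ and $|\tau''|$ with distinct letters produces $\binom{|\tau'|+|\tau''|}{|\tau'|}$ terms. Summing these counts over all bare planar forests with $n$ vertices yields $n!$, matching the number of terms in iterated Grossman--Larson products of single vertices (Lemma \ref{lemma-iterated-gl}). The Python sketch below, an illustration under a nested-tuple encoding of bare planar forests, checks this in low degree.

```python
from math import comb, factorial

def planar_trees(n):
    """All bare planar rooted trees with n vertices: a tree B+(forest) is
    identified with the tuple of its root's subtrees."""
    return planar_forests(n - 1)

def planar_forests(n):
    """All bare planar forests (tuples of trees) with n vertices."""
    if n == 0:
        return [()]
    out = []
    for k in range(1, n + 1):                  # size of the first tree
        for t in planar_trees(k):
            for rest in planar_forests(n - k):
                out.append((t,) + rest)
    return out

def size(tree):
    return 1 + sum(size(c) for c in tree)

def lin_ext(forest):
    """Number of linear extensions of (V(forest), <<), computed through the
    canonical decomposition forest = tau' x_a tau''."""
    if not forest:
        return 1
    *front, last = forest                      # last = B+(tau'')
    n1 = sum(size(t) for t in front)           # |tau'|
    n2 = size(last) - 1                        # |tau''|
    return comb(n1 + n2, n1) * lin_ext(tuple(front)) * lin_ext(last)
```

For $n\le 5$ one finds $1,2,5,14,42$ planar forests (Catalan numbers) with linear-extension totals $1,2,6,24,120$.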
\begin{proposition}\label{rp-arbo} Let $\gamma\in]0,1]$, let $A$ be a finite alphabet with $d$ letters, and let $\mathbb X_{st}$ be a $\gamma$-regular rough path in the classical sense on $\mathbb R^d$. Then its arborified version $\wt{\mathbb X}_{st}:=\mathbb X_{st}\circ\mathfrak a^\ll$ is a $\gamma$-regular planarly branched rough path on $\mathbb R^d$. \end{proposition}
\begin{proof} This is an immediate consequence of Proposition \ref{funct-rough}. \end{proof}
\noindent Proposition \ref{rp-arbo} calls for the following definition:
\begin{definition} A $\gamma$-regular planarly branched rough path $\mathbb Z_{st}$ on $\mathbb R^d$ is geometric if there exists a $\gamma$-regular rough path $\mathbb X_{st}$ in the classical sense such that $\mathbb Z_{st}$ is its arborified version, i.e. $$
\mathbb Z_{st}=\mathbb X_{st}\circ\mathfrak a^\ll. $$ \end{definition} \textcolor{blue} { \begin{remark}\rm Any arborified rough path is then geometric by definition. The converse is true at the price of inflating the alphabet, as in the branched case \cite[Paragraph 4.2]{HaiKel2015}. Indeed, the Hopf algebra $\Cal H_{\smop{MKW}}^A$ is, as an algebra, the shuffle algebra of the set of $A$-decorated planar rooted trees. Any planarly branched rough path is then geometric provided this bigger alphabet is considered. \end{remark} }
\subsection{Planar contracting arborification} \label{sect:planar-arb-c}
We present a contracting version of planar arborification which has some interest in its own right, although it will not be directly used in the present paper. Suppose that the alphabet $A$ carries an Abelian semigroup structure $(a,b)\mapsto [a+b]$. The quasi-shuffle Hopf algebra is isomorphic to $\Cal H_{\joinrel{\,\scriptscriptstyle\amalg\hskip -2.5pt\amalg}\,}^A$ as coalgebra. The quasi-shuffle product is recursively defined by $\emptyset\!\joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\,\hskip -8.2pt\hbox{-}\hskip 4pt w=w\!\joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\,\hskip -8.2pt\hbox{-}\hskip 4pt\emptyset=w$ for any word $w\in A^*$ and: \begin{equation*}
av\!\joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\,\hskip -8.2pt\hbox{-}\hskip 4pt bw=a(v\!\joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\,\hskip -8.2pt\hbox{-}\hskip 4pt bw)+b(av\!\joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\,\hskip -8.2pt\hbox{-}\hskip 4pt w)+[a+b](v\!\joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\,\hskip -8.2pt\hbox{-}\hskip 4pt w) \end{equation*} for any letters $a,b\in A$ and words $v,w\in A^*$. For example, $a\!\joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\,\hskip -8.2pt\hbox{-}\hskip 4pt b=ab+ba+[a+b]$, and $ab\!\joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\,\hskip -8.2pt\hbox{-}\hskip 4pt c=abc+acb+cab+[a+c]b+a[b+c]$. It is well-known \cite{H00} that the quasi-shuffle product together with deconcatenation gives rise to a Hopf algebra $\Cal H_{\joinrel{\scriptscriptstyle\amalg\hskip -2.5pt\amalg}\,\hskip -7pt\hbox{-}\hskip 4pt}^A$ isomorphic to the shuffle Hopf algebra $\Cal H_{\joinrel{\,\scriptscriptstyle\amalg\hskip -2.5pt\amalg}\,}^A$.
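The recursion is immediate to implement. In the Python sketch below, an illustration only, the semigroup $(A,(a,b)\mapsto[a+b])$ is modelled by the integers under addition and a word is a tuple of integers.

```python
def quasi_shuffle(u, v):
    """Quasi-shuffle of two words, with multiplicity.  Words are tuples
    over the semigroup, here the integers under addition."""
    if not u:
        return [v]
    if not v:
        return [u]
    a, b = u[0], v[0]
    return ([(a,) + w for w in quasi_shuffle(u[1:], v)]             # a(v qsh bw)
            + [(b,) + w for w in quasi_shuffle(u, v[1:])]           # b(av qsh w)
            + [(a + b,) + w for w in quasi_shuffle(u[1:], v[1:])])  # [a+b](v qsh w)
```

With $a=1$, $b=2$, $c=4$, the call \texttt{quasi\_shuffle((1, 2), (4,))} returns the five terms of the second example above, the contracted letter $[a+c]$ appearing as $5$ and $[b+c]$ as $6$.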
The \textsl{planar contracting arborification map} $\frak a^c_{\ll}:\Cal H_{\smop{MKW}}^A\to\Cal H_{\joinrel{\scriptscriptstyle\amalg\hskip -2.5pt\amalg}\,\hskip -7pt\hbox{-}\hskip 4pt}^A$ sends any planar decorated forest to the sum of its linear extensions \textsl{including contraction terms}. It is defined for any degree $n$ planar $A$-decorated forest as follows: \begin{equation*}
\frak a^c_{\ll}(\tau,\varphi):=\sum_{r\ge 0}\ \sum_{\alpha:(V(\tau),\ll)\joinrel{\scalebox {0.8}{\hbox{$\nearrow$}}\hskip -2.54mm{\scalebox{0.8}{\raise 2pt\hbox{$\nearrow$}}}}
\{1,\ldots,n-r\}}\varphi\circ\alpha^{-1}(1)\cdots \varphi\circ\alpha^{-1}(n-r) \end{equation*} where the inner sum runs over the increasing surjections from the poset $\big(V(\tau),\ll)$ onto $\{1,\ldots,n-r\}$, i.e., surjective maps $\alpha$ such that $u\ll u'$ and $u\neq u'$ imply $\alpha(u)<\alpha(u')$ for any $u,u'\in V(\tau)$. It can happen that $\alpha^{-1}(j)$ contains several elements: in that case, $\varphi\circ \alpha^{-1}(j)$ is to be understood as the sum in $A$ of the elements $\varphi(u),\,u\in\alpha^{-1}(j)$. As an example, we have: \begin{equation*}
\frak a^c_{\ll}(\arbreadec ab\ \racine^c)
=bac,\hskip 12mm \frak a^c_{\ll}(\racine^c\arbreadec ab\;)=bca+cba+[b+c]a. \end{equation*}
This definition is directly inspired from the contracting arborification map $\frak a^c:\Cal H_{\smop{BCK}}^A\to\Cal H_{\joinrel{\scriptscriptstyle\amalg\hskip -2.5pt\amalg}\,\hskip -7pt\hbox{-}\hskip 4pt}^A$ which sends any (non-planar) decorated forest to the sum of its linear extensions including contraction terms \cite{EFM17,E92, FM17, F02}. It is defined for any degree $n$ non-planar $A$-decorated forest by: \begin{equation*}
\frak a^c(f,\varphi):=\sum_{r\ge 0}\ \sum_{\alpha:(V(f),<)\joinrel{\scalebox {0.8}{\hbox{$\nearrow$}}\hskip -2.54mm{\scalebox{0.8}{\raise 2pt\hbox{$\nearrow$}}}} \{1,\ldots,n-r\} }
\varphi\circ\alpha^{-1}(1)\cdots \varphi\circ\alpha^{-1}(n-r) \end{equation*} where the inner sum runs over the increasing surjections from the poset $\big(V(f),<)$ onto $\{1,\ldots,n-r\}$, and is a surjective Hopf algebra morphism from $\Cal H_{\smop{BCK}}^A$ onto $\Cal H^A_{\joinrel{\scriptscriptstyle\amalg\hskip -2.5pt\amalg}\,\hskip -7pt\hbox{-}\hskip 4pt}$.
An analogue of Theorem \ref{planar-arb} holds: \begin{theorem}\label{planar-arb-c} $\ $ \begin{enumerate}
\item The planar contracting arborification map can be recursively defined by $\frak a^c_{\ll}(\bm 1)=\bm 1$ and {\rm \begin{equation*}
\frak a^c_{\ll}(\tau'\times_a \tau'')=[\frak a^c_{\ll}(\tau')\!\joinrel{\!\scriptstyle\amalg\hskip -3.1pt\amalg}\,\hskip -8.2pt\hbox{-}\hskip 4pt\frak a^c_{\ll}(\tau'')]a. \end{equation*} } \item The planar contracting arborification map $\frak a^c_{\ll}$ is a surjective Hopf algebra morphism, combinatorial if the alphabet $A$ is finite, and the diagram below commutes. {\rm \diagramme{ \xymatrix{ \Cal H_{\smop{BCK}}^A \ar[rr]^\Omega \ar@{>>}[ddrr]_{\frak a^c} && \Cal H_{\smop{MKW}}^A\ar@{>>}[dd]^{\frak a_{\ll}^c}\\ &&\\ &&\Cal H_{\joinrel{\scriptscriptstyle\amalg\hskip -2.5pt\amalg}\,\hskip -7pt\hbox{-}\hskip 4pt}^A } } } \end{enumerate} \end{theorem}
\begin{proof} Entirely similar to the proofs of the analogous results on planar arborification in Paragraph \ref{sect:planar-arb}. Details are left to the reader. \end{proof}
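Assertion (1) of Theorem \ref{planar-arb-c} yields a direct implementation of $\frak a^c_{\ll}$. The following Python sketch is an illustration only, assuming integer labels (integer addition modelling the semigroup law $[a+b]$) and the nested-tuple encoding \texttt{(label, children)} for decorated planar trees.

```python
def quasi_shuffle(u, v):
    """Quasi-shuffle of two words (tuples over the integers under addition)."""
    if not u:
        return [v]
    if not v:
        return [u]
    a, b = u[0], v[0]
    return ([(a,) + w for w in quasi_shuffle(u[1:], v)]
            + [(b,) + w for w in quasi_shuffle(u, v[1:])]
            + [(a + b,) + w for w in quasi_shuffle(u[1:], v[1:])])

def c_arborify(forest):
    """Planar contracting arborification, via the recursion
    a^c(tau' x_a tau'') = [a^c(tau') qsh a^c(tau'')] a.
    A tree is (label, children), a forest is a tuple of trees."""
    if not forest:
        return [()]
    *front, (label, children) = forest      # forest = tau' x_a tau''
    words = []
    for w1 in c_arborify(tuple(front)):
        for w2 in c_arborify(children):
            words += [w + (label,) for w in quasi_shuffle(w1, w2)]
    return words
```

With $a=1$, $b=2$, $c=4$, the two example values of the contracting arborification above are recovered, the contracted letter $[b+c]$ appearing as $6$.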
\section{Rough differential equations on homogeneous spaces} \label{sec:RDEHom}
In this section, we prove the convergence of the formal solutions of the rough differential equation \eqref{eq:control1} under particular analyticity assumptions.
\subsection{Formal solutions of a rough differential equation on a homogeneous space}
Let $t\mapsto X_t:=(X_t^1,\ldots,X_t^d)$ be a differentiable path with values in $\mathbb R^d$. Let $A=\{a_1,\ldots,a_d\}$ be an alphabet with $d$ letters. The controlled differential equation we are looking at writes: \begin{equation} \label{rde-rappel}
dY_{st}=\sum_{i=1}^d \# f_i(Y_{st})\,dX_t^i \end{equation} with initial condition $Y_{ss}=y$. The unknown is a path $Y_s: t \mapsto Y_{st}$ in a homogeneous space $\Cal M$, with transitive action $(g,y)\mapsto g.y$ of a Lie group $G$ on it. The elements in $f:=\{f_i\}_{i=1}^d$ are smooth maps from $\Cal M$ into the Lie algebra $\frak g=\mop{Lie}(G)$, which in turn define smooth vector fields $y \mapsto \# f_i(y)$ on $\Cal M$: \begin{equation} \label{vectorfield-rappel}
\# f_i(y):=\frac{d}{dt}{\restr{t=0}}\exp \big(tf_i(y)\big).y \in T_y \Cal M. \end{equation} It has been explained in the Introduction how Equation \eqref{rde-rappel} is lifted to the following differential equation with unknown $\bm Y_{st}\in C^\infty\big(\mathcal M,\mathcal U(\mathfrak g)\big)[[h]]$ and step size $h=t-s$: \begin{equation} \label{rde-lifted-rappel}
d\bm Y_{st} = \sum_{i=1}^d \bm Y_{st}*f_i\,dX_t^i \end{equation} with initial condition $\bm Y_{ss}=\bm 1$. Recall that the $\ast$ product stands for the Grossman--Larson product in the post-associative algebra $C^\infty\big(\mathcal M,\mathcal U(\mathfrak g)\big)$. The formal solution of \eqref{rde-rappel} is recovered by \begin{equation} \label{recover-rde}
\psi(Y_{st})=(\#\bm Y_{st}.\psi)(y) \end{equation} for any test function $\psi\in C^\infty(\mathcal M)$. A further step in abstraction leads to the fundamental differential equation in the character group of the Hopf algebra $\mathcal H^A_{\smop{MKW}}$: \begin{equation} \label{rde-postlifted-rappel}
d\mathbb Y_{st} = \sum_{i=1}^d \mathbb Y_{st}*\racine_i\,dX_t^i \end{equation} with initial condition $\mathbb Y_{ss}=\bm 1$. The $\ast$ product now stands for the Grossman--Larson product in the completed free post-associative algebra $\wh{\mathcal D_A}$ generated by $A$. The solution of \eqref{rde-lifted-rappel} then is obtained by $\mathbf Y_{st}=\mathcal F_f(\mathbb Y_{st})$, where $\mathcal F_f$ is the $h$-adic completion (with $h=t-s$) of the unique post-associative algebra morphism from $\mathcal D_A$ to $C^\infty\big(\mathcal M,\mathcal U(\mathfrak g)\big)$ which sends $\racine_j$ to $f_j$. By using the integral formulation and Picard iteration, the solution of \eqref{rde-postlifted-rappel} is given by the word series expansion: \begin{equation} \label{expansion-word-rappel}
\mathbb Y_{st}=\sum_{\ell\ge 0}\sum_{w=a_{i_1}\cdots a_{i_\ell}\in A^*}\langle\mathbb X_{st},w\rangle \racine_{i_\ell}*\cdots*\racine_{i_1}. \end{equation}
\begin{theorem}[Planar arborification-coarborification transform]\label{planar-arbo-coarbo} The solution of \eqref{rde-postlifted-rappel} is given by the following expansion indexed by $A$-decorated planar rooted forests: \begin{equation} \label{arbo-coarbo-pl}
\mathbb Y_{st}=\sum_{\tau\in F^A_{\smop{pl}}}\langle \mathbb X_{st}\circ\mathfrak a^{\ll},\,\tau\rangle\tau. \end{equation} \end{theorem}
\begin{proof} For any $\overline\tau\in F_{\smop{pl}}$, let $\mathcal L(\overline\tau)$ be the set of linear extensions of $\overline\tau$, i.e., the set of total orders $\prec$ on $V(\overline\tau)$ compatible with the partial order $\ll$, i.e., such that $u\ll v\Rightarrow u\prec v$ for any $u,v\in V(\overline\tau)$. Now let $\tau=(\overline\tau,\alpha)$ be an $A$-decorated forest, and let $\tau_\prec$ be the word in $A^*$ obtained from $(\overline\tau,\alpha)$ by displaying the decorations of the vertices of $\overline\tau$ from left to right according to the total order $\prec$. We will use the notation $\mathcal L(\tau)$ instead of $\mathcal L(\overline\tau)$. It can be easily shown that the planar arborification admits the following explicit expression: \begin{equation*}
\mathfrak a^{\ll}(\tau)=\sum_{\prec\in\mathcal L(\tau)} \tau_\prec. \end{equation*} The following lemma is easily proven by induction on the length:
\begin{lemma}\label{lemma-iterated-gl} For any word $w=a_{i_1}\cdots a_{i_n}\in A^*$, we have: \begin{equation} \label{iterated-gl}
\racine_{i_n}*\cdots *\racine_{i_1}
=\sum_{\overline\tau\in F^{[n]}_{\smop{pl}}}\ \sum_{\prec\in\mathcal L(\overline\tau)}(\overline\tau,\alpha_\prec), \end{equation} where the outer sum runs over the bare planar forests with $n$ vertices, and where $\alpha_\prec:V(\overline\tau)\to A$ is the decoration map which sends the $j$-th vertex to $a_{i_j}$ according to $\prec$. \end{lemma}
\noindent The total number of terms is $n!$. For example we have $$
\racine_j*\racine_i=\racine_j\racine_i+\arbrea_i^j,\hskip 12mm
\racine_k*\racine_j*\racine_i=\racine_k\racine_j\racine_i+
\arbrea_j^k\racine_i+
\arbrebb_i^{\,j\hskip -7mm k}+
{\arbreba_i^j}^{\hskip -4.5pt k}+
\racine_j\arbrea_i^k+
\racine_k\arbrea_i^j. $$ We compute, using Lemma \ref{lemma-iterated-gl}: \begin{eqnarray*}
\mathbb Y_{st}
&=&\sum_{\ell\ge 0}\sum_{w=a_{i_1}\cdots a_{i_\ell}\in A^*}\langle\mathbb X_{st},w\rangle \racine_{i_\ell}*\cdots*\racine_{i_1}\\
&=&\sum_{\ell\ge 0}\sum_{w=a_{i_1}\cdots a_{i_\ell}\in A^*}\langle\mathbb X_{st},w\rangle\sum_{\overline\tau \in F^{[\ell]}_{\smop{pl}}}\ \sum_{\prec\in\mathcal L(\overline\tau)}(\overline\tau,\alpha_\prec)\\
&=&\sum_{\tau \in F^A_{\smop{pl}}}\sum_{\prec\in\mathcal L(\tau)}\langle\mathbb X_{st},\tau_\prec\rangle\,\tau\\
&=&\sum_{\tau \in F^A_{\smop{pl}}}\langle\mathbb X_{st}\circ\mathfrak a^{\ll},\tau\rangle\tau. \end{eqnarray*} \end{proof} \noindent Theorem \ref{planar-arbo-coarbo} calls for the following definition.
\begin{definition} Let $\gamma\in ]0,1]$ and let $t\mapsto X_t:=(X_t^1,\ldots,X_t^d)$ be a $\gamma$-H\" older continuous path with values in $\mathbb R^d$. A formal solution of Equation \eqref{rde-rappel} driven by $X$ is defined by \begin{equation}
Y_{st}=\#\mathcal F_f(\mathbb Y_{st})(y) \end{equation} where $\mathbb Y_{st}$ is given by the expansion \begin{equation}\label{arbo-expansion}
\mathbb Y_{st}=\sum_{\tau\in F^A_{\smop{pl}}}\langle \wt{\mathbb X}_{st},\,\tau\rangle\tau \end{equation} and where $\wt {\mathbb X}_{st}$ is any $\gamma$-regular planarly branched rough path such that $\langle \wt {\mathbb X}_{st},\racine_j\rangle=X_t^j-X_s^j$ for any $j\in\{1,\ldots, d\}$. \end{definition} We will freely identify the planarly branched rough path $\wt{\mathbb X}_{st}$ with the expansion $\mathbb Y_{st}$ as grouplike elements of the dual of $\mathcal H_{\smop{MKW}}^A$.
\subsection{Cauchy estimates} \label{ssect:chauchy}
We borrow material from \cite{E92}, see also \cite{FM17, CMP}, adapting it to general homogeneous spaces. For any compact neighbourhood $\mathcal U$ of the origin in $\mathbb C^n$, let $\mathcal A_{\mathcal U}$ be the space of analytic germs defined on $\mathcal U$. Precisely, \begin{equation*}
\mathcal A_{\mathcal U}=\{\varphi,\, \|\varphi\|_{\mathcal U}<+\infty\}, \end{equation*} with the norm \begin{equation*}
\|\varphi\|_{\mathcal U}:=\mop{sup}_{y\in\mathcal U}|\varphi(y)| \end{equation*} making $\mathcal A_{\mathcal U}$ a Banach space. Now let $\mathcal V$ be another compact neighbourhood of the origin such that $\mathcal V\subset\mathring{\mathcal U}$. We consider the operator norm defined for any linear operator $P:\mathcal A_{\mathcal U}\to\mathcal A_{\mathcal U}$ by \begin{equation*}
\|P\|_{\mathcal U,\mathcal V}
=\sup_{\varphi\in\mathcal A_{\mathcal U}-\{0\}}\frac{\|P\varphi\|_{\mathcal V}}{\|\varphi\|_{\mathcal U}}. \end{equation*}
\noindent The following two lemmas are straightforward.
\begin{lemma}\label{easy-lemma-one} Let $0\in\mathcal V\subset\mathring{\mathcal U}\subset\mathcal U$ be two compact neighbourhoods of the origin, and let $f\in\mathcal A_{\mathcal U}$. Let $P:\mathcal A_{\mathcal U}\to\mathcal A_{\mathcal U}$ be a linear operator. Denoting by $f:\mathcal A_{\mathcal U}\to\mathcal A_{\mathcal U}$ the pointwise multiplication operator by $f$, then the following estimate holds: \begin{equation*}
\|fP\|_{\mathcal U,\mathcal V}\le \|f\|_{\mathcal V}\|P\|_{\mathcal U,\mathcal V}. \end{equation*} \end{lemma}
\begin{lemma}\label{easy-lemma-two} Let $0\in\mathcal V\subset\mathring{\mathcal W}\subset\mathcal W\subset\mathring{\mathcal U}\subset\mathcal U$ be three compact neighbourhoods of the origin, and let $P,Q:\mathcal A_{\mathcal U}\to\mathcal A_{\mathcal U}$ be two linear operators. Then we have: \begin{equation*}
\|P\circ Q\|_{\mathcal U,\mathcal V}\le \|P\|_{\mathcal W,\mathcal V}\|Q\|_{\mathcal U,\mathcal W}. \end{equation*} \end{lemma}
\begin{proposition}\label{estimate-vf} Let $0\in\mathcal V\subset\mathring{\mathcal U}\subset\mathcal U$ be two compact neighbourhoods of the origin, and let $r>0$ be such that the $n$-fold product of open disks of radius $r$ centered at $y$ is included in $\mathcal U$ for any $y\in\mathcal V$. Let $f=\sum_{\alpha=1}^n f^\alpha\partial_\alpha$ be a vector field on $\mathcal U$ with analytic coefficients, and let us define \begin{equation*}
\|f\|_{\mathcal V}:=\mop{sup}_{\alpha=1,\ldots,n}\|f^\alpha\|_{\mathcal V}. \end{equation*} Then we have: \begin{equation*}
\|f\|_{\mathcal U,\mathcal V}\le \frac{n\|f\|_{\mathcal V}}{r}. \end{equation*} \end{proposition}
\begin{proof} This is an immediate application of Lemma \ref{easy-lemma-one} and the Cauchy estimate for the partial derivation operator $\partial_\alpha$, which is immediately derived from the Cauchy integral formula \begin{equation*}
\varphi(y)=\varphi(y_1,\ldots,y_n)=\frac{1}{2i\pi}\int_{C_\alpha}\frac{\varphi(y_1,\ldots, y_{\alpha-1},
\eta_\alpha, y_{\alpha+1},\ldots,y_n)}{\eta_\alpha-y_\alpha}\,d\eta_\alpha, \end{equation*} valid for any $\varphi\in\mathcal A_{\mathcal U}$ and for any $y\in\mathcal V$, where $C_\alpha$ is the circle of radius $r$ in $\mathbb C$ centered at $y_\alpha$, counterclockwise oriented. \end{proof}
\begin{corollary}\label{iterate-estimates} Let $0\in\mathcal V\subset\mathring{\mathcal U}\subset\mathcal U$ be two compact neighbourhoods of the origin, and let $r>0$ be such that the open polydisk of radius $r$ centered at $y$ is included in $\mathcal U$ for any $y\in\mathcal V$. Let $f=\{f_1,\ldots,f_k\}$ be a finite collection of vector fields $$
f_j=\sum_{\alpha=1}^n f^\alpha_j\partial_\alpha $$ on $\mathcal U$ with analytic coefficients, and let us define \begin{equation*}
\|f\|_{\mathcal V}:=\mop{sup}_{\alpha=1,\ldots,n \atop j=1,\ldots, k}\|f_j^\alpha\|_{\mathcal V}, \end{equation*}
and $\|f\|_{\mathcal U}$ similarly. Then we have: \begin{equation*}
\|f_1\circ\cdots\circ f_k\|_{\mathcal U,\mathcal V}\le \left(\frac{n\|f\|_{\mathcal U}}{r}\right)^kk^k. \end{equation*} \end{corollary}
\begin{proof} The case $k=1$ is covered by Proposition \ref{estimate-vf}. For $k \ge 2$, we define intermediate compact neighbourhoods $$
\mathcal V=\mathcal V_0\subset\mathcal V_1\subset\cdots\subset\mathcal V_k=\mathcal U $$
as follows: $\mathcal V_j$ is the closure of the union of the polydisks of radius $r/k$ centered at any point of $\mathcal V_{j-1}$, for any $j\in\{1,\ldots, k-1\}$. The result follows then from Proposition \ref{estimate-vf} and the $k$-fold iteration of Lemma \ref{easy-lemma-two} associated with these data, as well as from the obvious inequality $\|f\|_{\mathcal V_j}\le \|f\|_{\mathcal U}$ for any $j=1,\ldots, k$. \end{proof}
\subsection{Convergence of a formal solution} \label{ssect:convergence}
We address the question of whether the formal diffeomorphism $\mathbf Y_{st}:=\#\mathcal F_f(\mathbb Y_{st})$ converges, at least for $|t-s|$ sufficiently small. Any homogeneous space $\mathcal M$ under the action of a finite-dimensional Lie group has a canonical analytic structure. We denote by $C^\omega(\mathcal M,V)$ the space of weakly analytic maps from $\mathcal M$ into a vector space $V$. We suppose that the data $f=\{f_j\}_{j=1}^d$ are analytic maps from $\mathcal M$ to $\mathfrak g$, thus yielding analytic vector fields $\#f_j$ on $\mathcal M$. Choosing $y\in\mathcal M$ and two compact chart neighbourhoods $\mathcal U,\mathcal V$ such that $y\in\mathcal V\subset \mathring{\mathcal U}$, we have to prove that the operator norm $\|\mathbf Y_{st}\|_{\mathcal U,\mathcal V}$ is finite for small $h=t-s$.
\noindent Choosing a basis $(E_\alpha)_{\alpha=1,\ldots,N}$ of the Lie algebra $\mathfrak g$, we have: \begin{equation}\label{fj-ebeta}
f_j=\sum_{\beta=1}^N\wt f_j^\beta E_\beta,
\hskip 12mm
\#E_\beta=\sum_{\alpha=1}^n \varepsilon^\alpha_\beta\partial_\alpha, \end{equation} where the coefficients $\wt f_j^\beta$ and $\varepsilon^\alpha_\beta$ are analytic on $\mathcal U$, and where \begin{equation*}
\|\wt f\|_{\mathcal V}:=\mop{sup}_{j=1,\ldots,d \atop \beta=1,\ldots,N}\|\wt f_j^\beta\|_{\mathcal V}. \end{equation*}
\begin{theorem} There exists a positive constant $C_{\mathcal U,\mathcal V}$ such that for any $A$-decorated rooted planar forest $\sigma=\sigma_1\cdots\sigma_k$ with connected components $\sigma_j=B_+^{a_j}(\tau_j)$, the following estimates hold: \begin{equation} \label{main-estimate-one}
\|\wt f_\sigma^{\bm\beta}\|_{\mathcal V}
\le \tau_1!\cdots\tau_k!C_{\mathcal U,\mathcal V}^{|\sigma|-k}\|\wt f\|_{\mathcal U}^{|\sigma|}, \end{equation} where the coefficients $\wt f_\sigma^{\bm\beta}\in C^\omega(\mathcal U)$ are considered with respect to the Poincar\'e--Birkhoff--Witt basis: \begin{equation*}
\mathcal F_{\sigma}=\sum_{\bm{\beta}\in\{1,\ldots,N\}^k \atop \beta_1\le\cdots
\le\beta_k}\wt f_{\sigma}^{\bm\beta}E_{\bm\beta}, \end{equation*} and \begin{equation}\label{main-estimate-two}
\|\#\mathcal F_{\sigma} \|_{\mathcal U,\mathcal V}
\le \sigma!C_{\mathcal U,\mathcal V}^{|\sigma|}\|\wt f\|_{\mathcal U}^{|\sigma|}. \end{equation} \end{theorem}
\begin{proof}
Let us first treat the case $|\sigma|=1$, i.e., $\sigma=\racine_j,\,j=1,\ldots,d$. Estimate \eqref{main-estimate-one} holds by definition of $\|\wt f\|_{\mathcal V}$. Applying Proposition \ref{estimate-vf} we have: \begin{equation} \label{estimate-sharp-ebeta}
\|\#E_\beta\|_{\mathcal U,\mathcal V}\le\frac{n\|\varepsilon\|_{\mathcal V}}{r}, \end{equation} where $r>0$ is chosen so that any polydisk of radius $r$ centered at a point of $\mathcal V$ is included in $\mathcal U$, and where \begin{equation*}
\|\varepsilon\|_{\mathcal V}:=\mop{sup}_{\alpha=1,\ldots,n\atop \beta=1,\ldots,N}\|\varepsilon^\alpha_\beta\|_{\mathcal V}. \end{equation*} Applying Estimate \eqref{estimate-sharp-ebeta} and Lemma \ref{easy-lemma-one} we get the estimates: \begin{equation} \label{estimates-sharp-fj}
\|\#f_j\|_{\mathcal U,\mathcal V}\le \frac{nN\|\wt f\|_{\mathcal V}\|\varepsilon\|_{\mathcal V}}{r}. \end{equation} We introduce the constant \begin{equation}\label{constant-cuv}
C_{\mathcal U,\mathcal V}:=e\frac{nN\|\varepsilon\|_{\mathcal U}}{r}, \end{equation} so that we immediately get \begin{equation}\label{estimate-degree-one}
\|\#f_j\|_{\mathcal U,\mathcal V}\le C_{\mathcal U,\mathcal V}\|\wt f\|_{\mathcal V}\le C_{\mathcal U,\mathcal V}\|\wt f\|_{\mathcal U}, \end{equation} which is estimate \eqref{main-estimate-two}. Let us now proceed by induction on the degree. The necessity of the extra Euler prefactor $e=2.71828\ldots$ in \eqref{constant-cuv} will appear in the proof, as a consequence of the inequality $k^k\le e^kk!$ coming from Stirling's formula. For any decorated planar forest $\sigma=\sigma_1\cdots\sigma_k$ with connected components $\sigma_j$, we can write its decomposition in the Poincar\'e--Birkhoff--Witt basis: \begin{equation}\label{pbw}
\mathcal F_{\sigma}=\sum_{\bm{\beta}\in\{1,\ldots,N\}^k\atop\beta_1\le\cdots\le\beta_k}
\wt f_{\sigma}^{\bm\beta}E_{\bm\beta} \end{equation} with $\wt f_{\sigma}^{\bm\beta}=\wt f_{\sigma_1}^{\beta_1}\cdots \wt f_{\sigma_k}^{\beta_k}$ and $E_{\bm\beta}=E_{\beta_1}\cdots E_{\beta_k}\in\mathcal U(\mathfrak g)$. Two cases occur for higher-degree forests:
\begin{enumerate} \item \textsl{First case:} the decorated forest $\sigma$ is not a tree, i.e., $k\ge 2$. In this case we have, using the induction hypothesis on each connected component, \begin{eqnarray*}
\|\wt f_{\sigma}^{\bm\beta}\|_{\mathcal V}&\le&\prod_{j=1}^k\|\wt f_{\sigma_j}^{\beta_j}\|_{\mathcal U}\\
&\le&\sigma_1!\cdots\sigma_k!C_{\mathcal U,\mathcal V}^{|\sigma|-k}\|\wt f\|_{\mathcal U}^{|\sigma|}. \end{eqnarray*} From decomposition \eqref{pbw} and Proposition \ref{iterate-estimates} we then get: \begin{eqnarray*}
\|\#\mathcal F_{\sigma}\|_{\mathcal U,\mathcal V}
&\le& \sigma_1!\cdots\sigma_k!C_{\mathcal U,\mathcal V}^{|\sigma|-k}\|\wt f\|_{\mathcal U}^{|\sigma|} \sum_{\bm{\beta}\in\{1,\ldots,N\}^k\atop\beta_1\le\cdots\le\beta_k}\|E_{\bm\beta}\|_{\mathcal U,\mathcal V}\\
&\le& N^k \sigma_1!\cdots \sigma_k!C_{\mathcal U,\mathcal V}^{|\sigma|-k}\|\wt f\|_{\mathcal U}^{|\sigma|}\left(\frac{n\|\varepsilon\|_{\mathcal U}}{r}\right)^kk^k\\
&\le & k!\sigma_1!\cdots\sigma_k! C_{\mathcal U,\mathcal V}^{|\sigma|}\|\wt f\|_{\mathcal U}^{|\sigma|}\\
&\le &\sigma!C_{\mathcal U,\mathcal V}^{|\sigma|}\|\wt f\|_{\mathcal U}^{|\sigma|}. \end{eqnarray*} The last inequality follows from the recursive formula \eqref{rec-fac-two} for the planar factorial.
\item \textsl{Second case:} $k=1$, i.e., the decorated forest is a tree $\sigma=B^+_{a_j}(\tau)$. From the definition \begin{equation}
\mathcal F_{\sigma}=\mathcal F_{\tau}\rhd f_j \end{equation} we get \begin{equation}
\wt f_{\sigma}^\beta=(\#\mathcal F_{\tau}).\wt f_j^\beta \end{equation} for any $\beta\in\{1,\ldots, N\}$. Applying the induction hypothesis to $\tau$ and Lemma \ref{easy-lemma-one}, we get: \begin{eqnarray*}
\|\wt f_{\sigma}^\beta\|_{\mathcal U}
&\le&\|\#\mathcal F_{\tau}\|_{\mathcal U,\mathcal V}\,\|\wt f_j^\beta\|_{\mathcal U}\\
&\le& C_{\mathcal U,\mathcal V}^{|\tau|}\tau!\|\wt f\|_{\mathcal U}^{|\tau|+1}\\
&\le& C_{\mathcal U,\mathcal V}^{|\sigma|-1}\tau!\|\wt f\|_{\mathcal U}^{|\sigma|}. \end{eqnarray*} Finally, from \eqref{pbw} in the special case $k=1$ we derive: \begin{eqnarray*}
\|\#\mathcal F_{\sigma}\|_{\mathcal U,\mathcal V}
&\le& \tau!C_{\mathcal U,\mathcal V}^{|\sigma|-1}\|\wt f\|_{\mathcal U}^{|\sigma|} \sum_{\beta\in\{1,\ldots,N\}}\|E_{\beta}\|_{\mathcal U,\mathcal V}\\
&\le& N\tau!C_{\mathcal U,\mathcal V}^{|\sigma|-1}\|\wt f\|_{\mathcal U}^{|\sigma|}
\left(\frac{n\|\varepsilon\|_{\mathcal U}}{r}\right)\\
&\le & \tau! C_{\mathcal U,\mathcal V}^{|\sigma|}\|\wt f\|_{\mathcal U}^{|\sigma|}\\
&\le &\sigma!C_{\mathcal U,\mathcal V}^{|\sigma|}\|\wt f\|_{\mathcal U}^{|\sigma|}. \end{eqnarray*}
\end{enumerate} \end{proof}
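The elementary inequality $k^k\le e^kk!$ used to absorb the factor $k^k$ into $C_{\mathcal U,\mathcal V}^k$ can be checked numerically. The following sketch (purely illustrative) verifies the bound and illustrates that, by Stirling's formula, the ratio $k^k/(e^kk!)$ in fact decays like $1/\sqrt{2\pi k}$:

```python
import math

# e^k = sum_j k^j / j! >= k^k / k!, hence k^k <= e^k * k!.
for k in range(1, 40):
    ratio = k ** k / (math.e ** k * math.factorial(k))
    assert ratio <= 1.0
    # Stirling: ratio ~ 1 / sqrt(2 * pi * k), so the bound is far from tight.
```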
\begin{corollary}
The series $\|\mathbf Y_{st}\|_{\mathcal U,\mathcal V}$ is dominated by a series of Gevrey type $1-\gamma$ with respect to the variable $|t-s|^\gamma$. \end{corollary} \begin{proof} Recall that a power series $\sum_{k\ge 0} b_kx^k$ is of Gevrey type $\beta\ge 0$ if and only if there exists a constant $C>0$ such that \begin{equation}
|b_k|\le C^k(k!)^\beta. \end{equation} The series $\mathbf Y_{st}$ is given by $\sum_{k\ge 0}a_k$, with \begin{equation*}
a_k=\sum_{\tau\in F^A_{\smop{pl},k}}\langle \wt{\mathbb X}_{st},\,\tau\rangle\#\mathcal F_{\tau}. \end{equation*} We compute, using estimates \eqref{estimate-gamma-rough}, \eqref{estimate-qgamma-general}, \eqref{main-estimate-two}, and the bound $(4d)^k$ on the number of planar $A$-decorated rooted forests of degree $k$: \begin{eqnarray*}
\|a_k\|_{\mathcal U,\mathcal V}
&=&\left\|\sum_{\tau\in F^A_{\smop{pl},k}}\langle \wt{\mathbb X}_{st},\,\tau\rangle\#\mathcal F_{\tau}\right\|_{\mathcal U,\mathcal V}\\
&\le&\sum_{ \tau\in F^A_{\smop{pl},k}}\vert\langle \wt{\mathbb X}_{st},\, \tau\rangle\vert.\|\#\mathcal F_{ \tau}\|_{\mathcal U,\mathcal V}\\
&\le &\sum_{ \tau\in F^A_{\smop{pl},k}}c^{|\tau|}q_\gamma(\tau)\|\#\mathcal F_\tau\|_{\mathcal U,\mathcal V}\\
&\le& \sum_{ \tau\in F^A_{\smop{pl},k}} c^{|\tau|}C_\gamma^{|\tau|}\frac{(\tau!)^{1-\gamma}}{ \tau !} \tau!|t-s|^{\gamma |\tau|}C_{\mathcal U,\mathcal V}^{|\tau|}\|\wt f\|_{\mathcal V}^{|\tau|}\\
&\le& \left(4dcC_\gamma C_{\mathcal U,\mathcal V}\|\wt f\|_{\mathcal V}\right)^{k}(k!)^{1-\gamma}|t-s|^{\gamma k}. \end{eqnarray*} \end{proof}
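To see what the Gevrey bound just obtained means in practice, the following sketch (with a hypothetical value standing in for the constant $4dcC_\gamma C_{\mathcal U,\mathcal V}\|\wt f\|_{\mathcal V}$) compares the majorant terms $C^k(k!)^{1-\gamma}|t-s|^{\gamma k}$ for $\gamma=1$, where the series is geometric and sums for $C|t-s|<1$, and for $\gamma=1/2$, where the factorial growth eventually dominates any fixed power of $|t-s|$:

```python
import math

def majorant_terms(C, h, gamma, K):
    # Terms C^k * (k!)^(1 - gamma) * h^(gamma * k) dominating ||a_k||_{U,V};
    # C stands in for 4*d*c*C_gamma*C_{U,V}*||f|| (hypothetical value).
    return [(C * h ** gamma) ** k * math.factorial(k) ** (1.0 - gamma)
            for k in range(K)]

# gamma = 1 (Lipschitz path): Gevrey type 0, geometric series, sums for C*h < 1.
geo = majorant_terms(C=2.0, h=0.25, gamma=1.0, K=60)
partial = sum(geo)                     # close to 1 / (1 - 0.5) = 2

# gamma = 1/2: Gevrey type 1/2, terms eventually blow up for any fixed h > 0.
gev = majorant_terms(C=2.0, h=0.25, gamma=0.5, K=60)
```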
\ignore{ Let $G_{\mathcal H}$ be the set of characters of the algebra $\mathcal H=\mathcal H^A_{\smop{MKW}}$, and let $\mathfrak g_{\mathcal H}$ be the set of infinitesimal characters of $\mathcal H$. There are two pro-unipotent group laws on $G_{\mathcal H}$, respectively given by the Grossman--Larson product $*$ (dual of the admissible cut coproduct) and the concatenation of forests denoted by a dot $\cdot$ (dual of the deconcatenation coproduct), corresponding to two Lie algebra structures $[\![-,-]\!]$ and $[-,-]$ on $\mathfrak g_{\mathcal H}$. The two following diagrams commute: $$
\xymatrix{\mathfrak g_{\mathcal H}\ar[r]^{\exp^*}\ar[d]^{\mathcal F} &G_{\mathcal H}\ar[d]^{\mathcal F}\\ C^{\omega}(\mathcal M,\mathfrak g)[[h]]\ar[r]^{\exp^*} &C^{\omega}(\mathcal M,G)[[h]]} \hskip 12mm
\xymatrix{\mathfrak g_{\mathcal H}\ar[r]^{\exp^\cdot}\ar[d]^{\mathcal F} &G_{\mathcal H}\ar[d]^{\mathcal F}\\ C^{\omega}(\mathcal M,\mathfrak g)[[h]]\ar[r]^{\exp^\cdot} &C^{\omega}(\mathcal M,G)[[h]]} $$ On the bottom lines, $\exp^*$ is the Grossman--Larson exponential reflecting the (formal) flows of vector fields, whereas $\exp^\cdot$ is the pointwise exponential from $\mathfrak g$ to $G$. The righthand side diagram will be more manageable for our purposes: indeed, owing to the fact that the Eulerian idempotent $E=\log^\cdot(\mop{Id})$ is the projection defined by $E(\tau)=\tau$ if $\tau$ is a tree and $E(\tau)=0$ if $\tau=\mathbf 1$ or $\tau$ is a non-connected forest, we have \begin{eqnarray*}
\log^\cdot\mathbb Y_{st}
&=&\mathbb Y_{st}\circ E\\
&=&\sum_{\tau\in\mathcal T^A_{\smop{pl}}}\langle \wt{\mathbb X}_{st},\,\tau\rangle\tau. \end{eqnarray*} }
\begin{corollary}\label{Taylor-cv}
In the case when the driving path $X$ is Lipschitz, i.e., if $\gamma=1$, the norm $\|\mathbf Y_{st}\|_{\mathcal U,\mathcal V}$ is finite for $h=t-s$ small enough. \end{corollary}
\section*{Appendix: The sewing lemma} \label{sect:couture}
Let $S,T$ be two real numbers with $S<T$. A map $\Phi$ from $[S,T] \times [S,T]$ into a vector space $B$ is \textsl{additive} if it satisfies the Chasles relation $\Phi(s,t)=\Phi(s,u)+\Phi(u,t)$ for any $s,u,t\in[S,T]$. In that case there obviously exists a map $\varphi:[S,T]\to B$, unique up to an additive constant, such that $\Phi(s,t)=\varphi(t)-\varphi(s)$. Indeed, choose an arbitrary origin $o \in [S,T]$ and set $\varphi(t):=\Phi(o,t)$.
Loosely speaking, the sewing lemma stipulates that, under an appropriate completeness assumption on the vector space $B$, a {\sl{nearly}} additive map $(s,t)\mapsto\mu(s,t)$ is {\sl{nearly}} given by a difference $\varphi(t)-\varphi(s)$, in the sense that if $\mu(s,t)-\mu(s,u)-\mu(u,t)$ is small, then there is a unique $\varphi$, defined up to an additive constant, such that $\mu(s,t)-\varphi(t)+\varphi(s)$ is small. In view of the importance of this result, we give an account of it in the refined version given by Gubinelli, together with a detailed proof adapted from \cite{FP2006}. For the original proof, see \cite[Appendix A1]{Gubi2004}.
\begin{proposition}\cite[Proposition 1]{Gubi2004}\label{prop:fdpsewing} Let $\mu$ be a continuous function on a square $[S,T] \times [S,T]$ with values in a Banach space $B$, and let $\varepsilon>0$. Suppose that there exist a positive integer $n$ and two collections $a_i,b_i\ge 0$ indexed by $i\in\{1,\ldots,n\}$, with $a_i+b_i=1+\varepsilon$, such that $\mu$ satisfies: \begin{equation} \label{fdpmu}
\vert\mu(s,t)-\mu(s,u)-\mu(u,t)\vert\le \sum_{i=1}^nC_i\vert t-u\vert^{a_i}\vert u-s\vert^{b_i} \end{equation} for any $s,t,u\in[S,T]$ with $s\le u\le t$ or $t\le u\le s$, where the $C_i$'s are positive constants. Then there exists a function $\varphi \colon [S,T] \to B$, unique up to an additive constant, such that: \begin{equation}\label{fdp-sewing}
\vert\varphi(t)-\varphi(s)-\mu(s,t)\vert\le C'\vert t-s\vert^{1+\varepsilon}, \end{equation} where the best constant in \eqref{fdp-sewing} is $$
C':=\frac{1}{2^{1+\varepsilon}-2}\sum_{i=1}^n C_i. $$ \end{proposition}
The proof, adapted from reference \cite{FP2006}, is based on dyadic decompositions of intervals. A sequence $(\mu_n)_{n\ge 0}$ of continuous maps from $[S,T] \times [S,T]$ into $B$ is defined by $\mu_0=\mu$ and \begin{equation} \label{mun}
\mu_n(s,t):=\sum_{i=0}^{2^n-1}\mu(t_i,t_{i+1}) \end{equation} with $t_i=s+i(t-s)2^{-n}$. Denoting by $C$ the sum $C_1+\cdots+ C_n$, the estimate $$
\vert\mu_{n+1}(s,t)-\mu_n(s,t)\vert\le C2^{-n\varepsilon -1-\varepsilon}\vert t-s\vert^{1+\varepsilon} $$ holds, which is easily seen by applying \eqref{fdpmu} to each of the $2^n$ intervals in \eqref{mun}. Hence the sequence $(\mu_n)_{n\ge 0}$ is Cauchy in the complete metric space $\Cal C([S,T]^2,B)$ of continuous maps from $[S,T] \times [S,T]$ into $B$ endowed with the uniform convergence norm: $$
\|f\|:=\mop{sup}_{(s,t)\in[S,T]^2}\|f(s,t)\|_B, $$ and thus converges uniformly to a limit $\Phi$, which moreover verifies: \begin{equation} \label{est-phi}
\vert\Phi(s,t)-\mu(s,t)\vert\le 2^{-1-\varepsilon}C\vert t-s\vert^{1+\varepsilon}
\sum_{n\ge 0}2^{-n\varepsilon}=C\vert t-s\vert^{1+\varepsilon}\frac{1}{2^{1+\varepsilon}-2}. \end{equation}
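The dyadic scheme \eqref{mun} is easy to experiment with. The sketch below uses a toy nearly additive map (our choice, not from the text): $\mu(s,t)=\sin t-\sin s+(t-s)^2$, which satisfies \eqref{fdpmu} with $n=1$, $a_1=b_1=1$, $\varepsilon=1$ and $C_1=2$, since $\mu(s,t)-\mu(s,u)-\mu(u,t)=2(u-s)(t-u)$:

```python
import math

def mu(s, t):
    # Toy "nearly additive" map: additive part sin(t) - sin(s), plus a
    # non-additive perturbation (t - s)^2, so the hypothesis of the
    # sewing lemma holds with C_1 = 2, a_1 = b_1 = 1 and epsilon = 1.
    return math.sin(t) - math.sin(s) + (t - s) ** 2

def mu_n(s, t, n):
    # Dyadic refinement: sum of mu over 2^n equal subintervals of [s, t].
    pts = [s + i * (t - s) / 2 ** n for i in range(2 ** n + 1)]
    return sum(mu(pts[i], pts[i + 1]) for i in range(2 ** n))

s, t = 0.0, 1.0
approx = mu_n(s, t, 18)              # approximates Phi(s, t) = sin(t) - sin(s)
exact = math.sin(t) - math.sin(s)
```

Here $\mu_n(s,t)=\sin t-\sin s+(t-s)^2\,2^{-n}$, so the limit is $\Phi(s,t)=\sin t-\sin s$ and $|\Phi(s,t)-\mu(s,t)|=(t-s)^2$, which saturates the bound \eqref{fdp-sewing}: for this example $C'=2/(2^{2}-2)=1$.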
\begin{lemma} The limit $\Phi$ is additive, that is, it satisfies $$
\Phi(s,t)=\Phi(s,u)+\Phi(u,t) $$ for any $s,u,t\in[S,T]$. \end{lemma}
\begin{proof} From $\mu_{n+1}(s,t)=\mu_n\big(s,(s+t)/2\big)+\mu_n\big((s+t)/2,t\big)$ we get that $\Phi$ is \textsl {semi-additive}, i.e., it satisfies $$
\Phi(s,t)=\Phi\big(s,(s+t)/2\big)+\Phi\big((s+t)/2,t\big) $$ for any $s,t\in[S,T]$. Moreover, $\Phi$ is the unique semi-additive map satisfying estimates \eqref{est-phi}. Indeed, if $\Psi$ is another one, then \begin{eqnarray*}
\vert (\Phi-\Psi)(s,t)\vert
&=&\left\vert\sum_{i=0}^{2^n-1}(\Phi-\Psi)(t_i,t_{i+1})\right\vert\\
&\le&2C'\sum_{i=0}^{2^n-1}\vert t_{i+1}-t_i\vert^{1+\varepsilon}\\
&\le& 2C'\vert t-s\vert^{1+\varepsilon}\, 2^{-n\varepsilon} \end{eqnarray*} with $C'=C/(2^{1+\varepsilon}-2)$. Hence $\Psi=\Phi$ by letting $n$ go to infinity. Now, if $r$ is any positive integer, then the map $\Psi_r$ defined by $$
\Psi_r(s,t)=\sum_{j=0}^{r-1}\Phi(t_j,t_{j+1}), $$ with $t_j=s+j(t-s)/r$, is also semi-additive, hence $\Psi_r=\Phi$. From this we easily get $$
\Phi(s,t)=\Phi(s,u)+\Phi(u,t) $$ for any rational barycenter $u$ of $s$ and $t$, i.e., such that $u=as+(1-a)t$ with $a\in[0,1]\cap\mathbb Q$. Additivity of $\Phi$ follows from continuity. \end{proof}
The proof of Proposition \ref{prop:fdpsewing} follows by choosing $\varphi(t):=\Phi(o,t)$ for any fixed but arbitrary $o\in[S,T]$. Uniqueness of $\varphi$ up to an additive constant follows immediately from the uniqueness of the additive function $\Phi$ satisfying estimate \eqref{est-phi}.
\end{document} |
\begin{document}
\title{On the estimation of variance parameters in non-standard generalised linear mixed models: Application to penalised smoothing \footnote{This is a pre-print of an article published in \textit{Statistics and Computing}. The final authenticated version is available online at: \url{https://doi.org/10.1007/s11222-018-9818-2}.}} \author{Mar\'ia Xos\'e Rodr\'iguez-\'Alvarez$^{1,2}$, Maria Durban$^{3}$, Dae-Jin Lee$^{1}$, Paul H. C. Eilers$^{4}$\\\\
\small{$^{1}$ BCAM - Basque Center for Applied Mathematics}\\
\small{Alameda de Mazarredo, 14. E-48009 Bilbao, Basque Country, Spain}\\
\small{\texttt{[email protected]}}\\
\small{$^{2}$ IKERBASQUE, Basque Foundation for Science, Bilbao, Spain}\\
\small{$^{3}$ Department of Statistics and Econometrics, Universidad Carlos III de Madrid, Legan\'es, Spain}\\
\small{$^{4}$ Erasmus University Medical Centre, Rotterdam, the Netherlands}} \maketitle \date{} \sloppy \begin{abstract} We present a novel method for the estimation of variance parameters in generalised linear mixed models. The method has its roots in \cite{Harville77}'s work, but it is able to deal with models that have a precision matrix for the random-effect vector that is linear in the inverse of the variance parameters (i.e., the precision parameters). We call the method SOP (Separation of Overlapping Precision matrices). SOP is based on applying the method of successive approximations to easy-to-compute estimate updates of the variance parameters. These estimate updates have an appealing form: they are the ratio of a (weighted) sum of squares to a quantity related to effective degrees of freedom. We provide the sufficient and necessary conditions for these estimates to be strictly positive. An important application field of SOP is penalised regression estimation of models where multiple quadratic penalties act on the same regression coefficients. We discuss in detail two of those models: penalised splines for locally adaptive smoothness and for hierarchical curve data. Several data examples in these settings are presented. \end{abstract} \section{Introduction} The estimation of variance parameters is a statistical problem that has received extensive attention for more than 50 years. It originated with the ANOVA methodology proposed by Fisher in the 1920's, where estimates were obtained by equating mean squares to their expected values. However, the results yielded by this method were not optimal in some situations, for example, in the case of unbalanced data. Later on, \cite{Crump51} applied maximum likelihood (ML) under the assumption of normally distributed errors and random effects. But it was not until the 1970's that the estimation of variance parameters based on ML methods gained interest.
The method of \emph{Restricted Maximum Likelihood} (REML) \citep{Patterson71} gave a solution to the problem of biased estimators of the variance parameters. However, one of the main obstacles to the use of this technique, at the time, was the fact that the calculation of ML/REML estimates requires the numerical solution of a non-linear problem. \cite{Patterson71} proposed an iterative solution using the Fisher Scoring algorithm, but it was \cite{Harville77} who proposed the first numerical algorithm to compute REML estimates of the variance parameters. His proposal is the inspiration for our work.
Over the years, several computational approaches have appeared with the aim of reducing the computational burden of solving the score equations for the variance parameters: \cite{Smith1990} proposed the use of the EM algorithm, \cite{Graser87} suggested the use of the simplex algorithm to obtain the estimates directly from the likelihood, and \cite{Gilmour1995} developed a method based on the use of an average information matrix.
In the context of Generalised Linear Mixed Models (GLMMs), estimation based on iterative re-weighted REML has been proposed independently by a number of authors \citep[e.g.,][]{Schall1991,Engel1994}, as an extension of the iterative re-weighted least squares algorithm for Generalised Linear Models \citep[GLM,][]{McCullagh89}. \cite{Breslow1993} proposed a general method based on Penalised Quasi-Likelihood (PQL) for the estimation of the fixed and random effects, and pseudo-likelihood for the variance parameters. As noted by \cite{Engel1996}, the estimation procedures discussed in all these papers are equivalent, although motivated from quite different starting points.
The majority of the methods mentioned above impose a strong restriction on the vector of random effects: its variance-covariance matrix has to be linear in the variance parameters. The results we present in this paper relax that assumption to the case in which linearity is required of the precision matrix rather than of the variance-covariance matrix. Our contribution is motivated by the need to estimate smoothing parameters in the context of penalised regression models with non-standard quadratic penalties.
Penalised spline regression \citep[P-splines,][]{Eilers1996} has become a popular method for estimating models in which the mean response (or linear predictor in the non-Gaussian case) is a smooth unknown function of one or more covariates. The method is based on the representation of the smooth component in terms of basis functions, and the estimation of the parameters by modifying the likelihood with a quadratic penalty on the coefficients. The size of the penalty is controlled by the so-called \emph{smoothing parameter}. The connection between penalised smoothing and linear mixed models was first established a long time ago \citep{Green87}, and it has come into common use in the last 15 years \citep{Currie2002, Currie2006, Lee2010, Wand2003}. The key point of the equivalence is that the smoothing parameter becomes the ratio between two variance parameters and, therefore, the methods mentioned above can be used to estimate directly the amount of smoothing needed in the model, instead of using methods based on the optimisation of criteria such as the Akaike Information Criterion (AIC) or generalised cross-validation (GCV) \citep{Eilers1996, Wood2008}. Standard methods based on REML/ML can be applied when simple penalties are used, i.e., when each regression coefficient is affected by a single penalty (by a single smoothing parameter). However, in some circumstances, the penalties present an overlapping structure, with the same coefficients being penalised simultaneously by several smoothing parameters. This includes important cases such as multidimensional penalised splines with anisotropic penalties or adaptive penalised splines. Estimation methods that can deal with this situation have been proposed in the smoothing literature \citep[e.g.,][]{Wood2011, Wood2016}, but they have the drawback of being very computationally demanding, especially when the number of smoothing parameters is large.
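As a minimal numerical illustration of the penalty/smoothing-parameter mechanism (a Whittaker-type smoother, i.e., the identity-basis special case of P-splines; data and values are invented for the example), the fit solves a penalised least-squares problem whose smoothing parameter plays the role of the variance ratio $\phi/\sigma^2$ in the mixed-model representation:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
x = np.linspace(0, 1, n)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, n)   # invented noisy data

# Second-order difference penalty, identity basis (Whittaker smoothing):
# minimise ||y - f||^2 + lam * ||D f||^2, i.e. solve (I + lam * D'D) f = y.
D = np.diff(np.eye(n), n=2, axis=0)
lam = 10.0              # plays the role of phi / sigma^2 in the mixed-model view
A = np.eye(n) + lam * D.T @ D
f_hat = np.linalg.solve(A, y)

# Effective degrees of freedom = trace of the hat matrix inv(A);
# always between 2 (null space of D'D) and n.
ed = np.trace(np.linalg.inv(A))
```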
This work addresses this problem and presents a fast method for estimating the variance parameters/smoothing parameters in generalised linear mixed models/generalised additive models. The method can be used whenever the precision matrix of the random component (or the penalty matrix of the P-spline model) is linear in the inverses of the variance parameters (the smoothing parameters). We obtain simple expressions for the estimates of the variance parameters that are ratios between a sum of squares and a quantity related to the notion of effective degrees of freedom in the smoothing context \citep{Hastie1990}. We show the sufficient and necessary conditions that guarantee the positivity of these estimates, and discuss several situations where these conditions can be easily verified. Particular cases of the method presented here have been introduced in \cite{MXRA2015}, which solved the problem in the case of anisotropic multidimensional smoothing, and in \cite{MXRA2015b}, where results for adaptive P-splines were first discussed. More recently, \cite{Wood2017} extended the aforementioned works to more general penalised spline models. The proposal discussed here presents two main advantages with respect to \cite{Wood2017}'s approach. First, the smoothing/variance parameter estimates described in \cite{Wood2017} rely on Moore-Penrose pseudoinverses of the penalty matrices, which, in our experience, may present numerical instabilities. Second, our proposal establishes an explicit connection between variance component estimates and effective degrees of freedom, which is lacking in \cite{Wood2017}. Effective degrees of freedom are key components in smoothing models. They help in summarising a model, as partial effective degrees of freedom are measures of model components' complexity with strong intuitive appeal \citep[see, e.g.,][for an example in the agricultural field]{MX18}.
The rest of this paper is organised as follows: Section \ref{HfD} introduces the work by \cite{Harville77}, which constitutes the foundation of the work presented here. Section \ref{SOP} is the core of the paper: the new method, called SOP (Separation of Overlapping Precision matrices), is presented; and the connection between SOP and the notion of effective degrees of freedom is discussed. Section \ref{OP_WWW} describes several P-spline models whose estimation can be approached using SOP. We focus in this paper on adaptive P-splines and P-splines for hierarchical curve data. Illustrations with data examples are provided in Section \ref{examples}. A Discussion closes the paper. Some technical details have been added as Appendices. The estimating algorithm is detailed there. \section{Estimation of variance parameters in generalised linear mixed models: \cite{Harville77}'s work and extensions}\label{HfD} This Section is our little tribute to \cite{Harville77}'s paper, which was the inspiration for this work. \cite{Harville77}'s paper deals with ML/REML approaches to variance parameter estimation in linear mixed models (LMM) for Gaussian data. Nonetheless, estimation of GLMMs can be done by repeated use of LMM methodology on a \textit{working} dependent variable \citep[see, e.g.,][where use is made of the results by \citealp{Harville77}]{Schall1991,Engel1994}. This is the approach we follow in this paper.
Let $\boldsymbol{y} = \left(y_1,\ldots,y_n\right)^{\top}$ be a vector of $n$ observations. A GLMM can be written as \begin{equation} g\left(\boldsymbol{\mu}\right) = \boldsymbol{X}\boldsymbol{\beta} + \boldsymbol{Z}\boldsymbol{\alpha},\;\;\mbox{with}\;\;\boldsymbol{\alpha}\sim N\left(\boldsymbol{0}, \boldsymbol{G}\right), \label{mm_equation_orig} \end{equation}
where $\mu_i = E\left(y_i\mid \boldsymbol{\alpha}\right)$ and $g\left(\cdot\right)$ is the link function. The model assumes that, conditional on the random effects, $y_i$ is independently distributed with mean $\mu_i$ and variance $\mathbb{V}\mbox{ar}(y_i|\boldsymbol{\alpha}) = \phi\nu\left(\mu_i\right)$. Here, $\nu\left(\cdot\right)$ is a specified variance function, and $\phi$ is the dispersion parameter that may be known or unknown. In model (\ref{mm_equation_orig}), $\boldsymbol{X}$ and $\boldsymbol{Z}$ represent column-partitioned matrices, associated respectively with the fixed and random effects. We assume that $\boldsymbol{X}$ has full rank, $\boldsymbol{Z} = \left[\boldsymbol{Z}_{1},\ldots,\boldsymbol{Z}_{c}\right]$, and $\boldsymbol{\alpha} = \left(\boldsymbol{\alpha}_{1}^{\top},\ldots,\boldsymbol{\alpha}_{c}^{\top}\right)^{\top}$. Each $\boldsymbol{Z}_{k}$ corresponds to the design matrix of the $k$-th random component $\boldsymbol{\alpha}_{k}$, with $\boldsymbol{\alpha}_{k}$ being a $(q_{k} \times 1)$ vector ($k = 1, \ldots, c$). We assume further that $\boldsymbol{\alpha}_k\sim N\left(\boldsymbol{0}, \boldsymbol{G}_k\right)$ and that \begin{equation*} \boldsymbol{G} = \bigoplus_{k = 1}^{c}\boldsymbol{G}_{k} = \bigoplus_{k = 1}^{c}\sigma_{k}^2\boldsymbol{I}_{q_k} = \mbox{diag}\left(\sigma_{1}^2\boldsymbol{I}_{q_1},\ldots,\sigma_{c}^2\boldsymbol{I}_{q_c}\right), \label{G_mm_equation} \end{equation*} where $\boldsymbol{I}_{m}$ is an identity matrix of order $m \times m$, and $\bigoplus$ denotes the direct sum of matrices. Note that the variance-covariance matrix $\boldsymbol{G}$ is linear in the variance parameters $\sigma_{k}^2$.
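The direct-sum structure of $\boldsymbol{G}$ is straightforward to set up numerically; a minimal sketch with illustrative values ($c=2$ components):

```python
import numpy as np

# G = diag(sigma_1^2 I_{q_1}, ..., sigma_c^2 I_{q_c}): linear in the sigma_k^2.
sigma2 = [0.5, 2.0]      # illustrative variance parameters sigma_k^2
q = [3, 4]               # dimensions q_k of the random components
G = np.diag(np.concatenate([s * np.ones(m) for s, m in zip(sigma2, q)]))
```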
As noted before, estimation of model (\ref{mm_equation_orig}) can be approached by iterative fitting of an LMM that involves a \textit{working} dependent variable $\boldsymbol{z}$ and a weight matrix $\boldsymbol{W}$ (updated at each iteration). The specific form of $\boldsymbol{z}$ and $\boldsymbol{W}$ is given in Appendix \ref{algorithm}. If $\phi$ and $\sigma_k^2$ ($k=1,\ldots,c$) are known, at each iteration, the updates for $\boldsymbol{\beta}$ and $\boldsymbol{\alpha}$ follow from the so-called Henderson equations \citep{Henderson1963} \begin{equation} \underbrace{ \begin{bmatrix} \boldsymbol{X}^{\top}{\boldsymbol{R}}^{-1}\boldsymbol{X} & \boldsymbol{X}^{\top}{\boldsymbol{R}}^{-1}\boldsymbol{Z} \\ \boldsymbol{Z}^{\top}{\boldsymbol{R}}^{-1}\boldsymbol{X} & \boldsymbol{Z}^{\top}{\boldsymbol{R}}^{-1}\boldsymbol{Z} + {\boldsymbol{G}}^{-1} \end{bmatrix}}_{\boldsymbol{C}} \begin{bmatrix} \boldsymbol{\widehat{\beta}}\\ \boldsymbol{\widehat{\alpha}} \end{bmatrix} = \begin{bmatrix} \boldsymbol{X}^{\top}{\boldsymbol{R}}^{-1}\boldsymbol{z}\\ \boldsymbol{Z}^{\top}{\boldsymbol{R}}^{-1}\boldsymbol{z} \end{bmatrix}, \label{MX:linearsystem_1} \end{equation} which give rise to closed-form expressions \begin{align} \widehat{\boldsymbol{\beta}} & = \left(\boldsymbol{X}^{\top}\boldsymbol{V}^{-1}\boldsymbol{X}\right)^{-1}\boldsymbol{X}^{\top}\boldsymbol{V}^{-1}\boldsymbol{z},\nonumber\\ \widehat{\boldsymbol{\alpha}}_k & = \boldsymbol{G}_k\boldsymbol{Z}_k^{\top}\boldsymbol{P}\boldsymbol{z}\;\;\;\;\;\;(k = 1, \ldots, c), \label{mm_randomest} \end{align} where $\boldsymbol{P} = \boldsymbol{V}^{-1} - \boldsymbol{V^{-1}}\boldsymbol{X}\left(\boldsymbol{X^{\top}V^{-1}X}\right)^{-1}\boldsymbol{X}^{\top}\boldsymbol{V^{-1}}$ with $\boldsymbol{V} = \boldsymbol{R} + \boldsymbol{Z}\boldsymbol{G}\boldsymbol{Z}^{\top}$ and $\boldsymbol{R} = \phi\boldsymbol{W}^{-1}$. The Henderson equations are of little use if the variance parameters $\phi$ and $\sigma_k^2$ ($k=1,\ldots,c$) are unknown. 
In his 1977 paper, Harville shows how to estimate them by REML via an elegant iterative algorithm. Let us first define \[ \boldsymbol{T} = \left(\boldsymbol{I} + \boldsymbol{Z}^{\top}\boldsymbol{S}\boldsymbol{Z}\boldsymbol{G}\right)^{-1}, \] where $\boldsymbol{S} = \boldsymbol{R}^{-1} - \boldsymbol{R}^{-1}\boldsymbol{X}\left(\boldsymbol{X}^{\top}\boldsymbol{R}^{-1}\boldsymbol{X}\right)^{-1}\boldsymbol{X}^{\top}\boldsymbol{R}^{-1}$. We note that $\boldsymbol{T}$ can be partitioned as follows \[ \boldsymbol{T} = \begin{bmatrix} \boldsymbol{T}_{11} & \boldsymbol{T}_{12} & \cdots & \boldsymbol{T}_{1c}\\ \boldsymbol{T}_{21} & \boldsymbol{T}_{22} & \cdots & \boldsymbol{T}_{2c}\\ \vdots & \vdots & \ddots & \vdots\\ \boldsymbol{T}_{c1} & \boldsymbol{T}_{c2} & \cdots & \boldsymbol{T}_{cc}\\ \end{bmatrix}, \] where $\boldsymbol{T}_{ij}$ are matrices of order $q_i \times q_j$. In \cite{Harville77}, the updated estimate of $\sigma^2_k$ ($k = 1,\ldots,c$) is \begin{equation} \widehat{\sigma}_k^{2} = \frac{\widehat{\boldsymbol{\alpha}}_k^{[t]\top}\widehat{\boldsymbol{\alpha}}_k^{[t]}}{\mbox{ED}_k^{[t]}}, \label{var_per_component_harville} \end{equation} where \begin{equation} \mbox{ED}_k^{[t]} = q_k - \mbox{trace}\left(\boldsymbol{T}_{kk}^{[t]}\right), \label{ed_per_component_harville} \end{equation} and the superscript $[t]$ denotes quantities evaluated at current estimates of the variance parameters. From the estimates of $\boldsymbol{\beta}$ and $\boldsymbol{\alpha}$ follows an estimate for $\boldsymbol{z}$: $\widehat{\boldsymbol{z}} = \boldsymbol{X}\widehat{\boldsymbol{\beta}} + \boldsymbol{Z}\widehat{\boldsymbol{\alpha}}$. The residuals are $\boldsymbol{z} - \widehat{\boldsymbol{z}}$. Harville uses \begin{equation}\label{e_sighar} \hat{\phi} = \frac{\boldsymbol{z}^{\top}\boldsymbol{W}\left(\boldsymbol{z} - \widehat{\boldsymbol{z}}^{[t]}\right)}{n - \mbox{rank}(\boldsymbol{X})}, \end{equation} to estimate the dispersion parameter (not always needed in GLMM). 
An alternative expression is \citep[see, e.g.,][]{Engel1990,MXRA2015} \begin{equation}\label{e_sigwe} \hat{\phi} = \frac{\left(\boldsymbol{z} - \widehat{\boldsymbol{z}}^{[t]}\right)^{\top}\boldsymbol{W}\left(\boldsymbol{z} - \widehat{\boldsymbol{z}}^{[t]}\right)}{n - \mbox{rank}(\boldsymbol{X}) - \sum_{k=1}^c\mbox{ED}_k^{[t]}}. \end{equation} Here $\mbox{rank}(\boldsymbol{X}) + \sum_{k=1}^c\mbox{ED}_k^{[t]}$ can be interpreted as the effective model dimension. At convergence, eqns. (\ref{e_sighar}) and (\ref{e_sigwe}) give identical numerical values. \subsection{Effective degrees of freedom in Harville's method} As noted by \cite{Harville77}, the iterates derived from eqn. (\ref{var_per_component_harville}) have an intuitively appealing form. On each iteration, $\sigma_k^2$ is estimated by the ratio between the sum of squares of the estimates for $\boldsymbol{\alpha}_{k}$ and a number between zero and $q_k$. We now show that the denominator in eqn. (\ref{var_per_component_harville}) can in fact be interpreted as effective degrees of freedom in the smoothing sense, i.e., as the trace of a ``hat'' matrix \citep{Hastie1990}.
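Before the algebraic argument, the identity can be checked numerically. The sketch below (a single random component, $c=1$, with invented data) computes both sides of $\mbox{trace}(\boldsymbol{H}_k)=q_k-\mbox{trace}(\boldsymbol{T}_{kk})$, together with one update of the form (\ref{var_per_component_harville}):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, q = 40, 2, 6                         # invented toy dimensions
X = rng.standard_normal((n, p))
Z = rng.standard_normal((n, q))
z = rng.standard_normal(n)
phi, sigma2 = 1.0, 0.8

R = phi * np.eye(n)                        # W = I for simplicity
Rinv = np.linalg.inv(R)
G = sigma2 * np.eye(q)

S = Rinv - Rinv @ X @ np.linalg.solve(X.T @ Rinv @ X, X.T @ Rinv)
T = np.linalg.inv(np.eye(q) + Z.T @ S @ Z @ G)

V = R + Z @ G @ Z.T
Vinv = np.linalg.inv(V)
P = Vinv - Vinv @ X @ np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv)

H = Z @ G @ Z.T @ P                        # hat matrix of the random component
ED = q - np.trace(T)                       # effective degrees of freedom
alpha = G @ Z.T @ P @ z
sigma2_update = (alpha @ alpha) / ED       # Harville-type update
```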
First note that expression (\ref{mm_randomest}) reveals that $\boldsymbol{Z}_k\widehat{\boldsymbol{\alpha}}_k = \boldsymbol{Z}_k\boldsymbol{G}_k\boldsymbol{Z}_k^{\top}\boldsymbol{P}\boldsymbol{z}$. Thus, the ``hat'' matrix corresponding to the $k$-th random component $\boldsymbol{\alpha}_k$ is \[ \boldsymbol{H}_k = \boldsymbol{Z}_k\boldsymbol{G}_k\boldsymbol{Z}_k^{\top}\boldsymbol{P}, \] i.e., $\boldsymbol{H}_k\boldsymbol{z} = \boldsymbol{Z}_k\widehat{\boldsymbol{\alpha}}_k$. We now show that $\mbox{trace}\left({\boldsymbol{H}_k}\right) = \mbox{ED}_k$. It is easy to verify that \begin{equation} \boldsymbol{T} = \left(\boldsymbol{I} + \boldsymbol{Z}^{\top}\boldsymbol{S}\boldsymbol{Z}\boldsymbol{G}\right)^{-1} = \boldsymbol{G}^{-1}\left(\boldsymbol{G}^{-1} + \boldsymbol{Z}^{\top}\boldsymbol{S}\boldsymbol{Z}\right)^{-1}, \label{T_C_rel} \end{equation} where $\left(\boldsymbol{G}^{-1} + \boldsymbol{Z}^{\top}\boldsymbol{S}\boldsymbol{Z}\right)^{-1}$ is that partition of the inverse of $\boldsymbol{C}$ in (\ref{MX:linearsystem_1}) corresponding to the random vector $\boldsymbol{\alpha}$ \citep{Harville77, Johnson1995}. Exploiting the block structure of $\boldsymbol{Z}$ and $\boldsymbol{G}$, and making use of result (\ref{T_C_rel}) and (A4) in \cite{Johnson1995}, we have that \begin{equation} \boldsymbol{Z}_k^{\top}\boldsymbol{P}\boldsymbol{Z}_k\boldsymbol{G}_k = \boldsymbol{I}_{q_k} - \boldsymbol{G}_k^{-1}\boldsymbol{C}^{*}_{kk} = \boldsymbol{I}_{q_k} - \boldsymbol{T}_{kk}, \label{j_t_result} \end{equation} where, to ease the notation, $\boldsymbol{C}^{*}$ denotes the inverse of $\boldsymbol{C}$, and $\boldsymbol{C}^{*}_{kk}$ denotes that partition of $\boldsymbol{C}^{*}$ corresponding to the $k$-th random component $\boldsymbol{\alpha}_k$. 
Thus, \begin{align*} \mbox{trace}\left({\boldsymbol{H}_k}\right) & = \mbox{trace}\left(\boldsymbol{Z}_k\boldsymbol{G}_k\boldsymbol{Z}_k^{\top}\boldsymbol{P}\right) = \mbox{trace}\left(\boldsymbol{Z}_k^{\top}\boldsymbol{P}\boldsymbol{Z}_k\boldsymbol{G}_k\right)\\ & = \mbox{trace}\left(\boldsymbol{I}_{q_k} - \boldsymbol{T}_{kk}\right) = q_k - \mbox{trace}\left(\boldsymbol{T}_{kk}\right) \\ & = \mbox{ED}_k. \label{ed_sap_2} \end{align*} \section{Separation of overlapping precision matrices: the SOP method}\label{SOP} In the previous Section we discussed an estimating method for generalised linear mixed models where the variance-covariance matrix of the random component is linear in the variance parameters. However, more complex structures of the variance-covariance matrix appear in practice. The present research was motivated by our work on penalised spline regression. Nevertheless, the method to be discussed in this Section is not confined to this area: it can be seen as a general estimating method for generalised linear mixed models with a precision matrix of a specific structure. As in Section \ref{HfD}, we consider the generalised linear mixed model \begin{equation} g\left(\boldsymbol{\mu}\right) = \boldsymbol{X}\boldsymbol{\beta} + \boldsymbol{Z}\boldsymbol{\alpha} = \boldsymbol{X}\boldsymbol{\beta} + \sum_{k=1}^{c}\boldsymbol{Z}_k\boldsymbol{\alpha}_k, \label{mm_equation} \end{equation} with $\boldsymbol{\alpha}_k\sim N\left(\boldsymbol{0}, \boldsymbol{G}_k\right)$, $\boldsymbol{\alpha}\sim N\left(\boldsymbol{0}, \boldsymbol{G}\right)$, and $\boldsymbol{G} = \bigoplus_{k = 1}^{c}\boldsymbol{G}_{k}$. 
The main difference with respect to Section \ref{HfD} is that we do not assume that $\boldsymbol{G}_{k} = \sigma_k^2\boldsymbol{I}_{q_k}$, but we consider precision matrices of the form \begin{equation} \boldsymbol{G}_{k}^{-1} = \sum_{l = 1}^{p_k}\sigma_{k_l}^{-2}\boldsymbol{\Lambda}_{k_l}, \label{G_mm_equation_sop} \end{equation} where $\sigma_{k_l}^2$ ($l = 1,\ldots,p_k$ and $k = 1, \ldots, c$) are the variance parameters, and $\boldsymbol{\Lambda}_{k_l}$ are known symmetric positive semi-definite matrices of dimension $q_k \times q_k$. Note that we do not require $\boldsymbol{\Lambda}_{k_l}$ to be positive definite. The only requirement is that $\boldsymbol{G}_{k}^{-1}$ ($k = 1, \ldots, c$) be positive definite, and then so are $\boldsymbol{G}^{-1}$ and its inverse, the variance-covariance matrix $\boldsymbol{G}$.
Expression (\ref{G_mm_equation_sop}) deserves some detailed discussion. Firstly, it is worth noting that we do not work with variance-covariance matrices, but with their inverses, the precision matrices. As mentioned, the developments in this work have their origin in penalised spline methods. In Section~\ref{OP_WWW} the need to work with precision matrices, or, in the terminology of penalised splines, with penalty matrices, will become clear. Secondly, the main contribution of this paper is that we allow each random component $\boldsymbol{\alpha}_k$ ($k=1,\ldots,c$) in model $(\ref{mm_equation})$ to be ``affected'' (shrunk) by several variance parameters. A particular case is $p_k = 1$ $\forall k$, which corresponds to the situation discussed in Section \ref{HfD}.
For the sake of simplicity, in some cases we will rewrite the precision matrix $\boldsymbol{G}^{-1}$ as follows \begin{equation} \boldsymbol{G}^{-1} = \sum_{l=1}^{p}\sigma_l^{-2}\widetilde{\boldsymbol{\Lambda}}_l, \label{G_inv_compact} \end{equation} where $p = \sum_{k=1}^{c}p_k$. By a slight abuse of notation, let $\boldsymbol{\Lambda}_l$ denote the matrices involved in expression (\ref{G_mm_equation_sop}). The matrix $\widetilde{\boldsymbol{\Lambda}}_l$ is $\boldsymbol{\Lambda}_l$ padded out with zeroes. Some specific examples will be presented in Section \ref{OP_WWW} below. Expression (\ref{G_inv_compact}) makes it clear that the present work deals with generalised linear mixed models with a precision matrix for the random component that is linear in the precision parameters $\sigma_l^{-2}$. The next subsection presents the proposed estimation method, which we call SOP. \subsection{The SOP method} Regardless of the structure of $\boldsymbol{G}$, estimation of $\boldsymbol{\beta}$ and $\boldsymbol{\alpha}$, and, when necessary, the dispersion parameter $\phi$, does not pose a problem, and can be done as discussed in Section \ref{HfD} above (see also Appendix \ref{algorithm} for a detailed description of the estimating algorithm). Recall that estimates for $\boldsymbol{\alpha}_k$ are obtained as $\widehat{\boldsymbol{\alpha}}_k = \boldsymbol{G}_k\boldsymbol{Z}_k^{\top}\boldsymbol{P}\boldsymbol{z}$. The hat matrix associated with the $k$-th random component $\boldsymbol{\alpha}_k$ is, once again, $\boldsymbol{H}_k = \boldsymbol{Z}_k\boldsymbol{G}_k\boldsymbol{Z}_k^{\top}\boldsymbol{P}$, and the effective degrees of freedom of this component are given by $\mbox{ED}_k = \mbox{trace}\left(\boldsymbol{H}_k\right)$. The variance parameters $\sigma_{k_l}^2$ ($l = 1,\ldots,p_k$ and $k = 1, \ldots, c$), however, cannot be estimated by means of Harville's approach. This is a consequence of $\boldsymbol{G}$ not being linear in the variance parameters.
The key to our approach is to work with $\boldsymbol{G}^{-1}$ rather than with $\boldsymbol{G}$. Given that $\boldsymbol{G}^{-1}$ is linear in the precision parameters $\sigma^{-2}_{k_l}$, the first-order partial derivatives of the (approximate) REML log-likelihood function can be explicitly obtained, as well as the REML-based estimates of the variance parameters. We state the result in the following Theorem, whose proof is given in Appendix \ref{AppA}. \begin{theorem} Let $\boldsymbol{G} = \bigoplus_{k = 1}^{c}\boldsymbol{G}_{k}$ be a symmetric positive definite matrix, with $\boldsymbol{G}^{-1}_k = \sum_{l=1}^{p_k}\sigma_{k_l}^{-2}\boldsymbol{\Lambda}_{k_l}$ symmetric positive definite and $\boldsymbol{\Lambda}_{k_l}$ known symmetric positive semi-definite. Then, the REML-based estimate updates of the variance parameters $\sigma_{k_l}^2$ ($l = 1,\ldots,p_k$ and $k = 1, \ldots, c$) are given by \begin{equation} \widehat{\sigma}^{2}_{k_l} = \frac{\widehat{\boldsymbol{\alpha}}_k^{{[t]}\top}\boldsymbol{\Lambda}_{k_l}\widehat{\boldsymbol{\alpha}}_k^{[t]}}{\mbox{ED}_{k_l}^{[t]}}, \label{var_per_component} \end{equation} where \begin{equation} \mbox{ED}_{k_l}^{[t]} = \mbox{trace}\left(\boldsymbol{Z}_k^{\top}\boldsymbol{P}^{[t]}\boldsymbol{Z}_k\boldsymbol{G}_k^{[t]}\frac{\boldsymbol{\Lambda}_{k_l}}{\widehat{\sigma}_{k_l}^{2[t]}}\boldsymbol{G}_k^{[t]}\right), \label{ed_per_component} \end{equation} with $\widehat{\boldsymbol{\alpha}}^{[t]}$, $\boldsymbol{P}^{[t]}$ and $\boldsymbol{G}^{[t]}$ evaluated at the current estimates $\widehat{\sigma}_{k_l}^{2[t]}$ ($l = 1,\ldots,p_k$ and $k = 1, \ldots, c$), and, when necessary, $\widehat{\phi}^{[t]}$. \label{the1} \end{theorem} Note that when $\boldsymbol{G}_k = \sigma_k^2\boldsymbol{I}_{q_k}$, expressions (\ref{var_per_component}) and (\ref{ed_per_component}) reduce to those of Harville (expressions (\ref{var_per_component_harville}) and (\ref{ed_per_component_harville}) respectively).
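To make Theorem \ref{the1} concrete, the following Python/numpy sketch runs the fixed-point iteration on a synthetic Gaussian linear mixed model with a single random component ($c = 1$) whose precision has two components. This is an illustrative sketch only (phi is held fixed at its true value, and the synthetic data, names, and dimensions are not from the paper):

```python
# Minimal sketch of the SOP updates (Theorem 1) for a Gaussian linear mixed
# model with G^{-1} = sigma_1^{-2} Lambda_1 + sigma_2^{-2} Lambda_2.
# Synthetic data; phi fixed at 1 for brevity.
import numpy as np

rng = np.random.default_rng(1)
n, q, phi = 200, 10, 1.0
X = np.column_stack([np.ones(n), rng.normal(size=n)])
Z = rng.normal(size=(n, q))
D = np.diff(np.eye(q), n=2, axis=0)
Lambdas = [D.T @ D, np.eye(q)]           # PSD penalty components
y = X @ np.array([1.0, 2.0]) + Z @ rng.normal(size=q) + rng.normal(size=n)

sig2 = np.array([1.0, 1.0])              # starting values sigma_l^2
for _ in range(20):
    G = np.linalg.inv(Lambdas[0] / sig2[0] + Lambdas[1] / sig2[1])
    V = phi * np.eye(n) + Z @ G @ Z.T
    Vinv = np.linalg.inv(V)
    # P = V^{-1} - V^{-1} X (X' V^{-1} X)^{-1} X' V^{-1}
    P = Vinv - Vinv @ X @ np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv)
    alpha = G @ Z.T @ P @ y              # BLUP of the random effects
    ZPZ = Z.T @ P @ Z
    for l, Lam in enumerate(Lambdas):
        ED_l = np.trace(ZPZ @ G @ (Lam / sig2[l]) @ G)   # eqn for ED_{k_l}
        sig2[l] = (alpha @ Lam @ alpha) / ED_l           # variance update

# The updates are ratios of positive quantities, hence strictly positive
assert np.all(sig2 > 0)
```

The quadratic form in the numerator and the trace in the denominator correspond term by term to expressions (\ref{var_per_component}) and (\ref{ed_per_component}).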
An important and desirable property of the updates given in eqn. (\ref{var_per_component}) is that they are always nonnegative, provided that the previous estimates of the variance parameters are nonnegative. In addition, under rather weak conditions, these updates are strictly positive (although it is possible to obtain values very close to zero). \begin{theorem} If $\widehat{\sigma}_{k_l}^{2[t]} > 0$, then the REML-based estimate updates of the variance parameters given in eqn. (\ref{var_per_component}) are greater than or equal to zero, with strict inequality holding if: (i) $\mbox{rank}\left(\boldsymbol{X}, \boldsymbol{Z}_k\boldsymbol{G}_k^{[t]}\boldsymbol{\Lambda}_{k_l}\right) > \mbox{rank}\left(\boldsymbol{X}\right)$; and (ii) $\boldsymbol{z}$ (the \textit{working} response vector) is not in the space spanned by the columns of $\boldsymbol{X}$. \label{the2} \end{theorem} The proof is provided in Appendix \ref{AppB}. We note that condition (i) is needed for both the numerator and denominator of eqn. (\ref{var_per_component}) to be strictly positive, while (ii) is only needed for the numerator. From an applied point of view, it would undoubtedly be important to be able to check whether the conditions are fulfilled before fitting the model. This may not be an easy task, since they depend on $\boldsymbol{G}_k^{[t]}$, and thus may vary from iteration to iteration. There are, however, common situations where condition (i) can be checked in advance: \begin{itemize} \item If $\boldsymbol{\Lambda}_{k_l}$ is of full rank, then condition (i) simplifies to $\mbox{rank}\left(\boldsymbol{X}, \boldsymbol{Z}_k\right) > \mbox{rank}\left(\boldsymbol{X}\right)$. We note that this condition is the same as that discussed by \cite{Harville77} in Lemma 1. 
\item If $\boldsymbol{G}_k^{[t]}$ and $\boldsymbol{\Lambda}_{k_l}$ commute (i.e., $\boldsymbol{G}_k^{[t]}\boldsymbol{\Lambda}_{k_l} = \boldsymbol{\Lambda}_{k_l}\boldsymbol{G}_k^{[t]}$), then condition (i) simplifies to $\mbox{rank}\left(\boldsymbol{X}, \boldsymbol{Z}_k\boldsymbol{\Lambda}_{k_l}\right) > \mbox{rank}\left(\boldsymbol{X}\right)$. Examples where $\boldsymbol{G}_k^{[t]}$ and $\boldsymbol{\Lambda}_{k_l}$ commute include, for instance, the case where both are diagonal. \end{itemize} We discuss these situations in more detail in Section \ref{OP_WWW}, where some examples of application of the SOP method are presented. \subsection{Effective degrees of freedom in the SOP method} In line with the Harville method discussed in Section \ref{HfD}, the denominator of eqn. (\ref{var_per_component}) has been denoted by $\mbox{ED}_{\{\cdot\}}$, standing for effective degrees of freedom. Result (\ref{ed_per_component}) makes it easy to show that the sum of the $\mbox{ED}_{k_l}$ over the $p_k$ variance parameters involved in $\boldsymbol{G}^{-1}_k$ (see eqn. (\ref{G_mm_equation_sop})) corresponds to $\mbox{ED}_k$ (the effective degrees of freedom of $\boldsymbol{\alpha}_k$):
\begin{align*} \sum_{l=1}^{p_k}\mbox{ED}_{k_l} = \sum_{l=1}^{p_k}\mbox{trace}\left(\boldsymbol{Z}_k^{\top}\boldsymbol{P}\boldsymbol{Z}_k\boldsymbol{G}_k\frac{\boldsymbol{\Lambda}_{k_l}}{\sigma_{k_l}^{2}}\boldsymbol{G}_k\right) = \mbox{trace}\left(\boldsymbol{Z}_k^{\top}\boldsymbol{P}\boldsymbol{Z}_k\boldsymbol{G}_k\right) = \mbox{trace}\left(\boldsymbol{H}_k\right) = \mbox{ED}_k. \end{align*}
As a consequence, at convergence, the estimated effective degrees of freedom associated with each random component in model (\ref{mm_equation}) are obtained as a by-product of the SOP method.
To finish this part, we would like to point out an interesting link between the upper bound for $\mbox{ED}_{k_l}$ (denoted by $\mbox{ED}^{ub}_{k_l}$) and condition (i) in Theorem \ref{the2}. It can be shown that \[ \mbox{ED}_{k_l} \leq \mbox{rank}\left(\boldsymbol{X}, \boldsymbol{Z}_k\boldsymbol{G}_k\boldsymbol{\Lambda}_{k_l}\right) - \mbox{rank}\left(\boldsymbol{X}\right) = \mbox{rank}\left((I_n - \boldsymbol{P}_{\boldsymbol{X}})\boldsymbol{Z}_k\boldsymbol{G}_k\boldsymbol{\Lambda}_{k_l}\right) = \mbox{ED}^{ub}_{k_l}, \] where $\boldsymbol{P}_{\boldsymbol{X}} = \boldsymbol{X}\left(\boldsymbol{X}^{\top}\boldsymbol{X}\right)^{-1}\boldsymbol{X}^{\top}$. Thus, if condition (i) in Theorem \ref{the2} is not verified, $\mbox{ED}_{k_l}$ is exactly zero. We omit the proof of the previous result. It can be obtained in a similar fashion as in the paper by \cite{Cui2010} (see Web Appendix (f) in that paper), by noting that \begin{align*}
\mbox{ED}_{k_l} = \mbox{trace}\left(\boldsymbol{Z}_k^{\top}\boldsymbol{P}\boldsymbol{Z}_k\boldsymbol{G}_k\frac{\boldsymbol{\Lambda}_{k_l}}{\sigma_{k_l}^{2}}\boldsymbol{G}_k\right) = \mbox{trace}\left(\boldsymbol{Z}_k\boldsymbol{G}_k\frac{\boldsymbol{\Lambda}_{k_l}}{\sigma_{k_l}^{2}}\boldsymbol{G}_k\boldsymbol{Z}_k^{\top}\left[\left(\boldsymbol{I}_n - \boldsymbol{P}_{\boldsymbol{X}}\right)\boldsymbol{V}\left(\boldsymbol{I}_n - \boldsymbol{P}_{\boldsymbol{X}}\right)\right]^{+}\right), \end{align*} where $\boldsymbol{\Gamma}^{+}$ denotes the Moore-Penrose pseudoinverse of $\boldsymbol{\Gamma}$. This equivalence has been proved, in a less general situation, by \cite{MX18} (see Web Appendix D in that paper). Following a similar reasoning, we obtain the upper bound for the effective degrees of freedom of the $k$-th random component \[ \mbox{ED}_{k} \leq \mbox{rank}\left(\boldsymbol{X}, \boldsymbol{Z}_k\right) - \mbox{rank}\left(\boldsymbol{X}\right) = \mbox{rank}\left((I_n - \boldsymbol{P}_{\boldsymbol{X}})\boldsymbol{Z}_k\right) = \mbox{ED}^{ub}_{k}. \] Using the well-known result that the rank of a matrix sum cannot exceed the sum of the ranks of the summands, we have that $\mbox{ED}^{ub}_{k} \leq \sum_{l=1}^{p_k}\mbox{ED}^{ub}_{k_l}$. In general, however, this is a strict inequality, since in most cases \[ \mathcal{C}\left(\boldsymbol{Z}_k\boldsymbol{G}_k\boldsymbol{\Lambda}_{k_u}\right) \cap \mathcal{C}\left(\boldsymbol{Z}_k\boldsymbol{G}_k\boldsymbol{\Lambda}_{k_v}\right) \neq \left\{\boldsymbol{0}\right\}\;\;\;(u \neq v), \] where $\mathcal{C}\left(\boldsymbol{A}\right)$ denotes the linear space spanned by the columns of $\boldsymbol{A}$ \cite[see, e.g., Theorem 18.5.7 in][]{Harville1997}. Intuitively, we can interpret this as a sort of competition among the $p_k$ ``elements'' associated with the $k$-th random component. 
The $\mbox{ED}_{k_l}$ cannot vary ``freely'' between $0$ and $\mbox{ED}^{ub}_{k_l}$: they have to fulfil the constraint that their sum does not exceed $\mbox{ED}^{ub}_{k}$. \subsection{Computational aspects}\label{computaspects} From a computational point of view, the evaluation of the expression given in eqn. (\ref{ed_per_component}) can be very costly. However, this computation can be eased by using the result given in (\ref{j_t_result}). For our purpose, it is easy to show that \begin{equation*} \boldsymbol{G}_k\boldsymbol{Z}_k^{\top}\boldsymbol{P}\boldsymbol{Z}_k\boldsymbol{G}_k\boldsymbol{\Lambda}_{k_l} = \left(\boldsymbol{G}_k - \boldsymbol{C}^{*}_{kk}\right)\boldsymbol{\Lambda}_{k_l}. \label{equivalence_hat_matrices} \end{equation*} We note that $\boldsymbol{C}^{*}$ (i.e., the inverse of $\boldsymbol{C}$ in (\ref{MX:linearsystem_1})) is computed in order to estimate $\widehat{\boldsymbol{\beta}}$ and $\widehat{\boldsymbol{\alpha}}$ (see Appendix \ref{algorithm}). In addition, in those cases where $\boldsymbol{\Lambda}_{k_l}$ is diagonal, only the diagonal elements of $\left(\boldsymbol{G}_k - \boldsymbol{C}^{*}_{kk}\right)$ need to be explicitly obtained. This considerably reduces the number of operations required, and therefore the computing time.
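The shortcut above can be illustrated numerically. The sketch below (Python/numpy, synthetic matrices, $\phi = 1$; illustrative only) forms the coefficient matrix of Henderson's mixed model equations, inverts it, and checks that the random-effects block of the inverse reproduces $\boldsymbol{G} - \boldsymbol{G}\boldsymbol{Z}^{\top}\boldsymbol{P}\boldsymbol{Z}\boldsymbol{G}$:

```python
# Numerical illustration of G Z'PZ G = G - C*_zz, where C*_zz is the
# random-effects block of the inverse of Henderson's mixed model equations
# coefficient matrix. Synthetic matrices; phi = 1, R = I.
import numpy as np

rng = np.random.default_rng(2)
n, q = 40, 6
X = np.column_stack([np.ones(n), rng.normal(size=n)])
Z = rng.normal(size=(n, q))
D1 = np.diff(np.eye(q), n=1, axis=0)
G_inv = D1.T @ D1 + 0.5 * np.eye(q)       # an arbitrary PD precision matrix
G = np.linalg.inv(G_inv)

# Henderson's coefficient matrix C (phi = 1)
C = np.block([[X.T @ X, X.T @ Z],
              [Z.T @ X, Z.T @ Z + G_inv]])
C_star = np.linalg.inv(C)
C_zz = C_star[X.shape[1]:, X.shape[1]:]   # block for the random effects

V = np.eye(n) + Z @ G @ Z.T
Vinv = np.linalg.inv(V)
P = Vinv - Vinv @ X @ np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv)

assert np.allclose(G @ Z.T @ P @ Z @ G, G - C_zz)
```

Post-multiplying both sides by $\boldsymbol{\Lambda}_{k_l}$ gives exactly the displayed identity, so the trace in eqn. (\ref{ed_per_component}) needs only $\boldsymbol{C}^{*}_{kk}$, which is already available from the estimation step.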
\section{Penalised smoothing and the SOP method}\label{OP_WWW} This Section discusses several situations in the P-spline framework where estimation can be approached using the SOP method. As will be seen, the method can be used whenever there are multiple penalties acting on the same coefficients. Anisotropic tensor-product P-splines are an example of overlapping penalties, and they have been extensively discussed in the paper by \cite{MXRA2015}. However, multiple penalties arise in a broader class of situations. We describe here two of those: spatially-adaptive P-splines and P-splines for hierarchical curve data. \subsection{Spatially-adaptive P-splines}\label{OP_Adaptive} Consider a regression problem \begin{equation} y_i = f\left(x_i\right) + \epsilon_i \;\;\;\;\; i = 1,\ldots, n, \label{MX:PsplineM} \end{equation} where $f$ is a smooth and unknown function and $\epsilon_i \sim N\left(0, \phi\right)$. In the P-spline framework \citep{Eilers1996}, the unknown function $f(x)$ is approximated by a linear combination of $d$ B-spline basis functions, i.e., $f(x) = \sum_{j=1}^{d}\theta_{j}B_{j}\left(x\right)$. In matrix notation, model (\ref{MX:PsplineM}) is thus expressed as \begin{equation} \boldsymbol{y} = \boldsymbol{B}\boldsymbol{\theta} + \boldsymbol{\varepsilon},\;\;\boldsymbol{\varepsilon} \sim N\left(\boldsymbol{0}, \phi\boldsymbol{I}_n\right), \label{MX:PsplineMM} \end{equation} where $\boldsymbol{\theta} = \left(\theta_1, \theta_2, \ldots, \theta_d\right)^{\top}$ and $\boldsymbol{B}$ is a B-spline regression matrix of dimension $n \times d$, i.e., $b_{ij} = B_{j}\left(x_i\right)$ is the $j$-th B-spline evaluated at $x_i$. 
Smoothness is achieved by imposing a penalty on the regression coefficients $\boldsymbol{\theta}$ in the form \begin{equation} \lambda\sum_{k = q + 1}^{d}\left(\Delta^q\theta_k\right)^2 = \lambda \boldsymbol{\theta}^{\top}\boldsymbol{D}_q^{\top}\boldsymbol{D}_q\boldsymbol{\theta}, \label{MX:1DPenalty} \end{equation} where $\lambda$ is the smoothing parameter, and $\Delta^q$ forms differences of order $q$ on adjacent coefficients, i.e., $\Delta\theta_k = \theta_k - \theta_{k-1}$, $\Delta^2\theta_k = \Delta\left(\Delta\theta_k\right) = \theta_k - \theta_{k-1} - \left(\theta_{k-1} - \theta_{k-2} \right) = \theta_k - 2\theta_{k-1} + \theta_{k-2}$, and so on for higher $q$. Finally, $\boldsymbol{D}_q$ is simply the matrix representation of $\Delta^q$.
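The equivalence between the sum of squared differences and the quadratic form in eqn.~(\ref{MX:1DPenalty}) can be checked in a few lines. The following Python/numpy sketch (illustrative values only) builds $\boldsymbol{D}_q$ from the identity matrix and verifies the identity:

```python
# The difference operator Delta^q and its matrix D_q: check that
# lambda * theta' D_q' D_q theta equals lambda * sum_k (Delta^q theta_k)^2.
import numpy as np

d, q = 8, 2
D_q = np.diff(np.eye(d), n=q, axis=0)    # (d - q) x d matrix version of Delta^q

theta = np.random.default_rng(3).normal(size=d)
lam = 1.5
penalty_sum = lam * np.sum(np.diff(theta, n=q) ** 2)
penalty_quad = lam * theta @ D_q.T @ D_q @ theta
assert np.isclose(penalty_sum, penalty_quad)
```

Applying `np.diff` row-wise to the identity matrix is a standard way of obtaining the matrix representation of $\Delta^q$.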
\begin{figure}
\caption{Graphical representation of differences of order 2 on adjacent coefficients of cubic B-splines basis functions. Note the local and ordered nature of these differences.}
\label{diff_penalty_1d}
\end{figure}
As can be observed in eqn.~(\ref{MX:1DPenalty}), the same smoothing parameter $\lambda$ applies to all coefficient differences, irrespective of their location (see Figure~\ref{diff_penalty_1d}). Thus, the model assumes that the same amount of smoothing is needed across the whole domain of the covariate. Adaptive P-splines \citep[see, e.g., ][among others]{Krivobokova08, Ruppert00} relax this assumption. The idea is simple: replace the global smoothing parameter with smoothing parameters that vary locally with the covariate value. This can be accomplished by specifying a different smoothing parameter for each coefficient difference \citep{Ruppert00, Wood2011} \begin{equation} \sum_{k = q + 1}^{d}\lambda_{k-q}\left(\Delta^q\theta_k\right)^2 = \boldsymbol{\theta}^{\top}\boldsymbol{D}_q^{\top}\mbox{diag}(\boldsymbol{\lambda})\boldsymbol{D}_q\boldsymbol{\theta}, \label{MX:AdaptPenalty} \end{equation} where $\boldsymbol{\lambda} = \left(\lambda_1,\ldots,\lambda_{d-q}\right)^{\top}$. Note that this approach would imply as many smoothing parameters as coefficient differences (i.e., $d - q$), which could lead to under-smoothing and unstable computations. Given the local and ordered nature of the coefficient differences (see Figure~\ref{diff_penalty_1d}), we may model the smoothing parameters $\lambda_k$ as a smooth function of $k$ (its position) and use B-splines for this purpose (here no penalty is assumed) \begin{equation} \boldsymbol{\lambda} = \boldsymbol{\Psi}\boldsymbol{\xi}, \label{MX:Adapt_smooth} \end{equation} where $\boldsymbol{\Psi}$ is a B-spline regression matrix of dimension $(d-q)\times p$ with $p < (d-q)$, and $\boldsymbol{\xi} = \left(\xi_1,\ldots,\xi_{p}\right)^{\top}$ is the new vector of smoothing parameters. 
Performing some simple algebraic operations, it can be shown that the \textit{adaptive} penalty~(\ref{MX:AdaptPenalty}) can be written as \begin{equation} \boldsymbol{\theta}^{\top}\left(\sum_{l=1}^{p}\xi_{l}\boldsymbol{D}_q^{\top} \mbox{diag}\left(\boldsymbol{\psi}_{l}\right)\boldsymbol{D}_q\right)\boldsymbol{\theta}, \label{MX:AdaptPenalty_smooth} \end{equation} where $\boldsymbol{\psi}_{l}$ denotes the $l$-th column of $\boldsymbol{\Psi}$. Note that under this adaptive penalty, all coefficients are penalised by multiple smoothing parameters, i.e., there are overlapping penalties.
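The decomposition in eqn.~(\ref{MX:AdaptPenalty_smooth}) follows from $\mbox{diag}(\boldsymbol{\Psi\xi}) = \sum_l \xi_l\,\mbox{diag}(\boldsymbol{\psi}_l)$, and can be verified numerically. In the sketch below (Python/numpy, illustrative only), a row-stochastic random matrix stands in for the B-spline basis $\boldsymbol{\Psi}$ — the only property used is that its rows sum to $1$:

```python
# Check eqn (AdaptPenalty_smooth): with lambda = Psi xi, the adaptive penalty
# decomposes into p overlapping quadratic penalties. Psi is a stand-in
# row-stochastic matrix (rows sum to 1, like a B-spline basis).
import numpy as np

rng = np.random.default_rng(4)
d, q, p = 12, 2, 3
D = np.diff(np.eye(d), n=q, axis=0)
Psi = rng.random((d - q, p))
Psi /= Psi.sum(axis=1, keepdims=True)    # rows sum to 1
xi = np.array([0.5, 2.0, 8.0])
theta = rng.normal(size=d)

lam = Psi @ xi                            # locally varying smoothing parameters
lhs = theta @ D.T @ np.diag(lam) @ D @ theta
rhs = sum(xi[l] * theta @ D.T @ np.diag(Psi[:, l]) @ D @ theta
          for l in range(p))
assert np.isclose(lhs, rhs)

# When all xi_l equal a common lambda, the adaptive penalty collapses to the
# standard one, because the rows of Psi add up to 1:
common = 3.0
P_Ad = sum(common * D.T @ np.diag(Psi[:, l]) @ D for l in range(p))
assert np.allclose(P_Ad, common * D.T @ D)
```

The second check anticipates the reduction to the non-adaptive penalty discussed in the next subsection.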
\subsubsection{Mixed model reparametrisation} Estimation of the P-spline model (\ref{MX:PsplineMM}) subject to the adaptive penalty defined in~(\ref{MX:AdaptPenalty_smooth}) can be carried out based on the connection between P-splines and mixed models \citep[e.g., ][]{Currie2002, Wand2003}. It is easy to show that the null space (i.e., the unpenalised function space) of the adaptive penalty matrix $\boldsymbol{P}_{Ad} = \sum_{l=1}^{p}\xi_{l}\boldsymbol{D}_q^{\top} \mbox{diag}\left(\boldsymbol{\psi}_{l}\right)\boldsymbol{D}_q$ is independent of $\boldsymbol{\xi}$. In addition, note that when $\xi_u = \xi_v = \lambda$ $\left(\forall\;u, v\right)$ then \[ \boldsymbol{P}_{Ad} = \sum_{l=1}^{p}\xi_{l}\boldsymbol{D}_q^{\top} \mbox{diag}\left(\boldsymbol{\psi}_{l}\right)\boldsymbol{D}_q = \lambda\boldsymbol{D}_q^{\top} \left(\sum_{l=1}^{p}\mbox{diag}\left(\boldsymbol{\psi}_{l}\right)\right)\boldsymbol{D}_q = \lambda\boldsymbol{D}_q^{\top}\boldsymbol{D}_q = \boldsymbol{P}. \] This is a consequence of the rows of a B-spline regression matrix summing to $1$. Thus, the null space of $\boldsymbol{P}_{Ad}$ is the same as that of $\boldsymbol{P}$. Different reparametrisations of P-spline models have been suggested in the literature \citep[see, e.g.,][]{Currie2002, Eilers1999}, all aiming to decompose the model into the unpenalised and the penalised part. The consequence of this decomposition is that the penalty matrix of the reparametrised P-spline model is of full rank, and so is the precision matrix of the corresponding mixed model. For our application, we use the proposal given in \cite{Eilers1999}. As will be seen, this approach gives rise to diagonal precision matrices. As discussed in Section \ref{computaspects}, this is very convenient from a computational point of view. 
Using \cite{Eilers1999}'s transformation, model (\ref{MX:PsplineMM}) is re-expressed as \begin{equation*} \boldsymbol{y} = \boldsymbol{B}\boldsymbol{\theta} + \boldsymbol{\varepsilon}= \boldsymbol{X}\boldsymbol{\beta} + \boldsymbol{Z}\boldsymbol{\alpha} + \boldsymbol{\varepsilon}, \end{equation*}
where $\boldsymbol{X}= \left[\boldsymbol{1}_n|\boldsymbol{x}|\ldots|\boldsymbol{x}^{(q-1)}\right]$ and $\boldsymbol{Z} = \boldsymbol{B}\boldsymbol{D}_q^{\top}\left(\boldsymbol{D}_q\boldsymbol{D}_q^{\top}\right)^{-1}$, and the precision (penalty) matrix of the vector of random (penalised) coefficients $\boldsymbol{\alpha}$ becomes \begin{equation} \boldsymbol{G}^{-1} = \frac{1}{\phi}\boldsymbol{F}^{\top}\boldsymbol{P}_{Ad}\boldsymbol{F} = \sum_{l=1}^{p}\sigma_l^{-2}\mbox{diag}\left(\boldsymbol{\psi}_{l}\right) = \sum_{l=1}^{p}\sigma_l^{-2}\widetilde{\boldsymbol{\Lambda}}_l, \label{MX:Adaptive_G} \end{equation} where $\boldsymbol{F} = \boldsymbol{D}_q^{\top}\left(\boldsymbol{D}_q\boldsymbol{D}_q^{\top}\right)^{-1}$, $\widetilde{\boldsymbol{\Lambda}}_l = \mbox{diag}\left(\boldsymbol{\psi}_{l}\right)$, and $\sigma_l^2 = \phi/\xi_l$. Thus, the precision matrix is linear in the precision parameters $\sigma_l^{-2}$, and the SOP method can therefore be used.
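The reason the reparametrised precision matrix in eqn.~(\ref{MX:Adaptive_G}) is diagonal is that $\boldsymbol{D}_q\boldsymbol{F} = \boldsymbol{I}_{d-q}$ for $\boldsymbol{F} = \boldsymbol{D}_q^{\top}(\boldsymbol{D}_q\boldsymbol{D}_q^{\top})^{-1}$, so conjugating any $(d-q)\times(d-q)$ matrix by $\boldsymbol{D}_q\boldsymbol{F}$ leaves it unchanged. A quick numerical confirmation (Python/numpy, illustrative dimensions):

```python
# Why the precision matrix in the Eilers (1999) reparametrisation is diagonal:
# with F = D_q' (D_q D_q')^{-1} we have D_q F = I, hence
# F' D_q' diag(psi) D_q F = diag(psi).
import numpy as np

d, q = 10, 2
D = np.diff(np.eye(d), n=q, axis=0)
F = D.T @ np.linalg.inv(D @ D.T)
assert np.allclose(D @ F, np.eye(d - q))          # D_q F = I_{d-q}

psi = np.random.default_rng(5).random(d - q)      # one column of Psi
assert np.allclose(F.T @ D.T @ np.diag(psi) @ D @ F, np.diag(psi))
```

Applied term by term to $\boldsymbol{P}_{Ad}$, this gives $\boldsymbol{F}^{\top}\boldsymbol{P}_{Ad}\boldsymbol{F} = \sum_l \xi_l\,\mbox{diag}(\boldsymbol{\psi}_l)$, which is the diagonal precision matrix of eqn.~(\ref{MX:Adaptive_G}) up to the factor $1/\phi$.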
We note that the model has a single random component ($c = 1$). The reparametrisation ensures that $\mbox{rank}(\boldsymbol{X}, \boldsymbol{Z}) = \mbox{rank}(\boldsymbol{X}) + \mbox{rank}(\boldsymbol{Z})$. Thus, $\mbox{rank}(\boldsymbol{X}, \boldsymbol{Z}\widetilde{\boldsymbol{\Lambda}}_l) = \mbox{rank}(\boldsymbol{X}) + \mbox{rank}(\boldsymbol{Z}\widetilde{\boldsymbol{\Lambda}}_l) > \mbox{rank}(\boldsymbol{X})$. Exploiting the fact that $\boldsymbol{G}$ and $\widetilde{\boldsymbol{\Lambda}}_l$ are diagonal, and thus commute, condition (i) of Theorem \ref{the2} is satisfied. \subsection{P-splines for hierarchical curve data}\label{OP_SSC} For simplicity, let us assume balanced hierarchical curve data. Our data consist of $m$ individuals, each with $s$ measurements at times $\boldsymbol{t} = (t_1, t_2,\ldots,t_s)$. Our interest is focused on \begin{equation} y_{ij} = f\left(t_i\right) + g_j\left(t_i\right) + \varepsilon_{ij} \;\; 1\leq i \leq s,\;1\leq j \leq m, \label{SSC_model_ind} \end{equation} where $y_{ij}$ is the response variable on the $j$-th subject at time $t_{i}$, $f$ is a function describing the population effect, $g_j$ are random functions measuring the deviation of the $j$-th subject from the population effect, and $\varepsilon_{ij} \sim N\left(0, \phi\right)$. A simple model would consist of a parametric specification for $f$ and $g_j$, e.g., $f\left(t\right) = \beta_0 + \beta_1t$ and $g_j\left(t\right) = \alpha_{0j} + \alpha_{1j}t$, with $\boldsymbol{\alpha}_0 = \left(\alpha_{01}, \ldots, \alpha_{0m}\right)^{\top} \sim N\left(\boldsymbol{0}, \sigma_{0}^2\boldsymbol{I}_m\right)$ and $\boldsymbol{\alpha}_1 = \left(\alpha_{11}, \ldots, \alpha_{1m}\right)^{\top} \sim N\left(\boldsymbol{0}, \sigma_{1}^2\boldsymbol{I}_m\right)$.
A more flexible approach consists in assuming $f$ and $g_j$ smooth and unknown. Important contributions in the P-spline framework can be found in \cite{Durban05} and \cite{Ruppert2003}. Both approaches model $f$ and $g_j$ by means of truncated line bases, with estimation based on penalised methods and linear mixed model techniques. More recently, \cite{Djeundje10} extended these models, proposing the inclusion of an extra penalty for the individual curve coefficients. The authors argue that this extra penalty is needed to address identifiability issues when estimating the population effect. Under the so-called \texttt{M0} model in \cite{Djeundje10}'s paper, model (\ref{SSC_model_ind}) is expressed in matrix notation as \[ \boldsymbol{y}_{\cdot j} = \underbrace{\boldsymbol{B}\boldsymbol{\theta}}_{f\left(\boldsymbol{t}\right)} + \underbrace{\breve{\boldsymbol{B}}\breve{\boldsymbol{\theta}}_j}_{g_j\left(\boldsymbol{t}\right)} + \boldsymbol{\varepsilon}_{\cdot j}, \] where $\boldsymbol{y}_{\cdot j} = \left(y_{1j},\ldots, y_{sj}\right)^{\top}$ is the response vector for the $j$-th individual (the same holds for $\boldsymbol{\varepsilon}_{\cdot j}$), and $\boldsymbol{B}$ and $\breve{\boldsymbol{B}}$ are B-spline regression matrices of possibly different size for, respectively, the population curve and the individual deviation. The vector $\boldsymbol{\theta}$ is assumed fixed, but subject to a $q$-th order penalty of the form $\boldsymbol{P} = \lambda_1\boldsymbol{D}_q^{\top}\boldsymbol{D}_q$, and $\breve{\boldsymbol{\theta}}_j$ is a random vector with distribution $N\left(\boldsymbol{0}, \phi\breve{\boldsymbol{P}}^{-1}\right)$ where \begin{equation} \breve{\boldsymbol{P}} = \lambda_2\breve{\boldsymbol{D}}_{\breve{q}}^{\top}\breve{\boldsymbol{D}}_{\breve{q}} + \lambda_3\boldsymbol{I}_{\breve{d}}. 
\label{SSC_P_ind} \end{equation} The first term $\lambda_2\breve{\boldsymbol{D}}_{\breve{q}}^{\top}\breve{\boldsymbol{D}}_{\breve{q}}$ is responsible for the smoothness of the individual curves, whereas $\lambda_3\boldsymbol{I}_{\breve{d}}$ addresses the identifiability aspect \citep[see][for more details]{Djeundje10}. Note that each random effect is shrunk (penalised) by both smoothing parameters $\lambda_2$ and $\lambda_3$, and thus the precision matrix $\breve{\boldsymbol{G}}^{-1} = \frac{1}{\phi}\breve{\boldsymbol{P}}$ is linear in the precision parameters \begin{equation} \breve{\boldsymbol{G}}^{-1} = \sum_{l = 2}^{3}\sigma_l^{-2}\breve{\boldsymbol{\Lambda}}_l, \label{SSC_G_ind} \end{equation} where $\sigma_2^2 = \phi/\lambda_2$, $\sigma_3^2 = \phi/\lambda_3$, $\breve{\boldsymbol{\Lambda}}_2 = \breve{\boldsymbol{D}}_{\breve{q}}^{\top}\breve{\boldsymbol{D}}_{\breve{q}}$, and $\breve{\boldsymbol{\Lambda}}_3 = \boldsymbol{I}_{\breve{d}}$.
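The two-term penalty in eqns. (\ref{SSC_P_ind}) and (\ref{SSC_G_ind}) can be assembled in a few lines. The sketch below (Python/numpy, illustrative values and dimensions) builds both parametrisations and checks that the ridge term $\lambda_3\boldsymbol{I}_{\breve{d}}$ makes the precision matrix of full rank:

```python
# Sketch of the individual-curve precision matrix: a smoothness term plus a
# ridge term, in both the smoothing-parameter and the precision-parameter
# (sigma_l^2 = phi / lambda_l) parametrisations. Illustrative values only.
import numpy as np

d_brv, q_brv, phi = 7, 2, 1.0
D_brv = np.diff(np.eye(d_brv), n=q_brv, axis=0)
lam2, lam3 = 4.0, 0.1
P_brv = lam2 * D_brv.T @ D_brv + lam3 * np.eye(d_brv)

# Equivalent precision-parameter form: sigma_l^2 = phi / lambda_l
sig2_2, sig2_3 = phi / lam2, phi / lam3
G_brv_inv = D_brv.T @ D_brv / sig2_2 + np.eye(d_brv) / sig2_3
assert np.allclose(G_brv_inv, P_brv / phi)

# The ridge term makes the precision matrix of full rank
assert np.linalg.matrix_rank(P_brv) == d_brv
```

Without the ridge term the precision matrix would have rank $\breve{d} - \breve{q}$ and the variance-covariance matrix $\breve{\boldsymbol{G}}$ would not exist.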
In a more compact way, we express the model for the whole sample as \begin{equation} \boldsymbol{y} = [\boldsymbol{1}_{m}\otimes\boldsymbol{B}]\boldsymbol{\theta} + [\boldsymbol{I}_{m}\otimes\breve{\boldsymbol{B}}]\breve{\boldsymbol{\theta}} + \boldsymbol{\varepsilon}, \label{SSC_model_pop} \end{equation} where $\otimes$ denotes the Kronecker product, $\boldsymbol{y} = \left(\boldsymbol{y}_{\cdot 1}^{\top},\ldots, \boldsymbol{y}_{\cdot m}^{\top}\right)^{\top}$, $\breve{\boldsymbol{\theta}} = \left(\breve{\boldsymbol{\theta}}_{1}^{\top},\ldots, \breve{\boldsymbol{\theta}}_{m}^{\top}\right)^{\top}$, $\boldsymbol{\varepsilon} = \left(\boldsymbol{\varepsilon}_{\cdot 1}^{\top},\ldots, \boldsymbol{\varepsilon}_{\cdot m}^{\top}\right)^{\top}$ and \[ \breve{\boldsymbol{\theta}} \sim N\left(\boldsymbol{0}, \boldsymbol{I}_{m}\otimes\breve{\boldsymbol{G}}\right). \] One last step is needed in order to apply the SOP method for the estimation of model (\ref{SSC_model_pop}): the decomposition of the population effect into the unpenalised and the penalised part. We use here the approach based on the eigenvalue decomposition (EVD) of the penalty. Let $\boldsymbol{D}^{\top}_q\boldsymbol{D}_q = \boldsymbol{U}\boldsymbol{\Sigma}\boldsymbol{U}^{\top}$ be the EVD of $\boldsymbol{D}_q^{\top}\boldsymbol{D}_q$. Here $\boldsymbol{U}$ denotes the matrix of eigenvectors and $\boldsymbol{\Sigma}$ the diagonal matrix of eigenvalues. Let us also denote by $\boldsymbol{U}_{+}$ ($\boldsymbol{\Sigma}_{+}$) and $\boldsymbol{U}_{0}$ ($\boldsymbol{\Sigma}_{0}$) the sub-matrices corresponding to the non-zero and zero eigenvalues, respectively. 
In this case, model (\ref{SSC_model_pop}) is re-expressed as \[ \boldsymbol{y} = \boldsymbol{X}\boldsymbol{\beta} + \boldsymbol{Z}\boldsymbol{\alpha} + \boldsymbol{\varepsilon}, \] where $\boldsymbol{\beta} =\boldsymbol{U}_{+}^{\top}\boldsymbol{\theta}$, $\boldsymbol{\alpha} = \left(\left(\boldsymbol{U}_{0}^{\top}\boldsymbol{\theta}\right)^{\top}, \breve{\boldsymbol{\theta}}^{\top}\right)^{\top}$, $\boldsymbol{X} = [\boldsymbol{1}_m\otimes\boldsymbol{B}\boldsymbol{U}_{0}]$ and $\boldsymbol{Z} = [\boldsymbol{1}_m\otimes\boldsymbol{B}\boldsymbol{U}_{+}:\boldsymbol{I}_{m}\otimes\breve{\boldsymbol{B}}]$. Finally, $\boldsymbol{\alpha} \sim N\left(\boldsymbol{0}, \boldsymbol{G}\right)$, where \[ \boldsymbol{G}^{-1} = \begin{pmatrix} \frac{1}{\sigma_{1}^2}\boldsymbol{\Sigma}_ {+} & \boldsymbol{0}_{d\times\left(m \breve{d}\right)}\\ \boldsymbol{0}_{\left(m \breve{d}\right)\times d} & \boldsymbol{I}_{m}\otimes\breve{\boldsymbol{G}}^{-1} \end{pmatrix} = \sum_{l=1}^{3}\sigma_l^{-2}\widetilde{\boldsymbol{\Lambda}}_{l}, \] with $\sigma_{l}^2 = \frac{\phi}{\lambda_l}$ and \[ \widetilde{\boldsymbol{\Lambda}}_{1} = \begin{pmatrix} \boldsymbol{\Sigma}_{+} & \boldsymbol{0}_{d\times\left(m \breve{d}\right)}\\ \boldsymbol{0}_{\left(m \breve{d}\right)\times d} & \boldsymbol{0}_{\left(m \breve{d}\right)\times\left(m \breve{d}\right)} \end{pmatrix} \qquad \widetilde{\boldsymbol{\Lambda}}_{2} = \begin{pmatrix} \boldsymbol{0}_{d \times d} & \boldsymbol{0}_{d\times\left(m \breve{d}\right)}\\ \boldsymbol{0}_{\left(m \breve{d}\right)\times d} & \boldsymbol{I}_{m} \otimes \breve{\boldsymbol{\Lambda}}_2 \end{pmatrix} \qquad \widetilde{\boldsymbol{\Lambda}}_{3} = \begin{pmatrix} \boldsymbol{0}_{d \times d} & \boldsymbol{0}_{d \times\left(m \breve{d}\right)}\\ \boldsymbol{0}_{\left(m \breve{d}\right)\times d} & \boldsymbol{I}_{m} \otimes \breve{\boldsymbol{\Lambda}}_3 \end{pmatrix}. \] The precision matrix is linear in the precision parameters, and thus appropriate for the SOP method. 
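The assembly of the full precision matrix from the padded components $\widetilde{\boldsymbol{\Lambda}}_1$, $\widetilde{\boldsymbol{\Lambda}}_2$, $\widetilde{\boldsymbol{\Lambda}}_3$ can be sketched numerically. In the Python/numpy snippet below (small illustrative dimensions; the leading block is written generically with dimension `d_pen` for the penalised part of the population curve), the assembled matrix is checked against the two block-diagonal precision components:

```python
# Sketch of the two-component structure for hierarchical curve data: the full
# precision matrix assembled from the three zero-padded components equals the
# block-diagonal of G_1^{-1} and G_2^{-1}. Illustrative dimensions only.
import numpy as np

m, d_pen, d_brv = 3, 4, 5                 # individuals, pop. block, indiv. block
Sigma_plus = np.diag(np.arange(1.0, d_pen + 1))     # stand-in eigenvalues
D_brv = np.diff(np.eye(d_brv), n=2, axis=0)
Lam2, Lam3 = D_brv.T @ D_brv, np.eye(d_brv)
sig2 = [0.5, 2.0, 1.5]                    # sigma_1^2, sigma_2^2, sigma_3^2

def pad(block, offset, total):
    """Embed `block` on the diagonal of a total x total zero matrix."""
    out = np.zeros((total, total))
    r = block.shape[0]
    out[offset:offset + r, offset:offset + r] = block
    return out

total = d_pen + m * d_brv
Lt1 = pad(Sigma_plus, 0, total)
Lt2 = pad(np.kron(np.eye(m), Lam2), d_pen, total)
Lt3 = pad(np.kron(np.eye(m), Lam3), d_pen, total)
G_inv = Lt1 / sig2[0] + Lt2 / sig2[1] + Lt3 / sig2[2]

G1_inv = Sigma_plus / sig2[0]
G2_inv = np.kron(np.eye(m), Lam2) / sig2[1] + np.kron(np.eye(m), Lam3) / sig2[2]
assert np.allclose(G_inv[:d_pen, :d_pen], G1_inv)
assert np.allclose(G_inv[d_pen:, d_pen:], G2_inv)
```

Note how the second and third components overlap on the same block of coefficients, which is precisely the situation the SOP method is designed for.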
There are, in this case, two random components, modelling, respectively, the population curve and the individual deviations, with \begin{align*} \boldsymbol{Z}_1 & = \boldsymbol{1}_m\otimes\boldsymbol{B}\boldsymbol{U}_{+} & \mbox{and}\;\;\;\boldsymbol{G}^{-1}_1 & = \sigma_1^{-2}\boldsymbol{\Sigma}_{+},\\ \boldsymbol{Z}_2 & = \boldsymbol{I}_{m}\otimes\breve{\boldsymbol{B}} & \mbox{and}\;\;\;\boldsymbol{G}^{-1}_2 & = \sigma_2^{-2}\boldsymbol{I}_{m} \otimes \breve{\boldsymbol{\Lambda}}_2 + \sigma_3^{-2}\boldsymbol{I}_{m} \otimes \breve{\boldsymbol{\Lambda}}_3 = \sigma_{2}^{-2} \boldsymbol{I}_{m} \otimes \breve{\boldsymbol{D}}_{\breve{q}}^{\top}\breve{\boldsymbol{D}}_{\breve{q}} + \sigma_{3}^{-2}\boldsymbol{I}_{m}\otimes\boldsymbol{I}_{\breve{d}}\;. \end{align*} Recall that $\boldsymbol{X} = \boldsymbol{1}_m\otimes\boldsymbol{B}\boldsymbol{U}_{0}$, and note that $\mbox{rank}\left(\boldsymbol{1}_m\otimes\boldsymbol{B}\boldsymbol{U}_{0}\right) = \mbox{rank}\left(\boldsymbol{B}\boldsymbol{U}_{0}\right)$. We now show that condition (i) of Theorem \ref{the2} is satisfied for all variance parameters: \begin{description} \item[$\sigma_1^2$:] By construction, $\boldsymbol{\Sigma}_{+}$ is positive definite and of full rank, and $\mbox{rank}\left(\boldsymbol{X}, \boldsymbol{Z}_1\right) = \mbox{rank}\left(\boldsymbol{X}\right) + \mbox{rank}\left(\boldsymbol{Z}_1\right)$. Noting that $\mbox{rank}\left(\boldsymbol{X}, \boldsymbol{Z}_1\boldsymbol{G}_1\boldsymbol{\Sigma}_{+}\right) = \mbox{rank}\left(\boldsymbol{X}, \boldsymbol{Z}_1\right) > \mbox{rank}\left(\boldsymbol{X}\right)$, the condition is verified. \item[$\sigma_2^2$:] By e.g. Corollary 18.2.2 in \cite{Harville1997}, $\boldsymbol{G}_2$ and $\boldsymbol{I}_{m} \otimes \breve{\boldsymbol{D}}_{\breve{q}}^{\top}\breve{\boldsymbol{D}}_{\breve{q}}$ commute. 
Thus, as long as $m > 1$, it is easy to show that \[ \mbox{rank}\left(\boldsymbol{X}, \boldsymbol{Z}_2\left(\boldsymbol{I}_{m} \otimes \breve{\boldsymbol{D}}_{\breve{q}}^{\top}\breve{\boldsymbol{D}}_{\breve{q}}\right)\right) = \mbox{rank}\left(\boldsymbol{1}_m\otimes\boldsymbol{B}\boldsymbol{U}_{0}, \boldsymbol{I}_{m}\otimes\breve{\boldsymbol{B}}\breve{\boldsymbol{D}}_{\breve{q}}^{\top}\breve{\boldsymbol{D}}_{\breve{q}}\right) > \mbox{rank}\left(\boldsymbol{B}\boldsymbol{U}_{0}\right). \] \item[$\sigma_3^2$:] Note that $\boldsymbol{G}_2$ and $\boldsymbol{I}_{m}\otimes\boldsymbol{I}_{\breve{d}}$ commute. As before, as long as $m > 1$, it is easy to show that \[ \mbox{rank}\left(\boldsymbol{X}, \boldsymbol{Z}_2\left(\boldsymbol{I}_{m}\otimes\boldsymbol{I}_{\breve{d}}\right)\right) = \mbox{rank}\left(\boldsymbol{1}_m\otimes\boldsymbol{B}\boldsymbol{U}_{0}, \boldsymbol{I}_{m}\otimes\breve{\boldsymbol{B}}\right) > \mbox{rank}\left(\boldsymbol{B}\boldsymbol{U}_{0}\right). \] \end{description} \section{Examples}\label{examples} This Section presents several data examples where the SOP method represents a powerful alternative to existing estimation procedures. We discuss three different analyses: the first two examples are concerned with spatially-adaptive P-splines, but each of them deals with a different situation regarding complexity and aim; the last example is devoted to illustrating our method for the analysis of hierarchical curve data. All computations were performed in (64-bit) \texttt{R} 3.4.4 \citep{R16}, on a computer with a 2.30GHz $\times$ 4 Intel$^{\circledR}$ Core\texttrademark{} i5 processor, 15.6GB of RAM, and the Ubuntu 16.04 LTS operating system. \subsection{Doppler function}\label{doppler_example} For our first example, we consider the Doppler function. This is a common example in the adaptive smoothing literature, and has been discussed by \cite{Ruppert00, Krivobokova08, Tibshirani14}, among others. 
Data are generated according to \[ y_i = \sin\left(4/x_i\right) + 1.5 + \varepsilon_i, \;\;\;i = 1,\ldots, n, \] where $x_i \sim U\left[0,1\right]$, $\varepsilon_i \sim N\left(0, 0.2^2\right)$, and $n = 1000$. For fitting the data, we assume the spatially-adaptive P-spline model discussed in Section \ref{OP_Adaptive}. We compare the performance of the SOP method with that implemented in the \texttt{R}-package \texttt{mgcv}, version \texttt{1.8-23}, and described in \cite{Wood2011}. It is worth noticing that both approaches implement in essence the same adaptive P-spline model; the only difference is the estimation procedure (and, possibly, the reparametrisation). In addition, we also fit the model without assuming an adaptive penalty. In this case, the SOP method reduces to Harville's approach (see Section \ref{HfD}). In all cases, we use $200$ cubic B-splines to represent the smooth function, jointly with second-order differences. For the adaptive approaches, $15$ equally-spaced cubic B-splines are used for the smoothing parameters (see (\ref{MX:Adapt_smooth})). These values are chosen to provide enough flexibility to the model. Under this configuration, there are a total of $15$ variance parameters. Figure \ref{MX_df_true} shows the true simulated Doppler function. Figures \ref{MX_df_SOP_na} and \ref{MX_df_SOP} show, respectively, the estimated curves based on the SOP method without and with an adaptive penalty. Results using \texttt{mgcv} are depicted in Figure \ref{MX_df_mgcv}. As expected, both adaptive approaches perform similarly. With the specified configuration, they are able to capture $7$ cycles of the Doppler function. On the other hand, the non-adaptive approach is able to capture only $4$ cycles, and presents very wiggly estimates, especially on the right-hand side of the covariate domain. In terms of EDs, for the SOP model without adaptive penalty, we obtain a total ED of $95.8$ (out of $200$). 
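For reference, the simulation scheme above can be reproduced in a few lines. The sketch below uses Python with \texttt{numpy}; the paper's own computations are in \texttt{R}, so this is only an illustrative translation, and the seed is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)  # arbitrary seed, for reproducibility only

n = 1000
x = rng.uniform(0.0, 1.0, n)           # x_i ~ U[0, 1]
f = np.sin(4.0 / x) + 1.5              # Doppler mean function
y = f + rng.normal(0.0, 0.2, n)        # eps_i ~ N(0, 0.2^2)

# The oscillation frequency of sin(4/x) grows without bound as x -> 0,
# which is why a single global smoothing parameter under-smooths on the
# right-hand side of the domain and over-smooths on the left.
```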
For models with adaptive smoothing we obtained almost identical results, i.e., $50.2$ (with SOP) and $50.0$ (with \texttt{mgcv}). It is worth remembering that, using the SOP method, the total ED is obtained by adding up the EDs associated with each variance parameter in the model (plus the dimension of the fixed part). These EDs are the denominators of the estimate update expressions of the variance parameters. The gain of the SOP method is clear when we compare the computing times: $1.0$ second with our approach ($0.4$ for the non-adaptive fit), and $45$ seconds using \texttt{mgcv}. \begin{figure}
\caption{For the Doppler function: (a) True function (solid line) and simulated data points (grey points), (b) estimated curve using the SOP method without adaptive penalty (solid line) jointly with $95\%$ approximate confidence intervals (dotted lines), (c) estimated curve using the SOP method and adaptive penalty (solid line) jointly with $95\%$ approximate confidence intervals (dotted lines); and (d) estimated curve using the \texttt{mgcv} package (solid line) jointly with $95\%$ approximate confidence intervals (dotted lines).}
\label{MX_df_true}
\label{MX_df_SOP_na}
\label{MX_df_SOP}
\label{MX_df_mgcv}
\label{Doppler_function_results}
\end{figure} \subsection{X-ray diffraction data} For this example, we use data from an X-ray crystallography radiation scan of a thin layer of indium tin oxide. X-ray crystallography allows the exploration of the molecular and atomic structure of crystals. Crystallographers precisely rotate the crystal by entering its desired orientation while it is illuminated by a beam of X-rays. Depending on the angle, the number of diffracted photons varies, and they are detected and counted by an electronic detector. The data set was analysed by \cite{Davies08, Davies13} and can be found in the \texttt{R}-package \texttt{diffractometry} as \texttt{indiumoxide}. Figure~\ref{mx_x_ray} shows such an X-ray diffraction scan (grey lines). The aim of X-ray diffraction analysis is to determine (a) the signal baseline (and remove it); and (b) the number of peaks (and isolate them for further analysis of their position, height, symmetry, and so forth). This example is solely included to illustrate the potentiality of the method presented in this paper for the analysis of very complex data. For a different modelling approach, see \cite{Camarda16}. Given that the outcome variable represents count data, a Poisson model is adopted \[ E\left[y_i \mid x_i\right] = \exp\left(f\left(x_i\right)\right), \] where $y_i$ and $x_i$ $(i = 1, \ldots,2000)$ denote, respectively, the photon counts and the angle of diffraction. To provide the model with enough flexibility to capture the peaks (see Figure~\ref{mx_x_ray}), we use $200$ cubic B-splines and second-order differences for the function, and $80$ cubic B-splines for the adaptive penalty. Results are shown in Figure~\ref{mx_x_ray}. The results using the \texttt{mgcv} package are almost identical to our proposal, and are not depicted. In this case, our method takes less than $3$ seconds, whereas \texttt{mgcv} is around $750$ times slower ($33$ minutes). 
Regarding the EDs, we obtain a total of $32.1$ and $29.0$ using SOP and \texttt{mgcv}, respectively.
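To make the link with the estimating algorithm in the Appendix concrete, the Poisson log-link model above yields particularly simple working quantities in Step 1: with $g = \log$, one has $g'(\mu) = 1/\mu$ and $\nu(\mu) = \mu$, so $z_i = \log \mu_i + (y_i - \mu_i)/\mu_i$ and $w_{ii} = \mu_i$. A minimal Python sketch (the values below are toy numbers, not the diffraction data):

```python
import numpy as np

# One penalised-IRLS linearisation step for the Poisson log-link model
# E[y | x] = exp(f(x)). For g = log: g'(mu) = 1/mu and nu(mu) = mu, so
# z_i = log(mu_i) + (y_i - mu_i)/mu_i  and  w_ii = mu_i.
def working_quantities(y, mu):
    z = np.log(mu) + (y - mu) / mu   # working response
    w = mu                           # IRLS weights
    return z, w

y = np.array([3.0, 10.0, 250.0])     # toy photon counts
mu = np.array([4.0, 9.0, 260.0])     # current fitted means
z, w = working_quantities(y, mu)
```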
\begin{figure}
\caption{For the X-ray radiation example: estimated smooth effect of the angle of diffraction on the X-ray radiation using the SOP method (solid black line). The grey lines represent the raw data.}
\label{mx_x_ray}
\end{figure} \subsection{Diffusion tensor imaging scan data}\label{DTI_example} Our last example deals with hierarchical curve data. We analyse the \texttt{DTI} dataset, which can be found in the \texttt{R}-package \texttt{refund} \citep{Goldsmith16}. A detailed description of the study and data can be found in \cite{Goldsmith11}, \cite{Goldsmith12} and \cite{Greven17}. In brief, the study aimed at comparing the white matter tracts in patients affected by multiple sclerosis (MS) and healthy individuals. Multiple sclerosis is a disease of the central nervous system that causes lesions in white matter tracts, thus interrupting the travel of nerve impulses to and from the brain and spinal cord. Diffusion tensor imaging (DTI) is a magnetic resonance imaging technique which makes it possible to study the white matter tracts by tracing the diffusion of water in the brain. From DTI scans, fractional anisotropy (FA) measurements can be extracted. FA is related to water diffusion, and thus to MS diagnosis and progression. The \texttt{DTI} dataset contains FA measurements of healthy and diseased individuals, recorded at several locations along the callosal fiber tract (CCA) and the right corticospinal tract in the brain. In Figure \ref{DTI_raw} the observed FA measurements at different tract locations in the CCA are shown, separately for cases and controls, i.e., individuals affected and not affected by MS. Each line in these plots represents an individual, and only the first visit is considered. Note that, in general, MS patients present lower FA measurements than healthy individuals.
For illustration purposes, we present two different analyses and comparisons. We first focus our interest on the subgroup of individuals affected with MS. In this group, there are a total of $m = 99$ individuals, each with $s = 93$ FA measurements at different CCA tract locations. The SOP method is used to estimate the model described in Section \ref{OP_SSC} and presented in \cite{Djeundje10}. To compare results and computing times, the code associated with the paper by \cite{Djeundje10} is also tested. For this example both implementations take advantage of the array structure of the data: Generalised Linear Array Models \citep[GLAM,][]{Currie2006} are used to efficiently compute the inner products for the Henderson equations (eqn. (\ref{MX:linearsystem})). Here, $43$ cubic B-spline basis functions are used for the population curve, and $23$ for the individual curves. This configuration gives rise to a model with $2320$ ($ = 43 + 23 \times 99$) coefficients (both random and fixed). The SOP method needs about $150$ seconds to fit the model, and \cite{Djeundje10}'s code is $14$ times slower. We note that the computational time can be further improved if the sparse structure of the matrices involved in the model is exploited. Using the \texttt{R}-package \texttt{Matrix}, we are able to reduce the computing time using SOP to $35$ seconds. Figure \ref{DTI_results_pop} shows the estimated population effect using both approaches, which provide very similar results. $95\%$ pointwise confidence intervals are calculated by means of the full-sandwich standard errors proposed by \cite{Heckman13} but adapted to our case. Figure \ref{DTI_results_ind} shows, for several MS patients, the estimated (and observed) FA profiles. In terms of EDs, we obtain $35.03$ (out of $43$) for the population curve (including the unpenalised or fixed part), and a total of $2025.78$ (out of $2275$ ($= 23 \times 99 - 2$)) for all individual curves. 
We note that this total corresponds to the sum of the EDs associated with the variances $\sigma_2^2$ and $\sigma_3^2$ involved in the modelling of these individual curves (see expressions (\ref{SSC_P_ind}) and (\ref{SSC_G_ind}) for details). More precisely, using the SOP method we obtain an ED of $870.44$ for $\sigma_2^2$ and of $1155.34$ for $\sigma_3^2$.
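The ED accounting in this example can be verified directly; the following is pure arithmetic on the numbers quoted above:

```python
# Bookkeeping check for the DTI analysis: the total ED reported for the
# individual curves is the sum of the EDs tied to sigma_2^2 and
# sigma_3^2, and the coefficient counts follow from the basis sizes.
ed_sigma2, ed_sigma3 = 870.44, 1155.34
total_ed_individual = ed_sigma2 + ed_sigma3

assert abs(total_ed_individual - 2025.78) < 1e-9
assert 23 * 99 - 2 == 2275       # maximal ED for the individual curves
assert 43 + 23 * 99 == 2320      # total number of model coefficients
```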
\begin{figure}
\caption{For the diffusion tensor imaging scan data: (a) observed FA values along the CCA tract. Left: healthy controls. Right: MS patients; (b) estimated population (group) FA profile for individuals affected with MS. Solid red line: SOP method. Dotted blue line: \cite{Djeundje10}'s code. The dotted red lines are the pointwise $95\%$ confidence intervals based on the full-sandwich standard errors proposed by \cite{Heckman13}. The black line is the observed mean and the dotted black lines are the empirical confidence intervals; and (c) estimated (red lines) and observed (black lines) individual FA profiles for $3$ selected individuals.}
\label{DTI_raw}
\label{DTI_results_pop}
\label{DTI_results_ind}
\end{figure}
Our second analysis considers all individuals, cases and controls. The interest here is to compare the FA profiles at the first visit between these two groups. To that aim, the following factor-by-curve interaction model is considered. \[ y_{ij} = f_{z_j}\left(t_i\right) + g_j\left(t_i\right) + \varepsilon_{ij}, \;\; 1\leq i \leq s,\;1\leq j \leq m, \] where $z_j = 1$ if the $j$-th individual is affected by MS (case) and $z_j = 0$ otherwise (control). Note that this model assumes a different FA profile for each group. For this analysis, there are a total of $m_0 = 42$ controls and $m_1 = 99$ MS patients ($m = m_0 + m_1 = 141$), and $s = 93$ different tract locations. A detailed description of the model can be found in Appendix \ref{AppC}. As in the first analysis, $43$ cubic B-spline basis functions are used for the population curves (FA profiles), and $23$ for the individual curves, yielding a total of $3329$ ($ = 43 \times 2 + 23 \times 141$) coefficients. Using GLAM and sparse matrix techniques (\texttt{R}-package \texttt{Matrix}), the fit takes $65$ seconds. Figure \ref{DTI_results_pop_complete_SOP} shows the estimated FA profiles for both cases and controls, jointly with $95\%$ pointwise confidence intervals \citep{Heckman13}. The ED for the FA profile in controls is $32.21$ and in MS patients is $35.55$. In both cases we include the fixed part. Regarding the individual curves, we obtain a total ED of $2863.46$, the sum of $1263.26$ and $1600.20$.
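As a sanity check on the dimensions of the factor-by-curve model (detailed in Appendix \ref{AppC}), the design matrices can be assembled with Kronecker products. The Python sketch below uses a random placeholder matrix in place of the actual B-spline basis, and only verifies shapes and coefficient counts:

```python
import numpy as np

m0, m1, s = 42, 99, 93            # controls, MS patients, tract locations
m = m0 + m1
c, c_breve = 43, 23               # basis sizes (population, individual)

# Contrast matrix Q: one indicator column per group
Q = np.zeros((m, 2))
Q[:m0, 0] = 1.0
Q[m0:, 1] = 1.0

B = np.random.rand(s, c)          # placeholder population basis
X_pop = np.kron(Q, B)             # [Q (x) B] in the model equation

assert X_pop.shape == (m * s, 2 * c)
# The individual-curve block [I_m (x) B_breve] would be (m*s) x (m*c_breve);
# it is not materialised here because the dense matrix would be large.
assert (m * s, m * c_breve) == (13113, 3243)
assert 2 * c + m * c_breve == 3329   # total number of coefficients
```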
We compare the results with the functional regression model presented in the paper by \cite{Greven17} (see model (1.1) and Figure 2 in that paper). We would like to note that the P-spline model used in this paper was discussed in \cite{Durban17} as a competitive alternative to the functional approach by \cite{Greven17}. For the functional approach, we consider Gaussian homoskedastic errors, and, as suggested by the authors, $25$ cubic B-spline basis functions and a first-order difference penalty, as well as $8$ functional principal component (FPC) functions. The code for fitting the model was kindly provided by the authors. Results are depicted in Figure \ref{DTI_results_pop_complete_functional}. As can be observed, both approaches provide very similar results. However, note that the pointwise $95\%$ confidence interval for the estimated FA profile in MS patients is narrower than the empirical confidence interval. This result can be explained by the (possibly wrong) assumption of Gaussian homoskedastic errors. In terms of computing times, the functional regression model needs $895$ seconds to be fitted (in contrast to $65$ seconds using SOP). We are aware that the computing times of both approaches (the SOP method and the functional approach) are not fully comparable, since they assume different model specifications.
\begin{figure}
\caption{For the diffusion tensor imaging scan data: estimated population (group) FA profiles. From left to right: results using the SOP method and the functional regression approach by \cite{Greven17}. Solid red line: MS patients. Solid blue line: controls. The dotted blue and red lines are the pointwise $95\%$ confidence intervals. For the SOP method, confidence intervals are based on the full-sandwich standard errors proposed by \cite{Heckman13}. The solid black line is the observed mean and the dotted black lines are the empirical confidence intervals.}
\label{DTI_results_pop_complete_SOP}
\label{DTI_results_pop_complete_functional}
\end{figure}
\section{Discussion} This paper presents a new estimation method, called SOP, for (generalised) linear mixed models. The method is an extension of a previous proposal by \cite{Harville77}, later generalised by \cite{Schall1991} among others. In contrast to those previous approaches, the SOP method is suitable for models where the precision matrix is linear in the precision parameters. These precision matrices are common when penalised smooth models are reformulated as (generalised) linear mixed models. They appear when there are multiple quadratic penalties acting on the same regression coefficients. A special case is that of anisotropic tensor-product P-splines. This situation was discussed in the paper by \cite{MXRA2015}, where the SAP algorithm was proposed. The present paper goes one step further and generalises SAP to a broader class of models. SOP is, as far as we know, the only method in this context where the variance parameter estimates involve ``partial'' effective degrees of freedom. As a consequence, the method provides, at convergence, the estimated partial effective degrees of freedom associated with each smooth/random component in the model. This is especially relevant when working with smoothing models, where the effective degrees of freedom of each smooth term give important insight into the complexity of the fitted function. Furthermore, we show in the paper that the SOP method ensures non-negative estimates of the variance parameters, and discuss the conditions under which these estimates are strictly positive.
We present in the paper several situations in the context of penalised spline regression in which SOP represents a powerful alternative to existing estimation methods. In particular, we show the use of SOP in the case of spatially-adaptive P-splines and for the estimation of subject-specific curves in longitudinal data. We discuss several real data analyses dealing with these situations, and show that SOP outperforms existing methods in terms of computing time. We use simple modelling situations with the aim of describing the method, models and examples in a detailed way, avoiding generalisations that could complicate the reading of the paper and obscure the simplicity of the proposal. However, there are several other fields of application of SOP. For instance, overlapping penalties appear in brain imaging research \citep{Karas2017} and derivative curve estimation \citep{simpkin2013}. In addition, the method can be used for more complex models including linear effects, (multidimensional) smooth functions, random Gaussian effects, etc. The use of other basis functions beyond B-splines is also possible, as long as they are combined with quadratic penalties.
The proposal and examples presented in this paper pave the way for further research efforts. For instance, the approach discussed for adaptive P-splines is based on smoothing the locally varying smoothing parameter by means of B-splines. This implies that smoothness is solely controlled by the number of B-spline basis functions. The selection of the appropriate basis dimension may not be an easy task, with the undesirable consequence that if a large basis dimension is chosen (larger than needed), we may end up with a local linear fit. To reduce the impact of the basis dimension, we will explore the inclusion of a penalty on the coefficients (variance parameters) associated with the B-spline basis. This can be accomplished by means of a hierarchical structure for the random effects \cite[see, e.g.,][]{Krivobokova08}. Another challenging field is the study of suitable penalties and efficient estimation methods for adaptive P-splines in more than one dimension. Whereas some attempts have been made in two dimensions \cite[see, e.g.,][]{Crainiceanu07, Krivobokova08}, to the best of our knowledge the literature is lacking in three-dimensional approaches (e.g., space and time). Some preliminary results using SOP are available in \cite{MXRA2016b}, but further work still needs to be done.
Finally, for variable selection, there exist some works that propose sparse regression models using local quadratic approximations to the L1-norm, adopting a penalised likelihood approach \citep[see][]{Fan2001,Hunter2005,Zou2008}. More recently, (generalised) linear mixed-effects approaches have also been proposed \cite[see][]{Taylor2012,Groll2014}, allowing for the penalty to be estimated simultaneously with the variance parameters using REML. We intend to extend the SOP method in this direction. In conclusion, this paper opens up a pathway for a general estimation method allowing for both smoothing and variable selection in reasonable computing times.
The \texttt{R}-code used for the real data examples presented in Section \ref{examples} as well as an \texttt{R}-package implementing the SOP method for generic generalised linear mixed models, spatially-adaptive P-spline models and P-spline models for hierarchical curve data can be downloaded from \url{https://bitbucket.org/mxrodriguez/sop}.
\section{Proof of Theorem \ref{the1}\label{AppA}} \begin{proof} We first note that the first-order partial derivatives of the (approximate) REML log-likelihood function can be expressed as \citep[see, e.g.,][]{MXRA2015} \begin{equation*} \frac{\partial{l}}{\partial{\sigma_{k_l}^2}} = - \frac{1}{2}trace\left(\boldsymbol{Z}^{\top}\boldsymbol{P}\boldsymbol{Z}\frac{\partial{\boldsymbol{G}}}{\partial{\sigma_{k_l}^2}}\right) + \frac{1}{2}\widehat{\boldsymbol{\alpha}}^{\top}\boldsymbol{G}^{-1}\frac{\partial{\boldsymbol{G}}}{\partial{\sigma_{k_l}^2}}\boldsymbol{G}^{-1}\widehat{\boldsymbol{\alpha}}. \label{MX:rloglikder} \end{equation*} Given that $\boldsymbol{G}$ is a positive definite matrix, we have the identity \[ \frac{\partial{\boldsymbol{G}}}{\partial{\sigma_{k_l}^2}} = - \boldsymbol{G}\frac{\partial{\boldsymbol{G}^{-1}}}{\partial{\sigma_{k_l}^2}}\boldsymbol{G}, \] and thus \[ \frac{\partial{\boldsymbol{G}}}{\partial{\sigma_{k_l}^2}} = \frac{1}{\sigma^{4}_{k_l}}\mbox{diag}\left(\boldsymbol{0}^{(1)},\boldsymbol{G}_k\boldsymbol{\Lambda}_{k_l}\boldsymbol{G}_k,\boldsymbol{0}^{(2)}\right), \] where $\boldsymbol{0}^{(1)}$ and $\boldsymbol{0}^{(2)}$ are zero square matrices of appropriate dimensions.
The first-order partial derivatives of the REML log-likelihood function are then expressed as \begin{equation*} 2\frac{\partial{l}}{\partial{\sigma_{k_l}^2}} = - \frac{1}{\sigma_{k_l}^4}trace\left(\boldsymbol{Z}_k^{\top}\boldsymbol{P}\boldsymbol{Z}_k\boldsymbol{G}_k\boldsymbol{\Lambda}_{k_l}\boldsymbol{G}_k\right) + \frac{1}{\sigma_{k_l}^4}\widehat{\boldsymbol{\alpha}}_k^{\top}\boldsymbol{\Lambda}_{k_l}\widehat{\boldsymbol{\alpha}}_k. \label{MX:rloglikder_2} \end{equation*} When the REML estimates are positive, they are obtained by equating the former expression to zero \cite[see, e.g.,][]{Engel1990} \[ \frac{\widehat{\boldsymbol{\alpha}}^{\top}_k\boldsymbol{\Lambda}_{k_l}\widehat{\boldsymbol{\alpha}}_k}{trace\left(\boldsymbol{Z}_k^{\top}\boldsymbol{P}\boldsymbol{Z}_k\boldsymbol{G}_k\boldsymbol{\Lambda}_{k_l}\boldsymbol{G}_k\right)} = 1. \] We now multiply both sides by $\sigma_{k_l}^2$, and evaluate the left-hand side at the previous iterates and the right-hand side at the update, obtaining \[ \widehat{\sigma}^{2}_{k_l} = \frac{\widehat{\boldsymbol{\alpha}}_k^{[t]\top}\boldsymbol{\Lambda}_{k_l}\widehat{\boldsymbol{\alpha}}_k^{[t]}}{trace\left(\boldsymbol{Z}_k^{\top}\boldsymbol{P}^{[t]}\boldsymbol{Z}_k\boldsymbol{G}_k^{[t]}\boldsymbol{\Lambda}_{k_l}\boldsymbol{G}_k^{[t]}\right)}\widehat{\sigma}_{k_l}^{2[t]} = \frac{\widehat{\boldsymbol{\alpha}}_k^{[t]\top}\boldsymbol{\Lambda}_{k_l}\widehat{\boldsymbol{\alpha}}_k^{[t]}}{trace\left(\boldsymbol{Z}_k^{\top}\boldsymbol{P}^{[t]}\boldsymbol{Z}_k\boldsymbol{G}_k^{[t]}\frac{\boldsymbol{\Lambda}_{k_l}}{\widehat{\sigma}_{k_l}^{2[t]}}\boldsymbol{G}_k^{[t]}\right)}. \] \end{proof} \section{Proof of Theorem \ref{the2}\label{AppB}} \begin{proof} First let us recall some notation and introduce some needed results. 
We denote as $\boldsymbol{P} = \boldsymbol{V}^{-1} - \boldsymbol{V}^{-1}\boldsymbol{X}\left(\boldsymbol{X}^{\top}\boldsymbol{V}^{-1}\boldsymbol{X}\right)^{-1}\boldsymbol{X}^{\top}\boldsymbol{V}^{-1}$, where $\boldsymbol{V} = \boldsymbol{R} + \boldsymbol{Z}\boldsymbol{G}\boldsymbol{Z}^{\top}$ and $\boldsymbol{R} = \phi\boldsymbol{W}^{-1}$, with $\boldsymbol{W}$ being the diagonal weight matrix involved in the Fisher scoring algorithm.
Denote as $\mathcal{C}\left(\boldsymbol{A}\right)$ the linear space spanned by the columns of $\boldsymbol{A}$, and let $\boldsymbol{P}_{\boldsymbol{X}\boldsymbol{V}^{-1}} = \boldsymbol{X}\left(\boldsymbol{X}^{\top}\boldsymbol{V}^{-1}\boldsymbol{X}\right)^{-1}\boldsymbol{X}^{\top}\boldsymbol{V}^{-1}$ be the orthogonal projection matrix for $\mathcal{C}\left(\boldsymbol{X}\right)$ with respect to $\boldsymbol{V}^{-1}$. It is easy to show that \[ \boldsymbol{P} = \boldsymbol{V}^{-1}\left(\boldsymbol{I}_n - \boldsymbol{P}_{\boldsymbol{X}\boldsymbol{V}^{-1}}\right) = \left(\boldsymbol{I}_n - \boldsymbol{P}_{\boldsymbol{X}\boldsymbol{V}^{-1}}\right)^{\top}\boldsymbol{V}^{-1}\left(\boldsymbol{I}_n - \boldsymbol{P}_{\boldsymbol{X}\boldsymbol{V}^{-1}}\right). \] By Theorem 14.2.9 in \cite{Harville1997}, $\boldsymbol{P}$ is a (symmetric) positive semi-definite matrix. In addition, \[ \boldsymbol{P}\boldsymbol{X} = \boldsymbol{0}, \] and \begin{align*} \mbox{rank}\left(\boldsymbol{P}\right) & = \mbox{rank}\left(\boldsymbol{V}^{-1}\left(\boldsymbol{I}_n - \boldsymbol{P}_{\boldsymbol{X}\boldsymbol{V}^{-1}}\right)\right)\\ & = \mbox{rank}\left(\boldsymbol{I}_n - \boldsymbol{P}_{\boldsymbol{X}\boldsymbol{V}^{-1}}\right)\\ & = n - \mbox{rank}\left(\boldsymbol{P}_{\boldsymbol{X}\boldsymbol{V}^{-1}}\right)\\ & = n - \mbox{rank}\left(\boldsymbol{X}\right). \end{align*} Thus, \begin{equation} \mbox{ker}\left(\boldsymbol{P}\right) = \mathcal{C}\left(\boldsymbol{X}\right), \label{k_p_c_x} \end{equation} i.e., $\boldsymbol{P}\boldsymbol{x} = \boldsymbol{0}$ if and only if $\boldsymbol{x}$ is in $\mathcal{C}\left(\boldsymbol{X}\right)$.
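The two properties of $\boldsymbol{P}$ used in the proof ($\boldsymbol{P}\boldsymbol{X} = \boldsymbol{0}$ and the rank identity) are easy to verify numerically on a random instance; the following Python sketch uses toy dimensions:

```python
import numpy as np

# Numerical check of P X = 0 and rank(P) = n - rank(X) on a
# random small instance (toy dimensions, not the models in the paper).
rng = np.random.default_rng(0)
n, p, q = 12, 3, 5

X = rng.standard_normal((n, p))
Z = rng.standard_normal((n, q))
G = np.diag(rng.uniform(0.5, 2.0, q))        # positive definite G
R = np.diag(rng.uniform(0.5, 2.0, n))        # positive definite R
V = R + Z @ G @ Z.T
Vi = np.linalg.inv(V)

P = Vi - Vi @ X @ np.linalg.inv(X.T @ Vi @ X) @ X.T @ Vi

assert np.allclose(P @ X, 0.0)                               # P X = 0
assert np.linalg.matrix_rank(P) == n - np.linalg.matrix_rank(X)
assert np.allclose(P, P.T)                                   # symmetry
```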
Let $\boldsymbol{\Lambda}_{k_l} = \boldsymbol{U}\boldsymbol{\Sigma}\boldsymbol{U}^{\top}$ be the eigenvalue decomposition of $\boldsymbol{\Lambda}_{k_l}$. Note that $\boldsymbol{\Lambda}_{k_l} = \boldsymbol{U}_{+}\boldsymbol{\Sigma}_{+}\boldsymbol{U}_{+}^{\top}$, where $\boldsymbol{U}_{+}$ and $\boldsymbol{\Sigma}_{+}$ are the sub-matrices corresponding to the non-zero eigenvalues. Then \[ \widehat{\boldsymbol{\alpha}}_k^{\top}\boldsymbol{\Lambda}_{k_l}\widehat{\boldsymbol{\alpha}}_k = \widehat{\boldsymbol{\alpha}}_k^{\top}\boldsymbol{U}_{+}\boldsymbol{\Sigma}_{+}\boldsymbol{U}_{+}^{\top}\widehat{\boldsymbol{\alpha}}_k = \boldsymbol{z}^{\top}\boldsymbol{P}\boldsymbol{Z}_k\boldsymbol{G}_k\boldsymbol{U}_{+}\boldsymbol{\Sigma}_{+}\boldsymbol{U}_{+}^{\top}\boldsymbol{G}_k\boldsymbol{Z}_k^{\top}\boldsymbol{P}\boldsymbol{z} \geqslant 0, \] with equality holding if and only if $\boldsymbol{U}_{+}^{\top}\boldsymbol{G}_k\boldsymbol{Z}_k^{\top}\boldsymbol{P}\boldsymbol{z} = \boldsymbol{0}$ (since $\boldsymbol{\Sigma}_{+}$ is positive definite). Thus, using result (\ref{k_p_c_x}), the equality holds if $\boldsymbol{z}$ is in $\mathcal{C}\left(\boldsymbol{X}\right)$ or $\mathcal{C}\left(\boldsymbol{Z}_k\boldsymbol{G}_k\boldsymbol{U}_{+}\right) \subset \mathcal{C}\left(\boldsymbol{X}\right)$. By Lemma 4.2.2 and Corollary 4.5.2 in \cite{Harville1997}, we have \[ \mathcal{C}\left(\boldsymbol{Z}_k\boldsymbol{G}_k\boldsymbol{U}_{+}\right) = \mathcal{C}\left(\boldsymbol{Z}_k\boldsymbol{G}_k\boldsymbol{\Lambda}_{k_l}\right) \subset \mathcal{C}\left(\boldsymbol{X}\right) \iff \mbox{rank}\left(\boldsymbol{Z}_k\boldsymbol{G}_k\boldsymbol{\Lambda}_{k_l},\boldsymbol{X}\right) = \mbox{rank}\left(\boldsymbol{X}\right). \]
Regarding the denominator of the REML-based estimate updates, we follow a similar reasoning. Using Corollary 14.7.5 (and Theorem 14.2.9) in \cite{Harville1997}, we have \begin{align*} trace\left(\boldsymbol{Z}_k^{\top}\boldsymbol{P}\boldsymbol{Z}_k\boldsymbol{G}_k\boldsymbol{\Lambda}_{k_l}\boldsymbol{G}_k\right) = trace\left(\boldsymbol{U}_{+}^{\top}\boldsymbol{G}_k\boldsymbol{Z}_k^{\top}\boldsymbol{P}\boldsymbol{Z}_k\boldsymbol{G}_k\boldsymbol{U}_{+}\boldsymbol{\Sigma}_{+}\right) \geqslant 0, \end{align*} with equality holding if and only if $\boldsymbol{U}_{+}^{\top}\boldsymbol{G}_k\boldsymbol{Z}_k^{\top}\boldsymbol{P}\boldsymbol{Z}_k\boldsymbol{G}_k\boldsymbol{U}_{+} = \boldsymbol{0}$. Again, this equality holds if and only if $\mathcal{C}\left(\boldsymbol{Z}_k\boldsymbol{G}_k\boldsymbol{U}_{+}\right) \subset \mathcal{C}\left(\boldsymbol{X}\right)$ (i.e., $\iff \mbox{rank}\left(\boldsymbol{Z}_k\boldsymbol{G}_k\boldsymbol{\Lambda}_{k_l},\boldsymbol{X}\right) = \mbox{rank}\left(\boldsymbol{X}\right)$). \end{proof} \section{Estimating algorithm}\label{algorithm} This appendix summarises the steps of the estimating algorithm for model (\ref{mm_equation}) based on the SOP method: \begin{description} \item[Initialise.] Set initial values for $\widehat{\boldsymbol{\mu}}^{[0]}$ and the variance parameters $\widehat{\sigma}_{k_l}^{2[0]}$ ($l = 1,\ldots,p_k$ and $k = 1, \ldots, c$). In those situations where $\phi$ is unknown, establish an initial value for this parameter, $\widehat{\phi}^{[0]}$. Set $t = 0$. \item[Step 1.] Construct the \textit{working} response vector $\boldsymbol{z}$ and the matrix of weights $\boldsymbol{W}$ as follows \[ z_i = g(\widehat{\mu}_i^{[t]}) + (y_i - \widehat{\mu}_i^{[t]})g^{\prime}(\widehat{\mu}_i^{[t]}), \] \[ w_{ii} = \left\{g'(\widehat{\mu}_i^{[t]})^2\nu(\widehat{\mu}_i^{[t]})\right\}^{-1}. \] \begin{description} \item[Step 1.1.] 
Given the initial \textit{estimates} of variance parameters, estimate $\boldsymbol{\alpha}$ and $\boldsymbol{\beta}$ by solving \begin{equation} \underbrace{ \begin{bmatrix} \boldsymbol{X}^{\top}{\boldsymbol{R}^{[t]}}^{-1}\boldsymbol{X} & \boldsymbol{X}^{\top}{\boldsymbol{R}^{[t]}}^{-1}\boldsymbol{Z} \\ \boldsymbol{Z}^{\top}{\boldsymbol{R}^{[t]}}^{-1}\boldsymbol{X} & \boldsymbol{Z}^{\top}{\boldsymbol{R}^{[t]}}^{-1}\boldsymbol{Z} + {\boldsymbol{G}^{[t]}}^{-1} \end{bmatrix}}_{\boldsymbol{C}^{[t]}} \begin{bmatrix} \boldsymbol{\widehat{\beta}}\\ \boldsymbol{\widehat{\alpha}} \end{bmatrix} = \begin{bmatrix} \boldsymbol{X}^{\top}{\boldsymbol{R}^{[t]}}^{-1}\boldsymbol{z}\\ \boldsymbol{Z}^{\top}{\boldsymbol{R}^{[t]}}^{-1}\boldsymbol{z} \end{bmatrix}, \label{MX:linearsystem} \end{equation} where $\boldsymbol{R}^{[t]} = \widehat{\phi}^{[t]}\boldsymbol{W}^{-1}$. Let $\boldsymbol{\widehat{\alpha}}^{[t]}$ and $\boldsymbol{\widehat{\beta}}^{[t]}$ be these estimates. \item[Step 1.2.] Update the variance parameters as follows \begin{equation*} \widehat{\sigma}^{2}_{k_l} = \frac{\widehat{\boldsymbol{\alpha}}_k^{{[t]}\top}\boldsymbol{\Lambda}_{k_l}\widehat{\boldsymbol{\alpha}}_k^{[t]}}{\mbox{ED}_{k_l}^{[t]}}, \end{equation*} and, when necessary, \begin{equation*} \widehat{\phi} = \frac{\left(\boldsymbol{z}-\boldsymbol{X}\boldsymbol{\widehat{\beta}}^{[t]}-\boldsymbol{Z}\boldsymbol{\widehat{\alpha}}^{[t]}\right)^{\top}\boldsymbol{W}\left(\boldsymbol{z}-\boldsymbol{X}\boldsymbol{\widehat{\beta}}^{[t]}-\boldsymbol{Z}\boldsymbol{\widehat{\alpha}}^{[t]}\right)}{n - \mbox{rank}(\boldsymbol{X}) - \sum_{k=1}^c\sum_{l=1}^{p_k}\mbox{ED}_{k_l}^{[t]}}, \end{equation*} with
\[ \mbox{ED}_{k_l}^{[t]} = trace\left(\left(\boldsymbol{G}^{[t]}_k - {\boldsymbol{C}^{[t]}}^{*}_{kk}\right)\frac{\boldsymbol{\Lambda}_{k_l}}{{\widehat{\sigma}_{k_l}^{2[t]}}}\right). \] Recall that ${\boldsymbol{C}^{[t]}}^{*}$ denotes the inverse of $\boldsymbol{C}^{[t]}$ in (\ref{MX:linearsystem}), and ${\boldsymbol{C}^{[t]}}^{*}_{kk}$ is the partition of ${\boldsymbol{C}^{[t]}}^{*}$ corresponding to the $k$-th random component $\boldsymbol{\alpha}_k$. \item[Step 1.3.] Repeat Step 1.1 and Step 1.2, replacing $\widehat{\sigma}_{k_l}^{2[t]}$, and, if updated, $\widehat{\phi}^{[t]}$, by $\widehat{\sigma}_{k_l}^{2}$ and $\widehat{\phi}$, until convergence. For the examples presented in Section \ref{examples}, we use the REML deviance as the convergence criterion. \end{description} \item[Step 2.] Repeat Step 1, replacing the variance parameters and the model's fixed and random effects (and thus $\widehat{\boldsymbol{\mu}}^{[t]}$) by those obtained in the last iteration of Steps 1.1--1.3, until convergence. \end{description} \section{Factor-by-curve hierarchical curve model\label{AppC}} This appendix describes in detail the factor-by-curve interaction model discussed in Section \ref{DTI_example}, i.e., \[ y_{ij} = f_{z_j}\left(t_i\right) + g_j\left(t_i\right) + \varepsilon_{ij}, \;\; 1\leq i \leq s,\;1\leq j \leq m, \] where $z_j = 1$ if the $j$-th individual is affected by MS (case) and $z_j = 0$ otherwise (control). Let us order the data with the observations on controls first, followed by observations on MS patients. In matrix notation, the model can be expressed as \begin{equation} \boldsymbol{y} = [\boldsymbol{Q}\otimes\boldsymbol{B}]\boldsymbol{\theta} + [\boldsymbol{I}_{m}\otimes\breve{\boldsymbol{B}}]\breve{\boldsymbol{\theta}} + \boldsymbol{\varepsilon}, \label{SSC_inter_model_pop} \end{equation} with $\boldsymbol{B}$, $\breve{\boldsymbol{B}}$, $\breve{\boldsymbol{\theta}}$ and $\boldsymbol{\varepsilon}$ as defined in Section \ref{OP_SSC}. 
Matrix $\boldsymbol{Q}$ is any suitable contrast matrix of dimension $m \times 2$, where $m = m_0 + m_1$, with $m_0$ being the number of controls and $m_1$ the number of MS patients. For our application, we consider \[ \boldsymbol{Q} = \begin{pmatrix} \boldsymbol{1}_{m_0} & \boldsymbol{0}_{m_0}\\ \boldsymbol{0}_{m_1} & \boldsymbol{1}_{m_1} \end{pmatrix}, \] and a different amount of smoothing is assumed for $f_{0}$ and $f_{1}$, i.e., the penalty matrix acting over the vector of coefficients $\boldsymbol{\theta}$ is of the form \[ \boldsymbol{P} = \begin{pmatrix} \lambda_{1}\boldsymbol{D}_q^{\top}\boldsymbol{D}_q & \boldsymbol{0}_{c \times c}\\ \boldsymbol{0}_{c \times c} & \lambda_{2}\boldsymbol{D}_q^{\top}\boldsymbol{D}_q \end{pmatrix}. \] The reformulation as a mixed model can be done in a similar fashion to that described in Section \ref{OP_SSC}, with, in this case, \begin{align*} \boldsymbol{X} = & [\boldsymbol{Q}\otimes\boldsymbol{B}\boldsymbol{U}_{0}],\\ \boldsymbol{Z} = & [\boldsymbol{Q}\otimes\boldsymbol{B}\boldsymbol{U}_{+}:\boldsymbol{I}_m\otimes\breve{\boldsymbol{B}}], \end{align*} and \[ \boldsymbol{G}^{-1} = \begin{pmatrix} \sigma_{1}^{-2}\boldsymbol{\Sigma}_{+} & \boldsymbol{0}_{d \times d} & \boldsymbol{0}_{d\times\left(m \breve{d}\right)}\\ \boldsymbol{0}_{d\times d} & \sigma_{2}^{-2}\boldsymbol{\Sigma}_{+} & \boldsymbol{0}_{d\times\left(m \breve{d}\right)}\\ \boldsymbol{0}_{\left(m \breve{d}\right)\times d}& \boldsymbol{0}_{\left(m \breve{d}\right)\times d} & \boldsymbol{I}_{m}\otimes\breve{\boldsymbol{G}}^{-1} \end{pmatrix}, \] where \[ \breve{\boldsymbol{G}}^{-1} = \sigma_3^{-2}\breve{\boldsymbol{D}}_{\breve{q}}^{\top}\breve{\boldsymbol{D}}_{\breve{q}} + \sigma_4^{-2}\boldsymbol{I}_{\breve{d}}. \]
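The block structure of $\boldsymbol{G}^{-1}$ above can be assembled mechanically. The Python sketch below uses small toy dimensions and a random diagonal placeholder for $\boldsymbol{\Sigma}_{+}$, and checks that the individual-curve precision block is positive definite: the $\sigma_4^{-2}\boldsymbol{I}_{\breve{d}}$ term guarantees this even though the difference penalty alone is rank-deficient.

```python
import numpy as np
from scipy.linalg import block_diag

# Toy sizes; Sigma_plus is a random positive-definite placeholder,
# not the actual eigenvalue matrix from the reparameterisation.
m, d, d_brev, q_brev = 4, 6, 5, 2
s1, s2, s3, s4 = 1.0, 2.0, 0.5, 1.5          # variance parameters

Sigma_plus = np.diag(np.random.uniform(0.5, 2.0, d))

# q_brev-th order difference matrix for the individual coefficients
D_brev = np.diff(np.eye(d_brev), n=q_brev, axis=0)
G_brev_inv = D_brev.T @ D_brev / s3 + np.eye(d_brev) / s4

G_inv = block_diag(Sigma_plus / s1,
                   Sigma_plus / s2,
                   np.kron(np.eye(m), G_brev_inv))

assert G_inv.shape == (2 * d + m * d_brev, 2 * d + m * d_brev)
# Two quadratic penalties act on the same individual coefficients:
# G_brev_inv is a sum of two precision terms, which is exactly the
# structure the SOP method is designed to handle.
```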
\end{document}
\begin{document}
\baselineskip=0.3in
\vspace*{3\baselineskip}
\vspace*{2mm}
\begin{center} \textbf{\Large On the Wiener complexity and the Wiener index of fullerene graphs} \end{center}
\begin{center} {\large Andrey A. Dobrynin$^{1,2}$, Andrei Yu. Vesnin$^{1,2,3}$} \end{center}
\baselineskip=0.2in
\begin{center} \emph{$^1$Novosibirsk State University, Novosibirsk, 630090, Russia \\
$^2$Sobolev Institute of Mathematics, Siberian Branch of the \\ Russian Academy of Sciences, Novosibirsk, 630090, Russia\\
$^3$Tomsk State University, Tomsk, 634050, Russia \\
{\rm [email protected], [email protected]} } \end{center}
\begin{center} \ \end{center}
\baselineskip=0.25in
\begin{abstract} Fullerenes are molecules in the form of cage-like polyhedra, consisting solely of carbon atoms. Fullerene graphs are mathematical models of fullerene molecules. The transmission of a vertex $v$ of a graph is the sum of distances from $v$ to all the other vertices. The number of different vertex transmissions is called the Wiener complexity of a graph. Some calculation results on the Wiener complexity and the Wiener index of fullerene graphs of order $n \le 216$ are presented. Structure of graphs with the maximal Wiener complexity or the maximal Wiener index is discussed and formulas for the Wiener index of several families of graphs are obtained. \end{abstract}
\section{Introduction}
A fullerene is a spherically shaped molecule consisting of carbon atoms in which every carbon ring is either a pentagon or a hexagon and every atom has bonds with exactly three other atoms. The molecule may be a hollow sphere, ellipsoid, tube, or take many other shapes and sizes. Fullerenes have been the subject of intense research, both for their chemistry and for their technological applications, especially in nanotechnology and materials science \cite{Ashr16,Cata11}.
Molecular graphs of fullerenes are called \emph{fullerene graphs}. A fullerene graph is a \mbox{3-connected} 3-regular planar graph with only pentagonal and hexagonal faces. By Euler's polyhedral formula, the number of pentagonal faces is always 12. It is known that fullerene graphs with $n$ vertices exist for all even $n \ge 24$ and for $n = 20$. The number of all non-isomorphic fullerene graphs can be found in \cite{Brin97,Fowl95,Goed15-2}. The set of fullerene graphs with $n$ vertices will be denoted by $F_n$. The number of faces of graphs in $F_n$ is $f = n/2 + 2$ and, therefore, the number of hexagonal faces is $n/2-10$. Despite the fact that the number of pentagonal faces is negligible compared to the number of hexagonal faces, their location is crucial to the shape and properties of fullerene molecules. Fullerenes in which no two pentagons are adjacent, \emph{i.\,e.}, each pentagon is surrounded by five hexagons, satisfy the isolated pentagon rule and are called \emph{IPR fullerenes}. The number of all non-isomorphic IPR fullerenes was reported, for example, in \cite{Goed15-1,Goed15-2}. They are considered to be thermodynamically stable fullerene compounds. A description of the mathematical properties of fullerene graphs can be found in \cite{Ando16,Ashr16,Cata11,Fowl01,Fowl95,Schw15}.
The vertex set of a graph $G$ is denoted by $V(G)$. The number of vertices of $G$ is called its \emph{order}. By the distance $d(u,v)$ between vertices $u,v \in V(G)$ we mean the standard distance of a simple graph $G$, \emph{i.\,e.}, the number of edges on a shortest path connecting these vertices in $G$. The maximal distance between vertices of a graph $G$ is called the \emph{diameter} $D(G)$ of $G$. Vertices are \emph{diametrical} if the distance between them is equal to the diameter of the graph. The \emph{transmission} of a vertex $v \in V(G)$ is defined as the sum of distances from $v$ to all the other vertices of $G$, $tr(v)=\sum_{u\in V(G)} d(v,u)$. Transmissions of vertices are used for the design of many distance-based topological indices \cite{Shar20}. Usually, a topological index is a graph invariant that maps a set of graphs to a set of numbers such that invariant values coincide for isomorphic graphs. Half of the sum of vertex transmissions gives the \emph{Wiener index}, which has found important applications in chemistry (see selected books and reviews \cite{Dehm14,Dobr01,Dobr02,Gutm12-1,Gutm12-2,Gutm86,Knor16,Tode00,Trin83}), $$ W(G) = \sum_{\{u,v\}\subseteq V(G)} d(u,v) = \frac{1}{2}\sum_{v \in V(G)} tr(v). $$
The Wiener index was introduced as structural descriptor for acyclic organic molecules by Harold Wiener \cite{Wien47}. The definition of the Wiener index in terms of distances between vertices of a graph was first given by Haruo Hosoya \cite{Hoso71}.
The number of different vertex transmissions in a graph $G$ is known as the \emph{Wiener complexity} \cite{Aliz16} (or the \emph{Wiener dimension} \cite{Aliz14}), $C_W(G)$. This graph invariant is a measure of transmission variety. A graph is called \emph{transmission irregular} if it has the largest possible Wiener complexity over all graphs of a given order, \emph{i.\,e.}, the vertices of the graph have pairwise different transmissions. Various properties of transmission irregular graphs were studied in \cite{Aliz16,Aliz18,Klav18}. It was shown that almost all graphs are not transmission irregular. Infinite families of transmission irregular graphs were constructed for trees, 2-connected graphs and 3-connected cubic graphs in \cite{Aliz18,Dobr18,Dobr19-1,Dobr19-2,Dobr19-3}.
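These quantities are easy to compute by breadth-first search. The following Python sketch is our illustration (not from the paper); the 5-cycle $C_5$ serves as a toy example, where every vertex sees distances $1,1,2,2$ and hence has transmission $6$.

```python
from collections import deque

def transmissions(adj):
    """tr(v) for every vertex: BFS from each vertex, summing the distances."""
    n = len(adj)
    tr = []
    for s in range(n):
        dist = [-1] * n
        dist[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if dist[w] == -1:
                    dist[w] = dist[u] + 1
                    q.append(w)
        tr.append(sum(dist))
    return tr

# Adjacency lists of the 5-cycle C_5.
c5 = [[(i - 1) % 5, (i + 1) % 5] for i in range(5)]
tr = transmissions(c5)
W = sum(tr) // 2      # Wiener index: half the sum of transmissions
C_W = len(set(tr))    # Wiener complexity: number of distinct transmissions
# tr == [6, 6, 6, 6, 6], W == 15, C_W == 1
```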
In this paper, we present some results of our study of the Wiener complexity and the Wiener index of fullerene graphs. In particular, we are interested in two questions: does there exist a transmission irregular fullerene graph, and can a graph with the maximal Wiener complexity have the maximal Wiener index?
\section{Distribution of graphs with respect to their Wiener complexity}
Distributions of fullerene graphs with respect to their Wiener complexity have been obtained for $n \le 216$ vertices ($f \le 110$ faces). As an illustration, we present data for graphs with 196 vertices (100 faces). The number of graphs in $F_{196}$ is 177\,175\,687. The distribution of the graphs of this family with respect to $C_W$ is presented in Table~\ref{TDistr100}. A graphical representation of these data is shown in Fig.~\ref{Fig1}.
\begin{figure}
\caption{Distribution of fullerene graphs of $F_{196}$ with respect to their Wiener complexity ($N$ is the number of graphs).}
\label{Fig1}
\end{figure}
\section{Wiener complexity of fullerene graphs}
Denote by $C_n$ the maximal Wiener complexity among all fullerene graphs with $n$ vertices,
\emph{i.\,e.}, $C_n = \max \{C_W(G)\, |\, G \in F_n \}$. Fullerene graphs with maximal Wiener complexity have been examined for $n \le 216$ vertices. Let $g_n$ be the difference between the order and the maximal Wiener complexity,
$g_n=n-C_n$. Then a transmission irregular graph has $g_n=0$. It is obvious that a transmission irregular graph has the identity automorphism group.
The behavior of $g_n$ as the number of vertices $n$ increases is shown in Fig.~\ref{Fig2}. The bottom and top lines correspond to all fullerene graphs and to IPR fullerene graphs, respectively. Explicit values of $C_n$ and of the quantity $g_n$ are presented in Table~\ref{TDistrAll}. Since the minimal $g_n$ is equal to $9$, we can formulate the following statement.
\begin{proposition} There do not exist transmission irregular fullerene graphs with $n \le 216$ vertices. \end{proposition}
Since almost all fullerene graphs have no symmetries, we believe that transmission irregular fullerene graphs exist for large numbers of vertices.
\begin{problem} Does there exist a transmission irregular fullerene graph (IPR fullerene graph)? If yes, then what is the smallest order of such graphs? \end{problem}
\begin{table}[t!h!] \centering \caption{Distribution of fullerene graphs of $F_{196}$ with respect to the Wiener \\ complexity $C_W$
($N$ is the number of graphs).} \label{TDistr100} \footnotesize
\begin{tabular}{rr|rr|rr|rr|rr|rr} \hline $C_W$ & $N$\ \ \ \ & $C_W$ & $N$\ \ \ \ & $C_W$ & $N$\ \ \ \ & $C_W$ & $N$\ \ \ \ & $C_W$ & $N$\ \ \ \ & $C_W$ & $N$\ \ \ \ \\ \hline
13 & 1 & 45 & 162 & 73 & 4635 & 101 & 289459 & 129 & 3966428 & 157& 1787105 \\
14 & 2 & 46 & 176 & 74 & 5426 & 102 & 333904 & 130 & 4148282 & 158& 1534799 \\
15 & 3 & 47 & 178 & 75 & 6367 & 103 & 380828 & 131 & 4323678 & 159& 1296521 \\
16 & 1 & 48 & 134 & 76 & 7143 & 104 & 437958 & 132 & 4482070 & 160& 1080839 \\
19 & 1 & 49 & 101 & 77 & 8442 & 105 & 498054 & 133 & 4630277 & 161& 885355 \\
21 & 1 & 50 & 86 & 78 & 9680 & 106 & 564113 & 134 & 4766726 & 162& 713786 \\
23 & 5 & 51& 78 & 79 & 10834 & 107 & 637114 & 135 & 4889247 & 163& 564259 \\
24 & 3 & 52& 90 & 80 & 12451 & 108 & 718248 & 136 & 4999476 & 164& 438608 \\
25 & 1 & 53& 114 & 81 & 13990 & 109 & 804261 & 137 & 5082927 & 165& 331636 \\
26 & 4 & 54& 100 & 82 & 16160 & 110 & 899361 & 138 & 5147054 & 166& 246578 \\
27 & 7 & 55& 112 & 83 & 18120 & 111 & 1000968 & 139 & 5186406 & 167& 178749 \\
28 & 1 & 56& 132 & 84 & 20406 & 112 & 1110963 & 140 & 5195292 & 168& 126604 \\
29 & 4 & 57& 173 &85 & 23226 & 113 & 1231994 & 141 & 5177020 & 169& 87271 \\
30 & 4 & 58& 247 &86 & 26153 & 114 & 1357031 & 142 & 5131736 & 170& 58523 \\
31 & 4 & 59& 268 &87 & 29857 & 115 & 1490391 & 143 & 5052595 & 171& 38065 \\
32 & 8 & 60& 325 &88 & 34232 & 116 & 1632573 & 144 & 4943731 & 172& 23910 \\
33 & 10 & 61& 429 &89 & 39558 & 117 & 1783764 & 145 & 4809637 & 173& 14592 \\
34 & 8 & 62& 551 &90 & 46466 & 118 & 1941577 & 146 & 4643527 & 174& 8433 \\
35 & 13 & 63& 619 & 91 & 55187 & 119 & 2108524 & 147 & 4450101 & 175& 4630 \\
36 & 12 & 64& 846 & 92 & 65082 & 120 & 2278437 & 148 & 4236185 & 176& 2549 \\
37 & 33 & 65& 1039 & 93 & 77669 & 121 & 2459465 & 149 & 4005961 & 177& 1318 \\
38 & 27 & 66& 1268 & 94 & 92772 & 122 & 2636351 & 150 & 3750830 & 178& 653 \\
39 & 48 & 67& 1587 & 95 & 110504 & 123 & 2825077 & 151 & 3479586 & 179& 306 \\
40 & 60 & 68& 1777 & 96 & 130842 & 124 & 3016435 & 152 & 3198568 & 180& 130 \\
41 & 78 & 69& 2267 & 97 & 155105 & 125 & 3210085 & 153 & 2912936 & 181& 71 \\
42 & 121 & 70& 2704 & 98 & 181583 & 126 & 3401907 & 154 & 2624386 & 182& 26 \\
43 & 132 & 71& 3279 & 99 & 214088 & 127 & 3594118 & 155 & 2339326 & 183& 8 \\
44 & 153 & 72& 4016 &100 & 249142 & 128 & 3784693 & 156 & 2059994 & 184& 5 \\ \hline \end{tabular} \end{table} \begin{figure}
\caption{Difference $g_n$ between order and the maximal Wiener complexity of \\ fullerene graphs.}
\label{Fig2}
\end{figure}
\begin{table}[th] \centering \caption{Maximal Wiener complexity $C_n$ and $g_n=n-C_n$ of fullerene \\ graphs of $F_n$
($N$ is the number of graphs with $C_n$).} \label{TDistrAll} \footnotesize
\begin{tabular}{cccc|cccc|cccc|cccc} \hline $n$ &$C_n$&$g_n$&$N$& $n$&$C_n$&$g_n$&$N$& $n$&$C_n$&$g_n$&$N$& $n$&$C_n$&$g_n$&$N$ \\ \hline
20 & 1 & 19 & 1 & 72 & 56 & 16 & 6 & 122 & 109 & 13 & 5 &172 & 160 & 12 & 1 \\
24 & 2 & 22 & 1 & 74 & 61 & 13 & 1 & 124 & 111 & 13 & 5 &174 &164 & 10 & 1 \\
26 & 2 & 24 & 1 & 76 & 63 & 13 & 1 & 126 & 115 & 11 & 1 &176 &165 & 11 & 1 \\
28 & 5 & 23 & 1 & 78 & 64 & 14 & 2 & 128 & 117 & 11 & 1 &178 &167 & 11 & 2 \\
30 & 7 & 23 & 1 & 80 & 66 & 14 & 2 & 130 & 118 & 12 & 3 &180 &168 & 12 & 1 \\
32 & 9 & 23 & 1 & 82 & 71 & 11& 1 & 132 & 121 & 11 & 1 &182 &171 & 11 & 1 \\
34 & 10 & 24 & 2 & 84 & 70 & 14& 2 & 134 & 123 & 11 & 3 &184 &172 & 12 & 4 \\
36 & 14 & 22 & 1 & 86 & 73 & 13& 3 & 136 & 124 & 12 & 1 &186 &177 & 9 & 1 \\
38 & 18 & 20 & 1 & 88 & 73 & 15& 7 & 138 & 127 & 11 & 2 &188 &177 & 11 & 1 \\
40 & 19 & 21 & 1 & 90 & 79 & 11 & 1 & 140 & 131 & 9 & 1 &190 &180 & 10 & 1 \\
42 & 22 & 20 & 1 & 92 & 80 & 12 & 1 & 142 & 131 & 11 & 1 &192 &181 & 11 & 1 \\
44 & 25 & 19 & 1 & 94 & 82 & 12 & 1 & 144 & 132 & 12 & 2 &194 &183 & 11 & 2 \\
46 & 25 & 21 & 4 & 96 & 84 & 12 & 2 & 146 & 134 & 12 & 4 &196 &184 & 12 & 5 \\
48 & 30 & 18 & 1 & 98 & 86 & 12 & 1 & 148 & 136 & 12 & 1 &198 &187 & 11 & 2 \\
50 & 35 & 15 & 1 & 100 & 89 & 11 & 1 & 150 & 138 & 12 & 4 &200 &189 & 11 & 2 \\
52 & 36 & 16 & 1 & 102 & 89 & 13 & 4 & 152 & 141 & 11 & 1 &202 &193 & 9 & 1 \\
54 & 37 & 17 & 1 & 104 & 91 & 13 & 3 & 154 & 144 & 10 & 1 &204 &192 & 12 & 1 \\
56 & 40 & 16 & 1 & 106 & 93 & 13 & 3 & 156 & 144 & 12 & 3 &206 &195 & 11 & 3 \\
58 & 43 & 15 & 2 & 108 & 96 & 12 & 1 & 158 & 147 & 11 & 1 &208 &198 & 10 & 1 \\
60 & 44 & 16 & 3 & 110 & 98 & 12 & 2 & 160 & 148 & 12 & 2 &210 &199 & 11 & 1 \\
62 & 46 & 16 & 3 & 112 & 100 & 12 & 1 & 162 & 151 & 11 & 2 &212 &199 & 13 & 8 \\
64 & 49 & 15 & 5 & 114 & 102 & 12 & 1 & 164 & 153 & 11 & 1 &214 &202 & 12 & 3 \\
66 & 50 & 16 & 2 & 116 & 102 & 14 & 4 & 166 & 155 & 11 & 2 &216 &204 & 12 & 4 \\
68 & 56 & 12 & 1 & 118 & 106 & 12 & 3 & 168 & 157 & 11 & 2 & - & & & \\
70 & 56 & 14 & 1 & 120 & 110 & 10 & 2 & 170 & 159 & 11 & 2 & - & & & \\ \hline \end{tabular} \end{table}
\section{Graphs with the maximal Wiener complexity}
In this section, we study the following problem: can the Wiener index of a fullerene graph with the maximal Wiener complexity be maximal? Numerical data for the Wiener indices of fullerene graphs of order $n \le 216$ are presented in Table~\ref{TWiener}. Here the three columns $C_n$, $W$, and $D$ contain the maximal Wiener complexity, the Wiener index and the diameter of the graphs with $C_n$, respectively. The three columns $W_m$, $C_W$, and $D$ contain the maximal Wiener index, the Wiener complexity and the diameter of the graphs with $W_m$.
Based on the data of Tables~\ref{TDistrAll} and \ref{TWiener}, one can make the following observations. \begin{itemize} \item Several fullerene graphs of fixed $n$ may have the maximal Wiener complexity $C_n$, while only one fullerene graph has the maximal Wiener index.
\item Wiener indices of fullerene graphs with fixed $C_n$ ($|\, F_n| > 1$) are not maximal, except for the graphs of order $n=28$ with $W=1198$ ($|\, F_{28}| = 2$).
\item Almost all fullerene graphs with fixed $C_n$ have distinct Wiener indices. The only exceptions are the graphs of order 46 with $W=4289$ (the sequences of their vertex transmissions are distinct). \end{itemize}
\pagebreak
\begin{table}[t!h!] \centering \caption{\normalsize Maximal Wiener complexity and Wiener indices of fullerene graphs.} \label{TWiener} \footnotesize
\begin{tabular}{|r@{\hspace{2mm}}|r@{\hspace{2mm}}r@{\hspace{3mm}}r|r@{\hspace{2mm}}r@{\hspace{2mm}}r@{\hspace{2mm}}r
|@{\hspace{1mm}}r@{\hspace{1mm}}|
@{\hspace{1mm}}r@{\hspace{2mm}}|rr@{\hspace{3mm}}r|r@{\hspace{2mm}}r@{\hspace{2mm}}r@{\hspace{2mm}}r|} \hline $n$ & $C_n$&$W$ &$D$ & $W_m$ &$C_W$ & $D$&$t$& &$n$ &$C_n$&$W$ &$D$ &$W_m$ &$C_W$ &$D$& $t$ \\ \hline
20 & 1 & 500 & 5 & 500 & 1 & 5 & & & 84 & 70 & 19939 & 13 & 21754 & 21& 15&$c1$ \\
24 & 2 & 804 & 5 & 804 & 2 & 5 & & & & & 20076 & 13 & & & & \\
26 & 2 & 987 & 6 & 987 & 2 & 6 &$b$ & & 86 & 73 & 21404 & 13 & 23467& 8 & 16&$b$ \\
28 & 5 & 1198 & 6 & 1198 & 5 & 6 & & & & & 21521 & 13 & & & & \\
30 & 7 & 1431 & 6 & 1435 & 3 & 6 &$a$ & & & & 21593 & 13 & & & & \\
32 & 9 & 1688 & 6 & 1696 & 3 & 7 &$b$ & & 88 & 73 & 22359 & 13 &24714 & 21 & 16& $c2$ \\
34 & 10 & 1973 & 7 & 1978 & 10 & 7 & & & & & 22421 & 13 & & & & \\
& & 1978 & 7 & & & & & & & & 22604 & 13 & & & & \\
36 & 14 & 2288 & 7 & 2298 & 8 & 7 &$c1$ & & & & 22616 & 13 & & & & \\
38 & 18 & 2627 & 7 & 2651 & 4 & 8 &$b$ & & & & 22619 & 13 & & & & \\
40 & 19 & 3001 & 7 & 3035 & 4 & 8 &$a$ & & & & 22750 & 14 & & & & \\
42 & 22 & 3397 & 8 & 3415 & 19 & 8 &$d1$ & & & & 22939 & 14 & & & & \\
44 & 25 & 3830 & 8 & 3888 & 4 & 9 &$b$ & &90 & 79 & 23923 & 14 &27155 & 9 & 17&$a$ \\
46 & 25 & 4285 & 8 & 4322 & 19 & 9 &$d2$ & &92 & 80 & 25731 & 15 &28256 & 8 & 17&$b$ \\
& & 4289 & 8 & & & & & &94 & 82 & 26793 & 14 &28910 & 44 & 17&$d2$ \\
& & 4289 & 8 & & & & & &96 & 84 & 28274 & 14 &31418 & 24 & 17&$c1$ \\
& & 4291 & 8 & & & & & & & & 28317 & 15 & & & & \\
48 & 30 & 4795 & 9 & 4858 & 12 & 9 &$c1$ & &98 & 86 & 30068 & 15 &33651 & 9 & 18&$b$ \\
50 & 35 & 5310 & 9 & 5455 & 5 & 9 &$a$ & &100 &89 & 31196 & 15 &36580 & 10 & 19&$a$ \\
52 & 36 & 5876 & 9 & 5994 & 13 & 10 &$c2$ & &102 &89 & 32984 & 15 &36206 & 47 & 18&$d1$ \\
54 & 37 & 6475 & 9 & 6558 & 22 & 10 &$d1$ & & & & 33070 & 15 & & & & \\
56 & 40 & 7114 & 10 & 7352 & 5 & 11 &$b$ & & & & 33226 & 15 & & & & \\
58 & 43 & 7782 & 10 & 7910 & 25 & 11 &$d2$ & & & & 33505 & 15 & & & & \\
& & 7822 & 10 & & & & & &104 &91 & 34402 & 15 &39688 & 9 & 19&$b$ \\
60 & 44 & 8437 & 10 & 8880 & 6 & 11 &$a$ & & & & 34529 & 15 & & & & \\
& & 8466 & 10 & & & & & & & & 36801 & 17 & & & & \\
& & 8490 & 10 & & & & & &106 &93 & 36648 & 16 &40278 & 47 & 19&$d2$ \\
62 & 46 & 9202 & 10 & 9651 & 6 & 12 &$b$ & & & & 36664 & 16 & & & & \\
& & 9220 & 11 & & & & & & & & 37594 & 17 & & & & \\
& & 9250 & 11 & & & & & &108 &96 & 38033 & 15 & 43578& 27 & 19&$c1$ \\
64 & 49 & 9988 & 11 & 10410 & 15 & 12 &$c2$ & &110 &98 & 40154 & 16 & 48005& 11 & 21&$a$ \\
& & 9993 & 11 & & & & & & & & 41419 & 17 & & & & \\
& &10003 & 11 & & & & & &112 &100 & 41940 & 17 &48234 & 27 & 20&$c2$ \\
& &10013 & 11 & & & & & &114 &102 & 43885 & 16 &49318 & 52 & 20&$d1$ \\
& &10016 & 11 & & & & & &116 &102 & 45437 & 16 &53832 & 10 & 21&$b$ \\
66 & 50 &10814 & 11 &11126 & 30 & 12 &$d1$ & & & & 46632 & 17 & & & & \\
& &10842 & 11 & & & & & & & & 46798 & 17 & & & & \\
68 & 56 &11714 & 11 &12376 & 6 & 13 &$b$ & & & & 47927 & 18 & & & & \\
70 & 56 &12589 & 11 &13505 & 7 & 13 &$a$ & &118 &106 & 47059 & 15 &54310 & 50 & 21&$d2$ \\
72 & 56 &13407 & 11 &14298 & 18 & 13 &$c1$ & & & & 47489 & 16 & & & & \\
& & 13448 & 11 & & & & & & & & 47697 & 16 & & & & \\
& & 13453 & 11 & & & & & &120 &110 & 49143 & 16 & 61630& 12 & 23&$a$ \\
& & 13567 & 12 & & & & & & & & 49606 & 17 & & & & \\
& & 13578 & 12 & & & & & &122 &109 & 51344 & 16 & 62011& 11 & 22&$b$ \\
& & 13766 & 12 & & & & & & & & 51456 & 16 & & & & \\
74 & 61 & 14521 & 12 & 15563 & 7 & 14 &$b$ & & & & 51933 & 17 & & & & \\
76 & 63 & 15867 & 13 & 16554 & 18 & 14 &$c2$ & & & & 52974 & 17 & & & & \\
78 & 64 & 16834 & 13 & 17398 & 37 & 14 &$d1$ & & & & 53070 & 17 & & & & \\
& & 16877 & 13 & & & & & &124 &111 & 54105 & 17 & 64170 & 30& 22&$c2$ \\
80 & 66 & 17727 & 13 & 19530 & 8 & 15 &$a$ & & & & 55050 & 18 & & & & \\
& & 17832 & 13 & & & & & & & & 55789 & 18 & & & & \\
82 & 71 & 19075 & 13 & 19918 & 38 & 15 &$d2$ & & & & 57358 & 19 & & & & \\
& & & & & & & & & & & 57473 & 19 & & & & \\ \hline \end{tabular} \end{table}
\setcounter{table}{2}
\begin{table}[ht] \centering \caption{\normalsize Maximal Wiener complexity and Wiener indices of fullerene graphs (\emph{continued}).} \footnotesize
\begin{tabular}{|r@{\hspace{2mm}}|r@{\hspace{2mm}}r@{\hspace{2mm}}r|r@{\hspace{2mm}}r@{\hspace{2mm}}r@{\hspace{2mm}}r
|@{\hspace{1mm}}r@{\hspace{1mm}}|
@{\hspace{1mm}}r@{\hspace{2mm}}|rr@{\hspace{2mm}}r|r@{\hspace{2mm}}r@{\hspace{2mm}}r@{\hspace{2mm}}r|} \hline $n$ & $C_n$&$W$ &$D$ & $W_m$ &$C_W$ & $D$&$t$& &$n$ &$C_n$&$W$ &$D$ &$W_m$ &$C_W$ &$D$& $t$ \\ \hline
126 & 115 & 57238 & 18 & 65286 & 57 & 22 &$d1$& &178 & 167 & 141743& 23 &174510& 65 &31 &$d2$ \\
128 & 117 & 60434 & 18 & 70976 & 11 & 23 &$b$& & & & 150696& 25 & & & & \\
130 & 118 & 63736 & 19 & 77655 & 13 & 25 &$a$& &180 & 168 & 139697& 21 &200780& 18 &35 &$a$ \\
& & 63922 & 19 & & & & & &182 & 171 & 144410& 21 &192971& 16 & 32&$b$ \\
& & 65396 & 20 & & & & & &184 & 172 & 146581& 22 &197130& 45 & 32&$c2$ \\
132 & 121 & 62917 & 17 & 76538 & 33 & 23 &$c1$& & & & 147054& 21 & & & & \\
134 & 123 & 64935 & 17 & 80763 & 12 & 24 &$b$& & & & 153615& 23 & & & & \\
& & 65225 & 17 & & & & & & & & 154923& 23 & & & & \\
& & 68161 & 19 & & & & & &186 & 177 & 167300& 26 &198046& 82 & 32&$d1$ \\
136 & 124 & 69838 & 19 & 83274 & 33 & 24 &$c2$& &188 & 177 & 154868& 21 &211776& 16 & 33&$b$ \\
138 & 127 & 72311 & 19 & 84398 & 62 & 24 &$d1$& &190 & 180 & 169849& 24 &235405& 19 & 37&$a$ \\
& & 73771 & 19 & & & & & &192 & 181 & 163370& 22 &222778& 48 & 33&$c1$ \\
140 & 131 & 73644 & 19 & 96280 & 14 & 27 &$a$& &194 & 183 & 187947& 27 &231763& 17 & 34&$b$ \\
142 & 131 & 79852 & 20 & 91518 & 56 & 25 &$d2$& & & & 191290& 27 & & & & \\
144 & 132 & 77934 & 18 & 97914 & 36 & 25 &$c1$& &196 & 184 & 174774& 23 &236394& 48 & 34&$c2$ \\
& & 78924 & 18 & & & & & & & & 178192& 24 & & & & \\
146 & 134 & 86095 & 20 &102947 & 13 & 26 &$b$& & & & 178529& 25 & & & & \\
& & 86287 & 21 & & & & & & & & 179284& 25 & & & & \\
& & 87298 & 21 & & & & & & & & 184011& 24 & & & & \\
& & 87442 & 21 & & & & & &198 & 187 & 177296& 23 &237198& 87 & 34& $d1$ \\
148 & 136 & 86432 & 20 &105834 & 36 & 26 &$c2$& & & & 189530& 25 & & & & \\
150 & 138 & 87886 & 19 &117705 & 15 & 29 &$a$& &200 & 189 & 180683& 23 &273830& 20 & 39& $a$ \\
& & 88860 & 19 & & & & & & & & 192365& 25 & & & & \\
& & 92732 & 21 & & & & & &202 & 193 & 210388& 28 &251318& 71 & 35& $d2$ \\
& & 93898 & 21 & & & & & &204 & 192 & 189450& 22 &265274& 51 & 35& $c1$ \\
152 & 141 & 92988 & 20 &115416 & 13 & 27 &$b$& &206 & 195 & 221909& 28 &275427& 18 & 36& $b$ \\
154 & 144 & 97359 & 21 &115270 & 59 & 27 &$d2$& & & & 221995& 28 & & & & \\
156 & 144 & 95579 & 19 &122938 & 39 & 27 &$c1$& & & & 222097& 28 & & & & \\
& & 96997 & 19 & & & & & &208 & 198 &201644 & 23 &280554& 51 & 36& $c2$ \\
& & 98864 & 21 & & & & & &210 & 199 &238572 & 29 &316255& 21 & 41& $a$ \\
158 & 147 &100055 & 19 &128851 & 14 & 28 &$b$& &212 & 199 &207617 & 23 &299176& 18 & 37& $b$ \\
160 & 148 &103952 & 20 &142130 & 16 & 31 &$a$& & & &207975 & 23 & & & & \\
& &108170 & 22 & & & & & & & &209707 & 23 & & & & \\
162 & 151 & 104909& 19 &133206& 72 & 28&$d1$ & & & &211942 & 24 & & & & \\
& & 116278& 23 & & & & & & & &215779 & 24 & & & & \\
164 & 153 & 110088& 20 &143288& 14 & 29&$b$ & & & &228285 & 26 & & & & \\
166 & 155 & 117531& 21 &142838& 62 & 29&$d2$ & & & &228507 & 26 & & & & \\
& & 119485& 23 & & & & & & & &228922 & 26 & & & & \\
168 & 157 & 114316& 18 &151898& 42 & 29&$c1$ & &214 & 202 &226652 & 25 &297030& 74 & 37& $d2$ \\
& & 126632& 23 & & & & & & & &247352 & 29 & & & & \\
170 & 159 & 123193& 22 &169755& 17 & 33&$a$ & & & &250978 & 29 & & & & \\
& & 130548& 24 & & & & & &216 & 204 &220131 & 23 &312858& 54 & 37& $c1$ \\
172 & 160 & 129708& 22 &162474& 42 & 30&$c2$ & & & &226928 & 25 & & & & \\
174 & 164 & 131354& 23 &163478& 77 & 30&$d1$ & & & &240920 & 27 & & & & \\
176 & 165 &130105 & 20 &175312& 15 &31 &$b$ & & & &270770 & 33 & & & & \\ \hline \end{tabular} \end{table}
\begin{itemize} \item The diameters of graphs with fixed $C_n$ are not maximal for $n \ge 52$.
\item Fullerene graphs with the maximal Wiener index have the maximal diameter. The values of the Wiener complexity $C_W$ can vary greatly. This can be partially explained by the appearance of symmetries in graphs. \end{itemize}
It is of interest to know how the pentagons are distributed among the hexagons in fullerene graphs with the maximal Wiener complexity (see Tables~\ref{TDistrAll} and \ref{TWiener}). Does there exist any regularity in the distribution of pentagons? Table~\ref{TConn} gives some information on the occurrence of pentagonal parts of a particular size. Here $N$ is the number of graphs in which the pentagons form $N_p$ isolated connected parts.
\begin{table}[h] \centering \caption{The number of graphs with $N_p$ isolated pentagonal parts.} \label{TConn}
\begin{tabular}{l|rrrrrrrr} \hline
$N_p$ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\
$N$ & 9 & 8 & 27 & 61 & 42 & 40 & 7 & 1 \\ \hline \end{tabular} \end{table}
Table~\ref{TIsolC5} shows how many fullerene graphs with the maximal Wiener complexity have isolated pentagons (an isolated pentagon forms a part by itself). Here $N$ is the number of graphs having $N_5$ isolated pentagons. Does there exist an IPR fullerene graph with the maximal Wiener complexity $C_n$ (the lines in Fig.~\ref{Fig2} would then intersect)?
\begin{table}[th] \centering \caption{The number of graphs with $N_5$ isolated pentagons.} \label{TIsolC5}
\begin{tabular}{l|rrrrrr} \hline
$N_5$ & 0 & 1 & 2 & 3 & 4 & 5 \\
$N$ & 23& 56 & 44& 44 & 24 & 4 \\ \hline \end{tabular} \end{table}
\section{Graphs with the maximal Wiener index}
The Wiener index of fullerene graphs has been studied in \cite{Aliz14,Ando17,Ashr11,Fowl01,Ghor13-1,Ghor17,Ghor13-2,Ghos18,Grao11,Hua14,Iran09}. There is a class of fullerene graphs of tubular shape, called \emph{nanotubical fullerene graphs}. They are cylindrical in shape, with the two ends capped by subgraphs containing six pentagons and possibly some hexagons, called \emph{caps} (see an illustration in Fig.~\ref{Fig3}).
\begin{figure}
\caption{Construction of a nanotubical fullerene graph with two caps.}
\label{Fig3}
\end{figure}
Consider the fullerene graphs with the maximal Wiener indices (see Table~\ref{TWiener}). The five graphs of $F_{20}$--$F_{28}$ and $F_{34}$ contain one pentagonal part, and the other 93 graphs possess two pentagonal parts. The two pentagonal parts of each such fullerene graph are identical and contain diametrical vertices. Therefore such graphs are nanotubical fullerene graphs with caps containing identical pentagonal parts. All types of such parts are depicted in Fig.~\ref{Fig4}. The number of fullerene graphs having a given part is shown near the diagrams. The type of a cap is determined by the type of its pentagonal part. The cap types of the fullerene graphs are indicated by the corresponding notation in column $t$ of Table~\ref{TWiener}. Constructive approaches for the enumeration of various caps were proposed in \cite{Brin02,Brin99}. We now consider each cap type in turn.
\begin{figure}
\caption{Caps for nanotubical fullerene graphs with the maximal Wiener index.}
\label{Fig4}
\end{figure}
1. \emph{Type $a$}. Caps of type $a$ define the so-called (5,0)-nanotubical fullerene graphs. The structure of the graphs of this infinite family $T_a$ is clear from the example in Fig.~\ref{Fig5}a. The diameter and the Wiener index of such fullerene graphs were studied in \cite{Aliz14}. To indicate the order of a graph $G$, we will use the notation $G_n$.
\begin{proposition} {\rm \cite{Aliz14}} \label{Wa} Let $G_n$ be a nanotubical fullerene graph with caps of type $a$. It has $n=10k$ vertices, $k \ge 2$. Then $C_W(G_n)=k$, $D(G_n)=2k-1$, and $W(G_{20}) = 500$, $W(G_{30}) = 1435$, $W(G_{40}) = 3035$, and for $n \ge 50$, \begin{eqnarray*} W(G_n) & = & \frac{1}{30} \left( n^3 + 1175 n - 20100 \right). \end{eqnarray*} \end{proposition}
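As a sanity check (ours, not part of the original computations), the closed form above reproduces the maximal Wiener indices $W_m$ listed in Table~\ref{TWiener} for the rows of type $a$:

```python
def W_a(n):
    """W(G_n) for caps of type a, valid for n >= 50 with n = 10k."""
    assert n >= 50 and n % 10 == 0
    num = n**3 + 1175 * n - 20100
    assert num % 30 == 0      # the numerator is always divisible by 30 here
    return num // 30

# W_m for the type-a rows of the table: n = 50, 60, 70.
assert W_a(50) == 5455
assert W_a(60) == 8880
assert W_a(70) == 13505
```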
Based on the numerical data of Table~\ref{TWiener}, similar results have been obtained for fullerene graphs of order $n \le 216$ with caps of the other three types.
\begin{figure}
\caption{Structure of fullerene graphs with caps of types $a$ and $b$.}
\label{Fig5}
\end{figure}
2. \emph{Type $b$}. The structure of the graphs of the corresponding family $T_b$ is clear from the examples in Fig.~\ref{Fig5}b. Vertices marked by $v$ should be identified in every graph. Table~\ref{TWiener} contains 26 such graphs.
\begin{proposition} \label{Wb} Let $G_n$ be a nanotubical fullerene graph with caps of type $b$. It has $n=6k-4$ vertices, $k \ge 5$. Then $C_W(G_n)=\lceil k/2 \rceil$, $D(G_n)=k+1$, and for $n \ge 26$, \begin{eqnarray*} W(G_n) & = & \frac{1}{36} \left( n^3 + 27 n^2 + 156 n - 4352 \right). \end{eqnarray*} \end{proposition}
Two caps of type $b$ have adjacent pentagonal rings only for $k=5$. If fullerenes with caps of types $a$ and $b$ have the same number of vertices ($n=10k$), then the graph with caps of type $a$ has the maximal Wiener index.
3. \emph{Type $c$}. Fullerene graphs with caps of type $c$ are split into two disjoint families, $T_c = T_{c1} \cup T_{c2}$. The corresponding graphs are marked in column $t$ of Table~\ref{TWiener} by $c1$ (13 graphs) and $c2$ (12 graphs). The numbers of vertices of the graphs are given in Table~\ref{Tcd}. The orders of the graphs of $T_c$ do not coincide with the orders of graphs from the set $T_a \cup T_b$.
\begin{proposition} \label{Wc} a) Let $G_n$ be a nanotubical fullerene graph of family $T_{c1}$. Then for $n \ge 36$, \begin{eqnarray*} W(G_n) & = & \frac{1}{36} \left( n^3 + 24 n^2 + 336 n - 7128 \right). \end{eqnarray*} The Wiener complexity and the diameter of $G_n$ are shown in Table~\ref{Tcd}. One value should be corrected for $k=0$ (see a cell of Table~\ref{Tcd} with mark *): $C_W(G_{36})=8$ instead of $9$.
b) Let $G_n$ be a nanotubical fullerene graph of family $T_{c2}$. Then for $n \ge 52$, \begin{eqnarray*} W(G_n) & = & \frac{1}{36} \left( n^3 + 24 n^2 + 336 n - 7192 \right). \end{eqnarray*} The Wiener complexity and the diameter of $G_n$ are shown in Table~\ref{Tcd}. One value should be corrected for $k=0$: $C_W(G_{52})=13$ instead of $12$. \end{proposition}
\begin{table}[t] \centering \caption{Parameters of fullerene graphs with $n \le 216$ vertices
and caps of types $c$ and $d$. Here $k\ge 0$ for all expressions.} \label{Tcd}
\begin{tabular}{lll|lll} \hline
\multicolumn{3}{c|}{family $T_{c1}$} & \multicolumn{3}{c}{family $T_{c2}$} \\ \hline \ \ \ \ \ $n$ &\ \ \ $C_W$&\ \ \ \ $D$& \ \ \ \ \ $n$ &\ \ \ $C_W$&\ \ \ \ $D$ \\ \hline $60k+36$ & $15k+9^{\,*}$ & $10k+7$ & $60k+76$ & $15k+18$ & $10k+14$ \\ $60k+48$ & $15k+12$ & $10k+9$ & $60k+88$ & $15k+21$ & $10k+16$ \\ $60k+72$ & $15k+18$ & $10k+13$ & $60k+52$ & $15k+12^{\,*}$ & $10k+10$ \\ $60k+84$ & $15k+21$ & $10k+15$ & $60k+64$ & $15k+15$ & $10k+12$ \\ \hline
\multicolumn{3}{c|}{family $T_{d1}$} & \multicolumn{3}{c}{family $T_{d2}$} \\ \hline \ \ \ \ \ $n$ &\ \ \ $C_W$&\ \ \ \ $D$& \ \ \ \ \ $n$ &\ \ \ $C_W$&\ \ \ \ $D$ \\ \hline $60k+66$ & $25k+32^{\,*}$ & $10k+2$ & $60k+106$& $15k+47$ & $10k+19$ \\ $60k+78$ & $25k+37$ & $10k+4$ & $60k+58$ & $15k+35^{\,*}$ & $10k+11$ \\ $60k+102$& $25k+47$ & $10k+8$ & $60k+82$ & $15k+41^{\,*}$ & $10k+15$ \\ $60k+54$ & $25k+27^{\,*}$ & $10k$ & $60k+94$ & $15k+44$ & $10k+17$ \\ \hline \end{tabular} \end{table}
4. \emph{Type $d$}. Fullerene graphs with caps of type $d$ are also split into two disjoint families, $T_d = T_{d1} \cup T_{d2}$.
Both families have 12 members
(see graphs with marks $d1$ and $d2$ in column $t$ of Table~\ref{TWiener}). The numbers of vertices of graphs of $T_d$ are shown in Table~\ref{Tcd}. The orders of graphs of $T_d$ do not coincide with the orders of graphs from the set $T_a \cup T_b \cup T_c$.
\begin{proposition} \label{Wd} a) Let $G_n$ be a nanotubical fullerene graph of family $T_{d1}$. Then $W(G_{42})=3415$ and for $n \ge 54$, \begin{eqnarray*} W(G_n) & = & \frac{1}{36} \left( n^3 + 15 n^2 + 1068 n - 22788 \right). \end{eqnarray*} The Wiener complexity and the diameter of $G_n$ are shown in Table~\ref{Tcd}. Two values should be corrected for $k=0$ (see cells of Table~\ref{Tcd} with mark *): $C_W(G_{66})=30$ instead of $32$ and $C_W(G_{54})=22$ instead of $27$.
b) Let $G_n$ be a nanotubical fullerene graph of family $T_{d2}$. Then $W(G_{46})=4322$ and for $n \ge 58$, \begin{eqnarray*} W(G_n) & = & \frac{1}{36} \left( n^3 + 15 n^2 + 1068 n - 22756 \right). \end{eqnarray*} The Wiener complexity and the diameter of $G_n$ are shown in Table~\ref{Tcd}. Two values should be corrected for $k=0$: $C_W(G_{58})=25$ instead of $35$ and $C_W(G_{82})=38$ instead of $41$. \end{proposition}
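Analogously to the check for type $a$, the closed forms of the propositions for types $b$, $c$ and $d$ can be evaluated and compared with the corresponding maximal Wiener indices $W_m$ in Table~\ref{TWiener}. The following Python check is ours, not part of the original computations:

```python
def W_poly(n, a2, a1, a0):
    """Evaluate (n^3 + a2*n^2 + a1*n + a0) / 36 exactly."""
    num = n**3 + a2 * n**2 + a1 * n + a0
    assert num % 36 == 0
    return num // 36

# Type b (n = 6k - 4): table rows n = 26 and n = 32.
assert W_poly(26, 27, 156, -4352) == 987
assert W_poly(32, 27, 156, -4352) == 1696
# Types c1 and c2: table rows n = 36 and n = 52.
assert W_poly(36, 24, 336, -7128) == 2298
assert W_poly(52, 24, 336, -7192) == 5994
# Types d1 and d2: table rows n = 54 and n = 58.
assert W_poly(54, 15, 1068, -22788) == 6558
assert W_poly(58, 15, 1068, -22756) == 7910
```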
The above considerations of fullerene graphs with $n \le 216$ vertices lead to the following conjectures for all fullerene graphs.
\begin{conjecture} If a fullerene graph of an arbitrary order has the maximal Wiener index, then it is a nanotubical fullerene graph with caps of types $a$--$d$ and its Wiener index is given by Propositions \ref{Wa}--\ref{Wd}. \end{conjecture}
\begin{conjecture} The Wiener complexity and the diameter of fullerene graphs of an arbitrary order having the maximal Wiener index are given in Propositions \ref{Wa}--\ref{Wd}. \end{conjecture}
\end{document}
\begin{document}
\title[]{On the tensor rank of multiplication in finite extensions of finite fields} \author{S. Ballet} \address{Institut de Math\'{e}matiques de Luminy\\ case 930, F13288 Marseille cedex 9\\ France} \email{[email protected]} \author{J. Chaumine} \address{Laboratoire G\'eom\'etrie Alg\'ebrique et Applications \`a la Th\'eorie de l'Information\\ Universit\'e de la Polyn\'esie Fran\c{c}aise,\\ B.P. 6570, 98702 Faa'a, Tahiti\\ France} \email{[email protected]} \author{J. Pieltant} \address{Institut de Math\'{e}matiques de Luminy\\ case 930, F13288 Marseille cedex 9\\ France} \email{[email protected]} \author{R. Rolland} \address{Institut de Math\'{e}matiques de Luminy\\ case 930, F13288 Marseille cedex 9\\ France} \email{[email protected]}
\date{\today} \keywords{finite field, tensor rank of the multiplication, function field} \subjclass[2010]{ Primary 14H05; Secondary 12E20}
\begin{abstract} In this paper, we survey the known results concerning the tensor rank of multiplication in finite fields and we establish new asymptotic and non-asymptotic upper bounds for it. \end{abstract}
\maketitle
\section{Introduction} This paper has several aims. First, we introduce the problem of the tensor rank of multiplication in finite fields and survey the results obtained in this part of algebraic complexity theory, for which the best general reference is \cite{buclsh}. In particular, one aim of this paper is to list exhaustively the few published mistaken statements and to explain them. Second, we correct and clarify some of these statements. Last but not least, we improve several known results. In this section we introduce the problem, set up notation and terminology, and present the organization of the paper as well as the new results obtained.
\subsection{The bilinear complexity of the multiplication}
Let $\mathbb{F}_q$ be a finite field with $q=p^r$ elements where $p$ is a prime number. Let $\mathbb{F}_{q^n}$ be a degree $n$ extension of $\mathbb{F}_q$. The multiplication $m$ in the finite field $\mathbb{F}_{q^n}$ is a bilinear map from $\mathbb{F}_{q^n} \times \mathbb{F}_{q^n}$ into $\mathbb{F}_{q^n}$, thus it corresponds to a linear map $M$ from the tensor product $\mathbb{F}_{q^n} \bigotimes \mathbb{F}_{q^n}$ into $\mathbb{F}_{q^n}$. One can also represent $M$ by a tensor $t_M \in \mathbb{F}_{q^n}^*\bigotimes \mathbb{F}_{q^n}^* \bigotimes \mathbb{F}_{q^n}$ where $\mathbb{F}_{q^n}^*$ denotes the algebraic dual of $\mathbb{F}_{q^n}$. Each decomposition \begin{equation}\label{algo} t_M=\sum_{i=1}^{k}a^*_i\otimes b^*_i\otimes c_i \end{equation} of the tensor $t_M$, where $a^*_i, b^*_i \in \mathbb{F}_{q^n}^*$ and $c_i \in \mathbb{F}_{q^n}$, brings forth a multiplication algorithm $$x\cdot y=t_M(x\otimes y)=\sum_{i=1}^{k}a^*_i(x)\, b^*_i(y)\, c_i.$$
The bilinear complexity of the multiplication in $\mathbb{F}_{q^n}$ over $\mathbb{F}_q$, denoted by $\mu_{q}(n)$, is the minimum number of summands in the decomposition (\ref{algo}). Alternatively, we can say that the bilinear complexity of the multiplication is the rank of the tensor $t_M$ (cf. \cite{shtsvl}, \cite{ball2}).
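As a first illustration (ours, not part of the survey), the classical Karatsuba-type decomposition exhibits a rank-$3$ tensor for ${n=2}$, showing ${\mu_q(2)\leq 3}$ for every prime power $q$. Write $\mathbb{F}_{q^2}=\mathbb{F}_q(\alpha)$ with $\alpha^2=a\alpha+b$, and $x=x_0+x_1\alpha$, $y=y_0+y_1\alpha$:

```latex
% Three bilinear multiplications suffice for n = 2:
\begin{align*}
m_1 &= x_0 y_0, \qquad m_2 = x_1 y_1, \qquad m_3 = (x_0+x_1)(y_0+y_1),\\
x \cdot y &= (m_1 + b\,m_2) + (m_3 - m_1 - m_2 + a\,m_2)\,\alpha .
\end{align*}
```

The multiplications by the constants $a$ and $b$ are linear over $\mathbb{F}_q$ and therefore do not count in the bilinear complexity; this bound is sharp, so $\mu_q(2)=3$.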
\subsection{Organization of the paper}
In Section 2, we present the classical results obtained via the approach using multiplication by polynomial interpolation. In Section 3, we give a historical account of the results stemming from the pioneering work of D.V. and G.V. Chudnovsky \cite{chch} and later of Shparlinski, Tsfasman and Vladut \cite{shtsvl}. In particular, in Subsection 3.1 we present the original algorithm as well as the most successful version of the Chudnovsky-type algorithm known at the present time. This modern approach uses interpolation over algebraic curves defined over finite fields. We recount the first successes of this approach as well as the pitfalls on which the pioneers foundered; it eventually led to the first complete proof of the linearity of the bilinear complexity of multiplication \cite{ball1}. Then, in Subsection 3.2, we recall the known results about the bilinear complexity $\mu_q (n)$. Finally, in Section 4, we give new results for $\mu_q(n)$. More precisely, we obtain new upper bounds for $\mu_q(n)$ as well as new asymptotic upper bounds.
\section{Old classical results} Let $$P(u)=\sum_{i=0}^{n}a_iu^i$$ be a monic irreducible polynomial of degree $n$ with coefficients in a field $F$. Let $$R(u)=\sum_{i=0}^{n-1}x_iu^i$$ and $$S(u)=\sum_{i=0}^{n-1}y_iu^i$$ be two polynomials of degree $\leq n-1$ where the coefficients $x_i$ and $y_i$ are indeterminates.
Fiduccia and Zalcstein (cf. \cite{fiza}, \cite{buclsh} p.367 prop. 14.47) have studied the general problem of computing the coefficients of the product $R(u) \times S(u)$ and have shown that at least $2n-1$ multiplications are needed. When the field $F$ is infinite, an algorithm reaching exactly this bound was previously given by Toom in \cite{toom}. Winograd described in \cite{wino2} all the algorithms reaching the bound $2n-1$. Moreover, Winograd proved in \cite{wino3} that, up to some transformations, every algorithm for computing the coefficients of $R(u) \times S(u) \mod P(u)$ which is of bilinear complexity $2n-1$ necessarily computes the coefficients of $R(u) \times S(u)$, and consequently uses one of the algorithms described in \cite{wino2}. These algorithms use interpolation techniques and cannot be performed if the cardinality of the field $F$ is $<2n-2$. In conclusion we have the following result:
\begin{theo}\label{old} If the cardinality of $F$ is $<2n-2$, every algorithm computing the coefficients of $R(u) \times S(u) \mod P(u)$ has a bilinear complexity $>2n-1$. \end{theo}
Applying the results of Winograd and De Groote \cite{groo} and Theorem \ref{old} to the multiplication in a finite extension $\mathbb{F}_{q^n}$ of a finite field $\mathbb{F}_q$ we obtain:
\begin{theo}\label{thm_wdg} The bilinear complexity $\mu_q(n)$ of the multiplication in the finite field $\mathbb{F}_{q^n}$ over $\mathbb{F}_q$ verifies $$\mu_q(n) \geq 2n-1,$$ with equality holding if and only if $$n \leq \frac{q}{2}+1.$$ \end{theo}
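As a concrete reading of this criterion (an illustration of ours), take ${q=5}$: the equality condition ${n\leq q/2+1=3.5}$ holds exactly for ${n\leq 3}$, so

```latex
% Theorem applied to q = 5:
\mu_5(2)=2\cdot 2-1=3,\qquad \mu_5(3)=2\cdot 3-1=5,\qquad
\mu_5(4)\;>\;2\cdot 4-1=7 .
```

In particular $\mu_5(4)\geq 8$, and this is sharp: $\mu_5(4)=8$.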
This result does not give any estimate of an upper bound for $\mu_q(n)$, when $n$ is large. In \cite{lesewi}, Lempel, Seroussi and Winograd proved that $\mu_q(n)$ has a quasi-linear upper bound. More precisely:
\begin{theo} The bilinear complexity of the multiplication in the finite field $\mathbb{F}_{q^n}$ over $\mathbb{F}_q$ verifies: $$\mu_q(n) \leq f_q(n)n,$$ where $f_q(n)$ is a very slowly growing function, namely $$f_q(n)=O(\underbrace{\log_q\log_q \cdots \log_q}_{k~times}(n))$$ for any $k \geq 1$. \end{theo}
Furthermore, by extending and using more efficiently the technique developed in \cite{bska1}, Bshouty and Kaminski showed that $$\mu_q(n) \geq 3n-o(n)$$ for $q \geq 3$. The proof of this lower bound on the complexity of straight-line algorithms for polynomial multiplication is based on the analysis of Hankel matrices representing bilinear forms defined by linear combinations of the coefficients of the polynomial product.
\section{The modern approach via algebraic curves}
We have seen in the previous section that if the number of points of the ground field is too small, we cannot perform the multiplication by the Winograd interpolation method. D.V. and G.V. Chudnovsky designed in \cite{chch} an algorithm in which the interpolation is done on points of an algebraic curve over the ground field with a sufficient number of rational points. Using this algorithm, D.V. and G.V. Chudnovsky claimed that the bilinear complexity of multiplication in finite extensions of a finite field is asymptotically linear, but later Shparlinski, Tsfasman and Vladut noted in \cite{shtsvl} that they had only proved that the quantity $m_q=\liminf_{k \rightarrow \infty}\frac{\mu_q(k)}{k}$ is bounded, which does not suffice to prove linearity. To prove linearity, it is also necessary to prove that $M_q= \limsup_{k \rightarrow \infty}\frac{\mu_q(k)}{k}$ is bounded, which is the main aim of their paper. However, I. Cascudo, R. Cramer and C. Xing recently detected a mistake in the proof of Shparlinski, Tsfasman and Vladut. Unfortunately, this mistake, which we will explain in detail in this section, also affected their improved estimates of $m_q$. After this pioneering research, S. Ballet obtained in \cite{ball1} the first upper bounds for $\mu_q(n)$ that are uniform with respect to $q$. These bounds, which are not affected by the same mistake, at the same time prove the linearity of the bilinear complexity of multiplication in finite extensions of a finite field. Then S. Ballet et al. obtained several improvements, which will be recalled at the end of this section. These improvements are based on the following main ideas: the use of towers of algebraic function fields \cite{ball1}, \cite{ball3}, the descent of their field of definition \cite{baro1}, \cite{balbro}, the use of places of higher degree \cite{baro1}, \cite{ceoz}, and the use of local expansions \cite{arna1}, \cite{ceoz}.
\subsection{Linearity of the bilinear complexity of the multiplication}
\subsubsection{The D.V. Chudnovsky and G.V. Chudnovsky algorithm}
In this section, we recall the brilliant idea of D.V. and G.V. Chudnovsky and give their main result. First, we present their original algorithm, which was established in 1987 in \cite{chch}.
\begin{theo}\label{chudchud} Let
$\bullet$ $F/\mathbb{F}_q$ be an algebraic function field,
$\bullet$ $Q$ be a degree $n$ place of $F/\mathbb{F}_q$,
$\bullet$ ${\mathcal D}$ be a divisor of $F/\mathbb{F}_q$,
$\bullet$ ${\mathcal P}=\{P_1,...,P_{N}\}$ be a set of places of degree $1$.
We suppose that $Q$, $P_1, \cdots, P_N$ are not in the support of ${\mathcal D}$ and that:
a) the evaluation map $$Ev_Q: {\mathcal L}({\mathcal D}) \rightarrow \mathbb{F}_{q^n}\simeq F_Q$$ is onto (where $F_Q$ is the residue class field of $Q$),
b) the map
$$Ev_{{\mathcal P}}: \left \{ \begin{array}{lll}
{\mathcal L}(2{\mathcal D}) & \rightarrow & \mathbb{F}_{q}^{N} \cr
f & \mapsto & (f(P_1),...,f(P_{N})) \cr \end{array} \right . $$
is injective.
Then $$\mu_q(n)\leq N.$$ \end{theo}
As pointed out in \cite{shtsvl}, using this algorithm with a suitable sequence of algebraic curves defined over a finite field $\mathbb{F}_q$, D.V. and G.V. Chudnovsky only proved the following result:
\begin{theo}\label{chudmq} Let $q$ be a square $\geq 25$. Then
$$\liminf \frac{\mu_q(n)}{n}\leq 2\left(1+\frac{1}{\sqrt{q}-3}\right).$$ \end{theo}
\subsubsection{Asymptotic bounds}\label{mM} As seen previously, Shparlinski, Tsfasman and Vladut made in \cite{shtsvl} many interesting remarks on the algorithm of D.V. and G.V. Chudnovsky and on the bilinear complexity. In particular, they considered asymptotic bounds for the bilinear complexity in order to derive its asymptotic linearity from the algorithm of D.V. and G.V. Chudnovsky. Following these authors, let us define $$M_q= \limsup_{k \rightarrow \infty}\frac{\mu_q(k)}{k}$$ and $$m_q=\liminf_{k \rightarrow \infty}\frac{\mu_q(k)}{k}.$$
It is not at all obvious that either of these quantities is finite; in any case, the bilinear complexity of multiplication can be considered asymptotically linear in the degree of the extension if and only if $M_q$ is finite. First, let us recall a very useful lemma due to D.V. and G.V. Chudnovsky \cite{chch} and to Shparlinski, Tsfasman and Vladut \cite[Lemma 1.2 and Corollary 1.3]{shtsvl}.
\begin{lemm}\label{lemasyMqmq} For any prime power $q$ and all positive integers $n$ and $m$, we have $$ \mu_{q}(m)\leq\mu_q(mn)\leq \mu_q(n)\cdot\mu_{q^n}(m),$$ $$ m_{q}\leq m_{q^n}\cdot\mu_{q}(n)/n,$$ $$ M_{q}\leq M_{q^n}\cdot\mu_{q}(n).$$ \end{lemm}
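To illustrate how the first inequality composes algorithms over an intermediate extension (our own example): since ${\mu_2(2)=\mu_4(2)=3}$ by Theorem \ref{thm_wdg}, taking ${q=2}$ and ${m=n=2}$ gives

```latex
% Lemma applied with q = 2, n = 2, m = 2:
\mu_2(4)\;\leq\;\mu_2(2)\cdot\mu_{2^2}(2)\;=\;3\cdot 3\;=\;9,
```

which is in fact the exact value of $\mu_2(4)$ \cite{chch}.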
Now, let us summarize the known estimates for these quantities, namely the lower bound on $m_2$ obtained by R. Brockett, M. Brown and D. Dobkin in \cite{brttdo}, \cite{brdo} and the lower bound on $m_q$ for $q>2$ given by Shparlinski, Tsfasman and Vladut in \cite{shtsvl}.
\begin{propo}
$$m_2\geq 3.52$$ and $$m_q\geq 2\left( 1+\frac{1}{q-1}\right) \hbox{ for any }q>2.$$
\end{propo}
Note that none of the upper bounds on $M_q$ and $m_q$ given by Shparlinski, Tsfasman and Vladut in \cite{shtsvl} is actually proved. Indeed, in \cite{shtsvl} they claim that for any $q$ (in particular for $q=2$) the quantities $m_q$ and, above all, $M_q$ are finite, but I. Cascudo, R. Cramer and C. Xing recently communicated to us the existence of a gap in the proof established by I. Shparlinski, M. Tsfasman and S. Vladut: \textit{"the mistake in \cite{shtsvl} from 1992 is in the proof of their Lemma 3.3, page 161, the paragraph following formulas about the degrees of the divisor. It reads: "}\textsl{Thus the number of linear equivalence classes of degree a for which either Condition $\alpha$ or Condition $\beta$ fails is at most $D_{b'} + D_b$.}\textit{" This is incorrect; $D_b$ should be multiplied by the torsion. Hence the proof of their asymptotic bound is incorrect."} \\ Let us explain this gap in the next section.
\subsubsection{Gap in the proof of the asymptotic linearity}\label{gap}
We fix the following elements: \begin{enumerate} \item a place of degree $n$, denoted by $Q$; \item $2n+g-1$ places of degree $1$: $P_1,\cdots,P_{2n+g-1}$. \end{enumerate}
We look for a divisor $D$ such that: \begin{enumerate} \item $\deg(D)=n+g-1$; \item $\dim({\mathcal L}(D-Q))=0$; \item $\dim({\mathcal L}(2D - (P_1+P_2+\cdots+ P_{2n+g-1})))=0$. \end{enumerate}
The results concerning $M_q$ and $m_q$ obtained in the paper \cite{tsvl} depend on the existence of such a divisor $D$.
Let us remark that these conditions only depend on the class of a divisor (the dimension and the degree of a divisor are invariant within a linear equivalence class). Consequently, we can work on classes and show the existence of a class $[D]$ which answers the question.
Let $J_{n+g-1}$ be the set of classes of degree $n+g-1$ divisors. We know from F. K. Schmidt's theorem that there exists a divisor $D_0$ of degree $n+g-1$. The map $\psi_{n+g-1}$ from $J_{n+g-1}$ into the Jacobian $J_0$ defined by $$\psi_{n+g-1}([D])=[D-D_0]$$ is a bijection from $J_{n+g-1}$ onto $J_0$. All the sets $J_k$ have the same number $h$ of elements ($h$ is called the class number).
Let $u$ be the map from $J_{n+g-1}$ into $J_{g-1}$ defined by $u([D])=[D-Q]$. This map is bijective. Thus if we set
$$H_{n+g-1}= \{ [D] \in J_{n+g-1} ~|~ \dim([D-Q])=0\},$$ and
$$K_{g-1}=\{ [{\Delta}] \in J_{g-1} ~|~ \dim([{\Delta}])=0\},$$ we have $$K_{g-1}=u( H_{n+g-1}),$$ and then $$\# H_{n+g-1} = \# K_{g-1}.$$
Let us note that if $[{\Delta}]$ is an element of $J_{g-1}$ lying in the complement of $K_{g-1}$, namely $\dim([{\Delta}])>0$, then the class $[{\Delta}]$ contains at least one effective divisor (there exists an $x$ such that ${\Delta}+(x) \geq 0$). Moreover, effective divisors in different classes are distinct. So the complement of $K_{g-1}$ in $J_{g-1}$ has cardinality $\leq A_{g-1}$, where $A_{g-1}$ is the number of effective divisors of degree $g-1$. Then the cardinality of $K_{g-1}$ satisfies the inequality $$\# H_{n+g-1} = \# K_{g-1} \geq h-A_{g-1}.$$
Let us remark that classes which belong to $H_{n+g-1}$ are the only ones which can solve our problem. But they also have to verify the additional condition $$\dim({\mathcal L}(2D - (P_1+P_2+\cdots+ P_{2n+g-1})))=0.$$ We would like to use a combinatorial proof as for the first condition.
So we have to consider the map $v$ from $H_{n+g-1}$ to $J_{g-1}$ defined by $$v([D])= [2D - (P_1+P_2+\cdots+ P_{2n+g-1})].$$ Unfortunately the map $[D]\mapsto [2D]$ is not necessarily injective; this is related to the $2$-torsion points of the Jacobian. Since $v$ need not be injective, we cannot conclude that its image is "big" enough to apply a combinatorial argument as in the first part.
\subsection{Known results about the bilinear complexity $\mu_{q}(n)$}
\subsubsection{Extensions of the Chudnovsky algorithm}
In order to obtain good estimates for the bilinear complexity, S. Ballet gave in \cite{ball1} some easy-to-verify conditions allowing the use of the D.V. and G.V. Chudnovsky algorithm. Then S. Ballet and R. Rolland generalized in \cite{baro1} the algorithm using places of degree $1$ and~$2$.
Let us present the latest version of this algorithm, a generalization of the Chudnovsky-type algorithm introduced by N. Arnaud in \cite{arna1} and by M. Cenk and F. \"Ozbudak in \cite{ceoz}. This generalization uses several coefficients of the local expansion at each place $P_i$ instead of just the first one. Because of the way the local expansion of a product is obtained from the local expansions of its factors, the bound for the bilinear complexity involves the complexity notion $\widehat{M_q}(u)$ introduced by M. Cenk and F. \"Ozbudak in \cite{ceoz} and defined as follows: \begin{defi} We denote by $\widehat{M_q}(u)$ the minimum number of multiplications needed in $\mathbb{F}_q$ in order to obtain the coefficients of the product of two arbitrary $u$-term polynomials modulo $x^u$ in $\mathbb{F}_q[x]$. \end{defi} For instance, we know that for all prime powers $q$, we have $\widehat{M_q}(2) \leq 3$ by \cite{ceoz2}.\\
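The case ${u=2}$ can be checked by hand (a sketch of ours): modulo $x^2$ only the two coefficients $x_0y_0$ and $x_0y_1+x_1y_0$ of the product survive, so the three products $x_0y_0$, $x_0y_1$ and $x_1y_0$ suffice:

```latex
% Truncated product of two 2-term polynomials:
(x_0+x_1x)(y_0+y_1x) \;\equiv\; x_0y_0 + (x_0y_1+x_1y_0)\,x \pmod{x^2},
```

whence $\widehat{M_q}(2)\leq 3$ for every prime power $q$.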
Now we introduce the generalized Chudnovsky-type algorithm described in~\cite{ceoz}.
\begin{theo} \label{theo_evalder} Let \\
$\bullet$ $q$ be a prime power,\\
$\bullet$ $F/\mathbb{F}_q$ be an algebraic function field,\\
$\bullet$ $Q$ be a degree $n$ place of $F/\mathbb{F}_q$,\\
$\bullet$ ${\mathcal D}$ be a divisor of $F/\mathbb{F}_q$,\\
$\bullet$ ${\mathcal P}=\{P_1,\ldots,P_N\}$ be a set of $N$ places of arbitrary degree,\\
$\bullet$ $u_1,\ldots,u_N$ be positive integers.\\ We suppose that $Q$ and all the places in $\mathcal P$ are not in the support of ${\mathcal D}$ and that: \begin{enumerate}[a)]
\item the map
$$
Ev_Q: \left \{
\begin{array}{ccl}
\Ld{} & \rightarrow & \mathbb{F}_{q^n}\simeq F_Q\\
f & \longmapsto & f(Q)
\end{array} \right.
$$
is onto,
\item the map
$$
Ev_{\mathcal P} : \left \{
\begin{array}{ccl}
\Ld{2} & \longrightarrow & \left(\mathbb{F}_{q^{\deg P_1}}\right)^{u_1} \times \left(\mathbb{F}_{q^{\deg P_2}}\right)^{u_2} \times \cdots \times \left(\mathbb{F}_{q^{\deg P_N}}\right)^{u_N} \\
f & \longmapsto & \big(\varphi_1(f), \varphi_2(f), \ldots, \varphi_N(f)\big)
\end{array} \right.
$$
is injective, where the application $\varphi_i$ is defined by
$$
\varphi_i : \left \{
\begin{array}{ccl}
\Ld{2} & \longrightarrow & \left(\mathbb{F}_{q^{\deg P_i}}\right)^{u_i} \\
f & \longmapsto & \left(f(P_i), f'(P_i), \ldots, f^{(u_i-1)}(P_i)\right)
\end{array} \right.
$$
with $f = f(P_i) + f'(P_i)t_i + f''(P_i)t_i^2+ \ldots + f^{(k)}(P_i)t_i^k + \ldots $, the local expansion at $P_i$ of $f$ in ${\Ld{2}}$, with respect to the local parameter~$t_i$. Note that we set ${f^{(0)} =f}$. \end{enumerate} Then $$ \mu_q(n) \leq \displaystyle \sum_{i=1}^N \mu_q(\deg P_i) \widehat{M}_{q^{\deg P_i}}(u_i). $$ \end{theo}
Let us remark that the algorithm given in \cite{chch} by D.V. and G.V. Chudnovsky is the case $\deg P_i=1$ and $u_i=1$ for $i=1, \ldots, N$. The first generalization, introduced by S. Ballet and R. Rolland in \cite{baro1}, concerns the case $\deg P_i=1 \hbox{ or }2$ and $u_i=1$ for $i=1, \ldots, N$. Next, the generalization introduced by N. Arnaud in \cite{arna1} concerns the case $\deg P_i=1 \hbox{ or }2$ and $u_i=1\hbox{ or }2$ for $i=1, \ldots, N$. However, note that the work of N. Arnaud has never been published and contains a few mistakes (mentioned below), which will be repaired in this paper. Finally, the last generalization, introduced by M. Cenk and F. \"Ozbudak in \cite{ceoz}, is useful: it allows us to use certain places of arbitrary degree several times, so that fewer places of fixed degree are needed to obtain the injectivity of $Ev_\mathcal{P}$.
In particular, we have the following result, obtained by N. Arnaud in \cite{arna1}.
\begin{coro} \label{theo_deg12evalder} Let \\
$\bullet$ $q$ be a prime power,\\
$\bullet$ $F/\mathbb{F}_q$ be an algebraic function field,\\
$\bullet$ $Q$ be a degree $n$ place of $F/\mathbb{F}_q$,\\
$\bullet$ $\D$ be a divisor of $F/\mathbb{F}_q$, \\
$\bullet$ ${\mathcal P}=\{P_1,\ldots,P_{N_1},P_{N_1+1},\ldots,P_{N_1+N_2}\}$ be a set of $N_1$ places of degree\\ \indent one and $N_2$ places of degree two,\\
$\bullet$ ${0 \leq l_1 \leq N_1}$ and ${0 \leq l_2 \leq N_2}$ be two integers.\\ We suppose that $Q$ and all the places in $\mathcal P$ are not in the support of $\D$ and that: \begin{enumerate}[a)]
\item the map
$$
Ev_Q: \Ld{} \rightarrow \mathbb{F}_{q^n}\simeq F_Q$$
is onto,
\item the map
$$
Ev_{\mathcal P}: \left \{
\begin{array}{ccl}
\Ld{2} & \rightarrow & \mathbb{F}_{q}^{N_1} \times \mathbb{F}_{q}^{l_1}\times \mathbb{F}_{q^2}^{N_2} \times \mathbb{F}_{q^2}^{l_2} \\
f & \mapsto & \big(f(P_1),\ldots,f(P_{N_1}),f'(P_1),\ldots,f'(P_{l_1}),\\
& & \ f(P_{N_1+1}),\ldots,f(P_{N_1+N_2}),f'(P_{N_1+1}),\ldots,f'(P_{N_1+l_2})\big)
\end{array} \right .
$$ is injective. \end{enumerate} Then $$ \mu_q(n)\leq N_1 + 2l_1 + 3N_2 + 6l_2. $$ \end{coro}
Moreover, from the last corollary applied to Garcia-Stichtenoth towers, N. Arnaud obtained in \cite{arna1} the two following bounds:
\begin{theo}\label{bornes_arnaud1} Let ${q=p^r}$ be a prime power.
\begin{equation*}
\begin{array}{l}
\mbox{(i) \ If $q\geq4$, then }\mu_{q^2}(n) \leq 2 \left(1 + \frac{p}{q-3 + (p-1)\left(1- \frac{1}{q+1}\right)} \right)n,\\
\\
\mbox{(ii) \ If $q\geq16$, then }\mu_{q}(n) \leq 3 \left(1 + \frac{2p}{q-3 + 2(p-1)\left(1- \frac{1}{q+1}\right)} \right)n.
\end{array}
\end{equation*} \end{theo}
We will give a proof of Bound (i) together with an improvement of Bound (ii) in Section \ref{sectbornesarnaud}. In that section, we will also prove two revised bounds for $\mu_{p^2}(n)$ and $\mu_p(n)$ given by Arnaud in \cite{arna1}. Indeed, Arnaud gives the following two bounds without detailed calculation:
\begin{equation*}
\begin{array}{l}
\mbox{(iii) \ If $p\geq5$ is a prime, then }\mu_{p^2}(n) \leq 2 \left(1 + \frac{2}{p-2} \right)n,\\
\\
\mbox{(iv) \ If $p\geq5$ is a prime, then } \mu_p(n)\leq 3 \left(1+ \frac{4}{p-1}\right) n.
\end{array}
\end{equation*} In fact, one can check that the denominators $p-1$ and $p-2$ are slightly overestimated under Arnaud's hypotheses.
From the results of \cite{ball1} and the previous algorithm, we obtain (cf. \cite{ball1}, \cite{baro1}):
\begin{theo} \label{theoprinc} Let $q$ be a prime power and let $n$ be an integer $>1$. Let $F/\mathbb{F}_q$ be an algebraic function field of genus $g$ and $N_k$ the number of places of degree $k$ in $F/\mathbb{F}_q$. If $F/\mathbb{F} _q$ is such that $2g+1 \leq q^{\frac{n-1}{2}}(q^{\frac{1}{2}}-1)$ then: \begin{enumerate}[1)]
\item if $N_1 > 2n+2g-2$, then $$ \mu_q(n) \leq 2n+g-1,$$
\item if there exists a non-special divisor of degree $g-1$ and $N_1+2N_2>2n+2g-2$, then $$\mu_q(n)\leq 3n+3g,$$
\item if $N_1+2N_2>2n+4g-2$, then $$\mu_q(n)\leq 3n+6g.$$ \end{enumerate} \end{theo}
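As an illustration of case 1) (our own example), take ${q=4}$ and ${n=4}$ with $F/\mathbb{F}_4$ the Hermitian function field $y^2+y=x^3$, of genus ${g=1}$ and with ${N_1=9}$ places of degree $1$:

```latex
% Hypotheses of case 1) for q = 4, n = 4, g = 1:
2g+1 = 3 \;\leq\; 4^{\frac{n-1}{2}}\bigl(4^{\frac{1}{2}}-1\bigr) = 8,
\qquad
N_1 = 9 \;>\; 2n+2g-2 = 8,
```

so $\mu_4(4)\leq 2n+g-1 = 8$, which matches the known exact value $\mu_4(4)=8$ \cite{chch}.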
\subsubsection{Known upper bounds for $\mu_{q}(n)$}
From good towers of algebraic function fields satisfying Theorem \ref{theoprinc}, the following was proved in \cite{ball1}, \cite{ball3}, \cite{baro1}, \cite{balbro}, \cite{ball4} and \cite{bach}:
\begin{theo} Let $q=p^r$ be a power of the prime $p$. The bilinear complexity $\mu_q(n)$ of multiplication in any finite field $\mathbb{F}_{q^n}$ is linear with respect to the extension degree, more precisely: $$\mu_q(n) \leq C_q n$$ where $C_q$ is the constant defined by: $$ C_q= \left \{ \begin{array}{lll} \hbox{if } q=2 & \hbox{then} \quad 22 & \hbox{\cite{bapi} and \cite{ceoz}} \cr \cr \hbox{else if } q=3 & \hbox{then} \quad 27 & \hbox{\cite{ball1}} \cr \cr \hbox{else if } q=p \geq 5 & \hbox{then} \quad 3\left(1+ \frac{4}{q-3}\right) &
\hbox{\cite{bach}} \cr \cr \hbox{else if } q=p^2 \geq 25 & \hbox{then} \quad 2\left(1+\frac{2}{p-3}\right) &
\hbox{\cite{bach}} \cr \cr \hbox{else if } q=p^{2k} \geq 16 & \hbox{then} \quad 2\left(1+\frac{p}{q-3 + (p-1)\left(1- \frac{1}{q+1}\right)}\right) &
\hbox{\cite{arna1}} \cr \cr \hbox{else if } q \geq 4 & \hbox{then} \quad 6\left(1+\frac{p}{q-3}\right) & \hbox{\cite{ball3}}\cr \cr \hbox{else if } q \geq 16 & \hbox{then} \quad 3\left(1+\frac{2p}{q-3 + 2(p-1)\left(1- \frac{1}{q+1}\right)}\right) & \hbox{\cite{arna1}}. \end{array} \right . $$ \end{theo}
Note that the new estimate for the constant $C_2$ comes from two recent improvements. First, one knows from Table 1 in \cite{ceoz} that $\mu_2(n) \leq 22n$ for ${2 \leq n \leq 7}$ since $\mu_2(n) \leq 22$ for such integers $n$. Moreover, applying the bound ${\mu_2(n)\leq \frac{477}{26}n+\frac{45}{2}}$ obtained in \cite{bapi}, one gets ${\mu_2(n)\leq \left(\frac{477}{26}+\frac{45}{2\times8}\right)n \leq 22n}$ for ${n\geq 8}$. Note also that the upper bounds obtained in \cite{ball5} and \cite{ball6} rely on the mistaken statements of I. Shparlinski, M. Tsfasman and S. Vladut \cite{shtsvl} mentioned in Section \ref{gap} above. Consequently, these bounds are not proved, and unfortunately they cannot be repaired easily. However, certain recent, not yet published results due to H. Randriambololona concerning the geometry of Riemann-Roch spaces might make it possible to repair them in certain cases.
\subsubsection{Some exact values for the bilinear complexity}
Applying the D.V. and G.V. Chudnovsky algorithm with well-chosen elliptic curves, Shokrollahi showed in \cite{shok} that:
\begin{theo}\label{thm_shokr} The bilinear complexity $\mu_q(n)$ of the multiplication in the finite extension $\mathbb{F}_{q^n}$ of the finite field $\mathbb{F}_q$ is equal to $2n$ for \begin{equation}\label{ine} \frac{1}{2}q +1< n < \frac{1}{2}(q+1+{\epsilon (q) }) \end{equation} where $\epsilon$ is the function defined by: $$ \epsilon (q)= \left \{ \begin{array}{l}
\hbox{the greatest integer} \le 2{\sqrt q} \hbox{ prime to } q,
\quad \hbox{if } q \hbox{ is not a perfect square,} \cr
2{\sqrt q,} \quad \hbox{if } q \hbox{ is a perfect square.} \cr \end{array} \right . $$ \end{theo}
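As an example of the range (\ref{ine}) (our illustration), take ${q=5}$: $5$ is not a perfect square and the greatest integer $\leq 2\sqrt{5}\approx 4.47$ prime to $5$ is $4$, so ${\epsilon(5)=4}$ and

```latex
% The inequalities (\ref{ine}) for q = 5:
\frac{1}{2}\cdot 5+1 = 3.5 \;<\; n \;<\; \frac{1}{2}\,(5+1+4) = 5,
```

leaving only ${n=4}$; the theorem thus gives $\mu_5(4)=8$.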
We still do not know whether the converse is true. More precisely, the question is: if $\mu_q(n)=2n$, do the inequalities (\ref{ine}) hold?
However, for computational use, it is helpful to keep in mind some particular exact values for ${\mu_q(n)}$, such as ${\mu_q(2)=3}$ for any prime power $q$, ${\mu_2(4)=9}$, ${\mu_4(4)=\mu_5(4)=8}$ or ${\mu_2(2^6)=15}$ \cite{chch}.
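The exact value ${\mu_q(2)=3}$ can be checked by brute force for, say, ${q=2}$: the following sketch (ours, not from the paper) verifies that a three-multiplication Karatsuba-type bilinear algorithm computes the product in $\mathbb{F}_4=\mathbb{F}_2[x]/(x^2+x+1)$.

```python
# Sketch (illustrative): check that 3 bilinear multiplications suffice to
# multiply in F_4 = F_2[x]/(x^2 + x + 1), i.e. mu_2(2) <= 3.
# Elements are pairs (a0, a1) representing a0 + a1*x.

def mul_naive(a, b):
    """Schoolbook product followed by the reduction x^2 = x + 1 over F_2."""
    a0, a1 = a
    b0, b1 = b
    c0, c1, c2 = a0 * b0, a0 * b1 + a1 * b0, a1 * b1
    return ((c0 + c2) % 2, (c1 + c2) % 2)

def mul_karatsuba(a, b):
    """Same product using only 3 bilinear multiplications m1, m2, m3."""
    a0, a1 = a
    b0, b1 = b
    m1 = a0 * b0                              # 1st multiplication
    m2 = a1 * b1                              # 2nd multiplication
    m3 = ((a0 + a1) % 2) * ((b0 + b1) % 2)    # 3rd multiplication
    # m3 + m1 + m2 = a0*b1 + a1*b0 (mod 2); then reduce x^2 = x + 1:
    return ((m1 + m2) % 2, (m3 + m1) % 2)

elements = [(0, 0), (0, 1), (1, 0), (1, 1)]
assert all(mul_naive(a, b) == mul_karatsuba(a, b)
           for a in elements for b in elements)
```

Enumerating all $16$ pairs confirms the identity; only the three bilinear products $m_1,m_2,m_3$ count toward the bilinear complexity.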
\section{New results for $\mu_q(n)$}
\subsection{Towers of algebraic function fields}\label{sectdeftowers}
In this section, we introduce some towers of algebraic function fields. Theorem \ref{theoprinc} applied to the algebraic function fields of these towers gives us bounds for the bilinear complexity. A given curve cannot be used to multiply in every extension of $\mathbb{F}_q$, but only for $n$ below some bound. With a tower of function fields we can adapt the curve to the degree of the extension. The important point to note here is that, in order to obtain a well-adapted curve, it is desirable to have a tower in which the quotient of two consecutive genera is as small as possible, namely a dense tower.
For any algebraic function field $F/\mathbb{F}_q$ defined over the finite field $\mathbb{F}_q$, we denote by $g(F/\mathbb{F}_q)$ the genus of $F/\mathbb{F}_q$ and by $N_k(F/\mathbb{F}_q)$ the number of places of degree $k$ in $F/\mathbb{F}_q$.
\subsubsection{Garcia-Stichtenoth tower of Artin-Schreier algebraic function field extensions}
We now present a modified Garcia-Stichtenoth tower (cf. \cite{gast}, \cite{ball3}, \cite{baro1}) having good properties. Let us consider a finite field $\mathbb{F}_{q^2}$ with $q=p^r>3$ and $r$ an odd integer. Let us consider the Garcia-Stichtenoth elementary abelian tower $T_1$ over $\mathbb{F}_{q^2}$ constructed in \cite{gast} and defined by the sequence $(F_0, F_1, F_2,\ldots)$ where $$F_{k+1}:=F_{k}(z_{k+1})$$ and $z_{k+1}$ satisfies the equation: $$z_{k+1}^q+z_{k+1}=x_k^{q+1}$$ with $$x_k:=z_k/x_{k-1} \in F_k \quad (\hbox{for } k\geq1).$$ Moreover $F_0:=\mathbb{F}_{q^2}(x_0)$ is the rational function field over $\mathbb{F}_{q^2}$ and $F_1$ the Hermitian function field over $\mathbb{F}_{q^2}$. Denoting by $g_k$ the genus of $F_k$, we recall the following \textsl{formulae}: \begin{equation}\label{genregs} g_k = \left \{ \begin{array}{ll}
q^k+q^{k-1}-q^\frac{k+1}{2} - 2q^\frac{k-1}{2}+1 & \mbox{if } k \equiv 1 \mod 2,\\
q^k+q^{k-1}-\frac{1}{2}q^{\frac{k}{2}+1} - \frac{3}{2}q^{\frac{k}{2}}-q^{\frac{k}{2}-1} +1& \mbox{if } k \equiv 0 \mod 2.
\end{array} \right . \end{equation} Let us consider the completed Garcia-Stichtenoth tower $$T_2=F_{0,0}\subseteq F_{0,1}\subseteq \ldots \subseteq F_{0,r} \subseteq F_{1,0}\subseteq F_{1,1} \subseteq \ldots \subseteq F_{1,r} \ldots $$ considered in \cite{ball3} such that $F_k \subseteq F_{k,s} \subseteq F_{k+1}$ for any integer $s \in \{0,\ldots,r\}$, with $F_{k,0}=F_k$ and $F_{k,r}=F_{k+1}$. Recall that each extension $F_{k,s}/F_k$ is Galois of degree $p^s$ with full constant field $\mathbb{F}_{q^2}$. Now, we consider the tower studied in \cite{baro1} $$T_3=G_{0,0} \subseteq G_{0,1} \subseteq \ldots \subseteq G_{0,r}\subseteq G_{1,0}\subseteq G_{1,1}\subseteq \ldots \subseteq G_{1,r}\ldots $$ defined over the constant field $\mathbb{F}_q$ and related to the tower $T_2$ by $$F_{k,s}=\mathbb{F}_{q^2}G_{k,s} \quad \mbox{for all $k$ and $s$,}$$ namely $F_{k,s}/\mathbb{F}_{q^2}$ is the constant field extension of $G_{k,s}/\mathbb{F}_q$. Note that the tower $T_3$ is well defined by \cite{baro1} and \cite{balbro}. Moreover, we have the following result:
\begin{propo}\label{subfield} Let ${q = p^r \geq4}$ be a prime power. For all integers $k \geq 1$ and ${s \in \{0, \ldots,r\}}$, there exists a step $F_{k,s}/\mathbb{F}_{q^2}$ (respectively $G_{k,s}/\mathbb{F}_q$) with genus $g_{k,s}$ and $N_{k,s}$ places of degree 1 in $F_{k,s}/\mathbb{F}_{q^2}$ (respectively $N_{k,s}$ places of degree 1 and 2 in $G_{k,s}/\mathbb{F}_q$ with places of degree 2 being counted twice) such that: \begin{enumerate}[(1)]
\item $F_k \subseteq F_{k,s} \subseteq F_{k+1}$, where we set $F_{k,0}=F_k$ and $F_{k,r}=F_{k+1}$,\\(respectively $G_k \subseteq G_{k,s} \subseteq G_{k+1}$, where we set $G_{k,0}=G_k$ and $G_{k,r}=G_{k+1}$),
\item $\big( g_k-1 \big)p^s +1 \leq g_{k,s} \leq \frac{g_{k+1}}{p^{r-s}} +1$,
\item $N_{k,s} \geq (q^2-1)q^{k-1}p^s$. \end{enumerate} \end{propo}
\subsubsection{Garcia-Stichtenoth tower of Kummer function field extensions}
In this section we present a Garcia-Stichtenoth tower (cf. \cite{bach}) having good properties. Let $\mathbb{F}_q$ be a finite field of characteristic $p\geq3$. Let us consider the tower $T$ over $\mathbb{F}_q$ defined recursively by the following equation, studied in \cite{gast2}:
$$y^2=\frac{x^2+1}{2x}.$$
The tower $T/\mathbb{F}_q$ is represented by the sequence of function fields $(H_0, H_1, H_2, \ldots)$ where $H_k = \mathbb{F}_q(x_0, x_1, \ldots, x_k)$ and $x_{i+1}^2=(x_i^2+1)/(2x_i)$ holds for each $i\geq 0$. Note that $H_0$ is the rational function field. For any prime number $p \geq 3$, the tower $T/\mathbb{F}_{p^2}$ is asymptotically optimal over the field $\mathbb{F}_{p^2}$, i.e. $T/\mathbb{F}_{p^2}$ attains the Drinfeld-Vladut bound. Moreover, for any integer $k$, $H_k/\mathbb{F}_{p^2}$ is the constant field extension of $H_k/\mathbb{F}_p$.
From \cite{bach}, we know that the genus $g(H_k)$ of the step $H_k$ is given by: \begin{equation}\label{genregsr} g(H_k) = \left \{ \begin{array}{ll}
2^{k+1}-3\cdot 2^\frac{k}{2}+1 & \mbox{if } k \equiv 0 \mod 2,\\
2^{k+1} -2\cdot 2^\frac{k+1}{2}+1& \mbox{if } k \equiv 1 \mod 2.
\end{array} \right . \end{equation} and that the following bounds hold for the number of rational places in $H_k$ over $\mathbb{F}_{p^2}$ and for the number of places of degree 1 and 2 over $\mathbb{F}_p$: \begin{equation}\label{nbratplgsr} N_1(H_k/\mathbb{F}_{p^2}) \geq 2^{k+1}(p-1) \end{equation} and \begin{equation}\label{nbpldeg12gsr} N_1(H_k/\mathbb{F}_p) +2N_2(H_k/\mathbb{F}_p) \geq 2^{k+1}(p-1). \end{equation}
From the existence of this tower, we can obtain the following proposition \cite{bach}: \begin{propo} \label{propoexistcf2} Let $p$ be a prime number $\geq 5$. Then for any integer $n\geq {\frac{1}{2} (p+1+\epsilon(p))}$ where $\epsilon(p)$ is defined as in Theorem \ref{thm_shokr}, \begin{enumerate}[1)]
\item there exists an algebraic function field $H_k/\mathbb{F}_{p^2}$ of genus $g(H_k/\mathbb{F}_{p^2})$ such that $2g(H_k/\mathbb{F}_{p^2})+1 \leq p^{{n-1}}(p-1)$ and $N_1(H_k/\mathbb{F}_{p^2})>2n+2g(H_k/\mathbb{F}_{p^2})-2$,
\item there exists an algebraic function field $H_k/\mathbb{F}_{p}$ of genus $g(H_k/\mathbb{F}_p)$ such that $2g(H_k/\mathbb{F}_p)+1 \leq p^{\frac{n-1}{2}}(p^{\frac{1}{2}}-1)$ and $N_1(H_k/\mathbb{F}_p)+2N_2(H_k/\mathbb{F}_p)>2n+2g(H_k/\mathbb{F}_p)-2$ and containing a non-special divisor of degree $g(H_k/\mathbb{F}_p)-1$. \end{enumerate} \end{propo}
\subsection{Some preliminary results} \label{sectionusefull} Here we establish some technical results about genus and number of places of each step of the towers $T_2/\mathbb{F}_{q^2}$, $T_3/\mathbb{F}_q$, $T/\mathbb{F}_{p^2}$ and $T/\mathbb{F}_p$ defined in Section \ref{sectdeftowers}. These results will allow us to determine a suitable step of the tower to apply the algorithm on. \subsubsection{About the Garcia-Stichtenoth's tower} In this section, $q:=p^r$ is a power of the prime $p$.
\begin{lemm}\label{lemme_genre} Let ${q>3}$. We have the following bounds for the genus of each step of the towers $T_2/\mathbb{F}_{q^2}$ and $T_3/\mathbb{F}_q$: \begin{enumerate}[i)]
\item $g_k> q^k$ for all ${k\geq 4}$,
\item $g_k \leq q^{k-1}(q+1) - \sqrt{q}q^\frac{k}{2}$,
\item $g_{k,s} \leq q^{k-1}(q+1)p^s$ for all ${k\geq 0}$ and $s=0,\ldots,r$,
\item $g_{k,s} \leq \frac{q^k(q+1)-q^\frac{k}{2}(q-1)}{p^{r-s}}$ for all $k\geq 2$ and $s=0,\ldots,r$. \end{enumerate} \end{lemm}
\begin{Proof} \textit{i)} According to Formula (\ref{genregs}), we know that if ${k \equiv 1 \mod 2}$, then $$ g_k = q^k+q^{k-1}-q^\frac{k+1}{2} - 2q^\frac{k-1}{2}+1 = q^k+q^\frac{k-1}{2}(q^\frac{k-1}{2} - q - 2) +1. $$ Since ${q>3}$ and ${k \geq 4}$, we have ${q^\frac{k-1}{2} - q - 2 >0}$, thus ${g_k>q^k}$.\\ Else if ${k \equiv 0 \mod 2}$, then $$ g_k = q^k+q^{k-1}-\frac{1}{2}q^{\frac{k}{2}+1} - \frac{3}{2}q^{\frac{k}{2}}-q^{\frac{k}{2}-1} +1 = q^k+q^{\frac{k}{2}-1}(q^\frac{k}{2}-\frac{1}{2}q^{2} - \frac{3}{2}q-1)+1. $$ Since ${q>3}$ and ${k\geq 4}$, we have ${q^\frac{k}{2}-\frac{1}{2}q^{2} - \frac{3}{2}q-1>0}$, thus ${g_k>q^k}$.
\textit{ii)} It follows from Formula (\ref{genregs}): for odd $k$ it suffices to note that ${2q^\frac{k-1}{2} \geq 1}$, while for even $k$ it suffices that ${\frac{1}{2}q\geq \sqrt{q}}$, which holds for ${q\geq 4}$, together with ${\frac{3}{2}q^\frac{k}{2}+q^{\frac{k}{2}-1}\geq 1}$.
\textit{iii)} If ${s=r}$, then according to Formula (\ref{genregs}), we have $$ g_{k,s} = g_{k+1}\leq q^{k+1}+q^{k} = q^{k-1}(q+1)p^s. $$ Else, ${s<r}$ and Proposition \ref{subfield} says that ${g_{k,s} \leq \frac{g_{k+1}}{p^{r-s}}+1}$. Moreover, since ${q^\frac{k+2}{2}\geq q}$ and ${\frac{1}{2}q^{\frac{k+1}{2}+1}\geq q}$, we obtain ${g_{k+1}\leq q^{k+1} + q^k - q + 1}$ from Formula (\ref{genregs}). Thus, we get \begin{eqnarray*} g_{k,s} & \leq & \frac{q^{k+1} + q^k - q + 1}{p^{r-s}} +1\\
& = & q^{k-1}(q+1)p^s - p^s + p^{s-r} + 1\\
& \leq & q^{k-1}(q+1)p^s + p^{s-r}\\
& \leq & q^{k-1}(q+1)p^s \ \mbox{ since ${0 \leq p^{s-r} <1}$ and ${g_{k,s} \in \mathbb{N}}$}. \end{eqnarray*}
\textit{iv)} It follows from ii) since Proposition \ref{subfield} gives ${g_{k,s} \leq \frac{g_{k+1}}{p^{r-s}}+1}$, so \linebreak[4]${g_{k,s} \leq \frac{q^{k}(q+1) - \sqrt{q}q^\frac{k+1}{2}}{p^{r-s}} +1}$ which gives the result since ${p^{r-s} \leq q^\frac{k}{2}}$ for all ${k\geq2}$. \qed\\ \end{Proof}
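The two parity cases of Formula (\ref{genregs}) used in this proof can be checked numerically; the following Python sketch (illustrative only) transcribes them and verifies bounds i) and ii) of the lemma on sample values of $q$ and $k$.

```python
import math

# Genus g_k of the Garcia-Stichtenoth tower, transcribed from the two
# parity cases of Formula (genregs) used in the proof (illustrative only).
def gs_genus(q, k):
    if k % 2 == 1:
        return q**k + q**(k - 1) - q**((k + 1) // 2) - 2 * q**((k - 1) // 2) + 1
    h = k // 2  # for even k the half-integer powers combine to an integer
    num = 2 * (q**k + q**(k - 1)) - q**(h + 1) - 3 * q**h - 2 * q**(h - 1) + 2
    assert num % 2 == 0
    return num // 2

# Bounds i) and ii) of the lemma on sample values:
for q in (4, 5, 8, 9):
    for k in range(4, 9):
        g = gs_genus(q, k)
        assert g > q**k                                               # bound i)
        assert g <= q**(k - 1) * (q + 1) - math.sqrt(q) * q**(k / 2)  # bound ii)
```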
\begin{lemm}{\label{lemme_delta}} Let $q>3$ and $k\geq4$. We set ${\Delta g_{k,s} := g_{k,s+1} - g_{k,s}}$ and ${D_{k,s}:=(p-1)p^sq^k}$ and denote ${M_{k,s} := N_1(F_{k,s}/\mathbb{F}_{q^2}) = N_1(G_{k,s}/\mathbb{F}_q)+2N_2(G_{k,s}/\mathbb{F}_q)}$. One has: \begin{enumerate}[(i)]
\item $\Delta g_{k,s} \geq D_{k,s}$,
\item $M_{k,s} \geq D_{k,s}$. \end{enumerate} \end{lemm}
\begin{Proof} (i) From the Hurwitz Genus Formula, one has ${g_{k,s+1}-1 \geq p(g_{k,s}-1)}$, so ${g_{k,s+1}-g_{k,s} \geq (p-1)(g_{k,s}-1)}$. Applying the Hurwitz Genus Formula $s$ more times, we get ${g_{k,s+1}-g_{k,s} \geq (p-1)p^s\big(g(G_k)-1\big)}$. Thus ${g_{k,s+1}-g_{k,s} \geq (p-1)p^sq^k}$ by Lemma \ref{lemme_genre} i), since $q>3$ and $k\geq 4$.\\ \noindent(ii) According to Proposition \ref{subfield}, one has \begin{eqnarray*}
M_{k,s} & \geq & (q^2-1)q^{k-1}p^s \\
& = & (q+1)(q-1)q^{k-1}p^s \\
& \geq & (q-1)q^kp^s\\
& \geq & (p-1)q^kp^s \mbox{.} \end{eqnarray*} \qed \end{Proof}
\begin{lemm}\label{lemme_bornesup} Let ${M_{k,s} := N_1(F_{k,s}/\mathbb{F}_{q^2}) = N_1(G_{k,s}/\mathbb{F}_q)+2N_2(G_{k,s}/\mathbb{F}_q)}$. For all ${k \geq 1}$ and ${s=0, \ldots, r}$, we have $$
\sup \big \{ n \in \mathbb{N} \; | \; 2n \leq M_{k,s} -2g_{k,s} +1 \big \} \geq \frac{1}{2}(q+1)q^{k-1}p^s(q-3). $$ \end{lemm}
\begin{Proof} From Proposition \ref{subfield} and Lemma \ref{lemme_genre} iii), we get \begin{eqnarray*} M_{k,s} -2g_{k,s} +1 & \geq & (q^2-1)q^{k-1}p^s - 2q^{k-1}(q+1)p^s +1 \\
& = & (q+1)q^{k-1}p^s\big((q-1) -2\big) +1 \\
& \geq & (q+1)q^{k-1}p^s(q-3) \end{eqnarray*}
thus we have $\sup \big \{ n \in \mathbb{N} \; | \; 2n \leq M_{k,s} -2g_{k,s} +1 \big \} \geq \frac{1}{2}q^{k-1}p^s(q+1)(q-3)$. \qed \end{Proof}
\subsubsection{About the Garcia-Stichtenoth-R\"uck's tower} In this section, $p$ is an odd prime. We denote by $g_k$ the genus of the step $H_k$ and we fix $N_k:= N_1(H_k/\mathbb{F}_{p^2})=N_1(H_k/\mathbb{F}_p)+2N_2(H_k/\mathbb{F}_p)$. The following lemma is straightforward from Formula~(\ref{genregsr}): \begin{lemm}\label{lemme_genregsr} These two bounds hold for the genus of each step of the towers $T/\mathbb{F}_{p^2}$ and $T/\mathbb{F}_p$: \begin{enumerate}[i)]
\item $g_k \leq 2^{k+1}-2\cdot2^\frac{k+1}{2}+1$,
\item $g_k \leq 2^{k+1}$. \end{enumerate} \end{lemm}
\begin{lemm}\label{lemme_deltagsr} For all $k\geq 0$, we set ${\Delta g_k := g_{k+1} - g_k}$. Then one has \linebreak[4]${N_k \geq \Delta g_k \geq 2^{k+1}- 2^\frac{k+1}{2}}$. \end{lemm}
\begin{Proof} If $k$ is even then ${\Delta g_k = 2^{k+1}-2^\frac{k}{2}}$, else ${\Delta g_k = 2^{k+1}-2^\frac{k+1}{2}}$, so the second inequality holds trivially. Moreover, since ${p\geq 3}$, Bounds (\ref{nbratplgsr}) and (\ref{nbpldeg12gsr}) give ${N_k \geq 2^{k+2} \geq 2^{k+1} \geq \Delta g_k}$, which proves the first inequality. \qed \end{Proof}
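This computation can be checked directly from the closed genus formula; the following Python sketch (illustrative only) verifies both the exact value of $\Delta g_k$ and the lower bound of the lemma.

```python
# Genus gap of the tower T, computed from the closed genus formula
# (illustrative only).
def gsr_genus(k):
    if k % 2 == 0:
        return 2**(k + 1) - 3 * 2**(k // 2) + 1
    return 2**(k + 1) - 2 * 2**((k + 1) // 2) + 1

for k in range(30):
    delta = gsr_genus(k + 1) - gsr_genus(k)
    # exact value of Delta g_k, by parity of k
    assert delta == 2**(k + 1) - 2**(k // 2 if k % 2 == 0 else (k + 1) // 2)
    # lower bound of the lemma
    assert delta >= 2**(k + 1) - 2**((k + 1) / 2)
```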
\begin{lemm}\label{lemme_bornesupgsr} Let $H_k$ be a step of one of the towers $T/\mathbb{F}_{p^2}$ or $T/\mathbb{F}_p$. One has: $$
\sup \big \{ n \in \mathbb{N} \; | \; N_k \geq 2n +2g_k -1\big \} \geq 2^{k}(p-3)+2. $$ \end{lemm}
\begin{Proof} From Bounds (\ref{nbratplgsr}) and (\ref{nbpldeg12gsr}) for $N_k$ and Lemma \ref{lemme_genregsr} i), we get \begin{eqnarray*} N_k - 2g_k +1 & \geq & 2^{k+1}(p-1) -2(2^{k+1}-2\cdot2^\frac{k+1}{2}+1) +1\\
& = & 2^{k+1}(p-3) + 4\cdot2^\frac{k+1}{2} - 1 \\
& \geq & 2^{k+1}(p-3) + 4 \mbox{ since } k\geq0. \end{eqnarray*}\qed \end{Proof}
\subsection{General results for $\mu_q(n)$} In \cite{balb}, Ballet and Le Brigand proved the following useful result: \begin{theo}\label{existdivnonspe} Let $F/\mathbb{F}_q$ be an algebraic function field of genus $g\geq 2$. If $q\geq4$, then there exists a non-special divisor of degree $g-1$. \end{theo}
The following four lemmas prove the existence of a ``good'' step of the towers defined in Section \ref{sectdeftowers}, that is to say, a step that will be optimal for the bilinear complexity of multiplication:
\begin{lemm}\label{lemme_placedegn} Let $n \geq \frac{1}{2}\left(q^2+1+\epsilon(q^2)\right)$ be an integer. If $q=p^r\geq4$, then there exists a step $F_{k,s}/\mathbb{F}_{q^2}$ of the tower $T_2/\mathbb{F}_{q^2}$ such that all the three following conditions are verified: \begin{enumerate}[(1)]
\item there exists a non-special divisor of degree $g_{k,s}-1$ in $F_{k,s}/\mathbb{F}_{q^2}$,
\item there exists a place of $F_{k,s}/\mathbb{F}_{q^2}$ of degree $n$,
\item $N_1(F_{k,s}/\mathbb{F}_{q^2}) \geq 2n + 2g_{k,s}-1$. \end{enumerate} Moreover, the first step for which both Conditions (2) and (3) are verified is the first step for which (3) is verified. \end{lemm}
\begin{Proof} Note that $n \geq 9$ since $q\geq4$ and ${n \geq \frac{1}{2}(q^2+1) \geq 8.5}$. Fix $1 \leq k \leq n-4$ and ${s \in \{0, \ldots, r\}}$. First, we prove that Condition (2) is verified. Lemma~\ref{lemme_genre}~iv) gives: \begin{eqnarray}
\nonumber 2g_{k,s}+1 & \leq & 2\frac{q^k(q+1)-q^\frac{k}{2}(q-1)}{p^{r-s}} +1\\
\nonumber & = & 2p^s\left(q^{k-1}(q+1)-q^\frac{k}{2}\frac{q-1}{q}\right) +1\\
& \leq & 2q^{k-1}p^s(q+1) \ \ \ \mbox{\ since } 2p^sq^\frac{k}{2}\frac{q-1}{q}\geq1 \label{eqgenre1}\\
\nonumber & \leq & 2q^k(q^2-1). \end{eqnarray} On the other hand, one has ${n-1 \geq k+3 > k+\frac{1}{2}+2}$ so $n-1 \geq \log_q(q^k)+\log_q(2)+\log_q(q+1)$. This gives ${q^{n-1} \geq 2q^k(q+1)}$, hence $q^{n-1}(q-1) \geq 2q^k(q^2-1)$. Therefore, one has ${2g_{k,s}+1 \leq q^{n-1}(q-1)}$, which ensures that Condition (2) is satisfied according to Corollary 5.2.10 in \cite{stic}.\\ Now suppose also that ${k \geq \log_q\left(\frac{2n}{5}\right)+1}$. Note that for all $n\geq 9$ there exists such an integer $k$ since the size of the interval $[\log_q\left(\frac{2n}{5}\right)+1 , n-4]$ is bigger than ${9-4-\log_4\left(\frac{2\cdot9}{5}\right)-1 \geq 3 >1}$. Moreover, such an integer $k$ satisfies ${q^{k-1} \geq \frac{2}{5}n}$, so ${n \leq \frac{1}{2}q^{k-1}(q+1)(q-3)}$ since $q\geq4$. Then one has \begin{eqnarray*}
2n+2g_{k,s}-1 & \leq & 2n+2g_{k,s}+1\\
& \leq & 2n + 2q^{k-1}p^s(q+1) \ \ \ \mbox{\ according to (\ref{eqgenre1})}\\
& \leq & q^{k-1}(q+1)(q-3) + 2q^{k-1}p^s(q+1)\\
& \leq & q^{k-1}p^s(q+1)(q-1) \\
& = & (q^2-1)q^{k-1}p^s \end{eqnarray*} which gives ${N_1(F_{k,s}/\mathbb{F}_{q^2}) \geq 2n + 2g_{k,s}-1}$ according to Proposition \ref{subfield} (3). Hence, for any integer $k \in [\log_q\left(\frac{2n}{5}\right)+1 , n-4]$, Conditions (2) and (3) are satisfied and the smallest integer $k$ for which they are both satisfied is the smallest integer $k$ for which Condition (3) is satisfied.\\ To conclude, remark that for such an integer $k$, Condition (1) is easily verified from Theorem \ref{existdivnonspe} since $q\geq 4$ and ${g_{k,s} \geq g_2\geq 6}$ according to Formula (\ref{genregs}).\\ \qed
\end{Proof}
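The interval argument in this proof can be checked numerically; the following Python sketch (illustrative only) verifies that $[\log_q(\frac{2n}{5})+1, n-4]$ contains an integer and that the inequality $q^{n-1} \geq 2q^k(q+1)$ holds up to $k=n-4$.

```python
import math

# Non-emptiness of [log_q(2n/5)+1, n-4] and the inequality
# q^{n-1} >= 2 q^k (q+1) used in the proof (illustrative only).
for q in (4, 5, 7, 8, 9):
    for n in range(9, 200):
        lo = math.log(2 * n / 5, q) + 1
        k = math.ceil(lo)
        assert k <= n - 4  # the interval contains an integer
        # worst case k = n-4; smaller k only makes the inequality easier:
        assert q**(n - 1) >= 2 * q**(n - 4) * (q + 1)
```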
The following is the analogous result for the tower $T_3/\mathbb{F}_q$:
\begin{lemm}\label{lemme_placedegn2} Let $n \geq \frac{1}{2}\left(q+1+\epsilon(q)\right)$ be an integer. If $q=p^r\geq 4$, then there exists a step $G_{k,s}/\mathbb{F}_q$ of the tower $T_3/\mathbb{F}_q$ such that all the three following conditions are verified: \begin{enumerate}[(1)]
\item there exists a non-special divisor of degree $g_{k,s}-1$ in $G_{k,s}/\mathbb{F}_q$,
\item there exists a place of $G_{k,s}/\mathbb{F}_q$ of degree $n$,
\item $N_1(G_{k,s}/\mathbb{F}_q)+2N_2(G_{k,s}/\mathbb{F}_q) \geq 2n + 2g_{k,s}-1$. \end{enumerate} Moreover, the first step for which both Conditions (2) and (3) are verified is the first step for which (3) is verified. \end{lemm}
\begin{Proof} Note that $n \geq 5$ since $q\geq4$, ${\epsilon(q) \geq \epsilon(4)=4}$ and ${n \geq \frac{1}{2}(q+1+\epsilon(q)) \geq 4.5}$. First, we focus on the case $n\geq13$. Fix $1 \leq k \leq \frac{n-7}{2}$ and ${s \in \{0, \ldots, r\}}$. One has ${2p^sq^k\frac{q+1}{\frac{\sqrt{q}}{2}} \leq q^\frac{n-1}{2}}$ since $${ \frac{n-1}{2} \geq k + 3 = k -\frac{1}{2} +1+1+\frac{3}{2} \geq \log_q(q^{k-\frac{1}{2}}) +\log_q(4)+\log_q(p^s)+\log_q(q+1)}. $$ Hence ${2p^sq^k(q+1) \leq q^\frac{n-1}{2}(\sqrt{q}-1)}$ since ${\frac{\sqrt{q}}{2}\leq \sqrt{q}-1}$ for $q\geq4$. According to (\ref{eqgenre1}) in the previous proof, this proves that Condition (2) is satisfied.\\ The same reasoning as in the previous proof shows that Condition (3) is also satisfied as soon as ${k \geq \log_q\left(\frac{2n}{5}\right)+1}$. Moreover, for $n\geq13$, the interval $[\log_q\left(\frac{2n}{5}\right)+1 , \frac{n-7}{2}]$ contains at least one integer and the smallest integer $k$ in this interval is the smallest integer $k$ for which Condition (3) is verified. Furthermore, for such an integer $k$, Condition (1) is easily verified from Theorem \ref{existdivnonspe} since $q\geq 4$ and ${g_{k,s} \geq g_2\geq 6}$ according to Formula (\ref{genregs}).
To complete the proof, we focus on the case ${5\leq n\leq12}$. Here we have to consider the values of $q=p^r$ and $n$ for which both ${n\geq \frac{1}{2}\left(q+1+\epsilon(q)\right)}$ and ${5 \leq n \leq 12}$ hold, and to check that Conditions (1), (2) and (3) are verified for each such value of $n$. To this end, we use the KASH package \cite{kash} to compute the genus and the number of places of degree 1 and 2 of the first steps of the tower $T_3/\mathbb{F}_q$, and we determine the first step $G_{k,s}/\mathbb{F}_q$ that satisfies all three Conditions (1), (2) and (3). We summarize our results in the following tables: \hspace{-8em}
$$\begin{array}{|c|c|c|c|}
\hline
q=p^r & 2^2 & 2^3 & 3^2 \\
\hline
\epsilon(q) & 4 & 5 & 6 \\
\hline
\frac{1}{2}\left(q+1+\epsilon(q)\right) & 4.5 & 7 & 8 \\
\hline
n \hbox{ to be considered} & 5 \leq n \leq 12 & 7 \leq n \leq 12 & 8 \leq n \leq 12 \\
\hline
(k,s) & (1,1) & (1,1) & (1,1) \\
\hline
N_1(G_{k,s}/\mathbb{F}_q) & 5 & 9 & 10 \\
\hline
N_2(G_{k,s}/\mathbb{F}_q) & 14 & 124 & 117 \\
\hline
\Gamma(G_{k,s}/\mathbb{F}_q) & 15 & 117 & 113 \\
\hline
g_{k,s} & 2 & 12 & 9 \\
\hline
2g_{k,s}+1 & 5 & 25 & 19 \\
\hline
q^\frac{n-1}{2}(\sqrt{q}-1) \geq \ldots & 16 & 936 & 4374 \\
\hline \end{array} $$
\hspace{-8em} $$
\begin{array}{|c|c|c|c|c|}
\hline
q=p^r & 5 & 7 & 11 & 13 \\
\hline
\epsilon(q) & 4 & 5 & 6 & 7\\
\hline
\frac{1}{2}\left(q+1+\epsilon(q)\right) & 5 & 6.5 & 9 & 10.5 \\
\hline
n \hbox{ to be considered} & 5 \leq n \leq 12 & 7 \leq n \leq 12 & 9 \leq n \leq 12 & 11 \leq n \leq 12 \\
\hline
(k,s) & (2,0) & (2,0) & (2,0) & (2,0) \\
\hline
N_1(G_{k,s}/\mathbb{F}_q) & 6 & 8 & 12 & 14 \\
\hline
N_2(G_{k,s}/\mathbb{F}_q) & 60 & 168 & 660 & 1092 \\
\hline
\Gamma(G_{k,s}/\mathbb{F}_q) & 53 & 151.5 & 611.5 & 1021.5 \\
\hline
g_{k,s} & 10 & 21 & 55 & 78 \\
\hline
2g_{k,s}+1 & 21 & 43 & 111 & 157 \\
\hline
q^\frac{n-1}{2}(\sqrt{q}-1) \geq \ldots & 30 & 564 & 33917 & 967422 \\
\hline \end{array} $$
In these tables, one can check that for each value of $q$ and $n$ to be considered and every corresponding step $G_{k,s}/\mathbb{F}_q$ one has simultaneously: \begin{itemize}
\item $g_{k,s}\geq2$ so Condition (1) is verified according to Theorem \ref{existdivnonspe},
\item $2g_{k,s}+1 \leq q^\frac{n-1}{2}(\sqrt{q}-1)$ so Condition (2) is verified.
\item $\Gamma(G_{k,s}/\mathbb{F}_q):=\frac{1}{2}\left(N_1(G_{k,s}/\mathbb{F}_q)+2N_2(G_{k,s}/\mathbb{F}_q)-2g_{k,s}+1\right) \geq n$ so Condition (3) is verified. \end{itemize}
\qed \end{Proof}
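The table entries can be cross-checked mechanically; the following Python sketch (illustrative only; the data are copied from the tables above) recomputes $\Gamma$ and verifies Conditions (1), (2) and (3) for every $n$ considered.

```python
import math

# Cross-check of the tables: recompute Gamma = (N1 + 2*N2 - 2g + 1)/2 and
# verify Conditions (1), (2), (3). Data copied from the tables above:
# (q, N1, N2, g, smallest n, largest n).
table = [
    (4, 5, 14, 2, 5, 12),
    (8, 9, 124, 12, 7, 12),
    (9, 10, 117, 9, 8, 12),
    (5, 6, 60, 10, 5, 12),
    (7, 8, 168, 21, 7, 12),
    (11, 12, 660, 55, 9, 12),
    (13, 14, 1092, 78, 11, 12),
]
for q, n1, n2, g, nmin, nmax in table:
    gamma = (n1 + 2 * n2 - 2 * g + 1) / 2
    assert g >= 2                                                 # Condition (1)
    assert 2 * g + 1 <= q**((nmin - 1) / 2) * (math.sqrt(q) - 1)  # Condition (2)
    assert gamma >= nmax                                          # Condition (3)
```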
The following is the analogous result for the tower $T/\mathbb{F}_{p^2}$:
\begin{lemm}\label{lemme_placedegngsr} Let $p\geq5$ and $n \geq \frac{1}{2}\left(p^2+1+\epsilon(p^2)\right)$. There exists a step $H_k/\mathbb{F}_{p^2}$ of the tower $T/\mathbb{F}_{p^2}$ such that the three following conditions are verified: \begin{enumerate}[(1)]
\item there exists a non-special divisor of degree $g_k-1$ in $H_k/\mathbb{F}_{p^2}$,
\item there exists a place of $H_k/\mathbb{F}_{p^2}$ of degree $n$,
\item $N_1(H_k/\mathbb{F}_{p^2}) \geq 2n + 2g_k - 1$. \end{enumerate} Moreover, the first step for which all three conditions are verified is the first step for which (3) is verified. \end{lemm}
\begin{Proof} Note that ${n \geq \frac{1}{2}(5^2+1+\epsilon(5^2)) = 18}$. We first prove that for all integers $k$ such that ${2 \leq k \leq n - 2}$, we have ${2g_k+1 \leq p^{n-1}(p-1)}$ , so Condition (2) is verified according to Corollary 5.2.10 in \cite{stic2}. Indeed, for such an integer $k$, since ${p\geq5}$ one has ${k \leq \log_2(p^{n-2}) \leq \log_2(p^{n-1}-1)}$, thus $k+2 \leq \log_2\left(4(p^{n-1}-1)\right) \leq \log_2 (4p^{n-1}-1)$ and it follows that ${2^{k+2}+1 \leq 4p^{n-1}}$. Hence ${2\cdot2^{k+1} +1 \leq p^{n-1}(p-1)}$ since ${p\geq5}$, which gives the result according to Lemma \ref{lemme_genregsr} ii).\\ We prove now that for ${k\geq \log_2 (2n-1)-2}$, Condition (3) is verified. Indeed, for such an integer $k$, we have ${k +2\geq \log_2 (2n-1)}$, so ${2^{k+2} \geq 2n-1}$. Hence we get ${2^{k+3} \geq 2n+2^{k+2}-1}$ and so ${2^{k+1}(p-1)\geq 2^{k+1}\cdot4 \geq 2n+2^{k+2}-1}$ since ${p\geq5}$. Thus we have ${N_1(H_k/\mathbb{F}_{p^2}) \geq 2n + 2g_k - 1}$ according to Bound (\ref{nbratplgsr}) and Lemma~\ref{lemme_genregsr}~ii).\\ Hence, we have proved that for any integers ${n\geq 18}$ and ${k \geq 2}$ such that\linebreak[4] ${\log_2 (2n-1)-2 \leq k \leq n - 2}$, both Conditions (2) and (3) are verified. Moreover, note that for any ${n\geq 18}$, there exists an integer $k \geq 2$ in the interval ${\big[ \log_2 (2n-1)-2; n - 2 \big]}$. Indeed, ${\log_2 (2\cdot 18-1)-2 \simeq 3.12 >2}$ and the size of this interval increases with $n$ and is greater than 1 for $n=18$. To conclude, remark that for such an integer $k$, Condition (1) is easily verified from Theorem \ref{existdivnonspe} since $p^2\geq 4$ and ${g_k \geq g_2=3}$ according to Formula (\ref{genregsr}).\\ \qed \end{Proof}
The following is the analogous result for the tower $T/\mathbb{F}_p$:
\begin{lemm}\label{lemme_placedegngsr2} Let $p\geq5$ and $n \geq \frac{1}{2}\left(p+1+\epsilon(p)\right)$. There exists a step $H_k/\mathbb{F}_p$ of the tower $T/\mathbb{F}_p$ such that the three following conditions are verified: \begin{enumerate}[(1)]
\item there exists a non-special divisor of degree $g_k-1$ in $H_k/\mathbb{F}_p$,
\item there exists a place of $H_k/\mathbb{F}_p$ of degree $n$,
\item $N_1(H_k/\mathbb{F}_p) + 2N_2(H_k/\mathbb{F}_p) \geq 2n + 2g_k - 1$. \end{enumerate} Moreover, the first step for which all three conditions are verified is the first step for which (3) is verified. \end{lemm}
\begin{Proof} Note that ${n \geq \frac{1}{2}(5+1+\epsilon(5)) =5}$. We first prove that for all integers $k$ such that ${2 \leq k \leq n - 3}$, we have ${2g_k+1 \leq p^\frac{n-1}{2}(\sqrt{p}-1)}$,
so Condition (2) is verified according to Corollary 5.2.10 in \cite{stic2}. Indeed, for such an integer $k$, since ${p\geq5}$ and ${n\geq5}$ one has ${\log_2(p^\frac{n-1}{2}-1) \geq \log_2(5^\frac{n-1}{2}-1) \geq \log_2(2^{n-1}) = n-1}$. Thus ${k+2\leq n-1\leq\log_2(p^\frac{n-1}{2}-1)}$ and it follows from Lemma \ref{lemme_genregsr} ii) that \linebreak[4]${2g_k+1 \leq 2^{k+2}+1 \leq p^\frac{n-1}{2} \leq p^\frac{n-1}{2}(\sqrt{p}-1)}$, which gives the result.\\ The same reasoning as in the previous proof shows that Condition (3) is also satisfied as soon as ${k\geq \log_2 (2n-1)-2}$. Hence, we have proved that for any integers ${n\geq 5}$ and ${k \geq 2}$ such that ${\log_2 (2n-1)-2 \leq k \leq n - 3}$, both Conditions (2) and (3) are verified. Moreover, note that the size of the interval ${\big[ \log_2 (2n-1)-2; n - 3 \big]}$ increases with $n$ and that for any ${n\geq 5}$, this interval contains at least one integer $k \geq 2$. To conclude, remark that for such an integer $k$, Condition (1) is easily verified from Theorem \ref{existdivnonspe} since ${p\geq 5}$ and ${g_k \geq g_2=3}$ according to Formula (\ref{genregsr}).\\ \qed \end{Proof}
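The interval argument of this proof can also be checked numerically, as in the following Python sketch (illustrative only).

```python
import math

# The interval [log2(2n-1)-2, n-3] contains an integer k >= 2 for all n >= 5
# (illustrative check of the proof's interval argument).
for n in range(5, 500):
    k = max(2, math.ceil(math.log2(2 * n - 1) - 2))
    assert k <= n - 3                # an admissible k exists
    assert 2**(k + 2) >= 2 * n - 1   # i.e. k >= log2(2n-1) - 2, as required
```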
Now we establish general bounds for the bilinear complexity of multiplication by using derivative evaluations on places of degree one (respectively places of degree one and two).
\begin{theo}{\label{thm_arnaud1}} Let $q$ be a prime power and $n>1$ be an integer. If there exists an algebraic function field $F/ \mathbb{F}_q$ of genus $g$ with $N$ places of degree 1 and an integer $0 < a \leq N$ such that \begin{enumerate}[(i)]
\item there exists $\mathcal{R}$, a non-special divisor of degree $g-1$,
\item there exists $Q$, a place of degree $n$,
\item $N+a \geq 2n+2g-1$. \end{enumerate} Then $$ \mu_q(n) \leq 2n +g-1+a\mbox{.} $$ \end{theo}
\begin{Proof} Let $\mathcal{P}:=\{P_1, \ldots, P_N\}$ be a set of $N$ places of degree 1 and $\mathcal{P}'$ be a subset of $\mathcal{P}$ of cardinality $a$. According to Lemma 2.7 in \cite{bapi}, we can choose an effective divisor $\mathcal{D}$ equivalent to $Q+\mathcal{R}$ such that ${\mathrm{supp}(\mathcal{D}) \cap \mathcal{P} = \varnothing}$. We define the maps $Ev_Q$ and $Ev_\mathcal{P}$ as in Theorem \ref{theo_evalder} with $u_i=2$ if $P_i \in \mathcal{P}'$ and $u_i=1$ if $P_i \in \mathcal{P}\backslash\mathcal{P}'$. Then $Ev_Q$ is bijective, since $\ker Ev_Q = \mathcal{L}(\mathcal{D}-Q)$ with ${\dim(\mathcal{D}-Q) = \dim(\mathcal{R}) =0}$ and ${\dim (\mathrm{im}\, Ev_Q )= \dim \mathcal{D} = \deg\mathcal{D} -g +1 + \mathrm{i}(\mathcal{D}) \geq n}$ according to the Riemann-Roch Theorem. Thus $\dim (\mathrm{im}\, Ev_Q ) =n$. Moreover, $Ev_\mathcal{P}$ is injective. Indeed, ${\ker Ev_\mathcal{P} = \mathcal{L}(2\mathcal{D}-\sum_{i=1}^N u_iP_i)}$ with $\deg (2\mathcal{D}-\sum_{i=1}^N u_iP_i) =2(n+g-1)-N-a <0$. Furthermore, one has $\mathrm{rk}\, Ev_\mathcal{P} = \dim(2\mathcal{D})= \deg(2\mathcal{D})-g+1+\mathrm{i}(2\mathcal{D})$, and $\mathrm{i}(2\mathcal{D})=0$ since $2\mathcal{D} \geq \mathcal{D} \geq \mathcal{R}$ with ${\mathrm{i}(\mathcal{R})=0}$. So ${\mathrm{rk}\, Ev_\mathcal{P} = 2n+g-1}$, and we can extract a subset $\mathcal{P}_1$ from $\mathcal{P}$ and a subset $\mathcal{P}_1'$ from $\mathcal{P}'$, of respective cardinalities $N_1\leq N$ and $a_1\leq a$, such that: \begin{itemize}
\item $N_1+a_1 = 2n+g-1$,
\item the map $Ev_{\mathcal{P}_1}$ defined as $Ev_\mathcal{P}$ with $u_i=2$ if $P_i \in \mathcal{P}_1'$ and $u_i=1$ if $P_i \in \mathcal{P}_1\backslash\mathcal{P}_1'$, is injective. \end{itemize} According to Theorem \ref{theo_evalder}, this leads to $\mu_q(n) \leq N_1+2a_1 \leq N_1+a_1+a$ which gives the result. \qed \end{Proof}
\begin{theo}{\label{thm_arnaud2}} Let $q$ be a prime power and $n>1$ be an integer. If there exists an algebraic function field $F/ \mathbb{F}_q$ of genus $g$ with $N_1$ places of degree 1, $N_2$ places of degree 2 and two integers $0 < a_1 \leq N_1$, $0 < a_2 \leq N_2$ such that \begin{enumerate}[(i)]
\item there exists $\mathcal{R}$, a non-special divisor of degree $g-1$,
\item there exists $Q$, a place of degree $n$,
\item $N_1+a_1 +2(N_2+a_2) \geq 2n+2g-1$. \end{enumerate} Then $$ \mu_q(n) \leq 2n + g +N_2 + a_1 + 4a_2 $$ and $$ \mu_q(n) \leq 3n+\frac{3}{2}g+\frac{a_1}{2}+3a_2. $$ \end{theo}
\begin{Proof} Let $\mathcal{P}_1:=\{P_1, \ldots, P_{N_1}\}$ be a set of $N_1$ places of degree 1 and $\mathcal{P}_1'$ be a subset of $\mathcal{P}_1$ of cardinality $a_1$. Let $\mathcal{P}_2:=\{Q_1, \ldots, Q_{N_2}\}$ be a set of $N_2$ places of degree 2 and $\mathcal{P}_2'$ be a subset of $\mathcal{P}_2$ of cardinality~$a_2$. According to Lemma 2.7 in \cite{bapi}, we can choose an effective divisor $\mathcal{D}$ equivalent to $Q+\mathcal{R}$ such that ${\mathrm{supp}(\mathcal{D}) \cap (\mathcal{P}_1\cup \mathcal{P}_2) = \varnothing}$. We define the maps $Ev_Q$ and $Ev_\mathcal{P}$ as in Theorem \ref{theo_evalder} with $u_i=2$ if $P_i \in \mathcal{P}_1' \cup \mathcal{P}_2'$ and $u_i=1$ if ${P_i \in (\mathcal{P}_1\backslash\mathcal{P}_1') \cup (\mathcal{P}_2\backslash\mathcal{P}_2')}$. Then the same reasoning as in the previous proof shows that $Ev_Q$ is bijective. Moreover, $Ev_\mathcal{P}$ is injective. Indeed, ${\ker Ev_\mathcal{P} = \mathcal{L}(2\mathcal{D}-\sum_{i=1}^N u_iP_i)}$ with $\deg (2\mathcal{D}-\sum_{i=1}^N u_iP_i) =2(n+g-1)-(N_1+a_1+2(N_2+a_2)) <0$. Furthermore, one has $\mathrm{rk}\, Ev_\mathcal{P} = \dim(2\mathcal{D})= \deg(2\mathcal{D})-g+1+\mathrm{i}(2\mathcal{D})$, and $\mathrm{i}(2\mathcal{D})=0$ since $2\mathcal{D} \geq \mathcal{D} \geq \mathcal{R}$ with ${\mathrm{i}(\mathcal{R})=0}$. So ${\mathrm{rk}\, Ev_\mathcal{P} = 2n+g-1}$, and we can extract a subset $\Tilde{\mathcal{P}}_1$ from $\mathcal{P}_1$, a subset $\Tilde{\mathcal{P}}_1'$ from $\mathcal{P}_1'$, a subset $\Tilde{\mathcal{P}}_2$ from $\mathcal{P}_2$ and a subset $\Tilde{\mathcal{P}}_2'$ from $\mathcal{P}_2'$, of respective cardinalities $\Tilde{N}_1\leq N_1$, $\Tilde{a}_1\leq a_1$, $\Tilde{N}_2\leq N_2$ and $\Tilde{a}_2\leq a_2$, such that: \begin{itemize}
\item $2n + g \geq \Tilde{N}_1+\Tilde{a}_1 +2(\Tilde{N}_2+\Tilde{a}_2) \geq 2n+g-1$,
\item the map $Ev_{\Tilde{\mathcal{P}}}$ defined as $Ev_\mathcal{P}$ with $u_i=2$ if $P_i \in \Tilde{\mathcal{P}}_1'\cup \Tilde{\mathcal{P}}_2'$ and $u_i=1$ if $P_i \in (\Tilde{\mathcal{P}}_1\backslash \Tilde{\mathcal{P}}_1') \cup (\Tilde{\mathcal{P}}_2\backslash \Tilde{\mathcal{P}}_2')$, is injective. \end{itemize} According to Theorem \ref{theo_evalder}, this leads to $\mu_q(n) \leq \Tilde{N}_1+2\Tilde{a}_1 + 3(\Tilde{N}_2 +2\Tilde{a}_2)$ since $\mu_k(2) \leq3$ for any prime power $k$. Hence, one has the first result since \linebreak[4]${\Tilde{N}_1+\Tilde{a}_1 + 2(\Tilde{N}_2 +\Tilde{a}_2)\leq 2n+g}$ and the second one since ${\frac{\Tilde{a}_1}{2}+\Tilde{N}_2+\Tilde{a}_2 \leq \frac{g}{2}+n}$. \qed \end{Proof}
\subsection{New upper bounds for $\mu_{q}(n)$} \label{sectbornesarnaud}
Here, we give a detailed proof of Bound (i) of Theorem \ref{bornes_arnaud1} and an improvement of Bound (ii). Moreover, we correct the bound for $\mu_{p^2}(n)$ given in \cite{arna1} and improve the unproved bound for $\mu_p(n)$. Namely, we prove:
\begin{theo}\label{theo_arnaud1} Let $p$ be a prime number and ${q=p^r}$. Then
\begin{equation*}
\begin{array}{l}
\mbox{(i) \ If ${q=p^r \geq 4}$, then }\mu_{q^2}(n) \leq 2 \left(1 + \frac{p}{q-3 + (p-1)\left(1- \frac{1}{q+1}\right)} \right)n,\\
\\
\mbox{(ii) \ If ${q=p^r \geq 4}$, then }\mu_{q}(n) \leq 3 \left(1 + \frac{p}{q-3 + (p-1)\left(1- \frac{1}{q+1}\right)} \right)n.\\
\\
\mbox{(iii) \ If $p\geq5$, then }\mu_{p^2}(n) \leq 2 \left(1 + \frac{2}{p-\frac{33}{16}} \right)n.\\
\\
\mbox{(iv) \ If $p\geq5$, then }\mu_{p}(n) \leq 3\left(1 + \frac{2}{p-\frac{33}{16}} \right)n.
\end{array}
\end{equation*}
\end{theo}
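The behavior of the four bounds can be illustrated numerically; the following Python sketch (illustrative only) evaluates the coefficients and checks that they decrease towards the leading constants $2$ and $3$ as $q$ (resp. $p$) grows.

```python
# Coefficients of the four bounds of the theorem (illustrative evaluation).
def c_even(q, p):
    """1 + p/(q-3+(p-1)(1-1/(q+1))): bounds (i)/(ii), up to the factor 2 or 3."""
    return 1 + p / (q - 3 + (p - 1) * (1 - 1 / (q + 1)))

def c_prime(p):
    """1 + 2/(p-33/16): bounds (iii)/(iv), up to the factor 2 or 3."""
    return 1 + 2 / (p - 33 / 16)

# The coefficients decrease towards 1 as q (resp. p) grows:
assert c_even(16, 2) < c_even(4, 2)
assert c_prime(11) < c_prime(5)
print(round(2 * c_even(4, 2), 3), round(2 * c_prime(5), 3))  # 4.222 3.362
```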
\begin{Proof} \begin{enumerate}[(i)]
\item Let $n\geq \frac{1}{2}(q^2+1+{\epsilon (q^2) })$. Otherwise, we already know from Theorems \ref{thm_wdg} and \ref{thm_shokr} that $\mu_{q^2}(n) \leq 2n$. According to Lemma \ref{lemme_placedegn}, there exists a step of the tower $T_2/\mathbb{F}_{q^2}$ on which we can apply Theorem \ref{thm_arnaud1} with $a=0$. We denote by $F_{k,s+1}/\mathbb{F}_{q^2}$ the first step of the tower that suits the hypothesis of Theorem \ref{thm_arnaud1} with $a=0$, i.e. $k$ and $s$ are integers such that ${N_{k,s+1} \geq 2n+2g_{k,s+1}-1}$ and ${N_{k,s}< 2n+2g_{k,s}-1}$, where ${N_{k,s}:=N_1(F_{k,s}/\mathbb{F}_{q^2})}$ and ${g_{k,s}:=g(F_{k,s})}$. We denote by $n_0^{k,s}$ the biggest integer such that ${N_{k,s}\geq 2n_0^{k,s}+2g_{k,s}-1}$, i.e. ${n_0^{k,s} = \sup \big\{n \in \mathbb{N} \, \vert \, 2n \leq N_{k,s}-2g_{k,s}+1\big\}}$. To perform multiplication in $\mathbb{F}_{q^{2n}}$, we have the following alternative:
\begin{enumerate}[(a)]
\item use the algorithm on the step $F_{k,s+1}$. In this case, a bound for the bilinear complexity is given by Theorem \ref{thm_arnaud1} applied with $a=0$:
$$
\mu_{q^2}(n) \leq 2n+g_{k,s+1}-1= 2n+g_{k,s}-1 +\Delta g_{k,s}.
$$
(Recall that $\Delta g_{k,s} := g_{k,s+1} - g_{k,s}$)
\item use the algorithm on the step $F_{k,s}$ with an appropriate number of derivative evaluations. Let $a:= 2(n-n_0^{k,s})$ and suppose that $a \leq N_{k,s}$. Then ${N_{k,s} \geq 2n_0^{k,s}+2g_{k,s}-1}$ implies that ${N_{k,s} +a \geq 2n+2g_{k,s}-1}$ so Condition (iii) of Theorem \ref{thm_arnaud1} is satisfied. Thus, we can perform $a$ derivative evaluations in the algorithm using the step $F_{k,s}$ and we have:
$$
\mu_{q^2}(n) \leq 2n+g_{k,s}-1+a.
$$
\end{enumerate}
Thus, if ${a \leq N_{k,s}}$, Case (b) gives a better bound as soon as ${a<\Delta g_{k,s}}$. Since we have from Lemma \ref{lemme_delta} both ${N_{k,s} \geq D_{k,s}}$ and ${\Delta g_{k,s} \geq D_{k,s}}$, if ${a\leq D_{k,s}}$ then we can perform $a$ derivative evaluations on places of degree 1 in the step $F_{k,s}$ and Case (b) gives a better bound than Case (a).\\
For $x \in \mathbb{R}^{+}$ such that ${N_{k,s+1} \geq 2[x]+2g_{k,s+1}-1}$ and ${N_{k,s} < 2[x]+2g_{k,s}-1}$, we define the function $\Phi_{k,s}(x)$ as follows:
$$
\Phi_{k,s}(x) = \left\{\begin{array}{ll}
2x+g_{k,s}-1+2(x-n_0^{k,s}) & \mbox{if } 2(x- n_0^{k,s}) < D_{k,s}\\
2x+g_{k,s+1}-1& \mbox{else}.
\end{array} \right.
$$
We define the function $\Phi$ for all ${x\geq0}$ as the minimum of the functions $\Phi_{k,s}$ for which $x$ is in the domain of $\Phi_{k,s}$. This function is piecewise linear with two kinds of pieces: those of slope $2$ and those of slope~$4$. Moreover, since the y-intercept of each piece grows with $k$ and $s$, the graph of the function $\Phi$ lies below any straight line that lies above all the points ${\big(n_0^{k,s}+\frac{D_{k,s}}{2}, \Phi(n_0^{k,s}+\frac{D_{k,s}}{2})\big)}$, since these are the \textit{vertices} of the graph. Let ${X:=n_0^{k,s}+\frac{D_{k,s}}{2}}$; then
\begin{eqnarray*}
\Phi(X) & \leq & 2X + g_{k,s+1} -1\\
& \leq & 2X+ g_{k,s+1}\\
& = & 2\left(1 + \frac{g_{k,s+1}}{2X}\right)X.
\end{eqnarray*} We want to give a bound for $\Phi(X)$ which is independent of $k$ and $s$.
Recall that $D_{k,s} :=(p-1)p^sq^k$, and $$ 2n_0^{k,s} \geq q^{k-1}p^s(q+1)(q-3) \ \ \ \mbox{by Lemma \ref{lemme_bornesup}} $$ and $$ g_{k,s+1} \leq q^{k-1}(q+1)p^{s+1} \ \ \ \mbox{by Lemma \ref{lemme_genre} (iii).} $$ So we have \begin{eqnarray*}
\frac{g_{k,s+1}}{2X} & = & \frac{g_{k,s+1}}{2n_0^{k,s}+D_{k,s}} \\
& \leq & \frac{q^{k-1}(q+1)p^{s+1}}{q^{k-1}p^s(q+1)(q-3) + (p-1)p^sq^k } \\
& = & \frac{q^{k-1}(q+1)p^sp}{q^{k-1}(q+1)p^s\left(q-3 + (p-1)\frac{q}{q+1}\right) }\\
& = & \frac{p}{(q-3)+(p-1)\frac{q}{q+1}} \end{eqnarray*} Thus, the graph of the function $\Phi$ lies below the line ${y=2\left(1 + \frac{p}{(q-3)+(p-1)\frac{q}{q+1}}\right)x}$. In particular, we get $$ \Phi(n) \leq 2\left(1 + \frac{p}{(q-3)+(p-1)\frac{q}{q+1}}\right)n. $$
\item Let $n\geq \frac{1}{2}(q+1+{\epsilon (q) })$. Otherwise, we already know from Theorems \ref{thm_wdg} and \ref{thm_shokr} that $\mu_{q}(n) \leq 2n$. According to Lemma \ref{lemme_placedegn2}, there exists a step of the tower $T_3/\mathbb{F}_{q}$ on which we can apply Theorem \ref{thm_arnaud2} with $a_1=a_2=0$. We denote by $G_{k,s+1}/\mathbb{F}_{q}$ the first step of the tower that suits the hypothesis of Theorem \ref{thm_arnaud2} with $a_1=a_2=0$, i.e. $k$ and $s$ are integers such that ${N_{k,s+1} \geq 2n+2g_{k,s+1}-1}$ and ${N_{k,s}< 2n+2g_{k,s}-1}$, where \linebreak[4]${N_{k,s}:=N_1(G_{k,s}/\mathbb{F}_q)+2N_2(G_{k,s}/\mathbb{F}_q)}$ and ${g_{k,s}:=g(G_{k,s})}$. We denote by $n_0^{k,s}$ the biggest integer such that ${N_{k,s}\geq 2n_0^{k,s}+2g_{k,s}-1}$, i.e. \linebreak[4] ${n_0^{k,s} = \sup \big\{n \in \mathbb{N} \, \vert \, 2n \leq N_{k,s}-2g_{k,s}+1\big\}}$. To perform multiplication in $\mathbb{F}_{q^n}$, we have the following alternative:
\begin{enumerate}[(a)]
\item use the algorithm on the step $G_{k,s+1}$. In this case, a bound for the bilinear complexity is given by Theorem \ref{thm_arnaud2} applied with $a_1=a_2=0$:
$$
\mu_q(n) \leq 3n+\frac{3}{2}g_{k,s+1}= 3n_0^{k,s}+\frac{3}{2}g_{k,s}+3(n-n_0^{k,s}) + \frac{3}{2}\Delta g_{k,s}.
$$
\item use the algorithm on the step $G_{k,s}$ with an appropriate number of derivative evaluations. Let ${a_1+2a_2:= 2(n-n_0^{k,s})}$ and suppose that ${a_1+2a_2 \leq N_{k,s}}$. Then ${N_{k,s}\geq 2n_0^{k,s}+2g_{k,s}-1}$ implies that ${N_{k,s} +a_1+2a_2 \geq 2n+2g_{k,s}-1}$. Thus we can perform $a_1+a_2$ derivative evaluations in the algorithm using the step $G_{k,s}$ and we have:
$$
\mu_q(n) \leq 3n+\frac{3}{2}g_{k,s}+\frac{3}{2}(a_1+2a_2)=3n_0^{k,s}+\frac{3}{2}g_{k,s}+6(n-n_0^{k,s}).
$$
\end{enumerate}
Thus, if ${a_1+2a_2 \leq N_{k,s}}$, Case (b) gives a better bound as soon as ${n-n_0^{k,s}<\frac{1}{2}\Delta g_{k,s}}$. Since we have from Lemma \ref{lemme_delta} both $N_{k,s} \geq D_{k,s}$ and $\frac{1}{2}\Delta g_{k,s} \geq \frac{1}{2}D_{k,s}$, if $a_1+2a_2\leq D_{k,s}$, i.e. $n-n_0^{k,s} \leq \frac{1}{2} D_{k,s}$, then we can perform $a_1$ derivative evaluations on places of degree 1 and $a_2$ derivative evaluations on places of degree 2 in the step $G_{k,s}$ and Case (b) gives a better bound than Case (a).\\
For $x \in \mathbb{R}^{+}$ such that ${N_{k,s+1} \geq 2[x]+2g_{k,s+1}-1}$ and ${N_{k,s} < 2[x]+2g_{k,s}-1}$, we define the function $\Phi_{k,s}(x)$ as follows:
$$
\Phi_{k,s}(x) = \left\{\begin{array}{ll}
3x+\frac{3}{2}g_{k,s}+3(x-n_0^{k,s}) & \mbox{if } x- n_0^{k,s} < \frac{D_{k,s}}{2}\\
& \\
3x+\frac{3}{2}g_{k,s+1}& \mbox{else}.
\end{array} \right.
$$
We define the function $\Phi$ for all ${x\geq0}$ as the minimum of the functions $\Phi_{k,s}$ for which $x$ is in the domain of $\Phi_{k,s}$. This function is piecewise linear with two kinds of pieces: those of slope $3$ and those of slope~$6$. Moreover, since the y-intercept of each piece grows with $k$ and $s$, the graph of the function $\Phi$ lies below any straight line that lies above all the points ${\big(n_0^{k,s}+\frac{D_{k,s}}{2}, \Phi(n_0^{k,s}+\frac{D_{k,s}}{2})\big)}$, since these are the \textit{vertices} of the graph. Let ${X:=n_0^{k,s}+\frac{D_{k,s}}{2}}$; then
\begin{eqnarray*}
\Phi(X) & \leq & 3X + \frac{3}{2}g_{k,s+1} \\
& = & 3\left(1 + \frac{g_{k,s+1}}{2X}\right)X.
\end{eqnarray*} We want to give a bound for $\Phi(X)$ which is independent of $k$ and $s$.
Recall that $D_{k,s} :=(p-1)p^sq^k$, and $$ n_0^{k,s} \geq \frac{1}{2}q^{k-1}p^s(q+1)(q-3) \ \ \ \mbox{by Lemma \ref{lemme_bornesup}} $$ and $$ g_{k,s+1} \leq q^{k-1}(q+1)p^{s+1} \ \ \ \mbox{by Lemma \ref{lemme_genre} (iii).} $$ So we have \begin{eqnarray*}
\frac{g_{k,s+1}}{2X} & = & \frac{g_{k,s+1}}{2(n_0^{k,s}+\frac{D_{k,s}}{2})} \\
& \leq & \frac{q^{k-1}(q+1)p^{s+1}}{2(\frac{1}{2}q^{k-1}p^s(q+1)(q-3) + \frac{1}{2}(p-1)p^sq^k)} \\
& = & \frac{q^{k-1}(q+1)p^sp}{q^{k-1}(q+1)p^s\left(q-3 + (p-1)\frac{q}{q+1}\right) }\\
& = & \frac{p}{(q-3)+(p-1)\frac{q}{q+1}} \end{eqnarray*} Thus, the graph of the function $\Phi$ lies below the line ${y=3\left(1 + \frac{p}{(q-3)+(p-1)\frac{q}{q+1}}\right)x}$. In particular, we get $$ \Phi(n) \leq 3\left(1 + \frac{p}{(q-3)+(p-1)\frac{q}{q+1}}\right)n. $$ \item Let $n\geq \frac{1}{2}(p^2+1+{\epsilon (p^2) })$. Otherwise, we already know from Theorems \ref{thm_wdg} and \ref{thm_shokr} that $\mu_{p^2}(n) \leq 2n$. According to Lemma \ref{lemme_placedegngsr}, there exists a step of the tower $T/\mathbb{F}_{p^2}$ on which we can apply Theorem \ref{thm_arnaud1} with $a=0$. We denote by $H_{k+1}/\mathbb{F}_{p^2}$ the first step of the tower that suits the hypothesis of Theorem~\ref{thm_arnaud1} with $a=0$, i.e. $k$ is an integer such that ${N_{k+1} \geq 2n+2g_{k+1}-1}$ and ${N_k< 2n+2g_k-1}$, where ${N_k:=N_1(H_k/\mathbb{F}_{p^2})}$ and ${g_k:=g(H_k)}$. We denote by $n_0^k$ the biggest integer such that ${N_k\geq 2n_0^k+2g_k-1}$, i.e. \linebreak[4]${n_0^k = \sup \big\{n \in \mathbb{N} \, \vert \, 2n \leq N_k-2g_k+1\big\}}$. To perform multiplication in $\mathbb{F}_{p^{2n}}$, we have the following alternative:
\begin{enumerate}[(a)]
\item use the algorithm on the step $H_{k+1}$. In this case, a bound for the bilinear complexity is given by Theorem \ref{thm_arnaud1} applied with $a=0$:
$$
\mu_{p^2}(n) \leq 2n+g_{k+1}-1= 2n+g_k-1 +\Delta g_k.
$$
(Recall that $\Delta g_k := g_{k+1} - g_k$)
\item use the algorithm on the step $H_k$ with an appropriate number of derivative evaluations. Let $a:= 2(n-n_0^k)$ and suppose that $a \leq N_k$. Then ${N_k \geq 2n_0^k+2g_k-1}$ implies that ${N_k +a \geq 2n+2g_k-1}$ so Condition (3) of Theorem \ref{thm_arnaud1} is satisfied. Thus, we can perform $a$ derivative evaluations in the algorithm using the step $H_k$ and we have:
$$
\mu_{p^2}(n) \leq 2n+g_k-1+a.
$$
\end{enumerate}
Thus, if $a \leq N_k$, Case (b) gives a better bound as soon as ${a<\Delta g_k}$.
For $x \in \mathbb{R}^{+}$ such that ${N_{k+1} \geq 2[x]+2g_{k+1}-1}$ and ${N_k < 2[x]+2g_k-1}$, we define the function $\Phi_k(x)$ as follows:
$$
\Phi_k(x) = \left\{\begin{array}{ll}
2x+g_k-1+2(x-n_0^k) & \mbox{if } 2(x- n_0^k) < \Delta g_k\\
2x+g_{k+1}-1& \mbox{else}.
\end{array} \right.
$$
Note that when Case (b) gives a better bound, that is to say when ${2(x-n_0^k) < \Delta g_k}$, then according to Lemma \ref{lemme_deltagsr} we have also $${2(x-n_0^k)< N_k}$$
so we can proceed as in Case (b), since there are enough rational places on which to perform the $a=2(x-n_0^k)$ derivative evaluations.
We define the function $\Phi$ for all ${x\geq0}$ as the minimum of the functions $\Phi_k$ for which $x$ is in the domain of $\Phi_k$. This function is piecewise linear with two kinds of pieces: those which have slope $2$ and those which have slope~$4$. Moreover, since the $y$-intercept of each piece grows with $k$, the graph of the function $\Phi$ lies below any straight line that lies above all the points ${\big(n_0^k+\frac{\Delta g_k}{2}, \Phi(n_0^k+\frac{\Delta g_k}{2})\big)}$, since these are the \textit{vertices} of the graph. Let ${X:=n_0^k+\frac{\Delta g_k}{2}}$, then
\begin{eqnarray*}
\Phi(X) & \leq & 2X + g_{k+1} -1 \leq 2\left(1 + \frac{g_{k+1}}{2X}\right)X.
\end{eqnarray*} We want to give a bound for $\Phi(X)$ which is independent of $k$.
Lemmas \ref{lemme_genregsr} ii), \ref{lemme_deltagsr} and \ref{lemme_bornesupgsr} give \begin{eqnarray*}
\frac{g_{k+1}}{2X} & \leq & \frac{2^{k+2}}{2^{k+1}(p-3)+4+2^{k+1}-2^\frac{k+1}{2}}\\
& = & \frac{2^{k+2}}{2^{k+1}\left((p-3)+1+2^{-k+1}-2^{-\frac{k+1}{2}}\right)}\\
& = & \frac{2}{p-2+2^{-k+1}-2^{-\frac{k+1}{2}}}\\
& \leq & \frac{2}{p-\frac{33}{16}} \end{eqnarray*} since $-\frac{1}{16}$ is the minimum of the function ${k \mapsto 2^{-k+1}-2^{-\frac{k+1}{2}}}$.\\ Thus, the graph of the function $\Phi$ lies below the line ${y=2\left(1+ \frac{2}{p-\frac{33}{16}}\right)x}$. In particular, we get $$ \Phi(n) \leq 2\left(1+ \frac{2}{p-\frac{33}{16}}\right)n. $$ \item Let $n\geq \frac{1}{2}(p+1+{\epsilon (p) })$. Otherwise, we already know from Theorems \ref{thm_wdg} and \ref{thm_shokr} that $\mu_{p}(n) \leq 2n$. According to Lemma \ref{lemme_placedegngsr2}, there exists a step of the tower $T/\mathbb{F}_p$ on which we can apply Theorem \ref{thm_arnaud2} with $a_1=a_2=0$. We denote by $H_{k+1}/\mathbb{F}_p$ the first step of the tower that suits the hypothesis of Theorem \ref{thm_arnaud2} with $a_1=a_2=0$, i.e. $k$ is an integer such that ${N_{k+1} \geq 2n+2g_{k+1}-1}$ and ${N_k< 2n+2g_k-1}$, where ${N_k:=N_1(H_k/\mathbb{F}_p)+2N_2(H_k/\mathbb{F}_p)}$ and ${g_k:=g(H_k)}$. We denote by $n_0^k$ the biggest integer such that ${N_k\geq 2n_0^k+2g_k-1}$, i.e.\linebreak[4] ${n_0^k = \sup \big\{n \in \mathbb{N} \, \vert \, 2n \leq N_k-2g_k+1\big\}}$. To perform multiplication in $\mathbb{F}_{p^n}$, we have the following alternative:
\begin{enumerate}[(a)]
\item use the algorithm on the step $H_{k+1}$. In this case, a bound for the bilinear complexity is given by Theorem \ref{thm_arnaud2} applied with $a_1=a_2=0$:
$$
\mu_p(n) \leq 3n+\frac{3}{2}g_{k+1}= 3n_0^k+\frac{3}{2}g_k+3(n-n_0^k) + \frac{3}{2}\Delta g_k.
$$
\item use the algorithm on the step $H_k$ with an appropriate number of derivative evaluations. Let ${a_1+2a_2:= 2(n-n_0^k)}$ and suppose that ${a_1+2a_2 \leq N_k}$. Then ${N_k\geq 2n_0^k+2g_k-1}$ implies that ${N_k +a_1+2a_2 \geq 2n+2g_k-1}$. Thus we can perform $a_1+a_2$ derivative evaluations in the algorithm using the step $H_k$ and we have:
$$
\mu_p(n) \leq 3n+\frac{3}{2}g_k+\frac{3}{2}(a_1+2a_2)=3n_0^k+\frac{3}{2}g_k+6(n-n_0^k).
$$
\end{enumerate}
Thus, if $a_1+2a_2 \leq N_k$, Case (b) gives a better bound as soon as ${n-n_0^k<\frac{1}{2}\Delta g_k}$.
For $x \in \mathbb{R}^{+}$ such that ${N_{k+1} \geq 2[x]+2g_{k+1}-1}$ and ${N_k < 2[x]+2g_k-1}$, we define the function $\Phi_k(x)$ as follows:
$$
\Phi_k(x) = \left\{\begin{array}{ll}
3x+\frac{3}{2}g_k+3(x-n_0^k) & \mbox{if } x- n_0^k < \frac{\Delta g_k}{2}\\
& \\
3x+\frac{3}{2}g_{k+1}& \mbox{else}.
\end{array} \right.
$$
Note that when Case (b) gives a better bound, that is to say when ${2(x-n_0^k) < \Delta g_k}$, then according to Lemma \ref{lemme_deltagsr} we have also $${2(x-n_0^k)< N_k}$$
so we can proceed as in Case (b), since there are enough places of degree 1 and 2 on which to perform the derivative evaluations, with $a_1+2a_2=2(x-n_0^k)$.
We define the function $\Phi$ for all ${x\geq0}$ as the minimum of the functions $\Phi_k$ for which $x$ is in the domain of $\Phi_k$. This function is piecewise linear with two kinds of pieces: those which have slope $3$ and those which have slope~$6$. Moreover, since the $y$-intercept of each piece grows with $k$, the graph of the function $\Phi$ lies below any straight line that lies above all the points ${\big(n_0^k+\frac{\Delta g_k}{2}, \Phi(n_0^k+\frac{\Delta g_k}{2})\big)}$, since these are the \textit{vertices} of the graph. Let ${X:=n_0^k+\frac{\Delta g_k}{2}}$, then
\begin{eqnarray*}
\Phi(X) & \leq & 3X + \frac{3}{2}g_{k+1} = 3\left(1 + \frac{g_{k+1}}{2X}\right)X.
\end{eqnarray*} We want to give a bound for $\Phi(X)$ which is independent of $k$.
The same reasoning as in (iii) gives \begin{equation*}
\frac{g_{k+1}}{2X} \leq \frac{2}{p-\frac{33}{16}} \end{equation*} Thus, the graph of the function $\Phi$ lies below the line ${y=3\left(1+ \frac{2}{p-\frac{33}{16}}\right)x}$. In particular, we get $$ \Phi(n) \leq 3\left(1+ \frac{2}{p-\frac{33}{16}}\right)n. $$
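The value $-\frac{1}{16}$ used here and in the computation for (iii) can be verified directly: setting $t:=2^{-\frac{k+1}{2}}$, so that $2^{-k+1}=4t^2$, we minimize a quadratic in $t$:
$$
2^{-k+1}-2^{-\frac{k+1}{2}} \;=\; 4t^2-t \;\geq\; 4\left(\frac{1}{8}\right)^2-\frac{1}{8} \;=\; -\frac{1}{16},
$$
with equality at $t=\frac{1}{8}$, i.e. at $k=5$.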
\qed \end{enumerate} \end{Proof}
\subsection{New asymptotical upper bounds for $\mu_{q}(n)$}
In this section, we give upper bounds for the asymptotical quantities $m_q$ and $M_q$ which are defined above in Section \ref{mM}. First, let us correct the two main erroneous statements (as well as their corollaries) due to I. Shparlinsky, M. Tsfasman and S. Vladut (Theorem 3.1 and Theorem 3.9 in \cite{shtsvl}) in the two following propositions.
\begin{propo}
Let $q$ be a prime power such that $A(q)>2$. Then $$m_q \leq 2\left(1+\frac{1}{A(q)-2}\right).$$ \end{propo}
\begin{Proof}\label{newboundmq}
Let $\left(F_s/\mathbb{F}_q\right)_s$ be a sequence of algebraic function fields defined over $\mathbb{F}_q$. Let us denote by $g_s$ the genus of $F_s/\mathbb{F}_q$ and by $N_1(s)$ the number of places of degree $1$ of $F_s/\mathbb{F}_q$. Suppose that the sequence $\left(F_s/\mathbb{F}_q\right)_s$ was chosen such that: \begin{enumerate}
\item $\lim_{s \rightarrow +\infty}g_s=+\infty$;
\item $\lim_{s \rightarrow +\infty}\frac{N_1(s)}{g_s}=A(q)$. \end{enumerate} Let $\epsilon$ be any real number such that $0 < \epsilon < \frac{A(q)}{2} -1$. Let us define the following integer
$$n_s=\left\lfloor\frac{N_1(s)-2g_s(1+\epsilon)}{2}\right\rfloor.$$ Let us remark that $$N_1(s)=g_s A(q) + o(g_s),$$ $$\mbox{so }N_1(s)-2(1+\epsilon)g_s=g_s\left(\strut A(q)-2(1+\epsilon)\right)+o(g_s).$$ Then the following holds \begin{enumerate}
\item there exists an integer $s_0$ such that for any $s \geq s_0$ the integer $n_s$ is strictly positive;
\item for any real number $c$ such that $0<c<A(q)-2(1+\epsilon)$ there exists an integer $s_1$ such that for any integer $s\geq s_1$ the following holds: $n_s \geq \frac{c}{2}g_s$, hence $n_s$ tends to $+\infty$;
\item there exists an integer $s_2$ such that for any integer $s\geq s_2$ the following holds: $2g_s+1 \leq q^{\frac{n_s-1}{2}}\left(q^{\frac{1}{2}}-1\right)$ and consequently there exists a place of degree $n_s$ (cf. \cite[Corollary 5.2.10 (c) p. 207]{stic} ).
\item the following inequality holds: $N_1(s)> 2n_s+2g_s-2$ and consequently, using Theorem \ref{theoprinc} we conclude that $\mu_q(n_s) \leq 2n_s+g_s-1$. \end{enumerate} Consequently, $$\frac{\mu_q(n_s)}{n_s} \leq 2+\frac{g_s-1}{n_s},$$ $$m_q \leq 2+ \lim_{s \rightarrow +\infty}\frac{2g_s-2}{N_1(s)-2(1+\epsilon)g_s-2} \leq 2\left( 1+ \frac{1}{A(q)-2(1+\epsilon)}\right).$$ This inequality is true for any $\epsilon >0$ sufficiently small. Then we obtain the result. \qed \end{Proof}
\begin{coro}\label{coromq1} Let $q=p^m$ be a prime power such that $q \geq 4$. Then $$m_{q^2}\leq 2\left(1+\frac{1}{q-3}\right).$$ \end{coro}
Note that this corollary slightly improves Theorem \ref{chudmq}. Now in the case of arbitrary $q$, we obtain:
\begin{coro}\label{coromq2} For any $q=p^m>3$,
$$m_{q}\leq 3\left(1+\frac{1}{q-3}\right).$$ \end{coro}
\begin{Proof} For any $q=p^m>3$, we have $q^2=p^{2m}\geq 16$ and thus Corollary \ref{coromq1} gives $m_{q^2}\leq 2\left(1+\frac{1}{q-3}\right)$. Then, by Lemma \ref{lemasyMqmq}, we have $$m_q\leq m_{q^2}\cdot\mu_q(2)/2$$ which gives the result since $\mu_q(2)=3$ for any $q$. \qed \end{Proof}
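Explicitly, the chain of inequalities in the proof above reads
$$
m_q \;\leq\; m_{q^2}\cdot\frac{\mu_q(2)}{2} \;\leq\; 2\left(1+\frac{1}{q-3}\right)\cdot\frac{3}{2} \;=\; 3\left(1+\frac{1}{q-3}\right).
$$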
Now, we are going to show that for $M_q$ the same upper bound as for $m_q$ can be proved though only in the case of $q$ being an even power of a prime. However, we are going to prove that in the case of $q$ being an odd power of a prime, the difference between the two bounds is very slight.
\begin{propo}\label{newbound} Let $q=p^m$ be a prime power such that $q \geq 4$. Then $$M_{q^2}\leq 2\left(1+\frac{1}{q-3}\right).$$ \end{propo}
\begin{Proof} Let $q=p^m$ be a prime power such that $q\geq4$. Let us consider two cases. First, we suppose $q=p$. We know that for any real number $\epsilon >0$ and for any sufficiently large real number $x$, there exists a prime number $l_k$ such that $x<l_k<(1+\epsilon)x$. Now, without loss of generality, let us consider the characteristic $p$ such that $p\neq 11$. Then it is known (\cite{tsvl} and \cite{shtsvl}) that the curve $X_k=X_0(11l_k)$, where $l_k$ is the $k$-th prime number, has genus $g_k=l_k$ and satisfies $N_1(X_k(\mathbb{F}_{q^2}))\geq (q-1)(g_k+1)$ where $N_1(X_k(\mathbb{F}_{q^2}))$ denotes the number of rational points over $\mathbb{F}_{q^2}$ of the curve $X_k$. Let us consider a sufficiently large $n$. There exist two consecutive prime numbers $l_k$ and $l_{k+1}$ such that $(p-1)(l_{k+1}+1)> 2n+2l_{k+1}-2$ and $(p-1)(l_k+1)\leq 2n+2l_k-2$. Let us consider the algebraic function field $F_{k+1}/\mathbb{F}_{p^2}$ associated to the curve $X_{k+1}$ of genus $l_{k+1}$ defined over $\mathbb{F}_{p^2}$. Let $N_i(F_{k}/\mathbb{F}_{p^2})$ be the number of places of degree $i$ of $F_{k}/\mathbb{F}_{p^2}$. Then $N_1(F_{k+1}/\mathbb{F}_{p^2})\geq (p-1)(l_{k+1}+1)> 2n+2l_{k+1}-2$. Moreover, it is known that $N_n(F_{k+1}/\mathbb{F}_{p^2})>0$ for any integer $n$ sufficiently large. We also know that $l_{k+1}-l_k\leq l_k^{0.535}$ for any integer $k\geq k_0$ where $k_0$ can be effectively determined by \cite{baha}. Then there exists a real number $\epsilon>0$ such that $l_{k+1}-l_k=\epsilon l_k\leq l_k^{0.535}$, namely $l_{k+1}\leq(1+\epsilon)l_k$. It is sufficient to choose $\epsilon$ such that $\epsilon l_k^{0.465}\leq 1$.
Consequently, for any integer $n$ sufficiently large, this algebraic function field $F_{k+1}/\mathbb{F}_{p^2}$ satisfies Theorem \ref{theoprinc}, and so $\mu_{p^2}(n)\leq 2n+l_{k+1}-1\leq 2n+(1+\epsilon)l_k-1$ with $l_k\leq \frac{2n}{p-3}-\frac{p+1}{p-3}$. Thus, as $n\longrightarrow +\infty$, we have $l_k\longrightarrow +\infty$ and $\epsilon \longrightarrow 0$, so we obtain $M_{p^2}\leq 2\left(1+\frac{1}{p-3}\right)$. Note that for $p=11$, Proposition 4.1.20 in \cite{tsvl} enables us to obtain $g_k=l_k+O(1)$.
Now, let us study the more difficult case where $q=p^m$ with $m>1$. We use the Shimura curves as in \cite{shtsvl}. Recall the construction of this good family. Let $L$ be a totally real number field of degree $m$, abelian over $\mathbb{Q}$, in which $p$ is inert; thus the residue class field ${\mathcal O}_L/(p)$ of $p$, where ${\mathcal O}_L$ denotes the ring of integers of $L$, is isomorphic to the finite field $\mathbb{F}_{q}$. Let $\wp$ be a prime of $L$ which does not divide $p$ and let $B$ be a quaternion algebra for which $$B\otimes_{\mathbb{Q}}\mathbb{R}=M_2(\mathbb{R}) \otimes \mathbb{H} \otimes \cdots \otimes \mathbb{H}$$ where $\mathbb{H}$ is the skew field of Hamilton quaternions. Let $B$ be also unramified at any finite place if $(m-1)$ is even; let $B$ be also unramified outside infinity and $\wp$ if $(m-1)$ is odd. Then, over $L$ one can define the Shimura curve by its complex points $X_{\Gamma}(\mathbb{C})=\Gamma\setminus \mathfrak{h}$, where $\mathfrak{h}$ is the Poincar\'e upper half-plane and $\Gamma$ is the group of units of a maximal order ${\mathcal O}$ of $B$ with totally positive norm modulo its center. Hence, the considered Shimura curve admits an integral model over $L$ and it is well known that its reduction $X_{\Gamma,p}(\mathbb{F}_{p^{2m}})$ modulo $p$ is good and is defined over the residue class field ${\mathcal O}_L/(p)$ of $p$, which is isomorphic to $\mathbb{F}_q$ since $p$ is inert in $L$. Moreover, by \cite{ihar}, the number $N_1(X_{\Gamma,p}(\mathbb{F}_{q^2}))$ of $\mathbb{F}_{q^2}$-points of $X_{\Gamma,p}$ is such that $N_1(X_{\Gamma,p}(\mathbb{F}_{q^2}))\geq (q-1)(g+1)$, where $g$ denotes the genus of $X_{\Gamma,p}(\mathbb{F}_{q^{2}})$. Let now $l$ be a prime which is greater than the maximum order of the stabilizers $\Gamma_z$, where $z \in \mathfrak{h}$ is a fixed point of $\Gamma$, and let $\wp \nmid l$. Let $\Gamma_0(l)_l$ be the following subgroup of $GL_2(\mathbb{Z}_l)$:
$$ \Gamma_0(l)_l=\left \lbrace \left ( \begin{array}{ll}
a & b \cr
c & d \end{array} \right ) \in GL_2(\mathbb{Z}_l), c \equiv 0~(mod~l) \right \rbrace . $$
Suppose that $l$ splits completely in $L$. Then there exists an embedding $L \longrightarrow \mathbb{Q}_l$ where $\mathbb{Q}_l$ denotes the usual $l$-adic field, and since $B\otimes_{\mathbb{Q}} \mathbb{Q}_l=M_2(\mathbb{Q}_l)$, we have a natural map: $$\phi_l: \Gamma \rightarrow GL_2(\mathbb{Z}_l).$$ Let $\Gamma_l$ be the inverse image of $\Gamma_0(l)_l$ in $\Gamma$ under $\phi_l$. Then $\Gamma_l$ is a subgroup of $\Gamma$ of index $l$. We consider the Shimura curve $X_l$ with $$X_l(\mathbb{C})=\Gamma_l\setminus\mathfrak{h}.$$ It admits an integral model over $L$ and so can be defined over $L$. Hence, its reduction $X_{l,p}$ modulo $p$ is good and it is defined over the residue class field ${\mathcal O}_L/(p)$ of $p$, which is isomorphic to $\mathbb{F}_q$ since $p$ is inert in $L$. Moreover, the supersingular $\mathbb{F}_p$-points of $X_{\Gamma,p}$ split completely in the natural projection $$\pi_l: X_{l,p} \rightarrow X_{\Gamma,p}.$$ Thus, the number of rational points of $X_{l,p}(\mathbb{F}_{q^2})$ is: $$N_1(X_{l,p}(\mathbb{F}_{q^2}))\geq l(q-1)(g+1).$$ Moreover, since $l$ is greater than the maximum order of a fixed point of $\Gamma$ on $\mathfrak{h}$, the projection $\pi_l$ is unramified and thus by the Hurwitz formula, $$g_l=1+l(g-1)$$ where $g_l$ is the genus of $X_l$ (and also of $X_{l,p}$).
Note that since the field $L$ is abelian over $\mathbb{Q}$, there exists an integer $N$ such that the field $L$ is contained in a cyclotomic extension $\mathbb{Q}(\zeta_N)$ where $\zeta_N$ denotes a primitive root of unity with minimal polynomial $\Phi_{N}$. Let us consider the reduction $\Phi_{N,l_k}$ of $\Phi_{N}$ modulo the prime $l_k$. Then, the prime $l_k$ is totally split in the integer ring of $L$ if and only if the polynomial $\Phi_{N,l_k}$ is totally split in $\mathbb{F}_{l_k}=\mathbb{Z}/l_k\mathbb{Z}$, i.e. if and only if $\mathbb{F}_{l_k}$ contains the $N$-th roots of unity, which is equivalent to $N\mid l_k-1$. Hence, any prime $l_k$ such that $l_k \equiv 1 \mod N$ is totally split in $\mathbb{Q}(\zeta_N)$ and hence in $L$. Since $l_k$ runs over primes in an arithmetical progression, the ratio of two consecutive prime numbers $l_k \equiv 1 \mod N$ tends to one.
Then for any real number $\epsilon >0$, there exists an integer $k_0$ such that for any integer $k\geq k_0$, $l_{k+1}\leq (1+\epsilon)l_k$ where $l_k$ and $l_{k+1}$ are two consecutive prime numbers congruent to one modulo $N$. Then there exists an integer $n_{\epsilon}$ such that for any integer $n\geq n_{\epsilon}$, the integer $k$, such that the two following inequalities hold $$l_{k+1}(q-1)(g+1)> 2n+2g_{l_{k+1}}-2$$ and $$l_k(q-1)(g+1)\leq 2n+2g_{l_k}-2,$$ satisfies $k\geq k_0$ where $g_{l_i}=1+l_i(g-1)$ for any integer $i$. Let us consider the algebraic function field $F_{k}/\mathbb{F}_{q^2}$ defined over the finite field $\mathbb{F}_{q^2}$ associated to the Shimura curve $X_{l_{k}}$ of genus $g_{l_{k}}$. Let $N_i(F_{k}/\mathbb{F}_{q^2})$ be the number of places of degree $i$ of $F_{k}/\mathbb{F}_{q^2}$. Then $N_1(F_{k+1}/\mathbb{F}_{q^2})\geq l_{k+1}(q-1)(g+1) > 2n+2g_{l_{k+1}}-2$ where $g$ is the genus of the Shimura curve $X_{\Gamma,p}(\mathbb{F}_{q^{2}})$. Moreover, it is known that there exists an integer $n_0$ such that for any integer $n\geq n_0$, $N_n(F_{k+1}/\mathbb{F}_{q^2})>0$. Consequently, for any integer $n\geq \max(n_{\epsilon},n_0)$ this algebraic function field $F_{k+1}/\mathbb{F}_{q^2}$ satisfies Theorem \ref{theoprinc} and so $\mu_{q^2}(n)\leq 2n+g_{l_{k+1}}-1\leq 2n+l_{k+1}(g-1)\leq 2n+(1+\epsilon)l_k(g-1)$ with $l_k< \frac{2n}{(q-1)(g+1)-2(g-1)}$. Thus, for any real number $\epsilon >0$ and for any $n\geq \max(n_{\epsilon},n_0)$, we obtain $\mu_{q^2}(n)\leq 2n+\frac{2n(1+\epsilon)(g-1)}{(q-1)(g+1)-2(g-1)}$ which gives $M_{q^2}\leq2\left(1+\frac{1}{q-3}\right)$. \qed \end{Proof}
\begin{propo}\label{newbound2} Let $q=p^m$ be a prime power with odd $m$ such that $q \geq 5$. Then $$M_{q}\leq 3\left(1+\frac{2}{q-3}\right).$$ \end{propo}
\begin{Proof} It is sufficient to consider the same families of curves as in the proof of Proposition \ref{newbound}. These families of curves $X_k$ are defined over the residue class field of $p$ which is isomorphic to $\mathbb{F}_q$. Hence, we can consider the associated algebraic function fields $F_k/\mathbb{F}_q$ defined over $\mathbb{F}_q$. If $q=p$, we have $N_1(F_{k+1}/\mathbb{F}_{p^2})=N_1(F_{k+1}/\mathbb{F}_{p})+2N_2(F_{k+1}/\mathbb{F}_{p})\geq (p-1)(l_{k+1}+1)> 2n+2l_{k+1}-2$ since $F_{k+1}/\mathbb{F}_{p^2}=F_{k+1}/\mathbb{F}_{p}\otimes_{\mathbb{F}_p} \mathbb{F}_{p^2}$. Then, for any real number $\epsilon >0$ and for any integer $n$ sufficiently large, we have $\mu_{p}(n)\leq 3n+3g_{l_{k+1}}\leq 3n+3(1+\epsilon)l_k$ by Theorem \ref{theoprinc} since $N_n(F_{k+1}/\mathbb{F}_{q^2})>0$. Then, by using the condition $l_k\leq \frac{2n}{p-3}-\frac{p+1}{p-3}$, we obtain $M_{p}\leq 3\left(1+\frac{2}{p-3}\right)$. If $q=p^m$ with odd $m$, we have $N_1(F_{k+1}/\mathbb{F}_{q^2})=N_1(F_{k+1}/\mathbb{F}_{q})+2N_2(F_{k+1}/\mathbb{F}_{q})\geq l_{k+1}(q-1)(g+1)> 2n+2g_{l_{k+1}}-2$ since $F_{k+1}/\mathbb{F}_{q^2}=F_{k+1}/\mathbb{F}_{q}\otimes_{\mathbb{F}_q} \mathbb{F}_{q^2}$. Then, for any real number $\epsilon >0$ and for any integer $n$ sufficiently large as in the proof of Proposition \ref{newbound}, we have $\mu_{q}(n)\leq 3n+3g_{l_{k+1}}\leq 3n+3(1+\epsilon)l_k$ by Theorem \ref{theoprinc} since $N_n(F_{k+1}/\mathbb{F}_{q^2})>0$. Then, by using the condition $l_k< \frac{2n}{(q-1)(g+1)-2(g-1)}$, we obtain $M_{q}\leq 3\left(1+\frac{2}{q-3}\right)$. \qed \end{Proof}
\begin{propo}\label{newbound3}
$$M_{2}\leq 13.5.$$ \end{propo}
\begin{Proof} Let $q=p^m=4$. We also use the Shimura curves. Let $L=\mathbb{Q}(\sqrt{d})$ be a totally real quadratic number field such that $d\equiv 1 \mod 8$. Then the prime $p=2$ is totally split in $L$ and so the residue class field ${\mathcal O}_L/(p)$ of $p$, where ${\mathcal O}_L$ denotes the ring of integers of $L$, is isomorphic to the finite field $\mathbb{F}_{2}$. Then, let $\wp$ be a prime of $L$ which does not divide $p$ and let $B$ be a quaternion algebra for which $$B\otimes_{\mathbb{Q}}\mathbb{R}=M_2(\mathbb{R}) \otimes \mathbb{H}$$ where $\mathbb{H}$ is the skew field of Hamilton quaternions. Let $B$ be also unramified outside infinity and $\wp$. Then, over $L$ one can define the Shimura curve by its complex points $X_{\Gamma}(\mathbb{C})=\Gamma\setminus \mathfrak{h}$, where $\mathfrak{h}$ is the Poincar\'e upper half-plane and $\Gamma$ is the group of units of a maximal order ${\mathcal O}$ of $B$ with totally positive norm modulo its center. Hence, the considered Shimura curve admits an integral model over $L$ and it is well known that its reduction $X_{\Gamma,p}(\mathbb{F}_{p^{2m}})$ modulo $p$ is good and is defined over the residue class field ${\mathcal O}_L/(p)$ of $p=2$, which is isomorphic to $\mathbb{F}_2$ since $p=2$ is totally split in $L$. Moreover, by \cite{ihar}, the number $N_1(X_{\Gamma,p}(\mathbb{F}_{q^2}))$ of $\mathbb{F}_{q^2}$-points of $X_{\Gamma,p}$ is such that $N_1(X_{\Gamma,p}(\mathbb{F}_{q^2}))\geq (q-1)(g+1)$, where $g$ denotes the genus of $X_{\Gamma,p}(\mathbb{F}_{q^{2}})$. Let now $l$ be a prime which is greater than the maximum order of the stabilizers $\Gamma_z$, where $z \in \mathfrak{h}$ is a fixed point of $\Gamma$, and let $\wp \nmid l$. Let $\Gamma_0(l)_l$ be the following subgroup of $GL_2(\mathbb{Z}_l)$:
$$ \Gamma_0(l)_l=\left \lbrace \left ( \begin{array}{ll}
a & b \cr
c & d \end{array} \right ) \in GL_2(\mathbb{Z}_l), c \equiv 0~(mod~l) \right \rbrace. $$
Suppose that $l$ splits completely in $L$. Then there exists an embedding $L \longrightarrow \mathbb{Q}_l$ where $\mathbb{Q}_l$ denotes the usual $l$-adic field, and since $B\otimes_{\mathbb{Q}} \mathbb{Q}_l=M_2(\mathbb{Q}_l)$, we have a natural map: $$\phi_l: \Gamma \rightarrow GL_2(\mathbb{Z}_l).$$ Let $\Gamma_l$ be the inverse image of $\Gamma_0(l)_l$ in $\Gamma$ under $\phi_l$. Then $\Gamma_l$ is a subgroup of $\Gamma$ of index $l$. We consider the Shimura curve $X_l$ with $$X_l(\mathbb{C})=\Gamma_l\setminus \mathfrak{h}.$$ It admits an integral model over $L$ and so can be defined over $L$. Hence, its reduction $X_{l,p}$ modulo $p=2$ is good and it is defined over the residue class field ${\mathcal O}_L/(p)$ of $p=2$, which is isomorphic to $\mathbb{F}_2$ since $p=2$ is totally split in $L$. Moreover, the supersingular $\mathbb{F}_p$-points of $X_{\Gamma,p}$ split completely in the natural projection $$\pi_l: X_{l,p} \rightarrow X_{\Gamma,p}.$$ Thus, the number of rational points of $X_{l,p}(\mathbb{F}_{q^2})$ is: $$N_1(X_{l,p}(\mathbb{F}_{q^2}))\geq l(q-1)(g+1).$$ Moreover, since $l$ is greater than the maximum order of a fixed point of $\Gamma$ on $\mathfrak{h}$, the projection $\pi_l$ is unramified and thus by the Hurwitz formula, $$g_l=1+l(g-1)$$ where $g_l$ is the genus of $X_l$ (and also of $X_{l,p}$). Note that since the field $L$ is abelian over $\mathbb{Q}$, there exists an integer $N$ such that the field $L$ is contained in a cyclotomic extension $\mathbb{Q}(\zeta_N)$ where $\zeta_N$ denotes a primitive root of unity with minimal polynomial $\Phi_{N}$. Let us consider the reduction $\Phi_{N,l_k}$ of $\Phi_{N}$ modulo the prime $l_k$. Then, the prime $l_k$ is totally split in the integer ring of $L$ if and only if the polynomial $\Phi_{N,l_k}$ is totally split in $\mathbb{F}_{l_k}=\mathbb{Z}/l_k\mathbb{Z}$, i.e. if and only if $\mathbb{F}_{l_k}$ contains the $N$-th roots of unity, which is equivalent to $N\mid l_k-1$.
Hence, any prime $l_k$ such that $l_k \equiv 1 \mod N$ is totally split in $\mathbb{Q}(\zeta_N)$ and hence in $L$. Since $l_k$ runs over primes in an arithmetical progression, the ratio of two consecutive prime numbers $l_k \equiv 1 \mod N$ tends to one. Then for any real number $\epsilon >0$, there exists an integer $k_0$ such that for any integer $k\geq k_0$, $l_{k+1}\leq (1+\epsilon)l_k$ where $l_k$ and $l_{k+1}$ are two consecutive prime numbers congruent to one modulo $N$. Then there exists an integer $n_{\epsilon}$ such that for any integer $n\geq n_{\epsilon}$, the integer $k$, such that the two following inequalities hold $$l_{k+1}(q-1)(g+1)> 2n+2g_{l_{k+1}}+6$$ and $$l_k(q-1)(g+1)\leq 2n+2g_{l_k}+6,$$ satisfies $k\geq k_0$ where $g_{l_i}=1+l_i(g-1)$ for any integer $i$.
Let us consider the algebraic function field $F_{k}/\mathbb{F}_{2}$ defined over the finite field $\mathbb{F}_{2}$ associated to the Shimura curve $X_{l_{k}}$ of genus $g_{l_{k}}$. Let $N_i(F_{k}/\mathbb{F}_{t})$ be the number of places of degree $i$ of $F_{k}/\mathbb{F}_{t}$ where $t$ is a prime power. Then, since $F_{k+1}/\mathbb{F}_{q^2}=F_{k+1}/\mathbb{F}_{2}\otimes_{\mathbb{F}_2} \mathbb{F}_{q^2}$ for $q=4$, we have $N_1(F_{k+1}/\mathbb{F}_{q^2})=N_1(F_{k+1}/\mathbb{F}_{2})+2N_2(F_{k+1}/\mathbb{F}_{2})+4N_4(F_{k+1}/\mathbb{F}_{2})\geq l_{k+1}(q-1)(g+1)> 2n+2g_{l_{k+1}}+6$ where $g$ is the genus of the Shimura curve $X_{\Gamma,p}(\mathbb{F}_{q^{2}})$. Moreover, it is known that there exists an integer $n_0$ such that for any integer $n\geq n_0$, $N_n(F_{k+1}/\mathbb{F}_{q^2})>0$. Consequently, for any integer $n\geq \max(n_{\epsilon},n_0)$ this algebraic function field $F_{k+1}/\mathbb{F}_{2}$ satisfies Theorem 3.2 in \cite{bapi} and so $\mu_{2}(n)\leq \frac{9}{2}(n+g_{l_{k+1}}+5)\leq \frac{9}{2}(n+l_{k+1}(g-1)+6)\leq \frac{9}{2}(n+(1+\epsilon)l_k(g-1))+27$ with $l_k< \frac{2n+8}{(q-1)(g+1)-2(g-1)}$. Thus, for any real number $\epsilon >0$ and for any $n\geq \max(n_{\epsilon},n_0)$, we obtain $\mu_{2}(n)\leq \frac{9}{2}(n+2n\frac{(1+\epsilon)}{q-3}+\frac{8}{q-3})+27\leq \frac{9}{2}(1+2(1+\epsilon))n+63$ which gives $M_{2}\leq 13.5$. \qed
\end{Proof}
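Note that the constant $13.5$ arises as the limit of the slope obtained at the end of the proof: dividing $\mu_2(n)\leq \frac{9}{2}\left(1+2(1+\epsilon)\right)n+63$ by $n$ and letting first $n\rightarrow +\infty$ and then $\epsilon\rightarrow 0$ gives
$$
M_2 \;\leq\; \frac{9}{2}\left(1+2(1+\epsilon)\right) \;\longrightarrow\; \frac{9}{2}\cdot 3 \;=\; \frac{27}{2} \;=\; 13.5.
$$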
\end{document}
\begin{document}
\title[Noether's problem]{Noether's problem and descent}
\author{Fedor Bogomolov} \address{Courant Institute of Mathematical Sciences, N.Y.U. \\
251 Mercer str. \\
New York, NY 10012, U.S.A.} \address{National Research University Higher School of Economics, Russian Federation \\ AG Laboratory, HSE \\ 7 Vavilova str., Moscow, Russia, 117312}
\email{[email protected]}
\author{Yuri Tschinkel} \address{Courant Institute of Mathematical Sciences, N.Y.U. \\
251 Mercer str. \\
New York, NY 10012, U.S.A.} \address{Simons Foundation\\ 160 Fifth Av. \\ New York, NY 10010, U.S.A.} \email{[email protected]}
\keywords{Rationality, fibrations}
\begin{abstract} We study Noether's problem from the perspective of torsors under linear algebraic groups and descent. \end{abstract}
\maketitle
\setcounter{section}{0} \section*{Introduction} \label{sect:introduction}
This note is inspired by the theory of universal torsors developed by Colliot-Th\'el\`ene and Sansuc in connection with arithmetic problems on del Pezzo surfaces \cite{CTS}. This theory associates to a geometrically rational surface $X$ over a field $k$, with $X(k)\neq \emptyset$, a torsor \begin{equation} \label{eqn:tors} \pi:\mathcal T\stackrel{T}{\longrightarrow} X, \end{equation} where $T$ is the N\'eron-Severi torus of $X$, i.e., the character group of $T_{\bar{k}}$ is isomorphic, as a Galois module, to the geometric N\'eron-Severi lattice ${\rm NS}(X_{\bar{k}})$ of $X$. The torsor is viewed as a {\em descent} variety. Basic arithmetic problems on $X$, such as the Hasse Principle and Weak Approximation, are reduced to the following geometric conjecture: $\mathcal T$ is {\em rational} over $k$. The gist of this conjecture is that on $\mathcal T$, the arithmetic complexity of $X$ is {\em untwisted}: while $X$ may have nontrivial (algebraic) Brauer group, it is eliminated via passage to a universal torsor. The conjecture implies in particular that $X$ is unirational over $k$, which is an open problem for del Pezzo surfaces of degree 1.
In higher dimensions, unirational varieties may have nontrivial transcendental Brauer group, or more generally, nontrivial higher unramified cohomology. Examples are conic bundles over rational surfaces considered by Artin--Mumford \cite{AM}, quadric bundles \cite{CTO}, or Brauer--Severi bundles. Our motivation was to understand whether or not these obstructions to stable rationality can be untwisted via passage to fibrations as in \eqref{eqn:tors}.
\begin{defn} \label{defn:tower} A variety $X$ over a field $k$ admits a {\em rational tower} if there exists a sequence of dominant rational maps \begin{equation} \label{eqn:tower} X_n \stackrel{\pi_n}{\longrightarrow} X_{n-1} \rightarrow \cdots \rightarrow X_1 \stackrel{\pi_1}{\longrightarrow} X_0:=X, \end{equation} over $k$, such that \begin{itemize} \item[(1)] the {\em source} of the tower, $X_n$, is rational over $k$, \item[(2)] the generic fiber of $\pi_i$ is geometrically rational and irreducible, over the function field $k(X_{i-1})$, for all $i$. \end{itemize} We say that $X$ admits a {\em toric rational tower}, if in addition \begin{itemize} \item[(2')] the generic fiber of $\pi_i$ is birational to a torsor under an algebraic torus, over the function field $k(X_{i-1})$, for all $i$. \end{itemize} \end{defn}
\begin{conj} \label{conj:main} Let $X$ be a unirational variety over $k$. Then $X$ admits a rational tower. \end{conj}
This conjecture is motivated by topology: every continuous map is homotopy equivalent to a fibration with constant fiber. In the algebraic-geometric context, the conjecture implies in particular that {\em unramified cohomology} of the source of the tower, $X_n$, in Definition \ref{defn:tower}, is trivial. Of course, one can trivialize a given unramified cohomology class (with finite coefficients) via passage to a tower with geometrically rational fibers, as in \eqref{eqn:tower}. Indeed, let $K=k(X)$ be the function field of an algebraic variety over a field $k$, which contains all roots of unity. By the Bloch-Kato conjecture, proved by Voevodsky, the subring $$ \oplus_{i\ge 2} H^i(K)\subset H^*(K) $$ of Galois cohomology of $K$, with finite coefficients, equals the ideal generated by $H^2(K)$. Since elements of $H^2$ are trivialized on the corresponding Brauer-Severi variety, we can trivialize an arbitrary finite set of Galois cohomology classes via passage to a tower as in \eqref{eqn:tower}: $$ Y_m\rightarrow Y_{m-1} \rightarrow \cdots \rightarrow Y_1\rightarrow Y_0=X, $$ where the generic fibers are forms of projective spaces. However, a direct application of this construction does not imply Conjecture~\ref{conj:main}, as we cannot assert the rationality of $Y_m$. In fact, new unramified classes may appear in the process, and we cannot even ensure that $Y_m$ has trivial unramified cohomology.
To motivate Conjecture~\ref{conj:main} from the perspective of algebra, consider the following important class of unirational varieties: quotients $V/G$, where $G$ is a finite group and $V$ a (finite-dimensional) faithful linear representation of $G$ over $k$. In \cite{BT-univer} we showed that, over $k=\bar{{\mathbb F}}_p$, these varieties are {\em universal} for unramified cohomology: every unramified cohomology class of an algebraic variety over $k$ is induced from $V/G^c$, where $G^c$ is a central extension of an abelian group (see Section~\ref{sect:background} for more details). Thus we expect that such quotients are universal from the birational point of view as well. For groups of this type, and a wide class of solvable groups $G$, we prove: \begin{itemize} \item $V/G$ admits a toric rational tower. \end{itemize} Its source $X_n$ is a $k$-rational variety, untwisting all unramified cohomology of $V/G$.
\
\noindent {\bf Acknowledgments.} We are thankful to B. Bloom-Smith for useful comments. This work was partially supported by Laboratory of Mirror Symmetry NRU HSE, RF grant 14.641.31.0001. The first author was funded by the Russian Academic Excellence Project `5-100'. The first author was also supported by a Simons Fellowship and by the EPSRC program grant EP/M024830. The second author was partially supported by the NSF grant 1601912.
\section{Background} \label{sect:background}
Let $X$ be a smooth projective geometrically rational variety over a field $k$. Among obstructions to $k$-rationality are the absence of $k$-rational points or the nontriviality of the {\em algebraic} Brauer group $$ {\rm Br}(X)/{\rm Br}(k). $$
More generally, we may consider {\em unramified cohomology} $$ H^i_{nr}(K,\mu_m^{\otimes i-1}), \quad H^i_{nr}(K,{\mathbb Q}/{\mathbb Z}(i-1)), \quad m\in {\mathbb N}, $$ (see \cite{Bog87}, \cite{CTO} for definitions and basic properties). These groups are birational invariants which vanish when $i > \dim(X)$ or when $X$ is $k$-rational. For $X$ smooth and projective, we have $$ {\rm Br}(X)[m]=H^2_{nr}(K, \mu_m), $$ where $\mu_m$ denotes the group of $m$-th roots of unity. Below, we omit the coefficients when they are clear from the context.
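For the reader's convenience, we recall the standard description of unramified classes: for a discrete valuation $\nu$ of $K=k(X)$, trivial on $k$, with residue field $\kappa(\nu)$, Galois cohomology carries a residue map $\partial_\nu$, and the unramified subgroup is the intersection of its kernels:
$$
H^i_{nr}(K,\mu_m^{\otimes i-1}) \,=\, \bigcap_{\nu}\, {\rm Ker}\Big(\partial_\nu: H^i(K,\mu_m^{\otimes i-1})\longrightarrow H^{i-1}(\kappa(\nu),\mu_m^{\otimes i-2})\Big),
$$
the intersection taken over all such valuations (see \cite{CTO}).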
\begin{conj} \label{conj:bounded} Let $X$ be a unirational variety over an algebraically closed field of characteristic zero. Then its unramified cohomology is finite. \end{conj}
Finiteness of $H^i_{nr}(X,{\mathbb Q}/{\mathbb Z}(i-1))$, for unirational $X$, is known for: \begin{itemize} \item $i=2$, classically, \item $i=3$ \cite[Proposition 3.2]{CT-Kahn}. \end{itemize}
When $X(k)\neq \emptyset$, the theory of Colliot-Th\'el\`ene--Sansuc provides a fibration $$ \pi: \mathcal T\stackrel{T}{\longrightarrow} X $$ as in \eqref{eqn:tors}, such that \begin{itemize} \item the generic fiber of $\pi$ is a principal homogeneous space under an algebraic torus $T$ and \item the total space $\mathcal T$ is, {\em conjecturally}, a rational variety \cite[Conjecture H1, Section 2.8]{CTS}. \end{itemize} Special cases of this are known, e.g., when $X$ admits a conic bundle structure over $k$, with 4 degenerate fibers \cite{CT-Sko}. As evidence for this rationality conjecture, one has the proof that, over $k$ of characteristic zero, $H^2_{nr}(\mathcal T)$ is trivial \cite[Theorem 2.1.2]{CTS}. A more recent consistency check is \cite{cao}, where it is shown that \begin{itemize} \item when $X$ is a del Pezzo surface of degree $\ge 2$, the nontrivial part of $H^3_{nr}(\mathcal T)$ is finite and 2-primary. \end{itemize}
We now turn to unirational varieties which are not necessarily geometrically rational, which may have nontrivial {\em transcendental} Brauer group. Examples appeared in the context of {\em Noether's problem}:
\begin{prob}[Noether] Let $V$ be a faithful representation of a finite group $G$ over $k$. Is $V/G$ (stably) rational? \end{prob}
Noether's problem has a negative solution: counterexamples are based on explicit computations of unramified cohomology, which is essentially a combinatorial problem, in terms of the structure of Sylow subgroups of $G$. Groups $G$ with nontrivial $H^2_{nr}(V/G)$ were constructed in \cite{saltman}, \cite{Bog-Izv-87}, and \cite{mor}; with trivial $H^2_{nr}(V/G)$ but nontrivial $H^3_{nr}(V/G)$ in \cite{peyre} and \cite{hoshi}.
On the other hand, unramified cohomology is trivial for actions of many classical finite groups. In some, but not all, of these cases, we have proofs of (stable) rationality (see, e.g., \cite{B-stable}, \cite{BBB}).
Our goal in the following sections will be to {\em untwist} unramified cohomology of $V/G$ by constructing rational towers as in Definition~\ref{defn:tower}. Of particular importance are solvable groups $G$. Indeed, in \cite{BT-univer} we proved a {\em universality} result:
\begin{thm} \label{thm:univer} Let $X$ be an algebraic variety of dimension $\ge 2$ over $k=\bar{\mathbb F}_p$, $\ell\neq p$ a prime, and $$ \alpha\in H^i_{nr}(k(X), {\mathbb Z}/\ell^n) $$ an unramified class. Then there exist finite-dimensional $k$-vector spaces $V_j, j\in J$, depending on $\alpha$, such that $\alpha$ is induced, via a rational map, from an unramified class in the cohomology of the quotient of $$ \mathbb P:=\prod_{j\in J} \mathbb P(V_j) $$ by a finite abelian $\ell$-group $G^a$, acting projectively on each factor. \end{thm}
In other words, central extensions of abelian groups capture all unramified cohomology invariants.
\section{First properties} \label{sect:first}
Throughout, $G$ is a linear algebraic group over a field $k$ and $V$ a finite-dimensional linear faithful representation of $G$ over $k$.
Let ${\rm Bir}(k)$ be the set of algebraic varieties over $k$, modulo $k$-birational equivalence, which we denote by $\sim_k$. Let $$ {\rm Rat}(k)\subset \mathcal L\mathcal Q(k)\subset \mathcal G\mathcal Q(k)\subset {\rm Unirat}(k), $$ be the classes of algebraic varieties over $k$ which are \begin{itemize} \item $k$-rational, \item $k$-birational to $V/G$ (Linear Quotients), \item $k$-birational to $X/G$, where $X$ is a $k$-rational algebraic variety and $G$ is a subgroup of the group ${\rm Bir}{\rm Aut}(X)$ of $k$-birational automorphisms of $X$ (General Quotients), \item $k$-unirational, \end{itemize} respectively. Our goal is to connect these classes, via passage to fibrations with geometrically rational generic fibers as in Definition~\ref{defn:tower}. We start with simple examples:
\begin{exam} \label{exam:abelian} Let $G$ be a finite abelian group, or an extension of a finite cyclic group by a finite abelian group.
If $k$ is of characteristic coprime to $n:=|G|$ and contains $n$-th roots of 1 then $V/G$ is rational (see \cite[Theorem 6.1]{swan}).
This can fail over nonclosed fields, even when $G$ is cyclic. Over ${\mathbb Q}$, there are counterexamples due to Swan, for $G={\mathbb Z}/47{\mathbb Z}$ \cite{swan-inv}, \cite{lenstra}, \cite{plans}. \end{exam}
\begin{exam} \label{exam:vv} Let $V$ be a faithful linear representation of a finite group $G$ over $k$, i.e., $G\hookrightarrow {\rm GL}(V)$. Then $$ V/G\sim_k{\mathbb P}(V)/G\times {\mathbb P}^1. $$ \end{exam}
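One way to see this equivalence: the projection $V\setminus\{0\}\rightarrow {\mathbb P}(V)$ is a $G$-equivariant ${\mathbb G}_m$-torsor, and the scaling action commutes with the $G$-action, so that
$$
(V\setminus\{0\})/G\longrightarrow {\mathbb P}(V)/G
$$
is, generically, a torsor under ${\mathbb G}_m$ (scalars contained in $G$ are absorbed by the ${\mathbb G}_m$-action). By Hilbert's Theorem 90, such a torsor is trivial over the generic point of the base, whence $V/G\sim_k {\mathbb P}(V)/G\times{\mathbb A}^1\sim_k {\mathbb P}(V)/G\times {\mathbb P}^1$.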
\begin{exam} \label{exam:conn} Let $G$ be a {\em connected} linear algebraic group and $V$ a faithful linear representation of $G$ over $k$. Then $Y:=V/G$ admits a rational tower. Indeed, the total space of the corresponding (rational) fibration $$ V\rightarrow V/G=Y $$ is clearly $k$-rational, and the generic fiber is geometrically rational. \end{exam}
\begin{exam} \label{exam:toric} Toric varieties: a universal torsor of a toric variety $X_{\Sigma}$ over $k$ is given by $$ \mathcal T_{\Sigma}={\mathbb A}^n\setminus Z_{\Sigma}, $$ where $Z_{\Sigma}$ is a locally closed subvariety; we have $$ X_{\Sigma}=\mathcal T_{\Sigma}/\!\!/T_{NS}, $$ where $T_{NS}$ is the N\'eron-Severi torus of $X$. Thus $X_{\Sigma}$ admits a toric rational tower.
\end{exam}
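For instance, for $X_{\Sigma}={\mathbb P}^n$ one has $Z_{\Sigma}=\{0\}$ and $T_{NS}={\mathbb G}_m$, acting by scaling:
$$
{\mathbb P}^n=\left({\mathbb A}^{n+1}\setminus\{0\}\right)/{\mathbb G}_m,
$$
a toric rational tower of length one, with $k$-rational source ${\mathbb A}^{n+1}\setminus\{0\}$ and fibers torsors under ${\mathbb G}_m$.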
\begin{lemm} Let $G$ be a finite group and $Y=V/G\in \mathcal L\mathcal Q(k)$. Assume that there exists an $X\in {\rm Rat}(k)$ with a generically free $G$-action, such that $X/G\in {\rm Rat}(k)$. Then there exists a fibration $Y_1\rightarrow Y$ with geometrically rational generic fiber and $k$-rational $Y_1$. \end{lemm}
\begin{proof} We have a fibration $$ (X\times V)/G\rightarrow V/G=Y, $$ with generic fiber geometrically isomorphic to $X$. On the other hand, we have a {\em vector bundle} $$ (X\times V)/G \rightarrow X/G $$ with fiber $V$, since the $G$-action on $V$ is linear. Thus $(X\times V)/G$ is $k$-rational. \end{proof}
\begin{coro} Let $X$ be a rational surface over an algebraically closed field $k$ and $G$ a linear algebraic group contained in ${\rm Bir}{\rm Aut}(X)$. Then $Y=V/G$ admits a rational tower. \end{coro}
\begin{rema} Let $C\rightarrow {\mathbb P}^1$ be a Galois cover with Galois group $G$. Then there exists a fibration $$ \pi: (C\times V)/G\rightarrow V/G $$ with rational total space. However, the generic fiber of $\pi$ is not necessarily geometrically rational; it is rational iff the genus $\mathsf g(C)=0$. \end{rema}
\section{Central extensions and wreath products} \label{sect:wreath}
In this section, we investigate the existence of rational towers for $\mathcal L\mathcal Q(k)$, over algebraically closed fields $k$ of characteristic zero. We prove Conjecture~\ref{conj:main} for $V/G$, for special groups $G$.
It is well-known that for finite abelian groups $A$ and their faithful linear representations $V$, the quotient $V/A$ is rational, provided the ground field $k$ contains
$n$-th roots of 1, where $n=|A|$. We turn to central extensions of abelian groups: among these, we have the free central extension $$ 1 \rightarrow Z\rightarrow F^c(A)\rightarrow A \rightarrow 1, $$ with $$ Z:=\wedge^2(A), $$ generated by commutators of lifts of elements of $A$, without nontrivial relations. The extension $F^c(A)$ is unique, modulo isoclinism.
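For example, for $A=({\mathbb Z}/\ell)^2$ one has $Z=\wedge^2(A)\simeq {\mathbb Z}/\ell$ and, modulo isoclinism,
$$
F^c(A)\,=\,\langle\, x,y,z \mid x^{\ell}=y^{\ell}=z^{\ell}=1,\ [x,y]=z,\ z \text{ central}\,\rangle,
$$
the Heisenberg group of order $\ell^3$ (cf.\ Example~\ref{exam:cyclic}).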
\begin{lemm} \label{lemm:central} Consider a central extension of an abelian group $$ 1 \rightarrow Z\rightarrow G^c\rightarrow A \rightarrow 1. $$ Then there exists an exact sequence \begin{equation} \label{eqn:Z} 1 \rightarrow \tilde{Z}\rightarrow \tilde{F}^c \rightarrow G^c\rightarrow 1, \end{equation} where $\tilde{F}^c$ is isoclinic to $F^c(A)$, and $\tilde{Z}$ is abelian. \end{lemm}
\begin{proof} It suffices to add to $G^c$ additional elements which kill the image of stable cohomology $H^2_{st}(A,{\mathbb Z}/\ell)$ in $H^2(\tilde{F}^c,{\mathbb Z}/\ell)$ (see \cite[Section 4]{BPT} and \cite[Section 2]{BT-univer} for definitions and properties). \end{proof}
\begin{coro} There exist a faithful representation $W$ of $F^c=F^c(A)$ and a fibration $$ W/F^c\rightarrow V/G^c $$ whose generic fiber is birational to an algebraic torus over the function field of the base. \end{coro}
\begin{proof} Let $W'$ be a faithful representation of $\tilde{F}^c$ and put $W:=W'\oplus V$. We have a natural surjective homomorphism $W\rightarrow V$ giving rise to a fibration $$ W/\tilde{F}^c\rightarrow V/G^c. $$ Its generic fiber is geometrically isomorphic to $W'/\tilde{Z}$, with $\tilde{Z}$ defined in \eqref{eqn:Z}. It suffices to recall that linear quotients of abelian groups are rational, when $k$ contains roots of unity. \end{proof}
\begin{conj} \label{conj:fc} Let $A$ be a finite abelian group, $F^c(A)$ its free central extension, and $V$ a faithful linear representation of $F^c$, over an algebraically closed field $k$. Then $V/F^c$ is stably rational. \end{conj}
Note that stable rationality does not change within a fixed isoclinism class, over fields containing roots of unity.
\begin{exam} \label{exam:cyclic} Conjecture~\ref{conj:fc} holds when $A$ is cyclic. When $A\simeq {\mathbb Z}/\ell\oplus {\mathbb Z}/\ell$, $F^c(A)$ is the Heisenberg group. The problem reduces to a {\em monomial} action, and Conjecture \ref{conj:fc} holds as well. Next, consider: $$ A:= {\mathbb Z}/\ell\oplus {\mathbb Z}/\ell\oplus {\mathbb Z}/\ell, \quad \text{ for } \ell=2. $$ Modulo isoclinism, we can represent $$ F^c=F^c(A)\subset Q_{8,1}\times Q_{8,2}\times Q_{8,3}, $$ where each $Q_{8,i}$ is the group of quaternions over ${\mathbb Z}/2$: Choose a ${\mathbb Z}/2$-basis $\{e_1,e_2,e_3\}$ of $A$ and a basis $\pi_1,\pi_2,\pi_3$ of surjective homomorphisms $$ \pi_i:A\rightarrow {\mathbb Z}/2\oplus {\mathbb Z}/2, $$ each trivial on one of the generators $e_1,e_2,$ or $e_3$. This induces diagrams
\
\centerline{ \xymatrix{ 1 \ar[r] & ({\mathbb Z}/2)^3 \ar[r] \ar[d] & F^c\ar[r] \ar[d] & ({\mathbb Z}/2)^3 \ar[r] \ar[d] & 1\\
1 \ar[r] & ({\mathbb Z}/2) \ar[r] & Q_{8,i} \ar[r] & ({\mathbb Z}/2)^2 \ar[r] & 1 } }
\
\noindent
Let $V_i$ be the standard 2-dimensional representation of $Q_{8,i}$.
Then
$$
W:=V_1\oplus V_2\oplus V_3
$$
is a faithful representation of $F^c$. We have a $k^\times$-action on each component that commutes with the action of $F^c$.
It follows that
$$
W/F^c\sim_k \left( ({\mathbb P}^1\times {\mathbb P}^1\times {\mathbb P}^1)/ G \right) \times (k^\times)^3, $$ where $G=({\mathbb Z}/2)^3$ acts monomially on $k(x_1,x_2,x_3)$, as follows: \begin{equation} \label{eqn:sigma} \sigma(x_i)=c_{i,\sigma} x_i^{a_{i,\sigma}}, \quad c_{i,\sigma},\, a_{i,\sigma} \in \{ \pm 1\}, \quad \forall \sigma\in G. \end{equation} By \cite[Theorem 10]{yamasaki}, Type (3,3,3,1) in the notation therein, the field of invariants $k(x_1,x_2,x_3)^G$ is rational over $k$, when $k$ is algebraically closed. It follows that $W/F^c$ is stably rational. \end{exam}
\begin{lemm} \cite[Lemma 2.4]{B-Petrov} \label{lemm:wreath} Let $G,H$ be finite groups, acting faithfully on $X$ and $Y$, respectively. Assume that $X/G,Y/H\in {\rm Rat}(k)$.
Let $K:=H\wr G$ be the wreath product, with its natural action on $W:=X^{|H|}\times Y$. Then $W/K$ is rational. \end{lemm}
\begin{proof}
Observe that the quotient $X^{|H|}/G^{|H|}$ is a product of rational
varieties $(X/G)^{|H|}$. The group $H$ acts on $(X/G)^{|H|}$ by permutations, and this action is
equivalent to the linear (free permutation) action of $H$ on $V^{\oplus |H|}$, where
$V\sim_k X/G$.
Thus the quotient $W/K$ is $k$-birational
to a vector bundle over $Y/H$,
and hence rational. \end{proof}
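Schematically, the two identifications in the proof combine into
$$
W/K \;\sim_k\; \left((X/G)^{|H|}\times Y\right)/H \;\sim_k\; \left(V^{\oplus |H|}\times Y\right)/H \;\longrightarrow\; Y/H,
$$
the last map being, birationally, a vector bundle with fiber $V^{\oplus |H|}$ over the rational base $Y/H$.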
\begin{defn} \label{defn:special-nil} A finite solvable group $G$ is called {\em special} if there exists a filtration $$ G=G_0\supsetneq G_{1} \supsetneq \cdots \supsetneq G_{r}=1 $$ such that, for all $i=0, \ldots, r-1$, one has \begin{itemize} \item $G_i$ is a normal subgroup in $G$, \item the kernel of the projection $$ p_i:G/G_{i+1}\to G/G_{i} $$ is abelian and \item there exists a section $s_i: G/G_{i}\to G/G_{i+1}$. \end{itemize} \end{defn}
\begin{prop} \label{prop:nil} Let $G$ be a special solvable group and $V$ a faithful linear representation of $G$. Then there exists a toric rational tower $$ X_n\rightarrow \cdots \rightarrow X_0:=V/G. $$ \end{prop}
\begin{proof} By induction. We assume that the claim holds for $G_{(i)}:=G/G_{i}$. The group $G_{(i+1)}$ is uniquely defined by the action of $G_{(i)}$ on the abelian group $A_i:=G_i/G_{i+1}$. Then there exists a surjection \begin{equation} \label{eqn:mod}
\beta_i: B_i:={\mathbb Z}/m_i [G_{(i)}]^{r_i}\rightarrow A_i, \end{equation} for some $r_i$ and $m_i$. Its kernel ${\rm Ker}(\beta_i)$ is abelian.
This induces a surjection from the wreath product $G_{(i)} \wr ({\mathbb Z}/m_i)^{r_i}$ onto $G_{(i+1)}$. Let $W_i$ be a faithful representation of $B_i$. Then there is a $G$-equivariant projection $$
W_i^{|G_{(i)}|}\rightarrow W, $$ onto some linear representation $W$ of $A_i$, corresponding to the surjection of modules \eqref{eqn:mod}. Let $V_i$ be a faithful representation of $G_{(i)}$ and consider the surjection $$
V_i \oplus W_i^{|G_{(i)}|} \rightarrow V_i\oplus W. $$ Its kernel ${\rm Ker}_i$ is a linear space with a faithful action of the abelian group ${\rm Ker}(\beta_i)$. The quotient ${\rm Ker}_i/{\rm Ker}(\beta_i)$ is toric; it is the fiber of the projection \begin{equation} \label{eqn:tr}
(V_i \oplus W_i^{|G_{(i)}|})/\left(G_{(i)} \wr ({\mathbb Z}/m_i)^{r_i}\right) \rightarrow (V_i\oplus W)/G_{(i+1)}. \end{equation} By the induction hypothesis and Lemma~\ref{lemm:wreath}, the left side of \eqref{eqn:tr} admits a tower of toric fibrations with rational source. This implies the claim for $i+1$. \end{proof}
\begin{rema} Our argument is similar to \cite[Theorem 3.3]{saltman-generic} that was focused on the Inverse Galois Problem. \end{rema}
\begin{coro} \label{coro:cr} Assume that $G$ is a special solvable group. Then there exists a rational $G$-variety $X$ with $X/G\in {\rm Rat}(k)$. \end{coro}
\begin{proof} By induction, as in the proof of Proposition~\ref{prop:nil}. It suffices to observe that $X_{i+1}$ is a quotient of a vector bundle over $X_i$ by an abelian linear action on the generic fiber. The corresponding quotient is rational, when $X_i$ is rational. \end{proof}
Thus we obtain many groups with nontrivial embeddings into Cremona groups (equivalent to linear actions) and with rational quotients.
\begin{coro} \label{coro:prop16} Let $V$ be a faithful representation of an $\ell$-group $G^c$, a finite central extension of an abelian $\ell$-group $A$. Then $V/G^c$ admits a tower of toric fibrations with rational source. \end{coro}
\begin{proof} We apply induction on the $\ell$-rank of $A$. The claim is trivial when the rank equals 1. Let $A':=A\oplus {\mathbb Z}/\ell^m$. Let $F^c:=F^c(A')$ be the free central extension of $A'$. We have a surjection $$ F^c(A')\twoheadrightarrow F^a(A), $$ with a canonical section and abelian kernel. Now we apply Proposition~\ref{prop:nil}. \end{proof}
As mentioned in Section~\ref{sect:background}, Noether's problem, i.e., the rationality of $V/G$, has a negative solution. However, one can consider more general, nonlinear, generically free $G$-actions on rational varieties. A version of Conjecture~\ref{conj:main}, and of Noether's problem, is the following
\begin{conj} \label{conj:main2} Let $G$ be a finite group. Then there exists a $k$-rational $G$-variety $X$ with generically free $G$-action such that $X/G$ is $k$-rational. \end{conj}
\section{Group theory} \label{sect:group}
In this section we consider finite simple groups $G$. Our main observation is that the Sylow subgroups ${\rm Syl}_{\ell}(G)$ of most simple groups $G$ satisfy the assumptions of Proposition~\ref{prop:nil}; the corresponding $V/{\rm Syl}_{\ell}(G)$ admit a rational tower. Below we sketch a proof in a special case.
\
Let $G={\rm PGL}_{n}({\mathbb F}_q)$, with $q=p^m$, for $m\in{\mathbb N}$.
\begin{enumerate}
\item
Consider ${\rm Syl}_p(G)$. It is conjugate to the subgroup $U_n\subset G$ of upper-triangular unipotent matrices. There is a natural projection
$$
U_{r}\to U_{r-1}, \quad \forall r,
$$
with a section $s_{r-1}$ and abelian kernel, as in the assumptions of Proposition~\ref{prop:nil},
which induces the corresponding structure on ${\rm Syl}_p(G)$.
\item Consider ${\rm Syl}_{\ell}(G)$, with $\ell\neq p$. Every such subgroup
is a subgroup of the normalizer $N(T)$ of a (possibly nonsplit) maximal torus $T\subset G$. We have an extension
$$
1\rightarrow T\rightarrow N(T)\rightarrow W_T(G)\rightarrow 1.
$$ Note that ${\rm Syl}_{\ell}(G)$ admits a projection onto the corresponding ${\rm Syl}_{\ell}(W_T(G))$ and the induced extension, by an abelian $\ell$-group, splits (for $\ell\neq 2$). Since Proposition~\ref{prop:nil} holds for $$ {\rm Syl}_{\ell}(W_T(G))={\rm Syl}_{\ell}(\mathfrak S_{n-1}) $$ (it is an iterated wreath product extension by cyclic $\ell$-groups), it also holds for ${\rm Syl}_{\ell}(G)$. \end{enumerate} Similar arguments apply to other finite groups of Lie type. Additional considerations are needed for some sporadic simple groups, and small primes.
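To make the first case explicit for $n=3$: the natural projection $U_3\rightarrow U_2$, forgetting the last row and column, yields
$$
1\longrightarrow
\left\{\begin{pmatrix} 1&0&c\\ 0&1&b\\ 0&0&1\end{pmatrix}: b,c\in{\mathbb F}_q\right\}
\longrightarrow U_3 \longrightarrow U_2\longrightarrow 1,
$$
with abelian kernel isomorphic to $({\mathbb F}_q)^2$ and the evident section $U_2\rightarrow U_3$ given by setting $b=c=0$.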
\section{Actions in small dimensions} \label{sect:special}
In this section, we survey related results on rationality of quotients $X/G$, for low-dimensional rational varieties $X$ over (possibly nonclosed) fields $k$ of characteristic zero.
Let $V$ be a faithful linear representation of a finite group $G$ over $k$. When $G$ is abelian, $V/G$ is rational; however, ${\mathbb P}(V)/G$ need not be $k$-rational, even when $\dim(V)=4$ \cite[Example 2.3]{ahmad}.
When $\dim(V)\le 3$, $V/G$ is rational over algebraically closed $k$: indeed, by Example~\ref{exam:vv}, it suffices to consider the unirational surface ${\mathbb P}(V)/G$, which is rational by Castelnuovo's theorem. However, the situation is different over nonclosed fields.
\
{\em Dimension 2.} There is a large literature on rational $G$-surfaces $X$ over nonclosed fields, i.e., actions of finite groups $G\subset {\rm Aut}(X)$ (see \cite{manin}). We have $X/G\sim_k S/G$, where $S$ is a $G$-minimal del Pezzo surface or a conic bundle. If $S$ is a conic bundle then $G\subset {\rm PGL}_2(\bar{k})$ (see \cite[Theorem 1.3]{trepalin}). We start with several rationality results: let $X$ be a smooth del Pezzo surface with $X(k)\neq \emptyset$ and $G\subset {\rm Aut}(X)$. \begin{itemize}
\item If $\deg(X)=K_X^2\ge 5$ then $X/G \in {\rm Rat}(k)$ \cite[Corollary 1.4]{trepalin-0}, \cite[Theorem 1.1]{trepalin-1}.
\item If $\deg(X)=3$ and $|G|\neq 3$ then $X/G\in {\rm Rat}(k)$ \cite[Theorem 1.3]{trepalin-1}.
\end{itemize}
\begin{exam}\cite[Section 5]{trepalin-1} Let $X\subset {\mathbb P}^3$ be a smooth cubic surface given by $$ f_3(x,y) +zt(ux+vy)+z^3+wt^3=0, $$ where $f_3$ is a form of degree 3, $(x:y:z:t)$ are coordinates in ${\mathbb P}^3$, and $u,v,w$ are parameters. Assume that the Galois group of $f_3$ is ${\mathbb Z}/2$. Then $X$ admits an action of $G={\mathbb Z}/3$, and $X/G$ is $k$-birational to a minimal degree 4 del Pezzo surface $S$, nonrational over $k$, admitting a conic bundle $S\rightarrow {\mathbb P}^1$ over $k$. By \cite{CT-Sko}, $X/G$ admits a rational tower: a universal torsor is $k$-rational. \end{exam}
On the other hand, over nonclosed $k$, the set of $k$-birational types of quotients of conic bundles over ${\mathbb P}^1$ may be infinite \cite[Theorem 1.8]{trepalin}. E.g., this holds when $G=\mathfrak A_5$ and \begin{itemize} \item not every element of $k$ is a square, \item $\sqrt{-1}, \sqrt{5}\in k$. \end{itemize}
\begin{prob} Establish the existence of rational towers for del Pezzo surfaces over nonclosed fields. \end{prob}
\
{\em Dimension 3.} Over algebraically closed $k$, stable rationality of quotients ${\mathbb P}^3/G$ is unknown for \begin{itemize} \item central ${\mathbb Z}/2$-extensions of the following groups: $$ \tilde{\mathfrak S}_5, \tilde{\mathfrak A}_6, \tilde{\mathfrak S}_6, \tilde{\mathfrak A}_7, {\rm SL}_2(\mathbb F_7), $$ \item extensions of $\mathfrak A_5$, respectively, $\mathfrak A_6$ by a group $N$ of order 64. \end{itemize} For all other subgroups $G\subset {\rm GL}_4(k)$, ${\mathbb P}^3/G$ is stably rational \cite{prokhorov}. Rationality over nonclosed fields has not been addressed.
\end{document} |
\begin{document}
\title{On Ricci negative solvmanifolds and their nilradicals}
\begin{abstract} In the homogeneous case, the only curvature behavior which is still far from being understood is Ricci negative. In this paper, we study which nilpotent Lie algebras admit a Ricci negative solvable extension. Several unexpected behaviors are found. On the other hand, given a nilpotent Lie algebra, we consider the space of all derivations such that the corresponding solvable extension has a metric with negative Ricci curvature. Using the nice convexity properties of the moment map for the variety of nilpotent Lie algebras, we obtain a useful characterization of such derivations and some applications. \end{abstract}
\tableofcontents
\section{Introduction}\label{intro}
There are no topological obstructions on a differentiable manifold $M$ to the existence of a complete Riemannian metric with negative Ricci curvature (see \cite{Lhk}). However, in the presence of a Lie group $G$ acting transitively on $M$, it is natural to expect a nice interplay between any prescribed curvature behavior of $G$-invariant metrics and not only the topology of $M$ but also the algebraic structure of $G$.
Back in 1974, Heintze \cite{Hnt} (see also \cite{AznWls}) proved that any homogeneous Riemannian manifold with $\Sec<0$ is isometric to a metric on a simply connected solvable Lie group (any metric on a Lie group is assumed to be left-invariant from now on) satisfying the following strong structural property: the nilradical $\ngo$ of its Lie algebra $\sg$ has codimension one and there is an element $Y\in\sg$ such that the derivation $\ad{Y}|_{\ngo}$ of $\ngo$ is {\it positive}, in the sense that its real semisimple part $\ad{Y}|_{\ngo}^\RR$, which is also in $\Der(\ngo)$, has its eigenvalues (i.e.\ the real parts of the eigenvalues of $\ad{Y}|_{\ngo}$) all positive. Conversely, any solvable Lie group of this kind admits a metric with $\Sec<0$. Surprisingly (or not), the obvious question of which nilpotent Lie algebras admit a positive derivation is still wide open. We will show in this paper that such a problem seems to be hopeless.
The stronger pinching condition $-4\leq\Sec\leq-1$ was studied by Eberlein-Heber \cite{EbrHbr}; for instance, they showed that one additionally needs $\ngo$ $2$-step nilpotent (or abelian) and $\Spec\left(\ad{Y}|_{\ngo}^\RR\right)\subset [1,2]$. On the other hand, concerning the weaker condition $\Sec\leq 0$, it was proved by Azencott-Wilson \cite{AznWls} (see also \cite{Wlf,Alk}) that the only homogeneous examples (up to isometry) are still simply connected solvable Lie groups. Here the orthogonal complement $\ag$ of the nilradical $\ngo$ in $\sg$ can be of dimension $>1$, though the conditions $[\ag,\ag]=0$ and $\ad{Y}|_{\ngo}^\RR\geq 0$ must hold, among other more technical conditions.
In the homogeneous case, the only curvature behavior which is still not understood is $\Ricci<0$ (see e.g.\ \cite[Introduction]{NklNkn}). In the 1980s, Dotti-Leite-Miatello \cite{Dtt,DttLt, DttLtMtl} proved that the only unimodular Lie groups that can admit a $\Ricci<0$ metric are the non-compact semisimple ones and showed that most non-compact simple Lie groups indeed have one, with some low dimensional exceptions, including $\Sl_2(\CC)$, $\Spe(2,\RR)$ and $G_2$ (non-compact). The existence of $\Ricci<0$ metrics on such groups is still open; the only solved case is $\Sl_2(\RR)$, where the non-existence easily follows (see e.g.\ \cite{Mln}). It was proved by Jablonski-Petersen \cite{JblPtr} that a semisimple Lie group admitting a metric with $\Ricci<0$ cannot have compact factors, i.e.\ it is of non-compact type. Recall that topologically, any (connected) Lie group is a product $K\times\RR^m$, where $K$ is its maximal compact subgroup.
More recently, in 2016, unexpected examples of Lie groups admitting $\Ricci<0$ metrics which are neither semisimple nor solvable were constructed by Will \cite{Wll1, Wll2}. The Levi factors of some of these examples are compact, including $\SU(n)$ ($n\geq 2$) and $\SO(n)$ ($n\geq 3$), and therefore four of the nine topologies missed by the semisimple examples in \cite{DttLtMtl} are attained: $K\times\RR^m$ for $K$ equal to $\SU(2)$, $\SU(3)$, $\SO(5)$ or $\SO(7)$. The cases in which $K$ is $S^1$, $\Spe(3)$, $\Spe(4)$, $\Spe(5)$ or $G_2$ remain open. It is worth pointing out that the homogeneous space $\SO^+(n,2)/\SO(n)$ ($n\geq 2$), which is homeomorphic to $S^1\times\RR^k$, does admit an invariant metric with $\Ricci<0$ (see \cite[Example 1]{Nkn}). On the other hand, a general construction in \cite{Wll2} shows that any non-compact semisimple Lie group admitting a $\Ricci<0$ metric can be the Levi factor of a non-semisimple Lie group with a $\Ricci<0$ metric. Non-abelian nilradicals are possible in most of these constructions. All this shows that an algebraic characterization of Lie groups having a $\Ricci<0$ metric is out of reach at the moment.
The study of the solvable case was also recently initiated by Nikolayevsky-Nikonorov \cite{NklNkn} in 2015. They obtained the following sufficient condition on a solvable Lie group $S$ to admit a metric with $\Ricci<0$: \begin{equation}\label{NNsuf}
\mbox{There exists $Y\in\sg$ such that $\ad{Y}|_\ngo^\RR>0$, } \end{equation} where $\ngo$ is the nilradical of $\sg$. Note that the nilradicals involved in these examples are the same as those needed for $\Sec<0$, although the condition $[\ag,\ag]=0$ is not mandatory here as in the case of $\Sec\leq 0$. Also a necessary condition was found in \cite{NklNkn}: \begin{equation}\label{NNnec}
\mbox{There exists $Y\in\sg$ such that $\tr{\ad{Y}}>0$ and $\ad{Y}|_{\zg(\ngo)}^\RR>0$, } \end{equation} where $\zg(\ngo)$ is the center of $\ngo$.
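As a simple illustration of \eqref{NNsuf}, let $\ngo$ be the $3$-dimensional Heisenberg algebra with bracket $[e_1,e_2]=e_3$. Relative to this basis,
$$
D:=\Diag(1,1,2)\in\Der(\ngo), \qquad D>0,
$$
since the eigenvalues satisfy $1+1=2$. Hence the solvable extension $\sg=\RR Y\oplus\ngo$ with $\ad{Y}|_{\ngo}=D$ satisfies \eqref{NNsuf} and therefore admits a metric with $\Ricci<0$.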
We note that all the structural conditions on a solvable Lie group related to the existence of negative sectional or Ricci curvature metrics have the same flavor, motivating the following fundamental question: \begin{quote} Which nilpotent Lie algebras can be the nilradical of some solvable Lie algebra admitting a $\Ricci<0$ metric? \end{quote}
Such a Lie algebra will be called a {\it Ricci negative nilradical} (RN-nilradical for short). Since the existence of a positive derivation is sufficient, even for $\Sec<0$, any nilpotent Lie algebra which is $2$-step or has dimension $\leq 6$ is an RN-nilradical.
In Section \ref{RNnil}, we first show that for a nilpotent Lie algebra, the existence of a derivation of positive trace is a condition that is stronger than admitting a non-trivial diagonalizable derivation, and that this is in turn stronger than the property of having a non-nilpotent derivation. Secondly, we use condition \eqref{NNnec} (the only obstruction known) to exhibit many explicit examples of nilpotent Lie algebras which are not RN-nilradicals. They all have a derivation of positive trace and the following characteristics are obtained (for the first three examples any diagonalizable derivation has a zero eigenvalue on the center): \begin{itemize}
\item $\dim{\ngo}=8$.
\item $\ngo$ is $3$-step nilpotent.
\item A continuous family of pairwise non-isomorphic algebras of dimension $13$.
\item $\ngo$ has a non-singular derivation but any diagonalizable derivation has a negative eigenvalue on the center. \end{itemize}
On the contrary, we show that the fact that any diagonalizable derivation of $\ngo$ has a negative eigenvalue is not an obstruction to $\ngo$ being an RN-nilradical. All this suggests that, as in the study of Einstein nilradicals (see e.g.\ \cite{cruzchica}), the search for new sufficient or necessary general conditions is a challenging problem.
In the light of the results obtained in \cite{NklNkn,Nkl} in the general case as well as in the particular cases of Heisenberg and filiform Lie algebras as nilradicals, a complete characterization of solvable Lie algebras admitting $\Ricci<0$ metrics is expected to take the following form: \begin{quote}
There exists $Y\in\sg$ such that $\ad{Y}|_{\ngo}^\RR$ belongs (up to conjugation by automorphisms) to a certain open and convex cone in the maximal torus of derivations of the nilradical $\ngo$ of $\sg$.
We study this problem in Section \ref{RNder}. At the core of this question one has the following situation. Given a nilpotent Lie algebra $\ngo$, each $D\in\Der(\ngo)$ defines a solvable Lie algebra $\sg_D=\RR f\oplus\ngo$ given as the semi-direct sum such that $\ad{f}|_{\ngo}=D$. We call $D$ {\it Ricci negative} if $\tr{D}>0$ and $\sg_D$ admits a $\Ricci<0$ metric such that $D^t=D$ and $f\perp\ngo$. Note that any $D>0$ is Ricci negative (see \eqref{NNsuf}) and $D|_{\zg(\ngo)}>0$ is a necessary condition (see \eqref{NNnec}). The following natural questions arise: \begin{quote} Given a basis $\{ e_i\}$ of $\ngo$, what kind of set is the cone of Ricci negative diagonal derivations? Is it open in the space of diagonal derivations? Is it convex? \end{quote}
We prove that a diagonal derivation $D$ is Ricci negative if and only if $D$ belongs to a certain open and convex cone depending on $D$ (see Corollary \ref{main}). Our main tool is the moment map for the $\Gl(\ngo)$-action on the variety of nilpotent Lie algebras, which is known from real geometric invariant theory to satisfy nice convexity properties (see \cite{HnzSch}). In the case when the basis $\{ e_i\}$ is nice (see Definition \ref{nice-def}), a particularly neat characterization of Ricci negative derivations is given. As an application, we obtain that any nilpotent Lie algebra of dimension $7$ having a non-nilpotent derivation is an RN-nilradical (see Theorem \ref{dim7}). More applications are developed in the forthcoming paper \cite{RNder}.
\section{Preliminaries}\label{preli}
\subsection{The representation $\lam$}\label{V-sec} We consider the space of all skew-symmetric algebras of dimension $n$, which is parameterized by the vector space $$ V:=\lam. $$ There is a natural linear action of $\G$ on $V$ given by $g\cdot\mu:=g\mu(g^{-1}\cdot,g^{-1}\cdot)$, for all $g\in\G$, $\mu\in V$, whose derivative defines the $\g$-representation on $V$, $$ E\cdot\mu=E\mu(\cdot,\cdot)-\mu(E\cdot,\cdot)-\mu(\cdot,E\cdot),\qquad E\in\g,\quad\mu\in V. $$ We note that $E\cdot\mu=0$ if and only if $E\in\Der(\mu)$, the Lie algebra of derivations of the algebra $\mu$. Let $\tg^n$ denote the set of all diagonal $n\times n$ matrices. If $\{ e^1,...,e^n\}$ is the basis of $(\RR^n)^*$ dual to the canonical basis $\{ e_1,...,e_n\}$, then $$ \{ \mu_{ijk}:=(e^i\wedge e^j)\otimes e_k : 1\leq i<j\leq n, \; 1\leq k\leq n\} $$ is a basis of $V$ of weight vectors for the above representation. Note that $\mu_{ijk}$ is actually the bilinear form on $\RR^n$ defined by $\mu_{ijk}(e_i,e_j)=-\mu_{ijk}(e_j,e_i)=e_k$ and zero otherwise. The corresponding weights are given by $$ F_{ij}^k:=E_{kk}-E_{ii}-E_{jj}\in\tg^n, \qquad i<j, $$ where $E_{rs}$ denotes as usual the matrix whose only nonzero coefficient is $1$ at entry $rs$. The structural constants $c(\mu)_{ij}^k$ of an algebra $\mu\in V$ are then given by $$ \mu(e_i,e_j)=\sum_{k}c(\mu)_{ij}^k\, e_k, \qquad \mu=\sum_{i<j,\,k} c(\mu)_{ij}^k\, \mu_{ijk}. $$ We consider the Weyl chamber of $\g$ defined by $$ \tg^n_+:=\left\{\Diag(a_1,\dots,a_n):a_1\leq...\leq a_n\right\}, $$ and the open cone $$ \tg^n_{>0}:=\left\{\Diag(a_1,\dots,a_n):a_i>0\right\}. $$ The canonical inner product $\ip$ on $\RR^n$ determines $\Or(n)$-invariant inner products on $V$ and $\g$ making $\{ \mu_{ijk}\}$ and $\{ E_{ij}\}$ orthonormal bases, respectively. All these inner products will also be denoted by $\ip$.
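For instance, a direct computation shows that for any $E=\Diag(a_1,\dots,a_n)\in\tg^n$, $$ E\cdot\mu_{ijk} = (a_k-a_i-a_j)\,\mu_{ijk} = \la E,F_{ij}^k\ra\,\mu_{ijk}, $$ since $E\mu_{ijk}(e_i,e_j)=a_ke_k$ and $\mu_{ijk}(Ee_i,e_j)+\mu_{ijk}(e_i,Ee_j)=(a_i+a_j)e_k$; this confirms that $F_{ij}^k$ is indeed the weight of the weight vector $\mu_{ijk}$.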
\subsection{Moment map}\label{mm-sec} The moment map (or $\G$-gradient map) from geometric invariant theory (see e.g.\ \cite{HnzSchStt,HnzSch,BhmLfn} for further information) for the above representation is the $\Or(n)$-equivariant map $$ m:V\smallsetminus\{ 0\}\longrightarrow\sym(n), $$ defined implicitly by \begin{equation}\label{defmm}
\la m(\mu),E\ra=\tfrac{1}{|\mu|^2}\left\langle E\cdot\mu,\mu\right\rangle, \qquad \mu\in V\smallsetminus\{ 0\}, \quad E\in\sym(n). \end{equation} We are using $\g=\sog(n)\oplus\sym(n)$ as a Cartan decomposition, where $\sog(n)$ and $\sym(n)$ denote the subspaces of skew-symmetric and symmetric matrices, respectively. Note that $m$ is also defined on the projective space $\PP(V)$.
\subsection{Convex subsets}\label{convex-sec} Let $W$ be a real vector space endowed with an inner product. A compact and convex subset $E$ of $W$ is called a {\it convex body} and a subset $F\subset E$ is said to be a {\it face} of $E$ if it is convex and for each pair of points $x,y\in E$ such that the {\it relative interior} (i.e.\ the interior as a subset of the generated affine subspace) of the segment $[x,y]$ meets $F$ one has that $[x,y]\subset F$. An {\it extreme point} of $E$ is a point $x\in E$ such that $\{ x\}$ is a face, and a face $F$ of $E$ is called {\it exposed} if there exists $\alpha\in W$ such that $$ F=\left\{ x\in E:\la x,\alpha\ra=\max\{\la y,\alpha\ra:y\in E\}\right\}. $$ Given a subset $X\subset W$, its {\it convex hull} is defined by $$ \CH(X):=\left\{ a_1x_1+\cdots+a_kx_k:x_i\in X, \; a_i\geq 0, \; \sum a_i=1, \; k\in\NN\right\}. $$ Any convex body is the convex hull of its extreme points and also the disjoint union of the relative interiors of its faces. The convex hull of two disjoint disks of the same radius in $\RR^2$ is an example of a convex body with four non-exposed extreme points.
If $X$ is a finite subset, say $X=\{ x_1,\dots,x_n\}$, $\CH(X)$ is called a {\it convex polytope}. In this case, all the faces of $\CH(X)$ are exposed and it is easy to see that its relative interior is given by $$ \CH^\circ(x_1,\dots,x_n):=\left\{ a_1x_1+\cdots+a_nx_n:a_i>0, \; \sum a_i=1\right\}. $$ A subset $C\subset W$ is called a {\it cone} if $rx\in C$ for any $r>0$, $x\in C$.
\subsection{Convexity properties of the moment map}\label{git} In \cite{HnzSch, BllGhgHnz}, many nice and useful results on the convexity of the image of the moment map have been obtained. In order to apply these results to our $\Gl_n(\RR)$-representation $V=\lam$ (see Section \ref{preli}), we use the notation in such articles and set $$ U:=\U(n), \quad U^\CC=\Gl_n(\CC), \quad Z:=\PP(\Lambda^2(\CC^n)^*\otimes\CC^n). $$ Thus $\PP(V)$ is a $\Gl_n(\RR)$-invariant closed subset of $Z$. For any compatible subgroup $G\subset\Gl_n(\RR)$, one has that $K:=G\cap\Or(n)$ is a maximal compact subgroup of $G$, $\ggo=\kg\oplus\pg$ is a Cartan decomposition and $G=K\exp{\pg}$, where $\pg:=\ggo\cap\sym(n)$ and $\ggo$, $\kg$ denote the Lie algebras of $G$, $K$, respectively. Consider $\ag\subset\pg$, a maximal abelian subalgebra. Thus the corresponding torus $A=\exp{\ag}\subset G$ is also a compatible subgroup.
The moment map $m:V\smallsetminus\{ 0\}\longrightarrow\pg$ for the $G$-action is given by composing the moment map \eqref{defmm} for the $\Gl_n(\RR)$-action with the orthogonal projection from $\sym(n)$ to $\pg$, and the one for the $A$-action, $m_\ag:V\smallsetminus\{ 0\}\longrightarrow\ag$, by projecting on $\ag$. Let $\ag_+\subset\ag$ denote a Weyl chamber of $G$.
We now consider closed $G$-invariant subsets of $\PP(V)$. A subset $X\subset\PP(V)$ is called {\it irreducible} if it is a real semi-algebraic subset whose real algebraic Zariski closure is irreducible (see \cite{HnzSch}). Note that the projection on $\PP(V)$ of any orbit closure $\overline{G\cdot\mu}$ is an irreducible subset if $G$ is connected.
\begin{theorem}\label{conv1}\cite{HnzSch} Let $X$ be a closed $G$-invariant subset of $\PP(V)$. \begin{itemize} \item[(i)] $m(X)\cap\ag$ is the union of finitely many convex polytopes;
\item[(ii)] $m(X)\cap\ag_+$ is a convex polytope if $X$ is irreducible.
\item[(iii)] $m_\ag(X)$ is a convex polytope if $X$ is irreducible. \end{itemize} \end{theorem}
In particular, $m(\overline{G\cdot\mu})\cap\ag_+$ and $m_\ag(\overline{A\cdot\mu})$ are both convex polytopes for any $\mu\in V$. Note that part (iii) is just part (ii) applied to $G=A$ and that if $W:=N_K(\ag)/C_K(\ag)$ is the corresponding Weyl group, then for $X$ irreducible one has that \begin{equation}\label{unionW} m(X)\cap\ag = \bigcup\limits_{k\in W} k\cdot(m(X)\cap\ag_+). \end{equation}
Let $X^A$ denote the set of $A$ fixed points in $X$.
\begin{theorem}\label{conv2}\cite{BllGhgHnz} Let $X$ be a closed $G$-invariant subset of $\PP(V)$. \begin{itemize} \item[(i)] $m_\ag(X^A)$ is a finite set whose convex hull is $\CH(m_\ag(X))$; in particular, $\CH(m_\ag(X))$ is a convex polytope (see \cite[Proposition 3.1]{BllGhgHnz}).
\item[(ii)] $\CH(m(X))\cap\ag=\CH(m_\ag(X))$ and $\CH(m(X)) = K\cdot \CH(m_\ag(X))$ (see \cite[Lemma 1.1]{BllGhgHnz}).
\item[(iii)] All faces of $\CH(m(X))$ are exposed (see \cite[Theorem 0.3]{BllGhgHnz}). \end{itemize} \end{theorem}
\section{Ricci negative derivations}\label{RNder}
Given a nilpotent Lie algebra $\ngo$, each basis $\{ e_1,\dots, e_n\}$ of $\ngo$ identifies the vector space $\ngo$ with $\RR^n$, bringing the whole setting described in Section \ref{preli}, which will be used from now on without any further mention. When an inner product is given on $\ngo$, one can use any orthonormal basis. In this way, the Lie bracket $\lb$ of $\ngo$ becomes a vector in $V$, the orbit $\G\cdot\lb$ consists of those Lie brackets on $\ngo$ which are isomorphic to $\lb$ and the set $\nca$ of all nilpotent Lie brackets is a $\G$-invariant algebraic subset of $V$. Note that each $\mu\in\nca$ determines a Riemannian manifold; namely, the Lie group $(N_{\mu},\ip)$ endowed with the left-invariant metric defined by $\ip$. A remarkable fact is that the moment map from Section \ref{mm-sec} encodes geometric information; indeed \begin{equation}\label{mmR}
m(\mu)=\tfrac{4}{|\mu|^2}\Ricci_{\mu}, \end{equation} where $\Ricci_\mu$ is the Ricci operator of $(N_{\mu},\ip)$ (see e.g.\ \cite{alek}).
Each $D\in\Der(\ngo)$ defines a solvable Lie algebra $$ \sg=\RR f\oplus\ngo, $$
given as the semi-direct sum such that $\ad{f}|_{\ngo}=D$. If $\ip$ is an inner product on $\sg$ such that $|f|=1$ and $f\perp\ngo$, then it is easy to see using e.g. \cite[(11)]{alek} that the Ricci operator of $(\sg,\ip)$ is given by \begin{equation}\label{Ric-half}
\Ricci=\left[\begin{array}{c|c} -\tr{S(D)^2} & \ast \\\hline & \\ \ast & \Ricci_\ngo+\unm[D, D^t]-\tr(D)S(D) \\ & \end{array}\right], \end{equation}
where $\Ricci_\ngo=\tfrac{|\lb|^2}{4}m(\lb)$ is the Ricci operator of $(\ngo,\ip)$, $S(D):=\unm(D+D^t)$ and $$ \la\Ricci f,X\ra=-\tr{S(D)\ad_\ngo{X}}, \qquad\forall X\in\ngo. $$ It is easy to see that $\ast=0$ if $D$ is normal (see e.g.\ the proof of \cite[Proposition 4.3]{solvsolitons}).
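In particular, if $D$ is symmetric then $S(D)=D$ and $[D,D^t]=0$, and since a symmetric $D$ is in particular normal, \eqref{Ric-half} reduces to the block diagonal form $$ \Ricci=\left[\begin{array}{c|c} -\tr{D^2} & 0 \\\hline 0 & \tfrac{|\lb|^2}{4}\,m(\lb)-\tr(D)D \end{array}\right], $$ which is the expression behind the formula $\Ricci_1|_{\ngo}=r\,m(h\cdot\lb)-\tr(D)D$ appearing in the proof of Theorem \ref{main2} below.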
\begin{definition} A derivation $D$ of a nilpotent Lie algebra $\ngo$ with $\tr{D}>0$ is said to be {\it Ricci negative} if the solvable Lie algebra $\sg=\RR f\oplus\ngo$ defined above admits an inner product of negative Ricci curvature such that $D^t=D$ and $f\perp\ngo$. \end{definition}
We ask for the positivity of the trace in the above definition since the isomorphism class of $\sg=\RR f\oplus\ngo$ is invariant under nonzero scalings of $D$. Furthermore, unimodular solvable Lie algebras do not admit Ricci negative metrics (see \cite{Dtt}), so $\tr{D}\ne 0$ is a necessary condition. Note that any Ricci negative derivation is diagonalizable. It easily follows from \eqref{Ric-half} that if $D>0$ (i.e.\ all its eigenvalues are positive) then $D$ is Ricci negative. On the other hand, the only known necessary condition for a derivation $D$ to be Ricci negative is that $D$ must be positive when restricted to the center of $\ngo$ (see \cite[Theorem 2, (1)]{NklNkn}). We consider in Section \ref{RNnil} the problem of which nilpotent Lie algebras admit a Ricci negative derivation.
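Let us briefly indicate why $D>0$ is sufficient. Since the isomorphism class of $\sg$ is invariant under rescaling of $D$, we may replace $D$ by $cD$, $c>0$; for symmetric $D$, formula \eqref{Ric-half} then gives $$ \la\Ricci f,f\ra=-c^2\tr{D^2}<0, \qquad \Ricci|_{\ngo}=\Ricci_\ngo-c^2\tr(D)D, $$ with vanishing mixed terms since $D$ is normal, and $\Ricci|_{\ngo}<0$ for $c$ sufficiently large because $\tr(D)D$ is positive definite whenever $D>0$.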
The following natural question arises:
\begin{quote} (Q1) Given a nilpotent Lie algebra $\ngo$ with a basis $\{ e_i\}$, what kind of set is the cone $$ \left\{ D\in\Der(\ngo):D \;\mbox{is diagonal relative to}\; \{ e_i\}\; \mbox{and Ricci negative}\right\}? $$ Is it open in the space of diagonal derivations? Is it convex? \end{quote}
In \cite{NklNkn,Nkl}, it was proved that this cone is open and convex for Heisenberg and filiform Lie algebras endowed with the standard bases.
\subsection{Ricci negative derivations in terms of the moment map} Let $G_D$ denote the connected component of the identity of the centralizer subgroup of $D$ in $\G$ and let $\ggo_D$ be its Lie algebra. Given an inner product $\ip$ on a vector space $\ngo$, we denote by $\sym(\ngo)$ the space of symmetric operators of $\ngo$ and by $\sym(\ngo)_{>0}$ the open cone of positive definite ones. If $m$ is the moment map defined as in \eqref{defmm} by $\ip$, then the moment map $m_D$ for the $G_D$-action satisfies that $m_D(\mu)$ is the orthogonal projection of $m(\mu)$ on $\sym(\ngo)\cap\ggo_D$. Since $m(\mu)$ commutes with any symmetric derivation of $(\ngo,\mu)$ and $D$ is a derivation of any Lie bracket in the set $\overline{G_D\cdot\lb}$, we obtain that \begin{equation}\label{mDm} m_D=m \quad \mbox{on} \quad \overline{G_D\cdot\lb} \quad \mbox{if} \quad D\in\sym(\ngo). \end{equation}
\begin{theorem}\label{main2} Let $\ngo$ be a nilpotent Lie algebra endowed with an inner product and consider $D\in\Der(\ngo)\cap\sym(\ngo)$ with $\tr{D}>0$. Then the following conditions are equivalent:
\begin{itemize} \item[(i)] $D$ is Ricci negative. \item[ ] \item[(ii)] $D\in\RR_{>0}\, m\left(G_D\cdot\lb\right) + \sym(\ngo)_{>0}$. \item[ ] \item[(iii)] $D\in\RR_{>0}\, m\left(\overline{G_D\cdot\lb}\right) + \sym(\ngo)_{>0}$. \end{itemize} \end{theorem}
\begin{proof}
Let $\ip$ denote the inner product endowing $\ngo$, which we extend to $\sg$ by setting $f\perp\ngo$ and $|f|=1$. We first assume part (i) and denote by $\ip_1$ the Ricci negative inner product on $\sg$ such that $D^t=D$ and $f\perp\ngo$, which up to scaling can be assumed to satisfy $|f|_1=1$. There exists $h\in\sym(\ngo)_{>0}$ such that $\ip_1|_{\ngo\times\ngo}=\la h\cdot,h\cdot\ra$, giving rise to an isometry \begin{equation}\label{isome} (\sg=\RR f\oplus\ngo,\ip_1) \longrightarrow (\sg_1=\RR f\oplus\ngo,\ip), \end{equation}
where the Lie bracket of $\sg_1$ is defined by $\ad_1{f}|_{\ngo}=hDh^{-1}$, $\lb_1|_{\ngo\times\ngo}=h\cdot\lb$. The isometry is produced by the orthogonal isomorphism sending $f$ to $f$ and each $X\in\ngo$ to $h(X)$. Since $D$ is also symmetric with respect to $\ip_1$ we obtain that $h\in G_D$, and so the Ricci operator of $(\sg_1,\ip)$, which is also negative definite by \eqref{isome}, is given by $$
\Ricci_1|_{\ngo} = r\, m(h\cdot\lb) - \tr(D)D, $$ for some $r>0$ (see \eqref{Ric-half} and \eqref{mmR}), from which part (ii) follows.
Since (iii) follows trivially from (ii), it only remains to show that part (iii) implies (i). Assume that $D=r\, m(\mu)+E$ for some $r>0$, $E\in\sym(\ngo)_{>0}$ and that there exist $h_k\in G_D$ such that $h_k\cdot\lb$ converges to $\mu$, as $k\to\infty$. Thus $D\in\Der(h_k\cdot\lb)$ for any $k$, $D\in\Der(\mu)$ and by scaling $\mu$ appropriately we obtain that $\Ricci_\mu-\tr(D)D$ is negative definite (see \eqref{mmR}). This implies that $(\sg_1,\ip)$ has negative Ricci curvature if we define the Lie bracket on $\sg_1$ using $D$ and $\mu$ as above, and consequently, $(\sg_2,\ip)$ with Lie bracket defined by $D$ and $h_k\cdot\lb$ also has negative Ricci curvature for sufficiently large $k$, by continuity. By applying the isometry \eqref{isome}, one shows that $\la h_k\cdot,h_k\cdot\ra$ produces a Ricci negative metric on $\sg$, concluding the proof. \end{proof}
\begin{remark} In much the same way as in the above proof, one obtains that the solvable Lie algebra $\sg=\RR f\oplus\ngo$ admits an Einstein (non-flat) inner product such that $D^t=D$ and $f\perp\ngo$ if and only if $D\in\RR_{>0}\, m\left(G_D\cdot\lb\right) + \RR_{>0}I$. \end{remark}
Recall that a linear operator of $\ngo$ is diagonalizable over $\RR$ if and only if it is symmetric with respect to some inner product on $\ngo$. If instead of an inner product we fix a basis of the Lie algebra $\ngo$, then the above theorem can be rewritten as follows for diagonal derivations.
\begin{corollary}\label{main} Let $\ngo$ be a nilpotent Lie algebra endowed with a basis and consider $D\in\Der(\ngo)\cap\tg^n$ with $\tr{D}>0$. Then the following conditions are equivalent:
\begin{itemize} \item[(i)] $D$ is Ricci negative. \item[ ] \item[(ii)] $D\in\RR_{>0}\, m\left(G_D\cdot\lb\right)\cap\tg^n + \tg^n_{>0}$. \item[ ] \item[(iii)] $D\in\RR_{>0}\, m\left(\overline{G_D\cdot\lb}\right)\cap\tg^n + \tg^n_{>0}$. \item[ ] \item[(iv)] $D\in\RR_{>0}\, m\left(\overline{G_D\cdot\lb}\right)\cap\ag_+^D + \tg^n_{>0}$, where $\ag_+^D\subset\tg^n$ is any Weyl chamber of $G_D$. \end{itemize} \end{corollary}
\begin{remark} The cones in parts (ii)-(iv) are all open in $\tg^n$ as $\tg^n_{>0}$ is so. Moreover, the subset in part (iv) is indeed an open and convex cone since $m\left(\overline{G_D\cdot\lb}\right)\cap\ag^{D}_+$ is a convex polytope by Theorem \ref{conv1}, (ii). Note that actually a Ricci negative $D$ must belong to the intersection of all the convex cones obtained by running over all the Weyl chambers. This provides a very useful insight to work on question (Q1). \end{remark}
\begin{proof}
If $D$ is diagonal when written in terms of the basis, then we consider the inner product on $\ngo$ making this basis orthonormal and the equivalence between (i), (ii) and (iii) follows in much the same way as the above theorem. The only observation to make is that at the end of the proof of the fact that (i) implies (ii), since $[\Ricci_1|_\ngo,D]=0$, there exists $g\in\Or(n)\cap G_D$ such that $g\Ricci_1|_\ngo g^{-1}\in\tg^n$, and thus $g\Ricci_1|_\ngo g^{-1}= r\, m(gh\cdot\lb) - \tr(D)D$, from which part (ii) follows.
Finally, assume that part (ii) holds, say $D=rM+E$, where $r>0$, $M=m(g\cdot\lb)\in\tg^n$ for some $g\in G_D$ and $E\in\tg^n_{>0}$. Thus $M$ belongs to some Weyl chamber $\ag_2$ of $G_D$, which has to be of the form $\ag_2=h\ag_+^Dh^{-1}$ for some $h\in\Or(n)\cap G_D$. We therefore obtain that $$ D = h^{-1}Dh = rh^{-1}Mh+h^{-1}Eh = rm(h^{-1}g\cdot\lb)+h^{-1}Eh, $$ with $m(h^{-1}g\cdot\lb)\in m\left(G_D\cdot\lb\right)\cap\ag_+^D$ and $h^{-1}Eh\in\tg^n_{>0}$, from which part (iv) follows, concluding the proof. \end{proof}
Given a nilpotent Lie algebra $\ngo$ endowed with a basis $\{ e_i\}$, we introduce the following notation: $$ \dg:=\Der(\ngo)\cap\tg^n, \qquad \dg_{RN}:=\{ D\in\dg:D\;\mbox{is Ricci negative}\}, $$ and $T\subset\Gl_n(\RR)$ will denote the (connected) torus with Lie algebra $\tg^n$ (i.e.\ the subgroup of diagonal matrices with positive entries).
\begin{example}\label{heis3} Let $\ngo$ be the $3$-dimensional Heisenberg Lie algebra with basis $\{ e_1,e_2,e_3\}$ and Lie bracket $[e_1,e_2]=e_3$. We have that $$ \dg=\{D:=\Diag(d_1,d_2,d_1+d_2):d_1,d_2\in\RR\}, $$ and if $D$ is generic (i.e.\ $d_1, d_2$ are different nonzero real numbers), then $G_D=T$, $\overline{T\cdot\lb}=\RR_{\geq 0}\lb$ and $m(\mu)=F_{12}^3$ for any nonzero $\mu=x\lb$. Therefore, according to Corollary \ref{main}, a generic $D\in\dg$ with $\tr{D}>0$ belongs to $\dg_{RN}$ if and only if there exists $a\geq 0$ such that $$ d_1+a>0, \quad d_2+a>0, \quad d_1+d_2-a>0, $$ or equivalently, $d_1+d_2>a>-d_1,-d_2$. By using $d_1+d_2>0$ we obtain that this is in turn equivalent to $2d_1+d_2>0$ and $d_1+2d_2>0$. This implies that $$ \dg_{RN}=\{D\in\dg:2d_1+d_2>0, \; d_1+2d_2>0\}, $$ an open and convex cone. Indeed, any non-generic derivation $D_0$ with positive trace belongs to the cone on the right and satisfies $T\subset G_{D_0}$, so $D_0\in\dg_{RN}$ as well by Corollary \ref{main}. \end{example}
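To make the criterion concrete, take for instance $D=\Diag(3,-1,2)\in\dg$ (an arbitrary illustrative choice): here $2d_1+d_2=5>0$ and $d_1+2d_2=1>0$, and indeed $a=\tfrac{3}{2}$ is an admissible witness, $$ D-\tfrac{3}{2}\,F_{12}^3 = \Diag(3,-1,2)-\tfrac{3}{2}\,\Diag(-1,-1,1) = \Diag\left(\tfrac{9}{2},\tfrac{1}{2},\tfrac{1}{2}\right)\in\tg^3_{>0}, $$ exhibiting $D\in\RR_{>0}\, m\left(G_D\cdot\lb\right)\cap\tg^3+\tg^3_{>0}$ as in Corollary \ref{main}, (ii).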
\begin{example}\label{n4nice} Let $\ngo$ be the $4$-dimensional nilpotent Lie algebra with basis $\{ e_1,\dots,e_4\}$ and Lie bracket $$ [e_1,e_2]=e_3, \quad [e_1,e_3]=e_4. $$ Since $$ \dg=\{D:=\Diag(d_1,d_2,d_1+d_2,2d_1+d_2):d_1,d_2\in\RR\}, $$ we obtain that $D$ is generic if and only if $d_1, d_2\ne 0$ and $d_1\pm d_2\ne 0$. In that case, $G_D=T$ and $\overline{T\cdot\lb}$ is given by the linear subspace of $V$ of nilpotent Lie brackets $\mu=\mu(x,y)$ defined by $$ \mu(e_1,e_2)=xe_3, \quad \mu(e_1,e_3)=ye_4, \qquad x,y\geq 0, $$ and the moment map is given by $$
m(\mu) = \frac{2}{|\mu|^2}\left[\begin{array}{cccc} -(x^2+y^2)&&&\\ &-x^2&&\\ &&x^2-y^2&\\ &&&y^2 \end{array}\right] = \frac{1}{x^2+y^2}\left(x^2F_{12}^3+y^2F_{13}^4\right), $$ which implies that $m\left(\overline{T\cdot\lb}\right)\cap\tg^4=m\left(\overline{T\cdot\lb}\right)=\CH(F_{12}^3, F_{13}^4)$. Let us now show that $$ \dg_{RN}=\{D\in\dg:d_1+d_2>0, \; 2d_1+d_2>0\}, $$ which is open and convex as in the above example. It follows from Corollary \ref{main} that a generic $D\in\dg$ belongs to $\dg_{RN}$ if and only if there exist $a,b\geq 0$ such that \begin{equation}\label{n4nice-eq} d_1+a+b>0, \quad d_2+a>0, \quad d_1+d_2-a+b>0, \quad 2d_1+d_2-b>0. \end{equation} The last inequality implies that $2d_1+d_2>0$ and the condition $d_1+d_2>0$ follows by adding the last three inequalities. To prove that these two conditions are sufficient we proceed as follows. Note that \eqref{n4nice-eq} is equivalent to the existence of $b\geq 0$ such that $$ 2d_1+d_2>b>-d_1-a,-d_1-d_2+a, $$ which holds if and only if there is an $a\geq 0$ such that $$ 3d_1+2d_2>a>-3d_1-d_2,-d_2, $$ that is, $2d_1+d_2>0$ and $d_1+d_2>0$ since $3d_1+2d_2>0$. On the other hand, any non-generic derivation with positive trace belonging to the cone on the right lies in $\dg_{RN}$ as in Example \ref{heis3}, and the only one outside this cone is $D_0=(1,-1,0,1)$ (up to scaling), for which it is easy to check that $$ m(G_{D_0}\cdot\lb)\cap\tg^4 = \CH^\circ(F_{12}^3, F_{13}^4)\cup \CH^\circ(F_{24}^3, F_{34}^1). $$ This also follows from \eqref{unionW}. Thus $D_0$ is not Ricci negative by Corollary \ref{main}; indeed, $D_0=aF_{24}^3+bF_{34}^1+E$, $a,b\geq 0$, $E\in\tg^4_{>0}$, implies that $a>1>b$ and $b>a$, a contradiction. \end{example}
The following corollary of Theorem \ref{main2} follows from Theorem \ref{conv1}, (iii) and provides a necessary condition for a symmetric derivation to be Ricci negative. We denote by $\Diagg(A)$ the diagonal part of a matrix $A$.
\begin{corollary}\label{cor-main2} Let $\ngo$ be a nilpotent Lie algebra endowed with an inner product. If $D$ is a symmetric derivation of $\ngo$ which is Ricci negative, then relative to any orthonormal basis of $\ngo$, the diagonal part of $D$ belongs to the cone $$ \RR_{>0}\, \Diagg\left(m\left(\overline{G_D\cdot\lb}\right)\right) + \tg^n_{>0}. $$ \end{corollary}
Recall from Theorem \ref{conv1}, (iii) that this cone is open and convex.
\subsection{Using the convexity properties of the moment map} The characterizations of Ricci negative derivations obtained in Corollary \ref{main} lead us to apply the results from real GIT described in Section \ref{git} to try to understand the set $$ m\left(\overline{G_D\cdot\lb}\right)\cap\tg^n. $$ This coincides with $m(X)\cap\ag$ in the case when $G=G_D$, $\ag=\tg^n$ and $X=\PP\left(\overline{G_D\cdot\lb}\right)$. Recall from \eqref{mDm} that the moment maps for $G_D$ and $\Gl_n(\RR)$ coincide on $X$ in this situation. However, the Weyl chambers for $G_D$ can be much larger. It follows from Theorem \ref{conv1}, (ii) that $m\left(\overline{G_D\cdot\lb}\right)\cap\ag^{D}_+$ is a convex polytope for any Weyl chamber $\ag^{D}_+\subset\tg^n$ of $G_D$ and hence, as in \eqref{unionW}, one obtains that $m\left(\overline{G_D\cdot\lb}\right)\cap\tg^n$ is the union of finitely many convex polytopes by running over all Weyl chambers for $G_D$. In particular, it is convex if $G_D=T$.
In view of the fact that the torus $T\subset\Gl_n(\RR)$ is always contained in $G_D$ for any $D\in\dg$, we need to deepen the study of the subset $$ m\left(\overline{T\cdot\lb}\right)\cap\tg^n. $$ So in what follows, according to the notation in Section \ref{git}, we consider $G=\Gl_n(\RR)$, thus $\ag=\tg^n$ and $A=T$. Recall that $m_\ag=\Diagg\circ\, m$.
For each $\mu\in V$, we define the following convex subsets of $\tg^n$, $$ \CH_\mu := \CH\left(F_{ij}^k:c(\mu)_{ij}^k\ne 0\right), \quad \CH_\mu^\circ := \CH^\circ\left(F_{ij}^k:c(\mu)_{ij}^k\ne 0\right), $$ where $c(\mu)_{ij}^k$ are the structural constants of $\mu$ (see Section \ref{V-sec}). Note that $F_{ij}^k=m(\mu_{ijk})$.
\begin{lemma}\label{mtoro} $\Diagg\left(m\left(\overline{T\cdot\mu}\right)\right) = \CH_\mu$. \end{lemma}
\begin{proof} For any $E\in\tg^n$ and $\lambda\in V$ one has that $$ E\cdot\lambda = \sum_{i<j,\,k} \la E,F_{ij}^k\ra c(\lambda)_{ij}^k\,\mu_{ijk}, $$ so it follows from \eqref{defmm} that \begin{equation}\label{mmdiag}
m_\ag(\lambda) =\tfrac{2}{|\lambda|^2}\sum\limits_{i<j,\,k}\left(c(\lambda)_{ij}^k\right)^2 F_{ij}^k\in\CH_\lambda, \qquad\forall\lambda\in V\smallsetminus\{ 0\}. \end{equation} In particular, $m\left(\overline{T\cdot\mu}\right)\cap\tg^n$ is contained in $\CH_\mu$ for any $\mu\in V$.
On the other hand, since $$ c(h\cdot\mu)_{ij}^k = \frac{h_k}{h_ih_j}c(\mu)_{ij}^k, \qquad\forall h:=\Diag(h_1,\dots,h_n)\in T, $$ one obtains that $\CH_\lambda\subset\CH_\mu$ for any $\lambda\in\overline{T\cdot\mu}$, which implies that $m_\ag(\overline{T\cdot\mu})\subset\CH_\mu$ by \eqref{mmdiag}. But $\mu_{ijk}\in\overline{T\cdot\mu}$ (up to a nonzero multiple) whenever $c(\mu)_{ij}^k\ne 0$; indeed, $e^{t\alpha}\cdot\mu$ converges to $c(\mu)_{ij}^k\,\mu_{ijk}$ as $t\to\infty$ for any $\alpha\in\tg^n$ such that $\la\alpha,F_{ij}^k\ra=0$ and $\la\alpha,F_{i'j'}^{k'}\ra<0$ for every other $(i',j',k')$ with $c(\mu)_{i'j'}^{k'}\ne 0$, and such an $\alpha$ exists since $F_{ij}^k$ is an extreme point of the polytope $\CH_\mu$ (see Lemma \ref{A-deg} below). This implies that if $c(\mu)_{ij}^k\ne 0$, then $F_{ij}^k\in m_\ag(\overline{T\cdot\mu})$, which is convex by Theorem \ref{conv1}, (iii), and so $\CH_\mu\subset m_\ag(\overline{T\cdot\mu})$, concluding the proof.
An alternative proof of the above lemma can be given by using Theorem \ref{conv2}, (i) and the fact that the $T$ fixed points are given by $$ \PP(V)^T=\left\{ [\mu_{ijk}]\right\}, \qquad \overline{T\cdot[\mu]}^T=\left\{[\mu_{ijk}]:c(\mu)_{ij}^k\ne 0\right\}, $$ and their $m$-images by $$ m\left(\PP V^T\right)=\left\{ F_{ij}^k\right\}, \qquad m\left(\overline{T\cdot[\mu]}^T\right)=\left\{F_{ij}^k:c(\mu)_{ij}^k\ne 0\right\}. $$
We now show that $\overline{T\cdot\mu}$ is actually determined by $\CH_\mu$, which is in a sense a converse to Lemma \ref{mtoro}. For each subset $J\subset I_\mu:=\{ (i,j,k):c(\mu)_{ij}^k\ne 0\}$, consider the bracket $$ \lambda_J:=\sum_{(i,j,k)\in J} c(\mu)_{ij}^k\,\mu_{ijk}. $$ Note that $m_\ag\left(\overline{T\cdot\lambda_J}\right) = \CH\left(F_{ij}^k:(i,j,k)\in J\right)$ (see Lemma \ref{mtoro}). We recall that some basic convex geometry terminology was given in Section \ref{convex-sec}.
\begin{lemma}\label{A-deg} The closure of the orbit $T\cdot\mu$ is given by $$ \overline{T\cdot\mu} =\left\{\lambda_J:\CH\left(F_{ij}^k:(i,j,k)\in J\right)\;\mbox{is a face of}\; \CH_\mu\right\}. $$ Moreover, if $c(\mu)_{ij}^k\ne 0$, then $\mu_{ijk}\in\overline{T\cdot\mu}$ and $F_{ij}^k$ is an extreme point of $\CH_\mu$. \end{lemma}
\begin{proof} It follows from \cite[Theorem 1.1, (ii)]{BhmLfn} that a Lie bracket $\lambda$ belongs to $\overline{T\cdot\mu}$ if and only if there exists $\alpha\in\tg^n$ such that $e^{t\alpha}\cdot\mu$ converges to $\lambda$, as $t\to\infty$. In particular, $\lambda=\lambda_J$ for some $J\subset I_\mu$, and such convergence is equivalent to $\la\alpha,F_{ij}^k\ra=0$ for any $(i,j,k)\in J$ and negative otherwise. But the existence of an $\alpha\in\tg^n$ with such properties is necessary and sufficient to have that $\CH\left(F_{ij}^k:(i,j,k)\in J\right)$ is a face of $\CH_\mu$, concluding the proof. \end{proof}
\begin{corollary} $\Diagg\left(m\left(T\cdot\mu\right)\right) = \CH_\mu^\circ$. \end{corollary}
The situation in Example \ref{n4nice} concerning convexity properties can drastically change if we consider a different basis for that Lie algebra, as the next example shows.
\begin{example}\label{n4nonice} If the Lie bracket is defined by $$ [e_1,e_2]=e_3+e_4, \quad [e_1,e_3]=e_4, $$ then $\overline{T\cdot\lb}$ is given by the brackets $\mu=\mu(x,y,z)$ such that $$ \mu(e_1,e_2)=xe_3+ye_4, \quad \mu(e_1,e_3)=ze_4, \qquad x,y,z\geq 0. $$ According to Lemma \ref{A-deg}, $\overline{T\cdot\lb}$ consists, besides the open and dense orbit $T\cdot\lb$ ($x,y,z>0$), of six other nonzero $T$-orbits defined by the proper faces of the triangle $\CH_{\lb}=\CH(F_{12}^3, F_{12}^4, F_{13}^4)$. The moment map is given by \begin{align*}
m(\mu) =& \frac{2}{|\mu|^2}\left[\begin{array}{cccc} -x^2-y^2-z^2&&&\\ &-x^2-y^2&-yz&\\ &-yz&x^2-z^2&xy\\ &&xy&y^2+z^2 \end{array}\right] \\ =& \frac{1}{x^2+y^2+z^2}\left(x^2F_{12}^3+y^2F_{12}^4+z^2F_{13}^4-yzF_5+xyF_6\right), \end{align*} where $$ F_5:=\left[\begin{smallmatrix} 0&&\\ &0&1&\\ &1&0&\\ &&&0 \end{smallmatrix}\right], \qquad F_6:=\left[\begin{smallmatrix} 0&&\\ &0&&\\ &&0&1\\ &&1&0 \end{smallmatrix}\right]. $$ Thus $m(T\cdot\lb)\cap\tg^4=\emptyset$, $$ m\left(\overline{T\cdot\lb}\right)\cap\tg^4=\CH(F_{12}^3, F_{13}^4)\cup \left\{F_{12}^4\right\}, $$ and recall from Lemma \ref{mtoro} that $\Diagg\left(m\left(\overline{T\cdot\lb}\right)\right)=\CH(F_{12}^3, F_{12}^4, F_{13}^4)$. Nevertheless, it easily follows from Corollary \ref{main} that $\dg_{RN}=\{ (0,d,d,d):d>0\}$ by using only $F_{12}^4$. \end{example}
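As a quick verification of the last assertion, for $D=\Diag(0,d,d,d)$ with $d>0$ the choice $a=\tfrac{d}{2}$ gives $$ D-\tfrac{d}{2}\,F_{12}^4 = \Diag(0,d,d,d)-\tfrac{d}{2}\,\Diag(-1,-1,0,1) = \Diag\left(\tfrac{d}{2},\tfrac{3d}{2},d,\tfrac{d}{2}\right)\in\tg^4_{>0}, $$ so $D\in\dg_{RN}$ by Corollary \ref{main}; for $d\leq 0$ the necessary condition $\tr{D}=3d>0$ fails.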
\subsection{Nilpotent Lie algebras with a nice basis} The better behavior of $m\left(\overline{T\cdot\lb}\right)\cap\tg^4$ in Example \ref{n4nice} compared to what happened in Example \ref{n4nonice} is due to special properties of the basis chosen.
\begin{definition}\label{nice-def} A basis $\{ e_1,\dots,e_n\}$ of a Lie algebra is said to be {\it nice} if every bracket $[e_i,e_j]$ is a scalar multiple of some element $e_k$ in the basis and two different brackets $[e_i,e_j]$, $[e_r,e_s]$ can be a nonzero multiple of the same $e_k$ only if $\{ i,j\}$ and $\{ r,s\}$ are either equal or disjoint. \end{definition}
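For instance, the basis of Example \ref{n4nice} is nice, while those of Examples \ref{n4nonice} and \ref{n5nonice} below are not: $$ [e_1,e_2]=e_3+e_4 \quad (\mbox{not a multiple of a single}\; e_k), \qquad [e_1,e_3]=e_5, \;\; [e_1,e_4]=e_5 \quad (\{ 1,3\}\cap\{ 1,4\}=\{ 1\}), $$ the first bracket violating the first condition in Definition \ref{nice-def} and the last two the second one.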
\begin{lemma}\label{nice}\cite{nicebasis} The following conditions are equivalent: \begin{itemize} \item[(i)] $m\left(\overline{T\cdot\mu}\right)\cap\tg^n = \CH_\mu$.
\item[(ii)] $\{ e_i\}$ is a nice basis for $\mu$.
\item[(iii)] $m\left(T\cdot\mu\right)\subset\tg^n$. \end{itemize} \end{lemma}
\begin{proof} The equivalence between parts (ii) and (iii) is precisely the result proved in \cite{nicebasis}. Part (i) follows from (iii) and \eqref{mtoro}, so we only need to prove that part (i) implies (iii). If $h\in T$, then $m_\ag(h\cdot\mu)=m(g\cdot\mu)$ for some $g\in T$ by (i) and \eqref{mtoro}. But this implies that $h=tg$ for a nonzero $t\in\RR$ since $m_\ag:T\cdot[\mu]\longrightarrow m_\ag(T\cdot\mu)$ is a diffeomorphism (see \cite[Proposition 3]{HnzStt}) and thus $m(h\cdot\mu)\in\tg^n$, concluding the proof. \end{proof}
The following result was proved in \cite[Section 4]{Nkl2}.
\begin{corollary}\label{niceD} If $\ngo$ is a nilpotent Lie algebra with a nice basis, then $\Diagg(D)\in\Der(\ngo)$ for any $D\in\Der(\ngo)$. \end{corollary}
\begin{proof} For any $h\in T$ one has that $\tr{\Diagg(D)m(h\cdot\lb)} = \tr{Dm(h\cdot\lb)} = 0$, thus $\tr{\Diagg(D)F_{ij}^k}=0$ for each $c(\lb)_{ij}^k\ne 0$ by Lemma \ref{A-deg}, that is, $\Diagg(D)\in\Der(\ngo)$. \end{proof}
Lemma \ref{nice} together with Corollary \ref{main} also gives the following.
\begin{corollary}\label{cor-nice} Let $\ngo$ be a nilpotent Lie algebra endowed with a nice basis. Then the open convex cone in $\dg$ given by $$ \left(\RR_{>0}\CH_{\lb} + \tg^n_{>0}\right)\cap \{ D\in\dg:\tr{D}>0\}, $$ is contained in $\dg_{RN}$. \end{corollary}
It follows from Theorem \ref{conv2}, (ii) that $$ \CH\left(m\left(\overline{G_D\cdot\lb}\right)\right)\cap\tg^n = \CH\left(\Diagg\left(m\left(\overline{G_D\cdot\lb}\right)\right)\right), $$ a convex polytope. However, both $m\left(\overline{T\cdot\mu}\right)\cap\tg^n$ and $m\left(\overline{G_D\cdot\mu}\right)\cap\tg^n$ can be tricky subsets if the basis is not nice, as the next example shows.
\begin{example}\label{n5nonice} Let $\ngo$ be the $5$-dimensional nilpotent Lie algebra with basis $\{ e_1,\dots,e_5\}$ and Lie bracket $$ [e_1,e_2]=e_3+e_4, \quad [e_1,e_3]=e_5, \quad [e_1,e_4]=e_5. $$ It is easy to see that if $D$ is generic, then $$ G_D=G:=\left\{\left[\begin{array}{cccc} h_1&&&\\ &h_2&&\\ &&H&\\ &&&h_5 \end{array}\right]: H\in\Gl_2^+(\RR), \quad h_i>0\right\}. $$ We consider $G$ acting on the cone $C\subset V$ of nilpotent Lie brackets $\mu=\mu(x,y,z,w)$ defined by $$ \mu(e_1,e_2)=xe_3+ye_4, \quad \mu(e_1,e_3)=ze_5, \quad \mu(e_1,e_4)=we_5, \qquad x,y,z,w\geq 0. $$ The moment map $m:C\smallsetminus\{ 0\}\longrightarrow\pg$ is given by \begin{align*}
m(\mu) =& \frac{2}{|\mu|^2}\left[\begin{array}{ccccc} -(x^2+y^2+z^2+w^2)&&&&\\ &-x^2-y^2&&&\\ &&x^2-z^2&xy-zw&\\ &&xy-zw&y^2-w^2&\\ &&&&z^2+w^2 \end{array}\right] \\ =& \frac{1}{x^2+y^2+z^2+w^2}\left(x^2F_{12}^3+y^2F_{12}^4+z^2F_{13}^5+w^2F_{14}^5+(xy-zw)F\right), \end{align*} where $$ F:=\left[\begin{smallmatrix} 0&&&\\ &0&&&\\ &&0&1&\\ &&1&0&\\ &&&&0 \end{smallmatrix}\right]. $$ It is easy to check that $C\smallsetminus\{ 0\}$ is the disjoint union of three orbits: $G\cdot\lb$, $G\cdot\mu_{123}$ and $G\cdot\mu_{135}$; and the first one is given by $$ G\cdot\lb=\{\mu:xz+yw\ne 0\}, \qquad \overline{G\cdot\lb}=C. $$ This implies that \begin{align} m\left(\overline{G\cdot\lb}\right)\cap\tg^5 =& \{ aF_{12}^3+bF_{12}^4+cF_{13}^5+dF_{14}^5: a,b,c,d\geq 0, \label{image1}\\ &\quad a+b+c+d=1, \quad ab=cd\} \notag\\ =& \{ \Diag(-1,-a-b,a-c,b-d,c+d)\in\tg^5:a,b,c,d\geq 0, \label{image2}\\ &\quad a+b+c+d=1, \quad ab=cd\}. \notag \end{align} Since $$ F_{12}^3-F_{12}^4=F_{14}^5-F_{13}^5 = \Diag(0,0,1,-1,0) \perp (0,-1,1,1,-1) = F_{12}^4-F_{13}^5 = F_{12}^3-F_{14}^5, $$ these four points $\left\{ F_{12}^3, F_{12}^4, F_{13}^5, F_{14}^5\right\}$ in $\tg^5$ are the vertices of a rectangle with center $F_0:=(-1,-\unm,0,0,\unm)$, which is precisely $\CH_{\lb}=\Diagg\left(m\left(\overline{G\cdot\lb}\right)\right)$. It is not so hard to see by using \eqref{image1} that $m\left(\overline{G\cdot\lb}\right)\cap\tg^5$ is the union of two triangles, given by the convex hulls of $\{F_{12}^4,F_{13}^5,F_0\}$ and $\{F_{12}^3,F_{14}^5,F_0\}$, respectively. Such computation becomes clearer if one translates everything to the origin by subtracting the vector $F_0$ from all the vectors involved. 
The two Weyl chambers are given by $$ (\ag_+)_1=\{\Diag(a_1,\dots,a_5):a_3\leq a_4\}, \qquad (\ag_+)_2=\{\Diag(a_1,\dots,a_5):a_3\geq a_4\}, $$ thus the convex polytope $m\left(\overline{G\cdot\lb}\right)\cap(\ag^\dg_+)_1$ can be obtained by adding to \eqref{image1} or \eqref{image2} the condition $a+d\leq b+c$, and so it coincides with the triangle $\{F_{12}^4,F_{13}^5,F_0\}$. The other triangle corresponds to the other Weyl chamber.
Concerning $T$-orbits, it follows from Lemma \ref{A-deg} that $\overline{T\cdot\lb}$ consists of the open and dense orbit $T\cdot\lb=\{\mu:xz=yw\ne 0\}$ and eight other nonzero $T$-orbits corresponding to the edges and vertices of the rectangle $\CH_{\lb}$. From \eqref{image1} we obtain that \begin{align} m\left(\overline{T\cdot\lb}\right)\cap\tg^5 =& \{ aF_{12}^3+bF_{12}^4+cF_{13}^5+dF_{14}^5: a,b,c,d\geq 0, \label{image3}\\ &\quad a+b+c+d=1, \quad ab=cd, \quad ac=bd\}. \notag \end{align} It is easy to check that these conditions hold if and only if either $a=d=0$, or $b=c=0$, or $a=d$ and $b=c$, hence $m\left(\overline{T\cdot\lb}\right)\cap\tg^5$ is the union of three segments: $\overline{F_{12}^3F_{14}^5}$, $\overline{F_{12}^4F_{13}^5}$ and the segment having the midpoints of these two as endpoints. The interior of this last segment coincides with $m\left(T\cdot\lb\right)\cap\tg^5$. \end{example}
\begin{remark} In the above example, in terms of the notation in \cite{HnzSch}, we have the compatible group $G=G_\dg$ acting on the closed subset $X:=\PP(W)$, with Cartan decomposition $$ \ggo=\RR^2\oplus\glg_2(\RR)\oplus\RR, \qquad \pg=\RR^2\oplus\sym(2)\oplus\RR, \qquad \ag=\tg^5, \qquad A=T^0. $$ The $A$-fixed points in $X$ are exactly $[E_{21}], [E_{31}], [E_{42}], [E_{43}]$, i.e.\ the weight vectors for the $A$-representation $W$, which have as $m$-images the matrices $F_{12}^3, F_{12}^4, F_{13}^5, F_{14}^5$, respectively. It follows that $m(X)\cap\ag$ is not a union of convex hulls of subsets of $m$-images of $A$-fixed points in $X$, even though $X$ is irreducible, as asserted in the first theorem in the introduction of \cite{HnzSch}. \end{remark}
\subsection{An application in low dimension}
Any nilpotent Lie algebra of dimension $\leq 6$ has a positive derivation. In dimension $7$, the first examples in which every derivation is nilpotent appear. Note that these algebras do not admit any non-nilpotent solvable extension; they are called {\it characteristically nilpotent}. An inspection of the classification of nilpotent Lie algebras of dimension $\leq 7$ (see e.g.\ \cite{Mgn} or \cite{Frn} and the references therein), which includes more than one hundred algebras and some continuous families, shows that among those which are not characteristically nilpotent only four do not admit a positive derivation. We now apply the results obtained in this section to show that each of these four nilpotent Lie algebras has a solvable extension admitting a Ricci negative metric.
\begin{theorem}\label{dim7} Any nilpotent Lie algebra of dimension $\leq 7$ which is not characteristically nilpotent admits a Ricci negative derivation. \end{theorem}
\begin{proof} The four $7$-dimensional algebras mentioned above are defined by \begin{align} &[e_1, e_2] = e_4, \; [e_1, e_4] = e_5, \; [e_1, e_5] = e_6, \; [e_1, e_6] = e_7, \; [e_2, e_3] = e_5+e_7, \label{alg1} \\ &[e_3, e_4] =-e_6, \; [e_3, e_5] = -e_7, \quad D=\Diag(0, 1, 0, 1, 1, 1, 1). \notag \\ &[e_1, e_2] = e_4, \; [e_1, e_4] = e_5, \; [e_1, e_5] = e_6, \; [e_1, e_6] = e_7, \; [e_2, e_3] = e_6+e_7, \label{alg2} \\ &[e_3, e_4] =-e_7, \quad D=\Diag(0, 1, 0, 1, 1, 1, 1). \notag \\ &[e_1, e_2] = e_3, \; [e_1, e_3] = e_4, \; [e_1, e_5] = e_6, \; [e_2, e_3] = e_5, \; [e_2, e_4] = e_6, \label{alg3} \\ &[e_2, e_5] =e_7, \; [e_2, e_6] =e_7, \; [e_3, e_5] =-e_7, \quad D=\Diag(0, 1, 1, 1, 2, 2, 3). \notag \\ &[e_1, e_2] = e_3, \; [e_1, e_3] = e_4, \; [e_1, e_4] = e_5, \; [e_1, e_6] = e_7, \; [e_2, e_3] = e_6, \label{alg4} \\ &[e_2, e_4] =e_7, \; [e_2, e_5] =e_7, \; [e_3, e_4] =-e_7, \quad D=\Diag(0, 1, 1, 1, 1, 2, 2). \notag \end{align}
We will prove that the derivation $D$ given in each case is Ricci negative, from which the theorem follows. For the algebra \eqref{alg1}, we can use $\alpha:=\Diag(-1,0,-2,-1,-2,-3,-4)$ to show that $\CH(F_{ij}^k:c_{ij}^k\ne 0,\; F_{ij}^k\ne F_{23}^7)$ is a face of $\CH_{\lb}$ which corresponds to the degeneration $\lambda:=\lb-\mu_{237}\in \overline{T\cdot\lb}$ (see Lemma \ref{A-deg}). Note that $\lambda$ is nice, and we have that $D-M>0$ for $M:=\unm(F_{12}^4+F_{23}^5)\in \CH_\lambda$. Thus $D$ is Ricci negative by Corollary \ref{cor-nice}. The case \eqref{alg2} follows in an identical way by setting $\alpha:=\Diag(-1,0,-3,-1,-2,-3,-4)$ and $M:=\unm(F_{12}^4+F_{23}^6)$.
For the algebra \eqref{alg3}, we use that $\mu_{156}\in \overline{T\cdot\lb}$ (see Lemma \ref{A-deg}), hence $$ M:=F_{15}^6\in m\left(\overline{T\cdot\lb}\right)\cap\tg^7 \subset m\left(\overline{G_D\cdot\lb}\right)\cap\tg^7, $$ and so $D$ is Ricci negative by Corollary \ref{main} since $D-M>0$. Finally, in the case of \eqref{alg4} one can use $\mu_{167}$, concluding the proof. \end{proof}
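Since each $D$ in the proof is diagonal in the given basis, verifying that it is indeed a derivation reduces to the weight condition $d_i+d_j=d_k$ for every nonzero structure constant $c_{ij}^k$. The following sketch (ours, not part of the original argument; the bracket tables for \eqref{alg1}, \eqref{alg2} and \eqref{alg4} are transcribed by hand and all names are our choices) performs this check mechanically:

```python
# Check d_i + d_j = d_k for every bracket [e_i, e_j] containing e_k, which is
# exactly the condition for Diag(d_1, ..., d_n) to be a derivation of a
# nilpotent Lie algebra given in this basis (signs of the structure constants
# are irrelevant for the check).
algs = {
    "alg1": ({(1,2): [4], (1,4): [5], (1,5): [6], (1,6): [7],
              (2,3): [5, 7], (3,4): [6], (3,5): [7]},  (0, 1, 0, 1, 1, 1, 1)),
    "alg2": ({(1,2): [4], (1,4): [5], (1,5): [6], (1,6): [7],
              (2,3): [6, 7], (3,4): [7]},              (0, 1, 0, 1, 1, 1, 1)),
    "alg4": ({(1,2): [3], (1,3): [4], (1,4): [5], (1,6): [7],
              (2,3): [6], (2,4): [7], (2,5): [7], (3,4): [7]},
                                                       (0, 1, 1, 1, 1, 2, 2)),
}

def is_diagonal_derivation(brackets, d):
    dd = dict(enumerate(d, start=1))   # 1-indexed eigenvalues d_1, ..., d_n
    return all(dd[i] + dd[j] == dd[k]
               for (i, j), ks in brackets.items() for k in ks)

for name, (brackets, d) in algs.items():
    assert is_diagonal_derivation(brackets, d), name
```

The same routine applies verbatim to any algebra written in a nice basis.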
It is shown in Proposition \ref{ex1ex2ex5}, (i) that the above theorem already fails in dimension $8$.
\section{Ricci negative nilradicals}\label{RNnil}
In this section, we consider the following question: \begin{quote} (Q2) Which nilpotent Lie algebras can be the nilradical of some solvable Lie algebra admitting a Ricci negative metric? \end{quote} We call such a Lie algebra a {\it Ricci negative nilradical} (RN-nilradical for short). The name is motivated by Einstein nilradicals (see e.g.\ the survey \cite{cruzchica}). The existence of a positive derivation (i.e.\ one for which the real part of each eigenvalue is positive) is sufficient to be a RN-nilradical (see \cite[Theorem 2, (1)]{NklNkn}); in particular, any $2$-step nilpotent Lie algebra is a RN-nilradical. Furthermore, it follows from Theorem \ref{dim7} that any nilpotent Lie algebra of dimension $\leq 7$ which is not characteristically nilpotent is a RN-nilradical.
On the other hand, a first necessary condition on a nilpotent Lie algebra to be a RN-nilradical is the existence of a derivation with nonzero trace. This follows from the fact that unimodular solvable Lie algebras do not admit Ricci negative metrics (see \cite{Dtt}). In what follows, we recall some algebraic notions and facts related to this condition.
Let $\ngo$ be a real nilpotent Lie algebra. Given $D\in\Der(\ngo)$, consider the additive Jordan decomposition for $D$ given by $$ D=D^s+D^n, \qquad [D^s,D^n]=0, \qquad D^s=D^{\RR}+D^{\im\RR}, \qquad [D^{\RR},D^{\im\RR}]=0, $$ where $D^s$ is semisimple (i.e.\ diagonalizable over $\CC$), $D^n$ is nilpotent, $D^{\RR}$ is {\it real semisimple} (i.e.\ diagonalizable over $\RR$) and $D^{\im\RR}$ is semisimple with only imaginary eigenvalues. It is well-known that $D^s,D^n,D^{\RR},D^{\im\RR}\in\Der(\ngo)$. Note that $\Spec(D)=\Spec(D^s)$ and $\Rea\Spec(D)=\Spec(D^{\RR})$.
A maximal abelian subspace of real semisimple derivations is called a {\it maximal torus} and is known to be unique up to conjugation by automorphisms; its dimension is called the {\it rank} of $\ngo$ and will be denoted by $\rank(\ngo)$. Recall that a Lie algebra is said to be {\it characteristically nilpotent} if it has only nilpotent derivations (i.e.\ its complex rank is zero). The first example was found in \cite{DxmLst} sixty years ago.
It is proved in \cite{Nkl2} that for any Lie algebra $\ngo$, there exists a real semisimple $\phi_\ngo\in\Der(\ngo)$ such that $\tr{\phi_\ngo D}=\tr{D}$ for all $D\in\Der(\ngo)$. Such a special derivation, which is unique up to automorphism conjugation, is called a {\it pre-Einstein} derivation. Note that $\phi_\ngo=0$ if and only if $\tr{D}=0$ for any $D\in\Der(\ngo)$, so $\phi_\ngo\ne 0$ is a first necessary condition for $\ngo$ to be a RN-nilradical.
It clearly follows that \begin{quote}
$\ngo$ characteristically nilpotent $\quad\Rightarrow\quad$ $\rank(\ngo)=0$ $\quad\Rightarrow\quad$ $\phi_\ngo=0$. \end{quote} An obvious natural question is whether the converse assertions hold. Curiously enough, we could not find any answer in the literature. Examples \ref{ex10} and \ref{ex3} below show that the converse assertions are both false.
The Lie algebras under consideration usually have a natural diagonal derivation $D$. To study the other derivations, we will consider other real semisimple derivations $D^\prime$ which commute with the given derivation $D$. By taking quotients by invariant ideals and using the low dimensional classifications in \cite{Mgn}, which contains the full description of derivations, we find information about the general derivation and about the rank of the Lie algebra. When we define a derivation $D: \ngo \to \ngo$, we will sometimes only define its value on generators of the Lie algebra $\ngo$. In these cases, it is left to the reader to check that it indeed defines a derivation on the whole Lie algebra $\ngo$.
The existence of a nice basis on a nilpotent Lie algebra $\ngo$ (see Definition \ref{nice-def}) makes the computations concerning derivations more manageable. Indeed, if $D^\prime \in\Der(\ngo)$, then the linear map defined by the diagonal of the matrix of $D^\prime$ with respect to a nice basis is also a derivation (see Corollary \ref{niceD}). Since the given derivation $D$ is already diagonal and $D^\prime$ commutes with $D$, the diagonal of $D^\prime$ will also commute with $D$. In some cases we will conclude that the diagonal of $D^\prime$ is equal to $\lambda D$ for some $\lambda \in \mathbb{R}$ and since $D^\prime$ is real semisimple, this will imply that $D^\prime$ is equal to its diagonal.
We note that all the examples provided in this section are written in terms of a nice basis with the only exception of Proposition \ref{ex1ex2ex5}, (i).
\begin{example}\label{ex10} Consider the Lie algebra $\ngo$ with basis vectors $X_1, \dots, X_5, Y_1, \ldots, Y_5, Z$ and brackets defined as \begin{align*} [X_1, X_2] &= X_3 &[X_1, X_3] &= X_4 &[X_1, X_4] &= X_5 &[X_2,X_3] &= X_5\\ [Y_1, Y_2] &= -X_3 &[Y_1, Y_3] &= -X_4 &[Y_1, Y_4] &= -X_5 &[Y_2,Y_3] &= -X_5\\ [X_1, Y_2] &= Y_3 &[X_1, Y_3] &= Y_4 &[X_1, Y_4] &= Y_5 &[X_2,Y_3] &= Y_5\\ [Y_1, X_2] &= Y_3 &[Y_1, X_3] &= Y_4 &[Y_1, X_4] &= Y_5 &[Y_2,X_3] &= Y_5\\ [X_1, Y_1] &= Z &[X_2, Y_2] &= Z. & & & & \end{align*} It is straightforward to check that the Jacobi identity holds. Let $\Diag(d_1, \ldots, d_{11})$ be a diagonal derivation. The first four brackets show that $d_i = i d_1$ for $1 \leq i \leq 5$. From the brackets which lead to $Y_3$, we find that $d_1 + d_7 = d_6 + d_2$. The last two brackets leading to $Z$ imply that $d_1 + d_6 = d_2 + d_7$, from which we conclude that $d_1 = 0$. The other brackets then easily give that $\Diag(d_1, \ldots, d_{11}) = 0$.
However, since $D$ defined by $D(X_i) = i Y_i, D(Y_i) = -i X_i$ for all $1 \leq i \leq 5$ and $D(Z) = 0$ is a derivation, we conclude that $\ngo$ is not characteristically nilpotent. Note that the basis is nice, so the diagonal of every derivation is again a derivation. If $D^\prime$ is any semisimple derivation which commutes with $D$, then its diagonal is equal to $0$ and hence it is of the form $D^\prime(X_i) = \lambda_i Y_i, D^\prime(Y_i) = -\lambda_i X_i$ for $1 \leq i \leq 5$ and $D^\prime(Z) = \mu Z$. A computation similar to the one above shows that $\lambda_i = i \lambda_1$ and $\mu = 0$, which implies that $D^\prime = \lambda_1 D$. We conclude that $\ngo$ is of complex rank $1$.
To see that $\rank(\ngo)=0$, take any real semisimple derivation $D^\prime: \ngo \to \ngo$. Since the basis is nice, the diagonal of $D^\prime$ is again a derivation and hence equal to $0$. Let $D^\prime(X_i) = \lambda_i Y_i + U_i$ and $D^\prime(Y_i) = \mu_i X_i + V_i$ for $i = 1,2$, with $U_i, V_i$ a linear combination of the other basis vectors. Let $\mg = \langle X_4, X_5, Y_4, Y_5, Z \rangle$, which is invariant under $D^\prime$ as the sum of $\zg(\ngo)$ and $[[\ngo,\ngo],\ngo]$. The relations leading to $X_3$ and $Y_3$ then show that \begin{align*} D^\prime(X_3) &= (\lambda_1 + \lambda_2) Y_3 + \mg = - (\mu_1 + \mu_2) Y_3 + \mg \\ D^\prime(Y_3) &= (- \lambda_1 + \mu_2) X_3 + \mg = (\mu_1 - \lambda_2) X_3 + \mg \end{align*} and hence $\lambda_i = - \mu_i$ for $i = 1, 2$. Since $D^\prime$ is diagonalizable over $\mathbb{R}$, this implies that $\lambda_1 = \lambda_2 = 0$. As the complex rank of $\ngo$ is equal to $1$, the derivation $D^\prime$ is conjugate over $\mathbb{C}$ to a multiple of $D$, which implies that in fact $D^\prime = 0$. We conclude that $\ngo$ has real rank $0$ but complex rank $1$.
\end{example}
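The Jacobi identity claimed in the preceding example can be verified mechanically from the structure constants. A minimal sketch (ours; the basis indexing $X_1,\dots,X_5,Y_1,\dots,Y_5,Z \mapsto 1,\dots,11$ and all names are our choices):

```python
from itertools import combinations
from collections import defaultdict

# Structure constants, transcribed by hand from the bracket table above.
# Basis indexing (ours): X1..X5 -> 1..5, Y1..Y5 -> 6..10, Z -> 11.
B = {(1,2): {3: 1}, (1,3): {4: 1}, (1,4): {5: 1}, (2,3): {5: 1},
     (6,7): {3: -1}, (6,8): {4: -1}, (6,9): {5: -1}, (7,8): {5: -1},
     (1,7): {8: 1}, (1,8): {9: 1}, (1,9): {10: 1}, (2,8): {10: 1},
     (6,2): {8: 1}, (6,3): {9: 1}, (6,4): {10: 1}, (7,3): {10: 1},
     (1,6): {11: 1}, (2,7): {11: 1}}

def bracket(i, j):
    """[e_i, e_j] as a sparse vector, extended antisymmetrically."""
    if (i, j) in B:
        return B[(i, j)]
    if (j, i) in B:
        return {k: -c for k, c in B[(j, i)].items()}
    return {}

def jacobi_ok():
    """Check [[i,j],k] + [[j,k],i] + [[k,i],j] = 0 for all basis triples."""
    for i, j, k in combinations(range(1, 12), 3):
        total = defaultdict(int)
        for a, b, c in ((i, j, k), (j, k, i), (k, i, j)):
            for m, cm in bracket(a, b).items():
                for n, cn in bracket(m, c).items():
                    total[n] += cm * cn
        if any(v != 0 for v in total.values()):
            return False
    return True

assert jacobi_ok()
```

The same double loop works for any algebra given by a sparse bracket table.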
The above is therefore an example of a nilpotent Lie algebra which is neither characteristically nilpotent nor a RN-nilradical. The following example shows that the existence of a nonzero diagonalizable derivation is not sufficient to be a RN-nilradical either.
\begin{example}\label{ex3} Let $\ngo$ be the Lie algebra of dimension $11$ with basis $X_1, \ldots, X_5, Y_1, \ldots, Y_5, Z$ and bracket \begin{align*} [X_1, X_2] &= X_3 &[X_1, X_3] &= X_4 &[X_1, X_4] &= X_5 &[X_2, X_3] &= X_5 \\ [Y_1, Y_2] &= Y_3 &[Y_1, Y_3] &= Y_4 &[Y_1, Y_4] &= Y_5 &[Y_2, Y_3] &= Y_5 \\ [X_1, Y_1] &= Z &[X_2, Y_2] &= Z. & & & & \end{align*} Consider the derivation $D$ given by $D(X_i) = i X_i$, $D(Y_j) = - j Y_j$ and $D(Z) = 0$ and let $D^\prime$ be a real semisimple derivation which commutes with $D$. Then $D^\prime$ is diagonal in this basis, since $D$ has $11$ different eigenvalues. From the first four brackets between the $X_i$ it follows that $D^\prime(X_i) = i \lambda X_i$ for some $\lambda \in \mathbb{R}$. Similarly, for the $Y_j$ we find that $D^\prime(Y_j) = j \mu Y_j$ for some $\mu \in \mathbb{R}$. The last two brackets, leading to the vector $Z$, imply that $\lambda + \mu = 2 \lambda + 2 \mu$, or equivalently that $D^\prime = \lambda D$. We conclude that $\rank(\ngo)=1$; however, every derivation has trace $0$, i.e.\ $\phi_\ngo=0$. \end{example}
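The computation behind the example above reduces to a linear system: a diagonal map $\Diag(d_1,\dots,d_{11})$ is a derivation precisely when $d_i+d_j=d_k$ for each bracket, and the claim is that the solution space is one-dimensional, spanned by the traceless $D=\Diag(1,\dots,5,-1,\dots,-5,0)$. A small sketch of this check (ours; the $0$-based indexing is our choice):

```python
import numpy as np

# Diagonal derivations of the algebra in the example above form the nullspace
# of the linear system d_i + d_j - d_k = 0, one equation per bracket.
# Basis indexing (ours): X1..X5 -> 0..4, Y1..Y5 -> 5..9, Z -> 10.
brackets = [(0,1,2), (0,2,3), (0,3,4), (1,2,4),     # X-part
            (5,6,7), (5,7,8), (5,8,9), (6,7,9),     # Y-part
            (0,5,10), (1,6,10)]                     # [X1,Y1] = [X2,Y2] = Z

A = np.zeros((len(brackets), 11))
for r, (i, j, k) in enumerate(brackets):
    A[r, i] += 1; A[r, j] += 1; A[r, k] -= 1

# Rank 10 on 11 unknowns: the nullspace is 1-dimensional ...
assert np.linalg.matrix_rank(A) == 10
# ... and it is spanned by the traceless derivation D from the example.
D = np.array([1, 2, 3, 4, 5, -1, -2, -3, -4, -5, 0], float)
assert np.allclose(A @ D, 0) and D.sum() == 0
```

This confirms that the space of diagonal derivations is exactly $\RR D$, with $\tr{D}=0$.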
All this suggests a reformulation of question (Q2) by adding the condition $\phi_\ngo\ne 0$ on the nilpotent Lie algebras involved. The only other known necessary condition for being a RN-nilradical was obtained in \cite{NklNkn}: there must exist a derivation $D$ with $\tr{D}>0$ whose restriction to the center $\zg(\ngo)$ of $\ngo$ is positive, in the sense that $D^{\RR}|_{\zg(\ngo)}>0$, or equivalently, the eigenvalues of $D|_{\zg(\ngo)}$ all have positive real part (see \cite[Theorem 2, (1)]{NklNkn}). Using this obstruction, we now prove that $\phi_\ngo\ne 0$ is still not sufficient. More precisely, the following proposition shows that the two sufficient conditions to be a RN-nilradical mentioned above (i.e.\ being $2$-step, and being non-characteristically nilpotent of dimension $\leq 7$) are actually sharp. Furthermore, we exhibit a curve of nilpotent Lie algebras which are not RN-nilradicals. Recall that any real semisimple derivation belongs to some maximal torus, and is therefore always conjugate to some derivation in a given maximal torus.
\begin{proposition}\label{ex1ex2ex5} There exist nilpotent Lie algebras such that $\phi_\ngo\ne 0$ but any real semisimple derivation has a zero eigenvalue on the center and with the following properties: \begin{itemize} \item[(i)] $\dim{\ngo}=8$, $\ngo$ is $5$-step nilpotent, the dimensions of the descendent central series are $(8,5,4,3,1)$, $\dim{\zg(\ngo)}=2$, $\rank(\ngo)=1$ and $\Diag(0,1,0,1,1,1,1,0)\in\Der(\ngo)$.
\item[(ii)] $\dim{\ngo}=10$, $\ngo$ is $3$-step nilpotent with descendent central series dimensions $(10,6,2)$, $\dim{\zg(\ngo)}=3$, $\rank(\ngo)=2$ and $$\Diag(0,0,0,1,1,1,0,0,0,0), \; \Diag(0,0,0,0,0,0,1,1,1,1)\in\Der(\ngo).$$
\item[(iii)] A continuous family of pairwise non-isomorphic $13$-dimensional $6$-step nilpotent Lie algebras such that $\dim{\zg(\ngo)}=3$, $\rank(\ngo)=1$ and
$$
\Diag(1,2,3,4,5,6,7,-1,-2,-3,-4,-5,0)\in\Der(\ngo).
$$ \end{itemize} \end{proposition}
\begin{proof} Part (i). Consider the Lie algebra $\ngo$ of dimension $8$ with basis $X_1, \ldots, X_7, Y$ and bracket \begin{align*} [X_1, X_2] &= X_4 &[X_1, X_4] &= X_5 &[X_1, X_5] &= X_6 & [X_1, X_6] &= X_7\\ [X_2, X_3] &= X_6 + X_7 &[X_3, X_4] &= -X_7 &[X_1,X_3] &= Y. \end{align*} The center is $\zg(\ngo)=\la X_7,Y\ra$. Note that the Lie algebra $\ngo_X = \faktor{\ngo}{\langle Y \rangle}$ has rank $1$, as it is equal to the Lie algebra $\mathcal{G}_{7,1.01(ii)}$ of \cite{Mgn}. Consider the derivation $D: \ngo \to \ngo$ given by $D(X_1) = D(X_3) = 0$ and $D(X_2) = X_2$. Every derivation $D^\prime$ which commutes with $D$ satisfies $D^\prime(Y) = \mu Y$, since $\langle Y \rangle$ is equal to the intersection of $[\ngo,\ngo]$ and the eigenspace of $D$ of eigenvalue $0$. So, by considering the map induced by $D^\prime$ on $\faktor{\ngo}{\langle Y \rangle}$ one sees that $ D^\prime(X_1), D^\prime(X_3) \in \langle Y \rangle$. In particular, \begin{align*} D^\prime(Y) = [D^\prime(X_1),X_3] + [X_1,D^\prime(X_3)] = 0. \end{align*} If $D^\prime$ is real semisimple, then $D^\prime(X_1) = D^\prime(X_3) = 0$ and hence $D^\prime = \lambda D$ for some $\lambda \in \mathbb{R}$. So every real semisimple derivation has eigenvalue $0$ on the center and the proposition follows.
\noindent Part (ii). Let $\ngo$ be the Lie algebra with basis $X_1, X_2, X_3, Y_1, Y_2, Y_3, Z_1, Z_2, Z_3, Z_4$ and bracket \begin{align*} [X_1, Y_1] &= Y_2 &[X_1, Y_2] &= Y_3 &[X_2, Y_1] &= Y_3\\ [X_1, Z_1] &= Z_2 &[X_2, Z_1] &= Z_3 &[X_1, Z_2] &= Z_4\\ [X_2, Z_3] &= Z_4 &[X_1, X_2] &= X_3. & & \end{align*} The center is generated by $X_3,Y_3,Z_4$. Let $D$ be the derivation given by $D(X_i) = 0$, $D(Y_i) = Y_i$, $D(Z_j) = 2 Z_j$ for all $i \in \{1,2,3\}, j \in \{1,2,3,4\}$. If $D^\prime$ is a real semisimple derivation commuting with $D$, then $D^\prime(X_3) = \lambda_3 X_3$ for some $\lambda_3 \in \mathbb{R}$ since $X_3$ spans the intersection of the eigenspace of $D$ for eigenvalue $0$ and $[\ngo,\ngo]$. We will demonstrate that $\lambda_3 = 0$, which implies the proposition.
Note that, since the basis is nice, we can assume that $D^\prime$ is a diagonal derivation. Write $D^\prime(X_i) = \lambda_i X_i$, $D^\prime(Y_1) = \mu Y_1$ and $D^\prime(Z_1) = \nu Z_1$. By applying $D^\prime$ to the second and the third equation, we get $$2 \lambda_1 + \mu = \lambda_2 + \mu.$$ Similarly, by applying $D^\prime$ to the sixth and seventh equation we get $$\nu + 2 \lambda_1 = \nu + 2 \lambda_2.$$ Hence $\lambda_1=\lambda_2 = 0$ and therefore also $\lambda_3 = \lambda_1 + \lambda_2 = 0$. The other parts follow immediately.
\noindent Part (iii). Finally, consider the Lie algebra $\ngo_t$ with basis $X_1, \ldots, X_7, Y_1, \ldots, Y_5, Z$ and bracket \begin{align*} [X_1, X_2] &= X_3 &[X_1, X_3] &= X_4 &[X_1, X_4] &= X_5 \\ [X_1, X_5] &= X_6 &[X_1, X_6] &= X_7 &[X_2, X_3] &= X_5 \\ [X_2, X_4] &= X_6 &[X_2, X_5] &= t X_7 &[X_3, X_4] &= (1-t) X_7\\ [Y_1, Y_2] &= Y_3 &[Y_1, Y_3] &= Y_4 & & \\ [Y_1, Y_4] &= Y_5 &[Y_2, Y_3] &= Y_5 & & \\ [X_1, Y_1] &= Z &[X_2, Y_2] &= Z. \end{align*} The center is $\zg(\ngo)=\la X_7,Y_5,Z\ra$. Let $D$ be the derivation given by $D(X_i) = i X_i$, $D(Y_j) = - j Y_j$ and $D(Z) = 0$. Similarly to Example \ref{ex3}, one can show that this Lie algebra has rank $1$, using that both the Lie algebras $\ngo_X$ and $\ngo_Y$ have rank one, where $\ngo_X$ and $\ngo_Y$ are the subalgebras spanned by the vectors $X_i$ and $Y_j$ respectively. Hence every real semisimple derivation is conjugate to $\lambda D$ for some $\lambda \in \mathbb{R}$ and will have an eigenvalue $0$ on the center.
Now we show that the Lie algebras of (iii) are pairwise non-isomorphic and thus give us a one-parameter family of examples. We denote $\gamma_2(\ngo):=[\ngo,\ngo]$, $\gamma_3(\ngo):=[\ngo,[\ngo,\ngo]]$ and so on. Note that $\gamma_4(\ngo_t) = \langle X_5, X_6, X_7, Y_5 \rangle$ and thus the centralizer is given by $$ C(\gamma_4(\ngo_t)) = \langle X_3, X_4, X_5, X_6, X_7, Y_1, Y_2, Y_3, Y_4, Y_5, Z \rangle. $$ Now define the subspaces \begin{align*} U &:= [ C(\gamma_4(\ngo_t)), C(\gamma_4(\ngo_t))] = \langle X_7, Y_3, Y_4, Y_5 \rangle, \\ V &:= C(U) = \langle X_1, \ldots, X_7, Y_3, Y_4, Y_5, Z \rangle. \end{align*} Similarly, $\gamma_5(\ngo_t) = \langle X_6, X_7 \rangle$ and $$W := C(\gamma_5(\ngo_t)) \cap V = \langle X_2, \ldots, X_7, Y_3, Y_4, Y_5, Z \rangle.$$
Let $\varphi: \ngo_s \to \ngo_t$ be an isomorphism and $V, W \subseteq \ngo_s$, $V^\prime, W^\prime \subseteq \ngo_t$ the subspaces as constructed above. These subspaces are characteristic, in the sense that $\varphi(V) = V^\prime$ and $\varphi(W) = W^\prime$. Let $\lambda, \mu, a \in \mathbb{R}$ such that $\varphi(X_1) = \lambda X_1^\prime + a X^\prime_2 + \gamma_2(\ngo_t)$ and $\varphi(X_2) = \mu X_2^\prime + \gamma_2(\ngo_t)$. A computation shows that $\mu = \lambda^2$ and that $$\varphi(X_i) = \lambda^i X_i^\prime + \gamma_{i}(\ngo_t)$$ for all $i \geq 2$. So in particular, we get that $$s \lambda^7 X_7^\prime = \varphi(s X_7) = \varphi([X_2,X_5]) = [\varphi(X_2),\varphi(X_5)] = \lambda^7 [X_2^\prime,X_5^\prime] = t \lambda^7 X_7^\prime.$$ Because $\lambda \neq 0$ the claim follows. \end{proof}
In view of the above proposition, besides $\phi_\ngo\ne 0$, we may add to question (Q2) the existence of a non-singular derivation. The following proposition shows that this does not suffice either.
\begin{proposition}\label{ex8ex7} There exist nilpotent Lie algebras with $\phi_\ngo\ne 0$ and the following properties: \begin{itemize} \item[(i)] $\dim{\ngo}=13$, $\ngo$ is $5$-step nilpotent, $\dim{\zg(\ngo)}=3$, $\rank(\ngo)=1$ and $$
D=\Diag(1,2,3,4,5,6,7,-1,-2,-3,-4,-5,1)\in\Der(\ngo), \qquad D|_{\zg(\ngo)}=\Diag(7,-5,1). $$ \item[(ii)] $\dim{\ngo}=17$, $\ngo$ is $5$-step nilpotent, $\dim{\zg(\ngo)}=4$, $\rank(\ngo)=1$ and $$ \begin{array}{c}
D=\Diag(-1,-2,-3,-4,-5,-6,-7,1,2,3,4,5,6,7,-1,2,1)\in\Der(\ngo), \\ \\
D|_{\zg(\ngo)}=\Diag(-7,7,-1,1), \qquad \tr{D}=2. \end{array} $$ \end{itemize} \end{proposition}
\begin{proof} Part (i). Let $\ngo$ be the Lie algebra with nice basis $X_1, \ldots, X_{7}, Y_1, \ldots, Y_5, Z$ and bracket
\begin{align*}
[X_1, X_3] &= X_4 &[X_1, X_4] &= X_5 &[X_1, X_5] &= X_6 \\
[X_1, X_6] &= X_7 &[X_2, X_3] &= X_5 &[X_2, X_4] &= X_6 \\
[X_3, X_4] &= -X_7 &[X_2, X_5] &= X_7 & & \\
[Y_1, Y_2] &= Y_3 &[Y_1, Y_3] &= Y_4 &[Y_1, Y_4] &= Y_5\\
[Y_2, Y_3] &= Y_5 &[X_2, Y_1] &= Z &[X_3, Y_2] &= Z.
\end{align*} The center is $\zg(\ngo)=\la X_7,Y_5,Z\ra$. Let $\ngo_X$, $\ngo_Y$ and $\ngo_Z$ be the vector spaces spanned by the vectors $X_i$, $Y_j$ and $Z$ respectively. These are all Lie subalgebras of $\ngo$ of rank $1$, which follows from \cite{Mgn} or from an explicit computation. Consider the invertible derivation $D$ defined as $D(X_i) = i X_i, D(Y_j) = -j Y_j$ and $D(Z) = Z$ and let $D^\prime$ be any real semisimple derivation commuting with $D$. The subalgebra $\ngo_Z$ is invariant under $D^\prime$ since it is the intersection of $[\ngo,\ngo]$ and the eigenspace of $D$ of eigenvalue $1$ and similarly, $$ D^\prime(\ngo_X \oplus \ngo_Z) \subseteq \ngo_X \oplus \ngo_Z, \qquad D^\prime(\ngo_Y) \subseteq \ngo_Y. $$
Consider now the maps induced by $D^\prime$ on $\faktor{\ngo} {\ngo_Y \oplus \ngo_Z } \approx \ngo_X$ and $\faktor{\ngo}{ \ngo_X \oplus \ngo_Z } \approx \ngo_Y$. Since these quotients have rank one, we find that $D^\prime(X_i) \in \lambda i X_i + \ngo_Z$ and $D^\prime(Y_j) \in - \mu j Y_j + \ngo_Z$ for some $\lambda, \mu \in \mathbb{R}$ and every $1 \leq i \leq 7, 1 \leq j \leq 5$. Now \begin{align*} D^\prime(Z) &= [D^\prime(X_3), Y_2] + [X_3,D^\prime(Y_2)] = 3 \lambda Z - 2 \mu Z \\ & = [D^\prime(X_2), Y_1] + [X_2,D^\prime(Y_1)] = 2 \lambda Z - \mu Z, \end{align*} and hence $\lambda = \mu$. Moreover, $X_1$ and $Z$ are eigenvectors of $D^\prime$ with eigenvalue $\lambda$, so $D^\prime = \lambda D$ since it is real semisimple.
\noindent Part (ii). Define the Lie algebra $\ngo$ with nice basis $X_1, \ldots, X_7, Y_1, \ldots, Y_7, Z_1, Z_2, Z_3$ and bracket
\begin{align*}
[X_1, X_3] &= X_4 &[X_1, X_4] &= X_5 &[X_1, X_5] &= X_6\\
[X_1, X_6] &= X_7 &[X_2, X_3] &= X_5 &[X_2, X_4] &= X_6\\
[X_3, X_4] &= -X_7 &[X_2, X_5] &= X_7 & & \\
[Y_1, Y_3] &= Y_4 &[Y_1, Y_4] &= Y_5 &[Y_1, Y_5] &= Y_6\\
[Y_1, Y_6] &= Y_7 &[Y_2, Y_3] &= Y_5 &[Y_2, Y_4] &= Y_6\\
[Y_3, Y_4] &= -Y_7 &[Y_2, Y_5] &= Y_7 & & \\
[X_3, Y_2] &= Z_1 &[X_2, Y_1] &= Z_1 & &\\
[Y_3, X_1] &= Z_2 &[Z_2, X_1] &= Z_3. & &
\end{align*} Define the subalgebras $\ngo_X$ and $\ngo_Y$ as in (i); then almost the same computations show that $\ngo$ has rank $1$. Any real semisimple derivation is conjugate to a multiple of the derivation $D$ given by $D(X_i) = i X_i$, $D(Y_j)= -j Y_j$ and hence $D(Z_1) = Z_1, D(Z_2) = -2 Z_2, D(Z_3) = -Z_3$. The center is the vector space spanned by $X_7, Y_7, Z_1$ and $Z_3$ and thus $D$ has trace $0$ when restricted to the center. Note that $\tr{D} = -2 \neq 0$. \end{proof}
Note that none of the Lie algebras in Propositions \ref{ex1ex2ex5} and \ref{ex8ex7} can be a RN-nilradical, since every non-trivial real semisimple derivation has either a zero or a negative eigenvalue on the center. It follows from Theorem \ref{dim7} that a nilpotent Lie algebra all of whose real semisimple derivations have a zero eigenvalue can still be a RN-nilradical. We may ask whether the existence of a negative eigenvalue for every real semisimple derivation could be an obstruction to being a RN-nilradical. The answer is no, as the following example shows.
\begin{example}\label{ex9} Consider the Lie algebra $\ngo$ with nice basis $X_1, \ldots, X_{6}, Y_1, \ldots, Y_4$ and bracket \begin{align*} [X_1, X_2] &= X_3 &[X_1, X_3] &= X_4 &[X_1, X_4] &= X_5\\ [X_1, X_5] &= X_6 &[X_2, X_3] &= X_6 & & \\ [X_1, Y_1] &= Y_2 &[X_1, Y_2] &= Y_4 & [X_2, Y_1] &= Y_3 &[Y_1, Y_3] &= Y_4. \end{align*} So $\dim{\ngo}=10$, $\ngo$ is $5$-step nilpotent with descendent central series dimensions $(10,7,4,2,1)$ and $\zg(\ngo)=\la X_6,Y_4\ra$. Let $D$ be the derivation with $D(X_1) = X_1, D(X_2) = 3 X_2$ and $D(Y_1) = - Y_1$. We show that $\ngo$ has rank $1$. Let $D^\prime$ be a real semisimple derivation which commutes with $D$. First assume that $D^\prime$ is diagonal. Write $D^\prime(X_1) = \lambda_1 X_1, D^\prime(X_2) = \lambda_2 X_2$ and $D^\prime(Y_1) = \mu Y_1$ for $\lambda_1, \lambda_2, \mu \in \mathbb{R}$. By applying $D^\prime$ to brackets $4$ and $5$, we find that $\lambda_2 = 3 \lambda_1$. Furthermore, the brackets which lead to $Y_4$ show that $\mu = - \lambda_1$. So $D^\prime = \lambda_1 D$ and the claim follows. Now let $D^\prime$ be a general real semisimple derivation which commutes with $D$. Since the basis is nice, the diagonal part is also a derivation which commutes with $D$ and hence the diagonal is equal to $\lambda_1 D$ for some $\lambda_1 \in \mathbb{R}$. Since $D^\prime$ is real semisimple, this implies that $D^\prime$ is equal to its diagonal.
Finally, we show that $\ngo$ is a RN-nilradical. Write $F_1, F_2, F_3$ for the weights corresponding to the brackets $[X_1,Y_2] = Y_4, [X_2,Y_1] = Y_3, [Y_1,Y_3] = Y_4$. Note that for $$ M := \frac{1}{6} F_1 + \frac{2}{3} F_2 + \frac{2}{3} F_3 \in \RR_{>0}\CH_{\lb}, $$ it holds that $M(Y_1) = - \frac{4}{3} Y_1$, $M(Y_2) = - \frac{1}{6} Y_2$, $M(Y_3) = 0$, $M(Y_4) = \frac{5}{6} Y_4$ and $MX_i=m_iX_i$ with $m_i\leq 0$ for all $i$. We conclude that $D \in M + \tg^n_{>0} \subseteq \RR_{>0}\CH_{\lb} + \tg^n_{>0} $ and thus Corollary \ref{cor-nice} implies that $\ngo$ is a RN-nilradical. \end{example}
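The arithmetic behind the choice of $M$ in the example above can be double-checked with exact rational arithmetic. A small sketch (ours; the basis ordering $X_1,\dots,X_6,Y_1,\dots,Y_4$ and the explicit extension of $D$ from the generators to the whole basis are our own choices):

```python
from fractions import Fraction as Fr

# Weights F_{ij}^k as diagonal vectors: -1 at positions i and j, +1 at k.
# Basis ordering (ours): X1..X6 -> 1..6, Y1..Y4 -> 7..10.
def weight(i, j, k, n=10):
    v = [Fr(0)] * n
    v[i-1] -= 1; v[j-1] -= 1; v[k-1] += 1
    return v

F1 = weight(1, 8, 10)   # [X1, Y2] = Y4
F2 = weight(2, 7, 9)    # [X2, Y1] = Y3
F3 = weight(7, 9, 10)   # [Y1, Y3] = Y4

# M = (1/6) F1 + (2/3) F2 + (2/3) F3, as in the example.
M = [Fr(1,6)*a + Fr(2,3)*b + Fr(2,3)*c for a, b, c in zip(F1, F2, F3)]

# D extended to the whole basis from D(X1)=X1, D(X2)=3X2, D(Y1)=-Y1 (ours).
D = [Fr(x) for x in (1, 3, 4, 5, 6, 7, -1, 0, 2, 1)]

# The values of M claimed in the text, and D - M > 0 on every basis vector.
assert (M[6], M[7], M[8], M[9]) == (Fr(-4,3), Fr(-1,6), Fr(0), Fr(5,6))
assert all(m <= 0 for m in M[:6])                 # M X_i = m_i X_i, m_i <= 0
assert all(d - m > 0 for d, m in zip(D, M))       # D in M + t^n_{>0}
```

Exact fractions avoid any floating-point ambiguity in the strict inequality $D-M>0$.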
We now show that, unexpectedly, a characteristically nilpotent Lie algebra can admit a nice basis.
\begin{example}\label{ex4} We give two examples of dimension $12$ with a nice basis. The first example $\ngo_1$ has basis $X_1, X_2, X_3, Y_1, Y_2, Y_3, Z_1, Z_2, Z_3, U_1, U_2, U_3$ and bracket \begin{align*} [X_1, X_2] &= Y_1 &[X_2, X_3] &= Y_2 &[X_3, X_1] &= Y_3 \\ [X_1, Y_1] &= Z_1 &[X_2, Y_2] &= Z_2 &[X_3, Y_3] &= Z_3 \\ [X_1, Z_1] &= U_1 &[X_2, Z_2] &= U_2 &[X_3, Z_3] &= U_3 \\ [X_1, Y_3] &= U_3 &[X_2, Y_1] &= U_1 &[X_3, Y_2] &= U_2. \end{align*} So $\ngo_1$ is $4$-step nilpotent with descendent central series dimensions $(12,9,6,3)$ and $\zg(\ngo_1)=\la U_1,U_2,U_3\ra$. Let $D$ be any derivation of $\ngo_1$. The diagonal of $D$ is again a derivation and an easy computation shows that this must be $0$.
Now write $D(X_1) = a X_2 + b X_3 + V$ and $D(Y_2) = c Y_1 + d Y_3 + e Z_1 + W$ where $V$ and $W$ are a linear combination of the other basis vectors. Consider $$0 = D([X_1, Y_2]) = [D(X_1), Y_2] + [X_1,D(Y_2)] = a Z_2 + b U_2 + c Z_1 + d U_3 + e U_1,$$ which implies that $a = b = 0$. A similar computation for $X_2$ and $X_3$ shows that $$D(\ngo_1) \subseteq [\ngo_1,\ngo_1]$$ which implies that $\ngo_1$ is characteristically nilpotent.
For the second example, we consider the Lie algebra $\ngo_2$ with basis $X_1, \ldots, X_5, Y_1, \ldots, Y_5, Z_1, Z_2$ and bracket \begin{align*} [X_1, X_2] &= X_3 &[X_1, X_3] &= X_4 &[X_1, X_4] &= X_5 &[X_2, X_3] &= X_5 \\ [Y_1, Y_2] &= Y_3 &[Y_1, Y_3] &= Y_4 &[Y_1, Y_4] &= Y_5 &[Y_2, Y_3] &= Y_5 \\ [X_1, Y_1] &= Z_1 &[X_2, Y_2] &= Z_1 &[X_1, Y_2] &= Z_2 &[X_2, Y_1] &= Z_2. \end{align*} Let $D$ be any derivation of $\ngo_2$, then again the diagonal is $0$ by an easy computation. Now write $D(X_1) = a X_2 + b Y_1 + c Y_2 + V$ and $D(X_2) = a^\prime X_1 + b^\prime Y_1 + c^\prime Y_2 + W$ with $V, W \in \gamma_2(\ngo_2)$, then \begin{align*} 0 = D([X_1,Y_3]) = [D(X_1),Y_3] + [X_1,D(Y_3)] = b Y_4 + c Y_5 + d_1 X_4 + d_2 X_5\\ 0 = D([X_2,Y_3]) = [D(X_2),Y_3] + [X_2,D(Y_3)] = b^\prime Y_4 + c^\prime Y_5 + d_3 X_4 + d_4 X_5 \end{align*} for some $d_i \in \mathbb{R}$. Hence $b = c = b^\prime = c^\prime = 0$. Now consider $$D(X_5) = D([X_2,X_3]) = [D(X_2),X_3] + [X_2,D(X_3)] = a^\prime X_4 + d^\prime X_5$$ for some $d^\prime \in \mathbb{R}$ and hence $a^\prime = 0$, since $D(\gamma_4(\ngo_2)) \subseteq \gamma_4(\ngo_2)$. A similar computation for $Y_1$ and $Y_2$ shows that $D$ is nilpotent since $D^2(\ngo_2) \subseteq \gamma_2(\ngo_2)$. With some more work, one can show that $D(\ngo_2) \subseteq [\ngo_2,\ngo_2]$, but since we do not need this fact, we omit the proof. \end{example}
\end{document}
\begin{document}
\frontmatter \pagestyle{headings}
\title{Maximum Entropy Reconstruction for Discrete Distributions with Unbounded Support}
\author{Alexander Andreychenko \and Linar Mikeev \and Verena Wolf}
\institute{Saarland University, Saarbr\"ucken, Germany\\ \email{[email protected]}}
\maketitle
\begin{abstract} The classical problem of moments is addressed by the maximum entropy approach for one-dimensional discrete distributions. A numerical technique of adaptive support approximation is proposed to reconstruct the distributions in the region where the main part of the probability mass is located. \end{abstract}
\keywords{maximum entropy, moment problem}
\section{Introduction} \label{sec:intro} In stochastic chemical kinetics, prior information regarding the properties of the distribution (e.g.\ that it is approximately normal) is not directly accessible, and in such cases recovering the probability distribution from its moment description is non-trivial. In fact, it turns out that this problem, known as the classical moment problem, has a long history in other application domains, and only recently have very efficient methods for the reconstruction of the distribution become available.
Given a number of moments of a random variable, there is in general no unique solution for the corresponding distribution. However, it is possible to define a sequence of distributions that converges to the true one as the number of constraints approaches infinity~\cite{mnatsakanov_recovery_2009}. Conditions for the existence of a solution are well elaborated (such as Krein's and Carleman's conditions) but they do not provide a direct algorithmic way to construct the reconstruction. Therefore, Pad\'e approximation~\cite{mead1984maximum} and the inverse Laplace transform~\cite{Chauveau1994186} have been considered, but they turned out to work only in restricted cases and to require a large number of constraints. Similar difficulties are encountered when
lower and upper bounds for the probability distribution are derived~\cite{gavriliadis_moment_2008,tari_unified_2005,Kaas198687}.
Kernel-based approximation methods have been proposed where one restricts attention to a particular class of distributions~\cite{gavriliadis_truncated_2012,mnatsakanov_recovery_2009,chen_song_2000}. The numerically most stable methods are, however, based on the maximum entropy principle, which has its roots in statistical mechanics and information theory. The idea is to choose, from all distributions that fulfill the moment constraints, the one that maximizes the entropy.
The maximum entropy reconstruction is the least biased estimate that
fulfills the moment constraints, and it makes no assumptions about the missing information. Neither additional knowledge about the shape of the distribution nor a large number of moments is necessary. For instance, if only the first moment (mean) is provided, the result of applying the maximum entropy principle is the exponential distribution; in the case of two moments (mean and variance)
the reconstruction is given by the normal distribution.
Additionally, if experimental data (or simulation traces) are available,
data-driven maximum entropy methods can be applied~\cite{wu_weighted_2009,Golan1996559}. Recently, notable progress has been made in the development of numerical methods for the moment-constrained maximum entropy problem~\cite{abramov2010multidimensional,bandyopadhyay2005maximum,mead1984maximum}, where the main effort is put into transforming the problem in order to overcome the numerical difficulties that arise during the optimization procedure.
In this paper we propose a combination of the classical Newton-based technique to numerically solve the maximum entropy problem with the procedure of distribution support approximation.
\section{Maximum Entropy Reconstruction}\label{sec:maxent}
Moment closure is usually used to approximate the moments
of a stochastic dynamical system over time.
The numerical integration of the corresponding ODE system is
usually faster than a direct integration of the probability distribution
or an estimation of the moments based on Monte Carlo simulations of the system.
However, if one is interested in certain events and only the moments of the distribution are known,
the corresponding probabilities are not directly accessible and
have to be reconstructed based on the moments.
Here, we shortly review standard approaches to reconstruct
one-dimensional
marginal probability distributions
$\pi_i(x_i,t) = P(X_i(t)=x_i)$ of a Markov chain that describes
the dynamics of a chemical reaction network.
The task of approximating multi-dimensional distributions
follows the same lines; however, in our case these techniques
turned out to be ineffective due to numerical difficulties
in the optimization procedure.
Thus, we are given (an approximation of) the moments of the $i$-th population, and obviously
the corresponding distribution is in general not uniquely determined by a finite set of moments.
In order to select one distribution from this set, we apply the maximum entropy
principle~\cite{mead1984maximum}.
In this way we minimize the amount of prior information about the distribution
and avoid any other latent assumption about the distribution.
Taking its roots in statistical mechanics and
thermodynamics~\cite{PhysRev.106.620},
the maximum entropy approach was successfully applied
to solve moment problems in the fields of
climate prediction~\cite{abramov2005information,kleeman2002measuring,roulston2002evaluating},
econometrics~\cite{wu_calculation_2003},
performance analysis~\cite{tari_unified_2005,guiasu1986maximum}
and many others.
\subsection{Maximum Entropy Approach}
The maximum entropy principle says that
among the set of allowed discrete probability distributions $\mathcal{G}$
we choose the probability distribution $q$ that maximizes the entropy $H(g)$ over all distributions $g \in \mathcal{G}$, i.e.,
\begin{equation}
\label{eq:maxShannonProblem}
\begin{array}{c}
q = \arg \max_{g\in \mathcal{G}} H(g)
= \arg \max_{g\in \mathcal{G}} \left( -\sum_x g(x) \ln{g(x)}\right)
,
\end{array}
\end{equation}
where $x$ ranges over all possible states of the discrete state space.
Note that we assume that all distributions are defined on the same state space. In our case the set $\mathcal{G}$
consists of all discrete probability distributions that satisfy the moment constraints. Given a sequence of $M$ non-central moments
$$\Ex{X^k}=\mu_k, \quad k=0,1,\ldots,M,$$
the following constraints are considered
\begin{equation}\label{eq:momentconstr}
\sum_x x^k g(x) = \mu_k, \quad k=0,1,\ldots,M.
\end{equation}
Here, we choose $g$ to be a non-negative function
and add the constraint $\mu_0=1$ in order
to ensure that $g$ is a distribution.
The above problem is a nonlinear constrained optimization problem, which is usually
addressed by the method of Lagrange. Consider the Lagrangian
functional
\begin{equation*}
\begin{array}{c}
\mathcal{L}(g,\lambda)
= H(g) - \sum\limits_{k=0}^{M} \lambda_k
\left( \sum_x x^k g(x) - \mu_k \right),
\end{array}
\end{equation*}
where $\lambda=(\lambda_0,\ldots,\lambda_M)$ are the corresponding Lagrangian multipliers.
It is possible to show that maximizing the unconstrained Lagrangian $\mathcal{L}$
gives a solution to the constrained maximum entropy problem.
Setting the variation of the functional $\mathcal{L}$ with respect to
the unknown distribution to zero provides the general form of $g(x)$:
$$ \frac{\partial \mathcal{L}}{\partial g(x)} = 0
\implies
g(x) = \exp \left( -1 -\sum\limits_{k=0}^{M} \lambda_k x^k \right)
=\frac{1}{Z(\lambda)} \exp \left( -\sum\limits_{k=1}^{M} \lambda_k x^k \right),
$$
where
\begin{equation}
\label{eq:normalization_constant_Z}
Z(\lambda) = e^{1+\lambda_0}
= \displaystyle\sum_x \exp \left( -\sum\limits_{k=1}^{M} \lambda_k x^k \right)
\end{equation}
is a normalization constant.
In the dual approach
we insert the above expression for $g(x)$ into
the Lagrangian and thus transform the problem into an
unconstrained convex minimization problem for the dual function
with respect to the dual variable $\lambda$:
$$\Psi(\lambda)=\ln Z(\lambda) + \sum\limits_{k=1}^{M} \lambda_k \mu_k.$$
According to the Kuhn-Tucker theorem, the solution
$\lambda^* = \arg \min \Psi(\lambda)$ of the minimization problem
for the dual function equals the solution $q$ of the original constrained optimization problem~\eqref{eq:maxShannonProblem}.
\subsection{Maximum Entropy Numerical Approximation}
It is possible to solve the constrained maximization problem in Eq.~\eqref{eq:maxShannonProblem}
for $M \leq 2$ analytically.
For $M>2$,
numerical methods have to be applied
to incorporate the knowledge of moments of order three and higher.
Here we use the Levenberg-Marquardt
method~\cite{transtrum_improvements_2012}
to minimize the dual function
$\Psi(\lambda)$.
An approximate solution $\tilde{q}$ is given by
$$ \tilde{q}(x) = \exp \left( -1 -\sum\limits_{k=0}^{M} \tilde{\lambda}_k x^k \right) , $$
where $\tilde{\lambda}$ is the result of the iteration
\begin{equation}
\label{eq:dualFunctionNewtonOpt}
\lambda^{(\ell+1)} = \lambda^{(\ell)} - \left( H + \gamma^{(\ell)}
\cdot \mathrm{diag}(H) \right)^{-1} \frac{\partial \Psi}{\partial \lambda}.
\end{equation}
The damping factor $\gamma$ is updated
according to the strategy suggested
in~\cite{transtrum_improvements_2012}
and $\lambda^{(\ell)}=(\lambda^{(\ell)}_1,\ldots,\lambda^{(\ell)}_M)$
is an approximation of
the vector $\lambda=(\lambda_1,\ldots,\lambda_M)$ in the $\ell$-th step of the
iteration. We compute $\lambda_0$
as $\lambda_0 = \ln Z - 1$ (see Eq.~\eqref{eq:normalization_constant_Z}).
Initially we choose
$\lambda^{(0)}=(0,\ldots,0)$ and stop when
the solution converges, i.e. when the condition
$ \vert \lambda^{(\ell+1)} - \lambda^{(\ell)} \vert < \delta_\lambda $
is satisfied for a small threshold $\delta_\lambda>0$.
In the $\ell$-th iteration the components of the gradient vector
are approximated by
$\frac{\partial \Psi}{\partial \lambda_i}
\textstyle\approx \mu_i - \frac{1}{Z} \widetilde{\mu}_i$
and the entries of the Hessian matrix are computed as
$$
H_{i,j} = \frac{\partial^2 \Psi}{\partial \lambda_i \partial \lambda_j}
\approx \frac{Z \cdot \widetilde{\mu}_{i+j} - \widetilde{\mu}_i \widetilde{\mu}_j}{Z^2},
\quad i,j=1, \ldots, M.
$$
The approximation $\widetilde{\mu}_i$ of the $i$-th moment is given by
\begin{equation}
\label{eq:momentapp}
\textstyle \widetilde{\mu}_i
=
\sum\limits_{x}
x^i \exp \left( -\sum\nolimits_{k=1}^{M} \lambda_k^{(\ell)} x^k \right),
\quad i=1,\ldots,2M.
\end{equation}
In order to approximate the moments
we need to truncate the infinite sum in Eq.~\eqref{eq:momentapp}.
We refer to Section~\ref{app:B} for a detailed description
of how the distribution support can be
approximated.
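As an illustration, the damped Newton iteration above, together with the gradient and Hessian approximations, can be sketched in Python as follows. This is a minimal sketch on a fixed truncated support; the constant damping factor, the tolerances, and the helper name `maxent_lm` are our own choices, not part of the paper:

```python
import numpy as np

def maxent_lm(mu, support, iters=500, gamma=1e-3, tol=1e-12):
    """Damped Newton iteration for the dual function Psi(lambda).
    mu = [mu_1, ..., mu_M] (mu_0 = 1 is implicit); support = truncated states."""
    M = len(mu)
    x = np.asarray(support, dtype=float)
    powers = np.vstack([x ** k for k in range(1, 2 * M + 1)])  # x^1 .. x^{2M}
    lam = np.zeros(M)
    for _ in range(iters):
        w = np.exp(np.clip(-(powers[:M].T @ lam), -700.0, 700.0))
        Z = w.sum()                          # normalization constant
        mt = powers @ w                      # unnormalized moments mu~_1 .. mu~_{2M}
        grad = np.asarray(mu) - mt[:M] / Z   # dPsi/dlambda_i = mu_i - mu~_i / Z
        H = np.empty((M, M))
        for i in range(M):
            for j in range(M):
                # d^2 Psi / dlambda_i dlambda_j = (Z mu~_{i+j} - mu~_i mu~_j) / Z^2
                H[i, j] = (Z * mt[i + j + 1] - mt[i] * mt[j]) / Z ** 2
        step = np.linalg.solve(H + gamma * np.diag(np.diag(H)), grad)
        lam -= step
        if np.abs(step).max() < tol:         # stopping criterion on the multipliers
            break
    w = np.exp(np.clip(-(powers[:M].T @ lam), -700.0, 700.0))
    return lam, w / w.sum()                  # multipliers and reconstruction q(x)
```

For instance, with the constraints $\mu_1=3$, $\mu_2=12$ on the support $\{0,\ldots,30\}$, the returned $q$ reproduces both moments to numerical precision.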
The convexity~\cite{mead1984maximum}
of the dual function $\Psi(\lambda)$
guarantees the existence of a unique minimum $\lambda^*$,
approximated by $\tilde{\lambda}$.
Theoretical conditions for the existence of the solution
are discussed in detail
in~\cite{tari_unified_2005,Stoyanov_2000,Lin199785}.
A similar analysis for the multivariate case is provided
in~\cite{kleiber_multivariate_2013}.
The iterative procedure in Eq.~\eqref{eq:dualFunctionNewtonOpt}
might however fail due to numerical instabilities
when the inverse of the Hessian is calculated.
The iterative minimization presented
in~\cite{bandyopadhyay2005maximum}
and the Broyden--Fletcher--Goldfarb--Shanno (BFGS)
procedure~\cite{byrd1995limited}
can be used to improve the numerical stability.
In the sequel we denote by $\tilde{\pi}_i(x,t)$
the reconstructed distribution of the $i$-th species
for a given sequence of moments $\mu_0, \ldots, \mu_M$,
i.e. the marginal probability distribution
$\pi_i(x,t) = P \left( X_i(t) = x \right)$.
Note that the reconstruction approach presented above
provides a reasonable approximation of the probabilities
only in high-probability regions.
In order to accurately approximate the tails of the distribution
special methods have been developed~\cite{gavriliadis_truncated_2012}.
\section{Conclusions}\label{sec:conc} As future work, we plan to extend the reconstruction procedure in several ways. First, we want to consider moments of order higher than five. Since in this case the concrete values become very large, it might be advantageous to consider central moments instead, which implies that the reconstruction procedure has to be adapted. Alternatively, we might (instead of algebraic moments) consider other functions of the random variables such as exponential functions~\cite{mnatsakanov_note_2013}, Fup functions~\cite{gotovac_maximum_2009}, and Chebyshev polynomials~\cite{bandyopadhyay2005maximum}.
Another possible extension could address the problem of truncating the support of the distribution such that the reconstruction is applied to a finite support. We expect that in this case the reconstruction will become more accurate since we will not have to rely on the Gauss-Hermite quadrature formula.
For instance, the theory of Christoffel functions~\cite{gavriliadis_truncated_2012} could be used to determine the region where the main part of the probability mass is located.
Finally, we want to improve the approximation for species that are present in very small quantities, since for those species a direct representation of the probabilities is more appropriate than a moment representation. Therefore we plan to consider the conditional moments approach~\cite{MCM_Hasenauer_Wolf}, where we only integrate the moments of species having large molecular counts but keep the discrete probabilities for the species with small populations.
\begin{appendix}
\section{Approximation of the Support} \label{app:B}
During the iteration procedure~\eqref{eq:dualFunctionNewtonOpt}
we need to approximate one-dimensional moments by summing up over all
states $x \in \mathbb{Z}^+$ that have positive probability mass.
However, our case studies involve an infinite number of such states,
so an appropriate truncation has to be performed.
Instead of considering the whole state space $\mathbb{Z}^+$,
we consider a subset $D = \{x_L,\ldots,x_R\}\subset \mathbb{Z}^+$,
where we have to choose such values for $x_L$ and $x_R$
that the iteration procedure converges.
It might fail to converge if the difference $(x_R - x_L)$
is very large, because then the condition number of the matrix
$ \left( H + \gamma^{(\ell)} \cdot \mathrm{diag}(H) \right) $
becomes very large.
To find a reasonable initial guess
$D^{(0)} = \{ x_L^{(0)}, \ldots, x_R^{(0)} \}$
we use the results in~\cite{tari_unified_2005}
and consider the roots of the function $\Delta^0_k(w)$
\begin{equation}
\Delta^0_k(w) =
\left|
\begin{array}{cccc}
\mu_0 & \mu_1 & \cdots & \mu_k \\
\vdots & & & \vdots \\
\mu_{k-1} & \mu_k & \cdots & \mu_{2k-1} \\
1 & w & \cdots & w^k \\
\end{array}
\right|,
\end{equation}
where $k=\lfloor \frac{M}{2} \rfloor$, and $M$ is even.
The initial guess $D^{(0)}$ is defined by
$x_L^{(0)} = \lfloor w_1 \rfloor$ and $x_R^{(0)} = \lceil w_k \rceil$,
where $w_1 < \ldots < w_k$ are real and simple roots
of the equation $\Delta^0_k(w) = 0$.
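A possible implementation of this initial guess expands the determinant $\Delta^0_k(w)$ along its last row $(1, w, \ldots, w^k)$ to obtain polynomial coefficients and then computes the roots numerically. The function name and the root-filtering tolerance below are our own choices:

```python
import numpy as np

def initial_support(mu):
    """Initial support guess [x_L, x_R] from the real roots of Delta^0_k(w).
    mu = [mu_0, ..., mu_M] with M even, k = M // 2."""
    M = len(mu) - 1
    k = M // 2
    # Top block: row i holds (mu_i, ..., mu_{i+k}) for i = 0 .. k-1.
    top = np.array([[mu[i + j] for j in range(k + 1)] for i in range(k)], float)
    # Cofactor expansion along the last row: the coefficient of w^j is
    # (-1)^{k+j} times the minor obtained by deleting column j.
    coeffs = np.zeros(k + 1)
    for j in range(k + 1):
        minor = np.delete(top, j, axis=1)
        coeffs[j] = (-1) ** (k + j) * np.linalg.det(minor)
    roots = np.roots(coeffs[::-1])           # np.roots wants the highest degree first
    real = np.sort(roots[np.abs(roots.imag) < 1e-9].real)
    return int(np.floor(real[0])), int(np.ceil(real[-1]))
```

For a two-point distribution with equal mass at $1.5$ and $4.5$ (moments $1, 3, 11.25, 47.25, 207.5625$), the roots of $\Delta^0_2$ are exactly the two support points, so the guess is $[1, 5]$.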
In the $\ell$-th iteration we check
if the probability of the right-most state $x_R^{(\ell)}$
is reasonably small in comparison to the
maximum value of $\tilde{q}^{(\ell)}(x)$ for $x\in D^{(\ell)}$, i.e.
\begin{equation}
\label{eq:supportInequality}
\tilde{q}^{(\ell)}(x_R^{(\ell)})
< \delta_{\mbox{\scriptsize prob}} \cdot \max\nolimits_{x \in D^{(\ell)}} \tilde{q}^{(\ell)}(x),
\end{equation}
where $\delta_{\mbox{\scriptsize prob}}$ is a small threshold
(for all our experiments we chose $\delta_{\mbox{\scriptsize prob}} = 10^{-3}$).
We extend the support until inequality~\eqref{eq:supportInequality}
is satisfied, adding one new state in each iteration:
\begin{equation}
\left( x_L^{(\ell+1)}, x_R^{(\ell+1)} \right) =
\begin{cases}
\left( \max (0, x_L^{(\ell)} - 1), x_R^{(\ell)} \right), & \ell \text{ is even} \\
\left( x_L^{(\ell)}, x_R^{(\ell)}+1 \right), & \ell \text{ is odd}.
\end{cases}
\end{equation}
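In code, one step of this alternating extension reads as follows (a sketch; we take the even branch to move the left boundary one state down, clipped at zero, and the function name is ours):

```python
def extend_support(x_left, x_right, step):
    """One support-extension step: grow the left boundary on even steps
    (clipped at zero) and the right boundary on odd steps."""
    if step % 2 == 0:
        return max(0, x_left - 1), x_right
    return x_left, x_right + 1
```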
The final results $\tilde{\lambda}$ and $\hat{D}$ of the iteration yield the distribution
$\tilde{q}(x)$ that approximates the marginal distribution of interest.
Please note that $M$ is assumed to be even when we use the function $\Delta_k^0$.
Tari et al.\ also provide an extension of this technique that
accounts for the case where an odd number of moments is known ($M$ is odd)
by considering the function $\Delta^1_z(\eta)$
\begin{equation*}
\Delta^1_z(\eta) =
\left|
\begin{array}{cccc}
\mu_1 - w_1 \mu_0 & \mu_2-w_1 \mu_1 &\cdots & \mu_{z} - w_1 \mu_{z-1} \\
\vdots & \vdots & & \vdots \\
\mu_{z-1} - w_1 \mu_{z-2} & \mu_{z} -w_1 \mu_{z-1} & \cdots & \mu_{2z-2} \! -\! w_1 \mu_{2z-3} \\
1 & \eta & \cdots & \eta^{z-1} \\
\end{array}
\right|,
\end{equation*}
where $z=\lfloor \frac{M}{2} \rfloor + 1$.
Let $W = \{w_1, \ldots, w_k \}$ be the set of the solutions
of $\Delta^0_k(w)=0$
and
$H= \{ \eta_1, \ldots, \eta_z \}$ be
the set of solutions of $\Delta^1_z(\eta) = 0$,
where all the elements of $W$ and $H$ are real and simple.
The first approximation $D^{(0)}$ is then defined by
$x_L^{(0)} = \lfloor \min(w_1, \eta_1) \rfloor$
and $x_R^{(0)} = \lceil \max(w_k, \eta_z) \rceil$.
\paragraph{Alternative method for support approximation extension.}
Instead of adding only one state to the support approximation $D^{(\ell+1)}$
we can use the following heuristic based on
Chebyshev's inequality
$P\{ \vert X - \mu_1 \vert \geq z \} \leq \xi$ with
$\xi = \nicefrac{\sigma^2}{z^2}$, where $\sigma^2 = \mu_2 - \mu_1^2$ is the variance.
We compute $z=\nicefrac{\sigma}{\sqrt{0.1}}$ (where we fix $\xi = 0.1$)
such that the corresponding set
$\widetilde{D} = \{ \lfloor \mu_1-z \rfloor, \ldots, \lceil \mu_1 + z \rceil \}$
includes at least $90\%$ of the probability mass.
\newline
The initial approximation
$D^{(0)}$ is compared to $\widetilde{D}$
by computing the difference $l = \vert z - x_R^{(0)} \vert$.
The latter serves as an increment in the support extension procedure,
i.e. the approximation of the support in iteration $(\ell+1)$
is given by $D^{(\ell+1)} = \{ x_L^{(\ell+1)}, \ldots, x_R^{(\ell+1)} \}$,
where
$x_L^{(\ell+1)} = \max \left( 0, x_L^{(\ell)} - \lceil \nicefrac{l}{2} \rceil \right)$
and
$x_R^{(\ell+1)} = x_R^{(\ell)} + \lceil \nicefrac{l}{2} \rceil$.
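The Chebyshev heuristic can be sketched as follows. We use the standard variance form of the inequality; `var` denotes the variance $\mu_2-\mu_1^2$, which the caller must supply, and all names are our own:

```python
import math

def chebyshev_support(mu1, var, xi=0.1):
    """Support estimate from Chebyshev's inequality
    P(|X - mu1| >= z) <= var / z^2: choosing z = sqrt(var / xi)
    puts at least a (1 - xi) fraction of the mass inside the set."""
    z = math.sqrt(var / xi)
    x_left = max(0, math.floor(mu1 - z))   # clip at zero for Z^+ support
    x_right = math.ceil(mu1 + z)
    return x_left, x_right
```

For example, a distribution with mean 9 and variance 9 yields the set $\{0,\ldots,19\}$ at $\xi = 0.1$.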
\end{appendix}
\end{document} |
\begin{document}
\title{Structure of the interaction and energy transfer between an open quantum system and its environment\\}
\author{Tarek Khalil$^{a}$ \footnote{E-mail address: [email protected]}\\ and\\ Jean Richert$^{b}$ \footnote{E-mail address: [email protected]}\\ $^{a}$ Department of Physics, Faculty of Sciences(V),\\ Lebanese University, Nabatieh, Lebanon\\ $^{b}$ Institut de Physique, Universit\'e de Strasbourg,\\ 3, rue de l'Universit\'e, 67084 Strasbourg Cedex, France} \date{\today} \maketitle \begin{abstract} Due to the coupling of a quantum system to its environment energy can be transfered between the two subsystems in both directions. In the present study we consider this process in a general framework for interactions with different properties and show how these properties govern the exchange. \end{abstract} \maketitle PACS numbers: 02.50.Ey, 02.50.Ga, 03.65.Aa, 05.60.Gg, 42.50.Lz \vskip .2cm Keywords: open quantum system, Markovian and non-Markovian systems, energy exchange in open quantum systems.\\
\section{Introduction}
Quantum systems are generally never completely isolated but interact with an environment with which they may exchange energy and other physical observables. Their properties are naturally affected by their coupling to the external world. The understanding and control of the influence of an environment on a given physical system is of crucial importance in different fields of physics and in technological applications, and has led to a large number of investigations~\cite{reb,aba,kos,gol,cor}. Among the quantities of interest, the energy exchange between a system and its environment is of prime importance. The total system, composed of the considered system and its environment, is closed. It conserves the physical observables, in particular the total energy it contains. However, the existence of an interaction between its two parts leads to possible exchanges between them: energy and other observables can be transferred in both directions. In the present work we study the conditions under which this transfer occurs.
In order to investigate the process, a number of different approaches have been developed~\cite{bel,bag,fl1,fl2,esp}. They make use of the cumulant method as well as other developments~\cite{car,sch,def}. In the following we shall use the cumulant method in order to work out the energy exchange and the speed at which this exchange takes place~\cite{gua}.
In previous studies~\cite{kr1,kr2,kr3} we defined criteria which allow us to classify open quantum systems with respect to their behaviour in the presence of an environment. In order to do so we used a general formulation which relies on the examination of the properties of the density operator of the system and its environment. The dynamical behaviour of this operator is governed by the structure of the Hamiltonians of the system, its environment and the coupling Hamiltonian which acts between them. The method introduces a general form of the total density operator and avoids the determination of explicit solutions of the equation of motion which governs the time evolution of the system. We follow this formalism in order to work out different cases of physical interest concerning the energy exchange process.\\
The analysis is presented in the following way. In section 2 we define the energy exchange and its rate starting from the characteristic function which generates these quantities in terms of its first and second moment. Section 3 introduces a general formal expression of the density operator at the initial time and the structure of the total Hamiltonian of the system, the environment and their interaction. The expressions of the energy exchange and the exchange rate are explicitly written out. In section 4 we analyze and work out two different cases. They exemplify the role of the interaction between the system and the environment which may or may not induce the time divisibility property of the open system. Conclusions are drawn in section 5.
\section{Energy transfer between S and E: the cumulant approach}
Consider a system $S$ coupled to an environment $E$. The time-dependent density operator of the total system is $\hat \rho_{SE}(t)$, and $S$ and $E$ are coupled by an interaction $\hat H_{SE}$. The interaction may generate an energy exchange between the two parts. This quantity can be worked out by means of the cumulant method~\cite{gua}, which was developed in a series of works, first for closed driven systems~\cite{esp} and later extended to open systems; see for instance~\cite{fl1,fl2} and further approaches quoted in~\cite{gua}.
Define the modified density operator
\begin{eqnarray} \rho^{\sim}_{SE}(t,0)=Tr_{E}[{\hat U_{\eta/2}(t,0)\hat \rho_{SE}(0)\hat U^{+}_{-\eta/2}(t,0)}] \label{eq1} \end{eqnarray} where $Tr_{E}$ is the trace over the states in $E$ space and \begin{eqnarray} \hat U_{\eta}(t,0)=\exp(+i\eta \hat H_{SE})\hat U(t,0)\exp(-i\eta \hat H_{SE}) \label{eq2} \end{eqnarray} Here $\hat U(t,0)$ is the time evolution operator of the interacting total system $S+E$.
The characteristic function obtained from the generating function reads \begin{eqnarray} \chi^{(\eta)}(t)=Tr_{S}(\rho^{\sim}_{SE}(t,0)) \label{eq3} \end{eqnarray}
From $\chi^{(\eta)}(t)$ one derives the energy exchange between the environment and the system
\begin{eqnarray}
\Delta E(t)=\left.\frac{d \chi^{(\eta)}(t)}{d (i\eta)}\right|_{\eta=0} \label{eq4} \end{eqnarray} The speed at which the energy flows between the system and its environment is given by
\begin{eqnarray} V_{E}(t)=\left.\frac{\partial \dot \chi^{(\eta)}(t)}{\partial (i\eta)}\right|_{\eta=0} \label{eq5} \end{eqnarray} where $\dot \chi^{(\eta)}(t)$ is the time derivative of $\chi^{(\eta)}(t)$. If the energy flows from the system to the environment, $V_{E}(t)$ is positive; it is negative if the flow is reversed.
\section{Energy flow and speed of energy exchange}
\subsection{The density operator}
At time $t=0$ we choose the system to be in a pure superposition state
\begin{eqnarray}
|\psi(0)\rangle=\sum_{{i_k}}c_{i_k}|i_k\rangle \label{eq6}
\end{eqnarray} where the normalized states $\{|i_{k}\rangle\}$ are eigenstates of the Hamiltonian $\hat H_{S}$. The environment is described in terms of its density matrix chosen as
$\{ |\alpha\rangle d_{\alpha,\alpha}\langle \alpha|\}$ where $\{d_{\alpha,\alpha}\}$ are the statistical weights of the density matrix in a diagonal basis of states $\{\alpha\}$.
Given these bases of states in $S$ space and in $E$ space the density operator of the total system at time $t=0$ is written as~\cite{buz}
\begin{eqnarray} \hat \rho_{SE}(0)=\hat \rho_{S}(0) \otimes \hat \rho_{E}(0) \notag\\
\hat \rho_{S}(0)=\sum_{k,l}|i_{k}\rangle c_{i_{k}}c^{*}_{i_{l}}\langle i_{l}| \notag\\
\hat \rho_{E}(0)=\sum_{\alpha}|\alpha \rangle d_{\alpha,\alpha} \langle \alpha| \label{eq7} \end{eqnarray} The density operator $\hat \rho_{SE}$ describes a system in the absence of an interaction $\hat H_{SE}$ at $t=0$, hence in the absence of an initial entanglement between $S$ and $E$.
The total Hamiltonian $\hat H$ reads
\begin{eqnarray} \hat H=\hat H_{S}+\hat H_{E}+\hat H_{SE} \label{eq8} \end{eqnarray}
and \begin{eqnarray}
\hat H_{S}|i_{k}\rangle=\epsilon_{i_{k}}|i_{k}\rangle \notag\\
\hat H_{E}|\gamma\rangle=E_{\gamma}|\gamma\rangle \label{eq9}
\end{eqnarray} where $\{|i_{k}\rangle\}$ and $\{|\gamma\rangle\}$ are the eigenvector bases of $\hat H _{S}$ and $\hat H_{E}$. At time $t$ the evolution of $S+E$ is given by
\begin{eqnarray} \hat\rho_{S+E}(t)= \hat U(t)\hat \rho_{S+E}(0) \hat U^{+}(t) \label{eq10} \end{eqnarray} where $\hat U(t,0)=e^{-i\hat H t}$ is the evolution operator of $S+E$.
\subsection{Explicit expressions of the energy exchange and the speed of the flow}
Using the definitions given in Eqs. (4) and (5) the energy transfer and speed of the energy flow read
\begin{eqnarray} \Delta E(t)=\frac{1}{2}Tr_{S}Tr_{E}\{[\hat H_{E},\hat U(t,0)]\hat\rho_{S+E}(0)\hat U^{+}(t,0)\}+h.c. \label{eq11} \end{eqnarray} and
\begin{eqnarray} V_{E}(t)=\frac{1}{2}Tr_{S}Tr_{E}\{[\hat H_{E},\frac{d}{dt}\hat U(t,0)]\hat\rho_{S+E}(0)\hat U^{+}(t,0)\}+h.c. \label{eq12} \end{eqnarray}
Developing these expressions in the bases of states given above they read
\begin{eqnarray} \Delta E(t)=\frac{1}{2}\sum_{j,\gamma}\sum_{i_{1},i_{2},\gamma_{1}}c_{i_{1}} c^{*}_{i_{2}}
\langle j \gamma|[\hat H_{E},e^{-i\hat H t}]|i_{1} \gamma_{1}\rangle \notag\\
d_{\gamma_{1},\gamma_{1}}\langle i_{2} \gamma_{1}|e^{+i\hat H t}| j \gamma \rangle +h.c. \label{eq13} \end{eqnarray} and
\begin{eqnarray} V_{E}(t)=\frac{(-i)}{2} \sum_{j,\gamma} \sum_{\alpha_{1},i_{1},i_{2}} c_{i_{1}} c^{*}_{i_{2}} d_{\alpha_{1},\alpha_{1}} \notag\\
\sum_{j_{1}, \gamma_{1}}\{\langle j \gamma|[\hat H_{E},\hat H_{SE}]|j_{1}, \gamma_{1}\rangle
\langle j_{1}, \gamma_{1}|e^{-i\hat H t}|i_{1}\alpha_{1}\rangle
\langle i_{2}\alpha_{1}|e^{+i\hat H t}|j \gamma \rangle \notag\\
+ \langle j \gamma|\hat H|j_{1} \gamma_{1}\rangle
\langle j_{1} \gamma_{1}|[\hat H_{E},e^{-i\hat H t}]|i_{1} \alpha_{1}\rangle
\langle i_{2} \alpha_{1}|e^{+i\hat H t}|j \gamma \rangle \notag\\
- \langle j \gamma|\hat H e^{-i\hat H t}|i_{1} \alpha_{1}\rangle
\langle i_{2}\alpha_{1}|[\hat H_{E}, e^{+i\hat H t}]|j \gamma \rangle \} +h.c. \label{eq14} \end{eqnarray}
\section{Properties of the interaction Hamiltonian}
We now use the general expressions of the energy transfer and its exchange rate in order to test the role of the interaction Hamiltonian in this process. Since $S$ and $E$ are distinct physical systems, their Hamiltonians verify the commutation relation $[\hat H_{S},\hat H_{E}]=0$. Former work~\cite{kr1} has shown that one may consider two cases which are of special interest:\\
(a) $[\hat H_{E},\hat H_{SE}]=0$\\
(b) $[\hat H_{S},\hat H_{SE}]=0$
\subsection{Case (a)}
It has been shown elsewhere~\cite{kr1,kr2} that if $\hat H_{E}$ and $\hat H_{SE}$ commute, the evolution of the system $S$ is characterized by the divisibility property, which is a specific property of Markovian systems. Since the Hamiltonian of the system $S$ commutes with $\hat H_{E}$ by construction, it follows that $\hat H_{E}$ commutes with the whole Hamiltonian $\hat H$, hence also with $\hat U(t,0)$ and its derivatives. Going back to the expressions of $\Delta E(t)$ and $V_{E}(t)$, it comes out that $\Delta E(t)=V_{E}(t)=0$.
There is no energy exchange between $S$ and $E$ in this case. The physical explanation is the following: the divisibility property imposes that the environment stays in a fixed state at a fixed energy which blocks any possible transfer of energy between the two parts of the total system $S+E$. This is due to the fact that $\hat H_{SE}$ is diagonal in $E$ space, hence the considered state $|\gamma \rangle$ stays the same over any time interval.
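This blocking of the energy flow is easy to check numerically. The sketch below (written with NumPy; the dimensions, spectra and statistical weights are arbitrary choices of ours, not a specific physical model) builds an $\hat H_{SE}$ that is diagonal in $E$ space, so that $[\hat H_{E},\hat H_{SE}]=0$, and verifies that $\langle \hat H_{E} \rangle$ is left unchanged by the evolution:

```python
import numpy as np

rng = np.random.default_rng(0)
dS, dE = 3, 4                                   # arbitrary dimensions of S and E
H_S = np.diag(rng.normal(size=dS))
H_E = np.diag(rng.normal(size=dE))
# H_SE acts arbitrarily on S but diagonally on E, so [H_E, H_SE] = 0 (case (a)).
A = rng.normal(size=(dS, dS))
H_SE = np.kron(A + A.T, np.diag(rng.normal(size=dE)))
H = np.kron(H_S, np.eye(dE)) + np.kron(np.eye(dS), H_E) + H_SE
# U(t) = V exp(-i E t) V^+ from the eigendecomposition of the Hermitian H.
t = 1.7
evals, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * evals * t)) @ V.conj().T
# Initial product state rho_S (x) rho_E with a diagonal rho_E, as in Eq. (7).
rho0 = np.kron(np.diag(np.full(dS, 1.0 / dS)), np.diag([0.4, 0.3, 0.2, 0.1]))
rho_t = U @ rho0 @ U.conj().T
HE_full = np.kron(np.eye(dS), H_E)
# Change of the environment energy between t = 0 and t: zero up to round-off.
dE_t = np.trace(HE_full @ rho_t).real - np.trace(HE_full @ rho0).real
print(abs(dE_t) < 1e-10)
```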
\subsection{Evolution of the energy: an impurity immersed in a bosonic condensate}
We introduce the Hamiltonian of a fermionic impurity interacting with a Bose-Einstein condensate ~\cite{chr}. It is given by $\hat H= \hat H_{S}+\hat H_{E}+\hat H_{SE}$ where
\begin{center} \begin{eqnarray} \hat H_{S}=\sum_{\vec k}\epsilon_{\vec k}c^{+}_{\vec k}c_{\vec k} \label{eq15} \end{eqnarray} \end{center} \begin{center} \begin{eqnarray} \hat H_{E}=\sum_{\vec k}e_{\vec k}a^{+}_{\vec k}a_{\vec k}+\frac{1}{2V}\sum_{\vec k_{1}\vec k_{2} \vec q}V_{B}(\vec q)\,a^{+}_{\vec k_{1}+\vec q}a^{+}_{\vec k_{2}-\vec q} a_{\vec k_{2}}a_{\vec k_{1}} \label{eq16} \end{eqnarray} \end{center} \begin{center} \begin{eqnarray} \hat H_{SE}=\frac{1}{V}\sum_{\vec k_{3}\vec k_{4}\vec q}c^{+}_{\vec k_{3}+\vec q}c_{\vec k_{4}} a^{+}_{\vec k_{4}-\vec q}a_{\vec k_{3}} \label{eq17} \end{eqnarray} \end{center} where $[c,c^{+}]$ and $[a,a^{+}]$ are fermion and boson annihilation and creation operators.
We consider the case where the momentum transfer $\vec q=0$. Then one expects that there is no energy exchange between $S$ and $E$. This is indeed so since a simple calculation shows that $[\hat H_{E},\hat H_{SE}]=0$ in this case.
\subsection{Case (b)}
Start from the general expression of $\Delta E(t)$ given by Eq.(13).
Since $[\hat H_{S},\hat H_{SE}]=0$ all the matrix elements are diagonal in $S$ space and the expression takes the form
\begin{eqnarray}
\Delta E(t)=\sum_{j}|c_{j}|^{2}\sum_{\gamma,\gamma_{1}}(E_{\gamma}-E_{\gamma_{1}})
\langle j \gamma|e^{-it \hat H}|j \gamma_{1}\rangle d_{\gamma_{1}\gamma_{1}}
\langle j \gamma_{1}|e^{+it \hat H}|j \gamma \rangle \label{eq18} \end{eqnarray}
In order to follow the evolution of $\Delta E(t)$ in time we determine the expression of $V_{E}(t)$. A somewhat lengthy but straightforward calculation leads to the following expression:
\begin{eqnarray} V_{E}(t)=V^{(1)}_{E}(t)+ c.c. +V^{(2)}_{E}(t)+ c.c. \label{eq19} \end{eqnarray} where
\begin{eqnarray} V^{(1)}_{E}(t)=(-i)\sum_{j}|c_{j}|^{2}\sum_{\gamma,\gamma_{1}}(E_{\gamma}-E_{\gamma_{1}})
\langle j \gamma|\hat H_{SE}|j \gamma_{1}\rangle \notag\\
\sum_{\gamma_{2}}\langle j \gamma_{1}|e^{-it \hat H}|j \gamma_{2}\rangle
d_{\gamma_{2},\gamma_{2}} \langle j \gamma_{2}|e^{+it \hat H}|j \gamma \rangle \label{eq20} \end{eqnarray} and $V^{(1)*}_{E}(t)$ is its complex conjugate. It turns out that $V^{(1)*}_{E}(t)=V^{(1)}_{E}(t)$, which means that $V^{(1)}_{E}(t)$ is real. The second term reads
\begin{eqnarray}
V_{E}^{(2)}(t)= (-i)\sum_{j}|c_{j}|^{2}\sum_{\gamma \gamma_{1}}(E_{\gamma}+\epsilon_{j}) \sum_{\gamma_{2}}(E_{\gamma}-E_{\gamma_{2}}) \notag\\
\langle j \gamma_{1}|e^{-it \hat H}|j \gamma_{2}\rangle
d_{\gamma_{2},\gamma_{2}}\langle j \gamma_{2}|e^{+it \hat H}|j \gamma \rangle \label{eq21} \end{eqnarray}
It is easy to see that $V_{E}^{(2)}(t) +c.c.=0$, hence $V_{E}(t)=2V_{E}^{(1)}(t)$.
For $t=0$
\begin{eqnarray}
\Delta E(0)= \sum_{j}|c_{j}|^{2}\sum_{\gamma,\gamma_{1}}(E_{\gamma}-E_{\gamma_{1}})
\langle j \gamma|j \gamma_{1}\rangle d_{\gamma_{1}\gamma_{1}}
\langle j \gamma_{1}|j \gamma \rangle= \notag\\
\sum_{j}|c_{j}|^{2}\sum_{\gamma,\gamma_{1}}(E_{\gamma}-E_{\gamma_{1}})\delta_{\gamma \gamma_{1}} \label{eq22} \end{eqnarray} Hence $\Delta E(0)=0$ which could have been anticipated from the symmetry property of the expression of $\Delta E(t)$.\\
But contrary to case $(a)$, the energy transfer is now different from zero. It varies with time; hence $\Delta E(t)$ increases or decreases according to the sign of $V_{E}(t)$. This is due to the fact that many channels can now open in $E$ space because $\hat H_{SE}$ is no longer diagonal in this space.
\subsection{Evolution of the energy: two examples}
In order to illustrate this case we develop two models on which we exemplify the time dependence of the energy transfer between a system and its environment when $[\hat H_{S},\hat H_{SE}]=0$.
\begin{itemize}
\item {\bf First example}: we consider a model in which the environment $E$ is a two-level system,
$[|\gamma\rangle=|1\rangle,|2\rangle]$. The Hamiltonian $\hat H$ decomposes into two parts, $\hat H_{0}=\hat H_{S}+\hat H_{SE}$ and $\hat H_{E}$. We consider the case where \begin{eqnarray} [\hat H_{0},[\hat H_{0},\hat H_{E}]]=[\hat H_{E},[\hat H_{0}, \hat H_{E}]]=0 \label{eq23} \end{eqnarray} and $[\hat H_{0},\hat H_{E}]=c\,\hat 1$ where $c$ is a number. Then
\begin{eqnarray} e^{\hat H_{0}+\hat H_{E}}=e^{\hat H_{0}}e^{\hat H_{E}}e^{-c/2} \label{eq24} \end{eqnarray} and
\begin{eqnarray} e^{i(\hat H_{0}+\hat H_{E})t}=e^{i\hat H_{0}t}e^{i\hat H_{E}t}e^{ct^{2}/2} \label{eq25} \end{eqnarray}
The quantities which enter the expressions which follow are defined in Appendix A.
The expressions of $\Delta E(t)$ and $V_{E}(t)$ read
\begin{eqnarray}
\Delta E(t)=e^{ct^{2}}\Delta_{12}\sum_{j}|c_{j}|^{2}(d_{22}-d_{11})[(a_{j}^{12}(t))^{2}+(b_{j}^{12}(t))^{2}] \label{eq26} \end{eqnarray} where $\Delta_{12}=E_{1}-E_{2}$ and $d_{11},d_{22}$ are the weights of the states in $E$ space, and
\begin{eqnarray} V_{E}(t)=2e^{ct^{2}} \Delta_{12}\sum_{j} |c_{j}|^{2}[I^{(j)}_{12}Re(\langle 2|\hat \Omega_{j}(t)|1\rangle) + R^{(j)}_{12}
Im (\langle 2|\hat \Omega_{j}(t)|1\rangle)] \label{eq27} \end{eqnarray} with
\begin{eqnarray} Re\langle 2|\hat \Omega_{j}(t)|1\rangle=a_{j}^{11}(t)[a_{j}^{21}(t)\cos(\Delta_{12}t)+b_{j}^{21}(t) \sin(\Delta_{12}t)]d_{11} \notag\\ +a_{j}^{22}(t)a_{j}^{21}(t)d_{22} \notag\\
Im\langle 2|\hat \Omega_{j}(t)|1\rangle=a_{j}^{11}(t) [-b_{j}^{21}(t)\cos(\Delta_{12}t)+a_{j}^{21}(t)\sin(\Delta_{12}t)]d_{11} \notag\\ +b_{j}^{21}(t)a_{j}^{22}(t)d_{22} \label{eq28} \end{eqnarray}
Both $\Delta E(t)$ and $V_{E}(t)$ are oscillating functions of time. $\Delta E(t)$ keeps a fixed sign, depending on the sign of $\Delta_{12}$, while $V_{E}(t)$ may change sign with time. The energy transfer and its speed decay to zero for $c$ real and negative.\\
\item {\bf Second example}: we consider the Hamiltonian $\hat H=\hat H_{S}+\hat H_{E}+\hat H_{SE}$ which governs the coupling of a phonon field to the electron in the BCS theory of superconductivity.
The total Hamiltonian of the electron-phonon system reads
\begin{center} \begin{eqnarray} \hat H_{S}=\sum_{\vec k_{1}}\epsilon_{\vec k_{1}}c^{+}_{\vec k_{1}}c_{\vec k_{1}} \label{eq29} \end{eqnarray} \end{center} \begin{center} \begin{eqnarray} \hat H_{E}=\sum_{\vec q}\hbar \omega_{\vec q}a^{+}_{\vec q}a_{\vec q} \label{eq30} \end{eqnarray} \end{center} \begin{center} \begin{eqnarray} \hat H_{SE}=V_{ph-e}(\vec q)\sum_{\vec k_{2}\vec q}(a^{+}_{-\vec q}+a_{\vec q}) c^{+}_{\vec k_{2}+\vec q}c_{\vec k_{2}} \label{eq31} \end{eqnarray} \end{center} where $V_{ph-e}(\vec q)$ is the phonon-electron interaction and $(a,a^{+})$ and $(c,c^{+})$ are the phonon and electron annihilation and creation operators. We consider the case where the phonons evolve in the zero mode $\vec q=0$.
Then $\hat H_{E}=\hbar \omega_{0}a^{+}_{0}a_{0}$ and $\hat H_{SE}=V_{ph-e}(0)\sum_{\vec k_{2}}(a^{+}_{0}+a_{0})c^{+}_{\vec k_{2}}c_{\vec k_{2}}$.
The density operator of the total system $S+E$ at $t=0$ is given by the expression \begin{eqnarray}
\hat \rho^{(n_{0},n'_{0})}_{(\vec k,\vec k')}=\frac{1}{2\pi(n_{0}!n'_{0}!)^{1/2}}|\vec k n_{0}\rangle \langle \vec k' n'_{0}| \label{eq32} \end{eqnarray} Working out the commutation relation between $\hat H_{S}$ and $\hat H_{SE}$ leads to $[\hat H_{S},\hat H_{SE}]=0$, hence the evolution operator can be written as \begin{eqnarray} e^{-i\hat Ht}=e^{-i\hat H_{S}t}e^{-i(\hat H_{E}+\hat H_{SE})t} \label{eq33} \end{eqnarray} and in explicit form
\begin{eqnarray} e^{-i\hat Ht}=e^{-it\sum_{k=1}^{\nu}\epsilon_{\vec k}c^{+}_{\vec k}c_{\vec k}} e^{-i[\omega_{0} a^{+}_{0}a_{0}+\nu V_{ph-e}(0)(a^{+}_{0}+a_{0})]t} \label{eq34} \end{eqnarray} where $\nu$ is the number of electron states. The quantity of interest concerns the time dependence of $\Delta E(t)$ and $V_{E}(t)$ given by Eqs.(18) and (19), hence the time evolution of the matrix elements of $e^{-i\hat Ht}$
\begin{eqnarray} E^{(n_{0},n'_{0})}_{\vec k \vec k}(t)=\langle \vec k n_{0}|e^{-it\sum_{k=1}^{\nu}\epsilon_{\vec k}c^{+}_{\vec k}c_{\vec k}}e^{-i[\omega_{0} a^{+}_{0}a_{0}+\nu V_{ph-e}(0)(a^{+}_{0}+a_{0})]t}|\vec k n'_{0}\rangle \label{eq35} \end{eqnarray} These matrix elements and their Hermitian conjugates, which enter the expressions of $\Delta E(t)$ and $V_{E}(t)$, can be worked out explicitly using the Zassenhaus development~\cite{za}, see Appendix B and~\cite{ca}.
They read \begin{eqnarray} E^{(n_{0},n'_{0})}_{\vec k \vec k}(t)=N^{-1}e^{-i\epsilon_{k}t}e^{-i\omega_{0}n_{0}t} F^{(n_{0},n'_{0})}(t) \label{36} \end{eqnarray} with \begin{eqnarray} F^{(n_{0},n'_{0})}(t)= \sum_{n_{2}\leq n_{0},n_{2}\leq n_{3}} \sum_{n_{4}\leq n_{3},n_{4}\leq n'_{0}}(-i)^{n_{0}+n_{3}}(-1)^{n'_{0}+n_{2}-n_{4}} \notag\\ \frac{n_{0}!n'_{0}!(n_{2}!)^{2}(n_{3}!)^{2}[\alpha(t)^{n_{0}+n_{3}-2n_{2}}] [\zeta(t)^{n'_{0}+n_{3}-2n_{4}}]}{(n_{2})^{2} (n_{4})^{2}(n_{0}-n_{2})!(n_{3}-n_{4})! (n_{3}-n_{2})!(n'_{0}-n_{4})!}e^{\Psi(t)} \label{37} \end{eqnarray} and the normalization factor $N= 2\pi(n_{0}!n'_{0}!)^{1/2}$. The functions $\alpha(t)$, $\zeta(t)$ and $\Psi(t)$ read
\begin{eqnarray} \alpha(t)=\frac{\nu V_{ph-e}(0)\sin\omega_{0}t}{\omega_{0}} \label{eq38} \end{eqnarray}
\begin{eqnarray} \zeta(t)=\frac{\omega_{0}[1-\cos\nu V_{ph-e}(0)t]}{\nu V_{ph-e}(0)} \label{eq39} \end{eqnarray}
\begin{eqnarray} \Psi(t)=-\frac{1}{2}\left[\frac{\nu^{2}V^{2}_{ph-e}(0)\sin^{2}(\omega_{0} t)}{\omega_{0}^{2}}+\frac{\omega_{0}^{2}(1-\cos\nu V_{ph-e}(0)t)^{2}}{\nu^{2}V_{ph-e}^{2}(0)}\right] \label{eq40} \end{eqnarray}
As one can see from these expressions $E^{(n_{0},n'_{0})}_{\vec k \vec k}(t)$ are oscillating functions of time which leads to the conclusion that the energy transfer and the transfer velocity oscillate continuously and stay finite over any interval of time.
\end{itemize}
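The operator factorisation used in the first example can be checked numerically in a toy model. A commutator equal to a nonzero c-number cannot be realised by finite matrices (the trace of a commutator vanishes), so the sketch below uses nilpotent Heisenberg-type matrices whose commutator is central, which plays the same role; it follows the standard sign convention $e^{A+B}=e^{A}e^{B}e^{-[A,B]/2}$ (the sign of $c$ depends on the ordering convention in the commutator). The matrices and names are ours, purely illustrative.

```python
import numpy as np

# Heisenberg-type matrices: C = [A, B] commutes with both A and B,
# the finite-dimensional analogue of a central (c-number) commutator.
A = np.zeros((3, 3)); A[0, 1] = 1.0
B = np.zeros((3, 3)); B[1, 2] = 1.0
C = A @ B - B @ A  # equals the matrix unit E_13, which is central here

def expm_exact(M, order=6):
    # series exponential; exact here because M is nilpotent
    out, term = np.eye(3), np.eye(3)
    for k in range(1, order):
        term = term @ M / k
        out = out + term
    return out

lhs = expm_exact(A + B)
rhs = expm_exact(A) @ expm_exact(B) @ expm_exact(-C / 2)
print(np.allclose(lhs, rhs))  # True
```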
\section{Conclusions, remarks}
In the present work we used a cumulant approach~\cite{gua} in order to study the energy transfer between a system and its environment and the rate at which this transfer evolves in time.
If the Hamiltonian of the environment commutes with the interaction between the system and the environment, no energy transfer takes place between the two subsystems. This can be explained in the following way. As already seen in former work, the commutation property is a sufficient condition for divisibility in the time behaviour of an open system, one of the properties which characterize Markov processes~\cite{sti,riv}. In this case it has been shown~\cite{kr1,kr2,kr3} that the environment remains in the energy state it occupied at the origin of time, and stays there over any interval of time. Hence the energy of the environment is blocked in a fixed state, so that it can neither feed the system nor receive energy from it. Time delays are correlated with the possibility for the environment to jump between different states, in closed systems as well as in open ones (see f.i.~\cite{dod,def1,def2} and references quoted therein). This is the case in non-Markovian systems~\cite{zhe,hai,ren,gua1}.
The experimental realization of the absence of energy transfer may be obtained under different conditions:
\begin{itemize}
\item the strength of the interaction can be chosen such that it keeps very weak and hence does not allow any possible jump to another level in the case of a discrete environment spectrum.
\item the temperature of the environment is kept close to zero so that the ground state is the only accessible state.
\item the commutation relation between the environment and the interaction is rigorously verified which is the case discussed in the present work.
\end{itemize}
We also considered the case where the system and the interaction Hamiltonians commute with each other. In this case energy can flow from the environment to the system and back; there is no blocking effect coming from the environment since several states are at hand and the energy exchange can take place. In order to illustrate this situation we worked out two model systems corresponding to different physical situations. In the first case the energy exchange oscillates but dies out exponentially; in the second case, which models the electron-phonon system, the energy exchange and the exchange speed oscillate in time and never go to zero. This is so because the system behaves coherently, as has been shown in refs.~\cite{kr3,lid}.
\section{Appendix A}
We define the following matrix elements which enter the expressions of $\Delta E(t)$ and $V_{E}(t)$ given by Eqs.(16-18)
\begin{eqnarray}
\langle j 1|e^{-i\hat Ht}|j 1\rangle=e^{ct^{2}/2}e^{-i\epsilon_{j}t}e^{-iE_{1}t}a_{j}^{11}(t) \notag\\
\langle j 2|e^{+i\hat Ht}|j 2\rangle=e^{ct^{2}/2}e^{+i\epsilon_{j}t}e^{+iE_{2}t}a_{j}^{22}(t) \notag\\
\langle j 1|e^{+i\hat Ht}|j 2\rangle=e^{ct^{2}/2}e^{+i\epsilon_{j}t}e^{+iE_{1}t} (a_{j}^{12}(t)+ib_{j}^{12}(t)) \notag\\
\langle j 2|e^{-i\hat Ht}|j 1\rangle=e^{ct^{2}/2}e^{-i\epsilon_{j}t}e^{-iE_{2}t} (a_{j}^{21}(t)-ib_{j}^{21}(t)) \label{eq41} \end{eqnarray}
where $\epsilon_{j}$ is the eigenvalue of state $|j\rangle$ and $E_{k}$ $(k=1,2)$ are the eigenvalues of the states $|\gamma\rangle$ \begin{eqnarray}
a_{j}^{11}(t)=Re(\langle j 1|e^{-i \hat H_{SE}t}|j 1 \rangle) \notag\\
a_{j}^{22}(t)=Re(\langle j 2|e^{+i \hat H_{SE}t}|j 2 \rangle) \notag\\
a_{j}^{12}(t)=Re(\langle j 1|e^{+i \hat H_{SE}t}|j 2 \rangle) \notag\\
a_{j}^{21}(t)=Re(\langle j 2|e^{-i \hat H_{SE}t}|j 1 \rangle) \notag\\
b_{j}^{12}(t)=Im(\langle j 1|e^{+i \hat H_{SE}t}|j 2 \rangle) \notag\\
b_{j}^{21}(t)=Im(\langle j 2|e^{-i \hat H_{SE}t}|j 1 \rangle) \label{eq42} \end{eqnarray} We also introduce
\begin{eqnarray} R^{(j)}_{12}=Re(\langle j 1|\hat H_{SE}|j 2 \rangle) \notag\\
I^{(j)}_{12}=Im(\langle j 1|\hat H_{SE}|j 2 \rangle) \label{eq43} \end{eqnarray} and
\begin{eqnarray}
\hat \Omega_{j}(t)=\sum_{\gamma}e^{-i\hat H t}|j \gamma\rangle d_{\gamma \gamma}\langle j \gamma| e^{+i\hat H t} \label{eq44} \end{eqnarray}
Using these definitions, the calculation leads to expressions (26) and (27) of $\Delta E(t)$ and $V_{E}(t)$ in the text.
\section{Appendix B: the Zassenhaus development}
If $X=-i(t-t_{0})(\hat H_{S}+\hat H_{E})$ and $Y=-i(t-t_{0})\hat H_{SE}$
\begin{eqnarray} e^{X+Y}=e^{X}\otimes e^{Y}\otimes e^{-c_{2}(X,Y)/2!}\otimes e^{-c_{3}(X,Y)/3!}\otimes e^{-c_{4}(X,Y)/4!}... \label{eq45} \end{eqnarray} where
\begin{center} $c_{2}(X,Y)=[X,Y]$\\ $c_{3}(X,Y)=2[[X,Y],Y]+[[X,Y],X]$\\ $c_{4}(X,Y)=c_{3}(X,Y)+3[[[X,Y],Y],Y]+[[[X,Y],X],Y]+[[X,Y],[X,Y]]$\\ \end{center}
The series has an infinite number of terms, which can be generated iteratively in a straightforward way~\cite{ca}. If $[X,Y]=0$, truncation at the third term leads to the factorisation of the $X$ and $Y$ contributions. If $[X,Y]=c$, where $c$ is a c-number, the expression corresponds to the well-known Baker-Campbell-Hausdorff formula.
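A quick numerical sanity check of the truncated development (our own illustration, not from the paper): for generic matrices, keeping only the $c_{2}$ correction $e^{-c_{2}/2!}$ already improves the plain factorisation $e^{X}e^{Y}$ by one order in the size of the exponents, since the residual error drops from $O(s^{2})$ to $O(s^{3})$ under the scaling $X\to sX$, $Y\to sY$.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4))
Y = rng.standard_normal((4, 4))

def expm(M, terms=40):
    # plain series exponential; adequate for the small norms used here
    out, term = np.eye(4), np.eye(4)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def err_with_c2(s):
    # keep the first Zassenhaus correction e^{-c2/2!}, c2 = [sX, sY]
    c2 = (s * X) @ (s * Y) - (s * Y) @ (s * X)
    return np.linalg.norm(expm(s * (X + Y))
                          - expm(s * X) @ expm(s * Y) @ expm(-c2 / 2))

def err_plain(s):
    return np.linalg.norm(expm(s * (X + Y)) - expm(s * X) @ expm(s * Y))

s = 0.01
print(err_with_c2(s) < err_plain(s) / 10)  # True: O(s^3) versus O(s^2)
```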
\end{document} |
\begin{document}
\title{The Inapproximability of Maximum Single-Sink\\ Unsplittable, Priority and Confluent Flow Problems} \author{F. Bruce Shepherd \and Adrian Vetta}
\maketitle
\begin{abstract} We consider the single-sink network flow problem. An instance consists of a capacitated graph (directed or undirected), a sink node $t$ and a set of demands that we want to send to the sink. Here demand $i$ is located at a node $s_i$ and requests an amount $d_i$ of flow capacity in order to route successfully. Two standard objectives are to maximise (i) the number of demands (cardinality) and (ii) the total demand (throughput) that can be routed subject to the capacity constraints. Furthermore, we examine these maximisation problems for three specialised types of network flow: unsplittable, confluent and priority flows.
In the {\em unsplittable flow} problem, we have edge capacities, and the demand for $s_i$ must be routed on a single path. In the {\em confluent flow} problem, we have node capacities, and the final flow must induce a tree. Both of these problems have been studied extensively, primarily in the single-sink setting. However, most of this work imposed the {\em no-bottleneck assumption} (that the maximum demand $d_{max}$ is at most the minimum capacity $u_{min}$). Given the no-bottleneck assumption, there is a factor $4.43$-approximation algorithm due to Dinitz et al.~\cite{Dinitz99} for the unsplittable flow problem. Under the even stronger assumption of uniform capacities, there is a factor $3$-approximation algorithm due to Chen et al.~\cite{Chen07} for the confluent flow problem.
However, unlike the unsplittable flow problem, a constant factor approximation algorithm cannot be obtained for the single-sink confluent flow problem even {\bf with} the no-bottleneck assumption. Specifically, we prove that it is hard in that setting to approximate single-sink confluent flow to within $O(\log^{1-\epsilon}(n))$, for any $\epsilon>0$. This result applies for both cardinality and throughput objectives even in undirected graphs.
The remainder of our results focus upon the setting {\bf without} the no-bottleneck assumption. There, the only result we are aware of is an $\Omega(m^{1-\epsilon})$ inapproximability result of Azar and Regev~\cite{azar2001strongly} for cardinality single-sink unsplittable flow in directed graphs. We prove this lower bound applies to undirected graphs, including planar networks. This is the first super-constant hardness known for undirected single-sink unsplittable flow, and apparently the first polynomial hardness for undirected unsplittable flow even for general (non-single sink) multiflows. We show the lower bound also applies to the cardinality single-sink confluent flow problem.
Furthermore, the proof of Azar and Regev requires exponentially large demands. We show that polynomial hardness continues to hold without this restriction, even if all demands and capacities lie within an arbitrarily small range $[1,1+\Delta]$, for $\Delta > 0$. This lower bound applies also to the throughput objective. This result is very sharp since if $\Delta=0$, then we have an instance of the single-sink maximum edge-disjoint paths problem which can be solved exactly via a maximum flow algorithm. This motivates us to study an intermediary problem, {\em priority flows}, that models the transition as $\Delta\rightarrow 0$. Here we have unit-demands, each with a priority level. In addition, each edge has a priority level and a routing path for a demand is then restricted to use edges with at least the same priority level. Our results imply a polynomial lower bound for the maximum priority flow problem, even for the case of uniform capacities.
Finally, we present greedy algorithms that provide upper bounds which (nearly) match the lower bounds for unsplittable and priority flows. These upper bounds also apply for general multiflows. \end{abstract}
\section{Introduction} In this paper we improve known lower bounds (and upper bounds) on the approximability of the maximization versions of the {\em single-sink unsplittable flow}, {\em single-sink priority flow} and {\em single-sink confluent flow} problems. In the single-sink network flow problem, we are given a directed or undirected graph $G=(V,E)$ with $n$ nodes and $m$ edges that has edge capacities $u(e)$ or node capacities $u(v)$. There is a collection of demands that have to be routed to a unique destination {\em sink} node $t$. Each demand $i$ is located at a {\em source} node $s_i$ (multiple demands could share the same source) and requests an amount $d_i$ of flow capacity in order to route. We will primarily focus on the following two well-known versions of the single-sink network flow problem: \begin{itemize} \item {\tt Unsplittable Flow}: Each demand $i$ must be sent along a unique path $P_i$ from $s_i$ to $t$. \item {\tt Confluent Flow}: Any two demands that meet at a node must then traverse identical paths to the sink. In particular, at most one edge out of each node $v$ is allowed to carry flow. Consequently, the support of the flow is a tree in undirected graphs, and an arborescence rooted at $t$ in directed graphs. \end{itemize} Confluent flows were introduced to study the effects of next-hop routing \cite{Chen05}. In that application, routers are capacitated and, consequently, nodes in the confluent flow problem are assumed to have capacities but not edges. In contrast, in the unsplittable flow problem it is the edges that are assumed to be capacitated. We follow these conventions in this paper. In addition, we will also examine a third network flow problem called {\tt Priority Flow} (defined in Section \ref{sec:results}). In the literature, subject to network capacities, there are two standard maximization objectives:
\begin{itemize} \item {\tt Cardinality}: Maximize the total number of demands routed. \item {\tt Throughput}: Maximize satisfied demand, that is, the total flow carried by the routed demands. \end{itemize} These objectives can be viewed as special cases of the {\em profit-maximisation} flow problem. There each demand $i$ has a profit $\pi_i$ in addition to its demand $d_i$. The goal is to route a subset of the demands of maximum total profit. The cardinality model then corresponds to the unit-profit case, $\pi_i=1$ for every demand $i$; the throughput model is the case $\pi_i=d_i$. Clearly the lower bounds we will present also apply to the more general profit-maximisation problem.
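To make the confluent model concrete (a hypothetical sketch of ours; names and data are illustrative, not from the paper): a confluent flow is determined by the single out-edge chosen at each routing node, so its support is a tree oriented toward $t$, and feasibility amounts to checking that the total demand passing through each node respects its capacity.

```python
def confluent_loads(next_hop, demands, sink):
    """next_hop: node -> unique successor (at most one out-edge carries flow);
    demands: source node -> demand d_i. Returns the total load on each node."""
    load = {}
    for s, d in demands.items():
        v = s
        while True:
            load[v] = load.get(v, 0) + d
            if v == sink:
                break
            v = next_hop[v]
    return load

def confluent_feasible(next_hop, demands, capacity, sink):
    loads = confluent_loads(next_hop, demands, sink)
    return all(load <= capacity[v] for v, load in loads.items())

# demands at a and b are both routed through b toward the sink t
next_hop = {'a': 'b', 'b': 't'}
demands = {'a': 1, 'b': 2}
print(confluent_feasible(next_hop, demands, {'a': 1, 'b': 3, 't': 3}, 't'))  # True
print(confluent_feasible(next_hop, demands, {'a': 1, 'b': 2, 't': 3}, 't'))  # False
```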
\subsection{Previous Work}\label{sec:previous} The unsplittable flow problem has been extensively studied since its introduction by Cosares and Saniee \cite{cosares1994optimization} and Kleinberg~\cite{Kleinberg96}. However, most positive results have relied upon the {\em no-bottleneck assumption} ({\sc nba}) where the maximum demand is at most the minimum capacity, that is, $d_{max} \leq u_{min}$. Given the no-bottleneck assumption, the best known result is a factor $4.43$-approximation algorithm due to Dinitz, Garg and Goemans \cite{Dinitz99} for the maximum throughput objective.
The confluent flow problem was first examined by Chen, Rajaraman and Sundaram~\cite{Chen05}. There, and in variants of the problem \cite{Chen07, donovan2007degree, shepherd2009single}, the focus was on uncapacitated graphs.\footnote{An exception concerns the analysis of graphs with constant treewidth \cite{dressler2010capacitated}.} The current best result for maximum confluent flow is a factor $3$-approximation algorithm for maximum throughput in uncapacitated networks \cite{Chen07}.
Observe that uncapacitated networks (i.e. graphs with uniform capacities) trivially also satisfy the no-bottleneck assumption. Much less is known about networks where the no-bottleneck assumption does {\bf not} hold. This is reflected by the dearth of progress for the case of multiflows (that is, multiple sinks) without the {\sc nba}. It is known that a constant factor approximation algorithm exists for the case in which $G$ is a path \cite{bonsma2011constant}, and that a poly-logarithmic approximation algorithm exists for the case in which $G$ is a tree \cite{chekuri2009unsplittable}. The extreme difficulty of the unsplittable flow problem is suggested by the following result of Azar and Regev~\cite{azar2001strongly}. \begin{thm}[\cite{azar2001strongly}]\label{thm:ar} If $P \neq NP$ then, for any $\epsilon > 0$, there is no $O(m^{1-\epsilon})$-approximation algorithm for the cardinality objective of the single-sink unsplittable flow problem in directed graphs. \end{thm} This is the first (and only) super-constant lower bound for the maximum single-sink unsplittable flow problem.
\subsection{Our Results}\label{sec:results} The main focus of this paper is on single-sink flow problems where the no-bottleneck assumption does not hold. It turns out that the hardness of approximation bounds are quite severe even in the (often more tractable) single-sink setting.
In some cases they match the worst case bounds for PIPs (general packing integer programs).
In particular, we strengthen Theorem~\ref{thm:ar} in four ways. First, as noted by Azar and Regev, the proof of their result relies critically on having directed graphs. We prove it holds for undirected graphs, even {\em planar} undirected graphs. Second, we show the result also applies to the confluent flow problem. \begin{thm} \label{thm:extended} If $P \neq NP$ then, for any $\epsilon > 0$, there is no $O(m^{1-\epsilon})$-approximation algorithm for the cardinality objective of the single-sink unsplittable and confluent flow problems in undirected graphs. Moreover for unsplittable flows, the lower bound holds even when we restrict to planar inputs. \end{thm}
Third, Theorems \ref{thm:ar} and \ref{thm:extended} rely upon the use of exponentially large demands -- we call this the {\em large demand regime}. A second demand scenario that has received attention in the literature is the {\em polynomial demand regime} -- this regime is studied in \cite{guruswami2003near}, basically to the exclusion of the large demand regime. We show that strong hardness results apply in the polynomial demand regime; in fact, they apply to the {\em small demand regime} where the {\em demand spread}
$\frac{d_{max}}{d_{min}} = 1+\Delta$, for some ``small'' $\Delta > 0$. (Note that $d_{min} \leq u_{min}$ and so the demand spread of an instance is at least the {\em bottleneck value} $\frac{d_{max}}{u_{min}}$.) Fourth, by considering the case where $\Delta > 0$ is arbitrarily small we obtain similar hardness results for the throughput objective for the single-sink unsplittable and confluent flow problems. Formally, we show the following $m^{\frac12 - \epsilon}$-inapproximability result. We note however that the hard instances have a linear number of edges (so one may prefer to call this an $n^{\frac12-\epsilon}$-inapproximability result).
\begin{thm} \label{thm:hard} Neither cardinality nor throughput can be approximated to within a factor of $O(m^{\frac12-\epsilon})$, for any $\epsilon > 0$, in the single-sink unsplittable and confluent flow problems. This holds for undirected and directed graphs even when instances are restricted to have demand spread $\frac{d_{max}}{d_{min}}=1 + \Delta$, where $\Delta > 0$ is arbitrarily small. \end{thm} Again for the unsplittable flow problem this hardness result applies even in planar graphs. Theorems \ref{thm:extended} and \ref{thm:hard} are the first super-constant hardness for any undirected version of the single-sink unsplittable flow problem, and any directed version with small-demands.
We also remark that the extension to the small-demand regime is significant as suggested by the sharpness of the result. Specifically, suppose $\Delta=0$ and, thus, the demand spread is one. We may then scale to assume that $d_{max}=d_{min}=1$. Furthermore, we may then round down all capacities to the nearest integer as any fractional capacity cannot be used. But then the single-sink unsplittable flow problem can be solved easily in polynomial time by a max-flow algorithm!
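The $\Delta=0$ observation above can be made concrete with a short sketch (our own illustration, for directed graphs; the naming is ours): round capacities down, attach a super-source with one unit of capacity per demand, and run any max-flow routine, here a minimal Edmonds-Karp.

```python
from collections import deque
from math import floor

def max_flow(cap, s, t):
    """Edmonds-Karp on a residual-capacity dict cap[u][v]."""
    total = 0
    while True:
        parent, q = {s: None}, deque([s])
        while q and t not in parent:                 # BFS for an augmenting path
            u = q.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return total
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(cap[u][w] for u, w in path)          # bottleneck on the path
        for u, w in path:
            cap[u][w] -= b
            row = cap.setdefault(w, {})
            row[u] = row.get(u, 0) + b               # residual (reverse) capacity
        total += b

def routable_unit_demands(edges, sources, sink):
    """edges: (u, v, capacity) of a directed graph; sources: one entry per
    unit demand. With d_max = d_min = 1, fractional capacity is useless,
    so round down and solve a single max-flow."""
    cap = {}
    for u, v, c in edges:
        row = cap.setdefault(u, {})
        row[v] = row.get(v, 0) + floor(c)
    for s in sources:
        row = cap.setdefault('SRC', {})
        row[s] = row.get(s, 0) + 1
    return max_flow(cap, 'SRC', sink)

print(routable_unit_demands([('s1', 't', 1.9), ('s2', 't', 2.0)],
                            ['s1', 's2', 's2'], 't'))  # 3
```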
To clarify what is happening in the change from $\Delta>0$ to $\Delta=0$, we introduce and examine an intermediary problem, the {\em maximum priority flow problem}. Here, we have a graph $G=(V,E)$ with a sink node $t$, and demands from nodes $s_i$ to $t$. These demands are unit demands, and thus $\Delta=0$. However, a demand may not traverse every edge. Specifically, we have a partition of $E$ into priority classes $E_i$. Each demand also has a {\em priority}, and a demand of priority $i$ may only use edges of priority $i$ or better (i.e., edges in $E_1 \cup E_2 \cup \cdots \cup E_i$). The goal is to find a maximum routable subset of the demands. Observe that, for this unit-demand problem, the throughput and cardinality objectives are identical. Whilst various priority network design problems have been considered in the literature (cf. \cite{charikar2004resource,chuzhoy2008approximability}), we are not aware of existing results on maximum priority flow. Our results immediately imply the following. \begin{cor} \label{cor:priority} The single-sink maximum priority flow problem cannot be approximated to within a factor of $m^{\frac12-\epsilon}$, for any $\epsilon > 0$, in planar directed or undirected graphs. \end{cor}
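For intuition, deciding whether a single priority-$i$ demand can route at all is just reachability in its admissible subgraph (a toy sketch with our own naming; capacities and the interaction between demands are ignored here):

```python
from collections import deque

def can_route(edge_priority, demand_priority, s, t):
    """edge_priority: dict (u, v) -> priority class index (1 = best).
    A priority-i demand may only use edges in E_1 u ... u E_i."""
    adj = {}
    for (u, v), p in edge_priority.items():
        if p <= demand_priority:          # admissible edge for this demand
            adj.setdefault(u, []).append(v)
            adj.setdefault(v, []).append(u)   # undirected
    seen, q = {s}, deque([s])
    while q:
        u = q.popleft()
        if u == t:
            return True
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                q.append(v)
    return False

edges = {('s', 'a'): 1, ('a', 't'): 2}
print(can_route(edges, 1, 's', 't'))  # False: edge (a, t) is in class E_2
print(can_route(edges, 2, 's', 't'))  # True
```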
The extension of the hardness results for single-sink unsplittable flow to undirected graphs is also significant since it appears to have been left unnoticed even for general multiflow instances. As stated in \cite{guruswami2003near}: ``...{\em the hardness of undirected edge-disjoint paths remains an interesting open question. Indeed, even the hardness of edge-capacitated unsplittable flow remains open}''.\footnote{In \cite{guruswami2003near}, they do however establish an inapproximability bound of $n^{1/2-\epsilon}$, for any $\epsilon > 0$, on {\em node-capacitated} {\sf USF} in undirected graphs.} Our result resolves this question by showing polynomial hardness (even for single-sink instances). We emphasize, however, that this is not the first super-constant hardness for general multiflows.
A polylogarithmic lower bound appeared in \cite{andrews2006logarithmic} for the maximum
edge-disjoint paths (MEDP) problem (this was subsequently extended to the regime where edge congestion is
allowed \cite{andrews2010inapproximability}). Moreover, a polynomial lower bound for MEDP seems less likely given the recent $O(1)$-congestion polylog-approximation algorithms \cite{chuzhoy2012routing,chuzhoy2012polylogarithimic}. In this light, our hardness results for single-sink unsplittable flow again highlight the sharp threshold involved with the no-bottleneck assumption. That is, if we allow some slight variation in demands and capacities within a tight range $[1,1+\Delta]$ we immediately jump from (likely) polylogarithmic approximations for MEDP to (known) polynomial hardness of the corresponding maximum unsplittable flow instances.
We next note that Theorems \ref{thm:ar} and \ref{thm:extended} are stronger than Theorem \ref{thm:hard} in the sense that they have exponents of $1-\epsilon$ rather than $\frac12-\epsilon$. Again, this extra boost is due to their use of exponential demand sizes. One can obtain a more refined picture as to how the hardness of cardinality single-sink unsplittable/confluent flow varies with
the demand sizes, or more precisely how it varies on the bottleneck
value $\frac{d_{max}}{u_{min}}$.\footnote{This seems likely connected to a footnote in \cite{azar2001strongly}
that a lower bound of the
form $O(m^{\frac12-\epsilon}\cdot \sqrt{\log (\frac{d_{max}}{u_{min}})})$ exists for maximum unsplittable flow in
directed graphs. Its proof was omitted however.}
Specifically, combining the approaches used in Theorems \ref{thm:extended} and \ref{thm:hard} gives: \begin{thm} \label{thm:harder} Consider any fixed $\epsilon > 0$ and $d_{max}/u_{min} > 1$. It is NP-hard to approximate cardinality single-sink unsplittable/confluent flow to within a factor of $O(m^{\frac12-\epsilon}\cdot \sqrt{\log (\frac{d_{max}}{u_{min}})})$ in undirected or directed graphs. For unsplittable flow, this remains true for planar graphs. \end{thm}
Once again we see the message that there is a sharp cutoff for $d_{max}/u_{min} > 1$ even in the large-demand regime. This is because
if the bottleneck value is at most $1$, then the no-bottleneck assumption holds and, consequently, the single-sink unsplittable flow problem admits a constant-factor approximation (not $\sqrt{m}$ hardness). We mention that a similar hardness bound cannot hold for the maximum throughput objective, since one can always reduce to the case where $d_{max}/u_{min}$ is small with a polylogarithmic loss, and hence the lower bound becomes at worst $O(m^{\frac12-\epsilon}\cdot \log m)$. We feel the preceding hardness bound is all the more interesting since known greedy techniques yield almost-matching upper bounds, even for general multiflows. \begin{thm} \label{thm:upper} There is an $O(\sqrt{m}\log (\frac{d_{max}}{u_{min}}))$ approximation algorithm for cardinality unsplittable flow and an $O(\sqrt{m}\log n)$ approximation algorithm for throughput unsplittable flow, in both directed and undirected graphs. \end{thm}
We next present one hardness result for confluent flows assuming the no-bottleneck-assumption. Again, recall that for the maximum single-sink unsplittable flow problem there is a constant factor approximation algorithm given the no-bottleneck-assumption. We prove this is not the case for the single-sink confluent flow problem by providing a super-constant lower bound. Its proof is more complicated but builds on the techniques used for our previous results. \begin{thm}\label{thm:hardnba} Given the no-bottleneck assumption, the single-sink confluent flow problem cannot be approximated to within a factor $O(\log^{1-\epsilon}n)$, for any $\epsilon > 0$, unless $P=NP$. This holds for both the maximum cardinality and maximum throughput objectives in undirected and directed graphs. \end{thm}
Finally, we include a hardness result for the congestion minimization problem for confluent flows. That is, the problem of finding the minimum value $\alpha \geq 1$ such that all demands can be routed confluently if all node capacities are multiplied by $\alpha$. This problem has two plausible variants.
An {\em $\alpha$-congested} routing is an unsplittable flow for the demands where the total load on any node is at most $\alpha$ times its associated capacity. A {\em strong congestion} algorithm is one where the resulting flow must route on a tree $T$ such that for any demand $v$ the nodes on its path in $T$ must have capacity at least $d(v)$. A {\em weak congestion} algorithm does not require this extra constraint on the tree capacities.
Both variants are of possible interest. If the motive for congestion is to route all demands in some limited number $\alpha$ of rounds of admission, then each round should be feasible on $T$ -- hence strong congestion is necessary. On the other hand, if the objective is simply to augment network capacity so that all demands can be routed, weak congestion is the right notion. In Section~\ref{sec:congestion} we show that it is hard to approximate strong congestion to within polynomial factors.
\begin{thm}
\label{thm:strongcongestion}
It is NP-hard to approximate the minimum (strong) congestion problem for single-sink confluent flow instances
(with polynomial-size demands) to within a factor of $m^{\frac12-\epsilon}$ for any $\epsilon>0$.
\end{thm}
\subsection{Overview of Paper} At the heart of our reductions are gadgets based upon the {\em capacitated} $2$-disjoint paths problem. We discuss this problem in Section \ref{sec:two-disjoint-paths}. In Section~\ref{sec:lower}, we prove the $\sqrt{m}$ hardness of maximum single-sink unsplittable/confluent flow in the small demand regime (Theorem \ref{thm:hard}); we give a similar hardness for single-sink priority flow (Corollary \ref{cor:priority}). Using a similar basic construction, we prove, in Section~\ref{sec:confwithNBA}, the logarithmic hardness of maximum single-sink confluent flow even given the no-bottleneck assumption (Theorem \ref{thm:hardnba}). In Section~\ref{sec:stronger}, we give lower bounds on the approximability of the cardinality objective for general demand regimes (Theorems \ref{thm:extended} and \ref{thm:harder}). Finally, in Section~\ref{sec:upper}, we present almost matching upper bounds for unsplittable flow (Theorem \ref{thm:upper}) and priority flow.
\section{The Two-Disjoint Paths Problem}\label{sec:two-disjoint-paths}
Our hardness reductions require gadgets based upon the {\em capacitated} $2$-disjoint paths problem. Before describing this problem, recall the classical $2$-disjoint paths problem:\\
\noindent {\tt 2-Disjoint Paths (Uncapacitated):} Given a graph $G$ and node pairs $\{x_1, y_1\}$ and $\{x_2, y_2\}$. Does $G$ contain paths $P_1$ from $x_1$ to $y_1$ and $P_2$ from $x_2$ to $y_2$ such that $P_1$ and $P_2$ are disjoint?\\
Observe that this formulation incorporates four distinct problems because the graph $G$ may be directed or undirected and the desired paths may be edge-disjoint or node-disjoint. In undirected graphs the $2$-disjoint paths problem, for both edge-disjoint and node-disjoint paths, can be solved in polynomial time -- see Robertson and Seymour~\cite{RS95}. In directed graphs, perhaps surprisingly, the problem is NP-hard. This is the case for both edge-disjoint and node-disjoint paths, as shown by Fortune, Hopcroft and Wyllie~\cite{FHW80}.
In general, the unsplittable and confluent flow problems concern capacitated graphs. Therefore, our focus is on the capacitated version of the $2$-disjoint paths problem. \\
\noindent {\tt 2-Disjoint Paths (Capacitated):} Let $G$ be a graph whose edges have capacity either $\alpha$ or $\beta$, where $\beta \ge \alpha$. Given node pairs $\{x_1, y_1\}$ and $\{x_2, y_2\}$, does $G$ contain paths $P_1$ from $x_1$ to $y_1$ and $P_2$ from $x_2$ to $y_2$ such that:\\ (i) $P_1$ and $P_2$ are disjoint.\\ (ii) $P_2$ may only use edges of capacity $\beta$. ($P_1$ may use both capacity $\alpha$ and capacity $\beta$ edges.) \\
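On small instances the capacitated problem can of course be decided by brute force; the sketch below (our own, for the undirected edge-disjoint variant) enumerates simple $x_2$-$y_2$ paths inside the $\beta$-capacity subgraph and tests whether $x_1$ can still reach $y_1$ avoiding the edges of $P_2$:

```python
def simple_paths(adj, s, t, path=None, used=None):
    """Yield every simple s-t path as a set of undirected (frozenset) edges."""
    if path is None:
        path, used = [], {s}
    if s == t:
        yield set(path)
        return
    for v in adj.get(s, []):
        if v not in used:
            used.add(v)
            path.append(frozenset((s, v)))
            yield from simple_paths(adj, v, t, path, used)
            path.pop()
            used.discard(v)

def capacitated_2dp(edges, beta_edges, x1, y1, x2, y2):
    """edges: all undirected edges; beta_edges: those of capacity beta.
    Brute-force decision for the edge-disjoint capacitated variant."""
    def build(es):
        adj = {}
        for u, v in es:
            adj.setdefault(u, []).append(v)
            adj.setdefault(v, []).append(u)
        return adj
    adj_all, adj_beta = build(edges), build(beta_edges)
    for p2 in simple_paths(adj_beta, x2, y2):   # candidate P2 on beta edges
        stack, seen = [x1], {x1}                # reachability avoiding P2
        while stack:
            u = stack.pop()
            if u == y1:
                return True
            for v in adj_all.get(u, []):
                if v not in seen and frozenset((u, v)) not in p2:
                    seen.add(v)
                    stack.append(v)
    return False

star = [('x1', 'm'), ('m', 'y1'), ('x2', 'm'), ('m', 'y2')]
print(capacitated_2dp(star, star, 'x1', 'y1', 'x2', 'y2'))  # True
line = [('x1', 'c'), ('c', 'y1')]
print(capacitated_2dp(line, line, 'x1', 'y1', 'x1', 'y1'))  # False
```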
For directed graphs, the result of Fortune et al.~\cite{FHW80} immediately implies that the capacitated version is hard -- simply assume every edge has capacity $\beta$. In undirected graphs, the case of node-disjoint paths was proven to be hard by Guruswami et al.~\cite{guruswami2003near}. The case of edge-disjoint paths was recently proven to be hard by Naves, Sonnerat and Vetta~\cite{naves2010maximum}, even in planar graphs where terminals lie on the outside face (in an interleaved order, which will be important for us). These results are summarised in Table \ref{table:hardness}. \begin{table}[h]
\centering
\begin{tabular}{|c|cc|}
\hline & Directed & Undirected \\ \hline Node-Disjoint & NP-hard \cite{FHW80} & NP-hard \cite{guruswami2003near} \\ Edge-Disjoint & NP-hard \cite{FHW80} & NP-hard \cite{naves2010maximum} \\ \hline
\end{tabular} \caption{Hardness of the Capacitated 2-Disjoint Paths Problem}\label{table:hardness} \end{table}
Recall that the unsplittable flow problem has capacities on edges, whereas the confluent flow problem has capacities on nodes. Consequently, our hardness reductions for unsplittable flows require gadgets based upon the hardness for edge-disjoint paths \cite{naves2010maximum}; for confluent flows we require gadgets based upon the hardness for node-disjoint paths \cite{guruswami2003near}.
\section{Polynomial Hardness of Single-Sink Unsplittable,\\ Confluent and Priority Flow}
\label{sec:lower}
In this section, we establish that the single-sink maximum unsplittable and confluent flow problems are hard to approximate within polynomial factors for both the cardinality and throughput objectives. We will then show how these hardness results extend to the single-sink maximum priority flow problem. We begin with the small demand regime by proving Theorem~\ref{thm:hard}. Its proof introduces some core ideas that are used in later sections in the proofs of Theorems \ref{thm:harder} and \ref{thm:hardnba}.
\subsection{$\sqrt{n}$-Hardness in the Small Demand Regime} \label{sec:hardness}
Our approach uses a grid routing structure much as in the hardness proofs of Guruswami et al.~\cite{guruswami2003near}. Specifically:
(1) We introduce a graph $G_N$ that has the following properties. There is a set of pairwise crossing paths that can route demands of total value $\sum_{i=1}^{N} (1+ i \delta) =N + \delta \frac12 N(N+1)$. On the other hand, any collection of pairwise non-crossing paths can route at most $d_{max} = 1+N\delta$ units of the total demand. For a given $\Delta \in (0,1)$ we choose $\delta$ to be small enough so that $d_{max} \leq 1+\Delta < 2$.
(2) We then build a new network $\mathcal{G}$ by replacing each node of $G_N$ by an instance of the capacitated $2$-disjoint paths problem. This routing problem is chosen because it induces the following properties. If it is a YES-instance, then a maximum unsplittable (or confluent) flow on $\mathcal{G}$ corresponds to routing demands in $G_N$ using pairwise-crossing paths. In contrast, if it is a NO-instance, then a maximum unsplittable or confluent flow on $\mathcal{G}$ corresponds to routing demands in $G_N$ using pairwise non-crossing paths.
Since $G_N$ contains $n=O(N^2)$ nodes, it follows that an approximation algorithm with guarantee better than $\Theta(\sqrt{n})$ allows us to distinguish between YES- and NO-instances of our routing problem, giving an inapproximability lower bound of $\Omega(\sqrt n)$. Furthermore, at all stages we show how this reduction can be applied using only undirected graphs. This will prove Theorem \ref{thm:hard}.
\subsubsection{A Half-Grid Graph $G_N$}
\begin{figure}
\caption{ A Half-Grid $G_N$.}
\label{fig.grid}
\end{figure}
We begin by defining the graph $G_N$. There are $N$ rows (numbered from top to bottom) and $N$ columns (numbered from left to right). We call the leftmost node in the $i^{th}$ row $s_i$, and the bottom node in the $j^{th}$ column $t_j$. There is a demand of size $\capp{i} := 1+i\delta$ located at $s_i$. Recall that $\delta$ is chosen so that all demands and capacities lie within a tight range $[1,1+\Delta]$ for fixed $\Delta$ small. All the edges in the $i^{th}$ row and all the edges in the $i^{th}$ column have capacity $\capp{i}$. The $i^{th}$ row extends as far as the $i^{th}$ column and vice versa; thus, we obtain a ``half-grid" that is a weighted version of the network considered by Guruswami et al. \cite{guruswami2003near}. Finally we add a sink $t$. There is an edge of capacity $\capp{j}$ between $t_j$ and $t$. The complete construction is shown in Figure~\ref{fig.grid}.
For the unsplittable flow problem we have edge capacities. We explain later how the node capacities are incorporated for the confluent flow problem. We treat the undirected and directed reductions together. For directed instances we always enforce edge directions to be downwards and to the right.
Note that there is a unique $s_i-t$ path $P^*_i$ consisting only of edges of capacity $\capp{i}$, that is, the hooked path that goes from $s_i$ along the $i^{th}$ row and then down the $i^{th}$ column to $t$. We call this the {\em canonical} path for demand $i$.
\begin{claim}\label{cl:cross} Take two feasible paths $Q_i$ and $Q_j$ for demands $i$ and $j$. If $i<j$, then the paths must cross on row $j$, between columns $i$ and $j-1$. \end{claim} \noindent{\bf Proof.} Consider demand $i$ originating at $s_i$. This demand cannot use any edge in columns $1$ to $i-1$ as it is too large. Consequently, any feasible path $Q_i$ for demand $i$ must include all of row $i$. Similarly, $Q_j$ must contain all of row $j$. Row $j$ cuts off $s_i$ from the sink $t$, so $Q_i$ must meet $Q_j$ on row $j$. Demand $i$ cannot use an edge in row $j$ as demand $j$ is already using up all the capacity along that row. Thus $Q_i$ crosses $Q_j$ at the point they meet. As above, this meeting cannot occur in columns $1$ to $i-1$. Thus the crossing point must occur on some column between $i$ and $j-1$ (by construction of the half-grid, column $j$ only goes as high as row $j$ so the crossing cannot be there). $
{\Box}$
By Claim \ref{cl:cross}, if we are forced to route using pairwise non-crossing paths, then only one demand can route. Thus we can route at most a total of $\capp{N}=1+\delta N<2$ units of demand.
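To make the gap concrete, here is a small numeric sketch of the crossing versus non-crossing totals (the values of $N$ and $\delta$ are illustrative, not part of the reduction):

```python
# Demands d_i = 1 + i*delta in the half-grid G_N: pairwise crossing
# (canonical) paths route every demand, while pairwise non-crossing paths
# route at most the single largest demand d_max = 1 + N*delta < 2.
N, delta = 100, 1e-4

demands = [1 + i * delta for i in range(1, N + 1)]
crossing_total = sum(demands)        # equals N + (delta/2) * N * (N+1)
non_crossing_total = max(demands)    # equals d_max = 1 + N * delta

assert abs(crossing_total - (N + 0.5 * delta * N * (N + 1))) < 1e-9
assert non_crossing_total < 2        # the gap is thus a factor of roughly N
```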
\subsubsection{The Instance $\mathcal{G}$} We build a new instance $\mathcal{G}$ by replacing each degree $4$ node in $G_N$ with an instance of the $2$-disjoint paths problem. For the unsplittable flow problem in undirected graphs we use gadgets $H$ corresponding to the capacitated edge-disjoint paths problem. Observe that a node at the intersection of column $i$ and row $j$ (with $j > i$) in $G_N$ is incident to two edges of capacity $\capp{i}$ and to two edges of capacity $\capp{j}$. We construct $\mathcal{G}$ by replacing each such node of degree four with the routing graph $H$. We do this in such a way that the capacity $\capp{i}$ edges of $G_N$ are incident to $x_1$ and $y_1$, and the $\capp{j}$ edges are incident to $x_2$ and $y_2$. We also let $\alpha=\capp{i}$ and $\beta=\capp{j}$.
For the confluent flow problem in undirected graphs we now have node capacities. Hence we use gadgets $H$ corresponding to the node-capacitated 2-paths problem discussed above.
Again $x_1$ and $y_1$ are given capacity $\capp{i}$ whilst $x_2$ and $y_2$ have capacity $\capp{j}$.
For directed graphs, the mechanism is simpler as the gadgets may now come from the uncapacitated disjoint paths problem. Thus the hardness comes from the directedness and not from the capacities. Specifically, we may set the edge capacities to be $C=\max\{\capp{i},\capp{j}\}=\capp{j}$. Moreover, for unsplittable flow we may perform the standard operation of splitting each node in $H$ into two, with the new induced arc having capacity $C$. It follows that if there are two flow paths through $H$, each carrying at least $\capp{i}$ flow, then, since $2\capp{i} > C$, they must be node-disjoint; hence they must run from $x_1$ to $y_1$ and from $x_2$ to $y_2$. These provide a solution to the node-disjoint directed paths problem in $H$.
The hardness result will follow once we see how this construction relates to crossing and non-crossing collections of paths.
\begin{lemma}\label{lem:yes} If $H$ is a YES-instance, then the maximum unsplittable/confluent flow in $\mathcal{G}$ has value at least $N$. For a NO-instance the maximum unsplittable/confluent flow has value at most $1+\Delta < 2$. \end{lemma} \noindent{\bf Proof.} If $H$ is a YES-instance, then we can use its paths to produce paths in $\mathcal{G}$ whose images in $G_N$ are free to cross at any node. Hence we can produce paths in $\mathcal{G}$ whose images are the canonical paths $P^*_i, \, 1 \le i \le N$ in $G_N$. This results in a flow of value greater than $N$. Note that in the confluent case, these paths yield
a confluent flow as they only meet at the root $t$.
Now suppose $H$ is a NO-instance. Take any flow and consider two paths $\hat{Q}_i$ and $\hat{Q}_j$ in $\mathcal {G}$ for demands $i$ and $j$, where $i < j$. These paths also induce two feasible paths $Q_i$ and $Q_j$ in the half-grid $G_N$. By Claim \ref{cl:cross}, these paths cross on row $j$ of the half-grid (between columns $i$ and $j-1$). In the directed case (for unsplittable or confluent flow) if they cross at a grid-node $v$, then the paths they induce in the copy of $H$ at $v$ must be node-disjoint. This is not possible in the directed case since such paths do not exist for $(x_1,y_1)$ and $(x_2,y_2)$.
In the undirected confluent case, we must also have node-disjoint paths through this copy of $H$.
As we are in row $j$ and a column between column $i$ and $j-1$, we have $\beta=\capp{j}$ and $\capp{i} \le \alpha \le \capp{j-1}$. Thus, demand $j$ can only use the $\beta$-edges of $H$. This contradicts the fact that $H$ is a NO-instance. For the undirected case of unsplittable flow the two paths through $H$ need to be edge-disjoint, but now we obtain a contradiction as our gadget was derived from the capacitated edge-disjoint paths problem.
It follows that no such pair $\hat{Q}_i$ and $\hat{Q}_j$ can exist and, therefore, the confluent/unsplittable flow routes at most one demand and, hence, routes a total demand of at most $1+\Delta$. $
{\Box}$
We then obtain our hardness result. \\
{{\noindent\bf Theorem~\ref{thm:hard}.} \itshape Neither cardinality nor throughput can be approximated to within a factor of $O(m^{\frac12-\epsilon})$, for any $\epsilon > 0$, in the single-sink unsplittable and confluent flow problems. This holds for undirected and directed graphs even when instances are restricted to have bottleneck value $\frac{d_{max}}{u_{min}}=1 + \Delta$ where $\Delta > 0$ is arbitrarily small.\\ }
\noindent{\bf Proof.} It follows that if we could approximate the maximum unsplittable (or confluent) flow problem in $\mathcal{G}$ to a factor better than $N$, we could determine whether the optimal solution is at least $N$ or at most $1+\Delta$. This in turn would allow us to determine whether $H$ is a YES- or a NO-instance.
Note that $\mathcal{G}$ has $n=\Theta(pN^2)$ edges, where $p=|V(H)|$. If we take $N = \Theta(p^{\frac12(\frac{1}{\epsilon}-1)})$, where $\epsilon>0$ is an (arbitrarily) small constant, then $n=p^{\frac{1}{\epsilon}}$ and so $N = \Theta(n^{\frac12 (1-\epsilon)})$. A precise lower bound of $n^{.5 - \epsilon'}$ is obtained for $\epsilon' > \epsilon$ sufficiently small, when $n$ is sufficiently large. $
{\Box}$
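The exponent bookkeeping in this parameter choice can be checked numerically (a sketch with an illustrative gadget size $p$; the value of $p$ is hypothetical):

```python
import math

# Parameter choice in the proof of Theorem thm:hard: with
# N = p^((1/eps - 1)/2), the instance G has n = Theta(p * N^2) = p^(1/eps)
# edges, while the inapproximability gap N equals n^((1 - eps)/2).
eps = 0.1
p = 50.0   # size of the 2-disjoint paths gadget (illustrative value)

N = p ** (0.5 * (1 / eps - 1))
n = p * N ** 2

assert math.isclose(n, p ** (1 / eps), rel_tol=1e-9)
assert math.isclose(N, n ** (0.5 * (1 - eps)), rel_tol=1e-9)
```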
\subsubsection{Priority Flows and Congestion} \label{sec:congestion}
We now show the hardness of priority flows. To do this, we use the same half-grid construction, except we must replace the capacities by priorities. This is achieved in a straightforward manner: priorities are defined by the magnitude of the original demands/capacities. The larger the demand or capacity in the original instance, the higher its priority in the new instance. (Given the priority ordering we may then assume all demands and capacities are set to $1$.) In this setting, observe that Claim~\ref{cl:cross} also applies for priority flows. \begin{claim} Consider two feasible paths $Q_i$ and $Q_j$ for demands $i$ and $j$ in the priority flow problem. If $i<j$, then the paths must cross on row $j$, between columns $i$ and $j-1$. \end{claim} \noindent{\bf Proof.} Consider demand $i$ originating at $s_i$. This demand cannot use any edge in columns $1$ to $i-1$ as they do not have high enough priority. Consequently, any feasible path $Q_i$ for demand $i$ must include all of row $i$. Similarly, $Q_j$ must contain all of row $j$. Row $j$ cuts off $s_i$ from the sink $t$, so $Q_i$ must meet $Q_j$ on row $j$. Demand $i$ cannot use an edge in row $j$ as demand $j$ is already using up all the capacity along that row. Thus $Q_i$ crosses $Q_j$ at the point they meet. As above, this meeting cannot occur in columns $1$ to $i-1$. Thus the crossing point must occur on some column between $i$ and $j-1$.
$
{\Box}$
Repeating our previous arguments, we obtain the following hardness result for priority flows. (Again, it applies to both throughput and cardinality objectives as they coincide for priority flows.)
{{\noindent\bf Corollary~\ref{cor:priority}.} \itshape The maximum single-sink priority flow problem cannot be approximated to within a factor of $m^{\frac12-\epsilon}$, for any $\epsilon > 0$, in planar directed or undirected graphs. $
{\Box}$ }
\vspace*{.2cm}
We close the section by establishing Theorem~\ref{thm:strongcongestion}. Consider a grid instance built from a YES-instance of the $2$-disjoint paths problem. As before, we may find a routing of all demands with congestion at most $1$. Otherwise, suppose that the grid graph is built from a NO-instance and consider a tree $T$ returned by a strong congestion algorithm. As it is a strong algorithm, the demand in row $i$ must follow its canonical path horizontally to the right as far as it can. As the flow is confluent, all demands from rows $>i$ must accumulate at this rightmost node in row $i$. Inductively, this implies that the rightmost node in row $1$ has load greater than $N$. As before, for any $\epsilon > 0$ we may choose $N$ sufficiently large so that $N \geq n^{.5-\epsilon}$. Hence we have a YES-instance of $2$-disjoint paths if and only if an $n^{.5-\epsilon}$-approximate strong congestion algorithm returns a solution with congestion at most $N$.
\section{Logarithmic Hardness of Single-Sink Confluent Flow with the No-Bottleneck Assumption} \label{sec:confwithNBA}
We now prove the logarithmic hardness of the confluent flow problem given the no-bottleneck assumption. A similar two-step plan is used as for Theorem~\ref{thm:hard} but the analysis is more involved.
(1) We introduce a planar graph $G_N$ which has the same structure as our previous half-grid, except that its edge weights are changed. As before we have demands associated with the $s_i$'s, but we assume these demands are tiny -- this guarantees that the no-bottleneck assumption holds. We thus refer to the demands located at an $s_i$ as the {\em packets} from $s_i$. We define $G_N$ to ensure that there is a collection of pairwise crossing ``trees'' (to be defined) that can route packets of total value equal to the harmonic number $H_N\approx \log N$. On the other hand, any collection of pairwise non-crossing trees can route at most one unit of packet demand.
(2) We then build a new network $\mathcal{G}$ by replacing each node of $G_N$ by an instance of the $2$-disjoint paths problem. Again, this routing problem is chosen because it induces the following properties. If it is a YES-instance, then we can find a routing that corresponds to pairwise crossing trees. Hence we are able to route $H_N$ demand. In contrast, if it is a NO-instance, then a maximum confluent flow on $\mathcal{G}$ is forced to route using a non-crossing structure and this forces the total flow to be at most $1$.
It follows that an approximation algorithm with guarantee better than logarithmic would allow us to distinguish between YES- and NO-instances of our routing problem, giving a lower bound of $\Omega(\log N)$. We will see that this translates into a lower bound of $\Omega(\log^{1-\epsilon} n)$ in terms of the input size.
\subsection{{An Updated Half-Grid Graph.}}\ \\ Again we take the graph $G_N$ with $N$ rows (now numbered from
bottom to top) and $N$ columns (now numbered from right to left). All the edges in the $i^{th}$ row and all the edges in the $i^{th}$ column have capacity $\frac{1}{i}$. The $i^{th}$ row extends as far as the $i^{th}$ column and vice versa; thus, we obtain a half-grid similar to our earlier construction but with updated weights.
Then we add a sink $t$. There is an edge of capacity $\frac{1}{i}$ to $t$ from the bottom node (called $t_i$) in column $i$. Finally, at the leftmost node (called $s_i$) in row $i$ there is a collection of packets (``sub-demands'') whose total weight is $\frac{1}{i}$. These packets are very small. In particular, they are much smaller than $\frac{1}{n}$, so they satisfy the no-bottleneck assumption. The complete construction is shown in Figure \ref{fig.grid2}. In the directed setting, edges are oriented to the right and downwards.
\begin{figure}
\caption{ An Updated {\sc nba} \ Half-Grid $G_N$.}
\label{fig.grid2}
\end{figure}
Again, there is a unique $s_i\mbox{-}t$ path $P^*_i$ consisting only of edges of weight $\frac{1}{i}$, that is, the hooked path that goes from $s_i$ along the $i^{th}$ row and then down the $i^{th}$ column to $t$. Moreover, for $i \neq j$, the path $P^*_i$ intersects $P^*_j$ precisely once. If we route packets along the paths $\mathcal{P}^*=\{P^*_1,P^*_2,\dots,P^*_N\}$, then we obtain a flow of total value $H_N =1+\frac12+\cdots+\frac{1}{N}$. Since every edge incident to $t$ is used in $\mathcal{P}^*$ with its maximum capacity, this solution is a maximum single-sink flow. Clearly, each $P^*_i$ is a tree, so this routing corresponds to our notion of routing on ``crossing trees''.
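For concreteness, the value of this canonical routing can be checked numerically (an illustrative sketch; the constant $0.5772$ is Euler's constant):

```python
import math

# The canonical paths P*_i push 1/i flow across the capacity-1/i edge
# into t, so the total flow value is the harmonic number H_N ~ ln N.
N = 1000
flow_value = sum(1.0 / i for i in range(1, N + 1))

# H_N = ln N + gamma + O(1/N), where gamma ~ 0.5772 is Euler's constant
assert abs(flow_value - (math.log(N) + 0.5772)) < 0.01
```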
We then build $\mathcal{G}$ as before by replacing the degree four nodes in the grid by our disjoint-paths gadgets. Our first objective is to analyze the maximum flow possible in the case where our derived instance $\mathcal{G}$ is made from NO-instances. Consider a confluent flow in $\mathcal{G}$. If we contract the pseudo-nodes, this results in some {\em leaf-to-root} paths in the subgraph $G_N$. We define $T_i$ as the union of all such leaf-to-root paths terminating at $t_i$. If we have a NO-instance, then the resulting collection $\mathcal{T}=\{T_1,T_2,\dots,T_N\}$ forms non-crossing subgraphs. That is, if $i \neq j$, then there do not exist leaf-to-root paths $P_i \in T_i$ and $P_j \in T_j$ which cross in the standard embedding of $G_N$. Since we started with a confluent flow in $\mathcal{G}$, the flow paths within each $T_i$ are {\em edge-confluent}. That is, when two flow paths share an {\bf edge}, they must thereafter follow the same path to $t_i$. Note that they could meet and diverge at a node if they use different incoming and outgoing edges. In the following, we identify the subgraph $T_i$ with its edge-confluent flow.
The {\em capacity} of a $T_i$ is the maximum flow from its leaves to $t_i$. The capacity of a collection $\mathcal{T}$ is then the sum of these capacities. We first prove that the maximum value of a flow (i.e., capacity) is significantly reduced if we require a non-crossing collection of edge-confluent flows. One should note that as our demands are tiny, we may essentially route any partial amount $x \leq \frac{1}{i}$ from a node $s_i$; we cannot argue as though we route the whole $\frac{1}{i}$. On the other hand, any packets from $s_i$ must route on the same path, and in particular $s_i$ lies in a unique $T_j$ (or none at all). Another subtlety in the proof is to handle the fact that we cannot a priori assume that there is at most one leaf $s_j$ in a $T_i$.
Hence such a flow does not just correspond to a maximum uncrossed unsplittable flow. In fact, because the packets are tiny,
it is easy to verify that all the packets may be routed unsplittably (not confluently) even if they are required to use non-crossing paths.
\begin{lemma}\label{lemma.maxflow} The maximum capacity of a non-crossing edge-confluent flow in $G_N$ is at most $2$. \end{lemma} \noindent{\bf Proof.} Let $t_{i_1},t_{i_2},...,t_{i_k}$ be the roots of the subgraphs $T_i$ which support the edge-confluent flow, where wlog $i_1 > i_2 > \cdots >i_k$. We argue inductively about the topology of where these supports live in $G_N$. For $i \leq j$ we define a subgrid $G(i,j)$ of $G_N$ induced by columns and rows whose indices lie in the range $[i,j]$. For instance, the rightmost column of $G(i,j)$ has capacity $\frac{1}{i}$ and the leftmost column $\frac{1}{j}$; similarly, the lowest row of $G(i,j)$ has capacity $\frac{1}{i}$ and the highest row $\frac{1}{j}$.
Obviously all the $T_i$'s route in $G(1,N)=G(r_1,\ell_1)$ where we define $r_1=1,\ell_1=N$. Consider the topologically highest path $P_{i_1}$ in $T_{i_1}$,
and let $r'_1$ be the highest row number where this path intersects the column $n_1$ containing $t_{i_1}$. We define $r_2 = r_1'+1$ and $\ell_2 = n_1-1$ and consider the subgrid $G(r_2,\ell_2)$. Observe that in the undirected case it is possible that $P_{i_1}$ routes through the subgrid $G(r_2,\ell_2)$; see Figure \ref{fig:upperT}(b). In the directed case this cannot happen; see Figure \ref{fig:upperT}(a).
\begin{figure}\label{fig:upperT}
\end{figure}
In addition, it is possible that $T_{i_2}$ completely avoids routing through the subgrid $G(r_2,\ell_2)$. But, for this to happen, it must have a cut-edge (containing all its flow) in column $t_{i_1}$; consequently, its total flow is then at most $\frac{1}{i_1}$. It is also possible that it has some flow which avoids $G(r_2,\ell_2)$ and some that does not. Since $T_{i_1}$ also has maximum flow at most $\frac{1}{i_1}$, it follows that in every case the total flow is at most $\frac{2}{i_1}$ plus the maximum size of a confluent flow in the subproblem $G(r_2,\ell_2)$. Note that in this subproblem, its ``first'' rooted subgraph may be at $t_{i_2}$ or $t_{i_3}$ depending on which of the two cases above occurred for $T_{i_2}$.
If we iterate this process, at a general step $i$ we have an edge-confluent flow in the subgrid $G(r_i,\ell_i)$ whose lower-left corner is in row $r_i$ and column $\ell_i$ (hence $r_i \leq \ell_i$). Note that these triangular grids are in fact nested. Let $T_{n_i}$ be its subgraph rooted at a $t_{n_i}$ with $n_i$ maximized (that is, furthest to the left on the bottom row). As before, the total flow in this sub-instance is at most $\frac{2}{n_i}$ plus a maximum edge-confluent flow in some $G(r_{i+1},\ell_{i+1})$. Since each new sub-instance has at least one fewer rooted flow, this process ends after at most $k^*\le k$ steps. Note that for $i < k^*$ we have $\frac{2}{n_i} \leq \frac{2}{\ell_{i+1}}$ and for $i=k^*$ we have $\frac{2}{n_{k^*}} \leq \frac{2}{r_{k^*}}$. The latter inequality follows since for each $i$ we have $r_i \leq n_i \leq \ell_i$.
Now by construction the grids are nested and so $\ell_1 > \ell_2 > \cdots > \ell_{k^*} \geq r_{k^*} > \cdots > r_2 > r_1$ (recall that columns are ordered increasingly from right to left). Since $r_1=1$, we may inductively deduce that $r_{i} \geq i$ for all $i$; in particular $r_{k^*} \ge k^*$, and thus $\ell_i \ge \ell_{k^*} \ge r_{k^*} \ge k^*$ for all $i$. The total flow in our instance is then at most \begin{eqnarray*} 2\cdot \sum_{1 \leq i \leq k^*} \frac{1}{n_i} &\leq& 2\cdot (\sum_{2 \leq i \leq k^*} \frac{1}{\ell_i} + \frac{1}{r_{k^*}}) \\ &\le& 2\cdot \sum_{1 \leq i \leq k^*} \frac{1}{k^*} \\ &=& 2 \end{eqnarray*} The lemma follows. $
{\Box}$
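The telescoping argument above can be exercised numerically. The following randomized sketch (illustrative only, not part of the proof) samples nested index sequences satisfying the constraints $r_i \ge i$, $n_i \ge \ell_{i+1}$ for $i < k$, and $n_k \ge r_k$, and checks that the flow bound of $2$ holds:

```python
import random

# Randomized sanity check of the telescoping bound in Lemma lemma.maxflow:
# under the nesting constraints r_i >= i, n_i >= ell_(i+1) for i < k, and
# n_k >= r_k, the total flow 2 * sum_i 1/n_i never exceeds 2.
random.seed(0)
for _ in range(1000):
    k = random.randint(1, 20)
    rs = sorted(random.sample(range(1, 1000), k))
    rs = [max(r, i + 1) for i, r in enumerate(rs)]       # enforce r_i >= i
    ells = sorted(random.sample(range(max(rs), 2000), k), reverse=True)
    ns = []
    for i in range(k):
        lo = ells[i + 1] if i + 1 < k else rs[i]         # n_i >= ell_(i+1)
        ns.append(random.randint(max(lo, rs[i]), ells[i]))
    total = 2 * sum(1.0 / n for n in ns)
    assert total <= 2 + 1e-9
```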
We can now complete the proof of the approximation hardness. Observe that any node of degree four in $G_N$ is incident to two edges of weight $\frac{1}{i}$ and to two edges of weight $\frac{1}{j}$, for some $j < i$. Again, we construct a graph $\mathcal{G}$ by replacing each node of degree four with an instance $H$ of the $2$ node-disjoint paths problem, where the weight $\frac{1}{i}$ edges of $G_N$ are incident to $x_1$ and $y_1$, and the weight $\frac{1}{j}$ edges are incident to $x_2$ and $y_2$. In the undirected case we require capacitated node-disjoint paths and set $\alpha=\frac{1}{i}$ and $\beta=\frac{1}{j}$. More precisely, since we are dealing with node capacities in confluent flows, we actually subdivide each edge of $H$ and the new node inherits the edge's capacity. The nodes $x_1$ and $y_1$ also have capacity $\frac{1}{i}$ whilst the nodes $x_2$ and $y_2$ have capacity $\frac{1}{j}$ in order to simulate the edge capacities of $G_N$.
\begin{lemma}\label{lem:yes2} If $H$ is a YES-instance, then the maximum single-sink confluent flow in $\mathcal{G}$ has value $H_N$. If $H$ is a NO-instance, then the maximum confluent flow in $\mathcal{G}$ has value at most $2$. \end{lemma} \noindent{\bf Proof.} It is clear that if $H$ is a YES-instance, then the two feasible paths in $H$ can be used to allow paths in $G_N$ to cross at any node without restrictions on their values. This means we obtain a confluent flow of value $H_N$ by using the canonical paths $P^*_i, \, 1 \le i \le N$.
Now suppose that $H$ is a NO-instance and consider how a confluent flow $\mathcal{T}=\{T_1,\dots, T_N\}$ routes packets through the gadgets. As it is a NO-instance, the image of the trees (after contracting the $H$'s to single nodes) in $G_N$ yields a non-crossing edge-confluent flow. The capacity of this collection in $G_N$ is at least that in $\mathcal{G}$. By Lemma~\ref{lemma.maxflow}, their capacity is at most $2$, completing the proof. $
{\Box}$
\ \\ {{\noindent\bf Theorem~\ref{thm:hardnba}.} \itshape Given the no-bottleneck assumption, the single-sink confluent flow problem cannot be approximated to within a factor $O(\log^{1-\epsilon}n)$, for any $\epsilon > 0$, unless $P=NP$. This holds for both the maximum cardinality and maximum throughput objectives in undirected and directed graphs.\\ }
\noindent{\bf Proof.} It follows that if we could approximate the maximum confluent flow problem in $\mathcal{G}$ to a factor better than $H_N/2$, we could determine whether the optimal solution is $2$ or $H_N$. This in turn would allow us to determine whether $H$ is a YES- or a NO-instance.
Note that $\mathcal{G}$ has $n=\Theta(pN^2)$ edges, where $p=|V(H)|$. If we take $N = \Theta(p^{\frac12(\frac{1}{\epsilon}-1)})$, where $\epsilon>0$ is a small constant, then $H_N=\Theta(\frac{1}{2}(\frac{1}{\epsilon}-1) \log p)$. Since $\log n = \frac{1}{\epsilon}\log p$, for $p$ sufficiently large we have $H_N = \Omega((\log n)^{1-\epsilon})$. This gives a lower bound of $\Omega((\log n)^{1-\epsilon})$. $
{\Box}$
Similarly, if we restrict attention to flows that decompose into $k$ disjoint trees, then it is not hard to see that: \begin{thm} Given the no-bottleneck assumption, there is an $\Omega(\log k)$ hardness of approximation, unless $P=NP$, for the problem of finding a maximum confluent flow that decomposes into at most $k$ disjoint trees. $
{\Box}$ \end{thm}
\section{Stronger Lower Bounds for Cardinality Single-Sink Unsplittable Flow with Arbitrary Demands}\label{sec:stronger}
In the large demand regime even stronger lower bounds can be achieved for the cardinality objective. To see this, we explain the technique of Azar and Regev \cite{azar2001strongly} (used to prove Theorem~\ref{thm:ar}) in Section \ref{sec:expo-demands} and show how to extend it to undirected graphs and to confluent flows. Then in Section \ref{sec:refine}, we combine their construction with the half-grid graph to obtain lower bounds in terms of the bottleneck value (Theorem~\ref{thm:harder}).
\subsection{$m^{1-\epsilon}$ Hardness in the Large-Demand Regime}\label{sec:expo-demands}
{{\noindent\bf Theorem~\ref{thm:extended}.} \itshape If $P \neq NP$ then, for any $\epsilon > 0$, there is no $O(m^{1-\epsilon})$-approximation algorithm for the cardinality objective of the single-sink unsplittable/confluent flow problem in undirected graphs.\\ }
\noindent{\bf Proof.} We begin by describing the construction of Azar and Regev for directed graphs. They embed instances of the uncapacitated $2$-disjoint paths problem into a directed path. Formally, we start with a directed path $z^1,z^2, \ldots ,z^{\ell}$, where $t=z^{\ell}$ forms our sink destination for all demands. In addition, for each $1 \le i < \ell$, there are two parallel edges from $z^{i}$ to $z^{i+1}$. One of these has capacity $2^{i+1}$ and the other has a smaller capacity of $2^{i+1}-1$.
There is a demand $s_i$ from each $z^i$, $i < \ell$, to $z^{\ell}$ of size $2^{i+1}$.
Note that this unsplittable flow instance is feasible as follows. For each demand $s_j$, we may follow the high capacity edge from $z^j$ to $z^{j+1}$ (using up all of its capacity) and then use low capacity edges on the path $z^{j+1},z^{j+2}, \ldots ,z^{\ell}$. Call these the {\em canonical paths} for the demands. The total demand on the low capacity edge out of $z^j$ is then $\sum_{i < j} 2^{i+1} = 2^{j+1}-4$, which fits within its capacity of $2^{j+1}-1$, as desired.
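A quick sanity check of this canonical routing, using the demand sizes $2^{i+1}$ above (an illustrative sketch with a hypothetical path length $\ell$):

```python
# On segment z^j -> z^(j+1), demand s_j (of size 2^(j+1)) saturates the
# high capacity edge, while the earlier demands s_1, ..., s_(j-1) all fit
# on the parallel low capacity edge of capacity 2^(j+1) - 1.
ell = 12   # illustrative path length
for j in range(1, ell - 1):
    demand_j = 2 ** (j + 1)                    # size of demand s_j
    high_cap, low_cap = 2 ** (j + 1), 2 ** (j + 1) - 1
    assert demand_j == high_cap                # s_j exactly fills the high edge
    upstream = sum(2 ** (i + 1) for i in range(1, j))
    assert upstream == 2 ** (j + 1) - 4 and upstream <= low_cap
```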
Now replace each node $z^j$, $1\le j < \ell$, by an instance $H^j$ of the uncapacitated directed $2$-disjoint paths problem.
Each edge in $H^j$ is given capacity $2^{j+1}$. Furthermore:\\
(i) The tail of the high capacity edge out of $z^j$ is identified with the node $y_2$. \\
(ii) The tail of the low capacity edge out of $z^j$ is identified with $y_1$.\\
(iii) The heads of both edges into $z^j$ (if they exist) are identified with $x_1$.\\
(iv) The node $x_2$ becomes the starting point of the demand $s_j$ from $z^j$. \\ This construction is shown in Figure \ref{fig:line}.
\begin{figure}
\caption{ An Azar-Regev Path}
\label{fig:line}
\end{figure}
Now if we have a YES-instance of the $2$-disjoint paths problem, we may then simulate the canonical paths in the standard way. The demand in $H^j$ uses the directed path from $x_2$ to $y_2$ in $H^j$; it then follows the high capacity edge from $y_2$ to the $x_1$-node in the next instance $H^{j+1}$. All demand arriving from upstream $H^i$'s enters $H^{j}$ at its node $x_1$ and follows the directed path from $x_1$ to $y_1$. This total demand is $\sum_{i < j} 2^{i+1}$ and thus fits into the low capacity edge from $H^j$ into $H^{j+1}$. Observe that this routing is also confluent in our modified instance because the paths in the $H^j$'s are node-disjoint.
Hence, if we have a YES-instance
of the $2$-disjoint paths problem, both the unsplittable and confluent flow problems have a solution routing all of the demands.
Now suppose that we have a NO-instance, and consider a solution to the unsplittable (or confluent) flow problem.
Take the highest value $i$ such that
the demand from $H^i$ is routed. By construction, this demand must use a path $P_2$ from $x_2$ to $y_2$. But this
saturates the high capacity edge from $y_2$. Hence any demand from $H^j$, $j < i$ must pass from $y_1$ to $x_1$ while
avoiding the edges of $P_2$. This is impossible, and so we route at most one demand.
This gives a gap of $\ell$ for the cardinality objective. Azar-Regev then choose $\ell = |V(H)|^{\lceil \frac{1}{\epsilon} \rceil}$ to obtain a hardness of $\Omega(n^{1-\epsilon'})$.
Now consider undirected graphs. Here we use an undirected instance of the capacitated $2$-disjoint paths problem. We plug this instance into each $H^j$, and use the two capacity values of $\beta=2^{j+1}$ and $\alpha=2^{j+1}-1$. A similar routing argument then gives the lower bound. $
{\Box}$
We remark that it is easy to see why this approach does not succeed for the throughput objective. The use of exponentially growing demands implies that high throughput is achieved simply by routing the largest demand.
\subsection{Lower Bounds for Arbitrary Demands}\label{sec:refine}
By combining paths and half-grids we are able to refine the lower bounds in terms of the bottleneck value (or demand spread).\\
{{\noindent\bf Theorem~\ref{thm:harder}.} \itshape Consider any fixed $\epsilon > 0$ and $d_{max}/u_{min} > 1$. It is NP-hard to approximate cardinality single-sink unsplittable/confluent flow to within a factor of $O(\sqrt{\log (\frac{d_{max}}{u_{min}})}\cdot m^{\frac12-\epsilon})$ in undirected or directed graphs. For unsplittable flow, this remains true for planar graphs.\\ }
\noindent{\bf Proof.} We start with two parameters $p$ and $q$. We create $p$ copies of the Azar-Regev path
and attach them to a $p\times p$ half-grid, as shown in Figure~\ref{fig:gridline}.
\begin{figure}\label{fig:gridline}
\end{figure}
Now take the $i^{th}$ Azar-Regev path, for each $i=1,2 \ldots p$. The path
contains $q$ supply nodes with demands of sizes $2^{(i-1)q},2^{(i-1)q+1},\ldots , 2^{iq-1}$. (Supply node $s_j$ has
demand $2^{j-1}$.) Therefore the total demand on path $i$ is $\tau_i := 2^{(i-1)q}(2^q-1)<2^{iq}$. The key point is that {\em the total demand of path $i$ is less than the smallest demand in path $i+1$}. Note that we have $pq$ demands, and thus demand sizes from $2^{0}$ up to $2^{pq-1}$. Consequently the demand spread is $2^{pq-1}$. We set $u_{\min}=d_{\min}$ and thus $$pq-1 = \log\left(\frac{d_{max}}{d_{min}}\right) = \log\left(\frac{d_{max}}{u_{min}}\right)$$
It remains to prescribe capacities to the edges of the half-grid. To do this, every edge in the $i$th canonical hooked path has capacity $\tau_i$ (not $\capp{i}$). These capacity assignments, in turn, induce corresponding capacities in each of the disjoint paths gadgets. It follows that if each gadget on the paths and half-grid corresponds to a YES-instance gadget then we may route all $pq$ demands.
Now suppose the gadgets correspond to a NO-instance. It follows that we may route at most one demand along each Azar-Regev path. But, by our choice of demand values, any demand on the $i$th path is too large to fit into any column $j < i$ in the half-grid. Hence we have the same conditions required in Theorem~\ref{thm:hard} to show that at most one demand in total can feasibly route. It follows that we cannot approximate the cardinality objective to better than a factor $pq$.
Note that the construction contains at most $m=O((qp+ p^2)\cdot |E(H)|)$ edges, where $H$ is the $2$-disjoint paths instance. Now we select $p$ and $q$ such that $q\ge p$ and $pq\ge |E(H)|^{\frac{1}{\epsilon}}$. Then, for some constant $C$, we have \begin{eqnarray*} C\cdot m^{\frac12-\epsilon}\cdot \sqrt{\log\left(\frac{d_{max}}{d_{min}}\right)} &\le & \sqrt{pq}\cdot \sqrt{pq}\\ &=& pq \end{eqnarray*} Therefore, since we cannot approximate to within $pq$, we cannot approximate the cardinality objective to better than a factor $O(\sqrt{\log (\frac{d_{max}}{u_{min}})}\cdot m^{\frac12-\epsilon})$. $
{\Box}$
\section{Upper Bounds for Flows with Arbitrary Demands}\label{sec:upper}
In this section we present upper bounds for maximum flow problems with arbitrary demands.
\subsection{Unsplittable Flow with Arbitrary Demands}\label{sec:upper-unsplittable}
One natural approach for the cardinality unsplittable flow problem is used repeatedly in the literature (even for general multiflows). Group the demands into $O(\log \frac{d_{max}}{d_{min}})$ bins of geometrically increasing demand size, and then consider each bin separately. This approach can also be applied to the throughput objective (and to the more general profit-maximisation model). It immediately incurs a lost factor relating to the number of bins, and this feels slightly artificial. In fact, in the no-bottleneck regime, there is no need to lose this extra factor: Baveja et al~\cite{baveja2000approximation} gave an $O(\sqrt{m})$ approximation for profit-maximisation when $d_{max} \leq u_{min}$. On the other hand, our lower bound in Theorem~\ref{thm:harder} shows that if $d_{max} > u_{min}$ we do need to lose some factor dependent on $d_{max}$. But how large does this need to be? The current best upper bound is $O(\log (\frac{d_{max}}{u_{min}})\cdot \sqrt{m \log m})$ by Guruswami et al.~\cite{guruswami2003near}, and this works for the general profit-maximisation model.\footnote{Actually, they state the bound as $\log^{3/2}m$ because exponential size demands are not considered in that paper.} For the cardinality and throughput objectives, however, we can obtain a better upper bound. The proof combines analyses from \cite{baveja2000approximation} and \cite{kolliopoulos2004approximating} (which focus on the no-bottleneck assumption case). We emphasize that the following theorem applies to all multiflow problems, not just the single-sink case.\\
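The grouping step described above can be sketched as follows. The function name and bin layout are illustrative assumptions, not taken from the cited works: demands of value at most $u_{min}$ form the no-bottleneck sub-instance, and the remaining demands are split into bins within which all values are within a factor $2$ of each other.

```python
import math

def bin_demands(demands, u_min):
    """Illustrative sketch: split demands into a no-bottleneck sub-instance
    (values <= u_min, key -1) and geometric bins [u_min*2^k, u_min*2^(k+1)).
    Within each bin k >= 0 all demands are within a factor 2 of each other."""
    bins = {}
    for d in demands:
        if d <= u_min:
            k = -1  # handled by the O(sqrt(m)) no-bottleneck algorithm
        else:
            k = int(math.floor(math.log2(d / u_min)))
        bins.setdefault(k, []).append(d)
    return bins
```

The number of non-trivial bins is at most $\lceil \log_2(d_{max}/u_{min})\rceil$, which is exactly the factor lost by this approach.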
{{\noindent\bf Theorem~\ref{thm:upper}.} \itshape There is an $O(\sqrt{m}\log (\frac{d_{max}}{u_{min}}))$ approximation algorithm for cardinality unsplittable flow and an $O(\sqrt{m}\log n)$ approximation algorithm for throughput unsplittable flow, in both directed and undirected graphs. \\
}
\noindent{\bf Proof.} We apply a result from \cite{guruswami2003near} which shows that for cardinality unsplittable flow with $d_{max} \leq \Delta d_{min}$, the greedy algorithm yields an $O(\Delta \sqrt{m})$ approximation. Their proof is a technical extension of the greedy analysis of Kolliopoulos and Stein \cite{kolliopoulos2004approximating}. We first find an approximation for the sub-instance consisting of the demands of value at most $u_{min}$. This sub-instance satisfies the no-bottleneck assumption, and an $O(\sqrt{m})$-approximation is known for general profits \cite{baveja2000approximation}.
Now, either this sub-instance gives half the optimal profits, or we focus on demands of at least $u_{min}$. In the remaining demands, by losing a $\log (\frac{d_{max}}{u_{min}})$ factor, we may assume $d_{max} \leq \Delta d_{min}$, for some $\Delta=O(1)$. The greedy algorithm above then gives the desired guarantee for the cardinality problem. The same approach applies for the throughput objective, since all demands within the same bin have values within a constant factor of each other. Moreover, we require only $\log n$ bins as demands of at most $\frac{d_{max}}{n}$ may be discarded as they are not necessary for obtaining high throughput.
$
{\Box}$
\vspace*{.3cm} As alluded to earlier, this upper bound is not completely satisfactory as pointed out in \cite{chekuri2003edge}. Namely, all of the lower bound instances have a linear number of edges $m=O(n)$. Therefore, it is possible that there exist upper bounds dependent on $\sqrt{n}$. Indeed, for the special case of {\sc MEDP} \ in undirected graphs and directed acyclic graphs $O(\sqrt{n})$-approximations have been developed \cite{chekuri2006n,nguyen2007disjoint}.
Such an upper bound is not known for general directed {\sc MEDP} \ however; the current best approximation is $\min\{\sqrt{m},n^{2/3}\}$.
\subsection{Priority Flow with Arbitrary Demands}\label{sec:upper-priority} Next we show that the lower bound for the maximum priority flow problem is tight. \begin{thm} Consider an instance of the maximum priority flow problem with $k$ priority classes. There is a polytime algorithm that approximates the maximum flow to within a factor of $O(\min\{k,\sqrt{m}\})$. \end{thm} \begin{proof}
First suppose that $k \leq \sqrt{m}$. Then for each class $i$, we may find the optimal priority flow by solving a maximum flow problem in the subgraph induced by all edges of priority $i$ or better. This yields a $k$-approximation. Next consider the case where $\sqrt{m} < k$. Then we may apply Lemma \ref{lem:greedy-priority}, below, which implies that the greedy algorithm yields an $O(\sqrt{m})$-approximation. The theorem follows. $
{\Box}$ \end{proof}
The following proof for uncapacitated networks follows ideas from the greedy analysis of Kleinberg \cite{Kleinberg96},
and Kolliopoulos and Stein \cite{kolliopoulos2004approximating}. One may also design an $O(\sqrt{m})$-approximation
for general edge capacities using more intricate ideas from \cite{guruswami2003near}; we omit the details. \begin{lemma}\label{lem:greedy-priority} A greedy algorithm yields a $O(\sqrt{m})$-approximation to the maximum priority flow problem. \end{lemma} \begin{proof} We now run the greedy algorithm as follows. On each iteration, we find the demand $s_i$ which has a shortest feasible path
in the residual graph. Let $P_i$ be the associated path, and delete its edges. Let the greedy solution have cardinality $t$.
Let $\mathcal{O}$ be the optimal maximum priority flow and let ${\cal Q}$ be those demands which are satisfied in some optimal solution but not by the greedy algorithm. We aim to upper bound the size of $\mathcal{Q}$.
Let $Q$ be a path used in the optimal solution to satisfy some demand in ${\cal Q}$.
We say that $P_i$ {\em blocks} the optimal path $Q$ if $i$ is the least index such that $P_i$ and $Q$ share a common edge. Such an $i$ must exist, or else the greedy algorithm could still route on $Q$.
Let $l_i$ denote the length of $P_i$. Let $k_i$ denote the number of optimal paths (corresponding to demands in ${\cal Q}$ ) that are blocked by $P_i$. It follows that $k_i \le l_i$. But, by the definition of the greedy algorithm, we have that each such blocked path must have length at least $l_i$
at the time when $P_i$ was packed. Hence it used up at least $l_i \ge k_i$ units of capacity in the optimal solution. Therefore the total capacity used by the unsatisfied demands from the optimal solution is at least $\sum_{i=1}^t k_i^2$. As the total capacity is at most $m$ we obtain
\begin{equation} \label{bounding} \frac{(\sum_{i=1}^t k_i)^2}{t} \leq \sum_{i=1}^t k_i^2 \leq m \end{equation}
\noindent where the first inequality is by the Chebyshev Sum Inequality. Since $\sum_{i} k_i = |{\cal Q}| = |{\cal O}| -t $, we obtain
$\frac{(|\mathcal{O}|-t)^2}{t} \leq m$. One may verify that if $t < \frac{|\mathcal{O}|}{\sqrt{m}}$ then this inequality implies
$|\mathcal{O}| = O(\sqrt{m})$ and, so, routing a single demand yields the desired approximation. $
{\Box}$ \end{proof}
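A minimal sketch of the greedy procedure from Lemma~\ref{lem:greedy-priority}, on a toy undirected network with unit capacities. The data representation is an assumption made for illustration: edges are $(u, v, \mathrm{priority})$ triples, demands are $(\mathrm{source}, \mathrm{class})$ pairs, and a demand of class $c$ may use edges of priority $c$ or better (numerically smaller or equal).

```python
from collections import deque

def shortest_feasible_path(edges, src, dst, cls):
    """BFS shortest path from src to dst using only edges whose priority
    is cls or better (numerically <= cls). Returns a list of edges or None."""
    adj = {}
    for (u, v, p) in edges:
        if p <= cls:
            adj.setdefault(u, []).append((v, (u, v, p)))
            adj.setdefault(v, []).append((u, (u, v, p)))
    parent = {src: None}
    queue = deque([src])
    while queue:
        x = queue.popleft()
        if x == dst:
            path = []
            while parent[x] is not None:   # walk back to src, collecting edges
                prev, e = parent[x]
                path.append(e)
                x = prev
            return path[::-1]
        for (y, e) in adj.get(x, []):
            if y not in parent:
                parent[y] = (x, e)
                queue.append(y)
    return None

def greedy_priority_flow(edges, demands, sink):
    """Repeatedly route the demand admitting the shortest feasible path,
    then delete that path's edges (unit capacities assumed)."""
    edges = set(edges)
    routed, remaining = [], list(demands)
    while True:
        best = None
        for (s, c) in remaining:
            p = shortest_feasible_path(edges, s, sink, c)
            if p is not None and (best is None or len(p) < len(best[1])):
                best = ((s, c), p)
        if best is None:
            return routed
        routed.append(best[0])
        remaining.remove(best[0])
        edges -= set(best[1])   # packed path consumes its unit-capacity edges
```

The analysis in the lemma bounds the optimal demands left unrouted by charging each of them to the greedy path that blocks it, using the fact that blocked optimal paths are at least as long as the greedy path packed at that step.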
\section{Conclusion}
It would be interesting to improve the upper bound in Theorem~\ref{thm:upper} to be in terms of $\sqrt{n}$
rather than $\sqrt{m}$.
Resolving the discrepancy with Theorem~\ref{thm:harder} between $\sqrt{\log(\frac{d_{max}}{u_{min}})}$ and $\log(\frac{d_{max}}{u_{min}})$ would also clarify the complete picture.
\ \\ \noindent{\bf Acknowledgments.} The authors thank Guyslain Naves for his careful reading and precise and helpful comments. The authors gratefully acknowledge support from the NSERC Discovery Grant Program.
\end{document} |
\begin{document}
\title{Impact of Spatial Frequency Based Constraints on Adversarial Robustness}
\begin{abstract} Adversarial examples mainly exploit changes to input pixels to which humans are not sensitive, and arise from the fact that models make decisions based on uninterpretable features. Interestingly, cognitive science reports that the process of interpretability for human classification decisions relies predominantly on low spatial frequency components. In this paper, we investigate the robustness to adversarial perturbations of models enforced during training to leverage information corresponding to different spatial frequency ranges. We show that it is tightly linked to the spatial frequency characteristics of the data at stake. Indeed, depending on the data set, the same constraint may result in very different levels of robustness (up to $0.41$ adversarial accuracy difference). To explain this phenomenon, we conduct several experiments to highlight influential factors such as the level of sensitivity to high frequencies, and the transferability of adversarial perturbations between original and low-pass filtered inputs. \end{abstract} \hspace{10pt}
\keywords{neural networks, adversarial examples, adversarial robustness, spatial frequency}
\section{Introduction} \label{ntroduction} Neural network based models have been shown to reach impressive performances on challenging tasks, while being vulnerable to adversarial examples, i.e. maliciously crafted perturbations added to clean examples to fool a model at inference~\cite{szegedy2013intriguing}. This phenomenon has shed light on the fact that, to perform a specific task, machine learning models rely on different features or different feature processing from those humans rely on to make their decisions \cite{jo2017measuring, ilyas2019adversarial, yinfourier2020}.
To make a model robust against an adversary in the white-box setting, many defenses have been developed, including proactive~\cite{Madry2017, zhang2019theoretically, hendrycks2019, cohen2019certified} and reactive~\cite{meng2017magnet, hwang2019puvae} strategies.
Few works underline the critical link between robustness and the interpretability of the features a model relies on: it has been shown that models trained with adversarial training \cite{Madry2017} or randomized smoothing \cite{cohen2019certified} exhibit interpretable gradients \cite{tsipras2018robustness, etmann2019connection, kaur2019aligned} and, inversely, a recent effort \cite{chan2020jacobian} demonstrates that a model trained to have explainable jacobian matrices presents adversarial robustness.
Furthermore, Zhang \textit{et al.} \cite{Zhang2019interpreting} experimentally highlight the importance of low spatial frequency (hereafter, LSF) information, such as shape, for adversarially trained models, in opposition to concepts associated with high spatial frequency (hereafter, HSF) information. Experimental evidence in neural computation and cognitive psychology suggests the importance of LSF to perform efficient classification~\cite{schyns1994blobs, mermillod2010coarse, french2002importance}. Therefore, a natural hypothesis would be that a model trained specifically to rely more on LSF information might present an improved adversarial robustness. This hypothesis has already been indirectly exploited by taking advantage of preprocessing defense schemes based on HSF components filtering~\cite{das2018shield, liu2019feature, zhang2019adversarial}. Other defenses aim at training a model exploiting more human interpretable information, such as \cite{addepalli2020towards} by giving higher importance to information present in higher bit planes. However, these methods stay agnostic of the intrinsic spatial frequency characteristics of the data.
The objective of this work is twofold. First, we experimentally question some preconceived hypotheses related to adversarial examples, more particularly the one considering adversarial perturbations as a pure HSF phenomenon with data-agnostic spatial frequency characteristics. Second, we aim at investigating the link between the spatial frequency features of the information that a model uses to perform predictions \textit{and} the robustness against adversarial perturbations offered by spatial frequency-based constraints. Our key contributions are:
\begin{itemize}
\item We show that a frequency-based regularization induces very different levels of robustness according to the frequency features of the data. As an example, a low-frequency constrained model on CIFAR10 (a data set that covers a broad frequency spectrum) has no robustness, while the same constraint reaches $41\%$ adversarial accuracy on SVHN against the $l_{\infty}$ PGD attack.
\item By analyzing the sensitivity of a model trained naturally (i.e. without spatial frequency-based procedures and hereafter noted as \textit{regular} model) as well as adversarial transferability properties, we observe that enforcing a model to rely on LSF information is not a necessary condition to bring adversarial robustness.
\item We notice that, depending on the data set complexity, some models spread over the whole frequency spectrum, and show that constraints spanning different frequency ranges can help improve robustness.
\item We discuss combination with adversarial training \cite{Madry2017} for future overall defense strategy. \end{itemize} The paper is organized as follows. After positioning our work in relation to the state-of-the-art in Section \ref{Related work}, we analyze, in Section \ref{Frequency analysis and transferability}, frequency properties and sensitivity of features learned by a model. Notably, we study to which extent features learned by a model are focused on low or high spatial frequency concepts, and the sensitivity of models to frequency constrained noise. In Section \ref{Transferability}, we set forth interesting links between transferability of adversarial perturbations and frequency properties of the information models make use of. In Section \ref{Constraints on frequencies}, we design loss functions to force a model to extract features in particular frequency ranges, and observe the potential effects on adversarial robustness. We link these effects with analysis performed in Sections~\ref{Frequency analysis and transferability} and~\ref{Transferability}. We also show promising future work direction by binding our findings with Adversarial Training.
\section{Related work} \label{Related work}
Recent efforts~\cite{wang2020highfrequency, yinfourier2020} experimentally demonstrate that regular models predominantly exploit non-interpretable HSF components, and that robust models tend to use more concepts, such as shape, associated to LSF~\cite{geirhos2018imagenettrained, Zhang2019interpreting}, in a way similar to that of humans~\cite{schyns1994blobs, mermillod2010coarse, french2002importance}. Therefore, a common belief is that the adversarial vulnerability of a model comes from the utilization of HSF components \cite{wang2020towards}. However, Yin \textit{et al.}~\cite{yinfourier2020} show that adversarial perturbations cannot be viewed only as a HSF phenomenon: adversarially trained models, more sensitive to LSF, stay vulnerable to an adversary that optimally exploits this part of the frequency spectrum. Notably, Sharma \textit{et al.}~\cite{sharma2019lowfreq} make use of LSF constraints to craft adversarial examples against adversarially trained models on ImageNet and use fewer iterations than classical adversarial attacks.
As adversarial examples are the consequence of models relying on brittle and non-interpretable features \cite{ilyas2019adversarial}, some papers exploit the idea of enforcing a model to use features to which humans are sensitive. During training, Yin \textit{et al.} \cite{yinfourier2020} add LSF noise and demonstrate that it does not necessarily improve robustness to LSF perturbations. As an explanation, the authors hypothesize that as natural images are more LSF concentrated, it is harder for a model to become invariant to these spatial frequencies and, therefore, to LSF perturbations. Earlier, Geirhos \textit{et al.} \cite{geirhos2018imagenettrained} interestingly took advantage of stylized images that keep only shape information within a data augmentation process to improve the robustness against common perturbations. Even if not related to frequency concerns, Addepalli \textit{et al.} show in \cite{addepalli2020towards} that constraining a model to rely on the information contained in the higher bit planes only (as humans make decisions based on the information of large magnitude) has a positive impact on robustness.
\section{Preliminaries} \label{Preliminaries}
\subsection{Notations}
\label{Notations}
A neural network model $M_\theta$, with parameters $\theta$, classifies an input $x \in \mathbb{R}^d$ to a label $M_{\theta}(x) \in \left\{ 1 \dots C \right\}$. $L^{E}(\theta, x, y)$ denotes the cross-entropy loss for $M_\theta$ and $(x,y)$ an input with its corresponding ground-truth label. The pre-softmax function of $M_\theta$ (the logits) is denoted as $f: \mathbb{R}^d \rightarrow \mathbb{R}^C$. We denote $x^{low}_i$ and $x^{high}_i$ respectively the low-pass and high-pass filtered versions of $x$ at intensity $i$ (see Section \ref{Filtering with the Fourier transform}). The \textit{LSF task} (resp. \textit{HSF task}) refers to the classification task where inputs have been low-pass (resp. high-pass) filtered at some intensity, i.e. where input-label pairs correspond to $(x^{low}_i, y)$ (resp. $(x^{high}_i, y)$) for some $i$. $M^{low}_i$ (resp. $M^{high}_i$) denotes a model trained for the LSF (resp. HSF) task. The accuracy is denoted by $Acc$, and the adversarial accuracy ($Acc_{adv}$) denotes the accuracy of a model on a set of adversarial examples.
\begin{figure*}
\caption{Original image (left). Low-pass filtered images with corresponding mask (first row). High-pass filtered images (magnified for visualization) with corresponding mask (bottom row). For the Fourier domain masks, white denotes value $1$ and black value $0$.~\textbf{For LSF, low filtering intensity means restricted low-pass filtering. For HSF, high filtering intensity means restricted high-pass filtering.}}
\label{filt_exemple}
\end{figure*}
\subsection{Filtering with the Fourier transform}
\label{Filtering with the Fourier transform}
Low or high-pass filtering is performed by deleting the undesired spatial frequencies in the Fourier domain thanks to a boolean mask $\Omega \in \left\{ 0,1 \right\}^{n \times n}$, similarly to~\cite{sharma2019lowfreq, wang2020highfrequency, wang2020towards, yinfourier2020}. We note $\mathcal{F}$ and $\mathcal{F}^{-1}$ the Discrete Fourier Transform (DFT), and its inverse function, respectively. For a gray-scale image $a \in \mathbb{R}^{n \times n}$, $\mathcal{F}_c(a) \in \mathbb{C}^{n \times n}$ denotes the centered variant of $\mathcal{F}(a) \in \mathbb{C}^{n \times n}$ (i.e. coefficients for low frequencies are located at the center, and those for high frequencies at the corners). The filtered image is then obtained classically as $x^{freq}=\mathcal{F}^{-1}\big(\mathcal{F}_c(a) \odot \Omega\big)$, with $\odot$ the Hadamard product. For a color image, the procedure is applied to each of the channels. We note $x^{low}_i$ for $\Omega$ corresponding to 1's only in the $2i \times 2i$ square in the middle, and $x^{high}_i$ for $\Omega$ corresponding to 1's only outside the $2i \times 2i$ square. In consequence, for low-pass (resp. high-pass) filtering, the smaller (resp. the higher) the intensity $i$, the stronger the filter is. We illustrate this process in Figure~\ref{filt_exemple}.
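The filtering procedure can be sketched with numpy's FFT as follows (gray-scale case). Note that the masked centered spectrum is un-centered with \texttt{ifftshift} before inversion, a step which is implicit in the notation above.

```python
import numpy as np

def filter_image(a, i, mode="low"):
    """Low- or high-pass filter an n x n gray-scale image a at intensity i:
    keep (mode="low") or remove (mode="high") the central 2i x 2i square
    of the centered Fourier spectrum."""
    n = a.shape[0]
    spectrum = np.fft.fftshift(np.fft.fft2(a))     # centered spectrum F_c(a)
    mask = np.zeros((n, n), dtype=bool)
    c = n // 2
    mask[c - i:c + i, c - i:c + i] = True          # central 2i x 2i square
    if mode == "high":
        mask = ~mask                               # complementary mask
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))
```

For a color image the same mask is applied channel-wise; since the two masks are complementary, the low- and high-pass parts at the same intensity sum back to the original image.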
\subsection{Data sets and models}
\label{data_sets_models}
We consider CIFAR10 \cite{kriz12}, SVHN \cite{Netzer2011} and a custom-built data set, named \textit{Small ImageNet}, built by extracting $10$ meta classes from the ImageNet ILSVRC2012 benchmark. For each meta class, we extracted $3000$ images from the original training set and $300$ images from the non-blacklisted validation set. All input images are scaled to $\left[ 0,1 \right]$. CIFAR10 and SVHN, both involving color images of size $32 \times 32$, allow us to make comparisons and draw conclusions about the phenomena observed. \textit{Small ImageNet}, composed of $224 \times 224$ color images, enables us to extrapolate results on images with higher definition. For SVHN we use a model inspired from VGG \cite{Simonyan15}, for CIFAR10 we use a WideResNet28-8 model \cite{Zagoruyko2016WRN}, and for \textit{Small Imagenet} we consider a MobileNetV2 model \cite{mobilenetv2}. All details about \textit{Small Imagenet} and the training hyper-parameters are presented in the code repository of this work\footnote{\url{https://gitlab.emse.fr/remi.bernhard/Frequency_and_Robustness/}\label{gitlab}}.
\section{Frequency properties of data and models} \label{Frequency analysis and transferability} In this section we aim at gaining insight into the extent to which the information learned by a classifier trained to solve the regular classification task is also informative for the LSF and HSF tasks. These experiments allow us to gain intuition on the way models leverage information with respect to the frequency content of each data set, and will help to better understand the robustness dissimilarities which occur between different models trained with the same frequency-based constraints (later in Section \ref{Constraints on frequencies}).
\subsection{Impact of filtered data sets.} We begin by evaluating CIFAR10 and SVHN regular models on low-pass and high-pass filtered images. Results are presented in Figure \ref{cifar10_svhn_acc}. For CIFAR10, we notice that the accuracy of the model decreases much more slowly than for SVHN when evaluated on more and more high-pass filtered images. We obtain the opposite result for SVHN with low-pass filtered data. We can therefore assume that the informative features learned by the regular model are more focused on the LSF task for SVHN and are more spread between the LSF and HSF tasks for CIFAR10. This analysis is consistent with the intrinsic frequency features of the data sets, as revealed by a classical Fourier analysis (presented in~Figure \ref{cifar10_svhn_imnet_freq}, top row) which shows a quite narrow spectrum for SVHN (towards LSF) and a spread spectrum for CIFAR10.
\begin{figure}
\caption{CIFAR10 and SVHN. Accuracy of a regular model on low-pass and high-pass filtered data set, for different filtering intensities.}
\label{cifar10_svhn_acc}
\end{figure}
\begin{figure}
\caption{CIFAR10, SVHN and Small Imagenet data sets. Magnitude of the Fourier spectrum for clean images. Low frequencies are at the center, and high frequencies at the corners.}
\label{cifar10_svhn_imnet_freq}
\end{figure}
\begin{figure}
\caption{CIFAR10 and SVHN. Test set accuracy of models trained on filtered data sets. (See Figure~\ref{filt_exemple} for filtering intensity effect)}
\label{cifar10_svhn_acc_tab}
\end{figure}
The accuracy of models $M^{low}_i$ and $M^{high}_i$ for various $i$, presented in Figure \ref{cifar10_svhn_acc_tab}, further highlights this phenomenon. The accuracy reached by models $M^{low}_i$ decreases much more slowly for SVHN than for CIFAR10 as the filtering intensity increases (and the inverse phenomenon is observable for high-pass filtering). This agrees with previous results (Figure \ref{cifar10_svhn_acc}), i.e. the useful information for the classification task is more distributed in the frequency spectrum for CIFAR10 compared to SVHN, for which the information is predominantly concentrated in the low frequencies.
\subsection{Sensitivity to high spatial frequency noise.}
We investigate the sensitivity of regular models to perturbations in specific frequencies. For that purpose, we use a similar procedure as in \cite{yinfourier2020}: the sensitivity is measured as the error rate of the model on a set of examples perturbed with noise located only in those spatial frequencies. More precisely, for a clean input image $x \in \mathbb{R}^{n \times n \times c}$, each channel is perturbed independently by the addition of a noise $rv U_{i,j}$, where $U_{i,j}$ is a Fourier basis matrix for coordinates $(i,j)$ in the Fourier domain, $r$ is chosen randomly in $\{ -1 ,1\}$, and $v$ controls the magnitude of the perturbation. We then measure the error rate of a model on $1,000$ well-classified test set examples perturbed with this type of noise, as a function of $(i,j)$. We focus our analysis on CIFAR10 and SVHN, two data sets with the same image size, and we experimentally set $v=4$. Results are presented in Figure \ref{cifar10_svhn_sensi_freq} (top row). We notice that a CIFAR10 regular model is sensitive to both LSF and HSF noise, while a SVHN regular model is sensitive to LSF noise but not to high and very high frequencies.
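This perturbation procedure can be sketched as follows; the exact construction of the basis matrices $U_{i,j}$ in \cite{yinfourier2020} differs in minor details, so this is an illustrative approximation.

```python
import numpy as np

def fourier_basis(n, i, j):
    """Real n x n matrix U_{i,j} of unit l2 norm whose spectrum is supported
    on frequency (i, j) and its Hermitian-symmetric counterpart."""
    spec = np.zeros((n, n), dtype=complex)
    spec[i, j] = 1.0
    spec[(-i) % n, (-j) % n] = 1.0   # symmetry makes the image real-valued
    u = np.real(np.fft.ifft2(spec))
    return u / np.linalg.norm(u)

def perturb(x, i, j, v, rng):
    """Add r * v * U_{i,j}, with a random sign r drawn per channel, to an
    n x n x c image x, then clip back to the valid [0, 1] pixel range."""
    u = fourier_basis(x.shape[0], i, j)
    out = x.copy()
    for ch in range(x.shape[-1]):
        r = rng.choice([-1.0, 1.0])
        out[..., ch] = np.clip(x[..., ch] + r * v * u, 0.0, 1.0)
    return out
```

Sweeping $(i, j)$ over the Fourier domain and recording the error rate on perturbed examples produces sensitivity maps such as those of Figure~\ref{cifar10_svhn_sensi_freq}.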
For CIFAR10, we also investigate the sensitivity of models $M^{low}_i$. Results are presented in Figure \ref{cifar10_svhn_sensi_freq} (middle and bottom rows). Interestingly, we notice that the models $M^{low}_i$ become less sensitive to HSF as $i$ decreases (i.e. as the low-pass filtering intensity increases). Therefore, for CIFAR10, training a model on low-pass filtered images brings robustness against HSF perturbations. However, this robustness would be useless if adversarial perturbations were to rely on a broader spectrum than only HSF. We investigate this in the following section.
\label{sensitivity_noise} \begin{figure}
\caption{CIFAR10 and SVHN. Error rate of models on data perturbed with Fourier constrained noise. The value at $(i,j)$ measures the error rate on data perturbed with noise along the Fourier basis matrix $U_{i,j}$. Low frequencies are located at the center, and high frequencies at the corners. High values (red color) denote a high sensitivity, and low values (blue color) denote a low sensitivity.}
\label{cifar10_svhn_sensi_freq}
\end{figure}
\section{Transferability analysis} \label{Transferability} The transferability of adversarial perturbations between two models has been explained by shared non-robust useful features~\cite{ilyas2019adversarial}, that is features sensitive to adversarial perturbations and exploited for the prediction by both models. Therefore, to gain a better understanding of the nature of features at stake as well as frequency properties of adversarial perturbations, we study the transferability of adversarial examples between a regular model and models trained for the LSF or HSF task (i.e. $M^{low}$ and $M^{high}$). For these $M^{low}$ and $M^{high}$ models, a pre-processing layer is added before the model to evaluate it on non-filtered inputs. We use the $l_{\infty}$ DIM attack \cite{Xie2018ImprovingTO} (a state-of-the-art gradient-based attack tuned for transferability), with 40 iterations, a probability $p=0.8$ and a $l_{\infty}$ perturbation budget of $0.03$.
We first recall that for models $M^{low}_i$, a low filtering intensity $i$ means that the model is trained on a strictly low-pass filtered data set (see Figure~\ref{filt_exemple}) and, as the filtering intensity $i$ increases, more and more HSF are considered. Thus, for example, adversarial examples crafted on $M^{low}_2$ exploit non-robust features learned with only LSF information and adversarial examples crafted on $M^{low}_{12}$ will have the possibility to take advantage of non-robust features determined on a broader range of frequencies. On the contrary, for models $M^{high}_i$, a high filtering intensity $i$ means that only HSF are kept (see Figure~\ref{filt_exemple}), and a low intensity represents a larger spectrum, gathering more LSF. Here, adversarial examples crafted on $M^{high}_{12}$ will exploit non-robust features learned exclusively on HSF information and adversarial examples crafted on $M^{high}_2$ will rely on non-robust features from a broader spectrum. We illustrate this in Figure~\ref{cifar10_adversarial_perturbation} \textit{(a)} and \textit{(b)}, by presenting the magnitude of the Fourier spectrum of the adversarial perturbation crafted on the regular model and on $M^{low}_6$. We observe that the perturbation is quite uniformly distributed along the spectrum (confirming a result from~\cite{yinfourier2020}) for the regular model and predominantly focused on the low frequencies for $M^{low}_6$. This is an important point, as it shows that adversarial examples are not HSF phenomena but may rely on a large range of spatial frequencies to efficiently fool a model.
\begin{figure}
\caption{CIFAR10. Magnitude of the Fourier spectrum for the adversarial perturbation (a) Regular model, (b) Model trained on low-pass filtered dataset at intensity $6$}
\label{cifar10_adversarial_perturbation}
\end{figure}
Furthermore, analyzing the transferability results of Figure~\ref{cifar10_transferability}, we can provide interesting information about the nature of the robust and non robust features of a model as well as the properties of adversarial perturbations. Importantly, we observe similar behaviors between CIFAR10, SVHN and Small ImageNet.
\begin{figure*}
\caption{SVHN (left), CIFAR10 (middle), Small ImageNet (right). Transferability analysis between base model and models trained on a filtered data set. (See Figure~\ref{filt_exemple} for filtering intensity effect)}
\label{cifar10_transferability}
\end{figure*}
A first conclusion comes from the two-way transferability between the regular model $M$ (noted \textit{Base}) and models $M^{low}_i$ (blue curves). Indeed, we notice that the stronger the low-pass filtering (smaller $i$), the lower the transferability of adversarial perturbations. This indicates that the regular classification task and the LSF task share predominantly robust useful features.
Secondly, an important outcome arises from the dissimilarity between the two orange curves for each data set. On the one hand, the solid orange curve attests to the impact of non-robust features exploiting HSF, highlighted by an almost constant success of adversarial examples crafted from the base model against models $M^{high}_i$ (about $0.1$, $0.2$ and $0.5$ of adversarial accuracy for CIFAR10, SVHN and Small ImageNet respectively). On the other hand, the dotted orange curve indicates that, as the high-pass filtering becomes more restrictive (i.e. as the $i$ value increases), the transferability of adversarial examples crafted on $M^{high}_i$ to the regular model decreases. These observations support the claim that, to be efficient, adversarial perturbations must exploit a wide part of the spectrum, and therefore cannot be focused only on HSF. This is particularly the case for Small ImageNet, as \textit{i) }the accuracy from a regular to a $M^{high}$ model is higher than for CIFAR10 and SVHN, and \textit{ii)} the transferability of adversarial examples crafted on $M^{high}_i$ is already poor for $i=5$.
We can summarize our conclusions as follows: \begin{itemize} \item Robustness is strongly related to features that rely on LSF information. \item Adversarial perturbations are not efficient when focused only on HSF. \end{itemize} In the next section, we look deeper into the impact of frequency-based constraints when training a regular model, in order to investigate whether they can help increase its robustness.
\section{Adversarial robustness of frequency-constrained models} \label{Constraints on frequencies} Following the preceding observations, we investigate loss functions designed to make a model leverage informative features for both the regular task and the LSF and/or HSF task, and evaluate how this impacts adversarial robustness. Adversarial robustness is evaluated with the adversarial accuracy ($Acc_{adv}$), considering an attacker in the white-box setting under the common $l_{\infty}$ threat model. Adversarial examples are crafted with the $l_{\infty}$ PGD attack \cite{Madry2017}, with a perturbation budget of $\epsilon = 0.03$. To provide an evaluation of robustness that is as accurate as possible, notably to ensure that no gradient masking occurs, we follow state-of-the-art guidelines from~\cite{carlini2019evaluating}. Detailed architectures and setups are presented in the code repository of this work.
\subsection{Frequency-based regularization}
\label{Constraint on low and high frequencies}
To force a model $M_{\theta}$ to rely on information relevant to the LSF or HSF task, we define the following loss function $L^{freq}$, which acts on the logits.
\begin{equation} \begin{split}
L^{freq}_{i,j}(\theta, x,y) = & L^{E}(\theta, x, y)
+ \lambda_1 \left\| f(x) - f(x^{low}_i) \right\|_2^2 \\
& + \lambda_2 \left\| f(x) - f(x^{high}_j) \right\|_2^2 \end{split} \label{loss L_all} \end{equation}
For readability, when $\lambda_1=0$ (i.e. the constraint is focused only on the HSF task) the loss is simply noted $L^{high}_i$, and when $\lambda_2=0$ it is noted $L^{low}_i$. During training, the cross-entropy part $L^{E}$ makes the model learn useful features to solve the regular classification task. The second part (weighted by $\lambda_1$ and $\lambda_2$) constrains the model to extract useful features consistent with the LSF and HSF tasks.
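The loss above can be sketched numerically. The following is a minimal NumPy illustration (not the authors' implementation): it assumes a DFT-based radial low/high-pass filter and a model \texttt{f} mapping an image to logits; the helper names \texttt{radial\_mask}, \texttt{low\_pass}, \texttt{high\_pass} and \texttt{freq\_loss} are ours, and the exact filter used in the paper may differ.

```python
import numpy as np

def radial_mask(h, w, radius):
    # Boolean mask keeping DFT coefficients within `radius` of the spectrum center.
    fy = np.fft.fftshift(np.fft.fftfreq(h)) * h
    fx = np.fft.fftshift(np.fft.fftfreq(w)) * w
    yy, xx = np.meshgrid(fy, fx, indexing="ij")
    return np.sqrt(yy**2 + xx**2) <= radius

def low_pass(x, radius):
    # x^{low}_i: keep only the low spatial frequencies of a 2D image x.
    F = np.fft.fftshift(np.fft.fft2(x))
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * radial_mask(*x.shape, radius))))

def high_pass(x, radius):
    # x^{high}_j: the complementary high-frequency content.
    return x - low_pass(x, radius)

def freq_loss(f, x, y, i, j, lam1, lam2, ce):
    # L^freq = L^E + lam1*||f(x)-f(x^low_i)||^2 + lam2*||f(x)-f(x^high_j)||^2
    fx = f(x)
    return (ce(fx, y)
            + lam1 * np.sum((fx - f(low_pass(x, i)))**2)
            + lam2 * np.sum((fx - f(high_pass(x, j)))**2))
```

With $\lambda_1=\lambda_2=0$ this reduces to the plain cross-entropy, matching the notation of Equation~(\ref{loss L_all}).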
For CIFAR10 and SVHN, the loss functions considered are $L^{low}_i$ and $L^{high}_i$ for $i \in \left\{2,4,6,8,10\right\}$, and $L^{freq}_{i,j}$ for $(i,j) \in \left\{ (10,4), (4,12), (6,3), (6,10), (8,6), (6,8) \right\}$. For Small ImageNet, as each model training is costly, we consider the loss functions $L^{low}_i$, $L^{high}_i$ and $L^{freq}_{i,j}$ for representative values in $\left\{ 10,20,40,60 \right\}$. For conciseness when presenting results, the subscript '${*}$' means that equal results are reached whatever the intensity of the filtering.
\subsection{Do the intrinsic frequency properties of the data bias the level of adversarial robustness?} \label{iv_b} \input{table/cifar10_svhn_imnet_Llow_Lhigh}
Results for $L^{low}$ and $L^{high}$, presented in Table \ref{cifar10_svhn_imnet_Llow_Lhigh}, allow for a first important observation. Indeed, we see that the same constraint induces very different effects on robustness, depending on the data set at stake. For CIFAR10 and Small ImageNet, we observe no robustness when considering the separate losses $L^{low}_{*}$ or $L^{high}_{*}$. On the contrary, models trained on SVHN with $L^{low}$ present an interesting level of adversarial robustness, depending on the intensity level. This observation is coherent with the different frequency properties of the data sets highlighted in Section \ref{Frequency analysis and transferability}: the information learned by models trained on CIFAR10 and Small ImageNet is spread over the whole frequency spectrum, whereas it is focused on the LSF for SVHN. To further investigate this link, we check whether a CIFAR10 model trained with $L^{low}$ would still reach some robustness on low-pass filtered data (i.e. trained with $L^{low}_{i}(\theta,x^{low},y)$). In other words, we try to exclude the high spatial frequencies, which are predominantly non-robust and which we assume to explain this complete lack of robustness. To that purpose, we train models with $L^{low}_i$ ($i=6,8,10,12$) on low-pass filtered versions of CIFAR10 ($i=2,4,10$). Thus, we ensure that the model learns features strictly focused on the LSF. Interestingly, we measure a slight but genuine and non-negligible robustness for some models with loss $L^{low}$ (up to an adversarial accuracy of $0.11$).
As robustness is noticed for SVHN with $L^{low}$, and for CIFAR10 with $L^{low}$ on low-pass filtered data, a first conclusion is that a model relying predominantly on useful features of the LSF task can be made more robust with a low-frequency based constraint ($L^{low}$). Moreover, these models also share an insensitivity to high and very high frequencies (cf. Section~\ref{Frequency analysis and transferability}), which highlights another influential factor on the robustness of a model trained with the loss $L^{low}$.
\input{table/cifar10_svhn_imnet_Lfreq}
However, is frequency-based regularization suitable for complex data sets such as CIFAR10 or Small ImageNet? In Table \ref{cifar10_svhn_imnet_Lfreq}, we present results with a constraint spanning a wider spectrum ($\lambda_1 > 0$, $\lambda_2 > 0$). For SVHN, as expected, the combined constraint does not enable better robustness than the loss $L^{low}$, as the combination of the two constraints is not compatible with the intrinsic frequency properties of the data set: information is predominantly concentrated in the low frequencies. For Small ImageNet and CIFAR10, the combined constraint forces the model to learn features informative for both the LSF and HSF tasks, which are mainly robust since features informative of the LSF task are mostly robust, as shown in Section~\ref{Transferability} for CIFAR10. Interestingly, a stronger level of robustness is observed for Small ImageNet, and we hypothesize that the combined constraint is all the more effective as the frequency spectrum is wider.
\subsection{Is frequency-based regularization compatible with adversarial training?} \label{Combination with adversarial training} \input{combination_adv_training_v2.tex}
\section{Conclusion} \label{Conclusion}
In this paper, we investigated through experiments the link between frequency-based processing and adversarial robustness. In particular, this allowed us to gain insight into the strong influence of the frequency properties of the data at stake, which may be very specific to the application domain. Notably, we found that models relying predominantly on useful features for the LSF task, and insensitive to high frequency noise, show robustness when constrained to rely on useful information for the LSF task. Interestingly, when the information encompassed in the images is spread over the whole frequency spectrum, a constraint spanning a wide frequency spectrum is a viable solution. Moreover, the efficiency of frequency-based regularization when combined with existing defense schemes is strongly dependent on the nature of these schemes. These experiments, as well as the conclusions of previous efforts in neural computation and cognitive psychology, highlight the fact that the intrinsic frequency characteristics of data must necessarily be considered when designing robust defense strategies against integrity-based attacks on supervised models.
\section*{Acknowledgments} This work is a collaborative action that is partially supported by the European project ECSEL InSecTT\footnote{\url{www.insectt.eu}, InSecTT: ECSEL Joint Undertaking (JU) under grant agreement No 876038. The JU receives support from the European Union’s Horizon 2020 research and innovation program and Austria, Sweden, Spain, Italy, France, Portugal, Ireland, Finland, Slovenia, Poland, Netherlands, Turkey. The document reflects only the author’s view and the Commission is not responsible for any use that may be made of the information it contains.} and by the French National Research Agency (ANR) in the framework of the \textit{Investissements d’avenir} program (ANR-10-AIRT-05, irtnanoelec)
~and benefited from the French Jean Zay supercomputer thanks to the \textit{AI dynamic access} program.
\end{document}
\begin{document}
\begin{frontmatter}
\title{Biconnectivity, $st$-numbering and other applications of DFS using $O(n)$ bits \tnoteref{t1}} \tnotetext[t1]{Some of these results were announced in preliminary form in the proceedings of 27th International Symposium on Algorithms and Computation (ISAAC 2016) LIPIcs, Volume 64, pages 22:1-22:13~\cite{Chakraborty0S16}.}
\author[VR]{Sankardeep Chakraborty} \ead{[email protected]} \author[VR]{Venkatesh Raman} \ead{[email protected]} \author[SR]{Srinivasa Rao Satti} \ead{[email protected]} \address[VR]{The Institute of Mathematical Sciences, HBNI, Chennai, India} \address[SR]{Seoul National University, Seoul, South Korea}
\begin{abstract} We consider space efficient implementations of some classical applications of DFS, including the problems of testing biconnectivity and $2$-edge connectivity, finding cut vertices and cut edges, and computing a chain decomposition and an $st$-numbering of a given undirected graph $G$ on $n$ vertices and $m$ edges. Classical algorithms for them typically use DFS and some $\Omega (\lg n)$ bits\footnote{We use $\lg$ to denote logarithm to the base $2$.} of information at each vertex. Building on a recent $O(n)$-bit implementation of DFS due to Elmasry et al. (STACS 2015), we provide $O(n)$-bit implementations for all these applications of DFS. Our algorithms take $O(m \lg^c n \lg\lg n)$ time for some small constant $c$ (where $c \leq 2$). Central to our implementation is a succinct representation of the DFS tree and a space efficient partitioning of the DFS tree into connected subtrees, which may be of independent interest for designing other space efficient graph algorithms.
\end{abstract} \end{frontmatter}
\section{Introduction} Space efficient algorithms are becoming increasingly important owing to their applications in the presence of rapid growth of ``big data'' and the proliferation of specialized handheld devices and embedded systems that have a limited supply of memory. Even if mobile devices and embedded systems are designed with large supply of memory, it might be useful to restrict the number of write operations. For example, on flash memory, writing is a costly operation in terms of speed, and it also reduces the reliability and longevity of the memory. Keeping all these constraints in mind, it makes sense to consider algorithms that do not modify the input and use only a limited amount of work space. One computational model that has been proposed in algorithmic literature to study space efficient algorithms, is the read-only memory (ROM) model. In this article, we focus on space efficient implementations of some fundamental graph algorithms in such settings without paying too much penalty on time.
There is already a rich history of designing space efficient algorithms in the read-only memory model. The complexity class {\sf L} (also known as {\sf DLOGSPACE}) is the class containing decision problems that can be solved by a deterministic Turing machine using only logarithmic amount of work space for computation. There are several important algorithmic results~\cite{DattaLNTW09,ElberfeldJT10,ElberfeldK14,ElberfeldS16} for this class, the most celebrated being Reingold's method~\cite{Reingold08} for checking {\it st}-reachability in an undirected graph, i.e., to determine if there is a path between two given vertices $s$ and $t$. {\sf NL} is the non-deterministic analogue of {\sf L} and it is known that the {\it st}-reachability problem for {\it directed} graphs is {\sf NL}-complete (with respect to log space reductions). Using Savitch's algorithm~\cite{AroraB}, this problem can be solved in $n^{O(\lg n)}$ time using $O(\lg ^2 n)$ bits. Savitch's algorithm is very space efficient but its running time is superpolynomial. Among the deterministic algorithms running in polynomial time for directed {\it st}-reachability, the most space efficient algorithm is due to Barnes et al.~\cite{BarnesBRS98} who gave a slightly sublinear space (using $n/2^{\Theta(\sqrt{\lg n})}$ bits) algorithm for this problem
running in polynomial time. We know of no polynomial time algorithm for this problem with a better space bound. Moreover, the space used by this algorithm matches a lower bound on space for solving directed {\it st}-reachability on a restricted model of computation called Node Naming Jumping Automata on Graphs ({\sf NNJAG})~\cite{CookR80,EdmondsPA99}. This model was introduced especially for the study of directed {\it st}-reachability, and most of the known sublinear space algorithms for this problem can be implemented on it. Thus, designing any polynomial time ROM algorithm taking space less than $n/2^{\Theta(\sqrt{\lg n})}$ bits requires significantly new ideas. Recently there has been some improvement in the space bound for some special classes of graphs like planar and $H$-minor free graphs~\cite{AsanoKNW14,ChakrabortyPTVY14}. Other than these fundamental graph theoretical problems, there has been some work on designing space-efficient algorithms for the more classical selection and sorting problems \cite{Beame91,MunroP80,MunroR96}, and for problems in computational geometry~\cite{AsanoBBKMRS14,BarbaKLSS15,BarbaKLS14,DarwishE14}, among others.
A drawback, however, of all these graph algorithms using small space, i.e., sublinear bits, is that their running time is often some polynomial of high degree. For example, to the best of our knowledge, the exact running time of Reingold's algorithm \cite{Reingold08} for undirected {\it st}-connectivity has not been analysed, yet we know it admits a large polynomial running time. This is not surprising, as Tompa~\cite{Tompa82} showed that for directed {\it st}-reachability, if the number of bits available is $o(n)$, then some natural algorithmic approaches to the problem require super-polynomial time. Motivated by these impossibility results from complexity theory, and inspired by the practical applications of these fundamental graph algorithms, recently there has been a surge of interest in improving the space complexity of fundamental graph algorithms without paying too much penalty in the running time, i.e., reducing the working space of the classical graph algorithms to $O(n)$ bits with little or no penalty in running time. Generally, most of the classical linear time graph algorithms take $O(n)$ words, or equivalently $O(n \lg n)$ bits, of space.
Starting with the paper of Asano et al. \cite{AsanoIKKOOSTU14} who showed how one can implement DFS using $O(n)$ bits, improving on the naive $O(n \lg n)$-bit implementation, the recent series of papers \cite{AsanoIKKOOSTU14,BanerjeeC016,BanerjeeCRRS2015,CJS,Chakraborty0S16,CS,ElmasryHK15} presented space-efficient algorithms for a few other basic graph problems: namely BFS, maximum cardinality search, topological sort, connected components, minimum spanning tree, shortest path, recognition of outerplanar graph and chordal graphs among others. We add to this small yet growing body of space-efficient algorithm design literature by providing such algorithms for some classical graph problems that have been solved using DFS, namely the problem of testing biconnectivity and $2$-edge connectivity, finding cut vertices and cut edges, computing chain decomposition and $st$-numbering among others.
\subsection{Model of Computation}
As is standard in the area of space-efficient graph algorithms~\cite{AsanoIKKOOSTU14,BanerjeeCRRS2015,BanerjeeC016,ElmasryHK15}, we assume that the input graph is given in a read-only memory (and so cannot be modified). If an algorithm must produce output, this is done on a separate write-only memory. When something is written to this memory, it cannot be read or rewritten again. So the input is ``read only'' and the output is ``write only''. In addition to the input and the output media, a limited random-access workspace is available. The data on this workspace is manipulated at the word level as in the standard word RAM model, where the machine consists of words of size $w = \Omega (\lg n)$ bits, and any logical, arithmetic, and bitwise operations involving a constant number of words take a constant amount of time. We count space in terms of the number of bits in the workspace used by the algorithms. Historically, this model is called the {\it register input model}, and it was introduced by Frederickson \cite{Frederickson87} while studying some problems related to sorting and selection.
We assume that the input graphs $G=(V,E)$ are represented using an {\it adjacency array}, i.e., $G$ is represented by an array of length $|V|$ where the $i$-th entry stores a pointer to an array that stores all the neighbors of the $i$-th vertex. For directed graphs, we assume that the input representation has both in and out adjacency arrays for all the vertices, i.e., every vertex $v$ has access to two arrays, one for all the in-neighbors of $v$ and the other for all the out-neighbors of $v$. This representation, which has now become somewhat standard, was also used recently in \cite{BanerjeeC016,Chakraborty0S16,ElmasryHK15,HagerupK16} to design various other space efficient graph algorithms. We use $n$ and $m$ to denote the number of vertices and the number of edges respectively, in the input graph $G$. Throughout the paper, we assume that the input graph is connected, and hence $m \geq n-1$.
\subsection{Our results and organization of the paper} Asano et al.~\cite{AsanoIKKOOSTU14} showed that Depth First Search (DFS) in a directed or an undirected graph can be performed in $O(m \lg n)$ time and $O(n)$ bits of space. Elmasry et al. \cite{ElmasryHK15} improved the time to $O(m\lg \lg n)$ still using $O(n)$ bits of space. We build upon these results to give space efficient implementations of several classical applications of DFS.
First, as a warm up, we start with
some simple applications of the space efficient DFS to show the following. \begin{itemize} \item An $O(m \lg n \lg \lg n)$ time and $O(n)$ bits of space algorithm to compute the strongly connected components of a directed graph in Section~\ref{strong_conn}. \end{itemize} In addition, we also give \begin{itemize} \item an algorithm to output the vertices of a directed acyclic graph in a topologically sorted order in Section~\ref{top_sort}, and \item an algorithm to find a sparse (with $O(n)$ edges) spanning biconnected subgraph of an undirected biconnected graph in Section~\ref{sparse_bicon_subgraph} \end{itemize} both using asymptotically the same time and space used for DFS, i.e., using $O(n)$ bits and $O(m \lg \lg n)$ time.
To develop fast and space efficient algorithms for other non-trivial graph problems which are also applications of DFS, in Section~\ref{treecover}, we develop and describe in detail a space efficient tree covering technique, and use this in subsequent sections. This technique, roughly speaking, partitions the DFS tree into connected smaller sized subtrees which can be stored using less space. Finally we solve the corresponding graph problem on these smaller sized subtrees and merge the solutions across the subtrees to get an overall solution. All of these can be done using less space and not paying too much penalty in the running time. Some of these ideas are borrowed from succinct tree representation literature.
As the first application, we consider in Section~\ref{sec:chain-decomp}, a space efficient implementation of chain decomposition
of an undirected graph. This is an important preprocessing routine for an algorithm to find cut vertices, biconnected components, cut edges, and also to test 3-connectivity~\cite{Schmidt2010c} among others. We provide an algorithm that takes $O(m \lg^2 n \lg \lg n)$ time using $O(n)$ bits of space, improving on previous implementations that took $\Omega (n \lg n)$ bits~\cite{Schmidt13} or $\Theta (m+n)$ bits~\cite{BanerjeeC016} of space.
In Section~\ref{sec:onbiconn}, we give improved space efficient algorithms for testing whether a given undirected graph $G$ is biconnected, and if $G$ is not biconnected, we also show how one can find all the cut vertices of $G$.
For this, we provide a space efficient implementation of Tarjan's classical lowpoint algorithm~\cite{Tarjan72}. Our algorithms take $O(m \lg n \lg \lg n)$ time and $O(n)$ bits of space. In Section~\ref{edgecon}, we provide a space efficient implementation for testing $2$-edge connectivity of a given undirected graph $G$, and producing cut edges of $G$ using $O(m \lg n \lg \lg n)$ time and $O(n)$ bits of space.
Given a biconnected graph, and two distinguished vertices $s$ and $t$, an $st$-numbering is a numbering of the vertices of the graph so that $s$ gets the smallest number, $t$ gets the largest, and every other vertex is adjacent both to a lower-numbered and to a higher-numbered vertex. Finding an $st$-numbering is an important preprocessing routine for a planarity testing algorithm~\cite{EvenT76} among others. In Section~\ref{sec:stnumb}, we give an algorithm to determine an $st$-numbering of a biconnected graph that takes $O(m \lg^2 n \lg \lg n)$ time using $O(n)$ bits. This improves the earlier implementations that take $\Omega (n \lg n)$ bits~\cite{Brandes02,Ebert83,EvenT76,Tarjan86}. Using this as a subroutine, in Section~\ref{st_app}, we provide improved space efficient implementations for the two-partitioning and two independent spanning trees problems, among others. We direct the readers to Section~\ref{sec:terms}, where we provide all the necessary definitions.
\subsection{Related Models} Several models of computation come close to the {\it read-only random-access} model, the model we focus on in this paper, when it comes to designing space-efficient graph algorithms. A single thread common to all of them is that access to the input tape is restricted in some way. In the {\it multi-pass streaming} model \cite{MunroP80} the input is kept on a read-only sequentially-accessible medium, and an algorithm tries to optimize the number of passes it makes over the input. In the {\it semi-streaming} model~\cite{FeigenbaumKMSZ05}, the elements (or edges, if the input is a graph) are revealed one by one, and the extra space allowed to the algorithm is $O(n \cdot \mathrm{polylog}(n))$ bits. Observe that it is not possible to store the whole graph if it is dense. The efficiency of an algorithm in this model is measured by the space it uses, the time it requires to process each edge, and the number of passes it makes over the stream. In the {\it in-place} model \cite{BronnimannC06}, one is allowed a constant number of additional variables, but it is possible to rearrange (and sometimes even modify) the input values. Chan et al.~\cite{ChanMR14} introduced the~\emph{restore} model, which is a more relaxed version of read-only memory (and a restricted version of the in-place model), where the input is allowed to be modified, but at the end of the computation, the input has to be restored to its original form. This has motivation, for example, in scenarios where the input (in its original form) is required by some other application. Buhrman et al.~\cite{BuhrmanCKLS14,Koucky16} introduced and studied the {\it catalytic-space} model, where a small amount (typically $O(\lg n)$ bits) of clean space is provided along with additional auxiliary space, with the condition that the additional space is initially in an arbitrary, possibly incompressible, state and must be returned to this state when the computation is finished. The input is assumed to be given in ROM.
They also provided implementations of some graph algorithms space efficiently.
\section{Preliminaries}\label{sec:prelims}
In this section, we list some preliminary results and graph theoretic definitions that will be used later in the algorithms we develop. We also discuss briefly, at a very high level, the main technique that goes behind almost all of our algorithms in this paper.
\subsection{Graph theoretic terminology} \label{sec:terms}
Here we collect all the necessary graph theoretic definitions that will be used throughout the paper. A cut vertex in an undirected graph $G$ is a vertex $v$ whose removal (along with its incident edges) increases the number of connected components of the graph. A (connected) graph with at least three vertices is biconnected (also sometimes called $2$-connected in the graph literature) if and only if it has no cut vertex. A biconnected component is a maximal biconnected subgraph. These components are attached to each other at cut vertices. Similarly, in an undirected graph $G$, a bridge (or cut edge) is an edge whose removal (without removing its endpoints) increases the number of connected components of the graph. A (connected) graph with at least two vertices is $2$-edge-connected (also sometimes called bridgeless) if and only if it has no bridge. A $2$-edge connected component is a maximal $2$-edge connected subgraph.
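These definitions can be checked directly, if very inefficiently, by deleting a vertex or an edge and recounting components. The following Python sketch is ours and purely for illustration; the algorithms developed in this paper are far more time- and space-efficient.

```python
def components(vertices, edges):
    # Count connected components with a simple union-find.
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(v) for v in vertices})

def cut_vertices(vertices, edges):
    # Cut vertex: removing it (with incident edges) increases #components.
    base = components(vertices, edges)
    return {v for v in vertices
            if components(vertices - {v},
                          [e for e in edges if v not in e]) > base}

def bridges(vertices, edges):
    # Bridge: removing the edge (keeping its endpoints) increases #components.
    base = components(vertices, edges)
    return [e for e in edges
            if components(vertices, [f for f in edges if f != e]) > base]
```

For the "bowtie" graph (two triangles sharing a vertex), the shared vertex is the unique cut vertex and there are no bridges, since every edge lies on a cycle.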
Given a biconnected graph $G$, and two distinguished vertices $s$ and $t$ in $V$ such that $s \neq t$, $st$-numbering is a numbering of the vertices of the graph so that $s$ gets the smallest number, $t$ gets the largest and every other vertex is adjacent both to a lower-numbered and to a higher-numbered vertex i.e., a numbering $s=v_1,v_2,\cdots,v_n=t$ of the
vertices of $G$ is called an $st$-numbering, if for all vertices $v_j, 1<j<n$, there exist $1\leq i<j<k\leq n$ such that $\{v_i, v_j\},\{v_j, v_k\} \in E$. It is well-known that $G$ is biconnected if and only if, for every edge $\{s,t\}\in E$, it has an $st$-numbering. In the {\it $k$-partitioning problem}, we are given vertices $a_1,\cdots, a_k$ of an undirected graph $G$ and natural numbers $c_1,\cdots, c_k$ with $c_1+\cdots+ c_k = n$, and we want to find a partition of $V$ into sets $V_1,\cdots, V_k$ with $a_i \in V_i$ and $|V_i| = c_i$ for every $i$ such that every set $V_i$ induces a connected graph in $G$. Given a graph $G$, we call a set of $k$ rooted spanning trees independent if they all have the same root vertex $r$ and, for every vertex $v\neq r$, the paths from $v$ to $r$ in all the $k$ spanning trees are vertex-disjoint (except for their endpoints). A directed graph $G$ is said to be {\it strongly connected} if for every pair of vertices $u$ and $v$ in $V$, both $u$ and $v$ are reachable from each other. If $G$ is not strongly connected, it is possible to decompose $G$ into its strongly connected components i.e. a maximal set of vertices $C \subseteq V$ such that for every pair of vertices $u$ and $v$ in $C$, both $u$ and $v$ are reachable from each other. A topological sort or topological ordering of a directed acyclic graph is a linear ordering of its vertices such that for every directed edge $(u,v) \in E$ from vertex $u$ to vertex $v$, $u$ comes before $v$ in the ordering. Let $T$ be a depth-first search tree of a connected undirected (or directed) graph $G$. For each vertex $v$ of $T$, preorder number of $v$ is the number of vertices visited up to and including $v$ during a preorder traversal of $T$. Similarly, postorder number of $v$ is the number of vertices visited up to and including $v$ during a postorder traversal of $T$.
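The $st$-numbering condition can be verified directly from its definition; the small Python predicate below (ours, for illustration only) checks that $s$ comes first, $t$ comes last, and every internal vertex has both a lower-numbered and a higher-numbered neighbor.

```python
def is_st_numbering(order, edges, s, t):
    # order: list of all vertices; edges: undirected edge list.
    pos = {v: i for i, v in enumerate(order)}
    adj = {v: set() for v in order}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    if order[0] != s or order[-1] != t:
        return False
    # Every internal vertex needs a lower- and a higher-numbered neighbor.
    return all(any(pos[w] < pos[v] for w in adj[v]) and
               any(pos[w] > pos[v] for w in adj[v])
               for v in order[1:-1])
```

On the $4$-cycle $1\hbox{-}2\hbox{-}3\hbox{-}4\hbox{-}1$ with $s=1$, $t=3$, the ordering $1,2,4,3$ is a valid $st$-numbering.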
\subsection{Tree cover and its space efficient construction} To implement our algorithms in $O(n)$ bits, our main idea is to process the nodes of the DFS tree in batches of $O(n/\lg n)$ nodes, as we can only afford to store trees of size $O(n/ \lg n)$ explicitly with their labels. To do this, we first use a tree-cover algorithm (that is used in succinct representations of trees) to partition the tree into $O(\lg n)$ connected subtrees of size $O(n/\lg n)$ each. We then solve the problem we are dealing with in these smaller subtrees, and later merge them in a specific order to obtain the overall solution. In some cases, to obtain the overall solution, we need to generate pairs of subtrees with explicit node labels, and then process the edges between them in a specific order. We describe all the details of the tree cover approach in Section~\ref{treecover}, and describe the algorithms in Section~\ref{everything}.
\subsection{Rank-Select}\label{rs} We use the following fundamental data structure on bitstrings in some of our algorithms.
Given a bitvector $B$ of length $n$, the rank and select operations are defined as follows: \begin{itemize}
\item $rank_a(i,B)$ = number of occurrences of $a\in \{0,1\}$ in $B[1,i]$, for $1\leq i\leq n$;
\item $select_a(i,B)$ = position in $B$ of the $i$-th occurrence of $a\in \{0,1\}$. \end{itemize} The following theorem gives an efficient structure to support these operations. \begin{theorem}[\cite{Clark96,Munro96,MunroRR01}]
\label{staticbit}
Given a bitstring $B$ of length $n$, one can construct a $o(n)$-bit auxiliary structure to support rank and select operations in $O(1)$ time. Also, such a structure can be constructed from the given bitstring in $O(n)$ time. \end{theorem}
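For concreteness, the two operations can be prototyped naively as follows (our illustrative Python sketch; the succinct structures of the theorem above answer the same queries in $O(1)$ time using only $o(n)$ extra bits, which the naive version does not attempt).

```python
class BitVector:
    # Naive 1-indexed rank/select over a list of bits.
    def __init__(self, bits):
        self.bits = list(bits)

    def rank(self, a, i):
        # Number of occurrences of bit a in B[1..i].
        return self.bits[:i].count(a)

    def select(self, a, i):
        # Position of the i-th occurrence of bit a, or None if absent.
        seen = 0
        for pos, b in enumerate(self.bits, 1):
            if b == a:
                seen += 1
                if seen == i:
                    return pos
        return None
```

Note the standard duality: if $select_a(i,B)=p$ then $rank_a(p,B)=i$.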
\subsection{Related work on Space-efficient DFS} Recall that DFS explores the given input graph $G$ where each vertex is initially {\it white}, meaning unexplored, becomes {\it gray} when DFS discovers it for the first time and pushes it on the stack, and is colored {\it black} when it is finished, i.e., its adjacency list has been checked completely and it leaves the stack. Recently Elmasry et al.~\cite{ElmasryHK15} showed the following tradeoff result for DFS. \begin{theorem}[\cite{ElmasryHK15}]\label{thm:elmasry-tradeoff} For every function $t: \mathbb{N} \rightarrow \mathbb{N}$ such that $t(n)$ can be computed within the resource bound of this theorem (e.g., in $O(n)$ time using $O(n)$ bits), the vertices of a directed or undirected graph $G$ can be visited in depth first order in $O((m+n)t(n))$ time with $O(n+n\frac{\lg \lg n}{t(n)})$ bits. \end{theorem} In particular, fixing $t(n)=O(\lg \lg n)$, one can obtain a DFS implementation which runs in $O(m \lg \lg n)$ time using $O(n)$ bits. We build on top of this DFS algorithm to provide space efficient implementations for various applications of DFS in directed and undirected graphs in the rest of this paper.
\section{Some simple applications of DFS using $O(n)$ bits \protect\footnote{The results of this section were announced in preliminary form in the proceedings of 22nd International Computing and Combinatorics Conference (COCOON 2016), Springer LNCS volume 9797, pages 119-130~\cite{BanerjeeC016}.}}
Classical applications of DFS in directed graphs (see~\cite{CLRS}) are to find strongly connected components of a directed graph, and to do a topological sort of a directed acyclic graph among many others. Also, given an undirected biconnected graph $G$, DFS is used as the main tool to produce a sparse spanning biconnected subgraph of $G$. We show here that while topological sort and producing a sparse spanning biconnected subgraph of an undirected biconnected graph can be solved using the same $O(n)$ bits and $O(m \lg\lg n)$ time (as for DFS), strongly connected components of a directed graph can be obtained using $O(n)$ bits and $O(m \lg n \lg \lg n)$ time.
\subsection{Strongly Connected Components}\label{strong_conn} There is a classical two-pass algorithm (see \cite{CLRS} or \cite{dasgupta}) for computing the Strongly Connected Components (SCC) of a given directed graph $G$ which works as follows. In the first pass, it runs a DFS on $G^R$, the reverse graph of $G$. In the second pass, it runs the connected component algorithm using DFS in $G$, but it processes the vertices in the decreasing order of their finishing times from the first pass.
We can obtain $G^R$ by switching the roles of the in and out adjacency arrays present in the input representation. As we cannot remember the vertex ordering from the first pass due to the space restriction, we process the vertices in batches of size $n/\lg n$ in reverse order. I.e., we run a full DFS in $G^R$ to obtain and store the last $n/\lg n$ finished vertices in an array $A$, as these are the vertices with the highest finishing times, in decreasing order. That is, we maintain $A$ as a queue of size $n/\lg n$, and whenever a new element is finished, it is added to the queue and the element with the earliest finish time at the other end of the queue is deleted. Now, we pick the vertices from $A$ one by one, in order starting from the one with the latest finish time, and start a fresh DFS in $G$ to compute the connected components, outputting all the reachable vertices as an SCC. The output vertices are marked in a bitmap so that we don't output them again. Once we are done with all the vertices in $A$, we restart the DFS from the beginning and produce the next chunk of $n/\lg n$ vertices by remembering the last vertex produced in the previous step and stopping as soon as we hit that boundary vertex. Then we repeat the connected component algorithm from this chunk of vertices and continue this way. It is clear that the algorithm produces the SCCs correctly. As we are calling the DFS algorithm $O(\lg n)$ times, the total time taken by this algorithm is $O(m \lg n \lg \lg n)$ with $O(n)$ bits of space. Hence, we have the following. \begin{theorem} Given a directed graph $G$ on $n$ vertices and $m$ edges, represented as in/out adjacency arrays, we can output the strongly connected components of $G$ in $O(m \lg n \lg \lg n)$ time and $O(n)$ bits of space. \end{theorem}
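As a point of reference, here is a Python sketch (with our own identifiers) of the classical two-pass SCC algorithm that the procedure above makes space efficient; this version stores the entire finish order of the first pass in memory, which is exactly what the batching over chunks of $n/\lg n$ vertices avoids:

```python
def scc(n, out_adj):
    # The in-adjacency arrays of G serve as the out-adjacency arrays of G^R;
    # here we simply build the reverse graph explicitly.
    rev = [[] for _ in range(n)]
    for u in range(n):
        for v in out_adj[u]:
            rev[v].append(u)

    # Pass 1: DFS on G^R, recording vertices in increasing finishing time.
    finish, seen = [], [False] * n
    for s in range(n):
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, iter(rev[s]))]
        while stack:
            u, it = stack[-1]
            nxt = next(it, None)
            if nxt is None:
                finish.append(u)      # u is finished (blackened)
                stack.pop()
            elif not seen[nxt]:
                seen[nxt] = True
                stack.append((nxt, iter(rev[nxt])))

    # Pass 2: DFS on G in decreasing finishing time; each tree is one SCC.
    comps, seen = [], [False] * n
    for s in reversed(finish):
        if seen[s]:
            continue
        seen[s] = True
        cur, stack = [], [s]
        while stack:
            u = stack.pop()
            cur.append(u)
            for v in out_adj[u]:
                if not seen[v]:
                    seen[v] = True
                    stack.append(v)
        comps.append(sorted(cur))
    return comps
```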
\subsection{Topological Sort}\label{top_sort} The standard algorithm for computing a topological sort~\cite{CLRS} outputs the vertices of a DFS in reverse order. If we can keep track of the DFS numbers, then reversing is an easy task. While working in the space-restricted setting (with $o(n \lg n)$ bits), this is a challenge as we don't have space to keep track of the DFS order. We can proceed as we did in the strongly connected components algorithm in the last section, by storing and outputting vertices in batches of $n/\lg n$, resulting in an $O(m \lg n \lg \lg n)$ time algorithm.
Elmasry et al.~\cite{ElmasryHK15} showed that the vertices of a DAG $G$ can be output in the order of a topological sort within the time and space bounds of a DFS in $G$ plus an additional $O(n \lg \lg n)$ bits. As they also showed how to perform DFS in $O(m+n)$ time and $O(n \lg \lg n)$ bits, overall their algorithm takes $O(m+n)$ time and $O(n \lg \lg n)$ bits to compute a topological sorting of $G$. Their main idea is to maintain enough information about a DFS to resume it in the middle, and to apply this repeatedly to reverse small chunks of its output, produced in reverse order, one by one.
We observe that, instead of storing information to restart DFS and produce the reverse order, we simply work with the reverse graph itself (which can be obtained from the input representation by switching the role of in and out adjacency arrays) and do a DFS in the reverse graph and output vertices as they are finished (or blackened) i.e., in the increasing order of finishing time. To see the correctness of this procedure, note that the reverse graph is also a DAG, and if $(i,j)$ is an edge in the DAG $G$, then $(j,i)$ is an edge in the reverse graph and $i$ will become black before $j$ while the algorithm performs DFS in the reverse graph. Hence, $i$ will be placed before $j$ in the correct topological sorted order. Thus we have the following, \begin{theorem} \label{dfstopo} Given a DAG $G$ on $n$ vertices and $m$ edges, if the black vertices of the DFS of $G$ can be output using $s(n)$ space and $t(n)$ time, then its vertices can be output in topologically sorted order using $O(s(n))$ space and $O(t(n))$ time assuming that the input representation has both the in and out adjacency array of the graph. \end{theorem} From Theorem~\ref{thm:elmasry-tradeoff} (setting $t(n)=O(\lg \lg n)$) and Theorem~\ref{dfstopo}, we have the following. \begin{corollary} \label{topo} Given a DAG $G$ on $n$ vertices and $m$ edges, its vertices can be output in topologically sorted order using
$O(m\lg \lg n)$ time and $O(n)$ bits. \end{corollary}
Note that while we knew all along that DFS and topological sort take the same time, the main contribution of Theorem~\ref{dfstopo} is that it shows they take the same space (improving on the result of~\cite{ElmasryHK15}, where it was shown that topological sort space = DFS space + $O(n \lg \lg n)$ bits under the same time bound) when both the in and out adjacency arrays are present in the input.
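The observation behind Theorem~\ref{dfstopo} can be illustrated with a small sketch (illustrative Python with hypothetical names, not the space-efficient implementation): performing DFS on the reverse graph and emitting each vertex when it is blackened (finished) yields a topological order of $G$.

```python
def topo_sort_via_reverse(n, out_adj):
    # Reverse graph: in the paper's setting it comes for free by swapping
    # the in/out adjacency arrays; here we build it explicitly.
    rev = [[] for _ in range(n)]
    for u in range(n):
        for v in out_adj[u]:
            rev[v].append(u)

    order, seen = [], [False] * n
    for s in range(n):
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, iter(rev[s]))]
        while stack:
            u, it = stack[-1]
            nxt = next(it, None)
            if nxt is None:
                order.append(u)   # u is blackened: emit it now
                stack.pop()
            elif not seen[nxt]:
                seen[nxt] = True
                stack.append((nxt, iter(rev[nxt])))
    return order
```

For every edge $(i,j)$ of $G$, the edge $(j,i)$ of the reverse graph guarantees that $i$ is blackened before $j$, so $i$ is emitted first, as argued above.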
\subsubsection{Topological Sort in Sublinear Space}
We note the following theorem of Asano et al.~\cite{AsanoIKKOOSTU14}. \begin{theorem} DFS on a DAG $G$ can be performed in space $O(\frac{n}{2^{(\sqrt{\lg n})}})$ bits and in polynomial time. \end{theorem}
While it should immediately follow from Theorem~\ref{dfstopo} that topological sort can also be performed using such sublinear bits of space, there is one caveat. Asano et al.'s algorithm works assuming that the given DAG $G$ has a single source vertex. In particular, they determine whether a vertex is black by checking whether it is reachable from {\it the} source without using the gray vertices (using the sublinear space reachability algorithm of~\cite{BarnesBRS98}).
The algorithm can be easily extended to handle $s$ many sources if we have an additional $s \lg n$ bits. We simply keep track of the indices of the sources from which DFS has been explored, and to determine whether a vertex is black, we ask if it is reachable from an earlier source or from the current source without using the gray vertices.
Thus we have the following improved theorem. \begin{theorem}\label{modified_dfs} DFS on DAG $G$ with $s$ sources can be performed using $s \lg n + o(n)$ bits and polynomial time. In particular, if $s$ is $o(n/\lg n)$, the overall space used is $o(n)$ bits. \end{theorem} Thus from Theorem \ref{dfstopo} and Theorem~\ref{modified_dfs} we obtain the following, \begin{theorem} \label{sub_topo} Topological Sort on a DAG $G$ with $s$ sinks can be performed using $s \lg n + o(n)$ bits and polynomial time. In particular if $s$ is $o(n/\lg n)$, the overall space used is $o(n)$ bits. \end{theorem}
\subsection{Finding a sparse biconnected subgraph of a biconnected graph}\label{sparse_bicon_subgraph} The problem of finding a $k$-connected spanning subgraph with the minimum number of edges of a $k$-connected graph is known to be NP-hard for any $k\geq2$~\cite{GareyJ79}. But the complexity of the problem decreases drastically if all we want is to produce a ``sparse'' $k$-connected spanning subgraph, i.e., one with $O(n)$ edges. Nagamochi and Ibaraki~\cite{NagamochiI92} gave a linear time algorithm which produces a $k$-connected spanning subgraph with at most $kn-\frac{k(k+1)}{2}$ edges. Later, Cheriyan et al.~\cite{CheriyanKT93} gave another linear time algorithm for $k=2$ and $3$ that produced a $2$-connected spanning subgraph with at most $2n-2$ edges, and a $3$-connected subgraph with at most $3n-3$ edges. Subsequently, Elmasry~\cite{Elmasry10} gave an alternate linear time algorithm for producing a sparse spanning biconnected subgraph of a given biconnected graph by performing a DFS with additional bookkeeping. In what follows, we provide a space-efficient implementation of it. In order to do that, we start by briefly describing Elmasry's algorithm.
Let $DFI(v)$ denote the index (integer) that represents the time at which the vertex $v$ is first discovered from the vertex $u$ when performing a DFS, i.e., $u$ is the parent of $v$ in the DFS tree. Let $low(v)$ be the smallest $DFI$ value among the $DFI$ values of vertices $w$ such that $(v,w)$ is a back edge. (Note that this quantity is different from the ``lowpoint'' value used in Tarjan's \cite{Tarjan72} classical biconnectivity algorithm.) Basically, $low(v)$ captures the information regarding the deepest back edge going out of the vertex $v$. If $v$ has no back edges, for convenience (the reason will become clear in the following lemma), we adopt the convention that $low(v)=DFI(parent(v))$. The edge $(v,low(v))$ is the deepest back edge out of $v$. Note that it is actually the tree edge between $v$ and its parent if $v$ does not have a back edge. The algorithm maintains all the edges of the DFS tree. In addition, for every vertex in the graph, the algorithm maintains the $DFI$ and the $low$ values along with the back edge that realizes it. As the root of the DFS tree does not have any back edge and, as the underlying graph is 2-connected, the root has only one child $v$, there is no back edge emanating from $v$ either (any such edge would go to the root, duplicating the tree edge). Thus we get at most $n-2$ back edges along with $n-1$ tree edges, giving a subgraph with at most $2n-3$ edges. Elmasry~\cite{Elmasry10} proved that the resulting graph is indeed a spanning 2-connected subgraph of $G$. His algorithm takes $O(m+n)$ time and $O(n \lg n)$ bits of space. We improve the space bound, albeit with a slight increase in time, by first proving a more general lemma as follows.
\begin{figure}
\caption{A part of the full DFS tree. The wiggling edges represent tree edges and the edges with arrow heads represent back edges. If $low(v_i)=v_a$, we would come across $v_i$ in the adjacency array of $v_a$ before encountering from the arrays of $v_b$ and $v_c$. I.e., the back edge $(v_a, v_i)$ will be processed before the other back edges $(v_b, v_i)$ and $(v_c, v_i)$ since we process the vertices (and the backedges incident to them) in their DFS order.}
\end{figure}
\begin{lemma}\label{dbe_lemma} Given any undirected graph $G$ with $n$ vertices and $m$ edges, we can compute and report the $low(v)$ values i.e., deepest back edge going out of $v$, for every vertex $v$, using $O(n)$ bits of space and $O(m \lg \lg n)$ time. \end{lemma}
\begin{proof} The aim is to output the deepest back edge out of every vertex $v$ in $G$ as we perform the DFS. As always, let $\{v_1, v_2,\cdots, v_n\}$ be the vertices of the graph. We perform a DFS with the usual color array and other relevant data structures (as required in Theorem~\ref{thm:elmasry-tradeoff} with $t(n)=\lg \lg n$) along with one more array of $n$ bits, which we call the $DBE$ (for Deepest Back Edge) array, initialized to all zeros. $DBE[i]$ is set to 1 if and only if the algorithm has found and output the deepest back edge emanating from vertex $v_i$. So whenever a white vertex $v_i$ becomes gray (i.e., $v_i$ is visited for the first time), we scan $v_i$'s adjacency array and, for every white neighbor $v_j$, set $DBE[j]$ to $1$ if and only if it was $0$ before. The correctness of this step follows from the fact that, as we are visiting vertices in DFS order, if $DBE[j]$ is $0$, then vertex $v_j$ is not adjacent to any of the vertices we have visited so far, and as it is adjacent to $v_i$, the deepest back edge emanating from $v_j$ is $(v_i, v_j)$. Hence we output this edge and move on to the next neighbor, and eventually to the next step of the DFS, until all the vertices are exhausted. This completes the description of the algorithm. See Figure $1$ for an illustration. Now to see how this procedure produces all the deepest back edges out of every vertex, note that, at vertex $v_i$, our algorithm reports all the back edges $e = (v_i, v_j)$ where $e$ is the deepest back edge from $v_j$, and also all tree edges $(v_i, v_j)$ where $v_j$ has no back edge. Observe that, by our convention, in the second case, $(v_i, v_j)$ is the deepest back edge out of $v_j$. This concludes the proof of the lemma. As we performed just one DFS to produce all such edges, using Theorem~\ref{thm:elmasry-tradeoff}, the claimed running time and space bounds follow. \end{proof}
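The procedure in the proof of Lemma~\ref{dbe_lemma} can be sketched as follows (illustrative Python with our own identifiers, using a plain explicit stack rather than the space-efficient DFS of Theorem~\ref{thm:elmasry-tradeoff}):

```python
def deepest_back_edges(n, adj):
    WHITE, GRAY, BLACK = 0, 1, 2
    color = [WHITE] * n
    dbe = [False] * n          # the n-bit DBE array of the proof
    out = []                   # reported edges (v_i, v_j)

    def discover(u):           # u has just turned gray: scan its neighbors
        for w in adj[u]:
            if color[w] == WHITE and not dbe[w]:
                dbe[w] = True
                out.append((u, w))   # deepest back edge out of w

    for r in range(n):
        if color[r] != WHITE:
            continue
        color[r] = GRAY
        discover(r)
        stack = [(r, iter(adj[r]))]
        while stack:
            u, it = stack[-1]
            nxt = next(it, None)
            if nxt is None:
                color[u] = BLACK
                stack.pop()
            elif color[nxt] == WHITE:
                color[nxt] = GRAY
                discover(nxt)
                stack.append((nxt, iter(adj[nxt])))
    return out
```

On a triangle on vertices $0,1,2$ rooted at $0$, this reports $(0,1)$ (by the paper's convention, the tree edge to the root's child, which has no back edge) and $(0,2)$ (the genuine deepest back edge out of $2$).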
The way we will actually use Lemma~\ref{dbe_lemma} in our algorithms, is for finding and storing the $low$ values for at most $n/\lg n$ vertices. So we state a corollary for that.
\begin{corollary}\label{coro} Given any undirected graph $G$ with $n$ vertices and $m$ edges and any set $L$ of $O(n/\lg n)$ vertices as input, we can compute, report and store the $low(v)$ values for every vertex $v$ in $L$ in the DFS tree $T$ of $G$ using $O(n)$ bits of space and $O(m \lg \lg n)$ time. \end{corollary}
Note that, Lemma~\ref{dbe_lemma} holds true for any undirected connected graph $G$. In what follows, we use Lemma~\ref{dbe_lemma} to give a space efficient implementation of Elmasry's algorithm when the input graph $G$ is an undirected biconnected graph. In particular, we show the following,
\begin{theorem} \label{sparse} Given an undirected biconnected graph $G$ with $n$ vertices and $m$ edges, we can output the edges of a sparse spanning biconnected subgraph of $G$ using $O(n)$ bits of space and $O(m \lg \lg n)$ time. \end{theorem}
\begin{proof} When the underlying graph $G$ is an undirected biconnected graph, we know that Elmasry's algorithm produces a sparse spanning subgraph which is also biconnected. In order to implement it, given an undirected biconnected graph $G$, we first run on $G$ the algorithm of Lemma~\ref{dbe_lemma}, which produces and reports the deepest back edges out of all the vertices $v$ in $G$. Note that, out of all those deepest back edges, some are actually tree edges by our convention. Hence, we don't want to report them multiple times. More specifically, if a vertex $v_j$ has no back edge going out of it, Lemma~\ref{dbe_lemma} outputs the edge $(v_i, v_j)$ as the deepest back edge out of $v_j$, which is actually a tree edge in the DFS tree $T$ of $G$. In order to avoid reporting such edges more than once, we perform the following. During the scanning of $v_i$'s adjacency array, we also check if any of its neighbors, other than its parent, is gray. If so, we report the edge from $v_i$ to its parent. Note that if $v_i$ has a back edge to one of its ancestors (other than its parent), then this step reports the tree edge from $v_i$ to its parent. Otherwise, $v_i$ does not have any back edge, and hence the tree edge to its parent would have been output while the DFS was exploring and outputting deepest back edges from its parent; so we do not output the edge again. Note that we can do this test along with the algorithm of Lemma~\ref{dbe_lemma}, so that using just one DFS, we can produce all the tree edges and deepest back edges required in Elmasry's algorithm. Thus, using Theorem~\ref{thm:elmasry-tradeoff}, we can output the edges of a sparse spanning biconnected subgraph of $G$ using $O(n)$ bits of space and $O(m \lg \lg n)$ time. \end{proof}
\section{Tree Cover and Space Efficient Construction} \label{treecover}
Before moving on to more complex applications of DFS in undirected graphs, namely biconnectivity, $2$-edge connectivity, $st$-numbering etc., in this section we describe the common methodology used to attack all of these problems. Once we have set up all our machinery in this section, we will see in Section~\ref{everything} how to apply it in a similar fashion to several problems. Central to all of our algorithms following this section is a decomposition of the DFS tree. For this we use the well-known tree covering technique which was first proposed by Geary et al.~\cite{GearyRR06} in the context of succinct representations of rooted ordered trees.
The high level idea is to decompose the tree into subtrees called {\it minitrees}, and further decompose the minitrees into yet smaller subtrees called {\it microtrees}. The microtrees are tiny enough to be stored in a compact table. The root of a minitree can be shared by several other minitrees. To represent the tree, we only have to represent the connections and links between the subtrees. Later, He et al.~\cite{HeMS12} extended this approach to produce a representation which supports several additional operations. Farzan and Munro~\cite{FarzanM11} modified the tree covering algorithm of~\cite{GearyRR06} so that each minitree has at most one node, other than the root of the minitree, that is connected to the root of another minitree. This simplifies the representation of the tree.
The tree decomposition method of Farzan and Munro~\cite{FarzanM11} is summarized in the following theorem:
\begin{figure}
\caption{An illustration of Tree Covering technique with $L=5$. The figure is reproduced from~\cite{FarzanM11}. Each closed region formed by the dotted lines represents a minitree. Note that each minitree has at most one `child' minitree (other than the minitrees that share its root) in this structure.}
\end{figure}
\begin{theorem}[\cite{FarzanM11}]\label{thm:tree-decomposition} For any parameter $L \ge 1$, a rooted ordered tree with $n$ nodes can be decomposed into $\Theta(n/L)$ minitrees of size at most $2L$ which are pairwise disjoint aside from the minitree roots. Furthermore, aside from edges stemming from the minitree root, there is at most one edge leaving a node of a minitree to its child in another minitree. The decomposition can be performed in linear time. \end{theorem}
See Figure $2$ for an illustration. In our algorithms, we apply Theorem~\ref{thm:tree-decomposition} with $L =n/\lg n$. For this parameter $L$, since the number of minitrees is only $O(\lg n)$, we can represent the structure of the minitrees within the original tree (i.e., how the minitrees are connected with each other) using $O(\lg^2 n)$ bits. The decomposition algorithm of~\cite{FarzanM11} ensures that each minitree has at most one `child' minitree (other than the minitrees that share its root) in this structure. We use this property crucially in our algorithms.
We refer to this as the {\it minitree-structure}. See Figure~$3(a)$ for the minitree structure of the tree decomposition shown in Figure~$2$.
\begin{figure}
\caption{ (a) The minitree structure of the tree decomposition shown in Figure~2. (b) This array encodes the entire DFS tree using the balanced parenthesis (BP) representation. (c) In this array, we demonstrate how the minitrees are split into a constant number of consecutive chunks in the BP representation. Note that the bottom array can actually be encoded using $O(\lg^2 n)$ bits, by storing, for each of the $O(\lg n)$ minitrees, pointers to all the chunks in BP sequence indicating the starting and ending positions of the chunks corresponding to the minitrees.}
\end{figure}
Explicitly storing all the minitrees (using pointers) requires $\omega(n)$ bits overall. One way to represent them efficiently using $O(n)$ bits is to store each of them using any linear-bit encoding of a tree~\cite{Raman013}. But if we store these minitrees separately, we lose the ability to compute the preorder or postorder numbers of the nodes in the entire tree, which is needed in our algorithms. Hence, we encode the entire tree structure using a linear-bit encoding, and store pointers into this encoding to represent the minitrees.
We first encode the tree using the {\em balanced parenthesis} (BP) representation~\cite{Lu,MunroR01}, summarized in the following theorem.\footnote{The representation of ~\cite{MunroR01} does not support computing the $i$-th child of a node in constant time while the one in ~\cite{Lu} can. When using these representations to produce a tree cover, the representation of ~\cite{MunroR01} is sufficient as we just need to compute the `next child' as we traverse the tree in post-order computing the subtree sizes of each subtree.}
\begin{theorem}[\cite{Lu}]\label{thm:BP} Given a rooted ordered tree $T$ on $n$ nodes, it can be represented as a sequence of balanced parentheses of length $2n$. Given the preorder or postorder number of a node $v$ in $T$, we can support subtree size and various navigational queries (such as parent and $i$-th child) on $v$ in $O(1)$ time using an additional $o(n)$ bits. \end{theorem}
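To make the correspondence concrete, here is a toy illustration (our own Python, using a naive linear scan rather than the $O(1)$-time, $o(n)$-bit auxiliary structures of the theorem) of how a navigational query such as subtree size maps onto the BP sequence:

```python
def find_close(bp, i):
    """Index of the ')' matching the '(' at position i (naive scan)."""
    depth = 0
    for j in range(i, len(bp)):
        depth += 1 if bp[j] == '(' else -1
        if depth == 0:
            return j

def subtree_size(bp, i):
    """Number of nodes in the subtree whose opening '(' is at position i:
    each node contributes exactly one '(' and one ')' inside the match."""
    return (find_close(bp, i) - i + 1) // 2
```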
The following lemma by Farzan et al.~\cite[Lemma 2]{FarzanRR09} (restated) shows that each minitree is split into a constant number of consecutive chunks in the BP sequence. So we now represent each minitree by storing pointers to the set of all {\em chunks} in the BP representation that together constitute the minitree.
\begin{lemma} In the BP sequence of a tree, the bits corresponding to a mini-tree form a set of constant number of substrings. Furthermore, these substrings concatenated together in order, form the BP sequence of the mini-tree. \end{lemma}
Hence, one can store a representation of the minitrees by storing an $O(\lg^2 n)$-bit structure that contains pointers to the starting positions of the chunks corresponding to each minitree in the BP sequence.
We refer to the representation obtained using this tree covering (TC) approach as the TC representation of the tree. See Figure $3$ for a complete example of a minitree structure along with the BP sequence of the tree of Figure $2$.
The following lemma
shows that we can construct the TC representation of the DFS tree of a given graph, using $O(n)$ additional bits.
\begin{lemma}\label{lem:BPtoTC} Given a graph $G$ on $n$ vertices and $m$ edges, if there is an algorithm that takes $t(n,m)$ time and $s(n,m)$ bits to perform DFS on $G$, then one can create the TC representation of the DFS tree in $t(n,m)+O(n)$ time, using $s(n,m)+O(n)$ bits. \end{lemma}
\begin{proof}
We first construct the balanced parenthesis (BP) representation of the DFS tree as follows. We start with an empty sequence, BP, and append parentheses to it as we perform each step of the DFS algorithm. In particular, whenever the DFS visits a vertex $v$ for the first time,
we append an open parenthesis to BP. Similarly when DFS backtracks from $v$,
we append a closing parenthesis. At the end of the DFS algorithm, as every vertex is assigned a pair of parentheses, the length of BP is $2n$ bits. We just need to run the DFS algorithm once to construct this array; hence the running time of this algorithm is asymptotically the same as the running time of the DFS algorithm.
We construct auxiliary structures to support various navigational operations on the DFS tree using the BP sequence, as mentioned in Theorem~\ref{thm:BP}. This takes $o(n)$ time and space using the algorithm of~\cite{GearyRRR06}. We then use the BP sequence along with the auxiliary structures to navigate the DFS tree in postorder, and simulate the tree decomposition algorithm of Farzan and Munro~\cite{FarzanM11} for constructing the TC representation of the DFS tree. If we reconstruct the entire tree (with pointers), then the intermediate space would be $\Omega(n \lg n)$ bits. Instead, we observe that the tree decomposition algorithm of~\cite{FarzanM11} never needs to keep more than $O(L)$ {\em temporary components} (see~\cite{FarzanM11} for details) in addition to some of the {\em permanent components}. Each component (permanent or temporary) can be stored by storing the root of the component together with its subtree size. Since $L = n/\lg n$, and the number of permanent components is only $O(\lg n)$, the space required to store all the permanent and temporary components at any point of time is bounded by $O(n)$ bits. The construction algorithm takes $O(n)$ time. \end{proof}
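The first step of the proof, building the BP sequence during a DFS, can be sketched as follows (illustrative Python with an explicit stack, not the space-efficient DFS of Theorem~\ref{thm:elmasry-tradeoff}; the function name is our own):

```python
def dfs_bp(n, adj, root=0):
    """Append '(' when a vertex is first visited and ')' when the DFS
    backtracks from it; the result has length 2n for an n-node tree."""
    bp = []
    seen = [False] * n
    seen[root] = True
    bp.append('(')                  # root discovered
    stack = [(root, iter(adj[root]))]
    while stack:
        u, it = stack[-1]
        nxt = next(it, None)
        if nxt is None:
            bp.append(')')          # backtrack from u
            stack.pop()
        elif not seen[nxt]:
            seen[nxt] = True
            bp.append('(')          # nxt discovered
            stack.append((nxt, iter(adj[nxt])))
    return ''.join(bp)
```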
We use the following lemma
in the description of our algorithms in the later sections.
\begin{lemma}\label{lem:DFS-minitree} Let $G$ be a graph, and $T$ be its DFS tree. If there is an algorithm that takes $t(n,m)$ time and $s(n,m)$ bits to perform DFS on $G$, then, using $s(n,m)+O(n)$ bits, one can reconstruct any minitree given by its ranges in the BP sequence of the TC representation of $T$, along with the labels of the corresponding nodes in the graph in $O(t(n,m))$ time. \end{lemma}
\begin{proof} We first perform DFS to construct the BP representation of the DFS tree, $T$. We then construct the TC representation of $T$, as described in Lemma~\ref{lem:BPtoTC}. We now perform DFS algorithm again, keeping track of the preorder number of the current node at each step. Whenever we visit a new node, we check its preorder number to see if it falls within the ranges of the minitree that we want to reconstruct. (Note that, as mentioned above, from~\cite[Lemma 2]{FarzanRR09}, the set of all preorder number of the nodes that belong to any minitree form a constant number of ranges, since these nodes belong to a constant number of chunks in the BP sequence.) If it is within one of the ranges corresponding to the minitree being constructed, then we add the node along with its label to the minitree. \end{proof}
\section{Applications of DFS using tree-covering technique}\label{everything} In this section, we provide $O(n)$ bit implementations of various algorithmic graph problems that use DFS, by using the tree covering technique developed in the previous section. At a higher level, we use the tree covering technique to generate the minitrees one by one, and then partially solve the corresponding graph problem inside that minitree before finally combining the solution across all the minitrees.
The problems we consider include algorithms to test biconnectivity, $2$-edge connectivity and to output cut vertices, edges, and to find a chain decomposition and an $st$-numbering among others.
To test for biconnectivity and related problems, the classical algorithm due to Tarjan~\cite{Tarjan72,Tarjan74} computes the so-called ``low-point" values (which are defined in terms of a DFS-tree) for every vertex $v$, and checks some conditions based on these values.
Brandes~\cite{Brandes02} and Gabow~\cite{Gabow00} gave considerably simpler algorithms for testing biconnectivity and computing biconnected components by using some path-generating rules instead of low-points; they call these algorithms path-based. An algorithm due to Schmidt~\cite{Schmidt13} is based on a chain decomposition of graphs to determine biconnectivity (and/or $2$-edge connectivity). All these algorithms take $O(m+n)$ time and $O(n)$ words of space. Roughly, these approaches compute a DFS and process the DFS tree in a specific order, maintaining some auxiliary information about the nodes. We start with a brief description of chain decomposition and its application before providing its space-efficient implementation.
\subsection{Chain decomposition}\label{sec:chain-decomp} Schmidt~\cite{Schmidt2010c} introduced a decomposition of the input graph that partitions the edge set of the graph into cycles and paths, called chains, and used this to design an algorithm to find cut vertices and biconnected components \cite{Schmidt13} and also to test 3-connectivity~\cite{Schmidt2010c} among others. In what follows, we discuss briefly the decomposition algorithm, and state his main result.
The algorithm first performs a depth first search on $G$. Let $r$ be the root of the DFS tree $T$ of $G$. DFS assigns an index to every vertex $v$, namely, the time vertex $v$ is discovered for the first time during DFS -- call it the depth-first-index of $v$ ($DFI(v)$). Imagine that the back edges are directed away from $r$ and the tree edges are directed towards $r$. The algorithm decomposes the graph into a set of paths and cycles called chains as follows. See Figure $4$
for an example. First we mark all the vertices as unvisited. Then we visit every vertex starting at $r$ in the increasing order of DFI, and do the following. For every back edge $e$ that originates at $v$, we traverse a directed cycle or a path. This begins with $v$ and the back edge $e$ and proceeds along the tree towards the root and stops at the first visited vertex or the root. During this step, we mark every encountered vertex as visited. This forms the first chain. Then we proceed with the next back edge at $v$, if any, or move towards the next vertex in the increasing DFI order and continue the process. Let $D$ be the collection of all such cycles and paths. Notice that the cardinality of this set is exactly the same as the number of back edges in the DFS tree as each back edge contributes to a cycle or a path. Also, as initially every vertex is unvisited, the first chain would be a cycle as it would end in the starting vertex. Using this, Schmidt proved the following theorem.
\begin{theorem}[\cite{Schmidt13}]\label{2ec} Let $D$ be a chain decomposition of a connected graph $G(V,E)$. Then $G$ is 2-edge-connected if and only if the chains in $D$ partition $E$. Also, $G$ is 2-vertex-connected if and only if $\delta(G) \geq 2$ (where $\delta(G)$ denotes the minimum degree of $G$) and $D_1$ is the only cycle in the set $D$ where $D_1$ is the first chain in the decomposition. An edge $e$ in $G$ is bridge if and only if $e$ is not contained in any chain in $D$. A vertex $v$ in $G$ is a cut vertex if and only if $v$ is the first vertex of a cycle in $ D \setminus D_1$. \end{theorem}
\begin{figure}
\caption{Illustration of Chain Decomposition. (a) An input graph $G$. (b) A DFS traversal of $G$ and the resulting edge-orientation along with DFIs. (c) A chain decomposition $D$ of $G$. The chains $D_2$ and $D_3$ are paths and rest of them are cycles. The edge $(V_5,V_6)$ is bridge as it is not contained in any chain. $V_5$ and $V_6$ are cut vertices.}
\end{figure}
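Schmidt's chain decomposition, as described above, can be sketched as follows (illustrative Python with our own identifiers; it stores DFIs and parent pointers explicitly, i.e., using $O(n \lg n)$ bits, which is precisely what the space-efficient implementation avoids). Back edges are directed away from the root; a chain is a cycle exactly when its first and last vertices coincide.

```python
def chain_decomposition(n, adj, root=0):
    # DFS computing DFIs and parent pointers (stored explicitly here).
    parent, dfi, order = [None] * n, [None] * n, [root]
    dfi[root] = 0
    counter = 1
    stack = [(root, iter(adj[root]))]
    while stack:
        u, it = stack[-1]
        nxt = next(it, None)
        if nxt is None:
            stack.pop()
        elif dfi[nxt] is None:
            parent[nxt] = u
            dfi[nxt] = counter
            counter += 1
            order.append(nxt)
            stack.append((nxt, iter(adj[nxt])))

    # Visit vertices in increasing DFI; for each back edge (v, w) directed
    # away from the root, walk up the tree from w, marking vertices visited,
    # and stop at the first already-visited vertex.
    visited, chains = [False] * n, []
    for v in order:
        for w in adj[v]:
            if dfi[w] > dfi[v] and parent[w] != v:   # back edge v -> w
                visited[v] = True
                chain = [v]
                x = w
                while not visited[x]:
                    visited[x] = True
                    chain.append(x)
                    x = parent[x]
                chain.append(x)   # the stopping (already visited) vertex
                chains.append(chain)
    return chains
```

On the 4-cycle $0\!-\!1\!-\!2\!-\!3\!-\!0$, the single back edge yields one cycle chain covering all four edges; since the chains partition $E$ and this first chain is the only cycle, Theorem~\ref{2ec} certifies that the graph is 2-edge-connected and 2-vertex-connected.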
Now we are ready to describe an implementation of Schmidt's chain decomposition algorithm using only $O(n)$ bits of space and in $O(m \lg^2 n \lg\lg n)$ time using our partition of the DFS tree of Section~\ref{treecover}.
In the following description, {\em processing a back edge} refers to the step of outputting the chain (directed path or cycle) containing that edge and marking all the encountered vertices as visited. Processing a node refers to processing all the back edges out of that node. The main idea of our implementation is to process all the back edges out of each node in their {\em preorder} (as in Schmidt's algorithm). To perform this efficiently (within the space limit of $O(n)$ bits), we process the nodes in {\em chunks} of size $n/\lg n$ each (i.e., the first chunk of $n/\lg n$ nodes in preorder are processed, followed by the next chunk of $n/\lg n$ nodes, and so on). But when processing the back edges out of a chunk $C$, we process all the back edges that go from $C$ to all the minitrees in their {\em postorder}, processing all the edges from $C$ to a minitree $\tau_1$ before processing any other back edges going out of $C$ to a different minitree. This requires us to go through all the edges out of each chunk at most $O(\lg n)$ times (once for each minitree).
Thus the order in which we process the back edges differs from the order in which Schmidt's algorithm processes them, but we argue that this does not affect the correctness of the algorithm. In particular, we observe the following: \begin{itemize} \item Schmidt's algorithm correctly produces a chain decomposition even if we process the vertices in any order, as long as we process a vertex $v$ only after all its ancestors are processed -- for example, in level order instead of preorder. This also implies that as long as we process the back edges coming to a vertex $v$ (from any of its descendants) only after we process all the back edges going to any of $v$'s ancestors from any of $v$'s descendants, we produce a chain decomposition correctly. \end{itemize}
To process a back edge $(u,v)$ between a chunk $C$ and a minitree $\tau$, where $u$ belongs to $C$, $v$ belongs to $\tau$, and $u$ is an ancestor of $v$, we first output the edge $(u,v)$, and then traverse the path from $v$ to the root of $\tau$, outputting all the traversed edges and marking the nodes as visited. We then start another DFS to produce the minitree $\tau_p$ containing the parent $p$ of the root of $\tau$, output the path from $p$ to the root of $\tau_p$, and continue this process until we reach a vertex that has been marked as visited. Note that this process terminates since $u$ is marked and is an ancestor of $v$. To perform this efficiently, we maintain a bitvector of length $n$ that keeps track of the marked vertices. A crucial observation that we use in bounding the runtime is that once we produce a minitree $\tau_p$ for a particular pair $(C,\tau)$, we do not need to produce it again, as the root of $\tau$ will be marked after the first time we output it as part of a chain.
Also, once we generate a chunk $C$ and a minitree $\tau$, we go through all the vertices of $C$ in preorder, and process all the edges that go between $C$ and $\tau$.
We provide pseudocode (Algorithm $1$) below, describing the high-level algorithm for outputting the chain decomposition.
\begin{algorithm}[h] \label{general}
\begin{algorithmic}[1]
\Statex{Let $\tau_1,\tau_2,\cdots,\tau_{O(\lg n)}$ be the minitrees in postorder and $C_1,C_2,\cdots,C_{\lg n}$ be the chunks of vertices in preorder}
\For{$i = 1$ to $\lg n$}
\For{$j = 1$ to $O(\lg n)$}
\ForAll{back edges $(u,v)$ with $u \in C_i$ and $v \in \tau_j$}
\State{output the chain containing the edge $(u,v)$}
\EndFor
\EndFor
\EndFor \end{algorithmic}
\caption{Chain Decomposition} \end{algorithm}
The time taken for the initial part, where we construct the DFS tree, decompose it into minitrees, and construct the auxiliary structures, is $O(m \lg\lg n)$, using Theorem~\ref{thm:elmasry-tradeoff} with $t(n) = \lg\lg n$. The running time of the rest of the algorithm is dominated by the cost of processing the back edges. As outlined in Algorithm $1$, we process the back edges between every pair $(C_i, \tau_j)$, where $C_i$ is the $i$-th chunk of $n/\lg n$ nodes in preorder, and $\tau_j$ is the $j$-th minitree in postorder, for $1 \le i \le \lg n$ and $1 \le j \le O(\lg n)$. The outer loop of the algorithm generates each chunk in preorder, and thus requires a single DFS to produce all the chunks over the entire execution of the algorithm. The inner loop goes through all the minitrees for each chunk. Since there are $\lg n$ chunks and $O(\lg n)$ minitrees, and producing each minitree takes $O(m \lg\lg n)$ time, the generation of all the chunk-minitree pairs takes $O(m \lg^2 n \lg\lg n)$ time.
For a particular pair $(C, \tau)$, we may need to generate many ($O(\lg n)$ in the worst case) minitrees. But we observe that this happens for at most one back edge for every pair $(C, \tau)$, since after processing the first such back edge, the root of the minitree $\tau$ is marked, and hence any chain that is output afterwards will stop before the root of the minitree. Also, if a minitree $\tau_\ell$ is generated when processing a pair $(C,\tau)$, then it will not be generated when processing any other pair $(C', \tau')$ different from $(C, \tau)$ (since each minitree has at most one child minitree).
Thus the overall running time is dominated by generating all the pairs $(C, \tau)$, which takes $O(m \lg^2 n \lg\lg n)$ time.
Thus, we obtain the following.
\begin{theorem}\label{thm:chain-decomp} Given an undirected graph $G$ on $n$ vertices and $m$ edges, we can output a chain decomposition of $G$ in $O(m \lg^2 n \lg \lg n)$ time using $O(n)$ bits. \end{theorem}
\subsection{Testing biconnectivity and finding cut vertices} \label{sec:onbiconn}
A na\"{\i}ve algorithm to test for biconnectivity of a graph $G=(V,E)$ is to check if $(V \setminus \{v\},E)$ is connected, for each $v \in V$. Using the $O(n)$-bit and $O(m+n)$-time BFS algorithm \cite{BanerjeeC016} for checking connectivity, this gives a simple $O(n)$-bit algorithm running in time $O(mn)$. Another approach is to use Theorem \ref{thm:chain-decomp} combined with the criteria mentioned in Theorem \ref{2ec} to test for biconnectivity and output cut vertices in $O(m \lg^2 n \lg \lg n)$ time using $O(n)$ bits.
Here we show that, using $O(n)$ bits, we can design an even faster algorithm running in $O(m \lg n \lg \lg n)$ time. If $G$ is not biconnected, then we also show how one can find all the cut vertices of $G$ within the same time and space bounds. We implement the classical lowpoint algorithm of Tarjan~\cite{Tarjan72}. Recall that the algorithm performs a DFS and computes, for every vertex $v$, a value lowpoint[$v$], which is recursively defined as
\begin{align*}
\mbox{lowpoint}[v] = \min \big( \{DFI(v)\} & \cup \{\mbox{lowpoint}[s] \mid s \mbox{ is a child of } v\} \\
& \cup \{DFI(w) \mid (v,w) \mbox{ is a back edge}\} \big) \end{align*}
Tarjan proved that if a vertex $v$ is not the root, then $v$ is a cut vertex if and only if $v$ has a child $w$ such that lowpoint[$w$] $\geq DFI(v)$. The root of a DFS tree is a cut vertex if and only if it has more than one child. Since the lowpoint values require $\Omega(n \lg n)$ bits in the worst case, this poses the challenge of efficiently testing the condition for biconnectivity with $O(n)$ bits. To deal with this, as in the case of the chain decomposition algorithm, we compute the lowpoint values in $O(\lg n)$ batches using our tree covering algorithm. Cut vertices encountered in the process, if any, are stored in a separate bitmap. We show that each batch can be processed in $O(m\lg \lg n)$ time using DFS, resulting in an overall runtime of $O(m\lg n \lg \lg n)$.
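For reference, the classical lowpoint computation and cut-vertex test can be sketched as follows (a hypothetical Python illustration storing the full DFI and lowpoint arrays, i.e., $\Theta(n \lg n)$ bits, which is precisely what our batched computation avoids):

```python
def cut_vertices(adj, root=0):
    """Tarjan's cut-vertex test via the lowpoint recurrence.
    Stores full DFI/lowpoint arrays (Theta(n log n) bits)."""
    n = len(adj)
    dfi = [0] * n            # DFIs start at 1; 0 marks "unvisited"
    low = [0] * n            # low[v] = lowpoint[v]
    cuts = set()
    timer = [0]

    def dfs(v, parent):
        timer[0] += 1
        dfi[v] = low[v] = timer[0]
        children = 0
        for w in adj[v]:
            if dfi[w] == 0:                  # tree edge (v, w)
                children += 1
                dfs(w, v)
                low[v] = min(low[v], low[w])
                if parent is not None and low[w] >= dfi[v]:
                    cuts.add(v)              # non-root cut-vertex criterion
            elif w != parent:                # back edge (v, w)
                low[v] = min(low[v], dfi[w])
        if parent is None and children > 1:  # root criterion
            cuts.add(v)

    dfs(root, None)
    return cuts
```

On two triangles joined by a bridge, for instance, exactly the two bridge endpoints are reported as cut vertices.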
\subsubsection{Computing lowpoint and reporting cut vertices} We first obtain a TC representation of the DFS tree using the decomposition algorithm of Theorem~\ref{thm:tree-decomposition} with $L = n/\lg n$. We then {\it process} each minitree, in the postorder of the minitrees in the minitree structure. To process a minitree, we compute the lowpoint values of each of the nodes in the minitree (except possibly the root) in overall $O(m)$ time. During the processing of any minitree, if we determine that a vertex is a cut vertex, we store this information by marking the corresponding node in a separate bit vector.
Each minitree can be reconstructed in $O(m \lg\lg n)$ time using Lemma~\ref{lem:DFS-minitree}. The lowpoint value of a node is a function of the lowpoints of all its children. However, the root of a minitree may have children in other minitrees. Hence, for the root of the minitree, we store the partial lowpoint value (computed so far), which will be updated once all its subtrees (possibly in other minitrees) have computed their lowpoint values.
Computing the lowpoint values in each of the minitrees is done in a two-step process. In the first step, we compute and store the $low$ value of each node belonging to the minitree (the minimum DFI value over the endpoints of the back edges emanating from that node) using Corollary~\ref{coro}.
Note that the $low$ values form one component of the values among which we find the minimum in the definition of $lowpoint$ above, with a slight change: if a vertex $v$ has a back edge, then $low(v)$ is simply $\min \{DFI(w) : (v,w) \mbox{ is a back edge}\}$. However, if $v$ does not have a back edge, by our convention $low(v)$ holds the $DFI$ value of its parent, which needs to be discounted when computing the $lowpoint$ value of $v$. This is easily done if we also remember the DFI value of the parent of every node in the minitree (using $O(n)$ bits).
Once these $low(v)$ values are computed and stored for all the vertices $v$ belonging to a minitree, they are passed on to the next step for computing the $lowpoint(v)$ values. More specifically, in the second step, we perform another DFS starting at the root of this minitree and compute the lowpoint values for all the vertices $v$ belonging to the minitree exactly as in Tarjan's classical algorithm~\cite{Tarjan72}, using the explicitly stored $low(v)$ values. Algorithm $2$ shows how to compute the lowpoint values recursively within a minitree. Thus we obtain the following. \begin{lemma}\label{dbe2} Computing and storing the $lowpoint(v)$ values
for all the nodes $v$ in a minitree can be performed in $O(m \lg \lg n)$ time, using $O(n)$ bits. \end{lemma}
\begin{algorithm}[h]
\begin{algorithmic}[1]
\If{$low(v) = DFI(parent(v))$}
\State{$lowpoint(v) \gets DFI(v)$}
\Else
\State{$lowpoint(v) \gets \min \{DFI(v), low(v)\}$}
\EndIf
\ForAll{$y\in adj(v)$}
\If{$y$ is white}
\State{$DFI(y) \gets DFI(v)+1$}
\State{DFS(y)}
\If{$lowpoint(y)<lowpoint(v)$}
\State{$lowpoint(v)=lowpoint(y)$}
\EndIf
\EndIf
\EndFor \end{algorithmic}
\caption{DFS(v)} \end{algorithm}
To compute the effect of the roots of the minitrees on the $lowpoint$ computation, we store several pieces of $\Theta(\lg n)$-bit information with each of the $\Theta(\lg n)$ minitree roots, including their partial/full lowpoint values and the ranks of their first/last children in their subtrees. After we process one minitree, we generate the next minitree in postorder, process it in a similar fashion, and continue until we exhaust all the minitrees. As we store the cut vertices in a bitvector $B$ of size $n$, marking $B[i]=1$ if and only if the vertex $v_i$ is a cut vertex, reporting them at the end of the execution of the algorithm is a routine task.
Clearly we have used $O(n)$ bits of space, and the total running time is $O(m \lg n \lg \lg n)$ as we run the DFS algorithm $O(\lg n)$ times overall. Thus we have the following. \begin{theorem} Given an undirected graph $G$ with $n$ vertices and $m$ edges, in $O(m \lg n \lg \lg n)$ time and $O(n)$ bits of space we can determine whether $G$ is $2$-vertex connected. If not, in the same amount of time and space, we can compute and report all the cut vertices of $G$.
\end{theorem}
\subsection{Testing $2$-edge connectivity and finding bridges}\label{edgecon} The classical algorithm of Tarjan~\cite{Tarjan74}
to check if $G$ is $2$-edge connected takes $O(m+n)$ time using $O(n)$ words. Schmidt's algorithm~\cite{Schmidt13}, which is based on chain decomposition, can also be implemented in linear time, but with $O(m)$ words. The purpose of this section is to improve the space bound to $O(n)$ bits, albeit with a slightly increased running time. For this, we use the following folklore characterization: a tree edge $(v,w)$, where $v$ is the parent of $w$, is a bridge if and only if lowpoint[$w$] $> DFI(v)$. That is to say, a tree edge $(v,w)$ is a bridge if and only if neither the vertex $w$ nor any of its descendants in the DFS tree can reach the vertex $v$ or any of its ancestors. Thus if the edge $(v,w)$ is removed, the graph $G$ becomes disconnected. Note that, since storing the lowpoint values requires $\Omega(n \lg n)$ bits, we cannot store all of them at once to check the criterion mentioned in the characterization, and this poses the challenge of efficiently testing the condition for $2$-edge connectivity with only $O(n)$ bits. To perform this test in a space-efficient manner, we extend the ideas developed in the previous section.
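The folklore characterization itself is easy to apply when the lowpoint values are stored explicitly; the following sketch (our own $\Theta(n \lg n)$-bit illustration, not the space-efficient version developed below) reports a tree edge $(v,w)$ as a bridge whenever lowpoint[$w$] $> DFI(v)$:

```python
def bridges(adj, root=0):
    """Report tree edges (v, w) with lowpoint[w] > DFI(v); assumes a
    connected simple graph given as an adjacency-list dict."""
    n = len(adj)
    dfi = [0] * n            # 0 marks "unvisited"
    low = [0] * n
    out = []
    timer = [0]

    def dfs(v, parent):
        timer[0] += 1
        dfi[v] = low[v] = timer[0]
        for w in adj[v]:
            if dfi[w] == 0:                  # tree edge
                dfs(w, v)
                low[v] = min(low[v], low[w])
                if low[w] > dfi[v]:          # the bridge criterion
                    out.append((v, w))
            elif w != parent:                # back edge
                low[v] = min(low[v], dfi[w])

    dfs(root, None)
    return out
```

On two triangles joined by an edge, only the joining edge satisfies the criterion.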
Similar to the biconnectivity algorithm, here also we first construct a TC representation of the DFS tree using the decomposition algorithm of Theorem~\ref{thm:tree-decomposition} with $L = n/\lg n$. We then {\it process} each minitree, in the postorder of the minitrees in the minitree structure. To process a minitree, we compute the lowpoint
values of each of the nodes in the minitree (except possibly the root) in overall $O(m)$ time. While processing these minitrees, if we come across any bridge, we store it in a separate bitvector so that at the end of the execution of the algorithm we can report all of them.
Using Lemma~\ref{lem:DFS-minitree}, we know that each minitree can be reconstructed in $O(m \lg\lg n)$ time; as before, we store for each minitree root its partially computed lowpoint value (until all the minitrees are processed).
Now we compute the lowpoint values for each of the vertices belonging to a minitree using Lemma~\ref{dbe2}.
Once we determine lowpoint
values for all the vertices belonging to a minitree, we generate each minitree along with the node labels, and easily test whether any tree edge is a bridge using the characterization mentioned above.
We also need to check this condition for edges that connect two minitrees, but this can also be done within the same time and space bounds. We store this information using a bit vector $B$ of length $n-1$ such that $B[i] = 1$ if and only if the $i$-th edge of the DFS tree in preorder is a bridge. Thus, by running another DFS, we can report all the bridges of $G$. Clearly this procedure takes $O(n)$ bits of space, and the total running time is $O(m \lg n \lg \lg n)$ as we run the DFS algorithm $O(\lg n)$ times overall.
Hence we obtain the following. \begin{theorem} Given an undirected graph $G$ with $n$ vertices and $m$ edges, in $O(m \lg n \lg \lg n)$ time and $O(n)$ bits of space we can determine whether $G$ is $2$-edge connected. If $G$ is not $2$-edge connected, then in the same amount of time and space, we can compute and output all the bridges of $G$.
\end{theorem}
\subsection{st-numbering}\label{sec:stnumb} The {\it st}-ordering of vertices of an undirected graph is a fundamental tool for many graph algorithms, e.g., in planarity testing and graph drawing. The first linear-time algorithm for {\it st}-ordering the vertices of a biconnected graph is due to Even and Tarjan~\cite{EvenT76}, and is further simplified by Ebert~\cite{Ebert83}, Tarjan~\cite{Tarjan86} and Brandes~\cite{Brandes02}. All these algorithms, however, preprocess the graph using depth-first search, essentially to compute lowpoints which in turn determine an (implicit) open ear decomposition. A second traversal is required to compute the actual {\it st}-ordering.
All of these algorithms take $O(n \lg n)$ bits of space. We give an $O(n)$ bits implementation of Tarjan's~\cite{Tarjan86} algorithm.
We first describe the two-pass classical algorithm of Tarjan without worrying about the space requirement. The algorithm assumes, without loss of generality, that there exists an edge between the vertices $s$ and $t$; otherwise it adds the edge $(s,t)$ before starting. Moreover, the algorithm starts a DFS from the vertex $s$, and the edge $(s,t)$ is the first edge traversed in the DFS of $G$. Let $p(v)$ be the parent of vertex $v$ in the DFS tree; $DFI(v)$ and $lowpoint(v)$ have the usual meaning as defined previously. The first pass is a depth first search during which, for every vertex $v$, the values $p(v)$, $DFI(v)$ and $lowpoint(v)$ are computed and stored. The second pass constructs a list $L$, initialized with $[s,t]$, such that if the vertices are numbered in the order in which they occur in $L$, then we obtain an {\it st}-ordering. In addition, we also have a sign array of $n$ bits, initialized with sign[$s$]$\,=\,-$. The second pass is a preorder traversal starting from the root $s$ of the DFS tree and works as described in the pseudocode of Algorithm~\ref{st}.
\begin{algorithm}[h]
\begin{algorithmic}[1]
\State{DFS(s) starts with the edge $(s,t)$}
\ForAll{vertices $v \neq s,t $ in preorder of DFS(s)}
\If{sign(lowpoint($v$)) $== +$}
\State{Insert $v$ after $p(v)$ in $L$}
\State{sign($p(v)$) $= -$}
\EndIf
\If{sign(lowpoint($v$)) $== -$}
\State{Insert $v$ before $p(v)$ in $L$}
\State{sign($p(v)$) $= +$}
\EndIf
\EndFor
\end{algorithmic}
\caption{st-numbering}
\label{st} \end{algorithm}
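A direct transcription of this two-pass algorithm, storing the list $L$ explicitly (and hence using $O(n \lg n)$ bits, which the rest of this section avoids), can be sketched in Python as follows. Note that the sketch stores the lowpoint of a vertex as a vertex rather than as a number, so that sign(lowpoint($v$)) is well defined; the input is assumed to be biconnected with $(s,t)$ an edge.

```python
def st_numbering(adj, s, t):
    """Two-pass st-numbering; assumes (s, t) is an edge of a biconnected
    graph. lowpoint is stored as a vertex so sign(lowpoint(v)) is defined."""
    dfi, parent, lowv = {s: 1}, {s: None}, {}
    preorder = [s]

    def dfs(v):
        lowv[v] = v
        # From s, traverse the edge (s, t) first, as the algorithm requires.
        nbrs = ([t] + [x for x in adj[v] if x != t]) if v == s else adj[v]
        for w in nbrs:
            if w not in dfi:                      # tree edge
                dfi[w] = len(dfi) + 1
                parent[w] = v
                preorder.append(w)
                dfs(w)
                if dfi[lowv[w]] < dfi[lowv[v]]:
                    lowv[v] = lowv[w]
            elif w != parent[v] and dfi[w] < dfi[lowv[v]]:
                lowv[v] = w                       # back edge

    dfs(s)
    L, sign = [s, t], {s: '-'}                    # second pass over preorder
    for v in preorder:
        if v in (s, t):
            continue
        p = parent[v]
        if sign[lowv[v]] == '+':
            L.insert(L.index(p) + 1, v)           # insert v after p(v)
            sign[p] = '-'
        else:
            L.insert(L.index(p), v)               # insert v before p(v)
            sign[p] = '+'
    return L    # the position of a vertex in L is its st-number
```

The list insertions make this sketch quadratic in the worst case; Tarjan's linked-list version achieves linear time, and the $O(n)$-bit emulation below avoids storing $L$ altogether.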
It is clear from the above pseudocode that the procedure runs in linear time using $O(n \lg n)$ bits of space for storing the elements of $L$. To make it space efficient, we use ideas similar to those in our biconnectivity algorithm. At a high level, we generate the lowpoint values of the first $n/\lg n$ vertices in depth first order and process them. Due to the space restriction, we cannot store the list $L$ as in Tarjan's algorithm; instead we use the BP sequence of the DFS tree and augment it with some extra information to `encode' the final {\it st}-ordering, as described below.
As in some of our earlier algorithms, this algorithm also runs in $O(\lg n)$ phases, and in each phase it processes $n/\lg n$ vertices. In the first phase, to obtain the lowpoint values of the first $n/\lg n$ vertices in depth first order, we run, as in our biconnectivity algorithm, a procedure that explicitly stores the lowpoint values of these vertices in an array. Also, during the execution of the biconnectivity algorithm, the BP sequence is generated and stored in the BP array. We create two more arrays, of $n$ bits each, that are in one-to-one correspondence with the open parentheses of the BP sequence. We can use rank/select operations (as defined in Section~\ref{rs}) to map the position of a vertex in these two arrays to the corresponding open parenthesis in the BP sequence. The first array, called Sign, stores the sign of every vertex as in Tarjan's algorithm. To simulate the effect of the list $L$, we create the second array, called $P$, where we store the relative position, i.e., ``before'' or ``after'', of every vertex with respect to its parent. Namely, if $u$ is the parent of $v$, and $v$ comes before (after, respectively) $u$ in the list $L$ in Algorithm \ref{st},
then we store $P[v]=b$ ($P[v]=a$, respectively). One crucial observation is that, even though the list $L$ is dynamic, the relative position of the vertex $v$ does not change with respect to the position of $u$, and is determined at the time of insertion of $v$ into the list $L$ (new vertices may be added between $u$ and $v$ later). In what follows, we show how to use the BP sequence, and the array $P$ to emulate the effect of list $L$ and produce the {\it st}-ordering.
We first describe how to reconstruct the list $L$ using the BP sequence and the $P$ array. The main observation we use in the reconstruction of $L$ is that a node $v$ appears in $L$ after all the nodes in its child subtrees whose roots are marked with $b$ in $P$, and before all the nodes in its child subtrees whose roots are marked with $a$ in $P$. Also, all the nodes in a subtree appear consecutively in the list $L$. Moreover, all the children marked $b$ appear in increasing order of $DFI$, while all the children marked $a$ appear in decreasing order of $DFI$. Thus by looking at the $P[v]$ values of all the children of a node $u$, and computing their subtree sizes,
we can determine the position in $L$ of $u$ among all the nodes in its subtree. Let us call a child $v$ of $u$ an {\it after-child} if $v$ is marked $a$ in $P$; similarly, if $v$ is marked $b$ in $P$, it is called a {\it before-child}. Let $T(v)$ denote the subtree rooted at the vertex $v$ in the DFS tree $T$ of $G$, and let $|T(v)|$ denote the size of $T(v)$. Suppose that the vertex $u$ has $k+\ell$ children, out of which the $k$ children $v_1,\cdots, v_k$ are before-children and the remaining $\ell$ children $w_1,\cdots,w_\ell$ are after-children, where $DFI(v_1)< DFI(v_2)<\cdots<DFI(v_k)$ and $DFI(w_1)<DFI(w_2)<\cdots<DFI(w_\ell)$. Then in $L$, the vertices of $T(v_1), T(v_2), \ldots, T(v_k)$ appear first (in that order), followed by $u$, and finally the vertices of $T(w_\ell), T(w_{\ell-1}), \ldots, T(w_1)$. More specifically, $u$ appears at the $(S+1)$-th location, where $S=\sum_{i=1}^{k} |T(v_i)|$. With this approach, we can reconstruct the list $L$, and hence output the {\it st}-numbers of all the nodes in linear time, if $L$ can be stored in memory, which requires $O(n \lg n)$ bits. To perform this step with $O(n)$ bits, we repeat the whole reconstruction process $\lg n$ times, where in the $i$-th iteration we reproduce the sublist $L[(i-1)n/\lg n + 1, \dots, i \cdot n/\lg n]$ -- ignoring any node that falls outside this range -- and report all these nodes with {\it st}-numbers in the range $[(i-1)n/\lg n + 1, \dots, i \cdot n/\lg n]$. As each of these reconstructions takes $O(m \lg n \lg \lg n)$ time, we obtain the following.
\begin{theorem}\label{thm:st-numbering} Given an undirected biconnected graph $G$ on $n$ vertices and $m$ edges, and two distinct vertices $s$ and $t$, we can output an $st$-numbering of all the vertices of $G$ in $O(m \lg^2 n \lg \lg n)$ time, using $O(n)$ bits of space. \end{theorem}
\subsection{Applications of $st$-numbering} \label{st_app} In this section, we show that using the space efficient implementation of Theorem~\ref{thm:st-numbering} for $st$-numbering, we immediately obtain similar results for a few applications of $st$-numbering. We provide the details below.
\subsubsection{Two-partitioning problem}\label{part}
In this problem, given vertices $a_1,\cdots, a_k$ of a graph $G$ and natural numbers $c_1,\cdots, c_k$ with $c_1+\cdots+ c_k = n$, we want to find a partition of $V$ into sets $V_1,\cdots, V_k$ with $a_i \in V_i$ and $|V_i| = c_i$ for every $i$, such that every set $V_i$ induces a connected subgraph of $G$. This problem is called the {\it $k$-partitioning problem}. The problem is NP-hard even when $k = 2$, $G$ is bipartite, and the condition $a_i \in V_i$ is relaxed \cite{DYER85}. But Gy\"{o}ri \cite{Gyori81} and Lov\'{a}sz \cite{lovasz} proved that such a partition always exists if the input graph is $k$-connected, and that it can be found in polynomial time in such graphs. Let $G$ be $2$-connected. Then the two-partitioning problem can be solved in the following manner~\cite{Schmidt14}: let $v_1 := a_1$ and $v_n := a_2$, and compute a $v_1v_n$-numbering $v_1, v_2, \cdots, v_n$; note that, by the property of $st$-numbering, for any vertex $v_i$ (in particular for $i = c_1$) the graphs induced by $v_1, \cdots, v_i$ and by $v_i, \cdots , v_n$ are always connected subgraphs of $G$. Thus, applying Theorem \ref{thm:st-numbering}, we obtain the following:
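Once the $v_1v_n$-numbering is available, extracting the partition is a prefix/suffix split. The sketch below (names ours) takes the ordering as given, assuming it was computed as in Theorem~\ref{thm:st-numbering}, and includes a small BFS routine used only to check the connectivity claim:

```python
from collections import deque

def two_partition(st_order, c1):
    # Prefix/suffix split of a v1..vn ordering: both sides induce
    # connected subgraphs, by the defining property of st-numberings.
    return st_order[:c1], st_order[c1:]

def induces_connected(adj, part):
    # BFS restricted to `part`; used here only to verify the claim.
    part = set(part)
    start = next(iter(part))
    seen, queue = {start}, deque([start])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w in part and w not in seen:
                seen.add(w)
                queue.append(w)
    return seen == part
```

For example, on the cycle $C_5$ with the ordering $0,1,2,3,4$ (a valid $st$-ordering for $s=0$, $t=4$), any split point yields two connected parts.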
\begin{theorem}
Given an undirected biconnected graph $G$, two distinct vertices $a_1, a_2$, and two natural numbers $c_1, c_2$ such that $c_1+c_2=n$, we can obtain a partition $(V_1, V_2)$ of the vertex set $V$ of $G$ in $O(m \lg^2 n \lg \lg n)$ time, using $O(n)$ bits of space, such that $a_1 \in V_1$, $a_2 \in V_2$, $|V_1|=c_1$, $|V_2|=c_2$, and both $V_1$ and $V_2$ induce connected subgraphs of $G$. \end{theorem}
\subsubsection{Vertex-subset-two-partition problem}\label{subset} Wada and Kawaguchi \cite{WadaK93} defined the following problem which they call the $vertex$-$subset$-$k$-$partition$ problem. This is actually an extension of the $k$-$partition$ problem defined in Section~\ref{part}. The problem is defined as follows:
\noindent {\bf Input:} \begin{enumerate} \item An undirected graph $G = (V, E)$ with $n$ vertices and $m$ edges;
\item a vertex subset $V' \ (\subseteq V)$ with $n' = |V'| \geq k$; \item $k$ distinct vertices $a_i \ (1 \leq i \leq k) \in V', a_i \neq a_j \ (1 \leq i <j \leq k)$; and \item $k$ natural numbers $n_1, n_2, \cdots, n_k$ such that $\sum\nolimits_{i=1}^k n_i = n'$. \end{enumerate}
\noindent {\bf Output:} a partition $V_1 \cup V_2 \cup \cdots \cup V_k$ of the vertex set $V$ and a partition $V_1' \cup V_2' \cup \cdots \cup V_k'$ of vertex set $V'$ such that for each $i(1\leq i\leq k)$ \begin{enumerate} \item $a_i \in V_i'$;
\item $|V_i'|=n_i$; \item $V_i' \subseteq V_i$ and \item each $V_i$ induces a connected subgraph of $G$. \end{enumerate} Note that this problem is an extension of the $k$-partition problem, since choosing $V'=V$ corresponds to the original $k$-partition problem.
Wada and Kawaguchi~\cite{WadaK93}
proved that the vertex-subset-$k$-partition problem always admits a solution if the input graph $G$ is $k$-connected (for $k\geq 2$). In particular, if $G$ is $2$-connected, using an $st$-ordering the vertex-subset-two-partitioning problem can be solved in the following manner~\cite{WadaK93}: suppose that $G, V' \ (\subseteq V), a_1, a_2, n_1$ and $n_2 \ (n_1+n_2=n'=|V'|)$ are the inputs. Let $s = v_1 := a_1$ and $t = v_n := a_2$, and compute an $st$-numbering $v_1, v_2, \cdots, v_n$. From this $st$-numbering, $V$ can be partitioned into two sets $V_1$ and $V_2$ such that $|V_1 \cap V'|=n_1$ and $|V_2 \cap V'|=n_2$. From the property of $st$-numbering, we know that both $V_1$ and $V_2$ induce connected subgraphs of $G$. Moreover, $a_1\in V_1$ and $a_2 \in V_2$. Using Theorem~\ref{thm:st-numbering} as a subroutine to compute such an $st$-numbering of $G$, we obtain the following result.
\begin{theorem} Given an undirected biconnected graph $G$, we can solve the vertex-subset-two-partitioning problem in $O(m \lg^2 n \lg \lg n)$ time, using $O(n)$ bits of space. \end{theorem}
\subsubsection{Two independent spanning trees}\label{spanning} Recall that $k$ spanning trees of a graph $G$ are independent if they all have the same root vertex $r$, and for every vertex $v\neq r$, the paths from $v$ to $r$ in the $k$ spanning trees are vertex-disjoint (except for their endpoints). Itai and Rodeh~\cite{ItaiR88} conjectured that every $k$-connected graph contains $k$ independent spanning trees. Even though the most general version of this conjecture has not been proved yet, this conjecture is shown to be true for $k \leq 4$ \cite{CheriyanM88,CurranLY06,ItaiR88,ZehaviI89}, and also for planar graphs~\cite{Huck99}. In particular, if the given graph $G$ is biconnected, we can generate two independent spanning trees (let us call them $S$ and $T$) in the following manner~\cite{ItaiR88}.
Choose an arbitrary edge, say $(s,t)$, in $G$, and let $f$ be an $st$-numbering of $G$. To construct $S$, choose for every vertex $v \neq s$ an edge $(u,v)$ with $f(u) <f(v)$, where for $t$ we choose an edge other than $(s,t)$. To construct $T$, choose the edge $(s,t)$ and, for every vertex $v \notin \{s,t\}$, an edge $(v,w)$ with $f(v) <f(w)$. It is easy to prove that $s$ is the root of both $S$ and $T$, and that $S$ and $T$ are independent spanning trees: for every vertex $v$, the path from the root $s$ to $v$ in $S$ consists of vertices $u$ with $f(u) <f(v)$ and avoids the edge $(s,t)$, whereas the path in $T$ uses the edge $(s,t)$ and otherwise consists of vertices $w$ with $f(w) > f(v)$. Using Theorem~\ref{thm:st-numbering} to compute such an $st$-numbering of $G$, it is not hard to produce $S$ and $T$. Thus we obtain the following.
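The construction can be sketched as follows (our own minimal illustration, taking the $st$-ordering as given; ties among admissible edges are broken arbitrarily, which the construction permits):

```python
def independent_spanning_trees(adj, order):
    # order is an st-ordering: order[0] = s, order[-1] = t.
    f = {v: i + 1 for i, v in enumerate(order)}
    s, t = order[0], order[-1]
    par_s = {s: None}            # parent pointers of tree S (rooted at s)
    par_t = {s: None, t: s}      # tree T uses the edge (s, t)
    for v in order[1:]:
        if v != t:
            # in T, v hangs off any higher-numbered neighbor
            par_t[v] = next(w for w in adj[v] if f[w] > f[v])
        # in S, v hangs off a lower-numbered neighbor; for t, avoid (s, t)
        par_s[v] = next(w for w in adj[v]
                        if f[w] < f[v] and not (v == t and w == s))
    return par_s, par_t

def path_to_root(par, v):
    # follow parent pointers up to the common root s
    path = [v]
    while par[path[-1]] is not None:
        path.append(par[path[-1]])
    return path
```

Independence can then be checked directly: for every $v \neq s$, the interior vertices of the two root-to-$v$ paths are disjoint.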
\begin{theorem} Given an undirected biconnected graph $G$, we can report two independent spanning trees $S$ and $T$ in $O(m \lg^2 n \lg \lg n)$ time, using $O(n)$ bits. \end{theorem}
\section{Concluding remarks and open problems}\label{conclusion} We have presented space efficient algorithms for a number of important applications of DFS. Obtaining linear time algorithms for them while maintaining $O(n)$ bits of space usage is an interesting and challenging open problem. One of the main bottlenecks (with this approach) is finding an $O(n)$-bit, $O(m+n)$-time algorithm for DFS, which is also open, even though such implementations are known for BFS~\cite{BanerjeeC016,ElmasryHK15}. Another challenging open problem is to remove the poly-log terms in the running times of the algorithms described (e.g., the $\lg n$ term in the running time of the $2$-vertex and $2$-edge connectivity algorithms, and the $\lg^2 n$ term in the running time of the two independent spanning trees algorithm). These terms seem inherent in our tree covering approach. It would be interesting to find other applications of our tree covering approach in space efficient algorithms. There are also plenty of other applications of DFS, and it would be interesting to study them from the point of view of space efficiency. For example, planarity testing is one prime example where DFS is used crucially; a natural question is whether we can test the planarity of a given graph using $O(n)$ bits. \\
\noindent {\bf\large References}
\end{document} |
\begin{document}
\title{Critical graphs for the chromatic edge-stability number}
\author{Bo\v stjan Bre\v sar$^{a,b}$\thanks{Email: \texttt{[email protected]}} \and Sandi Klav\v zar$^{c,a,b}$\thanks{Email: \texttt{[email protected]}} \and Nazanin Movarraei$^d$\thanks{Email: \texttt{[email protected]}} }
\maketitle
\begin{center} $^a$ Faculty of Natural Sciences and Mathematics, University of Maribor, Slovenia\\
$^b$ Institute of Mathematics, Physics and Mechanics, Ljubljana, Slovenia\\
$^c$ Faculty of Mathematics and Physics, University of Ljubljana, Slovenia\\
$^d$ Department of Mathematics, Yazd University, Iran
\end{center}
\begin{abstract} The chromatic edge-stability number ${\rm es}_{\chi}(G)$ of a graph $G$ is the minimum number of edges whose removal results in a spanning subgraph $G'$ with $\chi(G')=\chi(G)-1$. Edge-stability critical graphs are introduced as the graphs $G$ with the property that ${\rm es}_{\chi}(G-e) < {\rm es}_{\chi}(G)$ holds for every edge $e\in E(G)$. If $G$ is an edge-stability critical graph with $\chi(G)=k$ and ${\rm es}_{\chi}(G)=\ell$, then $G$ is $(k,\ell)$-critical. Graphs which are $(3,2)$-critical and contain at most four odd cycles are classified. It is also proved that the problem of deciding whether a graph $G$ has $\chi(G)=k$ and is critical for the chromatic number can be reduced in polynomial time to the problem of deciding whether a graph is $(k,2)$-critical. \end{abstract}
\noindent {\bf Keywords:} chromatic edge-stability; edge-stability critical graph; odd cycle; computational complexity \\
\noindent {\bf AMS Subj.\ Class.\ (2010)}: 05C15, 05C38, 68Q25
\section{Introduction}
Given a graph $G$, a function $c:V(G)\rightarrow [k]=\{1,\ldots, k\}$ such that $c(u)\neq c(v)$ if $uv\in E(G)$, is called a {\em $k$-coloring} of $G$. The minimum $k$ for which $G$ is $k$-colorable is the {\em chromatic number} of $G$, and is denoted by $\chi(G)$. The {\em chromatic edge-stability number}, ${\rm es}_{\chi}(G)$, of $G$ is the minimum number of edges of $G$ whose removal results in a graph $G_{1}$ with $\chi(G_{1})=\chi(G)-1$.
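For small graphs, both invariants can be computed by exhaustive search. The following brute-force sketch (our own illustration, exponential-time and meant only to make the definitions concrete) enumerates colorings and edge subsets:

```python
from itertools import combinations, product

def chromatic_number(vertices, edges):
    # Smallest k admitting a proper k-coloring (try all k^n colorings).
    for k in range(1, len(vertices) + 1):
        for col in product(range(k), repeat=len(vertices)):
            c = dict(zip(vertices, col))
            if all(c[u] != c[v] for u, v in edges):
                return k
    return len(vertices)

def es_chi(vertices, edges):
    # Smallest number of edges whose removal drops the chromatic number
    # by one (assumes chi(G) >= 2, so such an edge set exists).
    chi = chromatic_number(vertices, edges)
    for r in range(len(edges) + 1):
        for removed in combinations(edges, r):
            rest = [e for e in edges if e not in removed]
            if chromatic_number(vertices, rest) == chi - 1:
                return r
```

For instance, $C_5$ has $\chi = 3$ and ${\rm es}_{\chi} = 1$, while the disjoint union of two triangles has ${\rm es}_{\chi} = 2$, since one edge must be removed from each triangle.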
The chromatic edge-stability number was introduced in~\cite{staton-1980}, where ${\rm es}_{\chi}$ was bounded from above for regular graphs in terms of the size of a given graph. Somewhat surprisingly, this natural coloring concept only recently received some of the deserved attention. In~\cite{kemnitz-2018} the edge-stability number was compared with the chromatic bondage number and bounded for several graph operations. In~\cite{akbari-2019+}, among other results, several bounds on ${\rm es}_{\chi}$ were proved and a Nordhaus-Gaddum type inequality was derived. We also mention here the concept of edge-transversal number alias bipartite edge frustration, defined as the smallest number of edges that have to be deleted from a graph to obtain a bipartite spanning subgraph, see~\cite{ber-2000, doslic-2007, dvorak-2012, kral-2004, yarahmadi-2011, zhu-2009}. In particular, if $\chi(G) = 3$, then the two invariants coincide. A related study is concerned with the minimum number of edges that an $n$-vertex graph must have so that one can reduce it to a bipartite graph by the removal of a fixed number of edges~\cite{kostochka-2015}.
We say that a graph $G$ is {\em edge-stability critical} if ${\rm es}_{\chi}(G-e)<{\rm es}_{\chi}(G)$ holds for every edge $e\in E(G)$. To simplify the writing in this paper, we say that a graph $G$ is {\em $(k,\ell)$-critical}, where $k,\ell\ge 2$, if $G$ is an edge-stability critical graph with $\chi(G)=k$ and ${\rm es}_{\chi}(G)=\ell$. A graph $G$ is $(k,2)$-critical if and only if for every edge $e$ we have $\chi(G-e)=\chi(G)=k$ and there exists an edge $e'\in E(G)$ such that $\chi(G-\{e,e'\})=k-1$. From this point of view we recall that a {\em double-critical graph} is a connected graph in which the deletion of any pair of adjacent vertices decreases the chromatic number by two. The Erd\H{o}s-Lov\'{a}sz Tihany conjecture asserts that $K_k$ is the only double-critical $k$-chromatic graph~\cite{erdos-1968}. For recent results on this problem see~\cite{roso-2017, stiebitz-2017}.
Note that $(2,2)$-critical graphs are precisely the graphs with two edges. Hence, if we restrict to isolate-free graphs, then there are only two $(2,2)$-critical graphs, $P_3$ and $2K_2$. Since isolated vertices play no role in this study, we assume that all graphs considered in the rest of the paper are isolate-free.
To formulate our main result, we introduce the following four families of graphs, where $G + H$ denotes the disjoint union of graphs $G$ and $H$. Let ${\cal A}=\{C_{2k+1}+ C_{2\ell+1}\,|\,k,\ell\ge 1\}$ and let $\cal B$ be the family of graphs that are obtained from $C_{2k+1} + C_{2\ell+1}$, $k, \ell\ge 1$, by identifying a vertex of $C_{2k+1}$ with a vertex of $C_{2\ell+1}$. Next, let $Q_i$, $i\in[4]$, be paths with endvertices $x_i$ and $y_i$, such that exactly two of the $Q_i$ have odd length and at most one of them has length $1$. The family $\cal C$ consists of the graphs that are obtained from four such paths by identifying the vertices $x_1$, $x_2$, $x_3$, and $x_4$ and also identifying the vertices $y_1$, $y_2$, $y_3$, and $y_4$. The family $\cal D$ consists of the subdivisions of the graph $K_4$ in which either (i) all the subdivided paths are of odd length, or (ii) exactly three of the paths are odd, and these three paths induce an odd cycle, or (iii) exactly two of the paths are odd, and these two paths are vertex disjoint. Our main result now reads as follows.
\begin{theorem} \label{thm:main} $\cal A\cup \cal B\cup \cal C\cup \cal D$ is the family of $(3,2)$-critical graphs (without isolated vertices) that contain at most four odd cycles. \end{theorem}
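The odd-cycle count behind this classification can be checked by hand for members of $\cal C$: in such a (generalized theta) graph every cycle is the union of two of the four paths, so a cycle is odd exactly when the two chosen path lengths have different parity. A few lines of arithmetic (our own illustration, with path lengths $2,2,3,5$ chosen as an example) confirm that such a graph has exactly four odd cycles.

```python
from itertools import combinations

# A member of family C: four internally disjoint x,y-paths of lengths
# 2, 2, 3, 5 -- exactly two are odd, and none is forced to have length 1.
# (The two paths of length 2 are distinct paths, hence counted separately.)
path_lengths = [2, 2, 3, 5]

# Every cycle is the union of two of the four paths.
cycles = list(combinations(path_lengths, 2))
odd_cycles = [pair for pair in cycles if sum(pair) % 2 == 1]

print(len(cycles), len(odd_cycles))  # prints: 6 4
```

This is consistent with Theorem~\ref{thm:fouroddcycles} below, which places $\cal C$ among the $(3,2)$-critical graphs with exactly four odd cycles.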
In the next section we prove Theorem~\ref{thm:main}. In Section~\ref{sec:complexity}, we prove that the problem of deciding whether a graph $G$ is critical for the chromatic number and $\chi(G)=k$ can be reduced in polynomial time to the problem of deciding whether a graph is $(k,2)$-critical. We end the paper with concluding remarks concerning a possible characterization of $(3,2)$-critical graphs. In particular, we find one more family of $(3,2)$-critical graphs, and pose a problem whether the discovered families contain all $(3,2)$-critical graphs.
\section{Proof of Theorem~\ref{thm:main}}
It is clear that graphs with only one odd cycle cannot be $(3,2)$-critical. To describe $(3,2)$-critical graphs with exactly two odd cycles, we first show the following general fact about such graphs.
\begin{lemma} \label{lem:general-2-cycles} If $G$ is a graph that has exactly two odd cycles, and these cycles share no edge, then the cycles intersect either in a single vertex or not at all. \end{lemma}
\noindent{\bf Proof.\ } Let $C$ and $D$ be the odd cycles of $G$ and assume that they have more than one common vertex. Let $u$ and $v$ be vertices from $V(C)\cap V(D)$ such that $d_C(u,v)$ is smallest possible. Let $P$ be a $u,v$-subpath on $C$ of length $d_C(u,v)$, and let $P'$ be the other $u,v$-path on $C$. Let $Q$ be a $u,v$-path on $D$, which is internally disjoint from $C$. If the length of $Q$ is of the same parity as the length of $P$, then $Q\cup P'$ is an odd cycle, different from $C$ and $D$, a contradiction. Otherwise, if the length of $Q$ is of different parity than the length of $P$, then $Q\cup P$ is a third odd cycle in $G$, again a contradiction.
$\square$
\begin{theorem} \label{thm:two-odd-cycles} The $(3,2)$-critical graphs that contain exactly two odd cycles are precisely the graphs of the families $\cal A$ and $\cal B$. \end{theorem}
\noindent{\bf Proof.\ } Suppose that $G$ contains exactly two odd cycles. If the two odd cycles have a common edge, then ${\rm es}_{\chi}(G)=1$, hence $G$ is not $(3,2)$-critical. Thus, by Lemma~\ref{lem:general-2-cycles}, the two cycles are either disjoint or intersect in a single vertex. There are no other edges in $G$ but the edges of these two cycles, for otherwise removing an edge $e$ that is not in any of these two cycles gives ${\rm es}_{\chi}(G-e)\ge 2$, a contradiction. Hence $G$ belongs to $\cal A$ (if the two cycles are disjoint) or to $\cal B$ (if they share a vertex). Conversely, every graph in $\cal A\cup \cal B$ is $(3,2)$-critical: removing any edge destroys one of the two odd cycles, and removing one further edge from the remaining odd cycle yields a bipartite graph.
$\square$
\begin{lemma} \label{lem:general} If $G$ is a graph that has exactly three odd cycles, then the intersection of every two odd cycles is either empty or is a path. \end{lemma} \noindent{\bf Proof.\ } Let $C=v_1,\ldots,v_k,v_1$ ($k$ odd) be one of the three odd cycles in $G$. Suppose that $D$ is an odd cycle in $G$ such that the intersection $C\cap D$ is neither empty nor a path. This implies that $C\cap D$ is a union of at least two paths on $C$. We may assume without loss of generality that one of these paths is induced by vertices $v_1,\ldots, v_t$, where $t\in [k-1]$. Hence, there is a path $P$ between $v_t$ and a vertex $v_r$ in $V(C)\setminus \{v_1,\ldots,v_t\}$, which is internally in $D\setminus C$ (that is, except for its endvertices $v_t$ and $v_r$, all edges and eventual other vertices of $P$ are not in $C$). Analogously, there is a path $P'$ between $v_1$ and a vertex $v_s$ of $V(C)\setminus \{v_1,\ldots,v_t\}$, which is internally in $D\setminus C$ (that is, except for its endvertices $v_1$ and $v_s$ all edges and eventual other vertices of $P'$ are not in $C$). Note that $P$ and $P'$ are either disjoint or their intersection consists only of vertex $v_1$ (if $t=1$) or vertex $v_r$ (if $r=s$) or both.
Let $Q$ and $Q'$ be the two paths on $C$ between $v_t$ and $v_r$. If $P$ has the same parity as $Q$, then $P\cup Q'$ is an odd cycle. Otherwise, $P\cup Q$ is an odd cycle. Note that any of these cycles is distinct from $C$ and $D$. Now, an analogous argument for $P'$ implies that there is an odd cycle $F$ that involves the path $P'$ and a subpath of $C$ between $v_1$ and $v_s$. Clearly, since $P$ and $P'$ are distinct, also $F$ is different from $P\cup Q$ and $P\cup Q'$. We derive that there are at least four odd cycles in $G$, a contradiction.
$\square$
\begin{theorem} \label{thm:threeoddcylces} There are no $(3,2)$-critical graphs that contain exactly three odd cycles. \end{theorem} \noindent{\bf Proof.\ } Suppose that there is a $(3,2)$-critical graph $G$ that contains exactly three odd cycles, $C$, $D$, and $E$. By Lemma~\ref{lem:general}, the intersections of pairs of cycles are either empty or they are paths. If one of the intersections, say $C\cap D$, has no edges, then removing an edge of $E$, which does not belong to $C\cup D$, yields a graph $G'$ with ${\rm es}_{\chi}(G')>1$, hence $G$ is not $(3,2)$-critical. Also, if there exists an edge $e$ that belongs to all three odd cycles, we infer that $\chi(G-e)=2$, again a contradiction. From these two observations we infer that $C\cap D$ is a path $P$, that $C\cap E$ is a path $P'$, and that $D\cap E$ is a path $P''$, where the three paths are non-trivial and are pairwise edge disjoint. The cycle $E$ is not equal to the cycle $C\cdot D$, which is the cycle obtained from $C\cup D$ by removing the internal vertices of $P$, since $C\cdot D$ is an even cycle. From this we derive that there exists a path $Q$ from a vertex $x$ in $C\setminus D$ to a vertex $y$ in $D\setminus C$, whose edges and eventual internal vertices are not in $C\cup D$. Note that the union of the cycles $C$ and $D$ and the path $Q$ yields a subgraph of $G$, which is a subdivision of $K_4$.
To complete the proof we are going to show that the cycles $C$ and $D$ together with the path $Q$ in any case yield two odd cycles in addition to $C$ and $D$. For this sake we introduce some more notation. Let $z$ and $z'$ be the endvertices of the path $P$, let $P_1$ be the $x,z$-path on $C$ not passing through $z'$ and let $P_1'$ be the $x,z'$-path on $C$ not passing through $z$, see Fig.~\ref{fig:exactly-three-odd}. Similarly, let $P_2$ be the $y,z$-path on $D$ not passing through $z'$ and let $P_2'$ be the $y,z'$-path on $D$ not passing through $z$. Let $p_i$ be the length of $P_i$, $i\in [2]$, and let $q$ be the length of $Q$. We distinguish the following cases.
\begin{figure}
\caption{Situation from the proof of Theorem~\ref{thm:threeoddcylces}.}
\label{fig:exactly-three-odd}
\end{figure}
\noindent {\bf Case 1.} $q$ is even.\\ Assume $p_1$ and $p_2$ are both even. Then $Q\cup P_1\cup P\cup P_2'$ is an odd cycle because the length of the subpath $P\cup P_2'$ is odd. Similarly, $Q\cup P_2\cup P\cup P_1'$ is an odd cycle.
Assume $p_1$ and $p_2$ are both odd. Then the same two cycles as in the previous subcase are odd, but this time because the length of $P\cup P_2'$ is even.
In the last subcase assume that $p_1$ is odd and $p_2$ is even. (The case when $p_1$ is even and $p_2$ is odd is symmetric.) Now the extra odd cycles we are searching for are $Q\cup P_1\cup P_2$ and $Q\cup P_1'\cup P_2'$. The first cycle is clearly odd, while the second is odd because the cycle $C\cdot D$ is even and hence the length of the subpath $P_1'\cup P_2'$ is odd.
\noindent {\bf Case 2.} $q$ is odd.\\ If $p_1$ and $p_2$ are both even, or if $p_1$ and $p_2$ are both odd, then the extra odd cycles are again $Q\cup P_1\cup P_2$ and $Q\cup P_1'\cup P_2'$.
Assume that $p_1$ is odd and $p_2$ is even. (Again the case when $p_1$ is even and $p_2$ is odd is symmetric.) Then $Q\cup P_1'\cup P\cup P_2$ and $Q\cup P_1\cup P\cup P_2'$ are the required odd cycles. For instance, the latter cycle is odd because each of the subpaths $Q$, $P_1$ and $P\cup P_2'$ has odd length.
$\square$
\begin{lemma} \label{lem:intersection-path} If $G$ is a $(3,2)$-critical graph that contains at least three odd cycles, then there exist two odd cycles whose intersection is a path with at least one edge. \end{lemma} \noindent{\bf Proof.\ } Since $G$ is $(3,2)$-critical, any two odd cycles intersect in at least two vertices. (Indeed, if odd cycles $C$ and $D$ intersect in one vertex or not at all, then there exists at least one edge $e$ which is not in $C\cup D$, and so ${\rm es}_{\chi}(G-e)\ge 2$, a contradiction.) Let $C$ and $D$ be two odd cycles in $G$, and suppose that their intersection is not a path. Among vertices from $C\cap D$ between which there is no path lying in $C\cap D$, let $x$ and $y$ be chosen to be closest on $D$. Therefore, there is an $x,y$-path $P$ on $D$, which is internally disjoint from $C$, and also an $x,y$-path $Q$ on $C$, which is internally disjoint from $D$. Now, either $P\cup Q$ or $(C-Q)\cup P$ is an odd cycle; in any case the intersection of this odd cycle with $C$ is a path.
$\square$
\begin{lemma} \label{lem:everytwointesect} If $G$ is a $(3,2)$-critical graph that contains at least three odd cycles, then every two distinct odd cycles intersect in more than one vertex. \end{lemma} \noindent{\bf Proof.\ } Suppose that $D_1$ and $D_2$ are odd cycles in $G$, which either intersect in one vertex or they are disjoint. Since the edges of $D_1\cup D_2$ induce a graph with exactly two odd cycles, and $G$ has more than two odd cycles, there must exist an edge $e$ in $G$, which is not in $D_1\cup D_2$. Now, $G-e$ still has the cycles $D_1$ and $D_2$ (having no edge in their intersection), and so ${\rm es}_{\chi}(G-e)\ge 2$, a contradiction.
$\square$
We denote by $K_2^{(4)}$ the multigraph on two vertices connected by four parallel edges, by $K_3^{(2,2,1)}$ the multigraph on three vertices two pairs of which are connected by two parallel edges and the third pair by a single edge, and by $C_4^{(2,1,2,1)}$ the multigraph on four vertices obtained from the graph $C_4$ by duplicating two of its non-consecutive edges. See Fig.~\ref{fig:three-multigraphs}.
\begin{figure}
\caption{Multigraphs $K_2^{(4)}$, $K_3^{(2,2,1)}$, and $C_4^{(2,1,2,1)}$}
\label{fig:three-multigraphs}
\end{figure}
\begin{proposition} \label{prp:three-subdivisions} If $G$ is a $(3,2)$-critical graph that contains at least three odd cycles, then $G$ contains as a subgraph a subdivision of one of the multigraphs $K_2^{(4)}$, $K_4$, $K_3^{(2,2,1)}$, or $C_4^{(2,1,2,1)}$. Moreover, each of the subdivisions contains at least two odd cycles. \end{proposition}
\noindent{\bf Proof.\ } By Lemma~\ref{lem:intersection-path}, there exist two odd cycles $D_1$ and $D_2$ in $G$ whose intersection is a path with at least one edge. We claim that there is a path $P$ connecting two distinct vertices from $D_1\cup D_2$, which is internally disjoint from $D_1\cup D_2$. Since the edges of $D_1\cup D_2$ induce only two odd cycles, that is, $D_1$ and $D_2$, a third odd cycle $D_3$ must have an edge $e$, which is not in $D_1\cup D_2$. If both endvertices of $e$ are in $D_1\cup D_2$, then the claim is true, since $e$ itself induces a desired path $P$. Suppose exactly one of the endvertices of $e=xy$ is in $D_1\cup D_2$, say $x\in V(D_1\cup D_2)$. By Lemma~\ref{lem:everytwointesect}, $D_3$ intersects $D_i$, $i\in [2]$, in at least two vertices, hence there exists a path on $D_3$ from $y$ to $D_1\cup D_2$, which confirms the claim in this case. Finally, if both $x$ and $y$ are not in $D_1\cup D_2$, by Lemma~\ref{lem:everytwointesect} we again infer that there is a path on $D_3$ from $x$ to $D_1\cup D_2$ and also a path on $D_3$ from $y$ to $D_1\cup D_2$. These two paths together with the edge $e=xy$ yield a desired path $P$.
\begin{figure}
\caption{Situations leading to subdivisions of $K_2^{(4)}$, of $K_3^{(2,2,1)}$, of $C_4^{(2,1,2,1)}$, and of $K_4$}
\label{fig:possible-subdivisions}
\end{figure}
All the described possibilities for the position of $P$ in $D_1\cup D_2$ are schematically shown in Fig.~\ref{fig:possible-subdivisions}. We infer that $G$ contains as a subgraph a subdivision of one of the following graphs: $K_2^{(4)}$, $K_4$, $K_3^{(2,2,1)}$, or $C_4^{(2,1,2,1)}$. Since the odd cycles $D_1$ and $D_2$ are a part of this subdivision, the last sentence of the statement of the proposition is also clear.
$\square$
\begin{theorem} \label{thm:fouroddcycles} The $(3,2)$-critical graphs that contain exactly four odd cycles are precisely the graphs of the families $\cal C$ and $\cal D$. \end{theorem} \noindent{\bf Proof.\ } Let $G$ be a $(3,2)$-critical graph that contains exactly four odd cycles $D_1$, $D_2$, $D_3$, and $D_4$. We start with the following claim.
\noindent {\bf Claim.} Every edge of $G$ is contained in at least two odd cycles.
\noindent {\bf Proof} (of Claim). Suppose $e$ is an edge which is contained in only one of the cycles, say $D_4$. Since ${\rm es}_{\chi}(G-e)=1$, there exists an edge $f$ in $D_1\cap D_2\cap D_3$. By Lemma~\ref{lem:general}, the intersection of every two odd cycles in $G-e$ is a path. Then $D_1\cap D_2\cap D_3$ is also a path (containing $f$); let us denote it by $P$. Note that for two of the cycles among $D_1,D_2,D_3$ their intersection is the path $P$. Without loss of generality, let $D_1\cap D_2=P$. There are two cases: either $D_1\cap D_3=P$ or $D_2\cap D_3=P$ (by symmetry we may assume the former), or $(D_1\cap D_3)-P\ne\emptyset$ and $(D_2\cap D_3)-P\neq \emptyset$; see Fig.~\ref{fig:two-cases} for the second case.
\begin{figure}
\caption{Case B from the proof of Claim in the proof of Theorem~\ref{thm:fouroddcycles}}
\label{fig:two-cases}
\end{figure}
\noindent {\bf Case A.} $D_1\cap D_3=P$.\\ Note that $E(D_4\cap P)=\emptyset$, that is, $E(D_4)\cap E(D_1\cap D_2)=\emptyset=E(D_4)\cap E(D_1\cap D_3)$. Since $D_4$ contains an edge which does not lie in $D_1\cup D_2\cup D_3$, there is an edge $g$ in the (even) cycle $(D_2\cup D_3)-(D_2\cap D_3)$, which is not in $D_4$. Note that $g$ also does not lie in $D_1$. Now, we claim that ${\rm es}_{\chi}(G-g)\ge 2$. Indeed, if an edge in $P$ is removed from $G-g$, then $D_4$ is still an odd cycle in the resulting graph. On the other hand, if an edge in $D_1-P$ is removed from $G-g$, then either $D_2$ or $D_3$ remains in the resulting graph (depending on whether $g$ is in $D_2$ or $D_3$). Otherwise, $D_1$ remains in the resulting graph obtained by the removal of two edges, which is the final contradiction in this case.
\noindent {\bf Case B.} $(D_1\cap D_3)-P\ne\emptyset$ and $(D_2\cap D_3)-P\neq \emptyset$.\\ Note that $((D_1\cup D_2\cup D_3)-((D_1\cap D_3)\cup(D_2\cap D_3)))\cup P$ induces an odd cycle. Since $G$ has exactly four odd cycles, this cycle must be $D_4$. This contradicts the assumption that $e$ is an edge of $D_4$ not contained in the other three cycles.
Both cases lead to a contradiction, hence every edge of $G$ indeed lies in at least two odd cycles, as claimed. ($\Box$)
By Proposition~\ref{prp:three-subdivisions}, since $G$ contains at least three odd cycles, it contains a subgraph, which is a subdivision of one of the multigraphs $K_2^{(4)}$, $K_4$, $K_3^{(2,2,1)}$, or $C_4^{(2,1,2,1)}$. In all the cases let $G'$ denote the respective subgraph, a subdivision of the corresponding multigraph.
\noindent {\bf Case 1.} $G$ contains a subdivision of $K_2^{(4)}$.\\ Let $x$ and $y$ be the vertices in $G$, whose degree in $G'$ is $4$, and let $P_i$, $i\in [4]$, be the corresponding $x,y$-paths in $G'$, having lengths $p_i$, respectively. If exactly two of the lengths $p_i$ are odd, then $G'$ is clearly a $(3,2)$-critical graph. If $E(G)-E(G')\ne\emptyset$, then the removal of an edge in $E(G)-E(G')$ yields a graph with ${\rm es}_{\chi}\ge 2$. Therefore, $G=G'$, and $G$ is in $\cal C$. If an integer $p_j$ is of different parity than the other three, then $G'$ contains exactly three odd cycles, all of which contain the path $P_j$. Hence ${\rm es}_{\chi}(G')=1$. By Claim, every edge of $G$ lies in at least two odd cycles, thus the fourth odd cycle of $G$ contains all the edges of $E(G')-E(P_j)$, which is not possible. Therefore in this case, $G$ is not $(3,2)$-critical. In the other two cases for parities of integers $p_i$, $G'$ is bipartite, which is not a relevant case according to Proposition~\ref{prp:three-subdivisions}.
\noindent {\bf Case 2.} $G$ contains a subdivision of $K_4$.\\ In this case, $G'$ has six paths $P_i$ with lengths $p_i$ that connect vertices of degree $3$ in $G'$, and there are seven cycles, four of which come from the triangles of $K_4$ and three of which come from the squares of $K_4$. If $G'$ is in $\cal D$, then note that $G'$ has exactly four odd cycles, and every edge of $G'$ lies in exactly two odd cycles. As in Case 1, we infer that $G=G'$, and so $G$ is in $\cal D$. In other cases of parities of integers $p_i$, $G'$ contains exactly four odd cycles, all of which pass through the (common) path $P_i$ of odd length. This implies ${\rm es}_{\chi}(G')=1$, and since $G$ has no more odd cycles, ${\rm es}_{\chi}(G)=1$. (The case when all parities are even is not relevant according to Proposition~\ref{prp:three-subdivisions}.)
\noindent {\bf Case 3.} $G$ contains a subdivision of $K_3^{(2,2,1)}$.\\ In this case, $G'$ consists of five (subdivision) paths, two pairs of which form cycles $A$ and $B$, and $P$ is the path not in $A\cup B$; see the left graph in Fig.~\ref{fig:last-case-analysis}.
\begin{figure}
\caption{For the case analysis}
\label{fig:last-case-analysis}
\end{figure}
If both cycles $A$ and $B$ are odd, then by Lemma~\ref{lem:everytwointesect}, $G'$ is not $(3,2)$-critical. In fact, ${\rm es}_{\chi}(G')\ge 3$, and so ${\rm es}_{\chi}(G)\ge 3$. If both $A$ and $B$ are even cycles (and $G'$ is not bipartite), then $G'$ has four odd cycles all of which have path $P$ in common. Hence, ${\rm es}_{\chi}(G')=1$, and since $G$ has no other odd cycles, also ${\rm es}_{\chi}(G)=1$. If one of the two cycles, say $A$, is odd, and $B$ is even, then $G'$ contains exactly three odd cycles, which all have a path $R$ from $A$ in common. Hence ${\rm es}_{\chi}(G')=1$. Since every edge of $G$ lies in at least two odd cycles (by Claim), the fourth odd cycle of $G$ contains all the edges of $E(G')-E(R)$, which is not possible. Therefore also in this case, $G$ is not $(3,2)$-critical.
\noindent {\bf Case 4.} $G$ contains a subdivision of $C_4^{(2,1,2,1)}$.\\ In this case, $G'$ consists of six paths, two pairs of which form cycles $A$ and $B$; denote the other two paths by $Q$ and $Q'$; see the right graph in Fig.~\ref{fig:last-case-analysis}. If both cycles $A$ and $B$ are odd, we infer as in the previous case that $G$ is not $(3,2)$-critical, using Lemma~\ref{lem:everytwointesect}. If $A$ and $B$ are both even cycles, then $G'$ has four odd cycles, which all go through $Q$ or $Q'$. Without loss of generality, assume that all four cycles pass through $Q$. Hence ${\rm es}_{\chi}(G')=1$. Since every edge of $G$ lies in at least two odd cycles, the fourth odd cycle of $G$ contains all the edges of $E(G')-E(Q)$, which is not possible. For the final case, assume that $A$ is odd and $B$ is even (the reversed case is symmetric). Then $G'$ contains exactly three odd cycles, which all pass through one of the paths in $A$. Hence ${\rm es}_{\chi}(G')=1$, and we again infer that the fourth odd cycle of $G$ contains all edges of $G'$ except those of one of the paths in $A$, which is not possible.
$\square$
Theorem~\ref{thm:main} now follows by combining Theorems~\ref{thm:two-odd-cycles}, \ref{thm:threeoddcylces}, and~\ref{thm:fouroddcycles}.
\section{On the complexity of recognizing $(k,2)$-critical graphs} \label{sec:complexity}
In this section, we investigate the computational complexity of the recognition of $(k,2)$-critical graphs for $k\ge 4$. It is easy to see that $(3,2)$-critical graphs can be efficiently recognized. Indeed, for every edge $e$ of a given graph $G$ one has to verify whether $G-e$ is not bipartite and whether there exists an edge $f$ such that $G-\{e,f\}$ is bipartite; clearly, a BFS search can be used to check (non)bipartiteness, which yields a polynomial algorithm for recognizing $(3,2)$-critical graphs. On the other hand, the subsequent result in this section indicates that the recognition of $(k,2)$-critical graphs when $k\ge 4$ is likely to be computationally hard.
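The polynomial check just described can be written down directly. The sketch below is our own transcription (the function names are ours); it encodes the bipartiteness tests via BFS $2$-coloring and presumes, as in the characterization from the introduction, that a candidate input has chromatic number $3$.

```python
from collections import deque

def is_bipartite(n, edges):
    """BFS 2-coloring test; vertices are labeled 0..n-1."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    color = [None] * n
    for s in range(n):
        if color[s] is not None:
            continue
        color[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if color[v] is None:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return False
    return True

def is_32_critical(n, edges):
    """Check the criterion from the text for a graph with chi = 3:
    every G-e is non-bipartite, and some f makes G-{e,f} bipartite."""
    if is_bipartite(n, edges):
        return False
    for e in edges:
        rest = [x for x in edges if x != e]
        if is_bipartite(n, rest):
            return False  # removing a single edge already lowers chi
        if not any(is_bipartite(n, [x for x in rest if x != f]) for f in rest):
            return False  # es_chi(G - e) must equal 1
    return True

# Two disjoint triangles: the smallest member of family A.
triangles = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)]
print(is_32_critical(6, triangles))  # prints: True
```

Each bipartiteness test is linear in the graph size, and the two nested loops over edges give the claimed polynomial running time.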
Given an integer $k$, $k\ge 3$, let ${{\cal F}_{(k,2)}}$ denote the class of graphs that are $(k,2)$-critical, and let ${{\cal F}_{{\chi},{k}}}$ denote the class of graphs $G$ with $\chi(G)=k$ that are critical for the chromatic number. That is, $G$ is in ${{\cal F}_{{\chi},{k}}}$ when $\chi(G)=k$ and $\chi(H)<k$ for each proper subgraph $H$ of $G$. Since we consider only graphs $G$ with no isolated vertices, the latter condition is equivalent to the statement that $\chi(G-e)=k-1$ for every edge $e$ in $G$. For more information on graphs critical for the chromatic number see Chapter 5 in the book~\cite{jensen-1995}, recent papers~\cite{kostochka-2018, postle-2018}, and references therein.
In the next result we show that the problem of deciding whether a graph is in ${{\cal F}_{{\chi},{k}}}$ can be reduced in polynomial time to the problem of deciding whether a graph is in ${{\cal F}_{(k,2)}}$. In particular, the existence of a polynomial algorithm for recognizing the graphs from ${{\cal F}_{(k,2)}}$ implies that there exists a polynomial algorithm for recognizing the graphs in ${{\cal F}_{{\chi},{k}}}$. Since the latter seems very unlikely, we believe that recognizing the graphs in ${{\cal F}_{(k,2)}}$ is computationally hard.
Let $G$ be a connected graph and $u\in V(G)$. We denote by $G_u \bowtie K_k$ the graph obtained from $G$ and $K = K_k$ by identifying $u$ with a vertex of $K$. Let further $K_u$ be the $k$-clique of $G_u \bowtie K_k$ obtained in the construction from $K$.
\begin{theorem} Let $G$ be a graph and let $u\in V(G)$. If $k\ge 3$, then $G\in{{\cal F}_{{\chi},{k}}}$ if and only if $G_u \bowtie K_k\in {{\cal F}_{(k,2)}}$. \end{theorem}
\noindent{\bf Proof.\ } Let $G\in {{\cal F}_{{\chi},{k}}}$. Clearly, $\chi(G_u \bowtie K_k)=k$. To prove that $G_u \bowtie K_k\in {{\cal F}_{(k,2)}}$, consider two cases for an edge $e$ in $G_u \bowtie K_k$. If $e$ is in $K_u$, then clearly $\chi((G_u \bowtie K_k)-e)=k$, since $G$ is a subgraph of $(G_u \bowtie K_k)-e$. In addition, for an arbitrary edge $f\in E(G)$, we have $\chi((G_u \bowtie K_k)-\{e,f\})=k-1$, and so ${\rm es}_{\chi}((G_u \bowtie K_k)-e)=1$. The other case for an edge $e$ (that is, $e\in E(G)$) is similar. Indeed, we have $\chi((G_u \bowtie K_k)-e)=k$, yet for an arbitrary edge $f$ from $K_u$ we have $\chi((G_u \bowtie K_k)-\{e,f\})=k-1$. Thus, $G_u \bowtie K_k\in {{\cal F}_{(k,2)}}$.
For the reversed implication, let $G_u \bowtie K_k\in {{\cal F}_{(k,2)}}$. Since $\chi((G_u \bowtie K_k)-e)=k$ for an edge $e\in E(K_u)$, this implies that $\chi(G)=k$. To prove that $G$ is in ${{\cal F}_{{\chi},{k}}}$, let $e$ be an arbitrary edge in $E(G)$. Note that $\chi((G_u \bowtie K_k)-e)=k$. Since ${\rm es}_{\chi}((G_u \bowtie K_k)-e)=1$, there exists an edge $f\in E(G_u \bowtie K_k)$ such that $\chi((G_u \bowtie K_k)-\{e,f\})=k-1$. This edge $f$ cannot lie in $E(G)$ for otherwise $(G_u \bowtie K_k)-\{e,f\}$ contains the $k$-clique $K_u$ as a subgraph, and so the chromatic number of this graph is $k$. So $f\in E(K_u)$ and hence $\chi(G-e)=k-1$.
$\square$
\section{Concluding remarks on $(3,2)$-critical graphs}
We could find only one more family of $(3,2)$-critical graphs. They can be described as specific subdivisions of a multigraph obtained from a cycle by duplicating all of its edges. More precisely, a graph from $\cal E$ is obtained from the disjoint union of $k$ even cycles $C_{2n_1},\ldots, C_{2n_k}$ as follows. For each $i\in [k]$, let $x_i$ and $y_i$ be any two distinct vertices of $C_{2n_i}$, where we only require that $$\sum_{i=1}^k d_{C_{2n_i}}(x_i,y_i)$$ is odd. A graph $G\in \cal E$ is obtained by identifying $y_i$ with $x_{i+1}$ for $i\in [k-1]$, and identifying $y_k$ with $x_1$. It is easy to see that the graphs $G\in \cal E$ are $(3,2)$-critical. We pose the following problem, which we suspect has a positive answer.
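As an illustration (our own example, not part of the text), take $k=3$ with even cycles $C_4$, $C_4$, $C_6$ and $d_{C_{2n_i}}(x_i,y_i)=1$ for each $i$, so that the sum of the distances is $3$, an odd number. Each $C_{2n_i}$ contributes two paths between consecutive identified (hub) vertices, the two path lengths of a pair have equal parity, and every odd cycle of the resulting graph is a ``ring'' cycle using one path from each pair. The parity bookkeeping can be checked mechanically:

```python
from itertools import product

# Family E with k = 3: even cycles of lengths 4, 4, 6 and d(x_i, y_i) = 1.
# Cycle i contributes two hub-to-hub paths of lengths d_i and 2*n_i - d_i.
pairs = [(1, 3), (1, 3), (1, 5)]

# The glued even cycles themselves stay even:
assert all((a + b) % 2 == 0 for a, b in pairs)

# A "ring" cycle picks one path from each pair; all of them turn out odd,
# because the sum of the distances d_i is odd.
ring_lengths = [sum(choice) for choice in product(*pairs)]
print(sorted(set(l % 2 for l in ring_lengths)))  # prints: [1]

# Removing both paths of one pair destroys every ring cycle (only even
# cycles remain), while removing a single edge never does -- hence es = 2.
```

The same parity argument works for any member of $\cal E$, which is why the odd distance sum is the only requirement in the construction.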
\begin{problem} \label{prob} Is it true that $\cal A\cup \cal B\cup \cal C\cup \cal D\cup \cal E$ is the family of $(3,2)$-critical graphs (without isolated vertices)? \end{problem}
Solving Problem~\ref{prob} remains a challenge. One way to approach it is by using Proposition~\ref{prp:three-subdivisions}, which shows that a $(3,2)$-critical graph with at least three odd cycles contains one of the four subdivisions described in the proposition as a subgraph. Now, assuming that a $(3,2)$-critical graph $G$ has at least five odd cycles, one should examine each of the four possible subdivisions appearing as a proper subgraph $H$ of $G$, and each possibility for which cycles in these subdivisions are odd. Cases of $H$ being a subdivision of $K_2^{(4)}$ and $K_4$ should yield a contradiction, while $K_3^{(2,2,1)}$ and $C_4^{(2,1,2,1)}$ should either yield a contradiction or that $G$ belongs to $\cal E$. The described approach is probably very technical, hence applying some related work from the literature in an efficient way would be desirable. Among the known results about smallest sets of edges that hit all odd cycles we encountered papers of Berge and Reed~\cite{ber-2000} and Kr\'{a}l and Voss~\cite{kral-2004}, whose main concern is planar graphs; they cannot be applied for this purpose.
\section*{Acknowledgments}
The third author thankfully acknowledges initial discussions on the problem with Ross Kang. We also thank Jason Brown and Tommy Jensen for discussions on the recognition complexity of chromatic critical graphs. The financial support from the Slovenian Research Agency (research core funding P1-0297, projects J1-9109, J1-1693, and N1-0095) is acknowledged. The third author acknowledges the financial support from Yazd University research affairs as Post-doc research project.
\end{document}
\begin{document}
\begin{titlepage} \vspace*{4mm} \title{On the Second Fundamental Theorem of Asset Pricing} \vskip 2em \centerline{\mbox{{\sc Rajeeva L. Karandikar}\hspace{10pt} and \hspace{10pt}{\sc B. V. Rao
}}} \vskip 1em \place{Chennai Mathematical Institute, Chennai.} \vskip 9em \centerline{{\sc Abstract}}
{ Let $X^1,\ldots, X^d$ be sigma-martingales on $(\Omega,{\mathcal F}, {{\sf P}})$. We show that every bounded martingale (with respect to the underlying filtration) admits an integral representation w.r.t. $X^1,\ldots, X^d$ if and only if there is no equivalent probability measure (other than ${{\sf P}}$) under which $X^1,\ldots,X^d$ are sigma-martingales.
From this we deduce the second fundamental theorem of asset pricing: that completeness of a market is equivalent to uniqueness of the Equivalent Sigma-Martingale Measure (ESMM).}
\hrule width100pt height .8pt
\vskip 2mm \noindent {\it 2010 Mathematics Subject Classification:} 91G20, 60G44, 97M30, 62P05.\\ \noindent {\it Key words and phrases:} Martingales, Sigma Martingales, Stochastic Calculus, Martingale Representation, No Arbitrage, Completeness of Markets. \end{titlepage}
\section{Introduction} The (first) fundamental theorem of asset pricing says that a market consisting of finitely many stocks satisfies the {\em No Arbitrage property} (NA) if and only if there exists an {\em Equivalent Martingale Measure} (EMM), {\em i.e.\ there exists an equivalent probability measure under which the (discounted) stocks are (local) martingales.} The No Arbitrage property has to be suitably defined when we are dealing in continuous time, where one rules out approximate arbitrage in the class of admissible strategies. For a precise statement in the case when the underlying processes are locally bounded, see Delbaen and Schachermayer \cite{DS}. Also see Bhatt and Karandikar \cite{BK15} for an alternate formulation, where the approximate arbitrage is defined only in terms of simple strategies. For the general case, the result is true when local martingale in the statement above is replaced by sigma-martingale. See Delbaen and Schachermayer \cite{DS98}. They have an example where the No Arbitrage property holds but there is no equivalent measure under which the underlying process is a local martingale. However, there is an equivalent measure under which the process is a sigma-martingale.
The second fundamental theorem of asset pricing says that the market is complete ({\em i.e.\ every contingent claim can be replicated by trading on the underlying securities}) if and only if the EMM is unique. Interestingly, this property was studied by probabilists well before the connection between finance and stochastic calculus was established (by Harrison-Pliska \cite{HP}). The completeness of the market is the same as the question: when is every martingale representable as a stochastic integral w.r.t.\ a given set of martingales $\{M^1,\ldots ,M^d\}$? When $M^1,\ldots ,M^d$ is the $d$-dimensional Wiener process, this property was proven by Ito \cite{Ito}. Jacod and Yor \cite{JY} proved that if $M$ is a ${{\sf P}}$-local martingale, then every martingale $N$ admits a representation as a stochastic integral w.r.t.\ $M$ if and only if there is no probability measure ${{\sf Q}}$ (other than ${{\sf P}}$) such that ${{\sf Q}}$ is equivalent to ${{\sf P}}$ and $M$ is a ${{\sf Q}}$-local martingale. The situation in higher dimensions is more complex. The obvious generalisation to higher dimensions is not true, as was noted by Jacod-Yor \cite{JY}.
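As a toy illustration of this completeness/uniqueness equivalence, consider a one-period binomial model (this discrete example is ours and is not part of the continuous-time theory discussed in this paper): the unique equivalent martingale measure and the exact replication of an arbitrary claim arise from the same two linear equations.

```python
# One-period binomial market: S0 -> u*S0 or d*S0, riskless rate r = 0.
# The unique EMM assigns probability q to the up state; completeness shows
# up as the exact replication of any claim by a stock/bond portfolio.
S0, u, d = 100.0, 1.2, 0.8
q = (1 - d) / (u - d)                      # unique risk-neutral probability

def replicate(payoff_up, payoff_down):
    """Holdings (delta in the stock, b in the bond) replicating the claim."""
    delta = (payoff_up - payoff_down) / (S0 * (u - d))
    b = payoff_up - delta * S0 * u         # bond position (r = 0)
    return delta, b

# A call with strike 100: payoff 20 in the up state, 0 in the down state.
delta, b = replicate(20.0, 0.0)
price = delta * S0 + b                     # cost of the replicating portfolio
assert abs(price - q * 20.0) < 1e-12      # equals the EMM expectation
print(delta, b, price)  # prints: 0.5 -40.0 10.0
```

With more than one EMM (e.g.\ a trinomial model) the two replication equations become three and generically have no solution, which is the elementary shadow of the theorem above.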
To remedy the situation, a notion of vector stochastic integral was introduced, in which a vector-valued predictable process is the integrand and a vector-valued martingale is the integrator. The resulting integrals yield a class larger than the linear space generated by componentwise integrals. See \cite{J80}, \cite{CS}. However, one has to prove various properties of stochastic integrals once again.
Here we achieve the same objective in another fashion, avoiding having to define integration again from scratch. In the same breath, we also take into account the general case, when the underlying processes need not be bounded but satisfy the property NFLVR, so that one has an equivalent sigma-martingale measure (ESMM). We say that a martingale $M$ admits an integral representation w.r.t. $(X^1,X^2,\ldots, X^d)$ if there exist predictable processes $f, g^j$ such that $g^j\in{\mathbb L}(X^j)$, \[Y_t=\sum_{j=1}^d\int_0^t g^j_s{\hspace{0.6pt}d\hspace{0.1pt}} X^j_s,\] $f\in{\mathbb L}(Y)$ and \[M_t=\int_0^t f_s{\hspace{0.6pt}d\hspace{0.1pt}} Y_s.\] The security $Y$ can be thought of as a mutual fund or an index fund, and the investor trades in such a fund trying to replicate the security $M$.
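Let us note that the usual componentwise representations are a special case of this notion: taking the outer integrand $f\equiv 1$ gives \[M_t=Y_t=\sum_{j=1}^d\int_0^t g^j_s{\hspace{0.6pt}d\hspace{0.1pt}} X^j_s,\] so every martingale representable in the componentwise sense is representable in the present sense. It is the extra outer integration that yields a class with better closure properties, as will be seen below.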
We will show that if an ESMM exists for a multidimensional r.c.l.l. process $(X^1,X^2,\ldots, X^d)$, then all bounded martingales admit a representation w.r.t. $X^j$, $1\le j\le d$, if and only if the ESMM is unique.
\section{Preliminaries and Notation} Let us start with some notation. $ (\Omega, {\mathcal F}, {{\sf P}})$ denotes a complete probability space with a filtration $ ({\mathcal F}_\centerdot)= \{{\mathcal F}_t:\;t\ge 0 \}$ such that ${\mathcal F}_0$ contains all ${{\sf P}}$-null sets (in ${\mathcal F}$) and \[\cap_{t>s}{\mathcal F}_t={\mathcal F}_s \;\;\forall s\ge 0.\]
For various notions, definitions and standard results on stochastic integrals, we refer the reader to Jacod \cite{J78} or Protter \cite{P}.
For a local martingale $M$, let ${\mathbb L}^1_m(M)$ be the class of predictable processes $f$ such that there exists a sequence of stopping times $\sigma_k\uparrow\infty$ with \[{{\sf E}}[\{\int_0^{\sigma_k}f^2_s{\hspace{0.6pt}d\hspace{0.1pt}} [M,M]_s\}^\frac{1}{2}]<\infty\] and for such an $f$, $N=\int f{\hspace{0.6pt}d\hspace{0.1pt}} M$ is defined and is a local martingale.
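For instance, if $M=B$ is a standard Brownian motion, then $[B,B]_s=s$ and for a predictable $f$ with $\lvert f\rvert\le C$ one can take $\sigma_k=k$: \[{{\sf E}}[\{\int_0^{k}f^2_s{\hspace{0.6pt}d\hspace{0.1pt}} s\}^\frac{1}{2}]\le C\sqrt{k}<\infty,\] so that every bounded predictable process belongs to ${\mathbb L}^1_m(B)$. The same conclusion holds for every martingale $N$ (with a suitable localizing sequence), a fact that will be used repeatedly below.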
Let ${\mathbb M}$ denote the class of martingales and for $M^1,M^2,\ldots , M^d\in{\mathbb M}$ let \[{\mathbb C}(M^1,M^2,\ldots , M^d)=\{Z\in{\mathbb M}\,:\,Z_t=Z_0+\sum_{j=1}^d\int_0^t f^j_s{\hspace{0.6pt}d\hspace{0.1pt}} M^j_s,\;f^j\in{\mathbb L}^1_m(M^j)\}\] and for $T<\infty$, let \[\tilde{{\mathbb K}}_T(M^1,M^2,\ldots , M^d)=\{Z_T\,:\, Z\in {\mathbb C}(M^1,M^2,\ldots , M^d)\}.\]
For the case $d=1$, Yor \cite{Yor} proved that $\tilde{{\mathbb K}}_T$ is a closed subspace of ${\mathbb L}^1(\Omega,{\mathcal F},{{\sf P}})$. The problem in the case $d>1$ is that in general $\tilde{{\mathbb K}}_T(M^1,M^2,\ldots , M^d)$ need not be closed. Jacod-Yor \cite{JY} gave an example where $M^1,M^2$ are continuous square integrable martingales and $\tilde{{\mathbb K}}_T(M^1,M^2)$ is not closed.
For martingales $M^1,M^2,\ldots ,M^d$, let \[{\mathbb F}(M^1,\ldots , M^d)=\{Z\in{\mathbb M}\,: Z_t=Z_0+\int_0^t f_s{\hspace{0.6pt}d\hspace{0.1pt}} Y_s, \,f\in{\mathbb L}^1_m(Y),\,Y\in {\mathbb C}(M^1,\ldots , M^d)\}.\] Let \[{\mathbb K}_T(M^1,M^2,\ldots , M^d)=\{Z_T\,:\,Z\in{\mathbb F}(M^1,M^2,\ldots , M^d)\}.\] The main result of the next section is \begin{theorem} \label{aztm1} Let $M^1,M^2,\ldots ,M^d$ be martingales. Then ${\mathbb K}_T(M^1,M^2,\ldots , M^d)$ is closed in ${\mathbb L}^1(\Omega,{\mathcal F}, {{\sf P}})$. \end{theorem}
This would be deduced from \begin{theorem} \label{aztm2} Let $M^1,M^2,\ldots ,M^d$ be martingales and $Z^n\in {\mathbb F}(M^1,M^2,\ldots , M^d)$ be such that ${{\sf E}}[\lvert Z^n_t-Z_t\rvert ]\rightarrow 0$ for all $t$. Then $Z\in {\mathbb F}(M^1,M^2,\ldots , M^d)$. \end{theorem}
When $M^1,M^2,\ldots ,M^d$ are square integrable martingales, the analogue of Theorem \ref{aztm1} for ${\mathbb L}^2$ follows from the work of Davis-Varaiya \cite{DV}. However, for the EMM characterisation via integral representation, one needs the ${\mathbb L}^1$ version, which we deduce using a change of measure technique.
We will need the Burkholder-Davis-Gundy inequality (see \cite{Mey}) for $p=1$, which states that there exist universal
constants $c^{1}, c^{2}$ such that for all martingales $M$ and $T<\infty$, \[ c^{1}{{\sf E}} [ ([M,M ]_T)^{\frac{1}{2}} ]\le {{\sf E}} [\sup_{0\le t\le T} \lvert M_t \rvert ]\le c^{2}{{\sf E}} [ ([M,M ]_T)^{\frac{1}{2}}].\] After proving Theorem \ref{aztm1}, in the next section we will introduce sigma-martingales and give some elementary properties. Then we come to the main theorem on integral representation of martingales. This is followed by the second fundamental theorem of asset pricing.
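As a quick sanity check of the $p=1$ inequality stated above, take $M=B$ a standard Brownian motion: then $[B,B]_T=T$ and the inequality reads \[ c^{1}\sqrt{T}\le {{\sf E}} [\sup_{0\le t\le T} \lvert B_t \rvert ]\le c^{2}\sqrt{T},\] which is consistent with Brownian scaling, since $\sup_{0\le t\le T}\lvert B_t\rvert$ has the same law as $\sqrt{T}\,\sup_{0\le t\le 1}\lvert B_t\rvert$.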
\section{Proof of Theorem \ref{aztm1}}
We begin with a few auxiliary results. In this section, we fix martingales $M^1,M^2,\ldots ,M^d$. \begin{lemma}\label{azl0} Let {\small \[{\mathbb C}_b(M^1,\ldots , M^d)=\{Z\in{\mathbb M}\,:\,Z_t=Z_0+\textstyle\sum_{j=1}^d\int_0^t f^j_s{\hspace{0.6pt}d\hspace{0.1pt}} M^j_s,\;f^j\text{ bounded predictable },1\le j\le d\},\] \[{\mathbb F}_b(M^1,\ldots , M^d)=\{Z\in{\mathbb M}\,: Z_t=Z_0+\textstyle\int_0^t f_s{\hspace{0.6pt}d\hspace{0.1pt}} Y_s, \,f\in{\mathbb L}^1_m(Y),\,Y\in {\mathbb C}_b(M^1,\ldots , M^d)\}.\]} Then ${\mathbb F}_b(M^1,\ldots , M^d)={\mathbb F}(M^1,\ldots , M^d)$. \end{lemma} \noindent{\sc Proof : } Since bounded predictable processes belong to ${\mathbb L}^1_m(N)$ for every martingale $N$, it follows that ${\mathbb C}_b(M^1,\ldots , M^d)\subseteq {\mathbb C}(M^1,\ldots , M^d)$ and hence ${\mathbb F}_b(M^1,\ldots , M^d)\subseteq {\mathbb F}(M^1,\ldots , M^d)$.
For the other part, let $Z\in {\mathbb F}(M^1,\ldots , M^d)$ be given by \[Z_t= Z_0+\int_0^t f_s{\hspace{0.6pt}d\hspace{0.1pt}} Y_s , \,f\in{\mathbb L}^1_m(Y)\] where \[Y_t=\sum_{j=1}^d\int_0^t g^j_s{\hspace{0.6pt}d\hspace{0.1pt}} M^j_s\] with $g^j\in{\mathbb L}^1_m(M^j)$. Let $\xi_s=1+\sum_{j=1}^d\lvert g^j_s\rvert$, $h^j_s=\frac{g^j_s}{\xi_s}$ and \[V_t=\sum_{j=1}^d\int_0^t h^j_s{\hspace{0.6pt}d\hspace{0.1pt}} M^j_s.\] Since $h^j$ are bounded, it follows that $V\in {\mathbb C}_b(M^1,M^2,\ldots , M^d)$. Using $g^j_s=\xi_sh^j_s$ and $g^j\in{\mathbb L}^1_m(M^j)$, it follows that $\xi\in{\mathbb L}^1_m(V)$ and \[Y_t=\int_0^t\xi_s{\hspace{0.6pt}d\hspace{0.1pt}} V_s.\] Since $f\in{\mathbb L}^1_m(Y)$, it follows that $f\xi\in{\mathbb L}^1_m(V)$ and $\int f{\hspace{0.6pt}d\hspace{0.1pt}} Y=\int f\xi {\hspace{0.6pt}d\hspace{0.1pt}} V$, so that $Z\in{\mathbb F}_b(M^1,\ldots , M^d)$. \qed
\begin{lemma} \label{azl1}Let $Z\in{\mathbb M}$ be such that there exists a sequence of stopping times $\sigma_k\uparrow\infty$ with ${{\sf E}}_{{\sf P}}[\sqrt{[Z,Z]_{\sigma_k}}]<\infty$ and $X^k\in {\mathbb F}(M^1,M^2,\ldots , M^d)$ where $X^k_t=Z_{t\wedge\sigma_k}$. Then $Z\in{\mathbb F}(M^1,M^2,\ldots , M^d)$. \end{lemma} \noindent{\sc Proof : } Let $X^k=\int f^k{\hspace{0.6pt}d\hspace{0.1pt}} Y^k$ for $k\ge 1$ with $f^k\in{\mathbb L}^1_m(Y^k)$ and $Y^k\in{\mathbb C}_b(M^1,M^2,\ldots , M^d)$. Let $\phi^{k,j}$ be bounded predictable processes such that \[Y^k_t=\sum_{j=1}^d\int_0^t\phi^{k,j}_s{\hspace{0.6pt}d\hspace{0.1pt}} M^j_s.\] Let $c_k>0$ be a common bound for $\phi^{k,1},\phi^{k,2},\ldots ,\phi^{k,d}$. With the convention $\sigma_0=0$, let us define $\eta^j,f,Y$ by \[\eta^j_t=\sum_{k=1}^\infty \frac{1}{c_k}\phi^{k,j}_t\Ind_{\{(\sigma_{k-1},\sigma_k]\}}(t),\] \[f_t=\sum_{k=1}^\infty c_kf^k_t\Ind_{\{(\sigma_{k-1},\sigma_k]\}}(t),\] \[Y_t=\sum_{j=1}^d \int_0^t \eta^j_s{\hspace{0.6pt}d\hspace{0.1pt}} M^j_s .\] By definition, $\eta^j$ is bounded by 1 for every $j$ and thus
$Y\in {\mathbb C}_b(M^1,M^2,\ldots , M^d)$. We can note that \[\begin{split} Z_{t\wedge\sigma_k}- Z_{t\wedge\sigma_{k-1}}&=X^k_{t\wedge\sigma_k}-X^k_{t\wedge\sigma_{k-1}}\\ &=\int_0^tf^k_s\Ind_{\{(\sigma_{k-1},\sigma_k]\}}(s){\hspace{0.6pt}d\hspace{0.1pt}} Y^k_s\\ &=\int_0^tf_s\Ind_{\{(\sigma_{k-1},\sigma_k]\}}(s){\hspace{0.6pt}d\hspace{0.1pt}} Y_s.\end{split} \] Thus \[Z_{t\wedge\sigma_k}=Z_0+\int_0^t\Ind_{[0,{\sigma_k}]}(s)f_s{\hspace{0.6pt}d\hspace{0.1pt}} Y_s\] and hence \[[Z,Z]_{\sigma_k}=\int_0^{\sigma_k}(f_s)^2{\hspace{0.6pt}d\hspace{0.1pt}} [Y,Y]_s.\] Since by assumption, for all $k$ \[{{\sf E}}_{{\sf P}}[\sqrt{[Z,Z]_{\sigma_k}}\;]<\infty\] it follows that $f\in{\mathbb L}^1_m(Y)$. This proves the required result. \qed \begin{lemma} \label{azl2}Let $Z^n\in{\mathbb M}$ be such that ${{\sf E}}[\lvert Z^n_t-Z_t\rvert ]\rightarrow 0$ for all $t$. Then there exists a sequence of stopping times $\sigma_k\uparrow\infty$ and a subsequence $\{n^j\}$ such that for each $k\ge 1$,
\[{{\sf E}}[\sqrt{[Z,Z]_{\sigma_k }}]<\infty\] and writing $Y^j=Z^{n^j}$, \begin{equation}\label{az1}{{\sf E}}[\sqrt{[Y^j-Z,Y^j-Z]_{\sigma_k }}\;]\rightarrow 0 \text{ as }j\uparrow\infty.\end{equation} \end{lemma} \noindent{\sc Proof : } Let $n^0=0$. For each $j$, ${{\sf E}}[\lvert Z^n_j-Z_j\rvert ]\rightarrow 0$ as $n\rightarrow\infty$ and hence we can choose $n^j>n^{j-1}$ such that \[{{\sf E}}[\lvert Z^{n^j}_j-Z_j\rvert ]\le 2^{-j}.\] Then using Doob's maximal inequality we have \[{{\sf P}}(\sup_{t\le j}\lvert Z^{n^j}_t-Z_t\rvert \ge \frac{1}{j^2})\le \frac{j^2}{2^j}.\] As a consequence, writing $Y^j=Z^{n^j}$, we have \begin{equation}\label{az31} \eta_t=\sum_{j=1}^\infty \sup_{s\le t} \lvert Y^j_s-Z_s\rvert<\infty \;\;a.s. \text{ for all }t<\infty.\end{equation} Now define \[U_t=\lvert Z_t\rvert+\sum_{j=1}^\infty \lvert Y^j_t-Z_t\rvert.\] In view of \eqref{az31}, it follows that $U$ is an r.c.l.l. adapted process. For any stopping time $\tau\le m$, we have \[\begin{split} {{\sf E}}[U_\tau]&= {{\sf E}}[\lvert Z_\tau\rvert]+ \sum_{j=1}^\infty {{\sf E}}[\lvert Y^j_\tau-Z_\tau\rvert]\\ &\le {{\sf E}}[\lvert Z_m\rvert]+\sum_{j=1}^\infty {{\sf E}}[\lvert Y^j_m-Z_m\rvert]\\ &\le {{\sf E}}[\lvert Z_m\rvert]+\sum_{j=1}^m {{\sf E}}[\lvert Y^j_m-Z_m\rvert]+\sum_{j=m+1}^\infty 2^{-j} \\ &<\infty. \end{split}\] Here, we have used that since $Z$ and $Y^j-Z$ are martingales, $\lvert Z\rvert$ and $\lvert Y^j-Z\rvert$ are sub-martingales, and that $\tau\le m$. Now defining \[\sigma_k=\inf\{t\,:\;U_t\ge k\text{ or } U_{t-}\ge k\}\wedge k\] it follows that $\sigma_k$ are bounded stopping times increasing to $\infty$ with \[ \sup_{s\le \sigma_k} U_s \le k+ U_{\sigma_k} \] and hence \begin{equation}\label{az31w} {{\sf E}}[\sup_{s\le \sigma_k} U_s ]<\infty. \end{equation} Thus, for each fixed $k$, ${{\sf E}}[\sup_{s\le \sigma_k} \lvert Z_s\rvert]<\infty$ and thus, by the Burkholder-Davis-Gundy inequality ($p=1$ case), we have ${{\sf E}}[\sqrt{[Z,Z]_{\sigma_k }}]<\infty$. 
In view of \eqref{az31} \[\lim_{j\rightarrow\infty}\sup_{s\le \sigma_k} \lvert Y^j_s-Z_s\rvert= 0\;\;\;\;a.s.\]
and is dominated by $(\sup_{s\le \sigma_k} U_s) $ which in turn is integrable as seen in \eqref{az31w}. Thus by dominated convergence theorem, we have \[ \lim_{j\rightarrow\infty}{{\sf E}}[\sup_{s\le \sigma_k} \lvert Y^j_s-Z_s\rvert]= 0. \]
The result \eqref{az1} now follows from Burkholder-Davis-Gundy inequality ($p=1$ case). \qed
\begin{lemma} \label{azl3}Let $V\in{\mathbb F}(M^1,M^2,\ldots , M^d)$ and $\tau$ be a bounded stopping time such that ${{\sf E}}[\sqrt{[V,V]_{\tau }}\;]<\infty$. Then for $\epsilon>0$, there exists $U\in {\mathbb C}_b(M^1,M^2,\ldots , M^d)$ such that \[ {{\sf E}}[\sqrt{[V-U,V-U]_{\tau }}\;]\le\epsilon.\] \end{lemma} \noindent{\sc Proof : } Let $V=\int f{\hspace{0.6pt}d\hspace{0.1pt}} X$ where $f\in{\mathbb L}^1_m(X)$ and $X\in{\mathbb C}_b(M^1,M^2,\ldots , M^d)$. Since \[[V,V]_t=\int_0^t\lvert f_s\rvert^2{\hspace{0.6pt}d\hspace{0.1pt}}[X,X]_s,\] the assumption on $V$ gives \begin{equation}\label{az33} {{\sf E}}[\sqrt{\textstyle\int_0^\tau\lvert f_s\rvert^2{\hspace{0.6pt}d\hspace{0.1pt}}[X,X]_s }]<\infty. \end{equation} Defining $f^k_s=f_s\Ind_{\{\lvert f_s\rvert\le k\}}$, let \[U^k=\int f^k{\hspace{0.6pt}d\hspace{0.1pt}} X.\] Since $X\in {\mathbb C}_b(M^1,M^2,\ldots , M^d)$ and $f^k$ is bounded, it follows that \[U^k\in{\mathbb C}_b(M^1,M^2,\ldots , M^d).\] Note that as $k\rightarrow\infty$, \[{{\sf E}}[\sqrt{[V-U^k,V-U^k]_{\tau }}]={{\sf E}}[\sqrt{\textstyle\int_0^\tau\lvert f_s\rvert^2\Ind_{\{\lvert f_s\rvert> k\}}{\hspace{0.6pt}d\hspace{0.1pt}}[X,X]_s }]\rightarrow 0\] in view of \eqref{az33}. The result now follows by taking $U=U^k$ with $k$ large enough so that ${{\sf E}}[\sqrt{[V-U^k,V-U^k]_{\tau }}]<\epsilon.$
\qed
\begin{lemma} \label{azl4} Suppose $Z\in {\mathbb M}$ and $\tau$ is a bounded stopping time such that ${{\sf E}}[\sqrt{[Z,Z]_{\tau }}\;]<\infty$, $Z_t=Z_{t\wedge\tau}$. Let $U^n\in {\mathbb C}_b(M^1,M^2,\ldots , M^d)$ with $U^n_0=0$ be such that \[{{\sf E}}[\sqrt{[U^n-Z,U^n-Z]_{\tau }}\;]\le 4^{-n}.\] Then there exists $X\in{\mathbb C}_b(M^1,M^2,\ldots , M^d)$ and $f\in{\mathbb L}^1_m(X)$ such that \begin{equation}\label{az66}Z_t=Z_0+\int_0^t f_s{\hspace{0.6pt}d\hspace{0.1pt}} X_s.\end{equation} \end{lemma} \noindent{\sc Proof : } Since $U^n\in{\mathbb C}_b(M^1,M^2,\ldots , M^d)$ with $U^n_0=0$, get bounded predictable processes $\{f^{n,j}:n\ge 1,\,1\le j\le d\}$ such that \begin{equation}\label{az2}U^n_t=\sum_{j=1}^d\int_0^tf^{n,j}_s{\hspace{0.6pt}d\hspace{0.1pt}} M^j_s.\end{equation} Without loss of generality, we assume that $U^n_t=U^n_{t\wedge\tau}$ and $f^{n,j}_s=f^{n,j}_s\Ind_{[0,\tau]}(s)$. Let \[\zeta=\sum_{n=1}^\infty 2^n\sqrt{[U^n-Z,U^n-Z]_{\tau }}.\] Then ${{\sf E}}[\zeta]<\infty$ and hence ${{\sf P}}(\zeta<\infty)=1.$ Let \[\eta=\zeta+\sqrt{[Z,Z]_{\tau }}+\sum_{j=1}^d\sqrt{[M^j,M^j]_{\tau }}.\] Let $c={{\sf E}}[\exp\{-\eta\}]$ and let ${{\sf Q}}$ be the probability measure on $(\Omega,{\mathcal F})$ defined by \[\frac{{\hspace{0.6pt}d\hspace{0.1pt}}{{\sf Q}}}{{\hspace{0.6pt}d\hspace{0.1pt}}{{\sf P}}}=\frac{1}{c}\exp\{-\eta\}.\] Then it follows that $\alpha={{\sf E}}_{{\sf Q}}[\eta^2]<\infty$. Noting that \[\eta^2\ge [Z,Z]_{\tau }+\sum_{j=1}^d[M^j,M^j]_{\tau }+\sum_{n=1}^\infty 2^{2n}[U^n-Z,U^n-Z]_{\tau }\] we have ${{\sf E}}_{{\sf Q}}[[Z,Z]_{\tau }]<\infty$, ${{\sf E}}_{{\sf Q}}[[M^j,M^j]_{\tau }]<\infty$ for $1\le j\le d$. Likewise, ${{\sf E}}_{{\sf Q}}[[U^n-Z,U^n-Z]_{\tau }]<\infty$ and so ${{\sf E}}_{{\sf Q}}[[U^n,U^n]_{\tau }]<\infty$. Note that $Z, M^j$ are no longer martingales under ${{\sf Q}}$, but we do not need that.
Let $ \widetilde{\Omega}=[0,\infty)\times\Omega$.
Recall that the predictable $\sigma$-field ${\mathcal P}$ is the smallest $\sigma$-field on $ \widetilde{\Omega}$ with respect to which all continuous adapted processes are measurable. We will define signed measures $\Gamma_{ij}$ on ${\mathcal P}$ as follows: for $E\in{\mathcal P}$, $1\le i,j\le d$ let \[\Gamma_{ij}(E)=\int_{\Omega}\int_0^\tau \Ind_E(s,\omega){\hspace{0.6pt}d\hspace{0.1pt}} [M^i,M^j]_s(\omega){\hspace{0.6pt}d\hspace{0.1pt}}{{\sf Q}}(\omega).\] (The integration is w.r.t. ${{\sf Q}}$, under which $[M^i,M^j]_\tau$ was seen above to be integrable, so that each $\Gamma_{jj}$ is a finite measure.)
Let $\Lambda=\sum_{j=1}^d\Gamma_{jj}$. From the properties of quadratic variation $[M^i,M^j]$, it follows that for all $E\in {\mathcal P}$, the matrix $((\Gamma_{ij}(E)))$ is non-negative definite. Further, $\Gamma_{ij}$ is absolutely continuous w.r.t. $\Lambda$ $\forall i,j$. It follows (see appendix) that we can get predictable processes $c^{ij}$ such that \[\frac{{\hspace{0.6pt}d\hspace{0.1pt}}\Gamma_{ij}}{{\hspace{0.6pt}d\hspace{0.1pt}}\Lambda}=c^{ij}\] and that $C=((c^{ij}))$ is a non-negative definite matrix. By construction $\lvert c^{ij}\rvert \le 1$. We can diagonalise $C$ (i.e. obtain a singular value decomposition) in a measurable way (see appendix) to obtain predictable processes $b^{ij}$, $d^j$ such that for all $i,k$ (writing $\delta_{ik}=1$ if $i=k$ and $\delta_{ik}=0$ if $i\neq k$), \begin{equation}\label{az5}\sum_{j=1}^db^{ij}_sb^{kj}_s=\delta_{ik}\end{equation} \begin{equation}\label{az6}\sum_{j=1}^db^{ji}_sb^{jk}_s=\delta_{ik}\end{equation} \begin{equation}\label{az7}\sum_{j,l=1}^db^{ij}_sc^{jl}_sb^{kl}_s=\delta_{ik}d^{i}_s\end{equation}
Since $((c^{ij}_s))$ is non-negative definite, it follows that $d^i_s\ge 0$. For $1\le k\le d$, let \[N^k=\sum_{l=1}^d\int b^{kl}_s{\hspace{0.6pt}d\hspace{0.1pt}} M^l.\] Then $N^k$ are ${{\sf P}}$-martingales since the $b^{kl}$ are bounded predictable processes. Indeed, $N^k\in{\mathbb C}_b(M^1,M^2,\ldots , M^d)$. Further, for $i\neq k$ \[{\hspace{0.6pt}d\hspace{0.1pt}}[N^i,N^k]_s=\sum_{j,l=1}^db^{ij}_sb^{kl}_s{\hspace{0.6pt}d\hspace{0.1pt}} [M^j,M^l]_s\] and hence for any bounded predictable process $h$ \[\begin{split} {{\sf E}}_{{\sf Q}}[\int_0^\tau h_s{\hspace{0.6pt}d\hspace{0.1pt}}[N^i,N^k]_s]&=\int_\Omega\int_0^\tau h_s\sum_{j,l=1}^db^{ij}_sb^{kl}_s{\hspace{0.6pt}d\hspace{0.1pt}} [M^j,M^l]_s{\hspace{0.6pt}d\hspace{0.1pt}}{{\sf Q}}(\omega)\\ &=\int_{\widetilde{\Omega}}h\sum_{j,l=1}^db^{ij}b^{kl}{\hspace{0.6pt}d\hspace{0.1pt}}\Gamma_{jl}\\ &=\int_{\widetilde{\Omega}}h\sum_{j,l=1}^db^{ij}b^{kl}c^{jl}{\hspace{0.6pt}d\hspace{0.1pt}}\Lambda\\ &=0\end{split} \] where the last step follows from \eqref{az7}. As a consequence, for bounded predictable $h^i$, \begin{equation}\label{az10} {{\sf E}}_{{\sf Q}}[\sum_{i,k=1}^d\int_0^\tau h^i_sh^k_s{\hspace{0.6pt}d\hspace{0.1pt}}[N^i,N^k]_s]={{\sf E}}_{{\sf Q}}[\sum_{k=1}^d\int_0^\tau (h^k_s)^2{\hspace{0.6pt}d\hspace{0.1pt}}[N^k,N^k]_s] \end{equation} Let us observe that \eqref{az10} holds for any predictable processes $\{h^i: 1\le i\le d\}$ provided the RHS is finite: we can first note that it holds for $\tilde{h}^i=h^i\Ind_{\{\lvert h\rvert\le c\}}$ where $\lvert h\rvert=\sum_{i=1}^d\lvert h^i\rvert$ and then let $c\uparrow\infty$. Note that for $n\ge m$ \[\begin{split} \sqrt{[U^n-U^m,U^n-U^m]_{\tau }}&\le \sqrt{[U^n-Z,U^n-Z]_{\tau }}+ \sqrt{[U^m-Z,U^m-Z]_{\tau }}\\ &\le 2^{-m}\eta\end{split} \] and hence \begin{equation}\label{az11} {{\sf E}}_{{\sf Q}}[[U^n-U^m,U^n-U^m]_{\tau }]\le 4^{-m}\alpha.\end{equation} Let us define $g^{n,k}=\sum_{j=1}^df^{n,j}b^{kj}$. 
Then note that \begin{equation}\label{az12} \begin{split} \sum_{k=1}^d\int g^{n,k}{\hspace{0.6pt}d\hspace{0.1pt}} N^k&=\sum_{k=1}^d\sum_{j=1}^d\int f^{n,j}b^{kj}{\hspace{0.6pt}d\hspace{0.1pt}} N^k\\ &=\sum_{k=1}^d\sum_{j=1}^d\sum_{l=1}^d\int f^{n,j}b^{kj}b^{kl}{\hspace{0.6pt}d\hspace{0.1pt}} M^l\\ &=\sum_{j=1}^d\int f^{n,j}{\hspace{0.6pt}d\hspace{0.1pt}} M^j\\ &=U^n \end{split}\end{equation} where in the last but one step, we have used \eqref{az6}. Noting that for $n\ge m$, \begin{equation}\label{az13} {{\sf E}}_{{\sf Q}}[\,[U^n-U^m,U^n-U^m]_\tau\,]={{\sf E}}_{{\sf Q}}[\sum_{k=1}^d\int_0^\tau(g^{n,k}_s-g^{m,k}_s)^2{\hspace{0.6pt}d\hspace{0.1pt}}[N^k,N^k]_s]\end{equation} and using \eqref{az11}, we conclude \begin{equation}\label{az14}\begin{split} {{\sf Q}}(\sum_{k=1}^d\int_0^\tau(g^{n,k}_s-g^{m,k}_s)^2{\hspace{0.6pt}d\hspace{0.1pt}}[N^k,N^k]_s\ge \frac{1}{m^4})&\le m^4{{\sf E}}_{{\sf Q}}[\,[U^n-U^m,U^n-U^m]_\tau\,]\\ &\le \alpha m^4\,4^{-m}.\end{split}\end{equation} Since ${{\sf E}}_{{\sf Q}}[\,[M^i,M^i]_\tau]<\infty$ for all $i$ and $g^{n,i}$ are bounded, it follows that ${{\sf E}}_{{\sf Q}}[\,[N^i,N^i]_\tau]<\infty$. Thus defining a measure $\Theta$ on ${\mathcal P}$ by \[\Theta(E)=\int_\Omega\Big[\sum_{k=1}^d\int_0^\tau\Ind_E(s,\omega){\hspace{0.6pt}d\hspace{0.1pt}}[N^k,N^k]_s(\omega)\Big]{\hspace{0.6pt}d\hspace{0.1pt}}{{\sf Q}}(\omega)\] we get (using \eqref{az11} and \eqref{az13}) \[\int (g^{m+1,k}-g^{m,k})^2{\hspace{0.6pt}d\hspace{0.1pt}}\Theta\le \alpha 4^{-m}\] and as a consequence, using the Cauchy-Schwarz inequality, we get \[\int \sum_{m=1}^\infty\lvert g^{m+1,k}-g^{m,k}\rvert {\hspace{0.6pt}d\hspace{0.1pt}}\Theta\le \sqrt{\Theta(\widetilde{\Omega})\alpha}<\infty.\] Defining \[g^k_s=\limsup_{m\rightarrow\infty}g^{m,k}_s\] it follows that $g^{m,k}\rightarrow g^k$ a.s. 
$\Theta$ and as a consequence, taking limit in \eqref{az14} as $n\rightarrow \infty$, we get \begin{equation}\label{az15} {{\sf Q}}(\sum_{k=1}^d\int_0^\tau(g^{k}_s-g^{m,k}_s)^2{\hspace{0.6pt}d\hspace{0.1pt}}[N^k,N^k]_s\ge \frac{1}{m^4}) \le \alpha m^4\,4^{-m}.\end{equation} Since ${{\sf Q}}$ and ${{\sf P}}$ are equivalent measures, it follows that \begin{equation}\label{az16} {{\sf P}}(\sum_{k=1}^d\int_0^\tau(g^{k}_s-g^{m,k}_s)^2{\hspace{0.6pt}d\hspace{0.1pt}}[N^k,N^k]_s\ge \frac{1}{m^4}) \rightarrow 0 \text{ as }m\rightarrow\infty.\end{equation} In view of \eqref{az12}, we have \begin{equation}\label{az3}[U^n,U^n]_\tau=\sum_{i,j=1}^d\int_0^\tau g^{n,i}_sg^{n,j}_s{\hspace{0.6pt}d\hspace{0.1pt}} [N^i,N^j]_s\end{equation} and, for $m\le n$, \begin{equation}\label{az4}[U^n-U^m,U^n-U^m]_\tau=\sum_{i,j=1}^d\int_0^\tau (g^{n,i}_s-g^{m,i}_s)(g^{n,j}_s-g^{m,j}_s){\hspace{0.6pt}d\hspace{0.1pt}} [N^i,N^j]_s.\end{equation} Taking limit in \eqref{az3} as $n\rightarrow\infty$, we get (using Fatou's lemma) \begin{equation}\label{az20} {{\sf E}}_{{\sf P}}[\sqrt{\textstyle\sum_{i,j=1}^d\int_0^\tau g^{i}_sg^{j}_s{\hspace{0.6pt}d\hspace{0.1pt}} [N^i,N^j]_s}\,]\le {{\sf E}}_{{\sf P}}[\,\sqrt{[Z,Z]_\tau}\, ]\end{equation} (since the hypothesis ${{\sf E}}[\sqrt{[U^n-Z,U^n-Z]_{\tau }}\;]\le 4^{-n}$ implies $ {{\sf E}}_{{\sf P}}[\,\sqrt{[U^n,U^n]_\tau}\, ]\rightarrow {{\sf E}}_{{\sf P}}[\,\sqrt{[Z,Z]_\tau}\, ]$). 
Let us define bounded predictable processes $\phi^{j}$, a predictable process $h$ and a ${{\sf P}}$-martingale $X$ as follows: \begin{equation}\label{az21} h_s=1+\sum_{i=1}^d\lvert g^{i}_s\rvert\end{equation} \begin{equation}\label{az22} \phi^{j}_s=\frac{g^{j}_s}{h_s} \end{equation} \begin{equation}\label{az23} X_t=\sum_{j=1}^d\int_0^t\phi^{j}_s{\hspace{0.6pt}d\hspace{0.1pt}} N^j_s\end{equation} Since $\phi^{j}$ is predictable and $\lvert \phi^{j}\rvert\le 1$, it follows that $X\in{\mathbb C}_b(M^1,M^2,\ldots,M^d)$ and \begin{equation}\label{az24} [X,X]_t=\sum_{j,k=1}^d\int_0^t\phi^{j}_s\phi^{k}_s{\hspace{0.6pt}d\hspace{0.1pt}} [N^j,N^k]_s.\end{equation} Noting that $g^{j}_s=h_s\phi^{j}_s$ by definition, we conclude that \[ \int_0^t(h_s)^2{\hspace{0.6pt}d\hspace{0.1pt}} [X,X]_s=\sum_{j,k=1}^d\int_0^tg^{j}_sg^{k}_s{\hspace{0.6pt}d\hspace{0.1pt}} [N^j,N^k]_s \] and hence, using \eqref{az20}, that \begin{equation}\label{az25} {{\sf E}}_{{\sf P}}[\sqrt{\textstyle\int_0^\tau(h_s)^2{\hspace{0.6pt}d\hspace{0.1pt}} [X,X]_s}]\le {{\sf E}}_{{\sf P}}[\,\sqrt{[Z,Z]_\tau}\, ]\end{equation} Since the $g^{j}$ (and hence the $\phi^{j}$) vanish outside $[0,\tau]$, we have $X_t=X_{t\wedge\tau}$, and \eqref{az25} then shows that $h\in{\mathbb L}^1_m(X)$ and that $Y=\int h{\hspace{0.6pt}d\hspace{0.1pt}} X$ is a martingale with $Y_t=Y_{t\wedge\tau}$ for all $t$. 
Observe that \[ [U^n,X]_t=\sum_{k,j=1}^d\int_0^tg^{n,k}_s\phi^j_s{\hspace{0.6pt}d\hspace{0.1pt}}[N^k,N^j]_s\] and hence \[\begin{split} [U^n,Y]_t&=\int_0^th_s{\hspace{0.6pt}d\hspace{0.1pt}}[U^n,X]_s\\ &=\sum_{k,j=1}^d\int_0^tg^{n,k}_sh_s\phi^j_s{\hspace{0.6pt}d\hspace{0.1pt}}[N^k,N^j]_s\\ &=\sum_{k,j=1}^d\int_0^tg^{n,k}_sg^j_s{\hspace{0.6pt}d\hspace{0.1pt}}[N^k,N^j]_s\end{split}\] and as a consequence \[\begin{split} [U^n-Y,U^n-Y]_t&=[U^n,U^n]_t-2[U^n,Y]_t+[Y,Y]_t\\ &=\sum_{k,j=1}^d\int_0^tg^{n,k}_sg^{n,j}_s{\hspace{0.6pt}d\hspace{0.1pt}}[N^k,N^j]_s-2\sum_{k,j=1}^d\int_0^tg^{n,k}_sg^j_s{\hspace{0.6pt}d\hspace{0.1pt}}[N^k,N^j]_s\\ &\;\;\;\;\;\;\;\; +\sum_{k,j=1}^d\int_0^tg^{k}_sg^j_s{\hspace{0.6pt}d\hspace{0.1pt}}[N^k,N^j]_s\\ &=\sum_{k,j=1}^d\int_0^t(g^{n,k}_s-g^k_s)(g^{n,j}_s-g^j_s){\hspace{0.6pt}d\hspace{0.1pt}}[N^k,N^j]_s\end{split}\] and thus \begin{equation}\label{az26} {{\sf E}}_{{\sf Q}}[[U^n-Y,U^n-Y]_\tau]={{\sf E}}_{{\sf Q}}[\sum_{k=1}^d\int_0^\tau (g^{n,k}_s-g^k_s)^2{\hspace{0.6pt}d\hspace{0.1pt}}[N^k,N^k]_s]\end{equation} where we have used \eqref{az10}. Taking $\liminf$ on the RHS in \eqref{az13} and using \eqref{az11}, we conclude \[{{\sf E}}_{{\sf Q}}[\sum_{k=1}^d\int_0^\tau (g^{k}_s-g^{m,k}_s)^2{\hspace{0.6pt}d\hspace{0.1pt}}[N^k,N^k]_s]\le \alpha 4^{-m}\] and hence \[{{\sf E}}_{{\sf Q}}[[U^n-Y,U^n-Y]_\tau]\le \alpha 4^{-n}.\] Thus $[U^n-Y,U^n-Y]_\tau\rightarrow 0$ in ${{\sf Q}}$-probability and hence in ${{\sf P}}$-probability. By assumption, $[U^n-Z,U^n-Z]_\tau\rightarrow 0$ in ${{\sf P}}$-probability. Since \[[Y-Z,Y-Z]_\tau\le 2([Y-U^n,Y-U^n]_\tau+[Z-U^n,Z-U^n]_\tau)\] for every $n$, it follows that \begin{equation}\label{az27} [Y-Z,Y-Z]_\tau=0\;\;a.s.\;{{\sf P}}.\end{equation} Since $Y,Z$ are ${{\sf P}}$-martingales such that $Z_t=Z_{t\wedge\tau}$ and $Y_t=Y_{t\wedge\tau}$, \eqref{az27} implies $Y_t-Y_0=Z_t-Z_0$ for all $t$. 
Recall that by construction, $Y_0=0$, $Y=\int h{\hspace{0.6pt}d\hspace{0.1pt}} X$ and $h\in{\mathbb L}^1_m(X)$, $X\in{\mathbb C}_b(M^1,M^2,\ldots , M^d)$. Thus \eqref{az66} holds. \qed
We now come to the proof of Theorem \ref{aztm2}. Let $Z^n\in {\mathbb F}(M^1,M^2,\ldots , M^d)$ be such that ${{\sf E}}[\lvert Z^n_t-Z_t\rvert ]\rightarrow 0$ for all $t$. We have to show that $Z\in {\mathbb F}(M^1,M^2,\ldots , M^d)$.
Using Lemma \ref{azl2}, get a sequence of stopping times $\sigma_k\uparrow\infty$ and a subsequence $\{n^j\}$ such that $Y^j=Z^{n^j}$ satisfies for each $k\ge 1$, ${{\sf E}}[\sqrt{[Z,Z]_{\sigma_k }}]<\infty$ and \[{{\sf E}}[\sqrt{[Y^j-Z,Y^j-Z]_{\sigma_k }}\;]\rightarrow 0 \text{ as }j\uparrow\infty.\]
Let us now fix a $k$ and let $\tilde{Z}_t=Z_{t\wedge\sigma_k}$. We will show that $\tilde{Z}\in{\mathbb F}(M^1,M^2,\ldots , M^d)$. This will complete the proof in view of Lemma \ref{azl1}.
Using the convergence established above, for each $n$, get $j_n$ such that \[{{\sf E}}[\sqrt{[Y^{j_n}-\tilde{Z},Y^{j_n}-\tilde{Z}]_{\sigma_k }}\;]\le 4^{-n-1}.\] For each $n$, invoking Lemma \ref{azl3} with $V=Y^{j_n}$, $\tau=\sigma_k$ and $\epsilon=4^{-n-1}$, get $U^n\in {\mathbb C}_b(M^1,M^2,\ldots , M^d)$ such that \[ {{\sf E}}[\sqrt{[Y^{j_n}-U^n,Y^{j_n}-U^n]_{\sigma_k }}\;]\le 4^{-n-1}.\]
Then we have \[ {{\sf E}}[\sqrt{[U^n-\tilde{Z},U^n-\tilde{Z}]_{\sigma_k }}\;]\le 4^{-n}\] with $U^n\in {\mathbb C}_b(M^1,M^2,\ldots , M^d)$.
Thus $\tilde{Z}\in{\mathbb F}(M^1,M^2,\ldots , M^d)$ in view of Lemma \ref{azl4}. \qed
Now we turn to the proof of Theorem \ref{aztm1}. Let $\xi^n\in{\mathbb K}_T(M^1,M^2,\ldots , M^d)$ be such that $\xi^n\rightarrow\xi$ in ${\mathbb L}^1(\Omega,{\mathcal F}, {{\sf P}})$. Let $\xi^n=X^n_T$ where $X^n\in{\mathbb F}(M^1,M^2,\ldots , M^d)$. Let us define $Z^n_t=X^n_{t\wedge T}$. Then $Z^n\in{\mathbb F}(M^1,M^2,\ldots , M^d)$ and the assumption on $\xi^n$ implies \[Z^n_t\rightarrow Z_t={{\sf E}}[\xi\mid{\mathcal F}_t]\text{ in }{\mathbb L}^1(\Omega,{\mathcal F}, {{\sf P}})\;\;\forall t.\] Thus Theorem \ref{aztm2} implies $Z\in {\mathbb F}(M^1,M^2,\ldots , M^d)$ and thus $\xi=Z_T$ belongs to ${\mathbb K}_T(M^1,M^2,\ldots , M^d)$. \qed \section{Sigma-martingales} For a semimartingale $X$, let ${\mathbb L}(X)$ denote the class of predictable processes $f$
such that $X$ admits a decomposition $X=N+A$ with $N$ being a local martingale, $A$ being a process with finite variation paths with $f\in{\mathbb L}^1_m(N)$ and \begin{equation}\label{ax1} \int_0^t\lvert f_s\rvert {\hspace{0.6pt}d\hspace{0.1pt}}\lvert A\rvert_s<\infty \;\;a.s.\;\;\forall t<\infty.\end{equation} Then for $f\in{\mathbb L}(X)$, the stochastic integral $\int f{\hspace{0.6pt}d\hspace{0.1pt}} X$ is defined as $\int f{\hspace{0.6pt}d\hspace{0.1pt}} N+\int f{\hspace{0.6pt}d\hspace{0.1pt}} A$. It can be shown that the definition does not depend upon the decomposition $X=N+A$. See \cite{J78}.
Let $M$ be a martingale, $f\in{\mathbb L}(M)$ and $Z=\int f{\hspace{0.6pt}d\hspace{0.1pt}} M$. Then $Z$ is a local martingale if and only if $f\in{\mathbb L}^1_m(M)$. In answer to a question raised by P. A. Meyer, Chou \cite{Chou} introduced a class $\Sigma_m$ of semimartingales consisting of $Z=\int f{\hspace{0.6pt}d\hspace{0.1pt}} M$ for $f\in{\mathbb L}(M)$. Emery \cite{Em80} constructed an example of $f,M$ such that $f\in{\mathbb L}(M)$ but $Z=\int f{\hspace{0.6pt}d\hspace{0.1pt}} M$ is not a local martingale. Such processes occur naturally in mathematical finance and have been called sigma-martingales by Delbaen and Schachermayer \cite{DS98}.
\definition{A semimartingale $X$ is said to be a sigma-martingale if there exists a $(0,\infty)$ valued predictable process $\phi$ such that $\phi\in{\mathbb L}(X)$ and \begin{equation}\label{ax3}M_t=\int_0^t \phi_s{\hspace{0.6pt}d\hspace{0.1pt}} X_s\end{equation} is a martingale.} Our first observation is: \begin{lemma}\label{axl5} Every local martingale $N$ is a sigma-martingale. \end{lemma} \noindent{\sc Proof : } Let $\eta_n\uparrow\infty$ be a sequence of stopping times such that $N_{t\wedge\eta_n}$ is a martingale, let \[\sigma_n=\inf\{t\ge 0\,:\;\lvert N_t\rvert\ge n\text{ or }\lvert N_{t-}\rvert\ge n\}\wedge n\] and let $\tau_n=\sigma_n\wedge\eta_n$. Then it follows that $N_{t\wedge\tau_n}$ is a uniformly integrable martingale and $a_n={{\sf E}}[\,[N,N]_{\tau_n}]<\infty$.
Define \[h_s=\sum_{n=1}^\infty\frac{1}{2^n(1+a_n)}\Ind_{(\tau_{n-1},\tau_n]}(s)\] with the convention $\tau_0=0$. Then $h$, being bounded, belongs to ${\mathbb L}(N)$ and $M=\int h{\hspace{0.6pt}d\hspace{0.1pt}} N$ is a local martingale with \begin{equation}\label{ax4}\sup_{t<\infty}{{\sf E}}[\,[M,M]_t]<\infty.\end{equation} Thus $M$ is a uniformly integrable martingale. Since $h$ is $(0,\infty)$ valued by definition, it follows that $N$ is a sigma-martingale. \qed This leads to \begin{lemma}\label{axl4} A semimartingale $X$ is a sigma-martingale if and only if there exists a uniformly integrable martingale $M$ satisfying \eqref{ax4} and a predictable process $\psi\in{\mathbb L}(M)$ such that \begin{equation}\label{ax5}X_t=\int_0^t \psi_s{\hspace{0.6pt}d\hspace{0.1pt}} M_s.\end{equation} \end{lemma} \noindent{\sc Proof : }
Suppose $X$ is given by \eqref{ax5} with $M$ a martingale satisfying \eqref{ax4} and $\psi\in{\mathbb L}(M)$. Then, defining \[g_s=\frac{1}{(1+(\psi_s)^2)}, \;\;N_t=\int_0^tg_s{\hspace{0.6pt}d\hspace{0.1pt}} X_s,\] it follows that $N=\int g\psi{\hspace{0.6pt}d\hspace{0.1pt}} M$. Since $g\psi$ is bounded by 1 and $M$ satisfies \eqref{ax4}, it follows that $N$ is a martingale. Thus $X$ is a sigma-martingale.
Conversely, given a sigma-martingale $X$ and a $(0,\infty)$ valued predictable process $\phi$ such that $N=\int \phi{\hspace{0.6pt}d\hspace{0.1pt}} X$ is a martingale, get $h$ as in Lemma \ref{axl5} and let $M=\int h{\hspace{0.6pt}d\hspace{0.1pt}} N=\int h\phi {\hspace{0.6pt}d\hspace{0.1pt}} X$. Then $M$ is a uniformly integrable martingale that satisfies \eqref{ax4} and $h\phi$ is a $(0,\infty)$ valued predictable process. \qed
From the definition, it is not obvious that the sum of sigma-martingales is also a sigma-martingale, but this is so as the next result shows. \begin{theorem} Let $X^1,X^2$ be sigma-martingales and $a_1,a_2$ be real numbers. Then $Y=a_1X^1+a_2X^2$ is also a sigma-martingale. \end{theorem} \noindent{\sc Proof : } Let $\phi^1,\phi^2$ be $(0,\infty)$ valued predictable processes such that \[M^i_t=\int_0^t\phi^i_s{\hspace{0.6pt}d\hspace{0.1pt}} X^i_s,\;\;i=1,2\] are uniformly integrable martingales, chosen (as is possible by Lemma \ref{axl4}) so that they satisfy \eqref{ax4}. Then, writing $\xi=\min(\phi^1,\phi^2)$ and $\eta^i_s=\frac{\xi_s}{\phi^i_s}$, it follows that \[N^i_t=\int_0^t \eta^i_s{\hspace{0.6pt}d\hspace{0.1pt}} M^i_s=\int_0^t\xi_s{\hspace{0.6pt}d\hspace{0.1pt}} X^i_s\] are uniformly integrable martingales since $\eta^i$ is bounded by one. Clearly, $Y=a_1X^1+a_2X^2$ is a semimartingale and $\xi\in{\mathbb L}(X^i)$ for $i=1,2$ implies $\xi\in{\mathbb L}(Y)$ and \[\int_0^t\xi_s{\hspace{0.6pt}d\hspace{0.1pt}} Y_s=a_1N^1_t+a_2N^2_t\] is a uniformly integrable martingale. Since $\xi$ is a $(0,\infty)$ valued predictable process, it follows that $Y$ is a sigma-martingale. \qed The following result gives conditions under which a sigma-martingale is a local martingale. \begin{lemma}\label{axl1} Let $X$ be a sigma-martingale with $X_0=0$. Then $X$ is a local martingale if and only if there exists a sequence of stopping times $\tau_n\uparrow\infty$ such that \begin{equation}\label{ax7} {{\sf E}}[\,\sqrt{[X,X]_{\tau_n}}\,]<\infty\;\;\forall n.\end{equation} \end{lemma} \noindent{\sc Proof : } Let $X$ be a sigma-martingale, and let $\phi$ be a $(0,\infty)$ valued predictable process and $M$ a martingale satisfying \eqref{ax4} such that \eqref{ax3} holds. Setting $\psi_s=\frac{1}{\phi_s}$, as noted above, \eqref{ax5} holds. Then \[[X,X]_t=\int_0^t(\psi_s)^2{\hspace{0.6pt}d\hspace{0.1pt}} [M,M]_s. \] Defining $\psi^k_s=\psi_s\Ind_{\{\lvert \psi_s\rvert\le k\}}$, it follows that \[X^k_t=\int_0^t\psi^k_s{\hspace{0.6pt}d\hspace{0.1pt}} M_s\] is a uniformly integrable martingale. 
Noting that for $k\ge 1$ \[[X-X^k,X-X^k]_t=\int_0^t(\psi_s)^2\Ind_{\{k<\lvert \psi_s\rvert\}}{\hspace{0.6pt}d\hspace{0.1pt}} [M,M]_s\] the assumption \eqref{ax7} implies that for each $n$ fixed, \[{{\sf E}}[\,\sqrt{[X-X^k,X-X^k]_{\tau_n}}\,]\rightarrow 0\text{ as }k\rightarrow\infty.\] The Burkholder-Davis-Gundy inequality ($p=1$) implies that for each $n$ fixed, \[{{\sf E}}[\,\sup_{0\le t\le\tau_n}\lvert X_t-X^k_t\rvert\,]\rightarrow 0\text{ as }k\rightarrow\infty\] and as a consequence $X^{[n]}_t=X_{t\wedge\tau_n}$ is a martingale for all $n$, and so $X$ is a local martingale. Conversely, if $X$ is a local martingale with $X_0=0$, and $\sigma_n$ are stopping times increasing to $\infty$ such that $X_{t\wedge\sigma_n}$ are martingales, then defining $\zeta_n=\inf\{t\,:\,\lvert X_t\rvert\ge n\}$ and $\tau_n=\sigma_n\wedge\zeta_n$, it follows that ${{\sf E}}[\lvert X_{\tau_n}\rvert]<\infty$ and since \[\sup_{t\le \tau_n}\lvert X_t\rvert\le n+\lvert X_{\tau_n}\rvert\] it follows that ${{\sf E}}[\sup_{t\le \tau_n}\lvert X_t\rvert]<\infty$. Thus, \eqref{ax7} holds in view of the Burkholder-Davis-Gundy inequality ($p=1$). \qed
The previous result gives us: \begin{corollary} \label{azc1} A bounded sigma-martingale $X$ is a martingale. \end{corollary} \noindent{\sc Proof : } Since $X$ is bounded, say by $K$, it follows that jumps of $X$ are bounded by $2K$. Thus jumps of the increasing process $[X,X]$ are bounded by $4K^2$ and thus $X$ satisfies \eqref{ax7} for \[\tau_n=\inf\{t\ge 0\,:\;[X,X]_t\ge n\}.\] Thus $X$ is a local martingale and being bounded, it is a martingale. \qed Here is a variant of the example given by Emery \cite{Em80} of a sigma-martingale that is not a local martingale. Let $\tau$ be a random variable with exponential distribution (assumed to be $(0,\infty)$ valued without loss of generality) and $\xi$ with ${{\sf P}}(\xi=1)={{\sf P}}(\xi=-1)=0.5$, independent of $\tau$. Let \[M_t=\xi\Ind_{[\tau,\infty)}(t)\] and ${\mathcal F}_t=\sigma(M_s:s\le t)$. It is easy to see that $M$ is a martingale. Let $f_t=\frac{1}{t}\Ind_{(0,\infty)}(t)$ and $X_t=\int_0^tf{\hspace{0.6pt}d\hspace{0.1pt}} M$. Then $X$ is a sigma-martingale and \[[X,X]_t=\frac{1}{\tau^2}\Ind_{[\tau,\infty)}(t).\] For any stopping time $\sigma$, it can be checked that $\sigma$ is a constant on $\{\sigma<\tau\}$ and thus if $\sigma$ is not identically equal to 0, $\sigma\ge (\tau\wedge a)$ for some $a>0$. Thus, $\sqrt{[X,X]_\sigma}\ge\frac{1}{\tau}\Ind_{\{\tau<a\}}$. It follows that for any stopping time $\sigma$, not identically zero, ${{\sf E}}[\sqrt{[X,X]_\sigma}]=\infty$ and so $X$ is not a local martingale.
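The final divergence can be verified directly; taking the rate of the exponential distribution to be one for definiteness (the rate plays no role), for any $a>0$,
\[{{\sf E}}\Big[\frac{1}{\tau}\Ind_{\{\tau<a\}}\Big]=\int_0^a\frac{1}{t}\,e^{-t}\,{\hspace{0.6pt}d\hspace{0.1pt}} t\ge e^{-a}\int_0^a\frac{{\hspace{0.6pt}d\hspace{0.1pt}} t}{t}=\infty,\]
so $\sqrt{[X,X]_\sigma}\ge\frac{1}{\tau}\Ind_{\{\tau<a\}}$ indeed has infinite expectation.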
The next result shows that $\int f{\hspace{0.6pt}d\hspace{0.1pt}} X$ is a sigma-martingale if $X$ is one. \begin{lemma}\label{axl7} Let $X$ be a sigma-martingale, $f\in{\mathbb L}(X)$ and let $U=\int f{\hspace{0.6pt}d\hspace{0.1pt}} X$. Then $U$ is a sigma-martingale. \end{lemma} \noindent{\sc Proof : } Let $M$ be a martingale and $\psi\in{\mathbb L}(M)$ be such that $X=\int \psi {\hspace{0.6pt}d\hspace{0.1pt}} M$ (as in Lemma \ref{axl4}). Now $U=\int f{\hspace{0.6pt}d\hspace{0.1pt}} X=\int f\psi {\hspace{0.6pt}d\hspace{0.1pt}} M$. Thus, once again invoking Lemma \ref{axl4}, one concludes that $U$ is a sigma-martingale. \qed
We now introduce the class of equivalent sigma-martingale measures (ESMM) and show that it is a convex set. \definition{Let $X^1,\ldots ,X^d$ be r.c.l.l. adapted processes and let ${\mathbb E}^s(X^1,\ldots ,X^d)$ denote the class of probability measures ${{\sf Q}}$ such that $X^1,\ldots ,X^d$ are sigma-martingales w.r.t. ${{\sf Q}}$.} Let \[{\mathbb E}^s_{{\sf P}}(X^1,\ldots ,X^d)=\{{{\sf Q}}\in {\mathbb E}^s(X^1,\ldots ,X^d)\,:\, {{\sf Q}}\text{ is equivalent to }{{\sf P}}\}\] and \[\tilde{{\mathbb E}}^s_{{\sf P}}(X^1,\ldots ,X^d)=\{{{\sf Q}}\in {\mathbb E}^s(X^1,\ldots ,X^d)\,:\, {{\sf Q}}\text{ is absolutely continuous w.r.t. }{{\sf P}}\}.\]
\begin{theorem} \label{esmm} For semimartingales $X^1,\ldots ,X^d$, ${\mathbb E}^s(X^1,\ldots ,X^d)$, ${\mathbb E}^s_{{\sf P}}(X^1,\ldots ,X^d)$ and $\tilde{{\mathbb E}}^s_{{\sf P}}(X^1,\ldots ,X^d)$ are convex sets. \end{theorem} \noindent{\sc Proof : } Let us consider the case $d=1$. Let ${{\sf Q}}^1,{{\sf Q}}^2\in{\mathbb E}^s(X)$. Let $\phi^1,\phi^2$ be $(0,\infty)$ valued predictable processes such that \[M^i_t=\int_0^t \phi^i_s{\hspace{0.6pt}d\hspace{0.1pt}} X_s\] are martingales under ${{\sf Q}}^i$, $i=1,2$. Let $\phi_s=\min(\phi^1_s,\phi^2_s)$ and let \[M_t=\int_0^t\phi_s{\hspace{0.6pt}d\hspace{0.1pt}} X_s.\] Noting that $M_t=\int_0^t\xi_s^i{\hspace{0.6pt}d\hspace{0.1pt}} M^i_s $ where $\xi^i_s={\phi_s}(\phi_s^i)^{-1}$ is bounded, it follows that $M$ is a martingale under ${{\sf Q}}^i,i=1,2$. Now if ${{\sf Q}}$ is any convex combination of ${{\sf Q}}^1,{{\sf Q}}^2$, it follows that $M$ is a ${{\sf Q}}$ martingale and hence $X_t=\int_0^t(\phi_s)^{-1}{\hspace{0.6pt}d\hspace{0.1pt}} M_s$ is a sigma-martingale under ${{\sf Q}}$. Thus ${\mathbb E}^s(X)$ is a convex set. Since ${\mathbb E}^s(X^1,\ldots ,X^d)=\cap_{j=1}^d{\mathbb E}^s(X^j)$ it follows that ${\mathbb E}^s(X^1,\ldots ,X^d)$ is convex. Convexity of ${\mathbb E}^s_{{\sf P}}(X^1,\ldots ,X^d)$ and $\tilde{{\mathbb E}}^s_{{\sf P}}(X^1,\ldots ,X^d)$ follows from this. 
\qed In analogy with the definition of ${\mathbb C}$ for martingales $M^1,\ldots ,M^d$, for sigma-martingales $M^1,M^2,\ldots , M^d$ let \[{\mathbb C}(M^1,M^2,\ldots , M^d)=\{Z\in{\mathbb M}\,:\,Z_t=Z_0+\sum_{j=1}^d\int_0^t f^j_s{\hspace{0.6pt}d\hspace{0.1pt}} M^j_s,\;f^j\in{\mathbb L}^1_m(M^j)\}\] \[{\mathbb F}(M^1,\ldots , M^d)=\{Z\in{\mathbb M}\,: Z_t=Z_0+\int_0^t f_s{\hspace{0.6pt}d\hspace{0.1pt}} Y_s, \,f\in{\mathbb L}^1_m(Y),\,Y\in {\mathbb C}(M^1,\ldots , M^d)\}.\] \begin{lemma}\label{ayl9} Let $M^1,\ldots, M^d$ be sigma-martingales and let $\phi^j$ be $(0,\infty)$ valued predictable processes such that \begin{equation}\label{ay9}N^j_t=\int_0^t\phi^j_s{\hspace{0.6pt}d\hspace{0.1pt}} M^j_s\end{equation} are uniformly integrable martingales. Then \begin{equation}\label{ay10} {\mathbb C}(M^1,M^2,\ldots , M^d)={\mathbb C}(N^1,N^2,\ldots ,N^d) \end{equation} \begin{equation}\label{ay10a} {\mathbb F}(M^1,M^2,\ldots , M^d)={\mathbb F}(N^1,N^2,\ldots ,N^d). \end{equation} \end{lemma} \noindent{\sc Proof : } Let $\psi^j_s=(\phi^j_s)^{-1}$. Note that $M^j=\int \psi^j{\hspace{0.6pt}d\hspace{0.1pt}} N^j$. If $Y\in {\mathbb C}(M^1,M^2,\ldots , M^d)$ is given by \begin{equation}\label{ay11} Y_t=\sum_{j=1}^d\int_0^tf^j_s{\hspace{0.6pt}d\hspace{0.1pt}} M^j_s,\;\;f^j\in{\mathbb L}(M^j)\end{equation} then defining $g^j=f^j\psi^j$, we can see that $g^j\in{\mathbb L}(N^j)$ and $\int f^j{\hspace{0.6pt}d\hspace{0.1pt}} M^j=\int g^j{\hspace{0.6pt}d\hspace{0.1pt}} N^j$. Thus \begin{equation}\label{ay12} Y_t=\sum_{j=1}^d\int_0^tg^j_s{\hspace{0.6pt}d\hspace{0.1pt}} N^j_s,\;\;g^j\in{\mathbb L}(N^j).\end{equation} Similarly, if $Y\in{\mathbb C}(N^1,N^2,\ldots ,N^d)$ is given by \eqref{ay12}, then defining $f^j=\phi^jg^j$, we can see that $Y$ satisfies \eqref{ay11}. Thus \eqref{ay10} is true. Now \eqref{ay10a} follows from \eqref{ay10}. \qed \section{Integral Representation w.r.t. Martingales} Let $M^1,\ldots, M^d$ be sigma-martingales. 
\definition{ A sigma-martingale $N$ is said to have an integral representation w.r.t. $M^1,\ldots, M^d$ if $N\in {\mathbb F}(M^1,M^2,\ldots , M^d)$ or in other words, $\exists Y\in {\mathbb C}(M^1,M^2,\ldots , M^d)$ and $f\in{\mathbb L}(Y)$ such that \begin{equation}\label{ay21} N_t=N_0+\int_0^tf_s{\hspace{0.6pt}d\hspace{0.1pt}} Y_s\,\;\;\forall t. \end{equation} }
Here is another observation needed later. \begin{lemma}\label{cal1} Let $M$ be a ${{\sf P}}$-martingale. Let ${{\sf Q}}$ be a probability measure equivalent to ${{\sf P}}$. Let $\xi=\frac{{\hspace{0.6pt}d\hspace{0.1pt}}{{\sf Q}}}{{\hspace{0.6pt}d\hspace{0.1pt}}{{\sf P}}}$ and let $Z$ be the r.c.l.l. martingale given by $Z_t={{\sf E}}_{{\sf P}}[\xi\mid{\mathcal F}_t]$. Then \begin{enumerate}[(i)] \item $M$ is a ${{\sf Q}}$-martingale if and only if $MZ$ is a ${{\sf P}}$-martingale. \item $M$ is a ${{\sf Q}}$-local martingale if and only if $MZ$ is a ${{\sf P}}$-local martingale. \item If $M$ is a ${{\sf Q}}$-local martingale then $[M,Z]$ is a ${{\sf P}}$-local martingale. \item If $M$ is a ${{\sf Q}}$-sigma-martingale then $[M,Z]$ is a ${{\sf P}}$-sigma-martingale. \end{enumerate} \end{lemma} \noindent{\sc Proof : } For a stopping time $\sigma$, let $\eta$ be a non-negative ${\mathcal F}_\sigma$ measurable random variable. Then \[{{\sf E}}_{{\sf Q}}[\eta]={{\sf E}}_{{\sf P}}[\eta \xi]={{\sf E}}_{{\sf P}}[\eta {{\sf E}}_{{\sf P}}[\xi\mid{\mathcal F}_\sigma]\,]={{\sf E}}_{{\sf P}}[\eta Z_\sigma].\] Thus $M_\sigma$ is ${{\sf Q}}$ integrable if and only if $M_\sigma Z_\sigma$ is ${{\sf P}}$-integrable.
Further, for any stopping time $\sigma$, \begin{equation}\label{bm60} {{\sf E}}_{{\sf Q}}[M_\sigma]={{\sf E}}_{{\sf P}}[M_\sigma Z_\sigma].\end{equation} Thus (i) follows from the observation that an integrable adapted process $N$ is a martingale if and only if ${{\sf E}}[N_\sigma]={{\sf E}}[N_0]$ for all bounded stopping times $\sigma$. For (ii), if $M$ is a ${{\sf Q}}$-local martingale, then choose stopping times $\tau_n\uparrow\infty$ such that for each $n$, $M_{t\wedge\tau_n}$ is a ${{\sf Q}}$-martingale. Then we have \begin{equation}\label{bm61}{{\sf E}}_{{\sf Q}}[M_{\sigma\wedge\tau_n}]= {{\sf E}}_{{\sf P}}[M_{\sigma\wedge\tau_n} Z_{\sigma\wedge\tau_n}]. \end{equation} Thus $M_{t\wedge\tau_n}Z_{t\wedge\tau_n}$ is a ${{\sf P}}$-martingale and thus $MZ$ is a ${{\sf P}}$-local martingale. The converse follows similarly.
For (iii), note that $M_tZ_t=M_0Z_0+\int_0^tM_{s-}{\hspace{0.6pt}d\hspace{0.1pt}} Z_s+\int_0^tZ_{s-}{\hspace{0.6pt}d\hspace{0.1pt}} M_s +[M,Z]_t$ and the two stochastic integrals are ${{\sf P}}$ local martingales, so the result follows from (ii). For (iv), representing the ${{\sf Q}}$ sigma-martingale $M$ as $M=\int \psi {\hspace{0.6pt}d\hspace{0.1pt}} N$, where $N$ is a ${{\sf Q}}$ martingale and $\psi\in{\mathbb L}(N)$, we see \[[M,Z]_t=\int_0^t\psi_s{\hspace{0.6pt}d\hspace{0.1pt}} [N,Z]_s.\] By (iii), $[N,Z]$ is a ${{\sf P}}$ local martingale and hence a ${{\sf P}}$ sigma-martingale, so $[M,Z]$ is a ${{\sf P}}$ sigma-martingale. \qed
The main result on integral representation is: \begin{theorem} \label{intrep} Let ${\mathcal F}_0$ be trivial. Let $M^1,\ldots, M^d$ be sigma-martingales on $(\Omega,{\mathcal F},{{\sf P}})$. Then the following are equivalent. \begin{enumerate}[(i)] \item Every bounded martingale admits representation w.r.t. $M^1,\ldots, M^d$. \item Every uniformly integrable martingale admits representation w.r.t. $M^1,\ldots, M^d$. \item Every sigma-martingale admits representation w.r.t. $M^1,\ldots, M^d$. \item ${{\sf P}}$ is an extreme point of the convex set ${\mathbb E}^s(M^1,\ldots, M^d)$. \item $\tilde{{\mathbb E}}^s_{{{\sf P}}}(M^1,\ldots, M^d)=\{{{\sf P}}\}.$ \item ${\mathbb E}^s_{{{\sf P}}}(M^1,\ldots, M^d)=\{{{\sf P}}\}.$ \end{enumerate} \end{theorem} \noindent{\sc Proof : } Since every bounded martingale is uniformly integrable and a uniformly integrable martingale is a sigma-martingale, we have\\ \centerline{ (iii)$\Rightarrow$ (ii) $\Rightarrow$ (i).}
\noindent (i) $\Rightarrow$ (ii) is an easy consequence of Theorem \ref{aztm2}: given a uniformly integrable martingale $Z$ with limit $Z_\infty$, for $n\ge 1$, let us define martingales $Z^n$ by \[Z^n_t={{\sf E}}[Z_\infty\Ind_{\{\lvert Z_\infty\rvert\le n \}}\mid {\mathcal F}_t].\] We take the r.c.l.l. version of the martingale. It is easy to see that $Z^n$ are bounded martingales and in view of (i), $Z^n\in{\mathbb F}(M^1,M^2,\ldots , M^d)$. Moreover, for all $t$, \[{{\sf E}}[\lvert Z^n_t-Z_t\rvert ]\le {{\sf E}}[\lvert Z_\infty\rvert\Ind_{\{\lvert Z_\infty\rvert> n \}}]\] and hence for all $t$, ${{\sf E}}[\lvert Z^n_t-Z_t\rvert ]\rightarrow 0$. Theorem \ref{aztm2} now implies $Z\in{\mathbb F}(M^1,M^2,\ldots , M^d)$. This proves (ii).
\noindent We next prove (ii) $\Rightarrow$ (iii). Let $X$ be a sigma-martingale. In view of Lemma \ref{axl4}, get a uniformly integrable martingale $N$ and a predictable process $\psi$ such that \[X=\int \psi {\hspace{0.6pt}d\hspace{0.1pt}} N.\] By (ii), there exist $Y\in{\mathbb C}(M^1,M^2,\ldots , M^d)$ and $f\in{\mathbb L}(Y)$ such that $N_t=N_0+\int_0^t f_s{\hspace{0.6pt}d\hspace{0.1pt}} Y_s$. Then we have \[X_t=X_0+\int_0^t f_s\psi_s{\hspace{0.6pt}d\hspace{0.1pt}} Y_s\] and thus $X$ admits an integral representation w.r.t. $M^1,\ldots, M^d$.
Suppose (v) holds and suppose ${{\sf Q}}_1,{{\sf Q}}_2$ $\in{\mathbb E}^s(M^1,M^2,\ldots , M^d)$ and ${{\sf P}}=\alpha {{\sf Q}}_1+(1-\alpha){{\sf Q}}_2$ for some $\alpha\in(0,1)$. It follows that ${{\sf Q}}_1,{{\sf Q}}_2$ are absolutely continuous w.r.t. ${{\sf P}}$ and hence ${{\sf Q}}_1, {{\sf Q}}_2\in \tilde{{\mathbb E}}_{{{\sf P}}}^s(M^1,M^2,\ldots , M^d)$. In view of (v), ${{\sf Q}}_1={{\sf Q}}_2={{\sf P}}$ and thus ${{\sf P}}$ is an extreme point of ${\mathbb E}^s(M^1,M^2,\ldots , M^d)$ and so (iv) is true.
Since ${\mathbb E}_{{{\sf P}}}^s(M^1,M^2,\ldots , M^d) \subseteq \tilde{{\mathbb E}}_{{{\sf P}}}^s(M^1,M^2,\ldots , M^d)$, it follows that (v) implies (vi). On the other hand, suppose (vi) is true and ${{\sf Q}}\in\tilde{{\mathbb E}}_{{{\sf P}}}^s(M^1,M^2,\ldots , M^d)$. Then ${{\sf Q}}_1=\frac{1}{2}({{\sf Q}}+{{\sf P}})\in {\mathbb E}_{{{\sf P}}}^s(M^1,M^2,\ldots , M^d)$, so (vi) implies ${{\sf Q}}_1={{\sf P}}$, hence ${{\sf Q}}={{\sf P}}$ and thus (v) holds.
So far we have proved (i) $\Longleftrightarrow$ (ii) $\Longleftrightarrow$ (iii) and (iv) $\Leftarrow$ (v) $\Longleftrightarrow$ (vi). To complete the proof, we will show (iii) $\Rightarrow$ (v) and (iv) $\Rightarrow$ (i).
Suppose that (iii) is true and let ${{\sf Q}}\in\tilde{{\mathbb E}}^s_{{\sf P}}(M^1,M^2,\ldots , M^d)$. Now let $\xi$ be the Radon-Nikodym derivative of ${{\sf Q}}$ w.r.t. ${{\sf P}}$. Let $R$ denote the r.c.l.l. martingale $R_t={{\sf E}}[\xi\mid{\mathcal F}_{t}]$. Since ${\mathcal F}_0$ is trivial, $R_0=1$. In view of property (iii), we can get $Y\in{\mathbb C}(M^1,M^2,\ldots , M^d)$ and a predictable process $f\in{\mathbb L}(Y)$ such that \begin{equation}\label{ax51} R_t=1+\int_0^tf_s{\hspace{0.6pt}d\hspace{0.1pt}} Y_s. \end{equation} Note that \begin{equation}\label{ax51d} [R,R]_t=\int_0^tf_s^2{\hspace{0.6pt}d\hspace{0.1pt}} [Y,Y]_s.\end{equation} Since $M^j$ is a sigma-martingale under ${{\sf Q}}$ for each $j$, it follows that $Y$ is a ${{\sf Q}}$ sigma-martingale. By Lemma \ref{cal1}, this gives that $[Y,R]$ is a ${{\sf P}}$ sigma-martingale and hence \begin{equation}\label{ax51a} V^k_t=\int_0^tf_s\Ind_{\{\lvert f_s\rvert\le k\}}{\hspace{0.6pt}d\hspace{0.1pt}} [Y,R]_s\end{equation} is a ${{\sf P}}$ sigma-martingale. Noting that \[ [Y,R]_t=\int_0^tf_s{\hspace{0.6pt}d\hspace{0.1pt}} [Y,Y]_s\] we see that \begin{equation}\label{ax51b} V^k_t=\int_0^tf^2_s\Ind_{\{\lvert f_s\rvert\le k\}}{\hspace{0.6pt}d\hspace{0.1pt}} [Y,Y]_s.\end{equation} Thus we can get $(0,\infty)$ valued predictable processes $\phi^k$ such that \[U^k_t=\int_0^t\phi^k_s{\hspace{0.6pt}d\hspace{0.1pt}} V^k_s\] is a martingale. But $U^k$ is a non-negative martingale with $U^k_0=0$. As a result $U^k$ is identically equal to 0 and thus so is $V^k$. It then follows that (see \eqref{ax51d}) $[R,R]=0$, which yields that $R$ is identically equal to 1 and so ${{\sf Q}}={{\sf P}}$. Thus $\tilde{{\mathbb E}}^s_{{\sf P}}(M^1,M^2,\ldots , M^d)$ is a singleton. Thus (iii) $\Rightarrow$ (v).
To complete the proof, we will now prove that (iv) $\Rightarrow$ (i). Suppose $M^1,M^2,\ldots , M^d$ are such that ${{\sf P}}$ is an extreme point of ${\mathbb E}^s(M^1,M^2,\ldots , M^d)$. Since $M^j$ is a sigma-martingale under ${{\sf P}}$, we can choose $(0,\infty)$ valued predictable $\phi^j$ such that \[N^j_t=\int_0^t\phi^j_s{\hspace{0.6pt}d\hspace{0.1pt}} M^j_s\] is a uniformly integrable martingale under ${{\sf P}}$ and as seen in Lemma \ref{ayl9}, we then have \[{\mathbb F}(M^1,M^2,\ldots , M^d)={\mathbb F}(N^1,N^2,\ldots ,N^d).\] Suppose (i) is not true. We will show that this leads to a contradiction. So suppose $S$ is a bounded martingale that does not admit representation w.r.t. $ M^1,M^2,\ldots , M^d$, {\em i.e.} $S\not\in {\mathbb F}(M^1,M^2,\ldots , M^d)={\mathbb F}(N^1,N^2,\ldots ,N^d)$. Then for some $T$, \[S_T\not\in {\mathbb K}_T(N^1,N^2,\ldots ,N^d).\] We have proved in Theorem \ref{aztm1} that $ {\mathbb K}_T(N^1,N^2,\ldots ,N^d)$ is closed in ${\mathbb L}^1(\Omega,{\mathcal F}, {{\sf P}})$. Since ${\mathbb K}_T$ is not equal to ${\mathbb L}^1(\Omega, {\mathcal F}_T,{{\sf P}})$, by the Hahn-Banach Theorem, there exists $\xi\in{\mathbb L}^\infty(\Omega, {\mathcal F}_T,{{\sf P}})$, ${{\sf P}}(\xi\neq 0)>0$ such that \[\int \eta\xi {\hspace{0.6pt}d\hspace{0.1pt}}{{\sf P}}=0\;\;\forall \eta\in{\mathbb K}_T.\] Then for all constants $c$, we have \begin{equation}\label{azk44} \int \eta(1+c\xi) {\hspace{0.6pt}d\hspace{0.1pt}}{{\sf P}}=\int\eta{\hspace{0.6pt}d\hspace{0.1pt}}{{\sf P}}\;\;\forall \eta\in{\mathbb K}_T.\end{equation} Since $\xi$ is bounded, we can choose a $c>0$ such that \[{{\sf P}}(c\lvert\xi\rvert<\frac{1}{2})=1.\] Now, let ${{\sf Q}}$ be the measure with density $\eta=(1+c\xi)$. Then ${{\sf Q}}$ is a probability measure. 
Thus \eqref{azk44} yields \begin{equation}\label{azk45} \int \eta{\hspace{0.6pt}d\hspace{0.1pt}}{{\sf Q}}=\int\eta{\hspace{0.6pt}d\hspace{0.1pt}}{{\sf P}}\;\;\forall \eta\in{\mathbb K}_T.\end{equation} For any bounded stopping time $\tau$ and $1\le j\le d$, $N^j_{\tau\wedge T}\in{\mathbb K}_T$ and hence \begin{equation}\label{azk46} {{\sf E}}_{{\sf Q}}[N^j_{\tau\wedge T}]={{\sf E}}_{{\sf P}}[N^j_{\tau\wedge T}]=N^j_0.\end{equation} On the other hand, \begin{equation}\label{azk47}\begin{split} {{\sf E}}_{{\sf Q}}[N^j_{\tau\vee T}]&={{\sf E}}_{{\sf P}}[\eta N^j_{\tau\vee T}]\\ &={{\sf E}}_{{\sf P}}[{{\sf E}}_{{\sf P}}[\eta N^j_{\tau\vee T}\mid {\mathcal F}_T]]\\ &={{\sf E}}_{{\sf P}}[\eta{{\sf E}}_{{\sf P}}[ N^j_{\tau\vee T}\mid {\mathcal F}_T]]\\ &={{\sf E}}_{{\sf P}}[\eta N^j_T]\\ &={{\sf E}}_{{\sf Q}}[ N^j_T]\\ &=N^j_0,\end{split}\end{equation} where we have used the facts that $\eta$ is ${\mathcal F}_T$ measurable, $N^j$ is a ${{\sf P}}$ martingale and \eqref{azk46}. Now \[ {{\sf E}}_{{\sf Q}}[N^j_{\tau}]={{\sf E}}_{{\sf Q}}[N^j_{\tau\wedge T}]+{{\sf E}}_{{\sf Q}}[N^j_{\tau\vee T}]-{{\sf E}}_{{\sf Q}}[N^j_{T}]=N^j_0.\] Thus $N^j$ is a ${{\sf Q}}$ martingale and since \[M^j_t=\int_0^t\frac{1}{\phi^j_s}{\hspace{0.6pt}d\hspace{0.1pt}} N^j_s\] it follows that $M^j$ is a ${{\sf Q}}$ sigma-martingale. Thus ${{\sf Q}}\in {\mathbb E}^s(M^1,\ldots, M^d)$. Similarly, if $\tilde{{{\sf Q}}}$ is the measure with density $\eta=(1-c\xi)$, we can prove that $\tilde{{{\sf Q}}}\in {\mathbb E}^s(M^1,\ldots, M^d)$. Since ${{\sf P}}=\frac{1}{2}({{\sf Q}}+\tilde{{{\sf Q}}})$, this contradicts the assumption that ${{\sf P}}$ is an extreme point of ${\mathbb E}^s(M^1,\ldots, M^d)$. Thus (iv) $\Rightarrow$ (i). This completes the proof. \qed \section{Completeness of Markets} Let the (discounted) prices of $d$ securities be given by $X^1,\ldots ,X^d$. We assume that $X^j$ are semimartingales and that they satisfy the property NFLVR so that an ESMM exists. 
\begin{theorem} \label{sftap} {\bf The Second Fundamental Theorem Of Asset Pricing} \\Let $X^1,\ldots ,X^d$ be semimartingales on $(\Omega,{\mathcal F},{{\sf P}})$ such that ${\mathbb E}^s_{{\sf P}}(X^1,\ldots, X^d)$ is non-empty. Then the following are equivalent: \begin{enumerate}[(a)] \item For all $T<\infty$ and all ${\mathcal F}_T$ measurable random variables $\xi$ bounded by some $K$, there exist $g^j\in{\mathbb L}(X^j)$ defining \begin{equation}\label{aza1} Y_t=\sum_{j=1}^d\int_0^tg^j_s{\hspace{0.6pt}d\hspace{0.1pt}} X^j_s,\end{equation} a constant $c$, and $f\in{\mathbb L}(Y)$ such that $\lvert \int_0^tf_s{\hspace{0.6pt}d\hspace{0.1pt}} Y_s\rvert \le 2K$ for all $t$ and \begin{equation}\label{aza2} \xi=c+\int_0^Tf_s{\hspace{0.6pt}d\hspace{0.1pt}} Y_s.\end{equation}
\item The set ${\mathbb E}^s_{{\sf P}}(X^1,\ldots, X^d)$ is a singleton. \end{enumerate} \end{theorem} \noindent{\sc Proof : } First suppose that ${\mathbb E}^s_{{\sf P}}(X^1,\ldots, X^d)=\{{{\sf Q}}\}.$ Consider the martingale $M_t={{\sf E}}_{{\sf Q}}[\xi\mid{\mathcal F}_t]$. Note that $M$ is bounded by $K$. In view of the equivalence of (i) and (vi) in Theorem \ref{intrep}, we get that $M$ admits a representation w.r.t. $X^1,\ldots, X^d$: thus we get $g^j\in{\mathbb L}(X^j)$ and $f\in{\mathbb L}(Y)$, where $Y$ is given by \eqref{aza1}, with \[M_t=M_0+\int_0^tf_s{\hspace{0.6pt}d\hspace{0.1pt}} Y_s.\] Since ${\mathcal F}_0$ is trivial, $M_0$ is a constant. Since $M$ is bounded by $K$, it follows that $\int_0^tf_s{\hspace{0.6pt}d\hspace{0.1pt}} Y_s$ is bounded by $2K$. Thus (b) implies (a).
Now suppose (a) is true. Let ${{\sf Q}}$ be an ESMM. Let $M$ be a bounded ${{\sf Q}}$-martingale. We will show that $M\in{\mathbb F}(X^1,\ldots, X^d)$, {\em i.e.} $M$ admits integral representation w.r.t. $X^1,\ldots, X^d$. In view of Lemmas \ref{azl1} and \ref{ayl9}, it suffices to show that for each $T<\infty$, $N\in{\mathbb F}(X^1,\ldots, X^d)$, where $N$ is defined by $N_t=M_{t\wedge T}$.
Let $\xi=N_T$. Then in view of assumption (a), we have \[\xi=c+\int_0^Tf_s{\hspace{0.6pt}d\hspace{0.1pt}} Y_s\] with $Y$ given by \eqref{aza1}, a constant $c$ and $f\in{\mathbb L}(Y)$ such that $U_t=\int_0^tf_s{\hspace{0.6pt}d\hspace{0.1pt}} Y_s$ is bounded. Since $U$ is a bounded sigma-martingale under ${{\sf Q}}$, it follows from Corollary \ref{azc1} that $U$ is a ${{\sf Q}}$-martingale. It follows that \[N_t=c+\int_0^tf_s{\hspace{0.6pt}d\hspace{0.1pt}} Y_s, \;\;0\le t\le T.\] Thus $N\in {\mathbb F}(X^1,\ldots, X^d)$.
We have proved that (i) in Theorem \ref{intrep} holds with ${{\sf Q}}$ in place of ${{\sf P}}$, and hence (vi) holds, {\em i.e.} the ESMM is unique. \qed
\appendix \begin{center}
{\bf APPENDIX}
\end{center}
\renewcommand{\theequation}{A.\arabic{equation}}
\setcounter{equation}{0}
For a non-negative definite symmetric matrix $C$, the eigenvalue-eigenvector decomposition gives us a representation \begin{equation}\label{apx1}C=B^TDB\end{equation} \begin{equation}\label{apx2}\mbox{$B$ is an orthogonal matrix and $D$ is a diagonal matrix.}\end{equation} This decomposition is not unique, but for each non-negative definite symmetric matrix $C$, the set of pairs $(B,D)$ satisfying \eqref{apx1}-\eqref{apx2} is compact. Thus it admits a measurable selection; in other words, there exists a Borel mapping $\theta$ such that $\theta(C)=(B,D)$ where $B,C,D$ satisfy \eqref{apx1}-\eqref{apx2}. (See \cite{Graf} or Corollary 5.2.6 of \cite{SMS}.)
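As a quick numerical illustration of \eqref{apx1}-\eqref{apx2} (a sketch of ours, not part of the text), the decomposition can be computed with a standard symmetric eigen-solver: the rows of $B$ below are orthonormal eigenvectors and $D$ carries the non-negative eigenvalues.

```python
import numpy as np

# Sketch: computing a representation C = B^T D B for a symmetric
# non-negative definite matrix C via np.linalg.eigh.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
C = A @ A.T                             # symmetric non-negative definite by construction

eigvals, eigvecs = np.linalg.eigh(C)    # eigh is the solver for symmetric matrices
D = np.diag(eigvals)                    # diagonal, with non-negative eigenvalues
B = eigvecs.T                           # orthogonal: rows are orthonormal eigenvectors

assert np.allclose(B.T @ D @ B, C)      # the representation C = B^T D B
assert np.allclose(B @ B.T, np.eye(3))  # B is orthogonal
```

The non-uniqueness mentioned in the text shows up here as the freedom to permute the eigenvalues and flip the signs of the eigenvectors.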
Let ${\mathcal D}$ be a $\sigma$-field on a non-empty set $\Gamma$ and for $1\le i,j\le d$, $\lambda_{ij}$ be $\sigma$-finite signed measures on $(\Gamma, {\mathcal D})$ such that\\ \centerline{ for all $E\in{\mathcal D}$, the matrix $((\lambda_{ij}(E)))$ is a symmetric non-negative definite matrix.} Let $\theta(E)=\sum_{i=1}^d\lambda_{ii}(E)$. Then for $1\le i,j\le d$ there exists a version $c^{ij}$ of the Radon-Nikodym derivative $\frac{d\lambda_{ij}}{d\theta}$ such that for all $\gamma\in\Gamma$, the matrix $((c^{ij}(\gamma)))$ is non-negative definite.
To see this, for $1\le i\le j\le d$ let $f^{ij}$ be a version of the Radon-Nikodym derivative $\frac{d\lambda_{ij}}{d\theta}$ and let $f^{ji}=f^{ij}$. For rationals $r_1,r_2,\ldots , r_d$, let \[A_{r_1,r_2,\ldots , r_d}=\{\gamma:\sum_{ij}r_ir_jf^{ij}(\gamma)< 0\}.\]
Then $\theta(A_{r_1,r_2,\ldots , r_d})=0$ and hence $\theta(A)=0$ where
\[A=\cup\{A_{r_1,r_2,\ldots , r_d}:\; r_1,r_2,\ldots , r_d\text{ rationals}\}.\]
The required version is now given by \[c^{ij}(\gamma)=f^{ij}(\gamma)\Ind_{A^c}(\gamma).\]
\noindent {\it Address}: H1 Sipcot IT Park, Siruseri, Kelambakkam 603103, India\\ {\it E-mail}: [email protected], [email protected]
\end{document}
\begin{document}
\title{TSInterpret: A unified framework for time series interpretability}
\author{\name Jacqueline Höllig \email [email protected]\\
\addr Information Process Engineering\\
FZI Forschungszentrum Informatik\\
76131 Karlsruhe, Germany
\AND
\name Cedric Kulbach \email [email protected]\\
\addr Information Process Engineering\\
FZI Forschungszentrum Informatik\\
76131 Karlsruhe, Germany
\AND
\name Steffen Thoma \email [email protected]\\
\addr Information Process Engineering\\
FZI Forschungszentrum Informatik\\
76131 Karlsruhe, Germany }
\maketitle
\begin{abstract} With the increasing application of deep learning algorithms to time series classification, especially in high-stakes scenarios, the relevance of interpreting those algorithms becomes key. Although research in time series interpretability has grown, accessibility for practitioners is still an obstacle. Interpretability approaches and their visualizations are diverse in use, lacking a unified API or framework. To close this gap, we introduce TSInterpret\footnote{\url{https://github.com/fzi-forschungszentrum-informatik/TSInterpret}}, an easily extensible open-source Python library for interpreting predictions of time series classifiers that combines existing interpretation approaches into one unified framework. The library (i) features state-of-the-art interpretability algorithms, (ii) exposes a unified API enabling users to work with explanations in a consistent way, and (iii) provides suitable visualizations for each explanation. \end{abstract} \begin{keywords}
Time Series; Interpretability; Feature Attribution; Counterfactual Explanation \end{keywords}
\section{Introduction}
Although time series data are omnipresent in industry and daily life, deep learning methods have only been applied to time series data in the last decade. Before that, time series classification (TSC) primarily focused on classification based on discriminatory features (e.g., learning based on the whole time series, intervals, or shapelets (\cite{bagnall_great_2017})). Unlike deep methods, these approaches do not have the significant drawback of being ``black boxes''. However, with the increasing accuracy of deep learning models and their ability to cope with vast amounts of data, their application to time series classification increased, leading to a need for interpretability, especially in high-risk settings.
Various interpretability methods for time series classification are available (\cite{rojat_explainable_2021}). However, the usage of those methods is not yet standardized: the proposed methods often lack a) open code (e.g., \cite{siddiqui_tsinsight_2021}), b) an easy-to-use interface (e.g., \cite{ismail_benchmarking_2020}), or c) visualization (e.g., \cite{guilleme_agnostic_2019}), making the application of those methods inconvenient and thereby hindering the usage of deep learning methods in safety-critical scenarios. \par Although unifying frameworks (e.g., tf-explain (\cite{meudec_raphael_tf-explain_2021}), Alibi (\cite{klaise_alibi_2021}), or captum (\cite{kokhlikyan_captum_2020})) have already been developed for various data types (tabular, image, text) and algorithms (e.g., GradCam (\cite{selvaraju_grad-cam_2020}), Counterfactuals (\cite{mothilal_explaining_2020}), SHAP (\cite{lundberg_unified_2017})), an interpretability framework for time-series data is still missing. Due to the different structures and properties of time-ordered data, most approaches and frameworks are not directly applicable to time series (\cite{ismail_benchmarking_2020}). Further, time series are not intuitive for human understanding (\cite{siddiqui_tsviz_2019}) and need additional visualizations. Therefore, we propose TSInterpret, a framework implementing interpretability algorithms for time series classification. In this work, we provide \begin{itemize}
\item a review of existing interpretability libraries,
\item a unified framework for time series interpretability,
\item and unified visualizations for the implemented algorithms. \end{itemize}
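A unified API of the kind listed above can be sketched as follows; the class and method names here are our own illustration and not necessarily TSInterpret's actual interface.

```python
from abc import ABC, abstractmethod

# Illustrative sketch of a unified interpretability API: every method
# implements the same explain/plot contract, so users can swap algorithms
# without changing their calling code.
class Explainer(ABC):
    def __init__(self, model):
        self.model = model

    @abstractmethod
    def explain(self, item):
        """Return an explanation for a single time series instance."""

    @abstractmethod
    def plot(self, item, explanation):
        """Render a visualization suited to this explanation type."""

class MeanBaselineSaliency(Explainer):
    """Toy method: absolute deviation from the series mean as 'saliency'."""
    def explain(self, item):
        mean = sum(item) / len(item)
        return [abs(v - mean) for v in item]

    def plot(self, item, explanation):
        # Stand-in for a real chart: pair each value with its saliency score.
        return list(zip(item, explanation))

explainer = MeanBaselineSaliency(model=None)
saliency = explainer.explain([0.0, 0.0, 4.0, 0.0])  # → [1.0, 1.0, 3.0, 1.0]
```

The point of the contract is that a counterfactual or feature-attribution method would subclass the same base class, each pairing its `explain` output with a matching `plot`.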
\section{Need for Interpretability in Time Series Classification}\label{sec:Motivation}
Temporal data is ubiquitous and encountered in many real-world applications ranging from electronic health records (\cite{rajkomar_scalable_2018}) to cyber security (\cite{susto_time-series_2018}). Although almost omnipresent, time series classification has been considered one of the most challenging problems in data mining for the last two decades (\cite{yang_10_2006}; \cite{esling_time-series_2012}). With the rising data availability and accessibility (e.g., provided by the UCR / UEA archive (\cite{bagnall_great_2017, dau_ucr_2019})), hundreds of time series classification algorithms have been proposed. Although deep learning methods have been successful in the fields of Computer Vision (CV) and Natural Language Processing (NLP) for almost a decade, their application to time series has only occurred in the past few years (e.g., \cite{fawaz_deep_2019, rajkomar_scalable_2018,susto_time-series_2018,ruiz_great_2021}). Deep learning models have been shown to achieve state-of-the-art results on time series classification (e.g., \cite{fawaz_deep_2019}). However, those methods are black boxes due to their complexity, which limits their application in high-stakes scenarios (e.g., in medicine or autonomous driving), where user trust and understandability of the decision process are crucial. \par Although much work has been done on interpretability in CV and NLP, most developed approaches are not directly applicable to time series data. The time component impedes the usage of existing methods (\cite{ismail_benchmarking_2020}). 
Thus, increasing effort is put into adapting existing methods to time series (e.g., LEFTIST based on SHAP / LIME (\cite{guilleme_agnostic_2019}), Temporal Saliency Rescaling for saliency methods (\cite{ismail_benchmarking_2020}), Counterfactuals (\cite{ates_counterfactual_2021, sanchez-ruiz_instance-based_2021})) and into developing new methods specifically for time series interpretability (e.g., TSInsight based on autoencoders (\cite{siddiqui_tsinsight_2021}), TSViz for interpreting CNNs (\cite{siddiqui_tsviz_2019})). For a survey of time series interpretability, please refer to \cite{rojat_explainable_2021}. \par In contrast to images or textual data, time series data cannot be understood intuitively and instinctively by humans. Both uni- and multivariate time series therefore have an unintuitive nature, lacking an understanding at first sight (\cite{siddiqui_tsviz_2019}). Hence, providing suitable visualizations of time series interpretability becomes crucial. \par
For instance, health data tasks like cardiac disease detection from electrocardiogram data (ECG5000, \cite{dau_ucr_2019}) or seizure prediction from the data of a tri-axial accelerometer (Epilepsy, \cite{bagnall_uea_2018}) are examples of the need for interpretability in time series data. These classification tasks lie in a sensitive field directly affecting people's lives. Therefore, decisions need to be taken carefully, based on solid evidence. Wrong medications or unidentified diseases can have long-lasting effects on a patient's health. To allow physicians to make such data-driven decisions with the help of machine learning, interpretations of black-box models are crucial. It is not only relevant whether a patient has epilepsy or a cardiac disease, but also why the model predicts so and which indications support that prediction. Further, implementing a non-interpretable machine learning model in medicine raises legal and ethical issues (\cite{amann_explainability_2020}).
\section{Related Work}\label{sec:RelatedWork} \begin{table}[tbh]
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{Name} & \multirow{2}{*}{Backend} & \multicolumn{4}{c}{Data}&\multicolumn{2}{|c|}{Task}&\multicolumn{4}{c|}{Scope}&\multicolumn{3}{c|}{Output}\\
\cline{3-15}
& &\rot{Tabular}& \rot{Text}& \rot{Image}& \rot{Time Series}&\rot{Classification} &\rot{Regression} & \rot{Model Agnostic} & \rot{Model Specific} & \rot{Local} & \rot{Global} &\rot{FA} & \rot{IB} & \rot{RB}\\
\hline
AIX360 (\cite{arya_one_2019}) & PYT,TF, BB & X&X&X& & X&X & X &X&X&X&X&X&X\\
\hline
ALIBI (\cite{klaise_alibi_2021}) & BB &X&X&X& & X& X &X&X&X&X&X&X&\\
\hline
Anchor (\cite{ribeiro_anchors_2018}) & BB & X&X&&&&&X&&X&&&X&\\
\hline
Captum (\cite{kokhlikyan_captum_2020}) &PYT &X& X& X& & X&X & X&X&X&&X&&\\
\hline
carla (\cite{pawelczyk_carla_2021}) & PYT, TF, SK & X&&&& X&X & X&&X&&&X&\\
\hline
DALEX (\cite{baniecki_dalex_2021}) & SK,TF & X&&&& X&X&X&&X&X&X&&\\
\hline
DeepExplain (\cite{ancona_towards_2018}) & TF & &X&X&& X&X& & & X&X&X&&\\
\hline
Dice (\cite{mothilal_explaining_2020}) & PYT, TF, BB & X&&&& X&X & X&&X&&&X&\\
\hline
ELI5 (\cite{eli5-org_eli5_nodate}) & BB & X&X&X&&X& & X& &X&&X&&\\
\hline
H2O (\cite{hall_machine_nodate})& - &X&&&X& X&X&X&X&X&X&X&&\\
\hline
iNNvestigate (\cite{alber_innvestigate_2019})&TF& &&X& &X& & &X&X&X&X&&\\
\hline
InterpretMl (\cite{nori_interpretml_2019}) & BB, WB & X&&& &X&X &X&X&X&X&X&&\\
\hline
Lucid (\cite{tensorflow_lucid_nodate}) & TF & &&X& & X&X &&X&X&X&X&&\\
\hline
OmniXAI (\cite{yang_omnixai_2022}) & PYT, TF, SK &X &X&X&X & X&X &X&X&X&X&X&X&\\ \hline
pytorch-cnn-visualizations (\cite{ozbulak_pytorch_2019})& PYT &&&X& & X& &&X&X&&X&&\\
\hline
SHAP (\cite{lundberg_unified_2017}) & PYT, TF, BB & X&X&X& &X&X&X&X&X&X&X&&\\
\hline
Skater (\cite{oracle_skater_nodate})& BB & X& X& X&& X&X & &&X&X&X&X&\\
\hline
Tf-Explain (\cite{meudec_raphael_tf-explain_2021}) & TF& &&X& &X& &&X&X&&X&&\\
\hline
TorchRay (\cite{fong_understanding_2019}) & PYT &&&X&&X& &&X&&&X&&\\
\hline
What-if (\cite{wexler_what-if_2019}) & TF&X&X&X&& X&X&X&&&&&X&\\
\hline
wildboar (\cite{samsten_isaksamstenwildboar_2020})&-&&&& X & X&X&&X&X&&&X&\\
\hline
\end{tabular}}
\caption{Overview of recent explanation libraries and their time series capabilities. \texttt{BB}: Black-Box, \texttt{TF}: Tensorflow, \texttt{PYT}: PyTorch, \texttt{SK}: scikit-learn}
\label{tab:my_label} \end{table} Numerous open-source tools for machine learning have evolved in the last few years. In the following, we focus on libraries implementing various post-hoc interpretability methods installable via PyPI. For the summary in \Cref{tab:my_label}, only active libraries, i.e., libraries with some development activity in the past 12 months on GitHub, are included. The comparison is conducted according to the scope of the included explanations (Model Agnostic vs. Model Specific, Global vs. Local) and library properties (supported model libraries, data types, and tasks). Model-agnostic methods apply to any classification algorithm, whereas model-specific algorithms only work on a subset of classification algorithms (e.g., CAM on Convolutional Neural Networks). Global interpretability methods interpret the model's overall decision-making process, while local interpretability focuses on the interpretation of a single instance and the prediction of that instance\footnote{More information on the taxonomy is available in \cite{rojat_explainable_2021}.}. Further, the return type of the explanation methods is taken into account. Feature Attribution methods (FA) return a per-feature attribution score based on the feature’s contribution to the model’s output (e.g., GradCam (\cite{selvaraju_grad-cam_2020}) or SHAP (\cite{lundberg_unified_2017})), instance-based methods (IB) calculate a subset of relevant features that must be present to retain, or changed to alter, the prediction of a given model (e.g., counterfactuals (\cite{mothilal_explaining_2020}) or anchors (\cite{ribeiro_anchors_2018})), while rule-based methods (RB) derive rules. \par
Except for the libraries wildboar (\cite{samsten_isaksamstenwildboar_2020}), OmniXAI (\cite{yang_omnixai_2022}), and H2O (\cite{hall_machine_nodate}), no library provides interpretability methods for time series. Moreover, the support of OmniXAI (\cite{yang_omnixai_2022}) is restricted to anomaly detection on univariate time series and includes only non-time-series-specific interpretability methods. Wildboar (\cite{samsten_isaksamstenwildboar_2020}) focuses on temporal machine learning (classification, regression, and anomaly detection) and only provides a limited number of (counterfactual) interpretability methods as additional features. A unified framework for time series interpretability, comparable to captum or ALIBI, enabling easy-to-use interpretability is still missing. \section{Library Design}
The API design follows the scikit-learn design paradigms: consistency, sensible defaults, composition, nonproliferation of classes, and inspection (\cite{buitinck_api_2013}). \begin{figure}
\caption{Structure of TSInterpret. }
\label{fig:Architecture}
\end{figure} \Cref{lst:InterpretabilityExample} shows the workflow of the library in a coding sample. Given a trained machine learning model and an instance to be classified: First, the desired interpretability method is imported (line 1) and instantiated (line 2), followed by explaining the instance (line 3) and, finally, the generation of the plot (line 4). \begin{lstlisting}[style=mypython,caption= API Usage Example., label=lst:InterpretabilityExample]
from TSInterpret.InterpretabilityModels.Saliency.TSR import TSR
int_mod = TSR(model, item.shape[-1], item.shape[-2])
exp = int_mod.explain(item, labels = label, TSR = True)
int_mod.plot(item, exp)
\end{lstlisting} \begin{description}
\item[Consistency] All implemented objects share a consistent interface.
Every interpretability method inherits from the interface \texttt{InterpretabilityBase} to ensure that all methods contain a method \texttt{explain} and a \texttt{plot} function.
The \texttt{plot} function is implemented on the level below, based on the output structure provided by the interpretability algorithm, to offer a unified visualization experience (e.g., in the case of Feature Attribution, the \texttt{plot} function visualizes a heatmap on the original sample). If necessary, those plots are refined by the Mechanism layer to ensure a suitable representation, as the default visualization can otherwise be misinterpreted (e.g., the heatmap used in the \texttt{plot} function of \texttt{InterpretabilityBase} allows positive and negative values, while \texttt{TSR} is scaled to $[0,1]$; using the same color pattern for both scales would carry a high risk of misinterpreting results when comparing \texttt{TSR} with \texttt{LEFTIST}). The \texttt{explain} function is implemented on the method level.
\item[Sensible Defaults] TSInterpret provides reasonable defaults for most parameters by adopting, for each method, the default parameterization from the original paper. Those parameters can easily be changed by providing alternative values during model instantiation.
\item[Composition] Many interpretability methods for time series classification are based on already existing methods for tabular, image, or text data (e.g., \cite{ismail_benchmarking_2020} is based, amongst others, on \cite{lundberg_unified_2017}). Whenever feasible, existing implementations of such algorithms are used (e.g., SHAP (\cite{lundberg_unified_2017}), captum (\cite{kokhlikyan_captum_2020}), or tf-explain (\cite{meudec_raphael_tf-explain_2021})).
\item[Nonproliferation of classes] TSInterpret implements the interpretability algorithms as custom classes, while datasets, instances, and results are represented as NumPy arrays, lists, or tuples instead of classes. For instance, the counterfactual methods return a tuple of the counterfactual time series and the label (list, int). Hyperparameters are regular strings or numbers.
\item[Inspection] TSInterpret stores and exposes the parameters of the interpretability algorithms as public attributes. In some methods, parameters have a significant impact on the obtained results. Making those parameters publicly available through attributes facilitates experimenting with hyperparameters.
\end{description}
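The design principles above can be condensed into a minimal sketch of the extension pattern. The names \texttt{InterpretabilityBase}, \texttt{explain}, and \texttt{plot} are taken from the description above, while the stand-in base class body and the toy method \texttt{MeanBaselineAttribution} are hypothetical and only mimic the contract, not the actual TSInterpret code.
\begin{lstlisting}[style=mypython,caption=Illustrative sketch of the extension contract (hypothetical method).]
from abc import ABC, abstractmethod
import numpy as np

class InterpretabilityBase(ABC):
    # every interpretability method must expose explain() and plot()
    def __init__(self, model):
        self.model = model  # public attribute (Inspection principle)

    @abstractmethod
    def explain(self, item):
        ...

    def plot(self, item, exp):
        # default visualization; refined on the layers below
        print("item", item.shape, "explanation", exp.shape)

class MeanBaselineAttribution(InterpretabilityBase):
    # hypothetical feature-attribution method: the score of time
    # step t is the output change when t is replaced by the mean
    def explain(self, item):
        base = self.model(item)
        scores = np.zeros_like(item, dtype=float)
        for t in range(item.shape[0]):
            perturbed = item.copy()
            perturbed[t] = item.mean()
            scores[t] = abs(base - self.model(perturbed))
        return scores  # plain NumPy array (Nonproliferation principle)
\end{lstlisting}
A method plugged in this way automatically fits the import/instantiate/explain/plot workflow of the library.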
\section{Algorithm Overview} As discussed in \cref{sec:RelatedWork}, our framework provides support for various time series interpretability types, resulting in interpretations and visualizations for a variety of use cases. The current version of the library includes the interpretability algorithms listed in \Cref{tab:TSInterpret}. Their implementation in TSInterpret is based on code provided by the authors of the algorithms, which was adapted to the framework and extended to ensure availability on multiple model backends (e.g., PyTorch or TensorFlow). \begin{table}[!bth]
\centering
\begin{tabular}{|l|cccc|}
\hline
\textbf{Method} & \textbf{Model} & \textbf{Explanations} & \textbf{Type} & \textbf{Dataset} \\ \hline
NUN-CF (\cite{sanchez-ruiz_instance-based_2021}) & TF, PYT, SK & IB & uni & y\\
CoMTE (\cite{ates_counterfactual_2021}) & TF, PYT,SK & IB &multi&y\\
LEFTIST (\cite{guilleme_agnostic_2019}) &TF, PYT, SK & FA &uni&y\\
TSR (\cite{ismail_benchmarking_2020}) & TF, PYT & FA&multi& n\\ \hline
\end{tabular}
\caption{Interpretability methods implemented in TSInterpret. \texttt{TF}: Tensorflow, \texttt{PYT}: PyTorch, \texttt{SK}: scikit-learn; \texttt{FA}: Feature Attribution, \texttt{IB}: Instance-Based; \texttt{uni}/\texttt{multi}: univariate/multivariate time series}
\label{tab:TSInterpret} \end{table} \begin{description}
\item[NUN-CF]\footnote{\url{https://github.com/e-delaney/Instance-Based_CFE_TSC}} \cite{sanchez-ruiz_instance-based_2021} proposed using the nearest neighbors from the dataset belonging to a different class as a native guide to generate counterfactuals. They propose three options for transforming the original time series with this native guide: the plain native guide, the native guide with barycentering, and a transformation based on the native guide and class activation mapping.
\item[CoMTE]\footnote{\url{https://github.com/peaclab/CoMTE}} \cite{ates_counterfactual_2021} proposed CoMTE as a perturbation-based approach to multivariate time series counterfactuals. The goal is to exchange the smallest possible number of features with reference data by applying random-restart hill climbing to obtain a different classification.
\item[LEFTIST]\footnote{\url{https://www.dropbox.com/s/y1xq5bhpf0irg2h/code_LEFTIST.zip?dl=0}} Agnostic Local Explanation for Time Series Classification by \cite{guilleme_agnostic_2019} adapted LIME to time series classification and proposed prefixed shapelets as the interpretable components. Each shapelet is a non-overlapping subsection of the original time series with a prefixed length. Feature importance is provided on a per-shapelet basis.
\item[TSR]\footnote{\url{https://github.com/ayaabdelsalam91/TS-Interpretability-Benchmark}} Temporal Saliency Rescaling (\cite{ismail_benchmarking_2020}) first calculates the importance of each time step and then the importance of each feature, based on different saliency methods, both back-propagation-based and perturbation-based. We refer the reader to our code documentation for a complete list of implemented methods. The implementation in TSInterpret is based on tf-explain (\cite{meudec_raphael_tf-explain_2021}), SHAP (\cite{lundberg_unified_2017}), and captum (\cite{kokhlikyan_captum_2020}). \end{description}
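To make the shapelet idea concrete, the following sketch scores fixed-length, non-overlapping sections of a series by a simple mean perturbation. It only illustrates the notion of interpretable components; the actual LEFTIST algorithm fits a LIME-style surrogate model instead, and the function name is ours.
\begin{lstlisting}[style=mypython,caption=Per-segment importance sketch (illustrative only).]
import numpy as np

def segment_importance(series, predict, seg_len):
    # score each non-overlapping segment by the prediction change
    # when the segment is flattened to its own mean value
    base = predict(series)
    scores = []
    for s in range(len(series) // seg_len):
        perturbed = series.copy()
        lo, hi = s * seg_len, (s + 1) * seg_len
        perturbed[lo:hi] = perturbed[lo:hi].mean()
        scores.append(base - predict(perturbed))
    return np.array(scores)  # one signed score per segment
\end{lstlisting}
A positive score marks a segment whose perturbation lowers the model output, i.e., a section supporting the current prediction.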
\begin{figure}
\caption{NUN-CF.}
\label{fig:Nun-Cf}
\caption{LEFTIST.}
\label{fig:Leftist}
\caption{Interpretations obtained with TSInterpret for the univariate ECG5000 dataset.}
\label{fig:Vis_univariate}
\end{figure}
\begin{figure}
\caption{CoMTE.}
\label{fig:ates}
\caption{TSR.}
\label{fig:Ismail}
\caption{Interpretations obtained with TSInterpret for the multivariate Epilepsy dataset.}
\label{fig:Vis_multivariate}
\end{figure} \par Returning to the use case described in \cref{sec:Motivation}, we train a 1D-conv ResNet on ECG5000 and Epilepsy. \Cref{fig:Vis_univariate} and \Cref{fig:Vis_multivariate} show interpretations obtained with TSInterpret for this 1D-conv ResNet on the first item of the test set of the univariate ECG5000 dataset (\cref{fig:Vis_univariate}) and the multivariate Epilepsy dataset (\cref{fig:Vis_multivariate}). With the help of the interpretations shown in \Cref{fig:Vis_univariate}, a physician can gather evidence on the functionality of the ResNet classification on ECG5000. The feature attribution method LEFTIST (\cref{fig:Leftist}) identifies sections with positive/negative influence on the current prediction. In this case, the first section of the time series has the largest impact on the current prediction being a normal electrocardiogram. If a physician wants to know why the electrocardiogram shows a normal instance and not an abnormal beat, conclusions can be drawn from the counterfactual approach NUN-CF. Compared to the original time series (blue in \cref{fig:Nun-Cf}), classified as normal, the classification changes if the drop at the beginning is lower and the drop at the end is replaced with a rise (pink line in \cref{fig:Nun-Cf}). \Cref{fig:Vis_multivariate} shows interpretations for the Epilepsy dataset. Suppose a physician wants to know why the original instance is predicted as walking (blue) instead of running. In such a case, the practitioner applies CoMTE to generate a counterfactual in the direction of the class running, resulting in a different acceleration of the x-axis in the first 200 time steps (see \cref{fig:ates}). If the physician is only interested in seeing the most important features, TSR (\cref{fig:Ismail}) can be applied. With the help of interpretability, a physician can gather evidence for the predicted classification and gain trust in the system (\cite{amann_explainability_2020}).
In general, the visualizations for the counterfactual approaches CoMTE and NUN-CF have a similar style, although NUN-CF visualizes univariate and CoMTE multivariate data. Note that CoMTE only visualizes the changed features. For the feature attribution methods LEFTIST and TSR, the visualizations have similar styles but a different color map. This is necessary, as LEFTIST returns positive and negative influences (range $[-1,1]$), while TSR returns normalized time-slice and feature importances (range $[0,1]$).\par The easy-to-use interface of TSInterpret allows the out-of-the-box usage of interpretability models on time series data for different use cases, time series types, and classification models. Further, the framework can be easily extended by inheriting from either \texttt{InterpretabilityBase} or one of the more customized classes (e.g., \texttt{Feature Attribution} or \texttt{InstanceBased}). All classes below the Output layer also come with a default plot function to match the visualizations obtained by the already implemented algorithms.
\section{Outlook} TSInterpret provides a cross-backend unified API for the interpretability of time series classification, enabling interpretation generation with just three lines of code. In the first phase of development, TSInterpret provides four interpretability algorithms for both uni- and multivariate time series classification and functions for visualizing the interpretation results. Moreover, the framework is easily extensible by inheritance and provides a simple API design. In the future, we will include support for additional interpretability algorithms and time series prediction, as well as metrics, to progress toward an easy-to-use benchmarking tool for time series interpretability. In order to focus the research and application of time series models not only on performance but also on understanding the model in a practical environment, a further development step is to integrate TSInterpret into the landscape of existing frameworks and thus establish it as an easily usable standard.
\section*{Acknowledgments}{This work was carried out with the support of the German Federal Ministry of Education and Research (BMBF) within the project "MetaLearn" (Grant 02P20A013).}
\end{document}
\begin{document}
\newcommand{\REV}[1]{\textbf{\color{blue}[[#1]]}}
\newcommand{\GREEN}[1]{\textbf{\color{green}#1}} \newcommand{\RED}[1]{\textrm{\color{red}#1}} \newcommand{\rev}[1]{{\color{blue}#1}}
\newcommand{\andy}[1]{ } \newcommand{\bmsub}[1]{\mbox{\boldmath\scriptsize $#1$}}
\def\bra#1{\langle #1 |}
\def\ket#1{| #1 \rangle}
\title{Greenberger-Horne-Zeilinger states and few-body Hamiltonians}
\author{Paolo Facchi} \affiliation{Dipartimento di Matematica and MECENAS, Universit\`a di Bari, I-70125 Bari, Italy} \affiliation{INFN, Sezione di Bari, I-70126 Bari, Italy}
\author{Giuseppe Florio} \affiliation{Dipartimento di Fisica and MECENAS, Universit\`a di Bari, I-70126 Bari, Italy} \affiliation{INFN, Sezione di Bari, I-70126 Bari, Italy}
\author{Saverio Pascazio} \affiliation{Dipartimento di Fisica and MECENAS, Universit\`a di Bari, I-70126 Bari, Italy} \affiliation{INFN, Sezione di Bari, I-70126 Bari, Italy}
\author{Francesco V. Pepe} \affiliation{Dipartimento di Fisica and MECENAS, Universit\`a di Bari, I-70126 Bari, Italy} \affiliation{INFN, Sezione di Bari, I-70126 Bari, Italy}
\begin{abstract} The generation of Greenberger-Horne-Zeilinger (GHZ) states is a crucial problem in quantum information. We derive general conditions for obtaining GHZ states as eigenstates of a Hamiltonian. In general, degeneracy cannot be avoided if the Hamiltonian contains $m$-body interaction terms with $m \leq 2$ and a number of qubits strictly larger than 4. As an application, we explicitly construct a two-body 4-qubit Hamiltonian and a three-body 5-qubit Hamiltonian that exhibit a GHZ state as a nondegenerate eigenstate. \end{abstract}
\pacs{03.67.Mn, 03.65.Ud, 75.10.Dg}
\maketitle
The use of quantum mechanics for improving tasks such as communication, computation and cryptography \cite{nielsen} is based on the availability of highly entangled states \cite{h4,entanglement,entanglementrev,adessorev}. It is therefore of primary importance to obtain reliable strategies for their generation. Among others, GHZ states \cite{ghz} represent a paradigmatic example of multipartite entangled states. In particular, in the case of three qubits, these states contain purely tripartite entanglement \cite{cirac} and do not retain any bipartite entanglement when one of the qubits is traced out, thus maximizing the residual tangle \cite{multipart1}.
The experimental realization of GHZ states \cite{esperimentighz,esperimentighz2,esperimentighz3,esperimentighz4}, most recently with 14 qubits \cite{14}, has paved the way towards realistic implementations of quantum protocols. In these experiments a bottom-up approach is employed, whereby individual quantum systems (trapped particles, photons, cavities) are combined and manipulated. As the number of controllable qubits increases, the generation of GHZ states requires the use of quantum operations, whose feasibility strongly depends on the physical system used (optical, semiconductor or superconductor based \cite{molmer,generationghz}). In the case of the recent trapped-ion implementation \cite{14}, the problem is additionally complicated by the presence of correlated Gaussian phase noise, which provokes ``superdecoherence'', by which decay scales quadratically with the number of qubits. It therefore becomes necessary to manipulate and control state fidelity and dynamics over sufficiently long timescales.
In principle, an alternative scheme for the implementation of GHZ states would consist in encoding them into one of the eigenstates (possibly the ground state) of a suitable Hamiltonian. For instance, in \cite{buzek} it was shown that for the quantum Ising model in a transverse field the ground state is approximately a GHZ state if the strength of the field goes to infinity. Moreover, a proper choice of local fields for a Heisenberg-like spin model can yield a ground state which is, again, approximately a GHZ state \cite{loss1,loss2}.
On the other hand, it would be interesting to understand what are the requirements to obtain an \emph{exact} GHZ state as an eigenstate of a quantum Hamiltonian. In this Letter we will address this problem and find rigorous conditions for the encoding of GHZ states into one of the eigenstates of a Hamiltonian that contains few-body coupling terms.
Let \begin{equation} \label{eq:ghz}
|G_\pm^{n}\rangle=\frac{1}{\sqrt{2}} \left(|0\rangle^{\otimes n}\pm |1\rangle^{\otimes n}\right) \end{equation}
be GHZ states, where $\sigma^z|i\rangle=(-1)^i | i \rangle$
defines the computational basis, with $i=0,1$ and $\sigma^z$ the third Pauli matrix. As a preliminary remark, we notice that it is trivial to find Hamiltonians involving $n$-body interaction terms, whose nondegenerate ground state is $|G_+^{n}\rangle$: the simplest example is $E_0|G_+^{n}\rangle\langle G_+^{n}|$, with $E_0<0$. On the other hand, we can ask whether it is possible for
$|G_+^{n}\rangle$ to be the nondegenerate ground state, even if the Hamiltonian involves at most $m$-body interaction terms (with
$m<n$). One can easily see that this is not possible. The reason lies in the fact that $|G_+^{n}\rangle$ and $|G_-^{n}\rangle$ share the same $m$-body reduced density matrices, and thus the same expectation values on $m$-body interaction terms. If
$|G_+^{n}\rangle$ is a ground state, also $|G_-^{n}\rangle$ must be a ground state. This is a special case of a result proved in \cite{nielsen2}.
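For instance, for $n=3$, tracing out the third qubit from $|G_\pm^{3}\rangle=\left(|000\rangle\pm|111\rangle\right)/\sqrt{2}$ yields in both cases
\begin{equation}
\mathrm{Tr}_3 \, |G_\pm^{3}\rangle\langle G_\pm^{3}| = \frac{1}{2}\left(|00\rangle\langle 00|+|11\rangle\langle 11|\right),
\end{equation}
since the cross terms $|000\rangle\langle 111|$ and $|111\rangle\langle 000|$, which carry the only difference between the two states, vanish under the partial trace.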
Thus, we relax our initial requirement and try to understand whether $|G_+^{n}\rangle$ can be a nondegenerate excited eigenstate for some $m$-body Hamiltonian. More specifically, we search for a limiting value $m_n^*$, depending on the number $n$ of qubits in the system, such that, if the Hamiltonian involves
$m$-body interaction terms (with $m<m_n^*$), $|G_+^{n}\rangle$ cannot be a nondegenerate eigenstate, otherwise the task becomes possible. The most generic $m$-body Hamiltonian acting on the Hilbert space of $n$ qubits can be written as \begin{equation}\label{eq:hm} H^{(m)}=\sum_{j_1=1}^n\ldots\sum_{j_m=1}^n\sum_{\alpha_1} \ldots\sum_{\alpha_m} J_{j_1\ldots j_m}^{\alpha_1\ldots\alpha_m} \sigma_{j_1}^{\alpha_1}\ldots\sigma_{j_m}^{\alpha_m} \end{equation} with $\alpha_i=0,x,y,z$, $\sigma_i^0\equiv\openone_i$ being the identity operator, $\sigma_i^\alpha$ the Pauli matrices acting on the Hilbert space of qubit $i$ and $J$'s real numbers. Terms involving only identities and an even number of $\sigma^z$'s map
$|G_+^{n}\rangle$ onto the subspace spanned by itself. On the other hand, terms involving other (products of) Pauli matrices map
$|G_+^{n}\rangle$ onto an orthogonal subspace. The action of $H^{(m)}$ on $\ket{G_+^n}$ is \begin{equation}\label{eq:hmg+}
H^{(m)}|G_+^{n}\rangle=\epsilon|G_+^{n}\rangle+|\Psi^{(m)}\rangle, \end{equation} where $\epsilon$ is a multiplicative constant and
$|\Psi^{(m)}\rangle$ is an unnormalized state vector satisfying \begin{equation}\label{eq:ort+}
\langle\Psi^{(m)}|G_+^{n}\rangle=0 . \end{equation}
Since the action of the Hamiltonian (\ref{eq:hm}) consists in inverting spins and changing the relative sign of $\ket{G_+^n}$, the vector $|\Psi^{(m)}\rangle$ can be expressed in a convenient way by introducing a new notation. Let \begin{equation} \mathcal{N}=\left(1,2,\ldots,n\right) \end{equation} be the ordered set of naturals from $1$ to $n$, and let \begin{equation}\label{eq:indices} \mathcal{I}=\left( i_1, i_2,\ldots,i_l\right) \end{equation}
denote a multi-index, whose elements range from $1$ to $n$ and satisfy $i_1<i_2<\ldots<i_l$. The cardinality $|\mathcal{I}|=l$ verifies \begin{equation}\label{eq:indlength}
1\leq |\mathcal{I}|\leq m<n. \end{equation} We now define a set of normalized state vectors, depending on the choice of the multi-index $\mathcal{I}$ and on the sign $\sigma=\pm$: \begin{eqnarray}\label{eq:gtilde}
|\tilde{G}_{\sigma,\mathcal{I}}^n\rangle&=&\frac{1}{\sqrt{2}}\left[
\left(\bigotimes_{i\in\mathcal{I}}|1\rangle_i\bigotimes_{j\in\mathcal{N}/\mathcal{I}}|0\rangle_j\right)\right.\nonumber\\ &+&\sigma\left.\left(
\bigotimes_{i\in\mathcal{I}}|0\rangle_i\bigotimes_{j\in\mathcal{N}/\mathcal{I}}|1\rangle_j \right)\right] \end{eqnarray} The state
$|\tilde{G}_{\sigma,\mathcal{I}}^n\rangle$ differs from
$|G_+^n\rangle$ in that spins corresponding to the indices in $\mathcal{I}$ are reversed in both computational basis vectors in the superposition $\ket{G_+^n}$. This means that
$|\tilde{G}_{+,\mathcal{I}}^n\rangle=\ket{G_+^n}$ if $\mathcal{I}$ is the empty set. Moreover, the relative phase of the two vectors can be positive or negative, according to the sign
$\sigma$. Thus, the vector $|\Psi^{(m)}\rangle$ in Eq.\ (\ref{eq:hmg+}) can be expressed as \begin{equation}\label{eq:psim}
|\Psi^{(m)}\rangle=b_0|G_-^n\rangle+\sum_{\mathcal{I}} \left(
a_{\mathcal{I}}|\tilde{G}_{+,\mathcal{I}}^n\rangle +
b_{\mathcal{I}} |\tilde{G}_{-,\mathcal{I}}^n\rangle \right). \end{equation} The coefficients $a_{\mathcal{I}}$, $b_{\mathcal{I}}$ and $b_0$ are functions of the parameters of the Hamiltonian (\ref{eq:hm}). It is obvious that, if they can all be set to zero by a proper choice of $H^{(m)}$,
$|G_+^{n}\rangle$ will be an eigenstate of the Hamiltonian. A problem arises, however, if we take into account the antisymmetric state $|G_-^{n}\rangle$. The action of $H^{(m)}$ on this vector reads \begin{equation}\label{eq:hmg-}
H^{(m)}|G_-^{n}\rangle=\epsilon|G_-^{n}\rangle+|\Phi^{(m)}\rangle, \end{equation}
where $|\Phi^{(m)}\rangle$ is orthogonal to $|G_-^{n}\rangle$ and can be decomposed as \begin{equation}\label{eq:phim}
|\Phi^{(m)}\rangle=b_0|G_+^n\rangle+\sum_{\mathcal{I}} \left(
a_{\mathcal{I}}|\tilde{G}_{-,\mathcal{I}}^n\rangle +
b_{\mathcal{I}} |\tilde{G}_{+,\mathcal{I}}^n\rangle \right). \end{equation}
If all the coefficients in Eq.\ (\ref{eq:psim}) are set to zero, this will result in the cancellation of $|\Phi^{(m)}\rangle$. As a consequence, $|G_+^n\rangle$ and $|G_-^n\rangle$ will be degenerate eigenstates (with eigenvalue $\epsilon$). Thus, if the sufficient conditions \begin{eqnarray} \label{eq:cond1} & & b_0=0, \\ \label{eq:cond2} & & a_{\mathcal{I}}=0\,,\quad b_{\mathcal{I}}=0 \end{eqnarray}
are also \textit{necessary} for $|G_+^n\rangle$ to be an eigenstate of $H^{(m)}$, degeneracy is unavoidable. We notice that, since the following equality holds \begin{equation}
\langle G_-^n | \tilde{G}_{\sigma,\mathcal{I}}^n \rangle=0\quad \forall\mathcal{I} \quad \mbox{and} \quad \forall\sigma, \end{equation} Eq.\ (\ref{eq:cond1}) is always a necessary condition.
Let us start considering the case in which the Hamiltonian (\ref{eq:hm}) contains interaction terms up to $m$-body such that
\begin{equation} m<m_n^*\equiv [(n+1)/2] \end{equation} with $[\cdot]$ denoting the integer part. Following Eq.\ (\ref{eq:indlength}), the sum in the decomposition of $|\Psi^{(m)}\rangle$ and $|\Phi^{(m)}\rangle$ runs over all the multi-indices whose length satisfies \begin{equation}\label{eq:indlengthA}
1\leq |\mathcal{I}|\leq m<m_n^*. \end{equation} If this inequality holds, the following orthogonality relations are verified: \begin{equation}\label{eq:orthog}
\langle \tilde{G}_{\sigma_1,\mathcal{I}_1}^n | \tilde{G}_{\sigma_2,\mathcal{I}_2}^n \rangle=0 \quad \text{if } \mathcal{I}_1\neq \mathcal{I}_2 \text{ or } \sigma_1\neq\sigma_2 . \end{equation} Thus, Eq.\ (\ref{eq:cond2}) is a necessary condition to cancel
$|\Psi^{(m)}\rangle$ and make $|G_+^n\rangle$ an eigenstate of
$H^{(m)}$. In this case, however, $|G_+^n\rangle$ and
$|G_-^n\rangle$ are eigenstates corresponding to the same eigenvalue. We can conclude that, if the Hamiltonian of a qubit system involves terms coupling less than $m_n^*= [(n+1)/2]$ spins, the GHZ state $\ket{G_+^n}$, and any equivalent state by local unitaries, cannot be a nondegenerate eigenstate. If $\ket{G_+^n}$ is an eigenstate for some Hamiltonian $H^{(m)}$, it must be at least two-fold degenerate.
On the other hand, if $m=m_n^*$ degeneracy can be avoided. In this case, some of the conditions in Eq.\ (\ref{eq:cond2}) are no longer necessary, since the orthogonality relations in Eq.\ (\ref{eq:orthog}) hold only if inequality (\ref{eq:indlengthA}) is satisfied. A new relation indeed emerges, connecting
$|\tilde{G}_{\sigma,\mathcal{I}}^n\rangle$ states corresponding to multi-indices of length $m_n^*$ and $(n-m_n^*)$ (which is equal to $m_n^*$ for even $n$ and to $m_n^*-1$ for odd $n$). Indeed, reversing $m_n^*$ spins in $\ket{G_+^n}$ is completely equivalent to reversing the other $n-m_n^*$ ones. Instead, if the same operations are applied on the antisymmetric state $\ket{G_-^n}$, they will differ only by an overall sign. Thus, we have the following relations \begin{equation}\label{eq:gtildelim}
|\tilde{G}_{\pm,\mathcal{I}}^n\rangle=\pm|\tilde{G}_{\pm,\mathcal{N}/\mathcal{I}}^n\rangle\quad
\text{ if } |\mathcal{I}|=m_n^*,n-m_n^*. \end{equation} While conditions (\ref{eq:cond2}) still hold for
$|\mathcal{I}|<\min(m_n^*,n-m_n^*)$, for larger values of
$|\mathcal{I}|$ one should use \begin{equation}\label{eq:cond2b} \left\{ \begin{array}{l} a_{\mathcal{I}}=-a_{\mathcal{N}/\mathcal{I}} \\ b_{\mathcal{I}}=b_{\mathcal{N}/\mathcal{I}} \end{array}
\right. \qquad \mbox{if}\quad |\mathcal{I}|=m_n^*,n-m_n^*. \end{equation}
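For instance, for $n=4$ (so that $m_4^*=2$) and $\mathcal{I}=(1,2)$, Eq.\ (\ref{eq:gtildelim}) reads
\begin{equation}
|\tilde{G}_{\pm,(1,2)}^4\rangle=\frac{1}{\sqrt{2}}\left(|1100\rangle\pm|0011\rangle\right)=\pm|\tilde{G}_{\pm,(3,4)}^4\rangle,
\end{equation}
so that conditions (\ref{eq:cond2b}) can be satisfied with, e.g., $a_{(1,2)}=-a_{(3,4)}\neq 0$.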
Thus, in order to cancel $|\Psi^{(m)}\rangle$, it is no longer necessary to set all the coefficients $a_{\mathcal{I}}=0$ and $b_{\mathcal{I}}=0$ in Eq.\ (\ref{eq:psim}), because this would give a degeneracy (remember that, by the same conditions, one would have
$|\Phi^{(m)}\rangle=0$). Instead, by using Eq.(\ref{eq:cond2b}), the vector $|\Phi^{(m)}\rangle$ in Eq.(\ref{eq:phim}) becomes \begin{equation}\label{eq:phimbar}
|\bar{\Phi}^{(m)}\rangle\equiv\sum_{|\mathcal{I}|=m_n^*,n-m_n^*}
\left( a_{\mathcal{I}}|\tilde{G}_{-,\mathcal{I}}^n\rangle +
b_{\mathcal{I}} |\tilde{G}_{+,\mathcal{I}}^n\rangle \right), \end{equation} which is generally different from the null vector. If, for some values of the parameters in the Hamiltonian (\ref{eq:hm}), the conditions (\ref{eq:cond2b}) are satisfied without cancelling
$|\bar{\Phi}^{(m)}\rangle$, the GHZ state $|G_+^n\rangle$ can, at least in principle, be a nondegenerate eigenstate of a Hamiltonian with interaction terms coupling no more than
$m_n^*=[(n+1)/2]$ qubits. As a further remark, we notice that this result does not by itself ensure that $|G_+^n\rangle$ is nondegenerate: degeneracy can be excluded only by explicitly solving the Hamiltonian.
The case $m_n^*<m<n$ is analogous to the previous one, since conditions of the type (\ref{eq:gtildelim}) hold for all multi-indices $\mathcal{I}$ that satisfy $n-m\leq
|\mathcal{I}|\leq m$. Following the same procedure as in the case
$m=m_n^*$, we find that the degeneracy of the eigenspace spanned by $|G_+^n\rangle$ and $|G_-^n\rangle$ can still be avoided.
We will now consider an interesting application of the previous results. In the following example, we will focus our attention on the symmetric GHZ state $|G_+^n\rangle$ and the case \begin{eqnarray}\label{eq:four} n=4, \quad m=m_4^*=2 \end{eqnarray} and will show that the state
$|G_+^4\rangle=\left(|0000\rangle+|1111\rangle\right)/\sqrt{2}$ can be a nondegenerate eigenstate of a two-body Hamiltonian. We restrict our attention to Hamiltonians involving only two-body couplings along the $x$ and $z$ axes, and we add the condition of nearest-neighbour couplings on a ring: \begin{equation}\label{eq:h4} H^{(2)}=\sum_{i=1}^4 \left( J_{i}^x \sigma_i^x\sigma_{i+1}^x + J_{i}^z \sigma_i^z\sigma_{i+1}^z \right) . \end{equation} In Eq.\ (\ref{eq:h4}) we have used periodic boundary conditions
$\bm{\sigma}_5\equiv\bm{\sigma}_1$. We notice that interaction terms of the form $\sigma_i^z\sigma_{i+1}^z$ leave $|G_+^4\rangle$ invariant, while the terms $\sigma_i^x\sigma_{i+1}^x$ reverse two nearest-neighbour spins. Since, as in Eq.\ (\ref{eq:gtildelim}), we have \begin{equation}\label{eq:gtildelim4}
|\tilde{G}_{\pm,(1,2)}^4\rangle=\pm|\tilde{G}_{\pm,(3,4)}^4\rangle, \;\; |\tilde{G}_{\pm,(2,3)}^4\rangle=\pm|\tilde{G}_{\pm,(1,4)}^4\rangle, \end{equation} the action of $H^{(2)}$ on the four-qubit GHZ state
$|G_+^4\rangle$ reads \begin{equation}\label{h2g4}
H^{(2)}|G_+^4\rangle=\left(\sum_{i=1}^4 J_i^z\right)|G_+^4\rangle+\sum_{i=1}^2\left(J_i^x+J_{i+2}^x\right)|\tilde{G}_{+,(i,i+1)}^4\rangle. \end{equation}
Thus $|G_+^4\rangle$ is an eigenstate of $H^{(2)}$ if and only if \begin{equation} \left\{ \begin{array}{l} J_1^x=-J_3^x \\ J_2^x=-J_4^x . \end{array} \right. \end{equation} Under these conditions, the Hamiltonian acts on the antisymmetric combination
$|G_-^4\rangle=\left(|0000\rangle-|1111\rangle\right)/\sqrt{2}$ as \begin{equation}
H^{(2)}|G_-^4\rangle=\left(\sum_{i=1}^4 J_i^z\right)|G_-^4\rangle+2\sum_{i=1}^2 J_i^x
|\tilde{G}_{-,(i,i+1)}^4\rangle. \end{equation}
If $J_1^x\ne 0$ or $J_2^x\ne 0$, $|G_-^4\rangle$ is not an eigenstate. We explicitly solve a simple model, with $J_i^z\equiv J^z/4$ for all $i$ and $J_1^x=J_2^x\equiv J^x/4(=-J_3^x=-J_4^x)$. If the coupling constants are not zero and $J^x\neq J^z$,
$|G_+^4\rangle$ is a nondegenerate eigenstate, corresponding to the eigenvalue $J^z$. It is remarkable, however, that this model has three other eigenstates which are equivalent to
$|G_+^4\rangle$ by local unitaries, corresponding to the eigenvalues $-J^z$ and $\pm J^x$. As expected, none of them can be the nondegenerate ground state, since the ground-state energy is $\epsilon_0=-\sqrt{(J^x)^2+(J^z)^2}$. $\ket{G_+^4}$ is the (nondegenerate) first excited state if $J^z<0$ and $-1<J^x/J^z<1$. Incidentally, for different ranges of the parameters the first excited state of this Hamiltonian is one of the three eigenstates which are locally equivalent to $\ket{G_+^4}$.
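This model is small enough to diagonalize numerically in the full $16$-dimensional Hilbert space; the sketch below uses the illustrative couplings $J^z=-1$, $J^x=1/2$, which lie inside the first-excited-state window:

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2)
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])

def op(single, site, n=4):
    """Embed a one-qubit operator at `site` in an n-qubit register."""
    out = np.array([[1.0]])
    for k in range(n):
        out = np.kron(out, single if k == site else I2)
    return out

def bond(single, i, n=4):
    """Nearest-neighbour coupling sigma_i sigma_{i+1} on a ring."""
    return op(single, i, n) @ op(single, (i + 1) % n, n)

Jz, Jx = -1.0, 0.5                          # illustrative couplings
Jxs = [Jx / 4, Jx / 4, -Jx / 4, -Jx / 4]    # J_1^x = J_2^x = Jx/4 = -J_3^x = -J_4^x
H = sum((Jz / 4) * bond(sz, i) + Jxs[i] * bond(sx, i) for i in range(4))

ghz_p = np.zeros(16); ghz_p[0] = ghz_p[15] = 2 ** -0.5   # (|0000>+|1111>)/sqrt(2)
ghz_m = np.zeros(16); ghz_m[0], ghz_m[15] = 2 ** -0.5, -(2 ** -0.5)

assert np.allclose(H @ ghz_p, Jz * ghz_p)   # eigenstate with eigenvalue Jz
resid = H @ ghz_m - (ghz_m @ H @ ghz_m) * ghz_m
assert np.linalg.norm(resid) > 0.1          # |G_-^4> is not an eigenstate

evals = np.sort(np.linalg.eigvalsh(H))
print(evals[0], int(np.sum(np.isclose(evals, Jz))))
```

For these values the diagonalization gives the ground-state energy $-\sqrt{(J^x)^2+(J^z)^2}\approx-1.118$, with the nondegenerate eigenvalue $J^z$ immediately above it as the first excited level.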
The five-qubit GHZ state $\ket{G_+^5}=(\ket{00000}+\ket{11111})/\sqrt{2}$ needs at least three-body interactions in order to be a nondegenerate eigenstate of a Hamiltonian ($m_5^*=3$). It is straightforward to check that $\ket{G_+^5}$ is an eigenstate of \begin{equation}\label{eq:h5} H^{(3)}=\frac{J^z}{5} \sum_{i=1}^5 \sigma_i^z\sigma_{i+1}^z + \frac{J^x}{5}\sum_{i=1}^5 \left( \sigma_i^x\sigma_{i+1}^x\sigma_{i+2}^x-\sigma_i^x\sigma_{i+1}^x \right) \end{equation} with eigenvalue $J^z$ (periodic boundary conditions are assumed). The conditions for $\ket{G_+^5}$ to be a nondegenerate eigenstate are easily worked out by diagonalizing $H^{(3)}$. It is the nondegenerate first excited eigenstate if $J^z<0$, $J^x\neq 0$ and \begin{equation} -2+\frac{2}{\sqrt{3}}<\frac{J^x}{J^z}<\frac{1}{6}\left[ \sqrt{2(75+7\sqrt{5})}-(7+\sqrt{5}) \right]. \end{equation} $\ket{G_+^5}$ can be the ground state only if $J^x=0$, but in this case it is twice degenerate.
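The eigenvalue equation $H^{(3)}\ket{G_+^5}=J^z\ket{G_+^5}$ can likewise be checked directly in the $32$-dimensional space; the couplings below are illustrative and satisfy the first-excited-state condition ($J^x/J^z=-0.1$ lies in the stated interval):

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])

def op(single, sites, n=5):
    """Apply `single` on every site in `sites`, identity elsewhere."""
    out = np.array([[1.0]])
    for k in range(n):
        out = np.kron(out, single if k in sites else I2)
    return out

n = 5
Jz, Jx = -1.0, 0.1   # Jz < 0 and Jx/Jz inside the stated window
H3 = sum((Jz / n) * op(sz, {i, (i + 1) % n})
         + (Jx / n) * (op(sx, {i, (i + 1) % n, (i + 2) % n})
                       - op(sx, {i, (i + 1) % n}))
         for i in range(n))

ghz5 = np.zeros(2 ** n); ghz5[0] = ghz5[-1] = 2 ** -0.5  # (|00000>+|11111>)/sqrt(2)
assert np.allclose(H3 @ ghz5, Jz * ghz5)   # eigenvalue Jz, as in the text

evals5 = np.sort(np.linalg.eigvalsh(H3))
print(evals5[:3])   # Jz appears as the (nondegenerate) first excited level
```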
In conclusion, we investigated general conditions such that GHZ states (\ref{eq:ghz}) are nondegenerate in the spectrum of a Hamiltonian. We showed that if the Hamiltonian acting on the Hilbert space of $n$ qubits involves terms that couple at most $m$ qubits, it is impossible to have a nondegenerate GHZ eigenstate if $m<m_n^*$ with $m_n^*=\left[ (n+1)/2\right]$. If $m\ge m_n^*$, degeneracy can in principle be absent.
The difficulty in obtaining GHZ states as ground states (or even eigenstates) of Hamiltonians that involve only few-body interactions is in accord with previous results \cite{pepe} and seems to be a characteristic trait of multipartite entanglement. It would be interesting, also in view of applications, to investigate the existence of general conditions for obtaining approximate GHZ states for an arbitrary number of qubits by making use of few-body Hamiltonians.
\acknowledgments P.F.\ and G.F.\ acknowledge support through the project IDEA of University of Bari.
\end{document}
\begin{document}
\title{Flexibility of Lyapunov exponents with respect to two classes of measures on the torus}
\newsavebox{\smlmat} \savebox{\smlmat}{$\left(\begin{smallmatrix}2&1\\1&1\end{smallmatrix}\right)$}
\begin{center}
\emph{Dedicated to Anatole Katok} \end{center}
\begin{abstract}
We consider a smooth area-preserving Anosov diffeomorphism $f\colon \mathbb T^2\rightarrow \mathbb T^2$ homotopic to an Anosov automorphism $L$ of $\mathbb T^2$. It is known that the positive Lyapunov exponent of $f$ with respect to the normalized Lebesgue measure is less than or equal to the topological entropy of $L$, which, in addition, is less than or equal to the Lyapunov exponent of $f$ with respect to the probability measure of maximal entropy. Moreover, the equalities only occur simultaneously. We show that these are the only restrictions on these two dynamical invariants. \end{abstract}
\tableofcontents
\section{Introduction}
The aim of the flexibility program is to study natural classes of smooth dynamical systems and to find \textit{constructive tools} to freely manipulate dynamical data inside a fixed class. The result described in this paper is another example demonstrating the flexibility principle in dynamical systems.
\subsection{Anosov volume-preserving diffeomorphisms on tori}\label{section: anosov}
Consider an $n$-dimensional torus $\mathbb T^n = \mathbb R^n/\mathbb Z^n$, $n\geq 2$. Anosov volume-preserving $C^\infty$ (smooth) diffeomorphisms represent a natural class for studying flexibility questions. For any $f$ in this class the tangent bundle of $\mathbb T^n$ splits as a direct sum $T\mathbb T^n=E^u\oplus E^s$ of two $Df$-invariant subbundles $E^u$ (unstable) and $E^s$ (stable) such that $E^u$ is uniformly expanded by $Df$ and $E^s$ is uniformly contracted by $Df$. Moreover, by \cite[Theorem 18.6.1]{KatokHasselblatt}, $f$ is homotopic and topologically conjugate to an Anosov automorphism $L$ given by a hyperbolic matrix in $SL(n,\mathbb Z)$, i.e., a matrix in $SL(n, \mathbb Z)$ with no eigenvalues on the unit circle.
The \textit{Lyapunov exponent} of $f$ at $x\in\mathbb T^n$, $\pmb v\in T_x\mathbb T^n\setminus\{\pmb 0\}$ is given by \begin{equation}\label{def: Lyapunov exponent}
\lambda(f,x,\pmb v) = \limsup\limits_{n\rightarrow\infty}\frac{\log\|Df^n_x\pmb v\|}{n}. \end{equation}
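As an illustration of the definition \eqref{def: Lyapunov exponent}, for the automorphism induced by $\left(\begin{smallmatrix}2&1\\1&1\end{smallmatrix}\right)$ the differential is the constant matrix itself, and the limit equals $\log\frac{3+\sqrt 5}{2}$ for any vector outside the stable direction; a minimal numerical sketch:

```python
import numpy as np

# Estimate the top Lyapunov exponent of the automorphism induced by
# A = [[2,1],[1,1]] from the definition: iterate Df = A on a generic vector,
# renormalizing at each step and averaging the logarithmic growth.
A = np.array([[2.0, 1.0], [1.0, 1.0]])
v = np.array([1.0, 0.0])          # any vector not in the stable direction
burn_in, steps, total = 50, 200, 0.0
for k in range(burn_in + steps):
    v = A @ v
    growth = np.linalg.norm(v)
    v /= growth                   # renormalize to avoid overflow
    if k >= burn_in:              # discard the transient before averaging
        total += np.log(growth)
lam = total / steps
print(lam)                        # ~0.9624 = log((3 + sqrt(5))/2)
```

For this linear example the exponent coincides with $h_{top}(L_A)$, the degenerate case of the relations discussed below.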
Since $f$ is volume-preserving, $f$ preserves the normalized Lebesgue measure $\Leb$ which is ergodic \cite{AnosovSinai}. Moreover, there exists a unique measure of maximal entropy $\nu_f$ for $f$ which is ergodic \cite{Bowen}. Applying the Oseledets Multiplicative Ergodic Theorem, we obtain that for any $f$-invariant ergodic Borel probability measure $\mu$ the limit \begin{equation*}
\lim\limits_{n\rightarrow \infty}\frac{\log\|D_xf^n\pmb v\|}{n} \end{equation*} exists for $\mu$-almost every $x\in\mathbb T^n$ and all non-zero $\pmb v\in T_x\mathbb T^n$. In particular, we obtain a collection of $n$ numbers $\lambda_{1,\mu}(f)\geq\ldots\geq \lambda_{n,\mu}(f)$ as possible limits that are called \textit{Lyapunov exponents of $f$ with respect to $\mu$}. Since $f$ is Anosov, all these numbers are nonzero. We denote the number of positive elements in this list by $u(f)$ and call it the \textit{unstable index of $f$}. Also, for any $f$-invariant ergodic Borel probability measure $\mu$: \begin{equation}\label{Lyapunov exponent through integral}
\sum\limits_{i=1}^{u(f)}\lambda_{i,\mu}(f) = \int_{\mathbb T^n}\log\|Df|_{E^u}\|d\mu. \end{equation}
The Lyapunov exponents with respect to $\Leb$ and $\nu_f$ are natural dynamical data to consider in the context of the flexibility program.
We denote by $h_{top}(f)$ and $h_{\Leb}(f)$ the topological entropy and the metric entropy with respect to $\Leb$ of $f$, respectively. The following relations for the considered entropies and Lyapunov exponents are known in this setting: \begin{itemize} \item For an Anosov automorphism $L\colon\mathbb T^n\rightarrow\mathbb T^n$, we have $h_{top}(L)=h_{\Leb}(L)=\sum\limits_{i=1}^{u(L)}\lambda_{i,\Leb}(L)$; \item Since $h_{top}$ is an invariant of topological conjugacy \cite[Corollary 3.1.4]{KatokHasselblatt}, we obtain that if $f$ is homotopic to an Anosov automorphism $L$, then $h_{top}(f)=h_{top}(L)$; \item Since $f$ is volume-preserving, we have $\sum\limits_{i=1}^n\lambda_{i,\Leb}(f)=\sum\limits_{i=1}^n\lambda_{i,\nu_f}(f)=0$; \item Variational Principle for entropies \cite[Theorem 4.5.3]{KatokHasselblatt}: $h_{\Leb}(f)\leq h_{top}(f)$; \item Ruelle's inequality \cite{Ruelle}: $h_{top}(f)\leq \sum\limits_{i=1}^{u(f)}\lambda_{i,\nu_f}(f)$; \item Pesin's entropy formula \cite[Theorem 10.4.1]{BarreiraPesin}: $h_{\Leb}(f) = \sum\limits_{i=1}^{u(f)}\lambda_{i,\Leb}(f)$. \end{itemize}
Thus, some representative questions (ordered in increasing number of requirements) concerning flexibility of Lyapunov exponents for Anosov volume-preserving diffeomorphisms on $\mathbb T^n$ are the following:
\begin{question}[Conjecture 1.4 in \cite{BKRH}, weak flexibility for one measure]\label{weak flexibility 1} Given any list of nonzero numbers $\xi_1\geq\ldots\geq \xi_n$ such that $\sum\limits_{i=1}^n\xi_i=0$, does there exist a smooth volume-preserving Anosov diffeomorphism $f$ of $\mathbb T^n$ such that \begin{equation*} (\lambda_{1,\Leb}(f),\ldots, \lambda_{n,\Leb}(f))=(\xi_1,\ldots,\xi_n)? \end{equation*} \end{question}
\begin{question}[Problem 1.3 in \cite{BKRH}, strong flexibility for one measure]\label{strong flexibility 1} Let $L\colon \mathbb T^n\rightarrow\mathbb T^n$, $n\geq 2$, be a volume-preserving Anosov automorphism with the unstable index $u$. Given any list of numbers \begin{equation*} \xi_1\geq\ldots\geq\xi_u>0>\xi_{u+1}\geq\ldots\geq\xi_n\quad\text{such that:} \end{equation*} \begin{equation*} \sum\limits_{i=1}^n\xi_i=0\qquad\text{and}\qquad \sum\limits_{i=1}^u\xi_i\leq h_{top}(L), \end{equation*} does there exist a smooth volume-preserving Anosov diffeomorphism $f\colon\mathbb T^n\rightarrow \mathbb T^n$ homotopic to $L$ such that \begin{equation*} (\lambda_{1,\Leb}(f),\ldots, \lambda_{n,\Leb}(f))=(\xi_1,\ldots,\xi_n)? \end{equation*} \end{question}
\begin{question}[Weak flexibility for two measures]\label{weak flexibility 2} Let $n\in\mathbb N\setminus\{1\}$ and $u\in\mathbb N\cap [1,n)$. Given any two lists of numbers \begin{equation*} \xi_1\geq\ldots\geq\xi_u>0>\xi_{u+1}\geq\ldots\geq\xi_n \quad \text{and} \quad \eta_1\geq\ldots\geq \eta_u>0>\eta_{u+1}\geq\ldots\geq\eta_n \quad\text{such that:} \end{equation*} \begin{equation*} \sum\limits_{i=1}^n\xi_i=0,\qquad \sum\limits_{i=1}^n\eta_i=0,\qquad\text{and}\qquad \sum\limits_{i=1}^u\xi_i\leq\sum\limits_{i=1}^u\eta_i, \end{equation*} does there exist a smooth volume-preserving Anosov diffeomorphism $f\colon\mathbb T^n\rightarrow \mathbb T^n$ such that \begin{equation*} (\lambda_{1,\Leb}(f),\ldots, \lambda_{n,\Leb}(f))=(\xi_1,\ldots,\xi_n) \qquad\text{and}\qquad (\lambda_{1,\nu_f}(f),\ldots, \lambda_{n,\nu_f}(f))=(\eta_1,\ldots,\eta_n)? \end{equation*} \end{question}
\begin{question}[Strong flexibility for two measures]\label{strong flexibility 2} Let $L\colon \mathbb T^n\rightarrow\mathbb T^n$, $n\geq 2$, be a volume-preserving Anosov automorphism with the unstable index $u$. Given any two lists of numbers \begin{equation*} \xi_1\geq\ldots\geq\xi_u>0>\xi_{u+1}\geq\ldots\geq\xi_n \quad \text{and} \quad \eta_1\geq\ldots\geq \eta_u>0>\eta_{u+1}\geq\ldots\geq\eta_n \quad\text{such that:} \end{equation*} \begin{equation*} \sum\limits_{i=1}^n\xi_i=0,\qquad \sum\limits_{i=1}^n\eta_i=0,\qquad\text{and}\qquad \sum\limits_{i=1}^u\xi_i<h_{top}(L)<\sum\limits_{i=1}^u\eta_i, \end{equation*} does there exist a smooth volume-preserving Anosov diffeomorphism $f\colon\mathbb T^n\rightarrow \mathbb T^n$ homotopic to $L$ such that \begin{equation*} (\lambda_{1,\Leb}(f),\ldots, \lambda_{n,\Leb}(f))=(\xi_1,\ldots,\xi_n) \qquad\text{and}\qquad (\lambda_{1,\nu_f}(f),\ldots, \lambda_{n,\nu_f}(f))=(\eta_1,\ldots,\eta_n)? \end{equation*} \end{question}
Corollary 1.6 in \cite{BKRH} gives a positive answer to Question~\ref{weak flexibility 1} when formulated with strict inequalities among the given numbers. For $n=2$, it is folklore among specialists that the positive answer to Question~\ref{strong flexibility 1} was already known to A. Katok using a fairly straightforward global twist construction. The positive answer for $n=2$ and partial answer for $n>2$ follow from \cite[Theorem 1.5]{BKRH}. Moreover, Theorem 1.7 in \cite{BKRH} provides the full solution of Question~\ref{strong flexibility 1} for $\mathbb T^3$ with additional restrictions and the requirement of simple dominated splitting.
In this paper, we study Question~\ref{strong flexibility 2} and provide a positive answer for $n=2$ (see Theorem~\ref{main theorem}). This result can be considered as the two-dimensional version of \cite{E}. All in all, this work differs from \cite{BKRH} by considering flexibility for a pair of exponents instead of a single exponent and by using a more explicit construction. The main difficulty here lies in controlling the measure of maximal entropy. Essentially, the only way to estimate the measures of sets with respect to the measure of maximal entropy is to use Markov partitions which are difficult to understand explicitly for a general Anosov diffeomorphism.
\subsection{Formulation of the result}
Let $\mathbb T^2 = \mathbb R^2/\mathbb Z^2$. Let $f$ be a smooth area-preserving Anosov diffeomorphism on $\mathbb T^2$ homotopic to an Anosov automorphism $L$. We denote by $\lambda_{abs}(f)$ and $\lambda_{mme}(f)$ the positive Lyapunov exponents of $f$ with respect to $\Leb$ and to the measure of maximal entropy $\nu_f$, respectively.
Thus, summarizing Section~\ref{section: anosov} with $n=2$, we have two possibilities:
$$\text{\textit{either} }\qquad 0<\lambda_{abs}(f)<h_{top}(L)<\lambda_{mme}(f)$$ $$\text{\textit{or} }\qquad \lambda_{abs}(f)=\lambda_{mme}(f)=h_{top}(L).$$
Question~\ref{strong flexibility 2} asks if the above relations are the only relations between $\lambda_{abs}$ and $\lambda_{mme}$. Our main theorem shows that this is indeed the case.
\begin{thmintro}\label{main theorem}
Suppose $L_A$ is an Anosov automorphism on $\mathbb T^2$ which is induced by a matrix $A\in SL(2,\mathbb Z)$ with $|\text{trace}(A)|>2$. Let $\Lambda=h_{top}(L_A)$. For any $\Lambda_{abs},\Lambda_{mme}\in\mathbb R$ such that $0<\Lambda_{abs}<\Lambda<\Lambda_{mme}$, there exists a smooth area-preserving Anosov diffeomorphism $f\colon \mathbb T^2\rightarrow \mathbb T^2$ homotopic to $L_A$ such that $\lambda_{abs}(f)=\Lambda_{abs}$ and $\lambda_{mme}(f)=\Lambda_{mme}$. \end{thmintro}
The shaded area plus its lower right corner in Figure \ref{fig:exponents_set} shows the set of all possible values for pairs $(\lambda_{abs},\lambda_{mme})$ in the setting of Theorem~\ref{main theorem}.
\begin{figure}
\caption{Possible values of $(\lambda_{abs},\lambda_{mme})$}
\label{fig:exponents_set}
\caption{Constructions from the proof}
\label{fig:constr_set}
\end{figure}
Using the fact that there is no retraction of a square onto its boundary, together with the continuity of the Lyapunov exponents in the constructed family (see Remark~\ref{remark about continuity}), Theorem~\ref{main theorem} can be reduced to the following.
\begin{thmintro}\label{thm: decrease abs}
Suppose $L_A$ is an Anosov automorphism on $\mathbb T^2$ which is induced by a matrix $A\in SL(2,\mathbb Z)$ with $|\text{trace}(A)|>2$. Let $\Lambda=h_{top}(L_A)$. For any positive numbers $\gamma$, $S$ and $T$ such that $\Lambda<S<T$, there exists a smooth family $\{f_{s,t}\}$, where $(s,t)\in[0,1]\times[0,1]$, of area-preserving Anosov diffeomorphisms on $\mathbb T^2$ homotopic to $L_A$ such that the following hold: \begin{enumerate} \item $\Lambda-\gamma<\lambda_{abs}(f_{s,0})\leq\Lambda$ for all $s\in[0,1]$;\label{decrease 1} \item $\lambda_{abs}(f_{s,1})<\gamma$ for all $s\in[0,1]$; \label{decrease 2} \item $\lambda_{mme}(f_{0,t})<S$ for all $t\in[0,1]$;\label{decrease 3} \item $\lambda_{mme}(f_{1,t})>T$ for all $t\in[0,1]$.\label{decrease 4}
\end{enumerate} \end{thmintro}
\begin{remark}\label{remark about continuity} In a smooth family of Anosov area-preserving diffeomorphisms on $\mathbb T^2$, the Lyapunov exponents with respect to $\Leb$ and the measure of maximal entropy vary continuously. For the Lebesgue measure, this follows immediately from \eqref{Lyapunov exponent through integral} and the fact that the unstable distribution $E^u$ varies continuously. In addition, the measure of maximal entropy depends continuously on the dynamics in the weak* topology. To see this, we can use the fact \cite[Theorem 1]{Moser} that for a smooth family of Anosov area-preserving diffeomorphisms, the topological conjugacy to $L_A$ is continuous in the parameters of the family. Moreover, the measure of maximal entropy is mapped to the measure of maximal entropy by the conjugacy. For the families we consider, the continuity of the measure of maximal entropy can alternatively be seen directly using the constructed Markov partition (see Sections~\ref{section: estimate_mme} and~\ref{section: mme in II}).
\end{remark}
\subsection{Outline of the proof}
To prove Theorem~\ref{thm: decrease abs}, we construct (large) smooth (area-preserving) homotopic deformations of Anosov automorphisms, i.e., deformations preserving the homotopy class, and estimate $\lambda_{abs}$ and $\lambda_{mme}$ for the resulting Anosov diffeomorphisms. We refer to Figure \ref{fig:constr_set} in what follows.
\begin{itemize} \item Without loss of generality, we assume that $\text{trace}(A)>2$. It is enough to prove Theorem~\ref{thm: decrease abs} in that case because of the following. Assume that $B\in SL(2,\mathbb Z)$ with $\text{trace}(B)<-2$. Then, $-B\in SL(2,\mathbb Z)$, $\text{trace}(-B) > 2$, and $h_{top}(L_B)=h_{top}(L_{-B})=\Lambda$. Assume that $\Lambda_{abs},\Lambda_{mme}\in\mathbb R$ such that $0<\Lambda_{abs}<\Lambda<\Lambda_{mme}$ and there exists an area-preserving Anosov diffeomorphism $f\colon \mathbb T^2\rightarrow\mathbb T^2$ homotopic to $L_{-B}$ such that $\lambda_{abs}(f)=\Lambda_{abs}$ and $\lambda_{mme}(f)=\Lambda_{mme}$. Let $-I = \begin{pmatrix} -1 & 0\\ 0 & -1\end{pmatrix}$. Then, $L_{-I}\circ f\colon \mathbb T^2\rightarrow\mathbb T^2$ is an area-preserving Anosov diffeomorphism homotopic to $L_{B}$ such that $\lambda_{abs}(L_{-I}\circ f)=\Lambda_{abs}$ and $\lambda_{mme}(L_{-I}\circ f)=\Lambda_{mme}$.
\item In Section~\ref{section: increase max}, we describe a homotopic deformation that produces, for any given positive number $H$ and any small positive number $\gamma$, a curve in the set of possible values of $(\lambda_{abs}, \lambda_{mme})$ $\gamma$-close to Side B with one endpoint at $(\Lambda,\Lambda)$ and the other endpoint corresponding to a smooth area-preserving Anosov diffeomorphism with $\lambda_{mme}$ larger than $H$. The resulting diffeomorphisms have a large twist on a thin strip. In Section~\ref{section: estimate_abs}, we find a lower bound on $\lambda_{abs}$. In Section~\ref{section: estimate_mme}, we provide a lower bound on $\lambda_{mme}$ by controlling the measure of maximal entropy of the constructed diffeomorphisms through Markov partitions. The following theorem summarizes the result in Section~\ref{section: increase max}. \end{itemize}
\begin{thmintro}(Theorem \ref{thm: increase max})\label{increase max in intro} Suppose $L_A$ is an Anosov automorphism on $\mathbb T^2$ which is induced by a matrix $A\in SL(2,\mathbb Z)$ with $\text{trace}(A)>2$. Let $\Lambda=h_{top}(L_A)$ and $H>0$. For any positive number $\gamma$, there exists a smooth family $\{g_s\}_{s\in[0,1]}$ of area-preserving Anosov diffeomorphisms on $\mathbb T^2$ homotopic to $L_A$ such that $g_0=L_A$ and the following hold: \begin{enumerate}[label=(\Alph*)] \item $g_s=L_A$ in a neighborhood of $(0,0)$ for all $s\in[0,1]$; \item $\Lambda-\gamma<\lambda_{abs}(g_s)\leq \Lambda$ for all $s\in[0,1]$; \item $\lambda_{mme}(g_1)>H$. \end{enumerate}
\end{thmintro}
\begin{itemize} \item In Section~\ref{section: decrease abs}, starting from the Anosov diffeomorphisms from the first construction with $\lambda_{mme}$ in some range of values, we modify them in a smooth way by a slow-down deformation near a fixed point to get Anosov diffeomorphisms realizing a curve in the set of possible pairs $(\lambda_{abs}, \lambda_{mme})$ arbitrarily close to Side C (see Lemma~\ref{lemma: decreasing abs path}). In this construction, we are able to keep the lower boundary of the realized pairs of exponents arbitrarily close to Side A (see Lemma~\ref{lemma: upper bound on lambda mme start from linear}) and the upper boundary above a line $\lambda_{mme}=T$ (see Lemma~\ref{bound lambda_mme below with variables}), where $T$ depends on the initial range of values of $\lambda_{mme}$ for the diffeomorphisms coming from the first construction. As a result, we produce a two-parametric family of Anosov diffeomorphisms coming from homotopic deformations covering any given rectangle within the semi-infinite strip that is the set of possible values of $(\lambda_{abs},\lambda_{mme})$. The following theorem summarizes the result in Section~\ref{section: decrease abs}. \end{itemize}
\begin{thmintro}(Theorem~\ref{decrease abs for family})\label{decrease abs in intro} Suppose $L_A$ and $\Lambda$ are as in Theorem~\ref{increase max in intro}. For any $H$ such that $\Lambda<H$ and any positive number $\gamma$, let $\{g_s\}_{s\in[0,1]}$ be a smooth family of area-preserving Anosov diffeomorphisms on $\mathbb T^2$ homotopic to $L_A$ obtained from Theorem~\ref{increase max in intro} applied with $\gamma$ and $H$, with the lower bound on $\lambda_{mme}(g_1)$ coming from Lemma~\ref{lemma: bound on mme in I} being larger than $H$. Then, there exists a constant $\tilde C$ such that for any $\sigma>0, S>\Lambda$ there exists
a smooth family $\{f_{s,t}\}_{(s,t)\in[0,1]\times[0,1]}$ of Anosov diffeomorphisms on $\mathbb T^2$ homotopic to $L_A$ such that: \begin{enumerate}[label=(\Alph*)] \item $f_{s,0}=g_s$ for all $s\in[0,1]$; \item $f_{s,t}$ preserves a probability measure $\mu_{s,t}$ which is absolutely continuous with respect to the Lebesgue measure;
\item $\lambda_{abs}(f_{s,1})<\gamma$ for all $s\in[0,1]$; \item $\lambda_{mme}(f_{0,t})<S$ for all $t\in[0,1]$; \item $\lambda_{mme}(f_{1,t})\geq H+\tilde C\sigma$.
\end{enumerate} \end{thmintro}
\begin{itemize} \item Let $\{f_{s,t}\}_{(s,t)\in[0,1]\times[0,1]}$ be the family of Anosov diffeomorphisms in Theorem~\ref{decrease abs in intro}. By the Dacorogna-Moser theorem \cite[Theorem, Appendix A]{HJJ}, there exists a $C^{\infty}$ family $\{\Psi_{s,t}\}$ of $C^{\infty}$ diffeomorphisms of $\mathbb T^2$ satisfying $\Psi_{s,t}^*\mu_{s,t} = \Leb$. Let $\tilde{f}_{s,t} = \Psi_{s,t} f_{s,t}\Psi_{s,t}^{-1}$; then $\left\{\tilde{f}_{s,t}\right\}$ is a $C^\infty$ family of Anosov diffeomorphisms on $\mathbb T^2$ that preserve $\Leb$. Since the conjugacy is smooth, we obtain $\lambda_{abs}(\tilde{f}_{s,t})=\lambda_{abs}(f_{s,t})$ and $\lambda_{mme}(\tilde{f}_{s,t})=\lambda_{mme}(f_{s,t})$. Therefore, we obtain Theorem~\ref{thm: decrease abs} by applying Theorem~\ref{decrease abs in intro} with $H=2T$ and sufficiently small $\sigma$.
\end{itemize}
\subsection{Further questions for Anosov diffeomorphisms of $\mathbb T^2$}
Interestingly, Theorem~\ref{main theorem} can be reformulated as a statement on flexibility for the pressure function among smooth area-preserving Anosov diffeomorphisms homotopic to a fixed Anosov automorphism as follows.
Let $\phi^f_t(x) = -t\log\left|Df|_{E_u(x)}\right|$ for any $x\in\mathbb T^2$. This is called the \textit{geometric potential}. The \textit{pressure function} for the potential $\phi_t^f$ is defined by \begin{equation*} P(\phi^f_t) = \sup\limits_{\mu}\left(h_\mu(f)+\int_{\mathbb T^2}\phi^f_t\,d\mu\right), \end{equation*}
where the supremum is taken over all $f$-invariant probability measures on $\mathbb T^2$ and $h_{\mu}(f)$ denotes the measure-theoretic entropy of $f$ with respect to $\mu$. It is known that $P(\phi^f_0) = h_{top}(f)$ and $P(\phi^f_1) = 0$. Also, $P(\phi^f_t)$ is a convex real analytic function of $t$ (see for example \cite[Sections 0.2 and 4.6]{RuelleThF} and \cite{BG}). Since $\int_{\mathbb T^2}\phi^f_t\,d\mu$ becomes the dominant term as $t$ tends to $\pm \infty$, we have that $P(\phi^f_t)$ has asymptotes as $t\rightarrow\pm\infty$. Moreover, $\frac{d}{dt}P(\phi^f_t)|_{t=0} = -\lambda_{mme}(f)$ and $\frac{d}{dt}P(\phi^f_t)|_{t=1} = -\lambda_{abs}(f)$. Thus, Theorem~\ref{main theorem} shows that we can vary the derivatives of the pressure function at $t=0$ and $t=1$. As a result, a more general flexibility question can be formulated in the setting of Theorem~\ref{main theorem}.
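For the automorphism $L$ itself this picture is degenerate: $\log\|DL|_{E^u}\|$ is the constant $h_{top}(L)$, so the geometric potential is $\phi^L_t\equiv -t\,h_{top}(L)$ and
\begin{equation*}
P(\phi^L_t)=\sup_{\mu} h_\mu(L)-t\,h_{top}(L)=(1-t)\,h_{top}(L),
\end{equation*}
an affine function with slope $-h_{top}(L)$ at every $t$, in agreement with $\lambda_{abs}(L)=\lambda_{mme}(L)=h_{top}(L)$.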
\begin{question}
Let $L$ be an Anosov automorphism on $\mathbb T^2$, and let $F\colon \mathbb R\rightarrow\mathbb R$ be a strictly convex real analytic function such that $F(0)=h_{top}(L)$, $F(1)=0$, $\frac{dF}{dt}|_{t=0}<-h_{top}(L)$, $\frac{dF}{dt}|_{t=1}\in(-h_{top}(L),0)$, and $F(t)$ has asymptotes as $t\rightarrow\pm\infty$. Does there exist a smooth area-preserving Anosov diffeomorphism $f$ homotopic to $L$ such that $P(\phi^f_t) = F(t)$? \end{question}
The answer to the above question will require different techniques than presented in this paper. If the answer is negative then it would be interesting to determine which extra conditions on the function must be satisfied. For example, do the higher derivatives of the pressure function \cite{KotaniSunada} provide any additional restrictions? Is there a finite list of conditions that must be added in order to obtain flexibility?
We can also consider a rigidity problem connected to the pressure function. Let $f$ be a smooth area-preserving Anosov diffeomorphism homotopic to an Anosov automorphism $L$. By work of de la Llave \cite{delaLlave} and of Marco and Moriy\'{o}n \cite{MM}, we have that if $\lambda_{abs}(f)=h_{top}(L)$, then $f$ and $L$ are $C^\infty$ conjugate. By the properties of the pressure functions discussed above, we have that if $P(\phi^f_t) = P(\phi^L_t)$ for all $t\in\mathbb R$, then $f$ and $L$ are $C^\infty$ conjugate. A natural question is if $L$ can be replaced by any smooth area-preserving Anosov diffeomorphism.
\begin{question} Let $f$ and $g$ be smooth area-preserving Anosov diffeomorphisms on $\mathbb T^2$ that are homotopic. Assume $P(\phi^f_t) = P(\phi^g_t)$ for all $t\in\mathbb R$. Does this imply that $f$ and $g$ are $C^\infty$ conjugate? \end{question}
\textbf{Acknowledgements.} The author would like to thank Andrey Gogolev for helpful discussions and Vaughn Climenhaga for pointing out some references. Question 6 was originally posed by Federico Rodriguez Hertz. The author is also grateful to Anatole Katok for suggesting the problem. Much of this work was completed while the author was at The Ohio State University. She is also grateful for NSF grant 1642548, which supported travel to present the result in this paper. Additionally, the author was partially supported by NSF grant 1547145. The author would like to thank the anonymous referee for thoroughly reading the initial version of the paper and for suggestions to improve the presentation.
\section{Construction I}\label{section: increase max}
In this section, we prove Theorem~\ref{thm: increase max} using several lemmas. We begin by showing how to deduce the theorem from these lemmas before stating and proving the lemmas themselves.
\begin{theorem}\label{thm: increase max} Suppose $L_A$ is an Anosov automorphism on $\mathbb T^2$ which is induced by a matrix $A\in SL(2,\mathbb Z)$ with $\text{trace}(A)>2$. Let $\Lambda=h_{top}(L_A)$ and $H>0$. For any positive number $\gamma$, there exists a smooth family $\{g_s\}_{s\in[0,1]}$ of area-preserving Anosov diffeomorphisms on $\mathbb T^2$ homotopic to $L_A$ such that $g_0=L_A$ and the following hold: \begin{enumerate}[label=(\Alph*)] \item $g_s=L_A$ in a neighborhood of $(0,0)$ for all $s\in[0,1]$; \item $\Lambda-\gamma<\lambda_{abs}(g_s)\leq \Lambda$ for all $s\in[0,1]$; \item $\lambda_{mme}(g_1)>H$. \end{enumerate} \end{theorem}
\begin{proof} Fix $m\in (0,1)$. Let $l\in(0,l_\gamma)$, where $l_\gamma$ comes from Lemma~\ref{lemma: bound on abs in I} applied for $\varepsilon=\gamma$. Let $\beta=\beta_0$ from Lemma~\ref{lemma: bound on mme in I}. Choose $\tilde\delta$ small enough such that $Q\log\mu^+\left(\frac{\beta_0l+\tilde\delta(1-\beta_0)}{\tilde\delta}\right)+(1-Q)\log C> H$, where $Q,C,\mu^+(\cdot)$ are as in Lemma~\ref{lemma: bound on mme in I}. Note that this is possible because $\mu^+(t)\rightarrow\infty$ as $t\rightarrow\infty$. Also, let $w\in (0,w_0(\tilde\delta))$. Then, the family of maps $F^w_{l,\delta}$ defined in \eqref{definition_twist_maps}, where $\delta$ varies in $[\tilde\delta, l]$, is the desired family. \end{proof}
\subsection{Construction I: Anosov twist diffeomorphisms}\label{construction family}
Here we give an explicit formula for a family of smooth area-preserving twist diffeomorphisms \eqref{definition_twist_maps} that are Anosov for an appropriate choice of parameters (see Lemma~\ref{Anosov diffeomorphism}). A smooth subfamily of these diffeomorphisms is used to prove Theorem~\ref{thm: increase max}.
Suppose $L_A$ is an Anosov automorphism on $\mathbb T^2$ which is induced by a matrix $A\in SL(2,\mathbb Z)$ such that $A = \begin{pmatrix} a & b\\ c & d \end{pmatrix}$ and $a+d>2$.
Fix $m\in(0,1)$, let $l\in(0,1-m)$ and $\delta\in(0,l)$. Choose $\beta\in(0,1)$ such that $a+d-|b|\beta>2$. Notice that such $\beta$ exists because $a+d>2$.
Define a function $f_{l,\delta}\colon [0,1]\rightarrow [0,1]$ (see Figure~\ref{f_ldelta}) in the following way:
\begin{equation}\label{function_twist} f_{l,\delta}(x) = \left\{ \begin{aligned} &(1-\beta)x+\beta m &\text{ if }\quad &m<x\leq m+l-\delta,\\ &\frac{\beta l+\delta(1-\beta)}{\delta}x-\frac{\beta(m+l)(l-\delta)}{\delta} &\text{ if } \quad&m+l-\delta<x\leq m+l,\\ &x &&\text{ otherwise. } \end{aligned} \right . \end{equation}
\begin{figure}
\caption{The graph of $f_{l,\delta}$.}
\label{f_ldelta}
\end{figure}
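The breakpoint values in \eqref{function_twist} match up, so $f_{l,\delta}$ is a continuous, strictly increasing bijection of $[0,1]$ fixing $m$ and $m+l$; this can be checked numerically (the parameter values below are illustrative, subject to $0<\delta<l<1-m$ and $\beta\in(0,1)$):

```python
import numpy as np

m, l, delta, beta = 0.2, 0.5, 0.1, 0.4    # illustrative parameters

def f(x):
    """The piecewise-linear twist function f_{l,delta}."""
    if m < x <= m + l - delta:
        return (1 - beta) * x + beta * m
    if m + l - delta < x <= m + l:
        return ((beta * l + delta * (1 - beta)) * x
                - beta * (m + l) * (l - delta)) / delta
    return x

eps = 1e-9
for x0 in (m, m + l - delta, m + l):              # the three breakpoints
    assert abs(f(x0 + eps) - f(x0 - eps)) < 1e-6  # continuity
assert f(m) == m and abs(f(m + l) - (m + l)) < 1e-12  # endpoints of the strip
xs = np.linspace(0.0, 1.0, 2001)
assert np.all(np.diff([f(x) for x in xs]) > 0)    # strictly increasing
```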
The following lemma allows us to obtain a smooth function that coincides with the given continuous piecewise linear function outside small neighborhoods of points of non-smoothness.
\begin{lemma}\label{twist_function} Let $f_{l,\delta}(x)$ be as in \eqref{function_twist} and $w\in(0,\frac{\delta}{4})$. Denote by $\hat f_{l,\delta}(x)$ the continuous map on $\mathbb R$ that is a lift of $f_{l,\delta}(x)$ such that $\hat f_{l,\delta}(0)=f_{l,\delta}(0)$. Let $\theta_w(x)$ be a smooth, nonnegative, even function on $\mathbb R$ such that $\int_{\mathbb R}\theta_w(y)dy=1$ and $\theta_w(x)=0$ if $x\not\in(-w,w)$. Define \begin{equation}\label{definition of f_w_l_delta} \hat f^w_{l,\delta}(x)=\int_{\mathbb R}\hat f_{l,\delta}(x-y)\theta_w(y)dy \text{ for any } x\in\mathbb R \qquad \text{ and }\qquad f^w_{l,\delta}(x)=\hat f^w_{l,\delta}(x) \pmod 1. \end{equation} Then $f^w_{l,\delta}$ is a smooth function on $\mathbb R/\mathbb Z$. Moreover, $f^w_{l,\delta}(x)=f_{l,\delta}(x)$ outside of $w$-neighborhoods of the points $x=m, \, m+l-\delta, \, m+l$.
In particular, \begin{equation}\label{derivative_twist} \begin{aligned} &1-\beta\leq D_xf^w_{l,\delta}\leq 1 &\text{ if }\quad & m-w<x\leq m+w,\\ &D_xf^w_{l,\delta}=1-\beta &\text{ if }\quad & m+w<x\leq m+l-\delta-w,\\ &1-\beta\leq D_xf^w_{l,\delta}\leq \frac{\beta l+\delta(1-\beta)}{\delta} &\text{ if }\quad & m+l-\delta-w<x\leq m+l-\delta+w,\\ &D_xf^w_{l,\delta}=\frac{\beta l+\delta(1-\beta)}{\delta} &\text{ if } \quad& m+l-\delta+w<x\leq m+l-w,\\ &1\leq D_xf^w_{l,\delta}\leq \frac{\beta l+\delta(1-\beta)}{\delta} &\text{ if }\quad &m+l-w<x\leq m+l+w,\\ &D_xf^w_{l,\delta}=1 &&\text{ otherwise. } \end{aligned} \end{equation} \end{lemma}
\begin{proof} See the proof of Lemma 3.1 in \cite{E}. \end{proof}
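The convolution in \eqref{definition of f_w_l_delta} can be sketched numerically: away from the three kinks $f_{l,\delta}$ is affine, and convolving an affine function with an even unit-mass kernel reproduces it exactly, which is why $f^w_{l,\delta}=f_{l,\delta}$ outside the $w$-neighborhoods. The parameter values below are illustrative:

```python
import numpy as np

m, l, delta, beta = 0.2, 0.5, 0.1, 0.4   # illustrative parameters
w = delta / 8                             # w < delta/4, as in the lemma

def f(x):
    if m < x <= m + l - delta:
        return (1 - beta) * x + beta * m
    if m + l - delta < x <= m + l:
        return ((beta * l + delta * (1 - beta)) * x
                - beta * (m + l) * (l - delta)) / delta
    return x

# standard bump function supported in (-w, w), normalized to integral 1
ys = np.linspace(-w, w, 4001)
dy = ys[1] - ys[0]
arg = 1.0 - (ys / w) ** 2
theta = np.zeros_like(ys)
theta[arg > 0] = np.exp(-1.0 / arg[arg > 0])
theta /= theta.sum() * dy

def f_w(x):
    """Mollified twist function: f^w(x) = int f(x - y) theta_w(y) dy."""
    return np.sum(np.array([f(x - y) for y in ys]) * theta) * dy

# f is affine on (x - w, x + w) whenever x is more than w away from the
# kinks m, m+l-delta, m+l, so there the mollification changes nothing:
for x in (m - 2 * w, m + l / 2, m + l - delta / 2, m + l + 2 * w):
    assert abs(f_w(x) - f(x)) < 1e-9
```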
We define $f^0_{l,\delta}=f_{l,\delta}$. For any sufficiently small $w\geq 0$, we consider a family of maps $F^w_{l,\delta}\colon \mathbb T^2\rightarrow \mathbb T^2$, where \begin{equation}\label{definition_twist_maps} F^w_{l,\delta}(x,y) = \begin{pmatrix}
(a-|b|)x+by+|b|f^w_{l,\delta}(x)\\(c-\sgn(b)d)x+dy+\sgn(b)df^w_{l,\delta}(x)\end{pmatrix} \pmod 1. \end{equation}
In particular, $F^w_{l,\delta}$ coincides with $L_A$ on $\left\{(x,y)\in\mathbb T^2 \mid x\in[0,m-w]\cup[m+l+w,1)\right\}$.
By the choice of $\beta$, Lemma~\ref{twist_function}, and Lemma~\ref{Anosov diffeomorphism} below, we have that for all $l\in(0,1-m)$, $\delta\in(0,l)$, and sufficiently small $w>0$ the map $F^w_{l,\delta}$ is an area-preserving Anosov diffeomorphism homotopic to $L_A$.
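That $F^w_{l,\delta}$ is area-preserving can also be seen directly: the Jacobian of \eqref{definition_twist_maps} depends on the point only through $t=D_xf^w_{l,\delta}(x)$, and its determinant is $ad-bc=1$ for every $t$. A small numerical check (the sample matrices and slope values are illustrative):

```python
import numpy as np

def jacobian(A, t):
    """Jacobian of the twist map at a point where D_x f^w_{l,delta} = t."""
    a, b = A[0]
    c, d = A[1]
    sb = np.sign(b)
    return np.array([[a - abs(b) + abs(b) * t, b],
                     [c - sb * d + sb * d * t, d]])

for A in (np.array([[2.0, 1.0], [1.0, 1.0]]),      # b > 0
          np.array([[1.0, -1.0], [-1.0, 2.0]])):   # b < 0, still det = 1
    assert abs(np.linalg.det(A) - 1.0) < 1e-12
    for t in (0.6, 1.0, 7.3):                      # arbitrary slopes
        assert abs(np.linalg.det(jacobian(A, t)) - 1.0) < 1e-12
print("det DF = 1: the twist maps are area-preserving")
```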
We will need the following general facts.
\begin{proposition}\label{eigenvector_eigenvalue_proposition} Let \begin{equation}\label{definition of A(t)}
A(t) = \begin{pmatrix} a+|b|(t-1) & b\\ c+\sgn(b)d(t-1) & d \end{pmatrix} \end{equation}
where $a,b,c,d,t\in\mathbb R$, $ad-bc=1$, and $a+d+(t-1)|b|>2$. Then, $\det(A(t))=1$ and the eigenvalues of $A(t)$ are \begin{equation}\label{eigenvalue}
\mu^\pm(t) = \frac{1}{2}\left(a+d+|b|(t-1)\pm\sqrt{\left(a+d+|b|(t-1)\right)^2-4}\right) \end{equation} with corresponding eigenvectors \begin{equation}\label{eigenvector}
\pmb e^{\pm}(t) = \begin{pmatrix} 2b\\ \phi^\pm\left(a+d+|b|(t-1)\right) \end{pmatrix}, \end{equation} where $\phi^{\pm}(u) = 2d-u\pm\sqrt{u^2-4}$ . In particular, $\mu^+(t)>1$ and $0<\mu^-(t)<1$. \end{proposition}
\begin{proof} Follows from straightforward computations. \end{proof}
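The formulas \eqref{eigenvalue} and \eqref{eigenvector} are easy to confirm numerically; the sketch below checks, for a sample matrix with $ad-bc=1$ and several admissible values of $t$, that $A(t)\pmb e^\pm(t)=\mu^\pm(t)\pmb e^\pm(t)$ and $\mu^+(t)\mu^-(t)=\det A(t)=1$:

```python
import numpy as np

a, b, c, d = 2.0, 1.0, 1.0, 1.0           # sample entries with ad - bc = 1
for t in (0.5, 1.0, 4.0):                  # any t with a+d+(t-1)|b| > 2
    u = a + d + abs(b) * (t - 1)           # trace of A(t)
    At = np.array([[a + abs(b) * (t - 1), b],
                   [c + np.sign(b) * d * (t - 1), d]])
    for s in (+1, -1):
        mu = 0.5 * (u + s * np.sqrt(u * u - 4))           # mu^{+/-}(t)
        e = np.array([2 * b, 2 * d - u + s * np.sqrt(u * u - 4)])  # e^{+/-}(t)
        assert np.allclose(At @ e, mu * e)                # eigenpair
    assert abs(np.linalg.det(At) - 1.0) < 1e-12           # mu^+ mu^- = 1
```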
\begin{lemma}\label{Anosov diffeomorphism}
Suppose $L_A$ is an Anosov area-preserving automorphism on $\mathbb T^2$ which is induced by a matrix $A = \begin{pmatrix} a & b\\ c & d \end{pmatrix}\in SL(2,\mathbb Z)$ with $\text{trace}(A)>2$. Let $f\colon \mathbb R/\mathbb Z \rightarrow \mathbb R/ \mathbb Z$ be a smooth diffeomorphism such that $f(0)=0$, $0<\alpha_1\leq D_xf$ for all $x\in\mathbb R/\mathbb Z$, and $a+d+(\alpha_1-1)|b|>2$. Define a map $F\colon \mathbb T^2\rightarrow\mathbb T^2$ in the following way: \begin{equation*}
F(x,y) = \begin{pmatrix}
(a-|b|)x+by+|b|f(x)\\(c-\sgn(b)d)x+dy+\sgn(b)df(x)\end{pmatrix} \pmod 1. \end{equation*} Then, $F$ is a smooth area-preserving Anosov diffeomorphism on $\mathbb T^2$.
Moreover, let $\tilde\alpha>0$ such that $\tilde\alpha<\alpha_1$ and $a+d+(\tilde\alpha-1)|b|>2$. Define the following vectors: \begin{align}\label{bounds_of_cones}
&\pmb v^+_{min}(t) = \begin{pmatrix} 2b\\ \phi^+\left(a+d+|b|t\right) \end{pmatrix}, \qquad \pmb v^+_{max} = \begin{pmatrix} b\\ d \end{pmatrix}, \\ &\pmb v^-_{min}(t) = \begin{pmatrix}
2b\\ \phi^-\left(a+d+|b|t\right)\end{pmatrix},\qquad \text{ and }\qquad \pmb v^-_{max} = \begin{pmatrix} 0\\-1 \end{pmatrix},\nonumber \end{align}
where $\phi^\pm$ are as in Proposition~\ref{eigenvector_eigenvalue_proposition} and $t>\frac{2-(a+d)}{|b|}$. Let $\mathcal C^\pm$ be the union of the positive cone spanned by $\pmb v^\pm_{min}(\tilde\alpha-1)$ and $\pmb v^\pm_{max}$ and its symmetric complement with vertex at $(0,0)$ in $\mathbb R^2$, respectively. Then, the cones $\mathcal C^\pm$ in $T_{(x,y)}\mathbb T^2$ for all $(x,y)\in\mathbb T^2$ define a system of invariant cones for $F$. Also, there exist $\mu>1>\nu>0$ that depend only on the entries of $A$ and $\tilde\alpha$ such that for all $\pmb v\in\mathcal C^+$ we have $\|D_{(x,y)}F\pmb v\|\geq \mu\|\pmb v\|$ and for all $\pmb v\in\mathcal C^-$ we have $\|D_{(x,y)}F^{-1}\pmb v\|\geq \nu^{-1}\|\pmb v\|$ for any $(x,y)\in\mathbb T^2$. \end{lemma}
\begin{remark}
We apply Lemma~\ref{Anosov diffeomorphism} for the family of functions $f^w_{l,\delta}$ (see \eqref{definition of f_w_l_delta}) which satisfy $f^w_{l,\delta}(0)=0$ and $D_xf^w_{l,\delta}\geq 1-\beta$ for all $x\in\mathbb R/\mathbb Z$ (see \eqref{derivative_twist}). Recall that we only work with $\beta\in(0,1)$ such that $a+d-|b|\beta>2$. \end{remark}
\begin{proof} Notice that $F$ is a smooth map from $\mathbb T^2$ to $\mathbb T^2$ with Jacobian determinant equal to $1$, so it is an area-preserving diffeomorphism.
We show that $F$ is Anosov using invariant cones \cite[Corollary 6.4.8]{KatokHasselblatt}.
Let $A(t)$ be as in \eqref{definition of A(t)}, that is, $A(t) = \begin{pmatrix} a+|b|(t-1) & b\\ c+\sgn(b)d(t-1) & d \end{pmatrix}$. In particular, we have $D_{(x,y)}F = A(D_xf)$ for any $(x,y)\in\mathbb T^2$.
We point out some properties of $\phi^\pm$. First, we have that $\phi^+(u)$ is monotonically increasing and $\phi^-(u)$ is monotonically decreasing for $u>2$. In particular, $2(d-1)\leq\phi^+(u)<2d$ for $u>2$. Also, both functions are smooth for $u>2$. Moreover, $\lim\limits_{u\rightarrow +\infty}\phi^+(u)=2d$ and $\lim\limits_{u\rightarrow +\infty}\phi^-(u)=-\infty$. Therefore, we obtain that $\mathcal C^+\cap\mathcal C^- = \emptyset$. Moreover, for any $t\geq\alpha_1>\tilde\alpha$, we have \begin{equation}\label{invariant cone}
A(t)\mathcal C^+\subset \Int\, (\mathcal C^+) \qquad \text{and} \qquad A(t)^{-1}\mathcal C^-\subset \Int\, (\mathcal C^-), \end{equation} where $\Int$ stands for the interior of the set. See Figure~\ref{figure: cones} for an example of $\mathcal C^\pm$.
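The cone invariance \eqref{invariant cone} can be illustrated numerically. In the sketch below the values $A=\begin{pmatrix}2&1\\1&1\end{pmatrix}$ and $\tilde\alpha=0.5$ are illustrative assumptions; we check that images under $A(t)$, $t>\tilde\alpha$, of vectors in $\mathcal C^+$ have strictly positive coordinates in the basis $(\pmb v^+_{min}(\tilde\alpha-1),\pmb v^+_{max})$, i.e.\ they land strictly inside the cone:

```python
import numpy as np

# Illustrative sample data (not from the text): A = [[2,1],[1,1]], alpha_tilde = 0.5.
a, b, c, d = 2, 1, 1, 1
alpha_tilde = 0.5

def A(t):
    return np.array([[a + abs(b)*(t-1), b], [c + np.sign(b)*d*(t-1), d]], float)

def phi(u, sign):
    return 2*d - u + sign*np.sqrt(u*u - 4)

v_min = np.array([2*b, phi(a + d + abs(b)*(alpha_tilde - 1), +1)])
v_max = np.array([b, d], float)
B = np.column_stack([v_min, v_max])   # C^+ = {s1*v_min + s2*v_max : s1*s2 >= 0}

def strictly_inside(v):
    s1, s2 = np.linalg.solve(B, v)    # coordinates in the (v_min, v_max) basis
    return s1 * s2 > 0                # both strictly positive (or both negative)

for t in [alpha_tilde + 0.1, 1.0, 2.0]:
    for lam in np.linspace(0, 1, 11):
        v = lam * v_min + (1 - lam) * v_max   # sweep vectors across the cone
        assert strictly_inside(A(t) @ v)
```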
We consider the inner product of $\pmb v^+_{min}(\tilde\alpha-1)$ and $\pmb v^+_{max}$ \begin{align}\label{dot product minmax}
\langle\pmb v^+_{min}(\tilde\alpha-1), \pmb v ^+_{max}\rangle = 2b^2+d\phi^+(a+d+|b|(\tilde\alpha-1))\geq \left\{\begin{aligned}&2b^2+2d(d-1) &\text { if } d>0,\\ &2b^2+2d^2 &\text{ if } d\leq 0.\end{aligned}\right. \end{align} By \eqref{dot product minmax}, using the fact that $d\in\mathbb Z$, we have that $\langle\pmb v^+_{min}(\tilde\alpha-1), \pmb v ^+_{max}\rangle >0$.
Recall that $\pmb v^+_{min}(\tilde\alpha-1)$ is an eigenvector of $A(\tilde\alpha)$ with an eigenvalue $\mu^+(\tilde\alpha)>1$. Therefore, \begin{align*}
\|A(t)\pmb v^+_{min}(\tilde\alpha-1)\|^2 &= \left\|\left[A(\tilde\alpha)+\begin{pmatrix}|b|(t-\tilde\alpha) &0\\ \sgn(b)d(t-\tilde\alpha) &0 \end{pmatrix}\right]\pmb v^+_{min}(\tilde\alpha-1)\right\|^2 \\&= \left\|A(\tilde\alpha)\pmb v^+_{min}(\tilde\alpha-1)+2(t-\tilde\alpha)b\begin{pmatrix}|b|\\\sgn(b)d\end{pmatrix}\right\|^2 \\&= \left\|\mu^+(\tilde\alpha)\pmb v^+_{min}(\tilde\alpha-1)+2|b|(t-\tilde\alpha)\pmb v^+_{max}\right\|^2\\&= \left(\mu^+(\tilde\alpha)\|\pmb v^+_{min}(\tilde\alpha-1)\|\right)^2+4\mu^+(\tilde\alpha)(t-\tilde\alpha)|b|\langle \pmb v^+_{min}(\tilde\alpha-1), \pmb v^+_{max}\rangle+\left(2(t-\tilde\alpha)|b|\|\pmb v^+_{max}\|\right)^2. \end{align*}
Therefore, since $\mu^+(\tilde\alpha)>1$ and $\langle\pmb v^+_{min}(\tilde\alpha-1), \pmb v ^+_{max}\rangle >0$, for any $t>\tilde\alpha$ we have $\|A(t)\pmb v^+_{min}(\tilde\alpha-1)\|\geq \mu^+(\tilde\alpha)\|\pmb v^+_{min}(\tilde\alpha-1)\|$.
Moreover, using that $ad-bc=1$, $d\in\mathbb Z$, and $a+|b|(t-1)+d>2$ for any $t>\tilde\alpha$, we have \begin{align*}
\|A(t)\pmb v^+_{max}\|^2 &= \left\|\begin{pmatrix} a+|b|(t-1) & b\\ c+\sgn(b)d(t-1) & d \end{pmatrix}\begin{pmatrix}b\\d\end{pmatrix}\right\|^2 = \left\|\begin{pmatrix}b(a+|b|(t-1)+d)\\bc+|b|d(t-1)+d^2\end{pmatrix}\right\|^2 \\ &= \left\|\begin{pmatrix}b(a+|b|(t-1)+d)\\-1+d(a+|b|(t-1)+d)\end{pmatrix}\right\|^2\\&= b^2\left(a+d+|b|(t-1)\right)^2+\left(d\left(a+d+|b|(t-1)\right)-1\right)^2\geq 4b^2+(2d-1)^2 \\&= \left(1+\frac{3b^2+3d^2-4d+1}{b^2+d^2}\right)\|\pmb v^+_{max}\|^2. \end{align*}
In particular, $\|A(t)\pmb v^+_{max}\|\geq \mu^+_{max}\|\pmb v^+_{max}\|$, where $\mu^+_{max} = \left(1+\frac{3b^2+3d^2-4d+1}{b^2+d^2}\right)^{\frac{1}{2}}>1$ because $3d^2-4d+1\geq 0$ for $d\in\mathbb Z$ and $b\neq 0$ as $a,d\in\mathbb Z$, $ad-bc=1$, and $\text{trace}(A)>2$.
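The squared-norm bound in the display above admits a quick numerical check. The sample values $a,b,c,d=2,1,1,1$ and $\tilde\alpha=0.5$ below are illustrative assumptions; we verify $\|A(t)\pmb v^+_{max}\|^2\geq\left(1+\frac{3b^2+3d^2-4d+1}{b^2+d^2}\right)\|\pmb v^+_{max}\|^2$ over a range of $t>\tilde\alpha$:

```python
import numpy as np

# Illustrative sample data (not from the text): A = [[2,1],[1,1]], alpha_tilde = 0.5.
a, b, c, d = 2, 1, 1, 1
alpha_tilde = 0.5

def A(t):
    return np.array([[a + abs(b)*(t-1), b], [c + np.sign(b)*d*(t-1), d]], float)

v_max = np.array([b, d], float)
factor_sq = 1 + (3*b*b + 3*d*d - 4*d + 1) / (b*b + d*d)   # squared expansion factor

for t in np.linspace(alpha_tilde + 0.01, 5.0, 50):
    lhs = np.linalg.norm(A(t) @ v_max)**2
    assert lhs >= factor_sq * np.linalg.norm(v_max)**2 - 1e-9
```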
Let $\mu = \min\{\mu^+(\tilde\alpha), \mu^+_{max}\}>1$. By the properties of $\phi^+$ and the fact that $\text{trace}(A(t))>2$ for $t>\tilde\alpha$, we have that the expanding eigenvectors of $A(t)$ for any $t>\tilde\alpha$ belong to $\mathcal C^+$ (see \eqref{eigenvector}). Moreover, for any $t>\tilde\alpha$, if the set of all vectors in $\mathbb R^2$ that expand at least $\mu$ times by $A(t)$ is non-empty, then it is a cone containing an expanding eigenvector of $A(t)$. As shown above, $\pmb v^+_{min}(\tilde\alpha-1)$ and $\pmb v^+_{max}$ expand at least $\mu$ times by $A(t)$ for any $t>\tilde\alpha$. Since $\mathcal C^+$ is the cone spanned by $\pmb v^+_{min}(\tilde\alpha-1)$ and $\pmb v^+_{max}$, we have $\|A(t)\pmb v\|\geq \mu\|\pmb v\|$ for all $\pmb v\in\mathcal C^+$ and for all $t>\tilde\alpha$.
Similarly, it can be shown that there exists $\nu\in(0,1)$ such that for all $\pmb v\in\mathcal C^-$, we have $\|A(t)^{-1}\pmb v\|\geq\nu^{-1}\|\pmb v\|$ for all $t>\tilde\alpha$.
Since $D_{(x,y)}F = A(D_xf)$ for any $(x,y)\in\mathbb T^2$ and $D_xf\geq\alpha_1>\tilde\alpha$, by the invariant-cone criterion for a map to be Anosov, we obtain that $F$ is Anosov. \end{proof}
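The uniform expansion rate $\mu=\min\{\mu^+(\tilde\alpha),\mu^+_{max}\}$ obtained in the proof can also be probed numerically. The sketch below (sample matrix $A=\begin{pmatrix}2&1\\1&1\end{pmatrix}$ and $\tilde\alpha=0.5$ are illustrative assumptions) sweeps vectors across $\mathcal C^+$ and checks $\|A(t)\pmb v\|\geq\mu\|\pmb v\|$ for a range of $t>\tilde\alpha$:

```python
import numpy as np

# Illustrative sample data (not from the text): A = [[2,1],[1,1]], alpha_tilde = 0.5.
a, b, c, d = 2, 1, 1, 1
alpha_tilde = 0.5

def A(t):
    return np.array([[a + abs(b)*(t-1), b], [c + np.sign(b)*d*(t-1), d]], float)

def phi(u, sign):
    return 2*d - u + sign*np.sqrt(u*u - 4)

def mu_plus(t):
    u = a + d + abs(b)*(t - 1)
    return 0.5*(u + np.sqrt(u*u - 4))

v_min = np.array([2*b, phi(a + d + abs(b)*(alpha_tilde - 1), +1)])
v_max = np.array([b, d], float)

mu_max = np.sqrt(1 + (3*b*b + 3*d*d - 4*d + 1)/(b*b + d*d))
mu_exp = min(mu_plus(alpha_tilde), mu_max)     # uniform expansion rate mu > 1
assert mu_exp > 1

for t in np.linspace(alpha_tilde + 0.05, 4.0, 40):
    for lam in np.linspace(0, 1, 11):
        v = lam*v_min + (1 - lam)*v_max        # sweep vectors across the cone
        assert np.linalg.norm(A(t) @ v) >= mu_exp*np.linalg.norm(v) - 1e-9
```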
\begin{figure}
\caption{The cones $\mathcal C^\pm$ for $F$ obtained from $A=\usebox{\smlmat}$ with $\tilde\alpha=0.5$. Here $\pmb e^\pm(1)$ are eigenvectors of $A$.}
\label{figure: cones}
\end{figure}
\subsection{Estimation of $\lambda_{abs}$ in Construction I}\label{section: estimate_abs} The goal of this section is to prove the following lemma. \begin{lemma}\label{lemma: bound on abs in I}
Consider the smooth area-preserving Anosov diffeomorphisms $F^w_{l,\delta}\colon\mathbb T^2\rightarrow \mathbb T^2$ defined in \eqref{definition_twist_maps} (see also Lemma~\ref{twist_function}) using $A=\begin{pmatrix}a & b\\ c& d\end{pmatrix}\in SL(2,\mathbb Z)$ with $a+d>2$. Then, for any $\varepsilon>0$ there exists $l_{\varepsilon}>0$ such that for any $0<l<l_{\varepsilon}$, any $0<\beta<\frac{a+d-2}{|b|}$, any $\delta\in(0,l)$, and any $w\in(0,\frac{\delta}{4})$, we have $\lambda_{abs}(F^w_{l,\delta})>\Lambda-\varepsilon$, where $\Lambda = h_{top}(L_A) = \log(\mu^+(1))$ (see \eqref{eigenvalue}). \end{lemma} \begin{proof} By Lemma~\ref{Anosov diffeomorphism}, we have the following. For each point $p\in\mathbb T^2$, let $\mathcal C_p^\pm$ be the union of the positive cone in the tangent space at $p$ spanned by $\pmb v^\pm_{min}(-\beta)$ and $\pmb v^\pm_{max}$ (see \eqref{bounds_of_cones}) and its symmetric complement. Then, $\mathcal C_p^+\cap \mathcal C_p^- = \emptyset$, \begin{equation}\label{invariant cone direction} D_p(F^w_{l,\delta})\mathcal C_p^+\subset \mathcal C_{F^w_{l,\delta}(p)}^+, \,\text{ and } \,D_p\left(F^w_{l,\delta}\right)^{-1}\mathcal C_p^-\subset \mathcal C_{\left(F^w_{l,\delta}\right)^{-1}(p)}^-. \end{equation} Moreover, there exists $\mu>1$ that depends only on $A$ and $\beta$ such that \begin{equation}\label{general expansion}
\|D_p(F^w_{l,\delta})\pmb v\|\geq \mu\|\pmb v\| \text{ for any } \pmb v\in\mathcal C_p^+. \end{equation}
Let $\pmb v^u = \pmb e^+(1)$ and $\pmb v^s = \pmb e^-(1)$ (see \eqref{eigenvector}). Then,
\begin{align}\label{boundaries of cone in basis of eigenvectors} \pmb v^+_{max} &= c^u_{max}\pmb v^u+ c^s_{max}\pmb v ^s, \text{ where } \begin{pmatrix} c^u_{max}\\c^s_{max} \end{pmatrix} = \frac{1}{4 \sqrt{(a+d)^2-4}}\begin{pmatrix} (a+d)+\sqrt{(a+d)^2-4}\\-(a+d)+\sqrt{(a+d)^2-4}\end{pmatrix},\\ \pmb v^+_{min}(-\beta) &= c^u_{min}\pmb v^u+ c^s_{min}\pmb v ^s, \text{ where } \begin{pmatrix} c^u_{min}\\c^s_{min} \end{pmatrix} = \frac{1}{2
\sqrt{(a+d)^2-4}}\begin{pmatrix} \phi^+(a+d-|b|\beta)-\phi^-(a+d)\\\phi^+(a+d)-\phi^+(a+d-|b|\beta)\end{pmatrix},\nonumber \end{align} where $\phi^+, \phi^-$ are as in Proposition~\ref{eigenvector_eigenvalue_proposition}.
Moreover, any $\pmb v\in\mathcal C_p^+$ can be written in the form $\pmb v = \alpha_1\pmb v^+_{max}+\alpha_2\pmb v^+_{min}(-\beta)$, where $\alpha_1\alpha_2\geq 0$. In particular, for any $n\in\mathbb N$, we have $\frac{\|A^n\pmb v\|}{\|\pmb v\|}\geq\min\left\{\frac{\|A^n\pmb v^+_{max}\|}{\|\pmb v^+_{max}\|},\frac{\|A^n\pmb v^+_{min}(-\beta)\|}{\|\pmb v^+_{min}(-\beta)\|}\right\}$ if $\pmb v \neq \pmb 0$. Thus, for any $\pmb v\in\mathcal C_p^+$
\begin{equation*}
\|A^n\pmb v\|\geq e^{\Lambda n} \|\pmb v^u\| \left|\sin\angle(\pmb v^u, \pmb v^s)\right|\min\left\{\frac{c^u_{max}}{\|\pmb v^+_{max}\|},\frac{c^u_{min}}{\|\pmb v^+_{min}(-\beta)\|}\right\} \|\pmb v\|. \end{equation*}
Notice that $\frac{c^u_{\max}}{\|\pmb v^+_{\max}\|}$ depends only on $A$. Also, we have \begin{equation*}
\frac{c^u_{\min}}{\|\pmb v^+_{\min}(-\beta)\|} = \|\pmb v^u\|^{-1}\left(1+2\frac{c^s_{\min}}{c^u_{\min}}\cdot\frac{\|\pmb v^s\|}{\|\pmb v^u\|}\cos\angle(\pmb v^u,\pmb v^s)+\left(\frac{c^s_{\min}}{c^u_{\min}}\cdot\frac{\|\pmb v^s\|}{\|\pmb v^u\|}\right)^2\right)^{-\frac{1}{2}}, \end{equation*} where \begin{equation*}
\frac{c^s_{\min}}{c^u_{\min}} = \frac{2\sqrt{(a+d)^2-4}}{\phi^+(a+d-|b|\beta)-\phi^-(a+d)}-1. \end{equation*} Since $\phi^+(u)$ is monotonically increasing and $2(d-1)\leq \phi^+(u)<2d$ for $u>2$, we have that \begin{equation*} \frac{2\sqrt{(a+d)^2-4}}{(a+d)+\sqrt{(a+d)^2-4}}-1<\frac{c^s_{\min}}{c^u_{\min}}\leq\frac{2\sqrt{(a+d)^2-4}}{(a+d)-2+\sqrt{(a+d)^2-4}}-1. \end{equation*}
As a result, there exists a constant $C>0$ that depends only on $A$ such that \begin{equation}\label{expansion under A}
\|A^n\pmb v\|\geq e^{\Lambda n} C\|\pmb v\|. \end{equation}
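The bound \eqref{expansion under A} can be illustrated numerically: along the cone, $\|A^n\pmb v\|$ keeps pace with $e^{\Lambda n}$ up to a uniform positive constant. The sample values below ($A=\begin{pmatrix}2&1\\1&1\end{pmatrix}$, $\beta=0.2$) are illustrative assumptions:

```python
import numpy as np

# Illustrative sample data (not from the text): A = [[2,1],[1,1]], beta = 0.2.
a, b, c, d = 2, 1, 1, 1
beta = 0.2
A = np.array([[a, b], [c, d]], float)
Lambda = np.log(0.5*((a + d) + np.sqrt((a + d)**2 - 4)))   # Lambda = log mu^+(1)

def phi(u, sign):
    return 2*d - u + sign*np.sqrt(u*u - 4)

v_min = np.array([2*b, phi(a + d - abs(b)*beta, +1)])      # v^+_min(-beta)
v_max = np.array([b, d], float)

ratios = []
for lam in np.linspace(0, 1, 11):
    v = lam*v_min + (1 - lam)*v_max                        # vector in the cone
    w = v.copy()
    for n in range(1, 15):
        w = A @ w
        ratios.append(np.linalg.norm(w) / (np.exp(Lambda*n)*np.linalg.norm(v)))

C_emp = min(ratios)        # empirical lower bound for the constant C
assert C_emp > 0.5
```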
By the Oseledec multiplicative ergodic theorem, we obtain that \begin{equation}\label{lambda_abs}
\lambda_{abs}(F^w_{l,\delta}) = \lim\limits_{n\rightarrow \infty}\frac{\log\|D_{p}\left(F^w_{l,\delta}\right)^n\pmb v\|}{n} \end{equation} for almost every $p\in\mathbb T^2$ with respect to the Lebesgue measure and any nonzero $\pmb v\in\mathcal C^+_p$.
Let $\mathcal S = \left\{(x,y)\in\mathbb T^2|m-w\leq x\leq m+l+w\right\}$. We have $\mathrm{Leb}(\mathcal S)=l+2w$, where $\mathrm{Leb}$ is the normalized Lebesgue measure. Moreover, we recall that $F^w_{l,\delta}=L_A$ on $\mathbb T^2\setminus\mathcal S$, in particular, $D_xF^w_{l,\delta}=A$ for all $x\in\mathbb T^2\setminus \mathcal S$. Consider $p\in\mathbb T^2$ and a natural number $n$. We write \[ n = \sum\limits_{j=1}^sn_j, \] where $n_1\in\{0\}\cup\mathbb N$ and $n_j\in\mathbb N$ for $j=2,\ldots,s$ are chosen in the following way: \begin{enumerate}
\item The number $n_1$ is the first moment when $\left(F^w_{l,\delta}\right)^{n_1}(p)\in \mathcal S$;
\item The number $n_2$ is such that the number $n_1+n_2$ is the first moment when $\left(F^w_{l,\delta}\right)^{n_1+n_2}(p)\in \mathbb T^2\setminus \mathcal S$;
\item The rest of the numbers are defined in the same way. For any $k\in\mathbb N$, the number $n_{2k+1}$ is such that the number $\sum\limits_{j=1}^{2k+1}n_j$ is the first moment when $\left(F^w_{l,\delta}\right)^{\sum\limits_{j=1}^{2k+1}n_j}(p)\in \mathcal S$, and the number $n_{2k+2}$ is such that the number $\sum\limits_{j=1}^{2k+2}n_j$ is the first moment when $\left(F^w_{l,\delta}\right)^{\sum\limits_{j=1}^{2k+2}n_j}(p)\in \mathbb T^2\setminus\mathcal S$. \end{enumerate}
Let $\pmb v\in\mathcal C_p^+$ and $\|\pmb v\|=1$. Then, we have \begin{equation*}
\log\|D_{p}\left(F^w_{l,\delta}\right)^n\pmb v\| =\sum\limits_{j=1}^s\log\|D_{\left(F^w_{l,\delta}\right)^{n_1+n_2+\ldots+n_{j-1}}(p)}\left(F^w_{l,\delta}\right)^{n_j}\pmb v_j\|, \end{equation*}
where $\pmb v_1=\pmb v$, $\pmb v_2 = \frac{D_{p}\left(F^w_{l,\delta}\right)^{n_1}\pmb v_1}{\|D_{p}\left(F^w_{l,\delta}\right)^{n_1}\pmb v_1\|}$, and $\pmb v_j = \frac{D_{\left(F^w_{l,\delta}\right)^{n_1+n_2+\ldots+n_{j-2}}(p)}\left(F^w_{l,\delta}\right)^{n_{j-1}}\pmb v_{j-1}}{\|D_{\left(F^w_{l,\delta}\right)^{n_1+n_2+\ldots+n_{j-2}}(p)}\left(F^w_{l,\delta}\right)^{n_{j-1}}\pmb v_{j-1}\|}$ for $j=3, \ldots, s.$ In particular, $\|\pmb v_j\|=1$ for $j=1,\ldots, s$.
Using \eqref{general expansion} and \eqref{expansion under A}, we obtain for $k\in\mathbb N$ \begin{align*}
\|D_{\left(F^w_{l,\delta}\right)^{n_1+n_2+\ldots+n_{j-1}}(p)}\left(F^w_{l,\delta}\right)^{n_j}\pmb v_j\|&\geq e^{\Lambda n_j}C \qquad &\text{if}\qquad &j=2k-1,\\
\|D_{\left(F^w_{l,\delta}\right)^{n_1+n_2+\ldots+n_{j-1}}(p)}\left(F^w_{l,\delta}\right)^{n_j}\pmb v_j\|&\geq \mu^{n_j} \qquad &\text{if}\qquad &j=2k. \end{align*}
As a result, \begin{equation*}
\log\|D_{p}\left(F^w_{l,\delta}\right)^n\pmb v\| \geq \left[\frac{s}{2}\right]\log C+\Lambda\sum\limits_{k=1}^{[\frac{s}{2}]}n_{2k-1}+(\log\mu)\sum\limits_{k=1}^{[\frac{s}{2}]}n_{2k}. \end{equation*}
Since $F^w_{l,\delta}$ is a smooth Anosov diffeomorphism, by Birkhoff's Ergodic Theorem we obtain that \begin{equation*} \frac{1}{n}\sum\limits_{k=1}^{[\frac{s}{2}]}n_{2k-1}\rightarrow (1-l-2w) \,\,\text{ and } \,\,\frac{1}{n}\sum\limits_{k=1}^{[\frac{s}{2}]}n_{2k}\rightarrow (l+2w)\,\, \text{ as }\,\, n\rightarrow\infty. \end{equation*} Moreover, each visit to $\mathcal S$ lasts at least one iterate, so $\limsup\limits_{n\rightarrow\infty}\left(\frac{1}{n}\left[\frac{s}{2}\right]\right)\leq (l+2w)$.
Therefore, since $\mu>1$, \begin{align*}
\lambda_{abs}(F^w_{l,\delta})&\geq (l+2w)\min\{\log(C\mu),\log\mu\} + \Lambda(1-l-2w)\\ &\geq (l+2w)\min\{\log(C),0\} + \Lambda(1-l-2w)\\&\rightarrow \Lambda \text{ as } l\rightarrow 0 \text{ since } 0<w<\frac{l}{4}. \end{align*}
\end{proof}
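The visit-frequency step of the proof (an orbit spends a fraction of time in a strip proportional to its Lebesgue measure) can be sketched numerically. We use the unperturbed automorphism $L_A$ with the illustrative sample values $A=\begin{pmatrix}2&1\\1&1\end{pmatrix}$, $m=0.3$, $l=0.25$ (assumptions, not from the text); since $L_A$ preserves Lebesgue measure, uniformly sampled orbits visit the strip with frequency close to $l$:

```python
import numpy as np

# Illustrative sample data (not from the text): cat map A = [[2,1],[1,1]],
# strip S = {m <= x <= m + l} with m = 0.3, l = 0.25.
rng = np.random.default_rng(0)
A = np.array([[2, 1], [1, 1]], float)
m, l = 0.3, 0.25

pts = rng.random((2000, 2))           # 2000 uniform starting points on T^2
hits, total = 0, 0
for _ in range(50):                   # 50 iterates of each point
    pts = (pts @ A.T) % 1.0           # apply L_A mod 1
    hits += np.count_nonzero((pts[:, 0] >= m) & (pts[:, 0] <= m + l))
    total += len(pts)

# empirical visit frequency should be close to Leb(S) = l
assert abs(hits / total - l) < 0.03
```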
\subsection{Estimation of $\lambda_{mme}$ in Construction I}\label{section: estimate_mme}
The results of this section can be summarized in the following lemma. \begin{lemma}\label{lemma: bound on mme in I}
Suppose $L_A$ is an Anosov area-preserving automorphism on $\mathbb T^2$ which is induced by a matrix $A=\begin{pmatrix} a & b\\ c& d\end{pmatrix}\in SL(2,\mathbb Z)$ with $\text{trace}(A)>2$. Fix $m\in(0,1)$ and let $l\in(0,1-m)$ (see the setting of Section~\ref{construction family}). There exists $\beta_0\in\left(0,\frac{a+d-2}{|b|}\right)$ such that there exist a constant $C$ and $\delta_0\in(0,l)$ such that there exists $Q>0$ with the following property. Let $\delta\in(0,\delta_0)$. Then, there exists $w_0=w_0(\delta)$ such that for any $w\in(0,w_0]$ the smooth area-preserving Anosov diffeomorphism $F^w_{l,\delta}\colon\mathbb T^2\rightarrow \mathbb T^2$ defined in \eqref{definition_twist_maps} with the parameter $\beta=\beta_0$ has the following properties: \begin{enumerate} \item $F^w_{l,\delta} = A\left(\frac{\beta_0 l+\delta(1-\beta_0)}{\delta}\right)$ in $S_2^{w_0}(\delta)$ where $A$ is defined in \eqref{definition of A(t)} and $S_2^{w_0}(\delta)$ is defined in \eqref{fast strip in smoothed}; \item $\nu_{F^w_{l,\delta}}(S_2^{w_0}(\delta))\geq Q$ where $\nu_{F^w_{l,\delta}}$ is the measure of maximal entropy of $F^w_{l,\delta}$; \item $\lambda_{mme}(F^w_{l,\delta})\geq Q\log\mu^+\left(\frac{\beta_0l+\delta(1-\beta_0)}{\delta}\right)+(1-Q)\log C$ where $\mu^+(\cdot)$ is defined in \eqref{eigenvalue}. \end{enumerate}
\end{lemma}
The key ingredient to estimate $\lambda_{mme}$ is the construction of a Markov partition, which allows us to represent a dynamical system by a symbolic system (see \cite[Section 18.7]{KatokHasselblatt} for more details). We use the Adler-Weiss construction \cite{AdlerWeiss} to build a Markov partition for $F^w_{l,\delta}$.
Let $p\in\mathbb T^2$. Then, \begin{equation*}
W_w^s(p) = \{z\in\mathbb T^2\,|\, \lim\limits_{n\rightarrow\infty}\dist\left(\left(F^w_{l,\delta}\right)^n(z),\left(F^w_{l,\delta}\right)^n(p)\right)=0\} \end{equation*}
and \begin{equation*}
W_w^u(p) = \{z\in\mathbb T^2\,|\, \lim\limits_{n\rightarrow\infty}\dist\left(\left(F^w_{l,\delta}\right)^{-n}(z),\left(F^w_{l,\delta}\right)^{-n}(p)\right)=0\} \end{equation*} are the stable and unstable manifolds of $F^w_{l,\delta}$ at $p$, respectively. Moreover, for any $\varepsilon>0$ let $W_w^{i}(p,\varepsilon)$ be the $\varepsilon$-neighborhood of $p$ in $W_w^{i}(p)$, where $i=u,s$. We denote $F_{l,\delta}=F^0_{l,\delta}$, and $W^i(p)=W^i_0(p)$ for $i=u,s$.
Let $\pmb v^u = \pmb e^+(1)$ and $\pmb v^s = \pmb e^-(1)$ (see \eqref{eigenvector}). Since $F^w_{l,\delta}=L_A$ in a neighborhood of $(0,0)$ by the construction of $F^w_{l,\delta}$, there exists $\kappa>0$ such that for any sufficiently small $w\geq 0$, we have \begin{equation*}
W_w^{i}((0,0),\kappa) = \left\{(x,y)\,|\, \dist((x,y),(0,0))\leq\kappa, \, \left\langle \begin{pmatrix}-y\\x\end{pmatrix},\pmb v^i\right\rangle=0\right\} \text{ for } i=u,s. \end{equation*} Moreover, since $(0,0)$ is a fixed point for $F^w_{l,\delta}$, we have \begin{equation}\label{stable_unstable_mfld_image} W^s_w((0,0)) = \bigcup\limits_{n\in\mathbb N}\left(F^w_{l,\delta}\right)^{-n}\left(W_w^{s}((0,0),\kappa)\right)\, \text{ and } \, W^u_w((0,0)) = \bigcup\limits_{n\in\mathbb N}\left(F^w_{l,\delta}\right)^{n}\left(W_w^{u}((0,0),\kappa)\right). \end{equation}
\subsubsection*{Markov partition for $F_{l,\delta}$.}
Let us draw segments of $W^u((0,0))$ and $W^s((0,0))$ until they cross sufficiently many times and separate $\mathbb T^2$ into two disjoint (curvilinear) rectangles $R_1, R_2$. Define $\mathcal R$ to be the partition into rectangles determined by $R_i\cap F_{l,\delta}(R_j)$, where $i,j=1,2$. See Figure~\ref{initial_Markov} for an example. For $n\in\mathbb N$ let $\mathcal R^n$ be the partition into rectangles generated by $\left(F_{l,\delta}\right)^i(R')\cap \left(F_{l,\delta}\right)^j(R'')$, where $R', R''\in\mathcal R$ and $i,j=-n,-n+1,\ldots, n-1,n$. Let $\mathcal R^0 = \mathcal R$. Note that the $\mathcal R^n$ depend on $A$, $\beta$, $m$, $l$, and $\delta$ even though we do not emphasize this in the notation. By the construction, for each $R\in\mathcal R^n$, we have that two opposite boundaries of $R$ are contained in $W^u((0,0))$ and the other two are contained in $W^s((0,0))$.
\begin{figure}
\caption{The image of the partition $\mathcal R$ (in black, solid) for $F_{l,\delta}$ which is a perturbation of $L_A$ where $A=\usebox{\smlmat}$. Some boundaries are compared to the partition of $L_A$ (in black, dashed). $\mathbb T^2$ is shown in blue.}
\label{initial_Markov}
\end{figure}
We have the following property for the constructed partition.
\begin{lemma}\label{element inside}
There exists $\beta_0\in(0,\frac{a+d-2}{|b|})$ and there exists $\delta_0(\beta_0)\in(0,l)$ such that there exists $n_0\in\mathbb N$ with the following property. Let $\delta\in(0,\delta_0)$ and $\mathcal R$ be the Markov partition for $F_{l,\delta}$ (described above) with $\beta=\beta_0$. Then, there exists $R\in\mathcal R^{n_0}$ such that $R\subset \{(x,y)\in\mathbb T^2| x\in(m+l-\delta,m+l)\}$. \end{lemma}
Lemma~\ref{element inside} will follow from the lemmas below. First, we introduce some notation and definitions.
Denote \begin{align}\label{different regions}
S_1(\delta)=\{(x,y)\in\mathbb T^2&|x\in[m,m+l-\delta]\}; \; S_2(\delta)=\{(x,y)\in\mathbb T^2|x\in[m+l-\delta,m+l]\}; \\
&S_3=\mathbb T^2\setminus\{(x,y)\in\mathbb T^2|x\in(m,m+l)\}.\nonumber
\end{align}
\begin{definition} Let $n\in\mathbb N$ and $\mathcal R^n$ be as described above. Let $R\in\mathcal R^n$. The \underline{$s$-size of $R$}, denoted by $d_s(R)$, is the distance in the $\pmb v^s$ direction between the two opposite boundaries (or their extensions) of $R$ that are contained in $W^u((0,0))$. The \underline{$s$-size of $\mathcal R^n$} is $d_s(\mathcal R^n) = \max\limits_{R\in\mathcal R^n} d_s(R)$. \end{definition}
\begin{definition} Let $n\in\mathbb N$ and $\mathcal R^n$ be as described above. Let $R\in\mathcal R^n$. The \underline{$u$-size of $R$}, denoted by $d_u(R)$, is the distance in the $\pmb v^u$ direction between the two opposite boundaries (or their extensions) of $R$ that are contained in $W^s((0,0))$. The \underline{$u$-size of $\mathcal R^n$} is $d_u(\mathcal R^n) = \max\limits_{R\in\mathcal R^n} d_u(R)$. \end{definition}
\begin{lemma}\label{stable_size}
Consider the setting above. Then, there exists $\beta_s\in(0,\frac{a+d-2}{|b|})$ such that for any $\beta\in(0,\beta_s)$ there exists $\delta_s=\delta_s(\beta)\in(0,l)$ such that there exists a constant $\nu_s\in(0,1)$ with the following properties. Let $\delta\in(0,\delta_s)$ and $\mathcal R$ be the partition for $F_{l,\delta}$. Then, for any $n\in\mathbb N$, and any $R\in\mathcal R^n$ we have $$d_s(F_{l,\delta}(R))<\nu_s d_s(\mathcal R^n) .$$ \end{lemma} \begin{proof}
Let $\delta\in(0,l)$ and $\beta\in(0,\frac{a+d-2}{|b|})$. Consider the partition $\mathcal R$ for $F_{l,\delta}$. Let $R\in\mathcal R^n$ and $d_s(R)=r$. Let $p_1, p_2\in W^u((0,0))$ be the points on the opposite boundaries of $F_{l,\delta}(R)$ such that the segment $[p_1,p_2]$ has direction $\pmb v^s$. Partition $[p_1,p_2]$ into the minimal number of segments such that each segment is fully contained in one of the regions $F_{l,\delta}(S_1)$, $F_{l,\delta}(S_2)$, and $F_{l,\delta}(S_3)$.
Recall that $F_{l,\delta}$ is linear in $S_1$, $S_2$, and $S_3$, so it takes a piece of a line to a piece of a line.
First, we find the directions of lines in $S_1$ and $S_2$ that are taken to lines with the direction $\pmb v^s$. Let $A(t)$ be as in Proposition~\ref{eigenvector_eigenvalue_proposition}. By the definition of $F_{l,\delta}$, we have $DF_{l,\delta} = A(1-\beta)$ on $S_1$ and $DF_{l,\delta} = A\left(\frac{\beta l+\delta(1-\beta)}{\delta}\right)$ on $S_2$.
It is easy to see that $A(t)\pmb u(t) = \pmb v^s$ if and only if \begin{equation*} \pmb u (t)= \begin{pmatrix}u_1(t)\\u_2(t) \end{pmatrix}= \left(A(t)\right)^{-1}\pmb v^s = e^\Lambda \pmb v^s+\sgn(b)(t-1)\begin{pmatrix}0\\ -dv^s_1+bv^s_2\end{pmatrix}, \end{equation*} where $\pmb v^s = \begin{pmatrix}v^s_1\\v^s_2\end{pmatrix}$ and $\Lambda = h_{top}(L_A)$.
As a result, for any $\varepsilon_1>0$ there exists $\beta_1=\beta_1(\varepsilon_1, A)\in\left(0,\frac{a+d-2}{|b|}\right)$ such that for any $\beta\in (0,\beta_1(\varepsilon_1))$ we have $\frac{\|\pmb v^s\|}{\|\pmb u(1-\beta)\|}< e^{-\Lambda}+\varepsilon_1$. Moreover, for any $\varepsilon_2>0$ there exists $\delta_1=\delta_1(\varepsilon_2,A,l,\beta)\in(0,l)$ such that for any $\delta\in(0,\delta_1(\varepsilon_2))$, we have $\frac{\|\pmb v^s\|}{\left\|\pmb u\left(\frac{\beta l +\delta(1-\beta)}{\delta}\right)\right\|}<\varepsilon_2$.
Also, \begin{align}\label{slope of u(t)}
\frac{u_2(t)}{u_1(t)} &= \frac{v_2^s}{v_1^s} +\sgn(b)(t-1)e^{-\Lambda}\left(-d+b\frac{v_2^s}{v_1^s}\right) \\&=\frac{v_2^s}{v_1^s} -\sgn(b)(t-1)\frac{1}{4}(d+a-\sqrt{(a+d)^2-4})(d+a+\sqrt{(a+d)^2-4})\nonumber\\&= \frac{v_2^s}{v_1^s} -\sgn(b)(t-1).\nonumber \end{align}
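The slope identity \eqref{slope of u(t)} can be verified numerically: $\pmb u(t)=A(t)^{-1}\pmb v^s$ has slope $\frac{v^s_2}{v^s_1}-\sgn(b)(t-1)$. The sketch below uses the illustrative sample values $a,b,c,d=2,1,1,1$ (an assumption, not from the text):

```python
import numpy as np

# Illustrative sample data (not from the text): A = [[2,1],[1,1]].
a, b, c, d = 2, 1, 1, 1

def A(t):
    return np.array([[a + abs(b)*(t-1), b], [c + np.sign(b)*d*(t-1), d]], float)

u0 = a + d
vs = np.array([2*b, 2*d - u0 - np.sqrt(u0**2 - 4)])   # v^s = e^-(1)

for t in [0.5, 1.0, 2.5]:
    u = np.linalg.solve(A(t), vs)                      # u(t) = A(t)^{-1} v^s
    slope = u[1] / u[0]
    expected = vs[1]/vs[0] - np.sign(b)*(t - 1)        # slope from (slope of u(t))
    assert abs(slope - expected) < 1e-10
```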
According to the definition of $F_{l,\delta}$, the construction of $\mathcal R^n$, and \eqref{invariant cone direction}, $W^u((0,0))$ is a piecewise linear curve with linear pieces having directions in the cone spanned by $\pmb v^+_{min}(-\beta)$ and $\pmb v^+_{max}$.
Denote by $L(I)$ the length of $I\subset \mathbb R^2$ in the standard Euclidean metric. We estimate $L([p_1,p_2])$. In the following discussion the phrase ``a side has direction $\pmb v$" means ``a side is contained in a line with direction $\pmb v$".
\underline{Case 1:} Assume $[p_1,p_2]\subset F_{l,\delta}(S_3)$. Then, $\left(F_{l,\delta}\right)^{-1}([p_1,p_2])$ has direction $\pmb v^s$ and $F_{l,\delta} = L_{A}$ on $S_3$, so $L([p_1,p_2])\leq e^{-\Lambda}d_s(\mathcal R^n)$.
\underline{Case 2:} Assume $[p_1,p_2]\subset F_{l,\delta}(S_1)$.
Let $\Delta D_1D_2D_3$ be a triangle such that the following hold: $D_1D_2$ has direction $\pmb u(1-\beta)$, $D_1D_3$ has direction $\pmb v^s$, $L(D_1D_3)=d_s(\mathcal R^n)$, and $D_2D_3$ has direction $\pmb v^+_{min}(-\beta)$. Then, for sufficiently small $\beta$ which depends only on $A$,
\begin{equation*}
L(D_1D_2)= d_s(\mathcal R^n)\left(1+\frac{2|b|\beta}{\sqrt{(a+d)^2-4}+\sqrt{(a+d-|b|\beta)^2-4}-|b|\beta}\right)\sqrt{\frac{1+\left(\frac{v^s_2}{v^s_1}+\sgn(b)\beta\right)^2}{1+\left(\frac{v^s_2}{v^s_1}\right)^2}}. \end{equation*}
Let $\Delta T_1T_2T_3$ be a triangle such that the following hold: $T_1T_2$ has direction $\pmb u(1-\beta)$, $T_1T_3$ has direction $\pmb v^s$, $L(T_1T_3)=d_s(\mathcal R^n)$, and $T_2T_3$ has direction $\pmb v^+_{max}$. Then, for sufficiently small $\beta$ which depends only on $A$,
\begin{equation*}
L(T_1T_2)= d_s(\mathcal R^n)\left(1+\frac{2|b|\beta}{(a+d)+\sqrt{(a+d)^2-4}-2|b|\beta}\right)\sqrt{\frac{1+\left(\frac{v^s_2}{v^s_1}+\sgn(b)\beta\right)^2}{1+\left(\frac{v^s_2}{v^s_1}\right)^2}}. \end{equation*}
Using \eqref{slope of u(t)}, we obtain that $L\left(\left(F_{l,\delta}\right)^{-1}([p_1,p_2])\right)\leq \max\{L(D_1D_2), L(T_1T_2)\}$ and $$L([p_1,p_2])\leq \max\{L(D_1D_2), L(T_1T_2)\}\frac{\|\pmb v^s\|}{\|\pmb u(1-\beta)\|},$$ where $\frac{\max\{L(D_1D_2), L(T_1T_2)\}}{d_s(\mathcal R^n)}\rightarrow 1$ as $\beta\rightarrow 0$. Therefore, there exists $\beta_2 = \beta_2(A)\in\left(0,\frac{a+d-2}{|b|}\right)$ such that there exists $\nu_2=\nu_2(\beta_2)\in(0,1)$ such that for any $\beta\in(0,\beta_2)$ we have $L([p_1,p_2])\leq \nu_2d_s(\mathcal R^n)$.
\underline{Case 3:} Assume $[p_1,p_2]\subset F_{l,\delta}(S_2)$.
Let $\Delta D_1D_2D_3$ be a triangle such that the following hold: $D_1D_2$ has direction $\pmb u\left(\frac{\beta l +\delta(1-\beta)}{\delta}\right)$, $D_1D_3$ has direction $\pmb v^s$, $L(D_1D_3)=d_s(\mathcal R^n)$, and $D_2D_3$ has direction $\pmb v^+_{min}(-\beta)$. Then,
\begin{equation*}
L(D_1D_2)= d_s(\mathcal R^n)\left|1-\frac{\frac{\sgn(b)\beta(l-\delta)}{\delta}}{\frac{\sqrt{(a+d)^2-4}+\sqrt{(a+d-|b|\beta)^2-4}+|b|\beta}{2b}+\frac{\sgn(b)\beta(l-\delta)}{\delta}}\right|\sqrt{\frac{1+\left(\frac{v^s_2}{v^s_1}-\frac{\sgn(b)\beta(l-\delta)}{\delta}\right)^2}{1+\left(\frac{v^s_2}{v^s_1}\right)^2}}. \end{equation*}
Let $\Delta T_1T_2T_3$ be a triangle such that the following hold: $T_1T_2$ has direction $\pmb u\left(\frac{\beta l +\delta(1-\beta)}{\delta}\right)$, $T_1T_3$ has direction $\pmb v^s$, $L(T_1T_3)=d_s(\mathcal R^n)$, and $T_2T_3$ has direction $\pmb v^+_{max}$. Then,
\begin{equation*}
L(T_1T_2)= d_s(\mathcal R^n)\left|1-\frac{\frac{\sgn(b)\beta(l-\delta)}{\delta}}{\frac{d}{b}-\frac{v^s_2}{v^s_1}+\frac{\sgn(b)\beta(l-\delta)}{\delta}}\right|\sqrt{\frac{1+\left(\frac{v^s_2}{v^s_1}-\frac{\sgn(b)\beta(l-\delta)}{\delta}\right)^2}{1+\left(\frac{v^s_2}{v^s_1}\right)^2}}. \end{equation*}
Using \eqref{slope of u(t)}, we obtain that $L\left(\left(F_{l,\delta}\right)^{-1}([p_1,p_2])\right)\leq \max\{L(D_1D_2), L(T_1T_2)\}$. In particular, for any fixed $\beta$, there exists $\delta_3=\delta_3(\beta,A,l)\in(0,l)$ such that for any $\delta\in(0,\delta_3)$, we have \begin{equation*}
L([p_1,p_2])=L\left(\left(F_{l,\delta}\right)^{-1}([p_1,p_2])\right)\frac{\|\pmb v^s\|}{\left\|\pmb u\left(\frac{\beta l +\delta(1-\beta)}{\delta}\right)\right\|}\leq e^{-\Lambda}d_s(\mathcal R^n). \end{equation*}
\underline{Case 4:} Assume $[p_1,p_2]\subset F_{l,\delta}(S_1) \cup F_{l,\delta}(S_3)$ and $[p_1,p_2]\cap \Int(F_{l,\delta}(S_i))\neq\emptyset$ for $i=1,3$. Combining Cases 1 and 2, we obtain that there exists $\beta_2\in\left(0,\frac{a+d-2}{|b|}\right)$ and $\nu_2 = \nu_2(\beta_2)\in(0,1)$ such that for any $\beta\in(0,\beta_2)$ we have $L([p_1,p_2])\leq \max\{e^{-\Lambda},\nu_2\}d_s(\mathcal R^n).$
\underline{Case 5:} Assume $[p_1,p_2]\subset F_{l,\delta}(S_2) \cup F_{l,\delta}(S_3)$ and $[p_1,p_2]\cap \Int(F_{l,\delta}(S_i))\neq\emptyset$ for $i=2,3$. Combining Cases 1 and 3, we obtain that for any $\beta\in(0,\beta_2)$ (where $\beta_2$ defined in Case 4) we have that there exists $\delta_3=\delta_3(\beta,A,l)\in(0,l)$ such that for any $\delta\in(0,\delta_3)$ we have $L([p_1,p_2])\leq e^{-\Lambda}d_s(\mathcal R^n).$
\underline{Case 6:} Assume $[p_1,p_2]\subset F_{l,\delta}(S_1) \cup F_{l,\delta}(S_2)$ and $[p_1,p_2]\cap \Int(F_{l,\delta}(S_i))\neq\emptyset$ for $i=1,2$. There is a piece of a line with direction $\pmb v^s$ that intersects two opposite boundaries of a rectangle in $\mathcal R^n$ that are contained in $W^u((0,0))$ and passes through a point of $\left(F_{l,\delta}\right)^{-1}([p_1,p_2])$ that belongs to the line $x=m+l-\delta$ (on $\mathbb T^2$). Then, using Cases 2 and 3, we obtain that $L([p_1,p_2])\leq \max\{\nu_2, e^{-\Lambda}\}d_s(\mathcal R^n)$.
\underline{Case 7:} Assume $[p_1,p_2]\subset F_{l,\delta}(S_1)\cup F_{l,\delta}(S_2)\cup F_{l,\delta}(S_3)$ and $[p_1,p_2]\cap \Int(F_{l,\delta}(S_i))\neq \emptyset$ for $i=1,2,3$. Let $q_j=\left(F_{l,\delta}\right)^{-1}(p_j)$, where $j=1, 2$.
\underline{Subcase 7.1:} Assume $q_j\in S_3$ where $j=1, 2$.
Recall that $F_{l,\delta}=L_A$ on $S_3$ and $F_{l,\delta}$ is a piecewise linear map. Thus, if $\gamma_1,\gamma_2\subset F_{l,\delta}(S_3)\cap[p_1,p_2]$ are two segments, then the segments $\left(F_{l,\delta}\right)^{-1}(\gamma_1)$ and $\left(F_{l,\delta}\right)^{-1}(\gamma_2)$ are subsets of a line with direction $\pmb v^s$ connecting the opposite boundaries of a rectangle in $\mathcal R^n$ that are contained in $W^u((0,0))$.
Let $Q_i$ be the maximal segment in $[q_1,q_2]$ such that $Q_i\subset S_i$, where $i=1,2$. We denote $r_1=L(Q_1)$ and $r_2=L(Q_2)$. Let $Q'_i$ be the maximal segment in $\left(F_{l,\delta}\right)^{-1}([p_1,p_2])$ such that $Q'_i\subset S_i$, where $i=1,2$. We denote $r'_1=L(Q'_1)$ and $r'_2=L(Q'_2)$. By the definition of $F_{l,\delta}$ and \eqref{slope of u(t)}, we have \begin{align*}
&r_1= (l-\delta)\sqrt{1+\left(\frac{v_2^s}{v_1^s}\right)^2}, \qquad r_2= \delta \sqrt{1+\left(\frac{v_2^s}{v_1^s}\right)^2}, \\
&r'_1= (l-\delta)\sqrt{1+\left(\frac{v_2^s}{v_1^s}+\sgn(b)\beta\right)^2} = r_1\frac{\sqrt{1+\left(\frac{v_2^s}{v_1^s}+\sgn(b)\beta\right)^2}}{\sqrt{1+\left(\frac{v_2^s}{v_1^s}\right)^2}}, \\
&r'_2=\delta\sqrt{1+\left(\frac{v_2^s}{v_1^s}-C_\delta\right)^2} = r_2\frac{\sqrt{1+\left(\frac{v_2^s}{v_1^s}-C_\delta\right)^2}}{\sqrt{1+\left(\frac{v_2^s}{v_1^s}\right)^2}}, \end{align*} where $C_\delta = \sgn(b)\frac{\beta(l-\delta)}{\delta}$. Then, \begin{equation}\label{max_length_in_s}
L(F_{l,\delta}(Q'_1)) = r'_1\frac{\|\pmb v^s\|}{\|\pmb u(1-\beta)\|} = r_1e^{-\Lambda}\sqrt{\frac{1+\left(\frac{v_2^s}{v_1^s}+\sgn(b)\beta\right)^2}{1+\left(\frac{v_2^s}{v_1^s}-e^\Lambda\beta\sgn(b)\left(-d+b\frac{v_2^s}{v_1^s}\right)\right)^2}} \end{equation} and \begin{equation}\label{max_length_in_f}
L(F_{l,\delta}(Q_2'))=r'_2\frac{\|\pmb v^s\|}{\left\|\pmb u\left(\frac{\beta l+\delta(1-\beta)}{\delta}\right)\right\|} = r_2e^{-\Lambda}\sqrt{\frac{1+\left(\frac{v_2^s}{v_1^s}-C_\delta\right)^2}{1+\left(\frac{v_2^s}{v_1^s}+C_\delta e^{-\Lambda}\left(-d+b\frac{v_2^s}{v_1^s}\right)\right)^2}}. \end{equation}
Choose $K>0$ such that $e^{-\Lambda}(1+K)<1$ and $\left(-d+b\frac{v_2^s}{v_1^s}\right)^{-1}+K = \frac{2}{a+d+\sqrt{(a+d)^2-4}}+K<1$, which is possible since $\Lambda>0$ and $a+d>2$. Let $\nu_{7} = \max\{e^{-\Lambda}(1+K), \frac{2}{a+d+\sqrt{(a+d)^2-4}}+K\}$. Then, there exists $\beta_{7}=\beta_7(A,K)\in(0,\frac{a+d-2}{|b|})$ such that for any $\beta\in(0,\beta_7)$ there exists $\delta_7 = \delta_7(\beta,K,A,l)\in(0,l)$ such that for any $\delta\in(0,\delta_7)$ we have $L(F_{l,\delta}(Q'_i))\leq r_i\nu_7$ for $i=1,2$, by \eqref{max_length_in_s} and \eqref{max_length_in_f}. Combining this with Case 1 and the fact that $L([q_1,q_2])\leq d_s(\mathcal R^n)$, we obtain that for the specified choice of $\beta_7$ and $\delta_7$, we have $L([p_1,p_2])\leq \nu_7d_s(\mathcal R^n)$ for any $\delta\in(0,\delta_7)$.
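As a hedged numerical illustration of the margin $K$ (the text fixes no concrete matrix, so the choice $A=\begin{pmatrix}2&1\\1&1\end{pmatrix}$, i.e. $a+d=3>2$, is an assumption made only for this check), both contraction factors entering $\nu_7$ stay strictly below $1$ after adding a suitable $K>0$:

```python
import math

# Illustrative (assumed) example: A = [[2, 1], [1, 1]], so a + d = 3 > 2.
a, d = 2, 1
tr = a + d

# Unstable eigenvalue e^Lambda = (tr + sqrt(tr^2 - 4)) / 2, hence Lambda > 0.
Lambda = math.log((tr + math.sqrt(tr**2 - 4)) / 2)

# The two factors that must stay below 1 after adding a margin K > 0:
c1 = math.exp(-Lambda)                   # factor e^{-Lambda}
c2 = 2 / (tr + math.sqrt(tr**2 - 4))     # factor 2/(a+d+sqrt((a+d)^2-4))

# Any K below the smaller gap works; take half of it.
K = 0.5 * min(1 / c1 - 1, 1 - c2)
nu7 = max(c1 * (1 + K), c2 + K)
assert 0 < K and nu7 < 1
```

For this sample matrix $c_1=c_2=e^{-\Lambda}\approx 0.382$, so any $K<e^{\Lambda}-1$ keeps $\nu_7<1$.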
\underline{Subcases 7.2-7.3:} Let $Q'_i$ be the maximal piece of $\left(F_{l,\delta}\right)^{-1}([p_1,p_2])$ in $S_i$ for $i=1,2,3$. Notice that $Q'_3$ has direction $\pmb v^s$. Let $Q$ be a piece of a line with direction $\pmb v^s$ connecting the opposite boundaries of a rectangle in $\mathcal R^n$ that are contained in $W^u((0,0))$ which passes through a point of $\left(F_{l,\delta}\right)^{-1}([p_1,p_2])$ that belongs to the line $x=m+l-\delta$.
\underline{Subcase 7.2:} Assume $q_1\in S_3$ and $q_2\in S_2$.
Let $Q_1$ be the maximal piece of $Q$ in $\left\{(x,y)\in\mathbb T^2| 0\leq x\leq m+l-\delta\right\}$ and $Q_2=Q\setminus Q_1$.
Let $D_1D_2D_3D_4(\pmb z)$ be a trapezoid such that $D_1D_2$ and $D_3D_4$ have direction $\pmb v^s$, $L(D_1D_2)=L(Q_1)$, $D_2D_3$ has direction $\pmb u(1-\beta)$, $L(D_2D_3)=(l-\delta)\sqrt{1+\left(\frac{v_2^s}{v_1^s}+\sgn(b)\beta\right)^2}$, and $D_1D_4(\pmb z)$ has direction $\pmb z = \begin{pmatrix}z_1\\z_2\end{pmatrix}$. In our case, we consider $\pmb z=\pmb v^+_{max}, \pmb v^+_{min}(-\beta)$. Moreover, to have a trapezoid for our choice of $\pmb z$, we should have $\frac{L(Q_1)}{\sqrt{1+\left(\frac{v^s_2}{v^s_1}\right)^2}}\geq(l-\delta)\left(1-\frac{\sgn(b)\beta}{\frac{z_2}{z_1}-\frac{v_2^s}{v_1^s}}\right)$. Then, we obtain \begin{equation*} L(D_2D_3)+L(D_3D_4) = L(Q_1)+(l-\delta)\left(\sqrt{1+\left(\frac{v^s_2}{v^s_1}+\sgn(b)\beta\right)^2}-\left(1-\frac{\sgn(b)\beta}{\frac{z_2}{z_1}-\frac{v_2^s}{v_1^s}}\right)\sqrt{1+\left(\frac{v^s_2}{v^s_1}\right)^2}\right). \end{equation*} In particular, for sufficiently small $\beta>0$ (depending on $A$), we have \begin{equation}\label{length_in_trapezoid_before} L(D_2D_3)+L(D_3D_4) \leq L(Q_1)\max\left\{1,\max\limits_{\pmb z\in\{\pmb v^+_{max}, \pmb v^+_{min}(-\beta)\}}\left\{\frac{\sqrt{1+\left(\frac{v^s_2}{v^s_1}+\sgn(b)\beta\right)^2}}{\left(1-\frac{\sgn(b)\beta}{\frac{z_2}{z_1}-\frac{v_2^s}{v_1^s}}\right)\sqrt{1+\left(\frac{v^s_2}{v^s_1}\right)^2}}\right\}\right\}. \end{equation} Moreover, \begin{align}\label{length_in_trapezoid_after}
L(F_{l,\delta}(Q'_3\cup Q'_1)) &= e^{-\Lambda}L(Q'_3)+\frac{\|\pmb v^s\|}{\|\pmb u(1-\beta)\|}L(Q'_1)\leq \max\left\{e^{-\Lambda}, \frac{\|\pmb v^s\|}{\|\pmb u(1-\beta)\|}\right\}L(Q'_3\cup Q'_1)\nonumber\\&\leq \max\left\{e^{-\Lambda}, \frac{\|\pmb v^s\|}{\|\pmb u(1-\beta)\|}\right\}(L(D_2D_3)+L(D_3D_4)). \end{align}
Let $\nu'_7\in(e^{-\Lambda},1)$. Combining \eqref{length_in_trapezoid_before} and \eqref{length_in_trapezoid_after}, we obtain that there exists $\beta'_7=\beta'_7(A,\nu'_7)$ such that for any $\beta\in(0,\beta'_7)$, $L(F_{l,\delta}(Q'_3\cup Q'_1))\leq \nu'_7L(Q_1)$.
Also, for any $\beta\in(0,\beta'_7)$ there exists $\delta_3 = \delta_3(\beta,A,l)\in(0,l)$ such that for any $\delta\in(0,\delta_3)$ we have $L(F_{l,\delta}(Q'_2))\leq e^{-\Lambda}L(Q_2)$ (see Case 3). Then, for the above choice of parameters, we have \begin{equation*} L([p_1,p_2])\leq \nu'_7L(Q_1)+e^{-\Lambda}L(Q_2)\leq \nu'_7L(Q_1\cup Q_2)\leq\nu'_7d_s(\mathcal R^n). \end{equation*}
\underline{Subcase 7.3:} Assume $q_1\in S_1$ and $q_2\in S_3$.
Let $Q_2$ be the maximal piece of $Q$ in $\left\{(x,y)\in\mathbb T^2| m+l-\delta\leq x\leq 1\right\}$ and $Q_1=Q\setminus Q_2$.
Let $D_1D_2D_3D_4(\pmb z)$ be a trapezoid such that $D_1D_2$ and $D_3D_4$ have direction $\pmb v^s$, $L(D_1D_2)=L(Q_2)$, $D_2D_3$ has direction $\pmb u\left(\frac{\beta l+\delta(1-\beta)}{\delta}\right)$ and $L(D_2D_3)=\delta\sqrt{1+\left(\frac{v_2^s}{v_1^s}-\frac{\sgn(b)\beta(l-\delta)}{\delta}\right)^2}$, and $D_1D_4(\pmb z)$ has direction $\pmb z = \begin{pmatrix}z_1\\z_2\end{pmatrix}$. In our case, we consider $\pmb z=\pmb v^+_{max}, \pmb v^+_{min}(-\beta)$. Moreover, to have a trapezoid for our choice of $\pmb z$, we should have $\frac{L(Q_2)}{\sqrt{1+\left(\frac{v^s_2}{v^s_1}\right)^2}}\geq\frac{\sgn(b)\beta(l-\delta)}{\frac{z_2}{z_1}-\frac{v_2^s}{v_1^s}}+\delta$. Furthermore, we have \begin{equation*}
\sgn(b)\left(\frac{2d}{2b}-\frac{v_2^s}{v_1^s}\right) = \frac{(a+d)+\sqrt{(a+d)^2-4}}{2|b|}>0 \,\text{ and } \end{equation*} \begin{equation*}
\sgn(b)\left(\frac{2d-(a+d-|b|\beta)+\sqrt{(a+d-|b|\beta)^2-4}}{2b}-\frac{v_2^s}{v_1^s}\right) = \frac{|b|\beta+\sqrt{(a+d-|b|\beta)^2-4}+\sqrt{(a+d)^2-4}}{2|b|}>0. \end{equation*} Thus, we obtain \begin{equation*} L(D_3D_4)= L(Q_2) - \delta\sqrt{1+\left(\frac{v_2^s}{v_1^s}\right)^2}\left(1+\frac{\sgn(b)\beta(l-\delta)}{\delta\left(\frac{z_2}{z_1}-\frac{v_2^s}{v_1^s}\right)}\right)\leq L(Q_2). \end{equation*} Then, for the choice of parameters as in Case 7.1, we have \begin{align}\label{length_in_trapezoid_after_f}
L(F_{l,\delta}(Q'_2))+L(F_{l,\delta}(Q'_3)) &\leq \frac{\|\pmb v^s\|}{\left\|\pmb u\left(\frac{\beta l +\delta(1-\beta)}{\delta}\right)\right\|}L(Q'_2)+e^{-\Lambda}L(Q'_3) \leq\nu_7L(Q_2). \end{align}
Let $\nu''_7=\max\{\nu_7, \nu_2\}$ and $\beta''_7 = \min\{\beta_2,\beta_7\}$. Then, for any $\beta\in(0,\beta''_7)$ there exists $\delta_7=\delta_7(\beta,K,l,A)\in(0,l)$ such that for any $\delta\in(0,\delta_7)$ we have $L([p_1,p_2])\leq \nu''_7d_s(\mathcal R^n)$.
Combining Cases 1-7, we obtain the statement of the lemma. \end{proof}
Similarly to Lemma~\ref{stable_size}, we can obtain the following lemma.
\begin{lemma}\label{unstable_size}
Consider the setting above. Then, there exists $\beta_u\in(0,\frac{a+d-2}{|b|})$ such that for any $\beta\in(0,\beta_u)$ there exist $\delta_u=\delta_u(\beta)\in(0,l)$ and $\nu_u = \nu_u (\delta_u) \in(0,1)$ with the following properties. Let $\delta\in(0,\delta_u)$ and $\mathcal R$ be the partition for $F_{l,\delta}$. Then, for any $n\in\mathbb N$, and any $R\in\mathcal R^n$ we have $$d_u(\left(F_{l,\delta}\right)^{-1}(R))<\nu_u d_u(\mathcal R^n) .$$ \end{lemma}
\begin{proof}[Proof of Lemma~\ref{element inside}] First, let $\beta_0=\frac{1}{2}\min\{\beta_s,\beta_u\}$, where $\beta_s$ and $\beta_u$ are as in Lemmas~\ref{stable_size} and \ref{unstable_size}. Set $\delta' = \frac{1}{2}\min\{\delta_s(\beta_0),\delta_u(\beta_0)\}$.
Let $\delta\in(0,\delta')$. Denote $\tilde A=A\left(\frac{\beta_0l+\delta(1-\beta_0)}{\delta}\right)$. Then, for any vector $\pmb v = \alpha_1 \pmb v^-_{min}(-\beta_0)+ \alpha_2 \pmb v^-_{max}$, where $\alpha_1\alpha_2\geq 0$ and $\alpha^2_1+\alpha_2^2\neq 0$, we have \begin{align*}
\pmb {\hat v} = \begin{pmatrix}\hat v_1\\\hat v_2 \end{pmatrix}&=\tilde A^{-1}\pmb v = \begin{pmatrix}d & -b\\ -c-\sgn(b)d\frac{\beta_0(l-\delta)}{\delta} & a+|b|\frac{\beta_0(l-\delta)}{\delta} \end{pmatrix}\left(\alpha_1\begin{pmatrix}2b\\\phi^-(a+d-|b|\beta_0) \end{pmatrix}+\alpha_2\begin{pmatrix}0\\-1 \end{pmatrix}\right)\\
&=\frac{\beta_0(l-\delta)}{\delta}|b|\left(\alpha_1\left(2d-\phi^{-}(a+d-|b|\beta_0)\right)+\alpha_2\right)\begin{pmatrix} 0\\ -1\end{pmatrix}+A^{-1}\begin{pmatrix}2b\alpha_1\\\alpha_1\phi^{-}(a+d-|b|\beta_0)-\alpha_2\end{pmatrix}. \end{align*}
Thus, \begin{equation*}
\min\left\{0, \frac{2}{b\left(2d-\phi^{-}(a+d-|b|\beta_0)\right)}\right\}\leq\frac{\hat v_2}{\hat v_1}+\frac{\sgn(b)\beta_0(l-\delta)}{\delta}+\frac{a}{b}\leq \max\left\{0,\frac{2}{b\left(2d-\phi^{-}(a+d-|b|\beta_0)\right)}\right\}. \end{equation*}
Therefore, there exists $\delta_0\in(0,\delta')$ such that the following holds. Let $\delta\in (0,\delta_0)$ and $I$ be a segment of $W^s((0,0))$ contained in $S_2(\delta)$ with endpoints $(m+l-\delta,y_1)$ and $(m+l,y_2)$ on its boundary. Then, $|y_2-y_1|\geq\frac{1}{2}\beta_0 l$.
Let $\mathcal R$ be the Markov partition for $F_{l,\delta}$. Denote by $\partial_j(\mathcal R)$ the boundary of $\mathcal R$ that is contained in $W^j((0,0))$, where $j=s,u$. By Lemmas~\ref{Anosov diffeomorphism} and~\ref{unstable_size}, there exists $n_1=n_1(\delta_0)\in\mathbb N$ such that there are two distinct segments $I, J\subset\partial_s(\mathcal R^{n_1})\cap S_2(\delta)$ with endpoints $(m+l-\delta, y^I_1)$, $(m+l, y^I_2)$ and $(m+l-\delta, y^J_1)$, $(m+l, y^J_2)$, respectively, with the property that $\min\{|y^I_1-y^J_2|,|y^J_1-y^I_2|\}\geq\frac{1}{4}\beta_0 l$. Then, using Lemma~\ref{stable_size}, there also exists $n_2=n_2(\delta_0)\in\mathbb N$ such that there are two distinct segments $W, V\subset\partial_u(\mathcal R^{n_2})\cap S_2(\delta)$ with endpoints $(m+l-\delta, y^W_1)$, $(m+l, y^W_2)$ and $(m+l-\delta, y^V_1)$, $(m+l, y^V_2)$, respectively, with the property that $W\cap I\neq\emptyset$, $W\cap J\neq\emptyset$, $V\cap I\neq\emptyset$, $V\cap J\neq\emptyset$, and these intersections do not contain the endpoints of $I,J, W, V$. Thus, for $n_0=\max\{n_1,n_2\}$ there exists $R\in\mathcal R^{n_0}$ such that $R\subset \{(x,y)\in\mathbb T^2| x\in(m+l-\delta,m+l)\}$. \end{proof}
\subsubsection*{Markov partition for $F^w_{l,\delta}$ when $w>0$.}
We construct the Markov partition for $F^w_{l,\delta}$ in the same way as for $F_{l,\delta}$. We quickly recall it while introducing some notation. Let us draw segments of $W_w^u((0,0))$ and $W_w^s((0,0))$ until they cross sufficiently many times and separate $\mathbb T^2$ into two disjoint (curvilinear) rectangles $R_1, R_2$. Define $\mathcal R_w$ to be the partition into rectangles determined by $R_i\cap F^w_{l,\delta}(R_j)$, where $i,j=1,2$. For $n\in\mathbb N$ let $\mathcal R_w^n$ be the partition into rectangles generated by $\left(F^w_{l,\delta}\right)^i(S)\cap \left(F^w_{l,\delta}\right)^j(T)$, where $S, T\in\mathcal R_w$ and $i,j=-n,-n+1,\ldots, n-1,n$. Let $\mathcal R_w^0 = \mathcal R_w$.
\begin{lemma}\label{element_inside_for_smooth} Let $\beta_0, \delta_0$, and $n_0$ be as in Lemma~\ref{element inside}. Let \begin{equation}\label{fast strip in smoothed}
S^w_2(\delta) = \{(x,y)\in\mathbb T^2| x\in(m+l-\delta+w,m+l-w)\}. \end{equation} Then, for any $\delta\in(0,\delta_0)$ there exists $w_0=w_0(\delta)\in(0,\frac{\delta}{4})$ such that some $R\in\mathcal R_{w_0}^{n_0}$ satisfies $R\subset S^{w_0}_2(\delta)$, where $\mathcal R_{w_0}$ is the Markov partition described above for $F^{w_0}_{l,\delta}$ with $\beta=\beta_0$. In particular, there exists $Q=Q(\beta_0,\delta_0)>0$ such that if $\nu^{w_0}_{l,\delta}$ is the measure of maximal entropy for $F^{w_0}_{l,\delta}$, then $\nu^{w_0}_{l,\delta}(S^{w_0}_2(\delta))\geq Q$. \end{lemma}
\begin{proof}
Recall that in a neighborhood of $(0,0)$ we have $F_{l,\delta}=F^w_{l,\delta}=L_A$ for $w\in(0,\frac{\delta}{4})$. In particular, the corresponding stable and unstable manifolds coincide in a neighborhood of $(0,0)$. Let $\mathcal R$ be the constructed Markov partition for $F_{l,\delta}$. From \eqref{stable_unstable_mfld_image}, there exists $N=N(n_0,\kappa)\in\mathbb N$ such that for any point $p\in\partial_j(\mathcal R^{n_0})$ for $j=s,u$ there exist $q\in W^j_{w}((0,0),\kappa)=W^j_{0}((0,0),\kappa)$ and $n\in\mathbb Z$ such that $|n|\leq N$ and $p=\left(F_{l,\delta}\right)^n(q)$. Moreover, $\left(F^w_{l,\delta}\right)^n(q)\in W^j_w((0,0))$. By Lemma~\ref{twist_function}, we obtain that $\dist\left(p,\left(F^w_{l,\delta}\right)^n(q)\right)$ can be made arbitrarily small by choosing a sufficiently small $w$, and this choice can be made in a uniform way on compact sets for $|n|\leq N$. Therefore, using Lemma~\ref{element inside}, we obtain that there exists a sufficiently small $w_0>0$ such that there exists $R\in\mathcal R_{w_0}^{n_0}$ with $R\subset S^{w_0}_2(\delta)$.
The statement about the measure of maximal entropy follows from the fact that $F^{w_0}_{l,\delta}$ is topologically semiconjugate to the topological Markov chain defined using the constructed Markov partition. Moreover, the semiconjugacy is one-to-one on all periodic points except for the fixed points. In particular, the measure of maximal entropy for $F^{w_0}_{l,\delta}$ is defined by the measure of maximal entropy for the topological Markov shift (Parry measure, see, for example, \cite[Proposition 4.4.2]{KatokHasselblatt}) using the topological semiconjugacy. \end{proof}
\subsubsection*{Lower bound on $\lambda_{mme}(F^w_{l,\delta})$.}
We now work towards obtaining a lower bound on $\lambda_{mme}(F^w_{l,\delta})$.
We are in the setting of Lemma~\ref{element_inside_for_smooth}. We will use the same notation as in Proposition~\ref{eigenvector_eigenvalue_proposition} and Lemma~\ref{Anosov diffeomorphism}.
Let $\pmb v_\delta^u = \pmb e^+\left(\frac{\beta_0 l +\delta(1-\beta_0)}{\delta}\right)$ and $\pmb v_\delta^s = \frac{\pmb e^-\left(\frac{\beta_0 l +\delta(1-\beta_0)}{\delta}\right)}{\left\|\pmb e^-\left(\frac{\beta_0 l +\delta(1-\beta_0)}{\delta}\right)\right\|}$. Denote by $\hat\delta = a+d+|b|\left(\frac{\beta_0 l +\delta(1-\beta_0)}{\delta}-1\right)$ and $\hat\beta_0 = a+d-|b|\beta_0$. Then, \begin{align*} &\pmb v^+_{max} = c_{max}^{u,\delta}\pmb v_\delta^u+c_{max}^{s,\delta}\pmb v_{\delta}^s, \text{ where }
\\ &\begin{pmatrix}c_{max}^{u,\delta}\\c_{max}^{s,\delta}\end{pmatrix} = \frac{1}{4\sqrt{\hat\delta^2-4}}\begin{pmatrix}\hat\delta+\sqrt{\hat\delta^2-4}\\\left(\sqrt{\hat\delta^2-4}-\hat\delta\right)\left\|\pmb e^-\left(\frac{\beta_0 l +\delta(1-\beta_0)}{\delta}\right)\right\|\end{pmatrix}\rightarrow \begin{pmatrix}\frac{1}{2}\\0\end{pmatrix} \text{ as } \hat\delta\rightarrow\infty, \text{ and }\\
&\pmb v^+_{min}(-\beta_0) = c_{min}^{u,\delta}\pmb v_\delta^u+c_{min}^{s,\delta}\pmb v_{\delta}^s, \text{ where } \\&\begin{pmatrix}c_{min}^{u,\delta}\\c_{min}^{s,\delta}\end{pmatrix} = \frac{1}{2\sqrt{\hat\delta^2-4}}\begin{pmatrix}\hat\delta-\hat\beta_0+\sqrt{\hat\delta^2-4}+\sqrt{\hat\beta_0^2-4}\\\left(\hat\beta_0-\hat\delta+\sqrt{\hat\delta^2-4}-\sqrt{\hat\beta_0^2-4}\right)\left\|\pmb e^-\left(\frac{\beta_0 l +\delta(1-\beta_0)}{\delta}\right)\right\| \end{pmatrix}\rightarrow\begin{pmatrix}1\\\hat\beta_0-\sqrt{\hat\beta_0^2-4}\end{pmatrix} \text{ as } \hat\delta\rightarrow\infty. \end{align*}
As a result, if $\delta$ is sufficiently small, then for any $\pmb v = \alpha_1\pmb v^+_{max}+\alpha_2\pmb v^+_{min}(-\beta_0)$, where $\alpha_1\alpha_2\geq 0$ and $\alpha_1^2+\alpha_2^2\neq 0$, we have for any $n\in\mathbb N$
\begin{equation}\label{large expansion estimate}
\left\|\left(A\left(\frac{\beta_0l+\delta(1-\beta_0)}{\delta}\right)\right)^n\pmb v\right\|\geq \left(\mu^+\left(\frac{\beta_0l+\delta(1-\beta_0)}{\delta}\right)\right)^nC\|\pmb v\|, \end{equation}
where $C = |b|\min\left\{\frac{1}{2\|\pmb v^+_{max}\|}, \frac{1}{\|\pmb v^+_{min}(-\beta_0)\|}\right\}\in(0,1)$.
Let $S^{w_0}_2(\delta)$ be as in Lemma~\ref{element_inside_for_smooth}. Then, $\nu^{w_0}_{l,\delta}(S^{w_0}_2(\delta))\geq Q$. We obtain the lower bound on $\lambda_{mme}(F^w_{l,\delta})$ in a similar way as in Section~\ref{section: estimate_abs} by replacing $\mathcal S$ by $\mathbb T^2\setminus S^{w_0}_2(\delta)$ and $\Leb$ by $\nu^{w_0}_{l,\delta}$ and using Birkhoff's Ergodic Theorem for $\nu^{w_0}_{l,\delta}$.
More precisely, consider $p\in\mathbb T^2$ and $n\in\mathbb N$. We write \[ n = \sum\limits_{j=1}^sn_j, \] where $n_1\in\{0\}\cup\mathbb N$ and $n_j\in\mathbb N$ for $j=2,\ldots,s$ are chosen in the following way: \begin{enumerate}
\item The number $n_1$ is the first moment when $\left(F^w_{l,\delta}\right)^{n_1}(p)\in \mathbb T^2\setminus S_2^{w_0}(\delta)$;
\item The number $n_2$ is such that the number $n_1+n_2$ is the first moment when $\left(F^w_{l,\delta}\right)^{n_1+n_2}(p)\in S_2^{w_0}(\delta)$;
\item The rest of the numbers are defined in the same way. For any $k\in\mathbb N$, the number $n_{2k+1}$ is such that the number $\sum\limits_{j=1}^{2k+1}n_j$ is the first moment when $\left(F^w_{l,\delta}\right)^{\sum\limits_{j=1}^{2k+1}n_j}(p)\in \mathbb T^2\setminus S_2^{w_0}(\delta)$, and the number $n_{2k+2}$ is such that the number $\sum\limits_{j=1}^{2k+2}n_j$ is the first moment when $\left(F^w_{l,\delta}\right)^{\sum\limits_{j=1}^{2k+2}n_j}(p)\in S_2^{w_0}(\delta)$. \end{enumerate}
Let $\pmb v\in\mathcal C_p^+$ and $\|\pmb v\|=1$. Then, we have \begin{equation*}
\log\|D_{p}\left(F^w_{l,\delta}\right)^n\pmb v\| =\sum\limits_{j=1}^s\log\|D_{\left(F^w_{l,\delta}\right)^{n_1+n_2+\ldots+n_{j-1}}(p)}\left(F^w_{l,\delta}\right)^{n_j}\pmb v_j\|, \end{equation*}
where $\pmb v_1=\pmb v$, $\pmb v_2 = \frac{D_{p}\left(F^w_{l,\delta}\right)^{n_1}\pmb v_1}{\|D_{p}\left(F^w_{l,\delta}\right)^{n_1}\pmb v_1\|}$, and $\pmb v_j = \frac{D_{\left(F^w_{l,\delta}\right)^{n_1+n_2+\ldots+n_{j-2}}(p)}\left(F^w_{l,\delta}\right)^{n_{j-1}}\pmb v_{j-1}}{\|D_{\left(F^w_{l,\delta}\right)^{n_1+n_2+\ldots+n_{j-2}}(p)}\left(F^w_{l,\delta}\right)^{n_{j-1}}\pmb v_{j-1}\|}$ for $j=3, \ldots, s.$ In particular, $\|\pmb v_j\|=1$ for $j=1,\ldots, s$.
Recall that $DF^w_{l,\delta} = A\left(\frac{\beta_0l+\delta(1-\beta_0)}{\delta}\right)$ on $S_2^{w_0}(\delta)$. Thus, using \eqref{general expansion} and \eqref{large expansion estimate},
we obtain for $k\in\mathbb N$ \begin{align*}
\|D_{\left(F^w_{l,\delta}\right)^{n_1+n_2+\ldots+n_{j-1}}(p)}\left(F^w_{l,\delta}\right)^{n_j}\pmb v_j\|&\geq \left(\mu^+\left(\frac{\beta_0l+\delta(1-\beta_0)}{\delta}\right)\right)^{n_j}C \qquad &\text{if}\qquad &j=2k-1,\\
\|D_{\left(F^w_{l,\delta}\right)^{n_1+n_2+\ldots+n_{j-1}}(p)}\left(F^w_{l,\delta}\right)^{n_j}\pmb v_j\|&\geq \mu^{n_j} \qquad &\text{if}\qquad &j=2k. \end{align*}
As a result, using the fact that $\mu>1$, we have \begin{align*}
\log\|D_{p}\left(F^w_{l,\delta}\right)^n\pmb v\| &\geq \left[\frac{s}{2}\right]\log C+\left(\log\mu^+\left(\frac{\beta_0l+\delta(1-\beta_0)}{\delta}\right)\right)\sum\limits_{k=1}^{[\frac{s}{2}]}n_{2k-1}+(\log\mu)\sum\limits_{k=1}^{[\frac{s}{2}]}n_{2k}\\
&\geq \left[\frac{s}{2}\right]\log C+\left(\log\mu^+\left(\frac{\beta_0l+\delta(1-\beta_0)}{\delta}\right)\right)\sum\limits_{k=1}^{[\frac{s}{2}]}n_{2k-1}. \end{align*}
Since $F^w_{l,\delta}$ is a smooth Anosov diffeomorphism, by Birkhoff's Ergodic Theorem we obtain that \begin{equation*} \frac{1}{n}\sum\limits_{k=1}^{[\frac{s}{2}]}n_{2k-1}\rightarrow \nu^{w_0}_{l,\delta}(S_2^{w_0}(\delta)) \,\,\text{ and } \,\,\frac{1}{n}\sum\limits_{k=1}^{[\frac{s}{2}]}n_{2k}\rightarrow \left(1-\nu^{w_0}_{l,\delta}(S_2^{w_0}(\delta))\right)\,\, \text{ as }\,\, n\rightarrow\infty. \end{equation*} Moreover, each visit to $\mathbb T^2\setminus S_2^{w_0}(\delta)$ lasts at least one iterate, so $\limsup\limits_{n\rightarrow\infty}\left(\frac{1}{n}[\frac{s}{2}]\right)\leq \left(1-\nu^{w_0}_{l,\delta}(S_2^{w_0}(\delta))\right)\leq (1-Q)$.
Therefore, we obtain \begin{equation}\label{mme_lower_Fldelta} \lambda_{mme}\left(F^{w_0}_{l,\delta}\right)\geq Q\log\mu^+\left(\frac{\beta_0 l+\delta(1-\beta_0)}{\delta}\right)+(1-Q)\log C \rightarrow +\infty \text{ as }\delta\rightarrow 0 \end{equation}
as $C\in(0,1)$ and $Q=Q(\beta_0,\delta_0)\in(0,1)$ are independent of $\delta$ while $w_0$ depends on $\delta$.
\section{Construction II}\label{section: decrease abs}
In this section, we show how to decrease the Lyapunov exponent with respect to the Lebesgue probability measure while controlling the Lyapunov exponent with respect to the measure of maximal entropy starting from the Anosov diffeomorphisms in Theorem~\ref{thm: increase max}. In this section, we use the construction described in \cite{HJJ} while providing estimates of the Lyapunov exponents. As before, we first state the main theorem of the section and give its proof before stating and proving the necessary lemmas.
\begin{theorem}\label{decrease abs for family} Let $L_A$ and $\Lambda$ be as in Theorem~\ref{increase max in intro}. For any $H>\Lambda$ and any $\gamma>0$, let $\{g_s\}_{s\in[0,1]}$ be a smooth family of area-preserving Anosov diffeomorphisms on $\mathbb T^2$ homotopic to $L_A$ from Theorem~\ref{increase max in intro} applied for $\gamma$ and $H$, with the lower bound on $\lambda_{mme}(g_1)$ coming from Lemma~\ref{lemma: bound on mme in I} being larger than $H$. Then, there exists a constant $\tilde C$ such that for any $\sigma>0$ and $S>\Lambda$ there exists
a smooth family $\{f_{s,t}\}_{(s,t)\in[0,1]\times[0,1]}$ of Anosov diffeomorphisms on $\mathbb T^2$ homotopic to $L_A$ such that: \begin{enumerate}[label=(\Alph*)] \item $f_{s,0}=g_s$ for all $s\in[0,1]$;\label{A} \item $f_{s,t}$ preserves a probability measure $\mu_{s,t}$ which is absolutely continuous with respect to the Lebesgue measure;\label{B}
\item $\lambda_{abs}(f_{s,1})<\gamma$ for all $s\in[0,1]$;\label{C} \item $\lambda_{mme}(f_{0,t})<S$ for all $t\in[0,1]$;\label{D} \item $\lambda_{mme}(f_{1,t})\geq H+\tilde C\sigma$.\label{E}
\end{enumerate} \end{theorem}
\begin{proof} We define $f_{s,0}=g_s$ for all $s\in[0,1]$ so we automatically have \ref{A} in the theorem. Moreover, by Theorem~\ref{increase max in intro}, we have that $\lambda_{abs}(f_{s,0})>\Lambda-\gamma$ and $\lambda_{mme}(f_{1,0})>H$ due to the special form of the lower bound on $\lambda_{mme}(f_{1,0})$ (see the proof of Theorem~\ref{thm: increase max}). Let $\tilde C=\log(K_1K_2^{-1})+\log(C)$ be as in Lemma~\ref{bound lambda_mme below with variables}. Choose $r_0$ small enough that Lemma~\ref{lemma: upper bound on lambda mme start from linear} and Lemma~\ref{bound lambda_mme below with variables} hold with the desired estimates. Apply Construction II to the family $\{f_{s,0}\}$ with the chosen $r_0$ and with $\eta$ decreasing to $\frac{\eta_1}{2}$, where $\eta_1$ comes from Lemma~\ref{lemma: decreasing abs path}. Thus, we have \ref{B} in the theorem. Furthermore, the choice of $\eta_1$ guarantees that we have \ref{C} in the theorem. Also, by the choice of $r_0$ satisfying Lemma~\ref{lemma: upper bound on lambda mme start from linear} with an appropriate choice of $\chi$, we have \ref{D} in the theorem. Finally, from the form of the lower bound \ref{lower bound in lemma on mme} in Lemma~\ref{bound lambda_mme below with variables}, we obtain \ref{E} in the theorem.
\end{proof}
\subsection{Construction II: Slow-down deformation near a fixed point}\label{section slowing}
We will now describe the slow-down deformation near a fixed point that we will use. The construction comes from \cite{HJJ} but was originally introduced by A. Katok to give an example of a Bernoulli area-preserving smooth diffeomorphism (on the boundary of the set of Anosov diffeomorphisms) with non-zero Lyapunov exponents on any surface (see \cite{Katokmap}). For more explanation, see Remark~\ref{remark on slow down}.
Recall that the family of diffeomorphisms, $\{f_{s,0}\}_{s\in[0,1]}$, built by Construction I has the properties that each $f_{s,0}$ has $(0,0)$ as a fixed point and is equal to $L_A$ on $\mathbb T^2\setminus \{(x,y)\in\mathbb T^2 | m-w_0<x<m+l+w_0\}$ where $m\in(0,1)$, $l\in(0,1-m)$, and $w_0>0$ (very small). We choose a coordinate system centered at $(0,0)$ with the basis consisting of eigenvectors $\pmb v^u=\pmb e^+(1)$ and $\pmb v^s=\pmb e^-(1)$ of $A$ (see \eqref{eigenvector}). In this coordinate system, $A = \begin{pmatrix}e^\Lambda & 0\\ 0 & e^{-\Lambda}\end{pmatrix}$.
Let $D_r = \{(s_1, s_2)| s_1^2+s_2^2\leq r^2\}$ be a disk of radius $r$ centered at $(0,0)$. Choose $0<r_0<1$ and set $r_1=2r_0\Lambda$. Then, we have \begin{equation*}
D_{r_0}\subset \Int A(D_{r_1})\cap \Int A^{-1}(D_{r_1}). \end{equation*}
The linear map $x\mapsto Ax$ is the time-one map of the local flow generated by the following system of differential equations in $D_{r_1}$:
\begin{equation}\label{original flow}
\frac{ds_1}{dt}=s_1\Lambda, \qquad \frac{ds_2}{dt} = -s_2\Lambda. \end{equation}
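As a quick numerical illustration that the time-one map of \eqref{original flow} acts as $\mathrm{diag}(e^\Lambda, e^{-\Lambda})$ in these coordinates, one can integrate the system over unit time (the value $\Lambda=\log\frac{3+\sqrt 5}{2}$, the expansion rate of the cat map, is an assumed sample value standing in for the $\Lambda$ of the text):

```python
import math

Lambda = math.log((3 + math.sqrt(5)) / 2)   # assumed sample value of Lambda > 0

def rhs(s1, s2):
    # right-hand side of the linear system ds1/dt = Lambda*s1, ds2/dt = -Lambda*s2
    return Lambda * s1, -Lambda * s2

def time_one_map(s1, s2, steps=1000):
    # classical RK4 integration of the flow up to time t = 1
    h = 1.0 / steps
    for _ in range(steps):
        k1 = rhs(s1, s2)
        k2 = rhs(s1 + h / 2 * k1[0], s2 + h / 2 * k1[1])
        k3 = rhs(s1 + h / 2 * k2[0], s2 + h / 2 * k2[1])
        k4 = rhs(s1 + h * k3[0], s2 + h * k3[1])
        s1 += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        s2 += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return s1, s2

# The time-one map should act as diag(e^Lambda, e^{-Lambda}) in these coordinates.
x1, x2 = time_one_map(0.01, 0.02)
assert abs(x1 - 0.01 * math.exp(Lambda)) < 1e-9
assert abs(x2 - 0.02 * math.exp(-Lambda)) < 1e-9
```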
Let $\alpha\in(0,\frac{1}{3})$ and $\varepsilon\in(0,1)$ such that $\frac{1+\alpha}{(1-\varepsilon)^{2(1+\alpha)}}<\frac{4}{3}$. Choose a $C^{\infty}$ function $\psi_0:[0,1]\rightarrow[0,1]$ satisfying: \begin{enumerate}
\item $\psi_0(u)=1$ for $u\geq r_0^2$;
\item \label{property_psi0} $\psi_0(u)=\left(\frac{u}{r_0^2}\right)^{1+\alpha}$ for $0\leq u\leq\left(\frac{r_0}{2}\right)^2$;\label{psi0 near 0}
\item $\psi_0(u)>0$ and $\psi'_0(u)\geq 0$ for $0<u<r_0^2$; \label{monotonicity psi0}
\item $\psi_0(u)\geq \left(\frac{u}{r_0^2}\right)^{1+\alpha}$ and $\psi_0'(u)\leq \frac{1+\alpha}{(1-\varepsilon)^{2(1+\alpha)}r_0^2}\left(\frac{u}{r_0^2}\right)^\alpha$ for $u\in[\left(\frac{r_0}{2}\right)^2, r_0^2]$; \label{stronger bounds on psi0}
\item In particular, for the considered $\alpha$ and $\varepsilon$, we have $\psi'_0(u)\leq \frac{4}{3r_0^2}$ when $u\in[\left(\frac{r_0}{2}\right)^2, r_0^2]$.\label{bound derivative} \end{enumerate} Notice that by \ref{property_psi0} the derivative of $\psi_0$ at $\left(\frac{r_0}{2}\right)^2$ is $\frac{1+\alpha}{2^{2\alpha}r_0^2}$. See Figure~\ref{psi_graph} for an example of $\psi_0(u)$.
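The listed properties of $\psi_0$ can be checked numerically on the region where property \ref{psi0 near 0} prescribes $\psi_0$ exactly; the parameter values $\alpha=0.1$, $\varepsilon=0.05$, $r_0=0.5$ below are assumed sample choices (the text only requires $\alpha\in(0,\frac13)$, $\varepsilon\in(0,1)$ with $\frac{1+\alpha}{(1-\varepsilon)^{2(1+\alpha)}}<\frac43$):

```python
# Hedged numerical check of the psi_0 properties, with assumed sample
# parameters alpha = 0.1, eps = 0.05, r0 = 0.5 (not values fixed in the text).
alpha, eps, r0 = 0.1, 0.05, 0.5

# The smallness condition (1+alpha)/(1-eps)^{2(1+alpha)} < 4/3 holds:
assert (1 + alpha) / (1 - eps) ** (2 * (1 + alpha)) < 4 / 3

def psi0_core(u):
    # psi_0 on [0, (r0/2)^2], where property (2) prescribes it exactly
    return (u / r0**2) ** (1 + alpha)

# One-sided derivative at u = (r0/2)^2 matches (1+alpha)/(2^{2*alpha} r0^2):
u_star = (r0 / 2) ** 2
h = 1e-8
num = (psi0_core(u_star) - psi0_core(u_star - h)) / h
exact = (1 + alpha) / (2 ** (2 * alpha) * r0**2)
assert abs(num - exact) < 1e-4
```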
\begin{figure}
\caption{An example of $\psi_0(u)$.}
\caption{An example of $\psi_{\eta}(u)$.}
\label{psi_graph}
\label{psi_eta_graph}
\end{figure}
Define a one-parameter family of $C^{\infty}$ functions $\psi_\eta:[0,1]\rightarrow[0,1]$, where $0\leq \eta\leq 2r_0^2$, such that: \begin{enumerate}
\item $\psi_\eta(u)>0$ and $\psi'_\eta(u)\geq 0$ for $0<u<r_0^2$; \label{increase psi_eta}
\item $\psi_\eta(u)=1$ for $u\geq r_0^2$; \label{right end psi_eta}
\item $\psi_\eta(u) = \psi_0(\frac{\eta}{2})$ for $0\leq u\leq \frac{\eta}{4}$;
\item $\psi_\eta(u)=\psi_0(u)$ for $u\geq \eta$;
\item if $\eta_1\leq \eta_2$, then $\psi_{\eta_1}(u)\leq \psi_{\eta_2}(u)$ for every $u\in[0,1]$;\label{monotone}
\item \label{derivative for all eta} $\psi'_\eta(u)\leq \frac{1+\alpha}{(1-\varepsilon)^{2(1+\alpha)}r_0^2}\left(\frac{u}{r_0^2}\right)^{\alpha}\leq\frac{4}{3r_0^2}$ for $u\in(0,r_0^2)$; in particular, $\psi'_\eta(u)\leq\frac{4}{3r_0^2}$ for $u\in(0,1)$;
\item $\psi_\eta(u)\rightarrow \psi_0(u)$ as $\eta\rightarrow 0$ pointwise on $[0,1]$;\label{convergence}
\item The map $(\eta,u)\mapsto \psi_\eta(u)$ is $C^\infty$ smooth.\label{differentiable} \end{enumerate}
Notice that $\psi_{2r_0^2}(u)\equiv 1$ for $u\in[0,1]$. See Figure~\ref{psi_eta_graph} for an example of $\psi_{\eta}(u)$. Also, we have \begin{equation}\label{integral diverge}
\int_0^1\frac{1}{\psi_0(u)}\,du \quad \text{diverges} \quad \text{and} \quad \int_0^1\frac{1}{\psi_\eta(u)}\,du<\infty\quad\text{for}\quad \eta>0. \end{equation}
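The dichotomy in \eqref{integral diverge} comes from the power-law behavior of $\psi_0$ near $u=0$: by property \ref{psi0 near 0}, $\frac{1}{\psi_0(u)}=\left(\frac{r_0^2}{u}\right)^{1+\alpha}$ there, and $\int_0 u^{-(1+\alpha)}\,du$ diverges, while $\frac{1}{\psi_\eta}$ is bounded by $\frac{1}{\psi_0(\eta/2)}$ for $\eta>0$. A hedged numerical sketch (the values $\alpha=0.1$, $r_0=0.5$, $\eta=0.01$ are assumed for illustration only):

```python
import math

# Assumed sample parameters (for illustration only): alpha = 0.1, r0 = 0.5.
alpha, r0 = 0.1, 0.5

def tail_integral(eps):
    # closed form of int_eps^{(r0/2)^2} (r0^2/u)^{1+alpha} du, which is the
    # contribution of 1/psi_0 near u = 0 by property (2) of psi_0
    a, b = eps, (r0 / 2) ** 2
    return r0 ** (2 * (1 + alpha)) * (a ** (-alpha) - b ** (-alpha)) / alpha

vals = [tail_integral(10.0 ** (-k)) for k in (3, 6, 9)]
assert vals[0] < vals[1] < vals[2]      # grows without bound as eps -> 0

# For eta > 0, 1/psi_eta is bounded by 1/psi_0(eta/2) < infinity, so the
# corresponding integral is finite; eta = 0.01 here.
bound = 1 / ((0.01 / 2) / r0**2) ** (1 + alpha)
assert math.isfinite(bound)
```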
\begin{remark}\label{remark on slow down} Note that in \cite{Katokmap} the function $\psi_0$ (in the notation above) is such that $\int_0^1\frac{1}{\psi_0(u)}du$ converges. In comparison to \cite{HJJ} and \cite{Katokmap}, we consider a more explicit choice of $\psi_0$ which allows for the estimation of Lyapunov exponents. Furthermore, the maps that we work with are Anosov as we are not considering the map coming from $\psi_0$ itself. This is analogous to \cite[Corollary 4.2]{Katokmap}. \end{remark}
Consider the following slow-down deformation of the flow described by the system \eqref{original flow} in $D_{r_0}$: \begin{equation}\label{slow down flow}
\frac{ds_1}{dt}=s_1\psi_\eta(s_1^2+s_2^2)\Lambda, \qquad \frac{ds_2}{dt} = -s_2\psi_\eta(s_1^2+s_2^2)\Lambda. \end{equation} Let $g_\eta$ be the time-one map of this flow. Observe that $g_\eta$ is defined and of class $C^\infty$ in $D_{r_1}$, and it coincides with $L_A$ in a neighborhood of $\partial D_{r_1}$ by the choice of $\psi_\eta$, $r_0$, and $r_1$. As a result, for sufficiently small $r_0$ we obtain a $C^{\infty}$ diffeomorphism \begin{equation}\label{map G_w}
G_{s,\eta}(x) = \left\{
\begin{aligned}
&f_{s,0}(x) \quad&\text{if}\quad &x\in \mathbb T^2\setminus D_{r_1},\\
&g_\eta(x) \quad&\text{if}\quad&x\in D_{r_1}.
\end{aligned}
\right. \end{equation}
Using \eqref{integral diverge} for $\eta>0$, we can define a positive $C^{\infty}$ function \[ \kappa_\eta(s_1,s_2):=\left\{
\begin{aligned}
&\left(\psi_\eta(s_1^2+s_2^2)\right)^{-1} \quad&\text{if}\quad (s_1,s_2)\in D_{r_0},\\
&1 \quad&\text{otherwise},
\end{aligned}
\right. \]
and its average
\[ K_\eta:=\int_{\mathbb T^2}\kappa_\eta\,d\Leb. \]
For $\eta>0$ and $s\in[0,1]$, the diffeomorphism $G_{s,\eta}$ preserves the probability measure $d\mu_\eta = K_\eta^{-1}\kappa_\eta\,d\Leb$, i.e., $\mu_\eta$ is absolutely continuous with respect to the Lebesgue measure. See the paragraph containing equation (3.2) in \cite{HJJ} for the idea of the proof.
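The invariance of $\mu_\eta$ reflects the identity $\operatorname{div}(\kappa_\eta X)=0$, where $X$ is the vector field of \eqref{slow down flow}: the factor $\psi_\eta$ cancels against $\kappa_\eta$, leaving the divergence-free linear field. A hedged numerical sketch of this cancellation, with an assumed power-law stand-in for $\psi_\eta$ and sample values $\Lambda=1$, $\alpha=0.1$, $r_0=0.5$:

```python
Lam = 1.0                 # assumed sample value of Lambda
alpha, r0 = 0.1, 0.5      # assumed parameters of the power-law stand-in

def psi(u):
    # stand-in for psi_eta on (0, r0^2); any positive smooth function works here
    return (u / r0**2) ** (1 + alpha)

def kappa_X(s1, s2):
    # kappa * X = (1/psi) * (Lam*s1*psi, -Lam*s2*psi); psi cancels exactly
    u = s1**2 + s2**2
    return Lam * s1 * psi(u) / psi(u), -Lam * s2 * psi(u) / psi(u)

# central-difference divergence of kappa * X at a sample interior point
h, p1, p2 = 1e-6, 0.1, 0.07
div = ((kappa_X(p1 + h, p2)[0] - kappa_X(p1 - h, p2)[0]) / (2 * h)
       + (kappa_X(p1, p2 + h)[1] - kappa_X(p1, p2 - h)[1]) / (2 * h))
assert abs(div) < 1e-6    # div(kappa_eta X) = 0, so kappa_eta * Leb is invariant
```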
Let $p\in\mathbb T^2$. Consider the cones \begin{equation}\label{bigger cones}
\mathcal K^+(p) = \{\xi_1\pmb v^u+\xi_2\pmb v^s \,|\, \xi_1,\xi_2\in\mathbb R, |\xi_2|\leq |\xi_1|\} \,\text{ and } \mathcal K^-(p)= \{\xi_1\pmb v^u+\xi_2\pmb v^s \,|\, \xi_1,\xi_2\in\mathbb R, |\xi_1|\leq |\xi_2|\} \end{equation} in $T_p\mathbb T^2$.
\begin{lemma}(cf. Proposition 4.1 in \cite{Katokmap})\label{invariant_cones_katokmap} For every $s\in[0,1]$, $\eta>0$, and $p\in\mathbb T^2$ we have \begin{equation*} DG_{s,\eta}\mathcal K^+(p)\subsetneq \mathcal K^+(G_{s,\eta}(p)) \, \text{ and }\, DG_{s,\eta}^{-1}\mathcal K^-(p)\subsetneq \mathcal K^-(G_{s,\eta}^{-1}(p)). \end{equation*}
Moreover, $E_{s,\eta}^+(p) = \bigcap_{j=0}^{\infty}DG_{s,\eta}^j\mathcal K^+(G_{s,\eta}^{-j}(p))$ and $E_{s,\eta}^-(p) = \bigcap_{j=0}^{\infty}DG_{s,\eta}^{-j}\mathcal K^-(G_{s,\eta}^{j}(p))$ are one-dimensional subspaces of $T_p\mathbb T^2$. \end{lemma}
\begin{proof} The cases for $\mathcal K^+(p)$ and $\mathcal K^-(p)$ are similar. Thus, we restrict ourselves to the inclusion for $\mathcal K^+(p)$.
Notice that the vector $\begin{pmatrix}0\\1\end{pmatrix}$ in the standard Euclidean coordinates is equal to $\frac{1}{2\sqrt{(a+d)^2-4}}(\pmb v^u-\pmb v^s)$. As a result, using \eqref{boundaries of cone in basis of eigenvectors}, we obtain that $\mathcal C^+_p\subset \mathcal K^+(p)$ and $\mathcal C^-_p\subset \mathcal K^-(p)$ (see Lemma~\ref{Anosov diffeomorphism} for notation). By the constructions of $f_{s,0}$ and $G_{s,\eta}$ and Proposition~\ref{eigenvector_eigenvalue_proposition}, the desired inclusion holds outside of the disk $D_{r_1}$.
The system of variational equations corresponding to the system~\eqref{slow down flow} implies that the following equation holds for the tangent $\zeta_\eta$:
\begin{equation}\label{tangent equation} \frac{d\zeta_\eta}{dt} = -2\Lambda((\psi_\eta(s_1^2+s_2^2)+(s_1^2+s_2^2)\psi'_\eta(s_1^2+s_2^2))\zeta_\eta+s_1s_2\psi_{\eta}'(s_1^2+s_2^2)(\zeta_\eta^2+1)). \end{equation}
Since $\psi_\eta(s_1^2+s_2^2)>0$ in $D_{r_1}$ for $\eta>0$, substituting $\zeta_\eta=1$ and $\zeta_\eta=-1$ in \eqref{tangent equation} gives, respectively, \begin{align*} \frac{d\zeta_\eta}{dt} = -2\Lambda(\psi_\eta(s_1^2+s_2^2)+(s_1+s_2)^2\psi'_\eta(s_1^2+s_2^2))<0,\\ \frac{d\zeta_\eta}{dt} = 2\Lambda(\psi_\eta(s_1^2+s_2^2)+(s_1-s_2)^2\psi'_\eta(s_1^2+s_2^2))>0. \end{align*}
Thus, the desired inclusion follows.
The statement that $E_{s,\eta}^+(p)$ and $E_{s,\eta}^-(p)$ are one-dimensional subspaces follows from the argument in \cite[Proposition 4.1]{Katokmap}. \end{proof}
Lemma~\ref{invariant_cones_katokmap} and the fact that $G_{s,\eta}$ preserves a smooth positive measure imply the following.
\begin{corollary} $G_{s,\eta}$ is a $C^\infty$ Anosov diffeomorphism on $\mathbb T^2$ for any $\eta>0$. \end{corollary}
\subsection{Estimation of $\lambda_{abs}$ in Construction II}\label{section: abs in II}
We use the notation introduced in Section~\ref{section slowing}. The estimation of $\lambda_{abs}(G_{s,\eta})$ follows from the estimation of metric entropy in \cite[Section 3]{HJJ} which we provide here for completeness. From the following lemma, we can guarantee \eqref{decrease 2} in Theorem~\ref{thm: decrease abs}.
\begin{lemma}\label{lemma: decreasing abs path} For any $\gamma>0$ there exists $\eta_1$ such that for any $0<\eta<\eta_1$ and $s\in[0,1]$ we have $\lambda_{abs}(G_{s,\eta})=\lambda_{\mu_{\eta}}(G_{s,\eta})<\gamma$. \end{lemma} \begin{proof} Let $U$ be any fixed neighborhood of $(0,0)$ with $U\subset D_{r_1}$. By \eqref{integral diverge} and properties \ref{monotone} and \ref{convergence} of the family of positive functions $\{\psi_\eta\}_{\eta\in[0,r_0^2]}$ defined above, applying the monotone convergence theorem, we have \begin{equation*} \lim\limits_{\eta\rightarrow 0}\int_{\mathbb T^2}\kappa_\eta\, d\Leb = \int_{\mathbb T^2}\lim\limits_{\eta\rightarrow 0}\kappa_\eta\, d\Leb = \int_{\mathbb T^2}\kappa_0\, d\Leb = \infty \end{equation*} and \begin{equation*} \lim\limits_{\eta\rightarrow 0}\int_{\mathbb T^2\setminus U}\kappa_\eta\, d\Leb = \int_{\mathbb T^2\setminus U}\lim\limits_{\eta\rightarrow 0}\kappa_\eta\, d\Leb = \int_{\mathbb T^2\setminus U}\kappa_0\, d\Leb < \infty. \end{equation*}
Therefore, we have the following equalities: \begin{equation}\label{measure of complement neighborhood}
\lim\limits_{\eta\rightarrow 0}\mu_\eta(\mathbb T^2\setminus U) =\lim\limits_{\eta\rightarrow 0}\int_{\mathbb T^2\setminus U}K_\eta^{-1}\kappa_\eta\, d\Leb = \lim\limits_{\eta\rightarrow 0}\frac{\int_{\mathbb T^2\setminus U}\kappa_\eta\,d\Leb}{\int_{\mathbb T^2}\kappa_\eta\,d\Leb} =0 \end{equation} and \begin{equation*}
\lim\limits_{\eta\rightarrow 0}\mu_\eta(U) = 1. \end{equation*}
By the ergodicity of $\mu_\eta$ for $G_{s,\eta}$ for $\eta>0$, \eqref{Lyapunov exponent through integral} implies \begin{equation*}
\lambda_{\mu_\eta}(G_{s,\eta}) = \int_{\mathbb T^2}\log\left|DG_{s,\eta}|_{E^u_x(G_{s,\eta})} \right|\,d\mu_\eta, \end{equation*} where $E^u_x(G_{s,\eta})$ is the unstable subspace at $x$ with respect to $G_{s,\eta}$.
By property \ref{psi0 near 0} of $\psi_0$ and property \ref{differentiable} of $\psi_\eta$, we obtain that for any $\gamma>0$ there exist $\rho>0$ and $\eta_0>0$ such that $DG_{s,\eta}$ is close to the identity map and $\log\left|DG_{s,\eta}|_{E^u_x(G_{s,\eta})}\right|<\frac{\gamma}{2}$ in $D_{\rho}$ for $s\in[0,1]$ and $0<\eta<\eta_0$. Therefore, for any $s\in[0,1]$ \begin{equation}\label{estimate on small ball}
\int_{D_\rho}\log\left|DG_{s,\eta}|_{E^u_x(G_{s,\eta})}\right|\,d\mu_\eta< \frac{\gamma}{2}. \end{equation}
Moreover, by property \ref{differentiable}, we have that $\log\left|DG_{s,\eta}|_{E^u_x(G_{s,\eta})}\right|$ is uniformly bounded. Therefore, there exists $\eta_1<\eta_0$ such that for any $0<\eta<\eta_1$ we have
\begin{equation}\label{estimate outside}
\int_{\mathbb T^2\setminus D_\rho}\log\left|DG_{s,\eta}|_{E^u_x(G_{s,\eta})}\right|\,d\mu_\eta< \frac{\gamma}{2}. \end{equation}
By \eqref{estimate on small ball} and \eqref{estimate outside}, we obtain the lemma. \end{proof}
\subsection{Estimation of $\lambda_{mme}$ in Construction II}\label{section: mme in II}
We use the notation introduced in Section~\ref{section slowing}.
\subsubsection*{Upper bound on $\lambda_{mme}$}
Here, we prove Lemma~\ref{lemma: upper bound on lambda mme start from linear} which provides an upper bound on $\lambda_{mme}$ for a family of maps in Construction II.
The main ingredient in obtaining an upper bound on $\lambda_{mme}$ is an estimate, independent of $r_0$ and $\eta$, of the consecutive time each trajectory spends in the annulus $D_{2\Lambda r_0}\setminus D_{\frac{r_0}{2\Lambda}}$.
Recall that $\Lambda = h_{top}(L_A)$ and $\alpha$ is a constant that appears in the second condition on $\psi_0$ (see Section~\ref{section slowing}).
\begin{lemma}(cf. Lemma 5.6 in \cite{PSZ})\label{time in annulus} There exists $T_0>0$ depending only on $\Lambda$ and $\alpha$ such that for any solution $s_\eta(t) = (s_1(t), s_2(t))_\eta$ of \eqref{slow down flow} with $s_\eta(0)\in D_{r_1}$, we have \begin{equation*}
\max\{t| s_\eta(t)\in D_{r_1}\setminus D_{\frac{r_0}{2\Lambda}}\}<T_0 \end{equation*} for any $\eta\in[0,2r_0^2]$, where $r_1=2\Lambda r_0$. \end{lemma}
\begin{proof} We omit the dependence of $s_1$ and $s_2$ on $\eta$ in the notation below. Let $u=s_1^2+s_2^2$.
Assume $s_1^2 \leq s_2^2$. Then, by \eqref{slow down flow}, we have \begin{align}\label{s1<s2}
\frac{du}{dt} = 2\Lambda\psi_\eta(u)(s_1^2-s_2^2) = -2\Lambda\psi_\eta(u)(u^2-4s_1^2s_2^2)^{\frac{1}{2}}. \end{align}
For $s_2^2\leq s_1^2$, by \eqref{slow down flow}, we have \begin{align}\label{s2<s1}
\frac{du}{dt} = 2\Lambda\psi_\eta(u)(s_1^2-s_2^2) = 2\Lambda\psi_\eta(u)(u^2-4s_1^2s_2^2)^{\frac{1}{2}}. \end{align}
Recall that by \eqref{slow down flow} we have $s_1(t)s_2(t)=s_1(0)s_2(0)$ for any $t$.
If $4s_1^2(0)s_2^2(0)\leq \frac{r_0^4}{32\Lambda^4}$, then, under the assumptions $s_1^2\leq s_2^2$ and $\left(\frac{r_0}{2\Lambda}\right)^2<u\leq r_1^2$, we have
\begin{align*}
\frac{du}{dt} \leq -2\Lambda\psi_\eta(u)\left(\left(\frac{r_0}{2\Lambda}\right)^4-\frac{r_0^4}{32\Lambda^4}\right)^{\frac{1}{2}} = -\Lambda^{-1}\psi_\eta(u)\frac{r_0^2}{2\sqrt{2}}\leq -\Lambda^{-1}\psi_0\left(\frac{r_0^2}{(2\Lambda)^2}\right)\frac{r_0^2}{2\sqrt{2}}\leq -2^{-4-2\alpha}\Lambda^{-3-2\alpha} r_0^2. \end{align*}
Similarly, under the assumption $s_2^2\leq s_1^2$ and $\left(\frac{r_0}{2\Lambda}\right)^2<u\leq r_1^2$, we have
\begin{align*}
\frac{du}{dt} \geq 2\Lambda\psi_\eta(u)\left(\left(\frac{r_0}{2\Lambda}\right)^4-\frac{r_0^4}{32\Lambda^4}\right)^{\frac{1}{2}}\geq 2^{-4-2\alpha}\Lambda^{-3-2\alpha} r_0^2. \end{align*} Therefore, under the assumption $s_1^2\leq s_2^2$, starting from $u(0)=r_1^2 = 4\Lambda^2r_0^2$, it takes at most $t=2^{2+2\alpha}\Lambda^{1+2\alpha}(16\Lambda^4-1)$ time to reach $u=\left(\frac{r_0}{2\Lambda}\right)^2$, unless the assumption $s_1^2\leq s_2^2$ is violated. If the assumption $s_1^2\leq s_2^2$ is violated, then, by the symmetry of \eqref{s1<s2} and \eqref{s2<s1}, the orbit will leave $D_{r_1}$ in at most $2\cdot 2^{2+2\alpha}\Lambda^{1+2\alpha}(16\Lambda^4-1)$ time. A similar argument works if we start from $u(0)=\left(\frac{r_0}{2\Lambda}\right)^2$ under the assumption $s_2^2\leq s_1^2$.
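For clarity, the time bound in the first case follows by dividing the total decrease of $u$ by the rate estimate above: starting from $u(0)=r_1^2=4\Lambda^2 r_0^2$ and ending at $u=\left(\frac{r_0}{2\Lambda}\right)^2$,
\begin{equation*}
t\leq\frac{4\Lambda^2 r_0^2-\frac{r_0^2}{4\Lambda^2}}{2^{-4-2\alpha}\Lambda^{-3-2\alpha}r_0^2} = \frac{\frac{r_0^2}{4\Lambda^2}\left(16\Lambda^4-1\right)}{2^{-4-2\alpha}\Lambda^{-3-2\alpha}r_0^2} = 2^{2+2\alpha}\Lambda^{1+2\alpha}\left(16\Lambda^4-1\right).
\end{equation*}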
Assume that $4s_1^2(0)s_2^2(0)>\frac{r_0^4}{32\Lambda^4}$. If the trajectory is in $D_{r_1}$, then $r_1^2\geq s_1^2(t)+s_2^2(t)\geq s_2(t)^2$, in particular, $\frac{r_0^4}{32\Lambda^4}<4s_1^2(t)s_2^2(t)\leq4s_1^2(t)r_1^2$, and therefore, $s_1^2(t)>\frac{r_0^2}{512\Lambda^6}$. Using \eqref{slow down flow}, we obtain \begin{equation*} \frac{d}{dt}\left(s_1^2\right) = 2s_1\frac{d}{dt}s_1 = 2s_1^2\psi_\eta(u)\Lambda>\frac{r_0^2}{\Lambda^{7+2\alpha}}2^{-10-2\alpha}. \end{equation*} Therefore, $s_1^2(t)$ will increase to $r_1^2$ and the orbit will leave $D_{r_1}$ in at most $2^{12+2\alpha}\Lambda^{9+2\alpha}$ time.
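Explicitly, since $s_1^2(t)$ increases at rate at least $2^{-10-2\alpha}\Lambda^{-7-2\alpha}r_0^2$ and has to grow by at most $r_1^2=4\Lambda^2 r_0^2$ before the orbit leaves $D_{r_1}$, the exit time is at most
\begin{equation*}
\frac{4\Lambda^2 r_0^2}{2^{-10-2\alpha}\Lambda^{-7-2\alpha}r_0^2} = 2^{12+2\alpha}\Lambda^{9+2\alpha}.
\end{equation*}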
Finally, $T_0 = \max\{2^{3+2\alpha}\Lambda^{1+2\alpha}(16\Lambda^4-1), 2^{12+2\alpha}\Lambda^{9+2\alpha}\}$. \end{proof}
The following lemma could be of independent interest as we obtain an upper bound on the forward Lyapunov exponents for $G_{0,\eta}$. We recall that $G_{0,\eta}$ depends on the size of the ball where the slow-down deformation is done, i.e., it depends on $r_0$. Moreover, $G_{0,2r_0^2} = L_A$.
\begin{lemma}\label{lemma: upper bound on lambda mme start from linear}
For any $\chi>0$ there exists $r_\chi\in(0,1)$ such that for any $r_0\in(0,r_\chi)$, $x\in\mathbb T^2$ and $\pmb v\in T_x\mathbb T^2$ with $\|\pmb v\|\neq 0$ \begin{equation*} \lambda(G_{0,\eta},x,\pmb v)<\Lambda+\chi \end{equation*}
for all $\eta\in(0,2r_0^2]$, where $\lambda(G_{0,\eta},x,\pmb v) = \limsup\limits_{n\rightarrow \infty}\frac{1}{n}\log\|D_xG_{0,\eta}^n\pmb v\|$ is the forward Lyapunov exponent of $(x,\pmb v)$ with respect to $G_{0,\eta}$.
In particular, $\lambda_{mme}(G_{0,\eta})<\Lambda+\chi$ for all $\eta\in(0,2r_0^2]$. \end{lemma}
\begin{proof}
Consider $x\in\mathbb T^2$ and $\pmb v\in T_x\mathbb T^2$ with $\|\pmb v\|\neq 0$. Let $n$ be a natural number. We write
$$n = \sum\limits_{j=1}^{s}n_j,$$ where the numbers $n_j\in\{0\}\cup\mathbb N$ are chosen in the following way:
\begin{enumerate}
\item The number $n_1$ is the first moment when $G_{0,\eta}^{n_1}(x)\in D_{r_1}\setminus D_{\frac{r_0}{2\Lambda}}$;
\item The number $n_2$ is such that the number $n_1+n_2$ is the first moment when $G_{0,\eta}^{n_1+n_2}(x)\in D_{\frac{r_0}{2\Lambda}}$;
\item The number $n_3$ is such that the number $n_1+n_2+n_3$ is the first moment when $G_{0,\eta}^{n_1+n_2+n_3}(x)\in D_{r_1}\setminus D_{\frac{r_0}{2\Lambda}}$;
\item The number $n_4$ is such that the number $n_1+n_2+n_3+n_4$ is the first moment when $G_{0,\eta}^{n_1+n_2+n_3+n_4}(x)\not\in D_{r_1}$;
\item The rest of the numbers are defined following the same pattern. \end{enumerate}
If $x\in\mathbb T^2$ is such that the $G_{0,\eta}$-orbit of $x$ does not enter into $D_{r_1}$, then $\lambda(G_{0,\eta},x,\pmb v)\leq \Lambda$ because $G_{0,\eta} = L_A$ outside of $D_{r_1}$.
Assume the $G_{0,\eta}$-orbit of $x$ enters into $D_{r_1}$. By the definition of $\lambda(G_{0,\eta},x,\pmb v)$, in that case it is enough to consider the case when $G_{0,\eta}^{-1}(x)\in D_{r_1}$ but $x\not\in D_{r_1}$.
We have \begin{align}\label{expansion along path}
\log\|D_xG_{0,\eta}^n\pmb v\| = \log\|\pmb v\|+ \sum\limits_{j=1}^s\log\|D_{G_{0,\eta}^{n_1+n_2+\ldots+n_{j-1}}(x)}G_{0,\eta}^{n_j}\pmb v_j\|, \end{align}
where $\pmb v_1=\frac{\pmb v}{\|\pmb v\|}$, $\pmb v_2 = \frac{D_xG_{0,\eta}^{n_1}\pmb v_1}{\|D_xG_{0,\eta}^{n_1}\pmb v_1\|}$, and $\pmb v_j = \frac{D_{G_{0,\eta}^{n_1+n_2+\ldots+n_{j-2}}(x)}G_{0,\eta}^{n_{j-1}}\pmb v_{j-1}}{\|D_{G_{0,\eta}^{n_1+n_2+\ldots+n_{j-2}}(x)}G_{0,\eta}^{n_{j-1}}\pmb v_{j-1}\|}$ for $j=3, \ldots, s$. In particular, $\|\pmb v_j\|=1$ for $j=1, \ldots, s$.
Recall that $G_{0,\eta}$ coincides with $L_A$ in $\mathbb T^2\setminus D_{r_1}$ (see \eqref{map G_w}). Thus, for any $N\in \mathbb N$, there exists a positive number $\theta=\theta(N, \Lambda)$ such that if $r_1<\theta$, i.e., $r_0<\frac{\theta}{2\Lambda}$, then for any $y$ such that $G_{0,\eta}^{-1}(y)\in D_{r_1}$ but $y\not\in D_{r_1}$ we have $G_{0,\eta}^n(y)\not\in D_{r_1}$ for any $\eta\in(0,2r_0^2]$ and $0\leq n\leq N$. Therefore, if $r_0$ is sufficiently small, then $n_1\geq N$.
The coefficient matrix of the variational equation \eqref{slow down flow} is \begin{align}\label{matrix variational}
C_\eta&(s_1(t), s_2(t)) =\\
&=\Lambda\begin{pmatrix} \psi_\eta(s_1^2(t)+s_2^2(t))+2s_1^2(t)\psi'_\eta(s_1^2(t)+s_2^2(t)) & 2s_1(t)s_2(t)\psi'_\eta(s_1^2(t)+s_2^2(t))\\
-2s_1(t)s_2(t)\psi'_\eta(s_1^2(t)+s_2^2(t)) &-\psi_\eta(s_1^2(t)+s_2^2(t))-2s_2^2(t)\psi'_\eta(s_1^2(t)+s_2^2(t))
\end{pmatrix}.\nonumber \end{align}
Let $s_\eta(t)=(s_1(t),s_2(t))_\eta$ be the solution of \eqref{slow down flow} with initial condition $s_\eta(0)=x$. Denote by $A_\eta(t)$ a $2\times 2$-matrix solving the variational equation \begin{equation}\label{equation for differential}
\frac{dA_\eta(t)}{dt}=C_\eta(s_\eta(t))A_\eta(t) \end{equation}
with initial condition $A_\eta(0)=Id$. Then, $A_\eta(1)=D_xG_{0,\eta}$.
Moreover, by \eqref{equation for differential} and the Cauchy-Schwarz inequality, we have for any vector $\pmb v$ \begin{equation}\label{estimate of the norm}
\frac{d\|A_\eta(t)\pmb v\|}{dt}\leq \left\|\frac{d[A_\eta(t)\pmb v]}{dt}\right\|=\left\|C_\eta(s_\eta(t))A_\eta(t)\pmb v\right\|\leq \|C_\eta(s_\eta(t))\|_{op}\|A_\eta(t)\pmb v\|, \end{equation}
where $\|\cdot\|_{op}$ is the operator norm.
By \eqref{estimate of the norm}, \eqref{matrix variational}, the definition of $\psi_\eta$, and property \ref{bound derivative} of $\psi_0$, we have that there exists a positive constant $M$ independent of $r_0$ and $\eta$ such that $\|A_\eta(t)\pmb v\|\leq e^{Mt}\|\pmb v\|$ for any $t$ and any vector $\pmb v$. In particular, $\|D_xG_{0,\eta}\|_{op}\leq e^M$ if $x\in D_{r_1}\setminus D_{\frac{r_0}{2\Lambda}}$.
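In more detail, the existence of $M$ is an instance of Gr\"onwall's inequality: if $\|C_\eta(s_\eta(t))\|_{op}\leq M$ for all $t$, then \eqref{estimate of the norm} integrates to
\begin{equation*}
\|A_\eta(t)\pmb v\|\leq \|\pmb v\|\exp\left(\int_0^t\|C_\eta(s_\eta(\tau))\|_{op}\,d\tau\right)\leq e^{Mt}\|\pmb v\|.
\end{equation*}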
Consider $x\in D_{\frac{r_0}{2\Lambda}}$. Note that, by \eqref{slow down flow}, we have that for any $t\in[0,1]$ and $\eta\in(0,2r_0^2]$ the image of $x$ under the time-$t$ map of the flow \eqref{slow down flow} is in $D_{\frac{r_0}{2}}$. In particular, $G_{0,\eta}(D_{\frac{r_0}{2\Lambda}})\subseteq D_{\frac{r_0}{2}}$ for any $\eta\in(0,2r_0^2]$.
Recall that if $u\in[0,\left(\frac{r_0}{2}\right)^2]$, then $\psi_0(u) = \left(\frac{u}{r_0^2}\right)^{1+\alpha}$ and $\psi'_0(u)=\frac{1+\alpha}{r_0^2}\left(\frac{u}{r_0^2}\right)^{\alpha}$. Therefore, by the choice of $\psi_\eta(u)$ for $u\in[\frac{\eta^2}{r_0^2},\eta]$, we can guarantee that $0\leq \psi_\eta(u)\leq 2^{-2-2\alpha}$ and $0\leq \psi'_\eta(u)\leq \frac{1+\alpha}{r_0^2}2^{-2\alpha}$ for $u\in[0,\left(\frac{r_0}{2}\right)^2]$ and $\eta\in(0,2r_0^2]$. Then, $\|C_\eta(s_\eta(t))\|_{op}\leq \Lambda$ if $s_\eta(t)\in D_{\frac{r_0}{2}}$ because $2^{-2-2\alpha}(3+2\alpha)\in\left(0,\frac{3}{4}\right]$ and $2^{-1-2\alpha}(1+\alpha)\in\left(0,\frac{1}{2}\right]$ for $\alpha>0$. Therefore, if $x\in D_{\frac{r_0}{2\Lambda}}$, then $\|D_xG_{0,\eta}\|_{op}\leq e^{\Lambda}$. Thus, using \eqref{expansion along path}, and Lemma~\ref{time in annulus}, we obtain that for any $\chi>0$ there exists sufficiently small $r_\chi$ such that for any $r_0\in(0,r_\chi)$, $\eta\in(0,2r_0^2]$, $x\in\mathbb T^2$, and $\pmb v\in T_x\mathbb T^2$ with $\|\pmb v\|\neq 0$, \begin{equation*}
\lambda(G_{0,\eta},x,\pmb v)\leq \Lambda +\frac{2T_0M}{N}\leq \Lambda+\chi. \end{equation*}
\end{proof}
\subsubsection*{Lower bound on $\lambda_{mme}$}
Our next and final goal is to prove Lemma~\ref{bound lambda_mme below with variables} which gives a lower bound on $\lambda_{mme}$ for the maps in Construction II.
\begin{lemma}\label{better cone} For any $\alpha\in(0,\frac{1}{3})$ and $\varepsilon\in(0,1)$ such that $\frac{1+\alpha}{(1-\varepsilon)^{2(1+\alpha)}}<\frac{4}{3}$ there exists $\rho=\rho(\alpha,\varepsilon)\in(0,1)$ such that for every $s\in[0,1]$, $\eta>0$, and $p\in D_{r_1}$ we have $$DG_{s,\eta}\mathcal K^+_\rho(p)\subset \mathcal K^+_{\rho}(G_{s,\eta}(p)),$$ where $\mathcal K^+_\rho(p)$ is the cone of size $\rho$ in $T_p\mathbb T^2$, i.e., \begin{equation}\label{cone_rho}
\mathcal K^+_\rho(p) = \{\xi_1\pmb v^u+\xi_2\pmb v^s \,|\, \xi_1,\xi_2\in\mathbb R, |\xi_2|\leq \rho|\xi_1|\}. \end{equation} Moreover, $\rho$ can be expressed in the following way: \begin{equation} \rho(\alpha,\varepsilon) = \frac{-((1-\varepsilon)^{2(1+\alpha)}+1+\alpha)+ \sqrt{((1-\varepsilon)^{2(1+\alpha)}+1+\alpha)^2-(1+\alpha)^2}}{(1+\alpha)}. \end{equation} \end{lemma}
\begin{proof} As in Lemma~\ref{invariant_cones_katokmap}, using the system of variational equations corresponding to the system \eqref{slow down flow}, we look at the following equation for the tangent $\zeta_\eta$:
\begin{equation}\label{tangent equation_2} \frac{d\zeta_\eta}{dt} = -2\Lambda((\psi_\eta(s_1^2+s_2^2)+(s_1^2+s_2^2)\psi'_\eta(s_1^2+s_2^2))\zeta_\eta+s_1s_2\psi_{\eta}'(s_1^2+s_2^2)(\zeta_\eta^2+1)). \end{equation} First, observe that if $(s_1,s_2)\in \left(D_{r_1}\setminus D_{r_0}\right)$ then $\psi_\eta$ is constant, in particular, $\zeta_\eta$ is decreasing when $\zeta_\eta>0$ and increasing when $\zeta_\eta<0$. Also, if $s_1s_2=0$, then we have the same conclusion about $\zeta_{\eta}$. Thus, we can assume in the consideration of the further cases that $s_1s_2\neq 0$. Due to symmetry, it is enough to analyze the case $s_1,s_2>0$.
Let $s_1,s_2>0$. Then, $\zeta_\eta$ is decreasing when $\zeta_\eta>0$, so we consider the case $\zeta_\eta<0$. Moreover, let $k = \frac{s_1s_2}{s_1^2+s_2^2}$. Notice that $k\in(0,\frac{1}{2}]$.
Assume $(s_1,s_2)\in D_{r_0}$. By properties \ref{monotone} and \ref{derivative for all eta} of $\psi_\eta$, we have \begin{equation*} \psi_{\eta}(s_1^2+s_2^2)\geq \left(\frac{s_1^2+s_2^2}{r_0^2}\right)^{1+\alpha} \, \text{ and } \, \psi'_{\eta}(s_1^2+s_2^2)\leq \frac{1+\alpha}{(1-\varepsilon)^{2(1+\alpha)}r_0^2}\left(\frac{s_1^2+s_2^2}{r_0^2}\right)^\alpha. \end{equation*} Thus, plugging this expression into \eqref{tangent equation_2}, we obtain \begin{equation*} \frac{d\zeta_\eta}{dt}\geq -2\Lambda\psi'_\eta(s_1^2+s_2^2)(s_1^2+s_2^2)\left(\left(\frac{(1-\varepsilon)^{2(1+\alpha)}}{1+\alpha}+1\right)\zeta_\eta+k(\zeta_\eta^2+1)\right). \end{equation*}
It is easy to see that $\frac{d\zeta_\eta}{dt}\geq 0$ if $\zeta_\eta\in[\zeta^-(k),\zeta^+(k)]$, where \begin{equation*} \zeta^\pm(k) = \frac{-((1-\varepsilon)^{2(1+\alpha)}+1+\alpha)\pm \sqrt{((1-\varepsilon)^{2(1+\alpha)}+1+\alpha)^2-4k^2(1+\alpha)^2}}{2k(1+\alpha)}. \end{equation*} Also, $\zeta^+(k)\geq \zeta^+\left(\frac{1}{2}\right)$ and $\zeta^-(k)\leq \zeta^-\left(\frac{1}{2}\right)$ for $k\in(0,\frac{1}{2}]$. Thus, $\zeta_\eta$ is non-decreasing for $\zeta_\eta\in\left[\zeta^-\left(\frac{1}{2}\right),\zeta^+\left(\frac{1}{2}\right)\right]$.
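For the reader's convenience, the endpoints $\zeta^{\pm}(k)$ are the roots of the quadratic inequality equivalent to $\frac{d\zeta_\eta}{dt}\geq 0$: writing $X = (1-\varepsilon)^{2(1+\alpha)}+1+\alpha$, the lower bound above is nonnegative exactly when
\begin{equation*}
k(1+\alpha)\zeta_\eta^2+X\zeta_\eta+k(1+\alpha)\leq 0, \qquad\text{i.e.,}\qquad \zeta_\eta\in\left[\frac{-X-\sqrt{X^2-4k^2(1+\alpha)^2}}{2k(1+\alpha)}, \frac{-X+\sqrt{X^2-4k^2(1+\alpha)^2}}{2k(1+\alpha)}\right].
\end{equation*}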
As a result, using that $\zeta_\eta$ is smooth and the above analysis, we obtain that $\rho(\alpha,\varepsilon) = \zeta^+(\frac{1}{2})$ gives the desired cone. \end{proof}
Let $p\in\mathbb T^2$ and $\pmb v\in T_p\mathbb T^2$. Then, $\pmb v = \xi_1 \pmb v^u+\xi_2 \pmb v^s$. Denote by $\|\cdot\|_{u,s}$ the norm in $\mathbb R^2$ such that $\|\pmb v\|_{u,s}^2 = \xi_1^2+\xi_2^2$.
\begin{lemma}\label{expanding in better cone} Assume we are in the setting of Lemma~\ref{better cone}. For any $p\in D_{r_1}$, and any $\pmb v\in \mathcal K_{\rho(\alpha,\varepsilon)}^+(p)$ (see \eqref{cone_rho}), we have \begin{equation*}
\|DG_{s,\eta}\pmb v\|_{u,s}\geq\|\pmb v\|_{u,s}. \end{equation*}
In particular, for any $p\in D_{r_1}$, and any $\pmb v\in \mathcal K_{\rho(\alpha,\varepsilon)}^+(p)$, \begin{equation*}
\|DG_{s,\eta}\pmb v\|\geq K_1K_2^{-1}\|\pmb v\|, \end{equation*}
where $K_1,K_2$ are constants coming from the equivalence of norms $\|\cdot\|$ and $\|\cdot\|_{u,s}$ in $\mathbb R^2$, i.e., $0<K_1\leq 1\leq K_2$, $K_1\|\pmb u\|_{u,s}\leq\|\pmb u\|\leq K_2\|\pmb u\|_{u,s}$ for any $\pmb u\in\mathbb R^2$.
Moreover, let $\mathcal C_p^+$ be the union of the positive cone in the tangent space at $p\in\mathbb T^2$ spanned by $\pmb v^+_{min}(-\beta)$ and $\pmb v^+_{max}$ (see \eqref{bounds_of_cones}) and its symmetric complement. There exists $N\in\mathbb N$ such that for each $p\in\mathbb T^2$ we have \begin{equation*} \left(DL_A\right)^N(\mathcal C^+_p)\subset \mathcal K^+_{\rho(\alpha,\varepsilon)}(L_A^N(p)) \text{ and } \left(DL_A\right)^N\left(\mathcal K^+_{\rho(\alpha,\varepsilon)}(p)\right)\subset \mathcal C^+_{L_A^N(p)}. \end{equation*} \end{lemma}
\begin{proof}
Let $p\in D_{r_1}$ and $\pmb v(0)\in \mathcal K_\rho^+(p)$, and let $\pmb v(t)$ be the evolution of $\pmb v(0)$ along the flow. In particular, $\pmb v(t) = \xi_1(t)\pmb v^u+\xi_2(t)\pmb v^s$ where $|\xi_2(t)|\leq\rho|\xi_1(t)|$ for any $t>0$.
By properties of $\psi_\eta$, we have $\frac{\psi'_\eta(s_1^2+s_2^2)}{\psi_\eta(s_1^2+s_2^2)}\leq \frac{1+\alpha}{(1-\varepsilon)^{2(1+\alpha)}}\frac{1}{s_1^2+s_2^2}$. Using \eqref{slow down flow}, for any $\alpha\in(0,\frac{1}{3})$ and $\varepsilon\in(0,1)$ such that $\frac{1+\alpha}{(1-\varepsilon)^{2(1+\alpha)}}<\frac{4}{3}$ we have \begin{align*} \frac{d}{dt}(\xi_1^2+\xi_2^2) &= 2\Lambda\left((\psi_\eta(s_1^2+s_2^2)+2s_1^2\psi'_\eta(s_1^2+s_2^2))\xi_1^2-(\psi_\eta(s_1^2+s_2^2)+2s_2^2\psi'_\eta(s_1^2+s_2^2))\xi_2^2\right)\\ &= 2\Lambda (\psi_\eta(s_1^2+s_2^2)+2s_1^2\psi'_\eta(s_1^2+s_2^2))\left(\xi_1^2-\frac{\psi_\eta(s_1^2+s_2^2)+2s_2^2\psi'_\eta(s_1^2+s_2^2)}{\psi_\eta(s_1^2+s_2^2)+2s_1^2\psi'_\eta(s_1^2+s_2^2)}\xi_2^2\right)\\ &\geq 2\Lambda (\psi_\eta(s_1^2+s_2^2)+2s_1^2\psi'_\eta(s_1^2+s_2^2))\xi_2^2\left(\rho^{-2}(\alpha,\varepsilon)-\left(1+2\frac{1+\alpha}{(1-\varepsilon)^{2(1+\alpha)}}\right)\right)\geq0. \end{align*}
Let $z\in(0,1)$. Then, for any $n\in\mathbb N$, we have $\left(DL_A\right)^n(\mathcal K^+_{z}(p))=\mathcal K^+_{e^{-2n\Lambda}z}(L_A^n(p))$. Thus, we obtain the statement about the cone inclusions. \end{proof}
Recall that $\bar\beta$, $\bar\delta$, and $\bar w$ are the values of the parameters in the construction of $f_{1,0}$ (see \eqref{definition_twist_maps}). Let $D_r$ be a disk of radius $r$ centered at $(0,0)$ and $\bar {\mathcal S}= \{(x,y)\in\mathbb T^2| x\in(m+l-\bar\delta+\bar w,m+l-\bar w)\}$, i.e., the region of the perturbation described in Construction I where $DG_{1,\eta}=A\left(\frac{\bar\beta+\bar\delta(1-\bar\delta)}{\bar \delta}\right)$ (see Proposition~\ref{eigenvector_eigenvalue_proposition}). Denote by $\nu_{G_{1,\eta}}$ the measure of maximal entropy for $G_{1,\eta}$.
\begin{lemma}\label{bound lambda_mme below with variables} Let $\alpha\in(0,\frac{1}{3})$ and $\varepsilon\in(0,1)$ such that $\frac{1+\alpha}{(1-\varepsilon)^{2(1+\alpha)}}<\frac{4}{3}$. Let $\bar Q=\nu_{f_{1,0}}(\bar{\mathcal S})$. Then, for any $\sigma>0$ there exists $r_{\sigma}$ such that for all sufficiently small $r_0\in(0,r_{\sigma})$ and for all $\eta\in(0,2r_0^2]$ the following hold for $G_{1,\eta}$ obtained in Construction II with the parameters $\alpha, \varepsilon, r_0$, and $\eta$: \begin{enumerate} \item $\nu_{G_{1,\eta}}(D_{r_\sigma})\leq \sigma$; \item\label{lower bound in lemma on mme} $\lambda_{mme}(G_{1,\eta})\geq \left(\log\left(K_1K_2^{-1}\right)+\log(C)\right)\sigma+\log(C)(1-\bar Q)+\log\mu^+\left(\frac{\bar\beta+\bar\delta(1-\bar\delta)}{\bar\delta}\right)(\bar Q-\sigma),$
where $C$ is a constant that depends only on the matrix $A$ and $\bar\beta$ (see \eqref{large expansion estimate}), and $K_1, K_2$ are constants in Lemma~\ref{expanding in better cone}. \end{enumerate} \end{lemma}
\begin{proof} Recall that in Construction II, for any sufficiently small $r_0\in(0,1)$ we have that $f_{1,0}=G_{1,2r_0^2}$ and for any $\eta\in(0,2r_0^2]$, $G_{1,\eta}(x)=f_{1,0}(x)$ if $x\in\mathbb T^2\setminus D_{r_1}$, where $r_1=2r_0\Lambda$. Also, there exists $\bar r>0$ such that for any $r\in(0,\bar r)$, $f_{1,0}(\bar {\mathcal S})\cap D_{r}=\emptyset$ and $f_{1,0}(D_{r})\cap \bar{\mathcal S}=\emptyset$.
Consider a periodic point $q$ of $f_{1,0}$ other than $(0,0)$. Build a Markov partition $\mathcal {MP}$ for $f_{1,0}$ using the point $q$ and its stable and unstable manifolds, $W^s(q)$ and $W^u(q)$, respectively (Adler-Weiss construction). Since $(0,0)$ is a fixed point for $f_{1,0}$, we have $(0,0)\not\in W^s(q)\cup W^u(q)$. In particular, there is a refinement $\overline{ \mathcal{MP}}$ of $\mathcal {MP}$ such that \begin{itemize}
\item if $\mathcal P\bar{\mathcal S} = \{R\in \overline{ \mathcal{MP}}| R\subset \bar{\mathcal{S}}\}$, then $\nu_{f_{1,0}}(\bigcup\limits_{R\in\mathcal P\bar{\mathcal S}}R)\geq \bar Q-\sigma$; \item there exists $R_{\sigma}\in\overline{\mathcal{MP}}$ such that $(0,0)$ is in the interior of $R_\sigma$ and $\nu_{f_{1,0}}(R_\sigma)<\sigma$. \end{itemize}
Choose $r_\sigma<\bar r$ such that $D_{r_\sigma}\subset R_\sigma$.
Let $\rho(\alpha,\varepsilon)$ be as in Lemma~\ref{better cone}. Let $\mathcal C_p^+$ be the union of the positive cone in the tangent space at $p\in\mathbb T^2$ spanned by $\pmb v^+_{min}(-\bar\beta)$ and $\pmb v^+_{max}$ (see \eqref{bounds_of_cones}) and its symmetric complement, where $\bar\beta$ is the value of $\beta$ in the construction of $f_{1,0}$. By Lemma~\ref{expanding in better cone}, we can choose $N\in\mathbb N$ such that for each $p\in\mathbb T^2$ we have \begin{equation*} \left(DL_A\right)^N(\mathcal C^+_p)\subset \mathcal K^+_{\rho(\alpha,\varepsilon)}(L_A^N(p)) \text{ and } \left(DL_A\right)^N\left(\mathcal K^+_{\rho(\alpha,\varepsilon)}(p)\right)\subset \mathcal C^+_{L_A^N(p)}. \end{equation*}
Let $r_0>0$ be such that $D_{r_1}\subset D_{r_\sigma}$. Recall that $r_1=2r_0\Lambda$. Since $f_{1,0}=L_A$ in a neighborhood of $(0,0)$, we can choose sufficiently small $r_0$ such that the following two facts hold: \begin{itemize} \item Let $p\in\mathbb T^2$ and $k, n\in\{0\}\cup\mathbb N$ be such that $f_{1,0}^{k}(p)\not\in D_{r_\sigma}$, $f_{1,0}^{k+j}(p)\in D_{r_\sigma}\setminus D_{r_1}$ for $j=1,2,\ldots, n$, and $f_{1,0}^{k+n+1}(p)\in D_{r_1}$. Then, $n\geq N$.
\item Let $p\in\mathbb T^2$ and $k, n\in\{0\}\cup\mathbb N$ be such that $f_{1,0}^{k}(p)\in D_{r_1}$, $f_{1,0}^{k+j}(p)\in D_{r_\sigma}\setminus D_{r_1}$ for $j=1,2,\ldots, n$, and $f_{1,0}^{k+n+1}(p)\not\in D_{r_\sigma}$. Then, $n\geq N$. \end{itemize}
Let $G_{1,\eta}$ be an Anosov diffeomorphism obtained from $f_{1,0}$ using Construction II with the parameter $r_0$ as above. Then, for any $\eta\in(0,2r_0^2]$ we have the following: \begin{itemize} \item $\nu_{G_{1,\eta}}(D_{r_\sigma})<\sigma$; \item $\nu_{G_{1,\eta}}(\bar{\mathcal S})\geq\bar Q-\sigma$; \item Let $p\in\mathbb T^2$ and $k, n\in\{0\}\cup\mathbb N$ be such that $G_{1,\eta}^{k}(p)\not\in D_{r_\sigma}$, $G_{1,\eta}^{k+j}(p)\in D_{r_\sigma}\setminus D_{r_1}$ for $j=1,2,\ldots, n$, and $G_{1,\eta}^{k+n+1}(p)\in D_{r_1}$. Then, $n\geq N$.
\item Let $p\in\mathbb T^2$ and $k, n\in\{0\}\cup\mathbb N$ be such that $G_{1,\eta}^{k}(p)\in D_{r_1}$, $G_{1,\eta}^{k+j}(p)\in D_{r_\sigma}\setminus D_{r_1}$ for $j=1,2,\ldots, n$, and $G_{1,\eta}^{k+n+1}(p)\not\in D_{r_\sigma}$. Then, $n\geq N$. \end{itemize}
Consider $p\in\mathbb T^2\setminus D_{r_\sigma}$ and a natural number $n$. We write $n=\sum\limits_{j=1}^s n_j$, where the numbers $n_j\in\{0\}\cup\mathbb N$ are chosen in the following way (see Figure~\ref{partition depending on regions} for an example): \begin{enumerate} \item The number $n_1$ is the first moment when $\left(G_{1,\eta}\right)^{n_1}(p)\in \bar{\mathcal S}\cup D_{r_\sigma}$; \item The number $n_2$ is such that the number $n_1+n_2$ is the first moment when
$\left(G_{1,\eta}\right)^{n_1+n_2}(p)\in \mathbb T^2\setminus\left(\bar{\mathcal S}\cup D_{r_\sigma}\right)$;
\item The rest of the numbers are defined following this pattern as done in the proof of Lemma \ref{lemma: upper bound on lambda mme start from linear}. \end{enumerate}
Notice that by the choice of $r_\sigma$, for any $j=1, 3, 5, \ldots$ we have that
\begin{center} either $\left(G_{1,\eta}\right)^{n_1+n_2+\ldots+n_j+k}(p)\in \bar{\mathcal S}$ for all $k\in\mathbb Z\cap[0,n_{j+1})$
or $\left(G_{1,\eta}\right)^{n_1+n_2+\ldots+n_j+k}(p)\in D_{r_\sigma}$ for all $k\in\mathbb Z\cap[0,n_{j+1})$. \end{center}
\begin{figure}
\caption{Partition of the orbit of $p$ under $G_{1,\eta}$.}
\label{partition depending on regions}
\end{figure}
Let $\pmb v\in\mathcal C_p^+$ and $\|\pmb v\|=1$. Then, we have \begin{equation*}
\log\|D_{p}\left(G_{1,\eta}\right)^n\pmb v\| =\sum\limits_{j=1}^s\log\|D_{\left(G_{1,\eta}\right)^{n_1+n_2+\ldots+n_{j-1}}(p)}\left(G_{1,\eta}\right)^{n_j}\pmb v_j\|, \end{equation*}
where $\pmb v_1=\pmb v$, $\pmb v_2 = \frac{D_{p}\left(G_{1,\eta}\right)^{n_1}\pmb v_1}{\|D_{p}\left(G_{1,\eta}\right)^{n_1}\pmb v_1\|}$, and $\pmb v_j = \frac{D_{\left(G_{1,\eta}\right)^{n_1+n_2+\ldots+n_{j-2}}(p)}\left(G_{1,\eta}\right)^{n_{j-1}}\pmb v_{j-1}}{\|D_{\left(G_{1,\eta}\right)^{n_1+n_2+\ldots+n_{j-2}}(p)}\left(G_{1,\eta}\right)^{n_{j-1}}\pmb v_{j-1}\|}$ for $j=3, \ldots, s.$ In particular, $\|\pmb v_j\|=1$ for $j=1,\ldots, s$.
Using \eqref{general expansion}, \eqref{large expansion estimate}, and Lemma~\ref{expanding in better cone}, we obtain for $k\in\mathbb N$ \begin{equation*}
\|D_{\left(G_{1,\eta}\right)^{n_1+n_2+\ldots+n_{j-1}}(p)}\left(G_{1,\eta}\right)^{n_j}\pmb v_j\|\geq\left\{ \begin{aligned} &K_1K^{-1}_2 \quad &\text{if}\qquad &j=2k,\left(G_{1,\eta}\right)^{n_1+n_2+\ldots+n_{j-1}}(p)\in D_{r_\sigma},\\ &\left(\mu^+\left(\frac{\bar\beta+\bar\delta(1-\bar\delta)}{\bar\delta}\right)\right)^{n_j}C \quad &\text{if}\qquad &j=2k, \left(G_{1,\eta}\right)^{n_1+n_2+\ldots+n_{j-1}}(p)\in \bar{\mathcal S},\\ &\mu^{n_j} \quad &\text{if}\qquad &j=2k-1, \end{aligned}\right. \end{equation*} where $C$ is a constant that depends only on $A$ and $\bar\beta$ (see \eqref{large expansion estimate}).
As a result, \begin{align*}
\log\|D_{p}\left(G_{1,\eta}\right)^n\pmb v\| &\geq \log\left(K_1K_2^{-1}\right)\sum\limits_{k=1}^{[\frac{s}{2}]}\mathbb{1}_{D_{r_\sigma}}\left(\left(G_{1,\eta}\right)^{n_1+n_2+\ldots+n_{2k-1}}(p)\right)\\&+\log\left(C\right)\sum\limits_{k=1}^{[\frac{s}{2}]}\mathbb{1}_{\bar{\mathcal S}}\left(\left(G_{1,\eta}\right)^{n_1+n_2+\ldots+n_{2k-1}}(p)\right)\\&+\log\mu^+\left(\frac{\bar\beta+\bar\delta(1-\bar\delta)}{\bar\delta}\right)\sum\limits_{k=1}^{[\frac{s}{2}]}n_{2k}\mathbb{1}_{\bar{\mathcal S}}\left(\left(G_{1,\eta}\right)^{n_1+n_2+\ldots+n_{2k-1}}(p)\right)\\&+(\log\mu)\sum\limits_{k=1}^{[\frac{s}{2}]}n_{2k-1}, \end{align*} where $\mathbb{1}_{D_{r_\sigma}}$ and $\mathbb{1}_{\bar{\mathcal S}}$ are the characteristic functions of the corresponding sets.
Since $G_{1,\eta}$ is a smooth Anosov diffeomorphism, using Birkhoff's Ergodic Theorem, we obtain as $n\rightarrow \infty$ \begin{equation*} \frac{1}{n}\sum\limits_{k=1}^{[\frac{s}{2}]}n_{2k}\mathbb{1}_{\bar{\mathcal S}}\left(\left(G_{1,\eta}\right)^{n_1+n_2+\ldots+n_{2k-1}}(p)\right) \rightarrow \nu_{G_{1,\eta}}(\bar{\mathcal S}) \quad\text{and}\quad \frac{1}{n}\sum\limits_{k=1}^{[\frac{s}{2}]}n_{2k-1}\rightarrow \nu_{G_{1,\eta}}(\mathbb T^2\setminus(D_{r_\sigma}\cup \bar{\mathcal S})). \end{equation*} Moreover, since $n_{2k}\geq 1$ for $k=1,2,\ldots, \left[\frac{s}{2}\right]$, we have \begin{equation*} \lim\limits_{n\rightarrow\infty}\frac{1}{n}\sum\limits_{k=1}^{[\frac{s}{2}]}\mathbb{1}_{D_{r_\sigma}}\left(\left(G_{1,\eta}\right)^{n_1+n_2+\ldots+n_{2k-1}}(p)\right)\leq\lim\limits_{n\rightarrow\infty}\frac{1}{n}\sum\limits_{k=1}^{[\frac{s}{2}]}n_{2k}\mathbb{1}_{D_{r_\sigma}}\left(\left(G_{1,\eta}\right)^{n_1+n_2+\ldots+n_{2k-1}}(p)\right)=\nu_{G_{1,\eta}}(D_{r_\sigma}). \end{equation*} Also, notice that each visit to $\mathbb T^2\setminus(D_{r_\sigma}\cup\bar{\mathcal S})$ lasts at least one iterate, so \begin{equation*} \lim\limits_{n\rightarrow\infty}\frac{1}{n}\sum\limits_{k=1}^{[\frac{s}{2}]}\mathbb{1}_{\bar{\mathcal S}}\left(\left(G_{1,\eta}\right)^{n_1+n_2+\ldots+n_{2k-1}}(p)\right)\leq\lim\limits_{n\rightarrow\infty}\frac{1}{n}\left[\frac{s}{2}\right]\leq \nu_{G_{1,\eta}}(\mathbb T^2\setminus(D_{r_\sigma}\cup\bar{\mathcal S}))=1-\nu_{G_{1,\eta}}(\bar{\mathcal S})-\nu_{G_{1,\eta}}(D_{r_\sigma}). \end{equation*}
Thus, since $K_1K_2^{-1}, C\in(0,1)$, \begin{align*} \lambda_{mme}(G_{1,\eta})&\geq \log\left(K_1K_2^{-1}\right)\nu_{G_{1,\eta}}(D_{r_\sigma})+\log(C)(1-\nu_{G_{1,\eta}}(\bar{\mathcal S})-\nu_{G_{1,\eta}}(D_{r_\sigma}))\\&+\log\mu^+\left(\frac{\bar\beta+\bar\delta(1-\bar\delta)}{\bar\delta}\right)\nu_{G_{1,\eta}}(\bar{\mathcal S})+(\log\mu)\nu_{G_{1,\eta}}(\mathbb T^2\setminus(D_{r_\sigma}\cup\bar{\mathcal S}))\\&\geq \log\left(K_1K_2^{-1}\right)\sigma+\log(C)(1-\bar Q+\sigma)+(\bar Q-\sigma)\log\mu^+\left(\frac{\bar\beta+\bar\delta(1-\bar\delta)}{\bar\delta}\right). \end{align*}
\end{proof}
\Addresses
\end{document}
\begin{document}
\title{Another convex combination of product states for the separable Werner state}
\author{Hiroo Azuma${}^{1,}$\thanks{On leave from Canon Inc., 5-1, Morinosato-Wakamiya, Atsugi-shi, Kanagawa, 243-0193, Japan. E-mail: [email protected]} \ \ and \ \ Masashi Ban${}^{2,3,}$\thanks{E-mail: [email protected]} \\ \\ {\small ${}^{1}$Research Center for Quantum Information Science,}\\ {\small Tamagawa University Research Institute,}\\ {\small 6-1-1 Tamagawa-Gakuen, Machida-shi, Tokyo 194-8610, Japan} \\ {\small ${}^{2}$Advanced Research Laboratory, Hitachi Ltd.,}\\ {\small 2520 Akanuma, Hatoyama, Saitama 350-0395, Japan} \\ {\small ${}^{3}$CREST, Japan Science and Technology Agency,}\\ {\small 1-1-9 Yaesu, Chuo-ku, Tokyo 103-0028, Japan} }
\date{February 10, 2006}
\maketitle
\begin{abstract} In this paper, we write down the separable Werner state in a two-qubit system explicitly as a convex combination of product states, which is different from the convex combination obtained by Wootters' method. The Werner state in a two-qubit system has a single real parameter and varies from an inseparable state to a separable state according to the value of this parameter. We derive a hidden variable model that is induced by our decomposed form of the separable Werner state. From our explicit form of the convex combination of product states, we understand the following: the critical point of the parameter for separability of the Werner state comes from the positivity of the local density operators of the qubits. \end{abstract}
\section{Introduction} \label{section-introduction} The Einstein-Podolsky-Rosen paradox and Bell's pioneering works reveal that no hidden variable model can reproduce all predictions of quantum mechanics \cite{Einstein-Podolsky-Rosen,Bell,Clauser-Horne-Shimony-Holt}. Thus, quantum correlation is essentially different from classical correlation. A central motivation of quantum information theory, which many researchers have been eager to study for the last several decades, is to obtain a deep understanding of this quantum correlation.
A bipartite quantum system is separable if its density matrix can be written as a convex combination of product states. A separable quantum system always admits the hidden variable interpretation. However, the converse is not necessarily true. Werner constructs a family of bipartite states characterized by a single real parameter. He shows that some inseparable states belonging to this family admit the hidden variable interpretation \cite{Werner}. The states of this family are called the Werner states. Moreover, Popescu indicates that the inseparable Werner states admitting hidden variable models reveal nonlocal correlation under a sequence of measurements in which the second measurement depends on the output of the first \cite{Popescu}.
A criterion of separability for a two-qubit system is conjectured by Peres and established by Horodecki {\it et al}. \cite{Peres,Horodecki-Horodecki-Horodecki}. In a two-qubit system, the Werner state has a single real parameter and varies from an inseparable state to a separable state according to the value of this parameter. Using Peres-Horodeckis' criterion, we can fix the critical point of the parameter between the separable and inseparable states.
The Werner state finds wide application in quantum information processing. It often appears as an intermediate state during quantum purification protocols \cite{Bennett-DiVincenzo-Smolin-Wootters,Murao-Plenio-Popescu-Vedral-Knight}. Thus, we can expect that the Werner state plays an important role in processes of local quantum operations and classical communication (LQCC). Various properties of the Werner state under LQCC are investigated by Hiroshima and Ishizaka \cite{Hiroshima-Ishizaka}.
As mentioned above, the Werner state has many interesting properties. However, we do not know why the Werner state in a two-qubit system suddenly changes from inseparable to separable at the critical point of its parameter. To examine the physical meaning of the critical point, we have to know an explicit form of a convex combination of product states for the separable Werner state. In this paper, we investigate a convex combination of product states for the separable Werner state that is different from the convex combination obtained by Wootters' method~\cite{Wootters}. (A convex combination of product states for a given separable density matrix is generally not unique.)
The decomposition obtained by Wootters' method is an ensemble of four pure states. By contrast, our decomposition is an integral of a product state with a probability distribution function over a continuous variable. Our decomposed form seems simpler than the one obtained by Wootters' method, and thus it may give us some insight. This is an advantage of our result. Looking at our decomposed form, we can understand that the critical point of the parameter for separability of the Werner state comes from the positivity of the local density operators of the qubits. Furthermore, our result produces a hidden variable model, because a convex combination of product states always admits a hidden variable interpretation.
Here, we give a brief summary of Wootters' results in Ref.~\cite{Wootters}. Wootters gives an explicit formula for the entanglement of formation of an arbitrary two-qubit system as a function of its density matrix. We can judge whether or not a given two-qubit density matrix is entangled from the value of its entanglement of formation: the density matrix is not entangled if its entanglement of formation is equal to zero, and it is entangled if its entanglement of formation is greater than zero. Thus, we can use Wootters' formula instead of Peres-Horodeckis' criterion.
Wootters also shows how to construct an entanglement-minimizing decomposition of an arbitrary two-qubit density matrix. In this decomposition, the density matrix is described by a convex combination of pure states, and the average entanglement of the pure states is equal to the entanglement of formation. Thus, if we decompose a separable two-qubit density matrix according to Wootters' method, we obtain an ensemble of pure states, each of which has no entanglement. Hence, in general, Wootters' decomposition gives us a convex combination of product states for a separable density matrix explicitly. In Appendix~\ref{appendix-wootters-decomposition}, we write down the decomposition obtained by Wootters' method for the separable Werner state.
In the rest of this section, we introduce the Werner state for a two-qubit system and examine its separability by Peres-Horodeckis' criterion. In Sec.~\ref{section-Werner-state-and-hidden-variable-interpretation}, we investigate the relation between the separable Werner state and the hidden variable interpretation. In Sec.~\ref{section-derivation-hidden-variable-model-Werner-state}, we derive the explicit form of the convex combination of product states for the separable Werner state. In Sec.~\ref{section-discussions} we give a brief discussion. In Appendix~\ref{appendix-wootters-decomposition}, we describe the decomposition of the separable Werner state obtained by Wootters' method.
The Werner state is given by the following density operator on a four-dimensional Hilbert space $\mathcal{H}_{\mbox{\scriptsize A}} \otimes \mathcal{H}_{\mbox{\scriptsize B}}$ spanned by two qubits A and B: \begin{equation} W(q) =
q|\Psi^{-}\rangle\langle\Psi^{-}| + \frac{1-q}{4}\mbox{\boldmath $I$}_{(4)}, \label{definition-Werner-state} \end{equation} where $0\leq q \leq 1$. $\mbox{\boldmath $I$}_{(4)}$ is the identity operator on $\mathcal{H}_{\mbox{\scriptsize A}} \otimes \mathcal{H}_{\mbox{\scriptsize B}}$.
$|\Psi^{-}\rangle$ is one of the Bell states that are maximally entangled on the two-qubit system and it is given by \begin{equation}
|\Psi^{-}\rangle = \frac{1}{\sqrt{2}} (
|0\rangle_{\mbox{\scriptsize A}}|1\rangle_{\mbox{\scriptsize B}} -
|1\rangle_{\mbox{\scriptsize A}}|0\rangle_{\mbox{\scriptsize B}} ). \end{equation} $W(q)$ defined by Eq.~(\ref{definition-Werner-state}) satisfies \begin{equation} W(q)^{\dagger}=W(q), \end{equation} \begin{equation} \mbox{Tr}W(q)=1, \end{equation} \begin{equation}
\langle\psi|W(q)|\psi\rangle\geq 0 \quad\quad\forall|\psi\rangle. \end{equation} Because of the above properties, we can regard $W(q)$ as a density operator.
We can judge whether $W(q)$ is separable or inseparable, that is, whether $W(q)$ is disentangled or entangled, from Peres-Horodeckis' criterion. According to this criterion, defining the partial transposition of $W(q)$ as $\tilde{W}(q)$, $W(q)$ is separable if all eigenvalues of $\tilde{W}(q)$ are non-negative, and $W(q)$ is inseparable if at least one eigenvalue of $\tilde{W}(q)$ is negative. Let us examine the eigenvalues of $\tilde{W}(q)$ below.
First of all, we give a matrix representation of $W(q)$ in a ket basis
$\{|i\rangle_{\mbox{\scriptsize A}}|j\rangle_{\mbox{\scriptsize B}}: i,j\in\{0,1\}\}$ as follows: \begin{equation} W(q) = \frac{1}{4} \left( \begin{array}{cccc} 1-q & 0 & 0 & 0 \\ 0 & 1+q & -2q & 0 \\ 0 & -2q & 1+q & 0 \\ 0 & 0 & 0 & 1-q \end{array} \right). \label{matrix-representation-Wq} \end{equation} Thus, we obtain a matrix representation of $\tilde{W}(q)$ as follows: \begin{equation} \tilde{W}(q) = \frac{1}{4} \left( \begin{array}{cccc} 1-q & 0 & 0 & -2q \\ 0 & 1+q & 0 & 0 \\ 0 & 0 & 1+q & 0 \\ -2q & 0 & 0 & 1-q \end{array} \right). \label{partial-transpose-Wq} \end{equation} In Eq.~(\ref{partial-transpose-Wq}), the density operator is subjected to transposition on the Hilbert space $\mathcal{H}_{\mbox{\scriptsize B}}$ spanned by the qubit B. By some calculation, we obtain three-fold degenerate eigenvalues, $(1+q)/4$, and the last eigenvalue, $(1-3q)/4$, for $\tilde{W}(q)$. Hence, $W(q)$ is separable for $0\leq q\leq 1/3$ and inseparable for $1/3<q\leq 1$.
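This spectral computation is easy to check numerically. The following Python sketch (an illustrative check, not part of the paper; it assumes NumPy is available) builds the matrix of $W(q)$, takes the partial transpose on qubit B, and compares the resulting eigenvalues with $(1+q)/4$ (threefold) and $(1-3q)/4$:

```python
import numpy as np

def werner(q):
    # Matrix representation of W(q) in the basis |00>, |01>, |10>, |11>
    return 0.25 * np.array([[1-q,  0,    0,   0],
                            [0,    1+q, -2*q, 0],
                            [0,   -2*q,  1+q, 0],
                            [0,    0,    0,   1-q]])

def partial_transpose_B(rho):
    # Transpose on qubit B: indices (iA, iB; jA, jB) -> (iA, jB; jA, iB)
    return rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)

for q in (0.2, 1/3, 0.8):
    evals = np.sort(np.linalg.eigvalsh(partial_transpose_B(werner(q))))
    predicted = np.sort([(1+q)/4]*3 + [(1-3*q)/4])
    assert np.allclose(evals, predicted)
print("spectrum of the partial transpose: (1+q)/4 (threefold) and (1-3q)/4")
```

The smallest eigenvalue, $(1-3q)/4$, is negative exactly when $q>1/3$, reproducing the separability threshold.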
\section{The separable Werner state and the hidden variable interpretation} \label{section-Werner-state-and-hidden-variable-interpretation} From the discussion given in the previous section, we find that $W(q)$ is separable for $0\leq q\leq 1/3$. The separable $W(q)$ can be rewritten as a convex combination of product states, and therefore it admits the hidden variable interpretation. In this section, we investigate the relation between the separability of $W(q)$ and the hidden variable interpretation.
In general, we can rewrite the separable $W(q)$ in the form: \begin{equation} W(q) = \sum_{\lambda} p_{\lambda} (\rho_{\mbox{\scriptsize A},\lambda} \otimes \rho_{\mbox{\scriptsize B},\lambda}), \label{W(q)-product-state-representation} \end{equation} where $0\leq p_{\lambda}\leq 1$, $\sum_{\lambda}p_{\lambda}=1$, and $\rho_{\mbox{\scriptsize A},\lambda}$ and $\rho_{\mbox{\scriptsize B},\lambda}$ represent density operators of the qubits A and B, respectively. Here, we may regard the index $\lambda$ in Eq.~(\ref{W(q)-product-state-representation}) as a continuous variable. Moreover, we can describe an arbitrary one-qubit density operator $\rho$ as \begin{equation} \rho = \frac{1}{2} (\mbox{\boldmath $I$}_{(2)} + \mbox{\boldmath $a$}\cdot\mbox{\boldmath $\sigma$}), \end{equation} where $\mbox{\boldmath $I$}_{(2)}$ represents the identity operator on a two-dimensional Hilbert space spanned by a single qubit, and $\mbox{\boldmath $a$}$ represents an arbitrary three-dimensional real vector whose norm is equal to or less than unity. $\mbox{\boldmath $\sigma$}$ stands for a three-dimensional vector whose three components are Pauli matrices, $\mbox{\boldmath $\sigma$}=(\sigma_{x},\sigma_{y},\sigma_{z})$.
From the above consideration, we can rewrite $W(q)$ defined in Eq.~(\ref{W(q)-product-state-representation}) as follows: \begin{eqnarray} W(q) &=& \int d\lambda\; p(\lambda) (\rho_{\mbox{\scriptsize A}}(\lambda) \otimes \rho_{\mbox{\scriptsize B}}(\lambda)) \nonumber \\ &=& \int d\lambda\; p(\lambda) \frac{1}{2} (\mbox{\boldmath $I$}_{(2)} + \mbox{\boldmath $a$}(\lambda)\cdot\mbox{\boldmath $\sigma$})_{\mbox{\scriptsize A}} \otimes \frac{1}{2} (\mbox{\boldmath $I$}_{(2)} + \mbox{\boldmath $b$}(\lambda)\cdot\mbox{\boldmath $\sigma$})_{\mbox{\scriptsize B}}, \label{W(q)-product-state-representation-continuous-lambda} \end{eqnarray} where \begin{equation} \int d\lambda\; p(\lambda) = 1, \label{probability-preserving-condition} \end{equation} and \begin{equation}
|\mbox{\boldmath $a$}(\lambda)|\leq 1, \quad\quad
|\mbox{\boldmath $b$}(\lambda)|\leq 1. \label{norm-condition} \end{equation} Although we write $\lambda$ as a single variable in Eq.~(\ref{W(q)-product-state-representation-continuous-lambda}), we can consider that $\lambda$ stands for multiple variables. Moreover, because $\lambda$, $p(\lambda)$, $\mbox{\boldmath $a$}(\lambda)$, and $\mbox{\boldmath $b$}(\lambda)$ depend on $q$, strictly speaking, we have to write them as $\lambda(q)$, $p(q,\lambda(q))$, $\mbox{\boldmath $a$}(q,\lambda(q))$, and $\mbox{\boldmath $b$}(q,\lambda(q))$. However, for simplicity, we omit $q$ from their notations. [We never insist that $\lambda$, $p(\lambda)$, $\mbox{\boldmath $a$}(\lambda)$, and $\mbox{\boldmath $b$}(\lambda)$ do not depend on $q$.] Furthermore, we have to pay attention to the fact that the convex combination of product states for $W(q)$ given in Eq.~(\ref{W(q)-product-state-representation-continuous-lambda}) is not unique.
The convex combination of product states for $W(q)$ given in Eq.~(\ref{W(q)-product-state-representation-continuous-lambda}) admits the hidden variable interpretation. We can understand this fact from the following explanation. Let us perform orthogonal measurements by Hermitian operators, $E(\mbox{\boldmath $l$})_{\mbox{\scriptsize A}}$ and $E(\mbox{\boldmath $m$})_{\mbox{\scriptsize B}}$, on the qubits A and B, respectively. We assume that $E(\mbox{\boldmath $l$})_{\mbox{\scriptsize A}}$ and $E(\mbox{\boldmath $m$})_{\mbox{\scriptsize B}}$ are given by the following form: \begin{equation} \left\{ \begin{array}{lll} E(\mbox{\boldmath $l$})_{\mbox{\scriptsize A}} =\mbox{\boldmath $l$}\cdot\mbox{\boldmath $\sigma$}_{\mbox{\scriptsize A}} &\quad&
\mbox{for $|\mbox{\boldmath $l$}|=1$}, \\ E(\mbox{\boldmath $m$})_{\mbox{\scriptsize B}} =\mbox{\boldmath $m$}\cdot\mbox{\boldmath $\sigma$}_{\mbox{\scriptsize B}} &\quad&
\mbox{for $|\mbox{\boldmath $m$}|=1$}. \end{array} \right. \end{equation}
An expectation value of the outcome in the measurement on the qubit A is given by \begin{equation} \frac{1}{2} \mbox{Tr} [ E(\mbox{\boldmath $l$}) (\mbox{\boldmath $I$}_{(2)} + \mbox{\boldmath $a$}(\lambda)\cdot\mbox{\boldmath $\sigma$}) ] = \frac{1}{2} \mbox{Tr} [(\mbox{\boldmath $l$}\cdot\mbox{\boldmath $\sigma$}) (\mbox{\boldmath $a$}\cdot\mbox{\boldmath $\sigma$})] = \mbox{\boldmath $l$}\cdot\mbox{\boldmath $a$}. \label{expectation-value-orthogonal-measurement-qubit-A} \end{equation} Equation~(\ref{expectation-value-orthogonal-measurement-qubit-A}) implies that we obtain $1$ as an output with probability $(1+\mbox{\boldmath $l$}\cdot\mbox{\boldmath $a$})/2$ and we obtain $(-1)$ as an output with probability $(1-\mbox{\boldmath $l$}\cdot\mbox{\boldmath $a$})/2$ in the measurement on the qubit A. We obtain a similar result on the qubit B.
Therefore, we can describe an expectation value of a product of two outputs obtained from the measurements on the qubits A and B as follows: \begin{eqnarray} C(\mbox{\boldmath $l$},\mbox{\boldmath $m$}) &=& \mbox{Tr}[ (E(\mbox{\boldmath $l$})_{\mbox{\scriptsize A}} \otimes E(\mbox{\boldmath $m$})_{\mbox{\scriptsize B}}) W(q)] \nonumber \\ &=& \int d\lambda \int^{1}_{0} d\lambda_{\mbox{\scriptsize A}} \int^{1}_{0} d\lambda_{\mbox{\scriptsize B}} \; p(\lambda) A(\lambda,\lambda_{\mbox{\scriptsize A}};\mbox{\boldmath $l$}) B(\lambda,\lambda_{\mbox{\scriptsize B}};\mbox{\boldmath $m$}), \label{W(q)-hidden-variable-model} \end{eqnarray} where \begin{equation} A(\lambda,\lambda_{\mbox{\scriptsize A}};\mbox{\boldmath $l$}) = \left\{ \begin{array}{lll} 1 & \quad & \mbox{for $0\leq\lambda_{\mbox{\scriptsize A}}\leq(1+\mbox{\boldmath $l$}\cdot\mbox{\boldmath $a$})/2$},\\ -1 & \quad & \mbox{for $(1+\mbox{\boldmath $l$}\cdot\mbox{\boldmath $a$})/2<\lambda_{\mbox{\scriptsize A}}\leq 1$}, \end{array} \right. \end{equation} and \begin{equation} B(\lambda,\lambda_{\mbox{\scriptsize B}};\mbox{\boldmath $m$}) = \left\{ \begin{array}{lll} 1 & \quad & \mbox{for $0\leq\lambda_{\mbox{\scriptsize B}}\leq(1+\mbox{\boldmath $m$}\cdot\mbox{\boldmath $b$})/2$},\\ -1 & \quad & \mbox{for $(1+\mbox{\boldmath $m$}\cdot\mbox{\boldmath $b$})/2<\lambda_{\mbox{\scriptsize B}}\leq 1$}. \end{array} \right. \end{equation} This is a hidden variable model.
\section{Decomposition of the separable Werner state} \label{section-derivation-hidden-variable-model-Werner-state} In this section, we derive $p(\lambda)$, $\mbox{\boldmath $a$}(\lambda)$, and $\mbox{\boldmath $b$}(\lambda)$ given in Eq.~(\ref{W(q)-product-state-representation-continuous-lambda}) explicitly. First, we examine the expectation value of the output that is obtained by the measurement of $E(\mbox{\boldmath $l$})_{\mbox{\scriptsize A}}$ on the qubit A of $W(q)$. At first we calculate this expectation value from Eq.~(\ref{definition-Werner-state}), and then we calculate it from Eq.~(\ref{W(q)-product-state-representation-continuous-lambda}). Next, we compare these two results.
From Eq.~(\ref{definition-Werner-state}), we obtain \begin{equation} \mbox{Tr} [ (E(\mbox{\boldmath $l$})_{\mbox{\scriptsize A}}\otimes\mbox{\boldmath $I$}_{\mbox{\scriptsize B}}) W(q) ] = q
\langle\Psi^{-}| E(\mbox{\boldmath $l$})_{\mbox{\scriptsize A}}\otimes\mbox{\boldmath $I$}_{\mbox{\scriptsize B}}
|\Psi^{-}\rangle =0. \label{measurement-qubit-A-quantum-mechanics} \end{equation} On the other hand, from Eq.~(\ref{W(q)-product-state-representation-continuous-lambda}), we obtain \begin{equation} \mbox{Tr} [ (E(\mbox{\boldmath $l$})_{\mbox{\scriptsize A}}\otimes\mbox{\boldmath $I$}_{\mbox{\scriptsize B}}) W(q) ] = \int d\lambda\; p(\lambda) (\mbox{\boldmath $l$}\cdot\mbox{\boldmath $a$}(\lambda)). \label{measurement-qubit-A-hidden-variable-model} \end{equation} Comparing Eqs.~(\ref{measurement-qubit-A-quantum-mechanics}) and (\ref{measurement-qubit-A-hidden-variable-model}), we obtain the following relation: \begin{equation} \int d\lambda\; p(\lambda) (\mbox{\boldmath $l$}\cdot\mbox{\boldmath $a$}(\lambda)) =0 \quad\quad
\forall|\mbox{\boldmath $l$}|\leq 1. \end{equation} Thus, we arrive at \begin{equation} \int d\lambda\; p(\lambda) a_{i}(\lambda) =0 \quad\quad i\in\{x,y,z\}. \label{qubit-A-condition} \end{equation} Examining the orthogonal measurement performed on the qubit B of $W(q)$, we can give a similar discussion and we obtain the following result: \begin{equation} \int d\lambda\; p(\lambda) b_{i}(\lambda) =0 \quad\quad i\in\{x,y,z\}. \label{qubit-B-condition} \end{equation}
Second, we examine the expectation value of the product of the outputs that we obtain by the measurements of $E(\mbox{\boldmath $l$})_{\mbox{\scriptsize A}}$ and $E(\mbox{\boldmath $m$})_{\mbox{\scriptsize B}}$ on the qubits A and B of $W(q)$, respectively. At first we calculate this expectation value from Eq.~(\ref{definition-Werner-state}), and then we calculate it from Eq.~(\ref{W(q)-product-state-representation-continuous-lambda}). Next, we compare these two results. From Eq.~(\ref{definition-Werner-state}), we obtain \begin{equation} \mbox{Tr} [ (E(\mbox{\boldmath $l$})_{\mbox{\scriptsize A}}\otimes E(\mbox{\boldmath $m$})_{\mbox{\scriptsize B}}) W(q) ] = q
\langle\Psi^{-}| E(\mbox{\boldmath $l$})_{\mbox{\scriptsize A}}\otimes E(\mbox{\boldmath $m$})_{\mbox{\scriptsize B}}
|\Psi^{-}\rangle =-q(\mbox{\boldmath $l$}\cdot\mbox{\boldmath $m$}). \label{measurement-qubits-AB-quantum-mechanics} \end{equation} On the other hand, from Eq.~(\ref{W(q)-product-state-representation-continuous-lambda}), we obtain \begin{equation} \mbox{Tr} [ (E(\mbox{\boldmath $l$})_{\mbox{\scriptsize A}}\otimes E(\mbox{\boldmath $m$})_{\mbox{\scriptsize B}}) W(q) ] = \int d\lambda\; p(\lambda) (\mbox{\boldmath $l$}\cdot\mbox{\boldmath $a$}(\lambda)) (\mbox{\boldmath $m$}\cdot\mbox{\boldmath $b$}(\lambda)). \label{measurement-qubits-AB-hidden-variable-model} \end{equation} Comparing Eqs.~(\ref{measurement-qubits-AB-quantum-mechanics}) and (\ref{measurement-qubits-AB-hidden-variable-model}), we obtain the following relation: \begin{equation} \int d\lambda\; p(\lambda) (\mbox{\boldmath $l$}\cdot\mbox{\boldmath $a$}(\lambda)) (\mbox{\boldmath $m$}\cdot\mbox{\boldmath $b$}(\lambda)) = -q(\mbox{\boldmath $l$}\cdot\mbox{\boldmath $m$}) \quad\quad
\forall|\mbox{\boldmath $l$}|\leq 1,
\forall|\mbox{\boldmath $m$}|\leq 1. \end{equation} Thus, we arrive at \begin{equation} \int d\lambda\; p(\lambda) a_{i}(\lambda) b_{j}(\lambda) = -q\delta_{ij} \quad\quad i,j\in\{x,y,z\}. \label{qubits-AB-condition} \end{equation}
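For the reader's convenience, we note the standard singlet-state identities underlying Eqs.~(\ref{measurement-qubit-A-quantum-mechanics}) and (\ref{measurement-qubits-AB-quantum-mechanics}), which follow from a direct computation in the basis $\{|0\rangle,|1\rangle\}$:
\begin{equation*}
\langle\Psi^{-}|\,\sigma_{i}\otimes\mbox{\boldmath $I$}_{(2)}\,|\Psi^{-}\rangle = 0,
\qquad
\langle\Psi^{-}|\,\sigma_{i}\otimes\sigma_{j}\,|\Psi^{-}\rangle = -\delta_{ij},
\qquad i,j\in\{x,y,z\}.
\end{equation*}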
Now, we obtain the conditions, Eqs.~(\ref{probability-preserving-condition}), (\ref{norm-condition}), (\ref{qubit-A-condition}), (\ref{qubit-B-condition}), and (\ref{qubits-AB-condition}), which $p(\lambda)$, $\mbox{\boldmath $a$}(\lambda)$, and $\mbox{\boldmath $b$}(\lambda)$ have to satisfy. Thus, these equations are necessary conditions for $p(\lambda)$, $\mbox{\boldmath $a$}(\lambda)$, and $\mbox{\boldmath $b$}(\lambda)$. However, at the same time, they are sufficient conditions for $p(\lambda)$, $\mbox{\boldmath $a$}(\lambda)$, and $\mbox{\boldmath $b$}(\lambda)$. In fact, from Eqs.~(\ref{probability-preserving-condition}), (\ref{norm-condition}), (\ref{qubit-A-condition}), (\ref{qubit-B-condition}), and (\ref{qubits-AB-condition}), we can always regenerate $W(q)$ defined in Eqs.~(\ref{definition-Werner-state}) and (\ref{matrix-representation-Wq}). For example, using Eqs.~(\ref{W(q)-product-state-representation-continuous-lambda}), (\ref{probability-preserving-condition}), (\ref{qubit-A-condition}), (\ref{qubit-B-condition}), and
(\ref{qubits-AB-condition}), we can calculate the matrix element $\langle 00|W(q)|00\rangle$ as follows: \begin{equation}
\langle 00|W(q)|00\rangle = \frac{1}{4} \int d\lambda\; p(\lambda) (1+a_{z}(\lambda))(1+b_{z}(\lambda)) = \frac{1-q}{4}. \end{equation} This result coincides with Eq.~(\ref{matrix-representation-Wq}). We can obtain similar results for the other matrix elements of $W(q)$.
If we define $p(\lambda)$, $\mbox{\boldmath $a$}(\lambda)$, and $\mbox{\boldmath $b$}(\lambda)$ as described below, they satisfy all of the necessary and sufficient conditions, Eqs.~(\ref{probability-preserving-condition}), (\ref{norm-condition}), (\ref{qubit-A-condition}), (\ref{qubit-B-condition}), and (\ref{qubits-AB-condition}). First, we take the variable $\lambda$ to be the pair of angles $(\theta,\phi)$ with $\theta\in[0,\pi]$ and $\phi\in[0,2\pi)$. Second, we define the normalized probability distribution as \begin{equation} p(\theta,\phi)=\frac{1}{4\pi}. \end{equation} Then we obtain \begin{equation} \int^{\pi}_{0}d\theta \int^{2\pi}_{0}d\phi \; \sin\theta \; p(\theta,\phi) =1, \label{probability-preserving-condition-theta-phi} \end{equation} and Eq.~(\ref{probability-preserving-condition}) is satisfied. Here, we pay attention to the fact that the volume element for the integral is given by $(d\theta d\phi\;\sin\theta)$ in Eq.~(\ref{probability-preserving-condition-theta-phi}).
Third, we define $\mbox{\boldmath $a$}(\theta,\phi)$ and $\mbox{\boldmath $b$}(\theta,\phi)$ as follows: \begin{equation} a_{i}(\theta,\phi)=\sqrt{3q}f_{i}(\theta,\phi), \quad\quad b_{i}(\theta,\phi)=-\sqrt{3q}f_{i}(\theta,\phi), \label{definition-ab-theta-phi} \end{equation} where \begin{equation} \left\{ \begin{array}{lll} f_{x}(\theta,\phi) & = & \sin\theta\cos\phi, \\ f_{y}(\theta,\phi) & = & \sin\theta\sin\phi, \\ f_{z}(\theta,\phi) & = & \cos\theta. \label{definition-f-xyz-theta-phi} \end{array} \right. \end{equation} The functions $f_{x}$, $f_{y}$, and $f_{z}$ satisfy the following relations: \begin{equation} \frac{1}{4\pi} \int^{\pi}_{0}d\theta \int^{2\pi}_{0}d\phi \; \sin\theta \; f_{i}(\theta,\phi) = 0 \quad\quad i\in\{x,y,z\}, \end{equation} \begin{equation} \frac{1}{4\pi} \int^{\pi}_{0}d\theta \int^{2\pi}_{0}d\phi \; \sin\theta \; f_{i}(\theta,\phi)f_{j}(\theta,\phi) = \frac{1}{3}\delta_{ij} \quad\quad i,j\in\{x,y,z\}. \end{equation} From the above relations, we can confirm that $\mbox{\boldmath $a$}(\theta,\phi)$ and $\mbox{\boldmath $b$}(\theta,\phi)$ satisfy Eqs.~(\ref{qubit-A-condition}), (\ref{qubit-B-condition}), and (\ref{qubits-AB-condition}).
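The first- and second-moment identities of $f_{x}$, $f_{y}$, $f_{z}$ can also be confirmed by numerical quadrature. The following Python sketch (an illustrative check assuming NumPy; the grid size $n$ is an arbitrary choice) evaluates both families of moments on a midpoint grid over the sphere:

```python
import numpy as np

n = 400
theta = (np.arange(n) + 0.5) * np.pi / n           # midpoints on [0, pi]
phi = (np.arange(2*n) + 0.5) * np.pi / n           # 2n midpoints on [0, 2*pi)
T, P = np.meshgrid(theta, phi, indexing='ij')
w = np.sin(T) * (np.pi/n)**2 / (4*np.pi)           # p(theta,phi) sin(theta) dtheta dphi

f = [np.sin(T)*np.cos(P), np.sin(T)*np.sin(P), np.cos(T)]

for i in range(3):
    assert abs((w * f[i]).sum()) < 1e-6            # first moments vanish
    for j in range(3):
        target = 1/3 if i == j else 0.0            # second moments give delta_ij / 3
        assert abs((w * f[i] * f[j]).sum() - target) < 1e-4
print("moment identities of f_x, f_y, f_z verified")
```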
Here, let us calculate norms of $\mbox{\boldmath $a$}$ and $\mbox{\boldmath $b$}$ from Eqs.~(\ref{definition-ab-theta-phi}) and (\ref{definition-f-xyz-theta-phi}), \begin{equation}
|\mbox{\boldmath $a$}| =
|\mbox{\boldmath $b$}| = \sqrt{3q}. \label{norms-ab} \end{equation} Remembering Eq.~(\ref{norm-condition}), we obtain the condition $0\leq q\leq 1/3$ from Eq.~(\ref{norms-ab}). This implies that the explicit convex combination of product states of $W(q)$ given in this section is valid only for $0\leq q\leq 1/3$, which coincides with the condition for separability of $W(q)$. From this observation, we understand that the critical point $q=1/3$ comes from the positivity of the local density operators, $\rho_{\mbox{\scriptsize A}}(\lambda)$ and $\rho_{\mbox{\scriptsize B}}(\lambda)$.
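As a further sanity check (again an illustrative Python sketch assuming NumPy, not part of the derivation), one can integrate the product states $\rho_{\mbox{\scriptsize A}}(\theta,\phi)\otimes\rho_{\mbox{\scriptsize B}}(\theta,\phi)$ numerically over the sphere and verify that the decomposition reproduces the matrix representation of $W(q)$ for a separable value of $q$:

```python
import numpy as np

q = 0.3                                   # any value with 0 <= q <= 1/3
sig = np.array([[[0, 1], [1, 0]],
                [[0, -1j], [1j, 0]],
                [[1, 0], [0, -1]]])       # Pauli matrices sigma_x, sigma_y, sigma_z
I2 = np.eye(2)

n = 400
theta = (np.arange(n) + 0.5) * np.pi / n          # midpoint grid on [0, pi]
phi = (np.arange(2*n) + 0.5) * np.pi / n          # 2n midpoints on [0, 2*pi)
T, P = np.meshgrid(theta, phi, indexing='ij')
w = (np.sin(T) * (np.pi/n)**2 / (4*np.pi)).ravel()   # p sin(theta) dtheta dphi
f = np.stack([np.sin(T)*np.cos(P),
              np.sin(T)*np.sin(P),
              np.cos(T)]).reshape(3, -1)             # unit vectors on the grid

a = np.sqrt(3*q) * f                                 # a(theta, phi); b = -a
rhoA = 0.5*(I2[None] + np.einsum('kn,kij->nij', a, sig))
rhoB = 0.5*(I2[None] - np.einsum('kn,kij->nij', a, sig))
# Integrate p * rhoA (tensor) rhoB over the sphere; reshape kron indices to 4x4
W = np.einsum('n,nij,nkl->ikjl', w, rhoA, rhoB).reshape(4, 4)

target = 0.25*np.array([[1-q, 0, 0, 0],
                        [0, 1+q, -2*q, 0],
                        [0, -2*q, 1+q, 0],
                        [0, 0, 0, 1-q]])
assert np.allclose(W, target, atol=1e-4)
assert np.sqrt(3*q) <= 1          # norm condition |a| = |b| <= 1 holds for q <= 1/3
print("decomposition reproduces W(q) for q =", q)
```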
\section{Discussions} \label{section-discussions} In this paper, we write down the separable Werner state explicitly as a convex combination of product states, and thereby construct a hidden variable model for it. Our convex combination for the separable Werner state is different from the one obtained by Wootters' method.
In our decomposition, as shown in Eq.~(\ref{definition-ab-theta-phi}), $\mbox{\boldmath $a$}(\lambda)$ and $\mbox{\boldmath $b$}(\lambda)$ always point in opposite directions, namely, $\mbox{\boldmath $a$}(\lambda)=-\mbox{\boldmath $b$}(\lambda)$ $\forall\lambda$. We have not found any physical or geometrical meaning for this relation. We are not sure whether or not there exists a decomposed form that satisfies $\mbox{\boldmath $a$}(\lambda)\neq -\mbox{\boldmath $b$}(\lambda)$ for some $\lambda$.
\noindent {\bf \large Acknowledgment}
\noindent H. A. thanks Osamu Hirota for encouragement.
\appendix \section{The decomposed form obtained by Wootters' \\ method for the separable Werner state} \label{appendix-wootters-decomposition} Following Wootters' method in Ref.~\cite{Wootters}, we can obtain the following convex combination of product states for the separable Werner state defined in Eq.~(\ref{definition-Werner-state}) with $0\leq q\leq 1/3$:
\begin{equation} W(q)=\sum_{i=1}^{4}|z_{i}\rangle\langle z_{i}|, \end{equation} where \begin{eqnarray}
|z_{1}\rangle &=& (1/2)
(e^{i\theta_{1}}|x_{1}\rangle +
e^{i\theta_{2}}|x_{2}\rangle +
e^{i\theta_{3}}|x_{3}\rangle +
e^{i\theta_{4}}|x_{4}\rangle ), \nonumber \\
|z_{2}\rangle &=& (1/2)
(e^{i\theta_{1}}|x_{1}\rangle +
e^{i\theta_{2}}|x_{2}\rangle -
e^{i\theta_{3}}|x_{3}\rangle -
e^{i\theta_{4}}|x_{4}\rangle ), \nonumber \\
|z_{3}\rangle &=& (1/2)
(e^{i\theta_{1}}|x_{1}\rangle -
e^{i\theta_{2}}|x_{2}\rangle +
e^{i\theta_{3}}|x_{3}\rangle -
e^{i\theta_{4}}|x_{4}\rangle ), \nonumber \\
|z_{4}\rangle &=& (1/2)
(e^{i\theta_{1}}|x_{1}\rangle -
e^{i\theta_{2}}|x_{2}\rangle -
e^{i\theta_{3}}|x_{3}\rangle +
e^{i\theta_{4}}|x_{4}\rangle ), \label{Wootters-z-basis} \end{eqnarray} \begin{eqnarray}
|x_{1}\rangle &=& -i
\frac{\sqrt{1+3q}}{2}|\Psi^{-}\rangle, \nonumber \\
|x_{2}\rangle &=&
\frac{\sqrt{1-q}}{2}|\Psi^{+}\rangle, \nonumber \\
|x_{3}\rangle &=&
\frac{\sqrt{1-q}}{2}|\Phi^{-}\rangle, \nonumber \\
|x_{4}\rangle &=& -i
\frac{\sqrt{1-q}}{2}|\Phi^{+}\rangle, \label{Wootters-x-basis} \end{eqnarray} \begin{eqnarray}
|\Psi^{\pm}\rangle &=& \frac{1}{\sqrt{2}} (
|0\rangle_{\mbox{\scriptsize A}}|1\rangle_{\mbox{\scriptsize B}} \pm
|1\rangle_{\mbox{\scriptsize A}}|0\rangle_{\mbox{\scriptsize B}} ), \nonumber \\
|\Phi^{\pm}\rangle &=& \frac{1}{\sqrt{2}} (
|0\rangle_{\mbox{\scriptsize A}}|0\rangle_{\mbox{\scriptsize B}} \pm
|1\rangle_{\mbox{\scriptsize A}}|1\rangle_{\mbox{\scriptsize B}} ), \label{Wootters-Bell-basis} \end{eqnarray} and \begin{equation} e^{-2i\theta_{1}}(1+3q) + (e^{-2i\theta_{2}}+e^{-2i\theta_{3}}+e^{-2i\theta_{4}})(1-q)=0. \label{Wootters-decomposition-condition} \end{equation} Equation~(\ref{Wootters-decomposition-condition}) does not determine $\theta_{1}$, $\theta_{2}$, $\theta_{3}$, and $\theta_{4}$ uniquely. For example, we have a special solution, $\theta_{1}=0$, $\theta_{2}=\pi/2$, \begin{equation} \cos\theta_{3}=\sqrt{\frac{1-3q}{2(1-q)}}, \quad\quad \sin\theta_{3}=\sqrt{\frac{1+q}{2(1-q)}}, \end{equation} and \begin{equation} \cos\theta_{4}=-\sqrt{\frac{1-3q}{2(1-q)}}, \quad\quad \sin\theta_{4}=\sqrt{\frac{1+q}{2(1-q)}}. \end{equation} By substituting the above special solution into Eqs.~(\ref{Wootters-z-basis}), (\ref{Wootters-x-basis}), and (\ref{Wootters-Bell-basis}), we can confirm that
$|z_{1}\rangle$, $|z_{2}\rangle$, $|z_{3}\rangle$, and $|z_{4}\rangle$ are product states.
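This confirmation can also be carried out numerically. The following Python sketch (illustrative, assuming NumPy; the value $q=0.2$ is an arbitrary separable choice) builds $|z_{1}\rangle,\ldots,|z_{4}\rangle$ from the special solution, checks that $\sum_{i}|z_{i}\rangle\langle z_{i}|=W(q)$, and confirms that each $|z_{i}\rangle$ has a rank-one $2\times 2$ amplitude matrix, i.e., is a product state:

```python
import numpy as np

q = 0.2                       # any value with 0 <= q <= 1/3
s2 = np.sqrt(2)
# Bell states in the basis |00>, |01>, |10>, |11>
PsiM, PsiP = np.array([0, 1, -1, 0])/s2, np.array([0, 1, 1, 0])/s2
PhiM, PhiP = np.array([1, 0, 0, -1])/s2, np.array([1, 0, 0, 1])/s2

x = [-1j*np.sqrt(1+3*q)/2 * PsiM,
         np.sqrt(1-q)/2   * PsiP,
         np.sqrt(1-q)/2   * PhiM,
     -1j*np.sqrt(1-q)/2   * PhiP]

# Special solution of the phase condition: theta1 = 0, theta2 = pi/2, and
# cos/sin of theta3, theta4 as given in the text
c = np.sqrt((1-3*q)/(2*(1-q)))
s = np.sqrt((1+q)/(2*(1-q)))
e = [1.0, 1j, c + 1j*s, -c + 1j*s]          # e^{i theta_k}

signs = [(1, 1, 1, 1), (1, 1, -1, -1), (1, -1, 1, -1), (1, -1, -1, 1)]
z = [0.5*sum(sg[k]*e[k]*x[k] for k in range(4)) for sg in signs]

W = sum(np.outer(v, v.conj()) for v in z)
target = 0.25*np.array([[1-q, 0, 0, 0], [0, 1+q, -2*q, 0],
                        [0, -2*q, 1+q, 0], [0, 0, 0, 1-q]])
assert np.allclose(W, target)
for v in z:   # product state <=> second singular value of the 2x2 reshape is 0
    assert np.linalg.svd(v.reshape(2, 2), compute_uv=False)[1] < 1e-10
print("Wootters decomposition verified for q =", q)
```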
\end{document}
\begin{document}
\begin{abstract} In the present paper, we will discuss the following non-degenerate Hamiltonian system \begin{equation*} H(\theta,t,I)=\frac{H_0(I)}{\varepsilon^{a}}+\frac{P(\theta,t,I)}{\varepsilon^{b}}, \end{equation*}
where $(\theta,t,I)\in\mathbf{{T}}^{d+1}\times[1,2]^d$ ($\mathbf{{T}}:=\mathbf{{R}}/{2\pi \mathbf{Z}}$), $a,b$ are given positive constants with $a>b$, $H_0: [1,2]^d\rightarrow \mathbf R$ is real analytic and $P: \mathbf T^{d+1}\times [1,2]^d\rightarrow \mathbf R$ is $C^{\ell}$ with $\ell=\frac{2(d+1)(5a-b+2ad)}{a-b}+\mu$, $0<\mu\ll1$. We prove that if $\varepsilon$ is sufficiently small, there is an invariant torus with a given Diophantine frequency vector for the above Hamiltonian system. As an application, we prove that a finite network of Duffing oscillators with periodic exterior forces possesses Lagrangian stability for almost all initial data.
\end{abstract}
\maketitle
\section{Introduction and main results}\label{sec1} Consider the harmonic oscillator (linear spring) \begin{equation}\label{fc1-1} \ddot{x}+k^2x=0. \end{equation} It is well known that every solution of this equation is periodic, and hence bounded for $t\in \mathbf{R}$; that is, the equation is Lagrange stable. However, the forced equation \begin{equation}\label{fc1-2} \ddot{x}+k^2\, x=p(t) \end{equation} has unbounded solutions when the forcing frequency of $p$ equals the natural frequency $k$ of the spring. Now let us consider the nonlinear equation \begin{equation}\label{fc1-3} \ddot{x}+x^3=0. \end{equation}
This equation is Lagrange stable, too. An interesting problem is whether \begin{equation}\label{fc1-4} \ddot{x}+x^3=p(t) \end{equation} is Lagrange stable when $p(t)$ is periodic.
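To make the linear resonance behind Eq.~(\ref{fc1-2}) explicit (a standard computation, recalled here for completeness): for the forcing $p(t)=\cos(kt)$, Eq.~(\ref{fc1-2}) admits the particular solution
\begin{equation*}
x(t)=\frac{t}{2k}\,\sin(kt),
\end{equation*}
which grows linearly in $t$; hence Eq.~(\ref{fc1-2}) fails to be Lagrange stable at resonance.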
Moser \cite{a1, a2} proposed to study the boundedness of all solutions of the Duffing equation \begin{equation}\label{fc1-5}
\ddot{x}+\alpha x^3+\beta x=p(t),
\end{equation} where $\alpha>0, \beta\in \mathbf{R}$ are constants and $p(t)$ is a $1$-periodic continuous function. The first boundedness result, prompted by questions of Littlewood \cite{a3}, is due to Morris \cite{a4}, who in 1976 showed that all solutions of the equation (\ref{fcb1-5}) below are bounded for all time: \begin{equation}\label{fcb1-5}
\ddot{x}+2x^3=p(t),
\end{equation} where $p(t)$ is a $2\pi$-periodic continuous function. Subsequently, Morris's boundedness result was extended by Dieckerhoff-Zehnder \cite{a5} in 1987 to the wider class of systems \begin{equation}\label{fcb1-6} \ddot{x}+x^{2n+1}+\sum_{i=0}^{2n} x^{i}p_{i}(t)=0, \quad n\geq 1, \end{equation} where the $p_i(t)\in C^{\infty}$ $(i=0,1,\cdots,2n)$ are $1$-periodic functions. For other extensions of these boundedness results, one may see the papers \cite{a6, a7, a8, a9, a10, a11, a12, a13, a14, a15}.
Networks of coupled Duffing oscillators of various forms arise in many research fields, such as physics, mechanics and mathematical biology.
For example, the evolution equations for the voltage variables $V_1$ and $V_2$ obtained using Kirchhoff's voltage law are \begin{equation}\label{fc1-6} \begin{cases}
R^2 C^2 \frac{d^2 V_1}{d t^2}=-(\frac{R^2 C}{R_1})\frac{d V_1}{dt}-(\frac{R}{R_2})V_1-(\frac{R}{100 R_3}) V_1^3+(\frac{R}{R_C})V_2+f \sin \omega t,\\
R^2 C^2 \frac{d^2 V_2}{d t^2}=-(\frac{R^2 C}{R_1})\frac{d V_2}{dt}-(\frac{R}{R_2})V_2-(\frac{R}{100 R_3}) V_2^3+(\frac{R}{R_C})V_1 , \end{cases} \end{equation}
where the $R$'s and $C$'s are resistors and capacitors, respectively. This system can be regarded as a coupling of two Duffing oscillators. See \cite{a17, a18, a19, a20, a21, a22, a23, a24} for more details.
Recently, Yuan-Chen-Li \cite{a16} studied the Lagrangian stability for coupled Hamiltonian system of $m$ Duffing oscillators: \begin{equation}\label{fc1-7} \ddot{x_{i}}+x_{i}^{2n+1}+\frac{\partial F}{\partial x_{i}}=0,\ \ i=1, 2, \cdots, m, \end{equation}
where the polynomial potential is $F=F(x, t)=\sum_{\alpha\in \mathbf{N}^{m}, |\alpha|\leq 2n+1}p_{\alpha}(t)x^{\alpha}$, $x\in \mathbf{R}^{m}$, with each $p_{\alpha}(t)$ of period $2 \pi$, and $n$ is a given natural number. Yuan-Chen-Li \cite{a16} proved that (\ref{fc1-7}) has Lagrangian stability for almost all initial data if the $p_{\alpha}(t)$ are real analytic.
In the present paper, we relax the real-analyticity condition on $p_{\alpha}(t)$ to $C^{\ell}$ ($\ell=2(m+1)(4n+2nm+1)+\mu$ with $0<\mu\ll1$).
Throughout the present paper we denote by $C$ (or $C_0, C_1, c, c_0, c_1$, etc.) a universal constant which may be different in different places. Let the positive integer $d$ denote the number of degrees of freedom of the Hamiltonian under consideration.
\begin{thm}\label{thm1-1} Consider a Hamiltonian \begin{equation}\label{fcb1-1} H(\theta,t,I)=\frac{H_0(I)}{\varepsilon^{a}}+\frac{P(\theta,t,I)}{\varepsilon^{b}}, \end{equation}
where $a,b$ are given positive constants with $a>b$, and $H_0$ and $P$ obey the following conditions:\\ {\rm(1)} With $\ell=\frac{2(d+1)(5a-b+2ad)}{a-b}+\mu$ for some $0<\mu\ll1$, the function $H_0: [1,2]^d\rightarrow \mathbf R$ is real analytic, $P: \mathbf T^{d+1}\times [1,2]^d\rightarrow \mathbf R$ is $C^{\ell}$, and \begin{equation}\label{fcb1-2}
||H_0||:=\sup_{I\in[1,2]^d}|H_0(I)|\le c_1, \ |P|_{C^{\ell}(\mathbf T^{d+1}\times [1,2]^d)}\le c_2, \end{equation} {\rm(2)} $H_0$ is non-degenerate in Kolmogorov's sense: \begin{equation}\label{fcb1-3} \text{det}\,\left(\frac{\partial^2 H_0(I)}{\partial I^2}\right)\ge c_3>0,\ \forall\; I\in [1,2]^d. \end{equation} Then there exists $0<\epsilon^*\ll 1$ such that for any $\varepsilon$ with $0<\varepsilon<\epsilon^*$, the Hamiltonian system $$\dot \theta=\frac{\partial H(\theta,t,I)}{\partial I},\;\dot I=-\frac{\partial H(\theta,t,I)}{\partial \theta}$$ possesses a $(d+1)$-dimensional invariant torus with rotational frequency vector $(\omega(I_0),2\pi)$, where $\omega(I):=\frac{\partial H_0(I)}{\partial I}$, for any $I_0\in [1,2]^d$ such that $\omega(I_0)$ obeys the following Diophantine conditions {\rm(we write $B=5a-b+2ad$):}\\ {\rm(i)} \begin{equation}\label{fc1-19}
|\frac{\langle k,\omega(I_0)\rangle}{\varepsilon^a} +l|\ge \frac{\varepsilon^{-a+\frac{B}{\ell}}\gamma}{|k|^{\tau_1}}>\frac{\gamma}{|k|^{\tau_2}},\ \ k\in\mathbf Z^d\setminus\{0\},l\in\mathbf Z, |k|+|l|\le \varepsilon^{-\frac{B}{\ell}}(\log\frac{1}{\varepsilon})^2, \end{equation} where $\gamma=(\log\frac{1}{\varepsilon})^{-4},$ $\tau_1=d-1+\frac{(a-b)^2\mu}{1000(a+b+1)(d+3)(5a-b+2ad)}$, $\tau_2=d+\frac{(a-b)^2\mu}{1000(a+b+1)(d+3)(5a-b+2ad)}$; \\ {\rm(ii)} \begin{equation}\label{fc1-20}
|\frac{\langle k,\omega(I_0)\rangle}{\varepsilon^a} +l|\ge \frac{\gamma}{|k|^{\tau_2}},\ \ k\in\mathbf Z^d\setminus\{0\},l\in\mathbf Z, |k|+|l|> \varepsilon^{-\frac{B}{\ell}}(\log\frac{1}{\varepsilon})^{2},
\end{equation} where $\gamma=(\log\frac{1}{\varepsilon})^{-4},$ $\tau_2=d+\frac{(a-b)^2\mu}{1000(a+b+1)(d+3)(5a-b+2ad)}$. \end{thm} Applying Theorem \ref{thm1-1} to (\ref{fc1-7}) we have the following theorem.
\begin{thm}\label{thm1-2} For any $A>0$, let $\Theta_A=\{(x_{1}, \dot x_{1}; \cdots, x_{m}, \dot x_{m})\in\mathbf R^{2m}:\ A\le \sum_{i=1}^m \left(x_i^{2n+2}+(n+1)\dot x^2_i\right)\le c_4A, \ c_4>1\}$. Then there exists a subset $\tilde \Theta_A\subset\Theta_A$ with \begin{equation}\label{fc1-21} \lim_{A\to\infty}\frac{\text{Leb}\,\tilde \Theta_A}{\text{Leb}\,\Theta_A}=1
\end{equation} such that any solution to equation {\rm(\ref{fc1-7})} with any initial data $(x_{1}(0), \dot x_{1}(0); \cdots, x_{m}(0), \dot x_{m}(0))\in \tilde\Theta_A $ is time quasi-periodic with frequency vector $(\omega,2\pi)$ where $\omega=(\omega_i:\ i=1,\cdots,m)$ and $\omega_i=\omega_i(I(0))$ with $I(0)=(I_1(0),\cdots,I_m(0))$, $I_{i}(0)=(n+1)\dot x_{i}^{2}(0)+x_{i}^{2n+2}(0)$, furthermore, \begin{equation}\label{fc1-22}
\sup_{t\in\mathbf R}\sum_{i=1}^m |x_i(t)|+|\dot x_i(t)|<\infty. \end{equation}
\end{thm}
\begin{rem}\label{rem1-3} An equation is said to have Lagrangian stability for almost all initial data if its solutions obey (\ref{fc1-21}) and (\ref{fc1-22}). \end{rem}
\begin{rem}\label{rem1-4} Let $\Theta=\{I_0\in[1,2]^d:\ \omega(I_0) \,\text{obeys the Diophantine conditions}\}$. We claim that the Lebesgue measure of $\Theta$ approaches $1$: \[\text{Leb}\, \Theta\ge 1-C (\log\frac{1}{\varepsilon})^{-2}\to 1,\;\text{as}\; \varepsilon \to 0.\] Let
$$\tilde\Theta_{k,l}=\left\{\xi\in\omega([1,2]^d):\ |\frac{\langle k,\xi\rangle}{\varepsilon^a} +l|\le \frac{\varepsilon^{-a+\frac{B}{\ell}}\gamma}{|k|^{\tau_1}},\ k\in\mathbf Z^d\setminus\{0\},l\in\mathbf Z, |k|+|l|\le \varepsilon^{-\frac{B}{\ell}}(\log\frac{1}{\varepsilon})^2\right\}$$
and (by a slight abuse of notation, using the same symbol for the complementary range of $(k,l)$)
$$\tilde\Theta_{k,l}=\left\{\xi\in\omega([1,2]^d):\ |\frac{\langle k,\xi\rangle}{\varepsilon^a} +l|\le \frac{\gamma}{|k|^{\tau_2}},\ k\in\mathbf Z^d\setminus\{0\},l\in\mathbf Z, |k|+|l|> \varepsilon^{-\frac{B}{\ell}}(\log\frac{1}{\varepsilon})^{2}\right\}.$$ Let $f(\xi)=\frac{\langle k, \xi\rangle}{\varepsilon^{a}}+l.$ Since $k\neq 0,$ there exists an unit vector $v\in\mathbf{Z}^{d}$ such that \begin{equation}\label{fc1-23}
\frac{df(\xi)}{dv}\geq \frac{C|k|}{\varepsilon^{a}}. \end{equation}
Then, if $k\in\mathbf Z^d\setminus\{0\}$, $l\in\mathbf Z$, $|k|+|l|\le \varepsilon^{-\frac{B}{\ell}}(\log\frac{1}{\varepsilon})^{2}$, by (\ref{fc1-23}) we have \begin{equation}\label{fc1-24}
\text{Leb}\tilde\Theta_{k,l}\le C \frac{\gamma\cdot\varepsilon^{\frac{B}{\ell}}}{|k|^{\tau_1+1}}. \end{equation} Thus, \begin{equation}\label{fc1-25}
\text{Leb}\ \left(\bigcup_{k\in\mathbf Z^d\setminus\{0\},l\in\mathbf Z,|k|+|l|\le \varepsilon^{-\frac{B}{\ell}}(\log\frac{1}{\varepsilon})^{2} }\tilde\Theta_{k,l}\right)\le \sum_{|l|\le \varepsilon^{-\frac{B}{\ell}}(\log\frac{1}{\varepsilon})^{2}}C \gamma\cdot\varepsilon^{\frac{B}{\ell}}\le C(\log\frac{1}{\varepsilon})^{-2}. \end{equation}
If $k\in\mathbf Z^d\setminus\{0\}, \ l\in\mathbf Z, \ |k|+|l|> \varepsilon^{-\frac{B}{\ell}}(\log\frac{1}{\varepsilon})^{2}$, we let $c_5=\max\{|\omega([1,2]^d)|\}:=\max\{\sum_{i=1}^d|\omega_i([1,2]^d)|\}$. Note that $|\langle k, \xi\rangle|\leq c_5|k|$. Thus, if $|l|>\frac{c_5|k|}{\varepsilon^a}+1,$ then
$$|\frac{\langle k, \xi\rangle}{\varepsilon^a}+l|\geq|l|-|\frac{\langle k, \xi\rangle}{\varepsilon^a}|>\frac{c_5|k|}{\varepsilon^a}+1-\frac{c_5|k|}{\varepsilon^a}\geq1>\frac{\gamma}{|k|^{\tau_2}}.$$
It follows that $\tilde\Theta_{k,l}=\emptyset.$ Now assume $|l|\leq\frac{c_5|k|}{\varepsilon^a}+1$; then by (\ref{fc1-23}) we have \begin{equation}\label{fc1-26}
\text{Leb}\tilde\Theta_{k,l}\le \frac{C\gamma\varepsilon^a}{|k|^{\tau_2+1}}. \end{equation} Thus, \begin{eqnarray}\label{fc1-27}
\nonumber\text{Leb}\ \left(\bigcup_{k\in\mathbf Z^d\setminus\{0\},l\in\mathbf Z,|k|+|l|> \varepsilon^{-\frac{B}{\ell}}(\log\frac{1}{\varepsilon})^{2} }\tilde\Theta_{k,l}\right)&\le& \sum_{k\neq0}\sum_{|l|\leq\frac{c_5|k|}{\varepsilon^a}+1}\frac{C\gamma\varepsilon^a}{|k|^{\tau_2+1}}\\
&\le&\sum_{k\neq0}\frac{C\gamma}{|k|^{\tau_2}}\le C(\log\frac{1}{\varepsilon})^{-4}. \end{eqnarray} Let $\Theta=[1,2]^d\setminus \left(\bigcup_{k\in\mathbf Z^d\setminus\{0\},l\in\mathbf Z}\omega^{-1}(\tilde\Theta_{k,l})\right)$. By Kolmogorov's non-degeneracy condition, the map $\omega:[1,2]^d\to \omega([1,2]^d)$ is a diffeomorphism. Then (\ref{fc1-25}) and (\ref{fc1-27}) complete the proof of the claim. \end{rem}
\section{Approximation Lemma}\label{sec2}
First we denote by $|\cdot|$ the norm of any finite dimensional Euclidean space. Let $C^{\tilde{\mu}}(\mathbf{R}^{m})$ for $0<\tilde{\mu}<1$ denote the space of bounded H\"older continuous functions $f: \mathbf{R}^m\rightarrow\mathbf{R}^n$ with the norm \begin{equation*}
|f|_{C^{\tilde{\mu}}}=\sup_{0<|x-y|<1}\frac{|f(x)-f(y)|}{|x-y|^{\tilde{\mu}}}+\sup_{x\in\mathbf{R}^m}|f(x)|. \end{equation*}
If $\tilde{\mu}=0$, then $|f|_{C^{\tilde{\mu}}}$ denotes the sup-norm. For $\tilde{\ell}=k+\tilde{\mu}$ with $k\in\mathbf{N}$ and $0\leq\tilde{\mu}<1$, we denote by $C^{\tilde{\ell}}(\mathbf{R}^{m})$ the space of functions $f:\mathbf{R}^m\rightarrow \mathbf{R}^n$ with H\"older continuous partial derivatives $\partial^{\alpha}f\in C^{\tilde{\mu}}(\mathbf{R}^m)$ for all multi-indices $\alpha=(\alpha_1,\cdots,\alpha_m)\in\mathbf{N}^m$ with $|\alpha|:=\alpha_1+\cdots+\alpha_m\leq k$. We define the norm \begin{equation*}
|f|_{C^{\tilde{\ell}}}:=\sum_{|\alpha|\leq\tilde{\ell}}|\partial^{\alpha}f|_{C^{\tilde{\mu}}} \end{equation*} for $\tilde{\mu}=\tilde{\ell}-[\tilde{\ell}]<1$. In order to state an approximation lemma, we define the kernel function \begin{equation*}
K(x)=\frac{1}{(2\pi)^m}\int_{\mathbf{R}^m}\hat{K}(\xi)e^{i\langle x,\xi\rangle}d\xi,\ x\in\mathbf{C}^m, \end{equation*}
where $\hat{K}(\xi)$ is a $C^{\infty}$ function with compact support, contained in the ball $|\xi|\leq a_1$ with a constant $a_1>0$, that satisfies $$ \partial^{\alpha}\hat{K}(0)= \left\{ \begin{array}{l} 1, \ \alpha=0,\\ 0, \ \alpha\neq0. \end{array} \right. $$
Then $K: \mathbf{C}^m\rightarrow\mathbf{R}^n$ is a real analytic function with the property that for every $j>0$ and every $p>0$, there exists a constant $c=c(j,p)>0$ such that for all $\beta\in \mathbf{N}^m$ with $|\beta|\leq j$, \begin{equation}\label{fc2-1}
|\partial^{\beta}K(x+iy)|\leq c(1+|x|)^{-p}e^{a_1|y|}, \ x,y\in \mathbf{R}^m. \end{equation} \begin{lem}[Jackson-Moser-Zehnder]\label{lem2-1} There is a family of convolution operators \begin{equation}\label{fc2-2}
(S_{s}F)(x)=s^{-m}\int_{\mathbf{R}^{m}}K(s^{-1}(x-y))F(y)dy,\ \ 0<s\leq 1,\ \ \forall\ F\in C^{0}(\mathbf{R}^{m})
\end{equation}
from $C^{0}(\mathbf{R}^{m})$ into the linear space of entire functions on $\mathbf{C}^{m}$ such that for every $\tilde{\ell}>0$ there exists a constant $c=c(\tilde{\ell})>0$ with the following properties: if $F\in C^{\tilde{\ell}}(\mathbf{R}^{m}),$ then for $|\alpha|\leq \tilde{\ell}$ and $|\mathrm{Im}\, x|\leq s$, \begin{equation}\label{fc2-3}
|\partial^{\alpha}(S_{s}F)(x)-\sum_{|\beta|\leq \tilde{\ell}-|\alpha|}\partial^{\alpha+\beta}F(\mathrm{Re} x)({\bf{i}}\, \mathrm{Im} x)^{\beta}/\beta!|\leq c|F|_{C^{\tilde{\ell}}}s^{\tilde{\ell}-|\alpha|} \end{equation} and in particular for $\rho\leq s$ \begin{equation}\label{fc2-4}
|\partial^{\alpha}S_{s}F-\partial^{\alpha}S_{\rho}F|_{\rho}:=\sup_{|{\rm Im} x|\leq \rho}|\partial^{\alpha}(S_{s}F)(x)-\partial^{\alpha}(S_{\rho}F)(x)|\leq c|F|_{C^{\tilde{\ell}}}s^{\tilde{\ell}-|\alpha|}. \end{equation} Moreover, in the real case \begin{equation}\label{fc2-5}
|S_{s}F-F|_{C^{p}}\leq c |F|_{C^{\tilde{\ell}}}s^{\tilde{\ell}-p},\ \ p\leq \tilde{\ell}, \end{equation} \begin{equation}\label{fc2-6}
|S_{s}F|_{C^{p}}\leq c|F|_{C^{\tilde{\ell}}}s^{\tilde{\ell}-p},\ \ p\geq \tilde{\ell}. \end{equation} Finally, if $F$ is periodic in some variables then so are the approximating functions $S_{s}F$ in the same variables. \end{lem} \begin{rem}\label{rem2-1} Moreover, we point out that from (\ref{fc2-6}) one can easily deduce the following well-known convexity estimates \begin{equation}\label{fc2-7}
|f|_{C^{q}}^{l-k}\leq c|f|_{C^{k}}^{l-q}|f|_{C^{l}}^{q-k},\ \ k\leq q\leq l, \end{equation} \begin{equation}\label{fc2-8}
|f\cdot g|_{C^{s}}\leq c(|f|_{C^{s}}|g|_{C^{0}}+|f|_{C^{0}}|g|_{C^{s}}),\ \ s\geq 0. \end{equation} See \cite{a25, a26} for the proofs of Lemma \ref{lem2-1} and the inequalities (\ref{fc2-7}) and (\ref{fc2-8}). \end{rem} \begin{rem}\label{rem2-2} From the definition of the operator $S_s$, we clearly have \begin{equation}\label{fc2-9}
\sup_{x,y\in\mathbf{R}^{m}, |y|\leq s}|S_sF(x+iy)|\leq C|F|_{C^0}. \end{equation}
In fact, by the definition of $S_s$, we have that for any $x,y\in\mathbf{R}^{m}$ with $|y|\leq s$, \begin{eqnarray}
\nonumber|S_sF(x+iy)|&=&\nonumber |s^{-m}\int_{\mathbf{R}^{m}}K(s^{-1}(x+iy-z))F(z)dz|\\
&=&\nonumber |\int_{\mathbf{R}^{m}}K(is^{-1}y+\xi)F(x-s\xi)d\xi|\\
&\leq&\nonumber |F|_{C^0}\int_{\mathbf{R}^{m}}|K(is^{-1}y+\xi)|d\xi\\
&\leq&\nonumber C|F|_{C^0}, \end{eqnarray} where we used (\ref{fc2-1}) in the last inequality. \end{rem}
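To make the action of $S_s$ more concrete (a standard computation we supply, not taken from the source), apply it to a single Fourier mode $F(x)=e^{i\langle k,x\rangle}$: substituting $u=s^{-1}(x-y)$,
\[
(S_{s}F)(x)=s^{-m}\int_{\mathbf{R}^{m}}K(s^{-1}(x-y))e^{i\langle k,y\rangle}dy
=e^{i\langle k,x\rangle}\int_{\mathbf{R}^{m}}K(u)e^{-is\langle k,u\rangle}du
=\hat{K}(sk)\,e^{i\langle k,x\rangle}.
\]
Thus $S_s$ acts as a smooth frequency cutoff: since $\hat{K}$ is supported in $|\xi|\leq a_1$, all modes with $|k|>a_1/s$ are annihilated, which explains why $S_sF$ extends to an entire function.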
Consider a function $F(\theta,t,I)$, where $F:\mathbf T^{d+1}\times [1,2]^d\rightarrow\mathbf{R}$ satisfies \begin{equation*}
|F|_{C^{\tilde{\ell}}(\mathbf T^{d+1}\times [1,2]^d)}\leq C.
\end{equation*}
By Whitney's extension theorem, we can find a function $\tilde{F}: \mathbf{T}^{d+1}\times \mathbf{R}^{d}\rightarrow\mathbf{R}$ such that $\tilde{F}|_{\mathbf T^{d+1}\times [1,2]^d}=F$ (i.e., $\tilde{F}$ is an extension of $F$) and \begin{equation*}
|\tilde{F}|_{C^{|\alpha|}(\mathbf{T}^{d+1}\times \mathbf{R}^{d})}\leq C_{\alpha}|F|_{C^{|\alpha|}(\mathbf T^{d+1}\times [1,2]^d)}, \ \forall\alpha\in\mathbf{Z}^{2d+1}_{+}, |\alpha|\leq\tilde{\ell},
\end{equation*}
where $C_{\alpha}$ is a constant depending only on $\tilde{\ell}$ and $d$.
Let $z=(\theta,t,I)$ for brevity and define, for any $s>0$,
\begin{equation*}
(S_{s}\tilde{F})(z)=s^{-(2d+1)}\int_{\mathbf{T}^{d+1}\times \mathbf{R}^{d}}K(s^{-1}(z-\tilde{z}))\tilde{F}(\tilde{z})d\tilde{z}.
\end{equation*}
For any positive integer $p$, let $\mathbf{{T}}_s^p=\left\{x\in \mathbf{{C}}^{p}/{(2\pi \mathbf{Z})}^{p}: |{\rm Im}x|\leq s \right\}$, $\mathbf{{R}}_s^p=\left\{x\in \mathbf{{C}}^{p}: |{\rm Im}x|\leq s\right\}$. Fix a sequence of fast decreasing numbers $s_{\nu}\downarrow0$, $\nu\in\mathbf{Z_{+}}$ and $s_0\leq\frac{1}{4}$. Let
\begin{equation*}
F^{(\nu)}(z)=(S_{2s_\nu}\tilde{F})(z),\ \nu\geq0.
\end{equation*} Then the $F^{(\nu)}$ ($\nu\geq0$) are entire functions on $\mathbf{C}^{2d+1}$; in particular, they obey the following properties.\\ (1) Each $F^{(\nu)}$ ($\nu\geq0$) is real analytic on the complex domain $\mathbf{T}^{d+1}_{2s_{\nu}}\times \mathbf{R}^{d}_{2s_{\nu}}$;\\ (2) The sequence of functions $F^{(\nu)}$ satisfies the bounds \begin{equation}\label{fc2-10}
\sup_{z\in\mathbf{T}^{d+1}\times \mathbf{R}^{d}}|F^{(\nu)}(z)-\tilde{F}(z)|\leq C |F|_{C^{\tilde{\ell}}(\mathbf T^{d+1}\times [1,2]^d)}s_{\nu}^{\tilde{\ell}}, \end{equation} \begin{equation}\label{fc2-11}
\sup_{z\in \mathbf{T}^{d+1}_{2s_{\nu+1}}\times\mathbf{R}^{d}_{2s_{\nu+1}}}|F^{(\nu+1)}(z)-F^{(\nu)}(z)|\leq C |F|_{C^{\tilde{\ell}}(\mathbf T^{d+1}\times [1,2]^d)}s_{\nu}^{\tilde{\ell}}, \end{equation} where the constant $C=C(d,\tilde{\ell})$ depends only on $d$ and $\tilde{\ell}$;\\ (3) The first approximant $F^{(0)}(z)=(S_{2s_0}\tilde{F})(z)$ is bounded in terms of $F$. Precisely, \begin{equation}\label{fc2-12}
|F^{(0)}(z)|\leq C |F|_{C^{\tilde{\ell}}(\mathbf T^{d+1}\times [1,2]^d)},\ \ \forall z\in\mathbf{T}^{d+1}_{2s_0}\times\mathbf{R}^{d}_{2s_0}, \end{equation} where the constant $C=C(d,\tilde{\ell})$ is independent of $s_0$;\\ (4) From Lemma \ref{lem2-1}, we have that
\begin{equation}\label{fc2-13}
F(z)=F^{(0)}(z)+\sum_{\nu=0}^{\infty}(F^{(\nu+1)}(z)-F^{(\nu)}(z)),\ \ z\in \mathbf T^{d+1}\times [1,2]^d.
\end{equation}
Let
\begin{equation}\label{fc2-14}
F_0(z)=F^{(0)}(z),\ F_{\nu+1}(z)=F^{(\nu+1)}(z)-F^{(\nu)}(z).
\end{equation}
Then
\begin{equation}\label{fc2-15}
F(z)=\sum_{\nu=0}^{\infty}F_{\nu}(z),\ \ \ \ z\in \mathbf T^{d+1}\times [1,2]^d.
\end{equation}
\section{Normal Form}\label{sec3} Let $I_0\in [1,2]^d$ be such that $\omega(I_0)=\frac{\partial H_0}{\partial I}(I_0)$ obeys the Diophantine conditions (\ref{fc1-19}) and (\ref{fc1-20}). Let $\mu_1=\frac{(a-b)^2\mu}{1000(a+b+1)(d+3)(5a-b+2ad)}$, $\mu_2=2\mu_1=\frac{(a-b)^2\mu}{500(a+b+1)(d+3)(5a-b+2ad)}$, $m_0=10+\left[E\right]$ where $E=\max\{\frac{4B}{a-b-\frac{2(\tau_1+2)B}{\ell}-2\mu_1}, \frac{2(2\tau_1+3)(\tau_2+1)B}{B-2a-2(\tau_2+1)b-\frac{2(2\tau_1+5)(\tau_2+1)B}{\ell}-8\mu_1(\tau_2+1)-2\mu_2}\}$ ($a$, $b$, $\tau_1$, $\tau_2$, $B$, $\ell$ are the same as those in Theorem \ref{thm1-1}), and $[\cdot]$ denotes the integer part of a positive number.
Define sequences \begin{itemize} \item $ \varepsilon_{j}=\varepsilon^{\frac{j B}{m_0}},\ j=0,1,2,\cdots,m_0;\ \varepsilon_{j}=\varepsilon_{j-1}^{1+\mu_3}$ with $\mu_3=\frac{(a-b)\mu}{10B},\ j=m_0+1,m_0+2,\cdots;$ \item $ s_j=\varepsilon_{j+1}^{\frac{1}{\ell}},\ s_j^{(l)}=s_{j}-\frac{l}{10}(s_j-s_{j+1}),\ l=0,1,\cdots,10,\ j=0,1,2,\cdots;$ \item $ r_j=\varepsilon^{\frac{(j+1)(\tau_1+1) B}{\ell m_0}+\mu_1+\frac{B}{\ell}}$ with $\mu_1=\frac{(a-b)^2\mu}{1000(a+b+1)(d+3)(5a-b+2ad)},\ j=0,1,2,\cdots,m_0;\ r_j=r_{j-1}^{1+\mu_3},\ j=m_0+1,m_0+2,\cdots;$ \item $ r_j^{(l)}=r_{j}-\frac{l}{10}(r_j-r_{j+1}),\ l=0,1,\cdots,10, \ j=0,1,2,\cdots;$ \item $K_j=\frac{2B}{s_j}\log\frac{1}{\varepsilon},\ j=0,1,2,\cdots;$
\item $B(r_j)=\{z\in\mathbf C^d: \, |z-I_0|\le r_j\},\ j=0,1,2,\cdots$. \end{itemize}
With the preparation of Section \ref{sec2}, we can rewrite equation (\ref{fcb1-1}) as follows:
\begin{equation}\label{fc3-1} H(\theta,t,I)=\frac{H_0(I)}{\varepsilon^{a}}+\frac{1}{\varepsilon^{b}} \sum_{\nu=0}^{\infty}P_{\nu}(\theta,t,I),
\end{equation}
where
\begin{equation}\label{fc3-2} P_{\nu}: \mathbf{T}^{d+1}_{2s_{\nu}}\times \mathbf{R}^{d}_{2s_{\nu}}\rightarrow\mathbf{C},
\end{equation}
is real analytic, and
\begin{equation}\label{fc3-3}
\sup_{(\theta,t,I)\in \mathbf{T}^{d+1}_{2s_{\nu}}\times \mathbf{R}^{d}_{2s_{\nu}}}|P_{\nu}|\leq C\varepsilon_{\nu}.
\end{equation} Let \begin{equation}\label{fc3-4} h^{(0)}(t,I)\equiv0,\ P^{(0)}=P_0. \end{equation} Then we can rewrite equation (\ref{fc3-1}) as follows:
\begin{equation}\label{fc3-5} H^{(0)}(\theta,t,I)=\frac{H_0(I)}{\varepsilon^{a}}+\frac{h^{(0)}(t,I)}{\varepsilon^{b}}+\frac{\varepsilon_0 P^{(0)}(\theta,t,I)}{\varepsilon^{b}}+ \sum_{\nu=1}^{\infty}\frac{P_{\nu}(\theta,t,I)}{\varepsilon^{b}}.
\end{equation} Define
\begin{equation*}
D(s,r)=\mathbf{T}^{d+1}_{s}\times B(r),\ D(s,0)=\mathbf{T}^{d+1}_{s},\ D(0,r)=B(r).
\end{equation*} For a function $f$ defined on $D(s,r)$, define
\begin{equation*}
||f||_{D(s,r)}=\sup_{(\theta,t,I)\in D(s,r)}|f(\theta,t,I)|.
\end{equation*}
Similarly, we can define $||f||_{D(0,r)}$ and $||f||_{D(s,0)}$.
Clearly, (\ref{fc3-5}) fulfills (\ref{fc3-8})-(\ref{fc3-10}) with $m=0$. Then we have the following lemma. \begin{lem}\label{lem3-1} Suppose that we have constructed $m+1$ {\rm(}$m=0,1,2,\cdots, m_0-1${\rm)} symplectic transformations $\Phi_0=id$, $\Phi_1$, $\cdots$, $\Phi_m$ with \begin{equation}\label{fc3-6} \Phi_j:D(s_j,r_j)\rightarrow D(s_{j-1},r_{j-1}),\ j=1,2,\cdots,m
\end{equation} and \begin{equation}\label{fcf3-1}
\|\partial(\Phi_{j}-id)\|_{D(s_j,r_j)}\leq \frac{1}{2^{j+1}},\ j=1,2,\cdots,m \end{equation} such that system {\rm(\ref{fc3-5})} is changed by $\Phi^{(m)}=\Phi_0\circ\Phi_1\circ\cdots\circ\Phi_m$ into \begin{equation}\label{fc3-7} H^{(m)}=H^{(0)}\circ\Phi^{(m)}=\frac{H_0(I)}{\varepsilon^{a}}+\frac{h^{(m)}(t,I)}{\varepsilon^{b}}+\frac{\varepsilon_m P^{(m)}(\theta,t,I)}{\varepsilon^{b}}+ \sum_{\nu=m+1}^{\infty}\frac{P_{\nu}\circ\Phi^{(m)}(\theta,t,I)}{\varepsilon^{b}},
\end{equation} where \begin{equation}\label{fc3-8}
\|h^{(m)}(t,I)\|_{D(s_{m}, r_{m})}\leq C,
\end{equation}
\begin{equation}\label{fc3-9}
\|P^{(m)}(\theta,t,I)\|_{D(s_{m}, r_{m})}\leq C,
\end{equation}
\begin{equation}\label{fc3-10}
\|P_{\nu}\circ\Phi^{(m)}(\theta,t,I)\|_{D(s_{\nu}, r_{\nu})}\leq C\varepsilon_{\nu},\ \nu=m+1,m+2,\cdots.
\end{equation} Then there is a symplectic transformation $\Phi_{m+1}$ with \begin{equation*} \Phi_{m+1}:D(s_{m+1},r_{m+1})\rightarrow D(s_m,r_m)
\end{equation*}
and
\begin{equation*}
\|\partial(\Phi_{m+1}-id)\|_{D(s_{m+1},r_{m+1})}\leq \frac{1}{2^{m+2}} \end{equation*} such that system {\rm(\ref{fc3-7})} is changed by $\Phi_{m+1}$ into {\rm($\Phi^{(m+1)}=\Phi_0\circ\Phi_1\circ\cdots\circ\Phi_{m+1}$)} \begin{eqnarray*}\label{fc3-12}
\nonumber H^{(m+1)}&=&H^{(m)}\circ\Phi_{m+1}=H^{(0)}\circ\Phi^{(m+1)}\nonumber\\ \nonumber&=&\frac{H_0(I)}{\varepsilon^{a}}+\frac{h^{(m+1)}(t,I)}{\varepsilon^{b}}+\frac{\varepsilon_{m+1} P^{(m+1)}(\theta,t,I)}{\varepsilon^{b}}+ \sum_{\nu=m+2}^{\infty}\frac{P_{\nu}\circ\Phi^{(m+1)}(\theta,t,I)}{\varepsilon^{b}},
\end{eqnarray*} where $H^{(m+1)}$ satisfies {\rm(\ref{fc3-8})}-{\rm(\ref{fc3-10})} with $m$ replaced by $m+1$.
\end{lem} \begin{proof} Assume that the change $\Phi_{m+1}$ is implicitly defined by \begin{equation}\label{fc3-16} \Phi_{m+1}: \begin{cases}
I=\rho+\frac{\partial S}{\partial \theta}, \\
\phi=\theta+\frac{\partial S}{\partial \rho},\\ t=t,
\end{cases} \end{equation} where $S=S(\theta, t, \rho)$ is the generating function, which will be proved to be analytic in a smaller domain
$D(s_{m+1}, r_{m+1}).$ By a simple computation, we have $$d I\wedge d\theta=d\rho \wedge d\theta+\sum_{i,j=1}^d\frac{\partial^{2}S}{\partial\rho_{i}\partial\theta_{j}}d\rho_{i}\wedge d\theta_{j}=d\rho \wedge d\phi.$$ Thus, the coordinate change $\Phi_{m+1}$ is symplectic whenever it exists. Moreover, we obtain the transformed Hamiltonian \begin{eqnarray}\label{fc3-17}
\nonumber H^{(m+1)}&=&H^{(m)}\circ\Phi_{m+1}\nonumber\\ \nonumber &=&\frac{H_0(\rho+\frac{\partial S}{\partial \theta})}{\varepsilon^{a}}+\frac{h^{(m)}(t,\rho+\frac{\partial S}{\partial \theta})}{\varepsilon^{b}}+\frac{\varepsilon_{m} P^{(m)}(\theta,t,\rho+\frac{\partial S}{\partial \theta})}{\varepsilon^{b}}+\frac{\partial S}{\partial t}\\ &&+\frac{P_{m+1}\circ\Phi^{(m+1)}(\phi,t,\rho)}{\varepsilon^{b}}+ \sum_{\nu=m+2}^{\infty}\frac{P_{\nu}\circ\Phi^{(m+1)}(\phi,t,\rho)}{\varepsilon^{b}},
\end{eqnarray}
where $\theta=\theta(\phi, t, \rho)$ is implicitly defined by (\ref{fc3-16}). By Taylor's formula, we have \begin{eqnarray}\label{fc3-18}
\nonumber H^{(m+1)} &=&\frac{H_0(\rho)}{\varepsilon^{a}}+\frac{h^{(m)}(t,\rho)}{\varepsilon^{b}}+\langle\frac{\omega(\rho)}{\varepsilon^a}, \frac{\partial S}{\partial \theta}\rangle+\frac{\partial S}{\partial t}+\frac{\varepsilon_{m} P^{(m)}(\theta,t,\rho)}{\varepsilon^{b}}\\ &&+\frac{P_{m+1}\circ\Phi^{(m+1)}(\phi,t,\rho)}{\varepsilon^{b}}+ \sum_{\nu=m+2}^{\infty}\frac{P_{\nu}\circ\Phi^{(m+1)}(\phi,t,\rho)}{\varepsilon^{b}}+R_1,
\end{eqnarray}
where $\omega(\rho)=\frac{\partial H_0}{\partial I}(\rho)$ and
\begin{eqnarray}\label{fc3-19}
\nonumber R_1&=&\frac{1}{\varepsilon^{a}}\int_{0}^{1}(1-\tau)\frac{\partial^2 H_0}{\partial I^2}(\rho+\tau\frac{\partial S}{\partial \theta}) (\frac{\partial S}{\partial \theta})^{2}d\tau+\frac{\varepsilon_m}{\varepsilon^b}\int_{0}^{1}\frac{\partial P^{(m)}}{\partial I}(\theta, t, \rho+\tau\frac{\partial S}{\partial\theta})\frac{\partial S}{\partial \theta}d\tau\\ &&+\frac{1}{\varepsilon^b}\int_0^1\frac{\partial h}{\partial I}(t,\rho+\tau\frac{\partial S}{\partial\theta})\frac{\partial S}{\partial \theta}\, d\tau. \end{eqnarray} Expanding $P^{(m)}(\theta,t,\rho)$ into a Fourier series, \begin{equation}\label{fc3-20} P^{(m)}(\theta,t,\rho)=\sum_{(k,l)\in\mathbf{Z}^d\times \mathbf{Z}}\widehat{{P}^{(m)}}(k,l,\rho)e^{i(\langle k,\theta\rangle+lt)}:=P_{1}^{(m)}(\theta,t,\rho)+P_{2}^{(m)}(\theta,t,\rho), \end{equation}
where $P_{1}^{(m)}=\sum_{|k|+|l|\leq K_m}\widehat{{P}^{(m)}}(k,l,\rho)e^{i(\langle k,\theta\rangle+lt)}$, $P_{2}^{(m)}=\sum_{|k|+|l|> K_m}\widehat{{P}^{(m)}}(k,l,\rho)e^{i(\langle k,\theta\rangle+lt)}$. Then, we derive the homological equation: \begin{equation}\label{fc3-21} \frac{\partial S}{\partial t}+\langle \frac{\omega(\rho)}{\varepsilon^{a}}, \frac{\partial S}{\partial\theta}\rangle +\frac{\varepsilon_{m} P_1^{(m)}(\theta,t,\rho)}{\varepsilon^{b}}-\frac{\varepsilon_{m} \widehat{{P}_1^{(m)}}(0,t,\rho)}{\varepsilon^{b}}=0, \end{equation} where $\widehat{{P}_1^{(m)}}(0, t, \rho)$ is the $0$-th Fourier coefficient of $P_1^{(m)}(\theta,t,\rho)$ as a function of $\theta$. Let \begin{equation}\label{fc3-22}
S(\theta,t,\rho)=\sum_{|k|+|l|\leq K_m,k\neq 0}\widehat{S}(k, l, \rho)e^{i(\langle k, \theta\rangle+lt)}. \end{equation} By passing to Fourier coefficients, we have \begin{equation}\label{fc3-23}
\widehat{S}(k, l, \rho)=\frac{\varepsilon_m}{\varepsilon^b}\cdot\frac{i}{\varepsilon^{-a}\langle k, \omega(\rho)\rangle +l}\widehat{{P}^{(m)}}(k, l, \rho),\;|k|+|l|\leq K_m,\; k\in \mathbf{Z}^{d}\setminus\{0\}, l\in \mathbf{Z}. \end{equation} Then we can solve homological equation (\ref{fc3-21}) by setting \begin{equation}\label{fc3-24}
S(\theta,t,\rho)=\sum_{|k|+|l|\leq K_m,k\neq 0} \frac{\varepsilon_m}{\varepsilon^b}\cdot\frac{i}{\varepsilon^{-a}\langle k, \omega(\rho)\rangle +l}\widehat{{P}^{(m)}}(k, l, \rho)e^{i(\langle k, \theta\rangle+lt)}. \end{equation}
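As a direct check (a substitution we supply, not spelled out in the source), plugging (\ref{fc3-24}) into the left-hand side of (\ref{fc3-21}) and using (\ref{fc3-23}) gives, mode by mode,
\begin{eqnarray*}
\frac{\partial S}{\partial t}+\langle \frac{\omega(\rho)}{\varepsilon^{a}}, \frac{\partial S}{\partial\theta}\rangle
&=&\sum_{|k|+|l|\leq K_m,k\neq 0} i\left(l+\frac{\langle k, \omega(\rho)\rangle}{\varepsilon^{a}}\right)\widehat{S}(k, l, \rho)e^{i(\langle k, \theta\rangle+lt)}\\
&=&-\frac{\varepsilon_m}{\varepsilon^b}\sum_{|k|+|l|\leq K_m,k\neq 0}\widehat{{P}^{(m)}}(k, l, \rho)e^{i(\langle k, \theta\rangle+lt)}
=-\frac{\varepsilon_{m}}{\varepsilon^{b}}\left(P_1^{(m)}(\theta,t,\rho)-\widehat{{P}_1^{(m)}}(0,t,\rho)\right),
\end{eqnarray*}
which is exactly (\ref{fc3-21}).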
By (\ref{fcb1-3}) and (\ref{fc1-19}), for $\forall \rho\in B(r_{m})$, $|k|+|l|\leq K_{m}$, $k\neq 0$, we have \begin{eqnarray}\label{fc3-25}
|\varepsilon^{-a}\langle k, \omega(\rho)\rangle+l|
&\geq &|\varepsilon^{-a}\langle k, \omega(I_{0})\rangle+l|-|\varepsilon^{-a}\langle k, \omega(I_{0})-\omega(\rho)\rangle|\nonumber\\
&\geq & \frac{\varepsilon^{-a+\frac{B}{\ell}}\gamma}{|k|^{\tau_1}}-C\varepsilon^{-a}|k|r_{m}\nonumber\\
&\geq & \frac{\varepsilon^{-a+\frac{B}{\ell}}\gamma}{2|k|^{\tau_1}}. \end{eqnarray} Then, by (\ref{fc3-9}) and (\ref{fc3-23})-(\ref{fc3-25}), using R\"ussmann's subtle arguments \cite{a27, a28} to obtain optimal estimates of small-divisor series (see also Lemma 5.1 in \cite{a29}), we get \begin{equation}\label{fc3-26}
\|S(\theta,t,\rho)\|_{D(s^{(1)}_m, r_m)}\leq\frac{C\varepsilon^{a-b-\frac{B}{\ell}}\varepsilon_m\|P^{(m)}(\theta,t,\rho)\|_{D(s_m,r_m)}}{\gamma s_{m}^{\tau_{1}}}\leq\frac{C\varepsilon^{a-b-\frac{B}{\ell}}\varepsilon_m}{\gamma s_{m}^{\tau_{1}}}. \end{equation} Then by Cauchy's estimate, we have \begin{equation}\label{fc3-27}
\|\frac{\partial S}{\partial \theta}\|_{D(s^{(2)}_m, r_m)}\leq \frac{C\varepsilon^{a-b-\frac{B}{\ell}}\varepsilon_m}{\gamma s_{m}^{\tau_{1}+1}}\ll r_m-r_{m+1},\ \|\frac{\partial S}{\partial \rho}\|_{D(s^{(1)}_m, r^{(1)}_m)}\leq \frac{C\varepsilon^{a-b-\frac{B}{\ell}}\varepsilon_m}{\gamma s_{m}^{\tau_{1}}r_m}\ll s_m-s_{m+1}. \end{equation} By (\ref{fc3-16}) and (\ref{fc3-27}) and the implicit function theorem, we get that there are analytic functions $u=u(\phi, t, \rho), v=v(\phi, t, \rho)$ defined on the domain $D(s^{(3)}_m, r^{(3)}_m)$ with \begin{equation}\label{fc3-28} \frac{\partial S(\theta,t,\rho)}{\partial \theta}=u(\phi, t, \rho),\ \frac{\partial S(\theta,t,\rho)}{\partial \rho}=-v(\phi, t, \rho) \end{equation} and \begin{equation}\label{fc3-29}
\|u\|_{D(s^{(3)}_m, r^{(3)}_m)}\leq \frac{C\varepsilon^{a-b-\frac{B}{\ell}}\varepsilon_m}{\gamma s_{m}^{\tau_{1}+1}}\ll r_m-r_{m+1},\ \|v\|_{D(s^{(3)}_m, r^{(3)}_m)}\leq \frac{C\varepsilon^{a-b-\frac{B}{\ell}}\varepsilon_m}{\gamma s_{m}^{\tau_{1}}r_m}\ll s_m-s_{m+1} \end{equation} such that \begin{equation}\label{fc3-30} \Phi_{m+1}: \begin{cases}
I=\rho+u(\phi, t, \rho), \\
\theta=\phi+v(\phi, t, \rho),\\ t=t.
\end{cases} \end{equation} Then, we have \begin{equation}\label{fc3-31} \Phi_{m+1}(D(s_{m+1},r_{m+1}))\subseteq \Phi_{m+1}(D(s^{(3)}_{m},r^{(3)}_{m}))\subseteq D(s_{m},r_{m}). \end{equation}
Let \begin{equation}\label{fc3-32} h^{(m+1)}(t,\rho)=h^{(m)}(t,\rho)+\varepsilon_{m}\widehat{{P}_1^{(m)}}(0, t, \rho), \end{equation} \begin{equation}\label{fc3-33}
\frac{\varepsilon_{m+1}P^{(m+1)}(\phi,t,\rho)}{\varepsilon^{b}}=\frac{\varepsilon_{m} P_2^{(m)}(\theta,t,\rho)}{\varepsilon^{b}}+\frac{P_{m+1}\circ\Phi^{(m+1)}(\phi,t,\rho)}{\varepsilon^{b}}+R_1. \end{equation} Then by (\ref{fc3-18}), (\ref{fc3-20}), (\ref{fc3-21}), (\ref{fc3-32}) and (\ref{fc3-33}), we have \begin{equation}\label{fc3-34} H^{(m+1)}(\phi,t,\rho)=\frac{H_0(\rho)}{\varepsilon^{a}}+\frac{h^{(m+1)}(t,\rho)}{\varepsilon^{b}}+\frac{\varepsilon_{m+1} P^{(m+1)}(\phi,t,\rho)}{\varepsilon^{b}}+ \sum_{\nu=m+2}^{\infty}\frac{P_{\nu}\circ\Phi^{(m+1)}(\phi,t,\rho)}{\varepsilon^{b}}.
\end{equation}
By (\ref{fc3-9}) and (\ref{fc3-20}), it is not difficult to show (see Lemma A.2 in \cite{a30}) that \begin{equation}\label{fc3-36}
\| \frac{\varepsilon_{m} P_2^{(m)}(\theta,t,\rho)}{\varepsilon^{b}}\|_{D(s^{(9)}_{m}, r^{(9)}_{m})}\leq \frac{C\varepsilon_{m}}{\varepsilon^{b}}K_{m}^{d+1}e^{-\frac{K_{m}s_{m}}{2}}\leq\frac{C\varepsilon_{m+1}}{\varepsilon^{b}}. \end{equation} By (\ref{fc3-8}), (\ref{fc3-9}), (\ref{fc3-20}), (\ref{fc3-32}) and (\ref{fc3-36}), we have \begin{equation}\label{fc3-35}
\| h^{(m+1)}\|_{D(s_{m+1}, r_{m+1})}\leq \| h^{(m)}\|_{D(s_{m+1}, r_{m+1})}+\| \varepsilon_{m}\widehat{{P}_1^{(m)}}(0, t, \rho)\|_{D(s_{m+1}, r_{m+1})}\leq C. \end{equation}
By (\ref{fc3-8}), (\ref{fc3-9}), (\ref{fc3-26})-(\ref{fc3-29}), we have \begin{equation}\label{fc3-37}
\| \frac{1}{\varepsilon^{a}}\int_{0}^{1}(1-\tau)\frac{\partial^2 H_0}{\partial I^2}(\rho+\tau\frac{\partial S}{\partial \theta}) (\frac{\partial S}{\partial \theta})^{2}d\tau\|_{D(s^{(9)}_{m}, r^{(9)}_{m})}\leq \frac{C}{\varepsilon^{a}r_m^2}\cdot(\frac{\varepsilon^{a-b-\frac{B}{\ell}}\varepsilon_m}{\gamma s_{m}^{\tau_{1}+1}})^2\leq\frac{C\varepsilon_{m+1}}{\varepsilon^{b}}, \end{equation} \begin{equation}\label{fc3-38}
\| \frac{\varepsilon_m}{\varepsilon^b}\int_{0}^{1}\frac{\partial P^{(m)}}{\partial I}(\theta, t, \rho+\tau\frac{\partial S}{\partial\theta})\frac{\partial S}{\partial \theta}d\tau\|_{D(s^{(9)}_{m}, r^{(9)}_{m})}\leq \frac{C\varepsilon_m}{\varepsilon^{b}r_m}\cdot\frac{\varepsilon^{a-b-\frac{B}{\ell}}\varepsilon_m}{\gamma s_{m}^{\tau_{1}+1}}\leq\frac{C\varepsilon_{m+1}}{\varepsilon^{b}}, \end{equation} \begin{equation}\label{fc3-39}
\| \frac{1}{\varepsilon^b}\int_0^1\frac{\partial h}{\partial I}(t,\rho+\tau\frac{\partial S}{\partial\theta})\frac{\partial S}{\partial \theta}\, d\tau\|_{D(s^{(9)}_{m}, r^{(9)}_{m})}\leq \frac{C}{\varepsilon^{b}r_m}\cdot\frac{\varepsilon^{a-b-\frac{B}{\ell}}\varepsilon_m}{\gamma s_{m}^{\tau_{1}+1}}\leq\frac{C\varepsilon_{m+1}}{\varepsilon^{b}}. \end{equation}
By (\ref{fc3-29}) and (\ref{fc3-30}), we have \begin{equation}\label{fc3-40} \Phi_{m+1}(\phi,t,\rho)=(\theta,t,I),\ (\phi,t,\rho)\in D(s_m^{(3)},r_m^{(3)}). \end{equation} By (\ref{fc3-29}), (\ref{fc3-30}) and (\ref{fc3-40}), we have \begin{equation}\label{fc3-41}
\|I-\rho\|_{D(s^{(3)}_m,r^{(3)}_m)}\leq \frac{C\varepsilon^{a-b-\frac{B}{\ell}}\varepsilon_m}{\gamma s_{m}^{\tau_{1}+1}}, \ \ \|\theta-\phi\|_{D(s^{(3)}_m,r^{(3)}_m)}\leq \frac{C\varepsilon^{a-b-\frac{B}{\ell}}\varepsilon_m}{\gamma s_{m}^{\tau_{1}}r_m}. \end{equation} By (\ref{fc3-30}), (\ref{fc3-41}) and Cauchy's estimate, we have \begin{equation}\label{fc3-42}
\|\partial(\Phi_{m+1}-id)\|_{D(s^{(4)}_m,r_m^{(4)})}\leq \frac{C\varepsilon^{a-b-\frac{B}{\ell}}\varepsilon_m}{\gamma s_{m}^{\tau_{1}+1}r_m}. \end{equation} It follows that \begin{equation}\label{fc3-43}
\|\partial(\Phi_{m+1}-id)\|_{D(s_{m+1},r_{m+1})}\leq \frac{1}{2^{m+2}}. \end{equation} By (\ref{fc3-6}), (\ref{fcf3-1}), (\ref{fc3-31}) and (\ref{fc3-43}), we have
\begin{eqnarray}\label{fc3-44}
\nonumber &&\|\partial\Phi^{(m+1)}(\phi,t,\rho)\|_{D(s_{m+1},r_{m+1})}\\
\nonumber&=&\|(\partial\Phi_1\circ\Phi_2\circ\cdots\circ\Phi_{m+1})(\partial\Phi_2\circ\Phi_3\circ\cdots\circ\Phi_{m+1})\cdots(\partial\Phi_{m+1})\|_{D(s_{m+1},r_{m+1})}\\
\nonumber&\leq&\prod_{j=0}^{m}(1+\frac{1}{2^{j+2}})\\
&\leq&2. \end{eqnarray} It follows that \begin{equation}\label{fc3-45} \Phi^{(m+1)}(D(s_{\nu},r_{\nu}))\subset \mathbf{T}^{d+1}_{2s_{\nu}}\times \mathbf{R}^{d}_{2s_{\nu}},\ \nu=m+1,m+2,\cdots. \end{equation} In fact, suppose that $w=\Phi^{(m+1)}(z)$ with $z=(\phi,t,\rho)\in D(s_{\nu},r_{\nu})$. Since $\Phi^{(m+1)}$ is real for real arguments and $r_{\nu}<s_{\nu}$, we have
\begin{eqnarray}\label{fc3-46}
\nonumber &&|{\rm Im} w|=|{\rm Im} \Phi^{(m+1)}(z)|=|{\rm Im} \Phi^{(m+1)}(z)-{\rm Im} \Phi^{(m+1)}({\rm Re}z)|\\
\nonumber&\leq&| \Phi^{(m+1)}(z)- \Phi^{(m+1)}({\rm Re}z)|\\
\nonumber&\leq&\|\partial\Phi^{(m+1)}(\phi,t,\rho)\|_{D(s_{m+1},r_{m+1})}|{\rm Im} z|\\
&\leq&2|{\rm Im} z|\leq2s_{\nu}. \end{eqnarray} By (\ref{fc3-3}) and (\ref{fc3-45}), we have \begin{equation}\label{fc3-47}
\|\frac{P_{m+1}\circ\Phi^{(m+1)}(\phi,t,\rho)}{\varepsilon^{b}}\|_{D(s_{m+1},r_{m+1})}\leq\frac{C\varepsilon_{m+1}}{\varepsilon^b}, \end{equation} \begin{equation}\label{fc3-48}
\|P_{\nu}\circ\Phi^{(m+1)}(\phi,t,\rho)\|_{D(s_{\nu},r_{\nu})}\leq C\varepsilon_{\nu},\ \nu=m+2,m+3,\cdots. \end{equation} By (\ref{fc3-19}), (\ref{fc3-29}), (\ref{fc3-33}), (\ref{fc3-36}), (\ref{fc3-37})-(\ref{fc3-39}) and (\ref{fc3-47}), we have
\begin{equation}\label{fc3-49}
\|P^{(m+1)}(\phi,t,\rho)\|_{D(s_{m+1}, r_{m+1})}\leq C.
\end{equation}
The proof is finished by (\ref{fc3-31}), (\ref{fc3-34}), (\ref{fc3-35}), (\ref{fc3-43}), (\ref{fc3-48}) and (\ref{fc3-49}). \end{proof}
By Lemma \ref{lem3-1}, there is a symplectic transformation $\Phi^{(m_0)}=\Phi_{0}\circ\Phi_{1}\circ\cdots\circ\Phi_{m_0}$ with
$$\Phi^{(m_0)}: D(s_{m_0}, r_{m_0})\rightarrow D(s_{0}, r_{0})$$
such that system (\ref{fc3-5}) is changed by $\Phi^{(m_0)}$ into
\begin{equation}\label{fc3-50} H^{(m_0)}=\frac{H_0(I)}{\varepsilon^{a}}+\frac{h^{(m_0)}(t,I)}{\varepsilon^{b}}+\frac{\varepsilon^B P^{(m_0)}(\theta,t,I)}{\varepsilon^{b}}+ \sum_{\nu=m_0+1}^{\infty}\frac{P_{\nu}\circ\Phi^{(m_0)}(\theta,t,I)}{\varepsilon^{b}}
\end{equation}
where \begin{equation}\label{fc3-51}
\|h^{(m_0)}(t,I)\|_{D(s_{m_0}, r_{m_0})}\leq C,
\end{equation}
\begin{equation}\label{fc3-52}
\|P^{(m_0)}(\theta,t,I)\|_{D(s_{m_0}, r_{m_0})}\leq C,
\end{equation}
\begin{equation}\label{fc3-53}
\|P_{\nu}\circ\Phi^{(m_0)}(\theta,t,I)\|_{D(s_{\nu}, r_{\nu})}\leq C\varepsilon_{\nu},\ \nu=m_0+1,m_0+2,\cdots.
\end{equation} \section{A symplectic transformation}\label{sec4} Let $[h^{(m_0)}](I)=\widehat{h^{(m_0)}} (0,I)$ be the $0$-Fourier coefficient of $h^{(m_0)}(t,I)$ as a function of $t$. In order to eliminate the dependence of $h^{(m_0)}(t,I)$ on the time variable $t$, we introduce the following transformation
\begin{equation}\label{fcc4-1} \Psi:\ \rho=I,\ \phi=\theta+\frac{\partial \tilde S(t,I)}{\partial I}, \end{equation} where $\tilde S(t,I)=\frac{1}{\varepsilon^b}\int_0^t\left( [ h^{(m_0)}](I)-h^{(m_0)}(\xi,I) \right) d \xi.$ One verifies directly that $d\rho\wedge d\phi=d I\wedge d\theta$, so $\Psi$ is symplectic. Note that $\tilde S$ is not small, so $\Psi$ is not close to the identity. Let \begin{equation*}\label{fc4-1} \tilde{s}_0=\varepsilon^{b+\frac{(m_0+1)(2\tau_1+3) B}{\ell m_0}+4\mu_1+\frac{2B}{\ell}}, \ \tilde{r}_{0}=\varepsilon^{a+(\tau_2+1)b+\frac{(m_0+1)(2\tau_1+3)(\tau_2+1)B}{m_0\ell}+4\mu_1(\tau_2+1)+\mu_2+\frac{2B(\tau_2+1)}{\ell}}, \end{equation*} where $\mu_1=\frac{(a-b)^2\mu}{1000(a+b+1)(d+3)(5a-b+2ad)}$, $\mu_2=2\mu_1$. We introduce a domain
$$\mathcal{D}:=\left\{t=t_1+t_2i\in \mathbf T_{s_{m_0}}:\; |t_2|\le \tilde{s}_0 \right\}\times\left\{I=I_1+I_2i\in B(r_{m_0}):\; |I_2|\le\tilde{r}_{0}\right\},$$ where $t_1,t_2,I_1,I_2$ are real numbers. Since $h^{(m_0)}(t,I)$ is real for real arguments, for $(t,I)\in \mathcal{D}$ we have \begin{eqnarray}\label{fc4-2}
\nonumber&&\|{\rm Im}\frac{\partial \tilde S(t,I)}{\partial I}\|_{\mathcal{D}}\\
\nonumber &=&\|{\rm Im}\frac{\partial \tilde S}{\partial I}(t_1+t_2i,I_1+I_2i)-{\rm Im}\frac{\partial \tilde S}{\partial I}(t_1,I_1)\|_{\mathcal{D}}\\
\nonumber&\leq&\|\frac{\partial \tilde S}{\partial I}(t_1+t_2i,I_1+I_2i)-\frac{\partial \tilde S}{\partial I}(t_1,I_1)\|_{\mathcal{D}}\\
\nonumber&\leq&\|\frac{\partial^2 \tilde S(t,I)}{\partial I \partial t}\|_{\mathcal{D}}\|t_2i\|_{\mathcal{D}}+\|\frac{\partial^2 \tilde S(t,I)}{\partial^2 I}\|_{\mathcal{D}}\|I_2i\|_{\mathcal{D}}\\ \nonumber&\leq&\frac{C\tilde{s}_0}{\varepsilon^br_{m_0}s_{m_0}}+\frac{C\tilde{r}_0}{\varepsilon^br^2_{m_0}}\\ &\leq&\frac{1}{2}s_{m_0}. \end{eqnarray} By (\ref{fc3-50}), (\ref{fcc4-1}) and (\ref{fc4-2}), we have
\begin{equation}\label{fc4-3} \Psi(\mathbf T^{d}_{s_{m_0}/2}\times\mathcal D)\subset D(s_{m_0},r_{m_0}) \end{equation} and \begin{eqnarray}\label{fc4-4}
\nonumber \tilde{H}(\phi,t,\rho)&=&H^{(m_0)}\circ\Psi\nonumber\\ &=&\frac{H_0(\rho)}{\varepsilon^{a}}+\frac{[ h^{(m_0)}](\rho)}{\varepsilon^{b}}+\frac{\varepsilon^B \breve{P}^{(m_0)}(\phi,t,\rho)}{\varepsilon^{b}}+ \sum_{\nu=m_0+1}^{\infty}\frac{P_{\nu}\circ\Phi^{(m_0)}\circ\Psi(\phi,t,\rho)}{\varepsilon^{b}},
\end{eqnarray}
where $\breve{P}^{(m_0)}(\phi,t,\rho)=P^{(m_0)}(\phi-\frac{\partial}{\partial I}\tilde S(t,\rho),t,\rho)$ and $\|\breve{P}^{(m_0)}\|_{\mathbf T^{d}_{s_{m_0}/2}\times\mathcal D}\leq C$.
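The cancellation behind (\ref{fc4-4}) can be checked directly; the following one-line computation (a sketch, using only the definition of $\tilde S$) shows why only the time average $[h^{(m_0)}]$ survives. Since $\Psi$ is generated by $\langle\theta,\rho\rangle+\tilde S(t,\rho)$, the transformed Hamiltonian acquires the extra term $\frac{\partial \tilde S}{\partial t}$, and
\begin{equation*}
\frac{h^{(m_0)}(t,\rho)}{\varepsilon^{b}}+\frac{\partial \tilde S(t,\rho)}{\partial t}
=\frac{h^{(m_0)}(t,\rho)}{\varepsilon^{b}}+\frac{[h^{(m_0)}](\rho)-h^{(m_0)}(t,\rho)}{\varepsilon^{b}}
=\frac{[h^{(m_0)}](\rho)}{\varepsilon^{b}},
\end{equation*}
which is exactly the second term of (\ref{fc4-4}).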
\section{Iterative lemma}\label{sec5}
By (\ref{fc3-51}), we have
\begin{equation}\label{fc5-1}
\varepsilon^{a-b}\|\frac{\partial^2 [ h^{(m_0)}](\rho)}{\partial \rho^2} \|_{D(0,\frac{r_{m_0}}{2})}\leq\frac{C\varepsilon^{a-b}}{r^2_{m_0}}\ll1. \end{equation}
Then by (\ref{fcb1-3}), (\ref{fc3-51}) and (\ref{fc5-1}), solving the equation $\frac{\partial H_0(\rho)}{\partial \rho}+\varepsilon^{a-b}\frac{\partial [ h^{(m_0)}](\rho)}{\partial \rho} =\omega(I_0)$ by Newton iteration, we get that there exists $\tilde{I}_{0}\in \mathbf{R}^{d}\cap D(0,\frac{r_{m_0}}{2})$ with $|\tilde{I}_{0}-I_0|\leq\frac{C\varepsilon^{a-b}}{r_{m_0}}\ll r_{m_0}$ such that
\begin{equation}\label{fc5-2} \frac{\partial H_0}{\partial \rho}(\tilde{I}_{0})+\varepsilon^{a-b}\frac{\partial [ h^{(m_0)}]}{\partial \rho} (\tilde{I}_{0})=\omega(I_0), \end{equation} where $\omega(I_0)=\frac{\partial H_0}{\partial \rho}(I_{0})$. For any $c>0$ and any $y_0\in\mathbf{{R}}^{d}$, let
\begin{equation*}
B(y_0,c)=\{z\in\mathbf C^d: \, |z-y_0|\le c\}.
\end{equation*}
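For concreteness, the Newton iteration producing $\tilde{I}_{0}$ in (\ref{fc5-2}) can be sketched as follows ($F$ and the iterates $I^{(k)}$ are notation introduced only for this remark):
\begin{equation*}
F(I):=\frac{\partial H_0}{\partial \rho}(I)+\varepsilon^{a-b}\frac{\partial [ h^{(m_0)}]}{\partial \rho}(I)-\omega(I_0),\qquad
I^{(k+1)}=I^{(k)}-\left(\frac{\partial F}{\partial I}(I^{(k)})\right)^{-1}F(I^{(k)}),\quad I^{(0)}=I_0.
\end{equation*}
By (\ref{fc3-51}) and Cauchy's estimate, $|F(I_0)|\leq C\varepsilon^{a-b}/r_{m_0}$, while $\frac{\partial F}{\partial I}$ is invertible by (\ref{fcb1-3}) and (\ref{fc5-1}), so the iterates converge to $\tilde{I}_{0}$ with $|\tilde{I}_{0}-I_0|\leq C\varepsilon^{a-b}/r_{m_0}$.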
Define
\begin{equation*}
\tilde{D}(s,r(I))=\mathbf{T}^{d+1}_{s}\times B(I,r),\ \tilde{D}(s,0)=\mathbf{T}^{d+1}_{s},\ \tilde{D}(0,r(I))=B(I,r).
\end{equation*}
Let $\tilde{\varepsilon}_0=\varepsilon_{m_0}=\varepsilon^B$. Since $|\tilde{I}_{0}-I_0|\ll r_{m_0}$, by (\ref{fc4-3}) we have
\begin{equation}\label{fbc5-1} \Psi(\tilde{D}(\tilde{s}_0,\tilde{r}_0(\tilde{I}_0)))\subset D(s_{m_0},r_{m_0}). \end{equation} Then we can rewrite equation (\ref{fc4-4}) as follows:
\begin{equation}\label{fc5-3} \tilde{H}^{(0)}(\theta,t,I)=\frac{H^{(0)}_0(I)}{\varepsilon^{a}}+ \frac{\tilde{P}^{(0)}(\theta,t,I)}{\varepsilon^{b}}+ \sum_{\nu=m_0+1}^{\infty}\frac{P_{\nu}\circ\Phi^{(m_0)}\circ\Psi(\theta,t,I)}{\varepsilon^{b}},
\end{equation} where $(\theta,t,I)\in \tilde{D}(\tilde{s}_0,\tilde{r}_0(\tilde{I}_0))$, $H^{(0)}_0(I)=H_0(I)+\varepsilon^{a-b}[ h^{(m_0)}](I)$, $\tilde{P}^{(0)}=\varepsilon^B \breve{P}^{(m_0)}$ and
\begin{equation}\label{fc5-4} \frac{\partial H^{(0)}_0}{\partial I}(\tilde{I}_{0})=\omega(I_0), \end{equation}
\begin{equation}\label{fc5-5}
\|\tilde{P}^{(0)}\|_{\tilde{D}(\tilde{s}_0,\tilde{r}_0(\tilde{I}_0))}\leq C\tilde{\varepsilon}_0. \end{equation}
By (\ref{fcb1-3}) and (\ref{fc5-1}), we get that there exist constants $M_0>0$, $h_0>0$ such that \begin{equation}\label{fc5-6} \det\left(\frac{\partial^2 H^{(0)}_0(I)}{\partial I^2}\right),\ \det\left(\frac{\partial^2 H^{(0)}_0(I)}{\partial I^2}\right)^{-1}\leq M_0, \ \forall I\in \tilde{D}(0,\tilde{r}_0(\tilde{I}_0))
\end{equation}
and
\begin{equation}\label{fc5-7}
\|\frac{\partial^2 H^{(0)}_0(I)}{\partial I^2}\|_{\tilde{D}(0,\tilde{r}_0(\tilde{I}_0))}\leq h_0.
\end{equation}
Define sequences \begin{itemize} \item $ \tilde{\varepsilon}_0=\varepsilon_{m_0}=\varepsilon^B,\ \tilde{\varepsilon}_{j+1}=\tilde{\varepsilon}_{j}^{1+\mu_3}=\varepsilon_{m_0+1+j} \ \text{with} \ \mu_3=\frac{(a-b)\mu}{10B},\ j=0,1,\cdots;$ \item $ \tilde{s}_0=\varepsilon^{b+\frac{(m_0+1)(2\tau_1+3) B}{\ell m_0}+4\mu_1+\frac{2B}{\ell}} \ \ \ \text{with} \ \ \ \mu_1=\frac{(a-b)^2\mu}{1000(a+b+1)(d+3)(5a-b+2ad)},\ \ \tilde{s}_{j+1}= \tilde{s}_{j}^{1+\mu_3}, \ \ \tilde{s}_j^{(l)}=\tilde{s}_{j}-\frac{l}{10}(\tilde{s}_j-\tilde{s}_{j+1}),\ l=0,1,\cdots,10, \ j=0,1,2,\cdots;$ \item $\tilde{r}_{0}=\varepsilon^{a+(\tau_2+1)b+\frac{(m_0+1)(2\tau_1+3)(\tau_2+1)B}{m_0\ell}+4\mu_1(\tau_2+1)+\mu_2+\frac{2B(\tau_2+1)}{\ell}} \ \ \text{with} \ \ \mu_2=2\mu_1, \ \ \tilde{r}_{j+1}= \tilde{r}_{j}^{1+\mu_3},\ \tilde{r}_j^{(l)}=\tilde{r}_{j}-\frac{l}{10}(\tilde{r}_j-\tilde{r}_{j+1}),\ l=0,1,\cdots,10, \ j=0,1,2,\cdots;$ \item $\tilde{K}_j=\frac{2}{\tilde{s}_j}\log\frac{1}{\tilde{\varepsilon}_j},\ j=0,1,2,\cdots;$ \item $h_j=h_0(2-\frac{1}{2^j}),\ j=0,1,2,\cdots;$ \item $M_j=M_0(2-\frac{1}{2^j}),\ j=0,1,2,\cdots.$
\end{itemize}
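Iterating these recursions gives the closed forms (a direct check, recorded here because the super-exponential decay is used repeatedly below):
\begin{equation*}
\tilde{\varepsilon}_{j}=\tilde{\varepsilon}_{0}^{(1+\mu_3)^{j}}=\varepsilon^{B(1+\mu_3)^{j}},\qquad
\tilde{s}_{j}=\tilde{s}_{0}^{(1+\mu_3)^{j}},\qquad
\tilde{r}_{j}=\tilde{r}_{0}^{(1+\mu_3)^{j}},\qquad j=0,1,2,\cdots,
\end{equation*}
so $\tilde{\varepsilon}_j$, $\tilde{s}_j$ and $\tilde{r}_j$ all tend to zero super-exponentially as $j\rightarrow\infty$, while $\tilde{K}_j=\frac{2}{\tilde{s}_j}\log\frac{1}{\tilde{\varepsilon}_j}$ grows correspondingly.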
We claim that \begin{equation}\label{fc5-8}
\|P_{\nu}\circ\Phi^{(m_0)}\circ\Psi(\theta,t,I)\|_{\tilde{D}(\tilde{s}_{\nu-m_0},\tilde{r}_{\nu-m_0}(\tilde{I}_0))}\leq C\varepsilon_{\nu}= C\tilde{\varepsilon}_{\nu-m_0},\ \nu=m_0+1,m_0+2,\cdots.
\end{equation}
In fact, for $(t,I)=(t_1+t_2i,I_1+I_2i)\in \tilde{D}(\tilde{s}_{\nu-m_0},\tilde{r}_{\nu-m_0}(\tilde{I}_0))$, where $t_1,t_2,I_1,I_2$ are real numbers, we have \begin{eqnarray}\label{fc5-9}
\nonumber&&\|{\rm Im}\frac{\partial \tilde S(t,I)}{\partial I}\|_{\tilde{D}(\tilde{s}_{\nu-m_0},\tilde{r}_{\nu-m_0}(\tilde{I}_0))}\\
\nonumber &=&\|{\rm Im}\frac{\partial \tilde S}{\partial I}(t_1+t_2i,I_1+I_2i)-{\rm Im}\frac{\partial \tilde S}{\partial I}(t_1,I_1)\|_{\tilde{D}(\tilde{s}_{\nu-m_0},\tilde{r}_{\nu-m_0}(\tilde{I}_0))}\\
\nonumber&\leq&\|\frac{\partial \tilde S}{\partial I}(t_1+t_2i,I_1+I_2i)-\frac{\partial \tilde S}{\partial I}(t_1,I_1)\|_{\tilde{D}(\tilde{s}_{\nu-m_0},\tilde{r}_{\nu-m_0}(\tilde{I}_0))}\\
\nonumber&\leq&\|\frac{\partial^2 \tilde S(t,I)}{\partial I \partial t}\|_{\mathcal{D}}\|t_2i\|_{\tilde{D}(\tilde{s}_{\nu-m_0},\tilde{r}_{\nu-m_0}(\tilde{I}_0))}+\|\frac{\partial^2 \tilde S(t,I)}{\partial^2 I}\|_{\mathcal{D}}\|I_2i\|_{\tilde{D}(\tilde{s}_{\nu-m_0},\tilde{r}_{\nu-m_0}(\tilde{I}_0))}\\ \nonumber&\leq&\frac{C\tilde{s}_{\nu-m_0}}{\varepsilon^br_{m_0}s_{m_0}}+\frac{C\tilde{r}_{\nu-m_0}}{\varepsilon^br^2_{m_0}}\\ &\leq&\frac{1}{2}s_{\nu}. \end{eqnarray} It follows that
\begin{equation}\label{fc5-10} \Psi(\tilde{D}(\tilde{s}_{\nu-m_0},\tilde{r}_{\nu-m_0}(\tilde{I}_0)))\subset \tilde{D}(s_{\nu},\tilde{r}_{\nu-m_0}(\tilde{I}_0)). \end{equation} Suppose that $w=\Phi^{(m_0)}(z)$ with $z=(\theta,t,I)\in \tilde{D}(s_{\nu},\tilde{r}_{\nu-m_0}(\tilde{I}_0))\subset D(s_{m_0},r_{m_0})$. Since $\Phi^{(m_0)}$ is real for real arguments and $\tilde{r}_{\nu-m_0}<r_{\nu}<s_{\nu}$, by (\ref{fc3-44}) with $m=m_0-1$ we have
\begin{eqnarray}\label{fc5-11}
\nonumber &&|{\rm Im} w|=|{\rm Im} \Phi^{(m_0)}(z)|=|{\rm Im} \Phi^{(m_0)}(z)-{\rm Im} \Phi^{(m_0)}({\rm Re}z)|\\
\nonumber&\leq&| \Phi^{(m_0)}(z)- \Phi^{(m_0)}({\rm Re}z)|\\
\nonumber&\leq&\|\partial\Phi^{(m_0)}(\theta,t,I)\|_{D(s_{m_0},r_{m_0})}|{\rm Im} z|\\
&\leq&2|{\rm Im} z|\leq2s_{\nu}. \end{eqnarray} Then by (\ref{fc5-10}) and (\ref{fc5-11}), we have \begin{equation}\label{fc5-12} \Phi^{(m_0)}\circ\Psi(\tilde{D}(\tilde{s}_{\nu-m_0},\tilde{r}_{\nu-m_0}(\tilde{I}_0)))\subset \mathbf{T}^{d+1}_{2s_{\nu}}\times \mathbf{R}^{d}_{2s_{\nu}},\ \nu=m_0+1,m_0+2,\cdots. \end{equation} By (\ref{fc3-3}) and (\ref{fc5-12}), the proof of (\ref{fc5-8}) is completed. Clearly, by (\ref{fc5-4})-(\ref{fc5-8}), (\ref{fc5-3}) fulfills (\ref{fc5-15})-(\ref{fc5-19}) with $m=0$. Then we have the following lemma.
\begin{lem}\label{lem5-1}{\rm (Iterative Lemma)} Suppose that we have $m+1$ {\rm(}$m=0,1,2,\cdots${\rm)} symplectic transformations $\tilde{\Phi}_0=id$, $\tilde{\Phi}_1$, $\cdots$, $\tilde{\Phi}_m$ with \begin{equation}\label{fc5-13} \tilde{\Phi}_j:\tilde{D}(\tilde{s}_j,\tilde{r}_j(\tilde{I}_j))\rightarrow \tilde{D}(\tilde{s}_{j-1},\tilde{r}_{j-1}(\tilde{I}_{j-1})),\ j=1,2,\cdots,m
\end{equation}
and \begin{equation}\label{fcg5-1}
\|\partial(\tilde{\Phi}_{j}-id)\|_{\tilde{D}(\tilde{s}_j,\tilde{r}_j(\tilde{I}_j))}\leq \frac{1}{2^{j+1}},\ j=1,2,\cdots,m \end{equation}
where $\tilde{I}_j\in \mathbf{R^{d}}, \ j=0,1,2,\cdots,m$ such that system {\rm(\ref{fc5-3})} is changed by $\tilde{\Phi}^{(m)}=\tilde{\Phi}_0\circ\tilde{\Phi}_1\circ\cdots\circ\tilde{\Phi}_m$ into \begin{equation}\label{fc5-14} \tilde{H}^{(m)}=\tilde{H}^{(0)}\circ\tilde{\Phi}^{(m)}=\frac{H^{(m)}_0(I)}{\varepsilon^{a}}+ \frac{\tilde{P}^{(m)}(\theta,t,I)}{\varepsilon^{b}}+ \sum_{\nu=m_0+m+1}^{\infty}\frac{P_{\nu}\circ\Phi^{(m_0)}\circ\Psi\circ\tilde{\Phi}^{(m)}(\theta,t,I)}{\varepsilon^{b}},
\end{equation} where
\begin{equation}\label{fc5-15} \frac{\partial H^{(m)}_0}{\partial I}(\tilde{I}_{m})=\omega(I_0), \end{equation}
\begin{equation}\label{fc5-16}
\|\tilde{P}^{(m)}\|_{\tilde{D}(\tilde{s}_m,\tilde{r}_m(\tilde{I}_m))}\leq C\tilde{\varepsilon}_m, \end{equation} \begin{equation}\label{fc5-17} \det\left(\frac{\partial^2 H^{(m)}_0(I)}{\partial I^2}\right),\ \det\left(\frac{\partial^2 H^{(m)}_0(I)}{\partial I^2}\right)^{-1}\leq M_m, \ \forall I\in \tilde{D}(0,\tilde{r}_m(\tilde{I}_m)),
\end{equation}
\begin{equation}\label{fc5-18}
\|\frac{\partial^2 H^{(m)}_0(I)}{\partial I^2}\|_{\tilde{D}(0,\tilde{r}_m(\tilde{I}_m))}\leq h_m,
\end{equation}
\begin{equation}\label{fc5-19}
\|P_{\nu}\circ\Phi^{(m_0)}\circ\Psi\circ\tilde{\Phi}^{(m)}(\theta,t,I)\|_{\tilde{D}(\tilde{s}_{\nu-m_0},\tilde{r}_{\nu-m_0}(\tilde{I}_m))}\leq C\tilde{\varepsilon}_{\nu-m_0},\ \nu=m_0+m+1,m_0+m+2,\cdots.
\end{equation} Then there is a symplectic transformation $\tilde{\Phi}_{m+1}$ with \begin{equation}\label{fc5-20} \tilde{\Phi}_{m+1}:\tilde{D}(\tilde{s}_{m+1},\tilde{r}_{m+1}(\tilde{I}_{m+1}))\rightarrow \tilde{D}(\tilde{s}_m,\tilde{r}_m(\tilde{I}_m))
\end{equation}
and \begin{equation*}
\|\partial(\tilde{\Phi}_{m+1}-id)\|_{\tilde{D}(\tilde{s}_{m+1},\tilde{r}_{m+1}(\tilde{I}_{m+1}))}\leq \frac{1}{2^{m+2}} \end{equation*} where $\tilde{I}_{m+1}\in \mathbf{R^{d}}$ such that system {\rm(\ref{fc5-14})} is changed by $\tilde{\Phi}_{m+1}$ into {\rm($\tilde{\Phi}^{(m+1)}=\tilde{\Phi}_0\circ\tilde{\Phi}_1\circ\cdots\circ\tilde{\Phi}_{m+1}$)} \begin{eqnarray*}
\nonumber \tilde{H}^{(m+1)}&=&\tilde{H}^{(m)}\circ\tilde{\Phi}_{m+1}=\tilde{H}^{(0)}\circ\tilde{\Phi}^{(m+1)}\nonumber\\ \nonumber&=&\frac{H^{(m+1)}_0(I)}{\varepsilon^{a}}+ \frac{\tilde{P}^{(m+1)}(\theta,t,I)}{\varepsilon^{b}}+ \sum_{\nu=m_0+m+2}^{\infty}\frac{P_{\nu}\circ\Phi^{(m_0)}\circ\Psi\circ\tilde{\Phi}^{(m+1)}(\theta,t,I)}{\varepsilon^{b}},
\end{eqnarray*}
where $\tilde{H}^{(m+1)}$ satisfies {\rm(\ref{fc5-15})}-{\rm(\ref{fc5-19})} by replacing $m$ by $m+1$.
\end{lem}
\begin{proof} Assume that the change $\tilde{\Phi}_{m+1}$ is implicitly defined by \begin{equation}\label{fc5-22} \tilde{\Phi}_{m+1}: \begin{cases}
I=\rho+\frac{\partial S}{\partial \theta}, \\
\phi=\theta+\frac{\partial S}{\partial \rho},\\ t=t,
\end{cases} \end{equation} where $S=S(\theta, t, \rho)$ is the generating function, which will be proved to be analytic in a smaller domain
$\tilde{D}(\tilde{s}_{m+1}, \tilde{r}_{m+1}(\tilde{I}_{m+1})).$ By a simple computation, we have $$d I\wedge d\theta=d\rho \wedge d\theta+\sum_{i,j=1}^d\frac{\partial^{2}S}{\partial\rho_{i}\partial\theta_{j}}d\rho_{i}\wedge d\theta_{j}=d\rho \wedge d\phi.$$ Thus, the coordinate change $\tilde{\Phi}_{m+1}$ is symplectic if it exists. Moreover, we obtain the transformed Hamiltonian \begin{eqnarray}\label{fc5-23}
\nonumber \tilde{H}^{(m+1)}&=&\tilde{H}^{(m)}\circ\tilde{\Phi}_{m+1}\nonumber\\ \nonumber &=&\frac{H^{(m)}_0(\rho+\frac{\partial S}{\partial \theta})}{\varepsilon^{a}}+\frac{ \tilde{P}^{(m)}(\theta,t,\rho+\frac{\partial S}{\partial \theta})}{\varepsilon^{b}}+\frac{P_{m_0+m+1}\circ\Phi^{(m_0)}\circ\Psi\circ\tilde{\Phi}^{(m+1)}(\phi,t,\rho)}{\varepsilon^{b}}\\ &&+\frac{\partial S}{\partial t}+\sum_{\nu=m_0+m+2}^{\infty}\frac{P_{\nu}\circ\Phi^{(m_0)}\circ\Psi\circ\tilde{\Phi}^{(m+1)}(\phi,t,\rho)}{\varepsilon^{b}},
\end{eqnarray}
where $\theta=\theta(\phi, t, \rho)$ is implicitly defined by (\ref{fc5-22}). By Taylor's formula, we have
\begin{eqnarray}\label{fc5-24}
\nonumber \tilde{H}^{(m+1)} &=&\frac{H^{(m)}_0(\rho)}{\varepsilon^{a}}+\langle\frac{\omega^{(m)}(\rho)}{\varepsilon^a}, \frac{\partial S}{\partial \theta}\rangle+\frac{\partial S}{\partial t}+\frac{ \tilde{P}^{(m)}(\theta,t,\rho)}{\varepsilon^{b}}+R_1\\
\nonumber &&+\frac{P_{m_0+m+1}\circ\Phi^{(m_0)}\circ\Psi\circ\tilde{\Phi}^{(m+1)}(\phi,t,\rho)}{\varepsilon^{b}}\\ &&+\sum_{\nu=m_0+m+2}^{\infty}\frac{P_{\nu}\circ\Phi^{(m_0)}\circ\Psi\circ\tilde{\Phi}^{(m+1)}(\phi,t,\rho)}{\varepsilon^{b}},
\end{eqnarray}
where $\omega^{(m)}(\rho)=\frac{\partial H^{(m)}_0}{\partial I}(\rho)$ and \begin{equation}\label{fc5-25} R_1=\frac{1}{\varepsilon^{a}}\int_{0}^{1}(1-\tau)\frac{\partial^2 H^{(m)}_0}{\partial I^2}(\rho+\tau\frac{\partial S}{\partial \theta}) (\frac{\partial S}{\partial \theta})^{2}d\tau+\frac{1}{\varepsilon^b}\int_{0}^{1}\frac{\partial \tilde{P}^{(m)}}{\partial I}(\theta, t, \rho+\tau\frac{\partial S}{\partial\theta})\frac{\partial S}{\partial \theta}d\tau. \end{equation} Expanding $\tilde{P}^{(m)}(\theta,t,\rho)$ into a Fourier series, \begin{equation}\label{fc5-26} \tilde{P}^{(m)}(\theta,t,\rho)=\sum_{(k,l)\in\mathbf{Z}^d\times \mathbf{Z}}\widehat{\tilde{P}^{(m)}}(k,l,\rho)e^{i(\langle k,\theta\rangle+lt)}:=\tilde{P}_{1}^{(m)}(\theta,t,\rho)+\tilde{P}_{2}^{(m)}(\theta,t,\rho), \end{equation}
where $\tilde{P}_{1}^{(m)}=\sum_{|k|+|l|\leq \tilde{K}_m}\widehat{\tilde{P}^{(m)}}(k,l,\rho)e^{i(\langle k,\theta\rangle+lt)}$, $\tilde{P}_{2}^{(m)}=\sum_{|k|+|l|> \tilde{K}_m}\widehat{\tilde{P}^{(m)}}(k,l,\rho)e^{i(\langle k,\theta\rangle+lt)}$. Then, we derive the homological equation: \begin{equation}\label{fc5-27} \frac{\partial S}{\partial t}+\langle \frac{\omega^{(m)}(\rho)}{\varepsilon^{a}}, \frac{\partial S}{\partial\theta}\rangle +\frac{ \tilde{P}_1^{(m)}(\theta,t,\rho)}{\varepsilon^{b}}-\frac{\widehat{\tilde{P}^{(m)}}(0,0,\rho)}{\varepsilon^{b}}=0, \end{equation} where $\widehat{\tilde{P}^{(m)}}(0, 0, \rho)$ is the $0$-Fourier coefficient of $\tilde{P}^{(m)}(\theta,t,\rho)$ as a function of $(\theta,t)$. Let \begin{equation}\label{fc5-28}
S(\theta,t,\rho)=\sum_{|k|+|l|\leq \tilde{K}_m,(k,l)\neq (0,0)}\widehat{S}(k, l, \rho)e^{i(\langle k, \theta\rangle+lt)}. \end{equation} By passing to Fourier coefficients, we have \begin{equation}\label{fc5-29}
\widehat{S}(k, l, \rho)=\frac{i}{\varepsilon^b}\cdot\frac{\widehat{\tilde{P}^{(m)}}(k, l, \rho)}{\varepsilon^{-a}\langle k, \omega^{(m)}(\rho)\rangle +l},\;|k|+|l|\leq \tilde{K}_m,\; (k,l)\in \mathbf{Z}^{d}\times \mathbf{Z}\setminus\{(0,0)\}. \end{equation} Then we can solve homological equation (\ref{fc5-27}) by setting \begin{equation}\label{fc5-30}
S(\theta,t,\rho)=\sum_{|k|+|l|\leq \tilde{K}_m,(k,l)\neq (0,0)} \frac{i}{\varepsilon^b}\cdot\frac{\widehat{\tilde{P}^{(m)}}(k, l, \rho)e^{i(\langle k, \theta\rangle+lt)}}{\varepsilon^{-a}\langle k, \omega^{(m)}(\rho)\rangle +l}. \end{equation}
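As a term-by-term check of (\ref{fc5-29})-(\ref{fc5-30}): substituting the ansatz (\ref{fc5-28}) into (\ref{fc5-27}) and matching the mode $e^{i(\langle k,\theta\rangle+lt)}$ gives
\begin{equation*}
i\left(\varepsilon^{-a}\langle k,\omega^{(m)}(\rho)\rangle+l\right)\widehat{S}(k,l,\rho)
+\frac{\widehat{\tilde{P}^{(m)}}(k,l,\rho)}{\varepsilon^{b}}=0,
\end{equation*}
and dividing by $i\left(\varepsilon^{-a}\langle k,\omega^{(m)}(\rho)\rangle+l\right)$, which is nonzero by (\ref{fc5-31}), recovers (\ref{fc5-29}).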
By (\ref{fc1-20}), (\ref{fc5-15}) and (\ref{fc5-17}), for $\forall \rho\in \tilde{D}(\tilde{s}_m,\tilde{r}_m(\tilde{I}_m))$, $|k|+|l|\leq \tilde{K}_{m}$, $(k,l)\neq (0,0)$, we have \begin{eqnarray}\label{fc5-31}
|\varepsilon^{-a}\langle k, \omega^{(m)}(\rho)\rangle+l|
&\geq &|\varepsilon^{-a}\langle k, \omega^{(m)}(\tilde{I}_m)\rangle+l|-|\varepsilon^{-a}\langle k, \omega^{(m)}(\tilde{I}_m)-\omega^{(m)}(\rho)\rangle|\nonumber\\
&\geq & \frac{\gamma}{|k|^{\tau_2}}-C\varepsilon^{-a}|k|\tilde{r}_{m}\nonumber\\
&\geq &\frac{\gamma}{2|k|^{\tau_2}}. \end{eqnarray} Then, by (\ref{fc5-16}), (\ref{fc5-29})-(\ref{fc5-31}), using R\"ussmann's subtle arguments \cite{a27, a28} for optimal estimates of small-divisor series (see also Lemma 5.1 in \cite{a29}), we get \begin{equation}\label{fc5-32}
\|S(\theta,t,\rho)\|_{\tilde{D}(\tilde{s}^{(1)}_m, \tilde{r}_m(\tilde{I}_m))}
\leq\frac{C\|\tilde{P}^{(m)}\|_{\tilde{D}(\tilde{s}_m,\tilde{r}_m(\tilde{I}_m))}}{\gamma\varepsilon^b\tilde{s}_{m}^{\tau_{2}}}\leq\frac{C\tilde{\varepsilon}_m}{\gamma\varepsilon^b\tilde{s}_{m}^{\tau_{2}}}. \end{equation} Then by Cauchy's estimate, we have \begin{equation}\label{fc5-33}
\|\frac{\partial S}{\partial \theta}\|_{\tilde{D}(\tilde{s}^{(2)}_m, \tilde{r}_m(\tilde{I}_m))}\leq \frac{C\tilde{\varepsilon}_m}{\gamma\varepsilon^b\tilde{s}_{m}^{\tau_{2}+1}}\ll \tilde{r}_m-\tilde{r}_{m+1},\ \|\frac{\partial S}{\partial \rho}\|_{\tilde{D}(\tilde{s}^{(1)}_m, \tilde{r}^{(1)}_m(\tilde{I}_m))}\leq \frac{C\tilde{\varepsilon}_m}{\gamma\varepsilon^b\tilde{s}_{m}^{\tau_{2}}\tilde{r}_m}\ll \tilde{s}_m-\tilde{s}_{m+1}. \end{equation} By (\ref{fc5-22}), (\ref{fc5-33}) and the implicit function theorem, we get that there are analytic functions $u=u(\phi, t, \rho), v=v(\phi, t, \rho)$ defined on the domain $\tilde{D}(\tilde{s}^{(3)}_m, \tilde{r}^{(3)}_m(\tilde{I}_m))$ with \begin{equation}\label{fc5-34} \frac{\partial S(\theta,t,\rho)}{\partial \theta}=u(\phi, t, \rho),\ \frac{\partial S(\theta,t,\rho)}{\partial \rho}=-v(\phi, t, \rho) \end{equation} and \begin{equation}\label{fc5-35}
\|u\|_{\tilde{D}(\tilde{s}^{(3)}_m, \tilde{r}^{(3)}_m(\tilde{I}_m))}\leq \frac{C\tilde{\varepsilon}_m}{\gamma\varepsilon^b\tilde{s}_{m}^{\tau_{2}+1}}\ll \tilde{r}_m-\tilde{r}_{m+1},\ \|v\|_{\tilde{D}(\tilde{s}^{(3)}_m, \tilde{r}^{(3)}_m(\tilde{I}_m))}\leq \frac{C\tilde{\varepsilon}_m}{\gamma\varepsilon^b\tilde{s}_{m}^{\tau_{2}}\tilde{r}_m}\ll \tilde{s}_m-\tilde{s}_{m+1} \end{equation} such that \begin{equation}\label{fc5-36} \tilde{\Phi}_{m+1}: \begin{cases}
I=\rho+u(\phi, t, \rho), \\
\theta=\phi+v(\phi, t, \rho),\\ t=t.
\end{cases} \end{equation} Then, we have \begin{equation}\label{fc5-37} \tilde{\Phi}_{m+1}(\tilde{D}(\tilde{s}^{(3)}_m, \tilde{r}^{(3)}_m(\tilde{I}_m)))\subseteq \tilde{D}(\tilde{s}_{m},\tilde{r}_{m}(\tilde{I}_m)). \end{equation}
Let \begin{equation}\label{fc5-38} H_0^{(m+1)}(\rho)=H_0^{(m)}(\rho)+\varepsilon^{a-b}\widehat{\tilde{P}^{(m)}}(0, 0, \rho). \end{equation} By Cauchy's estimate and (\ref{fc5-16}), we have
\begin{equation}\label{fc5-39}
\|\frac{\partial^p \widehat{\tilde{P}^{(m)}}(0, 0, \rho)}{\partial \rho^p}\|_{\tilde{D}(0,\tilde{r}^{(p)}_m(\tilde{I}_m))}\leq \frac{C \tilde{\varepsilon}_m}{\tilde{r}_m^p},\ \ p=1,2.
\end{equation}
By (\ref{fc5-17}), (\ref{fc5-18}), (\ref{fc5-38}) and (\ref{fc5-39}), we have
\begin{equation}\label{fc5-40} \det\left(\frac{\partial^2 H^{(m+1)}_0(\rho)}{\partial \rho^2}\right),\ \det\left(\frac{\partial^2 H^{(m+1)}_0(\rho)}{\partial \rho^2}\right)^{-1}\leq M_{m+1}, \ \forall \rho\in \tilde{D}(0,\tilde{r}^{(2)}_m(\tilde{I}_m))
\end{equation}
and
\begin{equation}\label{fc5-41}
\|\frac{\partial^2 H^{(m+1)}_0(\rho)}{\partial \rho^2}\|_{\tilde{D}(0,\tilde{r}^{(2)}_m(\tilde{I}_m))}\leq h_{m+1}.
\end{equation}
By (\ref{fc5-38}), we have
\begin{equation}\label{fc5-42} \frac{\partial H_0^{(m+1)}(\rho)}{\partial \rho}=\frac{\partial H_0^{(m)}(\rho)}{\partial \rho}+\varepsilon^{a-b}\frac{\partial \widehat{\tilde{P}^{(m)}}(0, 0, \rho)}{\partial \rho}. \end{equation} Note that $H_0^{(m+1)}(\rho)$, $H_0^{(m)}(\rho)$ and $\widehat{\tilde{P}^{(m)}}(0, 0, \rho)$ are real analytic on $\tilde{D}(0,\tilde{r}^{(2)}_m(\tilde{I}_m))$ and that $\tilde{I}_{m}\in\mathbf{R}^d$. Then by (\ref{fc5-15}), (\ref{fc5-38})-(\ref{fc5-40}) and (\ref{fc5-42}), it is not difficult to see (see Appendix ``A The Classical Implicit Function Theorem'' in \cite{a31}) that there exists a unique point $\tilde{I}_{m+1}\in\mathbf{R}^d$ such that \begin{equation}\label{fc5-43} \frac{\partial H^{(m+1)}_0}{\partial \rho}(\tilde{I}_{m+1})=\omega(I_0),
\end{equation}
\begin{equation}\label{fc5-44}
|\tilde{I}_{m+1}-\tilde{I}_{m}|\leq \frac{C\varepsilon^{a-b}\tilde{\varepsilon}_m}{\tilde{r}_m}\ll \tilde{r}_{m}.
\end{equation}
By (\ref{fc5-37}) and (\ref{fc5-44}), we have
\begin{eqnarray}\label{fc5-45} \tilde{\Phi}_{m+1}(\tilde{D}(\tilde{s}_{m+1}, \tilde{r}_{m+1}(\tilde{I}_{m+1})))&\subseteq&\tilde{\Phi}_{m+1}(\tilde{D}(\tilde{s}^{(9)}_m, \tilde{r}^{(9)}_m(\tilde{I}_{m+1})))\nonumber\\ &\subseteq&\tilde{\Phi}_{m+1}(\tilde{D}(\tilde{s}^{(3)}_m, \tilde{r}^{(3)}_m(\tilde{I}_m)))\subseteq \tilde{D}(\tilde{s}_{m},\tilde{r}_{m}(\tilde{I}_m)). \end{eqnarray}
Let \begin{equation}\label{fc5-46} \frac{\tilde{P}^{(m+1)}(\phi,t,\rho)}{\varepsilon^{b}}=\frac{\tilde{P}_{2}^{(m)}(\theta,t,\rho)}{\varepsilon^{b}}+\frac{P_{m_0+m+1}\circ\Phi^{(m_0)}\circ\Psi\circ\tilde{\Phi}^{(m+1)}(\phi,t,\rho)}{\varepsilon^{b}}+R_1. \end{equation} Then by (\ref{fc5-24}), (\ref{fc5-26}), (\ref{fc5-27}), (\ref{fc5-38}) and (\ref{fc5-46}), we have \begin{equation}\label{fc5-47} \tilde{H}^{(m+1)}(\phi,t,\rho)=\frac{H^{(m+1)}_0(\rho)}{\varepsilon^{a}}+ \frac{\tilde{P}^{(m+1)}(\phi,t,\rho)}{\varepsilon^{b}}+ \sum_{\nu=m_0+m+2}^{\infty}\frac{P_{\nu}\circ\Phi^{(m_0)}\circ\Psi\circ\tilde{\Phi}^{(m+1)}(\phi,t,\rho)}{\varepsilon^{b}}.
\end{equation}
By (\ref{fc5-16}), (\ref{fc5-26}) and (\ref{fc5-44}), it is not difficult to show (see Lemma A.2 in \cite{a30}) that
\begin{equation}\label{fc5-48}
\| \frac{\tilde{P}_2^{(m)}(\theta,t,\rho)}{\varepsilon^{b}}\|_{\tilde{D}(\tilde{s}^{(9)}_m, \tilde{r}^{(9)}_m(\tilde{I}_{m+1}))}\leq \frac{C\tilde{\varepsilon}_{m}}{\varepsilon^{b}}\tilde{K}_{m}^{d+1}e^{-\frac{\tilde{K}_{m}\tilde{s}_{m}}{2}}\leq\frac{C\tilde{\varepsilon}_{m+1}}{\varepsilon^{b}}. \end{equation} By (\ref{fc5-16}), (\ref{fc5-18}), (\ref{fc5-32})-(\ref{fc5-35}) and (\ref{fc5-44}), we have \begin{equation}\label{fc5-49}
\| \frac{1}{\varepsilon^{a}}\int_{0}^{1}(1-\tau)\frac{\partial^2 H^{(m)}_0}{\partial I^2}(\rho+\tau\frac{\partial S}{\partial \theta}) (\frac{\partial S}{\partial \theta})^{2}d\tau\|_{\tilde{D}(\tilde{s}^{(9)}_m, \tilde{r}^{(9)}_m(\tilde{I}_{m+1}))}\leq \frac{C}{\varepsilon^{a}}\cdot(\frac{\tilde{\varepsilon}_m}{\gamma\varepsilon^b\tilde{s}_{m}^{\tau_{2}+1}})^2\leq\frac{C\tilde{\varepsilon}_{m+1}}{\varepsilon^{b}}, \end{equation} \begin{equation}\label{fc5-50}
\| \frac{1}{\varepsilon^b}\int_{0}^{1}\frac{\partial \tilde{P}^{(m)}}{\partial I}(\theta, t, \rho+\tau\frac{\partial S}{\partial\theta})\frac{\partial S}{\partial \theta}d\tau\|_{\tilde{D}(\tilde{s}^{(9)}_m, \tilde{r}^{(9)}_m(\tilde{I}_{m+1}))}\leq \frac{C\tilde{\varepsilon}_m}{\varepsilon^{b}\tilde{r}_m}\cdot\frac{\tilde{\varepsilon}_m}{\varepsilon^b\gamma \tilde{s}_{m}^{\tau_{2}+1}}\leq\frac{C\tilde{\varepsilon}_{m+1}}{\varepsilon^{b}}. \end{equation}
By (\ref{fc5-35}) and (\ref{fc5-36}), we have \begin{equation}\label{fc5-51} \tilde{\Phi}_{m+1}(\phi,t,\rho)=(\theta,t,I),\ (\phi,t,\rho)\in \tilde{D}(\tilde{s}_m^{(3)},\tilde{r}_m^{(3)}(\tilde{I}_m)). \end{equation} By (\ref{fc5-35}), (\ref{fc5-36}) and (\ref{fc5-51}), we have \begin{equation}\label{fc5-52}
\|I-\rho\|_{\tilde{D}(\tilde{s}_m^{(3)},\tilde{r}_m^{(3)}(\tilde{I}_m))}\leq \frac{C\tilde{\varepsilon}_m}{\gamma\varepsilon^b\tilde{s}_{m}^{\tau_{2}+1}}, \ \|\theta-\phi\|_{\tilde{D}(\tilde{s}_m^{(3)},\tilde{r}_m^{(3)}(\tilde{I}_m))}\leq \frac{C\tilde{\varepsilon}_m}{\gamma\varepsilon^b\tilde{s}_{m}^{\tau_{2}}\tilde{r}_m}. \end{equation} By (\ref{fc5-36}), (\ref{fc5-52}) and Cauchy's estimate, we have \begin{equation}\label{fc5-53}
\|\partial(\tilde{\Phi}_{m+1}-id)\|_{\tilde{D}(\tilde{s}_m^{(4)},\tilde{r}_m^{(4)}(\tilde{I}_m))}\leq \frac{C\tilde{\varepsilon}_m}{\gamma\varepsilon^b\tilde{s}_{m}^{\tau_{2}+1}\tilde{r}_m}. \end{equation} By (\ref{fc5-44}) and (\ref{fc5-53}), we have \begin{equation}\label{fc5-54}
\|\partial(\tilde{\Phi}_{m+1}-id)\|_{\tilde{D}(\tilde{s}_{m+1},\tilde{r}_{m+1}(\tilde{I}_{m+1}))}\leq \frac{1}{2^{m+2}}. \end{equation} By (\ref{fc5-13}), (\ref{fcg5-1}), (\ref{fc5-45}) and (\ref{fc5-54}), we have
\begin{eqnarray}\label{fc5-55}
\nonumber &&\|\partial\tilde{\Phi}^{(m+1)}(\phi,t,\rho)\|_{\tilde{D}(\tilde{s}_{m+1},\tilde{r}_{m+1}(\tilde{I}_{m+1}))}\\
\nonumber&=&\|(\partial\tilde{\Phi}_1\circ\tilde{\Phi}_2\circ\cdots\circ\tilde{\Phi}_{m+1})(\partial\tilde{\Phi}_2\circ\tilde{\Phi}_3\circ\cdots\circ\tilde{\Phi}_{m+1})\cdots(\partial\tilde{\Phi}_{m+1})\|_{\tilde{D}(\tilde{s}_{m+1},\tilde{r}_{m+1}(\tilde{I}_{m+1}))}\\
\nonumber&\leq&\prod_{j=0}^{m}(1+\frac{1}{2^{j+2}})\\
&\leq&2. \end{eqnarray}
We claim that \begin{equation}\label{fc5-56}
\|P_{\nu}\circ\Phi^{(m_0)}\circ\Psi\circ\tilde{\Phi}^{(m+1)}(\phi,t,\rho)\|_{\tilde{D}(\tilde{s}_{\nu-m_0},\tilde{r}_{\nu-m_0}(\tilde{I}_{m+1}))}\leq C\tilde{\varepsilon}_{\nu-m_0},\ \nu=m_0+m+1,m_0+m+2,\cdots.
\end{equation}
In fact, suppose that $w=\tilde{\Phi}^{(m+1)}(z)$ with $z=(\phi,t,\rho)\in \tilde{D}(\tilde{s}_{\nu-m_0},\tilde{r}_{\nu-m_0}(\tilde{I}_{m+1}))$. Since $\tilde{\Phi}^{(m+1)}$ is real for real arguments and $\tilde{r}_{\nu-m_0}<\tilde{s}_{\nu-m_0}$, we have
\begin{eqnarray}\label{fc5-57}
\nonumber &&|{\rm Im} w|=|{\rm Im} \tilde{\Phi}^{(m+1)}(z)|=|{\rm Im} \tilde{\Phi}^{(m+1)}(z)-{\rm Im} \tilde{\Phi}^{(m+1)}({\rm Re}z)|\\
\nonumber&\leq&| \tilde{\Phi}^{(m+1)}(z)- \tilde{\Phi}^{(m+1)}({\rm Re}z)|\\
\nonumber&\leq&\|\partial\tilde{\Phi}^{(m+1)}(\phi,t,\rho)\|_{\tilde{D}(\tilde{s}_{m+1},\tilde{r}_{m+1}(\tilde{I}_{m+1}))}|{\rm Im} z|\\
&\leq&2|{\rm Im} z|\leq2\tilde{s}_{\nu-m_0}. \end{eqnarray} By (\ref{fc5-13}), (\ref{fc5-45}) and (\ref{fc5-57}), we have \begin{equation}\label{fc5-58} \tilde{\Phi}^{(m+1)}(\tilde{D}(\tilde{s}_{\nu-m_0},\tilde{r}_{\nu-m_0}(\tilde{I}_{m+1})))\subseteq D_{\nu}:=(\mathbf{T}^{d+1}_{2\tilde{s}_{\nu-m_0}}\times \mathbf{R}^{d}_{2\tilde{s}_{\nu-m_0}})\bigcap \tilde{D}(\tilde{s}_{0},\tilde{r}_{0}(\tilde{I}_{0})). \end{equation} For $(t,I)=(t_1+t_2i,I_1+I_2i)\in D_{\nu}$, where $t_1,t_2,I_1,I_2$ are real numbers, we have \begin{eqnarray}\label{fc5-59}
\nonumber&&\|{\rm Im}\frac{\partial \tilde S(t,I)}{\partial I}\|_{D_{\nu}}\\
\nonumber &=&\|{\rm Im}\frac{\partial \tilde S}{\partial I}(t_1+t_2i,I_1+I_2i)-{\rm Im}\frac{\partial \tilde S}{\partial I}(t_1,I_1)\|_{D_{\nu}}\\
\nonumber&\leq&\|\frac{\partial \tilde S}{\partial I}(t_1+t_2i,I_1+I_2i)-\frac{\partial \tilde S}{\partial I}(t_1,I_1)\|_{D_{\nu}}\\
\nonumber&\leq&\|\frac{\partial^2 \tilde S(t,I)}{\partial I \partial t}\|_{\mathcal{D}}\|t_2i\|_{D_{\nu}}+\|\frac{\partial^2 \tilde S(t,I)}{\partial^2 I}\|_{\mathcal{D}}\|I_2i\|_{D_{\nu}}\\ \nonumber&\leq&\frac{C\tilde{s}_{\nu-m_0}}{\varepsilon^br_{m_0}s_{m_0}}+\frac{C\tilde{s}_{\nu-m_0}}{\varepsilon^br^2_{m_0}}\\ &\leq&\frac{1}{2}s_{\nu}. \end{eqnarray} By (\ref{fbc5-1}), (\ref{fc5-58}) and (\ref{fc5-59}), we have \begin{equation}\label{fc5-60} \Psi\circ\tilde{\Phi}^{(m+1)}(\tilde{D}(\tilde{s}_{\nu-m_0},\tilde{r}_{\nu-m_0}(\tilde{I}_{m+1})))\subseteq \bar{D}_{\nu}:=(\mathbf{T}^{d+1}_{s_{\nu}}\times \mathbf{R}^{d}_{2\tilde{s}_{\nu-m_0}})\bigcap D(s_{m_0},r_{m_0}). \end{equation} Suppose that $w=\Phi^{(m_0)}(z)$ with $z=(\theta,t,I)\in \bar{D}_{\nu}$. Since $\Phi^{(m_0)}$ is real for real arguments and $2\tilde{s}_{\nu-m_0}<r_{\nu}<s_{\nu}$, by (\ref{fc3-44}) with $m=m_0-1$ we have
\begin{eqnarray}\label{fc5-62}
\nonumber &&|{\rm Im} w|=|{\rm Im} \Phi^{(m_0)}(z)|=|{\rm Im} \Phi^{(m_0)}(z)-{\rm Im} \Phi^{(m_0)}({\rm Re}z)|\\
\nonumber&\leq&| \Phi^{(m_0)}(z)- \Phi^{(m_0)}({\rm Re}z)|\\
\nonumber&\leq&\|\partial\Phi^{(m_0)}(\theta,t,I)\|_{D(s_{m_0},r_{m_0})}|{\rm Im} z|\\
&\leq&2|{\rm Im} z|\leq2s_{\nu}. \end{eqnarray} Then by (\ref{fc5-60}) and (\ref{fc5-62}), we have \begin{equation}\label{fc5-63} \Phi^{(m_0)}\circ\Psi\circ\tilde{\Phi}^{(m+1)}(\tilde{D}(\tilde{s}_{\nu-m_0},\tilde{r}_{\nu-m_0}(\tilde{I}_{m+1})))\subset \mathbf{T}^{d+1}_{2s_{\nu}}\times \mathbf{R}^{d}_{2s_{\nu}},\ \nu=m_0+m+1,m_0+m+2,\cdots. \end{equation} By (\ref{fc3-3}) and (\ref{fc5-63}), the proof of (\ref{fc5-56}) is completed. By (\ref{fc5-25}), (\ref{fc5-44}), (\ref{fc5-46}), (\ref{fc5-48})-(\ref{fc5-50}) and (\ref{fc5-56}), we have
\begin{equation}\label{fc5-64}
\|\tilde{P}^{(m+1)}\|_{\tilde{D}(\tilde{s}_{m+1},\tilde{r}_{m+1}(\tilde{I}_{m+1}))}\leq C\tilde{\varepsilon}_{m+1}. \end{equation} Then the proof is completed by (\ref{fc5-40}), (\ref{fc5-41}), (\ref{fc5-43}), (\ref{fc5-45}), (\ref{fc5-47}), (\ref{fc5-54}), (\ref{fc5-56}) and (\ref{fc5-64}).
\end{proof}
\section{Proof of Theorems \ref{thm1-1}-\ref{thm1-2}}\label{sec6}
In Lemma \ref{lem5-1}, letting $m\rightarrow\infty$ we get the following lemma:
\begin{lem}\label{lem6-1} There exists a symplectic transformation $\tilde{\Phi}^{(\infty)}:=\lim_{m\rightarrow\infty}\tilde{\Phi}_0\circ\tilde{\Phi}_1\circ\cdots\circ\tilde{\Phi}_{m}$ with \begin{equation}\label{fc6-1} \tilde{\Phi}^{(\infty)}:\mathbf{T}^{d+1}\times \{\tilde{I}_{\infty}\}\rightarrow \tilde{D}(\tilde{s}_0,\tilde{r}_0(\tilde{I}_0)),
\end{equation} where $\tilde{I}_{\infty}\in \mathbf{R^{d}}$ such that system {\rm(\ref{fc5-3})} is changed by $\tilde{\Phi}^{(\infty)}$ into \begin{equation}\label{fc6-2} \tilde{H}^{(\infty)}(\theta,t,I)=\tilde{H}^{(0)}\circ\tilde{\Phi}^{(\infty)}=\frac{H^{(\infty)}_0(I)}{\varepsilon^{a}},
\end{equation} where
\begin{equation}\label{fc6-3} \frac{\partial H^{(\infty)}_0}{\partial I}(\tilde{I}_{\infty})=\omega(I_0), \end{equation}
\begin{equation}\label{fc6-4}
\|\tilde{\Phi}^{(\infty)}-id\|_{\mathbf{T}^{d+1}\times \tilde{I}_{\infty}}\leq \tilde{\varepsilon}_0^{\frac{1}{2\ell}}. \end{equation}
\end{lem}
\begin{proof} By (\ref{fc5-35}) and (\ref{fc5-55}), for $z=(\theta,t,I)\in \mathbf{T}^{d+1}\times \tilde{I}_{\infty}$ and $m=0,1,2, \cdots$, we have
\begin{eqnarray}\label{fc6-5}
\nonumber &&\|\tilde{\Phi}^{(m+1)}(z)-\tilde{\Phi}^{(m)}(z)\|_{\mathbf{T}^{d+1}\times \tilde{I}_{\infty}}\\
\nonumber&=&\|\tilde{\Phi}^{(m)}(\tilde{\Phi}_{m+1}(z))-\tilde{\Phi}^{(m)}(z)\|_{\mathbf{T}^{d+1}\times \tilde{I}_{\infty}}\\
\nonumber&\leq&\|\partial\tilde{\Phi}^{(m)}(\tilde{\Phi}_{m+1}(z))\|_{\mathbf{T}^{d+1}\times \tilde{I}_{\infty}}\|\tilde{\Phi}_{m+1}(z)-z\|_{\mathbf{T}^{d+1}\times \tilde{I}_{\infty}}\\ &\leq&2\tilde{\varepsilon}_m^{\frac{1}{\ell}}, \end{eqnarray} where $\tilde{\Phi}^{(0)}:=id$. Then, we have
\begin{equation*}
\|\tilde{\Phi}^{(\infty)}(z)-z\|_{\mathbf{T}^{d+1}\times \tilde{I}_{\infty}}\leq\sum_{m=0}^{\infty} \|\tilde{\Phi}^{(m+1)}(z)-\tilde{\Phi}^{(m)}(z)\|_{\mathbf{T}^{d+1}\times \tilde{I}_{\infty}}\leq\sum_{m=0}^{\infty}2\tilde{\varepsilon}_m^{\frac{1}{\ell}} \leq \tilde{\varepsilon}_0^{\frac{1}{2\ell}}. \end{equation*} This completes the proof of Lemma \ref{lem6-1}.
\end{proof} Then the proof of Theorem \ref{thm1-1} is completed by (\ref{fc3-1}), (\ref{fc3-5}), (\ref{fc3-50}), (\ref{fc4-4}), (\ref{fc5-3}) and Lemma \ref{lem6-1}. Applying Theorem \ref{thm1-1} to (\ref{fc1-7}) we have Theorem \ref{thm1-2} (see Section 5 of \cite{a16} for the proof).
\end{document} |
\begin{document}
\title{\LARGE \bf SOS Methods for Multi-Delay Systems:\\ A Dual Form of Lyapunov-Krasovskii Functional }
\author{Matthew~M.~Peet,~\IEEEmembership{Member,~IEEE,} \thanks{M. Peet is with the School for the Engineering of Matter, Transport and Energy, Arizona State University, Tempe, AZ, 85298 USA. e-mail: {\tt \small [email protected]}.}}
\maketitle
\begin{abstract} We present a dual form of Lyapunov-Krasovskii functional which allows the problem of controller synthesis of multi-delay systems to be formulated and solved in a convex manner. First, we give a general form of dual stability condition formulated in terms of Lyapunov operators which are positive, self-adjoint and preserve the structure of the state-space. Second, we provide a class of such operators and express the stability conditions as positivity and negativity of quadratic Lyapunov-Krasovskii functional forms. Next, we adapt the SOS methodology to express positivity and negativity of these forms as LMIs, describing a new set of polynomial manipulation tools designed for this purpose. Finally, we apply the resulting LMIs to a battery of numerical examples and demonstrate that the stability conditions are not conservative. The results of this paper are significant in that they open the way for dynamic output $H_\infty$ optimal control of systems with multiple time-delays. \end{abstract} \begin{IEEEkeywords} Delay Systems, Lyapunov-Krasovskii, LMIs, Stability, Controller Synthesis. \end{IEEEkeywords}
\section{Introduction} Systems with delay have been well-studied for some time~\cite{niculescu_book,gu_2003,richard_2003}. Recently, there have been many results on the use of optimization and semidefinite programming for stability of linear and nonlinear time-delay systems. Although the computational question of stability of a linear state-delayed system is believed to be NP-hard, several techniques have been developed which use LMI methods~\cite{boyd_book} to construct sequences of polynomial-time algorithms which provide sufficient stability conditions and appear to converge to necessity as the complexity of the algorithms increases. Examples of such sequential algorithms include the piecewise-linear approach~\cite{gu_2003}, the delay-partitioning approach~\cite{gouaisbaut_2009}, the Wirtinger-based method of~\cite{seuret_2013} and the SOS approach~\cite{peet_2009SICON}. In addition, there are also frequency-domain approaches such as~\cite{michiels_2005,sipahi_2005}. These algorithms are sufficiently reliable that, for the purposes of this paper, we may consider the problem of stability analysis of linear discrete-delay systems to be solved.
The purpose of this paper is to explore methods by which the success in stability analysis of time-delay systems may be used to attack what may be considered the relatively underdeveloped field of robust and optimal controller synthesis. Although there have been a number of results on controller synthesis for time-delay systems~\cite{luo_book}, none of these results has been able to resolve the fundamental bilinearity of the synthesis problem. That is, controller synthesis is not convex in the combined Lyapunov operator $\mathcal{P}$ and feedback operator $\mathcal{K}$. Without convexity, it is difficult to construct provably stabilizing controllers without significant conservatism, much less address the problems of robust and quadratic stability. Some papers use iterative methods to alternately optimize the Lyapunov operator and controller as in~\cite{moon_2001} or~\cite{fridman_2002} (via a ``tuning parameter''). However, this iterative approach is not guaranteed to converge. Meanwhile, approaches based on frequency-domain methods, discrete approximation, or Smith predictors result in controllers which are not provably stabilizing or are sensitive to variations in system parameters or in delay. Finally, we mention that delays often occur in both state and input, and to date most methods do not provide a unifying formulation of the controller synthesis problem with both state and input delay.
In this paper, we create a unified inequality-based framework for robust and optimal control of systems with multiple delays. The model for our approach is the LMI framework for control of linear finite-dimensional state-space systems. Specifically, there exists a controller $u=Kx$ such that $\dot x=Ax+Bu$ is stable if and only if there exists some $P>0$ and $Z$ such that $AP+PA^T+BZ+Z^TB^T<0$. This LMI follows directly from the dual version of the Lyapunov inequality $AP+PA^T<0$ via the variable substitution $Z=KP$ ($K$ is then given by $K=ZP^{-1}$). If $A(\delta)$ and $B(\delta)$ are uncertain, $\delta \in \Delta$, then we search for $P(\delta)$ and $Z(\delta)$ (or a fixed $P$ for quadratic stability) and the inequality must hold for all $\delta \in \Delta$ - a problem which is more difficult, but still convex in the variables $P$ and $Z$. LMIs of this form were introduced in~\cite{bernussou_1989} and are the basis for a majority of LMI methods for controller synthesis (see Chapter 5 Notes in~\cite{boyd_book} for a discussion). The question, then, is how to obtain similar results for control of time-delay systems.
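To make the variable substitution concrete, the following sketch (a toy double-integrator example of our own, not taken from the paper) fixes a stabilizing gain $K$, constructs $P>0$ and $Z=KP$, and verifies both the synthesis LMI and the recovery $K=ZP^{-1}$:

```python
import numpy as np

# Toy example (our own, illustrative only): double integrator with a
# stabilizing state-feedback gain chosen by hand.
A = np.array([[0.0, 1.0], [0.0, 0.0]])   # open loop: unstable
B = np.array([[0.0], [1.0]])
K = np.array([[-1.0, -2.0]])             # closed-loop poles at -1, -1
Acl = A + B @ K
n = A.shape[0]

# Solve the dual Lyapunov equation Acl P + P Acl^T = -I by vectorization:
# (I kron Acl + Acl kron I) vec(P) = -vec(I), with column-major vec.
M = np.kron(np.eye(n), Acl) + np.kron(Acl, np.eye(n))
P = np.linalg.solve(M, -np.eye(n).flatten(order='F')).reshape(n, n, order='F')
P = 0.5 * (P + P.T)                      # symmetrize against round-off

Z = K @ P                                # the substitution Z = K P
lmi = A @ P + P @ A.T + B @ Z + Z.T @ B.T  # equals Acl P + P Acl^T = -I
K_rec = Z @ np.linalg.inv(P)             # K recovered as Z P^{-1}
```

Here the Lyapunov equation is solved directly because $K$ is fixed; in actual synthesis, $P$ and $Z$ would be decision variables of a semidefinite program.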
Our approach is to think of the delay system evolving on a Hilbert space $x \in X$ as \[ \dot x(t)=\mathcal{A}x(t)+\mathcal{B}u(t). \]
We seek an operator $\mathcal{K}$ such that the feedback $u=\mathcal{K}x$ is stabilizing. Note that the input $u$ can also be infinite-dimensional so that we may represent systems with input delay in the same framework (using a Dirac operator for $\mathcal{B}$). We also note that we use full-state feedback, which assumes that measurements are retained for a period equal to the value of delay. Since such full-state measurements are often not available, ultimately the framework must include output feedback control - a more difficult problem.
In the Hilbert space framework, then, and focusing on the first terms, we seek a stabilizability condition of the form $\ip{x}{\mathcal{A}\mathcal{P}x}+\ip{\mathcal{A}\mathcal{P}x}{x}+\ip{x}{\mathcal{B}\mathcal{Z}x}+\ip{\mathcal{B}\mathcal{Z}x}{x}\le 0$ for all solutions $x\in X$. To create and test such an inequality, the first step is to establish and test a dual Lyapunov inequality of the form $\ip{x}{\mathcal{A}\mathcal{P}x}+\ip{\mathcal{A}\mathcal{P}x}{x}\le 0$ which guarantees stability of $\dot x=\mathcal{A}x$. Construction and testing of such a dual Lyapunov test is the main contribution of this paper. Due to space constraints, controller synthesis itself will be treated separately, but some early results on synthesis can be found in~\cite{peet_2009TDS}. In addition, while we discuss enforcement of the operator inequalities, this is not the main focus of the paper, which relies on restatement of existing results, primarily from~\cite{peet_2014ACC}. Indeed, we emphasize that the contribution of the paper (Theorems~\ref{thm:dual} and~\ref{thm:dual_MD}) is not a specific numerical method for determining stability of time-delay systems, but rather a new Lyapunov framework for solving the problem of controller synthesis. Moreover, the conditions are deliberately formulated in such a way that alternative approaches such as~\cite{Gu_1997},\cite{gouaisbaut_2009},\cite{seuret_2013} may also be applied in order to test stability and obtain stabilizing controllers. Finally, we note that in abstract space, there have been a number of results on dual and adjoint systems~\cite{bensoussan_book}. Unfortunately, however, these dual systems are not delay-type systems and there is no clear relationship between stability of these adjoint and dual systems and stability of the original delayed system.
This paper is organized as follows. In Sections~\ref{sec:Lyapunov} and~\ref{sec:Framework} we develop a mathematical framework for expressing Lyapunov-based stability conditions as operator inequalities. In Section~\ref{sec:duality} we show that given additional constraints on the Lyapunov operator, satisfaction of the dual Lyapunov inequality $\ip{x}{\mathcal{A}\mathcal{P}x}+\ip{\mathcal{A}\mathcal{P}x}{x}\le0$ proves stability of the delayed system. In Sections~\ref{sec:structured_SD} and~\ref{sec:structured_MD} we define a restricted class of Lyapunov functionals and operators which are valid for the dual stability condition in both the single-delay and multiple-delay cases. In Sections~\ref{sec:dual_stability_SD} and~\ref{sec:dual_stability_MD} we apply these classes of operators to obtain dual stability conditions. These conditions are formulated as positivity and negativity of Lyapunov functionals and may be considered the primary contribution of the paper. We also note that the dual stability conditions have a tridiagonal matrix structure which is distinct from standard Lyapunov-Krasovskii forms and may potentially be exploited to increase performance when studying systems with a large number of delays. In Sections~\ref{sec:positivity},~\ref{sec:positivity_LMI}, and~\ref{sec:spacing}, we show how SOS-based methods can be used to parameterize positive Lyapunov functionals and thereby enforce the inequality conditions in Sections~\ref{sec:dual_stability_SD} and~\ref{sec:dual_stability_MD}. Finally, in Sections~\ref{sec:dual_LMI_SD} and~\ref{sec:dual_LMI_MD}, we summarize our results with a set of LMI conditions for dual stability in both the single- and multiple-delay cases. Section~\ref{sec:toolbox} describes our Matlab toolbox, available online, which facilitates construction and solution of the LMIs. Section~\ref{sec:validation} applies the results to a variety of stability problems and verifies that the dual stability test is not conservative.
\section{Notation}\label{sec:Notation} Standard notation includes the Hilbert spaces $L_2^m[X]$ of square-integrable functions from $X$ to $\mathbb{R}^m$ and $W^m_2[X]:=\{x\, :\, x, \dot x \in L_2^m[X]\}$. We use $L_2^m$ and $W_2^m$ when domains are clear from context. We also use the extensions $L_2^{n \times m}[X]$ and $W_2^{n \times m}[X]$ for matrix-valued functions which map to $\mathbb{R}^{n\times m}$. $\mathcal{C}[X] \supset W_2[X]$ denotes the continuous functions on $X$. $\mathbb{S}^n \subset \mathbb{R}^{n \times n}$ denotes the symmetric matrices. $I_n \in \mathbb{S}^n$ denotes the identity matrix. $0_{n\times m} \in \mathbb{R}^{n\times m}$ is the matrix of zeros, with shorthand $0_{n}:=0_{n \times n}$. For a natural number $K \in \mathbb{N}$, we adopt the index shorthand $[K]:=\{1,\cdots,K\}$. Some additional notation is defined throughout the paper with a selected subset summarized in the Appendix.
\section{Lyapunov Stability of Time-Delay Systems}\label{sec:Lyapunov}
In this paper, we consider stability of linear discrete-delay systems of the form \begin{align} \dot{x} (t) &= A_0x(t) + \sum_{i =1}^K A_i x(t-\tau_i)\;&\text{ for all } \; t\ge0 ,\notag \\ x(t) &= \phi(t)\;&\text{ for all } \; t \in [-\tau_K,0] \label{eqn:delay_eqn} \end{align} where $A_i \in \mathbb{R}^{n\times n}$, $\phi \in \mathcal{C}[-\tau_K,0]$, $K\in \mathbb{N}$ and for convenience $\tau_1 < \tau_2 < \cdots < \tau_K$. We associate with any solution $x$ and any time $t \ge 0$ the `state' of System~\eqref{eqn:delay_eqn}, $x_t \in \mathcal{C}[-\tau_K,0]$, where $x_t(s) = x(t+s)$. Although we only consider discrete-delay systems, the results of this paper may easily be extended to systems with distributed delay. For linear discrete-delay systems of the form~\eqref{eqn:delay_eqn}, a unique solution exists for any $\phi \in \mathcal{C}[-\tau_K,0]$, and global, local, asymptotic and exponential stability are all equivalent.
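Solutions of Eqn.~\eqref{eqn:delay_eqn} are easy to approximate by stepping forward in time while storing one delay-length of history. The sketch below (our own illustration, not from the paper) uses a forward-Euler scheme on the classic stable scalar case $\dot x(t)=-x(t-1)$ with constant history $\phi\equiv 1$, which is exponentially stable since $\tau=1<\pi/2$:

```python
import numpy as np

# Forward-Euler sketch of a single-delay instance of the delay equation:
# xdot(t) = A0 x(t) + A1 x(t - tau), here A0 = 0, A1 = -1, tau = 1.
tau, dt, T = 1.0, 0.001, 30.0
d = int(round(tau / dt))        # number of history samples spanning one delay
steps = int(round(T / dt))
x = np.empty(d + steps + 1)
x[:d + 1] = 1.0                 # constant history phi(t) = 1 on [-tau, 0]
for k in range(d, d + steps):
    x[k + 1] = x[k] + dt * (-x[k - d])   # x(t - tau) sits d samples back
```

The trajectory oscillates and decays, consistent with the rightmost root of the characteristic equation $\lambda = -e^{-\lambda\tau}$ having negative real part.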
Stability of Equations~\eqref{eqn:delay_eqn} may be certified through the use of Lyapunov-Krasovskii functionals - an extension of Lyapunov theory to systems with infinite-dimensional state-space. In particular, it is known that stability of linear time-delay systems is equivalent to the existence of a quadratic Lyapunov-Krasovskii functional of the form \begin{align} V(\phi) &= \int_{-\tau_K}^0\bmat{\phi(0)\\\phi(s)}^T M(s) \bmat{\phi(0)\\\phi(s)} ds \notag \\
& + \int_{-\tau_K}^0 \int_{-\tau_K}^0 \phi(s)^T N(s,\theta) \phi(\theta)\, ds\,d\theta, \label{eqn:complete_quad} \end{align} where the Lie (upper-Dini) derivative of the functional is negative along any solution $x$ of~\eqref{eqn:delay_eqn}. That is, \[ \dot V(x_t) = \lim_{h \rightarrow 0}\frac{V(x_{t+h}) - V(x_t)}{h} \le 0 \] for all $t \ge 0$. Furthermore, the unknown functions $M$ and $N$ may be assumed to be continuous in their respective arguments everywhere except possibly at points $H := \{-\tau_1,\cdots,-\tau_K\}$.
For the dual stability conditions we propose in this paper, discontinuities in the unknown functions $M$ and $N$ pose challenges which make this form of Lyapunov-Krasovskii functional poorly suited to controller synthesis. For this reason, we use an alternative formulation of the necessary Lyapunov-Krasovskii functional better suited to the dual stability conditions we propose. Specifically, it has been shown~\cite{gu_2010} that existence of a positive decreasing Lyapunov-Krasovskii functional of the form in Eqn.~\eqref{eqn:complete_quad} implies the existence of a positive decreasing Lyapunov-Krasovskii functional of the form \begin{align} V(\phi) &= \tau_K \phi(0)^T P \phi(0) + \tau_K \sum_{i=1}^K \int_{-\tau_i}^0 \phi(0)^T Q_i(s)\phi(s) d s +\tau_K\sum_{i=1}^K \int_{-\tau_i}^0 \phi(s)^T Q_i(s)^T \phi(0) ds \notag \\
&+\tau_K\sum_{i=1}^K \int_{-\tau_i}^0 \phi(s)^T S_i(s)\phi(s)\, ds + \sum_{i,j=1}^{K}\int_{-\tau_i}^0 \int_{-\tau_j}^0 \phi(s)^T R_{ij}(s,\theta)\phi(\theta)\,d \theta\, ds, \label{eqn:complete_quad2} \end{align} where the functions $Q_i$, $S_i$ and $R_{ij}$ may be assumed continuous on their respective domains of definition.
\section{A Mathematical Framework for Lyapunov Inequalities}\label{sec:Framework} The use of Lyapunov-Krasovskii functionals can be simplified by considering stability in the semigroup framework - a generalization of the concept of differential equations. Although the results of this paper do not require the semigroup architecture, we adopt this notation in order to simplify the concepts and avoid unnecessary notation. Sometimes known as the `flow map', a `strongly continuous semigroup' is an operator, $S(t):Z \rightarrow Z$, defined by the Hilbert space $Z$, which represents the evolution of the state of the system so that for any solution $x$, $x_{t+s} = S(s) x_t$. Note that for a given $Z$, the semigroup may not exist even if the solution exists for any initial conditions in $Z$. Associated with a semigroup on $Z$ is an operator $\mathcal{A}$, called the `infinitesimal generator' which satisfies \[ \frac{d}{dt}S(t)\phi = \mathcal{A} S(t)\phi \] for any $\phi \in X$. The space $X$ is often referred to as the domain of the generator $\mathcal{A}$, and is the space on which the generator is defined and need not be a closed subspace of $Z$. In this paper we will refer to $X$ as the `state-space'. For System~\eqref{eqn:delay_eqn}, following the approach in~\cite{curtain_book}, we define $Z_{m,n,K}:=\{\mathbb{R}^m \times L_2^{n}[-\tau_1,0]\times \cdots \times L_2^n[-\tau_K,0]\}$ and for $\{x,\phi_1,\cdots,\phi_K\}\in Z_{m,n,K}$, we define the following shorthand notation \[ \bmat{x\\ \phi_i}:=\{x,\phi_1,\cdots,\phi_K\}, \] which allows us to simplify expression of the inner product which we define to be \[ \ip{\bmat{y\\ \psi_i}}{\bmat{x\\ \phi_i}}_{Z_{m,n,K}}=\tau_K y^T x + \sum_{i=1}^K \int_{-\tau_i}^0 \psi_i(s)^T\phi_i(s)ds. \] Furthermore, when $m=n$, we simplify the notation using $Z_{n,K}:=Z_{n,n,K}$. 
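The $Z_{m,n,K}$ inner product defined above is straightforward to evaluate numerically. The following sketch (our own illustration, with $n=1$, $K=2$ and hypothetical delays $\tau_1=0.5$, $\tau_2=\tau_K=1$) implements it with trapezoidal quadrature:

```python
import numpy as np

# Numerical evaluation of the Z_{m,n,K} inner product (scalar case n = 1,
# K = 2, delays chosen for illustration):
#   <{y, psi_i}, {x, phi_i}> = tau_K y^T x + sum_i int_{-tau_i}^0 psi_i^T phi_i ds
taus = [0.5, 1.0]
tauK = taus[-1]

def quad(f, a, b, m=2001):
    # composite trapezoidal rule on a uniform grid
    s = np.linspace(a, b, m)
    w = np.full(m, (b - a) / (m - 1))
    w[0] *= 0.5
    w[-1] *= 0.5
    return float(np.dot(f(s), w))

def ip(y, psis, x, phis):
    total = tauK * y * x
    for tau_i, psi, phi in zip(taus, psis, phis):
        total += quad(lambda s: psi(s) * phi(s), -tau_i, 0.0)
    return total

# With y = x = 1 and all functions identically 1, the value is
# tau_K + tau_1 + tau_2 = 1 + 0.5 + 1 = 2.5.
one = np.ones_like
val = ip(1.0, [one, one], 1.0, [one, one])
```

By construction the form is symmetric, $\langle z, w\rangle = \langle w, z\rangle$ for scalar-valued components, which the test below also checks on non-trivial functions.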
We may now conveniently write the state-space as \[ X:=\left\{\bmat{x \\ \phi_i} \in Z_{n,K}\, : \, \phi_i \in W_2^n[-\tau_i,0] \text{ and } \phi_i(0)=x \text{ for all } i\in [K] \right\}. \] We furthermore extend this notation to say \[ \bmat{x \\ \phi_i}(s)=\bmat{y \\ f(s,i)} \] if $x=y$ and $\phi_i(s)=f(s,i)$ for $s \in [-\tau_i,0]$ and $i\in [K]$. This also allows us to compactly represent the infinitesimal generator, $\mathcal{A}$, of Eqn.~\eqref{eqn:delay_eqn} as \[ \mathcal{A}\bmat{x\\ \phi_i}(s):= \bmat{A_0 x + \sum_{i=1}^K A_i \phi_i(-\tau_i)
\\ \dot \phi_i(s)}. \] Using these definitions of $\mathcal{A}$, $Z$ and $X$, for matrix $P$ and sufficiently smooth functions $Q_i,S_i,R_{ij}$, we define an operator $\mathcal{P}_{\{P,Q_i,S_i,R_{ij}\}}$ of the ``complete-quadratic'' type as \begin{align} &\left(\mathcal{P}_{\{P,Q_i,S_i,R_{ij}\}} \bmat{x\\ \phi_i}\right)(s) :=\bmat{ P x + \sum_{i=1}^K \int_{-\tau_i}^0 Q_i(s)\phi_i(s) d s \\ \tau_K Q_i(s)^T x + \tau_K S_i(s)\phi_i(s) + \sum_{j=1}^K \int_{-\tau_j}^0 R_{ij}(s,\theta)\phi_j(\theta)\, d \theta }. \end{align} This notation allows us to associate $P,Q_i,S_i$ and $R_{ij}$ with the corresponding complete-quadratic functional in Eqn~\eqref{eqn:complete_quad2} as \[ V(\phi)=\ip{\bmat{\phi(0)\\ \phi_i }}{\mathcal{P}_{\{P,Q_i,S_i,R_{ij}\}}\bmat{\phi(0)\\ \phi_i }}_{Z_{n,K}}. \]
That is, the Lyapunov functional is defined by the operator $\mathcal{P}_{\{P,Q_i,S_i,R_{ij}\}}$ which is a variation of a classical combined multiplier and integral operator whose multipliers and kernel functions are given by $P,Q_i,S_i,R_{ij}$. The time-derivative of the complete-quadratic functional can similarly be represented using these operators as \[ \dot V(\phi)=\ip{\bmat{\phi(0)\\ \phi_i }}{\mathcal{P}_{\{P,Q_i,S_i,R_{ij}\}}\mathcal{A} \bmat{\phi(0)\\ \phi_i }}_{Z_{n,K}}+\ip{\mathcal{A} \bmat{\phi(0)\\ \phi_i }}{\mathcal{P}_{\{P,Q_i,S_i,R_{ij}\}} \bmat{\phi(0)\\ \phi_i }}_{Z_{n,K}}. \] The classical stability problem, then, states that the delay-differential Equation~\eqref{eqn:delay_eqn} is stable if there exists an $\alpha > 0$, matrix $P$ and functions $Q_i,S_i,R_{ij}$ such that $V(\phi)\ge \alpha \norm{\phi(0)}^2$ and $\dot V(\phi)\le 0$ for all $\phi\in \mathcal{C}[-\tau_K,0]^n$ such that $\bmat{\phi(0)\\ \phi_i} \in X$. In this paper, however, we seek to establish new stability conditions in a dual space - a problem which is formulated in the following section.
\section{A Dual Stability Condition}\label{sec:duality}
Using the notation we have introduced in the preceding section, we may compactly represent the dual stability condition which forms the main theoretical contribution of the paper.
\begin{thm} \label{thm:dual} Suppose that $\mathcal{A}$ generates a strongly continuous semigroup on $Z$ with domain $X$. Further suppose there exists a bounded, positive and coercive linear operator $\mathcal{P} : X \rightarrow X$ which is self-adjoint with respect to the $Z$ inner product and satisfies \[ \ip{\mathcal{A}\mathcal{P}z}{z}_Z+\ip{z}{\mathcal{A}\mathcal{P}z}_Z\le -\norm{z}^2_Z \] for all $z \in X$. Then the semigroup generated by $\mathcal{A}$ is exponentially stable; that is, any solution of $\dot x(t) = \mathcal{A}x(t)$ decays exponentially. \end{thm} \begin{IEEEproof} Because $\mathcal{P}$ is coercive, bounded and self-adjoint, its inverse exists and is coercive, bounded and self-adjoint. Define the Lyapunov functional \[ V(y) = \ip{y}{\mathcal{P}^{-1} y}\ge \alpha \norm{y}^2_Z, \] where the lower bound holds for some $\alpha>0$ and all $y\in X$. If $y(t)$ satisfies $\dot y(t)=\mathcal{A}y(t)$, then $V$ has time derivative \begin{align} \frac{d}{dt} V(y(t)) &= \ip{\dot y(t)}{\mathcal{P}^{-1}y(t)} + \ip{y(t)}{\mathcal{P}^{-1}\dot y(t)}\\
&= \ip{\mathcal{A} y(t)}{\mathcal{P}^{-1}y(t)} + \ip{y(t)}{\mathcal{P}^{-1} \mathcal{A}y(t)}\\
&= \ip{\mathcal{A} y(t)}{\mathcal{P}^{-1}y(t)} + \ip{\mathcal{P}^{-1}y(t)}{ \mathcal{A}y(t)}. \end{align} Now define $z(t)= \mathcal{P}^{-1} y(t) \in X$ for all $t\ge 0$. Then $y(t)=\mathcal{P}z(t)$ and since $\mathcal{P}$ is bounded and $\mathcal{P}^{-1}$ is coercive, there exist $\gamma,\delta>0$ such that \begin{align} \dot V(y(t)) &= \ip{\mathcal{A} y(t)}{\mathcal{P}^{-1}y(t)} + \ip{\mathcal{P}^{-1}y(t)}{ \mathcal{A}y(t)}\\
&= \ip{\mathcal{A} \mathcal{P}z(t)}{z(t)} + \ip{z(t)}{ \mathcal{A}\mathcal{P}z(t)} \\
&\le -\norm{z(t)}^2 \le -\frac{1}{\gamma}\ip{z(t)}{\mathcal{P}z(t)}=-\frac{1}{\gamma}\ip{y(t)}{\mathcal{P}^{-1}y(t)}\le -\frac{\delta}{\gamma} \norm{y(t)}^2. \end{align} Negativity of the derivative of the Lyapunov function implies exponential stability in the square norm of the state by, e.g.~\cite{curtain_book} or by the invariance principle. \end{IEEEproof}
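The mechanism of this proof is easy to check numerically in finite dimensions, where $\mathcal{A}$ and $\mathcal{P}$ reduce to matrices: if $AP+PA^T=-Q$ with $Q\succ 0$, then $P^{-1}A+A^TP^{-1}=-P^{-1}QP^{-1}\prec 0$, so $V(y)=y^TP^{-1}y$ is a classical Lyapunov function. A sketch with a Hurwitz matrix of our own choosing:

```python
import numpy as np

A = np.array([[-1.0, 2.0], [0.0, -3.0]])  # Hurwitz (eigenvalues -1, -3)
n = A.shape[0]

# Solve the dual Lyapunov equation A P + P A^T = -I by vectorization:
# (I kron A + A kron I) vec(P) = -vec(I), with column-major vec.
M = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
P = np.linalg.solve(M, -np.eye(n).flatten(order='F')).reshape(n, n, order='F')
P = 0.5 * (P + P.T)                       # symmetrize against round-off

Pinv = np.linalg.inv(P)
dual = A @ P + P @ A.T                    # = -I by construction
primal = Pinv @ A + A.T @ Pinv            # negative definite by the inverse argument
```

The matrix `primal` is exactly $-P^{-1}P^{-1}$ here (since $Q=I$), so its eigenvalues are strictly negative, mirroring the derivative bound $\dot V \le -(\delta/\gamma)\norm{y}^2$ in the proof.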
The advantage of the dual stability condition is that we replace $\mathcal{P}\mathcal{A}$ with $\mathcal{A}\mathcal{P}$. Although relatively subtle, this distinction allows convexification of the controller synthesis problem. In the following section, we discuss how to parameterize operators which satisfy the conditions of Theorem~\ref{thm:dual}. We start with the constraints $\mathcal{P}=\mathcal{P^*}$ and $\mathcal{P}:X\rightarrow X$. Note that without significant restrictions on $P,Q_i,S_i,R_{ij}$, the operator $\mathcal{P}_{\{P,Q_i,S_i,R_{ij}\}}$ satisfies neither constraint.
\section{A Structured Operator: Single Delay}\label{sec:structured_SD}
In order to satisfy the conditions of Theorem~\ref{thm:dual}, we must restrict ourselves to a class of operators which are self-adjoint with respect to the inner product defined on $Z_{n,K}$ and which preserve the structure of the state-space (map $X\rightarrow X$). We first consider the simpler case of a single delay. In this case, with $m=n$, we have $Z_{n,n,1}=\mathbb{R}^n \times L_2^{n}$ with the $L_2^{2n}$ inner product, and the state-space becomes $X:=\{\{x, \phi \} \in \mathbb{R}^n \times W_2^n[-\tau,0]\, : \, \phi(0)=x \}$. To preserve the structure of $X$, we consider operators of the form
\begin{align} &\left(\mathcal{P}\bmat{x \\ \phi}\right)(s):= \bmat{ \tau( R(0,0)+S(0))x + \int_{-\tau}^0 R(0,s)\phi(s)d s \\ \tau R(s,0)\phi(0) + \tau S(s)\phi(s) + \int_{-\tau}^0 R(s,\theta)\phi(\theta)d \theta }\label{eqn:operator} \end{align} Clearly, $\mathcal{P}$ is a bounded linear operator, and if $S \in W_2^{n \times n}[-\tau,0]$ and $R \in W_2^{n \times n}[[-\tau,0]\times[-\tau,0]]$, then by inspection $\mathcal{P}$ maps $X \rightarrow X$. Furthermore, $\mathcal{P}$ is self-adjoint with respect to the $L_2^{2n}$ inner product, as indicated in the following lemma.
\begin{lem}\label{lem:selfadjoint} Suppose $R(s,\theta) = R(\theta,s)^T$ and $S(s) \in \mathbb{S}^n$. Then the operator $\mathcal{P}$, as defined in Equation~\eqref{eqn:operator}, is self-adjoint with respect to the $L_2^{2n}$ inner product. \end{lem} \begin{IEEEproof} The operator $\mathcal{P}: X \rightarrow X$ is self-adjoint with respect to the inner product $\ip{\cdot}{\cdot}_{L_2^{2n}}$ if \[ \ip{\bmat{y \\\psi}}{\mathcal{P}\bmat{x \\ \phi}}_{L_2^{2n}}=\ip{\mathcal{P}\bmat{y \\\psi}}{\bmat{x \\ \phi}}_{L_2^{2n}} \] for any $\bmat{x \\ \phi},\bmat{y \\\psi} \in X$. By exploiting the structure of $\mathcal{P}$ and $X$, we have the following.{ \begin{align} &\ip{\bmat{y \\\psi}}{\mathcal{P}\bmat{x \\ \phi}}_{L_2^{2n}}=\int_{-\tau}^0\bmat{y\\\psi(s)}^T \bmat{ \tau (R(0,0)+S(0))x + \int_{-\tau}^0 R(0,\theta)\phi(\theta)d \theta \\
\tau R(s,0)\phi(0) + \tau S(s)\phi(s) + \int_{-\tau}^0 R(s,\theta)\phi(\theta)d \theta }ds \\
&=\int_{-\tau}^0\bmat{y\\\psi(s)}^T
\bmat{ \tau (R(0,0)+S(0)) & \tau R(0,s) \\ \tau R(s,0) & \tau S(s)}
\bmat{x\\\phi(s)}ds + \int_{-\tau}^0\int_{-\tau}^0\bmat{y\\\psi(s)}^T
\bmat{0&0\\0&R(s,\theta)}
\bmat{x\\\phi(\theta)}ds \,d\theta\\
&=\int_{-\tau}^0\left(\bmat{\tau (R(0,0) + S(0)) & \tau R(0,s) \\
\tau R(s,0) & \tau S(s)}^T
\bmat{y\\\psi(s)}\right)^T \bmat{x\\\phi(s)}ds + \int_{-\tau}^0\int_{-\tau}^0\bmat{0\\ R(s,\theta)^T \psi(s)}^T \bmat{x\\\phi(\theta)}ds \, d\theta\\
&=\int_{-\tau}^0\bmat{\tau (R(0,0) + S(0) )y + \tau R(0,s) \psi(s)\\
\tau R(s,0) y + \tau S(s) \psi(s) + \int_{-\tau}^0R(\theta,s)^T \psi(\theta)d \theta }^T
\bmat{x\\ \phi(s) } ds \\ &=\int_{-\tau}^0\bmat{\tau (R(0,0) + S(0) )y + \int_{-\tau}^0 R(0,\theta) \psi(\theta)d\theta\\ \tau R(s,0) \psi(0) + \tau S(s) \psi(s)+\int_{-\tau}^0 R(s,\theta) \psi(\theta)d \theta}^T \bmat{x\\\phi(s)}ds =\ip{\mathcal{P}\bmat{y \\\psi}}{\bmat{x \\ \phi}}_{L_2^{2n}} \end{align} } \end{IEEEproof} Note that the constraint that the operator be self-adjoint significantly reduces the number of free variables. In the single-delay case, we have made this explicit by replacing the variables $P$ and $Q$ with $P=\tau(R(0,0)+S(0))$ and $Q(s)=R(0,s)$. A natural question is whether the self-adjoint constraint introduces conservatism. While we cannot establish that the self-adjoint constraint is necessary and sufficient for stability, the construction of converse Lyapunov functionals in, e.g.,~\cite{gu_2010} indicates coupling between the functions; furthermore, the numerical results at the end of this paper indicate little if any conservatism in this constraint. We now apply this structured operator to Theorem~\ref{thm:dual} to obtain conditions on $S$ and $R$ for which stability holds.
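The two structural properties used above, the mapping $X\rightarrow X$ and self-adjointness, can also be checked numerically for particular kernels. The sketch below (scalar case $n=1$, $\tau=1$, with our own choices $S(s)=1+s^2$ and $R(s,\theta)=(1+s)(1+\theta)$, which satisfy the hypotheses of Lemma~\ref{lem:selfadjoint}) discretizes $\mathcal{P}$ with trapezoidal quadrature weights:

```python
import numpy as np

# Discretized structured operator P (scalar case n = 1, tau = 1) with
# illustrative kernels S(s) = 1 + s^2 (symmetric) and
# R(s, t) = (1 + s)(1 + t) = R(t, s).  s = 0 is the LAST grid point.
tau, m = 1.0, 2001
s = np.linspace(-tau, 0.0, m)
h = s[1] - s[0]
w = np.full(m, h)
w[0] = w[-1] = h / 2                      # trapezoidal quadrature weights
S = 1.0 + s**2
R = np.outer(1.0 + s, 1.0 + s)

def apply_P(x, phi):
    """(P[x, phi])(s): head is the finite-dimensional part, tail the function."""
    head = tau * (R[-1, -1] + S[-1]) * x + np.dot(R[-1, :] * phi, w)
    tail = tau * R[:, -1] * phi[-1] + tau * S * phi + R @ (w * phi)
    return head, tail

def ip(y, psi, x, phi):
    """<{y, psi}, {x, phi}> = tau y^T x + int_{-tau}^0 psi^T phi ds."""
    return tau * y * x + np.dot(psi * phi, w)

phi = np.cos(s); x = phi[-1]              # phi(0) = x, so {x, phi} lies in X
psi = np.exp(s); y = psi[-1]
lhs = ip(y, psi, *apply_P(x, phi))        # <{y,psi}, P{x,phi}>
rhs = ip(*apply_P(y, psi), x, phi)        # <P{y,psi}, {x,phi}>
```

Evaluating the tail at $s=0$ reproduces the head exactly, which is the boundary condition $\phi(0)=x$ that makes $\mathcal{P}$ map $X\rightarrow X$.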
\section{Dual Stability Conditions - Single Delay Case}\label{sec:dual_stability_SD} In this section, we apply the structured operator in Section~\ref{sec:structured_SD} to the dual stability condition in Theorem~\ref{thm:dual} to establish conditions for stability in the single-delay case. Note that we do not yet discuss how to enforce these conditions. First recall that the generator $\mathcal{A}$ is defined as \begin{align} \left(\mathcal{A}\bmat{x\\ \phi}\right)(s) &= \bmat{ A_0 x + A_1 \phi(-\tau) \\ \frac{d}{d s} \phi(s)}. \end{align}
\begin{thm}\label{thm:dual_SD} Suppose there exist $\epsilon>0$ and functions $S\in W_2^{n\times n}[-\tau,0]$ and $R\in W_2^{n\times n}[[-\tau,0]\times [-\tau,0]]$ where $R(s,\theta) = R(\theta,s)^T$ and $S(s) \in \mathbb{S}^n$ such that $\ip{x}{\mathcal{P}x}_{L_2^{2n}} \ge\epsilon \norm{x}^2$ for all $x \in X$ and \[ \ip{\bmat{x \\ \phi(-\tau) \\ \phi}}{\mathcal{D}\bmat{x \\ \phi(-\tau) \\ \phi}}_{L_2^{3n}}\le-\epsilon \norm{\bmat{x \\ \phi}}_{L_2^{2n}}^2 \] for all $\bmat{x\\ \phi} \in X$ where
\begin{align} &\left(\mathcal{P}\bmat{x \\ \phi}\right)(s):= \bmat{ \tau( R(0,0)+S(0))x + \int_{-\tau}^0 R(0,s)\phi(s)d s \\ \tau R(s,0)\phi(0) + \tau S(s)\phi(s) + \int_{-\tau}^0 R(s,\theta)\phi(\theta)d \theta } \end{align} and \begin{align} &\left(\mathcal{D}\bmat{x \\ y \\ \phi}\right)(s):= \bmat{ D_0 \bmat{x \\ y} + \int_{-\tau}^0 V(s)\phi(s)ds\\ \tau V(s)^T\bmat{x & y}^T + \tau \dot S(s)\phi(s) + \int_{-\tau}^0 E(s,\theta)\phi(\theta)d \theta } \end{align} where
\begin{align} &D_0:=\bmat{S_{11}+S_{11}^T & S_{12} \\
S_{12}^T & S_{22}} ,\quad V(s)=\bmat{S_{13}(s)\\0},\\
&S_{11} := \tau A_0(R(0,0)+S(0)) + \tau A_1 R(-\tau,0) +\frac{1}{2} S(0), \notag \\ &S_{12} := \tau A_1 S(-\tau), \qquad S_{22} := - S(-\tau), \notag \\ &S_{13}(s) := A_0 R(0,s)+A_1 R(-\tau,s)+\dot R(s,0)^T, \notag \\
&E(s,\theta):=\frac{d}{ds}R(s,\theta) + \frac{d}{d\theta} R(s,\theta). \notag \end{align} Then the system defined by Equation~\eqref{eqn:delay_eqn} is exponentially stable.
\end{thm}
\begin{IEEEproof} Define the operators $\mathcal{A}$ and $\mathcal{P}$ as above. By assumption, the operator $\mathcal{P}$ is coercive. By Lemma~\ref{lem:selfadjoint}, $\mathcal{P}$ is self-adjoint and maps $X\rightarrow X$. This implies that by Theorem~\ref{thm:dual} the system is exponentially stable if \[ \ip{\mathcal{A}\mathcal{P} \bmat{x\\ \phi}}{\bmat{x\\ \phi}}_{L_2^{2n}}+\ip{\bmat{x\\ \phi}}{\mathcal{A}\mathcal{P} \bmat{x\\ \phi}}_{L_2^{2n}} \le -\epsilon\norm{\bmat{x\\ \phi}}_{L_2^{2n}}^2 \] for all $\bmat{x\\ \phi} \in X$, where the factor $\epsilon$ may be absorbed by rescaling $\mathcal{P}$. We begin by constructing $\mathcal{A}\mathcal{P}\bmat{x\\ \phi}$.
\begin{align} &\left(\mathcal{A}\mathcal{P}\bmat{x\\ \phi}\right)(s):= \bmat{y_1\\ y_2(s)} \notag \\ &y_1 = \tau A_0( R(0,0)+S(0))x + \int_{-\tau}^0 A_0 R(0,s)\phi(s)d s \\ &\qquad \qquad + A_1\left(\tau R(-\tau,0)\phi(0) + \tau S(-\tau)\phi(-\tau) + \int_{-\tau}^0 R(-\tau,\theta)\phi(\theta)d \theta\right)\\ &y_2(s)= \tau \frac{d}{ds}R(s,0)\phi(0) + \tau \dot S(s)\phi(s) + \tau S(s)\dot \phi(s)+ \int_{-\tau}^0 \frac{d}{ds}R(s,\theta)\phi(\theta)d \theta. \end{align} Thus \begin{align} &\ip{\bmat{x\\ \phi}}{\mathcal{A}\mathcal{P}\bmat{x\\ \phi}} = \tau x^T y_1 + \int_{-\tau}^0\phi(s)^Ty_2(s)ds. \end{align} Examining these terms separately and using $x = \phi(0)$, we have { \begin{align} \tau x^T y_1 &=\tau^2 x^T A_0( R(0,0)+S(0))x + \tau \int_{-\tau}^0 x^T A_0 R(0,s)\phi(s)d s + \tau x^T A_1 \tau R(-\tau,0)\phi(0)\\
& \qquad + \tau^2 x^T A_1 S(-\tau)\phi(-\tau) + \tau\int_{-\tau}^0 x^T A_1 R(-\tau,\theta)\phi(\theta)d \theta\\ &= \int_{-\tau}^0 \left( x^T \tau A_0( R(0,0)+S(0))x + \tau x^T A_0 R(0,s)\phi(s)\right)d s \\ & + \int_{-\tau}^0 (x^T \tau A_1 R(-\tau,0)x + x^T \tau A_1 S(-\tau) \phi(-\tau) )ds + \int_{-\tau}^0 \tau x^T A_1 R(-\tau,s)\phi(s) ds\\ &=\int_{-\tau}^0 \bmat{x\\ \phi(-\tau)\\ \phi(s)}^T \bmat{\tau A_0( R(0,0)+S(0)) + \tau A_1 R(-\tau,0) & *^T & *^T\\ \frac{\tau}{2} S(-\tau)^T A_1^T &0& *^T\\ \frac{\tau}{2} R(0,s)^T A_0^T+\frac{\tau}{2} R(-\tau,s)^T A_1^T&0&0 }
\bmat{x\\ \phi(-\tau)\\ \phi(s)}ds. \end{align} } Examining the second term, we get { \begin{align} &\int_{-\tau}^0\phi(s)^Ty_2(s)ds=\int_{-\tau}^0\phi(s)^T\tau \left( \dot R(s,0)\phi(0) + \dot S(s)\phi(s)\right)ds\\ &\hspace{2cm}+ \int_{-\tau}^0 \phi(s)^T \tau S(s)\dot \phi(s)ds+ \int_{-\tau}^0 \int_{-\tau}^0 \phi(s)^T \frac{d}{ds}R(s,\theta)\phi(\theta)d \theta ds\\ &=\int_{-\tau}^0\left(\tau \phi(s)^T \dot R(s,0)\phi(0) + \tau \phi(s)^T \dot S(s)\phi(s)\right)ds + \frac{\tau}{2} x^T S(0) x - \frac{\tau}{2} \phi(-\tau)^T S(-\tau)\phi(-\tau)\\
&\qquad- \frac{1}{2} \int_{-\tau}^0 \phi(s)^T \tau \dot S(s) \phi(s)ds + \int_{-\tau}^0 \int_{-\tau}^0 \phi(s)^T \frac{d}{ds}R(s,\theta)\phi(\theta)d \theta ds\\ &= \int_{-\tau}^0 \bmat{x\\ \phi(-\tau)\\ \phi(s)}^T
\hspace{-1.5mm}\bmat{\frac{1}{2} S(0) & *^T & *^T\\
0& \hspace{-1.5mm}-\frac{1}{2} S(-\tau) &*^T\\
\frac{\tau}{2} \dot R(s,0)&0& \hspace{-1.5mm}\frac{\tau}{2} \dot S(s) }
\bmat{x\\ \phi(-\tau)\\ \phi(s)}ds + \int_{-\tau}^0 \int_{-\tau}^0 \phi(s)^T \frac{d}{ds}R(s,\theta)\phi(\theta)d \theta ds.
\end{align} } Combining both terms, and using symmetry of the inner product, we get \begin{align} &\ip{\bmat{x \\ \phi}}{\mathcal{A}\mathcal{P}\bmat{x \\ \phi}} +\ip{\mathcal{A}\mathcal{P}\bmat{x \\ \phi}}{\bmat{x \\ \phi}}= \\ &\int_{-\tau}^0 \bmat{x\\ \phi(-\tau)\\ \phi(s)}^T \bmat{S_{11}+S_{11}^T & S_{12} & \tau S_{13}(s)\\
S_{12}^T & S_{22} & 0_n\\
\tau S_{13}(s)^T & 0_n & \tau \dot S(s) }
\bmat{x\\ \phi(-\tau)\\ \phi(s)} ds + \int_{-\tau}^0 \int_{-\tau}^0 \phi(s)^T \left(\frac{d}{ds}R(s,\theta) + \frac{d}{d\theta} R(s,\theta)\right)\phi(\theta)d \theta ds\\ &=\ip{\bmat{x \\ \phi(-\tau) \\ \phi}}{\mathcal{D}\bmat{x \\ \phi(-\tau) \\ \phi}}_{L_2^{3n}}\le-\epsilon \norm{\bmat{x \\ \phi}}_{L_2^{2n}}^2. \end{align} Since $\ip{\mathcal{A}\mathcal{P} \bmat{x\\ \phi}}{\bmat{x\\ \phi}}_{L_2^{2n}}+\ip{\bmat{x\\ \phi}}{\mathcal{A}\mathcal{P} \bmat{x\\ \phi}}_{L_2^{2n}} \le -\epsilon\norm{\bmat{x\\ \phi}}_{L_2^{2n}}^2$ for all $\bmat{x \\ \phi}\in X$, we conclude that the conditions of Theorem~\ref{thm:dual} are satisfied and hence System~\eqref{eqn:delay_eqn} is exponentially stable. \end{IEEEproof}
\noindent \textbf{Dual Lyapunov-Krasovskii Form:} To summarize the results of Theorem~\ref{thm:dual_SD} in a more traditional Lyapunov-Krasovskii format, the system is stable if there exists a functional \begin{align} V(\phi) &= \int_{-\tau}^0 \bmat{\phi(0)\\ \phi(s)}^T \bmat{ \tau( R(0,0)+S(0)) & \tau R(0,s)\\
\tau R(s,0) & \tau S(s) } \bmat{\phi(0)\\ \phi(s)} ds + \int_{-\tau}^0 \int_{-\tau}^0 \phi(s)^T R(s,\theta) \phi(\theta)d \theta ds \end{align} such that $V(\phi)\ge \norm{\bmat{\phi(0)\\ \phi}}^2$ and { \begin{align} V_D(\phi)&=\int_{-\tau}^0 \bmat{\phi(0)\\ \phi(-\tau)\\ \phi(s)}^T \bmat{S_{11}+S_{11}^T & S_{12} & \tau S_{13}(s)\\
S_{12}^T & S_{22} & 0_n\\
\tau S_{13}(s)^T & 0_n & \tau \dot S(s) }
\bmat{\phi(0)\\ \phi(-\tau)\\ \phi(s)} ds\\ &\hspace{2cm}+ \int_{-\tau}^0 \int_{-\tau}^0 \phi(s)^T \left(\frac{d}{ds}R(s,\theta) + \frac{d}{d\theta} R(s,\theta)\right)\phi(\theta)d \theta ds\le -\epsilon \norm{\bmat{\phi(0) \\ \phi}}^2. \end{align} } Note that unlike the standard Lyapunov-Krasovskii functionals, the derivative of the dual functional is tridiagonal in both the single-delay and multiple-delay cases. When studying systems with a large number of delays, it may be possible to exploit this structure to offer performance improvement over the standard Lyapunov-Krasovskii form.
\section{A Structured Operator: Multiple Delay}\label{sec:structured_MD} Now that we have considered the single delay case, we extend this result to multiple delays. In this case, the constraint that the operator be self-adjoint is expressed as a linear constraint on $P$ and the functions $Q_i$, $S_i$ and $R_{ij}$, none of which are eliminated as was done for the single delay case. For the multiple delay case, recall that the state-space is defined as \[ X:=\left\{\bmat{x \\ \phi_i} \in Z_{n,K}\, : \, \phi_i \in W_2^n[-\tau_i,0] \text{ and } \phi_i(0)=x \text{ for all } i\in [K] \right\}. \] Likewise, recall the inner product on $Z_{n,K}$, which for $\bmat{x\\ \phi_i}, \bmat{y\\ \psi_i} \in X$ is \[ \ip{\bmat{y\\ \psi_i}}{\bmat{x\\ \phi_i}}_{Z_{n,K}}=\tau_K y^T x + \sum_{i=1}^K \int_{-\tau_i}^0 \psi_i(s)^T\phi_i(s)ds. \] Lastly, recall we consider operators of the form \begin{align} &\left(\mathcal{P}_{\{P,Q_i,S_i,R_{ij}\}} \bmat{x\\ \phi_i}\right)(s) :=\bmat{ P x + \sum_{i=1}^K \int_{-\tau_i}^0 Q_i(s)\phi_i(s) d s \\ \tau_K Q_i(s)^T x + \tau_K S_i(s)\phi_i(s) + \sum_{j=1}^K \int_{-\tau_j}^0 R_{ij}(s,\theta)\phi_j(\theta)\, d \theta }.\label{eqn:operator_ndelay} \end{align}
\begin{lem}\label{lem:selfadjoint_MD} Suppose that $S_i\in W_2^{n\times n}[-\tau_i,0]$, $R_{ij}\in W_2^{n\times n}\left[[-\tau_i,0]\times[-\tau_j,0]\right]$ and $S_i(s)=S_i(s)^T$, $R_{ij}(s,\theta)=R_{ji}(\theta,s)^T$, $P=\tau_KQ_i(0)^T + \tau_KS_i(0)$ and $Q_j(s)=R_{ij}(0,s)$ for all $i,j\in [K]$. Then $\mathcal{P}_{\{P,Q_i,S_i,R_{ij}\}}$ is a bounded linear operator, maps $\mathcal{P}_{\{P,Q_i,S_i,R_{ij}\}}: X \rightarrow X$, and as defined in Equation~\eqref{eqn:operator_ndelay}, is self-adjoint with respect to the inner product defined on $Z_{n,K}$. \end{lem} \begin{IEEEproof} To simplify the presentation, let $\mathcal{P}:=\mathcal{P}_{\{P,Q_i,S_i,R_{ij}\}}$. We first establish that $\mathcal{P}: X \rightarrow X$. If $\bmat{x\\\phi_i}\in X$, then $\phi_i \in \mathcal{C}[-\tau_i,0]$ and $\phi_i(0)=x$. Now if \begin{align} &\bmat{y\\ \psi_i(s)}=\left(\mathcal{P}\bmat{x\\ \phi_i}\right)(s)=\bmat{ P x + \sum_{i=1}^K \int_{-\tau_i}^0 Q_i(s)\phi_i(s) d s \\ \tau_K Q_i(s)^T x + \tau_K S_i(s)\phi_i(s) + \sum_{j=1}^K \int_{-\tau_j}^0 R_{ij}(s,\theta)\phi_j(\theta)d \theta } \end{align} then since $P=\tau_K Q_i(0)^T + \tau_K S_i(0)$ and $Q_j(s)=R_{ij}(0,s)$, we have that \begin{align} \psi_i(0)&=\tau_K Q_i(0)^T x + \tau_KS_i(0)\phi_i(0) + \sum_{j=1}^K \int_{-\tau_j}^0 R_{ij}(0,\theta)\phi_j(\theta)d \theta\\ &=\left( \tau_K Q_i(0)^T + \tau_KS_i(0)\right)x + \sum_{j=1}^K \int_{-\tau_j}^0 R_{ij}(0,\theta)\phi_j(\theta)d \theta\\ &=P x + \sum_{j=1}^K \int_{-\tau_j}^0 Q_j(\theta)\phi_j(\theta)d \theta\\ &=y. \end{align} Since $S_i\in W_2^{n\times n}[-\tau_i,0]$, $R_{ij}\in W_2^{n\times n}\left[[-\tau_i,0]\times[-\tau_j,0]\right]$, and $\phi_i\in W_2^n[-\tau_i,0]$, we have $\bmat{y \\ \psi_i}\in X$ and hence $\mathcal{P}:X\rightarrow X$. Furthermore, boundedness of $Q_i$, $S_i$ and $R_{ij}$ implies boundedness of the linear operator $\mathcal{P}$.
Now, to prove that the operator $\mathcal{P}$ is self-adjoint with respect to the inner product $\ip{\cdot}{\cdot}_{Z_{n,K}}$, we show \[ \ip{y}{\mathcal{P}x}_{Z_{n,K}}=\ip{\mathcal{P}y}{x}_{Z_{n,K}} \] for any $x,y \in X$. Using the properties $S_i(s)=S_i(s)^T$ and $R_{ij}(s,\theta)=R_{ji}(\theta,s)^T$, we have the following.
{ \begin{align} &\ip{\bmat{y \\ \psi_i}}{\mathcal{P}\bmat{x \\ \phi_i}}_{Z_{n,K}}=\tau_K y^T\left( P x + \sum_{i=1}^K \int_{-\tau_i}^0 Q_i(\theta)\phi_i(\theta) d \theta\right)\\
&\qquad + \sum_{i=1}^K \int_{-\tau_i}^0 \psi_i(s)^T\left(\tau_K Q_i(s)^T x + \tau_KS_i(s)\phi_i(s) + \sum_{j=1}^K \int_{-\tau_j}^0 R_{ij}(s,\theta)\phi_j(\theta)d \theta \right)ds\\ &=y^T \tau_K P x + \sum_{i=1}^K \int_{-\tau_i}^0 y^T \tau_K Q_i(s)\phi_i(s) d s\\
&+ \sum_{i=1}^K \int_{-\tau_i}^0 \left(\psi_i(s)^T \tau_K Q_i(s)^T x + \tau_K\psi_i(s)^T S_i(s)\phi_i(s) + \sum_{j=1}^K \int_{-\tau_j}^0 \psi_i(s)^TR_{ij}(s,\theta)\phi_j(\theta)d \theta \right)ds\\ &=\left(\tau_K Py + \sum_{i=1}^K \int_{-\tau_i}^0 \tau_K Q_i(s) \psi_i(s)ds \right)^T x \\
&\qquad + \sum_{i=1}^K \int_{-\tau_i}^0 \left(y^T \tau_K Q_i(s)+ \tau_K\psi_i(s)^T S_i(s) + \sum_{j=1}^K \int_{-\tau_j}^0 \psi_j(\theta)^TR_{ji}(\theta,s)d\theta \right)\phi_i(s) ds\\ &=\tau_K \left(Py + \sum_{j=1}^K \int\limits_{-\tau_j}^0 Q_j(s) \psi_j(s)ds \right)^T x\\
&\qquad \qquad + \sum_{i=1}^K \int\limits_{-\tau_i}^0 \left( \tau_K Q_i(s)^T y + \tau_KS_i(s)^T \psi_i(s) +\sum_{j=1}^K \int\limits_{-\tau_j}^0 R_{ji}(\theta,s)^T\psi_j(\theta)d\theta \right)^T \phi_i(s) \,ds\\ &\qquad =\ip{\mathcal{P}\bmat{y\\\psi_i}}{\bmat{x\\ \phi_i}}_{Z_{n,K}} \end{align} } \end{IEEEproof}
\section{The Dual Stability Condition for Multiple Delays}\label{sec:dual_stability_MD} For the multiple-delay case, we apply the operator defined in Section~\ref{sec:structured_MD} to the dual stability condition in Theorem~\ref{thm:dual}. Here the generator $\mathcal{A}$ is defined as \begin{align} \left(\mathcal{A} \bmat{x\\ \phi_i}\right)(s) &= \bmat{ A_0 x + \sum_{i=1}^K A_i \phi_i(-\tau_i) \\ \frac{d}{d s} \phi_i(s)}. \end{align}
\begin{thm}\label{thm:dual_MD} Suppose that there exist $S_i\in W_2^{n\times n}[-\tau_i,0]$ and $R_{ij}\in W_2^{n\times n}\left[[-\tau_i,0]\times[-\tau_j,0]\right]$ such that $S_i(s)=S_i(s)^T$ and $R_{ij}(s,\theta)=R_{ji}(\theta,s)^T$. Let $P=\tau_KQ_i(0)^T + \tau_KS_i(0)$ and $Q_j(s)=R_{ij}(0,s)$ for all $i,j\in [K]$. If $\ip{x}{\mathcal{P}x}_{Z_{n,K}} \ge \epsilon \norm{x}^2$ for all $x \in X$ and \[ \ip{\bmat{\bmat{x \\ \phi_1(-\tau_1) \\ \vdots \\ \phi_K(-\tau_K)}\\ \phi_i}}{\mathcal{D}\bmat{\bmat{x \\ \phi_1(-\tau_1) \\ \vdots \\ \phi_K(-\tau_K)}\\ \phi_i}}_{Z_{{nK},n,K}}\le- \norm{\bmat{x \\ \phi_i}}_{Z_{n,K}}^2 \] for all $\bmat{x\\ \phi_i} \in X$ where \begin{align} \left(\mathcal{P} \bmat{x\\ \phi_i}\right)(s) =\bmat{ P x + \sum_{i=1}^K \int_{-\tau_i}^0 Q_i(s)\phi_i(s) d s \\ \tau_K Q_i(s)^T x + \tau_K S_i(s)\phi_i(s) + \sum_{j=1}^K \int_{-\tau_j}^0 R_{ij}(s,\theta)\phi_j(\theta)d \theta } \end{align}
and \begin{align} \mathcal{D}\bmat{\bmat{x \\ \phi_1(-\tau_1) \\ \vdots \\ \phi_K(-\tau_K)}\\ \phi_i}(s)=\bmat{\bmat{C_0 & C_{1} & \cdots &C_{K} \\C_{1}^T &-S_1(-\tau_1) & 0&0 \\ \vdots&0&\ddots&0 \\ C_{K}^T &0&0&-S_K(-\tau_K)}\bmat{x \\ \phi_1(-\tau_1) \\ \vdots \\ \phi_K(-\tau_K)} + \sum_{i=1}^K\int_{-\tau_i}^0 \bmat{B_{i}(s)\\ 0 \\ \vdots \\ 0}\phi_i(s)ds
\\ \tau_K B_{i}(s)^T x + \tau_K \dot S_i(s)\phi_i(s)+\sum_{j=1}^K \int_{-\tau_j}^0G_{ij}(s,\theta)\phi_j(\theta) d \theta } \end{align}
where \begin{align} &C_0:= A_0 P + PA_0^T +\tau_K \sum_{i=1}^K ( A_i Q_i(-\tau_i)^T+ Q_i(-\tau_i)A_i^T) + \sum_{i=1}^K S_i(0), \notag\\ &C_{i}:=\tau_K A_iS_i(-\tau_i),\notag \\ &B_{i}(s):=A_0 Q_i(s) +\dot Q_i(s)+\sum_{j=1}^K A_j R_{ji}(-\tau_j,s), \notag \\ &G_{ij}(s,\theta):=\frac{\partial}{\partial s}R_{ij}(s,\theta)+\frac{\partial}{\partial \theta}R_{ij}(s,\theta), \end{align} then the system defined by Equation~\eqref{eqn:delay_eqn} is exponentially stable.
\end{thm}
\begin{IEEEproof} Define the operators $\mathcal{A}$ and $\mathcal{P}$ as above. By Lemma~\ref{lem:selfadjoint_MD}, $\mathcal{P}$ is self-adjoint and maps $X \rightarrow X$. Since $\mathcal{P}$ is positive and coercive by assumption, this implies by Theorem~\ref{thm:dual} the system is exponentially stable if \[ \ip{\mathcal{A}\mathcal{P} \bmat{x\\ \phi_i}}{\bmat{x\\ \phi_i}}+\ip{\bmat{x\\ \phi_i}}{\mathcal{A}\mathcal{P} \bmat{x\\ \phi_i}} \le -\norm{\bmat{x\\ \phi_i}}^2 \] for all $\bmat{x\\ \phi_i} \in X$. We begin by constructing $(\mathcal{A}\mathcal{P}x)(s):= \bmat{y\\\psi_i(s)}$.
\begin{align}
&y = A_0 P x + \sum_{i=1}^K \int_{-\tau_i}^0 A_0 Q_i(s)\phi_i(s)d s \\ &\qquad + \sum_{i=1}^K A_i\left(\tau_K Q_i(-\tau_i)^T x + \tau_K S_i(-\tau_i)\phi_i(-\tau_i) + \sum_{j=1}^K \int_{-\tau_j}^0 R_{ij}(-\tau_i,\theta)\phi_j(\theta)d \theta\right),\\ &\psi_i(s)= \tau_K \dot Q_i(s)^T x + \tau_K \dot S_i(s)\phi_i(s) + \tau_K S_i(s)\dot \phi_i(s)+ \sum_{j=1}^K \int_{-\tau_j}^0 \frac{d}{ds}R_{ij}(s,\theta)\phi_j(\theta)d \theta. \end{align} Thus \begin{align} &\ip{ \bmat{x\\ \phi_i}}{\mathcal{A}\mathcal{P} \bmat{x\\ \phi_i}}:= \tau_K x^T y + \sum_{i=1}^K \int_{-\tau_i}^0\phi_i(s)^T\psi_i(s)ds. \end{align} Examining these terms separately and using $x = \phi_i(0)$, we have { \begin{align} &x^T y =x^T A_0Px +\sum_{i=1}^K \int_{-\tau_i}^0 x^T A_0Q_i(s)\phi_i(s)ds + \sum_{i=1}^K \tau_K x^T A_i Q_i(-\tau_i)^Tx\\
& \qquad + \sum_{i=1}^K \tau_K x^T A_i S_i(-\tau_i)\phi_i(-\tau_i)+ \sum_{i=1}^K \int_{-\tau_i}^0 \sum_{j=1}^K x^T A_j R_{ji}(-\tau_j,\theta)\phi_i(\theta)d \theta \end{align} } Examining the second term, we get { \begin{align} &\sum_{i=1}^K \int_{-\tau_i}^0 \phi_i(s)^T \psi_i(s)ds\\ &=\sum_{i=1}^K \tau_K \int_{-\tau_i}^0 \phi_i(s)^T \dot Q_i(s)^T x \,ds +\sum_{i=1}^K \tau_K \int_{-\tau_i}^0 \phi_i(s)^T \dot S_i(s)\phi_i(s)ds+\sum_{i=1}^K \tau_K \int_{-\tau_i}^0 \phi_i(s)^T S_i(s) \dot \phi_i(s)ds\\ & \qquad +\sum_{i,j} \int_{-\tau_i}^0 \int_{-\tau_j}^0 \phi_i(s)^T \frac{\partial}{\partial s}R_{ij}(s,\theta)\phi_j(\theta)\,ds\,d\theta\\ &=\sum_{i=1}^K \tau_K \int_{-\tau_i}^0 \phi_i(s)^T \dot Q_i(s)^T x \,ds +\frac{\tau_K}{2} \sum_{i=1}^K \int_{-\tau_i}^0 \phi_i(s)^T \dot S_i(s)\phi_i(s)ds+ \frac{\tau_K}{2} x^T \sum_{i=1}^K S_i(0) x\\ & \qquad -\frac{\tau_K}{2} \sum_{i=1}^K \phi_i(-\tau_i)^T S_i(-\tau_i) \phi_i(-\tau_i)+\sum_{i,j} \int_{-\tau_i}^0 \int_{-\tau_j}^0 \phi_i(s)^T \frac{\partial}{\partial s}R_{ij}(s,\theta)\phi_j(\theta)\,ds\,d\theta \end{align} } Combining both terms, { \begin{align} &\ip{\bmat{x\\ \phi_i}}{\mathcal{A}\mathcal{P}\bmat{x\\ \phi_i}}_{Z_{n,K}}=\tau_K x^T y +\sum_{i=1}^K \int_{-\tau_i}^0 \phi_i(s)^T \psi_i(s)ds\\ &=x^T \left(\tau_K A_0P + \sum_{i=1}^K \tau_K^2 A_i Q_i(-\tau_i)^T +\frac{\tau_K}{2} \sum_{i=1}^K S_i(0) \right)x \\ &+ \tau_K^2\sum_{i=1}^K x^T A_i S_i(-\tau_i)\phi_i(-\tau_i) -\frac{\tau_K}{2}\sum_{i=1}^K \phi_i(-\tau_i)^T S_i(-\tau_i) \phi_i(-\tau_i)\\ &+\tau_K\sum_{i=1}^K \int_{-\tau_i}^0 x^T \left(A_0Q_i(s) + \dot Q_i(s)+\sum_{j=1}^K A_j R_{ji}(-\tau_j,s) \right)\phi_i(s)ds \\
&+\frac{\tau_K}{2} \sum_{i=1}^K \int_{-\tau_i}^0 \phi_i(s)^T \dot S_i(s)\phi_i(s)ds+\sum_{i,j} \int_{-\tau_i}^0 \int_{-\tau_j}^0 \phi_i(s)^T \frac{\partial}{\partial s}R_{ij}(s,\theta)\phi_j(\theta)\,ds\,d\theta \end{align} }
Combining this term with its adjoint, we recover \begin{align} &\ip{\mathcal{A}\mathcal{P}\bmat{x\\ \phi_i}}{\bmat{x\\ \phi_i}}_{Z_{n,K}}+\ip{\bmat{x\\ \phi_i}}{\mathcal{A}\mathcal{P}\bmat{x\\ \phi_i}}_{Z_{n,K}}\hspace{-.5cm}=\ip{\bmat{\bmat{x \\ \phi_1(-\tau_1) \\ \vdots \\ \phi_K(-\tau_K)}\\ \phi_i}}{\mathcal{D}\bmat{\bmat{x \\ \phi_1(-\tau_1) \\ \vdots \\ \phi_K(-\tau_K)}\\ \phi_i}}_{Z_{{nK},n,K}}\hspace{-1cm}\le- \norm{\bmat{x \\ \phi_i}}_{Z_{n,K}}^2. \end{align} We conclude that all conditions of Theorem~\ref{thm:dual} are satisfied and hence System~\eqref{eqn:delay_eqn} is exponentially stable. \end{IEEEproof} In the following sections, we will show how positivity of $\mathcal{P}$ and negativity of $\mathcal{D}$ can be enforced using SDP when the functions $S_i$ and $R_{ij}$ are polynomial.\\
\noindent \textbf{Dual Lyapunov-Krasovskii Form:} To summarize the results of Theorem~\ref{thm:dual_MD} in a more traditional Lyapunov-Krasovskii format, the system is exponentially stable if there exists a functional of the form \begin{align} V(\phi) &= \tau_K \phi(0)^T P \phi(0) + \tau_K \sum_{i=1}^K \int_{-\tau_i}^0 \phi(0)^T Q_i(s)\phi(s) d s +\tau_K\sum_{i=1}^K \int_{-\tau_i}^0 \phi(s)^T Q_i(s)^T \phi(0) ds \notag \\
&+\tau_K\sum_{i=1}^K \int_{-\tau_i}^0 \phi_i(s)^T S_i(s)\phi_i(s)\,ds + \sum_{i,j=1}^{K}\int_{-\tau_i}^0 \int_{-\tau_j}^0 \phi(s)^T R_{ij}(s,\theta)\phi(\theta)d \theta\, ds, \end{align} such that $V(\phi)\ge \norm{\bmat{\phi(0)\\ \phi_i}}^2$ and { \begin{align} V_D(\phi)&=\tau_K \phi(0)^T C_0 \phi(0) + 2\tau_K\sum_{i=1}^K \phi(0)^T C_i \phi_i(-\tau_i) - \tau_K \sum_{i=1}^K \phi_i(-\tau_i)^T S_i(-\tau_i) \phi_i(-\tau_i)\\ &+2\tau_K\sum_{i=1}^K \int_{-\tau_i}^0 \phi(0)^T B_i(s) \phi_i(s)ds +\tau_K \sum_{i=1}^K \int_{-\tau_i}^0 \phi_i(s)^T \dot S_i(s)\phi_i(s)ds\\ &+\sum_{i,j} \int_{-\tau_i}^0 \int_{-\tau_j}^0 \phi_i(s)^T G_{ij}(s,\theta)\phi_j(\theta)\,ds\,d\theta\le -\norm{\bmat{\phi(0)\\ \phi_i}}^2. \end{align} }
\section{SOS Conditions for Positivity on $Z_{m,n,K}$}\label{sec:positivity} In the preceding two sections, we have shown that stability of the multiple delay system is implied by the existence of an operator $\mathcal{P}_{\{P,Q_i,S_i,R_{ij}\}}$, which is positive on $Z_{n,n,K}$ and such that $\mathcal{D}$ is negative definite on $Z_{nK,n,K}$ and where $\mathcal{D}$ has a structure similar to $\mathcal{P}$ and is defined by functions which are linear transformations of the functions $P,Q_i,S_i,R_{ij}$. The challenge, then, is to search for the functions $P,Q_i,S_i,R_{ij}$ such that $\mathcal{P}$ is positive and $\mathcal{D}$ is negative. In this section, we discuss how to enforce positivity of $\mathcal{P}$ by assuming $Q_i,S_i,R_{ij}$ are polynomials and defining constraints on the coefficients of these polynomials in a form expressible as a semidefinite program.
Roughly speaking, our approach is to use positive matrices to parameterize a cone of operators with a square root defined on the appropriate inner product. For example, in $L_2$, if $Q>0$ is a positive matrix, it can be factored as $Q=T^TT$ and hence if we define $V(z)=\ip{z}{Qz}$, we have $V(z)=\ip{z}{Qz}=\ip{z}{T^TTz}=\ip{Tz}{Tz}\ge 0$. Hence $(\mathcal{P}z)(s):=Qz(s)$ defines a positive operator. For $L_2[X]$, we generalize this approach using more complicated vectors of operators to obtain forms such as $V(z)=\ip{\mathcal{Z}(z)}{Q\mathcal{Z}(z)}_{L_2}$, as will be discussed in the following sections. Unfortunately, however, positivity in the inner product on $Z_{m,n,K}$ is difficult to enforce directly. The reason, through some abuse of notation, is that unlike the $L_2$ inner product, for an arbitrary matrix $T$, $\ip{z}{T^T T z}_{Z_{m,n,K}}\neq \ip{T z}{Tz}_{Z_{m,n,K}}$. This difficulty may be overcome, however, by defining a transformation from $Z_{m,n,K}$ to $\mathbb{R}^m \times L_2^n[-\tau_K,0]$. Hence, our positive operators on elements of $Z_{m,n,K}$ will be a combination of a transformation from $Z_{m,n,K}$ to $\mathbb{R}^m \times L_2^n[-\tau_K,0]$ and a positive quadratic form defined on the space $\mathbb{R}^m \times L_2^n[-\tau_K,0]$.
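To see concretely why the square-root argument fails in the weighted inner product, consider the following scalar illustration (our own, not part of the formal development), with $m=n=K=1$ and $\ip{\bmat{y\\ \psi}}{\bmat{x\\ \phi}}_{Z}=\tau y^Tx+\int_{-\tau}^0\psi(s)^T\phi(s)ds$:

```latex
% Scalar illustration (m = n = K = 1): the formal transpose is not the Z-adjoint.
\[
\left(\mathcal{P}\bmat{x\\ \phi}\right)(s):=\bmat{0\\ x}
\quad\Longrightarrow\quad
\ip{\mathcal{P}z}{\mathcal{P}z}_{Z}=\int_{-\tau}^0 x^2\,ds=\tau x^2,
\]
\[
\left(\mathcal{P}^T\bmat{y\\ \psi}\right)(s)=\bmat{\int_{-\tau}^0 \psi(\theta)\,d\theta\\ 0}
\quad\Longrightarrow\quad
\ip{z}{\mathcal{P}^T\mathcal{P}z}_{Z}=\tau x\int_{-\tau}^0 x\,ds=\tau^2 x^2.
\]
```

The true adjoint instead carries a factor $1/\tau$ in its finite-dimensional component, $\left(\mathcal{P}^*\bmat{y\\ \psi}\right)(s)=\bmat{\frac{1}{\tau}\int_{-\tau}^0 \psi(\theta)\,d\theta\\ 0}$, which is precisely the kind of weighting mismatch the transformation to $\mathbb{R}^m \times L_2^n[-\tau_K,0]$ described above removes.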
First, consider the operator, $\mathcal{P}:X\rightarrow X$, \begin{align} &\left(\mathcal{P} \bmat{x\\ \phi_i}\right)(s) =\bmat{ P x + \sum_{i=1}^K \int_{-\tau_i}^0 Q_i(s)\phi_i(s) d s \\ \tau_K Q_i(s)^T x + \tau_K S_i(s)\phi_i(s) + \sum_{j=1}^K \int_{-\tau_j}^0 R_{ij}(s,\theta)\phi_j(\theta)d \theta }. \end{align}
Then, for $\bmat{x\\ \phi_i} \in X$, if we have that $\phi_i(s)=\phi_j(s)=\phi(s)$ for all $i,j\in[K]$ and $s\in [-\tau_K,0]$, we have the obvious representation \begin{align} &\ip{\bmat{x\\ \phi_i(s)}}{\mathcal{P}\bmat{x\\ \phi_i(s)}}_{Z_{m,n,K}}\\
&=\int_{-\tau_K}^0\bmat{x \\ \phi(s)}^T M(s) \bmat{x \\ \phi(s)}ds + \int_{-\tau_K}^0\int_{-\tau_K}^0\phi(s)^T N(s,\theta)\phi(\theta)ds\,d\theta \end{align} where \begin{align} M(s)&=\begin{cases} \bmat{P & \tau_K \sum_{j=i}^K Q_j(s)\\ \tau_K \sum_{j=i}^K Q_j(s)^T & \tau_K \sum_{j=i}^K S_j(s) } & s \in [-\tau_{i}, -\tau_{i-1}]\\ \end{cases}\\ N(s,\theta)&=\begin{cases} \sum_{l=i}^K \sum_{m=j}^K R_{lm}(s,\theta) & s \in [-\tau_{i}, -\tau_{i-1}],\, \theta \in [-\tau_{j}, -\tau_{j-1}]\\ \end{cases}\\ \end{align} Then if we constrain $M$ and $N$ to define a positive operator on $\mathbb{R}^m \times L_2^n[-\tau_K,0]$, $\mathcal{P}$ will define a positive operator on $Z_{m,n,K}$.
Unfortunately, while $\phi_i(s)=\phi_j(s)=\phi(s)$ holds for solutions of Eqn~\eqref{eqn:delay_eqn}, elements of the dual state $z=\mathcal{P}\phi$ do not necessarily satisfy this property. Indeed, for an arbitrary $\bmat{x & \phi_i}^T \in X$, the restriction $\phi_i(s)=\phi_j(s)$ would place unreasonable additional constraints on the variables $Q_i$, $S_i$ and $R_{ij}$. For this reason, we instead perform a change of variables to obtain \begin{align} &\ip{\bmat{x\\ \phi_i(s)}}{\mathcal{P}\bmat{x\\ \phi_i(s)}}_{Z_{m,n,K}}\\
&=\int_{-\tau_K}^0\bmat{x \\ \hat \phi(s)}^T M(s) \bmat{x \\ \hat \phi(s)}ds + \int_{-\tau_K}^0\int_{-\tau_K}^0\hat\phi(s)^T N(s,\theta)\hat \phi(\theta)ds\,d\theta \end{align} where, if we define $a_i=\frac{\tau_i-\tau_{i-1}}{\tau_i}$, then \begin{align} M(s)&=\begin{cases} \bmat{P & \frac{\tau_K}{a_i} Q_i(\frac{s+\tau_{i-1}}{a_i})\\ \frac{\tau_K}{a_i} Q_i(\frac{s+\tau_{i-1}}{a_i})^T & \frac{\tau_K}{a_i} S_i(\frac{s+\tau_{i-1}}{a_i}) } & s \in [-\tau_i, -\tau_{i-1}]\\ \end{cases}\\ N(s,\theta)&=\begin{cases} R_{ij}(\frac{s+\tau_{i-1}}{a_i},\frac{\theta+\tau_{j-1}}{a_j}) & s \in [-\tau_i, -\tau_{i-1}],\, \theta \in [-\tau_j, -\tau_{j-1}]\\ \end{cases}\\ \end{align} and \[ \hat \phi(s)=\begin{cases} \phi_i(\frac{s+\tau_{i-1}}{a_i}) & s \in [-\tau_i, -\tau_{i-1}].\\ \end{cases} \] Thus, if $M$ and $N$ define a positive operator on $\mathbb{R}^m \times L_2^n[-\tau_K,0]$, then $\mathcal{P}$ defines a positive operator on $Z_{m,n,K}$. Indeed, it can be shown that positivity of the operator $\mathcal{P}_{\{P,Q_i,S_i,R_{ij}\}}$ on $Z_{m,n,K}$ is equivalent~\cite{gu_2010} to positivity of the multiplier and integral operator defined by the piecewise-continuous functions $M$ and $N$ on $L_2^n[-\tau_K,0]$ where we assume the $\hat \phi_i$ are all independent. To simplify notation, we will denote the transformation between $P,Q_i,S_i,R_{ij}$ and $M,N$ as \[ \{M,N\}:=\mathcal{L}_1(P,Q_i,S_i,R_{ij}) \] if $a_i=\frac{\tau_i-\tau_{i-1}}{\tau_i}$ and \begin{align} M(s)&=\begin{cases} \bmat{P & \frac{\tau_K}{a_i} Q_i(\frac{s+\tau_{i-1}}{a_i})\\ \frac{\tau_K}{a_i} Q_i(\frac{s+\tau_{i-1}}{a_i})^T & \frac{\tau_K}{a_i} S_i(\frac{s+\tau_{i-1}}{a_i}) } & s \in [-\tau_i, -\tau_{i-1}]\\ \end{cases}\notag \\ N(s,\theta)&=\begin{cases} R_{ij}(\frac{s+\tau_{i-1}}{a_i},\frac{\theta+\tau_{j-1}}{a_j}) & s \in [-\tau_i, -\tau_{i-1}],\, \theta \in [-\tau_j, -\tau_{j-1}]\\ \end{cases}\label{eqn:linop1} \end{align}
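For intuition, consider this change of variables in the simplest nontrivial case (our own worked instance): $K=2$ with $0=\tau_0<\tau_1<\tau_2$, so that

```latex
\[
a_1=\frac{\tau_1-\tau_0}{\tau_1}=1,\qquad a_2=\frac{\tau_2-\tau_1}{\tau_2},\qquad
\hat\phi(s)=\begin{cases}\phi_1(s) & s\in[-\tau_1,0]\\[4pt]
\phi_2\!\left(\dfrac{s+\tau_1}{a_2}\right) & s\in[-\tau_2,-\tau_1].\end{cases}
\]
```

The second branch maps $[-\tau_2,-\tau_1]$ onto $[-\tau_2,0]$: at $s=-\tau_1$ it returns $\phi_2(0)$ and at $s=-\tau_2$ it returns $\phi_2(-\tau_2)$, so the pair $(\phi_1,\phi_2)$, defined on overlapping domains, is repackaged as a single function of $s\in[-\tau_2,0]$. The two branches need not agree at $s=-\tau_1$, which is immaterial for an element of $L_2$.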
\begin{lem} Let $\{M,N\}:=\mathcal{L}_1(P,Q_i,S_i,R_{ij})$ and \begin{equation} \left(\mathcal{P}_{M,N}x\right)(s):= M(s)x(s) + \int_{-\tau_K}^0 \bmat{0_{n} & 0_n\\ 0_n & N(s,\theta)}x(\theta)d \theta.
\end{equation} If $\ip{x}{\mathcal{P}_{M,N}x}_{L_2^{m+n}}\ge \alpha \norm{x}_{L_2^{m+n}}^2$ for some $\alpha>0$ and all $x \in \mathbb{R}^m \times L_2^n[-\tau_K,0]$, then $\ip{x}{\mathcal{P}_{\{P,Q_i,S_i,R_{ij}\}}x}_{Z_{m,n,K}}\ge \alpha \norm{x}^2_{Z_{m,n,K}}$ for all $x \in Z_{m,n,K}$. \end{lem} \begin{IEEEproof} The proof follows directly from the observation that $\norm{\bmat{x\\ \hat \phi}}_{L_2^{m+n}}^2 = \norm{\bmat{x\\ \phi_i}}_{Z_{m,n,K}}^2$. \end{IEEEproof} Note that if $Q_i, S_i$ and $R_{ij}$ are polynomials with variable coefficients, then the constraint $\{M,N\}=\mathcal{L}_1(P,Q_i,S_i,R_{ij})$ defines a linear equality constraint between the coefficients of $Q_i, S_i$ and $R_{ij}$ and the coefficients of the polynomials which define $M$ and $N$. In the following section, we will discuss how to enforce positivity of operators on $\mathbb{R}^m \times L_2^n[-\tau_K,0]$ defined by piecewise-polynomial multipliers and kernels.
\section{LMI conditions for Positivity of Multiplier and Integral Operators}\label{sec:positivity_LMI} In this Section, we define LMI-based conditions for positivity of operators of the form \begin{equation} \left(\mathcal{P}_{M,N}x\right)(s):= M(s)x(s) + \int_{-\tau_K}^0 N(s,\theta)x(\theta)d \theta.\label{eqn:operator_simple}
\end{equation} where $x \in L_2^n[-\tau_K,0]$ and $M$ and $N$ are continuous except possibly on $s,\theta\in \{-\tau_1,\cdots,-\tau_K\}$. In the following, for square-integrable functions $M,N$, we will retain the slightly overloaded notation $\mathcal{P}_{M,N}$ as defined in Equation~\eqref{eqn:operator_simple}. Note that we initially consider positivity of the operator on $L_2^{m+n}[-\tau_K,0]$ and not the subspace $\mathbb{R}^m \times L_2^n[-\tau_K,0]$.
Our approach to positivity is based on the observation that a positive operator will always have a square root. If we assume that this square root is also of the form of operator~\eqref{eqn:operator_simple} with functions $M$ and $N$ piecewise-polynomial of bounded degree, then the results of this section give necessary and sufficient conditions for the positivity of~\eqref{eqn:operator_simple}. Note that although this assumption is restrictive, it is unclear whether it implies conservatism. For example, while not all positive polynomials are Sum-of-Squares, any positive polynomial can be approximated arbitrarily well in the sup norm on a bounded domain by a polynomial with a polynomial ``root''.
\begin{thm}\label{thm:pos_op_joint} For any functions $Y_1: [-\tau_K,0] \rightarrow \mathbb{R}^{m_1 \times n}$ and $Y_2: [-\tau_K,0] \times [-\tau_K,0] \rightarrow \mathbb{R}^{m_2 \times n}$, square integrable on $[-\tau_K,0]$, and any function $g$ with $g(s)\ge 0$ for $s \in [-\tau_K,0]$, suppose that
\begin{align} M(s) &= g(s) Y_1(s)^T Q_{11} Y_1(s) \label{def:M} \\ N(s,\theta) &= g(s) Y_1(s)^TQ_{12}Y_2(s,\theta) + g(\theta)Y_2(\theta,s)^T Q_{12}^T Y_1(\theta) \notag \\ &\qquad \qquad + \int_{-\tau_K}^0 g(\omega)Y_2(\omega,s)^T Q_{22}Y_2(\omega,\theta) \, d\omega \label{def:N}
\end{align} where $Q_{ij} \in \mathbb{R}^{m_i \times m_j}$ and \[ Q=\bmat{Q_{11} & Q_{12}\\Q_{12}^T& Q_{22}} \ge 0.
\] Then for $\mathcal{P}_{M,N}$ as defined in Equation~\eqref{eqn:operator_simple}, $\ip{x}{\mathcal{P}_{M,N}x}_{L_2^n} \ge 0$ for all $x \in L_2^n[-\tau_K,0]$. \end{thm} The proof of Theorem~\ref{thm:pos_op_joint} can be found in~\cite{peet_2014ACC}.
Theorem~\ref{thm:pos_op_joint} gives a linear parametrization of a cone of positive operators using positive semidefinite matrices. Note that there are few constraints on the functions $Y_1$ and $Y_2$. These functions serve as the basis for the multipliers and kernels found in the square root of $\mathcal{P}_{M,N}$. The class of multipliers and kernels defined by Theorem~\ref{thm:pos_op_joint} is thus determined by $Y_1$ and $Y_2$.
We now consider certain choices of $Y_1$ and $Y_2$ which yield piecewise-polynomial functions $M$ and $N$.
\subsection{Piecewise-Polynomial Multipliers and Kernels}\label{subsec:positivity_PC}
To define multipliers and kernels with discontinuities at known points, we divide the region of integration $[-\tau_K,0]$ into almost disjoint subregions $[-\tau_{i}, -\tau_{i-1}]$, $i \in [K]$ on which continuity holds and assume the functions are polynomial on these subregions. To do this, we introduce the indicator functions (not to be confused with the identity matrix)
\[ I_i(t) = \begin{cases}1 & t \in [-\tau_{i}, -\tau_{i-1}]\\ 0& \text{otherwise,} \end{cases} \quad i \in [K]
\] and the vector of indicator functions $J = \bmat{I_1 & \cdots & I_K}^T$. We can now define the basis vectors $Y_1$ and $Y_2$ which define the positivity conditions in Theorem~\ref{thm:pos_op_joint}. \[ Y_{1pc}(s) = Y_{1p}(s) \otimes J(s),\; Y_{2pc}(s,\theta) = Y_{2p}(s,\theta) \otimes J(s) \otimes J(\theta)
\] where \begin{equation} Y_{1p}(s) = Y_d(s) \otimes I_n,\qquad Y_{2p}(s,\theta) = Y_d(s,\theta) \otimes I_n.
\end{equation} and $Y_d(s)$ is a vector whose elements form a basis for the polynomials in the variable $s$ of degree $d$ or less, e.g., the vector of monomials. Note for $s \in \mathbb{R}$, $Y_{d}:[-\tau_K,0]\rightarrow \mathbb{R}^{d+1}$, hence $Y_{1p}:[-\tau_K,0]\rightarrow \mathbb{R}^{n(d+1)\times n}$, and $Y_{1pc}:[-\tau_K,0]\rightarrow \mathbb{R}^{nK(d+1)\times n}$. Similarly, $Y_d(s,\theta)\in \mathbb{R}^{q}$ where $q=(d+1)(d+2)/2$, $Y_{2p}(s,\theta)\in \mathbb{R}^{nq\times n}$, and $Y_{2pc}(s,\theta)\in \mathbb{R}^{nKq\times n}$.
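For instance (an illustrative choice, not the only admissible one), with $d=2$ and $n=1$ one may take

```latex
\[
Y_d(s)=\bmat{1\\ s\\ s^2},\qquad
Y_d(s,\theta)=\bmat{1\\ s\\ \theta\\ s^2\\ s\theta\\ \theta^2},
\]
```

so that $d+1=3$ and $q=(d+1)(d+2)/2=6$; the Kronecker products with $I_n$ and with $J$ then replicate these monomial bases for each state dimension and on each subinterval $[-\tau_i,-\tau_{i-1}]$, respectively.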
\begin{thm}\label{thm:positivity_PC} If $Y_{1}(s)=Y_{1pc}(s)$ and $Y_{2}(s,\theta) = Y_{2pc}(s,\theta)$ and $M$ and $N$ are defined as in Equations~\eqref{def:M} and~\eqref{def:N}, then $M$ and $N$ are piecewise-polynomial matrices ($\mathbb{R}^{n \times n}$) of degree $2d$ with possible discontinuities at $s,\theta \in \{-\tau_i\}_i$. In this case, if $g_i(s)\ge 0$ for $s \in [-\tau_i,-\tau_{i-1}]$, the functions $M$ and $N$ can be defined piecewise as
\[ M(s) = \begin{cases}M_i(s) & s \in [-\tau_{i}, -\tau_{i-1}]\end{cases}
\] where
\[ M_{i}(s) = g_i(s) Y_{1p}(s)^T Q_{11,ii} Y_{1p}(s)
\] where $Q_{11,i,j}\in \mathbb{R}^{n(d+1)\times n(d+1)}$ is the $i,j$th block of $Q_{11}\in \mathbb{S}^{n(d+1)K}$. Likewise,
\[ N(s,\theta) = \begin{cases}N_{ij}(s,\theta) & s \in [-\tau_{i}, -\tau_{i-1}] \,\,\text{and} \,\,\theta \in [-\tau_{j}, -\tau_{j-1}]\end{cases}
\] where
\begin{align} &N_{ij} = g_i(s) Y_{1p}(s)^TQ_{12,i,(i-1)K+j}Y_{2p}(s,\theta) \\ &\qquad \qquad + g_j(\theta)Y_{2p}(\theta,s)^T Q_{12,(j-1)K+i,j}^T Y_{1p}(\theta)\\ & + \sum_{l=1}^K \int_{-\tau_l}^{-\tau_{l-1}} g_l(\omega_l) Y_{2p}(\omega_l,s)^T Q_{22,i+(l-1)K,j+(l-1)K}Y_{2p}(\omega_l,\theta) \, d\omega_l
\end{align} where $Q_{12,i,j}\in \mathbb{R}^{n(d+1)\times nq}$ is the $i,j$th block of $Q_{12}\in \mathbb{R}^{n(d+1)K\times nqK}$ and $Q_{22,i,j}\in \mathbb{R}^{nq\times nq}$ is the $i,j$th block of $Q_{22}\in \mathbb{S}^{nqK}$. \end{thm}
The proof of Theorem~\ref{thm:positivity_PC} can be found in~\cite{peet_2014ACC}.
For the intervals $s \in [-\tau_i,-\tau_{i-1}]$, the choice of $g_i$ is typically either $g_i(s)=1$ or $g_i=-(s+\tau_i)(s+\tau_{i-1})$. Inclusion of $g \neq 1$ is a variation of the classical Positivstellensatz approach to local positivity, as can be found in, e.g.~\cite{stengle_1973,schmudgen_1991,putinar_1993}. To improve accuracy, we typically use a combination of both, although we may set $Q_{12},Q_{22}=0$ for the latter to reduce the number of variables. To simplify notation, throughout the remainder of the paper, we will use the notation $\{M,N\}\in \Xi_{d,n,K}$ to denote the LMI constraints on the coefficients of the polynomials $M,N$ implied by the conditions of Theorem~\ref{thm:positivity_PC} using both $g_i(s)=1$ and $g_i=-(s+\tau_i)(s+\tau_{i-1})$ as \[ \Xi_{d,n,K}:=\{\{M,N\}\,:\, \substack{ M=M_1+M_2,\, N=N_1+N_2,\, \text{ where $\{M_1,N_1\}$ and $\{M_2,N_2\}$ satisfy the}\\ \text{conditions of Thm.~\ref{thm:positivity_PC} with $g_i=1$ and $g_i=-(s+\tau_i)(s+\tau_{i-1})$, respectively.}} \} \]
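A one-variable example (our own) shows why the multiplier $g_i\neq 1$ adds expressiveness: on $[-1,0]$ the polynomial $M(s)=-s(s+1)$ is nonnegative on the interval but changes sign outside it, so it cannot be a global sum of squares; with the interval multiplier it is certified using the trivial basis:

```latex
\[
M(s)=g(s)\,Y_d(s)^T Q_{11} Y_d(s),\qquad
g(s)=-s(s+1)\ge 0 \text{ on } [-1,0],\quad Y_d(s)=1,\quad Q_{11}=1\ge 0.
\]
```

Combining representations with $g_i=1$ and $g_i=-(s+\tau_i)(s+\tau_{i-1})$ thus enlarges the cone of certifiably nonnegative multipliers on each subinterval.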
\section{Spacing Functions and Mixed State-Space}\label{sec:spacing}
The result in Theorem~\ref{thm:pos_op_joint} as stated is a parametrization of operators which are positive on the space $L_2^n[-\tau_K,0]$. However, as in Section~\ref{sec:positivity}, we instead need to enforce positivity on the subspace $\mathbb{R}^m \times L_2^n[-\tau_K,0] \subset L_2^{m+n}[-\tau_K,0]$.
To enforce positivity on a subspace $X\subset L_2^n[-\tau_K,0]$, we turn to so-called ``spacing functions,'' a concept closely tied to projection operators.
\begin{thm}\label{prop:spacing} Suppose $X$ is a closed subspace of a Hilbert space $Z$. Then $\ip{u}{\mathcal{P}u}\ge 0$ for all $u \in X$ if and only if there exist operators $\mathcal{M}$ and $\mathcal{T}$ such that $\mathcal{P}= \mathcal{M}+\mathcal{T}$ and $\ip{u}{\mathcal{M}u}\ge 0$ for all $u \in Z$ and $\ip{u}{\mathcal{T}u}=0$ for all $u \in X$. \end{thm} The proof of Theorem~\ref{prop:spacing} can be found in~\cite{peet_2014ACC}.
This result implies that the class of operators which are positive on $X$ is the direct sum of the cone of operators $\mathcal{M}$ which are positive on $Z$ and the space of operators $\mathcal{T}$ which are orthogonal to $X$. Taking $Z=L_2^{n+m}$, we already know how to parameterize $\mathcal{M}$. The question, then, is how to parameterize the ``spacing'' operators $\mathcal{T}$.
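A finite-dimensional analogue (our own illustration, not part of the development) may help: on $Z=\mathbb{R}^2$ with $X=\{\bmat{a & a}^T : a\in\mathbb{R}\}$, the indefinite matrix $P=\bmat{0&1\\1&0}$ satisfies $u^TPu=2a^2\ge 0$ on $X$, and the decomposition of Theorem~\ref{prop:spacing} is realized by

```latex
\[
P=\bmat{0 & 1\\ 1 & 0}
= \underbrace{\bmat{1 & 0\\ 0 & 1}}_{\mathcal{M}}
+ \underbrace{\bmat{-1 & 1\\ 1 & -1}}_{\mathcal{T}}, \qquad
u^T\mathcal{T}u = -a^2 + 2a^2 - a^2 = 0 \;\text{ for } u=\bmat{a\\ a}.
\]
```

Here $\mathcal{M}$ is positive on all of $Z$, while $\mathcal{T}$ is a ``spacer'' whose quadratic form vanishes identically on $X$.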
\subsection{A Class of Spacing Functions}
For both the single-delay and multi-delay case, we enforce positivity on a subspace of the form $\mathbb{R}^m \times L_2^{n}[-\tau_K,0]\subset L_2^{m+n}[-\tau_K,0]$. For this subspace, we define a class of spacing functions as follows. \begin{thm}\label{thm:mixed_pos} Suppose that $F$ and $H$ are defined as \begin{align} &F(s) = \bmat{K(s) + \frac{1}{\tau_K}\int_{-\tau_K}^0 \int_{-\tau_K}^0 L_{11}(\omega,t)d\omega dt& \int_{-\tau_K}^0 L_{12}(\omega,s)d\omega
\\ \int_{-\tau_K}^0 L_{21}(s,\omega)d\omega & 0}\\ &H(s,\theta) = - \bmat{L_{11}(s,\theta)&L_{12}(s,\theta)\\L_{21}(s,\theta)&0} \end{align} for some square-integrable functions $K$ and $L_{ij}$ where $K(s)\in \mathbb{R}^{m\times m}$, $L_{11}(s,\theta)\in \mathbb{R}^{m\times m}$, $L_{12}(s,\theta)\in \mathbb{R}^{m\times n}$, and $L_{21}(s,\theta)\in \mathbb{R}^{n\times m}$ such that $\int_{-\tau_K}^0 K(s) ds =0$. Then, if \[ \mathcal{T}z(s):=F(s)z(s)+\int_{-\tau_K}^0H(s,\theta)z(\theta)\, d\theta \] we have, for any $z\in \mathbb{R}^m \times L_{2}^{n}$, \[ \ip{z}{\mathcal{T}z}_{L_2^{m+n}}=0. \] \end{thm}
\begin{IEEEproof} The proof is straightforward. For $z(s)= \bmat{c & y(s)}^T$ with $c \in \mathbb{R}^m$ and $y \in L_2^{n}[-\tau_K,0]$, we have
\begin{align} &\ip{z}{\mathcal{T}z}_{L_2^{m+n}}=\int_{-\tau_K}^0 \bmat{c \\ y(s)}^T \bmat{K(s) + \frac{1}{\tau_K}\int_{-\tau_K}^0 \int_{-\tau_K}^0 L_{11}(\omega,t)d\omega dt& \int_{-\tau_K}^0 L_{12}(\omega,s)d\omega
\\ \int_{-\tau_K}^0 L_{21}(s,\omega)d\omega & 0} \bmat{c \\ y(s)} ds \\ &\qquad \qquad \qquad -\int_{-\tau_K}^0 \int_{-\tau_K}^0 \bmat{c \\ y(s)}^T \bmat{L_{11}(s,\theta)&L_{12}(s,\theta)\\L_{21}(s,\theta)&0} \bmat{c \\ y(\theta)} d\theta ds\\ &=\int_{-\tau_K}^0 \bmat{c \\ y(s)}^T \bmat{\frac{1}{\tau_K}\int_{-\tau_K}^0 K(\omega)d\omega & 0
\\ 0 & 0} \bmat{c \\ y(s)} ds \\ &\qquad \qquad \qquad +\int_{-\tau_K}^0 \int_{-\tau_K}^0 \bmat{c \\ y(s)}^T \bmat{L_{11}(s,\theta)-L_{11}(s,\theta)&L_{12}(s,\theta)-L_{12}(s,\theta)\\L_{21}(s,\theta)-L_{21}(s,\theta)&0} \bmat{c \\ y(\theta)} d\theta \,ds =0.
\end{align} \end{IEEEproof} For simplicity, we use $\{F,H\}\in \Theta_{m,n,K}$ to denote the conditions of Theorem~\ref{thm:mixed_pos} which, if $K$ and $L_{ij}$ are piecewise-polynomial matrices, are a set of linear equality constraints on the coefficients of the polynomials which define $F$ and $H$. \[ \Theta_{m,n,K}:=\{\{F,H\}\,:\, F,H \text{ satisfy the conditions of Thm.~\ref{thm:mixed_pos}.}\} \] For convenience, given $\{F,H\} \in \Theta_{m,n,K}$, we define the operator $\mathcal{T}_{F,H}: L_2^{m+n}\rightarrow L_2^{m+n}$ as \[ \left(\mathcal{T}_{F,H}z\right)(s):=F(s)z(s)+\int_{-\tau_K}^0H(s,\theta)z(\theta)\, d\theta. \]
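As a numerical sanity check on Theorem~\ref{thm:mixed_pos}, the following short Python sketch (our own, not part of the formal development; the helper `trap`, and the particular choices of $K$, $L_{ij}$, and $z$ are all illustrative) discretizes $\ip{z}{\mathcal{T}_{F,H}z}_{L_2^{m+n}}$ for a scalar case $m=n=1$, $\tau_K=1$, and confirms that it vanishes up to quadrature error.

```python
import numpy as np

# Scalar spacing-operator check: m = n = 1, tau_K = 1, domain [-1, 0].
tau, N = 1.0, 2001
s = np.linspace(-tau, 0.0, N)

def trap(f, x):
    # composite trapezoid rule along the last axis
    return np.sum((f[..., 1:] + f[..., :-1]) / 2.0 * np.diff(x), axis=-1)

c = 2.0                          # finite-dimensional part of z
y = np.sin(3.0 * s) + s**2       # L2 part of z

K = s + 0.5                      # satisfies int_{-1}^0 K(s) ds = 0
# Spacing kernels (arbitrary polynomial choices):
#   L11(s,t) = s*t,  L12(s,t) = s + t,  L21(s,t) = s*t^2.
# Closed-form omega-integrals appearing in F(s):
F11 = K + 0.25                   # K(s) + (1/tau) * intint L11 = K(s) + 1/4
F12 = s - 0.5                    # int_{-1}^0 L12(w, s) dw
F21 = s / 3.0                    # int_{-1}^0 L21(s, w) dw

# Multiplier part: int z(s)^T F(s) z(s) ds   (F22 = 0)
val = trap(F11 * c * c + F12 * c * y + F21 * y * c, s)

# Kernel part: -intint z(s)^T [[L11, L12], [L21, 0]] z(theta) dtheta ds
S, T = np.meshgrid(s, s, indexing="ij")
integrand = -(S * T) * c * c - (S + T) * c * y[None, :] - (S * T**2) * y[:, None] * c
val += trap(trap(integrand, s), s)

print(abs(val))                  # vanishes up to quadrature error
```

Each cross term in the multiplier part cancels exactly against the corresponding kernel term, as in the proof of Theorem~\ref{thm:mixed_pos}, so the printed residual reflects only the discretization.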
\section{SOS Conditions for Dual Stability in the Case of a Single Delay}\label{sec:dual_LMI_SD} We now state an LMI representation of the dual stability condition for a single delay ($\tau=\tau_K$). \begin{thm}\label{thm:dualLMI_SD} Suppose there exist $d \in \mathbb{N}$, constant $\epsilon>0$, functions $S\in W_2^{n\times n}[-\tau,0]$, $R\in W_2^{n\times n}[[-\tau,0]\times [-\tau,0]]$, $\{F_1,H_1\}\in \Theta_{n,n,1}$, and $\{F_2,H_2\}\in \Theta_{2n,n,1}$ where $R(s,\theta) = R(\theta,s)^T$ and $S(s) \in \mathbb{S}^n$ such that \[ \left\{M,N\right\} \in \Xi_{d,2n,1} \] and \[ \left\{-D,-E\right\}\in \Xi_{d,3n,1} \] where \begin{align} &M(s)=\bmat{\tau R(0,0) + \tau S(0)& \tau R(0,s) \\ \tau R(s,0)&\tau S(s)}+F_1(s) -\epsilon I_{2n}, \\ &N(s,\theta)=\bmat{0_{n} & 0_{n}\\0_{n}&R(s,\theta)}+H_1(s,\theta), \end{align} \begin{align} &D(s):=\bmat{D_0 & \tau V(s)\\
\tau V(s)^T & \tau \dot S(s)+\epsilon I_n }+F_2(s) , \\ &D_0:=\bmat{S_{11}+S_{11}^T +\epsilon I_n& S_{12} \\
S_{12}^T & S_{22}} ,\quad V(s)=\bmat{S_{13}(s)\\0},\\ &S_{11} := \tau A_0(R(0,0)+S(0)) + \tau A_1 R(-\tau,0) +\frac{1}{2} S(0), \\ &S_{12} := \tau A_1 S(-\tau), \quad S_{22} := - S(-\tau), \\ &S_{13}(s) := A_0 R(0,s)+ A_1 R(-\tau,s)+ \dot R(s,0)^T, \\
&E(s,\theta):=\bmat{0_{2n} & 0_{2n,n}\\0_{n,2n}&G(s,\theta)}+H_2(s,\theta) \\
&G(s,\theta):=\frac{d}{ds}R(s,\theta) + \frac{d}{d\theta} R(s,\theta).
\end{align}
Then the system defined by Equation~\eqref{eqn:delay_eqn} is exponentially stable.
\end{thm} \begin{IEEEproof} Consider the operator \begin{align} &\left(\mathcal{P}\bmat{x \\ \phi}\right)(s):= \bmat{ \tau( R(0,0)+S(0))x + \int_{-\tau}^0 R(0,s)\phi(s)d s \\ \tau R(s,0)\phi(0) + \tau S(s)\phi(s) + \int_{-\tau}^0 R(s,\theta)\phi(\theta)d \theta } \end{align} Since $\{F_1,H_1\}\in \Theta_{n,n,1}$, and $\left\{M,N\right\} \in \Xi_{d,2n,1}$, by Lemma~\ref{prop:spacing} and Theorem~\ref{thm:positivity_PC}, we have for $x \in Z_{n,1}$ \begin{align} \ip{x}{\mathcal{P}x}_{L_2^{2n}}-\epsilon \norm{x}^2 =\ip{x}{\left(\mathcal{P}+\mathcal{T}_{F_1,H_1}\right) x}_{L_2^{2n}}-\epsilon \norm{x}^2 =\ip{x}{\mathcal{P}_{M,N} x}_{L_2^{2n}}\ge 0. \end{align} This establishes that $\ip{x}{\mathcal{P}x}_{L_2^{2n}} \ge\epsilon \norm{x}^2$ for all $x \in X$. Similarly, examine the operator \begin{align} &\left(\mathcal{D}\bmat{x \\ y \\ \phi}\right)(s):= \bmat{ D_0 \bmat{x \\ y} + \int_{-\tau}^0 V(s)\phi(s)ds\\ \tau V(s)^T\bmat{x & y}^T + \tau \dot S(s)\phi(s) + \int_{-\tau}^0 G(s,\theta)\phi(\theta)d \theta }. \end{align} Since $\{F_2,H_2\}\in \Theta_{2n,n,1}$, and $\left\{-D,-E\right\} \in \Xi_{d,3n,1}$, we have for $x \in Z_{2n,n,1}$ \begin{align} &\ip{\bmat{x \\ \phi(-\tau) \\ \phi}}{\mathcal{D}\bmat{x \\ \phi(-\tau) \\ \phi}}_{L_2^{3n}}+\epsilon \norm{\bmat{x \\ \phi}}^2 =\ip{\bmat{x \\ \phi(-\tau) \\ \phi}}{\left(\mathcal{D}+\mathcal{T}_{F_2,H_2}\right) \bmat{x \\ \phi(-\tau) \\ \phi}}_{L_2^{3n}}+\epsilon \norm{\bmat{x \\ \phi}}^2\\ & =\ip{\bmat{x \\ \phi(-\tau) \\ \phi}}{\mathcal{P}_{D,E} \bmat{x \\ \phi(-\tau) \\ \phi}}_{L_2^{3n}}\le 0. \end{align} This likewise establishes that \[ \ip{\bmat{x \\ \phi(-\tau) \\ \phi}}{\mathcal{D}\bmat{x \\ \phi(-\tau) \\ \phi}}_{L_2^{3n}}\le-\epsilon \norm{\bmat{x \\ \phi}}^2 \] for all $\bmat{x\\ \phi} \in X$. By assumption, $R(s,\theta) = R(\theta,s)^T$ and $S(s) \in \mathbb{S}^n$ and hence Theorem~\ref{thm:dual_SD} establishes exponential stability of Equation~\eqref{eqn:delay_eqn}. \end{IEEEproof}
\section{SOS Conditions for Dual Stability in the Case of Multiple Delays}\label{sec:dual_LMI_MD}
\begin{thm}\label{thm:dualLMI_MD} Suppose there exist $d \in \mathbb{N}$, constant $\epsilon>0$, matrix $P\in \mathbb{R}^{n\times n}$, functions $S_i, Q_i \in W_2^{n\times n}[-\tau_i,0]$, $R_{ij}\in W_2^{n\times n}\left[[-\tau_i,0]\times[-\tau_j,0]\right]$ for $i,j \in [K]$, $\{F_1,H_1\}\in \Theta_{n,n,K}$, and $\{F_2,H_2\}\in \Theta_{n(K+1),n,K}$ such that \[ \left\{M,N\right\} \in \Xi_{d,2n,K} \qquad \text{and}\qquad \left\{-D,-E\right\}\in \Xi_{d,n(K+2),K}, \]
where \begin{align} M(s)&=M_0(s)+F_1(s)-\epsilon I_{2n} \quad \text{and} \quad N(s,\theta)=\bmat{0_{n} & 0_{n}\\0_{n}&N_0(s,\theta)}+H_1(s,\theta), \end{align} where \[ \{M_0,N_0\}:=\mathcal{L}_1(P-\epsilon I_n,Q_i,S_i-\epsilon I_n,R_{ij}) \] and \begin{align} D(s)&=D_0(s)+F_2(s),\qquad E(s,\theta)=\bmat{0_{n(K+1)} & 0_{n(K+1),n}\\0_{n,n(K+1)}&E_0(s,\theta)}+H_2(s,\theta), \end{align} where \[ \{D_0,E_0\}:=\mathcal{L}_1(D_1,V_i,\dot S_i + \epsilon I_n ,G_{ij}) \]
and where \begin{align} &D_1:=\bmat{C_{0}+C_0^T+\epsilon I_n & C_{1} & \cdots &C_{K} \\C_{1}^T &-S_1(-\tau_1) & 0&0 \\ \vdots&0&\ddots&0 \\ C_{K}^T &0&0&-S_K(-\tau_K)},\\ &C_{0}:= A_0 P +\tau_K \sum_{i=1}^K ( A_i Q_i(-\tau_i)^T + \half S_i(0)), \notag\\ &C_{i}:=\tau_K A_iS_i(-\tau_i)\qquad i\in [K] \notag \\ &V_i(s):=\bmat{B_i(s)^T & 0 &\cdots &0}^T \qquad i\in [K]\\ &B_{i}(s):=A_0 Q_i(s) +\dot Q_i(s)+\sum_{j=1}^K R_{ji}(-\tau_j,s) \qquad i\in [K] \notag \\ &G_{ij}(s,\theta):=\frac{\partial}{\partial s}R_{ij}(s,\theta)+\frac{\partial}{\partial \theta}R_{ji}(s,\theta)^T, \quad i,j\in [K]. \end{align} Furthermore, suppose \begin{align} P&=\tau_KQ_i(0)^T + \tau_KS_i(0) \quad \text{ for } i\in [K],\\ S_i(s)&=S_i(s)^T,\qquad R_{ij}(s,\theta)=R_{ji}(\theta,s)^T \quad \text{ for } i,j \in [K],\\ Q_j(s)&=R_{ij}(0,s)\quad \text{ for } i,j\in [K]. \end{align} Then the system defined by Equation~\eqref{eqn:delay_eqn} is exponentially stable.
\end{thm}
\begin{IEEEproof} Consider the operator $\mathcal{P}:=\mathcal{P}_{\{P,Q_i,S_i,R_{ij}\}}$.
Since $\{F_1,H_1\}\in \Theta_{n,n,K}$, and $\left\{M,N\right\} \in \Xi_{d,2n,K}$, by Lemma~\ref{prop:spacing} and Theorem~\ref{thm:positivity_PC}, we have for $x \in Z_{n,K}$ \begin{align} \ip{x}{\mathcal{P}x}_{Z_{n,K}}-\epsilon \norm{x}^2 =\ip{\hat x}{\left(\mathcal{P}_{M,N}-\mathcal{T}_{F_1,H_1}\right) \hat x}_{L_2^{2n}} =\ip{\hat x}{\mathcal{P}_{M,N} \hat x}_{L_2^{2n}}\ge 0, \end{align} where $\hat x \in L_2^{2n}$. This establishes that $\ip{x}{\mathcal{P}x}_{Z_{n,K}} \ge\epsilon \norm{x}^2$ for all $x \in X$. Similarly, examine the operator \begin{align} \mathcal{D}\bmat{\bmat{x \\ \phi_1(-\tau_1) \\ \vdots \\ \phi_K(-\tau_K)}\\ \phi_i}(s)=\bmat{\bmat{C_{0}+C_0^T+\epsilon I_n & C_{1} & \cdots &C_{K} \\C_{1}^T &-S_1(-\tau_1) & 0&0 \\ \vdots&0&\ddots&0 \\ C_{K}^T &0&0&-S_K(-\tau_K)}\bmat{x \\ \phi_1(-\tau_1) \\ \vdots \\ \phi_K(-\tau_K)} + \sum_{i=1}^K\int_{-\tau_i}^0 \bmat{B_{i}(s)\\ 0 \\ \vdots \\ 0}\phi_i(s)ds
\\ \tau_K B_{i}(s)^T x + \tau_K \dot S_i(s)\phi_i(s)+\sum_{j=1}^K \int_{-\tau_j}^0G_{ij}(s,\theta)\phi_j(\theta) d \theta }. \end{align} Since $\{F_2,H_2\}\in \Theta_{n(K+1),n,K}$, and $\left\{-D,-E\right\} \in \Xi_{d,n(K+2),K}$, we have for $\bmat{x \\ \phi_i} \in Z_{n,K}$ \begin{align} &\ip{\bmat{\bmat{x \\ \phi_1(-\tau_1) \\ \vdots \\ \phi_K(-\tau_K)}\\ \phi_i}}{\mathcal{D}\bmat{\bmat{x \\ \phi_1(-\tau_1) \\ \vdots \\ \phi_K(-\tau_K)}\\ \phi_i}}_{Z_{n(K+1),n,K}}+\epsilon \norm{\bmat{x \\ \phi_i}}^2 =\ip{z}{\left(\mathcal{P}_{D,E}-\mathcal{T}_{F_2,H_2}\right) z}_{L_2^{n(K+2)}}\\ & =\ip{z}{\mathcal{P}_{D,E} z}_{L_2^{n(K+2)}} \le 0. \end{align} This likewise establishes that \[ \ip{\bmat{\bmat{x \\ \phi_1(-\tau_1) \\ \vdots \\ \phi_K(-\tau_K)}\\ \phi_i}}{\mathcal{D}\bmat{\bmat{x \\ \phi_1(-\tau_1) \\ \vdots \\ \phi_K(-\tau_K)}\\ \phi_i}}_{Z_{n(K+1),n,K}}\le-\epsilon \norm{\bmat{x \\ \phi_i}}^2 \] for all $\bmat{x\\ \phi_i} \in X$. By assumption, $P=\tau_KQ_i(0)^T + \tau_KS_i(0)$, $S_i(s) \in \mathbb{S}^n$, $Q_j(s)=R_{ij}(0,s)$ and $R_{ij}(s,\theta)=R_{ji}(\theta,s)^T$. Hence Theorem~\ref{thm:dual_MD} establishes exponential stability of Equation~\eqref{eqn:delay_eqn}. \end{IEEEproof}
\section{A Matlab Toolbox Implementation}
\label{sec:toolbox} To assist with the application of these results, we have created a library of functions for verifying the stability conditions described in this paper. These libraries make use of modified versions of the SOSTOOLS~\cite{prajna_2002} and MULTIPOLY toolboxes coupled with either SeDuMi~\cite{sturm_1999} or Mosek. A complete package can be downloaded from~\cite{mmpeet_web}. Key examples of functions included are:
\begin{enumerate} \item \verb+[M,N]=sosjointpos_mat_ker_ndelay.m+
\begin{itemize} \item Declares a positive piecewise-polynomial multiplier and kernel pair which satisfies $\{M,N\} \in \Xi_{d,n,K}$.
\end{itemize}
\item \verb+sosmateq.m+
\begin{itemize} \item Declares a matrix-valued equality constraint.
\end{itemize}
\item \verb+[F,H]=sosspacing_mat_ker_ndelay.m+
\begin{itemize} \item Declares a spacing function pair which satisfies $\{F,H\}\in \Theta_{n,n,K}$.
\end{itemize} \end{enumerate}
The functions are implemented within the pvar framework of SOSTOOLS and the user must have some familiarity with this relatively intuitive language to utilize these functions. Note also that the entire toolbox and supporting modified implementations of SOSTOOLS and MULTIPOLY must be added to the path for these functions to execute.
\paragraph{Pseudocode} To illustrate how these conditions can be efficiently coded using the Matlab toolbox, we give a pseudocode implementation of the conditions of Theorem~\ref{thm:dualLMI_MD}. \begin{enumerate} \item \verb?[M,N]=sosjointpos_mat_ker_ndelay? \item \verb?[F1,H1]=sosspacing_mat_ker_ndelay? \item \verb?[D,E]=L(M+F1, N+H1)? \item \verb?[Q,R]=sosjointpos_mat_ker_ndelay? \item \verb?[F2,H2]=sosspacing_mat_ker_ndelay? \item \verb?sosmateq(D+F2+Q)? \item \verb?sosmateq(E+H2+R)? \end{enumerate}
Here we use the function $L$ to represent the map $\mathcal{L}_1$. An optimized version of the code is contained in\\ \verb+solver_ndelay_nd_dual_joint.m+.
\section{Numerical Validation}\label{sec:validation} In this section, we apply the dual stability condition to a battery of numerical examples in order to verify that the proposed stability conditions are not conservative. In each case, the maximum stable value of a specified parameter is given for each degree $d$, and $d$ is increased until the maximum parameter value is tight to several decimal places. The computation time is also listed in CPU seconds on an Intel i7-5960X 3.0GHz processor. This time corresponds to the interior-point (IPM) iteration in SeDuMi and does not account for preprocessing, postprocessing, or for the time spent on polynomial manipulations when formulating the SDP using SOSTOOLS. Such polynomial manipulations can significantly exceed the SDP computation time.
\paragraph{Example A} First, we consider a simple example which is known to be asymptotically stable if and only if $\tau < \frac{\pi}{2}$. \[ \dot x(t)=-x(t-\tau) \] \[
\hbox{\begin{tabular}{c|c|c|c|c|c|c}
$d$ & $1$ & $2$ & $3$ & $4$ & $5$ & \text{analytic}\\ \hline $\tau_{\max}$ & 1.408 & 1.5707 & 1.5707 & 1.5707 & 1.5707& 1.5707\\ \hline CPU sec & .18 & .21 & .25 & .47 & .73 \\ \end{tabular}} \]
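As an independent sanity check on the analytic boundary $\tau_{\max}=\pi/2$ (separate from the SOS machinery and not part of the toolbox), the delay equation can simply be simulated. The following sketch uses a forward-Euler discretization with a constant unit initial history; the step size, horizon, and the two test delays are our own illustrative choices.

```python
from collections import deque

def dde_tail_peak(tau, T=60.0, dt=1e-3, tail=10.0):
    """Forward-Euler simulation of x'(t) = -x(t - tau) with history x(s) = 1;
    returns max |x(t)| over the final `tail` time units."""
    n = max(1, int(round(tau / dt)))
    hist = deque([1.0] * n)         # stores x(t - n*dt), ..., x(t - dt)
    x, peak = 1.0, 0.0
    for k in range(int(round(T / dt))):
        x_delayed = hist.popleft()  # x(t - tau)
        hist.append(x)              # current value enters the history buffer
        x = x + dt * (-x_delayed)
        if k * dt >= T - tail:
            peak = max(peak, abs(x))
    return peak

print(dde_tail_peak(1.0))   # tau < pi/2: the oscillation dies out
print(dde_tail_peak(1.8))   # tau > pi/2: the oscillation grows
```

For $\tau=1.0$ the rightmost characteristic root has real part about $-0.32$, so the tail amplitude is negligible; for $\tau=1.8$ the rightmost pair has crossed the imaginary axis and the amplitude grows.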
\paragraph{Example B} Next, we consider a well-studied 2-dimensional, single delay system. \[ \dot x(t)=\bmat{0 & 1 \\ -2 & .1}x(t)+\bmat{0 & 0\\1 & 0} x(t-\tau) \] \[
\hbox{\begin{tabular}{c|c|c|c|c|c}
$d$ & $1$ & $2$ & $3$ & $4$ & \text{limit}\\ \hline $\tau_{\max}$ & 1.6581 & 1.716 & 1.7178 & 1.7178 &1.7178 \\ $\tau_{\min}$ & .10019 & .10018 & .10017 & .10017 & .10017\\ \hline CPU sec & .25 & .344 & .678 & 1.725 & \\ \end{tabular}} \]
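The table for this example reports a stability window $(\tau_{\min},\tau_{\max})\approx(0.1002,\,1.7178)$: the undelayed system is unstable, and a moderate delay stabilizes it. A direct forward-Euler simulation (our own sketch; step size, horizon, initial history and test delays are illustrative choices) reflects this.

```python
def tail_norm(tau, T=150.0, dt=1e-3, tail=10.0):
    """Forward-Euler simulation of x'(t) = A0 x(t) + A1 x(t - tau) with
    A0 = [[0, 1], [-2, 0.1]], A1 = [[0, 0], [1, 0]], history x(s) = (1, 0);
    returns the max sup-norm of x over the final `tail` time units."""
    n = max(1, int(round(tau / dt)))
    buf = [(1.0, 0.0)] * n          # ring buffer of past states; buf[i] is oldest
    i, x1, x2, peak = 0, 1.0, 0.0, 0.0
    for k in range(int(round(T / dt))):
        d1, _ = buf[i]              # x1(t - tau)
        buf[i] = (x1, x2)
        i = (i + 1) % n
        x1, x2 = x1 + dt * x2, x2 + dt * (-2.0 * x1 + 0.1 * x2 + d1)
        if k * dt >= T - tail:
            peak = max(peak, max(abs(x1), abs(x2)))
    return peak

print(tail_norm(0.02))  # below tau_min: trajectory grows
print(tail_norm(1.0))   # inside the stability window: trajectory decays
```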
\paragraph{Example C} We consider a scalar, two-delay system. \[ \dot x(t)=ax(t)+b x(t-1) + c x(t-2) \] In this case, we fix $a=-2$, $c=-1$ and search for the maximum $b$, which is known to be $3$; see, e.g.,~\cite{nussbaum_book,gu_2005,egorov_2014}. \[
\hbox{\begin{tabular}{c|c|c|c|c|c}
$d$ & $1$ & $2$ & $3$ & $4$ & \text{analytic}\\ \hline $b_{\max}$ & .7071 & 2.5895 & 2.9981 & 2.9982 & 3\\ \hline CPU sec & .3 & .976 & 2.77 & 12.96 & \\ \end{tabular}} \]
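The boundary value $b=3$ can again be cross-checked by simulation: at $b=3$ the characteristic function $\lambda+2-be^{-\lambda}+e^{-2\lambda}$ vanishes at $\lambda=0$, and the root moves into the right half-plane as $b$ increases. The sketch below (our own; step size, horizon and test values of $b$ are illustrative choices) integrates the two-delay equation with a ring buffer for the history.

```python
def dde2_tail_peak(b, T=80.0, dt=1e-3, tail=10.0):
    """Forward-Euler simulation of x'(t) = -2 x(t) + b x(t-1) - x(t-2),
    history x(s) = 1; returns max |x| over the final `tail` time units."""
    n1, n2 = int(round(1.0 / dt)), int(round(2.0 / dt))
    buf = [1.0] * n2                # ring buffer; buf[i] is the oldest value
    i, x, peak = 0, 1.0, 0.0
    for k in range(int(round(T / dt))):
        x_tau2 = buf[i]                     # x(t - 2)
        x_tau1 = buf[(i + n2 - n1) % n2]    # x(t - 1)
        buf[i] = x
        i = (i + 1) % n2
        x = x + dt * (-2.0 * x + b * x_tau1 - x_tau2)
        if k * dt >= T - tail:
            peak = max(peak, abs(x))
    return peak

print(dde2_tail_peak(2.8))  # b < 3: decays
print(dde2_tail_peak(3.2))  # b > 3: grows
```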
\paragraph{Example D} We consider a 2-dimensional, two-delay system with $\tau_1=\tau/2$ and $\tau_2=\tau$, and search for the maximum stable $\tau$. \[ \dot x(t)=\bmat{0 & 1\\ -1 & .1}x(t)+\bmat{0 & 0\\-1 & 0} x(t-\tau/2) + \bmat{0 & 0\\1 & 0} x(t-\tau) \] \[
\hbox{\begin{tabular}{c|c|c|c|c|c}\label{tab:taumax}
$d$ & $1$ & $2$ & $3$ & $4$ & \text{limit}\\ \hline $\tau_{\max}$ & 1.33 & 1.371 & 1.3717 & 1.3718 & 1.372\\ \hline CPU sec & 2.13 & 6.29 & 24.45 & 79.0 & \\ \end{tabular}} \]
\section{Conclusion}\label{sec:conclusion} In conclusion, we have proposed a new form of dual Lyapunov stability condition which allows convexification of the controller synthesis problem for delayed and other infinite-dimensional systems. This dual principle requires a Lyapunov operator which is positive, invertible, and self-adjoint, and which preserves the structure of the state-space. We have proposed such a class of operators and used them to create stability conditions which can be expressed as positivity and negativity of quadratic Lyapunov functions. These dual stability conditions have a tridiagonal structure which is distinct from standard Lyapunov-Krasovskii forms and may be exploited to increase performance when studying systems with large numbers of delays. The dual stability condition is presented in a format which can be adapted to many existing computational methods for Lyapunov stability analysis. We have applied the Sum-of-Squares approach to enforce positivity of the quadratic forms and tested the stability condition in both the single and multiple-delay cases. Numerical testing on several examples indicates the method is not conservative. The contribution of the present paper is not in the efficiency of the stability test, however, as the conditions presented here are likely less efficient than, e.g., previous SOS results, due to the highly structured nature of the operators used. Rather, the contribution is in the convexification of the synthesis problem, which opens the door for dynamic output-feedback $H_\infty$ synthesis for infinite-dimensional systems.
\appendices \section{Table of Notation} For convenience, we summarize a selected subset of the notation used in this paper.
\noindent \textbf{Spaces:} $Z_{m,n,K}:=\{\mathbb{R}^m \times L_2^{n}[-\tau_1,0]\times \cdots \times L_2^n[-\tau_K,0]\}$ with $Z_{n,K}:=Z_{n,n,K}$ and \[ \ip{\bmat{y\\ \psi_i}}{\bmat{x\\ \phi_i}}_{Z_{m,n,K}}=\tau_K y^T x + \sum_{i=1}^K \int_{-\tau_i}^0 \psi_i(s)^T\phi_i(s)ds. \]
\noindent \textbf{Subsets:}
\[ \Xi_{d,n,K}:=\{\{M,N\}\,:\, \substack{ M=M_1+M_2,\, N=N_1+N_2,\, \text{ where $\{M_1,N_1\}$ and $\{M_2,N_2\}$ satisfy the}\\ \text{conditions of Thm.~\ref{thm:positivity_PC} with $g_i=1$ and $g_i=-(s+\tau_i)(s+\tau_{i-1})$, respectively.}} \} \]
\[ \Theta_{m,n,K}:=\{\{F,H\}\,:\, F,H \text{ satisfy the conditions of Thm.~\ref{thm:mixed_pos}.}\} \]
\noindent \textbf{Operators:} \begin{align} &\left(\mathcal{P}_{\{P,Q_i,S_i,R_{ij}\}} \bmat{x\\ \phi_i}\right)(s) :=\bmat{ P x + \sum_{i=1}^K \int_{-\tau_i}^0 Q_i(s)\phi_i(s) d s \\ \tau_K Q_i(s)^T x + \tau_K S_i(s)\phi_i(s) + \sum_{j=1}^K \int_{-\tau_j}^0 R_{ij}(s,\theta)\phi_j(\theta)\, d \theta }. \end{align}
\begin{equation} \left(\mathcal{P}_{M,N}x\right)(s):= M(s)x(s) + \int_{-\tau_K}^0 N(s,\theta)x(\theta)d \theta.
\end{equation}
Given $\{F,H\} \in \Theta_{m,n,K}$, we define the operator $\mathcal{T}_{F,H}: L_2^{m,n}\rightarrow L_2^{m,n}$ as \[ \left(\mathcal{T}_{F,H}z\right)(s):=F(s)z(s)+\int_{-\tau_K}^0H(s,\theta)z(\theta)\, d\theta. \]
\noindent \textbf{Linear Transformations:} We say \[ \{M,N\}:=\mathcal{L}_1(P,Q_i,S_i,R_{ij}) \] if $a_i=\frac{\tau_i-\tau_{i-1}}{\tau_i}$ and \begin{align} M(s)&=\begin{cases} \bmat{P & \frac{\tau_K}{a_i} Q_i(\frac{s+\tau_{i-1}}{a_i})\\ \frac{\tau_K}{a_i} Q_i(\frac{s+\tau_{i-1}}{a_i})^T & \frac{\tau_K}{a_i} S_i(\frac{s+\tau_{i-1}}{a_i}) } & s \in [-\tau_i, -\tau_{i-1}]\\ \end{cases}\notag \\ N(s,\theta)&=\begin{cases} R_{ij}(\frac{s+\tau_{i-1}}{a_i},\frac{\theta+\tau_{j-1}}{a_j}) & s \in [-\tau_i, -\tau_{i-1}],\, \theta \in [-\tau_j, -\tau_{j-1}]\\ \end{cases} \end{align}
\section*{Acknowledgment} This work was supported by the National Science Foundation under grants No. 1100376 and 1301851.
\begin{IEEEbiography}{Matthew M. Peet} received the B.S. degree in physics and in aerospace engineering from the University of Texas, Austin, TX, USA, in 1999 and the M.S. and Ph.D. degrees in aeronautics and astronautics from Stanford University, Stanford, CA, in 2001 and 2006, respectively. He was a Postdoctoral Fellow at the National Institute for Research in Computer Science and Control (INRIA), Paris, France, from 2006 to 2008, where he worked in the SISYPHE and BANG groups. He was an Assistant Professor of Aerospace Engineering in the Mechanical, Materials, and Aerospace Engineering Department, Illinois Institute of Technology, Chicago, IL, USA, from 2008 to 2012. Currently, he is an Assistant Professor of Aerospace Engineering, School for the Engineering of Matter, Transport, and Energy, Arizona State University, Tempe, AZ, USA, and Director of the Cybernetic Systems and Controls Laboratory. Dr. Peet received a National Science Foundation CAREER award in 2011. \end{IEEEbiography}
\end{document}
\begin{document}
\title{GENERAL TAX STRUCTURES AND \\THE L\'EVY INSURANCE RISK MODEL.}
\authorone[The University of Bath]{Andreas E. Kyprianou} \addressone{Department of Mathematical Sciences, The University of Bath, Claverton Down, Bath BA2 7AY, UK. email: [email protected]} \authortwo[Concordia University]{Xiaowen Zhou} \addresstwo{Department of Mathematics and Statistics, Concordia University, 1455 de Maisonneuve Blvd W., Montr\'eal Qu\'ebec, H3G 1M8, Canada. email: [email protected]}
\begin{abstract} In the spirit of \cite{AH, ARZ} we consider a L\'evy insurance risk model with tax payments of a more general structure than in the aforementioned papers, which was also considered in \cite{ABBR}. In terms of scale functions, we establish three fundamental identities of the type which has stimulated a large volume of actuarial research in recent years: the two-sided exit problem, the net present value of tax paid until ruin, and a generalized version of the Gerber-Shiu function. Our method differs from that of \cite{AH, ARZ} in that we appeal predominantly to excursion theory. \end{abstract}
\keywords{Reflected L\'evy processes, passage problems, integrated exponential L\'evy processes, insurance risk processes, ruin, excursion theory.}
\ams{60K05, 60K15, 91B30}{60G70, 60J55}
\section{Introduction and main results} Recent advances in the analysis of the ubiquitous ruin problem from the theory of insurance risk have seen a tendency to replace the classical Cram\'er-Lundberg surplus process with a general spectrally negative L\'evy process; see for example \cite{ARZ, HPSV, KKM, RZ} to name but a few. In that case the surplus process is commonly referred to as a {\it L\'evy insurance risk process}. Although moving to this more complex setting, arguably, does not bring any more realistic features to the table than are already on offer in the classical Cram\'er-Lundberg model, a clear mathematical advantage has emerged. Working with L\'evy insurance risk processes forces one to approach the problem of ruin via excursion or fluctuation theory, which does not use specific features of the underlying L\'evy process other than a generic path decomposition of the process in terms of excursions from its maximum which manifests itself in the form of a Poisson point process.
In this paper, we continue in this vein and build on ideas of L\'evy insurance risk processes with tax which were introduced and studied in \cite{AH, ARZ, ABBR}. Specifically we introduce a more general tax structure and therewith we establish, for the aggregate surplus process, new identities for the two sided exit problem, a generalized version of the Gerber-Shiu function as well the net present value of tax paid until ruin.
Henceforth the process $X=\{X_t: t\geq 0\}$ with probabilities $\{\mathbb{P}_x : x\in\mathbb{R}\}$ and natural filtration $\{\mathcal{F}_t : t\geq 0\}$ will denote a spectrally negative L\'evy process with the usual exclusion of processes in the latter class which have monotone paths (that is to say a pure increasing linear drift and the negative of a subordinator). For convenience we shall always denote $\mathbb{P}_0$ by $\mathbb{P}$. Let \[ \psi(\lambda) = \log\mathbb{E}(e^{\lambda X_1}) \] be the Laplace exponent of $X$, which is known to be finite at least for $\lambda \in[0,\infty)$, on which it is a strictly convex and infinitely differentiable function. The asymptotic behaviour of $X$ is characterized by $\psi'(0+)$, so that $X$ drifts to $\pm\infty$ (oscillates) accordingly as $\pm\psi'(0+)>0$ ($\psi'(0+)=0$). When $X$ plays the role of the surplus process, it is usual to make the assumption that $\psi'(0+)>0$ which is equivalent to the {\it net profit condition} in the case that $X$ is a Cram\'er-Lundberg process. However, this condition is not necessary for any of the forthcoming analysis.
Denote by $S=\{S_t : t\geq 0\}$ the process which describes the running supremum of $X$, that is to say, $S_t = \sup_{s\leq t}X_s$ for
each $t\geq 0$. Following \cite{ABBR}, we are interested in modeling tax payments from the L\'evy insurance process in such a way that the cumulative payment until
time $t$ is given by \[ \int_0^t \gamma(S_u)dS_u \] where $\gamma:[0,\infty)\to [0,1)$ is a measurable function which satisfies \begin{equation} \int^\infty_0 (1 - \gamma(s))ds =\infty. \label{fortheinverse} \end{equation}
In that case the aggregate surplus process, the primary object of our study, is given by
\begin{equation}
U_t:=X_t-\int_0^t \gamma(S_u)dS_u.
\label{agg}
\end{equation} In the special case that $\gamma$ is a constant in $(0,1)$, our L\'evy insurance risk process with tax agrees with the model introduced in \cite{AH, ARZ}. In the case that $\gamma=0$, we are back to a regular L\'evy insurance risk process. Note also that processes of the form (\ref{agg}) constitute a subclass of controlled L\'evy risk processes, the latter being of popular interest in recent literature; see for example \cite{APP, KL, KP}.
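Since $X$ has no positive jumps, $S$ is continuous and increases only at times when $X=S$; consequently the cumulative tax can be rewritten pathwise as $\int_0^t\gamma(S_u)dS_u=\int_x^{S_t}\gamma(y)dy$, a substitution used repeatedly in the proofs below. The following small sketch (our own, with a random-walk approximation of $X$ and the purely illustrative tax rate $\gamma(y)=y/(1+y)$) checks that the two sides agree.

```python
import math, random

random.seed(2024)
gamma = lambda y: y / (1.0 + y)      # illustrative tax rate in [0, 1)

h, N = 0.01, 100_000
x = 1.0
X = S = x
tax = 0.0                            # left side: sum of gamma(S) dS
for _ in range(N):
    X += h if random.random() < 0.5 else -h
    if X > S:                        # S only moves at new maxima
        tax += gamma(0.5 * (S + X)) * (X - S)   # midpoint rule on [S, X]
        S = X

# right side: int_x^S gamma(y) dy = [y - log(1 + y)] from x to S
closed_form = (S - math.log(1.0 + S)) - (x - math.log(1.0 + x))
print(tax, closed_form)
```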
In order to state the results alluded to above which concern path functionals of $U$, we must first introduce more notation. As is now usual when studying L\'evy risk processes, a key element of the analysis involves the use of scale functions, defined as follows. For every $q\geq 0$ there exists a function $W^{(q)}:\mathbb{R}\rightarrow [0,\infty)$ such that $W^{(q)}(x)=0$ for all $x<0$, while on $[0,\infty)$ it is almost everywhere differentiable and satisfies
\begin{equation}\label{eq:scale} \int_0^\infty e^{-\lambda x} W^{(q)}(x)dx\,=\,\frac{1}{\psi(\lambda)-q},\text{ { } for { }} \lambda>\Phi(q), \end{equation} where $\Phi(q)$ is the largest solution to the equation $\psi(\theta)=q$ (there are at most two). We shall write $W^{(0)}=W$ for short. It is known that when $X$ has paths of unbounded variation, the scale functions $W^{(q)}$ are continuously differentiable on $(0,\infty)$, and when $X$ has paths of bounded variation, they are almost everywhere differentiable. In either case we shall denote by $W^{(q)\prime}$ the associated density. It is also known that if $X$ has a Gaussian component then $W^{(q)}$ is twice continuously differentiable on $(0,\infty)$.
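A standard explicit example is worth keeping in mind: when $X_t=\mu t+B_t$ is a unit-variance Brownian motion with drift, $\psi(\lambda)=\mu\lambda+\lambda^2/2$, and inverting (\ref{eq:scale}) by partial fractions gives $W^{(q)}(x)=\delta^{-1}\big(e^{(-\mu+\delta)x}-e^{(-\mu-\delta)x}\big)$ with $\delta=\sqrt{\mu^2+2q}$. The sketch below (parameter values are arbitrary illustrative choices) checks this formula numerically against the defining Laplace transform.

```python
import math

def W_q(x, mu, q):
    """Scale function of X_t = mu*t + B_t (unit Gaussian coefficient):
    W^(q)(x) = (exp(r1 x) - exp(r2 x))/delta, r_{1,2} = -mu +/- delta,
    delta = sqrt(mu^2 + 2q); W^(q)(x) = 0 for x < 0."""
    if x < 0:
        return 0.0
    delta = math.sqrt(mu * mu + 2.0 * q)
    return (math.exp((-mu + delta) * x) - math.exp((-mu - delta) * x)) / delta

# numerical check of the defining transform against 1/(psi(lam) - q)
mu, q, lam = 1.0, 0.5, 3.0   # lam > Phi(q) = -mu + sqrt(mu^2 + 2q) ~ 0.414
dx = 1e-4
lhs = sum(math.exp(-lam * k * dx) * W_q(k * dx, mu, q)
          for k in range(200_000)) * dx     # Riemann sum, truncated at x = 20
rhs = 1.0 / (mu * lam + 0.5 * lam * lam - q)
print(lhs, rhs)              # both close to 1/7
```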
There exists a well known exponential change of measure that one may perform for spectrally negative L\'evy processes, \begin{equation}
\left.\frac{d\mathbb{P}_x^\vartheta}{d\mathbb{P}_x}\right|_{\mathcal{F}_t} = e^{\vartheta (X_t-x)- \psi(\vartheta)t} \label{COM} \end{equation} for $x\in\mathbb{R}$ and $\vartheta\geq 0$ under which $X$ remains within the class of spectrally negative L\'evy processes. In particular if $\nu(dx)$ is the L\'evy measure of $-X$ under $\mathbb{P}$ then $e^{-\vartheta x}\nu(dx)$ is its L\'evy measure under $\mathbb{P}^\vartheta$. It will turn out to be useful to introduce an additional parameter to the scale functions described above in the light of this change of measure. Henceforth we shall refer to the functions $W_\vartheta$ where $\vartheta\geq 0$ as the functions that play the role of the scale functions defined in the previous paragraph but when considered under the measures $\mathbb{P}^\vartheta$.
Next define \[ \tau_a^+ := \inf \{t > 0 \colon U_t > a \} \text{ and }\tau_0^- := \inf \{t > 0 \colon U_t < 0 \} \] with the convention $\inf\emptyset=\infty$. For $s\geq x$ define
\begin{equation}
\bar{\gamma}(s):=s-\int_x^s\gamma(y)dy=x+\int_x^s(1-\gamma(y))dy.
\label{gamma-bar}
\end{equation} By differentiating (\ref{gamma-bar}) we note that, since $\gamma\in[0,1)$, $\bar\gamma$ is a strictly increasing function. Moreover, since it is continuous it has a well defined inverse on $[x,\infty)$ which we denote by $\bar\gamma^{-1}$.
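To get a concrete feel for $\bar\gamma$ and its inverse, take the purely illustrative tax rate $\gamma(y)=y/(1+y)$ (our choice; it satisfies (\ref{fortheinverse})) and $x=1$. Then $\bar\gamma(s)=x+\log\{(1+s)/(1+x)\}$ with inverse $\bar\gamma^{-1}(v)=(1+x)e^{v-x}-1$. The sketch below verifies the inverse and the derivative identity $(\bar\gamma^{-1})'(v)=1/(1-\gamma(\bar\gamma^{-1}(v)))$, which is used in the proof of Theorem \ref{T:survival}.

```python
import math

x = 1.0
gamma = lambda y: y / (1.0 + y)                       # illustrative tax rate in [0, 1)
gbar = lambda s: x + math.log((1.0 + s) / (1.0 + x))  # closed form of (gamma-bar)
gbar_inv = lambda v: (1.0 + x) * math.exp(v - x) - 1.0

v, h = 2.5, 1e-6
num = (gbar_inv(v + h) - gbar_inv(v - h)) / (2.0 * h)  # numerical (gbar^{-1})'(v)
fla = 1.0 / (1.0 - gamma(gbar_inv(v)))                 # 1/(1 - gamma(gbar^{-1}(v)))
print(gbar(gbar_inv(v)), num, fla)
```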
We may now present the three main results of this paper as promised. Their proofs will be given in the subsequent sections.
For each $x>0$, the process $L_t : = S_t-x$, $t\geq 0$, serves as a local time at $0$ for the Markov process $ Y: = S-X$ under $\mathbb{P}_x$. Write $L^{-1}:=\{L^{-1}_t:t\geq 0\}$ for the right continuous inverse of $L$.
\begin{thm}[Two sided exit problem]\label{T:survival} For any $0 < x < a$, we have \begin{equation}\label{E:laplace} \bE_x\left[ e^{-q \tau^+_a} \ind_{\{\tau_a^+ < \tau_0^-\}} \right] = \exp\left\{-\int_x^a \frac{W^{(q)\prime}(y)}{W^{(q)}(y)(1-\gamma(\bar{\gamma}^{-1}(y)))}dy \right\}. \end{equation} \end{thm}
\begin{thm}[Net present value of tax paid until ruin]\label{NPV} For any $0 < x < a$, we have \begin{equation}\label{dividends} \bE_x\left[ \int_0^{\tau^-_0} e^{-qu} \gamma(S_u) dS_u \right]= \int_x^\infty \exp\left\{- \int_x^t \frac{W^{(q)\prime}(\bar\gamma(s))}{W^{(q)} (\bar\gamma(s))} ds\right\} \gamma(t)dt. \end{equation} \end{thm}
\begin{thm}\label{G-S} For each $t\geq 0$ let $S_t^U:=\sup_{s\leq t}U_s$ and let $\kappa = L^{-1}_{L_{\tau^-_0}-}$, the last moment that tax is paid before ruin. Denote by $\nu$ the jump measure of $-X$. For any $y,z>0$, $\theta>y$ and $\alpha,\beta\geq 0$, we have \begin{eqnarray}\label{gs} \lefteqn{ \mathbb{E}_x\left( e^{-\alpha\kappa - \beta(\tau^-_0 - \kappa)}; S^U_{\tau^-_0}\in d\theta, U_{\tau^-_0-}\in dy, - U_{\tau^-_0}\in dz \right) }&&\notag\\ &&= \frac{1}{1- \gamma(\bar\gamma^{-1}(\theta))} \exp\left\{-\int_x^\theta \frac{W^{(\alpha)\prime}(w)}{W^{(\alpha)}(w)(1-\gamma(\bar{\gamma}^{-1}(w)))}dw \right\} \notag\\ &&\hspace{3cm}\left\{W^{(\beta)\prime}(\theta-y)-\frac{W^{(\beta)\prime}(\theta)}{W^{(\beta)}(\theta)}W^{(\beta)}(\theta-y)\right\}\nu(y+dz)d\theta dy. \end{eqnarray} Furthermore, we have \begin{eqnarray} \label{gs-creep} \lefteqn{\mathbb{E}_x\left( e^{-\alpha\kappa - \beta(\tau^-_0 - \kappa)}; S^U_{\tau^-_0}\in d\theta, U_{\tau^-_0} =0\right) }\notag\\ &&= \frac{1}{1- \gamma(\bar\gamma^{-1}(\theta))} \exp\left\{-\int_x^\theta \frac{W^{(\alpha)\prime}(w)}{W^{(\alpha)}(w)(1-\gamma(\bar{\gamma}^{-1}(w)))}dw \right\}\notag\\ &&\hspace{5cm}\cdot\frac{\sigma^2}{2} \left\{ \frac{W^{(\beta)\prime}(\theta)^2}{W^{(\beta)}(\theta)} - W^{(\beta)\prime\prime}(\theta)\right\}, \end{eqnarray} where $\sigma$ is the Gaussian coefficient in the L\'evy-It\^o decomposition of $X$. \end{thm}
\begin{remark}\rm When $\gamma\in(0,1)$ is a constant, we note that the expressions (\ref{E:laplace}) and (\ref{dividends}) agree with formulas (3.1) and (3.2) in \cite{ARZ}. Indeed we have $\bar\gamma(s) = s(1-\gamma) + \gamma x$. For Theorem \ref{T:survival} we have \[\bE_x\left[ e^{-q \tau^+_a} \ind_{\{\tau_a^+ < \tau_0^-\}} \right] = \exp\left\{-\int_x^a \frac{W^{(q)\prime}(y)}{W^{(q)}(y)(1-\gamma)}dy \right\}=\left(\frac{W^{(q)}(x)}{W^{(q)}(a)}\right)^{1/(1-\gamma)}.\]
For Theorem \ref{NPV} we have by two changes of variables \begin{eqnarray*} \int_x^\infty \exp\left\{- \int_x^t \frac{W^{(q)\prime}(\bar\gamma(s))}{W^{(q)} (\bar\gamma(s))} ds\right\} \gamma(t)dt &=&\gamma\int_x^\infty \exp\left\{- \frac{1}{1-\gamma}\int_x^{\bar\gamma(t)} \frac{W^{(q)\prime}(y)}{W^{(q)} (y)} dy\right\} dt \\ &=&\gamma\int_x^\infty \left(\frac{W^{(q)}(x)}{W^{(q)}(\bar\gamma(t))}\right)^{1/(1-\gamma)}dt\\ &=&\frac{\gamma}{1-\gamma}\int_x^\infty \left(\frac{W^{(q)}(x)}{W^{(q)}(u)}\right)^{1/(1-\gamma)}du, \end{eqnarray*} which is formula (3.2) in \cite{ARZ}.
Theorem \ref{G-S} on the other hand gives a new result for the setting of \cite{ARZ}. In particular, we have \begin{eqnarray*} \lefteqn{ \mathbb{E}_x\left( e^{-\alpha\kappa - \beta(\tau^-_0 - \kappa)}; S^U_{\tau^-_0}\in d\theta, U_{\tau^-_0-}\in dy, - U_{\tau^-_0}\in dz \right) }&&\notag\\ &&= \frac{1}{1- \gamma} \left(\frac{W^{(\alpha)} (x)}{W^{(\alpha)}(\theta)}\right)^{1/(1-\gamma)}\left\{W^{(\beta)\prime}(\theta-y)-\frac{W^{(\beta)\prime}(\theta)}{W^{(\beta)}(\theta)}W^{(\beta)}(\theta-y)\right\}\nu(y+dz)d\theta dy \end{eqnarray*} and \begin{eqnarray*} \lefteqn{\mathbb{E}_x\left( e^{-\alpha\kappa - \beta(\tau^-_0 - \kappa)}; S^U_{\tau^-_0}\in d\theta, U_{\tau^-_0} =0\right) }\notag\\ &&= \frac{\sigma^2}{2(1-\gamma)}\left(\frac{W^{(\alpha)} (x)}{W^{(\alpha)}(\theta)}\right)^{1/(1-\gamma)} \left\{ \frac{W^{(\beta)\prime}(\theta)^2}{W^{(\beta)}(\theta)} - W^{(\beta)\prime\prime}(\theta)\right\}. \end{eqnarray*}
Finally note that when $\gamma =0$, so that the process $U$ agrees with the L\'evy insurance risk process $X$, the last two formulae above give two new expressions for the discounted joint law of the overall maximal wealth accumulated prior to ruin, the wealth immediately before ruin and the deficit at ruin. \end{remark}
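To put numbers to the constant-tax specialization of (\ref{E:laplace}), the sketch below combines it with the Brownian-with-drift scale function $W^{(q)}(x)=\delta^{-1}(e^{(-\mu+\delta)x}-e^{(-\mu-\delta)x})$, $\delta=\sqrt{\mu^2+2q}$ (a standard closed form, restated here for self-containedness); all parameter values are our own illustrative choices. As one would expect, a heavier tax rate lowers the discounted probability of reaching the level $a$ before ruin.

```python
import math

def W_q(x, mu, q):
    # scale function of Brownian motion with drift mu (unit variance):
    # W^(q)(x) = (exp(r1 x) - exp(r2 x))/delta, r_{1,2} = -mu +/- delta
    delta = math.sqrt(mu * mu + 2.0 * q)
    return (math.exp((-mu + delta) * x) - math.exp((-mu - delta) * x)) / delta

def exit_transform(x, a, gam, mu=1.0, q=0.05):
    # E_x[ e^{-q tau_a^+} ; tau_a^+ < tau_0^- ] for a constant tax rate gam,
    # i.e. (W^(q)(x)/W^(q)(a))^{1/(1-gam)} as in the remark above
    return (W_q(x, mu, q) / W_q(a, mu, q)) ** (1.0 / (1.0 - gam))

vals = [exit_transform(1.0, 2.0, g) for g in (0.0, 0.25, 0.5, 0.75)]
print(vals)   # decreasing in the tax rate
```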
\begin{remark}\rm One major criticism of working with scale functions is that, in principle, one has only solved the problems of interest up to inverting the Laplace transform (\ref{eq:scale}). However, in the last year there have been a number of developments in the theory of scale functions which have seen a large number of explicit examples appearing in the literature, including the case of Cram\'er-Lundberg models. See for example \cite{CKP, HK, KR, P}. The paper \cite{S} also gives recipes for evaluating scale functions numerically. \end{remark}
\section{Proofs of Main results} We begin this section by pointing out some important features of the running supremum of the aggregate process (\ref{agg}) which turns out to be key in our use of excursion theory in the forthcoming proofs. \begin{lemma}\label{suprema} We have that \begin{equation} S^U_t = S_t - \int_0^t \gamma(S_s)dS_s \label{U-sup} \end{equation} and that the random times $\{t\geq 0 : U_t = S^U_t\}$ agree precisely with $\{t\geq 0 : X_t = S_t\}$. \end{lemma} \begin{proof} Note that on the one hand \begin{equation} S^U_t = \sup_{s\leq t} \left\{X_s - \int_0^s \gamma(S_u)dS_u\right\}\geq \sup_{s\leq t} X_s - \int_0^t \gamma(S_u)dS_u = S_t - \int_0^t \gamma(S_u)dS_u. \label{lower} \end{equation} On the other hand since $X_t\leq S_t$ we have \[ U_t\leq S_t - \int_0^t \gamma(S_u)dS_u = \int_0^t (1- \gamma(S_u))dS_u \] and hence, since $\gamma (y)\in[0,1)$ for all $y\geq 0$, \begin{equation} S^U_t \leq \sup_{s\leq t} \int_0^s (1- \gamma(S_u))dS_u = \int_0^t (1- \gamma(S_u))dS_u = S_t - \int_0^t \gamma(S_u)dS_u \label{upper} \end{equation} Together (\ref{upper}) and (\ref{lower}) imply (\ref{U-sup}). Now suppose that $t'\in\{t\geq 0 : X_t = S_t\}$. This implies that \[ U_{t'} = X_{t'} - \int_0^{t'}\gamma(S_u)dS_u=S_{t'} - \int_0^{t'}\gamma(S_u)dS_u = S^U_{t'} \] and hence $t'\in\{t\geq 0 : U_t = S^U_t\}$. On the other hand, if $t''\in \{t\geq 0 : U_t = S^U_t\}$, then \[ X_{t''} - \int_0^{t''}\gamma(S_u)dS_u = U_{t''} = S^U_{t''} = S_{t''} - \int_0^{t''}\gamma(S_u)dS_u \] showing that $X_{t''} = S_{t''}$ and hence $t''\in\{t\geq 0: X_t = S_t\}$.
$\square$\end{proof}
For the remaining proofs we shall also make heavy use of excursion theory for the process $S-X$ for which we refer to \cite{be96} for background reading. We shall spend a moment here setting up some necessary notation which will be used throughout the remainder of the paper. The Poisson process of excursions indexed by local time shall be denoted by $\{(t, \epsilon_t): t\geq 0\}$ where \[ \epsilon_t = \{\epsilon_t(s) := X_{L^{-1}_{t}} - X_{L^{-1}_{t-}+s}: 0< s\leq L^{-1}_{t} - L^{-1}_{t-} \} \]
whenever $\sigma(\epsilon_t):=L^{-1}_{t} - L^{-1}_{t-}>0$. Accordingly we refer to a generic excursion as $\epsilon(\cdot)$
(or just $\epsilon$ for short as appropriate) belonging to the space $\mathcal{E}$ of canonical excursions.
The intensity measure of the process $\{(t, \epsilon_t): t\geq 0\}$ is given by $dt\times dn$ where $n$ is a measure on the space of
excursions (the excursion measure). An $n$-measurable functional of the canonical excursion which will be of prime interest is $\bar\epsilon= \sup_{s\geq 0}\epsilon(s)$. A useful formula for this functional that we shall make use of is the following (cf. \cite{kyp06}) \begin{equation} n(\overline{\epsilon}> x) = \frac{W'(x)}{W(x)} \label{excursion-tail} \end{equation} provided that $x$ is not a point of discontinuity in the derivative of $W$ (which is only a concern when $X$ has paths of bounded variation, in which case there are at most a countable number).
Lemma \ref{suprema} also has an important bearing on the process of excursions described above. Indeed, from the identity (\ref{U-sup}) we note that if $L_t = s$, or equivalently $S_t = x+s$, under $\mathbb{P}_x$, then $S^U_t = \bar\gamma(x+s)$. Moreover, $L^{-1}_{s} = \tau^+_{\bar\gamma(x+s)}$, or equivalently $\tau^+_a = L^{-1}_{\bar\gamma^{-1}(a) -x}$, under $\mathbb{P}_x$ and the excursions of $U$ away from its maximum agree precisely with $\{(t, \epsilon_t): t\geq 0\}$.
\begin{proof}[Proof of Theorem \ref{T:survival}] Taking account of the remarks following Lemma \ref{suprema} we have that the event $\{\tau_a^+ < \tau_0^-\}$ is the same as \[ \{\bar\epsilon_s \leq \bar{\gamma}(x+s) , \forall \, 0 \leq s < \bar{\gamma}^{-1}(a)-x \}. \]
Then, for $x>0$, \begin{equation*} \begin{split} \bP_x (\tau_a^+ < \tau_0^-) &=\mathbb{P}_x( \bar\epsilon_s \leq \bar{\gamma}(x+s) , \forall \, 0 \leq s<\bar{\gamma}^{-1}(a)-x )\\ &=\exp\left\{-\int_0^{\bar{\gamma}^{-1}(a)-x} n(\bar\epsilon >\bar\gamma(x+s))ds\right\}\\ & = \exp \left\{-\int_0^{\bar{\gamma}^{-1}(a)-x} \frac{W^{\prime}(\bar{\gamma}(x+s))}{W(\bar{\gamma}(x+s))} \, ds \right\} \\ &=\exp \left\{-\int_x^{a} \frac{W^{\prime}(y)}{W(y)(1-\gamma(\bar{\gamma}^{-1}(y)))} \, dy \right\}, \end{split} \end{equation*} where we change the variable making use of the fact that, since $\bar\gamma(\bar\gamma^{-1}(s))=s$, we have from the chain rule \[ \frac{d}{ds} \bar\gamma^{-1}(s) = \frac{1}{\bar\gamma'(\bar\gamma^{-1}(s))} = \frac{1}{1-\gamma(\bar\gamma^{-1} (s))}. \]
Next, note that \begin{eqnarray} \bP_x^{\Phi(q)}( \tau_a^+ < \tau_0^- ) &=& \bE_x \left[ e^{\Phi(q)(X_{\tau^+_a} - x)-q \tau^+_a} \ind_{\{\tau_a^+ < \tau_0^-\}}\right]\notag\\ &=& \exp\left\{\Phi(q)\left(\bar{\gamma}^{-1}(a)-x\right)\right\} \bE_x \left[ e^{-q \tau^+_a} \ind_{\{\tau_a^+ < \tau_0^-\}}\right] \label{one} \end{eqnarray} where we have appealed to the change of measure (\ref{COM}) with $\vartheta = \Phi(q)$ and the final equality follows by virtue of the fact that on $\{\tau^+_a <\infty\}$
\begin{eqnarray}
X_{\tau_a^+}&=&U_{\tau^+_a}+\int_0^{\tau_a^+} \gamma(S_u)dS_u\notag\\ &=& a+\int_0^{L^{-1}_{\bar\gamma^{-1}(a) -x}}\gamma(S_u)dS_u\notag\\ &=& a+\int_x^{\bar{\gamma}^{-1}(a)}\gamma(y)dy\notag\\ &=&\bar{\gamma}^{-1}(a) \label{two} \end{eqnarray} where in the third equality we have made the change of variable $y = S_u$.
Note also that it is known (cf. Chapter 8 of \cite{kyp06}) that for $q,x\geq 0$, \begin{equation} W^{(q)}(x) = e^{\Phi(q)x}W_{\Phi(q)}(x) \label{q-Phiq} \end{equation} and hence \begin{equation} \frac{W_{\Phi(q)}'(x)}{W_{\Phi(q)}(x)}=\frac{{W^{(q)}}'(x)}{W^{(q)}(x)}-\Phi(q). \label{three} \end{equation} Piecing together (\ref{one}), (\ref{two}) and (\ref{three}) we get \begin{equation*} \begin{split} &\bE_x \left[ e^{-q \tau^+_a} \ind_{\{\tau_a^+ < \tau_0^-\}}\right]\\ &= \bP_x^{\Phi(q)} ( \tau_a^+ < \tau_0^- ) \exp\left\{-\Phi(q)\left(\bar{\gamma}^{-1}(a)-x \right)\right\} \\ &=\exp \left\{-\int_x^{a} \frac{{W_{\Phi(q)}}^{\prime}(y)}{W_{\Phi(q)}(y)(1-\gamma(\bar{\gamma}^{-1}(y)))} \, dy \right\}\exp\left\{-\Phi(q)\left(\bar{\gamma}^{-1}(a)-x \right)\right\}\\ &=\exp\left\{-\int_x^a\frac{W^{(q)\prime}(y)}{W^{(q)}(y)(1-\gamma(\bar{\gamma}^{-1}(y)))}dy\right\}, \end{split} \end{equation*} where we have also used the fact that \[\int_x^a \frac{1}{1-\gamma(\bar{\gamma}^{-1}(y))}dy=\bar{\gamma}^{-1}(a)-\bar{\gamma}^{-1}(x)=\bar{\gamma}^{-1}(a)-x.\] The proof is now complete.
$\square$\end{proof}
\begin{proof}[Proof of Theorem \ref{NPV}] The proof builds on the calculations in the previous proof. We note that the process $S$ does not increase on the time interval $(L^{-1}_{L_{\tau^{-}_0}-}, \tau^-_0)$ and hence \begin{eqnarray*} \lefteqn{\mathbb{E}_x\left[\int_0^{\tau^-_0} e^{-qu} \gamma(S_u)dS_u \right]}\\ &&=\mathbb{E}_x\left[\int_0^{L^{-1}_{L_{\tau^-_0}-}} e^{-qu} \gamma(L_u+x)dL_u \right]\\ &&=\mathbb{E}_x\left[\int_0^{\infty}\ind_{\{t< L_{\tau^-_0}\}} e^{-qL^{-1}_t} \gamma(t+x)dt \right]\\ &&=\int_0^{\infty} \mathbb{E}_x\left[ e^{-q L^{-1}_{t} } \ind_{\{ \bar\epsilon_s \leq \bar{\gamma}(x+s) , \forall \, 0 \leq s \leq t \}} \right] \gamma(t+x)dt\\ &&=\int_0^{\infty} e^{-\Phi(q)t} \mathbb{P}^{\Phi(q)}_x( \bar\epsilon_s \leq \bar{\gamma}(x+s) , \forall \, 0 \leq s \leq t ) \gamma(t+x)dt\\ &&=\int_0^{\infty} e^{-\Phi(q)t} \exp\left\{- \int_0^t n_{\Phi(q)}(\bar\epsilon > \bar\gamma(x+s)) ds\right\} \gamma(t+x)dt\\ &&=\int_0^{\infty} e^{-\Phi(q)t} \exp\left\{- \int_0^t \frac{W_{\Phi(q)}'(\bar\gamma(x+s))}{W_{\Phi(q)}(\bar\gamma(x+s))} ds \right\} \gamma(t+x)dt\\ &&=\int_0^{\infty} \exp\left\{- \int_0^t \frac{W^{(q)\prime}(\bar\gamma(x+s))}{W^{(q)}(\bar\gamma(x+s))} ds\right\} \gamma(t+x)dt \end{eqnarray*} where in the fifth equality the measure $n_{\Phi(q)}$ plays the role of $n$ under $\mathbb{P}^{\Phi(q)}$, in the penultimate equality we have used (\ref{excursion-tail}) and the final equality uses (\ref{three}). The proof is completed by applying a straightforward change of variables.
$\square$\end{proof}
Before turning to the proof of Theorem \ref{G-S}, we first need to prove an additional auxiliary result. To this end, define $\rho_a= \inf\{s>0 : \epsilon(s)>a \}$, the first passage time above $a$ of the canonical excursion $\epsilon$.
We also need the first passage times for the underlying L\'evy process $X$, \[ T^+_x = \inf\{t>0 : X_t >x\}\text{ and }T^-_x=\inf\{t>0 : X_t <x\} \] for all $x\in\mathbb{R}$.
\begin{lem}\label{aux-lemma} For any $y,z>0$ and $q\geq 0$, we have \begin{eqnarray*} \lefteqn{n\left(e^{-q\rho_a}; a- \epsilon(\rho_a-) \in dy,\epsilon(\rho_a ) -a\in dz \right) }&&\\ &&= \left\{W^{(q)\prime}(a-y)-\frac{W^{(q)\prime}(a)}{W^{(q)}(a)}W^{(q)}(a-y)\right\}\nu(y+dz)dy \end{eqnarray*} and \[ n\left( e^{-q \rho_a}; \epsilon(\rho_a)=a \right) =\frac{\sigma^2}{2} \left\{ \frac{W^{(q)\prime}(a)^2}{W^{(q)}(a)} - W^{(q)\prime\prime}(a)\right\}. \]
\end{lem}
\begin{proof} Recall that $Y= S-X$. For the latter process introduce its first passage time \[ \varsigma_a = \inf\{t>0 : Y_t >a\}. \] By a classical application of the compensation formula (see for example the treatment of a related problem in \cite{AKP}) we have for $q\geq 0$ that \begin{eqnarray} \lefteqn{\mathbb{E}(e^{-q \varsigma_a} ; a- Y_{\varsigma_a - } \in dy, Y_{\varsigma_a} - a\in dz)}&&\notag\\ &&=\mathbb{E}\left[\sum_{t\geq 0} e^{-q(L^{-1}_{t-}+ \rho_a(\epsilon_t))} \ind_{\{\sup_{s<t} \overline\epsilon_s \leq a, \rho_a(\epsilon_t) <\sigma(\epsilon_t) , a- \epsilon_t(\rho_a-) \in dy,\epsilon_t(\rho_a ) -a\in dz \} } \right]\notag\\ &&=\int_0^\infty \mathbb{E}\left[e^{-qL^{-1}_{t}} \ind_{\{ \sup_{s\leq t} \overline\epsilon_s \leq a\}}\right] dt \cdot n\left(e^{-q\rho_a}; a- \epsilon(\rho_a-) \in dy,\epsilon(\rho_a ) -a\in dz \right)\notag\\ &&=\int_0^\infty e^{-\Phi(q)t} e^{-n_{\Phi(q)}(\bar\epsilon> a) t}dt \cdot n\left(e^{-q\rho_a}; a- \epsilon(\rho_a-) \in dy,\epsilon(\rho_a ) -a\in dz \right)\notag\\ &&=\int_0^\infty e^{-\Phi(q)t} \exp\left\{-\frac{W'_{\Phi(q) } (a)}{W_{\Phi(q)}(a) } t\right\}dt \cdot n\left(e^{-q\rho_a}; a- \epsilon(\rho_a-) \in dy,\epsilon(\rho_a ) -a\in dz \right)\notag\\ &&=\int_0^\infty \exp\left\{-\frac{W^{(q)\prime } (a)}{W^{(q)}(a) } t\right\}dt\cdot n\left(e^{-q\rho_a}; a- \epsilon(\rho_a-) \in dy,\epsilon(\rho_a ) -a\in dz \right) \label{mine} \end{eqnarray} where in the first equality the time index runs over local times and the sum is the usual shorthand for integration with respect to the Poisson counting measure of excursions, and for the second equality we have used the quasi left continuity of the subordinator $L^{-1}$.
On the other hand, according to Theorem 1 of \cite{Pist-Sem}, we have that \begin{eqnarray} \lefteqn{ \mathbb{E}(e^{-q \varsigma_a} ; a- Y_{\varsigma_a - } \in dy, Y_{\varsigma_a} - a\in dz) }&&\notag\\ &&=\int_0^\infty \exp\left\{-\frac{W^{(q)\prime } (a)}{W^{(q)}(a) } t\right\}dt\notag\\ &&\hspace{1cm}\times \left(W^{(q)\prime}(a-y)-\frac{W^{(q)\prime}(a)}{W^{(q)}(a)}W^{(q)}(a-y)\right)\nu(y+dz)dy \label{martijn's} \end{eqnarray} By comparing the left and right hand sides of (\ref{mine}) and (\ref{martijn's}) we thus have that \begin{eqnarray*} \lefteqn{n\left(e^{-q\rho_a}; a- \epsilon(\rho_a-) \in dy,\epsilon(\rho_a ) -a\in dz \right) }&&\\ &&= \left(W^{(q)\prime}(a-y)-\frac{W^{(q)\prime}(a)}{W^{(q)}(a)}W^{(q)}(a-y)\right)\nu(y+dz)dy \end{eqnarray*} as claimed.
For the proof of the second part we first note that it is known (cf. \cite{be96}) that the process $X$ creeps downwards if and only if $\sigma\neq 0$. It thus follows that $Y$ creeps upwards if and only if $\sigma\neq 0$. Henceforth assume that $\sigma \neq 0$. The proof then follows the same reasoning except in (\ref{mine}) one replaces the event $\{a- Y_{\varsigma_a - } \in dy, Y_{\varsigma_a} - a\in dz\}$ by $\{Y_{\varsigma_a} = a\}$ on the left hand side and $\{a- \epsilon(\rho_a-) \in dy,\epsilon(\rho_a ) -a\in dz\}$ by $\{\epsilon(\rho_a) =a\}$ on the right hand side. Furthermore as a replacement for (\ref{martijn's}) in the argument we use instead \[ \mathbb{E}(e^{-q\varsigma_a}; Y_{\varsigma_a} = a) = \frac{\sigma^2}{2} \left\{ \frac{W^{(q)\prime}(a)^2}{W^{(q)}(a)} - W^{(q)\prime\prime}(a)\right\}\cdot \int_0^\infty \exp\left\{-\frac{W^{(q)\prime } (a)}{W^{(q)}(a) } t\right\}dt, \] which is taken from Theorem 2 of \cite{Pist-Sem}.
$\square$\end{proof}
\begin{proof}[Proof of Theorem \ref{G-S}] We give the proof for the first identity. The proof of the second identity follows along exactly the same lines using the second part of Lemma \ref{aux-lemma} instead and is left as an exercise for the reader.
In a similar spirit to the proof of Lemma \ref{aux-lemma} we may write for a given open interval $B\subset(0,\infty)$, \begin{eqnarray*} \lefteqn{ \mathbb{E}\left( e^{-\alpha\kappa - \beta(\tau^-_0 - \kappa)}; S^U_{\tau^-_0}\in B, U_{\tau^-_0-}\in dy, - U_{\tau^-_0}\in dz \right) }&&\\ &&=\mathbb{E}_x\left[ \sum_{t\geq 0}\ind_{\{L^{-1}_{t-} < \tau^-_0\}}e^{-\alpha L^{-1}_{t-} } \ind_{\{S^U_{L^{-1}_{t-}}\in B\}} e^{- \beta (\tau^-_0 - L^{-1}_{t-})} \ind_{\{S^U_{\tau_0^-} = S^U_{L^{-1}_{t-}}\}} \ind_{\{ U_{\tau^-_0 -} \in dy, - U_{\tau^-_0} \in dz\}} \right]. \end{eqnarray*} Note however that, on account of the fact that \[ S^U_{L^{-1}_{t-}}=S^U_{L^{-1}_t} = x+t - \int_0^{L^{-1}_t}\gamma(S_u)dS_u = x+t - \int_x^{x+t}\gamma(y)dy=\bar\gamma(x+t) \] we have that $\{S^U_{L^{-1}_t}\in B \} = \{\bar\gamma(x+t)\in B\}$. Note also that $L^{-1}_{t} = \tau^+_{\bar\gamma(x+t)}$ and
$L^{-1}$ is quasi left continuous. Hence, applying the compensation formula we have \begin{eqnarray*} \lefteqn{ \mathbb{E}\left( e^{-\alpha\kappa - \beta(\tau^-_0 - \kappa)}; S^U_{\tau^-_0}\in B, U_{\tau^-_0-}\in dy, - U_{\tau^-_0}\in dz \right) }&&\\ &&=\mathbb{E}_x\Big[ \int_0^\infty dt\cdot e^{-\alpha \tau^+_{\bar\gamma(x+t)} } \ind_{\{\bar\gamma(x+t)\in B\}} \ind_{\{\tau^+_{\bar\gamma(x+t)} <\tau^-_0 \}}\\ &&\hspace{1.5cm}\times n(e^{-\beta \rho_{\bar\gamma(x+t)}}; \bar\gamma(x+t) - \epsilon(\rho_{\bar\gamma(x+t)} -) \in dy, \epsilon(\rho_{\bar\gamma(x+t)})-\bar\gamma(x+t) \in dz)\Big]\\ &&= \int_B \frac{d\theta}{1- \gamma(\bar\gamma^{-1}(\theta))} \mathbb{E}_x\left[e^{-\alpha \tau^+_\theta } \ind_{\{\tau^+_\theta <\tau^-_0 \}}\right]
n(e^{-\beta \rho_\theta}; \theta - \epsilon(\rho_\theta -) \in dy, \epsilon(\rho_\theta)-\theta \in dz) \end{eqnarray*} where in the final equality we have applied a change of variable.
Now making use of the first part of Lemma \ref{aux-lemma} and the conclusion of Theorem \ref{T:survival} it thus follows that \begin{eqnarray*} \lefteqn{ \mathbb{E}\left( e^{-\alpha\kappa - \beta(\tau^-_0 - \kappa)}; S^U_{\tau^-_0}\in d\theta, U_{\tau^-_0-}\in dy, - U_{\tau^-_0}\in dz \right) }&&\\ &&= \frac{1}{1- \gamma(\bar\gamma^{-1}(\theta))} \exp\left\{-\int_x^\theta \frac{W^{(\alpha)\prime}(y)}{W^{(\alpha)}(y)(1-\gamma(\bar{\gamma}^{-1}(y)))}dy \right\} \\ &&\hspace{3cm}\left\{W^{(\beta)\prime}(\theta-y)-\frac{W^{(\beta)\prime}(\theta)}{W^{(\beta)}(\theta)}W^{(\beta)}(\theta-y)\right\}\nu(y+dz)d\theta dy \end{eqnarray*} as required.
$\square$\end{proof}
\subsection*{Acknowledgments} The first author acknowledges the support of EPSRC grant number EP/D045460/1. The second author is supported by an NSERC grant.
\end{document}
\begin{document}
\title[]{Solutions of a Linear Equation in a Subgroup of Units in a Function Field}
\author{Chia-Liang Sun} \maketitle
\begin{itemize} \item
Institute of Mathematics,
Academia Sinica\newline
E-mail: {\tt [email protected]} \end{itemize}
\begin{abstract} Over a large class of function fields, we show that the solutions of some linear equations in the topological closure of a certain subgroup of the group of units in the function field are exactly the solutions that are already in the subgroup. This result solves some cases of the function field analog of an old conjecture proposed by Skolem. \end{abstract}
\section{Introduction}\label{OnSkolemConj_intro} Let $K/k$ be a function field, i.e., $K$ is finitely generated over $k$ with transcendence degree $1$ such that $k$ is relatively algebraically closed in $K$. Let $\Omega_{K/k}$ be the set of all places of $K/k$. Let $M$ be a natural number, and $\mathbb A^M$ be the affine $M$-space, whose coordinate is denoted by ${\mathbf X}=(X_1,\ldots,X_M)$. For any two elements $\mathbf{a}=(a_1,\ldots,a_M)$ and $\mathbf{b}=(b_1,\ldots,b_M)$ in $\mathbb A^M(K^*)$, we define $\mathbf{ab}=(a_1b_1,\ldots,a_Mb_M)\in\mathbb A^M(K)$ and $\mathbf{a}\cdot{\mathbf b}=\sum_{i=1}^Ma_ib_i\in K$; we also write $\mathbf{b}\cdot{\mathbf X}$ for the function $\sum_{i=1}^Mb_iX_i$ on $\mathbb A^M$. For any subset $\Theta$ of some ring and any variety $V$ in $\mathbb A^M$, let $V(\Theta)$ denote the set of points on $V$ with each coordinate in $\Theta$. For any $\mathbf{b}\in \mathbb A^M(K^*)$, we denote by $W_{\mathbf{b}}$ (resp. $W'_{\mathbf{b}}$) the variety in $\mathbb A^M$ defined by $\mathbf{b}\cdot{\mathbf X} = 0$ (resp. $\mathbf{b}\cdot{\mathbf X} = 1$). We fix a cofinite subset $\Omega\subset\Omega_{K/k}$ and endow $\prod_{v\in\Omega}K_v^*$ with the natural product topology. Via the diagonal embedding, we identify any subgroup $\Gamma\subset K^*$ with its image in $\prod_{v\in\Omega}K_v^*$, and denote by $\overline{\Gamma}$ its topological closure. Moreover, for each $v\in\Omega_{K/k}$, the inclusion $\Gamma\subset K_v^*$ is continuous and therefore induces a subtopology of $\Gamma$, which will be referred to as the {\em $v$-adic subtopology}.
The purpose of this paper is to investigate the circumstances where the equalities \begin{eqnarray} &W_{\mathbf{b}}(\overline{\Gamma}) = W_{\mathbf{b}}(\Gamma) & \label{maineqW0}\\ &W'_{\mathbf{b}}(\overline{\Gamma}) = W'_{\mathbf{b}}(\Gamma) & \label{maineqW1} \end{eqnarray} hold. In the case where $M=1$, both sides of (\ref{maineqW0}) are always empty; however, Example 0 in \cite{Sun} shows that (\ref{maineqW1}) may fail unless we make some restrictions on the {\em largeness} of $\Gamma$. In the characteristic zero case, this is indeed the only assumption needed. \begin{thm}\label{OnSkolemConj_main0} Suppose that $k$ has characteristic $0$. Let $\Gamma\subset K^*$ be a subgroup contained in $O_S^*$ for some finite $S\subset\Omega_{K/k}$. Then for any $v\in\Omega_{K/k}$, the $v$-adic subtopology of $\Gamma$ is discrete. In particular, both (\ref{maineqW0}) and (\ref{maineqW1}) hold. \end{thm}
In the case where $k$ is finite, the main result in \cite{Sun} shows that (\ref{maineqW1}) holds if $M=1$ and $\Gamma$ is finitely generated. Nevertheless, Example \ref{x_plus_y} suggests that even in the case where $M=2$, both (\ref{maineqW0}) and (\ref{maineqW1}) can fail in general, unless we put some mixed assumptions on $\mathbf{b}$ and $\Gamma$. Recall that $K$ is {\em separably generated} over $k$ if there exists $t\in K$ such that $K$ is finite separable over $k(t)$. The separable Hilbert subset $H_k(f_1,\ldots,f_m; g)$ of $k$, where each $f_i(T,X)$ is a separable irreducible polynomial in $k(T)[X]$ and $g(T)$ is a nonzero polynomial in $k[T]$, consists of those $a\in k$ such that $g(a)\neq 0$ and each $f_i(a,X)$ is defined and irreducible in $k[X]$ \cite{FieldArith}. For each $i$, let $\psi_i:\mathbb A^M(K^*)\rightarrow\mathbb A^M(K^*)$ be the map which replaces the $i$-th component of $\mathbf{a}\in\mathbb A^M(K^*)$ by $1$ and keeps the others unchanged. We shall prove the following result. \begin{thm}\label{OnSkolemConj_main} Suppose that $k$ has characteristic $p$, and either \begin{itemize} \item that $k$ is finite, or \item that $K$ is separably generated over $k$, that each separable Hilbert subset of $k$ is infinite, and that $k$ contains only finitely many roots of unity. \end{itemize} Let $\mathbf{b}\in \mathbb A^M(K^*)$ be contained in $\mathbb A^M(O_S^*)$ for some finite $S\subset\Omega_{K/k}$, and $\Gamma\subset O_S^*$ be a subgroup. For each natural number $m$, let $R_m\subset \Gamma$ be a complete set of representatives of $\Gamma/\Gamma\cap (kK^{p^m})^*$. \begin{enumerate} \item Each $R_m$ is a finite set.\label{fin_con} \item Suppose that there is some $m$ such that for every $\mathbf{r}\in \mathbb A^M(R_m)$ the components of $\mathbf{br}$ are linearly independent over $kK^{p^m}$. 
Then both sides of (\ref{maineqW0}) are empty.\label{W0} \item Suppose that there is some $m$ such that for every $\mathbf{r}\in \mathbb A^M(R_m)$ the components of some $\psi_j(\mathbf{br})$ are linearly independent over $kK^{p^m}$. Then (\ref{maineqW1}) holds, and the common set is finite with its cardinality no larger than the number of $\mathbf{r}\in \mathbb A^M(R_m)$ such that the components of $\mathbf{br}$ and those of each $\psi_j(\mathbf{br})$ are linearly independent over $kK^{p^m}$. \label{W1} \end{enumerate} \end{thm} A large class of fields satisfy the assumption of Theorem \ref{OnSkolemConj_main} on $k$. In fact, if $k$ is a global field, then each separable Hilbert subset of $k$ is infinite (\cite{FieldArith}, Theorem 13.3.5) and $k$ contains only finitely many roots of unity; these two properties are preserved under a purely transcendental extension with arbitrary cardinality as its transcendence degree ({\em{op. cit.}}, Proposition 13.2.1) and any algebraic extension with a finite separable degree ({\em{op. cit.}}, Proposition 12.3.3).
Some brief remarks on the hypotheses of Theorem \ref{OnSkolemConj_main} follow. In the case where $M=1$, the assumptions in (\ref{W0}) and (\ref{W1}) are vacuous. When $k$ is finite and $M=2$, using Lemma 3 in \cite{axby}, we may replace the assumption in (\ref{W0}) with $b_1b_2^{-1}\notin\sqrt{\Gamma}$, and that in (\ref{W1}) with $\{b_1,b_2\}\not\subset\sqrt{\Gamma}$, where $\sqrt{G}=\{x\in K^*: x^n\in G \text{ for some }n\in\mathbb{N}\}$ for any subgroup $G\subset K^*$.
\begin{exm}\label{x_plus_y} Let $K=\mathbb F_p(t)$ be a purely transcendental extension of $\mathbb F_p$, and $\Gamma=\langle t, -t, 1-t\rangle$ be the subgroup of $K^*$ generated by $t$, $-t$, and $1-t$. Take $\Omega\subset\Omega_{K/k}$ to be a cofinite subset such that $\Gamma$ is contained in the valuation ring $O_v$ for every $v\in\Omega$, and $\mathbf{b}=(1,1)\in\mathbb A^2(K^*)$. The sequence $(t^{p^{n!}})_{n\geq 1}$ in $\Gamma$ converges to $\alpha\in \overline{\Gamma} \setminus K^*$ (\cite{Sun}, Example 1). Therefore we see that $(\alpha, -\alpha)\in W_{\mathbf{b}}(\overline{\Gamma})\setminus W_{\mathbf{b}}(\Gamma)$ and $(\alpha, 1-\alpha)\in W'_{\mathbf{b}}(\overline{\Gamma}) \setminus W'_{\mathbf{b}}(\Gamma)$. \end{exm}
In the case where $\Gamma\subset O_S^*$ and $\mathbf{b}\in\mathbb A^M(O_S^*)$ for some finite $S\subset\Omega_{K/k}$, the equalities (\ref{maineqW0}) and (\ref{maineqW1}) are stronger assertions than the function field analog of an old conjecture raised by Skolem \cite{Sko37}. To state his conjecture, we make the following abbreviation: $$ \begin{array}{lll} \mathfrak{S}_L(K/k,M,\mathbf{b}, S, \Gamma) & \text{stands for} &\text{For each nonzero ideal }I\subset O_S\text{ there exists }\\ &&\mathbf{x}\in\mathbb A^M(\Gamma)\text{ such that }\mathbf{b}\cdot{\mathbf x}\in I.\\ \mathfrak{S}_G(K/k,M,\mathbf{b}, S, \Gamma) & \text{stands for} & \text{There exists } \mathbf{x}\in\mathbb A^M(\Gamma) \text{ such that } \mathbf{b}\cdot{\mathbf x}=0.\\ \end{array} $$ In the case where $K$ is a number field and $S$ contains all Archimedean places, Skolem asserts that $\mathfrak{S}_L(K,M,\mathbf{b}, S, \Gamma)\Leftrightarrow\mathfrak{S}_G(K,M,\mathbf{b}, S, \Gamma)$ should always hold, and gives a proof when $M=2$ \cite{Sko37}. The validity of Skolem's assertion seems to have been largely ignored in the recent literature until Harari and Voloch \cite{HV} noticed its connection with (\ref{maineqW0}) and (\ref{maineqW1}). 
In fact, we may translate Theorem \ref{OnSkolemConj_main0} and Theorem \ref{OnSkolemConj_main} into results on Skolem's conjecture via the following equivalences: $$ \begin{array}{cccccc} \mathfrak{S}_L(K/k,M,\mathbf{b}, S, \Gamma) & \Leftrightarrow & W_{\mathbf{b}}(\overline{\Gamma})\neq\emptyset & \Leftrightarrow & W'_{\phi_i(\mathbf{b})}(\overline{\Gamma})\neq\emptyset &\text{ where } \Omega=\Omega_{K/k}\setminus S\\ \mathfrak{S}_G(K/k,M,\mathbf{b}, S, \Gamma) & \Leftrightarrow & W_{\mathbf{b}}(\Gamma)\neq\emptyset & \Leftrightarrow & W'_{\phi_i(\mathbf{b})}(\Gamma)\neq\emptyset & \end{array} $$ for each $i$, where $\phi_i:\mathbb A^M(K^*)\rightarrow\mathbb A^{M-1}(K^*)$ is defined by $$ (a_1,\ldots,a_M)\mapsto \left(-\frac{a_1}{a_i},\ldots,-\frac{a_{i-1}}{a_i},-\frac{a_{i+1}}{a_i},\ldots,-\frac{a_M}{a_i}\right). $$ For instance, if $K/k$ satisfies the assumptions in Theorem \ref{OnSkolemConj_main}, then $\mathfrak{S}_L(K/k,2,\mathbf{b}, S, \Gamma)\Leftrightarrow\mathfrak{S}_G(K/k,2,\mathbf{b}, S, \Gamma)$ always holds.
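To make the first pair of equivalences concrete (this illustration is not used elsewhere and follows directly from the definitions), consider $M=2$ with $\mathbf{b}=(b_1,b_2)$. Then $\phi_1(\mathbf{b})=-b_2/b_1$ and, since $\Gamma$ is a group and hence closed under division,
$$
b_1x_1+b_2x_2=0,\quad x_1,x_2\in\Gamma
\qquad\Longleftrightarrow\qquad
\left(-\frac{b_2}{b_1}\right)\frac{x_2}{x_1}=1,\quad \frac{x_2}{x_1}\in\Gamma,
$$
so that $W_{\mathbf{b}}(\Gamma)\neq\emptyset$ if and only if $W'_{\phi_1(\mathbf{b})}(\Gamma)\neq\emptyset$. The same coordinatewise manipulation applies with $\overline{\Gamma}$ in place of $\Gamma$, since $\overline{\Gamma}$ is again a subgroup of $\prod_{v\in\Omega}K_v^*$.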
\section{Proof of the Main Results}\label{pf} \begin{lemma}\label{LN} Let $\Gamma\subset K^*$ be a subgroup contained in $O_S^*$ for some finite $S\subset\Omega_{K/k}$. Then for any $v\in\Omega_{K/k}$, there is a subgroup $G_v\subset\Gamma$ which is open in the $v$-adic subtopology. If we further suppose that $k$ contains only finitely many roots of unity, then $G_v$ may be chosen such that $\sqrt{G_v}$ is finitely generated. \end{lemma} \begin{proof} Taking $G_v=\Gamma\cap(1+m_v)$, we see that $G_v$ is open in the $v$-adic subtopology of $\Gamma$ since $1+m_v$ is an open subgroup of $K_v^*$. The inclusion $\Gamma\subset O_S^*$ induces the map $G_v\rightarrow O_S^*/k^*$, which is injective because $k^*\cap(1+m_v)$ is trivial. The first conclusion follows from the fact that $O_S^*/k^*$ is finitely generated (Corollary 1 of Proposition 14.1, \cite{NTFF}). Now we show that $\sqrt{G_v}$ is finitely generated under the additional hypothesis that $k$ contains only finitely many roots of unity. Note that $\sqrt{G_v}\subset\sqrt{\Gamma}\subset O_S^*$, which again induces the map $\sqrt{G_v}\rightarrow O_S^*/k^*$ with its kernel contained in $k^*\cap\sqrt{K^*\cap(1+m_v)}$, which is exactly the group of roots of unity in $k$. This completes the proof. \end{proof}
{\it Proof of Theorem \ref{OnSkolemConj_main0}}: Fix $v\in\Omega_{K/k}$ and let $U_n=\Gamma\cap(1+m_v^n)$ for $n\geq 1$. Then $U_n$ is open in the $v$-adic subtopology of $\Gamma$. Since $k$ has characteristic zero, the quotient groups $U_n/U_{n+1}$ are torsion-free for all $n$. The proof of Lemma \ref{LN} shows that $U_1$ is finitely generated, hence $U_n$ is trivial for some $n$.\qed
\begin{lemma}\label{remain_prime} Suppose that $K$ is separably generated over $k$, and that each separable Hilbert subset of $k$ is infinite. Let $L$ be a finite separable extension of $K$. Then there are infinitely many $v\in\Omega_{K/k}$ which extend uniquely and unramifiedly to a place of $L$. \end{lemma} \begin{proof} Since $K$ is separably generated over $k$, it is enough to assume that $K=k(t)$, in which case the desired property follows from the infiniteness of the separable Hilbert subset $H_k(f; 1)$, where $L=K(y)$ with $y$ a root of the separable polynomial $f\in k(T)[X]$. \end{proof}
\begin{lemma}\label{ptop} Suppose that $K/k$ satisfies the assumptions in Theorem \ref{OnSkolemConj_main}. Let $\Gamma\subset K^*$ be a subgroup contained in $O_S^*$ for some finite $S\subset\Omega_{K/k}$. Then for any integer $m$ prime to $p$, the subgroup $\Gamma^m$ of $\Gamma$ is open. \end{lemma} \begin{proof} The case where $k$ is finite is proved in Lemma 12 of \cite{Sun}. Thus we assume that $k$ is infinite. By Lemma \ref{LN}, we may assume that $\sqrt{\Gamma}$ is finitely generated; then by Lemma 5 of \cite{Sun}, we may further assume that $\Gamma=\sqrt{\Gamma}$. Let $L$ be the finite Galois extension of $K$ obtained by adjoining all the $m$-th roots of every element in $\Gamma$. For each field $E$ such that $K\subset E \subset L$, Lemma \ref{remain_prime} yields a place $v_E\in\Omega$ which extends to a unique place $v_E$ of $E$ such that $[E_{v_E}:K_{v_E}]=[E:K]$. Let $S_0$ be the finite set consisting of those $v_E$ such that there is no proper intermediate field between $K$ and $E$. We shall complete the proof by showing that $\Gamma\cap U_{S_0}\subset \Gamma^m$, where $U_{S_0}=\prod_{v\in S_0}1+m_v$ is an open subgroup of $\prod_{v\in S_0}K_v^*$. In fact, we only have to show that $\Gamma\cap U_{S_0}\subset (K^*)^m$ because $\Gamma=\sqrt{\Gamma}$. Assume $x\in\Gamma\cap U_{S_0}\setminus (K^*)^m$ and let $F$ be the extension of $K$ obtained by adjoining an $m$-th root of $x$. Then we have $K\subsetneq E\subset F \subset L$ for some $E$ such that $v_E\in S_0$. Since $[E_{v_E}:K_{v_E}]=[E:K]\neq 1$, it follows that $x$ has no $m$-th root in $K_{v_E}$, which contradicts the assumption $x\in U_{S_0}$ by Hensel's lemma. \end{proof}
\begin{lemma} Suppose that $k$ has characteristic $p$, and that $k$ contains only finitely many roots of unity. Let $\Gamma\subset K^*$ be a subgroup contained in $O_S^*$ for some finite $S\subset\Omega_{K/k}$. Then for every $v\in\Omega_{K/k}$, any subgroup of $\Gamma$ containing $\Gamma^{p^n}$ for some $n\in\mathbb{N}$ is open in the $v$-adic subtopology of $\Gamma$. \end{lemma} \begin{proof} It suffices to show that those subgroups $\Gamma^{p^n}$ are open in the $v$-adic subtopology of $\Gamma$. As in the proof of Lemma \ref{ptop}, we may assume that $\Gamma=\sqrt{\Gamma}$ is finitely generated. Because $(K_v^*)^{p^n}\cap K^*= (K^*)^{p^n}$, we have $(K_v^*)^{p^n}\cap\Gamma\leq (K^*)^{p^n}\cap\Gamma=\Gamma^{p^n}$. Then it suffices to show that $(K_v^*)^{p^n}\cap\Gamma$ is open in the $v$-adic subtopology of $\Gamma$. Note that $(K_v^*)^{p^n}$ is closed in $K_v^*$ and consequently $K_v^*/(K_v^*)^{p^n}$ is Hausdorff. Consider the map $\Gamma\rightarrow K_v^*/(K_v^*)^{p^n}$ induced from the inclusion $\Gamma\subset K_v^*$, which is continuous with respect to the $v$-adic subtopology of $\Gamma$. Since this map factors through $\Gamma/\Gamma^{p^n}$, its image is finite, whence discrete. This completes our proof. \end{proof}
\begin{cor}\label{csp} Suppose that $K/k$ satisfies the assumptions in Theorem \ref{OnSkolemConj_main}. Let $\Gamma\subset K^*$ be a subgroup contained in $O_S^*$ for some finite $S\subset\Omega_{K/k}$. Then any subgroup of $\Gamma$ containing $\Gamma^m$ for some $m\in\mathbb{N}$ is open.\qed \end{cor}
\begin{cor}\label{closed} Suppose that $K/k$ satisfies the assumptions in Theorem \ref{OnSkolemConj_main}. Let $\Gamma\subset K^*$ be a subgroup contained in $O_S^*$ for some finite $S\subset\Omega_{K/k}$. Then $\Gamma$ is closed in $K^*$. \end{cor} \begin{proof} Let $P\in\overline{\Gamma}\cap K^*$. Lemma \ref{LN} (and its proof) shows that $P\in\overline{\Gamma_0}$ for some finitely generated subgroup $\Gamma_0\subset\Gamma$ such that $\Gamma_0$ is contained in a finitely generated closed subgroup $\Gamma_S\subset O_S^*$. By enlarging $S$, we may assume $S\cup\Omega=\Omega_{K/k}$ and hence both $O_S^*$ and $\Gamma_S$ are closed in $K^*$. By Corollary \ref{csp}, every subgroup of $\Gamma_S$ with finite index is open; hence $\Gamma_0$ is closed in $\Gamma_S$, and thus in $K^*$. This shows $P\in\Gamma_0\subset\Gamma$ and finishes our proof. \end{proof}
\begin{cor}\label{iso} Suppose that $K/k$ satisfies the assumptions in Theorem \ref{OnSkolemConj_main}. Let $\Gamma\subset K^*$ be a subgroup contained in $O_S^*$ for some finite $S\subset\Omega_{K/k}$. Then for any subgroup $\Delta$ of $\Gamma$ containing $\Gamma^m$ for some $m\in\mathbb{N}$, the homomorphism $$ \Gamma/\Delta\rightarrow\overline{\Gamma}/\overline{\Delta} $$ is bijective. \end{cor} \begin{proof} By Corollary \ref{closed}, we have $\Gamma\cap\overline{\Delta}=\Gamma\cap (\overline{\Delta}\cap K^*)=\Delta$, which is the desired injectivity. By Corollary \ref{csp}, $\Delta$ is open in $\Gamma$, hence the desired surjectivity follows. (c.f. Lemma 8, \cite{Sun}) \end{proof}
\begin{lemma}\label{temp} Suppose that $k$ has positive characteristic $p$, and that $K$ is separably generated over $k$. Then for any $n>0$, we have $K\cap\overline{k}K^{p^n}=kK^{p^n}$. \end{lemma} \begin{proof} Denote by $k^{\text{sep}}$ the separable closure of $k$ in $\overline{k}$. As $K\cap k^{\text{sep}}K^{p^n}$ is both separable and purely inseparable over $kK^{p^n}$, they are equal to each other; hence it remains to show that $K\cap\overline{k}K^{p^n}\subset k^{\text{sep}}K^{p^n}$. Let $K=k(t,y)$ with $y$ separable over $k(t)$. Then $K=k(t,y^{p^n})$ and thus $K\cap \overline{k}K^{p^n}=k(t,y^{p^n})\cap \overline{k}(t^{p^n},y^{p^n})\subset k^{\text{sep}}(t,y^{p^n})\cap \overline{k}(t^{p^n},y^{p^n})$. The irreducible polynomial of $y^{p^n}$ over $k^{\text{sep}}(t)$ is still irreducible over $\overline{k}(t)$, and therefore $k^{\text{sep}}(t,y^{p^n})\cap \overline{k}(t^{p^n},y^{p^n}) =k^{\text{sep}}(t^{p^n},y^{p^n})$ since $k^{\text{sep}}(t)\cap \overline{k}(t^{p^n})=k^{\text{sep}}(t^{p^n})$. This completes the proof. \end{proof}
\begin{lemma}\label{cts} Suppose that $k$ has positive characteristic $p$, and that $K$ is separably generated over $k$. Then for each $v\in\Omega_{K/k}$ and each $n>0$, any $kK^{p^n}$-linear map $\phi: K\rightarrow K$ is continuous with respect to the $v$-adic topology. \end{lemma} \begin{proof} Any $v\in\Omega_{K/k}$ gives a discrete valuation $v:K^*\rightarrow\mathbb Z$ such that $v(s)=1$ for some $s\in K^*$ and that $v((kK^{p^n})^*)\subset p^n\mathbb Z$. It follows that $\{s^j\}_{0\leq j\leq p^n-1}$ is $kK^{p^n}$-linearly independent. Since $K$ is separably generated over $k$, we have $[K:kK^{p^n}]=p^n$ and conclude that $\{s^j\}_{0\leq j\leq p^n-1}$ is a $kK^{p^n}$-linear basis for $K$. To prove this lemma, it is enough to show the continuity of $\phi$ at $0$. Let $x = \sum_{j=0}^{p^n-1} c_j s^j\neq 0$ with all $c_j\in kK^{p^n}$. Then $v(x) = \min_{c_j\neq 0} \left(v(c_j)+j\right)$ and $\phi(x) = \sum_{j=0}^{p^n-1} c_j \phi(s^j)$. Thus, we have $v(\phi(x)) \geq \min_{c_j\neq 0} \left( v(c_j)+v(\phi(s^j))\right)\geq v(x)+ \min_{0\leq j\leq p^n-1} \left(v(\phi(s^j))-j\right)$. This finishes the proof since $\min_{0\leq j\leq p^n-1} \left(v(\phi(s^j))-j\right)$ is independent of $x$. \end{proof}
For each $v\in\Omega_{K/k}$ and each natural number $m$, denote by $(kK^{p^m})^*_v$ the topological closure of $(kK^{p^m})^*$ in $K_v^*$.
\begin{prop}\label{OnSkolemConj_key} Suppose that $k$ has positive characteristic $p$, and that $K$ is separably generated over $k$. Let $\mathbf{b}\in (K^*)^M$, and let $m$ be a natural number. \begin{enumerate} \renewcommand{\labelenumi}{\alph{enumi})} \renewcommand{\theenumi}{\alph{enumi}} \item Suppose that the components of $\mathbf{b}$ are linearly independent over $kK^{p^m}$. Then we have $$W_{\mathbf{b}}\left((kK^{p^m})^*_v\right)=\emptyset\qquad\text{for all }v\in\Omega_{K/k}.$$\label{keyW0} \item Suppose that for some $j$ the components of $\psi_j(\mathbf{b})$ are linearly independent over $kK^{p^m}$. Then for some $P\in W'_{\mathbf{b}}(K)$ we have
$$
\prod_{v\in\Omega_{K/k}}W'_{\mathbf{b}}\left((kK^{p^m})^*_v\right)\subset \{P\}.
$$
If, moreover, the components of either $\mathbf{b}$ or some $\psi_l(\mathbf{b})$ are linearly dependent over $kK^{p^m}$, then we have $$W'_{\mathbf{b}}\left((kK^{p^m})^*_v\right)=\emptyset\qquad\text{for all }v\in\Omega_{K/k}.$$ \label{keyW1} \end{enumerate} \end{prop} \begin{proof}
Choose $t\in\overline{k}K$ such that $\overline{k}(t)\subset \overline{k}K \subset \overline{k}((t))$. By Remark 1 in \cite{VolochWronskians}, there exists an iterative derivation $\{D_{\overline{k}K}^{(i)}\}_{i\geq 0}$ on $\overline{k}K$ such that $\overline{k}K^{p^m}=\{x\in \overline{k}K:D_{\overline{k}K}^{(l)}(x)=0,\;\text{ if }\; 1\leq l< p^m\}$. Taking restriction gives an iterative derivation $\{D_K^{(i)}\}_{i\geq 0}$ on $K$ such that $\{x\in K :D_K^{(l)}(x)=0,\;\text{ if }\; 1\leq l< p^m\}=K\cap \overline{k}K^{p^m} =kK^{p^m}$ by Lemma \ref{temp}. By Lemma \ref{cts}, for each $v\in\Omega_{K/k}$, we extend each $\{D_K^{(i)}\}_{i\geq 0}$ to an iterative derivation $\{D_{K_v}^{(i)}\}_{i\geq 0}$ on $K_v$ by continuity, and note that $D_{K_v}^{(i)}|_{(kK^{p^m})^*_v}$ is the zero map for any $1\leq i<p^m$.
Fix some $v\in\Omega_{K/k}$ and some $\mathbf{c}=(c_1,\ldots, c_M)\in W_{\mathbf{b}}\left((kK^{p^m})^*_v\right)\cup W'_{\mathbf{b}}\left((kK^{p^m})^*_v\right)$. Then $\sum_{j=1}^M b_j c_j = e$, where $e\in\{0,1\}$. For any $0\leq i<p^m$, because $D_{K_v}^{(l)}(c_j)=0$ for all $1\leq l<p^m$, the iterative Leibniz rule gives \begin{equation}\label{keyeq} \sum_{j=1}^M D_K^{(i)}(b_j) c_j=D_K^{(i)}(e). \end{equation}
For any set $I=\{i_1,\ldots,i_M\}$ of $M$ nonnegative integers such that $0=i_1<i_2<\cdots<i_M<p^m$, denote by $\mathbf{c}$ (resp. $\mathbf{e}$) the $M$-by-$1$ matrix with the $l$-th component $c_l$ (resp. $D_K^{(i_l)}(e)$), and let $\mathbf{T}_{\mathbf{b},I}$ be the $M$-by-$M$ matrix with the entry $D_K^{(i_l)}(b_j)$ being at the $l$-th row and $j$-th column. From (\ref{keyeq}), we have $\mathbf{T}_{\mathbf{b},I}\mathbf{c}=\mathbf{e}$, which implies \begin{equation}\label{mateq} (\det \mathbf{T}_{\mathbf{b},I}) \mathbf{c}=\mathbf{T}_{\mathbf{b},I}^*\mathbf{e}, \end{equation} where $\mathbf{T}_{\mathbf{b},I}^*$ denotes the adjoint matrix of $\mathbf{T}_{\mathbf{b},I}$. If $e=0$ and the components of $\mathbf{b}$ are linearly independent over $kK^{p^m}$, then by Theorem 1 of \cite{VolochWronskians}, $\det \mathbf{T}_{\mathbf{b},I}\neq 0$ for some $I$, which contradicts ${\mathbf c}\neq 0$. This proves (\ref{keyW0}).
To prove (\ref{keyW1}), we consider the case where $e=1$. Note that the $j$-th component of $\mathbf{T}_{\mathbf{b},I}^*{\mathbf e}$ is exactly $\det\mathbf{T}_{\psi_j(\mathbf{b}),I}$. Under the assumption in the first part, Theorem 1 of \cite{VolochWronskians} implies that $\det \mathbf{T}_{\psi_j(\mathbf{b}),I}\neq 0$ for some $j$ and $I$. Hence there is at most one choice for ${\mathbf c}$ satisfying (\ref{mateq}), and this choice gives $P\in W'_{\mathbf{b}}(K)$. If the additional hypothesis also holds, then either $\det \mathbf{T}_{\mathbf{b},I}$ or some component of $\mathbf{T}_{\mathbf{b},I}^*{\mathbf e}$ is zero, and (\ref{mateq}) is impossible. \end{proof}
{\it Proof of Theorem \ref{OnSkolemConj_main}.} First, note that for each $m$ the kernel of the natural map $$ \Gamma\rightarrow\left(O_S^*/k^*\right)/\left(O_S^*/k^*\right)^m $$ is contained in $(kK^{p^m})^*$. This proves (\ref{fin_con}) since $O_S^*/k^*$ is finitely generated (Corollary 1 of Proposition 14.1, \cite{NTFF}). We have $$ \overline{\Gamma}=\bigcup_{\gamma\in R_m} \gamma\overline{\Gamma\cap (kK^{p^m})^*}\subset \bigcup_{\gamma\in R_m} \prod_{v\in\Omega}\gamma (kK^{p^m})^*_v, $$ where the first equality follows from Corollary \ref{iso}. This gives $$ W_{\mathbf{b}}(\overline{\Gamma})\subset \prod_{v\in\Omega} W_{\mathbf{b}}\left(\bigcup_{\gamma\in R_m} \gamma (kK^{p^m})^*_v\right) =\bigcup_{\mathbf{r}\in\mathbb A^M(R_m)}\prod_{v\in\Omega}\mathbf{r} W_{\mathbf{br}}\left((kK^{p^m})^*_v\right). $$ Proposition \ref{OnSkolemConj_key}(\ref{keyW0}) shows that $\prod_{v\in\Omega}W_{\mathbf{br}}\left((kK^{p^m})^*_v\right)=\emptyset$ for each $\mathbf{r}\in\mathbb A^M(R_m)$, proving (\ref{W0}).
Similarly, we have $$ W'_{\mathbf{b}}(\overline{\Gamma})\subset \bigcup_{\mathbf{r}\in\mathbb A^M(R_m)}\prod_{v\in\Omega}\mathbf{r} W'_{\mathbf{br}}\left((kK^{p^m})^*_v\right). $$ Let $U\subset \mathbb A^M(R_m)$ be the subset consisting of those $\mathbf{r}$ such that the components of $\mathbf{br}$ and those of each $\psi_j(\mathbf{br})$ are linearly independent over $kK^{p^m}$. By Proposition \ref{OnSkolemConj_key}(\ref{keyW1}), for each $\mathbf{u}\in U$ there exists some $P_{\mathbf{u}}\in W'_{\mathbf{bu}}(K)$ such that $\prod_{v\in\Omega}W'_{\mathbf{bu}}\left((kK^{p^m})^*_v\right)\subset\{P_{\mathbf{u}}\}$, while for each $\mathbf{r}\in \mathbb A^M(R_m)\setminus U$, we have $\prod_{v\in\Omega}W'_{\mathbf{br}}\left((kK^{p^m})^*_v\right)=\emptyset$. Then, $W'_{\mathbf{b}}(\overline{\Gamma}) \subset\{\mathbf{u}P_{\mathbf{u}}: \mathbf{u}\in U\}\subset W'_{\mathbf{b}}(K)$. It follows that $W'_{\mathbf{b}}(\overline{\Gamma}) \subset W'_{\mathbf{b}}(\overline{\Gamma})\cap W'_{\mathbf{b}}(K)=W'_{\mathbf{b}}(\overline{\Gamma}\cap K^*)= W'_{\mathbf{b}}(\Gamma)$, where the last equality is concluded by Corollary \ref{closed}. This finishes our proof.\qed
\end{document} |
\begin{document}
\title{Distinguishing Number for Some Circulant Graphs }
\begin{abstract} Introduced by Albertson et al. \cite{albertson}, the distinguishing number $D(G)$ of a graph $G$ is the least integer $r$ such that there is an $r$-labeling of the vertices of $G$ that is not preserved by any nontrivial automorphism of $G$. Most graphs studied in the literature have distinguishing number 2, the exceptions being complete graphs, complete multipartite graphs and cartesian products of complete graphs, whose distinguishing numbers depend on $n$. In this paper, we study circulant graphs of order $n$ where the adjacency is defined using a symmetric subset $A$ of $\mathbb{Z}_n$, called a generator. We give a construction of a family of circulant graphs of order $n$ and we show that the graphs in this class have distinct distinguishing numbers, which do not depend on $n$.
\end{abstract}
\section{Introduction}\label{sec:in} In 1979, F. Rudin \cite{rudin} proposed a problem in the Journal of Recreational Mathematics, introducing the concept of symmetry breaking in graphs. Albertson et al. \cite{albertson} studied the distinguishing number of a graph, defined as the minimum number of labels needed on the vertex set of the graph in order to destroy every nontrivial automorphism. The distinguishing number has received wide attention in recent years: many articles deal with this invariant for particular classes of graphs: trees \cite{tree}, hypercubes \cite{Bogstad}, product graphs \cite{klav_power} \cite{Imrich_cartes_power} \cite{klav_cliques} \cite{Fisher_1}, and interesting algebraic properties of the distinguishing number were given in \cite{Potanka} \cite{tym} and \cite{Z}. Most non-rigid graphs (i.e., graphs having at least one nontrivial automorphism) need just two labels to destroy every nontrivial automorphism. In fact, paths $P_n$ $(n>1)$, cycles $C_n$ $(n>5)$, hypercubes $Q_n$ $(n>3)$, the $r$-fold $(r>3)$ cartesian power $G^r$ of a graph $G$ of order $n>3$, and circulant graphs of order $n$ generated by $\{\pm 1,\pm 2,\dots, \pm k\}$ \cite{gravier} ($n\geq2k+3$) all have distinguishing number 2. However, complete graphs, complete multipartite graphs \cite{chrom} and cartesian products of complete graphs (see \cite{klav_cliques} \cite{Fisher_1} \cite{Fisher_2}) are among the few classes with a large distinguishing number; for these, the invariant increases with the order of the graph. In order to capture the structure of a graph of a given order $n$ with a prescribed distinguishing number, we build regular graphs $C(m,p)$ of order $mp$ whose adjacency is described by a generator $A$ $(A \subset \mathbb{Z}_{m.p})$. These graphs are generated by $A=\{(p-1)+ r.p, (p+1)+ r.p$ : $0\leq r \leq m-1\}$ for all $n=m.p \geq 3$. 
In fact, the motivation of this paper is to answer the following question, noted ${\mathcal{(Q)}}$:\\ ``Given a sequence of ordered and distinct integers $d_1,d_2,\dots,d_r$ in $\mathbb{N}^* \setminus \{1\}$, do there exist an integer $n$ and $r$ graphs $G_i$ $(1\leq i\leq r)$ of common order $n$ such that $D(G_i)=d_i$ for all $i=1,\dots,r$?"\\ In the following proposition, we give an answer to this question:
\begin{proposition}\label{disconnected} Given an ordered sequence of $r$ distinct integers $d_1,d_2,\dots,d_r$ with $r\geq2$ and $d_i\geq 2$ for $i=1,\dots,r$, there exist $r$ graphs $G_1,G_2,\dots,G_r$ of order $n$ such that $G_i$ contains a clique $K_{d_i}$ and $D(G_i)=d_i$ for all $1\leq i\leq r$. \end{proposition}
\begin{proof}
Suppose first that $d_1\neq 2$, and let $n=d_r$. For the integer $d_r$, we take $G_r \simeq K_{d_r}$, so that $D(G_r)=d_r$.\\ For the other integers, we consider the $(r-1)$ disconnected graphs $G_i$ having two connected components $C$ and $C'$ such that $C\simeq K_{d_i}$ and $C'$ is a path $P_{n-d_i}$, for all $i=1,\dots,(r-1)$.\\ Observe that when $d_1\neq 2$ or $n= d_r\neq4$, the connected components $C$ and $C'$ cannot be isomorphic. Consequently, every automorphism $\delta$ of a graph $G_i$ preserves each connected component, for all $1\leq i\leq r-1$. Moreover, $D(G_i)=\max (D(C),D(C'))=D(C)=d_i$ for all $1\leq i\leq r-1$. \\ If $d_1= 2$ and $n= d_r=4$, the same graphs are considered except for $G_1$, where we put $G_1\simeq P_4$. Then $D(G_1)=2=d_1$. \end{proof}
\noindent The graphs of Proposition \ref{disconnected} are not completely satisfying, since they are not connected. Furthermore, these graphs give no additional insight into graphs having high distinguishing number, since they just use cliques in the construction. So our purpose is to construct connected graphs with structural properties that answer question ${\mathcal{(Q)}}$. \begin{theorem}\label{main} Given an ordered sequence of $r$ distinct integers $d_1,d_2,\dots,d_r$ with $r\geq2$ and $d_i\geq 2$ for $i=1,\dots,r$, there exist $r$ connected circulant graphs $G_1,G_2,\dots,G_r$ of order $n$ such that $D(G_i)=d_i$. \end{theorem}
\noindent So, in section 1, basic definitions and preliminary results used in this paper are given. Then in section 2, we define the circulant graphs $C(m,p)$, $n=m.p\geq 3$, and provide interesting structural properties of this class of graphs. These properties are used to determine the associated distinguishing number, which is given in section 3; we also give the proof of Theorem \ref{main} in the same section. Finally, in section 4, we conclude with some remarks and a possible improvement of the answer to question ${\mathcal{(Q)}}$. \section{Definitions and Preliminary Results}\label{sec:1}
We only consider finite, simple, loopless, and undirected graphs $G=(V ,E)$, where $V$ is the vertex set and $E$ is the edge set. The \emph{complement} of $G$ is the simple graph $\overline{G}=(V,\overline{E})$ on the same vertex set $V$, where two vertices $u$ and $v$ are adjacent in $\overline{G}$ if and only if they are not adjacent in $G$. The \emph{neighborhood} of a vertex $u$, denoted by $N(u)$, consists of all the vertices $v$ which are adjacent to $u$. A \emph{complete graph} of order $n$, denoted $K_n$, is a graph on $n$ vertices in which any two distinct vertices are adjacent. A \emph{path} on $n$ vertices, denoted $P_n$, is a sequence of distinct vertices $v_1,v_2,\dots,v_n$ together with the $n-1$ edges $v_iv_{i+1}$, $1 \leq i \leq n - 1$. A path joining two distinct vertices $u$ and $v$ in $G$ is called a $uv$-path. A \emph{cycle} on $n$ vertices, denoted $C_n$, is a path $v_1, v_2, \dots, v_n$ together with the edge $v_nv_1$. For a graph $G$, the \emph{distance} $d_G(u, v)$ between vertices $u$ and $v$ is defined as the number of edges on a shortest $uv$-path.\\ Given a subset $A \subset \mathbb{Z}_n$ with $0 \not \in A$ and such that $-a\in A$ for all $a\in A$, a \emph{circulant graph} is a graph on the $n$ vertices $0,1,\dots,n-1$ where two vertices $i$ and $j$ are adjacent if $j-i$ modulo $n$ is in $A$.
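The definition of a circulant graph above translates directly into code. The following Python sketch (our own illustrative helper, not part of the paper's formalism) builds the adjacency lists of a circulant graph from a symmetric generator $A$:

```python
def circulant(n, A):
    """Adjacency lists of the circulant graph on Z_n with generator A."""
    A = {a % n for a in A}
    # the generator must not contain 0 and must be symmetric: -a in A for all a in A
    assert 0 not in A and all((-a) % n in A for a in A)
    return {i: sorted((i + a) % n for a in A) for i in range(n)}

# the cycle C_8 is the circulant graph on Z_8 generated by {+1, -1}
C8 = circulant(8, {1, -1})
```

For example, vertex $0$ of $C_8$ is adjacent exactly to $1$ and $7$, as the definition predicts.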
\noindent An \emph{automorphism} (or \emph{symmetry}) of a graph $G=(V,E)$ is a permutation $\sigma$ of the vertices of $G$ preserving adjacency, i.e., if $xy \in E$, then $\sigma(x)\sigma(y) \in E$. The set of all automorphisms of $G$, denoted $Aut(G)$, forms a group. A labeling of the vertices of a graph $G$, $c: V(G) \rightarrow \{1,2,\dots, r\}$, is said to be $r$-\emph{distinguishing} if $\forall \sigma \in Aut (G)\setminus \{Id_G\}$: $c \neq c \circ \sigma$. That means that for each automorphism $\sigma \neq id$ there exists a vertex $v\in V$ such that $c(v)\neq c(\sigma(v))$. The \emph{distinguishing number} of a graph $G$, denoted by $D(G)$, is the smallest integer $r$ such that $G$ has an $r$-distinguishing labeling. Since $Aut(G)=Aut(\overline{G})$, we have $D(G)=D(\overline{G})$. The distinguishing number of a complete graph of order $n$ is equal to $n$. The distinguishing number of complete multipartite graphs is given in the following theorem: \begin{theorem} \cite{chrom}\label{multipartite} Let $K_{a_1^{j_1} ,a_2^{j_2},\dots,a_r^{j_r}}$ denote the complete multipartite graph that has $j_i$ partite sets of size $a_i$ for $i = 1, 2,\dots,r$ and $a_1 > a_2 > \dots > a_r$. Then $D(K_{a_1^{j_1} ,a_2^{j_2},\dots,a_r^{j_r}})= \min \{p :\binom{p}{a_i} \geqslant j_i$ for all $i \}$. \end{theorem} Let us introduce the concept of modules, useful to investigate the distinguishing number in graphs. A \emph{module} in the graph $G$ is a subset $M$ of vertices which share the same neighborhood outside $M$, i.e., for all $y \in V \setminus M$: either $M \subseteq N(y)$, or $xy \not \in E$ for all $x\in M$. A trivial module in a graph $G$ is either the set $V$ or any singleton vertex. A nontrivial module $M$ of $G$ is said to be \emph{maximal} in $G$ if every nontrivial module $M'$ of $G$ containing $M$ is equal to $M$. 
The following lemma shows how modules can help us to estimate the value of distinguishing number in graphs: \begin{lemma}\label{module} Let $G$ be a graph and $M$ a module of $G$. Then, $D(G)\geq D(M)$ \end{lemma} \begin{proof}
\noindent Let $c$ be an $r$-labeling with $r<D(M)$. Since $r<D(M)$, there exists a nontrivial automorphism $\delta\mid_{M}$ of $M$ such that $c(x)=c(\delta\mid_{M}(x))$ for all $x \in M$, i.e., the restriction of $c$ to $M$ is not distinguishing. Now, let $\delta$ be the extension of $\delta \mid_{M}$ to $G$ with $\delta(x)=x$ for all $x \not \in M$ and $\delta(x)=\delta\mid_M(x)$ otherwise. We get $c(x)=c(\delta(x))$ for all $x \in G$. Moreover, $\delta \neq id$ since $\delta\mid_{M} \neq id\mid_{M}$. \end{proof}
\section{Circulant Graphs $C(m,p)$}\label{sec:2} \noindent In this section, we study the distinguishing number of the circulant graphs $C(m,p)$ of order $n=m.p\geq3$ with $m\geqslant 1$ and $p\geqslant 2$. A vertex $i$ is adjacent to $j$ in $C(m,p)$ iff $j-i$ modulo $n$ belongs to $A=\{p-1+r.p, p+1+ r.p$, $0\leq r \leq m-1\}$ (see Fig. \ref{weakly}). When $p>1$, these graphs are circulant since for all $0 \leq r\leq m-1$ the symmetric of $p-1+r.p$ is $1+p+(m-r-2)p$, which belongs to $A$, and $p>1$ implies that $0\notin A$. By construction, $C(m,1)$ is the clique $K_m$. Let us specify some other particular values of $p$ and $m$: $C(1,p)$ is the cycle $C_p$. Also we have $C(m,2)=K_{m,m}$ and $C(m,3)=K_{m,m,m}$. By Theorem \ref{multipartite}, $D(C(m,2))=D(C(m,3))=m+1$. Moreover, $D(C(1,p))=2$ for $p\geq6$. \begin{pr}\label{proper} The vertex set of $C(m,p)$ ($m\geqslant 2$ and $p\geqslant 2$) can be partitioned into $p$ stable modules $M_i=\{i+r.p:$ $ 0\leq r \leq m-1 \}$ of size $m$ for $i=0,\dots,p-1$. \end{pr} \begin{proof}
Given two distinct vertices $a, b \in M_i$ for $i=0,\dots,p-1$, we have $a-b\equiv rp[n]$ for some $0<r \leqslant m-1$; then $a-b \notin A$, which proves that each $M_i$ induces a stable set.
Moreover, it is clear that $\{M_i \}_{i=0,\dots, p-1}$ forms a partition of the vertex set of $C(m,p)$.\\ Let us prove that $M_i$ defines a module. For this, let $a=i+r_{a}\cdot p$ and $b=i+r_{b}\cdot p$ be two distinct vertices of a given stable set $M_i$.\\ Let $c \in V\setminus M_i$ such that $ac$ is an edge, and write $c=j+r_{c}\cdot p$.\
Let
$ r_{bc}=\left \{ \begin{array}{ll}
r_b-r_c & \mbox{if } r_b> r_c \\
m+(r_b - r_c) & \mbox{else } \end{array} \right. $ \hspace{12mm} $r_{ac}= \left \{ \begin{array}{ll}
r_a-r_c & \mbox{if } r_a> r_c \\
m+(r_a - r_c) &\mbox{else} \end{array} \right. $
be two integers such that $b-c\equiv (i-j)+r_{bc}\cdot p[n]$ and $a-c\equiv (i-j)+r_{ac}\cdot p[n]$ (with $0 \leqslant r_{ac} \leqslant m-1$ and $0 \leqslant r_{bc} \leqslant m-1$).\\ Since $a-c$ is in $A$, there is some integer $k$ verifying $0\leqslant k\leqslant r_{ac}$ such that $i-j+kp=p-1$ (or $=p+1$).\\
If $k\leqslant r_{bc}$, we obtain $b-c\equiv i-j+kp+(r_{bc}-k)\cdot p[n]$.\\ Then $b-c \equiv p-1+(r_{bc}-k)\cdot p[n]$ (or $\equiv p+1+(r_{bc}-k)\cdot p[n]$). We deduce that $b-c \in A$ since $0\leqslant r_{bc}-k \leqslant m-1$.\\
Otherwise, we have $r_{bc} < k \leqslant m+r_{bc}$ and $b-c \equiv i-j+r_{bc}\cdot p[n]$, hence $b-c \equiv i-j+(m+r_{bc})\cdot p[n]$. We get $b-c \equiv i-j +kp+(m+r_{bc}-k)\cdot p[n]$, which belongs to $A$ since $0\leqslant m+r_{bc}-k \leqslant m-1$. \end{proof} \begin{figure}
\caption{ Circulant graphs: the vertices of the same color are in the same module.}
\label{weakly}
\end{figure}
\noindent Since each $M_i$ ($0\leqslant i\leqslant p-1$) is a stable set, by the definition of a module we have:
\begin{pr} \label{permutation} Any permutation of elements of $M_i$ is an automorphism of $G$ for all $0\leqslant i \leqslant p-1$. \qed \end{pr}
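Property \ref{proper} can be checked numerically on small instances. The sketch below (our own helper names, assuming the generator $A=\{p-1+rp,\,p+1+rp\}$ from the text) builds $C(m,p)$ and verifies that each $M_i$ is a stable module:

```python
def cmp_graph(m, p):
    """Neighborhood sets of C(m,p) with generator A = {p-1+rp, p+1+rp : 0 <= r < m}."""
    n = m * p
    A = {(p - 1 + r * p) % n for r in range(m)} | {(p + 1 + r * p) % n for r in range(m)}
    return {i: {(i + a) % n for a in A} for i in range(n)}

def check_modules(m, p):
    """True iff every M_i = {i + rp} is a stable set sharing one outside neighborhood."""
    adj = cmp_graph(m, p)
    for i in range(p):
        M = {i + r * p for r in range(m)}
        if any(b in adj[a] for a in M for b in M):     # M_i must be stable
            return False
        outside = [adj[v] - M for v in M]              # identical neighborhoods outside
        if any(o != outside[0] for o in outside):
            return False
    return True
```

Since the generator $A$ is invariant under translation by $p$ modulo $n$, all vertices of $M_i$ indeed share the same neighborhood, and the check succeeds on small cases such as $C(3,5)$ and $C(4,7)$.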
\noindent By Lemma \ref{module} and Property \ref{proper}, we have $D(C(m,p))\geqslant m$. We will improve this bound:
\begin{theorem} \label{principal} For all $p \geq 2$ and for all $m \geq2$, $D(C(m,p)) = m+1$ if $p\neq 4$. \end{theorem} \section{Proof of Theorem \ref{main} and Theorem \ref{principal}}\label{sec:3} In this section, we first give the proof of Theorem \ref{principal}; the second part is devoted to the proof of Theorem \ref{main}.
\begin{lemma} \label{borne} For all $p \geq 2$ and for all $m \geq2$, $D(C(m,p)) > m$. \end{lemma} \begin{proof}
\noindent If $p=2$ (resp. $p=3$) then $C(m,2)\cong K_{m,m}$ (resp. $C(m,3)\cong K_{m,m,m}$). According to Theorem \ref{multipartite}, we have $D(C(m,p))>m$. Let $C(m,p)$ be the circulant graph generated by $A=\{p-1+rp, p+1+rp: 0\leqslant r \leqslant m-1\}$.
\noindent Let us suppose that $p>3$. Since the modules $M_i$ $(i=0,\dots, p-1)$ are stable sets of size $m$, by Lemma \ref{module} we have $D(C(m,p))\geq m$.\\ Let $c:V(C(m,p))\rightarrow \{1,2,\dots,m\}$ be an $m$-labeling of $C(m,p)$ $(m \geq 2)$; we prove that $c$ is not $m$-distinguishing.\\ By way of contradiction, assume that $c$ is $m$-distinguishing.
\noindent For all distinct vertices $v$, $w$ in a given module $M_{i_0}$ with $i_0\in \{0,1,\dots,p-1\}$, we have $c(v)\neq c(w)$; otherwise, the transposition $\tau$ of $v$ and $w$ verifies $c=c \circ \tau$, a contradiction. This means that every module $M_i$ carries all $m$ labels.
\noindent Let $P_j$ ($1\leqslant j \leqslant m$) be the set of indices $\{(j-1)p+i,i \in \{0, \dots, p-1\} \}$.
\noindent Let $v\in M_i$ ($0\leqslant i\leqslant p-1$); then $v=i+rp$ where $0\leqslant r \leqslant m-1$. \noindent Consider now the mapping $\delta_i$, $i=0,\dots,p-1$, defined as follows: $\delta_i: V \rightarrow V$ with $\delta_i(v)=(c(v)-1)p+i$ if $v \in M_i$ and $\delta_i (v)=v$ otherwise. By Property \ref{permutation}, $\delta_i$ defines an automorphism of $G$.\\ \noindent Let $\delta = \delta_0 \circ \dots \circ \delta_{p-1}$; $\delta$ is an automorphism of $G$.
\noindent Let $\psi$ be the mapping defined as follows: $\psi: V \rightarrow V$ with $\psi(i+rp)= p-(i+1) + rp$. Let us prove that $\psi$ is an automorphism of $G$.
\noindent Let $a=i+rp$ and $b=j+r'p$ be two adjacent vertices; then $b-a=j-i+(r'-r)p \in A$. We have $\psi(b)- \psi(a) = i-j +(r'-r)p$, which belongs to $A$ since $A$ is symmetric and invariant under the addition of multiples of $p$. Thus $\psi$ is an automorphism of $G$.
\noindent Let us check now that $\delta ^{-1} \circ \psi \circ \delta$ is a nontrivial automorphism of $G$ preserving the labeling $c$. See Fig. \ref{composition}.
\noindent First, $\delta ^{-1} \circ \psi \circ \delta$ is clearly an automorphism, being a composition of automorphisms.
\noindent Since $\delta ^{-1} \circ \psi \circ \delta(0) = \delta ^{-1} \circ \psi ( (c(0)-1)p +0) = \delta ^{-1} ( (c(0)-1)p+(p-1)) = u$ with $u \in M_{p-1}$ and $c(u)=c(0)$, we have $u\neq 0$ since $0 \in M_0$, $M_0 \neq M_{p-1}$ and $p> 1$. \noindent Thus $\delta ^{-1} \circ \psi \circ \delta$ is not the trivial automorphism.
\begin{figure}
\caption{The automorphism $\delta^{-1}\circ\psi\circ\delta$ applied to $C(4,4)$ with four labels (1,2,3,4)=(black,red, blue,green).}
\label{composition}
\end{figure}
\noindent To complete the proof, it is enough to show that $c(u)=c(\delta ^{-1} \circ \psi \circ \delta(u))$ for every vertex $u$.\\ \noindent Let $u=i+rp$; then $\delta ^{-1} \circ \psi \circ \delta (u) =\delta ^{-1} \circ \psi ((c(u)-1)p +i)= \delta ^{-1} ((c(u)-1)p+ p-(i+1)) =v$, where $v\in M_{p-(i+1)}$ and $c(v)=c(u)$.
\noindent Then $\delta ^{-1} \circ \psi \circ \delta$ preserves the labeling.
\end{proof}
\noindent The following result gives the exact value of $D(C(m,p))$:
\begin{lemma}\label{D(G)} For all $p\geq 2$ and $p\neq4$ and for all $m \geq 2$ : $D(C(m,p)) \leqslant m+1$ \end{lemma} \begin{proof}
If $p\in \{2,3\}$, the result is true by Theorem \ref{multipartite}. Suppose now that $p>4$, and let $c$ be the $(m+1)$-labeling defined as follows (see Fig. \ref{m+1color}):
\begin{figure}
\caption{ The $(m+1)$-labeling: the label of each vertex is given inside the cycle.}
\label{m+1color}
\end{figure}
\begin{equation*} c(v)= \left\{
\begin{array}{ll}
1 & \hspace{7mm} 0 \leqslant v \leqslant \lfloor \frac{p}{2}\rfloor \hspace{2mm} \text{or}\hspace{2mm} v=2p-1 \\
2 & \hspace{7mm} \lfloor \frac{p}{2}\rfloor < v \leqslant p-1 \\
j+1 & \hspace{7mm} v\in P_j \hspace{2mm} \text{and} \hspace{2mm}2\leqslant j\leqslant m \hspace{2mm}\text{and} \hspace{2mm} v\neq 2p-1
\end{array}
\right. \end{equation*}
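The $(m+1)$-labeling above can be written out explicitly. The sketch below (our own helper, using the convention $P_j=\{(j-1)p,\dots,jp-1\}$ from the proof of Lemma \ref{borne}) also checks the key fact that every module receives pairwise distinct labels:

```python
def labeling(m, p):
    """The (m+1)-labeling c of C(m,p) used in the proof (case p > 4)."""
    c = {}
    for v in range(m * p):
        j = v // p + 1              # v lies in the block P_j
        if v == 2 * p - 1:
            c[v] = 1
        elif j == 1:
            c[v] = 1 if v <= p // 2 else 2
        else:
            c[v] = j + 1
    return c
```

For instance, in $C(3,7)$ the labeling uses $m+1=4$ labels and each module $M_i=\{i,\,i+7,\,i+14\}$ carries three distinct labels.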
\noindent Suppose that there exists an automorphism $\delta$ preserving this labeling; let us prove that $\delta$ is trivial.
\noindent Since $p>4$, $0$ is the unique vertex labeled $1$ which has the following sequence of labels in its neighborhood: $(1,1,2,3,4,4,\dots,m+1,m+1)$. Thus $\delta(0)=0$.
\noindent We will use the following claim: \begin{claim}\label{distance} For each vertex $i$ in $C(m,p)$ where $0\leq i\leq p-1$, we have:
\begin{equation*} d(0,i)= \left\{
\begin{array}{ll}
i & \hspace{5mm} 1 \leqslant i \leqslant \lfloor \frac{p}{2}\rfloor \\
p-i & \hspace{5mm} \lfloor \frac{p}{2} \rfloor < i \leqslant p-1
\end{array}
\right. \end{equation*}
\end{claim} \begin{proof} \noindent First observe that for every pair of vertices $u$ and $v$ in the same module $M$ and every $z\in V\setminus M$, we have $d(u,z)=d(v,z)$ and $d(u,v)=2$.
Now, if we contract each module $M_i$ of $C(m,p)$, then we get a cycle on $p$ vertices which implies the claim. \end{proof}
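Claim \ref{distance} can be verified by breadth-first search on small instances. The sketch below (helper names are ours) recomputes $d(0,i)$ in $C(m,p)$ and compares it with the claimed formula:

```python
from collections import deque

def cmp_adj(m, p):
    # neighborhoods of C(m,p), generator A = {p-1+rp, p+1+rp : 0 <= r < m}
    n = m * p
    A = {(p - 1 + r * p) % n for r in range(m)} | {(p + 1 + r * p) % n for r in range(m)}
    return {i: [(i + a) % n for a in A] for i in range(n)}

def distances_from_zero(m, p):
    """BFS distances d(0, v) in C(m,p)."""
    adj, dist, queue = cmp_adj(m, p), {0: 0}, deque([0])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist
```

On $C(3,7)$, for example, the BFS distances $d(0,i)$ for $1\leq i\leq 6$ are $1,2,3,3,2,1$, matching $\min(i,\,p-i)$ as in the claim.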
\noindent Let us prove that each vertex labeled $1$ is fixed by the automorphism $\delta$.\\ Consider the table describing the sequence of labels in the neighborhood of a vertex $u$:
\begin{table} \begin{tabular}{lll} \hline\noalign{\smallskip} $u$ & $c(u)$ & $c(N(u))$ \\ \noalign{\smallskip}\hline\noalign{\smallskip} $0$ & $1$ & $1,1,2,3,4,4, \dots, m+1,m+1$\\ $0 < i < \lfloor \frac{p}{2}\rfloor$ & $1$ & $1,1,3,3,4,4, \dots, m+1,m+1$\\ $\lfloor \frac{p}{2}\rfloor$ & $1$ & $1,2,3,3,4,4, \dots, m+1,m+1$\\ $\lfloor \frac{p}{2}\rfloor < j < p-1$ & $2$ & $2,2,3,3,4,4, \dots, m+1,m+1$\\ $p-1$ & $2$ & $1,2,3,3,4,4, \dots, m+1,m+1$\\ $2p-1$ & $1$ & $1,2,3,3,4,4, \dots, m+1,m+1$\\ \noalign{\smallskip}\hline \end{tabular} \caption{The sequences of labels occurring in the neighborhoods of vertices.} \end{table}
For all $i$ such that $0< i< \lfloor \frac{p}{2} \rfloor$, the sequence of labels occurring in the neighborhood of the vertex $i$ is $(1,1,3,3, \dots, m+1, m+1)$. Moreover, for any two distinct vertices $u$ and $v$ such that $0< u,v< \lfloor \frac{p}{2} \rfloor$, we have $d(u,0)\neq d(v,0)$. Then, since $\delta(0)=0$, we get $\delta (u)=u$ and $\delta (v)=v$. Hence, for every vertex $i$ such that $0< i< \lfloor \frac{p}{2} \rfloor$, we obtain $\delta(i)=i$.\\
\noindent Moreover, the sequence of labels in the neighborhood of $2p-1$ and of $\lfloor \frac{p}{2}\rfloor$ is $\{1, 2, 3, 3, 4, 4, \dots, m+1, m+1 \}$. Since $d(\lfloor \frac{p}{2} \rfloor,0) > d(2p-1,0)=1$, we get $\delta(2p-1)=2p-1$ and $\delta(\lfloor \frac{p}{2} \rfloor)= \lfloor \frac{p}{2} \rfloor$.
\noindent Now observe that, by the previous claim, for any distinct vertices $u$ and $v$ labeled $2$ we have $d(u,0)\neq d(v,0)$. Then for any vertex $u$ such that $c(u)=2$, we have $\delta(u)=u$.
Finally, let us prove that each vertex $v$ in $C(m,p)\setminus (P_1\cup \{2p-1\})$ is fixed by the automorphism $\delta$. For that, it is enough to show that for every pair of distinct vertices $u$ and $v$ such that $c(u)=c(v)$, we have $N(u)\cap \{0,1,2,\dots,p-1\} \neq N(v)\cap \{0,1,2,\dots,p-1\}$. This implies that each vertex $v$ labeled $c(v)$ $(c(v)\geq2)$ is fixed by $\delta$, which concludes the proof of the theorem.
\noindent Let $u$ and $v$ be two distinct vertices such that $c(u)=c(v)$ with $u,v \in C(m,p)\setminus (P_1\cup \{2p-1\})$.\\
\noindent Since $c(u)=c(v)$, we have $u \in M_i$ and $v\in M_j$ with $i\neq j$. Then $i-1, i+1 \in N(u)$ and $j-1, j+1 \in N(v)$.
If $i=0$ then $p-1\in N(u)$ since $p\in M_i$. Similarly, if $i=p-1$, then $0\in N(u)$ since $mp-1\in M_i$.
\noindent Therefore, modulo $p$, we have that $i-1, i+1 \in N(u)\cap \{0,1,\dots,p-1 \}$ and $j-1, j+1 \in N(v)\cap \{0,1,\dots,p-1 \}$.
Additionally, observe that any vertex $u$ has exactly two neighbors among any $p$ consecutive vertices of $G$. Thus $N(u)\cap \{0,1, \dots,p-1\} =\{i-1, i+1 \; \; \bmod{p} \}$ and $N(v)\cap \{0,1, \dots,p-1\} =\{j-1, j+1\; \; \bmod{p}\}$.
\noindent Now, if $N(u)\cap \{0,1,\dots,p-1 \}= N(v)\cap \{0,1,\dots,p-1 \}$ and $i\neq j$, then $i+1=j-1$ and $i-1=j+1$ modulo $p$. Thus $j\equiv i+2$ and $j\equiv i-2 \pmod p$, hence $p=4$.
Since $p>4$, we get that $N(u)\cap \{0,1,\dots,p-1 \}\neq N(v)\cap \{0,1,\dots,p-1 \}$. \end{proof}
\noindent Lemma \ref{borne} and Lemma \ref{D(G)} together give the proof of Theorem \ref{principal}. The following result gives the value of the distinguishing number for $p=4$:
\begin{corollary}\label{p4} For each $m\geq 2$, $C(m,4)$ is isomorphic to $C(2m,2)$ $($or $K_{2m,2m})$ and $D(C(m,4))=$ $2m+1$. \end{corollary} \begin{proof}
\noindent The graph $C(m,4)$ is partitioned into four modules $M_0$, $M_1$, $M_2$, $M_3$. We have $N(M_0)=N(M_2)=M_1\cup M_3$ and $N(M_1)=N(M_3)=M_0\cup M_2$. Thus, the module $M_i$ is not maximal for $i \in \{0,1,2,3\}$. Furthermore, $M_0 \cup M_2$ and $M_1\cup M_3$ are stable sets of size $2m$. Then the graph $C(m,4)$ is the complete bipartite graph $K_{2m,2m}$, and $D(C(m,4))=D(K_{2m,2m})=D(C(2m,2))=2m+1$.
\noindent \textbf{PROOF OF THEOREM \ref{main}}
\noindent Let $d_1,d_2,\dots,d_r$ be an ordered sequence of distinct integers. Let $m_i=d_i -1$ for all $i=1,\dots,r$ and $p_i=\displaystyle\prod_{j\neq i} m_j$.
\noindent By definition, $m_i p_i=m_j p_j$ for all $i,j=1,\dots,r$.\\ If $p_i\neq 4$ for all $i$, let $n=m_i p_i$; otherwise, let $n=3m_i p_i$ and replace each $p_i$ by $3p_i$, so that no $p_i$ equals $4$.\\ Now, by Theorem \ref{principal}, $D(C(m_i,p_i))=m_i+1=d_i$ for all $i=1,\dots,r$.\\ So $(G_i)_i ={(C(m_{i},p_{i}))}_i$, $i=1,\dots,r$, is a family of connected circulant graphs of order $n$ such that $D(G_i)=d_i$. \qed
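The choice of parameters in the proof above can be sketched as follows (our own helper, mirroring the construction $m_i=d_i-1$, $p_i=\prod_{j\neq i}m_j$, with every $p_i$ tripled when the exceptional value $4$ occurs):

```python
def build_parameters(ds):
    """Given distinct d_1,...,d_r >= 2, return (m_i, p_i) pairs of common product n."""
    ms = [d - 1 for d in ds]
    total = 1
    for m in ms:
        total *= m
    pairs = [(m, total // m) for m in ms]      # p_i = product of m_j for j != i
    if any(p == 4 for _, p in pairs):
        # avoid the exceptional case p = 4 by tripling every p_i
        pairs = [(m, 3 * p) for m, p in pairs]
    return pairs
```

For $d=(3,4,5)$ this yields the pairs $(2,12)$, $(3,8)$, $(4,6)$, all of common order $n=24$; for $d=(3,5)$ the value $p=4$ appears and tripling gives $(2,12)$, $(4,6)$ of common order $24$.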
\section{Remarks and conclusion}\label{sec:4}
\noindent We have studied the structure of the circulant graphs $C(m,p)$ and determined their distinguishing number for all $m.p\geq 3$ with $m\geqslant 1$ and $p\geqslant 2$. We can summarize the results giving the value of the distinguishing number of the circulant graphs $C(m,p)$ as follows:
$D(C(m,p))=$ $\begin{cases} m & (m\geqslant 3 \; \; \text{and} \; \; p=1)\\ m+1 & (m=1 \; \; \text{and} \; \; p\geq 6) \; \; \text{or} \; \; (m\geq2, \; p\geq2, \; p\neq4) \\ 2m+1 & (m=1 \; \; \text{and} \; \; p\in\{3,4,5\}) \; \; \text{or} \; \; (m\geq2 \; \; \text{and} \; \; p=4) \end{cases}$
\noindent We deduce that for a given integer $n=\displaystyle\prod_{i=1}^{r} m_i$ with $r\geq 2$ and $m_i\geq 1$, we can build a family of graphs of the same order $n$ whose distinguishing numbers depend on the divisors of $n$. The main idea of the construction consists in partitioning the vertex set into modules of the same size, and circulant graphs provide a particularly well-suited structure. One may ask whether such a family of circulant graphs can be constructed with smaller order.
\noindent For instance, we can improve in Theorem \ref{main} the order $n$ of $(C(m_i,p_i))_i$ for $i=1,\dots r$, by taking $n=\frac{\displaystyle\prod_{i=1}^{r} m_i}{gcd(m_i, \displaystyle\prod_{j<i} m_j)}$.
\end{document} |
\begin{document}
\title{Mixing Property of Quantum Relative Entropy}
\author{Fedor Herbut} \affiliation {Serbian Academy of Sciences and Arts, Knez Mihajlova 35, 11000 Belgrade, Serbia and Montenegro}
\email{[email protected]}
\date{\today}
\begin{abstract} An analogue of the mixing property of quantum entropy is derived for quantum relative entropy. It is applied to the final state of ideal measurement and to the spectral form of the second density operator. Three cases of states on a directed straight line of relative entropy are discussed. \end{abstract}
\pacs{03.65.Ta 03.67.-a} \maketitle
Relative entropy plays a fundamental role in quantum information theory (see p. 15 in \cite{O-P} and the review articles \cite{Vedral}, \cite{Schum}, which have relative entropy in their titles).
The {\it relative entropy}
$S(\rho||\sigma)$ of a state (density operator) $\rho$ with respect to a state $\sigma$ is by definition
$$S(\rho||\sigma)\equiv {\rm tr} [\rho log(\rho )]-{\rm tr} [\rho log(\sigma)]\eqno{(1a)}$$ $$\mbox{if}\quad \mbox{supp}(\rho ) \subseteq \mbox{supp}(\sigma );\eqno{(1b)}$$
$$\mbox{or else}\quad S(\rho||\sigma)=+\infty \eqno{(1c)}$$ (see p. 16 in \cite{O-P}). By ``support'' is meant the subspace that is the topological closure of the range.
If $\sigma$ is singular and condition (1b) is valid, then the orthocomplement of the support (i.e., the null space) of $\rho$ contains the null space of $\sigma$, and both operators reduce in supp$(\sigma )$. Relation (1b) is valid in this subspace. Both density operators also reduce in the null space of $\sigma$. Here the $log$ is not defined, but it is multiplied by zero, and it is generally understood that zero times an undefined quantity is zero. We'll refer to this as {\it the zero convention}.
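Definition (1a)-(1c) together with the zero convention translate directly into a numerical sketch. The helper below is our own (not from \cite{O-P}); it uses natural logarithms and treats eigenvalues below a tolerance as exact zeros:

```python
import numpy as np

def relative_entropy(rho, sigma, tol=1e-12):
    """S(rho||sigma) in nats; returns inf when supp(rho) is not inside supp(sigma)."""
    er = np.linalg.eigvalsh(rho)
    es, vs = np.linalg.eigh(sigma)
    # condition (1b): tr(rho P_null(sigma)) must vanish, since rho is positive
    null_vecs = vs[:, es <= tol]
    if null_vecs.size and np.trace(null_vecs.T.conj() @ rho @ null_vecs).real > tol:
        return np.inf
    # tr(rho log rho) with the zero convention on null eigenvalues
    pos = er[er > tol]
    tr_rho_log_rho = float(np.sum(pos * np.log(pos)))
    # tr(rho log sigma), taken over the support of sigma
    tr_rho_log_sigma = 0.0
    for lam, v in zip(es, vs.T):
        if lam > tol:
            tr_rho_log_sigma += np.log(lam) * float((v.conj() @ rho @ v).real)
    return tr_rho_log_rho - tr_rho_log_sigma
```

As expected, $S(\rho||\rho)=0$, $S(\rho||\sigma)\geq 0$, and orthogonal supports yield $+\infty$ by (1c).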
The more familiar concept of (von Neumann) quantum entropy, $S(\rho )\equiv -{\rm tr} [\rho log(\rho )]$, also requires the zero convention. If the state space is infinite dimensional, then, in a sense, the entropy is almost always infinite (cf. p. 241 in \cite{Wehrl}). In finite-dimensional spaces, the entropy is always finite.
In contrast, relative entropy is often infinite even in finite-dimensional spaces (due to (1c)). Most results on relative entropy with general validity are {\it inequalities}, and the infinity fits well into them. It is similar with entropy. But there is one much-used {\it equality for entropy}, {\it the mixing property}, concerning {\it orthogonal state decomposition} (cf. p. 242 in \cite{Wehrl}):
$$\sigma =\sum_k w_k\sigma_k,\eqno{(2)}$$ $\forall k:\enskip w_k\geq 0$; for $w_k>0$, $\sigma_k>0,\enskip {\rm tr} \sigma_k=1$; $\sum_kw_k=1$. Then $$S(\sigma )=H(w_k)+ \sum_kw_kS(\sigma_k),\eqno{(3a)}$$ $$H(w_k)\equiv -\sum_k[w_klog(w_k)] \eqno{(3b)}$$ being the Shannon entropy of the probability distribution $\{w_k:\forall k\}$.
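For a block-diagonal (orthogonally decomposed) state, the mixing property (3a) can be checked numerically. The sketch below uses our own helper names and natural logarithms:

```python
import numpy as np

def von_neumann_entropy(rho, tol=1e-12):
    """S(rho) = -tr(rho log rho) in nats, with the zero convention."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > tol]
    return float(-np.sum(ev * np.log(ev)))

def block_diag(*blocks):
    n = sum(b.shape[0] for b in blocks)
    out, k = np.zeros((n, n)), 0
    for b in blocks:
        out[k:k + b.shape[0], k:k + b.shape[0]] = b
        k += b.shape[0]
    return out

# orthogonal decomposition sigma = w1*sigma1 (+) w2*sigma2, as in (2)
w = (0.5, 0.5)
sigma1 = np.diag([0.5, 0.5])
sigma2 = np.array([[1.0]])
sigma = block_diag(w[0] * sigma1, w[1] * sigma2)

shannon = -sum(wk * np.log(wk) for wk in w)      # H(w_k) of (3b)
lhs = von_neumann_entropy(sigma)
rhs = shannon + w[0] * von_neumann_entropy(sigma1) + w[1] * von_neumann_entropy(sigma2)
```

Here $\sigma={\rm diag}(1/4,1/4,1/2)$, and both sides of (3a) equal $\frac{3}{2}\log 2$.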
The {\it first aim} of this article is to derive an analogue of (3a), which will be called the {\it mixing property of relative entropy}. The {\it second aim} is to apply it to the derivation of two properties of the final state in ideal measurement, and to the spectral decomposition of $\sigma$ in the general case.
We will find it convenient to make use of an {\it extension} $log^e$ of the logarithmic function to the entire real axis: $$\mbox{if}\quad 0<x:\qquad log^e(x)\equiv log(x),\eqno{(4a)}$$ $$\mbox{if}\quad x\leq 0:\qquad log^e(x)\equiv 0.\quad \eqno{(4b)}$$
The following elementary property of the extended logarithm will be utilized.
Lemma 1: {\it If an orthogonal state decomposition (2) is given, then $$log^e(\sigma ) =\sum'_k [log(w_k)]Q_k+\sum'_k log^e (\sigma_k),\eqno{(5)}$$ where $Q_k$ is the projector onto the support of $\sigma_k$, and the prime on the sum means that the terms corresponding to $w_k=0$ are omitted.}
Proof: Spectral forms $\forall k, \enskip w_k>0:\enskip \sigma_k=\sum_{l_k}s_{l_k}\ket{l_k} \bra{l_k}\quad$ (all $s_{l_k}$ positive) give a spectral form $\sigma = \sum_k\sum_{l_k}w_ks_{l_k}\ket{l_k}\bra{l_k}$ of $\sigma$ on account of the orthogonality assumed in (2) and the zero convention. Since numerical functions define the corresponding operator functions via spectral forms, one obtains further $$log^e(\sigma )\equiv \sum_k\sum_{l_k}[log^e(w_ks_{l_k})]\ket{l_k} \bra{l_k}=$$ $$\sum_k'\sum_{l_k}[log(w_k)+log(s_{l_k})] \ket{l_k} \bra{l_k}=$$ $$\sum_k'[log(w_k)]Q_k+\sum_k' \sum_{l_k}[log(s_{l_k})]\ket{l_k} \bra{l_k}.$$ (In the last step $Q_k=\sum_{l_k}\ket{l_k}\bra{l_k}$ for $w_k>0$ was made use of.) The same is obtained from the RHS of (5) when the spectral forms of $\sigma_k$ are substituted in it.
$\Box$
Now we come to the main result.
Theorem 1: {\it Let condition (1b) be valid for the states $\rho$ and $\sigma$, and let an orthogonal state decomposition (2) be given. Then
$$S(\rho||\sigma)=S\Big(\sum_kQ_k\rho Q_k\Big)-S(\rho )+$$
$$H(p_k||w_k)+\sum_kp_k S(Q_k\rho Q_k/p_k||\sigma_k),\eqno{(6)}$$ where, for $w_k>0$, $Q_k$ projects onto the support of $\sigma_k$, and $Q_k\equiv 0$ if $w_k=0$, $p_k\equiv {\rm tr} (\rho Q_k)$, and
$$H(p_k||w_k)\equiv \sum_k[p_klog(p_k)]-\sum_k[p_klog(w_k)] \eqno{(7)}$$ is the classical discrete counterpart of the quantum relative entropy, valid because $(p_k>0)\enskip \Rightarrow (w_k>0)$.}
One should note that the claimed validity of the classical analogue of (1b) is due to the definitions of $p_k$ and $Q_k$. Besides, (2) implies that $(\sum_kQ_k)$ projects onto supp$(\sigma )$. Further, as a consequence of (1b), $(\sum_kQ_k)\rho =\rho$. Hence, ${\rm tr} \Big(\sum_kQ_k\rho Q_k\Big)=1$.
Proof of theorem 1: We define $$\forall k,\enskip p_k>0:\quad \rho_k\equiv Q_k\rho Q_k/p_k.\eqno{(8)}$$ First we prove that (1b) implies $$\forall k,\enskip p_k>0:\quad \mbox{supp}(\rho_k)\subseteq \mbox{supp} (\sigma_k).\eqno{(9)}$$
Let $k$, $p_k>0$, be an arbitrary fixed value. We take a pure-state decomposition $$\rho =\sum_n\lambda_n\ket{\psi_n}\bra{\psi_n}, \eqno{(10a)}$$ $\forall n:\enskip \lambda_n>0$. Applying $Q_k\dots Q_k$ to (10a), one obtains another pure-state decomposition $$Q_k\rho Q_k=p_k\rho_k =\sum_n\lambda_nQ_k\ket{\psi_n}\bra{\psi_n} Q_k\eqno{(10b)}$$ (cf (8)). Let $Q_k\ket{\psi_n}$ be a nonzero vector appearing in (10b). Since (10a) implies that $\ket{\psi_n}\in \mbox{supp}(\rho )$ (cf Appendix (ii)), condition (1b) further implies $\ket{\psi_n}\in \mbox{supp}(\sigma )$. Let us write down a pure-state decomposition $$\sigma =\sum_m \lambda'_m\ket{\phi_m}\bra{\phi_m} \eqno{(11a)}$$ with $\ket{\phi_1}\equiv \ket{\psi_n}$. (This can be done with $\lambda'_1>0$, cf \cite{Hadji}.) Then, applying $Q_k\dots Q_k$ to (11a) and taking into account (2), we obtain the pure-state decomposition $$Q_k\sigma Q_k=w_k\sigma_k=\sum_m \lambda'_mQ_k\ket{\phi_m}\bra{\phi_m} Q_k. \eqno{(11b)}$$ (Note that $w_k>0$ because $p_k>0$ by assumption.) Thus, $Q_k\ket{\psi_n}=Q_k\ket{\phi_1}\in \mbox{supp}(\sigma_k)$. This is valid for any nonzero vector appearing in (10b), and these vectors span supp$(\rho_k)$ (cf Appendix (ii)). Therefore, (9) is valid.
On account of (1b), the standard logarithm can be replaced by the extended one in definition (1a) of relative entropy: $$ S(\rho ||\sigma )=-S(\rho)-{\rm tr} [\rho \log^e(\sigma )].$$ Substituting (2) into the RHS, and utilizing (5), the relative entropy
$S(\rho ||\sigma )$ becomes $$-S(\rho )-{\rm tr} \Big\{\rho \Big[\sum_k'[\log(w_k)]Q_k+\sum_k'[ \log^e(\sigma_k)]\Big]\Big\}=$$ $$-S(\rho )-\sum_k'[p_k\log(w_k)]-\sum_k'{\rm tr} [\rho \log^e(\sigma_k)].$$ Adding and subtracting $H(p_k)$ (cf (3b)), replacing $\log^e(\sigma_k)$ by $Q_k[\log^e(\sigma_k)]Q_k$, and taking into account (7) and (8), one further obtains
$$S(\rho ||\sigma )=-S(\rho )+H(p_k)+H(p_k||w_k)-\sum_k'p_k{\rm tr} [\rho_k\log^e(\sigma_k)].$$ (The zero convention is needed for the last term because for $p_k=0$ the density operator $Q_k\rho Q_k/p_k$ is not defined. Note that replacing $\sum_k$ by $\sum_k'$ in (7) does not change the LHS, because only $p_k=0$ terms are omitted.)
Adding and subtracting the entropies $S(\rho_k)$ in the sum, one further has
$$S(\rho ||\sigma )=-S(\rho )+H(p_k)+H(p_k||w_k)+$$ $$\sum_k'p_kS(\rho_k)+\sum_k'p_k\{-S(\rho_k) -{\rm tr} [\rho_k\log^e(\sigma_k)]\}.$$ Utilizing the mixing property of entropy (3a), one can put $S\Big(\sum_kp_k\rho_k\Big)$ in place of $[H(p_k)+\sum_k'p_kS(\rho_k)]$. Owing to (9), we can replace $\log^e$ by the standard logarithm and thus obtain the RHS of (6).
$\Box$\\
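As an independent numerical sanity check of the mixing property (6) (not part of the original proof), the identity can be tested with NumPy on a four-dimensional example. The two-block decomposition, the random seed, and the helper names below are illustrative choices of ours, and all states are taken full rank so that every logarithm is finite:

```python
# Numerical check of the mixing property (6) on a 4-dimensional example,
# with Q_1, Q_2 projecting onto span{e0,e1} and span{e2,e3}.
import numpy as np

rng = np.random.default_rng(1)

def rand_state(dim):
    # generic full-rank density matrix
    g = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    a = g @ g.conj().T
    return a / np.trace(a).real

def logm(a):
    # matrix logarithm of a positive-definite Hermitian matrix
    w, v = np.linalg.eigh(a)
    return v @ np.diag(np.log(w)) @ v.conj().T

def S(a):
    # von Neumann entropy (natural log)
    w = np.linalg.eigvalsh(a)
    w = w[w > 1e-12]
    return float(-(w * np.log(w)).sum())

def Srel(a, b):
    # quantum relative entropy S(a||b) for full-rank arguments
    return float(np.trace(a @ (logm(a) - logm(b))).real)

blocks = [slice(0, 2), slice(2, 4)]
rho = rand_state(4)

# sigma = sum_k w_k sigma_k, an orthogonal decomposition as in (2)
wts = rng.random(2)
wts /= wts.sum()
sigma = np.zeros((4, 4), complex)
for b, wk in zip(blocks, wts):
    sigma[b, b] = wk * rand_state(2)

p = np.array([np.trace(rho[b, b]).real for b in blocks])
rho_k = [rho[b, b] / pk for b, pk in zip(blocks, p)]
sig_k = [sigma[b, b] / wk for b, wk in zip(blocks, wts)]

# RHS of (6): S(sum_k p_k rho_k) - S(rho) + H(p||w) + sum_k p_k S(rho_k||sigma_k)
mix = np.zeros((4, 4), complex)
for b, pk, rk in zip(blocks, p, rho_k):
    mix[b, b] = pk * rk
H_pw = float((p * np.log(p / wts)).sum())
rhs = S(mix) - S(rho) + H_pw + sum(
    pk * Srel(rk, sk) for pk, rk, sk in zip(p, rho_k, sig_k))
lhs = Srel(rho, sigma)
```

The check uses natural logarithms throughout; any base works as long as it is used consistently on both sides.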
{\it Some Applications of the Mixing Property} - Let $\rho$ be a state and $A=\sum_ia_iP_i+\sum_ja_jP_j$ a spectral form of a discrete observable (Hermitian operator) $A$, where the eigenvalues $a_i$ and $a_j$ are all distinct. The index $i$ enumerates all the detectable eigenvalues, i.e., $\forall i:\enskip {\rm tr} (\rho P_i)>0$, and ${\rm tr} [\rho (\sum_iP_i)]=1$.
After an {\it ideal measurement} of $A$ in $\rho$, the entire ensemble is described by the {\it L\"{u}ders state}: $$\rho_L(A)\equiv \sum_iP_i\rho P_i\eqno{(12)}$$ (cf \cite{Lud}). (One can take more general observables that are ideally measurable in $\rho$ cf \cite{Roleof}. For simplicity we confine ourselves to discrete ones.)
Corollary 1: {\it The relative-entropic
``distance'' from any quantum state to its L\"{u}ders state is the difference between the corresponding quantum entropies:} $$S\Big(\rho ||\sum_iP_i\rho P_i\Big)=S\Big(\sum_iP_i\rho P_i\Big)-S(\rho ).\eqno{(13)}$$
Proof: First we must prove that $$\mbox{supp}(\rho )\subseteq \mbox{supp}\Big(\sum_iP_i\rho P_i\Big).\eqno{(14)}$$ To this purpose, we write down a decomposition (10a) of $\rho$ into pure states. One has $\mbox{supp}(\sum_iP_i)\supseteq \mbox{supp}(\rho )$ (equivalent to the certainty of $(\sum_iP_i)$ in $\rho$, cf \cite{Roleof}), and the decomposition (10a) implies that each $\ket{\psi_n}$ belongs to $\mbox{supp}(\rho )$. Hence, $\ket{\psi_n}\in \mbox{supp}(\sum_iP_i)$; equivalently, $\ket{\psi_n}=(\sum_iP_i)\ket{\psi_n}$. Therefore, one can write $$\forall n:\quad \ket{\psi_n}=\sum_i(P_i \ket{\psi_n}).\eqno{(15a)}$$ Further, (10a) implies $$\sum_iP_i\rho P_i=\sum_i\sum_n\lambda_nP_i\ket{\psi_n} \bra{\psi_n}P_i.\eqno{(15b)}$$ As seen from (15b), all vectors $(P_i\ket{\psi_n})$ belong to supp$(\sum_iP_i\rho P_i)$. Hence, so do all $\ket{\psi_n}$ (due to (15a)). Since $\rho$ is the mixture (10a) of the $\ket{\psi_n}$, the latter span $\mbox{supp}(\rho )$. Thus, finally, also (14) follows.
In our case $\sigma \equiv \sum_iP_i\rho P_i$ in (6). We replace $k$ by $i$. Next, we establish $$\forall i:\quad Q_i\rho Q_i=P_i\rho P_i.\eqno{(16)}$$ Since $Q_i$ is, by definition, the support projector of $(P_i\rho P_i)$, and $P_i(P_i\rho P_i)=(P_i\rho P_i)$, one has $P_iQ_i=Q_i$ (see Appendix (i)). One can write $P_i\rho P_i=Q_i( P_i\rho P_i)Q_i$, from which then (16) follows.
Realizing that $w_i\equiv {\rm tr} (Q_i\rho Q_i)={\rm tr} (P_i\rho P_i)\equiv p_i$ due to
(16), one obtains $H(p_i||w_i)=0$ and
$$\forall i:\quad S(Q_i\rho Q_i/p_i ||P_i\rho P_i/w_i)=0$$ in (6) for the case at issue. This completes the proof.
$\Box$
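Corollary 1 can likewise be spot-checked numerically (again a sketch of ours, not part of the paper), assuming NumPy and a full-rank random state so that the support condition (14) holds automatically:

```python
# Numerical check of (13): S(rho || rho_L) = S(rho_L) - S(rho)
import numpy as np

rng = np.random.default_rng(2)
g = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = g @ g.conj().T
rho /= np.trace(rho).real            # random full-rank density matrix

def logm(a):
    # matrix logarithm of a positive-definite Hermitian matrix
    w, v = np.linalg.eigh(a)
    return v @ np.diag(np.log(w)) @ v.conj().T

def S(a):
    # von Neumann entropy (natural log)
    w = np.linalg.eigvalsh(a)
    w = w[w > 1e-12]
    return float(-(w * np.log(w)).sum())

# Lueders state (12) for two orthogonal projectors
P = [np.diag([1., 1., 0., 0.]), np.diag([0., 0., 1., 1.])]
rho_L = sum(Pi @ rho @ Pi for Pi in P)

lhs = float(np.trace(rho @ (logm(rho) - logm(rho_L))).real)
rhs = S(rho_L) - S(rho)
```

The nonnegativity of the right-hand side also illustrates the well-known fact that an ideal measurement cannot decrease the entropy.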
Now we turn to a peculiar further implication of corollary 1.
Let $B=\sum_k\sum_{l_k}b_{kl_k}P_{kl_k}$ be a spectral form of a discrete observable (Hermitian operator) $B$ such that all eigenvalues $b_{kl_k}$ are distinct. Besides, let $B$ be more complete than $A$ or, synonymously, a refinement of the latter. This, by definition, means that $$\forall k:\quad P_k=\sum_{l_k}P_{kl_k}\eqno{(17)}$$ is valid. Here $k$ enumerates both the $i$ and the $j$ index values in the spectral form of $A$.
Let $\rho_L(A)$ and $\rho_L(B)$ be the L\"{u}ders states (12) of $\rho$ with respect to $A$ and $B$ respectively.
Corollary 2: {\it The states $\rho$,
$\rho_L(A)$, and $\rho_L(B)$ lie on a straight line with respect to relative entropy, i.e., $$S\Big(\rho || \rho_L(B)\Big)=S\Big(\rho
||\rho_L(A)\Big)+S\Big(\rho_L(A)|| \rho_L(B)\Big),\eqno{(18a)}$$ or explicitly:} $$S\Big(\rho
||\sum_i\sum_{l_i}(P_{il_i}\rho P_{il_i})\Big)=S\Big(\rho
||\sum_i(P_i\rho P_i)\Big)+$$
$$ S\Big(\sum_i(P_i\rho P_i)|| \sum_i\sum_{l_i}(P_{il_i} \rho P_{il_i})\Big).\eqno{(18b)}$$
Note that all eigenvalues $b_{kl_k}$ of $B$ with indices other than $il_i$ are undetectable in $\rho$.
Proof follows immediately from corollary 1 because
$$S\Big(\rho ||\rho_L(B)\Big) =\Big[S\Big(\rho_L(B)\Big)- S\Big(\rho_L(A)\Big)\Big]+$$ $$\Big[S\Big(\rho_L(A)\Big)-S(\rho )\Big],$$ and, as easily seen from (12), $\rho_L(B)= \Big(\rho_L(A)\Big)_L(B)$ due to $P_{il_i}P_{i'}=\delta_{i,i'}P_{il_i}$ (cf (17)).
$\Box$
Next, we derive another consequence of theorem 1.
Corollary 3: {\it Let $\{p_k:\forall k\}$ and $\{w_k:\forall k\}$ be probability distributions such that $p_k>0\enskip \Rightarrow \enskip w_k>0$. Then,
$$H(p_k||w_k)=S\Big(\sum_kp_k\ket{k}\bra{k}
||\sum_kw_k\ket{k}\bra{k}\Big),\eqno{(19)}$$ where the LHS is given by (7), and the orthonormal set of vectors $\{\ket{k}:\forall k\}$ is arbitrary.}
Proof: Applying (6) to the RHS of (19), one obtains $$RHS(19)=S\Big(\sum_kp_k\ket{k}\bra{k}\Big) -S\Big(\sum_kp_k\ket{k}\bra{k}\Big)+$$
$$H(p_k||w_k)+\sum_kp_kS(\ket{k}\bra{k}|| \ket{k}\bra{k})=LHS(19).$$
$\Box$
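A minimal numerical illustration of (19), assuming NumPy; the distributions below are arbitrary strictly positive ones, so the condition $p_k>0\Rightarrow w_k>0$ holds:

```python
# Numerical check of (19): H(p||w) equals the quantum relative entropy
# of the corresponding commuting (diagonal) states.
import numpy as np

rng = np.random.default_rng(3)
p = rng.random(5); p /= p.sum()
w = rng.random(5); w /= w.sum()

H_pw = float((p * np.log(p / w)).sum())     # classical relative entropy (7)

def logm(a):
    # matrix logarithm of a positive-definite Hermitian matrix
    ev, v = np.linalg.eigh(a)
    return v @ np.diag(np.log(ev)) @ v.conj().T

rho, sigma = np.diag(p).astype(complex), np.diag(w).astype(complex)
S_rel = float(np.trace(rho @ (logm(rho) - logm(sigma))).real)
```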
Finally, a quite different general result also follows from the mixing property (6).
Theorem 2: {\it Let $S(\rho ||\sigma )$ be the relative entropy of any two states such that (1b) is satisfied. Let, further, $$\sigma
=\sum_kw_k\ket{k}\bra{k}\eqno{(20)}$$ be a spectral form of $\sigma$ in terms of eigenvectors. Then $$S(\rho ||\sigma )=
S\Big(\rho ||\sum_k(\ket{k}\bra{k}\rho \ket{k}\bra{k})\Big)+$$ $$S\Big(\sum_k(\ket{k}\bra{k} \rho
\ket{k}\bra{k})||\sigma \Big).\eqno{(21)}$$ Thus, the states $\rho$, $\sum_k(\ket{k}\bra{k} \rho \ket{k}\bra{k})$ (cf (20) for $\ket{k}\bra{k}$), and $\sigma$ lie on a directed straight line of relative entropy.}
Proof: Application of (6) to the LHS(21), in view of (20), leads to
$$S(\rho ||\sigma )=S\Big(\sum_k(\ket{k}\bra{k} \rho \ket{k}\bra{k})\Big)-S(\rho )+$$
$$H(p_k||w_k)+\sum_kp_kS(\ket{k}\bra{k}
||\ket{k}\bra{k}).$$ In view of $\enskip p_k= \bra{k}\rho \ket{k}$, (13), (19), and (20), this equals RHS(21).
$\Box$
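Theorem 2 can also be verified numerically (an illustrative sketch of ours, assuming NumPy and generic full-rank states); the dephasing of $\rho$ in the eigenbasis of $\sigma$ is built from the eigenvectors returned by `eigh`:

```python
# Numerical check of (21): rho, its dephasing in the eigenbasis of sigma,
# and sigma lie on a straight line of relative entropy.
import numpy as np

rng = np.random.default_rng(4)

def rand_state(dim):
    # generic full-rank density matrix
    g = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    a = g @ g.conj().T
    return a / np.trace(a).real

def logm(a):
    w, v = np.linalg.eigh(a)
    return v @ np.diag(np.log(w)) @ v.conj().T

def Srel(a, b):
    # quantum relative entropy S(a||b) for full-rank arguments
    return float(np.trace(a @ (logm(a) - logm(b))).real)

rho, sigma = rand_state(4), rand_state(4)
_, V = np.linalg.eigh(sigma)                 # columns are eigenvectors |k>
# D = sum_k |k><k| rho |k><k|, the dephasing of rho in that basis
D = V @ np.diag(np.diag(V.conj().T @ rho @ V).real) @ V.conj().T

lhs = Srel(rho, sigma)
rhs = Srel(rho, D) + Srel(D, sigma)
```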
It is well known that the relative-entropic ``distance'', unlike the Hilbert-Schmidt (HS) one, fails to satisfy the triangle rule, which requires that the distance between two states must not exceed the sum of the distances when a third state is interpolated. But, and this is part of the triangle rule, one has equality if and only if the interpolated state lies on a straight line with the two states. As is seen from corollary 2 and theorem 2 as examples, the relative-entropic ``distance'' does satisfy the equality part of the triangle rule.
An interpolated state lies on the HS line between two states if and only if it is a convex combination of the latter. Evidently, this is not true in the case of relative entropy.
Partovi \cite{Partovi} has recently considered three states on a directed relative-entropic line: a general multipartite state $\rho_1 \equiv \rho_{AB\dots N}$, a suitable separable multipartite state $\rho_2\equiv \rho_{AB\dots N}^S$ with the same reductions $\rho_A, \rho_B, \dots ,\rho_N$, and finally $\rho_3\equiv \rho_A\otimes \rho_B\otimes \dots \otimes \rho_N$. The mutual information in $\rho_1$ is taken to be its total correlations information. It is well known that it can be written as the relative entropy of $\rho_1$ relative to $\rho_3$. The straight line implies:
$$S(\rho_1||\rho_2)=
S(\rho_1||\rho_3)-S(\rho_2||\rho_3).$$ To my understanding, it is Partovi's idea that if $\rho_2$ is as close to $\rho_1$ as possible (but being on the straight line and having the same reductions), then its von Neumann mutual information
$S(\rho_2||\rho_3)$ equals the classical information in $\rho_1$, and
$S(\rho_1||\rho_2)$ is the amount of entanglement or quantum correlation information in $\rho_1$.
Partovi's approach utilizes the relative-entropy ``distance'' in the only way in which it is a distance: on a straight line. One wonders why the relative-entropy ``distance'' should be relevant outside a straight line, where it is no distance at all (cf \cite{V-P}, \cite{Plenio}). On the other hand, these approaches have the very desirable property of being entanglement monotones. But so are many others (see {\it ibid.}).
To sum up, we have derived the mixing property of relative entropy $S(\rho
||\sigma )$ for the case when (1b) is valid (theorem 1), and two more general equalities of relative entropies (corollary 3 and theorem 2), which follow from it. Besides, two properties of L\"{u}ders states (12) have been obtained (corollary 1 and corollary 2). The mixing property is applicable to any orthogonal state decomposition (2) of $\sigma$. Hence, one can expect a versatility of its applications in quantum information theory.\\
{\it Appendix} - Let $\rho =\sum_n\lambda_n\ket{n}\bra{n}$ be an arbitrary decomposition of a density operator into ray projectors, and let $E$ be any projector. Then $$E\rho =\rho \quad \Leftrightarrow \quad \forall n:\enskip E\ket{n}=\ket{n}\eqno{(A.1)}$$ (cf Lemma A.1. and A.2. in \cite{FHJP94}).
(i) If the above decomposition is an eigendecomposition with positive weights, then $\sum_n\ket{n}\bra{n}=Q$, $Q$ being now the support projector of $\rho$, and, on account of (A.1), $$E\rho =\rho \quad \Rightarrow \quad EQ=Q.\eqno{(A.2)}$$
(ii) Since one can always write $Q\rho =\rho$, (A.1) implies that all $\ket{n}$ in the arbitrary decomposition belong to supp$(\rho )$. Further, defining a projector $F$ so that supp$(F)\equiv$ span$(\{\ket{n}:\forall n\})$, one has $FQ=F$. Equivalence (A.1) implies $F\rho =\rho$. Hence, (A.2) gives $QF=Q$. Altogether, $F=Q$, i.e., the unit vectors $\{\ket{n}:\forall n\}$ span supp$(\rho)$.\\
\end{document} |
\begin{document}
\ifpdf \DeclareGraphicsExtensions{.pdf, .jpg, .tif} \else \DeclareGraphicsExtensions{.eps, .jpg} \fi
\title{The Real Density Matrix} \author{Timothy F. Havel} \email[E-mail:~]{[email protected]}
\thanks{corresponding author.} \affiliation{Dept.~of Nuclear Engineering, Massachusetts Inst.~of Technology, Cambridge, MA 02139}
\date{\today}
\begin{abstract} We introduce a nonsymmetric real matrix which contains all the information that the usual Hermitian density matrix does, and which has exactly the same tensor product structure. The properties of this matrix are analyzed in detail in the case of multi-qubit (e.g.~spin $= 1/2$) systems, where the transformation between the real and Hermitian density matrices is given explicitly as an operator sum, and used to convert the essential equations of the density matrix formalism into the real domain. \end{abstract}
\pacs{03.65.Ca, 03.67.-a, 33.25.+k, 02.10.Xm}
\maketitle
\section{Prologue} The density matrix plays a central role in the modern theory of quantum mechanics, and an equally important role in its applications to optics, spectroscopy, and condensed matter physics. Viewed abstractly, it is a self-adjoint operator $\rho$ on the system's Hilbert space, the expectation values $0 \le \BRA{\psi}\rho\KET{\psi} \le 1$ of which give the probability of observing the system in the state $\KET{\psi}$. As a matrix, however, it is generally represented versus the operator (or ``Liouville'') basis $\KET{i}\BRA{j}$ induced by a choice of a complete orthonormal basis $\{\KET{i}\mid i=0,1,\ldots\}$ in the underlying Hilbert space. These complex-valued matrices $\RHO \equiv [\BRA{i}\rho\KET{j}]_{i,j}$ are necessarily Hermitian and positive semi-definite. Their diagonal entries are the probabilities of these mutually exclusive basis states, whereas their off-diagonal entries prescribe the amounts by which the probabilities of their coherent superpositions deviate from the corresponding classical mixtures due to interference.
Another option is to use an operator basis the elements of which have rank exceeding one, so that it is not induced by any Hilbert space basis. The most common example here is the representation of operators on a two-dimensional Hilbert space by real linear combinations of the Pauli matrices $\{ \SIG[0] (\equiv \MAT I_{2\LAB D}), \SIG[1], \SIG[2], \SIG[3] \}$. In this case the basis elements themselves are self-adjoint and so can be given a physical interpretation, e.g.~as the components of the Bloch or Stokes vector \cite{Bloch:46,FeyVerHel:57}. Although arbitrary bases of self-adjoint operators could be used, for multi-particle systems it is desirable that the overall basis be induced by identical bases on each particle's Hilbert subspace. With the Pauli matrices this leads to the so-called \emph{product operator} representation \cite{ErnBodWok:87}, in which density operators are represented by linear combinations of all possible tensor (\emph{ergo} Kronecker) products of the Pauli matrices, e.g.~$\SIG[k]^1\SIG[\ell]^2 \equiv (\SIG[k] \otimes \SIG[0]) (\SIG[0] \otimes \SIG[\ell])$. In contrast to the Hermitian case, it has not been widely recognized that this tensor product structure is reflected by the real-valued coefficients in the expansion of any density operator in terms of product operators (also known as the \emph{coherence vector} \cite{MahleWeber:98}). Thus if one properly arranges these coefficients in a matrix one obtains a \emph{real} but \emph{nonsymmetric} analog of the Hermitian density matrix with the \emph{same} tensor product structure. This fact holds for any number of Hilbert spaces $\mathfrak H_k$ ($k = 1,2,\ldots$) of arbitrary (even infinite) dimension $L > 0$ and self-adjoint bases $\{ B^k_\ell \}_{\ell=1}^L$ for the space of bounded linear operators on each.
The purpose of this paper is to show how, in the case of multi-qubit (\emph{ergo} two-state quantum) systems, one can perform all the usual Hermitian density matrix calculations entirely with the real density matrix. While the formulae are more complicated in most cases than they are with the Hermitian density matrix, we argue that they are in many respects closer to the underlying physics than the Hermitian formulae, simply because the entries of the real density matrix correspond to (expectation values of) observables. Indeed, it is well-known that the single-qubit Pauli algebra is nothing but a complex matrix representation of the geometric (or Clifford) algebra of a three-dimensional Euclidean vector space \cite{HavelDoran:02}. This \emph{real} algebra in turn has been demonstrated to be a concise but versatile formalism within which to analyze and teach much of modern physics \cite{Hestenes:03,DoranLasen:03}. It makes a certain amount of sense to use a representation wherein the ``reverse-even'' entities in the algebra, i.e.~scalars and vectors, are real while the less-familiar ``reverse-odd'' entities, i.e.~``pseudo-scalars'' and ``pseudo-vectors'', are purely imaginary (cf.~\citet{Baylis:99}). The drawback of our representation is that the matrices no longer form a representation of the underlying geometric algebra, i.e.~the geometric product no longer corresponds to matrix multiplication. We will leave it to the community to decide if or when the advantages outweigh the disadvantages, and offer our results simply as the outcome of an intellectual exercise.
\section{Metamorphosis} \label{sec:meta} We begin by introducing a bit of notation which will considerably simplify the remainder of our presentation. First, instead of the above bra-ket notation, let us write the $2\times2$ elementary matrices as \begin{equation} \MAT E_{00} \leftrightarrow \KET0\BRA0 \,,\quad \MAT E_{10} \leftrightarrow \KET1\BRA0 \,,\quad \MAT E_{01} \leftrightarrow \KET0\BRA1 \,,\quad \MAT E_{11} \leftrightarrow \KET1\BRA1 \,, \end{equation} where $\KET0 \leftrightarrow \MAT e_0$, $\KET1 \leftrightarrow \MAT e_1$ denote an orthonormal basis for a two-dimensional Hilbert space. Then it is easily seen that, for any nonnegative integers $i, j \le M \equiv 2^N - 1$, the $(M+1)\times(M+1)$ elementary matrix $\MAT E_{ij}$ is the Kronecker product of $2\times2$ elementary matrices the indices of which are the bits $i_n, j_n \in \{ 0,\,1 \}$ in the binary expansions of $i, j$, respectively, i.e. \begin{equation} \MAT E_{ij} ~\equiv~ \big[ \delta_{ik} \delta_{j\ell} \big]_{k,\ell=0}^{M,M} ~=~ \MAT E_{i_1j_1} \otimes\cdots\otimes \MAT E_{i_Nj_N} ~, \end{equation} where the $\delta$'s are Kronecker deltas. In an analogous fashion, we will denote the usual $2\times2$ Pauli matrices by \begin{equation} \MAT P_{00} \,\equiv\, \SIG[\,0] \,,\quad \MAT P_{10} \,\equiv\, \SIG[\,1] \,,\quad \MAT P_{01} \,\equiv\, \SIG[\,2] \,,\quad \MAT P_{11} \,\equiv\, \SIG[\,3] \,.
\end{equation} In this notation it may readily be verified that the multiplication table among the Pauli matrices may be expressed succinctly as \begin{equation} \MAT P_{ij\,} \MAT P_{k\ell} ~=~ \imath^{\,(i\ell-jk)(1-2ij)(1-2k\ell)}\, \MAT P_{(i+k-2ik),\,(j+\ell-2j\ell)} \quad\big( i,j,k,\ell \in \{0,\,1\} \big) ~, \end{equation} where $\imath^2 = -1$. This indexing scheme may be extended to all Kronecker products of these matrices in the same way as for the $2\times2$ elementary matrices, i.e. \begin{equation} \MAT P_{ij} ~=~ \MAT P_{i_1j_1} \otimes\cdots\otimes \MAT P_{i_Nj_N} \quad(0 \le i,j \le M) ~. \end{equation} For example, if $M = 3$ we have $\MAT P_{01} = \SIG[0]\otimes\SIG[2]$, $\MAT P_{02} = \SIG[2]\otimes\SIG[0]$, and $\MAT P_{03} = \SIG[2]\otimes\SIG[2]$.
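The multiplication rule above is easy to verify exhaustively; the following sketch (ours, not part of the paper, assuming NumPy) checks all sixteen products of single-qubit Pauli matrices against it:

```python
# Exhaustive check of the single-qubit Pauli multiplication rule
#   P_ij P_kl = i^((il-jk)(1-2ij)(1-2kl)) P_(i+k-2ik),(j+l-2jl)
import numpy as np

P = {(0, 0): np.eye(2, dtype=complex),                    # sigma_0
     (1, 0): np.array([[0, 1], [1, 0]], dtype=complex),   # sigma_1
     (0, 1): np.array([[0, -1j], [1j, 0]]),               # sigma_2
     (1, 1): np.array([[1, 0], [0, -1]], dtype=complex)}  # sigma_3

ok = True
for (i, j), A in P.items():
    for (k, l), B in P.items():
        phase = 1j ** ((i*l - j*k) * (1 - 2*i*j) * (1 - 2*k*l))
        target = phase * P[(i + k - 2*i*k, j + l - 2*j*l)]
        ok = ok and np.allclose(A @ B, target)
```

Note that the index map $(i,j),(k,\ell)\mapsto(i\oplus k,\,j\oplus\ell)$ is just bitwise XOR, which is what $i+k-2ik$ computes for bits.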
The Hermitian density matrix, of course, can be expanded relative to either the elementary matrix basis or the Pauli matrix basis, e.g. \begin{equation} \RHO ~=~ {\sum}_{i,j=0}^{\,1,\,1}\, \rho_{ij}\, \MAT E_{ij} ~=~ \text{\large$\tfrac12$} ~ {\sum}_{i,j=0}^{1,\,1}\, \sigma_{ij}\, \MAT P_{ij} ~, \label{eq:expansion} \end{equation}
for a single qubit with $\rho_{ij} \in \FLD C$ but $\sigma_{ij} \in \FLD R$. Both of these bases are orthogonal relative to the Hilbert-Schmidt inner product, which is given by $\HIP{\MAT X}{\MAT Y} \equiv \TR(\MAT{XY}^\dag) \equiv 2^N\, \langle\, \MAT{XY}^{\dag}\, \rangle$ for any number of qubits $N > 0$. Thus there is a unique unitary superoperator $2^{-1/2}\, \ALG U$ which carries the Pauli to the elementary matrix basis, where the factor of $\sqrt2$ comes from $\| \MAT P_{ij} \|^2 \equiv \HIP{\MAT P_{ij}}{\MAT P_{ij}} = 2\, \| \MAT E_{ij} \|^2$. This superoperator, moreover, is nearly self-adjoint since $\HIP{\MAT E_{ij}}{\MAT P_{k\ell}} = \HIP{\MAT E_{k\ell}}{\MAT P_{ij}}$ for all $0\le i,j,k,\ell \le 1$ with the sole exception of $\HIP{\MAT E_{10}}{\MAT P_{01}} = \imath\, \HIP{\MAT E_{01}}{\MAT P_{10}}$ and its complex conjugate. On applying $\ALG U$ to both sides of Eq.~(\ref{eq:expansion}), therefore, we obtain \begin{equation}
\ALG U(\RHO) ~=~ \text{\large$\tfrac12$} ~ {\sum}_{i,j=0}^{\,1,\,1}\, \rho_{ij}\, \sqrt{{\imath}^{\,j-i}\,2^{\,|j-i|}}\, \big(\, \MAT P_{ji} \,+\, {(-1)}^{(1-i)j}\, \MAT P_{ij} \big) ~=~ {\sum}_{i,j=0}^{1,\,1}\, \sigma_{ij}\, \MAT E_{ij} ~. \end{equation} We will take this as our definition of the \emph{real density matrix} for a single qubit. Henceforth, we shall denote this by \begin{equation} \begin{bmatrix} ~\sigma_{00}~&~\sigma_{01}~ \\ ~\sigma_{10}~&~\sigma_{11}~ \end{bmatrix} \,\equiv\, \SIG \,\equiv\, \ALG U(\RHO) \,=\, 2 \begin{bmatrix} \AVG{\RHO\, \SIG[0]} & \AVG{\RHO\, \SIG[2]} \\ \AVG{\RHO\, \SIG[1]} & \AVG{\RHO\, \SIG[3]} \end{bmatrix} ~. \end{equation}
Note that our choice of normalization gives $\sigma_{00} = 2\, \langle\, \RHO\, \rangle = 1$, so that although $\ALG U$ is otherwise unitary the Hilbert-Schmidt norm is scaled by a factor of $\sqrt2$; specifically $2\, \|\RHO\|^2 = \|\SIG\|^2 \equiv 2\, \langle \SIG^{\top} \SIG\, \rangle = 1 + \sigma_{10}^2 + \sigma_{01}^2 + \sigma_{11}^2$.
Let us now look at some explicit representations of the superoperator $\ALG U$. To begin with, the mapping from the Pauli to the elementary matrix basis is clearly \begin{equation} \begin{aligned} \MAT E_{00} ~=~ & \HALF\big( \MAT P_{00} + \MAT P_{11} \big) \\ \MAT E_{10} ~=~ & \HALF\big( \MAT P_{10} - \imath\, \MAT P_{01} \big) \end{aligned} \qquad \begin{aligned} \MAT E_{01} ~=~ & \HALF\big( \MAT P_{10} + \imath\, \MAT P_{01} \big) \\ \MAT E_{11} ~=~ & \HALF\big( \MAT P_{00} - \MAT P_{11} \big) ~, \end{aligned} \end{equation} and hence (since coordinates are contravariant) \begin{equation} \label{eq:usvd} \KET{\SIG} ~\equiv~ \begin{bmatrix} 1\\ \sigma_{10}\\ \sigma_{01}\\ \sigma_{11} \end{bmatrix} \,=\, \begin{bmatrix} ~1~&0&0&0~\\ ~0~&1&0&0~\\ ~0~&0&-\imath&0~\\ ~0~&0&0&1~ \end{bmatrix} \begin{bmatrix} ~1&~0&0&1\\ ~0&~1&1&0\\ ~0&~1&-1&0\\ ~1&~0&0&-1 \end{bmatrix} \begin{bmatrix} \rho_{00}\\ \rho_{10}\\ \rho_{01}\\ \rho_{11} \end{bmatrix} \,\equiv\, \EMB{\ALG{VW}}\, \KET{\RHO} \,\equiv\, \EMB{\ALG U}\, \KET{\RHO} ~, \end{equation} where we have factored the overall superoperator's matrix $\EMB{\ALG U}$ into the product of a diagonal matrix $\EMB{\ALG V}$ and a purely real one $\EMB{\ALG W}$. An operator sum representation for the superoperator $\ALG W$ may be derived from the singular value decomposition of its \emph{Choi matrix}, i.e.
\begin{equation} \begin{bmatrix} ~1&~0&~0&-1\\ ~0&~1&~1&0\\ ~0&~1&~1&0\\ ~1&~0&~0&-1 \end{bmatrix} ~=~ \begin{bmatrix} ~1&~0&-1&0\\ ~0&~1&0&1\\ ~0&~1&0&-1\\ ~1&~0&1&0 \end{bmatrix} \begin{bmatrix} ~1&\:0&\:0&\:0~\\ ~0&\:1&\:0&\:0~\\ ~0&\:0&\:0&\:0~\\ ~0&\:0&\:0&\:0~ \end{bmatrix} \begin{bmatrix} 1&0&~1&0\\ 0&1&~0&1\\ 0&1&~0&-1\\ -1&0&~1&0 \end{bmatrix}^{\displaystyle\top} , \end{equation} where the Choi matrix (left) is obtained simply by swapping certain pairs of the entries in $\EMB{\ALG W} \equiv \big[ w_{ij} \big]_{i,j=0}^{\,3,\,3}$ \cite{Havel!QPT:03}; specifically $w_{20} \leftrightarrow w_{01}$, $w_{30} \leftrightarrow w_{11}$, $w_{22} \leftrightarrow w_{03}$ and $w_{32} \leftrightarrow w_{13}$. Observe that the left-singular vectors associated with the nonzero singular values of $\EMB{\ALG W}$ can be written as ``columnized'' Pauli matrices, specifically $\KET{\MAT P_{00}}$ and $\KET{\MAT P_{10}}$ in the notation of Eq.~(\ref{eq:usvd}), while the corresponding right-singular vectors are $\KET{\MAT P_{11}}$ and $\KET{\MAT P_{10}}$.
It follows that the operator sum form of $\ALG W$ is \cite[Proposition~3]{Havel!QPT:03} \begin{equation} \ALG W(\RHO) ~=~ \MAT P_{00}\, \RHO\, \MAT P_{11} \,+\, \MAT P_{10}\, \RHO\, \MAT P_{10} ~=~ \begin{bmatrix} \rho_{00} + \rho_{11} & \rho_{10} - \rho_{01} \\ \rho_{10} + \rho_{01} & \rho_{00} - \rho_{11} \end{bmatrix} ~. \end{equation}
Although an operator sum form for $\ALG V$ could be obtained by this same approach, since it is diagonal a more compact representation of its action may be obtained by packing its nonzero entries into a single $2\times2$ matrix which acts via the ``entrywise'' or \emph{Hadamard product} ``$\odot$'' \cite{HaShViCo:01}, as follows: \begin{equation} \begin{split} \SIG ~=~ \MAT Q \odot \big( \RHO\, \MAT P_{11} \,+\, \MAT P_{10}\, \RHO\, \MAT P_{10} \big) ~\equiv\, & \begin{bmatrix} ~1&-\imath~\\ ~1&~1~ \end{bmatrix} \odot \begin{bmatrix} \rho_{00} + \rho_{11} & \rho_{10} - \rho_{01} \\ \rho_{10} + \rho_{01} & \rho_{00} - \rho_{11} \end{bmatrix} \\ \equiv\, & \begin{bmatrix} ~\rho_{00} + \rho_{11} ~&~ \imath\, (\rho_{01} - \rho_{10})~ \\ ~\rho_{01} + \rho_{10} ~&~ \rho_{00} - \rho_{11}~ \end{bmatrix} ~. \end{split} \end{equation} Since $\ALG W$ is self-adjoint and the overall superoperator $\ALG U$ is unitary (up to a factor of $\sqrt2$), it is easily seen that the inverse $\ALG U^{-1}$ can be written as \begin{equation} \RHO ~=~ \HALF \left( \big(\, \OL{\MAT Q} \odot \SIG \big)\, \MAT P_{11} ~+~ \MAT P_{10}\, \big(\, \OL{\MAT Q} \odot \SIG \big)\, \MAT P_{10}\, \right) ~, \end{equation} where the overbar indicates the complex conjugate of all the matrix entries.
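As a numerical illustration of this map and its inverse (a sketch of ours, not part of the paper), note that in NumPy the `*` operator on arrays is precisely the Hadamard product; the check below confirms that the entries of $\SIG$ are the single-qubit Pauli expectation values and that the inverse recovers $\RHO$:

```python
# Numerical check of the single-qubit map Sigma = Q . (rho P11 + P10 rho P10)
# and of its inverse.
import numpy as np

rng = np.random.default_rng(6)
g = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = g @ g.conj().T
rho /= np.trace(rho).real            # random density matrix

sx = np.array([[0, 1], [1, 0]], dtype=complex)   # P_10
sy = np.array([[0, -1j], [1j, 0]])               # P_01
sz = np.array([[1, 0], [0, -1]], dtype=complex)  # P_11
Q = np.array([[1, -1j], [1, 1]])

Sigma = Q * (rho @ sz + sx @ rho @ sx)           # '*' is the Hadamard product

# entries should be the Pauli expectation values tr(rho P_ij)
expect = np.array([[np.trace(rho), np.trace(rho @ sy)],
                   [np.trace(rho @ sx), np.trace(rho @ sz)]])

# inverse map: rho = (1/2)[(conj(Q) . Sigma) P11 + P10 (conj(Q) . Sigma) P10]
X = Q.conj() * Sigma
rho_back = 0.5 * (X @ sz + sx @ X @ sx)
```

The resulting `Sigma` is real up to rounding, with its $(0,0)$ entry fixed at $1$, as stated in the text.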
The beauty of this operator sum form for the superoperator $\ALG U$ is that the Hadamard product obeys the mixed product formula with the Kronecker product, \begin{equation} \label{eq:mixed} (\MAT A \otimes \MAT B) \odot (\MAT C \otimes \MAT D) ~=~ (\MAT A \odot \MAT C) \otimes (\MAT B \odot \MAT D) ~, \end{equation} just like the usual matrix product does. Thus if we extend $\ALG U$ to factorable multi-qubit density matrices in the obvious way, \begin{equation} \ALG U(\RHO^1 \otimes\cdots\otimes \RHO^N) ~\equiv~ \ALG U(\RHO^1) \otimes\cdots\otimes \ALG U(\RHO^N) ~, \end{equation} and thence to arbitrary multi-qubit density matrices by linearity, we immediately obtain a general expression. Explicitly, in the case of two qubits, we get \begin{equation} \begin{split} \SIG \,\equiv\, & \SIG^1 \otimes \SIG^2 \,\equiv\, \ALG U(\RHO^1) \otimes \ALG U(\RHO^2) \\ =\, & \Big( \MAT Q \odot \big( \RHO^1\, \MAT P_{11} + \MAT P_{10}\, \RHO^1\, \MAT P_{10} \big)\, \Big) \otimes \Big( \MAT Q \odot \big( \RHO^2\, \MAT P_{11} + \MAT P_{10}\, \RHO^2\, \MAT P_{10} \big)\, \Big) \\ =\, & \big( \MAT Q \otimes \MAT Q \big) \odot \Big(\, \big( \RHO^1\, \MAT P_{11} + \MAT P_{10}\, \RHO^1\, \MAT P_{10} \big) \otimes \big( \RHO^2\, \MAT P_{11} + \MAT P_{10}\, \RHO^2\, \MAT P_{10} \big)\, \Big) \\ =\, & \big( \MAT Q \otimes \MAT Q \big) \odot \big( \begin{aligned}[t] & \, (\RHO^1\, \MAT P_{11}) \otimes (\RHO^2\, \MAT P_{11}) + (\RHO^1\, \MAT P_{11}) \otimes (\MAT P_{10}\, \RHO^2\, \MAT P_{10}) ~\cdots \\ & +\, (\MAT P_{10}\, \RHO^1\, \MAT P_{10}) \otimes (\RHO^2\, \MAT P_{11}) + (\MAT P_{10}\, \RHO^1\, \MAT P_{10}) \otimes (\MAT P_{10}\, \RHO^2\, \MAT P_{10}) \big) \end{aligned} \\ =\, & \big( \begin{aligned}[t] & \, \MAT Q \otimes \MAT Q \big) \odot \big( (\RHO^1 \otimes \RHO^2) (\MAT P_{11} \otimes \MAT P_{11}) + (\MAT P_{00} \otimes \MAT P_{10}) (\RHO^1 \otimes \RHO^2) (\MAT P_{11} \otimes \MAT P_{10}) ~\cdots \\ & \,+ (\MAT P_{10} \otimes \MAT P_{00}) (\RHO^1 \otimes \RHO^2) (\MAT P_{10} \otimes \MAT P_{11}) + (\MAT P_{10} \otimes \MAT P_{10}) (\RHO^1 \otimes \RHO^2) (\MAT P_{10} \otimes \MAT P_{10}) \big) \end{aligned} \\ \equiv\, & \MAT Q^{\otimes2} \odot \big( \RHO\, \MAT P_{33} + \MAT P_{10}\, \RHO\, \MAT P_{32} + \MAT P_{20}\, \RHO\, \MAT P_{31} + \MAT P_{30}\, \RHO\, \MAT P_{30} \big) ~, \label{eq:metamorph} \end{split} \end{equation} where $\RHO \equiv \RHO^1 \otimes \RHO^2$ and $\MAT Q^{\otimes N}$ ($N > 0$) denotes the $N$-fold Kronecker power of $\MAT Q$.
It is readily verified that the general formula for $N$ qubits is \begin{equation} \label{eq:you} \ALG U(\RHO) ~=~ \MAT Q^{\otimes N} \odot {\sum}_{m\,=\,0}^{\,M}\, \MAT P_{m,\,0}\, \RHO\, \MAT P_{M,\,(M-m)} ~, \end{equation} where $\RHO$ is any (not necessarily factorable) density matrix. Similarly, the inverse is given by \begin{equation} \label{eq:uoy} \ALG U^{-1}(\SIG) ~=~ 2^{-N}\, {\sum}_{m\,=\,0}^{\,M}\, \MAT P_{m,\,0}\, \big(\, \OL{\MAT Q}^{\,\otimes N} \odot \SIG \big)\, \MAT P_{M,\,(M-m)} ~. \end{equation} Evidently $\MAT Q^{\,\otimes N} \odot\, \ALG U^{-1}(\SIG) \,=\, 2^{-N}\, \ALG U\big(\, \OL{\MAT Q}^{\,\otimes N} \odot\, \SIG \,\big)$.
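These two operator sums can be checked against the product-operator expansion for $N = 2$ (an illustrative sketch of ours assuming NumPy; the bit-indexing helpers below are not from the paper). Each entry of the result should be the coefficient ${\rm tr}(\RHO\, \MAT P_{ij})$, and the inverse sum should recover $\RHO$:

```python
# Numerical check of the N-qubit operator sums for N = 2.
import numpy as np

rng = np.random.default_rng(7)
N = 2
M = 2**N - 1

pauli = {(0, 0): np.eye(2, dtype=complex),                    # sigma_0
         (1, 0): np.array([[0, 1], [1, 0]], dtype=complex),   # sigma_1
         (0, 1): np.array([[0, -1j], [1j, 0]]),               # sigma_2
         (1, 1): np.array([[1, 0], [0, -1]], dtype=complex)}  # sigma_3

def bits(n):
    # binary digits of n, most significant first
    return [(n >> (N - 1 - s)) & 1 for s in range(N)]

def P(i, j):
    # multi-qubit Pauli P_ij = P_{i1 j1} x ... x P_{iN jN}
    out = np.ones((1, 1), dtype=complex)
    for bi, bj in zip(bits(i), bits(j)):
        out = np.kron(out, pauli[(bi, bj)])
    return out

g = rng.normal(size=(M + 1, M + 1)) + 1j * rng.normal(size=(M + 1, M + 1))
rho = g @ g.conj().T
rho /= np.trace(rho).real            # generic (non-factorable) density matrix

Q = np.array([[1, -1j], [1, 1]])
QN = np.kron(Q, Q)                   # Kronecker square of Q

Sigma = QN * sum(P(m, 0) @ rho @ P(M, M - m) for m in range(M + 1))

# entries should be the product-operator coefficients tr(rho P_ij)
coeff = np.array([[np.trace(rho @ P(i, j)) for j in range(M + 1)]
                  for i in range(M + 1)])

rho_back = 2.0**(-N) * sum(P(m, 0) @ (QN.conj() * Sigma) @ P(M, M - m)
                           for m in range(M + 1))
```

Since the bits of $M-m$ are the complements of those of $m$, each term pairs $\MAT P_{m_n 0}$ on the left with $\MAT P_{1\bar m_n}$ on the right, exactly as in the two-qubit expansion above.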
\section{Reality Check} The following properties of the real density matrix $\SIG$ are worth noting explicitly: \begin{itemize} \item In addition to being real, it is in general nonsymmetric, with the one fixed element $\sigma_{00} = 1$. \item It contains the same information as the Hermitian density matrix (since the two are related by the bijection $\ALG U$). \item It is diagonal if and only if the Hermitian density matrix is diagonal (which is why we defined $\sigma_{11}$ to be the coefficient of the diagonal Pauli matrix $\SIG[3]$). \item It has the same tensor product structure as the Hermitian density matrix, since (as shown by Eq.~(\ref{eq:metamorph})) $\ALG U$ maps Kronecker products to Kronecker products. \end{itemize} In addition to these convenient analytic features, the real density matrix can also be quite useful for displaying the results of \emph{quantum state tomography}: the determination of density matrices from experimental data. In most cases to date, the real or imaginary parts of the Hermitian density matrix have been displayed using two-dimensional bar graphs (see e.g.~\cite[\S7.7.4]{NielsChuan:00}). Although useful, such a plot must both omit some information and exhibit redundant information. Real density matrices are superior in this respect, and may also exhibit the underlying symmetry of a state more clearly. Bar graphs of the real density matrix are shown below for both the diagonal Hilbert space basis and the Bell basis, illustrating how easily these states may be distinguished. Further examples with experimental NMR data may be found in \citet{HCLBFPTWBH:02}.
\begin{figure}
\caption{Plots of the diagonal Hilbert space basis (below) and of the Bell basis (above). All axes are dimensionless, and the labels on the horizontal axes correspond to the indices of the two-qubit real density matrix entries $\sigma_{ij}$ as used in the main text.}
\end{figure}
The fact that the mapping between Pauli matrix coefficients and the entries of the Hermitian density operator preserves the tensor product structure has been noted earlier by \citet{PitteRubin:00a} (and no doubt by many other researchers as well). In our present notation, their observation was based upon the following simple relation: \begin{equation} \begin{bmatrix} \sigma_{00} & \sigma_{10} \\ \sigma_{11} & \imath\sigma_{01} \end{bmatrix} ~=~ \begin{bmatrix} ~1~ & ~1~ \\ ~1~ & -1~ \end{bmatrix} \begin{bmatrix} \rho_{00} & \rho_{10} \\ \rho_{11} & \rho_{01} \end{bmatrix} ~. \end{equation} While this is certainly a simpler relation than our operator sum, the ``density matrix'' on the right-hand side is not the usual Hermitian one, and the mapping between the two can be written explicitly only by using operator sums, supermatrices, or the like. Our goal here is to translate the usual operations and relations on Hermitian density matrices into the real domain, and the reordering of the entries of the real density matrix as above offers no advantage for this purpose.
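To make the correspondence concrete, the following numpy sketch builds the one-qubit real density matrix from Pauli coefficients and checks it against the reordering relation displayed above. The helper `U1` and the index convention ($10 \leftrightarrow x$, $01 \leftrightarrow y$, $11 \leftrightarrow z$) are our own illustrative choices, not notation from the text:

```python
import numpy as np

# Pauli matrices; index convention 10 <-> x, 01 <-> y, 11 <-> z.
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def U1(rho):
    """One-qubit map U: the entries of the real matrix are the Pauli
    coefficients sigma_00 = tr(rho), sigma_10 = tr(sx rho),
    sigma_01 = tr(sy rho), sigma_11 = tr(sz rho)."""
    return np.array([[np.trace(rho),      np.trace(SY @ rho)],
                     [np.trace(SX @ rho), np.trace(SZ @ rho)]])

# Random one-qubit density matrix (Hermitian, positive, unit trace).
rng = np.random.default_rng(1)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = A @ A.conj().T
rho /= np.trace(rho)

sig = U1(rho).real   # real-valued for Hermitian input

# The reordered 2x2 relation quoted above.
lhs = np.array([[sig[0, 0], sig[1, 0]],
                [sig[1, 1], 1j * sig[0, 1]]])
rhs = np.array([[1, 1], [1, -1]]) @ np.array([[rho[0, 0], rho[1, 0]],
                                              [rho[1, 1], rho[0, 1]]])
```

In particular `sig[0, 0]` always comes out equal to $1$, the fixed element noted in the bullet list above.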
As our first example of such a translation, let us show how the usual criterion for the purity of the Hermitian density matrix can be carried over to the real domain: \begin{align} 1 ~=~ 2^N\, \langle\, \RHO^2\, \rangle ~=~ & \left\langle\, \RHO\, {\sum}_{m\,=\,0}^{\,M}\, \MAT P_{m,0}\, \big( \OL{\MAT Q}^{\,\otimes N} \odot \SIG \big)\, \MAT P_{M,M-m} \right\rangle \notag \\ =~ & \left\langle\, \big( \OL{\MAT Q}^{\,\otimes N} \odot \SIG \big)\, {\sum}_{m\,=\,0}^{\,M}\, \MAT P_{M,M-m}\, \RHO\, \MAT P_{m,0} \right\rangle \notag \\ =~ & \left\langle\, {\sum}_{m\,=\,0}^{\,M}\, \MAT P_{m,0}\, \RHO\, \MAT P_{M,M-m}\, \big( \OL{\MAT Q}^{\,\otimes N} \odot \SIG \big)^{\dag} \right\rangle \notag \\ =~ & \left\langle\, \big( \OL{\MAT Q}^{\,\otimes N} \odot \SIG \big) \big( \OL{\MAT Q}^{\,\otimes N} \odot \SIG \big)^{\dag} \right\rangle \notag \\ =~ & \left\langle\, \big( \MAT Q^{\otimes N} \odot \OL{\MAT Q}^{\,\otimes N} \odot \SIG \big)^{\dag}\, \SIG\, \right\rangle ~=~ \big\langle\, \SIG^\top \SIG\, \big\rangle ~. \end{align} In going to the last line, we have used the general relation $\langle (\MAT A \odot \MAT B)\, \MAT C^\dag \rangle = \langle (\MAT A \odot \MAT C)^\dag\, \MAT B\, \rangle$ for arbitrary conformant matrices $\MAT A$, $\MAT B$, $\MAT C$ \cite{Lutkepohl:96}. This derivation easily generalizes to a formula for the ensemble-average expectation value of any observable with Hermitian matrix $\EMB\mu$ and corresponding real matrix $\EMB\nu = \ALG U(\EMB\mu)$, showing that \begin{equation}
\big\langle\, \EMB\mu \,\big|\, \RHO\, \big\rangle ~\equiv~ 2^{N}\, \big\langle\, \EMB\mu\, \RHO\, \big\rangle ~=~ \big\langle\, \EMB\nu^\top \SIG\, \big\rangle ~=~ 2^{-N}\, \big\langle\, \EMB\nu \,\big|\, \SIG\, \big\rangle ~. \end{equation}
We close this section by noting that the \emph{partial trace} operation corresponds simply to extracting a principal submatrix of the real density matrix \cite{SomCorHav:98}.
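Both formulas, together with the partial-trace remark, are easy to check numerically. The sketch below uses the same hypothetical one-qubit helper `U1` as before (our own normalization, not the paper's operator-sum machinery) and exploits the fact that $\ALG U$ maps Kronecker products to Kronecker products to build a two-qubit example:

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def U1(m):
    # one-qubit Pauli-coefficient map (assumed normalization)
    return np.real(np.array([[np.trace(m),      np.trace(SY @ m)],
                             [np.trace(SX @ m), np.trace(SZ @ m)]]))

rng = np.random.default_rng(2)
def rand_rho():
    A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    r = A @ A.conj().T
    return r / np.trace(r)

rho1, rho2 = rand_rho(), rand_rho()
rho = np.kron(rho1, rho2)            # N = 2 product state
sig = np.kron(U1(rho1), U1(rho2))    # U preserves Kronecker products

# Purity criterion: tr(rho^2) = 2^-N tr(sig^T sig).
purity_herm = np.trace(rho @ rho).real
purity_real = np.trace(sig.T @ sig) / 4

# Expectation of mu = sz (x) sz: tr(mu rho) = 2^-N tr(nu^T sig).
mu = np.kron(SZ, SZ)
nu = np.kron(U1(SZ), U1(SZ))
ev_herm = np.trace(mu @ rho).real
ev_real = np.trace(nu.T @ sig) / 4

# Partial trace over the first qubit corresponds (in this index
# convention) to the leading 2x2 principal submatrix of sig.
sig_sub = sig[:2, :2]
```

For a product state the leading block equals the real matrix of the remaining qubit exactly, which is the principal-submatrix statement above in its simplest instance.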
\section{Life in the Real World} While the expectation values of observables carry over to the real domain without significant complication, things become distinctly more challenging when it comes to integrating the equations of motion. In the case of a single qubit, it is readily verified that the commutator with an arbitrary Hamiltonian $\EMB\mu = \ALG U^{-1}(\EMB\nu)$ becomes \begin{equation} \label{eq:1pc} \begin{split} \big[\!\big[\, \SIG,\, \EMB\nu\, \big]\!\big] ~\equiv~ \ALG U\big( \big[\, \RHO,\, \EMB\mu\, \big] \big) & ~=~ \imath\, \MAT P_{01} \big( \EMB\nu\, \MAT E_{11}\, \SIG \,-\, \SIG\, \MAT E_{11}\, \EMB\nu \big) \MAT P_{01} \\ & ~=~ \imath \begin{bmatrix} 0 & \sigma_{11} \nu_{10} - \sigma_{10} \nu_{11} \\ \sigma_{01} \nu_{11} - \sigma_{11} \nu_{01} & \sigma_{10} \nu_{01} - \sigma_{01} \nu_{10} \end{bmatrix} ~, \end{split} \end{equation} wherein the nonzero matrix entries are the components of the usual vector cross product.
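The cross-product structure can be verified directly. In the following sketch (again with our illustrative trace-based `U1`, kept complex-linear here because the commutator of Hermitian matrices has purely imaginary Pauli coefficients) the entry-by-entry formula above is reproduced for random $\SIG$ and $\EMB\nu$:

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def U1(m):
    # complex-linear Pauli-coefficient map (no real cast)
    return np.array([[np.trace(m),      np.trace(SY @ m)],
                     [np.trace(SX @ m), np.trace(SZ @ m)]])

rng = np.random.default_rng(3)
r = rng.normal(size=3)               # Bloch vector of rho
n = rng.normal(size=3)               # mu = n . (sx, sy, sz)
rho = 0.5 * (np.eye(2) + r[0]*SX + r[1]*SY + r[2]*SZ)
mu = n[0]*SX + n[1]*SY + n[2]*SZ     # traceless, so nu_00 = 0

sig, nu = U1(rho).real, U1(mu).real
comm = U1(rho @ mu - mu @ rho)       # [[sig, nu]] = U([rho, mu])

# i times the cross-product components quoted in the text
expected = 1j * np.array(
    [[0,                                   sig[1,1]*nu[1,0] - sig[1,0]*nu[1,1]],
     [sig[0,1]*nu[1,1] - sig[1,1]*nu[0,1], sig[1,0]*nu[0,1] - sig[0,1]*nu[1,0]]])
```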
This equation of motion is most simply integrated by considering the matrix representation of the commutation superoperator defined by $\EMB\nu$, which we henceforth assume without loss of generality has $\nu_{00} = 0$. Letting $\KET{\MAT X}$ denote the column vector of height $(M+1)^2$ obtained by stacking the columns of the $(M+1)\times(M+1)$ matrix $\MAT X$ on top of one another in left-to-right order, and applying the well-known identity \begin{equation} \label{eq:oppr2prop} \KET{ \MAT{AXB} } ~=~ \big( \MAT B^\top \otimes \MAT A \big)\, \KET{\MAT X} \end{equation} (see e.g.~\cite{Lutkepohl:96}), we find that\footnote{The factor of $1/2$ here does not mean that the rotation is spinorial, but rather that the rate of rotation is $1/\sqrt2$ times the Hilbert-Schmidt norm of the Pauli matrix that generates it, while we pick up another factor of $1/\sqrt2$ on transforming to the real domain.} \begin{equation} \label{eq:form0} \begin{split}
\dot{\SIG} ~=~ \Big|\, \tfrac{\displaystyle\imath}2\, \big[\!\big[\, \SIG,\, \EMB\nu\, \big]\!\big] \Big\rangle ~=~ & \HALF\, \Big( \MAT P_{01} \otimes \MAT P_{01} \Big) \Big( \MAT P_{00} \otimes \EMB\nu\, \MAT E_{11} \,-\, \EMB\nu^\top \MAT E_{11} \otimes \MAT P_{00} \Big)\, \big|\, \SIG\, \big\rangle \\ =~ & \frac12 \begin{bmatrix} ~0&0&0&0\\ ~0&0&-\nu_{11}&\nu_{01}\\ ~0&\nu_{11}&0&-\nu_{10}\\ ~0&-\nu_{01}&\nu_{10}&0 \end{bmatrix} \begin{bmatrix} \sigma_{00}\\ \sigma_{10}\\ \sigma_{01}\\ \sigma_{11} \end{bmatrix} ~\equiv~ \HALF\, \EMB{\ALG R}_{\EMB\nu}\, \KET{\SIG}\, . \end{split} \end{equation}
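A quick numerical cross-check of the vectorized form: with column-stacking implemented as Fortran-order reshape, $\HALF\,\EMB{\ALG R}_{\EMB\nu}\KET{\SIG}$ should coincide with the stacked entries of $\tfrac{\imath}{2}\big[\!\big[\SIG,\EMB\nu\big]\!\big]$ from Eq.~(\ref{eq:1pc}). The helper `U1` is the same illustrative assumption as before:

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)
U1 = lambda m: np.array([[np.trace(m),      np.trace(SY @ m)],
                         [np.trace(SX @ m), np.trace(SZ @ m)]])
vec = lambda M: np.reshape(M, -1, order='F')   # stack columns left to right

rng = np.random.default_rng(4)
r, n = rng.normal(size=3), rng.normal(size=3)
rho = 0.5 * (np.eye(2) + r[0]*SX + r[1]*SY + r[2]*SZ)
mu = n[0]*SX + n[1]*SY + n[2]*SZ
sig, nu = U1(rho).real, U1(mu).real

# The 4x4 generator R_nu acting on |sig> = (s00, s10, s01, s11)^T.
n10, n01, n11 = nu[1, 0], nu[0, 1], nu[1, 1]
R = np.array([[0,    0,    0,    0],
              [0,    0, -n11,  n01],
              [0,  n11,    0, -n10],
              [0, -n01,  n10,    0]])

sigdot_super = 0.5 * R @ vec(sig)                  # (1/2) R_nu |sig>
sigdot_comm = vec(0.5j * U1(rho @ mu - mu @ rho))  # |i/2 [[sig, nu]]>
```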
Since $\EMB{\ALG R}_{\EMB\nu}^{\,3} = -\|\EMB\nu\|^{2}\, \EMB{\ALG R}_{\EMB\nu}$ (where $\|\EMB\nu\|^2 \equiv 2\, \langle \EMB\nu^\top \EMB\nu \rangle \equiv \HIP{\EMB\nu}{\EMB\nu}$), this one-sided matrix differential equation is easily integrated by using the Cayley--Hamilton theorem to exponentiate the (lower-right $3\times3$ block of the) coefficient matrix \cite{NajfeHavel:95a}, obtaining \begin{equation} \begin{split}
& \big|\, \SIG(t)\, \big\rangle ~=~ \bigg( \MAT P_{00} \otimes \MAT P_{00} \,+\, \frac{\sin(\|\EMB\nu\|\, t/2)}{\|\EMB\nu\|}\, \EMB{\ALG R}_{\EMB\nu} \,+\, \frac{1-\cos(\|\EMB\nu\|\, t/2)}{\|\EMB\nu\|^2}\, \EMB{\ALG R}_{\EMB\nu}^{\,2} \bigg)\, \big|\, \SIG(0)\, \big\rangle \\ =~ & \Big|\, \SIG(0) \,+\, \imath\, \sin(\|\EMB\nu\|\, t/2)\, \big[\!\big[\, \SIG(0),\, \hat{\EMB\nu}\, \big]\!\big] \,-\, \big( 1-\cos(\|\EMB\nu\|\, t/2) \big)\, \big[\!\big[\, \big[\!\big[\, \SIG(0),\, \hat{\EMB\nu}\, \big]\!\big],\, \hat{\EMB\nu}\, \big]\!\big] \Big\rangle , \end{split} \end{equation}
wherein $\hat{\EMB\nu} \equiv \EMB\nu/\|\EMB\nu\|$ and it is readily shown that \begin{equation} \label{eq:r2} \EMB{\ALG R}_{\EMB\nu}^{\,2} ~=~ \KET{\EMB\nu} \BRA{\EMB\nu} ~-~ \|\EMB\nu\|^2\, \big( \MAT P_{00} \otimes \MAT P_{00} \,-\, \MAT E_{00} \otimes \MAT E_{00} \big) \end{equation} is $-\|\EMB\nu\|^2$ times the projection onto the plane orthogonal to the unit vector $\KET{\hat{\EMB\nu}} = \BRA{\hat{\EMB\nu}}^\top$. The whole formula can thus be expressed more geometrically as \begin{equation}
\KET{\VP\SIG(t)} ~=~ \HIP{\hat{\EMB\nu}}{\VP\SIG}\, \KET{\hat{\EMB\nu}} \,+\, \sin(\|\EMB\nu\|\, t/2)\, \KET{\hat{\EMB\nu}} \times \KET{\VP{\SIG}} \,-\, \cos(\|\EMB\nu\|\, t/2)\, \KET{\hat{\EMB\nu}} \times \big(\, \KET{\hat{\EMB\nu}} \times \KET{\VP{\SIG}} \big) ,~ \end{equation}
where $\VP{\SIG} \equiv \SIG(0) - \MAT E_{00}$ and ``$\times$'' is the cross product of the (last three components of the) vectors it connects. This is a standard expression for rotation of the three-dimensional vector $\KET{\VP\SIG}$ about the axis $\KET{\hat{\EMB\nu}}$ by an angle $\|\EMB\nu\|\, t/2$.
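The rotation picture can be checked against the underlying Schr\"odinger evolution. The sketch below (with our usual assumed helper `U1`, and the evolution $\RHO(t) = e^{-\imath\EMB\mu t/2}\,\RHO(0)\,e^{+\imath\EMB\mu t/2}$ implied by $\dot\SIG = |\tfrac{\imath}{2}[\![\SIG,\EMB\nu]\!]\rangle$) compares the evolved Bloch vector with a Rodrigues rotation by the angle $\|\EMB\nu\|\,t/2$ about $\hat{\EMB\nu}$:

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)
U1 = lambda m: np.real(np.array([[np.trace(m),      np.trace(SY @ m)],
                                 [np.trace(SX @ m), np.trace(SZ @ m)]]))

rng = np.random.default_rng(5)
r0 = rng.normal(size=3)                  # initial Bloch vector
n = rng.normal(size=3)                   # mu = n . (sx, sy, sz)
rho0 = 0.5 * (np.eye(2) + r0[0]*SX + r0[1]*SY + r0[2]*SZ)
mu = n[0]*SX + n[1]*SY + n[2]*SZ
nu = U1(mu)
norm = np.sqrt(np.trace(nu.T @ nu))      # ||nu||^2 = 2 <nu^T nu>

t = 0.73
theta = norm * t / 2                     # rotation angle from the text
nhat = n / np.linalg.norm(n)

# rho(t) = e^{-i mu t/2} rho(0) e^{+i mu t/2}, via the Pauli exponential
a = np.linalg.norm(n) * t / 2
V = np.cos(a)*np.eye(2) - 1j*np.sin(a)*(nhat[0]*SX + nhat[1]*SY + nhat[2]*SZ)
st = U1(V @ rho0 @ V.conj().T)
r_t = np.array([st[1, 0], st[0, 1], st[1, 1]])

# geometric form: rotate r0 about nhat through theta (Rodrigues)
r_rot = (nhat @ r0) * nhat + np.sin(theta) * np.cross(nhat, r0) \
        - np.cos(theta) * np.cross(nhat, np.cross(nhat, r0))
```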
The extension of these formulae to general multi-particle commutators is not straightforward, since the tensor product of commutation superoperators is not simply related to the tensor products of their underlying operators. Nevertheless, we can give a reasonably simple formula for the two-particle commutator with a factorable Hamiltonian, which is the most important case in practice. This is based upon a geometric algebra expression for the commutator of a tensor product of two three-dimensional vectors (or, equivalently in the present context, traceless $2\times2$ Hermitian matrices), which is derived in \citet{HaDoAGACSE:02}: \begin{equation} \label{eq:doran} 2\, \big[ \MAT A \otimes \MAT C,\, \MAT B \otimes \MAT D \big] ~=~ \HIP{\MAT A}{\MAT B} \big( \MAT P_{00} \otimes [ \MAT C, \MAT D ] \big) ~+~ \big( [ \MAT A, \MAT B ] \otimes \MAT P_{00} \big) \HIP{\MAT C}{\MAT D} ~. \end{equation} Letting $\MAT a \equiv \ALG U(\MAT A)$, etc.~be the corresponding real matrices, this translates to \begin{equation} \label{eq:form1a} 2\, \big[\!\big[\, \MAT a \otimes \MAT c ,\, \MAT b \otimes \MAT d\, \big]\!\big] ~=~ \HIP{\MAT a}{\MAT b} \big( \MAT E_{00} \otimes [\![\, \MAT c, \MAT d\, ]\!] \big) ~+~ \big( [\![\, \MAT a, \MAT b\, ]\!] \otimes \MAT E_{00} \big) \HIP{\MAT c}{\MAT d} ~. \end{equation} This formula is easily extended to the case in which $a_{00},\ldots,d_{00} \ne 0$ by multilinearity; in the following, however, we will need only the case in which $a_{00} = c_{00} = 1$, which introduces two additional terms: \begin{equation} \label{eq:form1b} \begin{split}
\big[\!\big[\, \MAT E_{00} \otimes \MAT c ,\, \MAT b \otimes \MAT d\, \big]\!\big] ~\leftrightarrow~ \HALF\, \big[\, \MAT P_{00} \otimes \MAT C\, ,\, \MAT B \otimes \MAT D \big] ~=~ \HALF\, \MAT B \otimes [\, \MAT C , \MAT D\, ] ~\leftrightarrow~ & \HALF\, \MAT b \otimes [\![\, \MAT c, \MAT d\, ]\!] ~; \\ \big[\!\big[\, \MAT a \otimes \MAT E_{00} ,\, \MAT b \otimes \MAT d\, \big]\!\big] ~\leftrightarrow~ \HALF\, \big[\, \MAT A \otimes \MAT P_{00}\, ,\, \MAT B \otimes \MAT D \big] ~=~ \HALF\, [\, \MAT A ,\, \MAT B\, ] \otimes \MAT D ~\leftrightarrow~ & \HALF\, [\![\, \MAT a, \MAT b\, ]\!] \otimes \MAT d ~.
\end{split} \end{equation} Finally, we shall need the general result \cite{Lutkepohl:96,Havel!QPT:03}: \begin{equation} \label{eq:form2} \KET{ \MAT X \otimes \MAT Y } ~=~ \big( \MAT P_{00} \otimes \EMB{\ALG K}_{22} \otimes \MAT P_{00} \big)\, \big(\, \KET{ \MAT X } \otimes \KET{ \MAT Y } \big) , \end{equation} for $2\times2$ matrices $\MAT X$ and $\MAT Y$, where the two-particle \emph{commutation matrix} $\EMB{\ALG K}_{22}$ is given by \begin{equation} \EMB{\ALG K}_{22} ~=~ {\sum}_{i,\,j\,=\,0}^{\,1,\,1}\, \MAT E_{ij} \otimes \MAT E_{ji} ~=~ \begin{bmatrix} ~1~&~0~&~0~&~0~\\ ~0~&~0~&~1~&~0~\\ ~0~&~1~&~0~&~0~\\ ~0~&~0~&~0~&~1~ \end{bmatrix} ~. \end{equation}
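Both vectorization identities are standard and easy to confirm with column-major stacking (the construction of $\EMB{\ALG K}_{22}$ from elementary matrices below mirrors the displayed sum):

```python
import numpy as np

rng = np.random.default_rng(6)
vec = lambda M: np.reshape(M, -1, order='F')   # |X>: stack columns

A, X, B, Y = (rng.normal(size=(2, 2)) for _ in range(4))

# |AXB> = (B^T kron A) |X>
lhs1 = vec(A @ X @ B)
rhs1 = np.kron(B.T, A) @ vec(X)

# K22 = sum_{i,j} E_ij kron E_ji over the 2x2 elementary matrices E_ij
K22 = sum(np.kron(Eij, Eij.T)
          for Eij in (np.eye(2)[:, [i]] @ np.eye(2)[[j], :]
                      for i in range(2) for j in range(2)))

# |X kron Y> = (P00 kron K22 kron P00)(|X> kron |Y>)
P = np.kron(np.kron(np.eye(2), K22), np.eye(2))
lhs2 = vec(np.kron(X, Y))
rhs2 = P @ np.kron(vec(X), vec(Y))
```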
We are interested in the case that $\MAT a = \SIG^1$, $\MAT b = \EMB\nu^1$, $\MAT c = \SIG^2$ and $\MAT d = \EMB\nu^2$, i.e.~we have a factorable two-particle state $\SIG^1 \otimes \SIG^2$ evolving under a bi-axial interaction $\EMB\nu^1 \otimes \EMB\nu^2$. To express this more compactly, we define the matrix $\EMB{\ALG S}_{\EMB\nu^1}$ via \begin{equation}
\big\langle\, \EMB\nu^1\, \big|\, \SIG^1\, \big\rangle\, \big|\, \MAT E_{00}\, \big\rangle ~+~ \big|\, \EMB\nu^1\, \big\rangle ~=~ \begin{bmatrix} ~0~&\nu_{10}^1&\nu_{01}^1&\nu_{11}^1\\ \nu_{10}^1&0&0&0\\ \nu_{01}^1&0&0&0\\ \nu_{11}^1&0&0&0 \end{bmatrix} \begin{bmatrix} \text{\small$1$}\\ \sigma_{10}^1\\ \sigma_{01}^1\\ \sigma_{11}^1 \end{bmatrix} ~\equiv~ \EMB{\ALG S}_{\EMB\nu^1}\, \big|\, \SIG^1\, \big\rangle \end{equation} with the analogous definition $\HIP{\EMB\nu^2}{\SIG^2}\, \KET{\MAT E_{00}} + \KET{\EMB\nu^2} \equiv \EMB{\ALG S}_{\EMB\nu^2}\, \KET{\SIG^2}$ for the second qubit. Then Eqs.~(\ref{eq:form0}), (\ref{eq:form1a}), (\ref{eq:form1b}) and (\ref{eq:form2}) give us \begin{align} \label{eq:ugly}
\partial_t\, \big|\, \SIG^1\otimes\SIG^2\, \big\rangle ~=~ & \Big|\, \tfrac{\displaystyle\imath}4\, \big[\!\big[\, \SIG^1 \otimes \SIG^2 ,\, \EMB\nu^1 \otimes \EMB\nu^2\, \big]\!\big] \Big\rangle \notag \\ =~ & \big( \MAT P_{00} \otimes \EMB{\ALG K}_{22} \otimes \MAT P_{00} \big)\, \tfrac{\displaystyle\imath}4\, \Big( \HIP{\SIG^1}{\EMB\nu^1}\, \big|\, \MAT E_{00}\, \big\rangle \otimes \big|\, \big[\!\big[\, \SIG^2,\, \EMB\nu^2\, \big]\!\big]\, \big\rangle ~+ \notag \\ & \cdots~ \big|\, \big[\!\big[\, \SIG^1,\, \EMB\nu^1\, \big]\!\big]\, \big\rangle \otimes \big|\, \MAT E_{00}\, \big\rangle\, \HIP{\SIG^2}{\EMB\nu^2} ~+ \notag \\ & \cdots~ \big|\, \EMB\nu^1\, \big\rangle \otimes \big|\, \big[\!\big[\, \SIG^2 ,\, \EMB\nu^2\, \big]\!\big]\, \big\rangle ~+ \notag \\ & \cdots~ \big|\, \big[\!\big[\, \SIG^1 ,\, \EMB\nu^1\, \big]\!\big]\, \big\rangle \otimes \big|\, \EMB\nu^2\, \big\rangle \Big) \notag \\ =~ & \big( \MAT P_{00} \otimes \EMB{\ALG K}_{22} \otimes \MAT P_{00} \big)\, \tfrac14\, \big( \EMB{\ALG S}_{\EMB\nu^1} \otimes \EMB{\ALG R}_{\EMB\nu^2} \,+\, \EMB{\ALG R}_{\EMB\nu^1} \otimes \EMB{\ALG S}_{\EMB\nu^2} \big)\, \big(\, \big|\, \SIG^1\, \big\rangle \otimes \big|\, \SIG^2\, \big\rangle \big) \notag \\ =~ & \big( \MAT P_{00} \otimes \EMB{\ALG K}_{22} \otimes \MAT P_{00} \big)\, \tfrac14\, \big( \EMB{\ALG S}_{\EMB\nu^1} \otimes \EMB{\ALG R}_{\EMB\nu^2} \,+\, \EMB{\ALG R}_{\EMB\nu^1} \otimes \EMB{\ALG S}_{\EMB\nu^2} \big) ~\cdots \notag \\ & \cdots~ \big( \MAT P_{00} \otimes \EMB{\ALG K}_{22} \otimes \MAT P_{00} \big)\, \big|\, \SIG^1 \otimes \SIG^2\, \big\rangle ~. \end{align}
Since left or right multiplication of a $4$D column or row vector by $\EMB{\ALG R}_{\MAT x}$ gives the cross product of the last three components of that vector with $[\, x_{10}, x_{01}, x_{11} ]$, it may be seen that $\EMB{\ALG S}_{\EMB\nu^1}$ and $\EMB{\ALG R}_{\EMB\nu^1}$ are mutually annihilating (i.e.~$\EMB{\ALG S}_{\EMB\nu^{1}} \EMB{\ALG R}_{\EMB\nu^1} = \EMB{\ALG R}_{\EMB\nu^1}\, \EMB{\ALG S}_{\EMB\nu^1} = \MAT 0$), and similarly for $\EMB{\ALG S}_{\EMB\nu^2}$ and $\EMB{\ALG R}_{\EMB\nu^2}$. As a result, the two terms on the last line of Eq.~(\ref{eq:ugly}) commute, and their exponential factorizes. It is moreover easily shown that $\EMB{\ALG S}_{\MAT x}^{\,3} = \|\MAT x\|^2\, \EMB{\ALG S}_{\MAT x}$ and $\EMB{\ALG R}_{\MAT x}^{\,3} = -\|\MAT x\|^{2}\, \EMB{\ALG R}_{\MAT x}$, so the overall integral is \begin{equation} \label{eq:overall} \begin{split}
& \big|\, \SIG(t)\, \big\rangle ~=~ \big( \MAT P_{00} \otimes \EMB{\ALG K}_{22} \otimes \MAT P_{00} \big)\, \MAT{Exp}\big( \EMB{\ALG S}_{\EMB\nu^1} \otimes \EMB{\ALG R}_{\EMB\nu^2}\, t/4 \big)\, \MAT{Exp}\big( \EMB{\ALG R}_{\EMB\nu^1} \otimes \EMB{\ALG S}_{\EMB\nu^2}\, t/4 \big) ~\cdots \\ & \cdots~ \big( \MAT P_{00} \otimes \EMB{\ALG K}_{22} \otimes \MAT P_{00} \big)\, \big|\, \SIG(0)\, \big\rangle , \end{split} \end{equation} where $\SIG(0) = \SIG^1 \otimes \SIG^2$ and (letting $\MAT P_{00}^{\otimes4}$ be the $2\times2$ identity tensored with itself $4$ times) \begin{equation} \begin{split} \MAT{Exp}\big(\, \EMB{\ALG S}_{\EMB\nu^1} \otimes \EMB{\ALG R}_{\EMB\nu^2}\, t/4 \big) ~=~ & \MAT P_{00}^{\otimes4} \,+\, \frac{\sin\big( \|\EMB\nu^1\|\, \|\EMB\nu^2\|\, t/4 \big)}{\|\EMB\nu^1\|\, \|\EMB\nu^2\|}\, \big( \EMB{\ALG S}_{\EMB\nu^1} \otimes \EMB{\ALG R}_{\EMB\nu^2} \big) ~\cdots \\ & \cdots~ +\, \frac{1 - \cos\big( \|\EMB\nu^1\|\, \|\EMB\nu^2\|\, t/4 \big)}{\|\EMB\nu^1\|^2\, \|\EMB\nu^2\|^2}\, \big( \EMB{\ALG S}_{\EMB\nu^1} \otimes \EMB{\ALG R}_{\EMB\nu^2} \big)^2 \end{split} \end{equation} with an almost identical expression for $\MAT{Exp}\big( \EMB{\ALG R}_{\EMB\nu^1} \otimes \EMB{\ALG S}_{\EMB\nu^2}\, t/4 \big)$. The square in the last term may be evaluated by combining Eq.~(\ref{eq:r2}) with \begin{equation} \label{eq:s2}
\EMB{\ALG S}_{\MAT x}^{\,2} ~=~ \KET{\MAT x} \BRA{\MAT x} ~+~ \|\MAT x\|^2\, \MAT E_{00} \otimes \MAT E_{00} ~. \end{equation}
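All of the algebraic facts used here are readily confirmed numerically. In the sketch below the explicit forms of $\EMB{\ALG S}_{\MAT x}$ and $\EMB{\ALG R}_{\MAT x}$ are read off from the displayed matrices (the helper `SR` is our own), and the closed-form exponential is compared against a brute-force Taylor series:

```python
import numpy as np

rng = np.random.default_rng(7)

def SR(x):
    """S_x and R_x for a coefficient triple x = (x10, x01, x11),
    with the matrix forms taken from the displayed equations."""
    x10, x01, x11 = x
    S = np.array([[0, x10, x01, x11],
                  [x10, 0, 0, 0],
                  [x01, 0, 0, 0],
                  [x11, 0, 0, 0]], dtype=float)
    R = np.array([[0,    0,    0,    0],
                  [0,    0, -x11,  x01],
                  [0,  x11,    0, -x10],
                  [0, -x01,  x10,    0]], dtype=float)
    return S, R

x1, x2 = rng.normal(size=3), rng.normal(size=3)
S1, R1 = SR(x1)
S2, R2 = SR(x2)
n1, n2 = float(x1 @ x1), float(x2 @ x2)   # ||nu1||^2, ||nu2||^2
E00 = np.zeros((4, 4)); E00[0, 0] = 1.0
v1 = np.array([0.0, *x1])                 # the vector |nu1>

# mutual annihilation, cube identities, and Eqs. (r2), (s2)
ok = (np.allclose(S1 @ R1, 0) and np.allclose(R1 @ S1, 0)
      and np.allclose(np.linalg.matrix_power(S1, 3), n1 * S1)
      and np.allclose(np.linalg.matrix_power(R1, 3), -n1 * R1)
      and np.allclose(S1 @ S1, np.outer(v1, v1) + n1 * E00)
      and np.allclose(R1 @ R1, np.outer(v1, v1) - n1 * (np.eye(4) - E00)))

# closed form of Exp(S1 (x) R2 t/4) versus a Taylor-series exponential
G = np.kron(S1, R2)
w = np.sqrt(n1 * n2)
t = 0.9
closed = (np.eye(16) + np.sin(w * t / 4) / w * G
          + (1 - np.cos(w * t / 4)) / w**2 * (G @ G))
expG, term = np.eye(16), np.eye(16)
for k in range(1, 41):
    term = term @ (G * t / 4) / k
    expG = expG + term
```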
Because $\EMB{\ALG S}_{\EMB\nu^1} \otimes \EMB{\ALG R}_{\EMB\nu^2}$ and $\EMB{\ALG R}_{\EMB\nu^1} \otimes \EMB{\ALG S}_{\EMB\nu^2}$ are mutually annihilating, the product of their exponentials expands to only five terms, two pairs of which have identical trigonometric coefficients. On pulling the right-hand column operator ``$\KET{}$'' back to the left in Eq.~(\ref{eq:overall}), essentially reversing what we did to derive the differential version in Eq.~(\ref{eq:ugly}), we obtain the integrated equation of motion we have been seeking: \begin{equation} \label{eq:final2p} \begin{split}
\SIG(t) ~=~ \SIG^1 \otimes \SIG^2 \,+\, \imath\, \sin\big( \|\EMB\nu^1\|\, \|\EMB\nu^2\|\, t/4 \big)\, \big[\!\big[\, \SIG^1 \otimes \SIG^2 ,\, \hat{\EMB\nu}^1 \otimes \hat{\EMB\nu}^2\, \big]\!\big] ~\cdots \\ \cdots~ -\, \big(1 - \cos\big( \|\EMB\nu^1\|\, \|\EMB\nu^2\|\, t/4 \big) \big)\, \big[\!\big[\, \big[\!\big[\, \SIG^1 \otimes \SIG^2 ,\, \hat{\EMB\nu}^1 \otimes \hat{\EMB\nu}^2\, \big]\!\big] ,\, \hat{\EMB\nu}^1 \otimes \hat{\EMB\nu}^2\, \big]\!\big] \,. \end{split} \end{equation} Equations~(\ref{eq:form1a}--\ref{eq:form1b}) tell us (more or less) what the geometric interpretation of the two-particle commutator is, so we turn our attention to the double commutator. Geometric algebra shows that the double commutator of tensor products of the corresponding traceless $2\times2$ Hermitian matrices reduces to \begin{multline} 2\, \big[ \big[\, \MAT A \otimes \MAT C,\, \MAT B \otimes \MAT D\, \big],\, \MAT B \otimes \MAT D\, \big] \\ ~=~ \big[\, \HIP{\MAT A}{\MAT B} \big( \MAT P_{00} \otimes [ \MAT C, \MAT D ] \big) \,+\, \big( [ \MAT A, \MAT B ] \otimes \MAT P_{00} \big) \HIP{\MAT C}{\MAT D} \,,\, \MAT B \otimes \MAT D\, \big] \\ ~=~ \HIP{\MAT A}{\MAT B}\, \MAT B \otimes \big[\, [\MAT C,\, \MAT D],\, \MAT D\, \big] \,+\, \big[\, [\MAT A,\, \MAT B],\, \MAT B\, \big] \otimes \MAT D\, \HIP{\MAT C}{\MAT D} ~, \end{multline} so we will be done once we figure out what the double commutator of $2\times2$
matrices is. On expanding the commutator and using the fact that for such matrices the anticommutator satisfies $\MAT{AB} + \MAT{BA} = \HIP{\MAT A}{\MAT B}\, \MAT P_{00}$, we get (including the real analogs): \begin{equation} \label{eq:1pdc} \begin{split} [\![\, [\![\, \MAT a ,\, \MAT b\, ]\!] ,\, \MAT b\, ]\!] ~\leftrightarrow~ & \big[\, [\, \MAT A,\, \MAT B\, ],\, \MAT B\, \big] ~=~ \MAT{AB}^2 \,-\, 2\, \MAT{BAB} \,+\, \MAT B^2 \MAT A \\ =~ & \MAT A\, \|\MAT B\|^2 \,-\, 2\, (\MAT{BA} + \MAT{AB} - \MAT{AB})\, \MAT B \\ =~ & 2\, \big( \MAT A\, \|\MAT B\|^2 \,-\, \HIP{\MAT A}{\MAT B}\, \MAT B \big) ~\leftrightarrow~ \MAT a\, \|\MAT b\|^2 \,-\, \HIP{\MAT a}{\MAT b}\, \MAT b ~. \end{split} \end{equation} Back-substitution of this and the corresponding expression for $[\,[\MAT C,\, \MAT D],\, \MAT D\,]$ into the preceding equation now yields: \begin{multline} \big[\, \big[\, \MAT A \otimes \MAT C,\, \MAT B \otimes \MAT D\, \big],\, \MAT B \otimes \MAT D\, \big] \begin{aligned}[t] &
=~ \HIP{\MAT A}{\MAT B}\, \MAT B \otimes \big( \MAT C\, \|\MAT D\|^2 \,-\, \HIP{\MAT C}{\MAT D}\, \MAT D \big) \,+\, \big( \MAT A\, \|\MAT B\|^2 \,-\, \HIP{\MAT A}{\MAT B}\, \MAT B \big) \otimes \MAT D\, \HIP{\MAT C}{\MAT D} \\ & =~ \HIP{\MAT A}{\MAT B}\, \|\MAT D\|^2\, (\MAT B \otimes \MAT C) ~+~ \HIP{\MAT C}{\MAT D}\, \|\MAT B\|^2\, (\MAT A \otimes \MAT D) ~-~ 2\, \HIP{\MAT A}{\MAT B}\, \HIP{\MAT C}{\MAT D}\, (\MAT B \otimes \MAT D) \,. \end{aligned} \end{multline} Since no matrix products occur in this expression, we may transliterate directly to the corresponding real expression, including the additional terms arising from setting $a_{00} = c_{00} = 1$ (cf.~Eq.~(\ref{eq:form1b})): \begin{multline} 4\, \big[\!\big[\, \big[\!\big[\, (\MAT a + \MAT E_{00}) \otimes (\MAT c + \MAT E_{00}) ,\, \MAT b \otimes \MAT d\, \big]\!\big] ,\, \MAT b \otimes \MAT d\, \big]\!\big] ~=~ \begin{aligned}[t]
\HIP{\MAT a}{\MAT b}\ensuremath\M[0.075] \ensuremath\M[0.075]\MAT{d}\ensuremath\M[0.075]^2\ensuremath\M[0.075] (\MAT b \otimes \MAT c) \ensuremath\M[0.075]+\ensuremath\M[0.075] \HIP{\MAT c}{\MAT d}\ensuremath\M[0.075] \ensuremath\M[0.075]\MAT b\ensuremath\M[0.075]^2\ensuremath\M[0.075] (\MAT a \otimes \MAT d) ~\cdots \ensuremath\M[0.075] & \ensuremath\M[0.075] -~ 2\ensuremath\M[0.075] \HIP{\MAT a}{\MAT b}\ensuremath\M[0.075] \HIP{\MAT c}{\MAT d}\ensuremath\M[0.075] (\MAT b \otimes \MAT d) ~\cdots \ensuremath\M[0.075] & \ensuremath\M[0.075] +~ 2\ensuremath\M[0.075] \big[\ensuremath\M[0.075][-0.25]\big[\ensuremath\M[0.075] \MAT b \otimes [\ensuremath\M[0.075][-0.12][\ensuremath\M[0.075] \MAT c ,\ensuremath\M[0.075] \MAT d \ensuremath\M[0.075] {]\ensuremath\M[0.075][-0.13]]} \ensuremath\M[0.075]+\ensuremath\M[0.075] [\ensuremath\M[0.075][-0.12][\ensuremath\M[0.075] \MAT a ,\ensuremath\M[0.075] \MAT b \ensuremath\M[0.075]]\ensuremath\M[0.075][-0.12]] \otimes \MAT d ,\ensuremath\M[0.075] \MAT b \otimes \MAT d \ensuremath\M[0.075]\big]\ensuremath\M[0.075][-0.25]\big] \ensuremath\M[0.075][-1.5] & \ensuremath\M[0.075][1.5]. \end{aligned} \end{multline} The first of these additional terms (given on the last line of the above equation) may be evaluated via Eqs.~(\ref{eq:form1a}), (\ref{eq:form1b}) \ensuremath\M[0.075] (\ref{eq:1pdc}) as indicated below, \begin{equation} \begin{split} \ensuremath\M[0.075][-1] 2\ensuremath\M[0.075] \big[\ensuremath\M[0.075][-0.25]\big[\ensuremath\M[0.075] \MAT b \otimes [\ensuremath\M[0.075][-0.12][\ensuremath\M[0.075] \MAT c ,\ensuremath\M[0.075] \MAT d \ensuremath\M[0.075] {]\ensuremath\M[0.075][-0.15]]} ,\ensuremath\M[0.075] \MAT b \otimes \MAT d \ensuremath\M[0.075]\big]\ensuremath\M[0.075][-0.25]\big] \ensuremath\M[0.075][.5]=\ensuremath\M[0.075][.75] &
{\ensuremath\M[0.075]\ensuremath\M[0.075] \MAT b \ensuremath\M[0.075]\ensuremath\M[0.075]}^2 \big( \MAT E_{00} \otimes [\ensuremath\M[0.075][-0.12][\ensuremath\M[0.075] [\ensuremath\M[0.075][-0.12][\ensuremath\M[0.075] \MAT c ,\ensuremath\M[0.075] \MAT d \ensuremath\M[0.075] ]\ensuremath\M[0.075][-0.13]] ,\ensuremath\M[0.075] \MAT d \ensuremath\M[0.075] ]\ensuremath\M[0.075][-0.13]] \big) \ensuremath\M[0.075]+\ensuremath\M[0.075] \big(\ensuremath\M[0.075] [\ensuremath\M[0.075][-0.12][\ensuremath\M[0.075] \MAT b ,\ensuremath\M[0.075] \MAT b \ensuremath\M[0.075] ]\ensuremath\M[0.075][-0.13]] \otimes \MAT E_{00} \big)\ensuremath\M[0.075] \big\langle\ensuremath\M[0.075] [\ensuremath\M[0.075][-0.12][\ensuremath\M[0.075] \MAT c ,\ensuremath\M[0.075] \MAT d \ensuremath\M[0.075] ]\ensuremath\M[0.075][-0.13]] \ensuremath\M[0.075]\big|\ensuremath\M[0.075] \MAT d \ensuremath\M[0.075] \big\rangle \ensuremath\M[0.075] =\ensuremath\M[0.075][.75] &
{\ensuremath\M[0.075]\ensuremath\M[0.075] \MAT b \ensuremath\M[0.075]\ensuremath\M[0.075]}^2\ensuremath\M[0.075] \big(\ensuremath\M[0.075] \MAT E_{00} \otimes (\ensuremath\M[0.075] {\ensuremath\M[0.075]\ensuremath\M[0.075] \MAT d\ensuremath\M[0.075] \ensuremath\M[0.075]}^2 \ensuremath\M[0.075] \MAT c \ensuremath\M[0.075]-\ensuremath\M[0.075] \HIP{\MAT c}{\MAT d}\ensuremath\M[0.075] \MAT d \ensuremath\M[0.075]) \big) ~, \end{split} \end{equation} with an analogous expression for the remaining term. We now put in our previous values $\MAT a = \VP{\SIG}^1 \equiv \SIG^1 - \MAT E_{00}$, $\MAT b = \EMB\nu^1$, $\MAT c = \VP{\SIG}^2 \equiv \SIG^2 - \MAT E_{00}$ and $\MAT d = \EMB\nu^2$ and expand the result fully to get: \begin{multline} 4\ensuremath\M[0.075] \big[\ensuremath\M[0.075][-0.25]\big[ \big[\ensuremath\M[0.075][-0.25]\big[ \SIG^1 \otimes \SIG^2 ,\ensuremath\M[0.075] \EMB\nu^1 \otimes \EMB\nu^2 \ensuremath\M[0.075][.1]\big]\ensuremath\M[0.075][-0.25]\big] ,\ensuremath\M[0.075]\EMB\nu^1 \otimes \EMB\nu^2 \ensuremath\M[0.075][.1]\big]\ensuremath\M[0.075][-0.25]\big] \ensuremath\M[0.075][.5]= \ensuremath\M[0.075] \begin{aligned}[t]
\big\langle\ensuremath\M[0.075] \SIG^1 \ensuremath\M[0.075]\big|\ensuremath\M[0.075] \EMB\nu^1 \big\rangle\ensuremath\M[0.075] {\big\ensuremath\M[0.075]\EMB\nu^2\big\ensuremath\M[0.075]}^2 \ensuremath\M[0.075] \big( \EMB\nu^1 \otimes \VP{\SIG}^2\ensuremath\M[0.075] \big) \ensuremath\M[0.075]+\ensuremath\M[0.075] {\big\ensuremath\M[0.075]\ensuremath\M[0.075]\EMB\nu^1\big\ensuremath\M[0.075]}^2\ensuremath\M[0.075] \big\langle\ensuremath\M[0.075] \SIG^2 \ensuremath\M[0.075]\big|\ensuremath\M[0.075] \EMB\nu^2\ensuremath\M[0.075] \big\rangle\ensuremath\M[0.075] \big( \VP{\SIG}^1 \otimes \EMB\nu^2\ensuremath\M[0.075] \big) ~\cdots \ensuremath\M[0.075] &
\ensuremath\M[0.075] -~ 2\ensuremath\M[0.075] \big\langle\ensuremath\M[0.075] \SIG^1 \ensuremath\M[0.075]\big|\ensuremath\M[0.075] \EMB\nu^1\ensuremath\M[0.075] \big\rangle\ensuremath\M[0.075] \big\langle\ensuremath\M[0.075] \SIG^2 \ensuremath\M[0.075]\big|\ensuremath\M[0.075] \EMB\nu^2\ensuremath\M[0.075] \big\rangle\ensuremath\M[0.075] \big( \EMB\nu^1 \otimes \EMB\nu^2\ensuremath\M[0.075] \big) ~\cdots \ensuremath\M[0.075] & \end{aligned} \ensuremath\M[0.075] \begin{aligned}[t]
+~{\big\ensuremath\M[0.075]\ensuremath\M[0.075] \EMB\nu^1\ensuremath\M[0.075] \big\ensuremath\M[0.075]}^2\ensuremath\M[0.075] {\big\ensuremath\M[0.075]\ensuremath\M[0.075] \EMB\nu^2\ensuremath\M[0.075] \big\ensuremath\M[0.075]}^2\ensuremath\M[0.075] \big( \MAT E_{00} \otimes \VP{\SIG}^2 \ensuremath\M[0.075]+\ensuremath\M[0.075] \VP{\SIG}^1 \otimes \MAT E_{00} \big) ~\cdots \ensuremath\M[0.075][1.25] &
\ensuremath\M[0.075] -~ \Big( {\big\ensuremath\M[0.075]\ensuremath\M[0.075] \EMB\nu^1\ensuremath\M[0.075] \big\ensuremath\M[0.075]}^2\ensuremath\M[0.075] \big\langle\ensuremath\M[0.075] \SIG^2 \ensuremath\M[0.075]\big|\ensuremath\M[0.075] \EMB\nu^2\ensuremath\M[0.075] \big\rangle\ensuremath\M[0.075] \MAT E_{00} \otimes \EMB\nu^2 \ensuremath\M[0.075]+\ensuremath\M[0.075] \big\langle\ensuremath\M[0.075] \SIG^1 \ensuremath\M[0.075]\big|\ensuremath\M[0.075] \EMB\nu^1\ensuremath\M[0.075] \big\rangle\ensuremath\M[0.075] {\big\ensuremath\M[0.075]\ensuremath\M[0.075] \EMB\nu^2\ensuremath\M[0.075] \big\ensuremath\M[0.075]}^2\ensuremath\M[0.075] \EMB\nu^1 \otimes \MAT E_{00} \Big) . & \end{aligned} \end{multline}
Finally, we divide through by $\ensuremath\M[0.075]\EMB\nu^1\ensuremath\M[0.075]^2\ensuremath\M[0.075] \ensuremath\M[0.075]\EMB\nu^2\ensuremath\M[0.075]^2$ to get the normalized ``vectors'' $\smash{\hat{\EMB\nu}}^1$ and $\smash{\hat{\EMB\nu}}^2$, replace the $\SIG$'s by $\VP{\SIG}$'s inside the traces (which doesn't change their values) and recombine terms to obtain: \begin{multline} 4\ensuremath\M[0.075] \big[\ensuremath\M[0.075][-0.25]\big[ \big[\ensuremath\M[0.075][-0.25]\big[ \SIG^1 \otimes \SIG^2 ,\ensuremath\M[0.075] \hat{\EMB\nu}^1 \otimes \hat{\EMB\nu}^2 \ensuremath\M[0.075][.1]\big]\ensuremath\M[0.075][-0.25]\big] ,\ensuremath\M[0.075] \hat{\EMB\nu}^1 \otimes \hat{\EMB\nu}^2 \ensuremath\M[0.075][.1]\big]\ensuremath\M[0.075][-0.25]\big] \ensuremath\M[0.075][.5]= \ensuremath\M[0.075] \begin{aligned}[t]
& \big( \VP{\SIG}^1 \ensuremath\M[0.075]-\ensuremath\M[0.075] \big\langle\ensuremath\M[0.075] \VP\SIG^1 \ensuremath\M[0.075]\big|\ensuremath\M[0.075] \hat{\EMB\nu}^1\ensuremath\M[0.075] \big\rangle\ensuremath\M[0.075] \hat{\EMB\nu}^1 \big) \otimes \big( \big\langle\ensuremath\M[0.075] \VP\SIG^2 \ensuremath\M[0.075]\big|\ensuremath\M[0.075] \hat{\EMB\nu}^2\ensuremath\M[0.075] \big\rangle\ensuremath\M[0.075] \hat{\EMB\nu}^2 \ensuremath\M[0.075]+\ensuremath\M[0.075] \MAT E_{00} \big) ~\cdots \ensuremath\M[0.075][2] \ensuremath\M[0.075] +~
& \big( \big\langle\ensuremath\M[0.075] \VP\SIG^1 \ensuremath\M[0.075]\big|\ensuremath\M[0.075] \hat{\EMB\nu}^1\ensuremath\M[0.075] \big\rangle\ensuremath\M[0.075] \hat{\EMB\nu}^1 \ensuremath\M[0.075]+\ensuremath\M[0.075] \MAT E_{00} \big) \otimes \big( \VP{\SIG}^2 \ensuremath\M[0.075]-\ensuremath\M[0.075] \big\langle\ensuremath\M[0.075] \VP\SIG^2 \ensuremath\M[0.075]\big|\ensuremath\M[0.075] \hat{\EMB\nu^2}\ensuremath\M[0.075] \big\rangle\ensuremath\M[0.075] \hat{\EMB\nu}^2 \big) ~. \end{aligned} \end{multline} This has a fairly simple interpretation: The rejection of $\VP{\SIG}^1$ from $\hat{\EMB\nu}^1$ is tensored with the projection of $\VP{\SIG}^2$ onto $\hat{\EMB\nu}^2$ (plus the usual scalar part of $1$) and added to the projection of $\VP{\SIG}^1$ onto $\hat{\EMB\nu}^1$ (plus the scalar part) tensored with the rejection of $\VP{\SIG^2}$ from $\hat{\EMB\nu}^2$. This should be contrasted with the single commutator (Eq.~(\ref{eq:form1a})), wherein the inner and outer products of each pair are tensored together both ways and added.
\section{Meta-Metamorphosis} In the previous section we have shown how the Hamiltonians usually assumed for quantum computing with qubits can be integrated entirely within the real domain. For a single qubit, the results could be interpreted as a simple Bloch vector rotation. With a bi-axial interaction between two qubits, we also found that the integrated expression had a reasonably nice geometric interpretation. Algebraically, however, it is usually easier to integrate in the Hermitian domain, simply because Hermitian matrices are easier to diagonalize. In this section, therefore, we shall derive formulae by which matrix representations of general superoperators can be translated from the Hermitian into the real domain, along with some specific examples of their utility.
Using the identity given in Eq.~(\ref{eq:oppr2prop}) together with our operator sum expressions for $\ALG U$ and its inverse (Eqs.~(\ref{eq:you}) \ensuremath\M[0.075] (\ref{eq:uoy})), it is straightforward to show that an arbitrary superoperator $\ALG S$ with matrix representation $\EMB{\ALG S}$ acting on $\RHO$ via $\RHO \mapsto \EMB{\ALG S}\ensuremath\M[0.075] \KET{\RHO}$ transforms into the real domain according to \begin{equation} \label{eq:supopstfm} \EMB{\ALG S} \ensuremath\M[0.075]\stackrel{\ALG U}{\longleftrightarrow}\ensuremath\M[0.075] 2^{-N\ensuremath\M[0.075]} \EMB{\ALG Q} \bigg( \sum_{m'=0}^M\ensuremath\M[0.075] \MAT P_{M,(M-m')} \otimes \MAT P_{m',0} \bigg) \EMB{\ALG S} \bigg( \sum_{m=0}^M\ensuremath\M[0.075] \MAT P_{M,(M-m)} \otimes \MAT P_{m,0} \bigg) \OL{\EMB{\ALG Q}} ~, \end{equation} where $\EMB{\ALG Q} \equiv \DMAT(\ensuremath\M[0.075]\KET{\MAT Q^{\otimes N}})$. The superoperator $\ALG S$ may be written in operator sum form versus the basis of elementary matrices \cite{Havel!QPT:03} as \begin{equation}
\EMB{\ALG S}\ensuremath\M[0.075] \KET{\RHO} ~=~ \bigg| \sum_{i,j=0}^{M,M}\ensuremath\M[0.075] \sum_{k,\ell=0}^{M,M}\ensuremath\M[0.075] s_{k\ell}^{ij}\ensuremath\M[0.075] \MAT E_{ki\ensuremath\M[0.075]} \RHO\ensuremath\M[0.075][0.05] \MAT E_{j\ell} \bigg\rangle ~=~ \sum_{i,j=0}^{M,M}\ensuremath\M[0.075] \sum_{k,\ell=0}^{M,M}\ensuremath\M[0.075] s_{k\ell}^{ij}\ensuremath\M[0.075] \big( \MAT E_{\ell j} \otimes \MAT E_{ki} \big)\ensuremath\M[0.075] \KET{\RHO} \ensuremath\M[0.075]. \end{equation} On substituting the second of these equations (sans $\KET\RHO$) into the first and rearranging things a bit, the transformed superoperator becomes \begin{equation} 2^{-N\ensuremath\M[0.075]} \EMB{\ALG Q} \bigg( \sum_{i,j=0}^{M,M}\ensuremath\M[0.075] \sum_{k,\ell=0}^{M,M}\ensuremath\M[0.075] s_{k\ell}^{ij}\ensuremath\M[0.075] \sum_{m,m'=0}^{M,M}\ensuremath\M[0.075] \big( \MAT P_{M,(M-m')\ensuremath\M[0.075]} \MAT E_{\ensuremath\M[0.075][1.3]\ell j\ensuremath\M[0.075]} \MAT P_{M,(M-m)} \big) \otimes \big( \MAT P_{m',0\ensuremath\M[0.075]} \MAT E_{ki\ensuremath\M[0.075]} \MAT P_{m,0} \big)\ensuremath\M[0.075] \bigg) \OL{\EMB{\ALG Q}} ~. \end{equation} Unfortunately, because each factor in the above Kronecker product depends on both indices $m$ and $m'$, this cannot be regarded as a transformation of the elementary matrix basis into the real domain.
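The columnization convention at work here is ordinary column-stacking, under which $\MAT A \RHO\, \MAT B$ corresponds to $(\MAT B^\top \otimes \MAT A)\, \KET{\RHO}$. The following NumPy sketch (with arbitrary $3\times3$ test matrices, purely for illustration) checks this convention and its specialization to the elementary matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A, rho, B = (rng.standard_normal((3, 3)) for _ in range(3))

def vec(X):
    # column-stacking "ket" of a matrix, |X> in the text's notation
    return X.reshape(-1, order="F")

# A rho B corresponds to (B^T kron A) acting on |rho>
lhs = vec(A @ rho @ B)
rhs = np.kron(B.T, A) @ vec(rho)
assert np.allclose(lhs, rhs)

# specialized to elementary matrices: E_ki rho E_jl <-> (E_lj kron E_ki)|rho>
def E(i, j, n=3):
    M = np.zeros((n, n)); M[i, j] = 1.0; return M

k, i, j, l = 1, 2, 0, 2
assert np.allclose(vec(E(k, i) @ rho @ E(j, l)),
                   np.kron(E(l, j), E(k, i)) @ vec(rho))
```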
Better insight can be obtained by looking at how the Choi matrix $\EMB{Choi}(\EMB{\ALG S})$ of the superoperator transforms. This may be obtained from the propagating matrix $\EMB{\ALG S}$ simply by replacing Kronecker products of the elementary matrices by dyadic products of the corresponding columnized basis, but in the opposite order \cite{Havel!QPT:03}. This task is facilitated by expressing the left- and right-multiplication by diagonal matrices as a Hadamard product, using the well-known formula \cite{HaShViCo:01} \begin{equation} \label{eq:wellknown} \DMAT(\MAT a)\, \MAT X\, \DMAT^\dag(\MAT b) ~=~ (\MAT{ab}^\dag) \odot \MAT X ~, \end{equation} which is essentially a special case of Eq.~(\ref{eq:form2}). This allows the transformed superoperator to be rewritten as \begin{multline}
2^{-N\ensuremath\M[0.075]} \Big( \big|\ensuremath\M[0.075] \MAT Q^{\otimes N} \big\rangle\ensuremath\M[0.075] \big\langle\ensuremath\M[0.075] \MAT Q^{\otimes N} \big| \Big) \odot \sum_{i,j=0}^{M,M}\ensuremath\M[0.075] \sum_{k,\ell=0}^{M,M}\ensuremath\M[0.075] s_{k\ell}^{ij} ~\cdots \ensuremath\M[0.075] \cdots \sum_{m,m'=0}^{M,M}\ensuremath\M[0.075] \Big(\ensuremath\M[0.075] \big( \MAT P_{M,(M-m')\ensuremath\M[0.075]} \MAT E_{\ensuremath\M[0.075][1.3]\ell j\ensuremath\M[0.075]} \MAT P_{M,(M-m)} \big) \otimes \big( \MAT P_{m',0\ensuremath\M[0.075]} \MAT E_{ki\ensuremath\M[0.075]} \MAT P_{m,0} \big)\ensuremath\M[0.075] \Big) ~. \end{multline} The advantage of this form is that the Hadamard product commutes with the $\EMB{Choi}$ operator (since it rearranges the entries of the product's operands identically), giving us \begin{align} \ensuremath\M[0.075][-0.25] \EMB{Choi}(\EMB{\ALG S}) \ensuremath\M[0.075][0.25]\stackrel{\ALG U}{\longleftrightarrow}\ensuremath\M[0.075][0.5] &
2^{-N\ensuremath\M[0.075]} \EMB{Choi}\Big( \big|\ensuremath\M[0.075] \MAT Q^{\otimes N} \big\rangle\ensuremath\M[0.075] \big\langle\ensuremath\M[0.075] \MAT Q^{\otimes N} \big| \Big) \odot \sum_{i,j=0}^{M,M}\ensuremath\M[0.075] \sum_{k,\ell=0}^{M,M}\ensuremath\M[0.075] s_{k\ell}^{ij} ~\cdots \notag\ensuremath\M[0.075] & \cdots\ensuremath\M[0.075] \sum_{m,m'=0}^{M,M}\ensuremath\M[0.075] \EMB{Choi}\Big(\ensuremath\M[0.075] \big( \MAT P_{M,(M-m')\ensuremath\M[0.075]} \MAT E_{\ensuremath\M[0.075][1.3]\ell j\ensuremath\M[0.075]} \MAT P_{M,(M-m)} \big) \otimes \big( \MAT P_{m',0\ensuremath\M[0.075]} \MAT E_{ki\ensuremath\M[0.075]} \MAT P_{m,0} \big)\ensuremath\M[0.075] \Big) \notag\ensuremath\M[0.075] =\ensuremath\M[0.075][0.5] & 2^{-N\ensuremath\M[0.075]} \big( \OL{\MAT Q}^{\ensuremath\M[0.075]\otimes N\ensuremath\M[0.075]} \otimes \MAT Q^{\otimes N} \big) \odot \sum_{i,j=0}^{M,M}\ensuremath\M[0.075] \sum_{k,\ell=0}^{M,M}\ensuremath\M[0.075] s_{k\ell}^{ij} ~\cdots \notag\ensuremath\M[0.075] &
\cdots\ensuremath\M[0.075] \sum_{m,m'=0}^{M,M}\ensuremath\M[0.075] \Big( \big|\ensuremath\M[0.075] \MAT P_{m',0\ensuremath\M[0.075]} \MAT E_{ki\ensuremath\M[0.075]} \MAT P_{m,0} \big\rangle \big\langle \MAT P_{M,(M-m')\ensuremath\M[0.075]} \MAT E_{\ensuremath\M[0.075][1.3]\ell j\ensuremath\M[0.075]} \MAT P_{M,(M-m)} \big| \Big) \notag\ensuremath\M[0.075] =\ensuremath\M[0.075][0.5] & 2^{-N\ensuremath\M[0.075]} \big( \OL{\MAT Q}^{\ensuremath\M[0.075]\otimes N\ensuremath\M[0.075]} \otimes \MAT Q^{\otimes N} \big) \odot \sum_{i,j=0}^{M,M}\ensuremath\M[0.075] \sum_{k,\ell=0}^{M,M}\ensuremath\M[0.075] s_{k\ell}^{ij}\sum_{m,m'=0}^{M,M}\ensuremath\M[0.075] \big( \MAT P_{m,0} \otimes \MAT P_{m',0} \big)\ensuremath\M[0.075] \KET{\MAT E_{ki\ensuremath\M[0.075]}} ~\cdots\notag\ensuremath\M[0.075] &
\cdots~ \big\langle\ensuremath\M[0.075] \MAT E_{\ensuremath\M[0.075][1.3]\ell j\ensuremath\M[0.075]} \big| \big( \MAT P_{M,(M-m)} \otimes \MAT P_{M,(M-m')} \big) \ensuremath\M[0.075]\notag =\ensuremath\M[0.075][0.5] & 2^{-N\ensuremath\M[0.075]} \big( \OL{\MAT Q}^{\ensuremath\M[0.075]\otimes N\ensuremath\M[0.075]} \otimes \MAT Q^{\otimes N} \big) \odot \sum_{m,m'=0}^{M,M}\ensuremath\M[0.075] \big( \MAT P_{m,0} \otimes \MAT P_{m',0} \big)\ensuremath\M[0.075] \sum_{i,j=0}^{M,M}\ensuremath\M[0.075] \sum_{k,\ell=0}^{M,M}\ensuremath\M[0.075] s_{k\ell}^{ij} ~\cdots \ensuremath\M[0.075]\notag & \cdots~ \big( \MAT E_{ij} \otimes \MAT E_{k\ell} \big)\big( \MAT P_{M,(M-m)} \otimes \MAT P_{M,(M-m')} \big) \ensuremath\M[0.075]\notag \equiv\ensuremath\M[0.075][0.5] & 2^{-N\ensuremath\M[0.075]} \big( \OL{\MAT Q}^{\ensuremath\M[0.075]\otimes N\ensuremath\M[0.075]} \otimes \MAT Q^{\otimes N} \big) \odot \sum_{m=0}^{M'}\ensuremath\M[0.075] \MAT P_{m,0}\ensuremath\M[0.075] \EMB{Choi}(\EMB{\ALG S})\ensuremath\M[0.075] \MAT P_{M',(M'-m)} ~, \end{align} where $M' \equiv (M+1)^2 - 1 = 2^{2N} - 1$ and we have used the relation $\KET{\MAT E_{ki}}\ensuremath\M[0.075] \BRA{\MAT E_{\ell j}} = (\MAT e_i \otimes \MAT e_k)(\MAT e_j \otimes \MAT e_\ell)^\top = \MAT E_{ij} \otimes \MAT E_{k\ell}$.
In other words, the Choi matrix of a superoperator maps into the real domain much like a density matrix on twice as many qubits. In fact we can write the \emph{real} transformation matrix $\EMB{\ALG T}$, which acts on the real density matrix as $\EMB{\ALG T}\ensuremath\M[0.075] \KET{\SIG}$, in the following compact form: \begin{equation} \EMB{\ALG T} \ensuremath\M[0.075][0.7]=\ensuremath\M[0.075][0.7] 2^{-N}\ensuremath\M[0.075] \EMB{Choi}\Big(\ensuremath\M[0.075] \ALG U\big(\ensuremath\M[0.075] \EMB{Choi}( \EMB{\ALG S} )\ensuremath\M[0.075] \big) \Big)\ensuremath\M[0.075] \OL{\EMB{\ALG Q}}^{\,2} ~. \end{equation} Turning this around, we also find that we can express the Choi matrix of $\ALG S$ in the Pauli basis as \begin{equation} \begin{split} & 2^{-N}\ensuremath\M[0.075] \EMB{Choi}(\EMB{\ALG S}) \ensuremath\M[0.075][0.7]=\ensuremath\M[0.075][0.7] \ALG U^{-1}\big( \EMB{Choi}( \EMB{\ALG T}\ensuremath\M[0.075] \EMB{\ALG Q}^2 \ensuremath\M[0.075]) \big) \ensuremath\M[0.075] \equiv\ensuremath\M[0.075] & \sum_{i,j=0}^{M,M} \sum_{k,\ell=0}^{M,M}\ensuremath\M[0.075] \ALG U^{-1}\big(\ensuremath\M[0.075] t_{k\ell}^{ij}\ensuremath\M[0.075] \EMB{Choi}\big( (\MAT E_{\ell j} \otimes \MAT E_{ki})\ensuremath\M[0.075] \EMB{\ALG Q}^2 \ensuremath\M[0.075]\big) \big) \ensuremath\M[0.075] =\ensuremath\M[0.075] & \sum_{i,j=0}^{M,M} \sum_{k,\ell=0}^{M,M}\ensuremath\M[0.075] t_{k\ell}^{ij}~ \ALG U^{-1}\Big( \EMB{Choi}(\MAT E_{\ell j} \otimes \MAT E_{ki}) \odot \EMB{Choi}\big(\ensuremath\M[0.075] \KET{\MAT 1\ensuremath\M[0.075] \MAT 1^\top} \BRA{\MAT Q^{\otimes N\ensuremath\M[0.075]} \odot \MAT Q^{\otimes N}}\ensuremath\M[0.075] \big)\ensuremath\M[0.075] \Big) \ensuremath\M[0.075] =\ensuremath\M[0.075] & \sum_{i,j=0}^{M,M} \sum_{k,\ell=0}^{M,M}\ensuremath\M[0.075] t_{k\ell}^{ij}\ensuremath\M[0.075] \ALG U^{-1}\big( \MAT E_{ij} \otimes \MAT E_{k\ell} \big) \odot \big( \big( \MAT Q^{\otimes N\ensuremath\M[0.075]} \odot \MAT Q^{\otimes N} \big) \otimes \big( \MAT 1\ensuremath\M[0.075] \MAT 
1^\top \big) \big) \big) \ensuremath\M[0.075] =\ensuremath\M[0.075] & 2^{-2N}\ensuremath\M[0.075] \big( \big( \MAT Q^{\otimes N\ensuremath\M[0.075]} \odot \MAT Q^{\otimes N} \big) \otimes \big( \MAT 1\ensuremath\M[0.075] \MAT 1^\top \big) \big) \odot \sum_{i,j=0}^{M,M} \sum_{k,\ell=0}^{M,M}\ensuremath\M[0.075] t_{k\ell}^{ij}\ensuremath\M[0.075] \big( \MAT P_{ij} \otimes \MAT P_{k\ell} \big) ~, \end{split} \end{equation} where $\MAT 1$ denotes a column vector of ones of the appropriate size.
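Two ingredients used repeatedly above are easy to check numerically: the diagonal-congruence formula of Eq.~(\ref{eq:wellknown}), and the dyad relation $\KET{\MAT E_{ki}}\, \BRA{\MAT E_{\ell j}} = \MAT E_{ij} \otimes \MAT E_{k\ell}$. A NumPy sketch with arbitrary test data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
a = rng.standard_normal(n) + 1j * rng.standard_normal(n)
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# diag(a) X diag(b)^dagger = (a b^dagger) o X   (Hadamard product)
lhs = np.diag(a) @ X @ np.diag(b).conj().T
rhs = np.outer(a, b.conj()) * X
assert np.allclose(lhs, rhs)

def E(i, j):
    M = np.zeros((n, n)); M[i, j] = 1.0; return M

vec = lambda M: M.reshape(-1, order="F")  # column-stacking ket

# |E_ki><E_lj| = E_ij kron E_kl
k, i, l, j = 2, 1, 3, 0
assert np.allclose(np.outer(vec(E(k, i)), vec(E(l, j))),
                   np.kron(E(i, j), E(k, l)))
```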
It is time for our examples! We shall begin with operator sums for single-qubit rotations, and go on to show how rotations about the $\SIG[3]$ axis, as well as the $\SIG[3]^1\SIG[3]^2$ interaction between two qubits, can be compactly described in the real domain using Hadamard products. We close by showing that this description also extends quite nicely to $\SIG[3]$ dephasing, as well as nonunital relaxation back towards a nonrandom equilibrium state (i.e.~$T_2$ and $T_1$ relaxation in NMR parlance).
Consider the Choi matrix of the propagator which rotates a single qubit (Eqs.~(\ref{eq:form0}--\ref{eq:r2}) above): \begin{align}
& \EMB{Choi}\big( \MAT P_{00} \otimes \MAT P_{00} \ensuremath\M[0.075]+\ensuremath\M[0.075] \sin(\ensuremath\M[0.075]\EMB\nu\ensuremath\M[0.075]\ensuremath\M[0.075][.1]t/2\ensuremath\M[0.075][.1])\ensuremath\M[0.075] \EMB{\ALG R}_{\widehat{\EMB\nu}} \ensuremath\M[0.075]+\ensuremath\M[0.075] (1-\cos(\ensuremath\M[0.075]\EMB\nu\ensuremath\M[0.075]\ensuremath\M[0.075][.1]t/2\ensuremath\M[0.075][.1]))\ensuremath\M[0.075] \EMB{\ALG R}_{\widehat{\EMB\nu}}^{\,2} \big) \notag\ensuremath\M[0.075] =\ensuremath\M[0.075][0.5] &
\begin{bmatrix} ~1~&~0~&~0~&~1~\ensuremath\M[0.075] ~0~&~0~&~0~&~0~\ensuremath\M[0.075] ~0~&~0~&~0~&~0~\ensuremath\M[0.075] ~1~&~0~&~0~&~1~ \end{bmatrix} ~+~ \sin(\ensuremath\M[0.075]\EMB\nu\ensuremath\M[0.075]\ensuremath\M[0.075][.1]t/2\ensuremath\M[0.075][.1]) \begin{bmatrix} ~0&0&0&0\ensuremath\M[0.075] ~0&0&-\hat\nu_{11}&\hat\nu_{10}\ensuremath\M[0.075] ~0&\hat\nu_{11}&0&-\hat\nu_{10}\ensuremath\M[0.075] ~0&-\hat\nu_{01}&\hat\nu_{01}&0 \end{bmatrix} ~+~ \cdots \ensuremath\M[0.075]\notag &
\cdots~ (1-\cos(\ensuremath\M[0.075]\EMB\nu\ensuremath\M[0.075]\ensuremath\M[0.075][.1]t/2\ensuremath\M[0.075][.1])) \begin{bmatrix} ~0&0&0& -\hat\nu_{10}^2-\hat\nu_{11}^2 \ensuremath\M[0.075] ~0&0& \hat\nu_{10}\hat\nu_{01} & \hat\nu_{01}\hat\nu_{11} \ensuremath\M[0.075] ~0& \hat\nu_{10}\hat\nu_{01} &0& \hat\nu_{01}\hat\nu_{11} \ensuremath\M[0.075] -\hat\nu_{01}^2-\hat\nu_{11}^2 & \hat\nu_{10}\hat\nu_{11} & \hat\nu_{10}\hat\nu_{11} & -\hat\nu_{01}^2-\hat\nu_{10}^2 \end{bmatrix} ~, \end{align}
where the ``hat'' on the $\nu$'s indicates normalization by $\ensuremath\M[0.075]\EMB\nu\ensuremath\M[0.075]$. A general operator sum can be derived from this matrix (as well as by expanding the implied commutators via Eq.~(\ref{eq:1pc})), but since this is a bit involved we shall restrict ourselves to rotations by an angle $\vartheta =\ensuremath\M[0.075]\EMB\nu\ensuremath\M[0.075]\ensuremath\M[0.075][0.1]t/2$ about the $\LAB x$, $\LAB y$ or $\LAB z$ coordinate axes. On substituting $\hat\nu_{10} = 1$ and $\hat\nu_{01} = \hat\nu_{11} = 0$ into the above Choi matrix we obtain the following singular value decomposition: \begin{equation} \begin{split} \begin{bmatrix} ~1~&~0~&~0~& \cos(\vartheta)\ensuremath\M[0.075] ~0~&~0~&~0~& \sin(\vartheta)\ensuremath\M[0.075] ~0~&~0~&~0~& -\sin(\vartheta)\ensuremath\M[0.075] ~1~&~0~&~0~& \cos(\vartheta) \end{bmatrix} \ensuremath\M[0.075][0.5]=\ensuremath\M[0.075][0.5] & \begin{bmatrix} \cos(\vartheta/2)&\sin(\vartheta/2)\ensuremath\M[0.075] \sin(\vartheta/2)~&-\cos(\vartheta/2)\ensuremath\M[0.075] -\sin(\vartheta/2)~&~\cos(\vartheta/2)\ensuremath\M[0.075] \cos(\vartheta/2)&\sin(\vartheta/2) \end{bmatrix} \cdots \ensuremath\M[0.075] & \cdots \begin{bmatrix} \cos(\vartheta/2)&0\ensuremath\M[0.075] 0&\sin(\vartheta/2) \end{bmatrix}\ensuremath\M[0.075][-0.25] \begin{bmatrix} ~1~&~0~&~0~&~1~\ensuremath\M[0.075] ~1~&~0~&~0~&-1~ \end{bmatrix} ~, \end{split} \end{equation} as may be readily verified using the usual half-angle formulae. 
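The factorization above can be confirmed by simply multiplying out the three factors; a NumPy sketch over a few arbitrary angles:

```python
import numpy as np

def choi_x(theta):
    # Choi matrix of an x-rotation by theta (hat-nu_10 = 1 case from the text)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1, 0, 0,  c],
                     [0, 0, 0,  s],
                     [0, 0, 0, -s],
                     [1, 0, 0,  c]], dtype=float)

def factored(theta):
    # the three factors of the displayed decomposition
    ch, sh = np.cos(theta / 2), np.sin(theta / 2)
    L = np.array([[ ch,  sh],
                  [ sh, -ch],
                  [-sh,  ch],
                  [ ch,  sh]])
    D = np.diag([ch, sh])
    R = np.array([[1, 0, 0,  1],
                  [1, 0, 0, -1]], dtype=float)
    return L @ D @ R

for theta in (0.3, 1.1, 2.7):
    assert np.allclose(choi_x(theta), factored(theta))
```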
The corresponding operator sum for rotation by $\vartheta$ about the $\LAB x$-axis is simply: \begin{equation} \begin{split} \ALG U\Big( \EMB e^{-\imath(\vartheta/2)\MAT P_{10}} \RHO\ensuremath\M[0.075] \EMB e^{~\imath(\vartheta/2)\MAT P_{10}} \Big) \ensuremath\M[0.075][0.5]=\ensuremath\M[0.075][0.5] & \cos(\vartheta/2) \begin{bmatrix} \cos(\vartheta/2)&-\sin(\vartheta/2)\ensuremath\M[0.075] \sin(\vartheta/2)&\cos(\vartheta/2) \end{bmatrix}\ensuremath\M[0.075][-0.5] \begin{bmatrix} \text{\small$1$}&\sigma_{01}\ensuremath\M[0.075] \sigma_{10}&\sigma_{11} \end{bmatrix} ~+~ \cdots \ensuremath\M[0.075] & \sin(\vartheta/2) \begin{bmatrix} \sin(\vartheta/2)&\cos(\vartheta/2)\ensuremath\M[0.075] -\cos(\vartheta/2)&\sin(\vartheta/2) \end{bmatrix}\ensuremath\M[0.075][-0.5] \begin{bmatrix} \text{\small$1$}&\sigma_{01}\ensuremath\M[0.075] \sigma_{10}&\sigma_{11} \end{bmatrix}\ensuremath\M[0.075][-0.5] \begin{bmatrix} ~1~&~0~\ensuremath\M[0.075] ~0~&-1~ \end{bmatrix} . \end{split} \end{equation} In a similar fashion, it can be shown that the operator sum for a $\LAB y$-rotation is: \begin{equation} \begin{split} \ALG U\Big( \EMB e^{-\imath(\vartheta/2)\MAT P_{01}} \RHO\ensuremath\M[0.075] \EMB e^{~\imath(\vartheta/2)\MAT P_{01}} \Big) \ensuremath\M[0.075][0.5]=\ensuremath\M[0.075][0.5] & \cos(\vartheta/2) \begin{bmatrix} \text{\small$1$}&\sigma_{01}\ensuremath\M[0.075] \sigma_{10}&\sigma_{11} \end{bmatrix}\ensuremath\M[0.075][-0.5] \begin{bmatrix} \cos(\vartheta/2)&-\sin(\vartheta/2)\ensuremath\M[0.075] \sin(\vartheta/2)&\cos(\vartheta/2) \end{bmatrix} ~+~ \cdots \ensuremath\M[0.075] & \sin(\vartheta/2) \begin{bmatrix} ~1~&~0~\ensuremath\M[0.075] ~0~&-1~ \end{bmatrix}\ensuremath\M[0.075][-0.5] \begin{bmatrix} \text{\small$1$}&\sigma_{01}\ensuremath\M[0.075] \sigma_{10}&\sigma_{11} \end{bmatrix}\ensuremath\M[0.075][-0.5] \begin{bmatrix} \sin(\vartheta/2)&\cos(\vartheta/2)\ensuremath\M[0.075] -\cos(\vartheta/2)&\sin(\vartheta/2) \end{bmatrix} . \end{split} \end{equation}
For a $\LAB z$-rotation, on the other hand, the Choi matrix turns out to be rank $4$ with singular value decomposition: \begin{equation} \begin{split} & \begin{bmatrix} 1&0&0&\cos(\vartheta)\ensuremath\M[0.075] 0&0&-\sin(\vartheta)&0\ensuremath\M[0.075] 0&\sin(\vartheta)&0&0\ensuremath\M[0.075] \cos(\vartheta)&0&0&1 \end{bmatrix} \ensuremath\M[0.075][0.5]=\ensuremath\M[0.075][0.5] \begin{bmatrix} ~1~&~0~&~0~&~1~\ensuremath\M[0.075] ~0~&~1~&-1~&~0~\ensuremath\M[0.075] ~0~&~1~&~1~&~0~\ensuremath\M[0.075] ~1~&~0~&~0~&-1~ \end{bmatrix} \cdots \ensuremath\M[0.075] & \cdots \begin{bmatrix} \cos^2(\vartheta/2)&0&0&0\ensuremath\M[0.075] 0&\cos(\vartheta/2)\sin(\vartheta/2)&0&0\ensuremath\M[0.075] 0&0&\cos(\vartheta/2)\sin(\vartheta/2)&0\ensuremath\M[0.075] 0&0&0&\sin^2(\vartheta/2) \end{bmatrix}\ensuremath\M[0.075][-0.5] \begin{bmatrix} ~1~&~0~&~0~&~1~\ensuremath\M[0.075] ~0~&1~&-1~&~0~\ensuremath\M[0.075] ~0~&~1~&~1~&~0~\ensuremath\M[0.075] ~1~&~0~&~0~&-1~ \end{bmatrix} . \end{split} \end{equation} This corresponds to the operator sum \begin{equation} \begin{split} \ALG U\Big( \EMB e^{-\imath(\vartheta/2)\MAT P_{11}} \RHO\ensuremath\M[0.075] \EMB e^{~\imath(\vartheta/2)\MAT P_{11}} \Big) \ensuremath\M[0.075][0.5]=\ensuremath\M[0.075][0.5] & \cos^2(\vartheta/2)\ensuremath\M[0.075] \SIG ~+~ \sin^2(\vartheta/2)\ensuremath\M[0.075] \MAT P_{11\ensuremath\M[0.075]} \SIG\ensuremath\M[0.075] \MAT P_{11} ~+~ \cdots \ensuremath\M[0.075] & \cdots~ \imath \cos(\vartheta/2)\sin(\vartheta/2) \big( \MAT P_{10\ensuremath\M[0.075]} \SIG\ensuremath\M[0.075] \MAT P_{01} ~+~ \MAT P_{01\ensuremath\M[0.075]} \SIG\ensuremath\M[0.075] \MAT P_{10} \big) ~, \end{split} \end{equation} which has the pleasant feature that the trigonometric functions occur as scalar factors in each term and not embedded in the operators. 
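This rank-$4$ factorization can likewise be multiplied out numerically; note that the same matrix of singular vectors appears on both sides. A NumPy sketch over a few arbitrary angles:

```python
import numpy as np

def choi_z(theta):
    # Choi matrix of the z-rotation, as displayed above
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1, 0,  0, c],
                     [0, 0, -s, 0],
                     [0, s,  0, 0],
                     [c, 0,  0, 1]], dtype=float)

# common matrix of (unnormalized) singular vectors
L = np.array([[1, 0,  0,  1],
              [0, 1, -1,  0],
              [0, 1,  1,  0],
              [1, 0,  0, -1]], dtype=float)

def factored_z(theta):
    ch, sh = np.cos(theta / 2), np.sin(theta / 2)
    D = np.diag([ch**2, ch * sh, ch * sh, sh**2])
    return L @ D @ L

for theta in (0.2, 1.0, 2.5):
    assert np.allclose(choi_z(theta), factored_z(theta))
```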
This enables us to use Eq.~(\ref{eq:wellknown}) to rewrite it in terms of the Hadamard product as follows: \begin{equation} \begin{split} \ALG U\Big( \EMB e^{-\imath(\vartheta/2)\MAT P_{11}} \RHO\ensuremath\M[0.075] \EMB e^{~\imath(\vartheta/2)\MAT P_{11}} \Big) \ensuremath\M[0.075][0.5] & =\ensuremath\M[0.075][0.5]\begin{bmatrix} 1&\cos(\vartheta)\ensuremath\M[0.075] \cos(\vartheta)&1 \end{bmatrix} \odot \begin{bmatrix} \text{\small$1$}&\sigma_{01}\ensuremath\M[0.075] \sigma_{10}&\sigma_{11} \end{bmatrix} ~+~ \cdots \ensuremath\M[0.075] \cdots~ \begin{bmatrix} ~0~&~1~\ensuremath\M[0.075] ~1~&~0~ \end{bmatrix}\ensuremath\M[0.075][-0.25] & \left( \begin{bmatrix} 0&-\sin(\vartheta)\ensuremath\M[0.075] \sin(\vartheta)&0 \end{bmatrix} \odot \begin{bmatrix} \text{\small$1$}&\sigma_{01}\ensuremath\M[0.075] \sigma_{10}&\sigma_{11} \end{bmatrix} \right)\ensuremath\M[0.075][-0.25] \begin{bmatrix} ~0~&~1~\ensuremath\M[0.075] ~1~&~0~ \end{bmatrix} ~. \end{split} \end{equation}
As our next example, wherein the Hadamard product enables even greater simplifications, consider an ``Ising-type'' interaction between two qubits of the form $\SIG[3] \otimes \SIG[3] = \MAT P_{33}$, which is also known as ``weak scalar coupling'' in NMR \cite{ErnBodWok:87}. An operator sum expression for this could be obtained by expanding the general formula given in Eq.~(\ref{eq:final2p}), but because this Hamiltonian is again diagonal in the $\SIG[3]$ eigenbasis a simpler expression can be obtained directly starting from the diagonal matrix of the corresponding propagator, i.e. \begin{equation} \begin{split}
\big|\ensuremath\M[0.075] \EMB e^{-\imath \MAT P_{33} \pi J t / 2} \RHO\ensuremath\M[0.075][0.1] \EMB e^{\ensuremath\M[0.075]\imath \MAT P_{33} \pi J t / 2}\ensuremath\M[0.075][0.1] \big\rangle \ensuremath\M[0.075][0.5]=\ensuremath\M[0.075][0.5] & \big( \EMB e^{\ensuremath\M[0.075]\imath \MAT P_{33} \pi J t / 2} \otimes \EMB e^{-\imath \MAT P_{33} \pi J t / 2} \big)\ensuremath\M[0.075][0.1] \KET{\RHO} \ensuremath\M[0.075] =\ensuremath\M[0.075][0.5] & \DMAT\big(\ensuremath\M[0.075][0.1] \KET{\MAT J(t)} \big) \KET{\RHO} \ensuremath\M[0.075][0.5]\equiv\ensuremath\M[0.075][0.5] \EMB{\ALG J}(t)\ensuremath\M[0.075] \KET{\RHO} ~, \end{split} \end{equation} where \begin{equation} \MAT J(t) ~\equiv~ \begin{bmatrix} 1 & e^{\ensuremath\M[0.075]\imath \pi J t} & e^{\ensuremath\M[0.075]\imath \pi J t} & 1 \ensuremath\M[0.075] e^{-\imath \pi J t} & 1 & 1 & e^{-\imath \pi J t} \ensuremath\M[0.075] e^{-\imath \pi J t} & 1 & 1 & e^{-\imath \pi J t} \ensuremath\M[0.075] 1 & e^{\ensuremath\M[0.075]\imath \pi J t} & e^{\ensuremath\M[0.075]\imath \pi J t} & 1 \end{bmatrix} . \end{equation}
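Since the propagator here is diagonal, its conjugation action multiplies each entry of the density matrix by a pure phase, which is exactly a Hadamard product with a rank-one matrix of phases (the role played by $\MAT J(t)$, up to the ordering fixed by the columnization convention). A NumPy sketch of this fact, with an arbitrary Hermitian test matrix and a placeholder value of $Jt$:

```python
import numpy as np

rng = np.random.default_rng(2)
rho = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
rho = rho + rho.conj().T              # Hermitian test matrix

d = np.array([1, -1, -1, 1])          # diagonal of P_33 = sigma_z kron sigma_z
phi = 0.5 * np.pi * 0.37              # pi*J*t/2 for a placeholder J*t
u = np.exp(-1j * phi * d)             # diagonal of the propagator

# conjugation by a diagonal unitary == Hadamard product with a phase matrix
lhs = np.diag(u) @ rho @ np.diag(u).conj().T
phase = np.outer(u, u.conj())         # entries exp(-i*phi*(d_j - d_k))
assert np.allclose(lhs, phase * rho)
```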
On transforming this into the real domain via Eq.~(\ref{eq:supopstfm}), we find that $\EMB{\ALG J}$ has been converted into the sum of the diagonal matrix \begin{equation} \EMB{\ALG C}(t) ~\equiv~ \DMAT\ensuremath\M[0.075][-0.25]\left(\ensuremath\M[0.075][0.25] \begin{picture}(1,50)(0,50) \thicklines \put(0,10){\line(0,1){84}} \end{picture} \begin{bmatrix} 1 & \cos( \pi J t) & \cos( \pi J t) & 1 \ensuremath\M[0.075] \cos( \pi J t) & 1 & 1 & \cos( \pi J t) \ensuremath\M[0.075] \cos( \pi J t) & 1 & 1 & \cos( \pi J t) \ensuremath\M[0.075] 1 & \cos( \pi J t) & \cos( \pi J t) & 1 \end{bmatrix} \begin{picture}(10,50)(0,39) \thicklines \put(0,0){\line(1,6){7}} \put(0,84){\line(1,-6){7}} \end{picture} \ensuremath\M[0.075][-0.25]\right) ~, \end{equation} and the anti-diagonal matrix \begin{equation} \MAT P_{15,0}\ensuremath\M[0.075][0.25] \EMB{\ALG S}(t)\ensuremath\M[0.075][0.1] ~\equiv~ \DMAT\ensuremath\M[0.075][-0.25]\left(\ensuremath\M[0.075][0.25] \begin{picture}(1,50)(0,50) \thicklines \put(0,10){\line(0,1){84}} \end{picture} \begin{bmatrix} 0 & \sin( \pi J t) & \sin( \pi J t) & 0 \ensuremath\M[0.075] \sin( \pi J t) & 0 & 0 & -\sin( \pi J t) \ensuremath\M[0.075] \sin( \pi J t) & 0 & 0 & -\sin( \pi J t) \ensuremath\M[0.075] 0 & -\sin( \pi J t) & -\sin( \pi J t) & 0 \end{bmatrix} \begin{picture}(10,50)(0,39) \thicklines \put(0,0){\line(1,6){7}} \put(0,84){\line(1,-6){7}} \end{picture} \ensuremath\M[0.075][-0.25]\right) ~, \end{equation} where the (self-inverse) left factor of $\MAT P_{15,0} = \SIG[1]^{\otimes4}$ simply reverses the order of the rows. The nonzero entries of the Choi matrix of $\EMB{\ALG C}(t) + \EMB{\ALG S}(t)$ turn out to comprise two $4\times4$ blocks along the diagonal, which are exactly the two matrices above, i.e. \begin{equation} \begin{split}
\EMB{\ALG C}(t) ~=~ & \DMAT\big( \big| \EMB{\ALG E}_{\EMB{\ALG C}\,} \EMB{Choi}( \EMB{\ALG C}(t) ) \EMB{\ALG E}_{\EMB{\ALG C}\,}^\top \big\rangle \big) \\ \text{and}\quad
\MAT P_{15,0}\, \EMB{\ALG S}(t) ~=~ & \DMAT\big( \big| \EMB{\ALG E}_{\EMB{\ALG S}\,} \EMB{Choi}( \EMB{\ALG S}(t) ) \EMB{\ALG E}_{\EMB{\ALG S}\,}^\top \big\rangle \big) ~, \end{split} \end{equation} wherein \begin{equation} \EMB{\ALG E}_{\EMB{\ALG C}} ~\equiv~ \sum_{i=0}^3 \MAT e_i (\MAT e_i \otimes \MAT e_i)^\top \qquad\text{and}\qquad \EMB{\ALG E}_{\EMB{\ALG S}} ~\equiv~ \sum_{i=0}^3 \MAT e_i\, (\MAT e_i \otimes \MAT e_{3-i})^\top \end{equation} project out the rows / columns of their respective blocks.
Thus we can obtain the desired operator sum representation by computing the eigenvalues and eigenvectors of the $4\times4$ symmetric matrices \begin{equation} \MAT C(t) ~\equiv~ \EMB{\ALG E}_{\EMB{\ALG C}\ensuremath\M[0.075]} \EMB{Choi}( \EMB{\ALG C}(t) ) \EMB{\ALG E}_{\EMB{\ALG C}\ensuremath\M[0.075]}^\top \quad\text{and}\quad \MAT S(t) ~\equiv~ \EMB{\ALG E}_{\EMB{\ALG S}\ensuremath\M[0.075]} \EMB{Choi}( \EMB{\ALG S}(t) ) \EMB{\ALG E}_{\EMB{\ALG S}\ensuremath\M[0.075]}^\top ~, \end{equation} letting the operators' matrices be the diagonal / anti-diagonal matrices formed from the entries of these eigenvectors, and multiplying each term in the sum by the corresponding eigenvalue. The results are \begin{equation}
\EMB{\ALG C}(t)\, \KET{\SIG} ~=~ \Big|\, \HALF \big( 1 + \cos(\pi J t) \big)\, \SIG ~+~ \HALF \big( 1 - \cos(\pi J t) \big)\, \MAT P_{33}\, \SIG\, \MAT P_{33} \,\Big\rangle \end{equation} and \begin{equation} \begin{split}
\EMB{\ALG S}(t)\, \KET{\SIG} ~=~ \HALF \sin(\pi J t)\, \Big|\, \MAT P_{30}\, \DMAT\big( [\,1,\,1,\,1,-1\,] \big)\, \SIG\, \DMAT\big( [\,1,\,1,\,1,-1\,] \big)\, \MAT P_{30} & ~\cdots \\ \cdots~ -~ \MAT P_{30}\, \DMAT\big( [-1,\,1,\,1,\,1\,] \big)\, \SIG\, \DMAT\big( [-1,\,1,\,1,\,1\,] \big)\, \MAT P_{30} \,\Big\rangle & ~. \end{split} \end{equation} By using Eq.~(\ref{eq:wellknown}) to replace these operator sums by Hadamard products and taking advantage of the symmetry of $\MAT S(t)$, however, we can obtain an even simpler expression, namely \begin{equation} \label{eq:simpler}
\big( \EMB{\ALG C}(t) + \EMB{\ALG S}(t) \big)\, \KET{\SIG} ~=~ \big| \MAT C(t) \odot \SIG \,-\, \MAT S(t) \odot (\MAT P_{30}\, \SIG\, \MAT P_{30}) \big\rangle ~. \end{equation}
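As an aside, the identity behind Eq.~(\ref{eq:wellknown}), namely that a diagonal superoperator acts on a vectorized matrix as a Hadamard product, is easy to check numerically. The following Python sketch is illustrative only; the column-stacking convention for the ket is our assumption, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
S = rng.standard_normal((4, 4))

# Column-stacking "ket" |M> of a matrix M (an assumed vec convention).
vec = lambda M: M.flatten(order="F")

# A diagonal superoperator built from |A>, applied to |S| ...
lhs = np.diag(vec(A)) @ vec(S)
# ... equals the vectorized Hadamard (entrywise) product A (.) S.
rhs = vec(A * S)

assert np.allclose(lhs, rhs)
```

The identity holds entrywise, so it is independent of whether a row- or column-stacking vec convention is used, as long as the same convention is applied throughout.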
Finally, we show how one can also use Hadamard products with the real density matrix to describe simple relaxation processes, in a manner similar to that described in \citet{HaShViCo:01} for the usual Hermitian density matrix. For a single qubit undergoing $T_1$ (dissipation) and $T_2$ (decoherence) relaxation, the time derivative is given by: \begin{equation} \partial_t\, \SIG(t) ~=~ -\MAT R \odot \SIG(t) ~\equiv~ -\begin{bmatrix} 0 & 1/T_2 \\ 1/T_2 & 1/T_1 \end{bmatrix} \odot \begin{bmatrix} \text{\small$1$} & \sigma_{01}(t) \\ \sigma_{10}(t) & \sigma_{11}(t) \end{bmatrix} ~. \end{equation} Assuming that these relaxation processes are uncorrelated, this can immediately be extended to any number of qubits using the fact that the Hadamard product satisfies the mixed product formula with the Kronecker product (Eq.~(\ref{eq:mixed})). In the case of two qubits relaxing with Hadamard relaxation matrices $\MAT R^1$, $\MAT R^2$, for example, we obtain \begin{equation} \begin{split} \partial_t\, \SIG(t) ~=~ & -\big( \MAT R^1 \otimes (\MAT{11}^\top) + (\MAT{11}^\top) \otimes \MAT R^2 \big) \odot \SIG(t) \\ =~ & -\begin{bmatrix} 0 & 1/T_2^2 & 1/T_2^1 & 1/T_2^1{+}1/T_2^2 \\ 1/T_2^2 & 1/T_1^2 & 1/T_2^1{+}1/T_2^2 & 1/T_2^1{+}1/T_1^2 \\ 1/T_2^1 & 1/T_2^1{+}1/T_2^2 & 1/T_1^1 & 1/T_1^1{+}1/T_2^2 \\ 1/T_2^1{+}1/T_2^2 & 1/T_2^1{+}1/T_1^2 & 1/T_1^1{+}1/T_2^2 & 1/T_1^1{+}1/T_1^2 \end{bmatrix} \odot \SIG(t) ~, \end{split} \end{equation} where $\MAT 1$ is a $2\times1$ vector of $1$'s.
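A numerical sketch of the Kronecker construction above, with illustrative (hypothetical) relaxation times for the two qubits:

```python
import numpy as np

# Single-qubit Hadamard relaxation matrices for two qubits.
# The relaxation times below are illustrative values, not from the text.
T1a, T2a = 1.0, 0.5   # qubit 1
T1b, T2b = 2.0, 0.8   # qubit 2
Ra = np.array([[0.0, 1/T2a], [1/T2a, 1/T1a]])
Rb = np.array([[0.0, 1/T2b], [1/T2b, 1/T1b]])

ones = np.ones((2, 2))  # 11^T for a 2x1 vector of ones

# Uncorrelated two-qubit rate matrix: R = R^1 (x) 11^T + 11^T (x) R^2
R = np.kron(Ra, ones) + np.kron(ones, Rb)

# Spot-check entries against the 4x4 matrix in the text.
assert R[0, 0] == 0.0
assert np.isclose(R[0, 1], 1/T2b)
assert np.isclose(R[0, 3], 1/T2a + 1/T2b)
assert np.isclose(R[3, 3], 1/T1a + 1/T1b)
```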
The fact that uncorrelated $T_1$ as well as $T_2$ relaxation can be extended so easily to multiple spins in this way is actually a significant advantage of the real density matrix over the Hermitian, since in the latter case the diagonal terms are mixtures of terms decaying at differing rates, substantially complicating their treatment via Hadamard products \cite{HaShViCo:01}.
When correlations are present, however, these advantages are largely lost, since then the off-diagonal entries of the real density matrix consist of mixtures of terms with differing decay rates (fortunately, $T_1$ relaxation is usually largely uncorrelated \cite{ErnBodWok:87}). Let us work through the case of two qubits in detail, assuming for simplicity that the $T_2$ relaxation processes at the two qubits are totally correlated and have the same rate $1/T_2$, as for example in an NMR gradient-diffusion experiment \cite{HaShViCo:01}. In this case the Hadamard relaxation matrix for the Hermitian density matrix has the form \begin{equation} \MAT R ~=~ \frac1{T_2} \begin{bmatrix} ~0~&~1~&~1~&~4~\\ ~1~&~0~&~0~&~1~\\ ~1~&~0~&~0~&~1~\\ ~4~&~1~&~1~&~0~ \end{bmatrix} ~, \end{equation} and the corresponding $16\times16$ diagonal relaxation superoperator \begin{equation} \EMB{\ALG R} ~=~ \DMAT\big( \KET{\MAT R} \big) \end{equation} is easily exponentiated into a diagonal matrix of survival probabilities for the entries of the (traceless part of the) Hermitian density matrix. In this case, however, it turns out to be almost as easy, but more revealing, to convert $\EMB{\ALG R}$ into the real domain and perform the integration there.
The result of the first step is \begin{equation} \EMB{\ALG R} ~\stackrel{\ALG U}{\longleftrightarrow}~ \frac1{T_2} \left[ \begin{smallmatrix}
0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\[0.25ex]
0&1&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\[0.25ex]
0&0&1&0&0&0&0&0&0&0&0&0&0&0&0&0\\[0.25ex]
0&0&0&2&0&0&0&0&0&0&0&0&-2&0&0&0\\[0.25ex]
0&0&0&0&1&0&0&0&0&0&0&0&0&0&0&0\\[0.25ex]
0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\[0.25ex]
0&0&0&0&0&0&2&0&0&2&0&0&0&0&0&0\\[0.25ex]
0&0&0&0&0&0&0&1&0&0&0&0&0&0&0&0\\[0.25ex]
0&0&0&0&0&0&0&0&1&0&0&0&0&0&0&0\\[0.25ex]
0&0&0&0&0&0&2&0&0&2&0&0&0&0&0&0\\[0.25ex]
0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\[0.25ex]
0&0&0&0&0&0&0&0&0&0&0&1&0&0&0&0\\[0.25ex]
0&0&0&-2&0&0&0&0&0&0&0&0&2&0&0&0\\[0.25ex]
0&0&0&0&0&0&0&0&0&0&0&0&0&1&0&0\\[0.25ex]
0&0&0&0&0&0&0&0&0&0&0&0&0&0&1&0\\[0.25ex]
0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0
\end{smallmatrix} \right] ~, \end{equation} which again has nonzero entries only on its diagonal and its anti-diagonal. Representing the action of this matrix on $\KET{\SIG}$ as a sum of a Hadamard product and a Hadamard product coupled with row/column inversion as in Eq.~(\ref{eq:simpler}), we obtain the real equation of motion: \begin{equation} -T_2\, \partial_t\, {\SIG}(t) ~=~ \begin{bmatrix} ~0~&~1~&~1~&~2~\\ ~1~&~0~&~2~&~1~\\ ~1~&~2~&~0~&~1~\\ ~2~&~1~&~1~&~0~ \end{bmatrix} \odot \SIG(t) ~+~ \begin{bmatrix} ~0~&~0~&~0~&-2~\\ ~0~&~0~&~2~&~0~\\ ~0~&~2~&~0~&~0~\\ -2~&~0~&~0~&~0~ \end{bmatrix} \odot \big( \MAT P_{30}\, \SIG(t)\, \MAT P_{30} \big) ~. \end{equation} It may readily be verified that the operations on $\SIG$ which occur in the two terms of this expression commute, and hence this equation can be integrated by exponentiating them separately.
With the first term this leads to a simple Hadamard (entrywise) exponential \cite{HaShViCo:01}, namely \begin{equation} \MAT D(t) ~\equiv~ \MAT{Exp}_{\odot} \left( -\frac{t}{T_2} \begin{bmatrix} ~0~&~1~&~1~&~2~\\ ~1~&~0~&~2~&~1~\\ ~1~&~2~&~0~&~1~\\ ~2~&~1~&~1~&~0~ \end{bmatrix} \right) ~=~ \begin{bmatrix} ~1~&e^{-t/T_2}&e^{-t/T_2}&e^{-2t/T_2} \\ e^{-t/T_2}&~1~&e^{-2t/T_2}&e^{-t/T_2}\\ e^{-t/T_2}&e^{-2t/T_2}&~1~&e^{-t/T_2}\\ e^{-2t/T_2}&e^{-t/T_2}&e^{-t/T_2}&~1~ \end{bmatrix} . \end{equation} To exponentiate the second term, we note that the Hadamard product is with $2\, \MAT P_{03} = 2\, \SIG[2]\otimes\SIG[2]$ and resort briefly to superoperators in order to simplify the exponential as follows: \begin{equation} \begin{split} & \MAT{Exp}\big( -(2t/T_2)\, \DMAT( \KET{\MAT P_{03}} )\, (\MAT P_{30} \otimes \MAT P_{30}) \big) \\[0.5ex] =~ & {\sum}_{k=0}^\infty\, \big( -(2t/T_2)\, \DMAT( \KET{\MAT P_{03}} )\, (\MAT P_{30} \otimes \MAT P_{30}) \big)^k \big/\, k! \\[0.5ex] =~ & \MAT P_{00} \otimes \MAT P_{00} ~+~ \DMAT( \KET{\MAT P_{30}} ) \sum_{k=1}^\infty \frac{(2t/T_2)^{2k}}{(2k)!} ~-~ \DMAT( \KET{\MAT P_{03}} )\, (\MAT P_{30} \otimes \MAT P_{30}) \sum_{k=0}^\infty \frac{(2t/T_2)^{2k+1}}{(2k+1)!} \\[0.5ex] =~ & \MAT P_{00} \otimes \MAT P_{00} ~+~ \DMAT( \KET{\MAT P_{30}} )\, \big( \cosh( 2t/T_2 ) - 1 \big) ~-~ \DMAT( \KET{\MAT P_{03}} )\, (\MAT P_{30} \otimes \MAT P_{30})\, \sinh( 2t/T_2 ) ~. \end{split} \end{equation} This derivation relies on the facts that $\MAT P_{30} \otimes \MAT P_{30}$ squares to the identity, $\DMAT(\KET{\MAT P_{03}})$ squares to $\DMAT(\KET{\MAT P_{30}})$, and each commutes with the other.
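A quick numerical check of the entrywise exponential $\MAT D(t)$ above, using illustrative values of $t$ and $T_2$ (these values are our choice, not the text's):

```python
import numpy as np

t, T2 = 0.3, 1.0  # illustrative values
A = np.array([[0, 1, 1, 2],
              [1, 0, 2, 1],
              [1, 2, 0, 1],
              [2, 1, 1, 0]], dtype=float)

# Hadamard (entrywise) exponential: exponentiate each entry separately.
D = np.exp(-t / T2 * A)

assert np.isclose(D[0, 0], 1.0)                   # zero-rate entries stay 1
assert np.isclose(D[0, 1], np.exp(-t / T2))       # rate-1 entries
assert np.isclose(D[0, 3], np.exp(-2 * t / T2))   # rate-2 entries
```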
Going back to operator sum notation and abbreviating $\SIG \equiv \SIG(0)$, we thus obtain in all: \begin{align} \SIG(t) ~=~ & \MAT D(t) \odot \big( \SIG \,+\, (\cosh(2t/T_2) - 1)\, \MAT P_{30} \odot \SIG \,-\, \sinh(2t/T_2)\, \MAT P_{03} \odot ( \MAT P_{30}\, \SIG\, \MAT P_{30} ) \big) \notag \\[1ex] =~ & \begin{bmatrix} \sigma_{00} & \sigma_{01}\, e^{-t/T_2} & \sigma_{02}\, e^{-t/T_2} & \Delta(t;\, \sigma_{03},\, \sigma_{30}) \\[.5ex] \sigma_{10}\, e^{-t/T_2} & \sigma_{11} & \Delta(t;\, \sigma_{12},\, -\sigma_{21}) & \sigma_{13}\, e^{-t/T_2} \\[.5ex] \sigma_{20}\, e^{-t/T_2} & \Delta(t;\, \sigma_{21},\, -\sigma_{12}) & \sigma_{22} & \sigma_{23}\, e^{-t/T_2} \\[.5ex] \Delta(t;\, \sigma_{30},\, \sigma_{03}) & \sigma_{31}\, e^{-t/T_2} & \sigma_{32}\, e^{-t/T_2} & \sigma_{33} \end{bmatrix} , \end{align} where $\Delta(t;\, x,\, y) \equiv \big( \cosh(2t/T_2)\, x + \sinh(2t/T_2)\, y \big)\, \exp(-2t/T_2)$. From this we see that the anti-diagonal entries decohere into mixtures with their symmetrically placed opposites in the real density matrix.
These mixtures correspond to the real and imaginary parts of the $\rho_{12} = \bar\rho_{21}$ entries in the Hermitian density matrix, otherwise known as \emph{zero-quantum coherences}, which are immune to correlated noise \cite{ErnBodWok:87}.
\section{Epilogue} We have seen that one can, with some effort, do pretty much everything with the real density matrix that one could with the usual Hermitian one. This may be useful as a didactic device, or in calculations with experimental (e.g.~NMR) data where it is desirable to keep the experimentally measured values of the observables in sight at all times. This work is also a good demonstration of the power of Choi matrix decompositions as a means of finding operator sum representations of linear superoperators \cite{Havel!QPT:03}.
Although the Hermitian density matrix is expected to be better suited, by and large, for the purposes of numerical calculations, it is worth emphasizing that for theoretical and/or expository purposes the compact but lucid notation of geometric algebra offers significant advantages over any matrix formalism. In this regard, we point out that \citet{HavDorFur:03} have recently introduced a \emph{parity-even} (rather than reverse-even, aka Hermitian) multi-qubit density operator via geometric algebra, which generalizes the \emph{multi-particle space-time algebra} introduced for isolated systems to open multi-qubit systems. It is our hope that in due course such a geometric formulation may provide new insights into some of the conceptual problems that underlie quantum physics.
The existence of the real density matrix is further of some theoretical interest, since it provides a coordinate ring within which one can study the issues of entanglement and decoherence via invariant theoretic methods \cite{GraRotBet:98, Makhlin:02}. There are intimate connections between invariant theory and geometric algebra, and it is often easier to automate symbolic computations in an invariant ring than it is at the more abstract level of geometric algebra \cite{Sturmfels:93, Havel:97, Havel:01}.
\begin{acknowledgments}
\noindent The author thanks Nicolas Boulant, David Cory and Chris Doran for useful discussions. This work was supported by ARO grants DAAD19-01-1-0519, DAAD19-01-1-0678, by DARPA grant MDA972-01-1-0003, and by a grant from the Cambridge-MIT Institute, Ltd. \end{acknowledgments}
\end{document}
\begin{document}
\title{Copula-Based Deep Survival Models for Dependent Censoring} \author[1,2]{Ali Hossein Gharari Foomani$^{*,}$} \author[3,5]{Michael Cooper$^{*,\dagger,}$} \author[1,2]{Russell Greiner} \author[3,4,5]{Rahul G. Krishnan} \affil[1]{ Department of Computing Science, University of Alberta } \affil[2]{ Alberta Machine Intelligence Institute } \affil[3]{ Department of Computer Science, University of Toronto } \affil[4]{ Department of Laboratory Medicine and Pathobiology, University of Toronto } \affil[5]{ Vector Institute }
\maketitle \begin{NoHyper} \def\thefootnote{*}\footnotetext{These authors contributed equally to this work.} \def\thefootnote{$\dagger$}\footnotetext{Correspondence to \href{mailto:[email protected]}{[email protected]}.} \def\thefootnote{\arabic{footnote}} \end{NoHyper}
\begin{abstract} A survival dataset describes a set of instances ({\em e.g.},\ patients) and provides, for each, either the time until an event ({\em e.g.},\ death), or the censoring time ({\em e.g.},\ when lost to follow-up -- which is a lower bound on the time until the event). We consider the challenge of survival prediction: learning, from such data, a predictive model that can produce an individual survival distribution for a novel instance. Many contemporary methods of survival prediction implicitly assume that the event and censoring distributions are independent conditional on the instance's covariates -- a strong assumption that is difficult to verify (as we observe only one outcome for each instance) and which can induce significant bias when it does not hold. This paper presents a parametric model of survival that extends modern non-linear survival analysis by relaxing the assumption of conditional independence. On synthetic and semi-synthetic data, our approach significantly improves estimates of survival distributions compared to standard approaches that assume conditional independence in the data.\footnotemark \end{abstract} \footnotetext{Code available at \href{https://github.com/rgklab/copula_based_deep_survival}{this GitHub repository}.}
\begin{figure*}\label{fig:survival-graphs}
\end{figure*}
\addtocontents{toc}{\protect\setcounter{tocdepth}{0}} \section{Introduction} Clinical and epidemiological investigations often want to predict the time until the onset of an event of interest. As examples, a clinical trial of a therapeutic cancer regimen may compare the time-to-mortality in patients who received an experimental therapy against the times of the patients in the control arm~\citep{emmerson2021understanding, zhang2011antiangiogenic}; and a study developing a clinical risk score may want to regress the time until patient mortality onto covariates of interest, in order to leverage the learned model parameters in a predictive risk algorithm~\citep{jia2019cox}.
In such time-to-event prediction tasks, it is common to only have a lower bound on the time-to-event for some instances in the study cohort. Here, we focus on \textit{right censored} instances -- {\em e.g.},\ patients who left the study prior to their time of death (loss-to-follow-up), or patients who did not die prior to the conclusion of the study (administrative censoring)~\citep{leung1997censoring, lesko2018censor}. \textit{Survival prediction} refers to the development of statistical models that support time-to-event prediction when some training instances are censored. Rather than discarding such censored instances, methods in survival analysis instead leverage the censoring time as a \textit{lower bound} on that individual's time-to-event~\citep{kalbfleisch2011statistical}.
Let $X^{(i)} \in \mathcal{X}$ refer to the covariates of the $i^{th}$ patient, and let $T_{\text{obs}}^{(i)} \in \mathbb{R}_+$ refer to their time of last observation, taken to be the minimum of the event time $T_E^{(i)} \in \mathbb{R}_+$ and censorship time $T_C^{(i)} \in \mathbb{R}_+$. Because a patient can be either censored or uncensored, but not both, we only observe one of $\{\,T_E, \, T_C\,\}$ for each patient. A common assumption in survival analysis is \textit{conditionally independent censoring}~\citep{kalbfleisch2011statistical}: \begin{equation}
T_E\ \perp\ T_C \ |\ X \label{eq:conditional_indep} \end{equation} {\em i.e.},\ once $X$ is known, knowing either the event or censoring time provides no additional information about the other; see Figure~\hbox{\ref{fig:survival-graphs}}(left). This assumption does not always hold: Figure~\ref{fig:survival-graphs} shows it is violated when the event time affects the censoring time, or in the presence of unobserved confounding variables. When Equation~\ref{eq:conditional_indep} does not hold, we say that the data features \textit{dependent censorship}, a common feature of survival data that is unaccounted for, or assumed to be absent, in modern survival prediction.
This is not merely a theoretical concern. Consider a study assessing the survival outcomes of a cohort of chronic disease patients treated with a certain type of medication. The study collects basic demographic and medical information about each patient, their time-of-death or censorship, and an indicator expressing whether the patient died or was censored.
Now imagine that sicker patients often remove themselves from the study in order to explore alternative treatment options. This presents a form of selection bias: a censored patient is likely to be sicker than an uncensored counterpart, and therefore to have an earlier time-of-death, so a statistical model that does not account for this will likely over-estimate each patient's survival time, with implications for assessing the safety and utility of the medication in question. This motivating example, characterized by the middle graph in Figure \ref{fig:survival-graphs}, presents a scenario that contemporary approaches to survival regression are often poorly equipped to accommodate.
Note that it is typically impossible to verify such a dependency in practice, because we observe only one outcome (either event or censorship) per instance, never both. Moreover, the dependency between $T_E$ and $T_C$ can be quite subtle if it arises through unobserved confounding variables; the effect of such variables $U$ is highlighted in Figure~\mbox{\ref{fig:survival-graphs}} (right).
Relaxing the conditional independence assumption of Equation~\ref{eq:conditional_indep} has been previously studied. However, existing approaches either do not permit the incorporation of covariates ({\em e.g.},\ \cite{zheng1995estimates}, \cite{rivest2001martingale}, \cite{de2013generalized}), or make strict assumptions over the form of the marginal distributions of $f_{T_E}$ and $f_{T_C}$ ({\em e.g.},\ \cite{escarela2003fitting}). These limitations mean it is difficult to apply these ideas to survival times modeled via nonlinear functions (such as neural networks) that are increasingly being used. In this vein, our work makes the following contributions: \begin{enumerate}[wide, labelwidth=!, labelindent=0pt]
\item We show how to leverage copulas to correct for dependent censorship in neural network based models of survival outcomes. We present a parametric proportional hazards model that leverages neural networks to relax assumptions on the distributional form of the marginal event and censoring functions, and employs a copula to model the dependence between event and censoring. We also present a method to jointly learn the model and copula parameters from right-censored survival data. To our knowledge, this work represents the first neural network-based model of survival analysis to account for dependent censoring.
\item We demonstrate that conventional survival metrics, like concordance, are biased under dependent censoring, and we highlight the general impossibility of unbiased evaluation in this regime.
\item It is statistically impossible to determine whether $T_E$ and $T_C$ are independent or dependent from data alone. We show how the \textit{choice of copula can represent an assumption} (prescribed via domain knowledge) over the relationship between the event and censoring distributions. Our paper cleanly characterizes the dependence assumptions underlying two common families of copula (\mbox{the Clayton and Frank families}), and provides guidance to practitioners in choosing a copula to meet their needs. The incorporation of the copula enables practitioners to improve the resulting model on a variety of different benchmarks. \end{enumerate}
\section{Background and Preliminaries} For notation, we will use \color{frenchblue}$T_E$ \color{black} and \color{mediumred-violet}$T_C$ \color{black} where appropriate to refer (respectively) to the random variables representing time-of-\color{frenchblue}event \color{black} and \color{mediumred-violet}censorship\color{black}. When a time could refer to either, we will instead simply use $T$. Realizations of each random variable, such as the time-of-event for a specific patient, will be denoted with a superscript ({\em e.g.},\ $T_E^{(i)}$).
\subsection{Survival Analysis Preliminaries}
Our work will use the following elementary quantities defined by survival analysis: $f_{T|X}$, $F_{T|X}$, representing the conditional density and cumulative distribution functions over the time of an outcome of interest ({\em e.g.},\ event or censorship). Then, we have the following definitions.
\begin{definition}[Survival Function] The \textit{survival function} \begin{equation}
S_{T|X}(t|X)\ \triangleq\ \Cprob{T > t}{X}\ =\
1-F_{T|X}(\,t\,|\,X\,) \end{equation} represents the likelihood that event (or censorship) will take place after a specified time, $t$. \end{definition}
\begin{definition}[Hazard Function] The \textit{hazard function}, \begin{equation}
\small h_{T|X}(t|X) \triangleq \lim_{\epsilon \rightarrow 0}
\Cprob{T \in [t, t + \epsilon) }{ T \geq t, X} \ =\ \frac{f_{T|X}(t|X)}{S_{T|X}(t|X)}
\label{eq:hazard_function} \end{equation} represents the probability that the event will take place within an infinitesimal window in the future, given that it has not yet occurred. \end{definition}
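For intuition, the relation $h = f/S$ in Equation~\ref{eq:hazard_function} can be checked for the exponential distribution, whose hazard is constant (an illustrative example, not taken from the paper):

```python
import numpy as np

rate = 0.5  # illustrative hazard rate lambda
t = np.linspace(0.1, 5.0, 50)

# For T ~ Exp(lambda): f(t) = lambda * exp(-lambda t), S(t) = exp(-lambda t)
f = rate * np.exp(-rate * t)   # density f_{T|X}
S = np.exp(-rate * t)          # survival function S_{T|X} = 1 - F_{T|X}
h = f / S                      # hazard h_{T|X} = f / S

# The exponential distribution has constant hazard lambda at every t.
assert np.allclose(h, rate)
```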
\begin{definition}[Likelihood Function] The general likelihood function for survival data $\mathcal{D}\,=\,\{ (X^{(i)}, T_\text{obs}^{(i)}, \delta^{(i)}) \}_{i=1}^N$ is the following\footnote{The standard presentation of the survival likelihood is the survival likelihood under conditional independence (Equation \mbox{\ref{eq:independence_likelihood}}), which represents a special case of Equation \mbox{\ref{eq:generallikelihood}}. For a derivation of Equation \mbox{\ref{eq:generallikelihood}}, refer to Appendix \ref{appx:the_right_censored_likelihood}.} \begin{align}\footnotesize
\mathcal{L}(\mathcal{D}) = \prod_{i=1}^N &\color{frenchblue}{\underbrace{\color{black}\left[\int_{T^{(i)}_\text{obs}}^\infty f_{T_E, T_C | X}(T^{(i)}_{\text{obs}},\, t_c\, |\, X^{(i)})\,dt_c\right]\color{frenchblue}}_{\Pr\left(T_E = T^{(i)}_{\text{obs}},\, T_C > T^{(i)}_{\text{obs}}\, |\, X^{(i)}\right)}}^{\color{black}\delta^{(i)}}\label{eq:survivallikelihood}\\&\color{mediumred-violet}{\underbrace{\color{black}\left[\int_{T^{(i)}_{\text{obs}}}^\infty f_{T_E, T_C | X}(t_e,\, T^{(i)}_{\text{obs}}\, |\, X^{(i)})\,dt_e\right]}_{\color{mediumred-violet} \Pr\left(T_C = T^{(i)}_{\text{obs}},\, T_E > T^{(i)}_{\text{obs}}\, |\, X^{(i)}\,\right)}}^{\color{black}1-\delta^{(i)}} \nonumber \end{align} \label{eq:generallikelihood} \end{definition}
\subsection{Copulas and Sklar's Theorem} \begin{definition}[Copula \citep{nelsen2007introduction}] A copula $C(u_1, ..., u_m) : [0, 1]^m \rightarrow [0, 1]$ is a function with the following properties. \begin{enumerate}[wide, labelwidth=!, labelindent=0pt]
\item \ul{Groundedness}: if there exists an $i \in \{1, ..., m\}$ such that $u_i = 0$, then $C(u_1, ..., u_m) = 0$.
\item \ul{Uniform Margins}: for all $i \in \{1, ..., m\}$, if $\forall j:\ j\neq i \Rightarrow u_{j} = 1$, then $C(u_1, ..., u_m) = u_i$.
\item \ul{$m$-Increasingness}: for all $u = (u_1, ..., u_m)$, $v = (v_1, ..., v_m)$ where $u_i < v_i$ for all $i = 1, ..., m$, the following holds:
\begin{equation*}
\sum_{l \in \{0, 1\}^m} (-1)^{l_1 + ... + l_m} C(u_1^{l_1}v_1^{1-l_1}, ..., u_m^{l_m}v_m^{1-l_m}) \geq 0
\end{equation*} \end{enumerate} \label{def:copula} \end{definition}
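As a concrete (illustrative) check of Definition \ref{def:copula}, the sketch below verifies the three properties numerically for the bivariate independence copula $C(u,v) = uv$; all function names are our own.

```python
import itertools

# Independence copula in two dimensions: C(u, v) = u * v.
def C(u, v):
    return u * v

# Groundedness: C vanishes whenever any argument is 0.
assert C(0.0, 0.7) == 0.0 and C(0.3, 0.0) == 0.0

# Uniform margins: fixing all other arguments at 1 recovers the remaining one.
assert C(0.42, 1.0) == 0.42 and C(1.0, 0.9) == 0.9

# 2-increasingness: the signed sum over the corners of a box
# [u1, v1] x [u2, v2] (the C-volume of the box) is non-negative.
def c_volume(u, v):  # u = (u1, u2), v = (v1, v2) with u_i < v_i
    total = 0.0
    for l1, l2 in itertools.product([0, 1], repeat=2):
        corner = (u[0] if l1 else v[0], u[1] if l2 else v[1])
        total += (-1) ** (l1 + l2) * C(*corner)
    return total

assert c_volume((0.2, 0.1), (0.8, 0.9)) >= 0.0
```

For the independence copula, the C-volume of a box reduces to $(v_1-u_1)(v_2-u_2)$, which is manifestly non-negative.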
The utility of copulas as probabilistic objects stems primarily from Sklar's Theorem \citep{sklar1959fonctions}, which demonstrates that any joint cumulative distribution function can be written as a copula applied to its marginal cumulative distribution functions.
In this work, we will place our emphasis on those copulas that model \textit{joint survival functions}. Such copulas are known as \textit{survival copulas}, and their own version of Sklar's theorem (Equation \ref{eq:sklar-survival}) applies.
\begin{figure}\label{fig:quantile-figure}
\end{figure}
\begin{theorem}[Sklar's Theorem (Survival Copulas) \citep{nelsen2007introduction}] A survival copula\footnote{The copula that relates the joint cumulative distribution $F_{X_1, ..., X_m}$, with the marginal cumulative distribution functions is typically not the same as that which relates the joint survival function $S_{T_1, ..., T_M}$ with the marginal survival functions, though both are valid copulas \citep{nelsen2007introduction}.} is a copula that applies Sklar's Theorem to survival functions, as follows: \begin{equation} \label{eq:sklar-survival}\small S_{T_1, ..., T_m}(t_1,\, \dots\,,\, t_m\,)\ =\ C(\,S_{T_1}(t_1),\, \dots,\, S_{T_m}(t_m)\,) \end{equation} \end{theorem}
A visualization of the way in which a copula induces dependency between $T_E$ and $T_C$ via the quantiles of $S_{T_E | X}$ and $S_{T_C | X}$ is shown in Figure \ref{fig:quantile-figure}.
We will focus on two families of copulas, the Clayton \citep{clayton1978model} and Frank \citep{frank1979simultaneous} families. Within these families, the copula $C_\theta$ is parameterized by a single parameter, $\theta$, interpreted as the degree of dependence between the marginal distributions under Equation \ref{eq:sklar-survival}. A larger value of $\theta$ implies greater dependency between the marginal distributions, and both families of copulas converge to the independence copula as $\theta$ approaches 0. We additionally restrict ourselves to \textit{bivariate} survival copulas, although in principle, these methods could be directly extended to accommodate an arbitrary number of competing events. Such uniparametric copulas provide a parameter-efficient means of modeling the joint survival function: given that survival analysis already provides tools to model the marginal survival functions $S_{T_E}$, $S_{T_C}$, a model that couples these distribution functions via a uniparametric copula $C_\theta$ only requires adding one additional parameter to the model.
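The Clayton and Frank copulas referenced above have simple closed forms, shown in the minimal sketch below (standard formulas; variable names are ours). The assertions illustrate the convergence of both families to the independence copula $C(u,v)=uv$ as $\theta \rightarrow 0$.

```python
import math

def clayton(u, v, theta):
    # Clayton copula (theta > 0): strong lower-tail dependence.
    return (u ** (-theta) + v ** (-theta) - 1.0) ** (-1.0 / theta)

def frank(u, v, theta):
    # Frank copula (theta != 0): dependence spread evenly across quantiles.
    num = (math.exp(-theta * u) - 1.0) * (math.exp(-theta * v) - 1.0)
    return -math.log(1.0 + num / (math.exp(-theta) - 1.0)) / theta

# Both families approach the independence copula C(u, v) = u * v
# as theta approaches 0.
u, v = 0.3, 0.6
for theta in (1e-4, 1e-5):
    assert abs(clayton(u, v, theta) - u * v) < 1e-3
    assert abs(frank(u, v, theta) - u * v) < 1e-3
```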
\section{Related Work} \label{sec:relatedwork} \textbf{Deep Learning in Survival Analysis}: Linear models of survival analysis make the (often unrealistic) assumption that an individual's time-to-event is determined by a linear function of his or her covariates. \cite{faraggi1995neural} presented the first neural-network-based model of survival, by incorporating a neural network into a Cox Proportional Hazards (CoxPH) model \citep{cox1972regression}. Although subsequent experimentation found the Faraggi-Simon model unable to outperform its linear CoxPH counterpart \citep{mariani1997prognostic, xiang2000comparison}, DeepSurv \citep{katzman2018deepsurv} leveraged modern tools from deep learning, such as SELU units \citep{klambauer2017self} and the Adam optimizer \citep{kingma2014adam}, to learn a practical neural network-based CoxPH model that reliably outperformed the linear CoxPH on nonlinear outcome data. Since then, variations of neural network-based models of survival, such as DeepHit \citep{lee2018deephit} (and its extension to time-varying data, Dynamic-DeepHit \citep{lee2019dynamic}), Deep Survival Machines \citep{nagpal2021deep}, SuMo-net \citep{rindt2022survival}, Transformer-based survival models \citep{hu2021transformer, wang2022survtrace}, and methods based on Neural ODEs \citep{tang2022soden} have been introduced to model survival outcomes. Though these models successfully relax assumptions around the functional form of marginal risk, they do not jointly model the event and censoring times, a limitation that prevents them from appropriately accounting for dependent censorship.
DeepSurv has enjoyed enduring success in part due to its broad applicability and strong performance on clinical data ({\em e.g.},\ \cite{kim2019deep, hung2019deep, she2020development}). Therefore, our investigation will focus on relaxing the conditional independence assumption in a parametric proportional hazards model; we leave to future work the relaxation of the conditional independence assumption in other classes of neural network-based survival models.
\textbf{Missing/Censored-Not-At-Random Data and Identification}: Since we do not simultaneously observe $T_E$ and $T_C$, we can treat the problem of survival analysis as one of missing data. The standard taxonomy \citep{rubin1976inference, tsiatis2006semiparametric} of missing data partitions variables into one of three classes: \textit{missing completely at random (MCAR)} where the missingness process is independent of the value of any observed variable, \textit{missing at random (MAR)} where the missingness process may depend on the value of one or more observed covariates, and \textit{missing not at random (MNAR)} where the missingness process may depend on unobserved variables (such as unobserved confounding or self-masking). Similarly, censorship in survival analysis can take place \textit{completely at random (CCAR)}, \textit{at random (CAR)}, or \textit{not at random (CNAR)} \citep{leung1997censoring, lipkovich2016sensitivity}. The conditional independence assumption of Equation \ref{eq:conditional_indep} is equivalent to asserting CAR in the data.
MNAR data, in the general case, is non-identifiable \citep{nabi2020full}; but survival analysis imposes stronger assumptions on the data than general models of missing data, since the observed event time acts as a lower bound for the unobserved event time (in the case of censored data). Therefore, prior work has focused on investigating the scenarios in which model parameters of survival data can be uniquely identified. \cite{tsiatis1975nonidentifiability} established that, in the general case, the joint over $M$ variables, $\Pr(T_1, ..., T_M)$, is not identifiable from observations of the random variable $T = \min\left(T_1, ..., T_M\right)$; although if the joint distribution is defined in terms of a known copula $C$, and the marginals are continuous, then identifiability holds \citep{zheng1996identifiability, carriere1995removing}. \cite{crowder1991identifiability} extended this line of work and showed that even if all the marginal distributions $f_1, ..., f_M$ are known, the joint distribution remains non-identifiable. Research in statistics has since defined tuples of marginals and copulas for which the joint distribution is identifiable. Notably, \cite{schwarz2013identifiability} prove that if the marginals $f_E$ and $f_C$ are known, several sub-classes of Archimedean copulas are identifiable in the bivariate case. \cite{zheng1996identifiability, carriere1995removing} highlight conditions for identifiability when the form and parameter of the copula are known \textit{a priori}. \cite{schwarz2013identifiability} also categorize copulas into sub-classes wherein the ground-truth copula, $C_{\theta^*}$, is identifiable. Our current analysis does not touch upon the identifiability of the joint distribution in the context of neural network-based models of survival outcomes, though the success of our method does highlight this as an important area for future study.
Many machine learning models remain non-identified \citep{bona2021parameter} while remaining useful as predictive and descriptive models. We consider our method a similar approach in this respect.
\textbf{Copula-Based Models of Dependent Censoring}:
Prior literature has leveraged copulas to model the relationship between the event and censoring distributions in order to account for the effect of dependent censoring \citep{emura2018analysis}. To our knowledge, the first such work was that of \cite{zheng1995estimates} and \cite{rivest2001martingale}, whose development of the nonparametric Copula-Graphic Estimator extended the Kaplan-Meier Estimator \citep{kaplan1958nonparametric} to cases where the dependence between $T_E$ and $T_C$ takes the form of an assumed copula (both $C$ and $\theta$ assumed to be known). Though parametric estimators for this problem have been proposed in prior literature, they tend to make strict assumptions over the distributional form of $f_{T|X}$ ({\em e.g.},\ that it is a linear-Weibull function \citep{escarela2003fitting}\footnote{Although Escarela does not directly model dependent censoring, but rather dependent competing events, the approach can be directly extended to this domain.}). Proposed semi-parametric estimators \citep{chen2010semiparametric, emura2017joint, deresa2022copula} suffer from much the same problem, as these approaches assume that the hazard is a linear function of the instance covariates. To our knowledge, no existing copula-based model accommodates more complex relationships between covariates and risk while also accounting for dependent censoring. This is the gap our research aims to fill.
\section{Model and Optimization} We now present our extension of the Weibull CoxPH model~\citep{barrett2014weibull}, and discuss the problem of learning nonlinear models of survival outcomes under dependent censorship. Our approach entails modeling each outcome -- event and censorship -- independently with an extension of the Weibull CoxPH model, and linking them via a copula in the likelihood function during training. Our approach makes the following assumptions.
\begin{assumption}[Known Form of the Copula] \label{assmp:knownform} We assume prior knowledge of the functional form of the copula ({\em e.g.},\ that $C_{\theta^*}$, the copula associated with the data-generating process, is a Clayton copula).\footnote{In some experiments, we weaken this assumption, and we will explicitly note where this is the case.} \end{assumption}
\begin{assumption}[Proportional Hazards \citep{cox1972regression}] \label{assmp:proportionalhazards}
The hazard for each outcome (event/censorship) can be decomposed into some \textit{baseline hazard} $\lambda_0$, dependent only on time, and some \textit{covariate hazard} $g$, dependent only on the covariates $X$. That is, there exists some appropriate $\lambda_0, g$ for which $h_{T|X}(t| X) = \ \lambda_0(t) \,\exp(\,g(X)\,)$. \end{assumption}
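A one-line numerical illustration of Assumption \ref{assmp:proportionalhazards}: under the factorization $h_{T|X}(t|X) = \lambda_0(t)\,\exp(g(X))$, the hazard ratio between any two individuals is constant in time. The baseline hazard and covariate effect below are arbitrary illustrative choices.

```python
import math

# Proportional hazards: h(t|X) = lambda_0(t) * exp(g(X)).
lam0 = lambda t: 0.2 * t          # an illustrative baseline hazard
g = lambda x: 0.8 * x             # an illustrative covariate effect
h = lambda t, x: lam0(t) * math.exp(g(x))

# The hazard ratio between x = 1 and x = 0 does not depend on t.
ratios = [h(t, 1.0) / h(t, 0.0) for t in (0.5, 1.0, 5.0)]
assert all(abs(r - ratios[0]) < 1e-12 for r in ratios)
```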
\subsection{The Weibull CoxPH Model} \label{sec:Weibull-Cox}
Let $\lambda_0(t) = \left(\frac{\nu}{\rho}\right)\left(\frac{t}{\rho}\right)^{\nu-1}$ denote the baseline hazard of the Weibull CoxPH model, and let $g_{\psi}$ denote a neural network with parameters $\psi$ mapping the covariate space $\mathcal{X}$ to the real line. Then, leveraging the proportional hazards assumption, we define our model in terms of its hazard: \begin{equation} \label{eq:model-hazard}
\hat{h}_{T|X}(\,t|X\,)\ =\ \left(\frac{\nu}{\rho}\right)\left(\frac{t}{\rho}\right)^{\nu-1} \exp\left(\,g_\psi(X)\,\right) \end{equation}
Let $\phi = \{\nu, \rho, \psi\}$ denote the complete set of model parameters, and observe that the Weibull CoxPH model is a fully parametric model over these \textit{marginal parameters} $\phi$. By rearranging Equation \ref{eq:model-hazard}, this class of models readily admits $\hat{S}_{T|X}$, the estimated survival function over each outcome, and $\hat{f}_{T|X}$, the corresponding probability density function. These two quantities will allow us to perform maximum likelihood estimation -- their derivations are provided in Appendices \ref{appx:the_survival_function} and \ref{appx:the_density_function}. \begin{align}
\hat{S}_{\,T|X}(\,t|X\,)\ &=\ \exp\left(-\left(\frac{t}{\rho}\right)^\nu \exp\left(\,g_\psi(X)\,\right)\right)\\
\hat{f}_{T|X}(\,t|X\,)\ &=\ \hat{h}_{T|X}(\,t|X\,)\ \hat{S}_{T|X}(\,t|X\,) \end{align}
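The sketch below implements the Weibull CoxPH marginal directly from the hazard in Equation \ref{eq:model-hazard}, with illustrative values standing in for $\nu$, $\rho$, and the network output $g_\psi(X)$; the sanity check confirms that the implied density integrates to one.

```python
import math

# Weibull CoxPH marginal (from the hazard definition); g_x is an
# illustrative stand-in for the network output g_psi(X).
nu, rho, g_x = 4.0, 14.0, 0.5

def hazard(t):
    return (nu / rho) * (t / rho) ** (nu - 1.0) * math.exp(g_x)

def survival(t):
    # S(t) = exp(-H(t)) with cumulative hazard H(t) = (t/rho)^nu * exp(g(X)).
    return math.exp(-((t / rho) ** nu) * math.exp(g_x))

def density(t):
    # f = h * S, as in the text.
    return hazard(t) * survival(t)

# Sanity check: the density integrates to ~1 (midpoint rule on [0, 60],
# beyond which the survival function is numerically zero).
n, T = 20000, 60.0
dt = T / n
total = sum(density((i + 0.5) * dt) for i in range(n)) * dt
assert abs(total - 1.0) < 1e-3
```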
\subsection{Maximum Likelihood Learning Under Dependent Censorship} \label{sec:likelihood}
Let $\mathcal{D} = \{(X^{(i)}, T_{\text{obs}}^{(i)}, \delta^{(i)})\}_{i=1}^N$ represent a dataset comprising $N$ i.i.d. draws from some data-generating distribution. Let $X^{(i)} \in \mathcal{X}$ refer to a set of baseline covariates collected about each individual $i$. Let $T_{\text{obs}}^{(i)} \in \mathbb{R}_+$ refer to their time of last observation, taken to be the minimum of latent variables $T_E^{(i)} \in \mathbb{R}_+$, $T_C^{(i)} \in \mathbb{R}_+$, representing the event and censoring times, respectively. Finally, let $\delta^{(i)} \in \{0, 1\}$ represent an event indicator taking on the value $\mathbbm{1}[T_E^{(i)} < T_C^{(i)}]$. Let $C$ represent a survival copula. Given $\mathcal{D}$, we learn by maximizing the likelihood of the observed data.
Under conditional independence, Equation \ref{eq:survivallikelihood} factorizes and simplifies into the familiar form of the survival likelihood. \footnotesize \begin{align}
\mathcal{L}(\mathcal{D}) = \prod_{i=1}^N &\left[f_{T_E|X}(T^{(i)}_{\text{obs}} | X^{(i)})S_{T_C|X}(T^{(i)}_{\text{obs}}|X^{(i)})\right]^{\delta^{(i)}} \label{eq:independence_likelihood}\\& \left[f_{T_C|X}(T^{(i)}_{\text{obs}} | X^{(i)})S_{T_E|X}(T^{(i)}_{\text{obs}}|X^{(i)})\right]^{1-\delta^{(i)}}\nonumber \end{align} \normalsize
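The factorized likelihood above can be sketched in a few lines; here `f_e`, `S_e`, `f_c`, `S_c` are placeholder callables for the marginal densities and survival functions, and the toy exponential margins are illustrative, not from the paper.

```python
import math

# Log-likelihood under conditional independence: events contribute
# f_E * S_C, censored observations contribute f_C * S_E. `data` holds
# (x, t_obs, delta) triples; callable names are our own.
def independence_loglik(data, f_e, S_e, f_c, S_c):
    ll = 0.0
    for x, t, delta in data:
        if delta == 1:   # event observed; censoring time only known to exceed t
            ll += math.log(f_e(t, x)) + math.log(S_c(t, x))
        else:            # censored; event time only known to exceed t
            ll += math.log(f_c(t, x)) + math.log(S_e(t, x))
    return ll

# Toy example with exponential margins that ignore x entirely.
f = lambda rate: (lambda t, x: rate * math.exp(-rate * t))
S = lambda rate: (lambda t, x: math.exp(-rate * t))
data = [(None, 1.0, 1), (None, 2.0, 0)]
ll = independence_loglik(data, f(1.0), S(1.0), f(0.5), S(0.5))
```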
However, when $T_E,T_C$ are no longer conditionally independent, we can no longer rely on this clean decomposition of the log-likelihood. Instead, we make use of the following lemma. \begin{lemma}[Conditional Survival Function Under Sklar's Theorem (Survival)] \label{lemma:copula-conditional}
If $S_{T_E, T_C | X}(t_e, t_c | x) = \left.C(u_1, u_2)\middle|_{\substack{{u_1=S_{T_E|X}(t_e|x)}\\ {u_2=S_{T_C|X}(t_c|x)}}}\right.$, then, \footnotesize \begin{equation}
\int_{t_c}^\infty f_{T_C | T_E, X}(s\, |\, t_e, x)\, ds = \left.\frac{\partial}{\partial u_1} C(u_1, u_2)\middle|_{\substack{{u_1=S_{T_E|X}(t_e|x)}\\ {u_2=S_{T_C|X}(t_c|x)}}}\right.. \nonumber \end{equation} \end{lemma} \normalsize
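The lemma can be checked numerically: below, both sides are computed by finite differences for a Clayton copula with (illustrative) exponential margins.

```python
import math

# Numerical check of the lemma: Pr(T_C > t_c | T_E = t_e) computed two ways.
theta = 2.0
S_E = lambda t: math.exp(-t)          # marginal survival of T_E
S_C = lambda t: math.exp(-0.5 * t)    # marginal survival of T_C
f_E = lambda t: math.exp(-t)          # marginal density of T_E

def C(u1, u2):  # Clayton copula
    return (u1 ** (-theta) + u2 ** (-theta) - 1.0) ** (-1.0 / theta)

def S_joint(te, tc):  # Sklar's theorem for survival functions
    return C(S_E(te), S_C(tc))

te, tc, h = 0.8, 1.3, 1e-6

# Left side: -d/dt_e S_joint(t_e, t_c) / f_E(t_e), by central difference.
lhs = -(S_joint(te + h, tc) - S_joint(te - h, tc)) / (2 * h) / f_E(te)

# Right side: dC/du1 at (S_E(t_e), S_C(t_c)), by central difference.
u1, u2 = S_E(te), S_C(tc)
rhs = (C(u1 + h, u2) - C(u1 - h, u2)) / (2 * h)

assert abs(lhs - rhs) < 1e-5
```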
Applying Lemma~\ref{lemma:copula-conditional} to Equation \ref{eq:survivallikelihood} yields the log-likelihood for survival models under dependent censorship.\pagebreak \footnotesize \begin{align} \label{eq:loglikelihood} \ell(\mathcal{D}) &= \sum_{i=1}^N
{ } \delta^{(i)}\log \left[f_{T_E|X}(T^{(i)}_\text{obs}| X^{(i)})\right] + \\&\delta^{(i)} \log \left[\frac{\partial}{\partial u_1}C(u_1, u_2)\middle\vert_{\substack{u_1 = S_{T_E|X}(T^{(i)}_\text{obs}|X^{(i)})\\u_2 = S_{T_C|X}(T^{(i)}_\text{obs}|X^{(i)})} }\right] +\nonumber\\
&(1-\delta^{(i)})\log \left[f_{T_C|X}(T^{(i)}_\text{obs}| X^{(i)})\right] + \nonumber\\
&(1-\delta^{(i)}) \log \left[\frac{\partial}{\partial u_2}C(u_1, u_2)\middle\vert_{\substack{u_1 = S_{T_E|X}(T^{(i)}_\text{obs}|X^{(i)})\\u_2 = S_{T_C|X}(T^{(i)}_\text{obs}|X^{(i)})} }\right].\nonumber \end{align} \normalsize
In this expression, the first term corresponds to the log-likelihood of observing the event at time $T_{\text{obs}}^{(i)}$. The second term corresponds to the conditional probability of observing the censorship time after the event time, given that the event time is $T_{\text{obs}}^{(i)}$. The third and fourth terms, by symmetry, represent the same quantities for the censorship time. Despite the visual complexity of Equation \ref{eq:loglikelihood}, the partial derivatives of the Clayton and Frank copulas admit closed-form solutions, so the log-likelihood function has a closed form and can be maximized via gradient-based methods. Algorithm~\ref{alg:optimization} details the optimization procedure used to jointly optimize the marginal and copula parameters. Empirically, we find that scaling the gradient of $\hat{\theta}$ by a large constant factor $K$, and then clipping it prior to taking each update step, supports stable optimization in this regime ($K = 1000$ in our experiments). Additional implementation details and hyperparameters are discussed in Appendix \ref{appx:implementation_details}.
\RestyleAlgo{ruled} \SetKwComment{Comment}{$\ $\# }{ } \SetKwComment{Commentt}{\# }{ } { \begin{algorithm}[h] \small \KwIn{
$\mathcal{D}$: survival dataset of the form $\{(X^{(i)}, T_\text{obs}^{(i)}, \delta^{(i)})\}_{i=1}^N$; $C_\theta$: a bivariate copula, parameterized by $\theta$; $\mathcal{M}$, a class of survival model parameterized by $\phi$ that can produce $\hat{S}^{(\mathcal{M})}_{T|X}(t|X)$, $\hat{f}^{(\mathcal{M})}_{T|X}(t|X)$, for each $X^{(i)} \in \mathcal{D}$; $\alpha$: learning rate for event model, censoring model, and copula parameter; $M$: number of training epochs; $K$: large constant factor; $\theta_{\text{min}}$: small positive number. } \KwResult{$\hat{\theta}, \hat{\phi}_E, \hat{\phi}_C$: learned parameters of the copula and each marginal survival model.} \hrulefill\\ $\mathcal{M}_E \gets \texttt{Instantiate}(\mathcal{M}; \hat{\psi}_E^{(0)})$ \; $\mathcal{M}_C \gets \texttt{Instantiate}(\mathcal{M}; \hat{\psi}_C^{(0)})$ \; $C_\theta \gets \texttt{Instantiate}(C; \hat{\theta}^{(0)})$ \; \For{$i = 1,\, ...\,,\, M$}{
$\mathcal{L}_{i} \gets \ell\left[\mathcal{D}; \hat{f}^{\left(\mathcal{M}_E\right)}_{T|X}, \hat{f}^{\left(\mathcal{M}_C\right)}_{T|X}, \hat{S}^{\left(\mathcal{M}_E\right)}_{T|X}, \hat{S}^{\left(\mathcal{M}_C\right)}_{T|X},
C_{\hat{\theta}^{(i)}}\right]$\;
$\hat{\psi}_C^{(i)} \gets $\texttt{AdamUpdate}($\mathcal{L}_i$, $\hat{\psi}_C$, $\alpha$) \;
$\hat{\psi}_E^{(i)} \gets $\texttt{AdamUpdate}($\mathcal{L}_i$, $\hat{\psi}_E$, $\alpha$) \;
$\nabla \hat{\theta}^{(i)} \gets \nabla \hat{\theta}^{(i)} \times K$\;
$\nabla \hat{\theta}^{(i)} \gets \nabla \hat{\theta}^{(i)} \vert_{[-0.1, 0.1]}$\;
$\hat{\theta}^{(i)} \gets $\texttt{AdamUpdate}($\mathcal{L}_i$, $\hat{\theta}$, $\alpha$) \;
$\hat{\theta}^{(i)} \gets \max(\hat{\theta}^{(i)}, \theta_{\text{min}})$ \Comment{Constrain theta > 0} } \Return{$\hat{\theta}^{(i)}$, $\hat{\psi}_E^{(i)}$, $\hat{\psi}_C^{(i)}$} \caption{\small Learning Under Dependent Censorship \normalsize} \label{alg:optimization} \normalsize \end{algorithm} }
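The $\hat{\theta}$ update in Algorithm \ref{alg:optimization} can be sketched as follows; plain gradient descent stands in for the \texttt{AdamUpdate} step, and the constants are the illustrative defaults named in the text.

```python
# Sketch of the copula-parameter update: the gradient of theta is scaled
# by a large constant K, clipped, and the result of the step is projected
# back onto [theta_min, inf) to keep theta strictly positive.
def theta_step(theta, grad, lr=1e-3, K=1000.0, theta_min=1e-4):
    g = grad * K                      # rescale the (typically tiny) gradient
    g = max(-0.1, min(0.1, g))        # clip to [-0.1, 0.1]
    theta = theta - lr * g            # gradient step (Adam in the paper)
    return max(theta, theta_min)      # projection: constrain theta > 0

theta = 1.5
theta = theta_step(theta, grad=-2e-4)   # a small raw gradient gets amplified
```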
\section{Evaluation}
\subsection{Metrics are Biased Under Dependence}
Standard metrics such as the concordance index \citep{harrell1982evaluating, uno2011c}, time-dependent concordance (TDCI) \citep{gerds2013estimating}, and Brier score \citep{brier1950verification} cannot effectively evaluate models learned under dependent censoring. To demonstrate this, we generate survival data under a copula, and compare the performance of the data-generating event model, $f_{T_E|X}$, on censored and uncensored data as the dependency increases. The results of this experiment are shown in Table \mbox{\ref{tab:biased_metrics}}. As the dependence increases, both the concordance and Brier score under censoring deviate from their values without censoring. This suggests that the utility of these metrics decreases as the dependence in censoring increases. This challenges previous results that use these measures as the primary statistics of interest when assessing the performance of models under dependent censoring.
By way of analogy, we describe the connection between evaluation under dependent censoring and the potential outcomes framework from causal inference. In the case where censoring takes place completely at random, metrics like concordance and Brier score are suitable means of evaluation, akin to how a randomized controlled trial produces an unbiased estimate of the average treatment effect. Under observed confounding, weighting schemes like inverse-propensity censorship weighting \mbox{\cite{uno2011c, graf1999assessment}} leverage a censoring model to produce an unbiased estimator of the evaluation statistic. But confounding of the form that arises in survival analysis does not readily admit a censoring model that can be used to perform weighting adjustment, since the covariates required for such a model remain unobserved. Consequently, unbiased model evaluation under dependent censoring is fundamentally a problem of counterfactual analysis, and is not feasible to solve using observational data alone.
\begin{table}[]
\centering\scriptsize
\setlength\tabcolsep{2pt}
\begin{tabular}{|c||c|c|c|c|c|c|}
\hline
& \multicolumn{3}{c|}{C-Index ($\uparrow$)} & \multicolumn{3}{c|}{Brier Score ($\downarrow$)}\\
$\tau$ & \multicolumn{1}{c}{Uncensored} & \multicolumn{1}{c}{Censored} & \multicolumn{1}{c|}{Abs. Diff. ($\downarrow$)} & \multicolumn{1}{c}{Uncensored} & \multicolumn{1}{c}{Censored} & \multicolumn{1}{c|}{Abs. Diff. ($\downarrow$)}\\
\hline\hline
0.01 & 0.6151 & 0.6187 & 0.0037 & 0.0719 & 0.0859 & 0.0140\\
0.2 & 0.6144 & 0.6140 & 0.0004 & 0.0757 & 0.0909 & 0.0152\\
0.4 & 0.6170 & 0.6164 & 0.0006 & 0.0726 & 0.0943 & 0.0217 \\
0.6 & 0.6172 & 0.6342 & 0.0170 & 0.0733 & 0.0963 & 0.0230\\
0.8 & 0.6125 & 0.6873 & 0.0748 & 0.0744 & 0.1054 & 0.0310\\
\hline
\end{tabular}
\caption{\small The results of an experiment comparing the concordance index and Brier score on an uncensored population against those on a population experiencing dependent censoring. The full details of this experiment are provided in Appendix \ref{appx:evaluation_metrics_are_biased_under_dependence}.}
\label{tab:biased_metrics} \end{table}
\begin{figure}\label{fig:eval-metric}
\end{figure}
\textbf{The \textit{Survival-}$\ell_1$ Metric}: We introduce the \textit{Survival-$\ell_1$} metric as a means of quantifying bias in survival analysis due to dependent censoring on synthetic data. The \textit{Survival-$\ell_1$} metric $\mathcal{C}_{\text{Survival-}\ell_1} : \mathcal{S} \times \mathcal{S} \rightarrow \mathbb{R}_+$ is the $\ell_1$ distance between the ground-truth survival curve, $S_{T|X}$, and the estimate achieved by a survival model, $\hat{S}_{T|X}$ (Figure \ref{fig:eval-metric}), over the lifespan of the curves.
However, the scale of the naive $\ell_1$ measure between survival curves is proportional to the total amount of elapsed time under each survival curve. To ensure that survival curves over longer lifespans do not contribute proportionally more to the evaluation metric than those over shorter lifespans, we define the small constant \textit{normalizing quantile}, $Q_{\lVert\cdot\rVert}$ (in our experiments, $Q_{\lVert\cdot\rVert} = 0.01$). We can loosely think of the time when each survival curve reaches the normalizing quantile as the ``end time'' of that survival curve. By normalizing the area between the survival curves by the \textit{temporal normalization} value $T_{\text{max}}^{(i)} = S^{-1}_{T|X^{(i)}}\left(Q_{\lVert\cdot\rVert}\right)$, we ensure that the duration spanned by a patient's survival curve does not influence that patient's contribution to $\mathcal{C}_{\text{Survival-}\ell_1}$ relative to other patients.
Our \textit{Survival-$\ell_1$} metric therefore takes the following form: \footnotesize \begin{align}
\mathcal{C}_{\text{Survival-}\ell_1}(S, \hat{S}) = \sum_{i=1}^N &\frac{1}{N \times T_{\text{max}}^{(i)}} \int_{0}^{\infty} \\&\left|S_{T|X}(t|X^{(i)}) - \hat{S}_{T|X}(t|X^{(i)})\right| dt\nonumber \end{align} \normalsize
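A minimal discrete-time sketch of the \textit{Survival-$\ell_1$} computation (our own implementation outline, assuming survival curves sampled on a uniform time grid):

```python
import math

# Survival-l1 on a discrete grid. `S_true` and `S_hat` are lists of
# per-patient survival curves sampled at times `ts`; the normalizing
# quantile Q determines each patient's "end time" T_max.
def survival_l1(ts, S_true, S_hat, Q=0.01):
    dt = ts[1] - ts[0]
    total, N = 0.0, len(S_true)
    for s, s_hat in zip(S_true, S_hat):
        # T_max: first grid time at which the true curve falls below Q.
        t_max = next((t for t, v in zip(ts, s) if v <= Q), ts[-1])
        # l1 area between the curves, normalized per patient by N * T_max.
        area = sum(abs(a - b) for a, b in zip(s, s_hat)) * dt
        total += area / (N * t_max)
    return total

# Toy check: identical curves give zero error.
ts = [0.01 * i for i in range(1, 1001)]
curve = [math.exp(-t) for t in ts]
assert survival_l1(ts, [curve], [curve]) == 0.0
```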
\begin{figure*}\label{fig:mainresults}
\end{figure*}
\section{Experiments and Results} \textbf{Synthetic Data}: The \textit{Survival}-$\ell_1$ metric places strong assumptions on our knowledge of the data-generating process by assuming access to the ground-truth survival functions for each outcome. For this reason, we predominantly make use of synthetic data to evaluate the merits of our approach.
Algorithm \ref{alg:datagenerating} provides a means of generating synthetic data under a specified copula $C$ with Weibull CoxPH margins. For the \texttt{Linear-Risk} experiment shown in Figure \ref{fig:mainresults}, we generate data according to Algorithm \ref{alg:datagenerating} with $X \in \mathbb{R}^{N \times 10} \sim \mathcal{U}_{[0,1]}$, $\nu_E^* = 4, \rho_E^* = 14, \psi_E^*(X) = \beta_E^T X$, $\nu_C^* = 3, \rho_C^* = 16, \psi_C^*(X) = \beta_C^T X$, where $\beta_E, \beta_C \in [0,1]^{10} \sim \mathcal{U}_{[0,1]}$. For the \texttt{Nonlinear-Risk} experiment, we run Algorithm \ref{alg:datagenerating} with $X \in \mathbb{R}^{N \times 10} \sim \mathcal{U}_{[0,1]}$, $\nu_E^* = 4, \rho_E^* = 17, \psi_E^*(X) = \sum_{i=1}^{10}X_{i}^{2}/8$, $\nu_C^* = 3, \rho_C^* = 16, \psi_C^*(X) = \beta_{C}^{T}X^{2}/5$, where $ \beta_C \in [0,1]^{10} \sim \mathcal{U}_{[0,1]}$. Each synthetic experiment was performed on $20,000$ train, $10,000$ validation, and $10,000$ test samples.
The network $g_\psi$ in the model we train on the \texttt{Linear-Risk} data consists of a single linear layer, while the network $g_\psi$ in the model we train on the \texttt{Nonlinear-Risk} data is a fully-connected neural network with ELU activations and layer dimensions $[10, 4, 4, 4, 2, 1]$.
\textbf{Semi-Synthetic Data}: To investigate the promise of our approach on non-synthetic data, we artificially censor regression datasets with varying degrees of dependence. We choose two datasets, \texttt{STEEL} \citep{asuncion2007uci} and \texttt{AIRFOIL} \citep{misc_airfoil_self-noise_291}, from the UCI Machine Learning Repository. We induce censoring in the data according to Algorithm \ref{alg:semi-synthetic} in Appendix \ref{appx:creating_a_semi_synthetic_dataset_with_dependent_censoring}. We then train a linear version of our method on the artificially censored dataset and evaluate our performance via the $R^2$ statistic\footnote{Note that a method like \textit{Survival-}$\ell_1$ does not apply to this context, as semi-synthetic data does not provide ground-truth survival curves.}. In this experiment, we compare our approach against two baselines: a linear Weibull CoxPH model trained on the regression data \textit{without censoring}, and a linear independence-assuming Weibull CoxPH model.
\RestyleAlgo{ruled} \SetKwComment{Comment}{$\ $\# }{ } \SetKwComment{Commentt}{\# }{ } { \begin{algorithm}[h] \small \KwIn{ $X \in \mathbb{R}^{N \times d}$: a set of covariates, $g_{\psi} : \mathbb{R}^{N \times d} \rightarrow \mathbb{R}$: a class of risk function parameterized by $\psi$, $C_\theta$: a class of copula parameterized by $\theta$, $(\nu^*_E, \rho^*_E, \psi^*_E), (\nu^*_C, \rho^*_C, \psi^*_C), \theta^*$: data-generating parameters associated with each outcome model and the copula, respectively.} \KwResult{$\mathcal{D}$, a survival dataset with the desired dependence.} \hrulefill\\ $\mathcal{D} = \emptyset$\; \For{$i = 1,\, ...\,,\, N$}{
$u_1^{(i)}, u_2^{(i)} \sim C_{\theta^*}$\;
$T_E^{(i)} \gets \left(\frac{-\log(u_1^{(i)})}{\exp\left(g_{\psi_E^*}(X^{(i)})\right)}\right)^{\frac{1}{\nu^*_E}}\rho^*_E$\;
$T_C^{(i)} \gets \left(\frac{-\log(u_2^{(i)})}{\exp\left(g_{\psi_C^*}(X^{(i)})\right)}\right)^{\frac{1}{\nu^*_C}}\rho^*_C$\;
$\mathcal{D} \gets \mathcal{D} \cup \{(X^{(i)}, \min(T_E^{(i)}, T_C^{(i)}), \mathbbm{1}[T_E^{(i)} < T_C^{(i)}])\}$\; } \Return{$\mathcal{D}$} \caption{\small Generating Synthetic Dependent Survival Data \normalsize} \label{alg:datagenerating} \normalsize \end{algorithm} }
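Algorithm \ref{alg:datagenerating} can be sketched as below for a Clayton copula, using the standard conditional-distribution method to draw $(u_1, u_2)$; consistent with the hazard in Equation \ref{eq:model-hazard}, the covariate effect enters the inversion through $\exp(g(X))$. All parameter values are illustrative.

```python
import math
import random

# Draw a dependent pair (u1, u2) from a Clayton copula via the standard
# conditional-distribution method.
def sample_clayton(theta):
    u1, t = random.random(), random.random()
    u2 = ((t ** (-theta / (1.0 + theta)) - 1.0)
          * u1 ** (-theta) + 1.0) ** (-1.0 / theta)
    return u1, u2

# Invert the Weibull CoxPH survival function: solve S(T) = u, giving
# T = (-log(u) / exp(g(X)))^(1/nu) * rho.
def invert(u, nu, rho, g_x):
    return ((-math.log(u)) / math.exp(g_x)) ** (1.0 / nu) * rho

random.seed(0)
dataset = []
for _ in range(5):
    u1, u2 = sample_clayton(theta=2.0)
    T_E = invert(u1, nu=4.0, rho=14.0, g_x=0.5)   # illustrative parameters
    T_C = invert(u2, nu=3.0, rho=16.0, g_x=0.3)
    dataset.append((min(T_E, T_C), int(T_E < T_C)))
```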
Our results highlight three properties of our framework. First, our model is capable of reducing the bias in the learned individual survival curves (as measured by the \textit{Survival-$\ell_1$} metric). Second, the learning algorithm does, in many cases, recover the ground-truth parameter of the copula, even when the event and censoring models are parameterized by neural networks. Finally, our framework opens up new avenues for learning more complex forms of dependence between event and censoring times.
\textbf{Reducing Bias in Survival Outcomes}: Figure \ref{fig:mainresults} (left column) plots the model bias as measured by the $\textit{Survival-}\ell_1$, and how it behaves across datasets (in rows of plots).
We highlight that our approach of modeling the dependence structure between event and censorship times reduces the bias in the model's estimation of survival curves. The bias is substantially lower under our approach for all values of $\tau > 0$, and the improvements are more pronounced for larger values of $\tau$, indicating that the gains from our approach grow as the dependence between censorship and event times strengthens. These results hold consistently for both the \texttt{Linear-Risk} and \texttt{Nonlinear-Risk} data-generating processes, and for both the Frank and Clayton families of copulas. In the special case where $\tau = 0$, we observe that our approach correctly recovers the independence copula, and learns an unbiased survival curve.
Our results on the artificially censored \texttt{STEEL} and \texttt{AIRFOIL} datasets suggest that our method also shows promise on non-synthetic data. On the \texttt{STEEL} dataset, our method achieves an $R^2$ of $0.508$ under high dependence ($\tau=0.8$), compared to the $R^2$ of $0.341$ achieved by the independence-assuming model. Likewise, on the \texttt{AIRFOIL} dataset, our method achieves an $R^2$ of $0.484$ under high dependence ($\tau=0.8$), compared to the $R^2$ of $0.330$ achieved by the independence-assuming model. Across different degrees of dependence, our approach reliably outperforms the independence-assuming baseline, and often approaches the performance of the model trained on the uncensored version of the data. The complete table of results can be found in Appendix \ref{appx:additional_semi_synthetic_results_steel_dataset} (\texttt{STEEL}) and \ref{appx:additional_semi_synthetic_results_airfoil_dataset} (\texttt{AIRFOIL}).
\begin{figure}\label{fig:convex-combination}
\end{figure}
\textbf{Empirical Recovery of the Copula Parameter}: How close are the recovered parameters of the copula to the true parameters used in the data-generating process? Although we do not have a formal proof of identifiability, we nevertheless study this question empirically on the two datasets in Figure \ref{fig:mainresults} (right column). Here, we find that our approach is able to reliably recover a $\hat{\theta}$ that is close to $\theta^*$ across different datasets and families of copula.
\textbf{Relaxing Assumption \ref{assmp:knownform}}: Next, we showcase the flexibility of our framework via a relaxation of Assumption \ref{assmp:knownform}. Specifically, rather than parameterizing our model with $C_\theta$, a single copula of an assumed functional form, we instead parameterize it with a convex combination of Clayton and Frank copulas. During optimization, we learn $\theta_{\text{Frank}}$, $\theta_{\text{Clayton}}$, and $\kappa$, a mixing parameter. A convex combination of the Clayton and Frank copulas is itself a valid copula \citep{bacigal2010some, bacigal2015generators}. Figure \ref{fig:convex-combination} shows the results of an experiment on synthetic data with \texttt{Linear-Risk} margins and a dependency produced by a convex combination of copulas: $C_{\text{Mix.}}(u,v) = \kappa C_{\text{Frank}}(u,v) + (1-\kappa) C_{\text{Clayton}}(u,v)$. In this experiment, we fix $\kappa = 0.5$. As in the case where the functional form of $C$ was known, the mixture model reduces bias in the estimation of the event and censoring distributions.
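A sketch of a mixture copula of this kind (the formulas are the standard Clayton and Frank forms; the uniform-margins assertion holds for any valid copula):

```python
import math

def clayton(u, v, theta):
    return (u ** (-theta) + v ** (-theta) - 1.0) ** (-1.0 / theta)

def frank(u, v, theta):
    num = (math.exp(-theta * u) - 1.0) * (math.exp(-theta * v) - 1.0)
    return -math.log(1.0 + num / (math.exp(-theta) - 1.0)) / theta

# Convex combination of the two families, with mixing parameter kappa.
def mixture(u, v, th_frank, th_clayton, kappa=0.5):
    return kappa * frank(u, v, th_frank) \
        + (1.0 - kappa) * clayton(u, v, th_clayton)

# Uniform margins are preserved under convex combination.
c = mixture(0.7, 1.0, th_frank=3.0, th_clayton=2.0)
assert abs(c - 0.7) < 1e-9
```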
\section{Discussion} \label{sec:discussion}
\subsection{Dependent Censoring in Practice} \textbf{Evaluating Survival Models on Observational Data}: Given the impossibility of evaluation from observational data alone, how should a practitioner apply our method? We propose that practitioners adopt simulation -- the present gold standard of evaluation from the causal inference literature -- as a primary means to test the performance of survival models under dependent censoring. Such methods as \mbox{\cite{parikh2022validating} and \cite{mahajan2022empirical}} present means of generating counterfactual synthetic data that is similar to the available observational data. Then, evaluating model performance on the simulated data using counterfactual metrics (like Survival-$\ell_1$) is treated as a viable proxy of model performance on the downstream data.
\textbf{The Assumptions Encoded by the Clayton and Frank Copulas}: Given that we only observe either the time of event or censorship, identifying the joint distribution between these variables is generally not possible. Therefore, the choice of copula represents an \textit{assumption} about the data. How can a practitioner leverage domain knowledge in order to select the right copula to use within our framework? Consider how the copula parameter, $\theta$, relates the event and censoring curves under three different circumstances. (1) If the censoring and event curves are identical, then $\theta$ grows with the probability that the time of event and censorship are the same. (2) If the censoring curve decays faster than the survival curve, $\theta$ grows with the probability that the time of censorship precedes the time of event. (3) If the survival curve decays faster than the censoring curve, $\theta$ grows with the probability that the time of event precedes the time of censorship. For a fixed $\theta$, the Clayton copula expresses this dependency as stronger at later times (lower quantiles), and weaker at earlier times (higher quantiles). The Frank copula expresses the dependency at a more uniform strength across all time periods. A visualization of these cases, and of the quantile densities expressed by the Clayton and Frank copulas, can be found in Appendices \ref{appx:quantile_density_visualizations} and \ref{appx:intuition_for_copula_selection}.
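This Clayton/Frank contrast can be checked numerically using the standard closed-form copula densities (which are not reproduced in this paper; the formulas below are the usual ones from the copula literature). This is an illustrative sketch, not part of our implementation:

```python
import math

def clayton_density(u, v, theta):
    # Clayton copula density c(u, v) = d^2 C / (du dv), for theta > 0.
    return (1.0 + theta) * (u * v) ** (-theta - 1.0) * (
        u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta - 2.0)

def frank_density(u, v, theta):
    # Frank copula density.
    e = math.exp(-theta)
    eu, ev = math.exp(-theta * u), math.exp(-theta * v)
    num = theta * (1.0 - e) * eu * ev
    den = ((1.0 - e) - (1.0 - eu) * (1.0 - ev)) ** 2
    return num / den
```

Evaluating at jointly-low versus jointly-high quantiles shows the asymmetry: the Clayton density is larger at $(0.1, 0.1)$ than at $(0.9, 0.9)$, while the radially symmetric Frank density is identical at those two points.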
\section{Conclusion} The method of using copulas to couple marginal survival distributions is a general one. As future work, we consider extending this approach to other classes of neural survival models, such as those that do not assume either proportional hazards or a Weibull baseline hazard. Though the \textit{Survival-$\ell_1$} metric is a sufficient metric to demonstrate the promise of our approach, it relies on knowledge of the complete survival curve for each instance; this is typically not available in real-world data. The careful study of the behaviour of conventional evaluation metrics under dependence, and the design of strategies to more faithfully ascertain the performance of a model from observational data alone remain open avenues for future work.
Modern statistical methods in survival analysis increasingly rely on complex, nonlinear functions of risk; however, existing applications of deep learning to survival analysis do not accommodate dependent censoring that may be present in the data. This work relaxes this key assumption, and presents the first neural network-based model of survival to accommodate dependent censoring. Our experimental results demonstrate the promise of our method: our approach significantly reduces the \textit{Survival-$\ell_1$} (bias) in estimation and our optimization technique is reliably able to recover the underlying dependence parameter in survival data across datasets of varying feature sizes.
\addtocontents{toc}{\protect\setcounter{tocdepth}{3}}
\nocite{seabold2010statsmodels} \nocite{asuncion2007uci} \nocite{Dua:2019} \nocite{ve2021efficient} \nocite{sathishkumar2020energy} \nocite{sathishkumar2020industry}
\title{Copula-Based Deep Survival Models for Dependent Censoring\\(Supplementary Material)}
\onecolumn \raggedbottom \maketitle \tableofcontents
\appendix
\section{Table of Notation} \label{appx:table_of_notation}
\begin{table}[H]
\setlength{\tabcolsep}{12pt}
\begin{adjustbox}{center}
\begin{tabular}{ll}
$\textbf{1}^N$ & $N$-vector filled with 1's.\\
$\mathbbm{1}[\cdot]$ & Indicator function.\\
$\mathcal{L}(\cdot)$ & Likelihood function.\\
$\ell(\cdot)$ & Log-likelihood function.\\
$X \in \mathcal{X}$ & Covariates of one instance (as elements of the covariate space, $\mathcal{X}$).\\
$T_E \in \mathbb{R}_+$ & Event time.\\
$T_C \in \mathbb{R}_+$ & Censorship time.\\
$T_{\text{obs}} \in \mathbb{R}_+$ & Time of last observation; the minimum of $T_E, T_C$.\\
$T \in \mathbb{R}_+$ & Either event or censoring time; used in contexts where a quantity may refer to either.\\
$\delta \in \{0,1\}$ & Event indicator. Equal to 1 if the observed time is the event time; 0 otherwise.\\
$\mathcal{D} \subset \mathcal{X} \times \mathbb{R}_+ \times \{0,1\}$ & Survival dataset of the form $\{(X^{(i)}, T^{(i)}_\text{obs}, \delta^{(i)})\}_{i=1}^N$.\\
$S_T \in \mathcal{S}$ & Survival function, $S : \mathbb{R}_+ \rightarrow [0,1]$, and space of survival functions, $\mathcal{S}$.\\
$f_{T}$ & Probability density function over time, representing $\prob(T=t)$.\\
$F_{T}$ & Cumulative density function over time, representing $\prob(T < t)$.\\
$C$ & A copula. If written as $C_\theta$, this denotes a copula parameterized by the dependence parameter $\theta$.\\
$u_1, u_2$ & Inputs to a copula function. It is assumed that these are uniformly distributed. \end{tabular} \end{adjustbox} \label{tab:notation} \end{table}
\section{Copula Formulae and Algorithms} \label{appx:copula_formulae_and_algorithms}
\subsection{Table of Preliminaries} \label{appx:table_of_preliminaries}
\begin{table}[H]
\small
\begin{adjustbox}{center}
\begin{tabular}{|c||c|c|c|}
\hline
Copula & $C_\theta(u_1, u_2)$ & $\Theta$ & \parbox{0.5cm} {\begin{align*} \frac{\partial}{\partial u_1} C_\theta(u_1, u_2) \end{align*}} \\
\hline
\hline
Independence Copula & $u_1u_2$ & N/A & \parbox{3cm} {\begin{align*} u_2 \end{align*}}\\
\hline
Clayton Copula & $\left(\max \left(u_1^{-\theta} + u_2^{-\theta} - 1, 0\right)\right)^{-1/\theta}$ & $[-1, \infty)\backslash\{0\}$ & \parbox{3cm} {\begin{align*}
\begin{cases}
\left(u_1^{-\theta} + u_2^{-\theta}-1\right)^{\frac{-\theta-1}{\theta}}u_1^{-\theta-1} \quad &u_1^{-\theta} + u_2^{-\theta} > 1\\
0 &\text{otherwise}
\end{cases}
\end{align*}}\\
\hline
Frank Copula & \parbox{3cm}{\begin{align*}
\frac{-1}{\theta} \log \left(1+\frac{(\exp(-\theta u_1)-1)(\exp(-\theta u_2)-1)}{\exp(-\theta)-1}\right)
\end{align*}} & $\mathbb{R}\backslash\{0\}$ & \parbox{3cm} {\begin{align*}
\frac{\exp(-\theta u_1)(\exp(-\theta u_2)-1)}{\exp(-\theta)-1+(\exp(-\theta u_1)-1)(\exp(-\theta u_2)-1)}
\end{align*}}\\
\hline
\end{tabular}
\end{adjustbox}
\caption{A table of formulas representing different classes of bivariate copulas used in our experiments. This table provides $C_\theta(u_1, u_2)$, the formula for the cumulative distribution function of the copula;
$\Theta$, representing the family $\Theta$ from which valid $\theta$ may be drawn; and $\frac{\partial}{\partial u_1} C_\theta(u_1, u_2)$, representing the partial derivative of the copula with respect to its first parameter. Due to the symmetric nature of these copulas, one can readily find $\frac{\partial}{\partial u_2} C_\theta(u_1, u_2)$ from $\frac{\partial}{\partial u_1} C_\theta(u_1, u_2)$ by simply interchanging $u_1$, $u_2$ (hence, we only provide $\frac{\partial}{\partial u_1} C_\theta(u_1, u_2)$).}
\label{tab:my_label} \end{table}
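As a sanity check on the Clayton entries of the table, the closed-form partial derivative can be verified against a finite-difference approximation of the CDF. This is an illustrative sketch (assuming $\theta > 0$):

```python
import math

def clayton_cdf(u1, u2, theta):
    # Clayton copula CDF from the table (theta > 0 case).
    return max(u1 ** -theta + u2 ** -theta - 1.0, 0.0) ** (-1.0 / theta)

def clayton_dC_du1(u1, u2, theta):
    # Closed-form partial derivative of the Clayton CDF with respect to u_1.
    s = u1 ** -theta + u2 ** -theta - 1.0
    return s ** ((-theta - 1.0) / theta) * u1 ** (-theta - 1.0) if s > 0 else 0.0
```

A central finite difference of `clayton_cdf` in its first argument agrees with `clayton_dC_du1` to high precision at interior points.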
\subsection{Sampling from a Copula} \label{appx:sampling_from_a_copula}
Algorithm 2 requires that we draw samples from the Clayton and Frank copulas. To do so, we use the copula sampling scheme implemented in the Python \href{https://www.statsmodels.org/stable/index.html}{\texttt{statsmodels}} package \citep{seabold2010statsmodels}.
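Purely for illustration (our experiments rely on \texttt{statsmodels}, not this sketch), the conditional-inverse method for the Clayton copula with $\theta > 0$ admits a simple closed form: draw $u_1 \sim U(0,1)$ and $w \sim U(0,1]$, then solve $w = \partial C_\theta / \partial u_1$ for $u_2$:

```python
import math
import random

def sample_clayton(theta, n, seed=0):
    # Conditional-inverse sampling for the Clayton copula (theta > 0):
    # invert w = dC/du1 (u1, u2) for u2 in closed form.
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        u1 = rng.random() or 1e-12   # avoid u1 == 0
        w = 1.0 - rng.random()       # uniform on (0, 1]
        u2 = (u1 ** -theta * (w ** (-theta / (1.0 + theta)) - 1.0) + 1.0) ** (-1.0 / theta)
        pairs.append((u1, u2))
    return pairs
```

For a large $\theta$, the sampled quantile pairs exhibit strong positive dependence, as expected.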
\subsection{Quantile Density Visualizations} \label{appx:quantile_density_visualizations}
\begin{figure}
\caption{Plots of the densities for the Clayton (top row) and Frank (bottom row) copulas, under different degrees of dependence. These plots are functions of each of the copula's margins, $u$ and $v$. In practice, $u$ and $v$ are quantiles of the event and censoring distributions. Observe that, as the dependence increases, the difference in density between the on-diagonal points (points where $u \approx v$) and the off-diagonal points increases. Note also that, while the Clayton copula concentrates density around low quantiles (points where $u \approx v \approx 0$) as dependence increases, the Frank copula concentrates density more uniformly around the on-diagonal.}
\label{fig:quantile-dependence}
\end{figure}
\subsection{Intuition for Copula Selection} \label{appx:intuition_for_copula_selection}
In Section \ref{sec:discussion}, we discussed three different cases that can be used to build intuition around the forms of dependence induced by various copulas. In Figure \ref{fig:three_cases}, we visualize these cases, and relate them to the quantile density plots in Appendix \ref{appx:quantile_density_visualizations}. The point of this section is to build intuition regarding the \textit{a priori} selection of a copula, so we will necessarily make a few simplifications. For example, although the three cases we discuss are not exhaustive -- it is possible that the event and censoring survival curves cross (\textit{e.g.} if the event and censoring distributions have different baseline hazards) -- they present clean intuition relating the choice of copula to the structure of the joint density it produces.
\begin{figure}
\caption{Three survival functions highlighting the three cases we presented in Section \ref{sec:discussion} of the main body. \textbf{Left}: the case where the conditional survival and censoring functions are the same. \textbf{Center}: the censoring survival function decays faster than the event survival function. \textbf{Right}: the event survival function decays faster than the censoring survival function. }
\label{fig:three_cases}
\end{figure}
The key intuition for selecting a copula from domain knowledge can be drawn from Sklar's Theorem (Survival), which states that a joint distribution over event and censoring times can be modelled as two marginal event and censoring distributions whose quantiles are linked by a copula. When the event and censoring distributions are the same (left), the event quantile of a given time is the same as the censoring quantile for that same time. Thus, an increased dependence between event and censoring quantiles is directly reflected in a positive dependence between event and censoring times. When the censoring survival curve decays more quickly than the event survival curve, the event quantile of a given event time is higher than the censoring quantile for that same time. Therefore, increasing the dependence between event and censoring quantiles increases the likelihood that the censoring time precedes the event time. By symmetry, the opposite is true when the event survival curve decays more quickly than the censoring survival curve. An increase in dependence between quantiles in this setting increases the likelihood that the event time precedes the censoring time under the model.
\section{Derivations} \label{appx:derivations}
\subsection{The Right-Censored Likelihood} \label{appx:the_right_censored_likelihood}
As a starting point for the subsequent derivations, we discuss the intuition behind the general likelihood for right-censored survival data (Equation~\ref{eq:generallikelihood}).
Recall that a survival dataset $\mathcal{D}$ consists of $N$ i.i.d. samples of the form $\{(X^{(i)}, T^{(i)}_\text{obs}, \delta^{(i)})\}_{i=1}^N \ \subset\ \mathcal{X} \times \mathbb{R}_+ \times \{0,1\}$. The likelihood expressed in Equation \ref{eq:generallikelihood} uses the $\delta^{(i)}$ terms in the exponent as a conditional binary filter: raising a term to the power of $\delta^{(i)}$ ensures it is non-degenerate only when the patient experiences an event; raising a term to the power of $1-\delta^{(i)}$ ensures it is non-degenerate only when the patient is censored.
Let $f_{T_E, T_C | X}$ represent the joint density function of the event and censoring times, respectively, conditional on the patients' covariates. There are two mutually exclusive, collectively exhaustive cases into which we can decompose the right-censored likelihood for a given patient $i$: \begin{enumerate}
\item\textbf{Case 1} ($\delta^{(i)} = 1$): If $\delta^{(i)} = 1$, the likelihood term should express that $T_E^{(i)} = T^{(i)}_\text{obs}$, and $T_C^{(i)} > T^{(i)}_\text{obs}$. This corresponds to the observation that the patient experienced the event at time $T^{(i)}_\text{obs}$, and was not censored prior to experiencing the event. The probability of this event under our density function is $\int_{T^{(i)}_\text{obs}}^\infty f_{T_E, T_C | X}(T^{(i)}_\text{obs}, t_c | X^{(i)})dt_c$.
\item\textbf{Case 2} ($\delta^{(i)} = 0$): If $\delta^{(i)} = 0$, the likelihood term should express that $T_C^{(i)} = T^{(i)}_\text{obs}$, and $T_E^{(i)} > T^{(i)}_\text{obs}$. This corresponds to the observation that the patient is censored at time $T^{(i)}_\text{obs}$, and did not experience an event prior to being censored. The probability of this event under our density function is $\int_{T^{(i)}_\text{obs}}^\infty f_{T_E, T_C | X}(t_e, T^{(i)}_\text{obs} | X^{(i)})dt_e$. \end{enumerate}
Combining these two cases, and applying the assumption that our data is i.i.d., yields the general likelihood function for right-censored data.
\begin{equation}
\mathcal{L}(\mathcal{D}) = \prod_{i=1}^N \color{frenchblue}{\underbrace{\color{black}\left[\int_{T^{(i)}_\text{obs}}^\infty f_{T_E, T_C | X}(T^{(i)}_{\text{obs}},\, t_c\, |\, X^{(i)})\,dt_c\right]\color{frenchblue}}_{\Pr\left(T_E = T^{(i)}_{\text{obs}},\, T_C > T^{(i)}_{\text{obs}}\, |\, X^{(i)}\right)}}^{\color{black}\delta^{(i)}}\color{mediumred-violet}{\underbrace{\color{black}\left[\int_{T^{(i)}_{\text{obs}}}^\infty f_{T_E, T_C | X}(t_e,\, T^{(i)}_{\text{obs}}\, |\, X^{(i)})\,dt_e\right]}_{\color{mediumred-violet} \Pr\left(T_C = T^{(i)}_{\text{obs}},\, T_E > T^{(i)}_{\text{obs}}\, |\, X^{(i)}\,\right)}}^{\color{black}1-\delta^{(i)}}\color{black} \label{eq:generallikelihood-supp} \end{equation}
\subsection{The Right-Censored Log-Likelihood Under Conditional Independence} \label{appx:the_right_censored_log_likelihood_under_conditional_independence}
Under the assumption that $T_E \perp T_C | X$, we can factorize the conditional density distributions in Equation \ref{eq:generallikelihood-supp}. $f_{T_E,T_C|X}$ factorizes into $f_{T_E|X}f_{T_C|X}$. \begin{align}
\mathcal{L}(\mathcal{D}) &= \prod_{i=1}^N \left[f_{T_E|X}(T^{(i)}_\text{obs} | X^{(i)})\int_{T^{(i)}_\text{obs}}^\infty f_{T_C | X}(t_c | X^{(i)})dt_c\right]^{\delta^{(i)}} \left[f_{T_C|X}(T^{(i)}_\text{obs} | X^{(i)})\int_{T^{(i)}_\text{obs}}^\infty f_{T_E | X}(t_e | X^{(i)})dt_e\right]^{1-\delta^{(i)}}\\
&= \prod_{i=1}^N \left[f_{T_E|X}(T^{(i)}_\text{obs} | X^{(i)})\left(1-F_{T_C|X}(T^{(i)}_\text{obs}|X^{(i)})\right)\right]^{\delta^{(i)}} \left[f_{T_C|X}(T^{(i)}_\text{obs} | X^{(i)})\left(1-F_{T_E|X}(T^{(i)}_\text{obs}|X^{(i)})\right)\right]^{1-\delta^{(i)}}\\
&= \prod_{i=1}^N \left[f_{T_E|X}(T^{(i)}_\text{obs} | X^{(i)})S_{T_C|X}(T^{(i)}_\text{obs}|X^{(i)})\right]^{\delta^{(i)}} \left[f_{T_C|X}(T^{(i)}_\text{obs} | X^{(i)})S_{T_E|X}(T^{(i)}_\text{obs}|X^{(i)})\right]^{1-\delta^{(i)}}\\
\therefore \quad \ell(\mathcal{D}) &= \sum_{i=1}^N \delta^{(i)}\log\left[f_{T_E|X}(T^{(i)}_\text{obs} | X^{(i)})\right] + \delta^{(i)} \log \left[S_{T_C|X}(T^{(i)}_\text{obs}|X^{(i)})\right] + (1-\delta^{(i)}) \log
\left[f_{T_C|X}(T^{(i)}_\text{obs} | X^{(i)})\right] + \nonumber\\
& \qquad\quad(1-\delta^{(i)})\log\left[S_{T_E|X}(T^{(i)}_\text{obs}|X^{(i)})\right] \label{eq:surv_indep} \end{align}
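The independence log-likelihood above translates directly into code. The following is a minimal sketch with user-supplied conditional density and survival functions (the function names are illustrative, not part of our implementation):

```python
import math

def loglik_independent(data, f_e, S_e, f_c, S_c):
    # data: iterable of (x, t_obs, delta) triples; f_*/S_* are conditional
    # density and survival functions taking (t, x).
    ll = 0.0
    for x, t, d in data:
        if d == 1:  # event observed: f_E(t|x) * S_C(t|x)
            ll += math.log(f_e(t, x)) + math.log(S_c(t, x))
        else:       # censored: f_C(t|x) * S_E(t|x)
            ll += math.log(f_c(t, x)) + math.log(S_e(t, x))
    return ll
```

With exponential margins the terms can be checked by hand, since $\log S(t) = -\lambda t$.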
\subsection{The Right-Censored Log-Likelihood Under Dependence Defined by a Copula}
\subsubsection{Proof of Lemma \ref{lemma:copula-conditional}} \label{appx:proof_of_lemma_1}
\begin{manuallemma}{2}[Conditional Survival Function Under Sklar's Theorem (Survival)] \label{lemma:copula-conditional-supp}
If $S_{T_E, T_C | X}(t_e, t_c | x) = \left.C(u_1, u_2)\middle|_{\substack{{u_1=S_{T_E|X}(t_e|x)}\\ {u_2=S_{T_C|X}(t_c|x)}}}\right.$, then, \begin{align}
\int_{t_c}^\infty f_{T_C | T_E, X}(t_c | t_e, x) &= \frac{\partial}{\partial u_1} \left.C(u_1, u_2)\middle|_{\substack{{u_1=S_{T_E|X}(t_e|x)}\\ {u_2=S_{T_C|X}(t_c|x)}}}\right. \end{align} \end{manuallemma} \begin{proof} \begin{align}
\int_{t_c}^\infty f_{T_C | T_E, X}(t_c | t_e, x) &= \frac{\int_{t_c}^\infty f_{T_C, T_E | X}(t_c, t_e | x) dt_c}{f_{T_E|X}(t_e|x)} && \text{(Def'n of Cond. Prob.)}\\
&= \frac{\frac{-\partial}{\partial T_E} \int_{t_e}^\infty \int_{t_c}^\infty f_{T_C, T_E | X}(t_c, t_e | x) dt_c dt_e}{f_{T_E|X}(t_e|x)}\\
&= \frac{\frac{-\partial}{\partial T_E} S_{T_C, T_E | X}(t_c, t_e | x)}{f_{T_E|X}(t_e|x)} &&\text{(Def'n of Survival Function)}\\
&= \frac{\frac{-\partial}{\partial T_E} \left(C(u_1, u_2)\middle|_{\substack{{u_1=S_{T_E|X}(t_e|x)}\\ {u_2=S_{T_C|X}(t_c|x)}}}\right)}{f_{T_E|X}(t_e|x)} &&\text{(Sklar's Theorem)}\\
&= \frac{\frac{-\partial}{\partial u_1} \left(C(u_1, u_2)\middle|_{\substack{{u_1=S_{T_E|X}(t_e|x)}\\ {u_2=S_{T_C|X}(t_c|x)}}}\right)\frac{\partial}{\partial T_E}S_{T_E|X}(t_e|x)}{f_{T_E|X}(t_e|x)} &&\text{(Chain Rule)}\\
&= \frac{-\partial}{\partial u_1} \left(C(u_1, u_2)\middle|_{\substack{{u_1=S_{T_E|X}(t_e|x)}\\ {u_2=S_{T_C|X}(t_c|x)}}}\right)\cancelto{-1}{\frac{-f_{T_E|X}(t_e|x)}{f_{T_E|X}(t_e|x)}}\\
&= \frac{\partial}{\partial u_1} \left(C(u_1, u_2)\middle|_{\substack{{u_1=S_{T_E|X}(t_e|x)}\\ {u_2=S_{T_C|X}(t_c|x)}}}\right) \end{align} \end{proof}
\textit{Corollary.} We can symmetrically apply this lemma to the converse case, $f_{T_E|T_C,X}$, to obtain: \begin{equation}
\int_{t_e}^\infty f_{T_E | T_C, X}(t_e | t_c, x) = \frac{\partial}{\partial u_2} \left(C(u_1, u_2)\middle|_{\substack{{u_1=S_{T_E|X}(t_e|x)}\\ {u_2=S_{T_C|X}(t_c|x)}}}\right) \end{equation}
\subsubsection{Derivation of the Right-Censored Log Likelihood Under a Copula} \label{appx:derivation_of_right_censored_log_likelihood_under_a_copula}
Having now proven Lemma \ref{lemma:copula-conditional}, we apply it to derive a likelihood function for survival prediction under dependent censoring. We use Equation \ref{eq:generallikelihood} as the starting point for our derivation.
\begin{align}
\mathcal{L}(\mathcal{D}) = \prod_{i=1}^N & \left[\int_{T^{(i)}_\text{obs}}^\infty f_{T_E, T_C | X}(T^{(i)}_\text{obs}, t_c | X^{(i)})\,dt_c\right]^{\delta^{(i)}} \left[\int_{T^{(i)}_\text{obs}}^\infty f_{T_E, T_C | X}(t_e, T^{(i)}_\text{obs} | X^{(i)})\,dt_e\right]^{1-\delta^{(i)}}\\
= \prod_{i=1}^N &\left[f_{T_E|X}(T^{(i)}_\text{obs}| X^{(i)})\int_{T^{(i)}_\text{obs}}^\infty f_{T_C | T_E, X}(t_c | T^{(i)}_\text{obs}, X^{(i)})\,dt_c\right]^{\delta^{(i)}}\times &&\text{(Chain Rule)}\\
&\left[f_{T_C|X}(T^{(i)}_\text{obs}| X^{(i)})\int_{T^{(i)}_\text{obs}}^\infty f_{T_E | T_C, X}(t_e | T^{(i)}_\text{obs}, X^{(i)})\,dt_e\right]^{1-\delta^{(i)}}\nonumber\\
= \prod_{i=1}^N &\left[f_{T_E|X}(T^{(i)}_\text{obs}| X^{(i)})\frac{\partial}{\partial u_1}\left(C(u_1, u_2)\middle\vert_{\substack{u_1 = S_{T_E|X}(T^{(i)}_\text{obs}|X^{(i)})\\u_2 = S_{T_C|X}(T^{(i)}_\text{obs}|X^{(i)})} }\right)\right]^{\delta^{(i)}}\times&&\text{(Lemma \ref{lemma:copula-conditional})}\\
&\left[f_{T_C|X}(T^{(i)}_\text{obs}| X^{(i)})\frac{\partial}{\partial u_2}\left(C(u_1, u_2)\middle\vert_{\substack{u_1 = S_{T_E|X}(T^{(i)}_\text{obs}|X^{(i)})\\u_2 = S_{T_C|X}(T^{(i)}_\text{obs}|X^{(i)})} }\right)\right]^{1-\delta^{(i)}}\nonumber\\
\therefore \quad \ell(\mathcal{D}) = \sum_{i=1}^N &\delta^{(i)}\log \left[f_{T_E|X}\left(T^{(i)}_\text{obs}| X^{(i)}\right)\right] + \delta^{(i)} \log \left[\frac{\partial}{\partial u_1}C(u_1, u_2)\middle\vert_{\substack{u_1 = S_{T_E|X}(T^{(i)}_\text{obs}|X^{(i)})\\u_2 = S_{T_C|X}(T^{(i)}_\text{obs}|X^{(i)})} }\right] +\\
&(1-\delta^{(i)})\log \left[f_{T_C|X}\left(T^{(i)}_\text{obs}| X^{(i)}\right)\right] + \nonumber\\
&(1-\delta^{(i)}) \log \left[\frac{\partial}{\partial u_2}C(u_1, u_2)\middle\vert_{\substack{u_1 = S_{T_E|X}(T^{(i)}_\text{obs}|X^{(i)})\\u_2 = S_{T_C|X}(T^{(i)}_\text{obs}|X^{(i)})} }\right]\nonumber \end{align}
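The dependent-censoring log-likelihood can be sketched analogously to the independence case by swapping the survival terms for copula partial derivatives. The sketch below assumes a Clayton copula with $\theta > 0$ (any exchangeable copula works, since then $\partial C/\partial u_2 (u_1, u_2) = \partial C/\partial u_1 (u_2, u_1)$); it is illustrative, not our implementation:

```python
import math

def clayton_dC_du1(u1, u2, theta):
    # dC/du1 for the Clayton copula; by exchangeability,
    # dC/du2 (u1, u2) = dC/du1 (u2, u1).
    s = u1 ** -theta + u2 ** -theta - 1.0
    return s ** ((-theta - 1.0) / theta) * u1 ** (-theta - 1.0) if s > 0 else 0.0

def loglik_copula(data, f_e, S_e, f_c, S_c, theta):
    # Right-censored log-likelihood under dependence defined by a Clayton copula.
    ll = 0.0
    for x, t, d in data:
        u1, u2 = S_e(t, x), S_c(t, x)
        if d == 1:
            ll += math.log(f_e(t, x)) + math.log(clayton_dC_du1(u1, u2, theta))
        else:
            ll += math.log(f_c(t, x)) + math.log(clayton_dC_du1(u2, u1, theta))
    return ll
```

As $\theta \rightarrow 0^+$ the Clayton copula approaches independence, so this likelihood approaches the independence likelihood of Appendix \ref{appx:the_right_censored_log_likelihood_under_conditional_independence}.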
\subsection{The Weibull CoxPH Model} \label{appx:the_weibull_coxph_model}
Recall that the Weibull CoxPH model is defined in terms of its hazard, as follows. \begin{equation} \label{eq:model-hazard}
h_{T|X}(\,t|X\,)\ =\ \left(\frac{\nu}{\rho}\right)\left(\frac{t}{\rho}\right)^{\nu-1} \exp\left(\,g_\psi(X)\,\right) \end{equation}
Our method, however, relies on the ability to extract additional quantities -- the density ($\hat{f}_{T|X}$) and survival functions ($\hat{S}_{T|X}$) -- from the model, as these are essential to computing our likelihood function. In this section, we derive the closed-form expressions for these two quantities that are present in the main body of our work.
\subsubsection{The Survival Function} \label{appx:the_survival_function}
The survival function under our model can be derived via its cumulative hazard.
\begin{definition}[Cumulative Hazard] The \textit{cumulative hazard} \begin{equation}
\hat{H}_{T|X}(t|X)\ \triangleq\ \int_{0}^t \hat{h}_{T|X}(u|X)du \end{equation} represents the integral of the hazard function over all time prior to a specified time, $t$. \end{definition}
The cumulative hazard of the Weibull CoxPH can be expressed in closed form as follows: \begin{align} \label{eq:model-cum-hazard}
\hat{H}_{T|X}(\,t|X\,)\ &=\ \int_{0}^{t} \left(\frac{\nu}{\rho}\right)\left(\frac{u}{\rho}\right)^{\nu-1} \exp\left(\,g_\psi(X)\,\right)\,du \\ &= \ \left(\frac{t}{\rho}\right)^{\nu} \exp\left(\,g_\psi(X)\,\right) \label{eq:cumhazard_closedform} \end{align}
One alternative formulation of the survival function expresses $S_{T|X}$ in terms of the hazard function, as follows. \begin{align} \label{eq:model-survival}
S_{T|X}(t|X)\ \triangleq\ \exp(-H_{T|X}(t|X)) \end{align}
We can apply this identity to Equation \ref{eq:cumhazard_closedform} to obtain the following expression for $\hat{S}_{T|X}$ under the Weibull CoxPH model: \begin{align}
\hat{S}_{T|X}(t|X)= \exp\left(-\left(\frac{t}{\rho}\right)^{\nu} \exp\left(\,g_\psi(X)\,\right)\right) \end{align}
\subsubsection{The Density Function} \label{appx:the_density_function}
From Equation 3, we know that the density of an event can be calculated as follows. \begin{equation} \label{eq:model-density}
f_{T|X}(t|X)\ =\ S_{T|X}(t|X) h_{T|X}(t|X) \end{equation}
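Putting the pieces together, the hazard, survival, and density of the Weibull CoxPH model are all available in closed form. A minimal sketch (the function name and the covariate argument are illustrative):

```python
import math

def weibull_coxph(t, x, nu, rho, g):
    # Hazard h(t|x), survival S(t|x) = exp(-H(t|x)), and density f = S * h
    # of the Weibull CoxPH model with shape nu, scale rho, and risk g(x).
    hazard = (nu / rho) * (t / rho) ** (nu - 1.0) * math.exp(g(x))
    survival = math.exp(-((t / rho) ** nu) * math.exp(g(x)))
    density = survival * hazard
    return hazard, survival, density
```

As a quick check, setting $\nu = \rho = 1$ and $g \equiv 0$ recovers the unit-rate exponential distribution (constant hazard 1, survival $e^{-t}$).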
\subsection{A Stable Implementation} \label{appx:a_stable_implementation} In order to optimize a Weibull model in a stable way, we use an alternative representation of the Weibull distribution. This representation is derived by applying a log transformation to the cumulative hazard function of the Weibull distribution. \begin{equation}
\begin{aligned}
H_{T|X}(\,t|X\,) &= \exp(\log(H_{T|X}(t|X)))\\
&=\exp\left(\log\left(\left(\frac{t}{\rho}\right)^{\nu} \exp(g_\psi(X))\right)\right)\\
&=\exp(\nu\log(t) - \nu\log(\rho) + g_\psi(X))
\end{aligned} \end{equation}
Setting $\sigma = \frac{1}{\nu}$, $\mu = \log(\rho)$, and $f(x) = -\frac{g_\psi(X)}{\nu}$ gives us a reparameterized cumulative hazard function of the following form.
\begin{equation} \label{eq:model-cum-hazard_stable}
H_{T|X}(\,t|X\,)\ =\ \exp\left(\frac{\log(t) - \mu - f(x)}{\sigma}\right) \end{equation}
\subsubsection{Hazard function} \label{appx:hazard_function}
Given the formula for the cumulative hazard function we can derive the hazard function in the new format by taking the derivative of cumulative hazard with respect to $t$. \begin{equation}
h_{T|X}(\,t|X\,)=\ \frac{\partial H_{T|X}(\,t|X\,)}{\partial t} =\ \frac{H_{T|X}(\,t|X\,)}{t\sigma} \end{equation}
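The two parameterizations of the cumulative hazard agree exactly; a small numerical sketch (with an illustrative risk function) confirms the algebra:

```python
import math

def cumhaz_original(t, x, nu, rho, g):
    # H(t|x) = (t / rho)^nu * exp(g(x))
    return (t / rho) ** nu * math.exp(g(x))

def cumhaz_stable(t, x, nu, rho, g):
    # Same quantity under sigma = 1/nu, mu = log(rho), f(x) = -g(x)/nu.
    sigma, mu, f = 1.0 / nu, math.log(rho), -g(x) / nu
    return math.exp((math.log(t) - mu - f) / sigma)
```

The stable form avoids raising $t/\rho$ to a power directly, working instead on the log scale.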
\pagebreak \section{Algorithms} \label{appx:algorithms}
\subsection{Computing the Survival-$\ell_1$} \label{appx:computing_the_survival_l1}
Here, we expand on the computation of the Survival-$\ell_1$ metric from the main paper by providing an algorithm for the explicit computation of the inner term of the Survival-$\ell_1$ metric, as well as the value $T_{\text{max}}$ for the given pair of survival curves, $S, \hat{S}$:
\begin{align}
\mathcal{C}_{\textit{Survival-}\ell_1}(S, \hat{S})\quad =\quad \sum_{i=1}^N &\frac{1}{N \times T_{\text{max}}^{(i)}} \underbrace{\int_{0}^{\infty} \left|S_{T\,|\,X}(t\,|\,X^{(i)}) - \hat{S}_{T\,|\,X}(t\,|\,X^{(i)})\right| dt}_{\text{Inner Term}}\nonumber \end{align}
Although the integral in $\mathcal{C}_{\textit{Survival-}\ell_1}$ is over an infinite domain, this approximation truncates the upper bound of integration at $T_\text{max}$.
\RestyleAlgo{ruled} \SetKwComment{Comment}{$\ $\# }{ } \SetKwComment{Commentt}{\# }{ } \setcounter{algocf}{2} { \begin{algorithm}[H] \label{alg:optimization} \KwIn{ \begin{enumerate}
\item $S_1, S_2$: Survival curves to compare under the Survival-$\ell_1$ metric. Here, we assume $S_1$ is the ground-truth survival curve, and $S_2$ is the estimated curve.
\item $Q_{\lVert\cdot\rVert}$: Normalizing quantile.
\item $N_\text{steps}$: Number of discretization steps. \end{enumerate} } \KwResult{ \begin{enumerate}
\item $\Delta_{\text{total}}$: a discretized approximation of the integral $\int_{0}^{T_{\text{max}}} \left|S_{1}(t\,|\,X^{(i)}) - {S}_{2}(t\,|\,X^{(i)})\right| dt$.
\item $T_\text{max}$: This is used as a normalization weight when computing the full expression for the Survival-$\ell_1$ metric. \end{enumerate} } \hrulefill\\ $T_\text{max} \gets {S}^{-1}_{1}\left(Q_{\lVert\cdot\rVert}\right)$\; $\Delta_\text{total} \gets 0$\\ \For{$i = 1,\, ...\,,\, N_\text{steps}$}{
$\Delta_{i;S_1,S_2} \gets \frac{T_{\text{max}}}{N_{\text{steps}}} \times \ell_1\left[{S_1\left(\frac{i \times T_{\max}}{N_{\text{steps}}}\right)}, {S_2\left(\frac{i \times T_{\max}}{N_{\text{steps}}}\right)}\right]$\;
$\Delta_{\text{total}} \gets \Delta_{\text{total}} + \Delta_{i;S_1,S_2}$\; } \Return{$\Delta_{\text{total}}, T_{\text{max}}$} \caption{Discrete Approximation of the Inner Term of the Survival-$\ell_1$} \end{algorithm} }
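The discretization loop of the algorithm above can be sketched compactly in Python (a hedged sketch, mirroring Algorithm 3 rather than reproducing our evaluation code):

```python
def survival_l1_inner(S1, S2, S1_inv, q_norm, n_steps):
    # Right-endpoint Riemann approximation of the inner term of the
    # Survival-l1 metric, truncated at T_max = S1^{-1}(q_norm).
    t_max = S1_inv(q_norm)
    step = t_max / n_steps
    total = sum(step * abs(S1(i * step) - S2(i * step)) for i in range(1, n_steps + 1))
    return total, t_max
```

With simple polynomial survival curves, the approximation converges to the exact integral as `n_steps` grows.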
\pagebreak \subsection{Creating a Semi-Synthetic Dataset with Dependent Censoring} \label{appx:creating_a_semi_synthetic_dataset_with_dependent_censoring}
We convert a regression dataset to a survival dataset with dependent censoring using the following algorithm.
\RestyleAlgo{ruled} \SetKwComment{Comment}{$\ $\# }{ } \SetKwComment{Commentt}{\# }{ } \SetKwInOut{Dependencies}{Dependencies} { \begin{algorithm}[H] \label{alg:datagenerating} \KwIn{ \begin{enumerate}
\item $\mathcal{D}_{\text{reg}} = \left\{X^{(i)}, Y^{(i)}\right\}_{i=1}^N \subseteq \mathcal{X} \times \mathbb{R}_+$. Regression dataset consisting of covariates and labels.
\item $C_\theta: [0,1] \times [0,1] \rightarrow [0,1]$. A bivariate, uniparametric copula. \end{enumerate} } \KwResult{ \begin{enumerate}
\item $\mathcal{D}_{C, \theta} \subseteq \mathcal{X} \times \mathbb{R}_+ \times \{0,1\}$. Artificially censored version of $D_\text{reg}$ in which the joint distribution between $Y$ and $T_C$ is governed by the application of Sklar's Theorem to the copula $C_\theta$. \end{enumerate} }
\hrulefill\\
\Comment{Learn a Weibull CoxPH model based on the outcomes of the train set without any censoring} $\hat{W}_E \gets \texttt{Weibull-Linear}(Y, X, \textbf{1}^N)$\; ${W}_C \gets {W}_E$\;
${{W}_C}.\nu \gets {{W}_C}.\nu / 0.6$ \Comment{Decreases the variance of the censoring distribution} $T_{C} \gets \textbf{0}^N$\; $\mathcal{D}_{C, \theta} = \emptyset$\; \For{$i = 1,\, ...\,,\, N$}{
$u_1^{(i)} \gets \hat{S}_{W_E}(Y^{(i)}); $\Comment{Obtain event quantile}
$u_2^{(i)} \sim C_\theta(\cdot \,\mid\, u_1^{(i)}); $\Comment{Sample censoring quantile conditionally from the copula}
$T_C^{(i)} \gets \hat{S}_{W_C}^{-1}(u_2^{(i)})$; \Comment{Obtain censoring time via inv. censoring survival function}
$\mathcal{D}_{C, \theta} \gets \mathcal{D}_{C, \theta}\, \cup\, \{(X^{(i)}, \min\left(Y^{(i)}, T_C^{(i)}\right), \mathbbm{1}[Y^{(i)} \leq T_C^{(i)}])\}$\; } \Return{$\mathcal{D}_{C, \theta}$}\; \caption{Semi-Synthetic Dataset Construction with Dependent Censoring} \label{alg:semi-synthetic} \end{algorithm} }
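The per-instance loop of Algorithm 4 can be sketched end-to-end for closed-form Weibull margins, using conditional-inverse sampling for a Clayton copula with $\theta > 0$ in place of the generic conditional sampler. This is an illustrative sketch (fixed, rather than fitted, Weibull parameters):

```python
import math
import random

def censor_dataset(ys, theta, nu_e, rho_e, nu_c, rho_c, seed=0):
    # For each label y: u1 = S_E(y); u2 ~ Clayton_theta(.|u1); t_c = S_C^{-1}(u2).
    # Weibull margins: S(t) = exp(-(t/rho)^nu), S^{-1}(u) = rho * (-ln u)^(1/nu).
    rng = random.Random(seed)
    out = []
    for y in ys:
        u1 = math.exp(-((y / rho_e) ** nu_e))          # event quantile
        w = 1.0 - rng.random()                          # uniform on (0, 1]
        u2 = (u1 ** -theta * (w ** (-theta / (1.0 + theta)) - 1.0) + 1.0) ** (-1.0 / theta)
        t_c = rho_c * (-math.log(u2)) ** (1.0 / nu_c)   # censoring time
        out.append((min(y, t_c), 1 if y <= t_c else 0))
    return out
```

Each output pair is a (observed time, event indicator) record, with the observed time never exceeding the original label.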
\section{Additional Experimental Details} \label{appx:additional_experimental_details}
\subsection{Evaluation Metrics Are Biased Under Dependence} \label{appx:evaluation_metrics_are_biased_under_dependence}
For this experiment, we sampled 10,000 data points according to Algorithm \ref{alg:datagenerating} with $X \in \mathbb{R}^{N \times 10} \sim \mathcal{U}_{[0,1]}$, $\nu_E^* = 4, \rho_E^* = 17, \psi_E^*(X) = X_{1}^{2}+X_{2}^{2}$, $\nu_C^* = 3, \rho_C^* = 16, \psi_C^*(X) = \sum_{i=1}^{3}\beta_{C_{i}}X_{i}^{2}$, where $ \beta_C \in [0,1]^{10} \sim \mathcal{U}_{[0,1]}$.
\subsection{Implementation Details} \label{appx:implementation_details}
We halted the learning algorithms if the validation loss failed to improve for a consecutive 3000 epochs. The \texttt{Linear-Risk} experiments were conducted without any form of regularization, whereas the \texttt{Nonlinear-Risk} experiments employed $\ell_2$ regularization with a coefficient of $\lambda=0.001$. For all experiments, the learning rate remained constant at $0.001$.
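The patience-based stopping rule can be sketched generically as follows (the `step_fn`/`val_loss_fn` callables are hypothetical placeholders, not our training code):

```python
def train_with_patience(step_fn, val_loss_fn, max_epochs, patience):
    # Stop once validation loss has not improved for `patience` consecutive epochs.
    best, since_best = float("inf"), 0
    for _ in range(max_epochs):
        step_fn()
        loss = val_loss_fn()
        if loss < best:
            best, since_best = loss, 0
        else:
            since_best += 1
            if since_best >= patience:
                break
    return best
```

In our experiments the patience was 3000 epochs; the sketch returns the best validation loss seen before stopping.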
\section{Datasets and Processing} \label{appx:datasets_and_processing}
\subsection{Steel Industry Energy Consumption (\texttt{STEEL}) Dataset} \label{appx:steel_industry_energy_consumption_dataset}
The \texttt{STEEL} dataset \citep{ve2021efficient, sathishkumar2020energy, sathishkumar2020industry} is a regression dataset from the UCI Machine Learning Repository \citep{asuncion2007uci}, comprising 35,040 observations of the power consumption of plants run by DAEWOO Steel Co. Ltd in Gwangyang, South Korea. The data includes 9 covariates (including day of the week, type of load (light/medium/heavy), CO$_2$ measurements in PPM, and leading/lagging reactive power measurements), and one outcome variable (the industry energy consumption, measured in kWh). For our semi-synthetic experiment, we used $70\%$ of the data as the train set, $15\%$ as the validation set, and $15\%$ as the test set.
\subsection{Airfoil Self-Noise (\texttt{AIRFOIL}) Dataset} \label{appx:airfoil_self_noise_dataset}
The \texttt{Airfoil} dataset \citep{Dua:2019} is another regression dataset from the UCI Machine Learning Repository \citep{asuncion2007uci}. It comprises 1,503 observations obtained from aerodynamic and acoustic tests of two and three-dimensional airfoil blade sections conducted in an anechoic wind tunnel. The data includes 6 covariates (including frequency, angle of attack, chord length, free-stream velocity, suction side displacement thickness) and one outcome variable (scaled sound pressure level). For our semi-synthetic experiment, we used $70\%$ of the data as the train set, $15\%$ as the validation set, and $15\%$ as the test set.
\section{Additional Semi-Synthetic Experimental Results} \label{appx:additional_semi_synthetic_experimental_results}
For the experiments in this section we used a Clayton copula to censor the dataset as described in Algorithm \ref{alg:semi-synthetic}.
\subsection{Semi-Synthetic Survival Regression on the \texttt{STEEL} Dataset} \label{appx:additional_semi_synthetic_results_steel_dataset}
Below, we present the results of our survival regression on the test set of the \texttt{STEEL} dataset.
\begin{table}[H]
\small
\begin{adjustbox}{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
& $\tau = 0.2$ & $\tau = 0.4$ & $\tau = 0.6$ & $\tau = 0.8$\\
\hline
Weibull CoxPH (No Censoring) & 0.513 & 0.513 & 0.513 & 0.513\\
\hline
Weibull CoxPH (Independence Assuming) & 0.333 & 0.309 & 0.324 & 0.341\\
Weibull CoxPH (Dependent, \textbf{ours}) & 0.371 & 0.442 & 0.512 & 0.508\\
\hline
\end{tabular}
\end{adjustbox}
\caption{A table of $R^2$ values given by performing survival regression on the \texttt{STEEL} dataset under various degrees of dependence induced by Algorithm \ref{alg:semi-synthetic}. A higher $R^2$ indicates a better performing algorithm. The top row represents the performance of a Weibull CoxPH model trained on the regression data without censoring; this should indicate an upper bound on the performance of any survival model under censoring. We find that the performance of our approach, though below the theoretical upper bound, lies substantially above that of the independence-assuming approach.}
\label{tab:steel-results} \end{table}
\subsection{Semi-Synthetic Survival Regression on the \texttt{AIRFOIL} Dataset} \label{appx:additional_semi_synthetic_results_airfoil_dataset}
Below, we present the results of our survival regression on the test set of the \texttt{AIRFOIL} dataset. \begin{table}[H]
\small
\begin{adjustbox}{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
&$\tau = 0.2$& $\tau = 0.4$& $\tau = 0.6$& $\tau = 0.8$\\
\hline
Weibull CoxPH (No Censoring)&$0.572$&$0.572$&$0.572$& 0.572\\
\hline
Weibull CoxPH (Independence Assuming)&$0.583$&$0.549$&$0.465$&$0.330$\\
Weibull CoxPH (Dependent, \textbf{ours})&$0.580$&$0.564$&$0.507$&$0.484$\\
\hline \end{tabular}
\end{adjustbox}
\caption{A table of $R^2$ values given by performing survival regression on the \texttt{AIRFOIL} dataset under various degrees of dependence induced by Algorithm \ref{alg:semi-synthetic}. The top row represents the performance of a Weibull CoxPH model trained on the regression data without censoring; this should indicate an upper bound on the performance of any survival model under censoring. While performance of both methods degrades as dependence increases, we find that our method is better able to obtain higher values of $R^2$ than the independence-assuming model under greater degrees of dependence.}
\label{tab:airfoil-results} \end{table}
\end{document} |
\begin{document}
\title{Automorphisms and derivations of affine commutative and PI-algebras}
\author{Oksana Bezushchak} \address{Faculty of Mechanics and Mathematics,
Taras Shevchenko National University of Kyiv, 60, Volodymyrska street, 01033 Kyiv, Ukraine} \email{[email protected]}
\thanks{The first author was supported by the PAUSE program (France), partly supported by UMR 5208 du CNRS; and partly supported by MES of Ukraine: Grant for the perspective development of the scientific direction ``Mathematical sciences and natural sciences'' at TSNUK}
\author{Anatoliy Petravchuk} \address{Faculty of Mechanics and Mathematics,
Taras Shevchenko National University of Kyiv, 60, Volodymyrska street, 01033 Kyiv, Ukraine} \email{[email protected], [email protected]} \thanks{The second author was supported by MES of Ukraine: Grant for the perspective development of the scientific direction "Mathematical sciences and natural sciences" at TSNUK}
\author{Efim Zelmanov} \address{SICM, Southern University of Science and Technology, 518055 Shenzhen, China} \email{[email protected]}
\subjclass[2020]{Primary 13N15, 16R99, 16W20; Secondary 16R40, 16W25, 17B40, 17B66}
\date{\today}
\keywords{PI-algebra, Lie algebra, automorphism, derivation, affine commutative algebra}
\begin{abstract} We prove analogs of A.~Selberg's result for finitely generated subgroups of $\text{Aut}(A)$ and of Engel's theorem for subalgebras of $\text{Der}(A)$, where $A$ is a finitely generated associative commutative algebra over an associative commutative ring. We also prove an analog of the theorem of W.~Burnside and I.~Schur on local finiteness of torsion subgroups of $\text{Aut}(A)$. \end{abstract}
\maketitle
\section{Introduction}
Let $\textbf{A}$ be the algebra of regular (polynomial) functions on an affine algebraic variety $V$ over an associative commutative ring $\Phi$ with $1.$
The group of $\Phi$-linear automorphisms $\text{Aut}(\textbf{A})$ and the Lie algebra of $\Phi$-linear derivations $\text{Der} (\textbf{A})$ are referred to as the group of polynomial automorphisms of $V$ and the Lie algebra of vector fields on $V$, respectively.
When the variety $V$ is irreducible, i.e. the ring $\textbf{A}$ is a domain, the group $\text{Aut}(K)$ of automorphisms of the field $K$ of fractions of $\textbf{A}$ is called the group of birational automorphisms of $V$; and the Lie algebra $\text{Der} (K)$ of derivations of $K$ is called the Lie algebra of rational vector fields on $V$.
Let $\mathbb{F}$ be a field. Then $\mathbb{F}[x_1,\ldots, x_n]$ and $\mathbb{F}(x_1,\ldots, x_n)$ are the polynomial algebra and the field of rational functions. The group $\text{Aut}(\mathbb{F}(x_1,\ldots, x_n))$ and the algebra $\text{Der}(\mathbb{F}(x_1,\ldots, x_n))$ (resp. $\text{Aut}(\mathbb{F}[x_1,\ldots, x_n])$ and $\text{Der}(\mathbb{F}[x_1,\ldots, x_n])$) are called the \emph{Cremona group} and the \emph{Cremona Lie algebra} (resp. \emph{polynomial Cremona group} and \emph{polynomial Cremona Lie algebra}).
Recall that a group is called \emph{linear} if it is embeddable into a group of invertible matrices over an associative commutative ring. Groups $\text{Aut}(\textbf{A})$ are, generally speaking, not linear. It has been an ongoing effort of many years to understand: \begin{center}\emph{which properties of linear groups can be carried over to \\ automorphism groups $\emph{\text{Aut}(\textbf{A})}$ and to Cremona groups}?\end{center}
J.-P.~Serre \cite{Serre38,Serre37} studied finite subgroups of Cremona groups. V.~L.~Popov \cite{Popov28} initiated the study of the question of whether the celebrated Jordan's theorem on finite subgroups of linear groups carries over to the groups $\text{Aut}(A).$ For some important results in this direction see \cite{Bandman_Zarhin5,Birkar8,Deserti11,Popov28,Popov29,Prokhorov_Shramov31}.
S.~Cantat \cite{Cantat10} proved the Tits Alternative for Cremona groups of rank $2$.
In this paper, we prove analogs of A.~Selberg's result \cite{Selberg36} (see also \cite{Alperin2}) for finitely generated subgroups of $\text{Aut}(A)$ and of Engel's theorem for subalgebras of $\text{Der}(A)$ for a finitely generated associative commutative algebra $A.$
We say that a group is \emph{virtually torsion free} if it has a subgroup of finite index that is torsion free.
\begin{theorem}\label{Theorem1} Let $A$ be a finitely generated associative commutative algebra over an associative commutative ring $\Phi$ with $1$. Suppose that $A$ does not have additive torsion. Then
\begin{enumerate}
\item[$(a)$] an arbitrary finitely generated subgroup of the group $\emph{\text{Aut}}(A)$ is virtually torsion free;
\item[$(b)$] if $A$ is a finitely generated ring (i.e. $\Phi$ is the ring of integers $ \mathbb{Z}$), then the group $\emph{\text{Aut}}(A)$ is virtually torsion free. \end{enumerate} \end{theorem}
\begin{corollary}[\rm\textbf{An analog of the theorem of W.~Burnside and I.~Schur}; see \cite{Jacobson17,Jacobson18}]\label{Cor1} Under the assumptions of theorem $\ref{Theorem1}(a)$ every torsion subgroup of $\emph{\text{Aut}}(A)$ is locally finite. \end{corollary}
\begin{corollary}\label{Corollary2} Every torsion subgroup of a polynomial Cremona group \\ $\text{Aut}(\mathbb{F}[x_1,\ldots,x_n]),$ where $\mathbb{F}$ is a field of characteristic zero, has an abelian normal subgroup of finite index. \end{corollary}
Corollary \ref{Corollary2} immediately follows from corollary \ref{Cor1} and from the Jordan property of the group $\text{Aut}(\mathbb{F}[x_1,\ldots,x_n]);$ see \cite{Birkar8,Prokhorov_Shramov31}.
If the torsion subgroup in corollary \ref{Cor1} has bounded exponent, then we do not need any assumptions on additive torsion. Indeed, in \cite{Bass_Lubotzky6}, it was shown that the group ${\rm Aut}(A)$ is locally residually finite. Hence, by the positive solution of the restricted Burnside problem (see \cite{Zelmanov41,Zelmanov42}), such a subgroup is locally finite.
Recall that a derivation $d$ of an algebra $A$ is called \emph{locally nilpotent} if for an arbitrary element $a\in A$ there exists an integer $n(a)\geq 1$ such that $d^{n(a)}(a)=0.$ For more information about locally nilpotent derivations see \cite{Freudenburg12}. An algebra is called \emph{locally nilpotent} if every finitely generated subalgebra is nilpotent.
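As a simple illustration (ours, not from the text), the formal derivative on a polynomial algebra is locally nilpotent but not nilpotent:

```latex
% The derivation d/dx of F[x] kills every polynomial after finitely
% many steps, the number of steps depending on the degree:
d=\frac{d}{dx}\in {\rm Der}\big(\mathbb{F}[x]\big),\qquad
d^{\,n+1}\big(\alpha_0+\alpha_1x+\cdots+\alpha_nx^n\big)=0,
```

so $n(a)=\deg a+1$ works for every polynomial $a$, while no single power of $d$ annihilates all of $\mathbb{F}[x]$.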
Let $L\subseteq {\rm Der}(A)$ be a Lie algebra that consists of locally nilpotent derivations. The question of whether it implies that the Lie algebra $L$ is locally nilpotent was discussed in \cite{Freudenburg12,Petravchuk_Sysak26,Skutin39}. In particular, A.~Skutin \cite{Skutin39} proved local nilpotency of $L$ for a commutative domain $A$ of finite transcendence degree and characteristic zero.
\begin{theorem}\label{Theorem2}
Let $A$ be a finitely generated associative commutative algebra over an associative commutative ring, and let $L$ be a subalgebra of ${\rm Der}(A)$ that consists of locally nilpotent derivations. Then the Lie algebra $L$ is locally nilpotent. \end{theorem}
The assumption of finite generation of the algebra $A$ is essential. If $A$ is the algebra of polynomials in countably many variables over a field, then there exists a non-locally nilpotent Lie subalgebra $L\subseteq {\rm Der}(A)$ that consists of locally nilpotent derivations. The following theorem, however, imposes a finiteness condition that is weaker than finite generation.
Let $A$ be a commutative domain. Let $K$ be the field of fractions of $A.$ An arbitrary derivation of the domain $A$ extends to a derivation of the field $K,$ ${\rm Der} (A)\subseteq {\rm Der} (K).$ We have $K{\rm Der} (K)\subseteq {\rm Der} (K),$ hence ${\rm Der} (K)$ can be viewed as a vector space over the field $K.$
\begin{theorem}\label{Theorem3} Under the assumptions above, let $L\subseteq {\rm Der}(A)$ be a Lie ring that consists of locally nilpotent derivations. Suppose that ${\rm dim}_{K}KL<\infty .$ Then the Lie ring $L$ is locally nilpotent. \end{theorem}
A special case of this theorem was proved by A.~P.~Petravchuk and K.~Ya.~Sysak in \cite{Petravchuk_Sysak26}.
The proof of theorem \ref{Theorem3} is based on a stronger version of theorem \ref{Theorem2}, which is of independent interest.
Recall that a subalgebra $B$ of an associative commutative algebra $A$ is called an \emph{order} in $A$ if there exists a multiplicative semigroup $S\subset B$ such that \begin{enumerate}
\item[(1)] every element from $S$ is invertible in $A,$
\item[(2)] an arbitrary element $a\in A$ can be represented as $a=s^{-1}b,$ where $s\in S$ and $b\in B.$ \end{enumerate}
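A standard example (added for illustration): the integers form an order in the rationals.

```latex
% B = Z is an order in A = Q with S = Z \setminus {0}:
% every s in S is invertible in Q, and every rational number
% can be written as a = s^{-1} b with s in S and b in B.
B=\mathbb{Z}\subset A=\mathbb{Q},\qquad S=\mathbb{Z}\setminus\{0\},\qquad
a=\frac{b}{s}=s^{-1}b .
```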
Let $L \subseteq {\rm Der}(A)$ be a subalgebra. The subset
$A_L=\{ a\in A \ | \ \text{for an arbitrary} \ d\in L \ \text{there exists an integer} \ n(d)\geq 1 \ \text{such that} \ d^{n(d)}(a)=0\}$ is a subalgebra of the algebra $A.$
\begin{proposition}\label{Proposition1}
Let $A$ be a finitely generated commutative domain. Let $L$ be a subalgebra of ${\rm Der}(A).$ If the subalgebra $A_L$ is an order in $A$, then the Lie algebra $L$ is locally nilpotent. \end{proposition}
To achieve a natural generality and to expand to noncommutative cases we extended theorems \ref{Theorem1} and \ref{Theorem2} to algebras with polynomial identities, i.e. $\text{PI}$-algebras; see \cite{Aljad_Gia_Procesi_Regev1,Belov_Rowen7,Rowen35}.
A $\text{PI}$-algebra is called \emph{representable} if it is embeddable in a matrix algebra over an associative commutative algebra. In \cite{Small40}, L. W. Small constructed an example of a finitely generated $\text{PI}$-algebra that is not representable.
\begin{theorem}\label{Theorem4}
Let $A$ be a finitely generated representable $\emph{\text{PI}}$-algebra over an associative commutative ring. Suppose that $A$ does not have additive torsion. Then
\begin{enumerate}
\item[(a)] an arbitrary finitely generated subgroup of the group ${\rm Aut}(A)$ is virtually torsion free;
\item[(b)] if $A$ is a finitely generated ring, then the group ${\rm Aut}(A)$ is virtually torsion free.
\end{enumerate} \end{theorem}
\begin{theorem}\label{Theorem5}
Let $A$ be a finitely generated $\emph{\text{PI}}$-algebra over an associative commutative ring. Suppose that $A$ does not have additive torsion. Then an arbitrary torsion subgroup of ${\rm Aut}(A)$ is locally finite. \end{theorem}
We remark that theorem \ref{Theorem5} does not assume representability.
C.~Procesi \cite{Procesi30} proved local finiteness of torsion subgroups of multiplicative groups of $\text{PI}$-algebras.
\begin{theorem}\label{Theorem6}
Let $A$ be a finitely generated $\emph{\text{PI}}$-algebra over an associative commutative ring. Let $L\subseteq {\rm Der}(A)$ be a subalgebra that consists of locally nilpotent derivations. Then the Lie algebra $L$ is locally nilpotent. \end{theorem}
\section{Preliminaries}
In this section, we review some facts that will be used in proofs.
\subsection{}\label{1.1} Theorems \ref{Theorem1}, \ref{Theorem2}, \ref{Theorem5} and \ref{Theorem6} were formulated for finitely generated algebras over an associative commutative ring $\Phi .$ We will show that it suffices to assume $\Phi =\mathbb Z,$ that is, to prove the theorems for finitely generated rings. In particular, theorems \ref{Theorem1}(b) and \ref{Theorem4}(b) imply theorems \ref{Theorem1}(a) and \ref{Theorem4}(a), respectively. We will carry out the reduction for theorem \ref{Theorem6}; the arguments for theorems \ref{Theorem1}, \ref{Theorem2} and \ref{Theorem5} are entirely analogous.
Let $\Phi$ be an associative commutative ring and let $A$ be an associative $\text{PI}$-algebra over $\Phi$ (see \textbf{\ref{1.2}}) generated by elements $a_1, \ldots , a_m,$ with $1\in A.$ Let $L\subseteq {\rm Der} _{\Phi}(A)$ be a Lie subalgebra generated by derivations $d_1, \ldots , d_n.$ Suppose that every derivation in $L$ is locally nilpotent. Let $\Phi \langle x_1, \ldots , x_m\rangle$ be the free associative $\Phi$-algebra in free generators $x_1, \ldots , x_m.$ Then there exist elements $f_{ij}(x_1, \ldots , x_m), 1\leq i\leq n, 1\leq j\leq m,$ such that $d_i(a_j)=f_{ij}(a_1, \ldots , a_m).$
Let $A_1$ be the subring of $A$ generated by elements $1, a_1, \ldots ,a_m$ and by all coefficients of the elements $f_{ij}(x_1, \ldots ,x_m).$ It is straightforward that the subring $A_1$ is invariant under $d_1, \ldots , d_n.$ Assuming that theorem \ref{Theorem6} is true for $\Phi =\mathbb Z$, there exists an integer $r\geq 1$ such that $L^r(A_1)=(0).$ In particular, $L^r(a_i)=(0), 1\leq i\leq m.$ Since the elements $a_1, \ldots , a_m$ generate the $\Phi$-algebra $A$ we conclude that $L^r=(0).$
Let us review some basic definitions and facts about $\text{PI}$-algebras that can be found in the books \cite{Aljad_Gia_Procesi_Regev1,Belov_Rowen7,Rowen35}.
\subsection{}\label{1.2} An associative algebra $A$ over an associative commutative ring $\Phi \ni 1$ is called a \emph{$\text{PI}$-algebra} if there exists an element $$ f(x_1, \ldots , x_n)=x_1\cdots x_n+\sum _{1\not =\sigma \in S_n}\alpha _{\sigma}x_{\sigma (1)}\cdots x_{\sigma (n)}$$ of the free associative algebra $\Phi \langle x_1, \ldots , x_n\rangle$ such that $f(a_1, \ldots , a_n)=0$ for arbitrary elements $a_1, \ldots , a_n\in A;$ hereafter $S_n$ is the group of permutations of the set $\{1,\ldots, n \}$. In this case we say that the algebra $A$ satisfies the identity $f(x_1, \ldots , x_n)=0.$
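Two familiar examples (added for illustration): every associative commutative algebra satisfies the identity $x_1x_2-x_2x_1=0$, and, by the Amitsur--Levitzki theorem, the matrix algebra $M_n(\Phi)$ satisfies the standard identity of degree $2n$:

```latex
% The standard identity S_{2n} vanishes identically on n x n matrices;
% its leading term is x_1 ... x_{2n}, so it has the form required above.
S_{2n}(x_1,\ldots,x_{2n})=\sum_{\sigma\in S_{2n}}(-1)^{|\sigma|}
x_{\sigma(1)}\cdots x_{\sigma(2n)}=0 \quad\text{on } M_n(\Phi).
```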
If $A$ is a $\text{PI}$-algebra, then it satisfies an identity with all the coefficients $\alpha _{\sigma}, 1\not =\sigma\in S_n,$ lying in $\mathbb Z.$ In other words, every $\text{PI}$-algebra is $\text{PI}$ over $\mathbb Z,$ i.e. $\text{PI}$ as a ring.
\subsection{}\label{1.6} A ring $A$ is called \emph{prime} if the product of any two nonzero ideals is different from zero. If $A$ is a prime $\text{PI}$-ring, then the center $$Z=\{ a\in A \ | \ ab=ba \ \text{for an arbitrary element} \ b\in A\}\not =(0)$$ and the ring of fractions $(Z\setminus ( 0))^{-1}A$ is a finite-dimensional central simple algebra over the field of fractions of the domain $Z$; see \cite{Markov22,Rowen34}.
\subsection{}\label{1.7} A ring $A$ is called \emph{semiprime} if it does not contain nonzero nilpotent ideals. Let $A$ be a finitely generated semiprime $\text{PI}$-ring. Let $Z$ be the center of $A$ and let $Z^*$ denote the set of elements from $Z$ that are not zero divisors. Then the ring of fractions ${(Z^{*})}^{-1}A$ is a finite direct sum of simple finite-dimensional (over their centers) algebras.
\subsection{}\label{1.8} An element $a\in L$ of a Lie algebra $L$ is called ad-\emph{nilpotent} if the operator $${\rm ad} (a) : L\to L, \quad {\rm ad} (a): x \mapsto [a, x],$$ is nilpotent.
Suppose that a Lie algebra $L$ is generated by elements $a_1, \ldots , a_m.$ Commutators in $a_1, \ldots , a_m$ are defined via the following rules: \begin{enumerate}
\item[(i)] an arbitrary generator $a_i, 1\leq i\leq m,$ is a commutator in $a_1, \ldots , a_m;$
\item[(ii)] if $\rho '$ and $\rho ''$ are commutators in $a_1, \ldots , a_m$, then $\rho =[\rho ', \rho '']$ is a commutator in $a_1, \ldots , a_m.$ \end{enumerate}
An element $a\in L$ is called a \emph{commutator} in $a_1, \ldots , a_m$ if it can be obtained by applying rules (i) and (ii).
A Lie algebra $L$ over an associative commutative ring $\Phi \ni 1$ is called $\text{PI}$ (\emph{satisfies a polynomial identity}) if there exists a multilinear element of the free Lie algebra $$f(x_0, x_1, \ldots , x_n)=\Big({\rm ad} (x_1)\cdots {\rm ad} (x_n) +\sum _{1\not =\sigma \in S_n}\alpha _{\sigma}{\rm ad} (x_{\sigma (1)})\cdots {\rm ad} (x_{\sigma (n)})\Big)x_0, \quad \alpha _{\sigma}\in \Phi,$$ such that $f(a_0, a_1, \ldots , a_n)=0$ for arbitrary elements $a_0, a_1, \ldots , a_n\in L.$
The following theorem was proved in \cite{Zelmanov42}.
\begin{theo*}[\cite{Zelmanov42}] Let $L$ be a Lie $\emph{\text{PI}}$-algebra over an associative commutative ring generated by elements $a_1, \ldots , a_m.$ Suppose that every commutator in $a_1, \ldots , a_m$ is $\emph{\text{ad}}$-nilpotent. Then the Lie algebra $L$ is nilpotent.\end{theo*}
\section{Groups of automorphisms}
\begin{lemma}\label{Lemma1}
Let $A$ be a finitely generated commutative domain without additive torsion. Then the group ${\rm Aut} (A)$ is virtually torsion free. \end{lemma}
\begin{proof}
Let $I$ be a maximal ideal of the ring $A.$ The field $A/I$ is finitely generated, hence $A/I$ is a finite field, $A/I\simeq GF(p^l).$ Let $\mathcal P$ be the set of all ideals $P\vartriangleleft A$ such that $A/P\simeq GF(p^l).$ Let $P_0$ be the ideal of the ring $A$ generated by all elements $a^{p^l}-a, a\in A,$ and by the prime number $p.$ It is easy to see that the ring $A/P_0$ is finite, $P_0\subseteq \cap _{P\in \mathcal P}P.$
This implies that the set $\mathcal P$ is finite.
Automorphisms of the ring $A$ permute ideals from $\mathcal P.$ The ideal $I$ belongs to $\mathcal P.$ Hence, there exists a subgroup $ H_1 \leq {\rm Aut} (A), |{\rm Aut} (A): H_1|<\infty ,$ that leaves the ideal $I$ invariant. We have $|A: I^2|<\infty .$ Therefore, there exists a subgroup $H_2\leq H_1, |{\rm Aut} (A):H_2|<\infty ,$ such that
$$(1-h)(A)\subseteq I^2$$
for an arbitrary element $h\in H_2.$ Furthermore, if $a_1, \ldots , a_k\in I,$ then
$$(h-1)(a_1\cdots a_k)=(h(a_1)-a_1+a_1)\cdots (h(a_k)-a_k+a_k)-a_1\cdots a_k=\sum b_1\cdots b_k,$$
where each $b_i=(h-1)(a_i)$ or $a_i$ and in each summand at least one element $b_i$ is equal to $(h-1)(a_i).$ This implies that
$$(1-h)(I^k)\subseteq I^{k+1}.$$
By the Krull intersection theorem (see \cite{Atiyah_Macdonald4}), we have $$\bigcap _{k\geq 1}I^k=(0).$$ If an element from $H_2$ has finite order, then this order must be a power of the prime number~$p.$
Consider the ring $$\widetilde{A}=\langle 1/p , A\rangle \subseteq A\otimes _{\mathbb Z}\mathbb Q,$$ where $\mathbb Q$ is the field of rational numbers. If $\widetilde J$ is a maximal ideal of the ring $\widetilde A,$ then
$$\widetilde{A}/\widetilde{J}\simeq GF(q^t) \quad \text{for prime } \quad q, \quad q\not =p, \quad \text{and} \quad \bigcap _{k\geq 1}\widetilde{J}\,^k=(0).$$
Let $J=\widetilde{J}\cap A.$ Arguing as above, we find a subgroup $H_3\leq {\rm Aut} (A)$ of finite index such that $(1-h)(J^k)\subseteq J^{k+1}, k\geq 0,$ for an arbitrary element $h\in H_3.$ Hence, if an element from $H_3$ has finite order, then this order must be a power of the prime number $q.$
Now, $H_2\cap H_3$ is a torsion free subgroup of ${\rm Aut} (A).$ This completes the proof of the lemma. \end{proof}
\begin{lemma}\label{Lemma2}
Let $A$ be a semiprime finitely generated associative commutative ring without additive torsion. Then the group ${\rm Aut} (A)$ is virtually torsion free.
\end{lemma} \begin{proof}
Let $S\subset A$ be the set of all nonzero elements that are not zero divisors. Then the ring of fractions $S^{-1}A$ is a direct sum of fields, $S^{-1}A=\mathbb{F}_1\oplus \cdots \oplus \mathbb{F}_k.$ An arbitrary automorphism of the ring $A$ extends to an automorphism of $S^{-1}A.$
Hence, there exists a subgroup $H\leq {\rm Aut} (A)$ of finite index such that every automorphism from $H$ leaves the summands $\mathbb{F}_1, \ldots , \mathbb{F}_k$ invariant. For each $i, 1\leq i\leq k,$ the factor-ring
$$K=A/A\cap ( \mathbb{F}_1\oplus \cdots \oplus \mathbb{F}_{i-1} \oplus \mathbb{F}_{i+1}\oplus \cdots \oplus \mathbb{F}_k)$$
is a domain without additive torsion. By lemma \ref{Lemma1}, there exists a subgroup $H_i< H$ of finite index such that the image of $H_i$ in $ {\rm Aut} (K)$ is torsion free. This implies that the group $\cap _{i=1}^kH_i$ is torsion free. Indeed, if an element $h\in \cap _{i=1}^kH_i$ has finite order, then $h$ acts identically on each factor-ring $K,$ and we get
$$(1-h)(A)\subseteq \bigcap _{i=1}^k(\mathbb{F}_1\oplus \cdots \oplus \mathbb{F}_{i-1} \oplus \mathbb{F}_{i+1}\oplus \cdots \oplus \mathbb{F}_k)=(0).$$
This completes the proof of the lemma. \end{proof}
\begin{proof}[Proof of theorem \emph{\ref{Theorem4}(b)}.] Let $A$ be a finitely generated representable $\text{PI}$-ring that does not have additive torsion. A.~I.~Malcev \cite{Malcev21} showed that the ring $A$ is embeddable in a matrix algebra over a field of characteristic zero, $A\hookrightarrow M_n(\mathbb{F}), \ {\rm char}\, \mathbb{F}=0.$ Let $a_1, \ldots , a_m$ be generators of the ring $A,$ and let $\mathbb Z\langle x_1, \ldots , x_m\rangle$ be the free associative ring on free generators $x_1, \ldots , x_m.$ If $R\subseteq \mathbb Z\langle x_1, \ldots , x_m\rangle$ is a set of defining relations of the ring $A$ in the generators $a_1, \ldots ,a_m,$ then $A\simeq \langle x_1, \ldots , x_m \ | \ R=(0)\rangle .$
Let $n, m\geq 2.$ Consider $m$ generic $n\times n$ matrices $$X_k=(x_{ij}^{(k)})_{1\leq i, j\leq n}, 1\leq k\leq m.$$
These are $n\times n$ matrices over the polynomial ring $\mathbb Z[X],$ where $$ X=\{x_{ij}^{(k)}, 1\leq i, j\leq n, 1\leq k\leq m\}$$ is the set of variables. The ring $G(m,n)$ generated by generic matrices $ {X}_1, \ldots , {X}_m$ is a domain and it is $\text{PI}$; see \cite{Amitsur3}.
For a relation $r\in R$ let
$$r(X_1, \ldots , X_m)=\big(r_{ij}(X)\big)_{1\leq i, j\leq n},\quad r_{ij}(X)\in \mathbb Z[X].$$
Consider the associative commutative ring $U$ presented by generators $X$ and relations
$r_{ij}(X)=0, $ $r\in R, $ $ 1\leq i, j\leq n,$ i.e. $$ U=\mathbb Z[X]/I, \quad
I={\rm id}_{\mathbb Z[X]}\big(r_{ij}(X), \ r\in R, \ 1\leq i, j\leq n\big).$$
Since the ring $A$ is embeddable in $M_n(\mathbb{F})$ it follows that the homomorphism $$u:A\to M_{n}(U), \quad u(a_k)=X_k+I\in M_n(U),\quad 1\leq k\leq m,$$ is an embedding. Moreover, the ring $U$ has the following universal property: \\ if $C$ is an associative commutative ring and $\varphi :A\to M_n(C)$ is an embedding, then there exists a unique homomorphism $U\to C$ that makes the diagram
$$ \xymatrix{ {A}\ar [r] ^{u} \ar[rd]_{\varphi} & {M_n(U)} \ar[d] \\
& {M_n(C)} }$$
commutative.
This implies that every automorphism of the ring $A$ gives rise to an automorphism of the ring $U.$ Let
$$T(U)=\{ x\in U \ | \ \text{there exists an integer} \ k\geq 1 \ \text{such that} \ kx=0\}$$
be the torsion part of the ring $U.$ Let $J\big(U/T(U)\big)$ be the radical of the ring $U/T(U),$ $J\big(U/T(U)\big)=J/T(U),$ where
$$(0)\subseteq T(U)\subseteq J\vartriangleleft U, \quad \overline{U}=U/J.$$
The factor-ring $\overline{U}$ is semiprime and does not have additive torsion. An arbitrary automorphism of the ring $A$ gives rise to an automorphism of $\overline{U}.$
Since the ring $A$ is embeddable in $M_n(\mathbb{F})$, ${\rm char}\, \mathbb{F}=0,$ it follows that $A$ is embeddable in $M_n(\overline{U})$ and the group ${\rm Aut} (A)$ is embeddable in ${\rm Aut} (\overline{U}).$ By lemma \ref{Lemma2}, the group ${\rm Aut} (\overline{U})$ is virtually torsion free and so is ${\rm Aut} (A).$ This
completes the proof of theorem \ref{Theorem4}(b). \end{proof}
Recall that theorem \ref{Theorem4}(b) implies theorems \ref{Theorem1} and \ref{Theorem4}(a).
We will discuss the annoying representability assumption in theorem \ref{Theorem4}. Let $A$ be a finitely generated $\text{PI}$-algebra over the field of rational numbers $\mathbb Q,$ and let $J$ be the Jacobson radical of the algebra $A.$ By \cite{Braun9}, the Jacobson radical of a finitely generated $\text{PI}$-ring is nilpotent. So, the radical $J$ is nilpotent. The stabilizer of the descending chain $A\supset J\supset J^2\supset \cdots \ $ in $ {\rm Aut} (A)$ is torsion free. Indeed, let $\varphi \in {\rm Aut} (A)$ and $(1-\varphi )J^i\subseteq J^{i+1}, i\geq 0.$ We assume that $\varphi ^{n}=1.$ Then we have $$\varphi ^{n}=(\varphi -1+1)^n=\sum _{i=2}^n\binom{n}{i}(\varphi -1)^i+n(\varphi -1)+1.$$ Hence, $$n(1-\varphi )=\sum _{i= 2}^n\binom{n}{i}(\varphi -1)^i.$$ Suppose that $a\in A$ and $(1-\varphi )a\not=0.$ Let $(1-\varphi )a\in J^k\setminus J^{k+1}.$ By the above, $n(1-\varphi )a\in (\varphi -1)J^k\subseteq J^{k+1},$ a contradiction.
If the group ${\rm Aut} (A/J^2)$ is virtually torsion free, then so is the group ${\rm Aut} (A).$ Indeed, let $H$ be a torsion free subgroup of finite index in ${\rm Aut} (A/J^2)$ and let $\widetilde H$ be the preimage of $H$ under the homomorphism ${\rm Aut} (A)\to {\rm Aut} (A/J^2).$ If $h\in \widetilde{H}$ is a torsion element, then $h$ acts identically modulo $J^2,$ hence $h$ stabilizes the chain $A\supset J\supset J^2\supset \cdots $ and $h=1.$ We proved that the subgroup $\widetilde H$ of ${\rm Aut} (A)$ is torsion free.
In all known examples of nonrepresentable finitely generated $\text{PI}$-algebras the Jacobson radical is nilpotent of degree $\geq 3.$
\begin{Conjecture*} A finitely generated $\text{PI}$-algebra with $J^2=0$ is representable. \end{Conjecture*}
If this conjecture is true, then the representability assumption in theorem \ref{Theorem4} can be dropped.
The analog of Selberg's theorem holds for automorphism groups of some algebras that are far from being $\text{PI}.$
\begin{proposition}\label{Proposition2}
Let $A=\mathbb Z\langle x_1, \ldots, x_m\rangle, m\geq 2,$ be the free associative ring on free generators $ x_1, \ldots, x_m.$ The group of automorphisms ${\rm Aut} (A)$ is virtually torsion free. \end{proposition}
\begin{proof}
Let $p$ be a prime number. Let $I_p$ be the ideal of the algebra $A$ generated by $p$ and by all elements $a^p-a, a \in A.$ The ideal $I_p$ is invariant under all automorphisms, the factor-ring $A/I_p$ is finite and constant terms of all elements in $I_p$ are divisible by $p.$ Hence, $$\bigcap _{i\geq 1}I_{p}\,^{i}=(0).$$ The subgroup
$$H_1={\rm ker} \big({\rm Aut} (A)\to {\rm Aut} (A/I_p^{2})\big)$$ has finite index in ${\rm Aut} (A),$ and the order of every element of finite order in $H_1$ is a power of $p.$ Now, choose a prime number $q,$ $ p\not =q.$ The subgroup $$H_2={\rm ker} \big({\rm Aut} (A)\to {\rm Aut} (A/I_q^{2})\big)$$ also has finite index in ${\rm Aut} (A),$ and the order of every element of finite order in $H_2$ is a power of $q.$ The subgroup $H_1\cap H_2$ is torsion free and has finite index in ${\rm Aut} (A).$ This completes the proof of the proposition. \end{proof}
\begin{lemma}\label{Lemma3}
Let $A$ be a \emph{$\text{PI}$}-algebra. Let $\phantom{i}_{A}{M}$ be a finitely generated left $A$-module. Then the algebra of $A$-module endomorphisms of the module $\phantom{i}_{A}{M}$ is \emph{$\text{PI}.$} \end{lemma} \begin{proof}
Let $M=\sum _{i=1}^{n}Am_i.$ Consider the free $A$-module $V$ on free generators $x_1, \ldots ,x_n:$ $$ V=\sum _{i=1}^{n}Ax_i,$$ and the homomorphism
$$f: V \to M, \quad x_i \mapsto m_i,\quad 1\leq i\leq n.$$
Denote its kernel as $V_0.$ Let
$$E_1=\{ \varphi \in {\rm End} _{A}(V) \ | \ \varphi (V_0)\subseteq V_0\}, \quad E_2=\{ \varphi \in {\rm End} _{A}(V) \ | \ \varphi (V)\subseteq V_0\}.$$
Then
$$ {\rm End} _{A}(M)\simeq E_1/E_2.$$
The algebra ${\rm End}_A(V)$ is isomorphic to the algebra of $n\times n$ matrices over $A$. Hence, ${\rm End} _A(V)$ is a $\text{PI}$-algebra. This implies that $E_1$ and $E_1/E_2$ are $\text{PI}$-algebras. \end{proof}
\begin{proof}[Proof of theorem \emph{\ref{Theorem5}}.] Let $A$ be a finitely generated $\text{PI}$-algebra over $\mathbb Q$, and let $G$ be a finitely generated torsion subgroup of ${\rm Aut} (A).$ Consider the Jacobson radical $J$ of the algebra $A.$ The semisimple algebra $\overline{A}=A/J$ is representable; see \cite{Herstein16}. Hence, by theorem \ref{Theorem4}(a), the group ${\rm Aut} (\overline{ A})$ has Selberg's property, and the image of the group $G$ in ${\rm Aut} (\overline{ A})$ is finite. In other words, the subgroup
$ H=\{\varphi \in G \ | \ (1-\varphi )(A)\subseteq J\}$ has finite index in $G.$
Consider the subgroup $$K=\{\varphi \in {\rm Aut} (A) \ | \ (1-\varphi )(A)\subseteq J^2\}.$$ We showed that this subgroup stabilizes the descending chain $A\supset J\supset J^2\supset \cdots ,$ hence $K$ is a torsion free group. Therefore, $G\cap K=(1)$, and the homomorphism $G\to {\rm Aut} (A/J^2)$ is an embedding. Without loss of generality, we will assume that $J^2=(0).$ The radical $J$ can be viewed as an $\overline A$-bimodule.
Let $a_1, \ldots , a_m$ be generators of the algebra $A,$ and let $h_1, \ldots , h_r$ be generators of the subgroup $H.$ We have $(1-h_i)(A)\subseteq J, J^2=0,$ hence $1-h_i$ is a derivation of the algebra $A.$ This implies that $(1-h_i)(A)$ lies in the $\overline A$-subbimodule of $J$ generated by elements ${(1-h_i)(a_1), \ldots , (1-h_i)(a_m).}$ Let $J'$ be the $\overline A$-subbimodule of $J$ generated by elements ${(1-h_i)(a_j), 1\leq i\leq r, 1\leq j\leq m.}$ The finitely generated subbimodule $J'$ is invariant with respect to the action of $H.$ For an automorphism $h\in H,$ consider the restriction ${\rm Res}(h)$ of $h$ to $J'.$ This restriction is a bimodule automorphism of the $\overline A$-bimodule $J'.$ The mapping $$\varphi : H\to GL(_{\overline A}J'_{\overline A}), \quad h\mapsto {\rm Res}(h),$$ is a homomorphism to the group of bimodule automorphisms $GL(_{\overline A}J'_{\overline A}).$
The $\overline A$-bimodule $J'$ is a left module over the algebra ${\overline A}\bigotimes _{\mathbb Q}{\overline A}^{op}$ and
$$ GL(_{\overline A}J'_{\overline A})=GL_{{\overline A}\bigotimes _{\mathbb Q}{\overline A}^{op}}(J').$$
The algebra ${\overline A}\bigotimes _{\mathbb Q}{\overline A}^{op}$ is $\text{PI}$; see \cite{Regev33}. By lemma \ref{Lemma3}, the algebra $${\rm End} _{{\overline A}\bigotimes _{\mathbb Q}{\overline A}^{op}}(J')$$ is $\text{PI}$ as well. Thus, $\varphi (H)$ is a finitely generated torsion subgroup of the multiplicative group of a $\text{PI}$-algebra. By the result of C.~Procesi \cite{Procesi30}, the group $\varphi (H)$ is finite. The kernel $H'={\rm ker}\, \varphi$ is a subgroup of finite index in $G$ and for an arbitrary element $h\in H'$ we have $(1-h)(A)\subseteq J', (1-h)(J')=(0).$ Let $h^k=1, k\geq 1.$ We have
$$1-h^k\equiv k(1-h) \pmod{(1-h)^2}.$$
This implies $k(1-h)(A)=0$ and, therefore, $h=1, H'=(1).$
Hence, $|G|<\infty .$ This completes the proof of the theorem.\end{proof}
\section{Lie rings of locally nilpotent derivations}
\begin{proposition}\label{Proposition3}
Let $A$ be a finitely generated $\emph{\text{PI}}$-ring. Then the Lie ring ${\rm Der} (A)$ is $\emph{\text{PI}}.$ \end{proposition} \begin{proof}
For an integer $n\geq 2$ consider the following element $P_n$ of the free Lie ring:
$$P_n(x_0, x_1, \ldots , x_n)=\sum_{\sigma \in S_n}(-1)^{|\sigma |}{\rm ad} (x_{\sigma (1)})\cdots {\rm ad} (x_{\sigma (n)})x_0.$$
For an associative commutative ring $\Phi$ let $W_{\Phi}(n)$ denote the Lie $\Phi$-algebra of $\Phi$-linear derivations of the polynomial algebra $\Phi [x_1, \ldots ,x_n].$ In \cite{Razmyslov32}, Yu.~P.~Razmyslov proved that for a field $\mathbb{F}$ of characteristic zero the Lie algebra $W_{\mathbb{F}}(n)$ satisfies the identity $P_N=0,$ where $N=(n+1)^{2}.$ The Lie ring $W_{\mathbb Z}(n)$ is a subring of the $\mathbb Q$-algebra $W_{\mathbb Q}(n);$ hence $W_{\mathbb Z}(n)$ satisfies the identity $P_N=0$ as well. Let $A$ be a $\text{PI}$-ring generated by elements $a_1, \ldots , a_m.$ Since $A$ is a finitely generated $\text{PI}$-ring, it is an epimorphic image of the ring of generic matrices $G(m, n)$ for some integers $m, n\geq 2;$ see \cite{Belov_Rowen7,Kemer19}. Let
$$ G(m, n)\to A, \quad X_k=\big(x_{ij}^{(k)}\big)_{1\leq i, j\leq n}\mapsto a_k, \quad 1\leq k\leq m,$$
be an epimorphism. Let $N=(n^2m+1)^2.$ We will show that the Lie ring ${\rm Der} (A)$ satisfies the identity $P_N=0.$
Denote $$X=\{\,x_{ij}^{(k)} \ | \ 1\leq i, j\leq n, \quad 1\leq k\leq m\, \}.$$ Choose derivations $d_0, d_1, \ldots , d_N\in {\rm Der} (A).$ There exist elements $f_{st}(x_1, \ldots , x_m)$ of the free associative ring $\mathbb Z\langle x_1, \ldots , x_m\rangle, $ $ 0\leq s\leq N,$ $ 1\leq t\leq m,$ such that
$$d_s(a_t)=f_{st}(a_1, \ldots , a_m).$$
Let $$f_{st}(X_1, \ldots , X_m)=\big(g_{ij}^{st}(X)\big)_{1\leq i, j\leq n},$$ where $g_{ij}^{st}(X)\in \mathbb Z[X]$ are the entries of the matrix $f_{st}(X_1, \ldots , X_m).$
Consider derivations $ \widetilde{d}_{s}$ of the ring $\mathbb Z[X],$ $$\widetilde{d}_{s}(x_{ij}^{(t)})=g_{ij}^{st}(X), \quad 1\leq i, j\leq n, \quad 0\leq s\leq N, \quad 1\leq t\leq m.$$
Let $L$ be the Lie subring generated by the derivations $ \widetilde{d}_{s}, 0\leq s\leq N,$ in ${\rm Der} (\mathbb Z[X]).$ The mapping $ \widetilde{d}_{s}\to d_{s}, 0\leq s\leq N,$ extends to a homomorphism $L\to {\rm Der} (A).$ This implies $P_N(d_0, d_1, \ldots , d_N)=0$ and completes the proof of the proposition. \end{proof}
Now, our aim is to prove theorem \ref{Theorem6}. In view of \textbf{\ref{1.1}}, we will assume that the finitely generated $\text{PI}$-algebra $A$ of theorem \ref{Theorem6} is a finitely generated ring.
Let us prove theorem \ref{Theorem6} and proposition \ref{Proposition1} in the case of prime characteristic.
Let $A$ be a finitely generated $\text{PI}$-ring and let $L\subseteq {\rm Der} (A)$ be a Lie ring that consists of locally nilpotent derivations. Suppose further that there exists a prime number $p\geq 2$ such that $pA=(0).$
Let $a_1, \ldots , a_m$ be generators of the ring $A.$ Let $d\in L.$ There exists a power $p^k$ of the prime number $p$ such that $$d^{p^k}(a_i)=0, \quad 1\leq i\leq m.$$
The power $d^{p^k}$ is again a derivation of the ring $A.$ Hence $d^{p^k}=0.$ This implies that ${\rm ad} (d)^{p^k}=0$ in the Lie ring $L.$ By proposition \ref{Proposition3}, the Lie ring $L$ is $\text{PI},$ and by results of \cite{Zelmanov43} (see \textbf{\ref{1.7}}), the Lie ring $L$ is locally nilpotent. Moreover, every finitely generated subalgebra $L_1$ of $L$ acts on $A$ nilpotently, i.e. there exists an integer $s\geq 1$ such that $$\underbrace{L_1\cdots L_1}_{s}A=(0).$$ This proves theorem \ref{Theorem6} in the case of a prime characteristic.
Now, let $A$ be an associative commutative ring generated by elements $a_1, \ldots , a_m,$ let $p$ be a prime number such that $pA=(0),$ and let $L\subseteq {\rm Der} (A)$ be a Lie subring of ${\rm Der} (A).$ Suppose that the subring $A_L$ is an order in $A.$ Then $a_i=b_{i}^{-1}c_i, 1\leq i\leq m,$ where $b_i, c_i\in A_L.$ For an arbitrary derivation $d\in L$ there exists a power $p^k$ such that $d^{p^k}(b_i)=d^{p^k}(c_i)=0, 1\leq i\leq m.$ Then $d^{p^k}(a_i)=0, 1\leq i\leq m, $ and, therefore, $d^{p^k}=0.$ Again, by \cite{Zelmanov43}, the ring $L$ is locally nilpotent. This proves proposition \ref{Proposition1} in the case of prime characteristic.
A Lie ring $L$ is called \textit{weakly Engel} if for arbitrary elements $a, b\in L$ there exists an integer $n(a, b)\geq 1$ such that $${\rm ad} (a)^{n(a, b)}b=0.$$ B.~I.~Plotkin \cite{Plotkin27} proved that a weakly Engel Lie ring has a locally nilpotent radical. In other words, if $L$ is a weakly Engel Lie ring, then $L$ contains the largest locally nilpotent ideal $I$ such that the factor-ring $L/I$ does not contain nonzero locally nilpotent ideals. We denote $I=\text{Loc}(L).$
\begin{lemma}\label{Lemma4}
Let $A$ be a finitely generated ring and let a Lie ring $L\subseteq {\rm Der} (A)$ consist of locally nilpotent derivations. Then the Lie ring $L$ is weakly Engel. \end{lemma} \begin{proof}
Let the ring $A$ be generated by elements $a_1, \ldots , a_m.$ Let $d_1, d_2\in L.$ There exists an integer $n\geq 1$ such that $d_1^n(a_i)=0, 1\leq i\leq m.$ Since the set $$\{\, d_2d_1^i(a_j), \quad 0\leq i\leq n-1, \quad 1\leq j\leq m\,\}$$ is finite there exists an integer $k\geq 1$ such that $$d_1^kd_2d_1^i(a_j)=0, \quad 0\leq i\leq n-1, \quad 1\leq j\leq m.$$
We have $${\rm ad} (d_1)^sd_2=\sum_{i+j=s}(-1)^j\binom{s}{i}d_1^id_2d_1^j.$$ Hence
$$(\ad (d_1)^{n+k-1}d_2)(a_t)=0, \quad 1\leq t\leq m.$$
Indeed, in each summand $d_1^id_2d_1^j$ with $i+j=n+k-1$ either $j\geq n,$ so $d_1^j(a_t)=0,$ or $j\leq n-1,$ in which case $i\geq k$ and $d_1^id_2d_1^j(a_t)=0$ by the choice of $k.$ This implies
${\rm ad} (d_1)^{n+k-1}d_2=0$ and completes the proof of the lemma. \end{proof}
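For concreteness, we record the smallest nontrivial case $s=2$ of the commutator expansion used in the proof above (a routine check, added here for illustration and not part of the original argument):
$${\rm ad} (d_1)^2d_2=[d_1,[d_1,d_2]]=d_1^2d_2-2d_1d_2d_1+d_2d_1^2=\sum_{i+j=2}(-1)^j\binom{2}{i}d_1^id_2d_1^j.$$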
\begin{lemma}\label{Lemma5}
Let $A$ be a finitely generated associative commutative ring. Let $L \subseteq {\rm Der} (A)$ be a Lie ring of derivations such that the subring $A_L$ is an order in $A.$ Then the Lie ring $L$ is weakly Engel. \end{lemma} \begin{proof}
Let $a_1, \ldots ,a_m$ be generators of the ring $A,$ let $a_i=b_i^{-1}c_i, 1\leq i\leq m, $ where $b_i, c_i \in A_L.$ Choose derivations $d_1, d_2 \in L.$ In the proof of lemma \ref{Lemma4} we showed that there exists an integer $s\geq 1$ such that
$$({\rm ad} (d_1)^sd_2)(b_i)=({\rm ad} (d_1)^{s}d_2)(c_i)=0, 1\leq i\leq m.$$
Since $d'={\rm ad} (d_1)^sd_2$ is a derivation of the algebra $A$ it follows that $d'(a_i)=0, 1\leq i\leq m,$ and therefore $d'=0.$ This completes the proof of the lemma. \end{proof}
\begin{lemma}\label{Lemma6}
Let $A$ be a finitely generated semiprime $\text{PI}$-ring. Then there exists a family of homomorphisms $A\to M_{n}(\mathbb Z/p\mathbb Z)$ into matrix rings over prime fields that approximates $A.$ \end{lemma} \begin{proof}
The ring $A$ is representable \cite{Herstein16}, i.e. it is embeddable into a ring of matrices over a finitely generated associative commutative semiprime ring $C, \ A \hookrightarrow M_n(C).$ Hilbert's Nullstellensatz \cite{Atiyah_Macdonald4} implies that $C$ is a subdirect product of finite fields. Hence, there exists a family of homomorphisms $\varphi _i : A\to M_n(\mathbb{F}_i),$ where $ \mathbb{F}_i$ are finite fields such that $\cap _{i}{\rm ker}\, \varphi _i=(0).$ If ${\rm char}\, \mathbb{F}_i=p,$ then the field $\mathbb{F}_i$ is embeddable into a ring of matrices over $\mathbb Z/p\mathbb Z.$ This completes the proof of the lemma. \end{proof}
\begin{lemma}\label{Lemma7}
Let $A$ be a finitely generated prime $\text{PI}$-ring. Let $Z$ be the center of $A$ and let $K$ be the field of fractions of the commutative domain $Z.$ Then ${\rm dim} _{K}K{\rm Der} (A)<\infty .$ \end{lemma} \begin{proof}
Let $a_1, \ldots , a_m$ be generators of the ring $A.$ As we have remarked in \textbf{\ref{1.6}} the ring of fractions $\widetilde A=(Z\setminus ( 0))^{-1}A$ is a finite-dimensional central simple algebra over the field $K.$ Let ${\rm dim} _{K}\widetilde A=s.$ We will show that ${\rm dim}_KK{\rm Der} (A)\leq ms.$
Choose $ms+1$ derivations $d_1, \ldots , d_{ms+1}$ of the ring $A.$ Consider the vector space $$V=\underbrace{\widetilde A\oplus \cdots \oplus \widetilde A}_{m}$$ over the field $K$, ${\rm dim} _KV=ms,$ and vectors $v_i=(d_i(a_1), \ldots , d_i(a_m))\in V, 1\leq i\leq ms+1.$ There exist coefficients $k_1, \ldots k_{ms+1}\in K,$ not all equal to $0,$ such that $$\sum _{i=1}^{ms+1}k_iv_i=0.$$ This implies $d(a_i)=0, 1\leq i\leq m,$ where $d=\sum_{i=1}^{ms+1}k_id_i.$ Since $d$ is a derivation of the ring $\widetilde A$ and elements $a_1, \ldots , a_m$ generate $A$ as a ring it follows that $d(A)=(0).$ This implies that $d(K)=0$ and completes the proof of the lemma. \end{proof}
Now, we will prove theorem \ref{Theorem6} and proposition \ref{Proposition1} in the case when the algebra $A$ is prime.
As above, let $A$ be a finitely generated prime $\text{PI}$-ring, let $Z=Z(A)$ be the center of the ring $A,$ and let $ K=(Z\setminus \{0\})^{-1}Z$ be the field of fractions of the domain $Z.$ Suppose that a Lie ring $L\subseteq {\rm Der} (A)$ consists of locally nilpotent derivations. For a derivation $d\in L$ let ${\rm id}_L(d)$ denote the ideal of the Lie ring $L$ generated by the element $d.$ Consider the descending chain of ideals $$ I_1=L, \ \ I_{i+1}=\sum _{d\in I_{i}} \ [{\rm id}_L(d), {\rm id}_L(d)].$$ Since ${\rm dim} _KKL<\infty$, by lemma \ref{Lemma7}, it follows that the descending chain $$ KI_1\supseteq KI_2\supseteq \cdots $$ stabilizes. Let $KI_l=KI_{l+1}= \cdots.$ We will show that $I_l=(0).$ Indeed, there exists a finite collection of derivations $d_1, \ldots , d_r\in I_l$ such that $$KI_{l+1}=\sum _{i=1}^rK[{\rm id}_L(d_i), {\rm id}_L(d_i)].$$ Recall that \begin{equation}\label{equation1} {\rm id}_L(d_i)=\mathbb{Z}d_i+\sum _{t\geq 1} [\ldots [d_i, \underbrace{ L], L], \ldots , L}_{t}] . \end{equation} Let \begin{equation}\label{equation2} d\in [{\rm id}_L(d_i), {\rm id}_L(d_i)]. \end{equation} Expanding the commutators on the right-hand sides of (\ref{equation1}) and (\ref{equation2}) we get $$d=\sum \underset{(\star _1)}{ . . . }d_i\underset{(\star _2)}{ . . . }d_i\underset{(\star _3)}{ . . . },$$ where $(\star _{1})$ is a product of derivations from $L$ and, possibly, a multiplication by an element from $K,$ $(\star _{2})$ and $(\star _{3})$ are products, may be empty, of derivations from $L.$ Hence, $d=\sum \cdots d_i\cdots ,$ where each summand has a nonempty product of derivations from $L$ to the right of $d_i.$
Since $d_1, \ldots , d_r\in \sum _{j}K[{\rm id}_L(d_j), {\rm id}_L(d_j)],$ we have \begin{equation}\label{equation3} d_i=\sum k_{ijt}u_{ijt}d_jv_{ijt}, \quad 1\leq i\leq r,
\end{equation} where $k_{ijt}\in K; u_{ijt}, v_{ijt}$ are products of derivations from $L;$ $v_{ijt}$ are nonempty products of derivations from $L.$
Let $b$ be a common denominator of all elements $k_{ijt},$ that is $k_{ijt}\in b^{-1}Z.$ Consider the finitely generated prime $\text{PI}$-ring $A_1=\langle b^{-1}, A\rangle.$ The ring $A_1$ is invariant under $\Der (A).$ Suppose that there exists an element $a\in A$ such that $d_i(a)\not =0.$ By lemma \ref{Lemma6}, there exists a family of homomorphisms $\varphi : A_1\to M_{n}(\mathbb Z/p\mathbb Z)$ that approximates the ring $A_1.$ Hence, there exists a prime number $p$ such that $d_i(a)\not \in pA_1.$
Consider the subring $L'$ of the Lie ring $L$ generated by all derivations that are involved in the products $v_{ijt}.$ Clearly, $L'$ is a finitely generated Lie ring.
We have shown above that theorem \ref{Theorem6} is true for rings of prime characteristic. Applying this result to the ring $A/pA$, we conclude that the ring $L'$ acts nilpotently on $A/pA$. In other words, there exists an integer $s\geq 1$ such that \begin{equation}\label{equation4} \underbrace{L'\cdots L'}_{s}(A)\subseteq pA . \end{equation} Iterating (\ref{equation3}) $s$ times, we get $$d_i=\sum u_td_jv_{i_1j_1t_1}\cdots v_{i_sj_st_s},$$ where $u_t\in A_1.$ By (\ref{equation4}), we get $$ v_{i_1j_1t_1}\cdots v_{i_sj_st_s}A\subseteq pA\subseteq pA_1$$ and, therefore, $d_i(a)\in pA_1, 1\leq i\leq r,$ a contradiction. We showed that $I_l=(0).$ Recall that, by B.~I.~Plotkin's theorem \cite{Plotkin27}, the ring $L$ has a locally nilpotent radical ${\rm Loc}(L).$ Let $i\geq 1$ be the minimal positive integer such that $I_i\subseteq {\rm Loc}(L), i\leq l.$ Suppose that $i\geq 2.$ For an arbitrary element $a\in I_{i-1}$ the ideal ${\rm id}_L(a)$ is abelian modulo $I_i.$ Since the factor-ring $L/{\rm Loc}(L)$ does not contain nonzero abelian ideals it follows that $a\in {\rm Loc}(L), I_{i-1}\subseteq {\rm Loc}(L),$ a contradiction.
We showed that $L=I_1\subseteq {\rm Loc}(L),$ in other words, the ring $L$ is locally nilpotent. This completes the proof of theorem \ref{Theorem6} in the case when the ring $A$ is prime.
To finish the proof of proposition \ref{Proposition1} we just need to repeat the arguments above. Let $A$ be a commutative domain, let $L\subseteq {\rm Der} (A),$ and suppose that $A_L$ is an order in $A.$ We see that the subring $(A_1)_L$ is an order in the ring $A_1$ and, therefore, for any prime number $p$ the subring $(A_1/pA_1)_L$ is an order in $A_1/pA_1.$ In the case of prime characteristic, proposition \ref{Proposition1} was proved for an arbitrary finitely generated associative commutative ring, not necessarily a domain. Hence, we can apply it to $A_1/pA_1$ and finish the proof of proposition \ref{Proposition1} following the proof of theorem \ref{Theorem6} verbatim.
To tackle the semiprime case we will need the following lemma. \begin{lemma}\label{Lemma8}
Let $A$ be a finitely generated semiprime ring. Then there exists a finite family of ideals $I_1, \ldots , I_n\vartriangleleft A$ such that each ideal $I_i, 1\leq i\leq n,$ is invariant under ${\rm Der} (A);$ each factor-ring $A/I_i$ is prime, and $$\bigcap _{j=1}^{n}I_j=(0).$$ \end{lemma}
\begin{proof} As we have mentioned in \textbf{\ref{1.7}}, the ring of fractions $\widetilde{A}=({Z^{\star}})^{-1}A,$ where $Z^{\star}$ is the set of all nonzero central elements of $A$ that are not zero divisors, is a direct sum $\widetilde{ A}=\widetilde{ A_1}\oplus \cdots \oplus \widetilde{ A_n}$ of algebras that are simple and finite-dimensional over their centers. Let
$$ I_i=A\cap (\widetilde{ A}_1+\cdots +\widetilde{ A}_{i-1}+\widetilde{ A}_{i+1}+\cdots +\widetilde{ A}_{n}), 1\leq i\leq n.$$
All direct summands $\widetilde{A}_i$ are invariant under ${\rm Der} (\widetilde{A}).$ An arbitrary derivation of the ring $A$ extends to a derivation of $\widetilde{A}.$ This implies that each ideal $I_i$ is invariant under ${\rm Der} (A).$
Let us prove that each factor-ring $A/I_i$ is prime. Suppose that $a, b\in A$ and $aAb\subseteq I_i.$ We need to show that $a\in I_i$ or $b\in I_i.$ The inclusion above implies that
$$a{\widetilde A}b\subseteq \widetilde{ A}_1+\cdots +\widetilde{ A}_{i-1}+\widetilde{ A}_{i+1}+\cdots +\widetilde{ A}_{n}.$$
The factor-ring
$${\widetilde A}/(\widetilde{ A}_1+\cdots +\widetilde{ A}_{i-1}+\widetilde{ A}_{i+1}+\cdots +\widetilde{ A}_{n})\simeq {\widetilde A}_{i}$$ is simple. Hence, at least one of the elements $a, b$ lies in $I_i.$ It is straightforward that $I_1\cap \cdots \cap I_n=(0).$ This completes the proof of the lemma.\end{proof}
Now, we are ready to prove theorem \ref{Theorem6} in the case when the ring $A$ is semiprime.
Let $A$ be a finitely generated semiprime $\text{PI}$-ring. Let $L\subseteq {\rm Der} (A)$ be a finitely generated Lie subring that consists of locally nilpotent derivations. Let $I_1, \ldots , I_n$ be the ideals of lemma \ref{Lemma8}. We showed above that there exists $r\geq 1$ such that
$$L^r(A/I_i)=(0), 1\leq i\leq n.$$
Hence, $$L^r(A)\subseteq \bigcap _{i=1}^nI_i=(0) \quad \text{and} \quad L^r=(0).$$ This completes the proof of theorem \ref{Theorem6} for semiprime rings.
\begin{lemma}\label{Lemma9}
Let $A$ be a finitely generated $\text{PI}$-ring and let $L\subseteq {\rm Der} (A)$ be a Lie ring that consists of locally nilpotent derivations. Let $I\vartriangleleft A$ be a differentially invariant ideal such that $I^2=(0)$ and the image of the Lie ring $L$ in ${\rm Der} (A/I)$ is locally nilpotent. Then the Lie ring $L$ is locally nilpotent. \end{lemma} \begin{proof}
Choose derivations $d_1, \ldots, d_n\in L.$
We need to show that the Lie ring $L'$ generated by $d_1, \ldots, d_n$ is nilpotent. By the assumptions of the lemma, there exists $r\geq 1$ such that $L'^r(A)\subseteq I.$ Let $d\in L'^r$ and let $a_1, \ldots , a_m$ be generators of the ring $A.$ There exists an integer $l\geq 1$ such that $d^l(a_j)=0, 1\leq j\leq m.$ Let $v=a_{i_1}\cdots a_{i_s}$ be a product of generators in the ring $A.$ Since $d(a_i)Ad(a_j)\subseteq I^2=(0)$ it follows that
$$d^l(a_{i_1}\cdots a_{i_s}) =d^l(a_{i_1})a_{i_2}\cdots a_{i_s}+a_{i_1}d^l(a_{i_2})\cdots a_{i_s}+\cdots +a_{i_1}\cdots a_{i_{s-1}}d^l(a_{i_s})=0.$$
Hence $d^l=0.$
Since the ring $L$ is weakly Engel by lemma \ref{Lemma4}, B.~I.~Plotkin's theorem \cite{Plotkin27} implies that the Lie ring $L'^r$ is finitely generated. Hence, by \cite{Zelmanov43} (see also \textbf{\ref{1.7}}), the Lie ring $L'^r$ is nilpotent and the Lie ring $L'$ is solvable. Again by B.~I.~Plotkin's theorem, the Lie ring $L'$ is nilpotent. This completes the proof of the lemma.\end{proof}
Let us prove theorem \ref{Theorem6} in the case when the ring $A$ does not have additive torsion.
Let $J$ be the Jacobson radical of the ring $A.$ By \cite{Braun9}, the radical $J$ is nilpotent. Let $J^n=(0), J^{n-1}\not =(0), n\geq 2.$ It is well known that if the ring $A$ does not have additive torsion, then the radical $J$ is differentially invariant.
Let $$I=\{ a\in A \ | \ {\rm there \ exists \ an \ integer} \ s\geq 1 \ \ {\rm such \ that} \ sa\in J^{n-1}\}.$$
The ideal $I$ is differentially invariant. We claim that $I^2=(0).$ Indeed, let $a, b\in I.$ There exist integers $s_1, s_2 \geq 1$ such that $s_1a\in J^{n-1}, s_2b \in J^{n-1}.$ Hence $s_1s_2ab \in (J^{n-1})^2=(0).$ Since the ring $A$ does not have additive torsion it follows that $ab=0.$
The Jacobson radical of the ring $A/I$ is $J/I, (J/I)^{n-1}=(0).$ The ring $A/I$ obviously does not have additive torsion. Hence, by inductive assumption on $n$, the image of $L$ in ${\rm Der} (A/I)$ is locally nilpotent; and by lemma \ref{Lemma9}, the ring $L$ is locally nilpotent.
Now, we are ready to finish the proof of theorem \ref{Theorem6}.
Let $a_1, \ldots , a_m$ be generators of a $\text{PI}$-ring $A.$ Let $L\subseteq {\rm Der} (A)$ be a finitely generated Lie subring such that every derivation from $L$ is locally nilpotent. Let $T(A)$ be the ideal of $A$ that consists of elements of finite additive order. Clearly, $T(A)$ is differentially invariant. The factor-ring $A/T(A)$ does not have additive torsion. Hence, by the proof of theorem~\ref{Theorem6} in the case when the ring $A$ does not have additive torsion, the image of the ring $L$ in ${\rm Der} (A/T(A))$ is nilpotent. Therefore, there exists $r\geq 1$ such that for any derivation $d\in L^r$ we have $d(A)\subseteq T(A).$ Since the ring $L$ is finitely generated and weakly Engel by lemma \ref{Lemma4}, it follows from B.~I.~Plotkin's theorem \cite{Plotkin27} that the Lie ring $L^r$ is finitely generated.
We aim to show that the Lie ring $L^r$ is nilpotent. Let $d_1', \ldots , d_l'$ be generators of $L^r.$ There exists an integer $n\geq 1$ such that $$nd_i'(a_j)=0, \quad 1\leq i\leq l, \quad 1\leq j\leq m.$$ Hence,
$nL^r(A)=(0).$
For a prime number $p$, consider the ideal
$$I_p=\{ a\in A \ | \ {\rm there \ exists \ an \ integer } \ t\geq 1 \ {\rm such \ that } \ p^ta=0\}.$$ Let $a\in I_p,$ $ d\in L^r.$ Then $nd(a)=0$ and $p^td(a)=0$ for some $t\geq 1.$ Hence, for a prime number $p$ not dividing $n$, we have $L^rI_p=(0).$
This allows us to consider the factor-ring $A/\sum_{p\nmid n}I_p$ instead of $A.$ In other words, we will assume that for a prime number $p$ not dividing $n$ the ring $A$ does not have a $p$-torsion.
Let $p_1, \ldots , p_s$ be all distinct prime divisors of $n.$ Then $$T(A)=I_{p_1}\oplus \cdots \oplus I_{p_s}. $$ Let $s\geq 2.$ Inducting on the integer $n$ we can assume that the image of the Lie ring $L$ in each ${\rm Der} (A/I_{p_i})$ is nilpotent. In other words, there exists a number $r_i\geq 1$ such that $L^{r_i}(A)\subseteq I_{p_i}.$ This implies
$$L^{\max{(r_1, r_2)}}(A)\subseteq I_{p_1}\cap I_{p_2}=(0).$$
Therefore, we assume that $T(A)=I_p$ for some prime number $p.$ The ideal $pI_p$ lies in the Jacobson radical of $A$ and $pI_p$ is differentially invariant. Let $(pI_p)^q=(0), q\geq 1.$ If $q\geq 2,$ then inducting on $q$ we can assume that the image of the Lie ring $L$ in $\Der (A/(pI_p)^{q-1})$ is nilpotent. Hence, the ideal $(pI_p)^{q-1}$ satisfies the assumptions of lemma \ref{Lemma9}. Suppose, therefore, that $q=1,$ $ pI_p=(0),$ $ n=p.$ Now, we have $pL^r(A)=(0).$
This implies that for an arbitrary derivation $d\in L^r$ every $p$-power $d^{p^k}$ is again a derivation. Indeed,
$$ d^{p^k}(ab)=\sum _{i=0}^{p^k}\binom{p^k}{i}d^i(a)d^{p^k-i}(b)$$
for arbitrary elements $a, b\in A.$ If $0< i< p^k,$ then the binomial coefficient $\binom{p^k}{i}$ is divisible by $p,$ hence $$\binom{p^k}{i}d^i(a)=0, \quad \text{which implies} \quad d^{p^k}(ab)=d^{p^k}(a)b+ad^{p^k}(b).$$
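As a quick sanity check (ours, not part of the original proof), take $p=2$ and $k=1$: then
$$d^{2}(ab)=d^{2}(a)b+2d(a)d(b)+ad^{2}(b),$$
and $2d(a)d(b)=0$ since $2d(A)=(0)$ by $pL^r(A)=(0),$ so $d^{2}$ is again a derivation.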
Choosing $d\in L^r$ and arguing as above, we find $p^k$ such that $d^{p^k}(a_j)=0, 1\leq j\leq m, $ therefore, $d^{p^k}=0.$ The Lie ring $L^r$ is finitely generated, $\text{PI}$, and an arbitrary derivation from $L^r$ is nilpotent. By \cite{Zelmanov43}, the Lie ring $L^r$ is nilpotent. The ring $L$ is solvable, hence, by the result of B.~I.~Plotkin \cite{Plotkin27}, it is nilpotent. This completes the proof of theorem \ref{Theorem6}.
Now, our aim is to prove theorem \ref{Theorem3}. In the rest of this section, we assume that $A$ is a commutative domain; $L\subseteq {\rm Der} (A)$ is a Lie ring that consists of locally nilpotent derivations; $K$ is the field of fractions of the domain $A,$ and ${\rm dim} _KKL<\infty.$ Our aim is to prove that the Lie ring $L$ is locally nilpotent. Let \begin{equation}\label{equation5} KL=\sum _{i=1}^{n}Kd_i, \quad d_i\in L, \quad \text{and} \quad [d_i, d_j]=\sum _{t=1}^nc_{ijt}d_t, \quad c_{ijt}=\frac{a_{ijt}}{b_{ijt}}, \end{equation} where $a_{ijt},$ $ b_{ijt}\in A.$ Enlarging the set $\{ d_1, \ldots , d_n\}$ if necessary we will assume that the derivations $d_1, \ldots , d_n$ generate $L, $ that is, $L={\rm Lie}_{\mathbb Z}\langle d_1, \ldots , d_n\rangle .$ Let $ d_{i_1} \cdots d_{i_m}$ be a product in the associative ring of additive endomorphisms of the field $K.$ We call this product \textit{ordered} if $i_1\leq i_2\leq \cdots \leq i_m.$ Let $\mathcal P$ denote the set of all ordered products of derivations $d_1, \ldots , d_n$ including the empty product, i.e. the identity operator.
\begin{lemma}\label{Lemma10}
For an arbitrary element $a\in A$ the set of ordered products $v=d_{i_1}\cdots d_{i_m}\in \mathcal P$ such that $v(a)\not =0,$ is finite. \end{lemma}
\begin{proof}
Let $$v=d_1^{k_1}d_{2}^{k_2}\cdots d_n^{k_n}, \quad \text{where} \ k_i \ \text{are nonnegative integers.}$$ There exists an integer $q_n\geq 1$ such that $d_{n}^{q_n}(a)=0.$ Hence, if $v(a)\not =0,$ then $k_n<q_n.$ Similarly, there exists $q_{n-1}\geq 1$ such that $$d_{n-1}^{q_{n-1}}d_n^i(a)=0 \quad \text{for all} \quad 0\leq i< q_{n}.$$ Hence, $v(a)\not =0$ implies $k_n<q_n, k_{n-1}<q_{n-1},$ and so on. This completes the proof of the lemma. \end{proof}
Consider the set $C=\{ c_{ijt}\}_{i, j, t}\subset K$; see (\ref{equation5}).
\begin{lemma}\label{Lemma11}
An arbitrary product $d_{i_1}\cdots d_{i_r}$ can be represented as
$$d_{i_1}\cdots d_{i_r}=\sum \pm (v_1(c_1))\cdots (v_s(c_s))v_0, $$
where in each summand the operators $v_0,v_1, \ldots ,v_s$ lie in $\mathcal P$ and elements $c_1, \ldots , c_s$ lie in $C.$
\end{lemma}
\begin{proof} For a product $v=d_{i_1}\cdots d_{i_r}$ let $l$ be the number of $1\leq k\leq r-1,$ such that $i_k>i_{k+1}.$ Let $\nu (v)=(r, l).$ We will compare pairs $(r, l)$ lexicographically and use induction on $\nu (v).$ Let $i=i_k>i_{k+1}=j.$ Then $$d_id_j=d_jd_i+\sum _{t}c_{ijt}d_t.$$ Clearly, $$\nu (d_{i_1}\cdots d_{i_{k-1}}d_jd_id_{i_{k+2}}\cdots d_{i_r})<\nu (v).$$ Consider the product $$d_{i_1}\cdots d_{i_{k-1}}c_{ijt}d_td_{i_{k+2}}\cdots d_{i_r}.$$ Commuting the element $c_{ijt}$ with derivations $d_{i_1}, \ldots , d_{i_{k-1}}$ we get $$d_{i_1}\cdots d_{i_{k-1}}c_{ijt}=\sum (v'(c_{ijt}))v'',$$ where $v', v''$ are products of derivations $d_{i_1}, \ldots , d_{i_{k-1}}$ of total length $k-1.$ Hence, $$d_{i_1}\cdots d_{i_{k-1}}c_{ijt}d_td_{i_{k+2}}\cdots d_{i_r}=\sum \pm (v'(c_{ijt}))v''d_td_{i_{k+2}}\cdots d_{i_r}.$$ In each summand the lengths of products $v'$ and $v''d_td_{i_{k+2}}\cdots d_{i_r}$ are less than $r.$ Applying the induction assumption to these products, we get the assertion of the lemma. \end{proof}
Consider the subring $\widetilde{A}$ of the field $K$ generated by the elements $$v(a_{ijt}), \quad v(b_{ijt}),\quad v(b_{ijt})^{-1};\quad v\in \mathcal P; \quad i,\ j,\ t\geq 1.$$ By lemma \ref{Lemma10}, the ring $\widetilde A$ is finitely generated.
\begin{lemma}\label{Lemma12}
The subring $\widetilde A$ is invariant under the action of $L.$ \end{lemma}
\begin{proof}
For an arbitrary ordered product of derivations $v\in \mathcal P$ we have
$$v(b_{ijt}^{-1})=\sum \frac{1}{b_{ijt}^m}(v_1b_{ijt})\cdots (v_sb_{ijt}),$$
where $m\geq 1; v_1, \ldots , v_s\in \mathcal P,$ and
$$ v(c_{ijt})=v(a_{ijt}\cdot b_{ijt}^{-1})=\sum _{v', v''\in \mathcal P}v'(a_{ijt})v''(b_{ijt}^{-1}).$$
These equalities imply $v(c_{ijt})\in \widetilde{A}.$ Now, by lemma \ref{Lemma11}, the ring $\widetilde A$ is invariant under the action of $L.$ \end{proof}
The ring $\widetilde A$ is generated by the elements $v(a_{ijt}), v(b_{ijt})\in A\cap \widetilde{A}$ and the elements $v(b_{ijt})^{-1}.$ Hence, an arbitrary element of the ring $\widetilde A$ can be represented as a ratio $a/b,$ where $a, b\in A\cap \widetilde{A}$ and $b$ lies in the multiplicative semigroup $S$ generated by the nonzero elements $v(b_{ijt}).$ Hence, $A\cap \widetilde{A}$ is an order in the ring $\widetilde A.$
By proposition \ref{Proposition1}, the image of the ring $L$ in $\End _{\mathbb Z}(\widetilde{A})$ is a nilpotent Lie ring. Hence, there exists an integer $r\geq 1$ such that $L^{r}(\widetilde{A})=(0).$ By lemma \ref{Lemma5} and Plotkin's theorem \cite{Plotkin27}, the Lie ring $L^r$ is finitely generated.
Consider the subfield
$$K_0=\big\{\, \alpha \in K \ | \ L^r(\alpha )=(0)\, \big\}.$$
The $K_0$-algebra $A'=K_0A\subseteq K$ is a domain. The field $K_0$ is invariant under the action of $L,$ hence the $K_0$-algebra $A'$ is invariant as well.
Let $L'$ be the image of the Lie ring $L^r$ in $\End _{\mathbb Z}(A').$ Since all the coefficients $c_{ijt}$ lie in $K_0$ the product $K_0L$ is a Lie ring and a finite-dimensional vector space over $K_0.$ This implies
that $K_0L'$ is a finite-dimensional $K_0$-algebra. Now, the Petravchuk--Sysak theorem (see \cite{Petravchuk_Sysak26}) implies that $L'$ is a nilpotent Lie ring. Again, by lemma \ref{Lemma5} and B.~I.~Plotkin's theorem, the Lie ring $L$ is nilpotent. This completes the proof of theorem \ref{Theorem3}.
We will finish with examples showing that corollary \ref{Cor1} of theorem \ref{Theorem1} and theorem \ref{Theorem2} fail for countably generated algebras. Let $\mathbb{F}$ be an arbitrary field and let $A=\mathbb{F}[x_1, x_2, \ldots ]$ be the polynomial algebra on countably many generators. We will construct
(i) a Lie algebra $L\subset {\rm Der} (A)$ that consists of locally nilpotent derivations and is not locally nilpotent,
(ii) a torsion group $G<{\rm Aut} (A)$ that is not locally finite.
Consider the countable-dimensional vector space $V=\sum _{i\geq 1}\mathbb{F}x_i.$ There exists a countable finitely generated Lie algebra $L$ such that every operator ${\rm ad} (a), a\in L,$ is nilpotent, and the algebra $L$ has zero center (see \cite{Golod14,Lenagan_Smoctunowicz20}). The mapping $L\to \mathfrak{gl}(L),$ $ a\mapsto {\rm ad} (a),$ $ a\in L,$ is an embedding of the Lie algebra $L$ in $\mathfrak{gl}(L),$ and every linear transformation ${\rm ad} (a)$ from the image of $L$ is nilpotent. Therefore, we can suppose that $L\subseteq \mathfrak{gl}(V)$ and every linear transformation from $L$ is nilpotent. An arbitrary linear transformation on $V$ is a restriction of a derivation from $$\sum _{i\geq 1}V\frac{\partial}{\partial x_i}.$$ Hence, we may assume that $$L \subseteq \sum _{i\geq 1}V\frac{\partial}{\partial x_i}\subset {\rm Der} (A).$$ Since every derivation from $L$ acts nilpotently on $V$ it follows that it acts locally nilpotently on $A.$ Similarly, there exists a finitely generated torsion group $G< {\rm Aut} (V)$ that is not locally finite (see \cite{Grigorchuk15,Novikov_Adyan23,Novikov_Adyan24,Novikov_Adyan25}). Every linear transformation $\varphi \in GL(V)$ uniquely extends to an automorphism $\widetilde{\varphi}\in \Aut (A).$ Thus the mapping $GL(V)\to {\rm Aut} (A),$ $\varphi \mapsto \widetilde{\varphi}, $ is an embedding of groups. Hence, $G$ is a torsion subgroup of ${\rm Aut} (A)$ that is not locally finite.
\end{document}
\begin{document}
\title[An algebraic property of Reidemeister torsion]{An algebraic property of Reidemeister torsion}
\author{Teruaki Kitano} \address{Department of Information Systems Science, Faculty of Science and Engineering, Soka University \\ Tangi-cho 1-236, Hachioji, Tokyo 192-8577 \\ Japan} \email{[email protected]}
\author{Yuta Nozaki} \address{ Graduate School of Advanced Science and Engineering, Hiroshima University \\ 1-3-1 Kagamiyama, Higashi-Hiroshima City, Hiroshima, 739-8526 \\ Japan} \email{[email protected]}
\subjclass[2020]{Primary 57K31, 57Q10, Secondary 11R04, 13P15, 57M05}
\keywords{Reidemeister torsion, algebraic integer, resultant, Chebyshev polynomial, character variety, $A$-polynomial}
\maketitle
\begin{abstract} For a 3-manifold $M$ and an acyclic $\mathit{SL}(2,\mathbb{C})$-representation $\rho$ of its fundamental group, the $\mathit{SL}(2,\mathbb{C})$-Reidemeister torsion $\tau_\rho(M) \in \mathbb{C}^\times$ is defined. If there are only finitely many conjugacy classes of irreducible representations, then the Reidemeister torsions are known to be algebraic numbers. Furthermore, we prove that the Reidemeister torsions are not only algebraic numbers but also algebraic integers for most Seifert fibered spaces and infinitely many hyperbolic 3-manifolds. Also, for a knot exterior $E(K)$, we discuss the behavior of $\tau_\rho(E(K))$ when the restriction of $\rho$ to the boundary torus is fixed. \end{abstract}
\setcounter{tocdepth}{1} \tableofcontents
\section{Introduction} \label{sec:Intro} Let $M$ be a connected compact $n$-manifold and let $\rho$ be an acyclic $\mathit{SL}(2,\mathbb{C})$-representation, namely the chain complex $C_\ast(M;\mathbb{C}^2_\rho)$ is acyclic. Then the $\mathit{SL}(2,\mathbb{C})$-Reidemeister torsion $\tau_\rho(M) \in \mathbb{C}^\times$ is defined to be the alternating product of determinants (see Section~\ref{subsec:Rtorsion}). When $\rho$ is not acyclic, we set $\tau_\rho(M)=0$. Then $\tau_\rho(M)$ defines a $\mathbb{C}$-valued function on the $\mathit{SL}(2,\mathbb{C})$-representation variety $R(M)=\mathrm{Hom}(\pi_1(M),\mathit{SL}(2,\mathbb{C}))$, which factors through the $\mathit{SL}(2,\mathbb{C})$-character variety $X(M)$ of $M$. In this paper, we mainly consider a 3-manifold $M$ and the subspace $X^\mathrm{irr}(M)$ of irreducible characters. Under the assumption that $X^\mathrm{irr}(M)$ is a finite set, Johnson~\cite{Joh88} defined the torsion polynomial $\sigma_M(t)$ of $M$ by \[ \sigma_M(t) = \prod_{[\rho] \in X^\mathrm{irr}(M),\ \text{acyclic}}(t-\tau_\rho(M)) \in \mathbb{C}[t]. \] He mentioned that $\sigma_M(t)$ lies in $\mathbb{Q}[t]$ by considering the action of a Galois group. As a consequence, $\tau_\rho(M)$ is an algebraic number when $X^\mathrm{irr}(M)$ is a finite set. The first author and Tran~\cite{KiTr} described the torsion polynomial of the Brieskorn homology 3-sphere $\Sigma(p,q,r)$ in terms of the normalized Chebyshev polynomial (see also \cite{Kit17}). Moreover, we show that $\sigma_{\Sigma(p,q,r)}(t)$ has integral coefficients in Section~\ref{sec:Seifert}. In other words, $\tau_\rho(\Sigma(p,q,r))$ is not only an algebraic number but also an algebraic integer for every irreducible representation $\rho$. Recall that $\alpha \in \mathbb{C}$ is called an \emph{algebraic integer} if there is a monic polynomial over $\mathbb{Z}$ such that $\alpha$ is a root of the polynomial.
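To illustrate the definition with standard examples (not specific to this paper): $\sqrt{2}$ is an algebraic integer, whereas $1/2$ is an algebraic number but not an algebraic integer, since a rational root of a monic polynomial over $\mathbb{Z}$ must be an integer by the rational root theorem:
\[ \sqrt{2}\colon\ t^2-2=0 \ \text{with } t^2-2 \in \mathbb{Z}[t] \text{ monic}; \qquad \tfrac{1}{2}\colon\ \text{a root of } 2t-1, \text{ but of no monic polynomial in } \mathbb{Z}[t]. \]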
As a consequence, for the homology 3-sphere obtained by Dehn surgery along the (right-handed) $(p,q)$-torus knot $T_{p,q}$, its Reidemeister torsion $\tau_\rho(S^3_{-1/n}(T_{p,q}))$ is also an algebraic integer since $\Sigma(p,q,pqn+1) = S^3_{-1/n}(T_{p,q})$ for $n>0$.
Also, such a phenomenon was numerically observed for the figure-eight knot $4_1$ in \cite{Kit16N}. It is worth mentioning that while the Brieskorn homology 3-spheres are not hyperbolic, $S^3_{p/q}(4_1)$ is hyperbolic unless $p/q$ is an integer with $|p/q| \leq 4$ or $p/q=\infty$. In this paper, we rigorously prove the observation in \cite{Kit16N}.
\begin{theorem} \label{thm:4_1} Let $p,q$ be non-zero integers such that either $p$ or $q$ is $1$. For an $\mathit{SL}(2,\mathbb{C})$-representation $\rho$ of $\pi_1(S^3_{p/q}(4_1))$, the $\mathit{SL}(2,\mathbb{C})$-Reidemeister torsion $\tau_\rho(S^3_{p/q}(4_1))$ is an algebraic integer. \end{theorem}
\begin{figure}
\caption{The twist knot $J(2,2m)$ and $J(2,2)=3_1$, where $2m$ half-twists mean $|2m|$ negative half-twists if $m<0$.}
\label{fig:twist}
\end{figure}
Furthermore, an analogous result holds for the complement of a twist knot $J(2,2m)$ in Figure~\ref{fig:twist} ($m \in \mathbb{Z}$), which suggests that the Reidemeister torsions of the closed 3-manifold $S^3_{p/q}(J(2,2m))$ might be algebraic integers as observed in Examples~\ref{ex:2/3} and \ref{ex:5_2}. Throughout this paper, for a knot $K$, let $E(K)$ denote the complement of an open tubular neighborhood of $K$. We also write $\mu$ and $\lambda$ for a meridian and a preferred longitude of $K$, respectively.
\begin{theorem} \label{thm:twist_knot} Let $K$ be a twist knot $J(2,2m)$ and let $p=\pm 1$ and $q$ an odd integer. For an irreducible $\mathit{SL}(2,\mathbb{C})$-representation $\rho$ of $\pi_1(E(K))$ satisfying $\rho(\mu^p\lambda^q)=I_2$, the $\mathit{SL}(2,\mathbb{C})$-Reidemeister torsion $\tau_\rho(E(K))$ is an algebraic integer. \end{theorem}
Note that $J(2,2)$ is a trefoil knot $3_1=T_{-3,2}$, and $\tau_\rho(E(3_1))$ was computed by Johnson~\cite{Joh88}. In fact, he expressed $\tau_\rho(E(T_{p,q}))$ in terms of cosine functions. In Section~\ref{sec:Seifert}, we see that these are algebraic integers as well. We give the proofs of Theorems~\ref{thm:4_1} and \ref{thm:twist_knot} in Section~\ref{sec:algebraic_integer}, which is based on the resultant of polynomials and the $A$-polynomial of knots introduced by Cooper, Culler, Gillet, Long, and Shalen~\cite{CCGLS94}.
Here we refer to some related results. First recall that the Reidemeister torsion $\tau_\rho(E(K))$ coincides with $\Delta_{K,\rho}(1)$, where $\Delta_{K,\rho}(t) \in \mathbb{C}[t^{\pm 1}]$ denotes the twisted Alexander polynomial associated with $\rho$ (see \cite{Wad94}, \cite{Kit96T}). When $K$ is hyperbolic and $\rho$ is the holonomy representation, Dunfield, Friedl, and Jackson~\cite{DFJ12} called $\Delta_{K,\rho}(t)$ the hyperbolic torsion polynomial. They observed that the coefficients of the hyperbolic torsion polynomial are often algebraic integers. This observation and Theorems~\ref{thm:4_1} and \ref{thm:twist_knot} suggest that, under some conditions, Reidemeister torsions have an interesting algebraic property that is quite non-trivial from the definition. Recently, Yoon~\cite{Yoo20B} proved a vanishing identity on the adjoint Reidemeister torsions of 2-bridge knots. This result also implies that Reidemeister torsions are subject to strong algebraic restrictions. Moreover, we observe that the maximal $\mathit{SL}(2,\mathbb{C})$-Reidemeister torsion in absolute value is often a Perron number in Examples~\ref{ex:2/3} and \ref{ex:5_2}. Here, a real algebraic number $\alpha>1$ is called a \emph{Perron number} if it is larger than the absolute values of the Galois conjugates of $\alpha$. Such numbers appear, for example, in the study of the stretch factor of pseudo-Anosov homeomorphisms (see \cite[Section~14.2.1]{FaMa12}).
\begin{figure}
\caption{The knot $K_0$.}
\label{fig:K0}
\end{figure}
We now turn our attention to the case where $X^\mathrm{irr}(M)$ is an infinite set. For instance, if $M=E(K)$ for a knot $K$ in $S^3$, then $X^\mathrm{irr}(M)$ has a positive dimension. The first author~\cite{Kit94E} gave the formula $\tau_\rho(E(4_1)) = 2-2\operatorname{tr}\rho(\mu)$, where $\operatorname{tr}\rho(\mu)\neq 2$ for a meridian $\mu$. Note that $\tau_\rho(E(4_1))$ is no longer an algebraic integer since $\operatorname{tr}\rho(\mu)$ can vary continuously. On the other hand, $\tau_\rho(E(4_1))$ is determined by the restriction $r(\rho)$ of $\rho$ to the boundary torus. Here, when $\partial M\neq \emptyset$, we denote by $r$ the regular map $X(M) \to X(\partial M)$ between algebraic sets induced by the inclusion $\partial M \hookrightarrow M$. It is natural to ask whether the function $\tau_\rho(M)$ on $X(M)$ varies continuously while $r(\rho)$ is fixed. If $K$ is a 2-bridge knot, then $\dim_\mathbb{C} X(E(K))=1$ and each fiber of $r$ is a finite set, and thus $\tau_\rho(M)$ cannot vary continuously. We need to consider the case $\dim_\mathbb{C} X(E(K))\geq 2$. An easy example of such a knot is the connected sum $K=K_1\sharp K_2$ of non-trivial knots $K_1$ and $K_2$. However, if $r(\rho)$ is fixed, then $\tau_\rho(E(K))$ depends only on $\tau_\rho(E(K_1))$ and $\tau_\rho(E(K_2))$ because of the multiplicativity of the Reidemeister torsions. In this paper, we focus on the knot $K_0$ in Figure~\ref{fig:K0}, which is obtained from $4_1\sharp 4_1$ by a construction given by Cooper and Long~\cite{CoLo96}. Then we can find a family $C$ of representations of $\pi_1(E(K_0))$ and obtain the following result.
\begin{theorem} \label{thm:vary_conti} Let $\rho' \in C$. Then the function $\tau_\rho(E(K_0))$ varies continuously on the preimage $r^{-1}(r(\rho')) \subset X(E(K_0))$. \end{theorem}
Note that the knot $K_0$ is alternating, hyperbolic, and fibered (see Section~\ref{sec:Looper-Long}). Theorem~\ref{thm:vary_conti} is motivated by the authors' previous work~\cite{KiNo20}, which investigated the set \[ \mathit{RT}(M)=\{\tau_\rho(M) \mid [\rho] \in X^\mathrm{irr}(M),\ \text{acyclic}\} \subset \mathbb{C} \] of all values of the Reidemeister torsion for irreducible representations. In \cite{KiNo20}, the authors proved that for 2-bridge knots $K_1$ and $K_2$, the set $\mathit{RT}(\Sigma(K_1,K_2))$ is finite, while $X^\mathrm{irr}(\Sigma(K_1,K_2))$ has positive dimension. Here the splice $\Sigma(K_1,K_2)$ is the closed 3-manifold $E(K_1)\cup_h E(K_2)$, where $h$ is an orientation-reversing homeomorphism sending a meridian (resp.\ longitude) of $K_1$ to a longitude (resp.\ meridian) of $K_2$. The proof is based on the fact that $\tau_\rho(E(K_j))$ cannot vary continuously if $r_j(\rho)$ is fixed ($j=1,2$) as mentioned before. In contrast, Theorem~\ref{thm:vary_conti} implies the following consequence.
\begin{corollary} \label{cor:RT_infinite} Let $K$ be a $2$-bridge knot such that the polynomials $f_C(L,M)$ and $A_K(L,M) \in \mathbb{Z}[L,M]$ have a common zero $(L_0,M_0)$ with $L_0,M_0 \neq 0$ and with $L_0\neq \pm 1$ or $M_0\neq \pm 1$, where $A_K$ is the $A$-polynomial of $K$. Then the set $\mathit{RT}(\Sigma(K_0,K))$ is an infinite set. \end{corollary}
Here, $f_C(L,M) \in \mathbb{Z}[L,M]$ is the polynomial \begin{align*}
& L^2 M^{16}-L((M^{32}+1)-4(M^{30}+M^{2})-2(M^{28}+M^{4})+16(M^{26}+M^{6}) \\
& +13(M^{24}+M^{8})-32(M^{22}+M^{10})-46(M^{20}+M^{12})+20(M^{18}+M^{14}) +70M^{16})+M^{16}, \end{align*} which is derived from $r(C) \subset X(\partial E(K_0))$. Note that the condition on $K$ in Corollary~\ref{cor:RT_infinite} is generic enough. For instance, $3_1$ and $4_1$ satisfy the condition.
\subsection*{Acknowledgments} This work was supported by JSPS KAKENHI Grant Numbers JP19K03505 and JP20K14317.
\section{Preliminaries} \label{sec:Preliminaries}
\subsection{$\mathit{SL}(2,\mathbb{C})$-Reidemeister torsion and the torsion polynomial} \label{subsec:Rtorsion} Let $M$ be a connected compact 3-manifold whose boundary is empty or a disjoint union of tori and let $\rho\colon \pi_1(M) \to \mathit{SL}(2,\mathbb{C})$ be a representation, namely a group homomorphism. Suppose $\rho$ is acyclic, that is, $H_\ast(M;\mathbb{C}^2_\rho)=0$. We first endow $M$ with a cell decomposition so that $M$ is a CW-complex. This gives the cellular chain complex $\{C_\ast(M;\mathbb{C}^2_\rho), \partial_\ast\}$ with an (ordered) basis $\mathbf{c}_i$ of $C_i(M;\mathbb{C}^2_\rho)$ coming from $i$-cells. We next choose a basis $\mathbf{b}_i$ of $\operatorname{Im}\partial_{i+1}$ and its lift $\tilde{\mathbf{b}}_i$ to $C_{i+1}(M;\mathbb{C}^2_\rho)$. It follows from $H_\ast(M;\mathbb{C}^2_\rho)=0$ that the union $\mathbf{b}_i\tilde{\mathbf{b}}_{i-1}$ is a basis of $C_i(M;\mathbb{C}^2_\rho)$. Let $[\mathbf{b}_i\tilde{\mathbf{b}}_{i-1}/\mathbf{c}_i]$ denote the change of basis matrix from $\mathbf{c}_i$ to $\mathbf{b}_i\tilde{\mathbf{b}}_{i-1}$. Now, the \emph{$\mathit{SL}(2,\mathbb{C})$-Reidemeister torsion} (or simply \emph{Reidemeister torsion}) $\tau_\rho(M)$ of $M$ associated to $\rho$ is defined by \[ \tau_\rho(M) = \prod_{i=0}^3 \det[\mathbf{b}_i\tilde{\mathbf{b}}_{i-1}/\mathbf{c}_i]^{(-1)^{i+1}} \in \mathbb{C}^\times. \] See \cite{Joh88} and Section~1 of \cite{Kit94S,Kit94E} for details. For instance, when $M$ is the complement of a 2-bridge knot $K$, the fundamental group $\pi_1(M)$ has a presentation with generators $x,y$ and a relator $r$. Then, for an acyclic representation $\rho$, one can derive the formula \[ \tau_\rho(M) = \frac{\det(\rho(\partial r/\partial y))}{\det(\rho(x)-I_2)}, \] where $\partial r/\partial y$ denotes a Fox derivative of $r$.
We next consider the $\mathit{SL}(2,\mathbb{C})$-character variety $X(M)$, which is the GIT quotient of the $\mathit{SL}(2,\mathbb{C})$-representation variety $R(M)=\mathrm{Hom}(\pi_1(M),\mathit{SL}(2,\mathbb{C}))$. Let $X^\mathrm{irr}(M)$ denote the subspace of $X(M)$ consisting of conjugacy classes of irreducible representations. In \cite[p.~54]{Joh88}, when $X^\mathrm{irr}(M)$ is a finite set, the \emph{torsion polynomial} $\sigma_M(t)$ of $M$ is defined by \[ \sigma_M(t)=\prod_{[\rho] \in X^\mathrm{irr}(M),\ \text{acyclic}}(t-\tau_\rho(M)) \in \mathbb{Q}[t]. \]
\begin{remark} \label{rem:convention} Reidemeister torsion is sometimes defined to be $\prod_{i=0}^3 \det[\mathbf{b}_i\tilde{\mathbf{b}}_{i-1}/\mathbf{c}_i]^{(-1)^{i}}$. Our $\tau_\rho(M)$ is the same as Reidemeister torsion in \cite{Joh88, Kit16N, KiTr}, but the inverse of Reidemeister torsion in \cite{Kit94S, Kit94E}. This difference is crucial for the results in this paper. For instance, $\tau_\rho(S^3_1(4_1))$ is an algebraic integer, but its inverse is not since the constant term of $\sigma_{S^3_1(4_1)}(t)=t^3-12t^2+20t-8$ is not $\pm 1$. \end{remark}
\subsection{The $A$-polynomial of knots} Let $K$ be a knot in $S^3$ and recall that $E(K)$ denotes the complement of an open tubular neighborhood of $K$. We consider the algebraic subset $U$ of $R(E(K))$ consisting of representations $\rho$ such that $\rho(\lambda)$ and $\rho(\mu)$ are upper triangular matrices, where $\lambda$ and $\mu$ denote a preferred longitude and a meridian, respectively. Now we define a map $\xi\colon U \to \mathbb{C}^2$ by $\xi(\rho)=(\rho(\lambda)_{11}, \rho(\mu)_{11})$, where $X_{ij}$ denotes the $(i,j)$-entry of a matrix $X$. The Zariski closure of the image of $\xi$ is a union of algebraic curves and points. The product of the defining polynomials of the algebraic curves is known to be defined over $\mathbb{Z}$, normalized so that its coefficients are relatively prime. The resulting polynomial $A_K(L,M) \in \mathbb{Z}[L,M]$ is called the \emph{$A$-polynomial} of $K$, which is defined up to sign. Following \cite{CCGLS94}, we drop the factor $L-1$ corresponding to abelian representations from the $A$-polynomial, that is, $A_\text{unknot}(L,M)=1$.
For 2-bridge knots, their $A$-polynomials are computed from Riley polynomials (see \cite[Section~7]{CCGLS94} and \cite[Section~3]{CoLo96}). Here we recall the definition of the Riley polynomial introduced in \cite{Ril84}. Let $K$ be a 2-bridge knot. Then $\pi_1(E(K))$ has a presentation of the form $\ang{x,y \mid wx=yw}$, where $w$ is a word in $x$ and $y$. For $s,t \in \mathbb{C}$ with $s\neq 0$, we consider the representation $\rho_{s,t}$ of the free group $\ang{x,y}$ by \[ \rho_{s,t}(x) = \begin{pmatrix}
s & 1 \\
0 & 1/s \end{pmatrix},\quad \rho_{s,t}(y) = \begin{pmatrix}
s & 0 \\
-t & 1/s \end{pmatrix} \] and define the \emph{Riley polynomial} $\phi(s,t)$ of $K$ by \[ \phi(s,t)= \rho_{s,t}(w)_{11}+(s-s^{-1})\rho_{s,t}(w)_{12} \in \mathbb{Z}[s^{\pm 1},t]. \] It is shown in \cite[Theorem~1]{Ril84} that $\rho_{s,t}$ factors through $\pi_1(E(K))$ if and only if $\phi(s,t)=0$ holds. Moreover, every non-abelian representation is conjugate to some $\rho_{s,t}$. Note that $\rho_{s,t}$ is irreducible if and only if $t\neq 0$.
The $A$-polynomial of a 2-bridge knot is obtained from its Riley polynomial by eliminating $t$. More precisely, we consider the resultant \[ \mathrm{res}_t(L-\rho_{M,t}(\lambda)_{11}, \phi(M,t)) \in \mathbb{Z}[L, M^{\pm 1}] \] of polynomials in $t$, and the $A$-polynomial is a factor of this resultant. See Section~\ref{subsec:resultant} for the definition of the resultant.
\begin{remark} The variable $M$ in $A_K(L,M)$ can be taken as the variable $s$ in $\phi(s,t)$ for a 2-bridge knot $K$. In this paper, however, we use a pair $(L,M)$ for $A_K(L,M)$ and $(s,t)$ for $\phi(s,t)$ under the conventions of these polynomials. \end{remark}
\begin{example} Let us consider the case $K=J(2,4)=5_2$. Then $w=[y,x^{-1}]^2$, $\mu=x$, and $\lambda=[x,y^{-1}]^2[y,x^{-1}]^2$. Thus, we have \begin{align*}
\phi(s,t) &= -t^3+\left(2(s^2+s^{-2})-3\right) t^2+\left(-(s^4+s^{-4})+3 (s^2+s^{-2})-6\right) t+2(s^2+s^{-2})-3, \\
A_{5_2}(L,M) &= -L^3 + L^2 (1 - 2 M^2 - 2 M^4 + M^8 - M^{10}) \\
&\qquad + L M^4 (-1 + M^2 - 2 M^6 - 2 M^8 + M^{10}) - M^{14}. \end{align*}
Let us give an observation. Since $\phi(i,t)=-t^3-7t^2-14t-7$ and $A_{5_2}(L,i)=(L-1)^3$, there are three irreducible representations of $\pi_1(E(5_2))$ such that the restrictions to $\pi_1(\partial E(5_2))$ are identical. However, one can check that $\tau_{\rho_{i,t}}(E(5_2))=(-t^3-3t^2+2t+9)/2$, and thus the values $\tau_{\rho_{i,t}}(E(5_2))$ are distinct for the three representations. That is, the Reidemeister torsion $\tau_\rho(E(5_2))$ is not necessarily determined by the restriction $r(\rho)=\rho|_{\partial E(5_2)}$. On the other hand, $\tau_\rho(E(5_2))$ can only take finitely many values while $r(\rho)$ is fixed. In particular, it does not vary continuously. \end{example}
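As a numerical sanity check of this observation (ours, not part of the argument), the displayed polynomials can be evaluated at $s=M=i$ directly. Note that the $A$-polynomial is only defined up to sign; with the coefficients exactly as displayed, the value at $M=i$ comes out as $-(L-1)^3$.

```python
# Evaluate the displayed Riley polynomial of 5_2 at s = i and the
# displayed A-polynomial at M = i.  With the signs as displayed,
# A_{5_2}(L, i) = -(L - 1)^3 (the A-polynomial is defined up to sign).

def phi_52(s, t):
    u2 = s**2 + s**(-2)
    u4 = s**4 + s**(-4)
    return (2*u2 - 3)*t**2 + (-u4 + 3*u2 - 6)*t + 2*u2 - t**3 - 3

def A_52(L, M):
    return (-L**3 + L**2*(1 - 2*M**2 - 2*M**4 + M**8 - M**10)
            + L*M**4*(-1 + M**2 - 2*M**6 - 2*M**8 + M**10) - M**14)

for t in [0.5, -1.0, 2.0]:
    assert abs(phi_52(1j, t) - (-t**3 - 7*t**2 - 14*t - 7)) < 1e-9
for L in [0.5, -1.0, 2.0]:
    assert abs(A_52(L, 1j) + (L - 1)**3) < 1e-9
```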
\subsection{Resultants of polynomials and algebraic integers} \label{subsec:resultant} Let $R$ be an integral domain. Let $f(x)=a_0x^m+a_1x^{m-1}+\dots+a_m$ and $g(x)=b_0x^n+b_1x^{n-1}+\dots+b_n \in R[x]$ with $a_0,b_0 \neq 0$. Then the \emph{resultant} of $f$ and $g$ is defined by \[ \mathrm{res}_x(f,g) = \begin{vmatrix}
a_0 & a_1 & \cdots & a_{m-1} & a_m & & & \\
& a_0 & a_1 & \cdots & a_{m-1} & a_m & & \\
& & \ddots & \ddots & & \ddots & \ddots & \\
& & & a_0 & a_1 & \cdots & a_{m-1} & a_m \\
b_0 & b_1 & \cdots & b_{n-1} & b_n & & & \\
& b_0 & b_1 & \cdots & b_{n-1} & b_n & & \\
& & \ddots & \ddots & & \ddots & \ddots & \\
& & & b_0 & b_1 & \cdots & b_{n-1} & b_n \end{vmatrix} \in R, \] which is the determinant of an $(m+n)\times(m+n)$ matrix. It is well-known that $\mathrm{res}_x(f,g)=0$ if and only if $f$ and $g$ have a common zero in the algebraic closure of the quotient field of $R$. The resultant is useful to eliminate variables from equations. For instance, let $f(x,y)=2x^2 + y^2 - 1$ and $g(x,y)=x y - 1$ in $R[x]$, where $R=\mathbb{Z}[y]$. Then one has $\mathrm{res}_x(f,g)=y^4-y^2+2 \in \mathbb{Z}[y]$. In particular, the $y$-coordinates of common zeros of $f$ and $g$ are algebraic integers. We denote by $\mathbb{A}$ the ring of algebraic integers. It is known that the ring $\mathbb{A}$ is integrally closed in $\mathbb{C}$. See \cite[Sections~0.2 and 5.2]{MaRe03} for fundamental properties of algebraic integers and interesting relations to 3-manifolds.
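The worked example above can be checked mechanically; the following snippet (a sanity check of ours) builds the $3\times 3$ Sylvester matrix by hand and compares its determinant with $y^4-y^2+2$ at sample values of $y$.

```python
# Sylvester matrix of f = 2x^2 + 0*x + (y^2 - 1) and g = y*x - 1:
#   [ 2    0    y^2-1 ]
#   [ y   -1    0     ]
#   [ 0    y   -1     ]
# Its determinant is res_x(f, g), which should equal y^4 - y^2 + 2.

def res_x(y):
    a0, a1, a2 = 2.0, 0.0, y * y - 1.0
    b0, b1 = y, -1.0
    # 3x3 determinant expanded along the first row
    return (a0 * (b1 * b1 - 0.0)
            - a1 * (b0 * b1 - 0.0)
            + a2 * (b0 * b0 - 0.0))

for y in [-2.0, -0.5, 0.0, 1.5, 3.0]:
    assert abs(res_x(y) - (y**4 - y**2 + 2)) < 1e-9
```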
Throughout this paper, we use some basic properties of the resultant. For example, we use the facts that $\mathrm{res}_x(f,g)=(-1)^{mn}b_0^m\prod_{g(\zeta)=0}f(\zeta)$ and $\mathrm{res}_x(f,gh)=\mathrm{res}_x(f,g)\,\mathrm{res}_x(f,h)$ (see \cite[Chapter~3, Section~1]{CLO05} for instance).
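The first of these product formulas, in its sign-explicit form $\mathrm{res}_x(f,g)=(-1)^{mn}b_0^m\prod_{g(\zeta)=0}f(\zeta)$, can be illustrated on a small example of our own choosing, $f=x^2-2$ and $g=3x-3$:

```python
# For f = x^2 - 2 (degree m = 2) and g = 3x - 3 (degree n = 1, b0 = 3),
# the Sylvester determinant should equal (-1)^{mn} * b0^m * f(1),
# since zeta = 1 is the only root of g.

def det3(M):
    return (M[0][0]*(M[1][1]*M[2][2] - M[1][2]*M[2][1])
            - M[0][1]*(M[1][0]*M[2][2] - M[1][2]*M[2][0])
            + M[0][2]*(M[1][0]*M[2][1] - M[1][1]*M[2][0]))

sylvester = [[1, 0, -2],   # one row of f's coefficients (n = 1)
             [3, -3, 0],   # two rows of g's coefficients (m = 2)
             [0, 3, -3]]
f = lambda x: x**2 - 2
assert det3(sylvester) == (-1)**(2*1) * 3**2 * f(1)   # both sides are -9
```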
The rest of this section is devoted to showing the key lemma for the proof of Proposition~\ref{prop:twist_knot}. Before starting the proof, we give an easy observation. For $f(x)=a_0x^m+a_1x^{m-1}+\dots+a_m$, consider a one-row matrix $A=\begin{pmatrix}
a_0 & a_1 & \cdots & a_m \end{pmatrix}$. By adding $x$ times the $j$th entry to the $(j+1)$st entry for $j=1,2,\dots,m$ in this order, we obtain $\begin{pmatrix}
a_0 & a_0x+a_1 & \cdots & f(x) \end{pmatrix}$. If we apply this procedure $k+1$ times ($0\leq k<m$) to $A$, then the $i$th entry is equal to $\sum_{j=0}^{i-1}\binom{k+j}{j}a_{i-1-j}x^j \in R[x]$. One can check this by the identity $\binom{k+j-1}{j-1}+\binom{k+j-1}{j} = \binom{k+j}{j}$ for binomial coefficients. In particular, the $(m-k+1)$st entry is equal to $\frac{1}{k!}f^{(k)}(x)$, where $f^{(k)}$ denotes the $k$th derivative of $f$. For instance, when $m=4$ and $k=2$, we have \[ \begin{pmatrix}
a_0 & 3 a_0 x+a_1 & 6 a_0 x^2+3 a_1 x+a_2 & \ast & \ast \end{pmatrix}. \] Its 3rd entry is actually equal to $\frac{1}{2!}f^{(2)}(x) = \frac{1}{2}(12a_0 x^2+6a_1 x+2a_2)$.
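This observation is easy to test numerically; the following sketch (ours) applies the Horner-type row operation repeatedly and checks the relevant entry against $\frac{1}{2!}f^{(2)}(x)$ for $m=4$, $k=2$:

```python
# Apply the row operation "add x times entry j to entry j+1" across the
# whole row, repeated `times` times, and compare the 3rd entry with
# f''(x)/2! as in the m = 4, k = 2 example above.

def passes(coeffs, x, times):
    row = list(coeffs)
    for _ in range(times):
        for j in range(len(row) - 1):
            row[j + 1] += x * row[j]
    return row

a = [3.0, -1.0, 2.0, 5.0, -7.0]   # f(x) = 3x^4 - x^3 + 2x^2 + 5x - 7
x = 1.5
row = passes(a, x, times=3)       # k + 1 = 3 applications, i.e. k = 2
second_deriv_over_2 = (12 * a[0] * x**2 + 6 * a[1] * x + 2 * a[2]) / 2
assert abs(row[2] - second_deriv_over_2) < 1e-9   # 3rd entry = f''(x)/2!
```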
\begin{lemma} \label{lem:factor_of_res} Let $\zeta \in R$ and $f,g \in R[x]$. If $g(\zeta)^{m-k}$ divides $\frac{1}{k!}f^{(k)}(\zeta)$ for all $0\leq k<m$, then $g(\zeta)^m$ divides $\mathrm{res}_x(f,g)$. \end{lemma}
\begin{proof} In the definition of $\mathrm{res}_x(f,g)$, we first add $\zeta$ times the $j$th column to the $(j+1)$st column for $j=1,2,\dots,m+n-1$ in this order. We next add $-\zeta$ times the $i$th row to the $(i-1)$st row for $i=2,3,\dots,n,n+2,n+3,\dots,m+n$: \begin{align*} \mathrm{res}_x(f,g) &= \begin{vmatrix}
a_0 & a_0\zeta+a_1 & \cdots & & f(\zeta) & \cdots & & \zeta^{n-1}f(\zeta) \\
& a_0 & & \cdots & & f(\zeta) & & \\
& & \ddots & \ddots & & \ddots & \ddots & \\
& & & a_0 & a_0\zeta+a_1 & \cdots & & f(\zeta) \\
b_0 & b_0\zeta+b_1 & \cdots & & g(\zeta) & \cdots & & \zeta^{m-1}g(\zeta) \\
& b_0 & & \cdots & & g(\zeta) & & \\
& & \ddots & \ddots & & \ddots & \ddots & \\
& & & b_0 & b_0\zeta+b_1 & \cdots & & g(\zeta) \end{vmatrix}\\ &= \begin{vmatrix}
a_0 & a_1 & \cdots & a_{m-1} & a_m & & & \\
& a_0 & a_1 & \cdots & a_{m-1} & a_m & & \\
& & \ddots & \ddots & & \ddots & \ddots & \\
& & & a_0 & a_0\zeta+a_1 & \cdots & & f(\zeta) \\
b_0 & b_1 & \cdots & b_{n-1} & b_n & & & \\
& b_0 & b_1 & \cdots & b_{n-1} & b_n & & \\
& & \ddots & \ddots & & \ddots & \ddots & \\
& & & b_0 & b_0\zeta+b_1 & \cdots & & g(\zeta) \end{vmatrix}. \end{align*} Since $g(\zeta)^m \mid f(\zeta)$, one can divide the $(m+n)$th column by $g(\zeta)$. We then sweep out the bottom row using the $1$ at the bottom right: \begin{align*} \mathrm{res}_x(f,g) &= g(\zeta) \begin{vmatrix}
a_0 & a_1 & \cdots & a_{m-1} & a_m & & & \\
& a_0 & a_1 & \cdots & a_{m-1} & a_m & & \\
& & \ddots & \ddots & & \ddots & \ddots & \\
& & & a_0 & a_0\zeta+a_1 & \cdots & & f(\zeta)g(\zeta)^{-1} \\
b_0 & b_1 & \cdots & b_{n-1} & b_n & & & \\
& b_0 & b_1 & \cdots & b_{n-1} & b_n & & \\
& & \ddots & \ddots & & \ddots & \ddots & \\
& & & b_0 & b_0\zeta+b_1 & \cdots & & 1 \end{vmatrix}\\ &= g(\zeta) \begin{vmatrix}
a_0 & a_1 & \cdots & a_{m-1} & a_m & & & \\
& \ddots & \ddots & & \ddots & \ddots & & \\
& & a_0 & a_1 & \cdots & a_{m-1} & a_m & \\
& & & a_0-\ast & a_0\zeta+a_1-\ast & \cdots & & f(\zeta)g(\zeta)^{-1} \\
b_0 & b_1 & \cdots & b_{n-1} & b_n & & & \\
& \ddots & \ddots & & \ddots & \ddots & & \\
& & b_0 & b_1 & \cdots & b_{n-1} & b_n & \\
& & & 0 & 0 & \cdots & 0 & 1 \end{vmatrix}, \end{align*} where $\ast$'s are divisible by $g(\zeta)^{m-1}$. We now reduce the size of the matrix by 1 and apply almost the same column and row operations as above: \begin{align*} \mathrm{res}_x(f,g) &= g(\zeta) \begin{vmatrix}
a_0 & a_1 & \cdots & a_{m-1} & a_m & & \\
& \ddots & \ddots & & \ddots & \ddots & \\
& & a_0 & a_1 & \cdots & a_{m-1} & a_m \\
& & & a_0-\ast & a_0\zeta+a_1-\ast & \cdots & \\
b_0 & b_1 & \cdots & b_{n-1} & b_n & & \\
& \ddots & \ddots & & \ddots & \ddots & \\
& & b_0 & b_1 & \cdots & b_{n-1} & b_n \\ \end{vmatrix} \\ &= g(\zeta) \begin{vmatrix}
a_0 & a_1 & \cdots & & a_{m-1} & a_m & & \\
& \ddots & \ddots & & & \ddots & \ddots & \\
& & a_0 & a_0\zeta+a_1 & \cdots & & & \frac{1}{0!}f(\zeta) \\
& & & a_0-\ast & 2a_0\zeta+a_1-\ast & \cdots & & \frac{1}{1!}f^{(1)}(\zeta)-\ast \\
b_0 & b_1 & \cdots & b_{n-1} & b_n & & & \\
& \ddots & \ddots & & \ddots & \ddots & & \\
& & b_0 & b_1 & \cdots & b_{n-1} & b_n & \\
& & & b_0 & b_0\zeta+b_1 & \cdots & & g(\zeta) \end{vmatrix}. \end{align*} Since $g(\zeta)^{m-1} \mid \frac{1}{1!}f^{(1)}(\zeta)-\ast$, one can divide the $(m+n-1)$st column by $g(\zeta)$. By sweeping out the bottom row, $\mathrm{res}_x(f,g)$ is equal to \begin{align*} g(\zeta)^2 \begin{vmatrix}
a_0 & a_1 & \cdots & & a_{m-1} & a_m & & \\
& \ddots & \ddots & & & \ddots & \ddots & \\
& & a_0-\ast & a_0\zeta+a_1-\ast & \cdots & & & f(\zeta)g(\zeta)^{-1} \\
& & & a_0-\ast' & 2a_0\zeta+a_1-\ast' & \cdots & & (f^{(1)}(\zeta)-\ast)g(\zeta)^{-1} \\
b_0 & b_1 & \cdots & b_{n-1} & b_n & & & \\
& \ddots & \ddots & & \ddots & \ddots & & \\
& & b_0 & b_1 & \cdots & b_{n-1} & b_n & \\
& & & 0 & 0 & \cdots & 0 & 1 \end{vmatrix}, \end{align*}
where each $\ast'$ is divisible by $g(\zeta)^{m-2}$. The assumption and the previous observation guarantee that one can continue this process until the size of the matrix is $n\times n$. We finally obtain $\mathrm{res}_x(f,g) = g(\zeta)^m|X|$, where $X$ is some $n\times n$ matrix over $R$, and hence $g(\zeta)^m \mid \mathrm{res}_x(f,g)$. \end{proof}
\section{Reidemeister torsion and algebraic integers} \label{sec:algebraic_integer} The aim of this section is to prove Theorems~\ref{thm:twist_knot} and \ref{thm:4_1}.
\subsection{The proof of Theorem~\ref{thm:twist_knot}}
\begin{lemma} Let $K$ be a $2$-bridge knot. Then the coefficient of the leading term of the $A$-polynomial $A_K(L,M)$ with respect to $L$ is a unit, namely a power of $M$ up to sign. Moreover, the same holds for the coefficient of the lowest degree term. \end{lemma}
\begin{proof} First recall that $\phi(s,t) \in \mathbb{Z}[s^{\pm 1},t]$ denotes the Riley polynomial of $K$. Since $A_K(L,M)$ is a factor of the resultant \[ f(L,M)=\mathrm{res}_t(\phi(M,t), L-\rho_{M,t}(\lambda)_{11}) \in \mathbb{Z}[L,M^{\pm 1}], \] it suffices to prove that the coefficient of the leading term of $f(L,M)$ with respect to $L$ is a unit. By \cite[Lemma~2]{Ril84}, the coefficient of the leading term of $\phi(M,t)$ with respect to $t$ is $\pm 1$. It follows from the definition of the resultant that the coefficient of the leading term of $f(L,M)$ with respect to $L$ is a power of $M$ up to sign.
Now, the latter follows from the equality $A_K(L,M)=A_K(L^{-1},M^{-1})$ up to units in $\mathbb{Z}[L,M]$ (see \cite[Proposition~4.2(1)]{CoLo96}). \end{proof}
Before the next lemma, we recall the notion of a boundary slope of a knot $K$. For an incompressible surface $S$ in $E(K)$, its boundary consists of parallel simple closed curves on $\partial E(K)$. Thus, $S$ defines an element $p[\mu]+q[\lambda] \in H_1(\partial E(K);\mathbb{Z})$ up to sign, and then $p/q \in \mathbb{Q}\cup\{1/0\}$ is called a \emph{boundary slope} of $K$. It is proved in \cite[Theorem~3.4]{CCGLS94} that the slope of a side of the Newton polygon $N(A_K)$ of $A_K(L,M)$ is a boundary slope of $K$. Combining this theorem and \cite[Theorem~1(b)]{HaTh85}, we have that the slopes of the sides of $N(A_K)$ are even integers for a 2-bridge knot $K$.
\begin{lemma} \label{lem:M_in_A} Let $K$ be a $2$-bridge knot and let $p=\pm 1$ and $q>0$. Then the coefficients of the highest and lowest degree terms of $\mathrm{res}_L(A_K(L,M), M^pL^q-1) \in \mathbb{Z}[M^{\pm 1}]$ are $\pm 1$. \end{lemma}
\begin{proof}
It suffices to discuss only the highest degree term of the resultant since $A_K(L,M)=A_K(L^{-1},M^{-1})$ up to units in $\mathbb{Z}[L,M]$. Let $d=\deg_L A_K(L,M)$ and suppose $p=-1$. Let $c L^iM^j$ be the highest degree term of $A_K(L,M)$ with respect to the lexicographic order of $(\deg_M, \deg_L)$. Since this monomial corresponds to a corner $v$ of the Newton polygon $N(A_K)$, \cite[Theorem~11.3]{CoLo96} implies $c=\pm 1$. When we choose the $c M^j$'s in the first $q$ rows of the matrix appearing in the resultant, the resulting term is $\pm(c M^j)^{q}(M^{-1})^{d-i}1^i = \pm M^{jq-d+i}$. Now, let $l$ be the line through $v$ with slope $p=-1$. Since the slopes of the sides of $N(A_K)$ are even, $N(A_K)$ lies in the closed lower-left half-plane bounded by $l$. Therefore, all terms in the resultant other than $\pm M^{jq-d+i}$ have lower degree in $M$.
In the case $p=1$, we can show the statement in a similar way by considering the lexicographic order of $(\deg_M, -\deg_L)$. \end{proof}
As a consequence of Lemma~\ref{lem:M_in_A}, if $\rho_{s_0,t_0}$ is a representation satisfying the condition of Theorem~\ref{thm:twist_knot}, then $s_0$ and $s_0^{-1}$ are algebraic integers.
\begin{lemma} \label{lem:s0t0} For a root $(s_0,t_0)$ of $\phi(s,t)$, if $s_0, s_0^{-1} \in \mathbb{A}$, then $t_0 \in \mathbb{A}$. \end{lemma}
\begin{proof} By \cite[Lemma~2]{Ril84}, $\phi(s_0,t) \in \mathbb{A}[t]$ is monic. Since the ring $\mathbb{A}$ of algebraic integers is integrally closed in $\mathbb{C}$, we conclude $t_0 \in \mathbb{A}$. \end{proof}
In the rest of this section, let $K=J(2,2m)$ ($m\neq 0$). Define $d(m)=\deg_t\phi(1,t)$; then, by \cite[Lemma~2]{Ril84}, we have \[ d(m)= \deg_t\phi(s,t)= \begin{cases}
2m-1 & \text{if $m>0$,} \\
-2m & \text{if $m<0$.} \end{cases} \]
\begin{lemma} \label{lem:diff_of_A}
$\frac{1}{n!}\frac{\partial^nA_K}{\partial L^n}(-1,M)$ is divisible by $(M^2-1)^{d(m)-n}$ for $0\leq n< d(m)$. \end{lemma}
\begin{proof} The proof is by induction on $m$. The cases $m=-1,0,1,2$ are directly confirmed by \cite[Theorem~1]{HoSh04}. For instance, in the case $m=2$, we have \begin{align*}
A_K(-1,M) &= (M^2-1)^3 \left(2 M^8+4 M^6+5 M^4+4 M^2+2\right), \\
\frac{1}{1!}\frac{\partial A_K}{\partial L}(-1,M) &= -(M^2-1)^2 \left(M^{10}-M^6-4 M^4-6 M^2-5\right), \\
\frac{1}{2!}\frac{\partial^2A_K}{\partial L^2}(-1,M) &= (M^2-1) \left(M^8+2 M^2+4\right). \end{align*} Suppose $m>2$. Following \cite{HoSh04}, let \begin{align*}
x(L,M)&= -L+L^2 +2LM^2 +M^4 +2LM^4 +L^2M^4 +2LM^6 +M^8 -LM^8, \\
y(L,M)&= M^4(L+M^2)^4. \end{align*} Then one can check that $(M^2-1)^{2-k} \mid \frac{1}{k!}\frac{\partial^k x}{\partial L^k}(-1,M)$ for $0\leq k\leq 2$ and $(M^2-1)^{4-k} \mid\frac{1}{k!} \frac{\partial^k y}{\partial L^k}(-1,M)$ for $0\leq k\leq 4$. By the recursive relation in \cite[Theorem~1]{HoSh04} and Leibniz's rule, we have \[ \frac{1}{n!}\frac{\partial^nA_K}{\partial L^n} = \sum_{k=0}^{2} \frac{1}{k!}\frac{\partial^k x}{\partial L^k} \frac{1}{(n-k)!}\frac{\partial^{n-k}A_{J(2,2m-2)}}{\partial L^{n-k}} - \sum_{k=0}^{4} \frac{1}{k!}\frac{\partial^k y}{\partial L^k} \frac{1}{(n-k)!}\frac{\partial^{n-k}A_{J(2,2m-4)}}{\partial L^{n-k}}. \] When $L=-1$, by the induction hypothesis, each term in the sums is divisible by $(M^2-1)^{2m-1-n}$.
The cases $m<-1$ also follow from the recursive relation in a similar way. \end{proof}
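As a small numerical illustration (ours) of the divisibility statements used in the proof above, one can check the base case $k=0$ for the polynomial $x(L,M)$ defined there: one finds $x(-1,M)=2(1-M^2)(1-M^6)$, which exhibits the factor $(M^2-1)^2$ since $1-M^6=(1-M^2)(1+M^2+M^4)$. The explicit factorization is ours, verified numerically below.

```python
# x(L, M) as defined in the proof above; at L = -1 it should factor as
# 2(1 - M^2)(1 - M^6), hence be divisible by (M^2 - 1)^2.

def x_poly(L, M):
    return (-L + L**2 + 2*L*M**2 + M**4 + 2*L*M**4 + L**2*M**4
            + 2*L*M**6 + M**8 - L*M**8)

for M in [0.5, -1.5, 2.0, 3.0]:
    assert abs(x_poly(-1, M) - 2*(1 - M**2)*(1 - M**6)) < 1e-6
```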
\begin{proposition} \label{prop:twist_knot} Let $p=\pm 1$. Then, $\mathrm{res}_L(A_K(L,M), M^pL^q-1)$ is divisible by $(M^p(-1)^{q}-1)^{d(m)}$. \end{proposition}
\begin{proof} Note that $M^p(-1)^{q}-1$ divides $M^2-1$ up to units when $p=\pm 1$. Now, Lemma~\ref{lem:diff_of_A} allows us to apply Lemma~\ref{lem:factor_of_res} to $\zeta= -1 \in \mathbb{C}[M]$ and \[ f(L,M)=A_K(L,M),\ g(L,M)=ML^q-1 \in \mathbb{C}[M][L]=\mathbb{C}[L,M] \] when $p=1$; for $p=-1$, we take $g(L,M)=L^q-M$ instead, which differs from $M^{-1}L^q-1$ only by the unit $M^{-1}$. This completes the proof. \end{proof}
Here recall that $\tau_\rho(E(K))$ can be written in the form \[ \frac{\det(\rho_{s_0,t_0}(\partial r/\partial y))}{\det(\rho_{s_0,t_0}(x)-I_2)} = -s_0(s_0-1)^{-2}F(s_0,t_0), \] where $F(s,t) \in \mathbb{Z}[s^{\pm 1},t]$. In the situation of Theorem~\ref{thm:twist_knot}, $s_0$ and $F(s_0,t_0)$ are algebraic integers by Lemmas~\ref{lem:M_in_A} and \ref{lem:s0t0}. Hence, to prove the theorem, it suffices to show that $(s_0-1)^{-1}$ is an algebraic integer.
\begin{lemma} \label{lem:alpha-1} Let $f(x) \in \mathbb{Z}[x]$ be a monic polynomial and let $\alpha \in \mathbb{C}\setminus\{1\}$ be a root of $f$. If $f(1)=\pm 1$, then $(\alpha-1)^{-1}$ is an algebraic integer. \end{lemma} \begin{proof} Let $\beta=(\alpha-1)^{-1}$, namely $\alpha=\beta^{-1}+1$. Since $f$ is monic, we have \[ 0=f(\alpha)=f(\beta^{-1}+1)=\beta^{-\deg f}+\cdots+f(1). \] Multiplying by $\beta^{\deg f}$ shows that $\beta$ is a root of $g(x)=\pm x^{\deg f}f(x^{-1}+1) \in \mathbb{Z}[x]$, whose leading coefficient is $\pm f(1)$. Since $f(1)=\pm 1$, the sign can be chosen so that $g$ is monic, and hence $\beta$ is an algebraic integer. \end{proof}
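To illustrate the lemma on a sample polynomial of our own choosing, take $f(x)=x^2-3x+1$, so that $f(1)=-1$; then $\beta=(\alpha-1)^{-1}$ satisfies the monic integer polynomial $x^2+x-1$:

```python
# f(x) = x^2 - 3x + 1 has f(1) = -1; for the root alpha = (3 + sqrt 5)/2,
# beta = 1/(alpha - 1) should be a root of the monic polynomial
# g(x) = -x^2 f(1/x + 1) = x^2 + x - 1 over the integers.

import math

alpha = (3 + math.sqrt(5)) / 2
assert abs(alpha**2 - 3*alpha + 1) < 1e-9     # alpha is a root of f

beta = 1 / (alpha - 1)
assert abs(beta**2 + beta - 1) < 1e-9         # beta is a root of x^2 + x - 1
```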
\begin{proof}[Proof of Theorem~\ref{thm:twist_knot}] Recall that $q$ is odd. Let us show that $(s_0-1)^{-1} \in \mathbb{A}$. By Lemma~\ref{lem:alpha-1}, it suffices to find a monic polynomial $f(s)$ such that $f(s_0)=0$ and $f(1)=\pm 1$. Proposition~\ref{prop:twist_knot} implies that \[ f(s):= (s+1)^{-d(m)} \mathrm{res}_L(A_K(L,s), sL^q-1) \] lies in $\mathbb{Z}[s^{\pm 1}]$ and is monic. By \cite[Corollary~3(i)]{HoSh04}, we have \begin{align*}
f(1) &= (1+1)^{-d(m)} \mathrm{res}_L(\pm(L+1)^{d(m)}, L^q-1) \\
&= \pm 2^{-d(m)} \mathrm{res}_L(L+1, L^q-1)^{d(m)} = \pm 1. \end{align*} Therefore, $(s_0-1)^{-1}$ is an algebraic integer. \end{proof}
\begin{remark} When $q$ is even, $(s_0-1)^{-1}$ is not necessarily an algebraic integer. For example, in the case where $J(2,-2)=4_1$ and $(p,q)=(1,2)$, the $s_0$'s are the roots of the polynomial \[ f(x)=x^{14}+2 x^{13}+x^{12}-4 x^{10}-8 x^9-10 x^8-13 x^7-10 x^6-8 x^5-4 x^4+x^2+2 x+1. \] Here, $f(x)$ is irreducible over $\mathbb{Q}$: it is the product of two irreducible polynomials of degree $7$ over the prime field $\mathbb{F}_2$ of order $2$, and the product of two irreducible polynomials of degrees $4$ and $10$ over $\mathbb{F}_3$, and no proper factorization is compatible with both degree patterns. Thus, if $(s_0-1)^{-1} \in \mathbb{A}$, then the constant term of its minimal polynomial, which equals $\pm f(1)^{-1}$, is an integer, and hence $f(1)=\pm 1$. However, we have $f(1)=-49$.
On the other hand, $\tau_\rho(E(4_1)) = -2(s_0+s_0^{-1}-1)$ is an algebraic integer since $s_0^{\pm 1} \in \mathbb{A}$. The authors predict that Theorem~\ref{thm:twist_knot} holds for every integer $q$. \end{remark}
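The value $f(1)=-49$ quoted in the remark above is just the sum of the coefficients of $f$, which is quickly confirmed:

```python
# Coefficients of f(x) from the remark above, from degree 14 down to 0.
coeffs = [1, 2, 1, 0, -4, -8, -10, -13, -10, -8, -4, 0, 1, 2, 1]
assert sum(coeffs) == -49   # f(1) is the sum of the coefficients
```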
\subsection{The proof of Theorem~\ref{thm:4_1}} This subsection is devoted to proving that $\tau_\rho(S^3_{p/q}(4_1))$ is an algebraic integer when $p=1$ or $q=1$. We also observe that it is an algebraic integer when $p/q=2/3$ in Example~\ref{ex:2/3}. Moreover, $\tau_\rho(S^3_{1/2}(5_2))$ is an algebraic integer as well (see Example~\ref{ex:5_2} and Problem~\ref{prob:tau_closed}).
Let $q\neq 0$ and let $\rho$ be an acyclic representation of $\pi_1(S^3_{p/q}(4_1))$. Then \[ \tau_\rho(S^3_{p/q}(4_1)) = \frac{-2(s+s^{-1}-1)}{\det(\rho(\mu^{p'}\lambda^{q'})-I_2)} = 2(s+s^{-1}-1)s^{p'}L^{q'}(s^{p'}L^{q'}-1)^{-2}, \] where $pq'-qp'=1$ and $L=\rho(\lambda)_{11}$. We divide the proof of Theorem~\ref{thm:4_1} into two cases: (I) $p=1$, $q\neq 0$ and (II) $p\neq 0$, $q=1$. Recall that \begin{align*} A_{4_1}(L,M) &= L^2M^4+L(-M^8-1+M^6+M^2+2M^4)+M^4 \\
&= \left(L^2+L(-(M+M^{-1})^4+5(M+M^{-1})^2-2)+1\right)M^4. \end{align*}
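As a sanity check (ours, not part of the proof), the two displayed expressions for $A_{4_1}(L,M)$ agree numerically:

```python
# Compare the expanded and factored forms of A_{4_1}(L, M) displayed above.

def A41_expanded(L, M):
    return L**2*M**4 + L*(-M**8 - 1 + M**6 + M**2 + 2*M**4) + M**4

def A41_factored(L, M):
    u = M + 1/M
    return (L**2 + L*(-u**4 + 5*u**2 - 2) + 1) * M**4

for (L, M) in [(0.5, 2.0), (-1.5, 0.5), (2.0, -3.0)]:
    assert abs(A41_expanded(L, M) - A41_factored(L, M)) < 1e-6
```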
\begin{proof}[Proof of Theorem~\ref{thm:4_1}(I)] Let $p=1$, $q\neq 0$. Then one can take $(p',q')=(0,1)$. By Lemmas~\ref{lem:M_in_A} and \ref{lem:s0t0}, it suffices to show that $(L_0-1)^{-1}$ is an algebraic integer. First, by direct computations, we have $\operatorname{tr}\rho(\mu)\neq -2$. Indeed, if $s=-1$, then \[ \rho(\lambda)= \begin{pmatrix}
-1 & \pm 2 i \sqrt{3} \\
0 & -1 \\ \end{pmatrix}, \] and hence $\rho(x)\rho(\lambda)^q \neq I_2$. See also \cite[Lemma~6.2]{KiTr}.
Let $f(L)=A_{4_1}(L,L^{-q})$ and $g(L)=L^{-q}+L^q \in \mathbb{Z}[L^{\pm 1}]$. Then, we have $f(-1)=0$ and \begin{align*} f'(L) &= \left(2L+(-g(L)^4+5g(L)^2-2)+L(-4g'(L)g(L)^3+10g'(L)g(L))\right)L^{-4q} \\
&\qquad -4q\left(L^2+L(-g(L)^4+5g(L)^2-2)+1\right)L^{-4q-1}. \end{align*} It follows from $g'(-1)=0$ that $f'(-1)=0$, and hence $f(L)=(L+1)^2h(L)$ for some $h(L) \in \mathbb{Z}[L^{\pm 1}]$. Here, $h(L_0)=0$ and $h(1)=1$ since $L_0\neq -1$ and $f(1)=4$. Therefore, Lemma~\ref{lem:alpha-1} implies $(L_0-1)^{-1} \in \mathbb{A}$. \end{proof}
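The double vanishing of $f(L)=A_{4_1}(L,L^{-q})$ at $L=-1$ used in the proof above can be confirmed numerically (our check, here for the sample value $q=3$):

```python
# f(L) = A_{4_1}(L, L^{-q}) should vanish to second order at L = -1;
# tested here for q = 3 via direct evaluation and a central difference.

def A41(L, M):
    return L**2*M**4 + L*(-M**8 + M**6 + 2*M**4 + M**2 - 1) + M**4

q = 3
f = lambda L: A41(L, L**(-q))
assert abs(f(-1.0)) < 1e-9                               # f(-1) = 0
h = 1e-5
assert abs((f(-1.0 + h) - f(-1.0 - h)) / (2*h)) < 1e-3   # f'(-1) = 0
```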
Next, we discuss the case (II), that is, $q=1$. Then one can take $(p',q')=(-1,0)$. If $(s-1)^{-1}$ is an algebraic integer, we can obtain the desired result. However, in the case $p=4$, we have $A_{4_1}(s^{-4},s)=s^{-2}(s^2+1)^2$, and hence $s=\pm i$. The minimal polynomial of $(\pm i-1)^{-1}$ is $2x^2+2x+1$, and thus these numbers are not algebraic integers. This observation suggests that one needs to show directly that $\tau_\rho(S^3_{p}(4_1)) = 2(s^2-s+1)/(s-1)^2$ is an algebraic integer. Note that \[ A_{4_1}(s^{-p},s) = s^{2-p}+2 s^{4-p}+s^{6-p}-s^{8-p}-s^{-p}+s^{4-2 p}+s^4. \]
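Both computations in this paragraph are easy to check numerically (our own verification):

```python
# A_{4_1}(s^{-p}, s) as displayed above; at p = 4 it should collapse to
# s^{-2}(s^2 + 1)^2, and (±i - 1)^{-1} should satisfy 2x^2 + 2x + 1 = 0.

def A41_Laurent(p, s):
    return (s**(2-p) + 2*s**(4-p) + s**(6-p) - s**(8-p) - s**(-p)
            + s**(4-2*p) + s**4)

for s in [0.7, -1.3, 2.0]:
    assert abs(A41_Laurent(4, s) - (s**2 + 1)**2 / s**2) < 1e-9

x = 1 / (1j - 1)
assert abs(2*x**2 + 2*x + 1) < 1e-12   # not monic, so x is not in A
```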
\begin{lemma} \label{lem:div_by_16} The coefficient of the leading term of \[ \mathrm{res}_s(\tau(s-1)^2-2(s^2-s+1), A_{4_1}(s^{-p},s)) \in \mathbb{Z}[\tau] \] is equal to $\pm 16$. Moreover, the resultant is divisible by $4(2\tau-3)^2$ if $p$ is odd and by $16$ if $p$ is even. \end{lemma}
\begin{proof} Let $n=\deg A_{4_1}(s^{-p},s)$ and \begin{align*} h(\tau) &= \mathrm{res}_s(\tau(s-1)^2-2(s^2-s+1), A_{4_1}(s^{-p},s)) \\
&= \begin{vmatrix}
\tau-2 & -2\tau+2 & \tau-2 & & \\
& \ddots & & \ddots & \\
& & \tau-2 & -2\tau+2 & \tau-2 \\
\ast & \cdots & & \ast & \\
& \ast & \cdots & & \ast \end{vmatrix}. \end{align*} Then, by the definition of the resultant, \begin{align*}
\frac{1}{n!}\frac{d^n}{d\tau^n}h(\tau) &= \mathrm{res}_s((s-1)^2, A_{4_1}(s^{-p},s)) \\
&= \pm A_{4_1}(1,1)^2 = \pm 16. \end{align*} When $p$ is odd, we first note that $(s+1)^2 \mid A_{4_1}(s^{-p},s)$. Thus, $h(\tau)$ is a multiple of $\mathrm{res}_s\left(\tau(s-1)^2-2(s^2-s+1), (s+1)^2\right) = \pm(4\tau-6)^2$.
We next consider the case $p$ even. For the matrix in the resultant $h(\tau)$, we add the $j$th column to the $(j+1)$st column for $j=1,2,\dots,n+1$ in this order twice. Then one obtains \[ \begin{vmatrix}
\tau-2 & -2 & -4 & \cdots & & -2n-2 \\
& \tau-2 & -2 & -4 & \cdots & -2n \\
& & \ddots & & & \vdots \\
& & & \tau-2 & -2 & -4 \\
\ast & \cdots & & \ast & \ast' & \ast' \\
& \ast & \cdots & \ast & \ast' & \ast' \end{vmatrix}, \] where the four $\ast'$'s are multiples of $4$. Since the $\tau$'s appear only on the diagonal and the entries in the upper triangle are even, the coefficient of $\tau^k$ is a multiple of $16$ unless $k=n-1, n-2, n-3$. Here note that there is an even number of odd integers in the $(n+i)$th row for $i=1,2$ since $2 \mid A_{4_1}(1,1)$. Thus, the coefficient of $\tau^k$ is a multiple of $16$ when $k=n-1, n-2$.
Now, the coefficients in $h(\tau)$ are multiples of $16$ except the coefficient of $\tau^{n-3}$. Hence, it suffices to see that $16 \mid h(1)=\mathrm{res}_s(-s^2-1,A_{4_1}(s^{-p},s))$. By a property of the resultant, we have $h(1) = \pm A_{4_1}(i^{-p},i)A_{4_1}((-i)^{-p},-i)$, which is equal to $0$ or $\pm 16$. \end{proof}
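The last step can be checked numerically for small even $p$ (a sanity check added here, not part of the proof): one finds that the product $A_{4_1}(i^{-p},i)\,A_{4_1}((-i)^{-p},-i)$ is $0$ when $p\equiv 0 \pmod 4$ and $16$ when $p\equiv 2 \pmod 4$.

```python
def A41(L, M):
    # A-polynomial of the figure-eight knot 4_1
    return L**2 * M**4 + L * (-M**8 - 1 + M**6 + M**2 + 2 * M**4) + M**4

# h(1) = +- A_{4_1}(i^{-p}, i) * A_{4_1}((-i)^{-p}, -i); for even p this
# product is 0 or 16, hence 16 | h(1) as claimed.
for p in range(2, 21, 2):
    prod = A41(1j**-p, 1j) * A41((-1j)**-p, -1j)
    assert abs(prod.imag) < 1e-9
    assert min(abs(prod.real), abs(abs(prod.real) - 16)) < 1e-9
```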
\begin{proof}[Proof of Theorem~\ref{thm:4_1}(II)] By Lemma~\ref{lem:div_by_16}, when $p$ is odd, $4^{-1}(2\tau-3)^{-2}h(\tau)$ is in $\mathbb{Z}[\tau]$ and monic. When $p$ is even, $16^{-1}h(\tau)$ is in $\mathbb{Z}[\tau]$ and monic as well. Hence, we conclude that $\tau_\rho(S^3_{p}(4_1))$ is an algebraic integer. \end{proof}
We now pose the following problem.
\begin{problem} \label{prob:tau_closed} It is natural to ask whether $\tau_\rho(S^3_{p/q}(4_1))$ is always an algebraic integer. More generally, one may ask whether $\tau_\rho(S^3_{p/q}(K))$ is an algebraic integer for any knot $K$ with $\dim_\mathbb{C} X^\mathrm{irr}(E(K))=1$. \end{problem}
We confirm that $\tau_\rho(S^3_{p/q}(K))$ is an algebraic integer for every irreducible representation $\rho$ in the cases of $S^3_{2/3}(4_1)$ and $S^3_{1/2}(5_2)$.
\begin{example} \label{ex:2/3} Let us consider the case $p/q=2/3$. Then there are 12 representations of $\pi_1(S^3_{2/3}(4_1))$ as follows. \[
\begin{array}{l | l | l}
s & t & \tau_\rho(S^3_{2/3}(4_1)) \\ \hline
-0.200325+0.979729 i & -3.42754 & 0.738094 \\
0.200325+0.979729 i & -3.42754 & 5.85638 \\
-0.490393+0.871501 i & -2.21504 & 2.38654 \\
-1.30664+0.0498758 i & -0.392004+0.724199 i & 5.8872-0.943648 i \\
-0.264802+0.964303 i & -1.43838 & 8.4028 \\
0.490393+0.871501 i & -2.21504 & 0.0164217 \\
0.264802+0.964303 i & -1.43838 & 0.258749 \\
1.30664+0.0498758 i & -0.392004-0.724199 i & -0.717017+0.0236658 i \\
-0.764207+0.0291705 i & -0.392004-0.724199 i & 5.8872+0.943648 i \\
0.764207+0.0291705 i & -0.392004+0.724199 i & -0.717017-0.0236658 i \\
-0.615146 & -0.135036 & 1.60981 \\
0.615146 & -0.135036 & 94.3908 \\ \end{array} \] Using the resultant, one finds a polynomial $f(x)^2g(x)$ which is zero at these exact values of the Reidemeister torsion, where \begin{align*} f(x)&= x^{12}-124 x^{11}+3142 x^{10}-34792 x^9+196796 x^8-561760 x^7+627280 x^6 \\ & \quad +254848 x^5-866240 x^4+153088 x^3+253696 x^2-66560 x+1024 \end{align*} and $g(x)$ is a certain polynomial. We can confirm that $g(x)\neq 0$ for the above 12 values $\tau_\rho(S^3_{2/3}(4_1))$. Thus, they must be the zeros of $f$, and hence they are algebraic integers.
Since $f(94)<0<f(95)$, the zero of $f$ approximated by $94.3908$ must be a real number. This value is larger than the absolute values of the other 11 zeros. Hence, this zero is a Perron number. Here the corresponding representation is an $\mathit{SL}(2,\mathbb{R})$-representation. Indeed, $s$'s and $t$'s are respectively zeros of the polynomials \begin{align*}
& s^{24}-3 s^{22}-3 s^{20}+8 s^{18}+12 s^{16}-7 s^{14}-20 s^{12}-7 s^{10}+12 s^8+8 s^6-3 s^4-3 s^2+1, \\
& t^6+8 t^5+23 t^4+31 t^3+23 t^2+10 t+1. \end{align*} They have zeros between $0.6$ and $0.7$ and between $-0.2$ and $-0.1$, respectively. Therefore, one of the $\mathit{SL}(2,\mathbb{R})$-representations gives a Perron number. Such a phenomenon is numerically seen in \cite{Kit16N} as well. Also, the Reidemeister torsions of the Brieskorn homology 3-sphere $\Sigma(p,q,r)$ are real numbers (see Section~\ref{sec:Seifert}). Thus, the maximal value $(4\sin\frac{\pi}{2p}\sin\frac{\pi}{2q}\sin\frac{\pi}{2r})^{-2}$ is a Perron number. Note that $\pi_1(\Sigma(2,3,5))$ does not admit any $\mathit{SL}(2,\mathbb{R})$-representation and the maximal Reidemeister torsion comes from an $\mathit{SU}(2)$-representation (see \cite[Remark~3.4]{KiYa16}). \end{example}
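The locations of the real zeros claimed above can be confirmed with a small Python computation (a numerical check added here, not part of the example), using the intermediate value theorem.

```python
def poly(terms, x):
    # Evaluate a polynomial given as a list of (coefficient, exponent) pairs.
    return sum(c * x**e for c, e in terms)

# The polynomials for s and t from the example.
s_poly = [(1, 24), (-3, 22), (-3, 20), (8, 18), (12, 16), (-7, 14),
          (-20, 12), (-7, 10), (12, 8), (8, 6), (-3, 4), (-3, 2), (1, 0)]
t_poly = [(1, 6), (8, 5), (23, 4), (31, 3), (23, 2), (10, 1), (1, 0)]

# Sign changes locate real zeros between 0.6 and 0.7, resp. -0.2 and -0.1.
assert poly(s_poly, 0.6) > 0 > poly(s_poly, 0.7)
assert poly(t_poly, -0.2) < 0 < poly(t_poly, -0.1)
```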
\begin{example} \label{ex:5_2} There are 17 representations of $\pi_1(S^3_{1/2}(5_2))$ and three of them are not acyclic: \[
\begin{array}{l | l | l}
s & t & \tau_\rho(S^3_{1/2}(5_2)) \\ \hline
-0.471842+0.881683 i & -2.87048 & 2.81243 \\
0.165381+0.98623 i & -3.68825 & 12.3508 \\
-0.200082+0.979779 i & -2.2146 & 0.490587 \\
0.0681942+0.997672 i & -2.41933 & 0.313753 \\
1.29286+1.35876 i & -0.182462-0.461334 i & 9.57556-0.520417 i \\
-1.26355+0.363134 i & 0.158863+1.07159 i & 3.6856-0.147423 i \\
-0.348139+0.937443 i & -1.1692 & 4.5522 \\
1 & 0.21508-1.30714 i & 0 \\
1 & 0.21508+1.30714 i & 0 \\
1 & 0.56984 & 0 \\
-0.859034+0.511919 i & -0.62449 & 7.42456 \\
0.313791+0.949492 i & -1.80681 & 0.0670363 \\
0.942666+0.333737 i & 0.0622582 & 148.658 \\
0.709501+0.704704 i & -1.66753 & 1.06479 \\
-0.731039+0.210094 i & 0.158863-1.07159 i & 3.6856+0.147423 i \\
0.367529+0.386263 i & -0.182462+0.461334 i & 9.57556+0.520417 i \\
-0.986232+0.16537 i & 0.445642 & 5.74328 \\ \end{array} \] In the same manner as Example~\ref{ex:2/3}, we can show that 14 values of $\tau_\rho(S^3_{1/2}(5_2))$ are the zeros of the monic polynomial \begin{align*} & x^{14}-210 x^{13}+10760 x^{12}-269160 x^{11}+3993232 x^{10} \\ & -38203808 x^9+245006784 x^8 -1067441024 x^7+3141232640 x^6-6091473408 x^5 \\ & +7422475264 x^4-5260713984 x^3+1942106112 x^2-314212352 x+13778944. \end{align*} Hence they are algebraic integers.
Similarly to the previous example, the zero of the above polynomial approximated by $148.658$ is a Perron number. The corresponding representation is numerically an $\mathit{SL}(2,\mathbb{R})$-representation by considering the conjugate $P^{-1}\rho(-)P$, where $P\approx \begin{pmatrix}
-0.950489 i & -1.97479 \\
0.341848 & 0.341848 i \\ \end{pmatrix} $. \end{example}
\section{Reidemeister torsion of Seifert fibered spaces} \label{sec:Seifert} In this section, we show that the $\mathit{SL}(2,\mathbb{C})$-Reidemeister torsion $\tau_\rho(M)$ is an algebraic integer for a Seifert fibered space $M$ under a mild condition, which any Brieskorn homology 3-sphere satisfies.
\subsection{Chebyshev polynomials and variants} We use Chebyshev polynomials and variants to prove the algebraic integrality. The next lemma follows from the fact that $\cos\frac{2k-1}{2n}\pi = \sin\frac{n-2k+1}{2n}\pi$ ($k=1,2,\dots,n$) are the roots of the Chebyshev polynomial of the first kind $T_n(x)$ defined by $T_n(\cos\theta)=\cos n\theta$. Note that $T_{2n}(x)$ has the form of $2^{2n-1}x^{2n}+\dots+(-1)^n \in \mathbb{Z}[x]$.
\begin{lemma} \label{lem:sin} Let $a$ be a positive even integer. Then $(\sin\frac{k}{2a}\pi)^{-1}$ is an algebraic integer for $k=-a+1,-a+3,\dots,a-1$. \end{lemma}
\begin{proof} As mentioned above, $\sin\frac{k}{2a}\pi$ gives a zero of $T_a(x)$. Then $(\sin\frac{k}{2a}\pi)^{-1}$ is a zero of $x^{a}T_a(1/x)=\pm x^{a}+\dots+2^{a-1}$. \end{proof}
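The proof can be illustrated numerically (an added check, not part of the argument): with the standard recurrence $T_0=1$, $T_1=x$, $T_n=2xT_{n-1}-T_{n-2}$, one verifies that $\sin\frac{k}{2a}\pi$ is a zero of $T_a$ for each odd $k$ in the stated range.

```python
import math

def T(n, x):
    # Chebyshev polynomial of the first kind via the standard recurrence.
    t0, t1 = 1.0, x
    for _ in range(n):
        t0, t1 = t1, 2 * x * t1 - t0
    return t0

a = 6  # any positive even integer
for k in range(-a + 1, a, 2):
    # sin(k*pi/(2a)) is a zero of T_a, so its inverse is a zero of the
    # reversed polynomial x^a * T_a(1/x), whose leading coefficient is +-1.
    assert abs(T(a, math.sin(k * math.pi / (2 * a)))) < 1e-9
```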
Here let us introduce a variant of the normalized Chebyshev polynomial of the second kind.
\begin{definition} Define a polynomial $V_n(x) \in \mathbb{Z}[x]$ inductively by $V_0(x)=1$, $V_1(x)=x-1$, and $V_n(x)=xV_{n-1}(x)-V_{n-2}(x)$ for $n\geq 2$. \end{definition}
By definition, $V_n(x)$ is of the form $x^n+\dots\pm 1$.
\begin{lemma} \label{lem:cos} Suppose $\theta\neq (2k-1)\pi$ for any $k \in\mathbb{Z}$. Then $V_n(2\cos\theta)=\cos(n+\frac{1}{2})\theta/\cos\frac{\theta}{2}$. \end{lemma}
\begin{proof} Use induction on $n$. The cases $n=0,1$ follow from the equalities $\cos\frac{3}{2}\theta = \cos\theta \cos\frac{1}{2}\theta -\sin\theta \sin\frac{1}{2}\theta$ and $\sin\theta = 2\sin\frac{\theta}{2}\cos\frac{\theta}{2}$. Suppose $n\geq 2$. Then we have \begin{align*}
V_n(2\cos\theta)\cos\frac{\theta}{2} &= (2\cos\theta V_{n-1}(2\cos\theta)-V_{n-2}(2\cos\theta))\cos\frac{\theta}{2} \\
&= 2\cos\theta \cos\left(n-\frac{1}{2}\right)\theta -\cos\left(n-\frac{3}{2}\right)\theta \\
&= \cos\left(n-\frac{1}{2}\right)\theta \cos\theta -\sin\left(n-\frac{1}{2}\right)\theta \sin\theta \\
&= \cos\left(n+\frac{1}{2}\right)\theta, \end{align*} where the second equality follows from the induction hypothesis. \end{proof}
Therefore, the roots of $V_n(x)$ are $2\cos\frac{2k-1}{2n+1}\pi = 2\sin\frac{2n+3-4k}{2(2n+1)}\pi$ ($k=1,2,\dots,n$). Since $V_n(0)=\pm 1$, we obtain the next consequence.
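Lemma~\ref{lem:cos} and the description of the roots of $V_n$ can likewise be checked numerically (an added illustration):

```python
import math

def V(n, x):
    # V_0 = 1, V_1 = x - 1, V_n = x * V_{n-1} - V_{n-2}
    v0, v1 = 1.0, x - 1.0
    if n == 0:
        return v0
    for _ in range(n - 1):
        v0, v1 = v1, x * v1 - v0
    return v1

theta = 0.8  # any theta avoiding odd multiples of pi
for n in range(8):
    # V_n(2 cos(theta)) = cos((n + 1/2) theta) / cos(theta / 2)
    assert abs(V(n, 2 * math.cos(theta))
               - math.cos((n + 0.5) * theta) / math.cos(theta / 2)) < 1e-9
    # V_n(0) = +-1, so the inverses of the roots are algebraic integers too.
    assert abs(abs(V(n, 0.0)) - 1) < 1e-9

# The roots of V_n are 2 cos((2k-1) pi / (2n+1)), k = 1, ..., n.
n = 5
for k in range(1, n + 1):
    assert abs(V(n, 2 * math.cos((2 * k - 1) * math.pi / (2 * n + 1)))) < 1e-9
```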
\begin{corollary} \label{cor:2sin} Let $a$ be a positive odd integer. Then $(2\sin\frac{a-2k}{2a}\pi)^{\pm 1}$ is an algebraic integer for $k=-a+4,-a+8,\dots,a-2$. \end{corollary}
\subsection{Seifert fibered spaces}
Let $M$ be an orientable Seifert fibered space with Seifert index \[ \{b,(\epsilon=o, g), (a_1,b_1),\dots,(a_m,b_m)\}, \] whose base orbifold is a closed oriented surface with $m$ singular points. Here $b$ is an integer, $g$ is a non-negative integer and $a_i$, $b_i$ are coprime integers.
Now any non-trivial value of the Reidemeister torsion $\tau_\rho(M)$ for an irreducible $\mathit{SL}(2,\mathbb{C})$-representation $\rho\colon \pi_1(M) \to \mathit{SL}(2,\mathbb{C})$ is given as follows. See \cite[Main Theorem]{Kit94S} and Remark~\ref{rem:convention}.
\begin{proposition} \label{lem:tau_Seifert} \[ \tau_\rho(M)^{-1} = 2^{4-m-g}\prod_{i=1}^m \left(1-(-1)^{s_i}\cos\frac{r_i k_i\pi}{a_i}\right), \] where \begin{enumerate} \item $r_i,s_i\in\mathbb{Z}$ such that $a_is_i-b_ir_i=-1$ $(i=1,\dots,m)$, \item $k_i\in\mathbb{Z}$ such that $0\leq k_i\leq a_i$ and $k_i\equiv b_i\bmod 2$ $(i=1,\dots,m)$. \end{enumerate} \end{proposition}
\begin{remark} A pair $(r_i, s_i)$ is not unique, but it depends only on $(a_i,b_i)$. On the other hand, $k_i$ depends on the representation $\rho$. \end{remark}
We can see the following by Lemma~\ref{lem:sin} and Corollary~\ref{cor:2sin}. We write $m_o$ for the number of odd $a_i$'s.
\begin{proposition} \label{prop:Seifert} For a Seifert fibered space $M$ with an irreducible representation $\rho$, if $2m_o+g\geq 4$, then the Reidemeister torsion $\tau_\rho(M)$ is an algebraic integer. \end{proposition}
\begin{proof} Here we rewrite each factor containing a cosine function in the formula of Proposition~\ref{lem:tau_Seifert} in a simpler form.
\begin{itemize} \item Case (I): $a_i$ is odd and $b_i$ is even.
In this case we see $ s_i $ must be odd since $a_i s_i -b_ir_i=-1$. Now, \[ \begin{split} 1-(-1)^{ s_i }\cos\frac{r_i k_i\pi}{a_i} &=1+\cos\frac{r_i k_i\pi}{a_i}\\ &=2\cos^2\left(\frac{r_i k_i\pi}{2a_i}\right)\\ &=2\sin^2\left(\frac{r_{i} {k}_i\pi}{2a_i}+\frac{\pi}{2}\right)\\ &=2\sin^2\left(\frac{2a_i\pi}{2a_i}-\frac{(r_{i}{k}_i+a_i)\pi}{2a_i}\right)\\ &=2\sin^2\left(\frac{a_i-r_i k_i}{2a_i}\pi\right).\\ \end{split} \]
\item Case (II): $a_i$ is odd and $b_i$ is odd.
In this case $s_i$ may be taken odd and $r_i$ even by replacing $s_i$ with $s_i+b_i$ and $r_i$ with $r_i+a_i$. Similarly to Case (I), we have \[ 1-(-1)^{s_i}\cos\frac{r_i k_i\pi}{a_i} =2\sin^2\left(\frac{a_i-r_i k_i}{2a_i}\pi\right). \]
\item Case (III): $a_i$ is even.
In this case $b_i$ must be odd and $s_i$ may be taken odd by replacing $s_i$ with $s_i+b_i$. Similarly to Cases (I) and (II), we see \[ 1-(-1)^{s_i}\cos\frac{r_i k_i\pi}{a_i} =2\sin^2\left(\frac{a_i-r_i k_i}{2a_i}\pi\right). \] \end{itemize}
Now we see \[ \tau_\rho(M) = 2^{2m_o+g-4}\prod_{a_i:\,\text{odd}} \left(2\sin\frac{a_i-r_i k_i}{2a_i}\pi\right)^{-2} \cdot \prod_{a_i:\,\text{even}} \left(\sin\frac{a_i-r_i k_i}{2a_i}\pi\right)^{-2} . \] Recall that $m_o$ is the number of odd $a_i$'s. By Lemma~\ref{lem:sin} and Corollary~\ref{cor:2sin}, it can be seen that all factors \[ 2^{2m_o+g-4},\ \left(2\sin\frac{a_i-r_i k_i}{2a_i}\pi\right)^{-2}\ (\text{$a_i$: odd}),\ \left(\sin\frac{a_i-r_i k_i}{2a_i}\pi\right)^{-2}\ (\text{$a_i$: even}), \] are algebraic integers. This completes the proof. \end{proof}
Consider a Brieskorn homology 3-sphere $\Sigma(a_1,a_2,a_3)$. This is a Seifert fibered space with $m=3,~g=0$ and $m_o\geq 2$. Since there exist finitely many conjugacy classes of irreducible representations, the torsion polynomial can be defined.
\begin{corollary} The torsion polynomial $\sigma_{\Sigma(a_1,a_2,a_3)}(t) \in \mathbb{Q}[t]$ lies in $\mathbb{Z}[t]$. \end{corollary}
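As an illustration (a check added here, not taken from the text), for $\Sigma(2,3,5)$ the maximal value $(4\sin\frac{\pi}{4}\sin\frac{\pi}{6}\sin\frac{\pi}{10})^{-2}$ evaluates to $3+\sqrt{5}$, a root of the monic integer polynomial $x^2-6x+4$, consistent with Proposition~\ref{prop:Seifert}:

```python
import math

# Maximal Reidemeister torsion of Sigma(2, 3, 5):
val = (4 * math.sin(math.pi / 4) * math.sin(math.pi / 6)
         * math.sin(math.pi / 10)) ** -2

# It equals 3 + sqrt(5), a root of the monic polynomial x^2 - 6x + 4,
# hence an algebraic integer; it is also a Perron number since
# 3 + sqrt(5) > |3 - sqrt(5)|.
assert abs(val - (3 + math.sqrt(5))) < 1e-9
assert abs(val**2 - 6 * val + 4) < 1e-9
```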
Recall that we write $T(p,q)$ for the $(p,q)$-torus knot. By similar arguments and Johnson's computation~\cite{Joh88}, we see the following.
\begin{proposition} \label{prop:torus_knot} For any irreducible representation $\rho$, the Reidemeister torsion $\tau_\rho(E(T(p,q)))$ is an algebraic integer. \end{proposition}
\section{Continuous variation of Reidemeister torsion} \label{sec:Looper-Long} In this final section, we prove Theorem~\ref{thm:vary_conti} which asserts that for a certain knot $K_0$ the Reidemeister torsion $\tau_\rho(E(K_0))$ can vary continuously while the restriction $r(\rho)$ is fixed. We construct the knot $K_0$ in Figure~\ref{fig:K0} following \cite[Section~8]{CoLo96}.
\begin{figure}
\caption{The knot $4_1\sharp 4_1$ with a braid axis.}
\label{fig:4141}
\end{figure}
\begin{remark} Since Figure~\ref{fig:K0} is an alternating diagram, $K_0$ is an alternating knot. Then, we confirm that $K_0$ is prime by \cite[Theorem~4.4]{Lic97}. The Alexander polynomial $\Delta_{K_0}(t)$ of $K_0$ is computed as follows: \[ t^{12}-11 t^{11}+55 t^{10}-169 t^9+358 t^8-551 t^7+635 t^6-551 t^5+358 t^4-169 t^3+55 t^2-11 t+1. \] In particular, $K_0$ is not a torus knot, and thus it is hyperbolic by \cite[Corollary~2]{Men84}. Moreover, since $\Delta_{K_0}(t)$ is monic, $K_0$ is a fibered knot of genus $6$ by \cite[Theorem~1.2]{Mur63}. \end{remark}
Let $\varpi \colon E(K_0) \to E(4_1\sharp 4_1)$ denote a double branched cover whose branch set is the braid axis. The map $\varpi$ induces a homomorphism $\varpi_\ast\colon \pi_1(E(K_0)) \to \pi_1(E(4_1\sharp 4_1))$ and a regular map $\varpi^\ast\colon X(E(4_1\sharp 4_1)) \to X(E(K_0))$.
Recall that irreducible representations of \[ \pi_1(E(4_1)) = \ang{x,y \mid [y,x]^{-1}x=y[y,x]^{-1}} \] can be given by \[ \rho(x) = \begin{pmatrix}
s & 1 \\
0 & s^{-1} \end{pmatrix},\quad \rho(y) = \begin{pmatrix}
s & 0 \\
-t & s^{-1} \end{pmatrix}, \] up to conjugate, where $s,t \in \mathbb{C}^\times$ satisfy $\phi(s,t)=0$. Suppose $s \neq \pm 1$. Consider irreducible representations $\rho_{s,t}\ast (P_u^{-1}\rho_{s,t'}P_u)$ of $\pi_1(E(4_1\sharp 4_1))$, where \[ P_u= \begin{pmatrix}
u & (u-u^{-1})/(s-s^{-1}) \\
0 & u^{-1} \end{pmatrix} \in \mathit{SL}(2,\mathbb{C}) \] is an element of the centralizer of $\rho_{s,t}(x)$. Then we define an irreducible representation $\rho_{s,t,u}$ of $\pi_1(E(K_0))$ by $\rho_{s,t,u}= \varpi^\ast(\rho_{s,t}\ast (P_u^{-1}\rho_{s,t}P_u))$.
\begin{proof}[Proof of Theorem~\ref{thm:vary_conti}] Define \[ C=\{\rho_{s,t,u} \mid s,t,u \in \mathbb{C}^\times,\ s\neq\pm 1,\ \phi(s,t)=0\}. \] There is a Wirtinger representation of $\pi_1(E(K_0))$ with $16$ generators and $15$ relations. Then, by Mathematica, we compute the Reidemeister torsion as follows: \[ \tau_{\rho_{s,t,u}}(E(K_0)) = \frac{f_2(s,t)u^2+f_0(s,t)+f_{-2}(s,t)u^{-2}}{s^{70}(s-1)^{19}(s+1)^{18}}, \] where $f_j(s,t) \in \mathbb{Z}[s,t]$ ($j=2,0,-2$) are certain complicated polynomials. Moreover, $\mathrm{res}_t(f_j(s,t), \phi(s,t)) \in \mathbb{Z}[s]$ is the product of six factors $s$, $s\pm 1$, $s^2\pm s-1$, and $g_j(s)$ with some multiplicities, where $g_j(s) \in \mathbb{Z}[s]$ for $j=\pm 2$.
Now, it suffices to show that $f_2(s,t)$ and $f_{-2}(s,t)$ do not vanish simultaneously. By the definition of $C$, we have $s\neq 0, \pm 1$. It follows from $t\neq 0$ and $\phi(s,t)=0$ that $s^2\pm s-1 \neq 0$ since $s^2+s^{-2}-3=(s-s^{-1}+1)(s-s^{-1}-1)$. Finally, by a computer calculation, we can check that $\mathrm{res}_s(g_2(s), g_{-2}(s)) \in\mathbb{Z}$ is non-zero, namely, they have no common zeros. \end{proof}
\begin{figure}
\caption{Generators $x_1,x_2,\dots,x_{16}$ of $\pi_1(E(K_0))$.}
\label{fig:generator}
\end{figure}
\begin{example} When $(s,t)=(i,\frac{-5+\sqrt{5}}{2})$, the Reidemeister torsion $\tau_{\rho_{s,t,u}}(E(K_0))$ is equal to \begin{align*} \frac{1}{16}\left(\left((6765+7695 i)+(6383+2015 i) \sqrt{5}\right) u^2+(73122+21422 \sqrt{5}) \right.\\ \left. +\left((6765-7695 i)+(6383-2015 i) \sqrt{5}\right)u^{-2} \right) \end{align*} Also, when $(s,t)=(2,\frac{5-\sqrt{105}}{8})$, $\tau_{\rho_{s,t,u}}(E(K_0))$ is equal to \begin{align*} \left((8505120805-233834087 \sqrt{105}) u^2+16 (633637427 \sqrt{105}+6409291542) \right.\\ \left. +(58019711838 \sqrt{105}+594558745850)u^{-2} \right) /6291456. \end{align*} \end{example}
\begin{remark} Let $s_\pm=\frac{1\pm\sqrt{5}}{2}$. Then $s_\pm^{\pm 1}$ are the roots of $s^2+s^{-2}-3$ and we have four reducible representations $\rho_{s_\pm^{\pm 1},0,u}$. First, $\rho_{s_\pm^{-1},0,u}$ is equivalent to $\rho_{s_\pm,0,u}$. Next, as elements of $X(E(K_0))$, they are independent of $u$ since \[ P_u^{-1} \begin{pmatrix}
s & 0 \\
0 & s^{-1} \end{pmatrix} P_u = \begin{pmatrix}
s & 1-u^{-2} \\
0 & s^{-1} \end{pmatrix}. \] Moreover, for any $u \in \mathbb{C}^\times$, we have \[ \tau_{\rho_{s_\pm,0,u}}(E(K_0)) = 256 (8222 \mp 3677 \sqrt{5}). \] \end{remark}
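The conjugation identity in the remark above can be multiplied out numerically; the following Python sketch (illustrative, with sample values $s=2$, $u=3$) checks that $P_u^{-1}\operatorname{diag}(s,s^{-1})P_u$ has upper-right entry $1-u^{-2}$.

```python
def matmul(A, B):
    # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

s, u = 2.0, 3.0  # sample parameters with s != +-1, u != 0
b = (u - 1 / u) / (s - 1 / s)
P = [[u, b], [0.0, 1 / u]]
Pinv = [[1 / u, -b], [0.0, u]]  # inverse of P since det(P) = 1
D = [[s, 0.0], [0.0, 1 / s]]

C = matmul(matmul(Pinv, D), P)
# Expected: [[s, 1 - u^{-2}], [0, s^{-1}]]
assert abs(C[0][0] - s) < 1e-12
assert abs(C[0][1] - (1 - u**-2)) < 1e-12
assert abs(C[1][0]) < 1e-12
assert abs(C[1][1] - 1 / s) < 1e-12
```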
Finally, let us prove Corollary~\ref{cor:RT_infinite} stated at the end of Section~\ref{sec:Intro}.
\begin{proof}[Proof of Corollary~\ref{cor:RT_infinite}] First, the longitude $\lambda$ of $K_0$ is written as \[ \lambda=x_2 x_7^{-1} x_1 x_{12} x_5^{-1} x_{11} x_{16}^{-1} x_6^{-1} x_{10} x_{15}^{-1} x_9 x_4 x_{13}^{-1} x_3 x_8^{-1} x_{14}^{-1}. \] By a computer calculation, we have $\rho_{s,t,u}(\lambda)_{11} = f(s,t)/g(s)$. Then the resultant $\mathrm{res}_t(g(s)L-f(s,t), \phi(s,t))$ is the product of four factors $s$, $s\pm 1$, and $f_C(L,s)$, where $f_C(L,s)$ is a polynomial written after Corollary~\ref{cor:RT_infinite}. Since $s\neq 0, \pm 1$, we conclude that $\rho_{s,t,u}(x)_{11}=s$ and $\rho_{s,t,u}(\lambda)_{11}=L$ must satisfy $f_C(L,s)=0$.
Now, let $K$ be a 2-bridge knot such that $f_C(L,M)$ and $A_K(M,L)$ have a common zero $(L_0,M_0)$ with $L_0,M_0 \neq 0$ and $\{L_0,M_0\}\not\subset\{1,-1\}$. It follows from the definition of the $A$-polynomial and \cite[Theorem~3.1]{CoLo96} that there exist $\{\rho_u\}_u \subset C$ corresponding to $(L_0,M_0)$ and $\rho_K \in X(E(K))$ corresponding to $(M_0,L_0)$. Moreover, $\{L_0,M_0\}\not\subset\{1,-1\}$ implies that $\rho_u$ coincides with $\rho_K$ on the boundary torus, and thus we have a family $\{\rho_u\ast\rho_K\}_u$ of representations of $\pi_1(\Sigma(K_0,K))$. By the multiplicativity of the Reidemeister torsion and Theorem~\ref{thm:vary_conti}, the set $\mathit{RT}(\Sigma(K_0,K))$ is an infinite set. \end{proof}
\end{document} |
\begin{document}
\title{Large time convergence for a chemotaxis model with degenerate local sensing and consumption}
\author{Philippe Lauren\c{c}ot} \address{Laboratoire de Math\'ematiques (LAMA) UMR~5127, Universit\'e Savoie Mont Blanc, CNRS\\ F--73000 Chamb\'ery, France} \email{[email protected]}
\keywords{convergence - Liapunov functional - chemotaxis-consumption model - local sensing} \subjclass{35B40 - 35K65 - 35K51 - 35Q92}
\date{\today}
\begin{abstract} Convergence to a steady state in the long term limit is established for global weak solutions to a chemotaxis model with degenerate local sensing and consumption, when the motility function is $C^1$-smooth on $[0,\infty)$, vanishes at zero, and is positive on $(0,\infty)$. A condition excluding that the large time limit is spatially homogeneous is also provided. These results extend previous ones derived for motility functions vanishing algebraically at zero and rely on a completely different approach. \end{abstract}
\maketitle
\pagestyle{myheadings} \markboth{\sc{Ph. Lauren\c cot}}{\sc{Large time convergence for a chemotaxis model}}
\section{Introduction}\label{sec1}
The chemotaxis system with local sensing and consumption \begin{subequations}\label{ks}
\begin{align}
\partial_t u & = \Delta (u\gamma(v)) \;\;\text{ in }\;\; (0,\infty)\times \Omega\,, \label{ks1} \\
\partial_t v & = \Delta v - uv \;\;\text{ in }\;\; (0,\infty)\times \Omega\,, \label{ks2} \\
\nabla (u\gamma(v))\cdot \mathbf{n} & = \nabla v\cdot \mathbf{n} = 0 \;\;\text{ on }\;\; (0,\infty)\times \partial\Omega\,, \label{ks3} \\
(u,v)(0) & = (u^{in},v^{in}) \;\;\text{ in }\;\; \Omega\,, \label{ks4}
\end{align} \end{subequations} describes the dynamics of a population of cells with density $u\ge 0$ living on a nutrient with concentration $v\ge 0$ and moving in space under the combined effects of a nutrient-dependent diffusion and a nutrient-induced chemotactic bias \cite{KeSe1971b}. Here, $\Omega$ is a bounded domain of $\mathbb{R}^n$, $n\ge 1$, and the motility~$\gamma$ is a smooth function which is positive on $(0,\infty)$. Unlike the classical Keller-Segel system with local sensing \begin{subequations}\label{cks}
\begin{align}
\partial_t u & = \Delta (u\gamma(v)) \;\;\text{ in }\;\; (0,\infty)\times \Omega\,, \label{cks1} \\
\partial_t v & = \Delta v - v + u \;\;\text{ in }\;\; (0,\infty)\times \Omega\,, \label{cks2} \\
\nabla (u\gamma(v))\cdot \mathbf{n} & = \nabla v\cdot \mathbf{n} = 0 \;\;\text{ on }\;\; (0,\infty)\times \partial\Omega\,, \label{cks3} \\
(u,v)(0) & = (u^{in},v^{in}) \;\;\text{ in }\;\; \Omega\,, \label{cks4}
\end{align} \end{subequations}
in which the variable $v$ is rather the concentration of a signalling chemical produced by the cells as accounted for in~\eqref{cks2} \cite{KeSe1970}, the nutrient is consumed by the cells in the model~\eqref{ks} according to the nonlinear absorption term $-uv$ in~\eqref{ks2}. The dynamics of the two models is thus expected to differ significantly. On the one hand, the long term behaviour of solutions to~\eqref{cks} is far from being completely understood. Convergence of $(u,v)$ to the spatially homogeneous steady state $(\|u^{in}\|_1/|\Omega|,\|u^{in}\|_1/|\Omega|)$ is established in \cite[Theorem~1.5]{DLTW2023} when $\gamma'(s)\le 0 \le s\gamma'(s) + \gamma(s)$ for $s\ge 0$ and a similar result is likely to be true when $\gamma'\ge 0$. Such a simple dynamics is however unlikely to be the generic one, as there exist non-constant stationary solutions to~\eqref{cks} in some domains, see~\cite{LNT1988, WaXu2021}. Besides, though solutions to~\eqref{cks} are global and even bounded for a large class of motility functions~$\gamma$ \cite{BPT2021, DKTY2019, DLTW2023, FuJi2021a, FuJi2021b, FuSe2022b, JLZ2022, LiJi2020, TaWi2017, XiJi2023, YoKi2017}, unbounded solutions do exist \cite{FuJi2022, FuSe2022a, JiWa2020}.
The situation is somewhat simpler for~\eqref{ks} as the set of stationary solutions can easily be identified and seen to depend heavily on the value of $\gamma(0)$ \cite{LiWi2023a, Wink2023c}. Indeed, if $\gamma(0)>0$, then the only steady states to~\eqref{ks} are the spatially homogeneous solutions $(M,0)$, $M\ge 0$, and they attract the dynamics \cite{Laur2023, LiWi2023a}. In contrast, the set of stationary solutions to~\eqref{ks} is much larger when $\gamma(0)=0$, as $(\bar{u},0)$ is a stationary solution to~\eqref{ks} for any (sufficiently smooth) function~$\bar{u}$. Despite this wealth of stationary solutions, Winkler proves in~\cite{Wink2023b, Wink2023c} that the dynamics of~\eqref{ks} selects one and only one steady state in the large time limit: more precisely, given \begin{equation}
\gamma\in C^1([0,\infty))\,, \quad \gamma(0)=0, \quad \gamma>0 \;\;\text{ on }\;\; (0,\infty)\,, \label{n1} \end{equation} satisfying \begin{equation}
\gamma\in C^3((0,\infty)) \;\;\;\text{ and }\;\; \liminf_{s\to 0} \left\{ \frac{\gamma(s)}{s^\alpha} \right\} > 0 \label{w1} \end{equation} for some $\alpha\ge 1$ and a suitably constructed global weak solution~$(u,v)$ to~\eqref{ks} \cite{Wink2022a, Wink2023a}, there is a non-negative measurable function~$u_\infty$ such that \begin{equation*}
(u(t),v(t)) \;\;\text{ converges to }\; (u_\infty,0) \;\;\text{ in a suitable topology } \end{equation*} if one of the following additional assumptions holds true: \begin{description}
\item[(I)] $n\in\{1,2\}$ and $\gamma'(0)>0$, see \cite[Theorem~1.4]{Wink2023b},
\item[(II)] $n\ge 3$ and there is $\alpha'\in (1,2]$ such that
\begin{equation}
\limsup_{s\to 0} \left\{ s^{2-\alpha'} |\gamma''(s)| \right\} < \infty \,, \label{w2}
\end{equation} see \cite[Theorem~1.2]{Wink2023c}. \end{description} Observe that, for $\alpha\ge 1$, the function $\gamma(s)=s^\alpha$, $s\ge 0$, satisfies~\eqref{n1}, \eqref{w1}, and~\eqref{w2} with $\alpha'=\min\{\alpha, 2\}$.
Besides, conditions on $\gamma$, $u^{in}$, and $v^{in}$ are provided to guarantee that $u_\infty$ is not a constant, see~\cite[Theorem~1.5]{Wink2023b} and~\cite[Corollary~1.4]{Wink2023c}.
The aim of this note is to prove that these results are actually valid under the sole assumption~\eqref{n1} on $\gamma$, without assuming an algebraic behaviour of $\gamma$ near zero. However, the convergence established below takes place in a weaker topology for the $u$-component than the one obtained in~\cite{Wink2023b, Wink2023c}. In addition, the approach used herein is different and thus provides an alternative viewpoint on the stabilization issue for~\eqref{ks}. We first state the convergence result.
\begin{theorem}\label{thm1}
Assume that $\gamma$ satisfies~\eqref{n1} and consider $(u^{in},v^{in}) \in L_+^\infty(\Omega,\mathbb{R}^2)$, where $E_+$ denotes the positive cone of the Banach lattice $E$. If $(u,v)$ is a global weak solution to~\eqref{ks} in the sense of Definition~\ref{defws} below, then
\begin{equation*}
A_\infty(x) := \int_0^\infty (u\gamma(v))(s,x)\ \mathrm{d}s \in H_+^1(\Omega)\,, \qquad x\in\Omega\,,
\end{equation*} and
\begin{align}
& u(t) \rightharpoonup u_\infty := \Delta A_\infty + u^{in} \;\;\;\text{ in }\;\; H^1(\Omega)' \;\;\text{ as }\; t\to \infty\,, \label{cvu} \\
& \lim_{t\to\infty} \|v(t)\|_p = 0\,, \qquad p\in [1,\infty)\,. \label{cvv}
\end{align} Moreover, $\langle u_\infty , \vartheta \rangle_{(H^1)',H^1} \ge 0$ for all $\vartheta\in H_+^1(\Omega)$ and $\langle u_\infty , 1 \rangle_{(H^1)',H^1} = \langle u^{in}, 1\rangle_{(H^1)',H^1}$. \end{theorem}
The cornerstone of our approach to the study of the long term behaviour of global weak solutions to~\eqref{ks} is the observation that the dynamics of the system~\eqref{ks} is somewhat encoded in that of the auxiliary function \begin{equation*}
A(t,x) := \int_0^t (u\gamma(v))(s,x)\ \mathrm{d}s\,, \qquad (t,x)\in (0,\infty)\times\Omega\,, \end{equation*} which bears several interesting properties collected in Lemma~\ref{lemb2}. Among others, $t\mapsto A(t,x)$ is non-decreasing for a.e. $x\in \Omega$ and the trajectory $\{A(t)\ :\ t\ge 0\}$ is bounded in $H^1(\Omega)$, two features which guarantee in particular that the function~$A_\infty$ introduced in Theorem~\ref{thm1} is well-defined and lies in $H^1(\Omega)$. Moreover, for all $t\ge 0$, the function~$A(t)$ is a variational solution of the elliptic equation \begin{equation*}
- \Delta A(t) = u^{in} - u(t)\;\;\text{ in }\;\Omega\,, \qquad \nabla A(t)\cdot \mathbf{n} = 0 \;\;\text{ on }\;\partial\Omega\,, \end{equation*} so that the large time behaviour of $u(t)$ is driven by that of $A(t)$.
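Formally, this elliptic problem follows by integrating~\eqref{ks1} in time (a sketch; the weak formulation in Definition~\ref{defws} makes it rigorous):

```latex
\[
  u(t) - u^{in} = \int_0^t \partial_t u(s)\ \mathrm{d}s
  = \Delta \int_0^t (u\gamma(v))(s)\ \mathrm{d}s = \Delta A(t)
  \;\;\text{ in }\; \Omega\,,
\]
```

together with the no-flux boundary condition~\eqref{ks3}, which yields $\nabla A(t)\cdot \mathbf{n} = 0$ on $\partial\Omega$.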
The second contribution of this paper is in the spirit of~\cite[Theorem~1.3]{Wink2023c} and provides an estimate on the distance in $H^1(\Omega)'$ between the initial condition $u^{in}$ and the final state $u_\infty$ of the $u$-component of~\eqref{ks}. Its statement requires additional notation, which we introduce now: for $z\in H^1(\Omega)'$, we set $\langle z\rangle := \langle z , 1 \rangle_{(H^1)',H^1}/|\Omega|$ and note that \begin{equation*}
\langle z\rangle = \frac{1}{|\Omega|} \int_\Omega z(x)\ \mathrm{d}x \;\;\;\text{ for }\;\; z\in H^1(\Omega)'\cap L^1(\Omega). \end{equation*} Now, for $z\in H^1(\Omega)'$ with $\langle z \rangle = 0$, we define $\mathcal{K}[z]\in H^1(\Omega)$ as the unique (variational) solution to \begin{subequations}\label{n2}
\begin{equation}
-\Delta\mathcal{K}[z] = z \;\;\text{ in }\;\; \Omega\,, \qquad \nabla\mathcal{K}[z]\cdot \mathbf{n} = 0 \;\;\text{ on }\;\; \partial\Omega\,, \label{n2a}
\end{equation}
satisfying
\begin{equation}
\langle\mathcal{K}[z] \rangle = 0\,. \label{n2b}
\end{equation} \end{subequations}
We then choose the following norm $\|\cdot\|_{(H^1)'}$ on $H^1(\Omega)'$: \begin{equation*}
\|z\|_{(H^1)'} := \|\nabla\mathcal{K}[z-\langle z\rangle]\|_2 + |\langle z\rangle|\,, \qquad z\in H^1(\Omega)'. \end{equation*}
\begin{proposition}\label{prop2} Assume that $\gamma$ satisfies~\eqref{n1} and consider $(u^{in},v^{in}) \in L_+^\infty(\Omega,\mathbb{R}^2)$. If $(u,v)$ is a global weak solution to~\eqref{ks} in the sense of Definition~\ref{defws} below and $u_\infty$ denotes the weak limit in $H^1(\Omega)'$ of $u(t)$ as $t\to\infty$ given by Theorem~\ref{thm1}, then
\begin{equation}
\|u_\infty - u^{in}\|_{(H^1)'}^2 \le \|u^{in}\|_\infty \|v^{in}\|_1 \|\gamma'\|_{L^\infty(0,\|v^{in}\|_\infty)}\,. \label{dist}
\end{equation} In particular, if \begin{equation}
\|u^{in}\|_\infty \|v^{in}\|_1 \|\gamma'\|_{L^\infty(0,\|v^{in}\|_\infty)} < \|u^{in} - \langle u^{in} \rangle \|_{(H^1)'}^2\,, \label{smalldist} \end{equation} then $u_\infty$ is not a constant. \end{proposition}
An immediate consequence of Proposition~\ref{prop2} is that, given $u^{in}\in L_+^\infty(\Omega)$ with $u^{in}\not\equiv \langle u^{in} \rangle$ and a sufficiently small $v^{in}\in L_+^\infty(\Omega)$, the first component of the corresponding global weak solution $(u,v)$ to~\eqref{ks} has a non-constant limit. A quantitative estimate on the required smallness of $v^{in}$ in $L^\infty(\Omega)$ is provided by~\eqref{smalldist} and reads \begin{equation*}
\|v^{in}\|_\infty \|\gamma'\|_{L^\infty(0,\|v^{in}\|_\infty)} < \frac{\|u^{in} - \langle u^{in} \rangle \|_{(H^1)'}^2}{|\Omega|\|u^{in}\|_\infty}\,. \end{equation*} Such a result is obviously connected with the fact that the solution $(u,v)$ to~\eqref{ks} with initial condition $(u^{in},0)$ is the stationary solution $(u,v)=(u^{in},0)$, as already mentioned.
\section{Proofs}\label{sec2}
Let us first make precise the notion of global weak solution to~\eqref{ks} to be used in this paper. We emphasize here that, since $\gamma(0)=0$, the equation~\eqref{ks1} is degenerate, so that we cannot expect much regularity on~$u$.
\begin{definition}\label{defws}
Assume that $\gamma$ satisfies~\eqref{n1} and consider $(u^{in},v^{in})\in L_+^\infty(\Omega,\mathbb{R}^2)$. A global weak solution to~\eqref{ks} is a pair of non-negative functions $(u,v)$ such that
\begin{align*}
u & \in C_w([0,\infty),H^1(\Omega)') \cap L^\infty((0,\infty),L_+^1(\Omega))\,, \\
v & \in C([0,\infty),L_+^1(\Omega)) \cap L^\infty((0,\infty)\times\Omega)\cap L_{\mathrm{loc}}^2([0,\infty),H^1(\Omega))\,, \\
u\sqrt{\gamma(v)} & \in L_{\mathrm{loc}}^2([0,\infty),L^2(\Omega))\,,
\end{align*} which satisfies \begin{align*}
\langle u(t) , \vartheta(t) \rangle_{(H^1)',H^1} - \int_\Omega u^{in}\vartheta(0)\ \mathrm{d}x & = \int_0^t \int_\Omega u(s)\gamma(v(s)) \Delta\vartheta(s)\ \mathrm{d}x\mathrm{d}s \\
& \qquad + \int_0^t \langle u(s) , \partial_t \vartheta(s) \rangle_{(H^1)',H^1}\ \mathrm{d}s \end{align*} for $\vartheta\in L^2((0,t),H_N^2(\Omega))\cap W^{1,2}((0,t),H^1(\Omega))$ and $t\ge 0$, where \begin{equation*}
H_N^2(\Omega) := \{ z \in H^2(\Omega)\ :\ \nabla z\cdot \mathbf{n} = 0 \;\;\text{ on }\;\partial\Omega\}\,, \end{equation*} as well as \begin{align*}
\int_\Omega (v(t)\vartheta(t) - v^{in}\vartheta(0))\ \mathrm{d}x + \int_0^t \int_\Omega \nabla v(s)\cdot \nabla\vartheta(s)\ \mathrm{d}x\mathrm{d}s & + \int_0^t \int_\Omega (uv)(s)\vartheta(s)\ \mathrm{d}x\mathrm{d}s \\
& = \int_0^t \int_\Omega v(s)\partial_t\vartheta(s)\ \mathrm{d}x\mathrm{d}s \end{align*} for $\vartheta\in L^2((0,t),H^1(\Omega))\cap L^\infty((0,t)\times\Omega)$ and $t\ge 0$. \end{definition}
We shall not address the existence issue here and refer to \cite{Wink2022a, Wink2023a, Wink2023b} for results in that direction, the main assumption on $\gamma$ being that it vanishes in an algebraic way at zero. In the non-degenerate case $\gamma>0$ on $[0,\infty)$, implying in particular that $\gamma(0)>0$, existence results are also available, see \cite{LiZh2021, LiWi2023a, LiWi2023b}.
We now fix $\gamma$ satisfying~\eqref{n1} and consider $(u^{in},v^{in})\in L_+^\infty(\Omega,\mathbb{R}^2)$, along with a global weak solution $(u,v)$ to~\eqref{ks} in the sense of Definition~\ref{defws}. As a first step towards the identification of the large time limit of $(u,v)$, we collect obvious consequences of~\eqref{ks}, the non-negativity of $u$ and $v$, and the comparison principle.
\begin{lemma}\label{lemb1}
For $t\ge 0$,
\begin{equation}
\langle u(t) \rangle = M := \langle u^{in} \rangle \;\;\text{ and }\;\; \|v(t)\|_\infty \le V := \|v^{in}\|_\infty\,. \label{b1}
\end{equation} Moreover, \begin{equation}
\int_0^\infty \|(uv)(t)\|_1\ \mathrm{d}t \le \|v^{in}\|_1\,. \label{b2} \end{equation} \end{lemma}
\begin{proof}
We integrate~\eqref{ks1} with respect to space and time and use the no-flux boundary conditions~\eqref{ks3} to obtain the first identity in~\eqref{b1}. Similarly, we infer from~\eqref{ks2} and~\eqref{ks3} that
\begin{equation}
\|v(t)\|_1 + \int_0^t \|(uv)(s)\|_1\ \mathrm{d}s = \|v^{in}\|_1\,, \qquad t\ge 0\,, \label{b3}
\end{equation} from which~\eqref{b2} readily follows. Finally, we use the comparison principle to deduce from~\eqref{ks2}, \eqref{ks3}, and the non-negativity of $uv$ that $v(t,x)\le V$ for $(t,x)\in [0,\infty)\times\bar{\Omega}$, thereby completing the proof of~\eqref{b1}. \end{proof}
We now define the auxiliary function \begin{equation*}
A(t,x) := \int_0^t (u\gamma(v))(s,x)\ \mathrm{d}s\,, \qquad (t,x)\in (0,\infty)\times \Omega\,, \end{equation*} and devote the next lemma to its properties.
\begin{lemma}\label{lemb2}
The function~$A$ belongs to $L^\infty((0,\infty),H^1(\Omega))$ with
\begin{subequations}\label{b45}
\begin{align}
\|A(t)\|_1 & \le \|v^{in}\|_1 \|\gamma'\|_{L^\infty(0,V)}\,, \qquad t\ge 0 \,, \label{b4} \\
\|\nabla A(t)\|_2^2 & \le \|u^{in}\|_\infty \|v^{in}\|_1 \|\gamma'\|_{L^\infty(0,V)}\,, \qquad t\ge 0\,.\label{b5}
\end{align} \end{subequations} In addition, $t\mapsto A(t,x)$ is a non-decreasing function for a.e. $x\in \Omega$ and \begin{equation}
A_\infty(x) := \sup_{t\ge 0}\{A(t,x)\} = \int_0^\infty (u\gamma(v))(s,x)\ \mathrm{d}s\,, \qquad x\in\Omega\,, \label{b6} \end{equation} is well-defined and belongs to $H_+^1(\Omega)$. Also, for any $\vartheta\in H^1(\Omega)$, \begin{equation}
\lim_{t\to\infty} \|A(t)-A_\infty\|_2 = \lim_{t\to\infty} \int_\Omega \nabla\vartheta\cdot \nabla(A(t)-A_\infty)\ \mathrm{d}x = 0\,. \label{b7} \end{equation} \end{lemma}
\begin{proof}
Owing to the non-negativity of $u$ and $\gamma$,
\begin{equation*}
A(t_1,x) \le A(t_2,x)\,, \qquad 0\le t_1 \le t_2\,, \ x\in\Omega\,,
\end{equation*} while, for $t\ge 0$, it follows from~\eqref{n1}, \eqref{b1}, and~\eqref{b2} that \begin{align*}
\|A(t)\|_1 & = \int_0^t \int_\Omega (u\gamma(v))(s,x)\ \mathrm{d}x\mathrm{d}s \le \|\gamma'\|_{L^\infty(0,V)} \int_0^t \int_\Omega (uv)(s,x)\ \mathrm{d}x\mathrm{d}s \\
& \le \|v^{in}\|_1 \|\gamma'\|_{L^\infty(0,V)} \,, \end{align*} which proves~\eqref{b4}. Furthermore, the monotone convergence theorem implies that the function $A_\infty$ defined by~\eqref{b6} belongs to $L_+^1(\Omega)$ and \begin{equation}
\lim_{t\to\infty} \|A(t)-A_\infty\|_1 = 0\,. \label{b8} \end{equation}
We next infer from~\eqref{ks1}, \eqref{ks3}, and the definition of $A$ that, for $t\ge 0$, \begin{equation}
u(t) - \Delta A(t) = u^{in} \;\;\;\text{ in }\;\; H^1(\Omega)'\,. \label{b9} \end{equation} In particular, $A(t) - \langle A(t)\rangle = \mathcal{K}[u^{in}-u(t)]$ belongs to $H^1(\Omega)$ and we infer from~\eqref{b9} and the non-negativity of $u(t)$ and $A(t)$ that \begin{align*}
\|\nabla A(t)\|_2^2 & = \langle - \Delta A(t), A(t) \rangle_{(H^1)',H^1} \\
& \le \langle u(t) - \Delta A(t) , A(t) \rangle_{(H^1)',H^1} = \int_\Omega u^{in} A(t)\ \mathrm{d}x \\
& \le \|u^{in}\|_\infty \|A(t)\|_1\,. \end{align*} Hence, by~\eqref{b4}, \begin{equation*}
\|\nabla A(t)\|_2^2 \le \|u^{in}\|_\infty \|v^{in}\|_1 \|\gamma'\|_{L^\infty(0,V)}\,, \end{equation*} from which~\eqref{b5} follows. Finally, we deduce the $H^1$-regularity of $A_\infty$ from~\eqref{b5} by a weak compactness argument, whereas the convergence~\eqref{b7} is an immediate consequence of~\eqref{b4}, \eqref{b5}, and~\eqref{b8}. \end{proof}
We next turn to the convergence of~$v$ and begin with a classical energy estimate, which is available here thanks to the non-negativity of the right hand side of~\eqref{ks2}.
\begin{lemma}\label{lemb3}
For $t\ge 0$,
\begin{equation*}
\frac{\mathrm{d}}{\mathrm{d}t} \|v\|_2^2 + 2 \|\nabla v\|_2^2 + 2 \|v\sqrt{u}\|_2^2 = 0\,.
\end{equation*} \end{lemma} The proof of Lemma~\ref{lemb3} is standard: multiply~\eqref{ks2} by $v$, integrate over $\Omega$, and use~\eqref{ks3}.
\begin{lemma}\label{lemb4}
For each $p\in [1,\infty)$,
\begin{equation*}
\lim_{t\to\infty} \|v(t)\|_p = 0\,.
\end{equation*} \end{lemma}
\begin{proof}
Introducing $P:=\mathcal{K}[u-M]$ and $P^{in} := \mathcal{K}[u^{in}-M]$, we observe that $P = P^{in}-A$ up to an additive constant, so that
\begin{equation}
\|\nabla P\|_2 \le \|\nabla P^{in}\|_2 + \|\nabla A\|_2 \le c_1 := \|u^{in}\|_2 + \sqrt{\|u^{in}\|_\infty \|v^{in}\|_1 \|\gamma'\|_{L^\infty(0,V)}}\,. \label{b10}
\end{equation}
We next infer from~\eqref{n2a} and~\eqref{b3} that
\begin{equation*}
\frac{\mathrm{d}}{\mathrm{d}t} \|v\|_1 = - \int_\Omega uv\ \mathrm{d}x = - \int_\Omega (M-\Delta P) v\ \mathrm{d}x = - M \|v\|_1 - \int_\Omega \nabla v\cdot \nabla P\ \mathrm{d}x\,.
\end{equation*}
Hence, using~\eqref{b10} and H\"older's inequality,
\begin{equation*}
\frac{\mathrm{d}}{\mathrm{d}t} \|v\|_1 + M \|v\|_1 \le \|\nabla v\|_2 \|\nabla P\|_2 \le c_1 \|\nabla v\|_2\,.
\end{equation*}
After integration with respect to time, we obtain
\begin{equation}
\|v(t)\|_1 \le \|v^{in}\|_1 e^{-Mt} + c_1 \int_0^t e^{M(s-t)} \|\nabla v(s)\|_2\ \mathrm{d}s\,, \qquad t\ge 0\,. \label{b11}
\end{equation}
Now, by H\"older's inequality,
\begin{align*}
\int_0^t e^{M(s-t)} \|\nabla v(s)\|_2\ \mathrm{d}s & \le \left( \int_0^t e^{M(s-t)}\ \mathrm{d}s \right)^{1/2} \left( \int_0^t e^{M(s-t)} \|\nabla v(s)\|_2^2\ \mathrm{d}s \right)^{1/2} \\
& \le \frac{1}{\sqrt{M}} \left( \int_0^t e^{M(s-t)} \|\nabla v(s)\|_2^2\ \mathrm{d}s \right)^{1/2}\,.
\end{align*}
Since
\begin{equation*}
2 \int_0^\infty \|\nabla v(s)\|_2^2\ \mathrm{d}s \le \|v^{in}\|_2^2
\end{equation*} by Lemma~\ref{lemb3}, we deduce from the Lebesgue dominated convergence theorem that
\begin{equation*}
\lim_{t\to\infty} \int_0^t e^{M(s-t)} \|\nabla v(s)\|_2^2\ \mathrm{d}s = 0\,.
\end{equation*}
Consequently,
\begin{equation}
\lim_{t\to\infty} \int_0^t e^{M(s-t)} \|\nabla v(s)\|_2\ \mathrm{d}s = 0 \label{b12}
\end{equation}
and~\eqref{b11} and~\eqref{b12} entail that
\begin{equation*}
\lim_{t\to\infty} \|v(t)\|_1 = 0\,,
\end{equation*}
thereby proving Lemma~\ref{lemb4} for $p=1$. To complete the proof, we use the above convergence, along with~\eqref{b1} and H\"older's inequality. \end{proof}
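The decay mechanism behind~\eqref{b11} and~\eqref{b12} can be illustrated numerically: for any square-integrable $g$ on $(0,\infty)$, the weighted average $\int_0^t e^{M(s-t)} g(s)\,\mathrm{d}s$ tends to zero as $t\to\infty$. The Python sketch below is purely illustrative; the choices $g(s)=1/(1+s)$ (which is square-integrable but not integrable) and $M=1$ are assumptions made for the example.

```python
# Numerical illustration (not part of the proof) of the step (b11)-(b12):
# for square-integrable g, int_0^t exp(M(s-t)) g(s) ds tends to 0 as t grows.
# g(s) = 1/(1+s) and M = 1 are illustrative choices.
import math

M = 1.0

def g(s):
    return 1.0 / (1.0 + s)

def weighted_average(t, n=100000):
    """Midpoint-rule approximation of int_0^t exp(M(s-t)) g(s) ds."""
    ds = t / n
    return sum(math.exp(M * ((k + 0.5) * ds - t)) * g((k + 0.5) * ds) * ds
               for k in range(n))

vals = [weighted_average(t) for t in (5.0, 20.0, 80.0)]
print(vals)  # decreasing toward 0
```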
Thanks to the above analysis, we are now in a position to prove Theorem~\ref{thm1} and Proposition~\ref{prop2}.
\begin{proof}[Proof of Theorem~\ref{thm1}]
According to Lemma~\ref{lemb2}, the function $A_\infty$ introduced in Theorem~\ref{thm1} is well-defined and belongs to $H_+^1(\Omega)$. Setting $u_\infty = u^{in} + \Delta A_\infty\in H^1(\Omega)'$, we infer from~\eqref{b9} that, for $t\ge 0$ and $\vartheta\in H^1(\Omega)$,
\begin{align*}
\langle u(t)-u_\infty , \vartheta \rangle_{(H^1)',H^1} & = \langle u(t)-u^{in} + u^{in}-u_\infty , \vartheta \rangle_{(H^1)',H^1} \\
& = \langle \Delta (A(t) - A_\infty) , \vartheta \rangle_{(H^1)',H^1} \\
& = - \int_\Omega \nabla(A(t)-A_\infty)\cdot \nabla\vartheta\ \mathrm{d}x\,,
\end{align*} and the right hand side of the above identity converges to zero as $t\to\infty$ due to~\eqref{b7}. We have thus proved the convergence~\eqref{cvu}, whereas the convergence~\eqref{cvv} is established in Lemma~\ref{lemb4}. As for the properties of~$u_\infty$ stated at the end of Theorem~\ref{thm1}, they readily follow from~\eqref{b1}, the non-negativity of $u$, and the convergence~\eqref{cvu}. \end{proof}
\begin{proof}[Proof of Proposition~\ref{prop2}]
The starting point is the estimate~\eqref{b5} and the convergence~\eqref{b7}, which imply that
\begin{equation}
\|\nabla A_\infty\|_2^2 \le \|u^{in}\|_\infty \|v^{in}\|_1 \|\gamma'\|_{L^\infty(0,V)}\,, \label{b13}
\end{equation}
recalling that $V=\|v^{in}\|_\infty$. Since $\langle u_\infty \rangle = \langle u^{in} \rangle$ by Theorem~\ref{thm1}, it follows from~\eqref{b13} and the definition of $u_\infty$ that \begin{equation*}
\|u_\infty - u^{in}\|_{(H^1)'}^2 = \|\nabla\mathcal{K}[u_\infty - u^{in}]\|_2^2 = \|\nabla A_\infty\|_2^2 \le \|u^{in}\|_\infty \|v^{in}\|_1 \|\gamma'\|_{L^\infty(0,V)}\,, \end{equation*} as stated in~\eqref{dist}.
Assume now that $(u^{in},v^{in})$ satisfies~\eqref{smalldist}. It follows from~\eqref{dist} and~\eqref{smalldist} that \begin{align*}
\|u_\infty - \langle u^{in} \rangle \|_{(H^1)'} & = \|u_\infty - u^{in} + u^{in} - \langle u^{in} \rangle \|_{(H^1)'} \\
& \ge \|u^{in} - \langle u^{in} \rangle \|_{(H^1)'} - \|u_\infty - u^{in} \|_{(H^1)'} \\
& > \left[ \|u^{in}\|_\infty \|v^{in}\|_1 \|\gamma'\|_{L^\infty(0,V)} \right]^{1/2} - \|u_\infty - u^{in} \|_{(H^1)'} \ge 0\,. \end{align*} Consequently, $u_\infty \ne \langle u^{in} \rangle$, which completes the proof after noticing that the property $\langle u_\infty \rangle = \langle u^{in} \rangle$ established in Theorem~\ref{thm1} excludes that $u_\infty$ coincides with any other constant. \end{proof}
\section*{Acknowledgments}
Enlightening (electronic) discussions with Michael Winkler on the topic studied in this paper are gratefully acknowledged. Part of this work was done while enjoying the kind hospitality of the Department of Mathematics, Indian Institute of Technology Roorkee.
\end{document} |
\begin{document}
\title{Estimation of qubit pure states with collective and individual measurements}
\author{E.~Bagan, A.~Monras and R.~Mu{\~n}oz-Tapia} \affiliation{Grup de F{\'\i}sica Te{\`o}rica \& IFAE, Facultat de Ci{\`e}ncies, Edifici Cn, Universitat Aut{\`o}noma de Barcelona, 08193 Bellaterra (Barcelona) Spain}
\begin{abstract} We analyze the estimation of a qubit pure state by means of local measurements on $N$ identical copies and compare its average fidelity for an isotropic prior probability distribution to the absolute upper bound given by collective measurements. We discuss two situations: the first one, where the state is restricted to lie on the equator of the Bloch sphere, is formally equivalent to phase estimation; the second one, where there is no constraint on the state, can also be regarded as the estimation of a direction in space using a quantum arrow made out of $N$ parallel spins. We discuss various schemes with and without classical communication and compare their efficiency. We show that the fidelity of the most general collective measurement can always be achieved asymptotically with local measurements and no classical communication. \end{abstract}
\pacs{03.67.-a, 03.65.Wj, 89.70.+c}
\maketitle
\section{Introduction}\label{introduction} In Quantum Information, measurement and estimation are not just \emph{some} important topics. They are at the very core of the theory. An unknown quantum state can only be unveiled by means of measurements, and the information is always gained at the expense of destroying the state. This information is then processed to obtain the desired estimate.
Any estimation procedure requires a sample of identical copies of the unknown state on which we can perform measurements. If unknown states could be copied, then one could produce samples of an arbitrary number of copies and the state could be estimated with infinite accuracy. The no-cloning theorem, however, prevents this possibility~\cite{no-cloning}. But even in such unphysical circumstances, one would only have finite time and a limited number of resources for copying and measuring. So, in the real world only a finite, usually not large, number of copies are available and only a reasonable approximation to the unknown state can be made. It is thus very important to devise strategies with optimal performance in a variety of practical situations.
Over the last few years, it has been recognized that a joint measurement on $N$ copies is more efficient than $N$ individual measurements on each copy separately. We will often refer to the former as {\em collective}, in contrast to {\em individual} (also called {\em local}) measurements. The quantum correlations behind the collective measurements are (almost) always more powerful than the classical correlations used in sequential individual measurements~\cite{pw,mp}. This and other issues have been studied in various contexts~\cite{holevo,helstrom,book,braunstein,derka, lpt,direction-1,bbm-direction,ps-direction, ajv,bbm-reference-1,bbm-reference-2,ps-reference,bartlett}, but for local measurements not many analytical results have been obtained~\cite{jones,gill-massar,fkf,hannemann,bbm-local,bbm-mixed, embacher}. The most powerful of these involve sophisticated estimation-theory techniques that are not widely known among physicists. Moreover, they mainly apply in the asymptotic limit (when $N$ is large), and in a ``pointwise'' fashion, in the sense that no average over the prior probability distribution of unknown states is considered~\cite{japos-1,japos-2}.
Here we address the issue of estimating the most elementary quantum pure state, that of a qubit, assuming we have a sample of $N$ identical copies of it. We consider two relevant cases: estimation of a completely unknown qubit state, i.e. one given by an arbitrary point on the Bloch sphere; and estimation of restricted states that are known to lie on the equator of the Bloch sphere. The latter is also interesting because it is formally equivalent to phase estimation (these states are also equivalent to the so-called rebits~\cite{rebits}). We refer to these two situations as 3D and 2D cases, respectively.
The most general measurement is described by a Positive Operator Valued Measure (POVM) on the $N$ copies. An optimal measurement of this type yields the ultimate bounds that can be achieved by any estimation procedure. We will re-derive these bounds in a unified, very comprehensive framework. For local measurements, we will consider von Neumann measurements, as these can be readily implemented in a laboratory. So, by ``local measurement'' we will loosely mean a \emph{local von Neumann measurement}. Furthermore, we will show that for some of the local procedures discussed here, optimal measurements are necessarily of von Neumann type. We will refer to local schemes that use classical communication, i.e., those that exploit the possibility of using the actual outcomes that are being obtained to dynamically adapt the next measurements, as LOCC (Local Operations and Classical Communication) schemes.
Let us be more concrete about the problem we are concerned with. Assume that we are given an ensemble of $N$ identical copies of an unknown qubit state, which we denote by $\ket{\vec{n}}$, where $\vec{n}$ is the unit vector on the Bloch sphere that satisfies
\begin{equation}\label{bloch}
\ket{\vec{n}}\bra{\vec{n}}=\frac{1+\vec{n} \cdot \vec{\sigma}}{2}, \end{equation}
and $\vec{\sigma}=(\sigma_x,\sigma_y,\sigma_z)$ are the usual Pauli matrices. After performing a measurement (collective or local) on the $N$ copies of $\ket{\vec{n}}$, one obtains an outcome~$\chi$. Based on~$\chi$, an estimate, $\ket{\vec{M}(\chi)}$, for the unknown state is guessed. To quantify how well $\ket{\vec{M}(\chi)}$ approximates the unknown state $\ket{\vec{n}}$ we use the fidelity, defined as the overlap
\begin{equation}\label{f}
f_n(\chi)\equiv|\langle\vec{n}|\vec{M}(\chi)\rangle|^2=\frac{1+\vec{n}\cdot\vec{M}(\chi)}{2}\,. \end{equation}
The fidelity~\eqref{f} can be regarded as a ``score'': we get ``1'' for a perfect determination ($\vec{M}=\vec{n}$) and ``0'' for a completely wrong guess ($\vec{M}=-\vec{n}$). Our aim is to maximize the average fidelity, hereafter referred to simply as the fidelity, over the initial probability and all possible outcomes,
\begin{equation}\label{f-1}
F\equiv \langle f \rangle = \sum_\chi \int dn\, f_n(\chi)\;p_{n}(\chi), \end{equation}
where $dn$ is the prior probability distribution, and $p_{n}(\chi)$ is the probability of getting outcome $\chi$ given that the unknown state is~$\ket{\vec{n}}$.
As mentioned above, we allow for general measurements or POVMs. They are defined by a set of positive operators $\{O(\chi)\}$ (each one of them associated to an outcome) that satisfy the condition
\begin{equation}\label{povm}
\sum_\chi O(\chi)=\openone. \end{equation}
The probability $p_{n}(\chi)$ is given in terms of these operators by
\begin{equation}\label{pnx}
p_{n}(\chi)={\rm tr}\,[\rho_n O(\chi)], \end{equation} where $\rho_n$ is the quantum state of the $N$ identical copies, i.e., $\rho_n=(\ket{\vec{n}} \bra{\vec{n}})^{\otimes N}$.
In Eq.~\eqref{f-1} there are only two elements that require optimization: the guess and the POVM, which enter this equation through~(\ref{f}) and~(\ref{pnx}), respectively. The optimal guess (OG) can be obtained rather trivially. The Schwarz inequality shows that the choice
\begin{equation}\label{m-optimal}
\vec{M}(\chi)=\frac{\vec{V}(\chi)}{|\vec{V}(\chi)|}, \end{equation}
where
\begin{equation}\label{v-optimal}
\vec{V}(\chi)=\int dn\; \vec{n}\; p_n(\chi), \end{equation}
maximizes the value of the fidelity, which then reads~\cite{bbm-local}
\begin{equation}\label{f-optimal}
F=\frac{1}{2}\left(1+\Delta
\right)\equiv\frac{1}{2}\left(1+\sum_\chi|\vec{V}(\chi)|\right) \end{equation}
(in the strict sense we should write $F_{\rm OG}$, but to simplify the notation, we drop the subscript when no confusion arises). Eq.~\eqref{m-optimal} gives the best state that can be inferred, and Eq.~\eqref{f-optimal} gives the maximum fidelity that can be achieved for \emph{any} prior probability and \emph{any} measurement scheme specified by the conditional probabilities $p_{n}(\chi)$. We are thus left only with the non-trivial task of obtaining the optimal measurement. The goal of this paper is to compute the maximum value of~\eqref{f-optimal} within a unified framework for various measurement schemes, especially the local ones.
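As a toy illustration of Eqs.~\eqref{v-optimal} and~\eqref{f-optimal} (an illustration only; the sample size and random seed are arbitrary choices), the Python sketch below estimates the optimal-guess fidelity by Monte Carlo for the simplest case: a single copy ($N=1$) measured along the $z$ axis with an isotropic prior. The result should come out close to the known value $F=2/3$.

```python
# Toy Monte Carlo check of Eqs. (v-optimal) and (f-optimal): one copy (N = 1),
# a single von Neumann measurement along z, isotropic prior on the Bloch
# sphere. Sample size and seed are arbitrary; expected result F ~ 2/3.
import numpy as np

rng = np.random.default_rng(0)
S = 200000
z = rng.uniform(-1.0, 1.0, S)                  # uniform points on the sphere
phi = rng.uniform(0.0, 2.0 * np.pi, S)
r = np.sqrt(1.0 - z ** 2)
n = np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

p_plus = (1.0 + n[:, 2]) / 2.0                 # outcome probability p_n(+)
V_plus = (n * p_plus[:, None]).mean(axis=0)    # V(+) of Eq. (v-optimal)
V_minus = (n * (1.0 - p_plus)[:, None]).mean(axis=0)
Delta = np.linalg.norm(V_plus) + np.linalg.norm(V_minus)
F = (1.0 + Delta) / 2.0
print(F)  # ~ 0.667
```

Analytically $\vec V(\pm)=(0,0,\pm 1/6)$, so $\Delta=1/3$ and $F=2/3$, which the sample reproduces up to statistical noise.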
The paper is organised as follows. In the next section we derive the bounds on the fidelity of the optimal collective measurements. In Sec.~\ref{local} we discuss several local measurement schemes, with and without classical communication. Asymptotic values of the fidelity are computed in Sec.~\ref{asymptotic}. A summary of results and our main conclusions are presented in Sec.~\ref{conclusions}, and two technical appendices end the paper.
\section{Collective measurements}\label{collective} \subsection{2D states}\label{2D states} A 2D state corresponds to a point that is known to lie on the equator of the Bloch sphere. If we take it to be on the $xy$ plane, such a state, $\ket{\vec{n}}$, has $\vec{n}=(\cos\theta,\sin\theta,0)$. If no other information is available, the prior probability distribution has to be isotropic, that is, $dn=d\theta/(2\pi)$. Eq.~\eqref{f-optimal} then reads
\begin{equation}\label{Delta-2D}
\Delta=\sum_\chi\left|\int \frac{d\theta}{2 \pi}\vec{n} \ {\rm tr}\,[\rho_n O(\chi)]\right|. \end{equation}
Notice that we can write,
\begin{equation}\label{rhon}
\rho_n\equiv\rho(\theta)=U(\theta)\rho_0 U^{\dag}(\theta), \end{equation}
where $\rho_0$ is a fiducial state of angular momentum $J\equiv N/2$ and maximal magnetic quantum number, $m=J$, along any fixed direction on the equator of the Bloch sphere. In particular, $\rho_0$ can be chosen to point along the $x$ axis,
\begin{equation}\label{rhoxn}
\rho_0={\ket{JJ}_x}\, {}_x\bra{JJ}. \end{equation}
In Eq.~\eqref{rhon}, $U(\theta)$ is the unitary representation of a rotation around the $z$ axis. The group of such unitary matrices is isomorphic to $U(1)$. In the standard basis $\ket{jm}\equiv \ket{m}$, where $U(\theta)$ is diagonal, we have
\begin{equation}\label{rho-theta}
\rho(\theta)=\sum_{m\ n}e^{i (m-n)\theta}\left({\rho_0}\right)_{m n}
\ket{m}\bra{n}. \end{equation}
We are now ready to compute \eqref{Delta-2D}:
\begin{eqnarray}\label{Delta2D-1}
\Delta&=&\sum_\chi\left|\sum_{m n} \int \frac{d\theta}{2 \pi} e^{i \theta} e^{i
(m-n)\theta}
\left({\rho_0}\right)_{m n} [O(\chi)]_{n m} \right| \nonumber \\
&=&\sum_\chi\left|\sum_{m=-J}^{J-1}
\left({\rho_0}\right)_{m m+1} [O(\chi)]_{m+1\, m} \right|, \end{eqnarray}
where $[O(\chi)]_{mn}\equiv\bra{m}O(\chi)\ket{n}$. The following inequalities give the maximum value of $\Delta$:
\begin{eqnarray}
\nonumber
\Delta &\leq& \sum_\chi\sum_{m=-J}^{J-1}
\left|\left({\rho_0}\right)_{m m+1} [O(\chi)]_{m+1\, m} \right|\\
\nonumber
&\leq& \sum_{m=-J}^{J-1}
\left|\left({\rho_0}\right)_{m m+1}\right| \sum_\chi\left|[O(\chi)]_{m+1 \,m} \right|\\
&\leq& \sum_{m=-J}^{J-1}
\left|\left({\rho_0}\right)_{m m+1}\right|,
\label{optimal-Delta-2D} \end{eqnarray}
where in the last step in~(\ref{optimal-Delta-2D}) we have used that
\begin{equation}\label{povm-cond}
\sum_\chi\left|[O(\chi)]_{m+1 m} \right|\leq 1, \end{equation}
as follows from positivity and~(\ref{povm}). More precisely, positivity implies
\begin{equation}\label{positivity}
[O(\chi)]_{m m}[O(\chi)]_{m+1\, m+1}\geq \left|[O(\chi)]_{m
m+1}\right|^2, \end{equation}
and the Schwarz inequality yields
\begin{eqnarray}
&\displaystyle \sum_\chi \left|[O(\chi)]_{m \,m+1}\right|\leq &\nonumber \\
&\displaystyle \sqrt{\sum_\chi
[O(\chi)]_{m m}}\sqrt{\sum_\chi[O(\chi)]_{m+1\, m+1}}=1 .&
\label{schwarz-povm} \end{eqnarray}
There are two points worth emphasizing here. First, all the inequalities can be saturated; the bound is therefore tight (we give below a POVM that accomplishes this task). Second, the bound \eqref{optimal-Delta-2D} is completely general. If we were interested in encoding the information carried by a phase $\theta$ in a covariant way [as in \eqref{rhon}], but using a general fiducial state~\cite{braunstein,wiseman,wiseman-2}, $\rho_0=\sum_{mm'}a_m a^*_{m'}\ket{m}\bra{m'}$ (ideally the best possible one, which is not necessarily a tensor product of identical copies), the fidelity would still be bounded by \eqref{optimal-Delta-2D}. In this general case, one has $\Delta\le\sum_{mm'}a^*_{m'}\mathsf{M}_{m'm}a_m$, where $2\mathsf{M}_{m'm}=\delta_{m'\,m+1}+\delta_{m'+1\,m}$, and its maximum value is given by the largest eigenvalue of the matrix $\mathsf M$. A straightforward calculation gives~\cite{wiseman} $F_{\max}=(1+\cos[\pi/(d_J+1)])/2$, where $d_j=2j+1$ is the dimension of the invariant Hilbert space corresponding to the representation ${\bf j}$ of $SU(2)$.
In our problem $\rho_0$ is constrained by the condition of having identical copies. From \eqref{rhoxn} and \eqref{optimal-Delta-2D} one has~\cite{derka}
\begin{eqnarray}\label{Delta-2D-max}
\Delta& \leq &\frac{1}{2^{N}}\sum_{m=-J}^{J-1}
\sqrt{\pmatrix{N \cr J+m}\pmatrix{N \cr J+m+1}}\nonumber\\
&=& \frac{1}{2^N}\sum_{m=-J}^{J} \pmatrix{N \cr
J+m}\sqrt{\frac{J-m}{J+m+1}} , \end{eqnarray}
where we have used that
\begin{equation} \ket{JJ}_x=\ket{\vec x}^{\otimes N}={1\over 2^J}\sum_{m=-J}^J\sqrt{\pmatrix{N\cr J+m}}\,\ket{Jm} \end{equation}
(recall that $J\equiv N/2$).
We next show that there are POVMs that attain this bound. To saturate the first inequality in~(\ref{optimal-Delta-2D}) the phase of $[O(\chi)]_{m\,m+1}$ must be independent of $m$. This is ensured if this phase is a function of $m-n$. Similarly, a set of positive operators for which \mbox{$|O(\chi)_{m n}|=\mathit{constant}$} for all $\chi$, $m$ and $n$, will certainly saturate the remaining inequalities in~(\ref{optimal-Delta-2D}). In particular, the covariant (continuous) POVM, whose elements are given by
\begin{equation} [O(\phi)]_{mn}={\rm e}^{i(m-n)\phi} , \label{optimal-povm-2D} \end{equation}
satisfies all the requirements. Note that we have labeled the outcomes by a rotation angle~$\phi$, which plays the role of $\chi$. Hence, condition~(\ref{povm}) becomes
\begin{equation}
\int \frac{d\phi}{2 \pi} O(\phi) =\openone, \label{optimal-povm-2D-old} \end{equation}
which certainly holds for~(\ref{optimal-povm-2D}). These are rank one operators, and can also be written as
\begin{equation} O(\phi)=U(\phi)\ket{B}\bra{B}U^\dagger(\phi), \end{equation}
where
\begin{equation}\label{B}
\ket{B}=\sum_{m=-J}^{J}\ket{J,m}. \end{equation}
We have just shown that at least one optimal measurement [i.e., a POVM that saturates~(\ref{optimal-Delta-2D})] exists, but other optimal measurements can be found. POVMs with a finite number of outcomes, for instance, are straightforward to obtain by choosing the phases $e^{i\phi}$ to be the $d_J$-th roots of unity, namely,
\begin{equation} [O(k)]_{mn}={1\over d_J}\exp\left\{{i(m-n){2\pi k\over d_J}}\right\} , \end{equation}
where $k=1,\dots, d_J$. In this case $\{O(k)\}$ is a von Neumann measurement, since the number of rank one POVM elements equals the dimension of the Hilbert space.
In the asymptotic limit the fidelity can be obtained in terms of the moments of a binomial distribution $ \mathrm{Bin}(n,p)$ with parameters $n=N$ and $p=1/2$. We simply need to expand~\eqref{Delta-2D-max} in powers of $m$, i.e., around $\langle m\rangle=0$, to obtain
\begin{eqnarray}
\Delta&\leq&\frac{1}{2^N}\sum_m \pmatrix{N \cr J+m}\nonumber\\
&&\times\left[1- \frac{2 m}{N}+\left( \frac{2 m^2}{N^2}-\frac{1}{N}\right)+ O(1/N^{3/2})\right]
\label{Delta-2D-expansion} \end{eqnarray}
(notice that the sum over $m$ is shifted by $J$ with respect to the usual binomial distribution). The moments are well known to be $\langle 1 \rangle=1$, $\langle m \rangle=0$ and \mbox{$\langle m^2 \rangle=N/4$}. The latter shows that $m$ has ``dimensions'' of $\sqrt{N}$, which helps to organise the expansion in powers of $1/N$. We finally obtain
\begin{equation}\label{Delta-2D-asymptotic}
\Delta_{\max}=1-\frac{1}{2 N}+\cdots, \end{equation}
and a fidelity
\begin{equation}\label{fidelity-2D-asymptotics}
F=1-\frac{1}{4 N}+\cdots. \end{equation}
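The quality of this asymptotic expansion is easy to test numerically. The following Python sketch (illustrative only; the chosen values of $N$ are arbitrary) evaluates the exact bound~\eqref{Delta-2D-max} and compares it with $1-1/(2N)$.

```python
# Exact evaluation of the collective 2D bound (Delta-2D-max) versus its
# asymptotic form 1 - 1/(2N); the chosen values of N are illustrative.
from math import comb, sqrt

def delta_2d(N):
    # k = J + m runs over 0, ..., N-1 in the sum of Eq. (Delta-2D-max)
    return sum(sqrt(comb(N, k) * comb(N, k + 1)) for k in range(N)) / 2 ** N

for N in (4, 16, 64, 256):
    print(N, delta_2d(N), 1 - 1 / (2 * N))
```

The gap between the exact and asymptotic values shrinks rapidly with $N$, consistent with the neglected higher-order corrections.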
\subsection{3D states}\label{3D states} A 3D state $\ket{\vec n}$ corresponds to a general point on the Bloch sphere. As in the previous section we write
\begin{equation}\label{rhon-3D}
\rho_n\equiv\rho(\vec{n})=U(\vec{n})\rho_0 U^{\dag}(\vec{n}), \end{equation}
where for convenience $\rho_0$ is now chosen to point along the $z$ axis, i.e.,
\begin{equation}\label{rhoz-3D}
\rho_0=\ket{JJ}\bra{JJ}, \end{equation}
and $U(\vec{n})$ is the unitary representation [i.e., the element of $SU(2)$] of the rotation
that brings $\vec{z}$ into~$\vec{n}$ (a rotation around the vector $\vec z \times \vec n$).
Recalling \eqref{pnx} and \eqref{v-optimal}, we have
\begin{equation}\label{v-3D}
\vec{V}(\chi)=\int d n\,\vec{n}\; {\rm tr}\,[\rho_n O(\chi)], \end{equation}
where $dn$ is the invariant measure on the two-sphere, e.g.,
\begin{equation} \label{febc dn} dn={d(\cos\theta)d\phi\over4\pi}, \end{equation}
where $\theta$ and $\phi$ are the standard polar and azimuthal angles.
Notice that we can always define an operator $\Omega(\chi)$ in such a way that
\begin{equation}\label{O-optimal-3D-1}
O(\chi)=U[\vec{M}(\chi)] \Omega(\chi) U^{\dag}[\vec{M}(\chi)], \end{equation}
where $\vec{M}(\chi)$ is given by~\eqref{m-optimal}. Taking into account that $\Delta$ is rotationally invariant one obtains
\begin{equation}\label{Delta-3D-1}
\Delta=\sum_\chi\left|\int dn \; n_z \; {\rm tr}\,[\rho_n
\Omega(\chi)]\right|. \end{equation}
We readily see that $n_z= \cos\theta=\D{1}{0}{0}(\vec{n})$, where the rotation matrices $\D{j}{m}{m'}$ are defined in the standard way, $\D{j}{m}{m'}(\vec n)=\bra{jm}U(\vec n)\ket{jm'}$. We then have
\begin{eqnarray}\label{Delta-3D-2}
\Delta &=&\sum_\chi\left|\sum_{mm'}\int dn \;\D{1}{0}{0}(\vec{n})\;
\left(\rho_n\right)_{mm'}\Omega_{m'
m}(\chi)\right|\nonumber \\
&=& \sum_{m}\left[\sum_\chi \Omega_{mm}(\chi)\right]\int dn \;\D{1}{0}{0}(\vec{n})\;
\left(\rho_n\right)_{m
m}, \end{eqnarray}
where in the second equality we have used that
\begin{equation}\label{schur again} \int dn\,\D{1}{0}{0}(\vec{n})\left(\rho_n\right)_{mm'}=\delta_{mm'}\int dn\,\D{1}{0}{0}(\vec{n})\left(\rho_n\right)_{mm}, \end{equation}
as follows from Schur's lemma after realizing that the left hand side of~(\ref{schur again}) commutes with $U(\theta)$, the unitary transformations defined right after~\eqref{rhoxn}. Recall that these transformations, which are a $U(1)$ subgroup of $SU(2)$, have only one-dimensional irreducible representations, labeled by the magnetic quantum number~$m$, thus yielding relation~(\ref{schur again}). In~(\ref{Delta-3D-2}) we have removed the absolute value as all terms are positive (see below). Tracing \eqref{O-optimal-3D-1} one obtains $\sum_\chi {\rm tr}\,\Omega(\chi)=\sum_\chi {\rm tr}\, O(\chi)=d_J$. Therefore
\begin{eqnarray}
\Delta&\leq& d_J \max_{m} \int dn \;\D{1}{0}{0}(\vec{n})\;
\left(\rho_n\right)_{mm}\nonumber\\
&=&\max_m\langle10;Jm|Jm\rangle^2
={J\over J+1},
\label{Delta-3D-2p} \end{eqnarray} where in the second equality we have used that $\left(\rho_n\right)_{mm}=\D{J}{m}{J}(\vec{n})\mathfrak{D}^{(J)*}_{m J}(\vec{n})$ and the well-known orthogonality relations of the $SU(2)$ irreducible representations~\cite{edmonds}. Recalling that $J\equiv N/2$ we finally get~\cite{mp}
\begin{equation}\label{fidelity-3D}
F=\frac{N+1}{N+2}, \end{equation}
which for large $N$ behaves as
\begin{equation}\label{asymptotic-3D}
F=1-\frac{1}{N}+\cdots \,. \end{equation}
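The bound~\eqref{Delta-3D-2p} can be checked independently at $m=J$, where $\left(\rho_n\right)_{JJ}=[(1+\cos\theta)/2]^N$, so that $\Delta = d_J\int dn\,\cos\theta\,[(1+\cos\theta)/2]^N$. The Python sketch below (a consistency check, not part of the derivation) evaluates this integral exactly with rational arithmetic and recovers $J/(J+1)=N/(N+2)$.

```python
# Consistency check (outside the main derivation) of the bound (Delta-3D-2p):
# for m = J one has (rho_n)_{JJ} = ((1+x)/2)^N with x = cos(theta), hence
#   Delta = (N+1) * int_{-1}^{1} (dx/2) x ((1+x)/2)^N = N/(N+2) = J/(J+1).
# The x-integral is evaluated exactly with rational arithmetic.
from fractions import Fraction
from math import comb

def delta_3d(N):
    # int_{-1}^{1} x (1+x)^N dx: expand the binomial; only odd k contribute
    integral = sum(Fraction(2 * comb(N, k), k + 2) for k in range(1, N + 1, 2))
    return Fraction(N + 1, 2 ** (N + 1)) * integral  # d_J = N+1, dn -> dx/2

for N in (1, 2, 5, 10):
    assert delta_3d(N) == Fraction(N, N + 2)
    print(N, delta_3d(N), (1 + delta_3d(N)) / 2)  # Delta and F = (N+1)/(N+2)
```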
Let us finally give a POVM that saturates all the inequalities. The maximum of the Clebsch-Gordan coefficient $\langle10;Jm|Jm\rangle$ in~(\ref{Delta-3D-2p}) occurs at $m=J$. Hence, to attain the bound we need to choose
\begin{equation}\label{povm-3D} \Omega_{mm}(\chi)=c_\chi \delta_{Jm},\quad \sum_\chi c_\chi=d_J, \end{equation}
where the coefficients $c_\chi$ are positive. This leads straightforwardly to the optimal continuous POVM defined by
\begin{equation}\label{povm-3d-2}
O(\vec{m})=d_J \,U(\vec{m})\ket{JJ}\bra{JJ}U^{\dag}(\vec{m}) \end{equation}
(one can check that $ \int dm\; O(\vec{m})=\openone$).
POVMs with a finite number of elements can also be constructed. The only requirements are~\eqref{povm-3D} and, of course, \eqref{povm}. These constraints translate into a series of conditions for the set of directions $\{\vec{m}\}$, for which solutions with a finite number of elements can be found. We refer the reader to the literature~\cite{derka,lpt, bbm-direction,bbm-reference-1} for details.
\section{Local measurements}\label{local}
Collective measurements, although very interesting from the theoretical point of view, are difficult to implement in practice. Far more interesting for experimentalists are individual von Neumann measurements~\cite{hannemann,experiments}.
Individual von Neumann measurements on qubits are represented by two projectors
\begin{equation}\label{vonneumann}
O(\pm \vec{m})=\frac{1\pm \vec{m}\cdot \vec{\sigma}}{2}, \end{equation}
where $\vec{m}$ is a unit Bloch vector characterizing the measurement (in a spin system, e.g., $\vec{m}$ is the orientation of a Stern-Gerlach apparatus). In a general framework we must also allow classical communication, i.e., the possibility of adapting the orientation of the measuring devices depending on previous outcomes~\cite{fkf,hannemann,bbm-local}.
In the next sections we study quantitatively several schemes in ascending order of optimality: from the most basic tomography inspired schemes~\cite{tomography,likelihood} to the most general individual measurement procedure with classical communication~\cite{bbm-local}.
Our aim is to investigate how good these local measurements are as compared to the collective ones. We would like to stress that in this context few analytical results are known \cite{japos-2,gill-massar,bbm-mixed}. Our results here complement and extend the analysis carried out by some of the authors in~\cite{bbm-local}.
\subsection{Fixed Measurements}\label{fixed} Let us start with the most basic scheme for reconstructing a qubit: fixed von Neumann measurements along 2 orthogonal directions (say, $x$ and $y$) in the equator of the Bloch sphere for 2D states, or along 3 orthogonal directions (say, $x$, $y$ and $z$) for 3D states. This kind of scheme is often called tomography~\cite{tomography,experiments,qudits}.
Consider $N=2{\mathscr N}$ ($3{\mathscr N}$) copies of the state $\ket{\vec{n}}$. After ${\mathscr N}$ measurements along each one of the directions $\vec{e}_i$, $i=x,y,(z)$, we obtain a set of outcomes $+1$ and $-1$ with frequencies ${\mathscr N} \alpha_i$ and ${\mathscr N}(1-\alpha_i)$, respectively. This occurs with probability
\begin{equation}\label{probability-local}
p_n(\alpha)= \!\!\!\!\!\! \prod_{i=x,y,(z)} \!\!\!
\left(\begin{array}{c}
{\mathscr N}\\
{\mathscr N} \alpha_i
\end{array}
\right)\left(\frac{1+n_i}{2}\right)^{{\mathscr N}\alpha_i}\!\!\!
\left(\frac{1-n_i}{2}\right)^{{\mathscr N}(1-\alpha_i)}, \end{equation}
where $n_i$ are the projections of the vector $\vec{n}$ along each direction, $n_i\equiv \vec{n}\cdot \vec{e}_i$, and we have used the shorthand notation $\alpha=\{\alpha_i\}$. The combinatorial factor takes into account all the possible orderings of the outcomes, and the remaining factors are the quantum probabilities, i.e., the appropriate powers of ${\rm tr}\,[\ket{\vec{n}}\bra{\vec{n}} O(\pm \vec{e}_i)]$.
Since the expectation value of $\vec{\sigma}$ is $\bra{\vec{n}}\vec{\sigma}\ket{\vec{n}}=\vec{n}$, a straightforward guess based on the relative frequencies of each outcome is
\begin{equation}\label{cl-guess}
M^{\rm CLG}_{i}(\alpha)=\frac{
2\alpha_i-1}{\sqrt{\sum_j (2\alpha_j-1)^2}}, \end{equation}
where the superscript stands for central limit guess (CLG). Notice the normalization factor which ensures that $|\vec{M}^{\rm CLG}|=1$, hence $\vec{M}^{\rm CLG}$ always corresponds to a physical pure state~\cite{likelihood}. The average fidelity in this case is given by
\begin{equation}\label{fidelity-tomographic} F=\frac{1}{2}+ \frac{1}{2}\sum_\alpha
\int dn \; \vec{n}\cdot
\vec{M}^{\rm CLG}(\alpha)\, p_n(\alpha), \end{equation}
where $p_n(\alpha)$ is defined in~\eqref{probability-local}. Although the CLG~\eqref{cl-guess} is not the \emph{best} state that one can infer from the data, it has the nice property that it can be directly (and easily) obtained from the observed frequencies without further processing. So, it is interesting to compare its fidelity with the optimal one, given by \eqref{v-optimal} and \eqref{f-optimal} (see Fig.~\ref{fig:2Dfid}).
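As an aside, the average fidelity of the CLG is easy to estimate numerically. The following Monte Carlo sketch is ours, not the code used for the figures; it assumes NumPy, the function name \texttt{clg\_fidelity} is our own, and 2D states are represented by equatorial Bloch vectors as in the text:

```python
# Monte Carlo estimate of the average CLG fidelity for 2D states:
# N/2 von Neumann measurements along x and N/2 along y (tomography).
import numpy as np

def clg_fidelity(N, trials=20000, rng=None):
    """Average fidelity of the central limit guess for N copies."""
    rng = np.random.default_rng(0) if rng is None else rng
    half = N // 2
    theta = rng.uniform(0.0, 2 * np.pi, size=trials)
    n = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # true Bloch vectors
    # Observed frequencies of "+1" outcomes along x and y (binomial sampling)
    alpha = np.stack(
        [rng.binomial(half, (1 + n[:, i]) / 2) / half for i in (0, 1)],
        axis=1,
    )
    v = 2 * alpha - 1
    norm = np.linalg.norm(v, axis=1, keepdims=True)
    norm[norm == 0] = 1.0          # degenerate case: keep an arbitrary guess
    M = v / norm                   # normalized CLG, a pure-state Bloch vector
    # Fidelity f = (1 + n.M)/2, averaged over states and outcomes
    return np.mean((1 + np.sum(n * M, axis=1)) / 2)
```

For $N$ in the range of the figures, the resulting scaled error $N(1-F)$ hovers around $3/8$, in line with the value of $\epsilon^{\rm CLG}$ quoted below.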
\begin{figure}\label{fig:2Dfid}
\end{figure}
The OG for this measurement scheme is
$\vec{M}^{\rm OG}(\alpha)=\vec{V}(\alpha)/|\vec{V}(\alpha)|$, where
\begin{equation}\label{m-guess}
\vec{V}(\alpha)=
\int dn\; \vec{n} \ p_n(\alpha), \end{equation}
and, from \eqref{f-optimal}, the optimal fidelity reads $F=[1+\sum_{\alpha} V(\alpha)]/2$. Closed expressions of the fidelity for the lowest values of $N$ can be derived from~\eqref{fidelity-tomographic} and~\eqref{m-guess} using
\begin{equation}\label{n-integration}
\int dn \; n_{i_1}n_{i_2}\cdots n_{i_q}=
\frac{1}{K_q} \delta_{\{i_1 i_2}\delta_{i_3 i_4}\cdots
\delta_{i_{q-1}i_q\}}, \end{equation}
where the normalization factor is $K_q=q!!$ in 2D and $K_q=(q+1)!!$ in 3D, and the indexes in curly brackets are fully symmetrized, e.g., $\delta_{\{i_1 i_2}\delta_{i_3 i_4\}}=\delta_{i_1 i_2}\delta_{i_3 i_4}+ \delta_{i_1 i_3}\delta_{i_2 i_4}+\delta_{i_1 i_4}\delta_{i_2 i_3}$. Obviously, the integral~\eqref{n-integration} vanishes for $q$ odd. For larger values of $N$ the expressions become rather involved and we have resorted to a numerical calculation.
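For instance, the lowest even moments implied by Eq.~\eqref{n-integration} are $\int dn\, n_x^2=1/2$ and $\int dn\, n_x^4=3/8$ in 2D, and $1/3$ and $1/5$ in 3D. A quick Monte Carlo sketch (ours; it assumes NumPy) confirms these values:

```python
# Monte Carlo check of the isotropic moment integrals of Eq. (n-integration):
# 2D: <n_x^2> = 1/2, <n_x^4> = 3/8   (K_2 = 2!!, K_4 = 4!!),
# 3D: <n_x^2> = 1/3, <n_x^4> = 1/5   (K_2 = 3!!, K_4 = 5!!).
import numpy as np

def moments_2d(samples=200000, rng=None):
    rng = np.random.default_rng(1) if rng is None else rng
    theta = rng.uniform(0.0, 2 * np.pi, samples)   # uniform on the circle
    nx = np.cos(theta)
    return np.mean(nx ** 2), np.mean(nx ** 4)

def moments_3d(samples=200000, rng=None):
    rng = np.random.default_rng(2) if rng is None else rng
    # cos(theta) uniform in (-1, 1) gives the isotropic measure on the sphere
    nz = rng.uniform(-1.0, 1.0, samples)
    return np.mean(nz ** 2), np.mean(nz ** 4)
```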
The 2D case is illustrated in Fig.~\ref{fig:2Dfid}, where the average fidelity for the above two guesses and $N$ in the range $10-60$ is shown. In both instances the fidelity approaches unity as $N$ increases, and the OG always performs better than the CLG, as it should. Notice that to make the graphs more easily readable we have interpolated between integer points.
At this point, it is convenient to define the scaled error function
\begin{equation}\label{epsilon-N} \epsilon_N=N(1-F), \end{equation}
and the limit
\begin{equation}\label{epsilon-limit}
\epsilon=\lim_{N\to \infty}\epsilon_N , \end{equation}
which gives the first order coefficient of the fidelity in the large $N$ expansion, $F=1-\epsilon/N+ \cdots$ (the asymptotic behavior will be properly discussed in Sec.~\ref{asymptotic}). Fig.~\ref{fig:2Derr} shows $\epsilon_N$ as a function of $N$ for 2D states.
\begin{figure}\label{fig:2Derr}
\end{figure}
One readily sees that the CLG gives $\epsilon^{\rm{CLG}}\approx 3/8$~\cite{bbm-local}, while for collective measurements one has $\epsilon^{\rm{COL}} \approx 1/4$, in agreement with Eq.~\eqref{fidelity-2D-asymptotics}. The stability of the curves $\epsilon^{\rm{COL}}_N$ and $\epsilon^{\rm{CLG}}_N$ shows that the fidelity is well approximated by $F=1-\epsilon/N$ for such small values of $N$ as those in the figure. This asymptotic regime is not yet achieved by the OG, however we will show in Sec.~\ref{asymptotic} that the OG gives $\epsilon^{\rm{OG}} = 1/4$, thus matching the collective bound for large~$N$.
Fig.~\ref{fig:3Derr} shows the scaled error in the 3D case.
\begin{figure}\label{fig:3Derr}
\end{figure}
Again, one readily sees that the OG performs better than the CLG. However, the improvement does not seem to be enough to match the collective bound. We prove analytically in the next section that $\epsilon^{\rm{OG}}= 13/12> \epsilon^{\rm{COL}}=1$.
In previous paragraphs we have presented the most basic scheme, i.e., that with a minimal number of orientations of the measuring devices and without exploiting classical communication. A next step in complexity is to consider a more general set of fixed directions $\{\vec{m}_k\}$. It is intuitively clear that, assuming some sort of isotropy, the more directions are taken into account, the better the estimation procedure will be. For instance, in 2D we may consider a set of directions given by the angles $\theta_k=k \pi/N$, where $k=1,\dots, N$. The set of outcomes $\chi$ can be expressed as an $N$-digit binary number $\chi=i_{N}i_{N-1}\cdots i_{2}i_{1}$, where $i_k=0,1$; the fidelity then reads
\begin{equation}\label{isotropic}
F=\frac{1}{2}+\frac{1}{2}\sum_{\chi=00\cdots0}^{2^{N}-1}\left|\int dn\; \vec{n}
\prod_{k=1}^{N}\frac{1+(-1)^{i_k} \vec{n} \cdot \vec{m}_k}{2}\right|. \end{equation}
Analytical results for low $N$ can be obtained using~\eqref{n-integration}. For large $N$, numerical computations show that this ``isotropic strategy" is indeed better than tomography (see \cite{monras} for explicit results), but there is no substantial improvement.
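For 2D states, Eq.~\eqref{isotropic} can also be evaluated by brute force for small $N$, doing the integral over the equator by quadrature and summing over all $2^N$ outcome strings. The sketch below is ours and assumes NumPy; it reproduces, e.g., $F=3/4$ for a single measurement:

```python
# Brute-force evaluation of the "isotropic strategy" fidelity, Eq. (isotropic),
# for 2D states with directions at angles theta_k = k*pi/N, k = 1..N.
import itertools
import numpy as np

def isotropic_fidelity_2d(N, quad_points=4000):
    theta = np.linspace(0.0, 2 * np.pi, quad_points, endpoint=False)
    n = np.stack([np.cos(theta), np.sin(theta)])            # (2, Q) Bloch vectors
    angles = np.array([(k + 1) * np.pi / N for k in range(N)])
    m = np.stack([np.cos(angles), np.sin(angles)])          # (2, N) directions
    overlaps = n.T @ m                                      # (Q, N): n . m_k
    total = 0.0
    for bits in itertools.product((0, 1), repeat=N):        # all outcome strings
        signs = (-1.0) ** np.array(bits)
        p = np.prod((1 + signs * overlaps) / 2, axis=1)     # p_n(chi) on the grid
        V = (n * p).mean(axis=1)                            # \int dn  n  p_n(chi)
        total += np.linalg.norm(V)
    return 0.5 + 0.5 * total
```

For $N=2$ (orthogonal directions) one finds $F=1/2+\sqrt{2}/4\approx0.854$; this closed form is our own quick computation, not a value quoted in the text.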
For 3D states, however, it is really important to consider more general fixed measurement schemes, such as a 3D version of the isotropic one we have just discussed. One can readily see from Fig.~\ref{fig:3Derr} that the tomographic OG does not saturate asymptotically the collective bound and one could be tempted to think that classical communication may be required to attain it~\cite{bbm-local,gill-massar}. That would somehow indicate a fundamental difference between the estimation of 2D and 3D states.
There is a difficulty in implementing the isotropic scheme for 3D states since the notion of isotropic distribution of directions is not uniquely defined, which contrasts with the 2D case. A particularly interesting scheme that encapsulates this notion (at least asymptotically) and enables us to perform analytical computations consists of measurements along a set of random directions. With the same notation as in \eqref{isotropic}, the fidelity for this set of directions can be written as
\begin{equation}\label{fid-random}
F=\frac{1}{2}+\frac{1}{2}\!\!\!\sum_{\chi=0\cdots0}^{2^{N}-1}\int
\prod_{k=1}^{N} dm_k\left|\int
dn\;
\vec{n} \frac{1+(-1)^{i_k} \vec{n} \cdot
\vec{m}_k}{2}\right|. \end{equation}
In Fig.~\ref{fig:3Dranderr} we show the scaled error, $\epsilon_N$, obtained from numerical simulations for rather large $N$. One readily sees the improvement of the random scheme over the tomographic OG. We will show in the next section that the former indeed attains the collective bound asymptotically, thus resolving the puzzle of whether classical communication is needed or not. A numerical fit gives a value $\epsilon^{\rm{rand}}=1.002\pm 0.008$, which provides a numerical check of the analytical results of section~\ref{asymptotic} below.
\begin{figure}\label{fig:3Dranderr}
\end{figure}
\subsection{Adaptive Measurements}\label{adaptive} In this subsection we will discuss schemes that make use of classical communication. In principle, they should be more efficient than those considered so far.
\subsubsection{One step adaptive}
We first review a method put forward by Gill and Massar~\cite{gill-massar}, which we call ``one step adaptive". This scheme, although very simple, suffices to show in a very straightforward way that local measurements attain the collective bounds in 2D and 3D. It also has the nice feature that only one reorientation of the measuring device is required.
The basic idea of the method is to split the measurements in two stages. In the first one a small number of copies is used to obtain a rough estimate $\vec{M}_0$ of the state. In the second stage the remaining copies are used to refine the estimate by measuring on a plane orthogonal to $\vec{M}_0$. This strategy has a clear motivation from the information theory point of view. A measurement can be regarded as a query that one makes to a system. The most informative queries are those for which the prior probabilities of each outcome are the same. Measurements on the plane orthogonal to $\vec{M}_0$ have this feature. The method turns out to be efficient if the number of copies used in each of the two stages is carefully chosen.
To be more concrete, suppose we are given $N$ copies of an unknown qubit state. Let $N_0$ stand for the number of copies used in the first stage and let $\bar{N}=N-N_0$ stand for the rest. In the 2D (3D) case, one measures $N_0/2$ ($N_0/3$) copies along two (three) fixed orthogonal directions and infers the guess $\vec{M}_0$. In the second stage, one measures $\bar{N}$ ($\bar{N}/2$) along $\vec{u}$ ($\vec{u},\vec{v}$), which are chosen so that $\{\vec u,\vec v,\vec M_0\}$ is an orthonormal basis, and infers the final guess $\vec{M}$. If
$N_0\propto N^{a}$ with $0<a<1$, one can show that the one step adaptive scheme saturates the collective bound in the asymptotic regime~\cite{gill-massar}. Although we do not have a rigorous proof, our numerical analysis reveals that the optimal value of~$a$, i.e., the one that gives the maximal fidelity, is $a\sim 1/2$. For other choices of~$a$ the scheme can even be less efficient than some fixed measurement schemes. For the benefit of the reader, we present a detailed discussion of the method and a proof of the asymptotic limit within our unified framework in Appendix~\ref{osa}.
\subsubsection{Greedy scheme}\label{greedy-scheme}
We now move forward to more sophisticated schemes and discuss
one that exploits classical communication much more efficiently.
The idea behind it is to maximize the average fidelity at each single measurement step.
It is called ``greedy" because it does not take into account the total number of available copies, instead, it treats each copy as if it were the last one~\cite{fkf,hannemann}.
We first need to introduce some notation. Recall that the set of outcomes $\chi$ can be expressed as an $N$-digit binary number $\chi=i_{N}i_{N-1}\cdots i_{2}i_{1}$ ($i_k=0,1$). Since we allow the $k$-th measurement to depend on the list of previous outcomes, $i_{k-1} i_{k-2}\cdots i_{2}i_{1}\equiv \chi_{k}$ (note that $\chi=\chi_N$), we have $\vec{m}(\chi_k)$ instead of $\vec{m}_k$. This is a compact notation where the length $k$ of the string $\chi_k$ denotes the number of copies on which we have already measured. The orthogonality of the von Neumann measurements is imposed by the constraint
\begin{equation}\label{von-neumann}
\vec m(1\chi_{k-1})=-\vec m(0\chi_{k-1}), \end{equation}
where $1\chi_{k-1}$ is the list of length $k$ obtained by prepending 1 to the list $\chi_{k-1}$, and similarly for $0\chi_{k-1}$. In general, the number of independent vectors for a given $N$ is $(\sum_{k=1}^{N} 2^k )/2=2^N-1$. For example, if $N=2$ there are three independent directions, which can be chosen as $\vec{m}(0),\vec{m}(00),\vec{m}(01)$, and the other three are obtained using Eq.~\eqref{von-neumann}. Since the first measurement can be chosen at will, this number is reduced to $2^N-2$.
The general expression of the conditional probability thus reads
\begin{equation}\label{probability-general}
p_n(\chi)=\prod_{k=1}^{N}{1+\vec n\cdot\vec m(\chi_{k})\over2}, \end{equation}
and, as discussed in the introduction, the OG gives a fidelity $F=(1+\Delta)/2$, where
\begin{equation}\label{Delta-greedy}
\Delta=\sum_{\chi=00\cdots0}^{2^{N}-1}\left|\int dn\, \vec{n}\, p_n(\chi)\right|. \end{equation}
We could in principle attempt to maximize this expression with respect to \emph{all} the independent variables, i.e., all independent $\{\vec{m}(\chi_k)\}$. However, the maximization process very quickly becomes extremely difficult. In the greedy scheme one takes a more modest approach: one maximizes at each step $k$. This enables us to find a compact algorithm for computing the fidelity, as we discuss below. Furthermore, we show in Appendix B that in this situation the optimal local measurement at each step is indeed of von Neumann type, i.e., any other POVM will perform worse.
Let us concentrate on the last step, $N$, of the greedy scheme. Suppose we have optimised the previous $N-1$ measurements and
have obtained a string of outcomes $\chi_{N-1}$. To ease the notation, let us denote the direction of the last measurement by $\vec{m}_N$, namely $\vec{m}_N\equiv\vec{m}(0\chi_{N-1})=- \vec{m}(1\chi_{N-1})$. We then need to maximize
\begin{equation}\label{Delta-last}
d(\chi_N)= | \vec{V}(0\chi_{N-1})|+|\vec{V}(1\chi_{N-1})|. \end{equation}
Here
\begin{eqnarray}
\vec{V}(i_N\chi_{N-1})=\int dn\, \vec{n}\, p_n(\chi_{N-1})
\frac{1+(-1)^{i_N}\vec{n}\cdot \vec{m}_N}{2}, \end{eqnarray}
or, equivalently,
\begin{equation}\label{vectors-last}
\vec{V}(i_N\chi_{N-1})=
\frac{1}{2}[\vec{V}(\chi_{N-1})+(-1)^{i_N}\mathsf{A}(\chi_{N-1})\vec{m}_N], \end{equation}
where $\mathsf{A}$ is the real positive symmetric matrix with elements
\begin{equation}\label{matrix-A}
\mathsf{A}_{k l}(\chi_{N-1})=\int dn\, n_k n_l\, p_n(\chi_{N-1}). \end{equation}
Therefore
\begin{eqnarray}\label{delta-last-2}
d(\chi_N)&=&\frac{1}{2}\left\{|\vec{V}(\chi_{N-1})\right.+
\mathsf{A}(\chi_{N-1})\vec{m}_N| + \nonumber \\
&&
\left.|\vec{V}(\chi_{N-1})-\mathsf{A}(\chi_{N-1})\vec{m}_N|\right\}. \end{eqnarray}
Notice that for 2D states and fixed $d(\chi_N)$ the points $\vec{\mu}=\mathsf{A}\,\vec{m}_N$ lie on an ellipse with foci at $\pm \vec{V}$ (an ellipsoid for 3D states). In addition they fulfil the normalization constraint
\begin{equation}\label{omega-constrain}
\vec{\mu} \cdot(\mathsf{A}^{-2} \vec{\mu})=1, \end{equation}
which also defines an ellipse (ellipsoid in 3D) centered at the origin. As usual, optimality tells us that the maximum of $d(\chi_N)$ occurs at the points of tangency of the ellipses (ellipsoids). This provides a geometrical procedure for finding the optimal direction $\vec{m}_N$ and an algorithm for computing
$|\vec{V}(\chi_N)|$.
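For 2D states the whole greedy recursion is easy to implement numerically: discretize the equator, track $p_n(\chi)$ on the grid, and maximize Eq.~\eqref{delta-last-2} by a grid search over the angle of $\vec{m}_N$. The sketch below is ours and assumes NumPy; since it enumerates all $2^N$ histories it is only meant for small $N$:

```python
# Greedy scheme for 2D states: at each step, for each history chi_{k-1},
# choose the direction m maximizing Eq. (delta-last-2), built from
# V(chi_{k-1}) and the matrix A of Eq. (matrix-A).
import numpy as np

Q = 4000
theta = np.linspace(0.0, 2 * np.pi, Q, endpoint=False)
n = np.stack([np.cos(theta), np.sin(theta)])        # (2, Q) Bloch vectors

def greedy_fidelity_2d(N, grid=2000):
    psis = np.linspace(0.0, np.pi, grid, endpoint=False)
    ms = np.stack([np.cos(psis), np.sin(psis)])     # candidate directions
    histories = [np.ones(Q)]                        # p_n(empty history) = 1
    for _ in range(N):
        new = []
        for p in histories:
            V = (n * p).mean(axis=1)                # \int dn  n  p_n
            A = (n[:, None, :] * n[None, :, :] * p).mean(axis=2)  # Eq. (matrix-A)
            Am = A @ ms
            d = (np.linalg.norm(V[:, None] + Am, axis=0)
                 + np.linalg.norm(V[:, None] - Am, axis=0))
            m = ms[:, np.argmax(d)]                 # maximizer of Eq. (delta-last-2)
            for sign in (+1.0, -1.0):               # the two outcomes
                new.append(p * (1 + sign * (m @ n)) / 2)
        histories = new
    Delta = sum(np.linalg.norm((n * p).mean(axis=1)) for p in histories)
    return (1 + Delta) / 2
```

For $N=1$ this returns $F=3/4$, and for $N=2$ one finds $F=1/2+\sqrt{2}/4\approx0.854$, the 2D counterpart of Eq.~\eqref{fidelity-N=2} (again, our own computation, not a value quoted in the text).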
We now proceed to obtain some explicit expressions for low~$N$.
We discuss only the 3D case, as the 2D case is completely analogous (numerical results for 2D states are shown in Fig.~\ref{fig:2Dfid} and Fig.~\ref{fig:2Derr}).
When we only have one copy of the state, $N=1$, the Bloch vector of the measurement can be chosen in any direction, say $\vec{e}_x$, i.e., $\vec{m}(0)=-\vec{m}(1)=\vec{e}_x$. The explicit computation of the vector $\vec{V}$ in~\eqref{v-optimal} gives
\begin{equation}\label{N-1-greedy}
\vec{V}(\chi_1)=\frac{1}{6}\vec{m}(\chi_1), \end{equation}
and $F=2/3$, as expected from~\eqref{fidelity-3D} [or \eqref{Delta-2D-max}].
The first non-trivial case is $N=2$. The matrix $\mathsf{A}(\chi_1)$ reads
\begin{equation}\label{A-2}
\mathsf{A}_{k l}(\chi_1)=\frac{1}{6} \delta_{k l} \ \ \ \ \chi_1=0,1, \end{equation}
i.e., $\mathsf{A}(\chi_1)$ is independent of $\chi_1$ and proportional to the identity. The maximum of~\eqref{delta-last-2} occurs for $\vec{m}_2 \perp \vec{e}_x$, so we choose $\vec{m}_2=\vec{e}_y$, that is, $\vec{m}(00)= \vec{m}(01)=\vec{e}_y$ (notice that in general these two vectors do not need to be equal; they are only required to be orthogonal to $\vec{m}(0)$). Because of~\eqref{von-neumann}, we also have $\vec{m}(10)= \vec{m}(11)=-\vec{e}_y$. The OG reads
\begin{equation}\label{guess-N=2}
\vec{
M}^{(2)}(\chi)=\frac{\vec{m}(\chi_2)+\vec{m}(\chi_1)}{\sqrt{2}} , \end{equation}
e.g., $\vec{M}^{(2)}(01)=[\vec{m}(01)+\vec{m}(1)]/\sqrt{2} =[\vec{m}(01)-\vec{m}(0)]/\sqrt{2}=[\vec{e}_y-\vec{e}_x]/\sqrt{2}$. One obtains
\begin{equation}\label{V-modul-N=2}
|\vec{V}(\chi_2)|=\frac{\sqrt{2}}{12} \end{equation}
for all~$\chi_2$, which implies
\begin{equation}\label{fidelity-N=2}
F^{(2)}=\frac{3+\sqrt{2}}{6}. \end{equation}
The case $N=3$ can be computed along the same lines. One can easily see that $\vec{m}(\chi_3)$ has to be perpendicular to $\vec{m}(\chi_2)$ and $\vec{m}(\chi_1)$. This shows that, up to $N=3$, the greedy approach does not use classical communication, i.e., the directions of the measuring devices are only required to be mutually orthogonal, independently of the outcomes. The optimal guess is a straightforward generalization of \eqref{guess-N=2}:
\begin{equation}\label{guess-N=3}
\vec M^{(3)}(\chi)=\frac{\vec{m}(\chi_3)+\vec{m}(\chi_2)+\vec{m}(\chi_1)}{\sqrt{3}}, \end{equation}
and the fidelity reads
\begin{equation}\label{fidelity-N=3}
F^{(3)}=\frac{3+\sqrt{3}}{6}. \end{equation}
The above results could have been anticipated. As already mentioned, the outcomes of a measurement on the plane orthogonal to the guess have roughly the same probability and are, hence, most informative. One can regard these measurements as corresponding to mutually unbiased observables, i.e., those for which the overlap between states of the different bases (one basis associated with each observable) is constant~\cite{unbiased}. Hence, there is no redundancy in the information about the state acquired from the different observables. This point of view also allows us to extend the notion of (Bloch vector) orthogonality to states in spaces of arbitrary dimension.
The case $N=4$ is even more interesting, since four mutually orthogonal vectors cannot fit onto the Bloch sphere. We expect classical communication to start playing a role here. Indeed the Bloch vectors $\vec{m}(\chi_4)$ do depend on the outcomes of previous measurements. They can be compactly written as~\footnote{The generalization of this formula to arbitrary $N$, i.e., $\vec{m}(\chi_N)=(-1)^{i_N} \sum_{k=1}^{N-2}\vec{m}(\chi_k)\times \vec{m}(\chi_{N-1})/\sqrt{N-2}$, provides a set of points that are roughly isotropically distributed in the sphere, which may be interesting in other contexts.}
\begin{equation}\label{ebc-vectors} \vec{m}(\chi_4)= \frac{(-1)^{i_4}}{\sqrt{2}}\sum_{k=1}^{2}\vec{m}(\chi_k)\times \vec{m}(\chi_{3}). \end{equation}
Again, one can see that the vectors $\vec{m}(\chi_4)$ are orthogonal to the guess one would have made with the first three measurements. The fidelity in this case is
\begin{equation}\label{fidelity-N=4} F^{(4)}={15+\sqrt{91}\over30}. \end{equation}
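The stated properties of the vectors in Eq.~\eqref{ebc-vectors}, unit norm and orthogonality to the three-outcome guess of Eq.~\eqref{guess-N=3}, can be checked mechanically over all sign histories. The sketch below is ours and assumes NumPy, with $\vec{m}(\chi_1)=\pm\vec{e}_x$, $\vec{m}(\chi_2)=\pm\vec{e}_y$, $\vec{m}(\chi_3)=\pm\vec{e}_z$ as in the text:

```python
# Check that the vectors of Eq. (ebc-vectors) are unit Bloch vectors
# orthogonal to the guess of Eq. (guess-N=3), for all 16 sign histories.
import itertools
import numpy as np

def check_ebc_vectors():
    """Worst deviations of |m(chi_4)| from 1 and of m(chi_4).M^(3) from 0."""
    e = np.eye(3)
    worst_norm = worst_dot = 0.0
    for s1, s2, s3, s4 in itertools.product((1, -1), repeat=4):
        m1, m2, m3 = s1 * e[0], s2 * e[1], s3 * e[2]
        m4 = s4 / np.sqrt(2) * (np.cross(m1, m3) + np.cross(m2, m3))  # Eq. (ebc-vectors)
        guess3 = (m1 + m2 + m3) / np.sqrt(3)                          # Eq. (guess-N=3)
        worst_norm = max(worst_norm, abs(np.linalg.norm(m4) - 1))
        worst_dot = max(worst_dot, abs(m4 @ guess3))
    return worst_norm, worst_dot
```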
For larger $N$, we have computed the fidelity of the greedy scheme by numerical simulations. In Fig~\ref{fig:3Derr} (Fig.~\ref{fig:2Dfid} and Fig.~\ref{fig:2Derr} for 2D states) we show the results for $10\leq N\leq 60$ (diamonds). Notice that the greedy scheme is indeed better than fixed measurement schemes and approaches the collective bound very fast.
Actually, the greedy scheme is the best we can use if we do not know a priori the number of copies that will be available; obviously, the best one can do in these circumstances is to optimise at each step. However, if $N$ is known, we have extra information that some efficient schemes could exploit to increase the fidelity. We next show that this is indeed the case.
\subsubsection{General LOCC scheme} In the most general LOCC scheme one is allowed to optimise over all the Bloch vectors $\{\vec{m}(\chi_k)\}$, thus taking into account the whole history of outcomes. Up to $N=3$ the results are the same as for the greedy scheme: orthogonal Bloch vectors for the measurements and no classical communication required. The results \eqref{fidelity-N=2} and \eqref{fidelity-N=3} are, therefore, the largest fidelity that can be attained by any LOCC scheme.
The most interesting features appear at $N=4$. Here there are 14 independent vectors, which can be grouped into two independent families of seven vectors. With such a large number of vectors an analytical calculation is too involved and we have resorted partially to a numerical optimization. The solution exhibits some interesting properties. First, one obtains that $\vec{m}(\chi_1)\perp \vec{m}(\chi_2)$, for all $\chi_1$ and $\chi_2$, as in the $N=2$ and $N=3$ cases. Therefore one can choose $\vec{m}(\chi_1)=(-1)^{i_1} \vec e_{x}$ and $\vec{m}(\chi_2)=(-1)^{i_2} \vec e_{y}$. Only for the third and fourth measurements does one really have to make different choices in accordance with the sequence of preceding outcomes. The Bloch vectors of the third measurement can be parametrized by a single angle $\alpha$ as
\begin{equation}\label{mx3}
(-1)^{i_3} \vec{m}(\chi_3)=\cos\alpha \, \vec{u}_1(\chi_2)+\sin\alpha \, \vec{v}_1(\chi_2), \end{equation}
where
\begin{eqnarray}
\vec{u}_1(\chi_2) &=& \vec{m}(\chi_1)\times \vec{m}(\chi_2), \nonumber \\
\vec{v}_1(\chi_2)&=& \vec{u}_1(\chi_2)\times \vec{s}(\chi_2),\nonumber\\
\vec{s}(\chi_2)&=&{ \vec{m}(\chi_2)+ \vec{m}(\chi_1)\over\sqrt2} . \end{eqnarray}
Notice that, rather unexpectedly, $\vec m(\chi_{1})$, $\vec m(\chi_{2})$ and $\vec m(\chi_{3})$ are not mutually orthogonal. The optimal value of this angle is $\alpha=0.502$. Although we cannot give any insight as to why this value is optimal, in agreement with our intuition one sees that $\vec m(\chi_{3})\perp \vec M(\chi_2)$, i.e., the third measurement probes the plane orthogonal to the Bloch vector one would guess from the first two outcomes [see Eq.~\eqref{guess-N=2}]. The vectors of the fourth measurement can be parametrized by two angles, $\beta$ and $\gamma$, as
\begin{equation}\label{mx4}
(-1)^{i_4} \vec{m}(\chi_4)=\cos\gamma \, \vec{u}_2(\chi_3)+\sin\gamma \, \vec{v}_2(\chi_3), \end{equation}
where
\begin{eqnarray} \vec{u}_2(\chi_3)&=& \vec{s}(\chi_2)\times \vec{m}(\chi_3),\nonumber \\
\vec{v}_2(\chi_3)&=& \cos\beta \, \vec{m}(\chi_3)-\sin\beta\,
\vec{s}(\chi_2) . \end{eqnarray}
The optimal values of these angles are $\beta=0.584$, $\gamma=0.538$, and the corresponding fidelity is $F^{(4)}_{\rm general}=0.8206$. This is just $1.5\%$ lower than the absolute bound $5/6=0.8333$ attained with collective measurement, Eq.~\eqref{fidelity-3D}. Note that this value is slightly larger than the fidelity obtained with the greedy scheme, $F^{(4)}_{\mbox{\scriptsize general}}>F^{(4)}_{\mbox{\scriptsize greedy}}=(15+\sqrt{91})/30\approx 0.8180$. The extra information consisting of the number of available copies has indeed been used to attain a larger fidelity. We conclude that for $N>3$, it pays to relax optimality at each step, and greedy schemes~\cite{fkf,hannemann} are thus not optimal. We would like to remark that if, for some reason, some copies are lost or cannot be measured, the most general scheme will not be optimal, since it has been designed for a specific number of copies. We have also computed the values of the maximal LOCC fidelities for $N=5,6$: $F^{(5)}_{\rm general}=0.8450$ and $F^{(6)}_{\rm general}=0.8637$. Beyond $N=6$ the small differences between this and the greedy scheme become negligible.
\section{Local schemes in the asymptotic limit}\label{asymptotic} The asymptotic expression of the fidelity enables us to compare different schemes independently of the number of copies. If two schemes have the same asymptotic fidelity, it is justified to say that they have the same efficiency, and conversely. Here we will compute such asymptotic expansions. We will show that, asymptotically, classical communication is not needed to attain the absolute upper bound given by the maximum fidelity of the most general collective measurements. Some of the results presented in this section were obtained by two of the authors by explicit computations in~\cite{bbm-local}. Here we will use a statistical approach that relates the Fisher information~$\mathrm{I}$~\cite{cover} to the average fidelity $F$. This approach greatly simplifies our earlier derivations.
We label the independent state parameters by the symbol $\eta$. This symbol will refer to the two angles $\theta$, $\phi$ for 3D states: $\eta\equiv (\theta,\phi)$; and the polar angle $\theta$ for 2D states: $\eta\equiv\theta$.
Assume that under a sensible measurement and estimation scheme (we mean by that a scheme that leads to a perfect determination of the state when $N\to \infty$, i.e., $F\stackrel{N\to \infty}{\longrightarrow} 1$) the estimated state is close to the signal state, that is, their respective parameters $\hat\eta(\chi)$ and $\eta$ differ by a small amount. In this Section, a hat ($\hat{\phantom{a}}$) will always refer to estimated parameters, the fidelity $f_n(\chi)$, Eq.~\eqref{f}, will be denoted by $f_{\eta}(\hat{\eta})$, and similarly, the probability $p_n(\chi)$ will be written as $p_\eta(\chi)$. Note that the guessed parameters $\hat\eta(\chi)$ are based on a particular outcome $\chi$. This dependence will be implicitly understood when no confusion arises.
The fidelity can be approximated by the first terms of its series expansion:
\begin{eqnarray}
f_{\eta}(\hat\eta)\approx
1 +
\frac{1}{2}\left.\frac{\partial^2 f}{\partial
\hat\eta_i \partial
\hat\eta_j}\right|_{\hat\eta=\eta}(\hat\eta_i-
{\eta}_i)(\hat\eta_j-{\eta}_j), \end{eqnarray}
where we have used that $f_{\eta}(\eta)=1$ and $\partial f_{\eta}/\partial \hat{\eta}|_{\hat{\eta}=\eta}=0$. Averaging over all possible outcomes, we have
\begin{equation}
\label{eq:fidelityexpanded}
F(\eta)\approx1+\frac{1}{2}{\rm tr}\,[\mathsf{H}(\eta)\,\mathsf{V}(\eta)], \end{equation}
where
\begin{equation}\label{f-eta}
F(\eta)\equiv \sum_{\chi} p_{\eta}(\chi)
f_{\eta}[\hat{\eta}(\chi)], \end{equation}
is the ``pointwise" fidelity, $\mathsf{H}(\eta)$ is the Hessian matrix of $f_\eta(\hat\eta)$ at $\hat{\eta}=\eta$, and $\mathsf{V}(\eta)$ is the covariance matrix, with elements $\mathsf{V}_{ij}(\eta)=\sum_{\chi} p_{\eta}(\chi)(\hat\eta_i-{\eta}_i)(\hat\eta_j-{\eta}_j)$.
It is well known that the variance of an unbiased estimator is bounded by
\begin{equation}\label{eq:variancevsfisher}
\mathsf{V}(\eta)\geq\frac{1}{\mathrm{I}(\eta)}, \end{equation}
the so called Cram{\'e}r-Rao bound \cite{caves,japos-1,gill-massar}, where the Fisher information matrix $\mathrm{I}(\eta)$ is defined as
\begin{eqnarray}
\label{eq:fisherinfo}
\mathrm{I}_{ij}(\eta)&=&\sum_{\chi}p_{\eta}(\chi)\frac{\partial
\ln p_{\eta}(\chi)}{\partial \eta_i}
\frac{\partial \ln p_{\eta}(\chi)}{\partial \eta_j}. \end{eqnarray}
The conditional probability $p_{\eta}(\chi)$ regarded as a function of $\eta$ is called the likelihood function $\mathcal{L}(\eta)=p_{\eta}(\chi)$. It is also well known that the bound~\eqref{eq:variancevsfisher} is attained by the maximum likelihood estimator (MLE)~\cite{cramer}, defined as $\hat{\eta}^{\rm MLE}=\mathrm{argmax}\; \mathcal{L}(\eta)$. Hence this bound is tight.
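For 2D tomography the MLE is a one-dimensional search over the polar angle and can be sketched directly. The code below is ours and assumes NumPy; a simple grid search stands in for a proper optimizer:

```python
# Maximum likelihood estimation of a 2D state from tomographic x/y counts.
import numpy as np

def mle_angle(kx, ky, Nhalf, grid=20000):
    """argmax of the log-likelihood for kx (ky) '+1' outcomes along x (y),
    out of Nhalf measurements along each direction."""
    th = np.linspace(0.0, 2 * np.pi, grid, endpoint=False)
    eps = 1e-12                                   # guard against log(0)
    logL = (kx * np.log((1 + np.cos(th)) / 2 + eps)
            + (Nhalf - kx) * np.log((1 - np.cos(th)) / 2 + eps)
            + ky * np.log((1 + np.sin(th)) / 2 + eps)
            + (Nhalf - ky) * np.log((1 - np.sin(th)) / 2 + eps))
    return th[np.argmax(logL)]
```

With counts matching the expectation values of a state at $\theta=0.7$, the returned angle lands close to $0.7$, as it should.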
A link between the Fisher information and the fidelity is obtained by combining \eqref{eq:fidelityexpanded} and \eqref{eq:variancevsfisher}, and noticing that $\mathsf{H}(\eta)$ is negative definite. We thus have
\begin{equation}
\label{eq:crbound}
F(\eta)\leq1+\frac{1}{2}{\rm tr}\,\frac{\mathsf H(\eta)}{\mathrm{I}(\eta)} \end{equation}
to leading order and for any unbiased estimation scheme.
The Fisher information is additive. This means that if $p^{(2)}_{\eta}(\chi,\chi')=p_\eta(\chi)p'_\eta(\chi')$, which happens when we perform two measurements [say, $\{O(\chi)\}$ and $\{O'(\chi')\}$] on two identical states, the Fisher information of the combined measurement is simply $\mathrm{I}^{(2)}(\eta)=\mathrm{I}(\eta)+\mathrm{I}'(\eta)$. In particular, for $N$ {\em identical} measurements, we have $\mathrm{I}^{(N)}(\eta)=N~\mathrm{I}(\eta)$.
Finally, since the OG is a better estimator, and it is asymptotically unbiased, we must have
\begin{equation}\label{eq:ordering}
F^{\rm{OG}}(\eta)\simeq F^{\rm{MLE}}(\eta)=1+
\frac{1}{2 N}{\rm tr}\, \frac{\mathsf{H}(\eta)}{\mathrm{I}(\eta)} \end{equation}
to leading order, where the fidelities refer to an estimation scheme consisting of $N$ identical measurements. Below we use Eq.~\eqref{eq:ordering} to compute the asymptotic limits of the fixed measurement schemes discussed in Sec.~\ref{fixed}.
We would like to remark that the Cram{\'e}r-Rao bound assumes some regularity conditions on the figure of merit and the estimators. These conditions are satisfied for the problem considered here, but the bound may not hold in more general situations such as the estimation of a mixed qubit state (see~\cite{bbm-mixed}).
\subsection{2D states}
The 2D case is rather simple because the states have just one parameter and the Fisher information is a single number. Moreover, {\em any} von Neumann measurement on a 2D system along a direction on the equator of the Bloch sphere has $\mathrm{I}=1$, as can be checked by plugging
\begin{equation}
p_{\theta}(\pm1)=\frac{1\pm\cos(\theta-\theta_m)}{2} \end{equation}
into \eqref{eq:fisherinfo}, where $\theta_m$ is the polar angle of $\vec{m}$, the direction along which the von Neumann measurement is performed. Therefore, in 2D the Fisher information for a set of $N$ measurements (identical or not) is $\mathrm I^{(N)}=N$.
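Explicitly, setting $\delta\equiv\theta-\theta_m$,
\begin{equation}
\mathrm{I}=\sum_{\pm}\frac{1}{p_{\theta}(\pm1)}
\left[\frac{\partial p_{\theta}(\pm1)}{\partial\theta}\right]^2
=\frac{\sin^2\delta}{4}\left(\frac{2}{1+\cos\delta}+
\frac{2}{1-\cos\delta}\right)=1,
\end{equation}
independently of both $\theta$ and $\theta_m$.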
Since the Hessian is
\begin{equation}\label{eq:hessian2D}
\mathsf H=\left.\frac{\partial^2 f}{\partial\hat\theta^2}\right|_{\hat\theta=\theta}=-\frac{1}{2}, \end{equation}
{\em any} sensible local measurement scheme on 2D states will yield
\begin{equation}
F(\theta)=1-\frac{1}{4N}+\cdots . \end{equation}
Note that this fidelity is independent of $\theta$, so it coincides with the average fidelity $F=\int d\theta\, F(\theta)/(2\pi) = 1- 1/(4N)+\cdots$.
We recall that this fidelity is attained by the MLE, and hence also by the OG. We note that it coincides with the collective bound, Eq.~\eqref{fidelity-2D-asymptotics}. This implies that the collective bound is attained by tomography, without classical communication. It is quite surprising that the most basic scheme, with measurements along two fixed orthogonal directions, already saturates the collective bound asymptotically.
\subsection{3D states} The 3D case is more involved. The results shown in Fig.~\ref{fig:3Derr} hint that tomography does not saturate the collective bound. There are, however, other measurement schemes that do saturate this bound and still do not require classical communication. We prove these two statements below. \par \subsubsection{Tomography} Tomography, as explained in Sec.~\ref{fixed}, consists in measuring along three orthogonal directions on the Bloch sphere. We will concentrate on the OG estimator, or more precisely on the MLE, which is asymptotically equivalent. We do not discuss the CLG, Eq.~\eqref{cl-guess}, as it does not attain the collective bound even for 2D states (see Fig.~\ref{fig:2Derr}).
We first compute the Fisher information. Consider a scheme that consists in repeating ${\mathscr N}$ times the following: take 3 copies of the state and perform a measurement along $\vec e_x$ on the first copy, along $\vec e_y$ on the second copy, and along $\vec e_z$ on the third copy (recall that $N=3{\mathscr N}$). These three von Neumann measurements can be regarded as a single measurement with $2^3$ possible outcomes labeled by $\chi=(\chi_1,\chi_2,\chi_3)$, where $\chi_j=\pm1$. The probability of obtaining an outcome $\chi$ is
\begin{equation}
\label{eq:prob3D}
p_\eta(\chi)=\prod_{j=x,y,z}\frac{1}{2}\left(1+\chi_j \vec n\cdot \vec e_j\right) . \end{equation}
The Fisher information matrix $\mathrm{I}(\theta,\phi)$ of such an elementary measurement is obtained by substituting Eq.~(\ref{eq:prob3D}) in Eq.~(\ref{eq:fisherinfo}). Note that the Fisher information of this scheme is $\mathrm{I}^{({\mathscr N})}(\theta,\phi)={\mathscr N}\, \mathrm{I}(\theta,\phi)$.
The Hessian of the fidelity is
\begin{equation}\label{eq:hessian3D}
\mathsf{H}(\theta,\phi)=-
\pmatrix{\displaystyle \frac{1}{2}&0 \cr
0&\displaystyle\frac{ 1-\cos 2\theta}{4}}, \end{equation}
which turns out to be independent of $\phi$. With this, we obtain
\begin{eqnarray}
&&\frac{1}{{\mathscr N}}{\rm tr}\,
\frac{\mathsf{H}(\theta,\phi)}{\mathrm{I}(\theta,\phi)}=
-\frac{3}{16{\mathscr N}} \nonumber \\
&& \times\frac{35+28\cos2\theta+\cos4\theta-8\cos4\phi\sin^4\theta}
{9+7\cos2\theta-2\cos4\phi\sin^2\theta}. \end{eqnarray}
Integrating over the isotropic prior probability $dn$, Eq.~\eqref{febc dn}, we obtain
\begin{equation}\label{eq:traceHI} \frac{1}{{\mathscr N}}\int dn\, {\rm tr}\, \frac{\mathsf{H}(\theta,\phi)}{\mathrm{I}(\theta, \phi)}
=-\frac{13}{18{\mathscr N}}. \end{equation}
Recalling (\ref{eq:ordering}) and $N=3{\mathscr N}$ we finally get
\begin{equation}
\label{eq:bound3D}
F^{\rm OG}=1-\frac{13}{12N}+\cdots. \end{equation}
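These closed-form results can be cross-checked numerically. The following script (our own sketch, not part of the paper) builds the $2^3$-outcome probabilities, computes the elementary Fisher matrix by finite differences, verifies that ${\rm tr}\,(\mathsf H\,\mathrm I^{-1})=-3/4$ on the equator, and recovers the isotropic average $-13/18$ behind the $13/(12N)$ coefficient:

```python
import numpy as np
from itertools import product

CHI = np.array(list(product((1, -1), repeat=3)))  # the 8 sign patterns chi

def probs(theta, phi):
    # p_eta(chi) for one measurement along each of e_x, e_y, e_z
    n = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    return np.prod((1 + CHI * n) / 2, axis=1)

def tr_H_over_I(theta, phi, h=1e-6):
    # Fisher matrix of the elementary measurement, by finite differences
    g = np.array([(probs(theta + h, phi) - probs(theta - h, phi)) / (2 * h),
                  (probs(theta, phi + h) - probs(theta, phi - h)) / (2 * h)])
    I = g @ np.diag(1 / probs(theta, phi)) @ g.T
    H = -np.diag([0.5, (1 - np.cos(2 * theta)) / 4])  # Hessian of the fidelity
    return np.trace(H @ np.linalg.inv(I))

# on the equator the ratio is -3/4, independently of phi
assert abs(tr_H_over_I(np.pi / 2, 0.3) + 0.75) < 1e-4

# isotropic average over dn = sin(theta) dtheta dphi / (4 pi): expect -13/18
M = 120  # midpoint grid (avoids the removable coordinate singularities)
ths = (np.arange(M) + 0.5) * np.pi / M
phs = (np.arange(M) + 0.5) * 2 * np.pi / M
avg = sum(np.sin(t) * tr_H_over_I(t, p) for t in ths for p in phs) \
      * (np.pi / M) * (2 * np.pi / M) / (4 * np.pi)
assert abs(avg + 13 / 18) < 5e-3
```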
As mentioned above, the collective bound $F=1-1/N$ is not attained by tomography. At this point, the question arises whether classical communication is necessary to attain this bound. We next show that this is not the case.
\subsubsection{Random scheme}
We now consider the so-called random scheme, i.e., a scheme in which measurements are performed along random directions chosen from an isotropic distribution. In contrast to tomography, which only takes into account three fixed directions, this scheme explores the Bloch sphere isotropically if a large number of copies is available. Therefore one can expect that it will perform much better.
This approach is equivalent to performing a covariant (continuous) POVM on each one of the copies separately. Here, we instead regard it as von Neumann measurements and a classical ancilla, e.g., a ``roulette", that tells us along which direction we measure. From this point of view the outcome parameters are given by $\chi=(\xi,(u\equiv\cos\vartheta), \varphi)$, where $\vartheta$ and $\varphi$ are the polar and azimuthal angles of the direction $\vec{m}(u,\varphi)$ of the measurement, and $\xi=\pm 1$ is the corresponding outcome.
Let us compute the Fisher information for a specific state $\eta=((v\equiv\cos\theta),\phi)$. Since this strategy is isotropic, the pointwise fidelity $F(\eta)$ is independent of $\eta$, and we conveniently choose $\eta=((v=0),0)=\underline{0}$. By the same argument, no average over $\eta$ will be needed: $F=F(\eta)$. The probability is given by
\begin{equation}
p_{\eta}(\chi)=
\frac{1+\xi~\vec n\cdot\vec m (u,\varphi)}{2} ,
\end{equation}
and the Fisher information reads
\begin{equation}
\label{eq:fisherinfo-cont}
\mathrm{I}_{ij}(\eta)=\sum_{\xi=\pm 1}\int \frac{du\, d\varphi}{4\pi}
p_{\eta}(\chi) \frac{\partial
\ln p_{\eta}(\chi)}{\partial \eta_i}
\frac{\partial \ln p_{\eta}(\chi)}{\partial
\eta_j}. \end{equation}
The diagonal elements read
\begin{eqnarray}
\mathrm{I}_{vv}(\underline{0})\!\!
&=&\!\!\frac{1}{8\pi}\!\sum_{\xi=\pm1}\!
\int\! \frac{u^2du\,d\varphi}{1+\xi\sqrt{1-u^2}\cos\varphi}=\frac{1}{2},\\
\mathrm{I}_{\phi\phi}(\underline{0})
\!\!&=&\!\!\frac{1}{8\pi}\!\sum_{\xi=\pm1}\!
\int \!\frac{(1-u^2)\sin^2\varphi\,du\, d\varphi}{1+\xi\sqrt{1-u^2}\cos\varphi}
=\frac{1}{2}. \end{eqnarray}
As for the off-diagonal elements, a straightforward calculation gives
\begin{equation}\label{eq:offdiagonal}
\mathrm{I}_{v\phi}(\underline{0})=\mathrm{I}_{\phi v}(\underline{0})=0,
\end{equation}
as one could expect, since gaining information on $v$ does not provide information on $\phi$ and vice versa. The Fisher information matrix thus reads
\begin{equation}
\mathrm{I}(\underline{0})={1\over2} \pmatrix{1&0 \cr
0&1}. \end{equation}
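The integrals above can also be evaluated numerically. In the sketch below (ours), the sum over $\xi$ is performed analytically, $\sum_\xi 1/(1+\xi a)=2/(1-a^2)$ with $a=\sqrt{1-u^2}\cos\varphi$, and the remaining integrals are done on a midpoint grid:

```python
import numpy as np

M = 1000
us = -1 + (np.arange(M) + 0.5) * 2 / M        # u = cos(vartheta), midpoints
ps = (np.arange(M) + 0.5) * 2 * np.pi / M     # varphi, midpoints
U, P = np.meshgrid(us, ps, indexing="ij")
w = (2 / M) * (2 * np.pi / M) / (4 * np.pi)   # weight of du dvarphi / (4 pi)
den = 1 - (1 - U**2) * np.cos(P)**2           # 1 - a^2, after the xi sum

I_vv = np.sum(U**2 / den) * w
I_pp = np.sum((1 - U**2) * np.sin(P)**2 / den) * w
I_vp = np.sum(U * np.sqrt(1 - U**2) * np.sin(P) / den) * w

assert abs(I_vv - 0.5) < 5e-3   # I_vv(0) = 1/2
assert abs(I_pp - 0.5) < 5e-3   # I_phiphi(0) = 1/2
assert abs(I_vp) < 1e-8         # off-diagonal elements vanish by symmetry
```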
The Hessian of the fidelity is
\begin{eqnarray}
\mathsf{H}_{ij}(\underline{0})=\left.\frac{\partial^2 f}
{\partial \hat{\eta}_i\partial
\hat{\eta}_j}\right|_{\hat\eta=\underline{0}}=-\frac{\delta_{ij}}{2}. \end{eqnarray}
Finally, using~\eqref{eq:ordering} we obtain
\begin{equation}\label{eq:fbound-random} F^{\mathrm{OG}}= 1-\frac{1}{N}+\cdots . \end{equation}
We conclude that asymptotically classical communication is not required to saturate the collective bound: a measurement scheme based on a set of random directions does the job.
\section{Conclusions}\label{conclusions} We have presented a self-contained and detailed study of several estimation schemes when a number $N$ of identical copies of a qubit state is available. We have used the fidelity as a figure of merit and presented a general framework which enables us to treat collective as well as local measurements on the same footing.
We have considered two interesting situations: that of a completely unknown qubit state (3D case), and that of a qubit lying on the equator of the Bloch sphere (2D case). We have obtained the optimal measurements and maximum fidelities for the most general collective strategies. These results, although well known, were scattered in the literature and rederived several times. Here we have obtained them within a direct and unified framework. The solution in the 2D case is strikingly simple, and can be extended to the case of optimal covariant phase estimation.
These collective schemes yield the ultimate fidelity bounds that can be achieved by any scheme, thus setting a natural scale of what is a good or a bad estimation. However, they require a joint measurement on the $N$ copies, which is usually very difficult, if not impossible, to be implemented in practice. The main part of this paper has therefore focused on measurements that can be implemented in a laboratory with nowadays technology: local von Neumann measurements.
In the 2D case we have shown that, quite surprisingly, the most basic tomographic scheme, i.e., measurements along two fixed orthogonal directions with the adequate data processing (the~OG), already gives a fidelity that is asymptotically equal to the collective bound. We have obtained this limit using the Cram{\'e}r-Rao bound, which is particularly simple to compute in this situation. This result is in agreement with the direct (and lengthier) computation presented in~\cite{bbm-local}.
For 3D states, tomography, i.e., measurements along three fixed orthogonal directions, fails to give the asymptotic collective bound, even with the best data processing. The main reason for this failure is that the Bloch sphere is not explored thoroughly. We have considered an extension that is asymptotically isotropic: a series of von Neumann measurements along random directions. We have proved that this scheme, which does not make use of classical communication, does saturate the collective bound. Hence, we conclude that in the large $N$ limit, an estimation procedure based on local measurements without classical communication performs as well as the most efficient and sophisticated collective schemes.
We have also discussed local schemes with classical communication, i.e, schemes in which the measurements are devised in such a way that they take into account previous outcomes. We have studied in detail the one step adaptive scheme of Gill and Massar~\cite{gill-massar}, which has very interesting features: only two measurement orientations are required, adaptivity is only used once, and the estimation is made by a CLG, which can be read off from the frequency of outcomes. The economy of resources in this scheme may raise doubts about its efficiency. In Appendix~\ref{osa} we give a simple proof that for large $N$ it indeed attains the collective bounds.
We have also studied strategies that make a more intensive use of classical communication. In the greedy scheme, optimization is performed at each measurement step \cite{fkf,hannemann}. This scheme is the best approach one can take if the actual number of available copies is not known. We have given a geometrical condition for sequentially finding the optimal measurements and have proved that they have to be of von Neumann type (see App.~\ref{greedy}), i.e., no general local POVM's will perform better in this context. We have illustrated the performance of the method with numerical simulations and have shown that the behaviour of the optimal collective scheme is reached for very low values of $N$. This occurs for $N$ as low as $N=20$ in 2D and slightly above, $N=45$, in 3D.
In the most general scheme we see that up to $N=3$ ($N=2$ in 2D) there is no need for classical communication: the optimal measurements correspond to a set of mutually unbiased observables. For larger $N$, the knowledge of the actual value of $N$ provides extra information that translates into an increase of the fidelity. From the practical point of view, however, this difference is negligible already at the level of a few copies ($N\gtrsim 6$).
Our approach may be extended to other situations. For instance, the problem of estimating (qubit) mixed states, which is much more involved, can be tackled along the lines described here~\cite{bbm-mixed}. It would also be interesting to consider qudits and check whether a set of mutually unbiased observables provides the optimal local estimation scheme when the number of copies coincides with the number of independent variables that parametrize the qudit state.
\section*{Acknowledgments} It is a pleasure to thank M.~Baig for his collaboration at early stages of this work. We also thank R.~Gill and M.~Ballester for useful discussions. We acknowledge financial support from Spa\-nish Ministry of Science and Technology project BFM2002-02588, CIRIT project SGR-00185, and QUPRODIS working group EEC contract IST-2001-38877.
\appendix \section{}\label{osa}
The simplest LOCC approach is exemplified by the ``one step adaptive" scheme~\cite{gill-massar}. Measurements are performed along just two different directions, and the CLG is used~\footnote{Although one can consider more sophisticated estimators, such as MLE or OG, they will not improve significantly the estimation.}. The scheme has two stages and classical communication is used only once, in going from the first stage to the second. This is, therefore, a very economical scheme from the practical and theoretical point of view. We here review the method and give a straightforward and comprehensive proof that it saturates the collective bound for large $N$. Given the economy of the scheme, this is not an obvious result at all. We focus only on the 3D case, as the simpler 2D case can be worked out along the same lines.
\noindent \textit{First stage}: One performs $N_0=N^a$ ($0<a<1$) measurements with a sensible estimator, in the sense of Sec.~\ref{asymptotic}, and obtains an estimation $\vec M_0$ with a fidelity $F_0$:
\begin{equation}
\label{fid0}
F_0=\sum_{\chi_0}\int
dn \frac{1+\vec n\cdot\vec M_0(\chi_0)}{2} p_n(\chi_0), \end{equation}
where $\chi_0$ stands for the list of outcomes obtained in this first stage.
\noindent \textit{Second Stage}: At this point we use the CLG on the remaining $2{\mathscr N} \equiv\bar{N}=N-N_0$ copies by measuring along \emph{two} perpendicular directions, $\vec u$ and $\vec v$, on the plane orthogonal to $\vec M_0$. In this basis the final guess can be written as
\begin{equation}
\label{eq:ansatz-osa}
\vec M=\vec M_0(\chi_0)\,\cos\omega+
(\vec u \cos\tau+\vec v \sin\tau) \sin\omega. \end{equation}
This parametrization ensures that $\vec M$ is a unit vector. The angles $\omega$ and $\tau$ depend on the outcomes of this second stage, which are the frequencies $\alpha_i {\mathscr N}$, $(1-\alpha_i){\mathscr N}$, ($i=u,v$). The probabilities are given by $p_n(\alpha)$ in \eqref{probability-local}, with $n_u=\vec{n}\cdot\vec{u}$ and $n_v=\vec{n}\cdot\vec{v}$. As argued above, we measure on the plane orthogonal to $\vec{M}_0$ because the two outcomes of each measurement have roughly the same probability, $\alpha_i \approx 1/2$, and they are most informative. It is convenient to define the two dimensional vector $\vec{r}$, with components:
\begin{equation}\label{r-vector}
r_i \equiv 2 \alpha_i -1, \qquad i=u,v, \end{equation}
which, on average, is close to $\vec{0}$. This vector gives an estimation of the projection of the signal Bloch vector $\vec{n}$ on the measurement plane ($uv$ plane). Hence, $\omega$ is expected to be small ($\vec{M}_0 \approx \vec M $) and we make the ansatz
\begin{equation}
\label{omega}
\omega=\lambda\sqrt{r_u^2+r_v^2}, \qquad
\tan\tau=\frac{r_v}{r_u}, \end{equation}
where the positive parameter $\lambda$ will be determined later.
The final fidelity for a signal state $\vec{n}$ and outcomes $(\chi_0,\vec{r})$ is
\begin{equation} f_n(\chi_0,\vec{r})=\frac{1+\vec n\cdot\vec M(\chi_0,\vec{r})}{2}, \end{equation}
and the average fidelity $F$ reads
\begin{equation}
\label{fidelity}
F=\sum_{\chi_0,\vec{r}}\int dn\frac{1+\vec n\cdot\vec
M(\chi_0,\vec{r})}{2}p_n(\chi_0)p_n(\vec{r}|\chi_0). \end{equation}
Notice that the probability of obtaining the outcome
$\vec r$, namely, $p_n(\vec r|\chi_0)$ [$\equiv p_n(\alpha)$ in~\eqref{probability-local} with $i=u,v$], is conditioned on $\chi_0$ through the dependence of the second stage measurements on~$\vec{M}_0(\chi_0)$.
Since we will compute different averages over $\chi_0$, $\vec{r}=(r_u,r_v)$ and $\vec{n}$, it is convenient to introduce the following notation:
\begin{eqnarray}
\left\langle f \right\rangle_0&=&
\sum_{\chi_0}~f_n(\chi_0,\vec{r})~p_n(\chi_0),\\
\left\langle f \right\rangle_r&=&
\sum_{\vec{r}}~f_n(\chi_0,\vec{r})~p_n(\vec{r}|\chi_0), \\
\left\langle f \right\rangle_n&=&
\int dn~f_n(\chi_0,\vec{r}), \end{eqnarray}
and similarly for averages of other functions of $\chi_0$, $\vec r$ and~$\vec n$. We will denote composite averaging by simply combining subscripts (i.e. $\left\langle\left\langle F \right\rangle_r\right\rangle_0 \equiv\left\langle F\right\rangle_{r,0}$). Therefore, we write
\begin{equation} F\equiv\langle f \rangle_{r,0,n}\equiv\langle f\rangle. \end{equation}
Since $F=(1 +\Delta)/2$, we have
\begin{equation}\label{eq:f-average-osa}
\Delta=\langle\vec{n}\cdot \vec M \rangle. \end{equation}
In the expansions that we perform below, we keep only the terms that contribute to the fidelity up to order $1/N$. Recalling that $\omega$ is expected to be small, it follows that
\begin{eqnarray}
\cos\omega&=&1-\frac{\lambda^2}{2}\left[r_u^2+r_v^2\right], \\
\sin\omega\cos\tau&=&\lambda r_u ,\\
\sin\omega\sin\tau&=&\lambda r_v , \end{eqnarray}
to leading order. Therefore, the expectation value in \eqref{eq:f-average-osa} can be written as
\begin{eqnarray}
\Delta&=&\left\langle\left
(1-\frac{\lambda^2}{2}\left\langle r_u^2+r_v^2\right
\rangle_r\right)\vec n\cdot\vec M_0\right\rangle_{0,n}\\
&+&\lambda \Big\langle \langle
r_u\rangle_r~n_u+\langle
r_v\rangle_r~n_v \Big\rangle_{0,n}.
\label{expval2} \end{eqnarray}
Since $r_u$, $r_v$ (or equivalently $\alpha_u$, $\alpha_v$) are binomially distributed, one readily sees that
\begin{eqnarray}
\langle r_i\rangle_r&=&n_i, \\
\langle r_i^2\rangle_r&=& n_i^2+\frac{1-n_i^2}{{\mathscr N}}. \end{eqnarray}
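These moments follow directly from the binomial distribution of the counts $\alpha_i{\mathscr N}$. A direct check (our own, by exact enumeration of the binomial law) is:

```python
from math import comb

# alpha_i is the observed frequency in Nscr trials with success probability
# (1 + n_i)/2, and r_i = 2 alpha_i - 1.  Then <r_i> = n_i and
# <r_i^2> = n_i^2 + (1 - n_i^2)/Nscr, as stated in the text.
def moments(n_i, Nscr):
    p = (1 + n_i) / 2
    m1 = m2 = 0.0
    for k in range(Nscr + 1):
        pk = comb(Nscr, k) * p**k * (1 - p)**(Nscr - k)  # binomial pmf
        r = 2 * k / Nscr - 1
        m1 += pk * r
        m2 += pk * r * r
    return m1, m2

for n_i, Nscr in [(0.3, 25), (-0.6, 40)]:
    m1, m2 = moments(n_i, Nscr)
    assert abs(m1 - n_i) < 1e-10
    assert abs(m2 - (n_i**2 + (1 - n_i**2) / Nscr)) < 1e-10
```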
We further recall that $\vec n$ is a unit vector and that $\{\vec M_0, \vec u,\vec v\}$ is an orthonormal basis, hence $n_u^2+n_v^2=1-(\vec n\cdot\vec M_0)^2$, and \eqref{expval2} can be cast as
\begin{eqnarray}
&& \kern-3.5em \Delta=\lambda+\left[1-{\lambda^2\over2}\left(1+\frac{1}{{\mathscr N}}\right)\right]
\left\langle\vec n\cdot\vec M_0\right\rangle_{0,n}
\nonumber \\
&&\kern-3.5em -\lambda\left\langle(\vec n\cdot\vec
M_0)^2\right\rangle_{0,n}+ \frac{\lambda^2}{2}\left(1-\frac{1}{{\mathscr N}}\right)
\left\langle(\vec n\cdot\vec M_0)^3
\right\rangle_{0,n}\kern-0.5em. \label{febc was here} \end{eqnarray}
To compute the moments $ \langle(\vec n\cdot\vec M_0)^{q}
\rangle_{0,n}$, we consider the angle $\delta$ between $\vec{n}$ and
$\vec{M}_0$, which is also expected to be small. We have
\begin{equation}
2 F_0-1= \left\langle\vec n\cdot\vec M_0\right\rangle_{0,n}=
\left\langle\cos\delta\right\rangle_{0,n}
\simeq 1-\frac{\left\langle\delta^2\right\rangle_{0,n}}{2}, \end{equation}
where we have used \eqref{fid0}. Therefore
\begin{eqnarray}
\left\langle(\vec n\cdot\vec M_0)^q\right\rangle_{0,n}&=&
\left\langle\cos^q\delta\right\rangle_{0,n}
\simeq1-\frac{q}{2}\langle\delta^2\rangle_{0,n}
\nonumber \\
&=&1-2q(1-F_0). \end{eqnarray}
Now we plug this result back into (\ref{febc was here}) to obtain
\begin{equation}
F=\langle f\rangle=1-(1-\lambda)^2(1-F_0)-\frac{1-4(1-F_0)}{2{\mathscr N}}\lambda^2. \end{equation}
Since the term $(1-\lambda)^2(1-F_0)$ is nonnegative and dominates the $1/{\mathscr N}$ correction for large $N$, the maximum fidelity is obtained, to leading order, with the choice $\lambda=1$, and we are left with
\begin{equation}
F=1-\frac{1-4(1-F_0)}{2{\mathscr N}}. \end{equation}
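A quick numerical scan (ours) over $\lambda$ in the fidelity expression $F(\lambda)$ obtained above confirms that the optimum approaches $\lambda=1$ when $1-F_0\gg 1/{\mathscr N}$, which is the asymptotically relevant regime since $1-F_0\sim N^{-a}$ while $1/{\mathscr N}\sim 1/N$:

```python
import numpy as np

# F(lambda) = 1 - (1-lambda)^2 (1-F0) - lambda^2 [1 - 4(1-F0)] / (2 Nscr);
# eps0 = 1 - F0.  We locate the maximizing lambda on a fine grid.
def best_lambda(eps0, Nscr):
    lam = np.linspace(0, 1.5, 150001)
    F = 1 - (1 - lam)**2 * eps0 - lam**2 * (1 - 4 * eps0) / (2 * Nscr)
    return lam[np.argmax(F)]

# when eps0 >> 1/Nscr the optimal lambda is 1 up to higher-order corrections
assert abs(best_lambda(1e-2, 10**5) - 1.0) < 1e-3
assert abs(best_lambda(1e-3, 10**7) - 1.0) < 1e-3
```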
Since the first estimation is asymptotically unbiased, \mbox{$1-F_0$} vanishes for large $N_0$ (i.e., for large $N$) and
\begin{eqnarray}
F&\simeq&1-\frac{1}{2{\mathscr N}}. \end{eqnarray}
Recalling that ${\mathscr N}=(N-N^a)/2$, we finally have
\begin{equation}
F=1-\frac{1}{N}+\cdots. \end{equation}
This concludes the proof.
\section{}\label{greedy} \newcommand{\mathrm{E}}{\mathrm{E}}
In this appendix we prove that in the greedy scheme the optimal individual measurements on each copy are of von Neumann type. We sketch the proof for 2D states. The 3D case can be worked out along the same lines.
The history of outcomes will be denoted, as usual, by~$\chi$. Notice that here we consider general local measurements (local POVMs) with $R$ outcomes, where $R$ is possibly larger than two. Therefore $\chi$ is a $N$-digit integer number in base $R$: $\chi=i_N i_{N-1}\cdots i_1$ ($i_k=0,1, \ldots , R-1$). As in Sec.~\ref{adaptive} we use the notation $\chi_k=i_k i_{k-1}\cdots i_1$. A measurement on the $k$-th copy is defined by a set of non-negative rank-one operators
$\{O(\chi_k)\}_{i_k=0}^{R-1}=\{O(i_k\chi_{k-1})\,|\,i_k=0,1,\ldots,R-1\}$, where
\begin{equation}\label{greedy-povm}
O(\chi_k)=c(\chi_k) [1+\vec m(\chi_k)\cdot\vec \sigma]. \end{equation}
The non-negative constants $c(\chi_k)$ and the vectors $\vec{m}(\chi_k)$ are subject to the constraints
\begin{eqnarray}
\label{eq:const1}
\sum_{i_k=0}^{R-1} c(\chi_k)&=&1, \\
\label{eq:const2}
\sum_{i_k=0}^{R-1} c(\chi_k)~\vec m(\chi_k)&=&0 ,\\
\label{eq:const3}
|\vec m(\chi_k) |&=&1, \end{eqnarray}
which ensure that $O(\chi_k)\ge0$ and $\sum_{i_k}O(\chi_k)=\openone$. Note that we allow $c(\chi_k)$ to be zero, thus taking into account the possibility that each local POVM may have a different number of outcomes without letting $R$ depend on~$k$.
Assume we have measured all but the last copy and we wish to
optimise the last measurement. Recall from Sec.~\ref{introduction} that the fidelity can be written as $F=(1+\Delta)/2$, where
\begin{equation}
\Delta=\sum_\chi|\vec V(\chi)|, \quad \vec V(\chi)=\int dn~\vec n~p_n(\chi). \end{equation}
To simplify the notation, let us define $r\equiv i_N$, $\vec{m}_r=\vec{m}(r\chi_{N-1})$, and $c_r=c(r\chi_{N-1})$. Then,
\begin{equation}
p_n(\chi)=
p_n(\chi_{N-1})
\left[c_r( 1+ \vec n\cdot\vec m_{r})
\right] \end{equation}
and
\begin{equation}
\Delta =
\sum_{\chi_{N-1}}\sum_{r}|\vec V(r\chi_{N-1})|=
\sum_{\chi_{N-1}}d(\chi_{N-1}) ,
\label{eq:greedyfid} \end{equation}
where we have defined $d(\chi_{N-1})$ as
\begin{equation}\label{eq:greedy-dx}
d(\chi_{N-1})\equiv\sum_{r}c_{r}\left|\int
dn~\vec n~p_n(\chi_{N-1})\left(1+\vec n\cdot\vec m_{r}\right)\right|. \end{equation}
We further write $\vec V\equiv\vec V(\chi_{N-1})$ and define the symmetric positive matrix
\begin{equation}
\mathsf{A}_{ij}\equiv \mathsf{A}_{ij}(\chi_{N-1})\equiv \int
dn~n_in_j~p_n(\chi_{N-1}). \end{equation}
Eq.~\eqref{eq:greedy-dx} becomes
\begin{equation} \label{eq-greedy-dx-2}
d=\sum_r c_r|\vec V+\mathsf A \vec m_r| \end{equation}
(Hereafter the dependence on $\chi_{N-1}$ will be implicitly understood to simplify the notation.)
Our task is to maximize~\eqref{eq-greedy-dx-2}. Introducing the Lagrange multipliers $\lambda$, $\vec{\gamma}$, and $\omega_r$, the function we need to maximize is actually
\begin{equation}\label{eq:L}
L=d-\lambda\,\Lambda -\vec{\gamma} \cdot\vec\Gamma -\sum_r
\omega_r \Omega_r , \end{equation}
where the constraints
\begin{eqnarray}
\Lambda &=& \sum_r c_r -1, \\
\vec{\Gamma} &=& \sum_r c_r \vec{m}_r,\\
\Omega_r &=& \frac{\vec{m}_r^2 -1}{2}, \end{eqnarray}
can be read off from \eqref{eq:const1}, \eqref{eq:const2} and \eqref{eq:const3}. The factor of $1/2$ in the last expression is introduced for later convenience. Variations with respect to $c_r$ yield
\begin{eqnarray}\label{eq:max-L}
\frac{\delta L}{\delta c_r}=|\vec V+\mathsf A\vec m_r|-\vec \gamma\cdot\vec
m_r-\lambda=0. \end{eqnarray}
Notice that the points $\vec{m}_r$ that satisfy this equation define an ellipse, $\mathcal E$, with focus at $-\mathsf A^{-1}\vec{V}$. Notice also that the parameter $\lambda$ at the maximum is the value of $d$ in \eqref{eq-greedy-dx-2} (just multiply \eqref{eq:max-L} by $c_r$ and sum over $r$ taking into account the constraints~$\Lambda=0$ and~$\vec{\Gamma}=0$).
Finally, consider the variations of $\vec m_r$ in \eqref{eq:L}. We obtain
\begin{equation}\label{eq:L-mr}
c_r \left(\mathsf{A} \frac{\vec V + \mathsf{A} \vec{m}_r}{|\vec V +
\mathsf{A} \vec{m}_r|} -\vec{\gamma}\right)=\omega_r \vec{m}_r, \end{equation}
which means that the vector inside the parentheses is proportional to $\vec{m}_r$. Note that the condition $\Omega_r=0$ defines a unit circle, and the vector orthogonal to this curve is $\vec{m}_r$. So we only need to prove that the vector orthogonal to the ellipse $\mathcal E$ defined in \eqref{eq:max-L} at the point $\vec{m}_r$ has precisely this direction ---this follows straightforwardly by taking variations with respect to $\vec{m}_r$ in \eqref{eq:max-L}. Therefore, the solution is given by the tangency points of the ellipse $\cal E$ and the circle $\Omega_r=0$. There are only two such points, they lie in opposite directions, and all constraints and maximization equations are satisfied with $c_{1,2}=1/2$. This proves that the optimal measurements in the greedy scheme are indeed von Neumann measurements~\footnote{For the 3D case we just have to replace ellipses by ellipsoids and circles by spheres.}. Notice that this is a stronger statement than it looks: local measurements with a larger number of outcomes will perform worse.
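This geometric argument can be illustrated numerically in 2D. In the sketch below (our own; the matrix $\mathsf A$ and vector $\vec V$ are arbitrary example values, not taken from the paper), the best von Neumann pair (two antipodal unit vectors with $c_{1,2}=1/2$) is compared with a symmetric three-outcome POVM (a ``trine'', $c_r=1/3$), which also satisfies all the constraints:

```python
import numpy as np

A = np.diag([0.8, 0.5])      # example positive matrix A(chi_{N-1})
V = np.array([0.3, 0.1])     # example vector V(chi_{N-1})

def m(a):
    # unit vector at angle a
    return np.array([np.cos(a), np.sin(a)])

angles = np.linspace(0, np.pi, 720, endpoint=False)

# von Neumann pair +/- m with c = 1/2: d = (|V + A m| + |V - A m|) / 2
d_vn = max(0.5 * (np.linalg.norm(V + A @ m(a)) + np.linalg.norm(V - A @ m(a)))
           for a in angles)

# symmetric trine at angles a, a + 2pi/3, a + 4pi/3 with c_r = 1/3
d_trine = max(np.mean([np.linalg.norm(V + A @ m(a + 2 * np.pi * r / 3))
                       for r in range(3)])
              for a in angles)

# the von Neumann pair achieves a larger value of d, as the proof predicts
assert d_vn > d_trine - 1e-9
assert d_vn > 0.8
```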
\newcommand{\PRL}[3]{Phys.~Rev. Lett.~\textbf{#1}, #2 (#3)} \newcommand{\PRA}[3]{Phys.~Rev. A~\textbf{#1}, #2 (#3)} \newcommand{\JPA}[3]{J.~Phys. A~\textbf{#1}, #2 (#3)} \newcommand{\PLA}[3]{Phys.~Lett. A~\textbf{#1}, #2 (#3)}
\end{document} |
\begin{document}
\title{An Approximation Algorithm for Two-Edge-Connected Subgraph Problem via Triangle-free Two-Edge-Cover\thanks{
This work was partially supported by the joint project of Kyoto University and Toyota Motor Corporation,
titled ``Advanced Mathematical Science for Mobility Society'', and
by JSPS KAKENHI Grant Numbers JP20K11692 and JP22H05001.
}}
\begin{abstract}
The $2$-Edge-Connected Spanning Subgraph problem (2-ECSS) is one of the most fundamental and well-studied problems in the context of network design.
In the problem, we are given an undirected graph $G$, and the objective is to find
a $2$-edge-connected spanning subgraph $H$ of $G$ with the minimum number of edges.
For this problem, a lot of approximation algorithms have been proposed in the literature.
In particular, very recently, Garg, Grandoni, and Ameli gave an approximation algorithm for 2-ECSS with factor $1.326$,
which was the best known approximation ratio.
In this paper, we give a $(1.3+\varepsilon)$-approximation algorithm for 2-ECSS, where $\varepsilon$ is an arbitrary positive fixed constant,
which improves the previously known best approximation ratio.
In our algorithm, we compute a minimum triangle-free $2$-edge-cover in $G$
with the aid of the algorithm for finding a maximum triangle-free $2$-matching given by Hartvigsen.
Then, with the obtained triangle-free $2$-edge-cover, we apply the arguments by Garg, Grandoni, and Ameli. \end{abstract}
\section{Introduction}
In the field of survivable network design, a basic problem is to construct a network with minimum cost that satisfies a certain connectivity constraint.
A seminal result by Jain~\cite{Jain} provides a $2$-approximation algorithm for a wide class of survivable network design problems.
For specific problems among them, a lot of better approximation algorithms have been investigated in the literature.
In this paper, we study the $2$-Edge-Connected Spanning Subgraph problem (2-ECSS), which is one of the most fundamental and well-studied problems in this context.
In 2-ECSS, we are given an undirected graph $G=(V, E)$, and the objective is to find
a $2$-edge-connected spanning subgraph $H$ of $G$ with the minimum number of edges.
It was shown in~\cite{CL,CristG} that 2-ECSS does not admit a PTAS unless ${\rm P}={\rm NP}$.
Khuller and Vishkin~\cite{KV} gave a $3/2$-approximation algorithm for this problem, which was the starting point of the study of approximation algorithms for 2-ECSS.
Cheriyan, Seb\H{o}, and Szigeti~\cite{CSS} improved this ratio to $17/12$, and later
Hunkenschr\"{o}der, Vempala, and Vetta~\cite{VV,HVV} gave a $4/3$-approximation algorithm.
By a completely different approach, Seb\H{o} and Vygen~\cite{SV} achieved the same approximation ratio.
Very recently, Garg, Grandoni, and Ameli~\cite{GGA} improved this ratio to $1.326$
by introducing powerful reduction steps and developing the techniques in~\cite{HVV}.
The contribution of this paper is to present a $(1.3+\varepsilon)$-approximation algorithm for 2-ECSS for any $\varepsilon > 0$,
which improves the previously best approximation ratio.
\begin{theorem}
\label{thm:main}
For any constant $\varepsilon >0$, there is a deterministic polynomial-time $(1.3+\varepsilon)$-approximation algorithm for 2-ECSS.
\end{theorem}
Our algorithm and its analysis are heavily dependent on the well-developed arguments by Garg, Grandoni, and Ameli~\cite{GGA}.
In our algorithm, we first apply the reduction steps given in~\cite{GGA}.
Then, instead of a minimum $2$-edge-cover, we compute a minimum \emph{triangle-free} $2$-edge-cover in the graph,
which is the key ingredient in our algorithm.
We show that this can be done in polynomial time with the aid of the algorithm for finding a maximum triangle-free $2$-matching given by Hartvigsen~\cite{HartD} (see Theorem~\ref{thm:HartD}).
Finally, we convert the obtained triangle-free $2$-edge-cover into a spanning $2$-edge-connected subgraph
by using the arguments in~\cite{GGA}.
Our main technical contribution is to point out the utility of Hartvigsen's algorithm~\cite{HartD} in the arguments by Garg, Grandoni, and Ameli~\cite{GGA}.
It should be noted that Hartvigsen's algorithm has not received much attention in this context.
\paragraph{Related Work}
A natural extension of 2-ECSS is the $k$-Edge-Connected Spanning Subgraph problem ($k$-ECSS),
which is to find a $k$-edge-connected spanning subgraph of the input graph with the minimum number of edges.
For $k$-ECSS, several approximation algorithms have been proposed, in which approximation factors depend on $k$~\cite{CT2000,GGTW2009,GG2012}.
We can also consider the weighted variant of 2-ECSS, in which the objective is to find
a $2$-edge-connected spanning subgraph with the minimum total weight in a given edge-weighted graph.
The result of Jain~\cite{Jain} leads to a $2$-approximation algorithm for the weighted 2-ECSS, and it is still the best known approximation ratio.
For the case when all the edge weights are $0$ or $1$, which is called the \emph{forest augmentation problem},
Grandoni, Ameli, and Traub~\cite{GAT} recently gave a $1.9973$-approximation algorithm.
See references in~\cite{GGA,GAT} for more related work on survivable network design problems.
It is well-known that a $2$-matching of maximum size can be found in polynomial time by using a matching algorithm; see e.g.,~\cite[Section 30]{lexbook}.
As a variant of this problem, the problem of finding a maximum $2$-matching that contains no cycle of length at most $k$,
which is called the \emph{$C_{\le k}$-free $2$-matching problem}, has been actively studied.
Hartvigsen~\cite{HartD} gave a polynomial-time algorithm for the $C_{\le 3}$-free $2$-matching problem (also called the \emph{triangle-free $2$-matching problem}), and
Papadimitriou showed the NP-hardness for $k \ge 5$ (see \cite{CP80}).
The polynomial solvability of the $C_{\le 4}$-free $2$-matching problem has been open for more than 40 years.
The edge weighted variant of the $C_{\le 3}$-free $2$-matching problem is also a big open problem in this area,
and some positive results are known for special cases~\cite{HL13,Kob10,PW21IPL,Kob22}.
See references in~\cite{Kob22} for more related work on the $C_{\le k}$-free $2$-matching problem.
\section{Preliminary}
Throughout the paper, we only consider simple undirected graphs, i.e., every graph has neither self-loops nor parallel edges. \footnote{It is shown in~\cite{HVV} that this assumption is not essential when we consider $2$-ECSS.} A graph $G=(V, E)$ is said to be \emph{$2$-edge-connected} if $G \setminus \{e\}$ is connected for any $e \in E$,
and it is called \emph{$2$-vertex-connected} if $G \setminus \{v\}$ is connected for any $v \in V$ and $|V| \ge 3$. For a subgraph $H$ of $G$, its vertex set and edge set are denoted by $V(H)$ and $E(H)$, respectively. A subgraph $H$ of $G=(V,E)$ is \emph{spanning} if $V(H)=V(G)$. In the $2$-Edge-Connected Spanning Subgraph problem ($2$-ECSS), we are given a graph $G=(V, E)$ and the objective is to find a $2$-edge-connected spanning subgraph $H$ of $G$ with the minimum number of edges (if one exists).
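For concreteness, $2$-edge-connectivity can be tested directly from the definition by deleting each edge in turn and checking connectivity. The helper code below is our own small sketch, not part of the paper:

```python
# Check 2-edge-connectivity of a small simple graph by the definition:
# G is 2-edge-connected iff G is connected and G \ {e} stays connected
# for every edge e.
def is_connected(vertices, edges):
    if not vertices:
        return True
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = set(), [next(iter(vertices))]
    while stack:  # iterative DFS
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(adj[u])
    return len(seen) == len(vertices)

def is_2_edge_connected(vertices, edges):
    return is_connected(vertices, edges) and all(
        is_connected(vertices, [f for f in edges if f != e]) for e in edges)

cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]   # C4 is 2-edge-connected
path = [(0, 1), (1, 2), (2, 3)]            # every edge of a path is a bridge
assert is_2_edge_connected({0, 1, 2, 3}, cycle)
assert not is_2_edge_connected({0, 1, 2, 3}, path)
```

An edge whose removal disconnects the graph in this test is exactly a bridge in the sense defined below.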
In this paper, a spanning subgraph $H$ is often identified with its edge set $E(H)$. Let $H$ be a spanning subgraph (or an edge set) of $G$. A connected component of $H$ which is 2-edge-connected is called a \emph{2EC component of $H$}. A 2EC component of $H$ is called an \emph{$i$-cycle 2EC component} if it is a cycle of length $i$. In particular, a $3$-cycle 2EC component is called a \emph{triangle 2EC component}. A maximal $2$-edge-connected subgraph $B$ of $H$ is called a \emph{block} of $H$
if $|V(B)| \ge 3$ and $B$ is not a 2EC component. An edge $e \in E(H)$ is called a \emph{bridge} of $H$ if $H \setminus \{e\}$ has more connected components than $H$. A block $B$ of $H$ is called a \emph{leaf block} if $H$ has exactly one bridge incident to $B$, and an \emph{inner block} otherwise.
Let $G=(V, E)$ be a graph. For an edge set $F \subseteq E$ and a vertex $v \in V$, let $d_F(v)$ denote the number of edges in $F$ that are incident to $v$. An edge set $F \subseteq E$ is called a \emph{$2$-matching} if $d_F(v) \le 2$ for any $v \in V$, and it is called a \emph{$2$-edge-cover} if $d_F(v) \ge 2$ for any $v \in V$. \footnote{Such edge sets are sometimes called \emph{simple} $2$-matchings and \emph{simple} $2$-edge-covers in the literature.}
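In code, these degree conditions read as follows (a small illustrative sketch of ours for concreteness):

```python
# F is a 2-matching if d_F(v) <= 2 for all v, and a 2-edge-cover if
# d_F(v) >= 2 for all v, where d_F(v) counts edges of F incident to v.
def degrees(vertices, F):
    d = {v: 0 for v in vertices}
    for u, v in F:
        d[u] += 1
        d[v] += 1
    return d

def is_2_matching(vertices, F):
    return all(k <= 2 for k in degrees(vertices, F).values())

def is_2_edge_cover(vertices, F):
    return all(k >= 2 for k in degrees(vertices, F).values())

V4 = {0, 1, 2, 3}
C4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
assert is_2_matching(V4, C4) and is_2_edge_cover(V4, C4)  # a cycle is both
assert is_2_matching(V4, [(0, 1)]) and not is_2_edge_cover(V4, [(0, 1)])
```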
\section{Algorithm in Previous Work} \label{sec:previous}
Since our algorithm is based on the well-developed $1.326$-approximation algorithm given by Garg, Grandoni, and Ameli~\cite{GGA}, we describe some of their results in this section.
\subsection{Reduction to Structured Graphs}
In the algorithm by Garg, Grandoni, and Ameli~\cite{GGA}, they first reduce the problem to the case when
the input graph satisfies some additional conditions; such a graph is called a $(5/4,\varepsilon)$-structured graph.
In what follows in this paper, let $\varepsilon > 0$ be a sufficiently small positive fixed constant, which will appear in the approximation factor.
In particular, we suppose that $0 < \varepsilon \le 1/24$, which is used in the argument in~\cite{GGA}.
We say that a graph $G=(V,E)$ is \emph{$(5/4,\varepsilon)$-structured} if it is $2$-vertex-connected, it contains at least ${2}/{\varepsilon}$ vertices, and
it does not contain the following structures:
\begin{itemize}
\item \textbf{($5/4$-contractible subgraph)} a $2$-edge-connected subgraph $C$ of $G$ such that every $2$-edge-connected spanning subgraph of $G$ contains at least $\frac{4}{5}|E(C)|$ edges with both endpoints in $V(C)$;
\item \textbf{(irrelevant edge)} an edge $uv\in E$ such that $G \setminus \{u,v\}$ is not connected;
\item \textbf{(non-isolating $2$-vertex-cut)} a vertex set $\{u,v\}\subseteq V$ of $G$ such that $G\setminus\{u,v\}$ either has at least three connected components or has exactly two connected components, both of which contain at least two vertices.
\end{itemize} The following lemma shows that it suffices to consider $(5/4,\varepsilon)$-structured graphs when we design approximation algorithms.
\begin{lemma}[\mbox{Garg, Grandoni, and Ameli~\cite[Lemma 2.2]{GGA}}] \label{lem:structured} For $\alpha \ge \frac{5}{4}$, if there exists a deterministic polynomial-time $\alpha$-approximation algorithm for 2-ECSS on $(5/4,\varepsilon)$-structured graphs, then there exists a deterministic polynomial-time $(\alpha+2\varepsilon)$-approximation algorithm for 2-ECSS. \end{lemma}
\subsection{Semi-Canonical Two-Edge-Cover}
A $2$-edge-cover $H$ of $G$ (which is identified with a spanning subgraph) is called \emph{semi-canonical} if it satisfies the following conditions.
\begin{enumerate}
\renewcommand{\labelenumi}{(\arabic{enumi})}
\renewcommand{\theenumi}{(\arabic{enumi})}
\item \label{canonical:2EC} Each 2EC component of $H$ is a cycle or contains at least $7$ edges.
\item \label{canonical:block} Each leaf block contains at least $6$ edges and each inner block contains at least $4$ edges.
\item \label{canonical:triangle} There is no pair of edge sets $F \subseteq H$ and $F' \subseteq E \setminus H$ such that $|F| = |F'| \le 3$, $(H \setminus F) \cup F'$ is a $2$-edge-cover with fewer connected components than $H$, and $F$ contains an edge in some triangle 2EC component of $H$.
\item \label{canonical:4cycle} There is no pair of edge sets $F \subseteq H$ and $F' \subseteq E \setminus H$ such that $|F| = |F'| = 2$, $(H \setminus F) \cup F'$ is a $2$-edge-cover with fewer connected components than $H$,
both edges in $F'$ connect two $4$-cycle 2EC components, say $C_1$ and $C_2$, and $F$ is contained in $C_1 \cup C_2$. In other words, by removing $2$ edges and adding $2$ edges,
we cannot merge two $4$-cycle 2EC components into a cycle of length $8$.
\end{enumerate}
\begin{lemma}[\mbox{Garg, Grandoni, and Ameli~\cite[Lemma 2.6]{GGA}}] \label{lem:fewtriangles}
Suppose we are given a semi-canonical $2$-edge-cover $H$ of a $(5/4,\varepsilon)$-structured graph $G$ with $b|H|$ bridges and $t|H|$ edges belonging to triangle 2EC components of $H$.
Then, in polynomial time, we can compute a $2$-edge-connected spanning subgraph $S$ of size at most $(\frac{13}{10}+\frac{1}{30}t-\frac{1}{20}b)|H|$. \end{lemma}
\begin{remark}
In the original statement of~\cite[Lemma 2.6]{GGA}, $H$ is assumed to satisfy a stronger condition than semi-canonical, called canonical.
A $2$-edge-cover $H$ is said to be \emph{canonical} if it satisfies \ref{canonical:2EC} and \ref{canonical:block}
in the definition of semi-canonical $2$-edge-covers, and
also the following condition:
there is no pair of edge sets $F \subseteq H$ and $F' \subseteq E \setminus H$ such that $|F| = |F'| \le 3$ and $(H \setminus F) \cup F'$ is a $2$-edge-cover with fewer connected components than $H$.
However, one can see that the condition ``canonical'' can be relaxed to ``semi-canonical'' by following the proof of~\cite[Lemma 2.6]{GGA}; see the proofs of Lemmas D.3, D.4, and D.11 in~\cite{GGA}. \end{remark}
\section{Algorithm via Triangle-Free Two-Edge-Cover}
The idea of our algorithm is quite simple: we construct a semi-canonical $2$-edge-cover $H$ with no triangle 2EC components and then apply Lemma~\ref{lem:fewtriangles}. We say that an edge set $F\subseteq E$ is \emph{triangle-free} if there are no triangle 2EC components in $F$. Note that a triangle-free edge set $F$ may contain a cycle of length three that is contained in a larger connected component. In order to construct a semi-canonical triangle-free $2$-edge-cover, we use a polynomial-time algorithm for finding a triangle-free $2$-matching given by Hartvigsen~\cite{HartD}.
\begin{theorem}[\mbox{Hartvigsen~\cite[Theorem 3.2 and Proposition 3.4]{HartD}}] \label{thm:HartD}
For a graph $G$, we can find a triangle-free $2$-matching in $G$ with maximum cardinality in polynomial time. \end{theorem}
In Section~\ref{sec:trianglefree2ec}, we give an algorithm for finding a minimum triangle-free $2$-edge-cover with the aid of Theorem~\ref{thm:HartD}. Then, we transform it into a semi-canonical triangle-free $2$-edge-cover in Section~\ref{sec:canonical}. Using the obtained $2$-edge-cover, we give a proof of Theorem~\ref{thm:main} in Section~\ref{sec:proofmain}.
\subsection{Minimum Triangle-Free Two-Edge-Cover} \label{sec:trianglefree2ec}
As with the relationship between $2$-matchings and $2$-edge-covers (see e.g.~\cite[Section 30.14]{lexbook}), triangle-free $2$-matchings and triangle-free $2$-edge-covers are closely related to each other, which can be stated as the following two lemmas.
\begin{lemma} \label{lem:upper}
Let $G=(V, E)$ be a connected graph such that the minimum degree is at least two and $|V| \ge 4$.
Given a triangle-free $2$-matching $M$ in $G$, in polynomial time,
we can compute a triangle-free $2$-edge-cover $C$ of $G$ with size at most $2|V|-|M|$. \end{lemma}
\begin{proof}
Starting with $F=M$, we perform the following update repeatedly while $F$ is not a $2$-edge-cover:
\begin{quote}
Choose a vertex $v\in V$ with $d_F(v) < 2$ and an edge $vw\in E \setminus F$ incident to $v$.
\begin{enumerate}
\renewcommand{\labelenumi}{(\roman{enumi})}
\renewcommand{\theenumi}{(\roman{enumi})}
\item \label{MtoC:1} If $F\cup \{vw\}$ is triangle-free, then add $vw$ to $F$.
\item \label{MtoC:2} Otherwise, $F\cup \{vw\}$ contains a triangle 2EC component with vertex set $\{u, v, w\}$ for some $u\in V$.
In this case, choose an edge $e$ connecting $\{u, v, w\}$ and $V \setminus \{u, v, w\}$, and add both $vw$ and $e$ to $F$.
\end{enumerate}
\end{quote}
If $F$ becomes a $2$-edge-cover, then the procedure terminates by returning $C = F$.
It is obvious that this procedure terminates in polynomial steps and returns a triangle-free $2$-edge-cover.
We now analyze the size of the output $C$.
For an edge set $F \subseteq E$, define $g(F) = \sum_{v \in V} \max \{2-d_F(v), 0\}$.
Then, in each iteration of the procedure, we observe the following:
in case \ref{MtoC:1}, one edge is added to $F$ and $g(F)$ decreases by at least one;
in case \ref{MtoC:2}, two edges are added to $F$ and $g(F)$ decreases by at least two, because $d_F(v) = d_F(w) = 1$ before the update.
With this observation, we see that $|C| - |M| \le g(M) - g(C) = \sum_{v\in V} (2-d_M(v))$,
where we note that $M$ is a $2$-matching and $C$ is a $2$-edge-cover. Therefore, it holds that
\begin{equation*}
|C|\le |M|+\sum_{v\in V} (2-d_M(v))=|M|+(2|V|-2|M|)=2|V|-|M|,
\end{equation*}
which completes the proof. \end{proof}
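The augmentation in the proof above can be sketched in a few lines of Python. This is an illustrative simplification: we add the chosen edge first and repair afterwards if a triangle 2EC component appears, which has the same effect as the case distinction (i)/(ii) in the proof. The example graph and starting $2$-matching are hypothetical.

```python
from collections import defaultdict

def deg(F, v):
    """Number of edges in F incident to v."""
    return sum(1 for e in F if v in e)

def components(F, V):
    """Connected components (as vertex sets) of the spanning subgraph (V, F)."""
    adj = defaultdict(set)
    for u, w in F:
        adj[u].add(w)
        adj[w].add(u)
    seen, comps = set(), []
    for s in V:
        if s in seen:
            continue
        comp, stack = set(), [s]
        while stack:
            x = stack.pop()
            if x not in comp:
                comp.add(x)
                stack.extend(adj[x])
        seen |= comp
        comps.append(comp)
    return comps

def triangle_component(F, V):
    """Vertex set of a triangle 2EC component of F, or None."""
    for comp in components(F, V):
        if len(comp) == 3 and sum(1 for u, w in F if u in comp and w in comp) == 3:
            return comp
    return None

def matching_to_cover(M, V, E):
    """Greedily augment a triangle-free 2-matching M to a triangle-free 2-edge-cover."""
    F = set(M)
    while True:
        low = [v for v in V if deg(F, v) < 2]
        if not low:
            return F
        v = low[0]
        vw = next(e for e in E if v in e and e not in F)
        F.add(vw)  # case (i), unless a triangle 2EC component appears:
        tri = triangle_component(F, V)
        if tri is not None:  # case (ii): also add an edge leaving the triangle
            F.add(next(e for e in E if len(set(e) & tri) == 1))

# Two triangles joined by a bridge; start from a Hamiltonian path (a 2-matching).
V = list(range(6))
E = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]
M = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
C = matching_to_cover(M, V, E)
print(len(C))  # 7 = 2|V| - |M|
```

On this instance the output cover attains the bound $2|V|-|M|$ of the lemma with equality.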
\begin{lemma} \label{lem:lower}
Given a triangle-free $2$-edge-cover $C$ in a graph $G = (V, E)$, in polynomial time,
we can compute a triangle-free $2$-matching $M$ of $G$ with size at least $2|V|-|C|$. \end{lemma}
\begin{proof}
Starting with $F=C$, we perform the following update repeatedly while $F$ is not a $2$-matching:
\begin{quote}
Choose a vertex $v\in V$ with $d_F(v) > 2$ and an edge $vw\in F$ incident to $v$.
\begin{enumerate}
\renewcommand{\labelenumi}{(\roman{enumi})}
\renewcommand{\theenumi}{(\roman{enumi})}
\item If $F\setminus \{vw\}$ is triangle-free, then remove $vw$ from $F$.
\item If $F\setminus \{vw\}$ contains a triangle 2EC component whose vertex set is $\{v, v_1, v_2\}$ for some $v_1, v_2 \in V$, then remove $v v_1$ from $F$.
\item \label{FtoM:3} If neither of the above holds, then $F\setminus \{vw\}$ contains a triangle 2EC component whose vertex set is $\{w, w_1, w_2\}$ for some $w_1, w_2 \in V$.
In this case, remove $ww_1$ from $F$.
\end{enumerate}
\end{quote}
If $F$ becomes a $2$-matching, then the procedure terminates by returning $M = F$.
It is obvious that this procedure terminates in polynomial steps and returns a triangle-free $2$-matching.
We now analyze the size of the output $M$.
For an edge set $F \subseteq E$, define $g(F) = \sum_{v \in V} \max \{d_F(v)-2, 0\}$.
Then, in each iteration of the procedure, we observe that one edge is removed from $F$ and $g(F)$ decreases by at least one,
where we note that $d_F(w) = 3$ before the update in case~\ref{FtoM:3}.
With this observation, we see that $|C| - |M| \le g(C) - g(M) = \sum_{v\in V} (d_C(v) -2)$,
where we note that $C$ is a $2$-edge-cover and $M$ is a $2$-matching. Therefore, it holds that
\begin{equation*}
|M|\ge |C| - \sum_{v\in V} (d_C(v)-2) = |C| - (2|C|-2|V|)=2|V|-|C|,
\end{equation*}
which completes the proof. \end{proof}
By using these lemmas and Theorem~\ref{thm:HartD}, we can compute a triangle-free $2$-edge-cover with minimum cardinality in polynomial time.
\begin{proposition} \label{prop:trifree2cover}
For a graph $G=(V,E)$, we can compute a triangle-free $2$-edge-cover of $G$ with minimum cardinality in polynomial time (if one exists). \end{proposition}
\begin{proof}
It suffices to consider the case when $G$ is a connected graph such that the minimum degree is at least two and $|V| \ge 4$.
Let $M$ be a triangle-free $2$-matching in $G$ with maximum cardinality, which can be computed in polynomial time by Theorem~\ref{thm:HartD}.
Then, by Lemma~\ref{lem:upper}, we can construct a triangle-free $2$-edge-cover $C$ of $G$ with size at most $2|V|-|M|$.
We now show that $G$ has no triangle-free $2$-edge-cover $C'$ with $|C'| < 2|V| - |M|$.
Assume to the contrary that there exists a triangle-free $2$-edge-cover $C'$ of size smaller than $2|V| - |M|$.
Then, by Lemma~\ref{lem:lower}, we can construct a triangle-free $2$-matching $M'$ of $G$ with size at least $2|V|-|C'|$.
Since $|M'| \ge 2|V| - |C'| > 2|V| - (2|V| - |M|) = |M|$, this contradicts that $M$ is a triangle-free $2$-matching with maximum cardinality.
Therefore, $G$ has no triangle-free $2$-edge-cover of size smaller than $2|V| - |M|$, which implies that $C$ is a triangle-free $2$-edge-cover with minimum cardinality. \end{proof}
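As a small sanity check of this duality, one can verify by brute force on $K_4$ that the minimum size of a triangle-free $2$-edge-cover equals $2|V|$ minus the maximum size of a triangle-free $2$-matching. The Python sketch below enumerates all edge subsets, so it is exponential-time and illustrative only:

```python
from itertools import chain, combinations

V = list(range(4))
E = [(u, v) for u in V for v in V if u < v]  # K_4

def deg(F, v):
    return sum(1 for e in F if v in e)

def components(F):
    adj = {v: set() for v in V}
    for u, w in F:
        adj[u].add(w)
        adj[w].add(u)
    seen, out = set(), []
    for s in V:
        if s in seen:
            continue
        comp, stack = set(), [s]
        while stack:
            x = stack.pop()
            if x not in comp:
                comp.add(x)
                stack.extend(adj[x])
        seen |= comp
        out.append(comp)
    return out

def triangle_free(F):
    """No connected component of F is a 3-cycle."""
    return all(not (len(c) == 3 and sum(1 for u, w in F if u in c and w in c) == 3)
               for c in components(F))

best_M, best_C = 0, len(E)
for F in chain.from_iterable(combinations(E, k) for k in range(len(E) + 1)):
    if not triangle_free(F):
        continue
    d = [deg(F, v) for v in V]
    if max(d) <= 2:
        best_M = max(best_M, len(F))  # triangle-free 2-matching
    if min(d) >= 2:
        best_C = min(best_C, len(F))  # triangle-free 2-edge-cover
print(best_M, best_C)  # both attained by a 4-cycle
```

On $K_4$ both optima are attained by a Hamiltonian $4$-cycle, so $\text{best\_C} = 4 = 2\cdot 4 - \text{best\_M}$, as the proposition predicts.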
\subsection{Semi-Canonical Triangle-Free Two-Edge-Cover} \label{sec:canonical}
The following lemma shows that a triangle-free $2$-edge-cover can be transformed into a semi-canonical triangle-free $2$-edge-cover without increasing its size. Although the proof is almost the same as that of~\cite[Lemma 2.4]{GGA}, we describe it for completeness.
\begin{lemma} \label{lem:convert}
Given a triangle-free $2$-edge-cover $H$ of a $(5/4, \varepsilon)$-structured graph $G = (V, E)$,
in polynomial time, we can compute a triangle-free $2$-edge-cover $H'$ of no larger size which is semi-canonical. \end{lemma}
\begin{proof}
Recall that an edge set is identified with the corresponding spanning subgraph of $G$.
Starting with $H' = H$, while $H'$ is not semi-canonical we apply one of the following operations in this order of priority.
We note that $H'$ is always triangle-free during the procedure, and hence it always satisfies condition~\ref{canonical:triangle} in the definition of semi-canonical $2$-edge-cover.
\begin{enumerate}
\item[(a)] If there exists an edge $e \in H'$ such that $H' \setminus \{e\}$ is a triangle-free $2$-edge-cover, then remove $e$ from $H'$.
\item[(b)] If $H'$ does not satisfy condition~\ref{canonical:4cycle}, then
we merge two $4$-cycle 2EC components into a cycle of length $8$
by removing $2$ edges and adding $2$ edges.
Note that the obtained edge set is a triangle-free $2$-edge-cover that has fewer connected components.
\item[(c)]
Suppose that condition~\ref{canonical:2EC} does not hold, i.e., there exists a 2EC component $C$ of $H'$ with fewer than $7$ edges that is not a cycle.
Since $C$ is $2$-edge-connected and not a cycle, we obtain $|E(C)| \ge |V(C)| + 1$.
If $|V(C)|=4$, then $C$ contains at least $5$ edges and contains a cycle of length $4$,
which contradicts that (a) is not applied.
Therefore, $|V(C)| = 5$ and $|E(C)| = 6$.
Since operation (a) is not applied, $C$ is either a bowtie (i.e., two triangles that share a common vertex) or
a $K_{2,3}$; see figures in the proof of~\cite[Lemma 2.4]{GGA}.
\begin{enumerate}
\item[(c1)] Suppose that $C$ is a bowtie that has two triangles $\{v_1, v_2, u\}$ and $\{v_3, v_4, u\}$.
If $G$ contains an edge between $\{v_1, v_2\}$ and $\{v_3, v_4\}$, then we can replace $C$ with a cycle of length $5$, which decreases the size of $H'$.
Otherwise, by the $2$-vertex-connectivity of $G$, there exists an edge $zw \in E \setminus H'$ such that $z \in V \setminus V(C)$ and $w \in \{v_1, v_2, v_3, v_4\}$.
In this case, we replace $H'$ with $(H' \setminus \{uw\}) \cup \{zw\}$.
Then, the obtained edge set is a triangle-free $2$-edge-cover with the same size, which has fewer connected components.
\item[(c2)] Suppose that $C$ is a $K_{2, 3}$ with two sides $\{v_1, v_2\}$ and $\{w_1, w_2, w_3\}$.
If every $w_i$ has degree exactly $2$, then every feasible $2$-edge-connected spanning subgraph contains all the edges of $C$, and hence $C$ is a $\frac{5}{4}$-contractible subgraph,
which contradicts the assumption that $G$ is $(5/4, \varepsilon)$-structured.
If $G$ contains an edge $w_i w_j$ for distinct $i, j \in \{1, 2, 3\}$, then we can replace $C$ with a cycle of length $5$, which decreases the size of $H'$.
Otherwise, since some $w_i$ has degree at least $3$,
there exists an edge $w_i u \in E \setminus H'$ such that $i \in \{1, 2, 3\}$ and $u \in V \setminus V(C)$.
In this case, we replace $H'$ with $(H' \setminus \{v_1 w_i\}) \cup \{w_i u\}$.
Then, the obtained edge set is a triangle-free $2$-edge-cover with the same size, which has fewer connected components.
\end{enumerate}
\item[(d)]
Suppose that the first half of condition~\ref{canonical:block} does not hold, i.e., there exists a leaf block $B$ that has at most $5$ edges.
Let $v_1$ be the only vertex in $B$ such that all the edges connecting $V(B)$ and $V \setminus V(B)$ are incident to $v_1$.
Since operation (a) is not applied, we see that $B$ is a cycle of length at most $5$.
Let $v_1, \dots , v_\ell$ be the vertices of $B$ that appear along the cycle in this order.
We consider the following cases separately; see figures in the proof of~\cite[Lemma 2.4]{GGA}.
\begin{enumerate}
\item[(d1)] Suppose that there exists an edge $zw \in E \setminus H'$ such that $z \in V \setminus V(B)$ and $w \in \{v_2, v_\ell\}$.
In this case, we replace $H'$ with $(H' \setminus \{v_1 w\}) \cup \{z w\}$.
\item[(d2)] Suppose that $v_2$ and $v_\ell$ are adjacent only to vertices in $V(B)$ in $G$, which implies that $\ell \in \{4, 5\}$.
If $v_2 v_\ell \not\in E$, then every feasible 2EC spanning subgraph contains four edges (incident to $v_2$ and $v_\ell$) with both endpoints in $V(B)$,
and hence $B$ is a $\frac{5}{4}$-contractible subgraph, which contradicts the assumption that $G$ is $(5/4, \varepsilon)$-structured.
Thus, $v_2 v_\ell \in E$.
Since there exists an edge connecting $V \setminus V(B)$ and $V(B) \setminus \{v_1\}$ by the $2$-vertex-connectivity of $G$,
without loss of generality, we may assume that $G$ has an edge $v_3 z$ with $z \in V \setminus V(B)$.
In this case, we replace $H'$ with $(H' \setminus \{v_1 v_\ell, v_2 v_3\}) \cup \{v_3 z, v_2 v_\ell\}$.
\end{enumerate}
In both cases, the obtained edge set is a triangle-free $2$-edge-cover with the same size.
Furthermore, we see that either (i) the obtained edge set has fewer connected components or
(ii) it has the same number of connected components and fewer bridges.
\item[(e)]
Suppose that the latter half of condition~\ref{canonical:block} does not hold, i.e., there exists an inner block $B$ that has at most $3$ edges.
Then, $B$ is a triangle. Let $\{v_1, v_2, v_3\}$ be the vertex set of $B$.
If there are at least two bridge edges incident to distinct vertices in $V(B)$, say $wv_1$ and $z v_2$, then
edge $v_1 v_2$ has to be removed by operation (a), which is a contradiction.
Therefore, all the bridge edges in $H'$ incident to $B$ are incident to the same vertex $v\in V(B)$.
In this case, we apply the same operation as (d).
\end{enumerate}
We can easily see that each operation above can be done in polynomial time.
We also see that each operation decreases the lexicographical ordering of $(|H'|, {\rm cc}(H'), {\rm br}(H'))$,
where ${\rm cc}(H')$ is the number of connected components in $H'$ and
${\rm br}(H')$ is the number of bridges in $H'$.
This shows that the procedure terminates in polynomial steps.
After the procedure, $H'$ is a semi-canonical triangle-free $2$-edge-cover with $|H'| \le |H|$, which completes the proof. \end{proof}
\subsection{Proof of Theorem~\ref{thm:main}} \label{sec:proofmain}
By Lemma~\ref{lem:structured}, in order to prove Theorem~\ref{thm:main},
it suffices to give a $\frac{13}{10}$-approximation algorithm for 2-ECSS in $(5/4, \varepsilon)$-structured graphs
for a sufficiently small fixed $\varepsilon > 0$.
Let $G=(V, E)$ be a $(5/4, \varepsilon)$-structured graph.
By Proposition~\ref{prop:trifree2cover},
we can compute a minimum-size triangle-free $2$-edge-cover $H$ of $G$ in polynomial time.
Note that the optimal value ${\sf OPT}$ of 2-ECSS in $G$ is at least $|H|$, because
every feasible solution for 2-ECSS is a triangle-free $2$-edge-cover.
By Lemma~\ref{lem:convert}, $H$ can be transformed into a semi-canonical triangle-free $2$-edge-cover $H'$ with $|H'| \le |H|$.
Since $H'$ is triangle-free, by applying Lemma~\ref{lem:fewtriangles} with $H'$,
we obtain a $2$-edge-connected spanning subgraph $S$ of size at most $(\frac{13}{10}-\frac{1}{20}b)|H'|$, where $H'$ has $b|H'|$ bridges.
Therefore, we obtain
\[
|S| \le \left( \frac{13}{10}-\frac{1}{20}b \right) |H'| \le \frac{13}{10} |H| \le \frac{13}{10} {\sf OPT},
\]
which shows that $S$ is a $\frac{13}{10}$-approximate solution for 2-ECSS in $G$.
This completes the proof of Theorem~\ref{thm:main}. \qed
\section{Concluding Remarks}
In this paper, we have presented a $(1.3+\varepsilon)$-approximation algorithm for 2-ECSS,
which achieves the currently best approximation ratio.
We remark that our algorithm is far from practical, because it relies on Hartvigsen's algorithm~\cite{HartD}, which is quite involved.
Therefore, it will be interesting to design a simple and easy-to-understand approximation algorithm with (almost) the same approximation ratio as ours.
Another possible direction of future research is to further improve the approximation ratio by improving Lemma~\ref{lem:fewtriangles}.
\end{document} |
\begin{document}
\title {\bf FPTAS for Weighted Fibonacci Gates and Its Applications }
\author{Pinyan Lu\thanks{Microsoft Research. {\tt
[email protected]}}
\and Menghui Wang\thanks{University of Wisconsin-Madison. This work was partially performed when the author was an undergraduate student at Shanghai Jiao Tong University. {\tt [email protected]}}
\and Chihao Zhang\thanks{Shanghai Jiao Tong University. {\tt [email protected]}} }
\date{} \maketitle
\begin{abstract} Fibonacci gate problems have served as computation primitives to solve other problems by holographic algorithms~\cite{FOCS08} and play an important role in the dichotomy of exact counting for the Holant and CSP frameworks~\cite{STOC09}. We generalize them to weighted cases and allow each vertex function to have different parameters, which gives a much broader family of problems that is \#P-hard for exact counting. We design a fully polynomial-time approximation scheme (FPTAS) for this generalization by the correlation decay technique.
This is the first deterministic FPTAS for approximate counting in the general Holant framework without a degree bound. We also formally introduce holographic reduction in the study of approximate counting
and these weighted Fibonacci gate problems serve as computation primitives for approximate counting.
Under holographic reduction, we obtain FPTAS for other Holant problems and spin problems.
One important application is developing an FPTAS for a large range of ferromagnetic two-state spin systems.
This is the first deterministic FPTAS in the ferromagnetic range for two-state spin systems without a degree bound.
Besides these algorithms, we also develop several new tools and techniques to establish the correlation decay property, which are applicable in other problems. \end{abstract}
\thispagestyle{empty}
\setcounter{page}{1}
\section{Introduction}
Holant is a refined framework for counting problems~\cite{FOCS08,STOC09,holant}, which is more expressive than previous frameworks such as counting constraint satisfaction problems (CSP) in the sense that they can be simulated using Holant instances. In this paper, we consider a generalization called weighted Holant problems. A weighted Holant is an extension of a Holant problem where each edge $e$ is assigned an activity $\lambda_e$, and if it is chosen it contributes to the partition function a factor of $\lambda_e$. Given a graph $G(V,E)$, a family of node functions ${\cal F}=\left\{F_v | v \in V\right\}$, and edge weights $\Lambda=\left\{\lambda_e | e\in E\right\}$, the partition function for a weighted Holant instance $\Omega\left(G,{\cal F},\Lambda\right)$ is the summation of the weights over all configurations $\sigma:E\rightarrow \left\{0,1\right\}$, specifically the value of $$\sum_{\sigma}\left(\prod_{e \in E}\lambda_e\left(\sigma\left(e\right)\right)\prod_{v\in V}F_v\left(\sigma |_{E\left(v\right)}\right)\right).$$ We use Holant(${\mathcal F}, \Lambda$) to denote the class of Holant problems where all functions are taken from ${\mathcal F}$ and all edge weights are taken from $\Lambda$. For example, consider the {\sc Perfect Matching} problem on $G$. This problem corresponds to attaching the {\sc Exact-One} function on every vertex of $G$ --- for each 0-1 edge assignment, the product $\prod_{v\in V} F_v(\sigma\mid_{E(v)})$ evaluates to 1 when the assignment is a perfect matching, and 0 otherwise, thereby summing over all 0-1 edge assignments gives us the number of perfect matchings in $G$. If we use the {\sc At-Most-One} function at each vertex, then we can count all matchings, including those that are not perfect.
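The partition function above can be evaluated by brute force on small instances. The following Python sketch (illustrative only; unweighted case, i.e., all $\lambda_e = 1$, with the same symmetric function on every vertex) computes the Holant value on a $4$-cycle and recovers the matching counts just mentioned:

```python
from itertools import product

# 4-cycle; vertices are 0..3, and symmetric vertex functions are
# given as lists indexed by the Hamming weight of the incident edges.
V = range(4)
E = [(0, 1), (1, 2), (2, 3), (3, 0)]

def holant(vertex_fn):
    """Unweighted Holant value: sum over all 0-1 edge assignments."""
    Z = 0
    for sigma in product([0, 1], repeat=len(E)):
        w = 1
        for v in V:
            k = sum(s for e, s in zip(E, sigma) if v in e)  # weight at v
            w *= vertex_fn[k]
        Z += w
    return Z

print(holant([0, 1, 0]))  # Exact-One: 2 perfect matchings of the 4-cycle
print(holant([1, 1, 0]))  # At-Most-One: 7 matchings (including the empty one)
```

Each vertex of the $4$-cycle has degree $2$, so the signature lists only need entries for Hamming weights $0$, $1$, $2$.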
A symmetric function $F$ can be expressed by $[f_0,f_1,\ldots,f_k]$, where $f_i$ is the value of $F$ on inputs of Hamming weight $i$. The above mentioned {\sc Exact-One} and {\sc At-Most-One} functions are both symmetric and can be expressed as $[0,1,0,0, \ldots]$ and $[1,1,0,0, \ldots]$ respectively. A Fibonacci function $F$ is a symmetric function $[f_0,f_1,\ldots,f_k]$ satisfying $f_i=c f_{i-1}+f_{i-2}$ for some constant $c$. For example, the parity function $[a,b, a,b, \ldots]$ is a special Fibonacci function with $c=0$. If there are no edge weights (or equivalently all the weights are equal to 1) and all the node functions are Fibonacci functions with the same parameter $c$, there is a polynomial time algorithm to compute the partition function exactly~\cite{FOCS08}. These problems also form the basis for a family of holographic algorithms, where other interesting problems can be reduced to Fibonacci gate problems~\cite{FOCS08}. Furthermore, this family of functions is interesting not only because of its tractability, but also because it essentially captures almost all tractable Holant problems when all unary functions are available~\cite{STOC09,holant}.
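As a quick illustration, a Fibonacci signature is completely determined by $f_0$, $f_1$, and the parameter $c$:

```python
def fibonacci_signature(f0, f1, c, k):
    """Symmetric signature [f_0, ..., f_k] with f_i = c*f_{i-1} + f_{i-2}."""
    f = [f0, f1]
    while len(f) <= k:
        f.append(c * f[-1] + f[-2])
    return f

print(fibonacci_signature(0, 1, 1, 5))  # [0, 1, 1, 2, 3, 5]: Fibonacci numbers
print(fibonacci_signature(1, 0, 0, 5))  # [1, 0, 1, 0, 1, 0]: a parity function
```

With $c=1$ one gets the classical Fibonacci numbers, and with $c=0$ the recurrence degenerates to $f_i = f_{i-2}$, producing a parity function.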
If we allow edges to have non-trivial weights or each function to have different parameters in Fibonacci gates, then the exact counting problem becomes \#P-hard~\cite{STOC09,holant}. Nevertheless, it is interesting to study the problem in the approximation setting. We first introduce the solution concepts for approximate counting. A fully polynomial-time approximation scheme (FPTAS) is an algorithm scheme that approximates the answer to a problem within an arbitrarily small relative error in polynomial time. More precisely, an FPTAS is an algorithm scheme such that for any given parameter $\varepsilon>0$, the algorithm produces an output $Z'$ satisfying $(1-\varepsilon)Z<Z'<(1+\varepsilon)Z$, where $Z$ is the correct answer, and runs in time $poly(n,1/\varepsilon)$. Its randomized relaxation is called a fully polynomial-time randomized approximation scheme (FPRAS),
which uses random bits in the algorithm and requires that the final output be within the range $[(1-\epsilon) Z, (1+\epsilon) Z]$ with high probability.
In contrast to the exact counting setting, the approximability of Holant problems is much less well understood. In this paper, we study approximate counting for weighted Fibonacci gate problems.
Another closely related and well-studied model is spin systems. In this paper, we focus on two-state spin systems. An instance of a spin system is a graph $G(V,E)$. A configuration $\sigma: V\rightarrow \{0, 1\}$ assigns every vertex one of the two states. The contributions of local interactions between adjacent vertices are quantified by a matrix $A=\begin{bmatrix} A_{0,0} & A_{0,1} \\ A_{1,0} & A_{1,1} \end{bmatrix}=\begin{bmatrix} \beta & 1 \\ 1 & \gamma \end{bmatrix}$, where $\beta, \gamma \geq 0$. The partition function is defined by \ifabs{
$Z_A(G)=\sum_{\sigma \in \{0,1\}^{V}} \prod_{(u,v) \in E} A_{\sigma(u), \sigma(v)}$. } {
\[ Z_A(G)=\sum_{\sigma \in \{0,1\}^{V}} \prod_{(u,v) \in E} A_{\sigma(u), \sigma(v)}.\]} There have been many studies on the approximability of the partition function in terms of the parameters $\beta$ and $\gamma$. The problem is exactly solvable in polynomial time if $\beta \gamma=1$. When $\beta \gamma<1$, the system is called anti-ferromagnetic and we have a complete understanding of its approximability: there is a uniqueness boundary, above which there is an FPTAS~\cite{Weitz06,LLY12,SST,LLY13} and below which it is NP-hard~\cite{Sly10,SS12,galanis2012inapproximability}.
The story is different in the ferromagnetic range $\beta\gamma>1$. Jerrum and Sinclair \cite{JS93} gave an FPRAS for the Ising model ($\beta=\gamma>1$) based on the Markov Chain Monte Carlo (MCMC) method, and later Goldberg et al.\ extended it to the whole $\beta\gamma>1$ plane. However, these algorithms are all randomized. Can we design a deterministic FPTAS as in the anti-ferromagnetic range? This is an interesting and important question in general, and much effort has been made to derandomize MCMC-based algorithms. For instance, there is an FPRAS for counting matchings~\cite{jerrum1989approximating}, but an FPTAS is only known for graphs of bounded degree~\cite{BGKNT07}. The situation is similar for computing the permanent of a nonnegative matrix: although an FPRAS is known \cite{app_JSV04}, the best known deterministic algorithm can only approximate the permanent within an exponentially large factor \cite{linial1998deterministic}. To the best of our knowledge, no deterministic FPTAS was previously known for two-state spin systems in the ferromagnetic range. In particular, the correlation decay technique, the main tool for designing FPTAS's in the anti-ferromagnetic range, does not directly apply.
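For small graphs the partition function $Z_A(G)$ can be computed by direct enumeration over all $2^{|V|}$ configurations; the following Python sketch is illustrative only (vertices are assumed to be labeled $0,\dots,n-1$):

```python
from itertools import product

def spin_partition_function(V, E, beta, gamma):
    """Brute-force Z_A(G) for the interaction matrix A = [[beta, 1], [1, gamma]]."""
    A = [[beta, 1.0], [1.0, gamma]]
    Z = 0.0
    for sigma in product([0, 1], repeat=len(V)):  # all vertex configurations
        w = 1.0
        for u, v in E:
            w *= A[sigma[u]][sigma[v]]
        Z += w
    return Z

# A single edge: the four configurations contribute beta + 1 + 1 + gamma.
print(spin_partition_function([0, 1], [(0, 1)], 2.0, 3.0))  # 7.0
```

At $\beta\gamma=1$ the interaction matrix has rank one, which is the source of the exact tractability mentioned above.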
\subsection{Our Results} The main results of this paper are a number of FPTAS's for computing the partition function of different Holant problems and spin systems.
\begin{description} \item[Weighted Fibonacci gates.] We design an FPTAS for weighted Fibonacci gates when the parameters satisfy certain conditions.
We have several theorems to cover different ranges.
In Theorem~\ref{thm:1}, we prove that for any fixed choice of other parameters, we can design an FPTAS as long as the edge weights are close enough to $1$, which corresponds to the unweighted case.
This result demonstrates a smooth transition from the unweighted case to weighted ones in terms of approximation.
Another interesting range is that we have an FPTAS for the whole range as long as the Fibonacci parameter $c$ is reasonably large (no less than a constant 1.17) and edge weights are no less than $1$ (which means all the edges prefer to be chosen) (Theorem~\ref{thm:2}).
It is worth noting that we allow Fibonacci functions on different nodes to have different parameters $c$, which contrasts the exact counting setting where it is crucial for different functions to have the same parameter in order to have a polynomial time algorithm.
\item[Ferromagnetic two-state spin systems.] We design an FPTAS for a large range of ferromagnetic two state spin systems.
This is the first deterministic FPTAS in the ferromagnetic range for two-state spin systems without a degree bound.
To describe the tractable range, we present a monotonically increasing function $\Gamma:[1, \infty) \to \mathbb{R}$ with $\Gamma(1)=1$ and $\Gamma(x)\leq x$.
We have an FPTAS for a ferromagnetic spin system $\begin{bmatrix} \beta & 1 \\ 1 & \gamma \end{bmatrix}$ as long as $\gamma \leq \Gamma(\beta)$ or $\beta \leq \Gamma(\gamma)$ (Theorem~\ref{thm:spin-holant}).
The exact formula of $\Gamma$ is complicated and we do not spend much effort to optimize it.
However, it already enjoys a nice property in that $\lim_{x \to +\infty} \frac{\Gamma(x)}{x}=1$.
This means that although the range does not cover the Ising model ($\beta=\gamma$), it gets relatively close to that in infinity.
We also have similar results for two-spin system with external fields.
\item[Other Holant Problems.] We can extend our FPTAS to functions $[f_0,f_1,\ldots,f_d]$ of the form
$f_{i+2}=a f_{i+1} + b f_i$ if the parameters satisfy certain conditions.
This is a much broader family than Fibonacci gates, since Fibonacci gates correspond to the case $b=1$. \end{description}
\subsection{Our Techniques} Our main approach for designing FPTAS's is the correlation decay technique introduced in \cite{BG08} and \cite{Weitz06}. While the general framework is standard, it is highly non-trivial to design a recursive computational structure and especially to prove the exponential correlation decay property for a given set of problems. This is analogous to designing an FPRAS with the Markov Chain Monte Carlo (MCMC) method: though the general framework is the same, it is still difficult to design Markov chains for different problems and especially to prove the rapid mixing property~\cite{MC_JA96}. One powerful technique here is to use a potential function to amortize the decay rate, which has been introduced and used in many problems~\cite{RSTVY11,LLY12,SST,LLY13,LY13} and which we utilize here. Beyond this, to enrich the tool set, we introduce several new techniques to design the recursive computational structure and to prove the correlation decay property. We believe that these techniques can find applications in other problems.
\begin{description} \item[Working with dangling edges.] The recursive computational structure for spin problems usually relates a marginal probability of a vertex to that of its neighbors.
In Holant problems, we are talking about the assignments and marginal probabilities of edges.
Since an edge has two ends, it has two sets of neighbors, which complicates things considerably.
In this paper, we choose to work on instances with dangling edges; a dangling edge is a half edge that has neighbors on only one end.
It is much easier to write recursions on dangling edges.
This technique works for any Holant problem and we believe it is the right object to work with in the Holant framework. Indeed, the idea has since been successfully used in~\cite{counting-edge-cover}.
\item[Computation tree with bounded degrees.] Usually, the correlation decay property only implies an FPTAS for systems with bounded degrees.
One exception is the anti-ferromagnetic two-state spin systems, for which a stronger notion of computationally efficient correlation decay was introduced~\cite{LLY12}.
In this paper, we also establish the computationally efficient correlation decay for systems without a degree bound, but via a different approach.
By making use of the unique property of Fibonacci functions, we can decompose a node into several nodes with constant degrees.
Thus, at each step of our computation tree, we only involve constantly many sub-instances even if the degree of the original system is not bounded.
\item[Bounding range of variables.] After we get a recursion system, the main task is to prove the correlation decay property.
This is usually achieved by proving that a certain amortized decay rate, which is a function of several variables, is less than one for any choice of these variables in their domain.
If we can prove that these variables are always within smaller domains, then we only need to prove that the rate is less than one under these smaller domains, which becomes weaker and easier to prove.
Some naive implementation of this idea already appeared in approximate counting of coloring problems~\cite{GK07,LY13}.
In this paper, we develop this idea much further.
We divide sub-instances involved in the computation tree into two classes: deep ones for which we can get a much better estimation of their range and shallow ones for which we can compute their value without error.
Then we can either compute the exact value or safely assume that it is within a smaller domain, which makes the correlation decay property easier to prove.
\item[Holographic reduction.] We formally introduce holographic reduction in the study of approximate counting. We use weighted Fibonacci gate problems as computation primitives for approximate counting and design holographic algorithms for other problems based on them.
In particular,
we use the FPTAS for Fibonacci gates to obtain an FPTAS for ferromagnetic two-state spin systems.
It is noteworthy that the correlation decay property does not generally hold for ferromagnetic two-state spin systems.
So we cannot do a similar argument to get the FPTAS in the spin world directly.
Moreover, the idea of holographic reduction applies to any Holant problem, which extends known counting algorithms (both exact and approximate, both deterministic and randomized) to a broader family of problems.
Indeed, the other direction of holographic reduction is also used in our algorithm: we design an exact algorithm for shallow sub-instances of a Fibonacci instance by a holographic reduction to the spin world. \end{description}
\subsection{Related Works} Most previous studies of the Holant framework are for exact counting, and a number of dichotomy theorems were proved~\cite{holant,HuangL12,CaiGW13}. Holographic reduction was introduced by Valiant in holographic algorithms~\cite{HA_FOCS,STOC07}, and was later also used to prove hardness results for counting problems~\cite{FOCS08,holant,planar}.
For some special Holant problems such as counting (perfect) matchings, the approximate versions are well studied~\cite{BGKNT07,jerrum1989approximating,app_JSV04}. In particular, \cite{BGKNT07} gave an FPTAS to count matchings, but only for graphs with bounded degrees. Approximate counting in the general Holant framework is relatively less studied, with two recent exceptions: \cite{YZ13} studied general Holant problems but only for planar graph instances with a bounded degree; \cite{McQuillan2013} gave an FPRAS for several Holant problems. Another well-known example is the ``sub-graph world'' in \cite{JS93}. It is indeed a weighted Holant problem with Fibonacci functions with $c=0$, for which an FPRAS was given. In that paper, holographic reduction was also implicitly used, which extends the FPRAS to the Ising model.
Most previous studies of FPTAS's via correlation decay are on spin systems. The technique was extremely successful for anti-ferromagnetic two-spin systems~\cite{Weitz06,LLY12,SST,LLY13} and is also used in multi-spin systems~\cite{GK07,LY13}. Many more works focused on randomized approximate counting, for example~\cite{JS93,app_JSV04,app_GJ11,app_DJV01,IS_DG00,col_Jerrum95,col_Vigoda99}.
\section{Preliminaries}\label{sec:background}
A weighted Holant instance $\Omega = (G(V,E), \{ F_v | v \in V \}, \{\lambda_e | e \in E\} )$ is a tuple. $G(V,E)$ is a graph. $F_v$ is a function with arity $d_v$: $\{0,1\}^{d_v}\rightarrow \mathbb{R}^+$, where $d_v$ is the degree of $v$ and $\mathbb{R}^+$ denotes non-negative real numbers. Edge weight $\lambda_e$ is a mapping $\{0,1\} \rightarrow \mathbb{R}^+ $. A configuration $\sigma$ is a mapping $E\rightarrow \{0,1\}$ and gives a weight
\[w_\Omega(\sigma)=\prod_{e \in E} \lambda_e (\sigma(e)) \prod_{v \in V} F_v(\sigma\mid_{E(v)}),\]
where $E(v)$ denotes the incident edges of $v$.
The counting problem on the instance $\Omega$ is to compute the partition function: \[Z(\Omega)=\sum_{\sigma}\left( \prod_{e \in E} \lambda_e (\sigma(e)) \prod_{v\in V} F_v(\sigma\mid_{E(v)})\right).\]
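To make the definition concrete, the partition function of a small instance can be computed by brute-force enumeration. The triangle graph, the symmetric signature $[1,1,2]$ and the unit edge weights in the sketch below are hypothetical choices for illustration only, not taken from this paper:

```python
from itertools import product

# Toy weighted Holant instance (illustrative, not from the paper):
# triangle graph, every vertex carries the symmetric signature
# [f0, f1, f2] = [1, 1, 2], every edge has weight lambda_e = 1.
edges = [(0, 1), (1, 2), (0, 2)]
lam = {e: 1.0 for e in edges}        # normalized: lambda_e(0) = 1
sig = [1.0, 1.0, 2.0]                # value indexed by local Hamming weight

def partition_function(edges, lam, sig, n_vertices=3):
    """Brute-force Z(Omega): sum over all edge assignments."""
    Z = 0.0
    for sigma in product((0, 1), repeat=len(edges)):
        w = 1.0
        for e, s in zip(edges, sigma):       # edge factor lambda_e^{sigma(e)}
            w *= lam[e] ** s
        for v in range(n_vertices):          # vertex factor F_v(sigma|_{E(v)})
            h = sum(s for e, s in zip(edges, sigma) if v in e)
            w *= sig[h]
        Z += w
    return Z

print(partition_function(edges, lam, sig))
```

Enumerating all $2^{|E|}$ assignments is of course exponential in general; avoiding exactly this enumeration is the point of the FPTAS's developed below.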
We can represent each function $F_v$ by a vector in $(\mathbb{R}^+)^{2^{d_v}}$, or a tensor in $((\mathbb{R}^+)^{2})^{\otimes d_v}$. This is also called a {\it signature}.
A symmetric function $F$ can be expressed by $[f_0,f_1,\ldots,f_k]$, where $f_j$ is the value of $F$ on inputs of Hamming weight $j$. For example, the equality function is $[1,0,\ldots,0,1]$. An edge weight is a unary function, which can be written as $[\lambda_e(0),\lambda_e(1)]$. Since we do not care about a global scale factor, we always normalize so that $\lambda_e(0)=1$ and use the notation $\lambda_e=\lambda_e(1)$ as a real number.
A Holant problem is parameterized by a set of functions ${\cal F}$ and edge weights $\Lambda$. \begin{definition} Given a set of functions ${\cal F}$ and edge weights $\Lambda$, we denote by ${\rm Holant}({\cal F}, \Lambda )$ the following computation problem.\\
\noindent {\bf Input:} A Holant instance $\Omega = (G(V,E), \{ F_v | v \in V \}, \{\lambda_e | e \in E\} )$, where $F_v \in {\cal F}$ and $\lambda_e \in \Lambda $ ;\\ \noindent {\bf Output:} The partition function $Z(\Omega)$. \end{definition}
The weights of configurations also give a distribution over all possible configurations: \[\mathbb{P}_\Omega(\sigma) =\frac{w_\Omega (\sigma)}{Z(\Omega)}=\frac{1}{ Z(\Omega)} \prod_{e \in E} \lambda_e (\sigma(e)) \prod_{v\in V} F_v(\sigma\mid_{E(v)}).\] This defines the marginal probability of each edge $e_0 \in E$.
\[\mathbb{P}_\Omega(\sigma(e_0)=0) =\frac{\sum_{\sigma:\sigma(e_0)=0}\left( \prod_{e \in E} \lambda_e (\sigma(e)) \prod_{v\in V} F_v(\sigma\mid_{E(v)})\right)}{ Z(\Omega)} .\]
Similarly, we can define the marginal probability of a subset of edges. Let $E_0\subset E$ and let $e_1, e_2, \ldots, e_{|E_0|}$ be an enumeration of the edges in $E_0$. Then we can define $\sigma(E_0)=\sigma(e_1)\sigma(e_2)\cdots\sigma(e_{|E_0|})$ as a Boolean string of length $|E_0|$. For $\alpha\in \{0,1\}^{|E_0|}$, we define
\[\mathbb{P}_\Omega(\sigma(E_0)=\alpha) =\frac{\sum_{\sigma:\sigma(e_i)=\alpha_i, i=1,2,\ldots, |E_0|}\left( \prod_{e \in E} \lambda_e (\sigma(e)) \prod_{v\in V} F_v(\sigma\mid_{E(v)})\right)}{ Z(\Omega)} .\] We denote the partial summation as \[Z(\Omega, \sigma(E_0)=\alpha)=\sum_{\sigma:\sigma(e_i)=\alpha_i}\left( \prod_{e \in E} \lambda_e (\sigma(e)) \prod_{v\in V} F_v(\sigma\mid_{E(v)})\right).\]
We define a dangling instance $\Omega^{D}$ of ${\rm Holant}({\cal F}, \Lambda )$ also as a tuple $(G(V,E\cup D), \{ F_v | v \in V \}, \{\lambda_e | e \in E\} )$, where $G(V,E\cup D)$ is a graph with dangling edges $D$. A dangling edge can be viewed as a half edge, with one end attached to a regular vertex in $V$ and the other end dangling (not considered as a vertex). A dangling instance $\Omega^{D}$ is the same as a Holant instance except for these dangling edges. In $G(V,E\cup D)$ each node is assigned a function in ${\cal F}$ (we do not count the ``dangling'' ends of dangling edges among these), each regular edge in $E$ is assigned a weight from $\Lambda$, and we always assume that there is no weight on a dangling edge in this paper. A dangling instance can also be viewed as a regular instance by attaching a vertex with function $[1,1]$ at the dangling end of each dangling edge. We can define the probability distribution and marginal probabilities just as for regular instances. In particular, we shall use dangling instances $\Omega^e$ with a single dangling edge $e$ extensively in this paper. For that, we define \[R(\Omega^{e})= \frac{\mathbb{P}_{\Omega^{e}}(\sigma(e)=1)}{\mathbb{P}_{\Omega^{e}}(\sigma(e)=0)}.\]
\ifabs{} { \begin{definition}\label{definition-partial-pinning}
Given a Holant instance $\Omega=(G(V,E), \{ F_v | v \in V \}, \{\lambda_e | e \in E\} )$, an edge $e_0=(u_1,u_2) \in E$ and $\tau \in [0,1]$, we define a weighted pinning operation $\Pin{\Omega}{e_0}{\tau}=(G'(V,E-\{e_0\}), \{ F'_v | v \in V \}, \{\lambda_e | e \in E-\{e_0\}\} )$. The graph of $\Pin{\Omega}{e_0}{\tau}$ is the same as that of $\Omega$ except that $e_0$ is removed; all the edge weights on the remaining edges are the same in both instances; all the vertex functions are the same except those of $u_1$ and $u_2$. For $v\in \{u_1, u_2\}$ and $\alpha\in \{0,1\}^{d_v-1}$, $F'_v(\alpha)=(1-\tau) F_v(\alpha 0) +\tau F_v(\alpha 1)$. \end{definition}
In this definition, note that for $x\in \{0,1\}$, $\Pin{\Omega}{e_0}{x}$ is exactly the Holant instance obtained by fixing the edge $e_0$ to $x$.
\subsection{Holographic Reduction}
Holographic reduction is a powerful reduction among counting problems expressible in the Holant framework. We use $Holant({\cal G}|{\cal R})$ to denote the counting problems, expressed as unweighted Holant problems on bipartite graphs $H=(U,V,E)$, where each signature for a vertex in $U$ or $V$ is from
${\cal G}$ or ${\cal R}$, respectively. Signatures in ${\cal G}$ are denoted by column vectors (or contravariant tensors); signatures in ${\cal R}$ are denoted by row vectors (or covariant tensors)~\cite{dodson}. One can perform (contravariant and covariant) tensor transformations on the signatures, which may produce exponential cancellations in tensor spaces. We shall define a simple version of holographic reductions, which are invertible. Suppose $Holant({\cal G}|{\cal R})$ and
$Holant({\cal G'}|{\cal R'})$ are two Holant problems defined for the same family of graphs, and
$T \in {\bf GL}_2({\mathbb C})$ is a basis.
We say that there is a holographic
reduction from $Holant({\cal G}|{\cal R})$ to
$Holant({\cal G'}|{\cal R'})$ if the {\it contravariant} transformation $G' = T^{\otimes g} G$ and the {\it covariant} transformation $R=R' T^{\otimes r}$ map $G\in {\cal G}$ to $G'\in {\cal G'}$ and $R\in {\cal R}$ to $R' \in {\cal R'}$, where $G$ and $R$ have arity $g$ and $r$ respectively. (Notice the reversal of directions when the transformation $T^{\otimes n}$ is applied. This is the meaning of {\it contravariance} and {\it covariance}.)
\begin{theorem}[Holant Theorem~\cite{HA_J}]\label{thm:holant}
Suppose there is a holographic reduction from $Holant({\cal G}|{\cal R})$ to
$Holant({\cal G'}|{\cal R'})$ mapping instance $\Omega$ to $\Omega'$, then $Z(\Omega) = Z(\Omega')$. \end{theorem}
The proof of this theorem follows from general principles of contravariant and covariant tensors~\cite{dodson}.}
\ifabs{ \section{Statement of Main Results} } { \section{Results and Applications}
We first list our FPTAS's for various ranges of Fibonacci gates and show their applications in other Holant problems and spin systems. The proofs of these theorems are given in later sections.
\subsection{Fibonacci Signature}} A symmetric function $[f_0,f_1,\ldots,f_d]$ is called a (generalized) Fibonacci function if there exists a constant $c$ such that \[f_{i+2}=c f_{i+1} + f_i, \ \ \mbox{where }\ \ i=0, 1, \cdots, d-2.\] We denote this family of functions by ${\cal F}_c$, the Fibonacci functions with parameter $c$. \ifabs{} {Another useful way to parameterize Fibonacci functions is \[f_i = A \rho^i + B (-\rho)^{-i}, \] where $A,B$ are two constants and $\rho$ is the positive root of $t^2= c t +1$. Thus, there is a one-to-one correspondence between the parameters $c$ and $\rho$. In this paper, when one of them is defined in a context, we assume that the other one is also defined automatically and accordingly, and we shall use either one directly and freely.
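The equivalence of the two parameterizations can be sanity-checked numerically; the constants $c$, $A$, $B$ in the sketch below are arbitrary illustrative choices:

```python
import math

# Check that f_i = A*rho**i + B*(-rho)**(-i) satisfies the Fibonacci
# recurrence f_{i+2} = c*f_{i+1} + f_i, where rho is the positive root
# of t^2 = c*t + 1. The values of c, A, B are illustrative only.
c = 1.5
rho = (c + math.sqrt(c * c + 4)) / 2     # positive root of t^2 - c*t - 1 = 0
A, B = 2.0, -0.3

f = [A * rho ** i + B * (-rho) ** (-i) for i in range(8)]
for i in range(len(f) - 2):
    assert abs(f[i + 2] - (c * f[i + 1] + f[i])) < 1e-9
print("recurrence holds for all computed terms")
```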
} We use ${\cal F}_c^{p,q}$ to denote a subfamily of ${\cal F}_c$
such that $f_{i+1} \geq p f_{i}$ and $f_{i+1} \leq q f_{i}$ for all $i=0, 1, \cdots, d-1 $. When the upper bound $q$ is not given, we simply write ${\cal F}_c^{p}$. We use ${\cal F}_{c_1, c_2}^{p,q}$ to denote $\bigcup_{c_1\leq c \leq c_2} {\cal F}_c^{p,q}$. We use $\Lambda_{\lambda_1,\lambda_2}$ to denote the set of edge weights $\lambda_e$ such that $\lambda_1\leq \lambda_e \leq \lambda_2$.
Here is a list of the FPTAS's we obtain:
\begin{theorem} \label{thm:1} For any $c>0$ and $p>0$, there exists $\lambda_1(p,c)<1$ and $\lambda_2(p,c)>1$ such that there is an FPTAS for ${\rm Holant}({\cal F}_c^p, \Lambda_{\lambda_1(p,c),\lambda_2(p,c)} )$. \end{theorem}
\begin{theorem} \label{thm:2} Let $p>0$. Then there is an FPTAS for ${\rm Holant}({\cal F}_{1.17, +\infty}^p, \Lambda_{1,+\infty} )$. \end{theorem}
\begin{theorem} \label{thm:spin} Let $\lambda>0$ and $c\geq 2.57$. There is an FPTAS for ${\rm Holant}({\cal F}_{c}^{c/2,c+2/c}, \Lambda_{\lambda,+\infty} )$. \end{theorem}
\ifabs{ As an important application of the above theorems, we get the following FPTAS for ferromagnetic two-state spin systems. \begin{theorem} \label{thm:spin-holant} There is a continuous curve $\Gamma(\beta)$ defined on $[1,+\infty)$ such that (1) $\Gamma(1)=1$; (2) $1<\Gamma(\beta)<\beta$ for all $\beta>1$; and (3) $\lim_{\beta \to +\infty} \frac{\Gamma(\beta)}{\beta}=1$. There is an FPTAS for the two-state spin system with local interaction matrix
$\begin{bmatrix}
\beta & 1 \\
1 & \gamma
\end{bmatrix}$ and external field $\mu \leq 1$ if $\beta\gamma>1$ and $\gamma \leq \Gamma(\beta)$. \end{theorem} } {}
\ifabs{} {
\subsection{Beyond Fibonacci} We use ${\cal L}_{a,b}$ to denote the set of all symmetric functions $[f_0,f_1,\ldots,f_d]$ which satisfy \[f_{i+2}=a f_{i+1} + b f_i, \ \ \mbox{where }\ \ i=0, 1, \cdots, d-2,\] and we use ${\cal L}$ to denote the union of these families over all $a$ and $b$. We shall show that an instance of ${\rm Holant}({\cal L}, \Lambda)$
can be transformed to an instance of Fibonacci gates. Given an instance $\Omega = (G(V,E), \{ F_v | v \in V \}, \{\lambda_e | e \in E\} )$ of ${\rm Holant}({\cal L}, \Lambda)$, we can modify a function $F_v=[f_0,f_1,\ldots,f_d]\in {\cal L}_{a,b}$ to \[ [g_0, g_1, \ldots, g_d]=[f_0,\frac{f_1}{\sqrt{b}},\ldots,\frac{f_d}{b^{d/2}}]. \] Then these $[g_0, g_1, \ldots, g_d]$ satisfy \[g_{i+2}=\frac{a}{\sqrt{b}} g_{i+1} + g_i, \ \ \mbox{where }\ \ i=0, 1, \cdots, d-2,\] which is a Fibonacci function. At the same time, we modify the weight of each edge incident to $v$ from $\lambda$ to $\lambda \sqrt{b}$. By the definition of the partition function, it is easy to verify that the partition function remains the same after these simultaneous modifications of vertex functions and edge weights. We can do this for all the vertex functions and edge weights. This is indeed a holographic reduction under the basis $\begin{bmatrix} 1 & 0 \\ 0 & \sqrt{b} \end{bmatrix}$. Finally we get an instance of Fibonacci gates, so all our FPTAS results for Fibonacci gates translate to FPTAS results for a subfamily of ${\rm Holant}({\cal L}, \Lambda)$.
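The rescaling argument can be checked numerically on a toy example; the values of $a$, $b$ and the first two terms of $f$ in the sketch below are arbitrary illustrative choices:

```python
import math

# A sequence with f_{i+2} = a*f_{i+1} + b*f_i, rescaled to
# g_i = f_i / b**(i/2), should satisfy g_{i+2} = (a/sqrt(b))*g_{i+1} + g_i.
a, b = 1.2, 3.0
f = [1.0, 2.0]                            # illustrative initial values
for _ in range(5):
    f.append(a * f[-1] + b * f[-2])

g = [fi / b ** (i / 2) for i, fi in enumerate(f)]
c_new = a / math.sqrt(b)                  # the new Fibonacci parameter
for i in range(len(g) - 2):
    assert abs(g[i + 2] - (c_new * g[i + 1] + g[i])) < 1e-9
print("rescaled signature is Fibonacci with parameter a/sqrt(b)")
```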
\subsection{Holographic reduction and spin world}\label{sec:spin} A weighted Holant problem can also be interpreted as an (unweighted) Holant problem defined on bipartite graphs. For any Holant instance on a general graph, we can make it bipartite by adding an additional vertex on each edge, and for the new vertex on an edge with weight $\lambda$, the function on it is $[1, 0, \lambda]$. The new bipartite graph is unweighted (no edge weights). It is clear that this modification does not change the partition function of the instance. For this bipartite Holant problem, we can apply a holographic reduction under the basis $\begin{bmatrix} 1 & t \\ \rho & -\frac{t}{\rho}\end{bmatrix}$ to get the following lemma.
\begin{lemma} \label{lem:holographic}
Let $\lambda>0$, $\rho\geq 1$, $t(1-\lambda)>0$, and $|t|\leq 1$. Let $\beta=\frac{1+\lambda \rho^2}{t(1-\lambda)}$ and $\gamma=\frac{t(1+\lambda \rho^{-2})}{1-\lambda}$. The two-state spin problem with edge function $\begin{bmatrix} \beta & 1 \\ 1 & \gamma \end{bmatrix}$ and external field $\mu$ is equivalent to ${\rm Holant}( {\cal F}_{\rho-\frac{1}{\rho}}, \Lambda_{\lambda,\lambda} )$, where ${\cal F}_{\rho-\frac{1}{\rho}}$ is a set of Fibonacci functions with parameter $c=\rho-\frac{1}{\rho}$ and the one of arity $n$ has the form \begin{equation}
\label{eq:spin-fibo}
f_k=\rho^k+\mu t^n (-\rho)^{-k}. \end{equation} \end{lemma}
Through this reduction, Theorems~\ref{thm:1}-\ref{thm:spin} give a region in the $\beta\mbox{-}\gamma$ plane in which the ferromagnetic two-state spin system admits an FPTAS. The explicit range is complicated and not very informative. We use a function $\Gamma(\beta)$ to denote the combined range of the above three theorems and obtain the following FPTAS for ferromagnetic two-state spin systems.
\begin{figure}
\caption{This figure illustrates the rough shape of $\Gamma(\cdot)$ when there is no external field. It also includes the anti-ferromagnetic range. Parameters $(\beta, \gamma)$ admit an FPTAS in the green region and are hard to approximate in the red region. }
\end{figure}
\begin{theorem} \label{thm:spin-holant} There is a continuous curve $\Gamma(\beta)$ defined on $[1,+\infty)$ such that (1) $\Gamma(1)=1$; (2) $1<\Gamma(\beta)<\beta$ for all $\beta>1$; and (3) $\lim_{\beta \to +\infty} \frac{\Gamma(\beta)}{\beta}=1$. There is an FPTAS for the two-state spin system with local interaction matrix
$\begin{bmatrix}
\beta & 1 \\
1 & \gamma
\end{bmatrix}$ and external field $\mu \leq 1$ if $\beta\gamma>1$ and $\gamma \leq \Gamma(\beta)$. \end{theorem}
\begin{proof} The main idea is to use the holographic reduction stated in Lemma \ref{lem:holographic} to transform
an FPTAS for the Fibonacci function $f_k=\rho^k+\mu t^n (-\rho)^{-k}$ with edge weight $\lambda$ into an FPTAS for the spin system with
parameters $\beta=\frac{1+\lambda \rho^2}{t(1-\lambda)}$, $\gamma=\frac{t(1+\lambda \rho^{-2})}{1-\lambda}$ and external field $\mu$. In the following, we first choose parameters $\rho$, $\lambda$ and $|t|=1$ in the tractable range of Theorem~\ref{thm:spin}, Theorem~\ref{thm:2}, and Theorem~\ref{thm:1} to define the boundary $\Gamma(\beta)$ by the holographic reduction. Then we cover the area below it by choosing suitable $|t|<1$.
We first specify the boundary curve
$\Gamma(\beta)=\max\left\{\Gamma_1(\beta),\Gamma_2(\beta),\Gamma_3(\beta)\right\}$ where $\Gamma_1(\beta),\Gamma_2(\beta)$ are curves parameterized by $\lambda$ and $\Gamma_3(\beta)$ is a curve parameterized by $\rho$ defined as follows.
\begin{align*}
\Gamma_1&=\left(\beta=\frac{1+2.92^2\lambda}{1-\lambda},\gamma=\frac{1+2.92^{-2}\lambda}{1-\lambda}\right),\quad \lambda\in(0,1);\\
\Gamma_2&=\left(\beta=\frac{1+1.75^2\lambda}{\lambda-1},\gamma=\frac{1+1.75^{-2}\lambda}{\lambda-1}\right),\quad \lambda\in(1,\infty);\\
\Gamma_3&=\left(\beta=\frac{1+\rho^2\lambda_2(\rho)}{\lambda_2(\rho)-1},\gamma=\frac{1+\rho^{-2}\lambda_2(\rho)}{\lambda_2(\rho)-1}\right),\quad \rho\in(1,\infty)
\end{align*}
where $\lambda_2(\cdot)$ is the function from Theorem~\ref{thm:1}.
$\Gamma_1$ is obtained from Lemma~\ref{lem:holographic} combined with Theorem~\ref{thm:spin} by taking $t=1$ and $\rho=2.92$ (equivalently $c=2.57$), as it is easy to verify that the condition $c/2\le\frac{f_{i+1}}{f_i}\le c+2/c$ holds in this case. $\Gamma_2$ is obtained from Lemma~\ref{lem:holographic} combined with Theorem~\ref{thm:2} by taking $t=-1$ and $\rho=1.75$ (equivalently $c=1.17$). $\Gamma_3$ is obtained from Lemma~\ref{lem:holographic} combined with Theorem~\ref{thm:1} by taking $t=-1$ and $\lambda= \lambda_2(\rho)$. Note that although in the statement of Theorem~\ref{thm:1}, $\lambda_2(\cdot)$ is a function of $p$ and $\rho$, $p$ is also a function of $\rho$ for fixed $t$ and $\mu$ in our case. Thus $\lambda_2(\cdot)$ is a function of $\rho$ alone.
Now we can discuss the shape of $\Gamma(\beta)$ in the $\beta\mbox{-}\gamma$ plane. The maximum in the definition of $\Gamma(\beta)$ is achieved by $\Gamma_1,\Gamma_2,\Gamma_3$ consecutively as $\beta$ ranges from $1$ to $\infty$.
\begin{itemize}
\item When $\beta$ is relatively small, $\Gamma(\beta)=\Gamma_1(\beta)$, which starts from the point $(1,1)$.
\item As $\beta$ grows, $\Gamma(\beta)=\Gamma_2(\beta)$ as the slope $\frac{\Gamma_1(\beta)}{\beta}$ approaches $
\frac{1+2.92^{-2}}{1+2.92^{2}}\approx 0.117$ while $\frac{\Gamma_2(\beta)}{\beta}$ approaches $
\frac{1+1.75^{-2}}{1+1.75^{2}}\approx 0.3265$.
\item When $\beta$ is large enough, we have $\Gamma(\beta)=\Gamma_3(\beta)$, whose slope approaches $1$: $\lim_{\beta\to+\infty}\frac{\Gamma(\beta)}{\beta}=\lim_{\lambda\to 1^+,\rho\to 1^+}\frac{1+\lambda\rho^{-2}}{1+\lambda\rho^{2}}=1$.
\end{itemize}
It remains to prove that an FPTAS exists for $\gamma<\Gamma(\beta)$. It is easy to verify that for a fixed choice of $\rho$ and $\lambda$ as above, if we choose a $t$ with the same sign but smaller absolute value, it remains in the tractable range of Theorem~\ref{thm:spin}, Theorem~\ref{thm:2}, and Theorem~\ref{thm:1}. For any pair $(\beta, \gamma)$ with $\frac{1}{\beta} < \gamma < \Gamma(\beta)$, there exists a pair $(\beta^*, \gamma^*)$ on the curve $\Gamma$ such that $\beta \gamma =\beta^*\gamma^*$. By the definition of $\Gamma$, we know that $\beta^*=\frac{1+\lambda\rho^2}{t^*(1-\lambda)}$ and $\gamma^*=\frac{t^*(1+\lambda\rho^{-2})}{1-\lambda}$ for some $\rho,\lambda$ and $t^*=1$ or $-1$, for which the Fibonacci gate $f_k=\rho^k+\mu (t^*)^n(-\rho)^{-k}$ admits an FPTAS. By our observation, we still have an FPTAS if we replace $t^*$ by a $t$ with the same sign but smaller absolute value. In particular, if we choose $t=\frac{t^* \beta^*}{\beta}$, we get $\beta=\frac{1+\lambda\rho^2}{t(1-\lambda)}$ and $\gamma=\frac{t(1+\lambda\rho^{-2})}{1-\lambda}$. So $(\beta,\gamma)$ also admits an FPTAS by holographic reduction.
\end{proof}
}
\section{Computation Tree Recursion} \label{sec:recursion} In the exact polynomial time algorithm for Fibonacci gates without edge weights, one crucial property of a set of Fibonacci functions with a fixed parameter is that it is closed under connecting two nodes together~\cite{FOCS08}. This is no longer true if we have non-trivial edge weights or when different Fibonacci functions have different parameters. However, we can still use the special property of a Fibonacci function to decompose a vertex, which is the key property behind all the FPTAS algorithms in this paper.
Let $\Omega = (G(V,E), \{ F_v | v \in V \}, \{\lambda_e | e \in E\} )$ be an instance of ${\rm Holant}({\cal F}_{c_1,c_2}^{p,q}, \Lambda_{\lambda_1,\lambda_2} )$, $v\in V$ be a vertex of the instance with degree $d_1+d_2$ ($d_1, d_2\geq 1$) and $e_1, e_2, \ldots, e_{d_1+d_2}$ be its incident edges. We can construct a new Holant instance $\Omega'$: $\Omega'$ is the same as $\Omega$ except that $v$ is decomposed into two vertices $v', v''$. $e_1, e_2, \ldots, e_{d_1}$ are connected to $v'$ and $e_{d_1+1}, e_{d_1+2}, \ldots, e_{d_1+d_2}$ are connected to $v''$. There is a new edge $e$ connecting $v'$ and $v''$. If the function on the original $v$ is $[f_0, f_1, \ldots, f_{d_1+d_2}]$, a Fibonacci function with parameter $c$, then the function on $v'$ is $[f_0, f_1, \ldots, f_{d_1}]$ and the function on $v''$ is $[1,0,1,c,\ldots]$, also a Fibonacci function with parameter $c$. The edge weight on the new edge $e$ is $1$. The functions on all other nodes and the edge weights on all other edges (except the new $e$) remain the same as in $\Omega$. We use the following notation to denote this decomposition operation: $$\Omega'=D(\Omega, v, \{e_1, e_2, \ldots, e_{d_1}\}, \{e_{d_1+1}, e_{d_1+2}, \ldots, e_{d_1+d_2}\}).$$ \ifabs{Using the special property of Fibonacci functions, we have the following lemma. } { \begin{figure}
\caption{Vertex decomposition}
\end{figure}}
\begin{lemma}\label{lem:decomposition}
Let $\Omega'=D(\Omega, v, E_1, E_2)$. Then $Z(\Omega)=Z(\Omega')$ and for all $e\in E$, $\mathbb{P}_\Omega(\sigma(e)=0)=\mathbb{P}_{\Omega'}(\sigma(e)=0)$. \end{lemma}
\ifabs{} { \begin{proof}
There is a natural one-to-two correspondence from each configuration $\sigma$ of $\Omega$ to configurations $\sigma'_0$ and $\sigma'_1$ of $\Omega'$:
$\sigma'_0$ and $\sigma'_1$ are identical to $\sigma$ on $E$ while $\sigma'_0(e)=0$ and $\sigma'_1(e)=1$ for the additional edge $e$ in $\Omega'$. Then our conclusion follows from the fact that
\[w_\Omega(\sigma)=w_{\Omega'}(\sigma'_0)+w_{\Omega'}(\sigma'_1).\]
We verify this in the following.
The contributions of all the other vertex functions and edge weights are the same on both sides. So we only need to verify that \[F_v(\sigma(E_1+E_2))= F_{v'}(\sigma(E_1)0) F_{v''}(\sigma(E_2)0) +F_{v'}(\sigma(E_1)1) F_{v''}(\sigma(E_2)1),\]
or
\[f_{|\sigma(E_1+E_2)|}=f_{|\sigma(E_1)|} g_{|\sigma(E_2)|} + f_{|\sigma(E_1)|+1} g_{|\sigma(E_2)|+1}, \]
where $\{g_i\}$ is the Fibonacci function on $v''$. The above identity can then be verified from the definitions of $f$ and $g$. \end{proof}}
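The identity underlying this decomposition can also be checked numerically for small Hamming weights; the parameter $c$ and the first two values of $f$ in the sketch below are illustrative choices:

```python
# f is a Fibonacci signature with parameter c; g = [1, 0, 1, c, ...] is the
# signature placed on v''. The decomposition identity claims
#   f_{w1 + w2} = f_{w1} * g_{w2} + f_{w1+1} * g_{w2+1}.
c = 2.0
f = [1.0, 3.0]          # illustrative initial values
g = [1.0, 0.0]          # fixed initial values of the v'' signature
for _ in range(10):     # extend both sequences by the Fibonacci recurrence
    f.append(c * f[-1] + f[-2])
    g.append(c * g[-1] + g[-2])

for w1 in range(5):
    for w2 in range(5):
        assert abs(f[w1 + w2] - (f[w1] * g[w2] + f[w1 + 1] * g[w2 + 1])) < 1e-6
print("decomposition identity verified")
```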
Let $\Omega^{e}$ be a dangling instance of ${\rm Holant}({\cal F}_{c_1,c_2}^p, \Lambda_{\lambda_1,\lambda_2} )$. Let $v$ be the attaching vertex of the dangling edge $e$ and $e_1, e_2, \ldots, e_d$ be the other incident edges of $v$. We compute $R(\Omega^{e})$ from smaller instances, depending on $d$. If $d=0$, then $R(\Omega^{e})$ can be computed directly. If $d=1$, we construct a smaller dangling instance $\Omega^{e_1}$ by removing $e$ and $v$ from $G$, making $e_1$ the new dangling edge and removing its weight. Then \begin{equation}\label{eq:rec_single}
R(\Omega^{e})=\frac{f_1+ \lambda_{e_1} f_2 R(\Omega^{e_1})}{f_0+ \lambda_{e_1} f_1 R(\Omega^{e_1})}. \end{equation} \ifabs{} { We define \[h(x)=\frac{f_1+ \lambda_{e_1} f_2 x}{f_0+ \lambda_{e_1} f_1 x} \] }
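As a sanity check, the $d=1$ recursion can be compared against brute-force enumeration on a minimal hypothetical instance: $v$ has signature $[1,1,2]$, and its only regular edge $e_1$ (weight $0.7$) leads to a degree-one vertex $u$ with signature $[1,1]$; all values are illustrative only:

```python
# Dangling edge e attached to v with signature f = [f0, f1, f2]; one regular
# edge e1 (weight lam) leads to a degree-1 vertex u with signature [1, 1].
f = [1.0, 1.0, 2.0]
lam = 0.7

# Brute force: R = Z(sigma(e)=1) / Z(sigma(e)=0), summing over sigma(e1).
Z0 = sum((lam ** s) * f[0 + s] * 1.0 for s in (0, 1))   # u contributes 1.0
Z1 = sum((lam ** s) * f[1 + s] * 1.0 for s in (0, 1))
R_brute = Z1 / Z0

# The d = 1 recursion: removing e and v leaves u dangling on e1,
# and R(Omega^{e1}) = 1 because u's signature is [1, 1].
R_sub = 1.0
R_rec = (f[1] + lam * f[2] * R_sub) / (f[0] + lam * f[1] * R_sub)

assert abs(R_brute - R_rec) < 1e-12
print(R_rec)
```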
If $d\geq 2$, we use the above lemma to decompose the vertex $v$ into $v'$ and $v''$, letting $e$ and $e_1$ connect to $v''$ and the remaining edges connect to $v'$. We use $e'$ to denote the edge between $v'$ and $v''$. By removing $e$ and $v''$ from $\Omega'$, we get a dangling instance $\Omega^{e', e_1}$ with two dangling edges $e',e_1$. \ifabs{} { \begin{figure}
\caption{Vertex decomposition ($d=3$)}
\end{figure} } \begin{align*} R(\Omega^{e})
&=\frac{Z(\Omega^{e}, \sigma(e)=1)} {Z(\Omega^{e}, \sigma(e)=0)}\\
&=\frac{\lambda_{e_1} Z(\Omega^{e', e_1}, \sigma(e' e_1)=01)+Z(\Omega^{e', e_1}, \sigma(e' e_1)=10)+c \lambda_{e_1} Z(\Omega^{e', e_1}, \sigma(e' e_1)=11)} {Z(\Omega^{e', e_1}, \sigma(e' e_1)=00)+\lambda_{e_1} Z(\Omega^{e', e_1}, \sigma(e' e_1)=11)}\\
&=\frac{\lambda_{e_1} \mathbb{P}_{\Omega^{e', e_1}}(\sigma(e' e_1)=01)+\mathbb{P}_{\Omega^{e', e_1}}(\sigma(e' e_1)=10)+c \lambda_{e_1} \mathbb{P}_{\Omega^{e', e_1}}(\sigma(e' e_1)=11)} {\mathbb{P}_{\Omega^{e', e_1}}(\sigma(e' e_1)=00)+\lambda_{e_1} \mathbb{P}_{\Omega^{e', e_1}}(\sigma(e' e_1)=11)}. \end{align*}
In the above recursion, the marginal probability of the original instance is written in terms of smaller instances, but with two dangling edges. In order to continue the recursive process, we need to convert them into instances with a single dangling edge. This can be done by pinning one of the two dangling edges, or just leaving one of the edges free (in which case the dangling end of the free edge can be treated as a regular vertex with signature $[1,1]$). \ifabs{We use $\Pin{\Omega}{e}{x}$ to denote the new instance obtained by pinning the edge $e$ of the instance $\Omega$ to $x$. } {} There are many choices in deciding which edge to pin and to what state it is pinned. Each choice leads to a different recursion and consequently has an impact on the subsequent analysis. Here we give an example which is used in the proofs of Theorem \ref{thm:1} and Theorem \ref{thm:spin}; in the proof of Theorem \ref{thm:2}, we use a different one.
Set $\Omega^{e'}=\Pin{\Omega^{e', e_1}}{e_1}{0}$, $\Omega^{e_1}=\Pin{\Omega^{e', e_1}}{e'}{0}$ and $\widetilde{\Omega}^{e_1}=\Pin{\Omega^{e', e_1}}{e'}{1}$. By the definitions, we have
\ifabs{$\mathbb{P}_{\Omega^{e'}}(\sigma(e')=0)=\mathbb{P}_{\Omega^{e', e_1}}(\sigma(e')=0|\sigma(e_1)=0)$,
$\mathbb{P}_{\Omega^{e_1}}(\sigma(e_1)=0)=\mathbb{P}_{\Omega^{e', e_1}}(\sigma(e_1)=0|\sigma(e')=0)$, and $\mathbb{P}_{\widetilde{\Omega}^{e_1}}(\sigma(e_1)=0)=\mathbb{P}_{\Omega^{e', e_1}}(\sigma(e_1)=0|\sigma(e')=1)$.}
{$$\mathbb{P}_{\Omega^{e'}}(\sigma(e')=0)=\mathbb{P}_{\Omega^{e', e_1}}(\sigma(e')=0|\sigma(e_1)=0),$$
$$\mathbb{P}_{\Omega^{e_1}}(\sigma(e_1)=0)=\mathbb{P}_{\Omega^{e', e_1}}(\sigma(e_1)=0|\sigma(e')=0),$$
$$\mathbb{P}_{\widetilde{\Omega}^{e_1}}(\sigma(e_1)=0)=\mathbb{P}_{\Omega^{e', e_1}}(\sigma(e_1)=0|\sigma(e')=1).$$} Given these relations and the fact that \[\mathbb{P}_{\Omega^{e', e_1}}(\sigma(e' e_1)=00)+\mathbb{P}_{\Omega^{e', e_1}}(\sigma(e' e_1)=01)+\mathbb{P}_{\Omega^{e', e_1}}(\sigma(e' e_1)=10)+\mathbb{P}_{\Omega^{e', e_1}}(\sigma(e' e_1)=11)=1,\] \ifabs{we can solve for these marginal probabilities and substitute them into the above recursion to get } { we can solve for these marginal probabilities and get \[\mathbb{P}_{\Omega^{e', e_1}}(\sigma(e' e_1)=00)=\frac{1}{1+R(\Omega^{e'})+R(\Omega^{e_1})+R(\Omega^{e'}) R(\widetilde{\Omega}^{e_1})},\] \[\mathbb{P}_{\Omega^{e', e_1}}(\sigma(e' e_1)=01)=\frac{R(\Omega^{e_1})}{1+R(\Omega^{e'})+R(\Omega^{e_1})+R(\Omega^{e'}) R(\widetilde{\Omega}^{e_1})},\] \[\mathbb{P}_{\Omega^{e', e_1}}(\sigma(e' e_1)=10)=\frac{R(\Omega^{e'})}{1+R(\Omega^{e'})+R(\Omega^{e_1})+R(\Omega^{e'}) R(\widetilde{\Omega}^{e_1})},\] \[\mathbb{P}_{\Omega^{e', e_1}}(\sigma(e' e_1)=11)=\frac{R(\Omega^{e'}) R(\widetilde{\Omega}^{e_1})}{1+R(\Omega^{e'})+R(\Omega^{e_1})+R(\Omega^{e'}) R(\widetilde{\Omega}^{e_1})}.\] Substituting these into the above recursion, we get} \begin{equation}\label{eq:rec}
R(\Omega^{e}) =\frac{\lambda_{e_1} R(\Omega^{e_1}) +R(\Omega^{e'}) +c \lambda_{e_1} R(\Omega^{e'}) R(\widetilde{\Omega}^{e_1})} {1+\lambda_{e_1}R(\Omega^{e'}) R(\widetilde{\Omega}^{e_1})} \end{equation} \ifabs{} { We define \[g(x,y,z)=\frac{\lambda_{e_1} y +x +c \lambda_{e_1} x z} {1+\lambda_{e_1}x z}.\] }
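The step of recovering the four joint marginals from the three ratios $R(\Omega^{e'})$, $R(\Omega^{e_1})$ and $R(\widetilde{\Omega}^{e_1})$ can be verified numerically; the joint distribution in the sketch below is an arbitrary illustrative choice:

```python
# An arbitrary joint distribution over (sigma(e'), sigma(e1)).
P = {"00": 0.1, "01": 0.2, "10": 0.3, "11": 0.4}

# The three conditional ratios used in the recursion:
R_ep  = P["10"] / P["00"]   # R(Omega^{e'}):       e1 pinned to 0
R_e1  = P["01"] / P["00"]   # R(Omega^{e1}):       e' pinned to 0
R_te1 = P["11"] / P["10"]   # R(tilde Omega^{e1}): e' pinned to 1

# The claimed closed forms for the joint marginals:
den = 1 + R_ep + R_e1 + R_ep * R_te1
assert abs(P["00"] - 1 / den) < 1e-9
assert abs(P["01"] - R_e1 / den) < 1e-9
assert abs(P["10"] - R_ep / den) < 1e-9
assert abs(P["11"] - R_ep * R_te1 / den) < 1e-9
print("joint marginals recovered from the three ratios")
```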
If $e'$ and $e_1$ are in different connected components of $\Omega^{e', e_1}$, then the marginal probability of $e_1$ is independent of $e'$ and as a result $R(\widetilde{\Omega}^{e_1})=R({\Omega}^{e_1})$. So in this case, we have
\begin{equation}\label{eq:rec_separate}
R(\Omega^{e}) =\frac{\lambda_{e_1} R(\Omega^{e_1}) +R(\Omega^{e'}) +c \lambda_{e_1} R(\Omega^{e'}) R({\Omega}^{e_1})} {1+\lambda_{e_1}R(\Omega^{e'}) R({\Omega}^{e_1})} \end{equation} \ifabs{} { We define \[\hat{g}(x,y)=\frac{\lambda_{e_1} y +x +c \lambda_{e_1} x y} {1+\lambda_{e_1}x y}.\] }
Starting from a dangling instance $\Omega^{e}$, we can compute $R(\Omega^{e})$ by one of (\ref{eq:rec_single}), (\ref{eq:rec}) and (\ref{eq:rec_separate}) recursively. We note that if $\Omega^{e} \in {\rm Holant}({\cal F}_{c_1,c_2}^{p,q}, \Lambda_{\lambda_1,\lambda_2} )$, the instances involved in the recursion are also in the same family. \ifabs{We define three functions according to these three recursions: \[ h(x)=\frac{f_1+ \lambda_{e_1} f_2 x}{f_0+ \lambda_{e_1} f_1 x} ,\ \ \ \ \ g(x,y,z)=\frac{\lambda_{e_1} y +x +c \lambda_{e_1} x z} {1+\lambda_{e_1}x z},\ \ \ \mbox{ and }\ \ \ \hat{g}(x,y)=\frac{\lambda_{e_1} y +x +c \lambda_{e_1} x y} {1+\lambda_{e_1}x y}.\] } {} By expanding this recursion, we get a computation tree recursion to compute $R(\Omega^{e})$. We need one more step to compute the marginal probability of an edge in a regular instance. \ifabs{ This can be done similarly and we have the following lemma.} {Let $e=(u,v)$ be an edge in a regular instance $\Omega$. We can use Lemma \ref{lem:decomposition} to decompose vertices $u$ and $v$ into smaller ones if their degrees are larger than three. Therefore, we can assume that the degrees of $u$ and $v$ are both at most three. In the following, we assume $d(u)=d(v)=3$; the other cases are similar and simpler. We denote the other two incident edges of $u$ by $e_1$ and $e_2$, and the other two incident edges of $v$ by $e_3$ and $e_4$. The function on $u$ is $F_u$ and the function on $v$ is $F_v$. We use $\Omega^D=\Omega^{e_1, e_2, e_3, e_4}$ to denote the dangling instance obtained by removing $u$, $v$ and the edge $e=(u,v)$ from $\Omega$. Then it follows from the definition that \begin{align*}
&\ \ \mathbb{P}_\Omega(\sigma(e)=0)\\
&=\frac{\sum_{x_1,x_2, x_3, x_4\in \{0,1\}} Z( \Omega^D, \sigma(e_1 e_2 e_3 e_4) = x_1 x_2 x_3 x_4) F_u(x_1 x_2 0) F_v(x_3 x_4 0)}{\sum_{x_1,x_2, x_3, x_4\in \{0,1\}} \left( Z( \Omega^D, \sigma(e_1 e_2 e_3 e_4) = x_1 x_2 x_3 x_4) (F_u(x_1 x_2 0) F_v(x_3 x_4 0)+ F_u(x_1 x_2 1) F_v(x_3 x_4 1) ) \right)}\\
&=\frac{\sum_{x_1,x_2, x_3, x_4\in \{0,1\}} \mathbb{P}_{\Omega^D}(\sigma(e_1 e_2 e_3 e_4) = x_1 x_2 x_3 x_4) F_u(x_1 x_2 0) F_v(x_3 x_4 0)}{\sum_{x_1,x_2, x_3, x_4\in \{0,1\}} \left( \mathbb{P}_{\Omega^D}(\sigma(e_1 e_2 e_3 e_4) = x_1 x_2 x_3 x_4) (F_u(x_1 x_2 0) F_v(x_3 x_4 0)+ F_u(x_1 x_2 1) F_v(x_3 x_4 1) ) \right)},
\end{align*} where $\mathbb{P}_{\Omega^D}(\sigma(e_1 e_2 e_3 e_4) = x_1 x_2 x_3 x_4) $ can be further written as a product of four marginal probabilities of dangling instances with a single dangling edge each: \[\mathbb{P}_{\Omega^D}(\sigma(e_1 e_2 e_3 e_4) = x_1 x_2 x_3 x_4)=\prod_{k=1,2,3,4} \mathbb{P}_{\Omega^{D_k}}(\sigma(e_k)=x_k), \] where $\Omega^{D_k}$ is obtained from $\Omega^D$ by pinning: $e_1, e_2, \ldots e_{k-1}$ are pinned to $x_1, x_2, \ldots x_{k-1}$ respectively, while $e_{k+1}, e_{k+2}, \ldots e_4$ are all pinned with weight $\frac{1}{2}$ (that is, they remain free). Thus if we can estimate the marginal probabilities of dangling instances with sufficient precision, we can use the above relation to compute $\mathbb{P}_\Omega(\sigma(e)=0)$. Since this recursion only involves constantly many sub-instances and their derivatives are all bounded, we conclude the following lemma. } \begin{lemma}
\label{lem:dangling-to-reg}
If we can $\epsilon$-approximate $R(\Omega^{e})$ for any dangling instance $\Omega^{e}$ of ${\rm Holant}({\cal F}_{c_1,c_2}^{p,q}, \Lambda_{\lambda_1,\lambda_2} )$
in time $poly(n, \frac{1}{\epsilon})$, then we can also $\epsilon$-approximate the marginal probability of any edge of a regular instance of ${\rm Holant}({\cal F}_{c_1,c_2}^{p,q}, \Lambda_{\lambda_1,\lambda_2} )$ in time $poly(n, \frac{1}{\epsilon})$. \end{lemma}
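Concretely, the combination step behind this lemma is a small finite sum. The following Python sketch (illustrative only, not part of the paper: the oracle `joint`, standing in for the product of dangling-instance marginals, and the weight-indexed symmetric signatures are hypothetical stand-ins) evaluates the displayed identity for $\mathbb{P}_\Omega(\sigma(e)=0)$ when $d(u)=d(v)=3$:

```python
from itertools import product

def edge_marginal(joint, F_u, F_v):
    """Combine the joint marginal of the four dangling edges e_1..e_4 with
    the symmetric signatures F_u, F_v (indexed by Hamming weight) to
    recover P(sigma(e) = 0), following the displayed identity."""
    num = den = 0.0
    for x in product((0, 1), repeat=4):
        p = joint(x)                     # P_{Omega^D}(sigma(e_1..e_4) = x)
        wu0, wv0 = F_u[x[0] + x[1]], F_v[x[2] + x[3]]          # e assigned 0
        wu1, wv1 = F_u[x[0] + x[1] + 1], F_v[x[2] + x[3] + 1]  # e assigned 1
        num += p * wu0 * wv0
        den += p * (wu0 * wv0 + wu1 * wv1)
    return num / den

# Toy check: with all-ones signatures and a uniform joint marginal,
# both values of sigma(e) are symmetric, so the marginal is 1/2.
print(edge_marginal(lambda x: 1 / 16, [1] * 4, [1] * 4))  # 0.5
```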
\section{Algorithm}\label{sec:algo} \ifabs{The procedure from marginal probabilities to partition function is rather standard and we have the following lemma.} { The general framework of the algorithm is standard. We use the marginal probabilities to compute the partition function and use the computation tree recursion to estimate the marginal probabilities.}
\begin{lemma}\label{lemma:prob-to-FPTAS}
If for any $\epsilon>0$ and any $\Omega^{e}$ of ${\rm Holant}\left({\cal F}_{c_1,c_2}^{p,q}, \Lambda_{\lambda_1,\lambda_2} \right)$, we have a deterministic algorithm to get $\widehat{P}$ in time $poly\left(n, \frac{1}{\epsilon}\right)$ such that $|\widehat{P}-\mathbb{P}_{\Omega^{e}} (\sigma(e)=0) |\leq \epsilon $, we have an FPTAS for ${\rm Holant}({\cal F}_{c_1,c_2}^{p,q}, \Lambda_{\lambda_1,\lambda_2})$. \end{lemma}
\ifabs{} { \begin{proof} By Lemma~\ref{lem:dangling-to-reg}, if we can compute an $\epsilon$ additive approximation of the marginal probability of a dangling instance in time $poly\left(n, \frac{1}{\epsilon}\right)$, we can also compute an $\epsilon$ additive approximation of the marginal probability of an edge in a regular instance in time $poly\left(n, \frac{1}{\epsilon}\right)$, and further an $\frac{\epsilon}{6m}$ additive approximation in time $poly\left(n, \frac{1}{\epsilon}\right)$.
The partition function can be approximated from estimated marginal probabilities by the following standard procedure. Let $e_1,e_2,\ldots,e_m$ be an enumeration of the edges $E$. \begin{enumerate} \item Let $\Omega_1=\Omega$. For $k=1,2,\ldots,m$, assuming that $\Omega_k$ is well-defined, use the algorithm to compute $\hat{\mathbb{P}}_{\Omega_k}(\sigma(e_k)=0)$. If $\hat{\mathbb{P}}_{\Omega_k}(\sigma(e_k)=0)\geq \frac{1}{2}$, set $x_k=0$; otherwise set $x_k=1$. Construct $\Omega_{k+1}$ by pinning $e_k$ of $\Omega_k$ to $x_k$. \item Compute $\widehat{Z}(\Omega)=\frac{w_\Omega(x_1 x_2 \cdots x_m)}{\prod_{k=1}^m\hat{\mathbb{P}}_{\Omega_k}(\sigma(e_k)=x_k)}$ and return $\widehat{Z}(\Omega)$. \end{enumerate} It is clear that the running time is $poly\left(n, \frac{1}{\epsilon}\right)$. By the construction, we have $\hat{\mathbb{P}}_{\Omega_k}(\sigma(e_k)=x_k)\geq \frac{1}{2}$. Since it is a $\frac{\epsilon}{6m}$ additive approximation of $\mathbb{P}_{\Omega_k}(\sigma(e_k)=x_k)$, we have $\mathbb{P}_{\Omega_k}(\sigma(e_k)=x_k)> \frac{1}{3}$. Thus \[ \frac{\hat{\mathbb{P}}_{\Omega_k}(\sigma(e_k)=x_k)}{\mathbb{P}_{\Omega_k}(\sigma(e_k)=x_k)} \in \left[1-\frac{\epsilon}{2m}, 1+\frac{\epsilon}{2m}\right]. \] By definition we have $\mathbb{P}_{\Omega}(x_1 x_2 \cdots x_m)=\frac{w_\Omega(x_1 x_2 \cdots x_m)}{Z(\Omega)}$, and thus $Z(\Omega) = \frac{w_\Omega(x_1 x_2 \cdots x_m)}{\prod_{k=1}^m\mathbb{P}_{\Omega_k}(\sigma(e_k)=x_k)}$. Therefore, we have \[ 1-\epsilon \le \left(1-\frac{\epsilon}{2m}\right)^m \le \frac{Z(\Omega)}{\widehat{Z}(\Omega)} = \prod_{k=1}^m\frac{\hat{\mathbb{P}}_{\Omega_k}(\sigma(e_k)=x_k)}{\mathbb{P}_{\Omega_k}(\sigma(e_k)=x_k)} \le \left(1+\frac{\epsilon}{2m}\right)^m \le 1+\epsilon, \] which says that $\widehat{Z}(\Omega)$ approximates $Z(\Omega)$ within relative error $\epsilon$. This completes the proof. \end{proof}}
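The self-reduction in the proof above can be sketched in a few lines of Python. This is an illustration only: `brute_marginal` is a brute-force stand-in for the approximate marginal oracle of the lemma, and the toy weight function is our own. With exact marginals the procedure recovers $Z(\Omega)$ exactly.

```python
from itertools import product

def brute_marginal(weight, m, pins, k):
    """Exact P(sigma(e_k) = 0) under the pins made so far; a stand-in
    for the approximate marginal oracle assumed by the lemma."""
    tot = pk0 = 0.0
    for sigma in product((0, 1), repeat=m):
        if any(sigma[j] != v for j, v in pins.items()):
            continue
        w = weight(sigma)
        tot += w
        if sigma[k] == 0:
            pk0 += w
    return pk0 / tot

def partition_from_marginals(weight, m, marginal):
    """Pin edges one by one to their more likely value, then divide the
    weight of the pinned assignment by the product of the marginals."""
    pins, probs = {}, []
    for k in range(m):
        p0 = marginal(weight, m, pins, k)
        xk = 0 if p0 >= 0.5 else 1
        probs.append(p0 if xk == 0 else 1 - p0)
        pins[k] = xk
    x = tuple(pins[k] for k in range(m))
    prod = 1.0
    for p in probs:
        prod *= p
    return weight(x) / prod

# With exact marginals the reduction recovers Z exactly:
w = lambda s: 2.0 ** sum(s)   # independent edges with weights 1 and 2
print(partition_from_marginals(w, 3, brute_marginal))  # close to 27.0 = (1+2)^3
```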
Before we use the computation tree recursion to compute the marginal probability, we need the following lemma to handle shallow instances separately. We denote by $SP(\Omega^{e})$ the length of the longest simple path in $G$ that contains $e$.
\begin{lemma}\label{lemma:bounded-simple-path} Let $L$ be a constant. We have a polynomial time algorithm to compute $R(\Omega^{e})$ for all $\Omega^{e}$ of ${\rm Holant}({\cal F}_{c_1,c_2}^p, \Lambda_{\lambda_1,\lambda_2} )$ with $SP(\Omega^{e})\leq L$. \end{lemma}
The proof of the above lemma uses a holographic reduction to the spin world and makes use of the self-avoiding walk tree~\cite{Weitz06} for two-state spin systems. The length of the longest simple path is the same as the depth of the self-avoiding walk tree. \ifabs{See the full version for more details.} {In order to make the argument go through, we define an extended two-state spin system to be a two-state spin system where the vertex weight can be any real number and the edge function can be any (not necessarily symmetric) real function. In this system, we can define the partition function as usual. With that, we can algebraically define formal marginal probabilities, which may be any real numbers. Under these definitions, the technique of the self-avoiding walk tree is still valid and can be used to compute the partition function of extended two-state spin systems. This yields the following lemma.
\begin{lemma} The partition function of an extended two-state spin system with bounded simple paths can be computed in polynomial time. \end{lemma}
Any instance of ${\rm Holant}\left({\cal F}_{c_1,c_2}^{p,q}, \Lambda_{\lambda_1,\lambda_2} \right)$ can be transformed into an instance of an extended two-state spin system with the same partition function under a holographic reduction. If we can compute the partition function, we can also compute marginal probabilities. This proves Lemma \ref{lemma:bounded-simple-path}. }
Now we give the formal procedure to estimate $\mathbb{P}_{\Omega^e}(\sigma(e)=0)$. Since there is a one-to-one correspondence between $\mathbb{P}_{\Omega^e}(\sigma(e)=0)$ and $R(\Omega^{e})$, we can define our recursion on $R(\Omega^{e})$, and at the final step convert $R(\Omega^{e})$ back to $\mathbb{P}_{\Omega^e}(\sigma(e)=0)$.
Let bounds $R_1, R_2$ and depth $L$ be obtained for the family of dangling instances in the sense that for any dangling instance with $SP(\Omega^{e})\geq L$, we have $R(\Omega^{e})\in [R_1, R_2]$. Formally, for $t\ge 0$, the quantity $R^{t}(\Omega^{e})$ is recursively defined as follows: \begin{itemize} \item If $SP(\Omega^{e})\leq 2L$, we compute $R^{t}(\Omega^{e})=R(\Omega^{e})$ by Lemma \ref{lemma:bounded-simple-path}. \item Else if $t=0$, let $R^{0}(\Omega^{e})=R_1$. \item Else if $t>0$, use one of the recursions to get $\tilde{R}^t(\Omega^{e}) = g(R^{t-1}(\Omega^{e'}), R^{t-1}(\Omega^{e_1}),R^{t-1}(\widetilde{\Omega}^{e_1}))$, $\tilde{R}^t(\Omega^{e})=h(R^{t-1}(\Omega^{e_1}))$ or $\tilde{R}^t(\Omega^{e}) = \hat{g}(R^{t-1}(\Omega^{e'}), R^{t-1}(\Omega^{e_1}))$. Return the median of $R_1, \tilde{R}^{t}(\Omega^{e}), R_2$:
$R^{t}(\Omega^{e})=Med(R_1, \tilde{R}^{t}(\Omega^{e}), R_2)$. \end{itemize} There are three possible recursions and we define four amortized decay rates: \ifabs{
\[\alpha_1(x) = \frac{\Phi(x) \left|\deriv{h}{x}\right|}{\Phi(h(x))},\ \ \ \alpha_3(x,y) =\frac{\left|\pderiv{\hat{g}}{x}\right|\Phi(x)}{\Phi(\hat{g}(x,y))},\ \ \ \
\alpha_4(x,y) =\frac{\left|\pderiv{\hat{g}}{y}\right|\Phi(y)}{\Phi(\hat{g}(x,y))},\]
\[ \alpha_2(x,y,z) =\frac{1}{\Phi(g(x,y,z))}\left(\left|\pderiv{g}{x}\right|\Phi(x)+ \left|\pderiv{g}{y}\right|\Phi(y)+\left|\pderiv{g}{z}\right|\Phi(z)\right),\] } { \begin{align*}
\alpha_1(x) &= \frac{\Phi(x) \left|\deriv{h}{x}\right|}{\Phi(h(x))},\\
\alpha_2(x,y,z) &=\frac{1}{\Phi(g(x,y,z))}\left(\left|\pderiv{g}{x}\right|\Phi(x)+ \left|\pderiv{g}{y}\right|\Phi(y)+\left|\pderiv{g}{z}\right|\Phi(z)\right),\\
\alpha_3(x,y) &=\frac{\left|\pderiv{\hat{g}}{x}\right|\Phi(x)}{\Phi(\hat{g}(x,y))},\\
\alpha_4(x,y) &=\frac{\left|\pderiv{\hat{g}}{y}\right|\Phi(y)}{\Phi(\hat{g}(x,y))},
\end{align*}} where $\Phi(\cdot)$ is a potential function.
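The clamped, depth-limited recursion can be illustrated with a simplified sketch (not part of the paper, and not the full computation tree: for illustration we assume every step applies the unary recursion $h=h^c_{\lambda,p}$, with hypothetical parameter values):

```python
def h(x, c, lam, p):
    # single-edge recursion h(x) = (p + (1 + c*p)*lam*x) / (1 + lam*x*p)
    return (p + (1 + c * p) * lam * x) / (1 + lam * x * p)

def med(a, b, c):
    # median of three values, as in R^t = Med(R_1, R~^t, R_2)
    return sorted((a, b, c))[1]

def R_t(t, R1, R2, c, lam, p):
    """Depth-t estimate: start from R1 at depth 0 and clamp every
    intermediate value to [R1, R2]."""
    x = R1
    for _ in range(t):
        x = med(R1, h(x, c, lam, p), R2)
    return x

# With c = 1 and lam = 1 the true ratio is the fixpoint rho = (1+sqrt(5))/2
# (independent of p), and the clamped iteration converges to it geometrically.
rho = (1 + 5 ** 0.5) / 2
print(abs(R_t(50, 0.5, 3.0, c=1.0, lam=1.0, p=1.0) - rho))
```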
\begin{definition} We call a function $\Phi: (0, +\infty)\rightarrow (0, +\infty)$ \emph{nice} if there is some function $f: [1,+\infty) \to (0,+\infty)$ such that for any $c\geq 1$ and $x,y>0$ with $\frac{x}{c}\leq y \leq c x$, we have $\frac{\Phi(x)}{\Phi(y)}\leq f(c)$. \end{definition} \ifabs{} { For any fixed constant $d$, $\Phi(x)=x^d$ is a nice function while $\Phi(x)=2^x$ is not.} \begin{lemma} \label{lem:algo} Let bounds $R_1, R_2$ and depth $L$ be obtained for dangling instances of ${\rm Holant}({\cal F}_{c_1,c_2}^{p,q}, \Lambda_{\lambda_1,\lambda_2} )$ such that for any dangling instance with $SP(\Omega^{e})\geq L$, we have $R(\Omega^{e})\in [R_1, R_2]$. Suppose there exist a nice function $\Phi(\cdot)$ and a constant $\alpha<1$ such that $\alpha_1(x) \leq \alpha$ for all $x \in [R_1, R_2]$, $\alpha_2(x,y,z) \leq \alpha$ for all $x,y,z \in [R_1, R_2]$, $\alpha_3(x,y)\leq \alpha$ for all $x\in [R_1, R_2]$, and $\alpha_4(x,y)\leq \alpha$ for all $y\in [R_1, R_2]$. Then there is an FPTAS for ${\rm Holant}({\cal F}_{c_1,c_2}^{p,q}, \Lambda_{\lambda_1,\lambda_2} )$. \end{lemma}
\begin{proof}
By Lemma \ref{lemma:prob-to-FPTAS}, it is enough to give a $poly\left(n, \frac{1}{\epsilon}\right)$ algorithm to get $\widehat{P}$ such that $|\widehat{P}-\mathbb{P}_{\Omega^{e}} (\sigma(e)=0) |\leq \epsilon $. We shall use the above recursive algorithm to compute an estimation of $R(\Omega^{e})$ and then to compute $\widehat{P}$.
Given any $\Omega^{e}$ and constant $L$, we can test whether $SP(\Omega^{e})<2 L$ in polynomial time. Let $\phi(x)=\int \frac{1}{\Phi(x)}\, dx$, a monotonically increasing function. We prove by induction that $$|\phi(R^t(\Omega^{e}))-\phi(R(\Omega^{e}))| \leq \alpha^t |\phi(R_1)-\phi(R(\Omega^{e}))| .$$
For the base case $t=0$, if $SP(\Omega^{e})\leq 2L$, then it is trivially true since $R^0(\Omega^{e})=R(\Omega^{e})$. Otherwise, it is also trivial since we set $R^0(\Omega^{e})=R_1$.
Now we assume that the inequality is true for $t-1$ and prove it for $t$. If $SP(\Omega^{e})\leq 2L$, then it is trivially true since $R^t(\Omega^{e})=R(\Omega^{e})$. Now we assume that $SP(\Omega^{e})>2L$, and as a result $R^t(\Omega^{e})$ is computed by a recursion. It is enough to prove the case $R^{t}(\Omega^{e})=Med(R_1, \tilde{R}^{t}(\Omega^{e}), R_2)=\tilde{R}^{t}(\Omega^{e})$; in the other cases, $R^{t}(\Omega^{e})$ is even closer to $R(\Omega^{e})$ since $R(\Omega^{e})\in [R_1, R_2]$. There are three cases to consider: \begin{enumerate}
\item $R^t(\Omega^{e})=h(R^{t-1}(\Omega^{e_1}))$. If $SP(\Omega^{e_1})\leq 2L$, then by the calculation $R^{t-1}(\Omega^{e_1})=R(\Omega^{e_1})$ and as a result $R^t(\Omega^{e})=h(R(\Omega^{e_1}))=R(\Omega^{e})$. Otherwise, we have
that $R^{t-1}(\Omega^{e_1}), R(\Omega^{e_1})\in [R_1, R_2]$.
\[ | \phi(R^t(\Omega^{e}))-\phi(R(\Omega^{e}))| = |\phi(h(R^{t-1}(\Omega^{e_1})))-\phi(h(R(\Omega^{e_1})))|
=\frac{\Phi(x) |\deriv{h}{x}|}{\Phi(h(x))} |\phi(R^{t-1}(\Omega^{e_1}))-\phi(R(\Omega^{e_1}))|,\]
by the mean value theorem, where $x$ lies between $R^{t-1}(\Omega^{e_1})$ and $R(\Omega^{e_1})$ and as a result $x\in [R_1, R_2]$. By the fact that
$\alpha_1(x) = \frac{\Phi(x) \left|\deriv{h}{x}\right|}{\Phi(h(x))}\leq \alpha$ for $x\in [R_1, R_2]$, we get
\[
| \phi(R^t(\Omega^{e}))-\phi(R(\Omega^{e}))| \leq \alpha|\phi(R^{t-1}(\Omega^{e_1}))-\phi(R(\Omega^{e_1}))| \leq \alpha^t |\phi(R_1)-\phi(R(\Omega^{e}))|,\]
where the last inequality uses the induction hypothesis.
\item $R^t(\Omega^{e}) = g(R^{t-1}(\Omega^{e'}), R^{t-1}(\Omega^{e_1}), R^{t-1}(\widetilde{\Omega}^{e_1}))$. In this case, we know that $e_1$ and $e'$ are connected and thus $$\frac{SP(\Omega^{e'})}{2} \leq SP(\Omega^{e_1})= SP(\widetilde{\Omega}^{e_1}) \leq 2 SP(\Omega^{e'}). $$
If $\min\{SP(\Omega^{e_1}),SP(\widetilde{\Omega}^{e_1}), SP(\Omega^{e'})\}>L$, then $ R(\Omega^{e_1}),R(\widetilde{\Omega}^{e_1}), R(\Omega^{e'})\in [R_1, R_2]$, and by a similar argument as above we get the conclusion from the fact that $\alpha_2(x,y,z) \leq \alpha, \forall x,y,z \in [R_1, R_2]$.
Otherwise, we have that $\max\{SP(\Omega^{e_1}),SP(\widetilde{\Omega}^{e_1}), SP(\Omega^{e'})\}\leq 2L$ and we have
$R^{t-1}(\Omega^{e'})=R(\Omega^{e'}), R^{t-1}(\Omega^{e_1})=R(\Omega^{e_1})$, and $ R^{t-1}(\widetilde{\Omega}^{e_1})=R(\widetilde{\Omega}^{e_1})$. Therefore, we have $R^t(\Omega^{e}) =R(\Omega^{e})$.
\item $R^t(\Omega^{e}) = \hat{g}(R^{t-1}(\Omega^{e'}), R^{t-1}(\Omega^{e_1}))$. In this case,
if $\max\{SP(\Omega^{e_1}), SP(\Omega^{e'})\}\leq 2L$, we have
$R^{t-1}(\Omega^{e'})=R(\Omega^{e'})$ and $ R^{t-1}(\Omega^{e_1})=R(\Omega^{e_1})$, and therefore $R^t(\Omega^{e})=R(\Omega^{e})$.
If $\min\{SP(\Omega^{e_1}), SP(\Omega^{e'})\}> 2L$, we know that both $R(\Omega^{e'})$ and $R(\Omega^{e_1})$ are in $[R_1,R_2]$. Then it is a weaker version of the above recursion of $g$ and we get the result.
The remaining case is that $\min\{SP(\Omega^{e_1}), SP(\Omega^{e'})\}\leq 2L$ and $\max\{SP(\Omega^{e_1}), SP(\Omega^{e'})\}> 2L$. For that, one of $R(\Omega^{e'})$ and $R(\Omega^{e_1})$
is in $[R_1,R_2]$ and the other one is equal to the correct value without error. We get our conclusion by the fact that $\alpha_3(x,y)\leq \alpha, \forall x\in [R_1, R_2]$ or $\alpha_4(x,y)\leq \alpha, \forall y\in [R_1, R_2]$ respectively. \end{enumerate} This completes the induction proof for
\ifabs{$|\phi(R^t(\Omega^{e}))-\phi(R(\Omega^{e}))| \leq \alpha^t |\phi(R_1)-\phi(R(\Omega^{e}))|$ .} {
$$|\phi(R^t(\Omega^{e}))-\phi(R(\Omega^{e}))| \leq \alpha^t |\phi(R_1)-\phi(R(\Omega^{e}))| .$$} Since
By the Mean Value Theorem, \[|\phi(R^t(\Omega^{e}))-\phi(R(\Omega^{e}))|= \frac{1}{\Phi(x)} |R^t(\Omega^{e})-R(\Omega^{e})| \mbox{ and }\ |\phi(R_1)-\phi(R(\Omega^{e}))|= \frac{1}{\Phi(y)} |R_1-R(\Omega^{e})|\] for some $x,y\in [R_1, R_2]$. Given that $\Phi(\cdot)$ is nice and $\frac{R_2}{R_1}$ is bounded by a constant, we conclude that there is a constant $C$ such that
$$|R^t(\Omega^{e})-R(\Omega^{e})| \leq C \alpha^t |R_1-R(\Omega^{e})| .$$ Let $\widehat{P}=\frac{1}{1+R^t(\Omega^{e})}$ then we have that \begin{align*}
|\widehat{P} -\mathbb{P}_{\Omega^{e}} (\sigma(e)=0) | &= \left|\frac{1}{1+R^t(\Omega^{e})} - \frac{1}{1+R(\Omega^{e})}\right|\\
&= \frac{|R^t(\Omega^{e})-R(\Omega^{e})|}{(1+R^t(\Omega^{e})) (1+R(\Omega^{e})) } \\
& \leq \frac{C \alpha^t |R_1-R(\Omega^{e})|}{(1+R^t(\Omega^{e})) (1+R(\Omega^{e})) } \\ &\leq C \alpha^t . \end{align*} Thus by an appropriate choice of $t=O\left(\log \frac{1}{\epsilon} \right)$ , we have
$|\widehat{P} -\mathbb{P}_{\Omega^{e}} (\sigma(e)=0) | \leq \epsilon $. \end{proof}
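The final choice of $t$ is elementary arithmetic: $C\alpha^t\le\epsilon$ as soon as $t\ge\log(C/\epsilon)/\log(1/\alpha)$. A minimal sketch with illustrative constants (not taken from the paper):

```python
import math

def depth_needed(C, alpha, eps):
    """Smallest t with C * alpha**t <= eps, i.e. t = O(log(1/eps))."""
    return math.ceil(math.log(C / eps) / math.log(1 / alpha))

t = depth_needed(C=1.0, alpha=0.5, eps=1e-3)
print(t, 1.0 * 0.5 ** t)  # t = 10, and 2**-10 <= 1e-3
```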
\ifabs {
\input{outline} } {
\section{Bounds} In this section, we prove various upper and lower bounds for $R(\Omega^{e})$. These bounds are crucial for obtaining the correlation decay property and hence the FPTAS. We start with the following straightforward bounds, which hold for any dangling Holant instance.
\begin{lemma}
Let $\Omega^e$ be a dangling Holant instance, let $v$ be the vertex incident to the dangling edge $e$, and let the function on $v$ be $F_v=[f_0,f_1,\dots,f_{d+1}]$. Then
\[\min_{k=0,1,\ldots, d} \frac{f_{k+1}}{f_k} \leq R(\Omega^{e}) \leq \max_{k=0,1,\ldots, d} \frac{f_{k+1}}{f_k} .\] \end{lemma}
\begin{proof}
Let $D=\{e_1,e_2,\dots,e_d\}$ be the other incident edges of $v$. For any fixed configuration $\pi \in \{0,1\}^D$,
we have $R(\Omega^{e}_\pi)=\frac{f_{|\pi|+1}}{f_{|\pi|}}$. Averaging over all possible configurations $\pi \in \{0,1\}^D$,
we see that $R(\Omega^{e})$ is sandwiched between the two extreme configurations. \end{proof}
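A toy computation of these sandwich bounds (illustrative only, with our own parameter choices), using a Fibonacci-type signature $f_{k+2}=c f_{k+1}+f_k$:

```python
def fibonacci_signature(c, f0, f1, arity):
    """Signature [f_0, ..., f_{arity}] with f_{k+2} = c*f_{k+1} + f_k."""
    f = [f0, f1]
    while len(f) <= arity:
        f.append(c * f[-1] + f[-2])
    return f

f = fibonacci_signature(c=1, f0=1, f1=1, arity=5)   # [1, 1, 2, 3, 5, 8]
ratios = [f[k + 1] / f[k] for k in range(len(f) - 1)]
# min and max consecutive ratios give the sandwich bounds on R(Omega^e)
print(min(ratios), max(ratios))  # 1.0 2.0
```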
In the above argument, we used the worst configuration of the edges $e_1,e_2,\dots,e_d$. If we have already established that the marginal probabilities of these edges lie within a certain range, we can get a more accurate estimate of $R(\Omega^{e})$. Using this idea recursively, we can get better and better bounds; this is the main approach of this section.
\begin{lemma}\label{lemma:rec-bound}
Suppose $R(\Omega^{e}) \in [R_1, R_2] $ for any dangling instance $\Omega^{e}$ from the family ${\rm Holant}({\cal F}_{c_1, c_2}^{p_0}, \Lambda_{\lambda_1,\lambda_2} )$ with $SP(\Omega^{e})\geq L$. Then for a dangling instance $\Omega^{e}$ of ${\rm Holant}({\cal F}_{c_1, c_2}^p, \Lambda_{\lambda_1,\lambda_2} )$ with $SP(\Omega^{e})\geq L+1$, we have
\[ \min_{p\geq p_0, c\in [c_1, c_2] , \lambda \in [\lambda_1, \lambda_2] , x\in [R_1, R_2] } \frac{ p +(1+c p )\lambda x}{1+\lambda x p } \leq R(\Omega^{e}) \leq \max_{p\geq p_0, c\in [c_1, c_2] , \lambda \in [\lambda_1, \lambda_2] , x\in [R_1, R_2] } \frac{ p +(1+c p )\lambda x}{1+\lambda x p } . \] \end{lemma}
\begin{proof}
Formally, let $\Omega^D$ be the dangling instance obtained from $\Omega^e$ by removing $v$; thus $D=\{e_1,e_2,\dots,e_d\}$ consists of $d$ dangling edges. Let $v_i$ be the vertex in $\Omega^D$ that is incident to $e_i$ for $1\le i\le d$.
Without loss of generality, we assume that in one longest simple path, $e$ is followed by $e_1$.
We define $D'=\{e_2,e_3,\dots,e_d\}$ and assume $F_v=[f_0,f_1,\dots,f_{d+1}]$.
Then we have
\begin{align*}
R(\Omega^e)
&=\frac{\mathbb{P}_{\Omega^e}(\sigma(e)=1)}{\mathbb{P}_{\Omega^e}(\sigma(e)=0)} = \frac{Z(\Omega^e,\sigma(e)=1)}{Z(\Omega^e,\sigma(e)=0)}\\
&=\frac{\sum_{\pi\in\{0,1\}^D}\left(Z(\Omega^D,\sigma(D)=\pi)\cdot\prod_{i=1}^d\lambda_{(v,v_i)}^{\pi(e_i)}\cdot f_{\|\pi\|+1}\right)}{\sum_{\pi\in\{0,1\}^D}\left(Z(\Omega^D,\sigma(D)=\pi)\cdot\prod_{i=1}^d\lambda_{(v,v_i)}^{\pi(e_i)}\cdot f_{\|\pi\|}\right)}\\
&=\frac{\sum_{\pi\in\{0,1\}^{D'}}\prod_{i=2}^d\lambda_{(v,v_i)}^{\pi(e_i)}\left(Z(\Omega^D,\sigma(D)=0\pi)\cdot f_{\|\pi\|+1}+\lambda_{(v,v_1)}\cdot Z(\Omega^D,\sigma(D)=1\pi)\cdot f_{\|\pi\|+2}\right)}{\sum_{\pi\in\{0,1\}^{D'}}\prod_{i=2}^d\lambda_{(v,v_i)}^{\pi(e_i)}\left(Z(\Omega^D,\sigma(D)=0\pi)\cdot f_{\|\pi\|}+\lambda_{(v,v_1)}\cdot Z(\Omega^D,\sigma(D)=1\pi)\cdot f_{\|\pi\|+1}\right)}.\\
&=\frac{\sum_{\pi\in\{0,1\}^{D'}}\prod_{i=2}^d\lambda_{(v,v_i)}^{\pi(e_i)}\left(\mathbb{P}_{\Omega^D}(\sigma(D)=0\pi)\cdot f_{\|\pi\|+1}+\lambda_{(v,v_1)}\cdot \mathbb{P}_{\Omega^D}(\sigma(D)=1\pi)\cdot f_{\|\pi\|+2}\right)}{\sum_{\pi\in\{0,1\}^{D'}}\prod_{i=2}^d\lambda_{(v,v_i)}^{\pi(e_i)}\left(\mathbb{P}_{\Omega^D}(\sigma(D)=0\pi)\cdot f_{\|\pi\|}+\lambda_{(v,v_1)}\cdot \mathbb{P}_{\Omega^D}(\sigma(D)=1\pi)\cdot f_{\|\pi\|+1}\right)}.
\end{align*}
Thus
\begin{align}
R(\Omega^e)&\le\max_{\pi\in\{0,1\}^{D'}}\frac{\mathbb{P}_{\Omega^D}(\sigma(D)=0\pi)\cdot f_{\|\pi\|+1}+\lambda_{(v,v_1)}\cdot \mathbb{P}_{\Omega^D}(\sigma(D)=1\pi)\cdot f_{\|\pi\|+2}}{\mathbb{P}_{\Omega^D}(\sigma(D)=0\pi)\cdot f_{\|\pi\|}+\lambda_{(v,v_1)}\cdot \mathbb{P}_{\Omega^D}(\sigma(D)=1\pi)\cdot f_{\|\pi\|+1}}.\label{eq:Rmax}\\
R(\Omega^e)&\ge\min_{\pi\in\{0,1\}^{D'}}\frac{\mathbb{P}_{\Omega^D}(\sigma(D)=0\pi)\cdot f_{\|\pi\|+1}+\lambda_{(v,v_1)}\cdot \mathbb{P}_{\Omega^D}(\sigma(D)=1\pi)\cdot f_{\|\pi\|+2}}{\mathbb{P}_{\Omega^D}(\sigma(D)=0\pi)\cdot f_{\|\pi\|}+\lambda_{(v,v_1)}\cdot \mathbb{P}_{\Omega^D}(\sigma(D)=1\pi)\cdot f_{\|\pi\|+1}}\label{eq:Rmin}.
\end{align}
For a fixed $\pi\in\{0,1\}^{D'}$, we can define a new dangling instance $\Omega_\pi^{e_1}$ with dangling edge $e_1$ by
pinning the configurations of $D'$ to $\pi$. Then we have
\[R(\Omega_\pi^{e_1})= \frac{\mathbb{P}_{\Omega^D}(\sigma(D)=1\pi)}{\mathbb{P}_{\Omega^D}(\sigma(D)=0\pi)}.\]
By our choice of $e_1$, we have $SP(\Omega_\pi^{e_1})\geq L$, and as a result $R(\Omega_\pi^{e_1}) \in [R_1, R_2]$.
By the definition of the Fibonacci function, we have $f_{\|\pi\|+2}= c f_{\|\pi\|+1} +f_{\|\pi\|}$. Setting $p=\frac{f_{\|\pi\|+1}}{f_{\|\pi\|}}$, we get the claimed bounds.
\end{proof}
We define \[h^c_{\lambda, p}(x) = \frac{ p +(1+c p )\lambda x}{1+\lambda x p }.\] We use $\Hf{c_1}{c_2}{p}{\lambda_1}{\lambda_2}$ to denote the family $\{h^c_{\lambda, p' }\mid c_1\leq c \leq c_2, \lambda_1\le\lambda\le\lambda_2, p'\ge p\}$. By recursively applying Lemma \ref{lemma:rec-bound}, we can get the following bound.
\begin{lemma}\label{lemma:bound-funtion-to-instance}
Suppose that for any $x\geq 0$ and any $h_1, h_2, \dots, h_L \in \Hf{c_1}{c_2}{p}{\lambda_1}{\lambda_2}$, we have $h_1 h_2 \cdots h_L(x) \in [R_1, R_2]$. Then
for any dangling instance $\Omega^{e}$ of ${\rm Holant}({\cal F}_{c_1, c_2}^p, \Lambda_{\lambda_1,\lambda_2} )$ with $SP(\Omega^{e})\geq L$, we have $R(\Omega^{e})\in [R_1, R_2]$. \end{lemma}
\subsection{For Theorem~\ref{thm:2}}
\begin{lemma}\label{lem:warm1}
Let $c_0>0, p_0>0, L\geq c_0^2 + \frac{c_0}{p_0}$ and $h_1, h_2, \ldots, h_L \in \{h^c_{\lambda, p} | c\geq c_0, \lambda\geq 1, p \geq p_0\}$. Then for any $x\geq 0$, we have
\[
h_L h_{L-1} \dots h_1(x) \geq c_0.
\] \end{lemma} \begin{proof}
We denote $x_i= h_i h_{i-1} \dots h_1 (x)$, so that $x_i =h_i(x_{i-1}) =\frac{ p_i +(1+c_i p_i )\lambda_i x_{i-1}}{1+\lambda_i x_{i-1} p_i }$.
If there exists some $i$ with $x_{i-1} \lambda_i \geq c_0$, then
\begin{align*}
h_i(x_{i-1}) &=\frac{ p_i +(1+c_i p_i )\lambda_i x_{i-1}}{1+\lambda_i x_{i-1} p_i }\\
& \geq \frac{ p_i +(1+c_0 p_i )\lambda_i x_{i-1}}{1+\lambda_i x_{i-1} p_i } \\
&= c_0 + \frac{ p_i +\lambda_i x_{i-1} -c_0}{1+\lambda_i x_{i-1} p_i } \\
&\geq c_0 + \frac{ p_i +c_0 -c_0}{1+\lambda_i x_{i-1} p_i } \\
&> c_0.
\end{align*}
Once the iterate exceeds $c_0$, it remains above $c_0$: since $\lambda_j\ge 1$, we have $x_{j-1}\lambda_j\ge c_0$ for all subsequent steps $j$ as well.
Now we assume $x_{i-1} \lambda_i <c_0$ for all $i=1, 2, \ldots, L$. In this case, we have
\begin{align*}
x_{i}-x_{i-1} &= h_i(x_{i-1}) -x_{i-1} \\
&=\frac{ p_i +(1+c_i p_i )\lambda_i x_{i-1}}{1+\lambda_i x_{i-1} p_i }-x_{i-1}\\
& \geq \frac{ p_i +(1+c_0 p_i )\lambda_i x_{i-1} -x_{i-1} -\lambda_i x_{i-1}^2 p_i}{1+\lambda_i x_{i-1} p_i } \\
& \geq \frac{ p_i +(\lambda_i -1) x_{i-1} }{1+\lambda_i x_{i-1} p_i } \\
& \geq \frac{ p_i }{1+\lambda_i x_{i-1} p_i } \\
& \geq \frac{ p_0 }{1+c_0 p_0 } .
\end{align*}
So at every step, the iterate increases by at least $\frac{ p_0 }{1+c_0 p_0 }$. Hence if $L\geq \frac{c_0 (1+c_0 p_0)} {p_0} = c_0^2 + \frac{c_0}{p_0}$, we conclude that $x_L \geq c_0$. \end{proof}
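The two phases of this argument can be checked numerically. The following sketch (illustrative, with the hypothetical values $c_0=2$, $p_0=1$, $\lambda=1$) verifies that each step either gains at least $\frac{p_0}{1+c_0 p_0}$ or has already passed $c_0$, and that $L=c_0^2+\frac{c_0}{p_0}=6$ steps suffice:

```python
def h(x, c, lam, p):
    # the recursion h^c_{lam, p}(x) = (p + (1 + c*p)*lam*x) / (1 + lam*x*p)
    return (p + (1 + c * p) * lam * x) / (1 + lam * x * p)

# Worst-case start x = 0 with c0 = 2, p0 = 1, lam = 1.
c0, p0, x = 2.0, 1.0, 0.0
for _ in range(6):                      # L = c0^2 + c0/p0 = 6
    x_next = h(x, c0, 1.0, p0)
    # each step gains at least p0/(1 + c0*p0), unless already past c0
    assert x_next - x >= p0 / (1 + c0 * p0) - 1e-12 or x >= c0
    x = x_next
print(x >= c0)  # True
```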
By Lemma \ref{lemma:bound-funtion-to-instance} and the above bound, we have the following bound which is used in the proof of Theorem \ref{thm:2}.
\begin{corollary}
Let $c_0>0, p_0>0, L\geq c_0^2 + \frac{c_0}{p_0}$ and $\Omega^{e}$ be an instance of ${\rm Holant}({\cal F}_{c_0, \infty}^{p_0}, \Lambda_{1,\infty} )$ with $SP(\Omega^{e})\geq L$. Then $R(\Omega^{e})\geq c_0$. \end{corollary}
\subsection{For Theorem~\ref{thm:1}}
\begin{lemma}\label{lem:warm}
Let $h_{\lambda_1,\mu_1}, h_{\lambda_2,\mu_2}\in\Hf{c}{c}{\mu}{\lambda}{+\infty}$ be two functions. Then for any $x\ge 0$,
\[
\min\left\{\frac{\lambda\mu^2}{1+\lambda\mu^2}\cdot c,x^*\right\}\le h_{\lambda_1,\mu_1}h_{\lambda_2,\mu_2}(x)
\le\max\left\{\frac{\mu+(1+c\mu)\lambda c}{\mu\lambda c},c+\frac{1}{\mu},x^*\right\}.
\]
where $x^*$ is the larger fixpoint of $h_{\lambda_1,\mu_1}$. \end{lemma} \begin{proof}
We only prove the lower bound; the proof of the upper bound is analogous.
If $\mu_1\ge\rho$, then the lemma obviously holds since $h_{\lambda_1,\mu_1}(x)\ge c$ for any $x\ge 0$. Thus we assume $\mu_1<\rho$ and distinguish two cases:
\begin{itemize}
\item [(1)] $\mu_2\ge\rho$, then the lemma follows from the fact that $h_{\lambda_2,\mu_2}(x)\ge c$ for any $x$ and $h_{\lambda_1,\mu_1}(x)>x$ when $x<x^*$.
\item [(2)] $\mu_2<\rho$, then we have
\[
h_{\lambda_1,\mu_1}h_{\lambda_2,\mu_2}(x)\ge h_{\lambda_1,\mu_1}(\mu),
\]
and thus
\[
h_{\lambda_1,\mu_1}(\mu)=\frac{\mu_1+(1+c\mu_1)\lambda_1\mu}{1+\lambda_1\mu_1\mu}\ge\frac{\lambda\mu^2}{1+\lambda\mu^2}\cdot c.
\]
\end{itemize} \end{proof}
In the following, we say a number $x$ is \emph{warm} if $\frac{\lambda\mu^2}{1+\lambda\mu^2}\cdot c\le x\le \max\left\{\frac{\mu+(1+c\mu)\lambda c}{\mu\lambda c},c+\frac{1}{\mu}\right\}$ when we work with functions in $\Hf{c}{c}{\mu}{\lambda}{+\infty}$.
\begin{lemma}\label{lem:fpbound}
Let $\mu,\lambda,c>0$ be three numbers and let $x^*$ be the larger fixpoint of $h_{\lambda,\mu}^c$. Then
\[
\left|x^*-\rho\right|\le \frac{4\rho\left|\lambda-1\right|\left|\mu-\rho\right|}{(\rho^2+1)\lambda\mu}
\] \end{lemma} \begin{proof}
Solving the equation $h_{\lambda,\mu}^c(x^*)=x^*$ and taking the larger root, we obtain
\[
x^*=\rho+\frac{(\lambda-1)\rho-\lambda(\rho^2+1)\mu+\sqrt{\left(\lambda-1\right)^2\rho^2+\left(\lambda^2\rho^4-2\lambda(\lambda-2)\rho^2+\lambda^2\right)\mu^2+2\left(\lambda(\lambda-1)\rho(\rho^2-1)\right)\mu}}{\lambda\rho\mu}
\]
Take
\begin{align*}
A&=\lambda(\rho^2+1)\mu-(\lambda-1)\rho,\\
B&=\left(\lambda-1\right)^2\rho^2+\left(\lambda^2\rho^4-2\lambda(\lambda-2)\rho^2+\lambda^2\right)\mu^2+2\left(\lambda(\lambda-1)\rho(\rho^2-1)\right)\mu
\end{align*}
Then $x^*-\rho=\frac{\sqrt{B}-A}{\lambda\rho\mu}$ and it holds that
\begin{align*}
B-A^2
&=B-\left(\lambda^2(\rho^2+1)^2\mu^2-2\lambda(\lambda-1)(\rho^2+1)\mu\rho+(\lambda-1)^2\rho^2\right)\\
&=4(\lambda-1)\lambda\rho^2\mu(\rho-\mu),\\
\sqrt{B}+A&\ge\lambda(\rho^2+1)\mu
\end{align*}
Notice that if $\lambda=1$ or $\mu=\rho$, then $x^*=\rho$. We need to distinguish between four cases
\begin{enumerate}[(1)]
\item $\lambda>1$ and $\mu>\rho$;
\item $\lambda>1$ and $\mu<\rho$;
\item $\lambda<1$ and $\mu>\rho$;
\item $\lambda<1$ and $\mu<\rho$.
\end{enumerate}
We only prove (1), the other cases are analogous.
If $\lambda>1$ and $\mu>\rho$, then $x^*<\rho$ and we have
\[
\rho-x^*
=\frac{A-\sqrt{B}}{\lambda\rho\mu}
=\frac{A^2-B}{(\sqrt{B}+A)\lambda\rho\mu}
\le\frac{4(\lambda-1)\rho(\mu-\rho)}{(\rho^2+1)\lambda\mu}
\] \end{proof}
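This bound can be sanity-checked numerically. The sketch below (not part of the paper) assumes, as in the paper's earlier sections, that $\rho$ is the positive root of $x^2=cx+1$, which is the fixpoint of $h^c_{\lambda,\mu}$ when $\lambda=1$; all numeric values are illustrative:

```python
def larger_fixpoint(c, lam, mu):
    """Larger root of lam*mu*x^2 + (1 - (1 + c*mu)*lam)*x - mu = 0,
    i.e. the larger fixpoint x* of h^c_{lam, mu}."""
    a = lam * mu
    b = 1 - (1 + c * mu) * lam
    return (-b + (b * b + 4 * a * mu) ** 0.5) / (2 * a)

c, lam, mu = 1.0, 1.1, 2.0
rho = (c + (c * c + 4) ** 0.5) / 2        # positive root of x^2 = c*x + 1
xstar = larger_fixpoint(c, lam, mu)
bound = 4 * rho * abs(lam - 1) * abs(mu - rho) / ((rho ** 2 + 1) * lam * mu)
print(abs(xstar - rho) <= bound)  # True
```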
\begin{lemma}\label{lem:smallmu}
Let $h_{\lambda_0,\mu_0}\in \Hf{c}{c}{\mu}{\lambda}{+\infty}$ with $\mu_0\le\rho$, and let $k$ be a number such that $k(1+(1-k)\mu^2)<c(1-k^2)\mu^3$ and $\max\{|\lambda-1|,|\lambda_0-1|\}\le k$. Then for every warm $x$,
\[
|h_{\lambda_0,\mu_0}(x)-\rho|\le \alpha_1|x-\rho|+\delta_1
\]
for $\alpha_1=\frac{1+k}{1+\frac{c(1-k^2)\mu^3}{1+(1-k)\mu^2}}<1$ and $\delta_1=k\rho$. \end{lemma} \begin{proof}
\begin{align*}
\left|h_{\lambda_0,\mu_0}(x) - \rho\right|
&=\left|\frac{\rho-\mu_0}{\rho}\cdot\frac{\lambda_0}{1+x\mu_0\lambda_0}\left(x-\rho\right)+(\lambda_0-1)\frac{\rho - \mu_0}{1+x\lambda_0\mu_0}\right|\\
&\le\frac{1+k}{1+\frac{c(1-k^2)\mu^3}{1+(1-k)\mu^2}}\left|x-\rho\right| + k\rho.
\end{align*} \end{proof}
\begin{lemma}\label{lem:bigmu}
Let $k<\rho^2-1$ be a number and $h_{\lambda_1,\mu_1},h_{\lambda_2,\mu_2}\in \Hf{c}{c}{\mu}{\lambda}{+\infty}$ where $\mu\ge\rho$. Assume $\max\{|\lambda_1-1|,|\lambda_2-1|,|\lambda-1|\}\le k$, then for every warm $x$,
\[
|h_{\lambda_1,\mu_1}h_{\lambda_2,\mu_2}(x)-\rho|\le \alpha_2|x-\rho|+\delta_2
\]
for $\alpha_2=\frac{1+k}{\rho^2} < 1$ and $\delta_2=\left(\frac{1+(1-k)\mu^2}{(1-k)^2\mu^2c}+\frac{1}{\rho}\right)k$. \end{lemma}
\begin{proof}
\begin{align}
&\left|h_{\lambda_1,\mu_1}h_{\lambda_2,\mu_2}(x) - \rho\right|\notag\\
=&\left|\frac{\rho-\mu_1}{\rho}\cdot\frac{\lambda_1}{1+h_{\lambda_2,\mu_2}(x)\mu_1\lambda_1}\left(h_{\lambda_2,\mu_2}(x)-\rho\right)+(\lambda_1-1)\frac{\rho - \mu_1}{1+h_{\lambda_2,\mu_2}(x)\lambda_1\mu_1}\right|\notag\\
=&\left|\frac{\rho-\mu_1}{\rho}\cdot\frac{\lambda_1}{1+h_{\lambda_2,\mu_2}(x)\mu_1\lambda_1}\left(
\frac{\rho-\mu_2}{\rho}\cdot\frac{\lambda_2}{1+x\mu_2\lambda_2}\left(x-\rho\right)+(\lambda_2-1)\frac{\rho - \mu_2}{1+x\lambda_2\mu_2}\right) + \right.\notag\\
&\quad\;\left.(\lambda_1-1)\frac{\rho - \mu_1}{1+h_{\lambda_2,\mu_2}(x)\lambda_1\mu_1}\right|\label{eqn:twice}
\end{align}
Take
\begin{align*}
A& = \left|\frac{\lambda_1\lambda_2}{\rho^2}\cdot\frac{\mu_1-\rho}{1+h_{\lambda_2,\mu_2}(x)\mu_1\lambda_1}\cdot\frac{\mu_2-\rho}{1+x\mu_2\lambda_2}\right|,\\
B& = \left|\frac{\rho-\mu_1}{1+h_{\lambda_2,\mu_2}(x)\mu_1\lambda_1}\right|,\\
C& = \left|\frac{\mu_2-\rho}{1+x\mu_2\lambda_2}\cdot\frac{\mu_1-\rho}{1+h_{\lambda_2,\mu_2}(x)\mu_1\lambda_1}\cdot\frac{\lambda_1}{\rho}\right|.
\end{align*}
It holds that
\begin{align*}
A
&\le \frac{\lambda_1\lambda_2\mu_1\mu_2}{\rho^2}\frac{\rho}{\lambda_1\rho\mu_1\mu_2+\left(\lambda_1\lambda_2\rho\mu_1+\left(\left(\lambda_1\lambda_2\left(\rho^2-1\right)\right)\mu_1+\lambda_2\rho\right)\mu_2\right)x+\rho}\\
&\le \frac{\lambda_1\lambda_2\mu_1\mu_2}{\rho^2}\frac{1}{\lambda_1\mu_1\mu_2+1}\\
&\le\frac{\lambda_2}{\rho^2}\le\frac{1+k}{\rho^2},\\
B&\le\left|\frac{\mu_1}{1+h_{\lambda_2,\mu_2}(x)\mu_1\lambda_1}\right|\le\frac{1}{h_{\lambda_2,\mu_2}(x)(1-k)}\le\frac{1+(1-k)\mu^2}{(1-k)^2\mu^2c},\\
C&=A\cdot\frac{\rho}{\lambda_2}\le\frac{1}{\rho}.
\end{align*}
Then
\begin{align*}
(\ref{eqn:twice})
&= \left|A\cdot|x-\rho| + B\cdot(\lambda_1-1) + C\cdot(\lambda_2-1)\right|\\
&\le |A|\cdot|x-\rho|+(|B|+|C|)\cdot k\\
&\le\frac{1+k}{\rho^2}|x-\rho|+\left(\frac{1+(1-k)\mu^2}{(1-k)^2\mu^2c}+\frac{1}{\rho}\right)k.
\end{align*}
Then $\alpha_2 = \frac{1+k}{\rho^2} < 1$ and $\delta_2 = \left(\frac{1+(1-k)\mu^2}{(1-k)^2\mu^2c}+\frac{1}{\rho}\right)k$. \end{proof}
\begin{lemma}\label{lem:criterion}
Consider functions in $\Hf{c}{c}{\mu}{\lambda}{\lambda'}$ and define $k = \max\{|\lambda-1|,|\lambda'-1|\}$. We assume that $k$ satisfies $k<\rho^2-1$ and $k(1+(1-k)\mu^2)<c(1-k^2)\mu^3$. There exist constants $M,\delta$ and $\alpha<1$ such that for any sequence of $d>0$ functions $h_1,h_2,\dots,h_d\in\Hf{c}{c}{\mu}{\lambda}{\lambda'}$ and any warm $x$, if the sequence satisfies one of the following three criteria:
\begin{itemize}
\item [(1)] $d=1$ and $h_1$ has its corresponding $\mu\le\rho$;
\item [(2)] $d\le M$ and exactly $h_1$ and $h_d$ in the sequence have their corresponding $\mu>\rho$;
\item [(3)] $d=M$ and exactly $h_d$ has its corresponding $\mu>\rho$,
\end{itemize}
then
\[
\left|h_dh_{d-1}\cdots h_1(x)-\rho\right|\le\alpha\left|x-\rho\right|+\delta.
\] \end{lemma} \begin{proof}
Assume $h_i=h_{\lambda_i,\mu_i}$ for every $1\le i\le d$. We consider the three criteria in turn:
\begin{itemize}
\item [(1)] We can take $\alpha=\alpha_1$ and $\delta=\delta_1$.
\item [(2)] For every $1\le i\le d$, define $\gamma_i=\frac{4\rho\left|\lambda_i-1\right|\left|\mu_i-\rho\right|}{(\rho^2+1)\lambda_i\mu}$. For every $2\le i\le d-1$, $h_i$ is an increasing function, then due to Lemma \ref{lem:fpbound}, for any $x\ge 0$,
\begin{itemize}
\item If $x\le\rho$, then
\[
\min\{x,\rho-\gamma_i\}\le h_i(x)\le\rho+\gamma_i.
\]
\item If $x\ge\rho$, then
\[
\rho-\gamma_i\le h_i(x)\le\max\{x,\rho+\gamma_i\}.
\]
\end{itemize}
Let $\gamma=\max_{1\le i\le d}\gamma_i$, then for any $x\ge 0$, one of following two must be true:
\begin{itemize}
\item [(a)]
\[
\left|h_dh_{d-1}\dots h_1(x)-\rho\right|\le\left|h_dh_1(\rho)-\rho\right|.
\]
\item [(b)]
\[
\left|h_dh_{d-1}\dots h_1(x)-\rho\right|\le\max\left\{\left|h_d(\rho+\gamma)-\rho\right|,\left|h_d(\rho-\gamma)-\rho\right|\right\}.
\]
\end{itemize}
Notice that $h_d(x)=h_{\lambda_d,\mu_d}(x)$ is monotone on $\mu_d$ for fixed $x$, thus
\begin{align*}
\min\left\{
c+\frac{1}{\lambda_d(\rho+\gamma)},
\frac{\mu+(1+c\mu)\lambda_d(\rho+\gamma)}{1+\mu\lambda_d(\rho+\gamma)}
\right\}&\le h_d(\rho+\gamma)\\
&\le\max\left\{
c+\frac{1}{\lambda_d(\rho+\gamma)},
\frac{\mu+(1+c\mu)\lambda_d(\rho+\gamma)}{1+\mu\lambda_d(\rho+\gamma)}
\right\},\\
\min\left\{
c+\frac{1}{\lambda_d(\rho-\gamma)},
\frac{\mu+(1+c\mu)\lambda_d(\rho-\gamma)}{1+\mu\lambda_d(\rho-\gamma)}
\right\}&\le h_d(\rho-\gamma)\\
&\le\max\left\{
c+\frac{1}{\lambda_d(\rho-\gamma)},
\frac{\mu+(1+c\mu)\lambda_d(\rho-\gamma)}{1+\mu\lambda_d(\rho-\gamma)}
\right\}.
\end{align*}
Therefore we can take $\alpha=\alpha_2$ and
\begin{align*}
\delta&=\max\left\{
\delta_2,
\left|\frac{1}{\lambda_d(\rho+\gamma)}-\frac{1}{\rho}\right|,
\left|\frac{\mu+(1+c\mu)\lambda_d(\rho+\gamma)}{1+\mu\lambda_d(\rho+\gamma)}-\rho\right|,\right.\\%
&\left.\quad\quad\quad\quad\quad
\left|\frac{1}{\lambda_d(\rho-\gamma)}-\frac{1}{\rho}\right|,
\left|\frac{\mu+(1+c\mu)\lambda_d(\rho-\gamma)}{1+\mu\lambda_d(\rho-\gamma)}-\rho\right|
\right\}.
\end{align*}
\item [(3)] Assume $h_1=h_{\lambda_1,\mu_1}$, then
\begin{align*}
\left|h_1(x)-\rho\right|
&\le \left|\frac{\rho-\mu}{\rho}\cdot\frac{\lambda_1}{1+x\mu\lambda_1}\right|\left|x-\rho\right|+\left|\lambda_1-1\right|\left|\frac{\rho - \mu}{1+x\lambda_1\mu}\right|\\
&\le \frac{1}{\rho x}\left|x-\rho\right|+\frac{k}{\lambda_1 x}.
\end{align*}
Let $\alpha' = \frac{1}{\rho x}$ and $M$ be the number such that $\alpha'\alpha_1^M<\alpha_1<1$, then we can take $\alpha = \alpha'\alpha^M$ and take $\delta = \alpha'\cdot\frac{\delta_1}{1-\alpha_1}+\frac{k}{\lambda_1 x}$.
\end{itemize} \end{proof}
Let $h_1,h_2,\dots,h_d\in\Hf{c_1}{c_2}{p}{\lambda_1}{\lambda_2}$ be a sequence of functions. If for every function $h_i$ and every $x\ge 0$, we have $|h_i(x)-\rho|\le \alpha |x_i-\rho|+\delta$ holds for some $\alpha<1$ and $\delta$, then for every $x\ge 0$, \[
|h_dh_{d-1}\dots h_1(x)-\rho|<\alpha^d|x-\rho|+\frac{\delta}{1-\alpha} \] holds.
Consdier functions in $\Hf{c}{c}{p}{\lambda_1}{\lambda_2}$ and define $k=\max\left\{|\lambda_1-1|,|\lambda_2-1|\right\}$. Assume $k<1/2$, then for a sequence of functions $f_1,\dots,f_d\in\Hf{c}{c}{p}{\lambda_1}{\lambda_2}$ that satisfies one of three criterions in Lemma~\ref{lem:criterion}, it holds that for every warm $x$, \[
|h_dh_{d-1}\dots h_1(x)-\rho|\le\alpha|x-\rho|+\delta \] where \begin{align*}
\delta
&\le\max\left\{\Delta_1,\Delta_2,\Delta_3,\Delta_4,\Delta_5,\Delta_6\right\} \end{align*} and \[ \alpha\le\max\{\alpha_1,\alpha_2\}, \] for \begin{align*}
\Delta_1&=\delta_2=\left(\frac{1+(1-k)p^2}{(1-k)^2p^2c}+\frac{1}{\rho}\right)k,\\
\Delta_2&=\frac{1}{\rho x}\cdot\frac{\delta_1}{1-\alpha_1}+\frac{k}{(1-k)x},\\
\Delta_3&=\max_{\lambda\in[\lambda_1,\lambda_2]}\left\{\left|\frac{1}{\lambda(\rho+\gamma)}-\frac{1}{\rho}\right|\right\},\\
\Delta_4&=\max_{\lambda\in[\lambda_1,\lambda_2]}\left\{\left|\frac{p+(1+cp)\lambda(\rho+\gamma)}{1+p\lambda(\rho+\gamma)}-\rho\right|\right\},\\
\Delta_5&=\max_{\lambda\in[\lambda_1,\lambda_2]}\left\{\left|\frac{1}{\lambda(\rho-\gamma)}-\frac{1}{\rho}\right|\right\},\\
\Delta_6&=\max_{\lambda\in[\lambda_1,\lambda_2]}\left\{\left|\frac{p+(1+cp)\lambda(\rho-\gamma)}{1+p\lambda(\rho-\gamma)}-\rho\right|\right\},\\
\alpha_1&=\frac{1+k}{1+\frac{c(1-k^2)p^3}{1+(1-k)p^2}},\quad\delta_1= k\rho,\\
\alpha_2&= \frac{1+k}{\rho^2},\\
\gamma&\le\max_{\lambda\in[\lambda_1,\lambda_2],\mu\in[p,+\infty]}\left\{\frac{4\rho|\lambda-1||\mu-\rho|}{(\rho^2+1)\lambda\mu}\right\}\le\frac{4\rho^2k}{(1+\rho^2)(1-k)p} \end{align*}
In the following, we shall bound $\frac{1}{1-\alpha_1}$, $\frac{1}{1-\alpha_2}$ and each $\Delta_i$ respectively. Since \[ \alpha_1 =\frac{1+k}{1+\frac{c(1-k^2)p^3}{1+(1-k)p^2}} \le\frac{1+k}{1+\frac{3}{4}\cdot\frac{cp^3}{1+p^2}} =\frac{4(1+k)(1+p^2)}{4+4p^2+3cp^3}, \] we have \[ \frac{1}{1-\alpha_1} \le\frac{4+4p^2+3cp^3}{3cp^4-4(1+p^2)k}. \] If we require that $k<\min\{\frac{cp^4}{2(1+p^2)},\frac{c^2}{2}\}$, then \[ \frac{1}{1-\alpha_1}<\frac{4+4p^2+3cp^3}{cp^4}<\frac{11(1+p^3)(1+c)}{p^4c}. \] Using the fact that $\rho^2\ge c^2+1$, we have \begin{align*}
\frac{1}{1-\alpha_2}
&=\frac{\rho^2}{\rho^2-k-1}\\
&\le\frac{c^2+1}{c^2-k}\\
&\le\frac{2(c^2+1)}{c^2} \end{align*}
\begin{align*}
\Delta_1
&=\left(\frac{1+(1-k)p^2}{(1-k)^2p^2c}+\frac{1}{\rho}\right)k\\
&\le\left(\frac{1+(1-k)p^2}{(1-k)^2p^2c}+\frac{1}{c}\right)k\\
&=\frac{1+(1-k)p^2+(1-k)^2p^2}{(1-k)^2p^2c}\cdot k\\
&\le\frac{1+p^2+p^2}{(1-k)^2p^2c}\cdot k\\
&\le\frac{8(1+p^2)k}{p^2c} \end{align*}
\begin{align*}
\Delta_2
&=\frac{1}{\rho x}\cdot\frac{\delta_1}{1-\alpha_1}+\frac{k}{(1-k)x}\\
&\le\frac{k}{x(1-\alpha_1)}+\frac{k}{(1-k)x}\\
&=\frac{k}{x}\left(\frac{1}{1-\alpha_1}+\frac{1}{1-k}\right)\\
&\le\frac{\left(1+(1-k)p^2\right)k}{(1-k)p^2c}\left( \frac{11(1+p^3)(1+c)}{p^4c} +\frac{1}{1-k}\right)\\
&\le\frac{2\left(1+p^2\right)k}{p^2c}\left( \frac{11(1+p^3)(1+c)}{p^4c}+2\right)\\
&\le\frac{70(1+p^2)(1+p^4)(1+c)k}{p^6c^2}\\
&\le\frac{210(1+p^6)(1+c)k}{p^6c^2} \end{align*}
It follows from monotonicity that \begin{align*}
&\max\left\{\Delta_3,\Delta_4,\Delta_5,\Delta_6\right\}\\
\le&\max\left\{
\frac{p+(1+cp)(1+k)(\rho+\gamma)}{1+p(1+k)(\rho+\gamma)}-\rho,
\rho-\frac{p+(1+cp)(1-k)(\rho-\gamma)}{1+p(1-k)(\rho-\gamma)},\right.\\
&\quad\quad\quad\left.
\frac{1}{(1-k)(\rho-\gamma)}-\frac{1}{\rho},
\frac{1}{\rho}-\frac{1}{(1+k)(\rho+\gamma)}
\right\}. \end{align*} If we require that $k<\frac{p(1+\rho^2)}{16\rho}$, then we have \begin{align*}
\frac{p+(1+cp)(1+k)(\rho+\gamma)}{1+p(1+k)(\rho+\gamma)}-\rho
&=\frac{p-\rho+(1+k)(\rho+\gamma)(1+cp-\rho p)}{1+p(1+k)(\rho+\gamma)}\\
&\le\frac{k\rho+\gamma(1+k)}{p(\rho+\gamma)}\\
&\le\frac{k\left(1+\frac{12\rho}{(1+\rho^2)p}\right)}{p}\\
&\le\frac{12(1+p)k}{p^2} \end{align*}
\begin{align*}
\rho-\frac{p+(1+cp)(1-k)(\rho-\gamma)}{1+p(1-k)(\rho-\gamma)}
&=\frac{\rho-p+(1-k)(\rho-\gamma)(\rho p-c\mu-1)}{1+p(1-k)(\rho-\gamma)}\\
&\le\frac{2(k\rho+\gamma)}{p(\rho-\gamma)}\\
&\le\frac{32(1+p)k}{p^2} \end{align*}
\begin{align*}
\frac{1}{(1-k)(\rho-\gamma)}-\frac{1}{\rho}
&=\frac{\rho-(1-k)(\rho-\gamma)}{(1-k)\rho(\rho-\gamma)}\\
&\le\frac{2(\gamma+k\rho)}{\rho(\rho-\gamma)}\\
&\le\frac{32(1+p)k}{cp} \end{align*} \begin{align*}
\frac{1}{\rho}-\frac{1}{(1+k)(\rho+\gamma)}
&=\frac{(1+k)(\rho+\gamma)-\rho}{\rho(1+k)(\rho+\gamma)}\\
&\le\frac{k\rho+\gamma(1+k)}{\rho(\rho+\gamma)}\\
&\le\frac{12(1+p)k}{cp} \end{align*}
Take all bounds into account, we have \begin{align*}
\delta\le&\max\left\{\frac{11(1+p^3)(1+c)}{p^4c}, \frac{2(c^2+1)}{c^2}\right\}\cdot\\%
&\max\left\{
\frac{8(1+p^2)}{cp^2},
\frac{210(1+p^6)(1+c)}{p^6c^2},
\frac{32(1+p)}{p^2},
\frac{32(1+p)}{cp}
\right\}\cdot k \end{align*}
\begin{lemma}
Let $c,p,\lambda_1,\lambda_2,L>0$ and $h_1,h_2,\dots,h_L\in\Hf{c}{c}{p}{\lambda_1}{\lambda_2}$ and define $k=\max\left\{|\lambda_1-1|,|\lambda_2-1|\right\}$. If $k<\min\left\{1/2, \frac{cp^4}{2(1+p^2)},\frac{c^2}{2}, \frac{p(1+\rho^2)}{16\rho}\right\}$, then for any warm $x$,
\[
h_Lh_{L-1}\dots h_1(x)\in[R_1,R_2]
\]
for $R_1=\rho-\Delta, R_2=\rho+\Delta$ where
\begin{align*}
\Delta
&=\max\left\{\frac{11(1+p^3)(1+c)}{p^4c}, \frac{2(c^2+1)}{c^2},\right\}\cdot\max\left\{
\frac{8(1+p^2)}{cp^2},
\frac{210(1+p^6)(1+c)}{p^6c^2},
\frac{32(1+p)}{p^2},
\frac{32(1+p)}{cp}
\right\}\cdot k\\%
&\quad\quad+\left(\max\{\frac{4(1+k)(1+p^2)}{4+4p^2+3cp^3},\frac{1+k^2}{1+c^2}\}\right)^{g(L)}\cdot|x-\rho|
\end{align*}
and $g:\mathbb{N}\to\mathbb{N}$ is a non-decreasing and unbounded function. \end{lemma} \begin{proof}
The lemma follows from previous discussion and the fact that any sequence of $L$ functions can be consecutively grouped such that each group satisfies one of three criterions in Lemma~\ref{lem:criterion}. Thus
\[
g(L)=\min_{\mbox{a sequence of $L$ functions in $\Hf{c}{c}{p}{\lambda_1}{\lambda_2}$}}\{\mbox{number of groups in $f_L,f_{L-1},\dots,f_1$}\}.
\] \end{proof}
\begin{corollary}\label{cor:conclusion}
Let $c,p,\lambda_1,\lambda_2,L>0$ and $\Omega^{e}$ be an instance of ${\rm Holant}({\cal F}_{c, c}^{p}, \Lambda_{\lambda_1,\lambda_2} )$ with $SP(\Omega^{e})\geq L+2$. Define $k=\max\left\{|\lambda_1-1|,|\lambda_2-1|\right\}$. If $k<\min\left\{1/2, \frac{cp^4}{2(1+p^2)},\frac{c^2}{2}, \frac{p(1+\rho^2)}{16\rho}\right\}$. Then $R(\Omega^{e})\in[R_1,R_2]$ for $R_1=\rho-\Delta, R_2=\rho+\Delta$ where
\begin{align*}
\Delta
&=\max\left\{\frac{11(1+p^3)(1+c)}{p^4c}, \frac{2(c^2+1)}{c^2},\right\}\cdot\max\left\{
\frac{8(1+p^2)}{cp^2},
\frac{210(1+p^6)(1+c)}{p^6c^2},
\frac{32(1+p)}{p^2},
\frac{32(1+p)}{cp}
\right\}\cdot k\\%
&\quad\quad+\left(\max\{\frac{4(1+k)(1+p^2)}{4+4p^2+3cp^3},\frac{1+k^2}{1+c^2}\}\right)^{g(L)}\cdot\max\left\{\left|\frac{\lambda\mu^2}{1+\lambda\mu^2}\cdot c-\rho\right|,\left|\frac{\mu+(1+c\mu)\lambda c}{\mu\lambda c}-\rho\right|,\left|\frac{1}{\mu}-\frac{1}{\rho}\right|\right\}
\end{align*}
and $g:\mathbb{N}\to\mathbb{N}$ is a non-decreasing and unbounded function. Moreover, it holds that $\lim_{L\to\infty,k\to 0}\Delta=0$. \end{corollary} \begin{proof}
For any two functions $h_1,h_2\in\Hf{c}{c}{p}{\lambda_1}{\lambda_2}$ and any $x\ge 0$, it follows from Lemma~\ref{lem:warm} that $h_1h_2(x)$ is either warm or lies in the range $[\rho-\gamma,\rho+\gamma]$ for $\gamma=\max_{\lambda\in[\lambda_1,\lambda_2],\mu\in[p,+\infty]}\left\{\frac{4\rho|\lambda-1||\mu-\rho|}{(\rho^2+1)\lambda\mu}\right\}$. \end{proof}
\section{Correlation Decay}
In this section, we are going to prove Theorem~\ref{thm:1}, Theorem~\ref{thm:2} and Theorem~\ref{thm:spin} by analyzing the correlation decay property stated in Lemma~\ref{lem:algo}. To this end, we shall study the recursions discussed in Section~\ref{sec:recursion}.
\subsection{Proof of Theorem~\ref{thm:1}}
It follows from Corollary~\ref{cor:conclusion} that for every $\eta>0$, there exists $\beta(\eta)>0$ such that $k=\max\{|\lambda_1-1|,|\lambda_2-1|\}\le\beta(\eta)$ implies $\Delta=\max\{|\rho-R_1|,|\rho-R_2|\}<\eta$ by choosing $L$ sufficiently large.
We use the trivial potential function $\Phi(x)=1$ and as discussed in Section~\ref{sec:algo}, it is sufficient to bound \[
\alpha_1(x) = \left|\deriv{h}{x}\right|;\quad
\alpha_2(x,y,z) =\left|\pderiv{g}{x}\right|+ \left|\pderiv{g}{y}\right|+\left|\pderiv{g}{z}\right|;\quad
\alpha_3(x,y) =\left|\pderiv{\hat{g}}{x}\right|;\quad
\alpha_4(x,y) =\left|\pderiv{\hat{g}}{y}\right|. \] where \begin{align*}
h(x) &=\frac{\mu+(c\mu+1)\lambda x}{1+\lambda\mu x}\\
g(x,y,z) &=\frac{\lambda cxz+\lambda y+x}{\lambda xz+1}\\
\hat{g}(x,y) &=\frac{\lambda c xy+\lambda y+x}{\lambda xy+1}\\ \end{align*} for some $\mu\ge p$ and $\lambda>0$.
We shall frequently use the following equality:
\begin{fact}
Assume $a,b,A,B,x$ are all positive numbers. If $b-Bx>0$, then $\frac{a+Ax}{b-Bx}=\frac{a}{b}+\frac{Ab+aB}{b(b-Bx)}x$. \end{fact}
\begin{lemma}\label{lem:dx}
Let $\frac{1}{2}<\lambda<2$ and $\varepsilon<\frac{1}{4}$. If $\rho-\varepsilon<x,y,z<\rho+\varepsilon$, then $\pderiv{g}{x}\leq\frac{|\lambda-1|}{\lambda \rho^2+1}+15\varepsilon$. \end{lemma} \begin{proof}
\begin{align*}
\pderiv{g}{x}
&= \frac{\lambda cz+1-\lambda^2yz}{(\lambda xz+1)^2} \\
&\le \frac{\lambda(\rho-\frac{1}{\rho})(\rho+\varepsilon)+1-\lambda^2(\rho-\varepsilon)^2}
{(\lambda(\rho-\varepsilon)^2+1)^2} \\
&\leq \frac{(\lambda\rho^2+1)|\lambda-1|+(\lambda\rho+2\lambda^2\rho)\varepsilon}{(\lambda\rho^2+1)^2-4\rho(\lambda\rho^2+1)\lambda\varepsilon}\\
&= \frac{|\lambda-1|}{\lambda\rho^2+1}+\frac{4\rho|\lambda-1|\lambda+(2\lambda^2\rho+\lambda\rho)}{(\lambda\rho^2+1)^2-4\rho(\lambda\rho^2+1)\lambda\varepsilon}\varepsilon \\
&\le \frac{|\lambda-1|}{\lambda\rho^2+1}+\frac{8\rho+8\rho+2\rho}{(\lambda\rho^2+1)\rho(\lambda\rho+\frac{1}{\rho}-4\lambda\varepsilon)}\varepsilon \\
&\le \frac{|\lambda-1|}{\lambda\rho^2+1}+\frac{18\rho}{(\lambda\rho^2+1)\rho(2\sqrt{\lambda}-\lambda)}\varepsilon \\
&\le \frac{|\lambda-1|}{\lambda\rho^2+1}+\frac{18}{\frac{4}{5}(\frac{1}{2}+1)}\varepsilon<\frac{|\lambda-1|}{\lambda\rho^2+1}+15\varepsilon.
\end{align*} \end{proof}
\begin{lemma}\label{lem:mdx}
Let $\frac{1}{2}<\lambda<2$ and $\varepsilon<\frac{1}{4}$. If $\rho-\varepsilon<x,y,z<\rho+\varepsilon$, then $-\pderiv{g}{x}\leq\frac{|\lambda-1|}{\lambda \rho^2+1}+19\varepsilon$. \end{lemma} \begin{proof}
\begin{align*}
-\pderiv{g}{x}
&= \frac{\lambda^2yz-\lambda cz-1}{(\lambda xz+1)^2} \\
&\le \frac{\lambda^2 (\rho+\varepsilon)^2-\lambda(\rho-\frac{1}{\rho})(\rho-\varepsilon)-1}
{(\lambda(\rho-\varepsilon)^2+1)^2} \\
&\le \frac{(\lambda\rho^2+1)|\lambda-1|+(2\lambda^2\rho+\lambda^2+\lambda\rho)\varepsilon}{(\lambda\rho^2+1)^2-4\rho(\lambda\rho^2+1)\lambda\varepsilon}\\
&= \frac{|\lambda-1|}{\lambda\rho^2+1}+\frac{4\rho|\lambda-1|\lambda+(2\lambda^2\rho+\lambda^2+\lambda\rho)}{(\lambda\rho^2+1)^2-4\rho(\lambda\rho^2+1)\lambda\varepsilon}\varepsilon \\
&\le \frac{|\lambda-1|}{\lambda\rho^2+1}+\frac{8\rho+8\rho+4+2\rho}{(\lambda\rho^2+1)\rho(\lambda\rho+\frac{1}{\rho}-4\lambda\varepsilon)}\varepsilon \\
&\le \frac{|\lambda-1|}{\lambda\rho^2+1}+\frac{18\rho+4}{(\lambda\rho^2+1)\rho(2\sqrt{\lambda}-\lambda)}\varepsilon \\
&\le \frac{|\lambda-1|}{\lambda\rho^2+1}+\frac{22}{\frac{4}{5}(\frac{1}{2}+1)}\varepsilon<\frac{|\lambda-1|}{\lambda\rho^2+1}+19\varepsilon
\end{align*} \end{proof}
\begin{lemma}\label{lem:dy}
Let $\lambda<2$ and $\varepsilon<\frac{1}{2}$. If $\rho-\varepsilon<x,y,z<\rho+\varepsilon$, then $\left|\pderiv{g}{y}\right|\leq\frac{\lambda}{\lambda \rho^2+1}+3\varepsilon$. \end{lemma} \begin{proof}
\begin{align*}
\left|\pderiv{g}{y}\right|
&= \frac{\lambda}{\lambda xz+1} \\
&\le \frac{\lambda}{\lambda(\rho-\varepsilon)^2+1} \\
&\le \frac{\lambda}{\lambda\rho^2+1-2\lambda\rho\varepsilon}\\
&= \frac{\lambda}{\lambda\rho^2+1}+\frac{2\lambda^2\rho}{(\lambda\rho^2+1)(\lambda\rho^2+1-2\lambda\rho\varepsilon)}\varepsilon \\
&\le \frac{\lambda}{\lambda\rho^2+1}+\frac{2\lambda^2\rho}{\lambda\rho^2+1}\varepsilon \\
&= \frac{\lambda}{\lambda\rho^2+1}+\frac{2\lambda^2}{\lambda\rho+\frac{1}{\rho}}\varepsilon \\
&\le \frac{\lambda}{\lambda\rho^2+1}+\frac{2\lambda^2}{2\sqrt{\lambda}}\varepsilon \\
&\le \frac{\lambda}{\lambda\rho^2+1}+3\varepsilon.
\end{align*} \end{proof}
\begin{lemma}\label{lem:dz}
Let $\frac{1}{2}<\lambda<2$ and $\varepsilon<\frac{1}{4}$. If $\rho-\varepsilon<x,y,z<\rho+\varepsilon$, then $|\pderiv{g}{z}|\leq\frac{\lambda}{\lambda \rho^2+1}+30\varepsilon$. \end{lemma} \begin{proof}
Since $\left|\pderiv{g}{z}\right|=\frac{|\lambda^2 xy+\lambda x^2-\lambda cx|}{(\lambda xz+1)^2}$ and
\begin{align*}
\lambda^2 xy+\lambda x^2-\lambda cx
&=\lambda x(\lambda y+x-c)\\
&\ge \lambda x(\frac{1}{2}(\rho-\varepsilon)+(\rho-\varepsilon)-\rho+\frac{1}{\rho}))\\
&=\lambda x(\frac{\rho}{2}+\frac{1}{\rho}-\frac{3}{2}\varepsilon)>0,
\end{align*}
we have
\begin{align*}
\left|\pderiv{g}{z}\right| &= \frac{\lambda^2 xy+\lambda x^2-\lambda cx}{(\lambda xz+1)^2} \\
&\le \frac{(\lambda^2+\lambda)(\rho+\varepsilon)^2-\lambda(\rho-\frac{1}{\rho})(\rho-\varepsilon)}
{(\lambda(\rho-\varepsilon)^2+1)^2} \\
&\le \frac{\lambda(\lambda\rho^2+1)+((\lambda^2+\lambda)(2\rho+\varepsilon)+\lambda\rho)\varepsilon}
{(\lambda\rho^2+1)^2-4\rho(\lambda\rho^2+1)\lambda\varepsilon}\\
&= \frac{\lambda}{\lambda\rho^2+1}+\frac{4\rho\lambda^2+(\lambda^2+\lambda)(2\rho+\varepsilon)+\lambda\rho}{(\lambda\rho^2+1)^2-4\rho(\lambda\rho^2+1)\lambda\varepsilon}\varepsilon \\
&\le \frac{\lambda}{\lambda\rho^2+1}+\frac{16\rho+6(2\rho+1)+2\rho}{(\lambda\rho^2+1)\rho(\lambda\rho+\frac{1}{\rho}-4\lambda\varepsilon)}\varepsilon \\
&\le \frac{\lambda}{\lambda\rho^2+1}+\frac{16\rho+6(2\rho+1)+2\rho}{(\lambda\rho^2+1)\rho(2\sqrt{\lambda}-\lambda)}\varepsilon \\
&\le \frac{\lambda}{\lambda\rho^2+1}+\frac{30+\frac{6}{\rho}}{\frac{4}{5}(\frac{1}{2}\rho^2+1)}\varepsilon \\
&\le \frac{\lambda}{\lambda\rho^2+1}+\frac{36}{\frac{4}{5}(\frac{1}{2}+1)}=\frac{\lambda}{\lambda\rho^2+1}+30\varepsilon.
\end{align*} \end{proof}
\begin{lemma} \label{lem:gbound}
For $\frac{1}{2}<\lambda<2$ and $\varepsilon<\frac{1}{4}$, if $\rho-\varepsilon<x,y,z<\rho+\varepsilon$, then $\left|\pderiv{g}{x}\right|+\left|\pderiv{g}{y}\right|+\left|\pderiv{g}{z}\right| \leq \frac{|\lambda-1|+2\lambda}{\lambda\rho^2+1}+52\varepsilon.$ \end{lemma} \begin{proof}
It holds from previous lemmas that
\[
\left|\pderiv{g}{x}\right|+\left|\pderiv{g}{y}\right|+\left|\pderiv{g}{z}\right| = \max\left\{\pderiv{g}{x},-\pderiv{g}{x}\right\}+\left|\pderiv{g}{y}\right|+\left|\pderiv{g}{z}\right| \leq \frac{|\lambda-1|+2\lambda}{\lambda\rho^2+1}+52\varepsilon.
\] \end{proof}
\begin{lemma} \label{lem:thm1g1}
Assume $\rho<2$. If $\max\left\{\frac{1}{2},\frac{1}{2\rho-\rho^2+2},1-\beta\left(\frac{\rho-1}{208}\right)\right\}<\lambda<\min\left\{\frac{5-\rho}{6+\rho^3-3\rho^2},1+\beta\left(\frac{\rho-1}{208}\right)\right\}$, then $\left|\pderiv{g}{x}\right|+\left|\pderiv{g}{y}\right|+\left|\pderiv{g}{z}\right|\leq\frac{5-\rho}{4}<1$. \end{lemma} \begin{proof}
Choose $\varepsilon$ in Lemma~\ref{lem:gbound} such that $52\varepsilon<\frac{\rho-1}{4}$.
According to Lemma~\ref{cor:conclusion}, this can be done by setting $|\lambda-1|<\beta\left(\frac{\rho-1}{208}\right)$. \begin{enumerate}[(1)]
\item When $\lambda>1$, according to Lemma~\ref{lem:gbound}, we have
\begin{align*}
\left|\pderiv{g}{x}\right|+\left|\pderiv{g}{y}\right|+\left|\pderiv{g}{z}\right| &\leq
\frac{3\lambda-1}{\lambda\rho^2+1}+\frac{\rho-1}{4} \\
&\leq \frac{3\frac{5-\rho}{6+\rho^3-3\rho^2}-1}{\frac{5-\rho}{6+\rho^3-3\rho^2}\rho^2+1}+\frac{\rho-1}{4} \\
&= \frac{5-\rho}{4}.
\end{align*}
\item When $\lambda\leq 1$, according to Lemma~\ref{lem:gbound}, we have
\begin{align*}
\left|\pderiv{g}{x}\right|+\left|\pderiv{g}{y}\right|+\left|\pderiv{g}{z}\right| &\leq
\frac{\lambda+1}{\lambda\rho^2+1}+\frac{\rho-1}{4} \\
&\leq \frac{\frac{1}{2\rho-\rho^2+2}+1}{\frac{1}{2\rho-\rho^2+1}\rho^2+1}+\frac{\rho-1}{4} \\
&= \frac{5-\rho}{4}.
\end{align*} \end{enumerate} \end{proof}
\begin{lemma} \label{lem:thm1g2}
Assume $\rho\geq 2$. If $\max\left\{\frac{1}{2},1-\beta\left(\frac{1}{416}\right)\right\}<\lambda<\min\left\{2,1+\beta\left(\frac{1}{416}\right)\right\}$, then $\left|\pderiv{g}{x}\right|+\left|\pderiv{g}{y}\right|+\left|\pderiv{g}{z}\right|\leq\frac{7}{8}$. \end{lemma} \begin{proof}
Choose $\varepsilon$ in Lemma~\ref{lem:gbound} such that $52\varepsilon<\frac{1}{8}$.
According to Lemma~\ref{cor:conclusion}, this can be done by setting $|\lambda-1|<\beta\left(\frac{1}{416}\right)$. \begin{enumerate}[(1)]
\item When $\lambda>1$, according to Lemma~\ref{lem:gbound}, we have
\begin{align*}
\left|\pderiv{g}{x}\right|+\left|\pderiv{g}{y}\right|+\left|\pderiv{g}{z}\right| &\leq
\frac{3\lambda-1}{4\lambda+1}+\frac{1}{8}
\leq \frac{7}{8}.
\end{align*}
\item When $\lambda<1$, according to Lemma~\ref{lem:gbound}, we have
\begin{align*}
\left|\pderiv{g}{x}\right|+\left|\pderiv{g}{y}\right|+\left|\pderiv{g}{z}\right| &\leq
\frac{\lambda+1}{4\lambda+1}+\frac{1}{8}
\leq \frac{\frac{1}{2}+1}{4\frac{1}{2}+1}+\frac{1}{8}<\frac{7}{8}.
\end{align*} \end{enumerate} \end{proof}
\begin{lemma}\label{lem:dh}
Let $\max\left\{\frac{1}{2},\frac{1}{\rho}\right\}<\lambda<2$ and $\varepsilon<\frac{1}{4}$. If $\rho-\varepsilon<x<\rho+\varepsilon$, then $\left|\deriv{h}{x}\right|\leq\frac{\lambda\mu\rho+\lambda+\lambda\mu^2}{1+2\lambda\mu\rho+\lambda\mu^2\rho}+64\varepsilon$. \end{lemma} \begin{proof}
\begin{align*}
\left|\deriv{h}{x}\right|
&= \left|\frac{\lambda(c\mu+1)-\lambda\mu^2}{(1+\lambda\mu x)^2}\right| \\
&\le \frac{\lambda\rho\mu+\lambda+\lambda\mu^2}{(1+\lambda\mu(\rho-\varepsilon))^2} \\
&\le \frac{\lambda\mu\rho+\lambda+\lambda\mu^2}{(1+\lambda\mu\rho)^2-2\lambda\mu(1+\lambda\mu\rho)\varepsilon}\\
&\le \frac{\lambda\mu\rho+\lambda+\lambda\mu^2}{(1+\lambda\mu\rho)^2}+\frac{(\mu\rho+1+\mu^2)2\lambda^2\mu}{(1+\lambda\mu\rho)\left(\left(1+\lambda\mu\rho\right)^2-2\lambda\mu\left(1+\lambda\mu\rho\right)\varepsilon\right)}\varepsilon \\
&\le \frac{\lambda\mu\rho+\lambda+\lambda\mu^2}{1+2\lambda\mu\rho+\lambda\mu^2\rho}+\frac{(\mu\rho+1+\mu^2)2\lambda^2\mu}{(1+\lambda\mu\rho)(1+2\lambda\mu\rho+\lambda^2\mu^2\rho^2-\frac{1}{2}\lambda\mu-\frac{1}{2}\lambda^2\mu^2\rho)}\varepsilon \\
&\le \frac{\lambda\mu\rho+\lambda+\lambda\mu^2}{1+2\lambda\mu\rho+\lambda\mu^2\rho}+\frac{(\mu\rho+1+\mu^2)2\lambda^2\mu}{(1+\lambda\mu\rho)\left(1+\frac{1}{2}\lambda^2\mu^2\rho\right)}\varepsilon \\
&= \frac{\lambda\mu\rho+\lambda+\lambda\mu^2}{1+2\lambda\mu\rho+\lambda\mu^2\rho}+\frac{2\lambda^2\mu^2\rho+2\lambda^2\mu+2\lambda^2\mu^3}{\frac{1}{2}\lambda^3\mu^3\rho^2+\frac{1}{2}\lambda^2\mu^2\rho+\lambda\mu\rho+1}\varepsilon \\
&\le \frac{\lambda\mu\rho+\lambda+\lambda\mu^2}{1+2\lambda\mu\rho+\lambda\mu^2\rho}+\frac{8\mu^2\rho+8\mu+8\mu^3}{\frac{1}{8}\mu^2\rho+\mu+\frac{1}{4}\mu^3}\varepsilon \\
&\le \frac{\lambda\mu\rho+\lambda+\lambda\mu^2}{1+2\lambda\mu\rho+\lambda\mu^2\rho}+64\varepsilon
\le \max\left\{\frac{1}{2},\frac{\lambda\mu+\lambda+\lambda\mu^2}{1+2\lambda\mu+\lambda\mu^2}\right\}+64\varepsilon
\end{align*} \end{proof}
\begin{lemma}\label{lem:dht1}
Assume $0<\mu<1$. If $\max\left\{\frac{1}{2},\frac{1}{\rho},1-\beta\left(\frac{\mu}{1280}\right)\right\}<\lambda<\min\left\{2,\frac{10-\mu}{\mu^3+2\mu^2-10\mu+10},1+\beta\left(\frac{\mu}{1280}\right)\right\}$, then $\left|\deriv{h}{x}\right|\leq 1-\frac{\mu}{20}<1$. \end{lemma} \begin{proof}
Choose $\varepsilon$ in Lemma~\ref{lem:dh} such that $64\varepsilon<\frac{\mu}{20}$, according to Lemma~\ref{cor:conclusion}, this can be done by setting $|\lambda-1|<\beta\left(\frac{\mu}{1280}\right)$.
\begin{align*}
\left|\deriv{h}{x}\right| &\leq \max\left\{\frac{1}{2},\frac{\lambda\mu+\lambda+\lambda\mu^2}{1+2\lambda\mu+\lambda\mu^2}\right\}+\frac{\mu}{20} \\
&\leq \max\left\{\frac{1}{2},\frac{\frac{10-\mu}{\mu^3+2\mu^2-10\mu+10}\left(\mu+1+\mu^2\right)}{1+2\frac{10-\mu}{\mu^3+2\mu^2-10\mu+10}\mu+\frac{10-\mu}{\mu^3+2\mu^2-10\mu+10}\mu^2}\right\}+\frac{\mu}{20} \\
&= 1-\frac{\mu}{20}.
\end{align*} \end{proof}
\begin{lemma}\label{lem:dht2}
Assume $\mu\ge 1$. If $\max\left\{\frac{1}{2},\frac{1}{\rho},1-\beta\left(\frac{\rho-1}{256\rho}\right)\right\}<\lambda<\min\left\{2,1+\beta\left(\frac{\rho-1}{512\rho}\right)\right\}$, then $\left|\deriv{h}{x}\right|\leq \frac{3\rho+1}{4\rho}<1$. \end{lemma} \begin{proof}
According to Lemma~\ref{lem:dh},
\begin{align*}
\left|\deriv{h}{x}\right|\leq \frac{\mu\rho+1+\mu^2}{2\mu\rho+\mu^2\rho}+64x\varepsilon.
\end{align*}
Set $\alpha=\frac{\rho+1}{2\rho}$. Since
\begin{align*}
\frac{\mu\rho+1+\mu^2}{2\mu\rho+\mu^2\rho}-\alpha &= \frac{(\mu\rho+1-2\alpha\mu\rho)+(\mu^2-\alpha\mu^2\rho)}{2\mu\rho+\mu^2\rho}<0,
\end{align*}
if we choose $\varepsilon$ in Lemma~\ref{lem:dh} such that $128\varepsilon<\frac{1-\alpha}{2}$, then
\begin{align*}
\left|\deriv{h}{x}\right|\leq \alpha+\frac{1-\alpha}{2}=\frac{3\rho+1}{4\rho}.
\end{align*} \end{proof}
\begin{lemma}\label{lem:dghx}
Let $\max\left\{\frac{1}{2},\frac{1}{\rho}\right\}<\lambda<2$ and $\varepsilon<\frac{1}{4}$. If $\rho-\varepsilon<x<\rho+\varepsilon$, then $\left|\pderiv{\hat{g}}{x}\right|\leq\frac{\lambda\rho y+\lambda^2 y^2+1}{1+2\lambda\rho y+\lambda^2\rho^2 y^2}+256\varepsilon$. \end{lemma} \begin{proof}
\begin{align*}
\left|\pderiv{\hat{g}}{x}\right|
&\le \frac{\lambda\rho y+\lambda^2 y^2+1}{\left(\lambda\left(\rho-\varepsilon\right)y+1\right)^2} \\
&\le \frac{\lambda\rho y+\lambda^2 y^2+1}{\lambda^2\rho^2y^2+2\lambda\rho y+1-2\lambda y(\lambda\rho y+1)\varepsilon}\\
&\leq \frac{\lambda^2\rho^2 y+\lambda^2 y^2+1}{\lambda^2\rho^2 y^2+2\lambda\rho y+1}
+\frac{(\lambda\rho y+\lambda^2 y^2+1)2\lambda y}{\left(\lambda\rho y+1\right)\left(\lambda^2\rho^2 y^2+2\lambda\rho y+1-2\lambda y\left(\lambda\rho y+1\right)\varepsilon\right)}\varepsilon \\
&\le \frac{\lambda\rho y+\lambda^2 y^2+1}{\lambda^2\rho^2 y^2+2\lambda\rho y+1}
+\frac{(\lambda\rho y+\lambda^2 y^2+1)2\lambda y}{\left(\lambda\rho y+1\right)\left(\lambda^2\rho^2 y^2+2\lambda\rho y+1-\frac{1}{2}\lambda^2\rho y^2-\frac{1}{2}\lambda y\right)}\varepsilon \\
&\le \frac{\lambda\rho y+\lambda^2 y^2+1}{\lambda^2\rho^2 y^2+2\lambda\rho y+1}
+\frac{(\lambda\rho y+\lambda^2 y^2+1)2\lambda y}{\left(\lambda\rho y+1\right)\left(1+\lambda\rho y+\lambda^2 \rho y^2\left(\rho-\frac{1}{2}\right)\right)}\varepsilon \\
&\le \frac{\lambda\rho y+\lambda^2 y^2+1}{\lambda^2\rho^2 y^2+2\lambda\rho y+1}
+\frac{(\lambda\rho y+\lambda^2 y^2+1)2\lambda y}{\left(\lambda\rho y+1\right)\left(1+\frac{1}{2}\lambda^2\rho y^2\right)}\varepsilon \\
&= \frac{\lambda\rho y+\lambda^2 y^2+1}{\lambda^2\rho^2 y^2+2\lambda\rho y+1}
+\frac{2\lambda^2\rho y^2+2\lambda^3 y^3+2\lambda y}{\lambda\rho y+1+\frac{1}{2}\lambda^3\rho^2 y^3+\frac{1}{2}\lambda^2\rho y^2}\varepsilon \\
&= \frac{\lambda\rho y+\lambda^2 y^2+1}{\lambda^2\rho^2 y^2+2\lambda\rho y+1}
+\frac{8\rho y^2+16y^3+4y}{\frac{1}{8}\rho y^2+\frac{1}{16}y^3+\frac{1}{2}y}\varepsilon \\
&= \frac{\lambda\rho y+\lambda^2 y^2+1}{\lambda^2\rho^2 y^2+2\lambda\rho y+1}+256\varepsilon.
\end{align*} \end{proof}
In the following, we fix $p$ as a nonnegative number, then we have:
\begin{lemma}\label{lem:thm1ghx}
If $\max\left\{\frac{1}{2},\frac{1}{\rho},1-\beta\left(\frac{1}{512}\min\left\{\frac{\rho^2-1}{2\rho^2},\frac{p}{p+1}\right\}\right)\right\}<\lambda<\min\left\{2,1+\beta\left(\frac{1}{512}\min\left\{\frac{\rho^2-1}{2\rho^2},\frac{p}{p+1}\right\}\right)\right\}$ and $y\geq p$, then $\left|\pderiv{\hat{g}}{x}\right|<\max\left\{\frac{\rho^2+1}{2\rho^2},\frac{3p+4}{4p+4}\right\}<1$. \end{lemma} \begin{proof}
Set $\alpha=\max\left\{\frac{1}{\rho^2},\frac{p+2}{2p+2}\right\}$. Since
$$\frac{\lambda\rho y+\lambda^2 y^2+1}{\lambda^2\rho^2 y^2+2\lambda \rho y+1}-\alpha
=\frac{(\lambda\rho y+1-2\alpha\lambda\rho y-\alpha)+(\lambda^2 y^2-\alpha\lambda^2\rho^2 y^2)}{\lambda\rho y^2+2\lambda\rho y+1}<0,$$
if we choose $\varepsilon$ such that $256\varepsilon<\frac{1-\alpha}{2}$, then according to Lemma~\ref{lem:dghx}, we have
$$\left|\pderiv{\hat{g}}{x}\right|\leq \alpha+\frac{1-\alpha}{2}=\frac{1+\alpha}{2}=\max\left\{\frac{\rho^2+1}{2\rho^2+1},\frac{3p+4}{4p+4}\right\}<1.$$ \end{proof}
It follows from Lemma~\ref{lem:dht1} and Lemma~\ref{lem:dht2} that
\begin{lemma}\label{lem:dh1}
Assume $p<1$. If $\max\left\{\frac{1}{2},\frac{1}{\rho},1-\beta\left(\frac{p}{1280}\right)\right\}<\lambda<\min\left\{2,\frac{10-p}{p^3+2p^2-10p+10},1+\beta\left(\frac{p}{1280}\right)\right\}$, then $\left|\deriv{h}{x}\right|\leq \max\left\{\frac{3\rho+1}{4\rho},1-\frac{p}{20}\right\}<1$. \end{lemma}
\begin{lemma}\label{lem:dh2}
Assume $p\geq 1$. If $\max\left\{\frac{1}{2},\frac{1}{\rho},1-\beta\left(\frac{\rho-1}{512\rho}\right)\right\}<\lambda<\min\left\{2,1+\beta\left(\frac{\rho-1}{512\rho}\right)\right\}$, then $\left|\deriv{h}{x}\right|\leq \frac{3\rho+1}{4\rho}<1$. \end{lemma}
If we define $h(\mu,x):=\frac{\mu+(c\mu+1)\lambda x}{1+\lambda\mu x}=h(x)$, then $\tilde g(x,y)=h(x,y)$. Thus $\pderiv{\tilde h(x,y)}{y}$. Then according to Lemma~\ref{lem:dh1} and Lemma~\ref{lem:dh2}, we have
\begin{lemma}\label{lem:thm1ghy1}
Assume $0<p<1$. If $\max\left\{\frac{1}{2},\frac{1}{\rho},1-\beta\left(\frac{p}{1280}\right)\right\}<\lambda<\min\left\{2,\frac{10-p}{p^3+2p^2-10p+10},1+\beta\left(\frac{p}{1280}\right)\right\}$ and $x\ge p$, then $\left|\deriv{\hat{g}}{y}\right|\leq \max\left\{\frac{3\rho+1}{4\rho},1-\frac{p}{20}\right\}<1$. \end{lemma}
\begin{lemma}\label{lem:thm1ghy2}
Assume $p\geq 1$. If $\max\left\{\frac{1}{2},\frac{1}{\rho},1-\beta\left(\frac{\rho-1}{512\rho}\right)\right\}<\lambda<\min\left\{2,1+\beta\left(\frac{\rho-1}{512\rho}\right)\right\}$ and $\ge p$, then $\left|\pderiv{\hat{g}}{y}\right|\leq \frac{3\rho+1}{4\rho}<1$. \end{lemma}
\begin{proof}[Proof of Theorem~\ref{thm:1}] According to Lemma~\ref{lem:algo} and Lemma~\ref{lem:thm1g1},~\ref{lem:thm1g2},~\ref{lem:dh1},~\ref{lem:dh2},~\ref{lem:thm1ghx},~\ref{lem:thm1ghy1},~\ref{lem:thm1ghy2}, we have the following results. \begin{enumerate}[C{a}se I.]
\item If $0<p<1$ and $1<\rho<2$, then
\begin{align*}
\lambda_1=\max\left\{\frac{1}{2},\frac{1}{2\rho-\rho^2+2},1-\beta\left(\frac{\rho-1}{208}\right),
\frac{1}{\rho},1-\beta\left(\frac{p}{1280}\right), \right.\\
\left. 1-\beta\left(\frac{1}{512}\min\left\{\frac{\rho^2-1}{2\rho^2},\frac{p}{p+1}\right\}\right)\right\},
\end{align*}
\begin{align*}
\lambda_2=\min\left\{\frac{5-\rho}{6+\rho^3-3\rho^2},1+\beta\left(\frac{\rho-1}{208}\right),
\frac{10-p}{p^3+2p^2-10p+10}, \right. \\ \left. 1+\beta\left(\frac{p}{1280}\right),
1+\beta\left(\frac{1}{512}\min\left\{\frac{\rho^2-1}{2\rho^2},\frac{p}{p+1}\right\}\right)\right\},
\end{align*}
\begin{align*}
\alpha=\max\left\{\frac{5-\rho}{4},
\frac{3\rho+1}{4\rho},1-\frac{p}{20},
\frac{\rho^2+1}{2\rho^2},\frac{3p+4}{4p+4}\right\}.
\end{align*}
\item If $0<p<1$ and $\rho \geq 2$, then
\begin{align*}
\lambda_1=\max\left\{\frac{1}{2},1-\beta\left(\frac{1}{416}\right),
\frac{1}{\rho},1-\beta\left(\frac{p}{1280}\right),
1-\beta\left(\frac{1}{512}\min\left\{\frac{\rho^2-1}{2\rho^2},\frac{p}{p+1}\right\}\right)\right\},
\end{align*}
\begin{align*}
\lambda_2=\min\left\{2,1+\beta\left(\frac{1}{416}\right),
\frac{10-p}{p^3+2p^2-10p+10}, 1+\beta\left(\frac{p}{1280}\right), \right. \\ \left.
1+\beta\left(\frac{1}{512}\min\left\{\frac{\rho^2-1}{2\rho^2},\frac{p}{p+1}\right\}\right)\right\},
\end{align*}
\begin{align*}
\alpha=\max\left\{\frac{7}{8},
\frac{3\rho+1}{4\rho},1-\frac{p}{20},
\frac{\rho^2+1}{2\rho^2},\frac{3p+4}{4p+4}\right\}.
\end{align*}
\item If $p \geq 1$ and $1<\rho<2$, then
\begin{align*}
\lambda_1=\max\left\{\frac{1}{2},\frac{1}{2\rho-\rho^2+2},1-\beta\left(\frac{\rho-1}{208}\right),
\frac{1}{\rho},1-\beta\left(\frac{\rho-1}{512\rho}\right), \right. \\
\left. 1-\beta\left(\frac{1}{512}\min\left\{\frac{\rho^2-1}{2\rho^2},\frac{p}{p+1}\right\}\right)\right\},
\end{align*}
\begin{align*}
\lambda_2=\min\left\{\frac{5-\rho}{6+\rho^3-3\rho^2},1+\beta\left(\frac{\rho-1}{208}\right),
1+\beta\left(\frac{\rho-1}{512\rho}\right), \right. \\ \left.
1+\beta\left(\frac{1}{512}\min\left\{\frac{\rho^2-1}{2\rho^2},\frac{p}{p+1}\right\}\right)\right\},
\end{align*}
\begin{align*}
\alpha=\max\left\{\frac{5-\rho}{4},
\frac{3\rho+1}{4\rho},
\frac{\rho^2+1}{2\rho^2},\frac{3p+4}{4p+4}\right\}.
\end{align*}
\item If $p \geq 1$ and $\rho \geq 2$, then
\begin{align*}
\lambda_1=\max\left\{\frac{1}{2},1-\beta\left(\frac{1}{416}\right),
\frac{1}{\rho},1-\beta\left(\frac{\rho-1}{512\rho}\right), \right. \\
\left. 1-\beta\left(\frac{1}{512}\min\left\{\frac{\rho^2-1}{2\rho^2},\frac{p}{p+1}\right\}\right)\right\},
\end{align*}
\begin{align*}
\lambda_2=\min\left\{2,1+\beta\left(\frac{1}{416}\right),
1+\beta\left(\frac{\rho-1}{512\rho}\right), \right. \\ \left.
1+\beta\left(\frac{1}{512}\min\left\{\frac{\rho^2-1}{2\rho^2},\frac{p}{p+1}\right\}\right)\right\},
\end{align*}
\begin{align*}
\alpha=\max\left\{\frac{7}{8},
\frac{3\rho+1}{4\rho},
\frac{\rho^2+1}{2\rho^2},\frac{3p+4}{4p+4}\right\}.
\end{align*} \end{enumerate} Here the $\beta\left(\cdot\right)$ is defined in the beginning of the section. \end{proof}
\subsection{Proof of Theorem~\ref{thm:2}}
In order to use Lemma~\ref{lem:algo}, we need to establish four inequalities of the form $\frac{A}{B}\leq \alpha<1$. It turns out that each multivariable polynomial $A-\alpha B$ enjoys the property that the highest degrees of variables $x,y,z,\lambda$ are no greater than two. Therefore it is possible to determine the monotonicity of each variable within the given range.
We shall show that it is decreasing with respect to variables $x,y,z,\lambda$ respectively and verify the fact that $A-\alpha B |_{x=y=z=c,\lambda=1} < 0$.
In the following proof, we use $\Phi(x)=x$ as potential function and a different set of recursions from those used in the proof of Theorem~\ref{thm:1}. These recursions can be obtained by the same methods proposed in Section~\ref{sec:recursion} except for one step: when converting an instance with two dangling edges to one with single dangling edge, we define three different sub-instances. Let $\Omega^{e'}$ denote the sub-instance of $\Omega^{e',e_1}$ achieved by leaving $e_1$ free (which is equivalent to attaching a vertex with signature $[1,1]$ on the dangling end of $e_1$), and set $\Omega^{e_1}=\Pin{\Omega^{e', e_1}}{e'}{0}$ and $\widetilde{\Omega}^{e_1}=\Pin{\Omega^{e', e_1}}{e'}{1}$. Now we have $$\mathbb{P}_{\Omega^{e'}}(\sigma(e')=0)=\mathbb{P}_{\Omega^{e', e_1}}(\sigma(e')=0),$$
$$\mathbb{P}_{\Omega^{e_1}}(\sigma(e_1)=0)=\mathbb{P}_{\Omega^{e', e_1}}(\sigma(e_1)=0|\sigma(e')=0),$$
$$\mathbb{P}_{\widetilde{\Omega}^{e_1}}(\sigma(e_1)=0)=\mathbb{P}_{\Omega^{e', e_1}}(\sigma(e_1)=0|\sigma(e')=1).$$ Applying the remaining steps in Section~\ref{sec:recursion} gives the following recursions: \begin{align*}
g(x,y,z)&=\frac{x(1+y)+\lambda y(1+z)+\lambda c x(1+y)z}{1+z+\lambda x(1+y)z} \\
\frac{\partial g}{\partial x}&=-\frac{(y+1)(z+1)(-c\lambda z+\lambda^2 yz-1)}{(1+z+\lambda x(1+y)z)^2} \\
\frac{\partial g}{\partial y}&=\frac{(z+1)(\lambda(cxz+z+1)+\lambda^2 xz+x)}{(1+z+\lambda x(1+y)z)^2} \\
\frac{\partial g}{\partial z}&=-\frac{x(y+1)(\lambda(x-c+y(\lambda+x))+1)}{(1+z+\lambda x(1+y)z)^2} \end{align*} The functions $h(x)$ and $\hat{g}(x,y)$ are the same as in the previous subsection.
In the following, let $c_0=1.17$ be a constant.
Note that $\frac{\partial g}{\partial y}(x,y,z) \geq 0$ and $\frac{\partial g}{\partial z}(x,y,z) \leq 0$ for $x,y,z\geq c$ and $\lambda \geq 1$. Let $$g_1(x,y,z)=\frac{\frac{\partial g}{\partial x}(x,y,z)x+\frac{\partial g}{\partial y}(x,y,z)y-\frac{\partial g}{\partial z}(x,y,z)z}{g(x,y,z)},$$ $$g_2(x,y,z)=\frac{-\frac{\partial g}{\partial x}(x,y,z)x+\frac{\partial g}{\partial y}(x,y,z)y-\frac{\partial g}{\partial z}(x,y,z)z}{g(x,y,z)}.$$ Then
$$\frac{\left|\frac{\partial g}{\partial x}(x,y,z)\right|\Phi(x)+\left|\frac{\partial g}{\partial y}(x,y,z)\right|\Phi(y)+\left|\frac{\partial g}{\partial z}(x,y,z)\right|\Phi(z)}{\Phi(g(x,y,z))}=\max\left\{g_1(x,y,z),g_2(x,y,z)\right\}.$$ We shall bound $g_1$ and $g_2$ separately. \begin{lemma} \label{lem:22} $2 (22 c^3+c^2+c+1)+4 (-94 c^3-54 c^2+c) \leq 0$ for $c \geq c_0$. \end{lemma}
\begin{lemma} \label{lem:17} $-36 c^2 \lambda^2+2 c (-94 c^2 \lambda^2+22 c^2 \lambda-36 c \lambda^2)+2 c^2 \lambda+2 c (\lambda^2+\lambda+21)+2 \lambda \leq 0$ for $\lambda \geq 1$ and $c \geq c_0$. \end{lemma} \begin{proof} \begin{align*}
& -36 c^2 \lambda^2+2 c (-94 c^2 \lambda^2+22 c^2 \lambda-36 c \lambda^2)+2 c^2 \lambda+2 c (\lambda^2+\lambda+21)+2 \lambda \\
=& 2 (-94 c^3-54 c^2+c) \lambda^2+2 (22 c^3+c^2+c+1) \lambda+42 c \end{align*} which is a parabola of $\lambda$ and according to Lemma~\ref{lem:22}, its center is to the left of $1$. Therefore it is decreasing with $\lambda$, and if we set $\lambda=1$ we have \begin{align*}
& -36 c^2 \lambda^2+2 c (-94 c^2 \lambda^2+22 c^2 \lambda-36 c \lambda^2)+2 c^2 \lambda+2 c (\lambda^2+\lambda+21)+2 \lambda \\
\leq& 2 (22 c^3+c^2+c+1)+2 (-94 c^3-54 c^2+c)+42 c \\
=& -144 c^3-106 c^2+46 c+2 \leq 0. \end{align*} The last inequality follows from the condition $c \geq c_0$. \end{proof}
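The same three-step pattern (downward parabola, vertex to the left of the domain, evaluation at the left endpoint) recurs throughout this subsection. As an illustrative sanity check outside the formal proof, the instance used in Lemma~\ref{lem:17} can be verified numerically; the coefficients below are read off from the expansion displayed above.

```python
# Spot-check of the parabola argument in Lemma lem:17 (illustrative, not part
# of the formal proof).  From the expansion above,
#   q(lam) = a(c) * lam^2 + b(c) * lam + 42 c,
# with a(c) = 2(-94 c^3 - 54 c^2 + c) and b(c) = 2(22 c^3 + c^2 + c + 1).
# For c >= c_0 = 1.17 the parabola opens downward, its vertex -b/(2a) lies
# left of 1 (equivalently 2a + b <= 0), so q is decreasing on lam >= 1 and
# q(lam) <= q(1) <= 0.

def coeffs(c):
    a = 2 * (-94 * c**3 - 54 * c**2 + c)
    b = 2 * (22 * c**3 + c**2 + c + 1)
    return a, b

def q(c, lam):
    a, b = coeffs(c)
    return a * lam**2 + b * lam + 42 * c

for i in range(200):
    c = 1.17 + 0.05 * i                    # grid of values c >= c_0
    a, b = coeffs(c)
    assert a < 0 and 2 * a + b <= 0        # downward, vertex left of 1
    assert q(c, 1.0) <= 0                  # non-positive at lam = 1
    for lam in (1.0, 1.5, 2.0, 5.0, 50.0):
        assert q(c, lam) <= q(c, 1.0)      # decreasing on lam >= 1
```

The same loop, with the coefficients swapped, spot-checks any of the analogous parabola arguments in the lemmas that follow.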
\begin{lemma} \label{lem:36} $2 c (11 c^3+c^2+c+1)+4 c (-47 c^3-36 c^2+c) \leq 0$ for $c \geq c_0$. \end{lemma}
\begin{lemma} \label{lem:15} $2 c (z (c \lambda+\lambda^2+21)+\lambda z^2 (11 c-9 \lambda)+11)+2 c (-2 c \lambda^2 z (19 z+9)-\lambda z (9 c \lambda z-1))-2 \lambda z (9 c \lambda z-1) \leq 0$ for $z \geq c$, $\lambda \geq 1$, $c \geq c_0$. \end{lemma} \begin{proof} \begin{align*}
& 2 c (z (c \lambda+\lambda^2+21)+\lambda z^2 (11 c-9 \lambda)+11)+2 c (-2 c \lambda^2 z (19 z+9) \\
& \quad -\lambda z (9 c \lambda z-1))-2 \lambda z (9 c \lambda z-1) \\
=& z^2 (-94 c^2 \lambda^2+22 c^2 \lambda-36 c \lambda^2)+z (-36 c^2 \lambda^2+2 c^2 \lambda+2 c (\lambda^2+\lambda+21)+2 \lambda)+22 c. \end{align*} This is a parabola of $z$ and according to Lemma~\ref{lem:17}, its center is to the left of $c$ so it is decreasing with $z$. If we set $z=c$ we have \begin{align*}
& 2 c (z (c \lambda+\lambda^2+21)+\lambda z^2 (11 c-9 \lambda)+11)+2 c (-2 c \lambda^2 z (19 z+9) \\
& \quad -\lambda z (9 c \lambda z-1))-2 \lambda z (9 c \lambda z-1) \\
\leq & c^2 (-94 c^2 \lambda^2+22 c^2 \lambda-36 c \lambda^2)+c (-36 c^2 \lambda^2+2 c^2 \lambda+2 c (\lambda^2+\lambda+21)+2 \lambda)+22 c \\
=& 2 c (-47 c^3-36 c^2+c) \lambda^2+2 c (11 c^3+c^2+c+1) \lambda+2 c (21 c+11). \end{align*} This is a parabola of $\lambda$ and according to Lemma~\ref{lem:36}, its center is to the left of $1$ so it is decreasing with $\lambda$. If we set $\lambda=1$ we have \begin{align*}
& 2 c (z (c \lambda+\lambda^2+21)+\lambda z^2 (11 c-9 \lambda)+11)+2 c (-2 c \lambda^2 z (19 z+9) \\
& \quad -\lambda z (9 c \lambda z-1))-2 \lambda z (9 c \lambda z-1) \\
\leq & 2 c (11 c^3+c^2+c+1)+2 c (-47 c^3-36 c^2+c)+2 c (21 c+11) \\
=& 2 c (-36 c^3-35 c^2+23 c+12) \leq 0. \end{align*} The last inequality follows from the condition $c \geq c_0$. \end{proof}
\begin{lemma} \label{lem:46} $44 c^4+6 c^3-17 c^2+2 (-94 c^4-90 c^3-16 c^2)+2 c+1 \leq 0$ for $c \geq c_0$. \end{lemma} \begin{proof} This is a numerical result. \end{proof}
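Lemmas whose proofs read ``this is a numerical result'' are univariate polynomial inequalities in $c$ and can be certified mechanically. As a hedged sketch (not part of the paper's argument), Lemma~\ref{lem:46} expands to a single polynomial whose derivative is negative for $c \geq 1$, so negativity at $c_0$ suffices:

```python
# Mechanical check of Lemma lem:46 (an illustrative sketch; the paper simply
# cites a numerical computation).  The left-hand side expands to
#   p(c) = -144 c^4 - 174 c^3 - 49 c^2 + 2 c + 1,
# and p'(c) = -576 c^3 - 522 c^2 - 98 c + 2 < 0 for c >= 1, so p is
# decreasing there and p(c) <= p(c_0) < 0 for all c >= c_0 = 1.17.

def p(c):
    return (44 * c**4 + 6 * c**3 - 17 * c**2
            + 2 * (-94 * c**4 - 90 * c**3 - 16 * c**2) + 2 * c + 1)

def p_expanded(c):
    return -144 * c**4 - 174 * c**3 - 49 * c**2 + 2 * c + 1

def dp(c):
    return -576 * c**3 - 522 * c**2 - 98 * c + 2

c0 = 1.17
assert abs(p(c0) - p_expanded(c0)) < 1e-9  # the expansion is correct
assert p(c0) < 0                           # negative at the left endpoint
for i in range(500):
    c = c0 + 0.1 * i
    assert dp(c) < 0 and p(c) < 0          # decreasing and negative beyond
```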
\begin{lemma} \label{lem:44} $-18 c^3 \lambda^2+2 c^3 \lambda+c^2 (2 \lambda^2-17 \lambda+42)+2 c (-47 c^3 \lambda^2+22 c^3 \lambda+2 c^2 \lambda (1-18 \lambda)-9 c \lambda^2)+2 c (\lambda+11)+\lambda \leq 0$ for $\lambda \geq 1$ and $c \geq c_0$. \end{lemma} \begin{proof} \begin{align*}
& -18 c^3 \lambda^2+ 2 c^3 \lambda+c^2 (2 \lambda^2-17 \lambda+42)+ 2 c (-47 c^3 \lambda^2+22 c^3 \lambda +2 c^2 \lambda (1-18 \lambda)-9 c \lambda^2) \\
& \quad +2 c (\lambda+11)+\lambda \\
=& 42 c^2+(-94 c^4-90 c^3-16 c^2) \lambda^2+(44 c^4+6 c^3-17 c^2+2 c+1) \lambda+22 c. \end{align*} This is a parabola of $\lambda$ and according to Lemma~\ref{lem:46} its center is to the left of $1$, so it is decreasing with $\lambda$. If we set $\lambda=1$ we have \begin{align*}
& -18 c^3 \lambda^2+2 c^3 \lambda+c^2 (2 \lambda^2-17 \lambda+42)+2 c (-47 c^3 \lambda^2+22 c^3 \lambda+2 c^2 \lambda (1-18 \lambda)-9 c \lambda^2) \\
& \quad +2 c (\lambda+11)+\lambda \\
\leq & -50 c^4-84 c^3+9 c^2+24 c+1 \leq 0. \end{align*} The last inequality follows from the condition $c \geq c_0$. \end{proof}
\begin{lemma} \label{lem:53} $22 c^5+4 c^4-17 c^3+2 c^2+2 (-47 c^5-54 c^4-7 c^3)+c \leq 0$ for $c \geq c_0$. \end{lemma} \begin{proof} This is a numerical result. \end{proof}
\begin{lemma} \label{lem:13} $2 c (y (z (c \lambda+\lambda^2+21)+\lambda z^2 (11 c-9 \lambda)+11)+c \lambda z^2-9 c \lambda z-\lambda^2 y^2 z (19 z+9)+11 z+1)-\lambda (y+1)^2 z (9 c \lambda z-1) \leq 0$ for $y,z \geq c$, $\lambda \geq 1$, $c \geq c_0$. \end{lemma} \begin{proof} \begin{align*}
& 2 c (y (z (c \lambda+\lambda^2+21)+\lambda z^2 (11 c-9 \lambda)+11)+c \lambda z^2-9 c \lambda z-\lambda^2 y^2 z (19 z+9)+11 z+1) \\
& \quad -\lambda (y+1)^2 z (9 c \lambda z-1) \\
=& 2 c^2 \lambda z^2-18 c^2 \lambda z+y^2 (-2 c \lambda^2 z (19 z+9)-\lambda z (9 c \lambda z-1)) \\
& \quad +y (2 c (z (c \lambda+\lambda^2+21)+\lambda z^2 (11 c-9 \lambda)+11)-2 \lambda z (9 c \lambda z-1)) \\
& \quad -\lambda z (9 c \lambda z-1)+22 c z+2 c. \end{align*} This is a parabola of $y$ and according to Lemma~\ref{lem:15} its center is to the left of $c$, so it is decreasing with $y$. If we set $y=c$ we have \begin{align*}
& 2 c (y (z (c \lambda+\lambda^2+21)+\lambda z^2 (11 c-9 \lambda)+11)+c \lambda z^2-9 c \lambda z-\lambda^2 y^2 z (19 z+9)+11 z+1) \\
& \quad -\lambda (y+1)^2 z (9 c \lambda z-1) \\
\leq & c^2 (-2 c \lambda^2 z (19 z+9)-\lambda z (9 c \lambda z-1))+2 c^2 \lambda z^2-18 c^2 \lambda z \\
& \quad +c (2 c (z (c \lambda+\lambda^2+21)+\lambda z^2 (11 c-9 \lambda)+11)-2 \lambda z (9 c \lambda z-1)) \\
& \quad -\lambda z (9 c \lambda z-1)+22 c z+2 c \\
=& 22 c^2+z^2 (-47 c^3 \lambda^2+22 c^3 \lambda+2 c^2 \lambda (1-18 \lambda)-9 c \lambda^2) \\
& \quad +z (-18 c^3 \lambda^2+2 c^3 \lambda+c^2 (2 \lambda^2-17 \lambda+42)+2 c (\lambda+11)+\lambda)+2 c. \end{align*} This is a parabola of $z$ and according to Lemma~\ref{lem:44} its center is to the left of $c$, so it is decreasing with $z$. If we set $z=c$ we have \begin{align*}
& 2 c (y (z (c \lambda+\lambda^2+21)+\lambda z^2 (11 c-9 \lambda)+11)+c \lambda z^2-9 c \lambda z-\lambda^2 y^2 z (19 z+9)+11 z+1) \\
& \quad -\lambda (y+1)^2 z (9 c \lambda z-1) \\
\leq & 22 c^2+c^2 (-47 c^3 \lambda^2+22 c^3 \lambda+2 c^2 \lambda (1-18 \lambda)-9 c \lambda^2) \\
& \quad +c (-18 c^3 \lambda^2+2 c^3 \lambda+c^2 (2 \lambda^2-17 \lambda+42)+2 c (\lambda+11)+\lambda)+2 c \\
=& 42 c^3+44 c^2+(-47 c^5-54 c^4-7 c^3) \lambda^2+(22 c^5+4 c^4-17 c^3+2 c^2+c) \lambda+2 c. \end{align*} This is a parabola of $\lambda$ and according to Lemma~\ref{lem:53} its center is to the left of $1$, so it is decreasing with $\lambda$. If we set $\lambda=1$ we have \begin{align*}
& 2 c (y (z (c \lambda+\lambda^2+21)+\lambda z^2 (11 c-9 \lambda)+11)+c \lambda z^2-9 c \lambda z-\lambda^2 y^2 z (19 z+9)+11 z+1) \\
& \quad -\lambda (y+1)^2 z (9 c \lambda z-1) \\
\leq & -25 c^5-50 c^4+18 c^3+46 c^2+3 c \leq 0. \end{align*} The last inequality follows from the condition $c \geq c_0$. \end{proof}
\begin{lemma} \label{lem:66} $24 c^3+3 c^2+2 (-36 c^5-36 c^4-76 c^3-36 c^2+c)+2 c+2 \leq 0$ for $c \geq c_0$. \end{lemma} \begin{proof} This is a numerical result. \end{proof}
\begin{lemma} \label{lem:64} $2 c^3 \lambda-18 c^2 \lambda^2+3 c^2 \lambda+2 c (-18 c^4 \lambda^2-18 c^3 \lambda^2-38 c^2 \lambda^2+11 c^2 \lambda-9 c \lambda^2+\lambda)+c (\lambda^2+21)+2 \lambda \leq 0$ for $\lambda \geq 1$ and $c \geq c_0$. \end{lemma} \begin{proof} \begin{align*}
& 2 c^3 \lambda-18 c^2 \lambda^2+3 c^2 \lambda+2 c (-18 c^4 \lambda^2-18 c^3 \lambda^2-38 c^2 \lambda^2+11 c^2 \lambda-9 c \lambda^2+\lambda)+c (\lambda^2+21)+2 \lambda \\
=& (24 c^3+3 c^2+2 c+2) \lambda+(-36 c^5-36 c^4-76 c^3-36 c^2+c) \lambda^2+21 c. \end{align*} This is a parabola of $\lambda$ and according to Lemma~\ref{lem:66} its center is to the left of $1$, so it is decreasing with $\lambda$. If we set $\lambda=1$ we have \begin{align*}
& 2 c^3 \lambda-18 c^2 \lambda^2+3 c^2 \lambda+2 c (-18 c^4 \lambda^2-18 c^3 \lambda^2-38 c^2 \lambda^2+11 c^2 \lambda-9 c \lambda^2+\lambda)+c (\lambda^2+21)+2 \lambda \\
\leq & -36 c^5-36 c^4-52 c^3-33 c^2+24 c+2 \leq 0. \end{align*} The last inequality follows from the condition $c \geq c_0$. \end{proof}
\begin{lemma} \label{lem:74} $13 c^4+3 c^3+c^2+2 (-18 c^6-18 c^5-38 c^4-27 c^3+c^2)+2 c+1 \leq 0$ for $c \geq c_0$. \end{lemma} \begin{proof} This is a numerical result. \end{proof}
\begin{lemma} \label{lem:60} $-18 c^3 \lambda^2 z^2+c^2 \lambda z (11 z+3)+2 c (-9 c^3 \lambda^2 z^2+c^2 \lambda z-c \lambda^2 z (19 z+9))+c (-9 \lambda^2 z^2+(\lambda^2+21) z+11)+\lambda (z+1)^2 \leq 0$ for $z \geq c$, $\lambda \geq 1$, $c \geq c_0$. \end{lemma} \begin{proof} \begin{align*}
& -18 c^3 \lambda^2 z^2+c^2 \lambda z (11 z+3)+2 c (-9 c^3 \lambda^2 z^2+c^2 \lambda z-c \lambda^2 z (19 z+9)) \\
& \quad +c (-9 \lambda^2 z^2+(\lambda^2+21) z+11)+\lambda (z+1)^2 \\
=& z (2 c^3 \lambda-18 c^2 \lambda^2+3 c^2 \lambda+c (\lambda^2+21)+2 \lambda) \\
& \quad +z^2 (-18 c^4 \lambda^2-18 c^3 \lambda^2-38 c^2 \lambda^2+11 c^2 \lambda-9 c \lambda^2+\lambda)+11 c+\lambda. \end{align*} This is a parabola of $z$ and according to Lemma~\ref{lem:64} its center is to the left of $c$, so it is decreasing with $z$. If we set $z=c$ we have \begin{align*}
& -18 c^3 \lambda^2 z^2+c^2 \lambda z (11 z+3)+2 c (-9 c^3 \lambda^2 z^2+c^2 \lambda z-c \lambda^2 z (19 z+9)) \\
& \quad +c (-9 \lambda^2 z^2+(\lambda^2+21) z+11)+\lambda (z+1)^2 \\
\leq & c (2 c^3 \lambda-18 c^2 \lambda^2+3 c^2 \lambda+c (\lambda^2+21)+2 \lambda) \\
& \quad +c^2 (-18 c^4 \lambda^2-18 c^3 \lambda^2-38 c^2 \lambda^2+11 c^2 \lambda-9 c \lambda^2+\lambda)+11 c+\lambda \\
=& 21 c^2+(13 c^4+3 c^3+c^2+2 c+1) \lambda+(-18 c^6-18 c^5-38 c^4-27 c^3+c^2) \lambda^2+11 c. \end{align*} This is a parabola of $\lambda$ and according to Lemma~\ref{lem:74} its center is to the left of $1$, so it is decreasing with $\lambda$. If we set $\lambda=1$ we have \begin{align*}
& -18 c^3 \lambda^2 z^2+c^2 \lambda z (11 z+3)+2 c (-9 c^3 \lambda^2 z^2+c^2 \lambda z-c \lambda^2 z (19 z+9)) \\
& \quad +c (-9 \lambda^2 z^2+(\lambda^2+21) z+11)+\lambda (z+1)^2 \\
\leq & -18 c^6-18 c^5-25 c^4-24 c^3+23 c^2+13 c+1 \leq 0. \end{align*} The last inequality follows from the condition $c \geq c_0$. \end{proof}
\begin{lemma} \label{lem:85} $c (23 c^3+5 c^2-6 c+2)+2 c (-18 c^5-36 c^4-56 c^3-27 c^2+c) \leq 0$ for $c \geq c_0$. \end{lemma} \begin{proof} This is a numerical result. \end{proof}
\begin{lemma} \label{lem:82} $c (c^3 \lambda-9 c^2 \lambda^2+3 c^2 \lambda+c (\lambda^2-8 \lambda+21)+2 \lambda+11)+2 c^2 (-9 c^4 \lambda^2-18 c^3 \lambda^2-28 c^2 \lambda^2+11 c^2 \lambda+c (\lambda-9 \lambda^2)+\lambda) \leq 0$ for $\lambda \geq 1$ and $c \geq c_0$. \end{lemma} \begin{proof} \begin{align*}
& c (c^3 \lambda-9 c^2 \lambda^2+3 c^2 \lambda+c (\lambda^2-8 \lambda+21)+2 \lambda+11) \\
& \quad +2 c^2 (-9 c^4 \lambda^2-18 c^3 \lambda^2-28 c^2 \lambda^2+11 c^2 \lambda+c (\lambda-9 \lambda^2)+\lambda) \\
=& c (23 c^3+5 c^2-6 c+2) \lambda+c (-18 c^5-36 c^4-56 c^3-27 c^2+c) \lambda^2+c (21 c+11). \end{align*} This is a parabola of $\lambda$ and according to Lemma~\ref{lem:85} its center is to the left of $1$, so it is decreasing with $\lambda$. If we set $\lambda=1$ we have \begin{align*}
& c (c^3 \lambda-9 c^2 \lambda^2+3 c^2 \lambda+c (\lambda^2-8 \lambda+21)+2 \lambda+11) \\
& \quad +2 c^2 (-9 c^4 \lambda^2-18 c^3 \lambda^2-28 c^2 \lambda^2+11 c^2 \lambda+c (\lambda-9 \lambda^2)+\lambda) \\
\leq & c (23 c^3+5 c^2-6 c+2)+c (-18 c^5-36 c^4-56 c^3-27 c^2+c)+c (21 c+11) \\
=& -c (18 c^5+36 c^4+33 c^3+22 c^2-16 c-13) \leq 0. \end{align*} The last inequality follows from the condition $c \geq c_0$. \end{proof}
\begin{lemma} \label{lem:93} $12 c^5+4 c^4-7 c^3+2 c^2+2 (-9 c^7-18 c^6-28 c^5-18 c^4+c^3)+c \leq 0$ for $c \geq c_0$. \end{lemma} \begin{proof} This is a numerical result. \end{proof}
\begin{lemma} \label{lem:6} $g_1(x,y,z) \leq \frac{9}{10}$ for $x,y,z \geq c$, $\lambda \geq 1$, $c \geq c_0$. \end{lemma} \begin{proof} $g_1(x,y,z) \leq \frac{9}{10}$ is equivalent to \begin{align*} x (y (z (c \lambda+\lambda^2+21)+\lambda z^2 (11 c-9 \lambda)+11)+c \lambda z^2-9 c \lambda z-\lambda^2 y^2 z (19 z+9)+11 z+1) \\
-\lambda x^2 (y+1)^2 z (9 c \lambda z-1)+\lambda y (z+1)^2 \leq 0. \end{align*} This is a parabola of $x$ and according to Lemma~\ref{lem:13} its center is to the left of $c$, so it is decreasing with $x$. If we set $x=c$ we have \begin{align*}
& x (y (z (c \lambda+\lambda^2+21)+\lambda z^2 (11 c-9 \lambda)+11)+c \lambda z^2-9 c \lambda z-\lambda^2 y^2 z (19 z+9)+11 z+1) \\
& \quad -\lambda x^2 (y+1)^2 z (9 c \lambda z-1)+\lambda y (z+1)^2 \\
\leq & -c^2 \lambda (y+1)^2 z (9 c \lambda z-1)+c (y (z (c \lambda+\lambda^2+21)+\lambda z^2 (11 c-9 \lambda)+11)+c \lambda z^2-9 c \lambda z \\
& \quad -\lambda^2 y^2 z (19 z+9)+11 z+1)+\lambda y (z+1)^2 \\
=& -9 c^3 \lambda^2 z^2+c^2 \lambda z^2-8 c^2 \lambda z+y^2 (-9 c^3 \lambda^2 z^2+c^2 \lambda z-c \lambda^2 z (19 z+9)) \\
& \quad +y (-18 c^3 \lambda^2 z^2+c^2 \lambda z (11 z+3)+c (-9 \lambda^2 z^2+(\lambda^2+21) z+11)+\lambda (z+1)^2)+11 c z+c. \end{align*} This is a parabola of $y$ and according to Lemma~\ref{lem:60} its center is to the left of $c$, so it is decreasing with $y$. If we set $y=c$ we have \begin{align*}
& x (y (z (c \lambda+\lambda^2+21)+\lambda z^2 (11 c-9 \lambda)+11)+c \lambda z^2-9 c \lambda z \\
& \quad -\lambda^2 y^2 z (19 z+9)+11 z+1)-\lambda x^2 (y+1)^2 z (9 c \lambda z-1)+\lambda y (z+1)^2 \\
\leq & -9 c^3 \lambda^2 z^2+c^2 \lambda z^2-8 c^2 \lambda z+c^2 (-9 c^3 \lambda^2 z^2+c^2 \lambda z-c \lambda^2 z (19 z+9)) \\
& \quad +c (-18 c^3 \lambda^2 z^2+c^2 \lambda z (11 z+3)+c (-9 \lambda^2 z^2+(\lambda^2+21) z+11)+\lambda (z+1)^2)+11 c z+c \\
=& c z (c^3 \lambda-9 c^2 \lambda^2+3 c^2 \lambda+c (\lambda^2-8 \lambda+21)+2 \lambda+11) \\
& \quad +c z^2 (-9 c^4 \lambda^2-18 c^3 \lambda^2-28 c^2 \lambda^2+11 c^2 \lambda+c (\lambda-9 \lambda^2)+\lambda)+c (11 c+\lambda+1). \end{align*} This is a parabola of $z$ and according to Lemma~\ref{lem:82} its center is to the left of $c$, so it is decreasing with $z$. If we set $z=c$ we have \begin{align*}
& x (y (z (c \lambda+\lambda^2+21)+\lambda z^2 (11 c-9 \lambda)+11)+c \lambda z^2-9 c \lambda z-\lambda^2 y^2 z (19 z+9)+11 z+1) \\
& \quad -\lambda x^2 (y+1)^2 z (9 c \lambda z-1)+\lambda y (z+1)^2 \\
\leq & c^2 (c^3 \lambda-9 c^2 \lambda^2+3 c^2 \lambda+c (\lambda^2-8 \lambda+21)+2 \lambda+11) \\
& \quad +c^3 (-9 c^4 \lambda^2-18 c^3 \lambda^2-28 c^2 \lambda^2+11 c^2 \lambda+c (\lambda-9 \lambda^2)+\lambda)+c (11 c+\lambda+1) \\
=& 21 c^3+22 c^2+(12 c^5+4 c^4-7 c^3+2 c^2+c) \lambda+(-9 c^7-18 c^6-28 c^5-18 c^4+c^3) \lambda^2+c. \end{align*} This is a parabola of $\lambda$ and according to Lemma~\ref{lem:93} its center is to the left of $1$, so it is decreasing with $\lambda$. If we set $\lambda=1$ we have \begin{align*}
& x (y (z (c \lambda+\lambda^2+21)+\lambda z^2 (11 c-9 \lambda)+11)+c \lambda z^2-9 c \lambda z-\lambda^2 y^2 z (19 z+9)+11 z+1) \\
& \quad -\lambda x^2 (y+1)^2 z (9 c \lambda z-1)+\lambda y (z+1)^2 \\
\leq & -9 c^7-18 c^6-16 c^5-14 c^4+15 c^3+24 c^2+2 c \leq 0. \end{align*} The last inequality follows from the condition $c \geq c_0$. \end{proof}
\begin{lemma} \label{lem:2:19} $1+4 c^2 \lambda+21 \lambda^2+c \lambda (-15+22 \lambda)+2 c \lambda (11 \lambda-36 c^2 \lambda-36 c^3 \lambda+c (-9+2 \lambda)) \leq 0$ for $\lambda \geq 1$ and $c \geq c_0$. \end{lemma} \begin{proof} \begin{align*}
& 1+4 c^2 \lambda+21 \lambda^2+c \lambda (-15+22 \lambda)+2 c \lambda (11 \lambda-36 c^2 \lambda-36 c^3 \lambda+c (-9+2 \lambda)) \\
=& 1+(-15 c-14 c^2) \lambda+(21+44 c+4 c^2-72 c^3-72 c^4) \lambda^2. \end{align*} This is a parabola of $\lambda$ and it is decreasing when $\lambda \geq 1$ and $c \geq c_0$. Therefore if we set $\lambda=1$ we have \begin{align*}
& 1+4 c^2 \lambda+21 \lambda^2+c \lambda (-15+22 \lambda)+2 c \lambda (11 \lambda-36 c^2 \lambda-36 c^3 \lambda+c (-9+2 \lambda)) \\
\leq & 22+29 c-10 c^2-72 c^3-72 c^4 \leq 0. \end{align*} \end{proof}
\begin{lemma} \label{lem:2:17} $-9+(1-15 c \lambda+21 \lambda^2) z+\lambda (-9 c+11 \lambda-36 c^2 \lambda) z^2+2 c \lambda z (2 c-18 c^2 \lambda z+\lambda (11+z)) \leq 0$ for $\lambda \geq 1$, $z \geq c \geq c_0$. \end{lemma} \begin{proof} \begin{align*}
& -9+(1-15 c \lambda+21 \lambda^2) z+\lambda (-9 c+11 \lambda-36 c^2 \lambda) z^2+2 c \lambda z (2 c-18 c^2 \lambda z+\lambda (11+z)) \\
=& -9+(1+4 c^2 \lambda+21 \lambda^2+c \lambda (-15+22 \lambda)) z+\lambda (11 \lambda-36 c^2 \lambda-36 c^3 \lambda+c (-9+2 \lambda)) z^2. \end{align*} This is a parabola of $z$ and according to Lemma~\ref{lem:2:19} it is decreasing when $z \geq c$. Therefore if we set $z=c$ we have \begin{align*}
& -9+(1-15 c \lambda+21 \lambda^2) z+\lambda (-9 c+11 \lambda-36 c^2 \lambda) z^2+2 c \lambda z (2 c-18 c^2 \lambda z+\lambda (11+z)) \\
\leq & -9+c^2 \lambda (11 \lambda-36 c^2 \lambda-36 c^3 \lambda+c (-9+2 \lambda))+c (1+4 c^2 \lambda+21 \lambda^2+c \lambda (-15+22 \lambda)) \\
=& -9+c+(-15 c^2-5 c^3) \lambda+(21 c+33 c^2+2 c^3-36 c^4-36 c^5) \lambda^2. \end{align*} This is a parabola of $\lambda$ and it is decreasing when $\lambda \geq 1$ and $c \geq c_0$. Therefore if we set $\lambda=1$ we have \begin{align*}
& -9+(1-15 c \lambda+21 \lambda^2) z+\lambda (-9 c+11 \lambda-36 c^2 \lambda) z^2+2 c \lambda z (2 c-18 c^2 \lambda z+\lambda (11+z)) \\
\leq & -9+22 c+18 c^2-3 c^3-36 c^4-36 c^5 \leq 0. \end{align*} \end{proof}
\begin{lemma} \label{lem:2:32} $-9-15 c^2 \lambda+2 c^3 \lambda+11 c^2 \lambda^2+c (1-27 \lambda+21 \lambda^2)+2 c (-9 c^2 \lambda-17 c^2 \lambda^2-36 c^3 \lambda^2-18 c^4 \lambda^2+c \lambda (-19+11 \lambda)) \leq 0$ for $\lambda \geq 1$ and $c \geq c_0$. \end{lemma} \begin{proof} \begin{align*}
& -9-15 c^2 \lambda+2 c^3 \lambda+11 c^2 \lambda^2+c (1-27 \lambda+21 \lambda^2) \\
& \quad +2 c (-9 c^2 \lambda-17 c^2 \lambda^2-36 c^3 \lambda^2-18 c^4 \lambda^2+c \lambda (-19+11 \lambda)) \\
=& -9+c+(-27 c-53 c^2-16 c^3) \lambda+(21 c+33 c^2-34 c^3-72 c^4-36 c^5) \lambda^2. \end{align*} This is a parabola of $\lambda$ and it is decreasing when $\lambda \geq 1$ and $c \geq c_0$. Therefore if we set $\lambda=1$ we have \begin{align*}
& -9-15 c^2 \lambda+2 c^3 \lambda+11 c^2 \lambda^2+c (1-27 \lambda+21 \lambda^2) \\
& \quad +2 c (-9 c^2 \lambda-17 c^2 \lambda^2-36 c^3 \lambda^2-18 c^4 \lambda^2+c \lambda (-19+11 \lambda)) \\
\leq & -9-5 c-20 c^2-50 c^3-72 c^4-36 c^5 \leq 0. \end{align*} \end{proof}
\begin{lemma} \label{lem:2:14} $-19-9 z-29 c \lambda z-19 c \lambda z^2+\lambda^2 y^2 z (11+z)-2 c \lambda (1+y)^2 z (-1+9 c \lambda z)+y (-9+(1-19 c \lambda+21 \lambda^2) z+\lambda (-9 c+11 \lambda) z^2) \leq 0$ for $\lambda \geq 1$, $y,z \geq c \geq c_0$. \end{lemma} \begin{proof} \begin{align*}
& -19-9 z-29 c \lambda z-19 c \lambda z^2+\lambda^2 y^2 z (11+z)-2 c \lambda (1+y)^2 z (-1+9 c \lambda z) \\
& \quad +y (-9+(1-19 c \lambda+21 \lambda^2) z+\lambda (-9 c+11 \lambda) z^2) \\
=& -19-9 (1+3 c \lambda) z-c \lambda (19+18 c \lambda) z^2 \\
& \quad +y (-9+(1-15 c \lambda+21 \lambda^2) z+\lambda (-9 c+11 \lambda-36 c^2 \lambda) z^2)+\lambda y^2 z (2 c-18 c^2 \lambda z+\lambda (11+z)). \end{align*} This is a parabola of $y$ and according to Lemma~\ref{lem:2:17} it is decreasing when $y \geq c$. Therefore if we set $y=c$ we have \begin{align*}
& -19-9 z-29 c \lambda z-19 c \lambda z^2+\lambda^2 y^2 z (11+z)-2 c \lambda (1+y)^2 z (-1+9 c \lambda z) \\
& \quad +y (-9+(1-19 c \lambda+21 \lambda^2) z+\lambda (-9 c+11 \lambda) z^2) \\
\leq & -19-9 (1+3 c \lambda) z-c \lambda (19+18 c \lambda) z^2 \\
& \quad +c (-9+(1-15 c \lambda+21 \lambda^2) z+\lambda (-9 c+11 \lambda-36 c^2 \lambda) z^2)+c^2 \lambda z (2 c-18 c^2 \lambda z+\lambda (11+z)) \\
=& -19-9 c+(-9-15 c^2 \lambda+2 c^3 \lambda+11 c^2 \lambda^2+c (1-27 \lambda+21 \lambda^2)) z \\
& \quad +(-9 c^2 \lambda-17 c^2 \lambda^2-36 c^3 \lambda^2-18 c^4 \lambda^2+c \lambda (-19+11 \lambda)) z^2. \end{align*} This is a parabola of $z$ and according to Lemma~\ref{lem:2:32} it is decreasing when $z \geq c$. Therefore if we set $z=c$ we have \begin{align*}
& -19-9 z-29 c \lambda z-19 c \lambda z^2+\lambda^2 y^2 z (11+z)-2 c \lambda (1+y)^2 z (-1+9 c \lambda z) \\
& \quad +y (-9+(1-19 c \lambda+21 \lambda^2) z+\lambda (-9 c+11 \lambda) z^2) \\
\leq & -19-9 c+c^2 (-9 c^2 \lambda-17 c^2 \lambda^2-36 c^3 \lambda^2-18 c^4 \lambda^2+c \lambda (-19+11 \lambda)) \\
& \quad +c (-9-15 c^2 \lambda+2 c^3 \lambda+11 c^2 \lambda^2+c (1-27 \lambda+21 \lambda^2)) \\
=& -(19-c) (1+c)-(1+c) (27 c^2+7 c^3) \lambda-(1+c) (-21 c^2-c^3+18 c^4+18 c^5) \lambda^2. \end{align*} This is a parabola of $\lambda$ and it is decreasing when $\lambda \geq 1$ and $c \geq c_0$. Therefore if we set $\lambda=1$ we have \begin{align*}
& -19-9 z-29 c \lambda z-19 c \lambda z^2+\lambda^2 y^2 z (11+z)-2 c \lambda (1+y)^2 z (-1+9 c \lambda z) \\
& \quad +y (-9+(1-19 c \lambda+21 \lambda^2) z+\lambda (-9 c+11 \lambda) z^2) \\
\leq & -19-18 c-5 c^2-12 c^3-24 c^4-36 c^5-18 c^6 \leq 0. \end{align*} \end{proof}
\begin{lemma} \label{lem:2:64} $c+2 \lambda-17 c^2 \lambda+2 c^3 \lambda+21 c \lambda^2+22 c^2 \lambda^2+2 c (\lambda-9 c^2 \lambda+11 c \lambda^2+2 c^2 \lambda^2-18 c^3 \lambda^2-18 c^4 \lambda^2) \leq 0$ for $\lambda \geq 1$ and $c \geq c_0$. \end{lemma} \begin{proof} \begin{align*}
& c+2 \lambda-17 c^2 \lambda+2 c^3 \lambda+21 c \lambda^2+22 c^2 \lambda^2+2 c (\lambda-9 c^2 \lambda+11 c \lambda^2+2 c^2 \lambda^2-18 c^3 \lambda^2-18 c^4 \lambda^2) \\
=& c+(2+2 c-17 c^2-16 c^3) \lambda+(21 c+44 c^2+4 c^3-36 c^4-36 c^5) \lambda^2. \end{align*} This is a parabola of $\lambda$ and it is decreasing when $\lambda \geq 1$ and $c \geq c_0$. Therefore if we set $\lambda=1$ we have \begin{align*}
& c+2 \lambda-17 c^2 \lambda+2 c^3 \lambda+21 c \lambda^2+22 c^2 \lambda^2+2 c (\lambda-9 c^2 \lambda+11 c \lambda^2+2 c^2 \lambda^2-18 c^3 \lambda^2-18 c^4 \lambda^2) \\
\leq & 2+24 c+27 c^2-12 c^3-36 c^4-36 c^5 \leq 0. \end{align*} \end{proof}
\begin{lemma} \label{lem:2:62} $-18 c^3 \lambda^2 z^2+\lambda (1+z)^2-c^2 \lambda z (17+9 z)+c (-9+z+21 \lambda^2 z+11 \lambda^2 z^2)+2 c (c^2 \lambda z-9 c^3 \lambda^2 z^2+c \lambda^2 z (11+z)) \leq 0$ for $\lambda \geq 1$, $z \geq c \geq c_0$. \end{lemma} \begin{proof} \begin{align*}
& -18 c^3 \lambda^2 z^2+\lambda (1+z)^2-c^2 \lambda z (17+9 z)+c (-9+z+21 \lambda^2 z+11 \lambda^2 z^2) \\
& \quad +2 c (c^2 \lambda z-9 c^3 \lambda^2 z^2+c \lambda^2 z (11+z)) \\
=& -9 c+\lambda+(c+2 \lambda-17 c^2 \lambda+2 c^3 \lambda+21 c \lambda^2+22 c^2 \lambda^2) z \\
& \quad +(\lambda-9 c^2 \lambda+11 c \lambda^2+2 c^2 \lambda^2-18 c^3 \lambda^2-18 c^4 \lambda^2) z^2. \end{align*} This is a parabola of $z$ and according to Lemma~\ref{lem:2:64} it is decreasing when $z \geq c$. Therefore if we set $z=c$ we have \begin{align*}
& -18 c^3 \lambda^2 z^2+\lambda (1+z)^2-c^2 \lambda z (17+9 z)+c (-9+z+21 \lambda^2 z+11 \lambda^2 z^2) \\
& \quad +2 c (c^2 \lambda z-9 c^3 \lambda^2 z^2+c \lambda^2 z (11+z)) \\
\leq & -9 c+\lambda+c (c+2 \lambda-17 c^2 \lambda+2 c^3 \lambda+21 c \lambda^2+22 c^2 \lambda^2) \\
& \quad +c^2 (\lambda-9 c^2 \lambda+11 c \lambda^2+2 c^2 \lambda^2-18 c^3 \lambda^2-18 c^4 \lambda^2) \\
=& -9 c+c^2+(1+2 c+c^2-17 c^3-7 c^4) \lambda+(21 c^2+33 c^3+2 c^4-18 c^5-18 c^6) \lambda^2. \end{align*} This is a parabola of $\lambda$ and it is decreasing when $\lambda \geq 1$ and $c \geq c_0$. Therefore if we set $\lambda=1$ we have \begin{align*}
& -18 c^3 \lambda^2 z^2+\lambda (1+z)^2-c^2 \lambda z (17+9 z)+c (-9+z+21 \lambda^2 z+11 \lambda^2 z^2) \\
& \quad +2 c (c^2 \lambda z-9 c^3 \lambda^2 z^2+c \lambda^2 z (11+z)) \\
\leq & 1-7 c+23 c^2+16 c^3-5 c^4-18 c^5-18 c^6 \leq 0. \end{align*} \end{proof}
\begin{lemma} \label{lem:2:79} $2 c^2 (\lambda-9 c^2 \lambda-8 c^2 \lambda^2-18 c^3 \lambda^2-9 c^4 \lambda^2+c \lambda (-19+11 \lambda))+c (-9+2 \lambda-17 c^2 \lambda+c^3 \lambda+11 c^2 \lambda^2+c (1-28 \lambda+21 \lambda^2)) \leq 0$ for $\lambda \geq 1$ and $c \geq c_0$. \end{lemma} \begin{proof} \begin{align*}
& 2 c^2 (\lambda-9 c^2 \lambda-8 c^2 \lambda^2-18 c^3 \lambda^2-9 c^4 \lambda^2 +c \lambda (-19+11 \lambda)) \\
& \quad +c (-9+2 \lambda-17 c^2 \lambda+c^3 \lambda+11 c^2 \lambda^2+c (1-28 \lambda+21 \lambda^2)) \\
=& (-9+c) c+c (2-26 c-55 c^2-17 c^3) \lambda+c (21 c+33 c^2-16 c^3-36 c^4-18 c^5) \lambda^2. \end{align*} This is a parabola of $\lambda$ and it is decreasing when $\lambda \geq 1$ and $c \geq c_0$. Therefore if we set $\lambda=1$ we have \begin{align*}
& 2 c^2 (\lambda-9 c^2 \lambda-8 c^2 \lambda^2-18 c^3 \lambda^2-9 c^4 \lambda^2+c \lambda (-19+11 \lambda)) \\
& \quad +c (-9+2 \lambda-17 c^2 \lambda+c^3 \lambda+11 c^2 \lambda^2+c (1-28 \lambda+21 \lambda^2)) \\
\leq & -c (7+4 c+22 c^2+33 c^3+36 c^4+18 c^5) \leq 0. \end{align*} \end{proof}
\begin{lemma} \label{lem:2:9} $g_2(x,y,z) \leq \frac{9}{10}$ for $\lambda \geq 1$ and $x,y,z \geq c \geq c_0$. \end{lemma} \begin{proof} $g_2(x,y,z) \leq \frac{9}{10}$ is equivalent to \begin{align*} \lambda y (1+z)^2-\lambda x^2 (1+y)^2 z (-1+9 c \lambda z)+x (-19-9 z-29 c \lambda z-19 c \lambda z^2+\lambda^2 y^2 z (11+z) \\
+y (-9+(1-19 c \lambda+21 \lambda^2) z+\lambda (-9 c+11 \lambda) z^2)) \leq 0. \end{align*} Denote the left-hand side of the above inequality by $A$. $A$ is a parabola of $x$ and according to Lemma~\ref{lem:2:14} it is decreasing when $x \geq c$. Therefore if we set $x=c$ we have \begin{align*}
A \leq & \lambda y (1+z)^2-c^2 \lambda (1+y)^2 z (-1+9 c \lambda z) \\
& \quad +c (-19-9 z-29 c \lambda z-19 c \lambda z^2+\lambda^2 y^2 z (11+z) \\
& \quad \quad +y (-9+(1-19 c \lambda+21 \lambda^2) z+\lambda (-9 c+11 \lambda) z^2)) \\
=& -19 c-9 c z-28 c^2 \lambda z-19 c^2 \lambda z^2-9 c^3 \lambda^2 z^2 \\
& \quad +y^2 (c^2 \lambda z-9 c^3 \lambda^2 z^2+c \lambda^2 z (11+z)) \\
& \quad +y (-18 c^3 \lambda^2 z^2+\lambda (1+z)^2-c^2 \lambda z (17+9 z)+c (-9+z+21 \lambda^2 z+11 \lambda^2 z^2)). \end{align*} This is a parabola of $y$ and according to Lemma~\ref{lem:2:62} it is decreasing when $y \geq c$. Therefore if we set $y=c$ we have \begin{align*}
A \leq & -19 c-9 c z-28 c^2 \lambda z-19 c^2 \lambda z^2-9 c^3 \lambda^2 z^2+c^2 (c^2 \lambda z-9 c^3 \lambda^2 z^2+c \lambda^2 z (11+z)) \\
& \quad +c (-18 c^3 \lambda^2 z^2+\lambda (1+z)^2-c^2 \lambda z (17+9 z)+c (-9+z+21 \lambda^2 z+11 \lambda^2 z^2)) \\
=& c (-19-9 c+\lambda)+c (-9+2 \lambda-17 c^2 \lambda+c^3 \lambda+11 c^2 \lambda^2+c (1-28 \lambda+21 \lambda^2)) z \\
& \quad +c (\lambda-9 c^2 \lambda-8 c^2 \lambda^2-18 c^3 \lambda^2-9 c^4 \lambda^2+c \lambda (-19+11 \lambda)) z^2. \end{align*} This is a parabola of $z$ and according to Lemma~\ref{lem:2:79} it is decreasing when $z \geq c$. Therefore if we set $z=c$ we have \begin{align*}
A \leq & c (-19-9 c+\lambda)+c^3 (\lambda-9 c^2 \lambda-8 c^2 \lambda^2-18 c^3 \lambda^2-9 c^4 \lambda^2+c \lambda (-19+11 \lambda)) \\
& \quad +c^2 (-9+2 \lambda-17 c^2 \lambda+c^3 \lambda+11 c^2 \lambda^2+c (1-28 \lambda+21 \lambda^2)) \\
=& -(19-c) c (1+c)-c (1+c) (-1-c+28 c^2+8 c^3) \lambda-c (1+c) (-21 c^2-c^3+9 c^4+9 c^5) \lambda^2. \end{align*} This is a parabola of $\lambda$ and it is decreasing when $\lambda \geq 1$ and $c \geq c_0$. Therefore if we set $\lambda=1$ we have \begin{align*}
A \leq & -c (18+16 c+5 c^2+14 c^3+16 c^4+18 c^5+9 c^6) \leq 0. \end{align*} \end{proof}
We now combine Lemma~\ref{lem:6} and Lemma~\ref{lem:2:9} to give a bound for
$\frac{\left|\frac{\partial g}{\partial x}(x,y,z)\right|\Phi(x)+\left|\frac{\partial g}{\partial y}(x,y,z)\right|\Phi(y)+\left|\frac{\partial g}{\partial z}(x,y,z)\right|\Phi(z)}{\Phi(g(x,y,z))}$. \begin{lemma} \label{lem:thm2g}
$\frac{\left|\frac{\partial g}{\partial x}(x,y,z)\right|\Phi(x)+\left|\frac{\partial g}{\partial y}(x,y,z)\right|\Phi(y)+\left|\frac{\partial g}{\partial z}(x,y,z)\right|\Phi(z)}{\Phi(g(x,y,z))} \leq \frac{9}{10}$ for $x,y,z \geq c$, $\lambda \geq 1$, $c \geq c_0$. \end{lemma}
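As an illustrative numerical spot-check of the bound in Lemma~\ref{lem:thm2g} (assumed code, not part of the proof), one can evaluate the ratio directly from the formulas for $g$ and its partial derivatives displayed earlier in this subsection:

```python
# Numerical spot-check of Lemma lem:thm2g (illustrative only).  The functions
# below transcribe g and its partial derivatives as displayed earlier in this
# subsection, with the potential function Phi(x) = x.

def ratio(c, lam, x, y, z):
    D = 1 + z + lam * x * (1 + y) * z
    g = (x * (1 + y) + lam * y * (1 + z) + lam * c * x * (1 + y) * z) / D
    gx = -(y + 1) * (z + 1) * (-c * lam * z + lam**2 * y * z - 1) / D**2
    gy = (z + 1) * (lam * (c * x * z + z + 1) + lam**2 * x * z + x) / D**2
    gz = -x * (y + 1) * (lam * (x - c + y * (lam + x)) + 1) / D**2
    return (abs(gx) * x + abs(gy) * y + abs(gz) * z) / g

# Sample the region x, y, z >= c, lambda >= 1, c >= c_0 = 1.17.
for c in (1.17, 1.5, 3.0):
    for lam in (1.0, 2.0, 10.0):
        for s in (1.0, 2.0, 10.0):         # x = y = z = s * c
            assert ratio(c, lam, s * c, s * c, s * c) <= 9 / 10
```

The largest sampled value occurs at the corner $x=y=z=c=c_0$, $\lambda=1$, consistent with the monotonicity arguments used in the lemmas above.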
\begin{lemma} \label{lem:thm2:h}
$\frac{\left|\deriv{h}{x}\left(x\right)\right|\Phi\left(x\right)}{\Phi\left(h\left(x\right)\right)} \leq \max\left\{\frac{1}{2},\frac{1+p+p^2}{(1+p)(1+2p)}\right\}<1$ when $x \geq c \geq 1$, $\lambda \geq 1$, $\mu \geq p>0$. \end{lemma} \begin{proof} \begin{align*}
\frac{\left|\deriv{h}{x}\left(x\right)\right|\Phi\left(x\right)}{\Phi\left(h\left(x\right)\right)} &=
\frac{\left|\lambda(c\mu+1)-\lambda\mu^2\right|x}{(1+\lambda\mu x)(\mu+(c\mu+1)\lambda x)} \\
&\leq \frac{(\lambda(c\mu+1)+\lambda\mu^2) x}{(1+\lambda\mu x)(\mu+(c\mu+1)\lambda x)} \\
&= \frac{\lambda(c\mu+1)+\lambda\mu^2}{\lambda^2(c\mu+1)\mu x+\frac{\mu}{x}+\lambda(c\mu+1)+\lambda\mu^2} \\
&\leq \frac{\lambda(c\mu+1)+\lambda\mu^2}{\lambda^2(c\mu+1)\mu c +\frac{\mu}{c}+\lambda(c\mu+1)+\lambda\mu^2} \\
&= \frac{(c\mu+1)+\mu^2}{\lambda(c\mu+1)\mu c +\frac{\mu}{c \lambda}+(c\mu+1)+\mu^2} \\
&\leq \frac{(c\mu+1)+\mu^2}{(c\mu+1)\mu c +\frac{\mu}{c}+(c\mu+1)+\mu^2} \\
&= \frac{c(1+c \mu+\mu^2)}{(1+c \mu)(c+\mu+c^2 \mu)}. \end{align*} Since the derivative of $\frac{c(1+c \mu+\mu^2)}{(1+c \mu)(c+\mu+c^2 \mu)}$ with respect to $c$ is $$-\frac{\mu((c^4-1)\mu^2+(c^2-1)+2c(c^2-1)\mu+c^2 \mu^2+2c^3 \mu^3)}{(1+c\mu)^2(c+\mu+c^2\mu)^2}\leq 0,$$ it is decreasing when $c \geq 1$. Therefore we have \begin{align*}
\frac{\left|\deriv{h}{x}\left(x\right)\right|\Phi\left(x\right)}{\Phi\left(h\left(x\right)\right)} &\leq
\frac{1+\mu+\mu^2}{(1+\mu)(1+2\mu)} \\
&\leq \max\left\{\frac{1}{2},\frac{1+p+p^2}{(1+p)(1+2p)}\right\}. \end{align*} \end{proof}
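The final step of the proof above bounds $f(\mu)=\frac{1+\mu+\mu^2}{(1+\mu)(1+2\mu)}$ by $\max\{1/2, f(p)\}$ for $\mu \geq p$; since $f(\mu)<1/2$ exactly when $\mu>1$ and $f(\mu)\to 1/2$ as $\mu\to\infty$, this is easy to confirm numerically. A minimal sketch (illustrative only):

```python
# Illustrative check of the last step of Lemma lem:thm2:h.  With
#   f(mu) = (1 + mu + mu^2) / ((1 + mu)(1 + 2 mu)),
# f(mu) < 1/2 iff mu > 1, and f decreases to its minimum before rising back
# towards 1/2, so sup_{mu >= p} f(mu) <= max(1/2, f(p)) < 1 for every p > 0.

def f(mu):
    return (1 + mu + mu**2) / ((1 + mu) * (1 + 2 * mu))

for p in (0.1, 0.5, 1.0, 2.0, 10.0):
    bound = max(0.5, f(p))
    assert bound < 1
    mu = p
    while mu < 1000:                 # geometric sweep over mu >= p
        assert f(mu) <= bound + 1e-12
        mu *= 1.1
```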
\begin{lemma} \label{lem:thm2:ghx}
$\frac{\left|\pderiv{\hat{g}}{x}\right|\Phi(x)}{\Phi\left(\hat{g}\left(x,y\right)\right)} \leq \max\left\{\frac{1}{2},\frac{1+p+p^2}{(1+p)(1+2p)}\right\}<1$ when $x \geq c \geq 1$, $\lambda \geq 1$, $y \geq p>0$. \end{lemma} \begin{proof} \begin{align*}
\frac{\left|\pderiv{\hat{g}}{x}\right|\Phi(x)}{\Phi\left(\hat{g}\left(x,y\right)\right)} &=
\frac{\left|1+c\lambda y-\lambda^2 y^2\right|x}{(1+\lambda xy)(x+\lambda y+\lambda cxy)} \\
&\leq \frac{(1+c\lambda y+\lambda^2 y^2)x}{(1+\lambda xy)(x+\lambda y+\lambda cxy)} \\
&= \frac{1+c\lambda y+\lambda^2 y^2}{(\lambda y+c\lambda^2 y^2)x+\frac{\lambda y}{x}+1+c\lambda y+\lambda^2 y^2} \\
&\leq \frac{1+c\lambda y+\lambda^2 y^2}{(\lambda y+c\lambda^2 y^2)c+\frac{\lambda y}{c}+1+c\lambda y+\lambda^2 y^2} \\
&= \frac{c(1+c\lambda y+\lambda^2 y^2)}{(1+c\lambda y)(c+\lambda y+c^2\lambda y)}. \end{align*} Since the derivative of $\frac{c(1+c\lambda y+\lambda^2 y^2)}{(1+c\lambda y)(c+\lambda y+c^2\lambda y)}$ with respect to $c$ is $$-\frac{\lambda y((c^2-1)+2c\lambda y(c^2-1)+\lambda^2 y^2(c^4-1)+c^2\lambda^2 y^2+2c^3\lambda^3 y^3)}{(1+c\lambda y)^2(c+\lambda y+c^2\lambda y)^2}\leq 0,$$ it is decreasing when $c \geq 1$. Therefore we have \begin{align}
\frac{\left|\pderiv{\hat{g}}{x}\right|\Phi(x)}{\Phi\left(\hat{g}\left(x,y\right)\right)} &\leq
\frac{1+\lambda y+\lambda^2 y^2}{(1+\lambda y)(1+2\lambda y)} \\
&\leq \max\left\{\frac{1}{2},\frac{1+p+p^2}{(1+p)(1+2p)}\right\}. \end{align} \end{proof}
\begin{lemma} \label{lem:thm2:ghy}
$\frac{\left|\pderiv{\hat{g}}{y}\right|\Phi(y)}{\Phi\left(\hat{g}\left(x,y\right)\right)} \leq \max\left\{\frac{1}{2},\frac{1+p+p^2}{(1+p)(1+2p)}\right\}<1$ when $y \geq c \geq 1$, $\lambda \geq 1$, $x \geq p>0$. \end{lemma} \begin{proof} Observe that $\hat{g}(x,y)=h(y)\mid_{\mu=x}$, and thus $\pderiv{\hat{g}}{y}=\deriv{h}{x}(y)\mid_{\mu=x}$. From Lemma~\ref{lem:thm2:h} we know that this bound also holds for
$\frac{\left|\pderiv{\hat{g}}{y}\right|\Phi(y)}{\Phi\left(\hat{g}\left(x,y\right)\right)}$. \end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:2}] The proof follows immediately from Lemma~\ref{lem:algo} and Lemmas~\ref{lem:thm2g}, \ref{lem:thm2:h}, \ref{lem:thm2:ghx}, and~\ref{lem:thm2:ghy}, with $\alpha=\max\left\{\frac{9}{10},\frac{1+p+p^2}{(1+p)(1+2p)}\right\}$. \end{proof}
\subsection{Proof of Theorem~\ref{thm:spin}}
The recursion used in this proof is the same as the one in the proof of Theorem~\ref{thm:1}. The difference is that this time we use the potential function $\Phi(x)=x$.
\begin{lemma} \label{lem:3:dg} If $c/2 \leq x,y,z \leq 2c$, $\lambda>0$, and $c \geq 4\sqrt{\sqrt{2}-1}\approx 2.57$, then there is some constant $\alpha(\lambda,c)<1$ such that
$\frac{\left|\pderiv{g}{x}\right|\Phi\left(x\right)+\left|\pderiv{g}{y}\right|\Phi\left(y\right)+\left|\pderiv{g}{z}\right|\Phi\left(z\right)}{\Phi\left(g\left(x,y,z\right)\right)} \leq \alpha$. \end{lemma} \begin{proof} \begin{align*}
& \frac{\left|\pderiv{g}{x}\right|\Phi\left(x\right)+\left|\pderiv{g}{y}\right|\Phi\left(y\right)+\left|\pderiv{g}{z}\right|\Phi\left(z\right)}{\Phi\left(g\left(x,y,z\right)\right)} \\
=& \frac{(\lambda x z+1) \left(x \left| \frac{-y z \lambda^2+c z \lambda+1}{(\lambda x z+1)^2}\right| +y \left| \frac{\lambda}{\lambda x z+1}\right| +z \left| \frac{\lambda x (-c+x+\lambda y)}{(\lambda x z+1)^2}\right| \right)}{c \lambda x z+\lambda y+x} \\
\leq & \frac{(\lambda x z+1) \left(x \frac{y z \lambda^2+c z \lambda+1}{(\lambda x z+1)^2} +y \frac{\lambda}{\lambda x z+1} +z \left| \frac{\lambda x (-c+x+\lambda y)}{(\lambda x z+1)^2}\right| \right)}{c \lambda x z+\lambda y+x}. \end{align*} Let \begin{align} g_1(x,y,z) &= \frac{(\lambda x z+1) \left(x \frac{y z \lambda^2+c z \lambda+1}{(\lambda x z+1)^2} +y \frac{\lambda}{\lambda x z+1} +z \frac{\lambda x (-c+x+\lambda y)}{(\lambda x z+1)^2} \right)}{c \lambda x z+\lambda y+x} \\ &= \frac{2 c \lambda x z+\lambda^2 x y z-\lambda x^2 z+\lambda y+x}{(\lambda x z+1) (c \lambda x z+\lambda y+x)} \label{eq:thm3:2:g1} \end{align} and \begin{align} g_2(x,y,z) &= \frac{(\lambda x z+1) \left(x \frac{y z \lambda^2+c z \lambda+1}{(\lambda x z+1)^2} +y \frac{\lambda}{\lambda x z+1} -z \frac{\lambda x (-c+x+\lambda y)}{(\lambda x z+1)^2} \right)}{c \lambda x z+\lambda y+x} \\ &= \frac{3 \lambda^2 x y z+\lambda x^2 z+\lambda y+x}{(\lambda x z+1) (c \lambda x z+\lambda y+x)} \label{eq:thm3:2:g2}, \end{align} then
$$\frac{\left|\pderiv{g}{x}\right|\Phi\left(x\right)+\left|\pderiv{g}{y}\right|\Phi\left(y\right)+\left|\pderiv{g}{z}\right|\Phi\left(z\right)}{\Phi\left(g\left(x,y,z\right)\right)} \leq \max\left\{g_1(x,y,z),g_2(x,y,z)\right\}.$$ The result follows immediately after Lemma~\ref{lem:thm3:2:g1} and Lemma~\ref{lem:thm3:2:g2}. \end{proof}
\begin{lemma} \label{lem:thm3:2:g1} If $c/2 \leq x,y,z \leq 2c$, $c>0$, and $\lambda \geq \lambda_0> 0$, then $g_1$ (defined in \eqref{eq:thm3:2:g1}) satisfies $g_1(x,y,z) \leq \frac{8 c^2 \lambda^2+\left(6 c^2+32\right) \lambda+8}{c^2 \left(c^2+8\right) \lambda^2+\left(6 c^2+32\right) \lambda+8} < 1$. \end{lemma} \begin{proof} \begin{align} g_1(x,y,z) &= \frac{2 c \lambda x z+\lambda^2 x y z-\lambda x^2 z+\lambda y+x}{(\lambda x z+1) (c \lambda x z+\lambda y+x)} \label{eq:thm3:2:8} \\ &\leq \frac{3 c^2 \lambda z+2 c \left(\lambda^2 y z+1\right)+4 \lambda y}{(c \lambda z+2) \left(c^2 \lambda z+c+2 \lambda y\right)} \label{eq:thm3:2:13} && \text{by Lemma~\ref{lem:thm3:2:8}} \\ &\leq \frac{4 c \lambda^2 z+\lambda (3 c z+8)+2}{(c \lambda z+2) (\lambda (c z+4)+1)} \label{eq:thm3:2:17} &&\text{by Lemma~\ref{lem:thm3:2:13}} \\ &\leq \frac{8 c^2 \lambda^2+\left(6 c^2+32\right) \lambda+8}{\left(c^2 \lambda+4\right) \left(\left(c^2+8\right) \lambda+2\right)} \label{eq:thm3:2:g1a} &&\text{by Lemma~\ref{lem:thm3:2:17}} \\ &\leq \frac{8 c^2 \lambda_0^2+\left(6 c^2+32\right) \lambda_0+8}{\left(c^2 \lambda_0+4\right) \left(\left(c^2+8\right) \lambda_0+2\right)} &&\text{by Lemma~\ref{lem:thm3:2:g1a}} \\ &= \frac{8 c^2 \lambda_0^2+\left(6 c^2+32\right) \lambda_0+8}{c^2 \left(c^2+8\right) \lambda_0^2+\left(6 c^2+32\right) \lambda_0+8}. \end{align} \end{proof}
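As an independent check on this lemma (again not part of the proof), the following Python sketch samples $g_1$ over the box $[c/2,2c]^3$. Since each step of the inequality chain substitutes $x=z=c/2$, $y=2c$, or $\lambda=\lambda_0$, the bound should be attained exactly at that corner; the grid values are ours.

```python
# Numeric check of the g_1 bound; equality is expected at the corner
# x = z = c/2, y = 2c, lambda = lambda0, where the chain is tight.
def g1(x, y, z, lam, c):
    return ((2*c*lam*x*z + lam**2*x*y*z - lam*x**2*z + lam*y + x)
            / ((lam*x*z + 1)*(c*lam*x*z + lam*y + x)))

def bound(lam0, c):
    return ((8*c**2*lam0**2 + (6*c**2 + 32)*lam0 + 8)
            / (c**2*(c**2 + 8)*lam0**2 + (6*c**2 + 32)*lam0 + 8))

checks = []
for c in [1.0, 3.0, 10.0]:
    for lam0 in [0.5, 1.0, 4.0]:
        b = bound(lam0, c)
        checks.append(b < 1)
        # tight at the corner of the box [c/2, 2c]^3
        checks.append(abs(g1(c/2, 2*c, c/2, lam0, c) - b) < 1e-9)
        # interior sample points stay below the bound
        for t in [0.5, 1.0, 1.5, 2.0]:
            for lam in [lam0, 2*lam0, 10*lam0]:
                checks.append(g1(t*c, t*c, t*c, lam, c) <= b + 1e-9)
print(all(checks))
```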
\begin{lemma} \label{lem:thm3:2:8} \eqref{eq:thm3:2:8} is decreasing monotonically with respect to $x$ when $x \geq c/4$. \end{lemma} \begin{proof} The derivative of \eqref{eq:thm3:2:8} with respect to $x$ is $$-\frac{\lambda z \left(x^2 \left(2 c^2 \lambda^2 z^2+c \lambda z \left(\lambda^2 y z+4\right)+2 \lambda^2 y z+2\right)+2 \lambda x y (c \lambda z+2)-c \lambda y\right)}{(\lambda x z+1)^2 (c \lambda x z+\lambda y+x)^2} \leq 0.$$ \end{proof}
\begin{lemma} \label{lem:thm3:2:13} \eqref{eq:thm3:2:13} is increasing monotonically with respect to $y$. \end{lemma} \begin{proof} The derivative of \eqref{eq:thm3:2:13} with respect to $y$ is $$\frac{2 c^3 \lambda^3 z^2}{(c \lambda z+2) \left(c^2 \lambda z+c+2 \lambda y\right)^2} \geq 0.$$ \end{proof}
\begin{lemma} \label{lem:thm3:2:17} \eqref{eq:thm3:2:17} is decreasing monotonically with respect to $z$. \end{lemma} \begin{proof} The derivative of \eqref{eq:thm3:2:17} with respect to $z$ is $$-\frac{c^2 \lambda^2 z \left(4 c \lambda^2 z+\lambda (3 c z+16)+4\right)}{(c \lambda z+2)^2 (\lambda (c z+4)+1)^2} \leq 0.$$ \end{proof}
\begin{lemma} \label{lem:thm3:2:g1a} \eqref{eq:thm3:2:g1a} is decreasing monotonically with respect to $\lambda$ when $\lambda \geq 0$. \end{lemma} \begin{proof} Writing the denominator of \eqref{eq:thm3:2:g1a} as $c^2\left(c^2+8\right)\lambda^2+\left(6c^2+32\right)\lambda+8$, the derivative of \eqref{eq:thm3:2:g1a} with respect to $\lambda$ is $$-\frac{c^4\lambda\left(\left(6c^2+32\right)\lambda+16\right)}{\left(c^2\lambda+4\right)^2\left(\left(c^2+8\right)\lambda+2\right)^2}\leq 0.$$ \end{proof}
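The three monotonicity lemmas above can likewise be spot-checked numerically. The Python sketch below (sample values ours) confirms on a grid that \eqref{eq:thm3:2:13} increases in $y$ while \eqref{eq:thm3:2:17} and \eqref{eq:thm3:2:g1a} decrease in $z$ and $\lambda$, respectively.

```python
# Grid check of the three monotonicity claims used in the g_1 chain.
def eq13(y, z, c, lam):
    return ((3*c**2*lam*z + 2*c*(lam**2*y*z + 1) + 4*lam*y)
            / ((c*lam*z + 2)*(c**2*lam*z + c + 2*lam*y)))

def eq17(z, c, lam):
    return ((4*c*lam**2*z + lam*(3*c*z + 8) + 2)
            / ((c*lam*z + 2)*(lam*(c*z + 4) + 1)))

def g1a(lam, c):
    return ((8*c**2*lam**2 + (6*c**2 + 32)*lam + 8)
            / ((c**2*lam + 4)*((c**2 + 8)*lam + 2)))

ts = [0.1, 0.5, 1.0, 2.0, 5.0, 20.0]
ok = True
for c in [1.0, 3.0]:
    for lam in [0.5, 2.0]:
        for a, b in zip(ts, ts[1:]):
            ok &= eq13(a, 1.0, c, lam) <= eq13(b, 1.0, c, lam) + 1e-12  # increasing in y
            ok &= eq17(b, c, lam) <= eq17(a, c, lam) + 1e-12            # decreasing in z
            ok &= g1a(b, c) <= g1a(a, c) + 1e-12                        # decreasing in lambda
print(ok)
```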
\begin{lemma} \label{lem:thm3:2:g2} If $c/2 \leq x,y,z \leq 2c$, $\lambda\geq\lambda_0>0$, and $c \geq 4\sqrt{\sqrt{2}-1}\approx 2.57$, then $g_2$ (defined in \eqref{eq:thm3:2:g2}) satisfies $g_2(x,y,z) \leq \frac{192 \lambda_0^2+20 \lambda_0+1}{576 \lambda_0^2+52 \lambda_0+1} <1$. \end{lemma} \begin{proof} \begin{align} g_2(x,y,z) &= \frac{3 \lambda^2 x y z+\lambda x^2 z+\lambda y+x}{(\lambda x z+1) (c \lambda x z+\lambda y+x)} \label{eq:thm3:2:25} \\ &\leq \frac{c^2 \lambda z+c \left(6 \lambda^2 y z+2\right)+4 \lambda y}{(c \lambda z+2) \left(c^2 \lambda z+c+2 \lambda y\right)} \label{eq:thm3:2:27} &&\text{by Lemma~\ref{lem:thm3:2:25}} \\ &\leq \frac{12 c \lambda^2 z+\lambda (c z+8)+2}{(c \lambda z+2) (\lambda (c z+4)+1)} \label{eq:thm3:2:29} &&\text{by Lemma~\ref{lem:thm3:2:27}} \\ &\leq \frac{24 c^2 \lambda^2+2 \left(c^2+16\right) \lambda+8}{\left(c^2 \lambda+4\right) \left(\left(c^2+8\right) \lambda+2\right)} \label{eq:thm3:2:34} &&\text{by Lemma~\ref{lem:thm3:2:29}} \\ &\leq \frac{192 \lambda^2+20 \lambda+1}{576 \lambda^2+52 \lambda+1} \label{eq:thm3:2:g2a} &&\text{by Lemma~\ref{lem:thm3:2:34}} \\ &\leq \frac{192 \lambda_0^2+20 \lambda_0+1}{576 \lambda_0^2+52 \lambda_0+1}. &&\text{by Lemma~\ref{lem:thm3:2:g2a}} \end{align} \end{proof}
\begin{lemma} \label{lem:thm3:2:25} \eqref{eq:thm3:2:25} is decreasing monotonically with respect to $x$ when $c \geq 2$, $x,z \geq c/2$, and $y \leq 2c$. \end{lemma} \begin{proof} The derivative of \eqref{eq:thm3:2:25} with respect to $x$ is $$-\frac{\lambda^2 y z \left(3 c \lambda^2 x^2 z^2+2 c \lambda x z+c+2 \lambda x^2 z-2 \lambda y\right)}{(\lambda x z+1)^2 (c \lambda x z+\lambda y+x)^2}.$$ Since $2\lambda x^2 z \geq 0$, $2\lambda y \leq 4c\lambda$, and $\lambda x z \geq \lambda c^2/4 \geq \lambda$ (as $x,z \geq c/2$ and $c \geq 2$), the bracketed factor satisfies \begin{align*} 3 c \lambda^2 x^2 z^2+2 c \lambda x z+c+2 \lambda x^2 z-2 \lambda y &\geq c\left(3 (\lambda x z)^2+2 \lambda x z+1-4\lambda\right) \\ &\geq c\left(3 \lambda^2-2\lambda+1\right) > 0, \end{align*} so the derivative is negative. \end{proof}
\begin{lemma} \label{lem:thm3:2:27} \eqref{eq:thm3:2:27} is increasing monotonically with respect to $y$. \end{lemma} \begin{proof} The derivative of \eqref{eq:thm3:2:27} with respect to $y$ is $$\frac{2 c^2 \lambda^2 z (3 c \lambda z+4)}{(c \lambda z+2) \left(c^2 \lambda z+c+2 \lambda y\right)^2}.$$ \end{proof}
\begin{lemma} \label{lem:thm3:2:29} \eqref{eq:thm3:2:29} is decreasing monotonically with respect to $z$ when $z \geq c/2$ and $c \geq 4 \sqrt{\sqrt{2}-1} \approx 2.57$. \end{lemma} \begin{proof} The derivative of \eqref{eq:thm3:2:29} with respect to $z$ is \begin{align*} & -\frac{c \lambda \left(12 c^2 \lambda^3 z^2+\lambda^2 \left(c^2 z^2+16 c z-64\right)+4 c \lambda z+4\right)}{(c \lambda z+2)^2 (\lambda (c z+4)+1)^2} \\ \leq & -\frac{c \lambda \left(12 c^2 \lambda^3 z^2+\lambda^2 \left(\frac{c^4}{4} +8 c^2-64\right)+4 c \lambda z+4\right)}{(c \lambda z+2)^2 (\lambda (c z+4)+1)^2} \\ \leq & 0. \end{align*} \end{proof}
\begin{lemma} \label{lem:thm3:2:34} \eqref{eq:thm3:2:34} is decreasing monotonically with respect to $c$ when $c \geq 4 \sqrt{\sqrt{2}-1} \approx 2.57$. \end{lemma} \begin{proof} The derivative of \eqref{eq:thm3:2:34} with respect to $c$ is $$-\frac{4 c \lambda \left(12 c^4 \lambda^3+8 c^2 \lambda+\left(c^4+32 c^2-256\right) \lambda^2+16\right)}{\left(c^2 \lambda+4\right)^2 \left(\left(c^2+8\right) \lambda+2\right)^2} \leq 0.$$ \end{proof}
\begin{lemma} \label{lem:thm3:2:g2a} \eqref{eq:thm3:2:g2a} is decreasing monotonically with respect to $\lambda$ when $\lambda\geq0$. \end{lemma} \begin{proof} The derivative of \eqref{eq:thm3:2:g2a} with respect to $\lambda$ is $-\frac{32(48\lambda^2+24\lambda+1)}{(576\lambda^2+52\lambda+1)^2}<0$. \end{proof}
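The stated closed form of this derivative can be spot-checked against a centered finite difference (a numerical sanity check only; sample points ours):

```python
# Compare the displayed derivative of (192*lam^2+20*lam+1)/(576*lam^2+52*lam+1)
# with a centered finite difference at a few sample points.
def f(lam):
    return (192*lam**2 + 20*lam + 1) / (576*lam**2 + 52*lam + 1)

def df(lam):  # the closed form stated in the proof
    return -32*(48*lam**2 + 24*lam + 1) / (576*lam**2 + 52*lam + 1)**2

h = 1e-6
ok = all(abs((f(l + h) - f(l - h))/(2*h) - df(l)) < 1e-6
         for l in [0.1, 0.5, 1.0, 3.0, 10.0])
print(ok)
```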
\begin{lemma} \label{lem:3:dghx} If $x,y \geq c/2 \geq 1$, $c \geq 1.60$, and $\lambda\geq\lambda_0>0$, then there is some constant $\alpha(\lambda_0)<1$ such that
$\frac{\left|\pderiv{\hat{g}}{x}\right|\Phi\left(x\right)}{\Phi\left(\hat{g}\left(x,y\right)\right)} \leq \alpha(\lambda_0)<1$. \end{lemma} \begin{proof} \begin{align}
& \frac{\left|\pderiv{\hat{g}}{x}\right|\Phi\left(x\right)}{\Phi\left(\hat{g}\left(x,y\right)\right)} \\
=& \frac{x \left|c \lambda y-\lambda^2 y^2+1\right|}{(\lambda x y+1) (c \lambda x y+\lambda y+x)} \\ \leq & \frac{x \left(c \lambda y+\lambda^2 y^2+1\right)}{(\lambda x y+1) (c \lambda x y+\lambda y+x)} \label{eq:thm3:14} \\ \leq & \frac{2 c \left(c \lambda y+\lambda^2 y^2+1\right)}{(c \lambda y+2) \left(c^2 \lambda y+c+2 \lambda y\right)} \label{eq:thm3:16} && \text{by Lemma~\ref{lem:thm3:14}} \\ \leq & \frac{2 c^2 \lambda (\lambda+2)+8}{\left(c^2 \lambda+4\right) \left(\left(c^2+2\right) \lambda+2\right)} \label{eq:thm3:22} && \text{by Lemma~\ref{lem:thm3:16}} \\ \leq & \frac{\lambda^2+2 \lambda+2}{2 \lambda^2+5 \lambda+2} && \text{by Lemma~\ref{lem:thm3:22}} \\ \leq & \max\left\{\frac{\lambda_0^2+2\lambda_0+2}{2\lambda_0^2+5\lambda_0+2},\frac{1}{2}\right\}. && \text{by Lemma~\ref{lem:thm3:dghxa}}. \end{align} \end{proof}
\begin{lemma} \label{lem:thm3:14} \eqref{eq:thm3:14} is decreasing monotonically with respect to $x$ when $x \geq c/2 \geq 1$. \end{lemma} \begin{proof} The derivative of \eqref{eq:thm3:14} with respect to $x$ is $$-\frac{\lambda y \left(c \lambda y+\lambda^2 y^2+1\right) \left(x^2 (c \lambda y+1)-1\right)}{(\lambda x y+1)^2 (c \lambda x y+\lambda y+x)^2} \leq 0.$$ \end{proof}
\begin{lemma} \label{lem:thm3:16} \eqref{eq:thm3:16} is decreasing monotonically with respect to $y$ when $c \geq \sqrt{\frac{1}{2} \left(1+\sqrt{17}\right)} \approx 1.60$. \end{lemma} \begin{proof} The derivative of \eqref{eq:thm3:16} with respect to $y$ is $$-\frac{2 c \lambda \left(c^4 \lambda^2 y^2+2 c^3 \lambda y + c^2 + \left(c^4 - c^2 - 4\right) \lambda^2 y^2\right)}{(c \lambda y+2)^2 \left(c^2 \lambda y+c+2 \lambda y\right)^2} \leq 0.$$ \end{proof}
\begin{lemma} \label{lem:thm3:22} \eqref{eq:thm3:22} is decreasing monotonically with respect to $c$ when $c \geq \sqrt{2}$. \end{lemma} \begin{proof} The derivative of \eqref{eq:thm3:22} with respect to $c$ is $$-\frac{4 c \lambda \left(c^4 \lambda^3+2 \left(c^4-4\right) \lambda^2+8 \left(c^2-2\right) \lambda+8\right)}{\left(c^2 \lambda+4\right)^2 \left(\left(c^2+2\right) \lambda+2\right)^2} \leq 0.$$ \end{proof}
\begin{lemma} \label{lem:thm3:dghxa} If $\lambda \geq \lambda_0>0$, then $\frac{\lambda^2+2 \lambda+2}{2 \lambda^2+5 \lambda+2} \leq \max\left\{\frac{\lambda_0^2+2 \lambda_0+2}{2 \lambda_0^2+5 \lambda_0+2},\frac{1}{2}\right\}<1$. \end{lemma} \begin{proof} Since $\deriv{}{\lambda}\frac{\lambda^2+2 \lambda+2}{2 \lambda^2+5 \lambda+2} = \frac{\lambda^2-4\lambda-6}{(2\lambda^2+5\lambda+2)^2}$, the function is decreasing on $\left(0,2+\sqrt{10}\right)$ and increasing afterwards, approaching $\frac{1}{2}$ from below as $\lambda \to \infty$. Hence its supremum over $[\lambda_0,\infty)$ is attained either at $\lambda_0$ or in the limit. \end{proof}
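A grid check of the boundary-maximum claim (sample values ours): the ratio dips below its value at $\lambda_0$ and then climbs back toward $\frac{1}{2}$ from below, so $\max\left\{\frac{\lambda_0^2+2\lambda_0+2}{2\lambda_0^2+5\lambda_0+2},\frac{1}{2}\right\}$ dominates on $[\lambda_0,\infty)$.

```python
# Check that g(lam) <= max{g(lam0), 1/2} for lam >= lam0 on a sample grid.
def g(lam):
    return (lam**2 + 2*lam + 2) / (2*lam**2 + 5*lam + 2)

ok = all(g(lam) <= max(g(lam0), 0.5) + 1e-12
         for lam0 in [0.2, 1.0, 1.5, 3.0, 6.0, 20.0]
         for lam in [lam0*s for s in [1.0, 1.5, 3.0, 10.0, 100.0]])
print(ok)
```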
\begin{lemma} \label{lem:3:dghy} If $c/2 \leq x,y \leq c+2/c$ and $c>0$, then there exists a constant $\alpha(c)<1$ such that
$\frac{\left|\pderiv{\hat{g}}{y}\right|\Phi\left(y\right)}{\Phi\left(\hat{g}\left(x,y\right)\right)} \leq \alpha(c) <1$. \end{lemma} \begin{proof} \begin{align*}
& \frac{\left|\pderiv{\hat{g}}{y}\right|\Phi\left(y\right)}{\Phi\left(\hat{g}\left(x,y\right)\right)} \\
=& \frac{\lambda y \left|c x-x^2+1\right|}{(\lambda x y+1) (c \lambda x y+\lambda y+x)} \\ \leq & \frac{\lambda y \left(c x+x^2+1\right)}{(\lambda x y+1) (c \lambda x y+\lambda y+x)}. \end{align*}
Since $\lim_{\lambda \to +\infty}\frac{\left|\pderiv{\hat{g}}{y}\right|\Phi\left(y\right)}{\Phi\left(\hat{g}\left(x,y\right)\right)}=0$, there is some constant $\lambda_0(x,y,c)$ such that if $\lambda > \lambda_0$, then
$\frac{\left|\pderiv{\hat{g}}{y}\right|\Phi\left(y\right)}{\Phi\left(\hat{g}\left(x,y\right)\right)}<\frac{1}{2}$. Let $\lambda_1(c)=\max_{c/2 \leq x,y \leq c+2/c} \lambda_0(x,y,c)$. Then if $\lambda>\lambda_1$, we have
$\frac{\left|\pderiv{\hat{g}}{y}\right|\Phi\left(y\right)}{\Phi\left(\hat{g}\left(x,y\right)\right)}<\frac{1}{2}$. Otherwise, \begin{align*}
& \frac{\left|\pderiv{\hat{g}}{y}\right|\Phi\left(y\right)}{\Phi\left(\hat{g}\left(x,y\right)\right)} \\ \leq & 1- \frac{x}{(\lambda x y+1) (c \lambda x y+\lambda y+x)}\\ \leq & 1-\frac{c}{2(4c^2\lambda_1(c)+1)(4c^3\lambda_1(c)+2c\lambda_1(c)+2c)}. \end{align*} The proof is completed by setting $\alpha(c)=\max\left\{\frac{1}{2},1-\frac{c}{2(4c^2\lambda_1(c)+1)(4c^3\lambda_1(c)+2c\lambda_1(c)+2c)}\right\}$. \end{proof}
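The step $\leq 1-\frac{x}{(\lambda xy+1)(c\lambda xy+\lambda y+x)}$ rests on the algebraic identity $(\lambda xy+1)(c\lambda xy+\lambda y+x)-\lambda y(cx+x^2+1)-x=c\lambda^2x^2y^2+\lambda^2xy^2\geq 0$; the expansion is ours and is verified numerically below at random positive points.

```python
# Verify D - lam*y*(c*x + x^2 + 1) - x == c*lam^2*x^2*y^2 + lam^2*x*y^2,
# where D = (lam*x*y + 1)*(c*lam*x*y + lam*y + x).
import random

random.seed(0)
ok = True
for _ in range(200):
    c, lam, x, y = (random.uniform(0.1, 5.0) for _ in range(4))
    D = (lam*x*y + 1)*(c*lam*x*y + lam*y + x)
    lhs = D - lam*y*(c*x + x**2 + 1) - x
    rhs = c*lam**2*x**2*y**2 + lam**2*x*y**2
    ok &= abs(lhs - rhs) < 1e-9 * max(1.0, abs(rhs))
print(ok)
```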
\begin{lemma} \label{lem:3:dh} If $c/2 \leq x,\mu \leq c+2/c$ and $c>0$, then there exists a constant $\alpha(c)<1$ such that
$\frac{\left|\deriv{h}{x}\right|\Phi\left(x\right)}{\Phi\left(h\left(x\right)\right)} \leq \alpha(c)<1$. \end{lemma} \begin{proof} Since $h(x)=\hat{g}(\mu,x)$, the result follows immediately after Lemma~\ref{lem:3:dghy}. \end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:spin-holant}] This theorem is an application of Lemma~\ref{lem:algo}. Required conditions are verified in Lemma~\ref{lem:3:dg}, Lemma~\ref{lem:3:dghx}, Lemma~\ref{lem:3:dghy}, and Lemma~\ref{lem:3:dh}. \end{proof}
}
\ifabs{
\appendix \ \newline \ \newline \ \newline \ \newline \ \newline \ \newline \ \newline \ \newline \ \newline \ \newline \ \newline \ \newline \ \newline \ \newline \ \newline
\center \textbf{\huge Appendix: Full Paper} }{}
\end{document}
\begin{document}
\title{Electromagnetic space-time crystals. I. Fundamental solution of the Dirac equation}
\author{G. N. Borzdov}
\email[]{[email protected]}
\affiliation{Department of Theoretical Physics and Astrophysics, Belarus State University, Nezavisimosti avenue 4, 220030 Minsk, Belarus}
\begin{abstract}
The fundamental solution of the Dirac equation for an electron in an electromagnetic field with harmonic dependence on space-time coordinates is obtained. The field is composed of three standing plane harmonic waves with mutually orthogonal phase planes and the same frequency. Each standing wave consists of two eigenwaves with different complex amplitudes and opposite directions of propagation. The fundamental solution is obtained in the form of the projection operator defining the subspace of solutions to the Dirac equation. \end{abstract}
\pacs{03.65.-w, 12.20.-m, 03.30.+p, 02.10.Ud}
\maketitle
\section{Introduction}
Considerable recent attention has been focused on the possibility of time and space-time crystals~\cite{qtime,cltime,ions,gaps}, analogous to ordinary crystals in space. The papers~\cite{qtime,cltime} provide an affirmative answer to the question of whether time-translation symmetry might be spontaneously broken in a closed quantum-mechanical system~\cite{qtime} and in a time-independent, conservative classical system~\cite{cltime}. A space-time crystal of trapped ions, together with a method to realize it experimentally by confining ions in a ring-shaped trapping potential with a static magnetic field, is proposed in~\cite{ions}. Standing electromagnetic waves comprise another type of space-time crystal. It was shown~\cite{gaps} that one can treat the space-time lattice created by a standing plane electromagnetic wave by analogy with the crystals of nonrelativistic solid-state physics. In particular, the wave functions, calculated within this framework by using first-order perturbation theory for the Schr\"{o}dinger-Stuekelberg equation, are Bloch waves with energy gaps~\cite{gaps}.
Standing electromagnetic waves constitute an interesting family of localized fields which may have important practical applications. In particular, optical standing waves can be used to focus atoms and ions onto a surface in a controlled manner, nondiffracting Bessel beams can be used as optical tweezers which are noninvasive tools generating forces powerful enough to manipulate microscopic particles. Superpositions of homogeneous plane waves propagating in opposite directions, the so-called Whittaker expansions, play a very important role in analyzing and designing localized solutions to various homogeneous partial differential equations \cite{Don92,*Don93,*Sha95}.
In \cite{pre00,*pre01,pre02} we have proposed an approach to designing localized fields that provides a broad spectrum of means to construct electromagnetic fields with a high degree of two-dimensional and three-dimensional spatial localization (2D and 3D localized fields) and with promising practical applications. In particular, it can be used in designing fields to govern the motion of charged and neutral particles. Some illustrations for relativistic electrons in such localized fields have been presented in~\cite{pre02}.
In this series of papers we treat the motion of the Dirac electron in an electromagnetic field with four-dimensional periodicity, i.e., with periodic dependence on all four space-time coordinates. In terms of the three-dimensional description, such an electromagnetic space-time crystal (ESTC) can be treated as a time-harmonic 3D standing wave. In solid state physics, the motion of electrons in natural crystals is described by the Schr\"{o}dinger equation with a periodic electrostatic scalar potential. The description of the motion of electrons in ESTCs by the Dirac equation takes into account both the space-time periodicity of the vector potential and the intrinsic electron properties (charge, spin, and magnetic moment). In this case, the Dirac equation reduces to an infinite system of matrix equations. To solve it, we generalize the operator methods developed in~\cite{cr1,*cr2,*cr3,*oc92,*jmp93,*jmp97,*Wiley} to the cases of infinite-dimensional spaces and finite-dimensional spaces with any number of space dimensions. The evolution, projection, and pseudoinverse operators are of major importance in this approach. The evolution operator (the fundamental solution of a wave equation) describes the field dependence on the space-time coordinates for the whole family of partial solutions. The method of projection operators is very useful in problem solving in classical and quantum field theory~\cite{Fed58,Fed,Bogush}. It was developed by Fedorov~\cite{Fed58,Fed} to treat finite systems of linear homogeneous equations. In the framework of Fedorov's approach, it is necessary first to find projection operators which define subspaces of solutions for two subsystems (constituent parts) of the system to be solved, and then to find its fundamental solution, i.e., the projection operator defining the intersection of these subspaces, by calculating the minimal polynomial for some Hermitian matrix of finite dimensions.
In this paper, we present a different approach, based on the use of pseudoinverse operators, which is applicable to both finite and infinite systems of equations and has no need of minimal polynomials.
The paper is organized as follows. Basic equations in matrix and operator forms are presented in Sec.~\ref{baseq}. The projection operator of a system of homogeneous linear equations (the key concept of our approach) and the relations for its calculation by a recurrent algorithm are introduced in Sec.~\ref{project}. For the infinite system under consideration, the appropriate recurrent algorithm and the fundamental solution are presented in the concluding section. It is well known, e.g., see Ref.~\cite{Axi}, that the 16 Dirac matrices form a basis in the space of $4\times 4$ matrices. In the Appendix, we introduce a specific numeration of these basis matrices, which makes it possible, in particular, to reconstruct a matrix from its number. It yields a convenient way both to represent derived matrix expressions in a concise form and to accelerate numerical calculations. The subsequent two papers will present a fractal approach to numerical analysis of electromagnetic crystals and some results of this analysis, respectively.
\section{\label{baseq}Basic system of equations} \subsection{Matrix form} An electron in an electromagnetic field with the four-dimensional potential $\bm{A}=(\textbf{A},i \varphi)$ is described by the Dirac equation \begin{equation}\label{Dirac}
\left[\gamma_k\left(\frac{\partial}{\partial x_k} - i A_k\frac{e}{c \hbar}\right) + \kappa_{e}\right]\Psi=0, \end{equation} where $\kappa_{e}=m_{e} c/ \hbar$, $c$ is the speed of light in vacuum, $\hbar$ is the reduced Planck constant, $e$ is the electron charge, $m_{e}$ is the electron rest mass, $\gamma_k$ are the Dirac matrices, $\Psi$ is the bispinor, $x_1$, $x_2$ and $x_3$ are the Cartesian coordinates, $x_4=i c t$, and summation over repeated indices is carried out from 1 to 4. The exact solution of this equation for the Dirac electron in the field of a plane electromagnetic wave was obtained in a few forms, e.g., see~\cite{Fed,Tern}, and a general approach to finding approximate solutions of Dirac-type equations for particles of any spin in a weak electromagnetic field with an arbitrary dependence on space-time coordinates was suggested by Fedorov~\cite{Fed}. In this paper, we treat the field with $A_4\equiv i \varphi=0$ and \begin{equation}\label{field}
\textbf{A}^\prime \equiv \frac{e}{m_e c^2}\textbf{A}=\sum_{j=1}^6\left(\textbf{A}_j e^{i \bm{K}_j \cdot \bm{x}}+\textbf{A}_j^\ast e^{-i \bm{K}_j \cdot \bm{x}}\right), \end{equation} which is composed of six plane waves with unit wave normals $\pm\textbf{e}_\alpha$, where $\textbf{e}_\alpha$ are the orthonormal basis vectors, $\alpha =1, 2, 3$; $\bm{x}=(\textbf{r},ict)$, $\textbf{r}=x_1\textbf{e}_1+x_2\textbf{e}_2+x_3\textbf{e}_3$. All six waves have the same frequency $\omega_0$ and \begin{eqnarray} \bm{K}_j&=&k_0 \bm{N}_j, \quad j=1,2,...,6, \quad k_0=\frac{\omega_0}{c}=\frac{2\pi}{\lambda_0},\nonumber \\ \bm{N}_j&=&(\textbf{e}_j,i), \quad \bm{N}_{j+3}=(-\textbf{e}_j,i), \quad j=1,2,3. \label{KNj} \end{eqnarray} They may have any polarization, so that their complex amplitudes are specified by dimensionless real constants $a_{jk}$ and $b_{jk}$ as follows \begin{equation}
\textbf{A}_j = \sum_{k=1}^3 \left(a_{jk} + i b_{jk}\right)\textbf{e}_k, \quad j = 1,2,...,6,\label{abjk} \end{equation} where $a_{jj} = b_{jj} = a_{j+3\,j} = b_{j+3\,j} = 0, j=1,2,3.$
We seek the solution of the Dirac equation in the form of a Fourier series \begin{equation}\label{sol1}
\Psi(\bm x)=\sum_{n \in \mathcal L} c(n) e^{i [\bm{K} + \bm{G}(n)]\cdot \bm{x}}, \end{equation} where $\bm{K} = (\textbf{k},i \omega /c)$ is the four-dimensional wave vector, $\textbf{k} = k_1\textbf{e}_1+k_2\textbf{e}_2+k_3\textbf{e}_3$, $n=(n_1,n_2,n_3,n_4)$ is the multi-index specifying $\textbf{n} = n_1\textbf{e}_1+n_2\textbf{e}_2+n_3\textbf{e}_3$ and $\bm{G}(n) = (k_0 \textbf{n},i k_0 n_4)$. Here, $c(n)$ are the Fourier amplitudes (bispinors), and $\mathcal L$ is the infinite set of all multi-indices $n$ with an even value of the sum $n_1+n_2+n_3+n_4$. Substitution of $\textbf{A}$ (\ref{field}) and $\Psi$ (\ref{sol1}) in Eq.~(\ref{Dirac}) results in the infinite system of matrix equations \begin{equation}\label{meq} \sum_{s \in S_{13}} V(n,s)c(n+s)=0, \quad {n\in \mathcal{L}}, \end{equation}
where $S_{13} = \{s_h(i),i=0,1,...,12\}$ is the set of 13 values of the function $s_h=s_h(i)$, with $s_h(0)=(0,0,0,0)$. At $i=1,...,12$, this function specifies the shifts $s=(s_1,s_2,s_3,s_4)=s_h(i)$ of the multi-indices $n$, defined by the Fourier spectrum of the field $\textbf{A}$ (\ref{field}), which satisfy the condition $|s_1|+|s_2|+|s_3|=|s_4|=1$. Because of this, they will be called the shifts of the first generation [$g_{4d}(s)=1$]. By definition, $g_{4d}(s_1,s_2,s_3,s_4)=\max\{|s_1|+|s_2|+|s_3|,|s_4|\}$. Thus, each equation of the system relates 13 Fourier amplitudes (bispinors); in other words, each amplitude enters 13 different matrix equations.
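The shift set $S_{13}$ is easy to enumerate mechanically. The Python sketch below builds it from the two conditions above and confirms that there are 13 shifts and that each preserves the parity of $n_1+n_2+n_3+n_4$, so the lattice $\mathcal{L}$ is closed under them.

```python
# Enumerate S_13: s = (s1, s2, s3, s4) with either s = 0 or
# |s1| + |s2| + |s3| = |s4| = 1  (i.e. g_4d(s) <= 1 on nonzero shifts).
from itertools import product

S13 = [(0, 0, 0, 0)] + [s for s in product((-1, 0, 1), repeat=4)
                        if abs(s[0]) + abs(s[1]) + abs(s[2]) == 1
                        and abs(s[3]) == 1]

assert len(S13) == 13
# every shift has even coordinate sum, so n + s stays in the lattice L
assert all(sum(s) % 2 == 0 for s in S13)
print(len(S13))
```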
Due to the structure of the Dirac equation, the expansion of $4\times 4$ matrices in the basis formed by 16 Dirac matrices ($\Gamma_k, k=0,...,15$) yields a convenient way both to represent derived matrix expressions in a concise form and accelerate numerical calculations (see appendix). To this end, $4\times 4$ matrix $V$ is described by the set of its components in the Dirac basis [Dirac set of matrix $V$, briefly, D-set of $V$, or $D_s(V)$]. Let us introduce the dimensionless parameters \begin{equation}\label{Qw}
\bm Q = (\textbf{q},i q_4) = {\bm K}/\kappa_e, \quad \Omega = \frac{\hbar \omega_0}{m_e c^2}, \end{equation} \begin{equation}\label{qq4} \textbf{q} = q_1\textbf{e}_1+q_2\textbf{e}_2+q_3\textbf{e}_3 = \frac{\hbar \textbf{ k}}{m_e c}, \quad q_4 = \frac{\hbar \omega}{m_e c^2}. \end{equation} In this notation, the matrix coefficients $V[n,s_h(i)]$, in order of increasing $i=0,1,...,12$, have the following D-sets: \begin{widetext} \begin{eqnarray}
D_s\{V[n,(0,0,0,0)]\}&=&\{1,0,0,0,-w_4,0,0,0,0,0,0,0,0,i w_3,i w_1,i w_2\},\nonumber\\ D_s\{V[n,(0,0,-1,-1)]\}&=&\{0,0,0,0,0,0,0,0,0,0,0,0,0,0,-i a_{31}+b_{31},-i a_{32}+b_{32}\},\nonumber\\ D_s\{V[n,(0,-1,0,-1)]\}&=&\{0,0,0,0,0,0,0,0,0,0,0,0,0,-i a_{23}+b_{23},-i a_{21}+b_{21},0\},\nonumber\\ D_s\{V[n,(-1,0,0,-1)]\}&=&\{0,0,0,0,0,0,0,0,0,0,0,0,0,-i a_{13}+b_{13},0,-i a_{12}+b_{12}\},\nonumber\\ D_s\{V[n,(1,0,0,-1)]\}&=&\{0,0,0,0,0,0,0,0,0,0,0,0,0,-i a_{43}+b_{43},0,-i a_{42}+b_{42}\},\nonumber\\ D_s\{V[n,(0,1,0,-1)]\}&=&\{0,0,0,0,0,0,0,0,0,0,0,0,0,-i a_{53}+b_{53},-i a_{51}+b_{51},0\},\nonumber\\ D_s\{V[n,(0,0,1,-1)]\}&=&\{0,0,0,0,0,0,0,0,0,0,0,0,0,0,-i a_{61}+b_{61},-i a_{62}+b_{62}\},\nonumber\\ D_s\{V[n,(0,0,-1,1)]\}&=&\{0,0,0,0,0,0,0,0,0,0,0,0,0,0,-i a_{61}-b_{61},-i a_{62}-b_{62}\},\nonumber\\ D_s\{V[n,(0,-1,0,1)]\}&=&\{0,0,0,0,0,0,0,0,0,0,0,0,0,-i a_{53}-b_{53},-i a_{51}-b_{51},0\},\nonumber\\ D_s\{V[n,(-1,0,0,1)]\}&=&\{0,0,0,0,0,0,0,0,0,0,0,0,0,-i a_{43}-b_{43},0,-i a_{42}-b_{42}\},\nonumber\\ D_s\{V[n,(1,0,0,1)]\}&=&\{0,0,0,0,0,0,0,0,0,0,0,0,0,-i a_{13}-b_{13},0,-i a_{12}-b_{12}\},\nonumber\\ D_s\{V[n,(0,1,0,1)]\}&=&\{0,0,0,0,0,0,0,0,0,0,0,0,0,-i a_{23}-b_{23},-i a_{21}-b_{21},0\},\nonumber\\ D_s\{V[n,(0,0,1,1)]\}&=&\{0,0,0,0,0,0,0,0,0,0,0,0,0,0,-i a_{31}-b_{31},-i a_{32}-b_{32}\},\label{DsV} \end{eqnarray} \end{widetext} where $n=(n_1,n_2,n_3,n_4)$, $w_j=q_j+n_j \Omega$.
\subsection{Operator form} Let us treat the infinite set $C =\{c(n),n \in {\mathcal L}\}$ of the Fourier amplitudes $c(n)$ of the wave function $\Psi$ (\ref{sol1}) as an element of an infinite dimensional linear space $V_C$. Since, for any $n \in {\mathcal L}$, \begin{equation}\label{cn}
c(n) = \left(
\begin{array}{c}
c^1(n) \\
c^2(n) \\
c^3(n) \\
c^4(n) \\
\end{array}
\right) \equiv \left(
\begin{array}{c}
c^1 \\
c^2 \\
c^3 \\
c^4 \\
\end{array}
\right)_n \end{equation} is a bispinor, $C \in V_C$ will be called a multispinor. Let us define a basis $e_j(n)$ in $V_C$ and the dual basis $\theta^j(n) = e_j^{\dag}(n)$ in the space of one-forms $V_C^\ast$ ($n \in {\mathcal L}$): \begin{eqnarray}\label{e14n}
e_1(n)&=&\left(
\begin{array}{c}
1 \\
0 \\
0 \\
0 \\
\end{array}
\right)_n, \quad e_2(n) = \left(
\begin{array}{c}
0 \\
1 \\
0 \\
0 \\
\end{array}
\right)_n, \nonumber\\
e_3(n)&=&\left(
\begin{array}{c}
0 \\
0 \\
1 \\
0 \\
\end{array}
\right)_n, \quad e_4(n) = \left(
\begin{array}{c}
0 \\
0 \\
0 \\
1 \\
\end{array}
\right)_n, \end{eqnarray} \begin{eqnarray}\label{t14n}
\theta^1(n)&=&\left(
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
\end{array}
\right)_n, \quad \theta^2(n) = \left(
\begin{array}{cccc}
0 & 1 & 0 & 0 \\
\end{array}
\right)_n,\nonumber \\
\theta^3(n)&=&\left(
\begin{array}{cccc}
0 & 0 & 1 & 0 \\
\end{array}
\right)_n, \quad \theta^4(n) = \left(
\begin{array}{cccc}
0 & 0 & 0 & 1 \\
\end{array}
\right)_n . \end{eqnarray} In this notation, the system of equations (\ref{meq}) takes the form \begin{equation}\label{fC}
\langle f^j(n),C\rangle \equiv \sum_{s \in S_{13}} V^j{}_k(n,s) c^k(n+s) = 0, \end{equation} where $j=1,2,3,4$, $n \in {\mathcal L}$, and \begin{eqnarray}
&&f^j(n)=\sum_{s \in S_{13}} V^j{}_k(n,s) \theta^k(n+s),\nonumber\\
&&\langle f^j(n),e_k(n+s)\rangle=V^j{}_k(n,s).\label{fjn} \end{eqnarray} These relations can be rearranged to the basic system of equations \begin{equation}\label{PnC}
P(n)C = 0, \quad n \in {\mathcal L}, \end{equation} where \begin{equation}\label{Pnfaf}
P(n) = [f^{\alpha}(n)]^\dag \otimes a^{\alpha}{}_\beta (n)f^\beta (n) \end{equation} is the Hermitian projection operator with the trace $tr[P(n)]=4$ and the following properties: \begin{equation}\label{Pn2}
[P(n)]^2 = [P(n)]^\dag= P(n), \end{equation} \begin{equation}\label{anL}
a(n) = [L(n)]^{-1}, \quad L^\alpha{}_\beta (n) =\left\langle f^\alpha,\left[f^{\beta}(n)\right]^\dag\right\rangle, \end{equation} where $\alpha , \beta=1,2,3,4$. The Hermitian $4\times 4$ matrices $L(n)$ and $a(n)$ at $n=(n_1,n_2,n_3,n_4)$ are defined by the following D-sets: \begin{eqnarray}
&&D_s[L(n)]=\left\{1+I_A+w_1^2+w_2^2+w_3^2+w_4^2,0,0,0,\right.\nonumber\\
&&\left. -2w_4,0,0,0,0,2w_3w_4,2w_1w_4,2w_2w_4,0,0,0,0 \right\},\label{dsLn} \end{eqnarray} \begin{eqnarray}
&&D_s[a(n)]=\frac{1}{\left|L(n)\right|}\left\{1+I_A+w_1^2+w_2^2+w_3^2+w_4^2,0,0,0,\right.\nonumber\\
&&\left. 2w_4,0,0,0,0,-2w_3w_4,-2w_1w_4,-2w_2w_4,0,0,0,0\right\},\label{dsan} \end{eqnarray} where \begin{eqnarray}
I_A =&& 2\sum_{j=1}^6 \left|\bm{A}_j \right|^2 = 2\left(a_{12}^2+b_{12}^2+a_{13}^2+b_{13}^2+a_{21}^2+b_{21}^2\right. \nonumber\\ &&+a_{23}^2+b_{23}^2+a_{31}^2+b_{31}^2+a_{32}^2+b_{32}^2\nonumber \\
&&+a_{42}^2+b_{42}^2+a_{43}^2+b_{43}^2+a_{51}^2+b_{51}^2\nonumber \\
&&\left. +a_{53}^2 +b_{53}^2+a_{61}^2+b_{61}^2+a_{62}^2+b_{62}^2\right),\label{IA} \end{eqnarray}
\left|L(n)\right|=&&I_A^2 + 2I_A\left(1+w_1^2+w_2^2+w_3^2+w_4^2\right)\nonumber\\
&&+\left(1+w_1^2+w_2^2+w_3^2-w_4^2\right)^2.\label{dLn} \end{eqnarray}
It is significant that, for a nonvanishing electromagnetic field ($I_A\neq 0$), the determinant $\left|L(n)\right|>0$ and hence equations (\ref{PnC})--(\ref{dLn}) are valid for any $n \in {\mathcal L}$.
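This remark can be probed numerically from the closed form (\ref{dLn}) alone. The Python sketch below (sample values ours) evaluates $\left|L(n)\right|$ on the shell $w_4^2=1+w_1^2+w_2^2+w_3^2$, where the field-free expression vanishes, and confirms that any $I_A>0$ makes it strictly positive.

```python
# |L(n)| from Eq. (dLn) as a function of I_A and w_1..w_4.
import math, random

def detL(IA, w1, w2, w3, w4):
    s = w1*w1 + w2*w2 + w3*w3
    return IA**2 + 2*IA*(1 + s + w4*w4) + (1 + s - w4*w4)**2

random.seed(1)
ok = True
for _ in range(100):
    IA = random.uniform(1e-6, 10.0)
    w1, w2, w3 = (random.uniform(-3, 3) for _ in range(3))
    w4 = math.sqrt(1 + w1*w1 + w2*w2 + w3*w3)   # on the shell
    ok &= detL(IA, w1, w2, w3, w4) > 0          # positive once I_A > 0
    ok &= detL(0.0, w1, w2, w3, w4) < 1e-20     # vanishes without the field
print(ok)
```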
\section{Projection operator of a system of homogeneous linear equations\label{project}} Let $\mathcal{V}$ and $\mathcal{V}^\ast$ be a linear space (finite or infinite dimensional) and its dual. At given $\omega \in \mathcal{V}^\ast$, the linear homogeneous equation in $\bm{x} \in \mathcal{V}$ \begin{equation}\label{omx}
\langle\omega,\bm{x}\rangle = 0 \end{equation} can be transformed to the equivalent equation \begin{equation}\label{alx}
\alpha\bm{x} = 0 \end{equation} where \begin{equation}\label{aldyad}
\alpha = \frac{\omega^\dag\otimes\omega}{\langle\omega,\omega^\dag\rangle} \end{equation} is the Hermitian projection operator (dyad) with trace $tr\,\alpha=1$, and $\omega^\dag\in\mathcal{V}$. Let $U$ be the unit operator, i.e., $U\bm{x}=\bm{x}$ for any $\bm{x} \in \mathcal{V}$ and $\omega U = \omega$ for any $\omega \in \mathcal{V}^\ast$. The Hermitian projection operator $S=U-\alpha$ is the fundamental solution of (\ref{alx}), i.e., for any given $\bm{x}_0 \in \mathcal{V}$, $\bm{x}=S\bm{x}_0$ is a particular solution of (\ref{omx}) and (\ref{alx}).
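As a quick numerical illustration (our own sketch, not part of the formalism), the dyad (\ref{aldyad}) and the fundamental solution $S=U-\alpha$ can be checked in a finite-dimensional complex space; the dimension and the covector $\omega$ below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# A covector omega in C^5 (a finite-dimensional stand-in for V*).
omega = rng.normal(size=5) + 1j * rng.normal(size=5)

# Dyad alpha = omega^dag (x) omega / <omega, omega^dag>; omega^dag is
# the conjugate vector, so alpha acts on column vectors x.
alpha = np.outer(omega.conj(), omega) / (omega @ omega.conj())

U = np.eye(5)
S = U - alpha                      # fundamental solution of alpha x = 0

x0 = rng.normal(size=5) + 1j * rng.normal(size=5)
x = S @ x0                         # a particular solution of <omega, x> = 0

print(np.allclose(alpha @ alpha, alpha),      # alpha^2 = alpha
      np.allclose(alpha.conj().T, alpha),     # alpha^dag = alpha
      np.isclose(np.trace(alpha), 1.0),       # tr alpha = 1
      np.isclose(omega @ x, 0.0))             # x solves the equation
```
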
Now let $\alpha$ and $\beta$ be Hermitian projection operators ($\alpha^\dag=\alpha^2=\alpha$, $\beta^\dag=\beta^2=\beta$) in $\mathcal{V}$. Provided the series \begin{equation}\label{Aab}
A=\alpha+\beta+\sum_{k=1}^{+\infty} \left[(\alpha\beta)^k\alpha-(\alpha\beta)^k+(\beta\alpha)^k\beta-(\beta\alpha)^k\right] \end{equation} is convergent, it defines the Hermitian projection operator with the following properties \begin{eqnarray}
A^\dag=A^2=A,\quad \alpha A = A \alpha=\alpha,\nonumber\\
\beta A = A \beta=\beta,\quad tr\,A=tr\,\alpha+ tr\,\beta. \label{Aprop} \end{eqnarray} Hence, the system of equations in $\bm{x}\in\mathcal{V}$ \begin{equation}\label{axbx}
\alpha\bm{x}=0, \quad \beta\bm{x}=0 \end{equation} reduces to the single equation $A\bm{x}=0$ and has the fundamental solution $S=U-A$. The operator $A$ will be called the projection operator of the system (\ref{axbx}). The trace $tr\,\alpha$ of the projection operator $\alpha$ specifies the dimension of the image $\alpha(\mathcal{V})$ of $\mathcal{V}$ under the mapping $\alpha$. It is significant that the relations (\ref{Aab}) and (\ref{Aprop}) are valid for any integer values of $tr\,\alpha$ and $tr\,\beta$. This enables us to extend this approach to systems with any (finite or infinite) number of homogeneous linear equations. To this end, we transform~(\ref{Aab}) to the following expression~\cite{bian04} \begin{equation}
A=(\alpha-\alpha\beta\alpha)^{-}(U-\beta)+(\beta-\beta\alpha\beta)^{-}(U-\alpha),\label{Apseu} \end{equation} where $(\alpha-\alpha\beta\alpha)^{-}$ is the pseudoinverse operator with the following properties \begin{eqnarray}
&&(\alpha-\alpha\beta\alpha)^{-}(\alpha-\alpha\beta\alpha)=(\alpha-\alpha\beta\alpha)(\alpha-\alpha\beta\alpha)^{-}=\alpha,\nonumber\\
&&\alpha(\alpha-\alpha\beta\alpha)^{-}=(\alpha-\alpha\beta\alpha)^{-}\alpha=(\alpha-\alpha\beta\alpha)^{-},\nonumber\\
&&\sum_{k=1}^{+\infty}(\alpha\beta)^k=(\alpha-\alpha\beta\alpha)^{-}\beta.\label{pseu3} \end{eqnarray} Similar relations for $(\beta-\beta\alpha\beta)^{-}$ can be obtained from (\ref{pseu3}) by the replacement $\alpha\leftrightarrow\beta$. Numerical implementation of the pseudoinversion reduces to the inversion of a $(tr\,\alpha)\times (tr\,\alpha)$ matrix for $(\alpha-\alpha\beta\alpha)^{-}$ and a $(tr\,\beta)\times (tr\,\beta)$ matrix for $(\beta-\beta\alpha\beta)^{-}$.
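A small numerical sketch (our own illustration) makes the convergence condition concrete: for two rank-one projectors onto non-orthogonal, non-parallel unit vectors, the series (\ref{Aab}) converges geometrically, and its truncation satisfies the properties (\ref{Aprop}) to machine precision.

```python
import numpy as np

# Two rank-one Hermitian projectors alpha, beta onto unit vectors with
# 0 < |<u,v>| < 1, so the series (Aab) converges geometrically.
u = np.array([1.0, 0.0, 0.0, 0.0])
theta = 1.0
v = np.array([np.cos(theta), np.sin(theta), 0.0, 0.0])
alpha, beta = np.outer(u, u), np.outer(v, v)

# Truncated series A = alpha + beta + sum_k [...]; the k-th term decays
# like cos(theta)^(2k), so 60 terms are far more than enough here.
A = alpha + beta
ab, ba = alpha @ beta, beta @ alpha
pa, pb = ab.copy(), ba.copy()          # (alpha beta)^k and (beta alpha)^k
for _ in range(60):
    A += pa @ alpha - pa + pb @ beta - pb
    pa, pb = pa @ ab, pb @ ba

print(np.allclose(A @ A, A),                 # A^2 = A
      np.allclose(alpha @ A, alpha),         # alpha A = alpha
      np.allclose(A @ beta, beta),           # A beta = beta
      np.isclose(np.trace(A), 2.0))          # tr A = tr alpha + tr beta
```

In this rank-one case $A$ is simply the orthogonal projector onto the plane spanned by the two vectors, as expected from (\ref{Aprop}).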
In~\cite{bian04}, we proposed a technique based on the use of (\ref{Apseu}) to find the fundamental solution of the system~(\ref{PnC}). In the current series of papers, we present an advanced version of this technique, based on a fractal expansion of the system of equations (described in the following paper) and on the use of $A$ (\ref{Aab}) expressed as \begin{equation}\label{Aad}
A=\alpha+\delta, \quad \delta=(\beta-\alpha)\gamma(\beta-\alpha), \end{equation} where \begin{equation}\label{gam}
\gamma=\beta+\sum_{k=1}^{+\infty}(\beta\alpha\beta)^k=(\beta-\beta\alpha\beta)^{-}, \end{equation} $\alpha,\beta,\delta$, and $A$ are projection operators, $\alpha,\beta,\gamma,\delta$, and $A$ are Hermitian operators interrelated as \begin{eqnarray}
&&\beta\gamma=\gamma\beta=\gamma,\quad \beta\alpha\gamma=\gamma\alpha\beta=\gamma-\beta,\nonumber\\
&&\alpha\delta=\delta\alpha=0,\quad \beta\delta=\beta-\beta\alpha,\quad \delta\beta=\beta-\alpha\beta,\nonumber\\
&&\alpha A=A\alpha=\alpha, \quad \beta A=A\beta=\beta, \quad \delta A=A\delta=\delta. \label{abgd} \end{eqnarray} Within this approach, the calculation of all pseudoinverse operators involved reduces to the inversion of $4\times 4$ matrices.
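The decomposition $A=\alpha+\delta$ can likewise be checked numerically. The sketch below (an illustration of ours with arbitrary rank-one projectors, not part of the derivation) builds $\gamma$ from a truncation of the series (\ref{gam}) and verifies several of the relations (\ref{abgd}).

```python
import numpy as np

# Rank-one projectors onto unit vectors u, v with 0 < |<u,v>| < 1.
u = np.array([1.0, 0.0, 0.0])
v = np.array([np.cos(0.8), np.sin(0.8), 0.0])
alpha, beta = np.outer(u, u), np.outer(v, v)

# gamma = beta + sum_k (beta alpha beta)^k, truncated; the k-th term
# decays like cos(0.8)^(2k).
bab = beta @ alpha @ beta
gamma, term = beta.copy(), bab.copy()
for _ in range(80):
    gamma += term
    term = term @ bab

delta = (beta - alpha) @ gamma @ (beta - alpha)
A = alpha + delta                       # Eq. (Aad)

print(np.allclose(delta @ delta, delta),            # delta is a projector
      np.allclose(alpha @ delta, 0.0 * delta),      # alpha delta = 0
      np.allclose(beta @ gamma, gamma),             # beta gamma = gamma
      np.allclose(beta @ alpha @ gamma, gamma - beta),  # = gamma - beta
      np.allclose(A @ A, A),                        # A is a projector
      np.allclose(A @ beta, beta))                  # A beta = beta
```
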
\section{Recurrent algorithm\label{recur}} In this section, we present the recurrent algorithm to find the projection operator of the basic system of equations~(\ref{PnC}). \subsection{Product $P(m)P(n)$} It follows from Eq.~(\ref{Pnfaf}) that \begin{equation}\label{PmPn}
P(m)P(n)=\left[f^i(m)\right]^{\dag}\otimes\left[a(m)N(m,n)a(n)\right]^i{}_j f^j(n), \end{equation} where \begin{equation}\label{Nij}
N^i{}_j(m,n)=\left\langle f^i(m),\left[f^j(n)\right]^{\dag}\right\rangle,\quad i,j=1,2,3,4, \end{equation} $N(n,n)\equiv L(n)$ (\ref{anL}). For any given $n$, Eq.~(\ref{meq}) relates the Fourier amplitude $c(n)$ only to the 12 amplitudes $c(n+s)$ with $g_{4d}(s)=1$. Consequently, $N(m,n)\equiv 0$ for $g_{4d}(n-m)>2$. Substitution of (\ref{fjn}) in (\ref{Nij}) at $n=m+s$ gives \begin{eqnarray}
N^{\dag}(n,m)=N(m,n)&=&L(m) \text{ for } n=m,\nonumber\\
&=&N_1(m,s) \text{ for } g_{4d}(s)=1,\nonumber\\
&=&N_2(s)\Gamma_0 \text{ for } g_{4d}(s)=2. \end{eqnarray} The D-sets of the 12 matrices $N_1(m,s)$ and the table of the 56 scalar coefficients $N_2(s)$ will be presented in the third paper of this series.
\subsection{Sublattices} The Hermitian projection operator $\mathcal P$ of the system of equations~(\ref{PnC}), by definition, has the following properties \begin{equation}\label{Pdag}
\mathcal{P}^\dag=\mathcal{P}^2=\mathcal{P}, \quad P(n)\mathcal{P}=\mathcal{P}P(n)=P(n) \end{equation} for any $n\in{\mathcal L}$. Let us seek it in the form of a series \begin{equation}\label{Pro}
\mathcal{P}=\sum_{k=0}^{+\infty}\sum_{n\in\mathcal{F}_k}\rho_k(n), \end{equation} where $\mathcal{F}_k$ are the lattices satisfying the conditions \begin{equation}\label{FkL}
\bigcup_{k=0}^{+\infty}\mathcal{F}_k=\mathcal{L},\quad \mathcal{F}_j\bigcap\mathcal{F}_k=\emptyset,\quad j\neq k, \end{equation} and $\rho_k(n)$ are Hermitian projection operators satisfying the relations \begin{equation}\label{rok}
\rho_k^{\dag}(n)=\rho_k^2(n)=\rho_k(n), \quad tr[\rho_k(n)]=4, \quad n \in \mathcal{L}, \end{equation} \begin{equation}\label{romn}
\rho_k(m)\rho_l(n)=0 \text{ if } k\neq l \text{ or (and) } m\neq n, \end{equation} \begin{equation}\label{ro0}
\rho_0(n)=P(n), \quad n\in \mathcal{F}_0. \end{equation} There exist various ways to split the lattice $\mathcal L$ into sublattices $\mathcal{F}_k$ so as to fulfil conditions (\ref{FkL}) and (\ref{romn}); one of them will be described in the second paper of this series. Provided these conditions are met, substitution of \begin{equation}\label{albero}
\alpha=\sum_{j=0}^{k-1}\sum_{n\in \mathcal{F}_j}\rho_j(n)\equiv\mathcal{P}_{k-1}, \quad \beta=P(m), \quad m\in \mathcal{F}_{k} \end{equation} into Eqs.~(\ref{Aad}) and (\ref{gam}) results in $\rho_k(m)=\delta$ (\ref{Aad}).
All operators $\rho_k(m)$ have the same trace $tr\left[\rho_k(m)\right]=4$ and can be written as \begin{equation}\label{rokm}
\rho_k(m)=\left[F_k^{\alpha}(m)\right]^{\dag}\otimes \left[A_k(m)\right]^{\alpha}{}_{\beta}F_k^{\beta}(m), \quad m\in \mathcal{F}_k, \end{equation} where $k=0,1,...$, and \begin{eqnarray}
F_0^{\alpha}(m)\equiv f^{\alpha}(m), \quad A_0(m)\equiv a(m),\nonumber\\
F_k^{\alpha}(m)=f^{\alpha}(m)-g_k^{\alpha}(m), \quad k=1,2,...;\label{Fkm} \end{eqnarray} \begin{eqnarray}
g_k^{\alpha}(m)&&\equiv f^{\alpha}(m)\mathcal{P}_{k-1}=\sum_{j=0}^{k-1}\sum_{n\in \mathcal{F}_j} \left[C_{kj}(m,n)\right]^{\alpha}{}_{\beta}f^{\beta}(n),\nonumber\\
\left[g_k^{\alpha}(m)\right]^{\dag}&&\equiv\mathcal{P}_{k-1}\left[f^{\alpha}(m)\right]^{\dag}\nonumber\\
&&=\sum_{j=0}^{k-1}\sum_{n\in \mathcal{F}_j} \left[f^{\beta}(n)\right]^{\dag} \left[C_{jk}(n,m)\right]^{\beta}{}_{\alpha}; \label{gkm} \end{eqnarray} \begin{eqnarray}
A_k(m)=\left[L(m)-G_k(m)\right]^{-1},\nonumber\\
G_k(m)=\sum_{j=0}^{k-1}\sum_{n\in \mathcal{F}_j} C_{kj}(m,n)N(n,m).\label{AkGk} \end{eqnarray}
\subsection{Recurrent relations} The $4\times 4$ matrices $C_{kj}(m,n)$ and $C_{jk}(n,m)$ are related as \begin{equation}\label{CjkCkj}
C_{jk}(n,m)=\left[C_{kj}(m,n)\right]^{\dag}, \end{equation} where $m\in \mathcal{F}_k$, $k=1,2,...$; $n\in \mathcal{F}_j$, $j=0,1,...,k-1$. The relationship between one-forms $f^{\alpha}(m)$ and $F_j^{\beta}(n)$ is described by $4\times 4$ matrix $D_{kj}(m,n)$ as \begin{equation}\label{fF}
f^{\alpha}(m)\rho_j(n)=\left[D_{kj}(m,n)\right]^{\alpha}{}_{\beta}F_j^{\beta}(n), \end{equation} where \begin{eqnarray}
D_{kj}(m,n)=&&N(m,n)A_j(n)\nonumber\\
&&-\sum_{i=0}^{j-1}\sum_{p\in \mathcal{F}_i} N(m,p)C_{ij}(p,n)A_j(n),\label{Dkj} \end{eqnarray} where $m\in \mathcal{F}_k, k=2,3,...;n\in \mathcal{F}_j, j=1,...,k-1.$
The families of matrices $C_{kj}(m,n)$ and $D_{kj}(m,n)$ are defined by Eqs. (\ref{CjkCkj}), (\ref{Dkj}), and the recurrent relations: \begin{equation}\label{C10}
C_{10}(m,n)=N(m,n)a(n), \quad m\in \mathcal{F}_1, \quad n\in \mathcal{F}_0; \end{equation} \begin{eqnarray}
&&C_{k0}(m,n)=N(m,n)a(n)-\sum_{j=1}^{k-1}\sum_{p\in \mathcal{F}_j}D_{kj}(m,p)C_{j0}(p,n),\nonumber\\
&&m\in \mathcal{F}_k,\quad k=2,3,...,\quad n\in \mathcal{F}_0;\label{Ck0} \end{eqnarray} \begin{eqnarray}
&&C_{ki}(m,n)=D_{ki}(m,n)-\sum_{j=i+1}^{k-1}\sum_{p\in \mathcal{F}_j}D_{kj}(m,p)C_{ji}(p,n),\nonumber\\
&&m\in \mathcal{F}_k,\quad k=3,4,...,\quad n\in \mathcal{F}_i \quad i=1,...,k-2;\label{Cki} \end{eqnarray} \begin{equation}\label{Ckk1}
C_{k\,k-1}(m,n)=D_{k\,k-1}(m,n), \quad k=2,3,... . \end{equation} From Eqs.~(\ref{Fkm})--(\ref{Ckk1}) it follows that \begin{equation}\label{Akdag}
\left[A_k(m)\right]^{\dag}=A_k(m), \quad \left[G_k(m)\right]^{\dag}=G_k(m), \end{equation} that is, $G_k(m)$ and $A_k(m)$ are Hermitian matrices with real D-sets.
In numerical calculations, the projection operator $\rho_k(m)$ (\ref{rokm}) is represented by its components \begin{equation}\label{Rkrok}
\left[R_k(m',m,n')\right]^{\mu}{}_{\nu}=\left\langle\theta^{\mu}(m'),\rho_k(m)e_{\nu}(n')\right\rangle, \end{equation} where $m', n' \in {\mathcal L}; \mu, \nu=1,2,3,4.$ They can be conveniently treated as elements of $4\times 4$ matrices \begin{equation}\label{RkA}
R_k(m',m,n')=\left[\Phi_k(m,m')\right]^{\dag}A_k(m)\Phi_k(m,n'), \end{equation} where $\left[\Phi_k(m,n')\right]^{\alpha}{}_{\mu}\equiv\left\langle F_k^{\alpha}(m),e_{\mu}(n')\right\rangle,$ and \begin{eqnarray}
\Phi_k(m,n')=&&V(m,n'-m)\nonumber\\
&&-\sum_{j=0}^{k-1}\sum_{n\in \mathcal{F}_j} C_{kj}(m,n)V(n,n'-n).\label{Phik} \end{eqnarray} The Hermitian matrix $A_k(m)$ and the set of matrices $\Phi_k(m,n')$ uniquely define the projection operator $\rho_k(m)$.
\subsection{Fundamental solution} The fundamental solution of Eq.~(\ref{PnC}), i.e., the operator of projection onto the solution subspace of the multispinor space $V_C$, has the form \begin{equation}\label{SUP}
\mathcal{S}=\mathcal{U}-\mathcal{P}, \end{equation} where $\mathcal{U}$ is the unit operator in $V_C$, which can be written as \begin{equation}\label{UVC}
\mathcal{U}=\sum_{n\in \mathcal{L}} I(n),\quad I(n)=e_j(n)\otimes\theta^j(n), \quad tr[I(n)]=4. \end{equation} For any $C_0\in V_C$, $C=\mathcal{S}C_0$ is a particular solution of Eq.~(\ref{PnC}), i.e., the function $\Psi$~(\ref{sol1}) with the set of Fourier amplitudes $\{c(n), n \in \mathcal{L}\}=\mathcal{S}C_0$ satisfies the Dirac equation~(\ref{Dirac}) for the problem under consideration.
\section{Conclusion} A system of homogeneous linear equations in a finite- or infinite-dimensional space is characterized by the Hermitian projection operator of the system, which is directly coupled with the fundamental solution --- the operator of projection onto the subspace of solutions. The relations presented in section~\ref{project} provide convenient means to find these operators by making use of a recurrent algorithm and pseudoinversion. Within this general approach, the fundamental solution of the Dirac equation for an electron in an electromagnetic field with four-dimensional periodicity is obtained.
\appendix* \section{} \subsection{Dirac basis for the linear space of $4\times 4$ matrices} Let us enumerate the 16 Dirac matrices, which form a basis for the linear space of $4\times 4$ matrices, taking into account both the interrelations between the $2\times 2$ blocks of each matrix and those between the elements of each nonzero $2\times 2$ block, as follows \begin{equation*}
\Gamma_0=\Gamma_{0000}=\left(
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
\end{array}
\right)=U, \end{equation*} \begin{equation*}
\Gamma_1=\Gamma_{0001}=\left(
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & -1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & -1 \\
\end{array}
\right)=\Sigma_3, \end{equation*} \begin{equation*}
\Gamma_2=\Gamma_{0010}=\left(
\begin{array}{cccc}
0 & 1 & 0 & 0 \\
1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0 \\
\end{array}
\right)=\Sigma_1, \end{equation*} \begin{equation*}
\Gamma_3=\Gamma_{0011}=\left(
\begin{array}{cccc}
0 & -i & 0 & 0 \\
i & 0 & 0 & 0 \\
0 & 0 & 0 & -i \\
0 & 0 & i & 0 \\
\end{array}
\right)=\Sigma_2, \end{equation*} \begin{equation*}
\Gamma_4=\Gamma_{0100}=\left(
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & -1 & 0 \\
0 & 0 & 0 & -1 \\
\end{array}
\right)=\gamma_4=\alpha_4, \end{equation*} \begin{equation*}
\Gamma_5=\Gamma_{0101}=\left(
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & -1 & 0 & 0 \\
0 & 0 & -1 & 0 \\
0 & 0 & 0 & 1 \\
\end{array}
\right)=\tau_3, \end{equation*} \begin{equation*}
\Gamma_6=\Gamma_{0110}=\left(
\begin{array}{cccc}
0 & 1 & 0 & 0 \\
1 & 0 & 0 & 0 \\
0 & 0 & 0 & -1 \\
0 & 0 & -1 & 0 \\
\end{array}
\right)=\tau_1, \end{equation*} \begin{equation*}
\Gamma_7=\Gamma_{0111}=\left(
\begin{array}{cccc}
0 & -i & 0 & 0 \\
i & 0 & 0 & 0 \\
0 & 0 & 0 & i \\
0 & 0 & -i & 0 \\
\end{array}
\right)=\tau_2, \end{equation*} \begin{equation*}
\Gamma_8=\Gamma_{1000}=\left(
\begin{array}{cccc}
0 & 0 & -1 & 0 \\
0 & 0 & 0 & -1 \\
-1 & 0 & 0 & 0 \\
0 & -1 & 0 & 0 \\
\end{array}
\right)=\gamma_5, \end{equation*} \begin{equation*}
\Gamma_9=\Gamma_{1001}=\left(
\begin{array}{cccc}
0 & 0 & 1 & 0 \\
0 & 0 & 0 & -1 \\
1 & 0 & 0 & 0 \\
0 & -1 & 0 & 0 \\
\end{array}
\right)=\alpha_3, \end{equation*} \begin{equation*}
\Gamma_{10}=\Gamma_{1010}=\left(
\begin{array}{cccc}
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 \\
1 & 0 & 0 & 0 \\
\end{array}
\right)=\alpha_1, \end{equation*} \begin{equation*}
\Gamma_{11}=\Gamma_{1011}=\left(
\begin{array}{cccc}
0 & 0 & 0 & -i \\
0 & 0 & i & 0 \\
0 & -i & 0 & 0 \\
i & 0 & 0 & 0 \\
\end{array}
\right)=\alpha_2, \end{equation*} \begin{equation*}
\Gamma_{12}=\Gamma_{1100}=\left(
\begin{array}{cccc}
0 & 0 & i & 0 \\
0 & 0 & 0 & i \\
-i & 0 & 0 & 0 \\
0 & -i & 0 & 0 \\
\end{array}
\right)=\tau_4, \end{equation*} \begin{equation*}
\Gamma_{13}=\Gamma_{1101}=\left(
\begin{array}{cccc}
0 & 0 & -i & 0 \\
0 & 0 & 0 & i \\
i & 0 & 0 & 0 \\
0 & -i & 0 & 0 \\
\end{array}
\right)=\gamma_3, \end{equation*} \begin{equation*}
\Gamma_{14}=\Gamma_{1110}=\left(
\begin{array}{cccc}
0 & 0 & 0 & -i \\
0 & 0 & -i & 0 \\
0 & i & 0 & 0 \\
i & 0 & 0 & 0 \\
\end{array}
\right)=\gamma_1, \end{equation*} \begin{equation*}
\Gamma_{15}=\Gamma_{1111}=\left(
\begin{array}{cccc}
0 & 0 & 0 & -1 \\
0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 \\
-1 & 0 & 0 & 0 \\
\end{array}
\right)=\gamma_2. \end{equation*} With this numbering, the structural information about each matrix $\Gamma_{\nu}=\Gamma_{MNmn}$ is encoded in its number, written above in decimal notation ($\nu$) and in binary notation ($MNmn$) with four binary digits for each $\nu=0,...,15$, i.e., \begin{equation}\label{nu842}
\nu=8 M + 4 N +2 m +n. \end{equation} For convenience, the commonly used notation is given to the right of each matrix.
To reconstruct the matrix from its number, first we calculate the nonzero element \begin{equation}\label{bnu}
b_{\nu}=b_{MNmn}=i^{M N + m n}(-1)^{(1-M)m n + M(1+N+m+n)} \end{equation} of the first matrix row, which is located in column $2M+m+1$. The binary digits $m$ and $n$ define the structure of the $2\times 2$ block $X$ containing this element as follows \begin{equation*}
X=\left(
\begin{array}{cc}
b_{MN0n} & 0 \\
0 & (-1)^n b_{MN0n}\\
\end{array}
\right) \text{ for } m=0, \end{equation*} \begin{equation*}
X=\left(
\begin{array}{cc}
0 & b_{MN1n} \\
(-1)^n b_{MN1n} & 0\\
\end{array}
\right) \text{ for } m=1. \end{equation*} Finally, the digits $M$ and $N$ uniquely define \begin{equation*}
\Gamma_{\nu}=\Gamma_{MNmn}=\left(
\begin{array}{cc}
A & B \\
C & D \\
\end{array}
\right) \end{equation*} in terms of its $2\times 2$ blocks $A,B,C$ and $D$ as \begin{equation*}
B=C=\left(
\begin{array}{cc}
0 & 0 \\
0 & 0\\
\end{array}
\right), D=(-1)^N A, A=X \text{ for } M=0, \end{equation*} \begin{equation*}
A=D=\left(
\begin{array}{cc}
0 & 0 \\
0 & 0\\
\end{array}
\right), C=(-1)^N B, B=X \text{ for } M=1. \end{equation*}
The matrix product $\Gamma_{\lambda}\Gamma_{\mu}$ for all values of $\lambda$ and $\mu$ can be written as \begin{equation*}
\Gamma_{\lambda}\Gamma_{\mu}=f_{\lambda\mu}\Gamma_{\nu}. \end{equation*} This rule could be described by $16\times 16$ multiplication tables giving the values of $\nu$ and $f_{\lambda\mu}$ for $\lambda, \mu=0,...,15$. The presented numeration, however, provides a simple way to describe the multiplication rule without recourse to tables. By using (\ref{nu842}), (\ref{bnu}) and the binary forms $GHgh$ and $JKjk$ of the numbers $\lambda$ and $\mu$, i.e., \begin{equation*}
\lambda=8G+4H+2g+h, \quad \mu=8J+4K+2j+k, \end{equation*} we obtain \begin{equation*}
M=|G-J|, N=|H-K|, m=|g-j|, n=|h-k|, \end{equation*} \begin{eqnarray*}
f_{\lambda\mu}&=&f[(G,H,g,h),(J,K,j,k)] \\
&=&\frac{b_{GHgh}b_{JKjk}}{b_{MNmn}}(-1)^{GK+gk} = i^{GK+JH+gk+jh}(-1)^Z, \end{eqnarray*} where \begin{eqnarray*}
Z=&&G K(1-J-H)+J H(G+K)\\
&&+(G j+J g)(1-h-k)+G k(1-g)\\
&&+J h(1-j)+g k(1-j-h)+j h(g+k). \end{eqnarray*}
\subsection{Dirac set of $4\times 4$ matrix} In terms of Eq.~(\ref{nu842}), any $4\times 4$ matrix $A$ can be written as \begin{equation*}
A=\sum_{\nu=0}^{15}A_{\nu}\Gamma_{\nu}\equiv\sum_{M,N,m,n=0}^1 A_{MNmn}\Gamma_{MNmn}, \end{equation*} where $A_{\nu}\equiv A_{MNmn}=\frac14 tr(A\Gamma_{\nu})$, and $tr\, A=4 A_0$. To single out the specific basis used in this expansion, the set of coefficients with decimal $\{A_{\nu}\}$ or binary $\{A_{MNmn}\}$ indices is called in this article the Dirac set of the matrix $A$ (briefly, the D-set of $A$) and is denoted $D_s(A)$. This approach is of particular assistance in solving the system of Eqs.~(\ref{meq}): it is well suited to the structure of its matrix coefficients, accelerates numerical calculations, and reduces data file sizes.
It should be emphasized that all major matrix operations can be performed directly with D-sets, i.e., without retrieving the matrix form. In particular, the function $P_D$, describing the matrix product $C=A B$ in terms of D-sets, is given by \begin{equation*}
D_s(C)=P_D[D_s(A),D_s(B)], \end{equation*} \begin{eqnarray*}
C_{MNmn}=&&\sum_{G,H,g,h=0}^1 A_{GHgh}B_{JKjk}\\
&&\times f[(G,H,g,h),(J,K,j,k)], \end{eqnarray*} where \begin{equation*}
J=|M-G|, K=|N-H|, j=|m-g|, k=|n-h|. \end{equation*} The map $A\mapsto D_s(A)$ and its inverse $D_s(A)\mapsto A$ are linear, and $D_s(A^{\dag})=[D_s(A)]^\ast$, i.e., D-set of a Hermitian matrix is real.
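A numerical sketch (our own illustration) of the $P_D$ function: using the same matrix reconstruction as in the appendix, we compute D-sets of random complex matrices and verify that $P_D$ reproduces the ordinary matrix product, that the expansion is invertible, and that the D-set of a Hermitian matrix is real.

```python
import numpy as np

def digits(nu):
    return (nu >> 3) & 1, (nu >> 2) & 1, (nu >> 1) & 1, nu & 1

def gamma(nu):
    # Same reconstruction as in the appendix: first-row element b_nu,
    # the 2x2 block X, then block placement by the digits M, N.
    M, N, m, n = digits(nu)
    b = (1j**(M*N + m*n)) * (-1)**((1 - M)*m*n + M*(1 + N + m + n))
    X = (np.array([[b, 0], [0, (-1)**n * b]]) if m == 0
         else np.array([[0, b], [(-1)**n * b, 0]]))
    Z2 = np.zeros((2, 2), dtype=complex)
    return np.block([[X, Z2], [Z2, (-1)**N * X]] if M == 0
                    else [[Z2, X], [(-1)**N * X, Z2]])

G16 = [gamma(nu) for nu in range(16)]

def Ds(A):
    """D-set of A: A_nu = tr(A Gamma_nu)/4."""
    return np.array([np.trace(A @ G) / 4 for G in G16])

def f_coef(lam, mu):
    G, H, g, h = digits(lam)
    J, K, j, k = digits(mu)
    Z = (G*K*(1 - J - H) + J*H*(G + K) + (G*j + J*g)*(1 - h - k)
         + G*k*(1 - g) + J*h*(1 - j) + g*k*(1 - j - h) + j*h*(g + k))
    return (1j**(G*K + J*H + g*k + j*h)) * (-1)**Z

def PD(dA, dB):
    """Matrix product C = A B expressed directly in terms of D-sets."""
    dC = np.zeros(16, dtype=complex)
    for nu in range(16):
        M, N, m, n = digits(nu)
        for lam in range(16):
            G, H, g, h = digits(lam)
            mu = 8*abs(M - G) + 4*abs(N - H) + 2*abs(m - g) + abs(n - h)
            dC[nu] += dA[lam] * dB[mu] * f_coef(lam, mu)
    return dC

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
B = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
Herm = A + A.conj().T                          # a Hermitian matrix

print(np.allclose(Ds(A @ B), PD(Ds(A), Ds(B))),   # P_D reproduces A B
      np.allclose(sum(Ds(A)[nu] * G16[nu] for nu in range(16)), A),
      np.allclose(Ds(Herm).imag, 0.0))            # Hermitian -> real D-set
```
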
Let us assume that D-set of matrix $A$ has the form \begin{equation*}
D_s(A)=\{a,b,c,d,e,f,g,h,s,t,u,v,w,x,y,z\}. \end{equation*} Then the coefficients $I_1, I_2, I_3$ and $I_4$ of its characteristic equation \begin{equation*}
\lambda^4-I_1\lambda^3+I_2\lambda^2-I_3\lambda+I_4=0 \end{equation*} are given by the expressions: \begin{eqnarray*}
I_4 = \left[(a - e)^2 - (b - f)^2 - (c - g)^2 - (d - h)^2\right] \\
\times\left[(a + e)^2 - (b + f)^2 - (c + g)^2 - (d + h)^2\right]\\
+ 4(-s w + t x + u y + v z)^2\\
+ \left(s^2 - t^2 - u^2 - v^2 - w^2 + x^2 + y^2 + z^2\right)^2\\
-2\left[\left(b^2 - f^2\right)\left(s^2 + t^2 - u^2 - v^2 + w^2 + x^2 - y^2 - z^2\right)\right. \\
+ \left(c^2 - g^2\right)\left(s^2 - t^2 + u^2 - v^2 + w^2 - x^2 + y^2 - z^2\right) \\
+ \left(d^2 - h^2\right)\left(s^2 - t^2 - u^2 + v^2 + w^2 - x^2 - y^2 + z^2\right) \\
\left. + \left(a^2 - e^2\right)\left(s^2 + t^2 + u^2 + v^2 + w^2 + x^2 + y^2 + z^2\right)\right] \\
- 8\left[(d g - c h)(s x - t w) + (a b - e f)(s t + w x)\right.\\
+ (d f - b h)(u w - s y) + (d e - a h)(u x - t y) \\
+ (a c - e g)(s u + w y) + (b c - f g)(t u + x y) \\
+ (b g - c f )(v w - s z) + (a g - c e)(v x - t z)\\
+ (b e - a f)(v y - u z) + (a d - e h)(s v + w z)\\
\left. + (b d - f h)(t v + x z) + (c d - g h)(u v + y z)\right], \end{eqnarray*} \begin{eqnarray*}
I_3=&&4a\left(a^2-I_0\right)\\
&&+8\left[c e g + d e h - c s u - d s v + h u x - g v x \right.\\
&&+b(e f - s t - w x)+y(f v - h t - c w)\\
&&\left. +z(g t - d w - f u)\right], \end{eqnarray*} \begin{equation*}
I_2=6a^2-2I_0, \end{equation*} \begin{equation*}
I_1=4a, \end{equation*} where \begin{eqnarray*}
I_0=&&b^2+c^2+d^2+e^2+f^2+g^2+h^2+s^2\\
&&+t^2+u^2+v^2+w^2+x^2+y^2+z^2. \end{eqnarray*}
Here, $I_1=tr\, A$ and $I_4=|A|$ are the trace and the determinant of $A$, and $I_3=tr\,\overline{A}$ is the trace of the adjoint matrix $\overline{A}$ defined by the equation $A\overline{A}=\overline{A}A=|A|\Gamma_0$. The Cayley--Hamilton theorem provides the relation \begin{equation*}
\overline{A}=I_3\Gamma_0 - I_2 A + I_1 A^2 - A^3. \end{equation*} Hence, the D-sets of the adjoint matrix $\overline{A}$ and the inverse matrix $A^{-1}$ (assuming $I_4\neq 0$) are defined by the relations \begin{eqnarray*}
D_s(\overline{A})=&&I_3D_s(\Gamma_0) - I_2D_s(A)\\
&&+P_D[D_s(A^2),I_1D_s(\Gamma_0)-D_s(A)],\\
D_s(A^{-1})=&&D_s(\overline{A})/I_4, \end{eqnarray*} where $D_s(\Gamma_0)=\{1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\}$, and \begin{eqnarray*}
D_s(A^2)=P_D[D_s(A),D_s(A)]=(a^2+I_0)D_s[\Gamma_0] \\
+2\left\{0,a b + e f - s t - w x, \right.\\
a c + e g - s u - w y, a d + e h - s v - w z,\\
a e + b f + c g + d h, b e + a f +v y - u z,\\
c e + a g - v x + t z, d e + a h + u x - t y,\\
a s - b t - c u - d v, a t - b s - h y + g z,\\
a u - c s + h x -f z, a v - s d - g x + f y,\\
a w - b x - c y - d z, h u - g v - b w + a x,\\
\left. f v - h t - c w + a y, g t - f u - d w + a z \right\}. \end{eqnarray*}
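As a sanity check on these expressions (a numerical sketch of ours, not from the paper), one can draw a random real D-set, rebuild $A$, and compare $I_1$, $I_2$, the adjoint-matrix relation, and the leading entry of $D_s(A^2)$ with direct linear algebra; the long expressions for $I_3$ and $I_4$ are not transcribed here.

```python
import numpy as np

def gamma(nu):
    # Same matrix reconstruction as in the appendix.
    M, N, m, n = (nu >> 3) & 1, (nu >> 2) & 1, (nu >> 1) & 1, nu & 1
    b = (1j**(M*N + m*n)) * (-1)**((1 - M)*m*n + M*(1 + N + m + n))
    X = (np.array([[b, 0], [0, (-1)**n * b]]) if m == 0
         else np.array([[0, b], [(-1)**n * b, 0]]))
    Z2 = np.zeros((2, 2), dtype=complex)
    return np.block([[X, Z2], [Z2, (-1)**N * X]] if M == 0
                    else [[Z2, X], [(-1)**N * X, Z2]])

rng = np.random.default_rng(2)
d = rng.normal(size=16)                         # a random real D-set
A = sum(d[nu] * gamma(nu) for nu in range(16))  # Hermitian by construction

a, I0 = d[0], np.sum(d[1:]**2)
coeffs = np.poly(A)            # [1, -I1, I2, -I3, I4] for a 4x4 matrix
I1, I2, I3, I4 = -coeffs[1], coeffs[2], -coeffs[3], coeffs[4]

Abar = I3*np.eye(4) - I2*A + I1*(A @ A) - A @ A @ A   # adjoint matrix

print(np.isclose(I1, 4*a),                     # I_1 = 4a
      np.isclose(I2, 6*a**2 - 2*I0),           # I_2 = 6a^2 - 2 I_0
      np.allclose(A @ Abar, I4*np.eye(4)),     # A Abar = |A| Gamma_0
      np.isclose(np.trace(A @ A)/4, a**2 + I0))  # leading entry of D_s(A^2)
```
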
\end{document} |
\begin{document}
\title{On the embedding complexity of Liouville manifolds}
\begin{abstract} We define a family of symplectic invariants which obstruct exact symplectic embeddings between Liouville manifolds, using the general formalism of linearized contact homology and its $\mathcal{L}_\infty$ structure. As our primary application, we investigate embeddings between normal crossing divisor complements in complex projective space, giving a complete characterization in many cases. Our main embedding results are deduced explicitly from pseudoholomorphic curves, without appealing to Hamiltonian or virtual perturbations. \end{abstract}
\author{Sheel Ganatra and Kyler Siegel}
\date{\today}
\maketitle
\tableofcontents
\section{Introduction}
\subsection{Context and motivation}\label{subsec:context}
A {\em Liouville domain} is a compact manifold-with-boundary equipped with a primitive one-form ${\lambda}$ such that $\omega := d{\lambda}$ is symplectic and ${\lambda}$ restricts to a positive contact form along the boundary. Liouville domains form a nice class of open symplectic manifolds which naturally arise in many geometric contexts, including: \begin{itemize} \item unit cotangent disk bundles of closed Riemannian manifolds \item sufficiently large compact pieces of smooth complex affine varieties \item regular sublevel sets of Stein manifolds. \end{itemize} The principal goal of this paper is to develop tools to understand when one Liouville domain is ``larger'' or ``more complicated'' than another. Specifically, given two Liouville domains $(X,{\lambda})$ and $(X',{\lambda}')$ of the same dimension, we seek to understand when there is a {\em Liouville embedding} $X \overset{L}\hookrightarrow X'$. This consists of a smooth embedding $\iota: X \hookrightarrow X'$ such that $\iota^*{\lambda}'$ agrees with ${\lambda}$ up to some positive scaling factor and the addition of an exact one-form. The existence of a Liouville embedding $X \overset{L}\hookrightarrow X'$ is a qualitative notion, depending only on $X$ and $X'$ up to Liouville homotopy. Equivalently, by attaching an infinite cylindrical end, any Liouville domain $(X,{\lambda})$ can be completed to a (finite type) {\em Liouville manifold} $(\widehat{X},\widehat{{\lambda}})$ (e.g. the completion of a cotangent disk bundle is the full cotangent bundle), and the existence of a Liouville embedding $X \overset{L}\hookrightarrow X'$ is equivalent to having a smooth (but not necessarily proper) embedding $\iota: \widehat{X} \hookrightarrow \widehat{X'}$ such that $\iota^*(\widehat{{\lambda}'}) - \widehat{{\lambda}}$ is exact.
One reason for interest in Liouville embeddings comes from their connection to exact (compact) Lagrangian submanifolds. An {\em exact Lagrangian submanifold} of $(X,{\lambda})$ is a half-dimensional submanifold $L \subset X$ equipped with a function $f$ such that ${\lambda}|_{L} = df$ (so in particular $\omega|_L \equiv 0$). The study of exact Lagrangian embeddings has played a prominent role in the symplectic topology literature, going back to Gromov's theorem \cite{gromov1985pseudo} that there are no closed exact Lagrangians in $\mathbb{C}^n$, and Arnold's ``nearby Lagrangian conjecture'' stating that there is a unique closed exact Lagrangian in the cotangent bundle of a closed manifold up to Hamiltonian isotopy. By a version of the Weinstein neighborhood theorem, a given $L$ admits an exact Lagrangian embedding into $(X,{\lambda})$ if and only if there is a Liouville embedding $D^*_\varepsilon L \overset{L}\hookrightarrow X$, where $D^*_\varepsilon L$ denotes the cotangent $\varepsilon$-disk bundle of $L$ for some Riemannian metric and $\varepsilon > 0$ sufficiently small. As it turns out, most known examples of Liouville domains (such as $D^*_\varepsilon L$) are a fortiori {\em Weinstein domains}, meaning they carry Morse functions suitably compatible with the Liouville structure. A Weinstein domain deformation retracts onto its skeleton, which is an isotropic (but possibly singular) closed subset; in the case of $D^*_\varepsilon L$ with its canonical Liouville structure, the resulting skeleton is $L$ itself. Hence, we intuitively view a more general Weinstein domain as the cotangent disk bundle of its singular skeleton, and Liouville embeddings of Weinstein domains as singular generalizations of exact Lagrangian embeddings.
Liouville embeddings also constitute a primary class of morphisms under which functoriality holds for the most widely studied symplectic invariants of Liouville domains, for instance symplectic cohomology, wrapped Fukaya categories and various other invariants built from the theory of pseudoholomorphic curves (see e.g. \cite{Seidel_biased_view,abouzaid2010open}). Given a Liouville domain $X$ and a chosen ground ring $\mathbb{K}$, its symplectic cohomology $\op{SH}(X)$ is, among other things, a unital $\mathbb{K}$-algebra whose isomorphism type depends only on $\widehat{X}$ up to symplectomorphism. A Liouville embedding $X \overset{L}\hookrightarrow X'$ induces a transfer map $\op{SH}(X') \rightarrow \op{SH}(X)$ of unital $\mathbb{K}$-algebras. We also have $\op{SH}(B^{2n}) = 0$ and $\op{SH}(D^*Q) \cong H_*(\mathcal{L} Q)$,\footnote{More precisely, this isomorphism always holds over $\mathbb{K}=\mathbb{Z}/2$, and it holds for more general $\mathbb{K}$ if $Q$ is Spin, whereas the general case necessitates using a twisted coefficient system.} where $\mathcal{L} Q$ denotes the free loop space of $Q$. Combining these properties gives an elegant proof of Gromov's theorem as follows. Given a hypothetical exact Lagrangian $L \subset \mathbb{C}^n$, the transfer map $\op{SH}(\mathbb{C}^n) \rightarrow \op{SH}(D^*_\varepsilon L)$ is necessarily the zero map. This forces the unit in $\op{SH}(D^*_\varepsilon L)$ to be zero, and hence $H_*(\mathcal{L} L) =0$, but this is never the case.
The argument in the preceding paragraph can be formalized into the following simple but surprisingly powerful observation: \begin{observation}\label{obs:vanishingSH} Given a Liouville embedding $X \overset{L}\hookrightarrow X'$, if $\op{SH}(X') = 0$, then we must also have $\op{SH}(X) = 0$. \end{observation} {\noindent} For example, every Weinstein domain is diffeomorphic to one which is {\em flexible} \cite[\S11.8]{cieliebak2012stein}, with the ball and more generally subcritical Weinstein domains arising as special cases. Since flexible Weinstein domains have vanishing symplectic cohomology (see \cite[\S3.3]{murphy2018subflexible}), we immediately extend Gromov's theorem to find that there are no exact Lagrangians in any flexible Weinstein domain.
However, Observation~\ref{obs:vanishingSH} is rather insufficient in situations where $\op{SH}(X)$ and $\op{SH}(X')$ are both vanishing or both nonvanishing, especially since the transfer map is generally neither injective nor surjective. For instance, let $X_k^{2n}$ denote the complement of a small neighborhood of $k$ generic hyperplanes in $\mathbb{CP}^n$. As we explain in \S\ref{subsec:div_compl}, $X_k^{2n}$ has a canonical Weinstein structure. Concretely, we can ask: \begin{problem}\label{prob:hyperplane_comp} For which $k,k' \in \mathbb{Z}_{\geq 1}$ is there a Liouville embedding $X_k^{2n} \overset{L}\hookrightarrow X_{k'}^{2n}$? \end{problem} {\noindent} We make a few preliminary observations: \begin{itemize} \item For $k \leq n$, $X_k^{2n}$ is subcritical, in fact its symplectic completion is $(\mathbb{C}^*)^{k-1}\times \mathbb{C}^{n-k+1}$, and hence we have $\op{SH}(X_k^{2n}) = 0$. From this it is straightforward to produce a Liouville (in fact Weinstein) embedding of $X_{k'}^{2n}$ into $X_k^{2n}$ whenever $k,k' \leq n$. \item There is a spectral sequence \cite{ganatra2020symplectic, mcleanslides} for computing the symplectic cohomology of any ample simple normal crossing divisor complement from the ordinary cohomology of various combinatorial strata defined by the normal crossings configuration. When $k \geq n+1$, this spectral sequence degenerates (see \cite[Thm 1.4 and Example 5.1]{ganatra2020symplectic}) and in particular $\op{SH}(X_k^{2n}) \neq 0$ for any coefficient field $\mathbb{K}$. Therefore, by Observation~\ref{obs:vanishingSH} there is no Liouville embedding $X_{k'}^{2n} \overset{L}\hookrightarrow X_{k}^{2n}$ when $k' \geq n+1$ and $k \leq n$.
\item There is a Weinstein embedding $X_k^{2n} \overset{W}\hookrightarrow X_{k'}^{2n}$ whenever $k < k'$ (see \S\ref{subsubsec:constructions}). There is also a symplectic (and in particular smooth) embedding from $X_k^{2n}$ into $X_{k'}^{2n}$ for $k > k'$, given by adding back in some of the hyperplanes. \end{itemize} We are left to wonder about embeddings of $X_{k'}^{2n}$ into $X_k^{2n}$ in the case $k' > k \geq n+1$, and more generally: \begin{question}\label{question:complexity} Is there a natural notion of ``complexity'' of Liouville domains, such that more complicated Liouville domains cannot Liouville embed into less complicated ones? \end{question}
We point out right away that there are already several interesting partial answers to Question~\ref{question:complexity} appearing in the literature, though these approaches are not sufficient (to our knowledge) to solve Problem~\ref{prob:hyperplane_comp} (see the discussion in \S\ref{subsec:obs_higher}). For one, Abouzaid--Seidel \cite{abouzaid2010altering} introduced a ``homological recombination'' construction which modifies a given Weinstein domain so as to kill its symplectic cohomology when the characteristic of $\mathbb{K}$ belongs to a chosen set of primes, while leaving $\op{SH}$ intact otherwise. As a corollary, by appealing to Observation~\ref{obs:vanishingSH}, we can find e.g. an infinite sequence of Weinstein domains $W_1,W_2,W_3,\dots$, all diffeomorphic to $B^6$, such that $W_i$ does not Liouville embed into $W_j$ unless $i \leq j$. According to \cite{lazsyl}, we can also arrange that $W_i$ does admit a Weinstein embedding into $W_j$ for $i < j$.
In a different direction, which lies closer to the heart of this paper, we have the notion of {\em dilation} \cite{seidel2012symplectic}, and its generalizations and cousins (see e.g. \cite{zhao2016periodic,Zhou_semidilation,li2019exact}). The basic observation here is that symplectic cohomology $\op{SH}$ has an $S^1$-equivariant analogue $\op{SH}_{S^1}$, which enjoys a more refined version of Observation~\ref{obs:vanishingSH} (see \S\ref{subsec:obs_cyls}). For example, \cite{Zhou_semidilation} constructs Brieskorn varieties having a $k$-dilation but not a $(k-1)$-dilation for any fixed $k \in \mathbb{Z}_{\geq 0}$, and this translates into a non-existence result for Liouville embeddings. Essentially the same structure is exploited by Gutt--Hutchings to construct symplectic capacities in \cite{Gutt-Hu}. In fact, although these capacities were developed as quantitative invariants, they become qualitative if one only remembers whether each capacity is finite or infinite. As we will explain, these notions are closely related to the linear version of the invariants we define in \S\ref{sec:invariants}, whereas our more general invariants parallel the higher symplectic capacities defined in \cite{HSC}.
Problem~\ref{prob:hyperplane_comp} naturally fits into a wider framework as follows. Fix a positive integer $n$. Let $\vec{d} = (d_1,\dots,d_k) \in \mathbb{Z}^k_{\geq 1}$ denote a tuple of positive integers for some $k \in \mathbb{Z}_{\geq 1}$. We denote by $X^{2n}_{\vec{d}}$ the natural Weinstein domain given by the complement of a small neighborhood of $k$ smooth hypersurfaces of degrees $d_1,\dots,d_k$ in general position in $\mathbb{CP}^n$. Notably, this depends only on $\vec{d}$ and is independent of all other auxiliary choices up to Weinstein deformation equivalence (see \S\ref{subsec:div_compl}). Similarly, put $\vec{d'} = (d_1',\dots,d_{k'}') \in \mathbb{Z}^{k'}_{\geq 1}$ for some $k' \in \mathbb{Z}_{\geq 1}$, and denote the corresponding Weinstein domain by $X^{2n}_{\vec{d'}}$. Note that with this notation we have $X_k^{2n} = X^{2n}_{(\underbrace{1,\dots,1}_k)}$.
\begin{problem}\label{prob:hyp} For which tuples $\vec{d}$ and $\vec{d'}$ is there a symplectic / Liouville / Weinstein embedding of $X^{2n}_{\vec{d}}$ into $X^{2n}_{\vec{d'}}$? \end{problem}
\subsection{Main results}
The following result is representative of the techniques of this paper: \begin{thm}\label{thm:main_liouville} Fix $n \in \mathbb{Z}_{\geq 1}$ and tuples $\vec{d} = (d_1,\dots,d_k) \in \mathbb{Z}_{\geq 1}^k$ and $\vec{d'} = (d_1',\dots,d_{k'}') \in \mathbb{Z}_{\geq 1}^{k'}$ with $\sum\limits_{i=1}^kd_i,\sum\limits_{i=1}^{k'}d_i' \geq n+1$. Assume that we have $\sum\limits_{i=1}^{k'} d_i' < 2\sum\limits_{i=1}^{k}d_i - n -1$. Then there is a Liouville embedding $X^{2n}_{\vec{d}} \overset{L}\hookrightarrow X^{2n}_{\vec{d'}}$ if and only if $\vec{d} \preceq \vec{d'}$. \end{thm} {\noindent} We will deduce the obstructive part from a more general framework, a synopsis of which is given in \S\ref{subsubsec:obstructions}. The relevant embedding constructions, and in particular the definition of the combinatorial partial order ``$\preceq$'' are summarized in \S\ref{subsubsec:constructions}.
As an illustrative example, combining the above theorem with the observations from the previous subsection solves Problem~\ref{prob:hyperplane_comp}: \begin{cor} For any $n \in \mathbb{Z}_{\geq 1}$ and $k' > k \geq n+1$, there is no Liouville embedding $X_{k'}^{2n} \overset{L}\hookrightarrow X_{k}^{2n}$. \end{cor}
\begin{example} Consider the case $n=1$ of Problem~\ref{prob:hyperplane_comp}, so that $X_k^{2}$ is the two-sphere minus $k$ open disks. For $k < k'$, there is a Liouville embedding $X_k^2 \overset{L}\hookrightarrow X_{k'}^2$, given by iteratively attaching Weinstein $1$-handles. By contrast, there is no Liouville embedding $\iota: X_{k'}^2 \overset{L}\hookrightarrow X_k^2$. Indeed, given such an embedding, the complement $X_k^2 \setminus \iota(X_{k'}^2)$ would necessarily have at least one component which is disjoint from $\partial X_{k}^2$, and this violates Stokes' theorem. \end{example}
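To spell out the Stokes computation (a sketch, with our orientation conventions): if $C$ is a complement component whose boundary lies entirely in $\iota(\partial X_{k'}^2)$, then the Liouville vector field points into $C$ along $\partial C$, so ${\lambda}$ integrates negatively over $\partial C$ with its boundary orientation, while $d{\lambda}$ restricts to an area form on $C$. This gives the contradiction

```latex
0 \;<\; \int_C d\lambda \;=\; \int_{\partial C} \lambda \;<\; 0.
```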
\subsubsection{Obstructions}\label{subsubsec:obstructions}
In \S\ref{sec:invariants}, we define for each $m \in \mathbb{Z}_{\geq 0}$ a symplectomorphism invariant $\mathbb{G} \Langle \mathcal{T}^m p \Rangle$ of Liouville domains which takes values in positive integers and is monotone with respect to Liouville embeddings, i.e. \begin{align*} X \overset{L}\hookrightarrow X' \;\;\;\Longrightarrow\;\;\; \mathbb{G}\Langle \mathcal{T}^m p \Rangle(X) \leq \mathbb{G}\Langle \mathcal{T}^m p\Rangle(X') \end{align*} Heuristically, $\mathbb{G} \Langle \mathcal{T}^m p \Rangle(X)$ corresponds to the least number of positive ends of a rigid rational curve in $X$ which passes through a generic point $p$ and is tangent to order $m$ to a generic local divisor at $p$. Whereas this would generally depend on the choices involved, the more precise definition of $\mathbb{G}\Langle \mathcal{T}^m p \Rangle(X)$ is based on the $\mathcal{L}_\infty$ structure on linearized contact homology $\op{C}\mathbb{H}_{\op{lin}}(X)$. In \S\ref{sec:computationsI}, we compute this invariant for divisor complements in projective space: \begin{thm}\label{thm:main_G_computation} For $n \in \mathbb{Z}_{\geq 1}$ and $\vec{d} = (d_1,\dots,d_k) \in \mathbb{Z}^k_{\geq 1}$ with $\sum_{i=1}^k d_i \geq n+1$, we have $$\mathbb{G} \Langle \mathcal{T}^{n-1} p\Rangle(X^{2n}_{\vec{d}}) = \sum_{i=1}^k d_i.$$ \end{thm} {\noindent} As an immediate consequence: \begin{cor}\label{cor:G_obstructions} For $n \in \mathbb{Z}_{\geq 1}$, we have $X^{2n}_{(d_1,\dots,d_k)} \not\overset{L}\hookrightarrow X^{2n}_{(d_1',\dots,d_{k'}')}$ whenever $\sum_{i=1}^k d_i > \sum_{i=1}^{k'} d_i' \geq n+1$. \end{cor} {\noindent} It is worth emphasizing that there is a symplectic (and in particular smooth) embedding from $X_{\vec{d}}^{2n}$ into $X_{\vec{d'}}^{2n}$ whenever $\vec{d'}$ is a subtuple of $\vec{d}$, essentially given by adding back in some of the divisor components (see \S\ref{subsec:div_compl}). 
For instance, there is a symplectic embedding $X_k^{2n} \overset{S}\hookrightarrow X_{k'}^{2n}$ for any $k,k' \in \mathbb{Z}_{\geq 1}$. In particular, the obstructions provided by Corollary~\ref{cor:G_obstructions} are a purely exact symplectic phenomenon.
\begin{remark}[on virtual perturbations]\label{rmk:virtual} We point out that the general construction of linearized contact homology, with its $\mathcal{L}_\infty$ structure and full functoriality package, requires a virtual perturbation framework to achieve transversality for configurations involving multiply covered curves. The polyfold theory of Hofer--Wysocki--Zehnder is widely expected to provide such a framework (see e.g. \cite{hofer2017polyfold}). There are several other such candidate frameworks in development and in various stages of completeness - see e.g. \cite{fish2018lectures, pardon2019contact,HuN, bao2015semi, ishikawa2018construction} and the references therein. Our high level discussion of symplectic invariants in \S\ref{sec:invariants} and their computation in \S\ref{sec:computationsI} relies only on general properties of SFT as outlined in \cite{EGH2000} and not on any particularities of the chosen perturbation framework. Subsequently, in \S\ref{sec:computationsII} we give a detailed discussion of transversality for the curves relevant to our main applications, and proceed to give direct proofs based on classical transversality techniques. \end{remark}
Although Corollary~\ref{cor:G_obstructions} rules out many Liouville embeddings between hypersurface complements, it turns out to be rather far from optimal in general. Indeed, there are many cases in Problem~\ref{prob:hyp} with $\sum_{i=1}^k d_i \leq \sum_{i=1}^{k'}d_i'$ for which a stronger obstruction is necessary. In \S\ref{sec:computationsII}, we refine the proof of Theorem~\ref{thm:main_G_computation} by analyzing the outcome of neck stretching in more detail, arriving at our main combinatorial obstruction. \begin{thm}\label{thm:main_combinatorial} Fix $n \in \mathbb{Z}_{\geq 1}$, and consider tuples $\vec{d} = (d_1,\dots,d_k) \in \mathbb{Z}_{\geq 1}^k$ and $\vec{d'} = (d_1',\dots,d_{k'}') \in \mathbb{Z}_{\geq 1}^{k'}$ with $\sum_{i=1}^k d_i, \sum_{i=1}^{k'}d_i' \geq n+1$. Given a Liouville embedding $X_{\vec{d}}^{2n} \overset{L}\hookrightarrow X_{\vec{d'}}^{2n}$, we must have: \begin{itemize} \item positive integers $l,q \in \mathbb{Z}_{\geq 1}$ with $\sum_{i=1}^k d_i \leq l \leq \sum_{i=1}^{k'}d_i'$ and $q(\sum_{i=1}^kd_i - n - 1) \leq l -n - 1$ \item tuples $\vec{x}_1,\dots,\vec{x}_l \in \mathbb{Z}_{\geq 0}^k \setminus \{\vec{0}\}$, each having at most $n$ nonzero components, such that $\sum_{i=1}^l \vec{x}_i = q\vec{d}$ \item tuples $\vec{y}_1,\dots,\vec{y}_l \in \mathbb{Z}_{\geq 0}^{k'} \setminus \{\vec{0}\}$, such that $\sum_{i=1}^l \vec{y}_i = \vec{d'}$ \item a group homomorphism $\Phi: \mathbb{Z}^k/(\vec{d}) \rightarrow \mathbb{Z}^{k'}/(\vec{d'})$ such that $\Phi(\vec{x}_i\text{ mod }(\vec{d})) = \vec{y}_i\text{ mod }(\vec{d'})$ for $i = 1,\dots,l$. \end{itemize} \end{thm}
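The conditions of this theorem are finitely checkable once candidate data are fixed. Purely as a reading aid (not part of any proof), the following Python sketch transcribes the bullet points literally and verifies a proposed certificate $(l,q,\vec{x}_i,\vec{y}_i,\Phi)$; all function names are ours, and the homomorphism $\Phi$ is encoded by an integer matrix acting on row vectors.

```python
def is_multiple(v, w):
    """True iff the integer vector v is an integer multiple of w
    (in our setting all entries of w are positive)."""
    c, r = divmod(v[0], w[0])
    return r == 0 and all(a == c * b for a, b in zip(v, w))

def check_certificate(n, d, dp, l, q, xs, ys, M):
    """Verify the necessary conditions of the combinatorial obstruction.
    Phi is represented by the k x k' integer matrix M, acting as x -> x @ M
    (computed by hand below so the sketch has no dependencies)."""
    k, kp = len(d), len(dp)
    ok = sum(d) <= l <= sum(dp)
    ok &= q * (sum(d) - n - 1) <= l - n - 1
    ok &= len(xs) == l == len(ys)
    # each x_i nonzero with at most n nonzero components, summing to q*d
    ok &= all(any(x) and sum(1 for e in x if e) <= n for x in xs)
    ok &= [sum(col) for col in zip(*xs)] == [q * e for e in d]
    # each y_i nonzero, summing to d'
    ok &= all(any(y) for y in ys)
    ok &= [sum(col) for col in zip(*ys)] == list(dp)
    # Phi well-defined: M must send the subgroup generated by d into (d')
    dM = [sum(d[i] * M[i][j] for i in range(k)) for j in range(kp)]
    ok &= is_multiple(dM, dp)
    # Phi(x_i mod (d)) = y_i mod (d')
    for x, y in zip(xs, ys):
        xM = [sum(x[i] * M[i][j] for i in range(k)) for j in range(kp)]
        ok &= is_multiple([a - b for a, b in zip(xM, y)], dp)
    return bool(ok)
```

For instance, with $n=1$, $\vec{d} = (1,1)$, $\vec{d'} = (1,1,1)$, the data $l=2$, $q=1$, $\vec{x}_i = e_1, e_2$, $\vec{y}_1 = (1,0,0)$, $\vec{y}_2 = (0,1,1)$ satisfy all four bullets.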
This theorem is proved using moduli spaces of genus zero punctured pseudoholomorphic curves with several positive ends. The proof of Theorem~\ref{thm:main_liouville} in \S\ref{subsec:completing} will then be extracted from the combinatorics of Theorem~\ref{thm:main_combinatorial}, together with the constructions described in Theorem~\ref{thm:embeddings} below. In fact, we conjecture that the main degree assumption $\sum\limits_{i=1}^{k'} d_i' < 2\sum\limits_{i=1}^{k}d_i - n -1$ in Theorem~\ref{thm:main_liouville} can be removed, but it is not immediately clear whether this can be deduced from the combinatorics of Theorem~\ref{thm:main_combinatorial}.
At present, we illustrate the utility of Theorem~\ref{thm:main_combinatorial} with some examples which go beyond Corollary~\ref{cor:G_obstructions}. Note that these examples also hold without the assumption $\sum\limits_{i=1}^{k'} d_i' < 2\sum\limits_{i=1}^{k}d_i - n -1$. \begin{example}\label{ex:hyperplane1} In the context of Theorem~\ref{thm:main_combinatorial}, consider a Liouville embedding $X_{\vec{d}}^{2n} \overset{L}\hookrightarrow X_{\vec{d'}}^{2n}$ where the target $X_{\vec{d'}}^{2n} = X_{k'}^{2n}$ is a hyperplane complement. Then the source is also a hyperplane complement, i.e. we must have $X_{\vec{d}}^{2n} = X_k^{2n}$ for some $k \leq k'$.
Indeed, in the context of Theorem~\ref{thm:main_combinatorial}, note that the rank of the image of $\Phi$ in $\mathbb{Z}^{k'}/(\vec{d'}) \cong \mathbb{Z}^{k'-1}$ must be $l-1$. Since the rank of the domain $\mathbb{Z}^k/(\vec{d})$ of $\Phi$ is $k-1$, this is only possible if we have $k \geq l$. We therefore have $k \leq \sum_{i=1}^{k}d_i \leq l \leq k$, which implies that $d_1 = \dots = d_k = 1$. \end{example}
\begin{example}\label{ex:domain_single_comp} In the context of Theorem~\ref{thm:main_combinatorial}, consider a Liouville embedding $X_{\vec{d}}^{2n} \overset{L}\hookrightarrow X_{\vec{d'}}^{2n}$ where the source $X_{\vec{d}}^{2n} = X_{(d_1)}^{2n}$ is the complement of a single divisor component. We claim that $d_1$ must divide $\gcd(\vec{d'})$. Conversely, by Theorem~\ref{thm:embeddings} below, if $d_1$ divides $\gcd(\vec{d'})$ then there is a Weinstein embedding $X_{(d_1)}^{2n} \overset{W}\hookrightarrow X_{\vec{d'}}^{2n}$.
We conclude that $X^{2n}_{(d_1)} \overset{L}\hookrightarrow X^{2n}_{\vec{d'}}$ if and only if $d_1 | \gcd(\vec{d'})$.
To justify the claim, note that we must have $l \geq d_1$. Moreover, since the domain of $\Phi$ is $\mathbb{Z} / (d_1)$, for each $i = 1,\dots,l$ we must have $d_1 \vec{y_i} = 0 \in \mathbb{Z}^{k'}/(\vec{d'})$, i.e. $d_1 \vec{y_i} = a_i\vec{d'}$ for some $a_i \in \mathbb{Z}_{\geq 1}$. We then have \begin{align*} \vec{d'} = \sum_{i=1}^l \vec{y}_i = \sum_{i=1}^l \tfrac{a_i}{d_1}\vec{d'} \geq \tfrac{l}{d_1} \vec{d'} \geq \vec{d'}, \end{align*} so all of these inequalities are equalities and we have $a_1 = \dots = a_l = 1$. It follows that $d_1$ divides each component of $\vec{d'}$, and hence it divides $\gcd(\vec{d'})$. \end{example}
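Leaving aside the standing degree hypotheses, the resulting criterion is a one-line divisibility test; a minimal sketch (function name ours):

```python
from functools import reduce
from math import gcd

def single_component_embeds(d1, dprime):
    """Criterion from the example above: the complement of a single degree-d1
    hypersurface Liouville-embeds into X_{d'} iff d1 divides gcd(d')."""
    return reduce(gcd, dprime) % d1 == 0
```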
\subsubsection{Constructions}\label{subsubsec:constructions}
For $k \in \mathbb{Z}_{\geq 1}$, let $\mathcal{S}_k := \mathbb{Z}_{\geq 1}^k/\Sigma_k$ denote the set of unordered $k$-tuples of positive integers. Here $\Sigma_k$ denotes the symmetric group on $k$ letters with its natural action on $\mathbb{Z}^k_{\geq 1}$. We will often represent the equivalence class of a $k$-tuple by its unique representative $(d_1,\dots,d_k)$ such that $d_1 \geq \dots \geq d_k$. Put $\mathcal{S} := \cup_{k=1}^\infty \mathcal{S}_k$. We define a partial order on $\mathcal{S}$ as follows: \begin{definition} For $\vec{d},\vec{d'} \in \mathcal{S}$, we put $\vec{d} \preceq \vec{d'}$ if $\vec{d'}$ can be obtained from $\vec{d}$ by a sequence of the following moves: \begin{enumerate} \item (combination) delete two entries $d_i,d_j$ for some $1 \leq i < j \leq k$ and add the new entry $d_i + d_j$ \item (duplication) add a new entry $d_{k+1}$ with $d_{k+1} = d_i$ for some $1 \leq i \leq k$. \end{enumerate} \end{definition} {\noindent} For example, we have $(3,2,2) \preceq (7,2)$ thanks to the following sequence of moves: $$(3,2,2) \rightsquigarrow (3,2,2,2) \rightsquigarrow (5,2,2) \rightsquigarrow (7,2).$$ By contrast, we have $(3,2,2) \not\preceq (10,1)$, since there is no way to acquire the entry $1$ by a sequence of the above moves. \begin{thm}\label{thm:embeddings} Fix $n \in \mathbb{Z}_{\geq 1}$. For $\vec{d},\vec{d'} \in \mathcal{S}$ such that $\vec{d} \preceq \vec{d'}$, there is a Weinstein embedding of $X^{2n}_{\vec{d}}$ into $X^{2n}_{\vec{d'}}$. \end{thm} \begin{remark} Our proof of Theorem~\ref{thm:embeddings} takes inspiration from \cite{khoa}, which gives a more precise description of the resulting Weinstein cobordism in the case that the divisor has no triple intersection points. In fact, Theorem~\ref{thm:embeddings} may already be known to experts, but we nevertheless include the proof for completeness. \end{remark}
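The partial order ``$\preceq$'' is mechanically decidable: combination preserves the total of the entries while duplication increases it, so a brute-force search over multisets whose total does not exceed that of the target tuple terminates. A Python sketch (illustrative only, names ours):

```python
from itertools import combinations

def leq(d, d_target):
    """Decide whether d <= d_target in the partial order generated by the two
    moves of the definition: 'combination' replaces two entries by their sum,
    and 'duplication' appends a copy of an existing entry.  Pruning states
    whose total exceeds sum(d_target) makes the search space finite."""
    start = tuple(sorted(d, reverse=True))
    goal = tuple(sorted(d_target, reverse=True))
    bound = sum(goal)
    seen, stack = {start}, [start]
    while stack:
        cur = stack.pop()
        if cur == goal:
            return True
        # combination move: merge entries i and j
        for i, j in combinations(range(len(cur)), 2):
            rest = [cur[m] for m in range(len(cur)) if m not in (i, j)]
            nxt = tuple(sorted(rest + [cur[i] + cur[j]], reverse=True))
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
        # duplication move: append a copy of entry i (respecting the bound)
        for i in range(len(cur)):
            if sum(cur) + cur[i] <= bound:
                nxt = tuple(sorted(cur + (cur[i],), reverse=True))
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
    return False
```

This confirms, for instance, the two sample computations above: $(3,2,2) \preceq (7,2)$ while $(3,2,2) \not\preceq (10,1)$.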
Note that Theorem~\ref{thm:main_combinatorial} obstructs Liouville embeddings, hence a fortiori Weinstein embeddings. Since the constructions provided by Theorem~\ref{thm:embeddings} are Weinstein embeddings, we can also reformulate most of the preceding results in the Weinstein category. For example, the analogue of Theorem~\ref{thm:main_liouville} is: \begin{cor}\label{cor:main_weinstein} Fix $n \in \mathbb{Z}_{\geq 1}$ and tuples $\vec{d} = (d_1,\dots,d_k) \in \mathbb{Z}_{\geq 1}^k$ and $\vec{d'} = (d_1',\dots,d_{k'}') \in \mathbb{Z}_{\geq 1}^{k'}$ with $\sum_{i=1}^kd_i,\sum_{i=1}^{k'}d_i' \geq n+1$. Assume that we have $\sum_{i=1}^{k'} d_i' < 2\sum_{i=1}^{k}d_i - n -1$. Then there is a Weinstein embedding $X^{2n}_{\vec{d}} \overset{W}\hookrightarrow X^{2n}_{\vec{d'}}$ if and only if $\vec{d} \preceq \vec{d'}$. \end{cor} {\noindent} We note, however, that Weinstein embeddings have more restricted topology compared to Liouville embeddings. Namely, the complementary cobordism must admit a Morse function with all critical points having index at most half the dimension (see e.g. \S\ref{subsec:geom_prel}). Consequently, many of the obstructions involved in Corollary~\ref{cor:main_weinstein} follow simply from singular homology considerations.
As for the symplectic category, there is quite a bit more flexibility. For example, as mentioned above there are symplectic embeddings $X^{2n}_k \overset{S}\hookrightarrow X^{2n}_{k'}$ for any $k,k' \in \mathbb{Z}_{\geq 1}$. At the same time, in some cases symplectic embeddings are automatically Liouville embeddings due to first cohomology considerations, and hence Theorem~\ref{thm:main_liouville} applies. \begin{cor}\label{cor:main_symplectic} Fix $n \in \mathbb{Z}_{\geq 1}$ and tuples $\vec{d} = (d_1,\dots,d_k) \in \mathbb{Z}_{\geq 1}^k$ and $\vec{d'} = (d_1',\dots,d_{k'}') \in \mathbb{Z}_{\geq 1}^{k'}$ with $\sum_{i=1}^kd_i,\sum_{i=1}^{k'}d_i' \geq n+1$. If there is a symplectic embedding $X^{2n}_{\vec{d}} \overset{S}\hookrightarrow X^{2n}_{\vec{d'}}$, then we must have that $\gcd(\vec{d})$ divides $\gcd(\vec{d'})$. Moreover, if we assume that $\gcd(\vec{d})$ is an entry of $\vec{d}$, then there is a symplectic embedding $X^{2n}_{\vec{d}} \overset{S}\hookrightarrow X^{2n}_{\vec{d'}}$ if and only if $\gcd(\vec{d})$ divides $\gcd(\vec{d'})$. \end{cor} \begin{proof} Suppose that we have a symplectic embedding $X^{2n}_{\vec{d}} \overset{S}\hookrightarrow X^{2n}_{\vec{d'}}$. Put $g := \gcd(\vec{d})$. By Theorem~\ref{thm:embeddings} there is a Weinstein embedding $X^{2n}_{(g)} \overset{W}\hookrightarrow X^{2n}_{\vec{d}}$, and hence by concatenating we get a symplectic embedding $X^{2n}_{(g)} \overset{S}\hookrightarrow X_{\vec{d'}}^{2n}$. Since $H^1(X^{2n}_{(g)};\mathbb{R}) = 0$, this is automatically a Liouville embedding. By Example~\ref{ex:domain_single_comp}, such a Liouville embedding exists only if $g$ divides $\gcd(\vec{d'})$.
If we assume that $g$ divides $\gcd(\vec{d'})$ and also that $g$ is an entry of $\vec{d}$, then we have a symplectic embedding $X^{2n}_{\vec{d}} \overset{S}\hookrightarrow X^{2n}_{(g)}$ given by adding back divisor components, and we can concatenate this with a Weinstein embedding $X^{2n}_{(g)} \overset{W}\hookrightarrow X^{2n}_{\vec{d'}}$ to get a symplectic embedding $X_{\vec{d}}^{2n} \overset{S}\hookrightarrow X_{\vec{d'}}^{2n}$. \end{proof}
The rest of this paper is structured roughly as follows. In \S\ref{sec:setting_stage} we discuss the necessary background on divisor complements and pseudoholomorphic curves, meanwhile setting the notation for the rest of the paper. In \S\ref{sec:invariants} we introduce our main symplectic embedding obstructions $\mathbb{G}\Langle\mathcal{T}^mp\Rangle$, which arise as simplifications of a more general family of symplectic invariants $\mathbb{I}^{\leq l}$. In \S\ref{sec:computationsI} we begin the discussion of the relevant SFT moduli spaces and we prove Theorem~\ref{thm:main_G_computation}, assuming virtual perturbations. In \S\ref{sec:computationsII}, we analyze the moduli spaces in more detail and prove Theorem~\ref{thm:main_combinatorial}. In \S\ref{sec:constructions}, we produce Weinstein embeddings, and also discuss flexibility constructions which place our main results into broader context. Finally, in \S\ref{sec:concl} we give a (highly nonexhaustive) list of open problems and future directions.
\begin{addendum} After the first draft of this paper was completed, the authors learned of the concurrent paper \cite{moreno2020landscape} by A. Moreno and Z. Zhou, whose techniques and results are closely related to the present work. In \cite{moreno2020landscape}, the authors define and exploit algebraic structures on rational symplectic field theory in order to obstruct exact cobordisms between contact manifolds (somewhat in the spirit of Remark~\ref{rmk:filling_indep} below), implemented using Pardon's framework \cite{pardon2019contact}. In particular, their techniques recover our Corollary~\ref{cor:G_obstructions} in the special case that all divisor components have degree one (see \cite[Thm. G]{moreno2020landscape}), and they moreover compute their invariants for a variety of other geometrically natural examples. \end{addendum}
\section{Setting the stage}\label{sec:setting_stage}
\subsection{Geometric preliminaries}\label{subsec:geom_prel}
\subsubsection{Contact manifolds and symplectizations}
Recall that a {\em contact form} on a closed odd-dimensional manifold $Y$ is a maximally nondegenerate one-form $\alpha$, i.e. $\alpha \wedge \underbrace{d\alpha \wedge \dots \wedge d\alpha}_{n-1}$ is everywhere nonvanishing, where $\dim Y = 2n-1$. If the orientation induced by this volume form agrees with a preferred orientation on $Y$ then we say that $\alpha$ is a {\em positive} contact form. A {\em contact manifold} is a pair $(Y,\xi)$, where $\xi$ is a hyperplane distribution of the form $\xi = \ker \alpha$ for some contact form $\alpha$. In this paper we will typically work with {\em strict} contact manifolds, i.e. contact manifolds equipped with a preferred contact form $\alpha$. By slight abuse of notation, we will often refer to the strict contact manifold simply by $Y$ when $\alpha$ is implicitly understood, and a similar remark holds for Liouville domains and so on.
Given a strict contact manifold $(Y,\alpha)$, the {\em symplectization} is the symplectic manifold $\mathbb{R} \times Y$ equipped with the symplectic form $d(e^r\alpha)$ and preferred one-form $e^r\alpha$, where $r$ is the coordinate on $\mathbb{R}$. We will sometimes also utilize the positive (resp. negative) half-symplectization, given by restricting to $\mathbb{R}_{\geq 0} \times Y$ (resp. $\mathbb{R}_{\leq 0} \times Y$).
The {\em Reeb vector field} $R_\alpha$ is the unique vector field on $Y$ such that $\alpha(R_\alpha) \equiv 1$ and $d\alpha(R_\alpha,-) \equiv 0$. By a ($T$-periodic) {\em Reeb orbit} we mean a loop $\gamma: [0,T] \rightarrow Y$ with $\gamma(0) = \gamma(T)$ for some $T \in \mathbb{R}_{>0}$, such that $\dot{\gamma}(t) = R_\alpha(\gamma(t))$ for all $t \in [0,T]$. Here $T$ is called the {\em period} or {\em action} of $\gamma$, denoted by $\mathcal{A}(\gamma)$.
Note that equivalently we have $\mathcal{A}(\gamma) = \int_\gamma \alpha$. A Reeb orbit $\gamma$ is {\em nondegenerate} if the map $\xi|_{\gamma(0)} \rightarrow \xi|_{\gamma(0)}$ induced by the linearized time-$T$ Reeb flow does not have $1$ as an eigenvalue, and the contact form $\alpha$ is nondegenerate if all of its Reeb orbits are.
\subsubsection{Flavors of open symplectic manifolds}
Recall that a {\em Liouville domain} is a pair $(X,{\lambda})$, where $X$ is an even-dimensional compact manifold with boundary and ${\lambda}$ is a one-form such that $d{\lambda}$ is symplectic and ${\lambda}$ restricts to a positive\footnote{More precisely, we orient $X$ by the volume form $\wedge^n d{\lambda}$ and we orient $\partial X$ via the boundary orientation.} contact form on $\partial X$. This last condition is equivalent to the Liouville vector field $V_{\lambda}$, characterized by $d{\lambda}(V_{\lambda},-) = {\lambda}$, being outwardly transverse along $\partial X$.
There is a closely related notion of {\em Liouville manifold}, which is a pair $(X,{\lambda})$, where $X$ is a noncompact manifold and ${\lambda}$ is a one-form such that $d{\lambda}$ is symplectic and the flow of the Liouville vector field $V_{\lambda}$ is complete.
If we can moreover find a compact subdomain $D \subset X$ with smooth boundary such that $V_{\lambda}$ is outwardly transverse along $\partial D$ and ${\lambda}$ is nonvanishing on $X \setminus D$, then $(X,{\lambda})$ is said to be of ``finite type''. In this case, the restriction $(D,{\lambda}|_D)$ defines a Liouville domain. Conversely, if $(X,{\lambda})$ is a Liouville domain, its symplectic completion $(\widehat{X},\widehat{{\lambda}})$ is the finite type Liouville manifold given by attaching the positive half-symplectization of $(\partial X, {\lambda}|_{\partial X})$ to $(X,{\lambda})$.
Given two Liouville domains $(X,{\lambda})$ and $(X,{\lambda}')$ on the same manifold $X$, we say that they are {\em Liouville homotopic} if there is a smooth one-parameter family of Liouville forms ${\lambda}_t$, $t \in [0,1]$, with ${\lambda}_0 = {\lambda}$ and ${\lambda}_1 = {\lambda}'$. Two Liouville domains $(X,{\lambda})$ and $(X',{\lambda}')$ are {\em Liouville deformation equivalent} if there exists a diffeomorphism $F: X \rightarrow X'$ such that $(X,{\lambda})$ and $(X,F^*{\lambda}')$ are Liouville homotopic. These induce equivalent notions of Liouville homotopy and Liouville deformation equivalence between the corresponding symplectic completions. By a version of Moser's Stability Theorem (see \cite[Prop. 11.8]{cieliebak2012stein}), if two Liouville domains are Liouville homotopic, then their symplectic completions are symplectomorphic. Moreover, by \cite[Lem. 11.2]{cieliebak2012stein}, if two Liouville manifolds $(X,{\lambda})$ and $(X',{\lambda}')$ are symplectomorphic, then we can find a diffeomorphism $G: X \rightarrow X'$ such that $G^*{\lambda}' - {\lambda}$ is an exact one-form.
In particular, the process of symplectic completion sets up a one-to-one correspondence between Liouville domains up to Liouville homotopy and finite type Liouville manifolds up to Liouville homotopy. In the sequel we will mostly phrase results in terms of Liouville domains for convenience. A similar remark will apply for Weinstein domains / manifolds and Stein domains / manifolds.
A {\em Weinstein domain} is a triple $(X,{\lambda},\phi)$, where $(X,{\lambda})$ is a Liouville domain and $\phi: X \rightarrow \mathbb{R}$ is a generalized Morse function for which the Liouville vector field $V_{\lambda}$ is gradient-like, and which is constant along $\partial X$. Similarly, a {\em Weinstein manifold} is a triple $(X,{\lambda},\phi)$, where $(X,{\lambda})$ is a Liouville manifold and $\phi: X \rightarrow \mathbb{R}$ is an {\em exhausting} (i.e. proper and bounded from below) Morse function such that $V_{\lambda}$ is gradient-like for $\phi$. The Weinstein manifold $(X,{\lambda},\phi)$ is {\em finite type} if and only if $\phi$ has finitely many critical points, which implies that $(X,{\lambda})$ is a finite type Liouville manifold. A standard computation shows that each critical point of $\phi$ has index at most half the dimension of $X$, and this puts strong restrictions on the homotopy type of $X$. Conversely, any manifold $X$ of dimension at least six which admits a nondegenerate two-form and an exhausting Morse function with critical points of index at most half the ambient dimension is diffeomorphic to a Weinstein manifold (see \cite[Thm. 13.2]{cieliebak2012stein}).
Two Weinstein domains $(X,{\lambda},\phi)$ and $(X,{\lambda}',\phi')$ are {\em Weinstein homotopic} if there exists a smooth family of Weinstein domains $(X,{\lambda}_t,\phi_t)$ for $t \in [0,1]$, with $({\lambda}_0,\phi_0) = ({\lambda},\phi)$ and $({\lambda}_1,\phi_1) = ({\lambda}',\phi')$. Note that a generic one-parameter family of functions will have isolated {\em birth-death type degenerations}, which is why we require the functions $\phi_t$ to be only generalized Morse (see \cite[\S9.1]{cieliebak2012stein}). Similarly, two Weinstein domains $(X,{\lambda},\phi)$ and $(X',{\lambda}',\phi')$ are {\em Weinstein deformation equivalent} if there is a diffeomorphism $F: X \rightarrow X'$ such that $(X,{\lambda},\phi)$ and $(X,F^*{\lambda}',F^*\phi')$ are Weinstein homotopic.
A {\em Stein manifold} is a (necessarily noncompact) complex manifold admitting a proper biholomorphic embedding into affine space $\mathbb{C}^N$ for some $N \in \mathbb{Z}_{\geq 1}$. There are several other common equivalent definitions - see e.g. \cite[\S 5.3]{cieliebak2012stein} for more details. It turns out that one can always find an exhausting strictly plurisubharmonic function $\phi: X \rightarrow \mathbb{R}$, which we can assume is Morse after a small perturbation. Here strict plurisubharmonicity of $\phi$ is equivalent to $-dd^\mathbb{C} \phi$ being a K\"ahler form, where $d^\mathbb{C}\phi$ denotes the one-form $d\phi \circ J$, with $J$ the (integrable) almost complex structure. The Stein manifold $(X,J,\phi)$ is {\em finite type} if $\phi$ has finitely many critical points. The definitions of {\em Stein homotopy} and {\em Stein deformation equivalence} mirror the Weinstein case.
Given a Stein manifold $(X,J)$ and an exhausting strictly plurisubharmonic function $\phi: X \rightarrow \mathbb{R}$, we produce a Weinstein manifold $\mathfrak{W}(X,J) := (X,{\lambda} := -d^\mathbb{C} \phi, \psi \circ \phi)$, where $\psi: \mathbb{R} \rightarrow \mathbb{R}$ is a suitable diffeomorphism (this is needed to make the vector field $V_{\lambda}$ complete - see \cite[\S2.1]{cieliebak2012stein}). Moreover, up to Weinstein homotopy this Weinstein manifold depends only on the Stein manifold $(X,J)$ up to Stein homotopy. In fact, by a deep result from \cite{cieliebak2012stein}, this association sets up a one-to-one correspondence between Stein manifolds up to Stein homotopy and Weinstein manifolds up to Weinstein homotopy. As a consequence, for the qualitative embedding problems considered in this paper, it makes no essential difference if we work in the Stein or Weinstein category.
The above definitions also naturally generalize to the notions of {\em Liouville cobordism}, {\em Weinstein cobordism}, and {\em Stein cobordism}. For example, a Liouville cobordism (a.k.a. {\em exact cobordism}) is a pair $(X,{\lambda})$ where $X$ is a compact manifold with boundary such that the Liouville vector field is inwardly transverse along some components of $\partial X$ (the {\em negative boundary} $\partial^- X$) and outwardly transverse along the remaining components (the {\em positive boundary} $\partial^+ X$). Given a Liouville cobordism $X$, we pass to its symplectic completion by attaching the positive half-symplectization of $\partial^+X$ to its positive end and the negative half-symplectization of $\partial^-X$ to its negative end. Similarly, a Weinstein cobordism is a triple $(X,{\lambda},\phi)$, where $(X,{\lambda})$ is a Liouville cobordism and $\phi$ is a Morse function which is constant along $\partial^-X$ and $\partial^+X$, such that $V_{\lambda}$ is gradient-like for $\phi$.
As we recall in \S\ref{subsec:div_compl}, smooth complex affine algebraic varieties are Stein manifolds, canonically up to Stein homotopy. In summary, we have the following hierarchy for exact symplectic manifolds \begin{align*} \text{affine} \Rightarrow \text{Stein} \Leftrightarrow \text{Weinstein} \Rightarrow \text{Liouville}. \end{align*}
\begin{remark} Although pseudoholomorphic curves are best behaved in exact symplectic manifolds, for many purposes it suffices to have exactness only near the boundary. If we relax the definition of a Liouville cobordism by only requiring the one-form ${\lambda}$ to be defined near the boundary, we arrive at the notion of a {\em (strong) symplectic cobordism}. \end{remark}
\subsubsection{Embeddings} Recall (see e.g. \cite{Seidel_biased_view}) that a {\em Liouville embedding} from one Liouville domain $(X,{\lambda})$ into another Liouville domain $(X',{\lambda}')$ of the same dimension is a smooth embedding $\iota: X \hookrightarrow X'$ such that $\iota^*({\lambda}') = e^{\rho} {\lambda} + df$ for some constant $\rho \in \mathbb{R}$ and some smooth function $f: X \rightarrow \mathbb{R}$. As a shorthand, we put $(X,{\lambda}) \overset{L}\hookrightarrow (X',{\lambda}')$ or simply $X \overset{L}\hookrightarrow X'$ if such a Liouville embedding exists. Note that in the case $\rho = 0$, this says that $\iota$ is an {\em exact symplectic embedding}, i.e. $\iota^*({\lambda}') - {\lambda}$ is an exact one-form, and if moreover $f \equiv 0$, then $\iota$ is a {\em strict exact symplectic embedding}, i.e. it satisfies $\iota^*{\lambda}' = {\lambda}$. The following lemma combines a few standard observations about Liouville embeddings. \begin{lemma}\label{lem:liouville_emb_lemmas}\hspace{1cm} \begin{enumerate}[label=(\alph*)] \item Suppose that $(X,{\lambda}_t)_{t \in [0,1]}$ is a Liouville homotopy of Liouville domains. Then there is a diffeomorphism $h: \widehat{X} \rightarrow \widehat{X}$ such that $h^*\widehat{{\lambda}}_1 = \widehat{{\lambda}}_0 + df$ for some smooth function $f: \widehat{X} \rightarrow \mathbb{R}$.
\item Let $(X,{\lambda})$ and $(X',{\lambda}')$ be Liouville domains, and suppose there is a Liouville embedding of $(X,{\lambda})$ into $(X',{\lambda}')$. Then there is a Liouville homotopy $(X',{\lambda}'_t)_{t \in [0,1]}$ with ${\lambda}'_0 = {\lambda}'$ and a strict exact symplectic embedding of $(X,{\lambda})$ into $(X',{\lambda}'_1)$. Moreover, we can assume that we have ${\lambda}'_t|_{\partial X'} = e^{q(t)}{\lambda}'|_{\partial X'}$ for some smooth function $q: [0,1] \rightarrow \mathbb{R}$ and all $t \in [0,1]$. \item For Liouville domains $(X,{\lambda})$ and $(X',{\lambda}')$, there is a Liouville embedding $(X,{\lambda}) \overset{L}\hookrightarrow (X',{\lambda}')$ if and only if there is a (not necessarily proper) exact symplectic embedding of $(\widehat{X},\widehat{{\lambda}})$ into $(\widehat{X'},\widehat{{\lambda}'})$. \item Suppose that there is a Liouville embedding $(X,{\lambda}) \overset{L}\hookrightarrow (X',{\lambda}')$ of Liouville domains. Then the same is true after applying a Liouville homotopy to $(X,{\lambda})$ or $(X',{\lambda}')$. \end{enumerate} \end{lemma} \begin{proof} Part (a) is \cite[Prop. 11.8]{cieliebak2012stein}, proved using Moser's trick.
For (b), suppose that $\iota: X \hookrightarrow X'$ is a smooth embedding with $\iota^*{\lambda}' = e^\rho{\lambda} + df$ for some constant $\rho \in \mathbb{R}$ and smooth function $f: X \rightarrow \mathbb{R}$. After post-composing $\iota$ with the Liouville flow of $X'$ for some negative time, we can assume that we have $\iota(X) \subset \op{Int}\, X'$. Let $\widetilde{f}: X' \rightarrow \mathbb{R}$ be a smooth function whose restriction to $\iota(X)$ agrees with $\iota_* f$ and which vanishes outside of a small neighborhood of $\iota(X)$ contained in $\op{Int}\, X'$. Consider the Liouville one-form on $X'$ given by $\widetilde{{\lambda}'} := e^{-\rho}({\lambda}' - d\widetilde{f})$. Then we have $\iota^*(\widetilde{{\lambda}'}) = {\lambda}$, and the family ${\lambda}'_t := e^{-t\rho}({\lambda}' - td\widetilde{f})$ defines a Liouville homotopy with ${\lambda}'_0 = {\lambda}'$ and ${\lambda}'_1 = \widetilde{{\lambda}'}$. Since $\widetilde{f}$ vanishes near $\partial X'$, we have ${\lambda}'_t|_{\partial X'} = e^{-t\rho}{\lambda}'|_{\partial X'}$, so the last claim holds with $q(t) = -t\rho$.
For (c), first suppose that $\iota: \widehat{X} \hookrightarrow \widehat{X'}$ is a smooth embedding satisfying $\iota^*\widehat{{\lambda}'} = \widehat{{\lambda}} + df$ for a smooth function $f: \widehat{X} \rightarrow \mathbb{R}$. For $t \in \mathbb{R}$, let $\phi_t: \widehat{X'} \rightarrow \widehat{X'}$ denote the time-$t$ flow of the Liouville vector field, so we have $\phi_t^*\widehat{{\lambda}'} = e^t \widehat{{\lambda}'}$. Then for $t \ll 0$, the composite embedding $\phi_t \circ \iota|_{X}: X \rightarrow \widehat{X'}$ has image in $X'$, and it pulls back $\widehat{{\lambda}'}$ to $e^t\widehat{{\lambda}} + d(e^tf)$, so it is a Liouville embedding.
Conversely, suppose that $\iota: X \hookrightarrow X'$ is a Liouville embedding. By (a) and (b), we can assume that $\iota$ is a strict exact symplectic embedding, i.e. we have $\iota^*{\lambda}' = {\lambda}$. We extend $\iota$ to a smooth embedding $\widehat{\iota}: \widehat{X} \rightarrow \widehat{X'}$ by requiring $\widehat{\iota}$ to intertwine the Liouville flows on $(\widehat{X},\widehat{{\lambda}})$ and $(\widehat{X'},\widehat{{\lambda}'})$. We then have $\widehat{\iota}^*\widehat{{\lambda}'} = \widehat{{\lambda}}$.
Finally, (d) follows immediately by combining (a) and (c).
\end{proof}
\begin{remark}A key feature of Liouville embeddings $X \overset{L}\hookrightarrow X'$ is that curves without positive ends in the complementary cobordism $X' \setminus X$ are ruled out by Stokes' theorem. That is, for any admissible almost complex structure $J$ on the symplectic completion of $X' \setminus X$, there are no nontrivial punctured asymptotically cylindrical $J$-holomorphic curves without positive ends (see \S\ref{subsubsec:SFT_mod_sp} below). \end{remark}
\begin{remark}\label{rmk:filling_indep} Note that whereas a Liouville embedding $X \overset{L}\hookrightarrow X'$ implies the existence of a Liouville cobordism between the contact manifolds $\partial X$ and $\partial X'$, the converse is a priori not true, since the Liouville domain given by concatenating $X$ with such a cobordism might not be Liouville deformation equivalent to $X'$. By verifying the relevant index conditions and performing a more or less standard neck stretching argument (see e.g. \cite[\S9.5]{cieliebak2018symplectic} or \cite{lazarev2020contact}), it seems plausible that one could upgrade the main results of this paper to more generally rule out exact Liouville cobordisms between contact manifolds of the form $\partial X_{\vec{d}}^{2n}$, although we do not pursue this perspective here. \end{remark}
Similarly, given Weinstein domains $(X,{\lambda},\phi)$ and $(X',{\lambda}',\phi')$ of the same dimension, a {\em strict Weinstein embedding} consists of a smooth embedding $\iota: X \rightarrow X'$ such that $\iota(X)$ is a sublevel set of $\phi'$ and we have $\iota^*{\lambda}' = {\lambda}$ and $\phi' \circ \iota = \phi$. In this case, $X' \setminus \iota(X)$ equipped with the restrictions of ${\lambda}'$ and $\phi'$ is a Weinstein cobordism with positive end $\partial X'$ and negative end $\iota(\partial X)$. More generally, we say there is a {\em Weinstein embedding} of $(X,{\lambda},\phi)$ into $(X',{\lambda}',\phi')$, denoted by $X \overset{W}\hookrightarrow X'$, if there is a strict Weinstein embedding after applying Weinstein homotopies to $X$ and $X'$.
\subsubsection{The Conley--Zehnder index}\label{subsubsec:CZ} The {\em Conley--Zehnder index} plays an important role in the Fredholm index formula for punctured curves. Let $\gamma$ be a Reeb orbit in a strict contact manifold $(Y,\alpha)$. The contact distribution $\xi := \ker \alpha$ equipped with the restriction of $d\alpha$ is a symplectic vector bundle over $Y$. The Conley--Zehnder index \cite{conley1984morse} of $\gamma$ is defined with respect to a choice of {\em framing}, i.e. a trivialization (up to homotopy) $\tau$ of the pullback of the symplectic vector bundle $\xi$ by $\gamma$. We denote this by ${\op{CZ}}_{\tau}(\gamma) \in \mathbb{Z}$. Given another framing $\tau'$, we have $${\op{CZ}}_{\tau}(\gamma) - {\op{CZ}}_{\tau'}(\gamma) = 2m(\tau',\tau),$$ where we use $\tau$ to view the framing $\tau'$ as a loop in $\op{Sp}(2n-2)$, and $m(\tau',\tau) \in {\pi_1(\op{Sp}(2n-2))} \cong \mathbb{Z}$ is its Maslov index (see e.g. \cite{robbin1993maslov}).
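As a concrete illustration (standard, and included here only for orientation; our normalization follows the common conventions for ellipsoids, e.g. those of Gutt and Hutchings), consider the irrational ellipsoid $E(a_1,\dots,a_n) = \{z \in \mathbb{C}^n \;:\; \sum_{j=1}^n \pi |z_j|^2/a_j \leq 1\}$ with $a_1,\dots,a_n > 0$ rationally independent. Its closed Reeb orbits are precisely the iterates $\gamma_i^k$ of the $n$ simple orbits lying in the complex coordinate axes, with action $k a_i$, and with respect to the framing $\tau$ induced by the ambient trivialization of $T\mathbb{C}^n$ one has
$${\op{CZ}}_\tau(\gamma_i^k) = n - 1 + 2\sum_{j=1}^n \left\lfloor k a_i/a_j \right\rfloor.$$
For instance, for $n = 2$ and $a_2/a_1 > 1$ irrational, the simple short and long orbits have Conley--Zehnder indices $3$ and $3 + 2\lfloor a_2/a_1 \rfloor$ respectively.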
Suppose that $Y$ is the contact boundary of a Liouville domain $X$. Given a spanning disk for $\gamma$, i.e. a map $u: \mathbb{D}^2 \rightarrow X$ with $u|_{\partial \mathbb{D}^2} = \gamma$, there is a unique (up to homotopy) trivialization of $\gamma^*TX$ which extends to a trivialization of $u^*TX$, and this induces a trivialization of $\gamma^*\xi$. Let ${\op{CZ}}_u(\gamma)$ denote the corresponding Conley--Zehnder index with respect to this trivialization. Given another such spanning disk $u'$, the difference in Conley--Zehnder index is given by $${\op{CZ}}_{u}(\gamma) - {\op{CZ}}_{u'}(\gamma) = \langle 2c_1(X),A\rangle,$$ where $A \in H_2(X)$ is the homology class of the sphere given by gluing $u$ to $u'$ with its opposite orientation.
\subsection{SFT moduli spaces}\label{subsec:sft_moduli_spaces}
\subsubsection{Admissible almost complex structures}
Let $(Y,\alpha)$ be a strict contact manifold, and let $\mathbb{R} \times Y$ be its symplectization, with $\mathbb{R}$-coordinate $r$. An almost complex structure $J$ on $\mathbb{R} \times Y$ is {\em admissible} or {\em cylindrical} if it is invariant under $r$-translations, sends $\partial _r$ to $R_\alpha$, and restricts to a $d\alpha$-compatible almost complex structure on the contact distribution $\xi = \ker \alpha$ on $\{0\} \times Y$. Note that such an almost complex structure is compatible with the symplectic form $\omega = d(e^r\alpha)$ on $\mathbb{R} \times Y$, i.e. $\omega(J-,-)$ is symmetric and positive definite on each tangent space, but it also satisfies an additional $\mathbb{R}$-symmetry.
Similarly, suppose that $X$ is a strong symplectic cobordism, with corresponding symplectic form $\omega$ and contact forms $\alpha^\pm$ on $\partial^{\pm}X$, and let ${\widehat{X} = \left(\mathbb{R}_{\leq 0} \times \partial^-X\right) \cup X \cup \left( \mathbb{R}_{\geq 0} \times \partial^+X \right)}$ denote its symplectic completion. An almost complex structure $J$ on $\widehat{X}$ is {\em admissible} if it is $\omega$-compatible on $X$, and it is cylindrical when restricted to the ends $\mathbb{R}_{\geq 0} \times \partial^+X$ and $\mathbb{R}_{\leq 0} \times \partial^-X$.
\subsubsection{SFT moduli spaces}\label{subsubsec:SFT_mod_sp}
The main analytical tool in this paper is the study of moduli spaces of punctured pseudoholomorphic curves \`a la symplectic field theory. We refer the reader to \cite{BEHWZ, abbas2014introduction} for more of the technical details, and we also recommend \cite{wendl2016lectures} for an excellent recent treatment. Since the setup here is quite similar to that of \cite[\S 3.2]{HSC}, we give here only a short summary to set our notation.
Let $(Y,\alpha)$ be a nondegenerate strict contact manifold, and let $J$ be an admissible almost complex structure on its symplectization $\mathbb{R} \times Y$. Suppose that we have two collections of Reeb orbits $\Gamma^+ = (\gamma_1^+,\dots,\gamma^+_{s^+})$ and $\Gamma^- = (\gamma^-_1,\dots,\gamma^-_{s^-})$ in $Y$, for some $s^+,s^- \in \mathbb{Z}_{\geq 0}$. We let $\mathcal{M}^J_{Y}(\Gamma^+;\Gamma^-)$ denote the moduli space of $J$-holomorphic\footnote{We sometimes refer to $J$-holomorphic curves as ``pseudoholomorphic curves'' or simply ``curves'' if the almost complex structure is unspecified or implicit. Similarly, we will also sometimes omit $J$ from our moduli space notation.} genus zero\footnote{All curves considered in this paper are genus zero and hence we will generally suppress the genus from the notation.} curves in $\mathbb{R} \times Y$, with $s^+$ punctures which are positively asymptotic to the Reeb orbits $\gamma^+_1,\dots,\gamma^+_{s^+}$, and $s^-$ punctures which are negatively asymptotic to the Reeb orbits $\gamma^-_1,\dots,\gamma^-_{s^-}$. Note that such curves (called ``asymptotically cylindrical'' in \cite{wendl2016lectures}) are proper, and the conformal structure on the domain (as a sphere with $s^+ + s^-$ punctures) is unconstrained. The $\mathbb{R}$-invariance of $J$ induces a corresponding $\mathbb{R}$-action on $\mathcal{M}^J_Y(\Gamma^+;\Gamma^-)$ which is free away from {\em trivial cylinders}, i.e. cylinders of the form $\mathbb{R} \times \gamma \subset \mathbb{R} \times Y$ with $\gamma$ a Reeb orbit in $Y$. We denote the quotient by $\mathcal{M}^J_Y(\Gamma^+;\Gamma^-) / \mathbb{R}$.
Given a curve $u \in \mathcal{M}^J_Y(\Gamma^+;\Gamma^-)$, we define its {\em energy} by $$E(u) := \int_u d\alpha.$$ Note that this is {\em not} quite the same as the symplectic area $\int_u d(e^r\alpha)$, which is always infinite.\footnote{In the SFT literature there is also a more refined notion of {\em Hofer energy}, whose finiteness for an arbitrary punctured curve is equivalent to that curve being asymptotically cylindrical (see e.g. \cite[\S1.3]{wendl2016lectures}). However, since all curves considered in this paper are asymptotically cylindrical the more straightforward notion of energy will suffice.} Nevertheless, we have $E(u) \geq 0$, with equality if and only if $u$ is a branched cover of a trivial cylinder. By Stokes' theorem, we have $$E(u) = \sum_{i=1}^{s^+}\mathcal{A}_{\alpha}(\gamma_i^+) - \sum_{i=1}^{s^-}\mathcal{A}_{\alpha}(\gamma_i^-).$$ In particular, $u$ must have at least one positive puncture. Also, in the case $s^+ = s^- = 1$, if we assume that the actions of simple orbits are rationally independent, then we have $E(u) = 0$ if and only if $\gamma_1^+ = \gamma_1^-$ and $u$ is a trivial cylinder.
Similarly, let $X$ be a strong symplectic cobordism, with nondegenerate contact forms $\alpha^\pm$ on $\partial^\pm X$, and let $J$ be an admissible almost complex structure on its symplectic completion $\widehat{X}$. For a collection of Reeb orbits $\Gamma^+ = (\gamma_1^+,\dots,\gamma^+_{s^+})$ in $\partial^+X$ and $\Gamma^- = (\gamma^-_1,\dots,\gamma^-_{s^-})$ in $\partial^-X$, we let $\mathcal{M}^J_X(\Gamma^+;\Gamma^-)$ denote the moduli space of $J$-holomorphic curves in $\widehat{X}$ with $s^+$ punctures positively asymptotic to $\gamma^+_1,\dots,\gamma^+_{s^+}$ and $s^-$ punctures negatively asymptotic to $\gamma^-_1,\dots,\gamma^-_{s^-}$. By slight abuse of notation, we will often suppress the completion process from the discussion and refer to elements of $\mathcal{M}^J_X(\Gamma^+;\Gamma^-)$ simply as ``curves in $X$''.
In the case that $X$ is a {\em symplectic filling}, i.e. $\partial^-X = \varnothing$, note that curves in $X$ cannot have negative ends, and we denote the moduli space with positive asymptotics $\Gamma^+ = (\gamma^+_1,\dots,\gamma^+_{s^+})$ by $\mathcal{M}_X^J(\gamma^+_1,\dots,\gamma^+_{s^+})$ without risk of confusion. Similarly, if $X$ is a {\em symplectic cap}, i.e. $\partial^+X = \varnothing$, we denote the moduli space of curves in $X$ with negative asymptotics $\Gamma^- = (\gamma^-_1,\dots,\gamma^-_{s^-})$ by $\mathcal{M}_X^J(\gamma^-_1,\dots,\gamma^-_{s^-})$.
We define the energy of a curve $u \in \mathcal{M}_X^J(\Gamma^+;\Gamma^-)$ by $$ E(u) := \int_u \check{\omega},$$ where $\check{\omega}$ denotes the piecewise smooth two-form
$$\check{\omega} := (d\alpha^+)|_{\mathbb{R}_{\geq 0} \times \partial^+X} + \omega|_X + (d\alpha^-)|_{\mathbb{R}_{\leq 0} \times \partial^-X}.$$ If $X$ is furthermore a Liouville cobordism, we have by Stokes' theorem $$E(u) = \sum_{i=1}^{s^+}\mathcal{A}_{\alpha^+}(\gamma_i^+) - \sum_{j=1}^{s^-}\mathcal{A}_{\alpha^-}(\gamma_j^-).$$ In particular, $u$ must have at least one positive end.
Let $H_2(Y; \Gamma^+ \cup \Gamma^-)$ denote the set of $2$-chains in $Y$ with boundary $\sum_{i=1}^{s^+} \gamma_i^+ - \sum_{j=1}^{s^-}\gamma_j^-$, modulo boundaries of $3$-chains. This forms a torsor over $H_2(Y)$. A curve $u \in \mathcal{M}^J_Y(\Gamma^+;\Gamma^-)$ has a well-defined homology class $[u] \in H_2(Y;\Gamma^+ \cup \Gamma^-)$, and for a given class $A \in H_2(Y;\Gamma^+ \cup \Gamma^-)$ we have the subspace $\mathcal{M}^J_{Y,A}(\Gamma^+;\Gamma^-) \subset \mathcal{M}^J_Y(\Gamma^+;\Gamma^-)$ consisting of all curves lying in the class $A$. Similarly, we denote by $H_2(X;\Gamma^+ \cup \Gamma^-)$ the set of $2$-chains in $X$ with boundary $\sum_{i=1}^{s^+} \gamma_i^+ - \sum_{j=1}^{s^-}\gamma_j^-$, modulo boundaries of $3$-chains, and for a homology class $A \in H_2(X;\Gamma^+ \cup \Gamma^-)$ we have the corresponding subspace $\mathcal{M}^J_{X,A}(\Gamma^+;\Gamma^-) \subset \mathcal{M}^J_X(\Gamma^+;\Gamma^-)$. The energy of a pseudoholomorphic curve $u$ in $X$ is determined by its homology class $[u] \in H_2(X;\Gamma^+ \cup \Gamma^-)$.
We will also sometimes need to consider parametrized moduli spaces of pseudoholomorphic curves. For example, let $\{J_t\}_{t \in [0,1]}$ be a (smooth) one-parameter family of admissible almost complex structures on the symplectization $\mathbb{R} \times Y$. We denote by $\mathcal{M}_{Y}^{\{J_t\}}(\Gamma^+;\Gamma^-)$ the parametrized moduli space consisting of all pairs $(t,u)$, with $t \in [0,1]$ and $u \in \mathcal{M}_Y^{J_t}(\Gamma^+;\Gamma^-)$. Similarly, if $\{J_t\}_{t \in [0,1]}$ is a one-parameter family of admissible almost complex structures on the strong symplectic cobordism $X$, we denote the corresponding parametrized moduli space by $\mathcal{M}_X^{\{J_t\}}(\Gamma^+;\Gamma^-)$.
\subsubsection{SFT compactness and neck stretching}\label{subsubsec:neck_stretching}
The SFT compactness theorem, which comes in several variants, is the counterpart for punctured pseudoholomorphic curves of Gromov's compactness theorem for closed curves. It provides natural compactifications of each of the above moduli spaces. Roughly, in addition to the nodal degenerations which appear in the closed curve case, punctured curves can degenerate into multilevel {\em pseudoholomorphic buildings}. For example, a typical element of the compactification $\overline{\mathcal{M}}_{Y,A}^J(\Gamma^+;\Gamma^-)/\mathbb{R}$ of $\mathcal{M}^J_{Y,A}(\Gamma^+;\Gamma^-)/\mathbb{R}$ consists of some number $l \geq 1$ of levels in the symplectization $\mathbb{R} \times Y$. Each level consists of one or more $J$-holomorphic curve components\footnote{Here by {\em component} we mean irreducible component, i.e. a curve whose domain is smooth and connected. For example, a cylinder with an attached sphere bubble consists of two components. We will use the term ``curve component'' when we wish to emphasize that there is a single component, as opposed to a nodal curve or building. By contrast, we will use the term ``configuration'' when we wish to emphasize the possibility of several components or levels.} in $\mathbb{R} \times Y$, such that the Reeb orbit asymptotics of adjacent levels are matched, and the total domain after gluing along paired punctures is a sphere with $s^+ + s^-$ punctures. Moreover, the total homology class of the configuration is $A \in H_2(Y;\Gamma^+\cup\Gamma^-)$, the positive asymptotics of the top level are given by $\Gamma^+$, and the negative asymptotics of the bottom level are given by $\Gamma^-$. Each curve component is defined up to biholomorphic reparametrization, and each level is defined up to translation in the $\mathbb{R}$ direction. In addition to disallowing constant closed components with two or fewer special points, the SFT stability condition also disallows symplectization levels with no nontrivial components.
Similarly, a typical element of the compactification $\overline{\mathcal{M}}_{X,A}^J(\Gamma^+;\Gamma^-)$ of $\mathcal{M}_{X,A}^J(\Gamma^+;\Gamma^-)$ consists of a pseudoholomorphic building with some number (possibly zero) of levels in the symplectization $\mathbb{R} \times \partial^+X$, a single level in $X$, and some number (possibly zero) of levels in the symplectization $\mathbb{R} \times \partial^- X$, subject to the same conditions as above. Here the curve components in the $X$ level are $J$-holomorphic, whereas the components in $\mathbb{R} \times \partial^{\pm}X$ are $J^{\pm}$-holomorphic, where $J^{\pm}$ are the cylindrical almost complex structures naturally determined by restricting $J$. Here each of the symplectization levels is again defined only up to $\mathbb{R}$-translation, whereas the level in $X$ is defined without any quotient.
For a parametrized moduli space such as $\mathcal{M}_{X,A}^{\{J_t\}}(\Gamma^+;\Gamma^-)$, the SFT compactification $\overline{\mathcal{M}}_{X,A}^{\{J_t\}}(\Gamma^+;\Gamma^-)$ is defined as the union over pairs $(C,t)$ for $C \in \overline{\mathcal{M}}_{X,A}^{J_t}(\Gamma^+,\Gamma^-)$ and $t \in [0,1]$. A variation on this called {\em stretching the neck} constitutes a fundamental tool in symplectic field theory. Namely, let $X$ be a strong symplectic cobordism (this includes the case that $X$ is closed) with symplectic form $\omega$, and let $Y \subset X$ be a separating codimension one closed submanifold which is {\em contact type}, i.e. there is a one-form ${\lambda}$ defined near $Y$ satisfying $d{\lambda} = \omega$, and such that the Liouville vector field $V_{\lambda}$ is transverse to $Y$. Following e.g. \cite[\S3.4]{BEHWZ} (see also \cite[Lem. 2.4]{cieliebak2018symplectic}), we can define a family of almost complex structures $\{J_t\}_{t \in [0,1)}$ on $\widehat{X}$ which roughly has the effect of stretching out $(-\delta,\delta) \times Y$ to $(-R_t,R_t) \times Y$, with $\lim\limits_{t \rightarrow 1} R_t = \infty$, such that $J_t$ is cylindrical on $(-R_t,R_t) \times Y$. More precisely, let $J$ be an admissible almost complex structure on $X$ which is cylindrical on a small neighborhood $U$ of $Y$ which is identified with $(-\delta,\delta) \times Y$ for some $\delta > 0$ under the flow of $V_{\lambda}$, with $J$ invariant under translations in the first factor.
Let $J_{\mathbb{R} \times Y}$ denote the induced cylindrical almost complex structure on the full symplectization $\mathbb{R} \times Y$. Let $F_t: (-R_t,R_t) \rightarrow (-\delta,\delta)$ be a family of increasing diffeomorphisms for $t \in [0,1)$, such that $F_t$ has slope $1$ near $-R_t$ and $R_t$. We then set $J_t$ to be $(F_t \times \mathbb{1})_*(J_{\mathbb{R} \times Y}|_{(-R_t,R_t)\times Y})$ on $U$ and $J|_{X \setminus U}$ on $X \setminus U$. We assume that $R_0 = \delta$ and $F_0$ is the identity function, so that we have $J_0 = J$.
Although $\lim\limits_{t \rightarrow 1}J_t$ is not a well-defined almost complex structure on $\widehat{X}$, we nevertheless have a compactified moduli space $\overline{\mathcal{M}}^{\{J_t\}}_{X,A}(\Gamma^+;\Gamma^-)$. This has a well-defined projection to $[0,1]$, where the fiber over $1$ corresponds to pseudoholomorphic buildings in the {\em broken symplectic cobordism} $X^- \circledcirc X^+$, where $X^-$ and $X^+$ correspond to the bottom and top components of $X \setminus Y$. More precisely, a typical element of the fiber over $1$ in $\overline{\mathcal{M}}^{\{J_t\}}_{X,A}(\Gamma^+;\Gamma^-)$ is a pseudoholomorphic building with some number (possibly zero) of levels in the symplectization $\mathbb{R} \times \partial^+ X$ (this is vacuous if $\partial^+X = \varnothing$), a single level in $X^+$, some number (possibly zero) of levels in the symplectization $\mathbb{R} \times Y$, a single level in $X^-$, and some number (possibly zero) of levels in the symplectization $\mathbb{R} \times \partial^-X$ (this is vacuous if $\partial^-X = \varnothing$). This configuration is subject to similar matching and stability conditions to the above.
\subsubsection{Regularity for simple curves}\label{subsubsec:regularity}
Ideally one would like to say for example that $\mathcal{M}_{X,A}^J(\Gamma^+;\Gamma^-)$ is a smooth manifold and that $\overline{\mathcal{M}}_{X,A}^J(\Gamma^+;\Gamma^-)$ is a smooth compactification, at least for a generically\footnote{Following standard usage, we will say that a subset of admissible almost complex structures is {\em generic} if it is comeager, i.e. it contains a countable intersection of open dense subsets (c.f. the Baire category theorem).} chosen admissible almost complex structure $J$. Indeed, the Cauchy--Riemann equation defining a curve $u \in \mathcal{M}_{X}^J(\Gamma^+;\Gamma^-)$ is a Fredholm problem of index \begin{align*} \op{ind}(u) = (n-3)(2- s^+ - s^-) + \sum_{i=1}^{s^+} {\op{CZ}}_\tau(\gamma_i^+) - \sum_{j=1}^{s^-} {\op{CZ}}_\tau(\gamma_j^-) + 2c_1^\tau(u). \end{align*} Here as usual we put $\dim X = 2n$, $\tau$ corresponds to a choice of framing of each of the involved Reeb orbits, and $c_1^\tau(u)$ denotes the first Chern number of $u$ relative to this choice of framings, i.e. the signed count of zeros of a section of $u^*TX$ which is constant with respect to the given framings. Then $\op{ind}(u)$ gives the expected (or ``virtual'') dimension of $\mathcal{M}_{X}^J(\Gamma^+;\Gamma^-)$ near the curve $u$.
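For example (specializing the displayed formula; this plays no logical role but may help fix conventions), a plane $u$, i.e. the case $s^+ = 1$ and $s^- = 0$ with positive asymptotic $\gamma$, has
$$\op{ind}(u) = (n-3) + {\op{CZ}}_\tau(\gamma) + 2c_1^\tau(u),$$
so in dimension four ($n = 2$) a plane with $c_1^\tau(u) = 0$ asymptotic to an orbit with ${\op{CZ}}_\tau(\gamma) = 3$ has index $2$.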
If $u$ is {\em regular}, i.e. its linearized Cauchy--Riemann operator is surjective, then by a version of the implicit function theorem it can be shown that $\mathcal{M}_{X,A}^J(\Gamma^+;\Gamma^-)$ is indeed a smooth manifold near $u$. Unfortunately, we cannot in general arrange that all elements $u \in \mathcal{M}_{X,A}^J(\Gamma^+;\Gamma^-)$ are regular for generic $J$, due to the existence of multiple covers, which frequently appear with higher than expected dimension (e.g. they appear despite having negative index). Defining SFT in generality therefore necessitates the use of virtual perturbations (c.f. Remark~\ref{rmk:virtual}).
Nevertheless, by standard techniques we can fortunately achieve regularity for {\em simple} curves. Namely, by \cite[Thm. 6.19]{wendl2016lectures}, any nonconstant asymptotically cylindrical $J$-holomorphic curve can be factored into a degree $\kappa$ holomorphic map between punctured Riemann surfaces, followed by a $J$-holomorphic curve which is an embedding apart from finitely many critical points and self-intersection points. We call $\kappa$ the {\em covering multiplicity} of $u$, and $u$ is simple if and only if we have $\kappa = 1$. We denote the subspace of simple curves by $\mathcal{M}^{J,s}_{X,A}(\Gamma^+;\Gamma^-) \subset \mathcal{M}^J_{X,A}(\Gamma^+;\Gamma^-)$.
Any simple curve $u$ is {\em somewhere injective}, i.e. there is a point $z$ in its domain such that $du|_z \neq 0$ and $u^{-1}(u(z)) = \{z\}$ (see e.g. \cite[\S2.5]{JHOL}). A standard argument shows that, for any neighborhood $U$ of $u(z)$ and a generic perturbation $\widetilde{J}$ of $J$ supported in $U$, any $\widetilde{J}$-holomorphic curve in $X$ with a somewhere injective point mapping to $U$ is regular. By leveraging this idea with some care, one can show that every simple curve is regular for generic $J$ (see \cite[\S 7.1]{wendl2016lectures}), and hence $\mathcal{M}^{J,s}_{X}(\Gamma^+;\Gamma^-)$ is a smooth oriented\footnote{For a discussion of how to assign orientations to SFT moduli spaces see e.g. \cite[\S 11]{wendl2016lectures}.} manifold of dimension $\op{ind}(u)$.
Similarly, simple curves in a symplectization are regular for generic $J$. In the case of a symplectization, some care is needed due to the $\mathbb{R}$-symmetry (c.f. the discussion in \cite[\S 8]{wendl2016lectures}). Assuming there are no trivial cylinders in the moduli space $\mathcal{M}^{J,s}_Y(\Gamma^+;\Gamma^-)$, the quotient $\mathcal{M}^{J,s}_Y(\Gamma^+;\Gamma^-)/\mathbb{R}$ is a smooth oriented manifold of dimension $\op{ind}(u) -1$. In particular, since this is necessarily nonnegative, nontrivial simple curves in a symplectization must appear with index at least $1$ for generic admissible $J$.
In the case of a parametrized moduli space $\mathcal{M}^{\{J_t\}}_{X}(\Gamma^+;\Gamma^-)$, we also have that simple curves lying over $t \in (0,1)$ are regular, provided that the homotopy $\{J_t\}$ is generic. Note that regularity of $(t,u) \in \mathcal{M}^{\{J_t\}}_{X}(\Gamma^+;\Gamma^-)$ does {\em not} imply regularity of $u$ as a $J_t$-holomorphic curve. Rather, if $(t,u)$ is regular for some $t \in (0,1)$, then $\mathcal{M}^{\{J_t\}}_{X}(\Gamma^+;\Gamma^-)$ is a smooth oriented manifold of dimension $\op{ind}(u) + 1$ near $(t,u)$, where $\op{ind}(u)$ denotes the Fredholm index of $u$ as a $J_t$-holomorphic curve. In particular, for $(t,u) \in \mathcal{M}^{\{J_t\},s}_{X}(\Gamma^+;\Gamma^-)$ we must have $\op{ind}(u) \geq -1$.
We will also need to know something about the structure of the SFT compactifications of one-dimensional moduli spaces, which we expect to be compact one-dimensional manifolds with boundary. Provided that all relevant curves are simple, this is indeed the case, with the necessary charts near the boundary provided by the procedure of gluing along cylindrical ends. For example, consider a generic homotopy $\{J_t\}_{t \in [0,1]}$, and assume that each of the curve components appearing in the compactification $\overline{\mathcal{M}}^{\{J_t\}}_{X}(\Gamma^+;\Gamma^-)$ is simple. Then $\overline{\mathcal{M}}^{\{J_t\}}_{X}(\Gamma^+;\Gamma^-)$ is an oriented one-dimensional manifold whose boundary contains $\mathcal{M}^{J_1}_X(\Gamma^+;\Gamma^-)$ and $\mathcal{M}^{J_0}_X(\Gamma^+;\Gamma^-)$ (with its opposite orientation). We refer the reader to \cite[\S 10.2.4]{wendl2016lectures} and the references therein for a more detailed discussion.
\subsubsection{Formal curves and anchors}
As a convenient device for bookkeeping, we will make use of the notion of {\em formal curves}. Namely, in a strong symplectic cobordism $X$, a formal curve consists of a nodal punctured surface $\Sigma$, with each puncture designated as either positive or negative, together with, for each irreducible component of $\Sigma$, the following data: \begin{itemize} \item a collection of Reeb orbits $\Gamma^+= (\gamma^+_1,\dots,\gamma_{s^+}^+)$ in $\partial^+X$ corresponding to the positive punctures of $\Sigma$ \item a collection of Reeb orbits $\Gamma^- = (\gamma^-_1,\dots,\gamma^-_{s^-})$ in $\partial^-X$ corresponding to the negative punctures of $\Sigma$ \item a homology class $A_{\Sigma} \in H_2(X; \Gamma^+ \cup \Gamma^-)$. \end{itemize} Formal curves in a symplectization $\mathbb{R} \times Y$ of a strict contact manifold $Y$ are defined similarly, except that both the positive and negative Reeb orbits lie in $Y$, with homology classes $A_{\Sigma} \in H_2(Y; \Gamma^+ \cup \Gamma^-)$. We will additionally allow a formal curve $C$ to have extra marked points decorated by ``formal'' local tangency constraints of the form $\Langle \mathcal{T}^{m}p\Rangle$ for some $m \in \mathbb{Z}_{\geq 0}$. The formal curves considered in this paper will typically be connected and without any nodes, and they will always have total genus (after resolving nodes) zero.
Note that a formal curve $C$ has a well-defined Fredholm index. For instance, in the case that $C$ is connected and without nodes or additional constraints we put \begin{align*} \op{ind}(C) := (n-3)(2-s^+-s^-) + \sum_{i=1}^{s^+}{\op{CZ}}_\tau(\gamma_i^+) - \sum_{j=1}^{s^-} {\op{CZ}}_\tau(\gamma_j^-) + 2c_1^\tau(A_\Sigma). \end{align*} Any honest\footnote{We will call a curve ``honest'' when we wish to emphasize that it is not formal.} curve $u \in \mathcal{M}^J_{X,A}(\Gamma^+;\Gamma^-)$ can be viewed as a formal curve, but a formal curve need not have any pseudoholomorphic representative. Consider strong symplectic cobordisms $X^+$ and $X^-$ with common contact boundary $Y = \partial^+X^- = \partial^-X^+$, and let $X^- \circledcirc X^+$ denote the strong symplectic cobordism obtained by concatenating them along $Y$. Given pseudoholomorphic curves $u^- \in \mathcal{M}_{X^-}(\Gamma;\Gamma^-)$ and $u^+ \in \mathcal{M}_{X^+}(\Gamma^+;\Gamma)$ with shared Reeb orbit asymptotics $\Gamma$, we can {\em formally glue} along the orbits of $\Gamma$ in a natural way to obtain a formal curve $C$ in $X^- \circledcirc X^+$. Importantly, note that the index is additive under this operation, i.e. we have $\op{ind}(C) = \op{ind}(u^-) + \op{ind}(u^+)$.
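To spell out the index additivity in the basic case of a single shared orbit $\Gamma = (\gamma)$ (the general case is analogous, using additivity of the Euler characteristic of the glued domain), we have
\begin{align*}
\op{ind}(u^+) &= (n-3)(1 - s^+) + \sum_{i=1}^{s^+}{\op{CZ}}_\tau(\gamma_i^+) - {\op{CZ}}_\tau(\gamma) + 2c_1^\tau(u^+),\\
\op{ind}(u^-) &= (n-3)(1 - s^-) + {\op{CZ}}_\tau(\gamma) - \sum_{j=1}^{s^-}{\op{CZ}}_\tau(\gamma_j^-) + 2c_1^\tau(u^-),
\end{align*}
and summing, the ${\op{CZ}}_\tau(\gamma)$ terms cancel (along with the dependence on the choice of framing along $\gamma$), giving $\op{ind}(u^+) + \op{ind}(u^-) = (n-3)(2 - s^+ - s^-) + \sum_{i=1}^{s^+}{\op{CZ}}_\tau(\gamma_i^+) - \sum_{j=1}^{s^-}{\op{CZ}}_\tau(\gamma_j^-) + 2c_1^\tau(C) = \op{ind}(C)$.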
Another important device from symplectic field theory is that of {\em anchors}, which are used to correct naively defined structure maps. For example, if $X$ is a Liouville domain, the linearized contact homology $\op{C}\mathbb{H}_{\op{lin}}(X)$ can be viewed as the anchor-corrected version of cylindrical contact homology, the latter not typically being well-defined without additional assumptions (see e.g. the discussion in \cite[\S 1]{HuN}). Suppose that $X^-$ and $X^+$ are strong symplectic cobordisms with common contact boundary $Y = \partial^+X^- = \partial^-X^+$, and assume that we have $\partial^-X^- = \varnothing$. A pseudoholomorphic curve in $X^+$, anchored in $X^-$, consists of a two-level pseudoholomorphic building, with top level in $X^+$ and bottom level in $X^-$, such that each positive end of a component in $X^-$ is paired with a negative end of a component in $X^+$, but we allow unpaired negative ends of components in $X^+$. We will refer to the unpaired negative ends of components in $X^+$ as the negative ends of the anchored curve. The index of an anchored curve is by definition the sum of the indices of each component, and the topological type is that of the punctured surface given by gluing together the components of the domain along paired punctures. Intuitively, we view a curve in $X^+$ anchored in $X^-$ as a curve in $X^+$ with some extra ``tentacles'' extending into $X^-$, noting that the level in $X^+$ may consist of more than one component. Similarly, if $X$ is a Liouville domain, we define curves in the symplectization $\mathbb{R} \times \partial X$, anchored in $X$, in essentially the same way, but now with top level in $\mathbb{R} \times \partial X$. We can also speak of anchored formal curves, defined similarly but with each level only a formal curve.
\subsection{Local tangency constraints}
In order to probe higher dimensional moduli spaces of pseudoholomorphic curves in the Weinstein domains $X_{\vec{d}}^{2n}$, we will need to impose additional geometric constraints to cut down dimensions. Although there are a number of possible geometric constraints we could impose, such as multiple point constraints or blowup constraints (see e.g. the discussion in \cite[\S 5]{HSC}), the most fruitful for us are local tangency constraints. The basic idea, pioneered by Cieliebak--Mohnke \cite{CM2}, is to require curves to pass through a generically chosen point $p$ and to be tangent to specified order to a generically chosen germ of a divisor $D$ passing through $p$. For example, if $M$ is a closed $2n$-dimensional symplectic manifold with $A \in H_2(M)$, the count of pseudoholomorphic curves in $M$ representing the class $A$ and with tangency order $m$ (i.e. contact order $m+1$) to $D$ at $p$ gives rise to a Gromov--Witten type invariant, denoted by $\op{GW}_{M,A}\Langle \mathcal{T}^m p\Rangle \in \mathbb{Q}$, which is independent of all choices. Note that the local tangency constraint $\Langle \mathcal{T}^m p\Rangle$ cuts down the expected dimension by $2n + 2m - 2$. These counts are defined in \cite{McDuffSiegel_counting} using classical transversality techniques for semipositive closed symplectic manifolds, in which case they are integer-valued, computable by an explicit algorithm at least in dimension four. We can also incorporate local tangency constraints into moduli spaces of punctured curves, and we denote the analogues of the aforementioned moduli spaces by $\mathcal{M}_{X,A}^J(\Gamma^+;\Gamma^-)\Langle \mathcal{T}^m p \Rangle$, $\mathcal{M}_{Y,A}^J(\Gamma^+;\Gamma^-)\Langle\mathcal{T}^mp\Rangle$, and so on.
As explained in \cite[\S 4]{McDuffSiegel_counting}, one can equivalently replace the local tangency constraint with a skinny ellipsoidal constraint. Namely, after removing a small neighborhood of $p$ which is symplectomorphic to a sufficiently skinny $2n$-dimensional ellipsoid, curves satisfying the constraint $\Langle \mathcal{T}^m p \Rangle$ are substituted by curves with an additional negative puncture which is asymptotic to the $(m+1)$-fold cover of the smallest action Reeb orbit in the boundary of the skinny ellipsoid. This approach, while somewhat less geometrically natural, has the advantage of casting the constraint entirely within the standard framework of asymptotically cylindrical curves in strong symplectic cobordisms. For ease of exposition, we stick with the local tangency terminology and notation.
\subsection{Weinstein structure on a divisor complement}\label{subsec:div_compl}
In this subsection, we discuss the geometry and topology of complements of divisors in closed symplectic manifolds. We first recall that there is a natural Weinstein structure on the complement of any ample simple normal crossing divisor in a smooth projective complex variety. We then formulate Theorem~\ref{thm:div_compl1}, which gives a precise model for the Reeb dynamics. Together with Proposition~\ref{prop:compl_cz}, this gives an explicit understanding of the actions, first homology classes, and Conley--Zehnder indices of the corresponding closed Reeb orbits.
Let $M^{2n}$ be a smooth complex projective variety, and let $D \subset M$ be an ample divisor, i.e. $D = \sigma^{-1}(0)$ is the zero set of a holomorphic section $\sigma$ of an ample line bundle $\mathcal{L} \rightarrow M$.
We assume that $D$ is a simple normal crossing divisor, i.e. each irreducible component is smooth, and near each point of $D$ there are local holomorphic coordinates $z_1,\dots,z_n$ such that $D$ is cut out by the equation $z_1\dots z_k = 0$ for some $1 \leq k \leq n$. Recall that ampleness of $\mathcal{L}$ is equivalent to positivity, i.e. the existence of a Hermitian inner product $\langle -,-\rangle$ on $\mathcal{L}$ such that the curvature form of the Chern connection is a K\"ahler form. Given a holomorphic section $\sigma$, this is equivalent to the function $\phi := -\log ||\sigma||$ being a strictly plurisubharmonic function on $M \setminus D$, where $||-||$ is the norm corresponding to $\langle-,-\rangle$. In this case, $-dd^\mathbb{C}\phi$ extends to a K\"ahler form on $M$.
By \cite[Lem. 4.3]{Seidel_biased_view}, the critical points of $\phi$ form a compact subset of $X := M \setminus D$. In particular, after a small perturbation we can assume that $\phi$ is a Morse function. Then since $\phi$ is exhausting, $(X,J)$ is a Stein manifold, and the restriction to $\{\phi \leq C\}$ is a Stein domain for any $C > 0$ sufficiently large. Note that the Liouville vector field dual to ${\lambda} := -d^{\mathbb{C}}\phi$ is not complete, though this can easily be rectified by postcomposing $\phi$ with a suitable function $\psi: \mathbb{R} \rightarrow \mathbb{R}$ (c.f. \cite[Prop. 2.11]{cieliebak2012stein}), after which $(M \setminus D,{\lambda},\psi \circ \phi)$ becomes a Weinstein manifold. As explained in \cite[\S 4a]{Seidel_biased_view}, it follows from Hironaka's resolution of singularities that any smooth complex affine variety can be presented in this way as the complement of an ample simple normal crossing divisor in a smooth projective variety, and moreover the resulting Stein manifold is independent of all choices (the compactifying divisor, the Hermitian metric, etc.) up to Stein deformation equivalence.
Now let $D_1,\dots,D_k$ denote the irreducible components of an ample simple normal crossing divisor $D$ in a smooth complex projective variety $M$, and consider a nonzero tuple $\vec{v} = (v_1,\dots,v_k) \in \mathbb{Z}_{\geq 0}^k \setminus \{\vec{0}\}$, which we suppose has exactly $r \geq 1$ nonzero components.
For future reference, we introduce some additional notation: \begin{itemize} \item Let $D_{\vec{v}}$ denote the intersection of all those $D_i$ for which $v_i \neq 0$. \item Let $\mathring{D}_{\vec{v}} := D_{\vec{v}} \setminus \bigcup\limits_{i\;:\; v_i = 0}D_i$ denote the open stratum of $D_{\vec{v}}$. \item Let $ND_{\vec{v}} \rightarrow D_{\vec{v}}$ denote the normal bundle to $D_{\vec{v}} \subset M$. There is a natural reduction of the structure group to $U(1)^{\times r}$, and in particular we can locally identify the fibers with $\mathbb{C}^{\times r}$ in a manner which preserves the splitting. We will also sometimes identify $ND_{\vec{v}}$ with a small neighborhood of $D_{\vec{v}}$ in $M$. \item Let $S_{\vec{v}}$ denote the $\mathbb{T}^{r}$-bundle over $D_{\vec{v}}$ given by $(ND_{\vec{v}} \setminus D) / \mathbb{R}_{> 0}^r$, and let $\mathring{S}_{\vec{v}} \rightarrow \mathring{D}_{\vec{v}}$ denote its restriction to $\mathring{D}_{\vec{v}}$. \item Let $S_{\vec{v}}/S^1$ denote the $\mathbb{T}^{r-1}$-bundle over $D_{\vec{v}}$ given by quotienting $S_{\vec{v}}$ by the restriction of the natural free $\mathbb{T}^r$ action to the circle $\{ t\vec{v}\;:\; t \in \mathbb{R}\} \subset \mathbb{R}^r/\mathbb{Z}^r = \mathbb{T}^r$,
and let $\mathring{S}_{\vec{v}}/S^1$ denote its restriction to $\mathring{D}_{\vec{v}}$. \item For $i = 1,\dots,k$, let $c_i$ denote a small disk in $M$ which intersects $D_i$ exactly once, transversally and negatively, and is disjoint from the other divisor components, and let $[\partial c_i] \in H_1(X)$ denote the homology class of its boundary. \end{itemize}
In order to discuss the action filtration on a divisor complement, we also recall the notion of wrapping numbers. \begin{definition}\cite{mcleanslides} Assume that $M$ is a smooth complex projective variety with a divisor $D = \sigma^{-1}(0)$, where $\sigma$ is a holomorphic section of an ample line bundle $\mathcal{L} \rightarrow M$. For $i = 1,\dots,k$, the $i$th {\em holomorphic wrapping number} is minus the vanishing order of $\sigma$ along $D_i$. \end{definition} {\noindent} There is also a purely symplectic analogue given as follows. Following \cite{mclean2012growth, tehrani2018normal}, recall that a {\em symplectic simple normal crossing (SNC) divisor} $D$ in a symplectic manifold $(M,\omega)$ consists of a collection of transversely intersecting symplectic submanifolds $D_1,\dots,D_k \subset M$ such that each partial intersection $D_I := \bigcap\limits_{i \in I}D_i,\; I \subset \{1,\dots,k\}$, is a symplectic submanifold, and the ``symplectic orientation'' on $D_I$ agrees with the ``intersection orientation''. We note that this last condition is equivalent to the existence of a compatible almost complex structure $J$ for $(M,\omega)$ which makes each $D_i$ $J$-holomorphic. \begin{definition}\cite{mcleanslides}
Let $(M,\omega)$ be a symplectic manifold with a symplectic SNC divisor $D = D_1 \cup \dots \cup D_k$, and let ${\lambda}$ be a one-form on $M \setminus D$ with $d{\lambda} = \omega|_{M \setminus D}$. Let $N$ be the closure of a small neighborhood of $D$ with smooth boundary which deformation retracts onto $D$, and let $\rho: N \rightarrow [0,1]$ be a function which is equal to $1$ near $D$ and vanishes near $\partial N$. Let $\widetilde{\omega}$ be the two-form on $N$ given by $\omega$ near $D$ and $d(\rho {\lambda})$ away from $D$. The {\em symplectic wrapping numbers} $\mathbb{w}_1,\dots,\mathbb{w}_k$ are the unique coefficients such that $-\sum_{i=1}^k \mathbb{w}_i[D_i] \in H_{2n-2}(N;\mathbb{R})$ is Poincar\'e--Lefschetz dual to $[\widetilde{\omega}] \in H^2(N,\partial N;\mathbb{R})$. \end{definition}
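For concreteness, here is a minimal instance of the holomorphic wrapping number (our own illustration, not taken from the cited references): take $M = \mathbb{CP}^n$, $\mathcal{L} = \mathcal{O}(1)$, and $\sigma$ a linear section cutting out a hyperplane $D = D_1$, so that $X = M \setminus D \cong \mathbb{C}^n$.

```latex
% Our illustration: M = CP^n, D = D_1 a hyperplane, L = O(1),
% and sigma a linear section with D = sigma^{-1}(0).
% Since sigma vanishes to order exactly 1 along the single
% component D_1, the holomorphic wrapping number is
\[
  \mathbb{w}_1 \;=\; -\operatorname{ord}_{D_1}(\sigma) \;=\; -1.
\]
```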
The following theorem summarizes most of what we will need to know about the symplectic geometry of divisor complements: \begin{thm}\label{thm:div_compl1}\cite{mclean2012growth, tehrani2018normal} Fix $\mathcal{C} \in \mathbb{R}_{> 0}$ arbitrarily large and $\varepsilon \in \mathbb{R}_{>0}$ arbitrarily small. Let $M$ be a smooth complex projective variety with an ample simple normal crossing divisor $D$, and let $(X,{\lambda},\phi)$ denote the associated Weinstein domain corresponding to the divisor complement. For each $\vec{v} \in \mathbb{Z}_{\geq 0}^k \setminus \{\vec{0}\}$, pick an exhausting Morse function\footnote{Note that $\mathring{S}_{\vec{v}}/S^1$ is typically noncompact, hence we must specify the behavior at infinity. Since $f_{\vec{v}}$ is exhausting, after choosing a Riemannian metric which is Morse--Smale for $f_{\vec{v}}$, the resulting Morse cohomology is isomorphic to the ordinary cohomology of $\mathring{S}_{\vec{v}}/S^1$. By Poincar\'e duality, this is isomorphic to the Borel--Moore homology of $\mathring{S}_{\vec{v}}/S^1$ after a degree shift. }
$f_{\vec{v}}: \mathring{S}_{\vec{v}}/S^1 \rightarrow \mathbb{R}$. After a Weinstein homotopy of $(X,{\lambda},\phi)$ and a deformation of $D$ through symplectic SNC divisors, we can find a K\"ahler form $\omega$ on $M$ and an embedding $\iota: X \hookrightarrow M$ such that: \begin{enumerate}
\item[(1)] $N := \overline{M \setminus \iota(X)}$ deformation retracts onto $D$;
\item[(2)] $\iota^*\omega = d{\lambda}$, and $\iota_*{\lambda}$ extends to a one-form $\widetilde{{\lambda}}$ on $M \setminus D$ such that $d\widetilde{{\lambda}} = \omega|_{M \setminus D}$;
\item[(3)] the symplectic wrapping numbers of $D$ coincide with the holomorphic wrapping numbers. \end{enumerate}
Moreover, the contact form $\alpha := {\lambda}|_{\partial X}$ has nondegenerate Reeb dynamics, where: \begin{enumerate}
\item[(4)] the Reeb orbits of $(\partial X,\alpha)$ of period less than $\mathcal{C}$ are in one-to-one correspondence with pairs $(\vec{v},A)$, where $\vec{v}$ ranges over the tuples in $\mathbb{Z}^k_{\geq 0}\setminus \{\vec{0}\}$ satisfying $-\sum_{i=1}^k v_i\mathbb{w}_i \leq \mathcal{C}$ and $A$ ranges over the critical points $\op{crit}(f_{\vec{v}})$ of $f_{\vec{v}}$.
\item[(5)] the Reeb orbit $\gamma_{\vec{v}}^A$ corresponding to the tuple $\vec{v} \in \mathbb{Z}^k_{\geq 0}\setminus \{\vec{0}\}$ and critical point $A \in \op{crit}(f_{\vec{v}})$ lies in the homology class $\sum_{i=1}^k v_i[\partial c_i] \in H_1(X)$;
\item[(6)] the action of $\gamma_{\vec{v}}^A$ is given by $-\sum_{i=1}^k v_i\mathbb{w}_i$, up to a discrepancy of $\varepsilon$. \end{enumerate} \end{thm} {\noindent} In the sequel, we will typically assume that the above theorem has already been applied to a given divisor complement, and by slight abuse of notation we view $\iota$ as an inclusion $X \subset M$ and denote $\widetilde{{\lambda}}$ again by ${\lambda}$. \begin{remark} There is also a natural analogue of this statement in which the Reeb orbits are left to appear in Morse--Bott torus families. However, for later purposes we find it more convenient to perturb these families and work in the setting of nondegenerate Reeb orbits. Technically we can only perturb finitely many of the Morse--Bott loci for a given contact form, hence the appearance of the constant $\mathcal{C}$. We refer the reader to e.g. \cite[\S5.3]{hutchings2016beyond} for the details of the perturbation process. \end{remark}
\begin{remark}\hspace{1cm} \begin{itemize} \item Note that for any admissible almost complex structure $J$ on the symplectic completion of $N$, any $J$-holomorphic curve in $N$ must intersect $D$. Indeed, otherwise by (2) we can apply Stokes' theorem together with nonnegativity of energy to get a contradiction. \item For most of the pseudoholomorphic curve arguments in this paper we have an a priori upper bound on the actions of Reeb orbits which could arise. This means we can simply take $\mathcal{C}$ to be sufficiently large and safely ignore all Reeb orbits of action greater than $\mathcal{C}$. \end{itemize} \end{remark}
We end this subsection by discussing the Conley--Zehnder indices of the Reeb orbits $\gamma_{\vec{v}}^A$ described in Theorem~\ref{thm:div_compl1}. In the context of Theorem~\ref{thm:div_compl1}, each Reeb orbit $\gamma_{\vec{v}}^A$ bounds a small spanning disk $u$ in $N$ which has homological intersection $v_i$ with $[D_i]$ for $i = 1,\dots,k$. \begin{lemma}\label{lem:cz_well_def} Assume that the Poincar\'e dual to $c_1(TM)$ can be expressed in the form $h_1[D_1] + \dots + h_k[D_k] \in H_{2n-2}(M;\mathbb{R})$ for some $h_1,\dots,h_k \in \mathbb{R}$. Let $\gamma$ be a Reeb orbit in $\partial X$, and let $u,u': \mathbb{D}^2 \rightarrow M$ be two bounding disks such that $[u] \cdot [D_i] = [u'] \cdot [D_i]$ for $i = 1,\dots,k$. Then we have ${\op{CZ}}_u(\gamma) = {\op{CZ}}_{u'}(\gamma)$. \end{lemma} \begin{proof} Let $S$ denote the sphere in $N = M \setminus X$ obtained by gluing $u$ to $u'$ with its opposite orientation, and let $[S] \in H_2(N)$ denote the corresponding homology class. Note that the homological intersection number $[S] \cdot [D_i]$ vanishes for $i = 1,\dots,k$. Then by \S\ref{subsubsec:CZ} we have \begin{align*} {\op{CZ}}_{u}(\gamma) - {\op{CZ}}_{u'}(\gamma) = 2 c_1(M) \cdot [S] = 2\sum_{i=1}^k h_i [D_i] \cdot[S] = 0. \end{align*} \end{proof} By default, we will compute the Conley--Zehnder index of the Reeb orbit $\gamma_{\vec{v}}^A$ via a small spanning disk $u$ as in Lemma~\ref{lem:cz_well_def} which satisfies $[u]\cdot [D_i] = v_i$ for $i = 1,\dots,k$. We denote this trivialization of $TM$ along the Reeb orbits $\gamma_{\vec{v}}^A$ by $\tau_0$, and we denote the corresponding Conley--Zehnder index by ${\op{CZ}}_{\tau_0}$. \begin{prop}\label{prop:compl_cz}\cite[\S2]{ganatra2020symplectic} For each $\vec{v} \in \mathbb{Z}^k_{\geq 0} \setminus \{\vec{0}\}$ with $D_{\vec{v}} \neq \varnothing$, we have
$${\op{CZ}}_{\tau_0}(\gamma_{\vec{v}}^A) = n-1 - |A| - 2\sum_{i=1}^k v_i.$$ \end{prop}
{\noindent} Here $|A|$ denotes the Morse index of $A \in \op{crit}(f_{\vec{v}})$. Putting $\delta = \delta(\gamma_{\vec{v}}^A) := n-1 - |A|$, we have alternatively ${\op{CZ}}_{\tau_0}(\gamma_{\vec{v}}^A) = \delta - 2\vec{v}\cdot \vec{1}$ with $\delta \leq n-1$. Here we use the shorthand $\vec{1} := \underbrace{(1,\dots,1)}_{k}$.
\begin{remark}\label{rmk:gradings_antican}
Consider the case that $D$ is anticanonical, with irreducible components $D_1,\dots,D_k$, and let $\kappa$ be a meromorphic section of the canonical bundle of $M$ which is nonvanishing away from $D$. For $i = 1,\dots,k$, let $a_i$ denote the order of vanishing (possibly negative) of $\kappa$ along $D_i$. In this case we can alternatively compute Conley--Zehnder indices with respect to the holomorphic volume form $\kappa|_{X}$, and we have
$${\op{CZ}}_{\kappa}(\gamma^A_{\vec{v}}) = n - 1 - |A| - 2\sum_{i=1}^k v_i(a_i+1).$$ \end{remark}
\begin{example}As a simple example, let us specialize to the case of a four-dimensional hyperplane complement $X_k^4$. For $k \in \mathbb{Z}_{\geq 0}$, let $\Sigma_k$ denote the two-sphere with $k$ punctures. Then for any nonzero $\vec{v} = (v_1,\dots,v_k) \in \mathbb{Z}_{\geq 0}^k$ we have: \begin{itemize} \item when there is exactly one nonzero component of $\vec{v}$, $\mathring{S}_{\vec{v}}$ is diffeomorphic to $\Sigma_{k-1} \times S^1$ and $\mathring{S}_{\vec{v}}/S^1$ is diffeomorphic to $\Sigma_{k-1}$; \item when there are exactly two nonzero components of $\vec{v}$, $\mathring{S}_{\vec{v}}$ is diffeomorphic to $\mathbb{T}^2$ and $\mathring{S}_{\vec{v}}/S^1$ is diffeomorphic to $S^1$; \item if three or more components of $\vec{v}$ are nonzero, then $D_{\vec{v}} = \varnothing$. \end{itemize}
Now choose exhausting Morse functions $f_{\vec{v}}: \mathring{S}_{\vec{v}}/S^1 \rightarrow \mathbb{R}$ as in Theorem~\ref{thm:div_compl1}, which in this example we can assume are perfect, so that the critical points give rise to a distinguished basis for $H^*(\mathring{S}_{\vec{v}}/S^1)$. The Reeb orbits of $\partial X_k^4$ (of period less than $\mathcal{C}$) are then given explicitly as follows: \begin{enumerate} \item For each $\vec{v}$ with exactly one nonzero component, we have the Reeb orbit $\gamma_{\vec{v}}^A$ with ${\op{CZ}}_{\tau_0}(\gamma_{\vec{v}}^A) = 1 - \sum_{i=1}^k v_i$, corresponding to the unique basis element of $H^0(\Sigma_{k-1})$. \item For each $\vec{v}$ with exactly one nonzero component, we have the Reeb orbits $\gamma^A_{\vec{v}}$ with ${\op{CZ}}_{\tau_0}(\gamma^A_{\vec{v}}) = - \sum_{i=1}^k v_i$, corresponding to the $k-2$ basis elements of $H^1(\Sigma_{k-1})$. \item For each $\vec{v}$ with exactly two nonzero components, we have the Reeb orbit $\gamma^A_{\vec{v}}$ with ${\op{CZ}}_{\tau_0}(\gamma^A_{\vec{v}}) = 1 - \sum_{i=1}^k v_i$, corresponding to the unique basis element of $H^0(S^1)$. \item For each $\vec{v}$ with exactly two nonzero components, we have the Reeb orbit $\gamma^A_{\vec{v}}$ with ${\op{CZ}}_{\tau_0}(\gamma^A_{\vec{v}}) = - \sum_{i=1}^k v_i$, corresponding to the unique basis element of $H^1(S^1)$. \end{enumerate} \end{example}
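The bookkeeping in the above example is mechanical enough to automate. The following sketch (ours; the function name and the cutoff convention $|\vec{v}| := \sum_i v_i \leq N$ are our own choices, standing in for the action cutoff $\mathcal{C}$) enumerates the orbits $\gamma^A_{\vec{v}}$ together with their Conley--Zehnder indices, following the four cases just listed:

```python
from itertools import product

def reeb_orbits(k, N):
    """Enumerate the Reeb orbits of the boundary of the 4-dimensional
    hyperplane complement X_k^4 with |v| = sum(v) <= N, following the
    case analysis in the example (k >= 3 assumed).
    Returns a list of (v, cz) pairs, with multiplicity."""
    orbits = []
    for v in product(range(N + 1), repeat=k):
        s = sum(v)
        nz = sum(1 for vi in v if vi > 0)
        if s == 0 or s > N:
            continue
        if nz == 1:
            # one orbit from H^0(Sigma_{k-1}), with CZ = 1 - |v| ...
            orbits.append((v, 1 - s))
            # ... and k-2 orbits from H^1(Sigma_{k-1}), with CZ = -|v|
            orbits.extend((v, -s) for _ in range(k - 2))
        elif nz == 2:
            # one orbit each from H^0(S^1) and H^1(S^1)
            orbits.append((v, 1 - s))
            orbits.append((v, -s))
        # three or more nonzero components: D_v is empty, no orbits
    return orbits
```

For instance, with $k = 3$ and $N = 1$ this produces two orbits for each of the three unit tuples, with Conley--Zehnder indices $0$ and $-1$.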
\begin{example}\label{ex:surfaces}
For $g,k \in \mathbb{Z}_{\geq 0}$, let $\Sigma_{g,k}$ denote a compact surface of genus $g$ with $k$ boundary components. Then $\Sigma_{g,k}$ admits a unique Liouville structure up to Liouville deformation equivalence. Indeed, it is easy to produce such a structure by attaching Weinstein one-handles to the two-ball, and if ${\lambda}_0$ and ${\lambda}_1$ are Liouville one-forms on $\Sigma_{g,k}$ which induce the same orientation then they can be joined by the Liouville homotopy ${\lambda}_t := (1-t){\lambda}_0 + t{\lambda}_1$, $t \in [0,1]$.
Moreover, if $X$ and $X'$ are two-dimensional Liouville domains with a Liouville embedding $X \overset{L}\hookrightarrow X'$, then by Stokes' theorem each component of $X' \setminus X$ must contain at least one component of $\partial X'$, and in fact this condition also suffices to guarantee the existence of a Liouville embedding $X \overset{L}\hookrightarrow X'$. It follows that there is a Liouville embedding $\Sigma_{g,k} \overset{L}\hookrightarrow \Sigma_{g',k'}$ if and only if we have $g \leq g'$ and $k-k' \leq g'-g$. \end{example}
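The embedding criterion in the last sentence is a purely numerical condition, so it can be checked mechanically. Here is a small sketch (the function name is ours) implementing it:

```python
def liouville_embeds(g, k, g2, k2):
    """Decide whether there is a Liouville embedding of Sigma_{g,k}
    into Sigma_{g2,k2}, using the criterion stated above:
    g <= g2 and k - k2 <= g2 - g."""
    return g <= g2 and k - k2 <= g2 - g

# The disk Sigma_{0,1} Liouville embeds into the annulus Sigma_{0,2},
# but not conversely:
assert liouville_embeds(0, 1, 0, 2)
assert not liouville_embeds(0, 2, 0, 1)
```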
\section{A family of invariants}\label{sec:invariants}
In this section we describe our general family of symplectic invariants which obstruct Liouville embeddings. Firstly, in \S\ref{subsec:obs_cyls} we elaborate on the $S^1$-equivariant analogue of Observation~\ref{obs:vanishingSH} and the resulting Liouville embedding obstruction $\mathbb{F}$, which has appeared in the literature in various forms. In \S\ref{subsec:obs_higher}, we vastly generalize this by incorporating $\mathcal{L}_\infty$ structures, and we encode this data as the invariant $\mathbb{I}^{\leq l}$.
Finally, in \S\ref{subsec:obs_simp} we introduce the simplified invariant $\mathbb{G}\Langle\mathcal{T}^m p\Rangle$, whose computation is more tractable and will suffice for our main applications.
\subsection{Obstructions from cylinders}\label{subsec:obs_cyls}
We seek to generalize the binary phenomenon of vanishing symplectic cohomology as in Observation~\ref{obs:vanishingSH}. For simplicity, we work throughout over $\mathbb{K} = \mathbb{Q}$. Let $X$ be a Liouville domain, and let $e \in \op{SH}(X)$ denote the unit in its symplectic cohomology ring with its pair of pants product. We will also use $e$ to denote the unit in the ordinary cohomology ring $H^*(X)$. Let $$ \dots \rightarrow H^*(X) \rightarrow \op{SH}^*(X) \rightarrow \op{SH}^*_+(X) \xrightarrow{\delta} H^{*+1}(X) \rightarrow \dots$$ denote the long exact sequence coming from splitting off the generators of low action. Note that symplectic cohomology is generally graded only by $\mathbb{Z}/2$, although this can be upgraded to a $\mathbb{Z}$ grading if $c_1(X) = 0$, and by default we grade Hamiltonian orbits by $n - {\op{CZ}}$ (as usual, $n$ denotes half the real dimension of $X$). Since $\op{SH}(X)$ is a unital $\mathbb{K}$-algebra, it vanishes if and only if we have $e = 0 \in \op{SH}(X)$, or equivalently if $e \in H^0(X)$ lies in the image of $\delta$.
Passing to $S^1$-equivariant symplectic cohomology produces more information as follows. We refer the reader to e.g. \cite{Bourgeois-Oancea_equivariant,Seidel_biased_view,ganatra2019cyclic, Gutt-Hu} for more on the technical setup and structural properties of $\op{SH}_{S^1}$. By slight abuse of notation, let $e$ also denote the image of $e \in \op{SH}(X)$ under the ``erase'' map $\op{SH}^*(X) \rightarrow \op{SH}^*_{S^1}(X)$. It should be emphasized that $\op{SH}_{S^1}(X)$ does not have a product, although it does have a Lie bracket of degree $-2$, which corresponds to the ``string bracket'' in the case that $X$ is a cotangent bundle. In particular, the vanishing of $e \in \op{SH}_{S^1}(X)$ does not necessarily imply that $\op{SH}_{S^1}(X)$ itself vanishes.
We write $\mathbb{K}[u^{-1}]$ as shorthand for the $\mathbb{K}[u]$-module $\mathbb{K}[u,u^{-1}] / (u\mathbb{K}[u])$, where $u$ has degree $2$. From the algebraic point of view (c.f. \cite[\S 2]{ganatra2019cyclic}), $\op{SC}(X)$ is endowed with the structure of an $S^1$-complex, i.e. we have a sequence of operations $\delta^i: \op{SC}^*(X) \rightarrow \op{SC}^{*+1-2i}(X)$ for $i \in \mathbb{Z}_{\geq 0}$, with $\delta^0$ the differential and $\delta^1$ descending to the BV operator, such that we have $\sum\limits_{i+j = k}\delta^i \circ \delta^j = 0$ for all $k \in \mathbb{Z}_{\geq 0}$. Then $\op{SH}_{S^1}(X)$ is the homology of the positive cyclic chain complex $(\op{SC}_{S^1}(X),\partial_{S^1})$ with
$\op{SC}_{S^1}(X) := \op{SC}(X) \otimes \mathbb{K}[u^{-1}]$ and $\partial_{S^1} := \sum\limits_{i=0}^{\infty} u^i \delta^i$.
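To make the algebra explicit (a routine check, recorded here for the reader's convenience): since each $\delta^i$ commutes with the action of $u$, the $S^1$-complex relations are precisely equivalent to $\partial_{S^1}$ squaring to zero:

```latex
\[
  \partial_{S^1}^2
  \;=\; \sum_{i,j \geq 0} u^{i+j}\, \delta^i \circ \delta^j
  \;=\; \sum_{k \geq 0} u^k \sum_{i+j=k} \delta^i \circ \delta^j
  \;=\; 0.
\]
% In particular, the k = 0 and k = 1 relations read
%   (delta^0)^2 = 0   and   delta^0 delta^1 + delta^1 delta^0 = 0,
% recovering that delta^0 is a differential and that delta^1
% descends to the BV operator on homology.
```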
Similar to the nonequivariant case, we have the connecting map $$\delta_{S^1}: \op{SH}^*_{S^1,+}(X) \rightarrow H^{*+1}_{S^1}(X),$$ which is a map of $\mathbb{K}[u]$-modules. Here $H^{*}_{S^1}(X)$ is canonically identified with $H^{*}(X) \otimes \mathbb{K}[u^{-1}]$. Let $P_0: H^*_{S^1}(X) \cong H^*(X) \otimes \mathbb{K}[u^{-1}]\rightarrow \mathbb{K}[u^{-1}]\langle e \rangle$ be the map induced by the projection $H^*(X) \rightarrow H^0(X) = \mathbb{K}\langle e\rangle$ to degree zero (we assume that $X$ is connected, so that $H^0(X)$ is one-dimensional and generated by $e$). We also use $e$ to denote the image of the unit under the natural map $H^*(X) \rightarrow H^*_{S^1}(X)$.
\begin{definition} Let $\mathbb{F}(X) \in \mathbb{Z}_{\geq 0} \cup \{\infty\}$ be the smallest $k$ such that $u^{-k}e$ does not lie in the image of $P_0 \circ \delta_{S^1}: \op{SH}^*_{S^1,+}(X) \rightarrow \mathbb{K}[u^{-1}]\langle e \rangle$. If no such $k$ exists, we put $\mathbb{F}(X) = \infty$. \end{definition} {\noindent} Note that if $u^{-k}e$ lies in the image of $P_0 \circ \delta_{S^1}$, then by $\mathbb{K}[u]$-linearity so do the elements $u^{-k+1}e,\dots,u^{-1}e,e$. Using Viterbo functoriality and standard invariance properties for symplectic cohomology, we have: \begin{thm}\label{thm:F} For a Liouville domain $X$, $\mathbb{F}(X)$ is independent of all choices and invariant under Liouville deformation equivalences. Given a Liouville embedding $X \overset{L}\hookrightarrow X'$ of Liouville domains $X,X'$, we have $\mathbb{F}(X) \geq \mathbb{F}(X')$. \end{thm}
Theorem~\ref{thm:F} is proved in \cite{Gutt-Hu} with mostly quantitative applications in mind. Indeed, the authors define a sequence of symplectic capacities $c_1^{\op{GH}}(X) \leq c_2^{\op{GH}}(X) \leq c_3^{\op{GH}}(X) \leq \dots$ valued in $\mathbb{R}_{> 0} \cup \{\infty\}$, and $\mathbb{F}(X)$ equals the number of such capacities which are finite. For example, if $X$ is a star-shaped domain in $\mathbb{C}^n$, then all of these capacities are finite and hence we have $\mathbb{F}(X) = \infty$.
A closely related notion of {\em higher dilation} is introduced in \cite{zhao2016periodic}, based on the original definition of {\em dilation} from \cite{seidel2012symplectic}. A Liouville domain $X$ admits a dilation if $\Delta(x) = e \in \op{SH}^0(X)$ for some $x \in \op{SH}^{1}(X)$, where $\Delta: \op{SH}^*(X) \rightarrow \op{SH}^{*-1}(X)$ denotes the BV operator, and it admits a higher dilation if we have $e = 0 \in \op{SH}_{S^1}(X)$. Note that $X$ admits a higher dilation if and only if $\mathbb{F}(X) > 0$. The higher dilation concept is refined in \cite{Zhou_semidilation} by declaring that $X$ has a {\em $k$-dilation} if $e \in \op{SH}_{S^1}(X)$ is killed on the $k$th page of the spectral sequence induced by the $u$-adic filtration. As explained there, if $X$ admits a $k$-dilation then it also admits a $j$-dilation for all $j > k$, and given a Liouville embedding $X \overset{L}\hookrightarrow X'$, $X$ admits a $k$-dilation if $X'$ does.
\begin{remark} The definition of $\mathbb{F}(X)$ is also formally similar to the notion of {\em algebraic torsion} of contact manifolds introduced in \cite{latschev2011algebraic}, which provides a hierarchy of symplectic fillability obstructions. Whereas the former involves only genus zero curves and applies to Liouville domains, the latter is based on higher genus symplectic field theory and applies to contact manifolds which cannot be strongly filled. \end{remark}
\subsection{Incorporating curves with several positive ends}\label{subsec:obs_higher}
The invariant $\mathbb{F}(X)$ is unfortunately not strong enough to tackle Problem~\ref{prob:hyperplane_comp} or the more general Problem~\ref{prob:hyp}. Indeed, recall that $\op{SH}_{S^1,+}^*(X)$ has a natural grading by $H_1(X)$, corresponding to the homology classes of the generating loops. Moreover, the map $\delta_{S^1}: \op{SH}_{S^1,+}^*(X) \rightarrow H^{*+1}_{S^1}(X)$ is compatible with this grading, which means that it is supported on the graded piece of the trivial class in $H_1(X)$. However, for $\vec{d} = (d_1,\dots,d_k)$ with $k \geq n+1$, none of the Reeb orbits in $\partial X_{\vec{d}}^{2n}$ with our preferred contact form are contractible in $X_{\vec{d}}^{2n}$ (see \S\ref{subsec:hypersurf_compl}). It then follows that $\delta_{S^1}$ is trivial, and hence: \begin{lemma} For $\vec{d} = (d_1,\dots,d_k) \in \mathbb{Z}_{\geq 1}^k$ with $k \geq n+1$, we have $\mathbb{F}(X_{\vec{d}}^{2n}) = 0$. \end{lemma} {\noindent} In principle one could imagine extracting more information using the product or Lie bracket on $\op{SH}$ or the Lie bracket on $\op{SH}_{S^1}$, which are based on pseudoholomorphic curves with not one but two positive ends. However, for $X_{\vec{d}}^{2n}$ with $k \geq 2n + 1$ these operations are purely ``topological'' by first homology considerations, i.e. they vanish unless one of the inputs comes from $H^*(X)$, and hence it seems unlikely that they carry any nontrivial embedding obstructions. Note that the ring structure on $\op{SH}(X_{\vec{d}}^{2n})$ in this case is isomorphic to the log cohomology ring from \cite{ganatra2020symplectic}.
To go further, we can consider the chain-level $\mathcal{L}_\infty$ structure on $\op{SC}_{S^1}(X)$, which encodes certain counts of genus zero pseudoholomorphic curves with an arbitrary number of positive ends and one negative end. By upgrading the map ${P_0 \circ \delta_{S^1}: \op{SC}_{S^1,+}(X) \rightarrow \mathbb{K}[u^{-1}]}$ to an $\mathcal{L}_\infty$ homomorphism and appealing to the bar construction framework of \cite{HSC}, we obtain a large family of symplectic invariants which behave well with respect to Liouville embeddings. Unfortunately, a complete description of this $\mathcal{L}_\infty$ algebra has not yet appeared in the literature, and its relationship to the direct geometric approach of \S\ref{sec:computationsII} is somewhat opaque.
Instead, in this paper, following \cite{HSC} we replace $\op{SC}_{S^1,+}(X)$ with its SFT counterpart $\op{CH}_{\op{lin}}(X)$, and we replace the connecting map $\delta_{S^1}$ with a map counting curves with local tangency constraints. Here $\op{CH}_{\op{lin}}(X)$ denotes {\em linearized contact chains}, the chain complex computing linearized contact homology $\op{C}\mathbb{H}_{\op{lin}}(X)$.\footnote{Adopting the notational convention of \cite{bourgeois2012effect}, $\op{CH}_{\op{lin}}$ by default denotes the chain level object, and we use the boldface $\op{C}\mathbb{H}_{\op{lin}}$ to denote its homology.} We refer the reader to \cite{EGH2000} for a structural description of linearized contact homology and to e.g. \cite{fish2018lectures, pardon2019contact,HuN, bao2015semi, ishikawa2018construction} for some technical approaches to its construction. An isomorphism between positive $S^1$-equivariant symplectic cohomology and linearized contact homology (assuming $\mathbb{K} = \mathbb{Q}$) is described in \cite{Bourgeois-Oancea_equivariant}. In the following discussion, we defer to \cite[\S3]{HSC} for a more detailed discussion of the $\mathcal{L}_\infty$ structure on $\op{CH}_{\op{lin}}$ and \cite[\S5]{HSC} for augmentations defined by local tangency constraints.
As a $\mathbb{K}$-module, $\op{CH}_{\op{lin}}(X)$ is freely spanned by the good\footnote{Recall that a Reeb orbit is {\em bad} if it is an even cover of another Reeb orbit whose Conley--Zehnder index has the opposite parity, and {\em good} otherwise. All of the Reeb orbits appearing in the main examples in this paper are good.} Reeb orbits of $\partial X$. Using the $n - {\op{CZ}}$ grading convention, the $\mathcal{L}_\infty$ operations $\ell^1,\ell^2,\ell^3,\dots$ are such that $\ell^k: \odot^k \op{CH}_{\op{lin}}(X) \rightarrow \op{CH}_{\op{lin}}(X)$ has degree $4-3k$. Here $\ell^k$ counts (possibly virtually perturbed) index one asymptotically cylindrical pseudoholomorphic curves in the symplectization $\mathbb{R} \times \partial X$ modulo target translations, anchored in $X$, with $k$ positive punctures and one negative puncture. Strictly speaking this makes $\op{CH}_{\op{lin}}(X)$ into a {\em shifted} $\mathcal{L}_\infty$ algebra, and following \cite{chscI} it is convenient to instead grade by $n - {\op{CZ}} - 3$, so that each operation $\ell^k: \odot^k\op{CH}_{\op{lin}}(X) \rightarrow \op{CH}_{\op{lin}}(X)$ has degree $+1$ (here $\odot^k$ denotes the $k$-fold graded symmetric tensor product over $\mathbb{K}$).
The {\em bar complex} $\mathcal{B}\op{CH}_{\op{lin}}(X)$ is by definition the chain complex given by the reduced symmetric tensor algebra $\overline{S}\op{CH}_{\op{lin}}(X) = \bigoplus\limits_{k=1}^\infty \odot^k \op{CH}_{\op{lin}}(X)$, equipped with the degree $+1$ bar differential $\widehat{\ell}: \overline{S}\op{CH}_{\op{lin}}(X) \rightarrow \overline{S}\op{CH}_{\op{lin}}(X)$. For $l \in \mathbb{Z}_{\geq 1}$, let $\mathcal{B}^{\leq l}\op{CH}_{\op{lin}}(X) \subset \mathcal{B}\op{CH}_{\op{lin}}(X)$ denote the subcomplex spanned by elements of tensor word length at most $l$. We also put $\mathcal{B}^{\leq \infty}\op{CH}_{\op{lin}}(X) := \mathcal{B}\op{CH}_{\op{lin}}(X)$.
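Concretely, with Koszul signs suppressed, the bar differential applies each operation $\ell^{|S|}$ to every nonempty subset $S$ of the factors of a word (a standard bar construction formula, recorded here for convenience):

```latex
\[
  \widehat{\ell}\,(x_1 \odot \cdots \odot x_k)
  \;=\; \sum_{\substack{S \subset \{1,\dots,k\} \\ S \neq \varnothing}}
  \pm\, \ell^{|S|}\Big(\textstyle\bigodot\limits_{i \in S} x_i\Big)
  \odot \bigodot\limits_{j \notin S} x_j.
\]
% Each term replaces the |S| factors indexed by S with a single
% factor, so the word length never increases; this is why the
% subspaces of words of length at most l form subcomplexes
% B^{<= l} CH_lin(X) of the bar complex.
```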
Let $\e\Langle \T^{\bullet} p \Rangle: \op{CH}_{\op{lin}}(X) \rightarrow \mathbb{K}[t]$ denote the $\mathcal{L}_\infty$ homomorphism given by counting curves in $X$ with local tangency constraints. Here $\mathbb{K}[t]$ is viewed as an abelian $\mathcal{L}_\infty$ algebra, i.e. all operations vanish identically, graded such that $t^k$ has degree $-4-2k$. More precisely, this $\mathcal{L}_\infty$ homomorphism consists of terms $\e^k\Langle \T^{\bullet} p \Rangle: \odot^k \op{CH}_{\op{lin}}(X) \rightarrow \mathbb{K}[t]$ for $k \in \mathbb{Z}_{\geq 1}$. For Reeb orbits $\gamma_1,\dots,\gamma_k$, the structure coefficient
$\langle \e^k \Langle \T^{\bullet} p\Rangle(\gamma_1,\dots,\gamma_k),t^m\rangle$
counts $k$-punctured spheres in the (symplectic completion of) $X$ with positive Reeb orbit asymptotics $\gamma_1,\dots,\gamma_k$, and having tangency order $m$ (i.e. contact order $m+1$) to a generic local divisor $D$ at a point $p \in X$. Let $\widehat{\e}\Langle \T^{\bullet} p\Rangle: \mathcal{B} \op{CH}_{\op{lin}}(X) \rightarrow \mathcal{B}\mathbb{K}[t]$ denote the induced map on bar complexes, which has degree zero with our conventions. Note that $\mathcal{B}\mathbb{K}[t]$ is simply $\overline{S}\mathbb{K}[t]$ equipped with trivial bar differential, and we identify its homology $H\mathcal{B}\mathbb{K}[t]$ with $\overline{S}\mathbb{K}[t]$.
\begin{definition}For $l \in \mathbb{Z}_{\geq 1} \cup \{\infty\}$, let $\mathbb{I}^{\leq l}(X) \subset \overline{S}\mathbb{K}[t]$ denote the image of the homology level map $H \widehat{\e}\Langle \T^{\bullet} p\Rangle: H\mathcal{B} \op{CH}_{\op{lin}}(X) \rightarrow \overline{S}\mathbb{K}[t]$ after restricting to $H \mathcal{B}^{\leq l}\op{CH}_{\op{lin}}(X)$, and put $\mathbb{I}(X) := \mathbb{I}^{\leq \infty}(X)$. \end{definition}
As in \cite{HSC}, by the functoriality package for $\op{CH}_{\op{lin}}(X)$ and $\e\Langle \T^{\bullet}\Rangle$ we have: \begin{thm}\label{thm:I} For a Liouville domain $X$ and $l \in \mathbb{Z}_{\geq 1} \cup \{\infty\}$, $\mathbb{I}^{\leq l}(X)$ is independent of all choices and invariant under Liouville deformation equivalences. Given a Liouville embedding $X \overset{L}\hookrightarrow X'$ of Liouville domains $X,X'$, we have $\mathbb{I}^{\leq l}(X') \subset \mathbb{I}^{\leq l}(X)$. \end{thm} {\noindent} Note that $l=1$ corresponds to the case of curves with only one positive end as in $\mathbb{F}(X)$. At the other extreme, $\mathbb{I}(X) = \mathbb{I}^{\leq \infty}(X)$ corresponds to the case of no restrictions on the number of positive ends.
\begin{remark} It is also natural to consider analogous invariants defined by replacing the local tangency constraint by some other geometric constraint. For example, we can consider curves with a fixed number of generic point constraints. As explained in \cite[\S5]{HSC}, this necessitates the more elaborate formalism of rational symplectic field theory. \end{remark}
\subsection{The simplified invariant $\mathbb{G}\Langle \mathcal{T}^m p\Rangle$}\label{subsec:obs_simp}
The invariant $\mathbb{I}^{\leq l}$ provides strong obstructions to the existence of Liouville embeddings $X \overset{L}\hookrightarrow X'$ between Liouville domains $X,X'$. However, its full computation requires a rather strong understanding of both the chain-level $\mathcal{L}_\infty$ algebra $\op{CH}_{\op{lin}}(X)$ and the $\mathcal{L}_\infty$ homomorphism $\e\Langle \T^{\bullet} p\Rangle: \op{CH}_{\op{lin}}(X) \rightarrow \mathbb{K}[t]$, which is quite challenging to achieve for all but the simplest examples. Also, the map $\widehat{\e}\Langle \T^{\bullet} p\Rangle$ has a geometric interpretation as counting curves with several components, but this is somewhat unintuitive.
In order to have a more easily interpretable invariant, we now introduce a simplified invariant whose computation is typically more tractable and which involves only irreducible curves. To achieve this, let $\pi_k: \overline{S}\mathbb{K}[t] \rightarrow \odot^k \mathbb{K}[t]$ denote the projection to the subspace spanned by elements of word length $k$. In particular, we have $\pi_1: \overline{S}\mathbb{K}[t] \rightarrow \mathbb{K}[t]$.
As a warmup, consider the condition that $1 \in \mathbb{K}[t]$ lies in $\pi_1(\mathbb{I}^{\leq l}(X))$. Heuristically, to first approximation this means there is a rigid curve in $X$ which passes through a generic point constraint and has at most $l$ positive ends. However, to make this more accurate we need to keep in mind that (a) in addition the asymptotic orbits must define a cycle with respect to the bar complex differential, (b) this cycle could be a linear combination of several elementary tensors, and (c) the relevant curves are possibly anchored and virtually perturbed, and they are counted algebraically with signs.
More generally, we can replace the point constraint $\Langle p \Rangle$, which corresponds to $1 \in \mathbb{K}[t]$, with a local tangency constraint $\Langle \mathcal{T}^mp\Rangle$, which corresponds to $t^m \in \mathbb{K}[t]$. \begin{definition} Let $\mathbb{G}\Langle \mathcal{T}^m p \Rangle(X) \in \mathbb{Z}_{\geq 1}\cup \{\infty\}$ denote the smallest $l$ such that $t^m$ lies in $\pi_1(\mathbb{I}^{\leq l}(X))$. If no such $l$ exists, we put $\mathbb{G}\Langle \mathcal{T}^m p \Rangle(X) = \infty$. \end{definition} {\noindent} Heuristically, to first approximation $\mathbb{G}\Langle \mathcal{T}^m p\Rangle(X)$ records the smallest number of positive ends of a rigid curve in $X$ satisfying a $\Langle\mathcal{T}^m p\Rangle$ constraint. The following is immediately extracted from Theorem~\ref{thm:I}: \begin{thm} For a Liouville domain $X$ and $m \in \mathbb{Z}_{\geq 0}$, $\mathbb{G}\Langle \mathcal{T}^m p \Rangle(X)$ is independent of all choices and invariant under Liouville deformation equivalences. Given a Liouville embedding $X \overset{L}\hookrightarrow X'$ of Liouville domains $X,X'$, we have $\mathbb{G}\Langle \mathcal{T}^m p \Rangle(X) \leq \mathbb{G}\Langle \mathcal{T}^m p \Rangle(X')$. \end{thm}
\begin{remark} A closely related invariant based on an $\mathcal{L}_\infty$ structure on $\op{SC}_{S^1}$ and defined in terms of the $u$-adic spectral sequence is mentioned in \cite{seidelslides}. \end{remark}
\section{Computations for hypersurface complements I: SFT version}\label{sec:computationsI}
The goal of this section is to prove Theorem~\ref{thm:main_G_computation}. We begin with some generalities on projective hypersurface complements in \S\ref{subsec:hypersurf_compl}. The proof then proceeds by establishing a lower bound in \S\ref{subsec:lower_bound} and an upper bound in \S\ref{subsec:upper_bound}. Finally, in \S\ref{subsec:CL}, we describe a more general framework for producing Maurer--Cartan elements.
Note that in this section we assume SFT transversality via virtual perturbations, although the arguments are independent of any specific perturbation scheme. In the next section we upgrade this to the stronger Theorem~\ref{thm:main_combinatorial} and also remove this assumption.
\subsection{Geometry of hypersurface complements in projective space}\label{subsec:hypersurf_compl}
We now specialize the discussion from \S\ref{subsec:div_compl} to the complement of a collection of generic hypersurfaces in projective space. Here by {\em generic} we mean that each hypersurface is smooth, and the collection defines a simple normal crossing divisor. Given a tuple $\vec{d} \in \mathbb{Z}_{\geq 1}^k$ with $k \in \mathbb{Z}_{\geq 1}$, we consider the corresponding Weinstein domain $X_{\vec{d}}^{2n} = \mathbb{CP}^n \setminus \mathcal{O}p(D)$, where $D_1,\dots,D_k$ is a generic collection of (smooth) hypersurfaces in $\mathbb{CP}^n$ of degrees $d_1,\dots,d_k$ respectively, and we put $D := D_1 \cup \dots \cup D_k$.
After a Weinstein homotopy, we arrange that $X_{\vec{d}}^{2n}$ has geometry as in Theorem~\ref{thm:div_compl1}. In particular, for each $\vec{v} \in \mathbb{Z}^k_{\geq 0} \setminus \{\vec{0}\}$ we choose a Morse function $f_{\vec{v}}: \mathring{S}_{\vec{v}}/S^1 \rightarrow \mathbb{R}$, which we further assume has a unique minimum. Let $\vec{e}_i$ denote the $i$th standard basis vector, i.e. $\vec{e}_i := (\underbrace{0,\dots,0}_{i-1},1,\underbrace{0,\dots,0}_{k-i})$. Then there is a unique Reeb orbit (of action less than $\mathcal{C}$) of $\partial X^{2n}_{\vec{d}}$ of the form $\gamma_{\vec{e}_i}^A$ with ${\op{CZ}}_{\tau_0}(\gamma_{\vec{e}_i}^A) = n-3$, corresponding to the minimum of $f_{\vec{e}_i}: \mathring{S}_{\vec{e}_i}/S^1 \rightarrow \mathbb{R}$. For future reference, we denote these orbits by $\beta_1,\dots,\beta_k$. Heuristically, these correspond to the fundamental classes of the open divisor strata $\mathring{D}_{\vec{e}_1},\dots,\mathring{D}_{\vec{e}_k}$.
Let $[\partial c_1],\dots,[\partial c_k]$ denote the homology classes of small loops surrounding the hypersurfaces $D_1,\dots,D_k$ as in \S\ref{subsec:div_compl}. Then $H_1(X^{2n}_{\vec{d}})$ has rank $k-1$, with $$H_1(X^{2n}_{\vec{d}}) = \mathbb{Z}\langle [\partial c_1],\dots,[\partial c_k]\rangle / (d_1[\partial c_1]+\dots + d_k[\partial c_k]) \cong \mathbb{Z}^k/(\vec{d})$$ (see e.g. \cite[Prop. 2.3]{libgober2007lectures}).
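As an illustrative sanity check (the degrees here are chosen purely for illustration), take $k = 2$ and $\vec{d} = (2,2)$, i.e. the complement of two generic quadric hypersurfaces. Then
\begin{align*}
H_1(X^{2n}_{(2,2)}) = \mathbb{Z}^2/\mathbb{Z}(2,2) \cong \mathbb{Z}\langle [\partial c_1]\rangle \oplus (\mathbb{Z}/2)\langle [\partial c_1] + [\partial c_2]\rangle,
\end{align*}
which has rank $k-1 = 1$ together with a $\mathbb{Z}/2$ torsion summand; in particular the quotient $\mathbb{Z}^k/(\vec{d})$ need not be free.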
For $k,n \in \mathbb{Z}_{\geq 1}$ and a tuple $\vec{d} = (d_1,\dots,d_k) \in \mathbb{Z}_{\geq 1}^k$, put $\gcd(\vec{d}) := \gcd(d_1,\dots,d_k)$. As explained in \S\ref{subsec:div_compl}, we have a preferred trivialization $\tau_0$ of the symplectic vector bundle $TX_{\vec{d}}^{2n}$ over each Reeb orbit $\gamma_{\vec{v}}^A$ in $\partial X_{\vec{d}}^{2n}$. Observe that a symplectic embedding $X_{\vec{d}}^{2n} \overset{S}\hookrightarrow X_{\vec{d'}}^{2n}$ pulls back $c_1(X_{\vec{d'}}^{2n})$ to $c_1(X_{\vec{d}}^{2n})$. In particular, if $X_{\vec{d'}}^{2n}$ is Calabi--Yau, i.e. $c_1(X_{\vec{d'}}^{2n}) = 0$, then the same must also be true of $X_{\vec{d}}^{2n}$. The Calabi--Yau condition for $X_{\vec{d}}^{2n}$ is equivalent to the existence of $a_1,\dots,a_k \in \mathbb{Z}$ such that $\sum_{i=1}^k a_id_i = -n-1$. More generally, we have: \begin{lemma}\label{lem:c_1} For $i \in \mathbb{Z}$, we have $ic_1(X_{\vec{d}}^{2n}) = 0 \in H_2(X_{\vec{d}}^{2n})$ if and only if $i(n+1)$ is divisible by $\gcd(\vec{d})$. \end{lemma} {\noindent} As a consequence, we obtain the following purely formal counterpart to Theorem~\ref{thm:main_combinatorial}. In the following, a codimension zero smooth embedding is an {\em almost symplectic embedding} if it preserves the homotopy class of the symplectic form as a nondegenerate two-form (or, equivalently, it preserves the homotopy class of a compatible almost complex structure). Put \begin{align*} F_n(\vec{d}) := \frac{\gcd(\vec{d})}{\gcd(\gcd(\vec{d}),n+1)}. \end{align*} \begin{cor}\label{cor:formal_obstr} Suppose there is an almost symplectic embedding of $X_{\vec{d}}^{2n}$ into $X_{\vec{d'}}^{2n}$. Then we must have that $F_n(\vec{d})$ divides $F_n(\vec{d'})$. \end{cor>
\begin{proof} Let $\iota$ be an almost symplectic embedding $X_{\vec{d}}^{2n} \hookrightarrow X_{\vec{d'}}^{2n}$. Observe that $F_n(\vec{d})$ is the smallest positive $i$ such that $\gcd(\vec{d})$ divides $i(n+1)$, or equivalently such that $ic_1(X_{\vec{d}}^{2n}) = 0$. Then we have $$F_n(\vec{d'})c_1(X_{\vec{d}}^{2n}) =\iota^*( F_n(\vec{d'})c_1(X_{\vec{d'}}^{2n})) = 0,$$ and hence $F_n(\vec{d'})$ is a multiple of $F_n(\vec{d})$. \end{proof} \begin{proof}[Proof of Lemma~\ref{lem:c_1}] Put $X := X_{\vec{d}}^{2n}$, and consider the long exact sequence \begin{align*} \dots \rightarrow H^2(\mathbb{CP}^n,X) \rightarrow H^2(\mathbb{CP}^n) \rightarrow H^2(X) \rightarrow \dots, \end{align*} which, using excision and Poincar\'e--Lefschetz duality, we can rewrite as \begin{align*} \dots \rightarrow H_{2n-2}(D) \rightarrow H^2(\mathbb{CP}^n) \rightarrow H^2(X) \rightarrow \dots . \end{align*} Using the identifications $H_{2n-2}(D) = \mathbb{Z}\langle [D_1],\dots,[D_k]\rangle$ and $H^2(\mathbb{CP}^n) = \mathbb{Z}[H]^{\vee}$, observe that the image of an element $x \in H_{2n-2}(D)$ is $(x \cdot [H]) [H]^{\vee}$. In particular, the element $[D_i] \in H_{2n-2}(D)$ gets mapped to $d_i[H]^\vee$ for $i = 1,\dots,k$, and hence the image of this map is $\mathbb{Z}\langle \gcd(\vec{d})[H]^\vee\rangle$. Since $c_1(X) \in H^2(X)$ is the image of $(n+1)[H]^{\vee} \in H^2(\mathbb{CP}^n)$, this vanishes if and only if $\gcd(\vec{d})$ divides $n+1$. More generally, for $i \in \mathbb{Z}$, $i c_1(X)$ vanishes if and only if $\gcd(\vec{d})$ divides $i(n+1)$. \end{proof} \begin{remark}
It is interesting to compare Corollary~\ref{cor:formal_obstr} with Example~\ref{ex:domain_single_comp}. Namely, consider a Liouville embedding $X^{2n}_{(d_1)} \overset{L}\hookrightarrow X^{2n}_{\vec{d'}}$. Under the assumption $\gcd(d_1,n+1) = 1$, we have $F_n((d_1)) = d_1$, and hence Corollary~\ref{cor:formal_obstr} implies that $d_1$ must divide $F_n(\vec{d'})$, and hence also $\gcd(\vec{d'})$. By contrast, in the case that $d_1$ is a multiple of $n+1$, we have $F_n((d_1)) = 1$, so Corollary~\ref{cor:formal_obstr} is vacuous, whereas Theorem~\ref{thm:main_combinatorial} implies that $d_1 | \gcd(\vec{d'})$ still holds. \end{remark}
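To make the quantity $F_n$ concrete, consider the following small cases (the degrees are chosen purely for illustration):
\begin{align*}
F_2((6)) = \frac{6}{\gcd(6,3)} = 2, \qquad F_2((4)) = \frac{4}{\gcd(4,3)} = 4, \qquad F_3((4)) = \frac{4}{\gcd(4,4)} = 1.
\end{align*}
Thus Corollary~\ref{cor:formal_obstr} obstructs an almost symplectic embedding of $X^{4}_{(4)}$ into any $X^{4}_{\vec{d'}}$ unless $4$ divides $F_2(\vec{d'})$, whereas for $X^{6}_{(4)}$, where $d_1 = n+1$, it gives no information, matching the dichotomy discussed in the preceding remark.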
Fix $\vec{d} = (d_1,\dots,d_k) \in \mathbb{Z}_{\geq 1}^k$ for some $k \in \mathbb{Z}_{\geq 1}$. As we mentioned in the introduction, the Morse--Bott Reeb dynamics for $\partial X_{\vec{d}}^{2n}$ give rise to the spectral sequence in \cite{ganatra2020symplectic, mcleanslides}, which computes the symplectic cohomology of $X_{\vec{d}}^{2n}$ and whose first page is described in terms of the ordinary cohomology of the torus bundles $\mathring{S}_{\vec{v}}$ over the open divisor strata $\mathring{D}_{\vec{v}}$. A straightforward consequence using compatibility with the grading by $H_1(X_{\vec{d}}^{2n})$ is that the spectral sequence degenerates at the first page if we have $k \geq n+1$. Combining this with \cite[Cor. 1.2]{ganatra2020symplectic}: \begin{prop} We have $\op{SH}(X_{\vec{d}}^{2n}) \neq 0$ for any coefficient ring $\mathbb{K}$, provided that either $\sum_{i=1}^k d_i \geq n+1$ or $d_i \geq 2$ for some $i \in \{1,\dots,k\}$. \end{prop} {\noindent} Note that $X_{n+1}^{2n}$ is Weinstein deformation equivalent to $D^*\mathbb{T}^n$, and in particular has nonvanishing symplectic cohomology. For $\sum_{i=1}^k d_i \geq n+1$, we have $X_{n+1}^{2n} \overset{W}\hookrightarrow X_{\vec{d}}^{2n}$ by Theorem~\ref{thm:embeddings}, and hence $\op{SH}(X_{\vec{d}}^{2n}) \neq 0$ by Observation~\ref{obs:vanishingSH}.
\begin{remark} Note that for $\sum_{i=1}^k d_i < n+1$ with $d_i \geq 2$ for some $i \in \{1,\dots,k\}$, $X_{\vec{d}}^{2n}$ is not subcritical or flexible. It would be interesting to see whether the assumption $\sum_{i=1}^{k'} d_i' \geq n+1$ in Corollary~\ref{cor:G_obstructions} could be weakened. \end{remark}
\subsection{Lower bound}\label{subsec:lower_bound}
In this subsection we prove the following lemma, which is based on index and first homology considerations: \begin{lemma}\label{lem:lb_sft} For $\vec{d} = (d_1,\dots,d_k) \in \mathbb{Z}_{\geq 1}^k$ with $\sum_{i=1}^k d_i \geq n+1$, we have $\mathbb{G}\Langle \mathcal{T}^{n-1}p\Rangle(X^{2n}_{\vec{d}}) \geq \sum_{i=1}^k d_i$. \end{lemma}
\begin{proof}[Proof of Lemma~\ref{lem:lb_sft}] Let $\e^k \Langle \T^{\bullet} p \Rangle: \odot^k\op{CH}_{\op{lin}}(X_{\vec{d}}^{2n}) \rightarrow \mathbb{K}[t]$ for $k \in \mathbb{Z}_{\geq 1}$ denote the maps constituting the $\mathcal{L}_\infty$ homomorphism $\e \Langle \T^{\bullet} p \Rangle: \op{CH}_{\op{lin}}(X_{\vec{d}}^{2n}) \rightarrow \mathbb{K}[t]$, i.e. the coefficient of $t^m$ in $\e^k \Langle \T^{\bullet} p \Rangle$ counts curves with $k$ positive ends and a $\Langle \mathcal{T}^m p\Rangle$ local tangency constraint. Let $\e\Langle \mathcal{T}^m p\Rangle: \op{CH}_{\op{lin}}(X_{\vec{d}}^{2n}) \rightarrow \mathbb{K}$ denote the $\mathcal{L}_\infty$ augmentation whose constituent maps $\e^k\Langle \mathcal{T}^m p\Rangle: \odot^k\op{CH}_{\op{lin}}(X_{\vec{d}}^{2n}) \rightarrow \mathbb{K}$ for $k \in \mathbb{Z}_{\geq 1}$ are given by post-composing $\e^k\Langle \T^{\bullet} p \Rangle$ with the projection to the $t^m$ component of $\mathbb{K}[t]$.
According to the definition of $\mathbb{G}\Langle \mathcal{T}^{n-1}p\Rangle(X^{2n}_{\vec{d}})$, it suffices to show that we have $\e^l\Langle \mathcal{T}^{n-1} p\Rangle(\gamma_1,\dots,\gamma_l) = 0$ for any $l < \sum_{i=1}^k d_i$ and Reeb orbits $\gamma_1,\dots,\gamma_l$ in $\partial X_{\vec{d}}^{2n}$. Since $\e^l\Langle \mathcal{T}^{n-1}p\Rangle$ counts index zero curves, this follows immediately from the next lemma. \end{proof}
\begin{lemma}\label{lem:too_few_ends} Assume $\sum_{i=1}^k d_i \geq n+1$. Let $u$ be a formal curve in $X_{\vec{d}}^{2n}$ with $l < \sum_{i=1}^kd_i$ positive ends and satisfying the constraint $\Langle \mathcal{T}^{n-1}p\Rangle$. Then we have $\op{ind}(u) < 0$. \end{lemma} \begin{proof} Let $u$ be a formal curve in $X_{\vec{d}}^{2n}$ as in the statement of the lemma, with $l$ positive ends, which we take to be of the form $\gamma_{\vec{v}_1}^{A_1},\dots, \gamma_{\vec{v}_l}^{A_l}$ for some vectors $\vec{v}_1,\dots,\vec{v}_l \in \mathbb{Z}_{\geq 0}^k$ and critical points $A_i \in \op{crit}(f_{\vec{v}_i})$ for $i = 1,\dots,l$. Note that we must have $\sum_{i=1}^l [\gamma_{\vec{v}_i}^{A_i}] = 0 \in H_1(X_{\vec{d}}^{2n})$, i.e. $\sum_{i=1}^l \vec{v}_i = q\vec{d}$ for some $q \in \mathbb{Z}_{\geq 1}$.
Let $\tau_0$ denote the framing of Reeb orbits in $\partial X_{\vec{d}}^{2n}$ which extends over small spanning disks as in \S\ref{subsec:div_compl}, and let $c_1^{\tau_0}(u)$ denote the relative first Chern number of $u$ with respect to $\tau_0$. By capping off each asymptotic Reeb orbit of $u$ with its small spanning disk, we obtain a formal two-sphere $S$ in $\mathbb{CP}^n$ of degree $q$. Since the trivialization $\tau_0$ extends over each of the small bounding disks, we have $$c_1^{\tau_0}(u) = c_1(S) = q(n+1).$$
By the discussion in \S\ref{subsec:div_compl}, for each $1 \leq i \leq l$ we have ${\op{CZ}}_{\tau_0}(\gamma_{\vec{v}_i}^{A_i}) = \delta_i - 2\vec{v}_i\cdot \vec{1}$ for some $\delta_i \leq n-1$. Noting that the constraint $\Langle \mathcal{T}^{n-1} p\Rangle$ is codimension $4n-4$, we then have \begin{align*} \op{ind}(u) &= (n-3)(2-l) + \sum_{i=1}^l {\op{CZ}}_{\tau_0}(\gamma_{\vec{v}_i}^{A_i}) + 2c_1^{\tau_0}(u) - (4n-4)\\ &= (n-3)(2-l) + \sum_{i=1}^l\left( \delta_i - 2\vec{v}_i \cdot\vec{1}\right) +2q(n+1)- 4n + 4\\ &\leq (n-3)(2-l) + (n-1)l - 2\left(\sum_{i=1}^l \vec{v}_i \right)\cdot\vec{1} + 2q(n+1)- 4n + 4\\ &= (n-3)(2-l) + (n-1)l - 2q\left(\vec{d} \cdot \vec{1} - n - 1\right) - 4n + 4\\ &\leq (n-3)(2-l) + (n-1)l - 2\left(\sum_{i=1}^kd_i - n - 1\right) - 4n+4\\ &= 2l - 2\sum_{i=1}^k d_i\\ &< 0. \end{align*} \end{proof}
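As a concrete instance of this estimate (with values chosen purely for illustration), take $n = 2$ and $\vec{d} = (3)$, so that $X^4_{(3)}$ is the complement of a smooth cubic curve in $\mathbb{CP}^2$ and $\sum_{i=1}^k d_i = 3 = n+1$. For a formal curve $u$ with $l = 2$ positive ends satisfying the constraint $\Langle \mathcal{T}^{1}p\Rangle$, the final bound above reads
\begin{align*}
\op{ind}(u) \leq 2l - 2\sum_{i=1}^k d_i = 4 - 6 = -2 < 0,
\end{align*}
and similarly for $l = 1$, so that Lemma~\ref{lem:lb_sft} gives $\mathbb{G}\Langle \mathcal{T}^{1}p\Rangle(X^{4}_{(3)}) \geq 3$.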
\subsection{Upper bound}\label{subsec:upper_bound} We prove the following lemma, which together with Lemma~\ref{lem:lb_sft} proves Theorem~\ref{thm:main_G_computation}:
\begin{lemma}\label{lem:ub} For any $\vec{d} = (d_1,\dots,d_k) \in \mathbb{Z}_{\geq 1}^k$, we have $\mathbb{G}\Langle \mathcal{T}^{n-1}p\Rangle(X^{2n}_{\vec{d}}) \leq \sum_{i=1}^k d_i$. \end{lemma} {\noindent} The basic idea for getting an upper bound is as follows. Starting with the moduli space $\mathcal{M}_{\mathbb{CP}^n,[L]}\Langle \mathcal{T}^{n-1} p\Rangle$ of degree one curves in $\mathbb{CP}^n$ satisfying a $\Langle \mathcal{T}^{n-1}p\Rangle$ local tangency constraint, we stretch the neck along the boundary of $X_{\vec{d}}^{2n}$, keeping the constraint in the interior (a similar approach appears in a slightly different context in \cite{CM2,tonk}). Building on the discussion in \S\ref{subsec:hypersurf_compl}, we have strong control over the geometry of the complement $\mathbb{CP}^n \setminus X_{\vec{d}}^{2n}$, which is a small neighborhood of the divisor $D$. We show that the outcome must be a nonzero count of curves in $X_{\vec{d}}^{2n}$ with precisely $\sum_{i=1}^k d_i$ positive ends, and moreover the corresponding collection of Reeb orbits must give rise to a cycle in $\mathcal{B}\op{CH}_{\op{lin}}(X_{\vec{d}}^{2n})$.
Let $N_{\vec{d}}^{2n}$ denote the closure of $\mathbb{CP}^n \setminus X_{\vec{d}}^{2n}$. We begin with a lemma characterizing certain relative homology classes in $H_2(N_{\vec{d}}^{2n},\partial N_{\vec{d}}^{2n})$. \begin{lemma}\label{lem:rel_hom_es} Consider the exact sequence \begin{align*} H_2(N_{\vec{d}}^{2n}) \xrightarrow[]{\;\;f\;\;} H_2(N_{\vec{d}}^{2n},\partial N_{\vec{d}}^{2n}) \xrightarrow[]{\;\;\delta\;\;} H_1(\partial N_{\vec{d}}^{2n}), \end{align*} and suppose that $A \in H_2(N_{\vec{d}}^{2n},\partial N_{\vec{d}}^{2n})$ lies in the kernel of $\delta$. Then there is some $q \in \mathbb{Z}$ such that we have $A \cdot [D_i] = qd_i$ for each $i = 1,\dots,k$. \end{lemma} \begin{proof} By exactness, we have $A = f(B)$ for some element $B \in H_2(N_{\vec{d}}^{2n})$. Let $\iota: H_2(N_{\vec{d}}^{2n}) \rightarrow H_2(\mathbb{CP}^n)$ denote the map induced by the inclusion $N_{\vec{d}}^{2n} \subset \mathbb{CP}^n$. Observe that we have $f(B) \cdot [D_i] = \iota(B) \cdot [D_i]$ for each $1 \leq i \leq k$. Since $H_2(\mathbb{CP}^n)$ is generated by the line class $[L]$, we have $\iota(B) = q[L]$ for some $q \in \mathbb{Z}$, and hence $A \cdot [D_i] = q[L]\cdot [D_i] = qd_i$ for each $1 \leq i \leq k$. \end{proof}
By our transversality assumptions, each limiting configuration under the aforementioned neck stretching procedure must be a two-level building, with \begin{itemize} \item top level in $N_{\vec{d}}^{2n}$ consisting of a collection of index zero curves, each with at least one negative end, \item bottom level in $X_{\vec{d}}^{2n}$ consisting of a collection of index zero curves, each with at least one positive end, \end{itemize} such that the total configuration represents a sphere in class $[L] \in H_2(\mathbb{CP}^n)$. Note that the bottom level has one ``main'' component $u$ which inherits the $\Langle \mathcal{T}^{n-1}p\Rangle$ constraint. By grouping together components sharing a paired asymptotic end, excepting the positive ends of the main component, we can view this as a building with: \begin{itemize} \item top level in $N_{\vec{d}}^{2n}$ consisting of a collection of index zero planes $C_1,\dots,C_l$, anchored in $X^{2n}_{\vec{d}}$, \item bottom level in $X_{\vec{d}}^{2n}$ consisting of just the main component $u$. \end{itemize} Note that since the anchored planes are composed of pseudoholomorphic curves, they have nonnegative energy, and by positivity of intersection the homological intersection number $[C_i] \cdot [D_j]$ is nonnegative for each $i \in \{1,\dots,l\}$ and $j \in \{1,\dots,k\}$. Put $\vec{s}_i = ([C_i] \cdot [D_1] ,\dots, [C_i]\cdot [D_k]) \in \mathbb{Z}_{\geq 0}^k$. Then, by Lemma~\ref{lem:formal_caps} below, each of the anchored planes $C_i$ is negatively asymptotic to $\beta_j$ for some $j \in \{1,\dots,k\}$, and we have $\vec{s}_i = \vec{e}_j$. Since the total configuration represents the line class in $H_2(\mathbb{CP}^n)$, we must have $\sum_{i=1}^l \vec{s}_i = \vec{d}$. It follows that we have $l = \sum_{i=1}^k d_i$, and up to reordering the positive ends of $u$ are $\underbrace{\beta_1,\dots,\beta_1}_{d_1},\dots,\underbrace{\beta_k,\dots,\beta_k}_{d_k}$.
For brevity, in the sequel we denote this list by $\beta_1^{\times d_1},\dots,\beta_k^{\times d_k}$.
\begin{lemma}\label{lem:formal_caps} Let $C$ be a formal plane in $N_{\vec{d}}^{2n}$, anchored in $X^{2n}_{\vec{d}}$, with negative asymptotic $\gamma_{\vec{v}}^A$. Assume that the homological intersection number of $C$ with each component of $D$ is nonnegative, and put $\vec{s} := ([C]\cdot [D_1],\dots,[C]\cdot [D_k]) \in \mathbb{Z}_{\geq 0}^k$. Assume also that $C$ has nonnegative symplectic area. Then we have $\op{ind}(C) \geq 0$. Moreover, if $\op{ind}(C) = 0$, for some $j \in \{1,\dots,k\}$ we must have $\vec{v} = \vec{s} = \vec{e}_j$ and $\delta(\gamma_{\vec{v}}^A) = n-1$, and hence $\gamma_{\vec{v}}^A = \beta_j$. \end{lemma} \begin{proof} For $i = 1,\dots,k$, let $c_i$ be a small disk intersecting $D_i$ once transversely and negatively and disjoint from the other components of $D$ as in \S\ref{subsec:div_compl}, and let $[c_i] \in H_2(N_{\vec{d}}^{2n},\partial N_{\vec{d}}^{2n})$ denote its relative homology class. Since $\sum_{i=1}^k v_i[c_i] - [C]$ lies in the kernel of $\delta: H_2(N_{\vec{d}}^{2n},\partial N_{\vec{d}}^{2n}) \rightarrow H_1(\partial N_{\vec{d}}^{2n})$, by Lemma~\ref{lem:rel_hom_es} we have $\vec{s} = \vec{v} + q\vec{d}$ for some $q \in \mathbb{Z}$.
Let $\rho: N_{\vec{d}}^{2n} \rightarrow [0,1]$ be a function which is $0$ near $\partial N_{\vec{d}}^{2n}$ and $1$ outside of a small neighborhood $U$ of $\partial N_{\vec{d}}^{2n}$, and let $\widetilde{\omega}$ be the two-form on $N_{\vec{d}}^{2n}$ given by $d(\rho {\lambda})$ on $U$ and $\omega$ on $N_{\vec{d}}^{2n} \setminus U$. Let $\vec{\mathbb{w}} = (\mathbb{w}_1,\dots,\mathbb{w}_k)$ denote the corresponding symplectic wrapping numbers as in \S\ref{subsec:div_compl}, i.e. $-\sum_{i=1}^k \mathbb{w}_i[D_i] \in H_{2n-2}(N_{\vec{d}}^{2n};\mathbb{R})$ is Poincar\'e--Lefschetz dual to $[\widetilde{\omega}] \in H^2(N,\partial N;\mathbb{R})$. By Theorem~\ref{thm:div_compl1}, these agree with the holomorphic wrapping numbers, so we have $\mathbb{w}_i = -d_i < 0$ for $i = 1,\dots,k$. We then have: \begin{align*}
0 \leq \int_C \omega
&= \int_C \widetilde{\omega} + \int_C(\omega - \widetilde{\omega})\\
&= \left(-\sum_{i=1}^k \mathbb{w}_i[D_i]\right) \cdot [C] - \mathcal{A}(\gamma_{\vec{v}}^A)\\
&\leq -\vec{\mathbb{w}} \cdot \vec{s} + \vec{\mathbb{w}} \cdot \vec{v} + \varepsilon\\
&= -q\vec{\mathbb{w}}\cdot \vec{d} + \varepsilon, \end{align*} where in the second line we have used the definition of symplectic wrapping numbers and in the third line we have used part (6) of Theorem~\ref{thm:div_compl1}. Since each component of $\vec{d}$ is positive, each component of $\vec{\mathbb{w}}$ is negative, and $\varepsilon$ is arbitrarily small, we must have $q \geq 0$.
Observe that we have $c_1^{\tau_0}(C) = q(n+1)$, since we can glue $C$ to the small spanning disk for $\gamma_{\vec{v}}^A$ with its opposite orientation to get a formal sphere of degree $q$. We then have \begin{align*} \op{ind}(C) &= n-3 + 2c_1^{\tau_0}(C) - {\op{CZ}}_{\tau_0}(\gamma_{\vec{v}}^A)\\ &= n-3 + 2q(n+1) - (\delta(\gamma_{\vec{v}}^A) - 2\vec{v} \cdot \vec{1})\\ &\geq -2 + 2q(n+1) + 2\vec{v} \cdot \vec{1}, \end{align*} with equality only if $\delta(\gamma_{\vec{v}}^A) = n-1$. We recall here that $\vec{v} \cdot \vec{1}$ is a shorthand for $\sum_{i=1}^k v_i$, and $\delta$ is defined in the discussion following Proposition~\ref{prop:compl_cz}. Moreover, $-2 + 2\vec{v} \cdot \vec{1}$ and $2q(n+1)$ are both nonnegative, with equality only if $\vec{v}\cdot \vec{1} = 1$ and $q = 0$, in which case we must have $\vec{v} = \vec{e}_j$ for some $j \in \{1,\dots,k\}$. Since $\beta_j$ is the unique Reeb orbit of the form $\gamma^A_{\vec{e}_j}$ satisfying $\delta(\gamma^A_{\vec{e}_j}) = n-1$, the lemma follows.
\end{proof}
Since neck stretching produces a cobordism of moduli spaces, the above discussion shows that the count of curves in $X_{\vec{d}}^{2n}$ with positive ends $\beta_1^{\times d_1},\dots,\beta_k^{\times d_k}$ and satisfying the constraint $\Langle \mathcal{T}^{n-1}p\Rangle$ is nonzero. Therefore, to complete the proof of Lemma~\ref{lem:ub}, it suffices to show that $(\odot^{d_1}\beta_1) \odot \dots \odot (\odot^{d_k}\beta_k)$ is closed with respect to the bar differential:
\begin{lemma}\label{lem:bar_cycle} The element $(\odot^{d_1}\beta_1) \odot \dots \odot (\odot^{d_k}\beta_k) \in \mathcal{B}\op{CH}_{\op{lin}}(X^{2n}_{\vec{d}})$ is a cycle with respect to the bar differential. \end{lemma} \begin{proof} Recall that the $\mathcal{L}_\infty$ operations on $\op{CH}_{\op{lin}}(X_{\vec{d}}^{2n})$ count index one curves in the symplectization $\mathbb{R} \times \partial X_{\vec{d}}^{2n}$, anchored in $X_{\vec{d}}^{2n}$, with some number $l \geq 1$ of positive ends and one negative end. Let $C$ be such an anchored curve, with top ends $\beta_{i_1},\dots,\beta_{i_l}$ for some $i_1,\dots,i_l \in \{1,\dots,k\}$ such that $\sum_{j=1}^l \vec{e}_{i_j} \leq \vec{d}$, and let $\gamma_{\vec{v}}^A$ be the bottom end. First homology considerations give $$[\gamma_{\vec{v}}^A] = [\beta_{i_1}] + \dots + [\beta_{i_l}] \in H_1(X_{\vec{d}}^{2n}),$$ and hence $\vec{v} = \vec{e}_{i_1} + \dots + \vec{e}_{i_l} + q\vec{d}$ for some $q \in \mathbb{Z}$.
By nonnegativity of energy and Stokes' theorem, we have \begin{align*} 0 \leq E(C) &= \mathcal{A}(\beta_{i_1}) + \dots + \mathcal{A}(\beta_{i_l}) - \mathcal{A}(\gamma_{\vec{v}}^A) \\&\leq -\mathbb{w} \cdot (\vec{e}_{i_1} + \dots + \vec{e}_{i_l} - \vec{v}) + (l+1)\varepsilon \\&= -\mathbb{w} \cdot (-q\vec{d}) + (l+1)\varepsilon. \end{align*} Since each component of $\mathbb{w}$ is negative, each component of $\vec{d}$ is positive, and $\varepsilon > 0$ is arbitrarily small, we must have $q \leq 0$. This implies that $q = 0$, since each component of $\vec{v}$ is nonnegative. We then have \begin{align*} \op{ind}(C) &= (n-3)(1-l) + \sum_{j=1}^l {\op{CZ}}_{\tau_0}(\beta_{i_j}) - {\op{CZ}}_{\tau_0}(\gamma_{\vec{v}}^A) + 2c_1^{\tau_0}(C)\\ &= (n-3)(1-l) + l(n-3) - (\delta(\gamma_{\vec{v}}^A) - 2 \vec{v} \cdot \vec{1})\\ &\geq (n-3)(1-l) + l(n-3) - (n-1) + 2l\\ &= 2l-2. \end{align*} Note that $2c_1^{\tau_0}(C) = 0$ since $q = 0$ (c.f. Lemma~\ref{lem:cz_well_def}). This shows that $\op{ind}(C) \geq 2$ unless $l = 1$.
The case $l=1$ corresponds to a cylinder $C$ in $\mathbb{R} \times \partial X_{\vec{d}}^{2n}$, anchored in $X_{\vec{d}}^{2n}$, with top end $\beta_{i_1}$ and bottom end $\gamma_{\vec{v}}^A$, and $\op{ind}(C) = 1$ means that we have
${\op{CZ}}(\gamma_{\vec{v}}^A) = {\op{CZ}}(\beta_{i_1}) - 1$. Note that by action and first homology considerations as above we must have $\vec{v} = \vec{e}_{i_1}$. Furthermore, we claim that $C$ cannot be anchored. Indeed, we have $E(C) \leq \varepsilon$ (c.f. the proof of Lemma~\ref{lem:formal_caps}), whereas any curve in $X_{\vec{d}}^{2n}$ would have energy at least the minimal action of a Reeb orbit in $\partial X_{\vec{d}}^{2n}$, which is in turn bounded from below by $\min\limits_{1 \leq i \leq k}-\mathbb{w}_i - \varepsilon \geq 1-\varepsilon.$
Although we cannot a priori rule out the existence of $C$ as an honest index one cylinder in $\mathbb{R} \times \partial X_{\vec{d}}^{2n}$, it suffices to show that the count of such cylinders (modulo target translations) vanishes for any fixed choice of negative asymptotic Reeb orbit $\gamma_{\vec{e}_{i_1}}^A$. To see this, consider the compactified moduli space of index one pseudoholomorphic planes in $N_{\vec{d}}^{2n}$ with negative asymptotic $\gamma_{\vec{e}_{i_1}}^A$. Its boundary consists of two-level configurations, with: \begin{itemize} \item top level consisting of an index zero plane in $N_{\vec{d}}^{2n}$ with negative asymptotic $\beta_{i_1}$, \item bottom level consisting of an index one cylinder in the symplectization $\mathbb{R} \times \partial N_{\vec{d}}^{2n}$, positively asymptotic to $\beta_{i_1}$ and negatively asymptotic to $\gamma_{\vec{e}_{i_1}}^A$. \end{itemize} We claim that the count of planes in the top level is nonzero. Since the total count of boundary configurations is zero, it then follows that the count of cylinders in the bottom level is necessarily zero, as desired.
Finally, the preceding claim follows from the neck stretching procedure described at the beginning of this subsection. Indeed, by energy considerations as above, each of the anchored planes $C_1,\dots,C_l$ is in fact unanchored. Consequently, since neck stretching induces a cobordism of moduli spaces, the count of index zero planes in $N_{\vec{d}}^{2n}$ with negative asymptotic $\beta_i$ is necessarily nonzero for each $1 \leq i \leq k$.
\end{proof}
\subsection{The Cieliebak--Latschev formalism}\label{subsec:CL} In this subsection, which is logically independent from the rest of the paper, we provide a broader perspective on the upper bound in the previous subsection based on Maurer--Cartan theory. This approach, which builds on unpublished work of Cieliebak--Latschev and is discussed also in \cite[\S 4]{HSC} from a slightly different perspective, can be used to produce bar complex cycles in greater generality. In particular, we prove: \begin{thm}\label{thm:cl_ub} Let $M^{2n}$ be a $2n$-dimensional closed symplectic manifold and $A \in H_2(M)$ a homology class such that the count $\op{GW}_{M,A}\Langle \mathcal{T}^m p \Rangle \in \mathbb{Q}$ is nonzero\footnote{We note that the invariant $\op{GW}_{M,A}\Langle \mathcal{T}^m p\Rangle$ is well-defined via classical perturbation techniques if $M$ is semipositive by \cite[Prop. 2.2.2]{McDuffSiegel_counting}, whereas the definition for general $M$ necessitates virtual perturbations.} for some $m \in \mathbb{Z}_{\geq 0}$. Let $X^{2n}$ be a $2n$-dimensional Liouville domain admitting a symplectic embedding into $M$ such that the induced map $H^{2n-2}(M) \rightarrow H^{2n-2}(N)$ is injective, where $N := \overline{M \setminus X}$ denotes the symplectic cap. Then we have $$ \mathbb{G}\Langle \mathcal{T}^m p \Rangle(X) < \infty.$$ \end{thm}
Let $X$ be a Liouville domain which is symplectically embedded into a closed symplectic manifold $M$. As before, for simplicity we work over $\mathbb{K} = \mathbb{Q}$. Let $\op{CH}_{\op{lin}}(X;\widetilde{\mathbb{K}})$ denote the $\mathcal{L}_\infty$ algebra as described in \S\ref{subsec:obs_higher}, but with the following modifications: \begin{itemize} \item as a $\mathbb{K}$-module, the generators of $\op{CH}_{\op{lin}}(X;\widetilde{\mathbb{K}})$ are pairs $(\gamma,[\Sigma])$, where $\gamma$ is a good Reeb orbit in $\partial X$ and $[\Sigma]$ is the equivalence class of a $2$-chain $\Sigma$ in $M$ with boundary $\gamma$, taken modulo boundaries of $3$-chains in $M$, \item the differential and higher $\mathcal{L}_\infty$ operations count the same curves as before, and the bounding $2$-chain $\Sigma$ of the output is given by concatenating these with the bounding $2$-chains of the inputs. \end{itemize} Here we are assuming that $\partial X$ has nondegenerate Reeb dynamics, which we can always achieve by a small perturbation. Note that, for a given Reeb orbit $\gamma$, the set of possible choices of $[\Sigma]$ is a torsor over $H_2(M)$.
Each generator $(\gamma,[\Sigma])$ of $\op{CH}_{\op{lin}}(X;\widetilde{\mathbb{K}})$ has a well-defined energy, given by integrating the symplectic form $\omega$ of $M$ over $\Sigma$. This induces a decreasing $\mathbb{R}$-filtration on $\op{CH}_{\op{lin}}(X;\widetilde{\mathbb{K}})$, and we denote the corresponding completed $\mathcal{L}_\infty$ algebra by $\wh{\op{CH}}_{\op{lin}}(X;\widetilde{\mathbb{K}})$. The bar complex $\mathcal{B}\op{CH}_{\op{lin}}(X;\widetilde{\mathbb{K}})$ also inherits a decreasing filtration by energy (i.e. the energy of an elementary tensor is the sum of the energies of its components), and we denote the corresponding completed chain complex by $\widehat{\mathcal{B}}\op{CH}_{\op{lin}}(X;\widetilde{\mathbb{K}})$.
The Cieliebak--Latschev formalism associates to the symplectic embedding $X \overset{S}\hookrightarrow M$ a Maurer--Cartan element $$\mathfrak{m} \in \wh{\op{CH}}_{\op{lin}}(X;\widetilde{\mathbb{K}}),$$ given by the (possibly infinite) count of index zero planes in the symplectic cap $N:= \overline{M \setminus X}$, anchored\footnote{More precisely, each such configuration is a two-level pseudoholomorphic building, with top level in $N$ and bottom level in $X$, such that the total configuration after formally gluing along each pair of Reeb orbits is a plane.} in $X$ (c.f. \cite[\S 4]{HSC}). Note that this sum is well-defined in $\wh{\op{CH}}_{\op{lin}}(X;\widetilde{\mathbb{K}})$, since by SFT compactness there are only finitely many configurations with energy below any given value. Since $\mathfrak{m}$ lies in the positive part of the filtration on $\wh{\op{CH}}_{\op{lin}}(X;\widetilde{\mathbb{K}})$, it has a well-defined exponential $$\exp(\mathfrak{m}) = \sum_{l=1}^\infty \frac{1}{l!}\underbrace{\mathfrak{m} \odot \dots \odot \mathfrak{m}}_l \in \widehat{\mathcal{B}}\op{CH}_{\op{lin}}(X;\widetilde{\mathbb{K}}),$$ and the Maurer--Cartan equation for $\mathfrak{m}$ is equivalent to the fact that $\exp(\mathfrak{m})$ is a cycle.
Given a pair $(\gamma,[\Sigma])$ as above, note that $[\Sigma]$ defines a well-defined element in $H_2(M,X) \cong H_2(N,\partial N)$, and this gives rise to a natural $H_2(N,\partial N)$-grading on the $\mathcal{L}_\infty$ algebra $\op{CH}_{\op{lin}}(X;\widetilde{\mathbb{K}})$, and also on its completed bar complex $\widehat{\mathcal{B}}\op{CH}_{\op{lin}}(X;\widetilde{\mathbb{K}})$. Given a homology class $A \in H_2(M)$, let $\widetilde{A} \in H_2(N,\partial N)$ denote its restriction to $N$ (i.e. we apply Poincar\'e--Lefschetz duality to the input and output of the restriction map $H^{2n-2}(M) \rightarrow H^{2n-2}(N)$), and let $\exp(\mathfrak{m})_{\widetilde{A}} \in \widehat{\mathcal{B}}\op{CH}_{\op{lin}}(X;\widetilde{\mathbb{K}})$ denote the part of $\exp(\mathfrak{m})$ lying in the graded piece corresponding to $\widetilde{A}$. Then $\exp(\mathfrak{m})_{\widetilde{A}} \in \widehat{\mathcal{B}}\op{CH}_{\op{lin}}(X;\widetilde{\mathbb{K}})$ is itself a cycle.
We claim that $\exp(\mathfrak{m})_{\widetilde{A}}$ in fact lifts to a cycle $x$ in the uncompleted bar complex $\mathcal{B}\op{CH}_{\op{lin}}(X;\widetilde{\mathbb{K}})$. Indeed, it suffices to show that there is a uniform upper bound on the energy of each summand of $\exp(\mathfrak{m})_{\widetilde{A}}$, since then the SFT compactness theorem implies that there are only finitely many such terms. To justify the claim, consider a summand of $\exp(\mathfrak{m})_{\widetilde{A}}$, which we represent as a formal curve in $N$, anchored in $X$. We denote by $C$ the resulting formal curve in $N$ after throwing away any anchors in $X$. Then $C$ represents the homology class $\widetilde{A} \in H_2(N,\partial N)$, and we denote its negative asymptotic Reeb orbits by $\gamma_1,\dots,\gamma_l$. Let ${\lambda}$ be a primitive one-form for $\omega$ defined near $\partial N$, and let $\rho: N \rightarrow [0,1]$ be a function which is $0$ near $\partial N$ and $1$ outside of a small neighborhood $U$ of $\partial N$. Let $\widetilde{\omega}$ be the two-form on $N$ given by $d(\rho {\lambda})$ on $U$ and $\omega$ on $N \setminus U$. We have \begin{align*} \int_C \omega &= \int_C \widetilde{\omega} + \int_C (\omega - \widetilde{\omega})\\ &= \int_C \widetilde{\omega} + \int_{C \cap U} d([1-\rho]{\lambda})\\ &= \int_C \widetilde{\omega} - \sum_{i=1}^l \mathcal{A}(\gamma_i). \end{align*} Note that $\int_C\widetilde{\omega}$ depends only on the classes $[\widetilde{\omega}] \in H^2(N, \partial N;\mathbb{R})$ and $[C] = \widetilde{A} \in H_2(N,\partial N;\mathbb{R})$, and since each action $\mathcal{A}(\gamma_i)$ is positive we conclude $\int_C \omega \leq [\widetilde{\omega}] \cdot [\widetilde{A}]$ as desired.
Now let $\mathbb{K}[H_2(M)]$ denote the group ring of $H_2(M)$, and let $\widehat{\mathbb{K}[H_2(M)]}$ denote its completion with respect to symplectic area. Put $\op{GW}_{M}\Langle \mathcal{T}^m p \Rangle := \sum\limits_{A \in H_2(M)} e^A \op{GW}_{M,A}\Langle \mathcal{T}^m p \Rangle \in \widehat{\mathbb{K}[H_2(M)]}$. In general, stretching the neck on curves with a $\Langle \mathcal{T}^m p \Rangle$ constraint gives a relation in $\widehat{\mathbb{K}[H_2(M)]}$ of the form \begin{align*} \pi_1\circ \widehat{\e}\Langle \mathcal{T}^m p \Rangle(\exp(\mathfrak{m})) = \op{GW}_{M}\Langle \mathcal{T}^m p \Rangle. \end{align*} Here $\pi_1 \circ \widehat{\e}\Langle \mathcal{T}^m p\Rangle: \widehat{\mathcal{B}}\op{CH}_{\op{lin}}(X) \rightarrow \widehat{\mathbb{K}[H_2(M)]}$ is the induced map counting curves with a $\Langle \mathcal{T}^m p\Rangle$ local tangency constraint as in \S\ref{subsec:obs_simp}, except that we now concatenate these curves with the input curves to define homology classes in $H_2(M)$, and we pass to completions. Since the restriction map $H_2(M) \rightarrow H_2(N,\partial N)$ is injective by assumption, $A \in H_2(M)$ is the unique class which restricts to $\widetilde{A} \in H_2(N,\partial N)$. Therefore by projecting the above relation to the graded piece corresponding to $\widetilde{A}$, we get \begin{align*} \pi_1 \circ \widehat{\e}\Langle \mathcal{T}^m p \Rangle(x) = \op{GW}_{M,A}\Langle \mathcal{T}^m p \Rangle. \end{align*} It then follows directly from the definition that we have \begin{align*} \mathbb{G}\Langle \mathcal{T}^m p\Rangle(X) < \infty, \end{align*} so this completes the proof of Theorem~\ref{thm:cl_ub}.
\begin{example} In the case of the natural inclusion $X_{\vec{d}}^{2n} \subset \mathbb{CP}^n$ for a tuple $\vec{d} = (d_1,\dots,d_k)$, we have $H_{2n-2}(N_{\vec{d}}^{2n})\cong \mathbb{K}\langle [D_1],\dots,[D_k]\rangle$, where $D_1,\dots,D_k$ represent hypersurfaces of degrees $d_1,\dots,d_k$, and the induced map $H^{2n-2}(\mathbb{CP}^n) \rightarrow H^{2n-2}(N_{\vec{d}}^{2n})$ is injective, so Theorem~\ref{thm:cl_ub} applies. In this case, by the argument in \S\ref{subsec:upper_bound}, for $A = [L]$ the line class in $H_2(\mathbb{CP}^n)$ we have that $\exp(\mathfrak{m})_{\widetilde{A}}$ is a multiple of $(\odot^{d_1}\beta_1) \odot \dots \odot (\odot^{d_k}\beta_k)$. \end{example}
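Combining Theorem~\ref{thm:cl_ub} (with $m = n-1$ and $A = [L]$) with the nonvanishing count $\op{GW}_{\mathbb{CP}^n,[L]}\Langle \mathcal{T}^{n-1}p \Rangle = (n-1)!$ from \cite[Prop. 3.4]{CM2}, we deduce in particular
\begin{align*}
\mathbb{G}\Langle \mathcal{T}^{n-1}p \Rangle(X_{\vec{d}}^{2n}) < \infty.
\end{align*}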
\section{Computations for hypersurface complements II: avoiding virtual perturbations}\label{sec:computationsII}
The main goal in this section is to prove Theorem~\ref{thm:main_combinatorial}. In \S\ref{subsec:some_lemmas}, we revisit the neck stretching argument from \S\ref{subsec:upper_bound} and analyze the possible degenerations in more detail, without recourse to virtual perturbations. Subsequently, in \S\ref{subsec:completing} we assemble these ingredients and complete the proof.
\subsection{Some lemmas}\label{subsec:some_lemmas}
In this subsection we formulate various technical results about our moduli spaces of interest, providing the main ingredients for the proof in the next subsection. Fix $\vec{d} = (d_1,\dots,d_k) \in \mathbb{Z}_{\geq 1}^k$ for some $k \in \mathbb{Z}_{\geq 1}$. Note that for this subsection we do not need to assume $\sum_{i=1}^k d_i \geq n+1$.
\begin{lemma}\label{lem:main_cpnt_nonneg_ind} Let $J$ be a generic admissible almost complex structure on the symplectic completion of $X_{\vec{d}}^{2n}$, and let $u$ be a $J$-holomorphic curve in $\widehat{X}_{\vec{d}}^{2n}$ satisfying the constraint $\Langle \mathcal{T}^{n-1}p\Rangle$. Then we have $\op{ind}(u) \geq 0$. \end{lemma} \begin{proof}
Let us take the positive Reeb orbit asymptotics of $u$ to be $\gamma_{\vec{v}_1}^{A_1},\dots,\gamma_{\vec{v}_l}^{A_l}$, and put $\delta_i := \delta(\gamma_{\vec{v}_i}^{A_i})$ for $i = 1,\dots,l$. As before, the constraint $\Langle \mathcal{T}^{n-1}p\Rangle$ is codimension $4n-4$, and the index of $u$ is given by \begin{align*} \op{ind}(u) &= (n-3)(2-l) - (4n - 4) +\sum_{i=1}^l{\op{CZ}}(\gamma_{\vec{v}_i}^{A_i}) + 2c_1^{\tau_0}(u) \\ &= (n-3)(2-l) - (4n - 4) + \sum_{i=1}^l \delta_i - 2\left( \sum_{i=1}^l \vec{v}_i \right) \cdot \vec{1} + 2c_1^{\tau_0}(u). \end{align*} Since $\sum_{i=1}^l [\gamma_{\vec{v}_i}^{A_i}] = 0 \in H_1(X_{\vec{d}}^{2n})$, we must have $\sum_{i=1}^l \vec{v}_i = q\vec{d}$ for some $q \in \mathbb{Z}_{\geq 1}$. Then the sphere in $\mathbb{CP}^n$ obtained by capping off each end of $u$ by the corresponding small spanning disk as in \S\ref{subsec:div_compl} has degree $q$, and since $\tau_0$ extends these disks we have $c_1^{\tau_0}(u) = q(n+1)$.
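To spell out the last step: writing $S$ for the degree $q$ sphere in $\mathbb{CP}^n$ obtained by this capping, the trivialization $\tau_0$ extends over the spanning disks, so the relative first Chern number of $u$ agrees with the pairing of $c_1(T\mathbb{CP}^n)$ against $[S] = q[L]$:
\begin{align*}
c_1^{\tau_0}(u) = \langle c_1(T\mathbb{CP}^n), q[L]\rangle = q(n+1).
\end{align*}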
Now suppose that $u$ is a $\kappa$-fold branched cover of its underlying simple curve $\overline{u}$, i.e. $u$ is given by the precomposition of $\overline{u}$ with an order $\kappa$ branched cover $\Phi$, which extends over the punctures to a holomorphic map $\mathbb{CP}^1 \rightarrow \mathbb{CP}^1$. Let us denote the positive Reeb orbit asymptotics of $\overline{u}$ by $\gamma_{\vec{w}_1}^{B_1},\dots,\gamma_{\vec{w}_{\overline{l}}}^{B_{\overline{l}}}$, and put $\overline{\delta}_i := \delta(\gamma_{\vec{w}_i}^{B_i})$ for $i = 1,\dots,\overline{l}$. The point in the domain of $u$ satisfying the $\Langle \mathcal{T}^{n-1}p\Rangle$ constraint is mapped by $\Phi$ to a point in the domain of $\overline{u}$ satisfying a constraint $\Langle \mathcal{T}^{m-1}p\Rangle$ for some $m \in \mathbb{Z}_{\geq 1}$. Taking into account this constraint, the index of $\overline{u}$ is given by \begin{align*} \op{ind}(\overline{u}) &= (n-3)(2-\overline{l}) - (2n + 2m - 4) + \sum_{i=1}^{\overline{l}} \overline{\delta}_i - 2\left(\sum_{i=1}^{\overline{l}}\vec{w}_i\right)\cdot \vec{1} + 2c_1^{\tau_0}(\overline{u}). \end{align*}
We define the {\em branching order} of a branched cover of the Riemann sphere at a point in the domain to be the local degree of the map at that point minus one. Let $\mathfrak{a}$ denote the branching order of $\Phi$ at the point satisfying the $\Langle\mathcal{T}^{n-1}p\Rangle$ constraint, and let $\mathfrak{b}$ be the sum of the branching orders of $\Phi$ over all of its punctures. Note that we must have $$\mathfrak{a} \leq \kappa - 1,$$ and by the Riemann--Hurwitz formula we have $$\mathfrak{a} + \mathfrak{b} \leq 2\kappa - 2.$$ Also, since $u$ satisfies the constraint $\Langle \mathcal{T}^{n-1}p\Rangle$ and since the contact order to the local divisor gets multiplied by the local degree of the cover, we must have $$ m(\mathfrak{a}+1) \geq n.$$ Furthermore, looking at the punctures we have $$ l = \kappa \overline{l} -\mathfrak{b},$$ and since $\overline{\delta}_i \leq n-1$ for each $i$, we have $$\sum_{i=1}^l \delta_i \geq \kappa\sum_{i=1}^{\overline{l}} \overline{\delta}_i - (n-1)\mathfrak{b}.$$ Lastly, observe that we have $$ \sum_{i=1}^l \vec{v}_i = \kappa \sum_{i=1}^{\overline{l}}\vec{w}_i,$$ and hence $c_1^{\tau_0}(\overline{u}) = q(n+1)/\kappa$.
We then have \begin{align*} \op{ind}(u) &\geq (n-3)(2-\kappa\overline{l}+\mathfrak{b}) - (4n - 4) + \left(\kappa\sum_{i=1}^{\overline{l}}\overline{\delta}_i - (n-1)\mathfrak{b}\right) - 2\kappa\left(\sum_{i=1}^{\overline{l}}\vec{w}_i\right) \cdot \vec{1} + 2q(n+1), \end{align*} and therefore \begin{align*} \op{ind}(u) - \kappa \op{ind}(\overline{u}) &\geq (n-3)(2-\kappa \overline{l} + \mathfrak{b}) - 4n + 4 - (n-1)\mathfrak{b} -\kappa(n-3)(2-\overline{l}) + \kappa(2n + 2m - 4)\\ &= -2n - 2 + 2\kappa - 2\mathfrak{b} + 2m\kappa\\ &\geq -2n - 2 + 2\kappa - 2(2\kappa - 2 - \mathfrak{a}) + 2m\kappa\\ &= -2n + 2 + 2\mathfrak{a} + 2\kappa(m-1)\\ &\geq -2n + 2 + 2(n/m-1) +2\kappa(m-1)\\ &= (m-1)(2\kappa - 2n/m)\\ &\geq (m-1)(2\kappa - 2[\mathfrak{a}+1])\\ & \geq 0. \end{align*} Since $\overline{u}$ is simple, it is regular for $J$ generic, and hence we have $\op{ind}(\overline{u}) \geq 0$. \end{proof}
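For the reader's convenience, the algebra behind the simplification in the second line above is as follows: the terms involving $\overline{l}$ cancel, since
\begin{align*}
(n-3)(2-\kappa \overline{l} + \mathfrak{b}) - \kappa(n-3)(2-\overline{l}) = (n-3)(2 + \mathfrak{b} - 2\kappa),
\end{align*}
and collecting the remaining terms gives
\begin{align*}
(n-3)(2 + \mathfrak{b} - 2\kappa) - 4n + 4 - (n-1)\mathfrak{b} + \kappa(2n + 2m - 4) = -2n - 2 - 2\mathfrak{b} + 2\kappa + 2m\kappa.
\end{align*}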
We now revisit the neck stretching procedure for the moduli space $\mathcal{M}_{\mathbb{CP}^n,[L]}\Langle \mathcal{T}^{n-1} p\Rangle$ along the contact type hypersurface $\partial X_{\vec{d}}^{2n} \subset \mathbb{CP}^n$ as in \S\ref{subsec:upper_bound}. Consider a generic compatible almost complex structure $J$ on $\mathbb{CP}^n$ which is cylindrical near $\partial X_{\vec{d}}^{2n}$. Let $J_X$ and $J_N$ denote the induced admissible almost complex structures on the symplectic completions of $X_{\vec{d}}^{2n}$ and $N_{\vec{d}}^{2n}$ respectively, given by restricting $J$ and then extending over the cylindrical ends. Similarly, let $J_{\mathbb{R} \times \partial X}$ denote the resulting admissible almost complex structure on the symplectization $\mathbb{R} \times \partial X_{\vec{d}}^{2n}$, given by restricting $J_X$ to $\partial X_{\vec{d}}^{2n}$ and extending $\mathbb{R}$-invariantly. We assume that $J_X,J_N,J_{\mathbb{R} \times \partial X}$ are generic, so that simple curves are regular. Now let $\{J_t\}_{t \in [0,1)}$ be the one-parameter family of almost complex structures on $\mathbb{CP}^n$ realizing the neck stretching as described in \S\ref{subsubsec:neck_stretching}. 
\begin{lemma}\label{lem:cpn_ns_result} Under the above neck stretching, each limiting configuration corresponding to $t = 1$ in the compactified moduli space $\overline{\mathcal{M}}_{\mathbb{CP}^n,[L]}^{\{J_t\}}\Langle \mathcal{T}^{n-1} p\Rangle$ is a two-level pseudoholomorphic building with \begin{itemize} \item top level consisting of $\sum_{i=1}^k d_i$ index zero regular $J_N$-holomorphic planes in $N_{\vec{d}}^{2n}$, with negative asymptotic ends $\beta_1^{\times d_1},\dots,\beta_k^{\times d_k}$ and lying in the homology classes $\underbrace{[c_1],\dots,[c_1]}_{d_1},\dots,\underbrace{[c_k],\dots,[c_k]}_{d_k} \in H_2(N_{\vec{d}}^{2n},\partial N_{\vec{d}}^{2n})$ respectively \item bottom level consisting of a single index zero regular $J_X$-holomorphic (genus zero) curve in $X_{\vec{d}}^{2n}$ with positive ends $\beta_1^{\times d_1},\dots,\beta_k^{\times d_k}$ and satisfying the constraint $\Langle \mathcal{T}^{n-1}p\Rangle$. \end{itemize} \end{lemma} \begin{proof} By the SFT compactness theorem, any limiting configuration consists of a multilevel pseudoholomorphic building, with \begin{itemize} \item top level $J_N$-holomorphic in $N_{\vec{d}}^{2n}$ \item some number (possibly zero) of levels $J_{\mathbb{R} \times \partial X}$-holomorphic in the symplectization $\mathbb{R} \times \partial X_{\vec{d}}^{2n}$ \item bottom level $J_X$-holomorphic in $X_{\vec{d}}^{2n}$ consisting of a ``main component'' $u$ satisfying the $\Langle \mathcal{T}^{n-1}p\Rangle$ constraint, along with some number (possibly zero) of additional unconstrained components. \end{itemize} Note that in principle some of the components in a given level could be joined by nodes (each of which increases the expected codimension by two), but this is easily ruled out. Namely, by formally gluing together all pairs of asymptotic ends, we obtain a possibly nodal formal sphere in $\mathbb{CP}^n$.
Since the total degree is one and each component has positive area and hence positive degree, this precludes any nodes.
Let us now formally glue together all pairs of ends except for those corresponding to the positive asymptotics of the main component $u$ to arrive at the following simplified picture: \begin{itemize} \item top level consisting of some number $l \geq 1$ of formal planes in $N_{\vec{d}}^{2n}$, anchored in $X_{\vec{d}}^{2n}$ \item bottom level consisting of a single main pseudoholomorphic component $u$ in $X_{\vec{d}}^{2n}$ with $l$ positive ends and satisfying the $\Langle \mathcal{T}^{n-1}p\Rangle$ constraint. \end{itemize} By Lemma~\ref{lem:formal_caps}, each of the formal planes has nonnegative index. Similarly, by Lemma~\ref{lem:main_cpnt_nonneg_ind}, the main component has nonnegative index. Since the total configuration has index zero, it follows that the main component and each of the formal planes $C$ must have index zero. Lemma~\ref{lem:formal_caps} then implies that each of the components in the top level has negative end $\beta_j$ for some $j \in \{1,\dots,k\}$, and has homological intersection $\delta_{ij}$ with $[D_i]$ for each $i \in \{1,\dots,k\}$.
We wish to show that each of these formal planes $C$ corresponds to an honest pseudoholomorphic curve in $N_{\vec{d}}^{2n}$. We can suppose that the configuration underlying $C$ consists of a multilevel pseudoholomorphic building with \begin{itemize} \item top level in $N_{\vec{d}}^{2n}$ \item some number (possibly zero) of levels in the symplectization $\mathbb{R} \times \partial X_{\vec{d}}^{2n}$ \item bottom level (possibly empty) in $X_{\vec{d}}^{2n}$. \end{itemize} Since the symplectic form on $N_{\vec{d}}^{2n}$ is exact away from $D$ (c.f. part (2) of Theorem~\ref{thm:div_compl1}), by Stokes' theorem each component in the top level must intersect $D$ nontrivially. Since the homological intersection number of $[C]$ with $[D] = \sum_{i=1}^k [D_i]$ is $1$, it follows by positivity of intersection that there is exactly one component in the top level. Also, as in the proof of Lemma~\ref{lem:bar_cycle}, energy considerations rule out any component in the bottom level $X_{\vec{d}}^{2n}$. Indeed, note that we have $E(C) \leq \varepsilon$, whereas any such component would have at least one positive Reeb orbit asymptotic and hence energy at least $1-\varepsilon$. It follows that the component in the top level must be a plane, and by similar energy considerations and the stability condition in the SFT compactness theorem, there are no symplectization levels, so $C$ is an honest plane in $N_{\vec{d}}^{2n}$. Evidently $C$ is simple since $\beta_j$ is a primitive Reeb orbit.
Finally, since $J_X$ is generic, regularity of the component $u$ follows from the simple Lemma~\ref{lem:main_curves_are_simple} below. \end{proof}
\begin{lemma}\label{lem:main_curves_are_simple} Let $J_X$ be any admissible almost complex structure on the symplectic completion of $X_{\vec{d}}^{2n}$, and consider $u \in \mathcal{M}^{J_X}_{X_{\vec{d}}^{2n}}(\beta_1^{\times d_1},\dots,\beta_k^{\times d_k})\Langle \mathcal{T}^{n-1}p\Rangle$. Then $u$ is simple, and hence regular if $J_X$ is generic. \end{lemma} \begin{proof} Suppose by contradiction that $u$ is a $\kappa$-fold cover of its underlying simple curve $\overline{u}$ for some $\kappa \geq 2$. Let $\gamma^{B_1}_{\vec{w}_1},\dots,\gamma^{B_{\overline{l}}}_{\vec{w}_{\overline{l}}}$ denote the positive ends of $\overline{u}$. Since we have $\sum_{i=1}^{\overline{l}}[\gamma^{B_i}_{\vec{w}_i}] = 0 \in H_1(X_{\vec{d}}^{2n})$, we must have $\sum_{i=1}^{\overline{l}} \vec{w}_i = q\vec{d}$ for some $q \in \mathbb{Z}_{\geq 1}$. However, we then must have $\kappa q\vec{d} = \sum_{i=1}^k d_i\vec{e}_i = \vec{d}$, which forces $\kappa q = 1$, contradicting $\kappa \geq 2$. \end{proof}
The following proposition is roughly the geometric analogue of Lemma~\ref{lem:bar_cycle}. Let $J_{\mathbb{R} \times \partial X}$ be a fixed generic cylindrical almost complex structure on the symplectization $\mathbb{R} \times \partial X_{\vec{d}}^{2n}$, and let $\mathcal{J}_{X}$ denote the space of all admissible almost complex structures on $\widehat{X}_{\vec{d}}^{2n}$ which agree with $J_{\mathbb{R} \times \partial X}$ on a neighborhood of the cylindrical end. \begin{prop}\label{prop:cnt_fin_and_well_def} For generic $J_X \in \mathcal{J}_X$, the signed count of (genus zero) $J_X$-holomorphic curves in $X_{\vec{d}}^{2n}$ with positive asymptotics $\beta_1^{\times d_1},\dots,\beta_k^{\times d_k}$ and satisfying the constraint $\Langle \mathcal{T}^{n-1}p\Rangle$ is finite, nonzero, and independent of $J_X$. \end{prop} \begin{remark} Since $J_{\mathbb{R} \times \partial X}$ is arbitrary, it follows that the count in Proposition~\ref{prop:cnt_fin_and_well_def} is nonzero for any choice of generic admissible almost complex structure $J_X$ on $\widehat{X}$, i.e. not necessarily with fixed behavior at infinity. With a bit more work, it is also possible to show that this count is entirely independent of this choice, but since we will not explicitly need this we omit the proof for brevity. \end{remark}
\begin{proof}[Proof of Proposition~\ref{prop:cnt_fin_and_well_def}] As before, let $J$ be a generic compatible almost complex structure on $\mathbb{CP}^n$ which agrees with $J_{\mathbb{R} \times \partial X}$ on $\mathcal{O}p(\partial X_{\vec{d}}^{2n})$ and restricts to $J_X$ and $J_N$ on $X_{\vec{d}}^{2n}$ and $N_{\vec{d}}^{2n}$ respectively, and let $\{J_t\}_{t \in [0,1)}$ be a family of almost complex structures on $\mathbb{CP}^n$ with $J_0 = J$ which realizes the neck stretching along $\partial X_{\vec{d}}^{2n}$. According to \cite[Prop. 2.2.2]{McDuffSiegel_counting}, the count $\op{GW}_{\mathbb{CP}^n,[L]}\Langle \mathcal{T}^{n-1}p\Rangle = \# \mathcal{M}^{J,s}_{\mathbb{CP}^n,[L]}\Langle \mathcal{T}^{n-1}p\Rangle$ is independent of $J$ (provided that it is generic), and in fact by \cite[Prop. 3.4]{CM2} we have $\op{GW}_{\mathbb{CP}^n,[L]}\Langle \mathcal{T}^{n-1}p\Rangle = (n-1)! \neq 0$. Now consider the compactified moduli space $\overline{\mathcal{M}}_{\mathbb{CP}^n,[L]}^{\{J_t\}}\Langle \mathcal{T}^{n-1}p\Rangle$, and let $\pi$ denote its natural projection to $[0,1]$. The fiber $\pi^{-1}(0)$ coincides with $\mathcal{M}_{\mathbb{CP}^n,[L]}^{J,s}\Langle\mathcal{T}^{n-1}p\Rangle$, whereas according to Lemma~\ref{lem:cpn_ns_result} the fiber $\pi^{-1}(1)$ consists of two-level pseudoholomorphic buildings with regular components.
In particular, by standard gluing along cylindrical ends (see e.g. \cite[Thm. 2.54]{pardon2019contact}), $\overline{\mathcal{M}}_{\mathbb{CP}^n,[L]}^{\{J_t\}}\Langle \mathcal{T}^{n-1}p\Rangle$ defines a one-dimensional oriented topological cobordism, at least after restricting the family to $[1-\delta,1)$ for $\delta > 0$ sufficiently small. Using again \cite[Prop. 2.2.2]{McDuffSiegel_counting}, the counts $\# \pi^{-1}(0)$ and $\# \pi^{-1}(1-\delta)$ coincide, and hence by counting signed boundary points we obtain the relation
\begin{align*}\label{eqn:neck_stretch}\tag{*} (n-1)! = \#\mathcal{M}^{J_X}_{X_{\vec{d}}^{2n}}(\beta_1^{\times d_1},\dots,\beta_k^{\times d_k})\Langle \mathcal{T}^{n-1}p \Rangle \cdot \prod_{i=1}^k d_i \cdot \#\mathcal{M}^{J_N}_{N_{\vec{d},[c_i]}^{2n}}(\beta_i). \end{align*} In particular, we have $\#\mathcal{M}^{J_X}_{X_{\vec{d}}^{2n}}(\beta_1^{\times d_1},\dots,\beta_k^{\times d_k})\Langle \mathcal{T}^{n-1} p\Rangle \neq 0$, as well as ${\#\mathcal{M}^{J_N}_{N_{\vec{d},[c_i]}^{2n}}(\beta_i) \neq 0}$ for $i = 1,\dots,k$, and all of these counts must be finite.
Finally, suppose that we have another generic admissible almost complex structure $J_X'$ which coincides with $J_X$ on a neighborhood of the cylindrical end of $\widehat{X}_{\vec{d}}^{2n}$. Let $J'$ be the compatible almost complex structure on $\mathbb{CP}^n$ which restricts to $J_X'$ and $J_N$ on $X_{\vec{d}}^{2n}$ and $N_{\vec{d}}^{2n}$ respectively. Since we also have $$\# \mathcal{M}^{J',s}_{\mathbb{CP}^n,[L]}\Langle \mathcal{T}^{n-1}p\Rangle = \op{GW}_{\mathbb{CP}^n,[L]}\Langle \mathcal{T}^{n-1}p\Rangle,$$ by comparing \eqref{eqn:neck_stretch} with the analogous relation using $J'$ instead of $J$, we must have $$\#\mathcal{M}^{J_X}_{X_{\vec{d}}^{2n}}(\beta_1^{\times d_1},\dots,\beta_k^{\times d_k})\Langle \mathcal{T}^{n-1}p \Rangle = \#\mathcal{M}^{J_X'}_{X_{\vec{d}}^{2n}}(\beta_1^{\times d_1},\dots,\beta_k^{\times d_k})\Langle \mathcal{T}^{n-1} p\Rangle.$$
\end{proof}
\subsection{Completing the obstructions proof}\label{subsec:completing}
We now complete the proof of Theorem~\ref{thm:main_combinatorial}. Fix $n \in \mathbb{Z}_{\geq 1}$ and tuples of positive integers $\vec{d}=(d_1,\dots,d_k) \in \mathcal{S}$ and $\vec{d'} = (d_1',\dots,d_{k'}') \in \mathcal{S}$ with $\sum_{i=1}^k d_i, \sum_{i=1}^{k'}d_i' \geq n+1$, put $X := X_{\vec{d}}^{2n}$ and $X' := X_{\vec{d'}}^{2n}$, and consider a hypothetical Liouville embedding $\iota: X \overset{L}\hookrightarrow X'$. Let ${\lambda}$ and ${\lambda}'$ denote the preferred Liouville one-forms on $X$ and $X'$ respectively provided by Theorem~\ref{thm:div_compl1}. By Lemma~\ref{lem:liouville_emb_lemmas} (b), after applying a Liouville homotopy to $X'$ we can assume that we have $\iota^*{\lambda}' = {\lambda}$. Note that since this homotopy only modifies the contact form on $\partial X'$ by a positive scaling factor, the Reeb dynamics are unaffected (except possibly for rescaling the periods of all Reeb orbits by a fixed constant), so all of our results about the geometry of $X'$ still apply.
Fix a generic admissible almost complex structure $J'$ on the symplectic completion of $X'$. After deforming $J'$, we can assume that $\iota^*(J')$ is the restriction of a generic admissible almost complex structure $J$ on the symplectic completion of $X$. In particular, $J'$ is cylindrical near $Y := \iota(\partial X)$. Now let $\{J_t'\}_{t \in [0,1)} \in \mathcal{J}_{X'}$ be a corresponding neck stretching family of admissible almost complex structures on the symplectic completion of $X'$, with $J_0' = J'$ and $J_t'$ limiting as $t \rightarrow 1$ to a broken almost complex structure on $X \circledcirc (X' \setminus \iota(X))$. We consider the compactification $\overline{\mathcal{M}}^{\{J_t'\}}_{X'}\Langle \mathcal{T}^{n-1}p\Rangle(\beta_1^{\times d_1'},\dots,\beta_{k'}^{\times d_{k'}'})$ provided by the SFT compactness theorem, and let $\pi$ denote the natural projection to $[0,1]$. \begin{lemma} In the situation above, the fiber $\pi^{-1}(1)$ is nonempty. \end{lemma} \begin{proof} Observe that the fiber $\pi^{-1}(t)$ is nonempty for any $t \in [0,1)$. Indeed, if $\pi^{-1}(t)$ were empty, then in particular the moduli space $\mathcal{M}_{X'}^{J_t'}\Langle \mathcal{T}^{n-1}p\Rangle(\beta_1^{\times d_1'},\dots,\beta_{k'}^{\times d_{k'}'})$ would be empty, and hence trivially regular, contradicting Proposition~\ref{prop:cnt_fin_and_well_def}. It follows then by compactness of $\overline{\mathcal{M}}^{\{J_t'\}}_{X'}\Langle \mathcal{T}^{n-1}p\Rangle(\beta_1^{\times d_1'},\dots,\beta_{k'}^{\times d_{k'}'})$ that $\pi^{-1}(1)$ is nonempty. \end{proof}
We now consider a configuration in $\pi^{-1}(1)$ and use its existence to read off various consequences. A priori, we have a pseudoholomorphic building with \begin{itemize}
\item some number (possibly zero) of levels in the symplectization $\mathbb{R} \times \partial X'$
\item a level in the cobordism $X' \setminus \iota(X)$
\item some number (possibly zero) of levels in the symplectization $\mathbb{R} \times \partial X$
\item bottom level in the domain $X$ consisting of one ``main'' component inheriting the constraint $\Langle \mathcal{T}^{n-1}p\Rangle$, along with some number (possibly zero) of unconstrained components. \end{itemize} By formally gluing together all pairs of ends except for those corresponding to the positive ends of the main component in the bottom level, we arrive at the following simplified picture: \begin{itemize}
\item a level in the cobordism $X' \setminus \iota(X)$ consisting of some number $l \geq 1$ of formal components, anchored in $X$
\item bottom level consisting of a main component $u$ with $l$ positive ends and satisfying the constraint $\Langle \mathcal{T}^{n-1}p\Rangle$. \end{itemize} Note that the main component is pseudoholomorphic with respect to the generic admissible almost complex structure $J$ on $X$, and so by Lemma~\ref{lem:main_cpnt_nonneg_ind} we must have $\op{ind}(u) \geq 0$. Therefore, by Lemma~\ref{lem:too_few_ends} we must have $l \geq \sum_{i=1}^k d_i$. Since each component in the cobordism level has at least one positive end, we also must have $l \leq \sum_{i=1}^{k'}d_i'$.
Let $\gamma_{\vec{x}_1}^{A_1},\dots,\gamma_{\vec{x}_l}^{A_l}$ be the positive ends of $u$. Then for $i = 1,\dots,l$, the $i$th formal component in the cobordism level has negative end $\gamma_{\vec{x}_i}^{A_i}$, and its positive ends form a nonempty subcollection of $\underbrace{\beta_1,\dots,\beta_1}_{d_1'},\dots,\underbrace{\beta_{k'},\dots,\beta_{k'}}_{d_{k'}'}$. Each of these positive ends corresponds to one of the unit basis vectors $\vec{e}_1,\dots,\vec{e}_{k'}$. Let $\vec{y}_i \in \mathbb{Z}_{\geq 0}^{k'} \setminus \{\vec{0}\}$ denote the sum of these unit basis vectors. Note that by construction we have $\sum_{i=1}^l \vec{y}_i = \vec{d'}$.
We now consider the map $\iota_*: H_1(X) \rightarrow H_1(X')$ induced by the Liouville embedding $\iota: X \hookrightarrow X'$. Under the identification from \S\ref{subsec:hypersurf_compl}, this is naturally viewed as a group homomorphism $\Phi: \mathbb{Z}^k/(\vec{d}) \rightarrow \mathbb{Z}^{k'}/(\vec{d'})$, and as such it sends $\vec{x}_i \text{ mod } (\vec{d})$ to $\vec{y}_i\text{ mod }(\vec{d'})$ for $i = 1,\dots,l$. Since $u$ provides a nulhomology of $\sum_{i=1}^l[\gamma_{\vec{x}_i}^{A_i}] \in H_1(X)$, we must have $\sum_{i=1}^l\vec{x}_i = q\vec{d}$ for some $q \in \mathbb{Z}_{\geq 1}$.
Finally, nonnegativity of the index of $u$ translates into \begin{align*} 0 &\leq (n-3)(2-l) - (4n-4) + \sum_{i=1}^l {\op{CZ}}_{\tau_0}(\gamma_{\vec{x}_i}^{A_i}) + 2c_1^{\tau_0}(u)\\ &= (n-3)(2-l) - (4n-4) + \sum_{i=1}^l \left( \delta(\gamma_{\vec{x}_i}^{A_i}) - 2\vec{x}_i \cdot \vec{1} \right) + 2q(n+1)\\ &\leq (n-3)(2-l) - (4n-4) + l(n-1) - 2q\sum_{i=1}^k d_i + 2q(n+1)\\ &= -2n + 2l - 2 - 2q(\sum_{i=1}^k d_i - n -1), \end{align*} where the third line uses the bound $\delta(\gamma_{\vec{x}_i}^{A_i}) \leq n-1$ for each $i$ together with $\sum_{i=1}^l \vec{x}_i \cdot \vec{1} = q\,\vec{d}\cdot\vec{1} = q\sum_{i=1}^k d_i$. This gives $q(\sum_{i=1}^k d_i - n - 1) \leq l - n - 1,$ which completes the proof of Theorem~\ref{thm:main_combinatorial}.
\begin{remark} In the above neck stretching argument, it is not too difficult to show that any configuration in $\pi^{-1}(t)$ with $t \in [0,1)$ consists entirely of simple components, and hence can be assumed to be regular, meaning that $\pi^{-1}([0,1))$ is a one-dimensional topological manifold with boundary. However, we did not show (or require) that the configurations in $\pi^{-1}(1)$ are transversely cut out, and multiply covered components in the cobordism $X' \setminus \iota(X)$ could in principle appear. Transversality for the whole compactification $\overline{\mathcal{M}}^{\{J_t'\}}_{X'}\Langle \mathcal{T}^{n-1}p\Rangle(\beta_1^{\times d_1'},\dots,\beta_{k'}^{\times d_{k'}'})$ should follow by adapting the Cieliebak--Mohnke framework \cite{CM1,CM2} or a more general virtual perturbation framework (c.f. Remark~\ref{rmk:virtual}), leading to a slight strengthening of Theorem~\ref{thm:main_combinatorial}. \end{remark}
\section{Constructions}\label{sec:constructions}
\subsection{Weinstein cobordisms from degenerations}
In this subsection we prove Theorem~\ref{thm:embeddings} based on the idea that in a degenerating family of divisors there is a Weinstein cobordism from the complement of the special fiber to the complement of the general fiber. We then complete the proof of Theorem~\ref{thm:main_liouville}.
\begin{proof}[Proof of Theorem~\ref{thm:embeddings}] Since Weinstein cobordisms can be concatenated, it suffices to consider the case that $\vec{d'}$ is obtained from $\vec{d}$ by either a combination move or a duplication move. In the former case, put $\vec{d} = (d_1,\dots,d_k)$ and $\vec{d'} = (d_1,\dots,d_{k-2},d_{k-1}+d_{k})$ without loss of generality, and let $D_t$, $t \in [0,1]$, be a smooth family of simple normal crossing divisors in $\mathbb{CP}^n$ such that \begin{itemize} \item for $t > 0$, $D_t$ has $k-1$ irreducible components of degrees $d_1,\dots,d_{k-2},d_{k-1}+d_k$ respectively \item $D_0$ has $k$ irreducible components of degrees $d_1,\dots,d_{k-1},d_k$ respectively. \end{itemize} Namely, the last component, which is a smooth hypersurface of degree $d_{k-1}+d_k$, degenerates into a union of two hypersurfaces of degrees $d_{k-1}$ and $d_k$. Put $\mathcal{L} := \mathcal{O}(\sum_{i=1}^k d_i)$. Correspondingly, we can find a smooth family of holomorphic sections $\sigma_t \in H^0(\mathbb{CP}^n;\mathcal{L})$, i.e. degree $\sum_{i=1}^kd_i$ homogeneous polynomials in $\mathbb{C}[X_0,\dots,X_n]$, such that $D_t = \sigma_t^{-1}(0)$ for $t \in [0,1]$. Then the existence of a Weinstein embedding of $X_{\vec{d}}^{2n}$ into $X_{\vec{d'}}^{2n}$ follows from Proposition~\ref{prop:cob_gen_proj_var} below.
Similarly, suppose now that $\vec{d'}$ differs from $\vec{d}$ by a duplication move, and put $\vec{d} = (d_1,\dots,d_k)$ and $\vec{d'} = (d_1,\dots,d_k,d_k)$ without loss of generality. Let $D_t$, $t \in [0,1]$, be a smooth family of simple normal crossing divisors in $\mathbb{CP}^n$ such that \begin{itemize} \item for $t > 0$, $D_t$ has $k+1$ irreducible components of degrees $d_1,\dots,d_k,d_k$ respectively \item $D_0$ has $k$ irreducible components of degrees $d_1,\dots,d_k$ respectively. \end{itemize} Namely, the last two components, which are smooth hypersurfaces of degree $d_k$, degenerate into a single hypersurface of degree $d_k$ (but with multiplicity two). Put $\mathcal{L} := \mathcal{O}(\sum_{i=1}^k d_i + d_k)$, and let $\sigma_t \in H^0(\mathbb{CP}^n;\mathcal{L})$ be a smooth family of holomorphic sections such that $D_t = \sigma_t^{-1}(0)$ for $t \in [0,1]$. Then again the existence of a Weinstein embedding of $X_{\vec{d}}^{2n}$ into $X_{\vec{d'}}^{2n}$ follows from Proposition~\ref{prop:cob_gen_proj_var} below.
\end{proof}
Recall from \S\ref{subsec:div_compl} that if $M$ is a smooth complex projective variety and $D \subset M$ is an ample simple normal crossing divisor, then $M \setminus \mathcal{O}p(D)$ is canonically a Weinstein domain up to Weinstein deformation equivalence.
\begin{prop}\label{prop:cob_gen_proj_var} Let $M$ be a smooth complex projective variety, let $\mathcal{L} \rightarrow M$ be an ample line bundle, and let $\sigma_t \in H^0(M;\mathcal{L})$, $t \in [0,1]$ be a smooth family of holomorphic sections such that $D_t := \sigma_t^{-1}(0)$ is a simple normal crossing divisor for each $t \in [0,1]$. Then for $t > 0$ sufficiently small, there is a Weinstein embedding of $M \setminus \mathcal{O}p(D_0)$ into $M \setminus \mathcal{O}p(D_t)$. \end{prop}
\begin{proof}
Pick a Hermitian metric $\langle -,- \rangle$ on $\mathcal{L}$, with associated norm $||-||$. For $t \in [0,1]$, put $\phi_t := -\log ||\sigma_t||$. As in \S\ref{subsec:div_compl}, the function $\phi_t: M \setminus D_t \rightarrow \mathbb{R}$ is exhausting and strictly plurisubharmonic for all $t \in [0,1]$, with critical points contained in a compact subset. After a small perturbation, we can further assume that $\phi_t$ is generalized Morse for all $t \in [0,1]$. Put $N := \phi_0^{-1}((R,\infty))$ for $R$ sufficiently large,
so that we have $D_0 \subset N$ and all of the critical points of $\phi_0$ lie in $M \setminus \overline{N}$.
Pick $\delta > 0$ sufficiently small such that $R$ is a regular value of $\phi_t$ for all $t \in [0,\delta]$.
Put $U := \phi_\delta^{-1}((S,\infty))$ for $S$ sufficiently large, so that we have $D_\delta \subset U \subset N$ and none of the critical points of $\phi_\delta$ lie in $\overline{U}$. Put $W_t := \phi_t^{-1}((-\infty,R])$ for $t \in [0,\delta]$ and $W' := \phi_\delta^{-1}((-\infty,S])$. Then $(W_0,-d^{\mathbb{C}}\phi_0|_{W_0},\phi_0|_{W_0})$ and $(W',-d^{\mathbb{C}}\phi_\delta|_{W'},\phi_{\delta}|_{W'})$ are Weinstein domains which represent the natural Weinstein structures on $M \setminus \mathcal{O}p(D_0)$ and $M \setminus \mathcal{O}p(D_\delta)$ respectively up to Weinstein deformation equivalence. Moreover, the former is Weinstein deformation equivalent to the Weinstein subdomain of the latter corresponding to $\{\phi_\delta \leq R\}$. Indeed, the family of Weinstein domains $(W_t,-d^\mathbb{C}\phi_t|_{W_t},\phi_t|_{W_t})$ for $t \in [0,\delta]$ induces the desired Weinstein deformation equivalence.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:main_liouville}] The ``if'' part is immediate from Theorem~\ref{thm:embeddings}, so it suffices to prove the ``only if'' statement, i.e. given a Liouville embedding $X_{\vec{d}}^{2n} \overset{L}\hookrightarrow X_{\vec{d'}}^{2n}$ we must have $\vec{d} \preceq \vec{d'}$. By Theorem~\ref{thm:main_combinatorial} we have \begin{align*} q \leq \dfrac{l-n-1}{\sum_{i=1}^k d_i - n -1} \leq \dfrac{\sum_{i=1}^{k'}d_i' - n - 1}{\sum_{i=1}^k d_i - n - 1} < \dfrac{2 \sum_{i=1}^k d_i - 2n - 2}{\sum_{i=1}^k d_i - n - 1} = 2, \end{align*} and hence $q = 1$. We therefore have \begin{align*} \vec{d} \cdot \vec{1} \leq l \leq \sum_{i=1}^l \vec{x}_i \cdot \vec{1} = \vec{d} \cdot \vec{1}, \end{align*} which forces $l = \vec{d} \cdot \vec{1}$ and $\vec{x}_i \cdot \vec{1} = 1$ for $i = 1,\dots,l$. Therefore, after possibly reordering, we can assume that we have \begin{align*} \vec{x}_1,\dots,\vec{x}_l = \underbrace{\vec{e}_1,\dots,\vec{e}_1}_{d_1},\dots,\underbrace{\vec{e}_k,\dots,\vec{e}_k}_{d_k}. \end{align*} Note that for two equal tuples $\vec{x}_i,\vec{x}_j$, the corresponding elements $\vec{y}_i,\vec{y}_j$ must have equal residues in $\mathbb{Z}^{k'}/(\vec{d'})$, and hence must simply be equal. We therefore have $$ d_1\vec{z}_1 + \dots + d_k\vec{z}_k = \vec{d'}$$ for some tuples $\vec{z}_1,\dots,\vec{z}_k \in \mathbb{Z}_{\geq 0}^{k'} \setminus \{\vec{0}\}$. To see that $\vec{d} \preceq \vec{d'}$, observe that we have \begin{align*} (d_1,\dots,d_k) \preceq (\underbrace{d_1,\dots,d_1}_{\vec{z}_1 \cdot \vec{1}},\dots,\underbrace{d_k,\dots,d_k}_{\vec{z}_k\cdot\vec{1}}) \preceq (d_1',\dots,d_{k'}'), \end{align*} where the first inequality comes from iteratively applying duplication moves and the second inequality comes from iteratively applying combination moves. \end{proof}
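To illustrate the combinatorial moves appearing in the last step (an illustrative example of ours, not taken from the proof): the relation $(1,1) \preceq (4)$ is realized by the chain
\begin{align*}
(1,1) \;\preceq\; (2) \;\preceq\; (2,2) \;\preceq\; (4),
\end{align*}
where the first step is a combination move merging the entries $1+1=2$, the second is a duplication move repeating the entry $2$, and the third is a combination move merging $2+2=4$.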
\subsection{Flexible constructions}
As already pointed out, the main obstructions in this paper are of an exact symplectic nature, i.e. they obstruct exact symplectic embeddings in situations where symplectic embeddings (and in particular formal symplectic embeddings) do exist. Still, since the divisor complements $X_{\vec{d}}^{2n}$ have fairly nontrivial smooth topology, it is natural to wonder what role (if any) the topology plays. It turns out that there is a fair amount of freedom to modify the diffeomorphism type without invalidating the obstructions, although first homology groups appear to play an essential role.
We begin with a definition: \begin{definition} Two Liouville domains $X,X'$ of the same dimension are {\em Liouville (resp. symplectic) embedding equivalent} if there is a Liouville (resp. symplectic) embedding of $X$ into $X'$ and of $X'$ into $X$. \end{definition} {\noindent} For instance, if $X$ and $X'$ are Liouville domains which are Liouville embedding equivalent, and if $X''$ is a third Liouville domain, then we have $X \overset{L}\hookrightarrow X''$ if and only if $X' \overset{L}\hookrightarrow X''$. In an attempt to remove all smooth topology from the discussion, it is natural to ask whether any Liouville domain $X$ is Liouville embedding equivalent to another domain which is diffeomorphic to the ball. This is easily seen to be false. Indeed, suppose that we have $X \overset{L}\hookrightarrow X'$ and $X' \overset{S}\hookrightarrow X$, and let $X''$ be any other Liouville domain such that we have $X \overset{S}\hookrightarrow X''$ and $X \not\overset{L}\hookrightarrow X''$. Then we must have $H_1(X';\mathbb{R}) \neq 0$, since otherwise we would have $X' \overset{L}\hookrightarrow X''$ (since any symplectic embedding $X' \overset{S}\hookrightarrow X''$ is automatically a Liouville embedding), and hence $X \overset{L}\hookrightarrow X''$, a contradiction.
Now suppose $X$ is a Weinstein domain, and $X'$ is obtained from $X$ by Weinstein handle attachments. Then we have $X \overset{W}\hookrightarrow X'$, so by monotonicity we have $\mathbb{G}\Langle \mathcal{T}^m p \Rangle(X) \leq \mathbb{G}\Langle \mathcal{T}^m p\Rangle(X')$ and more generally $\mathbb{I}^{\leq l}(X) \supset \mathbb{I}^{\leq l}(X')$ for any $m \in \mathbb{Z}_{\geq 0}$ and $l \in \mathbb{Z}_{\geq 1}$. In the special case that $X'$ differs from $X$ by subcritical or flexible Weinstein handle attachments, a well-known metaprinciple states that ``all'' pseudoholomorphic curve invariants of $X$ and $X'$ should coincide (see e.g. \cite{handleattachinginSH,bourgeois2012effect,murphy2018subflexible} for the case of symplectic cohomology). Accordingly, we conjecture that $\mathbb{I}^{\leq l}(X)$ and in particular $\mathbb{G}\Langle\mathcal{T}^m p \Rangle(X)$ are invariant under subcritical and flexible Weinstein handle attachments. In particular, this would give a large amount of freedom to change the smooth topology of $X$ without affecting these Liouville embedding obstructions (although one cannot kill the homology in the critical dimension $\tfrac{1}{2}\dim(X)$ by Weinstein handles, since these have index at most $\tfrac{1}{2}\dim(X)$). Note that the above discussion shows that the Liouville embedding type is {\em not} generally invariant under subcritical handle attachments, since it is possible to kill the fundamental group by adding Weinstein two-handles.
Using somewhat more geometric considerations, we have: \begin{prop}\label{cor:flex_constr} Fix $n \geq 3$ and $N \in \mathbb{Z}_{\geq 1}$. For each $k \in \mathbb{Z}_{\geq 1}$ and $\vec{d} \in \mathbb{Z}_{\geq 1}^k$ with $\sum_{i=1}^k d_i \leq N$ and such that $1$ is an entry of $\vec{d}$, there is a Weinstein domain $\widetilde{X}_{\vec{d}}^{2n}$ such that \begin{itemize} \item there is a Weinstein embedding $X_{\vec{d}}^{2n} \overset{W}\hookrightarrow \widetilde{X}_{\vec{d}}^{2n}$ and a Liouville embedding $\widetilde{X}_{\vec{d}}^{2n} \overset{L}\hookrightarrow X_{\vec{d}}^{2n}$ \item $\widetilde{X}_{\vec{d}}^{2n}$ is almost symplectomorphic to $\widetilde{X}_{\vec{d'}}^{2n}$ for any such $\vec{d},\vec{d'}$. \end{itemize} In particular, Theorem~\ref{thm:main_liouville} still holds if we replace each $X_{\vec{d}}^{2n}$ with the corresponding $\widetilde{X}_{\vec{d}}^{2n}$. \end{prop} \begin{proof} Observe that we can find $\vec{f} \in \mathcal{S}$ such that $\vec{d} \preceq \vec{f}$ for each $\vec{d} \in \mathcal{S}$ with $\sum_{i=1}^k d_i \leq N$ having $1$ as an entry. For each such $\vec{d}$, we consider the Weinstein embedding $X_{\vec{d}}^{2n} \overset{W}\hookrightarrow X_{\vec{f}}^{2n}$ provided by Theorem~\ref{thm:embeddings}. Note that this presents $X_{\vec{f}}^{2n}$ as the result after concatenating $X_{\vec{d}}^{2n}$ with a Weinstein cobordism $W$. We define $\widetilde{X}_{\vec{d}}^{2n}$ to be the Weinstein domain obtained by concatenating $X_{\vec{d}}^{2n}$ with the flexibilization $\op{Flex}(W)$ of the Weinstein cobordism $W$ (see \cite[\S11.8]{cieliebak2012stein}). Then $\widetilde{X}_{\vec{d}}^{2n}$ is almost symplectomorphic to $X_{\vec{f}}^{2n}$, and evidently there is a Weinstein embedding of $X_{\vec{d}}^{2n}$ into $\widetilde{X}_{\vec{d}}^{2n}$. 
To see that there is a Liouville embedding $\widetilde{X}_{\vec{d}}^{2n} \overset{L}\hookrightarrow X_{\vec{d}}^{2n}$, consider the result after concatenating the Weinstein embedding $X_{\vec{d}}^{2n} \overset{W}\hookrightarrow X_{\vec{f}}^{2n}$ with the symplectic embedding $X_{\vec{f}}^{2n} \rightarrow X_{\vec{d}}^{2n}$ given by adding in the missing divisor components. This can be identified with the result after attaching to $X_{\vec{d}}^{2n}$ a finite piece of the symplectization of $\partial X_{\vec{d}}^{2n}$. Then by a consequence of the Lagrangian caps h-principle \cite{eliashberg2013lagrangian}, we can extend this to a Liouville embedding $\widetilde{X}_{\vec{d}}^{2n} \overset{L}\hookrightarrow X_{\vec{d}}^{2n}$.
\end{proof}
\begin{remark} We do not know of any example of a Liouville domain which is diffeomorphic to a Weinstein domain but not symplectomorphic to any Weinstein domain. Still, Liouville embeddings between Weinstein domains are often more flexible than Weinstein embeddings. For example, for any $n \geq 3$, $\op{Flex}(D^*\mathbb{T}^n)$ admits a Liouville embedding into $\mathbb{C}^n$, but it does not admit a Weinstein embedding for (ordinary) homological reasons. More generally, if $X$ and $X'$ are Weinstein domains with $X \overset{W}\hookrightarrow X'$, then the boundary connect sum of $X$ with sufficiently many copies of $\op{Flex}(D^*\mathbb{T}^n)$ still Liouville embeds into $X'$, whereas a Weinstein embedding is impossible for homological reasons. \end{remark}
\begin{remark} If $X$ is a Weinstein domain, it is interesting to ask whether the invariant $\mathbb{I}^{\leq l}(X)$ can be computed from the wrapped Fukaya category $\mathcal{W}(X)$ of $X$. By \cite{ganatra2019cyclic} together with generation results from \cite{chantraine2017geometric,ganatra2018structural}, the ``cyclic open-closed map'' gives an isomorphism from the cyclic homology of $\mathcal{W}(X)$ to $\op{SH}_{S^1}(X)$. After taking into account additional naturality properties it should be possible to recover the invariant $\mathbb{F}(X)$ from \S\ref{subsec:obs_cyls}. However, it seems unlikely that we can recover the full invariant $\mathbb{I}^{\leq l}(X)$ solely from the $\mathcal{A}_\infty$ category $\mathcal{W}(X)$ without any additional structure. Rather, the $\mathcal{L}_\infty$ structure on $\op{SC}_{S^1}$ should be equivalent to an $\mathcal{L}_\infty$ structure on cyclic chains of $\mathcal{W}(X)$ which depends on a smooth Calabi--Yau structure on $\mathcal{W}(X)$ (see e.g. \cite{chen2020gravity} for a somewhat analogous setup), the existence of which is guaranteed by \cite{ganatra2019cyclic}. Under the cyclic open-closed map, we expect that this data is sufficient to recover the $\mathcal{L}_\infty$ homomorphism ${P_0 \circ \delta_{S^1}: \op{SC}^*_{S^1,+}(X) \rightarrow \mathbb{K}[u^{-1}]}$ (c.f. \S\ref{subsec:obs_higher}), which in turn determines an invariant which is closely analogous (and conjecturally equivalent up to an isomorphism of $\overline{S}\mathbb{K}[u^{-1}]$) to $\mathbb{I}^{\leq l}(X)$. Note that subcritical handle attachment does not change the wrapped Fukaya category (see \cite[\S1.7]{ganatra2018structural}), so this approach could potentially be used to determine the effect on $\mathbb{I}^{\leq l}(X)$ of subcritical handle attachment. \end{remark}
\section{Possible extensions}\label{sec:concl}
In this brief conclusion we sample a few interesting directions for further research.
\subsubsection*{Allowing self-crossings}
One fairly mild generalization of Problem~\ref{prob:hyp} is to let $D$ be a normal crossing divisor which is not necessarily simple, i.e. the irreducible components are allowed to have self intersections. For example, the log Calabi--Yau surfaces studied in \cite{pascaleff2019symplectic} can all be viewed as complements of degree three curves in $\mathbb{CP}^2$ which are not necessarily smooth or irreducible. Although these examples do not quite fit into the framework of Theorem~\ref{thm:div_compl1}, it seems natural to expect the theorem to generalize to this case in light of \cite{tehrani2018normal}.
More ambitiously, one could consider e.g. complements of (nongeneric) hyperplane arrangements in $\mathbb{C}^n$. In this case it is interesting to ask to what extent the invariant $\mathbb{I}^{\leq l}$ is sensitive to the intersection poset of the arrangement.
\subsubsection*{More general hypersurface singularities}
It is also natural to consider divisors with more general hypersurface singularities. For instance, complements in $\mathbb{C}^n$ of vanishing loci of Brieskorn polynomials $P(z_1,\dots,z_n) = z_1^{a_1} + \dots + z_n^{a_n} \in \mathbb{C}[z_1,\dots,z_n]$ for $a_1,\dots,a_n \in \mathbb{Z}_{\geq 2}$ play an essential role in McLean's constructions of symplectically exotic affine spaces \cite{mcleanlefschetz} (see also \cite[\S4b]{abouzaid2010altering}). Concretely, if we take $P = z_1^2 + \dots + z_{n-1}^2 + z_n^3$ for $n \geq 4$ even or $P = z_1^2 + \dots + z_{n-2}^2 + z_{n-1}^3 + z_n^5$ for $n \geq 3$ odd, then, after attaching a (subcritical) Weinstein two-handle, $\mathbb{C}^n \setminus P^{-1}(0)$ becomes diffeomorphic but not symplectomorphic to $\mathbb{C}^n$. Furthermore, by counting idempotents in the symplectic cohomology algebra over $\mathbb{Z}/(p)$ for $p$ prime, McLean shows that $V_1,V_2,V_3,\dots$ are pairwise nonsymplectomorphic, where $V$ denotes the resulting exotic affine space and $V_k$ denotes the boundary connect sum of $k$ copies of $V$. Evidently we have Weinstein embeddings $V_k \overset{W}\hookrightarrow V_{k'}$ for $k \leq k'$, and the above suggests that $V_{k'}$ might be ``larger'' than $V_k$: \begin{question} Is there a Liouville embedding $V_k \overset{L}\hookrightarrow V_{k'}$ for $k > k'$? \end{question} {\noindent} A starting point would be to compute $\mathbb{G}\Langle \mathcal{T}^mp\Rangle(\mathbb{C}^n \setminus P^{-1}(0))$ for $m \in \mathbb{Z}_{\geq 0}$, where $P(z_1,\dots,z_n)$ is a Brieskorn polynomial as above. More broadly, what values can the invariant $\mathbb{G}\Langle \mathcal{T}^m p \Rangle$ assume on symplectically exotic affine spaces in a given dimension?
\subsubsection*{Other ambient spaces} We are also interested in divisor complements in more general smooth projective varieties. In fact, recall that every smooth complex affine algebraic variety is the complement of an ample simple normal crossing divisor in a smooth projective variety as a consequence of Hironaka's resolution of singularities theorem. The neck stretching strategy in \S\ref{sec:computationsI},\S\ref{sec:computationsII} could plausibly extend to compute the invariant $\mathbb{G}\Langle\mathcal{T}^mp\Rangle$ for divisor complements in other smooth projective varieties $M$, after replacing $\op{GW}_{\mathbb{CP}^n,[L]}\Langle \mathcal{T}^{n-1}p\Rangle$ with an appropriate invariant $\op{GW}_{M,A}\Langle \mathcal{T}^m p \Rangle$ which is nonzero for some $A \in H_2(M)$ and $m \in \mathbb{Z}_{\geq 0}$. At least in the case $n = 2$, the counts $\op{GW}_{M,A}\Langle \mathcal{T}^m p\Rangle$ can be computed by the algorithm in \cite{McDuffSiegel_counting}. For example, for $M = \mathbb{CP}^1 \times \mathbb{CP}^1$ we have $\op{GW}_{\mathbb{CP}^1 \times \mathbb{CP}^1,A}\Langle \mathcal{T}^{2d}p\Rangle = 1 \neq 0$ when $A = d[L_1] + [L_2]$ (i.e. we are counting curves of bidegree $(d,1)$) for $d \in \mathbb{Z}_{\geq 0}$.
\subsubsection*{Higher genus analogues}
On the other hand, there are also examples having no apparent rational curves whatsoever. For instance, suppose that $M$ is a product of $n$ closed Riemann surfaces, each having genus at least one. In this case we have $\pi_2(M) = 0$, and hence $\op{GW}_{M,A}\Langle\mathcal{T}^{m}p\Rangle = 0$ for all $m \in \mathbb{Z}_{\geq 0}$ and all $A \in H_2(M)$. In fact, if $D \subset M$ is an ample simple normal crossing divisor and $X = M \setminus \mathcal{O}p(D)$ denotes the corresponding divisor complement, then there cannot be any nonconstant asymptotically cylindrical pseudoholomorphic curves of genus zero in $X$. In particular, the obstructions defined in this paper are vacuous in this case. It is possible that analogous invariants defined using higher genus curves could provide more refined obstructions.
As a special case, we consider the Liouville domains given by products of Riemann surfaces with boundary. Note that there are obvious symplectic embeddings given by filling in some of the boundary components. In dimension $2n=2$, Example~\ref{ex:surfaces} provides complete Liouville embedding obstructions by purely elementary considerations, but this does not extend to higher dimensions.
\subsubsection*{Cotangent bundles} Another interesting direction is to compute $\mathbb{I}^{\leq l}$ for cotangent bundles $T^*Q$ of closed smooth manifolds $Q$. This case has implications for (exact) Lagrangian embeddings. As a starting point, recall that if $Q$ admits a Riemannian metric of negative sectional curvature, then the unit sphere bundle $S^*Q$ admits a corresponding contact form all of whose Reeb orbits have Conley--Zehnder index zero. For $\dim(Q) \geq 6$, it follows that any formal curve in $D^*Q$ without constraints has nonpositive index (cf. \cite[Cor. 1.7.4]{EGH2000}), and hence we have $\mathbb{G}\Langle\mathcal{T}^m p\Rangle(D^*Q) = \infty$ for any $m \in \mathbb{Z}_{\geq 0}$, and in fact: \begin{prop} Let $X^{2n \geq 6}$ be a Liouville domain such that $\mathbb{G}\Langle \mathcal{T}^mp\Rangle(X) < \infty$ for some $m \in \mathbb{Z}_{\geq 0}$. Then $X$ does not have any embedded Lagrangian submanifold which admits a Riemannian metric of negative sectional curvature. \end{prop} {\noindent} In light of the discussion in \S\ref{sec:invariants} and the relationship between symplectic cohomology and string topology (see e.g. \cite{abouzaid2013symplectic,cohencalabi}), the invariants of this paper for cotangent bundles should be computable using techniques from rational homotopy theory.
\end{document}
\begin{document}
\title{Cauchy robust principal component analysis with applications to high-dimensional data sets}
\begin{abstract}
Principal component analysis (PCA) is a standard dimensionality reduction technique used in various research and applied fields. From an algorithmic point of view, classical PCA can be formulated in terms of operations on a multivariate Gaussian likelihood. As a consequence of the implied Gaussian formulation, the principal components are not robust to outliers. In this paper, we propose a modified formulation, based on the use of a multivariate Cauchy likelihood instead of the Gaussian likelihood, which has the effect of robustifying the principal components. We present an algorithm to compute these robustified principal components. We additionally derive the relevant influence function of the first component and examine its theoretical properties. Simulation experiments on high-dimensional datasets demonstrate that the estimated principal components based on the Cauchy likelihood outperform or are on par with existing robust PCA techniques. \end{abstract}
\section{Introduction} \label{Intro.3-CPC} In the analysis of multivariate data, it is frequently desirable to employ statistical methods which are insensitive to the presence of outliers in the sample. To address the problem of outliers, it is important to develop robust statistical procedures. Most statistical procedures include explicit or implicit prior assumptions about the distribution of the observations, but often without taking into account the effect of outliers. The purpose of this paper is to present a novel robust version of PCA which has some attractive features.
Principal component analysis (PCA) is considered to be one of the most important techniques in statistics. However, the classical version of PCA depends on either a covariance or a correlation matrix, both of which are very sensitive to outliers. We develop an alternative to classical PCA, which is far more robust, by using a multivariate Cauchy likelihood to construct a robust principal components (PC) procedure. It is an adaptation of the classical method of PCA obtained by replacing the Gaussian log-likelihood function by the Cauchy log-likelihood function, in a sense that will be explained in Section \ref{Lik.Inter.PCA}. The interpretation of standard PCA in terms of operations on a Gaussian likelihood is not new (see \citet{Bolton:1999}), but this fact does not appear to have been exploited previously in the development of a robust PCA procedure, as we do in this paper. One appealing property of the multivariate Cauchy likelihood is that it has a unique maximum point, but the single most important motivation for its use is that it leads to a robust procedure.
In the next section we briefly review some of the techniques employed for estimating parameters and for conducting a PCA in ways which are robust against the presence of outliers. We also present robustness preliminaries, including some important tools needed to assess whether a given method is robust or not. In Section \ref{CPCA} we develop the Cauchy-PCA and theoretically explore its robustness properties. In Section \ref{Comp.Algo} we present the numerical algorithms for computing Cauchy PCs, and also give the results of a number of very high-dimensional real-data and simulated examples. Our approach is seen to be competitive with, and often gives superior results to, the projection pursuit algorithm of \citet{croux2007algorithms} and \citet{croux2013robust}. Finally, we conclude the paper in Section \ref{concl.}.
\subsection{Literature review on robust PCA} \label{Lit.Review} It is well known that PCA is an important technique for high-dimensional data reduction. PCA is based on the sample covariance matrix $\hat{{\bf \Sigma}}$ and it involves searching for linear combinations $y_{j}= {\bf u}^{T}{\bf x}_{j}$ of the components of the vectors ${\bf x}_j$ which maximize the sample variance of the $y_j$. According to \citet{Mardia&Kent&Bibby:1979}, the solution is given by the spectral decomposition \[\hat{\bf \Sigma}={\bf U \Lambda U}^{T},\] where ${\bf \Lambda}= \hbox{diag}\{\lambda_{1}, \ldots, \lambda_{p}\}$ and its diagonal elements $\lambda_{i}$ are the sample variances of the principal components, while ${\bf U}$ is an orthogonal matrix, i.e. ${\bf U U}^{T} ={\bf U}^{T}{\bf U}={\bf I}_{p}$, whose columns ${\bf u}_{i}$ are the corresponding eigenvectors which define the linear combinations. In practice, the principal components are computed efficiently via the singular value decomposition (SVD), for instance using Lanczos-type iterative algorithms.
Classical PCA, unfortunately, is non-robust, since it is based on the sample covariance or sample correlation matrix, both of which are very sensitive to outlying observations; see Section \ref{NonRob.SPCA}. However, this problem has been addressed by two different strategies which yield robust versions of PCA:
\begin{description}
\item[i.] replacing the standard covariance or correlation matrix with a robust estimator; or
\item[ii.] maximising (or minimising) a different objective function to obtain a robust PCA.
\end{description} Many different proposals have been developed to carry out robust PCA, such as those based on projection pursuit (PP), $M$-estimators, and so on.
Although maximum likelihood estimation is perhaps the most important statistical inference method, this approach can sometimes lead to improper results when the underlying assumptions are not satisfied, for instance when the data contain outliers or deviate slightly from the assumed model. A generalization of maximum likelihood estimation, proposed by \citet{Huber:1964} and called $M$-estimation, aims to produce robust statistics by constructing procedures that are resistant to deviations from the underlying assumptions. $M$-estimators were also defined for the multivariate case by \citet{Maronna:1976}.
\citet{Campbell:1980} provided a procedure for robust PCA by examining estimates of means and covariances which are less affected by outlying observations, and by exploring the observations which have a large effect on the estimates. He replaced the sample covariance matrix by an $M$-estimator. \citet{Hubert:2003} introduced a new approach to robust PCA. It combines the advantages of two methods: the first is based on replacing the covariance or correlation matrix by a robust estimator, while the second is based on maximizing the objective function for this robust estimate.
A robust PCA based on the projection pursuit (PP) method was developed by \citet{Li:1985}, using Huber's $M$-estimator of dispersion as the projection index. The objective of PP is to seek projections of the high-dimensional data set onto low-dimensional subspaces that optimise a function of ``interestingness''. The function to be optimised is called an index or objective function, and its choice depends on the feature that the researcher is concerned with. This property gives the PP technique the flexibility to handle many different statistical problems, ranging from clustering to identifying outliers in a multivariate data set.
\citet{Bolton:1999} characterized the PCs for PP in terms of maximum likelihood under the assumption of normality; PCA can thus be considered as a special case of PP, as can many other methods of multivariate analysis. The sample median was used as the projection index to develop a robust PCA by \citet{Xie:1993}. In their simulation studies, \citet{Xie:1993} found the resulting PCA to be resistant to outliers and to deviations from the normal distribution.
\cite{croux2007algorithms,croux2013robust} also suggested a robust PCA using projection pursuit and we will contrast our methodology against their algorithm.
\section{Preliminaries on standard PCA} \label{NonRob.SPCA} PCA is an orthogonal linear transformation that projects the data onto a new coordinate system ordered according to the variance along each direction. Given a data matrix ${\bf X}\in\mathbb R^{n\times p}$ with each row corresponding to a sample, the first direction ${\bf u}_1$ that maximizes the variance is defined through \begin{equation*}
{\bf u}_1 = \argmax_{||{\bf u}||_2=1} ||({\bf X} - {\bf 1}_n\bar{{\bf x}}^T) {\bf u}||_2^2, \end{equation*} where ${\bf 1}_n$ is an $n$-dimensional vector whose elements are all equal to 1, while $\bar{{\bf x}}=\frac{1}{n}\sum_{i=1}^n {\bf x}_i$ is the empirical mean. The process is repeated $k$ times, and at each iteration the to-be-estimated principal direction has to be orthogonal to all previously-computed principal directions. Thus, the $k$-th principal direction is defined by \begin{equation*}
{\bf u}_k = \argmax_{||{\bf u}||_2=1} ||({\bf X} - {\bf 1}_n\bar{{\bf x}}^T) {\bf u}||_2^2 \ \ \text{subject to} \ \ {\bf u}_k \perp {\bf u}_j \ \text{with} \ j=1,...,k-1 \ . \end{equation*}
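The directions defined by the above constrained maximizations coincide, up to sign, with the right singular vectors of the centred data matrix. The following minimal numerical sketch (ours, not part of the original text; Python with NumPy, synthetic data) checks this for the leading direction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: n = 200 samples in p = 5 dimensions with anisotropic scales.
X = rng.normal(size=(200, 5)) @ np.diag([3.0, 2.0, 1.0, 0.5, 0.1])

# Centre the data (subtract the empirical mean x_bar from each row).
Xc = X - X.mean(axis=0)

# Route 1: eigendecomposition of the sample covariance matrix Sigma_hat.
Sigma_hat = Xc.T @ Xc / (X.shape[0] - 1)
eigvals, eigvecs = np.linalg.eigh(Sigma_hat)  # eigenvalues in ascending order
u1_eig = eigvecs[:, -1]                       # leading eigenvector

# Route 2: SVD of the centred data; rows of Vt are the principal directions.
_, s, Vt = np.linalg.svd(Xc, full_matrices=False)
u1_svd = Vt[0]

# The two leading directions agree up to sign.
agreement = abs(u1_eig @ u1_svd)
print(agreement)  # close to 1.0
```

The squared singular values divided by $n-1$ reproduce the sample variances $\lambda_i$, which is why the SVD route is the standard implementation.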
\subsection{Non-robustness of standard PCA} We will show that the influence functions of the largest eigenvalue of the covariance matrix and of the corresponding eigenvector are unbounded with respect to the norm of an outlying sample. Suppose that $\boldsymbol{\Sigma}$ is the covariance matrix of a population with distribution function $F$, i.e., \begin{equation}\label{Pop.Cov.} {\boldsymbol{\Sigma}} = \int_{\mathbb R^p} ({\bf x}-\boldsymbol{\mu})({\bf x}-\boldsymbol{\mu})^{T} dF({\bf x}), \end{equation} where $\boldsymbol{\mu}=\int_{\mathbb R^p} {\bf x} dF({\bf x})$ is the mean vector. Assume that the leading eigenvalue of $\boldsymbol{\Sigma}$ has multiplicity 1; we denote it by $\lambda$ and the leading eigenvector by $\hat{{\bf u}}$ (i.e., ${\bf u}_{1}=\hat{{\bf u}}$).
Let $T$ be an arbitrary functional, $F$ a distribution and ${\bf z}\in\mathbb R^p$ an arbitrary point in the relevant sample space. The influence function is defined as \begin{equation} IF_T({\bf z};F) = \lim_{\epsilon\to 0+} \frac{T((1-\epsilon) F + \epsilon \Delta_{{\bf z}}) - T(F)}{\epsilon}, \end{equation} where $\Delta_{{\bf z}}$ is a unit point mass located at ${\bf z}$.
The estimator corresponding to $T$ is said to be robust if its influence function is bounded with respect to the norm of the outlier ${\bf z}$.
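As a numerical illustration of this definition (an illustrative sketch of ours, using the univariate mean and median as the functional $T$ and a standard normal $F$), the influence function can be approximated by evaluating $T$ on a contaminated sample with a small contamination fraction $\epsilon$:

```python
import numpy as np

rng = np.random.default_rng(1)

def empirical_if(T, z, eps=0.01, n=100_000):
    """Finite-epsilon approximation of the influence function of a
    functional T at the point z, with F = standard normal."""
    base = rng.standard_normal(n)
    # Contaminated sample: a fraction eps of the mass is moved to z.
    n_out = int(eps * n)
    contaminated = np.concatenate([base[: n - n_out], np.full(n_out, z)])
    return (T(contaminated) - T(base)) / eps

# IF of the mean at z is z - mu: it grows without bound in |z|.
# IF of the median is bounded (about 1.25 for the standard normal).
for z in [2.0, 20.0, 200.0]:
    print(z, empirical_if(np.mean, z), empirical_if(np.median, z))
```

The mean's approximate influence grows linearly in $z$, while the median's stays bounded, matching the definition of robustness above.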
\begin{proposition}
The influence function for the leading eigenvector of $\boldsymbol{\Sigma}$ is given by\footnote{We use ${\bf A}^+$ to denote the Moore-Penrose inverse of a matrix $\bf A$.} \begin{equation} IF_{\hat{{\bf u}}} ({\bf z}, F) = - \big( ({\bf z}-\boldsymbol{\mu})^{T}\hat{{\bf u}} \big) (\boldsymbol{\Sigma}-\lambda {\bf I}_p)^{+} ({\bf z}-\boldsymbol{\mu}). \end{equation} Similarly, the IF for the largest eigenvalue of ${\boldsymbol{\Sigma}}$ is \begin{equation} IF_\lambda ({\bf z}, F) = \big( ({\bf z}-\boldsymbol{\mu})^{T}\hat{{\bf u}} \big)^2 - \lambda. \end{equation}
\end{proposition}
The detailed calculations are presented in Appendix \ref{NonRob:PCA:proof}. The following result shows that outliers with unbounded influence function do exist.
\begin{corollary}
Let ${\bf z}=\boldsymbol{\mu} + \gamma \hat{{\bf u}} + \eta {\bf v}$, where ${\bf v}$ is a unit vector which is orthogonal to $\hat{{\bf u}}$ and does not belong to the null space of $\boldsymbol{\Sigma}$, and $\gamma,\eta\neq 0$. Then \begin{equation*}
\lim _{{\bf z}: \, ||{\bf z}||_2 \rightarrow \infty}||IF_{\hat{{\bf u}}} ({\bf z}, F)||_2 = \infty, \end{equation*} and similarly for $IF_\lambda ({\bf z}, F)$. \end{corollary}
\begin{proof} Direct substitution of ${\bf z}$ into the influence function gives: \begin{equation*} IF_{\hat{{\bf u}}} ({\bf z}, F) = -((\gamma \hat{{\bf u}} + \eta {\bf v})^T \hat{{\bf u}}) (\boldsymbol{\Sigma}-\lambda {\bf I}_p)^{+} (\gamma \hat{{\bf u}} + \eta {\bf v}) = - \gamma \eta (\boldsymbol{\Sigma}-\lambda {\bf I}_p)^{+} {\bf v}. \end{equation*}
Since ${\bf v}$ does not belong to the null space of $\boldsymbol{\Sigma}$, it holds that $(\boldsymbol{\Sigma}-\lambda {\bf I}_p)^{+} {\bf v} \neq {\bf 0}$ thus $||(\boldsymbol{\Sigma}-\lambda {\bf I}_p)^{+} {\bf v}||_2=c\neq0$. Hence, \begin{equation*}
||IF_{\hat{{\bf u}}} ({\bf z}, F)||_2 = |\gamma| |\eta| c. \end{equation*}
Given that $||{\bf z}||_2^2 = \gamma^2+\eta^2+||\boldsymbol{\mu}||_2^2+2\gamma \boldsymbol{\mu}^T\hat{{\bf u}}+2\eta \boldsymbol{\mu}^T{\bf v}$, sending either $|\gamma|\to\infty$ or $|\eta|\to\infty$ makes $||{\bf z}||_2\to\infty$ while $||IF_{\hat{{\bf u}}} ({\bf z}, F)||_2\to\infty$, which completes the proof.
Similarly, \begin{equation*} IF_\lambda ({\bf z}, F) = \gamma^2-\lambda \rightarrow \infty, \end{equation*}
as $|\gamma|\to\infty$. \end{proof}
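The unboundedness established by the corollary is easy to reproduce numerically. The following Python sketch (our own illustration) evaluates the influence function of the proposition on outliers of the corollary's form $\boldsymbol{\mu}+\gamma\hat{{\bf u}}+\eta{\bf v}$ and confirms that $||IF_{\hat{{\bf u}}}||_2 = |\gamma|\,|\eta|\,c$ grows without bound:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5))
Sigma = A @ A.T                          # generic covariance, distinct eigenvalues
mu = rng.normal(size=5)
eigvals, U = np.linalg.eigh(Sigma)
lam0, u0 = eigvals[-1], U[:, -1]         # leading eigenpair
# rcond chosen so that the numerically tiny zero eigenvalue of
# Sigma - lam0*I is treated as exactly zero by the pseudo-inverse
pinv = np.linalg.pinv(Sigma - lam0 * np.eye(5), rcond=1e-10)

def IF_u(z):
    # influence function of the leading eigenvector (Proposition above)
    d = z - mu
    return -(d @ u0) * (pinv @ d)

v = U[:, 0]                              # direction orthogonal to u0
c = np.linalg.norm(pinv @ v)
for gamma in (1.0, 10.0, 100.0):
    z = mu + gamma * u0 + 2.0 * v        # corollary form with eta = 2
    print(gamma, np.linalg.norm(IF_u(z)))   # equals gamma * 2 * c
```

Since $(\boldsymbol{\Sigma}-\lambda{\bf I})^+\hat{{\bf u}}={\bf 0}$, only the $\eta{\bf v}$ component survives the pseudo-inverse, exactly as in the proof.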
\subsection{Generalizations of standard PCA} \label{Lik.Inter.PCA} Standard PCA can be viewed as a special case of a more general optimization problem. We present two such generalizations: the first leads to projection pursuit algorithms while the second leads to a maximum likelihood formulation. Let ${\bf u}$ be a unit vector, define the projection values \begin{equation*} c_{i}({\bf u}) = {\bf x}^{T}_{i} {\bf u}, {\hspace{3mm}} i=1, \ldots, n, \end{equation*} and let $\Phi:\mathbb R^n \to \mathbb R$ be a function acting on the projected values. The first generalization of PCA is defined as the maximization of $\Phi$: \begin{equation*}
{\bf u}_1 = \argmax_{||{\bf u}||_2=1} \Phi(c_1({\bf u}),...,c_n({\bf u})) \ . \end{equation*} As in standard PCA, subsequent principal directions are obtained after removing the contribution of the current principal component from the data. When $\Phi$ is the sample variance, standard PCA is recovered.
The second generalization interprets the computation of the principal component as a maximum likelihood estimation problem. Let \begin{equation}\label{GausLogLik}
l_{G}(\mu, \sigma^{2}| c_{1},\ldots, c_{n})= -\frac{n}{2} \log {\sigma}^{2} - \frac{1}{2{\sigma}^{2}}\sum_{i=1}^{n}(c_{i}-\mu)^{2} \end{equation} be the Gaussian log-likelihood (up to an additive constant). The first principal direction can then be obtained by solving the minimax problem: \begin{equation*}
\min_{||{\bf u}||_2=1}\max_{\mu, \sigma^2} \ l_{G}(\mu, \sigma^{2}| c_{1}({\bf u}),\ldots, c_{n}({\bf u})). \end{equation*} Indeed, the inner maximization can be solved analytically which leads to the optimal solution \begin{equation*} \hat{\mu}({\bf u}) = \frac{1}{n} \sum_{i=1}^n c_i({\bf u}) =: \bar{c}({\bf u}) \end{equation*} and \begin{equation*} {\hat{\sigma}}^{2}({\bf u}) = \frac{1}{n} \sum_{i=1}^{n} (c_{i}({\bf u})- \bar{c}({\bf u}))^{2}. \end{equation*} Unsurprisingly, the optimal values are the sample mean and the sample variance. Using the above formulas it is straightforward to show that \begin{eqnarray}
\argmin_{||{\bf u}||_2=1} \ l_{G}\big(\hat{\mu}({\bf u}), {\hat{\sigma}}^{2}({{\bf u}})| c_{1}({{\bf u}}), \ldots, c_{n}({{\bf u}})\big)
= \argmax_{||{\bf u}||_2=1} \ \hat{\sigma}^{2}({{\bf u}}) \ . \end{eqnarray} Variations of PCA can be derived by changing the likelihood function; in the next section we analyze the case of the Cauchy distribution.
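The equivalence between the profiled Gaussian likelihood and standard PCA can be verified numerically; the following Python sketch (our own illustration, not the authors' code) checks that no random unit vector attains a smaller profiled likelihood than the leading eigenvector of the sample covariance:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 4)) @ rng.normal(size=(4, 4))

def profiled_gaussian_loglik(u, X):
    # inner maximization solved analytically: plug in the sample mean and
    # the (biased) sample variance of the projections
    c = X @ u
    n = len(c)
    return -0.5 * n * np.log(c.var()) - 0.5 * n

# Minimizing the profiled likelihood over unit vectors is the same as
# maximizing the projected variance, i.e. standard PCA.
S = np.cov(X, rowvar=False, bias=True)
u_pca = np.linalg.eigh(S)[1][:, -1]          # leading eigenvector

vals = []
for _ in range(2000):
    u = rng.normal(size=4)
    vals.append(profiled_gaussian_loglik(u / np.linalg.norm(u), X))
print(profiled_gaussian_loglik(u_pca, X) <= min(vals))  # True
```

The profiled likelihood equals $-\frac{n}{2}\log\hat\sigma^2({\bf u})-\frac{n}{2}$, a decreasing function of $\hat\sigma^2({\bf u})$, so its minimizer over the unit sphere is the variance-maximizing direction.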
\section{Cauchy PCA} \label{CPCA} The Cauchy log-likelihood function is given by \begin{equation}\label{Cau.LogLik}
l_{C}({\mu},{\sigma}| {c}_{1}({{\bf u}}), \ldots, {c}_{n}({{\bf u}}))= n \log{\frac{\sigma}{\pi}} - \sum_{i=1}^{n} \log \left\{{\sigma}^{2}+ (c_{i}({\bf u})-{\mu})^{2}\right\}. \end{equation} where $\mu$ and $\sigma$ are the two parameters of the Cauchy distribution. The first Cauchy principal direction is also obtained by solving the minimax optimization problem: \begin{equation}\label{cauchy:minimax}
\min_{||{\bf u}||_2=1}\max_{\mu,\sigma} \ l_{C}(\mu, \sigma| c_{1}({\bf u}),\ldots, c_{n}({\bf u})). \end{equation} In contrast to the Gaussian case, the inner maximization cannot be performed analytically, so an iterative approach is required. Here, we apply the Newton-Raphson method, initialized at the median and half the interquartile range for the location and scale parameters, respectively. According to \citet{Copas:1975}, although the Cauchy distribution has no mean and infinite variance, the Cauchy log-likelihood $l_{C}(\mu, \sigma)$ admits a unique maximum likelihood estimate $(\hat{\mu},\hat{\sigma})$.
For fixed $\mu$ and $\sigma$, the outer minimization also has no analytic solution, and a fixed-point iteration is applied to calculate ${\bf u}$. The iteration is given by \begin{equation}
\hat{{\bf u}} = \frac{\hat{{\bf u}}_{un}}{||\hat{{\bf u}}_{un}||_2}, \label{cauchy:norm:eq} \end{equation} where $\hat{{\bf u}}_{un}$ is the unnormalized direction, obtained by differentiating the Lagrangian with respect to ${\bf u}$: \begin{eqnarray} \label{parallel} \hat{{\bf u}}_{un} = \sum_{i=1}^{n}\frac{({{\bf x}}_i^T{\hat{{\bf u}}}-\hat{\mu}){{\bf x}}_i} {\hat{\sigma}^2 + \left({{\bf x}}_i^T{\hat{{\bf u}}}-\hat{\mu} \right)^2} \ . \label{fixed:point:eq} \end{eqnarray}
Once the first principal direction has been computed, its contribution is removed from the dataset ${\bf X}$ and the same procedure is applied to estimate the next principal direction. This process is iterated $k$ times in total, and the removal of the contribution makes the principal directions mutually orthogonal.
We summarize the estimation of $k$ Cauchy principal components in the following pseudo-code (Algorithm \ref{1CPC:Algo.}).
\begin{algorithm}[H] \caption{Cauchy PCA} \label{1CPC:Algo.} \begin{algorithmic} \FOR{$j=1,...,k$} \STATE $\bullet$ Initialize ${\hat{{\bf u}}_{un}}$ and normalize
$\hat{{\bf u}}= \hat{{\bf u}}_{un} / ||\hat{{\bf u}}_{un}||_2$ \WHILE{not converged} \STATE $\bullet$ Fix $\hat{{\bf u}}$ and set $$ c_i(\hat{{\bf u}}) = {\bf x}_i^T\hat{{\bf u}}, \ \ i=1,...,n.$$ \STATE $\bullet$ Via the Newton-Raphson algorithm find $$(\hat{\mu},\hat{\sigma})=\argmax_{\mu, \sigma} \ l_C(\mu, \sigma; c_1(\hat{{\bf u}}), \ldots, c_n(\hat{{\bf u}})).$$ \STATE $\bullet$ Fix $(\hat{\mu}, \hat{\sigma})$ and using the fixed point iteration (i.e., (\ref{fixed:point:eq}) \& (\ref{cauchy:norm:eq})) find
$$\hat{{\bf u}} = \argmin_{{\bf u}} \ l_C(\hat{\mu}, \hat{\sigma}| c_1({\bf u}), \ldots, c_n({\bf u})) - \lambda (||{\bf u}||_2^2-1)$$ \ENDWHILE \STATE $\bullet$ Set the $j$-th Cauchy principal direction $${\bf u}_{j} = \hat{{\bf u}}.$$ \STATE $\bullet$ Remove the contribution from the dataset \begin{eqnarray*} {\bf X} = {\bf X} ({\bf I}_p - {\bf u}_{j}{\bf u}^T_{j}), \end{eqnarray*} \ENDFOR \end{algorithmic} \end{algorithm}
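The steps of Algorithm \ref{1CPC:Algo.} can be sketched in Python as follows (a minimal illustration, not a production implementation; the deterministic initialization, the iteration counts and the stopping rule are our own choices):

```python
import numpy as np

def cauchy_pca(X, k, iters=200, tol=1e-9):
    # Alternate (i) a Newton fit of the Cauchy location/scale on the current
    # projections with (ii) the fixed-point update of the direction, then deflate.
    X = np.array(X, dtype=float)
    n, p = X.shape
    comps = []
    for _ in range(k):
        u = np.ones(p) / np.sqrt(p)              # simple deterministic start
        for _ in range(iters):
            c = X @ u
            # inner maximization over (mu, sigma) via Newton-Raphson,
            # initialized at the median and half the interquartile range
            q1, mu, q3 = np.percentile(c, [25, 50, 75])
            sigma = max(0.5 * (q3 - q1), 1e-8)
            for _ in range(25):
                d = c - mu
                w = sigma**2 + d**2
                g = np.array([np.sum(2 * d / w),
                              n / sigma - np.sum(2 * sigma / w)])
                H = np.array([[np.sum(2 * (d**2 - sigma**2) / w**2),
                               -4 * sigma * np.sum(d / w**2)],
                              [-4 * sigma * np.sum(d / w**2),
                               -n / sigma**2 + np.sum(2 * (sigma**2 - d**2) / w**2)]])
                mu, sigma = np.array([mu, sigma]) - np.linalg.solve(H, g)
                sigma = abs(sigma)
            # fixed-point update of the direction (normalized u_un)
            d = c - mu
            u_new = X.T @ (d / (sigma**2 + d**2))
            u_new /= np.linalg.norm(u_new)
            done = abs(u_new @ u) > 1.0 - tol
            u = u_new
            if done:
                break
        comps.append(u)
        X = X @ (np.eye(p) - np.outer(u, u))     # remove the contribution
    return np.column_stack(comps)

rng = np.random.default_rng(4)
Xdemo = rng.normal(size=(150, 5)) * np.array([5.0, 2.0, 1.0, 1.0, 1.0])
U = cauchy_pca(Xdemo, 2)
print(U.shape)                                   # orthonormal columns
```

The deflation step guarantees that each new direction lies in the orthogonal complement of the previous ones, so the returned columns are orthonormal.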
\subsection{Robustness of the Leading Cauchy Principal Direction} Let $\boldsymbol{\theta} = \left(\mu,\sigma\right)^T$ be the parameter vector of the Cauchy distribution and consider the infinite-sample normalized Cauchy log-likelihood function \begin{equation}
l({\bf u}|\boldsymbol{\theta}) = \int_{{\bf x}\in\mathbb R^p} g(c({\bf u}),\boldsymbol{\theta})\, dF({\bf x}), \end{equation} where $g(c,\boldsymbol{\theta}) = \log(\sigma/\pi) - \log( \sigma^2+(c-\mu)^2)$ and $c({\bf u})={\bf x}^T{\bf u}$. We will estimate the influence function for the leading Cauchy principal direction \begin{equation}
\hat{{\bf u}} = \argmin_{||{\bf u}||_2=1} \ l({\bf u}|\boldsymbol{\theta}_F({\bf u})), \end{equation}
where $\boldsymbol{\theta}_F({\bf u})=\argmax_{\boldsymbol{\theta}} l({\bf x}^T{\bf u}|\boldsymbol{\theta})$ denotes the optimal Cauchy parameters for a given direction ${\bf u}$.
Since $\hat{{\bf u}}$ is restricted to be a unit vector, the standard condition for the minimum, i.e., $\left.\frac{\partial}{\partial{\bf u}} l({\bf u}|\boldsymbol{\theta}_F({\bf u}))\right\vert_{{\bf u}=\hat{{\bf u}}} = {\bf 0}$ is not valid. The proper condition is defined by \begin{equation}
{\bf P}_{\hat{{\bf u}}} \left.\frac{\partial}{\partial{\bf u}} l({\bf u}|\boldsymbol{\theta}_F({\bf u}))\right\vert_{{\bf u}=\hat{{\bf u}}} = {\bf 0} , \end{equation} where ${\bf P}_{{\bf u}}$ is the projection matrix given by ${\bf P}_{{\bf u}}={\bf I}_p-{\bf u}{\bf u}^T$.
\begin{Remark}
An equivalent condition is to satisfy ${\bf h}^T \left.\frac{\partial}{\partial{\bf u}} l({\bf u}|\boldsymbol{\theta}_F({\bf u}))\right\vert_{{\bf u}=\hat{{\bf u}}} = 0$ for all ${\bf h}$ such that ${\bf h}^T\hat{{\bf u}}=0$ and $||{\bf h}||_2=1$. Both derived conditions are essentially a consequence of the Lagrangian formulation of the constraint optimization problem. Indeed, the Lagrange condition implies that at the minimum the direction of the objective function's derivative should be parallel to the direction of the constraint's derivative which translates to $\left.\frac{\partial}{\partial{\bf u}} l({\bf u}|\boldsymbol{\theta}_F({\bf u}))\right\vert_{{\bf u}=\hat{{\bf u}}} = \lambda \hat{{\bf u}}$ where $\lambda\neq 0$ is the Lagrange multiplier. \end{Remark}
Let $\bar{g}({\bf x};{\bf u}) = \left.g({\bf x}^T{\bf u}|\boldsymbol{\theta})\right\vert_{\boldsymbol{\theta}=\boldsymbol{\theta}_F({\bf u})}$ be the likelihood function evaluated at $\boldsymbol{\theta}=\boldsymbol{\theta}_F({\bf u})$ and denote its partial derivatives by \[
\bar{g}_c({\bf x};{\bf u}) = \left.\frac{\partial}{\partial c} g({\bf x}^T{\bf u}|\boldsymbol{\theta})\right\vert_{\boldsymbol{\theta}=\boldsymbol{\theta}_F({\bf u})} \] and \[
\bar{g}_{\boldsymbol{\theta}}({\bf x};{\bf u}) = \left.\frac{\partial}{\partial \boldsymbol{\theta}} g({\bf x}^T{\bf u}|\boldsymbol{\theta})\right\vert_{\boldsymbol{\theta}=\boldsymbol{\theta}_F({\bf u})}. \] Similarly, $\bar{g}_{cc}$, $\bar{g}_{c\boldsymbol{\theta}}$ and $\bar{g}_{\boldsymbol{\theta}\boldsymbol{\theta}}$ denote the second-order derivatives. The following proposition establishes the expression for the influence function of the leading Cauchy principal direction, $\hat{{\bf u}}$.
\begin{proposition}\label{influence:func:cauchy:pca} Under the assumption of ${{\bf I}}_F(\hat{{\bf u}})$ and ${\bf A}$ being invertible matrices, the influence function of $\hat{{\bf u}}$ is \begin{equation} IF_{\hat{{\bf u}}} ({\bf z}, F) = {\bf A}^{-1} {\bf b} , \end{equation} where $$ \begin{aligned} {\bf A} &= {\bf I}_p \int_{\mathbb R^p} \bar{g}_{c\boldsymbol{\theta}}({\bf x};\hat{{\bf u}}) {\bf x}^T\hat{{\bf u}} dF({\bf x}) - {\bf P}_{\hat{{\bf u}}} \int_{\mathbb R^p} \bar{g}_{cc}({\bf x};\hat{{\bf u}}) {\bf x}^T{\bf x} dF({\bf x}) {\bf P}_{\hat{{\bf u}}} \\ &+ {\bf P}_{\hat{{\bf u}}} \int_{\mathbb R^p} {\bf x} \bar{g}_{c\boldsymbol{\theta}}({\bf x};\hat{{\bf u}}) dF({\bf x}) \, {{\bf I}}_F(\hat{{\bf u}})^{-1} \, \int_{\mathbb R^p} \bar{g}_{\boldsymbol{\theta} c}({\bf x};\hat{{\bf u}}) {\bf x}^T dF({\bf x}) {\bf P}_{\hat{{\bf u}}} \end{aligned} $$ and $$ {\bf b} = {\bf b}(z) = \bar{g}_c({\bf z}, \hat{{\bf u}}) {\bf z} + \int_{\mathbb R^p} {\bf x} \bar{g}_{c\boldsymbol{\theta}}({\bf x};\hat{{\bf u}}) dF({\bf x}) \, {{\bf I}}_F(\hat{{\bf u}})^{-1} \, \bar{g}_{\boldsymbol{\theta}}({\bf z};\hat{{\bf u}}), $$ while $$ {{\bf I}}_F(\hat{{\bf u}}) = \int_{\mathbb R^p} \bar{g}_{\boldsymbol{\theta}\boldsymbol{\theta}}({\bf x};\hat{{\bf u}}) dF({\bf x}) $$ is the expected Fisher information matrix under $F$ for the parameters of the Cauchy distribution computed at $\hat{{\bf u}}$. \end{proposition} \begin{proof} The proof consists of several straightforward series expansions and implicit function calculations. The complete proof is given in Appendix \ref{robust:cauchy:proof}. \end{proof}
The following boundedness result for the influence function states the conditions under which Cauchy PCA is robust.
\begin{corollary} \label{boundness} Let the assumptions of the proposition hold. If ${\bf z}^T\hat{{\bf u}}\neq 0$, or if ${\bf z}^T\hat{{\bf u}}=0$ but $\mu_F(\hat{{\bf u}})=0$, then the influence function for $\hat{{\bf u}}$ is bounded. \end{corollary}
\begin{proof} First, observe that matrix ${\bf A}$ does not depend on ${\bf z}$. It is only ${\bf b}$ that depends on ${\bf z}$ and our goal is to prove that ${\bf b}$ is bounded with respect to ${\bf z}$. Second, we have to compute the partial derivatives $\bar{g}_c({\bf z}; \hat{{\bf u}})$ and $\bar{g}_{\boldsymbol{\theta}}({\bf z}; \hat{{\bf u}})$. Straightforward calculations lead to $$ \bar{g}_c({\bf z}; \hat{{\bf u}}) = - \frac{2({\bf z}^T\hat{{\bf u}}-\mu_F(\hat{{\bf u}}))}{\sigma_F^2(\hat{{\bf u}})+({\bf z}^T\hat{{\bf u}}-\mu_F(\hat{{\bf u}}))^2} $$ $$ \bar{g}_\mu({\bf z}; \hat{{\bf u}}) = \frac{2({\bf z}^T\hat{{\bf u}}-\mu_F(\hat{{\bf u}}))}{\sigma_F^2(\hat{{\bf u}})+({\bf z}^T\hat{{\bf u}}-\mu_F(\hat{{\bf u}}))^2} $$ and $$ \bar{g}_\sigma({\bf z}; \hat{{\bf u}}) = \frac{1}{\sigma_F(\hat{{\bf u}})} - \frac{2\sigma_F(\hat{{\bf u}})}{\sigma_F^2(\hat{{\bf u}})+({\bf z}^T\hat{{\bf u}}-\mu_F(\hat{{\bf u}}))^2}. $$
Let us now scale the outlier as ${\bf z}\rightarrow\alpha{\bf z}$ and prove boundedness of ${\bf b}$ as $\alpha\to\infty$. We first consider the case ${\bf z}^T\hat{{\bf u}}\neq 0$. It holds that $\lim_{\alpha\to\infty} \bar{g}_c(\alpha{\bf z}; \hat{{\bf u}})\alpha{\bf z} = -2({\bf z}^T\hat{{\bf u}})^{-1}{\bf z}$, $\lim_{\alpha\to\infty} \bar{g}_\mu(\alpha{\bf z}; \hat{{\bf u}}) = 0$ and $\lim_{\alpha\to\infty} \bar{g}_\sigma(\alpha{\bf z}; \hat{{\bf u}}) = \frac{1}{\sigma_F(\hat{{\bf u}})}$, therefore ${\bf b}$ is bounded with respect to $\alpha$.
For the second case (${\bf z}^T\hat{{\bf u}}=0$ and $\mu_F(\hat{{\bf u}})=0$), we have \[ \lim_{\alpha\to\infty} \bar{g}_c(\alpha{\bf z}; \hat{{\bf u}})\alpha{\bf z} = \lim_{\alpha\to\infty} \frac{2\mu_F(\hat{{\bf u}})}{\sigma_F^2(\hat{{\bf u}})+\mu_F(\hat{{\bf u}})^2} \alpha{\bf z} = {\bf 0}, \] \[ \lim_{\alpha\to\infty} \bar{g}_\mu(\alpha{\bf z}; \hat{{\bf u}}) = -\frac{2\mu_F(\hat{{\bf u}})}{\sigma_F^2(\hat{{\bf u}})+\mu_F(\hat{{\bf u}})^2} = 0 \] and \[ \lim_{\alpha\to\infty} \bar{g}_\sigma(\alpha{\bf z}; \hat{{\bf u}}) = \frac{1}{\sigma_F(\hat{{\bf u}})} - \frac{2\sigma_F(\hat{{\bf u}})}{\sigma_F^2(\hat{{\bf u}})+\mu_F(\hat{{\bf u}})^2} = -\frac{1}{\sigma_F(\hat{{\bf u}})} \] since $\mu_F(\hat{{\bf u}})=0$ by assumption. Thus ${\bf b}$ is bounded with respect to $\alpha$ in this case, too. \end{proof}
The only case not covered by the corollary is ${\bf z}^T\hat{{\bf u}}=0$ with $\mu_F(\hat{{\bf u}})\neq 0$. Our experiments in the following section show that outliers orthogonal to the Cauchy principal direction do sometimes influence its estimation, although not significantly.
\subsection{Several Cauchy principal components}
We briefly mention possibilities for estimating several Cauchy principal components. There are two obvious approaches: one approach, the sequential approach, is to repeat the algorithm described above on the subspace orthogonal to $\hat{{\bf u}}=\hat{{\bf u}}_1$ to obtain $\hat{{\bf u}}_2$, the second Cauchy principal component, where $\hat{{\bf u}}_1$ is the first Cauchy principal component; then repeat the procedure on the subspace orthogonal to $\hat{{\bf u}}_1$ and $\hat{{\bf u}}_2$ to obtain $\hat{{\bf u}}_3$; and so on. A second approach, the simultaneous approach, is to decide in advance how many principal components we wish to determine, $p$ say, and then use a $p$-dimensional multivariate Cauchy likelihood, which has $p+ p(p+1)/2$ free parameters, to obtain $\hat{{\bf u}}_1, \ldots , \hat{{\bf u}}_p$. It turns out that these two approaches lead to equivalent results in classical (Gaussian) PCA but when a Cauchy likelihood is used the two approaches produce different sets of principal components. Our current thinking is this: the sequential approach is easier to implement (essentially the same software can be used at each step) and it is faster. However, the simultaneous approach could potentially be preferable if we know in advance how many principal components we wish to estimate. Further investigation is required.
\section{Numerical Results} \label{Comp.Algo}
\subsection{Simulation studies} In this section we empirically validate the proposed methodology via simulation studies. We searched for R packages that offer robust PCA in the $n \ll p$ case and found \textit{FastHCS} \citep{fasthcs2018}, \textit{rrcovHD} \citep{rrcovhd2016}, \textit{rpca} \citep{rpca2017} and \textit{pcaPP} \citep{pcapp2018}. Of these, \textit{pcaPP} (Projection Pursuit PCA) is the only one that does not require hyper-parameter tuning, e.g. selection of the LASSO penalty $\lambda$ or of the percentage of observations used to estimate a robust covariance matrix.
\subsubsection{Setup of the simulations} Initially, we created a $p \times p$ orthonormal basis $\bf B$ by applying the QR decomposition to randomly generated data. We then generated eigenvalues $\lambda_i \sim Exp(0.4)$, $i=1,\ldots,p$, yielding the covariance matrix $\pmb{\Sigma} = {\bf B}\pmb{\Lambda}{\bf B}^T$, where $\pmb{\Lambda} =\text{diag}(\lambda_i)$. The first column of $\bf B$ served as the first ``clean'' eigenvector and was the benchmark in our comparative evaluations. Following this step, we simulated $n$ random vectors ${\bf X} \sim N_p\left({\bf 0}, \pmb{\Sigma} \right)$ and, in order to check the robustness of the results to the center of the data, shifted all observations by adding $50$ to every entry. A number of outliers equal to $2\%$ of the sample size were introduced. These outliers were $\bar{\bf x}+e^{\kappa}{\bf z} \in {\mathbb{R}}^{p}$, where $\bar{\bf x}$ is the sample mean vector, ${\bf z}$ are unit vectors and $e^{\kappa}$ is a real number denoting their norm; $\kappa$ varied from $3$ to $8$ in steps of $1$, and the angle between the outliers ${\bf z}$ and the first ``clean'' eigenvector spanned from $0^{\circ}$ to $90^{\circ}$. In all cases, we subtracted the spatial median or the column-wise median\footnote{The results are very similar for either type of median and we show here the results for the column-wise median.} and scaled the data by the mean absolute deviation.
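The setup above can be sketched in Python as follows (our own illustration; in particular, constructing the outlier direction at angle $\phi$ inside the plane spanned by the first two basis vectors is our assumption):

```python
import numpy as np

def simulate(n=100, p=500, kappa=5, phi_deg=30, out_frac=0.02, seed=0):
    rng = np.random.default_rng(seed)
    B, _ = np.linalg.qr(rng.normal(size=(p, p)))   # orthonormal basis
    lam = rng.exponential(scale=1 / 0.4, size=p)   # eigenvalues ~ Exp(0.4)
    Sigma = (B * lam) @ B.T
    u_true = B[:, 0]                               # first "clean" eigenvector
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n) + 50.0
    # outliers at angle phi from u_true, with norm e^kappa
    phi = np.deg2rad(phi_deg)
    z = np.cos(phi) * u_true + np.sin(phi) * B[:, 1]
    n_out = max(1, int(out_frac * n))
    outliers = X.mean(axis=0) + np.exp(kappa) * np.tile(z, (n_out, 1))
    Xc = np.vstack([X, outliers])
    # center by the column-wise median, scale by the mean absolute deviation
    med = np.median(Xc, axis=0)
    mad = np.mean(np.abs(Xc - med), axis=0)
    return (Xc - med) / mad, u_true

Xs, u = simulate(n=40, p=50, kappa=4, phi_deg=30, seed=1)
print(Xs.shape)   # 40 clean observations plus the injected outlier(s)
```

Smaller $n$ and $p$ than in the tables are used here purely to keep the sketch fast.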
In each case, we computed the first Cauchy-PCA eigenvector and the first PP-PCA eigenvector. The performance metric is the angle (in degrees) between the first robust (Cauchy or PP-PCA) eigenvector and the first ``clean'' eigenvector computed by classical PCA. All experiments were repeated $100$ times and the results were averaged.
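Since eigenvectors are defined only up to sign, the angle metric is computed in a sign-invariant way; a short sketch (our own illustration):

```python
import numpy as np

def angle_deg(u, v):
    # angle between two directions; the absolute value makes the metric
    # invariant to the arbitrary sign of an eigenvector
    c = abs(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

print(angle_deg(np.array([1.0, 0.0]), np.array([1.0, 1.0])))   # approximately 45
```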
\subsubsection{Comparative results}
Tables \ref{tab100_500}-\ref{tab500_1000} present the performance of the first Cauchy-PCA eigenvector and of the first PP-PCA eigenvector for a variety of norms of the outlier, with different angles ($\phi$) between the outlier and the leading true eigenvector, for the $n<p$ case.
The $n<p$ case was selected because statistical inference is more challenging there than when $p<n$\footnote{In this paper we focus on high-dimensional simulations and real-data examples ($p>n$), but in results not presented here we found that Cauchy PCA is also very competitive and performs strongly in low-dimensional settings ($p<n$).}. This case is also routinely met in bioinformatics, where -omics data contain tens of thousands of variables (genes, single nucleotide polymorphisms, etc.) but only tens or at most hundreds of observations.
As observed in Tables \ref{tab100_500}-\ref{tab500_1000}, the average angular difference between the Cauchy and the PP PCA ranges from $20^{\circ}$ to more than $50^{\circ}$, which is quite substantial and provides evidence that Cauchy PCA performs better than the projection pursuit method of Croux et al. (2007, 2013). In particular, the tables demonstrate that Cauchy PCA is less error prone than its competitor, while, as seen in Table \ref{tab500_1000}, the error of both methods decreases with increasing sample size. The mean angular difference between the two methods increases with the angle $\phi$. For instance, in Table \ref{tab100_500}, when $k=8$ and $\phi=0^{\circ}$ the difference between the two methods is $20^{\circ}$, whereas when $\phi=90^{\circ}$ it increases to $48^{\circ}$. The error of each method, however, is not greatly affected by the angle $\phi$ or by the norm of the outliers. In Tables \ref{tab100_1000} and \ref{tab500_1000}, in the special case of $\phi=90^{\circ}$, the error of Cauchy PCA increases by $2^{\circ}-3^{\circ}$, corroborating the result of Corollary \ref{boundness}. As in Table \ref{tab100_500}, this effect is rather small, though noticeable.
\begin{table} \caption{Mean angular difference between the robust eigenvectors computed in the contaminated data and the sample eigenvector computed in the clean data when $n=100$ and $p=500$. The norm of the outliers is $e^{k}$ and their angle with the true clean eigenvector is denoted by $\phi$.} \label{tab100_500}
\begin{tabular}{ll|rrrrrrr} \hline Angle & Method & k=-Inf & k=3 & k=4 & k=5 & k=6 & k=7 & k=8 \\ \hline $\phi=0^{\circ}$ & Cauchy & 31.17 & 29.79 & 29.54 & 28.83 & 28.86 & 29.24 & 28.78 \\
& PP & 82.45 & 49.91 & 48.84 & 48.22 & 49.08 & 49.61 & 48.14 \\ \hline $\phi=30^{\circ}$ & Cauchy & 31.44 & 29.24 & 29.13 & 28.60 & 28.89 & 29.34 & 29.65 \\
& PP & 82.45 & 65.28 & 65.34 & 63.42 & 62.96 & 66.63 & 65.43 \\ \hline $\phi=60^{\circ}$ & Cauchy & 31.49 & 29.86 & 29.07 & 29.04 & 29.55 & 29.70 & 29.09 \\
& PP & 82.11 & 81.11 & 82.55 & 82.63 & 82.12 & 82.49 & 82.03 \\ \hline $\phi=90^{\circ}$ & Cauchy & 32.32 & 31.67 & 33.00 & 33.13 & 32.86 & 33.19 & 33.06 \\
& PP & 82.38 & 82.06 & 81.69 & 82.12 & 81.73 & 81.74 & 81.88 \\ \hline \end{tabular} \end{table}
\begin{table} \caption{Mean angular difference between the robust eigenvectors computed in the contaminated data and the sample eigenvector computed in the clean data when $n=100$ and $p=1000$. The norm of the outliers is $e^{k}$ and their angle with the true clean eigenvector is denoted by $\phi$.} \label{tab100_1000}
\begin{tabular}{ll|rrrrrrr} \hline Angle & Method & k=-Inf & k=3 & k=4 & k=5 & k=6 & k=7 & k=8 \\ \hline $\phi=0^{\circ}$ & Cauchy & 36.53 & 33.12 & 33.60 & 33.69 & 32.62 & 32.51 & 33.16 \\
& PP & 83.06 & 80.36 & 80.17 & 81.87 & 80.50 & 80.76 & 80.16 \\ \hline $\phi=30^{\circ}$ & Cauchy & 36.55 & 34.72 & 33.91 & 33.09 & 33.11 & 33.16 & 32.79 \\
& PP & 83.07 & 82.36 & 82.76 & 82.65 & 83.07 & 82.93 & 83.12 \\ \hline $\phi=60^{\circ}$ & Cauchy & 36.42 & 34.46 & 33.96 & 33.61 & 34.41 & 33.07 & 33.47 \\
& PP & 83.78 & 82.86 & 82.71 & 84.05 & 83.46 & 82.71 & 82.78 \\ \hline $\phi=90^{\circ}$ & Cauchy & 36.50 & 36.12 & 36.81 & 37.18 & 39.34 & 39.11 & 38.51 \\
& PP & 83.63 & 83.73 & 83.69 & 83.65 & 84.03 & 83.66 & 83.00 \\ \hline \end{tabular} \end{table}
\begin{table} \caption{Mean angular difference between the robust eigenvectors computed in the contaminated data and the sample eigenvector computed in the clean data when $n=500$ and $p=1000$. The norm of the outliers is $e^{k}$ and their angle with the true clean eigenvector is denoted by $\phi$.} \label{tab500_1000}
\begin{tabular}{ll|rrrrrrr} \hline Angle & Method & k=-Inf & k=3 & k=4 & k=5 & k=6 & k=7 & k=8 \\ \hline $\phi=0^{\circ}$ & Cauchy & 19.95 & 18.60 & 18.46 & 18.35 & 18.24 & 18.20 & 17.93 \\
& PP & 68.76 & 26.08 & 24.93 & 24.91 & 24.83 & 24.73 & 24.72 \\ \hline $\phi=30^{\circ}$ & Cauchy & 19.43 & 18.30 & 18.39 & 18.22 & 18.16 & 18.01 & 18.13 \\
& PP & 68.98 & 39.72 & 38.88 & 38.44 & 38.20 & 38.15 & 38.14 \\ \hline $\phi=60^{\circ}$ & Cauchy & 19.76 & 18.60 & 18.12 & 18.20 & 18.40 & 18.19 & 18.01 \\
& PP & 69.10 & 64.10 & 63.12 & 62.89 & 62.91 & 62.82 & 62.77 \\ \hline $\phi=90^{\circ}$ & Cauchy & 19.49 & 19.84 & 20.16 & 21.87 & 22.41 & 22.87 & 22.84 \\
& PP & 68.99 & 68.62 & 68.59 & 68.70 & 68.45 & 68.73 & 68.43 \\ \hline \end{tabular} \end{table}
\subsection{High dimensional real datasets} Two real gene expression datasets, GSE13159 and GSE31161\footnote{From a biological standpoint, the data have already been uniformly pre-processed, curated and automatically annotated.}, downloaded from the \href{dataome.mensxmachina.org}{Biodataome} platform \citep{lakiotaki2018}, were used in the experiments. The dimensions of the datasets were $2,096 \times 54,630$ and $1,035 \times 54,675$, respectively. We randomly selected $5,000$ variables and detected the outliers using the high-dimensional Minimum Covariance Determinant (MCD) of \cite{ro2015}. In accordance with the simulation studies, we removed the $2\%$ most extreme outliers detected by MCD and computed the first classical PC (the benchmark eigenvector), the first Cauchy-PCA eigenvector and the first PP-PCA eigenvector of the ``clean'' data. We then added those outliers back, increased their norm by $e^k$, where $k=(0, 3, 4, \ldots, 8)$, and computed the first Cauchy-PCA eigenvector and the first PP-PCA eigenvector. In all cases, we subtracted the spatial median or the column-wise median and scaled the data by the mean absolute deviation. The performance metrics are the angle (in degrees) between the first robust (Cauchy or PP-PCA) eigenvector and the first ``clean'' eigenvector, and the time required by each method. This procedure was repeated $200$ times and the average results are graphically displayed in Figures \ref{gse}(a)-(d).
Broadly speaking, PP PCA does not seem to be affected substantially by the centering method, i.e. subtraction of the spatial or the column-wise median. On the contrary, Cauchy PCA is affected by the type of median employed. Centering with the spatial median yields high error levels for all norms of the outliers, for both datasets, whereas centering with the column-wise median produces much lower error levels. On average, the difference in error between Cauchy PCA and PP PCA is about $30^{\circ}$ for the GSE13159 dataset (Figure \ref{gse}(a)) and about $14^{\circ}$ for the GSE31161 dataset (Figure \ref{gse}(b)). However, the error of Cauchy PCA increases and then stabilizes in the GSE13159 dataset, whereas the error of PP PCA is stable regardless of the norm of the outliers. A different conclusion is drawn for GSE31161, where the error of either method decreases as the norm of the outliers increases, until it reaches a plateau.
With regards to computational efficiency, PP PCA is not affected by either centering method, whereas Cauchy PCA seems to be affected in the GSE13159 dataset but not in the GSE31161 dataset, as seen in Figures \ref{gse}(c) and \ref{gse}(d). Cauchy PCA centered with the column-wise median is, on average, 5 times faster than PP PCA.
\begin{figure}
\caption{The first row presents the angle between the first Cauchy PC of the ``contaminated'' data and the leading eigenvector of the ``clean'' data, and the angle between the first Projection Pursuit PC of the ``contaminated'' data and the leading eigenvector of the ``clean'' data, for increasing norms of the outliers. The second row contains the computation time in seconds.}
\label{gse}
\end{figure}
\section{Conclusion}\label{concl.} The starting point for this paper is the observation that classical PCA can be formulated purely in terms of operations on a Gaussian likelihood. Although this observation is not new, the specifics of this formulation of classical PCA do not appear to be as widely known as might be expected. The novel idea underlying this paper is to formulate a version of PCA in which a Cauchy likelihood is used instead of a Gaussian likelihood, leading to what we call Cauchy PCA. Study of the resulting influence functions shows that Cauchy PCA has very good robustness properties. Moreover, we have provided an implementation of Cauchy PCA which runs quickly and reliably. Numerous simulation and real-data examples, mainly in high-dimensional settings, show that Cauchy PCA typically outperforms alternative robust versions of PCA whose implementation is in the public domain.
\section*{Appendix} \setcounter{section}{0} \renewcommand{\thesubsection}{A\arabic{subsection}}
\subsection{Proof of Proposition 2.1}\label{NonRob:PCA:proof} \begin{proof} The perturbed distribution $(1-\epsilon)F({\bf x}) + \epsilon\Delta_{\bf z}({\bf x})$ has perturbed mean value \begin{equation*} \boldsymbol{\mu}_\epsilon = \boldsymbol{\mu} + \epsilon ({\bf z}-\boldsymbol{\mu}) \end{equation*} and perturbed covariance matrix \begin{equation*} \boldsymbol{\Sigma}_\epsilon = \boldsymbol{\Sigma} + \epsilon (({\bf z}-\boldsymbol{\mu})({\bf z}-\boldsymbol{\mu})^T-\boldsymbol{\Sigma}) - \epsilon^2 ({\bf z}-\boldsymbol{\mu})({\bf z}-\boldsymbol{\mu})^T . \end{equation*}
Denoting by $\lambda_\epsilon$ the leading eigenvalue of $\boldsymbol{\Sigma}_\epsilon$ and by ${\bf u}_\epsilon$ the corresponding eigenvector, it holds that \begin{equation}\label{perturbed:eigen:eq} \boldsymbol{\Sigma}_\epsilon{\bf u}_\epsilon = \lambda_\epsilon {\bf u}_\epsilon \ \ \text{and} \ \ {\bf u}_\epsilon^{T}{\bf u}_\epsilon = 1 \ . \end{equation} Next, we expand the perturbed eigenvector and eigenvalue around the unperturbed ones as follows: \begin{equation*} {\bf u}_\epsilon = {\bf u}_0 + \epsilon {\bf u}_1 + O(\epsilon^2) \end{equation*} and \begin{equation*} \lambda_\epsilon = \lambda_0 + \epsilon \lambda_1 + O(\epsilon^2) \end{equation*} with \begin{equation*} \boldsymbol{\Sigma}{\bf u}_0 = \lambda_0 {\bf u}_0 \ \ \text{and} \ \ {\bf u}_0^{T}{\bf u}_0 = 1 \ .
\end{equation*}
Substituting the formulas into (\ref{perturbed:eigen:eq}), and equating the zero-th and first order we get \begin{equation*} \boldsymbol{\Sigma}{\bf u}_0 = \lambda_0 {\bf u}_0 \ \ \text{and} \ \ {\bf u}_0^{T}{\bf u}_0 = 1 \ . \end{equation*} and \begin{equation}\label{1st:order:eq} (({\bf z}-\boldsymbol{\mu})({\bf z}-\boldsymbol{\mu})^T-\boldsymbol{\Sigma}){\bf u}_0 + \boldsymbol{\Sigma}{\bf u}_1 = \lambda_0{\bf u}_1 + \lambda_1{\bf u}_0 \end{equation} and \begin{equation*} {\bf u}_0^{T}{\bf u}_1 = 0 \ . \end{equation*}
Multiplying (\ref{1st:order:eq}) from the left with ${\bf u}_0^T$, we get \begin{equation*} \lambda_1 = {\bf u}_0^T (({\bf z}-\boldsymbol{\mu})({\bf z}-\boldsymbol{\mu})^T-\boldsymbol{\Sigma}){\bf u}_0 + {\bf u}_0^T\boldsymbol{\Sigma}{\bf u}_1 = ({\bf u}_0^T ({\bf z}-\boldsymbol{\mu}))^2 - \lambda_0 \end{equation*}
For ${\bf u}_1$, we rearrange (\ref{1st:order:eq}) to \begin{equation*} (\boldsymbol{\Sigma}-\lambda_0{\bf I}){\bf u}_1 = \lambda_1{\bf u}_0 - (({\bf z}-\boldsymbol{\mu})({\bf z}-\boldsymbol{\mu})^T-\boldsymbol{\Sigma}){\bf u}_0 \end{equation*} and then multiply from the left with the pseudo-inverse of $\boldsymbol{\Sigma}-\lambda_0{\bf I}$ to obtain \begin{equation*} (\boldsymbol{\Sigma}-\lambda_0{\bf I})^+(\boldsymbol{\Sigma}-\lambda_0{\bf I}){\bf u}_1 = \lambda_1(\boldsymbol{\Sigma}-\lambda_0{\bf I})^+{\bf u}_0 - (\boldsymbol{\Sigma}-\lambda_0{\bf I})^+(({\bf z}-\boldsymbol{\mu})({\bf z}-\boldsymbol{\mu})^T-\boldsymbol{\Sigma}){\bf u}_0 \end{equation*} Using the properties (\cite{Mardia&Kent&Bibby:1979}): $(\boldsymbol{\Sigma}-\lambda_0{\bf I})^+(\boldsymbol{\Sigma}-\lambda_0{\bf I}) = {\bf I} - {\bf u}_0{\bf u}_0^T$ and $(\boldsymbol{\Sigma}-\lambda_0{\bf I})^+{\bf u}_0 = {\bf 0}$, together with $\boldsymbol{\Sigma}{\bf u}_0=\lambda_0{\bf u}_0$ and ${\bf u}_0^{T}{\bf u}_1=0$, we obtain \begin{equation*} \begin{aligned} &{\bf u}_1 - {\bf u}_0{\bf u}_0^T{\bf u}_1 = -(\boldsymbol{\Sigma}-\lambda_0{\bf I})^+({\bf z}-\boldsymbol{\mu})({\bf z}-\boldsymbol{\mu})^T{\bf u}_0 + \lambda_0 (\boldsymbol{\Sigma}-\lambda_0{\bf I})^+ {\bf u}_0 \\ \Rightarrow & {\bf u}_1 = -(({\bf z}-\boldsymbol{\mu})^T{\bf u}_0)(\boldsymbol{\Sigma}-\lambda_0{\bf I})^+({\bf z}-\boldsymbol{\mu}) \end{aligned} \end{equation*} and the proof is completed. \end{proof}
\begin{comment}
We prove that standard PCA is not robust by showing that the influence function is unbounded. Suppose that $\Sigma_{0}$ is the covariance matrix of a population with distribution function $F_{0}$, i.e. \begin{equation}\label{Pop.Cov.}
{\boldsymbol{\Sigma}}_{0}= \int {\bf x}{\bf x}^{T}dF_{0}({\bf x})-\left(\int {\bf x}dF_{0}({\bf x})\right)\left(\int {\bf x}dF_{0}({\bf x})\right)^{T}; \end{equation} denote the corresponding mean by: \begin{equation}\label{Pop.Mean}
{\boldsymbol{\alpha}}_{0}=\int {\bf x}dF_{0}({\bf x}). \end{equation} Let us consider the distribution function $F_{{\bf z},\epsilon}$. The corresponding mean and covariance matrix will be defined as following, respectively: \begin{equation}\label{Mix.Mean}
{\boldsymbol{\alpha_{\epsilon}}}= \int {\bf x}[(1-\epsilon)dF_{0}({\bf x})+\epsilon d \delta_{\bf z}(\bf x)]=(1-\epsilon){\boldsymbol{\alpha}}_{0} + \epsilon {\bf z}, \end{equation} \begin{eqnarray}
{\boldsymbol{\Sigma_{\epsilon}}} &=& (1-\epsilon)({\boldsymbol{\Sigma}}_{0}+{\boldsymbol{\alpha}}_{0}{\boldsymbol{\alpha}}_{0}^{T})+ \epsilon {\bf z}{\bf z}^{T}-((1-\epsilon){\boldsymbol{\alpha}}_{0} + \epsilon {\bf z})((1-\epsilon){\boldsymbol{\alpha}}_{0} + \epsilon {\bf z})^{T} \nonumber \\
&=& {\boldsymbol{\Sigma}}_{0}+\epsilon[({\bf z}-{\boldsymbol{\alpha}}_{0})({\bf z}-{\boldsymbol{\alpha}}_{0})^{T}-{\boldsymbol{\Sigma}}_{0}]+\epsilon^{2}({\bf z}-{\boldsymbol{\alpha}}_{0})({\bf z}-{\boldsymbol{\alpha}}_{0})^{T}. \end{eqnarray} Assume that the leading eigenvalue of ${\boldsymbol{\Sigma}}_{0}$ has multiplicity 1. Then the leading eigenvector and the leading eigenvalue have the following expansions, respectively (\citet{Amaral:2007}): \begin{equation}\label{mu-eps}
{\boldsymbol{\mu}}_{[\epsilon]}={\boldsymbol{\mu}}_{0}+\epsilon {\boldsymbol{\mu}}_{1}+\epsilon^{2} {\boldsymbol{\mu}}_{2}+\ldots,
\end{equation} \begin{equation}\label{lamda-eps}
{\lambda}_{[\epsilon]}=\lambda_{0}+\epsilon \lambda_{1}+\epsilon^{2} \lambda_{2}+\ldots,
\end{equation} where the following identities hold \begin{equation}\label{sigma-eps}
{\boldsymbol{\Sigma_{\epsilon}}}{\boldsymbol{\mu}}_{[\epsilon]}={\lambda}_{[\epsilon]}{\boldsymbol{\mu}}_{[\epsilon]} \ \ \text{and} \ \ {\boldsymbol{\mu}}_{[\epsilon]}^{T}{\boldsymbol{\mu}}_{[\epsilon]}=1. \end{equation} This expansion will enable us to determine the influence function for parameters of interest. Substituting (\ref{mu-eps}) and (\ref{lamda-eps}) into (\ref{sigma-eps}) yields \begin{eqnarray*}
&\left\{{\boldsymbol{\Sigma}}_{0}+\epsilon[({\bf z}-{\boldsymbol{\alpha}}_{0})({\bf z}-{\boldsymbol{\alpha}}_{0})^{T}-{\boldsymbol{\Sigma}}_{0}]+\epsilon^{2}({\bf z}-{\boldsymbol{\alpha}}_{0})({\bf z}-{\boldsymbol{\alpha}}_{0})^{T}\right\} \left\{{\boldsymbol{\mu}}_{0}+\epsilon {\boldsymbol{\mu}}_{1}+\epsilon^{2} {\boldsymbol{\mu}}_{2}+\ldots \right\}&\\ &=\left\{\lambda_{0}+\epsilon \lambda_{1}+\epsilon^{2} \lambda_{2}+\ldots\right\} \left\{{\boldsymbol{\mu}}_{0}+\epsilon {\boldsymbol{\mu}}_{1}+\epsilon^{2} {\boldsymbol{\mu}}_{2}+\ldots\right\}.& \end{eqnarray*} Now, collect the coefficients of $\epsilon^{0}=1$; that leads to the original problem: \begin{equation}\label{epsilon0 coef.} {\boldsymbol{\Sigma}}_{0} {\boldsymbol{\mu}}_{0}=\lambda_{0} {\boldsymbol{\mu}}_{0}. \end{equation} Then collecting the coefficients of $\epsilon^{1}=\epsilon$ gives: \begin{equation}\label{epsilon1 coef.}
[({\bf z}-{\boldsymbol{\alpha}}_{0})({\bf z}-{\boldsymbol{\alpha}}_{0})^{T}-{\boldsymbol{\Sigma}}_{0}]{\boldsymbol{\mu}}_{0}+ {\boldsymbol{\Sigma}}_{0}{\boldsymbol{\mu}}_{1}= \lambda_{0} {\boldsymbol{\mu}}_{1}+ \lambda_{1} {\boldsymbol{\mu}}_{0}. \end{equation} Note that ${\boldsymbol{\mu}}^{T}_{[\epsilon]}{\boldsymbol{\mu}}_{[\epsilon]}=1$ which means: \begin{eqnarray*}
({\boldsymbol{\mu}}_{0}+\epsilon {\boldsymbol{\mu}}_{1}+ \ldots)^{T}({\boldsymbol{\mu}}_{0}+\epsilon {\boldsymbol{\mu}}_{1}+ \ldots)&=& 1 \\
{\boldsymbol{\mu}}_{0}^{T}{\boldsymbol{\mu}}_{0}+\epsilon({\boldsymbol{\mu}}_{0}^{T}{\boldsymbol{\mu}}_{1}+{\boldsymbol{\mu}}_{1}^{T} {\boldsymbol{\mu}}_{0})+\ldots &=& 1 , \end{eqnarray*} but it is clear from (\ref{epsilon0 coef.}) that ${\boldsymbol{\mu}}_{0}^{T}{\boldsymbol{\mu}}_{0}=1$, which results in \begin{equation}\label{mu.Ortho.} {\boldsymbol{\mu}}_{0}^{T}{\boldsymbol{\mu}}_{1}={\boldsymbol{\mu}}_{1}^{T}{\boldsymbol{\mu}}_{0} = 0. \end{equation} So it turns out that ${\boldsymbol{\mu}}_{0}$ and ${\boldsymbol{\mu}}_{1}$ are orthogonal to each other. Now, multiply (\ref{epsilon1 coef.}) by ${\boldsymbol{\mu}}_{0}^{T}$ from left, then use a property of the orthogonality of ${\boldsymbol{\mu}}_{0}$ and ${\boldsymbol{\mu}}_{1}$ in (\ref{mu.Ortho.}) to obtain the following: \[ {\boldsymbol{\mu}}_{0}^{T}[({\bf z}-{\boldsymbol{\alpha}}_{0})({\bf z}-{\boldsymbol{\alpha}}_{0})^{T}-{\boldsymbol{\Sigma}}_{0}]{\boldsymbol{\mu}}_{0}+ {\boldsymbol{\mu}}_{0}^{T} {\boldsymbol{\Sigma}}_{0}{\boldsymbol{\mu}}_{1}= \lambda_{0} {\boldsymbol{\mu}}_{0}^{T}{\boldsymbol{\mu}}_{1}+ \lambda_{1} {\boldsymbol{\mu}}_{0}^{T}{\boldsymbol{\mu}}_{0},\] using (\ref{epsilon0 coef.}) and the previous result in (\ref{mu.Ortho.}), respectively, the second term on the left hand side and the first term on the right hand side equal zero. Then \begin{equation}\label{lamda1}
\lambda_{1}={\boldsymbol{\mu}}_{0}^{T}[({\bf z}-{\boldsymbol{\alpha}}_{0})({\bf z}-{\boldsymbol{\alpha}}_{0})^{T}-{\boldsymbol{\Sigma}}_{0}]{\boldsymbol{\mu}}_{0}= \left({\boldsymbol{\mu}}_{0}^{T}({\bf z}-{\boldsymbol{\alpha}}_{0}) \right)^{2}-\lambda_{0}. \end{equation}
If we let ${\bf z}={\boldsymbol{\alpha}}_{0}+ \gamma {\boldsymbol{\mu}}_{0}$, then $\lambda_{1}=O(\gamma^{2})$ as $\gamma\rightarrow\infty$. More generally, along all directions not perpendicular to ${\boldsymbol{\mu}}_{0}$, that is for ${\bf z}={\boldsymbol{\alpha}}_{0}+ \gamma {\bf v}$ with ${\bf v}^{T}{\boldsymbol{\mu}}_{0}\neq 0$, we have $\lambda_{1}=O(\|{\bf z}\|^{2})$.\\ Now, to find ${\boldsymbol{\mu}}_{1}$, rearrange (\ref{epsilon1 coef.}) to be in the form: \begin{equation}\label{epsilon1 coef.2}
({\boldsymbol{\Sigma}}_{0}-\lambda_{0}{\bf I}){\boldsymbol{\mu}}_{1}=\lambda_{1}{\boldsymbol{\mu}}_{0}-[({\bf z}-{\boldsymbol{\alpha}}_{0})({\bf z}-{\boldsymbol{\alpha}}_{0})^{T}-{\boldsymbol{\Sigma}}_{0}]{\boldsymbol{\mu}}_{0}. \end{equation} But $({\boldsymbol{\Sigma}}_{0}-\lambda_{0}{\bf I})$ is a singular matrix, since it does not have full rank. Then, instead, multiply by a generalized inverse $({\boldsymbol{\Sigma}}_{0}-\lambda_{0}{\bf I})^{+}$. To prepare for this step, note that ${\boldsymbol{\Sigma}}_{0}$ can be written as: \[{\boldsymbol{\Sigma}}_{0}=\lambda_{0}{\boldsymbol{\mu}}_{0}{\boldsymbol{\mu}}_{0}^{T}+\sum_{j\neq0}\lambda_{j}{\boldsymbol{\mu}}_{j}{\boldsymbol{\mu}}_{j}^{T},\] then \[({\boldsymbol{\Sigma}}_{0}-\lambda_{0}{\bf I})= \lambda_{0}{\boldsymbol{\mu}}_{0}{\boldsymbol{\mu}}_{0}^{T}+\sum_{j\neq0}\lambda_{j}{\boldsymbol{\mu}}_{j}{\boldsymbol{\mu}}_{j}^{T} -\lambda_{0}{\bf I},\] but ${\bf I}$ can also be written in the form \[{\bf I}={\boldsymbol{\mu}}_{0}{\boldsymbol{\mu}}_{0}^{T}+\sum_{j\neq0}{\boldsymbol{\mu}}_{j}{\boldsymbol{\mu}}_{j}^{T},\] thus \[({\boldsymbol{\Sigma}}_{0}-\lambda_{0}{\bf I})= \sum_{j\neq0}(\lambda_{j}-\lambda_{0}){\boldsymbol{\mu}}_{j}{\boldsymbol{\mu}}_{j}^{T},\] and so it is natural to define \[({\boldsymbol{\Sigma}}_{0}-\lambda_{0}{\bf I})^{+}= \sum_{j\neq0}(\lambda_{j}-\lambda_{0})^{-1}{\boldsymbol{\mu}}_{j}{\boldsymbol{\mu}}_{j}^{T},\] which turns out to be the Moore-Penrose inverse (\citet{Mardia&Kent&Bibby:1979}). Moreover, \[({\boldsymbol{\Sigma}}_{0}-\lambda_{0}{\bf I})^{+}({\boldsymbol{\Sigma}}_{0}-\lambda_{0}{\bf I})= {\bf I}-{\boldsymbol{\mu}}_{0}{\boldsymbol{\mu}}_{0}^{T},\] which results in \begin{equation}\label{mu0}
({\boldsymbol{\Sigma}}_{0}-\lambda_{0}{\bf I})^{+}({\boldsymbol{\Sigma}}_{0}-\lambda_{0}{\bf I}){\boldsymbol{\mu}}_{0}= {\bf 0} \end{equation} \begin{equation}\label{mu-1}
({\boldsymbol{\Sigma}}_{0}-\lambda_{0}{\bf I})^{+}({\boldsymbol{\Sigma}}_{0}-\lambda_{0}{\bf I}) {\boldsymbol{\mu}}_{1}= {\boldsymbol{\mu}}_{1}. \end{equation} Now, multiplying (\ref{epsilon1 coef.2}) from the left by the generalized inverse yields: \begin{equation}\label{} {\boldsymbol{\mu}}_{1}=({\boldsymbol{\Sigma}}_{0}-\lambda_{0}{\bf I})^{+}[\lambda_{1}{\boldsymbol{\mu}}_{0}-[({\bf z}-{\boldsymbol{\alpha}}_{0})({\bf z}-{\boldsymbol{\alpha}}_{0})^{T}-{\boldsymbol{\Sigma}}_{0}]{\boldsymbol{\mu}}_{0}], \end{equation} then using (\ref{mu0}) and (\ref{mu-1}) where $\lambda_{1}$ is given by (\ref{lamda1}) to obtain: \begin{eqnarray}\label{mu1}
{\boldsymbol{\mu}}_{1} &=& -({\boldsymbol{\Sigma}}_{0}-\lambda_{0}{\bf I})^{+} [({\bf z}-{\boldsymbol{\alpha}}_{0})({\bf z}-{\boldsymbol{\alpha}}_{0})^{T}-{\boldsymbol{\Sigma}}_{0}]{\boldsymbol{\mu}}_{0} \nonumber\\
&=& - [ ({\bf z}-{\boldsymbol{\alpha}}_{0})^{T}{\boldsymbol{\mu}}_{0}] ({\boldsymbol{\Sigma}}_{0}-\lambda_{0}{\bf I})^{+} ({\bf z}-{\boldsymbol{\alpha}}_{0}). \end{eqnarray} The quadratic form in (\ref{mu1}) is unbounded. To justify this result, let us consider ${\bf z}$ to be the following linear combination: \begin{equation}\label{newz}
{\bf z}={\boldsymbol{\alpha}}_{0} + \gamma {\boldsymbol{\mu}}_{0} + \eta {\bf v}, \end{equation} where ${\bf v}$ is orthogonal to ${\boldsymbol{\mu}}_{0}$, i.e. such that ${\bf v}^{T}{\boldsymbol{\mu}}_{0}=0$. Now, rewriting (\ref{mu1}) as a function of ${\bf z}$ as in (\ref{newz}) yields: \begin{eqnarray*}
{\boldsymbol{\mu}}_{1}({\bf z}) &=& -(\gamma {\boldsymbol{\mu}}_{0} + \eta {\bf v})^{T} {\boldsymbol{\mu}}_{0} ({\boldsymbol{\Sigma}}_{0}-\lambda_{0}{\bf I})^{+} (\gamma {\boldsymbol{\mu}}_{0} + \eta {\bf v})\\
&=& - \eta \gamma ({\boldsymbol{\Sigma}}_{0}-\lambda_{0}{\bf I})^{+} {\bf v}. \end{eqnarray*}
In the previous equation, the quantity $({\boldsymbol{\Sigma}}_{0}-\lambda_{0}{\bf I})^{+} {\bf v}\neq {\bf 0}$, since ${\boldsymbol{\Sigma}}_{0}$ has full rank and ${\bf v}$ is orthogonal to ${\boldsymbol{\mu}}_{0}$, and $\eta$ and $\gamma$ can be chosen to make $\|{\boldsymbol{\mu}}_{1}\|$ as large as we desire. Moreover, $\|{\boldsymbol{\mu}}_{1}\|= O(\|{\bf z}\|^{2})$ as $\|{\bf z}\| \rightarrow \infty$ and the $O(\|{\bf z}\|^{2})$ rate is achieved in all directions except those for which $\gamma =0$ or $\eta =0$.\\ The influence function for ${\boldsymbol{\mu}}$ is: \begin{eqnarray*} IF_{{\boldsymbol{\mu}}}({\bf z}, F) &=& \lim _{\epsilon \rightarrow 0}\left(\frac{{\boldsymbol{\mu}}_{[\epsilon]}-{\boldsymbol{\mu}}_{0}}{\epsilon}\right)={\boldsymbol{\mu}}_{1}({\bf z}) \nonumber \\ &=& - \big( ({\bf z}-{\boldsymbol{\alpha}}_{0})^{T}{\boldsymbol{\mu}}_{0} \big) ({\boldsymbol{\Sigma}}_{0}-\lambda_{0}{\bf I})^{+} ({\bf z}-{\boldsymbol{\alpha}}_{0}), \end{eqnarray*} where ${\boldsymbol{\mu}}_{[\epsilon]}$ is given in (\ref{mu-eps}). We can follow the same steps to find the IF for ${\lambda}_{[\epsilon]}$, which is: \begin{eqnarray*} IF_{{\boldsymbol{\lambda}}}({\bf z}, F) = \lim _{\epsilon \rightarrow 0}\left(\frac{{\boldsymbol{\lambda}}_{[\epsilon]}-{\boldsymbol{\lambda}}_{0}}{\epsilon}\right)={\boldsymbol{\lambda}}_{1}({\bf z}) \nonumber = \big( ({\bf z}-{\boldsymbol{\alpha}}_{0})^{T}{\boldsymbol{\mu}}_{0} \big)^2 - \lambda_0, \end{eqnarray*} where ${\lambda}_{[\epsilon]}$ is defined in (\ref{lamda-eps}). Further, let ${\bf z}={\boldsymbol{\alpha}}_{0} + \gamma {\boldsymbol{\mu}}_{0} + \eta {\boldsymbol{v}}$ where ${\boldsymbol{v}}$ is orthogonal to ${\boldsymbol{\mu}}_{0}$ and $\gamma,\eta\neq 0$; then \begin{equation*}
\lim _{||\boldsymbol{z}|| \rightarrow \infty}||{\boldsymbol{\mu}}_{1}(\boldsymbol{z})|| = \infty, \end{equation*} and similarly for ${{\lambda}}_{1}({\bf z})$, completing the proof that both IFs are unbounded.
\end{comment}
\subsection{Proof of Proposition \ref{influence:func:cauchy:pca}} \label{robust:cauchy:proof}
Let us first make the notation more explicit: denote by $l_F({\bf u}|\boldsymbol{\theta})$ the Cauchy log-likelihood function with respect to the distribution $F$, and by $\hat{{\bf u}}_F$ the corresponding leading Cauchy principal direction. Our goal is then to calculate the limit of $$ \frac{1}{\epsilon} (\hat{{\bf u}}_{F_{\epsilon,{\bf z}}} - \hat{{\bf u}}_F) $$ as $\epsilon\to 0$, where $\hat{{\bf u}}_{F_{\epsilon,{\bf z}}}$ is the leading Cauchy principal direction for the distribution $F_{\epsilon,{\bf z}}=(1-\epsilon)F+\epsilon \Delta_{\bf z}$. The optimality condition for the leading Cauchy principal direction reads \begin{equation}
{\bf P}_{\hat{{\bf u}}_{F_{\epsilon,{\bf z}}}} \left. \frac{\partial}{\partial{\bf u}} l_{F_{\epsilon,{\bf z}}}\big({\bf u}|\boldsymbol{\theta}_{F_{\epsilon,{\bf z}}}({\bf u})\big) \right|_{{\bf u}=\hat{{\bf u}}_{F_{\epsilon,{\bf z}}}} = 0 \label{opt:cond:perturbed} \end{equation} and $$
{\bf P}_{\hat{{\bf u}}_{F}} \left. \frac{\partial}{\partial{\bf u}} l_{F}\big({\bf u}|\boldsymbol{\theta}_{F}({\bf u})\big) \right|_{{\bf u}=\hat{{\bf u}}_{F}} = 0 $$ Moreover, $\hat{{\bf u}}_{F_{\epsilon,{\bf z}}}$ is a unit vector which can be represented as $$ \hat{{\bf u}}_{F_{\epsilon,{\bf z}}} = \cos(\rho) \hat{{\bf u}}_F + \sin(\rho) {\bf h} $$ where ${\bf h}$ is a unit vector perpendicular to $\hat{{\bf u}}_F$ and $\rho$ is a (small) real number. Under these assumptions, $\hat{{\bf u}}_{F_{\epsilon,{\bf z}}}$ is a unit vector since $$
||\hat{{\bf u}}_{F_{\epsilon,{\bf z}}}||_2^2 = \cos^2(\rho) ||\hat{{\bf u}}_F||_2^2 + \sin^2(\rho) ||{\bf h}||_2^2 = 1 $$ Of course, $\rho$ depends on $\epsilon$ and ${\bf z}$ (i.e., $\rho=\rho(\epsilon,{\bf z})$) with $\lim_{\epsilon\to 0} \rho = 0$, but we suppress this dependence in the notation since the explicit relationship is not required in our proof. Moreover, a Taylor expansion for the representation leads to $$ \hat{{\bf u}}_{F_{\epsilon,{\bf z}}} = \hat{{\bf u}}_F + \rho {\bf h} + O(\rho^2) $$ thus we obtain that $$ {\bf P}_{\hat{{\bf u}}_{F_{\epsilon,{\bf z}}}} = {\bf P}_{\hat{{\bf u}}_{F}} - \rho (\hat{{\bf u}}_{F} {\bf h}^T + {\bf h} \hat{{\bf u}}_{F}^T) + O(\rho^2) $$
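The last expansion can be checked directly, assuming, as in the rest of the argument, that ${\bf P}_{\bf v}={\bf I}-{\bf v}{\bf v}^T$ denotes the projection onto the orthogonal complement of the unit vector ${\bf v}$ (a step we spell out here for completeness):
$$ \hat{{\bf u}}_{F_{\epsilon,{\bf z}}} \hat{{\bf u}}_{F_{\epsilon,{\bf z}}}^T = \big(\hat{{\bf u}}_F + \rho {\bf h} + O(\rho^2)\big)\big(\hat{{\bf u}}_F + \rho {\bf h} + O(\rho^2)\big)^T = \hat{{\bf u}}_F \hat{{\bf u}}_F^T + \rho (\hat{{\bf u}}_{F} {\bf h}^T + {\bf h} \hat{{\bf u}}_{F}^T) + O(\rho^2) $$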
Next, we compute the partial derivative using the chain rule $$
\frac{\partial}{\partial{\bf u}} l_{F}\big({\bf u}|\boldsymbol{\theta}_{F}({\bf u})\big) = \int_{\mathbb R^p} \left[ \frac{\partial}{\partial c} g(c({\bf u}), \boldsymbol{\theta}_{F}({\bf u})) \frac{\partial}{\partial {\bf u}} c({\bf u}) + \frac{\partial}{\partial \boldsymbol{\theta}} g(c({\bf u}), \boldsymbol{\theta}_{F}({\bf u})) \frac{\partial}{\partial {\bf u}} \boldsymbol{\theta}_{F}({\bf u}) \right] dF({\bf x}) $$ Therefore, \begin{equation*} \begin{aligned}
\left. \frac{\partial}{\partial{\bf u}} l_{F}\big({\bf u}|\boldsymbol{\theta}_{F}({\bf u})\big) \right|_{{\bf u}=\hat{{\bf u}}_{F}} &= \int_{\mathbb R^p} \left[ \bar{g}_c({\bf x};\hat{{\bf u}}_{F}) {\bf x} + \bar{g}_{\boldsymbol{\theta}}({\bf x};\hat{{\bf u}}_{F})
\frac{\partial}{\partial {\bf u}} \boldsymbol{\theta}_{F}({\bf u}) \Big|_{{\bf u}=\hat{{\bf u}}_{F}} \right] dF({\bf x}) \\ &= \int_{\mathbb R^p} \bar{g}_c({\bf x};\hat{{\bf u}}_{F}) {\bf x} dF({\bf x}) + \int_{\mathbb R^p} \bar{g}_{\boldsymbol{\theta}}({\bf x};\hat{{\bf u}}_{F}) dF({\bf x})
\frac{\partial}{\partial {\bf u}} \boldsymbol{\theta}_{F}({\bf u}) \Big|_{{\bf u}=\hat{{\bf u}}_{F}} \\ &= \int_{\mathbb R^p} \bar{g}_c({\bf x};\hat{{\bf u}}_{F}) {\bf x} dF({\bf x}) \end{aligned} \end{equation*} The second summand equals zero because $\boldsymbol{\theta}_{F}({\bf u})$ is chosen to maximize the Cauchy log-likelihood over $\boldsymbol{\theta}$, so that $\int_{\mathbb R^p} \bar{g}_{\boldsymbol{\theta}}({\bf x};\hat{{\bf u}}_{F}) dF({\bf x}) = {\bf 0}$. Similarly, \begin{equation*} \begin{aligned}
&\left. \frac{\partial}{\partial{\bf u}} l_{F_{\epsilon,{\bf z}}}\big({\bf u}|\boldsymbol{\theta}_{F_{\epsilon,{\bf z}}}({\bf u})\big) \right|_{{\bf u}=\hat{{\bf u}}_{F_{\epsilon,{\bf z}}}} = \int_{\mathbb R^p} \bar{g}_c({\bf x};\hat{{\bf u}}_{F_{\epsilon,{\bf z}}}) {\bf x} dF_{\epsilon,{\bf z}} ({\bf x}) \\ =& (1-\epsilon) \int_{\mathbb R^p} \bar{g}_c({\bf x};\hat{{\bf u}}_{F_{\epsilon,{\bf z}}}) {\bf x} dF({\bf x}) + \epsilon \bar{g}_c({\bf z};\hat{{\bf u}}_{F_{\epsilon,{\bf z}}}) {\bf z} \end{aligned} \end{equation*}
Next, we further Taylor expand $\bar{g}_{c}({\bf x}; \hat{{\bf u}}_{F_{\epsilon,{\bf z}}})$ using $\hat{{\bf u}}_{F_{\epsilon,{\bf z}}} = \hat{{\bf u}}_F + \rho {\bf h} + O(\rho^2)$: $$ \bar{g}_{c}({\bf x}; \hat{{\bf u}}_{F_{\epsilon,{\bf z}}}) = \bar{g}_{c}({\bf x}; \hat{{\bf u}}_{F})
+ \rho {\bf h} \frac{\partial}{\partial{\bf u}} \bar{g}_{c}({\bf x}; {{\bf u}}) \Big|_{{\bf u}=\hat{{\bf u}}_{F}} + O(\rho^2) $$ Using again the chain rule, we obtain that $$ \frac{\partial}{\partial{\bf u}} \bar{g}_{c}({\bf x}; {{\bf u}}) = \bar{g}_{cc}({\bf x}; {{\bf u}}) {\bf x} + \bar{g}_{c\boldsymbol{\theta}}({\bf x}; {{\bf u}}) \frac{\partial}{\partial{\bf u}} \boldsymbol{\theta}_{F}({\bf u}) $$ The computation of the partial derivative $\frac{\partial}{\partial {\bf u}} \boldsymbol{\theta}_{F}({\bf u})$ follows. Formula
$\boldsymbol{\theta}_{F}({\bf u}) = \argmax_{\boldsymbol{\theta}} l_F({\bf x}^T{\bf u}|\boldsymbol{\theta})$ implies that \begin{equation*}
\frac{\partial}{\partial \boldsymbol{\theta}} l_F(c({\bf u})|\boldsymbol{\theta}) \Big|_{\boldsymbol{\theta}=\boldsymbol{\theta}_{F}({\bf u})} = 0 \ . \end{equation*} Differentiating with respect to ${\bf u}$ and using the implicit function theorem, we get \begin{equation*} \begin{aligned} \frac{\partial}{\partial {\bf u}} \boldsymbol{\theta}_{F}({\bf u}) &=
- \frac{\partial}{\partial {\bf u}} \frac{\partial}{\partial \boldsymbol{\theta}} l_F(c({\bf u})|\boldsymbol{\theta}) \Big|_{\boldsymbol{\theta}=\boldsymbol{\theta}_{F}({\bf u})} \left[ \frac{\partial^2}{\partial \boldsymbol{\theta}^2} l_F(c({\bf u})|\boldsymbol{\theta}) \Big|_{\boldsymbol{\theta}=\boldsymbol{\theta}_{F}({\bf u})} \right]^{-1} \\ &= - \int_{\mathbb R^p} {\bf x} \bar{g}_{c\boldsymbol{\theta}}({\bf x};{\bf u}) dF({\bf x}) \left[ \int_{\mathbb R^p} \bar{g}_{\boldsymbol{\theta}\boldsymbol{\theta}}({\bf x};{\bf u})dF({\bf x}) \right]^{-1} \end{aligned} \end{equation*}
Thus, \begin{equation*} \begin{aligned} &\bar{g}_{c}({\bf x}; \hat{{\bf u}}_{F_{\epsilon,{\bf z}}}) = \bar{g}_{c}({\bf x}; \hat{{\bf u}}_{F}) \\ +& \rho {\bf h} \left[\bar{g}_{cc}({\bf x}; \hat{{\bf u}}_F) {\bf x} + \int_{\mathbb R^p} {\bf x} \bar{g}_{c\boldsymbol{\theta}}({\bf x};\hat{{\bf u}}_{F}) dF({\bf x}) \left[ \int_{\mathbb R^p} \bar{g}_{\boldsymbol{\theta}\boldsymbol{\theta}}({\bf x};\hat{{\bf u}}_{F})dF({\bf x}) \right]^{-1} \bar{g}_{c\boldsymbol{\theta}}({\bf x}; {\hat{{\bf u}}_F}) \right] + O(\rho^2) \end{aligned} \end{equation*}
Overall, (\ref{opt:cond:perturbed}) becomes \begin{equation*} \begin{aligned} &\left[ {\bf P}_{\hat{{\bf u}}_F} - \rho (\hat{{\bf u}}_{F} {\bf h}^T + {\bf h} \hat{{\bf u}}_{F}^T) + O(\rho^2) \right] \cdot \left[ (1-\epsilon) \int_{\mathbb R^p} \bar{g}_c({\bf x};\hat{{\bf u}}_{F_{\epsilon,{\bf z}}}) {\bf x} dF({\bf x}) + \epsilon \bar{g}_c({\bf z};\hat{{\bf u}}_{F_{\epsilon,{\bf z}}}) {\bf z} \right] = 0 \\ \Rightarrow& {\bf P}_{\hat{{\bf u}}_F} \int_{\mathbb R^p} \bar{g}_c({\bf x};\hat{{\bf u}}_{F_{\epsilon,{\bf z}}}) {\bf x} dF({\bf x}) - \rho (\hat{{\bf u}}_{F} {\bf h}^T + {\bf h} \hat{{\bf u}}_{F}^T) \int_{\mathbb R^p} \bar{g}_c({\bf x};\hat{{\bf u}}_{F}) {\bf x} dF({\bf x}) + O(\rho^2) \\ &= \epsilon {\bf P}_{\hat{{\bf u}}_F} \left[\int_{\mathbb R^p} \bar{g}_c({\bf x};\hat{{\bf u}}_{F}) {\bf x} dF({\bf x}) - \bar{g}_c({\bf z};\hat{{\bf u}}_{F_{\epsilon,{\bf z}}}) {\bf z} \right] + O(\epsilon\rho) \\ \Rightarrow& \rho {\bf h} \left[ \int_{\mathbb R^p}\bar{g}_{cc}({\bf x}; \hat{{\bf u}}_F) {\bf x}^T {\bf x} dF({\bf x}) + \int_{\mathbb R^p} \bar{g}_c({\bf x};\hat{{\bf u}}_{F}) \hat{{\bf u}}_{F}^T {\bf x} dF({\bf x}) \right. \\ &+ \left. 
\int_{\mathbb R^p} {\bf x} \bar{g}_{c\boldsymbol{\theta}}({\bf x};\hat{{\bf u}}_{F}) dF({\bf x}) \left[ \int_{\mathbb R^p} \bar{g}_{\boldsymbol{\theta}\boldsymbol{\theta}}({\bf x};\hat{{\bf u}}_{F})dF({\bf x}) \right]^{-1} \int_{\mathbb R^p} \bar{g}_{c\boldsymbol{\theta}}({\bf x}; {\hat{{\bf u}}_F}) {\bf x} dF({\bf x}) \right] + O(\rho^2) \\ &= \epsilon {\bf P}_{\hat{{\bf u}}_F} \left[\int_{\mathbb R^p} \bar{g}_c({\bf x};\hat{{\bf u}}_{F}) {\bf x} dF({\bf x}) - \bar{g}_c({\bf z};\hat{{\bf u}}_{F_{\epsilon,{\bf z}}}) {\bf z} \right] + O(\epsilon\rho) \end{aligned} \end{equation*} where we use the facts that $${\bf P}_{\hat{{\bf u}}_F} {\bf h} = {\bf h}$$ and $${\bf h}^T \int_{\mathbb R^p} \bar{g}_c({\bf x};\hat{{\bf u}}_{F}) {\bf x} dF({\bf x}) = {\bf P}_{\hat{{\bf u}}_F} \int_{\mathbb R^p} \bar{g}_c({\bf x};\hat{{\bf u}}_{F}) {\bf x} dF({\bf x})
= {\bf P}_{\hat{{\bf u}}_F} \left. \frac{\partial}{\partial{\bf u}} l_{F}\big({\bf u}|\boldsymbol{\theta}_{F}({\bf u})\big) \right|_{{\bf u}=\hat{{\bf u}}_{F}} = 0 $$
Thus, the influence function is $$ IF_{\hat{{\bf u}}_F} ({\bf z}, F) = \lim_{\epsilon\to0} \frac{\rho{\bf h}}{\epsilon} = {\bf A}^{-1} {\bf b} $$ where \begin{equation*} \begin{aligned} {\bf A} &= {\bf I}_{p} \left[ \int_{\mathbb R^p}\bar{g}_{cc}({\bf x}; \hat{{\bf u}}_F) {\bf x}^T {\bf x} dF({\bf x}) + \int_{\mathbb R^p} \bar{g}_c({\bf x};\hat{{\bf u}}_{F}) \hat{{\bf u}}_{F}^T {\bf x} dF({\bf x}) \right] \\ &+ \int_{\mathbb R^p} {\bf x} \bar{g}_{c\boldsymbol{\theta}}({\bf x};\hat{{\bf u}}_{F}) dF({\bf x}) \left[ \int_{\mathbb R^p} \bar{g}_{\boldsymbol{\theta}\boldsymbol{\theta}}({\bf x};\hat{{\bf u}}_{F})dF({\bf x}) \right]^{-1} \int_{\mathbb R^p} \bar{g}_{c\boldsymbol{\theta}}({\bf x}; {\hat{{\bf u}}_F}) {\bf x} dF({\bf x}) \end{aligned} \end{equation*} and $$ {\bf b} = {\bf P}_{\hat{{\bf u}}_F} \left[\int_{\mathbb R^p} \bar{g}_c({\bf x};\hat{{\bf u}}_{F}) {\bf x} dF({\bf x}) - \bar{g}_c({\bf z};\hat{{\bf u}}_{F}) {\bf z} \right] $$ (in the expression for ${\bf b}$ we have taken the limit $\epsilon\to 0$, so that $\hat{{\bf u}}_{F_{\epsilon,{\bf z}}}$ is replaced by $\hat{{\bf u}}_F$).
\begin{comment} \subsection{Proof of Lemma \ref{d/du(ThetaF(u))}} To find the influence function for the functional version of the parameter vector ${\boldsymbol {\theta}}_{F}({\bf u})$ at fixed ${\bf u}$, starting by step $1$ in section (\ref{Pre}). In this step we need to find the derivative of ${\hat{\boldsymbol {\theta}}}_{F}({\bf u})$ with respect to ${\bf u}$ as the following:\\ \begin{eqnarray*} \frac{\partial}{\partial {\boldsymbol{\theta}}} m({\bf u},{\boldsymbol {\theta}}) &=& \frac{\partial}{\partial {\boldsymbol{\theta}}} \int_{{\bf x} \in {\mathbb{R}}^{p}} l[{\bf x}; {\bf u}] dF({\bf x}) = \int_{{\bf x} \in {\mathbb{R}}^{p}} l_{; {\boldsymbol {\theta}}}[{\bf x}; {\bf u}] dF({\bf x}). \end{eqnarray*} Choose ${\boldsymbol {\theta}}_{F}({\bf u})$ in step $1$ to satisfy the condition $M-$estimator which is \begin{equation}\label{1}
\int_{{\bf x} \in \mathbb{R}^{p}} l_{;{\boldsymbol {\theta}}}[{\bf x}; {\bf u}] dF({\bf x}) = {\bf 0}. \end{equation} Suppose we now rotate ${\bf u} \mapsto {\bf v}$, where ${\bf v}$ is defined in (\ref{v}) such that: \begin{equation*}
{\bf v}= {\bf u} \cos{\alpha} + {\bf h} \sin{\alpha}, \end{equation*} Now (\ref{1}) still holds for ${\bf v}$, so \begin{equation}\label{2}
\int_{{\bf x} \in \mathbb{R}^{p}} l_{;{\boldsymbol {\theta}}}[{\bf x}; {\bf v}] dF({\bf x}) = {\bf 0}. \end{equation} For small ${\alpha},$ a Taylor expansion gives \begin{eqnarray}\label{3}
\nonumber l_{;{\boldsymbol {\theta}}}[{\bf x}; {\bf v}]&=& l_{;{\boldsymbol {\theta}}}[{\bf x}; ({\bf u} \cos{\alpha}+ {\bf h} \sin{\alpha})]\\ &=& l_{;{\boldsymbol {\theta}}}[{\bf x}; {\bf u}]+ l_{c;{\boldsymbol {\theta}}}[{\bf x}; {\bf u}]{\bf x}^{T}{\bf h} \alpha + l_{;{\boldsymbol {\theta}}{\boldsymbol {\theta}}}[{\bf x}; {\bf u}] \Delta_{F,{\bf h},\alpha}+ O(\alpha^{2}), \end{eqnarray} where $\Delta_{F,{\bf h},\alpha}$ is defined in (\ref{Delta(F,h,alpha)}). Substituting (\ref{3}) into (\ref{2}) yields \begin{equation}\label{3-a}
\int_{{\bf x} \in \mathbb{R}^{p}} \left[l_{;{\boldsymbol {\theta}}}[{\bf x}; {\bf u}] + l_{c;{\boldsymbol {\theta}}}[{\bf x}; {\bf u}] {\bf x}^{T}{\bf h} \alpha + l_{;{\boldsymbol {\theta}}{\boldsymbol {\theta}}}[{\bf x}; {\bf u}] \Delta_{F,{\bf h},\alpha}\right]dF({\bf x}) = O(\alpha^{2}). \end{equation} Using (\ref{1}) on the first term in the integrand of (\ref{3-a}) and rearranging the equation, we obtain \begin{equation*} \left(\int_{{\bf x} \in \mathbb{R}^{p}} l_{;{\boldsymbol {\theta}}{\boldsymbol {\theta}}}[{\bf x}; {\bf u}]dF({\bf x}) \right) \Delta_{F,{\bf h},\alpha} = - \alpha \int_{{\bf x} \in \mathbb{R}^{p}} l_{c;{\boldsymbol {\theta}}}[{\bf x}; {\bf u}] {\bf x}^{T}{\bf h} dF({\bf x}) + O(\alpha^{2}), \end{equation*} which leads to \begin{equation}\label{3-b} \frac{\Delta_{F,{\bf h},\alpha}}{\alpha} = - \left(\int_{{\bf x} \in \mathbb{R}^{p}} l_{;{\boldsymbol {\theta}}{\boldsymbol {\theta}}}[{\bf x}; {\bf u}]dF({\bf x}) \right)^{-1} \int_{{\bf x} \in \mathbb{R}^{p}} l_{c;{\boldsymbol {\theta}}}[{\bf x}; {\bf u}]{\bf x}^{T}{\bf h}dF({\bf x}) + O(\alpha) \end{equation} Taking the limit of (\ref{3-b}) as $\alpha \rightarrow 0$, we obtain \begin{equation}\label{4a}
\lim _{\alpha \rightarrow 0} \frac{\Delta_{F,h,\alpha}}{\alpha} = - {\boldsymbol \Xi}^{-1} \int_{{\bf x} \in \mathbb{R}^{p}} l_{c;{\boldsymbol {\theta}}}[{\bf x}; {\bf u}]{\bf x}^{T}{\bf h}dF({\bf x}). \end{equation} where ${\boldsymbol \Xi}$ is defined in (\ref{xi-CPC}).
$\square$
\subsection{Proof of Lemma \ref{d/dF(ThetaF(u))}} Choose ${\boldsymbol {\theta}}_{F}({\bf u})$ in step $1$ to satisfy the $M$-estimator condition \begin{equation}\label{1}
\int_{{\bf x} \in \mathbb{R}^{p}} l_{;{\boldsymbol {\theta}}}({\bf x}^{T}{\bf u}; {\boldsymbol {\theta}}_{{\bf z},\epsilon}({\bf u})) dF_{{\bf z},\epsilon}({\bf x}) = {\bf 0}. \end{equation} For fixed ${\bf u}$ and variable $F$: \begin{equation}\label{1-1CPC}
l_{;{\boldsymbol {\theta}}}({\bf x}^{T}{\bf u}; {\boldsymbol {\theta}}_{{\bf z},\epsilon}({\bf u}))= l_{;{\boldsymbol {\theta}}}[{\bf x}; {\bf u}]+ l_{;{\boldsymbol {\theta}}{\boldsymbol {\theta}}}[{\bf x}; {\bf u}] \Gamma_{F,{\bf h},\alpha}+ O(\epsilon^{2}), \end{equation} where $\Gamma_{F,{\bf h},\alpha}$ is given in (\ref{Gamma(F,h,alpha)}). Then \begin{eqnarray*} \int_{{\bf x} \in \mathbb{R}^{p}} l_{;{\boldsymbol {\theta}}}({\bf x}^{T}{\bf u}; {\boldsymbol {\theta}}_{{\bf z},\epsilon}({\bf u})) dF_{{\bf z},\epsilon}({\bf x}) &=& \int_{{\bf x} \in \mathbb{R}^{p}} l_{;{\boldsymbol {\theta}}{\boldsymbol {\theta}}}[{\bf x}; {\bf u}] \Gamma_{F,{\bf h},\alpha} dF_{\bf z,\epsilon}({\bf x}) \nonumber \\ & & + \epsilon {\hspace{1mm}} l_{;{\boldsymbol {\theta}}}[{\bf z}; {\bf u}] + O(\epsilon^{2}), \end{eqnarray*} which implies that \begin{eqnarray}\label{4b} \lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon}{\hspace{1mm}} \Gamma_{F,{\bf h},\alpha} &=& - \left(\int_{{\bf x} \in \mathbb{R}^{p}} l_{;{\boldsymbol {\theta}}{\boldsymbol {\theta}}}[{\bf x}; {\bf u}]dF({\bf x}) \right)^{-1} l_{;{\boldsymbol {\theta}}} [{\bf z}; {\bf u}] \nonumber \\ &=& - {\boldsymbol \Xi}^{-1} l_{;{\boldsymbol {\theta}}} [{\bf z}; {\bf u}], \end{eqnarray} where ${\boldsymbol \Xi}$ is defined in (\ref{xi-CPC}).
$\square$
\subsection{Proof of Lemma \ref{lemma_cauchyIF}} In Section \ref{d/dF(ThetaF(u))}, we found the derivative of ${\boldsymbol {\theta}}_{F}({\bf u})$ for fixed ${\bf u}$, by satisfying step $1$ in which $m({\bf u},{\boldsymbol {\theta}})$ is maximised. In this section we want to find the influence function for ${\hat{\bf u}}$, by satisfying step $2$ in section \ref{Pre}. Using (\ref{1}), note that \begin{eqnarray*}
\nonumber \frac{\partial}{\partial{\bf u}} \int_{{\bf x} \in \mathbb{R}^{p}} l[{\bf x}; {\bf u}]dF({\bf x}) &=& \int_{{\bf x} \in \mathbb{R}^{p}}\left(l_{c;}[{\bf x}; {\bf u}]{\bf x} + l_{;{\boldsymbol {\theta}}}[{\bf x}; {\bf u}]\frac{\partial{\boldsymbol {\theta}}_{F}}{\partial{\bf u}}\right) dF({\bf x}) \\
&=& \int_{{\bf x} \in \mathbb{R}^{p}} l_{c;}[{\bf x}; {\bf u}]{\bf x}dF({\bf x}), \end{eqnarray*} where the second term above is $0$ by definition of ${\boldsymbol {\theta}}_{F}({\bf u}).$ Therefore, a necessary condition for ${\bf u}$ to be a minimum of $m({\bf u}; {\boldsymbol {\theta}}_{F}({\bf u}))$ is \begin{equation}\label{5}
{\bf P_{u}} \int_{{\bf x} \in \mathbb{R}^{p}} l_{c;}[{\bf x}; {\bf u}]{\bf x}dF({\bf x})= {\bf 0}, \end{equation} where ${\bf P_{u}}= {\bf I}_{p} - {\bf u}{\bf u}^{T}$ (see (\ref{Pu0})) is the projection matrix which appears because ${\bf u}$ is a unit vector: as we explain in section \ref{P}, we treat ${\bf u}$ as a general vector and then project onto the subspace that is orthogonal to ${\bf u}$.
Now, consider a mixture distribution which is defined in (\ref{mix.dis.}) where $\epsilon \in {[0,1)}$ is small and $\delta_{\bf z}$ is the distribution which assigns all probability to ${\bf z}$ as in (\ref{delta}). Assume ${\bf v}= {\bf u}\cos{\alpha}+{\bf h} \sin{\alpha}$ where ${\bf h}$ and ${\alpha}$ depend on $\epsilon$ and ${\bf z}$. Then \begin{equation}\label{6}
{\bf P}_{\bf v} \int_{{\bf x} \in \mathbb{R}^{p}} l_{c;}\left({\bf x}^{T}{\bf v}; {\boldsymbol {\theta}}_{{\bf z},\epsilon}({\bf v})\right){\bf x} dF_{{\bf z},\epsilon}({\bf x})= {\bf 0} \end{equation} When $\epsilon$ and $\alpha$ are small, then \begin{eqnarray}\label{7}
{\bf P}_{\bf v}={\bf P}_{({\bf u}\cos{\alpha}+{\bf h} \sin{\alpha})} &=& {\bf I} - ({{\bf u}\cos{\alpha}+{\bf h} \sin{\alpha}})({{\bf u}\cos{\alpha}+{\bf h} \sin{\alpha}})^{T} \nonumber\\
&=& {\bf I}-{\bf u}{\bf u}^{T}-\alpha({\bf uh}^{T}+{\bf hu}^{T})+ O(\alpha^{2}) \nonumber\\
&=& {\bf P_{u}}- \alpha({\bf uh}^{T}+{\bf hu}^{T})+ O(\alpha^{2}). \end{eqnarray} Therefore, from Lemmas \ref{d/du(ThetaF(u))} and \ref{d/dF(ThetaF(u))}, using the fact that \begin{eqnarray}
({\boldsymbol {\theta}}_{{\bf z},\epsilon}({\bf v})-{\boldsymbol {\theta}}_{F}({\bf u})) &=& ({\boldsymbol {\theta}}_{{\bf z},\epsilon}({\bf v})-{\boldsymbol {\theta}}_{F}({\bf v}))+({\boldsymbol {\theta}}_{F}({\bf v})-{\boldsymbol {\theta}}_{F}({\bf u})) \nonumber\\
&=& ({\boldsymbol {\theta}}_{{\bf z},\epsilon}({\bf u})-{\boldsymbol {\theta}}_{F}({\bf u}))+({\boldsymbol {\theta}}_{F}({\bf v})-{\boldsymbol {\theta}}_{F}({\bf u}))+ O(\epsilon^{2})\nonumber\\ &=& \Gamma_{F, {\bf h}, \alpha} + \Delta_{F, {\bf h}, \alpha} + O(\epsilon^{2}). \end{eqnarray} we have \begin{eqnarray}\label{8}
l_{c;}({\bf x}^{T}{\bf v};{\boldsymbol{\theta}}_{{\bf z},\epsilon}({\bf v}))&=&l_{c;}[{\bf x}; ({\bf u}\cos{\alpha}+{\bf h}\sin{\alpha})]\nonumber\\
&=&l_{c;}[{\bf x}; {\bf u}] + l_{cc;}[{\bf x}; {\bf u}]{\bf x}^{T}{\bf h}\alpha + l_{c;\boldsymbol{\theta}}[{\bf x}; {\bf u}]\Delta_{F,{\bf h},\alpha} \nonumber \\
& &+ \ l_{c; \boldsymbol{\theta}}[{\bf x}; {\bf u}]\Gamma_{F, {\bf h}, \alpha} + O(\alpha^{2}), \end{eqnarray} where $\Delta_{F, {\bf h}, \alpha} $ and $\Gamma_{F, {\bf h}, \alpha}$ are defined in section \ref{P} and \ref{d/du(ThetaF(u))} respectively. Substituting (\ref{7}) and (\ref{8}) into (\ref{6}), and ignoring $O(\alpha^{2})$ terms, we obtain \begin{equation}\label{}
\left({\bf P_{u}} - \alpha({\bf hu}^{T}+{\bf uh}^{T})\right) \int_{{\bf x} \in \mathbb{R}^{p}} {\bf T}\,{\bf x} [(1-\epsilon)dF({\bf x}) + \epsilon {\hspace{1mm}} d \delta_{\bf z}({\bf x})]={\bf 0}, \end{equation} where \[{\bf T}= l_{c;}[{\bf x}; {\bf u}] + l_{cc;}[{\bf x}; {\bf u}]{\bf x}^{T}{\bf h}\alpha + l_{c;\boldsymbol{\theta}}[{\bf x};{\bf u}]\Delta_{F,{\bf h},\alpha}+l_{c;\boldsymbol{\theta}}[{\bf x}; {\bf u}]\Gamma_{F,{\bf h},\alpha}.\]
Then collecting the $O(\alpha)$ terms we get: \begin{eqnarray*}
O(\alpha) &=& -\alpha({\bf hu}^{T}+ {\bf uh}^{T})\int_{{\bf x} \in \mathbb{R}^{p}} l_{c;}[{\bf x}; {\bf u}] {\bf x} dF({\bf x}) + \epsilon l_{c;}[{\bf z}; {\bf u}]{\bf z} \\ & & + {\bf P_{u}} \int_{{\bf x} \in \mathbb{R}^{p}} \left( l_{cc;}[{\bf x}; {\bf u}]{\bf x}^{T} {\bf h} \alpha + l_{c;{\boldsymbol \theta}}[{\bf x}; {\bf u}]\Delta_{F,h,\alpha}+ l_{c;{\boldsymbol \theta}}[{\bf x};{\bf u}]\Gamma_{F,h,\alpha} \right) {\bf x} dF({\bf x}) \\ &=& {\bf 0}. \end{eqnarray*} Now (\ref{1}) implies that \[{\bf h}^{T}\int_{{\bf x} \in \mathbb{R}^{p}} l_{c;}[{\bf x}; {\bf u}] {\bf x} dF({\bf x}) ={\bf 0},\] and using (\ref{4b}), \begin{equation}\label{Gamma approx}
\Gamma_{F,h,\alpha}\simeq - \epsilon \left[\int_{{\bf x} \in \mathbb{R}^{p}} l_{; {\boldsymbol {\theta \theta}}}[{\bf x}; {\bf u}]dF({\bf x})\right]^{-1} l_{;{\boldsymbol \theta}}[{\bf z}; {\bf u}]. \end{equation} Consequently, \[\alpha {\bf A h}= \epsilon {\bf B}+O(\epsilon^{2}),\] where \begin{eqnarray}\label{A}
{\bf A} &=& {\bf I}_{p} \displaystyle \int_{{\bf x} \in \mathbb{R}^{p}} l_{c;}[{\bf x}; {\bf u}]{\bf x}^{T}{\bf u} dF({\bf x}) - {\bf P_{u}} \displaystyle \int_{{\bf x} \in \mathbb{R}^{p}} l_{cc;}[{\bf x}; {\bf u}]{\bf x}{\bf x}^{T} dF(\bf x) {\bf P_{u}} \nonumber\\
& & + {\bf P_{u}} \left(\displaystyle \int_{{\bf x} \in \mathbb{R}^{p}} {\bf x} l_{c;{\boldsymbol {\theta}}}[{\bf x}; {\bf u}]dF({\bf x})\right)\left(\displaystyle \int_{{\bf x} \in \mathbb{R}^{p}} l_{;{\boldsymbol {\theta \theta}}}[{\bf x}; {\bf u}]dF({\bf x})\right)^{-1} \nonumber \\ & & \left(\displaystyle \int_{{\bf x} \in \mathbb{R}^{p}} l^{T}_{c;{\boldsymbol {\theta}}}[{\bf x}; {\bf u}]{\bf x}^{T} dF({\bf x})\right) {\bf P_{u}}. \end{eqnarray} and \begin{eqnarray}\label{B} {\bf B}=l_{c;}[{\bf z}; {\bf u}]{\bf z}- \displaystyle \int_{{\bf x} \in \mathbb{R}^{p}}{\bf x} l_{c;{\boldsymbol{\theta}}}[{\bf x}; {\bf u}]dF({\bf x}) \left(-\displaystyle \int_{{\bf x} \in \mathbb{R}^{p}} l_{;{\boldsymbol {\theta \theta}}} [{\bf x}; {\bf u}]dF({\bf x}) \right)^{-1} l_{;{\boldsymbol {\theta}}}[{\bf z}; {\bf u}]. \end{eqnarray} Therefore \[\frac{\alpha{\bf h}}{\epsilon}={\bf A}^{-1} {\bf B}+ O(\epsilon), \] and finally, letting $\epsilon \rightarrow 0$ \begin{equation}\label{CauchyIF}
IF_{\hat{\bf u}}({\bf z}; F)= \lim _{\epsilon \rightarrow 0}\left( \frac{{\bf v}-{\bf u}}{\epsilon}\right)=\lim _{\epsilon \rightarrow 0}\frac{{\bf h}\alpha}{\epsilon}={\bf A}^{-1} {\bf B}. \end{equation}
\end{comment}
\end{document}
\begin{document}
\title{$M$-ideal properties in Orlicz-Lorentz spaces} \keywords{$M$-ideals, Orlicz-Lorentz spaces, dual norm} \subjclass[2010]{46B20, 46E30, 47B38}
\author{Anna Kami\'nska} \address{Department of Mathematical Sciences, The University of Memphis, TN 38152-3240} \email{[email protected]}
\author{Han Ju Lee} \address{Department of Mathematical Education, Dongguk University, Seoul, 100-715, Republic of Korea} \email{[email protected]}
\author{Hyung-Joon Tag} \address{Department of Mathematical Sciences, The University of Memphis, TN 38152-3240} \email{[email protected]}
\date{\today}
\begin{abstract} We provide explicit formulas for the norm of bounded linear functionals on Orlicz-Lorentz function spaces $\Lambda_{\varphi,w}$ equipped with the two standard norms, the Luxemburg and the Orlicz norm. Every bounded linear functional is a sum of a regular and a singular functional, and we show that the norm of a singular functional is the same regardless of the norm on the space, while the formulas for the norm of a general functional differ for the Luxemburg and the Orlicz norm. The relationship between equivalent definitions of the modular $P_{\varphi,w}$ generating the dual space of the Orlicz-Lorentz space is discussed in order to compute the norm of a bounded linear functional on $\Lambda_{\varphi,w}$ equipped with the Orlicz norm. As a consequence, we show that the order-continuous subspace of the Orlicz-Lorentz space equipped with the Luxemburg norm is an $M$-ideal in $\Lambda_{\varphi,w}$, while this is not the case for the space equipped with the Orlicz norm when $\varphi$ is an Orlicz $N$-function that does not satisfy the appropriate $\Delta_2$ condition. Analogous results for Orlicz-Lorentz sequence spaces are also given. \end{abstract}
\maketitle
\section{Introduction}
A closed subspace $Y$ of a Banach space $X$ is an $M$-ideal of $X$ if $Y^{\perp}$ is the range of a bounded projection $P:X^* \rightarrow X^*$ which satisfies \[
\|x^*\| = \|Px^*\| + \|x^* - Px^*\| \ \ \ \text{for all}\ \ x^* \in X^*. \]
If $Y$ is an $M$-ideal in $X$, then each $y^* \in Y^*$ has a unique norm-preserving extension $x^* \in X^*$ \cite{HWW}. It is well known that $c_0$ is an $M$-ideal in $l^\infty$. The $M$-ideal properties in Marcinkiewicz spaces have been studied in \cite{KL}. It was shown there that the subspace of order-continuous elements in $L^1 + L^{\infty}$ equipped with the standard norm is not an $M$-ideal, while there exists an equivalent norm for which this subspace is an $M$-ideal. For Orlicz spaces $L_\varphi$ it is well known that the order-continuous subspace of $L_{\varphi}$ is an $M$-ideal if the space is equipped with the Luxemburg norm \cite{A, N}, while this is not true if the space is equipped with the Orlicz norm and $\varphi$ does not satisfy the appropriate $\Delta_2$ condition \cite{CH}. For more details on general $M$-ideal theory and its applications, we refer to \cite{HWW}.
In this article, we investigate Orlicz-Lorentz function and sequence spaces. While we obtain results analogous to those for Orlicz spaces, the techniques are different and the calculations more involved, since it is necessary to deal with decreasing rearrangements and level functions, and the K\"othe associate spaces of Orlicz-Lorentz spaces are not of the same type as in the case of Orlicz spaces. The exact isometric dual norm for regular functionals on Orlicz-Lorentz spaces has recently been found in \cite{KLR}, expressed in terms of the Hardy-Littlewood order and level functions. This paper completes the characterization of the dual spaces by providing exact formulas for the dual norms of Orlicz-Lorentz spaces equipped with the two standard norms, the Luxemburg and the Orlicz norm.
Denote by $L^0 = L^0(I)$ the set of all Lebesgue measurable functions $f: I= [0, \gamma) \to \mathbb{R}$, where $0 < \gamma \leq \infty$. If $I=\mathbb{N}$ then $\ell^0 = L^0(\mathbb{N})$ denotes the collection of all real valued sequences $x = (x(i))$. The interval $I=[0,\gamma)$ is equipped with the Lebesgue measure $m$, and the space $\ell^0 = L^0(\mathbb{N})$ with the counting measure $| \cdot |$. A Banach space $(X, \| \cdot \|)$ over $I$ is said to be a Banach function lattice if $X \subset L^0(I)$ and whenever $0 \leq x \leq y$, $x \in L^0(I)$, $y \in X$, then $x \in X$ and $0 \leq \|x\| \leq \|y\|$. If $I=[0,\gamma)$ then $X$ is called a Banach function space, while if $I=\mathbb{N}$ then $X$ is called a Banach sequence space.
We say that a Banach function lattice $(X, \|\cdot\|)$ has the Fatou property provided that for every sequence $(x_n) \subset X$, if $x_n \uparrow x$ a.e. for some $x \in L^0$ and $\sup_n \|x_n\| < \infty$, then $x \in X$ and $\|x_n\| \uparrow \|x\|$. An element $x \in X$ is order-continuous if for every sequence $(x_n)$ with $0 \leq x_n \leq |x|$ and $x_n \downarrow 0$ a.e., we have $\|x_n\| \downarrow 0$. The set of all order-continuous elements in $X$ is a closed subspace of $X$ and is denoted by $X_a$. We also define the subspace $X_b$ as the closure of the set of all simple functions with supports of finite measure. In general, $X_a \subset X_b$ \cite{BS}.
The K\"othe associate space of $X$, denoted by $X'$, is a subset of $L^0(I)$, where $I = [0, \gamma)$, $0 < \gamma \leq \infty$, or $I = \mathbb{N}$ consisting of all $y \in X'$ satisfying $\|y\|_{X'}=\sup\{\int_I |xy|: \|x\|_X \leq 1\} < \infty$. The space $X'$ equipped with the norm $\|\cdot\|_{X'}$ is a Banach function lattice. It is well known that $X$ has the Fatou property if and only if $X=X''$ \cite{Z}.
We say that a bounded linear functional $H \in X^*$ is regular if there exists $h\in X'$ such that $H(x) = \int_I hx$ for all $x \in X$. The set of all regular linear functionals from $X^*$ will be denoted by $X_r^*$.
In the case where $X_a=X_b$ and $X$ has the Fatou property, we have that $(X_a)^*$ is isometric to $X'$, and so $X^* = (X_a)^* \oplus (X_a)^\perp$ is isometric to $X' \oplus (X_a)^\perp$. The set $(X_a)^\perp$ is called the space of singular functionals and it coincides with those $S\in X^*$ for which $S(x) = 0$ for all $x\in X_a$. It follows that any $F\in X^*$ is represented uniquely as the sum $H+S$ where $H$ is a regular functional and $S$ a singular functional \cite{Z}.
A distribution function $d_x$ of $x \in X$ is defined by $d_x(\lambda) = \mu\{t \in I : |x(t)| > \lambda\}$, $\lambda > 0$, where $\mu = m$ is the Lebesgue measure on $I = [0, \gamma)$, $0 < \gamma \leq \infty$, and the counting measure on $I = \mathbb{N}$. The decreasing rearrangement of $x$, denoted by $x^*$, is given as $x^*(t) = \inf\{\lambda > 0: d_x(\lambda) \leq t\}$, $t \in [0, \gamma)$. For a sequence $x=(x(i))$, its decreasing rearrangement $x^*$ may be identified with the sequence $(x^*(i))$ such that $x^*(i) = \inf\{\lambda > 0: d_x(\lambda) < i\}$ for $i \in \mathbb{N}$. The functions $x, y$ are said to be equimeasurable if $d_x(\lambda) = d_y(\lambda)$ for all $\lambda > 0$, denoted by $x \sim y$. It is clear that $x$ and $x^*$ are equimeasurable. A Banach function lattice $(X, \|\cdot\|)$ is called a rearrangement invariant Banach space if whenever $x \in X$ and $y \in L^0$ satisfy $x \sim y$, we have $y \in X$ and $\|x\| = \|y\|$.
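To make these definitions concrete, here is a small worked example (added for illustration). Let $x = 2\chi_{[0,1)} + 5\chi_{[1,2)} + 2\chi_{[3,4)}$ on $I = [0,\infty)$. Then
\[
d_x(\lambda) = \begin{cases} 3, & 0 < \lambda < 2,\\ 1, & 2 \leq \lambda < 5,\\ 0, & \lambda \geq 5, \end{cases}
\qquad\text{and}\qquad
x^*(t) = \begin{cases} 5, & t \in [0,1),\\ 2, & t \in [1,3),\\ 0, & t \geq 3, \end{cases}
\]
so $x^*$ lists the values of $x$ in decreasing order on an interval of the same total measure, and $x \sim x^*$.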
An Orlicz function $\varphi: \mathbb{R}_{+} \rightarrow \mathbb{R}_{+}$ is a convex function such that $\varphi(0) = 0$ and $\varphi(t) > 0$ for $t >0$. It is said to be an Orlicz $N$-function when $\lim_{t \rightarrow 0}{\varphi(t)}/{t} = 0$ and $\lim_{t \rightarrow \infty} {\varphi(t)}/{t} = \infty$ \cite{Chen}. The complementary function of $\varphi$, denoted by $\varphi_*$, is defined as $\varphi_*(v) = \sup\{uv - \varphi(u): u \geq 0\}$, $v\ge 0$. We have that $\varphi$ is an $N$-function if and only if $\varphi_*$ is an $N$-function. Let $p$ and $q$ stand for the right derivatives of $\varphi$ and $\varphi_*$, respectively. The functions $p$ and $q$ are non-negative, right-continuous and increasing on $\mathbb{R}_+$. If $\varphi$ is an $N$-function then $p(0)= p(0+)= q(0) = q(0+)=0$ and $\lim_{t \rightarrow \infty}p(t)=\lim_{t \rightarrow \infty}q(t)= \infty$. Clearly for $\varphi$ and $\varphi_*$, Young's inequality is satisfied, that is, $uv \leq \varphi(u) + \varphi_*(v)$ for all $u,v \in \mathbb{R_+}$. Recall also that equality holds for $v = p(u)$ or $u=q(v)$ \cite{Chen}.
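A standard example (recalled here for orientation) is the pair of power functions: for $1 < p < \infty$, $\varphi(u) = u^p/p$ is an Orlicz $N$-function with complementary function $\varphi_*(v) = v^q/q$, where $1/p + 1/q = 1$, and with right derivatives $p(u) = u^{p-1}$, $q(v) = v^{q-1}$. Young's inequality then reads $uv \leq u^p/p + v^q/q$, and for $v = p(u) = u^{p-1}$ it becomes the equality
\[
uv = u^p = \frac{u^p}{p} + \frac{u^{(p-1)q}}{q}.
\]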
Let $w: I=[0,\gamma)\rightarrow (0, \infty)$ be a weight function that is decreasing and locally integrable. Then we define $W(t) := \int_0^t w < \infty$ for all $t \in I$. If $\gamma = \infty$, we assume $W(\infty) = \infty$. Given $f \in L^0$, define the modular \[ \rho_{\varphi,w}(f) = \int_0^{\gamma} \varphi(f^*(t))w(t)dt = \int_I \varphi(f^*)w. \]
The modular $\rho_{\varphi,w}$ is orthogonally subadditive, that is, for $f, g \in L^0$, if $|f| \wedge |g| = 0$, we have $\rho_{\varphi,w}(f + g) \leq \rho_{\varphi,w}(f) + \rho_{\varphi,w}(g)$ \cite{K}. The Orlicz-Lorentz function space $\Lambda_{\varphi,w}$ is the set of all $f \in L^0$ such that $\rho_{\varphi,w}(\lambda f) < \infty$ for some $\lambda > 0$. It is equipped with either the Luxemburg norm \[
\|f\| = \|f\|_{\Lambda_{\varphi,w}} = \inf\{\epsilon > 0 : \rho_{\varphi,w}\left({f}/{\epsilon}\right) \leq 1\}, \]
or the Orlicz norm \[
\|f\|^0 = \|f\|_{\Lambda_{\varphi,w}}^0 = \sup\left\{\int_I f^*g^*w : \rho_{\varphi_*, w}(g) \leq 1\right\}. \]
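As a simple illustration of the Luxemburg norm, consider $f = a\chi_{[0,s)}$ with $a > 0$ and $0 < s < \gamma$ (a computation added for concreteness). Then $f^* = f$, so $\rho_{\varphi,w}(f/\epsilon) = \varphi(a/\epsilon)W(s)$, and $\rho_{\varphi,w}(f/\epsilon) \leq 1$ if and only if $\epsilon \geq a/\varphi^{-1}(1/W(s))$, where $\varphi^{-1}$ denotes the inverse function of $\varphi$. Hence
\[
\|a\chi_{[0,s)}\| = \frac{a}{\varphi^{-1}\left(1/W(s)\right)}.
\]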
It is well known that $\|x\| \leq \|x\|^0 \leq 2\|x\|$ \cite{WC, WR}. From now on, we let $\Lambda_{\varphi,w}$ be the Orlicz-Lorentz function space equipped with the Luxemburg norm $\|\cdot\|$ and $\Lambda_{\varphi,w}^0$ be the Orlicz-Lorentz function space equipped with the Orlicz norm $\|\cdot\|^0$. The spaces $\Lambda_{\varphi,w}$ and $\Lambda_{\varphi,w}^0$ are rearrangement invariant Banach spaces. Also, it is well known that $(\Lambda_{\varphi,w})_a = (\Lambda_{\varphi,w})_b = \{x \in \Lambda_{\varphi,w} : \rho_{\varphi,w}(\lambda x) < \infty \ \text{for all} \ \lambda > 0\}$ \cite{K}.
In the case of sequence spaces let $w = (w(i))$ be a positive decreasing real sequence and $W(n) = \sum_{i=1}^n w(i)$ for all $n \in \mathbb{N}$ and $W(\infty)= \infty$. For a sequence $x \in \ell^0$, we define the modular $\alpha_{\varphi,w}(x)
= \sum_{i=1}^{\infty} \varphi(x^*(i))w(i) $ and then the Orlicz-Lorentz sequence space $\lambda_{\varphi,w}$ is the set of all real sequences $x= (x(i))$ satisfying $\alpha_{\varphi,w}(\eta x)< \infty$ for some $\eta >0$. The Luxemburg and the Orlicz norm on $\lambda_{\varphi,w}$ are defined similarly as in the function case, with the modular $\rho_{\varphi,w}$ replaced by $\alpha_{\varphi,w}$; $\lambda_{\varphi, w}$ denotes the Orlicz-Lorentz sequence space equipped with the Luxemburg norm, and $\lambda_{\varphi,w}^0$ with the Orlicz norm. Both norms are equivalent and the spaces are rearrangement invariant Banach spaces. We also have $(\lambda_{\varphi,w})_a = (\lambda_{\varphi,w})_b = \{x \in \lambda_{\varphi,w} : \alpha_{\varphi,w}(\eta x) < \infty \ \text{for all} \ \eta >0\}$ \cite{KR2}.
An Orlicz function $\varphi$ satisfies $\Delta_2$ (resp., $\Delta_2^{\infty}$; $\Delta_2^0$) condition if there exist $K > 0$ (resp., $K>0$ and $u_0\geq 0$; $K>0$ and $u_0 >0$) such that $\varphi(2u) \leq K\varphi(u)$ for all $u \geq 0$ (resp., $u \geq u_0$; $0< u \leq u_0$). {\it Appropriate $\Delta_2$ condition} means $\Delta_2$ and $\Delta_2^\infty$ in the case of the function spaces for $\gamma = \infty$ and $\gamma<\infty$, respectively, and $\Delta_2^0$ for the sequence spaces. It is well known that $(\Lambda_{\varphi,w}^0)_a = \Lambda_{\varphi,w}^0$ and $(\lambda_{\varphi,w}^0)_a = \lambda_{\varphi,w}^0$ if and only if $\varphi$ satisfies the appropriate $\Delta_2$ conditions \cite{K}.
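For orientation, a standard example (not tied to the references above) of an Orlicz $N$-function failing these conditions is $\varphi(u) = e^u - u - 1$: since
\[
\frac{\varphi(2u)}{\varphi(u)} = \frac{e^{2u} - 2u - 1}{e^u - u - 1} \sim e^{u} \rightarrow \infty \quad \text{as} \ u \rightarrow \infty,
\]
no constant $K$ can satisfy $\varphi(2u) \leq K\varphi(u)$ for large $u$, so $\Delta_2^{\infty}$ (and hence $\Delta_2$) fails. On the other hand, $\varphi(u) = u^p/p$ with $p > 1$ satisfies $\Delta_2$ with $K = 2^p$. By the preceding remark, it is precisely outside the appropriate $\Delta_2$ condition that the order-continuous subspace is proper, and this is the setting in which nontrivial singular functionals can occur.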
If $f \in \Lambda_{\varphi,w}$ then $\rho_{\varphi,w}(\lambda_0 f) < \infty$ for some $\lambda_0 > 0$, and so for any $\lambda > 0$, $\infty> \rho_{\varphi,w}(\lambda_0 f) \geq \varphi(\lambda_0 \lambda) \int_0 ^{m\{f^* > \lambda\}} w.$ It follows from $W(\infty) = \infty$ that $d_f(\lambda) = m\{f^* > \lambda\} < \infty$ for every $\lambda > 0$. A similar fact holds for $x\in \lambda_{\varphi,w}$ since $\lambda_{\varphi,w} \subset c_0$.
Let us define $k^* = k^*(f) = \inf\{k>0: \rho_{\varphi_*, w}(p(kf)) \geq 1\}$ and $k^{**} = k^{**}(f) = \sup\{k>0: \rho_{\varphi_*, w}(p(kf)) \leq 1\}$. Clearly $0\le k^* \le k^{**} \le \infty$. If $\varphi$ is an $N$-function then $k^{**} < \infty$. Indeed, if on the contrary $k^{**} = \infty$, then there exists a non-negative sequence $(k_n)$ such that $k_n \uparrow \infty$ and $\int_I \varphi_*(p(k_nf)^*)w \leq 1$. Hence for $t_0 = m\{f^* > 1\} < \infty$, \[ \varphi_*(p(k_n))W(t_0) = \int_0^{t_0}\varphi_*(p(k_n))w = \int_0^{m\{f^* >1\}} \varphi_*(p(k_n))w \leq \int_I \varphi_*(p(k_n f^*))w \leq 1. \]
This implies that $\varphi_*(p(k_n))/p(k_n) \leq 1/(W(t_0)p(k_n))$, where the left side tends to $\infty$ since $\varphi_*$ is an $N$-function, and the right side approaches $0$ since $p(k_n) \rightarrow \infty$. This contradiction proves the claim. We define $k^*$ and $k^{**}$ analogously for Orlicz-Lorentz sequence spaces. Set $K(f) = [k^*, k^{**}]$ for $f\in \Lambda_{\varphi,w}$, and similarly $K(x)$ for $x\in\lambda_{\varphi,w}$.
Recall the following facts, which are similar to those in Orlicz spaces \cite{Chen}.
\begin{Theorem}[\cite{WC}, pg 133]\label{WC} Let $\varphi$ be an Orlicz $N$-function. Then, \begin{enumerate}
\item[$(1)$] If there exists $k>0$ such that $\rho_{\varphi_*,w} (p(kf)) = 1$, then $\|f\|^0 = \int_0^{\gamma} f^*p(kf^*)w = \frac{1}{k}(1 + \rho_{\varphi,w} (kf))$.
\item[$(2)$] For any $f \in \Lambda_{\varphi,w}^0$, $\|f\|^0 = \inf_{k>0} \frac{1}{k} (1 + \rho_{\varphi,w}(kf))$.
\item[$(3)$] $k \in K(f)$ if and only if $\|f\|^0 = \frac{1}{k}(1 + \rho_{\varphi,w}(kf))$. \end{enumerate} The analogous statements hold in Orlicz-Lorentz sequence spaces when the modular $\rho_{\varphi,w}$ is replaced by the modular $\alpha_{\varphi,w}$. \end{Theorem}
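To see Theorem \ref{WC} at work, take the simplifying choice $\varphi(u) = u^2/2$ (an illustrative computation; here $\varphi_* = \varphi$ and $p(u) = u$), and let $A = \int_I (f^*)^2 w \in (0,\infty)$. The equation $\rho_{\varphi_*,w}(p(kf)) = k^2A/2 = 1$ has the solution $k = \sqrt{2/A}$, so by part $(1)$,
\[
\|f\|^0 = \frac{1}{k}\left(1 + \rho_{\varphi,w}(kf)\right) = \sqrt{\frac{A}{2}}\,(1 + 1) = \sqrt{2A},
\]
which agrees with minimizing $\frac{1}{k}(1 + k^2A/2)$ over $k > 0$ in part $(2)$.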
This article has three parts. In section 2, we compute the norm of a singular linear functional $S$ on Orlicz-Lorentz spaces. We show that $\|S\|$ is the same for both the Luxemburg norm and the Orlicz norm. In section 3, we compute the norm of a bounded linear functional on $\Lambda_{\varphi,w}$ and $\Lambda_{\varphi,w}^0$. The formulas differ depending on the norm on the space. Furthermore, we show that $(\Lambda_{\varphi,w})_a$ is an $M$-ideal of $\Lambda_{\varphi,w}$, but $(\Lambda_{\varphi,w}^0)_a$ is not an $M$-ideal of $\Lambda_{\varphi,w}^0$ when $\varphi$ is an Orlicz $N$-function that does not satisfy the appropriate $\Delta_2$ condition. The analogous results for the sequence spaces are also given.
\section{Singular linear functionals on Orlicz-Lorentz spaces}
In this section, we show that the formula for $\|S\|$ is the same regardless of whether the Orlicz-Lorentz function or sequence space carries the Luxemburg or the Orlicz norm. For $f \in L^0$, define $\theta = \theta(f) = \inf\{\lambda > 0 : \rho_{\varphi,w}(f/\lambda) < \infty\}$. It is clear that $\theta(f) < \infty$ for any $f\in \Lambda_{\varphi,w}$. If $f\in (\Lambda_{\varphi,w})_a$, then $\rho_{\varphi,w}\left(\frac{f}{\lambda}\right) < \infty$ for all $\lambda > 0$, so $\theta(f) = 0$. Clearly, $\theta(f) \le \|f\|$. The analogous definitions and facts also hold for Orlicz-Lorentz sequence spaces.
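A concrete function with $\theta(f) > 0$ (an illustrative example under the simplifying choices $I = [0,1)$ and $w \equiv 1$) is $f(t) = \ln(1/t)$ with $\varphi(u) = e^u - u - 1$. Here $f = f^*$ and
\[
\rho_{\varphi,w}(f/\lambda) = \int_0^1 \left(t^{-1/\lambda} - \tfrac{1}{\lambda}\ln(1/t) - 1\right)dt,
\]
which is finite exactly when $\lambda > 1$. Hence $\theta(f) = 1$, and in particular $f \in \Lambda_{\varphi,w} \setminus (\Lambda_{\varphi,w})_a$.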
Even though the next two results and their proofs in Orlicz-Lorentz spaces are similar to those in Orlicz spaces \cite{Chen}, we state and prove them in detail because they require slightly different techniques, mostly dealing with decreasing rearrangements.
\begin{Theorem} \label{thm5}
For any $f \in \Lambda_{\varphi,w}$, $\lim_n \|f - f_n\| = \lim_n\|f - f_n\|^0 = \theta(f)$, for $f_n = f \chi_{\{\frac{1}{n} \leq |f| \leq n\}}$. For any $x = (x(i)) \in \lambda_{\varphi,w}$, $\lim_n\|x-x_n\| = \lim_n \|x - x_n\|^0 = \theta(x)$ for $x_n = x \chi_{\{1,2,...,n\}}$. \end{Theorem} \begin{proof}
First let $f \in (\Lambda_{\varphi,w})_a$. Then clearly $\theta(f) = 0$. Moreover, in view of $d_f(\lambda) < \infty$ for all $\lambda > 0$, the functions $f_n = f \chi_{\{\frac{1}{n} \leq |f| \leq n\}}$ are bounded with supports of finite measure, $f_n \rightarrow f$ a.e. and $|f_n| \leq |f|$. Since $(\Lambda_{\varphi,w})_a = (\Lambda_{\varphi,w})_b$, from Proposition 1.3.6 in \cite{BS} we have that $\|f - f_n\| \rightarrow 0$. Moreover, by the equivalence of $\|\cdot \|$ and $\|\cdot\|^0$, we also get $\|f - f_n\|^0 \rightarrow 0$.
Now, consider $f \in \Lambda_{\varphi,w} \setminus (\Lambda_{\varphi,w})_a$ and $f_n$ as above. In this case, we have $\theta(f) > 0$. Since $d_f(\lambda) < \infty$ for all $\lambda > 0$, and $|f-f_n| \downarrow 0$ a.e., we have $(f- f_n)^* \rightarrow 0$ (\cite{KPS}, pg 68). Hence $\|f-f_n\|$ and $\|f-f_n\|^0$ are monotonically decreasing, and so the limits for both $\|f-f_n\|$ and $\|f-f_n\|^0$ exist.
Letting $\epsilon \in (0, \theta)$ we have $\rho_{\varphi,w} \left(\frac{f}{\theta - \epsilon}\right) = \infty$. By the orthogonal subadditivity of $\rho_{\varphi,w}$, we have $\infty = \rho_{\varphi,w} \left(\frac{f}{\theta-\epsilon}\right) \leq \rho_{\varphi,w}\left(\frac{f_n}{\theta - \epsilon}\right) + \rho_{\varphi,w} \left(\frac{f - f_n}{\theta - \epsilon}\right)$. Clearly, the functions $f_n$ are bounded with supports of finite measure. This implies that $\rho_{\varphi,w}\left(\frac{f_n}{\theta - \epsilon}\right) < \infty$. Hence, we have $\|f - f_n\| \geq \theta - \epsilon$ from the fact that $\rho_{\varphi,w} \left(\frac{f - f_n}{\theta - \epsilon}\right)=\infty$.
On the other hand for $\epsilon > 0$, we have $\rho_{\varphi,w} \left(\frac{f}{\theta + \epsilon}\right) < \infty$ by the definition of $\theta(f)$. Consequently, since $(f - f_n)^* \rightarrow 0$, we get $\lim_{n \rightarrow \infty} \rho_{\varphi,w} \left(\frac{f-f_n}{\theta + \epsilon}\right) = 0$ by the Lebesgue dominated convergence theorem. Hence, in view of Theorem \ref{WC}.(2), we see that \[
\|f - f_n\|^0 \leq (\theta + \epsilon) \left(1+ \rho_{\varphi,w} \left(\frac{f - f_n}{\theta + \epsilon}\right)\right) \rightarrow (\theta + \epsilon), \]
as $n \rightarrow \infty$. Since $\|f\| \leq \|f\|^0$, we finally get \[
\theta - \epsilon \leq \|f - f_n\| \leq \|f - f_n\|^0 \leq \theta + \epsilon \] for sufficiently large $n$ and arbitrary $\epsilon > 0$, and the proof is complete in the function case. The proof in the sequence case is similar, so we skip it. \end{proof}
Now, we compute the norm of a singular functional $S$ on Orlicz-Lorentz function spaces.
\begin{Theorem}\label{theta}
For any singular functional $S$ of $\Lambda_{\varphi,w}$ equipped with the Luxemburg norm or the Orlicz norm, $\|S\| = \|S\|_{(\Lambda_{\varphi,w})^*} = \|S\|_{(\Lambda_{\varphi,w}^0)^*} = \sup\{S(f) : \rho_{\varphi,w}(f) < \infty\} = \sup\{\frac{S(f)}{\theta(f)} : f \in \Lambda_{\varphi,w} \setminus (\Lambda_{\varphi,w})_a\}$.
The analogous formulas hold for Orlicz-Lorentz sequence spaces.
\end{Theorem}
\begin{proof}
Here too we provide the proof only in the function case. For a function $f \in \Lambda_{\varphi,w} \setminus (\Lambda_{\varphi,w})_a$, take again $f_n = f \chi_{\{\frac{1}{n} \leq |f| \leq n\}}$. From the fact that $f_n \in (\Lambda_{\varphi,w}^0)_a$ we have $S(f) = S(f- f_n)$ and $S(f) \leq \|S\|_{(\Lambda_{\varphi,w}^0)^*} \|f - f_n\|^0$. By Theorem \ref{thm5}, $\|f- f_n\|^0 \rightarrow \theta(f)$, and so we obtain $\frac{S(f)}{\theta(f)} \leq \|S\|_{(\Lambda_{\varphi,w}^0)^*}$.
If $\rho_{\varphi,w}(f) < \infty$ then $\rho_{\varphi,w}(f - f_n) \rightarrow 0$. Thus for sufficiently large $n$, $\rho_{\varphi,w} (f - f_n) \leq 1$, and so $\|f - f_n\| \leq 1$. Hence by Theorem \ref{thm5}, $\theta(f) = \lim_{n \rightarrow \infty} \|f-f_n\| \leq 1$. Since $S(f) = 0$ for all $f \in (\Lambda_{\varphi,w})_a$, we have $\sup\left\{ S(f) : \rho_{\varphi,w}(f) < \infty\right\} = \sup\left\{ S(f): f \in \Lambda_{\varphi,w} \setminus (\Lambda_{\varphi,w})_a, \ \rho_{\varphi,w}(f) < \infty\right\}$. Notice that $S(f) \leq \frac{S(f)}{\theta(f)}$ since $\theta(f) \le 1$. Therefore, taking into account that $\|S\|_{(\Lambda_{\varphi,w}^0)^*} \leq \|S\|_{(\Lambda_{\varphi,w})^*}$ in view of the inequality $\|\cdot\| \le \|\cdot\|^0$ and that $\|f\| \leq 1$ if and only if $\rho_{\varphi,w}(f) \leq 1$,
we obtain
\begin{eqnarray*}
\|S\|_{(\Lambda_{\varphi,w}^0)^*} \leq \|S\|_{(\Lambda_{\varphi,w})^*} &=& \sup\{S(f) : \rho_{\varphi,w}(f) \leq 1\}\\ &\leq& \sup\left\{ S(f) : \rho_{\varphi,w}(f) < \infty\right\}\\ &\leq& \sup\left\{ \frac{S(f)}{\theta(f)} : f \in \Lambda_{\varphi,w} \setminus (\Lambda_{\varphi,w})_a, \,\,\, \rho_{\varphi,w}(f) < \infty\right\}\\ &\leq& \sup\left\{ \frac{S(f)}{\theta(f)} : f \in \Lambda_{\varphi,w} \setminus (\Lambda_{\varphi,w})_a\right\}\\
&\leq& \|S\|_{(\Lambda_{\varphi,w}^0)^*}. \end{eqnarray*}
\end{proof}
\section{Norm of bounded linear functionals}
We first recall the K\"othe associate space of an Orlicz-Lorentz space. For a non-negative locally integrable function $f\in L^0$ and $0\le a < b < \infty$, denote $F(a,b) = \int_a^b f$; in particular, $H(a,b) = \int_a^b h$ and $W(a,b) = \int_a^b w$ below. Let $h \in L^0$ be non-negative and locally integrable on $I$. Then the interval $(a, b) \subset I$ is called a level interval of $h$ with respect to the weight $w$ if $R(a,t) := \frac{H(a,t)}{W(a,t)} \leq \frac{H(a,b)}{W(a,b)} = R(a, b)$ for all $a < t < b$ and $R(a,b) > 0$. In the case where $b = \infty$, define $R(a,b) = R(a, \infty) = \limsup_{t \rightarrow \infty}R(a, t)$. If the level interval $(a,b)$ is not contained in a larger level interval, we say that $(a,b)$ is a maximal level interval. Halperin's level function of $h$, denoted by $h^0$, is defined as \[ h^0(t) = \begin{cases}
R(a_j, b_j)w(t) = \frac{H(a_j, b_j)}{W(a_j, b_j)}w(t) , & t \in (a_j, b_j) \ \ \text{for some} \ \ j, \\
h(t), & t \notin \cup_j(a_j, b_j),
\end{cases} \] \noindent provided that each $(a_j, b_j)$ is a maximal level interval. Similarly, for a non-negative sequence $h= (h(i)) \in l^0$ and a positive decreasing weight $w= (w(i))$, the interval $(a, b] = \{a+1, a+2, ... , b\}$ is called a level interval if $r(a,j) = \frac{h(a,j)}{w(a,j)} \leq \frac{h(a,b)}{w(a,b)} = r(a,b)$ for every $a+1 \leq j \leq b$ and $r(a,b) >0$, where $h(a,j) = \sum_{i=a+1}^j h(i)$ and $w(a, j) = \sum_{i=a+1}^j w(i)$. The level sequence $h^0$ is defined as
\[ h^0(i) = \begin{cases}
r(a_j, b_j)w(i) , & i \in (a_j, b_j] \ \ \text{for some} \ \ j , \\
h(i), & i \notin \cup_j(a_j, b_j],
\end{cases} \]
where each $(a_j, b_j]$ is a maximal level interval. For $h\in L^0$, define $P_{\varphi,w}(h) = \inf\left\{\int_I \varphi\left(\frac{|h|}{v}\right)v : v \prec w\right\} $, and then the space $\mathcal{M}_{\varphi,w}$ as the set of all $h \in L^0$ such that $P_{\varphi,w}(\lambda h) < \infty$ for some $\lambda > 0$. By Theorem 4.7 in \cite{KLR} we have $P_{\varphi,w}(h) = \int_I \varphi((h^*)^0/w) w$ if $\varphi$ is an $N$-function. The Luxemburg norm and the Orlicz norm for the modular $P_{\varphi,w}$ are defined as \[
\|h\|_{\mathcal{M}_{\varphi,w}} = \inf\{\epsilon > 0 : P_{\varphi,w}\left({h}/{\epsilon}\right) \leq 1\} \ \ \ \text{and} \ \ \
\|h\|_{\mathcal{M}_{\varphi,w}^0} = \inf_{k > 0} \frac{1}{k}(1 + P_{\varphi,w}(kh)), \] respectively.
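A small numerical instance of the modular $P_{\varphi,w}$ computed through the level function (constructed here for illustration) is the following. Take $h$ with $h^* = 2\chi_{(0,2)}$ and the decreasing weight $w = 2\chi_{(0,1)} + \chi_{(1,2)}$ on $I = [0,2)$. Then $h^*/w$ equals $1$ on $(0,1)$ and $2$ on $(1,2)$, so it is not decreasing, and one checks that $(0,2)$ is the unique maximal level interval with ratio $\int_0^2 h^* / \int_0^2 w = 4/3$. Therefore $(h^*)^0 = \frac{4}{3}w$, the quotient $(h^*)^0/w \equiv 4/3$ is decreasing, the integral is preserved since $\int_0^2 (h^*)^0 = 4 = \int_0^2 h^*$, and
\[
P_{\varphi,w}(h) = \int_0^2 \varphi\left(\frac{(h^*)^0}{w}\right)w = \varphi\left(\frac{4}{3}\right)W(2) = 3\,\varphi\left(\frac{4}{3}\right).
\]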
For $h \in \ell^0$, we define
$p_{\varphi,w}(h) = \inf\left\{\sum_{i=1}^{\infty} \varphi\left(\frac{|h(i)|}{v(i)}\right)v(i) : v \prec w \right\}$. The space $\mathfrak{m}_{\varphi,w}$ is the set of all $h = (h(i))$ such that $p_{\varphi,w}(\eta h) < \infty$ for some $\eta > 0$. The Luxemburg norm and the Orlicz norm on $\mathfrak{m}_{\varphi,w}$ are given analogously as in function spaces where we replace $P_{\varphi,w}$ by $p_{\varphi,w}$.
From now on we denote by $\mathcal{M}_{\varphi,w}$ and $\mathfrak{m}_{\varphi,w}$ the spaces equipped with the Luxemburg norms $\| \cdot \|_{\mathcal{M}_{\varphi,w}}$ and $\|\cdot\|_{\mathfrak{m}_{\varphi,w}}$, respectively, and by $\mathcal{M}_{\varphi,w}^0$ and $\mathfrak{m}_{\varphi,w}^0$ the spaces equipped with the Orlicz norms $\| \cdot \|_{\mathcal{M}_{\varphi,w}^0}$ and $\|\cdot\|_{\mathfrak{m}_{\varphi,w}^0}$, respectively. All these spaces are rearrangement invariant Banach spaces \cite{KR}.
\begin{Theorem}[\cite{KLR}, Theorems 2.2, 5.2]\label{th:01} Let $w$ be a decreasing weight and $\varphi$ be an Orlicz $N$-function. Then the K\"othe dual space to an Orlicz-Lorentz space $\Lambda _{\varphi ,w}$ (resp. $\Lambda_{\varphi,w}^0$) is expressed as \begin{equation*} \left( \Lambda _{\varphi ,w}\right) ^{\prime }=\mathcal{M}_{\varphi _{\ast },w}^{0}\ \ \ (\text{resp.}\ (\Lambda_{\varphi,w}^0)^{\prime} = \mathcal{M}_{\varphi _{\ast },w}) \end{equation*} with equality of norms. Similarly in the sequence case we have \begin{equation*} \left(\lambda _{\varphi ,w}\right) ^{\prime }=\mathfrak{m}_{\varphi _{\ast},w}^{0}\ \ (\text{resp.} \ \ (\lambda_{\varphi,w}^0)' = \mathfrak{m}_{\varphi_*,w}) \end{equation*} with equality of norms.
\end{Theorem}
Let $X$ be an Orlicz-Lorentz function or sequence space equipped with either norm. Then $X^* = X_r \oplus X_s$, where $X_r$ is isometrically isomorphic to the K\"othe associate space $X'$, and $X_s = (X_a)^\perp$.
\begin{Theorem}\label{th:lux} Assume $\varphi$ is an $N$-function. Let $F$ be a bounded linear functional on $\Lambda_{\varphi,w}$. Then $F= H + S$, where $H(f) = \int_I fh$ for some $h\in \mathcal{M}^0_{\varphi_*,w}$,
$\|H\|= \|h\|^0_{\mathcal{M}_{\varphi_*,w}}$, $S(f)=0$
for all $f\in (\Lambda_{\varphi,w})_a$, and $\|F\|_{(\Lambda_{\varphi,w})^*} = \|h\|_{\mathcal{M}_{\varphi_*, w}}^0 + \|S\|$. \end{Theorem}
\begin{proof}
By Theorem \ref{th:01} and the remark above, $F= H+S$ uniquely, where $H(f) = \int_I hf$ for some $h\in \mathcal{M}^0_{\varphi_*,w}$ with
$\|H\|= \|h\|^0_{\mathcal{M}_{\varphi_*,w}}$, and $S(f)=0$
for all $f\in (\Lambda_{\varphi,w})_a$. Observe by Theorem \ref{theta} that the norm of the singular functional $\|S\|$ is the same under either the Luxemburg norm or the Orlicz norm.
Clearly $\|F\|_{(\Lambda_{\varphi,w})^*} = \|H+S\|_{(\Lambda_{\varphi,w})^*} \leq \|H\|_{(\Lambda_{\varphi,w})^*} + \|S\| = \|h\|_{\mathcal{M}_{\varphi_*, w}}^0 + \|S\|$. Now we show the opposite inequality.
Let $\epsilon >0$ be arbitrary. From the definitions of $\|h\|_{\mathcal{M}_{\varphi_*,w}}^0$ and $\|S\|$, we can choose $f, g \in \Lambda_{\varphi,w}$ with $\|f\| \leq 1, \|g\| \leq 1$ such that
\begin{equation} \label{DN}
\|h\|_{\mathcal{M}_{\varphi_*,w}}^0 - \epsilon < \int_I hf \,\,\, \text{and} \,\,\, \|S\| - \epsilon < S(g). \end{equation}
We can assume that $f$ is bounded. Indeed, let $z \in S_{\Lambda_{\varphi,w}}$ be such that $\|h\|_{\mathcal{M}_{\varphi_*,w}}^0- \frac{\epsilon}{2} < \int_I |hz|$. Let $(z_n)_{n=1}^{\infty}$ be a sequence of non-negative bounded functions, with $z_n$ supported in $[0,n)$, such that $z_n \uparrow |z|$ a.e. Then $\int_I |h| |z| = \lim_{n \rightarrow \infty} \int_I |h| z_n$ by the monotone convergence theorem, which implies that for every $\epsilon > 0$ there exists $z_{n_0}$ such that $\int_I |hz| - \frac{\epsilon}{2} \leq \int_I |h| z_{n_0}$. Hence,
\[
\|h\|_{\mathcal{M}_{\varphi_*, w}}^0 - \frac{\epsilon}{2} - \frac{\epsilon}{2} \leq \int_I |hz| - \frac{\epsilon}{2} \leq \int_I |h|z_{n_0}. \]
\noindent Let $f = (\sign{h})z_{n_0}$. Thus, we found a bounded function $f$ of support of finite measure such that $\|f\| \leq 1$ and $\|h\|_{\mathcal{M}_{\varphi_*, w}}^0 - \epsilon < \int_I hf$.\
Since $H$ is a bounded linear functional on $\Lambda_{\varphi,w}$, $hf$ is integrable, so there exists $\delta > 0$ such that for every measurable subset $E \subset I$, with $mE < \delta$, we have
\begin{equation}\label{Ex}
\int_{E} |hf| < \epsilon. \end{equation}
Now, we show that there exist $n \in \mathbb{N}$ and a measurable subset $E \subset I$ such that $mE < \delta$ and
\begin{equation}\label{Ey}
\int_E |hg| < \epsilon,\ \ \int_0^{mE} \varphi(g^*) w < \frac{\epsilon}{2},\ \ \int_I \varphi((g\chi_{[n, \gamma)})^*)w < \frac{\epsilon}{2},\ \ \text{and} \ \ \int_n^{\gamma} |hg|< \epsilon. \end{equation}
\noindent Indeed, let $E_n = \{g^* > n\} = [0, t_n)$ and define $g_n^* = g^* \chi_{[0, t_n)}$. We see that $g_n^* \leq g^*$ and $ g_n^* \downarrow 0$ a.e., so by the Lebesgue dominated convergence theorem, $\lim_{n \rightarrow \infty} \int_I \varphi(g_n^*) w = 0$. This implies that for any $\epsilon >0$, there exists $N_1$ such that for every $n \geq N_1$,
\begin{equation} \label{Eyn} \int_I \varphi(g_n^*)w = \int_I \varphi(g^*\chi_{[0, t_n)})w = \int_0^{t_n} \varphi(g^*)w = \int_0^{mE_n} \varphi(g^*)w<\frac{\epsilon}{2}. \end{equation}
\noindent Also, $E_{n+1} \subset E_n$ for all $n \in \mathbb{N}$ and $m(\cap E_n) = m\{g^* =\infty\} = 0$. By continuity of measure, $0= m(\cap E_n) = \lim_{n \rightarrow \infty} m\{g^*>n\}$.\
Since $g \sim g^*$, we see that $\lim_{n \rightarrow \infty} m\{|g| > n\} = 0$. The function $hg$ is integrable, so we have $\lim_{n \rightarrow \infty} \int_{\{|g| > n\}} |hg| = 0$. Then, there exists $N_2$ such that $\int_{\{|g| > n\}} |hg| < \epsilon$ for $n \geq N_2$. Since $\rho_{\varphi,w}(g) < \infty$, we choose sufficiently large $n \geq N = \max\{N_1, N_2\}$ satisfying $mE_n = m \{|g|>n\} < \delta$, $\supp{f} \cap [n, \gamma) = \emptyset$, $\int_I \varphi((g\chi_{[n, \gamma)})^*)w < \frac{\epsilon}{2}$, and $\int_{[n,\gamma)} |hg|< \epsilon$. By letting $E = \{|g|>n\}$ for such $n$, we have found $n \in \mathbb{N}$ and a measurable subset $E \subset I$ satisfying (\ref{Ey}). Note that $\supp{f} \subset [0, n)$ by the construction.
Define
\[u(t) = \begin{cases} f(t), & t \in G_1 = \supp{f} \setminus E\\ g(t), & t \in G_2 = E \cup [n, \gamma)\\ 0, & \text{otherwise}. \end{cases} \]
\noindent By the orthogonal subadditivity of the modular $\rho_{\varphi, w}$, we have
\begin{eqnarray*} \rho_{\varphi,w}(u) = \int_I \varphi(f \chi_{G_1} + g \chi_{G_2})^*w &\leq& \int_I \varphi((f\chi_{G_1})^*)w + \int_I \varphi((g\chi_ {G_2})^*)w\\ &\leq& \int_0^{mG_1} \varphi(f^*)w + \int_I \varphi(g\chi_E + g\chi_{[n, \gamma) \setminus E})^* w \\ &\leq& \int_0^{mG_1} \varphi(f^*)w + \int_I \varphi(g\chi_E)^* w + \int_I \varphi(g\chi_{[n, \gamma)\setminus E})^* w\\ &\leq& \int_0^{mG_1} \varphi(f^*)w + \int_0^{mE} \varphi(g^*) w + \int_I \varphi(g\chi_{[n, \gamma)})^* w\\ &\leq& 1 + \epsilon,
\end{eqnarray*}
\noindent which implies that $\rho_{\varphi,w}(\frac{u}{1+\epsilon}) \leq 1$, and so $\|\frac{u}{1+\epsilon}\| \leq 1$. We see that $S(f\chi_{G_1}) = 0$ from $f\in (\Lambda_{\varphi,w})_a$. Also, $g \chi_{G_1} \in (\Lambda_{\varphi,w})_a$ because $mG_1 = m(\supp{f} \setminus E) \leq m([0,n) \setminus E) < \infty$ and $g$ is bounded on $G_1$. This implies that $S(g \chi_{G_1}) = 0$. Hence, $S(g) = S(g \chi_{G_1}) + S(g \chi_{G_2}) = S(g \chi_{G_2})$. Moreover, from (\ref{Ey}), we have $\left|\int_{E \setminus [n, \gamma)} hg\right| \leq \int_{E \setminus [n, \gamma)} |hg| \leq \int_E |hg| < \epsilon$.
It follows that
\begin{eqnarray*}
(1+ \epsilon)\|F\| \geq (1+ \epsilon) F\left(\frac{u}{1+\epsilon}\right) = F(u) &=& F(f \chi_{G_1} + g \chi_{G_2})\\ &=& \int_I h(f \chi_{G_1}+ g\chi_{G_2}) + S((f \chi_{G_1} + g\chi_{G_2})) \\ &=& \int_I hf \chi_{G_1} + \int_I hg\chi_ {G_2} +S(f \chi_{G_1}) + S(g\chi_{G_2}) \\ &=& \int_{\supp{f} \setminus E} hf +\int_{E \cup [n, \gamma)} hg + S(g\chi_{G_2})\\ &=& \int_I hf -\int_E hf +\int_{E \setminus [n, \gamma)} hg + \int_{[n, \gamma)} hg + S(g)\\
&>& \|h\|_{\mathcal{M}_{\varphi_*,w}}^0 - 2\epsilon - 2\epsilon + S(g)\,\,\, (\text{by (\ref{Ex}) and (\ref{Ey})})\\
&>& \|h\|_{\mathcal{M}_{\varphi_*, w}}^0 -2\epsilon -2\epsilon + \|S\| - \epsilon \,\,\, (\text{by (\ref{DN})})\\
&=& \|h\|_{\mathcal{M}_{\varphi_*, w}}^0 + \|S\| - 5\epsilon. \end{eqnarray*}
\noindent Letting $\epsilon \rightarrow 0$ completes the proof. \end{proof}
The sequence version below has an analogous (and simpler) proof, so we skip it.
\begin{Theorem}\label{th:luxseq} Suppose $\varphi$ is an $N$-function and let $F$ be a bounded linear functional on $\lambda_{\varphi,w}$. Then $F= H+S$, where $H(x) = \sum_{i=1}^{\infty} x(i) y(i)$ for some $y \in \mathfrak{m}^0_{\varphi_*,w}$, $\|H\| = \| y\|^0_{\mathfrak{m}_{\varphi_*,w}}$, $S$ is a singular functional vanishing on $(\lambda_{\varphi,w})_a$, and $\|F\|_{(\lambda_{\varphi,w})^*} = \|y\|^0_{\mathfrak{m}_{\varphi_*,w}} + \|S\|$. \end{Theorem}
As a consequence of Theorems \ref{th:lux} and \ref{th:luxseq} we obtain the following result.
\begin{Corollary}\label{cor:luxideal} If $\varphi$ does not satisfy the appropriate $\Delta_2$ condition then the order-continuous subspaces $(\Lambda_{\varphi,w})_a$ and $(\lambda_{\varphi,w})_a $ are non-trivial $M$-ideals of $\Lambda_{\varphi,w}$ and $\lambda_{\varphi,w}$, respectively. \end{Corollary}
Recall \cite{KR, KLR} that for an Orlicz $N$-function $\varphi$ and $h \in L^0$ we have \begin{equation}\label{form:1} P_{\varphi,w}(h) = \inf\left\{\int_I \varphi\left(\frac{h^*}{v}\right)v : v \prec w, v\downarrow\right\}=\int_I \varphi \left(\frac{(h^*)^0}{w}\right) w, \end{equation} and that a similar formula holds true for any sequence $x\in\ell^0$ \cite{KLR}. Hence, we have
\[ p_{\varphi,w}(h) = \sum_{i=1}^{\infty} \varphi\left(\frac{(h^*)^0(i)}{w(i)}\right)w(i). \]
Consider the decreasing simple function $h^* = \sum_{i=1}^n a_i \chi_{(t_{i-1}, t_i)}$ where $ a_1 > a_2 > \cdots > a_n >0$ and $t_0 = 0$. Let $H^*(a,b) = \int_a^b h^*$. By Algorithm A provided in \cite{KLR}, the maximal level intervals of $h^*$ are of the form $(t_{i_j}, t_{i_{j+1}})$, where $(t_{i_j})_{j=0}^{l}$ is a subsequence of $(t_i)_{i=0}^n$ with $0= t_0 = t_{i_0}< t_{i_1} < \cdots < t_{i_{l}} = t_n < \infty$. Then, we have
\begin{equation}\label{level} \frac{(h^*)^0}{w} = \frac{\sum_{j=0}^{l-1}R(t_{i_j}, t_{i_{j+1}})w\chi_{(t_{i_j}, t_{i_{j+1}})}}{w} = \sum_{j=0}^{l-1} R(t_{i_j}, t_{i_{j+1}}) \chi_{(t_{i_j}, t_{i_{j+1}})} =\sum_{j=0}^{l-1} \frac{H^*(t_{i_j}, t_{i_{j+1}})}{W(t_{i_j}, t_{i_{j+1}})}\chi_{(t_{i_j}, t_{i_{j+1}})} . \end{equation}
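Computationally, the maximal level intervals in (\ref{level}) can be found by a pool-adjacent-violators style merging: scan the intervals of $h^*$ from the left and merge adjacent blocks whenever the averaged ratios $H^*/W$ fail to decrease. The sketch below assumes that this merging reproduces Algorithm A of \cite{KLR} for step functions; the helper names `level_blocks` and `modular` are ours, not from the paper.

```python
def level_blocks(H, W):
    """Pool-adjacent-violators style merging for a decreasing step function.

    H[i] = integral of h* over the i-th interval, W[i] = integral of w.
    Adjacent blocks are merged while their ratios H/W fail to decrease,
    so the returned block ratios form a decreasing sequence.
    """
    blocks = []
    for hi, wi in zip(H, W):
        blocks.append([hi, wi])
        # merge while ratio of previous block <= ratio of last block,
        # compared in cross-multiplied form to avoid division
        while len(blocks) > 1 and \
                blocks[-2][0] * blocks[-1][1] <= blocks[-1][0] * blocks[-2][1]:
            h2, w2 = blocks.pop()
            blocks[-1][0] += h2
            blocks[-1][1] += w2
    return [tuple(b) for b in blocks]


def modular(phi, H, W):
    # P_{phi,w}(h) = sum over maximal level intervals of phi(H*/W) * W
    return sum(wi * phi(hi / wi) for hi, wi in level_blocks(H, W))
```

For instance, `level_blocks([4, 19], [2, 5])` merges the two intervals into the single block `(23, 7)` because the second ratio ($19/5$) exceeds the first ($2$), while `level_blocks([3, 1], [1, 1])` leaves the already decreasing ratios untouched.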
Observe that the sequence $(R(t_{i_j}, t_{i_{j+1}}))_{j=0}^{l-1}$ is decreasing since $\frac{(h^*)^0}{w}$ is decreasing (\cite{Hal}, Theorem 3.6). Furthermore, we obtain \[ P_{\varphi,w}(h) = \int_I \varphi \left(\sum_{j=0}^{l-1} \frac{H^*(t_{i_j}, t_{i_{j+1}})}{W(t_{i_j}, t_{i_{j+1}})}\chi_{(t_{i_j}, t_{i_{j+1}})}\right) w = \sum_{j=0}^{l-1} \varphi \left(\frac{H^*(t_{i_j}, t_{i_{j+1}})}{W(t_{i_j}, t_{i_{j+1}})} \right) \cdot W(t_{i_j}, t_{i_{j+1}}). \] The next lemma is a key ingredient for the computation of the norm of a bounded linear functional on $\Lambda_{\varphi,w}^0$ or $\lambda^0_{\varphi,w}$.
\begin{Lemma}\label{lem3} Let $h \in L^0$ be a non-negative simple function with support of finite measure. Then, there exists a non-negative simple function $v$ such that \begin{equation*} P_{\varphi_*, w}(h) = \int_I \varphi_* \left(\frac{h}{v}\right)v \,\,\, \text{and} \,\,\, \int_I \varphi\left(q\left(\frac{h}{v}\right)\right) v = \int_I \varphi\left(q\left(\frac{h}{v}\right)^*\right) w. \end{equation*}
A similar formula holds for the modular $p_{\varphi_*, w}(x)$ for any $x\in \ell^0$.
\end{Lemma}
\begin{proof} Let $h = \sum_{i=1}^n a_i \chi_{A_i}$ with $ a_1 > a_2 > \cdots > a_n >0$ and $\{A_i\}_{i=1}^n$ be a family of disjoint measurable subsets of $I$ with finite measure. Since $h$ and $h^*$ are equimeasurable, we see that $mA_i = t_i- t_{i-1}$ for $i=1,\dots,n$.
It is well known (see \cite{Hal} and \cite{KLR}) that each $(t_{i-1}, t_i)$ is a level interval of $h^*$, contained in at most one maximal level interval $(t_{i_j}, t_{i_{j+1}})$ for some $0 \leq j \leq l-1$. So, for every $j$, we have
\[ m(t_{i_j}, t_{i_{j+1}}) = m(\cup_{i_j < i \leq i_{j+1}}(t_{i-1}, t_i)) = m(\cup_{i_j < i \leq i_{j+1}}A_i), \]
\noindent and this implies
\begin{equation}\label{tc} H^*(t_{i_j}, t_{i_{j+1}}) = \int_{t_{i_j}}^{t_{i_{j+1}}} h^* = \sum_{i = i_j+1}^{i_{j+1}} \int_{t_{i-1}}^{t_i} a_i = \sum_{i = i_j+1}^{i_{j+1}} a_i(t_i - t_{i-1}) = \sum_{i_j < i \leq i_{j+1}} a_i mA_i. \end{equation}
\noindent By (\ref{level}), we have \begin{equation*} \frac{(h^*)^0}{w} =\sum_{j=0}^{l-1} \sum_{i_j < i \leq i_{j+1}} \frac{H^*(t_{i_j}, t_{i_{j+1}})}{W(t_{i_j}, t_{i_{j+1}})} \chi_{(t_{i-1}, t_i)} = \left(\sum_{j=0}^{l-1} \sum_{i_j < i \leq i_{j+1}} \frac{H^*(t_{i_j}, t_{i_{j+1}})}{W(t_{i_j}, t_{i_{j+1}})} \chi_{A_i}\right)^*. \end{equation*}
\noindent Hence, by right-continuity of $q$, we also have $q\left(\frac{(h^*)^0}{w}\right) = q\left(\sum_{j=0}^{l-1} \sum_{i_j < i \leq i_{j+1}} \frac{H^*(t_{i_j}, t_{i_{j+1}})}{W(t_{i_j}, t_{i_{j+1}})} \chi_{A_i} \right)^*$. Let $v= \sum_{j=0}^{l-1} \sum_{i_j < i \leq i_{j+1}} \frac{W(t_{i_j}, t_{i_{j+1}})}{H^*(t_{i_j}, t_{i_{j+1}})} a_i \chi_{A_i}$. Then, $q\left(\frac{(h^*)^0}{w}\right) = q\left(\frac{h}{v}\right)^*$. The functions $h$ and $v$ have the same support, so the quotient $h/v$ is set to zero outside of it.
Now, we compute $\int_I \varphi_* \left(\frac{h}{v}\right)v$ and $\int_I \varphi\left(q\left(\frac{h}{v}\right)\right) v$.
\begin{eqnarray*} \int_I \varphi_* \left(\frac{h}{v}\right)v &=& \int_I \varphi_* \left(\frac{\sum_{i=1}^n a_i \chi_{A_i}}{ \sum_{j=0}^{l-1} \sum_{i_j < i \leq i_{j+1}} \frac{W(t_{i_j}, t_{i_{j+1}})}{H^*(t_{i_j}, t_{i_{j+1}})} a_i \chi_{A_i}}\right) \cdot \sum_{j=0}^{l-1} \sum_{i_j < i \leq i_{j+1}} \frac{W(t_{i_j}, t_{i_{j+1}})}{H^*(t_{i_j}, t_{i_{j+1}})} a_i \chi_{A_i}\\ &=& \sum_{j=0}^{l-1} \sum_{i_j < i \leq i_{j+1}} \int_I \varphi_* \left(\frac{H^*(t_{i_j}, t_{i_{j+1}})}{W(t_{i_j}, t_{i_{j+1}})} \right) \cdot \frac{W(t_{i_j}, t_{i_{j+1}})}{H^*(t_{i_j}, t_{i_{j+1}})} a_i \chi_{A_i}\\ &=& \sum_{j=0}^{l-1} \varphi_* \left(\frac{H^*(t_{i_j}, t_{i_{j+1}})}{W(t_{i_j}, t_{i_{j+1}})} \right) \cdot \frac{W(t_{i_j}, t_{i_{j+1}})}{H^*(t_{i_j}, t_{i_{j+1}})} \sum_{i_j < i \leq i_{j+1}} a_i \cdot mA_i\\ &=& \sum_{j=0}^{l-1} \varphi_* \left(\frac{H^*(t_{i_j}, t_{i_{j+1}})}{W(t_{i_j}, t_{i_{j+1}})} \right) \cdot W(t_{i_j}, t_{i_{j+1}})\,\,\, \text{(by (\ref{tc}))}\\ &=& P_{\varphi_*,w}(h). \end{eqnarray*}
\noindent and \begin{eqnarray*} \int_I \varphi\left(q\left(\frac{h}{v}\right)\right)v &=& \int_I \varphi\left(q\left(\frac{\sum_{i=1}^n a_i \chi_{A_i}}{ \sum_{j=0}^{l-1} \sum_{i_j < i \leq i_{j+1}} \frac{W(t_{i_j}, t_{i_{j+1}})}{H^*(t_{i_j}, t_{i_{j+1}})} a_i \chi_{A_i}}\right)\right) \cdot \sum_{j=0}^{l-1} \sum_{i_j < i \leq i_{j+1}} \frac{W(t_{i_j}, t_{i_{j+1}})}{H^*(t_{i_j}, t_{i_{j+1}})} a_i \chi_{A_i}\\ &=& \sum_{j=0}^{l-1} \sum_{i_j < i \leq i_{j+1}} \int_I \varphi \left(q\left(\frac{H^*(t_{i_j}, t_{i_{j+1}})}{W(t_{i_j}, t_{i_{j+1}})} \right)\right) \cdot \frac{W(t_{i_j}, t_{i_{j+1}})}{H^*(t_{i_j}, t_{i_{j+1}})} a_i \chi_{A_i}\\ &=& \sum_{j=0}^{l-1} \varphi\left(q\left(\frac{H^*(t_{i_j}, t_{i_{j+1}})}{W(t_{i_j}, t_{i_{j+1}})} \right)\right) \cdot \frac{W(t_{i_j}, t_{i_{j+1}})}{H^*(t_{i_j}, t_{i_{j+1}})} \sum_{i_j < i \leq i_{j+1}} a_i \cdot mA_i\\ &=& \sum_{j=0}^{l-1} \int_I \varphi\left(q\left(\frac{H^*(t_{i_j}, t_{i_{j+1}})}{W(t_{i_j}, t_{i_{j+1}})} \right)\right) \cdot w\chi_{(t_{i_j}, t_{i_{j+1}})} \,\,\, (\text{by (\ref{tc})}) \\ &=& \int_I \varphi\left(q\left(\sum_{j=0}^{l-1} \frac{H^*(t_{i_j}, t_{i_{j+1}})}{W(t_{i_j}, t_{i_{j+1}})}\chi_{(t_{i_j}, t_{i_{j+1}})}\right)\right) w\\ &=& \int_I \varphi \left(q \left(\frac{(h^*)^0}{w}\right) \right) w = \int_I \varphi \left(q\left(\frac{h}{v}\right)^*\right)w. \end{eqnarray*}
\end{proof}
Now, we are ready to compute the norm of a bounded linear functional on $\Lambda_{\varphi,w}^0$.
\begin{Theorem}\label{Orlicz} Let $\varphi$ be an Orlicz $N$-function and $F$ be a bounded linear functional on $\Lambda^0_{\varphi,w}$. Then $F= H + S$, where $H(f) = \int_I fh$ for some $h\in \mathcal{M}_{\varphi_*,w}$,
$\|H\|= \|h\|_{\mathcal{M}_{\varphi_*,w}}$, $S(f)=0$
for all $f\in (\Lambda_{\varphi,w})_a$, and $\|F\| = \inf\{\lambda>0 : P_{\varphi_*,w}(\frac{h}{\lambda}) + \frac{1}{\lambda}\|S\| \leq 1\}$. \end{Theorem}
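Since $\lambda \mapsto P_{\varphi_*,w}(h/\lambda) + \frac{1}{\lambda}\|S\|$ is nonincreasing, the infimum in the norm formula can be located numerically by a simple bisection. The sketch below is an illustration only; the callable `modular_of_scaled` (returning $P_{\varphi_*,w}(h/\lambda)$) and the test data are our assumptions, not part of the theorem.

```python
def dual_norm(modular_of_scaled, S_norm, tol=1e-9):
    """inf{lam > 0 : P(h/lam) + S_norm/lam <= 1} by bisection.

    The map lam -> modular_of_scaled(lam) + S_norm/lam is nonincreasing,
    so the feasible set is a half-line [lam*, infinity).
    """
    def feasible(lam):
        return modular_of_scaled(lam) + S_norm / lam <= 1.0

    hi = 1.0
    while not feasible(hi):   # grow until inside the feasible half-line
        hi *= 2.0
    lo = 0.0
    while hi - lo > tol:      # bisect down to the boundary lam*
        mid = (lo + hi) / 2.0
        if feasible(mid):
            hi = mid
        else:
            lo = mid
    return hi
```

For example, with $\varphi_*(u) = u^2$, $w \equiv 1$ on $(0,1)$ and $h \equiv 1$, one has $P_{\varphi_*,w}(h/\lambda) = 1/\lambda^2$, and the condition $1/\lambda^2 + \|S\|/\lambda \le 1$ gives $\|F\| = (\|S\| + \sqrt{\|S\|^2 + 4})/2$.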
\begin{proof}
As in Theorem \ref{th:lux}, we have $F = H + S$, where $H(f) = \int_I hf$ for some $h \in \mathcal{M}_{\varphi_*,w}$ with $\|H\| = \|h\|_{\mathcal{M}_{\varphi_*,w}}$ and $S(f) = 0$ for all $f \in (\Lambda_{\varphi,w}^0)_a$ in view of Theorem \ref{th:01}. Thus, we only need to show the formula for $\|F\|$. Without loss of generality, assume $\|F\| = 1$. Let $f \in S_{\Lambda_{\varphi,w}^0}$. Since $h \in \mathcal{M}_{\varphi_*,w}$, we have $P_{\varphi_*,w}(\frac{h}{\lambda}) < 1$ for some $\lambda>0$. So, we can choose $\lambda>0$ such that $P_{\varphi_*,w}(\frac{h}{\lambda})+ \frac{1}{\lambda} \|S\| \leq 1$. Let $k \in K(f)$. By Theorem \ref{WC}.(3), $1 = \|f\|^0 = \frac{1}{k}(1 + \rho_{\varphi,w}(kf))$, and this implies that $\rho_{\varphi,w}(kf) < \infty$. For every $v \prec w,\, v \downarrow$, we have
\[ \frac{1}{\lambda}F(kf) = \frac{1}{\lambda}\left(\int_I khf + S(kf)\right) \leq \frac{1}{\lambda} \left(\int_I kh^*f^* + S(kf)\right) = \int_I \frac{kh^*f^*v}{\lambda v} + \frac{1}{\lambda} S(kf). \]
\noindent By Young's inequality, we see that $\int_I \frac{kh^*f^*v}{\lambda v} + \frac{1}{\lambda} S(kf) \leq \int_I \varphi(kf^*)v + \int_I \varphi_*\left(\frac{h^*}{\lambda v}\right)v + \frac{1}{\lambda}S(kf)$. Since this holds for all $v \prec w$, $v\downarrow$, by (\ref{form:1}) and Hardy's lemma (\cite{BS}, Proposition 3.6) we get
\[ \frac{1}{\lambda}F(kf) \leq \int_I \varphi(kf^*)v + \int_I \varphi_*\left(\frac{h^*}{\lambda v}\right)v + \frac{1}{\lambda}S(kf) \leq\rho_{\varphi,w}(kf) + P_{\varphi_*,w}\left(\frac{h}{\lambda}\right) + \frac{1}{\lambda}S(kf). \]
\noindent Furthermore, $S(kf) \leq \|S\|$ because $\rho_{\varphi,w}(kf) < \infty$. Hence,
\[
\frac{1}{\lambda}F(kf) \leq \rho_{\varphi,w}(kf) + P_{\varphi_*,w}\left(\frac{h}{\lambda}\right) + \frac{1}{\lambda}\|S\| \leq 1+ \rho_{\varphi,w}(kf), \]
\noindent which implies that $F(f) \leq \lambda \cdot \frac{1}{k}(1 + \rho_{\varphi,w}(kf)) \leq \lambda \|f\|^0=\lambda$. Since $f$ and $\lambda$ are arbitrary, we showed that $\|F\| \leq \inf\{\lambda>0 : P_{\varphi_*,w}(\frac{h}{\lambda}) + \frac{1}{\lambda} \|S\| \leq 1 \}$.
Now, suppose that
\[
1= \|F\| < \inf\{\lambda>0 : P_{\varphi_*,w}\left(\frac{h}{\lambda}\right) + \frac{1}{\lambda}\|S\| \leq 1\}. \]
\noindent Then, there exists $\delta>0$ such that
\[
P_{\varphi_*,w}(h) + \|S\| > 1 + 3\delta. \]
\noindent From Theorem \ref{theta}, $\|S\| = \sup\{S(f): \rho_{\varphi,w}(f) < \infty\}$. So, there exists $f \in \Lambda_{\varphi,w}^0$ such that $\rho_{\varphi,w}(f) < \infty$ and $\|S\| < S(f) + \delta$. This implies that
\[
P_{\varphi_*,w}(h) + S(f)+ \delta >P_{\varphi_*,w}(h) + \|S\| > 1 + 3\delta, \]
\noindent and so
\[ P_{\varphi_*,w}(h) + S(f) > 1 + 2\delta. \]
Without loss of generality, let $h \geq 0$. Let $(h_n)_{n=1}^{\infty}$ be a sequence of simple functions with support of finite measure such that $h_n \uparrow h$. By Lemma 4.6 in \cite{KR}, we get $P_{\varphi_*,w}(h_n) \uparrow P_{\varphi_*,w}(h)$. Hence, there exists a non-negative simple function $h_0$ with $m(\supp{h_0}) < \infty$ such that $0\le h_0 \le h$ a.e. and
\[ P_{\varphi_*,w}(h) < P_{\varphi_*,w}(h_0) + \delta. \]
\noindent This implies that
\[ P_{\varphi_*,w}(h_0) + S(f) > P_{\varphi_*,w}(h) + S(f) - \delta > 1 + 2\delta - \delta = 1 + \delta. \]
Now, consider the functions $f_n = f \chi_{\{\frac{1}{n}\leq |f| \leq n\}}$. Then $|f-f_n| \downarrow 0$ a.e. Hence, we have $(f- f_n)^* \rightarrow 0$, and so $\rho_{\varphi,w}(f-f_n) \downarrow 0$ by the Lebesgue dominated convergence theorem. Since $H$ is a bounded linear functional on $\Lambda_{\varphi,w}^0$, we have $\int_I |f-f_n| h \le \int_I |f| h \le \|H\|\|f\|^0< \infty$, and so $\int_I |f-f_n| h \to 0$. For $\delta > 0$, there exists $N_0$ such that for $n\ge N_0$, we have
\[ \rho_{\varphi,w}(f-f_n) \le 1 \ \ \ \text{and} \ \ \
\int_I |f-f_n| h < \frac{\delta}{8}. \]
\noindent Let $g_1 = f-f_n$ for some $n\ge N_0$. The function $f_n$ is bounded with support of finite measure since $\supp{f_n} \subset \{|f| > \frac{1}{n}\}$ and $m\{|f| > \frac{1}{n}\} < \infty$. Thus, we have $S(f) = S(g_1) + S(f_n) = S(g_1)$ and
\begin{equation}\label{eq1}
\rho_{\varphi,w}(g_1) \leq 1, \,\,\, \int_I |g_1| h < \frac{\delta}{8}, \ \ \text{and} \,\,\, P_{\varphi_*,w}(h_0) + S(g_1) > 1 + \delta. \end{equation}
Let $v$ be the non-negative simple function constructed in Lemma \ref{lem3} for $h_0$. By Young's equality, we obtain
\[ \int_I q\left(\frac{h_0}{v}\right)h_0 = \int_I q\left(\frac{h_0}{v}\right)\frac{h_0}{v} v = \int_I \varphi\left(q\left(\frac{h_0}{v}\right)\right)v + \int_I \varphi_*\left(\frac{h_0}{v}\right)v. \]
\noindent Let $g_2 = q\left(\frac{h_0}{v}\right)$. It is a simple function with support of finite measure, so $g_2 \in (\Lambda_{\varphi,w}^0)_a$. In view of Lemma \ref{lem3}, we get
\begin{equation}\label{eq2} P_{\varphi_*,w}(h_0) = \int_I \varphi_*\left(\frac{h_0}{v}\right)v = \int_I q\left(\frac{h_0}{v}\right)h_0 - \int_I \varphi\left(q\left(\frac{h_0}{v}\right)\right)v = \int_I g_2 h_0 - \int_I \varphi(g_2^*)w. \end{equation}
The function $g_2h$ is integrable. So, there exists $\eta>0$ such that for any measurable subset $E \subset I$ with $mE < \eta$, we have $\int_E |g_2h| < \frac{\delta}{2}$. We will now show that for $\delta>0$, there exist $n \in \mathbb{N}$ and $E \subset I$ such that $mE <\eta$, \begin{equation}\label{eq3}
\int_0^{mE} \varphi (g_1^*) w < \frac{\delta}{4}, \ \ \int_E |g_2 h| < \frac{\delta}{2}, \ \ \text{and} \ \ \rho_{\varphi,w}(g_1 \chi_{[n, \gamma)}) = \int_I\varphi((g_1 \chi_{[n, \gamma)})^*)w < \frac{\delta}{8}. \end{equation}
\noindent Let $E_n=\{g_1^*> n\}= [0, t_n)$. We see that $g_1^* \chi_{E_n} \leq g_1^*$ for all $n$ and $g_1^* \chi_{E_n} \rightarrow 0$ a.e. By the Lebesgue dominated convergence theorem, for $\delta > 0$, there exists $N_1$ such that for all $n \geq N_1$,
\[ \int_I \varphi(g_1^*\chi_{E_n})w = \int_0^{mE_n} \varphi(g_1^*)w< \frac{\delta}{4}. \]
\noindent Since $g_1$ and $g_1^*$ are equimeasurable, we have $m\{|g_1|>n\} = m\{g_1^* > n\} = mE_n$ for all $n$. Choose $n > N_1$ such that $mE_n < \eta$, $\supp{h_0} \cap [n, \gamma) = \emptyset$, and $\rho_{\varphi,w}(g_1 \chi_{[n, \gamma)}) = \int_I\varphi((g_1 \chi_{[n, \gamma)})^*)w < \frac{\delta}{8}$. Finally, letting $E = \{|g_1|>n\}$ for such $n$, we obtain $n \in \mathbb{N}$ and a measurable subset $E \subset I$ satisfying (\ref{eq3}). Note that $\supp{h_0} \subset [0, n)$.
Now, we define \[\bar{u}(t) = \begin{cases} g_2(t), & t \in A_1 = \supp{h_0} \setminus E\\ g_1(t),& t \in A_2 = E \cup [n, \gamma) \\ 0, & \text{otherwise}.
\end{cases} \]
\noindent The function $g_1$ is bounded on the set $A_2^c$. Moreover, $A_2^c$ is a subset of $[0,n)$. So, $g_1\chi_{A_2^c} \in (\Lambda_{\varphi,w}^0)_a$, and this implies that $S(g_1) = S(g_1 \chi_{A_2})$. Since $g_2$ is a simple function with support of finite measure, $S(g_2\chi_{A_1}) = 0$. By orthogonal subadditivity of $\rho_{\varphi,w}$, we get
\[ \rho_{\varphi,w}(\bar{u}) \leq \rho_{\varphi,w}(g_2 \chi_{A_1}) + \rho_{\varphi,w}(g_1 \chi_{A_2}) \leq \rho_{\varphi,w}(g_2 \chi_{A_1}) + \rho_{\varphi,w}(g_1 \chi_E) + \rho_{\varphi,w}(g_1\chi_{[n, \gamma)}), \]
\noindent and by (\ref{eq3}), we have
\[ \rho_{\varphi,w}(\bar{u}) < \rho_{\varphi,w}(g_2 \chi_{A_1}) + \rho_{\varphi,w}(g_1 \chi_E) + \frac{\delta}{8}. \]
\noindent Hence, we see that
\begin{equation}\label{eq6} \int_I \bar{u}h + S(\bar{u}) - \rho_{\varphi,w}(\bar{u}) \geq \int_{A_1} g_2 h + \int_{A_2} g_1 h + S(g_1) - \rho_{\varphi, w} (g_2\chi_{A_1}) - \rho_{\varphi,w}(g_1 \chi_E) - \frac{\delta}{8}. \end{equation}
\noindent Since $g_2 \ge 0$ and $h \ge h_0 \ge 0$, we have
\[ \int_{A_1} g_2 h\ge \int_{A_1} g_2 h_0 = \int_{I \setminus E} g_2 h_0. \]
\noindent Also, in view of (\ref{eq1}) and (\ref{eq3}), we see that
\[
\int_{A_2} |g_1 h| < \int_I |g_1 h| < \frac{\delta}{8} \ \ \ \text{and} \ \ \ \int_E g_2h_0 \leq \int_E g_2h < \frac{\delta}{2}. \]
\noindent Then, the inequality (\ref{eq6}) becomes
\[ \int_I h\bar{u} + S(\bar{u}) - \rho_{\varphi,w}(\bar{u}) \ge \int_{I \setminus E} g_2 h_0 -\frac{\delta}{4} + S(g_1) - \rho_{\varphi, w} (g_2\chi_{A_1}) - \rho_{\varphi,w}(g_1\chi_{E}). \]
\noindent Hence, we obtain \begin{eqnarray*} \int_I \bar{u}h + S(\bar{u}) - \rho_{\varphi,w}(\bar{u}) &\geq& \int_{I \setminus E} g_2 h_0 -\frac{\delta}{4} + S(g_1) - \rho_{\varphi, w} (g_2\chi_{A_1}) - \rho_{\varphi,w}(g_1\chi_{E}) \\ &\geq& \int_I g_2 h_0 - \int_E g_2 h_0 - \frac{\delta}{4} + S(g_1) - \rho_{\varphi, w} (g_2) - \rho_{\varphi,w}(g_1 \chi_{E})\\ &\geq& \int_I g_2 h_0 - \int_E g_2 h_0 - \frac{\delta}{4}+ S(g_1) - \rho_{\varphi, w} (g_2) - \frac{\delta}{4}\,\,\, (\text{by (\ref{eq3})})\\ &=& P_{\varphi_*,w}(h_0) -\int_E g_2h_0 + S(g_1) - \frac{\delta}{2} \,\,\, (\text{by (\ref{eq2})})\\ &\geq& P_{\varphi_*,w}(h_0)- \frac{\delta}{2} + S(g_1) - \frac{\delta}{2} \\ &>& 1+ \delta - \delta = 1 \,\,\, (\text{by (\ref{eq1})}). \end{eqnarray*}
\noindent Finally, this implies that \[
1 = \|F\| \geq F\left(\frac{\bar{u}}{\|\bar{u}\|^0}\right) = \frac{H(\bar{u}) + S(\bar{u})}{\|\bar{u}\|^0} = \frac{\int_I \bar{u}h+ S(\bar{u})}{\|\bar{u}\|^0}> \frac{1 + \rho_{\varphi,w}(\bar{u})}{\|\bar{u}\|^0} > 1, \] \noindent which leads to a contradiction.
\end{proof}
The next result is the sequence analogue: the formula for the norm of a bounded linear functional on $\lambda^0_{\varphi,w}$.
\begin{Theorem}\label{Orliczseq}
If $\varphi$ is an Orlicz $N$-function and $F$ is a bounded linear functional on $\lambda^0_{\varphi,w}$, then $F = H+S$, where $H(x) = \sum_{i=1}^{\infty} x(i) y(i)$, $ \|H\| = \| y\|_{\mathfrak{m}_{\varphi_*,w}}$, $S$ is a singular functional vanishing on $(\lambda_{\varphi,w})_a$ and $\|F\| = \inf\{\eta>0 : p_{\varphi_*,w}(\frac{y}{\eta}) + \frac{1}{\eta}\|S\| \leq 1\}$. \end{Theorem}
Contrary to Corollary \ref{cor:luxideal} about $M$-ideals in the Orlicz-Lorentz spaces equipped with the Luxemburg norm, we conclude this paper by showing that $(\Lambda_{\varphi,w}^0)_a$ and $(\lambda_{\varphi,w}^0)_a$ are not $M$-ideals in $\Lambda_{\varphi,w}^0$ and $\lambda_{\varphi,w}^0$ respectively, if the Orlicz $N$-function $\varphi$ does not satisfy the appropriate $\Delta_2$ condition.
\begin{Corollary} Let $\varphi$ be an Orlicz $N$-function which does not satisfy the appropriate $\Delta_2$ condition. Then the order-continuous subspaces $(\Lambda_{\varphi,w}^0)_a$ and $(\lambda_{\varphi,w}^0)_a$ are not $M$-ideals in $\Lambda_{\varphi,w}^0$ and $\lambda_{\varphi,w}^0$, respectively. \end{Corollary}
\begin{proof}
We give a proof only in the case of function spaces.
Let $\varphi$ be an Orlicz $N$-function which does not satisfy the appropriate $\Delta_2$ condition. Then $(\Lambda_{\varphi,w}^0)_a$ is a proper subspace of $\Lambda_{\varphi,w}^0$, and in view of Theorem \ref{Orlicz} there exists a nonzero singular functional $S \in (\Lambda_{\varphi,w}^0)^*$. Rescaling if necessary, choose such $S$ with $0 < \|S\| < 1$. We show that there exist $u>0$ and $0 < t_0 < \gamma$ such that $h = uw\chi_{(0,t_0)}$ and $\|h\|_{\mathcal{M}_{\varphi_*,w}} + \|S\| =1$. Indeed, choose $u$ satisfying $\varphi_*(u) > 1/W(\gamma)$, where $1/W(\infty) = 0$. Then $\frac{1}{\varphi_*(u/(1-\|S\|))} < W(\gamma)$. Since $W$ is continuous on $(0, \gamma)$, there exists $0 < t_0 < \gamma$ such that $W(t_0) = \frac{1}{\varphi_*(u/(1-\|S\|))}$. Let $h = uw\chi_{(0,t_0)}$ for such $u$ and $t_0$. Clearly $h$ is a decreasing function. Furthermore, the interval $(0, t_0)$ is its maximal level interval since $R(0, t) = \frac{uW(t)}{W(t)} = u = R(0,t_0)$ for all $0 < t < t_0$, and $R(0,t) < R(0,t_0)$ for $t_0 < t < \gamma$. Hence $\frac{h^0}{w} = u \chi_{(0, t_0)}$, and so $P_{\varphi_*,w}(h) = \int_I \varphi_*\left(\frac{h^0}{w}\right)w = \varphi_*(u)W(t_0)$. It follows that
\begin{eqnarray*}
\|h\|_{\mathcal{M}_{\varphi_*,w}} &=& \inf\left\{\epsilon > 0 : P_{\varphi_*,w}\left(\frac{h}{\epsilon}\right) \leq 1\right\} = \inf\left\{\epsilon > 0 : \varphi_* \left(\frac{u}{\epsilon}\right) \leq \frac{1}{W(t_0)}\right\}\\
&=& \inf\left\{\epsilon > 0 : \varphi_*\left(\frac{u}{\epsilon}\right) \leq \varphi_*\left(\frac{u}{1-\|S\|}\right)\right\}
= \inf\{\epsilon > 0 : \epsilon \geq 1 - \|S\|\} = 1 - \|S\|. \end{eqnarray*}
Thus, we have $\|h\|_{\mathcal{M}_{\varphi_*,w}} + \|S\| = 1$, which implies that $P_{\varphi_*,w} (\frac{h}{1 - \|S\|}) \leq 1$. Now, since $\varphi$ is an $N$-function, $\varphi_*$ is also an $N$-function, and so $\varphi_*$ is not identical to a linear function $ku$ for any $k>0$. Hence for all $u>0$ and $\lambda> 1$ we have $\varphi_*(\lambda u) > \lambda \varphi_*(u)$. Therefore, since $\frac{1}{1-\|S\|} > 1$, \[
1 \geq P_{\varphi_*,w}\left(\frac{h}{1-\|S\|}\right) = \varphi_*\left(\frac{u}{1-\|S\|}\right)W(t_0) > \frac{1}{1 - \|S\|} P_{\varphi_*,w}(h), \]
which shows that
\begin{equation}\label{eq:00}
P_{\varphi_*,w}(h) < 1 - \|S\| = \|h\|_{\mathcal{M}_{\varphi_*,w}}.
\end{equation}
On the other hand, if we assume that $(\Lambda_{\varphi,w}^0)_a$ is an $M$-ideal of $\Lambda_{\varphi,w}^0$, then $1 =\|H + S\| = \|h\|_{\mathcal{M}_{\varphi_*,w}} + \|S\| \geq P_{\varphi_*,w}(h) + \|S\|$. It follows that $P_{\varphi_*,w}(h) + \|S\| = 1$. Indeed, suppose that $P_{\varphi_*,w}(h) + \|S\| < 1$. Define $g(\lambda) = P_{\varphi_*,w}(\lambda h) + \lambda \|S\|$ for $\lambda > 0$. The function $g$ is convex, $g(0) = 0$, and $\lim_{\lambda \rightarrow \infty} g(\lambda) = \infty$. Since $g(1) = P_{\varphi_*,w}(h) + \|S\| < 1$, there exists $\lambda_0 < 1$ such that $P_{\varphi_*,w}\left(\frac{h}{\lambda_0}\right) + \frac{1}{\lambda_0}\|S\| = 1$. But then, from Theorem \ref{Orlicz}, we have $1 = \|H+S\| = \inf\{\lambda>0 : P_{\varphi_*,w}(\frac{h}{\lambda}) + \frac{1}{\lambda}\|S\| \leq 1\} \leq \lambda_0 < 1$, which is a contradiction.
However, the equality $P_{\varphi_*,w}(h) + \|S\| = 1$ contradicts (\ref{eq:00}), which completes the proof.
\end{proof}
\end{document} |
\begin{document}
\begin{abstract}
According to the Second Law of thermodynamics, the evolution of physical systems has a preferred direction,
which is characterized by a positive entropy production. Here we propose a direct way to measure the
stochastic entropy produced while driving a quantum open system out of thermal equilibrium.
The driving work is provided by a quantum battery, the system and the battery forming an autonomous machine. We show that the battery's energy fluctuations
coincide with the work fluctuations and satisfy Jarzynski's equality. Since these energy fluctuations are measurable, the battery behaves as an embedded quantum work meter and the machine obeys a generalized fluctuation theorem involving the information encoded in the battery. Our proposal can be implemented
with state-of-the-art opto-mechanical systems. It paves the way towards the experimental demonstration
of fluctuation theorems in quantum open systems. \end{abstract}
\title{An autonomous quantum machine to measure the thermodynamic arrow of time} \author{Juliette Monsel} \email{[email protected]} \affiliation{Univ. Grenoble Alpes, CNRS, Grenoble INP, Institut N\'eel, 38000 Grenoble, France} \author{Cyril Elouard} \affiliation{Univ. Grenoble Alpes, CNRS, Grenoble INP, Institut N\'eel, 38000 Grenoble, France} \affiliation{Department of Physics and Astronomy, University of Rochester, Rochester, New York 14627, USA} \author{Alexia Auff\`eves} \email{[email protected]} \affiliation{Univ. Grenoble Alpes, CNRS, Grenoble INP, Institut N\'eel, 38000 Grenoble, France} \date{\today} \pacs{42.50.-p, 05.70.-a} \keywords{quantum thermodynamics, quantum optics, opto-mechanics}
\maketitle
\noindent Irreversibility is a fundamental feature of our physical world. The degree of irreversibility of thermodynamic transformations is measured by the entropy production, which is always positive according to the Second Law. At the microscopic level, stochastic thermodynamics \cite{sekimoto, thermo_stoc_classique} has extended this concept to characterize the evolution of small systems coupled to reservoirs and driven out of equilibrium. Such systems follow stochastic trajectories $\rightvect{\Sigma}$ and the stochastic entropy production $\ds\Traj$ obeys the integral fluctuation theorem (IFT) $\ev{ \exp(-\ds\Traj)}_{\rightvect{\Sigma}} = 1 $, where $\ev{\cdot}_{\rightvect{\Sigma}}$ denotes the average over all trajectories $\rightvect{\Sigma}$. Jarzynski's equality (JE) \cite{jarzynski} is a paradigmatic example of such an IFT, which constrains the fluctuations of the entropy produced while driving some initially thermalized system out of equilibrium. Experimental demonstrations of JE require, in particular, the ability to measure the stochastic work $W\Traj$ exchanged with the external entity driving the system. In the classical regime, $W\Traj$ can be completely reconstructed from the monitoring of the system's trajectory, allowing for successful experimental demonstrations \cite{jarz_elec, jarz_osc_macro,jarzynski_classique_2018}.
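As a minimal numerical illustration of how JE constrains work fluctuations, consider a classical two-level toy model with a sudden quench of the energy gap (this is an illustration only, not the opto-mechanical protocol studied below; the function name and parameters are ours). One can check $\ev{e^{-\beta W}} = e^{-\beta \Delta F} = Z_1/Z_0$ by direct sampling:

```python
import math
import random

def jarzynski_check(beta, E0, E1, n_samples=100000, seed=1):
    """Sudden quench of a two-level system's gap E0 -> E1.

    The system starts thermalized at inverse temperature beta; the work in
    a single realization is the energy jump of the occupied level, so
    W = E1 - E0 if the excited state is occupied and W = 0 otherwise.
    Returns (<exp(-beta W)>, Z1/Z0); JE predicts the two are equal.
    """
    p_exc = 1.0 / (1.0 + math.exp(beta * E0))  # thermal excited population
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_samples):
        excited = rng.random() < p_exc
        W = (E1 - E0) if excited else 0.0
        acc += math.exp(-beta * W)
    Z0 = 1.0 + math.exp(-beta * E0)
    Z1 = 1.0 + math.exp(-beta * E1)
    return acc / n_samples, Z1 / Z0  # Z1/Z0 = exp(-beta * Delta F)
```

The two returned quantities agree within the statistical error of the sampling, even though individual work values fluctuate between $0$ and $E_1 - E_0$.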
Defining and measuring the entropy production in the quantum regime is of fundamental interest with a view to optimizing the performance of quantum heat engines and the energetic cost of quantum information technologies \cite{Mancini_arxiv, Mancini_npjQI, Santos2017a,Francica2017}. However, measuring a quantum fluctuation theorem can be problematic in the genuinely quantum situation of a coherently driven quantum system, because of the fundamental and practical issues in defining and measuring quantum work \cite{TH2016, BaumerChapter, engel_jarzynski_2007,RMPCampisi}. So far, the quantum JE has been extended and experimentally verified in {\it closed} quantum systems, i.e. systems that are driven but otherwise isolated. In this case work corresponds to the change in the system's internal energy, accessible by a two-point measurement protocol \cite{TH2016} or the measurement of its characteristic function \cite{Mazzola,Dorner,DeChiaraChapter}. Experimental demonstrations have been realized, e.g. with trapped ions \cite{an_experimental_2015,jarzynski_quantique_2018}, ensembles of cold atoms \cite{cerisola_using_2017}, and spins in Nuclear Magnetic Resonance (NMR) \cite{Serra2014}, where the thermodynamic arrow of time was successfully measured \cite{Serra2015}.
On the other hand, realistic strategies must still be developed to measure the fluctuations of entropy production for quantum {\it open} systems, i.e. systems that can be simultaneously driven and coupled to reservoirs. Since work is usually assumed to be provided by a classical entity, most theoretical proposals so far have relied on the measurement of {\it heat} fluctuations, i.e. small energy changes of the reservoir. Experimentally, this requires engineering the reservoir and developing high-efficiency detection schemes, which is very challenging \cite{pekola, Horowitz12, elouard_probing_2017}. Experimental demonstrations have remained elusive.
In this article, we propose a new and experimentally feasible strategy to measure the thermodynamic arrow of time for a quantum open system in Jarzynski's protocol, which is based on the direct measurement of {\it work} fluctuations. We investigate a so-called hybrid opto-mechanical system \cite{Treutlein}, which consists of a two-level system (further called a qubit) strongly coupled to a mechanical oscillator (MO) on the one hand, and to a thermal bath on the other hand. Studying single quantum trajectories of the hybrid system, we show that the MO and the qubit remain in a product state all along their joint evolution, allowing us to unambiguously define their stochastic energies. We evidence that the mechanical energy fluctuations can be identified with the stochastic work received by the qubit and satisfy JE. Therefore the MO plays the role of a quantum battery, the ensemble of the qubit and the battery forming an autonomous machine \cite{frenzel_quasi-autonomous_2016,tonner_autonomous_2005,holmes_coherent_2018}. Remarkably, the battery behaves as an embedded quantum work meter, encoding information on the stochastic work exchanges. We show that the evolution of the complete machine is characterized by a generalized IFT, which quantitatively involves the amount of extracted information. This situation gives rise to so-called absolute irreversibility, in agreement with recent theoretical predictions and experimental results \cite{murashita_nonequilibrium_2014,funo2015,Ueda_PRA,Nakamura_arXiv}. Our proposal is robust against finite measurement precision \cite{lahaye_approaching_2004, schliesser_resolved-sideband_2009} and can be probed with state-of-the-art experimental devices.
The paper is organized as follows. Firstly, we introduce hybrid opto-mechanical devices as autonomous machines, and build the framework to model their evolution on average and at the single-trajectory level. Focusing on Jarzynski's protocol, we define stochastic heat, work and entropy production and study the regime of validity and robustness of JE as a function of the parameters of the problem and experimental imperfections. Finally, we derive and simulate an IFT for the complete machine, evidencing the presence of absolute irreversibility. Our results demonstrate that work fluctuations can be measured directly by monitoring the energetic fluctuations of the quantum battery. They represent an important step towards the experimental demonstration of a quantum fluctuation theorem in a quantum open system.
\section*{Results} \subsection*{Hybrid opto-mechanical systems as autonomous machines.} A hybrid opto-mechanical system consists of a qubit with ground (resp. excited) state $\ket{g}$ (resp. $\ket{e}$) and transition frequency $\omega_0$, parametrically coupled to a mechanical oscillator of frequency $\Omega \ll \omega_0$ (see Fig.~\ref{fig1}a). Recently, physical implementations of such hybrid systems have been realized on various platforms, e.g. superconducting qubits embedded in oscillating membranes \cite{hybrid_circuit}, nanowires coupled to diamond nitrogen vacancies \cite{NV_defect}, or to semiconductor quantum dots \cite{trompette}. The complete Hamiltonian of the hybrid system reads $H_{\text{qm}} = H_{\text{q}} + H_{\text{m}} + V_{\text{qm}}$ \cite{Treutlein}, where $H_{\text{q}} = \hbar\omega_0 \dyad{e}{e} \otimes \mathbf{1}_{\text{m}}$ and $H_{\text{m}} = \mathbf{1}_{\text{q}} \otimes \hbar\Omega b^\dagger b$ are the qubit and MO free Hamiltonians, respectively. We have introduced the phonon annihilation operator $b$, and $ \mathbf{1}_{\text{m}}$ (resp. $ \mathbf{1}_{\text{q}}$) the identity on the MO (resp. qubit) Hilbert space. The coupling Hamiltonian is $V_{\text{qm}} = \hbar \gm\dyad{e}{e} \otimes (b + b^\dagger)$, where $\gm$ is the qubit-mechanical coupling strength. Of special interest for the present paper, the so-called ultra-strong coupling regime is defined as $\gm \geq \Omega$, with $\omega_0 \gg \gm$. It was recently demonstrated experimentally \cite{trompette}.
The Hamiltonian of the hybrid system can be fruitfully rewritten $H_{\text{qm}} = \dyad{e} \otimes H_\text{m}^{e} + \dyad{g} \otimes H_\text{m}^{g}$ with $H_\text{m}^{g} = \hbar \Omega b^\dagger b$ and $H_\text{m}^{e} = \hbar \Omega B^\dagger B + \hbar (\omega_0 - \gm^2/\Omega)\mathbf{1}_{\text{m}}$, with $B = b + (\gm/\Omega)\mathbf{1}_{\text{m}}$. It appears that the qubit bare energy states $\epsilon=e,g$ are stable under the dynamics and perfectly determine the evolution of the MO ruled by the Hamiltonian $H_\text{m}^{\epsilon}$. Interestingly, $H_\text{m}^{\epsilon}$ preserves the statistics of coherent mechanical states, defined as $\ket{\beta} = e^{\beta^* b - \beta b^\dagger}\ket{0}$, where $\ket{0}$ is the zero-phonon state and $\beta$ the complex amplitude of the field. Consequently, if the hybrid system is initially prepared in a product state $\ket{\epsilon,\beta_0}$, it remains in a similar product state $\ket{\epsilon, \beta^\epsilon_t}$ at any time, with $\ket{ \beta^\epsilon_t} = \exp(-i H_\text{m}^{\epsilon}t/\hbar) \ket{\beta_0}$. The two possible mechanical evolutions are pictured in Fig.~\ref{fig1}b between time $t_0=0$ and $t = 2\pi/\Omega$, in the phase space defined by the mean quadratures of the MO $\langle \tilde{x} \rangle = \langle b+b^\dagger \rangle$ and $\langle \tilde{p} \rangle = -i \langle b-b^\dagger \rangle$. If the qubit is initially prepared in the state $\ket{e}$ (resp. $\ket{g}$), the mechanical evolution is a rotation around the displaced origin $(-\gm/\Omega,0)$ (resp. the origin $(0,0)$). This displacement is caused by the force the qubit exerts on the MO, which is similar to the optical radiation pressure in cavity opto-mechanics. Defining $\delta \beta_t = \beta^e_t - \beta^g_t$, it appears that the distance between the two final mechanical states $|\delta \beta_t|$ scales like $\gm/\Omega$.
In the ultra-strong coupling regime, this distance becomes large enough for the mechanical states to be distinguishable, so that they can be used as quantum meters to detect the qubit state.
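The two conditional evolutions admit a simple closed form: under $H_\text{m}^{g}$ the amplitude rotates about the origin, $\beta^g_t = \beta_0 e^{-i\Omega t}$, while under $H_\text{m}^{e}$ it rotates about the displaced centre $-\gm/\Omega$, so that $|\delta\beta_t| = (2\gm/\Omega)|\sin(\Omega t/2)|$. A minimal numerical sketch (illustrative parameters only, not those of an experiment):

```python
import cmath, math

def beta_g(beta0, Omega, t):
    """Amplitude with the qubit in |g>: free rotation about the origin."""
    return beta0 * cmath.exp(-1j * Omega * t)

def beta_e(beta0, Omega, g_m, t):
    """Amplitude with the qubit in |e>: rotation about the displaced
    centre -g_m/Omega on the real axis of the (x~, p~) phase plane."""
    c = -g_m / Omega
    return c + (beta0 - c) * cmath.exp(-1j * Omega * t)

# The separation |delta beta_t| = (2 g_m/Omega)|sin(Omega t/2)| is
# maximal, 2 g_m/Omega, after half a mechanical period.
Omega, g_m, beta0 = 1.0, 100.0, 1000j
t_half = math.pi / Omega
delta = abs(beta_e(beta0, Omega, g_m, t_half) - beta_g(beta0, Omega, t_half))
assert abs(delta - 2 * g_m / Omega) < 1e-9
```

The qubit-state-dependent rotation centre is all that distinguishes the two branches; the modulus of the rotating part is conserved in both cases.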
Since the hybrid system remains in a pure product state at all times, its mean energy defined as ${\cal E}_\text{qm}(\epsilon,\beta^\epsilon_t) = \bra{\epsilon,\beta^\epsilon_t} H_\text{qm} \ket{\epsilon,\beta^\epsilon_t}$ naturally splits into two distinct components respectively quantifying the qubit and the mechanical energies: \begin{align} {\cal E}_\text{q}(\epsilon,\beta^\epsilon_t) & = \hbar \omega(\beta^\epsilon_t) \delta_{\epsilon,e} \label{Eq} \\
{\cal E}_\text{m}(\beta^\epsilon_t) & = \hbar \Omega |\beta^\epsilon_t|^2, \label{Em} \end{align} where $\delta_{\epsilon,e}$ is the Kronecker delta and $\omega(\beta)$ is the effective transition frequency of the qubit defined as: \begin{equation} \label{omega_eff} \omega(\beta^\epsilon_t) = \omega_0 + 2 \gm \Re(\beta^\epsilon_t). \end{equation}
The frequency modulation described by Eq.~\eqref{omega_eff} manifests the back-action of the mechanics on the qubit. Note that the case $\gm/\Omega \ll |\beta_0|$ corresponds to $|\delta \beta_t| \ll |\beta^g_t|$: Then the frequency modulation is independent of the qubit state and follows $\omega(\beta^\epsilon_t) \sim \omega(\beta_0 e^{-i\Omega t})$, even in the ultra-strong coupling regime. In what follows, we will be especially interested in the regime where $1\ll \gm/\Omega \ll |\beta_0|$, where the mechanical evolution depends on the qubit state, while the qubit transition frequency is independent of it.\\
We now take into account the coupling of the qubit to a bath prepared at thermal equilibrium. The bath of temperature $T$ consists of a spectrally broad collection of electromagnetic modes of frequencies $\omega'$, each mode containing a mean number of photons $\bar{n}_{\omega'} = \left(\exp(\hbar\omega'/\kT )- 1\right)^{-1}$. The bath induces transitions between the states $\ket{e}$ and $\ket{g}$, and is characterized by a typical correlation time $\tau_c$, giving rise to a bare qubit spontaneous emission rate $\gamma$.
The hybrid system is initially prepared in the product state $\rho_\text{qm}(0) = \rho_\text{q}(0)\otimes \dyad{\beta_0}$. $\rho_\text{q}(0)$ is the qubit state, taken diagonal in the $\{ e,g \}$ basis. $\dyad{\beta_0}$ is the mechanical state, that is chosen pure and coherent. In the rest of the paper, we shall study transformations taking place on typical time scales $t \sim \Omega^{-1}$, such that the mechanical relaxation is neglected. From the properties of the interaction with the bath and the total hybrid system's Hamiltonian $H_\text{qm}$, it clearly appears that the qubit does not develop any coherence in its bare energy basis. We show in the Supplementary \cite{suppl} that as long as $|\beta_0 | \gg \gm t$, the MO imposes a well defined modulation of the qubit frequency $\omega(\beta_0(t))$ with $\beta_0(t) = \beta_0 e^{-i\Omega t}$. This defines the semi-classical regime, where the hybrid system evolution is ruled by the following master equation \cite{suppl}: \begin{align} \dot{\rho}_{\text{qm}}(t) =\, & -\frac{\ii}{\hbar}[H_{\text{qm}}, \rho_{\text{qm}}(t)] \nonumber\\ &+ \gamma \bar{n}_{\omega(\beta_0(t))} D[\sigma^\dagger\otimes \mathbf{1}_{\text{m}}]\rho_{\text{qm}}(t) \nonumber\\ &+ \gamma \left(\bar{n}_{\omega(\beta_0(t))} + 1\right)D[\sigma \otimes \mathbf{1}_{\text{m}}]\rho_{\text{qm}}(t). \label{master_eq} \end{align} We have defined the super-operator $D[X]\rho = X\rho X^\dagger - \frac{1}{2}\{X^\dagger X, \rho\}$ and $\sigma= \dyad{g}{e}$.
Product states of the form $\rho_\text{qm}(t) = \rho_\text{q}(t)\otimes \rho_\text{m}(t)$ are natural solutions of Eq.~\eqref{master_eq}, giving rise to two reduced coupled equations respectively governing the dynamics of the qubit and the mechanics: \begin{align} \dot{\rho}_\text{q}(t) =\, & -\frac{\ii}{\hbar}[H_{\text{q}}(t), \rho_{\text{q}}(t)] \nonumber+ \gamma \bar{n}_{\omega(\beta_0(t))} D[\sigma^\dagger]\rho_{\text{q}}(t) \label{ev_q}\\ &+ \gamma \left(\bar{n}_{\omega(\beta_0(t))} + 1\right)D[\sigma]\rho_{\text{q}}(t), \\ \dot{\rho}_\text{m}(t) = & -\frac{\ii}{\hbar}[ H_\text{m}(t) , \rho_{\text{m}}(t)]. \label{ev_mec} \end{align} We have introduced the effective time-dependent Hamiltonians: $H_\text{q}(t) = \Tr_\text{m}[\rho_\text{m}(t)(H_\text{q}+V_\text{qm})] = \hbar \omega(\beta_0(t)) \dyad{e}$ and $H_\text{m}(t) = \Tr_\text{q}[\rho_\text{q}(t)(H_\text{m}+V_\text{qm})]$. The physical meaning of these semi-classical equations is transparent: The force exerted by the qubit results into the effective Hamiltonian $H_\text{m}(t)$ ruling the mechanical evolution. Reciprocally, the mechanics modulates the frequency $\omega(\beta_0(t))$ of the qubit (Eq.~\eqref{omega_eff}), which causes the coupling parameters of the qubit to the bath to be time-dependent.
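In the semi-classical regime, the reduced qubit dynamics of Eq.~\eqref{ev_q} boils down to a rate equation for the excited population $p_e$, with absorption rate $\gamma\bar{n}_{\omega(\beta_0(t))}$ and emission rate $\gamma(\bar{n}_{\omega(\beta_0(t))}+1)$. A minimal Euler-integration sketch (the helper names are ours, chosen for illustration):

```python
import math

hbar, kB = 1.054571817e-34, 1.380649e-23   # SI constants

def n_bar(omega, T):
    """Mean photon number of the thermal bath at frequency omega."""
    return 1.0 / math.expm1(hbar * omega / (kB * T))

def evolve_pe(pe0, omega_of_t, gamma, T, dt, n_steps):
    """Euler integration of the excited-state population p_e under
    dp_e/dt = gamma*n*(1 - p_e) - gamma*(n + 1)*p_e, with the bath
    occupation n evaluated at the instantaneous qubit frequency."""
    pe = pe0
    for n in range(n_steps):
        nb = n_bar(omega_of_t(n * dt), T)
        pe += dt * (gamma * nb * (1.0 - pe) - gamma * (nb + 1.0) * pe)
    return pe

# Static frequency: the population relaxes to the thermal value n/(2n+1).
T = 80.0
omega0 = 1.2 * kB * T / hbar               # hbar*omega0 = 1.2 kT, as in Fig. 1
nb = n_bar(omega0, T)
pe = evolve_pe(0.0, lambda t: omega0, 1.0, T, 0.01, 2000)
assert abs(pe - nb / (2 * nb + 1)) < 1e-3
```

With a genuinely time-dependent `omega_of_t`, the same loop reproduces the time-dependent coupling parameters discussed below Eq.~\eqref{ev_mec}.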
The semi-classical regime of hybrid opto-mechanical systems is especially appealing for quantum thermodynamical purposes, since it allows modeling the time-dependent Hamiltonian ruling the dynamics of a system (the qubit) by coupling this system to a quantum entity, i.e. a quantum battery (the MO). The Hamiltonian of the compound is time-independent, justifying the name ``autonomous machine"\cite{frenzel_quasi-autonomous_2016, tonner_autonomous_2005,holmes_coherent_2018}. As demonstrated in a previous work \cite{rev_work_extraction}, this scenario suggests a new strategy to measure average work exchanges in quantum open systems. Defining the average work rate received by the qubit as $\langle \dot{W} \rangle= \Tr_\text{q}[\rho_\text{q}(t)\dot{H}_\text{q}(t)]$, we have shown that this work rate exactly compensates the mechanical energy variation rate: $\langle \dot{{\cal E}}_\text{m} \rangle= \Tr_\text{m}[\dot{\rho}_\text{m}(t){H}_\text{m}] = -\langle \dot{W} \rangle$. Remarkably, this relation demonstrates the possibility of measuring work ``in situ", directly inside the battery. This strategy offers undeniable practical advantages, since it solely requires measuring the mechanical energy at the beginning and at the end of the transformation. The corresponding mechanical energy change is potentially measurable in the ultra-strong coupling regime $g_\text{m}/\Omega \gg 1$ \cite{rev_work_extraction}, which is fully compatible with the semi-classical regime $g_\text{m} t \ll |\beta_0|$.
Our goal is now to extend this strategy to work {\it fluctuations}. A key point is to demonstrate that the qubit and the mechanics remain in a pure product state along single realizations of the protocol, allowing stochastic energies to be unambiguously defined for each entity. This calls for an advanced theoretical treatment based on the quantum trajectories picture.\\
\begin{figure}
\caption{
\textbf{(a)} Situation under study: a qubit exchanging work $W$ with a mechanical resonator and heat $Q$ with a thermal bath at temperature $T$. The ensemble of the qubit and the mechanics constitutes an autonomous machine. \textbf{(b)} Evolution of the complex mechanical amplitude $\beta$ if the qubit is in the state $\ket{e}$ (resp. $\ket{g}$) and the MO is initially prepared in the state $\big|\ii|\beta_0|\big\rangle$. The mechanics can be used as a meter to detect the qubit state if $\gm /\Omega \gg 1$ (ultra-strong coupling regime). The mechanical fluctuations induced by the qubit state are small w.r.t. the free evolution if $|\beta_0| \gg \gm/\Omega$ (semi-classical regime). These two regimes are compatible (See text). \textbf{(c)} Stochastic mechanical trajectories $\rightvect{\beta}[\rightvect{\epsilon}\,]$ in the phase space defined by $(\tilde{x},\tilde{p})$ (See text). The MO is initially prepared in the coherent state $\big|\ii|\beta_0|\big\rangle$, and the qubit state is drawn from thermal equilibrium. Inset: Distribution of final states $\ket{\beta_\Sigma(t_N)}$ within an area of typical width $\gm/\Omega$. Parameters: $T = 80$~K, $\hbar\omega_0 = 1.2\kT$, $\Omega/2\pi = 100$~kHz, $\gamma/\Omega = 5$, $\gm/\Omega = 100$, $|\beta_0| = 1000$.
\label{fig1}
\end{figure}
\subsection*{Quantum trajectories.}
We shall now describe the evolution of the machine between the time $t_0$ and $t_N$ by stochastic quantum trajectories of pure states $\rightvect{\Sigma} := \{ \ket{\Psi_{\Sigma}(t_n)} \}_{n=0}^N$, where $\ket{\Psi_\Sigma(t_n)}$ is a vector in the Hilbert space of the machine and $t_n = t_0 + n\Delta t$, with $\Delta t$ the time increment. To introduce our approach we first consider the semi-classical regime where the master equation \eqref{master_eq} is valid: The initial state of the machine $\ket{\Psi_{\Sigma}(t_0)}$ is drawn from the product state $\rho_\text{q}(0)\otimes \dyad{\beta_0}$, where $\rho_\text{q}(0)$ is diagonal in the $\{ e,g \}$ basis, and the evolution is studied over a typical duration $(t_N - t_0) \ll |\beta_0| g_\text{m}^{-1}$. Eq.~\eqref{master_eq} is unraveled in the quantum jump picture \cite{plenio_quantum-jump_1998, Gardiner, CarmichaelII, Wisemanbook, haroche}, giving rise to the following set of Kraus operators $\{J_{-1}(t_n);J_{+1}(t_n);J_0(t_n)\}$: \begin{align} \label{eq:jump} J_{-1}(t_n) &= \sqrt{\gamma \Dt(\bar{n}_{\omega(\beta_0(t_n))} + 1)}\; \sigma \otimes \mathbf{1}_{\text{m}},\\ J_{+1}(t_n) &= \sqrt{\gamma \Dt\bar{n}_{\omega(\beta_0(t_n))}}\;\sigma^\dagger \otimes \mathbf{1}_{\text{m}},\\ J_0(t_n) &= \mathbf{1}_{\text{qm}} - \frac{\ii\Dt}{\hbar} H_{\text{eff}}(t_n). \end{align} We have introduced $\mathbf{1}_{\text{qm}}= \mathbf{1}_{\text{q}} \otimes \mathbf{1}_{\text{m}}$, the identity operator in the Hilbert space of the machine. $J_{-1}$ and $J_{+1}$ are the so-called jump operators. Experimentally, they are signaled by the emission or absorption of a photon in the bath, which corresponds to the qubit transitioning to the ground or excited state, respectively; the mechanical state remains unchanged. Reciprocally, the absence of a detection event in the bath corresponds to the no-jump operator $J_0$, i.e. 
a continuous, non-Hermitian evolution governed by the effective Hamiltonian $H_\text{eff}(t_n) = H_\text{qm} + H_\text{nh}(t_n)$. Here $H_\text{nh}(t) = -(\ii \hbar/ 2\Dt) (J_{+1}^\dagger(t) J_{+1}(t) + J_{-1}^\dagger(t) J_{-1}(t))$ is the non-Hermitian part of $H_\text{eff}$.
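As a sanity check, the three Kraus operators satisfy the completeness relation $\sum_{\cal K} J^\dagger_{\cal K} J_{\cal K} = \mathbf{1}$ up to $O(\Dt^2)$, provided the non-Hermitian part is taken as a rate, $H_\text{nh} = -(\ii\hbar/2\Dt)\sum_{\pm} J^\dagger_{\pm1}J_{\pm1}$. A toy numerical check on the qubit sector alone, with arbitrary parameters of our choosing:

```python
import numpy as np

hbar = 1.0
gamma, n_bar, omega, dt = 1.0, 0.4, 5.0, 1e-4   # arbitrary toy parameters

e, g = np.array([1.0, 0.0]), np.array([0.0, 1.0])
sigma = np.outer(g, e)                           # lowering operator |g><e|

J_m = np.sqrt(gamma * dt * (n_bar + 1)) * sigma        # emission jump
J_p = np.sqrt(gamma * dt * n_bar) * sigma.conj().T     # absorption jump

H_q = hbar * omega * np.outer(e, e)
# Non-Hermitian part taken as a rate (note the 1/dt):
H_nh = (-0.5j * hbar / dt) * (J_p.conj().T @ J_p + J_m.conj().T @ J_m)
J_0 = np.eye(2) - (1j * dt / hbar) * (H_q + H_nh)

completeness = (J_0.conj().T @ J_0 + J_p.conj().T @ J_p
                + J_m.conj().T @ J_m)
# Deviation from the identity is O(dt^2):
assert np.linalg.norm(completeness - np.eye(2)) < 1e-6
```

The residual deviation scales as $\Dt^2 \Vert H_\text{eff}\Vert^2/\hbar^2$, consistent with the first-order expansion of $J_0$.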
Let us suppose that the machine is initially prepared in a pure state $\ket{ \Psi(t_0)} = \ket{\epsilon_0,\beta_0}$. The quantum trajectory $\rightvect{\Sigma}$ is then perfectly defined by the sequence of stochastic jumps/no-jump $ \{ {\cal K}_\Sigma(t_n) \}_{n=1}^{N}$ where ${\cal K} = 0,-1,+1$. Namely, $\ket{\Psi_\Sigma(t_N) } = \left(\prod_{n=1}^N J_{{\cal K}_\Sigma(t_n)} \ket{\Psi(t_0)}\right)/ \sqrt{P[\rightvect{\Sigma} | \Psi(t_0)] }$ where we have introduced $P[\rightvect{\Sigma} | \Psi(t_0)] = \prod_{n=1}^{N} \!P[\Psi_\Sigma(t_{n}) | \Psi_\Sigma(t_{n-1}) ]$ the probability of the trajectory $\rightvect{\Sigma}$ conditioned to the initial state $\ket{ \Psi(t_0)}$. $P[\Psi_\Sigma(t_{n}) | \Psi_\Sigma(t_{n-1}) ] = \ev{J^\dagger_{{\cal K}_\Sigma(t_n)}J_{{\cal K}_\Sigma(t_n)}}{\Psi_\Sigma(t_{n-1})}$ denotes the probability of the transition from $\ket{ \Psi_\Sigma(t_{n-1})}$ to $\ket{ \Psi_\Sigma(t_{n})}$ at time $t_{n}$. At any time $t_N$, the density matrix of the machine, i.e. the solution of Eq.~\eqref{master_eq}, can be recovered by averaging over the trajectories: \begin{equation} \label{rho_qm} \rho_\text{qm}(t_N) = \sum_{\rightvect{\Sigma} } P[\rightvect{\Sigma}] \dyad{\Psi_\Sigma(t_N)}. \end{equation}
We have introduced the probability of the trajectory $P[\rightvect{\Sigma}] = p[\Psi(t_0)] P[\rightvect{\Sigma} | \Psi(t_0)]$, where $p[\Psi(t_0)]$ is the probability that the machine is initially prepared in $\ket{\Psi(t_0)}$.
Interestingly, from the expression of the Kraus operators it appears that, starting from the product state $ \ket{\epsilon_0,\beta_0}$, the machine remains in a product state $\ket{ \Psi_\Sigma(t_n)} = \ket{\epsilon_\Sigma(t_n),\beta_\Sigma(t_n)}$ at any time $t_n$, which is the first result of this paper. The demonstration is as follows: At each time step $t_n$, either the machine undergoes a quantum jump $J_{\pm 1}$, or it evolves under the no-jump operator $J_0$. In the former case, the qubit jumps from $\ket{\epsilon_\Sigma(t_n)}$ into $\ket{\epsilon_\Sigma(t_{n+1})}$ and the mechanical state remains unchanged, such that $\ket{\beta_\Sigma(t_{n+1})} = \ket{\beta_\Sigma(t_n)}$. In the latter case, the evolution of the machine state is governed by the effective Hamiltonian $H_\text{eff}$, whose non-Hermitian part can be rewritten $H_\text{nh} = (-\ii\hbar /2) H^{q}_\text{nh} \otimes \mathbf{1}_{\text{m}}$, with $H^{q}_\text{nh}$ diagonal in the bare qubit energy eigenbasis. A machine state of the form $\ket{\epsilon_\Sigma(t_n),\beta_\Sigma(t_n)}$ is an eigenstate of $H_\text{nh}$, which therefore only rescales its norm; after normalization, the no-jump evolution reduces to its unitary component defined by $H_\text{qm}$. As studied above, the qubit energy state is stable under such evolution, such that $\ket{\epsilon_\Sigma(t_n)} = \ket{\epsilon_\Sigma(t_{n+1})}$. Reciprocally, the coherent nature of the mechanical field is preserved by $H_\text{m}^{\epsilon_\Sigma(t_n)}$. Thus the mechanics evolves into $\ket{\beta_\Sigma(t_{n+1})} = \exp(-\ii \Dt H_\text{m}^{\epsilon_\Sigma(t_n)}/\hbar) \ket{\beta_\Sigma(t_n)}$, completing the demonstration.
This result invites us to recast the machine trajectory as a set of two reduced trajectories $\rightvect{\Sigma} = \{ \rightvect{\epsilon}, \rightvect{\beta}[\rightvect{\epsilon}\,] \}$, where $ \rightvect{\epsilon} = \{ \ket{\epsilon_\Sigma(t_n)} \}_{n=0}^N$ is the stochastic qubit trajectory with $\epsilon_\Sigma(t_n) = e,g$. In the semi-classical regime considered here, the jump probabilities solely depend on $\omega(\beta_0(t))$, such that the qubit reduced evolution is Markovian. Conversely, $\rightvect{\beta} = \{ \ket{\beta_\Sigma(t_n)} \}_{n=0}^N$ is the continuous MO trajectory verifying $\ket{\beta_\Sigma(t_n)} = \prod_{k=0}^{n-1} \exp(-\ii \Dt H_\text{m}^{\epsilon_\Sigma(t_k)}/\hbar) \ket{\beta_0}$. At any time $t_N$, the mechanical state depends on the complete qubit trajectory $\rightvect{\epsilon}$.
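The reduced-trajectory picture lends itself to a compact Monte Carlo sampler. The sketch below is ours (not the authors' Methods code): it draws one pair $(\rightvect{\epsilon},\rightvect{\beta}\,)$, holding the bath occupation $\bar{n}$ constant for simplicity, i.e. neglecting the frequency modulation of the jump rates:

```python
import cmath, random

def sample_trajectory(eps0, beta0, g_m, Omega, gamma, n_bar, dt, N,
                      rng=random.random):
    """Sample one machine trajectory as a list of (eps_n, beta_n) pairs:
    stochastic qubit jumps, deterministic coherent-state motion
    conditioned on the current qubit state."""
    eps, beta = eps0, beta0
    traj = [(eps, beta)]
    for _ in range(N):
        r = rng()
        if eps == 'e' and r < gamma * (n_bar + 1) * dt:
            eps = 'g'                      # emission jump: beta unchanged
        elif eps == 'g' and r < gamma * n_bar * dt:
            eps = 'e'                      # absorption jump: beta unchanged
        else:
            # no-jump step: rotation about 0 (qubit in g) or about the
            # displaced centre -g_m/Omega (qubit in e)
            c = -g_m / Omega if eps == 'e' else 0.0
            beta = c + (beta - c) * cmath.exp(-1j * Omega * dt)
        traj.append((eps, beta))
    return traj

# With the bath switched off (gamma = 0) the qubit never jumps and the
# amplitude simply rotates: beta_N = beta_0 * exp(-i * Omega * N * dt).
traj = sample_trajectory('g', 1000j, 100.0, 1.0, 0.0, 0.0, 0.01, 100)
assert traj[-1][0] == 'g'
assert abs(traj[-1][1] - 1000j * cmath.exp(-1j)) < 1e-6
```

Averaging many such trajectories reproduces the density matrix of Eq.~\eqref{rho_qm}.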
Examples of numerically generated mechanical trajectories $\rightvect{\beta}[\rightvect{\epsilon}\,]$ (See Methods) are plotted in Fig.~\ref{fig1}c. As appears in the figure, at the final time the mechanical states $\ket{\beta_\Sigma(t_N)}$ are restricted to an area of typical dimension $\gm/\Omega$. Splitting the mechanical amplitude as $\beta_\Sigma(t_N) = \beta_0(t_N) + \delta \beta_\Sigma(t_N)$, the semi-classical regime is characterized by $\vert \delta \beta_\Sigma(t_N)\vert \ll \vert \beta_0(t_N)\vert$, while in the ultra-strong coupling regime $|\delta \beta_\Sigma(t_N)| \gg 1$. These two regimes are compatible, which is the key to our proposal, as we show in the next Section. \\
Interestingly, the modeling of the machine stochastic evolution can be extended over timescales $t \geq |\beta_0|g_\text{m}^{-1}$, beyond the semi-classical regime. The key point is that the trajectory picture allows keeping track of the mechanical state $\ket{\beta_\Sigma(t_n)}$ at each time step. Therefore at each time $t_n$, a master equation of the form of Eq.~\eqref{master_eq} can be derived and unraveled into a set of {\it trajectory-dependent} Kraus operators similar to Eq.~\eqref{eq:jump}, taking now $\omega(\beta_\Sigma(t_n))$ as the qubit effective frequency. In this general situation, the machine stochastic evolution still consists of trajectories of pure product states $\ket{\Psi_\Sigma(t_n)} = \ket{\epsilon_\Sigma(t_n), \beta_\Sigma(t_n)}$, but the mechanical fluctuations $| \delta \beta_\Sigma(t_n) |$ cannot be neglected anymore with respect to the mean amplitude $|\beta_0(t_n)|$. Consequently, Eq.~\eqref{rho_qm} cannot be written as an {\it average} product state of the qubit and the MO, resulting in the emergence of classical correlations between the qubit and the MO average states. Moreover, the jump probabilities at time $t_n$ now depend on $\bar{n}_{\omega(\beta_\Sigma(t_n))}$, such that the reduced qubit trajectory $\rightvect{\epsilon}$ is no longer Markovian. As we show below, this property conditions the validity of our proposal, which is restricted to the Markovian regime. \\
\subsection*{Stochastic thermodynamics.} From now on we focus on the following protocol: At the initial time $t_0$ the machine is prepared in a product state $\rho_\text{qm}(t_0) = \rho^\infty_\text{q}(\beta_0) \otimes \dyad{\beta_0}$, where $\rho^\infty_\text{q}(\beta_0)$ is the qubit thermal distribution defined by the effective frequency $\omega(\beta_0)$. Note that $\rho_\text{qm}(t_0)$ is {\it not} an equilibrium state of the whole machine. One performs an energy measurement of the qubit, preparing the state $\ket{\Psi(t_0)} = \ket{\epsilon(t_0), \beta_0}$ with probability $p^\infty_{\beta_0}[\epsilon] = \exp(-\hbar\omega(\beta_0)\delta_{\epsilon,e}/\kT)/Z(\beta_0)$, where $Z(\beta_0) = 1 + \exp(-\hbar\omega(\beta_0)/\kT)$ is the partition function. The machine is then coupled to the bath and its evolution is studied between $t_0=0$ and $t_N = \pi/2\Omega$. Depending on the choice of thermodynamical system, this physical situation can be studied from two different perspectives, defining two different transformations. If the considered thermodynamical system is the machine, then the studied evolution corresponds to a relaxation towards thermal equilibrium. Since the machine Hamiltonian $H_\text{qm}$ is time-independent, energy exchanges reduce to heat exchanges between the machine and the bath. On the other hand, if the considered thermodynamical system is the qubit, then the studied transformation consists of driving the qubit out of equilibrium through the time-dependent Hamiltonian $H_\text{q}(t)$, the driving work being provided by the mechanics. In the semi-classical regime, the qubit evolution is Markovian, such that this last situation simply corresponds to Jarzynski's protocol with $H_\text{q}(t) = \hbar \omega(\beta_0(t)) \dyad{e}$.
We now define and study the stochastic thermodynamical quantities characterizing the transformation experienced by the system (qubit or machine) for the protocol introduced above. As shown previously, starting from a product state $\ket{\Psi(t_0)}= \ket{\epsilon_0, \beta_0}$, the machine remains in a product state $\ket{\Psi_\Sigma(t_n)}= \ket{\epsilon_\Sigma(t_n), \beta_\Sigma(t_n)}$ at any time. Defining the machine internal energy as ${\cal E}_\text{qm}(\Psi_{\Sigma}(t_n)) = \ev{H_\text{qm}}{\Psi_{\Sigma}(t_n)}$, it naturally splits into a sum of the qubit energy ${\cal E}_\text{q}(\epsilon_\Sigma(t_n),\beta_\Sigma(t_n))$ (See Eq.~\eqref{Eq}) and the mechanical energy ${\cal E}_\text{m}(\beta_\Sigma(t_n))$ (See Eq.~\eqref{Em}). Along the trajectory, these internal energies can change in two distinct ways. A quantum jump taking place at time $t_n$ stochastically changes the qubit and the machine energies by the same amount $\delta {\cal E}_\text{q}[\Sigma,t_n] = \delta {\cal E}_\text{qm}[\Sigma,t_n]$, leaving the MO energy unchanged. Following standard definitions in stochastic thermodynamics \cite{Alicki79,Horowitz12,elouard_role_2017}, the corresponding energy change is identified with heat $q[\Sigma,t_{n}]$ provided by the bath. Conversely, in the absence of a jump, the qubit remains in the same state between $t_n$ and $t_{n+1}$ while its energy eigenvalues evolve in time due to the qubit-mechanical coupling. Such an energy change is identified with work, denoted $w[\Sigma,t_n]$, and verifies $ \delta {\cal E}_\text{q}[\Sigma,t_n] = w[\Sigma,t_n]$. During this time interval, the machine is energetically isolated, such that $ \delta {\cal E}_\text{qm}[\Sigma,t_n] = 0$. Therefore the work increment exactly compensates the mechanical energy change, $\delta {\cal E}_\text{m}[\Sigma,t_n] = -w[\Sigma,t_n]$. Finally, the total work (resp. heat) received by the qubit is defined as $W\Traj = \sum_{n = 0}^{N - 1} w[\Sigma,t_n]$ (resp. 
$Q\Traj = \sum_{n = 0}^{N - 1} q[\Sigma,t_n] $). By construction, their sum equals the qubit total energy change between $t_0$ and $t_N$, $\Delta {\cal E}_\text{q}\Traj = W\Traj + Q\Traj$. From the analysis conducted above, it appears that the heat exchange corresponds to the energy change of the machine, $\Delta {\cal E}_\text{qm}\Traj = Q\Traj$. Reciprocally, the work received by the qubit is entirely provided by the mechanics and verifies:
\begin{equation} W\Traj = -\Delta {\cal E}_\text{m}\Traj, \label{autonomous} \end{equation} which is the second result of this article. Eq.~\eqref{autonomous} extends the results obtained for the average work in a previous work \cite{rev_work_extraction}, and explicitly demonstrates the one-to-one correspondence between the stochastic work received by the qubit and the mechanical energy change between the start and the end of the trajectory. The MO thus behaves as an ideal embedded quantum work meter at the single-trajectory level. \\
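Equation \eqref{autonomous} can be checked numerically on a single no-jump segment: with the qubit frozen in $\ket{e}$, the accumulated work increments $\delta{\cal E}_\text{q}$ exactly compensate the mechanical energy change, since $H_\text{qm}$ conserves the total energy. A sketch in arbitrary units ($\hbar = 1$, parameters of our choosing):

```python
import cmath

hbar = 1.0                                  # arbitrary units
omega0, g_m, Omega = 100.0, 10.0, 1.0
beta0, dt, N = 5.0 + 0.0j, 1e-3, 1000       # evolve for Omega*t = 1

def E_q(beta):   # qubit energy with the qubit frozen in |e>
    return hbar * (omega0 + 2.0 * g_m * beta.real)

def E_m(beta):   # mechanical energy of the coherent state |beta>
    return hbar * Omega * abs(beta) ** 2

c = -g_m / Omega                            # displaced rotation centre
beta, W = beta0, 0.0
for _ in range(N):
    new_beta = c + (beta - c) * cmath.exp(-1j * Omega * dt)
    W += E_q(new_beta) - E_q(beta)          # no-jump step: dE_q is work
    beta = new_beta

# The work received by the qubit is exactly drawn from the mechanics:
assert abs(W + (E_m(beta) - E_m(beta0))) < 1e-6
```

The conservation holds step by step because $\omega_0 + 2\gm\Re\beta + \Omega|\beta|^2$ is invariant along the displaced rotation.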
We finally derive the expression of the stochastic entropy production $\ds\Traj$. It is defined by comparing the probability of the forward trajectory in the direct protocol $P\Traj$ to the probability of the backward trajectory in the time-reversed protocol $\tilde{P}\rTraj$\cite{thermo_traj_Broeck}: \begin{equation} \label{entropy} \ds\Traj = \log \left( \frac{P\Traj}{\tilde{P}\rTraj} \right). \end{equation}
The probability of the direct trajectory reads: \begin{equation}
P\Traj = p^\infty_{\beta_0}[\epsilon_\Sigma(t_0)]\prod_{n=1}^{N} \!P[\Psi_\Sigma(t_{n}) | \Psi_\Sigma(t_{n-1}) ]. \label{Pd} \end{equation} The state of the hybrid system averaged over the forward trajectories at time $t_N$ is described by Eq.~\eqref{rho_qm}. At the end of the protocol, the reduced mechanical average state defined as $\rho_\text{m} (t_N) = \text{Tr}_\text{q} [\rho_\text{qm} (t_N)]$ thus consists of a discrete distribution of the final mechanical states $\{ \ket{\beta_\Sigma(t_N)} \}$. Introducing the probability $p_\text{m}[\beta_\text{f}]$ for the mechanical amplitude to end up in a state of amplitude $\beta_\text{f}$, we shall denote it in the following $\rho_\text{m} (t_N) = \sum_{\beta_\text{f}} p_\text{m}[\beta_\text{f}] \dyad{\beta_\text{f}}$, where $\sum_{\beta_\text{f}} p_\text{m}[\beta_\text{f}] = 1$. \\
Reciprocally, the time-reversed protocol is defined between $t_N$ and $t_0$. It consists of time-reversing the unitary evolution governing the dynamics of the machine, keeping the same stochastic map at each time $t_n$. This leads to the expression of the time-dependent reversed Kraus operators \cite{crooks_quantum_2008, elouard_role_2017, Manzano17, Manikandan18}: \begin{align} \tilde{J}_0(t_n)& = \mathbf{1}_{\text{qm}} + \frac{\ii\Dt}{\hbar} H^\dagger_\text{eff}(t_n),\\ \tilde{J}_{-1}(t_n) & = J_{+1}(t_n),\\ \tilde{J}_{+1}(t_n) &= J_{-1}(t_n). \end{align}
The initial state of the backward trajectory is defined as follows: The mechanical state $\ket{\beta_\Sigma(t_N)}$ is drawn from the final distribution of states $\{ \ket{\beta_\text{f}} \}$ generated by the direct protocol with probability $p_\text{m}[\beta_\text{f}]$, while the qubit state is drawn from the thermal equilibrium defined by $\beta_\Sigma(t_N)$ with probability $p^\infty_{\beta_\Sigma(t_N)}$. The probability of the backward trajectory reads \begin{align} \tilde{P}\rTraj =\,& p_\text{m}[\beta_\Sigma(t_N)] p^\infty_{\beta_\Sigma(t_N)}[\epsilon_\Sigma(t_N)] \nonumber\\
&\quad\times\prod_{n = N}^{1}\tilde{P}[\Psi_\Sigma(t_{n-1}) | \Psi_\Sigma(t_{n}) ]. \label{Pr} \end{align}
We have introduced the reversed jump probability at time $t_n$, $\tilde{P}[\Psi_\Sigma(t_{n-1}) | \Psi_\Sigma(t_{n}) ] =\ev {\tilde{J}^\dagger_{{\cal K}_\Sigma(t_n)} \tilde{J}_{{\cal K}_\Sigma(t_n)} }{\Psi_\Sigma(t_{n}) }$. Based on Eqs.~\eqref{autonomous}, \eqref{entropy}, \eqref{Pd} and \eqref{Pr}, we derive in the Supplementary \cite{suppl} the following expression for the stochastic entropy produced along $\rightvect{\Sigma}$: \begin{equation} \label{Si_machine} \ds\Traj = \sigma\Traj + I_{\text{Sh}}\Traj, \end{equation} where $\sigma\Traj$ and $I_{\text{Sh}}\Traj$ are defined as \begin{align} \sigma \Traj & = -\frac{\Delta {\cal E}_\text{m} \Traj + \Delta F \Traj}{k_\text{B}T}, \label{sigma_q}\\ I_{\text{Sh}}\Traj & = -\log( p_\text{m}[\beta_\Sigma(t_N)]). \label{sigma_m} \end{align}
We have introduced the quantity $\Delta F \Traj = k_\text{B} T \log(Z(\beta_0)/Z (\beta_\Sigma(t_N)))$, which extends the notion of the qubit free energy change to cases where the reduced qubit trajectory $\rightvect{\epsilon}$ is non-Markovian. In the Markovian regime, we simply recover $Z(\beta_\Sigma(t_N)) = 1 + \exp(-\hbar \omega(\beta_0(t_N))/ k_\text{B}T)$ and $\Delta F\Traj = \Delta F$. As we show below, in this case $\sigma\Traj$ can be interpreted as the entropy produced along the reduced trajectory of the qubit, which gives rise to a reduced JE. Conversely, $I_{\text{Sh}}\Traj $ measures the stochastic entropy increase of the MO and is involved in a generalized IFT characterizing the evolution of the whole machine. We now study these two fluctuation theorems in detail.\\
\begin{figure}\label{fig2}
\end{figure}
\subsection*{Reduced Jarzynski's equality.} We first focus on the transformation experienced by the qubit. As mentioned above, in the Markovian regime the applied protocol corresponds to Jarzynski's protocol: The qubit is driven out of thermal equilibrium while it experiences the frequency modulation $\omega(\beta_0(t))$. Since the stochastic work $W\Traj$ is provided by the mechanics, one expects the mechanical energy fluctuations to obey a reduced Jarzynski's equality. We derive in the Supplementary \cite{suppl} the following IFT:
\begin{equation} \ev{\exp(\frac{\Delta {\cal E}_\text{m} \Traj} {k_\text{B}T})} _{\rightvect{\Sigma}}= \exp(-\frac{\Delta F} {k_\text{B}T}). \label{JE} \end{equation}
Eq.~\eqref{JE} corresponds to the usual Jarzynski's equality, with the remarkable difference that the stochastic work involved in $\sigma\Traj$ is now replaced by the mechanical energy change $\Delta {\cal E}_\text{m}\Traj$. This is the third and most important result of this paper, which suggests a new strategy to measure work {\it fluctuations}. Instead of reconstructing the stochastic work by monitoring the complete qubit trajectory, one can simply measure the mechanical stochastic energy at the beginning and at the end of the protocol. This can be done, e.g., through time-resolved measurements of the mechanical complex amplitude using optical deflection techniques \cite{Sanii10,Mercier16}. To do so, the final mechanical states $\ket{\beta_\Sigma(t_N)}$ should be distinguishable, which requires reaching the ultra-strong coupling regime. As mentioned above, this regime has been experimentally demonstrated \cite{trompette} with typical values $\Omega \sim \gm \sim 400$~kHz. The strategy we suggest here is drastically different from former proposals aiming at measuring JE in a quantum open system, which involved challenging reservoir engineering techniques \cite{Horowitz12,elouard_probing_2017} or fine thermometry \cite{pekola} in order to measure heat exchanges.
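The structure of Eq.~\eqref{JE} can be illustrated in the simplest possible setting: a sudden quench of a bare two-level system under the two-point-measurement scheme. This toy case is ours and replaces the full opto-mechanical protocol (units with $\hbar = 1$, $k_\text{B}T = 1$):

```python
import math

kT = 1.0
hw0, hw1 = 1.2, 0.8          # qubit gaps before/after an instantaneous quench

Z0 = 1.0 + math.exp(-hw0 / kT)
Z1 = 1.0 + math.exp(-hw1 / kT)

# Two-point-measurement work values and their probabilities: the qubit
# state is frozen during a sudden quench, so W = (hw1 - hw0) if the qubit
# is found in |e>, and W = 0 if it is found in |g>.
samples = [(0.0, 1.0 / Z0),
           (hw1 - hw0, math.exp(-hw0 / kT) / Z0)]

je_lhs = sum(p * math.exp(-w / kT) for w, p in samples)
je_rhs = Z1 / Z0             # = exp(-Delta F / kT)
assert abs(je_lhs - je_rhs) < 1e-12
```

In the full protocol, the work samples $W\Traj$ are replaced by the measured mechanical energy changes $-\Delta{\cal E}_\text{m}\Traj$, which is precisely the content of Eq.~\eqref{JE}.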
\begin{figure}\label{fig3}
\end{figure}
We have simulated the reduced JE (See Fig.~\ref{fig2}a). As expected, JE is verified in the Markovian limit, where we have checked that the action of the MO is similar to a classical external operator imposing the qubit frequency modulation $\omega(\beta_0(t))$ (Fig.~\ref{fig2}b). In contrast, the Markovian approximation and JE break down in the regime $(\gm/\Omega)/|\beta_0| \geq 10^{-2}$. In what follows, we restrict the study to the range of parameters $(\gm/\Omega)/|\beta_0| < 10^{-2}$.\\
The results presented in Fig.~\ref{fig2} presuppose the experimental ability to measure the mechanical states with infinite precision. To take into account both quantum uncertainty and experimental limitations, we now assume that the measured complex amplitude $\beta^\text{M}$ corresponds to the mechanical amplitude $\beta_\text{f}$ at the end of the protocol with a finite precision $\delta \beta$. For our simulations we have chosen $\delta \beta = 2$, which corresponds to an achievable experimental value \cite{Sanii10,Mercier16}. To quantify this finite precision, we introduce the mutual information between the final distribution of mechanical states $p_\text{m}[\beta_\text{f}]$ introduced above, and the measured distribution $p_\text{m}[\beta^\text{M}]$, defined as: \begin{align}
& I[\beta_\text{f}, \beta^\text{M}] = \nonumber\\
& \quad\sum_{\beta_\text{f}, \beta^\text{M}}p(\beta_\text{f}, \beta^\text{M}) \log(\frac{p(\beta_\text{f}, \beta^\text{M})}{p_\text{m}[\beta_\text{f}] p_\text{m}[\beta^\text{M}]}). \end{align} $p(\beta_\text{f}, \beta^\text{M})$ denotes the joint probability of measuring $\beta^\text{M}$ while the mechanical amplitude equals $\beta_\text{f}$. If the measurement precision is infinite, the mutual information $I[\beta_\text{f}, \beta^\text{M}]$ exactly matches the Shannon entropy characterizing the final distribution of mechanical states, $S_\text{Sh}[\beta_\text{f}] = -\sum_{\beta_\text{f}} p_\text{m}[\beta_\text{f}] \log(p_\text{m}[\beta_\text{f}])$. Conversely, it vanishes in the absence of correlations between the two distributions.
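The mutual information above is a standard quantity for discrete distributions; a short sketch (the helper name `mutual_information` is ours) verifies the two limiting cases quoted in the text:

```python
import math

def mutual_information(p_joint):
    """I[X;Y] from a joint probability table {(x, y): p}."""
    px, py = {}, {}
    for (x, y), p in p_joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log(p / (px[x] * py[y]))
               for (x, y), p in p_joint.items() if p > 0)

# Perfect readout: beta_M reproduces beta_f, so I equals the Shannon
# entropy of the final distribution.
p_perfect = {('a', 'a'): 0.3, ('b', 'b'): 0.7}
S = -(0.3 * math.log(0.3) + 0.7 * math.log(0.7))
assert abs(mutual_information(p_perfect) - S) < 1e-12

# Uncorrelated readout: I vanishes.
p_flat = {(x, y): 0.25 for x in 'ab' for y in 'ab'}
assert abs(mutual_information(p_flat)) < 1e-12
```

A finite-precision readout interpolates between these two extremes, which is what Fig.~\ref{fig3} quantifies.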
The simulated measured JE and the mutual information $I[\beta_\text{f}, \beta^\text{M}]$ are plotted in Fig.~\ref{fig3} for the measurement precision $\delta\beta = 2$, as a function of the parameter $g_\text{m}/\Omega$ (See Methods). We have introduced the measured reduced entropy production $\sigma^\text{M}\Traj = (W^\text{M}\Traj - \Delta F)/ k_\mathrm{B} T$, where $W^\text{M}\Traj$ is the measured stochastic work, $W^\text{M}\Traj = -\Delta {\cal E}_\text{m}^\text{M}\Traj = \hbar\Omega(|\beta_0^\text{M}|^2 - |\beta^\text{M}_\Sigma(t_N)|^2)$. As expected, small values of $\gm/\Omega$ correspond to a poor ability to distinguish between the different final mechanical states, hence to measure work, which is characterized by a non-optimal mutual information. In this limit, the measured work fluctuations $W^\text{M}\Traj$ do not verify JE. Increasing the ratio $\gm/\Omega$ increases the information extracted about the work distribution during the readout: the mutual information converges towards $S_\text{Sh}[\beta_\text{f}]$ despite the finite-precision readout. JE is recovered for $\gm/\Omega \sim 50$. Such high ratios are within experimental reach, by engineering modes of lower mechanical frequency \cite{trompette_suppl}. \\
\subsection*{Generalized integral fluctuation theorem.}
\begin{figure}\label{fig4}
\end{figure}
We finally consider the complete machine as the thermodynamical system under study. Based on Eqs.~\eqref{entropy} and \eqref{Si_machine}, we show in the Supplementary \cite{suppl} that the entropy produced along the stochastic evolution of the hybrid system obeys a modified IFT of the form: \begin{equation} \ev{\exp(-\Delta_\text{i} s\Traj)}_{\rightvect{\Sigma}} = 1- \lambda. \label{GFT} \end{equation} Following \cite{murashita_nonequilibrium_2014,funo2015,Ueda_PRA,Nakamura_arXiv}, we have defined the parameter $\lambda$ through $\sum_{\rightvect{\Sigma}} \tilde{P} \rTraj = 1 -\lambda$. The case $\lambda \neq 0$ signals the existence of backward trajectories $\leftvect{\Sigma}$ without any forward counterpart, i.e. $P[\rightvect{\Sigma}] = 0$, a phenomenon that has been dubbed absolute irreversibility (See Supplementary \cite{suppl}). From Eq.~\eqref{GFT} and the convexity of the exponential, it is clear that absolute irreversibility characterizes transformations associated with a strictly positive entropy production. This is the case in the present situation, which describes the relaxation of the machine towards a thermal equilibrium state: Such a transformation is never reversible, except at $T = 0$.
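The modified IFT of Eq.~\eqref{GFT} can be checked on a toy trajectory space: with $\ds\Traj = \log(P\Traj/\tilde{P}\rTraj)$, the forward average of $e^{-\ds}$ reduces to the backward weight of the trajectories that do occur forward, i.e. $1-\lambda$. A minimal sketch with invented probabilities:

```python
import math

# Toy trajectory space: forward probabilities P and backward
# probabilities Pb. Trajectory 'c' has P = 0 but Pb > 0: it is
# absolutely irreversible.
P  = {'a': 0.6, 'b': 0.4, 'c': 0.0}
Pb = {'a': 0.3, 'b': 0.2, 'c': 0.5}

# lambda collects the backward weight of forward-impossible trajectories.
lam = sum(pb for s, pb in Pb.items() if P[s] == 0.0)

# <exp(-Delta_i s)> over forward trajectories, Delta_i s = ln(P/Pb):
ift = sum(P[s] * math.exp(-math.log(P[s] / Pb[s])) for s in P if P[s] > 0)
assert abs(ift - (1.0 - lam)) < 1e-12                  # modified IFT
```

Each forward term collapses to $\tilde{P}\rTraj$, so the sum misses exactly the backward weight $\lambda$ of the trajectories absent from the forward process.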
The IFT (Eq.~\eqref{GFT}) and the mean entropy production $\ev{\ds\Traj}_{\rightvect{\Sigma}}$ are plotted in Fig.~\ref{fig4}a and Fig.~\ref{fig4}b respectively, as a function of the bath temperature $T$ (See Methods). The limit $\hbar\omega_0 \gg \kT$ corresponds to the trivial case of a single reversible trajectory characterized by a null entropy production and $\lambda \rightarrow 0$. In the opposite regime, defined by $\kT \gg \hbar \omega_0$, a mean entropy is produced while $\lambda \rightarrow 1$: In this situation, most backward trajectories have no forward counterpart. As we show in the Supplementary \cite{suppl}, such an effect arises because a given amplitude $\beta_\text{f}$ of the final distribution of mechanical states can only be reached by a single forward trajectory, while it provides a starting point for a large number of backward trajectories.
As noticed in \cite{murashita_nonequilibrium_2014,Nakamura_arXiv,Manikandan18}, absolute irreversibility can also appear in IFTs characterizing the entropy produced by a measurement process. In particular, $\lambda \neq 0$ can signal a perfect information extraction: this is precisely the present situation, which describes the creation of classical correlations between the qubit reduced trajectory $\rightvect{\epsilon}$ and the distributions of final mechanical states $\rightvect{\beta}[\rightvect{\epsilon}]$. Interestingly, the two FTs \eqref{JE} and \eqref{GFT} are thus deeply related: to be experimentally checked, Eq.~\eqref{JE} requires the MO to behave as a perfect quantum work meter, a property signaled by the absolute irreversibility in Eq.~\eqref{GFT}. Absolute irreversibility is therefore constitutive of the protocol, and a witness of its success.
\section*{Discussion} \noindent We have introduced a new protocol to measure stochastic entropy production and the thermodynamic arrow of time in an open quantum system. Based on the direct readout of stochastic work exchanges within an autonomous machine, this protocol is experimentally feasible in state-of-the-art opto-mechanical devices and robust against finite-precision measurements. It offers a promising alternative to earlier proposals relying on the readout of stochastic heat exchanges within engineered reservoirs, which require high-efficiency measurements. Notably, our proposal sheds new light on absolute irreversibility, which quantifies the information extracted by the quantum work meter and therefore signals the success of the protocol.\\
In the near future, direct work measurement may become extremely useful to investigate genuinely quantum situations where a battery coherently drives a quantum open system into coherent superpositions. Such situations are especially appealing for quantum thermodynamics since they lead to entropy production and energetic fluctuations of quantum nature \cite{elouard_role_2017,elouard_extracting_2017}, related to the erasure of quantum coherences \cite{Santos2017a,Francica2017}. Recently, small amounts of average work have been directly measured, by monitoring the resonant field coherently driving a superconducting qubit \cite{Cottet7561}. Generalizing our formalism to this experimental situation would relate measurable work fluctuations to quantum entropy production, opening a new chapter in the study of quantum fluctuation theorems.
\section*{Methods}
\noindent The numerical results presented in this article were obtained using the jump and no-jump probabilities to sample the ensemble of possible direct trajectories \cite{haroche}. The average value of a quantity $A\Traj$ is then approximated by $\ev{A\Traj}_{\srightvect{\Sigma}} \simeq \frac{1}{N_\text{traj}}\sum_{i=1}^{N_\text{traj}} A[\rightvect{\Sigma}_i]$ where $N_\text{traj} = 5\times 10^6$ is the number of numerically generated trajectories and $\rightvect{\Sigma}_i$ denotes the $i$-th trajectory.
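The sampling scheme can be illustrated by a minimal sketch, assuming a bare two-level thermal jump/no-jump process with illustrative rates (the opto-mechanical coupling and the frequency modulation of the actual machine are omitted):

```python
import random

random.seed(0)

# Illustrative parameters: gamma*Delta_t and mean thermal photon number.
gamma_dt, nbar = 0.05, 0.5
p_down = gamma_dt * (nbar + 1)  # probability of a jump e -> g per step
p_up = gamma_dt * nbar          # probability of a jump g -> e per step

def trajectory(n_steps=100):
    """Sample one direct trajectory; return its number of quantum jumps."""
    state, jumps = "g", 0
    for _ in range(n_steps):
        if random.random() < (p_down if state == "e" else p_up):
            state = "g" if state == "e" else "e"
            jumps += 1
    return jumps

# Monte Carlo average <A[Sigma]> ~ (1/N_traj) sum_i A[Sigma_i]
n_traj = 20_000
samples = [trajectory() for _ in range(n_traj)]
mean_jumps = sum(samples) / n_traj
```

Any trajectory functional $A\Traj$ (here, the number of jumps) is averaged in the same way, with a statistical error decreasing as $1/\sqrt{N_\text{traj}}$.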
The reduced entropy production $\sigma\Traj$ used in Figs.~\ref{fig2} and \ref{fig4} was calculated from expression \eqref{sigma_q}, using the numerically generated values of $\beta_0$ and $\beta_\Sigma(t_N)$ along the trajectory $\rightvect{\Sigma}$. A given value of $\beta_\Sigma(t_N)$ is generated by a single direct trajectory $\rightvect{\Sigma}$; below we therefore use the equality $p_\text{m}[\beta_\Sigma(t_N)] = P\Traj$. Using the expression \eqref{Pr} of the probability of the reversed trajectory, the average entropy production becomes: \begin{align*} \ev{\Delta_{\text{i}}s\Traj}_{\!\srightvect{\Sigma}} &=\ev{\log(\frac{P\Traj}{\tilde{P}\rTraj})}_{\srightvect{\Sigma}}\\ &=\left\langle-\log\left(p^\infty_{\beta_\Sigma(t_N)}[\epsilon_\Sigma(t_N)]\phantom{\prod_{n = 1}^{N}}\right.\right. \\
&\hspace{2cm}\left.\left.\times\prod_{n = 1}^{N} \tilde{P}[\Psi_\Sigma(t_{n-1}) | \Psi_\Sigma(t_{n}) ]\right)\right\rangle_{\srightvect{\Sigma}} \\
&\simeq \frac{-1}{N_\text{traj}}\sum_{i = 1}^{N_\text{traj}}\log\left(p^\infty_{\beta^i_\Sigma(t_N)}[\epsilon^i(t_N)]\phantom{\prod_{n = 1}^{N}}\right.\\
&\hspace{2.7cm}\left.\times\prod_{n = 1}^{N } \tilde{P}[\Psi^i_\Sigma(t_{n-1}) | \Psi^i_\Sigma(t_{n}) ]\right), \end{align*} and, \begin{align*} \sum_{\srightvect{\Sigma}}\tilde P\rTraj
&= \sum_{\srightvect{\Sigma}} p^\infty_{\beta_\Sigma(t_N)}[\epsilon_\Sigma(t_N)] p_\text{m}[\beta_\Sigma(t_N)]\\ &\hspace{0.9cm}\times\prod_{n = 1}^{N} \tilde{P}[\Psi_\Sigma(t_{n-1}) | \Psi_\Sigma(t_{n}) ]\\
&= \ev{ p^\infty_{\beta_\Sigma(t_N)}[\epsilon_\Sigma(t_N)] \prod_{n = 1}^{N} \tilde{P}[\Psi_\Sigma(t_{n-1}) | \Psi_\Sigma(t_{n}) ]}_{\srightvect{\Sigma}}\\
&\simeq \frac{1}{N_\text{traj}}\sum_{i = 1}^{N_\text{traj}} p^\infty_{\beta^i_\Sigma(t_N)}[\epsilon^i(t_N)]\\
&\hspace{1.7cm}\times\prod_{n = 1}^{N} \tilde{P}[\Psi^i_\Sigma(t_{n-1}) | \Psi^i_\Sigma(t_{n}) ]. \end{align*} The plotted error bars represent the statistical error $\sigma/\sqrt{N_\text{traj}}$, where $\sigma$ is the standard deviation.\\
To obtain Fig.~\ref{fig3}, we considered an imperfect preparation of the initial MO state: instead of starting from exactly $\ket{\beta_0}$, the MO trajectories start from $\ket{\beta_\Sigma(t_0)}$, with the $\beta_\Sigma(t_0)$ uniformly distributed in a square of width $2\delta \beta$ centered on $\beta_0$. Similarly, the measuring apparatus has a finite precision, modeled by a grid of cell width $2\delta\beta$ in the phase plane $(\Re\beta_\text{f}, \Im\beta_\text{f})$: instead of obtaining the exact value of $\beta_\Sigma(t_N)$, we get $\beta^\text{M}_\Sigma(t_N)$, the center of the grid cell in which $\beta_\Sigma(t_N)$ falls. The values used to compute the thermodynamical quantities are thus not the exact $\beta_\Sigma(t_0)$ and $\beta_\Sigma(t_N)$ but $\beta^\text{M}_0 = \beta_0$ and $\beta^\text{M}_\Sigma(t_N)$.
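The finite-precision readout amounts to snapping the true final amplitude to the center of its grid cell. A minimal sketch (function name and numerical values are illustrative):

```python
# Measured value = center of the phase-plane grid cell (width 2*delta_beta)
# that contains the true complex amplitude beta.
def measured(beta: complex, delta_beta: float) -> complex:
    w = 2.0 * delta_beta                           # grid cell width
    center = lambda x: (x // w) * w + delta_beta   # center of the cell containing x
    return complex(center(beta.real), center(beta.imag))

b = measured(3.7 - 1.2j, delta_beta=0.5)  # -> (3.5-1.5j)
```

The same rounding is applied independently to $\Re\beta_\text{f}$ and $\Im\beta_\text{f}$, so the measurement error on each quadrature is bounded by $\delta\beta$.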
\section*{Acknowledgment} \noindent J.M. acknowledges a J.-P. Aguilar Ph.D. grant from the CFM foundation. C.E. acknowledges the US Department of Energy Grant No. DE-SC0017890. This work was supported by the ANR project QDOT (ANR-16-CE09-0010-01). Part of this work was discussed at the Kavli Institute for Theoretical Physics during the program ``Thermodynamics of quantum systems: Measurement, engines, and control''. The authors acknowledge the National Science Foundation under Grant No. NSF PHY-1748958.
\begin{thebibliography}{10}
\expandafter\ifx\csname url\endcsname\relax
\def\url#1{\texttt{#1}}\fi
\expandafter\ifx\csname urlprefix\endcsname\relax\def\urlprefix{URL }\fi
\providecommand{\bibinfo}[2]{#2}
\providecommand{\eprint}[2][]{\url{#2}}
\bibitem{sekimoto}
\bibinfo{author}{Sekimoto, K.}
\newblock \emph{\bibinfo{title}{Stochastic {{Energetics}}}}
(\bibinfo{publisher}{{Springer}}, \bibinfo{year}{2010}).
\bibitem{thermo_stoc_classique}
\bibinfo{author}{Seifert, U.}
\newblock \bibinfo{title}{Stochastic thermodynamics: Principles and
perspectives}.
\newblock \emph{\bibinfo{journal}{Eur. Phys. J. B}}
\textbf{\bibinfo{volume}{64}}, \bibinfo{pages}{423--431}
(\bibinfo{year}{2008}).
\bibitem{jarzynski}
\bibinfo{author}{Jarzynski, C.}
\newblock \bibinfo{title}{Equilibrium free-energy differences from
nonequilibrium measurements: {{A}} master-equation approach}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. E}} \textbf{\bibinfo{volume}{56}},
\bibinfo{pages}{5018--5035} (\bibinfo{year}{1997}).
\bibitem{jarz_elec}
\bibinfo{author}{Saira, O.-P.} \emph{et~al.}
\newblock \bibinfo{title}{Test of the {{Jarzynski}} and {{Crooks Fluctuation
Relations}} in an {{Electronic System}}}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. Lett.}}
\textbf{\bibinfo{volume}{109}}, \bibinfo{pages}{180601}
(\bibinfo{year}{2012}).
\bibitem{jarz_osc_macro}
\bibinfo{author}{Douarche, F.}, \bibinfo{author}{Ciliberto, S.},
\bibinfo{author}{Petrosyan, A.} \& \bibinfo{author}{Rabbiosi, I.}
\newblock \bibinfo{title}{An {{Experimental Test}} of the {{Jarzynski
Equality}} in a {{Mechanical Experiment}}}.
\newblock \emph{\bibinfo{journal}{Europhys. Lett.}}
\textbf{\bibinfo{volume}{70}}, \bibinfo{pages}{593--599}
(\bibinfo{year}{2005}).
\bibitem{jarzynski_classique_2018}
\bibinfo{author}{Hoang, T.~M.} \emph{et~al.}
\newblock \bibinfo{title}{Experimental {{Test}} of the {{Differential
Fluctuation Theorem}} and a {{Generalized Jarzynski Equality}} for
{{Arbitrary Initial States}}}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. Lett.}}
\textbf{\bibinfo{volume}{120}}, \bibinfo{pages}{080602}
(\bibinfo{year}{2018}).
\bibitem{Mancini_arxiv}
\bibinfo{author}{Mancino, L.} \emph{et~al.}
\newblock \bibinfo{title}{Geometrical bounds on irreversibility in open quantum
systems} (\bibinfo{year}{2018}).
\newblock URL \url{http://arxiv.org/abs/1801.05188}.
\bibitem{Mancini_npjQI}
\bibinfo{author}{Mancino, L.} \emph{et~al.}
\newblock \bibinfo{title}{The entropic cost of quantum generalized
measurements}.
\newblock \emph{\bibinfo{journal}{Npj Quantum Inf.}}
\textbf{\bibinfo{volume}{4}}, \bibinfo{pages}{20} (\bibinfo{year}{2018}).
\bibitem{Santos2017a}
\bibinfo{author}{Santos, J.~P.}, \bibinfo{author}{C{\'{e}}leri, L.~C.},
\bibinfo{author}{Landi, G.~T.} \& \bibinfo{author}{Paternostro, M.}
\newblock \bibinfo{title}{{The role of quantum coherence in non-equilibrium
entropy production}} (\bibinfo{year}{2017}).
\newblock URL \url{http://arxiv.org/abs/1707.08946}.
\newblock \eprint{1707.08946}.
\bibitem{Francica2017}
\bibinfo{author}{Francica, G.}, \bibinfo{author}{Goold, J.} \&
\bibinfo{author}{Plastina, F.}
\newblock \bibinfo{title}{{The role of coherence in the non-equilibrium
thermodynamics of quantum systems}} (\bibinfo{year}{2017}).
\newblock URL \url{https://arxiv.org/pdf/1707.06950.pdf}.
\newblock \eprint{1707.06950}.
\bibitem{TH2016}
\bibinfo{author}{Talkner, P.} \& \bibinfo{author}{H\"anggi, P.}
\newblock \bibinfo{title}{Aspects of quantum work}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. E}} \textbf{\bibinfo{volume}{93}},
\bibinfo{pages}{022131} (\bibinfo{year}{2016}).
\bibitem{BaumerChapter}
\bibinfo{editor}{Binder, F.}, \bibinfo{editor}{Correa, L.~A.},
\bibinfo{editor}{Gogolin, C.}, \bibinfo{editor}{Anders, J.} \&
\bibinfo{editor}{Adesso, G.} (eds.).
\newblock \emph{\bibinfo{title}{Fluctuating work in coherent quantum systems:
proposals and limitations}} (\bibinfo{year}{2018}),
\bibinfo{edition}{springer international publishing} edn.
\newblock URL \url{http://arxiv.org/abs/1805.10096}.
\bibitem{engel_jarzynski_2007}
\bibinfo{author}{Engel, A.} \& \bibinfo{author}{Nolte, R.}
\newblock \bibinfo{title}{Jarzynski equation for a simple quantum system:
{{Comparing}} two definitions of work}.
\newblock \emph{\bibinfo{journal}{Europhys. Lett.}}
\textbf{\bibinfo{volume}{79}}, \bibinfo{pages}{10003} (\bibinfo{year}{2007}).
\bibitem{RMPCampisi}
\bibinfo{author}{Campisi, M.}, \bibinfo{author}{H\"anggi, P.} \&
\bibinfo{author}{Talkner, P.}
\newblock \bibinfo{title}{Colloquium: Quantum fluctuation relations:
Foundations and applications}.
\newblock \emph{\bibinfo{journal}{Rev. Mod. Phys.}}
\textbf{\bibinfo{volume}{83}}, \bibinfo{pages}{771--791}
(\bibinfo{year}{2011}).
\newblock URL \url{https://link.aps.org/doi/10.1103/RevModPhys.83.771}.
\bibitem{Mazzola}
\bibinfo{author}{Mazzola, L.}, \bibinfo{author}{De~Chiara, G.} \&
\bibinfo{author}{Paternostro, M.}
\newblock \bibinfo{title}{Measuring the {{Characteristic Function}} of the
{{Work Distribution}}}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. Lett.}}
\textbf{\bibinfo{volume}{110}}, \bibinfo{pages}{230602}
(\bibinfo{year}{2013}).
\bibitem{Dorner}
\bibinfo{author}{Dorner, R.} \emph{et~al.}
\newblock \bibinfo{title}{Extracting {{Quantum Work Statistics}} and
{{Fluctuation Theorems}} by {{Single}}-{{Qubit Interferometry}}}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. Lett.}}
\textbf{\bibinfo{volume}{110}}, \bibinfo{pages}{230601}
(\bibinfo{year}{2013}).
\bibitem{DeChiaraChapter}
\bibinfo{author}{De~Chiara, G.}, \bibinfo{author}{Solinas, P.},
\bibinfo{author}{Cerisola, F.} \& \bibinfo{author}{Roncaglia, A.~J.}
\newblock \emph{\bibinfo{title}{Ancilla-assisted measurement of quantum work}}
(\bibinfo{year}{2018}), \bibinfo{edition}{springer international publishing}
edn.
\newblock URL \url{http://arxiv.org/abs/1805.06047}.
\bibitem{an_experimental_2015}
\bibinfo{author}{An, S.} \emph{et~al.}
\newblock \bibinfo{title}{Experimental {{Test}} of the {{Quantum Jarzynski
Equality}} with a {{Trapped}}-{{Ion System}}}.
\newblock \emph{\bibinfo{journal}{Nat. Phys.}} \textbf{\bibinfo{volume}{11}},
\bibinfo{pages}{193} (\bibinfo{year}{2015}).
\bibitem{jarzynski_quantique_2018}
\bibinfo{author}{Xiong, T.~P.} \emph{et~al.}
\newblock \bibinfo{title}{Experimental {{Verification}} of a
{{Jarzynski}}-{{Related Information}}-{{Theoretic Equality}} by a {{Single
Trapped Ion}}}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. Lett.}}
\textbf{\bibinfo{volume}{120}}, \bibinfo{pages}{010601}
(\bibinfo{year}{2018}).
\bibitem{cerisola_using_2017}
\bibinfo{author}{Cerisola, F.} \emph{et~al.}
\newblock \bibinfo{title}{Using a quantum work meter to test non-equilibrium
fluctuation theorems}.
\newblock \emph{\bibinfo{journal}{Nat. Commun.}} \textbf{\bibinfo{volume}{8}},
\bibinfo{pages}{1241} (\bibinfo{year}{2017}).
\bibitem{Serra2014}
\bibinfo{author}{Batalh\~ao, T.~B.} \emph{et~al.}
\newblock \bibinfo{title}{Experimental {{Reconstruction}} of {{Work
Distribution}} and {{Study}} of {{Fluctuation Relations}} in a {{Closed
Quantum System}}}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. Lett.}}
\textbf{\bibinfo{volume}{113}}, \bibinfo{pages}{140601}
(\bibinfo{year}{2014}).
\bibitem{Serra2015}
\bibinfo{author}{Batalh\~ao, T.~B.} \emph{et~al.}
\newblock \bibinfo{title}{Irreversibility and the {{Arrow}} of {{Time}} in a
{{Quenched Quantum System}}}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. Lett.}}
\textbf{\bibinfo{volume}{115}}, \bibinfo{pages}{190601}
(\bibinfo{year}{2015}).
\bibitem{pekola}
\bibinfo{author}{Pekola, J.~P.}, \bibinfo{author}{Solinas, P.},
\bibinfo{author}{Shnirman, A.} \& \bibinfo{author}{Averin, D.~V.}
\newblock \bibinfo{title}{Calorimetric measurement of work in a quantum
system}.
\newblock \emph{\bibinfo{journal}{New J. Phys.}} \textbf{\bibinfo{volume}{15}},
\bibinfo{pages}{115006} (\bibinfo{year}{2013}).
\bibitem{Horowitz12}
\bibinfo{author}{Horowitz, J.~M.}
\newblock \bibinfo{title}{Quantum-trajectory approach to the stochastic
thermodynamics of a forced harmonic oscillator}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. E}} \textbf{\bibinfo{volume}{85}},
\bibinfo{pages}{031110} (\bibinfo{year}{2012}).
\bibitem{elouard_probing_2017}
\bibinfo{author}{Elouard, C.}, \bibinfo{author}{Bernardes, N.~K.},
\bibinfo{author}{Carvalho, A. R.~R.}, \bibinfo{author}{Santos, M.~F.} \&
\bibinfo{author}{Auff\`eves, A.}
\newblock \bibinfo{title}{Probing {{Quantum Fluctuation Theorems}} in
{{Engineered Reservoirs}}}.
\newblock \emph{\bibinfo{journal}{New J. Phys.}} \textbf{\bibinfo{volume}{19}},
\bibinfo{pages}{103011} (\bibinfo{year}{2017}).
\bibitem{Treutlein}
\bibinfo{author}{Treutlein, P.}, \bibinfo{author}{Genes, C.},
\bibinfo{author}{Hammerer, K.}, \bibinfo{author}{Poggio, M.} \&
\bibinfo{author}{Rabl, P.}
\newblock \bibinfo{title}{Hybrid {{Mechanical Systems}}}.
\newblock In \bibinfo{editor}{Aspelmeyer, M.}, \bibinfo{editor}{Kippenberg, T.}
\& \bibinfo{editor}{Marquardt, F.} (eds.) \emph{\bibinfo{booktitle}{Cavity
{{Optomechanics}}}} (\bibinfo{publisher}{{Springer}},
\bibinfo{address}{Berlin}, \bibinfo{year}{2014}).
\bibitem{frenzel_quasi-autonomous_2016}
\bibinfo{author}{Frenzel, M.~F.}, \bibinfo{author}{Jennings, D.} \&
\bibinfo{author}{Rudolph, T.}
\newblock \bibinfo{title}{Quasi-autonomous quantum thermal machines and quantum
to classical energy flow}.
\newblock \emph{\bibinfo{journal}{New J. Phys.}} \textbf{\bibinfo{volume}{18}},
\bibinfo{pages}{023037} (\bibinfo{year}{2016}).
\bibitem{tonner_autonomous_2005}
\bibinfo{author}{Tonner, F.} \& \bibinfo{author}{Mahler, G.}
\newblock \bibinfo{title}{Autonomous quantum thermodynamic machines}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. E}} \textbf{\bibinfo{volume}{72}},
\bibinfo{pages}{066118} (\bibinfo{year}{2005}).
\bibitem{holmes_coherent_2018}
\bibinfo{author}{Holmes, Z.}, \bibinfo{author}{Weidt, S.},
\bibinfo{author}{Jennings, D.}, \bibinfo{author}{Anders, J.} \&
\bibinfo{author}{Mintert, F.}
\newblock \bibinfo{title}{Coherent fluctuation relations: From the abstract to
the concrete} (\bibinfo{year}{2018}).
\newblock URL \url{http://arxiv.org/abs/1806.11256}.
\bibitem{murashita_nonequilibrium_2014}
\bibinfo{author}{Murashita, Y.}, \bibinfo{author}{Funo, K.} \&
\bibinfo{author}{Ueda, M.}
\newblock \bibinfo{title}{Nonequilibrium equalities in absolutely irreversible
processes}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. E}} \textbf{\bibinfo{volume}{90}},
\bibinfo{pages}{042110} (\bibinfo{year}{2014}).
\bibitem{funo2015}
\bibinfo{author}{Funo, K.}, \bibinfo{author}{Murashita, Y.} \&
\bibinfo{author}{Ueda, M.}
\newblock \bibinfo{title}{Quantum nonequilibrium equalities with absolute
irreversibility}.
\newblock \emph{\bibinfo{journal}{New J. Phys.}} \textbf{\bibinfo{volume}{17}},
\bibinfo{pages}{075005} (\bibinfo{year}{2015}).
\bibitem{Ueda_PRA}
\bibinfo{author}{Murashita, Y.}, \bibinfo{author}{Gong, Z.},
\bibinfo{author}{Ashida, Y.} \& \bibinfo{author}{Ueda, M.}
\newblock \bibinfo{title}{Fluctuation theorems in feedback-controlled open
quantum systems: {{Quantum}} coherence and absolute irreversibility}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{96}},
\bibinfo{pages}{043840} (\bibinfo{year}{2017}).
\bibitem{Nakamura_arXiv}
\bibinfo{author}{Masuyama, Y.} \emph{et~al.}
\newblock \bibinfo{title}{Information-to-work conversion by {{Maxwell}}'s demon
in a superconducting circuit-{{QED}} system} (\bibinfo{year}{2017}).
\newblock URL \url{https://arxiv.org/abs/1709.00548}.
\bibitem{lahaye_approaching_2004}
\bibinfo{author}{LaHaye, M.~D.}, \bibinfo{author}{Buu, O.},
\bibinfo{author}{Camarota, B.} \& \bibinfo{author}{Schwab, K.~C.}
\newblock \bibinfo{title}{Approaching the {{Quantum Limit}} of a
{{Nanomechanical Resonator}}}.
\newblock \emph{\bibinfo{journal}{Science}} \textbf{\bibinfo{volume}{304}},
\bibinfo{pages}{74--77} (\bibinfo{year}{2004}).
\bibitem{schliesser_resolved-sideband_2009}
\bibinfo{author}{Schliesser, A.}, \bibinfo{author}{Arcizet, O.},
\bibinfo{author}{Rivi\`ere, R.}, \bibinfo{author}{Anetsberger, G.} \&
\bibinfo{author}{Kippenberg, T.~J.}
\newblock \bibinfo{title}{Resolved-sideband cooling and position measurement of
a micromechanical oscillator close to the {{Heisenberg}} uncertainty limit}.
\newblock \emph{\bibinfo{journal}{Nat. Phys.}} \textbf{\bibinfo{volume}{5}},
\bibinfo{pages}{509--514} (\bibinfo{year}{2009}).
\bibitem{hybrid_circuit}
\bibinfo{author}{Pirkkalainen, J.-M.} \emph{et~al.}
\newblock \bibinfo{title}{Hybrid circuit cavity quantum electrodynamics with a
micromechanical resonator}.
\newblock \emph{\bibinfo{journal}{Nature}} \textbf{\bibinfo{volume}{494}},
\bibinfo{pages}{211--215} (\bibinfo{year}{2013}).
\bibitem{NV_defect}
\bibinfo{author}{Arcizet, O.} \emph{et~al.}
\newblock \bibinfo{title}{A {{Single Nitrogen}}-{{Vacancy Defect Coupled}} to a
{{Nanomechanical Oscillator}}}.
\newblock \emph{\bibinfo{journal}{Nat. Phys.}} \textbf{\bibinfo{volume}{7}},
\bibinfo{pages}{879} (\bibinfo{year}{2011}).
\bibitem{trompette}
\bibinfo{author}{Yeo, I.} \emph{et~al.}
\newblock \bibinfo{title}{Strain-mediated coupling in a quantum
dot\textendash{}mechanical oscillator hybrid system}.
\newblock \emph{\bibinfo{journal}{Nat. Nanotech.}}
\textbf{\bibinfo{volume}{9}}, \bibinfo{pages}{106--110}
(\bibinfo{year}{2014}).
\bibitem{suppl}
\bibinfo{author}{Monsel, J.}, \bibinfo{author}{Elouard, C.} \&
\bibinfo{author}{Auff\`eves, A.} \bibinfo{note}{Supplementary Material for
``An autonomous quantum machine to measure the thermodynamic arrow of
time''}.
\bibitem{rev_work_extraction}
\bibinfo{author}{Elouard, C.}, \bibinfo{author}{Richard, M.} \&
\bibinfo{author}{Auff\`eves, A.}
\newblock \bibinfo{title}{Reversible {{Work Extraction}} in a {{Hybrid
Opto}}-{{Mechanical System}}}.
\newblock \emph{\bibinfo{journal}{New J. Phys.}} \textbf{\bibinfo{volume}{17}},
\bibinfo{pages}{055018} (\bibinfo{year}{2015}).
\bibitem{plenio_quantum-jump_1998}
\bibinfo{author}{Plenio, M.~B.} \& \bibinfo{author}{Knight, P.~L.}
\newblock \bibinfo{title}{The quantum-jump approach to dissipative dynamics in
quantum optics}.
\newblock \emph{\bibinfo{journal}{Rev. Mod. Phys.}}
\textbf{\bibinfo{volume}{70}}, \bibinfo{pages}{101--144}
(\bibinfo{year}{1998}).
\bibitem{Gardiner}
\bibinfo{author}{Gardiner, C.~W.} \& \bibinfo{author}{Zoller, P.}
\newblock \emph{\bibinfo{title}{Quantum noise : a handbook of Markovian and
non-Markovian quantum stochastic methods with applications to quantum
optics}} (\bibinfo{publisher}{Springer}, \bibinfo{year}{2010}).
\bibitem{CarmichaelII}
\bibinfo{author}{Carmichael, H.~J.}
\newblock \emph{\bibinfo{title}{Statistical {{Methods}} in {{Quantum Optics}}
2: {{Non}}-{{Classical Fields}}}}, vol.~\bibinfo{volume}{2} of
\emph{\bibinfo{series}{Theoretical and Mathematical Physics, Statistical
Methods in Quantum Optics}} (\bibinfo{publisher}{{Springer-Verlag}},
\bibinfo{address}{Berlin Heidelberg}, \bibinfo{year}{2008}).
\bibitem{Wisemanbook}
\bibinfo{author}{Wiseman, H.~M.} \& \bibinfo{author}{Milburn, G.~J.}
\newblock \emph{\bibinfo{title}{Quantum measurement and control}}
(\bibinfo{publisher}{Cambridge University Press}, \bibinfo{year}{2010}).
\bibitem{haroche}
\bibinfo{author}{Haroche, S.} \& \bibinfo{author}{Raimond, J.-M.}
\newblock \emph{\bibinfo{title}{Exploring the {{Quantum}}: {{Atoms}},
{{Cavities}}, and {{Photons}}}} (\bibinfo{publisher}{{Oxford university
press}}, \bibinfo{year}{2006}).
\bibitem{Alicki79}
\bibinfo{author}{Alicki, R.}
\newblock \bibinfo{title}{The quantum open system as a model of the heat
engine}.
\newblock \emph{\bibinfo{journal}{J. Phys. Math. Gen.}}
\textbf{\bibinfo{volume}{12}}, \bibinfo{pages}{L103--L107}
(\bibinfo{year}{1979}).
\bibitem{elouard_role_2017}
\bibinfo{author}{Elouard, C.}, \bibinfo{author}{{Herrera-Mart\'i}, D.~A.},
\bibinfo{author}{Clusel, M.} \& \bibinfo{author}{Auff\`eves, A.}
\newblock \bibinfo{title}{The {{Role}} of {{Quantum Measurement}} in
{{Stochastic Thermodynamics}}}.
\newblock \emph{\bibinfo{journal}{Npj Quantum Inf.}}
\textbf{\bibinfo{volume}{3}}, \bibinfo{pages}{9} (\bibinfo{year}{2017}).
\bibitem{thermo_traj_Broeck}
\bibinfo{author}{Van~den~Broeck, C.} \& \bibinfo{author}{Esposito, M.}
\newblock \bibinfo{title}{Ensemble and trajectory thermodynamics: {{A}} brief
introduction}.
\newblock \emph{\bibinfo{journal}{Physica A Stat. Mech. Appl.}}
\textbf{\bibinfo{volume}{418}}, \bibinfo{pages}{6 -- 16}
(\bibinfo{year}{2015}).
\newblock \bibinfo{note}{Proceedings of the 13th International Summer
School on Fundamental Problems in Statistical Physics}.
\bibitem{crooks_quantum_2008}
\bibinfo{author}{Crooks, G.~E.}
\newblock \bibinfo{title}{Quantum {{Operation Time Reversal}}}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{77}},
\bibinfo{pages}{034101} (\bibinfo{year}{2008}).
\bibitem{Manzano17}
\bibinfo{author}{Manzano, G.}, \bibinfo{author}{Horowitz, J.~M.} \&
\bibinfo{author}{Parrondo, J. M.~R.}
\newblock \bibinfo{title}{Quantum {{Fluctuation Theorems}} for {{Arbitrary
Environments}}: {{Adiabatic}} and {{Nonadiabatic Entropy Production}}}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. X}} \textbf{\bibinfo{volume}{8}},
\bibinfo{pages}{031037} (\bibinfo{year}{2018}).
\bibitem{Manikandan18}
\bibinfo{author}{Manikandan, S.~K.} \& \bibinfo{author}{Jordan, A.~N.}
\newblock \bibinfo{title}{Time reversal symmetry of generalized quantum
measurements with past and future boundary conditions}
(\bibinfo{year}{2018}).
\bibitem{Sanii10}
\bibinfo{author}{Sanii, B.} \& \bibinfo{author}{Ashby, P.~D.}
\newblock \bibinfo{title}{High {{Sensitivity Deflection Detection}} of
{{Nanowires}}}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. Lett.}}
\textbf{\bibinfo{volume}{104}} (\bibinfo{year}{2010}).
\bibitem{Mercier16}
\bibinfo{author}{{de L\'epinay}, L.~M.} \emph{et~al.}
\newblock \bibinfo{title}{A universal and ultrasensitive vectorial
nanomechanical sensor for imaging {{2D}} force fields}.
\newblock \emph{\bibinfo{journal}{Nat. Nanotech.}}
\textbf{\bibinfo{volume}{12}}, \bibinfo{pages}{156--162}
(\bibinfo{year}{2016}).
\bibitem{trompette_suppl}
\bibinfo{author}{Yeo, I.} \emph{et~al.}
\newblock \bibinfo{title}{Supplementary information for
``\uppercase{S}train-mediated coupling in a quantum
dot\textendash{}mechanical oscillator hybrid system''}.
\newblock \emph{\bibinfo{journal}{Nat. Nanotech.}}
\textbf{\bibinfo{volume}{9}}, \bibinfo{pages}{106--110}
(\bibinfo{year}{2014}).
\bibitem{elouard_extracting_2017}
\bibinfo{author}{Elouard, C.}, \bibinfo{author}{{Herrera-Mart\'i}, D.},
\bibinfo{author}{Huard, B.} \& \bibinfo{author}{Auff\`eves, A.}
\newblock \bibinfo{title}{Extracting {{Work}} from {{Quantum Measurement}} in
{{Maxwell}}'s {{Demon Engines}}}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. Lett.}}
\textbf{\bibinfo{volume}{118}}, \bibinfo{pages}{260603}
(\bibinfo{year}{2017}).
\bibitem{Cottet7561}
\bibinfo{author}{Cottet, N.} \emph{et~al.}
\newblock \bibinfo{title}{Observing a quantum {{Maxwell}} demon at work}.
\newblock \emph{\bibinfo{journal}{Proceedings of the National Academy of
Sciences}} \textbf{\bibinfo{volume}{114}}, \bibinfo{pages}{7561--7564}
(\bibinfo{year}{2017}).
\newblock URL \url{http://www.pnas.org/content/114/29/7561}.
\newblock \eprint{http://www.pnas.org/content/114/29/7561.full.pdf}.
\end{thebibliography}
\onecolumngrid
\section*{\large Supplementary information for ``An autonomous quantum machine to measure the thermodynamic arrow of time''} \vspace*{0.5cm} \supplsection{Master equation.} Here we describe the coupling of a hybrid opto-mechanical system to a thermal bath of temperature $T$. The total Hamiltonian reads $H = H_\text{qm} + H_\text{b} + V_\text{qb}$, where $H_{\text{b}} = \sum_{k}\hbar \omega_k a^\dagger_k a_k$ is the free Hamiltonian of the bath and $a_k$ is the annihilation operator of the $k$-th electromagnetic mode of frequency $\omega_k$. The coupling Hamiltonian between the qubit and the bath in the rotating wave approximation equals $V_{\text{qb}} = \sum_k \hbar g_k (a_k \sigma^\dagger + a^\dagger_k \sigma)$, where $\sigma= \dyad{g}{e}$ and $g_k$ is the coupling strength between the qubit and the $k$-th mode. We denote by $\gamma = \sum_k g_k^2\delta(\omega_0-\omega_k)$ the spontaneous emission rate of the bare qubit in the bath. The typical correlation time of the bath satisfies $\tau_{\text{c}} \ll \gamma^{-1}, g_{\text{m}}^{-1}, \Omega^{-1}$.
The hybrid system is initially prepared in a factorized state $\rho_\text{qm}(0) = \rho_\text{q}(0)\otimes \dyad{\beta_0}$, where $\rho_\text{q}(0)$ is diagonal in the bare qubit energy basis and $\ket{\beta_0}$ is a pure coherent state. We define a coarse-grained time step $\Dt$ fulfilling $\tau_{\text{c}} \ll \Dt \ll \gamma^{-1}$, such that the hybrid system and the bath remain in a factorized state at all times (Born-Markov approximation). Moreover, the coupling to the bath solely induces transitions between the qubit bare energy states, such that the hybrid system naturally evolves into a classically correlated state of the form $\rho_\text{qm}(t) = P_e(t) \dyad{e}\otimes \dyad{\beta_e(t)} + P_g(t) \dyad{g}\otimes \dyad{\beta_g(t)}$, where $\{\ket{\beta_{\epsilon}(t)}\}_{\epsilon=e,g}$ are coherent states of the MO satisfying $\beta_{\epsilon} = \beta_0 e^{-\ii\Omega t} + \delta \beta_{\epsilon}(t)$. The mechanical fluctuations after a typical time $t$ satisfy $|\delta \beta_{\epsilon}(t)| \sim g_\text{m}t$. They become potentially detectable as soon as $|\delta \beta_{\epsilon}(t)| \geq 1$, i.e. $t \geq \gm^{-1}$. Conversely, the mechanical fluctuations have no influence on the qubit frequency as long as $|\delta \beta_{\epsilon}(t)| \ll |\beta_0|$, i.e. $t \ll |\beta_0|\gm^{-1}$ (see main text).
The precursor of the master equation reads \begin{align*} \Delta \rqm^\I(t) = \rqm^\I(t+\Dt) - \rqm^\I(t) = -\frac{1}{\hbar^2}\int_t^{t+\Delta t}\dd t' \int_t^{t'}\dd t'' \Tr_\text{b}\left[ \left[V_{\text{qb}}^\I(t'), \left[V_{\text{qb}}^\I(t''), \rqm^\I(t)\otimes\rho_\text{b}\right]\right]\right], \end{align*} where we have defined the interaction representation with respect to the free Hamiltonians of the hybrid system and the bath $\rqm^\I(t) = \e^{\ii t (H_\text{qm} + H_\text{b})/\hbar}\rqm(t)\e^{-\ii t (H_\text{qm} + H_\text{b})/\hbar}$, $V_{\text{qb}}^\I(t) = \e^{\ii t (H_\text{qm} + H_\text{b})/\hbar}V_{\text{qb}}(t)\e^{-\ii t (H_\text{qm} + H_\text{b})/\hbar}$. $\Tr_\text{b}$ is the trace over the bath's Hilbert space and $\rho_\text{b}$ is the bath's density matrix. We have used that the term of first order in $V^\I_\text{qb}$ vanishes.
Because of the presence of $V_\text{qm} = \hbar \gm \dyad{e}(b + b^\dagger)$ in $H_0$, $V_{\text{qb}}^\I$ also acts on the MO. It can be split as $V_{\text{qb}}^\I(u) = R_{\text{b}}^\dagger(u)\otimes S(u) + R_{\text{b}}(u)\otimes S^\dagger(u)$, with $R_{\text{b}}(u) = \hbar \sum_k g_k a_k \e^{-\ii \omega_k u}$ and $S(u) = \e^{\ii u H_\text{qm}/\hbar}(\sigma \otimes \mathbf{1}_\text{m})\e^{-\ii u H_\text{qm}/\hbar}$. Then, expanding the commutators, the trace over the bath's degrees of freedom can be computed. For any two times $u$ and $v$, the correlation functions of the bath read: $\Tr_\text{b}[\rho_{\text{b}} R_\text{b}(u)R_\text{b}(v)] = \Tr_\text{b}[\rho_{\text{b}} R^\dagger_\text{b}(u)R^\dagger_\text{b}(v)] = 0$, $g_-(u,v) = \Tr_\text{b}[\rho_{\text{b}} R_\text{b}(u)R^\dagger_\text{b}(v)] = \hbar^2 \sum_k g_k^2(\bar{n}_{\omega_k} + 1)\e^{-\ii \omega_k (u - v)}$ and $g_+(u,v) = \Tr_\text{b}[\rho_{\text{b}} R^\dagger_\text{b}(u)R_\text{b}(v)] = \hbar^2 \sum_k g_k^2\bar{n}_{\omega_k} \e^{\ii \omega_k (u - v)}$. Here $\bar{n}_{\omega_k}$ is the average number of photons at frequency $\omega_k$ in the bath. As a result, only terms containing one $S$ and one $S^\dagger$ remain in $\Delta \rqm^\I$. The integral $\int_t^{t'}\dd t''$ can then be changed into an integral over $\tau = t' - t''$: $\int_0^{t' - t}\dd \tau$. Since $g_{s}(u,v) = g_{s}(u - v)$, with $s \in \{+, -\}$, is nonzero only for $|u - v| \lesssim \tau_c \ll \Dt$, the upper bound can be set to infinity. In addition, the coarse-graining time can be chosen such that $\Dt \ll \gamma^{-1}, g_\text{m}^{-1}, \Omega^{-1}$ and the MO does not evolve during the integration. As a consequence, the operator $S(u)$ becomes $S(u) = \sigma\e^{-\ii \omega_0 u}\e^{-\ii \gm(b + b^\dagger) u}$.
As long as $u \ll |\beta_0|\gm^{-1}$, $\omega(\beta_e(u)) \simeq \omega(\beta_g(u))\simeq \omega(\beta_0(u))$, where $\omega(\beta) = \omega_0 + g_\text{m}(\beta + \beta^*)$ and $\beta_0(t) = \beta_0 e^{-i\Omega t}$. Moreover, the state of the system $\rqm(t)$ can be approximated by the factorized state $\rho_{\text{q}}(t)\otimes\dyad{\beta_0(t)}$. Denoting $\ket{E(u)} = \ket{e}\otimes \ket{\beta_0(u)}$ (resp. $\ket{G(u)} = \ket{g}\otimes \ket{\beta_0(u)}$), $S(u)$ thus satisfies $\ev{S(u)}{E(u)} = \ev{S(u)}{G(u)} = \mel{E(u)}{S(u)}{G(u)} = 0$ and $\mel{G(u)}{S(u)}{E(u)} \simeq e^{-\ii \omega(\beta_0(u)) u}$. $\Delta\rho_{\text{qm}}^\text{I}(t)$ can then be decomposed over the states $\ket{E(t)}$ and $\ket{G(t)}$, and the integral over $\tau$ makes the system interact only with bath photons of frequency $\omega(\beta_0(t))$.
As announced in the main text, the master equation describing the relaxation of the hybrid system in the bath can finally be written as \begin{equation} \dot{\rho}_{\text{qm}}(t) =\, -\frac{\ii}{\hbar}[H_{\text{qm}}, \rho_{\text{qm}}(t)] + \gamma \bar{n}_{\omega(\beta_{0}(t))} D[\sigma^\dagger\otimes \mathbf{1}_{\text{m}}]\rho_{\text{qm}}(t) + \gamma \left(\bar{n}_{\omega(\beta_{0}(t))} + 1\right)D[\sigma\otimes \mathbf{1}_{\text{m}}]\rho_{\text{qm}}(t), \label{suppl:master_eq} \end{equation} where $D[X]\rho = X\rho X^\dagger - \frac{1}{2}\{X^\dagger X, \rho\}$, and $\bar{n}_{\omega} = \left(\exp(\hbar\omega/\kT )- 1\right)^{-1}$.
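The population dynamics implied by this master equation can be sketched numerically. The following minimal example (all parameter values are illustrative, not the experimental ones) integrates the excited-state population with the bath occupation evaluated at the slowly modulated frequency $\omega(\beta_0(t))$:

```python
import math

# Illustrative parameters (assumed): hbar*omega_0/(k_B T), relative
# frequency modulation 2*g_m*|beta_0|/omega_0, and rates in units of gamma.
x0 = 1.0
mod_depth = 0.2
gamma, Omega, dt = 1.0, 0.05, 1e-3

def nbar(t):
    """Thermal occupation at the instantaneous qubit frequency omega(beta_0(t))."""
    return 1.0 / math.expm1(x0 * (1.0 + mod_depth * math.cos(Omega * t)))

Pe = 0.0  # excited-state population, qubit starts in |g>
for n in range(200_000):
    nb = nbar(n * dt)
    # dPe/dt = gamma*nbar*Pg - gamma*(nbar+1)*Pe  (populations of the master equation)
    Pe += dt * gamma * (nb * (1.0 - Pe) - (nb + 1.0) * Pe)
```

Since the relaxation rate $\gamma(2\bar{n}+1)$ is much larger than $\Omega$ here, $P_e$ adiabatically tracks the instantaneous thermal population $\bar{n}/(2\bar{n}+1)$, which is the relaxation behavior the master equation describes.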
\supplsection{Entropy production for the autonomous machine.} Starting from the definition $\ds\Traj = \log(P\Traj/\tilde{P}\rTraj)$ and using Eqs.~(13) and (17) from the main text, the entropy production can be written: \begin{equation} \ds\Traj = \log\left(\frac{p^\infty_{\beta_0}[\epsilon_\Sigma(t_0)]}{p_\text{m}[\beta_\Sigma(t_N)]p^\infty_{\beta_\Sigma(t_N)}[\epsilon_\Sigma(t_N)]}
\frac{\prod_{n=1}^{N} P[\Psi_\Sigma(t_{n}) | \Psi_\Sigma(t_{n-1}) ]}{ \prod_{n = 1}^{N}\tilde{P}[\Psi_\Sigma(t_{n-1}) | \Psi_\Sigma(t_{n}) ]}\right). \end{equation} From the expressions of the jump and no-jump operators, we obtain \begin{equation}
\frac{P[\Psi_\Sigma(t_{n}) | \Psi_\Sigma(t_{n-1}) ]}{\tilde{P}[\Psi_\Sigma(t_{n-1}) | \Psi_\Sigma(t_{n}) ]} =\frac{\ev{J^\dagger_{{\cal K}_\Sigma(t_n)}J_{{\cal K}_\Sigma(t_n)}}{\Psi_\Sigma(t_{n-1})}}{\ev{\tilde{J}^\dagger_{{\cal K}_\Sigma(t_n)}\tilde{J}_{{\cal K}_\Sigma(t_n)}}{\Psi_\Sigma(t_{n})}} =\exp(-q[\Sigma, t_{n-1}]/\kT), \\ \end{equation} and, using the expression of the thermal distribution $p^\infty_{\beta}[\epsilon] = \exp(-\hbar\omega(\beta)\delta_{\epsilon, e}/\kT)/Z(\beta)$, we get \begin{equation} \frac{p^\infty_{\beta_0}[\epsilon_\Sigma(t_0)]}{p^\infty_{\beta_\Sigma(t_N)}[\epsilon_\Sigma(t_N)]} = \exp((\Delta {\cal E}_\text{q}\Traj - \Delta F\Traj)/\kT). \end{equation} The initial and final thermal distributions respectively depend on $\beta_0$ and $\beta_\Sigma(t_N)$, which leads to a trajectory-dependent free energy variation $\Delta F\Traj = k_\text{B} T \log(Z(\beta_0)/Z(\beta_\Sigma(t_N)))$. Finally, \begin{align} \ds\Traj &= -\log(p_\text{m}[\beta_\Sigma(t_N)]) + \frac{\Delta {\cal E}_\text{q}\Traj - \Delta F\Traj - Q\Traj}{\kT} \nonumber\\ &= I_\text{Sh}\Traj -\frac{ \Delta \mathcal{E}_\text{m} + \Delta F\Traj}{\kT}\nonumber\\ &= I_\text{Sh}\Traj + \sigma\Traj, \end{align} where we used $\Delta {\cal E}_\text{q}\Traj = W\Traj + Q\Traj$ and $W\Traj = -\Delta \mathcal{E}_\text{m}\Traj$ [Eq.~(11) from the main text].
\supplsection{Reduced Jarzynski Equality} We show that, in the Markovian limit, the reduced entropy production $\sigma [\rightvect{\Sigma}\,]$ obeys the following Jarzynski-like equality: \begin{equation} \ev{\exp(-\sigma[\rightvect{\Sigma}\,])} _{\rightvect{\Sigma}}= 1. \label{suppl:JE} \end{equation}
The derivation starts from the sum over all reversed trajectories of the complete machine: $1 = \sum_{\sleftvect{\Sigma}} \tilde{P}\rTraj$. In the limit $|\beta_0| \gg g_{\text{m}}/\Omega$, the action of the MO on the qubit is similar to that of an external operator imposing the evolution of the qubit frequency $\omega(\beta_0(t))$. As a consequence, the reversed jump probability at time $t_n$ does not depend on the exact MO state $\beta_\Sigma(t_n)$, but only on $\beta_{0}(t_n) = \beta_0\e^{-\ii\Omega t_{n}}$, which corresponds to the free MO dynamics. We can therefore drop the dependence on the exact MO state: $\tilde{P}[\Psi_\Sigma(t_{n-1}) | \Psi_\Sigma(t_{n}) ] = \tilde{P}[\epsilon_\Sigma(t_{n-1}) | \epsilon_\Sigma(t_{n})]$ and $p^\infty_{\beta_\Sigma(t_N)}[\epsilon_\Sigma(t_N)] = p^\infty_{\beta_0(t_N)}[\epsilon_\Sigma(t_N)]$. Therefore, \begin{align*} 1
=\,& \left(\sum_{\beta_\Sigma(t_N)} p_\text{m}[\beta_\Sigma(t_N)]\!\right)\sum_{\rightvect{\epsilon}}p^\infty_{\beta_0(t_N)}[\epsilon_\Sigma(t_N)] \prod_{n = 1}^{N} \tilde{P}[\epsilon_\Sigma(t_{n-1}) | \epsilon_\Sigma(t_{n})]\\
=\,& \sum_{\rightvect{\epsilon}}p^\infty_{\beta_0(t_N)}[\epsilon_\Sigma(t_N)] \prod_{n = 1}^{N} \tilde{P}[\epsilon_\Sigma(t_{n-1}) | \epsilon_\Sigma(t_{n})]\\
=\,& \sum_{\rightvect{\epsilon}} P[\rightvect{\epsilon}\,] \frac{p^\infty_{\beta_0(t_N)}[\epsilon_\Sigma(t_N)] \prod_{n = 1}^{N} \tilde{P}[\epsilon_\Sigma(t_{n-1}) | \epsilon_\Sigma(t_{n})]}{p^\infty_{\beta_0}[\epsilon_\Sigma(t_0)] \prod_{n = 1}^{N}P[\epsilon_\Sigma(t_{n}) | \epsilon_\Sigma(t_{n-1})]}. \end{align*}
Since the trajectory of the MO $\vec{\beta}[\rightvect{\epsilon}\,]$ is completely determined by that of the qubit, we can restore the sum over the trajectories $\rightvect{\Sigma}$ of the autonomous machine. Then, from the expressions $p^\infty_{\beta}[\epsilon] = \exp(-\hbar\omega(\beta)\delta_{\epsilon, e}/\kT)/Z(\beta)$, $W\Traj = -\Delta {\cal E}_\text{m}\Traj$ and $\tilde{P}[\Psi_\Sigma(t_{n-1}) |\Psi_\Sigma(t_{n})]/P[\Psi_\Sigma(t_{n}) |\Psi_\Sigma(t_{n-1})] = \exp(-q[\Sigma, t_{n-1}]/\kT)$, we get \begin{align*} 1 &= \sum_{\rightvect{\Sigma}} P\Traj \exp(-\frac{\Delta {\cal E}_\text{q}\Traj - \Delta F - Q\Traj}{k_\text{B}T})\\ &=\ev{\exp(\frac{\Delta {\cal E}_\text{m}\Traj + \Delta F}{\kT})}_{\srightvect{\Sigma}}. \end{align*}
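The key ratio $\tilde{P}/P = \exp(-q/\kT)$ used above can be checked numerically for a single thermal jump. The sketch below is our own illustration (with $\hbar = k_\text{B} = 1$): an upward jump of probability $\gamma\bar{n}_\omega\,\mathrm{d}t$, carrying heat $q = \omega$, compared with the reversed downward jump of probability $\gamma(\bar{n}_\omega + 1)\,\mathrm{d}t$.

```python
import numpy as np

def nbar(omega, kT):
    # Bose-Einstein occupation, hbar = kB = 1
    return 1.0 / np.expm1(omega / kT)

def jump_prob_ratio(omega, kT, gamma=1.0, dt=1e-3):
    n = nbar(omega, kT)
    p_up = gamma * n * dt          # direct jump: absorb a photon, q = +omega
    p_down = gamma * (n + 1) * dt  # reversed jump: emit a photon
    return p_up / p_down           # equals exp(-q/kT) by detailed balance
```

The ratio $\bar{n}/(\bar{n}+1)$ is exactly the Boltzmann factor $e^{-\omega/\kT}$, independently of $\gamma$ and $\mathrm{d}t$.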
\supplsection{Fluctuation theorem for the complete autonomous machine.}
\begin{figure}
\caption{ Example of trajectories for the qubit \textbf{(a)} and the MO \textbf{(b)}. The solid (resp. dashed) arrows correspond to the direct (resp. reversed) protocol. For the sake of simplicity, only the trajectories without any jump are represented. $\beta_g$ (resp. $\beta_e$) is the final state of the MO after the direct protocol when the qubit is in state $\ket{g}$ (resp. $\ket{e}$). The expressions of the MO evolution operators are: ${\cal U}_\epsilon(t) = \exp(-\ii t H_\text{m}^\epsilon)$ and $\tilde{\cal U}_\epsilon(t) = {\cal U}_\epsilon^\dagger(t)$, with $\epsilon=e, g$. The reversed trajectories that do not have a direct counterpart are plotted in red and the corresponding qubit states with dashed lines. The final MO states for these trajectories are $\ket{\beta''} = \tilde{\cal U}_g(t_N)\ket{ \beta_e}$ and $\ket{\beta'} = \tilde{\cal U}_e(t_N)\ket{ \beta_g}$, where $\beta''\neq \beta_0$ and $ \beta' \neq \beta_0$. $\rho^\infty_\text{q}(t)$ (resp. $\rho_\text{m}(t)$) is the qubit thermal state (resp. the MO average state) at time $t$.}
\label{fig-supp1}
\end{figure}
The IFT for the complete autonomous machine [Eq.~(23) from the main text] can be derived starting from the sum over all reversed trajectories, which makes the ratio $\tilde{P}\rTraj/P\Traj$ appear. To do so, we need to ensure that $P\Traj \neq 0$. This requires separating the set $\Sigma_\text{d} = \{\leftvect{\Sigma} \,|\, P\Traj \neq 0\}$ of reversed trajectories with a direct counterpart from the set without: \begin{equation} 1 =\sum_{\sleftvect{\Sigma}} \tilde{P}\rTraj=\sum_{\sleftvect{\Sigma} \in \Sigma_\text{d}} P\Traj\frac{\tilde{P}\rTraj}{P\Traj} + \sum_{\sleftvect{\Sigma} \notin \Sigma_\text{d}} \tilde{P}\rTraj. \end{equation}
Only the reversed trajectories $\leftvect{\Sigma} = \{ |\tilde{\epsilon}_\Sigma(t_n), \tilde{\beta}_\Sigma(t_n)\rangle \}_{n=N}^0$ such that $\tilde{\beta}_\Sigma(t_0) = \beta_0$ verify $P\Traj \neq 0$. Fig.~\ref{fig-supp1} gives examples of both kinds of trajectories. Denoting $\lambda = \sum_{\sleftvect{\Sigma} \notin \Sigma_\text{d}} \tilde{P}\rTraj$ and using Eqs.~(13) and (17) from the main text we obtain: \begin{align} 1
=\,& \sum_{\rightvect{\Sigma}} \left(P\Traj p_\text{m}[\beta_\Sigma(t_N)]\frac{p^\infty_{\beta_\Sigma(t_N)}[\epsilon_\Sigma(t_N)]}{p^\infty_{\beta_0}[\epsilon_\Sigma(t_0)]}\frac{ \prod_{n = 1}^{N}\tilde{P}[\Psi_\Sigma(t_{n-1}) | \Psi_\Sigma(t_{n}) ]}{\prod_{n=1}^{N} P[\Psi_\Sigma(t_{n}) | \Psi_\Sigma(t_{n-1}) ]}\right) + \lambda\nonumber\\ =\,& \sum_{\srightvect{\Sigma}} P\Traj\exp(-I_\text{Sh}\Traj- \frac{\Delta {\cal E}_\text{q}\Traj- \Delta F\Traj - Q\Traj}{\kT}) + \lambda\nonumber\\ =\,&\ev{\exp(-(\sigma\Traj + I_\text{Sh}\Traj))}_{\srightvect{\Sigma}} + \lambda. \end{align} Thus, $\ev{\exp(-\Delta_\text{i} s\Traj)}_{\rightvect{\Sigma}} = 1- \lambda.$
\end{document}
\begin{document}
\title{Recursive Least Squares Advantage \\ Actor-Critic Algorithms}
\author{Yuan Wang, Chunyuan Zhang, Tianzong Yu, Meng Ma \thanks{This work was supported by the National Natural Science Foundation of China under Grant 61762032 and Grant 11961018. (Corresponding author: Chunyuan Zhang).}
\thanks{Y. Wang, C. Zhang, T. Yu and M. Ma are with the School of Computer Science and Technology, Hainan University, Haikou 570228, China (email: [email protected]; [email protected]; [email protected]; [email protected]).} }
\markboth{IEEE Transactions on Systems, Man, and Cybernetics: Systems} {Wang \MakeLowercase{\textit{et al.}}: Recursive Least Squares Advantage Actor-Critic Algorithms}
\maketitle
\begin{abstract} As an important algorithm in deep reinforcement learning, advantage actor critic (A2C) has achieved wide success in both discrete and continuous control tasks with raw pixel inputs, but its sample efficiency still leaves much room for improvement. In traditional reinforcement learning, actor-critic algorithms generally use the recursive least squares (RLS) technique to update the parameters of linear function approximators and thereby accelerate their convergence. However, A2C algorithms seldom use this technique to train deep neural networks (DNNs) for improving their sample efficiency. In this paper, we propose two novel RLS-based A2C algorithms and investigate their performance. Both proposed algorithms, called RLSSA2C and RLSNA2C, use the RLS method to train the critic network and the hidden layers of the actor network. The main difference between them lies in the policy learning step. RLSSA2C uses an ordinary first-order gradient descent algorithm and the standard policy gradient to learn the policy parameters. RLSNA2C uses the Kronecker-factored approximation, the RLS method and the natural policy gradient to learn the compatible parameters and the policy parameters. In addition, we analyze the complexity and convergence of both algorithms, and present three tricks for further improving their convergence speed. Finally, we demonstrate the effectiveness of both algorithms on 40 games in the Atari 2600 environment and 11 tasks in the MuJoCo environment. The experimental results show that both algorithms have better sample efficiency than the vanilla A2C on most games or tasks, and higher computational efficiency than two other state-of-the-art algorithms. \end{abstract}
\begin{IEEEkeywords} Deep reinforcement learning (DRL), advantage actor-critic (A2C), recursive least squares (RLS), standard policy gradient (SPG), natural policy gradient (NPG). \end{IEEEkeywords}
\IEEEpeerreviewmaketitle
\section{Introduction} Reinforcement learning (RL) is an important machine learning methodology for solving sequential decision-making problems. In RL, the agent aims to learn an optimal policy for maximizing the cumulative return by interacting with the initially unknown environment \cite{1}. Over the past 40 years, RL has roughly gone through three historical periods, namely, tabular representation RL (TRRL), linear function approximation RL (LFARL) and deep RL (DRL). TRRL can only solve a few simple problems with small-scale discrete state and action spaces. LFARL can only solve some control tasks with low-dimensional continuous state and action spaces, since its approximation capability is still limited \cite{2}. In recent years, by combining with various deep neural networks (DNNs), DRL has shown a huge potential for solving complex real-world problems \cite{3}-\cite{6} and has received more and more research interest. Similar to the classification of LFARL in \cite{7}, DRL can also be divided into three categories: deep value function approximation (DVFA) \cite{8,9}, deep policy search (DPS) \cite{10} and deep actor-critic (DAC) methods \cite{11}-\cite{17}. DAC algorithms can be viewed as a hybrid of DVFA and DPS. They generally have a critic for policy evaluation and an actor for policy learning. Among the three classes of DRL methods, DAC is more effective than DVFA and DPS for online learning of real-world problems, and thus has received extensive attention.
In recent years, many novel DAC algorithms have been proposed. According to the policy type used in the actor, they can be roughly divided into two main subclasses. One subclass is the DAC algorithms with the deterministic policy. Actor-critic algorithms with the deterministic policy gradient (DPG) were first proposed in \cite{11}. Based on this work, a few variants, such as deep DPG (DDPG) \cite{12}, recurrent DPG (RDPG) \cite{13} and twin delayed DDPG (TD3) \cite{14}, have been suggested recently, but they can solve only continuous control tasks. The other subclass is the DAC algorithms with the stochastic policy gradient. In this subclass, the most famous algorithm is perhaps the asynchronous advantage actor-critic (A3C) algorithm \cite{15}. A3C employs multiple workers that interact with their own environments in parallel, uses the accumulated samples of each worker to update the shared model, and uses the parameters of the shared model to update the model of each worker asynchronously. Compared with DPG-type DAC algorithms, A3C can solve both discrete and continuous control tasks without experience replay. However, A3C does not perform better than its synchronized version \cite{18}, namely the synchronous advantage actor-critic (A2C) algorithm \cite{15}, which uses all samples obtained by the workers to update the shared model and uses the shared model to select the action of each worker synchronously. A2C is easier to implement and has become a baseline algorithm in OpenAI. Therefore, we focus on A2C in this paper.
How to improve the sample efficiency of the agent is an open issue in RL. Although A2C and A3C are very effective, their sample efficiency still needs improvement. In recent years, there have been some studies, such as combining with experience replay \cite{19,20} and entropy maximization \cite{21}, that tackle this problem. Here we mainly focus on another direction, in which researchers use more advanced optimization methods for gradient updates. Schulman et al. propose proximal policy optimization (PPO) \cite{16}, which uses a novel objective with clipped probability ratios that forms a pessimistic estimate of the performance of the policy. Compared with trust region policy optimization (TRPO) \cite{17}, PPO is much simpler to implement. Byun et al. propose the proximal policy gradient (PPG) \cite{22}, and show that performance similar to PPO's can be obtained by using the gradient formula from the original policy gradient theorem. Wu et al. propose the actor critic using Kronecker-factored trust region (ACKTR), which uses a Kronecker-factored approximation \cite{23} to the natural policy gradient (NPG) that allows the covariance matrix of the gradient to be inverted efficiently \cite{24}. These algorithms are more effective than the vanilla A2C or A3C with ordinary first-order optimization algorithms, but they have higher computational complexities and run more slowly. By using Kostrikov's source code \cite{25} to test on Atari games, we find that PPO and ACKTR are only 1/10 and 7/12 as fast as A2C with RMSProp \cite{26}, respectively.
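For reference, the clipped surrogate objective that distinguishes PPO can be written in a few lines of NumPy. This is a generic sketch of the published objective, not the benchmarked implementation:

```python
import numpy as np

def ppo_clip_loss(logp_new, logp_old, adv, eps=0.2):
    # Pessimistic clipped surrogate: L = -E[min(r*A, clip(r, 1-eps, 1+eps)*A)],
    # where r is the probability ratio pi_new/pi_old.
    ratio = np.exp(logp_new - logp_old)
    unclipped = ratio * adv
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * adv
    return -np.mean(np.minimum(unclipped, clipped))
```

When the new and old policies coincide the loss reduces to the negative mean advantage; large ratios are cut off at $1\pm\epsilon$, which removes the incentive for overly large policy updates.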
In LFARL, traditional actor-critic algorithms usually use the recursive least squares (RLS) method to improve their convergence performance. In 1996, Bradtke and Barto first propose the least squares temporal difference (LSTD) algorithm and define a recursive version (namely RLSTD) \cite{27}. In 1999, Konda and Tsitsiklis first propose a class of two-time-scale actor-critic algorithms with linear function approximation, and point out that it is possible to use LSTD for policy evaluation \cite{28}. After that, Xu et al. propose an actor-critic algorithm that uses the RLSTD($\lambda$) algorithm as the critic \cite{29}. In 2003, Peters et al. first propose the natural actor-critic (NAC) algorithm, which uses the LSTD-Q($\lambda$) algorithm for policy evaluation and uses NPG for policy learning \cite{30}. On this basis, Park et al. propose the RLS-based NAC algorithm \cite{31}, and Bhatnagar et al. provide four actor-critic algorithms as well as their convergence proofs \cite{32}. There are some other RLS-based actor-critic algorithms such as KDHP \cite{7} and CIPG \cite{33}. RLS has become the baseline optimization method for traditional actor-critic algorithms. However, to the best of our knowledge, no RLS-based DAC algorithms have been proposed, since DNN approximation is much more complicated than linear function approximation. In our previous work \cite{34}, we propose a class of RLS optimization algorithms for DNNs and validate their effectiveness on some classification benchmark datasets. However, there is a big difference between RL and supervised learning, and some obstacles still need to be overcome.
In this paper, we try to introduce the RLS optimization into the A2C algorithm. We propose two RLS-based A2C algorithms, called RLSSA2C and RLSNA2C, respectively. Both of them use the same loss function as the vanilla A2C, and employ the RLS method to optimize their critic networks and the hidden layers of their actor networks. The main difference between them is the policy learning. RLSSA2C uses the standard policy gradient (SPG) and an ordinary first-order gradient descent algorithm to update the policy parameters, and RLSNA2C uses the NPG, the Kronecker-factored approximation and the RLS method to learn the compatible parameter and the policy parameters. We show that their computational complexities have the same order as the vanilla A2C with the first-order optimization algorithm. In addition, we also provide some tricks for accelerating their convergence speed. Finally, we demonstrate their effectiveness on 40 games in the Atari 2600 environment and 11 tasks in the MuJoCo environment. Experimental results show that both RLSSA2C and RLSNA2C have better sample efficiency than the vanilla A2C on most games. Furthermore, they can achieve a faster running speed than PPO and ACKTR.
The rest of this paper is organized as follows. In Section \uppercase\expandafter{\romannumeral2}, we introduce some background knowledge. In Section \uppercase\expandafter{\romannumeral3}, we present the detailed derivation of our proposed algorithms. In Section \uppercase\expandafter{\romannumeral4}, we analyze the computational complexity and convergence of our proposed algorithms, and provide three tricks for further improving their convergence speed and solution quality. In Section \uppercase\expandafter{\romannumeral5}, we demonstrate the effectiveness of our proposed algorithms on Atari games and MuJoCo tasks. Finally, we conclude our work in Section \uppercase\expandafter{\romannumeral6}.
\section{Background} In this section, we briefly review Markov decision processes (MDPs), stochastic policy gradient, convolutional neural networks (CNNs) and the vanilla A2C algorithm. In addition, we also introduce some notations used in this paper. \subsection{Markov Decision Process} In RL, a sequential decision-making problem is generally formulated into a Markov decision process (MDP)
defined as $\left \langle \mathcal{S},\mathcal{A},p,r,\gamma \right \rangle$, where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, $p(s'_t|s_t,a_t)\in[0,1]$ and $r_t\in \mathcal{R}$ are the state-transition probability distribution and the immediate reward from the state $s_t$ to the next state $s'_t$ by taking the action $a_t$ at time step $t$, and $\gamma\in(0,1]$ is the discount factor. The action $a_t$ is selected by a policy, which can be either stochastic or deterministic. In this paper, we focus on the former and use a tensor or matrix $\Theta$ to parameterize it. A stochastic policy $\pi(a_t|s_t;\Theta)$ describes the probability distribution of taking $a_t$ in $s_t$.
For an MDP, the goal of RL is to find an optimal policy $\pi^*$ (also an optimal policy parameter $\Theta^*$) to maximize the cumulative expected return $J(\Theta)$ from an initial state $s_0$, namely \begin{equation}
\Theta^*=\mathop{\arg\!\max}\limits_{\Theta}J(\Theta)=\mathop{\arg\!\max}\limits_{\Theta}\mathbb{E}_{\pi}\bigg[\sum_{t=0}^{\infty}\gamma^tr_t \big|s_0 \bigg] \end{equation}
Unfortunately, $J(\Theta)=\mathbb{E}_{\pi}[\sum_{t=0}^{\infty}\gamma^t r_t|s_0]$ is difficult to calculate directly, since $p(s'_t|s_t,a_t)$ is unknown in RL, and $s'_t$ and $r_t$ can be obtained only by the agent's interaction with the environment. DRL generally uses the DAC method to find $\Theta^*$. At time step $t$, the critic uses the DVFA method to approximate the state-value function $V^{\pi}(s_t)=\mathbb{E}_{\pi}[\sum_{i=0}^{\infty}\gamma^ir_i|s_0=s_t]$ and the action-value function $Q^{\pi}(s_t,a_t)=\mathbb{E}_{\pi}[\sum_{i=0}^{\infty}\gamma^ir_i|s_0=s_t,a_0=a_t]$ for evaluating the performance of the current policy $\pi$, and then the actor uses the policy gradient to update $\Theta_t$.
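The discounted return inside the expectation of (1) is computed most conveniently by the backward recursion $G_k = r_k + \gamma G_{k+1}$. A minimal sketch (our own illustration):

```python
def discounted_return(rewards, gamma):
    # G = sum_t gamma^t * r_t, accumulated backwards for a finite episode
    G = 0.0
    for r in reversed(rewards):
        G = r + gamma * G
    return G
```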
\subsection{Stochastic Policy Gradient} Currently, there are two main types of policy gradients: SPG and NPG. From the policy gradient theorem \cite{35}, SPG can be calculated as \begin{equation}
\nabla_{\Theta_t}J(\Theta_t) = \mathbb{E}_{\pi}\big[A^{\pi}(s_t,a_t)\nabla_{\Theta_t}\textrm{log}\pi(a_t|s_t;\Theta_t)\big] \end{equation} where $\nabla_{\Theta_t}J(\Theta_t)$ denotes $ \partial J(\Theta_t)/\partial \Theta_t$, and $A^{\pi}(s_t,a_t)$ is the advantage function. $A^{\pi}(s_t,a_t)$ measures how much better than the average it is to take an action $a_t$ \cite{36}. It is defined as \begin{equation} A^{\pi}(s_t,a_t)=Q^{\pi}(s_t,a_t)-V^{\pi}(s_t) \end{equation} Unlike SPG, NPG does not follow the steepest direction in the policy parameter space but the steepest direction with respect to the Fisher information metric \cite{37}. It is defined as \begin{equation} \tilde{\nabla}_{\Theta_t}J(\Theta_t) = (\textrm{F}(\Theta_t))^{-1} \nabla_{\Theta_t}J(\Theta_t) \end{equation} where $\textrm{F}(\Theta_t)$ is the Fisher information matrix defined by \begin{equation}
\hspace{-0.15cm}\textrm{F}(\Theta_t)\!=\!\mathbb{E}_{\pi}\big[v(\nabla_{\!\Theta_t}\!\textrm{log}\pi(a_t|s_t;\!\Theta_t\!))
v(\nabla_{\Theta_t}\!\textrm{log}\pi(a_t|s_t;\!\Theta_t\!))^{\!\textrm{T}}\big] \end{equation} where $v(\cdot)$ denotes reshaping the given tensor or matrix into a column vector. To avoid computing the inverse of $\textrm{F}(\Theta_t)$, (4) can also be redefined as \begin{equation} \tilde{\nabla}_{\Theta_t}J(\Theta_t) = m(w_t) \end{equation} where $m(\cdot)$ denotes reshaping the given vector into a matrix or tensor, and $w_t$ is the parameter of the linear compatible function approximator \cite{37} defined by \begin{equation}
\tilde{A}(s_t,a_t;w_t) =w_t^\textrm{T} v\big(\nabla_{\Theta_t}\log\pi(a_t|s_t;\Theta_t)\big) \end{equation} $\tilde{A}(s_t,a_t;w_t)$ is the compatible approximation of $A^{\pi}(s_t,a_t)$.
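Equations (2)-(7) can be verified on a tabular softmax policy, for which every quantity is computable in closed form. The sketch below is our own illustration (the paper itself uses DNN policies); it assembles the Fisher matrix of (5) and applies (4), using a pseudo-inverse because the softmax Fisher matrix is singular along the direction of a uniform logit shift.

```python
import numpy as np

def softmax(theta):
    z = np.exp(theta - theta.max())
    return z / z.sum()

def natural_gradient(theta, advantages):
    # Tabular softmax policy over len(theta) actions (illustrative assumption).
    pi = softmax(theta)
    n = len(theta)
    G = np.eye(n) - pi[None, :]          # row a = grad of log pi(a) w.r.t. theta
    g = (pi * advantages) @ G            # standard policy gradient, cf. Eq. (2)
    F = G.T @ (pi[:, None] * G)          # Fisher information matrix, cf. Eq. (5)
    return np.linalg.pinv(F) @ g         # cf. Eq. (4); pinv since F is singular
```

For this tabular case the natural gradient is simply the advantage vector shifted to zero mean, which illustrates the well-known covariant behavior of NPG.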
\subsection{Convolutional Neural Network} In DAC, CNNs are widely used to approximate $V^{\pi}(s)$ and $\pi$ for solving control tasks with raw pixel inputs. A CNN generally consists of some convolutional (conv) layers, pooling layers and fully-connected (fc) layers. Since there are no learnable parameters in pooling layers, we only review the forward learning of conv layers and fc layers. Let $l$, $M$, $\textrm{X}^l_t$, $\textrm{Z}^l_t$, $\textrm{Y}^l_t$, $\Theta^l_t$ and $f_l(\cdot)$ denote the current layer, the mini-batch size, the mini-batch input, the pre-activation input, the activation output, the parameter, the activation function in this layer at current time $t$, respectively. For brevity, we omit the bias term of each layer in this paper.
In a conv layer, $\textrm{X}^l_t \in \mathcal{R}^{M\times C_{l-1} \times H_{l-1} \times W_{l-1} }$ is convolved with the kernel
$\Theta^l_t \in \mathcal{R}^{C_{l-1}\times C_l \times H_l^k \times W_l^k} $ and passed through $f_l(\cdot)$ to form $\textrm{Y}^l_t
\in \mathcal{R}^{M\times C_{l} \times H_{l} \times W_{l}}$, where $C_l$, $H_l$, $W_l$, $H_l^k$ and $W_l^k$ denote
the number of output channels, the output image height, the output image width, the kernel height and the kernel width, respectively. Let $\textrm{\v{X}}^l_{t(:,:,:,:,h,w)} \in \mathcal{R}^{M \times C_{l-1}\times H_l^k \times W_l^k}$ denote the input selection of the output pixel $\textrm{Y}^l_{t(:,:,h,w)}$. Reshape $\textrm{\v{X}}^l_t$ and $\Theta^l_t$ as $\textrm{\^{X}}^l_t \in \mathcal{R}^{M\times C_{l-1} H_l^k W_l^k\times H_lW_l}$ and ${\hat{\Theta}}^l_t \in \mathcal{R}^{ C_{l-1} H_l^k W_l^k\times C_l}$, respectively. Then, $\textrm{Y}^l_{t(:,:,j)}$ is defined as \begin{equation} \textrm{Y}^l_{t(:,:,j)} = f_l(\textrm{Z}^l_{t(:,:,j)}) = f_l(\textrm{\^X}^l_{t(:,:,j)}{\hat\Theta}^l_t) \end{equation}
In an fc layer, $\textrm{X}^l_t\in \mathcal{R}^{M\times N_{l-1}}$ is weighted by $\Theta^l_t\in \mathcal{R}^{N_{l-1}\times N_{l}}$ to connect all output neurons and passed through $f_l(\cdot)$ to form $\textrm{Y}^l_t \in \mathcal{R}^{M\times N_{l}}$, where $N_l$ denotes the number of output neurons. Namely, $\textrm{Y}^l_t$ is defined as \begin{equation} \textrm{Y}^l_t = f_l(\textrm{Z}^l_t) = f_l(\textrm{X}^l_t\Theta^l_t) \end{equation}
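Both forward rules (8) and (9) can be written compactly in NumPy, assuming the conv input has already been rearranged into the im2col form $\textrm{\^X}^l_t$ described above (our own sketch, with $\tanh$ as a placeholder activation):

```python
import numpy as np

def fc_forward(X, Theta, f=np.tanh):
    # Eq. (9): Y = f(X Theta); X is (M, N_{l-1}), Theta is (N_{l-1}, N_l)
    return f(X @ Theta)

def conv_forward(Xhat, Theta_hat, f=np.tanh):
    # Eq. (8) with im2col-style inputs: Xhat is (M, C_{l-1}*Hk*Wk, H_l*W_l),
    # Theta_hat is (C_{l-1}*Hk*Wk, C_l); output is (M, C_l, H_l*W_l).
    Z = np.einsum('mkj,kc->mcj', Xhat, Theta_hat)
    return f(Z)
```

Equation (8) is then a batched matrix product over the flattened spatial index $j = 1,\dots,H_lW_l$.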
\subsection{Advantage Actor Critic} A2C is an important baseline algorithm in OpenAI. In A2C, there are $N$ parallel workers, a shared critic network and a shared actor network. The critic network and the actor network can be joint or disjoint. If both networks are joint, they will share lower layers but have distinct output layers.
The algorithm flow of A2C can be summarized as follows. At the current iteration step $t$, it lets each worker interact with its own environment for $T$ timesteps, and uses all state-transition pairs to form the mini batch $\mathcal{M}_t=\{(s_{t,i}^{(k)}, a_{t,i}^{(k)},{s'}_{t,i}^{(k)},r_{t,i}^{(k)},d_{t,i}^{(k)})\}_{i=1,\cdots,N}^{k=1,\cdots,T}$, where $d_{t,i}^{(k)} \in \{0,1\}$ indicates whether the next state ${s'}_{t,i}^{(k)}$ of the $i^{th}$ worker is terminal at the $k^{th}$ timestep, and the mini-batch size $M$ is equal to $NT$. Then, it calculates the loss function defined by \begin{equation} L(\Psi_t,\Theta_t)= L(\Psi_t)+L(\Theta_t)+\eta E(\Theta_t) \end{equation} where $\Psi_t$ and $L(\Psi_t)$ are the parameter and the loss function of the critic network, $\Theta_t$ and $L(\Theta_t)$ are the parameter and the loss function of the actor network, and $\eta E(\Theta_t)$ is the entropy regularization term with a small $\eta>0$. $L(\Psi_t)$ is defined as \begin{equation}
L(\Psi_t)=\frac{1}{2NT}\big\|A(\textrm{S}_t,\textrm{A}_t)\big\|^2_F \end{equation} where $\textrm{S}_t=[s_{t,1}^{(1)}, \cdots, s_{t,N}^{(T)}]^\textrm{T}$, $\textrm{A}_t=[a_{t,1}^{(1)}, \cdots, a_{t,N}^{(T)}]^\textrm{T}$, and $A(\textrm{S}_t,\textrm{A}_t) \in \mathcal{R}^{NT}$ denotes the estimate value of $A^{\pi}(\textrm{S}_t,\textrm{A}_t)$, which is calculated as \begin{equation} A(\textrm{S}_t,\textrm{A}_t) = Q(\textrm{S}_t,\textrm{A}_t) - V(\textrm{S}_t;\Psi_t) \end{equation} where $V(\textrm{S}_t;\Psi_t)$ and $Q(\textrm{S}_t,\textrm{A}_t)$ denote the estimate values of $V^{\pi}(\textrm{S}_t)$ and $Q^{\pi}(\textrm{S}_t,\textrm{A}_t)$, respectively. The former is the actual output of the critic network, and the latter is the desired output of the critic network. Each element in the latter is calculated as \begin{equation} \hspace{-0.2cm} Q(s_{t,i}^{(k)},a_{t,i}^{(k)}) \!=\! \!\left\{\! \begin{aligned} r_{t,i}^{(T)} \!+\!\gamma(1\!-\!d_{t,i}^{(T)}) V({s'}_{t,i}^{(T)};\Psi_t)~~~~~~, &~k\!=\!T\\ r_{t,i}^{(k)} \!+\!\gamma(1\!-\!d_{t,i}^{(k)}) Q({s}_{t,i}^{(k+1)},a_{t,i}^{(k+1)}), &~k\!<\!T \end{aligned} \right. \end{equation} where $V({s'}_{t,i}^{(T)};\Psi_t)$ is also approximated by the critic network. $L(\Theta_t)$ and $E(\Theta_t)$ are defined as follows \begin{eqnarray}
L(\Theta_t)\!=\!-\frac{1}{NT}\!\sum_{i=1}^{N}\!\sum_{k=1}^{T}\! A(s_{t,i}^{(k)},a_{t,i}^{(k)})\!\log\!\pi(a_{t,i}^{(k)}|s_{t,i}^{(k)};\!\Theta_t)~\\
E(\Theta_t)\!=\!-\frac{1}{NT}\!\sum_{i=1}^{N}\!\sum_{k=1}^{T}\! \pi(a_{t,i}^{(k)}|s_{t,i}^{(k)};\!\Theta_t)\!\log\!\pi(a_{t,i}^{(k)}|s_{t,i}^{(k)};\!\Theta_t) \end{eqnarray}
where $\pi(a_{t,i}^{(k)}|s_{t,i}^{(k)};\Theta_t)$ is the actual output of the actor network. Finally, A2C uses (10) to update $\Psi_t$ and $\Theta_t$ with some first-order optimization algorithms such as RMSProp.
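The bootstrapped targets of (13) are naturally computed by a backward sweep over the $T$ timesteps, with $(1-d)$ cutting the recursion at episode ends. A minimal sketch (our own illustration, vectorized over the $N$ workers):

```python
import numpy as np

def nstep_q(rewards, dones, v_last, gamma):
    # Eq. (13) as a backward recursion.
    # rewards, dones: arrays of shape (T, N); v_last: (N,), critic value of s'_{T}.
    T, N = rewards.shape
    Q = np.zeros((T, N))
    nxt = v_last
    for k in reversed(range(T)):
        nxt = rewards[k] + gamma * (1.0 - dones[k]) * nxt
        Q[k] = nxt
    return Q
```

Subtracting the critic's values $V(\textrm{S}_t;\Psi_t)$ from these targets then gives the advantage estimates of (12).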
\section{RLS-based Advantage Actor Critic} In this section, we try to integrate the RLS method into the vanilla A2C with CNNs. Under the loss function defined by (10), the RLS update rules for four different types of layers used in critic and actor networks are derived respectively. On the basis, we propose RLSSA2C and RLSNA2C algorithms.
\subsection{Optimizing Critic Output Layer} As introduced in Section I, the critic of traditional actor-critic algorithms widely uses LSTD for policy evaluation. From (11), (12) and (13), $A(\textrm{S}_t,\textrm{A}_t) \in \mathcal{R}^{NT} $ is a temporal difference error vector. This means that we can also use LSTD to update the critic parameter. Let $\mathcal{D}_t=\{\mathcal{M}_1,\cdots,\mathcal{M}_t\}$ denote the state-transition mini-batch dataset from the start to the current iteration step $t$. On this basis, we define an auxiliary least squares loss function as \begin{equation}
\tilde{L}(\Psi)=\frac{1}{2NT}\sum_{n=1}^{t}\lambda^{t-n}\big\|A(\textrm{S}_n,\textrm{A}_n)\big\|^2_F \end{equation} where $\lambda\in(0,1]$ is the forgetting factor. Then, the learning problem of the current critic parameter can be described as \begin{equation} \Psi_{t+1}=\mathop{\arg\!\min}\limits_{\Psi}\tilde{L}(\Psi) \end{equation}
In the critic network of A2C, the output layer is generally a linear fc layer with one output neuron. In other words, the activation function of this layer is the identity function. Thus, from (9), $V(\textrm{S}_n;\Psi)$ is calculated as \begin{equation} V(\textrm{S}_n;\Psi)=\textrm{X}^l_n\Psi^l \end{equation} where $\Psi^l$ is the parameter of this layer. Then, from (12), (16) can be rewritten as \begin{equation}
\tilde{L}(\Psi)=\frac{1}{2NT}\sum_{n=1}^{t}\lambda^{t-n}\big\|Q(\textrm{S}_n,\textrm{A}_n) - \textrm{X}^l_n\Psi^l\big\|^2_F \end{equation} By the chain rule for $\Psi^l$, $\nabla_{\Psi^l}\tilde{L}(\Psi)$ can be derived as \begin{equation} \nabla_{\Psi^l}\tilde{L}(\Psi)=-\frac{1}{NT}\!\sum_{n=1}^{t}\lambda^{t-n}(\textrm{X}^l_n)^{\textrm{T}}\!\big(Q(\textrm{S}_n,\textrm{A}_n) - \textrm{X}^l_n\Psi^l\big) \end{equation} Let $\nabla_{\Psi^l}\tilde{L}(\Psi)=\textrm{0}$. We can easily obtain the least squares solution of $\Psi^l_{t+1}$, namely \begin{equation} \Psi^l_{t+1}=(\textrm{H}^l_{t+1})^{-1}b^l_{t+1} \end{equation} where $\textrm{H}_{t+1}^l \in \mathcal{R}^{N_{l-1}\times N_{l-1}}$ and $b_{t+1}^l \in \mathcal{R}^{N_{l-1}}$ are defined as \begin{eqnarray} \textrm{H}_{t+1}^l=\frac{1}{NT}\sum_{n=1}^{t}\lambda^{t-n}(\textrm{X}^l_n)^{\textrm{T}}\textrm{X}^l_n ~~~~ \\ b_{t+1}^l=\frac{1}{NT}\sum_{n=1}^{t}\lambda^{t-n}(\textrm{X}^l_n)^{\textrm{T}}Q(\textrm{S}_n,\textrm{A}_n) \end{eqnarray}
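Before deriving the recursive form, the batch solution (21)-(23) can be sketched directly (our own illustration; with $\lambda = 1$ and noise-free targets it recovers the exact linear critic parameter):

```python
import numpy as np

def batch_ls_critic(X_list, Q_list, lam=0.99):
    # Eqs. (21)-(23): Psi = H^{-1} b, with exponentially forgotten mini batches.
    t = len(X_list)
    d = X_list[0].shape[1]
    H = np.zeros((d, d))
    b = np.zeros(d)
    for n, (X, Q) in enumerate(zip(X_list, Q_list)):
        w = lam ** (t - 1 - n) / X.shape[0]   # forgetting weight over batch size
        H += w * X.T @ X
        b += w * X.T @ Q
    return np.linalg.solve(H, b)
```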
Next, to avoid computing the inverse of $\textrm{H}^l_{t+1}$ and realize online learning, we try to derive the RLS solution of $\Psi^l_{t+1}$. Rewrite (22) and (23) as the following incremental update equations \begin{eqnarray} \textrm{H}_{t+1}^l=\lambda \textrm{H}^l_{t}+\frac{1}{NT}(\textrm{X}^l_{t})^{\textrm{T}}\textrm{X}^l_{t} ~~~~~\\ b_{t+1}^l=\lambda b^l_{t}+\frac{1}{NT}(\textrm{X}^l_{t})^{\textrm{T}}Q(\textrm{S}_{t},\textrm{A}_{t}) \end{eqnarray} However, by using the Sherman-Morrison matrix inversion lemma \cite{38} for (24), the recursive update of $(\textrm{H}^{l}_{t+1})^{-1}$ still includes a new inverse matrix, since the rightmost term in (24) is a matrix product rather than a vector product. In our previous work \cite{34}, we propose an average-approximation method to tackle this problem and reduce the computation burden. Using this method, we define \begin{eqnarray} \bar{x}^l_t=\frac{1}{NT}\sum_{i=1}^{NT}\textrm{X}^l_{t(i,:)}~~~~~~ \\ \bar{q}_t =\frac{1}{NT}\sum_{i=1}^{NT}Q(\textrm{S}_{t(i,:)},\textrm{A}_{t(i,:)}) \end{eqnarray} where $\textrm{X}^l_{t(i,:)} \in \mathcal{R}^{N_{l-1}}$ and $Q(\textrm{S}_{t(i,:)},\textrm{A}_{t(i,:)}) \in \mathcal{R}$ are the column vectors sliced from $\textrm{X}^l_t$ and $Q(\textrm{S}_t,\textrm{A}_t)$, respectively. On the basis, we can rewrite (24) and (25) as follows \begin{eqnarray} \textrm{H}_{t+1}^l =\lambda \textrm{H}^l_t+k\bar{x}^l_t (\bar{x}^l_t)^\textrm{T} \\ b_{t+1}^l =\lambda b^l_t+k\bar{x}^l_t \bar{q}_t ~~~ \end{eqnarray} where $k>0$ is the average scaling factor. Let $\textrm{P}_t = (\textrm{H}_t)^{-1}$. 
Then, using the Sherman-Morrison matrix inversion lemma and (17), we finally obtain the following RLS update rules \begin{eqnarray} ~~~~~~\textrm{P}^l_{t+1}\approx\frac{1}{\lambda}\bigg(\textrm{P}^l_{t}-\frac{k \textrm{P}^l_{t}\bar{x}^l_t(\bar{x}^l_t)^{\textrm{T}}\textrm{P}^l_{t}}{\lambda+k(\bar{x}^l_t)^{\textrm{T}}\textrm{P}^l_{t}\bar{x}^l_t}\bigg)\\ ~~~~~~\Psi^l_{t+1}\approx\Psi^l_{t}+\frac{k \textrm{P}^l_{t}\bar{x}^l_t (\bar{q}_t-\bar{v}_t)}{\lambda+k(\bar{x}^l_t)^{\textrm{T}}\textrm{P}^l_{t}\bar{x}^l_t} ~~~ \end{eqnarray} where $\bar{v}_t$ is defined as \begin{equation} \bar{v}_t=\frac{1}{NT}\sum_{i=1}^{NT}V(\textrm{S}_{t(i,:)};\Psi_t) \end{equation}
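Note that the step from the rank-one updates (28)-(29) to the recursion (30) is the exact Sherman-Morrison identity; the approximation error comes only from replacing the mini-batch terms by their averages. A small NumPy check (toy sizes, illustrative names):

```python
import numpy as np

rng = np.random.default_rng(1)
n, lam, k = 6, 0.99, 0.1

M = rng.standard_normal((n, n))
H = M @ M.T + n * np.eye(n)          # a symmetric positive-definite H_t
P = np.linalg.inv(H)                 # P_t = H_t^{-1}
x = rng.standard_normal((n, 1))      # the averaged feature bar{x}_t

# Direct inverse of the rank-one update (28): H_{t+1} = lam*H_t + k*x x^T.
P_direct = np.linalg.inv(lam * H + k * (x @ x.T))

# Sherman-Morrison recursion (30): same result, no matrix inversion needed.
P_rls = (P - (k * (P @ x) @ (x.T @ P)) / (lam + k * float(x.T @ P @ x))) / lam
```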
In our previous work \cite{34}, we found that the RLS optimization can be converted into a gradient descent algorithm, which is easier to implement in PyTorch or TensorFlow. Based on (10)-(12), $\nabla_{\Psi^l_{t}} L(\Psi_t,\Theta_t)$ can be derived as \begin{equation} \nabla_{\Psi^l_{t}} L(\Psi_t,\Theta_t) = -\frac{1}{NT} (\textrm{X}_t^l)^{\textrm{T}} (Q(\textrm{S}_t,\textrm{A}_t) - \textrm{X}^l_t\Psi^l_t) \end{equation} In (28) and (29), we used $k\bar{x}^l_t (\bar{x}^l_t)^\textrm{T}$ and $k\bar{x}^l_t \bar{q}_t$ to replace $\frac{1}{NT}(\textrm{X}^l_t)^{\textrm{T}}\textrm{X}^l_t$ and $\frac{1}{NT}(\textrm{X}_t^l)^{\textrm{T}} Q(\textrm{S}_t,\textrm{A}_t)$, respectively. On this basis, we can easily get \begin{equation} k\bar{x}^l_t(\bar{q}_t-\bar{v}_t) \approx \frac{1}{NT} (\textrm{X}_t^l)^{\textrm{T}} (Q(\textrm{S}_t,\textrm{A}_t) - \textrm{X}^l_t\Psi^l_t) \end{equation} Thus, we can rewrite (31) as the following gradient update form \begin{equation} \Psi^l_{t+1} \approx \Psi^l_{t}-\frac{\textrm{P}^l_{t} \nabla_{\Psi^l_{t}} L(\Psi_t,\Theta_t)}{\lambda+k(\bar{\textrm{x}}^l_t)^{\textrm{T}}\textrm{P}^l_{t}\bar{\textrm{x}}^l_t} \end{equation} where $\frac{\textrm{P}^l_{t}}{\lambda+k(\bar{\textrm{x}}^l_t)^{\textrm{T}}\textrm{P}^l_{t}\bar{\textrm{x}}^l_t}$ is the learning rate. This means we need not change the loss function defined by (10).
\subsection{Optimizing Actor Output Layer}
In the actor network of A2C, the output layer is also an fc layer, which outputs $\pi(\textrm{A}_t|\textrm{S}_t;\Theta_t)$. For discrete control problems, A2C often uses the Softmax function to define $\pi$, called the Softmax policy. For continuous control problems, A2C usually uses the Gaussian function to define $\pi$, called the Gaussian policy. Unlike (11), (14) and (15) are difficult to convert into least squares loss functions. In fact, for the same reason, many traditional actor-critic algorithms only use the RLS method to update the critic parameter, and use the SPG descent method to update the actor parameter. Our first algorithm, RLSSA2C, follows this approach: its actor output layer is optimized by an ordinary first-order optimization algorithm. For example, its update rules based on RMSProp are defined as follows \begin{eqnarray} \textrm{C}^l_{t+1} = \rho\textrm{C}^l_{t} +(1-\rho) \nabla_{\Theta^l_{t}} L(\Psi_t,\Theta_t) \odot \nabla_{\Theta^l_{t}} L(\Psi_t,\Theta_t) \\ \Theta^l_{t+1} = \Theta^l_{t}-\frac{\epsilon}{\sqrt{\delta +\textrm{C}^l_{t+1}}} \odot \nabla_{\Theta^l_{t}} L(\Psi_t,\Theta_t) ~~~~~ \end{eqnarray} where $\textrm{C}^l_{t}$ is the accumulative squared gradient, $\rho$ is the decay rate, $\epsilon$ is the learning rate, $\delta>0$ is a small constant, $\nabla_{\Theta^l_{t}} L(\Psi_t,\Theta_t)$ is the SPG, and $\odot$ denotes the Hadamard product.
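For reference, a minimal NumPy sketch of the element-wise RMSProp update (36)-(37) on a toy parameter vector (the gradient here is a stand-in for the SPG):

```python
import numpy as np

rho, eps, delta = 0.99, 2.5e-4, 5e-5   # decay rate, learning rate, small constant

def rmsprop_step(theta, C, grad):
    """One element-wise RMSProp update as in (36)-(37)."""
    C = rho * C + (1 - rho) * grad * grad
    theta = theta - eps / np.sqrt(delta + C) * grad
    return theta, C

theta = np.zeros(4)                     # toy actor output-layer parameters
C = np.zeros_like(theta)                # accumulative squared gradient C_t
grad = np.array([1.0, -2.0, 0.5, 0.0])  # a stand-in SPG

theta, C = rmsprop_step(theta, C, grad)
```

Components with zero gradient are left unchanged, and each component's step size adapts to its own gradient history.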
As introduced in Section II. \textit{B}, there is another type of policy gradient, namely NPG \cite{30,37}, which has yielded a few novel NAC algorithms. To avoid computing the inverse of the Fisher information matrix, some traditional NAC algorithms often use the RLS method to approximate the compatible parameter $\textrm{W}_t$, and use $\textrm{W}_t$ as the NPG for updating the actor parameter \cite{31,32}. Following this approach, we define an auxiliary least squares loss function based on $\mathcal{D}_t$ as \begin{equation}
\tilde{L}(w)=\frac{1}{2NT}\sum_{n=1}^{t}\lambda^{t-n}\big\|A(\textrm{S}_n,\textrm{A}_n) - \tilde{A}(\textrm{S}_n,\textrm{A}_n;w)\big\|^2_F \end{equation} where $A(\textrm{S}_n,\textrm{A}_n)$ is calculated by (12). From (7), each element in $\tilde{A}(\textrm{S}_n,\textrm{A}_n;w)$ is defined as \begin{equation} \tilde{A}(\textrm{S}_{n(i,:)},\textrm{A}_{n(i,:)};w)
= w^\textrm{T} \textrm{G}_{n(i,:)}^{\Theta^l} \end{equation}
where $\textrm{G}_{n(i,:)}^{\Theta^l}$ denotes $v(\nabla_{\Theta_t^l}\log\pi(\textrm{A}_{n(i,:)}|\textrm{X}^l_{n(i,:)};\Theta_t^l))\in \mathcal{R}^{N_{l-1}N_l}$ for simplifying notations. Then, the learning problem of the current compatible parameter can be described as \begin{equation} w_{t+1}=\mathop{\arg\!\min}\limits_{w}\tilde{L}(w) \end{equation} Let $\nabla_{w}\tilde{L}(w)=0$. We can easily obtain the least squares solution of $w_{t+1}$, namely \begin{equation} w_{t+1}=(\textrm{H}^l_{t+1})^{-1}b^l_{t+1} \end{equation} where $\textrm{H}_{t+1}^l \in \mathcal{R}^{N_{l-1}N_l\times N_{l-1}N_l}$ and $b_{t+1} \in \mathcal{R}^{N_{l-1}N_l}$ are defined as follows \begin{eqnarray} \textrm{H}_{t+1}^l=\frac{1}{NT}\sum_{n=1}^{t}\lambda^{t-n}(\textrm{G}_{n}^{\Theta^l})^{\textrm{T}}\textrm{G}_{n}^{\Theta^l} ~~~~\\ b_{t+1}^l=\frac{1}{NT}\sum_{n=1}^{t}\lambda^{t-n} (\textrm{G}_{n}^{\Theta^l})^{\textrm{T}} A(\textrm{S}_{n},\textrm{A}_{n}) \end{eqnarray} where $\textrm{G}_{n}^{\Theta^l}=[\textrm{G}_{n(1,:)}^{\Theta^l},\cdots,\textrm{G}_{n(NT,:)}^{\Theta^l}]^\textrm{T} \in \mathcal{R}^{NT\times N_{l-1}N_l}$. By using the same derivation method in Section III. \textit{A}, we can easily obtain the RLS update rules for $\textrm{P}_{t+1}^l$ and $w_{t+1}$. However, here $\textrm{P}_{t+1}^l=\big(\textrm{H}_{t+1}^l\big)^{-1}$ will be $N_l^2$ times as complex as in the critic output layer.
Since the forgetting factor $\lambda$ is 1 or close to 1 in general, $\textrm{H}_{t+1}^l$ can be approximately viewed as a Fisher information matrix. In addition, by the chain rule, we easily get \begin{equation} \textrm{G}_{n(i,:)}^{\Theta^l} = \textrm{X}_{n(i,:)}^l (\textrm{G}_{n(i,:)}^{\textrm{Z}_{n(i,:)}^l})^\textrm{T} \end{equation}
where $\textrm{G}_{n(i,:)}^{\textrm{Z}_{n(i,:)}^l} \!=\! \nabla_{\textrm{Z}_{n(i,:)}^l}\!\log\pi(\textrm{A}_{n(i,:)}|\textrm{X}^l_{n(i,:)};\Theta^l)$ and $\textrm{Z}_{n(i,:)}^l = (\Theta^l)^\textrm{T}\textrm{X}^l_{n(i,:)}$. Then, we have \begin{equation} (\textrm{G}_{n(i,:)}^{\Theta^l})^{\textrm{T}}\textrm{G}_{n(i,:)}^{\Theta^l} = \textrm{X}_{n(i,:)}^l (\textrm{X}_{n(i,:)}^l)^\textrm{T} \otimes \textrm{G}_{n(i,:)}^{\textrm{Z}_{n(i,:)}^l}(\textrm{G}_{n(i,:)}^{\textrm{Z}_{n(i,:)}^l})^\textrm{T} \end{equation} where $\otimes$ is the Kronecker product. To reduce the memory and computation burden, we use the Kronecker-factored approximation \cite{23,24} to rewrite (42) as \begin{equation} \textrm{H}^l_{t+1} \approx \textrm{H}^{(1)}_{t+1} \otimes \textrm{H}^{(2)}_{t+1} \end{equation} where $\textrm{H}^{(1)}_{t+1}$ and $\textrm{H}^{(2)}_{t+1}$ are defined as follows \begin{equation} \textrm{H}^{(1)}_{t+1} = \frac{k}{NT} \sum_{n=1}^{t}\lambda^{t-n}(\textrm{X}_{n}^l)^{\textrm{T}}\textrm{X}_{n}^l \end{equation} \begin{equation} \textrm{H}^{(2)}_{t+1} = \frac{k}{NT}\sum_{n=1}^{t}\lambda^{t-n}(\textrm{G}_{n}^{\textrm{Z}_n^l})^{\textrm{T}}\textrm{G}_{n}^{\textrm{Z}_n^l} \end{equation} where $\textrm{G}_{n}^{\textrm{Z}_n^l}= [\textrm{G}_{n(1,:)}^{\textrm{Z}_{n(1,:)}^l},\cdots,\textrm{G}_{n(NT,:)}^{\textrm{Z}_{n(NT,:)}^l}]^\textrm{T} \in \mathcal{R}^{NT\times N_l}$, and $k>0$ is another average scaling factor. Let $\textrm{P}^{(1)}_{t}=\big(\textrm{H}^{(1)}_{t}\big)^{-1}$ and $\textrm{P}^{(2)}_{t}=\big(\textrm{H}^{(2)}_{t}\big)^{-1}$. Now, we rewrite (47) and (48) as the incremental forms, use the Sherman-Morrison matrix inversion lemma for each sample in the current mini batch, and average the recursive results.
We can easily get \begin{eqnarray} \textrm{P}^{(1)}_{t+1} \approx \frac{1}{\lambda}\bigg(\textrm{P}^{(1)}_{t}- \frac{k}{NT}\sum_{i=1}^{NT} \frac{ \textrm{P}^{(1)}_{t} \textrm{X}^l_{t(i,:)} (\textrm{X}^l_{t(i,:)})^{\textrm{T}}\textrm{P}^{(1)}_{t}} {\lambda+k(\textrm{X}^l_{t(i,:)})^{\textrm{T}}\textrm{P}^{(1)}_{t}\textrm{X}^l_{t(i,:)} }\bigg)~ \\ \textrm{P}^{(2)}_{t+1} \approx \frac{1}{\lambda}\bigg(\textrm{P}^{(2)}_{t}- \frac{k}{NT}\sum_{i=1}^{NT} \frac{ \textrm{P}^{(2)}_{t} \textrm{G}^{\textrm{Z}_{t(i,:)}^l}_{t(i,:)} (\textrm{G}^{\textrm{Z}_{t(i,:)}^l}_{t(i,:)})^{\textrm{T}}\textrm{P}^{(2)}_{t}} {\lambda+k(\textrm{G}^{\textrm{Z}_{t(i,:)}^l}_{t(i,:)})^{\textrm{T}}\textrm{P}^{(2)}_{t}\textrm{G}^{\textrm{Z}_{t(i,:)}^l}_{t(i,:)} }\bigg) \end{eqnarray} Plugging (46) into (41) yields \begin{equation} w_{t+1}\approx \big(\textrm{P}^{(1)}_{t+1} \otimes \textrm{P}^{(2)}_{t+1}\big)b_{t+1}^l = \textrm{P}^{(1)}_{t+1} m(b_{t+1}^l) \textrm{P}^{(2)}_{t+1} \end{equation} where $m(b_{t+1}^l)$ denotes reshaping the vector $b_{t+1}^l$ into an $N_{l-1}\times N_l$ matrix. Then, the rest of the derivation is similar to that in Section III. \textit{A}. Using (43), (49) and (50), we can obtain \begin{equation} w_{t+1} \approx w_{t} - v\big(\textrm{P}^{(1)}_{t}m\big(\frac{1}{NT}\sum_{i=1}^{NT} \textrm{G}_{t(i,:)}^w\big)\textrm{P}^{(2)}_{t}\big) \end{equation} where $\textrm{G}_{t(i,:)}^w$ is defined as \begin{equation} \textrm{G}_{t(i,:)}^w \!=\!\frac{\big(A(\textrm{S}_{t(i,:)},\textrm{A}_{t(i,:)})-w_t^\textrm{T} \textrm{G}_{t(i,:)}^{\Theta_t^l}\big)\textrm{G}_{t(i,:)}^{\Theta_t^l}} {\!\big(\lambda\!+\!k\big(\textrm{X}^l_{t(i,:)}\big)^{\!\textrm{T}}\textrm{P}^{(1)}_{t}\!\textrm{X}^l_{t(i,:)}\!\big) \!\big(\lambda\!+\!k\big(\textrm{G}^{\textrm{Z}_{t(i,:)}^l}_{t(i,:)}\big)^{\!\textrm{T}}\textrm{P}^{(2)}_{t}\!\textrm{G}^{\textrm{Z}_{t(i,:)}^l}_{t(i,:)} \!\big)} \nonumber \end{equation} Note that we don't use the average-approximation method used in Section III.
\textit{A} to update $\textrm{P}^{l(1)}_{t}$, $\textrm{P}^{l(2)}_{t}$ and $w_t$, since $\textrm{G}^{\textrm{Z}_{t(i,:)}^l}_{t(i,:)}$ and $\textrm{G}_{t(i,:)}^{\Theta_t^l}$ can differ significantly across samples, and averaging would blur those differences.
Finally, the parameter of the actor output layer updated by using NPG can be defined as \begin{equation} \Theta^l_{t+1}=\Theta^l_{t} + \alpha m(w_{t+1}) \end{equation} where $\alpha$ is the learning rate.
\subsection{Optimizing Fully-connected Hidden Layer} In this subsection, we discuss the RLS optimization for fc hidden layers in the critic and actor networks. Generally, there is a nonlinear activation function in each hidden layer, which makes it difficult to derive the least squares solutions of $\Psi_{t+1}^l$ and $\Theta_{t+1}^l$ by using the same method introduced in Section III. \textit{A}. In fact, this is the main reason why it is difficult to combine DAC and RLS.
Here we use the equivalent-gradient method, proposed in our previous work \cite{34}, to tackle this issue. For the current layer $l$ of the critic network, we define an auxiliary least squares loss function as \begin{equation}
\tilde{L}(\Psi)=\frac{1}{2NT}\sum_{n=1}^{t}\lambda^{t-n}\big\|\textrm{Z}^{l*}_n - \textrm{Z}^{l}_n \big\|^2_F \end{equation} where $\textrm{Z}^{l*}_n$ is the corresponding desired value of $\textrm{Z}^{l}_n=\textrm{X}^l_n\Psi^l$. Then, by using the same derivation method as in Section III. \textit{A}, $\Psi^l_{t+1}$ is defined as \begin{equation} \Psi^l_{t+1}\approx\Psi^l_{t}+ \frac{k \textrm{P}^l_{t} \bar{x}_t^l(\bar{z}^{l*}_t - \bar{z}^{l}_t)^{\textrm{T}} } {\lambda+k(\bar{x}^l_t)^{\textrm{T}}\textrm{P}^l_{t}\bar{x}^l_t} \end{equation} where $\bar{z}^{l*}_t$ and $\bar{z}^l_t$ are defined as follows \begin{eqnarray} \bar{z}^{l*}_t=\frac{1}{NT}\sum_{i=1}^{NT}\textrm{Z}^{l*}_{t(i,:)} \\ \bar{z}^l_t =\frac{1}{NT}\sum_{i=1}^{NT} \textrm{Z}^l_{t(i,:)} ~ \end{eqnarray} and the update rule of $\textrm{P}_t$ is the same as (30). Furthermore, from our previous work \cite{34}, $\nabla_{\textrm{Z}^l_{t}} L(\Psi_t,\Theta_t)$ can be equivalently defined as \begin{equation} \nabla_{\textrm{Z}^l_{t}} L(\Psi_t,\Theta_t) = - \frac{1}{\mu NT}(\textrm{Z}^{l*}_t - \textrm{Z}^{l}_t) \end{equation} where $\mu>0$ is the gradient scaling factor. Plugging (58) into (55), we finally get \begin{equation} \Psi^l_{t+1} \approx \Psi^l_{t}- \frac{\mu\textrm{P}^l_{t}}{\lambda+k(\bar{x}^l_t)^{\textrm{T}}\textrm{P}^l_{t}\bar{x}^l_t} \nabla_{\Psi^l_{t}} L(\Psi_t,\Theta_t) \end{equation}
Note that the derivation of $\Theta^l_{t+1}$, which is the parameter of the current fc hidden layer in the actor network, is the same as the above. For brevity, we directly give the result, namely \begin{equation} \Theta^l_{t+1} \approx \Theta^l_{t}- \frac{\mu\textrm{P}^l_{t}}{\lambda+k(\bar{x}^l_t)^{\textrm{T}}\textrm{P}^l_{t}\bar{x}^l_t} \nabla_{\Theta^l_{t}} L(\Psi_t,\Theta_t) \end{equation} where the update rule of $\textrm{P}^l_t$ is also the same as (30).
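A minimal NumPy sketch of one hidden-layer update, combining (59) with the $\textrm{P}^l_t$ recursion (30) (toy sizes; the gradient is a random stand-in for the backpropagated $\nabla_{\Psi^l_t}L$):

```python
import numpy as np

rng = np.random.default_rng(3)
n_in, n_out = 6, 3
lam, k, mu = 1.0, 0.1, 1.0

P = np.eye(n_in)                           # P^l_t, initialized to the identity
psi = np.zeros((n_in, n_out))              # hidden-layer weights Psi^l_t (toy)
x_bar = rng.standard_normal((n_in, 1))     # averaged layer input bar{x}^l_t
grad = rng.standard_normal((n_in, n_out))  # stand-in backpropagated gradient

# Eq. (59): RLS as preconditioned gradient descent with a scalar step size.
denom = lam + k * float(x_bar.T @ P @ x_bar)
psi_new = psi - (mu / denom) * (P @ grad)

# Eq. (30): the accompanying Sherman-Morrison update of P^l_t.
P_new = (P - (k * (P @ x_bar) @ (x_bar.T @ P)) / denom) / lam
```

Seen this way, the RLS update is ordinary gradient descent whose step is preconditioned by $\textrm{P}^l_t$ and scaled by the denominator of (59).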
\subsection{Optimizing Convolutional Hidden Layer} Conv layers are at the front of a CNN. They are usually used to learn spatial features of original state inputs. As defined by (8), a conv layer can be viewed as a special fc layer. That means we can also use the same RLS update rules as (59) and (60) to optimize the conv layers of critic and actor networks. However, there is a small difference, since the current input $\hat{\textrm{X}}^l_t$ of the conv layer $l$ has three dimensions rather than two. In our previous work \cite{34}, we defined $\bar{x}^l_t=\frac{1}{NTH_lW_l}\sum_{i=1}^{NT}\sum_{j=1}^{H_l W_l}\hat{\textrm{X}}^l_{t(i,:,j)}$ to tackle this problem. But in practice, we find that this definition blurs the difference among the input selections of different output pixels, which worsens the performance of our algorithms.
To avoid this situation, we define \begin{equation} \bar{\textrm{X}}^l_t=\frac{1}{NT}\sum_{i=1}^{NT}\hat{\textrm{X}}^l_{t(i,:,:)} \end{equation} where $\bar{\textrm{X}}^l_t \in \mathcal{R}^{C_{l-1} H_l^k W_l^k\times H_lW_l}$. On this basis, similar to the derivation of $\textrm{P}^{l(1)}_{t}=\big(\textrm{H}^{l(1)}_{t}\big)^{-1}$ and $\textrm{P}^{l(2)}_{t}=\big(\textrm{H}^{l(2)}_{t}\big)^{-1}$ introduced in Section III. \textit{B}, the recursive derivation of $\textrm{P}^l_{t+1}=\big(\frac{1}{NT}\sum_{n=1}^{t}\lambda^{t-n}(\hat{\textrm{X}}^l_n)^{\textrm{T}}\hat{\textrm{X}}^l_n \big)^{-1} $ will yield \begin{eqnarray} \textrm{P}^{l}_{t+1}\!\approx\! \frac{1}{\lambda}\bigg(\textrm{P}^{l}_{t}\!-\! \frac{k}{H_lW_l} \!\sum_{j=1}^{H_lW_l}\!\frac{\textrm{P}^{l}_{t} \bar{\textrm{X}}^l_{t(:,j)} (\bar{\textrm{X}}^l_{t(:,j)})^{\textrm{T}}\textrm{P}^{l}_{t}} {\lambda+k(\bar{\textrm{X}}^l_{t(:,j)})^{\textrm{T}}\textrm{P}^{l}_{t}\bar{\textrm{X}}^l_{t(:,j)} }\bigg) \end{eqnarray} Finally, we can obtain the update rules for $\Psi_t^l$ and $\Theta_t^l$ defined as follows \begin{equation} \Psi^l_{t+1} \approx \Psi^l_{t}- \tau\bigg(\frac{\mu H_lW_l\textrm{P}^l_{t}o\big(\nabla_{\Psi^l_{t}} L(\Psi_t,\Theta_t)\big)}{\!\sum_{j=1}^{H_lW_l}\!\big(\lambda+k(\bar{\textrm{X}}^l_{t(:,j)})^{\textrm{T}}\textrm{P}^{l}_{t}\bar{\textrm{X}}^l_{t(:,j)}\big)}\bigg) \end{equation} \begin{equation} \Theta^l_{t+1} \approx \Theta^l_{t}- \tau\bigg(\frac{\mu H_lW_l\textrm{P}^l_{t}o\big(\nabla_{\Theta^l_{t}} L(\Psi_t,\Theta_t)\big)}{\!\sum_{j=1}^{H_lW_l}\!\big(\lambda+k(\bar{\textrm{X}}^l_{t(:,j)})^{\textrm{T}}\textrm{P}^{l}_{t}\bar{\textrm{X}}^l_{t(:,j)}\big)}\bigg) \end{equation} where $o(\cdot)$ denotes reshaping a $C_{l-1}\times C_l \times H_l^k \times W_l^k$ tensor into a $C_{l-1} H_l^k W_l^k\times C_l$ matrix, and $\tau(\cdot)$ does the reverse.
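For intuition on why a conv layer can be treated as a special fc layer, the following im2col sketch (stride 1, no padding, NumPy; toy sizes) builds the unfolded input whose $H_lW_l$ columns are exactly what (62) sums over $j$:

```python
import numpy as np

def im2col(x, kh, kw):
    """Unfold a (C, H, W) input into a (C*kh*kw, H_out*W_out) matrix so that a
    stride-1, unpadded convolution becomes a single matrix product."""
    C, H, W = x.shape
    Ho, Wo = H - kh + 1, W - kw + 1
    cols = np.empty((C * kh * kw, Ho * Wo))
    for i in range(Ho):
        for j in range(Wo):
            cols[:, i * Wo + j] = x[:, i:i + kh, j:j + kw].ravel()
    return cols

x = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)  # C=2 input channels
kernels = np.ones((2 * 3 * 3, 5))                       # 5 output channels of 3x3 kernels
cols = im2col(x, 3, 3)                                  # one column per output pixel
out = kernels.T @ cols                                  # (5, 2*2) flattened conv output
```

Each output pixel corresponds to one column of `cols`, which is why averaging over pixels (the rejected definition above) mixes genuinely different input selections.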
\subsection{RLSSA2C and RLSNA2C Algorithms} \begin{algorithm}[htb] \caption{RLS-Based Advantage Actor Critic}
\textbf{Input:} critic parameters $\{\Psi_0^l, \textrm{P}_0^l\}_{l=1}^{L_{\textrm{C}}}$; actor parameters $\{\Theta_0^l\}_{l=1}^{L_{\textrm{A}}}$, $\{\textrm{P}_0^l\}_{l=1}^{L_{\textrm{A}}\!-\!1}$, hyperparameters of the ordinary first-order algorithm or $\{w_0,\textrm{P}_0^{(1)},\textrm{P}_0^{(2)},\alpha\}$; initial states $\!\{s_{0,i}^{(1)} \}_{i=1}^N$ of $N$ workers, discount factor $\gamma$, scaling factors $k$ and $\mu$, forgetting factor $\lambda$, regularization factor $\eta$.
\For{$t=0,1,2, ... $} {
\textbf{Execute}: let each worker run $T$ timesteps and generate
$\mathcal{M}_t=\{(s_{t,i}^{(k)},a_{t,i}^{(k)},{s'}_{t,i}^{(k)},r_{t,i}^{(k)},d_{t,i}^{(k)})\}_{i=1,\cdots,N}^{k=1,\cdots,T}$,
where $a_{t,i}^{(k)} \sim \pi(a_{t,i}^{(k)}|s_{t,i}^{(k)},\Theta_t)$ decided by the actor network
\textbf{Measure}: calculate the loss function by (10)
\Critic{
update $\Psi_t^{l_C}, \textrm{P}_t^{l_C}$ in fc output layer by (35), (30) \\
update $\Psi_t^l, \textrm{P}_t^l$ in each fc hidden layer by \!(59), \!(30)\\
update $\Psi_t^l, \textrm{P}_t^l$ in each conv layer by (63), (62)\\
}
\Actor{
update $\Theta_t^{L_{\!A}}$ by an ordinary first-order algorithm or
$w_t,\textrm{P}_t^{l(1)},\textrm{P}_t^{l(2)},\Theta_t^{L_{\!A}}$\! by (52), (49), (50), (53)\\
update $\Theta_t^l, \textrm{P}_t^l$ in each fc hidden layer by \!(60), \!(30)\\
update $\Theta_t^l, \textrm{P}_t^l$ in each conv layer by (64), (62)\\
}
\textbf{Set} $\{s_{t+1,i}^{(1)}\}_{i=1}^N = \{{s'}_{t,i}^{(T)}\}_{i=1}^N$ and \textbf{discard} $\mathcal{M}_t$ } \end{algorithm}
Based on the above derivation and the vanilla A2C, both RLSSA2C and RLSNA2C can be summarized in Algorithm 1, where $L_{\textrm{C}}$ and $L_{\textrm{A}}$ denote the total numbers of layers in the critic and actor networks. Note that the autocorrelation matrix $\textrm{P}_t^{l}$ in the critic network is different from that in the actor network; we use the same notation only for brevity. For RLSSA2C, $\Theta_t^{L_{\textrm{A}}}$ is updated by using SPG and an ordinary first-order optimization algorithm. For RLSNA2C, $\Theta_t^{L_{\textrm{A}}}$ is updated by using the compatible parameter $w_{t+1}$, which is updated by $\textrm{P}_t^{(1)}$ and $\textrm{P}_t^{(2)}$. Apart from this, RLSSA2C and RLSNA2C are identical. In practice, to avoid instability in training, the critic and actor networks sometimes share layers \cite{15,19}. In this case, the shared layers are updated by the RLS optimization only once at each iteration.
\section{Analysis and Improvement} In this section, we analyze the computational complexity and the convergence of RLSSA2C and RLSNA2C in brief. In addition, we also present three practical tricks for further improving their convergence speed.
\subsection{Theoretical Analysis} First, we analyze the computational complexity of our proposed algorithms. Although their derivation seems complex and tedious, they are in fact very simple and easy to implement. From (35), (52), (59), (60), (63) and (64), the RLS optimization for actor and critic networks can be viewed as a special SGD optimization. For an fc hidden layer, a critic fc output layer, an actor fc output layer in RLSNA2C and a conv layer, the computational complexities of the RLS optimization are $1+\frac{N_{l-1}}{NT}$, $1+\frac{N_{L-1}}{NT}$, $2+\frac{N_{L-1}N_{L}}{NT}$ and $1+\frac{C_{l-1}H_{l}^kW_{l}^k}{NTH_lW_l}$ times those of SGD, respectively. In practice, A2C generally uses 16 or 32 workers to interact with their environments for 5$\sim$20 timesteps at each iteration, that is, $NT$ is 80$\sim$640. Thus, our RLS optimization is only several times as complex as SGD. In Section \uppercase\expandafter{\romannumeral5}, experimental results show that the running speeds of RLSSA2C and RLSNA2C are only 10\%$\sim$30\% slower than that of the vanilla A2C with RMSProp, but they are significantly faster than those of PPO and ACKTR.
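For illustration, plugging the Atari setup of Section V into these ratios (the 6-action output width is an assumption for the example):

```python
# Overhead of the RLS optimization relative to SGD, using the ratios stated
# above: N*T = 32 workers * 5 timesteps = 160 samples per iteration, a
# 512-neuron fc hidden layer, and an assumed 6-action output layer.
NT = 32 * 5
N_prev, N_out = 512, 6

fc_hidden = 1 + N_prev / NT              # 1 + N_{l-1}/(NT)
critic_out = 1 + N_prev / NT             # 1 + N_{L-1}/(NT)
npg_actor_out = 2 + N_prev * N_out / NT  # 2 + N_{L-1}N_L/(NT)
print(fc_hidden, critic_out, npg_actor_out)
```

So the hidden and critic output layers cost roughly 4 times an SGD step, while the NPG actor output layer is the most expensive at roughly 21 times.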
Next, we analyze the convergence of our proposed algorithms. As shown in Algorithm 1, RLSSA2C and RLSNA2C are two-time-scale actor-critic algorithms. In LFARL, the convergence of this type of algorithm has been established \cite{7,32}. Namely, if the actor learning rate $\alpha_t^{\textrm{A}}$ and the critic learning rate $\alpha_t^{\textrm{C}}$ satisfy the standard Robbins-Monro condition \cite{39} and a time-scale separation condition $\textrm{lim}_{t\rightarrow\infty} \alpha_t^{\textrm{A}}/\alpha_t^{\textrm{C}}=0$, the actor and critic parameters will converge to asymptotically stable equilibria. However, unlike traditional actor-critic algorithms, DAC algorithms use nonlinear function approximations, which makes their convergence difficult to prove. In recent years, there have been a few studies on this issue. Yang et al. establish the convergence of batch actor-critic with nonlinear function approximations and finite samples \cite{40}. Liu et al. establish nonasymptotic upper bounds on the numbers of TD and SGD iterations, and prove that a variant of PPO and TRPO with overparameterized neural networks converges to the globally optimal policy at a sublinear rate \cite{41}. Assuming independent sampling, Wang et al. prove that neural NPG converges to a globally optimal policy at a sublinear rate \cite{42}. Compared with the proof methods for traditional actor-critic algorithms, these methods require more assumptions and are more complex. Our algorithms are similar to the batch actor-critic algorithm studied in \cite{40}, and the RLS optimization can be viewed as a special SGD optimization. Therefore, the convergence of RLSSA2C and RLSNA2C could be established by using the methods in \cite{40}, \cite{41} and \cite{40}, \cite{42}, respectively. Intuitively, if the actor and critic networks can be mapped into two linear function approximators with some error, the convergence proofs of RLSSA2C and RLSNA2C reduce to those of traditional actor-critic algorithms.
\subsection{Performance Improvement} In Section III, to simplify the calculation, we introduced two scaling factors $k$ and $\mu$. From (28), (29), (47), (48) and (58), they should be time-variant, so we present two new definitions for them. From (30), we can get \begin{eqnarray} \textrm{P}^l_{t+1}\approx\frac{1}{\lambda}\bigg(\textrm{P}^l_{t}-\frac{
\textrm{P}^l_{t}\bar{x}^l_t(\bar{x}^l_t)^{\textrm{T}}\textrm{P}^l_{t}}{\frac{\lambda}{k}+(\bar{x}^l_t)^{\textrm{T}}\textrm{P}^l_{t}\bar{x}^l_t}\bigg) \end{eqnarray} It is clear that the average scaling factor $k$ also plays a role similar to the forgetting factor $\lambda$. That is, a big $k$ will increase the forgetting rate, and vice versa. At the beginning of learning, the policy $\pi$ is immature and unstable, so we should select a big $k$ to forget the historical information and emphasize the new samples for accelerating convergence. At the end of learning, the policy $\pi$ is close to the optimum, so we should select a small $k$ to stabilize it. In (49), (50) and (62), $k$ also plays the same role. Thus, we redefine $k$ as \begin{equation} k_t= \max\big[k_0-\lfloor\frac{t}{t_\Delta}\rfloor \delta_k, k_{min}\big] \end{equation} where $k_0$, $t_\Delta$, $\delta_k$ and $k_{min}$ denote the initial value, the update interval, the decay size and the lower bound of $k$, respectively. From (59), (60), (63) and (64), the gradient scaling factor $\mu$ is a part of the learning rate. A big $\mu$ can accelerate convergence, but it may also cause $\pi$ to fall into a local optimum. Thus, $\mu$ should gradually decay to a steady value. Here, we redefine it as \begin{equation} \mu_t= \max\big[\mu_0-\lfloor\frac{t}{t_\Delta}\rfloor \delta_\mu, \mu_{min}\big] \end{equation} where $\mu_0$, $\delta_\mu$ and $\mu_{min}$ denote the initial value, the decay size and the lower bound of $\mu$, respectively. In fact, $\mu_t$ can be different for each layer. In a DNN, deeper layers are more likely to suffer from gradient vanishing, so we suggest that users choose a big $\mu_0$ for these layers.
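The two schedules (66) and (67) share the same stepwise form; a minimal sketch, using the Atari settings of Section V for $k_t$:

```python
from math import floor, isclose

def stepwise_decay(v0, t, t_delta, dv, v_min):
    """The schedule of (66)/(67): drop by dv every t_delta iterations,
    never going below v_min."""
    return max(v0 - floor(t / t_delta) * dv, v_min)

# Atari settings from Section V: k0=0.1, delta_k=0.02, k_min=0.01, t_delta=5000.
ks = [stepwise_decay(0.1, t, 5000, 0.02, 0.01) for t in (0, 4999, 5000, 10**6)]
```

The value stays at $k_0$ for the first $t_\Delta$ iterations, drops by $\delta_k$ at each interval boundary, and is clamped at $k_{min}$ thereafter.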
In addition, it is well known that the vanilla SGD can be accelerated by the momentum method \cite{43}. Our RLS optimization is a special SGD, so we can also use this method to accelerate it. As in our previous work \cite{34}, (35) can be redefined as \begin{eqnarray} \Phi^l_{t+1} \approx \beta \Phi^l_{t}-\frac{\textrm{P}^l_{t} \nabla_{\Psi^l_{t}} L(\Psi_t,\Theta_t)}{\lambda+k(\bar{\textrm{x}}^l_t)^{\textrm{T}}\textrm{P}^l_{t}\bar{\textrm{x}}^l_t}\\ \Psi^l_{t+1} \approx \Psi^l_{t}+ \Phi^l_{t+1} ~~~~~~~~ \end{eqnarray} where $\Phi^l_{t}$ denotes the velocity matrix in the $l^{th}$ layer, and $\beta$ is the momentum factor. (59), (60), (63) and (64) can also be redefined in similar forms. Note that we suggest using this method only for RLSSA2C, since we find empirically that it worsens the stability of RLSNA2C.
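A minimal sketch of the momentum recursion (69)-(70), with a fixed stand-in for the preconditioned gradient (toy values; $\beta=0.5$ as in the Atari experiments):

```python
import numpy as np

beta = 0.5                      # momentum factor used for RLSSA2C on Atari

def momentum_step(psi, phi, precond_grad):
    """Eqs. (69)-(70): fold the RLS-preconditioned gradient into a velocity
    term Phi, then apply the velocity to the parameters Psi."""
    phi = beta * phi - precond_grad
    psi = psi + phi
    return psi, phi

psi, phi = np.zeros(3), np.zeros(3)
g = np.array([1.0, -1.0, 0.0])  # stand-in for the preconditioned gradient
psi, phi = momentum_step(psi, phi, g)
psi, phi = momentum_step(psi, phi, g)
```

Under a constant gradient, the velocity grows toward $1/(1-\beta)$ times the single-step update, which is the usual acceleration effect of momentum.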
\section{Experimental Results} In this section, we will compare our algorithms against the vanilla A2C with RMSProp (called RMSA2C) for evaluating the sample efficiency, and compare them against RMSA2C, PPO and ACKTR for evaluating the computational efficiency. We first test these algorithms on 40 discrete control games in the Atari 2600 environment, and then test them on 11 continuous control tasks in the MuJoCo environment.
\subsection{Discrete Control Evaluation} Atari 2600 is the most famous discrete control benchmark platform for evaluating DRL. It comprises a large number of highly diverse games, which have high-dimensional observations with raw pixels. In this set of experiments, we select 40 games from the Atari environment for performance evaluation. For each game, the state is a $4\!\times\!84\!\times\!84$ normalized image.
All tested algorithms have the same model architecture, which is defined in \cite{15}. It has 3 shared conv layers, 1 shared fc hidden layer, 1 separate fc critic output layer and 1 separate fc actor output layer. The first conv layer has 32 $8\!\times\!8$ kernels with stride $4$, the second conv layer has 64 $4\!\times\!4$ kernels with stride $2$, the third conv layer has 32 $3\!\times\!3$ kernels with stride $1$, and the fc hidden layer has $512$ neurons. All hidden layers use ReLU activation functions, the fc critic output layer uses the Identity activation function to predict the state-value function, and the fc actor output layer uses the Softmax activation function to represent the policy. The parameter of each layer is initialized with the default settings of PyTorch. All algorithms use the same loss function defined by (10), where the discount factor $\gamma=0.99$ and the entropy regularization factor $\eta=0.01$. At each iteration step, each algorithm lets 32 parallel workers run 5 timesteps, and uses the generated minibatch including 160 samples for training. All workers use an Intel Core i7-9700K CPU for trajectory sampling, and use a Nvidia RTX 2060 GPU for accelerating the optimization computation. To avoid gradient exploding, all parameter gradients are clipped by the $L_2$ norm with 0.5.
Besides the above common settings, individual settings of each algorithm are summarized as follows: 1) For RMSA2C, the learning rate $\epsilon$, decay factor $\rho$ and small constant $\delta$ in RMSProp are set to 0.00025, 0.99 and 0.00005, respectively. 2) For RLSSA2C, all initial autocorrelation matrices are set to identity matrices, the forgetting factor $\lambda$ and momentum factor $\beta$ are set to $1$ and $0.5$, the hyperparameters $k_0$, $\delta_k$, $k_{min}$, $\mu_0$, $\delta_{\mu}$, $\mu_{min}$ and $t_\Delta$ of the scaling factors $k_t$ and $\mu_t$ are set to 0.1, 0.02, 0.01, 5, 0.1, 1 and 5000, respectively, and the actor output layer uses the same RMSProp settings as RMSA2C. Note that $\mu_t$ is only used for conv layers and is fixed to 1 for all fc layers. 3) For RLSNA2C, the momentum factor $\beta$ is set to $0.0$, $\textrm{P}_0^{(1)}$ and $\textrm{P}_0^{(2)}$ are also set to identity matrices, and the learning rate $\alpha$ of the actor output layer is initialized to 0.01 and decays by 0.002 per 5000 timesteps down to 0.001. The other settings are the same as those in RLSSA2C. 4) For PPO and ACKTR, we directly use Kostrikov's source code and settings \cite{25}. Note that the settings of RMSA2C are selected from \cite{15}, and the settings of RLSSA2C and RLSNA2C are obtained by tuning on the Pong, Breakout and StarGunner games.
The convergence comparison of our algorithms against RMSA2C on 40 Atari games trained for 10 million timesteps is shown in Fig. 1. It is clear that RLSSA2C and RLSNA2C outperform RMSA2C on most games. Among these three algorithms, RLSSA2C has the best convergence performance (i.e., sample efficiency) and stability. Compared with RMSA2C, RLSSA2C wins on 30 games. On Alien, Amidar, Assault, Asterix, BattleZone, Boxing, Breakout, Kangaroo, Krull, KungFuMaster, MsPacman, Pitfall, Pong, Qbert, Seaquest and Tennis games, RLSSA2C is significantly superior to RMSA2C in terms of convergence speed and convergence quality. Compared with RMSA2C, RLSNA2C also wins on 30 games. On Asterix, Atlantis, DoubleDunk, FishingDerby, NameThisGame, Pong, Riverraid, Seaquest, UpNDown and Zaxxon games, RLSNA2C performs very well. But compared with RLSSA2C, it is not very stable on some games.
In Table \uppercase\expandafter{\romannumeral1}, we present the last 100 average episode rewards of our algorithms and RMSA2C on 40 Atari games trained for 10 million timesteps. From Table I, RLSSA2C and RLSNA2C obtain the highest rewards on 19 and 15 games respectively, while RMSA2C only wins on 6 games. Notably, on the Assault, BattleZone, Gopher and Qbert games, RLSSA2C acquires 2.6, 2.4, 3.6 and 3.1 times the scores of RMSA2C, respectively. On the Asterix, Atlantis, FishingDerby, UpNDown and Zaxxon games, RLSNA2C acquires 2.8, 2.5, 2.7, 4.8 and 45 times the scores of RMSA2C, respectively.
The running speed comparison of our algorithms against RMSA2C, PPO and ACKTR on six Atari games is shown in Table \uppercase\expandafter{\romannumeral2}. Among these five algorithms, RMSA2C has the highest computational efficiency, RLSNA2C and RLSSA2C rank second and third respectively, and PPO ranks last. In detail, RLSSA2C is only 28.1\% slower than RMSA2C, but is 592.7\% and 31.3\% faster than PPO and ACKTR. RLSNA2C is only 27.7\% slower than RMSA2C, but is 596.3\% and 31.9\% faster than PPO and ACKTR. Testing PPO and ACKTR with Kostrikov's source code, we find that our algorithms achieve similar performance. Therefore, our algorithms achieve a better trade-off between high convergence performance and low computational cost.
\begin{figure*}
\caption{Convergence comparison of our algorithms against RMSA2C on 40 Atari games trained for 10M timesteps.}
\end{figure*}
\renewcommand{\arraystretch}{1.5} \begin{table*}
\centering
\fontsize{7.4}{9}\selectfont \begin{floatrow} \capbtabbox{
\begin{tabular*}{0.45\textwidth}{lrrr}
\toprule[1pt]
Game&~~~~~~~~~~RMSA2C&~~~~~~~RLSSA2C&~~~~~~~RLSNA2C\cr
\midrule[1pt]
Alien &975.8 &{1375.7} &\textbf{1675.0} \cr
Amidar &234.9 &\textbf{405.6} &301.1 \cr
Assault &1281.7 &\textbf{3346.9} &3136.4 \cr
Asterix &2937.0 &5138.5 &\textbf{8140.0} \cr
Asteroids &1427.4 &1344.8 &\textbf{1626.0} \cr
Atlantis &1085676.0 &1885303.0&\textbf{2747970.0} \cr
BankHeist &\textbf{1077.1} &789.5 &662.0 \cr
BattleZone &4620.0 &\textbf{11200.0} &8100.0 \cr
BeamRider &4602.9 &5225.5 &\textbf{5313.0} \cr
Bowling &29.1 &\textbf{30.9} &29.3 \cr
Boxing &93.3 &97.2 &\textbf{100.0} \cr
Breakout &389.9 &\textbf{460.5} &445.9 \cr
Centipede &2875.6 &\textbf{4300.1} &2950.9 \cr
DemonAttack &\textbf{25309.3}&13252.4 &2619.5 \cr
DoubleDunk &-14.4 &-15.1 &\textbf{-2.0} \cr
FishingDerby &10.2 &18.3 &\textbf{27.8} \cr
Frostbite &250.7 &\textbf{277.2} &267.0 \cr
Gopher &986.0 &\textbf{3585.0} &1088.0 \cr
Gravitar &204.5 &176.5 &\textbf{240.0} \cr
IceHockey & -11.2 &\textbf{-6.5 } &-7.4 \cr
\bottomrule[1pt]
\end{tabular*}
\hfil\hfil\hfil\hfil\hfil\hfil~~~~~~~~~~\hfil\hfil\hfil\hfil\hfil\hfil\hfil\hfil\hfil\hfil
\begin{tabular*}{0.45\textwidth}{lrrr}
\toprule[1pt]
Game&~~~~~~~RMSA2C&~~~~~~~RLSSA2C&~~~~~~~RLSNA2C\cr
\midrule[1pt]
Jamesbond &\textbf{423.5} &379.0 &30.0 \cr
Kangaroo &992.0 &\textbf{1512.0} &60.0 \cr
Krull &7327.0 &\textbf{7996.3} &3715.0 \cr
KungFuMaster &20427.0 &25224.0 &\textbf{29000.0} \cr
MsPacman &1846.9 &1916.4 &\textbf{2195.0} \cr
NameThisGame &6054.1 &\textbf{8592.8} &8555.0 \cr
Pitfall &-63.4 &-6.7 &\textbf{-2.8} \cr
Pong &19.2 &\textbf{20.5} &19.6 \cr
Qbert &4267.5 &\textbf{13064.5} &12020.0 \cr
Riverraid &\textbf{7572.3} &7193.4 &7125.0 \cr
RoadRunner &30705.0 &\textbf{33160.0} &23300.0 \cr
Seaquest &1754.8 &1728.8 &\textbf{1756.0} \cr
SpaceInvaders &\textbf{1001.2} &677.4 &591.0 \cr
StarGunner &36820.0 &\textbf{40808.0} &33280.0\cr
Tennis &-22.4 &\textbf{-16.2} &-22.1 \cr
TimePilot &3471.0 &\textbf{4648.0} &4480.0 \cr
Tutankham &211.9 &\textbf{236.5} &227.7 \cr
UpNDown &14666.5 &24466.6 &\textbf{69848.0} \cr
WizardOfWor &\textbf{2770.0} &2626.0 &1920.0 \cr
Zaxxon &8.0 &87.0 &\textbf{360.0} \cr
\bottomrule[1pt]
\end{tabular*} } {
\caption{\textsc{Last 100 Average Episode Rewards of our algorithms and RMSA2C on 40 Atari Games Trained for 10M Timesteps}}
\label{tab.tb2} } \end{floatrow} \end{table*}
\renewcommand{\arraystretch}{1.5} \begin{table}[tp]
\centering
\fontsize{7.4}{9}\selectfont
\begin{threeparttable}
\caption{\textsc{Running Speed Comparison on Six Atari Games}\\{( Timesteps\! /\! Second )}}
\label{tab:performance_comparison}
\begin{tabular}{lccccc}
\toprule[1pt]
Game&RMSA2C&PPO&ACKTR&RLSSA2C&RLSNA2C\cr
\midrule[1pt]
Alien &2897 &320 &1690 &2056 &2021 \cr
Breakout &3033 &321 &1668 &2057 &2150 \cr
Pong &3265 &324 &1754 &2411 &2402 \cr
SpaceInvaders&3124 &328 &1731 &2293 &2306 \cr
StarGunner &3436 &341 &1794 &2515 &2512 \cr
Zaxxon &3201 &336 &1750 &2301 &2311 \cr
\midrule[1pt]
Mean &3159 &328 &1731 &2272 &2284 \cr
\bottomrule[1pt]
\end{tabular}
\end{threeparttable} \end{table}
\subsection{Continuous Control Evaluation} MuJoCo is a high-dimensional continuous control benchmark platform. In this set of experiments, we select 11 tasks from the MuJoCo environment for performance evaluation. In contrast to Atari, the state in MuJoCo is a multi-dimensional vector, and the action space is continuous.
\begin{figure*}
\caption{Convergence comparison of our algorithms against RMSA2C on 11 MuJoCo tasks trained for 10M timesteps.}
\end{figure*}
\renewcommand{\arraystretch}{1.5} \begin{table*}[tp]
\centering
\fontsize{7.4}{9}\selectfont \begin{floatrow} \capbtabbox{
\begin{tabular*}{0.45\textwidth}{lrrr}
\toprule[1pt]
Game&RMSA2C&~~~~RLSSA2C&~~~~RLSNA2C\cr
\midrule[1pt]
Ant &-62.0 &\textbf{4607.0} &941.5 \cr
HalfCheetah &454.6 &\textbf{1630.3} &1625.4 \cr
Hopper &921.9 &2068.9 &\textbf{2133.6} \cr
InvertedDoublePendulum&7957.0 &\textbf{9333.4} &8500.0 \cr
InvertedPendulum &1000.0 &1000.0 &1000.0 \cr
Pusher &-117.7 &\textbf{-39.0} &-56.6 \cr
\bottomrule[1pt]
\end{tabular*}
\hfil\hfil\hfil\hfil\hfil\hfil~~~~~~~~~~\hfil\hfil\hfil\hfil\hfil\hfil\hfil\hfil\hfil\hfil
\begin{tabular*}{0.45\textwidth}{lrrr}
\toprule[1pt]
Game&~~~~~~~~~~~~~~~~~~~~~RMSA2C&~~~~~RLSSA2C&~~~~~RLSNA2C\cr
\midrule[1pt]
Reacher &-21.9 &\textbf{-4.9} &-9.2 \cr
Striker &-336.1 &\textbf{-124.7} &-264.3 \cr
Swimmer &\textbf{33.6} &32.9 &32.3 \cr
Thrower &-106.2 &-68.5 &\textbf{-59.3} \cr
Walker2d &1682.0 &\textbf{3306.9} &1363.6 \cr
\cr
\bottomrule[1pt]
\end{tabular*} }{
\caption{\textsc{Top 10 Average Episode Rewards of our Algorithms and RMSA2C on 11 MuJoCo Tasks Trained for 10M Timesteps}}} \end{floatrow} \end{table*}
\renewcommand{\arraystretch}{1.5} \begin{table}[tp]
\centering
\fontsize{7.4}{9}\selectfont
\begin{threeparttable}
\caption{\textsc{Running Speed Comparison on Six MuJoCo Tasks}\\{( Timesteps\! /\! Second )}}
\label{tab:speed_comparison_mujoco}
\begin{tabular*}{\textwidth}{lccccc}
\toprule[1pt]
Game&RMSA2C&PPO&ACKTR&RLSSA2C&RLSNA2C\cr
\midrule[1pt]
Ant &5911 &372 &4584 &5646 &5380 \cr
Hopper &7735 &375 &5574 &7160 &6909 \cr
InvertedPendulum&10076 &384 &6633 &8997 &8399 \cr
Pusher &7873 &368 &5558 &7185 &7073 \cr
Swimmer &9033 &384 &6068 &7996 &7927 \cr
Walker2d &7716 &373 &5438 &7052 &6855 \cr
\midrule[1pt]
Mean &8057 &376 &5643 &7339 &7091 \cr
\bottomrule[1pt]
\end{tabular*}
\end{threeparttable} \end{table}
All tested algorithms also use the two disjoint FNNs defined in \cite{15}: one is the critic network, the other is the actor network. Both networks have two identical fully connected (fc) hidden layers with 64 Tanh neurons each. The critic output layer is a linear fc layer for predicting the value function. The actor network uses a linear fc layer with bias to represent the mean and the standard deviation of the Gaussian policy \cite{24}. All settings for the five tested algorithms are the same as those in Section \uppercase\expandafter{\romannumeral5}.\textit{A}, except for $\alpha=0.001$ in RLSNA2C.
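As a reference point, the two-network architecture described above can be sketched in a few lines of numpy (the hidden sizes and Tanh activations follow the text; the weight initialisation and the observation/action dimensions are illustrative assumptions, not the settings of \cite{15}):

```python
import numpy as np

rng = np.random.default_rng(0)

def fc(n_in, n_out):
    """A fully connected (fc) layer; the initialisation here is illustrative."""
    return rng.normal(0.0, 0.1, (n_in, n_out)), np.zeros(n_out)

obs_dim, act_dim = 17, 6   # assumed sizes, for illustration only

# Two disjoint networks, each with two fc hidden layers of 64 Tanh neurons.
critic = [fc(obs_dim, 64), fc(64, 64), fc(64, 1)]            # linear value head
actor  = [fc(obs_dim, 64), fc(64, 64), fc(64, 2 * act_dim)]  # mean and log-std head

def forward(layers, x):
    for W, b in layers[:-1]:
        x = np.tanh(x @ W + b)
    W, b = layers[-1]
    return x @ W + b   # linear output layer with bias

s = rng.normal(size=obs_dim)
value = forward(critic, s)                       # predicted state value, shape (1,)
mean, log_std = np.split(forward(actor, s), 2)   # Gaussian policy parameters
```

An actual implementation would, of course, add the RLS and gradient updates of the paper on top of these forward passes.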
The convergence comparison of our algorithms against RMSA2C on 11 MuJoCo tasks trained for 10 million timesteps is shown in Fig. 2. It is clear that RLSSA2C and RLSNA2C outperform RMSA2C on almost all tasks. Among these three algorithms, RLSSA2C has the best convergence performance and stability. Compared with RMSA2C, RLSSA2C wins on 10 tasks. On Ant, HalfCheetah, Hopper, Pusher, Reacher, Striker, Swimmer, Thrower and Walker2d tasks, RLSSA2C is significantly superior to RMSA2C in terms of convergence speed and convergence quality. Compared with RMSA2C, RLSNA2C also wins on 9 tasks. On Pusher, Reacher, and Swimmer tasks, RLSNA2C performs very well.
In Table \uppercase\expandafter{\romannumeral3}, we present the top 10 average episode rewards of our algorithms and RMSA2C on 11 MuJoCo tasks trained for 10 million timesteps. From Table \uppercase\expandafter{\romannumeral3}, RLSSA2C and RLSNA2C obtain the highest rewards on 7 and 2 tasks respectively, but RMSA2C only wins on 1 task.
The running speed comparison of our algorithms against RMSA2C, PPO and ACKTR on six MuJoCo tasks is shown in Table \uppercase\expandafter{\romannumeral4}. Among these five algorithms, RMSA2C has the highest computational efficiency, RLSSA2C and RLSNA2C are ranked 2nd and 3rd respectively, and PPO is ranked last. In detail, RLSSA2C is only 8.9\% slower than RMSA2C, but is 1851.9\% and 30.1\% faster than PPO and ACKTR, respectively. RLSNA2C is only 12.0\% slower than RMSA2C, but is 1785.9\% and 25.7\% faster than PPO and ACKTR, respectively.
Obviously, RLSSA2C and RLSNA2C also perform very well on MuJoCo tasks. In summary, both of our algorithms have better sample efficiency than RMSA2C and higher computational efficiency than PPO and ACKTR.
\section{Conclusion} In this paper, we proposed two RLS-based A2C algorithms, called RLSSA2C and RLSNA2C. To the best of our knowledge, they are the first RLS-based DAC algorithms. Both of our algorithms use the RLS method to train the critic network and the hidden layers of the actor network; their main difference lies in policy learning. RLSSA2C uses SPG and an ordinary first-order gradient-descent algorithm to learn the policy parameter, whereas RLSNA2C uses NPG, the Kronecker-factored approximation and the RLS method to learn the compatible parameter and the policy parameter. We also analyzed the complexity and convergence of both algorithms, and presented three tricks for further accelerating their convergence. We tested both algorithms on 40 Atari discrete control games and 11 MuJoCo continuous control tasks. Experimental results show that both algorithms have better sample efficiency than RMSA2C on most games and tasks, and higher computational efficiency than PPO and ACKTR. In future work, we will try to establish rigorous convergence guarantees for both algorithms and to improve the stability of RLSNA2C.
\end{document} |
\begin{document}
{\scriptsize June 25, 2018} \vskip1ex
\title[The Jacobian of $t$-shifted invariants]{A combinatorial identity for the Jacobian of $t$-shifted invariants} \author[O.\,Yakimova]{Oksana Yakimova} \address[O.\,Yakimova]{Universit\"at zu K\"oln, Mathematisches Institut, Weyertal 86-90, 50931 K\"oln, Deutschland} \email{[email protected]} \thanks{This research is supported by the Heisenberg-Stipendium of the DFG} \begin{abstract} Let $\mathfrak g$ be a simple Lie algebra. There are classical formulas for the Jacobians of the generating invariants of the Weyl group of $\mathfrak g$ and of the images under the Harish-Chandra projection of the generators of $\mathcal{ZU}(\mathfrak g)$. We present a modification of these formulas related to Takiff Lie algebras. \end{abstract} \maketitle
\section*{Introduction}
Let $\mathfrak g$ be a simple complex Lie algebra, $\mathfrak h\subset \mathfrak g$ be a Cartan subalgebra. Fix a triangular decomposition $\mathfrak g=\mathfrak n^-\oplus\mathfrak h\oplus\mathfrak n^+$. Let $\Delta\subset\mathfrak h^*$ be the corresponding root system with $\Delta^+\subset \Delta$ being the subset of positive roots. Define $\rho=\frac{1}{2}\sum\limits_{\alpha\in\Delta^+} \alpha$. Let $W=W(\mathfrak g,\mathfrak h)$ be the Weyl group of $\mathfrak g$. Set $n=\mathrm{rk\,}\mathfrak g$ and let $d_i{-}1$ with $1\leqslant i\leqslant n$ be the exponents of $\mathfrak g$.
For $\alpha\in\Delta^+$, let $\{f_{\alpha},h_\alpha,e_\alpha\}\subset\mathfrak g$ be an $\mathfrak{sl}_2$-triple
with $e_\alpha\in\mathfrak g_\alpha$.
Finally choose a basis $\{h_1,\ldots,h_n\}$ of $\mathfrak h$.
For polynomials $P_1,\ldots, P_n\in\cS(\mathfrak h)\cong \mK[\mathfrak h^*]$, the Jacobian $J(\{P_i\})$ is defined by the property $$
d P_1\wedge d P_2\wedge\ldots\wedge d P_n=J(\{P_i\}) d h_1\wedge\ldots\wedge d h_{n}. $$ If $\hat P_1,\ldots, \hat P_n\in\cS(\mathfrak h)^W$ are generating invariants (with $\deg \hat P_i=d_i$), then $$J(\{\hat P_i\})=C\prod\limits_{\alpha\in\Delta^+} h_\alpha \ \text{ with } \ C\in \mK, C\ne 0$$ by a classical argument, which is presented, for example, in
\cite[Sec.~3.13]{Ref-b}.
Let $\mathcal{ZU}(\mathfrak g)$ denote the centre of the enveloping algebra $\mathcal{U}(\mathfrak g)$. Then $\mathcal{ZU}(\mathfrak g)$ has a set $\{{\mathcal P}_i \mid 1\leqslant i\leqslant n\}$ of algebraically independent generators such that
${\mathcal P}_i\in {\mathcal U}_{d_i}(\mathfrak g)$. Let $P_i\in\cS(\mathfrak h)$ be the image of ${\mathcal P}_i$ under the Harish-Chandra projection. Then $\hat P_i\in\cS(\mathfrak h)^W$ for $\hat P_i(x)=P_i(x-\rho)$, see e.g. \cite[Sec.~7.4]{Dix}, and $$J(\{P_i\})=C\prod\limits_{\alpha\in\Delta^+} (h_\alpha+\rho(h_\alpha)). $$
For any complex Lie algebra $\mathfrak l$, let $\varpi\!: \cS(\mathfrak l)\to {\mathcal U}(\mathfrak l)$ be the canonical symmetrisation map. Let $\cS(\mathfrak l)^{\mathfrak l}$ denote the ring of symmetric $\mathfrak l$-invariants. Since $\varpi$ is an isomorphism of $\mathfrak l$-modules, it provides an isomorphism of vector spaces $\cS(\mathfrak l)^{\mathfrak l}\cong \mathcal{ZU}(\mathfrak l)$.
Suppose next that ${\mathcal P}_i=\varpi(H_i)$ is the symmetrisation of $H_i$ and that $H_i\in\cS(\mathfrak g)^{\mathfrak g}$ is a homogeneous generator of degree $d_i$. Let $T\!: \mathfrak g\to \mathfrak g[t]$ be the $\mathbb C$-linear map sending each $x\in\mathfrak g$ to $xt$. Then $T$ extends uniquely to a homomorphism of commutative algebras $$T\!:~\cS(\mathfrak g)\to \cS(\mathfrak g[t]).$$ Set ${H}_i^{[1]}= T(H_i)$
and ${\mathcal P}_i^{[1]}=\varpi({H}_i^{[1]})$. Here ${\mathcal P}_i^{[1]}\in{\mathcal U}(t\mathfrak g[t])$.
The triangular decomposition of $\mathfrak g$ extends to $\mathfrak g[t]$ as $\mathfrak g[t]=\mathfrak n^-[t]\oplus\mathfrak h[t]\oplus\mathfrak n^+[t]$. Let $P_i^{[1]}\in\cS(t\mathfrak h[t])$ be the image of ${\mathcal P}_i^{[1]}$ under the Harish-Chandra projection. In order to define the Jacobian $J(\{P_i^{[1]}\})$ as an element of $\cS(\mathfrak h)$, set at first
$\partial_{x}(x t^k)=k t^{k-1} $ for every $x\in\mathfrak h$ and every $k\geqslant 1$, $\partial_{h_i}(h_j t^k)=0$ for $i\ne j$. Then the desired formula reads $$
J(\{P_i^{[1]}\}) = \det(\partial_{h_j} P_i^{[1]})|_{t=1}. $$
\begin{thm}\label{th-eq} We have the following
identity
$$ J(\{P_i^{[1]}\})= C\prod\limits_{\alpha\in\Delta^+} (h_\alpha+\rho(h_\alpha)+1). $$
\end{thm}
Our proof of Theorem~\ref{th-eq} interprets the zero set of $J(\{P_i^{[1]}\})$
in terms of the {\it Takiff Lie algebra} $\mathfrak q=\mathfrak g[u]/(u^2)$ and then uses the {\it extremal projector} associated with $\mathfrak g$, see Section~\ref{sec-proj} for the definition.
In 1971, Takiff proved that $\cS(\mathfrak q)^{\mathfrak q}$ is a polynomial ring whose Krull dimension equals $2\mathrm{rk\,}\mathfrak g$~\cite{takiff}. This started a serious investigation of these Lie algebras and their generalisations, see e.g. \cite{k-T} and references therein. Verma modules and an analogue of the Harish-Chandra homomorphism for $\mathfrak q$ were defined and studied in \cite{Geof,Wil}. We remark that the $\mathfrak q$-modules appearing in this paper are essentially different.
\section{Several combinatorial formulas}
Keep the notation of the introduction. In particular, $H_i\in\cS(\mathfrak g)^{\mathfrak g}$ stands for a homogeneous generator of degree $d_i$, $P_i$ is the image of ${\mathcal P}_i=\varpi(H_i)$ under the Harish-Chandra projection, $\hat P_i\in\cS(\mathfrak h)^W$ is the $({-}\rho)$-shift of $P_i$, i.e., $\hat P_i(x)=P_i(x-\rho)$, and $P_i^{[1]}$ is the image of $\varpi(T(H_i))$ under the Harish-Chandra projection related to $\mathfrak g[t]$. Let also $P_i^{\circ}$ be the highest degree
component of $P_i$. Then $P_i^{\circ}={H_i}|_{\mathfrak h}$. By the Chevalley restriction theorem, the polynomials $P_i^{\circ}$ with $1\leqslant i\leqslant n$ generate $\cS(\mathfrak h)^W$. The constant $C$ is fixed by the equality $J(\{P_i^{\circ}\})=C\prod\limits_{\alpha\in\Delta^+} h_\alpha$. It is clear that $J(\{P_i^{\circ}\})=J(\{\hat P_i\})$.
\begin{lm}\label{highest} The highest degree component of $J(\{P_i^{[1]}\})$ is equal to $C \prod\limits_{\alpha\in\Delta^+} h_\alpha$. \end{lm} \begin{proof} The highest degree component of $P_i^{[1]}$ is $T(P_i^{\circ})\in\cS(\mathfrak h t)$.
Each monomial of $P_i^{[1]}$ is of the form $(x_1t)\ldots (x_{d_i} t)$ with $x_j\in \mathfrak h$ for each $j$.
By construction, $\partial_{h_j} T(P_i^{\circ}) |_{t=1} = \partial_{h_j} P_i^{\circ}$. The result follows. \end{proof} In order to prove the next lemma, we need a well-known equality, namely
$\prod\limits_{i=1}^n d_i=|W|$.
\begin{lm}\label{free} We have $J(\{P_i^{[1]}\})(0)= C \prod\limits_{\alpha\in\Delta^+} (\rho(h_\alpha)+1)$. \end{lm} \begin{proof} Clearly
$P_i^{[1]}|_{t=1} = P_i$.
Since $H_i$ is a homogeneous polynomial of degree $d_i$, the linear in $\mathfrak h$ part of $P_i^{[1]}$ has degree $d_i$ in $t$. It follows that $$
J(\{P_i^{[1]}\})(0)=(d_1\ldots d_n) J(\{P_i\})(0)=|W| \,C \prod\limits_{\alpha\in\Delta^+} \rho(h_\alpha). $$ According to a formula of Kostant: \begin{equation}\label{KF}
\prod\limits_{\alpha\in\Delta^+} \frac{\rho(h_\alpha)+1}{\rho(h_\alpha)} = |W^\vee|=|W|. \end{equation} This completes the proof. \end{proof}
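For $\mathfrak g=\mathfrak{sl}_n$ the positive roots are $\varepsilon_i-\varepsilon_j$ with $i<j$, the value $\rho(h_\alpha)$ is the height $j-i$ of the root, and $|W|=n!$. The following snippet verifies \eqref{KF} in type ${\sf A}$ for small $n$ (an illustrative check, not part of the proof):

```python
from fractions import Fraction
from math import factorial

def kostant_product(n):
    """Left-hand side of (KF) for g = sl_n: positive roots are e_i - e_j
    (i < j), and rho(h_alpha) equals the height j - i of the root."""
    prod = Fraction(1)
    for i in range(1, n):
        for j in range(i + 1, n + 1):
            prod *= Fraction((j - i) + 1, j - i)
    return prod

for n in range(2, 7):
    assert kostant_product(n) == factorial(n)   # |W| = |S_n| = n!
```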
\begin{rmk} The Kostant formula \eqref{KF} is a particular case of another combinatorial statement.
Let $W(t)=\sum\limits_{w\in W} t^{\ell(w)}$ be the Poincar{\'e} polynomial of $W$. Then $$ W^{\vee}(t)=\prod_{\alpha\in\Delta^+} \frac{t^{(\rho,\alpha^{\vee})+1}-1}{t^{(\rho,\alpha^\vee)}-1}, $$ see Equation~(34) in \cite[Sec.~3.20]{Ref-b}. Since $\rho(h_\alpha)=(\rho,\alpha^\vee)$, evaluating at $t=1$ one gets exactly Eq.~\eqref{KF}. \end{rmk}
\begin{ex} Take $\mathfrak g=\mathfrak{sl}_2$ with the usual basis $\{e,h,f\}$. Then $H=H_1=4ef+h^2$, $H^{[1]}=4etft+(ht)^2$, and $$ {\mathcal P}^{[1]}=\varpi(H^{[1]})=2etft+2ftet+(ht)^2=(ht)^2+2ht^2+4ftet. $$ Therefore $P^{[1]}=P_1^{[1]}=(ht)^2+2ht^2$. Computing the partial derivative and evaluating at $t=1$, we obtain $J(\{P^{[1]}\})=2h+4=2(h+2)$. Observe that $\rho(h)=1$. \end{ex}
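The symmetrisation step of the example, $2etft+2ftet=4ftet+2ht^2$, uses only the relation $[e,f]=h$, so it can be confirmed in the defining $2{\times}2$ representation of $\mathfrak{sl}_2$ (a numerical illustration; the powers of $t$ merely track degrees and play no role in the identity):

```python
import numpy as np

# Defining representation of sl_2.
e = np.array([[0., 1.], [0., 0.]])
f = np.array([[0., 0.], [1., 0.]])
h = np.array([[1., 0.], [0., -1.]])

assert np.allclose(e @ f - f @ e, h)  # [e, f] = h

# 2ef + 2fe = 4fe + 2h: move each e past f using ef = fe + h.
assert np.allclose(2 * (e @ f) + 2 * (f @ e), 4 * (f @ e) + 2 * h)
```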
\section{Takiff Lie algebras and branching}
Theorem~\ref{th-eq} can be interpreted as a statement in representation theory of Takiff Lie algebras $$\mathfrak q=\mathfrak g{\ltimes}\mathfrak g^{\rm ab}\cong\mathfrak g[u]/(u^2).$$ The first factor, the non-Abelian copy of $\mathfrak g$, acts on $\mathfrak g^{\rm ab}=\mathfrak g u$ as a subalgebra of $\mathfrak{gl}(\mathfrak g)$. Therefore there is the canonical embedding
$\mathfrak{q}\subset \mathfrak{gl}(\mathfrak g){\ltimes}\mathfrak g^{\rm ab}$. Set $\ell=\dim\mathfrak g+1$. In its turn, $\mathfrak{gl}(\mathfrak g){\ltimes}\mathfrak g^{\rm ab}$ can be realised as a subalgebra of $\mathfrak{gl}(\mathfrak g{\oplus}\mK)\cong \mathfrak{gl}_\ell(\mK)$. The Lie algebra $\mathfrak{gl}_\ell(\mK)$ is equipped with the standard triangular decomposition. Let $\mathfrak b_\ell\subset \mathfrak{gl}_\ell(\mK)$ be the corresponding positive Borel. Recall that we have chosen a triangular decomposition $\mathfrak g=\mathfrak n^-\oplus\mathfrak h\oplus \mathfrak n^+$. Set $\mathfrak b=\mathfrak h{\oplus}\mathfrak n^+$, $\mathfrak b^-=\mathfrak h{\oplus}\mathfrak n^-$. We fix an embedding $\mathfrak{gl}(\mathfrak g){\ltimes}\mathfrak g^{\rm ab}\subset \mathfrak{gl}(\mathfrak g{\oplus}\mK)$ such that $\mathfrak b^-{\ltimes}\mathfrak g^{\rm ab}$ lies in the opposite Borel $\mathfrak b_\ell^-$ and $\mathfrak b\subset\mathfrak b_\ell$.
Let $\psi\!: \cS(\mathfrak g)\to \cS(\mathfrak q)$ be a $\mK$-linear map sending a monomial $\xi_1\ldots \xi_d$ with $\xi_i\in\mathfrak g$ to $$ \sum_{i=1}^d \xi_1\ldots \xi_{i{-}1} (\xi_i u) \xi_{i{+}1}\ldots \xi_d\,. $$ Set ${\mathcal R}_i=\varpi(\psi(H_i))$. The elements ${\mathcal R}_1,\ldots,{\mathcal R}_n\in {\mathcal U}(\mathfrak q)$ are not necessarily central. If we assume that $d_1=2$, then ${\mathcal R}_1\in \mathcal{ZU}(\mathfrak q)$. However, since both maps, $\psi$ and $\varpi$, are homomorphisms of $\mathfrak g$-modules, each ${\mathcal R}_i$ commutes with $\mathfrak g$. Note that the elements ${\mathcal R}_i$ have degree $1$ in $\mathfrak g u$. They are crucial for further considerations, and our next goal is to relate them to ${\mathcal P}_i^{[1]}\in{\mathcal U}(t\mathfrak g[t])$.
The map $\psi$ is also well-defined for the tensor algebra of $\mathfrak g$, but not for ${\mathcal U}(\mathfrak g)$, because of the following obstacle $$ \psi(\xi_1\xi_2{-}\xi_2\xi_1)=(\xi_1 u)\xi_2+\xi_1(\xi_2 u)-(\xi_2 u)\xi_1-\xi_2 (\xi_1 u)= [\xi_1 u,\xi_2]+[\xi_1,\xi_2u]=2[\xi_1,\xi_2]u \ne [\xi_1,\xi_2]u. $$ The remedy is to pass to the current algebras $\mathfrak g[t]$ and $\mathfrak q[t]$. Let ${\mathcal T}\!: {\mathcal U}(t\mathfrak g[t])\to {\mathcal U}(\mathfrak q[t])$ be a $\mK$-linear map such that $$ \begin{array}{l}
{\mathcal T}(\xi t^k)=k (\xi u)t^{k{-}1} \ \text{ for each } \ \xi\in\mathfrak g, \\
{\mathcal T}(ab)= {\mathcal T} (a)b+a {\mathcal T}(b) \ \text{ for all } \
a,b\in{\mathcal U}(t\mathfrak g[t]), \ {\mathcal T} \text{ is a derivation}. \end{array} $$ Of course, one has to check that ${\mathcal T}$ exists.
\begin{lm}\label{T} The map ${\mathcal T}$ is well-defined. \end{lm} \begin{proof} Take $\xi,\eta\in\mathfrak g$. Then $$
\begin{array}{l} {\mathcal T}(\xi t^k \eta t^m - \eta t^m\xi t^k)= k (\xi u) t^{k{-}1} \eta t^m + m \xi t^k (\eta u) t^{m{-}1} - m (\eta u) t^{m{-}1} \xi t^{k} - k \eta t^m (\xi u) t^{k{-}1} = \\ \qquad \enskip = k[\xi u,\eta] t^{k{-}1{+}m} + m[\xi, \eta u] t^{k{+}m{-}1}
= (k{+}m) ([\xi,\eta]u) t^{k{+}m{-}1}={\mathcal T}([\xi,\eta]t^{k{+}m}). \end{array} $$
\end{proof}
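The obstacle $\psi(\xi_1\xi_2-\xi_2\xi_1)=2[\xi_1,\xi_2]u$ discussed before the lemma can also be checked numerically: realising $\mathfrak q=\mathfrak g[u]/(u^2)$ by the block matrices $x+yu\mapsto \left(\begin{smallmatrix} x & y\\ 0 & x\end{smallmatrix}\right)$, every product becomes an honest matrix product (an illustrative check with $\mathfrak g=\mathfrak{sl}_2$, not part of the argument):

```python
import numpy as np

def emb(x, y):
    """Realise x + y*u in g[u]/(u^2) as the block matrix [[x, y], [0, x]]."""
    z = np.zeros_like(x)
    return np.block([[x, y], [z, x]])

# g = sl_2 for concreteness.
e = np.array([[0., 1.], [0., 0.]])
f = np.array([[0., 0.], [1., 0.]])
zero = np.zeros((2, 2))

xi1, xi2 = emb(e, zero), emb(f, zero)      # xi_1, xi_2 in g
xi1u, xi2u = emb(zero, e), emb(zero, f)    # xi_1 u, xi_2 u in g^ab

# psi applied term by term to xi1*xi2 - xi2*xi1:
psi_comm = xi1u @ xi2 + xi1 @ xi2u - xi2u @ xi1 - xi2 @ xi1u
bracket_u = emb(zero, e @ f - f @ e)       # [xi1, xi2] u

assert np.allclose(psi_comm, 2 * bracket_u)   # the factor 2 is the obstacle
```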
Now, having the map ${\mathcal T}$, we can state that
${\mathcal R}_i={\mathcal T}({\mathcal P}_i^{[1]})|_{t=1}$.
A word of caution: in ${\mathcal U}(\mathfrak b^-)(\mathfrak n^+ u)$ and similar expressions, $(\mathfrak n^+ u)$ stands for the subspace $\mathfrak n^+ u\subset\mathfrak g^{\rm ab}$ and {\bf not} for an ideal generated by $\mathfrak n^+ u$. The same applies to $(\mathfrak b u)$, $(\mathfrak g u)$, etc.
\begin{lm} \label{lc}
Let $M_\lambda={\mathcal U}(\mathfrak b_{\ell}^-)v_\lambda$ with $\lambda\in\mK^{\ell}$ be a Verma module of $\mathfrak{gl}(\mathfrak g{\oplus}\mK)$. Set $\mu=\lambda|_{\mathfrak h}$. There exists a non-trivial linear combination ${\mathcal R}=\sum c_i {\mathcal R}_i$ such that ${\mathcal R}v_\lambda \in \mathfrak n^-{\mathcal U}(\mathfrak b^-)(\mathfrak n^+ u) v_\lambda$ if and only if $J(\{P_i^{[1]}\})(\mu)=0$. \end{lm} \begin{proof} We have $$ {\mathcal P}_i^{[1]} \in P_i^{[1]} + \mathfrak n^-[t] {\mathcal U}(\mathfrak g[t])\mathfrak n^+[t]. $$
Accordingly ${\mathcal R}_i= {\mathcal T}(P_i^{[1]})|_{t=1} + {\mathcal X}$, where ${\mathcal X}$ is the image of the second summand of ${\mathcal P}_i^{[1]} $. Let $X=x_1\ldots x_r$ be a monomial appearing in ${\mathcal X}$.
If $x_r\in \mathfrak n^+$, then $X v_\lambda=0$. Assume that $X v_\lambda\ne 0$. Then necessarily $x_r\in \mathfrak n^+ u$ and $x_1,\ldots,x_{r-1}\in\mathfrak g$. If $x_i\in\mathfrak n^+$ for some $i\leqslant (r{-}1)$, then we replace $X$ by $x_1\ldots x_{i{-}1}[x_i,x_{i+1}\ldots x_r]$. Note that here $[x_i,x_r]\in \mathfrak n^+ u$. Applying this procedure as long as possible, one replaces $X$ by an element of ${\mathcal U}(\mathfrak b^-)(\mathfrak n^+ u)$ without altering $Xv_\lambda$. Since $X$ is an invariant of $\mathfrak h$, the new element lies in $\mathfrak n^- {\mathcal U}(\mathfrak b^-)(\mathfrak n^+ u)$. Summing up, \begin{equation}\label{T-t}
{\mathcal R}_i v_\lambda \in {\mathcal T}(P_i^{[1]})|_{t=1}v_\lambda + \mathfrak n^- {\mathcal U}(\mathfrak b^-)(\mathfrak n^+ u) v_\lambda. \end{equation} Further $$
{\mathcal T}(P_i^{[1]})|_{t=1}=\sum\limits_{j=1}^n (\partial_{h_j} P_i^{[1]})|_{t=1} h_j u, $$ where the partial derivatives are understood in the sense of the introduction. Exactly these derivatives have been used in order to define
$J(\{P_i^{[1]}\})$.
Hence $J(\{P_i^{[1]}\})(\mu)=0$ if and only if there is a non-zero vector
$\bar c=(c_1,\ldots,c_n)$ such that $\sum c_i {\mathcal T}(P_i^{[1]})|_{t=1,\mu}=0$. This shows that if
$J(\{P_i^{[1]}\})(\mu)=0$, then ${\mathcal R}v_\lambda \in \mathfrak n^-{\mathcal U}(\mathfrak b^-)(\mathfrak n^+ u) v_\lambda$.
Suppose now that ${\mathcal R}v_\lambda \in \mathfrak n^-{\mathcal U}(\mathfrak b^-)(\mathfrak n^+ u) v_\lambda\subset {\mathcal U}(\mathfrak n^-)(\mathfrak n^+ u)v_\lambda$. Then $$
\sum c_i {\mathcal T}(P_i^{[1]})|_{t=1,\mu} v_\lambda \in {\mathcal U}(\mathfrak n^-)(\mathfrak n^+ u) v_\lambda. $$ Since we are working with a Verma module of $\mathfrak{gl}_{\ell}(\mK)$
and since ${\mathcal T}(P_i^{[1]})|_{t=1,\mu}\in \mathfrak h u \subset \mathfrak n_{\ell}^-$, $\mathfrak n^+ u\subset \mathfrak n_{\ell}^-$, we have
$$\sum c_i {\mathcal T}(P_i^{[1]})|_{t=1,\mu} \in {\mathcal U}(\mathfrak n^-)(\mathfrak n^+ u).$$ At the same time $\mathfrak h u\cap \mathfrak n^+ u=0$.
Therefore $\sum\limits_{i=1}^n c_i(\partial_{h_j} P_i^{[1]})|_{t=1,\mu}=0$ for each $j$ and thus $J(\{P_i^{[1]}\})(\mu)=0$. \end{proof}
For $\gamma\in\mathfrak h^*$, let $M_{\lambda,\gamma}$ be the corresponding weight subspace of ${\mathcal U}(\mathfrak q)v_\lambda\subset M_\lambda$. Since $\mathfrak h u \subset \mathfrak n_\ell^-$, either $M_{\lambda,\gamma}=0$ or $\dim M_{\lambda,\gamma}=\infty$. We have also $(\mathfrak h u)v_\lambda \ne 0$. Because of these facts, the $\mathfrak q$-modules $M_\lambda$ and ${\mathcal U}(\mathfrak q)v_\lambda$ do not fit in the framework of the highest weight theory developed in \cite{Geof,Wil}. Nevertheless, they may have some nice features.
Lemma~\ref{lc} relates $J(\{P_i^{[1]}\})$ to a property of
the branching $\mathfrak q\downarrow \mathfrak g$ in a particular case of the $\mathfrak q$-module ${\mathcal U}(\mathfrak q)v_\lambda$.
In order to get a better understanding of this branching problem, we employ
a certain projector introduced by Asherova, Smirnov, and Tolstoy in \cite{Proj}.
\subsection{The extremal projector} \label{sec-proj}
Recall that $\{f_{\alpha},h_\alpha,e_\alpha\}\subset \mathfrak g$ is the $\mathfrak{sl}_2$-triple corresponding to $\alpha\in\Delta^+$.
Set $$ p_\alpha = 1 + \sum\limits_{k=1}^{\infty} f_{\alpha}^k e_\alpha^k \frac{(-1)^k}{k! (h_\alpha+\rho(h_\alpha)+1)\ldots (h_\alpha+\rho(h_\alpha)+k)}. $$
Set $N=|\Delta^+|$. Choose a numbering of positive roots, $\alpha_1,\ldots,\alpha_N$. Each $p_\alpha$, as well as any product of finitely many of them, is a formal series with coefficients in $\mathbb C(\mathfrak h^*)$ in monomials $$ f_{\alpha_1}^{r_1}\ldots f_{\alpha_N}^{r_N} e_{\alpha_N}^{k_N} \ldots e_{\alpha_1}^{k_1} \ \text{ such that } \ (k_1-r_1)\alpha_1+\ldots + (k_N-r_N)\alpha_N=0. $$ A total order on $\Delta^+$ is said to be {\it normal} if either $\alpha < \alpha+\beta < \beta $ or $\beta < \alpha+\beta < \alpha$ for each pair of positive roots $\alpha, \beta$ such that $\alpha+\beta\in\Delta$. There is a bijection between the normal orders and the reduced decompositions of the longest element of $W$.
Choose a normal order $\alpha_1<\ldots <\alpha_N$, and define $$ p=p_{\alpha_1}\ldots p_{\alpha_N} $$ accordingly. The element $p$ is called the {\it extremal projector}. It is independent of the choice of a normal order. For proofs and more details on this operator see \cite[\S 9.1]{book}. Most importantly, it has the property that \begin{equation}\label{eq-p} e_\alpha p=p f_{\alpha}=0 \end{equation}
for each $\alpha$.
The nilpotent radical $\mathfrak n_\ell\subset \mathfrak b_\ell$ acts on $M_\lambda$ locally nilpotently. Recall that $\mathfrak n^+\subset \mathfrak n_\ell$. Let $v\in M_\lambda$ be an eigenvector of $\mathfrak h$ of weight $\gamma\in\mathfrak h^*$. First of all, $pv$ is a finite sum of vectors of $M_\lambda$ with coefficients in $\mathbb C(\mathfrak h^*)$. Second, if all the appearing denominators are non-zero at $\gamma$, then $pv$ is a well-defined vector of $M_\lambda$ of the same weight $\gamma$.
\section{Proof of Theorem~\ref{th-eq}}
Let $\lambda$, $\mu$, and $M_\lambda$ be as in Lemma~\ref{lc}. Keep in mind that $\lambda$ and $\mu$ are arbitrary elements of $\mK^\ell$ and $\mK^n$. Since each ${\mathcal R}_i$ commutes with $\mathfrak g$, each ${\mathcal R}_i v_\lambda$ is a highest weight vector of $\mathfrak g$.
We use the extremal projector $p$ associated with $\mathfrak g$. If $p$ can be applied to a highest weight vector $v$, then $pv=v$. Suppose that $p$ is defined at $\mu$. Then, in view of \eqref{T-t} and \eqref{eq-p}, $$
{\mathcal R}_i v_\lambda= p{\mathcal R}_i v_\lambda = p {\mathcal T}(P_i^{[1]} )|_{t=1} v_\lambda . $$ Assume that $J(\{P_i^{[1]}\})(\mu) = 0$. Then there is a non-trivial linear combination ${\mathcal R}=\sum c_i {\mathcal R}_i$ such that $$ {\mathcal R} v_\lambda \in \mathfrak n^-{\mathcal U}(\mathfrak b^-)(\mathfrak n^+ u) v_\lambda, $$ see Lemma~\ref{lc}. Here $p {\mathcal R} v_\lambda =0$ and hence ${\mathcal R} v_\lambda =0$ as well.
Since we are considering a Verma module of $\mathfrak{gl}_{\ell}(\mK)$, this implies that $$ {\mathcal R} \in {\mathcal U}(\mathfrak{gl}_{\ell}(\mK))\mathfrak b_\ell \cap {\mathcal U}(\mathfrak q) = {\mathcal U}(\mathfrak q)\mathfrak b. $$
Hence the symbol ${\rm gr}({\mathcal R})$ of ${\mathcal R}$ lies in the ideal of $\cS(\mathfrak q)$ generated by $\mathfrak b$.
The decomposition $\mathfrak g=\mathfrak n^-{\oplus}\mathfrak b$ defines a bi-grading on $\cS(\mathfrak g)$. Let $H_i^\bullet$ be the bi-homogeneous component of $H_i$ having the highest degree w.r.t. $\mathfrak n^-$. According to \cite[Sec.~3]{py-feig}, $H_i^\bullet\in \mathfrak b\cS^{d_i{-}1}(\mathfrak n^-)$ and the polynomials $H_1^\bullet,\ldots,H_n^\bullet$ are algebraically independent. We have $$ \psi(H_i^\bullet) \in (\mathfrak b u) \cS^{d_i{-}1}(\mathfrak n^-) \oplus \mathfrak b(\mathfrak n^- u)\cS^{d_i{-}2}(\mathfrak n^-). $$ Write this as $\psi(H_i^\bullet)\in H_{i,1} + \mathfrak b(\mathfrak n^- u)\cS^{d_i{-}2}(\mathfrak n^-)$. Then the polynomials $H_{i,1}$ with $1\leqslant i\leqslant n$ are still algebraically independent. As can be easily seen, $\psi(H_i)\in H_{i,1} + \mathfrak b \cS(\mathfrak q)$.
Set $d=\max\limits_{i, c_i\ne 0} d_i$. Then $$ {\rm gr}({\mathcal R})=\sum\limits_{i, \, d_i=d} c_i \psi(H_i)$$ and it lies in the ideal $(\mathfrak b)\lhd \cS(\mathfrak q)$ if and only if $\sum\limits_{i, \, d_i=d} c_i H_{i,1}=0$. Since at least one $c_i$ in this linear combination is non-zero, we get a contradiction. We have thus settled the following: if $p$ is defined at $\mu$, then
$J(\{P_i^{[1]}\})(\mu)\ne 0$.
Now we know that
the zero set of $J(\{P_i^{[1]}\})$ lies in the union of hyperplanes $h_\alpha+\rho(h_\alpha)=-k$ with $k\geqslant 1$.
At the same time this zero set is an affine subvariety of $\mathbb C^n$ of codimension one. Therefore it is the union of $N$ hyperplanes and $J(\{P_i^{[1]}\})$ is the product of $N$ linear factors of the form $(h_\alpha+\rho(h_\alpha)+k_\alpha)$. A priori, a root $\alpha$ may appear in several factors with different constants $k_\alpha$.
By Lemma~\ref{highest}, the highest degree component of $J(\{P_i^{[1]}\})$ is equal to $C \prod\limits_{\alpha\in\Delta^+} h_\alpha$. Therefore each $\alpha\in\Delta^+$ must appear in exactly one linear factor of $J(\{P_i^{[1]}\})$. Observe that $\rho(h_\alpha)\geqslant 1$ and that $\rho(h_\alpha)+k_\alpha \geqslant \rho(h_\alpha)+1$. If for some $\alpha$, we have $k_\alpha > 1$,
then $$|J(\{P_i^{[1]}\})(0)|> |C| \prod\limits_{\alpha\in\Delta^+} (\rho(h_\alpha)+1).$$ But this cannot be the case in view of Lemma~\ref{free}.
\qed
\section{Conclusion}
The elements ${\mathcal R}_i$ are rather natural $\mathfrak g$-invariants in ${\mathcal U}(\mathfrak q)$ of degree one in $\mathfrak g u$. Note that because $\mathfrak g u$ is an Abelian ideal of $\mathfrak q$, there is no ambiguity in defining the degree in $\mathfrak g u$. The involvement of these elements in the branching rules $\mathfrak q\downarrow \mathfrak g$ remains unclear. However, combining Lemma~\ref{lc} with Theorem~\ref{th-eq}, we obtain the following statement.
\begin{cl}\label{cl-c} In the notation of Lemma~\ref{lc}, there is a non-trivial linear combination ${\mathcal R}=\sum c_i {\mathcal R}_i$ such that ${\mathcal R} v_\lambda \in \mathfrak n^- {\mathcal U}(\mathfrak q) v_\lambda$ if and only if $\mu(h_\alpha)=-\rho(h_\alpha)-1$ for some $\alpha\in\Delta^+$. \qed \end{cl}
As the theory of finite-dimensional representations suggests, it is unusual for a highest weight vector of $\mathfrak g$ to belong to the image of $\mathfrak n^-$. The proof of Theorem~\ref{th-eq} shows that ${\mathcal R}v_\lambda\ne 0$ for the linear combination of Corollary~\ref{cl-c}.
\begin{rmk} The subspace ${\mathcal V}[1]=({\mathcal U}(\mathfrak g)(\mathfrak g u))^{\mathfrak g}\subset {\mathcal U}(\mathfrak q)$ is a $\mathcal{ZU}(\mathfrak g)$-module. From a well-known description of $(\mathfrak g{\otimes}{\mathcal S}(\mathfrak g))^{\mathfrak g}$, one can deduce that ${\mathcal V}[1]$ is freely generated by ${\mathcal R}_1,\ldots,{\mathcal R}_n$ as a $\mathcal{ZU}(\mathfrak g)$-module. There are other choices of generators in ${\mathcal V}[1]$, and it is not clear whether one can get nice formulas for the corresponding Jacobians. \end{rmk}
\end{document} |
\begin{document}
\begin{center}
{\LARGE \textbf{Discrete Dynamical Modeling and Analysis of the \textit{R-S} Flip-Flop Circuit}}
\textsf{Denis Blackmore}
\textsf{Department of Mathematical Sciences and}
\textsf{Center for Applied Mathematics and Statistics}
\textsf{New Jersey Institute of Technology}
\textsf{Newark, NJ 07102-1982}
\textsf{[email protected]}
$\ast\;\ast\;\ast$
\textsf{Aminur Rahman}
\textsf{Department of Mathematical Sciences}
\textsf{New Jersey Institute of Technology}
\textsf{Newark, NJ 07102-1982}
\textsf{[email protected]}
$\ast\;\ast\;\ast$\textsf{\ }
\textsf{Jigar Shah }
\textsf{Department of Computer and Electrical Engineering}
\textsf{New Jersey Institute of Technology}
\textsf{Newark, NJ 07102-1982}
\textsf{[email protected] } \end{center}
\noindent\textbf{ABSTRACT: }A simple discrete planar dynamical model for the ideal (logical) \textit{R-S} flip-flop circuit is developed with an eye toward mimicking the dynamical behavior observed for actual physical realizations of this circuit. It is shown that the model exhibits most of the qualitative features ascribed to the \textit{R-S} flip-flop circuit, such as an intrinsic instability associated with unit set and reset inputs, manifested in a chaotic sequence of output states that tend to oscillate among all possible output states, and the existence of periodic orbits of arbitrarily high period that depend on the various intrinsic system parameters. The investigation involves a combination of analytical methods from the modern theory of discrete dynamical systems, and numerical simulations that illustrate the dazzling array of dynamics that can be generated by the model. Validation of the discrete model is accomplished by comparison with certain Poincar\'{e}-map-like representations of the dynamics corresponding to three-dimensional differential equation models of electrical circuits that produce \textit{R}-\textit{S} flip-flop behavior.
\noindent\textbf{Keywords:} R-S flip-flop, discrete dynamical system, Poincar\'{e} map, bifurcation, chaos, transverse homoclinic orbits
\noindent\textbf{AMS Subject Classification: }37C05, 37C29, 37D45, 94C05
\section{Introduction}
The ideal \emph{R-S flip-flop circuit} is a logical feedback circuit that can be described most efficiently in terms of Fig. 1, with input/output behavior described in Table 1, which shows the \emph{set} ($S$) and \emph{reset} ($R$) inputs feeding into the simple circuit comprised of two \emph{nor gates} and the corresponding outputs $Q$ and $P$ that are generated. The input to this circuit may be represented as $(S,R)$ and the output by $(Q,P)$, so the association of the input to the output, denoted by $(S,R)\rightarrow(Q,P)$ may be regarded as the action of a map from the plane $\mathbb{R}^{2} :=\{(x,y):x,y\in\mathbb{R}\}$ into itself, where $\mathbb{R}$ denotes the real numbers. Our goal, from this mapping perspective, is to construct a simple nonlinear map of the plane that models the logical properties of the \textit{R-S} flip-flop circuit, with iterates (discrete dynamics) that at least qualitatively capture most of its interesting dynamics, both those that are intuitive and those that have been observed in studies of physical circuit simulations - especially in certain critical cases that we shall describe in the sequel.
\begin{figure}
\caption{$R$-$S$ flip-flop circuit}
\label{circuit}
\end{figure}
\noindent The binary input/output behavior, with 0 and 1 representing false and true, respectively, is given in the following table.
\begin{center}
\begin{tabular}
[c]{|c|c|c|c|}\hline
$S$ & $R$ & $S_{1}:=Q$ & $R_{1}:=P$\\\hline
$1$ & $0$ & $1$ & $0$\\\hline
$0$ & $1$ & $0$ & $1$\\\hline
$1$ & $1$ & $0$ & $0$\\\hline
$0$ & $0$ & $1$ or $0$ & $0$ or $1$\\\hline
\end{tabular}

Table 1. Binary input/output of $R$-$S$ flip-flop circuit
\end{center}
Note that the input/output behavior is not well defined when the set and reset values are both $0$, a situation that can be remedied by identifying $(1,0)$ with $(0,1)$. Such an identification strategy is consistent with the obvious symmetry of the circuit with respect to $S$ and $R$, and in fact we shall employ this in the sequel when we describe and analyze our $R$-$S$ flip-flop map model. If we make this symmetry identification, it leads to an unambiguous input/output behavior for the circuit, and from an abstract perspective, it is not immediately clear why the state $(1,1)$ should be problematical - other than that it is the only state that produces non-complementary output values $0,0$. Notwithstanding the well-defined behavior of the abstract \textit{R}-\textit{S} flip-flop circuit when the symmetry identification is imposed, actual physical circuit models comprised of elements such as capacitors, diodes, inductors and resistors exhibit highly oscillatory, very unstable and even chaotic dynamics (\emph{metastable operation}), as experimentally observed in such studies as \cite{Chaney, KA, kac, LMH}. This type of highly irregular dynamical behavior has also been found in \textit{R}-\textit{S} flip-flop realizations in the context of Josephson junctions \cite{ztkbn}. In general, it is found that physical models of \textit{R}-\textit{S} flip-flop circuits invariably generate very complex dynamics belying the simplicity of the abstract logical circuit, which can plausibly be ascribed to the fact that every real model of this circuit must have inherent time sequencing characteristics due to the finite speed of electromagnetic waves.
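The settling behavior summarized in Table 1 can be mimicked by iterating the cross-coupled NOR pair of Fig. 1 until the outputs stabilize. The following minimal Python sketch is our own illustration of the ideal logical circuit; the function names and the default initial output state are assumptions for the example, not part of the model.

```python
# Minimal sketch of the ideal R-S flip-flop as a cross-coupled NOR pair.
# Names and the default initial output state (Q, P) = (0, 1) are illustrative.

def nor(a, b):
    return int(not (a or b))

def rs_flip_flop(S, R, Q=0, P=1, max_iter=10):
    """Iterate the NOR feedback from output state (Q, P) until it settles."""
    for _ in range(max_iter):
        Q_new, P_new = nor(R, P), nor(S, Q)
        if (Q_new, P_new) == (Q, P):
            return Q, P              # stable output reached
        Q, P = Q_new, P_new
    return Q, P                      # may fail to settle (metastability)

print(rs_flip_flop(1, 0))  # set:   (1, 0)
print(rs_flip_flop(0, 1))  # reset: (0, 1)
print(rs_flip_flop(1, 1))  # both:  (0, 0)
```

Note that for $S=R=0$ the result depends on the initial output state, in accordance with the ambiguous last row of Table 1.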
Continuous dynamical systems representations of \textit{R}-\textit{S} flip-flop circuits derived from the usual circuit equations applied to the physical models, typically lead to three-dimensional systems of first order, autonomous, (possibly discontinuous) piecewise smooth (and usually piecewise linear) differential equations such as those investigated by Murali \emph{et al}. \cite{msd}, Okazaki \emph{et al}. \cite{okt} and Ruzbehani \emph{et al}. \cite{rzw}. These mathematical models are traceable back to the pioneering work of Moser \cite{Moser}, and are subsumed by the famous circuit of Chua and its generalizations \cite{chua, CWHZ}. Experimental observations on real circuit models of \textit{R}-\textit{S} flip-flops, together with numerical simulations utilizing such tools as SPICE \cite{hamill}, and analytical studies employing the tools of modern dynamical systems theory (see \emph{e.g}. \cite{guho, kathas, palmel, wigbook}) including Poincar\'{e} sections and Melnikov functions such as in \cite{CWHZ, Danca, kac, okt, rzw} have painted a rather compelling picture of the extreme complexity of the dynamical possibilities.
Naturally, when one has a series of quite successful mathematical representations of phenomena or processes, as is the case for \textit{R-S} flip-flop circuit behavior, it leads to the question of the possible existence of simpler models. One cannot expect to reduce the dimension of the continuous dynamical models, since it is impossible for two-dimensional systems of autonomous differential equations to have chaotic solutions. However, two-dimensional discrete dynamical systems are an enticing possibility since they can exhibit almost all types of complex dynamics - including chaotic regimes. Moreover, there have been some studies, albeit just a few, including those of Danca \cite{Danca} and Hamill \emph{et al}. \cite{hdj}, which give strong indications of the potential of modeling circuits such as the \textit{R}-\textit{S} flip-flop using two-dimensional difference equations or iterated planar maps. Encouraged by this literature on discrete dynamical models and relying heavily on our knowledge of dynamical systems theory and physical intuition, we have developed a discrete, essentially phenomenological, dynamical model generated by iterates of a rather simple nonlinear (quadratic), two-parameter planar map, which we present and analyze in this paper.
Our investigation is organized as follows. First, in Section 2, we define our simple planar map model - with iterates producing the dynamic behavior that we shall show mimics that observed in real \textit{R}-\textit{S} flip-flop circuits. Moreover, we derive some basic properties of the map related to fixed points and the existence of a local inverse. This is followed in Section 3 with a more thorough analysis of the fixed points of the map - including a local stability analysis and an analysis of stable and unstable manifolds - under very mild, and quite reasonable, restrictions on the two map parameters. Next, in Section 4, we fix the value of one of the parameters and prove the existence of a Hopf bifurcation at an interior fixed point as the other parameter is varied. We also note what appears to be a kind of Hopf bifurcation cascade (manifested by an infinite sequence) as the parameter is increased, which suggests the existence of extreme oscillatory behavior culminating in chaos. We then prove the existence of chaotic dynamics in Section 5. This is followed in Section 6 by a comparison of the dynamics of our model with other results in the literature, primarily from the perspective of planar Poincar\'{e} maps. Naturally, liberal use is made in this section of numerical simulations of our model for comparison purposes. Finally, in Section 7, we summarize and underscore some of the more important results of this investigation, and identify several interesting directions for future related research.
\section{A Discrete Model}
Based upon the ideal mathematical properties of the \textit{R-S} flip-flop circuit, which were briefly delineated in the preceding section, and a knowledge of the interesting dynamical characteristics of actual physical circuits constructed to perform like the \textit{R-S} flip-flop circuit (see \emph{e.g}. \cite{hdj, kac, okt, rzw, ztkbn}, and also \cite{Danca} for related results), we postulate the following simple, quadratic, two-parameter planar map model:
\begin{equation} \Phi=\left( \varphi,\psi\right) =\Phi_{\lambda,\mu}=\left( \varphi _{\lambda,\mu},\psi_{\lambda,\mu}\right) :\mathbb{R}^{2}\rightarrow \mathbb{R}^{2}, \label{e1} \end{equation} where $\lambda$ and $\mu$ are positive parameters, and the coordinate functions $\varphi$ and $\psi$ are defined as \begin{align} \varphi\left( x,y\right) & =\varphi_{\lambda,\mu}\left( x,y\right) :=1-x\left[ \lambda\left( 1-x\right) +y\right] ,\nonumber\\ \psi\left( x,y\right) & =\psi_{\lambda,\mu}\left( x,y\right) :=\mu y\left( x-y\right) \label{e2} \end{align} Naturally, this map generates a discrete dynamical system - actually a discrete semidynamical system - in terms of its forward iterates determined by $n$-fold compositions of the map with itself, denoted as $\Phi_{\lambda,\mu }^{n}$, or more simply as $\Phi^{n}$, where $n$ is a nonnegative integer. We shall employ the usual notation and definitions for this discrete system; for example, the \emph{positive semiorbit }of a point $p\in\mathbb{R}^{2}$, which we denote as $O_{+}(p)$, is simply defined as \[ O_{+}(p):=\left\{ \Phi^{n}(p):n\in\mathbb{Z},\,n\geq0\right\} , \] and all other relevant definitions are standard (\emph{cf}. \cite{guho, hartman, kathas, palmel, wigbook}). Our model map is clearly real-analytic, which we denote as usual by $\Phi\in C^{\omega}$ and \emph{a fortiori} smooth, denoted as $\Phi\in C^{\infty}$.
Assuming the reset and set values, corresponding to the $x$- and $y$-coordinates, respectively, are normalized so that they may assume the discrete (logical) values $0$ or $1$, it makes sense to restrict the model map to the square $I^{2}:=I\times I:=[0,1]\times\lbrack0,1]$ in the plane. In fact, owing to the obvious symmetry of the circuit with respect to the reset and set inputs, it is actually natural to further restrict our attention to the triangular domain \begin{equation} T:=\left\{ (x,y)\in\mathbb{R}^{2}:0\leq x\leq1,\,0\leq y\leq x\right\} , \label{e3} \end{equation} but we shall defer additional discussion of this point until later, except to note here that \begin{equation} \Phi(0,0)=(1,0),\;\Phi(1,0)=(1,0)\text{ and }\Phi(1,1)=(0,0), \label{e4} \end{equation} which shows that our model is at least logically consistent with the \textit{R}-\textit{S} flip-flop circuit.
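The map (1)-(2) and the consistency identities (4) are easy to experiment with numerically. The following Python sketch is our own illustration; the parameter values passed in the example are arbitrary, and the identities in (4) hold for every $\lambda,\mu$.

```python
# Hedged sketch of the model map Phi_{lambda,mu} of eqs. (1)-(2);
# sample parameter values below are illustrative only.

def phi(point, lam, mu):
    """One application of the model map Phi_{lambda,mu}."""
    x, y = point
    return (1.0 - x * (lam * (1.0 - x) + y),  # varphi(x, y)
            mu * y * (x - y))                 # psi(x, y)

def semiorbit(p0, lam, mu, n):
    """First n iterates of the positive semiorbit O_+(p0)."""
    pts = [p0]
    for _ in range(n):
        pts.append(phi(pts[-1], lam, mu))
    return pts

# Logical consistency with the R-S flip-flop, cf. eq. (4):
# (0,0) -> (1,0), (1,0) -> (1,0), (1,1) -> (0,0), for any lambda, mu.
for p in [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]:
    print(p, "->", phi(p, lam=0.99, mu=4.0))
```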
\subsection{Basic properties of the model}
Before embarking on a more thorough dynamical analysis of the iterates of our simple map model, we shall describe some of its simpler properties. The fixed points of the map, satisfying $\Phi(x,y)=(x,y)$, are the solutions of the equations \begin{align} 1-x\left[ \lambda\left( 1-x\right) +y\right] & =x\nonumber\\ \mu y\left( x-y\right) & =y, \label{e5} \end{align} from which we readily deduce the following property.
\begin{itemize} \item[(B1)] The points $(1,0)$ and $(1/\lambda,0)$ are fixed points of $\Phi$ for all $\lambda,\mu>0$ (\emph{cf}. (4)), while all other fixed points lie off the $x$-axis and are determined by the equations \[ \left( 1-\lambda\right) x^{2}+\left( 1+\lambda-\mu^{-1}\right) x-1=0,\;y=x-\mu^{-1}. \] Hence, we have the following: if $\lambda=1$ and $\mu^{-1}=1+\lambda=2$, there are no additional fixed points; if $\lambda=1$ and $\mu^{-1}\neq1+\lambda=2$, there is one more fixed point $(x,y)$ with \[ x=\frac{\mu}{\mu(1+\lambda)-1},\;y=\frac{\mu}{\mu(1+\lambda)-1}-\frac{1}{\mu }; \] and if $\lambda\neq1$ and $\left( 1+\lambda-\mu^{-1}\right) ^{2}+4\left( 1-\lambda\right) \geq0 $, there are two additional fixed points $(x,y)$ with \begin{equation} x=\frac{\left( \mu^{-1}-\lambda-1\right) \pm\sqrt{\left( 1+\lambda-\mu ^{-1}\right) ^{2}+4\left( 1-\lambda\right) }}{2\left( 1-\lambda\right) },\;y=x-\frac{1}{\mu}, \label{e6} \end{equation} while if $\left( 1+\lambda-\mu^{-1}\right) ^{2}+4\left( 1-\lambda\right) <0$, there are only the two fixed points on the $x$-axis. \end{itemize}
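Property (B1) can be checked numerically. In the Python sketch below (our own), the sample parameter values $\lambda=1/2$ and $\mu=3$ are chosen arbitrarily so that the discriminant is positive and all four fixed points are real.

```python
import math

# Hedged numerical check of property (B1): each listed point should satisfy
# Phi(x, y) = (x, y). The parameter values are illustrative only.
lam, mu = 0.5, 3.0

def phi(x, y):
    return 1.0 - x * (lam * (1.0 - x) + y), mu * y * (x - y)

fixed = [(1.0, 0.0), (1.0 / lam, 0.0)]

# Interior fixed points from (1-lam)x^2 + (1+lam-1/mu)x - 1 = 0, y = x - 1/mu.
disc = (1.0 + lam - 1.0 / mu) ** 2 + 4.0 * (1.0 - lam)
for sign in (+1.0, -1.0):
    x = ((1.0 / mu - lam - 1.0) + sign * math.sqrt(disc)) / (2.0 * (1.0 - lam))
    fixed.append((x, x - 1.0 / mu))

for (x, y) in fixed:
    fx, fy = phi(x, y)
    assert abs(fx - x) < 1e-12 and abs(fy - y) < 1e-12
print("all four fixed points verified")
```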
\noindent The following additional properties of the map follow directly from its definition.
\begin{itemize} \item[(B2)] $\Phi$ maps the $x$-axis into itself, and if $0<\lambda\leq4$, $\Phi$ actually maps the horizontal edge $e_{h}:=\{(x,0):0\leq x\leq1\}$ of $T$ into itself.
\item[(B3)] $\Phi$ maps the diagonal line $x-y=0$ into the $x$-axis, and maps the diagonal edge $e_{d}:=\{(x,x):0\leq x\leq1\}$ of $T$ into $e_{h}$ if $0<\lambda\leq2$.
\item[(B4)] $\Phi$ maps the $y$-axis onto the portion of the line $x=1$ with $y\leq0$, and maps the line $x=1$, containing the vertical edge $e_{v} :=\{(1,y):0\leq y\leq1\}$ of $T$, onto the parabola $y=\mu x(1-x)$ passing through the origin and the fixed point $(1,0)$.
\item[(B5)] It follows from the derivative (matrix) \begin{equation} \Phi^{\prime}(x,y)=\left( \begin{array} [c]{cc} \lambda(2x-1)-y & -x\\ \mu y & \mu(x-2y) \end{array} \right) \label{e7} \end{equation} and the inverse function theorem that $\Phi$ is a local $C^{\omega} $-diffeomorphism at any point in the complement of the quadratic curve \[ \lambda\left( 2x-1\right) \left( x-2y\right) +2y^{2}=0, \] while in general, the preimage of any point in the plane, denoted as $\Phi^{-1}\left( (x,y)\right) $, is comprised of at most four points. \end{itemize}
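As a sanity check on (B5), the derivative matrix (7) can be compared against central finite differences, which are exact up to roundoff for a quadratic map. The Python sketch below is ours, with arbitrary sample parameters and base point.

```python
# Check the Jacobian formula (7) against central finite differences.
# Parameters and base point are arbitrary illustrative choices.
lam, mu = 0.7, 2.5
h = 1e-6

def phi(x, y):
    return 1.0 - x * (lam * (1.0 - x) + y), mu * y * (x - y)

def jacobian_analytic(x, y):
    # Eq. (7)
    return [[lam * (2.0 * x - 1.0) - y, -x],
            [mu * y, mu * (x - 2.0 * y)]]

def jacobian_numeric(x, y):
    d_dx = [(a - b) / (2.0 * h) for a, b in zip(phi(x + h, y), phi(x - h, y))]
    d_dy = [(a - b) / (2.0 * h) for a, b in zip(phi(x, y + h), phi(x, y - h))]
    return [[d_dx[0], d_dy[0]], [d_dx[1], d_dy[1]]]

x0, y0 = 0.4, 0.2
A, N = jacobian_analytic(x0, y0), jacobian_numeric(x0, y0)
assert all(abs(A[i][j] - N[i][j]) < 1e-6 for i in range(2) for j in range(2))
print("Jacobian (7) agrees with finite differences")
```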
\section{Elementary Dynamics of the Model}
We shall analyze the deeper dynamical aspects of the model map (1)-(2) for various parameter ranges in the sequel, but first we dispose of some of the more elementary properties such as the usual local linear stability analysis of the fixed points. At this stage, and for the remainder of our investigation, we shall focus on the restriction of the model map to the triangle $T$ and assume that
\qquad($\mathcal{A}$1)$\qquad\qquad0<\lambda<1<\mu$
With the above restriction and assumption, it follows from the preceding section that our model map has precisely four fixed points: two in $T$; one near $T$ at $(1/\lambda,0)$ when $\lambda$ is close to unity; and the final one rather distant from $T$. In the next subsection, we embark on a local stability analysis of the fixed points of $\Phi$ on or near the triangle $T$.
\subsection{Local analysis of the fixed points}
The local properties of the fixed points of our model map shall be delineated in a series of lemmas. They all have straightforward proofs that follow directly from the results in the preceding section and fundamental dynamical systems theory (as in \cite{guho, kathas, palmel, wigbook}), which are left to the reader.
\noindent\textbf{Lemma 3.1. }\emph{The fixed points of }$\Phi$\emph{\ on the }$x$\emph{-axis, namely }$(1,0)$\emph{\ and }$(1/\lambda,0)$\emph{, are a saddle and a source with eigenvalues }(\emph{of }$\Phi^{\prime}(1,0)$ \emph{and} $\Phi^{\prime}(1/\lambda,0)$) $\lambda,\mu$\emph{\ and } $2-\lambda,\mu/\lambda$\emph{, respectively. For }$(1,0)$\emph{, the stable manifold is} \[ W^{s}\left( 1,0\right) =\left\{ (x,0):x<1/\lambda\right\} , \] \emph{and the linear unstable manifold is} \[ W_{\ell}^{u}(1,0)=\left\{ \left( x,(\lambda-\mu)(x-1)\right) :x\in \mathbb{R}\right\} . \]
\noindent\textbf{Lemma 3.2. }\emph{The fixed point of }$\Phi$\emph{\ in the interior of }$T$\emph{, which we denote as }$p_{\ast}=(x_{\ast},y_{\ast} )$\emph{, is defined according to }(6) \emph{as }
\[ x_{\ast}=x_{\ast}(\lambda,\mu)=\frac{\left( \mu^{-1}-\lambda-1\right) +\sqrt{\left( 1+\lambda-\mu^{-1}\right) ^{2}+4\left( 1-\lambda\right) } }{2\left( 1-\lambda\right) },\;y_{\ast}=y_{\ast}(\lambda,\mu)=x_{\ast} -\frac{1}{\mu}, \] \emph{and has complex conjugate eigenvalues that are roots of the quadratic equation} \[ \sigma^{2}-a\sigma+b=0, \] \emph{where} \begin{align*} a & =a(\lambda,\mu):=\left( 2\lambda-\mu-1\right) x_{\ast}+\left( 2-\lambda+\mu^{-1}\right) ,\;\\ b & =b(\lambda,\mu):=\mu\left\{ \left[ \lambda\left( 4\mu^{-1}-1\right) -2(1+\mu^{-1})\right] x_{\ast}+2\left[ 1-\mu^{-1}\left( \lambda-\mu ^{-1}\right) \right] \right\} ; \end{align*} \emph{namely} \begin{align*} \sigma & =\sigma(\lambda,\mu)=\frac{1}{2}\left[ a+i\sqrt{4b-a^{2}}\right] \\ \bar{\sigma} & =\bar{\sigma}(\lambda,\mu)=\frac{1}{2}\left[ a-i\sqrt {4b-a^{2}}\right] . \end{align*} \emph{Hence it is a spiral sink or spiral source, respectively, when} \[ \left\vert \sigma\right\vert ^{2}=\left\vert \bar{\sigma}\right\vert ^{2}=b<1, \] \emph{or} \[ \left\vert \sigma\right\vert ^{2}=\left\vert \bar{\sigma}\right\vert ^{2}=b>1. \] \emph{Otherwise }(\emph{when }$b=1$) \emph{it has neutral stability.}
\noindent\textbf{Lemma 3.3. }\emph{For any fixed }$\lambda$\emph{\ and variable }$\mu$ \emph{satisfying (}$\mathcal{A}$\emph{1), the coefficient } $b$\emph{\ defined above satisfies the following properties}:
\begin{itemize} \item[(i)] \emph{It is a smooth }$(=C^{\infty})$\emph{, nonnegative function of }$\mu$\emph{\ for }$\mu>1$\emph{, such that }$db/d\mu>0$ \emph{for every} $\mu>1$.
\item[(ii)] $b\uparrow\infty$\emph{\ as }$\mu\uparrow\infty.$
\item[(iii)] \emph{There exists a positive }$c(\lambda)$\emph{\ such that }$b<1$\emph{\ for }$1<\mu<1+c(\lambda).$
\item[(iv)] \emph{In particular, for each }$0<\lambda<1$\emph{, there exists a unique }$\mu_{h}=\mu_{h}(\lambda)=1+c(\lambda)$\emph{\ such that }$1<\mu_{h} $\emph{, }$b(\lambda,\mu_{h})=1$\emph{, }$0<b(\lambda,\mu)<1$\emph{\ for }$1<\mu<\mu_{h}$\emph{, and }$b(\lambda,\mu)>1$\emph{\ for }$\mu_{h}<\mu $\emph{.} \end{itemize}
The case when the interior fixed point is a spiral attractor is shown in Fig. 2. By fixing $\lambda$ at a value near one, say $\lambda=0.99$, and then varying $\mu$ over a range from $4$ to $5$, we obtain a very rich array of dynamics as described in what follows.
\section{Oscillation and Hopf Bifurcation}
In order to achieve a reasonable amount of focus - given the wide range of possible model map parameters - we shall narrow our range of investigation by adhering to the following additional assumption in the sequel:
\qquad($\mathcal{A}$2)$\qquad\qquad\lambda=0.99=\frac{99}{100}.$
\noindent Then assuming ($\mathcal{A}$1) and ($\mathcal{A}$2), our map $\Phi=\Phi_{\mu}$ satisfies all the properties delineated above, and depends only on the single parameter $\mu\in(1,\infty)$. In particular, it follows directly from Lemma 3.2 that \begin{equation} x_{\ast}(\mu):=x_{\ast}(.99,\mu)=50\left\{ \left( \mu^{-1}-1.99\right) +\sqrt{\left( 1.99-\mu^{-1}\right) ^{2}+0.04}\right\} ,\;y_{\ast}(\mu):=y_{\ast}(.99,\mu)=x_{\ast}-\frac{1}{\mu}, \label{e8} \end{equation} and \begin{align} a(\mu) & :=a(.99,\mu)=\left( .98-\mu\right) x_{\ast}+\left( 1.01+\mu^{-1}\right) ,\nonumber\\ b(\mu) & :=b(.99,\mu)=\mu\left\{ \left[ (.99)\left( 4\mu^{-1}-1\right) -2(1+\mu^{-1})\right] x_{\ast}+2\left[ 1-\mu^{-1}\left( .99-\mu^{-1}\right) \right] \right\} . \label{e9} \end{align} It is then straightforward to compute in the notation of Lemma 3.3 that \begin{equation} c:=c(.99)\cong3.5438,\;\mu_{h}:=\mu_{h}(.99)\cong4.5438,\;x_{\ast}(\mu_{h})\cong0.5632,\;y_{\ast}(\mu_{h})\cong0.3431. \label{e10} \end{equation}
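The numerical values in (10) are easily reproduced: by Lemma 3.3, $b(\cdot)$ is increasing in $\mu$, so the Hopf value $\mu_{h}$ can be located by bisection on $b(\mu)=1$. The following Python sketch is our own, using the formulas (8)-(9).

```python
import math

# Reproduce eq. (10): lambda = 0.99 fixed; find mu_h with b(mu_h) = 1.
LAM = 0.99

def x_star(mu):
    # Eq. (8)
    return 50.0 * ((1.0 / mu - 1.99)
                   + math.sqrt((1.99 - 1.0 / mu) ** 2 + 0.04))

def b(mu):
    # Eq. (9)
    xs = x_star(mu)
    return mu * ((LAM * (4.0 / mu - 1.0) - 2.0 * (1.0 + 1.0 / mu)) * xs
                 + 2.0 * (1.0 - (LAM - 1.0 / mu) / mu))

# Bisection: b < 1 just above mu = 1 and b increases to infinity (Lemma 3.3).
lo, hi = 1.1, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if b(mid) < 1.0 else (lo, mid)
mu_h = 0.5 * (lo + hi)

print(round(mu_h, 4), round(x_star(mu_h), 4), round(x_star(mu_h) - 1.0 / mu_h, 4))
# compare with eq. (10): mu_h ~ 4.5438, x* ~ 0.5632, y* ~ 0.3431
```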
In order to study the behavior of the map in a neighborhood of the fixed point $p_{\ast}=(x_{\ast},y_{\ast})$, it is convenient to translate the coordinates and map to the origin by defining \begin{align} \hat{\Phi} & :=\hat{\Phi}_{\mu}(\xi,\eta):=\Phi_{\mu}(\xi+x_{\ast} ,\eta+y_{\ast})-\Phi_{\mu}(x_{\ast},y_{\ast})\nonumber\\ & =\Phi_{\mu}(\xi+x_{\ast},\eta+y_{\ast})-(x_{\ast},y_{\ast}). \label{e11} \end{align} It is easy to compute that \begin{equation} \hat{\Phi}:=\hat{\Phi}_{\mu}(\xi,\eta)=\left( \hat{\varphi}_{\mu}(\xi ,\eta),\hat{\psi}_{\mu}(\xi,\eta)\right) , \label{e12} \end{equation} where \begin{align} \hat{\varphi}_{\mu}(\xi,\eta) & :=\left[ (0.98)x_{\ast}(\mu)+\mu ^{-1}-0.99\right] \xi-x_{\ast}(\mu)\eta+\xi\left[ (0.99)\xi-\eta\right] ,\nonumber\\ \hat{\psi}_{\mu}(\xi,\eta) & :=\left[ \mu x_{\ast}(\mu)-1\right] \xi-\left[ 2-\mu x_{\ast}(\mu)\right] \eta+\mu\eta\left( \xi-\eta\right) . \label{e13} \end{align}
\subsection{Invariant curve and Hopf bifurcation}
Now, with this simple quadratic representation of the model map with respect to the fixed point $p_{\ast}$ interior to the triangle $T$, it is a straightforward matter to describe the bifurcation behavior and oscillatory properties. In particular, we have the following result.
\noindent\textbf{Theorem 4.1. }\emph{The discrete semidynamical system associated to the map }$\Phi_{\mu}$\emph{\ }$($\emph{or }$\hat{\Phi}_{\mu})$\emph{\ has a Hopf bifurcation at the fixed point }$p_{\ast}$\emph{\ when }$\mu=\mu_{h}$\emph{. More specifically, }$p_{\ast}$\emph{\ is a spiral sink }$($\emph{source}$)$\emph{\ for }$1<\mu<\mu_{h}$\emph{\ }$(\mu_{h}<\mu)$\emph{, and }$p_{\ast}$\emph{\ is neutrally stable for }$\mu=\mu_{h}$\emph{\ with }$\Phi_{\mu_{h}}^{\prime}(p_{\ast})$\emph{\ having complex conjugate eigenvalues on the unit circle }$S^{1}$\emph{\ in the complex plane }$\mathbb{C}$\emph{\ }$(\cong\mathbb{R}^{2})$\emph{. Furthermore, for sufficiently small }$\nu:=\mu-\mu_{h}>0$\emph{, say }$0<\nu<\epsilon$\emph{, there exists a unique }$\Phi_{\mu}$\emph{-invariant, smooth Jordan curve }$\Gamma_{\nu}$\emph{\ }$($\emph{i.e. with }$\Phi_{\mu}(\Gamma_{\nu})=\Gamma_{\nu})$\emph{\ enclosing }$p_{\ast}$\emph{\ in its interior }$\mathcal{I}(\Gamma_{\nu})$\emph{, satisfying the following properties:}
\begin{itemize} \item[(i)] \emph{There is a }$0<\epsilon_{\ast}\leq\epsilon$\emph{\ for which }$0<\nu<\epsilon_{\ast}$\emph{\ implies that for every point }$p\in \mathcal{I}(\Gamma_{\nu})\smallsetminus\{p_{\ast}\}$\emph{\ the iterates }$\Phi_{\mu}^{n}(p)$\emph{\ spiral around }$p_{\ast}$\emph{\ in a counterclockwise manner and approach }$\Gamma_{\nu}$\emph{; in particular, the distance between these iterates and the curve, denoted }$\Delta(\Phi_{\mu _{h}+\nu}^{n}(p),\Gamma_{\nu})$\emph{, converges monotonically to zero as }$n\rightarrow\infty$\emph{.}
\item[(ii)] \emph{With }$\epsilon_{\ast}$\emph{\ and }$\epsilon$\emph{\ as in }$(i)$\emph{, }$\Gamma_{\nu}$\emph{\ is a local attractor, in that all positive semiorbits originating in some open neighborhood of this curve converge to }$\Gamma_{\nu}$\emph{.}
\item[(iii)] \emph{The dynamical system on }$\Gamma_{\nu}$\emph{\ induced by the restriction of the map }$\Phi_{\mu}$\emph{\ is either ergodic }$($\emph{with dense orbits}$)$\emph{\ or has periodic orbits }$($\emph{or cycles}$)$\emph{\ according as the rotation number is irrational or rational, respectively. In particular, an }$11$\emph{-cycle appears as }$\mu$\emph{\ just exceeds the bifurcation value }$\mu_{h}$\emph{, and the collection }$\{\Gamma_{\nu}:0<\nu<\epsilon_{\ast}\}$\emph{\ includes }$m$\emph{-cycles for infinitely many }$m\in\mathbb{N}$\emph{, where }$\mathbb{N}$\emph{\ comprises the natural numbers.}
\item[(iv)] \emph{Under the same conditions as in }$(i)$\emph{, } $\Delta(p_{\ast},\Gamma_{\nu})=O(\nu).$ \end{itemize}
\noindent\emph{Proof. }Properties (i) and (ii) follow from a direct application to the map (11) of the Hopf bifurcation theorem for discrete dynamical systems (see \emph{e.g.} \cite{DMP, MM, wan, WXH} and also \cite{guho, IJ, kathas, wigbook}), or one can obtain the same results via a straightforward modification of the main theorem of Champanerkar \& Blackmore \cite{CB}. In fact, the latter approach actually shows that the invariant curve is analytic in its variables and parameter $\nu$. To prove (iii) and (iv) requires a deeper analysis of the invariant curves, which we shall merely outline in the interest of brevity (\emph{cf}. \cite{guho} and Lanford's version of Ruelle's proof in \cite{MM}).
The curve $\Gamma_{\nu}$ can be parametrized in polar form as \begin{equation} \Gamma_{\nu}:\xi=\xi(\theta;\mu):=\rho(\theta;\mu)\cos\theta,\quad\eta=\eta(\theta;\mu):=\rho(\theta;\mu)\sin\theta, \label{e14} \end{equation} where $\theta$ is the usual polar angle about the point $p_{\ast}$. Then $\Phi_{\mu}$-invariance requires that \begin{equation} \hat{\Phi}_{\mu}\left( \xi(\theta;\mu),\eta(\theta;\mu)\right) =\left( \xi(\Theta;\mu),\eta(\Theta;\mu)\right) , \label{e15} \end{equation} where $\Theta$ represents the angular rotational action of the map defined as \begin{equation} \Theta=\Theta\left( \xi,\eta;\nu\right) :=\tan^{-1}\left( \frac{\hat{\psi}_{\mu}(\xi,\eta)}{\hat{\varphi}_{\mu}(\xi,\eta)}\right) . \label{e16} \end{equation} We note that it is easy to verify that we must take the branch of the arctangent that takes on values between $\theta+\pi/2$ and $\theta+3\pi/2.$
It is convenient to introduce the following more compact notation for the map as expressed in terms of its coordinate functions in (13): \begin{align} \hat{\varphi}_{\mu}(\xi,\eta) & :=-\alpha(\nu)\xi-\beta(\nu)\eta+\xi\left[ (0.99)\xi-\eta\right] ,\nonumber\\ \hat{\psi}_{\mu}(\xi,\eta) & :=\gamma(\nu)\xi+\delta(\nu)\eta+\mu\eta\left( \xi-\eta\right) , \label{e17} \end{align} where the parameter dependent, positive coefficients $\alpha(\nu)$, $\beta (\nu)$, $\gamma(\nu)$ and $\delta(\nu)$ are defined in the obvious way according to (13). In polar coordinates with respect to $p_{\ast}$, the functions defined in (17) have the form \begin{align} \hat{\varphi}_{\mu}(r,\theta) & :=r\left\{ -\alpha(\nu)\cos\theta-\beta (\nu)\sin\theta+r\cos\theta\left[ (0.99)\cos\theta-\sin\theta\right] \right\} ,\nonumber\\ \hat{\psi}_{\mu}(r,\theta) & :=r\left[ \gamma(\nu)\cos\theta+\delta (\nu)\sin\theta+\mu r\sin\theta\left( \cos\theta-\sin\theta\right) \right] . \label{e18} \end{align} Moreover, in the context of these polar coordinates, our map can be rewritten in the form \begin{equation} \hat{\Phi}_{\mu}\left( r,\theta\right) :=\left( R(r,\theta;\mu ),\Theta(r,\theta;\mu)\right) , \label{e19} \end{equation} and we compute that \begin{align*} R^{2} & =\hat{\varphi}_{\mu}^{2}+\hat{\psi}_{\mu}^{2}=r^{2}\left\{ \left( \alpha^{2}+\gamma^{2}\right) \cos^{2}\theta+\left( \alpha\beta+\gamma \delta\right) \cos2\theta+\left( \beta^{2}+\delta^{2}\right) \sin^{2} \theta-\right. \\ & r\left( 0.99\cos\theta-\sin\theta\right) \left[ 2\alpha\cos^{2} \theta+\beta\cos2\theta-r\cos^{2}\theta\left( 0.99\cos\theta-\sin \theta\right) \right] +\\ & \left. 
\mu r\left( \cos\theta-\sin\theta\right) \left[ \gamma \cos2\theta+2\delta\sin^{2}\theta+r\sin^{2}\theta\left( \cos\theta-\sin \theta\right) \right] \right\} , \end{align*} which implies that \begin{align} R\left( r,\theta;\mu\right) & :=rU\left( r,\theta;\mu\right) =r\left\{ \left( \alpha^{2}+\gamma^{2}\right) \cos^{2}\theta+\left( \alpha \beta+\gamma\delta\right) \cos2\theta+\left( \beta^{2}+\delta^{2}\right) \sin^{2}\theta-\right. \nonumber\\ & r\left( 0.99\cos\theta-\sin\theta\right) \left[ 2\alpha\cos^{2} \theta+\beta\cos2\theta-r\cos^{2}\theta\left( 0.99\cos\theta-\sin \theta\right) \right] +\label{e20}\\ & \left. \mu r\left( \cos\theta-\sin\theta\right) \left[ \gamma \cos2\theta+2\delta\sin^{2}\theta+r\sin^{2}\theta\left( \cos\theta-\sin \theta\right) \right] \right\} ^{1/2}\nonumber \end{align}
Now it follows from the $\hat{\Phi}_{\mu}$-invariance of $\Gamma_{\nu}$, manifested by (19), and (14)-(20) that the radius function $\rho$ must satisfy \begin{equation} \rho=\rho\left( \theta;\mu\right) =U\left( \rho,\theta;\mu\right) ^{-1}\rho\left( \tan^{-1}\left( \frac{\left[ \gamma(\nu)\cos\theta +\delta(\nu)\sin\theta+\mu\rho\sin\theta\left( \cos\theta-\sin\theta\right) \right] }{\left\{ -\alpha(\nu)\cos\theta-\beta(\nu)\sin\theta+\rho\cos \theta\left[ (0.99)\cos\theta-\sin\theta\right] \right\} }\right) \right) . \label{e21} \end{equation} This equation expresses the fact that $\rho$ is a fixed point of the operator on the right-hand side, and can be used to approximate this radius function to any desired degree of accuracy. As we noted above, $\rho$ is analytic in $(\theta,\mu)$, so the approximation can be effected by assuming a power series representation in $\theta$ or $\mu$, whereupon substitution in (21) would provide a means for recursive determination of the series coefficients. But this turns out to be a rather laborious, albeit straightforward, process. It is actually more efficient in this case to find (global in $\theta$) approximations via Picard iteration. For example, if we take $\rho_{0}=\nu$, the next approximation yields \[ \rho_{1}=\nu U\left( \nu,\theta;\mu\right) ^{-1}, \] which, owing to the easily verified and rather rapid geometric convergence of the iterates, gives quite a good approximation of the (fixed point) solution - one that is, for example, sufficient to verify property (iv).
Then a closer examination of the accuracy of the remaining successive approximations, with special attention to the angular aspect of the restriction of $\hat{\Phi}_{\mu}$ to $\Gamma_{\nu}$ embodied in \[ \Theta=\tan^{-1}\left( \frac{\left[ \gamma(\nu)\cos\theta+\delta(\nu )\sin\theta+\mu\rho\sin\theta\left( \cos\theta-\sin\theta\right) \right] }{\left\{ -\alpha(\nu)\cos\theta-\beta(\nu)\sin\theta+\rho\cos\theta\left[ (0.99)\cos\theta-\sin\theta\right] \right\} }\right) , \] together with some fundamental results on rotation numbers such as given in Hartman \cite{hartman}, makes it possible to verify (iii), thereby completing the proof. $\blacksquare$
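The dichotomy in Theorem 4.1 can also be observed directly by iteration. In the Python sketch below (ours; the sample values of $\mu$ and the perturbation sizes are arbitrary), a small perturbation of $p_{\ast}$ decays for $\mu<\mu_{h}\cong4.5438$, where $p_{\ast}$ is a spiral sink, and is expelled from $p_{\ast}$ toward the invariant curve $\Gamma_{\nu}$ for $\mu$ slightly above $\mu_{h}$.

```python
import math

# Probe the Hopf dichotomy of Theorem 4.1 for lambda = 0.99.
LAM = 0.99

def phi(x, y, mu):
    return 1.0 - x * (LAM * (1.0 - x) + y), mu * y * (x - y)

def p_star(mu):
    # Interior fixed point, eq. (8).
    x = 50.0 * ((1.0 / mu - 1.99) + math.sqrt((1.99 - 1.0 / mu) ** 2 + 0.04))
    return x, x - 1.0 / mu

def dist_after(mu, d0, n):
    """Distance from p_* after n iterates, starting at p_* + (d0, 0)."""
    xs, ys = p_star(mu)
    x, y = xs + d0, ys
    for _ in range(n):
        x, y = phi(x, y, mu)
    return math.hypot(x - xs, y - ys)

# mu = 4.0 < mu_h: spiral sink, the perturbation dies out.
print(dist_after(4.0, 1e-2, 500))
# mu = 4.6 > mu_h: p_* repels; iterates spiral outward yet remain bounded.
print(dist_after(4.6, 1e-5, 400))
```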
\subsection{Cascading Hopf doubling bifurcations}
If one continues further along the lines of analysis of the behavior of the map $\hat{\Phi}_{\mu}$ in the proof of Theorem 4.1, a much more intricate sequence of bifurcations and dynamical properties emerges, which we shall just sketch here. We begin by keeping close tabs on the stability of the invariant curve $\Gamma_{\nu}$ as $\nu$ increases. A careful analysis of this locally attracting curve and the map, which we leave to the reader, reveals that there is a small $\nu_{1}>0$ beyond which the curve becomes locally repelling, and for which the main theorem of \cite{CB} applies. Accordingly, a pair of new locally attracting, smooth Jordan curves emerges from a pitchfork bifurcation of $\Gamma_{\nu}$ - one interior to $\Gamma_{\nu}$, which we denote as $\Gamma_{\nu}^{(0)}$, and the other exterior to $\Gamma_{\nu}$, denoted as $\Gamma_{\nu}^{(1)}$, such that $\Gamma_{\nu}^{(0)}\cup\Gamma_{\nu}^{(1)}$ is $\hat{\Phi}_{\mu}$-invariant, with $\hat{\Phi}_{\mu}(\Gamma_{\nu}^{(0)})=\Gamma_{\nu}^{(1)}$ and $\hat{\Phi}_{\mu}(\Gamma_{\nu}^{(1)})=\Gamma_{\nu}^{(0)}$. In effect then, $\{\Gamma_{\nu}^{(0)},\Gamma_{\nu}^{(1)}\}$ is a 2-cycle of sets. Thus we have a doubling bifurcation for one-dimensional closed smooth manifolds analogous to the beginning of a period doubling cascade for points (zero-dimensional manifolds) observed in such one-dimensional discrete dynamical systems as that of the logistic map.
One can actually show that this analog is complete, in that there is an infinite sequence of such stability shifting, smooth, invariant, Jordan curve doubling bifurcations that converge to an extremely complicated chaotic state. More specifically, there is a $\nu_{2}>\nu_{1}$ such that across this parameter value, each member of the pair $\Gamma_{\nu}^{(0)}$, $\Gamma_{\nu}^{(1)}$ becomes locally repelling, and gives birth - via pitchfork bifurcation (\emph{cf}. \cite{CB}) - to a pair of locally attracting, smooth Jordan curves, $\Gamma_{\nu}^{(0,0)},\Gamma_{\nu}^{(0,1)}$ and $\Gamma_{\nu}^{(1,0)},\Gamma_{\nu}^{(1,1)}$, respectively. Furthermore, $\Gamma_{\nu}^{(0,0)}$ ($\Gamma_{\nu}^{(0,1)}$) is in the interior (exterior) of $\Gamma_{\nu}^{(0)}$ and $\Gamma_{\nu}^{(1,0)}$ ($\Gamma_{\nu}^{(1,1)}$) is in the interior (exterior) of $\Gamma_{\nu}^{(1)}$, $\Gamma_{\nu}^{(0,0)}\cup\Gamma_{\nu}^{(0,1)}\cup\Gamma_{\nu}^{(1,0)}\cup\Gamma_{\nu}^{(1,1)}$ is $\hat{\Phi}_{\mu}$-invariant, with these four curves forming the 4-cycle $\Gamma_{\nu}^{(0,0)}\rightarrow\Gamma_{\nu}^{(1,0)}\rightarrow\Gamma_{\nu}^{(0,1)}\rightarrow$ $\Gamma_{\nu}^{(1,1)}\rightarrow\Gamma_{\nu}^{(0,0)}$. This process continues \emph{ad infinitum} to generate a bounded monotone increasing sequence of parameter values $\nu_{1}<\nu_{2}<\nu_{3}<\cdots$, with $\nu_{n}\rightarrow\nu_{\infty}<3$, and with $\Phi_{\mu_{h}+\nu_{\infty}}$ exhibiting chaotic dynamics. Among the consequences of this cascade of bifurcations is that for $\mu\geq\mu_{h}+\nu_{\infty}$ there exists a closed, smooth, $\hat{\Phi}_{\mu}$-invariant curvilinear annulus $A$ that encloses the fixed point $p_{\ast}$ and contains an infinite number (with cardinality of the continuum) of smooth Jordan curves that can be partitioned into $n$-cycles, with $n$ ranging over the nonnegative integers, and naturally this annulus contains very intricate dynamics. We summarize this in the next result - illustrated in Fig.
3 for the Hopf bifurcation and Fig. 4 for the multiring configuration - whose proof we shall leave to the reader for now, although we plan to prove it in a more general form in a forthcoming paper. It is helpful to observe that it describes the discrete dynamical behavior embodied in the following paradigm represented in polar coordinates, with angular coordinate function similar to the circular map of Arnold \cite{VIA}: \[ (r,\theta)\rightarrow\left( R,\Theta\right) , \] where \begin{align*} R & :=\left( \nu+1\right) r\left( 1-r\right) ,\\ \Theta & :=\theta+\frac{2\pi}{a+\nu}\left( 1+kr\sin\theta\right) \;(\mathrm{mod\,}2\pi), \end{align*} and $a$ and $k$ are positive numbers.
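Since $R$ depends only on $r$, the radial dynamics of this paradigm map is exactly the logistic map, so for small $\nu$ every orbit with $0<r<1$ is drawn to the invariant circle $r^{\ast}=\nu/(\nu+1)$, on which the angular coordinate is advanced by the Arnold-type circle map. A minimal numerical sketch (the values $a=1$, $k=0.1$, $\nu=1.5$ are illustrative only):

```python
import math

def paradigm_step(r, theta, nu, a=1.0, k=0.1):
    """One iterate of the polar paradigm map: the radial part is logistic,
    the angular part is an Arnold-type circle map driven by r."""
    R = (nu + 1.0) * r * (1.0 - r)
    Theta = (theta + 2.0 * math.pi / (a + nu)
             * (1.0 + k * r * math.sin(theta))) % (2.0 * math.pi)
    return R, Theta

nu = 1.5                     # below the first doubling, so the circle attracts
r, theta = 0.3, 0.7
for _ in range(2000):
    r, theta = paradigm_step(r, theta, nu)
# the orbit settles on the invariant circle r* = nu/(nu + 1) = 0.6
```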
\noindent\textbf{Theorem 4.2. }\emph{Let }$\hat{\Phi}_{\nu}$ \emph{be defined as }$\hat{\Phi}_{\mu_{h}+\nu}$. \emph{There exists an increasing sequence }$0=\nu_{0}<\nu_{1}<\nu_{2}<\cdots\rightarrow\nu_{\infty}<3$\emph{\ such that, in addition to the }$\hat{\Phi}_{\nu}$-\emph{invariant smooth Jordan curve }$\Gamma_{\nu}$,\emph{\ which exists for all }$\nu>\nu_{0}$,\emph{\ and is locally attracting }$(\emph{repelling})$\emph{\ for }$\nu_{0}<\nu<\nu_{1} $\emph{\ }$(\nu>\nu_{1})$,\emph{\ for each }$m\in\mathbb{N}$\emph{\ and } $\nu>\nu_{m}$ \emph{there is a }$2^{m}$-\emph{\ cycle }$($\emph{of sets that are smooth Jordan curves}$)$\emph{\ of }$\hat{\Phi}_{\nu},$ \[ \mathcal{Z}_{m}:=\left\{ \Gamma_{\nu}^{i_{m}}:i_{m}\in\{0,1\}^{m}\right\} \] \emph{\ which is created from }$\mathcal{Z}_{m-1}$ \emph{via pitchfork bifurcation, and is locally attracting }$($\emph{repelling}$)$\emph{\ for }$\nu_{m}<\nu<\nu_{m+1}$ $\emph{(}\nu>\nu_{m+1})$.\emph{\ Furthermore, } $\hat{\Phi}_{\nu}^{2^{m}}$\emph{\ restricted to any of the curves } $\Gamma_{\nu}^{i_{m}}$\emph{\ is either ergodic or has periodic orbits according as the rotation number }$($\emph{which varies continuously with }$\nu)$\emph{\ is irrational or rational, respectively. Consequently, } $\hat{\Phi}_{\nu}$ \emph{has periodic orbits of arbitrarily large period for infinitely many }$\nu$\emph{\ in a small neighborhood of \ }$\nu_{\infty}$, \emph{where it can also be shown to have chaotic orbits if }$\nu>\nu_{\infty} $\emph{. Furthermore, for }$\nu\geq\nu_{\infty}$ \emph{there is a closed, smooth, }$\hat{\Phi}_{\nu}$-\emph{invariant annulus }$A$ \emph{\ enclosing the fixed point }$p_{\ast}$, \emph{which contains }$\Gamma_{\nu}$\emph{, is locally attracting and is the minimal invariant set containing all the cycles }$\mathcal{Z}_{m}$.
\noindent The invariant, locally attracting annulus $A$ may not precisely qualify as a strange attractor, yet the intricacies of the dynamics it contains - including cycles of arbitrarily large period - deserve a special name such as a \emph{pseudo-strange attractor}.
\begin{figure}
\caption{Spiral attractor at interior fixed point ($\cong(0.5639, 0.3417)$) for $\lambda=0.99,\, \mu=4.5$. The initial point is $(x,y)=(0.564,0.342)$ and the eigenvalues are $\xi \cong-0.3763\pm0.9171i$.}
\label{attractor}
\end{figure}
\begin{figure}
\caption{Hopf bifurcation at interior fixed point ($\cong(0.5639, 0.3417)$) for $\lambda=0.99,\, \mu=4.5449$, with eigenvalues $\xi\cong-0.3889\pm0.9215i$. The initial points are $(x,y)=(0.555,0.340)$ and $(x,y)=(0.558,0.34)$.}
\label{Hopf}
\end{figure}
\begin{figure}
\caption{Multiring structure around interior fixed point ($\cong(0.5639, 0.3417)$) for $\lambda=0.99,\, \mu=4.55$, with eigenvalues $\xi\cong-0.3903\pm0.922i$. The initial points are $(x,y)=(0.555,0.338)$ and $(x,y)=(0.43,0.38)$.}
\label{multiring}
\end{figure}
\section{Chaotic Dynamics and Instability}
As pointed out in Theorem 4.2, our model exhibits chaotic dynamics at the limit of the cascade of doubling bifurcations described therein. The proof of this \textquotedblleft limiting form\textquotedblright\ of chaos turns out to be rather subtle and difficult, so we shall not go into it here. Instead, we shall prove the existence of chaotic regimes for higher parameter values using a fairly simple geometric argument based upon demonstrating that a sufficiently high power (iterate) of our model map exhibits Smale horseshoe-like behavior (see \emph{e.g.} \cite{guho, kathas, palmel, wigbook}), as illustrated in Fig. 5, and also heteroclinic cycles with transverse intersections (\emph{cf}. \cite{db, palmel, wigbook}). We keep the value of $\lambda=0.99$, and set $\mu=5$, and summarize our main findings on chaos in the following result.
\noindent\textbf{Theorem 5.1. }\emph{Let }$\Phi:=\Phi_{0.99,5}$. \emph{Then there exist an }$N\in\mathbb{N}$ \emph{and a region }$Q$\emph{\ homeomorphic with a square that intersects the triangular domain }$T$, \emph{and has the following properties depicted in Fig. }$5$:
\begin{itemize} \item[(a)] $F:=\Phi^{N}$\emph{ maps }$Q$\emph{ onto a pill shaped region, which contains the fixed point }$(1,0)$\emph{ in its interior, lies along the vertical edge }$e_{v}$\emph{ of }$T$\emph{, has maximum }$y$\emph{ value of }$0.22$\emph{, and is of sufficient height to include the line segment from }$(1,0)$\emph{ to }$(1,0.2)$\emph{ in its interior.}
\item[(b)] $G:=\Phi^{N+1}$\emph{ maps }$Q$\emph{ along the curve described above in (B4) into a thin curved homeomorph }$\Phi(F(Q))$\emph{ of a square that just crosses the diagonal edge }$e_{d}$\emph{ of }$T$\emph{ in the interior of the unit square.}
\item[(c)] $H:=\Phi^{N+2}=\Phi\circ G$\emph{ maps }$Q$\emph{ into a stretched and folded homeomorph of }$Q$\emph{ that (transversely) intersects }$Q$\emph{ in a disjoint pair of curvilinear rectangles in a horseshoe-like fashion.} \end{itemize}
\noindent\emph{Consequently, }$Q$ \emph{contains a compact invariant set }$\Lambda$ \emph{on which }$H$ \emph{is topologically conjugate to a shift map }$($\emph{or equivalently, }$\Phi$ \emph{\ is topologically conjugate to a subshift}$),$\emph{\ which implies that }$\Phi$ \emph{generates chaotic dynamics in }$Q\cap\Lambda$\emph{, including a dense orbit and cycles of arbitrarily large period.}
\noindent\emph{Proof}. We begin our proof with a disk $D$ with an elliptical boundary having its center at $(0.95,0.1)$, semimajor axis length equal to $0.13$ and semiminor axis length equal to $0.075$. Then it follows from the properties of the model map delineated in (B1)-(B5) above that there exist a sufficiently large positive integer $N$ and a diffeomorph of $D$, which we denote as $Q$, such that $\Phi^{-N}(D)=Q$, where $Q$ contains the horizontal edge $e_{h}$ of $T$ as shown in Fig. 5(a). Observe here that $\Phi$ restricted to $e_{h}$ is a doubling map, symmetric with respect to $x=1/2$, taking the left vertex into the right vertex, so the inverse notation in this definition must be viewed in the set-theoretical preimage sense. Nevertheless, $Q$ satisfies $F(Q):=\Phi^{N}(Q)=D$.
Next, we consider the image of $D$ under $\Phi$; namely, $\Phi(D)=\Phi\left( F(Q)\right) =G(Q)$, which is illustrated in Fig. 5(b). To see that this is an accurate depiction of the image, first observe that for the point $(1,0.2)$ lying in the interior of $D$ near its highest point, we have $\Phi (1,0.2)=(0.8,0.8)$, which lies on the diagonal edge $e_{d}$ of $T$. Moreover, the fixed point $(1,0)$ is also an interior point of $D$, but one that lies near its lowest point. Accordingly, by virtue of the properties of $\Phi$ - in particular (B4) - the shape of $\Phi(D)$ must be that of a thickened version of part of the parabolic curve described in (B4) and must slightly overlap $e_{d}$ around the point $(0.8,0.8)$, just as depicted in Fig. 5(b).
In order to obtain the desired horseshoe-like behavior, another application of $\Phi$ is required; that is, we need to describe $\Phi^{2}(D)=\Phi\left( G(Q)\right) =H(Q)$. Taking into account the overall definition of the map, property (B3), and the fact that $\Phi(0.8,0.8)=(0.2016,0)$, it is clear that Fig. 5(c) is a rather accurate rendering of the region $H(Q)$, which intersects $Q$ in a fairly typical horseshoe type set comprised of two approximately rectangular components. With this geometric representation of $Q$ and its image $H(Q)$ in hand, we can describe the key fractal component $\Lambda$ of the nonwandering set $\Omega$ of $H$ (and \emph{a fortiori}, $\Phi$) and the topological conjugacy of the restriction of $H$ on $\Lambda$, which we denote as $h$, to a shift map on doubly-infinite binary sequences in essentially the usual way (\emph{cf}. \cite{guho, kathas, palmel, wigbook}), modulo a minor alteration necessitated by the fact that $\Phi$ fails to be injective on some subsets of $T$.
It remains to describe the alteration and the final steps in defining $\Lambda$ and establishing the topological conjugacy between $h:\Lambda \rightarrow\Lambda$ and the shift map $\sigma:2^{\mathbb{Z}}\rightarrow 2^{\mathbb{Z}}$, where $2^{\mathbb{Z}}:=\{0,1\}^{\mathbb{Z}}$ is the space of all doubly-infinite binary sequences $\ldots a_{-2}a_{-1}a_{0}a_{1}a_{2} \ldots$ with $a_{i}=0$ or $1$. To this end, we define $C_{\ell}$ and $C_{r}$ to be the left and right components of $H(Q)\cap Q$, respectively, as shown in Fig. 5(c), and observe that $C_{\ell}\subset Q_{\ell}:=Q\cap\{(x,y):x<2/5\}$ and $C_{r}\subset Q_{r}^{+}:=\left( Q\cap\{(x,y):x>3/5\}\right) \cup D$. It follows readily from the definition and properties of $\Phi$ delineated in Section 2 that by making $Q$ more slender and $N$ larger, if necessary, $\Phi$ maps $Q_{\ell}$ and $Q_{r}^{+}$ diffeomorphically onto their images, with $\Phi\left( Q_{\ell}\right) ,\Phi\left( Q\cap\{(x,y):x>3/5\}\right) \subset$ $\{(x,y):x>3/5\}$. In addition, possibly after another shrinking of $Q$ and increase of $N$, we may assume that the iterated sets $\{C_{\ell} ,\Phi^{m}\left( C_{\ell}\right) \cap Q_{r}^{+}:m\in\mathbb{N}\}$ are pairwise disjoint. We denote the restriction of $\Phi$ to $Q_{\ell}$ and $Q_{r}^{+}$ by $\Phi_{\ell}$ and $\Phi_{r}$, respectively. Furthermore, $C_{\ell}$ is the diffeomorphic image of a disjoint set $C_{\ell}^{-1}$ ($=\Phi_{\ell }^{-1}(C_{\ell})$) under $\Phi$, which intersects the edge $e_{d}$, and $C_{\ell}^{-1}$ is, in turn, the diffeomorphic image under $\Phi_{r}$ of a set $C_{\ell}^{-2}$ ($=\Phi_{r}^{-1}(C_{\ell}^{-1})$) contained in $D=\Phi^{N}(Q)$.
We can now define a unique inverse on $\Lambda\subset C_{\ell}\cup C_{r}$ - a set to be defined in this last phase of our proof. As $H^{-1}=\Phi^{-1} \circ\Phi^{-1}\circ\cdots\circ\Phi^{-1}$ ($N$ factors), it remains to select the proper branch of each of the factors in this composition so as to prescribe $H^{-1}$ unambiguously. It is easy to see that this is accomplished as follows: set \[ \Phi_{\ast}^{-1}\left( x,y\right) :=\left\{ \begin{array} [c]{cc} \Phi_{\ell}^{-1}(x,y), & \left( x,y\right) \in C_{\ell}\cup\Phi_{\ell }\left( C_{\ell}\right) \\ \Phi_{r}^{-1}(x,y) & (x,y)\in C_{\ell}^{-1}\cup\left( Q_{r}^{+} \smallsetminus\Phi_{\ell}\left( C_{\ell}\right) \right) \end{array} \right. , \] and \[ H^{-1}:=\Phi_{\ast}^{-1}\circ\Phi_{\ast}^{-1}\circ\cdots\circ\Phi_{\ast} ^{-1}\quad(N\text{ factors}), \] and define \[ \Lambda:=Q\cap\left[
{\displaystyle\bigcap\nolimits_{m\in\mathbb{Z}}}
H^{m}\left( H(Q)\cap Q\right) \right] . \] It is easy to verify that $\Lambda$ is a compact, $H$-invariant set, which is homeomorphic with the Cartesian product of a pair of 2-component Cantor sets that, in turn, is homeomorphic with $2^{\mathbb{Z}}$ employing the standard topologies. Then the restriction $h:=H_{\mid\Lambda}$ can be shown to be topologically conjugate to the shift map in the usual way (such as in \cite{guho, palmel, wigbook}), thereby completing the proof. $\blacksquare$
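The conjugacy constructed in the proof transports the familiar features of the shift map directly to $h$. In particular, the advertised cycles of arbitrarily large period and the sensitive dependence on initial conditions can be exhibited on the symbol space itself. The following is a small illustrative sketch (not part of the proof), restricted to periodic binary sequences stored by their repeating blocks:

```python
from itertools import product

def shift(block):
    # the shift map acting on a periodic binary sequence,
    # represented by one repeating block (rotate left by one symbol)
    return block[1:] + block[:1]

def iterate(block, m):
    for _ in range(m):
        block = shift(block)
    return block

# Every length-n block is fixed by the n-fold shift, so the shift map has
# 2**n points whose period divides n: cycles of arbitrarily large period.
n = 5
fixed = [b for b in product((0, 1), repeat=n) if iterate(b, n) == b]

# Sensitive dependence: two sequences agreeing in the symbols near index 0
# are eventually shifted so that they disagree at index 0.
a, b = (0, 0, 0, 0, 1), (0, 0, 0, 0, 0)
steps = next(m for m in range(n) if iterate(a, m)[0] != iterate(b, m)[0])
```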
We note here that the existence of chaotic regimes described in Theorem 5.1 could also have been demonstrated following a more detailed analysis of the iterates along the lines of the above proof - revealing both transverse homoclinic points of periodic points and transverse heteroclinic points of branches of heteroclinic cycles of periodic points, both of which imply the existence of chaos (as shown or indicated in \cite{db, guho, kathas, palmel, wigbook}). The chaotic case (with its characteristic splattering effect) is depicted in Fig. 6, which shows the iterates corresponding to three initial points selected near $(1,0)$. Note the accumulation of points near $(0.85,0)$, $(0.7,0.6)$, $(0.4,0.4)$ and $(0.35,0)$, which are near the set $\Lambda$ and its images under $\Phi$, as described above. Three initial points were used in order to get a reasonable representation of the chaotic iterates because of the sensitivity (associated with chaos) of the system and the limits of computing accuracy.
\begin{figure}
\caption{Horseshoe-like behavior of map for $\lambda=0.99,\, \mu=5$: (a) Domain and first stage of geometric image of iterated map; (b) Second stage focusing on stretching and nascent folding; and (c) Final stage of horseshoe configuration.}
\label{ffchaos}
\end{figure}
\begin{figure}
\caption{Chaotic orbits about interior fixed point ($\cong(0.5569, 0.3569)$) for $\lambda=0.99,\, \mu=4.55$, with eigenvalues $\xi\cong-0.5144\pm0.9596i$. The three initial points were taken very close to $(1,0)$.}
\label{chaos}
\end{figure}
\section{Comparison with Physical Models}
Our purpose in this section is to show that our discrete dynamical model shares many properties with actual physical realizations (and their associated mathematical models) of the \emph{R}-\emph{S} flip-flop circuit. Among the several physically based studies of flip-flop type circuit behavior, which includes the work in \cite{Chaney, Danca, KA, kac, LMH, Moser, msd, okt, rzw, ztkbn}, perhaps the best source of comparison is provided by the investigation of Okazaki \emph{et} \emph{al}. \cite{okt}, so this shall be our focus here. We shall also provide additional numerical simulation illustrations of the behavior of the orbits of our system to further highlight the areas of agreement between our dynamical model and that of \cite{okt}.
The approach in \cite{okt} is quite different from ours. Okazaki and his collaborators start with a physical realization of an \emph{R}-\emph{S} flip-flop circuit using two capacitors, one inductor, one linear resistor, one DC battery and a pair of piecewise linear resistors comprised of tunnel diodes. They then derive the state equations for their circuit (hereafter referred to as the OKT system), which is a system of three first-order, piecewise linear ordinary differential equations in two voltage variables and one current variable, and depends on several parameters associated with the various electrical elements in the circuit. In their study they find numerical solutions of this system of equations using a Runge-Kutta scheme, which they compare with data collected directly from the physical circuit using fairly standard electronic measuring and representation devices. They find that the dynamical behavior deduced from numerical simulation of their system of differential equations is in very good agreement with that observed experimentally from the physical circuit. This behavior includes Hopf bifurcation and chaotic dynamics for certain ranges of their parameters, which of course we have also observed and actually proved for our discrete dynamical model.
More specifically, they deduce - mainly by observing the phase space behavior of their numerical solutions - that there is a certain parameter value where their system develops a Hopf bifurcation on a center manifold, followed, as the parameter increases, by what appears to be a sequence of Hopf bifurcations on the periodic orbits generated. This, of course, is consistent with the qualitative behavior of our model described in Theorems 4.1 and 4.2, and illustrated in Figs. 3 and 4. Moreover, although there are one-dimensional but no two-dimensional Poincar\'{e} sections studied in \cite{okt}, several projections of the phase space structure on the two-dimensional voltage coordinate plane, which are at least indicative of Poincar\'{e} map behavior on this plane, have a striking similarity to the ring structure delineated in Theorem 4.2. Thus, there appears to be considerable qualitative agreement in the dynamical behavior of our model and the OKT system for parameter values nearly up to but just below those producing full-blown chaos.
Chaotic dynamics for the OKT system \cite{okt} is inferred primarily by numerical computation of Lyapunov exponents, analysis of approximate one-dimensional Poincar\'{e} sections, and observation of very complicated, ostensibly random outputs in the experimental monitors. The projections onto the voltage coordinate planes also have a somewhat chaotic appearance, with a complicated looking tangle of orbits attached to and partially surrounding the ring configuration mentioned above. Comparing this with the dynamics in the examples pictured in Figs. 5 and 6, which show ring configurations and the tell-tale splatter (around the rings and concentrated near the fixed point) associated with chaotic discrete dynamics, we have additional qualitative validation for our model.
\section{Concluding Remarks}
In this investigation we have introduced and analyzed a rather simple discrete dynamical model for the \textit{R}-\textit{S} flip-flop circuit, which is based upon the iterates of a two-parameter family of planar, quadratic maps. We proved that for certain parameter ranges, the dynamics of our model exhibits the qualitative behavior expected in and observed for physical realizations of the logical \textit{R}-\textit{S} flip-flop circuit, such as Hopf bifurcations and chaotic responses, including oscillatory outputs of arbitrarily large periods concentrated around states corresponding to nearly equal set and reset inputs. In addition, we indicated how the dynamics of our model displays fascinating complexity - as the parameters are varied - generated by a cascade of stability-transferring, doubling bifurcations of invariant collections of curves encircling a single equilibrium state, which produces extremely intricate orbit structures.
Although the variety of complex dynamical structures produced by our model is certainly interesting from a purely mathematical perspective, we undertook a more comprehensive validation by comparing our results with the numerically simulated and experimentally observed characteristics of a fairly standard realization of an \textit{R}-\textit{S} flip-flop circuit comprised of linear elements such as inductors and capacitors and piecewise linear components consisting of tunnel diodes. We found that the qualitative agreement between the dynamics of our model and that of the physical realization is surprisingly good. Notwithstanding this very favorable comparison, we are aware that it may be largely fortuitous. After all, our model is formulated in an essentially \emph{ad hoc} manner that relies heavily on intuition and a desire to obtain the simplest maps producing the kinds of dynamics known to be exhibited in working \textit{R}-\textit{S} flip-flop circuits. Consequently, in the near future we intend to revisit this logical circuit and investigate others of its kind using a much more direct, first-principles oriented approach along the lines of the work of Danca \cite{Danca} and Hamill \emph{et al}. \cite{hdj}. Of course, it would be particularly satisfying if we are able to show that such an approach produces essentially the same discrete model for the \textit{R}-\textit{S} flip-flop circuit investigated here, which is something that we expect but naturally remains to be seen. We also intend to formulate and prove a generalized version of Theorem 5.1, which we expect to have numerous applications in our envisaged program of developing discrete dynamical models for a host of logical circuits.
\end{document} |
\begin{document}
\newcommand{\erki}[1]{\mbox{e}^{r_{k,#1}}} \newcommand{\ezki}[1]{\mbox{e}^{z_{k,#1}}}
\newcommand{\Ov}[1]{\overline{#1}} \newcommand{\Un}[1]{\underline{#1}}
\newcommand{\ex}[1]{ \left< #1 \right>} \newcommand{\vc}[1]{{\bf #1}} \newcommand{\vcg}[1]{{\pmb #1}}
\newcommand{\dx}{{\rm d} {x}} \newcommand{\dt}{{\rm d} t } \newcommand{\ptb}[1]{\partial_{t}(#1)} \newcommand{\Dt}{\frac{ d}{dt}} \newcommand{\tn}[1]{\mbox {\F #1}} \newcommand{\lr}[1]{\left( #1 \right)} \newcommand{\intO}[1]{\int_{\Omega} #1 \ {\rm d} {x}} \newcommand{\intOj}[1]{\int_{\Omega^{1}} #1 \ {\rm d} {x}} \newcommand{\intOd}[1]{\int_{\Omega^{2}} #1 \ {\rm d} {x}} \newcommand{\intOe}[1]{\int_{\Omega_\varepsilon} #1 \ {\rm d} {x}} \newcommand{\intOB}[1]{\int_{\Omega} \left( #1 \right) \ {\rm d} {x}} \newcommand{\intRN}[1]{\int_{R^3} #1 \ {\rm d} {x}} \newcommand{\intR}[1]{\int_{R} #1 \ {\rm d} t } \newcommand{\intRR}[1]{\int_R \int_{R^3} #1 \ \dx \ \dt} \newcommand{\intT}[1]{\int_0^T #1 \ {\rm d} t } \newcommand{\intTO}[1]{\int_0^T\!\!\!\! \int_{\Omega} #1 \ \dx \ \dt} \newcommand{\intt}[1]{\int_0^t #1 \ {\rm d} t } \newcommand{\inttO}[1]{\int_0^t\!\!\!\! \int_{\Omega} #1 \ \dx \ \dt} \newcommand{\inttauO}[1]{\int_0^\tau\!\!\!\! \int_{\Omega} #1 \ \dx \ \dt} \newcommand{\intTOj}[1]{\int_0^T\!\!\!\! \int_{\Omega^{1}} #1 \ \dx \ \dt} \newcommand{\intTOd}[1]{\int_0^T\!\!\!\! \int_{\Omega^{2}} #1 \ \dx \ \dt} \newcommand{\intTOB}[1]{ \int_0^T\!\!\!\! \int_{\Omega} \left( #1 \right) \ \dx \ \dt} \newcommand{\inttOB}[1]{ \int_0^t\!\!\!\! \int_{\Omega} \left( #1 \right) \ \dx \ \dt} \newcommand{\sumkN}[1]{\sum_{k=1}^{n} #1} \newcommand{\sumlN}[1]{\sum_{l=1}^{n} #1} \newcommand{\blue}[1]{\textcolor{blue}{ #1}} \newcommand{\red}[1]{\textcolor{red}{ #1}} \newcommand{\eq}[1]{\begin{equation} \begin{split}
#1 \end{split} \end{equation}} \newcommand{\eqh}[1]{\begin{equation*} \begin{split}
#1 \end{split} \end{equation*}}
\newtheorem{thm}{Theorem}[section] \newtheorem{lem}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \newtheorem{df}{Definition} \newtheorem{rmk}[thm]{Remark}
\title{On the isothermal compressible multi-component mixture flow:\\
the local existence and maximal $L_p-L_q$ regularity of solutions} \author{T. Piasecki\footnote{Institute of Applied Mathematics and Mechanics, University of Warsaw, ul. Banacha 2, 02-097 Warszawa, Poland. E-mail: {[email protected]}. Supported by the Top Global University Project and the Polish National Science Centre grant 2018/29/B/ST1/00339.}, Y. Shibata\footnote{Department of Mathematics, Waseda University, Ohkubo 3-4-1, Shinjuku-ku, Tokyo 169-8555, Japan. Adjunct faculty member in the Department of Mechanical Engineering and Materials Science, University of Pittsburgh. E-mail: {[email protected]}. Partially supported by JSPS Grant-in-aid for Scientific Research (A) 17H0109 and Top Global University Project.}, E. Zatorska\footnote{Department of Mathematics, University College London, Gower Street, London WC1E 6BT, United Kingdom. E-mail: {[email protected]}. Supported by the Top Global University Project and the Polish Government MNiSW research grant 2016-2019 ``Iuventus Plus'' No. 0888/IP3/2016/74.}}
\date{}
\maketitle
\noindent{\bf{Abstract:}} We consider the initial-boundary value problem for the system of equations describing the flow of a compressible isothermal mixture of an arbitrarily large number of components. The system consists of the compressible Navier-Stokes equations and a subsystem of diffusion equations for the species. The subsystems are coupled through the form of the pressure and through strong cross-diffusion effects in the diffusion fluxes of the species. Assuming the existence of solutions to the symmetrized and linearized equations, proven in \cite{PSZ2}, we derive the estimates for the nonlinear equations and prove the local-in-time existence and maximal $L_p-L_q$ regularity of solutions. \normalsize
\section{Introduction} \subsection{Setting of the problem} We consider the system of equations describing the motion of an isothermal mixture of compressible gases
\begin{equation} \label{1.1}
\left.
\begin{array}{r}
\partial_{t}\varrho+\operatorname{div} (\varrho \boldsymbol{u}) = 0\\%& \mbox{w}& (0,T)\times\Omega,\\
\ptb{\varrho\boldsymbol{u}}+\operatorname{div} (\varrho \boldsymbol{u} \otimes \boldsymbol{u}) - \operatorname{div} \boldsymbol{S}+ \nabla p =\vc{0}\\%& \mbox{w}& (0,T)\times\Omega,\\
\partial_{t}{\varrho_k}+\operatorname{div} (\varrho_{k} \boldsymbol{u})+ \operatorname{div} \boldsymbol{F}_k = 0
\end{array}\right\}\quad\mbox{in}\ (0,T)\times\Omega
\end{equation} in the regular domain $\Omega\subset \mathbb{R}^3$, supplied with the boundary conditions \begin{equation} \label{bc} \boldsymbol{u}=0, \; \boldsymbol{F}_k\cdot\boldsymbol{n}=0 \quad\mbox{on}\ (0,T)\times\partial\Omega \end{equation} and the initial conditions \begin{equation} \label{ic}
\boldsymbol{u}|_{t=0}=\boldsymbol{u}^0, \quad \varrho_k|_{t=0}=\varrho_k^0,\; k=1,\ldots, n \quad \mbox{in} \; \Omega. \end{equation}
Above, in system \eqref{1.1}, $\varrho$ denotes the mass density of the mixture \begin{equation} \label{rho} \varrho=\sum_{k=1}^{n}\varrho_k, \end{equation} $\boldsymbol{u}$ is the mean velocity of the mixture, and $\varrho_k$ is the density of the $k$-th constituent. The remaining quantities: the stress tensor $\boldsymbol{S}$, the total internal pressure $p$, and the diffusion fluxes $\boldsymbol{F}_k$
are determined as functions of $(\boldsymbol{u},\varrho,\varrho_k)$ by constitutive relations which will be specified later.
The first equation of system \eqref{1.1}, usually called the continuity equation, describes the balance of the mass, and the second equation expresses the balance of the momentum. The last $n$ equations describe the balances of masses of the separate constituents (species). Note that the equations of the system cannot be independent, as the last $n$ equations must sum up to the continuity equation. Thus, here we meet a serious mathematical obstacle: the subsystem $(\ref{1.1})_3$ is degenerate parabolic in terms of $\varrho_k$.
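This dependence can be checked even at the discrete level: for arbitrary fields, summing the discretized species equations reproduces the discretized continuity equation, precisely because the fluxes sum to zero. A finite-difference sketch in one space dimension with periodic boundary conditions (all fields are arbitrary samples, chosen for illustration only; the time derivatives are replaced by random stand-in arrays since only the algebraic cancellation is at stake):

```python
import numpy as np

rng = np.random.default_rng(1)
N, n = 128, 4                            # grid points, number of species
x = np.linspace(0.0, 1.0, N, endpoint=False)

def Dx(f):
    # periodic central difference approximating d/dx on the unit interval
    return (np.roll(f, -1) - np.roll(f, 1)) * (N / 2.0)

u = np.sin(2.0 * np.pi * x)                               # velocity sample
rho_k = [2.0 + np.cos(2.0 * np.pi * (k + 1) * x) for k in range(n)]
drho_k = [rng.standard_normal(N) for _ in range(n)]       # stand-ins for d/dt rho_k
F = [rng.standard_normal(N) for _ in range(n - 1)]
F.append(-sum(F))                                         # enforce sum_k F_k = 0

species_sum = sum(dr + Dx(r * u) + Dx(f)
                  for dr, r, f in zip(drho_k, rho_k, F))
continuity = sum(drho_k) + Dx(sum(rho_k) * u)
# the flux terms cancel, so the two residuals coincide up to rounding error
```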
{\bf The stress tensor.} The viscous part of the stress tensor obeys the {\it Newton rheological law}
\begin{equation}\label{chF:Stokes}
\boldsymbol{S}(\boldsymbol{u})= 2\mu{\vc{D}}(\boldsymbol{u})+\nu\operatorname{div} \boldsymbol{u}{\bf{I}},
\end{equation} where ${\vc{D}}(\boldsymbol{u})=\frac{1}{2}\left(\nabla \boldsymbol{u}+(\nabla \boldsymbol{u})^{T}\right)$ and $\mu$, $\nu$ are the nonnegative viscosity coefficients.
{\bf Internal pressure.} The internal pressure of the mixture is determined through the Boyle law; since the temperature is constant, it is given by \begin{equation} \label{intpre} p(\varrho_{1},\ldots,\varrho_{n})=\sum_{k=1}^{n}p_{k}(\varrho_{k})=\sum_{k=1}^{n}\frac{\varrho_{k}}{m_{k}}; \end{equation} above, $m_{k}$ is the molar mass of species $k$ and, for simplicity, we set the gas constant equal to 1.
{\bf Diffusion fluxes.} A key element of the presented model is the structure of laws governing cross-diffusion processes in the mixture. The diffusion fluxes are given explicitly in the form
\begin{equation}\label{eq:diff}
\boldsymbol{F}_{k}=-\sum_{l=1}^{n} {C}_{kl}\boldsymbol{d}_l, \quad k=1,\ldots,n,
\end{equation} where ${C}_{kl}$ are multicomponent flux diffusion coefficients and $\boldsymbol{d}_k=(d_{k}^{1},d_{k}^{2},d_{k}^{3})$ is the species $k$ diffusion force
\begin{equation}\label{eq:}
d_{k}^{i}=\nabla_{x_{i}}\left({p_{k}\over p}\right)+\left({p_{k}\over p}-{\varrho_{k}\over \varrho}\right)\nabla_{x_{i}} \log{p}= \frac{1}{p}\lr{\nabla_{x_i} p_k-\frac{\varrho_k}{\varrho}\nabla_{x_i}p}.
\end{equation} Moreover, we assume that $\sum_{k=1}^{n}\boldsymbol{F}_k=\vc{0}$ pointwise. The main properties of the flux diffusion matrix $C$ are \begin{equation} \label{prop_C} C{\cal Y}={\cal Y}C^{T},\quad
N(C)=\mbox{lin}\{\vec Y\},\quad
R(C)={U}^{\bot}, \end{equation} where $Y_k=\frac{\varrho_k}{\varrho}$, ${\cal Y}=\mbox{diag}(Y_{1},\ldots,Y_{n})$, $\vec Y=(Y_1,\ldots,Y_n)^T$, $\mbox{lin}\{\vec Y\}=\{t\vec Y:\; t \in \mathbb{R} \}$, $N(C)$ is the nullspace of $C$, $R(C)$ is the range of $C$, $\vec U=(1,\ldots,1)^{T},$ and ${U}^{\bot}$ is the orthogonal complement of $\mbox{lin}\{\vec U\}$. The second property in \eqref{prop_C} implies $$ \sum_{l=1}^n \frac{1}{p}C_{kl} \frac{\varrho_l}{\varrho}\nabla p=\frac{\nabla p}{p}\sum_{l=1}^n C_{kl}Y_l=0,\quad k=1,\ldots, n, $$ therefore \eqref{eq:diff}, \eqref{eq:} are reduced to \begin{equation} \label{eq:diff1} \boldsymbol{F}_{k}=-\frac{1}{p}\sum_{l=1}^{n} {C}_{kl}\nabla p_l. \end{equation} We also define \begin{equation} \label{def_D} D_{kl}=\frac{C_{kl}}{\varrho Y_k}, \end{equation} thus the properties \eqref{prop_C} of $C$ imply \begin{equation} \label{prop_D}
D= D^{T},\quad D \geq 0, \quad
N(D)=\mbox{lin}\{\vec Y\},\quad
R(D)={Y}^{\bot}. \end{equation} The first property results from $C_{kl}Y_l=C_{lk}Y_k$, and the third from the fact that ${\cal Y}$ is diagonal. Next, $p \in R(D) \iff p_k=\frac{1}{Y_k}\sum_l C_{kl}q_l$ for some $q \in \mathbb{R}^n$. Finally, $D$ is positive definite over $U^\bot$.
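Let us also record the elementary computation behind the second equality in \eqref{eq:}:
$$ \nabla_{x_i}\left({p_{k}\over p}\right)+\left({p_{k}\over p}-{\varrho_{k}\over \varrho}\right)\frac{\nabla_{x_i} p}{p} = \frac{\nabla_{x_i} p_k}{p}-\frac{p_k\nabla_{x_i} p}{p^{2}}+\frac{p_k\nabla_{x_i} p}{p^{2}}-\frac{\varrho_k}{\varrho}\frac{\nabla_{x_i} p}{p} = \frac{1}{p}\lr{\nabla_{x_i} p_k-\frac{\varrho_k}{\varrho}\nabla_{x_i} p}. $$
In particular, since $\sum_{k=1}^{n}p_k=p$ and $\sum_{k=1}^{n}\varrho_k=\varrho$, the diffusion forces satisfy $\sum_{k=1}^{n}\boldsymbol{d}_k=\vc{0}$.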
{\bf Exemplary diffusion matrix.} An example of matrix $C$ satisfying conditions \eqref{prop_C} that will be distinguished throughout the paper is
\begin{equation}\label{Cform}
{C} =\left(
\begin{array}{cccc}
Z_{1} & -Y_{1} & \ldots & -Y_{1}\\
-Y_{2} & Z_{2} & \ldots & -Y_{2}\\
\vdots & \vdots & \ddots & \vdots \\
-Y_{n} & -Y_{n}& \ldots & Z_{n}
\end{array} \right),
\end{equation} where $Z_{k}=\sum_{{i=1} \atop {i\neq k}}^{n} Y_{i}$.\\ Using the expression \eqref{eq:diff1} for the diffusion fluxes and the properties of this matrix, one can rewrite \eqref{eq:diff} in the following form
\eq{\label{difp}
\boldsymbol{F}_k=-\frac{1}{p}\lr{\nabla p_k-Y_k\nabla p}.
}
Clearly for $C$ given by \eqref{Cform}, the matrix $D_{kl}=\frac{C_{kl}}{\varrho Y_k}$ is symmetric and positive semi-definite.
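For completeness, let us verify \eqref{difp}: since $\sum_{l=1}^{n}\boldsymbol{d}_l=\vc{0}$ (a consequence of $\sum_{l=1}^{n}p_l=p$ and $\sum_{l=1}^{n}\varrho_l=\varrho$) and $Z_k+Y_k=\sum_{i=1}^{n}Y_i=1$, we get
$$ \boldsymbol{F}_{k}=-\sum_{l=1}^{n} {C}_{kl}\boldsymbol{d}_l = -Z_k\boldsymbol{d}_k+Y_k\sum_{l\neq k}\boldsymbol{d}_l = -(Z_k+Y_k)\boldsymbol{d}_k = -\boldsymbol{d}_k = -\frac{1}{p}\lr{\nabla p_k-Y_k\nabla p}. $$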
\subsection{Discussion of the known results} The main result of this paper concerns the local well-posedness of system \eqref{1.1} in the maximal $L_p-L_q$ regularity setting. The local well-posedness, as well as global well-posedness for small data, for the two-species variant of system \eqref{1.1} has been shown in the authors' previous work \cite{PSZ}. There the so-called normal form, considered earlier e.g. in \cite{VG}, makes it possible to write immediately a parabolic equation for one of the species densities. The aim of this paper is to generalize this result to the system with an arbitrary number of constituents, although still isothermal. The key difference is that in the two-species case the part corresponding to the diffusion flux reduces to a single parabolic equation, while now we obtain only a symmetrized system. Nevertheless, the properties of $D$ imply only nonnegativity of its leading-order part, so an important step is to show its parabolicity. Dealing with a system of species instead of a single equation also requires serious modifications in the linear theory.
The mathematical investigation of multicomponent flows dates back to the analysis of a two-component incompressible model assuming the Fick law, hence no cross-diffusion; see among others \cite{BdV1} for the inviscid fluid and \cite{BdV2}-\cite{BdV3} for the viscous case.
In previous results devoted to the complete mixture model, see Giovangigli and Massot \cite{GM1,GM2}, local smooth solutions and global smooth solutions around constant equilibrium states were considered. Their method of proof was based on the normal form of the equations, hyperbolic-parabolic estimates, and the local strict dissipativity of the linearized systems. It can be seen as an application of the more abstract theory proposed for hyperbolic-parabolic systems of conservation laws by Kawashima and Shizuta \cite{K84,KS88}.
When the species equations are decoupled from the fluid equations, the resulting system of PDEs is related to the Stefan-Maxwell system analyzed for example in \cite{B2010, HMPW13}. In both of these papers isobaric, isothermal systems are considered with the barycentric velocity equal to $0$. This means that, in comparison with the system of the last $n$ equations from \eqref{1.1}, the convective term $\operatorname{div}(\varrho_k\boldsymbol{u})$ is absent and the variation of the total pressure in the diffusion fluxes \eqref{difp} is neglected. An essential difference between these systems is that in the present case the diffusion fluxes are explicit combinations of the diffusion driving forces, while for the Stefan-Maxwell system the flux-force relations need to be inverted first. This can be done using the Perron-Frobenius theory, as first noticed in \cite{VG0}. With this at hand, the local-in-time well-posedness and maximal $L_p$ regularity follow from classical results of Amann \cite{Amann} or Pr\"uss \cite{Pruss}. In the present paper we rather rely on the alternative approach of the second author and collaborators \cite{ES1, SS2, Murata, MS16, S17, SS1}, tailored to compressible fluid systems. The main result of this paper is maximal $L_p-L_q$ regularity of solutions to \eqref{1.1}, but it relies on the existence of relevant solutions to the linearized system. The latter result is proved in our other article \cite{PSZ2}, mostly for the sake of brevity, but also because it can be of independent interest. Indeed, it applies to a whole class of symmetric parabolic systems satisfying certain regularity assumptions on the coefficients, and therefore it is likely to be used in other contexts.
As far as maximal $L_p-L_q$ regularity is concerned, the coupling between the Stefan-Maxwell and the fluid equations was so far considered only for the incompressible Navier-Stokes system, see \cite{BP2017}. It was also proven, independently in \cite{CJ13} and \cite{MT13}, that the incompressible Navier-Stokes-Stefan-Maxwell system possesses a global-in-time weak solution for arbitrary data. The approach employed by Chen and J\"ungel in \cite{CJ13} relies on a certain symmetrization of the species subsystem with one of the equations eliminated, see also \cite{JS13}. They noticed that such a reformulation makes it possible to deduce parabolicity in terms of the so-called entropic variables. See also \cite{Jungel} for an overview of different problems where a similar approach can be applied. The idea of our approach is similar; however, the change of variables we propose is slightly different, in the spirit of the normal variables from \cite{VG}. Concerning analogous results for the compressible Navier-Stokes-Stefan-Maxwell system, the existence of weak solutions is so far known either for stationary flows of species with the same molar masses \cite{EZ, GPZ, PP1, PP2}, or for the exemplary diffusion matrix $C$ and stress tensor $\boldsymbol{S}$ with density-dependent viscosity coefficients \cite{EZ2,EZ3,MPZ1, MPZ2}. There are also relevant results for multi-component systems with diffusion fluxes in the form of the Fick law \cite{FPT}.
\subsection{Notation and functional spaces} Let us summarize notation used in the paper. We use standard notation $H^k_p, \; k \in \mathbb{N}$ for Sobolev spaces. For a Banach space $X$, by $L_p(0,T;X)$ we denote a Bochner space and $$ H^1_p(0,T;X)=\{ f \in L_p(0,T;X): \; \partial_t f \in L_p(0,T;X)\}. $$ Furthermore, for $s \in \mathbb{R}$ a Bessel space $H^{s}_p(\mathbb{R},X)$ is a space of $X$-valued functions for which \begin{equation*}
\|f\|_{H^{s}_p(\mathbb{R}, X)}
= \Bigl(\int_\mathbb{R} \|\mathcal{F}^{-1}[(1+\tau^2)^{s/2}\mathcal{F}[f](\tau)](t)
\|_X^p\,{\rm d}t\Bigr)^{1/p} < \infty. \end{equation*} We also recall that for $0<s<\infty$ and $m$ the smallest integer larger than $s$ we define Besov spaces on domains as intermediate spaces \begin{equation} \label{def:bsqp0} B^{s}_{q,p}(\Omega)=(L_q(\Omega),H^m_q(\Omega))_{s/m,p}, \end{equation} where $(\cdot,\cdot)_{s/m,p}$ is the real interpolation functor, see \cite[Chapter 7]{Ad}. In particular, \begin{equation} \label{def:bsqp} B^{2(1-1/p)}_{q,p}(\Omega)=(L_q(\Omega),H^2_q(\Omega))_{1-1/p,p}=(H^2_q(\Omega),L_q(\Omega))_{1/p,p}. \end{equation} Next, for abbreviation and clarity we introduce the following notation: \begin{enumerate} \item We will denote by $E(T)$ a continuous function of $T$ s.t. $E(0)=0$. Moreover, we use $C$ to denote a generic positive constant, or we use $C(X,Y)$ to specify the dependence on parameters $X$ and $Y$. \item By $\vec\cdot$ we denote an $(n-1)$-vector of functions, for example $\vec\vartheta=(\vartheta_1,\ldots,\vartheta_{n-1})^\top$. \item We introduce the norms describing regularity of our solutions; for $T>0$ we define: \eq{ \label{def:norm}
[{\boldsymbol{v}}]_{T,1}&:=\|{\boldsymbol{v}}\|_{L_p(0,T;H^2_q(\Omega))}+\|\partial_t {\boldsymbol{v}}\|_{L_p(0,T;L_q(\Omega))}, \\
[ \sigma]_{T,2}&:=\|\sigma\|_{H^1_p(0,T;H^1_q(\Omega))},\\ [\sigma,{\boldsymbol{v}},{\vec\vartheta}]_T&:=[{\boldsymbol{v}}]_{T,1}+[\sigma]_{T,2}+\sum_{k=1}^{n-1}[\vartheta_k]_{T,1}. } Then, for given $T,M>0$ we define the sets in the functional spaces: \begin{align} \label{def:H12} {\mathcal H}_{T,M}^1=\{ {\boldsymbol{v}}: [{\boldsymbol{v}}]_{T,1}\leq M \}, \qquad {\mathcal H}_{T,M}^{2}=\{\sigma: \; [\sigma]_{T,2} \leq M\} \end{align} and \begin{equation}\label{def:H} {\mathcal H}_{T,M} = \left\{(\sigma, {\boldsymbol{v}}, \vec\vartheta):
\quad (\sigma, {\boldsymbol{v}}, \vartheta_k)|_{t=0} = (0, \boldsymbol{u}^0, h^0_k) \quad\text{in $\Omega$}, \quad [\sigma,{\boldsymbol{v}},\vec\vartheta]_T \leq M\right\}. \end{equation} \end{enumerate}
\section{Symmetrization and main result} The main result of this paper is the local well-posedness, in the maximal $L_p-L_q$ regularity setting, of a certain reformulation \eqref{sys:normal} of system \eqref{1.1}. This reformulation is similar to the normal form derived in (\cite{VG}, Chapter 8) for the complete system with thermal effects. In the case of constant temperature the derivation of the symmetrized equations can be simplified considerably, and, to make our paper self-contained, we show in the Appendix the following result. \begin{prop} \label{thm:main0} Let $(\varrho,\boldsymbol{u},\varrho_1,\ldots,\varrho_n)$ be a regular solution to system (\ref{1.1}-\ref{rho}) such that \eq{\label{all_pos} \varrho_1>C,\ \ldots,\ \varrho_n>C} for some constant $C>0$. Then the change of unknowns \begin{equation} \label{def:psi} (\varrho,h_1,\ldots,h_{n-1})=\lr{ \sum_{i=1}^n \varrho_i,\log\lr{\frac{\varrho_2^{\frac{1}{m_2}}}{\varrho_1^{\frac{1}{m_1}}}},\ldots,\log\lr{\frac{\varrho_n^{\frac{1}{m_n}}}{\varrho_1^{\frac{1}{m_1}}} }}=:\Psi(\varrho_1,\ldots,\varrho_n)
\end{equation} is a diffeomorphism, and the system \eqref{1.1} is transformed to \eq{ \label{sys:normal} &\partial_{t}\varrho+\operatorname{div}(\varrho\boldsymbol{u})=0,\\ &\varrho \partial_t \boldsymbol{u}+\frac{\varrho\nabla\varrho}{\Sigma_\varrho}+\sum_{l=2}^n\lr{\varrho_l-\frac{m_l\varrho_l\varrho}{\Sigma_\varrho}}\nabla h_{l-1}+\varrho(\boldsymbol{u}\cdot \nabla)\boldsymbol{u} =\mu \Delta \boldsymbol{u} + (\mu+\nu)\nabla{\rm div}\boldsymbol{u},\\ &\sum_{l=1}^{n-1} {\cal{R}}_{kl}(\partial_t h_l+ \boldsymbol{u} \cdot \nabla h_l) + \lr{\varrho_{k+1}-\frac{m_{k+1}\varrho_{k+1}\varrho}{\Sigma_\varrho}}\operatorname{div} \boldsymbol{u} =\operatorname{div} \left( \sum_{l=1}^{n-1}{\cal{B}}_{kl}\nabla h_l\right), } with the boundary conditions \begin{equation} \label{bc:normal} \boldsymbol{u}=0, \quad \sum_{l=1}^{n-1}{\cal{B}}_{kl}\nabla h_l \cdot \boldsymbol{n} = 0, \quad k=1,\ldots,n-1,\quad\mbox{on}\ (0,T)\times\partial\Omega, \end{equation} and the initial conditions \begin{equation} \label{ic:normal}
(\boldsymbol{u},\varrho,\{h_k\}_{k=1,\ldots,n-1})|_{t=0}=(\boldsymbol{u}^0,\varrho^0,\{h_k^0\}_{k=1,\ldots,n-1}) = \Psi(\varrho_{1}^0(x),\ldots \varrho_{n}^0(x)), \end{equation}
where \begin{equation} \label{def:sigma} \Sigma_\varrho=\sum_{k=1}^n m_k \varrho_k \end{equation} and ${\cal{R}}$ and ${\cal{B}}$ are $(n-1)\times(n-1)$ matrices given by \begin{equation} \label{def:Rkl} {\cal R}_{kl}=m_{k+1}\varrho_{k+1}\delta_{kl}-\frac{m_{k+1}m_{l+1}\varrho_{k+1}\varrho_{l+1}}{\Sigma_{\varrho}}, \end{equation} \begin{equation} \label{lag:5b} {\cal{B}}_{kl}=\frac{\varrho_{k+1}\varrho_{l+1} D_{k+1,l+1}}{p} \end{equation} for $k,l=1,\ldots,n-1$. Moreover, the matrix ${\cal{R}}$ is uniformly coercive in $(x,t)$ and the same property holds for ${\cal{B}}$ provided that either: \\ {\emph{Condition 1:}} The matrix $C$ is of the form \eqref{Cform}\\ or \\ {\emph{Condition 2:}} $\Omega$ is bounded and \eqref{prop_D} is satisfied for $x\in \Ov{\Omega}$, $t\in[0,T]$.\\
\end{prop}
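Let us illustrate the coercivity with a simple sanity check (not needed in the sequel): for $n=2$ the matrices ${\cal R}$ and ${\cal{B}}$ reduce to scalars, and with the exemplary matrix \eqref{Cform} one has $D_{22}=C_{22}/(\varrho Y_2)=Y_1/(\varrho Y_2)$, so
$$ {\cal R}_{11}=m_2\varrho_2-\frac{m_2^2\varrho_2^2}{m_1\varrho_1+m_2\varrho_2}=\frac{m_1m_2\varrho_1\varrho_2}{\Sigma_\varrho}>0, \qquad {\cal{B}}_{11}=\frac{\varrho_2^{2}D_{22}}{p}=\frac{\varrho_1\varrho_2}{p\,\varrho}>0, $$
consistent with the two-species case considered in \cite{PSZ}.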
The local well-posedness of system \eqref{sys:normal},\eqref{bc:normal} in the maximal $L_p-L_q$ regularity setting is provided by our main result below. \begin{thm}\label{thm:main2} Assume that \begin{itemize} \item $2 < p < \infty$, $3 < q < \infty$, $2/p + 3/q < 1$ and $L > 0$; \item $\Omega$ is a uniform $C^3$ domain in $\mathbb{R}^3$; \item there exists a constant $C>0$ such that \begin{equation} \label{nablaD}
\forall \, k,l \in \{1,\ldots,n\} \quad \|\nabla D_{kl}(t,\cdot)\|_{L_q(\Omega)}\leq C \sum_{j=1}^n\|\nabla \varrho_j(t,\cdot)\|_{L_q(\Omega)} \quad \textrm{a.e. in} \; (0,T); \end{equation} \item there exist positive numbers $a_1$ and $a_2$ for which \begin{equation}\label{initial:0} a_1 \leq \varrho_{k}^0(x) \leq a_2 \quad \forall x \in \overline{\Omega}, \; k \in \{1, \ldots, n\}. \end{equation} \end{itemize} Let $\varrho_{k}^{0}(x), k=1,\ldots,n$, and $\boldsymbol{u}^0(x)$ be initial data for Eq. \eqref{1.1} and let $$ (\varrho^0(x),h_1^0(x),\ldots,h_{n-1}^0(x)) = \Psi(\varrho_1^0(x),\ldots, \varrho_n^0(x)). $$ Then, there exists a time $T>0$ depending on $a_1$, $a_2$ and $L$ such that if the initial data satisfy the condition: \begin{equation}\label{initial:1}
\|\nabla(\varrho_1^0,\ldots ,\varrho_n^0)\|_{L_q(\Omega)}
+ \|\boldsymbol{u}^0\|_{B^{2(1-1/p)}_{q,p}(\Omega)^3}
+ \|h_1^0,\ldots,h_{n-1}^0\|_{B^{2(1-1/p)}_{q,p}(\Omega)^{n-1}} \leq L \end{equation} and the compatibility condition: \begin{equation}\label{initial:2}
\boldsymbol{u}^0|_\Gamma=0, \quad \nabla h^0_{k} \cdot \boldsymbol{n}|_\Gamma = 0, \quad k=1,\ldots,n-1, \end{equation}
then problem \eqref{sys:normal} with boundary conditions \eqref{bc:normal} and initial conditions \eqref{ic:normal} admits a unique solution $(\varrho, \boldsymbol{u}, h_1,\ldots,h_{n-1})$ with \begin{gather*} \varrho - \varrho^0 \in H^1_p((0, T), H^1_q(\Omega)), \quad \boldsymbol{u} \in H^1_p((0, T), L_q(\Omega)^3) \cap L_p((0, T), H^2_q(\Omega)^3),\\ h_1,\ldots,h_{n-1} \in H^1_p((0, T), L_q(\Omega)) \cap L_p((0, T), H^2_q(\Omega)) \end{gather*} possessing the estimates: \begin{gather*}
\|\varrho-\varrho^0\|_{H^1_p((0, T), H^1_q(\Omega))}
+ \|\partial_t(\boldsymbol{u}, h_1,\ldots,h_{n-1})\|_{L_p(0,T;L_q(\Omega)^{n+2})}
+ \|(\boldsymbol{u}, h_1,\ldots,h_{n-1})\|_{L_p((0, T), H^2_q(\Omega)^{n+2})} \leq CL, \\ a_1 \leq \varrho(x,t) \leq na_2+a_1 \quad\text{for $(x, t) \in \Omega\times(0, T)$}, \quad
\int^T_0\|\nabla\boldsymbol{u}(\cdot, s)\|_{L_\infty(\Omega)}\,ds \leq \delta. \end{gather*} Here, $C$ is some constant independent of $L$, and $\delta$ is a small positive parameter. \end{thm} Let us state some remarks concerning our main result. \begin{rmk}
Notice that due to \eqref{def_D} the requirement \eqref{nablaD} is satisfied for the special form \eqref{Cform} provided $C_1 \leq |\varrho_k| \leq C_2$ for some positive constants $C_1<C_2$. \end{rmk} \begin{rmk} The parameter $\delta$ above remains small for large times. This is especially important for the existence of global-in-time solutions, not included in the present study. \end{rmk}
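\begin{rmk} For instance, for $C$ given by \eqref{Cform} one computes directly
$$ D_{kl}=-\frac{1}{\varrho}\quad (k\neq l),\qquad D_{kk}=\frac{1-Y_k}{\varrho Y_k}=\frac{1}{\varrho_k}-\frac{1}{\varrho}, $$
hence
$$ \nabla D_{kl}=\frac{\nabla\varrho}{\varrho^{2}}\quad (k\neq l),\qquad \nabla D_{kk}=-\frac{\nabla\varrho_k}{\varrho_k^{2}}+\frac{\nabla\varrho}{\varrho^{2}}, $$
and, since $\nabla\varrho=\sum_{j=1}^{n}\nabla\varrho_j$, the estimate \eqref{nablaD} follows with a constant depending only on the lower bound $C_1$. \end{rmk}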
\begin{rmk} Due to conditions \eqref{coerc:B} we can apply the inverse of ${\cal{B}}$ to the boundary conditions \eqref{bc:normal}, which leads to an equivalent formulation of the boundary conditions in the standard form \begin{equation} \label{bc:normal1} \boldsymbol{u}=0, \quad \nabla h_{k} \cdot \boldsymbol{n} = 0, \quad k=1,\ldots,n-1,\quad\mbox{on}\ (0,T)\times\partial\Omega. \end{equation} \end{rmk} \begin{rmk} The condition $\frac{2}{p}+\frac{3}{q}<1$ deserves a more detailed comment. First of all, it is stronger than the condition $\frac{2}{p}+\frac{3}{q} \neq 1$ imposed in Theorems \ref{thm:lin1} and \ref{thm:main1}, which gives solvability of the associated linear problems. A natural question is whether the condition in Theorem \ref{thm:main2} can be relaxed. The answer is partially positive: one could allow $\frac{2}{p}+\frac{3}{q}>1$ with additional constraints on $p,q$, following \cite{SSZ}. However, this would come at the price of numerous additional technicalities that we omit here for brevity. \end{rmk}
A key requirement in the proof of our main result is the coercivity of the matrices ${\cal R}$ and ${\cal{B}}$. The details are given in the Appendix; however, it is worth mentioning at this point that we need the species densities to be bounded from below by a positive constant. Note that the statement of Theorem \ref{thm:main2} provides us only with bounded functions $h_i$ given by \eqref{def:psi}. Let us therefore check that these conditions are in fact equivalent. The implication in one direction follows immediately from \eqref{def:psi}; for the other one we have: \begin{lem}\label{lem:1} Let $h_i$ given by \eqref{def:psi} be bounded and let \begin{equation} \label{vrpos:0} \varrho \geq C>0. \end{equation} Then \begin{equation}\label{rhoidown} \varrho_i \geq C>0, \quad i=1,\ldots,n. \end{equation} \end{lem} \emph{Proof.} Assume $\exists i \in\{1,\ldots, n-1\}$ and $(x_0,t_0)$ s.t. $$ \lim_{(x,t)\longrightarrow (x_0,t_0)}\varrho_{i+1}(x,t)=0. $$ Then \begin{equation} \label{vrpos:1} \lim_{(x,t)\longrightarrow (x_0,t_0)}\varrho_1(x,t)=0 \end{equation} since otherwise $h_i(x,t)$ would be unbounded from below. This in turn implies that \begin{equation} \label{vrpos:2} \lim_{(x,t)\longrightarrow (x_0,t_0)}\varrho_{k+1}(x,t)=
0 \quad \forall\; 1 \leq k \leq n-1 \end{equation} since otherwise the corresponding $h_{k}$ would be unbounded from above. This means that $\lim_{(x,t)\longrightarrow (x_0,t_0)}\sum_{k=1}^n\varrho_k(x,t)=0$, which contradicts \eqref{vrpos:0}.
\rightline{ $\square$}
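Let us note in passing that the proof can be made quantitative: inverting \eqref{def:psi} gives
$$ \varrho_{i+1}=\varrho_1^{m_{i+1}/m_1}e^{m_{i+1}h_i},\quad i=1,\ldots,n-1, \qquad\mbox{hence}\qquad \varrho=\varrho_1+\sum_{i=1}^{n-1}\varrho_1^{m_{i+1}/m_1}e^{m_{i+1}h_i}. $$
The right-hand side is a continuous, strictly increasing function of $\varrho_1$ vanishing at $\varrho_1=0$, so the bounds $|h_i|\leq H$ and $\varrho\geq C$ yield a lower bound on $\varrho_1$, and then on each $\varrho_{i+1}$, depending only on $C$, $H$ and the molar masses.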
Let us finish this section with presenting the outline of the rest of the paper.
In Section \ref{S:Lag} we rewrite the problem in Lagrangian coordinates; this step is necessary to apply the maximal $L_p-L_q$ regularity theory. In Section \ref{S:lin} we linearize the problem around the initial condition. Section \ref{S:Nonl} is dedicated to nonlinear estimates which are used to close the fixed point argument and prove Theorem \ref{thm:main2} using the existence result for the linearized system from Theorem \ref{thm:main1}, the proof of which can be found in \cite{PSZ2}.
\section{Lagrangian coordinates}\label{S:Lag} We begin the proof of Theorem \ref{thm:main2} by transforming the symmetrized system \eqref{sys:normal} to the Lagrangian coordinates $x = \Phi(y,t)$ related to the vector field $\boldsymbol{v}$: \begin{equation}\label{lag:1} x = y + \int^t_0\boldsymbol{v}(y, s)\,ds. \end{equation} Then for any differentiable function $f$ we have \begin{equation} \label{dt_lag} \partial_t f(\Phi(t,y),t)=\partial_t f+\boldsymbol{u} \cdot \nabla_x f. \end{equation} Since \begin{equation}\label{lag:2} \frac{\partial x_i}{\partial y_j} = \delta_{ij} + \int^t_0\frac{\partial v_i}{\partial y_j}(y, s)\,ds, \end{equation} assuming that \begin{equation}\label{assump:1}
\sup_{t \in (0,T)}\int^t_0\|\nabla\boldsymbol{v}(\cdot, s)\|_{L_\infty(\Omega)}\,ds \leq \delta \end{equation} for a sufficiently small positive constant $\delta$, the matrix $\partial x/\partial y = (\partial x_i/\partial y_j)$ has the inverse \begin{equation}\label{lag:3} \Bigl(\frac{\partial x_i}{\partial y_j}\Bigr)^{-1} = \boldsymbol{I} + {\bf V}^0({\bf k}_{\boldsymbol{v}}), \quad {\bf k}_{\boldsymbol{v}} = \int^t_0\nabla\boldsymbol{v}(y, s)\,ds. \end{equation} Here, $\boldsymbol{I}\,$ is the $3\times 3$ identity matrix, and ${\bf V}^0({\bf k})$ is a $3\times 3$ matrix of smooth functions with ${\bf V}^0(0) = 0$. We have \begin{equation}\label{lag:4} \nabla_x = (\boldsymbol{I} + {\bf V}^0({\bk_{\bv}}))\nabla_y, \quad \frac{\partial}{\partial x_i} = \sum_{j=1}^3 (\delta_{ij} + V^0_{ij}({\bk_{\bv}})) \frac{\partial}{\partial y_j}. \end{equation} Moreover (see for instance \cite{St1}), the map $\Phi(y, t)$ is a bijection from $\Omega$ onto $\Omega$.
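For concreteness (any smooth matrix function with the stated properties would serve equally well), one admissible choice of ${\bf V}^0$ is the Neumann series
$$ {\bf V}^0({\bf k})=\sum_{j=1}^{\infty}(-{\bf k})^{j}, \qquad\mbox{so that}\qquad (\boldsymbol{I}+{\bf k})^{-1}=\boldsymbol{I}+{\bf V}^0({\bf k}), \quad |{\bf V}^0({\bf k}_{\boldsymbol{v}})|\leq\frac{\delta}{1-\delta}, $$
since \eqref{assump:1} guarantees $|{\bf k}_{\boldsymbol{v}}(y,t)|\leq\delta<1$.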
We define our unknown functions in Lagrangian coordinates: \begin{equation}\label{lag:5} \boldsymbol{v}(y, t) = \boldsymbol{u}(x, t), \quad \eta(y, t) = \varrho(x, t), \quad \vartheta_i(y, t) = h_i(x, t), \; i=1,\ldots,n-1, \end{equation} and we denote $$\vec{\vartheta}:=(\vartheta_1,\ldots,\vartheta_{n-1})^\top.$$ We now show that $U=(\boldsymbol{v},\eta,\vec\vartheta)$ satisfies the system \eq{\label{lag:sys} &\partial_t\eta + \eta{\rm div}\boldsymbol{v} = R_1(U)
\\ &\eta \partial_t \boldsymbol{v} - \mu \Delta \boldsymbol{v} - (\mu+\nu)\nabla{\rm div}\boldsymbol{v} +\frac{\eta}{\Sigma_\varrho}\nabla\eta +\sum_{l=1}^{n-1}\lr{\varrho_{l+1}-\frac{m_{l+1}\varrho_{l+1}\varrho}{\Sigma_\varrho}}\nabla \vartheta_l = \vc{R}_2(U)\\ &\sum_{l=1}^{n-1} {\cal R}_{kl}\partial_t \vartheta_l + \lr{\varrho_{k+1}-\frac{m_{k+1}\varrho_{k+1}\varrho}{\Sigma_\varrho}}\operatorname{div} \boldsymbol{v}-{\rm div}\lr{\sum_{l=1}^{n-1} {\cal{B}}_{kl}\nabla\vartheta_l}=R^k_3(U), \quad k=1,\ldots,n-1 } supplemented with the boundary conditions \begin{equation} \label{lag:bc}
\boldsymbol{v}|_{\partial \Omega}=0, \quad \nabla \vartheta_k\cdot\boldsymbol{n}|_{\partial \Omega}=R^k_4(U), \quad k=1,\ldots,n-1 \end{equation}
where \begin{equation} \label{def:vrk} (\varrho_1,\ldots,\varrho_n)=(\varrho_1,\ldots,\varrho_n)(\eta,\vec{\vartheta})=\Psi^{-1}(\eta,\vec{\vartheta}). \end{equation} \begin{rmk} In the remainder of the paper we write simply $\varrho_k$ keeping in mind that we have the dependence \eqref{def:vrk} since we work in Lagrangian coordinates. \end{rmk} We now derive the precise form of terms on the right hand side of \eqref{lag:sys},\eqref{lag:bc}. First of all we have \begin{equation}\label{lag:div} {\rm div}_x = {\rm div}_y + \sum_{i,j=1}^3V^0_{ij}({\bk_{\bv}})\frac{\partial v_i}{\partial y_j}, \end{equation} therefore we easily obtain \eqref{lag:sys}$_1$ with \begin{equation}\label{lag:6} R_1(U) = -\eta\sum_{i,j=1}^3 V^0_{ij}({\bk_{\bv}})\frac{\partial v_i}{\partial y_j}. \end{equation} Now we need to transform second order operators. By \eqref{lag:4}, we have $$ \Delta_x \boldsymbol{u} = \sum_{k=1}^3\frac{\partial}{\partial x_k}\lr{\frac{\partial \boldsymbol{u}}{\partial x_k}} = \sum_{k,l,m=1}^3\lr{\delta_{kl} + V^0_{kl}({\bk_{\bv}})} \frac{\partial}{\partial y_l} \lr{\lr{\delta_{km} + V^0_{km}({\bk_{\bv}})}\frac{\partial \boldsymbol{v}}{\partial y_m}}. 
$$ Therefore $$\Delta_x \boldsymbol{u} = \Delta_y \boldsymbol{v} + A_{2\Delta}({\bk_{\bv}})\nabla^2_y\boldsymbol{v} + A_{1\Delta}({\bk_{\bv}})\nabla_y\boldsymbol{v} $$ with \eq{ A_{2\Delta}({\bk_{\bv}})\nabla^2_y\boldsymbol{v} &= 2\sum_{l,m=1}^3V^0_{lm}({\bk_{\bv}}) \frac{\partial^2\boldsymbol{v}}{\partial y_l\partial y_m} + \sum_{k,l, m=1}^3 V^0_{kl}({\bk_{\bv}})V^0_{km}({\bk_{\bv}}) \frac{\partial^2\boldsymbol{v}}{\partial y_l \partial y_m}, \label{a2delta} } \eq{ A_{1\Delta}({\bk_{\bv}})\nabla_y\boldsymbol{v} = &\sum_{l, m=1}^3(\nabla_{\bk_{\bv}} V^0_{l m})({\bk_{\bv}}) \int^t_0(\partial_l\nabla_y\boldsymbol{v})\,ds \frac{\partial \boldsymbol{v}}{\partial y_m}\\ &+ \sum_{k,l, m=1}^3 V^0_{kl}({\bk_{\bv}}) (\nabla_{\bk_{\bv}} V^0_{km})({\bk_{\bv}}) \int^t_0\partial_l\nabla_y\boldsymbol{v}\,ds\frac{\partial \boldsymbol{v}}{\partial y_m}, \label{a1delta} } where $(\nabla_{\bk_{\bv}} V^0_{km})({\bk_{\bv}})$ denotes $ \lr{V^0_{km}}'({\bk_{\bv}})$.
Similarly, for $j\in\{1,2,3\}$ we have $$\frac{\partial}{\partial x_j}{\rm div}_x\boldsymbol{u} = \sum_{k=1}^3(\delta_{jk} + V^0_{jk}({\bk_{\bv}}))\frac{\partial}{\partial y_k} \lr{{\rm div}_y\boldsymbol{v} + \sum_{l, m=1}^3 V^0_{l m}({\bk_{\bv}})\frac{\partial v_l}{\partial y_m}}, $$ so we obtain $$\frac{\partial}{\partial x_j}{\rm div}_x\boldsymbol{u} = \frac{\partial}{\partial y_j}{\rm div}_y\boldsymbol{v} + A_{2{\rm div}, j}({\bk_{\bv}})\nabla^2_y\boldsymbol{v} + A_{1{\rm div}, j}({\bk_{\bv}})\nabla_y\boldsymbol{v}, $$ where \eq{ A_{2{\rm div},j}({\bk_{\bv}})\nabla^2_y\boldsymbol{v} & = \sum_{l, m=1}^3V^0_{l m}({\bk_{\bv}})\frac{\partial^2 v_l}{\partial y_m \partial y_j} + \sum_{k=1}^3 V^0_{jk}({\bk_{\bv}})\frac{\partial}{\partial y_k}{\rm div}_y\boldsymbol{v} + \sum_{k, l, m=1}^3V^0_{jk}({\bk_{\bv}})V^0_{l m}({\bk_{\bv}}) \frac{\partial^2v_l}{\partial y_k \partial y_m}, \label{a2div} } \eq{ A_{1{\rm div}, j}({\bk_{\bv}})\nabla_y\boldsymbol{v}
=& \sum_{l, m=1}^3(\nabla_{{\bk_{\bv}}} V^0_{l m})({\bk_{\bv}}) \int^t_0\partial_j\nabla_y\boldsymbol{v}\,ds\frac{\partial v_l}{\partial y_m} \\ &+ \sum_{k,l, m=1}^3V^0_{jk}({\bk_{\bv}})(\nabla_{{\bk_{\bv}}} V^0_{l m})({\bk_{\bv}}) \int^t_0\partial_k\nabla_y\boldsymbol{v}\,ds\frac{\partial v_l}{\partial y_m}. \label{a1div} } Therefore, transforming also $\nabla_x \varrho$ and $\nabla_x h_l$ we obtain \eqref{lag:sys}$_2$ with \eq{ \label{lag:7} \vc{R}_2(U) =& \mu A_{2\Delta}({\bk_{\bv}})\nabla^2_y\boldsymbol{v} + \mu A_{1\Delta}({\bk_{\bv}})\nabla_y\boldsymbol{v} + (\mu+\nu) A_{2{\rm div}}({\bk_{\bv}})\nabla^2_y\boldsymbol{v} + (\mu+\nu) A_{1{\rm div}}({\bk_{\bv}})\nabla_y\boldsymbol{v} \\ &- \frac{\eta}{\Sigma_\varrho}{\bf V}^0({\bk_{\bv}})\nabla_y\eta - {\bf V}^0({\bk_{\bv}})\sum_{l=2}^n\lr{\varrho_l-\frac{m_l\varrho_l\varrho}{\Sigma_\varrho}}\nabla_y \vartheta_{l-1}, } where $A_{i{\rm div}}\nabla^i_y\boldsymbol{v},\ i=1,2$ are vectors with coordinates $A_{i{\rm div},j}\nabla^i_y\boldsymbol{v},\ j=1,2,3$.
Finally we transform the species balance equations. We have \begin{equation*}\begin{split} {\rm div}_x({\cal{B}}_{kl}\nabla_x h_l) &={\cal{B}}_{kl}(\Delta_y\vartheta_l+A_{2\Delta}({\bk_{\bv}})\nabla^2_y\vartheta_l+A_{1\Delta}({\bk_{\bv}})\nabla_y \vartheta_l)\\ &\quad+\lr{\nabla_y {\cal{B}}_{kl}+{\bf V}^0({\bk_{\bv}})\nabla_y {\cal{B}}_{kl}}\lr{\nabla_y \vartheta_l+{\bf V}^0({\bk_{\bv}})\nabla_y \vartheta_l}\\ &={\rm div}_y({\cal{B}}_{kl}\nabla_y\vartheta_l)+R^{kl}_3(U), \end{split}\end{equation*} where \eq{\label{lag:8} R^{kl}_3(U)=&{\cal{B}}_{kl}(A_{2\Delta}({\bk_{\bv}})\nabla^2_y\vartheta_l+A_{1\Delta}({\bk_{\bv}})\nabla_y \vartheta_l)\\ &+{\bf V}^0({\bk_{\bv}})\nabla_y {\cal{B}}_{kl}(\nabla_y\vartheta_l+{\bf V}^0({\bk_{\bv}})\nabla_y\vartheta_l)+(\nabla_y {\cal{B}}_{kl}){\bf V}^0({\bk_{\bv}})\nabla_y\vartheta_l. } Therefore, transforming also $\operatorname{div} \boldsymbol{u}$, we obtain \eqref{lag:sys}$_3$ with \begin{equation}\label{lag:10} R^k_{3}(U)=\sum_{l=1}^{n-1} R_{3}^{kl}(U)-\lr{\varrho_{k+1}-\frac{m_{k+1}\varrho_{k+1}\varrho}{\Sigma_\varrho}}\sum_{j,m=1}^3V^0_{jm}({\bk_{\bv}})\frac{\partial v_j}{\partial y_m}. \end{equation} It remains to transform the boundary conditions. For this purpose notice that $$\boldsymbol{n}(x) = \boldsymbol{n}\lr{y + \int^t_0\boldsymbol{v}(y, s)\,ds} = \boldsymbol{n}(y) + \int^1_0(\nabla\boldsymbol{n}) \lr{y + \tau\int^t_0\boldsymbol{v}(y, s)\,ds}\,d\tau \int^t_0\boldsymbol{v}(y, s)\,ds, $$ and therefore we obtain \eqref{lag:bc} with \eq{\label{lag:9} R_4^k(U)=&\boldsymbol{n}\lr{y + \int^t_0\boldsymbol{v}(y, s)\,ds}\cdot ({\bf V}^0({\bk_{\bv}})\nabla_y \vartheta_k)\\ &+ \left\{\int^1_0(\nabla\boldsymbol{n}) \lr{y + \tau\int^t_0\boldsymbol{v}(y, s)\,ds}\,d\tau \int^t_0\boldsymbol{v}(y, s)\,ds\right\} \cdot\nabla_y\vartheta_k. }
\section{Linearization}\label{S:lin} \subsection{Formulation of linearized system} We now linearize the system in the Lagrangian coordinates \eqref{lag:sys} around the initial conditions. For this purpose we introduce small perturbations \begin{equation} \label{lin1:1} \eta=\sigma+\varrho^0,\quad \varrho_l=\sigma_l+\varrho^0_l, \end{equation} following the convention introduced in the previous section that $\varrho_l$ are the functions in the Lagrangian coordinates.
\noindent Let us denote $$ \Sigma_\varrho^0=\sum_{k=1}^n m_k \varrho^0_k, \quad p^0=\sum_{k=1}^n \frac{\varrho_k^0}{m_k}, $$ and \begin{equation} \label{lin1:1b} h^0_k=\frac{1}{m_{k+1}}\log \varrho^0_{k+1} - \frac{1}{m_1}\log \varrho^0_1, \quad k=1,\ldots,n-1. \end{equation} Observe that due to \eqref{initial:0} we have \begin{equation} n a_1 \leq \varrho^0 \leq n a_2, \end{equation} as well as $$
|h^0_k|\leq \lr{\frac{1}{m_{k+1}}+\frac{1}{m_1}}\max\{|\log a_1|,|\log a_2|\}. $$
The linearization of the continuity equation is straightforward, while for the momentum equation we have $$ \frac{\eta}{\Sigma_{\varrho}}\nabla \eta = \frac{\varrho^0}{\Sigma_\varrho^0} \nabla \sigma + \varrho^0 \nabla \sigma \left( \frac{1}{\Sigma_\varrho}-\frac{1}{\Sigma_\varrho^0} \right) + \frac{\eta}{\Sigma_\varrho}\nabla\varrho^0 + \frac{\sigma}{\Sigma_\varrho}\nabla\sigma $$ and \begin{equation} \label{lin1:2} \frac{m_l\varrho_l\varrho}{\Sigma_\varrho}= \frac{m_l\varrho^0_l\varrho^0}{\Sigma_\varrho^0}+m_l\varrho^0\varrho^0_l\left( \frac{1}{\Sigma_\varrho}-\frac{1}{\Sigma_\varrho^0} \right) +\frac{m_l}{\Sigma_\varrho}(\varrho_l\sigma+\varrho^0\sigma_l). \end{equation} Similarly we linearize ${\cal{R}}_{kl}$ in the species equations, while for the reduced diffusion matrix we use \begin{equation} \label{lin1:3} {\cal{B}}_{k-1,l-1}=\frac{\varrho_k \varrho_l D_{kl}}{p}=\frac{\varrho^0_k\varrho^0_lD_{kl}^0}{p^0} +\frac{\varrho_k\varrho_lD_{kl}-\varrho^0_k\varrho^0_lD_{kl}^0}{p}+\varrho^0_k\varrho^0_lD_{kl}^0\left(\frac{1}{p}-\frac{1}{p^0}\right). \end{equation} Therefore we obtain the following linearized system \begin{align} \label{lin1:sys} &\partial_{t}\sigma + \varrho^0 {\rm div} \boldsymbol{v} = f_1(U)\\ &\varrho^0 \partial_t \boldsymbol{v} - \mu\Delta\boldsymbol{v}-(\mu+\nu)\nabla{\rm div} \boldsymbol{v} + \gamma_1 \nabla \sigma + \sum_{l=1}^{n-1} \gamma_2^{l} \nabla\vartheta_{l}={\vc f}_2(U)\\ &\sum_{l=1}^{n-1} {\cal R}_{kl}^0\partial_t \vartheta_l + \gamma_2^k\operatorname{div} \boldsymbol{v} -{\rm div}\lr{\sum_{l=1}^{n-1} {\cal{B}}_{kl}^0\nabla\vartheta_l}=f^k_3(U), \quad k=1,\ldots,n-1 \end{align}
in $\Omega\times(0, T)$, supplied with the boundary conditions \begin{equation} \label{lin1:bc}
\boldsymbol{v}|_{\partial \Omega}=0, \quad \nabla \vartheta_k \cdot \boldsymbol{n}|_{\partial \Omega}=f^k_4(U), \quad k=1,\ldots,n-1 \end{equation} and initial conditions \begin{equation} \label{lin1:ic}
(\sigma,\boldsymbol{v},\vec\vartheta)|_{t=0}=(0,\boldsymbol{u}^0, \vec h^0), \end{equation} where we denote $$\vec h^0=(h_1^0,\ldots,h_{n-1}^0),$$ $$D_{kl}^0=D_{kl}(\varrho^0), \quad {\cal{R}}_{kl}^0=m_{k+1}\varrho^0_{k+1}\delta_{kl}-\frac{m_{k+1}m_{l+1}\varrho^0_{k+1}\varrho^0_{l+1}}{\Sigma_{\varrho}^0}, \quad {\cal{B}}_{kl}^0=\frac{\varrho^0_{l+1}\varrho^0_{k+1}D_{k+1,l+1}^0}{p^0},$$ $$\gamma_1=\frac{\varrho^0}{\Sigma_\varrho^0}, \quad \gamma_2^k=\varrho^0_{k+1}-\frac{m_{k+1}\varrho^0_{k+1}\varrho^0}{\Sigma_\varrho^0}, $$ and the right-hand side is given by \begin{equation} \label{lin1:5a} f_1(U)=R_1(U)-\sigma {\rm div} \boldsymbol{v}, \end{equation} \begin{equation} \label{lin1:5b} \begin{split} {\vc f}_2(U)=&\vc{R}_2(U)-\sigma\partial_t \boldsymbol{v} - \varrho^0\nabla\sigma \left( \frac{1}{\Sigma_\varrho}-\frac{1}{\Sigma_\varrho^0} \right) - \frac{\eta}{\Sigma_\varrho}\nabla\varrho^0 - \frac{\sigma}{\Sigma_\varrho}\nabla\sigma\\ &+\sum_{l=1}^{n-1}\lr{-\sigma_{l+1}+m_{l+1}\varrho_{l+1}^0\varrho^0\left( \frac{1}{\Sigma_\varrho}-\frac{1}{\Sigma_\varrho^0} \right)+\frac{m_{l+1}}{\Sigma_\varrho}(\varrho_{l+1}\sigma+\varrho^0\sigma_{l+1})}\nabla\vartheta_{l}, \end{split} \end{equation} \eq{ \label{lin1:5c} f^k_3(U) &=R^k_3(U) - \sigma_{k+1}{\rm div} \boldsymbol{v} + \left[ m_{k+1}\varrho^0\varrho^0_{k+1}\lr{\frac{1}{\Sigma_\varrho}-\frac{1}{\Sigma_\varrho^0}}+\frac{m_{k+1}}{\Sigma_\varrho}(\varrho_{k+1}\sigma+\varrho^0\sigma_{k+1})\right]{\rm div} \boldsymbol{v}\\ &+\sum_{l=1}^{n-1}\left( - \delta_{kl}m_{k+1}\sigma_{k+1} +m_{k+1} m_{l+1}\left[\varrho^0_{k+1}\varrho^0_{l+1}\left(\frac{1}{\Sigma_\varrho}-\frac{1}{\Sigma_\varrho^0}\right)+ \frac{\varrho_{k+1}\sigma_{l+1}+\varrho^0_{l+1}\sigma_{k+1}}{\Sigma_\varrho} \right]\right)\partial_t\vartheta_{l}\\ &+{\rm div} \left( \sum_{l=1}^{n-1} \left[\frac{\varrho_{k+1}\varrho_{l+1}D_{k+1,l+1}-\varrho^0_{k+1}\varrho^0_{l+1}D_{k+1,l+1}^0}{p}+\varrho^0_{k+1}\varrho^0_{l+1}D_{k+1,l+1}^0\lr{\frac{1}{p}-\frac{1}{p^0}}\right]\nabla\vartheta_{l}
\right), } \begin{equation} \label{lin1:5d} f^k_4(U)=R^k_4(U). \end{equation}
\subsection{Solvability of the complete linear system} \subsubsection{Auxiliary results}
To prove existence of local-in-time strong solutions to system \eqref{lin1:sys} with fixed and given $f_1, {\vc f}_2, f_3^k$, and $f_4^k$ we will use some auxiliary results for two subsystems. First let us recall a relevant existence result for the fluid part (for the proof see \cite{PSZ}, Theorem 5.1): \begin{thm} \label{thm:lin1} Assume $1 < p, q < \infty$, $2/p + 1/q \neq 1$, $T>0$ and that $\Omega$ is a uniformly $C^2$ domain in $\mathbb{R}^N$ $(N \geq 2)$. Assume moreover that $\varrho^0\in H^1_q(\Omega)$, $\boldsymbol{u}^0 \in B^{2(1-1/p)}_{q,p}(\Omega)^N$, $\tilde f_1 \in L_p((0, T), H^1_q(\Omega))$ and $\tilde {\vc f}_2 \in L_p((0, T), L_q(\Omega)^N)$. Then the problem \begin{equation} \left\{ \begin{aligned} \partial_{t}\sigma+ \varrho^0 {\rm div} \boldsymbol{v} &= \tilde f_1 &\quad&\text{in $\Omega\times(0, T)$}, \\ \varrho^0 \partial_t \boldsymbol{v} - \mu\Delta\boldsymbol{v}-(\mu+\nu)\nabla{\rm div} \boldsymbol{v} + \gamma_1 \nabla \sigma &= \tilde{\vc f}_2&\quad&\text{in $\Omega\times(0, T)$}, \\
\boldsymbol{v}|_{\partial \Omega}&=0&\quad&\text{on $\Gamma \times (0, T)$}, \\
(\sigma,\boldsymbol{v})|_{t=0}&=(0,\boldsymbol{u}^0)&\quad&\text{in $\Omega$}, \end{aligned}\right. \end{equation} admits a solution $(\sigma,\boldsymbol{v})$ such that \begin{align}
&\|\boldsymbol{v}\|_{L_p((0, T), H^2_q(\Omega))}
+ \|\partial_t\boldsymbol{v}\|_{L_p((0, T), L_q(\Omega))}
+ \| \sigma \|_{H^1_p(0,T;H^1_q(\Omega))} \\ &\quad
\leq Ce^{cT}\lr{\|\varrho^0\|_{H^1_q(\Omega)}+\|\boldsymbol{u}^0\|_{B^{2(1-1/p)}_{q,p}(\Omega)}
+ \|\tilde f_1\|_{L_p((0, T), H^1_q(\Omega))}
+ \|\tilde{\vc f}_2\|_{L_p((0, T), L_q(\Omega))}}. \end{align} \end{thm} For the species subsystem we recall the following theorem, which gives solvability of a linear problem in the maximal $L_p$-$L_q$ regularity setting; its proof can be found in our previous work \cite{PSZ2}. For general $m$ species we consider $k\in\{1,\ldots,m\}$ and the following set of equations \begin{equation}\label{1.1?}\left\{ \begin{aligned} \sum_{\ell=1}^m {\cal{R}}_{k\ell}\partial_t \vartheta_\ell -{\rm div}\lr{\sum_{\ell=1}^m {\cal{B}}_{k\ell}\nabla \vartheta_\ell} & = \tilde f_3^k &\quad&\text{in $\Omega\times(0, T)$}, \\ \sum_{\ell=1}^m {\cal{B}}_{k\ell}\nabla \vartheta_\ell \cdot \boldsymbol{n} & = \tilde f_4^k &\quad&\text{on $\Gamma \times (0, T)$}, \\
\vartheta_k|_{t=0} & = h_{k}^0 &\quad&\text{in $\Omega$}, \end{aligned}\right. \end{equation} where ${\cal{B}}= {\cal{B}}(x)$ and ${\cal{R}}={\cal{R}}(x)$ are $m\times m$ matrices whose $(k, \ell)^{\rm th}$ components are ${\cal{B}}_{k\ell}(x)$ and ${\cal{R}}_{k\ell}(x)$, respectively.
\begin{thm} \label{thm:main1} Assume that \begin{itemize} \item there exists a number $M_0$ for which \begin{equation}\label{1.2}\begin{aligned}
&|{\cal{B}}_{k\ell}(x)|, |{\cal{R}}_{k\ell}(x)| \leq M_0, \quad \text{for any $x \in \Omega$}, \\
&|{\cal{B}}_{k\ell}(x) - {\cal{B}}_{k\ell}(y)|\leq M_0|x-y|^\sigma, \quad
|{\cal{R}}_{k\ell}(x) - {\cal{R}}_{k\ell}(y)|\leq M_0|x-y|^\sigma \quad\text{for any $x, y \in \Omega$},\\
&\|\nabla({\cal{B}}_{k\ell}, {\cal{R}}_{k\ell})\|_{L_r(\Omega)} \leq M_0. \end{aligned} \end{equation} \item the matrices ${\cal{B}}$ and ${\cal{R}}$ are symmetric and positive definite, and there exist constants $m_1,m_2 > 0$ for which \begin{equation}\label{1.3}
\langle {\cal{B}}(x)\xi, \overline{\xi} \rangle \geq m_1|\xi|^2, \quad
\langle {\cal{R}}(x)\xi, \overline{\xi} \rangle \geq m_2|\xi|^2 \end{equation} for any complex $m$-vector $\xi$ and $x \in \Omega$.
\item $1 < p, q < \infty$, $T > 0$, $2/p + 1/q \neq 1$, and $\Omega$ is a uniformly $C^2$ domain in $\mathbb{R}^N$ $(N \geq 2)$.
\item for all $k=1,\ldots,m$, $h_k^0 \in B^{2(1-1/p)}_{q,p}(\Omega)$, $\tilde f_3^k \in L_p((0, T), L_q(\Omega))$ and $\tilde f_4^k \in L_p(\mathbb{R}, H^1_q(\Omega)) \cap H^{1/2}_p(\mathbb{R}, L_q(\Omega))$ are given functions satisfying the compatibility conditions \begin{equation}\label{1.4}\sum_{\ell=1}^m {\cal{B}}_{k\ell}\nabla h_{\ell}^0 \cdot\boldsymbol{n} = \tilde f_4^k(\cdot, 0) \quad\text{on $\Gamma$} \end{equation} provided $2/p + 1/q < 1$. \end{itemize}
\noindent Then, problem \eqref{1.1?} admits a unique solution $\vec\vartheta = (\vartheta_1, \ldots, \vartheta_m)^\top$ with \begin{equation}\label{1.5} \vec\vartheta \in L_p((0, T), H^2_q(\Omega)^m) \cap H^1_p((0, T), L_q(\Omega)^m) \end{equation} possessing the estimate: \begin{equation}\label{1.6} \begin{aligned}
&\|\vec\vartheta\|_{L_p((0, T), H^2_q(\Omega))}
+ \|\partial_t\vec\vartheta\|_{L_p((0, T), L_q(\Omega))}\\ &\quad
\leq Ce^{cT}(\|\vec h^0\|_{B^{2(1-1/p)}_{q,p}(\Omega)}
+ \|\vec{\tilde f}_3\|_{L_p((0, T), L_q(\Omega))}
+ \|\vec{\tilde f}_4\|_{L_p((0, T), H^1_q(\Omega))}
+ \|\vec{\tilde f}_4\|_{H^{1/2}_p(\mathbb{R}, L_q(\Omega))}) \end{aligned} \end{equation} for some constants $C$ and $c$. \end{thm}
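As a consistency check (this verification is our addition, not taken from the source; it assumes, as elsewhere in this section, that the densities $\varrho^0_k$ are bounded below by a positive constant), the positive definiteness condition \eqref{1.3} indeed holds for the matrix ${\cal{R}}^0$ arising in the linearization:

```latex
% Positive definiteness of R^0, where
% R^0_{kl} = m_{k+1} rho^0_{k+1} delta_{kl}
%            - m_{k+1} m_{l+1} rho^0_{k+1} rho^0_{l+1} / Sigma^0.
\begin{align*}
\langle {\cal{R}}^0\xi,\overline{\xi}\rangle
  &= \sum_{k=1}^{n-1} m_{k+1}\varrho^0_{k+1}|\xi_k|^2
     - \frac{1}{\Sigma_\varrho^0}
       \Big|\sum_{k=1}^{n-1} m_{k+1}\varrho^0_{k+1}\xi_k\Big|^2 \\
  &\geq \Big(1-\frac{\Sigma_\varrho^0-m_1\varrho^0_1}{\Sigma_\varrho^0}\Big)
        \sum_{k=1}^{n-1} m_{k+1}\varrho^0_{k+1}|\xi_k|^2
   = \frac{m_1\varrho^0_1}{\Sigma_\varrho^0}
     \sum_{k=1}^{n-1} m_{k+1}\varrho^0_{k+1}|\xi_k|^2,
\end{align*}
```

where the inequality is the Cauchy--Schwarz inequality with weights $m_{k+1}\varrho^0_{k+1}$ together with $\sum_{k=1}^{n-1} m_{k+1}\varrho^0_{k+1}=\Sigma_\varrho^0-m_1\varrho^0_1$; hence \eqref{1.3} holds with a coercivity constant controlled by the lower bound on $m_1\varrho^0_1\min_k m_{k+1}\varrho^0_{k+1}$.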
\subsubsection{Fixed point argument}
With Theorems \ref{thm:lin1} and \ref{thm:main1} it is easy to show solvability, with appropriate estimates, of the complete linear system corresponding to \eqref{lin1:sys}-\eqref{lin1:bc}: \begin{equation}\label{lin2:sys} \left\{ \begin{aligned} &\partial_{t}\sigma + \varrho^0{\rm div} \boldsymbol{v} = f_1 \\ &\varrho^0 \partial_t \boldsymbol{v} - \mu\Delta\boldsymbol{v}-(\mu+\nu)\nabla{\rm div} \boldsymbol{v} + \gamma_1 \nabla \sigma +\sum_{l=1}^{n-1}\gamma^l_2 \nabla \vartheta_l = {\vc f}_2\\ &\sum_{l=1}^{n-1} {\cal R}_{kl}^0\partial_t \vartheta_l + \gamma_2^k{\rm div} \boldsymbol{v} -{\rm div}\lr{\sum_{l=1}^{n-1} {\cal{B}}_{kl}^0\nabla\vartheta_l}=f^k_3, \quad k=1,\ldots,n-1 \end{aligned}\right. \end{equation} with given $\gamma_1,\{\gamma^l_2\}_{l=1,\ldots,n-1}$ and the boundary conditions \begin{equation} \label{lin2:bc}
\boldsymbol{v}|_{\partial \Omega}=0, \quad \sum_{l=1}^{n-1} {\cal{B}}_{kl}^0 \nabla \vartheta_l\cdot \boldsymbol{n}|_{\partial \Omega}=f_4^k, \quad k=1,\ldots,n-1, \end{equation} and initial conditions \eqref{lin1:ic}.
We have the following result. \begin{thm} \label{thm:lin2} Assume ${\cal{B}}^0$, ${\cal{R}}^0$, $\Omega$ and $p,q$ satisfy the assumptions of Theorem \ref{thm:main1} with $m=n-1$. Assume moreover $\boldsymbol{u}^0,\vec h^0 \in B^{2(1-1/p)}_{q,p}(\Omega)$, $\varrho^0 \in H^1_q(\Omega)$, $f_1 \in L_p((0, T), H^1_q(\Omega))$, $({\vc f}_2,\vec f_3) \in L_p((0, T), L_q(\Omega)^{n+2})$ and $\vec f_4 \in L_p(\mathbb{R}, H^1_q(\Omega)^{n-1}) \cap H^{1/2}_p(\mathbb{R}, L_q(\Omega)^{n-1})$. Then for any $M>0$, if \begin{align}
&\|\boldsymbol{u}^0,\vec h^0\|_{B^{2(1-1/p)}_{q,p}(\Omega)}
+\|\varrho^0\|_{H^1_q(\Omega)}
+\|f_1\|_{L_p((0, T), H^1_q(\Omega))}\\
&+\|({\vc f}_2,\vec f_3)\|_{L_p((0, T), L_q(\Omega)^{n+2})}
+\|\vec f_4\|_{L_p(\mathbb{R}, H^1_q(\Omega)^{n-1})}
+\|\vec f_4\|_{H^{1/2}_p(\mathbb{R}, L_q(\Omega)^{n-1})} \leq M, \end{align} then there exists $T>0$ such that system
\eqref{lin2:sys}-\eqref{lin2:bc} admits a solution $(\sigma,\boldsymbol{v},\vec\vartheta)$ on $(0,T)$ satisfying \begin{align} \label{est:lin3} [\sigma,\boldsymbol{v},\vec\vartheta]_T \leq
&C\Big(\|\boldsymbol{u}^0,\vec h^0\|_{B^{2(1-1/p)}_{q,p}(\Omega)}
+\|\varrho^0\|_{H^1_q(\Omega)}
+\|f_1\|_{L_p((0, T), H^1_q(\Omega))}\\
&+\|({\vc f}_2,\vec f_3)\|_{L_p((0, T), L_q(\Omega)^{n+2})}
+\|\vec f_4\|_{L_p(\mathbb{R}, H^1_q(\Omega)^{n-1})}
+\|\vec f_4\|_{H^{1/2}_p(\mathbb{R}, L_q(\Omega)^{n-1})}\Big). \end{align} \end{thm} \emph{Proof.} We use the Banach fixed point argument. For given $\bar \boldsymbol{v} \in {\mathcal H}^1_{T,M}$ denote by $\vec\vartheta(\bar \boldsymbol{v})$ the solution to \eqref{lin2:sys}$_3$ with $\boldsymbol{v}=\bar \boldsymbol{v}$ and boundary condition \eqref{lin2:bc}$_2$. Since
$\|\bar\boldsymbol{v}\|_{L_\infty(0,T;H^1_\infty(\Omega))}\leq CM$ (see estimate \eqref{est:04}), by Theorem \ref{thm:main1} such a solution exists for arbitrary $T>0$; it is unique and satisfies \eq{
[\vec\vartheta(\bar\boldsymbol{v})]_{T,1}\leq &C(T)\Big(\|\vec h^0\|_{B^{2(1-1/p)}_{q,p}(\Omega)}+
\|\vec f_3\|_{L_p(0,T;L_q(\Omega)^{n-1})}+E(T)\|\bar\boldsymbol{v}\|_{L_p(0,T;H^1_q(\Omega)^3)}\\
&\quad+\|\vec f_4\|_{L_p(\mathbb{R}, H^1_q(\Omega)^{n-1})}
+\|\vec f_4\|_{H^{1/2}_p(\mathbb{R}, L_q(\Omega)^{n-1})}\Big)\\
\leq & C(T,M)\left(1+E(T)\|\bar\boldsymbol{v}\|_{L_p(0,T;H^1_q(\Omega)^3)}\right).
} Therefore for $(\bar \boldsymbol{v},\bar \sigma) \in {\mathcal H}^1_{T,M} \times {\mathcal H}^2_{T,M}$ we can define $(\boldsymbol{v}, \sigma)={\mathcal T}(\bar \boldsymbol{v},\bar \sigma)$ as the unique solution of the first two equations of system \eqref{lin2:sys} with $\vec\vartheta=\vec\vartheta(\bar \boldsymbol{v})$ and boundary condition \eqref{lin2:bc}$_1$. By Theorem \ref{thm:lin1} we have \eq{ [\sigma]_{T,2}+[\boldsymbol{v}]_{T,1} \leq &
C(T)\Big(\|\varrho^0\|_{H^1_q(\Omega)}+\|\boldsymbol{u}^0\|_{B^{2(1-1/p)}_{q,p}(\Omega)^3}\\
&\quad+ \|f_1\|_{L_p((0, T), H^1_q(\Omega))}
+ \|{\vc f}_2\|_{L_p((0, T), L_q(\Omega)^3)}+\|\nabla\vec\vartheta(\bar\boldsymbol{v})\|_{L_p((0, T), L_q(\Omega)^{n-1})}\Big)\\
\leq & C(T,M)\left(1+E(T)\|\bar\boldsymbol{v}\|_{L_p(0,T;H^1_q(\Omega)^3)}\right).
} Moreover, taking $\bar\boldsymbol{v}_1, \bar\boldsymbol{v}_2\in {\mathcal H}^1_{T,M}$ corresponding to the same initial data, and subtracting the equations for $\vec\vartheta(\bar \boldsymbol{v}_1)$ and $\vec\vartheta(\bar \boldsymbol{v}_2)$, we get $$ [\vec\vartheta(\bar \boldsymbol{v}_1)-\vec\vartheta(\bar \boldsymbol{v}_2)]_{T,1}\leq C(M)E(T)[\bar\boldsymbol{v}_1-\bar\boldsymbol{v}_2]_{T,1}. $$ Therefore, applying Theorem \ref{thm:lin1} to the difference of two solutions we have \eqh{ [{\mathcal T}(\bar \boldsymbol{v}_1,\bar \sigma_1)-{\mathcal T}(\bar \boldsymbol{v}_2,\bar \sigma_2)]_{T,1;T,2}&\leq C(M)E(T) [\bar\boldsymbol{v}_1-\bar\boldsymbol{v}_2]_{T,1}\\ &\leq C(M)E(T)[(\bar\boldsymbol{v}_1-\bar\boldsymbol{v}_2,\bar\sigma_1-\bar\sigma_2)]_{T,1;T,2}. } Therefore, for sufficiently small $T$, $\mathcal T$ is a contraction on the set ${\mathcal H}^1_{T,M}\times{\mathcal H}^2_{T,M}$, and applying the Banach fixed point theorem we complete the proof.
\rightline{ $\square$} \section{Proof of Theorem \ref{thm:main2}}\label{S:Nonl}
\subsection{Nonlinear estimates}
The aim of this section is to prove the following proposition, which gives the estimate of the right hand side of the linearized system in the regularity required to apply Theorem \ref{thm:lin2}. We use the notation introduced at the beginning of Section 5.2. For brevity, in this subsection we do not distinguish between scalar- and vector-valued functions in the notation of function spaces, except in the final estimates. \begin{prop} \label{prop:est} Let $\bar U=(\bar \sigma,\bar \boldsymbol{v},\bar \vartheta) \in {\mathcal H}_{T,M} $ for given $T,M>0$, where the initial conditions satisfy the assumptions of Theorem \ref{thm:main2}. Let $f_1(U),f_2(U),f_3^k(U)$ and $f^k_4(U)$ be given by \eqref{lin1:5a}-\eqref{lin1:5d}, where $R_1(U),R_2(U),R_3^k(U)$ and $R^k_4(U)$ are defined in \eqref{lag:6}, \eqref{lag:7}, \eqref{lag:8}-\eqref{lag:10} and \eqref{lag:9}, respectively. Then \eq{\label{est:nonlin}
&\|f_1(\bar U)\|_{L_p(0,T;H^1_q(\Omega))} + \|{\vc f}_2(\bar U)\|_{L_p(0,T;L_q(\Omega)^3)}+\|\vec f_3(\bar U)\|_{L_p(0,T;L_q(\Omega)^{n-1})} \\
&+\|\vec f_4(\bar U)\|_{L_p(0,T;H^1_q(\Omega)^{n-1})} + \|\vec f_4(\bar U)\|_{H^{1/2}_p(\mathbb{R},L_q(\Omega)^{n-1})} \leq C(M,L) E(T). } \end{prop} Let us start by recalling some auxiliary results. The first one is due to Tanabe (cf. \cite{Tanabe}, p.~10):
\begin{lem} \label{L:int} Let $X$ and $Y$ be two Banach spaces such that $X$ is a dense subset of $Y$ and the embedding $X\subset Y$ is continuous. Then for each $p \in (1, \infty)$ $$H^1_p((0, \infty), Y) \cap L_p((0, \infty), X) \subset C([0, \infty), (X, Y)_{1/p,p})$$ and for every $u\in H^1_p((0, \infty), Y) \cap L_p((0, \infty), X)$ we have
$$\sup_{t \in (0, \infty)}\|u(t)\|_{(X, Y)_{1/p,p}}
\leq (\|u\|_{L_p((0, \infty),X)}^p
+ \|u\|_{H^1_p((0, \infty), Y)}^p)^{1/p}. $$ \end{lem}
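In our applications of Lemma \ref{L:int} we take $X=H^2_q(\Omega)$ and $Y=L_q(\Omega)$; the resulting real interpolation space is then precisely the Besov space in which the initial data are taken (a standard identity, recalled here for orientation):

```latex
% Real interpolation between H^2_q and L_q with exponent 1/p:
\[
  \bigl(H^2_q(\Omega), L_q(\Omega)\bigr)_{1/p,\,p} = B^{2(1-1/p)}_{q,p}(\Omega),
\]
```

which is why the traces $\boldsymbol{u}^0$, $\vec h^0$ of solutions in the maximal regularity class $L_p((0,T),H^2_q(\Omega))\cap H^1_p((0,T),L_q(\Omega))$ naturally live in $B^{2(1-1/p)}_{q,p}(\Omega)$.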
The next two results will be needed to estimate the boundary data. For the first one see [Shibata and Shimizu \cite{SS1}, Lemma 2.7]: \begin{lem}\label{lem:5.1} Let $1 < p < \infty$, $3 < q < \infty$ and $0 < T \leq 1$. Assume that $\Omega$ is a uniformly $C^2$ domain. Let \begin{align*} f \in H^1_\infty(\mathbb{R}, L_q(\Omega)) \cap L_\infty(\mathbb{R}, H^1_q(\Omega)), \quad g \in L_p(\mathbb{R}, H^1_q(\Omega)) \cap H^{1/2}_p(\mathbb{R}, L_q(\Omega)). \end{align*} If, in addition, $f \in L_p(\mathbb{R}, H^1_q(\Omega))$ and $f$ vanishes for $t \notin [0, 2T]$, then we have \begin{align*}
&\|fg\|_{L_p(\mathbb{R}, H^1_q(\Omega))} + \|fg\|_{H^{1/2}_p(\mathbb{R}, L_q(\Omega))}\\
&\quad\leq C(\|f\|_{L_\infty(\mathbb{R}, H^1_q(\Omega))}
+T^{(q-3)/(pq)}\|\partial_tf\|_{L_\infty(\mathbb{R}, L_q(\Omega))}^{(1-3/(2q))}
\|\partial_tf\|_{L_p(\mathbb{R}, H^1_q(\Omega))}^{3/(2q)})
(\|g\|_{L_p(\mathbb{R}, H^1_q(\Omega))} + \|g\|_{H^{1/2}_p(\mathbb{R}, L_q(\Omega))}). \end{align*} \end{lem} \begin{rmk} \thetag1~The boundary of $\Omega$ was assumed to be bounded in \cite{SS1}; however, using Sobolev's inequality and complex interpolation, the same argument as in the proof of \cite[Lemma 2.7]{SS1} proves Lemma \ref{lem:5.1} for a uniformly $C^2$ domain. \\
\thetag2~By Sobolev's inequality, $\|fg\|_{H^1_q(\Omega)} \leq C\|f\|_{H^1_q(\Omega)}\|g\|_{H^1_q(\Omega)}$ for $q>3$, and so the essential part of Lemma \ref{lem:5.1} is the estimate of $\|fg\|_{H^{1/2}_p(\mathbb{R}, L_q(\Omega))}$. \end{rmk} The second result was shown in Shibata and Shimizu \cite{SS2} for $\Omega=\mathbb{R}^3$ and generalized to a uniformly $C^2$ domain in Shibata \cite{S17}: \begin{lem}\label{lem:5.2} Let $1 < p, q < \infty$. Assume that $\Omega$ is a uniformly $C^2$ domain. Then $$H^1_p(\mathbb{R}, L_q(\Omega)) \cap L_p(\mathbb{R}, H^2_q(\Omega)) \subset H^{1/2}_p(\mathbb{R}, H^1_q(\Omega)), $$ and
$$\|\nabla f\|_{H^{1/2}_p(\mathbb{R}, L_q(\Omega))}
\leq C(\|f\|_{L_p(\mathbb{R}, H^2_q(\Omega))}
+ \|\partial_t f\|_{L_p(\mathbb{R}, L_q(\Omega))}). $$ \end{lem}
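Many of the estimates below produce the small factor $E(T)$ by the same elementary mechanism, namely H\"older's inequality in time applied to a time integral; schematically (our summary, with $1/p+1/p'=1$ and $X$ any of the spatial norms used below):

```latex
% Hoelder in time: for g in L_p(0,T;X) and 0 < t < T,
\[
  \Big\| \int_0^t g(s)\,ds \Big\|_X
  \;\leq\; \int_0^T \|g(s)\|_X\,ds
  \;\leq\; T^{1/p'}\,\|g\|_{L_p(0,T;X)}.
\]
```

This is the source of the factors $T^{1/p'}$ appearing in \eqref{est:01}, \eqref{est:02} and \eqref{5.10}.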
Now we show preliminary estimates for functions from the space ${\mathcal H}_{T,M}$. \begin{lem} Let $(\sigma, \boldsymbol{v}, \vartheta_1, \ldots, \vartheta_{n-1}) \in {\mathcal H}_{T,M}$ and let ${\bk_{\bv}},{\bf V}^0({\bk_{\bv}})$ be defined in \eqref{lag:3}. Then \begin{align}
&\|{\bf V}^0({\bk_{\bv}}),\nabla_{{\bk_{\bv}}}{\bf V}^0({\bk_{\bv}})\|_{L_\infty(\Omega\times(0,T))} \leq C(M,L)E(T), \label{est:01}\\
&{\rm sup}_{t \in (0,T)} \|\sigma(\cdot,t)\|_{H^1_q(\Omega)}\leq C(M,L)E(T), \label{est:02} \\
&{\rm sup}_{t \in (0,T)}\|{\vec \vartheta}(\cdot,t)-{\vec h^0}\|_{B^{2(1-1/p)}_{q,p}(\Omega)}+{\rm sup}_{t \in (0,T)}\|\boldsymbol{v}(\cdot,t)-\boldsymbol{u}^0\|_{B^{2(1-1/p)}_{q,p}(\Omega)}\leq C(M,L),\label{est:03}\\
&\|\boldsymbol{v},{\vec \vartheta}\|_{L_\infty(0,T,H^1_\infty(\Omega))}\leq C(M), \label{est:04}\\
&\|\varrho_k-\varrho_k^0\|_{L_\infty(0,T;H^1_q)}\leq C(M,L) \quad \forall k=1,\ldots,n,\label{est:05}\\
&\|\varrho_k-\varrho_k^0\|_{L_\infty(\Omega \times (0,T))} \leq C(M,L)E(T). \label{est:06} \end{align} \end{lem} \emph{Proof}. First of all, we have \begin{align}
\int^T_0\|\nabla\boldsymbol{v}(\cdot, t)\|_{L_\infty(\Omega)}\,dt
&\leq C\int^T_0\|\boldsymbol{v}(\cdot, t)\|_{H^2_q(\Omega)}\,dt\nonumber\\ &\leq CT^{1/{p'}}
\Bigl(\int^T_0\|\boldsymbol{v}(\cdot, t)\|_{H^2_q(\Omega)}^p\,dt\Bigr)^{1/p} \leq CMT^{1/p'}, \end{align} which implies \eqref{est:01}. Next,
$$\|\sigma(\cdot, t)\|_{H^1_q(\Omega)}
\leq \int^t_0\|\partial_t\sigma(\cdot, s)\|_{H^1_q(\Omega)}\,ds
\leq T^{1/{p'}}\|\partial_t\sigma\|_{L_p((0, T), H^1_q(\Omega))} \leq C(M)E(T), $$ and so we have \eqref{est:02}. In order to prove \eqref{est:03} we introduce extension operator \begin{equation} \label{def:ext} e_T[f](\cdot, t) = \begin{cases} 0 \quad &t\in(-\infty,0)\cup (2T,+\infty), \\ f(\cdot, t) \quad &t\in(0,T), \\ f(\cdot, 2T-t)\quad & t\in(T,2T). \end{cases} \end{equation} Obviously, $e_T[f](\cdot, t) = f(\cdot, t)$ for $t \in (0, T)$. If
$f|_{t=0}=0$, then we have \begin{equation} \label{ext:2} \partial_te_T[f](\cdot, t) = \begin{cases} 0 \quad &t\in(-\infty,0)\cup (2T,+\infty),\\ (\partial_tf)(\cdot, t) \quad &t\in(0,T), \\ -(\partial_tf)(\cdot, 2T-t)\quad & t\in(T,2T), \end{cases} \end{equation} understood in a weak sense. Applying Lemma \ref{L:int} with $X=H^2_q(\Omega), \, Y=L_q(\Omega)$ and using \eqref{def:ext} and \eqref{ext:2} we have \begin{align*}
&\sup_{t \in (0, T)}\|\vartheta_k(\cdot, t)-h_k^0\|_{B^{2(1-1/p)}_{q,p}(\Omega)}
\leq \sup_{t \in (0, \infty)}\|e_T[\vartheta_k-h_k^0]\|_{B^{2(1-1/p)}_{q,p}(\Omega)}\\&\quad
= (\|e_T[\vartheta_k-h_k^0]\|_{L_p((0, \infty), H^2_q(\Omega))}^p
+ \|e_T[\vartheta_k-h^0_k]\|_{H^1_p((0, \infty), L_q(\Omega))}^p)^{1/p}\\
&\quad \leq C(\|\vartheta_k-h^0_k\|_{L_p((0, T), H^2_q(\Omega))}
+ \|\partial_t\vartheta_k\|_{L_p((0, T), L_q(\Omega))}) \leq C(M,L), \end{align*}
and estimating $\|\boldsymbol{v}(\cdot, t)-\boldsymbol{u}^0\|_{B^{2(1-1/p)}_{q,p}(\Omega)}$ in the same way we obtain \eqref{est:03}. Then \eqref{est:04} follows from \eqref{est:03} by the Sobolev embedding theorem, since $\frac{2}{p}+\frac{3}{q}<1$. In order to prove \eqref{est:05} we use the fact that $$ (\varrho_1,\ldots, \varrho_n)=\Psi^{-1}(\varrho,\vartheta_1,\ldots,\vartheta_{n-1}), $$ where $\Psi$ is the diffeomorphism defined in \eqref{def:psi}, and therefore \begin{equation} \label{5.9} \begin{split}
&\sup_{t \in (0, T)}\|\varrho_k(\cdot, t)-\varrho_k^0(\cdot)\|_{L_q(\Omega)}
\leq \int^T_0\|\partial_t (\Psi^{-1})({\vec \theta}(\cdot, t),
\varrho_0(\cdot)+\sigma(\cdot, t))\|_{L_q(\Omega)}\,dt\\ &\quad \leq \int^T_0
\|(\Psi^{-1})'({\vec \theta}(\cdot, t), \varrho_0(\cdot) + \sigma(\cdot, t))
\|_{L_\infty(\Omega)}
\|(\partial_t{\vec \theta}(\cdot, t), \partial_t\sigma(\cdot, t))\|_{L_q(\Omega)}\,dt. \end{split}\end{equation} By \eqref{est:01} and \eqref{est:04}, we have \begin{equation} \label{5.8}
\sup_{t \in (0, T)}\|(\Psi^{-1})'({\vec\theta}(\cdot, t),
\varrho_0(\cdot) + \sigma(\cdot, t))\|_{L_\infty(\Omega)} \leq C. \end{equation}
Thus, by \eqref{5.9} we have \begin{equation} \label{5.10}\begin{split}
\sup_{t \in (0, T)}\|\varrho_k(\cdot, t)-\varrho_k^0(\cdot)\|_{L_q(\Omega)}
&\leq C\int^T_0\|(\partial_t{\vec\theta}(\cdot, t),
\partial_t\sigma(\cdot, t))\|_{L_q(\Omega)}\,dt \\
& \leq CT^{1/{p'}}\|\partial_t({\vec \theta}, \sigma)\|_{L_p((0, T), L_q(\Omega))}
\leq C(M)E(T). \end{split}\end{equation} Similarly, \begin{align*}
&\|\nabla[\varrho_k(\cdot, t)-\varrho_k^0(\cdot)]\|_{L_q(\Omega)} \\ &\quad
\leq \|(\Psi^{-1})'({\vec \theta}(\cdot, t), \varrho^0(\cdot)+\sigma(\cdot, t))
\|_{L_\infty(\Omega)}
\|(\nabla{\vec \theta}(\cdot, t), \nabla\varrho^0(\cdot)
+\nabla\sigma(\cdot, t))\|_{L_q(\Omega)}
+ \|\nabla(\vec{\varrho^0})\|_{L_q(\Omega)} \leq C(L,M),
\end{align*} which implies \eqref{est:05}. In order to show \eqref{est:06} note that $W^{\frac{3}{q}+\epsilon}_q(\Omega)\subset L_\infty(\Omega)$ for all $\epsilon>0$; therefore for $\epsilon < 1-\frac{3}{q}$ we have \begin{equation}\label{5.12}\begin{split}
&\sup_{t \in (0, T)}\|\varrho_k(\cdot, t)
-\varrho_k^0(\cdot)\|_{L_\infty(\Omega)}\\ &\quad \leq (\sup_{t \in (0, T)}
\|\varrho_k(\cdot, t)
-\varrho_k^0(\cdot)\|_{L_q(\Omega)})^{\theta}\\ &\qquad\times (\sup_{t \in (0, T)}
\|\varrho_k(\cdot, t)
-\varrho_k^0(\cdot)\|_{H^1_q(\Omega)})^{1-\theta} \leq C(M,L)E(T) \end{split}\end{equation} with $\theta = 1-(3/q+\epsilon) \in (0, 1)$. This way we obtain \eqref{est:06} and complete the proof.
\rightline{ $\square$} \noindent The next lemma gives bounds on the terms coming from the change of coordinates. \begin{lem} \label{l:est_lag} Let $A_{2\Delta}({\bk_{\bv}})\nabla^2(\cdot),A_{1\Delta}({\bk_{\bv}})\nabla(\cdot),A_{2{\rm div}}({\bk_{\bv}})\nabla^2(\cdot),A_{1{\rm div}}({\bk_{\bv}})\nabla(\cdot)$ be defined in \eqref{a2delta},\eqref{a1delta},\eqref{a2div} and \eqref{a1div}, respectively. Then \begin{align}
&\|A_{2\Delta}\nabla^2\boldsymbol{v},A_{2{\rm div}}\nabla^2\boldsymbol{v}\|_{L_p(0,T;L_q(\Omega))}+
\|A_{1\Delta}\nabla \boldsymbol{v},A_{1{\rm div}}\nabla \boldsymbol{v}\|_{L_\infty(0,T;L_q(\Omega))}\leq C(M)E(T), \label{est:lag1}\\
&\|A_{2\Delta}\nabla^2\vartheta_k,A_{2{\rm div}}\nabla^2\vartheta_k\|_{L_p(0,T;L_q(\Omega))}+
\|A_{1\Delta}\nabla \vartheta_k,A_{1{\rm div}}\nabla \vartheta_k\|_{L_\infty(0,T;L_q(\Omega))}\leq C(M)E(T) \label{est:07} \end{align} for all $k=1,\ldots,n-1$. \end{lem} \emph{Proof.} By \eqref{est:01} and \eqref{a2delta} we have $$
\|A_{2\Delta}\nabla^2\boldsymbol{v}\|_{L_p(0,T;L_q(\Omega))}\leq \|{\bf V}^0({\bk_{\bv}})\|_{L_\infty(\Omega\times(0,T))}(1+\|{\bf V}^0({\bk_{\bv}})\|_{L_\infty(\Omega\times(0,T))})\|\nabla^2 \boldsymbol{v}\|_{L_p(0,T;L_q(\Omega))}\leq C(M)E(T). $$ Next, notice that $$
\left\| \int_0^t \nabla^2 \boldsymbol{v}\,ds \right\|_{L_q(\Omega)} \leq \int_0^t \|\nabla^2 \boldsymbol{v}\|_{L_q(\Omega)}\,ds \leq t^{1/p'}\|\nabla^2\boldsymbol{v}\|_{L_p(0,T;L_q(\Omega))}, $$ therefore, by \eqref{est:01} and \eqref{est:04}, \eqh{
\left\| \nabla_{{\bk_{\bv}}}V^0_{lm}({\bk_{\bv}}) \left[\int_0^t \partial_l \nabla \boldsymbol{v}\,ds\right] \frac{\partial \boldsymbol{v}}{\partial y_m} \right\|_{L_q(\Omega)}
&\leq \|\nabla_{{\bk_{\bv}}}V^0_{lm}({\bk_{\bv}})\|_{L_\infty(\Omega\times(0,T))} \left\|\int_0^t \nabla^2 \boldsymbol{v}\,ds\right\|_{L_q(\Omega)} \|\nabla \boldsymbol{v}\|_{L_\infty(\Omega\times(0,T))} \\ &\leq C(M)E(T). } The other terms in $A_{1\Delta}\nabla \boldsymbol{v}$ have a similar structure, hence $$
\|A_{1\Delta}\nabla \boldsymbol{v}\|_{L_\infty(0,T;L_q(\Omega))}\leq C(M)E(T). $$ As $A_{2{\rm div}}({\bk_{\bv}})\nabla^2(\cdot)$ and $A_{1{\rm div}}({\bk_{\bv}})\nabla(\cdot)$ have a structure similar to $A_{2\Delta}\nabla^2(\cdot)$ and $A_{1\Delta}\nabla (\cdot)$, respectively, we conclude \eqref{est:lag1}. Finally, $\vartheta_k$ has the same regularity as $\boldsymbol{v}$, so we obtain \eqref{est:07} in the same way. This completes the proof of Lemma \ref{l:est_lag}.
\rightline{ $\square$} \noindent With these results at hand we can proceed with the proof of Proposition \ref{prop:est}.\\ {\bf Estimate of $f_1(U)$}. Since $f_1(U)$ is exactly the same as in the two-species case, we obtain (see \cite{PSZ}) \begin{equation} \label{est:1}
\|f_1(U)\|_{L_p(0,T;H^1_q(\Omega))} \leq C(M,L)E(T). \end{equation} {\bf Estimate of $f_2(U)$}. Let us start with $R_2(U)$ defined in \eqref{lag:7}. By \eqref{est:01} we have \begin{align*}
\|{\bf V}^0({\bk_{\bv}})\lr{\varrho_l-\frac{m_l\varrho_l\varrho}{\Sigma_\varrho}}\nabla \vartheta_{l-1}\|_{L_p(0,T;L_q(\Omega))} \leq C(M,L)E(T). \end{align*} Applying Lemma \ref{l:est_lag} to the remaining terms we obtain $$
\|{\bf R}_2(U)\|_{L_p(0,T;L_q(\Omega))} \leq C(M,L)E(T). $$
Next, by \eqref{est:02} $$
\|\sigma\partial_t \boldsymbol{v}\|_{L_p(0,T;L_q(\Omega))} \leq \|\sigma\|_{L_\infty(\Omega \times (0,T))}
\|\partial_t\boldsymbol{v} \|_{L_p(0,T;L_q(\Omega))} \leq C(M)E(T), $$ and similarly, using \eqref{est:02}-\eqref{est:05} we get $$
\left\| \frac{\sigma}{\Sigma_\varrho}\nabla\eta,\sigma_l\nabla\vartheta_{l-1},\frac{m_l}{\Sigma_\varrho}(\varrho_l\sigma+\varrho^0\sigma_l)\nabla\vartheta_{l-1},\frac{\varrho^0}{\Sigma_\varrho}\nabla\varrho^0 \right\|_{L_p(0,T;L_q(\Omega))} \leq C(M,L)E(T). $$ In order to estimate the terms with $\frac{1}{\Sigma_\varrho}-\frac{1}{\Sigma_\varrho^0}$ we write it as $$ \frac{1}{\Sigma_\varrho}-\frac{1}{\Sigma_\varrho^0} = \frac{\Sigma_\varrho^0-\Sigma_\varrho}{\Sigma_\varrho \Sigma_\varrho^0}. $$ As the denominator is bounded from below by a positive constant, using \eqref{est:04} we get $$
\|\varrho^0\nabla\eta \left(\frac{1}{\Sigma_\varrho}-\frac{1}{\Sigma_\varrho^0} \right)\|_{L_p(0,T;L_q(\Omega))} \leq C \sum_{k=1}^n\|\nabla \eta (\varrho_k-\varrho^0_k)\|_{L_p(0,T;L_q(\Omega))}\leq $$$$
C \sum_{k=1}^n \left[ \int_0^T \|\varrho_k-\varrho^0_k\|_{L_\infty(\Omega)}^p\|\nabla \eta\|_{L_q(\Omega)}^p \,dt\right]^{1/p} \leq C\sum_{k=1}^n \|\varrho_k-\varrho^0_k\|_{L_\infty(\Omega\times(0,T))}\|\nabla\eta\|_{L_p(0,T;L_q(\Omega))} \leq C(M,L)E(T), $$ and similarly $$
\|m_l\varrho^0\varrho^0_{l}\left( \frac{1}{\Sigma_\varrho}-\frac{1}{\Sigma_\varrho^0} \right)\nabla\vartheta_{l-1}\|_{L_p(0,T;L_q(\Omega))} \leq C(M,L)E(T). $$ Collecting all above estimates we get \begin{equation} \label{est:2}
\|{\bf f}_2(U)\|_{L_p(0,T;L_q(\Omega)^3)} \leq C(M,L)E(T). \end{equation} {\bf Estimate of $f_3(U)$}. First we estimate $R^k_3(U)$ given by \eqref{lag:8}-\eqref{lag:10}. For this purpose we show \begin{lem} We have \begin{align}
\|{\cal{B}}_{kl}\|_{L_\infty(\Omega\times(0,T))}\leq C(M), \label{est:3_1} \\
\|\nabla {\cal{B}}_{kl}\|_{L_p(0,T;L_q(\Omega))} \leq C(M,L)E(T). \label{est:3_2} \end{align} \end{lem} \emph{Proof.} \eqref{est:3_1} follows directly from \eqref{est:05} and the form of ${\cal{B}}_{kl}$ \eqref{lag:5b}. To show \eqref{est:3_2} we need a bound on $\nabla D_{kl}$. For this purpose notice that, by \eqref{est:05}, $$
\|\nabla \varrho_k\|_{L_p(0,T;L_q(\Omega))}^p \leq C\left(\int_0^T\|(\nabla\varrho_k - \nabla \varrho^0_k)(t,\cdot)\|_{L_q(\Omega)}^p\, dt + \int_0^T \|\nabla \varrho^0_k\|_{L_q(\Omega)}^p\,dt\right) \leq [C(M,L)E(T)]^p. $$ Therefore, under the assumption \eqref{nablaD} and using the fact that the fractional densities are bounded from below by a positive constant, we obtain \eqref{est:3_2}.
\rightline{ $\square$}
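For orientation, \eqref{est:3_2} is in essence a chain rule estimate: ${\cal{B}}_{kl}$ depends smoothly on the densities, so, schematically (our sketch, assuming as above that the densities stay in a compact set on which ${\cal{B}}$ is $C^1$, and using \eqref{nablaD} for the $D_{kl}$ part):

```latex
% Chain rule bound for nabla B_{kl}:
\[
  \nabla {\cal{B}}_{kl}
  = \sum_{m=1}^{n} \frac{\partial {\cal{B}}_{kl}}{\partial \varrho_m}\,
    \nabla \varrho_m,
  \qquad
  \|\nabla {\cal{B}}_{kl}\|_{L_p(0,T;L_q(\Omega))}
  \leq C(M) \sum_{m=1}^{n} \|\nabla \varrho_m\|_{L_p(0,T;L_q(\Omega))}
  \leq C(M,L)E(T).
\]
```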
\noindent From \eqref{est:07} and \eqref{est:3_1} we get \begin{equation} \label{est:3_3}
\|{\cal{B}}_{kl}(A_{2\Delta}({\bk_{\bv}})\nabla^2\vartheta_l+A_{1\Delta}({\bk_{\bv}})\nabla \vartheta_l)\|_{L_p(0,T;L_q(\Omega))} \leq C(M,L)E(T). \end{equation} Next, by \eqref{est:04} and \eqref{est:3_2}, $$
\|\nabla {\cal{B}}_{kl}\nabla \vartheta_l\|_{L_p(0,T;L_q(\Omega))} \leq \|\nabla\vartheta_l\|_{L_\infty(\Omega \times (0,T))}\|\nabla {\cal{B}}_{kl}\|_{L_p(0,T;L_q(\Omega))}\leq C(M,L)E(T). $$ Therefore \begin{align} \label{est:3_4}
&\|{\bf V}^0({\bk_{\bv}})\nabla {\cal{B}}_{kl}\left[\nabla\vartheta_l+\nabla\vartheta_l{\bf V}^0({\bk_{\bv}})\right]+(\nabla {\cal{B}}_{kl}){\bf V}^0({\bk_{\bv}})\nabla\vartheta_l \|_{L_p(0,T;L_q(\Omega))} \nonumber\\
&\leq C \|\nabla {\cal{B}}_{kl}\nabla \vartheta_l\|_{L_p(0,T;L_q(\Omega))} \leq C(M,L)E(T). \end{align} Combining \eqref{est:3_3} and \eqref{est:3_4} we get \begin{equation} \label{est:3_5}
\|R^{kl}_3(U)\|_{L_p(0,T;L_q(\Omega))} \leq C(M,L)E(T). \end{equation} Finally, by \eqref{est:01} and \eqref{est:04}, \begin{align*}
\left\|\left(\varrho_{k+1}-\frac{m_{k+1}\varrho_{k+1}\varrho}{\Sigma_\varrho}\right)\sum_{j,m=1}^3V^0_{jm}({\bk_{\bv}})\frac{\partial v_j}{\partial y_m}\right\|_{L_p(0,T;L_q(\Omega))} & \leq C \sum_{j,m=1}^3\|V^0_{jm}({\bk_{\bv}})\|_{L_\infty(\Omega \times (0,T))}\left\|\frac{\partial v_j}{\partial y_m}\right\|_{L_p(0,T;L_q(\Omega))}\\ & \leq C(M)E(T), \end{align*} which together with \eqref{est:3_5} yields \begin{equation}
\|R^k_3(U)\|_{L_p(0,T;L_q(\Omega))} \leq C(M,L)E(T). \end{equation} The remaining terms in \eqref{lin1:5c} contain only terms of the form $\varrho_k \nabla \boldsymbol{v}, \varrho_k \partial_t \vartheta_l, \nabla \varrho_k \nabla \vartheta_l$ and $\varrho_k \nabla^2 \vartheta_l$, therefore we can estimate them in a similar way to ${\bf f}_2(U)$ using \eqref{est:02}-\eqref{est:06}, obtaining \begin{equation} \label{est:3}
\|f^k_3(U)\|_{L_p(0,T;L_q(\Omega)^{n-1})} \leq C(M,L)E(T), \quad k=1,\ldots,n-1. \end{equation}
\noindent {\bf Estimate of $f^k_4(U)$}. This task is more delicate since we have to find a bound on
$\|f^k_4(U)\|_{H^{1/2}_p(\mathbb{R};L_q(\Omega))}$. However, the structure of boundary condition \eqref{bc:normal1} is exactly the same as in the two-species case, therefore we can repeat the estimate from \cite{PSZ}. For the sake of completeness we sketch the idea here. First we have to extend $f^k_4(U)$ to the whole real line. For this purpose we apply the extension operator \eqref{def:ext}.
Let us denote $$ {\bold J}[\boldsymbol{v}](t)= \boldsymbol{n}(x){\bf V}^0({\bk_{\bv}}) \left\{\int^1_0(\nabla\boldsymbol{n}) (y + \tau\int^t_0\boldsymbol{v}(y, s)\,ds)\,d\tau \int^t_0\boldsymbol{v}(y, s)\,ds\right\}. $$ Then \eqref{lag:9} can be rewritten as $$ R^k_4(U)= - {\bold J}[\boldsymbol{v}] \nabla\vartheta_k. $$ Since ${\bold J}[\boldsymbol{v}](0)=0$, we can readily define \begin{equation} E{\bold J}[\boldsymbol{v}]=e_T({\bold J}[\boldsymbol{v}]). \end{equation} Next, we also need to extend $\vartheta_k$. The difference is that it does not vanish at $t=0$; therefore we first extend the initial datum $h_k^0$ to $\tilde h_k^0$ defined on $\mathbb{R}^3$ and define \begin{equation} \label{Eh} E\vartheta_k=e_T [\vartheta_k-{\mathcal T}(t)\tilde h_k^0] + {\mathcal T}(t)\tilde h_k^0, \end{equation} where ${\mathcal T}(t)$ is an exponentially decaying semigroup (details can be found in Section 5 of \cite{PSZ}). The norms of the extensions are equivalent to the norms on $(0,T)$, therefore we have to estimate
$\|E{\bold J}[\boldsymbol{v}] \nabla(E\vartheta_k)\|_{H^{1/2}_p(\mathbb{R},L_q(\Omega))}$.
For this purpose we apply Lemma \ref{lem:5.1}.
As $\partial \Omega$ is uniformly $C^3$, we can extend the normal vector to $E{\bf n}$ defined on $\mathbb{R}^3$ s.t. $\|E{\bf n}\|_{H^2_\infty(\mathbb{R}^3)}\leq C(\Omega)$. Then we obtain \begin{equation} \label{est:10}
\|E{\bold J}[\boldsymbol{v}]\|_{L_\infty(0,T;H^1_q(\Omega))} \leq C(M)E(T) \end{equation} and, due to \eqref{ext:2}, \begin{equation} \label{est:11}
\|\partial_t E{\bold J}[\boldsymbol{v}]\|_{L_\infty(0,T;L_q(\Omega))}+\|\partial_t E{\bold J}[\boldsymbol{v}]\|_{L_p(0,T;H^1_q(\Omega))} \leq C\left[\|\boldsymbol{v}\|_{L_\infty(0,T;H^1_q(\Omega))}+\|\boldsymbol{v}\|_{L_p(0,T;H^2_q(\Omega))}\right]\leq C(M). \end{equation} In order to estimate $\nabla(E \vartheta_k)$ we apply Lemma \ref{lem:5.2} to obtain \begin{equation} \label{est:12}
\|\nabla(E \vartheta_k)\|_{H^{1/2}_p(\mathbb{R};L_q(\Omega))}+\|\nabla(E \vartheta_k)\|_{L_p(\mathbb{R};H^1_q(\Omega))}\leq C(M,L). \end{equation} Applying Lemma \ref{lem:5.1} with $f=E{\bold J}[\boldsymbol{v}]$ and $g=\nabla(E\vartheta_k)$ and using \eqref{est:10} - \eqref{est:12} we obtain \begin{equation} \label{est:13}
\|R^k_4(U)\|_{L_p(\mathbb{R},H^1_q(\Omega)) \cap H_p^{1/2}(\mathbb{R},L_q(\Omega))} \leq C(M,L)E(T). \end{equation} Now, combining \eqref{est:1}, \eqref{est:2}, \eqref{est:3}, \eqref{est:13} and \eqref{lin1:5d} we obtain \eqref{est:nonlin}, which completes the proof of Proposition \ref{prop:est}.
\subsection{Fixed point argument} Theorem \ref{thm:lin2} allows us to define an operator $(\sigma,\boldsymbol{v},\vartheta)={\mathcal S}(\bar \sigma,\bar \boldsymbol{v},\bar \vartheta)$ as a solution to system \eqref{lin1:sys} with the right hand side $f_1(\bar U), {\bf f}_2(\bar U),f^k_3(\bar U),f^k_4(\bar U)$, where $\bar U=(\bar \sigma,\bar \boldsymbol{v},\bar \vartheta)$. From Proposition \ref{prop:est} combined with Theorem \ref{thm:lin2} we easily verify that for any $M>0$ $$ {\mathcal S}:{\mathcal H}_{T,M} \to {\mathcal H}_{T,M} $$ is well defined provided $T>0$ is sufficiently small. It remains to show that ${\mathcal S}$ is a contraction on ${\mathcal H}_{T,M}$. For this purpose we show \begin{prop} \label{prop:est_dif} Let $\bar U_1=(\bar \sigma_1,\bar \boldsymbol{v}_1,\bar \vartheta_1),\bar U_2=(\bar \sigma_2,\bar \boldsymbol{v}_2,\bar \vartheta_2) \in {\mathcal H}_{T,M}$ for given $T,M>0$, where the initial conditions satisfy the assumptions of Theorem \ref{thm:main2}. Let $f_1(U),f_2(U),f_3^k(U)$ and $f^k_4(U)$ be given by \eqref{lin1:5a}-\eqref{lin1:5d}, where $R_1(U),R_2(U),R_3^k(U)$ and $R^k_4(U)$ are defined in \eqref{lag:6},\eqref{lag:7},\eqref{lag:8}-\eqref{lag:10} and \eqref{lag:9}, respectively. Then \begin{align} \label{est:dif1}
&\|f_1(\bar U_1)-f_1(\bar U_2)\|_{L_p(0,T;H^1_q(\Omega))} + \|{{\vc f}_2}(\bar U_1)-{{\vc f}_2}(\bar U_2)\|_{L_p(0,T;L_q(\Omega)^3)}
+\|\vec f_3(\bar U_1)-\vec f_3(\bar U_2)\|_{L_p(0,T;L_q(\Omega)^{n-1})}\nonumber\\
&+\|\vec f_4(\bar U_1)-\vec f_4(\bar U_2)\|_{L_p(0,T;H^1_q(\Omega)^{n-1})}
+ \|\vec f_4(\bar U_1)-\vec f_4(\bar U_2)\|_{H^{1/2}_p(\mathbb{R},L_q(\Omega)^{n-1})} \leq C(M,L)E(T) [\bar U_1-\bar U_2]_T. \end{align} \end{prop} \emph{Proof}. The precise form of the terms on the left hand side of \eqref{est:dif1} is rather complicated; what is essential, however, is that it contains only terms which are products of $\bar \boldsymbol{v}_1-\bar \boldsymbol{v}_2$, $\bar \sigma_1-\bar \sigma_2$ or $\bar \vartheta_1-\bar \vartheta_2$ with quantities which are small for small times. Therefore, following the lines of the proof of Proposition \ref{prop:est} we obtain \eqref{est:dif1}.
\rightline{ $\square$}
Now we can subtract the systems for $U_1$ and $U_2$ to obtain a linear problem for $U_1-U_2$ with the same left hand side structure as in \eqref{lag:sys}, zero initial and boundary conditions, and a right hand side which is estimated in \eqref{est:dif1}. Therefore, combining Proposition \ref{prop:est_dif} and Theorem \ref{thm:lin2} we obtain \begin{equation} [{\mathcal S}(U_1)-{\mathcal S}(U_2)]_T \leq E(T) [U_1 - U_2]_T, \end{equation} which implies that for any $M>0$, $\mathcal S$ is a contraction on ${\mathcal H}_{T,M}$ for sufficiently small $T$. Therefore, application of the Banach fixed point theorem to $\mathcal S$ completes the proof of Theorem \ref{thm:main2}.
\rightline{ $\square$}
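The contraction scheme just used can be illustrated in miniature. The scalar map below is a hypothetical stand-in for the operator $\mathcal S$, not the actual solution operator: any map with Lipschitz constant below one has a unique fixed point, which Picard iteration reaches geometrically.

```python
# Hypothetical scalar stand-in for the solution operator S: any map with
# |S(u) - S(v)| <= L |u - v| and L < 1 is a contraction, so by the Banach
# fixed point theorem Picard iteration converges to its unique fixed point.
def S(u):
    return 0.5 * u + 1.0  # Lipschitz constant L = 0.5

u = 0.0
for _ in range(60):
    u = S(u)

# the unique fixed point of u = 0.5*u + 1 is u* = 2; the iteration
# error contracts like L^k
assert abs(u - 2.0) < 1e-12
```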
\section*{Appendix: Proof of Proposition \ref{thm:main0}} \subsection*{Derivation of the normal form} The proof of Proposition \ref{thm:main0} is split into several steps. First we derive the normal form of system \eqref{1.1}. By the change of unknowns \eqref{def:psi} we have \begin{equation} \label{norm:1} [\nabla \varrho,\nabla h_1, \ldots, \nabla h_{n-1}]^T = A [\nabla \varrho_1,\ldots, \nabla \varrho_n]^T \end{equation} with \begin{equation} \label{norm:2} A = \left( \begin{array}{cc} 1 & 1_{1\times (n-1)} \\[5pt] \left(-\frac{1}{m_1\varrho_1}\right)_{(n-1) \times 1} & {\rm diag}\left(\frac{1}{m_2\varrho_2},\ldots,\frac{1}{m_n\varrho_n}\right) \\ \end{array}\right). \end{equation} The matrix $A$ is diagonal except for its first row and first column, which also have a simple structure. It is therefore easy to observe that its inverse reads \begin{equation} \label{norm:4} A^{-1} = \left( \begin{array}{cc} \frac{m_1\varrho_1}{\Sigma_\varrho} & \left[\left(-\frac{m_1\varrho_1 m_k \varrho_k}{\Sigma_\varrho}\right)_{k=2 \ldots n}\right]_{1 \times (n-1)}\\[5pt] \left[\left(\frac{m_k\varrho_k}{\Sigma_\varrho}\right)_{k=2, \ldots, n}\right]_{(n-1)\times 1} & {\cal R} \end{array} \right), \end{equation} where \begin{equation} \label{def:sigma} \Sigma_\varrho=\sum_{k=1}^n m_k \varrho_k \end{equation} and ${\cal R}$ is a matrix of dimension $n-1$ given by \begin{equation} \label{def:Rkl} {\cal R}_{kl}=m_{k+1}\varrho_{k+1}\delta_{kl}-\frac{m_{k+1}m_{l+1}\varrho_{k+1}\varrho_{l+1}}{\Sigma_{\varrho}}, \quad k,l=1,\ldots, n-1. \end{equation} Therefore, from \eqref{norm:1} we obtain \begin{equation} \label{norm:3} [\nabla \varrho_1,\ldots, \nabla \varrho_n]^T=A^{-1}[\nabla \varrho,\nabla h_1, \ldots, \nabla h_{n-1}]^T \end{equation} and, analogously, for the time derivative \begin{equation} \label{norm:3a} [\partial_t \varrho_1,\ldots, \partial_t \varrho_n]^T=A^{-1}[\partial_t \varrho,\partial_t h_1, \ldots, \partial_t h_{n-1}]^T. 
\end{equation} From \eqref{norm:3}, \eqref{norm:3a}, and \eqref{norm:4} we infer \begin{equation} \label{norm:4a} \partial_t \varrho_{k+1} + \boldsymbol{u} \cdot \nabla \varrho_{k+1} = \frac{m_{k+1}\varrho_{k+1}}{\Sigma_\varrho}(\partial_t \varrho+\boldsymbol{u} \cdot \nabla \varrho) +\sum_{l=1}^{n-1}{\cal R}_{kl}(\partial_t h_l+\boldsymbol{u} \cdot \nabla h_l), \quad k=1,\ldots,n-1. \end{equation} However, from \eqref{1.1} we have $$ \partial_t \varrho + \boldsymbol{u} \cdot \nabla \varrho = -\varrho \operatorname{div} \boldsymbol{u} $$ as well as $$ \partial_t \varrho_k + \boldsymbol{u} \cdot \nabla \varrho_k = -\varrho_k \operatorname{div} \boldsymbol{u} - \operatorname{div} \boldsymbol{F}_k. $$
Inserting these relations into \eqref{norm:4a} we obtain \begin{equation} \label{norm:5} \sum_{l=1}^{n-1} {\cal R}_{kl}(\partial_t h_l+\boldsymbol{u} \cdot \nabla h_l)+\left(\varrho_{k+1} - \frac{m_{k+1}\varrho_{k+1}\varrho}{\Sigma_\varrho}\right)\operatorname{div} \boldsymbol{u} = -\operatorname{div} {\boldsymbol{F}}_{k+1}. \end{equation} We can further rewrite the right hand side of the above equations. For this purpose we observe that $$ -\frac{\nabla p_1}{\varrho}\left( \frac{1}{\varrho_1} \sum_{l=2}^n \varrho_l C_{kl} + C_{k1}\right)= -\frac{\nabla p_1}{\varrho_1}\sum_{l=1}^n Y_l C_{kl}=0 $$ due to \eqref{prop_C}.
Therefore, denoting \begin{equation} \label{def:barm} \bar m = \frac{\varrho}{p} \end{equation} we obtain from \eqref{eq:diff1} \eq{ \label{norm:6} -\boldsymbol{F}_k &= \frac{1}{p}\sum_{l=1}^n C_{kl} \nabla p_l\\ &= \frac{\bar m}{\varrho}\left[\sum_{l=1}^n C_{kl}\nabla p_l - \nabla p_1\left(\frac{1}{\varrho_1}\sum_{l=2}^n C_{kl}\varrho_l + C_{k1}\right)\right]\\ &=\frac{\bar m}{\varrho}\sum_{l=2}^n C_{kl} \left(\nabla p_l-\frac{\varrho_l}{m_1}\frac{\nabla \varrho_1}{\varrho_1}\right)\\ &=\frac{\bar m}{\varrho}\sum_{l=2}^n \varrho_l C_{kl}\left( \frac{\nabla \varrho_l}{m_l \varrho_l} - \frac{\nabla \varrho_1}{m_1 \varrho_1}\right)\\ &=\frac{\bar m}{\varrho}\sum_{l=2}^n\varrho_k\varrho_l D_{kl}\nabla h_{l-1}. } Let us now transform the pressure term. From \eqref{norm:3} we have \eq{ \label{norm:7} \nabla p &= \sum_{k=1}^n \frac{\nabla \varrho_k}{m_k}\\ &=\frac{1}{m_1}\left( \frac{m_1\varrho_1}{\Sigma_\varrho}\nabla \varrho -\sum_{k=2}^n \frac{m_1\varrho_1m_k\varrho_k}{\Sigma_\varrho}\nabla h_{k-1}\right)\\ &\quad +\sum_{l=2}^n\frac{1}{m_l}\left( \frac{m_l\varrho_l}{\Sigma_\varrho}\nabla \varrho + m_l\left(\varrho_l-\frac{m_l\varrho_l^2}{\Sigma_\varrho}\right)\nabla h_{l-1}-\sum_{\substack{1<k\leq n\\k\neq l}} \frac{m_l\varrho_lm_k\varrho_k}{\Sigma_\varrho}\nabla h_{k-1}\right)\\ & =\frac{\varrho}{\Sigma_\varrho}\nabla \varrho + \sum_{k=1}^{n-1}A_k\nabla h_k, } where we denoted \begin{equation} \label{norm:8} A_k = \varrho_{k+1}-\frac{1}{\Sigma_\varrho}\left[m_{k+1}\varrho_{k+1}^2+m_{k+1}\varrho_{k+1}\sum_{l\neq k+1}\varrho_l\right]=\varrho_{k+1}-\frac{m_{k+1}\varrho_{k+1}\varrho}{\Sigma_\varrho}. \end{equation} From \eqref{norm:5}-\eqref{norm:8} we obtain the explicit form of the symmetrized system \eqref{sys:normal}.
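The linear algebra above is easy to sanity-check numerically. The sketch below (an illustrative check; $n$, $m_k$ and $\varrho_k$ are arbitrary positive test data, not physical values) builds $A$ from \eqref{norm:2} and the claimed inverse from \eqref{norm:4} and \eqref{def:Rkl}, and verifies $AA^{-1}=I$ together with the coercivity bound for ${\cal R}$ proved in the next subsection.

```python
import numpy as np

# arbitrary positive test data (illustrative, not physical)
rng = np.random.default_rng(0)
n = 4
m = rng.uniform(1.0, 3.0, size=n)        # molar masses m_k
rho = rng.uniform(0.5, 2.0, size=n)      # partial densities rho_k
Sigma = np.sum(m * rho)                  # Sigma_rho

# A: first row (1,...,1), first column -1/(m_1 rho_1), diagonal 1/(m_k rho_k)
A = np.zeros((n, n))
A[0, :] = 1.0
A[1:, 0] = -1.0 / (m[0] * rho[0])
A[1:, 1:] = np.diag(1.0 / (m[1:] * rho[1:]))

# claimed inverse, with lower-right block R_{kl} as in the text
R = np.diag(m[1:] * rho[1:]) - np.outer(m[1:] * rho[1:], m[1:] * rho[1:]) / Sigma
Ainv = np.zeros((n, n))
Ainv[0, 0] = m[0] * rho[0] / Sigma
Ainv[0, 1:] = -m[0] * rho[0] * m[1:] * rho[1:] / Sigma
Ainv[1:, 0] = m[1:] * rho[1:] / Sigma
Ainv[1:, 1:] = R

assert np.allclose(A @ Ainv, np.eye(n))

# coercivity bound from the lemma in the next subsection:
# lambda_min(R) >= (m_1 rho_1 / Sigma) * min_{k>=2} m_k rho_k
bound = m[0] * rho[0] / Sigma * np.min(m[1:] * rho[1:])
assert np.linalg.eigvalsh(R).min() >= bound - 1e-12
```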
Now we have to rewrite the boundary conditions \eqref{bc} for the symmetrized system \eqref{sys:normal}. First note that with the equation for $\varrho_1$ omitted, the system \eqref{sys:normal} needs to be supplemented only with boundary conditions for the last $n-1$ species densities; due to \eqref{norm:6} we get \begin{equation} \label{bc:normalD} \boldsymbol{u}=0, \quad \frac{\bar m}{\varrho}\sum_{l=2}^n \varrho_k \varrho_l D_{kl}\nabla h_{l-1} \cdot \boldsymbol{n} = 0, \quad k=2,\ldots,n, \quad\mbox{on}\ (0,T)\times\partial\Omega \end{equation} which is exactly \eqref{bc:normal} and is a natural boundary condition in view of the second order term in \eqref{sys:normal}$_3$.
\subsection*{Coercivity properties} Recall that Lemma \ref{lem:1} gives a positive lower bound on the fractional densities. We are now ready to prove coercivity of ${\cal R}$. Below, $\xi=(\xi_1,\ldots,\xi_{n-1})$ is a vector of complex numbers, $\overline{\xi}=(\overline{\xi_1},\ldots,\overline{\xi_{n-1}})$ is the vector of their complex conjugates, and $\langle\cdot,\cdot\rangle$ is the scalar product in $\mathbb{C}^{n-1}$.
\begin{lem} \label{l:R} Let the assumptions of Lemma \ref{lem:1} be satisfied.
Then there exists a constant $C_1>0$ independent of $(x,t)$ such that \begin{equation} \label{coerc:R}
\langle{\cal{R}}(x,t)\xi,\overline{\xi}\rangle \geq C_1|\xi|^2. \end{equation} \end{lem} \emph{Proof.} Notice first that ${\cal R}_{kk}>0$ for every $k=1,\ldots, n-1$. We rewrite ${\cal R}_{kk}$ as $$ {\cal R}_{kk}=\frac{1}{\Sigma_{\varrho}}m_{k+1}\varrho_{k+1}(\Sigma_{\varrho}-m_{k+1}\varrho_{k+1})= \frac{1}{\Sigma_{\varrho}}m_{k+1}\varrho_{k+1}\sum_{l=1,\ l\neq k+1}^{n}m_l\varrho_l. $$ Then, due to the symmetry of ${\cal R}$, we have \eq{
\langle{\cal R}\xi,\overline{\xi}\rangle&=\sum_{k=1}^{n-1}{\cal R}_{kk}|\xi_k|^2 + \sum_{l=1}^{n-1}\sum_{k<l}{\cal R}_{kl}(\xi_k\overline{\xi_l}+\xi_l\overline{\xi_k})\\ &\geq
\sum_{k=1}^{n-1}{\cal R}_{kk}|\xi_k|^2- \sum_{l=1}^{n-1}\sum_{k<l}|{\cal R}_{kl}|(|\xi_k|^2+|\xi_l|^2)\\ &=
\frac{m_1\varrho_1}{\Sigma_{\varrho}}\sum_{k=1}^{n-1}m_{k+1}\varrho_{k+1}|\xi_k|^2\\ &
\geq \frac{m_1\varrho_1}{\Sigma_{\varrho}}{\rm min}_{k\neq 1}\{m_k\varrho_k\}|\xi|^2, } which proves \eqref{coerc:R}.
\rightline{ $\square$} Although \eqref{prop_D} implies only semi-definiteness $D \geq 0$, the change of unknowns introduced in the previous section, and the resulting reduction by one row and column, enables us to deduce ellipticity of the resulting matrix from the properties of $D$. The next lemma shows the coercivity of ${\cal{B}}$. \begin{lem} \label{l:B} Assume that one of Conditions 1,2 from Proposition \ref{thm:main0} holds. Then there exists a constant $C_2>0$ independent of $(x,t)$ such that \begin{equation} \label{coerc:B}
\langle {\cal{B}}(x,t)\xi,\overline{\xi}\rangle \geq C_2|\xi|^2 \quad \forall \; (x,t) \in \Omega \times [0,T]. \end{equation} \end{lem}
\noindent \emph{Proof.} It is convenient to rewrite the entries of ${\cal{B}}$ as \begin{equation} {\cal{B}}_{kl}=\frac{\varrho}{p} Y_{k+1}Y_{l+1}\frac{C_{k+1,l+1}}{Y_{k+1}}=\frac{\varrho}{p} Y_{l+1} C_{k+1,l+1}. \end{equation} Under Condition 1 we therefore have \begin{equation} {\cal{B}}=\frac{\varrho}{p} \left( \begin{array}{cccc} Y_2Z_2 & -Y_2Y_3 & \ldots & - Y_2Y_n\\ -Y_3Y_2 & Y_3Z_3 & \ldots & - Y_3Y_n\\ \ldots & & &\\ -Y_nY_2 & \ldots & & Y_nZ_n \end{array} \right). \end{equation} In order to compute ${\rm det} \,{\cal{B}}$ we transform the matrix with elementary operations. First we add the first $n-2$ rows to the last one. Denoting the new matrix by ${\cal{B}}^1$ we have $$ {\cal{B}}^1_{nn}=Y_nZ_n-Y_n \sum_{j=2}^{n-1}Y_j = Y_nY_1 $$ and for $k<n$ we have $$ {\cal{B}}^1_{nk}=-Y_nY_k+Y_kZ_k-Y_k\sum_{\substack{j \neq k\\ 2\leq j\leq n-1}}Y_j=Y_kY_1, $$ therefore \begin{equation} {\cal{B}}^1=\frac{\varrho}{p} \left( \begin{array}{cccc} Y_2Z_2 & -Y_2Y_3 & \ldots & - Y_2Y_n\\ -Y_3Y_2 & Y_3Z_3 & \ldots & - Y_3Y_n\\ \ldots & & &\\ Y_1Y_2 & Y_1Y_3 & \ldots & Y_1Y_n \end{array} \right). \end{equation} Notice that all entries of the last column contain $Y_n$ and all entries of the last row contain $Y_1$, therefore \begin{equation} \label{B:2} {\rm det} \,{\cal{B}} = \left(\frac{\varrho}{p}\right)^{n-1} Y_1Y_n {\rm det}\underbrace{\left( \begin{array}{cccc} Y_2Z_2 & -Y_2Y_3 & \ldots & - Y_2\\ -Y_2Y_3 & Y_3Z_3 & \ldots & - Y_3\\ \ldots & & &\\ Y_2 & Y_3 & \ldots & 1 \end{array} \right)}_{{\cal{B}}^2}. \end{equation} Now we can easily diagonalize part of the above matrix. For this purpose we add to each $k$-th row, $k=1, \ldots, n-2$, the last row multiplied by $Y_{k+1}$. Then all the entries of these rows except the diagonal ones become zero. Namely, we have $$ {\cal{B}}^2_{k,\cdot} + Y_{k+1}{\cal{B}}^2_{n,\cdot} = Y_{k+1}\sum_{j=1}^nY_j \, {\bf e}_k. 
$$ Therefore \eqref{B:2} yields \begin{equation} {\rm det}\, {\cal{B}} = \left(\frac{\varrho}{p}\right)^{n-1}\prod_{k=1}^n Y_k \left(\sum_{k=1}^nY_k \right)^{n-2} \geq C >0, \end{equation} since $Y_k(x,t)>C$ for every $k=1,\ldots,n$ uniformly w.r.t. $(x,t)$, due to \eqref{rhoidown}. Next, denoting \begin{equation} \label{minors}
{\rm det} \,{\cal{B}}_k = \left| \begin{array}{ccc} {\cal{B}}_{11} & \ldots & {\cal{B}}_{1k} \\ \vdots&\ddots&\vdots\\
{\cal{B}}_{k1}&\ldots & {\cal{B}}_{kk}
\end{array} \right| \end{equation} we have ${\rm det}\,{\cal{B}}_{k}>0$. Therefore, all the leading principal minors of the matrix ${\cal{B}}$ are positive and hence we have shown \begin{equation} \label{detB:pos} {\cal{B}}(x,t)>0, \quad {\rm det}{\cal{B}}(x,t) \geq C>0 \quad \textrm{uniformly in} \; (x,t). \end{equation} Now from \eqref{detB:pos} it is easy to deduce \eqref{coerc:B}. For this purpose note that the eigenvectors $\zeta_i(x,t)$ of ${\cal{B}}(x,t)$ form an orthonormal basis of $\mathbb{R}^{n-1}$ and ${\cal{B}}(x,t)$ in this basis takes the form \begin{equation} \label{B:diag} {\cal{B}}(x,t)={\rm diag}(\lambda_1(x,t),\ldots,\lambda_{n-1}(x,t)),\quad \lambda_i(x,t)\geq C>0 \quad \textrm{uniformly in} \;(x,t). \end{equation} Therefore, denoting $\xi=\sum_{i=1}^{n-1} \alpha_i\zeta_i$ we have $$
\langle {\cal{B}}(x,t)\xi,\bar\xi\rangle=\sum_{i=1}^{n-1}\lambda_i(x,t)|\alpha_i|^2 \geq {\rm min}_i \{\lambda_i(x,t)\}\sum_{i=1}^{n-1} |\alpha_i|^2 \geq C |\xi|^2 \quad \textrm{uniformly in} \; (x,t). $$
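The determinant computation under Condition 1 can be checked numerically. The sketch below assumes $Z_k=\sum_{j\neq k}Y_j$ (as used in the row operations above); the size $n$, the mass fractions and the value of $\varrho/p$ are arbitrary test data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
Y = rng.uniform(0.1, 1.0, size=n)
Y /= Y.sum()                        # mass fractions, sum_k Y_k = 1
r = 1.7                             # stand-in value for the factor rho/p

Z = Y.sum() - Y                     # Z_k = sum_{j != k} Y_j (assumed)
B = -r * np.outer(Y[1:], Y[1:])     # off-diagonal entries -Y_{k+1} Y_{l+1}
np.fill_diagonal(B, r * Y[1:] * Z[1:])

# det B = (rho/p)^{n-1} * prod_k Y_k * (sum_k Y_k)^{n-2}; here sum_k Y_k = 1
assert np.isclose(np.linalg.det(B), r**(n - 1) * np.prod(Y))
# B is positive definite, in line with the coercivity claim
assert np.linalg.eigvalsh(B).min() > 0
```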
Now let us consider a general form of $ D$ satisfying the assumptions \eqref{prop_D}. In this case we use the form of ${\cal{B}}$ as in \eqref{lag:5b}. In particular, each entry of the $k$-th row of ${\cal{B}}$ contains $Y_{k+1}$, therefore \begin{equation}
{\rm det} \,{\cal{B}} = \lr{\frac{\varrho^2}{p}}^{n-1}Y_2 \ldots Y_n \left| \begin{array}{cccc} Y_2D_{22} & Y_3D_{23} & \ldots & Y_n D_{2n}\\ Y_2D_{32} & Y_3D_{33} & \ldots & Y_n D_{3n}\\ \ldots & & &\\ Y_2D_{n2} & Y_3D_{n3} & \ldots & Y_n D_{nn} \end{array}
\right| \end{equation} Similarly, since each entry of the $k$-th column contains $Y_{k+1}$, we have \begin{equation} {\rm det} \,{\cal{B}} = \left(\frac{\varrho^2}{p}\right)^{n-1} (Y_2 \ldots Y_n)^2 \, {\rm det} \underbrace{ \left( \begin{array}{cccc} D_{22} & D_{23} & \ldots & D_{2n}\\ D_{32} & D_{33} & \ldots & D_{3n}\\ \ldots & & &\\ D_{n2} & D_{n3} & \ldots & D_{nn} \end{array} \right)}_{:=\bar D}. \end{equation} Due to \eqref{rhoidown} we have $Y_2 \ldots Y_n \geq C>0$, and so the whole coefficient in front of the matrix $\bar D$ is positive. Notice however that we only have $D \geq 0$ in general, but $\bar D$ is an $(n-1)\times(n-1)$ sub-matrix of $D$ for which we can show positive definiteness. Assume on the contrary that there is a vector $ [v_2,\ldots,v_n]\neq 0$ such that \begin{equation*} \bar D [v_2,\ldots,v_n] = 0. \end{equation*} Then one would also have that $$
D[0,v_2,\ldots,v_n]=0, $$ which is in contradiction with the fact that ${\rm Ker}D = {\rm lin}\{\vec{Y}\}$ and all $Y_k$ are strictly positive.
Similarly we show that the minors \eqref{minors} are positive, hence we conclude that \begin{equation} \label{Dpos}
\bar D(x,t)>0. \end{equation} Now, since for each fixed $(x,t)$ the matrix $\bar D(x,t)$ defines a linear operator, we have \begin{equation} \label{coerc:2}
\forall (x,t) \in \Omega \; \exists c(x,t)>0 \; s.t. \; \langle\bar D(x,t)\xi,\bar \xi\rangle \, \geq \, c(x,t)|\xi|^2, \end{equation} where $$
c(x,t)={\rm min}_{|\xi|=1}\langle\bar D(x,t)\xi,\bar \xi\rangle. $$ Finally, if Condition 2 is satisfied, then $c(x,t)$ is a positive continuous function on the compact set $\overline{\Omega}\times [0,T]$, hence $$ \exists \kappa>0: \;c(x,t) \geq \kappa \quad \forall \; (x,t) \in \overline{\Omega} \times [0,T], $$ which completes the proof.
\rightline{ $\square$}
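The key linear-algebra fact used above, that deleting the first row and column of a symmetric positive semi-definite $D$ with ${\rm Ker}\, D={\rm lin}\{\vec{Y}\}$ and $Y_k>0$ yields a positive definite sub-matrix, can be tested on a synthetic example. The construction of $D$ below is an illustrative assumption, not the physical diffusion matrix.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
Y = rng.uniform(0.1, 1.0, size=n)          # strictly positive vector

# toy symmetric D >= 0 with Ker D = lin{Y}: project a random SPD matrix
# onto the orthogonal complement of Y
M = rng.normal(size=(n, n))
M = M @ M.T + n * np.eye(n)                # symmetric positive definite
P = np.eye(n) - np.outer(Y, Y) / (Y @ Y)   # orthogonal projector onto Y^perp
D = P @ M @ P

assert np.allclose(D @ Y, 0)               # Y spans the kernel
Dbar = D[1:, 1:]                           # delete first row and column
assert np.linalg.eigvalsh(Dbar).min() > 0  # sub-matrix is positive definite
```

The last assertion reflects exactly the contradiction argument in the proof: a kernel vector of $\bar D$ would extend by a leading zero to a kernel vector of $D$ outside ${\rm lin}\{\vec{Y}\}$.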
\begin{rmk} The method which we applied for the special structure \eqref{Cform} can to some extent be repeated for a general matrix, using the fact that ${\rm Ker}\, D={\rm lin}\{\vec{Y}\}$. However, in the last step we do not obtain a diagonal sub-matrix but only a matrix with modified entries. For this matrix, coercivity could probably be shown under some additional assumptions on $D$, also for unbounded domains; we leave this direction for future investigation. \end{rmk}
\end{document}
\begin{document}
\title{A Galerkin least squares approach for photoacoustic tomography}
\begin{abstract} The development of fast and accurate image reconstruction algorithms is a central aspect of computed tomography. In this paper we address this issue for photoacoustic computed tomography (PAT) in circular geometry. We investigate the Galerkin least squares method for that purpose. For approximating the function to be recovered we use subspaces of translation invariant spaces generated by a single function. This includes many systems that have previously been employed in PAT, such as generalized Kaiser-Bessel basis functions or the natural pixel basis. By exploiting an isometry property of the forward problem we are able to efficiently set up the Galerkin equation for a wide class of generating functions and devise efficient algorithms for its solution. We establish a convergence analysis and present numerical simulations that demonstrate the efficiency and accuracy of the derived algorithm.
\noindent \textbf{Key words:} Photoacoustic imaging, computed tomography, Galerkin least squares method, Kaiser-Bessel functions, Radon transform, least-squares approach.
\noindent \textbf{AMS subject classification:} 65R32, 45Q05, 92C55. \end{abstract}
\section{Introduction} \label{sec:intro}
Photoacoustic tomography (PAT) is an emerging non-invasive tomographic imaging modality that allows high resolution imaging with high contrast. Applications range from breast screening in patients to whole body imaging of small animals \cite{beard2011biomedical,ntziachristos2005looking,KruKisReiKruMil03,wang2012photoacoustic}. The basic principle of PAT is as follows. If a semitransparent sample is illuminated with a short optical pulse, then parts of the optical energy are absorbed inside the sample (see Figure~\ref{fig:pat}). This causes a rapid thermoelastic expansion, which in turn induces an acoustic pressure wave. The pressure wave is measured outside of the sample and used for reconstructing an image of the interior.
\begin{figure}
\caption{ \textsc{Basic principle of PAT.} Pulsed optical illumination and subsequent thermal expansion induces an acoustic pressure wave. The pressure wave is measured outside of the object and used to obtain an image of the interior.}
\label{fig:pat}
\end{figure}
In this paper we work with the standard model of PAT, where the acoustic pressure $p \colon \mathbb{R}^d \times \kl{0, \infty} \to \mathbb{R}$ solves the standard wave equation \begin{equation} \label{eq:wave-fwd}
\left\{ \begin{aligned}
&\partial_t^2 p (x,t) - \Delta_x p(x,t)
=
0 \,,
&& \text{ for }
\kl{x,t} \in
\mathbb{R}^d \times \kl{0, \infty} \,,
\\
&p\kl{x,0}
=
f(x) \,,
&& \text{ for }
x \in \mathbb{R}^d \,,
\\
&\partial_t
p\kl{x,0}
=0 \,,
&& \text{ for }
x \in \mathbb{R}^d \,. \end{aligned} \right. \end{equation} Here $d$ is the spatial dimension, $f \colon \mathbb{R}^d \to \mathbb{R}$ the absorbed energy distribution, $\Delta_x$ the spatial Laplacian, and
$\partial_t$ the derivative with respect to the time variable $t$. The speed of sound is assumed to be constant and has been rescaled to one. We further suppose that $f$ vanishes outside an open ball $B_R(0) \subseteq \mathbb{R}^d$. The goal of PAT is to recover the function $f$ from measurements of $\mathbf W f \coloneqq p|_{ \partial B_R(0) \times (0, \infty) }$. Evaluation of $\mathbf W$ is referred to as the direct problem and the problem of reconstructing $f$ from (possibly approximate) knowledge of $\mathbf W f $ as the inverse problem of PAT. The cases $d=3$ and $d=2$ are of practical relevance in PAT (see \cite{kuchment2011mathematics,BurBauGruHalPal07}).
In recent years several solution methods for the inverse problem of PAT have been derived. These approaches can be classified into direct methods on the one hand and iterative (model based) approaches on the other. Direct methods are based on explicit solutions for inverting the operator $\mathbf W$ that can be implemented numerically. This includes time reversal (see \cite{burgholzer2007exact,Hristova2008,FinPatRak04,nguyen2016dissipative,Treeby10}), Fourier domain algorithms (see \cite{AgrKuc07,HalSchBurNusPal07,Kun07b,palamodov2012uniform,salman14inversion,xu2002timedomain}), and explicit reconstruction formulas of the back-projection type (see \cite{ansorg2013summability,FinHalRak07,FinPatRak04,haltmeier13inversion,haltmeier14universal,HalPer15a,HalPer15b,Kun07a,kunyansky2015inversion,natterer2012photo,nguyen2009family,xu2005universal}). Model based iterative approaches, on the other hand, are based on a discretization of the forward problem together with numerical solution methods for solving the resulting system of linear equations. Existing iterative approaches use interpolation based discretization (see \cite{DeaBueNtzRaz12,PalNusHalBur07b,PalViaPraJac02,RosNtzRaz13,zhang2009effects}) or approximation using radially symmetric basis functions (see \cite{wang2014discrete,wang2012investigation}). Recently, also iterative schemes using a continuous domain formulation of the adjoint have been studied, see \cite{arridge2016adjoint,belhachmi2016direct,haltmeier2016iterative}. Direct methods are numerically efficient and robust and have similar complexity as numerically evaluating the forward problem. Iterative methods typically are slower since the forward and adjoint problems have to be evaluated repeatedly. 
However, iterative methods have the advantage of being flexible, as one can easily add regularization terms and incorporate measurement characteristics such as finite sampling, finite bandwidth and finite detector size (see \cite{DeaBueNtzRaz12,haltmeier2010spatial,RoiEtAl14,wang2011imaging,wang2014discrete,XuWan03}). Additionally, iterative methods tend to be more accurate in the case of noisy data.
\subsection{Proposed Galerkin least squares approach}
In this paper we develop a Galerkin approach for PAT that combines advantages of direct and model based approaches. Our method comes with a clear convergence theory, sharp error estimates and an efficient implementation. The Galerkin least squares method for $\mathbf W f = g$ consists in finding a minimizer of the restricted least squares functional, \begin{equation} \label{eq:LN}
f_N \coloneqq \argmin \set{ \norm{\mathbf W h - g } \mid h \in \mathcal X_N } \,, \end{equation} where $\mathcal X_N$ is a finite dimensional reconstruction space and $\norm{\,\cdot\,} $ an appropriate Hilbert space norm. If $(\varphi^k_N)_{k \in\Lambda_N}$ is a basis of $\mathcal X_N$ then $f_N = \sum_{k \in\Lambda_N} c_{N,k}\varphi^k_N$, where $c_N=(c_{N,k})_{k\in \Lambda_N} $ is the unique solution of the Galerkin equation \begin{equation}\label{eq:lsgal} \mathbf A_N c_N = (\ip{\mathbf W\varphi^k_N}{g})_{k \in\Lambda_N} \qquad \text{ with } \; \mathbf A_N \coloneqq (\ip{\mathbf W \varphi^k_N}{\mathbf W \varphi_N^\ell})_{k,\ell \in \Lambda_N} \,. \end{equation} We call the matrix $\mathbf A_N$ the (discrete) imaging matrix.
In general, both the computation of the imaging matrix as well as the solution of the Galerkin equation can be numerically expensive. In this paper we demonstrate that for the inverse problem of PAT, both issues can be efficiently implemented. These observations are based on the following: \begin{itemize} \item \textsc{Isometry property.} Using the isometry property of \cite{FinHalRak07,FinPatRak04} one shows that the entries of the system matrix are given by $\tfrac{R}{2}\ip{ \varphi^k_N}{ \varphi_N^\ell}_{L^2}$; see~Theorem \ref{thm:leastsquares}.
\item \textsc{Shift invariance.} If, additionally, we take the basis functions $\varphi^k_N$ as translates of a single generating function $\varphi \in L^2(\mathbb{R}^d)$, then $\ip{ \varphi^k_N}{ \varphi_N^\ell}_{L^2} = \ip{ \varphi^0_N}{ \varphi_N^{k-\ell}}_{L^2}$ for $k, \ell \in \Lambda_N \subseteq \mathbb{Z}^d$ . \end{itemize} Consequently only $2^d \abs{\Lambda_N}$ inner products have to be computed in our Galerkin approach opposed to $\abs{\Lambda_N}(\abs{\Lambda_N}+1)/2 = \mathcal{O}(\abs{\Lambda_N}^2)$ inner products required in the general case. Further, the resulting shift invariant structure of the system matrix allows to efficiently solve the Galerkin equation.
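A one-dimensional toy computation illustrates both points; the Gaussian generator, grid and sizes below are illustrative choices, not the Kaiser-Bessel setup used later.

```python
import numpy as np

# 1D toy version of the shift-invariance argument: basis functions are
# translates phi_k(x) = phi(x - k*h) of a single generator, so the Gram
# entries <phi_k, phi_l> depend on k - l only and K inner products suffice
# to assemble the K x K Galerkin matrix.
h, K, sigma, dx = 0.3, 40, 0.15, 0.005
x = np.arange(-8.0, 20.0, dx)
phi = lambda t: np.exp(-t**2 / (2 * sigma**2))
ip = lambda k, l: np.sum(phi(x - k * h) * phi(x - l * h)) * dx

col = np.array([ip(0, k) for k in range(K)])            # K inner products
G = np.array([[col[abs(k - l)] for l in range(K)] for k in range(K)])

# brute-force Gram matrix agrees with its Toeplitz reconstruction
G_full = np.array([[ip(k, l) for l in range(K)] for k in range(K)])
assert np.allclose(G, G_full)

# the Galerkin equation A_N c_N = d_N then reduces to a Toeplitz system,
# for which fast (Levinson- or FFT-based) solvers are available
d = np.random.default_rng(3).normal(size=K)
c = np.linalg.solve(G, d)
assert np.allclose(G @ c, d)
```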
Note that shift invariant spaces are frequently employed in computed tomography and include spline spaces, spaces of bandlimited functions, or spaces generated by Kaiser-Bessel functions. In this paper we will especially use Kaiser-Bessel functions, which are often considered the most suitable basis for computed tomography \cite{herman2015basis,lewitt1992alternatives,matej1996practical,nilchian2015optimized}. For use in PAT they were first proposed in~\cite{wang2014discrete}. We are not aware of existing Galerkin approaches for tomographic image reconstruction exploiting isometry and shift invariance. However, we anticipate that similar methods can be derived for other tomographic problems where an isometry property is known (such as X-ray based CT~\cite{Kuc14,Nat01}). We further note that our approach has close relations to the method of approximate inverse, which has frequently been applied to computed tomography \cite{HalSchuSch05,Lou96,LouMaa90,LouSchu96,rieder2000approximate,rieder2004approximate}. Instead of approximating the unknown function using a prescribed reconstruction space, the method of approximate inverse recovers prescribed moments of the unknown and is in this sense dual to the Galerkin approach.
\subsection{Outline} The rest of this article is organized as follows. In Section~\ref{sec:galerkinPAT} we apply the Galerkin least squares method to the inverse problem of PAT. By using the isometry property we derive a simple characterization of the Galerkin equation in Theorem~\ref{thm:leastsquares}. We derive a convergence and stability result for the Galerkin least squares method applied to PAT (see Theorem~\ref{thm:convG}). In Section~\ref{sec:spaces} we study shift invariant spaces for computed tomography. As the main result in that section we derive an estimate for the $L^2$-approximation error using elements from the shift invariant space. In Section~\ref{sec:galerkinshift} we present details for the Galerkin approach using subspaces of shift invariant spaces. In Section \ref{sec:num} we present numerical studies using our Galerkin approach and compare it to related approaches in the literature. The paper ends with a short summary and outlook in Section~\ref{sec:conclusion}.
\section{Galerkin approach for PAT} \label{sec:galerkinPAT}
Throughout the following, suppose $d\geq 2$, let $B_R(0) \coloneqq \set{x \in \mathbb{R}^d \mid \norm{x} < R }$ denote the open ball with radius $R$ centered at the origin, and let $ L^2_R(\mathbb{R}^d) \coloneqq \set{ f \in L^2\kl{\mathbb{R}^d} \mid f(x) = 0 \text { for } x \in \mathbb{R}^d \setminus B_R(0)}$ denote the Hilbert space of all square integrable functions which vanish outside $B_R(0)$. For two measurable functions $ g_1, g_2 \colon \partial B_R(0) \times (0, \infty) \to \mathbb{R}$ we write
\begin{equation} \label{eq:tinner}
\ip{g_1}{g_2}_t \coloneqq
\int_{\partial B_R(0) }
\int_0^\infty g_1(z,t)g_2(z,t) \, t \, \rmd t \, \ds(z) \,, \end{equation} provided that the integral exists. We further denote by $\mathcal Y$ the Hilbert space of all functions $ g \colon \partial B_R(0) \times (0, \infty) \to \mathbb{R}$ with $\norm{g}_t^2 \coloneqq \ip{g}{g}_t < \infty$.
\subsection{PAT and the wave equation}
For initial data $ f \in C_c^{[d/2]+2}(B_R(0))$ consider the wave equation \eqref{eq:wave-fwd}. The solution $p \colon \mathbb{R}^d \times (0, \infty) \to \mathbb{R}$ of \eqref{eq:wave-fwd} restricted to the boundary of $B_R(0)$ is denoted by $\bar \mathbf W f \colon \partial B_R(0) \times (0, \infty) \to \mathbb{R}$. The associated operator is defined by $\bar \mathbf W \colon C_c^{[d/2]+2}(B_R(0)) \subseteq L^2_R(\mathbb{R}^d) \to \mathcal Y \colon f \mapsto \bar \mathbf W f$.
\begin{lemma}[Isometry and continuous extension \label{lem:iso} of $\bar \mathbf W$]\mbox{} \begin{enumerate} \item\label{lem:iso1} For all $f_1, f_2 \in C_c^{[d/2]+2}(B_R(0))$ we have $\ip{f_1}{f_2} = \tfrac{2}{R} \ip{\bar \mathbf W f_1}{\bar \mathbf W f_2}_t$. \item\label{lem:iso2} $\bar \mathbf W$ uniquely extends to a bounded linear operator $\mathbf W \colon L^2_R(\mathbb{R}^d) \to \mathcal Y$. \item\label{lem:iso3} For all $f_1, f_2 \in L^2_R(\mathbb{R}^d)$ we have $\ip{f_1}{f_2} = \tfrac{2}{R} \ip{\mathbf W f_1}{\mathbf W f_2}_t$. \end{enumerate} \end{lemma}
\begin{proof}\mbox{} \ref{lem:iso1}: See~\cite[Equation (1.16)]{FinHalRak07} for $d$ even and \cite[Equation (1.16)]{FinPatRak04} for $d$ odd. (Note that the isometry identities in \cite{FinHalRak07,FinPatRak04} are stated for the wave equation with different initial conditions, and therefore at first glance look different from~\ref{lem:iso1}.)\\ \ref{lem:iso2}, \ref{lem:iso3}: Item~\ref{lem:iso1} implies that $\mathbf W$ is bounded with respect to the norms of $L^2_R(\mathbb{R}^d)$ and $\mathcal Y$ and defined on a dense subspace of $L^2_R(\mathbb{R}^d)$. Consequently it uniquely extends to a bounded operator $\mathbf W \colon L^2_R(\mathbb{R}^d) \to \mathcal Y$. The continuity of the inner product finally shows the isometry property on $L^2_R(\mathbb{R}^d)$. \end{proof}
We call $\mathbf W$ the acoustic forward operator. PAT is concerned with the inverse problem of estimating $f$ from potentially noisy and approximate knowledge of $\mathbf W f$. In this paper we use the Galerkin least squares method for that purpose.
\subsection{Application of the Galerkin method}
Let $(\mathcal X_N)_{N\in \mathbb{N}}$ and $(\mathcal Y_N)_{N\in \mathbb{N}}$ be families of subspaces of $L^2_R(\mathbb{R}^d)$ and $\mathcal Y$, respectively, with $\dim \mathcal X_N = \dim \mathcal Y_N < \infty$. Further let $\mathbf Q_N $ denote the orthogonal projection on $\mathcal Y_N$ and suppose $ g \in \mathcal Y$. The Galerkin method for solving $\mathbf W f = g$ defines the approximate solution $f_N \in \mathcal X_N$ as the solution of \begin{equation}\label{eq:galerkin}
\mathbf Q_N \mathbf W f_N = \mathbf Q_N g \,. \end{equation} In this paper we consider the special case where $\mathcal Y_N = \mathbf W \mathcal X_N$, in which case the solution of \eqref{eq:galerkin} is referred to as \emph{Galerkin least squares method}. The name comes from the fact that in this case the Galerkin solution can be uniquely characterized as the minimizer of the least squares functional over $\mathcal X_N$, \begin{equation}\label{eq:leastsquares}
\Phi_N(f_N) \coloneqq
\frac{1}{2} \norm{ \mathbf W f_N - g}_t^2
\to \min_{f_N \in \mathcal X_N} \,. \end{equation} Because $\Phi_N$ is a quadratic functional on a finite dimensional space and $\mathbf W$ is injective, \eqref{eq:leastsquares} possesses a unique solution. Together with the isometry property we obtain the following characterizations of the least squares Galerkin method for PAT.
\begin{theorem}[Characterizations of the Galerkin least squares method]\label{thm:leastsquares} For $g \in \mathcal Y$ and $f_N \in \mathcal X_N$
the following are equivalent:
\begin{enumerate}[label=(\arabic*)] \item\label{thm:leastsquares1} $ \mathbf Q_N \mathbf W f_N = \mathbf Q_N g$; \item\label{thm:leastsquares3} $f_N$ minimizes the least squares functional \eqref{eq:leastsquares}; \item\label{thm:leastsquares2} For an arbitrary basis $(\varphi_N^k)_{k \in \Lambda_N}$ of $\mathcal X_N$, we have \begin{equation} \label{eq:galerkinequation} \mathbf A_N c_N = d_N \end{equation} where \begin{itemize} \item $c_N \coloneqq (c_{N,k})_k$ with $f_N = \sum_{k\in \Lambda_N} c_{N,k} \varphi_N^k$; \item $d_N \coloneqq (\ip{\mathbf W \varphi_N^k}{g}_t)_{k\in \Lambda_N}$; \item $\mathbf A_N \coloneqq ( \tfrac{R}{2}\ip{\varphi_N^k}{\varphi_N^\ell}_{L^2})_{k,\ell\in \Lambda_N}$. \end{itemize} \end{enumerate} \end{theorem}
\begin{proof} The equivalence of \ref{thm:leastsquares1} and \ref{thm:leastsquares3} is a standard result for the Galerkin least squares method (see, for example, \cite{Kre99}).
Another standard characterization shows the equivalence of \ref{thm:leastsquares1} and \ref{thm:leastsquares2} with the system matrix $\mathbf A_N =( \ip{\mathbf W \varphi_N^k}{\mathbf W \varphi_N^\ell}_t)_{k,\ell \in \Lambda_N}$. Now, the isometry property given in Lemma \ref{lem:iso} shows $\ip{\mathbf W\varphi_N^k}{\mathbf W\varphi_N^\ell}_t =\tfrac{R}{2}\ip{\varphi_N^k}{\varphi_N^\ell}_{L^2}$ for all $k,\ell \in \Lambda_N$ and concludes the proof. \end{proof}
In general, evaluating all matrix entries $\ip{\mathbf W \varphi_N^k}{\mathbf W \varphi_N^\ell}_t $ can be difficult. For many basis functions an explicit expression for $\mathbf W \varphi_N^k$ is not available including the natural pixel basis, spaces defined by linear interpolation, or spline spaces. Hence $\mathbf W \varphi_N^k$ has to be evaluated numerically which is time consuming and introduces additional errors. Even if $\mathbf W \varphi_N^k$ is given explicitly, then the inner products $\ip{\mathbf W \varphi_N^k}{\mathbf W \varphi_N^\ell}_{L^2} $ have to be computed numerically and stored. For large $N$ this can be problematic and time consuming. In contrast, by using the isometry property in our approach we only have to compute the inner products $\ip{\varphi_N^k}{\varphi_N^\ell}$. Further, in computed tomography it is common to take $\varphi_N^k$ as translates of a single function $\varphi_N^0$. In such a situation the inner products satisfy $\ip{\varphi_N^k}{\varphi_N^\ell} = \ip{\varphi_N^0}{\varphi_N^{\ell-k}}$ and therefore only a small fraction of all inner products actually have to be computed.
\subsection{Convergence and stability analysis}
As another consequence of the isometry property we derive linear error estimates for the Galerkin approach to PAT. We consider noisy data where the data $g^\delta \in \mathcal Y$ is known to satisfy
\begin{equation} \label{eq:noise}
\| \mathbf W f^0 - g^\delta \| \leq \delta \,,
\end{equation} for some noise level $\delta \geq 0$ and unknown $f^0 \in L^2_R(\mathbb{R}^d)$. For noisy data we define the Galerkin least squares solution by \begin{equation}\label{eq:leastsquares-delta}
f_N^\delta =
\argmin \left\{ \norm{ \mathbf W h - g^\delta}_t
\mid h \in \mathcal X_N \right\} \,. \end{equation} We then have the following convergence and stability result.
\begin{theorem}[Convergence and stability of the Galerkin method for PAT] \label{thm:convG}
Let $f^0 \in L^2_R(\mathbb{R}^d)$, $g^\delta \in \mathcal Y$, $\delta \geq 0$
satisfy \eqref{eq:noise} and let $f_N^\delta$ be defined by
\eqref{eq:leastsquares-delta}. Then, the following error estimate for the
Galerkin method holds:
\begin{equation} \label{eq:galerkin-est}
\norm{f_N^\delta - f^0 } \leq
\min \set{\norm{h-f^0} \mid h \in \mathcal X_N}
+
\sqrt{\frac{2}{R}} \, \delta \,.
\end{equation}
\end{theorem}
\begin{proof} We start with the noise free case $\delta =0$. The definition of $f_N^\delta$ and the isometry property of $\mathbf W$ yield \begin{align*} f_N^0 &= \argmin \left\{ \norm{ \mathbf W h - g^0 }_t
\mid h \in \mathcal X_N \right\}
\\ &= \argmin \left\{ \norm{ \mathbf W h - \mathbf W f^0 }_t
\mid h \in \mathcal X_N \right\} \\ &= \argmin \left\{\norm{ h - f^0 }
\mid h \in \mathcal X_N \right\} \,. \end{align*} This shows $f_N^0 = \mathbf P_{\mathcal X_N} f^0$ and yields \eqref{eq:galerkin-est} for $\delta =0$. Here and below we use $\mathbf P_V$ to denote the orthogonal projection onto a closed subspace $V \subseteq L^2_R(\mathbb{R}^d)$.
Now consider the case of arbitrary $\delta$, with $ g^\delta = \mathbf W f^0 + e^\delta$ where $e^\delta \in \mathcal Y$ satisfies $\norm{e^\delta} \leq \delta$. Because $\range (\mathbf W)$ is closed we can write \begin{equation*} g^\delta = ( \mathbf W f^0+ \mathbf P_{\range(\mathbf W)} (e^\delta) ) + \mathbf P_{\range(\mathbf W)^\bot} (e^\delta) \eqqcolon \mathbf W f^\delta + \mathbf P_{\range(\mathbf W)^\bot} (e^\delta) \,. \end{equation*} Following the case $\delta=0$ and using that $\mathbf P_{\range(\mathbf W)^\bot} (e^\delta) \bot \range(\mathbf W)$ one verifies that $f_N^\delta = \mathbf P_{\mathcal X_N} f^\delta$. Therefore, by the triangle inequality and the isometry property of $\mathbf W$ we obtain \begin{align*} \norm{f_N^\delta - f^0} &\leq \norm{f_N^\delta - f_N^0} + \norm{ f_N^0 - f^0} \\ & = \norm{\mathbf P_{\mathcal X_N} (f^\delta - f^0)} + \min \set{\norm{h-f^0} \mid h \in \mathcal X_N} \\ &\leq \sqrt{\frac{2}{R}} \, \norm{\mathbf W f^\delta - \mathbf W f^0 }_t + \min \set{\norm{h-f^0} \mid h \in \mathcal X_N} \,. \end{align*} Together with $\norm{\mathbf W f^\delta - \mathbf W f^0}_t = \norm{\mathbf P_{\range(\mathbf W)} (e^\delta) }_t \leq \delta$ this concludes the proof. \end{proof}
The error estimate in Theorem~\ref{thm:convG} consists of two terms: the first depends on the approximation properties of the space $\mathcal X_N$ and the second on the noise level $\delta$. As easily verified, both terms are optimal and cannot be improved. The second term shows the stability of our Galerkin least squares approach. Under the reasonable assumption that the spaces $\mathcal X_N$ satisfy the denseness property \begin{equation*}
\forall f \in L^2_R(\mathbb{R}^d) \colon \quad
\lim_{N \to \infty} \min \set{\norm{h-f} \mid h \in \mathcal X_N} = 0 \,,
\end{equation*} the derived error estimate further implies convergence of the Galerkin approach.
\section{Shift invariant spaces in computed tomography} \label{sec:spaces}
In many tomographic and signal processing applications, natural spaces for approximating the underlying function are subspaces of shift invariant spaces. In this paper we consider spaces $\mathcal V_{T,s,\varphi} $ that are generated by translated and scaled versions of a single function $\varphi\in L^2(\mathbb{R}^d)$, \begin{equation} \label{eq:XTs}
\mathcal V_{T,s,\varphi} \coloneqq
\overline{\spa \set{ \varphi_{T,s}^k \mid k \in \mathbb{Z}^d }}
\subseteq L^2(\mathbb{R}^d) \,. \end{equation} Here $\spa$ denotes the linear hull, $\overline{X}$ stands for the closure with respect to $ \norm{\,\cdot\,}_{L^2}$ of a set $X$, and \begin{equation} \label{eq:phik} \varphi_{T,s}^k(x) \coloneqq \frac{1}{T^{d/2}} \, \varphi \left( \frac{x}{T}-sk \right) \quad \text{ for $T,s >0$ and $k \in \mathbb{Z}^d$ } \,. \end{equation}
We have chosen the normalization of the generating
functions $\varphi_{T,s}^k$ in such a way that $\norm{\varphi_{T,s}^k}_{L^2} = \norm{\varphi}_{L^2}$ for all $T,s,k$. In this section we derive conditions such that any $L^2$ function can be approximated by elements in $\mathcal V_{T,s,\varphi}$. Further, we present examples of generating functions that are relevant for (photoacoustic) computed tomography.
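The normalization can be checked numerically; the following is a sketch for $d=1$ with a hypothetical Gaussian generator (any $L^2$ generator works), confirming $\norm{\varphi_{T,s}^k}_{L^2} = \norm{\varphi}_{L^2}$ for several values of $T$, $s$ and $k$:

```python
import math

def phi(x):                      # hypothetical generator: a Gaussian
    return math.exp(-x * x / 2.0)

def l2_norm(f, a=-30.0, b=30.0, n=20000):
    # midpoint quadrature for the L2 norm on [a, b]
    h = (b - a) / n
    return math.sqrt(sum(f(a + (i + 0.5) * h) ** 2 for i in range(n)) * h)

def phi_Tsk(T, s, k):            # phi_{T,s}^k(x) = T^(-1/2) phi(x/T - s k), d = 1
    return lambda x: T ** -0.5 * phi(x / T - s * k)

base = l2_norm(phi)
for (T, s, k) in [(1.0, 1.0, 0), (0.5, 0.3, 2), (2.0, 0.7, -3)]:
    assert abs(l2_norm(phi_Tsk(T, s, k)) - base) < 1e-6
```

The invariance follows from the substitution $y = x/T - sk$ in the defining integral, which the quadrature reproduces up to discretization error.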
Any tomographic reconstruction method uses, either explicitly or implicitly, a particular discrete reconstruction space. This is obvious for any iterative procedure as it requires a finite dimensional representation of the forward operator that can be evaluated numerically. However, also direct methods use an underlying discrete image space. For example, standard filtered backprojection algorithms usually reconstruct samples of a bandlimited approximation of the unknown function. In such a situation, the underlying discrete signal space consists of bandlimited functions. In this paper we allow more general shift invariant spaces.
The following properties of the generating function and the spaces $\mathcal V_{T,s,\varphi}$ have been reported to be desirable for tomographic applications (see \cite{nilchian2015optimized,wang2014discrete}): \begin{enumerate}[label=(V\arabic*)] \item\label{V1} $\varphi$ has ``small'' spatial support; \item\label{V2} $\varphi$ is rotationally invariant; \item\label{V3} $(\varphi_{T,s}^k)_{k \in \mathbb{Z}^d}$ is a Riesz basis of $\mathcal V_{T,s,\varphi}$; \item\label{V4} $\varphi$ satisfies the so-called partition of unity property. \end{enumerate} Conditions \ref{V1} and \ref{V2} are desirable from a computational point of view and often help to derive efficient reconstruction algorithms. The properties \ref{V3} and \ref{V4} are of a more fundamental nature, as these conditions imply that any $L^2$ function can be approximated arbitrarily well by elements in $\mathcal V_{T,s,\varphi}$ as $T \to 0$ (with $s$ kept fixed; the so-called stationary case). In \cite{nilchian2015optimized} it has been pointed out that the properties \ref{V1}-\ref{V4} cannot be simultaneously fulfilled. This implies that if $s$ is taken independent of $T$, the spaces $\mathcal V_{T,s,\varphi}$ have a limited approximation capability in the sense that for a typical function $f$, the approximation error $\min_{u \in \mathcal V_{T,s,\varphi}} \norm{f-u}^2_{L^2}$ does not converge to zero as $T\to 0$ with $s$ kept fixed.
Despite these negative results, radially symmetric basis functions are very popular in computed tomography (see, for example, \cite{herman2015basis,lewitt1990multidimensional,lewitt1992alternatives,matej1996practical,nilchian2015optimized,wang2012investigation,wang2014discrete}). In this paper we therefore propose to also allow the shift parameter $s$ to be variable. Under reasonable assumptions we show that the approximation error converges to zero for $s \to 0$. This convergence in particular holds for radially symmetric generating functions having some decay in the Fourier space, including generalized Kaiser-Bessel functions, which are the most popular choice in tomographic image reconstruction.
\subsection{Riesz bases of shift invariant spaces}
Recall that the family $(\varphi_{T,s}^k)_{k \in \mathbb{Z}^d}$ is called a Riesz basis of $\mathcal V_{T,s,\varphi}$ if there exist $A,B>0$ such that \begin{equation} \label{eq:rb} \forall c \in \ell^2(\mathbb{Z}^d)\colon \quad A\norm{c}_{\ell^2}^2\leq \Bigl\lVert\sum_{k\in\mathbb{Z}^d} c_k \varphi_{T,s}^k\Bigr\rVert_{L^2}^2\leq B\norm{c}_{\ell^2}^2 \,, \end{equation} where $\norm{c}_{\ell^2}^2 \coloneqq \sum_{k\in\mathbb{Z}^d} \abs{c_k}^2$ is the squared $\ell^2$-norm of $c=(c_k)_{k \in \mathbb{Z}^d}$. A Riesz basis of $\mathcal V_{T,s,\varphi}$ can equivalently be defined as a frame that is linearly independent, and the constants $A$ and $B$ are the lower and upper frame bounds of $(\varphi_{T,s}^k)_{k \in \mathbb{Z}^d}$, respectively. In the following we write $\hat \varphi$ for the $d$-dimensional Fourier transform defined by $\hat \varphi (\xi) \coloneqq (2\pi)^{-d/2} \int_{\mathbb{R}^d} \varphi (x) e^{-i \inner{\xi}{x} } \rmd x$ for $\varphi \in L^2(\mathbb{R}^d)\cap L^1(\mathbb{R}^d)$ and extended to $L^2(\mathbb{R}^d)$ by continuity.
The following two lemmas are well known in the case $d = T=1$ (see~\cite[Theorem 3.4]{Mal09}). Due to page limitations, and because the general case is shown analogously, we only indicate the proofs.
\begin{lemma}[Riesz basis property] \label{lem:riesz} The family $(\varphi_{T,s}^k)_{k\in\mathbb{Z}^d}$ is a Riesz basis of $\mathcal V_{T,s,\varphi}$ with frame bounds $A$ and $B$, if and only if \begin{equation}\label{eq:riesz}
\frac{A}{(2\pi)^d}\leq \frac{1}{s^d}\sum_{k\in\mathbb{Z}^d}|\hat{\varphi}(\xi+\tfrac{2\pi}{s}k)|^2\leq \frac{B}{(2\pi)^d} \quad \text{ for a.e. } \; \xi\in[0,\tfrac{2\pi}{s}]^d \,. \end{equation} \end{lemma}
\begin{proof} Follows the lines of \cite[Theorem 3.4]{Mal09}. \end{proof}
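The criterion \eqref{eq:riesz} can be evaluated numerically. The following sketch (not from the paper; $d=1$, $s=1$, hat-function generator with $\hat\varphi(\xi) = (2\pi)^{-1/2}\sinc^2(\xi/2)$) recovers the classical frame bounds $A = 1/3$ and $B = 1$ of the linear B-spline basis:

```python
import math

def sinc(a):
    return 1.0 if a == 0.0 else math.sin(a) / a

def phi_hat_sq(xi):
    # |phi_hat(xi)|^2 for the 1D hat function
    return (sinc(xi / 2.0) ** 2) ** 2 / (2.0 * math.pi)

def periodized(xi, s=1.0, K=200):
    # (1/s) * sum_k |phi_hat(xi + 2*pi*k/s)|^2, truncated at |k| <= K
    return sum(phi_hat_sq(xi + 2.0 * math.pi * k / s) for k in range(-K, K + 1)) / s

vals = [periodized(2.0 * math.pi * j / 400) for j in range(400)]
A = (2.0 * math.pi) * min(vals)   # lower frame bound estimate
B = (2.0 * math.pi) * max(vals)   # upper frame bound estimate
assert abs(A - 1.0 / 3.0) < 1e-3 and abs(B - 1.0) < 1e-3
```

The extrema occur at $\xi = \pi$ and $\xi = 0$, in agreement with the Gram symbol $\tfrac{2}{3} + \tfrac{1}{3}\cos\xi$ of the hat-function translates.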
The following Lemma implies that for any Riesz basis $(\varphi_{T,s}^k)_{k\in\mathbb{Z}^d}$ one can construct an orthonormal basis of $\mathcal V_{T,s,\varphi}$ that is again generated by translated and scaled versions $\theta_{T,s}^k(x) \coloneqq T^{-d/2} \theta (x/T - sk)$ of a single function $\theta \in L^2(\mathbb{R}^d)$.
\begin{lemma}[Orthonormalization]\label{lem:on} Let $(\varphi_{T,s}^k)_{k\in\mathbb{Z}^d}$ be a Riesz basis of $\mathcal V_{T,s,\varphi}$. \begin{enumerate} \item \label{lem:on1}$(\varphi_{T,s}^k)_{k\in\mathbb{Z}^d}$ orthonormal $\iff$ $
\sum_{k\in\mathbb{Z}^d} |\hat{\varphi}(\xi+\tfrac{2\pi}{s}k)|^2=\frac{s^d}{(2\pi)^d} $ for a.e. $\xi \in \mathbb{R}^d$. \item \label{lem:on2} $(\theta_{T,s}^k)_{k\in\mathbb{Z}^d}$ is an orthonormal basis of $\mathcal V_{T,s,\varphi}$, where $\theta \in L^2(\mathbb{R}^d)$ is defined by \begin{equation}\label{eq:on}
\hat{\theta}(\xi)
=
\frac{s^{d/2} \hat{\varphi}(\xi)}{(2\pi)^{d/2} \sqrt{\sum_{k\in\mathbb{Z}^d}|\hat{\varphi}(\xi+\tfrac{2\pi}{s}k)|^2}}. \end{equation} \end{enumerate} \end{lemma}
\begin{proof} Follows the lines of \cite[Theorem 3.4]{Mal09}. \end{proof}
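As a numerical sanity check of \eqref{eq:on} (a sketch, not from the paper; $d=1$, $s=1$, hat-function generator), one can verify that the orthonormalized generator $\theta$ satisfies the criterion of Item~\ref{lem:on1}, i.e. the periodization of $|\hat\theta|^2$ equals the constant $s^d/(2\pi)^d$:

```python
import math

def sinc(a):
    return 1.0 if a == 0.0 else math.sin(a) / a

def phi_hat(xi):               # Fourier transform of the 1D hat function
    return sinc(xi / 2.0) ** 2 / math.sqrt(2.0 * math.pi)

s = 1.0                        # shift parameter (d = 1)

def gram_symbol(xi, K=60):     # sum_k |phi_hat(xi + 2*pi*k/s)|^2, truncated
    return sum(phi_hat(xi + 2.0 * math.pi * k / s) ** 2 for k in range(-K, K + 1))

def theta_hat(xi):             # orthonormalized generator via the lemma
    return math.sqrt(s) * phi_hat(xi) / math.sqrt(2.0 * math.pi * gram_symbol(xi))

# periodization of |theta_hat|^2 must equal s/(2*pi) for every xi
for j in range(1, 20):
    xi = 0.3 * j
    total = sum(theta_hat(xi + 2.0 * math.pi * k / s) ** 2 for k in range(-60, 61))
    assert abs(total - s / (2.0 * math.pi)) < 1e-3
```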
According to Lemma~\ref{lem:on}, for theoretical purposes one may assume that the considered basis of $\mathcal V_{T,s,\varphi}$ is already orthonormal. From a practical point of view, however, it may be more convenient to work with the original non-orthogonal basis: the function $\varphi$ may have additional properties, such as small support or radial symmetry, which $\theta$ may lack. Moreover, $\theta$ may not be known analytically.
\subsection{The $L^2$-approximation error}
We now investigate the $L^2$-approximation error in shift invariant spaces, \begin{equation}\label{eq:aerror-def}
\forall f \in L^2(\mathbb{R}^d) \colon
\quad
\min_{u \in \mathcal V_{T,s,\varphi}} \norm{f - u}_{L^2}
= \norm{f - \mathbf P_{T,s} f}_{L^2} \,, \end{equation} as well as its asymptotic properties. Here and in the following $\mathbf P_{T,s}\colon L^2(\mathbb{R}^d)\rightarrow \mathcal V_{T,s,\varphi}$ denotes the orthogonal projection onto $\mathcal V_{T,s,\varphi}$. It is given by $\mathbf P_{T,s} f = \sum_{\lambda \in \Lambda}\langle f,e_\lambda\rangle e_\lambda$, where $\kl{e_\lambda}_{\lambda\in\Lambda}$ is any orthonormal basis of $\mathcal V_{T,s,\varphi}$. For the stationary case $s=1$, the following theorem has been obtained in~\cite{blu99approximation}.
\begin{theorem}[The $L^2$-approximation error] Let\label{thm:aerror} $\kl{\varphi_{T,s}^k}_{k\in\mathbb{Z}^d}$ be a Riesz basis of $\mathcal V_{T,s,\varphi}$ and define \begin{equation}\label{eq:RR} \mathcal{E}_{\varphi}(s,T\xi)
\coloneqq
1-\frac{|\hat{\varphi}(T\xi)|^2}{\sum_{k\in\mathbb{Z}^d}|\hat{\varphi}(T\xi+2k\pi/s)|^2}
\quad \text{ for } \xi \in \mathbb{R}^d \text{ and } T,s> 0 \,. \end{equation} Then, for every $f \in W_2^r(\mathbb{R}^d)$ with $r>d/2$ we have \begin{equation} \label{eq:aerror} \norm{\mathbf P_{T,s}f-f}_{L^2} =\left[\int_{[-\frac{\pi}{Ts},\frac{\pi}{Ts}]^d}\abs{\hat{f}(\xi)}^2 \mathcal{E}_{\varphi}(s,T\xi) \rmd \xi \right]^{\frac{1}{2}} +\mathcal{R}_{\varphi}(f,Ts) \,, \end{equation} where the remainder can be estimated as \begin{equation} \label{eq:EE} \mathcal{R}_{\varphi}(f,Ts) \leq \norm{f}_{W^r_2} \bkl{\frac{Ts}{\pi}}^{r} \sqrt{\sum_{n\in\mathbb{Z}^d\setminus\{0\}} \frac{1}{\lVert n \rVert^{2r}}} \quad \text{ for } T, s > 0\,. \end{equation} \end{theorem}
\begin{proof} Let $(\theta_{T,s}^k)_{k\in\mathbb{Z}^d}$ denote the orthonormal basis of the space $\mathcal V_{T,s,\varphi}$ as constructed in Lemma~\ref{lem:on}. Further, for every $n\in \mathbb{Z}^d$ define $Q_n\coloneqq\frac{2\pi}{Ts}n+[-\frac{\pi}{Ts},\frac{\pi}{Ts}]^d$ and define functions $f_n \in L^2(\mathbb{R}^d)$ by their Fourier representations \begin{equation*} \hat{f}_n(\xi)= \begin{cases}\hat{f}(\xi) \quad &\text{ if }\xi \in Q_n \,,\\ 0 &\text{ if }\xi \not\in Q_n \,. \end{cases} \end{equation*} Then we have $f =\sum_{n\in\mathbb{Z}^d} f_n$ and $\mathbf P_{T,s}f - f =\sum_{n\in\mathbb{Z}^d} (\mathbf P_{T,s}f_n-f_n)$.
Now for every $n\in\mathbb{Z}^d$, we investigate the approximation error
$\|\mathbf P_{T,s}f_n-f_n\|^2$. We have
$\|\mathbf P_{T,s}f_n-f_n\|^2 = \|f_n\|^2-\sum_{k\in\mathbb{Z}^d}|\langle f_n,\theta_{T,s}^k\rangle|^2$. Further, \begin{align*} \langle f_n,\theta_{T,s}^k\rangle &= \langle \hat{f}_n,\hat{\theta}_{T,s}^k\rangle\\ &=T^{d/2}\int_{Q_n}\hat{f}_n(\xi) \overline{\hat{\theta}(T\xi) e^{-iTs \inner{\xi}{k}}} \rmd \xi\\ &=T^{d/2}\int_{[-\frac{\pi}{Ts},\frac{\pi}{Ts}]^d} \hat{f}_n(\xi- 2\pi n/(Ts)) \overline{ \hat{\theta}(T(\xi- 2\pi n/(Ts)))} e^{ iTs\inner{(\xi- 2\pi n/(Ts))}{k}} \rmd \xi\\ &= T^{d/2}\int_{[-\frac{\pi}{Ts},\frac{\pi}{Ts}]^d} \hat{f}_n(\xi- 2\pi n/(Ts)) \overline{ \hat{\theta}(T(\xi-2\pi n/(Ts)))} e^{ iTs\inner{\xi}{k}} \rmd \xi \\ &=T^{d/2} \hat{d}_{n,-k} \,, \end{align*} where $\hat{d}_{n,k}$ is the $k$-th Fourier-coefficient of the $2\pi/(Ts)$-periodization of the function $\xi\mapsto\hat{f}_n(\xi- 2\pi n/(Ts)) \, \overline{ \hat{\theta}(T(\xi-2\pi n/(Ts)))}$. Due to Parseval's identity we have \begin{align*}
\sum_{k\in\mathbb{Z}^d}|&\langle f_n,\theta_{T,s}^k\rangle|^2\\ &=T^d\sum_{k\in\mathbb{Z}^d} \abs{\hat{d}_{n,k}}^2\\
&=T^d\frac{(2\pi)^d}{(sT)^d}\int_{Q_n}|\hat{f}_n(\xi)|^2|\hat{\theta}(T\xi)|^2\rmd\xi\\
&=\int_{Q_n}|\hat{f}_n(\xi)|^2\frac{|\hat{\varphi}(T\xi)|^2}{\sum_{k\in\mathbb{Z}^d}|\hat{\varphi}(T\xi+2k\pi/s)|^2} \rmd\xi. \end{align*} Therefore we obtain \begin{align*}
\|\mathbf P_{T,s}f_n-f_n\|^2&=\int_{Q_n}|\hat{f}_n(\xi)|^2\left(1-\frac{|\hat{\varphi}(T\xi)|^2}{\sum_{k\in\mathbb{Z}^d}|\hat{\varphi}(T\xi+2k\pi/s)|^2}\right)\rmd\xi\\
&=\int_{Q_n}|\hat{f}_n(\xi)|^2 \mathcal{E}_{\varphi}(s,T\xi) \rmd \xi \,. \end{align*}
Next notice that for $ n\in\mathbb{Z}^d\setminus\{0\}$ and $\xi\in Q_n\subseteq \mathbb{R}^d$ we have $\|\xi\|\geq\frac{\pi}{Ts} \, \norm{n}$. Therefore we can estimate \begin{equation*} \norm{\mathbf P_{T,s}f_n-f_n} \leq \bkl{ \frac{Ts}{\pi }}^r \bkl{ \frac{1}{\norm{n}} }^r
\bkl{ \int_{Q_n}\|\xi\|^{2r}|\hat{f}_n(\xi)|^2 \mathcal{E}_{\varphi}(s,T\xi) \rmd\xi }^{\frac{1}{2}} \,. \end{equation*} Together with the triangle inequality and the Cauchy-Schwarz inequality for sums we obtain \begin{align*}
\|&\mathbf P_{T,s}f-f\| \\
&\leq\sum_{n\in\mathbb{Z}^d}\|\mathbf P_{T,s}f_n - f_n\| \\
&\leq \left(\int_{Q_0}|\hat{f}(\xi)|^2 \mathcal{E}_{\varphi}(s,T\xi) \rmd \xi\right)^{\frac{1}{2}} +
\left(\frac{Ts}{\pi}\right)^r \sum_{n\neq 0}\frac{1}{\|n\|^{r}}
\left(\int_{Q_n}\|\xi\|^{2r} |\hat{f}_n(\xi)|^2 \rmd \xi \right)^{\frac{1}{2}} \\
&\leq \left(\int_{Q_0}|\hat{f}(\xi)|^2 \mathcal{E}_{\varphi}(s,T\xi) \rmd \xi\right)^{\frac{1}{2}} + \bkl{ \frac{Ts}{\pi}}^r
\bkl{ \sum_{n\neq 0}\frac{1}{\|n\|^{2r}}}^{\frac{1}{2}}
\bkl{ \int_{\mathbb{R}^d\setminus Q_0}\|\xi\|^{2r} |\hat{f}(\xi)|^2 \rmd\xi }^{\frac{1}{2}} \\
&\leq \left(\int_{Q_0}|\hat{f}(\xi)|^2 \mathcal{E}_{\varphi}(s,T\xi) \rmd\xi\right)^{\frac{1}{2}} + \bkl{ \frac{Ts}{\pi}}^r
\bkl{ \sum_{n\neq 0}\frac{1}{\|n\|^{2r}}}^{\frac{1}{2}} \norm{f}_{W^r_2} \,. \end{align*}
Here the sum $\sum_{n\neq 0} \|n\|^{-2r}$ is convergent because $r >d/2$. After recalling that $Q_0 = [-\pi/(Ts),\pi/(Ts)]^d$, the above estimate yields~\eqref{eq:aerror}. \end{proof}
Note that the remainder in Theorem~\ref{thm:aerror} satisfies $\mathcal{R}_{\varphi}(f,Ts) \to 0$ as $Ts \to 0 $. Consequently, for every
sequence $(T_N, s_N)_{N\in \mathbb{N}}$ we have $\lim_{N \to \infty} \norm{\mathbf P_{T_N,s_N} f - f}_{L^2}^2 =0$ if $T_N s_N \to 0 $ and \begin{equation*} \int_{[-\frac{\pi}{T_Ns_N},\frac{\pi}{T_Ns_N}]^d}\abs{\hat{f}(\xi)}^2 \underbrace{\bkl{ 1- \frac{\abs{\hat{\varphi}(T_N\xi)}^2}{\sum_{k\in\mathbb{Z}^d}\abs{\hat{\varphi}(T_N\xi+2k\pi/s_N)}^2}}}_{= \mathcal{E}_{\varphi}(s_N,T_N \xi )} \rmd \xi \to 0 \,. \end{equation*} By Lebesgue's dominated convergence theorem this holds if $\mathcal{E}_{\varphi}(s_N, T_N \xi )$ converges to $0$ almost everywhere as $N \to \infty$. In the following theorem we consider two possible sequences where this is the case. Note that Item~\ref{thm:conv1} in that theorem is well known (see, for example, \cite{Mal09}), while Item~\ref{thm:conv2} is, to the best of our knowledge, new.
\begin{theorem}[Asymptotic behavior of $\mathcal{E}_{\varphi}$]\label{thm:conv} Let $\varphi\in L^2(\mathbb{R}^d) \cap L^1(\mathbb{R}^d)$. \begin{enumerate} \item\label{thm:conv1} Suppose that $\hat{\varphi}(0) > 0$. Then, for every $s \in(0,\infty)$ we have that $\lim_{T\rightarrow 0} \mathcal{E}_{\varphi}(s,T \xi ) = 0$ almost everywhere if and only if \begin{align} \label{eq:pou} \frac{1}{\hat\varphi(0)} \sum_{m\in\mathbb{Z}^d}\varphi(x-ms) = \frac{(2\pi)^{d/2}}{s^d} \quad \text{ for almost every $x \in \mathbb{R}^d$} \,.
\end{align} Equation \eqref{eq:pou} is called the partition of unity property.
\item\label{thm:conv2} Suppose
$\hat{\varphi}(\xi)=\mathcal{O}(\|\xi\|^{-p})$ as $\|\xi\| \rightarrow \infty$ for some $p>d/2$. Let $(T_N)_{N\in \mathbb{N}}$ and $(s_N)_{N\in \mathbb{N}}$ be bounded sequences in $(0, \infty)$ with $s_N \to 0$ as $N \to \infty$. Then \begin{equation} \label{eq:conv2} \lim_{N \to \infty } \mathcal{E}_{\varphi}(s_N, T_N \xi ) =0 \quad \text{ for every $\xi \in \mathbb{R}^d$}\,. \end{equation} \end{enumerate} \end{theorem}
\begin{proof}\mbox{} \ref{thm:conv1} We have \begin{align*} &\lim_{T\to 0} \mathcal{E}_{\varphi}(s,T \xi ) = 0
\text{ for a.e. $\xi \in \mathbb{R}^d$ } \\ \iff
&\lim_{T\rightarrow 0}\sum_{k\neq 0}|\hat{\varphi}(T\xi+ 2k\pi/s)|^2=0 \text{ for a.e. $\xi \in \mathbb{R}^d$ } \\ \iff& \forall k \in \mathbb{Z}^d\setminus\set{0} \colon \hat{\varphi}( 2k\pi/s)=0\\ \iff& \forall k \in \mathbb{Z}^d\setminus\set{0} \colon \int_{\mathbb{R}^d}\varphi(x) e^{i 2\pi \inner{x}{k}/s} \rmd x =0\\ \iff&\forall k \in \mathbb{Z}^d\setminus\set{0} \colon \int_{[0,s]^d}\sum_{m\in\mathbb{Z}^d}\varphi(x-ms)e^{i 2\pi \inner{x}{k}/s} \rmd x=0\\ \iff& \sum_{m\in\mathbb{Z}^d}\varphi(x-ms)= \frac{(2\pi)^{d/2}}{s^d} \, \hat\varphi(0) \text{ for a.e. $x \in \mathbb{R}^d$\,. } \end{align*}
\ref{thm:conv2}
As $\hat{\varphi}(\xi)=\mathcal{O}(\|\xi\|^{-p})$ for $\|\xi\|\rightarrow \infty$, there exist constants $R_0, C>0$ such that for $\|\xi\|>R_0$ we have $ \abs{\hat{\varphi}(\xi)}\leq C \|\xi\|^{-p}$. Further, for all $\xi \in \mathbb{R}^d$
and $k\neq 0$ we have $ \|T_N\xi- 2\pi k/s_N\| \to \infty$ as $N \to \infty$. Therefore there exists $N_0\in \mathbb{N}$ such that for all $N\geq N_0$ we have
$\|T_N\xi-2\pi k / s_N\|>R_0$ and $\|T_N\xi\|\leq\frac{1}{2}\|2\pi k/s_N\|$ for $k\neq 0$. Therefore, for all $N\geq N_0$, \begin{align*}
\sum_{k\neq 0}|\hat{\varphi}(T_N\xi-\frac{2\pi}{s_N}k)|^{2}&\leq C^2 \sum_{k\neq 0} \|T_N\xi-\frac{2\pi}{s_N}k\|^{-2p}\\
&\leq C^2\sum_{k\neq 0} \left| \norm{\frac{2\pi}{s_N}k}-\|T_N\xi\|\right|^{-2p}\\
&\leq C^2\sum_{k\neq 0} \left|\norm{\frac{2\pi}{s_N}k}-\frac{1}{2}\norm{\frac{2\pi}{s_N}k}\right|^{-2p}\\ &\leq C^2 \left(\frac{s_N}{\pi}\right)^{2p}
\sum_{k\neq 0}\|k\|^{-2p} \,, \end{align*}
which implies \eqref{eq:conv2}. Note that $\sum_{k\neq 0}\|k\|^{-2p}$ is convergent because $p >d/2$. \end{proof}
From Theorems~\ref{thm:aerror} and~\ref{thm:conv} one concludes that the system $(\varphi_{T,s}^k)_{k\in \mathbb{Z}^d}$ yields a vanishing approximation error $\min_{u \in \mathcal V_{T,s,\varphi}} \norm{f-u}^2_{L^2}$ in either of the following cases: \begin{enumerate} \item $\varphi$ satisfies the partition of unity property, $s$ is fixed and $T \to 0$; \item $\hat \varphi(\xi) = \mathcal O(\norm{\xi}^{-d/2-\epsilon})$ for some $\epsilon > 0$ as $\norm{\xi}\to \infty$, $T$ is bounded and $s \to 0$. \end{enumerate}
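The second case is easy to observe numerically. The following sketch (not from the paper; $d=1$, $T=1$, and a hypothetical generator with $|\hat\varphi(\xi)|^2 = (1+\xi^2)^{-2}$, i.e. decay with $p = 2 > d/2$) evaluates $\mathcal E_\varphi(s,\xi)$ from \eqref{eq:RR} for decreasing shift parameters:

```python
import math

def phi_hat_sq(xi):
    # |phi_hat(xi)|^2 with decay O(|xi|^{-4}), i.e. p = 2 > d/2 = 1/2
    return 1.0 / (1.0 + xi * xi) ** 2

def E(s, xi, K=50):
    # error function E_phi(s, xi), truncating the periodized sum at |k| <= K
    denom = sum(phi_hat_sq(xi + 2.0 * math.pi * k / s) for k in range(-K, K + 1))
    return 1.0 - phi_hat_sq(xi) / denom

errs = [E(s, 1.0) for s in (1.0, 0.5, 0.25)]
assert errs[0] > errs[1] > errs[2] > 0.0   # E decreases towards 0 as s -> 0
```

As $s$ decreases, the aliased frequencies $\xi + 2\pi k/s$ move further out, so the off-center terms in the denominator vanish and $\mathcal E_\varphi(s,\xi) \to 0$.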
In both cases one could derive quantitative error estimates. We do not investigate this issue further, since our main emphasis is on pointing out that allowing $s$ to vary yields an asymptotically vanishing approximation error without the partition of unity property. This is relevant since the partition of unity property cannot be satisfied by any radially symmetric compactly supported function.
Below we study two basic examples of generating functions where Theorems~\ref{thm:aerror} and~\ref{thm:conv} can be applied: pixel (or voxel) basis functions and generalized Kaiser-Bessel functions. We focus on these basis functions since the pixel basis has been the most common choice in early tomographic image reconstruction, while generalized Kaiser-Bessel functions are currently considered the method of choice. Further, some standard finite element bases also satisfy the partition of unity property; compare with Remark~\ref{rem:FE}.
\subsection{Example: The pixel basis} \label{sec:ex:pixel}
The pixel basis (also called voxel basis in the case $d > 2$) has been frequently used for image representation in early tomographic image reconstruction (see, for example, \cite{gordon1970algebraic,herman2015basis,kak2001principles}). It consists of scaled and translated versions of the indicator function of the hyper-cube $[-1/2, 1/2[^d$, \begin{equation} \label{eq:chid} \chi \colon \mathbb{R}^d \to \mathbb{R} \colon x \mapsto \begin{cases} 1 & \text{ if } x \in [-1/2, 1/2[^d \\ 0 & \text{ otherwise } \,. \end{cases} \end{equation} For every $T, s>0$, the family $(\chi_{T,s}^k)_{k\in \mathbb{Z}^d}$ with $\chi_{T,s}^k (x) = T^{-d/2} \chi ((x- Ts k)/ T)$ clearly forms a Riesz basis of \begin{equation*} \mathcal V_{T,s,\chi} = \overline{\spa\set{\chi_{T,s}^k \mid k \in \mathbb{Z}^d} } \,. \end{equation*}
Note that the Fourier transform of $\chi$ is given by \begin{equation} \label{eq:chidhat} \hat \chi \colon \mathbb{R}^d \to \mathbb{C} \colon \xi \mapsto (2\pi)^{-d/2} \sinc \bkl{\frac{\xi}{2}} \coloneqq (2\pi)^{-d/2} \prod_{j=1}^d \sinc \bkl{\frac{\xi_j}{2}} \,, \end{equation} where $\sinc(a) \coloneqq \sin(a)/a$ for $a\neq 0$ and $\sinc(0) \coloneqq 1$. We see $\hat \chi (\xi)= \mathcal O(\norm{\xi}^{-1})$ as $\norm{\xi} \to \infty$. Consequently, we cannot conclude from Theorem~\ref{thm:conv} that the spaces $\mathcal V_{T,s,\chi}$ yield an asymptotically vanishing approximation error for $s \to 0$.
However, the pixel basis allows to consider the stationary case where $s$ is a constant and $T$ tends to $0$. In fact, from the proof of Theorem~\ref{thm:conv} we see that $\chi$ satisfies the partition of unity property if and only if $\sinc(\pi k/ s) = 0$ for every $k \neq 0$. This in turn is the case if and only if $1/s \in \mathbb{N}$, that is, $s = 1/m$ for some $m \in \mathbb{N}$. The case $s=1$ seems the most natural one, since it uses non-overlapping basis functions filling the whole space $\mathbb{R}^d$. The non-overlapping case is in fact used in existing tomographic image reconstruction algorithms; see \cite{gordon1970algebraic,herman2015basis,kak2001principles}. Further, note that the number of basis elements $\chi_{T,s}^k$ whose center $m_k \coloneqq Tsk$ is contained in the unit cube $[-1,1]^d$ is given by $(2/(Ts) + 1)^d$, and that $T$ is inversely proportional to the essential bandwidth of the basis functions. Therefore, the choice $s=1$ yields a minimal number of pixel basis functions representing a function with given support and essential bandwidth.
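The partition of unity characterization for $\chi$ can be seen directly in the spatial domain; a minimal sketch for $d=1$ (not from the paper): the periodization $\sum_m \chi(x - ms)$ is the constant $1/s$ exactly when $1/s$ is an integer, and oscillates otherwise.

```python
def chi(x):                          # 1D pixel generator on [-1/2, 1/2)
    return 1.0 if -0.5 <= x < 0.5 else 0.0

def periodization(x, s, M=100):      # sum_m chi(x - m*s), truncated at |m| <= M
    return sum(chi(x - m * s) for m in range(-M, M + 1))

grid = [j / 100.0 for j in range(-50, 51)]
vals_half = {periodization(x, 0.5) for x in grid}   # 1/s = 2: constant 2
vals_bad = {periodization(x, 0.4) for x in grid}    # 1/s not an integer
assert vals_half == {2.0}
assert len(vals_bad) > 1
```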
\subsection{Example: Generalized Kaiser-Bessel functions} \label{sec:ex:KB}
As often argued in the literature on tomographic image reconstruction, the lack of continuity and rotation invariance are severe drawbacks of the pixel basis functions for image reconstruction. Therefore in \cite{lewitt1990multidimensional} the generalized Kaiser-Bessel (KB) functions have been introduced and proposed for image reconstruction.
The generalized KB functions in $\mathbb{R}^d$ form a family of functions depending on three parameters: the order $m \in \mathbb{N}$, the taper parameter $\gamma \geq 0$ and the support parameter $a >0$. More precisely, the KB function $\varphi(\,\cdot\,; m,\gamma,a ) \colon \mathbb{R}^d \to \mathbb{R} $ of order $m$ is defined by \begin{equation}\label{eq:kb} \varphi(x; m,\gamma,a ) \coloneqq \begin{cases} \bkl{\sqrt{1- \norm{x}^2/a^2}}^m \frac{I_m \bkl{\gamma \sqrt{1- \norm{x}^2/a^2}}}{I_m(\gamma)} & \text{if $\norm{x} \leq a$} \\ 0 & \text{otherwise} \,, \end{cases} \end{equation} where $I_m$ is the modified Bessel function of the first kind. The support parameter $a$ is the support radius, the order $m$ controls the smoothness, and the taper parameter $\gamma$ describes how spiky the basis function is and allows to further tune its shape.
The Fourier transform $\hat\varphi(\,\cdot\,; m,\gamma,a )$ of the KB function $\varphi(\,\cdot\,; m,\gamma,a )$ can be computed explicitly as (see \cite{lewitt1990multidimensional}) \begin{equation}\label{eq:kb-hat} \hat\varphi(\xi; m,\gamma,a ) \coloneqq \begin{cases} \frac{a^d \gamma^m}{I_m(\gamma)} \,
\frac{I_{d/2+m} \bkl{\sqrt{\gamma^2 - a^2\norm{\xi}^2}}}{\bkl{\sqrt{\gamma^2 - a^2\norm{\xi}^2}}^{d/2+m}} & \text{if $a\norm{\xi} \leq \gamma$} \\ \frac{a^d \gamma^m}{I_m(\gamma)} \,
\frac{J_{d/2+m} \bkl{\sqrt{a^2\norm{\xi}^2-\gamma^2 }}}{\bkl{\sqrt{ a^2\norm{\xi}^2-\gamma^2}}^{d/2+m}} & \text{otherwise}\,. \end{cases} \end{equation} Here $J_m$ denotes the first kind Bessel function of order $m$. The known asymptotic decay $J_{d/2+m}(r) = \mathcal O(r^{-1/2})$ implies that the asymptotic behavior of the generalized KB function is
$\hat\varphi(\xi; m,\gamma,a ) = \mathcal O\kl{ \|\xi\|^{-(d/2+m+1/2)}} $. From Theorem~\ref{thm:conv} we therefore conclude that for any choice of $m$, $a$ and $\gamma$, the spaces \begin{equation*} \mathcal V_{T,s, \varphi(\,\cdot\,; m,\gamma,a )} = \overline{\spa\set{\varphi_{T,s}^k (\,\cdot\,; m,\gamma,a )\mid k \in \mathbb{Z}^d} } \end{equation*} yield a vanishing approximation error when $s \to 0$ and $T$ remains bounded. Note that the parameter $a$ plays exactly the same role as the parameter $T$. Therefore, without loss of generality, one could omit $a$ in the definition of the KB functions. However, we include it since it is standard to consider the KB functions as a three-parameter family.
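The KB profile \eqref{eq:kb} is straightforward to evaluate; the following sketch (not from the paper; $d=1$, with $I_m$ computed from its truncated power series rather than a library call) checks the defining properties $\varphi(0) = 1$, compact support in $[-a,a]$, and a profile decreasing away from the center:

```python
import math

def bessel_i(m, z, terms=40):
    # modified Bessel function of the first kind via its power series:
    # I_m(z) = sum_j (z/2)^(2j+m) / (j! * (j+m)!)
    return sum((z / 2.0) ** (2 * j + m) / (math.factorial(j) * math.factorial(j + m))
               for j in range(terms))

def kb(x, m=2, gamma=10.0, a=1.0):
    # 1D generalized Kaiser-Bessel function
    if abs(x) > a:
        return 0.0
    w = math.sqrt(1.0 - (x / a) ** 2)
    return w ** m * bessel_i(m, gamma * w) / bessel_i(m, gamma)

assert abs(kb(0.0) - 1.0) < 1e-12     # normalized to 1 at the center
assert kb(1.5) == 0.0                 # vanishes outside [-a, a]
assert kb(0.25) > kb(0.5) > 0.0       # decreasing radial profile
```

Larger $\gamma$ concentrates the profile near the center, which is the "spikiness" referred to in the text.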
Note that the KB function (as any other radially symmetric basis function with compact support) does not satisfy the partition of unity condition. Therefore, Theorem \ref{thm:aerror} implies (for sufficiently regular functions) that the asymptotic approximation error saturates; that is, we have \begin{equation}\label{eq:sat} \lim_{T \to 0}\norm{\mathbf P_{T,s}f-f}_{L^2}^2 = A_{\varphi,s} \norm{f}_{L^2}^2
\quad \text{ with } \quad A_{\varphi,s} \coloneqq
\frac{\sum_{k \neq 0}|\hat{\varphi}(2k\pi/s)|^2}{\sum_{k\in\mathbb{Z}^d}|\hat{\varphi}(2k\pi / s )|^2} \,. \end{equation} Keeping $m=2$, $a=2$ and $s=1$ fixed, in \cite{nilchian2015optimized} it has been proposed to select the taper parameter $\gamma$ in such a way that the asymptotic approximation error given by $A_{\varphi,s}$ is minimized. Although such a procedure does not overcome the saturation phenomenon, the saturation effect (for given order and given redundancy factor) is minimized. In contrast, our theory shows that allowing $s$ to vary overcomes the saturation phenomenon.
If the partition of unity property does not hold, another natural strategy to address the saturation phenomenon is to first consider $T \to 0$ while keeping $s$ fixed, and in a second step to study the saturation error $A_{\varphi,s}$ defined in \eqref{eq:sat} as $s \to 0$. Similar to Theorem~\ref{thm:conv} one can show that the saturation error vanishes in the limit. More precisely, the following theorem holds.
\begin{theorem}[Saturation error in the limit]
Let $\varphi \in L^2(\mathbb{R}^d) \cap L^1(\mathbb{R}^d)$ be such that $(\varphi_{T,s}^k)_{k\in\mathbb{Z}^d}$ is a Riesz basis of $\mathcal V_{T,s,\varphi}$ with $\hat{\varphi}(\xi)=\mathcal{O}(\|\xi\|^{-p})$ as $\|\xi\|\rightarrow \infty$ for some $p>d/2$, and let $f\in W_2^r(\mathbb{R}^d)$ with $r > d/2$. Then \eqref{eq:sat} holds and $\lim_{s \to 0} A_{\varphi,s} = 0$. \end{theorem}
\begin{proof} We have \begin{equation*}
A_{\varphi,s} = \frac{\sum_{k\neq 0}|\hat{\varphi}(2k\pi/s)|^2}{\sum_{k\in\mathbb{Z}^d}|\hat{\varphi}(2k\pi/s)|^2}=1-\frac{|\hat{\varphi}(0)|^2}{|\hat{\varphi}(0)|^2 + \sum_{k\neq 0}|\hat{\varphi}(2k\pi/s)|^2} \,. \end{equation*} Therefore the claim follows after showing
$\sum_{k\neq 0}|\hat{\varphi}(2k\pi/s)|^2 \to 0$ as $s \to 0$. Since $\hat{\varphi}(\xi)=\mathcal{O}(\|\xi\|^{-p})$, there exist
$C,s_0>0$ with $|\hat{\varphi}(2k\pi/s)|\leq C \|2k\pi/s\|^{-p}$
for $k\neq 0$ and $s \leq s_0$. This implies $\sum_{k\neq 0}|\hat{\varphi}(2k\pi/s)|^2 \leq C^2(\frac{s}{2\pi})^{2p}\sum_{k\neq 0}\| k\|^{-2p}$. Because $p>d/2$, the sum $\sum_{k\neq 0}\| k\|^{-2p}$ is absolutely convergent. Hence we have $\sum_{k\neq 0}|\hat{\varphi}(2k\pi/s)|^2 \to 0$ as $s \to 0$, which concludes the proof. \end{proof}
\section{The Galerkin approach for PAT using shift invariant spaces} \label{sec:galerkinshift}
In this section we describe how to efficiently implement the least squares Galerkin method using subspaces of a shift invariant space. This is in contrast to the use of a general reconstruction space, where both the computation of the system matrix and the solution of the
Galerkin equation can be slow. For shift invariant spaces the system matrix takes a very special form which allows an efficient implementation.
Let $\varphi \in L^2(\mathbb{R}^d)$ be such that the elements $\varphi_{T,s}^k$ form a Riesz basis of $\mathcal V_{T,s,\varphi}$; see Section~\ref{sec:spaces}. Moreover, let $(T_N)_{N \in \mathbb{N}}$ and $(s_N)_{N \in \mathbb{N}}$ be two sequences of positive numbers describing the support and the redundancy of the basis functions, respectively. We consider the reconstruction spaces \begin{equation} \label{eq:XN} \mathcal X_N \coloneqq \left\{\sum_{k\in\Lambda_N} c_k \varphi_N^k \mid
c_k \in \mathbb{R} \text{ for } k \in \Lambda_N \right\} \subseteq \mathcal V_{T_N,s_N,\varphi} \,, \end{equation} where $\varphi_N^k \coloneqq \varphi_{T_N,s_N}^k$ are the basis functions (with $\varphi_{T_N,s_N}^k$ as in \eqref{eq:phik}), and $\Lambda_N \coloneqq \{k\in\mathbb{Z}^d \mid m_k \coloneqq T_N s_N k \in B_R(0) \}$
denotes the set of all $k \in \mathbb{Z}^d$ such that the mid-point $m_k$ of the $k$-th basis function is contained in $B_R(0)$. Then $\dim \mathcal X_N = |\Lambda_N|$ is the number of basis elements used for image representation. In the case that the support of the function to be reconstructed intersects (or is at least close to) $\partial B_R(0)$, it may be better to use all basis functions $\varphi_{T_N,s_N}^k$ whose support intersects $\overline{B_R(0)}$. The following considerations also hold for such an alternative choice. Further note that the approximation results for the shift invariant spaces $\mathcal V_{T,s,\varphi}$ do not immediately yield approximation results for the finite dimensional spaces $\mathcal X_N$. Such investigations are an interesting topic for future studies.
When applied with the reconstruction space $\mathcal X_N $, our Galerkin approach to PAT analyzed in Section~\ref{sec:galerkinPAT} takes the form (see Theorem~\ref{thm:leastsquares})
\begin{equation} \label{eq:galerkin2}
f_N = \sum_{k\in \Lambda_N} c_{N,k} \varphi_N^k \,,
\end{equation}
where \begin{itemize} \item $\mathbf A_N \coloneqq ( \tfrac{R}{2}\ip{\varphi_N^k}{\varphi_N^\ell}_{L^2})_{k,\ell\in \Lambda_N}$ is the system matrix; \item $d_N \coloneqq (\ip{\mathbf W \varphi_N^k}{g}_t)_{k\in \Lambda_N}$ is the right hand side; \item $c_N \coloneqq (c_{N,k})_k$ solves the Galerkin equation $\mathbf A_N c_N = d_N$. \end{itemize} As discussed in the following subsection, for the shift invariant case the system matrix $ \mathbf A_N$ takes a very special form which significantly simplifies the computations. Further, the right hand side of the Galerkin equation can be computed efficiently as described in Subsection \ref{sec:rhs} below.
\subsection{Evaluation of the system matrix} \label{sec:sys}
For any $N \in \mathbb{N}$ and any $k, \ell \in \Lambda_N$, the entries of the system matrix $\mathbf A_N$ satisfy \begin{align*}
\ip{\varphi_N^k}{\varphi_N^\ell}
&=
\frac{1}{T_N^d}
\int_{\mathbb{R}^d}
\varphi\bkl{\frac{x}{T_N} - s_N k}
\varphi\bkl{\frac{x}{T_N} - s_N \ell}
\rmd x \\
&=
\int_{\mathbb{R}^d}
\varphi\bkl{y - s_N k}
\varphi\bkl{y - s_N \ell}
\rmd y \\
&=
\int_{\mathbb{R}^d}
\varphi\bkl{x}
\varphi\bkl{x - s_N (\ell-k)}
\rmd x \\
&= \ip{\varphi_{1,s_N}^0}{\varphi_{1,s_N}^{\ell-k}}\,. \end{align*} Hence instead of computing and storing the whole system matrix required by standard Galerkin methods, in our approach only the values $\ip{\varphi_{1,s_N}^0}{\varphi_{1,s_N}^{n}}$ where $n = \ell-k$ with $\ell, k \in \Lambda_N$ have to be computed and stored.
The total number of such inner products is bounded by
$2^d |\Lambda_N|$. In the case where $\varphi$ has small support this number is actually much smaller since $\ip{\varphi_{1,s_N}^k}{\varphi_{1,s_N}^\ell}$ vanishes if the supports of $\varphi_{1,s_N}^k$ and $\varphi_{1,s_N}^\ell$ do not overlap.
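The translation invariance of the inner products can be verified numerically. The following sketch does so in one spatial dimension, using the normalization $\varphi_{T,s}^k(x) = T^{-1/2}\varphi(x/T - sk)$ from the computation above; the Gaussian generator and the quadrature grid are illustrative assumptions (the KB function would be used in practice):

```python
import numpy as np

# Numerical check that <phi_N^k, phi_N^l> depends only on l - k (1D case;
# a Gaussian stands in for the generating function -- an assumption).
def phi(y):
    return np.exp(-y**2)

def inner(T, s, k, l, x):
    # <phi_{T,s}^k, phi_{T,s}^l> with phi_{T,s}^k(x) = T^(-1/2) phi(x/T - s k),
    # approximated by a Riemann sum on the grid x
    dx = x[1] - x[0]
    return np.sum(phi(x / T - s * k) * phi(x / T - s * l)) * dx / T

x = np.linspace(-30.0, 30.0, 20001)
a1 = inner(0.5, 1.2, 0, 1, x)   # shift difference n = 1
a2 = inner(0.5, 1.2, 2, 3, x)   # same shift difference n = 1
assert abs(a1 - a2) < 1e-10
```

Consequently, only the values for the occurring differences $n = \ell - k$ need to be stored, rather than the full matrix.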
In this paper we mainly consider the (non-overlapping) pixel basis (see Subsection~\ref{sec:ex:pixel}) and the KB functions in two spatial dimensions (see Subsection~\ref{sec:ex:KB}). The pixel basis is an orthonormal system and therefore the system matrix is the identity. The KB functions are radially symmetric. In this situation we compute the entries $\ip{\varphi_{1,s_N}^0}{\varphi_{1,s_N}^{\ell-k}}$ of the system matrix $\mathbf A_N$ approximately as follows: we numerically computed the inner products $\langle \varphi_{1,s_N}^0,\varphi_{1,s_N}^k\rangle_{L^2}$ for all $k\in\mathbb{Z}^2$
with $\|k\|_2\leq 2a$ using the rectangle rule. For this we discretized the square $[-a,a]^2$ by an equidistant Cartesian grid with $M \times M$ grid points $(x_i,y_j)$ and computed \begin{equation}\label{eq:sm-discrete} \langle \varphi_{1,s_N}^0,\varphi_{1,s_N}^k\rangle_{L^2} \simeq \frac{(2a)^2}{(M-1)^2}\sum_{i=1}^{M}\sum_{j=1}^{M}\varphi_{1,s_N}^{0}(x_i,y_j)\varphi_{1,s_N}^{k}(x_i,y_j) \,. \end{equation} The resulting system matrix is a tensor product of Toeplitz matrices.
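A minimal version of this discretization is sketched below, with a radial bump $e^{-8\norm{x}^2}$ standing in for the KB function (an assumption); it also checks that the entries depend only on the shift difference $\ell - k$ and, for a radial generator, only on $\norm{\ell - k}$:

```python
import numpy as np

# Rectangle-rule evaluation of the inner products as in the displayed formula;
# a radial bump exp(-8 r^2) stands in for the KB function (assumption).
a, M, s = 2.0, 201, 0.5
xs = np.linspace(-a, a, M)
X, Y = np.meshgrid(xs, xs)

def phi_k(kx, ky):
    return np.exp(-8.0 * ((X - s * kx) ** 2 + (Y - s * ky) ** 2))

w = (2 * a) ** 2 / (M - 1) ** 2   # quadrature weight from the formula
def ip(k, l):
    return w * np.sum(phi_k(*k) * phi_k(*l))

# Entries depend only on the difference l - k ...
assert abs(ip((0, 0), (1, 0)) - ip((1, 0), (2, 0))) < 1e-8
# ... and, for a radial generator, only on |l - k|:
assert abs(ip((0, 0), (1, 0)) - ip((0, 0), (0, 1))) < 1e-10
```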
\subsection{Evaluation of the right hand side} \label{sec:rhs}
In practical applications, instead of the continuously sampled data $g = \mathbf W f$, only discrete data $g(z_i, t_j)_{i,j}$ are known, where $t_j = j \, T/N_t$ are $N_t$ equidistant time points in the interval $[0,T]$ and $z_i$ are $N_{\rm det}$ points on the measurement surface $\partial B_R(0)$. In our numerical implementation we approximate the right hand side in the Galerkin equation as follows: \begin{equation}\label{eq:ipg1}
\ip{\mathbf W\varphi_N^k}{g}_t
\simeq
\frac{T}{N_t-1} \sum_{i=1}^{N_{\rm det}}
\sum_{j=1}^{N_t}
w_i (\mathbf W\varphi_N^k)(z_i, t_j) g(z_i, t_j) t_j \,. \end{equation} Here $w_i$ are appropriate weights accounting for the density of the sampling points. The right hand side in~\eqref{eq:ipg1} may be interpreted as the exact inner product $\ip{\mathbf W\varphi_N^k}{g^\delta}_t$ for some approximate data $g^\delta \simeq g$, which allows application of our convergence and stability result derived in Theorem~\ref{thm:conv}.
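The quadrature in \eqref{eq:ipg1} can be illustrated for a single detector with unit weight $w_i = 1$. In the sketch below the integrands are illustrative stand-ins for $(\mathbf W\varphi_N^k)(z_i,\,\cdot\,)$ and the data, not actual wave data:

```python
import numpy as np

T, Nt = 3.0, 2000
t = np.arange(1, Nt + 1) * T / Nt          # t_j = j T / N_t as in the text
u, g = np.cos(t), np.exp(-t)               # stand-ins for (W phi)(z_i, .) and g(z_i, .)
approx = T / (Nt - 1) * np.sum(u * g * t)  # the quadrature from the text (w_i = 1)

# reference value of int_0^T u(t) g(t) t dt by a fine midpoint rule
M = 200000
tm = (np.arange(M) + 0.5) * T / M
exact = (T / M) * np.sum(np.cos(tm) * np.exp(-tm) * tm)
assert abs(approx - exact) < 1e-2
```

Note the factor $t_j$ inside the sum, which reflects the time-weighted inner product $\ip{\,\cdot\,}{\,\cdot\,}_t$.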
In some situations (for example for the KB functions and other radially symmetric basis functions in three dimensions), the solution $\mathbf W\varphi_N^k$ of the wave equation is available analytically (see~\cite{diebold1991photoacoustic,wang2014discrete}). In our numerical examples we use the pixel basis and the KB basis functions in two spatial dimensions, where we are not aware of explicit representations for the corresponding solution of the wave equation. In this case we numerically compute $\mathbf W\varphi$ using the well-known solution formula for the wave equation \eqref{eq:wave-fwd}, \begin{align}\label{eq:wavesol} \mathbf W f(z,t) = (\partial_t \mathbf A_t \mathbf M f) (z,t) \coloneqq
\frac{1}{2\pi} \frac{\partial}{\partial t} \int_0^t\int_{\mathbb S^1} \frac{r f(z + r \omega)}{\sqrt{t^2-r^2}} \rmd s(\omega) \rmd r \,. \end{align} Here \begin{align*}
&\forall (z,r) \in \partial B_R(0) \times (0, \infty) \colon
&&
\mathbf M f(z,r)
\coloneqq\frac{1}{2\pi}\int_{\mathbb S^1}
f(z + r \omega) \rmd s(\omega) \,,\\
&\forall (z,t) \in \partial B_R(0) \times (0, \infty) \colon
&&
\mathbf A_t g(z,t)
\coloneqq
\int_0^t \frac{r g(z,r)}{\sqrt{t^2-r^2}} \, \rmd r \,, \end{align*} denote the spherical means transform of a function $f \colon \mathbb{R}^2 \to \mathbb{R}$ with support in $B_R(0)$, and the Abel transform of a function $g \colon \partial B_R(0) \times (0, \infty) \to \mathbb{R}$ in the second variable, respectively. The solution formula \eqref{eq:wavesol} is used to numerically compute $\mathbf W\varphi_N^k$ required for evaluating the right hand side of the Galerkin equation as outlined in the following.
\begin{itemize}
\item For a radially symmetric basis function of the form $\varphi(x) = \bar \varphi(\norm x)$ the corresponding solution of the wave equation is also radially symmetric. Hence in order to approximate $\mathbf W\varphi_N^k$ we numerically approximate $\mathbf W \varphi_{1,s_N}^0((r_n,0), t_j)$ for $N_r$ equidistant radii $r_n \in[0,2R]$, using a numerical approximation of $\mathbf W$ obtained by discretizing the spherical Radon transform as well as the Abel transform in \eqref{eq:wavesol}. As a next step, for any basis function $\varphi_N^k$, we approximately compute \begin{equation*}
\mathbf W\varphi_N^k(z_i,t_j)
=
\mathbf W\varphi_{1,s_N}^{0}((\|z_i - m_k\|,0), t_j) \end{equation*} at the detector points $z_i \in \partial B_R(0)$
and discrete time points $t_j$ by replacing the right hand side with the piecewise linear interpolation in the first argument using the known values $\mathbf W\varphi_{1,s_N}^0((r_n,0), t_j)$.
\item In the case of the pixel basis, the spherical means $\mathbf M \chi_N^k$ have been computed analytically and evaluated at the discretization points $(z_i,t_j)$. Subsequently, the wave data $\mathbf W\varphi_N^k(z_i,t_j)$ are computed by numerically evaluating the Abel transform in~\eqref{eq:wavesol}.
\end{itemize}
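The building block $\mathbf M$ in \eqref{eq:wavesol} is straightforward to discretize and to test: for the Gaussian $f(x)=e^{-\norm{x}^2}$ the spherical mean has the closed form $\mathbf M f(z,r) = e^{-(\norm z^2 + r^2)} I_0(2r\norm z)$, with $I_0$ the modified Bessel function of order zero. The following sketch (using a uniform angular quadrature, which converges rapidly for smooth $f$) verifies this:

```python
import numpy as np

# Discretized spherical means transform M f(z, r) = (1/2pi) int_{S^1} f(z + r w) ds(w),
# checked against the closed form for the Gaussian f(x) = exp(-|x|^2).
def sph_mean(f, z, r, n=2000):
    th = 2.0 * np.pi * np.arange(n) / n
    pts = z + r * np.stack([np.cos(th), np.sin(th)], axis=1)
    return np.mean(f(pts))   # uniform (trapezoidal) rule on the circle

f = lambda p: np.exp(-np.sum(p ** 2, axis=1))
z, r = np.array([0.7, 0.1]), 0.5
num = sph_mean(f, z, r)
ref = np.exp(-(z @ z + r ** 2)) * np.i0(2.0 * r * np.linalg.norm(z))
assert abs(num - ref) < 1e-10
```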
\begin{remark}[Implementation\label{rem:FE} for finite element bases] Above we have demonstrated how the Galerkin approach can be implemented efficiently if $\mathcal X_N$ is generated by a radially symmetric function. In fact, an efficient implementation of the Galerkin method for PAT can be obtained for an arbitrary generating element. In this case the entries of the system matrix are computed similarly to~\eqref{eq:sm-discrete}. Due to the lack of symmetry of the basis functions, the solution formula \eqref{eq:wavesol} cannot be used to accelerate the computation of $\mathbf W \varphi_N^k$, as this would require a separate evaluation of \eqref{eq:wavesol} for each basis function. In the non-radially symmetric case, however, one can exploit again that $\varphi_N^k(x) = \varphi((x - skT)/T)$ are translates of a single function $\varphi_{T,1}^0$. For that purpose, one first numerically computes the solution $p(x,t)$ of the wave equation \eqref{eq:wave-fwd} with initial data $\varphi_{T,1}^0$. In a second step one uses interpolation to approximately find $(\mathbf W \varphi_N^k) (z_i,t_j) = p(z_i - skT, t_j)$.
Such an approach can, for example, be used for a bilinear finite elements basis that consists of scaled and translated versions of the basis function $\varphi \colon \mathbb{R}^2\rightarrow\mathbb{R}$ defined by \begin{equation} \varphi(x) \coloneqq \begin{cases} (1-\abs{x_1})(1-\abs{x_2}) &(x_1,x_2)\in [-1,1]^2 \\ 0 &\text{else} \,. \end{cases} \end{equation} The corresponding finite element basis using the shift parameter $s=1/2$ can easily be seen to satisfy the partition of unity property. Note that in this case the entries of the system matrix can even be computed analytically. Exploring the use of finite elements further is beyond the scope of this paper. However, we think that a precise comparison of different basis elements (in combination with our Galerkin approach as well as in combination with related approaches) is an interesting and important line of future research. \end{remark}
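The partition of unity claim can be checked numerically. In the unnormalized form used below the translates with shift parameter $s = 1/2$ sum to the constant $4$; the precise normalization convention is an assumption here:

```python
import numpy as np

# Numerical check of the partition of unity property for the bilinear hat
# function with shift parameter s = 1/2: in this unnormalized form the
# translates sum to the constant 4 (normalization is an assumption).
def hat2(x, y):
    return np.maximum(1 - np.abs(x), 0) * np.maximum(1 - np.abs(y), 0)

rng = np.random.default_rng(0)
for (u, v) in rng.uniform(-0.5, 0.5, size=(50, 2)):
    total = sum(hat2(u - kx / 2, v - ky / 2)
                for kx in range(-4, 5) for ky in range(-4, 5))
    assert abs(total - 4.0) < 1e-12
```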
\section{Numerical studies} \label{sec:num}
In this section we present results of our numerical studies for the Galerkin least squares approach, where the approximation space $\mathcal X_N$ is taken as a subspace of a shift invariant space. We further compare our results with related approaches in the literature. We restrict ourselves to the case of two spatial dimensions and take $R=1$ for the radius of the measurement circle.
For all presented numerical results, the function $f$ is taken as a superposition of indicator functions, as shown in the top left
image in Figure~\ref{fig:rec1}. The corresponding discrete data \begin{equation} \label{eq:ddatad} g(z_i, t_j) \simeq (\mathbf W f)(z_i, t_j) \quad \text{ for $i = 1, \dots, N_{\rm det}$ and $j = 1, \dots , N_t$} \,, \end{equation} where $z_i = (\cos (i 2\pi/N_{\rm det} ), \sin (i 2\pi/N_{\rm det} ))$ denote the equidistant detector locations and $t_j = j T/N_t$ the discrete time points, have been computed numerically by implementing \eqref{eq:wavesol}. For that purpose we discretized the spherical Radon transform as well as the Abel transform in \eqref{eq:wavesol}. We take $T=3$ as the final measurement time, $N_{\rm det} = 100$ and $N_t = 376$ for the discretization of the data, and $N_x = 300$ for discretizing the function. Note that the data are computed in a manner completely different from the Galerkin discretization, which avoids an inverse crime.
\subsection{Reconstruction results using Kaiser-Bessel Functions}
We first investigate the case where $\varphi = \varphi(\,\cdot\,; m,a,\gamma)$ is a KB function. The parameters $m$ and $\gamma$ determine the shape and smoothness of the KB function; here we fix $m=1$ and $\gamma=2$. The parameter $a$ determines the support of the KB function, which is also controlled by the parameter $T$; we can therefore fix $a$ as well and, without loss of generality, take $a=2$. This ensures that for $s=1$ the functions $\varphi_N^k$ show sufficient overlap. Since the total number of basis functions centered in the square $[-1,1]^2$ equals $N^2$ with $N = 2 /(sT)+1$, it is reasonable to consider combinations of the parameters $s$ and $T$ for which the product $sT$ remains constant.
\begin{figure}\label{fig:rec1}
\end{figure}
The proposed KB Galerkin approach for the inverse PAT problem consists in solving the Galerkin equation~\eqref{eq:galerkin2}. The system matrix and the right hand side are computed in \textsc{Matlab}\xspace as described in Section~\ref{sec:galerkinshift}, and the direct solver {\tt mldivide} is used for numerically computing the solution of \eqref{eq:galerkin2}. Figure~\ref{fig:rec1} shows reconstructions using the KB Galerkin approach for $N=100$ and step size parameters $s=0.8081$, $s = 1.0101$ and $s=1.3636$, respectively. All considered step size parameters yield quite good results.
\begin{table}[thb!] \centering
\begin{tabular}{c c c | c c c } \toprule
\multicolumn{3}{c}{$N=50$} & \multicolumn{3}{c}{$N=100$} \\ \midrule $s$ & $T$ & $e_N(s,f)$ & $s$ & $T$ & $e_N(s,f)$ \\ \midrule 1.4286 &0.0286& 0.0412 &1.4141 &0.0143& 0.0336 \\ \midrule 1.3776 &0.0296& 0.0379 &1.3636 &0.0148& 0.0299 \\ \midrule 1.3265 &0.0308& 0.0351 &1.3131 &0.0154& 0.0298 \\ \midrule 1.2755 &0.0320& 0.0352 &1.2626 &0.0160& 0.0303 \\ \midrule 1.2245 &0.0333& 0.0369 &1.2121 & 0.0167& 0.0307 \\ \midrule 1.1735 &0.0348& 0.0401 &1.1616 &0.0174& 0.0320 \\ \midrule 1.1224 &0.0364& 0.0439 &1.1111 &0.0182& 0.0345 \\ \midrule 1.0714 &0.0381& 0.0427 &1.0606 & 0.0190& 0.0367 \\ \midrule 1.0204 &0.0400& 0.0428 &1.0101& 0.0200& 0.0384 \\ \midrule 0.9694 &0.0421& 0.0403 &0.9596 & 0.0211& 0.0335 \\ \midrule 0.9184 &0.0444& 0.0391 &0.9091&0.0222& 0.0392 \\ \midrule 0.8673 &0.0471& 0.0412 &0.8586 &0.0235& 0.0366 \\ \midrule 0.8163 &0.0500& 0.0432 &0.8081&0.0250& 0.0322 \\ \midrule 0.7653 &0.0533& 0.0464 &0.7576 & 0.0267& 0.0365 \\ \bottomrule \end{tabular} \caption{\textsc{\label{tab:error} Relative $L^2$-reconstruction errors.} For different choices of $s$ the reconstruction error $e_N(s,f)$ is evaluated for $N=50$ and $N=100$. Recall that $s$ is the step size and $T = 2/(s(N-1))$ determines the size of the KB basis functions.} \end{table}
\subsection{Parameter selection for the KB functions}
Choosing optimal parameters is a delicate issue. In the following we numerically investigate the optimal choice of the parameters $s$ and $T$ for a fixed number of basis functions $N^2$ with $N=50$ and $N=100$, respectively. For that purpose we compute the relative $L^2$-reconstruction error
\begin{equation} \label{eq:err-nsf}
e_N(s,f) \coloneqq
\frac{\sum_{i=1}^N\sum_{j=1}^N
|f_N(x_i,y_j)-f(x_i,y_j)|^2}{\sum_{i=1}^N\sum_{j=1}^N |f(x_i,y_j)|^2} \end{equation} for different choices of $s$ and $T$ satisfying the side condition $sT = 2/(N-1)$. Here $f_N \coloneqq\sum_{k\in\Lambda_N} c_{N,k}\varphi_N^k$ is the Galerkin reconstruction given by \eqref{eq:galerkin2} and the points $(x_i,y_j)$ for evaluating the error in \eqref{eq:err-nsf} are taken as the elements of $\{s_N T_N k \ \mid \ k\in\mathbb{Z}^2\}\cap[-1,1]^2$. In Table~\ref{tab:error} we show these relative $L^2$-reconstruction errors. From Table~\ref{tab:error} one finds that for the considered function optimal choices for the step size parameter are $s=1.3265$ for $N=50$ and $s = 1.3131$ for $N=100$.
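A direct implementation of the error measure \eqref{eq:err-nsf} is straightforward; note that, as defined, it is the squared relative error. The phantom and the "reconstruction" in the sketch below are toy stand-ins, not the actual Galerkin output:

```python
import numpy as np

# Relative L2-reconstruction error e_N(s, f) as defined in the text
# (squared relative error on the evaluation grid).
def e_N(f_N, f):
    return np.sum((f_N - f) ** 2) / np.sum(f ** 2)

xs = np.linspace(-1, 1, 50)
X, Y = np.meshgrid(xs, xs)
f = (X ** 2 + Y ** 2 < 0.5).astype(float)   # toy indicator phantom
f_rec = f + 0.01 * np.sin(5 * X)            # toy "reconstruction"
assert e_N(f, f) == 0.0
assert 0.0 < e_N(f_rec, f) < 1e-2
```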
\begin{figure}\label{fig:psi}
\end{figure}
From Table~\ref{tab:error} one notices an irregular behavior of the reconstruction error in dependence on $s$ and $N$. To better understand this issue recall that for a given basis function $\varphi = \varphi(\,\cdot\, ; m,a,\gamma)$ the best $L^2$-approximation error
using functions $\varphi_{T,s}^k$ is given by (see Theorem~\ref{thm:aerror}) \begin{equation*} \norm{\mathbf P_{T,s}f-f}_{L^2}^2 = \int_{\bigl[-\frac{\pi}{Ts},\frac{\pi}{Ts}\bigr]^2}\abs{\hat{f}(\xi)}^2 \bkl{ 1- \frac{\abs{\hat{\varphi}(T\xi)}^2}{\sum_{k\in\mathbb{Z}^d}\abs{\hat{\varphi}(T\xi+2k\pi/s)}^2}} \rmd \xi \,, \end{equation*} where it is assumed that the Fourier transform of $f$ is sufficiently small outside $[-\pi/(Ts),\pi/(Ts)]^2$. Hence for fixed $N$ a ``good'' choice of $s$
should be made at least in such a way that \begin{equation*} S_{\varphi}(s,N ,\xi ) \coloneqq
\frac{\sum_{k \neq 0} \abs{\hat{\varphi} \bigl( \frac{2}{s (N-1)} \xi - \frac{2k\pi}{s} \bigr) }^2}
{\abs{\hat{\varphi}\bigl(\frac{2}{s (N-1)} \xi \bigr)}^{2}} \quad \text{is ``small'' for } \norm{\xi} \leq \frac{\pi (N-1)}{2} \,. \end{equation*} (We have taken $T = 2/(s (N-1))$ and $\hat f$ is supposed to be unknown.) Figure~\ref{fig:psi} shows the absolute value of the radially symmetric Fourier transform of the basis function $\varphi(\,\cdot\, ; 1,2,2)$ in a logarithmic plot. This shows a complicated dependence of $S_{\varphi}(s, N ,\xi )$ on $s$, $N$ and $\xi$ and indicates that a simple, universally valid answer to the question of how to optimally choose the parameters seems difficult. We further note that $S_{\varphi}(s,N ,\xi )$ does not account for the error due to frequency content outside $[-\pi/(Ts),\pi/(Ts)]^2$. Nevertheless, theoretical error estimates in combination with numerical studies can give precise guidelines for selecting good parameters for practical applications. The quality of the reconstruction depends on the parameters of the KB function $m,\gamma, a$ as well as on $s$ and $T$ (note that $T$ has a similar role as $a$). In the paper \cite{nilchian2015optimized} the authors studied optimizing the parameter $\gamma$ (in the limit $T \to 0$) while the parameters $s=1$, $a=2$ and $m=2$ have been kept fixed. For that purpose they chose the parameter $\gamma$ in
$\varphi(\,\cdot\, ; 2,2,\gamma)$ such that the limiting residual error $\lim_{N \to \infty} S_{\varphi}(1, N ,\xi ) = \sum_{k \neq 0} |\hat{\varphi}(2k\pi)|^2 / |\hat{\varphi}(0)|^2$ (which is independent of $\xi$) becomes minimal. As argued above, the drawback of such an approach is that keeping $s$ fixed does not yield a vanishing asymptotic error as $N \to \infty$. Allowing $s$ to depend on $N$ overcomes this issue but makes the parameter selection more complicated.
\begin{figure}\label{fig:comparison}
\end{figure}
\subsection{Comparison with state of the art reconstruction methods}
We compare our Galerkin approach using KB functions with other state of the art approaches for PAT image reconstruction. We used the same phantom as above and the same wave data $\mathbf W f$ for all reconstruction methods. We selected $100 \times 100$ basis functions. For the KB Galerkin approach we use the generating function $\varphi(\,\cdot\,; 1,2,2)$ with step size parameter $s = 0.8081$ and correspondingly $T = 0.025$.
The KB Galerkin-least squares approach is compared to the following methods:
\begin{itemize}
\item {\bfseries Discrete-discrete KB imaging model \cite{wang2014discrete}.} We also compare our method to the DD (discrete-discrete) image reconstruction approach using KB functions proposed in \cite{wang2014discrete}. There the same basis functions for approximating the unknown function are used, $f_N = \sum_{k\in \Lambda_N} c_{N,k} \varphi_N^k$. As opposed to our Galerkin approach, the coefficients in the basis expansion are recovered by fitting $\mathbf W f_N$ to the discrete data values $g (x_i,t_j)$. They are characterized as the minimizer of the following discrete data least squares functional over $\mathcal X_N$, \begin{equation}\label{eq:leastfully}
\frac{1}{2} \norm{ \mathbf B_N c_N - g_N}^2
\to \min_{c_N} \end{equation} where $\mathbf B_N := (\mathbf W\varphi_N^k(x_i,t_j))_{i,k}$ and $g_N \coloneqq (g (x_i,t_j))_{i,j}$. Note that in~\cite{wang2014discrete} it has been proposed to add an additional regularization term to~\eqref{eq:leastfully}, which we do not consider here.
\item {\bfseries Filtered backprojection (FBP) algorithm.} For the filtered backprojection algorithm we implemented the explicit inversion formula \begin{multline} \label{eq:inv}
f(x) =\frac{2}{R} \left( \mathbf W^*t\mathbf W f \right)(x)
\\=
-\frac{1}{\pi}\int_{\partial B_R(0)}\int_{|x-p|}^{\infty}\frac{\partial_t (t\mathbf W f(p,t))}{\sqrt{t^2-|x-p|^2}}\rmd t \rmd s(p) \quad \text{ for all } x \in B_R(0) \,. \end{multline} The inversion formula has been derived in \cite{FinPatRak04} for odd spatial dimensions and in \cite{FinHalRak07} for even dimensions. The inversion formula \eqref{eq:inv} can be efficiently implemented in the form of a filtered backprojection algorithm requiring $\mathcal{O} (N^3)$ floating point operations, where $N \times N$ is the number of reconstruction points, see~\cite{BurBauGruHalPal07,FinHalRak07}. For a fair comparison, the number of reconstruction points in the filtered backprojection algorithm is taken equal to the number of basis functions in the KB Galerkin approach.
\item {\bfseries Galerkin reconstruction using the pixel basis.}
Here the reconstruction space is generated by $100 \times 100$ basis functions given by piecewise constant functions on squares of side length $2/100$ (see Section~\ref{sec:ex:pixel}). Since the pixel basis forms an orthonormal system, the system matrix satisfies $\mathbf A_N = \mathbf I_N$. The right hand side of the matrix equation is computed as described in Section~\ref{sec:galerkinshift}. \end{itemize}
\begin{figure}\label{fig:comparisonnoise}
\end{figure}
The minimizer of the optimization problem \eqref{eq:leastfully} is given as the solution of the normal equation
$\mathbf B_N^\mathsf{T} \mathbf B_N c =\mathbf B_N^\mathsf{T} g_N$. The matrix $\mathbf B_N^\mathsf{T} \mathbf B_N$ is less structured and less sparse
than our Galerkin matrix $\mathbf A_N$. We observed that the direct solver in
\textsc{Matlab}\xspace was much slower than for the Galerkin method (more than a minute compared to a fraction of a second) and therefore we decided to use iterative methods for its solution. In particular we found the CG algorithm to perform well; it has been used
for the results shown below. For better comparison we also computed the KB Galerkin solution using the CG method. Iteratively addressing the arising equations has the advantage that they are also applicable to three-dimensional image reconstruction.
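A minimal CG solver of the kind used here for the symmetric positive definite Galerkin and normal equations is sketched below (an illustrative sketch, not the \textsc{Matlab}\xspace implementation used in the experiments):

```python
import numpy as np

# Plain conjugate gradient for a symmetric positive definite system A x = b,
# as used for the Galerkin equation and the normal equations of the DD approach.
def cg(A, b, iters, tol=1e-12):
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        if np.sqrt(rs) <= tol * np.linalg.norm(b):
            break                       # early stopping (regularizing effect)
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs, rs_old = (r @ r), rs
        p = r + (rs / rs_old) * p
    return x

rng = np.random.default_rng(1)
B = rng.standard_normal((80, 40))
A = B.T @ B + 0.1 * np.eye(40)          # normal-equation-type s.p.d. matrix
x_true = rng.standard_normal(40)
x = cg(A, A @ x_true, iters=200)
assert np.linalg.norm(x - x_true) < 1e-6 * np.linalg.norm(x_true)
```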
In Figure \ref{fig:comparison} we show reconstruction results with the above methods applied to the simulated data; all computations were performed on a standard desktop PC. Computing the right hand side in the Galerkin equation took about $1.95$ seconds for the KB functions and $2.04$ seconds for the pixel basis. The solution of the KB Galerkin equation took $0.31$ seconds with the direct \textsc{Matlab}\xspace solver and $0.11$ seconds using $40$ steps of the CG iteration. The solution of the DD equation \eqref{eq:leastfully} took about $76.17$ seconds with the direct \textsc{Matlab}\xspace solver and $5.01$ seconds using $40$ steps of the CG iteration. The filtered
backprojection algorithm took about $0.08$ seconds. One observes that
computing the right hand side is currently the most time consuming part in
the Galerkin approach. Since we have to compute $N^2$ inner products and each inner product consists of a sum over $N_{\rm det} N_t$ components, the numerical effort of that step is $\mathcal O(N^4)$
if we take $N_{\rm det} = \mathcal O(N)$ and $N_t = \mathcal O(N)$. By exploiting the special
structure of the basis functions and the wave operator we believe that
it might be possible to derive an $\mathcal O(N^3)$ algorithm
for evaluating the right hand side. In such a situation we would reach the computational performance of the FBP algorithm, with more flexibility and potentially better accuracy.
Further note that the matrix $\mathbf B_N^\mathsf{T} \mathbf B_N$ in the DD approach is not sparse
which explains why the CG method for the Galerkin approach is faster than the CG method for the DD approach. In three spatial dimensions, both the DD approach (see \cite{wang2014discrete})
and the Galerkin approach yield a sparse system matrix, and therefore both have
similarly good numerical efficiency in this case.
In order to investigate the stability of the above algorithms with respect to noise we repeated the above computations after adding Gaussian white noise with variance equal to $5\%$ of the $L^2$-norm of the data. The results are shown in Figure~\ref{fig:comparisonnoise}. Table~\ref{tab:comparison} summarizes the relative $L^2$-reconstruction errors for different noise levels and the different reconstruction methods. We see that the methods using the KB functions perform best in terms of the $L^2$-reconstruction error. Note that the early stopping of the CG method has a regularizing effect. This partially explains the smaller reconstruction errors of the methods using the CG iteration. We emphasize that we did not select the number of iterations to minimize the reconstruction error. The KB Galerkin method using the direct solver in \textsc{Matlab}\xspace also gives a quite small error, which indicates that early stopping is not a crucial issue in terms of stability. Note that for noisy data all results can be improved by incorporating regularization (see, for example, \cite{Hal11b} for the FBP algorithm and \cite{wang2014discrete} for the DD approach).
\begin{table}[thb!] \centering
\begin{tabular}{c |c c c c c } \toprule
noise ($\%$) & Galerkin & Galerkin (CG) & DD approach (CG) & FBP & Pixel \\ \midrule $0$ & 0.0323 & 0.0306 & 0.0314 & 0.0347 & 0.0283 \\ \midrule $2.5$ & 0.0830 & 0.0748 & 0.0783 & 0.2064 & 0.1249 \\ \midrule $5$ & 0.1411 & 0.1272 & 0.1140 & 0.3897 & 0.2092 \\ \bottomrule \end{tabular} \caption{\textsc{Relative\label{tab:comparison} $L^2$-reconstruction errors for $N = 100$, $s=0.8081$ using different reconstruction methods and different noise levels.} For the Galerkin, the Galerkin (CG) and the DD approach (CG) we use the KB basis function $\varphi(\,\cdot\,; 1,2,2)$. For the methods using the CG algorithm 40 iterative steps have been performed.} \end{table}
\section{Conclusion and outlook} \label{sec:conclusion}
In this paper we studied (least-squares) Galerkin methods for photoacoustic tomography with spherical geometry (and arbitrary dimension). We implemented our Galerkin approach in two spatial dimensions and presented numerical results demonstrating that it yields accurate reconstructions. The considered approach requires the solution of the Galerkin equation $ \mathbf A_N c_N = d_N$, where the system matrix $\mathbf A_N$ has size $N^2 \times N^2$ with $N^2$ denoting the number of basis elements. For a general reconstruction space, the system matrix to be computed and stored is dense and unstructured. In this paper we showed that by using the isometry property of~\cite{FinHalRak07,FinPatRak04} in combination with translation invariant reconstruction spaces, the system matrix is sparse and has a simple structure. This can be used to easily set up and efficiently solve the Galerkin equation. This is in contrast to existing model based approaches for two-dimensional PAT, which do not yield a sparse system matrix, so that numerical solvers for the arising equation (such as the CG algorithm) are numerically more expensive.
There are several possible interesting extensions and modifications of our image reconstruction approach. One intended line of research is the extension of our algorithm to three spatial dimensions. For that purpose we believe that it is most promising to use iterative methods (such as the CG algorithm) for solving the Galerkin equation; one advantage in this case is that the system matrix need not be explicitly stored. We will further derive more efficient ways to evaluate the right hand side of the Galerkin equation, which is, at least for the presented algorithm in two spatial dimensions, the most time consuming part. Another practically important extension of our framework is to incorporate finite detector size, finite bandwidth of the detection system, and incomplete data. In such cases it will be necessary to include additional regularization to stabilize the reconstruction process. We intend to apply our algorithm to experimental data and to study optimal parameter choices in such a situation. Finally, it would be interesting to extend our approach to more general measurement surfaces.
\end{document} |
\begin{document}
\title[]{The Stein Str\"omberg Covering Theorem in metric spaces}
\author{J. M. Aldaz} \address{Instituto de Ciencias Matem\'aticas (CSIC-UAM-UC3M-UCM) and Departamento de Matem\'aticas, Universidad Aut\'onoma de Madrid, Cantoblanco 28049, Madrid, Spain.} \email{[email protected]}
\thanks{2010 {\em Mathematical Subject Classification.} 42B25}
\thanks{The author was partially supported by Grant MTM2015-65792-P of the MINECO of Spain, and also by ICMAT Severo Ochoa project SEV-2015-0554 (MINECO)}
\begin{abstract} In \cite{NaTa} Naor and Tao extended to the metric setting the $O(d \log d)$ bounds given by Stein and Str\"omberg for Lebesgue measure in $\mathbb{R}^d$, deriving these bounds first from a localization result, and second, from a random Vitali lemma. Here we show that the original Stein-Str\"omberg argument can also be adapted to the metric setting, giving a third proof. We also weaken the hypotheses and, additionally, sharpen the estimates for Lebesgue measure. \end{abstract}
\maketitle
\markboth{J. M. Aldaz}{Stein Str\"omberg covering theorem}
\section {Introduction}
In \cite{StSt}, Stein and Str\"omberg proved that for Lebesgue measure in $\mathbb{R}^d$, and with balls defined by an arbitrary norm, the centered maximal function has weak type (1,1) bounds of order $O(d \log d)$, which is much better than the exponential bounds obtained via the Vitali covering lemma. Naor and Tao extended the Stein-Str\"omberg
result to the metric setting in \cite{NaTa}.
There, a localization result is proven (using the notion of microdoubling, which basically entails a
very regular growth of balls) from which the Stein-Str\"omberg
bounds
are obtained (using the notion of strong microdoubling, which combines microdoubling with
local comparability). Also, a second argument is given, via a random Vitali Theorem that has its
origin in \cite{Li}.
Here we note that the Stein-Str\"omberg original proof,
which is shorter and conceptually simpler, can also be used in the metric setting,
yielding a slightly more general result.
We will divide the Stein-Str\"omberg argument into two parts, one with radii separated by large gaps, and the second, with radii inside an interval, bounded away from 0 and $\infty$. This will allow us to obtain more precise information about which hypotheses are needed in each case. We shall see that under the same hypotheses used by Naor and Tao, the Stein-Str\"omberg covering theorem for sparse radii (cf. Theorem \ref{StSt1} below) suffices to obtain the $d\log d$ bounds in the metric setting. But Theorem \ref{StSt1} itself is presented in a more general version. In particular, one does not need to assume that the metric space is geometrically doubling.
We also show that the Stein-Str\"omberg method, applied to balls with no restriction in the radii, yields the $O(d \log d)$ bounds in the metric context, for doubling measures where the growth of balls can be more irregular than is allowed by the microdoubling condition. Finally, we lower the known weak type (1,1) bounds in the case of Lebesgue measure: For $d$ lacunary sets of radii, from $(e^2 + 1) (e + 1)$ to $(e^{1/d} + 1) (1 + 2 e^{1/d})$ (to 6 in the specific case of $\ell_\infty$ balls), and for unrestricted radii, from
$e^2 (e^2 + 1) (1 + o(1)) d \log d$ to $(2 + 3 \varepsilon) d \log d$, where $\varepsilon > 0$
and $d = d(\varepsilon)$ is sufficiently large.
\section {Notation and background material}
Some of the definitions here come from \cite{A2}; we refer the interested reader to that paper for motivation and additional explanations.
We will use $B(x,r) := \{y\in X: d(x,y) < r\}$ to denote open balls, $\overline{B(x,r)}$ to denote their topological closure, and $B^{cl}(x,r) := \{y\in X: d(x,y) \le r\}$ to refer to closed balls. Recall that in a general metric space, a ball $B$, considered as a set, can have many centers and many radii. When we write $B(x,r)$ we mean to single out $x$ and $r$, speaking respectively of the center and the radius of $B(x,r)$.
\begin{definition} A Borel measure is {\em $\tau$-smooth} if for every collection $\{U_\alpha : \alpha \in \Lambda\}$
of open sets, $\mu (\cup_\alpha U_\alpha) = \sup \mu(\cup_{i=1}^nU_{\alpha_i})$,
where the supremum is taken over all finite subcollections of $\{U_\alpha : \alpha \in \Lambda\}$.
We say that $(X, d, \mu)$ is a {\em metric measure space} if $\mu$ is a Borel measure on the metric space $(X, d)$, such that for all balls $B(x,r)$, $\mu (B(x,r)) < \infty$, and furthermore, $\mu$ is $\tau$-smooth. \end{definition}
The assumption of $\tau$-smoothness does not represent any real restriction, since it is consistent with standard set theory (Zermelo-Fraenkel with Choice) that in every metric space, every Borel measure which assigns finite measure to balls is $\tau$-smooth (cf. \cite[Theorem (a), pg. 59]{Fre}).
\begin{definition}\label{maxfun} Let $(X, d, \mu)$ be a metric measure space and let $g$ be a locally integrable function on $X$. For each $x\in X$, the centered Hardy-Littlewood maximal operator $M_{\mu}$
is given by \begin{equation}\label{HLMFc} M_{\mu} g(x) := \sup _{\{r : 0 < \mu (B(x, r))\}} \frac{1}{\mu
(B(x, r))} \int _{B(x, r)} |g| d\mu. \end{equation} \end{definition}
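To fix ideas, in a discrete setting the supremum in (\ref{HLMFc}) becomes a finite maximum and $M_{\mu}$ can be computed directly. The following sketch (an illustration of ours, with hypothetical names, not part of the formal development) evaluates $M_{\mu} g$ for the counting measure on a finite window of $\mathbb{Z}$.

```python
# Toy evaluation of the centered maximal operator of Definition 2,
# for the counting measure on a finite window of the integers.
# All identifiers here are illustrative, not from the paper.

X = list(range(-20, 21))  # finite window of Z

def ball(x, r):
    """Open ball B(x, r) = {y in X : |x - y| < r}."""
    return [y for y in X if abs(x - y) < r]

def maximal(g, x):
    """M g(x) = sup_r (1 / mu B(x, r)) * sum over B(x, r) of |g|.
    Distinct balls centered at an integer arise only at the integer
    radii r = 1, 2, ..., so the supremum is a finite maximum."""
    return max(sum(abs(g(y)) for y in ball(x, r)) / len(ball(x, r))
               for r in range(1, len(X) + 1))

g = lambda y: 1.0 if y == 0 else 0.0  # g = indicator of {0}

# The optimal ball centered at x is the smallest one containing 0;
# it has 2|x| + 1 points, so M g(x) = 1 / (2|x| + 1).
print(maximal(g, 0))  # 1.0
print(maximal(g, 3))  # 1/7
```

For $g = \mathbf{1}_{\{0\}}$ this recovers the familiar $1/(2|x|+1)$ decay of the discrete maximal function.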
Maximal operators can be defined using closed balls instead of open balls, and this does not change their values, as can be seen by an approximation argument.
When the measure is understood, we will omit the subscript $\mu$ from $M_\mu$.
A sublinear operator $T$ satisfies a weak type $(1,1)$ inequality if there exists a constant $c > 0$ such that \begin{equation}\label{weaktype}
\mu (\{T g > s \}) \le \frac{c \|g\|_{L^1(\mu)}}{s }, \end{equation} where $c=c(T, \mu)$ depends neither on $g\in L^1 (\mu)$
nor on $s > 0$. The lowest constant $c$ that satisfies the preceding inequality is denoted by $\|T\|_{L^1\to L^{1, \infty}}$.
\begin{definition} A Borel measure $\mu$ on $(X,d)$ is {\em doubling} if there exists a $C> 0 $ such that for all $r>0 $ and all $x\in X$, $\mu (B(x, 2 r)) \le C\mu(B(x,r)) < \infty$. \end{definition}
\begin{definition} \label{geomdoub} A metric space is {\it $D$-geometrically doubling} if there exists a positive integer $D$ such that every ball of radius $r$ can be covered with no more than $D$ balls of radius $r/2$. \end{definition}
If a metric space supports a doubling measure, then it is geometrically doubling. Regarding weak type inequalities for the maximal operator, in order to estimate $\mu \{M f > s\}$, one considers balls $B(x,r)$ over which
$|f|$ has average larger than $s$. Now, while in the uncentered case any such ball is contained in the corresponding level set, this is not so for the centered maximal function. Thus, using the balls $B(x,r)$ to cover $\{M f > s\}$ can be very inefficient.
A key ingredient in the Stein-Str\"omberg proof is to cover $\{M f > s\}$ by the much smaller balls $B(x,t r)$, $0 < t \ll 1$, something that leads to the ``microdoubling" notion of Naor and Tao. We slightly modify their notation, using $1/n$-microdoubling to denote what these authors call $n$-microdoubling.
\begin{definition} (\cite[p. 735]{NaTa}) Let $0 < t < 1$ and let $K\ge 1$. Then
$\mu$ is said to be $t$-microdoubling with constant $K$ if for all $x \in X$ and all $r >0$, we have $$ \mu B\left(x,\left(1 + t \right) r \right) \le K \mu B(x,r). $$ \end{definition}
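For orientation, consider Lebesgue measure $\lambda^d$ on $\mathbb{R}^d$, with balls defined by any norm (a model case used repeatedly below): by scaling,
$$
\lambda^d B\left(x, \left(1 + \tfrac{1}{d}\right) r \right) = \left(1 + \tfrac{1}{d}\right)^d \lambda^d B(x, r) \le e \, \lambda^d B(x, r),
$$
so $\lambda^d$ is $1/d$-microdoubling with constant $e$, uniformly in the dimension $d$.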
The next property is mentioned in \cite{NaTa}, and more extensively studied in \cite {A2}.
\begin{definition} \label{loccomp} A measure $\mu$ satisfies a {\it local comparability condition} if there exists a constant $C\in[1, \infty)$ such that for all pairs of points $x,y\in X$, and all $r >0$, whenever $d(x,y) < r$, we have $$\mu(B(x,r))\le C \mu(B(y,r)).$$
We denote the smallest such $C$ by $C(\mu)$ or $C_\mu$. \end{definition}
\begin{remark} \label{doublingandmicro}
If $\mu$ is doubling with constant $K_1$ then it is microdoubling and satisfies a local comparability condition with the same constant $K_1$, while if it is $t$-microdoubling with constant $K_2$ and $2 \le (1 + t)^M$, then $\mu$ is doubling and satisfies a local comparability condition with constant $K_2^M$. Thus, the difference between doubling and microdoubling lies in the size of the constants; it is quantitative, not qualitative: The microdoubling condition adds something new only when $K_2 < K_1$, in which case it entails a greater regularity in the growth of the measure of balls, as the radii increase. Likewise, bounds of the form $\mu B(x, T r) \le K \mu B(x, r)$ for $T > 2$ allow a greater irregularity in the growth of balls than standard doubling ($T = 2$) or than microdoubling.
We mention that while local comparability is implied by doubling, it is a uniformity condition, not a growth condition. Thus, it is compatible with the failure of doubling, and even for doubling measures, it is compatible with any rate of growth for the volume of balls. Consider, for instance, the case of $d$-dimensional Lebesgue measure $\lambda^d$: A doubling constant is $2^d$, a $1/d$-microdoubling constant is $e$, and the smallest local comparability constant is $C(\lambda^d) = 1$.
\end{remark}
The next definition combines the requirement that the microdoubling and the local comparability constants be ``small" simultaneously.
\begin{definition} (\cite[p. 737]{NaTa}) Let $0 < t < 1$ and let $K\ge 1$. Then
$\mu$ is said to be strong $t$-microdoubling with constant $K$ if for all $x \in X$,
all $r > 0$, and all $y\in B(x,r)$, $$ \mu B\left(y,\left(1 + t \right)r\right) \le K \mu B(x,r). $$ \end{definition}
Thus, if $\mu$ is strong $t$-microdoubling with constant $K$, then $C(\mu) \le K$. Also, local comparability is the same as strong $0$-microdoubling. To get a
better understanding of how bounds depend on the different constants,
it is useful to keep separate $C(\mu)$ and $ K$.
\begin{definition} Given a set $S$ we define its $s$-{\em blossom} as the enlarged set \begin{equation} \label{altblossom} Bl(S, s):= \cup_{x\in S}B(x,s),
\end{equation}
and its {\em uncentered $s$-blossom} as the set \begin{equation} \label{altublossom} Blu(S, s):= \cup_{x\in S}\cup\{B(y, s): x\in B(y, s)\}.
\end{equation} When $S= B(x,r)$, we simplify the notation and write $Bl(x,r, s)$, instead of $Bl(B(x,r), s)$, and likewise for uncentered blossoms.
We say that $\mu$ {\em blossoms boundedly} if there exists a $K\ge 1$ such that
for all $r>0 $ and all $x\in X$, $\mu (Blu(x, r, r)) \le K \mu(B(x,r)) < \infty$. \end{definition}
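In a finite metric space the blossoms of (\ref{altblossom}) and (\ref{altublossom}) can be computed by brute force. The sketch below (our illustrative code, with hypothetical names) does so for a window of $\mathbb{Z}$ with the inherited metric; note already here that $Bl(x, r, s)$ can be strictly smaller than $B(x, r + s)$.

```python
# Blossoms in a finite metric space X, here a window of Z with the
# usual distance.  Illustrative helper names, not from the paper.

X = set(range(-5, 6))

def ball(x, r):
    return {y for y in X if abs(x - y) < r}

def bl(x, r, s):
    """Centered blossom Bl(x, r, s): union of B(y, s) over y in B(x, r)."""
    return {z for y in ball(x, r) for z in ball(y, s)}

def blu(x, r, s):
    """Uncentered blossom Blu(x, r, s): union of all balls B(y, s)
    that contain some point of B(x, r)."""
    S = ball(x, r)
    return {z for y in X if ball(y, s) & S for z in ball(y, s)}

# Z lacks approximate midpoints: Bl(0, 1, 1) = {0} is strictly
# smaller than B(0, 2) = {-1, 0, 1}.
print(bl(0, 1, 1))   # {0}
print(ball(0, 2))    # {-1, 0, 1}
```

This already illustrates why some hypothesis on midpoints is needed for blossoms of balls to be balls.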
Blossoms can be defined using closed instead of open balls, in an entirely analogous way. To help understand the relationship between blossoms and balls, we include the following definitions and results.
\begin{definition} A metric space has the {\it approximate midpoint property} if for every $\varepsilon > 0$ and every pair of points $x,y$, there exists a point $z$ such that $d(x,z), d(z,y) < \varepsilon + d(x,y)/2$. \end{definition}
\begin{definition} \label{quasi}A metric space $X$ is {\it quasiconvex} if there exists a constant $C\ge 1$ such that for every
pair of points $x,y$, there exists a curve with $x$ and $y$ as endpoints, such
that its length is bounded above by $C d(x,y)$. If for every $\varepsilon > 0$ we can take $C=1 + \varepsilon$, then we say that $X$ is a {\it length space}.
\end{definition}
It is well known that for a complete metric space, having the approximate midpoint property is equivalent to being a length space.
\begin{example} \label{blossomsballs} The $s$-blossom of an $r$-ball may fail to contain a strictly larger ball, even in quasiconvex spaces.
For instance, let $X \subset \mathbb{R}^2$ be the set $\{0\} \times [0,1] \cup [0,1]\times \{0\}$ with metric defined by restriction of the $\ell_\infty$ norm; then we can take $C = 2$. Now $B((1,0), 1) = (0,1]\times \{0\}$, while for every $r > 1$, $B((1,0), r) = X$, which is not contained in $Blu((1,0), 1, 1/6)$. Furthermore, neither $Blu((1,0), 1, 1/6)$ nor $Bl((1,0), 1, 1/6)$ are balls, i.e., given any $x\in X$ and any $r > 0$, we have that $B(x, r) \ne Blu((1,0), 1, 1/6)$ and $B(x, r) \ne Bl((1,0), 1, 1/6)$.
On the other hand, if a metric space $X$ has the approximate midpoint property, then blossoms and
balls coincide (as we show next) so in this case considering
blossoms gives nothing new. \end{example}
\begin{theorem} \label{equiv} Let $(X, d)$ be a metric space. The following are equivalent:
a) $X$ has the approximate midpoint property.
b) For all $x\in X$, and all $r, s >0$, $Bl(x, r, s) = B(x, r + s).$
c) For all $x\in X$, and all $r >0$, $Bl(x, r, r) = B(x, 2 r).$ \end{theorem}
\begin{proof} Suppose first that $X$ has the approximate midpoint property. Since $Bl(x, r, s) \subset B(x, r + s),$ to prove b) it is enough to show that if $y\in B(x, r + s),$ then $y \in Bl(x, r, s)$, or equivalently, that there is a $z\in X$ such that $d(x, z) < r$ and $d(z, y) < s$. If $d(x, y) < s$ we can take $z = x$, while if $d(x, y) < r$ we can take $z = y$; in either case there is nothing to prove, so assume otherwise. Let $(\hat X, \hat d)$ be the completion of $(X, d)$; then $\hat X$ is a length space, since it has the approximate midpoint property. Let $\Gamma :[0,1] \to \hat X$ be a curve with $\Gamma (0) = x$, $\Gamma (1) = y$, and length $\ell (\Gamma) < r + s$. Then $\Gamma([0, 1]) \subset B(x, r) \cup B(y, s)$, for if there is a $w \in [0,1]$ with $\Gamma (w) \notin B(x, r) \cup B(y, s)$, then $\ell (\Gamma) \ge r + s$. Now let $c \in [0,1]$ be the time of first exit of $\Gamma (t)$ from $B(x,r)$, that is, for all $t < c$,
$\Gamma (t) \in B(x,r)$ and $\Gamma (c) \notin B(x,r)$. Then $\Gamma (c) \in B(y,s)$, so by continuity of $\Gamma$, there is a $\delta \in [0, c)$ such that $\Gamma (\delta) \in B(y,s)$; since $\delta < c$, also $\Gamma (\delta) \in B(x,r)$. Thus, the open set $B(x, r) \cap B(y, s)$ is nonempty in $\hat X$. But $X$ is dense in $\hat X$, so there exists a $z\in X$ such that $d(x, z) < r$ and $d(z, y) < s$, as we wanted to show.
Part c) is a special case of part b). From part c) we obtain a) as follows. Let $x, y \in X$, and let $r >0$ be such that $d(x,y) < 2 r$. By hypothesis, $y\in Bl(x, r, r) = B(x, 2r)$, so there is a $z\in X$ such that $d(x, z) < r$ and $d(z, y) < r$. Thus, $X$ has the approximate midpoint property. \end{proof}
\begin{example} \label{chordal} Let $X$ be the unit sphere (unit circumference) in the plane, with the chordal metric, that is, with the restriction to $X$ of the Euclidean metric in the plane. While this space does not have the approximate midpoint property, blossoms are nevertheless geodesic balls. However, the equality $Bl(x, r, s) = B(x, r + s)$ no longer holds. For instance,
$Bl((1, 0), 1, 1) \ne Bl((1, 0), \sqrt2, \sqrt2 ) = B((1, 0), 2) = X \setminus \{(-1,0)\}$.
\end{example}
\section{Microblossoming and related conditions}
\begin{definition} \label{bmicroblu} Let $0 < t < 1$ and let $K\ge 1$. Then
$\mu$ is said to $t$-microblossom boundedly with constant $K$,
if for all $x \in X$ and all $r >0$, we have \begin{equation} \mu (Blu \left(x, r, t r \right)) \le K \mu B(x,r). \end{equation} \end{definition}
We shall say $\mu$ is a measure that $(t,K)$-microblossoms, instead of using the longer expression.
\begin{example} \label{bmicrobluex}
Microblossoming (even together with doubling) is more general than microdoubling,
in a quantitative sense. Consider $(\mathbb{Z}^d, \ell_\infty, \mu)$, where $\mu$ is
the counting measure. Then $\mu$ is doubling, and ``microdoubling in the large", since for
large radii ($r > d$), $\mu$ can be regarded as a discrete approximation to Lebesgue measure.
However,
$\mu B(0,1) = 1$, and for every $t > 0$, $\mu B(0,1 + t ) \ge 3^d$, no matter how small
$t$ is. Thus, the measure $\mu$ is not $(t,K)$-microdoubling, for any $K < 3^d$, $0 < t \ll 1$. However,
$\mu$ is $1/d$-microblossoming, since for $r > d$, $\mu$ behaves as a microdoubling
measure, and for $r \le d$, $Blu(x, r, r/d) = B(x,r)$.
A less natural but stronger example is furnished by the measure $\mu$ given by
\cite[Theorem 5.9]{A2}. Since $\mu$ satisfies a local comparability condition,
and is defined in a geometrically doubling space, it blossoms boundedly, so it
microblossoms boundedly (at least with the blossoming constant). But $\mu$ is not doubling,
and hence it is not microdoubling.
\end{example}
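The ball counts behind the example above are easy to check numerically. In the sketch below (our code), the number of lattice points in the open $\ell_\infty$ ball $B(0,r)$ of $\mathbb{Z}^d$ is computed coordinate by coordinate.

```python
# Lattice point counts for open l_inf balls of Z^d around the origin.
# Illustrative code; names are ours.
import math
from itertools import product

def linf_ball_points(d, r):
    """Points of Z^d in the open l_inf ball B(0, r) = {y : max |y_i| < r}."""
    k = math.ceil(r) - 1  # the integers j with |j| < r are exactly |j| <= k
    return list(product(range(-k, k + 1), repeat=d))

d = 4
print(len(linf_ball_points(d, 1)))     # 1:  B(0, 1) = {0}
print(len(linf_ball_points(d, 1.01)))  # 81 = 3^4: any radius 1 + t jumps to 3^d
```

This confirms the jump from $\mu B(0,1) = 1$ to $\mu B(0, 1 + t) = 3^d$ for every $t > 0$.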
\begin{example} \label{bmicroblunonblu} While $(t,K_1)$-microdoubling entails
$(2, K_2)$-doubling for some $K_2 \ge K_1$, the analogous statement is not true
for microblossoming. The following example shows that $(1/2,1)$-microblossoming
does not entail local comparability.
Let $X = \{0, 1, 3\}$ with the inherited metric from $\mathbb{R}$, and let $\mu = \delta_3$.
Then $B(0,3) \cap B(3, 3) = \{1\}$, but $\mu B(0,3) = 0$ while $\mu B(3,3) = 1$, so
local comparability fails. Since bounded blossoming
implies local comparability, all we have to do is to check that $\mu$ is $(1/2,1)$-microblossoming.
For $t\le 3$, $B(0,t) \subset Blu(0, t, t/2) \subset \{0,1\}$, so $\mu B(0,t) = \mu Blu(0, t, t/2)= 0$,
and for $t > 3$, $B(0,t) = Blu(0, t, t/2) = X$. Likewise, for $t\le 2$, $B(1,t) = Blu(1, t, t/2) \subset \{0,1\}$, so $\mu B(1,t) = \mu Blu(1, t, t/2)= 0$,
and for $t > 2$, $B(1,t) = Blu(1, t, t/2) = X$. Finally, for every $t > 0$ we have $\mu B(3,t) \ge \mu \{3\} = 1 = \mu X \ge \mu Blu(3, t, t/2)$, so the microblossoming inequality also holds at the center $3$.
\end{example}
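The computations in the preceding example can be verified by brute force; the following snippet (our code) checks both the failure of local comparability and the $(1/2,1)$-microblossoming inequality on a grid of radii.

```python
# Brute-force verification of the three-point example: X = {0, 1, 3}
# with mu = delta_3 is (1/2, 1)-microblossoming although local
# comparability fails.  Illustrative code of ours.

X = {0, 1, 3}                                 # metric inherited from R
mu = lambda A: sum(1.0 for p in A if p == 3)  # mu = delta_3

def ball(x, r):
    return {y for y in X if abs(x - y) < r}

def blu(x, r, s):
    """Uncentered blossom Blu(x, r, s)."""
    S = ball(x, r)
    return {z for y in X if ball(y, s) & S for z in ball(y, s)}

# Local comparability fails: 1 lies in both B(0, 3) and B(3, 3), yet
assert mu(ball(0, 3)) == 0.0 and mu(ball(3, 3)) == 1.0

# (1/2, 1)-microblossoming: mu Blu(x, r, r/2) <= mu B(x, r) for all x, r
for x in X:
    for r in [k / 4 for k in range(1, 41)]:   # radii 0.25, 0.5, ..., 10
        assert mu(blu(x, r, r / 2)) <= mu(ball(x, r))
print("example verified")
```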
\begin{definition} Given a metric measure space $(X, d, \mu)$, and denoting the support of $\mu$ by $supp(\mu)$, the {\em relative increment function} of $\mu$, $ri_{\mu}: supp(\mu)\times (0,\infty)\times [1,\infty) \to [1,\infty)$, is defined as \begin{equation} \label{ri} ri_{\mu}(x, r, t) := \frac{\mu B(x, tr)}{\mu B(x, r)}, \end{equation} and the {\em maximal relative increment function}, as \begin{equation} \label{mri} mri_{\mu}(r, t) := \sup_{x \in supp(\mu)}\frac{\mu B(x, tr)}{\mu B(x, r)}. \end{equation} When $\mu$ is understood we will simply write $ri$ and $mri$. \end{definition}
This notation allows one to unify different conditions that have been considered regarding the boundedness of maximal operators. For instance, on $supp(\mu)$ the doubling condition simply means that there is a constant $C\ge 1$ such that for all $r > 0$, $mri_{\mu}(r, 2) \le C$, and the $d^{-1}$-microdoubling condition, that for all $r > 0$, $mri_{\mu}(r, d^{-1}) \le C$. Note that by $\tau$-smoothness, the complement of the support of $\mu$ has $\mu$-measure zero, so the relative increment function is defined for almost every $x$.
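For example, for Lebesgue measure $\lambda^d$ with balls defined by any norm, scaling gives
$$
ri_{\lambda^d}(x, r, t) = \frac{\lambda^d B(x, tr)}{\lambda^d B(x, r)} = t^d, \qquad \mbox{so} \qquad mri_{\lambda^d}(r, 2) = 2^d \quad \mbox{and} \quad mri_{\lambda^d}(r, d) = d^d,
$$
the two values of $K_2$ that appear in Example \ref{macrodoub} below.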
\begin{example} \label{macrodoub}
The interest of considering values of $t >2$ in the preceding
definition comes from the fact that, under the additional assumption of microblossoming, it
will allow a much more irregular
growth of balls than microdoubling or plain doubling, without a comparable worsening
of the estimates for the weak type (1,1) bounds.
To fix ideas, consider the right hand side $ C(\mu) \ K_1 K \left(2 + \frac{\log K_2}{\log K}\right)$ of formula (\ref{sum}) below. This bound is related to the centered maximal operator when the supremum is restricted to radii $R$ between $r$ and $T r$, $T > 1$. The constant $K_2$ depends on $T$, as it must satisfy $ mri_{\mu}(r, T) \le K_2$.
For Lebesgue measure on $\mathbb{R}^d$ with the $\ell_\infty$-norm, $ C(\lambda^d) = 1$. If we set $T=2$, then we can take $K_2 = 2^d$, while $K_2 = d^d$ for $T=d$, a choice which yields bounds of order $O( d \log d)$.
A $1/d$-microdoubling constant is $K_1 = e$ ($\mathbb{R}^d$ has the approximate midpoint property, and in fact it is a geodesic space, so microdoubling is the same as microblossoming in this case) and $K := \max\{K_1, e\} = e$.
Returning to Example \ref{bmicrobluex}, by a rescaling argument it is clear that the situation
for
$(\mathbb{Z}^d, \ell_\infty, \mu)$ cannot be much worse than for
$(\mathbb{R}^d, \ell_\infty, \lambda^d)$, and in fact it is easy to see that
the same argument of Stein and Str\"omberg (which will be presented in greater
generality below) yields the $O( d \log d)$ bounds. Now suppose we modify
the measure so that at one single point it is much smaller. Clearly, this will have
little impact on the weak type (1,1) bounds, since for $d \gg 1$, $x \in \mathbb{Z}^d,$ and $r>1$, the measure of $B(x,r)$
will be changed by little or not at all, while for $ r \le 1$, balls with distinct centers do not
intersect. For definiteness, set $\nu = \mu$ on $\mathbb{Z}^d \setminus \{0\}$,
and $\nu \{0\} = d ^{-d}$. Then the doubling constant, and the $(t,K)$-microdoubling constant for any $t > 0$, are at least $d ^{d} (3 ^{d} -1)$, much larger than the corresponding constants for $\mu$. However, the local comparability constant is still very close to 1, since intersecting balls of the same radius must contain at least $3^d$ points each, and a $1/d$-microblossoming constant can be taken to be very close to $e$. Setting $T = d$, we get $K_2 \le d^d (2d + 1)^d$, so $\log K_2$ in this case is comparable to the constant obtained when $T = 2$.
\end{example}
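The first claim about $\nu$ can be checked numerically in low dimensions. The snippet below (our code) computes the ratio $\nu B(0,2)/\nu B(0,1)$ for $d = 3$, recovering the value $d^d(3^d - 1) + 1$.

```python
# Doubling ratio at the origin for the modified counting measure nu
# on Z^3 with l_inf balls.  Illustrative code of ours.
import math
from itertools import product

d = 3

def nu(p):
    """nu = counting measure on Z^d, except nu{0} = d^(-d)."""
    return d ** (-d) if p == (0,) * d else 1.0

def ball_measure(center, r):
    """nu-measure of the open l_inf ball B(center, r) in Z^d."""
    k = math.ceil(r) - 1  # integers j with |j| < r satisfy |j| <= k
    pts = product(*[range(c - k, c + k + 1) for c in center])
    return sum(nu(p) for p in pts)

origin = (0,) * d
ratio = ball_measure(origin, 2) / ball_measure(origin, 1)
print(ratio)  # d^d (3^d - 1) + 1 = 703 for d = 3
```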
\begin{remark} One might define $(T, K)$-macroblossoming,
with $T > 1$, by analogy with Definition \ref{bmicroblu}.
However, since $B(x, T r) \subset Blu(x, r, T r)$, assuming
directly that $mri_{\mu}(r, T) \le K$ is not stronger than $(T, K)$-macroblossoming. \end{remark}
\section{The Stein-Str\"omberg covering theorem}
Next, we present the Stein-Str\"omberg argument using the terminology of blossoms. Note that the next theorem does not require $X$ to be geometrically doubling.
Given an ordered sequence of sets $A_1, A_2, \dots$, we denote by $D_1, D_2, \dots$ its sequence of disjointifications, that is $D_1 = A_1$, and $D_{n + 1} = A_{n + 1}\setminus \cup_1^n A_i$. We shall avoid reorderings and relabelings of collections of balls, as this may lead to confusion regarding the meaning of $D_j$. The unfortunate downside of this choice is an inflation of subindices.
\begin{theorem}\label{StSt1} {\bf Stein-Str\"omberg covering theorem for sparse radii.} Let $(X, d, \mu)$ be a metric measure space, where $\mu$ satisfies a $C(\mu)$ local comparability condition, and let $R:= \{r_n: n\in \mathbb{Z}\}$ be a $T$-lacunary sequence of radii, i.e., $r_n >0$ and $r_{n+1}/r_n \ge T > 1$. Suppose there exists a $t > 0$ such that $T t \ge 1$ and $\mu$ $t$-microblossoms boundedly
with constant $K$. Let $\{B(x_i, s_i): s_i \in R, 1 \le i \le M\}$ be a finite collection of balls with positive measure, ordered by non-increasing radii. Set $U:= \cup_{i = 1}^M B(x_i, t s_i)$. Then there exists a subcollection $\{B(x_{i_1}, s_{i_1}), \dots, B(x_{i_N}, s_{i_N})\}$,
such that, denoting by $D_{i_j}$ the disjointifications of the reduced balls $B(x_{i_j}, t s_{i_j})$, \begin{equation}\label{set} \mu U \le (K + 1) \mu \cup_{j=1}^N B(x_{i_j}, t s_{i_j}), \end{equation}
and
\begin{equation}\label{bound} \sum_{j=1}^N \frac{\mu D_{i_j}}{\mu B(x_{i_j}, s_{i_j})}\mathbf{1}_{B(x_{i_j}, s_{i_j})} \le C(\mu) \ K + 1. \end{equation}
\end{theorem}
\begin{proof} We use the Stein-Str\"omberg selection algorithm.
Let $B(x_{i_1}, s_{i_1}) = B(x_{1}, s_{1})$
and suppose that the balls $B(x_{i_1}, s_{i_1}), \dots ,B(x_{i_k}, s_{i_k})$ have already been selected. If
$$ \sum_{j=1}^k \frac{\mu D_{i_j}}{\mu B(x_{i_j}, s_{i_j})}\mathbf{1}_{Bl(x_{i_j}, s_{i_j}, t s_{i_j})} (x_{i_{k} + 1}) \le 1, $$ accept $B(x_{i_{k + 1}}, s_{i_{k + 1}}) := B(x_{i_{k} + 1}, s_{i_{k} + 1})$ as the next ball in the subcollection. Otherwise, reject it. Repeat till we run out of balls. Let $\mathcal{C}$ be the collection of all rejected balls. Then $\mu$ a.e., $$ \mathbf{1}_{\cup \mathcal{C}} < \sum_{j=1}^N \frac{\mu D_{i_j}}{\mu B(x_{i_j}, s_{i_j})}\mathbf{1}_{Blu(x_{i_j}, s_{i_j}, t s_{i_j})}. $$ Integrating both sides and using microblossoming we conclude that $\mu \cup \mathcal{C} \le K \sum_{j=1}^N \mu D_{i_j} = K \mu \cup_{j=1}^N B(x_{i_j}, t s_{i_j})$, whence $\mu U \le (K + 1) \ \mu \cup_{j=1}^N B(x_{i_j}, t s_{i_j})$.
Next we show that $$ \sum_{j=1}^N \frac{\mu D_{i_j}}{\mu B(x_{i_j}, s_{i_j})}\mathbf{1}_{B(x_{i_j}, s_{i_j})} \le C(\mu) \ K + 1. $$ Suppose $\sum_{j=1}^N \frac{\mu D_{i_j}}{\mu B(x_{i_j}, s_{i_j})}\mathbf{1}_{B(x_{i_j}, s_{i_j})}(z) > 0$. Let $\{B(x_{i_{k_1}}, s_{i_{k_1}}),$ $ \dots, B(x_{i_{k_n}}, s_{i_{k_n}})\}$ be the collection of all balls containing $z$ (keeping the original ordering by decreasing radii). Then each $B(x_{i_{k_j}}, s_{i_{k_j}})$ has radius either equal to or (substantially) larger than $s_{i_{k_n}}$. We separate the contributions of these balls into two sums. Suppose $B(x_{i_{k_1}}, s_{i_{k_1}}), \dots , B(x_{i_{k_m}}, s_{i_{k_m}})$ all have radii larger than $s_{i_{k_n}}$, while $B(x_{i_{k_{m} + 1}}, s_{i_{k_{m} + 1}}), \dots , B(x_{i_{k_n}}, s_{i_{k_n}})$ have radii equal to $s_{i_{k_{n}}}$. Now for $1 \le j \le m$, by $T$ lacunarity and the fact that $T t \ge 1$, we have $s_{i_{k_n}} \le t s_{i_{k_j}}$, so $z\in B(x_{i_{k_j}}, s_{i_{k_j}})$ implies that $x_{i_{k_n}}\in Bl(x_{i_{k_j}}, s_{i_{k_j}}, t s_{i_{k_j}})$, whence $$ \sum_{j=1}^m \frac{\mu D_{i_{k_j}}}{\mu B(x_{i_{k_j}}, s_{i_{k_j}})} \mathbf{1}_{Bl(x_{i_{k_j}}, s_{i_{k_j}}, t s_{i_{k_j}})} (x_{i_{k_n}}) \le 1, $$ and thus $$ \sum_{j=1}^m \frac{\mu D_{i_{k_j}}}{\mu B(x_{i_{k_j}}, s_{i_{k_j}})} \mathbf{1}_{B(x_{i_{k_j}}, s_{i_{k_j}})} (z) \le 1. $$ Next, note that the sets $D_{i_{k_{m} + 1}}, \dots , D_{i_{k_{n}}}$ are all disjoint and contained in $Bl(z, s_{i_{k_{n}}}, t s_{i_{k_{n}}})$. By microblossoming and local comparability, for $j = m + 1, \dots, n$ we have $$ \mu \cup_{j= m+1}^n D_{i_{k_{j}}}\le \mu Bl(z, s_{i_{k_{n}}}, t s_{i_{k_{n}}}) \le K \mu B(z, s_{i_{k_{n}}})\le K \ C(\mu) \ \mu B(x_{i_{k_{j}}}, s_{i_{k_{n}}}). $$ It follows that $$ \sum_{j= m +1}^n \frac{\mu D_{i_{k_{j}}}}{\mu B(x_{i_{k_{j}}}, s_{i_{k_{n}}})}\mathbf{1}_{B(x_{i_{k_{j}}},
s_{i_{k_{n}}})} (z) \le \frac{ C(\mu) \ \mu Bl(z, s_{i_{k_{n}}}, t s_{i_{k_{n}}})}{\mu B(z, s_{i_{k_{n}}})} \le C(\mu) \ K.
$$
\end{proof}
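The selection algorithm itself is short and can be exercised on a toy example. The following sketch (illustrative code of ours, not the general proof) runs it on $(\mathbb{Z}, |\cdot|)$ with counting measure, $T = 4$-lacunary radii, and $t = 1/4$, and checks the basic structural properties of the output: the sets $D_{i_j}$ are pairwise disjoint and their union is $\cup_j B(x_{i_j}, t s_{i_j})$.

```python
# Toy run of the Stein-Stromberg selection algorithm on (Z, |.|) with
# counting measure.  T = 4-lacunary radii and t = 1/4, so T t >= 1.
# All names are ours.
import math

def ball(x, r):
    k = math.ceil(r) - 1  # integers y with |x - y| < r
    return frozenset(range(x - k, x + k + 1))

def bl(x, r, s):
    """Centered blossom Bl(x, r, s)."""
    return frozenset().union(*(ball(y, s) for y in ball(x, r)))

def select(balls, t):
    """balls: list of (center, radius), ordered by non-increasing radii.
    Returns the selected balls and the disjointified reduced balls D_j."""
    selected, D, covered = [], [], set()
    for (x, s) in balls:
        weight = sum(len(Dj) / len(ball(xj, sj))
                     for (xj, sj), Dj in zip(selected, D)
                     if x in bl(xj, sj, t * sj))
        if weight <= 1:                            # acceptance criterion
            reduced = ball(x, t * s)
            D.append(frozenset(reduced - covered))  # disjointification
            covered |= reduced
            selected.append((x, s))
    return selected, D

balls = [(0, 16.0), (3, 16.0), (40, 4.0), (41, 4.0), (10, 1.0)]
sel, D = select(balls, 0.25)
assert all(not (D[i] & D[j]) for i in range(len(D)) for j in range(i))
assert set().union(*D) == set().union(*(ball(x, 0.25 * s) for (x, s) in sel))
print(len(sel), "balls selected")
```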
Denote by $M_R$ the centered Hardy-Littlewood maximal operator, with the additional restriction
that the supremum is taken over radii belonging to the subset $R\subset (0,\infty)$ (cf. \cite[p. 735]{NaTa}). We mention that under the hypotheses of the next corollary, it is not known whether
the centered Hardy-Littlewood maximal operator $M$ (with no restriction on the radii) is of weak type (1,1).
\begin{corollary}\label{MR} Let $(X, d, \mu)$ be a metric measure space, where $\mu$ satisfies a $C(\mu)$ local comparability condition, and let $R:= \{r_n: n\in \mathbb{Z}\}$ be a $T$-lacunary sequence of radii. Suppose there exists a $t > 0$ with $T t \ge 1$ such that
$\mu$ $(t, K)$-microblossoms boundedly. Then $\|M_R\|_{L^1\to L^{1,\infty}} \le (K + 1) \ (C(\mu) \ K + 1)$.
\end{corollary}
The proof is standard. We present it to keep track of the constants.
\begin{proof} Fix $\varepsilon > 0$, let $a > 0$, and let $f\in L^1(\mu)$.
For each $x\in \{M_R f > a\}$ select $B(x,r)$ with $r\in R$, such that $a \mu B(x,r) < \int_{B(x,r)} |f|$.
Then the collection of ``small" balls $\{ B(x, tr) : x\in \{M_R f > a\}\}$ is a cover of $\{M_R f > a\}$.
By the $\tau$-smoothness of $\mu$, there is a finite subcollection
$\{B(x_i, t s_i): s_i \in R, 1 \le i \le M\}$ of balls with positive measure, ordered by non-increasing radii, such that $$ (1 - \varepsilon) \mu \{M_R f > a\} \le (1 - \varepsilon) \mu \cup \{ B(x, tr) : x\in \{M_R f > a\}\} < \mu \cup_{i= 1}^M B(x_i, t s_i). $$
Next, let $\{B(x_{i_1}, s_{i_1}), \dots, B(x_{i_N}, s_{i_N})\}$ be the subcollection given by the Stein-Str\"omberg covering theorem for sparse radii. Then we have $$ \mu \cup_{i= 1}^M B(x_i, t s_i) \le (K + 1) \mu \cup_{j=1}^N B(x_{i_j}, t s_{i_j}) = (K + 1) \sum_{j=1}^N \mu D_{i_j} $$ $$ = (K + 1) \sum_{j=1}^N \frac{\mu D_{i_j} }{\mu B(x_{i_j}, s_{i_j})}\int \mathbf{1}_{B(x_{i_j}, s_{i_j})} \le
(K + 1) \frac{1}{a }\int |f| \sum_{j=1}^N \frac{\mu D_{i_j} }{\mu B(x_{i_j}, s_{i_j})}\mathbf{1}_{B(x_{i_j}, s_{i_j})} $$
$$\le (K + 1) \ (C(\mu) \ K + 1) \frac{1}{a }\int |f| . $$
\end{proof}
In the specific case of $d$-dimensional Lebesgue measure $\lambda^d$, $C(\lambda^d) = 1$. Choosing
$t = 1/d$
and $T = d$,
$K$ above can be taken to be $e^2$, so the constant obtained
is $(e^2+1)^2$, which is worse than the constant $(e^2 + 1) (e + 1)$ yielded by the Stein-Str\"omberg argument. This discrepancy is due to the fact that our definition of microblossoming uses the uncentered blossom instead of the blossom, so from the assumption $\mu (Blu \left(x, r, t r \right)) \le K \mu B(x,r)$ we get the same bound $\mu (Bl \left(x, r, t r \right)) \le K \mu B(x,r)$ for the potentially smaller centered blossom. Of course, we could strengthen the definition, using blossoms, to obtain the same constant as in the Stein-Str\"omberg proof, but in the case of Lebesgue measure we prefer to consider it separately, using different values of $(t, K)$ to lower the known bounds. We do this in the next section.
While Corollary \ref{MR} follows from the
proof of the Stein-Str\"omberg covering theorem, it was not stated there but in \cite[Lemma 4]{MeSo} for Lebesgue measure,
and in the microdoubling case, in \cite[Corollary 1.2]{NaTa}. A source of interest for this result comes from the fact that under $(t,K)$-microblossoming, the maximal operator defined by a $(1 + t)$-lacunary set of radii $R$ is controlled by the sum of $N$ maximal operators with lacunarity $1/t$, where $N$ is the least integer such that $(1 + t)^N \ge 1/t$. Thus, the bound
$\|M_R\|_{L^1\to L^{1,\infty}} \le N (K + 1) \ (C(\mu) \ K + 1)$ follows. Under the additional assumption of $(t, K^{1/2})$-microdoubling, the maximal operator defined by taking suprema of radii in $[a, (1 + t) a)$ is controlled by $ K^{1/2}$ times the averaging operator of radius $(1 + t) a$. Putting these estimates together, and using the better bound $\mu Bl(x, r, tr)\le K^{1/2} \mu B(x,r)$, the following result due to Naor and Tao (cf. \cite[Corollary 1.2]{NaTa}) is obtained. Of course, in this case $\mu$ is doubling and $X$, geometrically doubling.
\begin{corollary} Let $(X, d, \mu)$ be a metric measure space, where $\mu$ satisfies a $C(\mu)$ local comparability condition and is $(t, K^{1/2})$-microdoubling. If $N$ is the least integer such that $(1 + t)^N \ge 1/t$, then
$$\|M\|_{L^1\to L^{1,\infty}} \le N \ K^{1/2} \ (K + 1) \ (C(\mu) \ K^{1/2} + 1).$$
\end{corollary}
This shows that the Stein-Str\"omberg covering theorem for sparse radii in metric spaces suffices to prove the Naor-Tao bounds, but no greater generality is achieved in either the spaces or the measures, since microdoubling is used in the last step. A second approach, which yields a slightly more general version of the result and gives
better constants, consists
in going back to the original Stein-Str\"omberg argument. Recall that when defining $(t,K_1)$-microblossoming, we set $0 < t < 1$ and $K_1\ge 1$. In the proof of the next result $K:= \max\{K_1, e\}$
is used to determine the size of the steps. For convenience we take $K \ge e$, but $e$ is just one possible choice.
Note that the condition on $ mri(r, T)$ below entails that $\mu$ is doubling on its support, and hence $supp(\mu)$ is geometrically doubling.
\begin{theorem}\label{StSt2} {\bf Stein-Str\"omberg covering theorem for bounded radii.} Let $(X, d, \mu)$ be a metric measure space such that $\mu$ satisfies a $C(\mu)$ local comparability condition, and
is $(t,K_1)$-microblossoming. Set $K = \max\{K_1 , e\}$. Let $ r > 0$, and suppose there exists a $T > 1$ such that $K_2:= mri(r,T) <\infty$. Let $\{B(x_i, s_i): r \le s_i < T r, 1 \le i \le M\}$ be a finite collection of balls with positive measure, given in any order, and let $D_1 = B(x_1, t s_1), \dots, D_{M} = B(x_{M}, t s_{M})\setminus \cup_1^{M-1} B(x_{i}, t s_{i})$ be the disjointifications of the $t$-reduced balls. Then
\begin{equation}\label{sum} \sum_{i=1}^M\frac{\mu D_i}{\mu B(x_{i}, s_{i})}\mathbf{1}_{B(x_{i}, s_{i})} \le C(\mu) \ K_1 K \left(2 + \frac{\log K_2}{\log K}\right).
\end{equation}
\end{theorem}
Since the big $d\log d$ part in the estimates for the maximal operator (in $\mathbb{R}^d$ with Lebesgue measure) comes
from this case, which does not require any particular ordering nor any choice of balls, it is natural
to enquire whether some additional selection process can lead to an improvement
in the bounds. In general metric spaces this cannot be done, by \cite[Theorem 1.4]{NaTa}, but it might be possible in $\mathbb{R}^d$. However, I have not been able to find such a new selection argument.
In the statement above, $T$ is not assumed to be close to 1, and in fact it could be much larger than 2 (recall Example \ref{macrodoub}).
From the viewpoint of the proof, the difference between $T \gg 2$ and the assumption of $t$-microdoubling lies in the fact that the size of the steps will vary depending on the growth of balls, rather than having increments given by the constant factor $1 + t$ at every step. But the total number of steps will be determined by $K$ and $K_2$, not by whether the factors are all equal to $1 + t$ or not.
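Before turning to the proof, it may help to instantiate (\ref{sum}) in the model case of Lebesgue measure with $t = 1/d$ and $T = d$: as discussed above, one may take $C(\lambda^d) = 1$, $K_1 = K = e^2$ and $K_2 = d^d$, so the right hand side of (\ref{sum}) becomes
$$
C(\mu) \ K_1 K \left(2 + \frac{\log K_2}{\log K}\right) = e^4 \left(2 + \frac{d \log d}{2}\right) = O(d \log d).
$$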
\begin{proof} Suppose
$$\sum_{i=1}^M\frac{\mu D_i}{\mu B(x_{i}, s_{i})}\mathbf{1}_{B(x_{i}, s_{i})} (y) > 0.
$$
Let $s = \min \{s_i: 1 \le i \le M \mbox{ and } y\in B(x_{i}, s_{i})\}$. Then
$r \le s < Tr$. Select
$$h_1 = \sup \{h > 0 : \mu B(y , (1 + h) s)
\le K \mu B^{cl} (y ,s) \mbox{ \ and \ } (1 + h) s \le T r\}.
$$
This is always possible since $ \lim_{h\downarrow 0}\mu B(y , (1 + h) s)
= \mu B^{cl} (y ,s) $. Now either $(1 + h_1) s = T r$, in which case the process finishes in one step, and then it could happen
that
$\mu B^{cl}(y , (1 + h_1) s)
< K \mu B^{cl} (y ,s)$, or $(1 + h_1) s < T r$, in which case $\mu B(y , (1 + h_1) s)
\le K \mu B^{cl} (y ,s) \le \mu B^{cl}(y , (1 + h_1) s)$ (the last inequality must hold, since otherwise we would be able to select a larger
value for $h_1$).
If $h_1, \dots, h_m$ have been chosen, let $$h_{m + 1}
:=
\sup \{h > 0 : \mu B(y , s (1 + h)\Pi_{i=1}^m (1 + h_i)) \le K \mu B^{cl} (y , s \Pi_{i=1}^m (1 + h_i) )
\mbox{ \ and \ }
$$
$$ s (1 + h)\Pi_{i=1}^m (1 + h_i) \le T r\}.$$
Since $\mu B (y , Tr) < \infty$, the process stops after a finite number of steps, so
there is an $N \ge 0$ (assigning value 1 to the empty product) such that $ s \Pi_{i=1}^{N + 1} (1 + h_i) = T r$ and
$$
\mu B^{cl} (y , s \Pi_{i=1}^N (1 + h_i) ) \le \mu B (y , Tr) \le K \mu B^{cl} (y , s \Pi_{i=1}^N (1 + h_i) ).
$$
To estimate $N$, note that since $r \le s$,
$$
\mu B (y , T r) \le K_2 \mu B (y , s) \le \frac{K_2}{K} \mu B^{cl} (y ,(1 + h_1) s)
$$
$$
\le \dots
\le \frac{K_2}{K^N} \mu B^{cl} (y , s \Pi_{i=1}^N (1 + h_i) )
\le \frac{K_2}{K^N} \mu B (y , T r).
$$
Hence $K^N \le K_2$ and thus $N\le \log K_2/ \log K$.
The remaining part of the argument is a variant of what was done in Stein-Str\"omberg for sparse radii, when considering the contribution of balls with the same radius as the smallest ball. Here we arrange the balls containing $y$ into $N + 2$ ``scales" (instead of just one) depending on whether their radii $R$ are equal to $s$, or $s \Pi_{i=1}^m (1 + h_i) < R \le s \Pi_{i=1}^{m +1} (1 + h_i) $, or $ s \Pi_{i=1}^N (1 + h_i) < R \le T r$.
For the first scale, consider all balls $B(x_{i_{1,1}}, s), \dots, B(x_{i_{1,k_1}}, s)$ containing $y$. Since for $1\le j \le k_1$, $x_{i_{1,j}} \in B(y, s)$, it follows that the disjoint sets $D_{i_{1,j}}$ are all contained in
$Bl(y, s, t s).$ By microblossoming and local comparability we have, for $j = 1, \dots, k_1$, $$ \mu \cup_{j= 1}^{k_1} D_{i_{1,j}}\le \mu Bl(y, s, t s) \le K_1 \mu B(y, s)\le K_1 \ C(\mu) \ \mu B(x_{i_{1,j}}, s), $$ so $$ \sum_{j= 1}^{k_1} \frac{\mu D_{i_{1,j}}}{\mu B(x_{i_{1,j}}, s_{i_{1,j}})}\mathbf{1}_{B(x_{i_{1,j}}, s_{i_{1,j}})} (y) \le \frac{ C(\mu) \ \mu Bl(y, s, t s)}{\mu B(y, s)} \le C(\mu) \ K_1.
$$
The contributions of all the other scales are estimated in the same way as the second one, which is presented next. Again, consider all balls $B(x_{i_{2,1}}, s_{i_{2,1}}), \dots, B(x_{i_{2,k_2}}, s_{i_{2,k_2}})$ containing $y$ and with radii $s_{i_{2,j}}$ in the interval
$(s, (1 + h_1) s]$. Then all the sets $D_{i_{2,j}}$ are contained in
$$
Bl(y, (1 + h_1) s, t (1 + h_1) s).
$$
Using microblossoming, the choice of $h_1$, and the local comparability of $\mu$, for $j = 1, \dots, k_2$ we have
\begin{equation}\label{minus} \mu \cup_{j= 1}^{k_2} D_{i_{2,j}} \le \mu Bl(y, (1 + h_1) s, t (1 + h_1) s)
\end{equation} $$ \le K_1 \mu B (y, (1 + h_1) s) \le K_1 K \mu B^{cl}(y, s) \le K_1 K \ C(\mu) \ \mu B(x_{i_{2,j}}, s_{i_{2,j}}), $$ so $$ \sum_{j= 1}^{k_2} \frac{\mu D_{i_{2,j}}}{\mu B(x_{i_{2,j}}, s_{i_{2,j}})}\mathbf{1}_{B(x_{i_{2,j}}, s_{i_{2,j}})} (y) \le \frac{ C(\mu) \ \mu Bl(y, (1 + h_1) s, t (1 + h_1) s)}{\mu B^{cl}(y, s)} \le C(\mu) \ K_1 K.
$$
Adding up over the $N + 2$ scales we get (\ref{sum}).
\end{proof}
Next we put together the two parts of the Stein-Str\"omberg covering theorem. This helps to see why the original argument gives better bounds than domination by several sparse operators.
\begin{theorem}\label{StSt3} {\bf Stein-Str\"omberg covering theorem.} Let $(X, d, \mu)$ be a metric measure space, where $\mu$ satisfies a $C(\mu)$ local comparability condition, and
is $(t,K_1)$-microblossoming. Set $K = \max\{K_1 , e\}$, and suppose $K_2:= \sup_{r > 0} mri(r,1/t) <\infty$.
Let $\{B(x_i, s_i): s_i \in R, 1 \le i \le M\}$ be a finite collection of balls with positive measure, ordered by non-increasing radii, and let $U:= \cup_{i = 1}^M B(x_i, t s_i)$. Then there exists a subcollection $\{B(x_{i_1}, s_{i_1}), \dots, B(x_{i_N}, s_{i_N})\}, $
such that,
setting $D_{i_1} = B(x_{i_1}, t s_{i_1}), \dots, D_{i_N} = B(x_{i_N}, t s_{i_N})
\setminus \cup_{j=1}^{N-1} B(x_{i_j}, t s_{i_j})$, we have \begin{equation}\label{set2} \mu U \le (K_1 + 1) \mu \cup_{j=1}^N B(x_{i_j}, t s_{i_j}), \end{equation} and \begin{equation} \label{bound2} \sum_{j=1}^N \frac{\mu D_{i_j}}{\mu B(x_{i_j}, s_{i_j})}\mathbf{1}_{B(x_{i_j}, s_{i_j})} \le 1 + C(\mu) \ K_1 K \left(2 + \frac{\log K_2}{\log K}\right). \end{equation}
\end{theorem}
\begin{proof} The selection process is the same as in the proof of Theorem \ref{StSt1},
yielding the desired subcollection, with (\ref{set2}) being the same as (\ref{set}).
As for the right hand side of (\ref{bound2}), the 1 comes from the contribution of balls
with very large radii, as in (\ref{bound}), while
$C(\mu) \ K_1 K \left(2 + \frac{\log K_2}{\log K}\right)$
is the bound from (\ref{sum}).
\end{proof}
The same argument given for Corollary \ref{MR} now yields
\begin{corollary}\label{M} Under the assumptions and with the notation of the preceding result, the centered maximal function satisfies the weak type (1,1) bound
$$\|M\|_{L^1-L^{1,\infty}} \le (K_1 + 1) \left( 1 + C(\mu) \ K_1 K \left(2 + \frac{\log K_2}{\log K}\right)\right).$$
\end{corollary}
For Lebesgue measure on $\mathbb{R}^d$, with balls defined by an arbitrary norm and $t = d^{-1}$, this is worse (by a factor of $e^2$) than the bound $(1 + e^2) (1 + o(1)) e^2 d \log d$ obtained by Stein and Str\"omberg.
Regarding lower bounds, currently it is known that for the centered maximal function defined using $\ell^\infty$-balls (cubes)
the numbers $\|M\|_{L^1-L^{1,\infty}}$ diverge to infinity (cf. \cite{A})
at a rate at least of order $d^{1/4}$ (cf. \cite{IaSt}). No information is available for other balls. In particular, the question (asked by Stein and Str\"omberg) as to whether or not the constants $\|M\|_{L^1-L^{1,\infty}}$ diverge to infinity with $d$, for Euclidean balls, remains open.
\section{Sharpening the bounds for Lebesgue measure}
Here we revisit the original case studied by Stein and Str\"omberg, Lebesgue measure $\lambda^d$ on $\mathbb{R}^d$, with metric (and hence, with maximal function) defined by an arbitrary norm.
Since $\lambda^d$ is $(t, (1 + t)^d)$-microdoubling for every $t > 0$, values of $t\ne 1/d$ can be used to obtain improvements on the size of the constants.
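The role of the parameter $t$ can be checked numerically: for Lebesgue measure the microdoubling constant $(1+t)^d$ with $t=1/d^2$ stays below $e^{1/d}$, while the classical choice $t=1/d$ keeps it below $e$. A quick sketch (Python; the dimensions are arbitrary illustration values):

```python
import math

def microdoubling_constant(d: int, t: float) -> float:
    # Lebesgue measure on R^d is (t, (1+t)^d)-microdoubling:
    # dilating a norm ball by 1 + t multiplies its volume by (1 + t)^d.
    return (1.0 + t) ** d

for d in [2, 5, 10, 50, 200]:
    # The choice t = 1/d^2 keeps the constant below e^{1/d} ...
    assert microdoubling_constant(d, 1.0 / d**2) < math.exp(1.0 / d)
    # ... while the classical choice t = 1/d keeps it below e.
    assert microdoubling_constant(d, 1.0 / d) < math.e
```

Both inequalities are instances of $(1+x)^{1/x} < e$ for $x>0$, so they hold in every dimension.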
\begin{theorem}\label{SSLebesgue} Consider $\mathbb{R}^d$ with Lebesgue measure $\lambda^d$ and balls defined by an arbitrary norm. Let $R:= \{r_n: n\in \mathbb{Z}\}$ be a $d$-lacunary sequence of radii, and let $M_R$ be
the corresponding (sparsified) Hardy-Littlewood maximal operator. Then $\|M_R\|_{L^1-L^{1,\infty}} \le (e^{1/d} + 1) (1 + 2 e^{1/d})$. Furthermore, if the maximal function is defined using the $\ell_\infty$-norm, so balls are cubes with sides perpendicular to the coordinate axes, then $\|M_R\|_{L^1-L^{1,\infty}} \le 6.$ \end{theorem}
As we noted above, using the original argument from
\cite{StSt} one obtains $\|M_R\|_{L^1-L^{1,\infty}} \le (e^2 + 1) (e + 1)$.
\begin{proof} Suppose, for simplicity of writing, that $r_{n + 1} = d r_n$ (the case $r_{n + 1} \ge d r_n$ is proven in the same way). We apply the
Stein-Str\"omberg selection process with $t =1/d^2$ and microdoubling constant $K = (1 + 1/d^2)^{d}
< e^{1/d}$. As before, given $0 \le f \in L^1$ and $a >0$, we cover the level set $\{M_R f > a\}$ almost completely by a finite collection of ``small'' balls
$\{B(x_i, t s_i): s_i \in R, 1 \le i \le M\}$ ordered by non-increasing radii,
and such that $a \mu B(x_i, s_i) < \int_{B(x_i, s_i)} f$.
From this collection we extract a subcollection $\{B(x_{i_1}, t s_{i_1}), \dots, B(x_{i_N}, t s_{i_N})\}$
satisfying $$ \mu \cup_{i= 1}^M B(x_i, t s_i) \le (e^{1/d} + 1) \mu \cup_{j=1}^N B(x_{i_j}, t s_{i_j}) = (e^{1/d} + 1) \sum_{j=1}^N \mu D_{i_j}. $$ Next, we obtain the bound $$ \sum_{j=1}^N \frac{\mu D_{i_j}}{\mu B(x_{i_j}, s_{i_j})}\mathbf{1}_{B(x_{i_j}, s_{i_j})} \le 2 e^{1/d} + 1, $$ by considering $z$ such that
$\sum_{j=1}^N \frac{\mu D_{i_j}}{\mu B(x_{i_j}, s_{i_j})}\mathbf{1}_{B(x_{i_j}, s_{i_j})}(z) > 0$. Select the ball $B$ with largest index that contains $z$. Since $B$ belongs to the subcollection obtained by the Stein-Str\"omberg method, all balls containing $z$ and with radii $\ge d^2 r(B)$ (where $r(B)$ denotes the radius of $B$) contribute at most 1 to the sum.
Next we have to consider two more scales, all the balls with radius $r(B)$, and all the balls with radius $d r(B)$. By the usual argument (as in the proof of Theorem \ref{StSt1}) each of these scales contributes at most $e^{1/d}$ to the sum, so $\|M_R\|_{L^1-L^{1,\infty}} \le (e^{1/d} + 1) (1 + 2 e^{1/d})$ follows.
The result for
cubes is obtained by letting $d\to\infty$, since in this case it is known that
the weak type (1,1) norms increase with the dimension (cf. \cite[Theorem 2]{AV}).
\end{proof}
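The constant $(e^{1/d} + 1)(1 + 2 e^{1/d})$ of Theorem \ref{SSLebesgue} is decreasing in $d$ and tends to $(1+1)(1+2)=6$, which is how the stated bound for cubes arises; a quick numerical sketch (Python):

```python
import math

def weak_type_bound(d: int) -> float:
    # The constant (e^{1/d} + 1)(1 + 2 e^{1/d}) from Theorem SSLebesgue.
    k = math.exp(1.0 / d)
    return (k + 1.0) * (1.0 + 2.0 * k)

bounds = [weak_type_bound(d) for d in range(1, 10001)]
# The constant decreases in the dimension ...
assert all(b1 > b2 for b1, b2 in zip(bounds, bounds[1:]))
# ... and tends to (1 + 1)(1 + 2) = 6, the bound stated for cubes.
assert abs(bounds[-1] - 6.0) < 0.01
```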
\begin{theorem}\label{SS2Lebesgue} Consider $\mathbb{R}^d$ with Lebesgue measure $\lambda^d$ and balls defined by an arbitrary norm. If $\varepsilon > 0$,
then
$\|M\|_{L^1-L^{1,\infty}} \le (2 + 3 \varepsilon) d \log d$ for all sufficiently large $d \ge d(\varepsilon)$. \end{theorem}
The bound from the proof of \cite[Theorem 1]{StSt} is
$\|M\|_{L^1-L^{1,\infty}} \le e^2 (e^2 + 1) (1 + o(1)) d \log d.$
\begin{proof} Fix $\varepsilon \in (0,1)$. Since $(1 + d^{-1 - \varepsilon})^d = 1 + d^{ - \varepsilon}
+ O(d^{ -2 \varepsilon})$, it follows that $\lambda^d$ is $(d^{-1 - \varepsilon}, 1 + d^{ - \varepsilon}
+ O(d^{ -2 \varepsilon}))$-microdoubling. Note that if a ball $B$ contains the center of a second ball of
radius $1$, and the latter ball is contained in $(1 + d^{-1 - \varepsilon}) B$, then the radius $r_B$
of $B$ must satisfy $r_B \ge d^{ 1 + \varepsilon}$. Let $L$ be any natural number such that
$(1 + d^{-1 - \varepsilon})^L \ge d^{1 + \varepsilon}$. Taking logarithms
to estimate $L$,
and using $\log(1 + x) > x - x^2$ for $x$ sufficiently close to $0$, we see that it is
enough, for the preceding inequality to hold, to choose $L$ satisfying $L (d^{-1 - \varepsilon} - d^{-2 - 2\varepsilon})
\ge (1 + \varepsilon) \log d$, or, $L \ge (1 + o(d^{-1}))(1 + \varepsilon) d^{1 + \varepsilon} \log d$. For the least such integer we will have $$ L \le 1 + (1 + o(d^{-1}))(1 + \varepsilon) d^{1 + \varepsilon} \log d. $$
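The size of the least admissible $L$ can be checked numerically. A sketch (Python; $d=50$ and $\varepsilon=1/2$ are arbitrary illustration values):

```python
import math

def least_L(d: int, eps: float) -> int:
    # Least natural number L with (1 + d^{-1-eps})^L >= d^{1+eps}.
    return math.ceil((1.0 + eps) * math.log(d) / math.log1p(d ** (-1.0 - eps)))

d, eps = 50, 0.5
L = least_L(d, eps)
assert (1.0 + d ** (-1.0 - eps)) ** L >= d ** (1.0 + eps)   # L is admissible,
# and L has the predicted size (1 + o(1))(1 + eps) d^{1+eps} log d:
predicted = (1.0 + eps) * d ** (1.0 + eps) * math.log(d)
assert abs(L / predicted - 1.0) < 0.05
```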
Again we apply the
Stein-Str\"omberg selection process with $t = d^{-1 - \varepsilon}$,
covering a given level set $\{M f > a\}$ almost completely (up to a small $\delta > 0$) by a finite collection of small balls
$\{B(x_i, t s_i): s_i \in R, 1 \le i \le k\}$ ordered by non-increasing radii,
and such that $a \mu B(x_i, s_i) < \int_{B(x_i, s_i)} |f|$.
Using the Stein-Str\"omberg algorithm, we extract a subcollection $$ \{B(x_{i_1}, t s_{i_1}), \dots, B(x_{i_N}, t s_{i_N})\} $$
satisfying \begin{equation} \label{SSsum1} (1 - \delta) \mu \{M f > a\} \le (2 + d^{ - \varepsilon}
+ O(d^{ -2 \varepsilon})) \sum_{j=1}^N \mu D_{i_j},
\end{equation}
where the sets $D_{i_j}$ denote the disjointifications determined by the above subcollection. To sharpen the usual uniform bound for \begin{equation*} \sum_{j=1}^N \frac{\mu D_{i_j}}{\mu B(x_{i_j}, s_{i_j})}\mathbf{1}_{B(x_{i_j}, s_{i_j})}, \end{equation*}
we use the fact that the sets $D_i$ are disjoint across different steps, and not just within the same step. More precisely, let $z$ satisfy \begin{equation} \label{SSsum}
\sum_{j=1}^N \frac{\mu D_{i_j}}{\mu B(x_{i_j}, s_{i_j})}\mathbf{1}_{B(x_{i_j}, s_{i_j})}(z) > 0. \end{equation} Select the ball $B$ with largest index that contains $z$. Since $B$ belongs to the subcollection obtained by the Stein-Str\"omberg method, all balls containing $z$ and with radii $\ge d^{ 1 + \varepsilon} r_B$ (where $r_B$ denotes the radius of $B$) contribute at most 1 to the sum. Next we consider the first two scales, since for all the others the argument is the same as for the second.
Take all the balls with radii equal to $r_B$. In order to bound (\ref{SSsum}) from above, we suppose that $(1 + d^{ - 1 - \varepsilon}) B$ is completely filled up with the sets $D_i$ associated to balls with radii $r_B$, and hence, no $D_j$ associated to a ball with larger radius intersects $(1 + d^{ -1 - \varepsilon}) B$. When we consider the sum (\ref{SSsum}), but just for the balls with radius $r_B$, we obtain the upper bound $(1 + d^{ - 1 - \varepsilon})^d$. For the second level, we consider all balls in the subcollection with radii in $(r_B, (1 + d^{ - 1 - \varepsilon}) r_B]$, and as before, we suppose that $(1 + d^{ - 1 - \varepsilon})^2 B \setminus (1 + d^{ - 1 - \varepsilon}) B$ is completely filled up with the sets $D_j$ associated to these balls. The estimate we obtain for this second level is $(1 + d^{ - 1 - \varepsilon})^d - 1 = d^{ - \varepsilon}
+ O(d^{ -2 \varepsilon})$. For balls with radii in $((1 + d^{ - 1 - \varepsilon})^k r_B,
(1 + d^{ - 1 - \varepsilon})^{k + 1} r_B]$, $0 \le k < L$, we use the same estimate.
Adding up over all scales we obtain
$$
\sum_{j=1}^N \frac{\mu D_{i_j}}{\mu B(x_{i_j}, s_{i_j})}\mathbf{1}_{B(x_{i_j}, s_{i_j})}(z)
\le
1 + 1 + d^{ - \varepsilon}
+ O(d^{ -2 \varepsilon})
$$
$$
+ (1 + (d^{ - \varepsilon}
+ O(d^{ -2 \varepsilon})) (1 + o(d^{-1}))(1 + \varepsilon) d^{1 + \varepsilon} \log d)
\le (1 + O(d^{-\varepsilon}))(1 + \varepsilon) d \log d.
$$
Multiplying this bound by the bound from (\ref{SSsum1}), and adding an $\varepsilon$ to absorb the big-O terms, for $d$ large enough we obtain
$\|M\|_{L^1-L^{1,\infty}} \le (2 + 3 \varepsilon) d \log d$.
\end{proof}
\end{document} |
\begin{document}
\title{The Lindel{\"o}f Hypothesis for almost all Hurwitz's Zeta-Functions holds true
} \author{Masumi Nakajima \ \\
\it Department of Economics \ \\
\it International University of Kagoshima \ \\
\it Kagoshima 891-0191, JAPAN \\
e-mail: [email protected] } \maketitle
\begin{abstract} Using probability theory, we prove that the Lindel{\"o}f hypothesis holds for almost all Hurwitz zeta-functions, i.e. \\ $ \qquad \zeta({1\over2} + it,\omega)={\rm o}_{\omega,\epsilon}\{(\log t)^{{3\over2} + \epsilon}\} $ \\ for almost every $ \omega \in (0,1) $ and for any small $ \epsilon >0,$ where
$ {\rm o}_{\omega,\epsilon} $ denotes the Landau small o-symbol which depends on
$\omega$ and $\epsilon$ and $\zeta(s,\omega)$ denotes the Hurwitz zeta-function. The details will be given elsewhere.\\
Key words: the Riemann zeta function, the Hurwitz zeta function, the Lindel{\"o}f hypothesis,
law of large numbers, law of the iterated logarithm.
Mathematics Subject Classification: \\ 11M06, 11M26, 11M35, 60F15. \end{abstract}
Let $ \zeta(s,\omega)$ be the Hurwitz zeta function which is meromorphically extended to the whole complex plane from the Dirichlet series \[ \sum_{n=0}^{\infty} (n+{\omega})^{-s} \quad (s=\sigma + it,\sigma=\Re s >1, \ 0<\omega \leq 1). \] We should note that \[ \zeta(s,1)=\zeta(s),\] \[ \zeta(s,{1\over2} )=({2^s}-1)\zeta(s), \] where $\zeta(s)$ denotes the Riemann zeta function.\\ \quad In analytic number theory, there are three famous conjectures which are related to each other as follows.\\ {\bf The Riemann Hypothesis }(1859, by B.Riemann):\\ $ \rho \notin {\bf R}, \ \zeta(\rho)=0 \Rightarrow \Re \rho ={1\over2} $ \\ {\bf The Lindel{\"o}f Hypothesis }(1908, by E.Lindel{\"o}f):\\ $ \zeta({1\over2}+ it)={\rm O}_{\epsilon}(t^{\epsilon}) \ for \ any \ small \ {\epsilon}>0,$ \\ where ${\rm O}_{\epsilon}$ denotes the Bachmann-Landau large O-symbol which depends on $\epsilon$. \\ {\bf The Density Hypothesis }: \\ \[ N(\sigma,T)={\rm O}_{\epsilon}(T^{2-2\sigma + \epsilon}) \] \[ for \ any \ small \ \epsilon>0 \ and \ {1\over2}\leq \sigma \leq 1,\] \\ where $ N(\sigma,T) $ denotes the number of zeros of $\zeta(s)$ in the rectangle whose four vertices are $ \sigma ,1,1+iT $ and $ \sigma +iT $. \\ It is well known that \\ $the \ Riemann \ Hypothesis \Rightarrow the \ Lindel\ddot{o}f \ Hypothesis $ \\ $ \Rightarrow the \ Density \ Hypothesis.$ \\ (It is not known whether the Lindel{\"o}f Hypothesis implies the Riemann Hypothesis or not.)\\ And also, as is well known, the Riemann Hypothesis is the most important and the strongest conjecture, with serious influences on many branches of mathematics including number theory. But it is less known that in fact the Lindel{\"o}f Hypothesis has almost the same effects on number theory as the Riemann Hypothesis ~\cite{A1976}~\cite{I1985}~\cite{T1951}. About the Lindel{\"o}f Hypothesis there are many studies which improve the power $L$ of $t$
in $\zeta({1\over2}+ it)={\rm O}(t^L)$. Studies in this direction have a long history. The recent results are due to G.Kolesnik, E.Bombieri, H.Iwaniec, M.N.Huxley, N.Watt and others; for example, $\zeta({1\over2}+ it)={\rm O}(t^{9/56})$ due to Bombieri and Iwaniec in 1986, while the best bound up to the present time is ${\rm O}(t^{32/205})$, due to Huxley in 2005. \\ \quad In 1952, Koksma and Lekkerkerker~\cite{K-L1952} proved that \\
\[ \int_0^1 |\zeta_{1}({1\over2}+it,\omega)|^{2}d\omega={\rm O}(\log t) \] where $\zeta_{1}(s,\omega):=\zeta(s,\omega)-\omega^{-s}$, the subtracted term $-\omega^{-s}$ removing the singularity at $ \omega=0$. \\ \quad From this mean value result, by using {\v C}eby{\v s}ev's inequality in probability theory, we easily have
\[ \mu \{ 0<\omega \leq 1 ; |\zeta({1\over2}+it,\omega)| \geq C \sqrt{\log t} \} \leq {{{\rm O}(1)}\over{C^2}} \] \[ for \ any \ t>1 \ and \ any \ large \ C>0, \] where $\mu \{ B \} $ denotes the Lebesgue measure of the measurable set $ B $, which shows that the Lindel{\"o}f Hypothesis holds in the sense of a weak law in probability theory. \\ \quad In this short note we give the following strong law version of the Lindel{\"o}f Hypothesis, that is, \\
\newtheorem{theo}{Theorem}
\begin{theo}
\begin{eqnarray*} \zeta({1\over2}+it,\omega)={\rm o}_{\omega,\epsilon}\{(\log t)^{{3\over2} + \epsilon}\} \end{eqnarray*} for almost every $ \omega \in \Omega:=(0,1) $ and for any small $ \epsilon >0 $.
\end{theo}
\quad In order to prove this theorem, we need some definitions and some results in probability theory. \\ \quad Let $ (\Omega, { F},{\rm P}) $ be some probability space, $X,Y,Z,\cdots $ be complex valued random variables on this space, $ {\rm E}[X] $ be the
expectation value of the random variable $ X $ and $ {\rm V}[X]={\rm E}[|X-{\rm E}[X]|^2] $ be the variance of $ X $.
\newtheorem{lem}{Lemma}
\begin{lem}
Let $ Z $ be a complex valued random variable. If $ {\rm E}[|Z|^2] < +\infty $, then we have
\[ |Z|<+\infty \ almost \ surely \ (abbreviated \ by \ a.s. ),\]
\[i.e. \ {\rm P}\{ |Z|<+\infty \}=1. \] \end{lem}
{\bf proof.}\ From $|Z| \geq 0$, we have
\[ 0 \leq |Z|=|Z(\omega)| < +\infty \ {\rm or} \ |Z|=|Z(\omega)| = +\infty. \] We define the set $A \subset \Omega$ by
$A:=\{ \omega ; |Z(\omega)|=+\infty \}$ and the indicator function of the set $A$; \begin{eqnarray*} 1_{A}:=1_{A}(\omega):=\left\{ \begin{array}{ll} 1 & (\omega \in A) \\ 0 & (\omega \notin A). \end{array} \right. \end{eqnarray*}
If we assumed that ${\rm P}\{ \omega \in \Omega ; |Z(\omega)| < +\infty \} < 1 $, we would have ${\rm P}\{ A \}>0 $ and \[
{\rm E}[|Z|^2] \geq {\rm E}[|Z|^{2} 1_{A} ]=(+\infty){\rm P}\{ A \}=+\infty, \]
which contradicts the assumption $ {\rm E}[|Z|^2] < +\infty. $ So we have the lemma.
\begin{lem} Let $ Z_n $ be complex valued random variables \ $(n=1,2,3,\cdots)$.
If $ \sum_{n=1}^{\infty}{\rm E}[|Z_n|^2] < +\infty $, then we have \\ \[ Z_n \rightarrow 0 \ a.s.\ (as \ n \rightarrow +\infty ).\] \end{lem}
{\bf proof.}\ By Lemma 1, we have \[
{\rm P}\{ \sum_{n=1}^{\infty}|Z_n|^2 < \infty \}=1, \]
which shows that $|Z_n|^2 \rightarrow 0 \ a.s.\ (as \ n \rightarrow +\infty )$, that is, $Z_n \rightarrow 0 \ a.s.\ (as \ n \rightarrow +\infty )$.
\begin{lem} {\rm ({\rm Rademacher-Menchoff's lemma}~\cite{D1953} ~\cite{K1999}) } \\ Let $ a(p) \ (p=1,2,\cdots,2^{n+1}-1) $ be complex numbers and $ a(0):=0 $, then we have \begin{eqnarray*}
\lefteqn{ \max_{ 1\leq p <2^{n+1} } |a(p)|^2 } \\ & \leq & (n+1)
\sum_{k=0}^{n} \sum_{j=0}^{2^{n-k}-1}|a(2^{k}+j2^{k+1}) - a(j2^{k+1})|^2. \end{eqnarray*} \end{lem} {\bf proof.}\ For the natural number $p$ which satisfies $1 \leq p < 2^{n+1}$, we have its binary expansion: \[ p=\sum_{j=0}^{n}\epsilon_{j}2^{j} \ (\epsilon_{j}=0\ {\rm or} \ 1). \] With respect to the above $p$, we define $p_{k+1},\ p_{n+1},\ p_{0}$ respectively by \begin{eqnarray*} p_{k+1}:=\sum_{j=k+1}^{n}\epsilon_{j}2^{j} \ (k=0,1,2,\cdots,n-1),\\ p_{n+1}:=0,\ p_0:=p. \end{eqnarray*} From these definitions we have \begin{eqnarray*} p_0=p \geq p_1 \geq p_2 \geq \cdots \geq p_n \geq p_{n+1}=0, \ \ \ \ \ (1) \\
p_k - p_{k+1}=\epsilon_{k}2^k, \quad \quad \quad \\
p_{k+1}=\sum_{j=k+1}^{n}\epsilon_{j}2^{j} =\sum_{j=0}^{n-k-1}\epsilon_{k+1+j}2^{j+k+1} \\ =\sum_{j=0}^{n-k-1}(\epsilon_{k+1+j}2^{j})2^{k+1}=: \delta_{k+1}2^{k+1},\ \ \ (2) \\ 0 \leq \delta_{k+1} \leq \sum_{i=0}^{n-k-1} 2^i = 2^{n-k}-1. \ \ \ (3) \end{eqnarray*} From \[ a(p)=a(p_0)=a(p_0)-a(p_{n+1})=\sum_{k=0}^{n}(a(p_{k})-a(p_{k+1})), \] we have \begin{eqnarray*}
|a(p)|^2=|\sum_{k=0}^n 1 \cdot (a(p_k)-a(p_{k+1}))|^2 \\ \leq
\sum_{k=0}^n 1 \sum_{k=0}^n |a(p_k)-a(p_{k+1})|^2 \\
=(n+1)\sum_{k=0}^n |a(p_k)-a(p_{k+1})|^2 \\
=(n+1)\sum_{k=0}^n |a(\epsilon_k 2^k + p_{k+1})-a(p_{k+1})|^2 \ \\
\ ({\rm by \ (1)}) \\
=(n+1)\sum_{k=0}^n |a(\epsilon_k 2^k + \delta_{k+1} 2^{k+1})-a(\delta_{k+1} 2^{k+1})|^2 \ \\
\ ({\rm by \ (2)}) \\ \leq (n+1)\sum_{k=0}^n \sum_{j=0}^{2^{n-k}-1}
|a(2^k + j 2^{k+1})-a(j 2^{k+1})|^2, \ (4) \\ \end{eqnarray*} because only the terms with $\epsilon_k =1$ contribute to the summation over $k$, and by (3) we may sum over $j$ in place of $\delta_{k+1}$. Since the right hand side of (4) is independent of $p$, we have the lemma.
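Lemma 3 is a purely combinatorial inequality about finite sequences, so it can be stress-tested directly; a sketch (Python, with seeded pseudo-random complex data):

```python
import random

def rademacher_menchoff_holds(a, n):
    # Check the inequality of Lemma 3 for a[0] = 0 and a[p], 1 <= p < 2^{n+1}.
    lhs = max(abs(a[p]) ** 2 for p in range(1, 2 ** (n + 1)))
    rhs = (n + 1) * sum(
        abs(a[2 ** k + j * 2 ** (k + 1)] - a[j * 2 ** (k + 1)]) ** 2
        for k in range(n + 1)
        for j in range(2 ** (n - k)))
    return lhs <= rhs

random.seed(0)
for n in range(1, 7):
    a = [0j] + [complex(random.gauss(0, 1), random.gauss(0, 1))
                for _ in range(2 ** (n + 1) - 1)]
    assert rademacher_menchoff_holds(a, n)
```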
\newtheorem{de}{Definition}
\begin{de} {\rm Let $ X,Y $ be complex valued random variables which satisfy \
$ {\rm E}[|X|^2],\ {\rm E}[|Y|^2]< \infty. $ \\ If $ {\rm E}[\bar{X}Y]={\rm E}[\bar{X}]{\rm E}[Y], $ \ we call $ X,Y $ (pairwise) uncorrelated. } \end{de}
\begin{de} {\rm Let $ X,Y $ be complex valued random variables which satisfy \
$ {\rm E}[|X|^2],\ {\rm E}[|Y|^2]< \infty. $ \\ If $ {\rm E}[\bar{X}Y]=0, $ \ we call $ X,Y $ {\rm(}pairwise{\rm)} orthogonal. } \end{de}
\begin{lem} {\rm ({\rm Rademacher-Menchoff}~\cite{D1953} ~\cite{K1999}) } \\ Let $ X_1,X_2,\cdots $ be pairwise uncorrelated complex valued random variables which satisfy \[ {\rm E}[X_i]=0, \ \sigma_i^2:={\rm V}[X_i] \ (i=1,2,\cdots), \ \sigma_i \geq 0 \] and let $ S_n:=S_n(\omega):=X_1+X_2+\cdots+X_n. $ \\ Then we have \begin{eqnarray*}
\lefteqn{ {\rm E}[\max_{ 2^m < k \leq 2^{m+1} } |S_k-S_{2^m}|^2 ] } \\ & \leq & (m^2 + 1) \sum_{i=1}^{2^m} \sigma_{2^m + i}^2 \ for \ m=0,1,2,\cdots . \end{eqnarray*} \end{lem} {\bf proof.}\ In Lemma 3, we put \[ n=m-1, \ a(p)=X_{2^m+1}+X_{2^m+2}+\cdots+X_{2^m+p}. \] From this, we have \begin{eqnarray*}
{\rm E}[\max_{ 2^m < k \leq 2^{m+1} } |S_k-S_{2^m}|^2 ]
\\ \leq
{\rm E}[\max_{ 1 \leq k <2^{m} } |S_{2^m+k}-S_{2^m}|^2 ]
+ {\rm E}[|S_{2^{m+1}}-S_{2^m}|^2 ] \\
={\rm E}[\max_{ 1 \leq p <2^{m} } |X_{2^m+1}+X_{2^m+2}+\cdots+X_{2^m+p}|^2 ] \\
+ {\rm E}[|X_{2^m+1}+X_{2^m+2}+\cdots+X_{2^{m+1}}|^2 ] \\ \leq
{\rm E}[m\sum_{k=0}^{m-1}\sum_{j=0}^{2^{m-1-k}-1} |(X_{2^m+1}+ X_{2^m+2}+\cdots+X_{2^m+2^k+j2^{k+1}}) \\
-(X_{2^m+1}+X_{2^m+2}+\cdots+X_{2^m+j2^{k+1}})|^2 ] \\
+ {\rm E}[|X_{2^m+1}+X_{2^m+2}+\cdots+X_{2^{m+1}}|^2 ] \\ ({\rm by \ Lemma \ 3})\\ =m \sum_{k=0}^{m-1}\sum_{j=0}^{2^{m-1-k}-1}
{\rm E}[|X_{2^m+j2^{k+1}+1}+X_{2^m+j2^{k+1}+2}+ \\ \cdots
+X_{2^m+j2^{k+1}+2^k}|^2 ] \\
+ {\rm E}[|X_{2^m+1}+X_{2^m+2}+\cdots+X_{2^{m+1}}|^2 ] \\ =m \sum_{k=0}^{m-1}\sum_{j=0}^{2^{m-1-k}-1}
{\rm V}[X_{2^m+j2^{k+1}+1}+X_{2^m+j2^{k+1}+2}+ \\ \cdots +X_{2^m+j2^{k+1}+2^k} ] \\ + {\rm V}[X_{2^m+1}+X_{2^m+2}+\cdots+X_{2^{m+1}} ] \\ =m \sum_{k=0}^{m-1}\sum_{j=0}^{2^{m-1-k}-1} \sum_{i=1}^{2^k}\sigma_{2^m+j2^{k+1}+i}^2 +\sum_{i=1}^{2^m}\sigma_{2^m+i}^2 \\ \leq m \sum_{k=0}^{m-1} \sum_{i=1}^{2^m}\sigma_{2^m+i}^2 +\sum_{i=1}^{2^m}\sigma_{2^m+i}^2 \\ \leq m \cdot m \sum_{i=1}^{2^m}\sigma_{2^m+i}^2 +\sum_{i=1}^{2^m}\sigma_{2^m+i}^2 \\ =(m^2+1)\sum_{i=1}^{2^m} \sigma_{2^m+i}^2, \end{eqnarray*} which completes the proof of the lemma.\\ \quad By using these lemmas, we have
\begin{theo}{\rm~\cite{N2004.1}} \\ Let $ X_1^{(n)},X_2^{(n)},\cdots , X_k^{(n)},\cdots $ be pairwise uncorrelated complex valued random variables which may depend on $ n $ and satisfy \[ {\rm E}[X_k^{(n)}]=0, \ \sigma_k^2:={\rm V}[X_k^{(n)}]=
{\rm O}(k^{-2\alpha}),\ |X_k^{(n)}|<+\infty \] \[ (k,n=1,2,\cdots,\ \ \alpha \in {\bf R}, \forall \omega \in \Omega), \] where the $ \sigma_k \geq 0 $ do not depend on $ n $. Also let \[ S_n^{(l)}:=S_n^{(l)}(\omega):= X_1^{(l)}+X_2^{(l)}+\cdots+X_n^{(l)}, \]
and \[ \varphi(n):=n^{\beta}(\log n)^{{3\over2}+\epsilon} \ with \ any \ small \ \epsilon >0 \] \begin{eqnarray*} \beta :=\left\{ \begin{array}{ll} 0 & (\alpha \geq {1\over2}) \\ {1\over2}- \alpha & (\alpha < {1\over2}). \end{array} \right. \end{eqnarray*} Then we have \[ S_n^{(n)}=S_n^{(n)}(\omega)= {\rm o}_{\omega,\epsilon}(\varphi(n)) \ a.s.\ \omega \in \Omega. \] \end{theo} {\bf proof.}\ We choose any natural number sequence $\{n_k\}_{k=1}^\infty$
with $2^k<n_k\leq 2^{k+1}$; recall that $X_{1}^{(l)},X_{2}^{(l)},\cdots,X_{2^{m+1}}^{(l)},\cdots\ (l,m \in {\bf N})$
are pairwise uncorrelated complex valued random variables for any $l \in {\bf N}$. We have \begin{eqnarray*}
{\rm E}[\sum_{k=1}^m |{{S_{2^k}^{(n_k)}}\over{\varphi(2^k)}}|^2
+\sum_{k=m+1}^\infty |{{S_{2^k}^{(2^{k+1})}}\over{\varphi(2^k)}}|^2] =\sum_{k=1}^\infty 2^{-2k\beta}{(\log 2^k)}^{-3-2\epsilon} (\sigma_1^2 + \sigma_2^2 + \cdots + \sigma_{2^k}^2). \ \ (5)\\ \end{eqnarray*}
In case of $\alpha > {1\over2}$, then $\beta=0$ and we have \begin{eqnarray*} (5)={\rm O}(\sum_{k=1}^\infty {(\log 2^k)}^{-3-2\epsilon} \sum_{l=1}^{2^k} l^{-2\alpha}) \\ ={\rm O}(\sum_{k=1}^\infty k^{-3-2\epsilon})<+\infty. \end{eqnarray*} In case of $\alpha = {1\over2}$, then $\beta=0$ and we have \begin{eqnarray*} (5)={\rm O}(\sum_{k=1}^\infty {(\log 2^k)}^{-3-2\epsilon} \sum_{l=1}^{2^k} l^{-1}) \\ ={\rm O}(\sum_{k=1}^\infty k^{-3-2\epsilon}\cdot \log 2^k) \\ ={\rm O}(\sum_{k=1}^\infty k^{-2-2\epsilon})<+\infty. \end{eqnarray*} In case of $\alpha < {1\over2}$, then $\beta={1\over2}-\alpha$ and we have \begin{eqnarray*} (5)={\rm O}(\sum_{k=1}^\infty 2^{-k(1-2\alpha)} {(\log 2^k)}^{-3-2\epsilon} \sum_{l=1}^{2^k} l^{-2\alpha}) \\ ={\rm O}(\sum_{k=1}^\infty 2^{-k(1-2\alpha)} k^{-3-2\epsilon} \cdot 2^{k(1-2\alpha)}) \\ ={\rm O}(\sum_{k=1}^\infty k^{-3-2\epsilon})<+\infty. \end{eqnarray*} Then in any case, we have \[
{\rm E}[\sum_{k=1}^m |{{S_{2^k}^{(n_k)}}\over{\varphi(2^k)}}|^2
+\sum_{k=m+1}^\infty |{{S_{2^k}^{(2^{k+1})}}\over{\varphi(2^k)}}|^2] <+\infty , \] which means, by Lemma 2, with some $A((n_1,\cdots,n_m))\subset\Omega$, \[
\sum_{k=1}^m |{{S_{2^k}^{(n_k)}(\omega)}\over{\varphi(2^k)}}|^2
+\sum_{k=m+1}^\infty |{{S_{2^k}^{(2^{k+1})}(\omega)}\over{\varphi(2^k)}}|^2 <+\infty \] for $\forall \omega \in A((n_1,\cdots,n_m))$ with ${\rm P}\{A((n_1,\cdots,n_m))\}=1$. We put \[ A(m):=\bigcap_{(n_1,\cdots,n_m)}A((n_1,\cdots,n_m)) \] where $(n_1,\cdots,n_m)$ under $\cap$ runs through all $(n_1,\cdots,n_m)\in {\bf N}^{m}$ with $2^k<n_k\leq 2^{k+1} (k=1,2,\cdots,m).$ \\ Since this is an intersection of finitely many sets of full measure, we have \[{\rm P}\{A(m)\}=1.\] Therefore we have \[
\sum_{k=1}^m |{{S_{2^k}^{(n_k)}(\omega)}\over{\varphi(2^k)}}|^2
+\sum_{k=m+1}^\infty |{{S_{2^k}^{(2^{k+1})}(\omega)}\over{\varphi(2^k)}}|^2 <+\infty \] for $\forall\omega \in A(m)$ with \ ${\rm P}\{A(m)\}=1$.\\ We show that \[A(m)=A(m+1) \ (m=1,2,\cdots).\] In fact, if $\omega \in A(m)$ which means \[
\sum_{k=1}^m |{{S_{2^k}^{(n_k)}(\omega)}\over{\varphi(2^k)}}|^2
+\sum_{k=m+1}^\infty |{{S_{2^k}^{(2^{k+1})}(\omega)}\over{\varphi(2^k)}}|^2 <+\infty \], then we immediately have \[
\sum_{k=1}^m |{{S_{2^k}^{(n_k)}(\omega)}\over{\varphi(2^k)}}|^2
+|{{S_{2^{m+1}}^{(n_{m+1})}(\omega)}\over{\varphi(2^{m+1})}}|^2
+\sum_{k=m+2}^\infty |{{S_{2^k}^{(2^{k+1})}(\omega)}\over{\varphi(2^k)}}|^2 <+\infty \] for $2^{m+1}<\forall n_{m+1}\leq 2^{m+2}$,
because $|X_k^{(l)}(\omega)|<+\infty$ for $\forall \omega \in \Omega.$ This means $\omega \in A(m+1)$. Conversely, $\omega \in A(m+1)$ implies $\omega \in A(m)$ by the same argument.\\ So there exists \[\lim_{m \to +\infty}A(m)=:A=A(1) \ \ {\rm and} \ \ {\rm P}\{A\}=1.\] This means \[
\sum_{k=1}^\infty |{{S_{2^k}^{(n_k)}(\omega)}\over{\varphi(2^k)}}|^2 <+\infty \ for \ \forall\{n_k\}_{k=1}^\infty,\ \forall \omega \in A \ with \ {\rm P}\{A\}=1 \] and \[ \lim_{k \to +\infty}{{S_{2^k}^{(n_k)}(\omega)}\over{\varphi(2^k)}} =0 \ a.s. \ \omega \ for \ \forall\{n_k\}_{k=1}^\infty \quad \quad \quad {\rm (6)} \] Next we put \begin{eqnarray*} Y_k^{(n_k)}:=\max_{1\leq l \leq 2^k}
|X_{2^k+1}^{(n_k)}+X_{2^k+2}^{(n_k)}+\cdots+X_{2^k+l}^{(n_k)}|. \end{eqnarray*} By Lemma 4, we have for any $l \in {\bf N}$ \begin{eqnarray*}
{\rm E}[|Y_k^{(l)}|^2]\leq (k^2+1)\sum_{i=1}^{2^k}\sigma_{2^k+i}^2 \\ =\left\{ \begin{array}{ll} {\rm O}(k^2+1) & (\alpha \geq {1\over2}) \\ {\rm O}(2^{(k+1)(1-2\alpha)}(k^2+1)) & (\alpha < {1\over2}). \end{array} \right. \end{eqnarray*} and \[
{\rm E}[\sum_{k=1}^m |{{Y_{k}^{(n_k)}}\over{\varphi(2^k)}}|^2
+\sum_{k=m+1}^\infty |{{Y_{k}^{(2^{k+1})}}\over{\varphi(2^k)}}|^2] ={\rm O}(\sum_{k=1}^\infty 2^{-2k\beta} k^{-3-2\epsilon}
{\rm E}[ |Y_{k}^{(l)}|^2])\ \ with \ \forall l\in {\bf N}. \ \ (7) \] In case of $\alpha \geq {1\over2}$, then $\beta=0$ and we have \begin{eqnarray*} (7)={\rm O}(\sum_{k=1}^\infty k^{-3-2\epsilon}(k^2+1) ) ={\rm O}(\sum_{k=1}^\infty k^{-1-2\epsilon})<+\infty. \end{eqnarray*}
In case of $\alpha < {1\over2}$, then $\beta={1\over2}-\alpha$ and we have \begin{eqnarray*} (7)={\rm O}(\sum_{k=1}^\infty 2^{-k(1-2\alpha)} k^{-3-2\epsilon} \cdot (k^2+1) 2^{(k+1)(1-2\alpha)}) \\ ={\rm O}(2^{-2\alpha}\sum_{k=1}^\infty k^{-1-2\epsilon})<+\infty. \end{eqnarray*} In any case, we have \[
{\rm E}[\sum_{k=1}^m |{{Y_{k}^{(n_k)}}\over{\varphi(2^k)}}|^2
+\sum_{k=m+1}^\infty |{{Y_{k}^{(2^{k+1})}}\over{\varphi(2^k)}}|^2] <+\infty . \] The same argument as that of ${{S_{2^k}^{(n_k)}}\over{\varphi(2^k)}}$ leads to \[ \lim_{k\to\infty}{{Y_{k}^{(n_k)}(\omega)}\over{\varphi(2^k)}} =0 \ {\rm a.s.} \ \ {\rm for}\ \forall\{n_k\}_{k=1}^\infty . \ \ \ (8) \] Then, for any $n$ with $2^m<n\leq 2^{m+1}$, we have, by (6) and (8), \begin{eqnarray*}
{{|S_{n}^{(n)}|}\over{\varphi(n)}} \leq
{{|S_{n}^{(n)}|}\over{\varphi(2^m)}} \leq
{ { |S_{2^m}^{(n)}|+Y_{m}^{(n)} }\over{\varphi(2^m)} }\\
={ { |S_{2^m}^{(n)}| }\over{\varphi(2^m)} }+
{ { Y_{m}^{(n)} }\over{\varphi(2^m)} } \to 0 \ \ {\rm a.s.}\ \ ({\rm as}\ n\to\infty), \end{eqnarray*} which means \[ S_n^{(n)}=S_n^{(n)}(\omega)= {\rm o}_{\omega,\epsilon}(\varphi(n)) \ a.s.\ \omega \in \Omega, \ \ {\rm with \ any \ small \ \epsilon>0.} \] This completes the proof.
\newtheorem{rem}{Remark}
\begin{rem} {\rm
This theorem is a generalization of strong limit theorems whose position may be placed between the laws of large numbers and the laws of the iterated logarithm in probability theory. (Therefore we would like to call theorems of this type quasi laws of the iterated logarithm.) } \end{rem}
\begin{rem} {\rm
This is also a new proof of the strong law of large numbers without using the Borel-Cantelli theorem. We can prove other limit theorems in probability theory by this method. } \end{rem}
We need a few more lemmas to prove Theorem 1.
\begin{lem} {\rm ( {\rm Functional equation for the Hurwitz zeta function} ~\cite{A1976}~\cite{I1985}~\cite{T1951} ) } \\
\[ \zeta(s,\omega)={{\Gamma(s)}\over{(2\pi)^s}} \{ e^{-{\pi\over2}is}{\rm F}(\omega,s) + e^{+{\pi\over2}is}{\rm F}(-\omega,s) \} \] for \ $ 0<\omega<1,\sigma>0 $ or \ $ 0<\omega \leq 1,\sigma>1 $,\\ where $\Gamma(s)$ is the gamma function of Euler and \\ ${\rm F}(\omega,s):=\sum_{k=1}^{\infty}k^{-s}e^{2\pi ik \omega}.$ \end{lem}
\begin{lem} {\rm ~\cite{I1985}~\cite{M1970}~\cite{T1980}~\cite{T1951}}
\[ |\Gamma(s)|=\sqrt{2\pi}|t|^{\sigma -{1\over2}}e^{-{\pi\over2}|t|}
\{ 1 + {\rm O}_{\sigma_1,\sigma_2,\delta}({1\over{|t|}}) \} \]
for \ $ \sigma_1 \leq \sigma \leq \sigma_2,\ |t| \geq \delta >0. $
\end{lem}
\begin{lem}
\[ {\rm F}(\omega,s)=\sum_{k \leq t^2}k^{-s}e^{2\pi ik \omega} +
{\rm O}({1\over{|1-e^{2\pi i \omega}|}}t^{1-2\sigma})
\] for \ $ \sigma \geq {1\over2} . $
\end{lem} {\bf proof.}\ \[ {\rm F}(\omega,s)=\sum_{k \leq t^2}k^{-s}e^{2\pi ik \omega} + \sum_{k > t^2}k^{-s}e^{2\pi ik \omega}.
\] By applying the partial summation to the second term of the above ${\rm F}(\omega,s)$, \begin{eqnarray*} \sum_{k > t^2}k^{-s}e^{2\pi ik \omega}\\ =-A(t^2){(t^2)}^{-s}+s\int_{t^2}^\infty A(u)u^{-s-1}du \\
={\rm O}({1\over{|1-e^{2\pi i \omega}|}}t^{-2\sigma})+
{\rm O}(t{1\over{|1-e^{2\pi i \omega}|}}t^{-2\sigma}) \\
={\rm O}({{t^{1-2\sigma}}\over{|1-e^{2\pi i \omega}|}}), \end{eqnarray*} where $A(u):=\sum_{k \leq u}e^{2\pi ik \omega}$, which completes the proof of the lemma.
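The tail bound of Lemma 7 can be illustrated numerically on the critical line; a sketch (Python; $t=10$, $\omega=0.3$, and the constant $10$ standing in for the implied O-constant are all arbitrary illustration choices):

```python
import cmath
import math

# Partial sums of the tail sum_{t^2 < k <= N} k^{-s} e^{2 pi i k omega}
# should stay O(t^{1 - 2 sigma} / |1 - e^{2 pi i omega}|) for sigma = 1/2.
t, omega, sigma = 10.0, 0.3, 0.5
s = complex(sigma, t)
cutoff = int(t * t)
bound = 10.0 * t ** (1.0 - 2.0 * sigma) / abs(1.0 - cmath.exp(2j * math.pi * omega))

partial, worst = 0j, 0.0
for k in range(cutoff + 1, 5001):
    partial += k ** (-s) * cmath.exp(2j * math.pi * k * omega)
    worst = max(worst, abs(partial))
assert worst < bound
```

The assertion is safe here because Abel summation with $|A(u)| \le 2/|1-e^{2\pi i\omega}|$ already gives a rigorous bound well below the tested threshold.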
From Lemmas 5 and 6 we easily have
\begin{lem}
\[
|\zeta({1\over2} + it,\omega)|={\rm O}(|{\rm F} (\omega,{1\over2} + it)|+1) \ \ for \ 0<\omega<1. \]
\end{lem}
\noindent {\bf Proof of Theorem 1.} From Lemma 7, we have \[ {\rm F}(\omega,{1\over2}+it)=\sum_{k \leq t^2}k^{-{1\over2}-it}e^{2\pi ik \omega}+{\rm O}_{\delta}(1) \] \[ for \ 0<\delta<\omega<1-\delta \ with \ any \ small \ \delta>0. \] In Theorem 2, put $ \Omega=(0,1),\ {\rm P}=\mu $ (Lebesgue \ measure), $ n=[t^2] $ \ ($ [x] $ denotes the integral part of the real number $ x$) and \[ X_k^{(t)}=k^{-{1\over2}-it}e^{2\pi ik \omega} \ \ (k=1,2,\cdots), \] which satisfy all the conditions in Theorem 2. Then we have \[ \sum_{k \leq t^2}k^{-{1\over2}-it}e^{2\pi ik \omega} ={\rm o}_{\omega,\epsilon}(\varphi([t^2])) ={\rm o}_{\omega,\epsilon}((\log t)^{{3\over2}+\epsilon}). \] With Lemmas 7 and 8, this completes the proof of the theorem.
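At the heart of the proof is the fact that the Dirichlet polynomial $\sum_{k \leq t^2}k^{-{1\over2}-it}e^{2\pi ik \omega}$ has coefficients of modulus $k^{-1/2}$, so its mean square in $\omega$ equals $\sum_{k \leq t^2}k^{-1}={\rm O}(\log t)$; this is what makes $\alpha={1\over2}$ (hence $\beta=0$) applicable in Theorem 2. The mean square identity can be verified exactly by discrete orthogonality; a sketch (Python; $t=10$ and the grid size are arbitrary illustration values):

```python
import cmath
import math

t = 10.0
K = int(t * t)   # the Dirichlet polynomial has K = t^2 terms
M = 256          # grid size; exactness below needs M > K

def S(omega: float) -> complex:
    # S(omega) = sum_{k <= t^2} k^{-1/2 - it} e^{2 pi i k omega}
    return sum(k ** complex(-0.5, -t) * cmath.exp(2j * math.pi * k * omega)
               for k in range(1, K + 1))

# (1/M) sum_j e^{2 pi i (k - k') j / M} = 0 for 0 < |k - k'| < M, so the grid
# average of |S|^2 equals sum_{k <= t^2} 1/k = O(log t) exactly.
mean_square = sum(abs(S(j / M)) ** 2 for j in range(M)) / M
harmonic = sum(1.0 / k for k in range(1, K + 1))
assert abs(mean_square - harmonic) < 1e-6
```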
\begin{rem} {\rm
The exact expression of the Lindel{\"o}f Hypothesis is
\begin{eqnarray*} \mu_{\omega}(\sigma)=\left\{ \begin{array}{ll} 0 & (\sigma \geq {1\over2}) \\ {1\over2}- \sigma & (\sigma < {1\over2}). \end{array} \right. \end{eqnarray*} \begin{eqnarray*} where \ \ \mu_{\omega}(\sigma):=
\limsup_{t \to +\infty}
{{\log |\zeta(\sigma + it,\omega)|}\over{\log t}}, \end{eqnarray*} which is the same form as
\begin{eqnarray*} \beta =\left\{ \begin{array}{ll} 0 & (\alpha \geq {1\over2}) \\ {1\over2}- \alpha & (\alpha < {1\over2}), \end{array} \right. \end{eqnarray*} in Theorem 2. \\ } \end{rem}
\begin{rem} {\rm
In 1936, Davenport and Heilbronn~\cite{D-H1936} already proved that the Riemann Hypothesis fails for $\zeta(s,\omega)$ with transcendental $\omega$ and with rational $\omega \neq {1\over2},1$, in contrast with our Theorem 1; this shows that the Lindel{\"o}f Hypothesis by itself, for example, without the Euler product, does not imply the Riemann Hypothesis. } \end{rem}
\begin{rem} {\rm
It seems that the behaviour of $\zeta(s,\omega)$ as $\omega$ varies in the interval $(0,1)$ is very complicated, because of the following facts.\\ (1) Balasubramanian-Ramachandra~\cite{B-R1977} (the case $ \omega=1
$ ) and Ramachandra-Sankaranarayanan~\cite{R-S1989} proved the following $\Omega$-theorem: \begin{eqnarray*} \zeta({1\over2} + it,\omega)=\Omega(\exp (C_{\omega}\sqrt{{\log t} \over{\log\log t}})) \\ with \ some \ C_{\omega}>0 \ and \ \omega \in {\bf Q}, \end{eqnarray*} which shows \begin{eqnarray*} \{0<\omega<1;{\rm Theorem \ 1 \ holds } \} \cap {\bf Q}=\emptyset. \end{eqnarray*} \noindent (2) It is well known that divisor problems and circle problems are closely related to each other, and so are shifted divisor problems and shifted circle problems. The Hurwitz zeta function naturally appears in shifted divisor problems~\cite{N1993}. Moreover, Bleher-Cheng-Dyson-Lebowitz~\cite{B-C-D-L1993} pointed out, through their numerical studies, that the value distributions of the error terms of the number of lattice points inside shifted circles behave very differently as the shift varies. Therefore it seems that the behaviour of $\zeta(s,\omega)$, including its value distribution, is very complicated as $\omega$ varies. (For the value distribution of $\zeta(s,\omega)$ with transcendental $\omega$, see ~\cite{N1997}.) \\ \noindent (3) Our numerical studies with ``Mathematica'' also show the complexity of the behaviour of $\zeta(s,\omega)$, as follows.} \end{rem} \noindent The graph of $\zeta(s,x)$, which plots the points $(x,y) \in {\bf R^2}$ such that \[
y={{|\zeta({1\over2}+it,x)-x^{-({1\over2}+it)}|}\over{(\log t)^2}} \ (0 \leq x \leq 1,\ t=10^8), \] seems to be a kind of white noise.\\
\noindent {\bf Acknowledgment} \ \ The author thanks to \\ Prof. Jyoichi Kaneko of the University of the Ryukyus for his careful reading of the previous manuscript.\\
\end{document} |
\begin{document}
\title[Ehrhart theory of symmetric edge polytopes via ribbon structures]{Ehrhart theory of symmetric edge polytopes via ribbon structures}
\author{Tam\'as K\'alm\'an} \address{Department of Mathematics\\ Tokyo Institute of Technology\\ H-214, 2-12-1 Ookayama, Meguro-ku, Tokyo 152-8551, Japan} \email{[email protected]}
\author{Lilla T\'othm\'er\'esz} \address{MTA-ELTE Egerv\'ary Research Group, P\'azm\'any P\'eter s\'et\'any 1/C, Budapest, Hungary} \email{[email protected]} \date{}
\begin{abstract} Using a ribbon structure of the graph, we construct a dissection of the symmetric edge polytope of a graph into unimodular simplices. Our dissection is shellable, and one can interpret the elements of the resulting $h$-vector via graph theory. This gives an elementary method for computing the $h^*$-vector of the symmetric edge polytope.
\end{abstract}
\maketitle
\section{Introduction} \label{sec:intro}
Let ${\mathcal{G}}$ be a simple graph with vertex set $V({\mathcal{G}})$ and edge set $E({\mathcal{G}})$. The polyhedron $$P_{\mathcal{G}} =\conv \{\mathbf{u}-\mathbf{v}, \mathbf{v}-\mathbf{u} \mid uv\in E({\mathcal{G}})\} \subset \mathbb{R}^{V(\mathcal{G})}$$ is called the \emph{symmetric edge polytope} of ${\mathcal{G}}$ \cite{Sym_edge_appearance}. (Here $\mathbf u, \mathbf v$ stand for generators of $\mathbb R^{V({\mathcal{G}})}$ that correspond to $u,v\in V({\mathcal{G}})$.) The $h^*$-vector of the symmetric edge polytope is a palindromic polynomial that has been actively investigated;
see \cite{arithm_symedgepoly,DDM19,Sym_edge_appearance,OT} and references within.
In particular, Ohsugi and Tsuchiya conjecture \cite{OT} that the $h^*$-vector of a symmetric edge polytope is $\gamma$-positive. As pointed out in \cite{DDM19}, the symmetric edge polytope is also relevant in physics, where an upper bound for the number of steady states of the Kuramoto synchronization model can be obtained via its volume.
In this paper we give a general method for computing the $h^*$-vector of the symmetric edge polytope of a graph. Our method only uses elementary notions of graph theory and can be converted into an exponential time algorithm.
Higashitani, Jochemko, and Micha{\l}ek described the facets of $P_{\mathcal{G}}$ \cite{arithm_symedgepoly} and in a previous paper \cite{semibalanced} we treated exactly that class of polytopes. The facets are indexed by certain spanning subgraphs of ${\mathcal{G}}$ that are also endowed with an orientation. Here we build on the machinery of \cite{semibalanced} to dissect the symmetric edge polytope into unimodular simplices in a shellable manner. The basic idea is to dissect each facet in a shellable way as we did in \cite{semibalanced}, construct cones over them whose common apex is the origin, and merge the shellings into a shelling order of the whole construction. This way we can read off the $h^*$-polynomial as the $h$-polynomial of the shelling.
We interpret the simplices in the dissection, as well as the coefficients of the $h^*$-polynomial, in terms of graph theory. More exactly, the simplices in the dissection correspond to special spanning trees (so called Jaeger trees) of the aforementioned oriented spanning subgraphs of ${\mathcal{G}}$. Here Jaeger trees, see Definition \ref{def:jaegertree}, are defined using an arbitrarily fixed ribbon structure of ${\mathcal{G}}$ and a graph traversal due to Bernardi \cite{Bernardi_first}. Roughly speaking, the condition is that each edge that is \emph{not} part of the tree, is first reached during the traversal at its tail. The notion of these trees is inspired by the knot-theoretical work of F. Jaeger \cite{Jaeger}. Our main claim is that
\begin{customthm}{\ref{cor:h^*_of_sym_edge_poly}} The coefficient $(h^*_{P_{\mathcal{G}}})_i$ equals the number of Jaeger trees so that, during the traversal, there are exactly $i$ edges that are \emph{in the tree} and are first reached at their tail. \end{customthm}
The resulting computation of $h^*_{P_{\mathcal{G}}}$ is lengthy (due to the large number of oriented subgraphs to be considered and the multitude of Jaeger trees in each) but eminently doable for any graph. In particular, no polytopes are considered during the actual process.
Furthermore, one can use our formula to prove that the coefficient $\gamma_1$ of the linear term of the $\gamma$-polynomial of $h^*_{P_{\mathcal{G}}}$ is nonnegative for any graph ${\mathcal{G}}$. More precisely, we show
\begin{customthm}{\ref{thm:gamma(1)=2g}}
For any connected, simple, undirected graph ${\mathcal{G}}$ we have $$\gamma_1=2g,$$ where $g=|E({\mathcal{G}})|-|V({\mathcal{G}})|+1$ is the so called cyclomatic number (or nullity or first Betti number) of ${\mathcal{G}}$. \end{customthm}
The same formula appears in an extremely recent announcement\footnote{Their preprint \cite{dalietal} was submitted one day ahead of ours.} by D'Al\`i et al. They show this by a quick calculation based on the existence of a unimodular triangulation, cf.\ \cite[Lemma 3.1]{dalietal}. Our approach yields a much longer proof but it does provide an explicit description of the simplices in the shelling order that are attached along exactly one facet.
If ${\mathcal{G}}$ is bipartite with partite classes $U$ and $W$, the \emph{root polytope} \[\mathcal{Q}_{\mathcal{G}}=\conv\{\mathbf{u}-\mathbf{w}\mid u\in U, w\in W, uw\in E({\mathcal{G}})\}\] is a facet of the symmetric edge polytope.
We will also call the $h^*$-vector of $\mathcal{Q}_{\mathcal{G}}$ the \emph{interior polynomial} of ${\mathcal{G}}$. (Cf.\ \cite{KP_Ehrhart} and Kato's clarification \cite{Kato} of its main result. The interior polynomial was first defined in \cite{hiperTutte}.) Ohsugi and Tsuchiya proved \cite[Theorem 5.3]{OT} that for a certain special class of graphs, the $\gamma$-polynomial of $h^*_{P_{\mathcal{G}}}$ is a positive linear combination of interior polynomials of some subgraphs. Such a formula implies $\gamma$-positivity for the symmmetric edge polytope of these graphs. In Section \ref{sec:connection_of_gamma_and_interior}, we collect further results suggesting a strong connection of the $\gamma$ and interior polynomials, and formulate some conjectures.
Previous results about the $h^*$-vector of the symmetric edge polytope mostly relied on Gröbner bases of the toric ideal associated to $P_{\mathcal{G}}$, and a resulting regular unimodular triangulation of $P_{\mathcal{G}}$. Even though our approach and the method of Gröbner bases are conceptually very different, they sometimes produce the same subdivision. For example in the case of a complete bipartite graph, with the right ribbon structure, our method yields the same triangulation that was obtained using Gröbner bases \cite{arithm_symedgepoly} (see Section \ref{sec:concrete_cases}). To summarize the comparison, our approach is less algebraic and more combinatorial. Gröbner bases provide regular triangulations for the symmetric edge polytope, while we can only guarantee a shellable dissection. To us, it is not clear whether Gröbner bases directly yield a graph-theoretic interpretation for the $h^*$-vector the way our method does.
To demonstrate the usefulness of our method in a more concrete setting, in Section \ref{sec:concrete_cases}, we revisit some further graph classes where $h^*_{P_{\mathcal{G}}}$ was computed previously using Gröbner bases.
Finally, in Section \ref{sec:geometric_formula_for_volume}, we give a geometric formula for the volume of the symmetric edge polytope of a bipartite graph. Namely, we identify a set of points such that the volume of the polytope is equal to the sum of the numbers of facets visible from the given points.
Acknowledgements: TK was supported by a Japan Society for the Promotion of Science (JSPS) Grant-in-Aid for Scientific Research C (no.\ 17K05244). LT was supported by the National Research, Development and Innovation Office of Hungary -- NKFIH, grant no.\ 132488, by the János Bolyai Research Scholarship of the Hungarian Academy of Sciences, and by the ÚNKP-21-5 New National Excellence Program of the Ministry for Innovation and Technology, Hungary. LT was also partially supported by the Counting in Sparse Graphs Lendület Research Group of the Rényi Institute.
\section{Preparations} \label{sec:prep}
\subsection{Ehrhart theory essentials}
Suppose that $P\subset\mathbf{R}^n$ is a $d$-dimensional
polytope with vertices in $\mathbf{Z}^n$.
We define its \emph{Ehrhart series} as $$\mathrm{Ehr}_P(t)=\sum_{k=0}^\infty|(k\cdot P)\cap\mathbf{Z}^n|\,t^k. $$ It is well known that the Ehrhart series can be written in the closed form $$ \mathrm{Ehr}_P(t) = \frac{h^*(t)}{(1-t)^{d+1}}, $$ where $h^*(t)$ is a polynomial
called the \emph{$h^*$-polynomial} (or \emph{$h^*$-vector}) of $P$.
One possible way to compute the $h^*$-vector of $P$ is via the $h$-vector of a shellable dissection of $P$ into unimodular simplices, that we now explain. This technique was already established for triangulations before we extended its scope in our earlier work \cite{hyperBernardi,semibalanced}, but we summarize it here again for easier readability.
A \emph{dissection} of a polytope $P$ is a set of mutually interior-disjoint maximal dimensional simplices, whose union is the polytope. A dissection is \emph{shellable} if it admits a \emph{shelling order}. That means that the simplices of the dissection are listed as $\sigma_1,\sigma_2,\ldots,\sigma_N$, in such a way that for each $i=2,3,\ldots,N$, the intersection of $\sigma_i$ with the `earlier' simplices, \[ \sigma_i\cap\left(\bigcup_{j=1}^{i-1}\sigma_j\right), \] coincides with the union of a positive number of facets (codimension $1$ faces) of $\sigma_i$.
Let us denote the number of said facets by $r_i$ for $i=2,3,\ldots,N$ and let us also put $r_1=0$. Then the \emph{$h$-vector} associated to the shelling order is the distribution of the statistic $r_i$.
We may write it as a finite sequence $(h_0,h_1,\ldots)$, where $h_k$ is the number of simplices $\sigma_i$ with $r_i=k$. Note that $h_0=1$ and since each maximal simplex has $d+1$ facets, the subscript of the last non-zero term is at most $d+1$. In fact, it is at most $d$ by a topological argument: each $\sigma_i$ with $r_i=d+1$ (i.e., a $d$-cell attached along its entire boundary)
would add an infinite cyclic summand to the $d$-dimensional homology group of the polytope, but that group is $0$.
Alternatively, we may express the $h$-vector in the polynomial form \begin{equation} \label{eq:h-hejazott} h(x)=h_{d}\,x+h_{d-1}\,x^2+\cdots+h_1\,x^{d}+h_0\,x^{d+1}. \end{equation}
Now in the special case when the simplices in the dissection are unimodular, we have the following result, which will be a basic tool in this paper.
\begin{prop} \label{prop:h-dissect}\cite{semibalanced} For any shellable dissection of a $d$-dimensional lattice polytope into unimodular simplices, and for any shelling order, the $h$-polynomial \eqref{eq:h-hejazott} determines the $h^*$-polynomial of the polytope as \[h^*(t)=t^{d+1}h(1/t)=h_0+h_1\,t+\cdots+h_{d-1}\,t^{d-1}+h_d\,t^d.\] \end{prop}
If a polynomial $p(x)=c_dx^d+\dots + c_1x + c_0$ is palindromic (that is, $c_i=c_{d-i}$ for each $i=1, \dots, d$), then it can be written in the form $$p(x)= (x+1)^d \sum_{i=1}^{\lceil \frac{d}{2}\rceil} \gamma_i \cdot \left(\frac{x}{(x+1)^2}\right)^i.$$ In this case $\gamma(y) = \sum_{i=1}^{\lceil \frac{d}{2}\rceil} \gamma_i\, y^i$ is called the \emph{$\gamma$-polynomial} of $p$.
A palindromic polynomial $p$ is called \emph{$\gamma$-positive} if all coefficients of its $\gamma$-polynomial are nonnegative. This is a strong property, which implies the unimodality of $p$.
The $h^*$-polynomial of the symmetric edge polytope is palindromic \cite{OT}, hence $P_{\mathcal{G}}$ has a $\gamma$-polynomial
for any graph ${\mathcal{G}}$. We denote this polynomial by $\gamma_{\mathcal{G}}$ and call it the \emph{$\gamma$-polynomial} of ${\mathcal{G}}$.
\subsection{Graphs and digraphs}
In this paper
we denote undirected graphs by calligraphic letters, e.g. ${\mathcal{G}}$, while we denote directed graphs by regular capital letters, e.g., by $G$.
For an undirected graph ${\mathcal{G}}$, we denote its set of vertices by $V({\mathcal{G}})$ and its set of edges by $E({\mathcal{G}})$. We write undirected edges as $uv$, where $u,v\in V({\mathcal{G}})$ are the two endpoints.
For a directed graph $G$, we also denote its set of vertices by $V(G)$ and its set of (directed) edges by $E(G)$. We denote an edge of $G$ by $\overrightarrow{uv}$ if $u$ is the tail and $v$ is the head of the edge. If we do not want to specify which endpoint of the edge is the head and which one is the tail, we simply write $uv$. A lowercase letter, say $e$, might denote a directed or an undirected edge, depending on the context. We write $\overrightarrow{e}$ if we want to stress that $e$ is directed. Occasionally, $\overrightarrow{e}$ and $\overleftarrow{e}$ will denote the two oppositely oriented versions of the edge $e$.
In this paper, we typically consider directed graphs $G$ whose vertex sets agree with the vertex set of some undirected graph ${\mathcal{G}}$, and the edge set of $G$, if we ignore the orientations, is a subset of the edge set of ${\mathcal{G}}$. If we want to emphasize that $uv\in E({\mathcal{G}})$ is also in $E(G)$, then we say that $uv$ is \emph{present} in $G$. If $G$ is an oriented subgraph of ${\mathcal{G}}$ and $uv\in E({\mathcal{G}})-E(G)$, then we say that $uv$ is a \emph{hidden edge} for $G$. In our figures, we will denote hidden edges by dotted lines. See Figure \ref{fig:facet_graphs} for an undirected graph and two oriented subgraphs of it.
A \emph{cut} in a graph ${\mathcal{G}}=(V,E)$ is a set of edges $C^*$ that contains exactly the edges going between $V_0$ and $V_1$ for some partition (into nonempty parts) $V_0 \sqcup V_1 = V$. We sometimes denote a cut by its vertex partition $(V_0,V_1)$, and call $V_0$ and $V_1$ the \emph{sides} of the cut. In a directed graph, we say that $C^*$ (or $(V_0,V_1)$) is a \emph{directed cut} if each edge points in the same direction, i.e., all of them point from $V_0$ to $V_1$ or all of them point from $V_1$ to $V_0$. Otherwise we say that the cut is \emph{undirected}.
We remark that in connected graphs, the edges of the cut do uniquely determine the corresponding vertex partition. Working with vertex partitions has the advantage that we can specify cuts in various subgraphs of ${\mathcal{G}}$ at once, even though the cuts will typically be different when viewed as sets of edges.
For a digraph $G$ and a set of vertices $V_1 \subset V$, we denote by $G[V_1]$ the graph whose vertex set is $V_1$, and whose edge set is the set of edges of $G$ where both endpoints are from $V_1$.
For a connected graph or weakly connected digraph, a \emph{spanning subgraph} is a subgraph that contains all vertices
and is itself connected/weakly connected.
A \emph{spanning tree} in a directed or undirected graph is a cycle-free spanning subgraph. (Thus, in a directed graph, a subgraph is a spanning tree if and only if it is a spanning tree when we ignore the orientations.) If $T$ is a spanning tree of a graph ${\mathcal{G}}$, then for any edge $e\in T$, the subgraph $T-e$ has exactly two connected components. Let $V_0$ and $V_1$ be the vertex sets of these components and call the cut $(V_0,V_1)$ of ${\mathcal{G}}$, denoted by $C^*_{\mathcal{G}}(T,e)$, the \emph{fundamental cut} of $e$ with respect to $T$. (We may omit the subscript if the graph is clear from the context.) We define fundamental cuts for spanning trees of directed graphs the same way (i.e., edge directions do not play a role in the definiton).
\subsection{Symmetric edge polytopes and their facets} Let ${\mathcal{G}}$ be a graph with vertex set $V$. For $v\in V$, we denote by $\mathbf{v}$ the vector in $\mathbf{R}^V$ whose coordinate corresponding to $v$ is 1, and all the other coordinates are 0.
\begin{defn} The \emph{symmetric edge polytope} of ${\mathcal{G}}$ is
$$P_{\mathcal{G}} =\conv\{\mathbf{u}-\mathbf{v}, \mathbf{v}-\mathbf{u} \mid uv\in E({\mathcal{G}})\} \subset \mathbb{R}^{V(\mathcal{G})}.$$ \end{defn}
We note that $P_{\mathcal{G}}$ lies along the hyperplane of $\mathbb{R}^{V(\mathcal{G})}$ where the sum of components is $0$. In \cite{arithm_symedgepoly}, the facets of the symmetric edge polytope are characterized as follows.
\begin{thm}\cite[Theorem 3.1]{arithm_symedgepoly}\label{thm:facets_of_P_G}
For a graph ${\mathcal{G}}$, the facets of $P_{\mathcal{G}}$ are enumerated by layerings $l\colon V\to \mathbb{Z}$ such that
\begin{itemize}
\item[(i)] $|l(v)-l(u)|\leq 1$ for each $uv\in E({\mathcal{G}})$
\item[(ii)] The subset of edges $E_l=\{
{uv}\mid uv\in E({\mathcal{G}}), |l(v)-l(u)|=1\}$ forms a spanning subgraph of ${\mathcal{G}}$. (In cases when ${\mathcal{G}}$ is disconnected, this means that $(V,E_l)$ has the same number of
connected components as ${\mathcal{G}}$.)
\end{itemize}
For such an $l$, the corresponding facet is $\conv\{\mathbf v - \mathbf{u}\mid
{uv}\in E_l\text{ and }l(v)-l(u)=1\}$. \end{thm}
For each $l$ as above, the edges in $E_l$ are naturally directed from $u$ to $v$ where $l(v)-l(u)=1$. Let us call the directed graphs $(V,E_l)$, arising as in the theorem, the \emph{facet graphs} of ${\mathcal{G}}$. (They are subgraphs of ${\mathcal{G}}$ that are also given an orientation.) For an example, see Figure \ref{fig:facet_graphs}.
Notice that (as we suppose that ${\mathcal{G}}$ is connected) two layerings determine the same facet graph if any only if they differ by a constant. Let us also point out that the linear extension, to $\mathbb R^V$, of the layering $l$ serves as a conormal vector for the facet which corresponds to $l$.
\begin{figure}
\caption{An undirected graph and two of its subgraphs, with orientations. (They are in fact facet graphs.) Hidden edges are drawn by dotted lines.}
\label{fig:facet_graphs}
\end{figure}
In our paper \cite{semibalanced} we investigated exactly the polytopes seen in Theorem \ref{thm:facets_of_P_G}. Let us recall some definitions and results.
\begin{defn}[Root polytope of a digraph]
For a directed graph $G$, its \emph{root polytope} is defined as $\mathcal{Q}_G = \conv\{\mathbf v - \mathbf{u}\mid \overrightarrow{uv}\in E(G) \}$. \end{defn}
With this definition, the facets of $P_{\mathcal{G}}$ are exactly the root polytopes of the facet graphs of ${\mathcal{G}}$. Let us say a bit more about these digraphs.
\begin{defn}[Semi-balanced digraph]
A digraph $G$ is called \emph{semi-balanced} if for each cycle $C$, the numbers of the edges of $C$ pointing in the two cyclic directions are the same. \end{defn}
We recall a characterization of semi-balanced digraphs from \cite{semibalanced}.
\begin{thm}\cite{semibalanced}\label{thm:semibalanced_graphs_char}
$G$ is semi-balanced if and only if there is a layering $l\colon V \to \mathbb{Z}$ so that regarding the orientation of $G$, we have $l(h)-l(t)=1$ for each edge $\overrightarrow{th}$. \end{thm}
\begin{cor}
Any facet of $P_{\mathcal{G}}$
is the root polytope of a spanning subgraph of ${\mathcal{G}}$ that is oriented in a semi-balanced way.
If ${\mathcal{G}}$ is a bipartite graph, then the facets of $P_{\mathcal{G}}$ are in bijection with the semi-balanced orientations of ${\mathcal{G}}$. \end{cor}
\begin{proof}
The first statement follows directly from Theorems \ref{thm:facets_of_P_G} and \ref{thm:semibalanced_graphs_char}.
Now suppose that ${\mathcal{G}}$ is bipartite with vertex classes $U$ and $W$. Each facet has the form $\mathcal{Q}_{G_l}$ where we have $|l(u)-l(v)|=1$ for each $uv\in G_l$ and $G_l$ is a spanning subgraph of ${\mathcal{G}}$. This ensures that either $l(u)$ is odd for each $u\in U$ and $l(w)$ is even for each $w\in W$, or vice versa. But this also implies that we cannot have $l(u)=l(v)$ for any $uv\in {\mathcal{G}}$, whence $|l(u)-l(v)|=1$ for each edge $uv\in {\mathcal{G}}$. That is, each edge of ${\mathcal{G}}$ is present and receives an orientation in $G_l$.
Conversely, it is clear from Theorem \ref{thm:facets_of_P_G} that a layering that corresponds to a semi-balanced orientation defines a facet. \end{proof}
We will also need the following basic observations about the simplices within facets.
\begin{prop}\cite[Proposition 3.1]{semibalanced} For a semibalanced digraph $G$, within $\mathcal{Q}_G$, some vertices are affinely independent if and only if they correspond to the edge set of a forest. In particular, spanning trees of $G$ give rise to maximal simplices in $\mathcal Q_G$. \end{prop}
\begin{prop} \cite[Lemma 3.5]{semibalanced} \label{prop:unimodular_simplices} For any facet graph $G$ and spanning tree $T\subset G$, the simplex $\tilde{\mathcal{Q}}_T=\conv\{0, \mathbf{v}-\mathbf{u} \mid \overrightarrow{uv}\in E(G)\}$ is unimodular. \end{prop}
\subsection{Dissecting a facet of the symmetric edge polytope: Jaeger trees}
In \cite{semibalanced} we gave a method for dissecting the root polytope of a semi-balanced digraph into simplices, in a shellable way. This was done using the so-called Jaeger trees of the digraph. Let us recall some details.
Let $G$ be a semi-balanced digraph.
To define Jaeger trees of $G$, we need to fix a ribbon structure for $G$.
Here the notion of a \emph{ribbon structure} is independent of the graph's orientation: it is a family of cyclic permutations, namely for each vertex $x$ of $G$, a cyclic permutation of the edges incident to $x$ (the collection of in- and out-edges) is given. For an edge $xy$ of $G$, we use the following notations: \begin{itemize}
\item $yx^+_G$: the edge following $yx$ at $y$
\item $xy^+_G$: the edge following $xy$ at $x$.
\end{itemize} If $G$ is clear from the context, we omit the subscript.
In our applications, $G$ will often be a facet graph of an undirected graph ${\mathcal{G}}$. In this case, by fixing a ribbon structure for ${\mathcal{G}}$, all facet graphs $G$ inherit a ribbon structure from ${\mathcal{G}}$ in the natural way.
In addition to the ribbon structure, we also need to fix a \emph{basis} $(b_0,b_0b_1)$, where the \emph{base node} $b_0$ is an arbitrary vertex of $G$ and the \emph{base edge} $b_0b_1$ is an arbitrary edge incident to $b_0$ in $G$.
Note that no assumption is made about the orientation of $b_0b_1$.
Suppose that a ribbon structure and a basis are fixed. Then any spanning tree $T$ of $G$ gives us a natural ``walk'' in the graph $G$. This was defined by Bernardi \cite{Bernardi_first}, and following him we call it the \emph{tour of $T$}. The following definition is valid for both directed and undirected graphs, as orientations play no role in it.
\begin{defn}[Tour of a tree] \label{def:tour_of_a_tree}
Let ${\mathcal{G}}$ be a ribbon graph with a basis $(b_0,b_0b_1)$, and let $T$ be a spanning tree of ${\mathcal{G}}$.
The tour of $T$ is a sequence of node-edge pairs, starting with $(b_0, b_0b_1)$. If the current node-edge pair is $(x,xy)$ and $xy\notin T$, then the current node-edge pair of the next step is $(x,xy^+_{{\mathcal{G}}})$. If the current node-edge pair is $(x,xy)$ and $xy\in T$, then the current node-edge pair of the next step is $(y,yx^+_{\mathcal{G}})$. In the first case we say that the tour \emph{skips} $xy$ and in the second case we say that the tour \emph{traverses} $xy$. The tour stops right before when $(b_0,b_0b_1)$ would once again become the current node-edge pair. \end{defn}
Bernardi proved {\cite[Lemma 5]{Bernardi_first}} that in the tour of a spanning tree $T$, each edge $xy$ of $G$ becomes current edge twice, in one case with $x$ as current node, and in the other case with $y$ as current node.
\begin{ex}\label{ex:tour}
Figure \ref{fig:tour_and_Jaeger_ex} shows two spanning trees in a semi-balanced digraph. Let the ribbon structure be induced by the positive orientation of the plane and let the basis be $(v_0,v_0v_3)$.
The tour of the tree in the left panel is $(v_0, v_0v_3)$, $(v_3,v_3v_1)$, $(v_1,v_1v_4)$, $(v_1, v_1v_6)$, $(v_6,v_6v_2), (v_6,v_6v_0), (v_6,v_6v_1), (v_1, v_1v_3), (v_3, v_3v_0)$, $(v_0, v_0v_6)$, $(v_0, v_0v_5)$, $(v_5, v_5v_2)$, $(v_2, v_2v_6)$, $(v_2, v_2v_4)$, $(v_4, v_4v_1)$, $(v_4, v_4v_2)$, $(v_2, v_2v_5)$, $(v_5, v_5v_0)$.
The tour of the tree on the right is $(v_0, v_0v_3)$, $(v_0,v_0v_6)$, $(v_0,v_0v_5)$, $(v_5, v_5v_2)$, $(v_2,v_2v_6), (v_6,v_6v_0), (v_6,v_6v_1), (v_1, v_1v_3), (v_3, v_3v_0)$, $(v_3, v_3v_1)$, $(v_1, v_1v_4)$, $(v_1, v_1v_6)$, $(v_6, v_6v_2)$, $(v_2, v_2v_4)$, $(v_4, v_4v_1)$, $(v_4, v_4v_2)$, $(v_2, v_2v_5)$, $(v_5, v_5v_0)$. \end{ex}
The following is the key notion in this paper. Note that this is the point where orientations start to matter.
\begin{defn}[Jaeger tree] \label{def:jaegertree}
Let $G$ be a
digraph.
Given a fixed ribbon structure and basis for $G$, we call a spanning tree $T$ of $G$ \emph{Jaeger tree} if for each edge $\overrightarrow{th}\in G-T$, in the tour of $T$, the pair $(t,th)$ becomes current node-edge pair before $(h,th)$. In other words, in the tour of $T$, each non-edge is first seen at its tail. \end{defn}
\begin{figure}
\caption{Two spanning trees in a semi-balanced digraph.
The ribbon structure is induced by the positive orientation of the plane and the basis is indicated by the small gray arrow. The tree on the left is not a Jaeger tree, while the one on the right is.
See Examples \ref{ex:tour} and \ref{ex:Jaeger} for more details.
}
\label{fig:tour_and_Jaeger_ex}
\end{figure}
\begin{ex}\label{ex:Jaeger}
The tree of the left panel of Figure \ref{fig:tour_and_Jaeger_ex} is not a Jaeger tree, because in its tour $(v_6,v_6v_2)$ precedes $(v_2,v_2v_6)$ whereas the edge $v_2v_6$ is oriented from $v_2$ to $v_6$ (and similarly for $v_0v_6$). On the other hand, the tree in the right panel is a Jaeger tree. \end{ex}
Note that for a Jaeger tree, we require that the tail of a non-tree edge be seen before its head in the tour of $T$, but we do not care about the orientations of tree edges. However, when analyzing relationships of Jaeger trees, it will turn out that the latter can also be important. Hence we introduce the following terminology:
\begin{defn}
Let $T$ be a spanning tree of a ribbon digraph $G$, and let $b_0\in V(G)$. We say that $\overrightarrow{th}\in T$ is a \emph{tail-edge} of $T$ if $b_0$ and the tail $t$ of $\overrightarrow{th}$ are in the same component of $T-\overrightarrow{th}$. We call $\overrightarrow{th}$ a \emph{head-edge} otherwise. \end{defn}
Notice that $\overrightarrow{th}$ being a tail-edge is equivalent to the fact that (for an arbitrary ribbon structure and basis where $b_0$ is the base node) $(t,th)$ comes before $(h,th)$ in the tour of $T$. However, we can define tail-edges without refering to a ribbon structure, and fixing only the base node.
\begin{ex}
The right panel of Figure \ref{fig:tour_and_Jaeger_ex} shows a Jaeger tree (for the given ribbon structure and basis).
The tail edges of the tree are $v_0v_5$, $v_2v_4$, $v_2v_6$ and $v_1v_3$ while the head-edges are $v_5v_2$ and $v_6v_1$. \end{ex}
The following property makes Jaeger trees very useful for us.
\begin{thm}\label{thm:semibalanced_dissection}\cite{semibalanced}
For a semi-balanced digraph $G$, the simplices corresponding to Jaeger trees dissect the root polytope $\mathcal{Q}_G$. \end{thm}
Hence, if we take the Jaeger trees for each facet graph of ${\mathcal{G}}$ (for some ribbon structure and basis, which might even vary from facet to
facet) and add the origin to each
corresponding simplex as a new vertex, then we get a dissection of the symmetric edge polytope $P_{\mathcal{G}}$. Since by Proposition \ref{prop:unimodular_simplices} these simplices are all unimodular, we immediately obtain that the normalized volume of $P_{\mathcal{G}}$ is equal to the sum of the numbers of Jaeger trees over all facet graphs. (Note that, also by Thorem \ref{thm:semibalanced_dissection}, the number of Jaeger trees of a semibalanced digraph is independent of the chosen ribbon structure and basis.)
Jaeger trees of a semi-balanced digraph have a property that further enhances their usefulness:
The dissection by Jaeger trees is shellable, with the following shelling order.
\begin{defn}\label{def:ordering_of_Jaeger_trees_in_a_face} Let $T$ and $T'$ be Jaeger trees of a semi-balanced digraph $G$ (for some fixed ribbon structure and basis). We say that $T' \prec T$ if their respective tours first differ so that the current node-edge pair $(t,\overrightarrow{th})$ satisfies $\overrightarrow{th}\notin T'$ and $\overrightarrow{th} \in T$. (I.e., an edge seen at its tail is not included in $T'$ but is included in $T$.) \end{defn}
Note that as we supposed that $T$ and $T'$ were both Jaeger trees, the first difference of their tours necessarily looks as above and hence this is a complete ordering. In fact, the order has a natural extension to all spanning trees \cite{semibalanced}.
\begin{thm}\cite{semibalanced} \label{thm:shelling_for_semibalanced}
The order $\prec$ is a shelling order of the dissection given by Jaeger trees. For each Jaeger tree $T$, the number of facets of the corresponding simplex $Q_{T}$ that lie in the union of previous simplices,
equals the number of tail-edges $e\in T$ such that the fundamental cut $C^*(T,e)$ is not a directed cut. \end{thm}
\begin{remark} We remark that in \cite{semibalanced} those tail-edges of a Jaeger tree $T$ whose fundamental cuts are not directed are called the internally semipassive edges of $T$. This relates to an activity-like notion defined there; moreover, \cite[Lemma 6.4]{semibalanced} provides another equivalent descriptions of these edges. Namely, they are exactly the edges that arise as a ``first difference'' between the tours of $T$ and another Jaeger tree.
\end{remark}
In the next section, we show that the shellings of the facets can be put together into a shelling of the whole boundary (or, by coning over the origin, to a shelling of the dissection of $P_{\mathcal{G}}$). Moreover, we will be able to use this shelling to obtain a formula for the $h^*$-polynomial of $P_{\mathcal{G}}$.
\section{Shellability and $h^*$-vector}\label{sec:shelling_orders}
In this section we give a shellable dissection into unimodular simplices for an arbitrary symmetric edge polytope, and determine the $h$-vector of the shelling (and hence, by Proposition \ref{prop:h-dissect}, also the $h^*$-vector of the polytope).
We have already pointed out that (for an arbitrary fixed ribbon structure) Jaeger trees yield dissections of the facets of the symmetric edge polytope, moreover, for each facet, this dissection is shellable. By putting these dissections together we get a dissection of the whole surface of $P_{\mathcal{G}}$. In this section, we show that this dissection of the boundary of $P_{\mathcal{G}}$ is also shellable.
Note that for any dissection of the boundary of $P_{\mathcal{G}}$, we can add the origin to each simplex of the dissection, which results in a dissection for $P_{\mathcal{G}}$. This dissection of $P_{\mathcal{G}}$ is shellable if and only if the dissection of the boundary was shellable. Also, notice that the $h$-vectors coincide for the shelling of the boundary and for the shelling of $P_{\mathcal{G}}$. We will focus on the shellability of the dissection of the boundary.
In fact we will give two shelling orders. In the first one, our strategy will be to first choose an ordering of the facets, then build up the facets one-by-one in the chosen order. This will be a relatively flexible construction, where we do not need to use the same ribbon structure for different facet graphs. The only important point will be to use the same base point for each of them.
In the second shelling order, we use the same ribbon structure for all facet graphs, and give a recipe to directly compare Jaeger trees of different facets. This second type of shelling might start to build up a facet before some other one is finished.
\subsection{The face-by-face shelling} Fix a vertex $b_0$ of ${\mathcal{G}}$. For each facet graph $G$, take some ribbon structure and basis, such that the base point is $b_0$. (The ribbon structure and the base edge might be different for different facet graphs.)
Consider a real-valued weight function $f$ on the vertices of ${\mathcal{G}}$ that associates $f(b_0)=1$ to the base vertex, while for $v\in V-b_0$, we let $-1\leq f(v)\leq 0$ in such a way that $\sum_{v\in V-b_0} f(v)=-1$ and the values $\{f(v)\mid v\in V-b_0\}$ are linearly independent over $\mathbb{Z}$.
Recall that for each facet graph $G$, there is a layering $l$ such that $l(v)-l(u)=1$ for every $\overrightarrow{uv}\in G$ and $|l(u)-l(v)|\leq 1$ for every $uv\in {\mathcal{G}}$, moreover, two layerings $l$ and $l'$ define the same facet if and only if $l'(v)=l(v)+c$, with some constant $c$, for each vertex $v$. Let \[f(G)=\sum_{v\in V}l(v)\cdot f(v).\] Notice that as $\sum_{v\in V}f(v)=0$, this value is the same for any layering defining a given facet.
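As a quick numerical sanity check of the shift-invariance just noted, here is a short sketch; the weight function and layering below are arbitrary hypothetical choices satisfying the constraints ($f(b_0)=1$, the remaining weights negative and summing to $-1$):

```python
# Sketch (illustrative only): the valuation f(G) = sum_v l(v) * f(v) is
# unchanged when the layering l is shifted by a constant c, because the
# weights f(v) sum to zero. Both f and l below are hypothetical.
f = {'b0': 1.0, 'v1': -0.2, 'v2': -0.3, 'v3': -0.5}  # weights, summing to 0
l = {'b0': 0, 'v1': 1, 'v2': 1, 'v3': 2}             # a layering

def valuation(l, f):
    return sum(l[v] * f[v] for v in f)

c = 7
shifted = {v: l[v] + c for v in l}  # the shifted layering l' = l + c
assert abs(valuation(shifted, f) - valuation(l, f)) < 1e-9
```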
We arrange the facet graphs in decreasing order according to this valuation.
That is, let us label the facet graphs $G_1, G_2, \dots, G_M$ in such a way that $f(G_1) > f(G_2) > \dots> f(G_M)$.
\begin{remark} Geometrically, $f$ can be viewed as a covector that is a generic perturbation of the dual of the vector $\mathbf b'_0$, which in turn is the projection of the standard basis vector $\mathbf b_0$ on the hyperplane of $P_{\mathcal{G}}$. Each valuation $f(G)$ is the inner product of $f$ with the conormal of the facet which belongs to $G$. That is, facets are ordered in the following manner: We start from the origin and travel along a generic straight line specified by $f$. We list the facets as our trajectory crosses their planes; e.g., $G_1$ is the graph of the facet through which we first cross the boundary of $P_{\mathcal{G}}$. By the time we `reach infinity,' half of the facets are recorded. Then we travel along the other half-line of our straight line in the same direction, that is, this time toward the origin, and continue the process. As a result, the symmetric pairs of the already-recorded facets come up in the opposite order. This is well known to be a shelling order of the facets. Our challenge is to combine it with the dissections of the facets and to keep track of the $h$-vector.
\end{remark}
For each facet graph $G_i$ we denote the set of Jaeger trees of $G_i$, for the chosen ribbon structure and basis, by ${\mathcal{J}}(G_i)$. We also put ${\mathcal{J}}({\mathcal{G}})=\bigcup_{G \text{ is a facet graph of ${\mathcal{G}}$}}{\mathcal{J}}(G)$. Here it is somewhat important to take Jaeger trees as sets of directed edges; without that extra information, the same tree can have the Jaeger property with respect to multiple facet graphs. In this subsection we order the trees in ${\mathcal{J}}({\mathcal{G}})={\mathcal{J}}(G_1)\cup {\mathcal{J}}(G_2)\cup \dots\cup {\mathcal{J}}(G_M)$ in the following way.
\begin{defn}[face-by-face ordering of Jaeger trees, $<_f$] For $T\in {\mathcal{J}}(G_i)$ and $T'\in {\mathcal{J}}(G_j)$ we have $T<_f T'$ if and only if either $i<j$ or $i=j$ and $T\prec T'$ in the ordering of Jaeger trees of $G_i$ given in Definition \ref{def:ordering_of_Jaeger_trees_in_a_face}. \end{defn}
\begin{thm}\label{thm:shelling_of_symm_edge_poly}
The simplices corresponding to $\mathcal{J}({\mathcal{G}})$ form a shellable dissection of the boundary of $P_{\mathcal{G}}$, with shelling order $<_f$.
The terms of the resulting $h$-vector are $$h_i=|\{T\in{\mathcal{J}}({\mathcal{G}})\mid T \text{ has exactly $i$ tail-edges}\}|.$$ \end{thm}
Before giving a proof, let us point out a corollary for the $h^*$-vector of $P_{\mathcal{G}}$.
\begin{thm}\label{cor:h^*_of_sym_edge_poly}
$$(h^*_{P_{\mathcal{G}}})_i = |\{T\in{\mathcal{J}}({\mathcal{G}})\mid T \text{ has exactly $i$ tail-edges}\}|.$$ \end{thm}
\begin{proof} Consider the collection $\{\tilde{\mathcal{Q}}_T\mid T\in {\mathcal{J}}({\mathcal{G}})\}$. It results from coning over a dissection of the boundary. Hence by Theorem \ref{thm:shelling_of_symm_edge_poly} this is a shellable dissection whose $h$-vector is as specified in Theorem \ref{thm:shelling_of_symm_edge_poly}. By Proposition \ref{prop:unimodular_simplices} the simplices are unimodular. Hence by Proposition \ref{prop:h-dissect}, the $h^*$-vector agrees with the $h$-vector. \end{proof}
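Computationally, the theorem reduces the $h^*$-vector to a tally of tail-edge counts over Jaeger trees. A minimal sketch, with hypothetical counts standing in for an actual enumeration of ${\mathcal{J}}({\mathcal{G}})$:

```python
# Sketch: once each Jaeger tree is labeled with its number of tail-edges,
# the h^*-vector is just a tally. The counts below are hypothetical; a real
# computation would enumerate the Jaeger trees of all facet graphs.
from collections import Counter

tail_edge_counts = [0, 1, 1, 2, 1, 0, 2, 3]  # one entry per Jaeger tree
tally = Counter(tail_edge_counts)

h_vector = [tally[i] for i in range(max(tail_edge_counts) + 1)]
# The entries sum to the total number of simplices in the dissection.
assert sum(h_vector) == len(tail_edge_counts)
```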
To prove Theorem \ref{thm:shelling_of_symm_edge_poly}, first we need two lemmas on the relationship of facets.
\begin{lemma}\label{l:reversing_cut_pointing_away}
If $(V_0,V_1)$ is a directed cut in the facet graph $G$ such that $b_0\in V_0$ and all edges point from $V_0$ to $V_1$, then there exists another facet graph $G'$ such that $f(G')>f(G)$, moreover, edges of $G$ outside the cut $(V_0,V_1)$ are present in $G'$ with the same orientation as in $G$ (that is, each edge $uv\in{\mathcal{G}}$ with either $u,v\in V_0$ or $u,v\in V_1$ is either not present in $G$, or it is present in $G'$ with the same orientation as in $G$), and
$(V_0,V_1)$ is also a directed cut in $G'$, but in $G'$ each edge points from $V_1$ to $V_0$. \end{lemma}
We note that $G$ and $G'$ above may not share the same edge set, not even when orientations are ignored. Thus, as sets of edges, the cuts in $G$ and in $G'$ that are induced by the same partition $(V_0,V_1)$ of $V$ may be different.
\begin{proof}
Let $l$ be a layering for $G$.
Suppose that $G[V_1]$ has $k$ connected components ($k$ might be 1), and let the vertex sets of these components be $V_{1,1}, \dots, V_{1,k}$ (with $V_1=V_{1,1}\sqcup \dots \sqcup V_{1,k}$).
Without loss of generality, we can suppose that there is some $r$ with $0\leq r \leq k$ such that for $i=1, \dots, r$ there is an edge $x_iy_i\in {\mathcal{G}}$ with $x_i\in V_0$, $y_i\in V_{1,i}$, and $l(x_i)=l(y_i)$ (that is, an edge of ${\mathcal{G}}$ that is not present in $G$), and for $i>r$, there is no such edge.
Take the layering $l'$ defined by
$$l'(v) = \left\{\begin{array}{cl}
l(v)+1 & \text{if $v\in V_0$}, \\
l(v) & \text{if $v\in V_{1,i}$ for $i\leq r$}, \\
l(v)-1 & \text{if $v\in V_{1,i}$ for $i > r$}.
\end{array} \right.
$$
We claim that $l'$ defines a facet graph $G'$ with the stated properties. We simultaneously show that $l'$
satisfies the conditions of Theorem \ref{thm:facets_of_P_G}
and that $G'$ satisfies the conditions of the lemma. If $uv\in {\mathcal{G}}$ is an edge with either $u,v\in V_0$, $u,v\in V_{1,1} \cup \dots \cup V_{1,r}$, or $u,v\in V_{1,r+1} \cup \dots \cup V_{1,k}$, then $l'(v)-l'(u)=l(v)-l(u)$.
If $v\in V_{1,1} \cup \dots \cup V_{1,r}$ and $u\in V_{1,r+1}\cup \dots \cup V_{1,k}$ then $l'(v)-l'(u)=l(v)-l(u)+1$, but (since the $G[V_{1,i}]$ are connected components) we had $l(v)-l(u)=0$. This implies that the $l'$-difference for edges outside the cut $(V_0,V_1)$ is at most 1, moreover, while we may gain new edges in $G'$, the orientation of each edge of $G$ outside the cut remains the same.
If $u\in V_0$ and $v\in V_{1,i}$ with $i\leq r$, then we had either $l(v)-l(u)=0$ or $l(v)-l(u)=1$ as each edge of the cut $(V_0,V_1)$ pointed toward $V_1$ (if it was present in $G$). Hence for these edges, we have $l'(v)-l'(u)=-1$ if $uv$ was not present in $G$ (and we supposed that there is at least one such edge) and $l'(v)-l'(u)=0$ if $uv$ was present in $G$. Thus, each such edge has layer-difference at most $1$ with respect to $l'$, and each edge of $G'$ between $V_0$ and $V_{1,i}$ points toward $V_0$.
If $uv$ is an edge of ${\mathcal{G}}$ so that $u\in V_0$ and $v\in V_{1,i}$ with $i>r$, then we had $l(v)-l(u)=1$ by the definition of $r$. Hence for these edges, we have $l'(v)-l'(u)=-1$. Thus, each such edge has $l'$-difference at most 1, and each edge of $G'$ between $V_0$ and $V_{1,i}$ points toward $V_0$.
To show that $l'$
gives a facet graph $G'$,
it remains to show that $G'$ is connected. The induced subgraphs $G'[V_0], G'[V_{1,1}], \dots, G'[V_{1,k}]$ remain connected, as the edges within them did not change. For $i\leq r$, the graph $G'[V_{1,i}]$ stays connected to $G'[V_0]$ through the edge $x_iy_i$. For $i>r$, all edges between $V_0$ and $V_{1,i}$ remain in $G'$, hence $G'[V_{1,i}]$ is also connected to $G'[V_0]$ for $i>r$.
Finally we need to ascertain that $f(G')>f(G)$, but this follows from the definition of $f$ and the obvious $f(G')=f(G)+\sum_{v\in V_0}f(v)-\sum_{v\in V_{1,r+1}\cup \dots \cup V_{1,k}}f(v)$: as $b_0\in V_0$, we have $\sum_{v\in V_0}f(v)>0$ and $\sum_{v\in V_{1,r+1}\cup \dots \cup V_{1,k}}f(v)<0$. \end{proof}
\begin{lemma}\label{l:nonfirst_orientation_exits_dircut}
If $G_i$ and $G_j$ are facet graphs with $i>j$ (that is, $f(G_i)<f(G_j)$), then there exists a cut $(V_0,V_1)$ such that $b_0\in V_0$ and each edge points from $V_1$ to $V_0$ in $G_j$ while each edge points from $V_0$ to $V_1$ in $G_i$. \end{lemma}
\begin{proof}
Let $l_j$ be the layering of $G_j$ and $l_i$ be the layering of $G_i$. Shift the layerings so that $l_j(v)\geq l_i(v)$ for each $v\in V$, but there exists some $v\in V$ with $l_i(v)=l_j(v)$ (this is achievable for any two layerings). Then the difference $l_j(v)-l_i(v)$ is a nonnegative integer for each $v\in V$. Let $U(k)=\{v\in V\mid l_j(v)-l_i(v)=k\}$.
For each edge $uv$ of ${\mathcal{G}}$, we have $|l_i(u)-l_i(v)|\leq 1$ and similarly for $l_j$, hence $|(l_j(v)-l_i(v))-(l_j(u)-l_i(u))|=|(l_j(v)-l_j(u))+ (l_i(u)-l_i(v))|\leq 2$. Thus, any edge of ${\mathcal{G}}$ connects vertices within some $U(k)$, or between $U(k)$ and $U(k+1)$, or between $U(k)$ and $U(k+2)$ for some $k$. Also, any edge of ${\mathcal{G}}$ between $U(k)$ and $U(k+2)$ points toward $U(k+2)$ in $G_j$ and toward $U(k)$ in $G_i$. Similarly, any edge of ${\mathcal{G}}$ between $U(k)$ and $U(k+1)$ is either not in $G_j$ and points toward $U(k)$ in $G_i$, or it points toward $U(k+1)$ in $G_j$ and it is not present in $G_i$. Hence for any $k\geq 0$, the partition $(U(0) \sqcup \dots \sqcup U(k), U(k+1)\sqcup U(k+2)\sqcup\dots )$ induces an oriented cut in both $G_i$ and $G_j$, with each edge oriented toward $U(0)\sqcup \dots \sqcup U(k)$ in $G_i$ and each edge oriented toward $U(k+1)\sqcup U(k+2)\sqcup \dots$ in $G_j$.
We claim that at least one of these cuts satisfies the condition of the Lemma.
For this, it is enough to show that $b_0\not\in U(0)$. Indeed, if $b_0\in U(k)$ for $k\neq 0$, then the sets $V_0=U(k)\sqcup U(k+1)\sqcup \dots$ and $V_1=U(0)\sqcup \dots \sqcup U(k-1)$ provide the desired cut.
Now notice that if we had $b_0\in U(0)$, then the only vertex with $f>0$ would have the same coefficient in $f(G_i)$ and $f(G_j)$, while some vertices with negative $f$-value would have a larger coefficient in $f(G_j)$ than in $f(G_i)$. This would imply $f(G_j)<f(G_i)$ and thus contradict our assumption. \end{proof}
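The partition into the sets $U(k)$ used in the proof is straightforward to compute. The following sketch illustrates it on two hypothetical layerings, already shifted so that their difference is nonnegative and attains $0$:

```python
# Sketch: the sets U(k) = {v : l_j(v) - l_i(v) = k} from the proof.
# The two layerings below are hypothetical; they are assumed to be
# shifted so that l_j(v) >= l_i(v) everywhere, with equality somewhere.
l_i = {'b0': 0, 'a': -1, 'b': -1, 'c': -2}
l_j = {'b0': 2, 'a': 1, 'b': -1, 'c': 0}

diff = {v: l_j[v] - l_i[v] for v in l_i}
assert min(diff.values()) == 0 and all(d >= 0 for d in diff.values())

U = {}
for v, d in diff.items():
    U.setdefault(d, set()).add(v)
# Here b0 lands in U(2), so (U(2), U(0) u U(1)) would give the desired cut.
```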
\begin{remark} Although we will not require it later, let us point out that Lemma \ref{l:nonfirst_orientation_exits_dircut} provides an explicit description of the first facet graph $G_1$: it is defined by the layering $l(v)=-\dist(b_0,v)$, where $\dist(b_0,v)$ means the minimal number of edges in a path between $b_0$ and $v$ in the (undirected) graph ${\mathcal{G}}$.
The function $l$ clearly gives us a layering, that is, the difference between the endpoints of any edge is at most $1$. Moreover, we cannot have a directed cut $(V_0,V_1)$ with $b_0\in V_0$ and each edge oriented toward $V_1$, since for any vertex $v\in V_1$, the edges of a shortest path to $b_0$ all point toward $b_0$. However, by Lemma \ref{l:nonfirst_orientation_exits_dircut}, if this were not the first facet graph then we would need to have such a cut. \end{remark}
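The explicit description of $G_1$ amounts to a breadth-first search. The following sketch, on a small hypothetical graph, computes the layering $l(v)=-\dist(b_0,v)$ and orients the edges accordingly (edges whose endpoints lie in the same layer are omitted, as they are not present in $G_1$):

```python
# Sketch: computing the first facet graph G_1 via the layering
# l(v) = -dist(b0, v). The graph below is a hypothetical example.
from collections import deque

edges = [('b0', 'a'), ('a', 'b'), ('b0', 'b'), ('b', 'c'), ('a', 'c')]
adj = {}
for u, v in edges:
    adj.setdefault(u, []).append(v)
    adj.setdefault(v, []).append(u)

def bfs_layering(source):
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return {v: -d for v, d in dist.items()}  # l(v) = -dist(b0, v)

l = bfs_layering('b0')
# An edge uv belongs to G_1, oriented u -> v, iff l(v) - l(u) = 1;
# note that every edge of G_1 then points toward b0.
G1 = [(u, v) if l[v] - l[u] == 1 else (v, u)
      for u, v in edges if abs(l[u] - l[v]) == 1]
```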
\begin{proof}[Proof of Theorem \ref{thm:shelling_of_symm_edge_poly}] Take an arbitrary Jaeger tree $T$ that is not the first Jaeger tree in ${\mathcal{J}}({\mathcal{G}})$ according to $<_f$. We need to prove that the simplex $\mathcal{Q}_T$ meets $\bigcup_{T'<_f T}\mathcal{Q}_{T'}$ in the union of the facets $\mathcal{Q}_{T-e}$, where $e$ ranges over the tail-edges of $T$. First, we will show that the facet $\mathcal{Q}_{T-e}$ is a subset of $\bigcup_{T'<_f T}\mathcal{Q}_{T'}$ when $e$ is a tail-edge of $T$. Then we will show that if $T$ is not the first Jaeger tree, then the simplex $\mathcal{Q}_T$ is not disjoint from $\bigcup_{T'<_f T}\mathcal{Q}_{T'}$, and any point $\mathbf p\in \mathcal{Q}_T \cap (\bigcup_{T'<_f T}\mathcal{Q}_{T'})$ is in $\mathcal{Q}_{T-e}$ for some tail-edge $e$ of $T$.
Let $G_i$ be the facet graph that contains $T$ as a Jaeger tree. If some $e\in T$ is a tail-edge of $T$ and $C^*(T,e)$ is not an oriented cut, then by Theorem \ref{thm:shelling_for_semibalanced}, $$ \mathcal{Q}_{T-e}\subseteq \bigcup_{T'\in{\mathcal{J}}(G_i):\, T' \prec T} \mathcal{Q}_{T'}\subseteq \bigcup_{T'<_f T}\mathcal{Q}_{T'}. $$
If $e\in T$ is a tail-edge and $C^*(T,e)$ is an oriented cut, then $C^*(T,e)$ points away from its side containing $b_0$. Now by Lemma \ref{l:reversing_cut_pointing_away}, there exists another facet graph $G_j$ with $f(G_j)>f(G_i)$ (and hence $j<i$) such that all edges of $G_i$ outside $C^*(T,e)$ are in $G_j$ with the same orientation. Hence $\mathcal{Q}_{T-e}\subseteq Q_{G_i}\cap Q_{G_j}$ and in conclusion, $\mathcal{Q}_{T-e}\subseteq Q_{G_j}\subseteq \bigcup_{T'<_f T}\mathcal{Q}_{T'}$.
Next we claim that if $T$ is not the first Jaeger tree, then $\mathcal{Q}_T$ is not disjoint from $\bigcup_{T'<_f T}\mathcal{Q}_{T'}$. If $T$ is not the first Jaeger tree from its facet graph $G_i$, then $\mathcal{Q}_T$ intersects the union of the simplices of previous Jaeger trees of $G_i$ by Theorem \ref{thm:shelling_for_semibalanced} applied to $G_i$. If $T$ is the first Jaeger tree of its facet, then it is enough to show that there is a tail-edge $e\in T$ as we already proved that in this case $\mathcal{Q}_{T-e}\subseteq \bigcup_{T'<_f T}\mathcal{Q}_{T'}$. If $T$ is the first Jaeger tree in its facet, but not the first tree altogether, then its facet is not the first facet. In this case, by Lemma \ref{l:nonfirst_orientation_exits_dircut}, there exists a directed cut $C^*$ in $G_i$ that points away from its side containing $b_0$. Now when we first take an edge from $C^*$ in the tour of $T$, it will be reached at its tail, as claimed.
Finally we show that any point $\mathbf p\in \mathcal{Q}_T \cap (\bigcup_{T'<_f T}\mathcal{Q}_{T'})$ is in $\mathcal{Q}_{T-e}$ for some tail-edge $e$ of $T$.
Suppose that $\mathbf p\in \mathcal{Q}_T\cap \mathcal{Q}_{T'}$ where $T'<_f T$. If $T'\in{\mathcal{J}}(G_i)$, then by Theorem \ref{thm:shelling_for_semibalanced}, there is a tail-edge $e\in T$ (moreover, $C^*(T,e)$ is not directed) such that $\mathbf{p}\in \mathcal{Q}_{T-e}$. If $T'\in {\mathcal{J}}(G_j)$ for $j \neq i$, then necessarily $j<i$. By Lemma \ref{l:nonfirst_orientation_exits_dircut} there is a directed cut $(V_0,V_1)$ with $b_0\in V_0$ so that each edge points from $V_0$ to $V_1$ in $G_i$. Moreover, in $G_j$, each edge between $V_0$ and $V_1$ points toward $V_0$. Hence if $\mathbf{p}\in \mathcal{Q}_T \cap \mathcal{Q}_{T'}$ (that is, $\mathbf p$ is a convex linear combination of vectors representing edges in $T$, as well as those in $T'$), then the coefficient of each edge of the cut $(V_0,V_1)$ needs to be zero in $\mathbf{p}$. Let $e$ be the first edge of $T$ that we reach from the cut $(V_0,V_1)$ in the tour of $T$. As in $G_i$ each edge points away from $V_0$, we reach $e$ at its tail. As $e$ is in the cut,
our earlier observation shows that $\mathbf{p}\in \mathcal{Q}_{T-e}$. \end{proof}
\subsection{The quadratic shelling} We give a second type of shelling order for $P_{\mathcal{G}}$, which we call the quadratic shelling order. In this shelling, we will directly compare Jaeger trees of different facet graphs based on their tours.
Fix a ribbon structure and a basis $(b_0,b_0b_1)$. This time we will use this data for each facet graph. In fact, had we done the same for the face-by-face shelling order, the two orders would not be dramatically different. As we are about to see, each simplex of the dissection attaches to the previous ones along the same set of facets; in particular, each simplex contributes the same amount to the $h$-vector with respect to either order.
To facilitate the comparison of Jaeger trees of different facet graphs, in this section, when considering the tour of a tree $T$ of a facet graph $G$, we also keep track of the hidden edges. (See Example \ref{ex:<_4}.) In other words, we consider the tour of $T$ in ${\mathcal{G}}$. When defining Jaeger trees, we will simply disregard the hidden edges (or in other words, we do not care about which endpoint of a hidden edge we reached first). This way, the definition of a Jaeger tree does not change.
\begin{figure}
\caption{Jaeger trees in various facet graphs of a graph. The tree edges are thick. Hidden edges are dotted.
The basis is $(v_0,v_0v_1)$, which is indicated by the little gray arrow.}
\label{fig:<_4_Jaeger_trees}
\end{figure}
For any two spanning trees $T$, $T'$
of a ribbon graph, we say that the tours of $T$ and $T'$ agree up to some point if until that moment the current node-edge pairs of the two tours agree, and moreover the status of each current edge has been the same, meaning that it was either hidden with respect to both trees, or present in both and oriented in the same way.
In this subsection we order the Jaeger trees in ${\mathcal{J}}({\mathcal{G}})$ the following way:
\begin{defn}[quadratic ordering of Jaeger trees, $<_4$] Let $T_1,T_2,T_3,T_4\in{\mathcal{J}}({\mathcal{G}})$. If their tours agree until the node-edge pair $(u,uv)$ becomes current, but at that point $\overrightarrow{vu}\in T_1$, the edge $uv$ is not present in the facet graph of $T_2$, the oriented edge $\overrightarrow{uv}$ is in the facet graph of $T_3$ but $\overrightarrow{uv}\notin T_3$, finally $\overrightarrow{uv}\in T_4$, then we put $T_1 <_4 T_2 <_4 T_3 <_4 T_4$. \end{defn}
Note that, as we consider the first difference between two Jaeger trees, it is not possible that $\overrightarrow{vu}$ is in the facet graph of $T$ but $\overrightarrow{vu}\notin T$. Indeed, by the Jaeger property, in this case $(v,vu)$ would have become current earlier than $(u,uv)$, and that would have been an earlier difference.
\begin{ex}\label{ex:<_4} Figure \ref{fig:<_4_Jaeger_trees} shows four Jaeger trees in various facet graphs, with ribbon structure induced by the positive orientation of the plane, and basis $(v_0,v_0v_1)$. For the tree of the leftmost panel, its tour (considered in ${\mathcal{G}}$) is $(v_0,v_0v_1)$, $(v_1,v_1v_2)$, $(v_2,v_2v_3)$, $(v_3,v_3v_0)$, $(v_3,v_3v_1)$, $(v_3,v_3v_2)$, $(v_2,v_2v_1)$, $(v_1,v_1v_3)$, $(v_1,v_1v_0)$, $(v_0,v_0v_3)$. The quadratic ordering $<_4$ arranges these Jaeger trees in increasing order from left to right. Indeed, the first difference between the tours of any two of the trees is at the node-edge pair $(v_0,v_0v_1)$, which is a head-edge for the leftmost tree, a hidden edge for the second one, a nonedge seen from its tail for the third one, and a tail edge for the rightmost one. \end{ex}
In preparation for our main claim about the quadratic order, we need the following two statements.
\begin{lemma}\label{l:edge_in_or_out_in_two_facets}
If there are two facet graphs $G$ and $G'$ such that an edge $xy$ is present in $G$ oriented as $\overrightarrow{xy}$ and it is not present in $G'$, then there is a cut $(V_0,V_1)$ such that $x\in V_0$, $y\in V_1$, moreover, in $G$ each edge between $V_0$ and $V_1$ is either not present or oriented from $V_0$ to $V_1$, and in $G'$ each edge between $V_0$ and $V_1$ is either not present or oriented from $V_1$ to $V_0$. \end{lemma}
\begin{proof}
Let $l$ be the layering of $G$ and $l'$ be the layering of $G'$, and shift them so that $l(y)=l'(y)$. Notice that in this case, $l'(x)-l(x)=1$. Just like in the proof of Lemma \ref{l:nonfirst_orientation_exits_dircut}, for $i\in\mathbb{Z}$ let $U(i)=\{v\in V\mid l'(v)-l(v)=i\}$. Take $V_0=\bigcup_{i\leq 0}U(i)$ and $V_1=\bigcup_{i\geq 1}U(i)$. As in the earlier proof, $(V_0,V_1)$ will be a cut, where in $G$ each edge between $V_0$ and $V_1$ is either not present or oriented from $V_0$ to $V_1$, and in $G'$ each edge between $V_0$ and $V_1$ is either not present or oriented from $V_1$ to $V_0$. \end{proof}
\begin{lemma}\label{l:edge_ordered_differently_in_two_facets}
If there are two facet graphs $G$ and $G'$ such that an edge $xy$ is oriented as $\overrightarrow{xy}$ in $G$ and it is oriented as $\overrightarrow{yx}$ in $G'$, then there is a cut $(V_0,V_1)$ such that $x\in V_0$, $y\in V_1$, moreover, in $G$ each edge between $V_0$ and $V_1$ is either not present or oriented from $V_0$ to $V_1$, and in $G'$ each edge between $V_0$ and $V_1$ is either not present or oriented from $V_1$ to $V_0$. \end{lemma}
\begin{proof}
Let $l$ be the layering of $G$ and $l'$ be the layering of $G'$, and shift them so that $l(y)=l'(y)$. Notice that in this case, $l'(x)-l(x)=2$. Once again, for $i\in\mathbb{Z}$ let $U(i)=\{v\in V\mid l'(v)-l(v)=i\}$. Take $V_0=\bigcup_{i\leq 0}U(i)$ and $V_1=\bigcup_{i\geq 1}U(i)$. As in the previous proof, $(V_0,V_1)$ will be a cut, where in $G$ each edge between $V_0$ and $V_1$ is either not present or oriented from $V_0$ to $V_1$, and in $G'$ each edge between $V_0$ and $V_1$ is either not present or oriented from $V_1$ to $V_0$. \end{proof}
\begin{thm}\label{t:quadratic_shelling} For any connected graph ${\mathcal{G}}$, the quadratic order $<_4$ is a shelling order on the set of simplices corresponding to the collection of Jaeger trees $\mathcal{J}({\mathcal{G}})$.
\end{thm}
As before, the terms of the resulting $h$-vector are $$h_i=|\{T\in{\mathcal{J}}({\mathcal{G}})\mid T \text{ has exactly $i$ tail-edges}\}|.$$ The coincidence here with the formula of Theorem \ref{thm:shelling_of_symm_edge_poly} comes as no surprise in light of Proposition \ref{prop:h-dissect} and Corollary \ref{cor:h^*_of_sym_edge_poly}. Recall though that in this subsection $\mathcal J({\mathcal{G}})$ is defined by using (restrictions of) the same ribbon structure, whereas in the case of Theorem \ref{thm:shelling_of_symm_edge_poly}, the definition was more flexible.
\begin{proof}
The outline of the proof is the same as that of the proof of Theorem \ref{thm:shelling_of_symm_edge_poly}. First we show that if $e\in T$ is a tail-edge then $\mathcal{Q}_{T-e}\subseteq \bigcup_{T'<_4 T}\mathcal{Q}_{T'}$.
Then we show that if $T$ is not the first tree in $<_4$, then $\mathcal{Q}_T$ is not disjoint from $\bigcup_{T'<_4 T}\mathcal{Q}_{T'}$, moreover, if $\mathbf x\in \mathcal{Q}_T\cap (\bigcup_{T'<_4 T}\mathcal{Q}_{T'})$ then $\mathbf x\in \mathcal{Q}_{T-e}$ for some tail-edge $e\in T$.
First, let us notice that if $T,T'\in {\mathcal{J}}(G)$ for some facet graph $G$, then $T$ and $T'$ are ordered the same way in $<_4$ as in the ordering of Jaeger trees of $G$. Indeed, as they live in the same directed subgraph, the first difference between their tours must be that one of them contains an edge reached at its tail, while the other one does not. In this case, they are ordered the same way as for the ordering $\prec$ of ${\mathcal{J}}(G)$.
Let $T\in{\mathcal{J}}(G)$.
Suppose that $e\in T$ is a tail-edge and $C^*(T,e)$ is not directed. Then by Theorem \ref{thm:shelling_for_semibalanced}, $\mathcal{Q}_{T-e}\subseteq \bigcup_{ T'\in{\mathcal{J}}(G), T' \prec T}\mathcal{Q}_{T'}\subseteq \bigcup_{T'<_4 T}\mathcal{Q}_{T'}$.
Now suppose that $e\in T$ is a tail-edge but $C^*(T,e)$ is a directed cut. We show a set of Jaeger trees all preceding $T$ in $<_4$ such that the union of their simplices contains $\mathcal{Q}_{T-e}$.
Let the sides of the cut $C^*(T,e)$ be $V_0$ and $V_1$ with $b_0\in V_0$.
Take the tour of $T$. As $T$ is a Jaeger tree, $e$ has its tail in $V_0$. Hence each edge of $C^*_G(T,e)$ has its tail in $V_0$. Thus, $e$ is the last edge of $C^*_G(T,e)$ to become current in the tour of $T$.
Take the facet graph $G'$ whose existence is proved in Lemma \ref{l:reversing_cut_pointing_away}. Notice that as in our case $C^*(T,e)$ is a fundamental cut, $G[V_1]$ is connected. Hence $G$ and $G'$ agree outside the cut $(V_0,V_1)$. If there is no edge in ${\mathcal{G}}$ between $V_0$ and $V_1$ that is not present in $G$, then we obtain $G'$ from $G$ by reversing the cut $C^*_G(T,e)$. If there are edges of ${\mathcal{G}}$ between $V_0$ and $V_1$ that are not present in $G$, then we get $G'$ from $G$ by removing the edges of $C^*_G(T,e)$ and adding the non-present edges of ${\mathcal{G}}$ between $V_0$ and $V_1$ directed toward $V_0$.
In the first case, let $uv$ be the edge of $C^*_G(T,e)$ that first becomes current in the tour of $T$. In the second case, let $uv$ be the first non-edge of $G$ between $V_0$ and $V_1$ that becomes current in the tour of $T$. In both cases, let $u\in V_0$ and $v\in V_1$.
In both cases, $uv$ is in $G'$ with orientation $\overrightarrow{vu}$. Let $vw$ be the first edge of $G'$ following $vu$ in the ribbon structure of $G'$ at $v$ that is not in $G'(V_0,V_1)$.
Let $T_0$ be the component of $T-e$ containing $b_0$, and let $T_1$ be the other component.
Let ${\mathcal{J}}(G'[V_1])$ denote the set of Jaeger trees of $G'[V_1]$ with base $(v,vw)$, where the ribbon structure is inherited from $G'$. Notice the following:
$$
\mathcal{T}:=\{T_0\cup \overleftarrow{uv} \cup T'_1: T'_1\in{\mathcal{J}}(G'[V_1])\}\subseteq {\mathcal{J}}(G').
$$
First, these are indeed all spanning trees of $G'$. Next we show that they are Jaeger trees.
Indeed, until reaching $\overleftarrow{uv}$, our walk agrees with the tour of $T$, hence we do not cut any edge at its head. We need to traverse $\overrightarrow{vu}$ as we reach it at its head. After traversing $\overrightarrow{vu}$, we arrive at $G'[V_1]=G[V_1]$. When we encounter an edge of $G'(V_0,V_1)$, we can cut it, since now the tails are in the side of $G'[V_1]$. Otherwise we traverse a Jaeger tree, hence within $V_1$ we also cut the non-tree edges at their tail. Arriving back to $T_0$ we finish the traversal as in the tour of $T$ with the exception that the edges of $G'(V_0,V_1)$ are already cut from their other side.
Notice that the trees described above all precede $T$ in $<_4$, as the first difference is one of the following:
In case 1, if $uv\neq e$, we traverse an edge from the head direction in $T'$ and cut the edge (from the tail direction) in $T$. In case 1, if $uv=e$, we traverse an edge from the head direction in $T'$ and traverse the edge from the tail direction in $T$.
In case 2, we traverse an edge from the head direction in $T'$ while it is not present in the facet graph of $T$.
As $\bigcup_{T'\in{\mathcal{J}}(G'[V_1])} \mathcal{Q}_{T'}=Q_{G'[V_1]}=Q_{G[V_1]}\supseteq \mathcal{Q}_{T_1}$, we have $\mathcal{Q}_{T-e}\subseteq \bigcup_{T'\in\mathcal{T}} \mathcal{Q}_{T'}\subseteq \bigcup_{T'<_4 T} \mathcal{Q}_{T'}$.
Now we show that if $T$ is not the first tree according to $<_4$, then $\mathcal{Q}_T$ is not disjoint from $\bigcup_{T'<_4 T}\mathcal{Q}_{T'}$. For this, it is enough to show that there is a tail-edge $e\in T$, as we already proved that in this case $\mathcal{Q}_{T-e}\subseteq \bigcup_{T'<_4 T}\mathcal{Q}_{T'}$.
Let $T'$ be any tree preceding $T$ in $<_4$. Let us take the first difference between $T'$ and $T$. If the first difference is an edge that is included into $T$ from the tail direction, then we are done. Otherwise, since $T'<_4 T$, there are three possibilities:
Case 1: An edge $e$ is included into $T'$ from the head direction, and $e$ is not present in $G$.
Case 2: An edge $e$ is included into $T'$ from the head direction, and $e$ is seen from its tail in the tour of $T$, but not included into $T$.
Case 3: We see an edge $e$ in the tour of $T'$ that is not present in $G'$, and $e$ is seen from its tail in the tour of $T$, but $e\notin T$.
In cases 1 and 3, we can use Lemma \ref{l:edge_in_or_out_in_two_facets} to deduce that there is a cut $(V_0,V_1)$ such that $e$ is in the cut, moreover, in $G$ each edge of the cut is either not present or oriented from $V_0$ to $V_1$, and in $G'$ each edge of the cut is either not present or oriented from $V_1$ to $V_0$. In both cases we see that $T$ and $T'$ need to differ when we first reach an edge of the cut, since no edge of the cut occurs both in $G$ and $G'$ with the same orientation. Hence, in both cases, $e$ is the edge that we first reach from the cut, and we conclude that $b_0\in V_0$.
As $T$ is a tree, it needs to contain an edge from the cut $G(V_0,V_1)$. However, as we see, each edge of $G(V_0,V_1)$ points from $V_0$ to $V_1$, hence the first edge of $T$ from the cut is necessarily reached from its tail. This finishes the proof for Cases 1 and 3.
In case 2, we can repeat essentially the same argument using Lemma \ref{l:edge_ordered_differently_in_two_facets}.
Now we show that any point $\mathbf{p}\in \mathcal{Q}_T \cap (\bigcup_{T'<_4 T} \mathcal{Q}_{T'})$ is in $\mathcal{Q}_{T-e}$ for some tail-edge $e\in T$. This will finish the proof.
Suppose that $\mathbf{p}\in \mathcal{Q}_T\cap \mathcal{Q}_{T'}$ where $T'<_4 T$.
Let $G$ and $G'$ be the facet graphs of $T$ and $T'$, respectively. Write $\mathbf{p}$ as a linear combination $\sum_{\overrightarrow{e}\in T\cap T'} \lambda_{\overrightarrow{e}} \mathbf{x}_{\overrightarrow{e}}$.
Let $H\subset T \cap T'$ be the set of edges of $T$ that are taken with a positive coefficient in the above combination. It is enough to find an edge $e$ of $T-H$ that is reached at its tail in the tour of $T$, since in this case $\mathbf{p}\in \mathcal{Q}_{T-e}$.
Examine the first difference between the tours of $T$ and $T'$.
The edges of $H$ are part of $T'$, and with the same orientation as in $G$.
Hence the first difference between $T$ and $T'$ is an edge outside of $H$. If the first difference is a tail-edge of $T$, then we are done. If not, then there are three cases:
Case 1: An edge $e$ is included into $T'$ from the head direction, and $e$ is not present in $G$.
Case 2: An edge $e$ is included into $T'$ from the head direction, and $e$ is seen from its tail in the tour of $T$, but not included into $T$.
Case 3: We see an edge $e$ in the tour of $T'$ that is not present in $G'$, and $e$ is seen from its tail in the tour of $T$, but $e\notin T$.
In cases 1 and 3, we can use Lemma \ref{l:edge_in_or_out_in_two_facets} to deduce that there is a cut $(V_0,V_1)$ such that in $G$ each edge of the cut is either not present or oriented from $V_0$ to $V_1$, and in $G'$ each edge of the cut is either not present or oriented from $V_1$ to $V_0$. In both cases we see that $T$ and $T'$ need to differ when we first reach an edge of the cut, since no edge of the cut occurs both in $G$ and $G'$ with the same orientation. Hence, in both cases, $e$ is the edge that we first reach from the cut, and we conclude that $b_0\in V_0$. We also see that no edge of the cut is in $H$.
As $T$ is a tree, it needs to contain an edge from the cut $G(V_0,V_1)$. However, as we see, each edge of $G(V_0,V_1)$ points from $V_0$ to $V_1$, hence the first edge of $T$ from the cut is necessarily reached from its tail. This finishes the proof for Cases 1 and 3.
In case 2, we can repeat essentially the same argument using Lemma \ref{l:edge_ordered_differently_in_two_facets}. \end{proof}
\section{An interpretation of $(\gamma_{\mathcal{G}})_1$} \label{sec:gamma_1}
Ohsugi and Tsuchiya conjecture that the $h^*$-polynomial of the symmetric edge polytope is $\gamma$-positive \cite[Conjecture 4.11]{OT}.
This has been proved for several graph classes. For cycles \cite{OT}, complete bipartite graphs \cite{arithm_symedgepoly}, and complete graphs \cite{Root_poly_complete_graph}, the $h^*$-polynomial was explicitly computed. Also, Ohsugi and Tsuchiya proved that the $h^*$-polynomial of the symmetric edge polytope is $\gamma$-positive if ${\mathcal{G}}$ has a vertex that is connected to every other vertex \cite[Theorem 5.3]{OT}, or if ${\mathcal{G}}$ is bipartite and each partite class contains a vertex that is connected to each vertex of the other partite class \cite[Corollary 5.5]{OT}.
In this section we prove that $\gamma_1$ is nonnegative for any graph; moreover, it equals twice the cyclomatic number. Independently of us, D'Al\`i et al.\ \cite{dalietal} found the same result with a simple proof, and they also establish the nonnegativity of $\gamma_2$. (Note that $\gamma_0=1$ holds trivially for each graph, since the constant term of the $h^*$-vector is known to be 1.)
\begin{thm}\label{thm:gamma(1)=2g}
For any simple undirected graph ${\mathcal{G}}$, $$\gamma_1=2g,$$ where $g=|E({\mathcal{G}})|-|V({\mathcal{G}})|+1$. \end{thm}
Let us fix a ribbon structure and a basis for ${\mathcal{G}}$, and let ${\mathcal{J}}({\mathcal{G}})$ be the union of the Jaeger trees of the facet graphs of ${\mathcal{G}}$ for this ribbon structure and basis. Let ${\mathcal{J}}_1({\mathcal{G}})$ denote the set of Jaeger trees in ${\mathcal{J}}({\mathcal{G}})$ with exactly one tail-edge. By Corollary \ref{cor:h^*_of_sym_edge_poly}, $h^*_1=|{\mathcal{J}}_1({\mathcal{G}})|$.
We know that $\gamma_0 = 1$ since $h^*_0=1$. By \cite{OT}, the degree of $h^*_{P_{\mathcal{G}}}$ is equal to $|V|-1$. (Note that this also follows from the fact that the last simplex of the shelling of the boundary will glue on all of its $|V|-1$ facets.)
Hence $h^*_1=\gamma_1+(|V|-1)\gamma_0$. As $\gamma_0=1$, the claim $\gamma_1=2g=2(|E|-|V|+1)$ is equivalent to $h^*_1=2(|E|-|V|+1)+(|V|-1)=2|E|-|V|+1$. Hence for proving Theorem \ref{thm:gamma(1)=2g}, it is enough to show that $h^*_1 = |{\mathcal{J}}_1({\mathcal{G}})| = 2|E|-|V|+1$.
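This bookkeeping is elementary but easy to get wrong; a throwaway Python check (the sample sizes are arbitrary, chosen only for illustration) confirms the arithmetic:

```python
# gamma_1 = 2g combined with h*_1 = gamma_1 + (|V|-1)*gamma_0 forces
# h*_1 = 2|E| - |V| + 1; check the arithmetic on a few sample sizes
for E, V in [(3, 3), (7, 5), (12, 8), (30, 11)]:
    g = E - V + 1                       # cyclomatic number
    assert 2 * g + (V - 1) == 2 * E - V + 1
```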
We call a Jaeger tree $T$ an $\overrightarrow{e}$-stick Jaeger tree if $\overrightarrow{e}\in T$ is the unique tail-edge of $T$. (The name is motivated by the fact that in this case $\mathcal{Q}_T$ sticks to the previous simplices at the face $\mathcal{Q}_{T-e}$.) We will do a bit more than count the Jaeger trees with exactly one tail-edge: we will also determine which oriented edges occur as these tail-edges. Let $G_1$ again denote the facet graph of ${\mathcal{G}}$ with maximal $f$-value. (That is, the facet graph defined by $l(v)=dist(b_0,v)$.) Also, let $T_1$ be the first Jaeger tree of $G_1$ with respect to $\prec$ (which is also the first Jaeger tree in ${\mathcal{J}}({\mathcal{G}})$ with respect to both $<_f$ and $<_4$). The following lemma is the heart of the proof of Theorem \ref{thm:gamma(1)=2g}. \begin{lemma}\label{l:one-stick_trees}
For each $\overrightarrow{e}\in T_1$, there is no $\overrightarrow{e}$-stick Jaeger tree in ${\mathcal{J}}({\mathcal{G}})$, and there is exactly one $\overleftarrow{e}$-stick Jaeger tree in ${\mathcal{J}}({\mathcal{G}})$.
For each edge $e$ of ${\mathcal{G}}$ outside of $T_1$, there is exactly one $\overrightarrow{e}$-stick Jaeger tree and exactly one $\overleftarrow{e}$-stick Jaeger tree in ${\mathcal{J}}({\mathcal{G}})$. \end{lemma}
We start by showing that for any oriented edge $\overrightarrow{e}$, there is at most one $\overrightarrow{e}$-stick Jaeger tree. This is in fact true in greater generality, hence we state it in this more general form.
Let $\{\overrightarrow{e}_1, \dots \overrightarrow{e}_r\}$ be a set of oriented edges. Then we call a Jaeger tree $T\in {\mathcal{J}}({\mathcal{G}})$ an $\{\overrightarrow{e}_1, \dots \overrightarrow{e}_r\}$-stick Jaeger tree, if $\{\overrightarrow{e}_1, \dots \overrightarrow{e}_r\}$ is exactly the set of tail-edges of $T$ (that is, all these edges are in $T$ and they are first reached at their tail in the tour of $T$, moreover, all other edges of $T$ are first reached at their head).
\begin{lemma}\label{l:at_most_one_sticky_Jaeger_tree}
For any (possibly empty) set of oriented edges $\{\overrightarrow{e}_1, \dots \overrightarrow{e}_r\}$, there is at most one $\{\overrightarrow{e}_1, \dots \overrightarrow{e}_r\}$-stick Jaeger tree in ${\mathcal{J}}({\mathcal{G}})$. \end{lemma}
We can further strengthen the above lemma in the following way. We will prove this stronger version. \begin{lemma}\label{l:one-stick_stronger}
Let $\{\overrightarrow{e}_1, \dots \overrightarrow{e}_r\}$ be a set of oriented edges.
Suppose that there is a Jaeger tree $T$ such that $\{\overrightarrow{e}_1, \dots \overrightarrow{e}_r\}\subseteq T$, and $ \{\overrightarrow{e}_1, \dots \overrightarrow{e}_r\}$ contains all the tail-edges of $T$ (and possibly some head-edges). Then there is no $\{\overrightarrow{e}_1, \dots \overrightarrow{e}_r\}$-stick Jaeger tree $T'$, except possibly for $T$. \end{lemma} \begin{proof}
We can suppose that $\{\overrightarrow{e}_1, \dots \overrightarrow{e}_i\}$ are tail-edges and $\{\overrightarrow{e}_{i+1}, \dots \overrightarrow{e}_r\}$ are head-edges in $T$.
Suppose for a contradiction that there is a $\{\overrightarrow{e}_1, \dots \overrightarrow{e}_r\}$-stick Jaeger tree $T'$.
If $T$ and $T'$ are Jaeger trees of the same facet graph $G$, then the first difference in their tours needs to be that an edge $f$ is included into one of them, and not included into the other. As both trees live in $G$, the edge $f$ is oriented the same way in the two tours. As $T$ and $T'$ are both Jaeger, $\overrightarrow{f}$ needs to be reached at its tail, hence $f$ is a tail-edge in either $T$ or $T'$. However, as $\{\overrightarrow{e}_1, \dots \overrightarrow{e}_r\} \subseteq T\cap T'$, $\overrightarrow{f}\notin \{\overrightarrow{e}_1, \dots \overrightarrow{e}_r\}$. This is a contradiction, since $\{\overrightarrow{e}_1, \dots \overrightarrow{e}_r\}$ should contain all tail-edges of both $T$ and $T'$.
Now suppose that the facet graph of $T$ is $G$ and the facet graph of $T'$ is $G'\neq G$.
Suppose that $f(G)>f(G')$. Then by Lemma \ref{l:nonfirst_orientation_exits_dircut} there exists a cut $(V_0, V_1)$ such that $b_0\in V_0$ and in $G(V_0,V_1)$ each edge is either not present or points from $V_1$ to $V_0$, while in $G'(V_0,V_1)$ each edge is either not present or points from $V_0$ to $V_1$.
As the edges $\{\overrightarrow{e}_1, \dots \overrightarrow{e}_r\}$ are present both in $T$ and in $T'$ with the same orientation, they are present in both $G$ and $G'$ with this orientation, hence none of them is in the cut $(V_0,V_1)$.
As $T'$ is connected, it needs to contain at least one edge from the cut $G'(V_0,V_1)$. Moreover, the edge $\overrightarrow{f}$ of $T'$ that is first reached from $G'(V_0,V_1)$ will be reached at its tail.
As by the above reasoning $\overrightarrow{f}\notin \{\overrightarrow{e}_1, \dots \overrightarrow{e}_r\}$, $T'$ cannot be an $\{\overrightarrow{e}_1, \dots \overrightarrow{e}_r\}$-stick Jaeger tree.
If $f(G')>f(G)$, then by the same argument, we conclude that $T$ needs to contain a tail-edge $\overrightarrow{f}\notin \{\overrightarrow{e}_1, \dots \overrightarrow{e}_r\}$, which is again a contradiction. \end{proof}
Lemma \ref{l:one-stick_stronger} implies that in all the cases listed in Lemma \ref{l:one-stick_trees}, there is at most one $\overrightarrow{e}$-stick Jaeger tree. Moreover, it also implies that there is no $\overrightarrow{e}$-stick Jaeger tree for $\overrightarrow{e}\in T_1$. Indeed, $\overrightarrow{e}\in T_1$ is a head-edge of $T_1$ (since $T_1$ has only head-edges). Hence we can apply Lemma \ref{l:one-stick_stronger} to $\{\overrightarrow{e}\}$.
However, to characterize Jaeger trees in ${\mathcal{J}}_1({\mathcal{G}})$ we also need ways to construct Jaeger trees with one tail-edge. For this, we will use the following greedy procedure. Intuitively, we traverse the graph and build up a tree while obeying the Jaeger rule (not cutting edges at their head), and always cutting edges at their tail if they are not already in the tree. We do not consider hidden edges in this process.
\begin{defn}[greedy tree]\label{def:greedy_tree}
Let $G$ be a ribbon digraph with a basis $(b_0,b_0b_1)$.
Let us call the outcome of the following procedure a greedy tree. We start with the empty subgraph $H=\emptyset$, and with current vertex $b_0$ and current edge $b_0b_1$. At any moment, if the current node-edge pair is $(h, ht)$ where $h$ is the head of $ht$, then: if $(t,th)$ has not yet been a current node-edge pair, we include $ht$ into $H$, traverse $ht$, and take $(t, th^+)$ as the next current node-edge pair; if $(t,th)$ has already been a current node-edge pair, we do not include $ht$ into $H$, and take $(h, ht^+)$ as the next current node-edge pair.
If at some moment the current node-edge pair is $(t, th)$ where $t$ is the tail of $th$, and $(h, ht)$ has not yet been a current node-edge pair, then we do not include $th$ into $H$ and take $(t, th^+)$ as the next current node-edge pair. If $(h, ht)$ has already been a current node-edge pair, then we take $(h, ht^+)$ as the next current node-edge pair. The process stops when a node-edge pair becomes current for the second time. The output is the subgraph $H$. \end{defn} See the left panel of Figure \ref{fig:greedy_and_almost_greedy_trees} for an example.
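The greedy procedure is easy to implement. The following Python sketch (not part of the proof; the encoding of the ribbon structure as cyclic neighbor lists, and all names, are our own choices) follows Definition \ref{def:greedy_tree} literally; hidden edges are simply omitted from the input.

```python
from typing import Dict, List, Set, Tuple

def greedy_tree(rot: Dict[str, List[str]],
                arcs: Set[Tuple[str, str]],
                b0: str, b1: str) -> Set[Tuple[str, str]]:
    """Greedy tree of a ribbon digraph with basis (b0, b0b1).

    rot[v] lists the neighbors of v in the cyclic order of the ribbon
    structure (hidden edges are simply not listed); arcs contains the
    directed edges as (tail, head) pairs.  Returns the edge set H.
    """
    def succ(v, u):
        # the edge following vu in the cyclic order around v
        i = rot[v].index(u)
        return rot[v][(i + 1) % len(rot[v])]

    H = set()
    seen = set()          # node-edge pairs that have been current
    v, u = b0, b1         # current node-edge pair (v, vu)
    while (v, u) not in seen:
        seen.add((v, u))
        if (u, v) in arcs:            # v is the head of the edge uv
            if (u, v) not in seen:    # pair (t, th) not yet current:
                H.add((u, v))         # include the edge and traverse it
                v, u = u, succ(u, v)
            else:
                u = succ(v, u)        # do not include, move on around v
        else:                         # v is the tail of the edge vu
            if (u, v) in seen:
                v, u = u, succ(u, v)  # edge is already in H: traverse it
            else:
                u = succ(v, u)        # cut the edge at its tail
    return H
```

For example, on a three-vertex path with both edges oriented toward $b_0$ this returns a spanning (hence Jaeger) tree, while with both edges oriented away from $b_0$ it returns the empty graph, illustrating the directed-cut obstruction discussed after Lemma \ref{l:greedy_tree_is_a_tree}.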
\begin{lemma}\label{l:greedy_tree_is_a_tree}
The above process produces a (not necessarily spanning) tree. \end{lemma} \begin{proof}
Suppose for a contradiction that a cycle appears in $H$, and suppose that this first happens when an edge $\overrightarrow{xy}$ is included into $H$. Stop the process immediately before the inclusion of $\overrightarrow{xy}$. Then the current node-edge pair is $(y,yx)$. Let $C$ be the unique cycle that we get upon including $\overrightarrow{xy}$ into $H$. Let us call its vertices $y=v_0, v_1, \dots v_k= x$.
As $C$ is semi-balanced, half the edges need to be oriented in one cyclic direction, and half of them in the other one. Hence we need to have at least one edge $v_jv_{j+1}$ that is oriented as $\overrightarrow{v_{j+1}v_{j}}$ (as $\overrightarrow{xy}$ stands in the opposite direction to this).
Notice that until now, during the process, around any vertex $v$, the node-edge pairs $\{(v, vu): uv \in E\}$ became current in an order compatible with the ribbon structure. Hence if a vertex $v$ is first reached through an edge $uv$, then $(v,vu)$ becomes current only after each $(v, vu')$ for $u'\in\Gamma(v)-u$ has become current (where $\Gamma(v)$ denotes all in-, and out-neighbors of $v$).
Notice also, that during the process, we always move along the (currently present) edges of $H$. Hence (since we have been in $x$, and then we later get to $y$) we must have traversed each edge $v_iv_{i+1}$ from $v_{i+1}$ to $v_{i}$. That is, each node-edge pair $(v_{i+1},v_{i+1}v_{i})$ (for $i=0, \dots k-1$) has been current before $(y,yx)$.
Take the edge $v_jv_{j+1}$ which is oriented as $\overrightarrow{v_{j+1}v_{j}}$. As $\overrightarrow{v_{j+1}v_j}\in H$, $(v_{j},v_{j}v_{j+1})$ was current before $(v_{j+1},v_{j+1}v_j)$. Hence $v_{j+1}$ was first reached through $\overleftarrow{v_jv_{j+1}}$. Thus, by our previous note, each node-edge pair $(v_{j+1},v_{j+1}u)$ for $u\neq v_j$ became current before $(v_{j+1},v_{j+1}v_j)$. In particular, $(v_{j+1},v_{j+1}v_{j+2})$ also became current before $(v_{j+1},v_{j+1}v_j)$. As we have not traversed a cycle by that point, this must be the first time when $v_{j+1}v_{j+2}$ became current. Hence (since $v_{j+1}v_{j+2} \in H$), we need to have the orientation $ \overleftarrow{v_{j+1}v_{j+2}}$.
Continuing this way, we get that $v_{k-1}x$ also has to be oriented as $\overleftarrow{v_{k-1}x}$. But then $(x,\overrightarrow{xy})$ has to become current before $(x,\overrightarrow{xv_{k-1}})$, and hence before $(y,\overleftarrow{yx})$. But this implies that we will not include $\overrightarrow{xy}$ into $H$, a contradiction.
\end{proof}
Note the following: \begin{claim} If the greedy tree of a digraph $G$ is a spanning tree, then it is a Jaeger tree. \end{claim} \begin{proof}
This follows from the construction, as the tour of the greedy tree agrees with the walk as we build it up, and we always include any edge that is first reached at its head. \end{proof}
The greedy tree might not be spanning. Indeed, if there is a cut $(V_0,V_1)$ in the graph $G$ such that $b_0\in V_0$ and all edges of $G(V_0,V_1)$ point toward $V_1$, then the process will never get to $V_1$. However, this is the only obstacle that can prevent the greedy tree from being spanning. Indeed, suppose that the greedy tree is not spanning for a digraph, and let $V_0$ be the vertex set of the greedy tree, and let $V_1$ be the rest of the vertices. We claim that each edge of the cut $(V_0,V_1)$ is oriented toward $V_1$. At any node of $V_0$, all incident edges eventually become current. Hence if there were any edges in the cut $(V_0,V_1)$ oriented toward $V_0$, then the first reached such edge would be included into $H$, a contradiction.
Hence in the facet graph $G_1$ (that does not contain a bad cut by Lemma \ref{l:reversing_cut_pointing_away}), the greedy tree will be a Jaeger tree. It is also clear by its construction that it will not contain any tail-edge. We have seen in the proofs of Theorems \ref{thm:shelling_of_symm_edge_poly} and \ref{t:quadratic_shelling} that for both $<_f$ and $<_4$ if a Jaeger tree is not the first one, then it contains at least one tail-edge. Hence the greedy tree of $G_1$ is the first Jaeger tree according to both $<_f$ and $<_4$, the one that we were referring to as $T_1$.
To understand ${\mathcal{J}}_1({\mathcal{G}})$, we will also need a slightly modified version of the greedy tree construction.
\begin{defn}[almost-greedy tree]
Let $G$ be a ribbon digraph, $(b_0,b_0b_1)$ a basis, and $\overrightarrow{uv}$ an arbitrary edge.
Let us call the outcome of the following procedure an $\overrightarrow{uv}$-almost greedy tree. We follow the steps of Definition \ref{def:greedy_tree}, with the exception that if $(u,uv)$ becomes a current node-edge pair before $(v, vu)$, then (when it becomes current) we include $uv$ into $H$, and take $(v,vu^+)$ as the next current node-edge pair. \end{defn} See the rightmost panel of Figure \ref{fig:greedy_and_almost_greedy_trees} for an example of an almost greedy tree.
\begin{lemma}\label{l:almost_greedy_tree}
The above process produces a (not necessarily spanning) tree. \end{lemma} \begin{proof}
The proof of Lemma \ref{l:greedy_tree_is_a_tree} carries over with a few modifications. Once again suppose for a contradiction that a cycle $C$ appears, and stop the process when it first happens. Suppose again that in this moment $(y,yx)$ is the current edge (that is, we include $xy$ into $H$ and this creates the cycle $C$). Once again denote the vertices of $C$ by $y=v_0, v_1, \dots v_k= x$.
As $G$ does not contain multiple edges, and every cycle of the semi-balanced graph $G$ has even length, $|C|\geq 4$. If $xy \neq uv$ then $xy$ needs to have orientation $\overrightarrow{xy}$, as we put it into $H$ when $y$ is the current node.
There must be at least 2 edges in $C$ standing opposite to $\overrightarrow{xy}$, and hence these edges are oriented as $\overrightarrow{v_{i+1}v_{i}}$. One of these might be $uv$, but at least one of them has to be a ``regular'' edge. For that edge, we can repeat the argument of the proof of Lemma \ref{l:greedy_tree_is_a_tree}, and obtain that $(x,xy)$ was current before $(y,xy)$, which is a contradiction, as in such a case we would not put $xy$ into $H$.
If $xy = uv$, then it might be oriented as $\overrightarrow{yx}$. As $|C|\geq 4$, we still need to have some other edge $v_iv_{i+1}$ oriented as $\overrightarrow{v_{i+1}v_{i}}$. In this case, we also conclude that $(x,xy)$ was current before $(y,xy)$ which is still a contradiction, since in this case we would already have included $xy$ into $H$ at its endpoint $x$.
\end{proof}
\begin{figure}
\caption{We consider the ribbon structure induced by the positive orientation of the plane, with basis $(y,ya)$. \\ Left panel: The first facet graph $G_1$ of the underlying undirected graph (hidden edges are dotted), and its greedy tree (thick edges). \\ Middle panel: The sets $S_x^d$ and $S_x^h$ for the edge $\protect\overrightarrow{e} = \protect\overrightarrow{xy}$, as defined in the proof of Lemma \ref{l:one-stick_trees}. \\ Right panel: The orientation obtained for $xy$ in Case 3 of the proof of Lemma \ref{l:one-stick_trees}, and the $\protect\overrightarrow{yx}$ - almost greedy tree (thick edges). }
\label{fig:greedy_and_almost_greedy_trees}
\end{figure}
\begin{proof}[Proof of Lemma \ref{l:one-stick_trees}]
\textbf{Case 1:} Take an edge $\overrightarrow{e}\in T_1$. We show that $\overrightarrow{e}$ cannot be the unique tail-edge in a Jaeger tree (of any facet graph). Indeed, as $T_1$ has no tail-edges, Lemma \ref{l:one-stick_stronger} applied to $\{\overrightarrow{e}\}$ tells us that there is no $\overrightarrow{e}$-stick Jaeger tree.
\textbf{Case 2:} Let $\overrightarrow{e}\in G_1-T_1$.
Lemma \ref{l:at_most_one_sticky_Jaeger_tree} tells us that we cannot have more than one $\overrightarrow{e}$-stick Jaeger tree.
Take the $\overrightarrow{e}$-almost greedy tree in $G_1$, and call it $T$. We show that $T$ is an $\overrightarrow{e}$-stick Jaeger tree.
By Lemma \ref{l:almost_greedy_tree}, this is a tree. Also, we claim that $\overrightarrow{e}\in T$ and $\overrightarrow{e}$ is a tail-edge of $T$. Indeed, since $T_1$ is the greedy tree of $G_1$, and $\overrightarrow{e}\notin T_1$,
in the greedy tree process, at some point we reach $\overrightarrow{e}$ at its tail (and do not include it into $T_1$) before seeing $\overrightarrow{e}$ from its head. Hence in the $\overrightarrow{e}$-almost greedy tree in $G_1$, $\overrightarrow{e}$ will be included so that it is reached at its tail. We also claim that $T$ is spanning. Indeed, the only way for some vertices not to be reached would be if they were separated from $b_0$ by a cut directed away from the part of $b_0$, but there is no such cut in $G_1$.
Notice that the tour of $T$ agrees with the way we traversed the graph when building up $T$. In that process, we traversed each edge that was first seen from the head direction. Hence $T$ is a Jaeger tree. Moreover, by construction, $\overrightarrow{e}$ is the only tail-edge, hence $T$ is indeed an $\overrightarrow{e}$-stick Jaeger tree.
\textbf{Case 3:}
Now we show that for an edge $\overrightarrow{e}\in G_1$, (no matter whether $\overrightarrow{e}$ is in $T_1$ or not) there is exactly one $\overleftarrow{e}$-stick Jaeger tree for ${\mathcal{G}}$. Lemma \ref{l:at_most_one_sticky_Jaeger_tree} tells us that there is at most one.
To show that there is at least one $\overleftarrow{e}$-stick Jaeger tree, we need to find out which orientation should contain this Jaeger tree. To get this orientation, take the following layering:
Let $l$ be the layering of $G_1$ (that is, $l(v)$ is the distance in ${\mathcal{G}}$ from $b_0$). We would like to modify this so that $\overrightarrow{e}=\overrightarrow{xy}$ is reversed. Take $l'(v)=l(v)$ for $v\neq x$ and $l'(x)=l(x)+2$. This will cause $xy$ to get reversed, but it might not be an appropriate layer function. Indeed, if there was any edge $\overrightarrow{wx}$ pointing to $x$ in $G_1$, then now $l'(x)-l'(w)=3$. Also, if there was a hidden edge $zx$ incident to $x$ then we have $l'(x)-l'(z)=2$. Hence we need to increase $l'$ by 2 on the in-neighbors of $x$, and on the in-neighbors of those, etc. And we need to increase $l'$ by 1 on the vertices that are connected by a hidden edge to a node on which $l$ was increased by 2, and also on their in-neighbors, etc.
More formally, let $$S^d_x=\{v\in V: \text{ $x$ is reachable on a directed path from $v$ in $G_1$}\},$$ and let
\begin{align*}
S^h_x=\{v\in V: \exists u \in V \text{ such that $u$ is reachable from $v$ on a directed}\\ \text{ path in $G_1$ and $u$ is connected to $S^d_x$ by a hidden edge}\}.
\end{align*}
For an example, see the middle panel of Figure \ref{fig:greedy_and_almost_greedy_trees}. Then let
$$
l''(v) = \left\{\begin{array}{cl}
l(v)+2 & \text{if $v\in S^d_x$} \\
l(v)+1 & \text{if $v \in S^h_x$}\\
l(v) & \text{otherwise}
\end{array} \right..
$$
It is easy to check that $l''$ is a good layering, that is, $|l''(u)-l''(v)|\leq 1$ for each edge $uv \in E({\mathcal{G}})$.
Also, we claim that $l''(y)=l(y)$, thus, $xy$ indeed gets reversed. Indeed, each vertex $v\in S^d_x$ has $l(v)<l(y)$, and each vertex $u\in S^h_x$ has $l(u)\leq l(v)$ for some $v\in S^d_x$, hence $y\notin S^d_x\cup S^h_x$.
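The two reachability sets and the corrected layering amount to two breadth-first searches on the reversed edges. The following Python sketch is ours (the data layout and all names are assumptions for illustration, not notation from the paper):

```python
from collections import deque

def modified_layering(vertices, arcs, hidden, l, x):
    """The sets S^d_x, S^h_x and the layering l'' used to reverse the
    edge xy (a sketch; the data layout is our own assumption).

    arcs: directed edges of G_1 as (tail, head) pairs;
    hidden: hidden edges of G_1 as frozensets {u, v};
    l: the distance layering of G_1; x: the tail of the edge to reverse.
    """
    rev = {v: [] for v in vertices}
    for t, h in arcs:
        rev[h].append(t)

    def ancestors(seeds):
        # vertices from which some seed is reachable on a directed path
        found, queue = set(seeds), deque(seeds)
        while queue:
            w = queue.popleft()
            for t in rev[w]:
                if t not in found:
                    found.add(t)
                    queue.append(t)
        return found

    S_d = ancestors({x})
    # endpoints outside S^d_x of hidden edges touching S^d_x ...
    seeds = {w for e in hidden for w in e
             if w not in S_d and e - {w} <= S_d}
    S_h = ancestors(seeds) - S_d   # ... and everything that reaches them
    l2 = {v: l[v] + (2 if v in S_d else 1 if v in S_h else 0)
          for v in vertices}
    return S_d, S_h, l2
```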
Hence $l''$ defines a semi-balanced subgraph $G$ in which $xy$ is oriented as $\overrightarrow{yx}$. (For an example, see the right panel of Figure \ref{fig:greedy_and_almost_greedy_trees}.) We show that $G$ has a Jaeger tree where $\overrightarrow{yx}$ is the only tail-edge. For this, let us take the $\overrightarrow{yx}$-almost greedy tree $T$ in $G$. We need to show that $T$ is spanning. Then by construction it also follows that it is a Jaeger tree. Also, we need to show that $T$ indeed contains $\overrightarrow{yx}$, reached at its tail.
Notice that $(V-(S^d_x\cup S^h_x), S^d_x\cup S^h_x)$ defines a directed cut in $G$: all edges of ${\mathcal{G}}(V-(S^d_x\cup S^h_x), S^d_x\cup S^h_x)$ are either not present, or point from $V-(S^d_x \cup S^h_x)$ to $S^d_x \cup S^h_x$.
In $G$, all edges of ${\mathcal{G}}$ between $S^h_x$ and $S^d_x$ point toward $S^d_x$. As by the definition of $S^h_x$, each vertex of $S^h_x$ is connected to some vertex of $S^d_x$, we conclude that in $G$, $x$ is reachable from each vertex of $S^d_x\cup S^h_x$.
Also, $b_0\in V-(S^d_x\cup S^h_x)$ as $b_0$ is a sink in $G_1$. Hence the almost greedy tree process will start at $V-(S^d_x\cup S^h_x)$ and as it can only get through the cut $G(V-(S^d_x\cup S^h_x),S^d_x\cup S^h_x)$ via $\overrightarrow{yx}$, it will eventually reach $\overrightarrow{yx}$ at $y$. Thus, it also traverses $\overrightarrow{yx}$. Now from each vertex, either $x$ or $b_0$ is reachable in $G$. Hence $T$ spans $G$.
We conclude that $T$ is a Jaeger tree of $G$, and by construction $\overrightarrow{yx}$ is the only tail-edge of $T$.
\textbf{Case 4:}
Let $e\in{\mathcal{G}}$ be an edge that is not present in $G_1$. We show that there is exactly one $\overrightarrow{e}$-stick Jaeger tree, and exactly one $\overleftarrow{e}$-stick Jaeger tree. Once again, Lemma \ref{l:at_most_one_sticky_Jaeger_tree} implies the ``at most one'' part.
Let $e=xy$. The fact that $xy$ is hidden in $G_1$ means that $l(x)=l(y)$.
By symmetry, it is enough to show that there is a $\overrightarrow{yx}$-stick Jaeger tree. For this, take $l'(x)=l(x)+1$, while $l'(v)=l(v)$ for $v\in V-x$. This might not be a good layering, because if $x$ has any in-edge $\overrightarrow{wx}$, then $l'(x)-l'(w)=2$. Hence we need to increase $l'$ by one on the vertices from where $x$ is reachable on a directed path. Let $S_x=\{v\in V: \text{ $x$ is reachable on a directed path from $v$ in $G_1$}\}$, and let $l''(v)=l(v)+1$ for $v\in S_x$ and $l''(v)=l(v)$ for $v\notin S_x$. Then $l''$ is a good layering, moreover, for the obtained facet graph $G$, $G(V-S_x, S_x)$ is a cut where each edge is either not present or oriented from $V-S_x$ to $S_x$. Now we can repeat the proof of the previous case.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:gamma(1)=2g}]
As we argued after the statement of Theorem \ref{thm:gamma(1)=2g}, it is enough to prove
$|{\mathcal{J}}_1({\mathcal{G}})|=2|E|-|V|+1$. This follows from Lemma \ref{l:one-stick_trees}: the $|V|-1$ edges of $T_1$ each yield exactly one stick Jaeger tree, while the remaining $|E|-(|V|-1)$ edges of ${\mathcal{G}}$ each yield exactly two, giving $(|V|-1)+2(|E|-|V|+1)=2|E|-|V|+1$ in total. \end{proof}
\begin{problem}
Give a combinatorial meaning to $\gamma_i$ for larger values of $i$. \end{problem}
\section{On the connection of $\gamma$-polynomials and interior polynomials} \label{sec:connection_of_gamma_and_interior}
Let ${\mathcal{G}}$ be an undirected graph, and $G$ one of its facet graphs. In \cite{semibalanced}, the $h^*$-vector of the root polytope of $G$ (which is a facet of $P_{\mathcal{G}}$) is called the interior polynomial of $G$.
If ${\mathcal{G}}$ is a bipartite graph, then there are two special facet graphs, namely, the standard orientations of ${\mathcal{G}}$, where each edge is oriented from partite class $U$ to partite class $W$, or vice versa. The root polytopes of these two orientations are mirror images of each other, hence the interior polynomials of the two standard orientations are the same. We call this polynomial the interior polynomial of ${\mathcal{G}}$, and denote it by $I_{\mathcal{G}}$. (We note that in fact interior polynomials were originally defined for undirected bipartite graphs, and the original definition is equivalent to the one given here.)
In this section, we gather results that suggest interesting connections between $\gamma_{\mathcal{G}}$ and the interior polynomial $I_{\mathcal{G}}$, and also formulate some conjectures.
First of all, for a special class of graphs, Ohsugi and Tsuchiya proved that $\gamma_{\mathcal{G}}$ can be written as a linear combination of interior polynomials of some cuts in ${\mathcal{G}}$.
For a bipartite graph ${\mathcal{G}}$, they define $\tilde{{\mathcal{G}}}$ as the bipartite graph where a new vertex is added to both partite classes, and the new vertices are connected to each vertex of the other partite class (including the other new vertex). For a graph ${\mathcal{G}}$, $Cut({\mathcal{G}})$ is the set of cuts of ${\mathcal{G}}$, taken as subgraphs (thus $|Cut({\mathcal{G}})|=2^{|V|-1}$: each $S\subset V$ determines the cut $(S,V-S)$, with $S$ and $V-S$ determining the same cut). They show
\begin{thm}\cite[Corollary 5.5]{OT}
For a bipartite graph ${\mathcal{G}}$,
$$\gamma_{\tilde{{\mathcal{G}}}}(x) = \frac{1}{2^{|V|-2}}\sum_{\mathcal{H}\in Cut({\mathcal{G}})} I_{\tilde{\mathcal{H}}}(4x).$$ \end{thm}
This means that for bipartite graphs where each partite class has a vertex that is connected to each vertex of the other partite class, the gamma polynomial can be written as a positive linear combination of interior polynomials of some subgraphs. As each coordinate of an interior polynomial is nonnegative, this implies $\gamma$-positivity for these graphs, as \cite{OT} points out. (We note that in \cite[Theorem 5.3]{OT} they give another similar formula for (not necessarily bipartite) graphs where some vertex is connected to any other vertex.)
It would be interesting to see if a similar formula is true for all bipartite graphs. Here we list some more results suggesting that the two polynomials should be strongly related in general, and we formulate some conjectures.
One thing that suggests a connection is that certain product formulas are true for both polynomials. Ohsugi and Tsuchiya proved the following product formula for $\gamma$-polynomials: \begin{thm}\cite[Corollary 5.5]{OT}
Let $\mathcal{G}_1$ and $\mathcal{G}_2$ be bipartite graphs with $V(\mathcal{G}_1)\cap V(\mathcal{G}_2)=\{u,w\}$, so that $E(\mathcal{G}_1)\cap E(\mathcal{G}_2)=\{uw\}$. Let $\mathcal{G}_1\cup \mathcal{G}_2$ be the bipartite graph glued along the edge $uw$. Then $\gamma_{\mathcal{G}_1 \cup \mathcal{G}_2}=\gamma_{\mathcal{G}_1}\cdot \gamma_{\mathcal{G}_2}$. \end{thm}
The analogous statement holds for interior polynomials. \begin{thm}\cite[Theorem 6.7]{hiperTutte} Let $\mathcal{G}_1$ and $\mathcal{G}_2$ be bipartite graphs with $V(\mathcal{G}_1)\cap V(\mathcal{G}_2)=\{u,w\}$, so that $E(\mathcal{G}_1)\cap E(\mathcal{G}_2)=\{uw\}$. Let $\mathcal{G}_1\cup \mathcal{G}_2$ be the bipartite graph glued along the edge $uw$. Then $I_{\mathcal{G}_1 \cup \mathcal{G}_2}=I_{\mathcal{G}_1}\cdot I_{\mathcal{G}_2}$. \end{thm}
We note that a verbatim generalization of this statement holds for interior polynomials of all semi-balanced digraphs, which is stated as Proposition 11.1 in \cite{semibalanced}.
For both polynomials, the same product formula holds also if two graphs are glued along one vertex instead of one edge, see \cite[Proposition 5.2]{OT} and \cite[Corollary 6.8]{hiperTutte}.
Computations also suggest connections between $\gamma_{\mathcal{G}}$ and $I_{\mathcal{G}}$. We computed $\gamma$ and interior polynomials of 12000 randomly chosen bipartite graphs with 6 to 11 vertices. Based on the computations, we formulate the following conjectures:
\begin{conject} For any bipartite graph ${\mathcal{G}}$, the degrees of $I_{\mathcal{G}}$ and of $\gamma_{\mathcal{G}}$ agree. \end{conject}
\begin{conject}\label{conj:gamma_determines_interior} For any bipartite graph ${\mathcal{G}}$, $\gamma_{\mathcal{G}}$ determines $I_{{\mathcal{G}}}$. That is, if for two bipartite graphs, their $\gamma$-polynomial agrees, then their interior polynomial agrees as well. \end{conject} We note that the converse of Conjecture \ref{conj:gamma_determines_interior} is not true. In particular, we found two graphs both having $3x^2+3x+1$ as interior polynomial, but one of them having $12x^2+6x+1$ as $\gamma$-polynomial, while the other one having $14x^2+6x+1$ as $\gamma$-polynomial.
\begin{conject} For any bipartite graph ${\mathcal{G}}$, among the facet graphs of ${\mathcal{G}}$, the two standard orientations of ${\mathcal{G}}$ have a coefficientwise minimal interior polynomial. \end{conject} We note that for non-bipartite graphs there is no sense of standard orientation, and it is not clear how one could generalize the preceding conjectures to non-bipartite graphs.
We can also see some connections between the low-degree coefficients of $I_{\mathcal{G}}$ and $\gamma_{\mathcal{G}}$. By \cite[Theorem 6.3]{hiperTutte}, the constant term of the interior polynomial $I_{\mathcal{G}}$ is the nullity $g({\mathcal{G}})$, while Theorem \ref{thm:gamma(1)=2g} tells us that $\gamma_1({\mathcal{G}})=2g({\mathcal{G}})$. However, for coefficients of larger degree, there is no such simple connection, and the ratio of the quadratic terms can be non-integer.
\section{Some concrete special cases} \label{sec:concrete_cases}
We would like to demonstrate that the technique of Jaeger trees is also applicable to compute the $h^*$-polynomial of symmetric edge polytopes in concrete cases. Here we treat trees, cycles and cacti, complete bipartite graphs, and planar graphs. For the first four classes, the $h^*$-vector was determined previously using Gröbner bases.
\subsection{Warmup: Trees} As a warmup, let us compute the $\gamma$-polynomial of $P_{\mathcal{G}}$ if ${\mathcal{G}}$ is a tree.
\begin{prop}\cite[Example 5.1]{OT} Let ${\mathcal{G}}$ be a tree. Then $(\gamma_{\mathcal{G}})_0=1$, and $(\gamma_{\mathcal{G}})_i=0$ for each $i>0$. \end{prop} \begin{proof}
Suppose that ${\mathcal{G}}$ has $m$ edges. As ${\mathcal{G}}$ is bipartite, the facet graphs of ${\mathcal{G}}$ are orientations of ${\mathcal{G}}$. As there is no cycle, all $2^m$ orientations of ${\mathcal{G}}$ are semi-balanced. In each orientation, there is exactly one Jaeger tree, which contains all edges. Moreover, the tour of each of these Jaeger trees is the same, hence the orientation of an edge determines whether it is a head-edge or a tail-edge. Hence there are exactly $\binom{m}{k}$ Jaeger trees in the facet graphs of ${\mathcal{G}}$ with exactly $k$ tail-edges, that is, $h^*_k=\binom{m}{k}$ and $h^*_{P_{\mathcal{G}}}(x)=(1+x)^m$. As $m=|V|-1$, this means that $(\gamma_{\mathcal{G}})_0=1$, and $(\gamma_{\mathcal{G}})_i=0$ for each $i>0$. \end{proof}
\subsection{Cycles and cacti}
Using Gröbner bases, Ohsugi and Tsuchiya \cite{OT} compute the $\gamma$-polynomial of $P_{\mathcal{G}}$ if ${\mathcal{G}}$ is a cycle. Here we reproduce this result using our methods.
\begin{thm}\cite[Proposition 5.7]{OT}
$\gamma_{C_{n}}(i)=\binom{2i}{i}$ for $i=0, 1, \dots, \lfloor\frac{n-1}{2} \rfloor$. \end{thm} Let the vertex set of the cycle be $\{v_0, \dots , v_{n-1}\}$, with edges $v_{i}v_{i+1}$ (modulo $n$) for $i=0, \dots n-1$. Let us call an edge $v_iv_{i+1}$ positively oriented if it points towards $v_{i+1}$ and negatively oriented otherwise.
The following is the key lemma. \begin{lemma}
Let us fix the basis $(v_0, v_0v_1)$, and let $0\leq i\leq \lfloor\frac{n-1}{2}\rfloor$ be fixed. The total number of Jaeger trees with exactly $i$ tail-edges in all the facet graphs of $C_n$ is $\sum_{j=0}^i \binom{2j}{j}\binom{n-1-2j}{i-j}$. \end{lemma}
\begin{proof}
Suppose first that $n=2k+1$ is odd. Then the facet graphs are exactly the spanning trees of $C_n$, with any orientation that has $k$ positively and $k$ negatively oriented edges. Each such facet graph has exactly one Jaeger tree. For a facet graph where the edge $v_av_{a+1}$ is missing, the number of tail-edges is
\begin{multline*}
\sharp\{\text{positively oriented edges between $v_0$ and $v_a$}\}\\
+\sharp\{\text{negatively oriented edges between $v_{a+1}$ and $v_0$}\}.
\end{multline*}
Suppose that the first term is $0\leq j\leq i\leq k$ and the second is $i-j$. Then between $v_0$ and $v_a$, there need to be $j$ positively oriented edges, and $k-(i-j)\geq 0$ negatively oriented edges, and between $v_{a+1}$ and $v_0$ there need to be $k-j \geq 0$ positively oriented edges and $i-j$ negatively oriented edges. This also implies $a=k-i+2j$. Once the values of $i$ and $j$ are fixed, all choices of $j$ positively oriented edges between $v_0$ and $v_a$ and of $i-j$ negatively oriented edges between $v_{a+1}$ and $v_0$ give us exactly one Jaeger tree with $i$ tail-edges.
Hence altogether, the number of Jaeger trees in all facet graphs with $i$ tail-edges is
$$
\sum_{j=0}^i \binom{k-i+2j}{j}\binom{k+i-2j}{i-j}=\sum_{j=0}^i \binom{2j}{j}\binom{2k-2j}{i-j}=\sum_{j=0}^i \binom{2j}{j}\binom{n-1-2j}{i-j},
$$
where the first equality is by Lemma \ref{l:binom_azonossag}.
Now let us look at the case of $n=2k$. In this case, the facet graphs are orientations of $C_{2k}$ where there are exactly $k$ positively and $k$ negatively oriented edges. It is easy to see that each such facet has exactly $k$ Jaeger trees: one of the $k$ positively oriented edges can be cut. If the positively oriented edge $\overrightarrow{v_av_{a+1}}$ is cut, then this tree has $i$ tail-edges iff there are $0\leq j\leq i$ positive edges between $v_0$ and $v_a$ and $i-j$ negative edges between $v_{a+1}$ and $v_0$. This implies (as $v_av_{a+1}$ was positive) that there are $k-i+j$ negative edges between $v_0$ and $v_a$ and $k-1-j$ positive edges between $v_{a+1}$ and $v_0$ and $a=k-i+2j$.
Hence altogether, the number of Jaeger trees with $i$ tail-edges in all facet graphs is
\begin{multline*}
\sum_{j=0}^i \binom{k-i+2j}{j}\binom{k-1+i-2j}{i-j}\\
=\sum_{j=0}^i \binom{2j}{j}\binom{2k-1-2j}{i-j}=\sum_{j=0}^i \binom{2j}{j}\binom{n-1-2j}{i-j},
\end{multline*}
where the first equality is by Lemma \ref{l:binom_azonossag}. \end{proof}
\begin{lemma}\label{l:binom_azonossag}
For any nonnegative integers $(b,c,n)$ such that $0\leq c\leq b-2n$, we have
$$
\sum_{a=0}^n \binom{2a}{a}\binom{b-2a}{n-a} = \sum_{a=0}^n \binom{2a+c}{a}\binom{b-c-2a}{n-a}.
$$ \end{lemma} \begin{proof}
We fix $c$ and proceed by induction on $n$ and $b$. The base case is $b=2n + c$. In that case, notice that by substituting $a'=n-a$ in the right side, we get $$\sum_{a'=0}^n \binom{b-2a'}{n-a'}\binom{2a'}{a'},$$ which agrees with the left side.
Now suppose that $b>2n+c$, and suppose that the statement holds for every triple $(b',c,n')$ with $b'<b$ that satisfies $b'\geq 2n' + c$.
We define the following two functions on subsets of $[b]$: For $S\subseteq [b]$, let
$f_0(S)=|\{a\leq b: |S\cap [2a]|=a\}|$ and $f_c(S)=|\{a\leq b: |S\cap [2a+c]|=a\}|$.
It is easy to see that $\sum_{a=0}^n \binom{2a}{a}\binom{b-2a}{n-a}= \sum_{S\subseteq [b], |S|=n} f_0(S)$, as well as that $\sum_{a=0}^n \binom{2a+c}{a}\binom{b-c-2a}{n-a}= \sum_{S\subseteq [b], |S|=n} f_c(S)$.
Now $$\sum_{S\subseteq [b], |S|=n} f_0(S)=\sum_{S\subseteq [b], b\notin S, |S|=n} f_0(S)+\sum_{S\subseteq [b], b\in S, |S|=n} f_0(S),$$
and similarly,
$$\sum_{S\subseteq [b], |S|=n} f_c(S)=\sum_{S\subseteq [b], b\notin S, |S|=n} f_c(S)+\sum_{S\subseteq [b], b\in S, |S|=n} f_c(S).$$
Notice that whether an element $i>2n+c$ is in $S$ or not changes neither the value of $f_0(S)$ nor the value of $f_c(S)$. (In fact, for $f_0(S)$, already the elements $i>2n$ are immaterial.) Hence $\sum_{S\subseteq [b], b\notin S, |S|=n} f_0(S)=\sum_{S\subseteq [b-1], |S|=n} f_0(S)$ and $$\sum_{S\subseteq [b], b\in S, |S|=n} f_0(S)=\sum_{S\subseteq [b-1], |S|=n-1} f_0(S),$$ and similarly for $f_c$.
As we had $b>2n+c$, we have $b-1\geq 2n+c$ and $b-1\geq 2(n-1)+c$. Hence by induction, $\sum_{S\subseteq [b-1], |S|=n} f_0(S)=\sum_{S\subseteq [b-1], |S|=n} f_c(S)$ and $\sum_{S\subseteq [b-1], |S|=n-1} f_0(S)=\sum_{S\subseteq [b-1], |S|=n-1} f_c(S)$, which completes the induction. \end{proof}
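Since Lemma \ref{l:binom_azonossag} is a purely finite identity, it can also be checked by direct computation. The following short script (ours, not part of the paper) verifies the identity for all admissible triples $(b,c,n)$ with $b\leq 16$:

```python
from math import comb

def lhs(b, c, n):
    # sum_a C(2a, a) * C(b - 2a, n - a)
    return sum(comb(2*a, a) * comb(b - 2*a, n - a) for a in range(n + 1))

def rhs(b, c, n):
    # sum_a C(2a + c, a) * C(b - c - 2a, n - a)
    return sum(comb(2*a + c, a) * comb(b - c - 2*a, n - a) for a in range(n + 1))

# check every triple (b, c, n) with 0 <= c <= b - 2n
for b in range(17):
    for n in range(b // 2 + 1):
        for c in range(b - 2*n + 1):
            assert lhs(b, c, n) == rhs(b, c, n), (b, c, n)
```

Note that the hypothesis $0\leq c\leq b-2n$ guarantees that all binomial coefficients above have nonnegative arguments.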
\begin{remark} A cactus graph is a graph whose 2-connected components are all cycles or edges.
Ohsugi and Tsuchiya shows \cite[Proposition 5.2]{OT} that if the 2-connected components of a graph ${\mathcal{G}}$ are ${\mathcal{G}}_1, \dots {\mathcal{G}}_k$ then $h^*_{\mathcal{G}}$ is the product of $h^*_{{\mathcal{G}}_1}, \dots , h^*_{{\mathcal{G}}_k}$. Hence one can also compute the $h^*$ polynomial of cactus graphs. We note that this can also be seen directly from Theorem \ref{thm:shelling_of_symm_edge_poly} as the Jaeger trees of different 2-connected components behave independently (and for a fixed ribbon structure and base point, the tour of any tree reaches a given 2-connected component at the same node-edge pair). \end{remark}
\subsection{Complete bipartite graphs}
The $h^*$-polynomial of the symmetric edge polytope of complete bipartite graphs was determined by Higashitani, Jochemko and Micha\l{}ek \cite{arithm_symedgepoly} using Gröbner basis techniques. The computation of \cite{arithm_symedgepoly} can be divided into two parts. In the first part, they reduce the computation of the $h^*$-polynomial to a graph-theoretic counting problem. Then, in the second part, they solve this counting problem. The first part of the problem can also be done using Jaeger trees, as we now explain.
In \cite{arithm_symedgepoly}, they construct a triangulation of $P_{K_{n,m}}$ using Gröbner bases. They then interpret the simplices of the triangulation as certain oriented spanning trees of $K_{n,m}$, and prove that $h^*_i$ is equal to the number of trees in their triangulation that (for some fixed base node) have $i$ head-edges.
In \cite{semibalanced}, for each facet of $P_{K_{n,m}}$, we compute a dissection by Jaeger trees (which is actually a triangulation in that case). We use different ribbon structures for different facet graphs, but the base point can be chosen to be the same (by \cite[Remark 9.4]{semibalanced}). The Jaeger trees have a simple geometric description. Moreover, they agree with the trees of \cite{arithm_symedgepoly}. Now the interpretation of $h^*_i$ as the number of Jaeger trees with $i$ head-edges is a direct application of Corollary \ref{cor:h^*_of_sym_edge_poly} (up to symmetry).
Even though it is slightly more involved to give a description of the trees of the triangulation based on the Jaeger language, we feel that the rest of the computation is more automatic, with the notion of Jaeger trees and the graph theoretic description of the $h^*$-polynomial (Corollary \ref{cor:h^*_of_sym_edge_poly}) at hand.
\subsection{Planar bipartite graphs}
Let ${\mathcal{G}}$ be a planar bipartite graph. In this case, we can translate quantities about $P_{\mathcal{G}}$ to quantities about the planar dual of ${\mathcal{G}}$.
We know that the facets of $P_{\mathcal{G}}$ correspond to semi-balanced orientations of ${\mathcal{G}}$. A planar directed graph is semi-balanced if and only if its (directed) planar dual is Eulerian; indeed, a cycle of a digraph $G$ contains equally many edges of its two directions if and only if the corresponding cut in $G^*$ has an equal number of edges going in the two directions. Hence the number of facets of $P_{\mathcal{G}}$ equals the number of Eulerian orientations of ${\mathcal{G}}^*$.
By \cite{semibalanced}, the number of Jaeger trees in a planar semi-balanced digraph $G$ is equal to the number of spanning arborescences of $G^*$ rooted at $r^*$ (for an arbitrary fixed choice of $r^*$). Hence the number of Jaeger trees of ${\mathcal{G}}$ is the sum of the numbers of spanning arborescences of Eulerian orientations of ${\mathcal{G}}^*$.
\section{A geometric formula for the volume of $P_{\mathcal{G}}$ for bipartite ${\mathcal{G}}$} \label{sec:geometric_formula_for_volume}
Finally, for bipartite graphs ${\mathcal{G}}$ let us give a simple geometric formula for the volume of $P_{\mathcal{G}}$.
Let ${\mathcal{G}}$ be a bipartite graph. Then the facets of $P_{\mathcal{G}}$ correspond to all the semi-balanced orientations of ${\mathcal{G}}$. (That is, there are no hidden edges in any of them.) Fix a ribbon structure and basis for ${\mathcal{G}}$. This induces a ribbon structure and a basis for each facet graph $G_l$. As explained in Section \ref{sec:prep}, the volume of $P_{\mathcal{G}}$ is the total number of Jaeger trees in the facets.
Take an (unoriented) spanning tree $\mathcal{T}$ of the (undirected) graph ${\mathcal{G}}$. It is enough to tell for each spanning tree $\mathcal{T}$ of ${\mathcal{G}}$ how many facet graphs $G_l$ there are such that (the appropriately oriented version of) $\mathcal{T}$ is a Jaeger tree in $G_l$ (for the fixed ribbon structure and basis).
We claim the following: for each tree $\mathcal{T}$ there is a point $p_T$ such that the facet graphs $G_l$ where $\mathcal{T}$ is a Jaeger tree correspond exactly to the facets of the symmetric edge polytope that are visible from $p_T$.
The tour of $\mathcal{T}$ is independent of the orientation of the edges. Hence in order for $\mathcal{T}$ to be a Jaeger tree in $G_l$, each edge $e\notin \mathcal{T}$ has to be oriented so that it is cut at its tail in the tour of $\mathcal{T}$. This orientation only depends on $T$.
For each edge $e\notin T$, take the above mentioned orientation $\overrightarrow{e}$. Let $\mathbf{x}_{\overrightarrow{e}}$ be the vertex of the symmetric edge polytope corresponding to $\overrightarrow{e}$.
Let $p_T=(1+\delta)\frac{1}{|E - T|}\sum_{e\notin \mathcal{T}}\mathbf {x}_{\overrightarrow{e}}$, where $0 < \delta < \frac{1}{|E|}$.
We know that each facet has a linear functional $l$ such that points of the facet have $l(\mathbf{p})=1$, while points of the polytope have $l(\mathbf{p})\leq 1$. This facet is visible from a point $\mathbf{p}$ if and only if $l(\mathbf{p})> 1$. It is enough to show that if $l$ is the functional defining a facet $G_l$ where $T$ is a Jaeger tree, then $l(p_T)>1$, and if $l$ is the functional defining a facet $G_l$ where $T$ is not a Jaeger tree, then $l(p_T) < 1$.
This is clear, since $T$ is a Jaeger tree exactly in facet graphs $G_l$ where each $e\notin \mathcal{T}$ is oriented as above.
In these facets, $l(p_T)=1+\delta > 1$. If in a facet at least one edge of $E-T$ is oriented oppositely, then for that edge $\overleftarrow{e}$ we have $l(\mathbf{x}_{\overrightarrow{e}})= -1$, hence $l(p_T)\leq (1+\delta)\left(1-\frac{2}{|E-T|}\right) < 1$, where the last inequality holds since $\delta < \frac{1}{|E|} < \frac{2}{|E-T|-2}$ (and is trivial if $|E-T|\leq 2$). Hence the volume of $P_{\mathcal{G}}$ is obtained by summing, over all spanning trees $\mathcal{T}$ of ${\mathcal{G}}$, the number of facets of $P_{\mathcal{G}}$ visible from $p_T$.
\end{document} |
\begin{document}
\title[Tsirelson's problem]{Tsirelson's problem\\
and purely atomic\\
von Neumann algebras} \author{Bebe Prunaru}
\address{Institute of Mathematics ``Simion Stoilow'' of the Romanian Academy\\ P.O. Box 1-764
RO-014700 Bucharest Romania} \email{[email protected]}
\begin{abstract}
It is shown that if a bipartite behavior admits a field representation in which Alice's (or Bob's) observable algebra generates a purely atomic von Neumann algebra then it is non-relativistic.
\end{abstract}
\maketitle
Let $H$ be a separable complex Hilbert space, and let $B(H)$ be the algebra of all bounded linear operators on $H$. If $S\subset B(H)$ then $span(S)$ denotes its linear span
and $comm(S)$ its commutant.
Let $C_1(H)$ be the space of all trace-class operators on $H$. We denote
$$<\mu,T>=tr(\mu T) \quad \mu\in C_1(H), T\in B(H).$$
If $\Psi:B(H)\to B(H)$ is a weak star continuous map, then we shall denote by $\Psi^*:C_1(H)\to C_1(H)$ its predual map, hence $$<\Psi^*(\mu),T>=<\mu,\Psi(T)> \quad \mu\in C_1(H), T\in B(H).$$
In what follows $A$ and $B$ are finite sets. Moreover $\{A_x\}\sb{x\in A}$ and $\{B_y\}\sb{y\in B}$ are families of finite sets. Elements of $A_x$ are identified with pairs of the form $(a,x)$ and similarly for $B_y$.
The following result has been recently proved in \cite{NCPV}.
\begin{theorem} \label{quansal}
Let $\{E^x_a\}\sb{a,x}$ and $\{F^y_b\}\sb{b,y}$ be two families of positive operators
in $B(H)$
such that
\begin{itemize}
\item[(i)] $\sum_a{E}^x_a=1 \quad (\forall) x\in A$
\item[(ii)] $\sum_b{F}^y_b=1 \quad (\forall) y\in B$
\item[(iii)] ${E}^x_a{F}^y_b={F}^y_b{E}^x_a \quad (\forall) a,b,x,y.$
\end{itemize}
Let $\rho\in C_1(H)$ be positive with $tr(\rho)=1$ and let
$$p(a,b|x,y)=<\rho,{E}^x_a{F}^y_b>.$$
Suppose there exists a family $\{\sigma^x_a\}\sb{a,x}$ of positive trace-class operators on $H$ such that
\begin{itemize}
\item[(iv)] $\sigma=\sum_a\sigma^x_a$ does not depend on $x\in A$ and
\item[(v)] $<\sigma^x_a,{F}^y_b>=p(a,b|x,y)$ for all $a,b,x,y$. \end{itemize}
Then there exist families $\{\tilde{E}^x_a\}\sb{a,x}$ and $\{\tilde{F}^y_b\}\sb{b,y}$ of positive operators on $H$
and a normal state $\tilde{\rho}$ on $B(H\otimes H)$
such that
$$\sum_a\tilde{E}^x_a=1 \quad (\forall) x\in A$$
and
$$\sum_b\tilde{F}^y_b=1 \quad (\forall)y\in B$$
and such that $$p(a,b|x,y)=<\tilde{\rho},\tilde{E}^x_a\otimes\tilde{F}^y_b>\quad (\forall) a,b,x,y.$$
\end{theorem}
This result is related to a certain problem in the theory of quantum correlations formulated in \cite{T1} and \cite{T2}. For recent work in this area we refer to \cite{F}, \cite{J}, \cite{NCPV}, \cite{SW} and the references therein. In this paper we shall provide a class of examples where this theorem applies.
\begin{proposition} \label{main}
Suppose $\{E^x_a\}\sb{a,x}$ and $\{F^y_b\}\sb{b,y}$ are families of positive operators on $H$ satisfying (i)-(iii) in Theorem \ref{quansal} and let
$\rho\in C_1(H)$ be positive with $tr(\rho)=1$.
Assume there exists a positive linear weak star continuous idempotent map
$$\Phi:B(H)\to B(H)$$ such that
$$span(\{F^y_b\}\sb{b,y})\subset range(\Phi)\subset comm(\{E^x_a\}\sb{a,x}).$$
Then there exists a family $\{\sigma^x_a\}\sb{a,x}$ of positive trace-class operators
on $H$ such that (iv) and (v) in Theorem \ref{quansal} hold true.
\end{proposition}
\begin{proof}
Let us define $$\sigma^x_a=\Phi^*(({E}^x_a)^{1/2}\rho({E}^x_a)^{1/2})$$ for all $a,x$. Then for every $T\in B(H)$ we have $$<\sum_a\sigma^x_a,T>=<\rho,\sum_a({E}^x_a)^{1/2}\Phi(T)({E}^x_a)^{1/2}>
=<\rho,\Phi(T)>=<\Phi^*(\rho),T>$$ therefore $\sigma=\sum_a\sigma^x_a$
does not depend on $x$. Moreover for every $a,x,b,y$ we have $$<\sigma^x_a, {F}^y_b>=<\rho, ({E}^x_a)^{1/2}\Phi({F}^y_b)({E}^x_a)^{1/2}>=<\rho,{E}^x_a{F}^y_b>=p(a,b|x,y).$$
\end{proof}
This proof is in part inspired by the proof of Theorem 5 in \cite{NCPV}. Recall that a purely atomic von Neumann algebra is one in which the identity is a sum of minimal projections. Obviously, every finite dimensional von Neumann algebra is purely atomic, as is $B(H)$ itself. It is known that any such algebra is the range of a weak star continuous completely positive idempotent. It follows that the above result applies, for instance, when either $\{E^x_a\}\sb{a,x}$ or $\{F^y_b\}\sb{b,y}$ generates a purely atomic von Neumann algebra. For terminology and results on this class of algebras we refer to \cite{B}.
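As an illustration (not part of the paper), the construction $\sigma^x_a=\Phi^*(({E}^x_a)^{1/2}\rho({E}^x_a)^{1/2})$ from the proof of Proposition \ref{main} can be tested numerically in the simplest tensor-product setting $H=\mathbb{C}^d\otimes\mathbb{C}^d$, with Alice's projections acting on the first factor, Bob's on the second, and $\Phi$ the conditional expectation onto $I\otimes M_d$, whose predual is $\Phi^*(\mu)=\tfrac{1}{d}I\otimes\mathrm{Tr}_1\mu$. The dimensions, random measurements and helper names below are our choices for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2  # H = C^d (x) C^d

def random_pvm(d, rng):
    # rank-one projective measurement from a random unitary's columns
    q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return [np.outer(q[:, a], q[:, a].conj()) for a in range(d)]

# Alice acts on the first factor, Bob on the second: (i)-(iii) hold automatically
E = {x: [np.kron(P, np.eye(d)) for P in random_pvm(d, rng)] for x in range(2)}
F = {y: [np.kron(np.eye(d), Q) for Q in random_pvm(d, rng)] for y in range(2)}

# a random density matrix rho
M = rng.normal(size=(d*d, d*d)) + 1j * rng.normal(size=(d*d, d*d))
rho = M @ M.conj().T
rho /= np.trace(rho)

def tr1(T):
    # partial trace over the first tensor factor
    return T.reshape(d, d, d, d).trace(axis1=0, axis2=2)

def phi_star(mu):
    # predual of the conditional expectation onto I (x) M_d
    return np.kron(np.eye(d) / d, tr1(mu))

# since each E^x_a here is a projection, (E^x_a)^{1/2} = E^x_a
sigma = {(x, a): phi_star(E[x][a] @ rho @ E[x][a])
         for x in range(2) for a in range(d)}

# (iv): sum_a sigma^x_a does not depend on x
s = [sum(sigma[(x, a)] for a in range(d)) for x in range(2)]
assert np.allclose(s[0], s[1])

# (v): <sigma^x_a, F^y_b> reproduces p(a,b|x,y) = <rho, E^x_a F^y_b>
for x in range(2):
    for y in range(2):
        for a in range(d):
            for b in range(d):
                assert np.isclose(np.trace(sigma[(x, a)] @ F[y][b]),
                                  np.trace(rho @ E[x][a] @ F[y][b]))
```

Here $range(\Phi)=I\otimes M_d$ contains all of Bob's operators and commutes with all of Alice's, so the hypotheses of the proposition are met by construction.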
\end{document} |
\begin{document}
\title[Asymptotic stability of the compressible Euler-Maxwell equations ]
{Asymptotic stability of stationary solutions to the compressible Euler-Maxwell equations }
\author{Qingqing Liu} \address{(QQL) The Hubei Key Laboratory of Mathematical Physics, School of Mathematics and Statistics, Central China Normal University, Wuhan, 430079, P. R. China} \email{[email protected]}
\author{Changjiang Zhu*} \address{(CJZ) The Hubei Key Laboratory of Mathematical Physics, School of Mathematics and Statistics, Central China Normal University, Wuhan, 430079, P. R. China} \email{[email protected]} \thanks{*Corresponding author. Email: [email protected] }
\date{\today} \keywords{Compressible Euler-Maxwell equations, stationary solutions, asymptotic stability}
\subjclass[2000]{35Q35, 35P20}
\begin{abstract} In this paper, we are concerned with the compressible Euler-Maxwell equations with a nonconstant background density (e.g. of ions) in three dimensional space. There exist stationary solutions when the background density is a small perturbation of a positive constant state. We first show the asymptotic stability of solutions to the Cauchy problem near the stationary state provided that the initial perturbation is sufficiently small. Moreover the convergence rates are obtained by combining the $L^p$-$L^q$ estimates for the linearized equations with time-weighted estimate. \end{abstract}
\maketitle
\tableofcontents
\section{Introduction}
The dynamics of two separate compressible fluids of ions and electrons interacting with their self-consistent electromagnetic field in plasma physics can be described by the compressible 2-fluid Euler-Maxwell equations \cite{Besse,Rishbeth}. In this paper, we consider the following one-fluid compressible Euler-Maxwell system when the background density $n_{b}$ is a function of the spatial variable and the electron flow is isentropic (see \cite{Duan,UK,USK} when $n_{b}=const.$), taking the form of \begin{eqnarray}\label{1.1} &&\left\{\begin{aligned} &\partial_t n+\nabla\cdot(nu)=0,\\ &\partial_t u+u \cdot \nabla u+\frac{1}{n}\nabla p(n)=-(E+u\times B)-\nu u,\\ &\partial_t E-\nabla\times B=nu,\\ &\partial_t B+\nabla \times E=0,\\ &\nabla \cdot E=n_{b}(x)-n, \ \ \nabla \cdot B=0. \end{aligned}\right. \end{eqnarray} Here, $n=n(t,x)\geq 0 $ is the electron density, $ u=u(t,x)\in \mathbb{R}^{3}$ is the electron velocity, and $ E=E(t,x)\in \mathbb{R}^{3}$, $ B=B(t,x)\in \mathbb{R}^{3}$, for $ t>0, \ x \in \mathbb{R}^{3} $, denote the electric and magnetic fields, respectively. The initial data are given as \begin{eqnarray}\label{1.2}
[n,u,E,B]|_{t=0}=[n_{0},u_{ 0},E_{0},B_0],\ \ \ x\in\mathbb{R}^{3}, \end{eqnarray} with the compatible conditions \begin{eqnarray}\label{1.3} \nabla \cdot E_0=n_{b}(x)-n_{0}, \ \ \nabla \cdot B_0=0, \ \ \ x\in\mathbb{R}^{3}. \end{eqnarray} The pressure function $ p(\cdot)$ of the flow depending only on the density satisfies the power law $p(n)=A n^{\gamma}$ with constants $A>0$ and the adiabatic exponent $\gamma >1 $. Constant $\nu>0$ is the velocity relaxation frequency. In this paper, we set $ A=1,\ \nu=1$ without loss of generality. $n_{b}(x)$ denotes the stationary background ion density satisfying \begin{eqnarray*} n_{b}(x)\rightarrow n_{\infty}, \ \ \ \ \textrm{as}\ \ \ \
|x|\rightarrow \infty, \end{eqnarray*} for a positive constant state $n_{\infty}>0$. Throughout this paper, we take $n_{\infty}=1$ for simplicity.
In comparison with the Euler-Maxwell system studied in \cite{Duan}, where the background density is a uniform constant, the naturally existing steady states of system \eqref{1.1} are no longer constants $[1,0,0,0]$. The stationary equations to the Cauchy problem \eqref{1.1}-\eqref{1.2} are given as \begin{eqnarray}\label{sta.eq0} \left\{\begin{aligned} &\frac{1}{n_{st}}\nabla p(n_{st})=-E_{st},\\ &\nabla \times E_{st}=0,\\ &\nabla \cdot E_{st}=n_{b}(x)-n_{st}. \end{aligned}\right. \end{eqnarray}
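The first equation of \eqref{sta.eq0} has a gradient structure: for $p(n)=n^{\gamma}$ one has $\frac{1}{n}p'(n)=\frac{d}{dn}\big[\frac{\gamma}{\gamma-1}n^{\gamma-1}\big]=\gamma n^{\gamma-2}$, which is what makes the transformation of Section \ref{sec2} work. A quick numerical check of this identity (ours, not part of the paper), using central differences:

```python
from math import isclose

def p(n, g):
    # pressure law p(n) = n^gamma with A = 1
    return n**g

def Q(n, g):
    # transformation Q(n) = gamma/(gamma-1) * (n^(gamma-1) - 1)
    return g / (g - 1) * (n**(g - 1) - 1)

def deriv(f, n, g, h=1e-6):
    # central difference approximation of df/dn
    return (f(n + h, g) - f(n - h, g)) / (2 * h)

# (1/n) p'(n) should agree with Q'(n) for every n > 0 and gamma > 1
for g in (1.4, 5/3, 2.0, 3.0):
    for n in (0.5, 1.0, 2.0):
        assert isclose(deriv(p, n, g) / n, deriv(Q, n, g), rel_tol=1e-6)
```

Consequently, along any smooth density profile $n(x)>0$, $\frac{1}{n}\nabla p(n)=\nabla Q(n)$, so $E_{st}$ in \eqref{sta.eq0} is necessarily a gradient field.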
First, in this paper, we prove the existence of the stationary
solutions to the Cauchy problem \eqref{1.1}-\eqref{1.2} under
some conditions on the background density $n_{b}(x)$. For this
purpose, let us define the weighted norm
$\|\cdot\|_{W_{k}^{m,2}}$ by \begin{eqnarray}\label{def.norm}
\|g\|_{W_{k}^{m,2}}=\left(\sum_{|\alpha|\leq m}\int_{\mathbb{R}^{3}}(1+|x|)^{k}|\partial^{\alpha}_{x}g(x)|^2dx\right)^{\frac{1}{2}}, \end{eqnarray} for suitable $g=g(x)$ and integers $m\geq0$, $k\geq0$.
Actually, one has the following theorem. \begin{theorem}\label{sta.existence} For integers $m\geq 2$ and $k\geq 0$, suppose that
$\|n_{b}-1\|_{W_{k}^{m,2}}$ is small enough. Then the stationary problem \eqref{sta.eq0} admits a unique solution $(n_{st},E_{st})$ with $n_{st}-1\in W_{k}^{m,2}$, satisfying \begin{eqnarray}\label{sta.pro}
\|n_{st}-1\|_{{W_{k}^{m,2}}}\leq C \|n_{b}-1\|_{W_{k}^{m,2}},\ \ \
\|E_{st}\|_{{W_{k}^{m-1,2}}}\leq C \|n_{b}-1\|_{W_{k}^{m,2}}, \end{eqnarray} for some constant $C$. \end{theorem}
There have been extensive investigations into the simplified Euler-Maxwell system where all the physical parameters are set to unity. For the one-fluid Euler-Maxwell system, by using the fractional Godunov scheme as well as the compensated compactness argument, Chen-Jerome-Wang in \cite{Chen} proved global existence of weak solutions to the initial-boundary value problem in one space dimension for arbitrarily large initial data in $L^{\infty}$. Jerome in \cite{Jerome} established a local smooth solution theory for the Cauchy problem over $\mathbb{R}^3$ by adapting the classical semigroup-resolvent approach of Kato in \cite{Kato}. Recently, Duan in \cite{Duan} proved the existence and uniqueness of global solutions in the framework of smooth solutions with small amplitude; moreover, a detailed analysis of the Green's function of the linearized system was made to derive the optimal time-decay rates of perturbed solutions. Similar results were independently given by Ueda-Wang-Kawashima in \cite{USK} and Ueda-Kawashima in \cite{UK} by using the pure time-weighted energy method. For the original two-fluid Euler-Maxwell systems
with various parameters, the limits as some parameters go to zero
have been studied recently.
Peng-Wang in \cite{Peng,PW1,PW2} justified the convergence of the one-fluid compressible Euler-Maxwell system to the incompressible Euler system, compressible Euler-Poisson system and an electron magnetohydrodynamics system for well-prepared smooth initial data. These asymptotic limits are respectively called the non-relativistic limit, the quasi-neutral limit and the limit of their combination. Recently, Hajjej and Peng in \cite{HP} considered the zero-relaxation limits for periodic smooth solutions of Euler-Maxwell systems. For the 2-fluid Euler-Maxwell system, depending on the choice of physical parameters, especially under the assumption $\nu_{+}=\nu_{-}$ on the relaxation coefficients of $u_{\pm}$, Duan-Liu-Zhu in \cite{DLZ} obtained the existence and the time-decay rates of the solutions. Many more studies have been made for the Euler-Poisson system when the magnetic field is absent; see \cite{Guo,GuoPausader,Luo,Deng,Smoller,Chae} and references therein for discussion and analysis of the different issues such as the existence of global smooth irrotational flow \cite{Guo} for an electron fluid and \cite{GuoPausader} for the ion dynamics, large time behavior of solutions \cite{Luo}, stability of star solutions \cite{Deng,Smoller} and finite time blow-up \cite{Chae}.
However, there are few results on the global existence of solutions to the Euler-Maxwell system when the non-moving ions provide a nonconstant background $n_{b}(x)$, whereas in many papers related to one-fluid Euler-Maxwell system $n_{b}=1$. In this paper, we prove that there exists a stationary solution when the background density is a small perturbation of a positive constant state and we show the asymptotic stability of the stationary solution and then obtain the convergence rate of the global solution towards the stationary solution. The main result is stated as follows. Notations will be explained at the end of this section.
\begin{theorem}\label{Corolary} Let $ N\geq 3$ and $ \eqref{1.3}$ hold. Suppose
$\|n_{b}-1\|_{W_{0}^{N+1,2}}$ is small enough. Then there are $ \delta_{0}>0$, $ C_{0}>0$ such that if \begin{eqnarray*}
\|[n_{0}-n_{st},u_{0},E_{0}-E_{st},B_{0}]\|_{N} \leq \delta_{0}, \end{eqnarray*} then, the Cauchy problem $\eqref{1.1}$-$\eqref{1.2}$ admits a unique global solution $[n(t,x),u(t,x),E(t,x),B(t,x)] $ satisfying \begin{eqnarray*} [n-n_{st},u,E-E_{st},B]\in C([0,\infty);H^{N}(\mathbb{R}^{3}))\cap {\rm Lip}([0,\infty);H^{N-1}(\mathbb{R}^{3})), \end{eqnarray*} and \begin{eqnarray*}
\sup_{t \geq 0}\|[n(t)-n_{st},u(t),E(t)-E_{st},B(t)]\|_{N}\leq C_{0}
\|[n_{0}-n_{st},u_{0},E_{0}-E_{st},B_{0}]\|_{N}. \end{eqnarray*} Moreover, there are $\delta_{1}>0$, $ C_{1}>0$ such that if \begin{eqnarray*}
\|[n_{0}-n_{st},u_{0},E_{0}-E_{st},B_{0}]\|_{N+3}+\|[u_{0},E_{0}-E_{st},B_{0}]\|_{L^{1}}\leq \delta_{1}, \end{eqnarray*}
and $\|n_{b}-1\|_{W_{0}^{N+4,2}}$ is small enough, then the solution $[n(t,x),u(t,x),E(t,x),B(t,x)] $ satisfies that for any $ t \geq 0$, \begin{eqnarray}\label{UN.decay}
\|[n(t)-n_{st},u(t),B(t),E(t)-E_{st}]\|_{N} \leq C_{1} (1+t)^{-\frac{3}{4}}, \end{eqnarray} \begin{eqnarray}\label{UhN.decay}
\|\nabla[n(t)-n_{st},u(t),B(t),E(t)-E_{st}]\|_{N-1} \leq C_{1} (1+t)^{-\frac{5}{4}}. \end{eqnarray} More precisely, if \begin{eqnarray*}
\|[n_{0}-n_{st},u_{0},E_{0}-E_{st},B_{0}]\|_{6}+\|[u_{0},E_{0}-E_{st},B_{0}]\|_{L^{1}}\leq \delta_{1}, \end{eqnarray*}
and $\|n_{b}-1\|_{W_{0}^{7,2}}$ is small enough, we have \begin{eqnarray}\label{sigmau.decay}
\|[n(t)-n_{st},u(t)]\| \leq C_{1} (1+t)^{-\frac{5}{4}}, \end{eqnarray} \begin{eqnarray}\label{EB.decay}
\|[E(t)-E_{st},B(t)]\|\leq C_{1}(1+t)^{-\frac{3}{4}}. \end{eqnarray} If \begin{eqnarray*}
\|[n_{0}-n_{st},u_{0},E_{0}-E_{st},B_{0}]\|_{7}+\|[u_{0},E_{0}-E_{st},B_{0}]\|_{L^{1}}\leq \delta_{1}, \end{eqnarray*}
and $\|n_{b}-1\|_{W_{0}^{8,2}}$ is small enough, then $E(t)$ satisfies \begin{eqnarray}\label{E.decay}
\|E(t)-E_{st}\|\leq C_{1}(1+t)^{-\frac{5}{4}}. \end{eqnarray} \end{theorem}
The proof of existence in Theorem \ref{Corolary} is based on the classical energy method. As in \cite{Duan}, the key point is to obtain the uniform-in-time {\it a priori} estimates in the form of $$ \mathcal{E}_N(\bar{V}(t))+\lambda \int_0^t\mathcal{D}_N(\bar{V}(s))\,ds\leq \mathcal{E}_N(\bar{V}_0), $$ where $\bar{V}(t)$ is the perturbation of solutions, and $\mathcal{E}_N(\cdot)$, $\mathcal{D}_N(\cdot)$ denote the energy functional and energy dissipation rate functional. If we made the energy estimates as in \cite{Duan}, it would be difficult to control the highest-order derivative of $\bar{E}$ because of the regularity-loss character of the system, in the sense that $[\bar{E},\bar{B}]$ is time-space integrable only up to order $N-1$. In this paper, we modify the energy estimates by choosing a weighted function $1+\sigma_{st}+\Phi(\sigma_{st})$ which plays a vital role in closing the energy estimates.
Furthermore, for the convergence rates of perturbed solutions in Theorem 1.1, we cannot analyze the corresponding linearized system of \eqref{1.1} around the steady state $[n_{st},0,E_{st},0]$ directly. In this case, the Fourier analysis fails due to the difficulty of variable coefficients. Here, the main idea follows from \cite{Duan} for combining energy estimates with the linearized results in \cite{Duan}. In the process of obtaining the fastest decay rates of the perturbed solution, the great difficulty is to deal with the linear nonhomogeneous sources including $\rho_{st}$, which cannot provide enough decay; whereas in \cite{Duan}, the nonhomogeneous sources are at least quadratically nonlinear. To overcome this difficulty, we make an iteration for the inequalities \eqref{sec5.ENV0} and $\eqref{sec5.high}$ together. In Theorem \ref{Corolary}, we capture the same time-decay properties of $u,\ E-E_{st}$ and $B$ as in \cite{Duan}, except for $n-n_{st}$.
$\|n-n_{st}\|$ decays at the fastest rate $(1+t)^{-\frac{5}{4}}$, because the nonhomogeneous sources containing $\rho_{st}$ decay at most as fast as $\sqrt{\mathcal{E}^h_N(\cdot)}$.
Similar work was done for the Vlasov-Poisson-Boltzmann system, where the background density is also a function of the spatial variable. Duan and Yang in \cite{RY} considered the stability of the stationary states which were given by an elliptic equation with the exponential nonlinearity. The optimal time-decay of the Vlasov-Poisson-Boltzmann system in $\mathbb{R}^{3}$ was obtained by Duan and Strain in \cite{DS}. We also mention the works of Duan-Ukai-Yang-Zhao \cite{RSYZ} and Duan-Liu-Ukai-Yang \cite{DLUY} for the study of optimal convergence rates of the compressible Navier-Stokes equations with potential forces. Their proofs were based on the combination of spectral analysis and energy estimates. Recently, Duan-Ukai-Yang in \cite{RSY} developed a method combining spectral analysis and energy estimates to obtain the optimal time decay for equations of gas motion.
We further remark on the result in \cite{RY}: the existence of a solution to the elliptic equation $\Delta \phi=e^{\phi}-\bar{\rho}(x)$ was proved there when $\|\bar{\rho}-1\|_{W_{k}^{m,\infty}}$ is sufficiently small, where the weighted norm
$\|\cdot\|_{W_{k}^{m,\infty}}$ is defined by \begin{eqnarray}\label{def.norm1}
\|g\|_{W_{k}^{m,\infty}}=\sup_{x\in\mathbb{R}^{3}}(1+|x|)^{k}\sum_{|\alpha|\leq m}|\partial^{\alpha}_{x}g(x)| \end{eqnarray} for suitable $g=g(x)$ and integers $m\geq0$, $k\geq0$. The stability of the perturbed solutions can be proved when
$\|\bar{\rho}-1\|_{W_{2}^{N+1,\infty}}$ is sufficiently small. We can also prove the stability of stationary solutions in the framework of \cite{RY} if $\|n_{b}-1\|_{W_{0}^{N+1,\infty}}$ is sufficiently small. In order to obtain the same convergence rates,
$\|n_{b}-1\|_{W_{2}^{N+4,\infty}}$ should be sufficiently small in the process of dealing with $\rho_{st}\bar{u}$ as in Section \ref{sec4}, $$
\|\rho_{st} \bar{u}\|_{L^1} \leq \|\rho_{st}\|\left\|\bar{u}
\right\| \leq C \|\rho_{st}\|_{W_{2}^{N+4,\infty}}\| \bar{u}\|. $$ Since $W_{2}^{N+4,\infty}\subseteq W_{0}^{N+4,2}$, it seems better to consider the existence of steady states in the weighted Sobolev space $W_{k}^{m,2}$.
Let us introduce some notations for the use throughout this paper. $C$ denotes some positive (generally large) constant and $ \lambda$ denotes some positive (generally small) constant, where both $C$ and $ \lambda$ may take different values in different places. For two quantities $a$ and $b$, $a\sim b$ means $\lambda a \leq b \leq \frac{1}{\lambda} a $ for a generic constant $0<\lambda<1$. For any integer $m\geq 0$, we use $H^{m}$, $\dot{H}^{m}$ to denote the usual Sobolev space $H^{m}(\mathbb{R}^{3})$ and the corresponding $m$-order homogeneous Sobolev space, respectively. Set $L^{2}=H^{m}$ when $m = 0$. For simplicity, the norm of $ H^{m}$ is denoted by
$\|\cdot\|_{m} $ with $\|\cdot \|=\|\cdot\|_{0}$. We use $ \langle\cdot, \cdot \rangle$ to denote the inner product over the Hilbert space $ L^{2}(\mathbb{R}^{3})$, i.e. \begin{eqnarray*} \langle f,g \rangle=\int_{\mathbb{R}^{3}} f(x)g(x)dx,\ \ \ \ f = f(x),\ \ g = g(x)\in L^2(\mathbb{R}^{3}). \end{eqnarray*}
For a multi-index $\alpha = [\alpha_1, \alpha_2, \alpha_3]$, we denote $\partial^{\alpha} =
\partial^{\alpha_{1}}_ {x_1}\partial^{\alpha_{2}}_ {x_2} \partial^{\alpha_{3}}_ {x_3} $. The length of $ \alpha$ is $|\alpha| = \alpha_1 + \alpha_2 + \alpha_3$. For simplicity, we also set $\partial_{j}=\partial_{x_{j}}$ for $j = 1, 2, 3$.
We conclude this section by stating the arrangement of the rest of this paper. In Section 2, we prove the existence of the stationary solution. In Section 3, we reformulate the Cauchy problem under consideration and obtain asymptotic stability of solutions near the stationary state provided that the initial perturbation is sufficiently small. In Section 4, we study the time-decay rates of solutions to the stationary solutions by combining the $L^p$-$L^q$ time-decay property of the linearized homogeneous system with time-weighted estimate.
\section{Existence of stationary solution}\label{sec2} In this section, we will prove the existence of stationary solutions to $\eqref{sta.eq0}$ by using the contraction mapping theorem. From $\eqref{sta.eq0}_2$, there exists $\phi_{st}$ such that $E_{st}=\nabla \phi_{st}$; this turns equation $\eqref{sta.eq0}$ into \begin{eqnarray}\label{sta.eq1} \left\{\begin{aligned} &\frac{1}{n_{st}}\nabla p(n_{st})=-\nabla\phi_{st},\\ &\Delta\phi_{st}=n_{b}(x)-n_{st}. \end{aligned}\right. \end{eqnarray} We introduce the nonlinear transformation (cf. \cite{Deng}) \begin{eqnarray}\label{sta.tra} Q_{st}=\frac{\gamma}{\gamma-1}(n_{st}^{\gamma-1}-1). \end{eqnarray} From $\eqref{sta.eq1}$ and $\eqref{sta.tra}$, we derive the following elliptic equation \begin{eqnarray}\label{sta.ellip} \Delta Q_{st}=\left(\frac{\gamma-1}{\gamma}Q_{st}+1\right)^{\frac{1}{\gamma-1}}-n_{b}(x). \end{eqnarray} For convenience, we replace $Q_{st}$ by $\phi$ in the following. Equation $\eqref{sta.ellip}$ can be rewritten in the integral form \begin{eqnarray*} \phi=T(\phi)=G*\left(\left(\frac{\gamma-1}{\gamma}\phi+1\right)^{\frac{1}{\gamma-1}}-\frac{1}{\gamma}\phi -n_{b}(x)\right), \end{eqnarray*} where $G=G(x)$ is given by \begin{eqnarray*}
G(x)=-\frac{1}{4\pi|x|}e^{-\tfrac{1}{\sqrt{\gamma}}|x|} \end{eqnarray*} is the fundamental solution of the linear elliptic operator $\Delta_{x}-\frac{1}{\gamma}$, i.e., $\Delta_{x}G-\frac{1}{\gamma}G=\delta(x)$ in the sense of distributions. Thus \eqref{sta.ellip} admits a solution if and only if the nonlinear mapping $T$ has a fixed point. Define \begin{eqnarray*}
\mathscr{B}_{m,k}(B)=\{\phi\in W^{m,2}_{k}(\mathbb{R}^{3});\|\phi\|_{W^{m,2}_{k}}\leq B\|n_{b}-1\|_{W^{m,2}_{k}},\ m\geq2\} \end{eqnarray*} for some constant $B$ to be determined later. Next, we prove that if
$\|n_{b}-1\|_{W^{m,2}_{k}}$ is small enough, there exists a constant $B$ such that $T:\mathscr{B}_{m,k}(B)\rightarrow \mathscr{B}_{m,k}(B) $ is a contraction mapping. To this end, for simplicity, let us denote \begin{eqnarray*} g(x)=\left(\frac{\gamma-1}{\gamma}x+1\right)^{\frac{1}{\gamma-1}}-\frac{1}{\gamma}x-1. \end{eqnarray*} Then it holds that \begin{eqnarray}\label{T.phi}
T(\phi)(x)=-\int_{\mathbb{R}^{3}}\frac{1}{4\pi|x-y|}e^{-\tfrac{1}{\sqrt{\gamma}}|x-y|} [g(\phi(y))-(n_{b}(y)-1)]dy. \end{eqnarray} Applying $\partial_{x}^{\alpha}$ to both sides of $\eqref{T.phi}$, moving the derivatives onto $y$ and integrating by parts $|\alpha|$ times, one has \begin{eqnarray}\label{T.estimate} \arraycolsep=1.5pt \begin{array}[b]{rl}
\partial_{x}^{\alpha}T(\phi)(x)=&\displaystyle-
\int_{\mathbb{R}^{3}}\frac{1}{4\pi|x-y|}e^{-\tfrac{1}{\sqrt{\gamma}}|x-y|} [\partial^{\alpha}_{y} g(\phi(y))-\partial^{\alpha}_{y}(n_{b}(y)-1)]dy\\[5mm]
=&G*(\partial^{\alpha}g(\phi)-\partial^{\alpha}(n_{b}-1)). \end{array} \end{eqnarray} Here we list some properties of the operator $G*$. \begin{lemma}\label{Pro.OG} For any $k\geq 0$, it holds that \begin{eqnarray}\label{pro.Gdecay}
\int_{\mathbb{R}^{3}}\frac{1}{|y|}e^{-\tfrac{1}{\sqrt{\gamma}}|y|}\frac{1}{(1+|x-y|)^{k}}dy\leq
\frac{C_{k}}{(1+|x|)^{k}}, \end{eqnarray} and for any $f\in W_{k}^{m,2}$, \begin{eqnarray}\label{pro.G}
\|(1+|x|)^{\frac{k}{2}}(G*f)\|\leq C_{k}^{\frac{1}{2}}\|G\|_{L^{1}}^{\frac{1}{2}}\|(1+|x|)^{\frac{k}{2}}f\|. \end{eqnarray} \end{lemma} \textit{Proof.} \eqref{pro.Gdecay} has been proved in \cite{RY}. We only prove \eqref{pro.G} by using $\eqref{pro.Gdecay}$. \begin{eqnarray*} \arraycolsep=1.5pt \begin{array}[b]{rl}
\displaystyle\left| \int_{\mathbb{R}^{3}}G(x-y)f(y)dy \right|
\leq & \displaystyle \int_{\mathbb{R}^{3}}\frac{|G(x-y)|^{\frac{1}{2}}}{(1+|y|)^{\frac{k}{2}}}
|G(x-y)|^{\frac{1}{2}}(1+|y|)^{\frac{k}{2}}|f(y)|dy\\[5mm]
\leq & \displaystyle \left(\int_{\mathbb{R}^{3}}\frac{|G(x-y)|}{(1+|y|)^{k}}dy\right)^{\frac{1}{2}}
\left(\int_{\mathbb{R}^{3}}|G(x-y)|(1+|y|)^{k}|f(y)|^2dy\right)^{\frac{1}{2}}\\[5mm] \leq & \displaystyle
\frac{C_{k}^{\frac{1}{2}}}{(1+|x|)^{\frac{k}{2}}}
\left(\int_{\mathbb{R}^{3}}|G(x-y)|(1+|y|)^{k}|f(y)|^2dy\right)^{\frac{1}{2}}. \end{array} \end{eqnarray*} Then \begin{eqnarray*} \arraycolsep=1.5pt \begin{array}[b]{rl}
& \displaystyle \int_{\mathbb{R}^{3}}(1+|x|)^{k}\left| \int_{\mathbb{R}^{3}}G(x-y)f(y)dy \right|^2dx\\[5mm]
\leq & \displaystyle C_{k} \int_{\mathbb{R}^{3}}\int_{\mathbb{R}^{3}}|G(x-y)|(1+|y|)^{k}|f(y)|^2dydx\\[5mm]
= & \displaystyle C_{k} \int_{\mathbb{R}^{3}}\int_{\mathbb{R}^{3}}|G(x-y)|(1+|y|)^{k}|f(y)|^2dxdy\\[5mm]
= & \displaystyle C_{k} \int_{\mathbb{R}^{3}}(1+|y|)^{k}|f(y)|^2 dy \int_{\mathbb{R}^{3}}|G(x-y)|dx\\[5mm]
=& C_{k}\|G\|_{L^{1}}\|(1+|x|)^{\frac{k}{2}}f\|^2. \end{array} \end{eqnarray*}
\textbf{Remark:} When $k=0$, one can take $C_{k}=\|G\|_{L^{1}}$, and \eqref{pro.G} reduces to Young's inequality.
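For completeness, one may check directly that the kernel $G$ satisfies the claimed equation away from the origin. Writing $r=|x|$ and using $\Delta u(r)=\frac{1}{r}(ru)''$ for radial functions, one has
\begin{eqnarray*}
rG=-\frac{1}{4\pi}e^{-\tfrac{1}{\sqrt{\gamma}}r},\ \ \ \
(rG)''=-\frac{1}{4\pi\gamma}e^{-\tfrac{1}{\sqrt{\gamma}}r}=\frac{1}{\gamma}\,rG,
\end{eqnarray*}
so that $\Delta G=\frac{1}{\gamma}G$ for $r>0$, while near the origin $G$ behaves like the Newtonian potential $-\frac{1}{4\pi r}$, which produces the Dirac mass at $x=0$.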
By \eqref{pro.G} and \eqref{T.estimate}, one has \begin{eqnarray*} \arraycolsep=1.5pt \begin{array}[b]{rl}
& \|(1+|x|)^{\frac{k}{2}}\partial_{x}^{\alpha}T(\phi)(x)\|\\[3mm]
\leq & C\|(1+|x|)^{\frac{k}{2}}\partial^{\alpha}g(\phi)\|
+C\|(1+|x|)^{\frac{k}{2}}\partial^{\alpha}(n_{b}-1)\|. \end{array} \end{eqnarray*} By the definition $\eqref{def.norm}$ of the norm $
\|\cdot\|_{W^{m,2}_{k}}$, one has
\begin{eqnarray}\label{T.nb} \arraycolsep=1.5pt \begin{array}[b]{rl}
\|T(\phi)(x)\|_{W^{m,2}_{k}}
= & \displaystyle \left(\sum_{|\alpha|\leq m}\|(1+|x|)^{\frac{k}{2}}\partial_{x}^{\alpha}T(\phi)(x)\|^2\right)^{\frac{1}{2}}\\[5mm]
\leq & \displaystyle C\left(\sum_{|\alpha|\leq m}\|(1+|x|)^{\frac{k}{2}}\partial^{\alpha}g(\phi)\|^2\right)^{\frac{1}{2}}
+C\left(\sum_{|\alpha|\leq m}\|(1+|x|)^{\frac{k}{2}}\partial^{\alpha}(n_{b}-1)\|^2\right)^{\frac{1}{2}}\\[5mm]
\leq & \displaystyle C\left(\sum_{|\alpha|\leq m}\|(1+|x|)^{\frac{k}{2}}\partial^{\alpha}g(\phi)\|^2\right)^{\frac{1}{2}}
+C\|n_{b}-1\|_{W^{m,2}_{k}}.
\end{array} \end{eqnarray} On the other hand, note \begin{eqnarray*}
g(\phi)=\int_{0}^{1}\int_{0}^{\theta}g''(\tau\phi)d\tau d\theta
\phi^{2}\triangleq h(\phi)\phi^{2}, \end{eqnarray*} where $g''(x)=\frac{2-\gamma}{\gamma^{2}}\left(\frac{\gamma-1}{\gamma}x+1\right)^{\frac{3-2\gamma}{\gamma-1}}$.
It is straightforward to check that \begin{eqnarray*}
\|(1+|x|)^{\frac{k}{2}}\partial^{\alpha}(h(\phi)\phi^{2})\|\leq \sum_{\beta_{1}+\beta_{2}+\beta_{3}=\alpha}C_{\beta_1,\beta_2,\beta_{3}}^{\alpha}
\|(1+|x|)^{\frac{k}{2}}\partial^{\beta_{1}}h(\phi)\partial^{\beta_2}\phi\partial^{\beta_3}\phi\|. \end{eqnarray*}
In addition, one has the following claim.
\textbf{Claim}: \begin{eqnarray}\label{T.gphi}
\|(1+|x|)^{\frac{k}{2}}\partial^{\beta_{1}}h(\phi)\partial^{\beta_2}\phi\partial^{\beta_3}\phi\|\leq C \|\phi\|^2_{W^{m,2}_{k}}. \end{eqnarray}
\textit{Proof of claim:} We prove \eqref{T.gphi} in two cases.
\textbf{Case 1.} $\beta_{1}=0$. In this case,
$|\beta_{2}|+|\beta_{3}|\leq m$, thus one can suppose
$|\beta_{2}|\leq [\frac{m}{2}]$ by the symmetry of $\beta_{2}$ and $\beta_{3}$. It follows that \begin{eqnarray*} \arraycolsep=1.5pt \begin{array}[b]{rl}
\|(1+|x|)^{\frac{k}{2}}h(\phi)\partial^{\beta_2}\phi\partial^{\beta_3}\phi\| \leq &
\|h(\phi)\|_{L^{\infty}}\|\partial^{\beta_2}\phi\|_{L^{6}}\|(1+|x|)^{\frac{k}{2}}\partial^{\beta_3}\phi\|_{L^{3}}\\[3mm]
\leq & C \|\nabla
\partial^{\beta_2}\phi\|\|(1+|x|)^{\frac{k}{2}}\partial^{\beta_3}\phi\|_{1}\\[3mm]
\leq & C \|\phi\|^2_{W^{m,2}_{k}}. \end{array} \end{eqnarray*} Here, we have used that $h(\cdot)$ is continuous in its argument,
\|\phi\|_{L^{\infty}}\leq C\|\nabla \phi\|_{H^{1}}\leq C\|\phi\|_{W^{m,2}_{k}}\leq C \|n_{b}-1\|_{W^{m,2}_{k}}\ll 1, \end{eqnarray*} and $m\geq 2$.
\textbf{Case 2}. $|\beta_{1}|\geq 1$. Notice that \begin{eqnarray*}
\partial^{\beta_{1}}h(\phi)=\sum_{l=1}^{|\beta_{1}|}h^{(l)}(\phi) \sum_{\gamma_{1}+\gamma_{2}+\cdots+\gamma_{l}=\beta_{1}} C_{\gamma_{1},\gamma_{2},\cdots,\gamma_{l}}\Pi_{i=1}^{l}\partial^{\gamma_{i}}\phi, \end{eqnarray*} where each $|\gamma_{i}|\geq 1$. Then \eqref{T.gphi} can be obtained in the same way as in Case 1, because each $h^{(l)}(\phi)$ is also bounded.
Putting $\eqref{T.gphi}$ into \eqref{T.nb}, and using the above estimates, one has \begin{eqnarray}\label{T.phi.est}
\|T(\phi)(x)\|_{W^{m,2}_{k}}\leq C B^2
\|n_{b}-1\|_{W^{m,2}_{k}}^2+C\|n_{b}-1\|_{W^{m,2}_{k}}. \end{eqnarray}
Finally, for any $\phi_{1}=\phi_{1}(x)$ and $\phi_{2}=\phi_{2}(x)$, it holds that \begin{eqnarray*} T(\phi_{1})-T(\phi_{2})=G*(g(\phi_{1})-g(\phi_{2})) \end{eqnarray*} with \begin{eqnarray*} g(\phi_{1})-g(\phi_{2})=\int_{0}^{1}g'(\theta\phi_{1}+(1-\theta)\phi_{2})d\theta(\phi_{1}-\phi_{2}). \end{eqnarray*} Notice that for any $\phi=\phi(x)$, \begin{eqnarray*} \arraycolsep=1.5pt \begin{array}{rcl} g'(\phi)&=&\displaystyle\frac{1}{\gamma}\left(\frac{\gamma-1}{\gamma}\phi+1\right)^{\frac{2-\gamma}{\gamma-1}} -\frac{1}{\gamma}\\[3mm] &=&\displaystyle\int_{0}^{1}\frac{2-\gamma}{\gamma^{2}} \left(\frac{\gamma-1}{\gamma}\theta\phi+1\right)^{\frac{3-2\gamma}{\gamma-1}}d\theta\phi. \end{array} \end{eqnarray*} Then the same computations as for $\eqref{T.phi.est}$ yield \begin{eqnarray}\label{T.contract} \arraycolsep=1.5pt \begin{array}{rl}
&\|T(\phi_{1})-T(\phi_{2})\|_{W_{k}^{m,2}}\\[3mm] \leq & C
(\|\phi_{1}\|_{W^{m,2}_{k}}+\|\phi_{2}\|_{W^{m,2}_{k}})\|\phi_{1}-\phi_{2}\|_{W^{m,2}_{k}}. \end{array} \end{eqnarray} Combining $\eqref{T.phi.est}$ with $ \eqref{T.contract}$, the standard argument implies that $T$ has a unique fixed point $\phi$ in $ \mathscr{B}_{m,k}(B)$ for a proper constant $B$ provided that
$\|n_{b}-1\|_{W^{m,2}_{k}}$ is small enough. This completes the proof of Theorem \ref{sta.existence}.
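For completeness, we sketch this standard argument with explicit constants; here $C$ denotes the larger of the constants in \eqref{T.phi.est} and \eqref{T.contract}, and $\varepsilon:=\|n_{b}-1\|_{W^{m,2}_{k}}$. Choosing $B=2C$, the estimate \eqref{T.phi.est} gives, for $\phi\in\mathscr{B}_{m,k}(B)$,
\begin{eqnarray*}
\|T(\phi)\|_{W^{m,2}_{k}}\leq 4C^{3}\varepsilon^{2}+C\varepsilon\leq 2C\varepsilon=B\varepsilon
\ \ \ \mbox{whenever}\ \varepsilon\leq \frac{1}{4C^{2}},
\end{eqnarray*}
so $T$ maps $\mathscr{B}_{m,k}(B)$ into itself, while \eqref{T.contract} gives, for $\phi_{1},\phi_{2}\in\mathscr{B}_{m,k}(B)$,
\begin{eqnarray*}
\|T(\phi_{1})-T(\phi_{2})\|_{W^{m,2}_{k}}\leq 4C^{2}\varepsilon\|\phi_{1}-\phi_{2}\|_{W^{m,2}_{k}}
\leq\frac{1}{2}\|\phi_{1}-\phi_{2}\|_{W^{m,2}_{k}}
\ \ \ \mbox{whenever}\ \varepsilon\leq \frac{1}{8C^{2}},
\end{eqnarray*}
so the contraction mapping theorem applies.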
Let us conclude this section with a remark. The existence of solutions to the elliptic equation \eqref{sta.ellip} can also be proved in the framework of \cite{RY} when
$\|n_{b}-1\|_{W^{m,\infty}_{k}}$ is sufficiently small. We instead consider the existence when $\|n_{b}-1\|_{W^{m,2}_{k}}$ is sufficiently small in order to derive a more general conclusion. In fact, in the process of dealing with the stability and convergence rates, only the smallness of $\|n_{b}-1\|_{W^{m,2}_{0}}$ is assumed, and no spatial decay of $n_{b}(x)-1$ at infinity is needed.
\section{Stability of stationary solution}
\subsection{Reformulation of the problem} Let $[n,u,E,B]$ be a smooth solution to the Cauchy problem of the Euler-Maxwell system (\ref{1.1}) with given initial data (\ref{1.2}) satisfying (\ref{1.3}). Set
\begin{eqnarray}\label{2.1} &&\left\{
\begin{aligned}
&\sigma(t,x)=\frac{2}{\gamma-1}\left\{\left[n\left(\frac{t}{\sqrt{\gamma}},x\right)\right]
^{\frac{\gamma-1}{2}}-1\right\}, \ \ \
v=\frac{1}{\sqrt{\gamma}}u\left(\frac{t}{\sqrt{\gamma}},x\right),
\\[5mm]
&\ \
\tilde{E}=\frac{1}{\sqrt{\gamma}}E\left(\frac{t}{\sqrt{\gamma}},x\right),\ \
\ \tilde{B}=\frac{1}{\sqrt{\gamma}}B\left(\frac{t}{\sqrt{\gamma}},x\right).
\end{aligned}\right. \end{eqnarray} Then, $V:=[\sigma,v,\tilde{E},\tilde{B}]$ satisfies
\begin{equation}\label{2.2} \left\{
\begin{aligned}
&\partial_t \sigma+\left(\frac{\gamma-1}{2}\sigma+1\right)\nabla\cdot v+v\cdot \nabla \sigma=0,\\
&\partial_t v+v \cdot \nabla
v+\left(\frac{\gamma-1}{2}\sigma+1\right)\nabla \sigma=-\left(\frac{1}{\sqrt{\gamma}}\tilde{E}+v\times \tilde{B}\right)
-\frac{1}{\sqrt{\gamma}}v,\\
&\partial_t\tilde{E}-\frac{1}{\sqrt{\gamma}}\nabla\times\tilde{B}
=\frac{1}{\sqrt{\gamma}}v+\frac{1}{\sqrt{\gamma}}[\Phi(\sigma)+\sigma]v,\\
&\partial_t \tilde{B}+\frac{1}{\sqrt{\gamma}}\nabla \times \tilde{E}=0,\\
&\nabla \cdot
\tilde{E}=-\frac{1}{\sqrt{\gamma}}[\Phi(\sigma)+\sigma]
+\frac{1}{\sqrt{\gamma}}(n_{b}(x)-1), \ \ \nabla
\cdot \tilde{B}=0, \ \ \ t>0,\ x\in\mathbb{R}^{3}, \end{aligned}\right. \end{equation} with initial data \begin{eqnarray}\label{2.3}
V|_{t=0}=V_{0}:=[\sigma_{0},v_{0},\tilde{E}_{0},\tilde{B}_{0}],\ \ x\in\mathbb{R}^{3}. \end{eqnarray} Here, $\Phi(\cdot)$ is defined by \begin{eqnarray}\label{def.phi} \Phi(\sigma)=\left(\frac{\gamma-1}{2}\sigma+1\right)^{\frac{2}{\gamma-1}}-\sigma-1, \end{eqnarray} and $V_{0}=[\sigma_{0},v_{0},\tilde{E}_{0},\tilde{B}_{0}]$ is given from $[n_{0},u_{0},E_{0},B_0]$ according to the transform (\ref{2.1}), and hence $V_{0}$ satisfies \begin{eqnarray}\label{2.4}
\nabla\cdot\tilde{E}_0=-\frac{1}{\sqrt{\gamma}}[\Phi(\sigma_{0})+\sigma_{0}]
+\frac{1}{\sqrt{\gamma}}(n_{b}(x)-1),\ \ \ \
\nabla \cdot \tilde{B}_0=0,\ \ \ x\in\mathbb{R}^{3}. \end{eqnarray}
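Let us record the elementary identity behind the nonlinear terms in \eqref{2.2}. By \eqref{2.1} and \eqref{def.phi}, one has
\begin{eqnarray*}
n\left(\frac{t}{\sqrt{\gamma}},x\right)=\left(\frac{\gamma-1}{2}\sigma+1\right)^{\frac{2}{\gamma-1}}=1+\sigma+\Phi(\sigma),
\end{eqnarray*}
so that $\Phi(\sigma)+\sigma+1$ is simply the scaled density expressed in terms of $\sigma$. In particular, the right-hand side of $\eqref{2.2}_{3}$ equals $\frac{1}{\sqrt{\gamma}}nv$, and $\eqref{2.2}_{5}$ reads $\nabla\cdot\tilde{E}=\frac{1}{\sqrt{\gamma}}(n_{b}(x)-n)$.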
On the other hand, set \begin{eqnarray}\label{sta.tran}
\sigma_{st}(x)=\frac{2}{\gamma-1}\left\{n_{st}(x)^{\frac{\gamma-1}{2}}-1\right\},
\ \ \ \
\tilde{E}_{st}=\frac{1}{\sqrt{\gamma}}E_{st}(x). \end{eqnarray} Then, $[\sigma_{st},\tilde{E}_{st}]$ satisfies
\begin{eqnarray}\label{sta.eq} \left\{\begin{aligned} & \left(\frac{\gamma-1}{2}\sigma_{st}+1\right)\nabla\sigma_{st}=-\frac{1}{\sqrt{\gamma}}\tilde{E}_{st},\\ &\frac{1}{\sqrt{\gamma}}\nabla\times \tilde{E}_{st}=0,\\
&\nabla \cdot
\tilde{E}_{st}=\frac{1}{\sqrt{\gamma}}(n_{b}(x)-1)-\frac{1}{\sqrt{\gamma}}(\Phi(\sigma_{st})+\sigma_{st}). \end{aligned}\right. \end{eqnarray} Based on the existence result proved in Section 2, we will study the stability of the stationary state $[\sigma_{st},0,\tilde{E}_{st},0]$. Set the perturbations $[\bar{\sigma},\bar{v},\bar{E},\bar{B}]$ by \begin{eqnarray*} \bar{\sigma}=\sigma-\sigma_{st},\ \ \bar{v}=v,\ \ \bar{E}=\tilde{E}-\tilde{E}_{st},\ \ \bar{B}=\tilde{B}. \end{eqnarray*} Combining \eqref{2.2} with \eqref{sta.eq}, then $\bar{V}:=[\bar{\sigma},\bar{v},\bar{E},\bar{B}]$ satisfies
\begin{equation}\label{sta.equ} \left\{
\begin{aligned}
&\partial_t \bar{\sigma}+(\frac{\gamma-1}{2}\bar{\sigma}+1)\nabla\cdot \bar{v}+\bar{v}\cdot \nabla \bar{\sigma}
+\bar{v}\cdot \nabla \sigma_{st}+\frac{\gamma-1}{2}\sigma_{st}\nabla\cdot \bar{v}=0,\\
&\partial_t \bar{v}+\bar{v} \cdot \nabla
\bar{v}+(\frac{\gamma-1}{2}\bar{\sigma}+1)\nabla \bar{\sigma}
+\frac{\gamma-1}{2}\bar{\sigma}\nabla \sigma_{st}+\frac{\gamma-1}{2}\sigma_{st}\nabla\bar{\sigma}=
-(\frac{1}{\sqrt{\gamma}}\bar{E}+\bar{v}\times \bar{B})
-\frac{1}{\sqrt{\gamma}}\bar{v},\\
&\partial_t\bar{E}-\frac{1}{\sqrt{\gamma}}\nabla\times \bar{B}
=\frac{1}{\sqrt{\gamma}}\bar{v}+
\frac{1}{\sqrt{\gamma}}[\Phi(\bar{\sigma}+\sigma_{st})+\bar{\sigma}+\sigma_{st}]\bar{v},\\
&\partial_t \bar{B}+\frac{1}{\sqrt{\gamma}}\nabla \times \bar{E}=0,\\
&\nabla \cdot
\bar{E}=-\frac{1}{\sqrt{\gamma}}[\Phi(\bar{\sigma}+\sigma_{st})-\Phi(\sigma_{st})]
-\frac{1}{\sqrt{\gamma}}\bar{\sigma}, \ \ \nabla
\cdot \bar{B}=0, \ \ t>0,\ x\in\mathbb{R}^{3}, \end{aligned}\right. \end{equation} with initial data \begin{eqnarray}\label{sta.equi}
\bar{V}|_{t=0}=\bar{V}_{0}:=[\sigma_{0}-\sigma_{st},v_{0},\tilde{E}_{0}-\tilde{E}_{st},\tilde{B}_{0}],\ \ x\in\mathbb{R}^{3}. \end{eqnarray} Here, $\Phi(\cdot)$ is defined by \eqref{def.phi},
and $\bar{V}_{0}$ satisfies \begin{eqnarray}\label{sta.equC}
\nabla \cdot
\bar{E}_{0}=-\frac{1}{\sqrt{\gamma}}[\Phi(\bar{\sigma}_{0}+\sigma_{st})-\Phi(\sigma_{st})]
-\frac{1}{\sqrt{\gamma}}\bar{\sigma}_{0}, \ \ \nabla
\cdot \bar{B}_{0}=0, \ \ x\in\mathbb{R}^{3}. \end{eqnarray}
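For instance, the constraint $\eqref{sta.equ}_{5}$ follows by subtracting $\eqref{sta.eq}_{3}$ from $\eqref{2.2}_{5}$ with $\sigma=\bar{\sigma}+\sigma_{st}$:
\begin{eqnarray*}
\nabla\cdot\bar{E}&=&-\frac{1}{\sqrt{\gamma}}[\Phi(\bar{\sigma}+\sigma_{st})+\bar{\sigma}+\sigma_{st}]
+\frac{1}{\sqrt{\gamma}}(n_{b}-1)-\frac{1}{\sqrt{\gamma}}(n_{b}-1)+\frac{1}{\sqrt{\gamma}}(\Phi(\sigma_{st})+\sigma_{st})\\
&=&-\frac{1}{\sqrt{\gamma}}[\Phi(\bar{\sigma}+\sigma_{st})-\Phi(\sigma_{st})]-\frac{1}{\sqrt{\gamma}}\bar{\sigma},
\end{eqnarray*}
and evaluating this identity at $t=0$ gives the compatibility condition \eqref{sta.equC}.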
In what follows, we suppose the integer $N \geq 3$. Moreover, for $\bar{V}=[\bar{\sigma},\bar{v},\bar{E},\bar{B}]$, we define the full instant energy functional $\mathcal {E}_{N}(\bar{V}(t))$ and the high-order instant energy functional $\mathcal {E}_{N}^{h}(\bar{V}(t))$ by
\begin{equation}\label{de.E} \arraycolsep=1.5pt \begin{array}{rl}
\mathcal{E}_{N}(\bar{V}(t))=&\displaystyle\sum_{|\alpha|\leq N}\int_{\mathbb{R}^3}(1+\sigma_{st}+\Phi(\sigma_{st}))
(|\partial^{\alpha}\bar{\sigma}|^2+|\partial^{\alpha}\bar{v}|^2)dx+\|[\bar{E},\bar{B}]\|_{N}^{2}\\[5mm]
&\displaystyle+\kappa_{1}\sum_{|\alpha|\leq N-1} \langle
\partial^{\alpha}\bar{v},\nabla\partial^{\alpha}\bar{\sigma}\rangle+\kappa_{2}\sum_{|\alpha|\leq N-1}\langle \partial^{\alpha}\bar{v},\partial^{\alpha}\bar{E}\rangle\\[5mm]
&\displaystyle-\kappa_{3}\sum_{|\alpha|\leq N-2}\langle \nabla \times \partial^{\alpha}\bar{E},\partial^{\alpha}\bar{B}\rangle, \end{array} \end{equation} and \begin{equation}\label{de.Eh} \begin{aligned}
\mathcal{E}_{N}^{h}(\bar{V}(t))&=\sum_{1\leq|\alpha|\leq N}\int_{\mathbb{R}^3}(1+\sigma_{st}+\Phi(\sigma_{st}))
(|\partial^{\alpha}\bar{\sigma}|^2+|\partial^{\alpha}\bar{v}|^2)dx+\|\nabla[\bar{E},\bar{B}]\|_{N-1}^{2}\\
&+\kappa_{1}\sum_{1\leq|\alpha|\leq N-1}\langle
\partial^{\alpha}\bar{v},\nabla\partial^{\alpha}\bar{\sigma}\rangle+\kappa_{2}\sum_{1\leq|\alpha|\leq N-1}\langle \partial^{\alpha}\bar{v},\partial^{\alpha}\bar{E}\rangle\\[3mm]
&-\kappa_{3}\sum_{1\leq |\alpha|\leq N-2}\langle \nabla
\times\partial^{\alpha}\bar{E},\partial^{\alpha}\bar{B}\rangle, \end{aligned} \end{equation} respectively, where $0<\kappa_{3}\ll\kappa_{2}\ll\kappa_{1}\ll 1$ are constants to be properly chosen in the later proof. Notice that since all constants $\kappa_i$ $(i=1,2,3)$ are small enough, one has \begin{equation*}
\mathcal {E}_{N}(\bar{V}(t))\sim
\|[\bar{\sigma},\bar{v},\bar{E},\bar{B}] \|_{N}^{2},\quad \mathcal
{E}_{N}^{h}(\bar{V}(t))\sim \|\nabla
[\bar{\sigma},\bar{v},\bar{E},\bar{B}] \|_{N-1}^{2}. \end{equation*} We further define the dissipation rates $\mathcal {D}_{N}(\bar{V}(t))$, $\mathcal {D}_{N}^{h}(\bar{V}(t))$ by
\begin{eqnarray}\label{de.D} \arraycolsep=1.5pt \begin{array}{rl}
\mathcal {D}_{N}(\bar{V}(t))=\displaystyle \sum_{|\alpha|\leq N}\int_{\mathbb{R}^3}(1+\sigma_{st}&+\Phi(\sigma_{st}))|\partial^{\alpha}\bar{v}|^{2}dx\\[3mm]
&+\|\bar{\sigma}\|_{N}^{2}+\|\nabla[\bar{E},\bar{B}]\|_{N-2}^{2}+\|\bar{E}\|^{2}, \end{array} \end{eqnarray} and \begin{eqnarray}\label{de.Dh} \arraycolsep=1.5pt \begin{array}{rl} \mathcal {D}_{N}^{h}(\bar{V}(t))=\displaystyle
\sum_{1\leq|\alpha|\leq N}\int_{\mathbb{R}^3}(1+\sigma_{st}&+\Phi(\sigma_{st}))|\partial^{\alpha}\bar{v}|^{2}dx\\[3mm]
&+\|\nabla\bar{\sigma}\|_{N-1}^{2}+\|\nabla^2[\bar{E},\bar{B}]\|_{N-3}^{2}+\|\nabla\bar{E}\|^{2}. \end{array} \end{eqnarray} Then, concerning the reformulated Cauchy problem $\eqref{sta.equ}$-$\eqref{sta.equi}$, one has the following global existence result.
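Before stating it, let us briefly indicate why $\mathcal {E}_{N}(\bar{V}(t))\sim\|[\bar{\sigma},\bar{v},\bar{E},\bar{B}]\|_{N}^{2}$ holds. Since $\sigma_{st}$ is small in $L^{\infty}$ and $\Phi(0)=\Phi'(0)=0$, the weight satisfies $\frac{1}{2}\leq 1+\sigma_{st}+\Phi(\sigma_{st})\leq 2$ pointwise, so the weighted integrals in \eqref{de.E} are comparable to $\sum_{|\alpha|\leq N}\|\partial^{\alpha}[\bar{\sigma},\bar{v}]\|^{2}$. Moreover, by the Cauchy-Schwarz inequality,
\begin{eqnarray*}
\left|\langle \partial^{\alpha}\bar{v},\nabla\partial^{\alpha}\bar{\sigma}\rangle\right|
\leq \frac{1}{2}\|\partial^{\alpha}\bar{v}\|^{2}+\frac{1}{2}\|\nabla\partial^{\alpha}\bar{\sigma}\|^{2},
\end{eqnarray*}
and similarly for the interactive terms with coefficients $\kappa_{2}$ and $\kappa_{3}$, so all the interactive terms are absorbed once $0<\kappa_{3}\ll\kappa_{2}\ll\kappa_{1}\ll 1$. The same reasoning applies to $\mathcal {E}_{N}^{h}(\bar{V}(t))$.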
\begin{proposition}\label{pro.2.1}
Suppose that $\|n_{b}-1\|_{W_{0}^{N+1,2}}$ is small enough and $\eqref{sta.equC}$ holds for given initial data $\bar{V}_{0}=[\sigma_{0}-\sigma_{st},v_{0},\tilde{E}_0-\tilde{E}_{st},\tilde{B}_{0}]$. Then, there are $\mathcal {E}_{N}(\cdot) $ and $\mathcal {D}_{N}(\cdot)$ in the form $\eqref{de.E} $ and $\eqref{de.D}$ such that the following holds true:
If $\mathcal {E}_{N}(\bar{V}_{0})>0$ is small enough, the Cauchy problem $\eqref{sta.equ}$-$\eqref{sta.equi}$ admits a unique global nonzero solution $\bar{V}=[\sigma-\sigma_{st},v,\tilde{E}-\tilde{E}_{st},\tilde{B}] $ satisfying \begin{eqnarray}\label{V.satisfy} \bar{V} \in C([0,\infty);H^{N}(\mathbb{R}^{3}))\cap {\rm Lip}([0,\infty);H^{N-1}(\mathbb{R}^{3})), \end{eqnarray} and \begin{eqnarray}\label{pro.2.1j} \mathcal {E}_{N}(\bar{V}(t))+\lambda\int_{0}^{t}\mathcal {D}_{N}(\bar{V}(s))ds\leq \mathcal {E}_{N}(\bar{V}_{0}) \end{eqnarray} for any $t\geq 0$. \end{proposition}
Moreover, the solutions obtained in Proposition $\ref{pro.2.1}$ indeed decay in time with certain rates under extra regularity and integrability conditions on the initial data. To state this, given $\bar{V}_{0}=[\sigma_{0}-\sigma_{st},v_{0},\tilde{E}_0-\tilde{E}_{st},\tilde{B}_{0}]$, set $\epsilon_{m}(\bar{V}_0)$ as \begin{eqnarray}\label{def.epsi}
\epsilon_{m}(\bar{V}_0)=\|\bar{V}_{0}\|_{m}+\|[v_{0},\tilde{E}_0-\tilde{E}_{st},\tilde{B}_{0}]\|_{L^{1}}, \end{eqnarray} for the integer $m \geq 6$. Then one has the following proposition. \begin{proposition}\label{pro.2.2}
Suppose that $\|n_{b}-1\|_{W_{0}^{N+4,2}}$ is small enough and $\eqref{sta.equC}$ holds for given initial data $\bar{V}_{0}=[\sigma_{0}-\sigma_{st},v_{0},\tilde{E}_0-\tilde{E}_{st},\tilde{B}_{0}]$. If $\epsilon_{N+3}(\bar{V}_{0})>0$ is small enough, then the solution $\bar{V}=[\sigma-\sigma_{st},v,\tilde{E}-\tilde{E}_{st},\tilde{B}] $ satisfies \begin{eqnarray}\label{V.decay}
\|\bar{V}(t)\|_{N} \leq C \epsilon_{N+3}(\bar{V}_{0})(1+t)^{-\frac{3}{4}}, \end{eqnarray} and \begin{eqnarray}\label{nablaV.decay}
\|\nabla \bar{V}(t)\|_{N-1} \leq C \epsilon_{N+3}(\bar{V}_{0})(1+t)^{-\frac{5}{4}} \end{eqnarray} for any $t\geq 0$. \end{proposition}
\subsection{A priori estimates} In this subsection, we prove that the stationary solution obtained in Section \ref{sec2} is stable under small initial perturbations.
We begin by using the refined energy method to obtain some uniform-in-time {\it a priori} estimates for smooth solutions to the Cauchy problem (\ref{sta.equ})-(\ref{sta.equi}). To this end, let us denote \begin{equation}\label{def.delta}
\delta=\|\sigma_{st}\|_{W_{0}^{N+1,2}}=\left(\sum_{|\alpha|\leq N+1}\int_{\mathbb{R}^3}|\partial^{\alpha}_{x}\sigma_{st}|^2dx\right)^{\frac{1}{2}} \end{equation} for simplicity of presentation. A careful look at the proof of Theorem \ref{sta.existence} shows that \begin{eqnarray*} \sigma_{st}&=&\frac{2}{\gamma-1}\left\{n_{st}^{\frac{\gamma-1}{2}}-1\right\}\\
&=&\frac{2}{\gamma-1}\left\{\left(\frac{\gamma-1}{\gamma}Q_{st}+1\right)^{\frac{1}{2}}-1\right\}\\
&=&\frac{2}{\gamma}\dfrac{Q_{st}}{\left(\frac{\gamma-1}{\gamma}Q_{st}+1\right)^{\frac{1}{2}}+1}\sim
Q_{st}. \end{eqnarray*}
It follows that $\delta \leq C\|Q_{st}\|_{W_{0}^{N+1,2}}\leq C\|n_{b}-1\|_{W_{0}^{N+1,2}}$ is small enough. Notice that (\ref{sta.equ}) is a quasi-linear symmetric hyperbolic system. The main goal of this subsection is to prove
\begin{theorem}\label{estimate}(\textrm{A priori estimates}). Let $0<T\leq \infty$ be given. Suppose $\bar{V}:=[\bar{\sigma},\bar{v},\bar{E},\bar{B}]\in C([0,T);H^{N}(\mathbb{R}^{3}))$ is smooth with \begin{eqnarray}\label{3.1}
\sup_{0\leq t<T}\|\bar{V}(t)\|_{N}\leq 1, \end{eqnarray} and assume that $\bar{V}$ solves the system (\ref{sta.equ}) for $t\in(0,T)$. Then, there are $\mathcal {E}_{N}(\cdot) $ and $\mathcal {D}_{N}(\cdot)$ in the form $\eqref{de.E} $ and $\eqref{de.D}$ such that \begin{eqnarray}\label{3.2} && \frac{d}{dt}\mathcal {E}_{N}(\bar{V}(t))+\lambda\mathcal {D}_{N}(\bar{V}(t))\leq
C[\mathcal {E}_{N}(\bar{V}(t))^{\frac{1}{2}}+\mathcal {E}_{N}(\bar{V}(t))+\delta]\mathcal {D}_{N}(\bar{V}(t)) \end{eqnarray} for any $0\leq t<T$. \end{theorem}
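Before giving the proof, let us point out how Theorem \ref{estimate} leads to Proposition \ref{pro.2.1}. As long as $\mathcal {E}_{N}(\bar{V}(t))$ and $\delta$ remain so small that
\begin{eqnarray*}
C[\mathcal {E}_{N}(\bar{V}(t))^{\frac{1}{2}}+\mathcal {E}_{N}(\bar{V}(t))+\delta]\leq\frac{\lambda}{2},
\end{eqnarray*}
the estimate \eqref{3.2} implies $\frac{d}{dt}\mathcal {E}_{N}(\bar{V}(t))+\frac{\lambda}{2}\mathcal {D}_{N}(\bar{V}(t))\leq 0$. Integrating over $[0,t]$ gives \eqref{pro.2.1j} (with $\lambda$ replaced by $\lambda/2$) and in particular $\mathcal {E}_{N}(\bar{V}(t))\leq \mathcal {E}_{N}(\bar{V}_{0})$, so the smallness condition is propagated in time; combined with the local existence theory, this closes the standard continuity argument.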
\begin{proof} The proof is divided into five steps.
\textbf{Step 1.} It holds that \begin{equation}\label{3.3} \begin{aligned}
&\frac{1}{2}\frac{d}{dt}\left(\sum_{|\alpha|\leq N}\int_{\mathbb{R}^3}(1+\sigma_{st}+\Phi(\sigma_{st}))
(|\partial^{\alpha}\bar{\sigma}|^2+|\partial^{\alpha}\bar{v}|^2)dx+\|[\bar{E},\bar{B}]\|_{N}^{2}\right)\\
&+\frac{1}{\sqrt{\gamma}}\sum_{|\alpha|\leq N}\int_{\mathbb{R}^3}(1+\sigma_{st}+\Phi(\sigma_{st}))|\partial^{\alpha}\bar{v}|^{2}dx\\ \leq &
C(\|\bar{V}\|_{N}+\delta)(\|[\bar{\sigma},\bar{v}]\|^{2}+\|\nabla[\bar{\sigma},\bar{v}]\|_{N-1}^{2}
+\|\nabla \bar{E}\|_{N-2}^2).
\end{aligned} \end{equation} In fact, applying $\partial^{\alpha}$ to the first two equations of
(\ref{sta.equ}) for $|\alpha|\leq N$, multiplying them by $(1+\sigma_{st}+\Phi(\sigma_{st}))\partial^{\alpha}\bar{\sigma}$ and $(1+\sigma_{st}+\Phi(\sigma_{st}))\partial^{\alpha}\bar{v}$ respectively, integrating in $x$, and then integrating by parts, we obtain
\begin{eqnarray}\label{3.4}
&& \begin{aligned}
&\frac{1}{2}\frac{d}{dt}\int_{\mathbb{R}^3}(1+\sigma_{st}+\Phi(\sigma_{st}))
(|\partial^{\alpha}\bar{\sigma}|^2+|\partial^{\alpha}\bar{v}|^2)dx+\frac{1}{\sqrt{\gamma}}
\langle \partial^{\alpha}\bar{E},(1+\sigma_{st}+\Phi(\sigma_{st}))\partial^{\alpha}\bar{v}\rangle\\
&+\frac{1}{\sqrt{\gamma}}\int_{\mathbb{R}^3}(1+\sigma_{st}+\Phi(\sigma_{st}))|\partial^{\alpha}\bar{v}|^{2}dx
=-\sum_{\beta<\alpha}C^{\alpha}_{\beta}I_{\alpha,\beta}(t)+I_{1}(t).
\end{aligned} \end{eqnarray} Here, $I_{\alpha,\beta}(t)=I_{\alpha,\beta}^{(\sigma)}(t)+I_{\alpha,\beta}^{(v)}(t)$ with \begin{eqnarray*} \arraycolsep=1.5pt \begin{array}{rl}
\displaystyle I_{\alpha,\beta}^{(\sigma)}(t)=& \displaystyle\langle \partial^{\alpha-\beta}\bar{v} \cdot \nabla \partial^{\beta}\bar{\sigma}, (1+\sigma_{st}+\Phi(\sigma_{st}))\partial^{\alpha}\bar{\sigma}\rangle\\[3mm] &\displaystyle+\frac{\gamma-1}{2}\langle\partial^{\alpha-\beta}\bar{\sigma} \partial^{\beta}\nabla\cdot\bar{v} ,(1+\sigma_{st}+\Phi(\sigma_{st}))\partial^{\alpha}\bar{\sigma}\rangle\\[3mm]
& \displaystyle+\frac{\gamma-1}{2}\langle \partial^{\alpha-\beta}\sigma_{st} \partial^{\beta}\nabla\cdot\bar{v} , (1+\sigma_{st}+\Phi(\sigma_{st}))\partial^{\alpha}\bar{\sigma} \rangle\\[3mm] & \displaystyle +\langle \partial^{\alpha-\beta}\bar{v}\cdot \partial^{\beta}\nabla \sigma_{st} ,(1+\sigma_{st}+\Phi(\sigma_{st}))\partial^{\alpha}\bar{\sigma}\rangle,
\end{array} \end{eqnarray*} \begin{eqnarray*} \arraycolsep=1.5pt \begin{array}{rl}
\displaystyle I_{\alpha,\beta}^{(v)}(t)=& \displaystyle\langle \partial^{\alpha-\beta}\bar{v} \cdot \nabla \partial^{\beta}\bar{v} , (1+\sigma_{st}+\Phi(\sigma_{st}))\partial^{\alpha}\bar{v}\rangle\\[3mm] &\displaystyle+\frac{\gamma-1}{2}\langle\partial^{\alpha-\beta}\bar{\sigma} \nabla\partial^{\beta}\bar{\sigma},(1+\sigma_{st}+\Phi(\sigma_{st}))\partial^{\alpha}\bar{v}\rangle\\[3mm] &\displaystyle+\frac{\gamma-1}{2}\langle \partial^{\alpha-\beta}\sigma_{st} \nabla \partial^{\beta}\bar{\sigma} ,(1+\sigma_{st}+\Phi(\sigma_{st}))\partial^{\alpha}\bar{v}\rangle\\[3mm] &\displaystyle+\langle \partial^{\alpha-\beta}\bar{v}\times \partial^{\beta}\bar{B} ,(1+\sigma_{st}+\Phi(\sigma_{st}))\partial^{\alpha}\bar{v}\rangle\\[3mm] &\displaystyle+\frac{\gamma-1}{2}\langle \partial^{\alpha-\beta}\bar{\sigma} \nabla \partial^{\beta}\sigma_{st} ,(1+\sigma_{st}+\Phi(\sigma_{st})) \partial^{\alpha}\bar{v} \rangle \end{array} \end{eqnarray*} and \begin{eqnarray*} \arraycolsep=1.5pt \begin{array}{rl}
I_{1}(t)=&
\displaystyle\frac{1}{2}\langle \nabla \cdot \bar{v},
(1+\sigma_{st}+\Phi(\sigma_{st}))(|\partial^{\alpha}\bar{\sigma}|^{2}+|\partial^{\alpha}\bar{v}|^{2})
\rangle\\[3mm]
& \displaystyle+ \frac{\gamma-1}{2}\langle \nabla \bar{\sigma}\cdot\partial^{\alpha}\bar{v} ,(1+\sigma_{st}+\Phi(\sigma_{st}))\partial^{\alpha}\bar{\sigma} \rangle-\langle \bar{v}\times \partial^{\alpha}\bar{B},(1+\sigma_{st}+\Phi(\sigma_{st}))\partial^{\alpha}\bar{v}\rangle\\[3mm] & \displaystyle+\frac{\gamma-1}{2}\langle \nabla\sigma_{st} \partial^{\alpha}\bar{v},(1+\sigma_{st}+\Phi(\sigma_{st}))\partial^{\alpha}\bar{\sigma} \rangle-\frac{\gamma-1}{2}\langle \bar{\sigma}\partial^{\alpha} \nabla\sigma_{st},(1+\sigma_{st}+\Phi(\sigma_{st}))\partial^{\alpha}\bar{v} \rangle\\[3mm] &- \displaystyle \langle \bar{v}\cdot\partial^{\alpha} \nabla\sigma_{st},(1+\sigma_{st}+\Phi(\sigma_{st}))\partial^{\alpha}\bar{\sigma} \rangle\\[3mm] &\displaystyle+\left\langle \left(\frac{\gamma-1}{2}\bar{\sigma}+1\right)\partial^{\alpha}\bar{v}, \nabla(1+\sigma_{st}+\Phi(\sigma_{st}))\partial^{\alpha}\bar{\sigma} \right\rangle\\[3mm] & \displaystyle +\frac{\gamma-1}{2}\langle \sigma_{st} \partial^{\alpha}\bar{v},\nabla(1+\sigma_{st}+\Phi(\sigma_{st}))\partial^{\alpha}\bar{\sigma} \rangle\\[3mm] &\displaystyle+\frac{1}{2}\langle\bar{v},\nabla(1+\sigma_{st}+\Phi(\sigma_{st}))
(|\partial^{\alpha}\bar{\sigma}|^2+|\partial^{\alpha}\bar{v}|^{2})\rangle\triangleq\sum_{j=1}^{9}I_{1,j}(t). \end{array} \end{eqnarray*}
When $|\alpha|=0$, it suffices to estimate $I_{1}(t)$ by \begin{eqnarray*} \begin{aligned}
I_{1}(t)\leq &C \|\nabla \cdot \bar{v}\|(\|\bar{v}\|_{L^{6}}\|\bar{v}\|_{L^{3}}
+\|\bar{\sigma}\|_{L^{6}}\|\bar{\sigma}\|_{L^{3}})
+C \|\nabla \bar{\sigma}\|\|\bar{v}\|_{L^{6}}\|\bar{\sigma}\|_{L^{3}}
+C \|\bar{B}\|_{L^{\infty}}\|\bar{v}\|^{2}\\
&+C \|\nabla\sigma_{st}\|\left\|\bar{\sigma}\right\|_{L^{6}}
\|\bar{v}\|_{L^3}+C\|\sigma_{st}\|_{L^{\infty}}\|\nabla\sigma_{st}\|\left\|\bar{\sigma}\right\|_{L^{6}}
\|\bar{v}\|_{L^3}\\
&+\|\bar{v}\|_{L^{\infty}}
\|\nabla\sigma_{st}\|(\|\bar{\sigma}\|_{L^{6}}\|\bar{\sigma}\|_{L^{3}}
+\|\bar{v}\|_{L^{6}}\|\bar{v}\|_{L^{3}})
\\
\leq & C (\|[\bar{\sigma},\bar{v}]\|_{H^{1}}+\delta+\delta\|\nabla\bar{v}\|_{H^1})(\|\nabla
[\bar{\sigma},\bar{v}]\|^{2}+\|[\bar{\sigma},\bar{v}]\|^{2})+ C \|\nabla
\bar{B}\|_{H^{1}}\|\bar{v}\|^{2},
\end{aligned} \end{eqnarray*} which is further bounded by the r.h.s. term of (\ref{3.3}). When
$|\alpha|\geq 1$, since $I_{1,1}(t)$ and $I_{1,2}(t)$ are similar, we estimate them together as follows: \begin{eqnarray*} \begin{aligned} I_{1,1}(t)+I_{1,2}(t)\leq & C
\|\nabla\cdot\bar{v}\|_{L^{\infty}}\|(1+\sigma_{st}+\Phi(\sigma_{st}))\|_{L^{\infty}}
\|\nabla[\bar{\sigma},\bar{v}]\|_{N-1}^2\\
&+C\|\nabla\bar{\sigma}\|_{L^{\infty}}\|(1+\sigma_{st}+\Phi(\sigma_{st}))\|_{L^{\infty}}
\|\nabla[\bar{\sigma},\bar{v}]\|_{N-1}^2\\ \leq & C
\|[\bar{\sigma},\bar{v}]\|_{N}\|\nabla[\bar{\sigma},\bar{v}]\|_{N-1}^2.
\end{aligned} \end{eqnarray*} For $I_{1,3}(t)$, $I_{1,5}(t)$ and $I_{1,6}(t)$, one factor of $\bar{\sigma}$ or $\bar{v}$ carries no derivative, so we use the $L^{\infty}$ norm of $\bar{v}$ or $\bar{\sigma}$: \begin{eqnarray*} \begin{aligned} I_{1,3}(t)+I_{1,5}(t)+I_{1,6}(t)\leq & C
\|\bar{v}\|_{L^{\infty}}\|\partial^{\alpha}
\bar{B}\|\|(1+\sigma_{st}+\Phi(\sigma_{st}))\|_{L^{\infty}}
\|\nabla\bar{v}\|_{N-1}\\
&+C \|\bar{\sigma}\|_{L^{\infty}}\|\partial^{\alpha}
\nabla\sigma_{st}\|\|(1+\sigma_{st}+\Phi(\sigma_{st}))\|_{L^{\infty}}
\|\nabla\bar{v}\|_{N-1}\\
&+C \|\bar{v}\|_{L^{\infty}}\|\partial^{\alpha}
\nabla\sigma_{st}\|\|(1+\sigma_{st}+\Phi(\sigma_{st}))\|_{L^{\infty}}
\|\nabla\bar{\sigma}\|_{N-1}\\ \leq &
C(\delta+\|\bar{B}\|_{N})\|\nabla[\bar{\sigma},\bar{v}]\|_{N-1}^2.
\end{aligned} \end{eqnarray*} For the other terms of $I_{1}(t)$, in which both $\bar{\sigma}$ and $\bar{v}$ carry derivatives, one can use the $L^{2}$ norms of these factors and the $L^{\infty}$ norms of the others. Combining the above two estimates, one has \begin{eqnarray*}
I_{1}(t)\leq C (\|[\bar{\sigma},\bar{v},\bar{B}]\|_{N}
+\delta+\delta\|\nabla \bar{v}\|_{H^1})\|\nabla
[\bar{\sigma},\bar{v}]\|_{N-1}^{2}, \end{eqnarray*} which is bounded by the r.h.s. term of (\ref{3.3}). On the other hand, since each term in $I_{\alpha,\beta}(t)$ is the integral of a product of four factors, at least one of which contains a derivative, one has \begin{eqnarray*}
I_{\alpha,\beta}(t)\leq C (\|[\bar{\sigma},\bar{v},\bar{B}]\|_{N}
+\delta+\delta\|\nabla \bar{v}\|_{H^1})\|\nabla
[\bar{\sigma},\bar{v}]\|_{N-1}^{2}, \end{eqnarray*} which is also bounded by the r.h.s. term of (\ref{3.3}).
From (\ref{sta.equ}), energy estimates on $\partial^{\alpha}\bar{E}$
and $\partial^{\alpha}\bar{B}$ with $|\alpha| \leq N$ give \begin{eqnarray}\label{3.5}
&& \begin{aligned}
&\frac{1}{2}\frac{d}{dt}\|\partial^{\alpha}[\bar{E},\bar{B}]\|^{2} -\frac{1}{\sqrt{\gamma}}\langle (1+\sigma_{st}+\Phi(\sigma_{st}))\partial
^{\alpha}\bar{v},\partial^{\alpha}\bar{E}\rangle\\
=&\frac{1}{\sqrt{\gamma}}\langle \partial
^{\alpha}[(\Phi(\bar{\sigma}+\sigma_{st})-\Phi(\sigma_{st}))\bar{v}],\partial^{\alpha}\bar{E}\rangle+
\frac{1}{\sqrt{\gamma}}\langle \partial
^{\alpha}[\bar{\sigma}\bar{v}],\partial^{\alpha}\bar{E}\rangle\\
&+\frac{1}{\sqrt{\gamma}}\sum_{\beta<\alpha}C_{\beta}^{\alpha}\langle \partial^{\alpha-\beta}(1+\sigma_{st}+\Phi(\sigma_{st}))
\partial^{\beta}\bar{v},\partial^{\alpha}\bar{E}\rangle\\
=&I_{2,1}(t)+I_{2,2}(t)+\sum_{\beta<\alpha}C_{\beta}^{\alpha}I_{2,\beta}(t).
\end{aligned} \end{eqnarray}
In a similar way as before, when $|\alpha|=0$, it suffices to estimate $I_{2,1}(t)+ I_{2,2}(t)$ by \begin{eqnarray*}
I_{2,1}(t)+ I_{2,2}(t)\leq C \|\nabla
\bar{\sigma}\|\cdot\|\bar{v}\|_{1}\|\bar{E}\|. \end{eqnarray*}
When $|\alpha|>0$, $I_{2,1}(t)$ and $I_{2,2}(t)$ can be estimated in a way similar to \cite{Duan}: \begin{eqnarray*}
I_{2,1}(t)+ I_{2,2}(t)\leq C \|\nabla \bar{\sigma}\|_{N-1}\|\nabla \bar{v}\|_{N-1}\|\bar{E}\|_{N}. \end{eqnarray*}
When $|\alpha|>0$, for each $\beta$ with $\beta<\alpha$, $I_{2,\beta}(t)$ is estimated in three cases.
\textsl{Case 1.} $|\alpha|=N$. In this case, integration by parts shows that \begin{eqnarray*}
&& \begin{aligned}
I_{2,\beta}(t) \leq & C \delta \|\nabla \bar{v}\|_{N-1}\|\nabla\bar{E}\|_{N-2}\\
\leq & C \delta \|\nabla \bar{v}\|_{N-1}^2+C \delta \|\nabla \bar{E}\|_{N-2}^2. \end{aligned} \end{eqnarray*}
\textsl{Case 2.} $|\alpha|<N$ and $|\beta|\geq 1$, which implies
$|\alpha-\beta|\leq N-2$. It holds that \begin{eqnarray*}
&& \begin{aligned}
I_{2,\beta}(t) \leq & C\|\partial^{\alpha-\beta}(1+\sigma_{st}+\Phi(\sigma_{st}))\|_{L^{\infty}}
\|\partial^{\beta}\bar{v}\|\|\partial^{\alpha}\bar{E}\|\\
\leq & C\|\nabla\partial^{\alpha-\beta}(1+\sigma_{st}+\Phi(\sigma_{st}))\|_{H^{1}}\|\nabla \bar{v}\|_{N-1}\|\nabla\bar{E}\|_{N-2}\\
\leq & C \delta \|\nabla \bar{v}\|_{N-1}^2+C \delta \|\nabla
\bar{E}\|_{N-2}^2. \end{aligned} \end{eqnarray*}
\textsl{Case 3.} $|\alpha|<N$ and $|\beta|=0$. In this case, $\bar{v}$ carries no derivative, so one can use the $L^{\infty}$ norm of $\bar{v}$ to estimate $I_{2,\beta}(t)$: \begin{eqnarray*}
&& \begin{aligned}
I_{2,\beta}(t) \leq & C\|\partial^{\alpha-\beta}(1+\sigma_{st}+\Phi(\sigma_{st}))\|
\|\bar{v}\|_{L^{\infty}}\|\partial^{\alpha}\bar{E}\|\\
\leq & C \delta \|\nabla \bar{v}\|_{N-1}^2+C \delta \|\nabla
\bar{E}\|_{N-2}^2, \end{aligned} \end{eqnarray*} which is bounded by the r.h.s. term of (\ref{3.3}). Then (\ref{3.3}) follows by taking summation of (\ref{3.4}) and (\ref{3.5}) over
$|\alpha| \leq N$. Thus the time evolution of the full instant energy $\|\bar{V}(t)\|_{N}^{2}$ has been obtained, but its dissipation rate only contains the contribution from the explicit relaxation variable $\bar{v}$. In a way parallel to \cite{Duan}, by introducing some interactive functionals, the dissipation from the contributions of the remaining components $\bar{\sigma}$, $\bar{E}$, and $\bar{B}$ can be recovered in turn.
\textbf{Step 2.} It holds that \begin{eqnarray}\label{step2}
&&\begin{aligned}
&\frac{d}{dt}\mathcal {E}_{N,1}^{int}(\bar{V})+\lambda\|\bar{\sigma}\|^{2}_{N} \\
\leq & C\|\nabla\bar{v}\|_{N-1}^{2}+C(\|[\bar{\sigma}, \bar{v},\bar{B}]\|_{N}^{2}+\delta)
\|\nabla[\bar{\sigma},\bar{v}]\|_{N-1}^{2},
\end{aligned} \end{eqnarray} where $\mathcal {E}_{N,1}^{int}(\cdot)$ is defined by \begin{eqnarray*}
\mathcal {E}_{N,1}^{int}(\bar{V})=\sum_{|\alpha|\leq N-1}\langle \partial^{\alpha}\bar{v},\nabla\partial^{\alpha}\bar{\sigma}\rangle. \end{eqnarray*} In fact, the first two equations of $ \eqref{sta.equ}$ can be rewritten as \begin{eqnarray}\label{3.7}
&&\partial_t \bar{\sigma}+\nabla \cdot \bar{v}=f_{1}, \end{eqnarray} \begin{eqnarray}\label{3.9}
\partial_t \bar{v}+\nabla
\bar{\sigma}+\frac{1}{\sqrt{\gamma}}\bar{E}=f_{2}-\frac{1}{\sqrt{\gamma}}\bar{v}, \end{eqnarray} where \begin{eqnarray}\label{f1f2}
&& \left\{
\begin{aligned}
& f_{1}:=-\bar{v}\cdot
\nabla\bar{\sigma}-\frac{\gamma-1}{2}\bar{\sigma}\nabla \cdot \bar{v}
-\bar{v}\cdot
\nabla \sigma_{st}-\frac{\gamma-1}{2}\sigma_{st}\nabla \cdot \bar{v},\\
& f_{2}:=-\bar{v}\cdot \nabla
\bar{v}-\frac{\gamma-1}{2}\bar{\sigma}\nabla \bar{\sigma}-\bar{v}\times \bar{B}
-\frac{\gamma-1}{2}\sigma_{st}\nabla \bar{\sigma}-\frac{\gamma-1}{2}\bar{\sigma}\nabla\sigma_{st}. \end{aligned}\right. \end{eqnarray}
Let $|\alpha|\leq N-1$. Applying $\partial^{\alpha}$ to (\ref{3.9}), multiplying it by $\partial^{\alpha}\nabla \bar{\sigma}$, integrating in $x$, using integration by parts together with the final equation of (\ref{sta.equ}), and replacing $\partial_{t}\bar{\sigma}$ from (\ref{3.7}) give \begin{eqnarray*} \arraycolsep=1.5pt \begin{array}{rl}
&\displaystyle \frac{d}{dt}\langle \partial^{\alpha}\bar{v},\nabla \partial^{\alpha}\bar{\sigma}\rangle
+\|\nabla\partial^{\alpha}\bar{\sigma}\|^{2}+\frac{1}{\gamma}\|
\partial^{\alpha}\bar{\sigma}\|^2\\[3mm] =&\displaystyle -\frac{1}{\gamma}\langle \partial^{\alpha}\left(\Phi(\bar{\sigma}+\sigma_{st})-\Phi(\sigma_{st})\right), \partial^{\alpha}\bar{\sigma}\rangle+\langle\partial^{\alpha}f_{2},\nabla\partial^{\alpha}\bar{\sigma}\rangle\\[3mm] &-\displaystyle\frac{1}{\sqrt{\gamma}}\langle\partial^{\alpha}\bar{v},\nabla\partial^{\alpha}\bar{\sigma}\rangle
+\|\nabla \cdot
\partial^{\alpha}\bar{v}\|^{2}-\langle\partial^{\alpha}f_{1},\nabla \cdot \partial^{\alpha}\bar{v}\rangle. \end{array} \end{eqnarray*} Then, it follows from the Cauchy-Schwarz inequality that \begin{eqnarray}\label{3.11} \arraycolsep=1.5pt \begin{array}{rl}
&\displaystyle \frac{d}{dt}\langle \partial^{\alpha}\bar{v},\nabla \partial^{\alpha}\bar{\sigma}\rangle
+\lambda(\|\nabla\partial^{\alpha}\bar{\sigma}\|^{2}+\|
\partial^{\alpha}\bar{\sigma}\|^2)\\[3mm]
\leq & C \displaystyle \|\nabla \cdot
\partial^{\alpha}\bar{v}\|^{2}+C(\|\partial^{\alpha}
\left(\Phi(\bar{\sigma}+\sigma_{st})-\Phi(\sigma_{st})\right) \|^{2}
+\|\partial^{\alpha}f_{1}\|^{2}+\|\partial^{\alpha}f_{2}\|^{2}). \end{array} \end{eqnarray} Noticing that $\Phi(\sigma)$ is smooth in $\sigma$ with $\Phi'(0)=0$, one has from (\ref{f1f2}) that \begin{eqnarray*} \arraycolsep=1.5pt \begin{array}{rl}
&\|\partial^{\alpha}
\left(\Phi(\bar{\sigma}+\sigma_{st})-\Phi(\sigma_{st})\right) \|^{2}
+\|\partial^{\alpha}f_{1}\|^{2}+\|\partial^{\alpha}f_{2}\|^{2}\\[3mm]
\leq &C(\|[\bar{\sigma},\bar{v},\bar{B}]\|^{2}_{N}+\delta)\|
\nabla[\bar{\sigma},\bar{v}]\|_{N-1}^{2}. \end{array} \end{eqnarray*}
Here, when no derivative falls on $\bar{\sigma}$ or $\bar{v}$, one uses the $L^{\infty}$ norm of $\bar{\sigma}$ or $\bar{v}$. Plugging this into (\ref{3.11}) and taking the summation over $|\alpha|\leq N-1$ yield (\ref{step2}).
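As an illustration of this convention, consider the first term of $f_{1}$; for $|\alpha|\leq N-1$ one has, schematically,
\begin{equation*}
\|\partial^{\alpha}(\bar{v}\cdot\nabla\bar{\sigma})\|
\leq C\sum_{\beta\leq\alpha}\|\partial^{\beta}\bar{v}\,
\partial^{\alpha-\beta}\nabla\bar{\sigma}\|
\leq C\|\bar{v}\|_{N}\|\nabla\bar{\sigma}\|_{N-1},
\end{equation*}
where for $\beta=0$ the factor $\bar{v}$ carries no derivative and is bounded in $L^{\infty}$ through the Sobolev embedding $H^{2}(\mathbb{R}^{3})\hookrightarrow L^{\infty}$; the remaining terms of $f_{1}$ and $f_{2}$ are treated in the same way.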
\textbf{Step 3.} It holds that \begin{equation}\label{step3}
\begin{aligned}
\dfrac{d}{dt}\mathcal {E}_{N,2}^{int}(\bar{V})+\lambda\|\bar{E}\|^{2}_{N-1} \leq
C&\|[\bar{\sigma},\bar{v}]\|_{N}^{2}+C\|\bar{v}\|_{N}\|\nabla
\bar{B}\|_{N-2}\\
&+C(\|[\bar{\sigma},\bar{v},\bar{B}]\|_{N}^{2}+\delta)
\|\nabla[\bar{\sigma},\bar{v}]\|_{N-1}^{2},
\end{aligned} \end{equation} where $\mathcal {E}_{N,2}^{int}(\cdot)$ is defined by \begin{eqnarray*}
\mathcal {E}_{N,2}^{int}(\bar{V})=\sum_{|\alpha|\leq N-1}\langle \partial^{\alpha}\bar{v},\partial^{\alpha}\bar{E}\rangle. \end{eqnarray*} For $|\alpha|\leq N-1$, applying $\partial^{\alpha}$ to (\ref{3.9}), multiplying it by $\partial^{\alpha}\bar{E}$, integrating in $x$, using integration by parts, and replacing $\partial_{t}\bar{E}$ via the third equation of (\ref{sta.equ}) give \begin{equation*} \arraycolsep=1.5pt \begin{array}{rl}
&\dfrac{d}{dt}\langle \partial^{\alpha}\bar{v},\partial^{\alpha}\bar{E}\rangle+
\dfrac{1}{\sqrt{\gamma}}\|\partial^{\alpha}\bar{E}\|^{2}\\[3mm]
=& \dfrac{1}{\sqrt{\gamma}}\|\partial^{\alpha}\bar{v}\|^{2}+ \dfrac{1}{\sqrt{\gamma}}\langle \partial^{\alpha}\bar{v},\nabla \times \partial^{\alpha}\bar{B}\rangle+\dfrac{1}{\sqrt{\gamma}}\langle \partial^{\alpha}\bar{v}, \partial^{\alpha}[\Phi(\bar{\sigma}+\sigma_{st})\bar{v}+(\bar{\sigma}+\sigma_{st})\bar{v}]\rangle\\[3mm] &-\langle\partial^{\alpha}\nabla \bar{\sigma}+\dfrac{1}{\sqrt{\gamma}}\partial^{\alpha}\bar{v},\partial^{\alpha}\bar{E}\rangle +\langle\partial^{\alpha}f_{2}, \partial^{\alpha}\bar{E}\rangle, \end{array} \end{equation*} which from the Cauchy-Schwarz inequality further implies \begin{equation*} \arraycolsep=1.5pt \begin{array}{rl}
&\dfrac{d}{dt}\langle \partial^{\alpha}\bar{v},\partial^{\alpha}\bar{E}\rangle+
\lambda\|\partial^{\alpha}\bar{E}\|^{2}\\[3mm]
\leq &
C\|[\bar{\sigma},\bar{v}]\|_{N}^{2}+C\|\bar{v}\|_{N}\|\nabla
\bar{B}\|_{N-2}+C(\|[\bar{\sigma},\bar{v},\bar{B}]\|_{N}^{2}+\delta)
\|\nabla[\bar{\sigma},\bar{v}]\|_{N-1}^{2}. \end{array} \end{equation*}
Thus $\eqref{step3}$ follows by taking the summation of the above estimate over $|\alpha|\leq N-1$.
\textbf{Step 4.} It holds that \begin{equation}\label{step4} \begin{aligned}
\frac{d}{dt}\mathcal {E}_{N,3}^{int}(\bar{V})+\lambda\|\nabla\bar{B}\|^{2}_{N-2}
\leq & C\|[\bar{v},\bar{E}]\|_{N-1}^{2}\\
&+C(\|\bar{\sigma}\|_{N}^{2}+\delta)\|\nabla \bar{v}\|_{N-1}^{2}, \end{aligned} \end{equation} where $\mathcal {E}_{N,3}^{int}(\cdot)$ is defined by \begin{eqnarray*}
\mathcal {E}_{N,3}^{int}(\bar{V})=-\sum_{|\alpha|\leq N-2}\langle \nabla \times \partial^{\alpha}\bar{E},\partial^{\alpha}\bar{B}\rangle. \end{eqnarray*}
In fact, for $|\alpha|\leq N-2$, applying $\partial^{\alpha}$ to the third equation of $\eqref{sta.equ}$, multiplying it by $-\partial^{\alpha}\nabla \times \bar{B}$, integrating in $x$, using integration by parts, and replacing $\partial_{t}\bar{B}$ via the fourth equation of $\eqref{sta.equ}$ imply \begin{equation*} \arraycolsep=1.5pt \begin{array}{rl} & -\dfrac{d}{dt}\langle \partial^{\alpha}\bar{E},\nabla \times \partial^{\alpha}\bar{B}\rangle+
\dfrac{1}{\sqrt{\gamma}}\|\nabla\times \partial^{\alpha}\bar{B}\|^{2} \\[3mm]
=&\dfrac{1}{\sqrt{\gamma}}\|\nabla\times\partial^{\alpha}\bar{E}\|^{2}-\dfrac{1}{\sqrt{\gamma}} \langle \partial^{\alpha}\bar{v},\nabla \times \partial^{\alpha}\bar{B}\rangle -\dfrac{1}{\sqrt{\gamma}}\langle \partial^{\alpha}[\Phi(\bar{\sigma}+\sigma_{st})\bar{v}+(\bar{\sigma}+\sigma_{st})\bar{v}],\nabla \times \partial^{\alpha}\bar{B}\rangle, \end{array} \end{equation*}
which gives $\eqref{step4}$ by further using the Cauchy-Schwarz inequality and taking the summation over $|\alpha|\leq N-2$, where we have also used \begin{eqnarray*}
\|\partial^{\alpha}\partial_{i}\bar{B}\|=\|\partial_{i}\Delta^{-1}\nabla
\times(\nabla\times\partial^{\alpha}\bar{B}) \|\leq\|\nabla\times
\partial^{\alpha}\bar{B}\| \end{eqnarray*} for each $1\leq i\leq 3$, due to the fact that $\partial_{i}\Delta^{-1}\nabla$ is bounded from $L^{p}$ to itself for $1<p<\infty$; cf. \cite{Stein}.
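The $L^{2}$ case of this boundedness, which is all that is used above, can be checked directly on the Fourier side:
\begin{equation*}
\|\partial_{i}\Delta^{-1}\partial_{j}f\|
=\Big\|\frac{\xi_{i}\xi_{j}}{|\xi|^{2}}\hat{f}\Big\|
\leq\|\hat{f}\|=\|f\|,
\end{equation*}
since the multiplier satisfies $|\xi_{i}\xi_{j}|/|\xi|^{2}\leq 1$ and the $L^{2}$ norm is preserved by Plancherel's theorem; the general $L^{p}$ case is the classical Calder\'on-Zygmund theory of \cite{Stein}.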
\textbf{Step 5.} Now, following the four steps above, we are ready to prove
$\eqref{3.2}$. Let us define \begin{eqnarray*}
\mathcal {E}_{N}(\bar{V}(t))=\sum_{|\alpha|\leq N}\int_{\mathbb{R}^3}(1+\sigma_{st}+\Phi(\sigma_{st}))
(|\partial^{\alpha}\bar{\sigma}|^2+|\partial^{\alpha}\bar{v}|^2)dx+\|[\bar{E},\bar{B}]\|_{N}^{2}
+\sum_{i=1}^{3}\kappa_{i}\mathcal {E}^{int}_{N,i}(\bar{V}(t)), \end{eqnarray*} that is, \begin{equation}\label{3.12} \arraycolsep=1.5pt \begin{array}{rl}
\mathcal{E}_{N}(\bar{V}(t))=&\displaystyle\sum_{|\alpha|\leq N}\int_{\mathbb{R}^3}(1+\sigma_{st}+\Phi(\sigma_{st}))
(|\partial^{\alpha}\bar{\sigma}|^2+|\partial^{\alpha}\bar{v}|^2)dx+\|[\bar{E},\bar{B}]\|_{N}^{2}\\[3mm]
&\displaystyle+\kappa_{1}\sum_{|\alpha|\leq N-1} \langle
\partial^{\alpha}\bar{v},\nabla\partial^{\alpha}\bar{\sigma}\rangle+\kappa_{2}\sum_{|\alpha|\leq N-1}\langle \partial^{\alpha}\bar{v},\partial^{\alpha}\bar{E}\rangle\\[3mm]
&\displaystyle-\kappa_{3}\sum_{|\alpha|\leq N-2}\langle \nabla \times \partial^{\alpha}\bar{E},\partial^{\alpha}\bar{B}\rangle \end{array} \end{equation} for constants $0<\kappa_{3}\ll\kappa_{2}\ll\kappa_{1}\ll 1$ to be determined. Notice that as long as $\kappa_{i}$ is small enough for $i=1,2,3$, and $\sigma_{st}+\Phi(\sigma_{st})$, which depends only on $x$, is sufficiently small compared with $1$, then
$\mathcal{E}_{N}(\bar{V}(t))\sim \|\bar{V}(t)\|^{2}_{N}$ holds true. Moreover, letting $0<\kappa_{3}\ll\kappa_{2}\ll\kappa_{1}\ll 1$ with $\kappa_{2}^{3/2}\ll\kappa_{3}$, the sum of $\eqref{3.3}$, $\eqref{step2}\times \kappa_{1}$, $\eqref{step3}\times \kappa_{2}$ and $\eqref{step4}\times \kappa_{3}$ implies that there are $\lambda>0$, $C>0$ such that $\eqref{3.2}$ holds true with $\mathcal {D}_{N}(\cdot)$ defined in $\eqref{de.D}$. Here, we have used the following Cauchy-Schwarz inequality: \begin{eqnarray*}
2 \kappa_{2} \|\bar{v}\|_{N}\|\nabla \bar {B}\|_{N-2}\leq
\kappa_{2}^{1/2}\|\bar{v}\|_{N}^{2}+\kappa_{2}^{3/2}\|\nabla\bar{B}\|^{2}_{N-2}. \end{eqnarray*} Due to $\kappa_{2}^{3/2}\ll \kappa_{3}$, both terms on the r.h.s. of the above inequality can be absorbed. This completes the proof of Theorem $\ref{estimate}$. \end{proof}
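For completeness, we sketch the equivalence $\mathcal{E}_{N}(\bar{V}(t))\sim\|\bar{V}(t)\|_{N}^{2}$ used in the proof above. By the Cauchy-Schwarz inequality,
\begin{equation*}
\Big|\sum_{i=1}^{3}\kappa_{i}\mathcal {E}_{N,i}^{int}(\bar{V})\Big|
\leq C(\kappa_{1}+\kappa_{2}+\kappa_{3})\|\bar{V}\|_{N}^{2},
\end{equation*}
so that once $\kappa_{1}+\kappa_{2}+\kappa_{3}$ and $\|\sigma_{st}+\Phi(\sigma_{st})\|_{L^{\infty}}$ are small enough,
\begin{equation*}
\tfrac{1}{2}\|\bar{V}(t)\|_{N}^{2}
\leq \mathcal{E}_{N}(\bar{V}(t))
\leq 2\|\bar{V}(t)\|_{N}^{2}.
\end{equation*}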
Since $\eqref{sta.equ}$ is a quasi-linear symmetric hyperbolic system, the short-time existence can be proved in a much more general setting as in \cite{Kato}; see also Theorem 1.2, Proposition 1.3, and Proposition 1.4 in Chapter 16 of \cite{Taylor}. From Theorem \ref{estimate} and the continuity argument, it is easy to see that $ \mathcal {E}_{N}(\bar{V}(t)) $ is bounded uniformly in time under the assumptions that $\mathcal {E}_{N}(\bar{V}_{0})$ and
$\|n_{b}-1\|_{W_{0}^{N+1,2}}$ are small enough. Therefore, the global existence of solutions satisfying \eqref{V.satisfy} and \eqref{pro.2.1j} follows in the standard way; see also \cite{Duan}. This completes the proof of Proposition \ref{pro.2.1}.\qed
\section{Decay in time for the non-linear system}\label{sec4} In this section, we study the rate of convergence of the solution to the equilibrium $[n_{st},0,E_{st},0]$ for the system \eqref{1.1} over $\mathbb{R}^3$. In fact, by setting \begin{eqnarray*} \bar{\rho}=n-n_{st},\ \ \bar{u}=u,\ \ E_{1}=E-E_{st},\ \ B_{1}=B, \end{eqnarray*} and \begin{eqnarray*} \rho_{st}=n_{st}-1, \end{eqnarray*} $\bar{U}:=[\bar{\rho},\bar{u},E_{1},B_{1}]$ satisfies \begin{equation}\label{rhost} \left\{
\begin{aligned}
&\partial_t \bar{\rho}+\nabla\cdot \bar{u}=g_{1} ,\\
&\partial_t \bar{u}+\bar{u} + E_{1} +\gamma \nabla\bar{\rho}=g_{2},
\\
&\partial_t E_{1}-\nabla\times B_{1}-\bar{u}=g_{3},\\
&\partial_t B_{1}+\nabla \times E_{1}=0,\\
&\nabla \cdot E_{1}=-\bar{\rho}, \ \ \nabla \cdot B_{1}=0, \ \ \ t>0,\ x\in\mathbb{R}^{3},\\ \end{aligned}\right. \end{equation} with initial data \begin{eqnarray}\label{rhosti} \begin{aligned}
\bar{U}|_{t=0}=\bar{U}_{0}:=&[\bar{\rho}_{0},\bar{u}_{0},E_{1,0},B_{1,0}]\\
=&[n_0-n_{st},u_0,E_{0}-E_{st},B_0], \ \ \ x\in\mathbb{R}^{3}, \end{aligned} \end{eqnarray} satisfying the compatible conditions \begin{eqnarray}\label{rhostC} \nabla \cdot E_{1,0}=-\bar{\rho}_{0}, \ \ \nabla \cdot B_{1,0}=0. \end{eqnarray} Here the nonlinear source terms take the form of \begin{equation}\label{sec5.ggg} \arraycolsep=1.5pt \left\{
\begin{aligned}
& g_{1}=-\nabla\cdot[(\bar{\rho}+\rho_{st}) \bar{u}],\\
&\begin{array}[b]{rcl}
g_{2}&=&-\bar{u} \cdot \nabla \bar{u}-\bar{u}\times B_{1}
-\gamma [(\bar{\rho}+1+\rho_{st})^{\gamma-2}-1]\nabla\bar{\rho}\\
&&-\gamma
[(1+\bar{\rho}+\rho_{st})^{\gamma-2}-(1+\rho_{st})^{\gamma-2}]\nabla\rho_{st},
\end{array}\\
& g_{3}=(\bar{\rho}+\rho_{st}) \bar{u}. \end{aligned}\right. \end{equation}
In what follows, we will denote by $[\rho,u,E,B]$ the solution to the following linearized equations of \eqref{rhost}: \begin{equation}\label{DJ} \left\{
\begin{aligned}
&\partial_t \rho+\nabla\cdot u=0,\\
&\partial_t u+u+ E +\gamma \nabla\rho=0,\\
&\partial_t E-\nabla\times B-u=0,\\
&\partial_t B+\nabla \times E=0,\\
&\nabla \cdot E=-\rho, \ \ \nabla \cdot B=0, \ \ \ t>0, \ \ x\in\mathbb{R}^{3},\\ \end{aligned}\right. \end{equation} with given initial data \begin{eqnarray}\label{2.61}
U|_{t=0}=\bar{U}_{0}:=[\bar{\rho}_{0},\bar{u}_{0},E_{1,0},B_{1,0}], \ \ \ x\in\mathbb{R}^{3}, \end{eqnarray} satisfying the compatible conditions \eqref{rhostC}.
For the above linearized equations, the $L^{p}$-$L^{q}$ time-decay property was proved by Duan in \cite{Duan}. We list only some special $L^{p}$-$L^{q}$ time-decay properties in the following proposition. \begin{proposition}\label{thm.decay}
Suppose $U(t)=e^{tL}\bar{U}_{0}$ is the solution to the Cauchy problem \eqref{DJ}-\eqref{2.61} with the initial data $\bar{U}_{0}=[\bar{\rho}_{0},\bar{u}_{0},E_{1,0},B_{1,0}] $ satisfying \eqref{rhostC}. Then, $U=[\rho,u,E,B]$ satisfies the following time-decay property:
\begin{eqnarray}\label{col.decay1}
&& \left\{
\begin{aligned}
& \|\rho(t)\|\leq C e^{-\frac{t}{2}}\|[\bar{\rho}_{0},\bar{u}_{0}]\|,\\
& \|u(t)\| \leq C e^{-\frac{t}{2}}\|\bar{\rho}_{0}\|+C(1+t)^{-\frac{5}{4}}
\|[\bar{u}_{0}, E_{1,0},B_{1,0}]\|_{L^1\cap \dot{H}^{2}},\\
&\|E(t)\|\leq C (1+t)^{-\frac{5}{4}}
\|[\bar{u}_{0}, E_{1,0},B_{1,0}]\|_{L^1\cap \dot{H}^{3}},\\
&\|B(t)\|\leq C (1+t)^{-\frac{3}{4}}
\|[\bar{u}_{0}, E_{1,0},B_{1,0}]\|_{L^1\cap \dot{H}^{2}}, \end{aligned}\right. \end{eqnarray} and \begin{eqnarray}\label{col.decayinfty1}
&& \left\{
\begin{aligned}
& \|\rho(t)\|_{\infty}\leq C e^{-\frac{t}{2}}\|[\bar{\rho}_{0},\bar{u}_{0}]\|_{L^{2}\cap\dot{H}^{2}},\\
& \|u(t)\|_{\infty} \leq C e^{-\frac{t}{2}}\|\bar{\rho}_{0}\|_{L^{2}\cap\dot{H}^{2}}+C(1+t)^{-2}
\|[\bar{u}_{0}, E_{1,0},B_{1,0}]\|_{L^1\cap \dot{H}^{5}},\\
&\|E(t)\|_{\infty}\leq C (1+t)^{-2}
\|[\bar{u}_{0}, E_{1,0},B_{1,0}]\|_{L^1\cap \dot{H}^{6}},\\
&\|B(t)\|_{\infty}\leq C (1+t)^{-\frac{3}{2}}
\|[\bar{u}_{0}, E_{1,0},B_{1,0}]\|_{L^1\cap \dot{H}^{5}}, \end{aligned}\right. \end{eqnarray} and, moreover, \begin{eqnarray}\label{col.EB}
&& \left\{
\begin{aligned}
&\|\nabla B(t)\|\leq C (1+t)^{-\frac{5}{4}}
\|[\bar{u}_{0}, E_{1,0},B_{1,0}]\|_{ L^1 \cap \dot{H}^{4}},\\
& \|\nabla^{N}[E(t),B(t)]\|\leq C(1+t)^{-\frac{5}{4}}
\|[\bar{u}_{0}, E_{1,0},B_{1,0}]\|_{ L^1 \cap \dot{H}^{N+3}}. \end{aligned}\right. \end{eqnarray}
\end{proposition}
In what follows, since we shall apply the linear $L^{p}$-$L^{q}$ time-decay property of the homogeneous system \eqref{DJ}, we need the mild form of the non-linear Cauchy problem \eqref{rhost}-\eqref{rhosti}. From now on, we always denote by $\bar{U}=[\bar{\rho},\bar{u},E_{1},B_{1}]$ the solution to the non-linear Cauchy problem $\eqref{rhost}$-$\eqref{rhosti}$. Then, by Duhamel's principle, the solution $\bar{U}$ can be formally written as \begin{eqnarray}\label{sec5.U} \bar{U}(t)=e^{tL}\bar{U}_{0}+\int_{0}^{t}e^{(t-s)L}[g_{1}(s),g_{2}(s),g_{3}(s),0]d s, \end{eqnarray} where $e^{tL}\bar{U}_{0}$ denotes the solution to the Cauchy problem $\eqref{DJ}$-$\eqref{2.61}$ without nonlinear sources.
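Denoting by $L$ the linear operator of \eqref{DJ}, the mild form \eqref{sec5.U} can be verified formally by differentiating in $t$ (a standard computation, sketched here):
\begin{eqnarray*}
\begin{aligned}
\partial_{t}\bar{U}(t)=&Le^{tL}\bar{U}_{0}+[g_{1},g_{2},g_{3},0](t)
+\int_{0}^{t}Le^{(t-s)L}[g_{1}(s),g_{2}(s),g_{3}(s),0]d s\\
=&L\bar{U}(t)+[g_{1},g_{2},g_{3},0](t),
\end{aligned}
\end{eqnarray*}
which is exactly \eqref{rhost}.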
The following two lemmas give the full and high-order energy estimates. \begin{lemma}\label{lem.V} Let $\bar{V}=[\bar{\sigma},\bar{v},\bar{E},\bar{B}]$ be the solution to the Cauchy problem $\eqref{sta.equ}$--$ \eqref{sta.equi}$ with initial data $\bar{V}_{0}=[\bar{\sigma}_{0},\bar{v}_{0},\bar{E}_{0},\bar{B}_{0}]$ satisfying $\eqref{sta.equC}$. Then, if $\mathcal
{E}_{N}(\bar{V}_{0})$ and $\|n_{b}-1\|_{W_{0}^{N+1,2}}$ are sufficiently small, \begin{eqnarray}\label{sec5.ENV0} \dfrac{d}{dt}\mathcal {E}_{N}(\bar{V}(t))+\lambda \mathcal {D}_{N}(\bar{V}(t)) \leq 0 \end{eqnarray} holds for any $t>0$, where $\mathcal {E}_{N}(\bar{V}(t))$, $\mathcal {D}_{N}(\bar{V}(t))$ are defined in the form of $\eqref{de.E}$ and $\eqref{de.D}$, respectively. \end{lemma} \begin{proof} It can be seen directly from the proof of Theorem \ref{estimate}. \end{proof}
\begin{lemma}\label{estimate2} Let $\bar{V}=[\bar{\sigma},\bar{v},\bar{E},\bar{B}]$ be the solution to the Cauchy problem $\eqref{sta.equ}$-$\eqref{sta.equi}$ with initial data $\bar{V}_{0}=[\bar{\sigma}_{0},\bar{v}_{0},\bar{E}_{0},\bar{B}_{0}]$ satisfying $\eqref{sta.equC}$ in the sense of Proposition $\ref{pro.2.1}$. Then if $ \mathcal {E}_{N}(\bar{V}_{0})$ and
$\|n_{b}-1\|_{W_{0}^{N+1,2}}$ are sufficiently small, there are the high-order instant energy functional $\mathcal {E}_{N}^{h}(\cdot)$ and the corresponding dissipation rate $\mathcal {D}_{N}^{h}(\cdot)$ such that \begin{eqnarray}\label{sec5.high} && \frac{d}{dt}\mathcal {E}_{N}^{h}(\bar{V}(t))+\lambda\mathcal {D}^{h}_{N}(\bar{V}(t))\leq 0, \end{eqnarray} holds for any $ t \geq 0$. \end{lemma}
\begin{proof} The proof can be done by slightly modifying the proof of Theorem $\ref{estimate}$. In fact, by making the energy estimates only on the high-order derivatives, then corresponding to $\eqref{3.3}$, $\eqref{step2}$, $\eqref{step3}$ and $\eqref{step4}$, it can be re-verified that
\begin{equation*} \arraycolsep=1.5pt \begin{array}{rl}
&\displaystyle \frac{1}{2}\frac{d}{dt}\left(\sum_{1\leq|\alpha|\leq N}\int_{\mathbb{R}^3}(1+\sigma_{st}+\Phi(\sigma_{st}))
(|\partial^{\alpha}\bar{\sigma}|^2+|\partial^{\alpha}\bar{v}|^2)dx+\|\nabla[\bar{E},\bar{B}]\|_{N-1}^{2}\right)\\[5mm]
&\displaystyle+\frac{1}{\sqrt{\gamma}}\sum_{1\leq|\alpha|\leq N}\int_{\mathbb{R}^3}(1+\sigma_{st}+\Phi(\sigma_{st}))|\partial^{\alpha}\bar{v}|^{2}dx\\[5mm]
\leq & \displaystyle C(\|\bar{V}\|_{N}+\delta)(\|\nabla[\bar{\sigma},\bar{v}]\|_{N-1}^{2}
+\|\nabla \bar{E}\|_{N-2}^2),
\end{array} \end{equation*} \begin{eqnarray*}
\frac{d}{dt}\sum_{1\leq|\alpha|\leq N-1}\langle
\partial^{\alpha}\bar{v},\nabla\partial^{\alpha}\bar{\sigma}\rangle+\lambda\|\nabla\bar{\sigma}\|^{2}_{N-1}
\leq C\|\nabla^2\bar{v}\|_{N-2}^{2}+C(\|[\bar{\sigma}, \bar{v},\bar{B}]\|_{N}^{2}+\delta)
\|\nabla[\bar{\sigma},\bar{v}]\|_{N-1}^{2}, \end{eqnarray*} \begin{eqnarray*} \begin{aligned}
\dfrac{d}{dt}\sum_{1\leq|\alpha|\leq N-1}\langle
\partial^{\alpha}\bar{v},\partial^{\alpha}\bar{E}\rangle+\lambda\|\nabla\bar{E}\|^{2}_{N-2} \leq
C&\|\nabla[\bar{\sigma},\bar{v}]\|_{N-1}^{2}+C\|\nabla\bar{v}\|_{N-1}\|\nabla^2
\bar{B}\|_{N-3}\\
&+C(\|[\bar{\sigma},\bar{v},\bar{B}]\|_{N}^{2}+\delta)
\|\nabla[\bar{\sigma},\bar{v}]\|_{N-1}^{2},
\end{aligned} \end{eqnarray*} and \begin{eqnarray*} &&\begin{aligned}
& -\frac{d}{dt}\sum_{1\leq |\alpha|\leq N-2}\langle \nabla
\times\partial^{\alpha}\bar{E},\partial^{\alpha}\bar{B}\rangle+\lambda\|\nabla^{2}\bar{B}\|^{2}_{N-3}\\
\leq & C\|\nabla^2\bar{E}\|_{N-3}^{2}
+C\|\nabla\bar{v}\|_{N-3}^2+C(\|\bar{\sigma}\|_{N}^{2}+\delta)\|\nabla \bar{v}\|_{N-1}^{2}. \end{aligned} \end{eqnarray*} Here, the details of the proof are omitted for simplicity. Now, similar to $\eqref{3.12}$, let us define \begin{equation}\label{def.high} \begin{aligned}
\mathcal{E}_{N}^{h}(\bar{V}(t))&=\sum_{1\leq|\alpha|\leq N}\int_{\mathbb{R}^3}(1+\sigma_{st}+\Phi(\sigma_{st}))
(|\partial^{\alpha}\bar{\sigma}|^2+|\partial^{\alpha}\bar{v}|^2)dx+\|\nabla[\bar{E},\bar{B}]\|_{N-1}^{2}\\
&+\kappa_{1}\sum_{1\leq|\alpha|\leq N-1}\langle
\partial^{\alpha}\bar{v},\nabla\partial^{\alpha}\bar{\sigma}\rangle+\kappa_{2}\sum_{1\leq|\alpha|\leq N-1}\langle \partial^{\alpha}\bar{v},\partial^{\alpha}\bar{E}\rangle\\[3mm]
&-\kappa_{3}\sum_{1\leq |\alpha|\leq N-2}\langle \nabla
\times\partial^{\alpha}\bar{E},\partial^{\alpha}\bar{B}\rangle. \end{aligned} \end{equation}
Similarly, one can choose $0<\kappa_{3}\ll\kappa_{2}\ll\kappa_{1}\ll 1$ with $\kappa_{2}^{3/2}\ll\kappa_{3}$ such that $\mathcal
{E}_{N}^{h}(\bar{V}(t))\sim \|\nabla \bar{V}(t)\|_{N-1}^{2}$, because $\sigma_{st}+\Phi(\sigma_{st})$, which depends only on $x$, is sufficiently small compared with $1$. Furthermore, the linear combination of the previously obtained four estimates with coefficients corresponding to $\eqref{def.high}$ yields $\eqref{sec5.high}$ with $\mathcal {D}_{N}^{h}(\cdot)$ defined in $\eqref{de.Dh}$. This completes the proof of Lemma \ref{estimate2}. \end{proof}
Now, we begin with the time-weighted estimate and iteration for the Lyapunov inequality $\eqref{sec5.ENV0}$. Let $\ell \geq 0$. Multiplying $\eqref{sec5.ENV0}$ by $(1+t)^{\ell}$ and taking integration over $[0,t]$ give
\begin{eqnarray*} \begin{aligned}
& (1+t)^{\ell}\mathcal {E}_{N}(\bar{V}(t))+\lambda
\int_{0}^{t}(1+s)^{\ell}\mathcal {D}_{N}(\bar{V}(s))d s \\
\leq & \mathcal {E}_{N}(\bar{V}_{0})+ \ell
\int_{0}^{t}(1+s)^{\ell-1}\mathcal {E}_{N}(\bar{V}(s))d s. \end{aligned} \end{eqnarray*} Noticing \begin{eqnarray*}
\mathcal {E}_{N}(\bar{V}(t))
\leq C (\mathcal {D}_{N+1}(\bar{V}(t))+\|
\bar{B}\|^{2}), \end{eqnarray*} it follows that \begin{eqnarray*} \begin{aligned}
& (1+t)^{\ell}\mathcal {E}_{N}(\bar{V}(t))+\lambda
\int_{0}^{t}(1+s)^{\ell}\mathcal {D}_{N}(\bar{V}(s))d s \\
\leq & \mathcal {E}_{N}(\bar{V}_{0})+ C \ell
\int_{0}^{t}(1+s)^{\ell-1}\|
\bar{B}(s)\|^{2}d s+ C\ell\int_{0}^{t}(1+s)^{\ell-1}\mathcal {D}_{N+1}(\bar{V}(s))d
s. \end{aligned} \end{eqnarray*} Similarly, it holds that \begin{eqnarray*} \begin{aligned}
& (1+t)^{\ell-1}\mathcal {E}_{N+1}(\bar{V}(t))+\lambda
\int_{0}^{t}(1+s)^{\ell-1}\mathcal {D}_{N+1}(\bar{V}(s))d s \\
\leq & \mathcal {E}_{N+1}(\bar{V}_{0})+ C (\ell-1)
\int_{0}^{t}(1+s)^{\ell-2}\|
\bar{B}(s)\|^{2}ds + C(\ell-1)\int_{0}^{t}(1+s)^{\ell-2}\mathcal {D}_{N+2}(\bar{V}(s))d s, \end{aligned} \end{eqnarray*} and \begin{eqnarray*} \mathcal {E}_{N+2}(\bar{V}(t))+\lambda \int_{0}^{t}\mathcal {D}_{N+2}(\bar{V}(s))d s \leq \mathcal {E}_{N+2}(\bar{V}_{0}). \end{eqnarray*} Then, for $1<\ell<2$, it follows by iterating the above estimates that \begin{eqnarray}\label{sec5.ED} \begin{aligned}
& (1+t)^{\ell}\mathcal {E}_{N}(\bar{V}(t))+\lambda
\int_{0}^{t}(1+s)^{\ell}\mathcal {D}_{N}(\bar{V}(s))d s \\
\leq & C \mathcal {E}_{N+2}(\bar{V}_{0})+ C
\int_{0}^{t}(1+s)^{\ell-1}\|
\bar{B}(s)\|^{2}d s. \end{aligned} \end{eqnarray}
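The iteration leading to \eqref{sec5.ED} may be summarized schematically: substituting the unweighted estimate into the $(\ell-1)$-weighted one gives
\begin{eqnarray*}
\begin{aligned}
\int_{0}^{t}(1+s)^{\ell-1}\mathcal {D}_{N+1}(\bar{V}(s))d s
\leq & C\mathcal {E}_{N+2}(\bar{V}_{0})
+ C\int_{0}^{t}(1+s)^{\ell-2}\|\bar{B}(s)\|^{2}d s\\
\leq & C\mathcal {E}_{N+2}(\bar{V}_{0})
+ C\int_{0}^{t}(1+s)^{\ell-1}\|\bar{B}(s)\|^{2}d s,
\end{aligned}
\end{eqnarray*}
since $(1+s)^{\ell-2}\leq(1+s)^{\ell-1}$, and plugging this into the $\ell$-weighted estimate yields \eqref{sec5.ED}.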
Similarly, for $2<\kappa<3$, the time-weighted estimate and iteration for the Lyapunov inequality $\eqref{sec5.high}$ give \begin{eqnarray*} \begin{aligned}
& (1+t)^{\kappa}\mathcal {E}_{N}^h(\bar{V}(t))+\lambda
\int_{0}^{t}(1+s)^{\kappa}\mathcal {D}_{N}^h(\bar{V}(s))d s \\
\leq & C \mathcal {E}_{N+3}^h(\bar{V}_{0})+ C
\int_{0}^{t}(1+s)^{\kappa-1}\|
\nabla\bar{B}(s)\|^{2}d s. \end{aligned} \end{eqnarray*}
Here the smallness of $\|n_{b}-1\|_{W_{0}^{N+4,2}}$ has been used in the process of iteration for the Lyapunov inequalities $\eqref{sec5.ENV0}$ and $\eqref{sec5.high}$. Taking $\kappa=\ell+1$, it holds that \begin{multline}\label{sec5.EhD} (1+t)^{\ell+1}\mathcal {E}_{N}^h(\bar{V}(t))+\lambda
\int_{0}^{t}(1+s)^{\ell+1}\mathcal {D}_{N}^h(\bar{V}(s))d s \\
\leq C \mathcal {E}_{N+3}^h(\bar{V}_{0})+ C
\int_{0}^{t}(1+s)^{\ell}\|
\nabla\bar{B}(s)\|^{2}d s\\
\leq C \mathcal {E}_{N+3}^h(\bar{V}_{0})+ C \int_{0}^{t}(1+s)^{\ell}\mathcal {D}_{N}(\bar{V}(s))d s. \end{multline} Combining $\eqref{sec5.ED}$ with $\eqref{sec5.EhD}$, we have \begin{multline}\label{sec5.EDEhD} (1+t)^{\ell}\mathcal {E}_{N}(\bar{V}(t))+
\int_{0}^{t}(1+s)^{\ell}\mathcal {D}_{N}(\bar{V}(s))d s\\
+(1+t)^{\ell+1}\mathcal {E}_{N}^h(\bar{V}(t))+
\int_{0}^{t}(1+s)^{\ell+1}\mathcal {D}_{N}^h(\bar{V}(s))d s \\
\leq C \mathcal {E}_{N+3}(\bar{V}_{0})+ C \int_{0}^{t}(1+s)^{\ell-1}\|
\bar{B}(s)\|^{2}d s. \end{multline}
At this point, to estimate the integral term on the r.h.s. of $\eqref{sec5.EDEhD}$, let us define \begin{eqnarray}\label{sec5.def} \mathcal {E}_{N,\infty}(\bar{V}(t))=\sup\limits_{0\leq s \leq t} \ \left\{(1+s)^{\frac{3}{2}}\mathcal {E}_{N}(\bar{V}(s))+(1+s)^{\frac{5}{2}}\mathcal {E}_{N}^h(\bar{V}(s))\right\}, \end{eqnarray} \begin{eqnarray}\label{sec5.defL} L_{0}(t)=\sup\limits_{0\leq s \leq t}
(1+s)^{\frac{5}{2}}\|[\bar{\rho},\bar{u}]\|^{2}. \end{eqnarray} Then, we have the following lemma. \begin{lemma}\label{lem.Bsigma} For any $t\geq0$, it holds that \begin{eqnarray}\label{lem.tildeB}
&&\begin{aligned} \|\bar{B}(t)\|^2\leq C
(1+t)^{-\frac{3}{2}}\left(\|[\bar{\sigma}_{0},\bar{v}_{0}]\|^{2}+
\|[\bar{v}_{0},\right.& \bar{E}_{0},\bar{B}_{0}]\|^2_{L^1\cap \dot{H}^{2}}\\ &\left.+[\mathcal {E}_{N,\infty}(\bar{V}(t))]^2+\delta^2 \mathcal {E}_{N,\infty}(\bar{V}(t))\right). \end{aligned} \end{eqnarray} \end{lemma}
\begin{proof} Applying the fourth linear estimate on $B$ in $\eqref{col.decay1}$ to the mild form \eqref{sec5.U} gives \begin{eqnarray}\label{sec5.decayB}
&&\begin{aligned} \|B_{1}(t)\|\leq C (1+t)^{-\frac{3}{4}}
\|[\bar{u}_{0},& E_{1,0},B_{1,0}]\|_{L^1\cap \dot{H}^{2}}\\ &+C
\int_{0}^{t}(1+t-s)^{-\frac{3}{4}}\|[g_{2}(s),g_{3}(s)]\|_{L^{1}\cap\dot{H}^{2}}ds. \end{aligned} \end{eqnarray} Applying the $L^{2}$ linear estimate on $u$ in $\eqref{col.decay1}$ to the mild form $\eqref{sec5.U}$, one has \begin{eqnarray}\label{baruL2}
&&\begin{aligned}
\|\bar{u}(t)\| \leq C(1+t)^{-\frac{5}{4}}(
&\|\bar{\rho}_{0}\|+\|[\bar{u}_{0}, E_{1,0},B_{1,0}]\|_{L^{1}\cap\dot{H}^{2}})\\
&+C \int_{0}^{t}(1+t-s)^{-\frac{5}{4}}\left(\|g_{1}(s)\|+\|[g_{2}(s),g_{3}(s)]\|_{L^{1}\cap
\dot{H}^{2}}\right)ds.
\end{aligned} \end{eqnarray} Applying the $L^{2}$ linear estimate on $\rho$ in $\eqref{col.decay1}$ to $\eqref{sec5.U}$, one has \begin{eqnarray}\label{rhoL2}
\|\bar{\rho}(t)\|\leq C e^{-\frac{t}{2}}\|[\bar{\rho}_{0},\bar{u}_{0}]\|+ C
\int_{0}^{t}e^{-\frac{t-s}{2}}\|[g_{1}(s),g_{2}(s)]\|d s. \end{eqnarray}
Recall the definition $\eqref{sec5.ggg}$ of $g_{1}$, $g_{2}$ and $g_{3}$: \begin{eqnarray*} \begin{aligned} & g_{1}(s)=-\rho_{st} \nabla \cdot \bar{u}-\bar{\rho} \nabla \cdot \bar{u}-\bar{u} \cdot \nabla \rho_{st}-\bar{u} \cdot \nabla \bar{\rho},\\ & g_{2}(s)\sim \bar{u} \cdot \nabla \bar{u} + \bar{u}\times B_{1} +\bar{\rho}\nabla \bar{\rho}+ \rho_{st}\nabla \bar{\rho} +\bar{\rho}\nabla \rho_{st},\\ & g_{3}(s)=\bar{\rho} \bar{u}+\rho_{st} \bar{u}. \end{aligned} \end{eqnarray*} First, we estimate the terms involving $\rho_{st}$. It follows that \begin{eqnarray*} \begin{aligned}
&\|\rho_{st}\nabla \cdot \bar{u}\|\leq
\|\rho_{st}\|_{L^{\infty}}\|\nabla \bar{u}\|,\ \ \ \ \ \|\bar{u}
\cdot \nabla \rho_{st}\|\leq
\|\nabla\rho_{st}\|\|\bar{u}\|_{L^{\infty}}\leq
\|\nabla\rho_{st}\|\|\nabla\bar{u}\|_{H^{1}},\\
&\|\rho_{st}\nabla \bar{\rho}\|_{L^1}\leq \|\rho_{st}\|\|\nabla
\bar{\rho}\|,\ \ \ \ \ \|\rho_{st}\nabla \bar{\rho}\|\leq
\|\rho_{st}\|_{L^{\infty}}\|\nabla\bar{\rho}\|\leq
\|\nabla\rho_{st}\|_{H^{1}}\|\nabla\bar{\rho}\|,\\
& \|\bar{\rho}\nabla \rho_{st}\|_{L^1}\leq \|\nabla\rho_{st}\|\|\bar{\rho}\|,\ \ \ \ \|\bar{\rho}\nabla\rho_{st}\|\leq
\|\bar{\rho}\|_{L^{\infty}}\|\nabla\rho_{st}\|\leq
\|\nabla\rho_{st}\|\|\nabla\bar{\rho}\|_{H^{1}},\\
&\|\rho_{st} \bar{u}\|_{L^1} \leq \|\rho_{st}\|\|\bar{u}\|,\\ \end{aligned} \end{eqnarray*}
and for $|\alpha|=2$, one has \begin{eqnarray*}
&&\begin{aligned} \|\partial^{\alpha}(\rho_{st}\nabla
\bar{\rho})\|\leq & \|\rho_{st}\partial^{\alpha}\nabla\bar{\rho}\|
+\|\partial^{\alpha}(\rho_{st}\nabla
\bar{\rho})-\rho_{st}\partial^{\alpha}\nabla\bar{\rho}\|\\[3mm] \leq &
\|\rho_{st}\|_{L^{\infty}}\|\partial^{\alpha}\nabla\bar{\rho}\|
+C\|\nabla\rho_{st}\|_{H^{|\alpha|-1}}\|\nabla\bar{\rho}\|_{L^{\infty}}+C\|\nabla
\rho_{st}\|_{L^{\infty}}\|\nabla \bar{\rho}\|_{H^{|\alpha|-1}}\\[3mm]
\leq & C\delta\|\nabla \bar{\rho}\|_{H^{2}}, \end{aligned} \end{eqnarray*}
where we have used the commutator estimate $\|\partial^{\alpha}(f g)-f
\partial^{\alpha}g\|\leq C\|\nabla f\|_{H^{k-1}}\|g\|_{L^{\infty}}+C\|\nabla f\|_{L^{\infty}}\|g\|_{H^{k-1}}$ for any $|\alpha|=k\geq 1$.
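This commutator estimate follows from the Leibniz rule (a standard argument, sketched here): since the term with $\beta=0$ cancels,
\begin{equation*}
\partial^{\alpha}(fg)-f\partial^{\alpha}g
=\sum_{0<\beta\leq\alpha}\binom{\alpha}{\beta}
\partial^{\beta}f\,\partial^{\alpha-\beta}g,
\end{equation*}
and each summand is bounded by H\"older's inequality together with Gagliardo-Nirenberg interpolation, placing the factor carrying more derivatives in $L^{2}$ and the other in $L^{\infty}$.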
Similarly, it holds that \begin{eqnarray*} \begin{aligned}
\|\partial^{\alpha}(\rho_{st} \bar{u})\|\leq & \|\bar{u}
\partial^{\alpha}\rho_{st}\| +\|\partial^{\alpha}(\rho_{st}\bar{u})-\bar{u} \partial^{\alpha}\rho_{st}\|
\leq C \delta \|\nabla \bar{u}\|_{H^{2}}, \end{aligned} \end{eqnarray*} \begin{eqnarray*} \begin{aligned}
& \|\partial^{\alpha}(\bar{\rho} \nabla \rho_{st})\|\leq C \delta
\|\nabla \bar{\rho}\|_{H^{2}}. \end{aligned} \end{eqnarray*} It is straightforward to verify that for any $0\leq s\leq t$, \begin{eqnarray}\label{dec.g2g3} \begin{aligned}
\|[g_{2}(s),g_{3}(s)]\|_{L^{1}}\leq & C \|\bar{u}\|\|\nabla \bar{u}\|+\|\bar{u}\|\|B_{1}\|+\|\bar{\rho}\|\|\bar{u}\|+\|\bar{\rho}\|\|\nabla \bar{\rho}\|\\
& +C(\|\rho_{st}\nabla \bar{\rho}\|_{L^1}+\|\rho_{st} \bar{u}\|_{L^1}+\|\bar{\rho} \nabla \rho_{st}\|_{L^1} )\\ &\leq C \mathcal {E}_{N}(\bar{U}(s))+ C \delta \sqrt{\mathcal
{E}_{N}^h(\bar{U}(s))}+C \delta \|[\bar{\rho},\bar{u}]\|, \end{aligned} \end{eqnarray} \begin{eqnarray}\label{dec.g2g3H} \begin{aligned}
\|[g_{2}(s),g_{3}(s)]\|_{\dot{H}^{2}}\leq & C\mathcal {E}_{N}(\bar{U}(s))+ C \delta \sqrt{\mathcal {E}_{N}^h(\bar{U}(s))}, \end{aligned} \end{eqnarray} and \begin{eqnarray}\label{dec.g1g2} \begin{aligned}
\|[g_{1}(s),g_{2}(s)]\|\leq & C\mathcal {E}_{N}(\bar{U}(s))+ C \delta \sqrt{\mathcal {E}_{N}^h(\bar{U}(s))}. \end{aligned} \end{eqnarray} Notice that $ \mathcal {E}_{N}(\bar{U}(s))\leq C \mathcal {E}_{N}(\bar{V}(\sqrt{\gamma}s))$. From $\eqref{sec5.def}$ and $\eqref{sec5.defL}$, for any $0\leq s\leq t$, \begin{eqnarray*} \mathcal {E}_{N}(\bar{V}(\sqrt{\gamma}s))\leq (1+\sqrt{\gamma}s)^{-\frac{3}{2}}\mathcal {E}_{N,\infty}(\bar{V}(\sqrt{\gamma}t)), \end{eqnarray*} \begin{eqnarray*} \mathcal {E}_{N}^h(\bar{V}(\sqrt{\gamma}s))\leq (1+\sqrt{\gamma}s)^{-\frac{5}{2}}\mathcal {E}_{N,\infty}(\bar{V}(\sqrt{\gamma}t)), \end{eqnarray*} \begin{eqnarray*}
\|[\bar{\rho},\bar{u}](s)\|\leq \sqrt{L_{0}(t)}(1+s)^{-\frac{5}{4}}. \end{eqnarray*} Then, it follows that for $0\leq s \leq t$, \begin{multline*}
\|[g_{2}(s),g_{3}(s)]\|_{L^{1}}\leq C(1+\sqrt{\gamma}s)^{-\frac{3}{2}}\mathcal {E}_{N,\infty}(\bar{V}(\sqrt{\gamma}t))\\ +C \delta (1+\sqrt{\gamma}s)^{-\frac{5}{4}}\sqrt{\mathcal {E}_{N,\infty}(\bar{V}(\sqrt{\gamma}t))}+C\delta \sqrt{L_{0}(t)}(1+s)^{-\frac{5}{4}}, \end{multline*} \begin{eqnarray*} \begin{aligned}
\|[g_{2}(s),g_{3}(s)]\|_{\dot{H}^{2}}\leq & C(1+\sqrt{\gamma}s)^{-\frac{3}{2}}\mathcal {E}_{N,\infty}(\bar{V}(\sqrt{\gamma}t))\\ &+C \delta (1+\sqrt{\gamma}s)^{-\frac{5}{4}}\sqrt{\mathcal {E}_{N,\infty}(\bar{V}(\sqrt{\gamma}t))}, \end{aligned} \end{eqnarray*} \begin{eqnarray*} \begin{aligned}
\|[g_{1}(s),g_{2}(s)]\|\leq & C(1+\sqrt{\gamma}s)^{-\frac{3}{2}}\mathcal {E}_{N,\infty}(\bar{V}(\sqrt{\gamma}t))\\ &+C \delta (1+\sqrt{\gamma}s)^{-\frac{5}{4}}\sqrt{\mathcal {E}_{N,\infty}(\bar{V}(\sqrt{\gamma}t))}. \end{aligned} \end{eqnarray*} Putting the above inequalities into \eqref{sec5.decayB}, $\eqref{baruL2}$ and \eqref{rhoL2} respectively gives \begin{multline}\label{sec5.decayB1}
\|B_{1}(t)\|\leq C (1+t)^{-\frac{3}{4}}\Big\{
\|[\bar{u}_{0}, E_{1,0},B_{1,0}]\|_{L^1\cap \dot{H}^{2}}+\mathcal {E}_{N,\infty}(\bar{V}(\sqrt{\gamma}t))\\ +\delta\sqrt{L_{0}(t)}+\delta \sqrt{\mathcal {E}_{N,\infty}(\bar{V}(\sqrt{\gamma}t))}\Big\}, \end{multline} \begin{multline}\label{sec5.decayu}
\|\bar{u}(t)\|\leq C (1+t)^{-\frac{5}{4}}\Big\{
\|\bar{\rho}_{0}\|+\|[\bar{u}_{0}, E_{1,0},B_{1,0}]\|_{L^{1}\cap\dot{H}^{2}}+\mathcal {E}_{N,\infty}(\bar{V}(\sqrt{\gamma}t))\\ +\delta\sqrt{L_{0}(t)}+\delta \sqrt{\mathcal {E}_{N,\infty}(\bar{V}(\sqrt{\gamma}t))}\Big\}, \end{multline} \begin{eqnarray}\label{sec5.decayrho} \begin{aligned}
\|\bar{\rho}(t)\|\leq C (1+t)^{-\frac{5}{4}}\Big\{
\|[\bar{\rho}_{0},u_{0}]\|+\mathcal {E}_{N,\infty}(\bar{V}&(\sqrt{\gamma}t))\\ &+\delta \sqrt{\mathcal {E}_{N,\infty}(\bar{V}(\sqrt{\gamma}t))}\Big\}. \end{aligned} \end{eqnarray} The definition of $L_{0}(t)$, \eqref{sec5.decayu} and \eqref{sec5.decayrho} further imply that \begin{multline}\label{bou.L}
L_{0}(t)\leq C\|[\bar{\rho}_{0},u_{0}]\|^{2}+C\|[\bar{u}_{0}, E_{1,0},B_{1,0}]\|_{L^{1}\cap\dot{H}^{2}}^{2}\\ +C\left[\mathcal {E}_{N,\infty}(\bar{V}(\sqrt{\gamma}t))\right]^{2}+C\delta^{2}\mathcal {E}_{N,\infty}(\bar{V}(\sqrt{\gamma}t)), \end{multline}
where we have used that $\delta$ is small enough. Plugging the above estimate into \eqref{sec5.decayB1} implies \eqref{lem.tildeB}, since $\|\bar{B}(t)\|\leq C \| B_{1}(t/\sqrt{\gamma})\|$ and $[\bar{\rho},\bar{u},E_{1},B_{1}]$ is equivalent to $[\bar{\sigma},\bar{v},\bar{E},\bar{B}]$ up to a positive constant. This completes the proof of Lemma \ref{lem.Bsigma}. \end{proof}
It remains to prove the uniform-in-time bound of $\mathcal {E}_{N,\infty}(\bar{V}(t))$, which yields the time-decay rates of the Lyapunov functionals $\mathcal {E}_{N}(\bar{V}(t))$ and $\mathcal {E}_{N}^h(\bar{V}(t))$, and hence of $\|\bar{V}(t)\|_{N}^{2}$ and $\|\nabla\bar{V}(t)\|_{N-1}^{2}$. In fact, by taking $\ell =\frac{3}{2}+\epsilon$ in \eqref{sec5.EDEhD} with $\epsilon>0$ small enough, one has \begin{multline*} (1+t)^{\frac{3}{2}+\epsilon}\mathcal {E}_{N}(\bar{V}(t))+
\int_{0}^{t}(1+s)^{\frac{3}{2}+\epsilon}\mathcal {D}_{N}(\bar{V}(s))d s\\
+(1+t)^{\frac{5}{2}+\epsilon}\mathcal {E}_{N}^h(\bar{V}(t))+
\int_{0}^{t}(1+s)^{\frac{5}{2}+\epsilon}\mathcal {D}_{N}^h(\bar{V}(s))d s \\
\leq C \mathcal {E}_{N+3}(\bar{V}_{0})+ C \int_{0}^{t}(1+s)^{\frac{1}{2}+\epsilon}\|
\bar{B}(s)\|^{2}d s. \end{multline*} Here, using \eqref{lem.tildeB} and the fact that $\mathcal {E}_{N,\infty}(\bar{V}(t))$ is non-decreasing in $t$, it further holds
that \begin{eqnarray*} \begin{aligned}
\int_{0}^{t}(1+s)^{\frac{1}{2}+\epsilon}\|
\bar{B}(s)\|^{2}d s\leq
C(1+t)^{\epsilon}\Big\{\|[\bar{\sigma}_{0},&\bar{v}_{0}]\|^{2}+
\|[\bar{v}_{0},\bar{E}_{0},\bar{B}_{0}]\|^2_{L^1\cap \dot{H}^{2}}\\
&+[\mathcal {E}_{N,\infty}(\bar{V}(t))]^2 +\delta ^2 \mathcal {E}_{N,\infty}(\bar{V}(t))\Big\}. \end{aligned} \end{eqnarray*} Therefore, it follows that \begin{multline*} (1+t)^{\frac{3}{2}+\epsilon}\mathcal {E}_{N}(\bar{V}(t))+(1+t)^{\frac{5}{2}+\epsilon}\mathcal {E}_{N}^h(\bar{V}(t))\\ + \int_{0}^{t}(1+s)^{\frac{3}{2}+\epsilon}\mathcal {D}_{N}(\bar{V}(s))d s
+ \int_{0}^{t}(1+s)^{\frac{5}{2}+\epsilon}\mathcal {D}_{N}^h(\bar{V}(s))d s \\
\leq C \mathcal {E}_{N+3}(\bar{V}_{0})+ C (1+t)^{\epsilon}\left(\|[\bar{\sigma}_{0},\bar{v}_{0}]\|^{2}+
\|[\bar{v}_{0},\bar{E}_{0},\bar{B}_{0}]\|^2_{L^1\cap \dot{H}^{2}}\right.\\
\left.+[\mathcal {E}_{N,\infty}(\bar{V}(t))]^2 +\delta^2 \mathcal {E}_{N,\infty}(\bar{V}(t))\right), \end{multline*} which implies \begin{multline*} (1+t)^{\frac{3}{2}}\mathcal {E}_{N}(\bar{V}(t))+(1+t)^{\frac{5}{2}}\mathcal {E}_{N}^h(\bar{V}(t))
\leq C \Big\{ \mathcal {E}_{N+3}(\bar{V}_{0})+
\|[\bar{v}_{0},\bar{E}_{0},\bar{B}_{0}]\|^2_{L^1}\\
+[\mathcal {E}_{N,\infty}(\bar{V}(t))]^2 +\delta ^2 \mathcal {E}_{N,\infty}(\bar{V}(t))\Big\}, \end{multline*} and thus \begin{eqnarray}\label{ENb} \mathcal {E}_{N,\infty}(\bar{V}(t))
\leq C \left( \epsilon_{N+3}(\bar{V}_{0})^{2}+ \mathcal {E}_{N,\infty}(\bar{V}(t))^{2}\right). \end{eqnarray} Here, we have used that $\delta$ is small enough. Recalling the definition of $\epsilon_{N+3}(\bar{V}_{0})$, since $\epsilon_{N+3}(\bar{V}_{0})>0$ is sufficiently small, a standard continuity argument applied to the quadratic inequality \eqref{ENb} shows that $\mathcal {E}_{N,\infty}(\bar{V}(t)) \leq C \epsilon_{N+3}(\bar{V}_{0})^{2}$ holds true for any $t\geq 0$, which implies \begin{eqnarray}\label{UN}
\|\bar{V}(t)\|_{N} \leq C \mathcal {E}_{N}(\bar{V}(t))^{1/2}
\leq C \epsilon_{N+3}(\bar{V}_{0})(1+t)^{-\frac{3}{4}}, \end{eqnarray} \begin{eqnarray}\label{nablaUN}
\|\nabla\bar{V}(t)\|_{N-1} \leq C \mathcal {E}_{N}^{h}(\bar{V}(t))^{1/2}
\leq C \epsilon_{N+3}(\bar{V}_{0})(1+t)^{-\frac{5}{4}}. \end{eqnarray} The definition of $L_{0}(t)$, the uniform-in-time bound of $\mathcal {E}_{N,\infty}(\bar{V}(t))$ and \eqref{bou.L} show that \begin{eqnarray*}
\|[\bar{\rho},\bar{u}](t)\|
\leq C \epsilon_{N+3}(\bar{V}_{0})(1+t)^{-\frac{5}{4}}. \end{eqnarray*} In addition, applying the $L^{2}$ linear estimate on $E$ in \eqref{col.decay1} to the mild form \eqref{sec5.U}, one has \begin{eqnarray*}
&&\begin{aligned} \|E_{1}(t)\|\leq C (1+t)^{-\frac{5}{4}}
\|[\bar{u}_{0},& E_{1,0},B_{1,0}]\|_{L^1\cap \dot{H}^{3}}\\
&+C \int_{0}^{t}(1+t-s)^{-\frac{5}{4}}\|[g_{2}(s),g_{3}(s)]\|_{L^{1}\cap
\dot{H}^{3}}ds.
\end{aligned} \end{eqnarray*} Since, by \eqref{UN} and \eqref{nablaUN} and arguing as in the derivation of \eqref{dec.g2g3} and \eqref{dec.g2g3H}, we have \begin{eqnarray*} \begin{aligned}
&\|[g_{2}(s),g_{3}(s)]\|_{L^{1}\cap \dot{H}^{3}}\leq C\|\bar{U}(t)\|^{2}_{4}+ C\delta\|\nabla
\bar{U}(t)\|_{3}+C\delta\|[\bar{\rho},\bar{u}]\|\leq C\epsilon_{7}(\bar{V}_{0})(1+t)^{-\frac{5}{4}}, \end{aligned} \end{eqnarray*} it follows that \begin{eqnarray}\label{uL2}
\|E_{1}(t)\| \leq C\epsilon_{7}(\bar{V}_{0})(1+t)^{-\frac{5}{4}}. \end{eqnarray} This completes the proof of Theorem \ref{Corolary}.
\noindent{\bf Acknowledgements:}\ \ The first author, Qingqing Liu, would like to thank Dr. Renjun Duan for his guidance and continuous help. The research was supported by the National Natural Science Foundation of China $\#$11071093, the PhD specialized grant of the Ministry of Education of China $\#$20100144110001, and the Special Fund for Basic Scientific Research of Central Colleges $\#$CCNU10C01001, $\#$CCNU12C01001.
\bigbreak
\end{document} |
\begin{document}
\title[uniqueness along subsequences]{Cantor uniqueness and multiplicity along subsequences} \author{Gady Kozma and Alexander Olevski\u\i} \address{GK: Weizmann Institute of Science, Rehovot, Israel.} \email{[email protected]} \address{AO: Tel Aviv University, Tel Aviv, Israel} \email{[email protected]} \begin{abstract} We construct a sequence $c_{l}\to0$ such that the trigonometric series $\sum c_{l}e^{ilx}$ converges to zero everywhere on a subsequence $n_{k}$. We show that for any such series the $n_{k}$ must be very sparse, and that the support of the related distribution must be quite large. \end{abstract}
\maketitle
\section{Introduction}
In 1870, Georg Cantor proved his famous uniqueness theorem for trigonometric series: if a series $\sum c_{l}e^{ilx}$ converges to zero for every $x\in[0,2\pi]$, then the $c_{l}$ are all zero \cite{C1870}. The proof used important ideas from Riemann's \emph{Habilitationsschrift}, namely, that of taking the formal double integral $F(x)=\sum\frac{1}{l^{2}}c_{l}e^{ilx}$ and examining the second Schwarz derivative of $F$. Cantor's proof is now classic and may be found in many books, e.g.\ \cite[\S IX]{Z} or \cite[\S XIV]{B64}. A fascinating historical survey of these early steps in uniqueness theory, including why Riemann defined $F$ in the first place, may be found in \cite{C93}. (Briefly, Riemann was deriving \emph{necessary} conditions, in terms of the double integral, for a function to be represented by a trigonometric series.)
Cantor's result may be extended in many directions, and probably the most famous one was the direction taken by Cantor himself: trying to see whether the theorem still holds when the series is allowed not to converge on a certain set. This question led Cantor to develop set theory, and led others to the beautiful theory of sets of uniqueness; see \cite{KL87}. But in this paper we are interested in a different kind of extension: does the theorem hold when the series $\sum c_{l}e^{ilx}$ is required to converge only on a subsequence?
This problem was first tackled in 1950, when Kozlov constructed a nontrivial sequence $c_{l}$ and a second sequence $n_{k}$ such that \begin{equation} \lim_{k\to\infty}\sum_{l=-n_{k}}^{n_{k}}c_{l}e^{ilx}=0\qquad\forall x\in[0,2\pi].\label{eq:1} \end{equation} See \cite{K50} or \cite[\S XV.6]{B64}. A feature of Kozlov's construction that was immediately apparent is that the coefficients $c_{l}$ are (at least for some $l$) very large. Therefore it was natural to ask if it is possible to have (\ref{eq:1}) together with $c_{l}\to0$. The problem was first mentioned in the survey of Talalyan \cite{T60} \textemdash{} this is problem 13 in \S10 (note that there is a mistake in the English translation), and then repeated in \cite{AT64} where the authors note, on page 1406, that the problem is ``very hard''. In the same year, the survey of Ulyanov \cite[page 20 of the English version]{U64} mentions the problem and conjectures that, in fact, no such series exists. Skvortsov constructed a counterexample for the Walsh system \cite{S75}, but not for the Fourier system.
\subsection{Results}
In this paper we answer this question in the affirmative. Here is the precise statement: \begin{thm} \label{thm:example}There exist coefficients $c_{l}\to0$, not all zero, and $n_{k}\to\infty$ such that (\ref{eq:1}) holds. \end{thm}
The existence of such an example raises many new questions about the nature of the $c_{l}$, of the distribution $\sum c_{l}e^{ilx}$, and of the numbers $n_{k}$. We have two results which show some restrictions on these objects. The first states, roughly, that the $n_{k}$ must increase at least doubly exponentially: \begin{thm} \label{thm:doubly exponentially}Let $c_{l}\to0$ and let $n_{k}$ be such that (\ref{eq:1}) holds. Assume further that $n_{k+1}=n_{k}^{1+o(1)}$. Then $c_{l}\equiv0$. \end{thm}
Our second result is a lower bound on the dimension of the support of the distribution $\sum c_{l}e^{ilx}$. It is stated in terms of the upper Minkowski dimension (see, e.g., \cite{F14} where it is called the box counting dimension) which we denote by $\dim_{\textrm{Mink}}$. \begin{thm} \label{thm:dim}Let $c_{l}\to0$ and let $n_{k}$ be such that (\ref{eq:1}) holds. Let $K$ be the support of the distribution $\sum c_{l}e^{ilx}$, and assume that \[ \dim_{\Mink}(K)<\frac{1}{2}(\sqrt{17}-3)\approx0.561. \] Then $c_{l}\equiv0$. \end{thm}
\subsection{\label{subsec:Comments}Comments and questions}
An immediate question is the sharpness of the double exponential bound of theorem \ref{thm:doubly exponentially}. The proof of theorem \ref{thm:example} which we will present is not quantitative, but it can be quantified with only a modicum of effort, giving: \begin{quote} \emph{There exist $c_{l}\to0$ and $n_{k}=\exp(\exp(O(k)))$ such that (\ref{eq:1}) holds.} \end{quote} (This quantitative version, and all other claims in this section, \S \ref{subsec:Comments}, will not be proved in this paper.) Thus in this setting the main problem remaining is the constant in the exponent. The reader might find it useful to think about the question as follows: suppose $n_{k+1}=n_{k}^{\lambda}$. For which value of $\lambda$ is it possible to construct a counterexample with this $n_{k}$?
But an even more interesting question is: what happens when the condition $c_{l}\to0$ is removed from theorem \ref{thm:doubly exponentially}? The answer is no longer doubly exponential: in fact, Nina Bary \cite{B60} showed that one can take $n_{k}$ growing only slightly faster than exponentially, and conjectured that this rate of growth is optimal. Our techniques allow only modest progress towards Bary's conjecture: we can show that if $n_{k+1}-n_{k}=o(\log k)$ then no such example may exist.
A variation of the problem where our upper and lower bounds match more closely is the following: suppose that we require $c_{l}=0$ for all $l<0$ (often this is called an ``analytic'' version of the problem, because there is a naturally associated analytic function in the disk, $\sum c_{l}z^{l}$). In this case, the following can be proved. On the one hand, one can extend Bary's construction and find an example of a $c_{l}$ and $n_{k}$ both growing slightly faster than exponential such that \[ \lim_{k\to\infty}\sum_{l=0}^{n_{k}}c_{l}e^{ilx}=0\qquad\forall x. \] On the other hand, it is not possible to have such an example if either
$n_{k+1}-n_{k}\le n_{k}/\log^{2}n_{k}$ or $|c_{l}|\le\exp(Cl/\log^{2}l)$. This holds for any inverse of a non-quasianalytic sequence.
In a different direction, the condition $c_{l}\to0$ can be improved: it is possible to require, in theorem \ref{thm:example}, that the coefficients $c_{l}$ be inside $\ell^{2+\epsilon}$, for any $\epsilon>0$. This, too, will not be shown in this paper, but the proof is a simple variation on the proof of theorem \ref{thm:example} below.
Another interesting question is the sharpness of the dimension bound in theorem \ref{thm:dim}. In our example the dimension of the support is $1$ (even for the Hausdorff dimension, which is smaller than the upper Minkowski dimension). It would be very interesting to construct an example with dimension strictly smaller than $1$. In the opposite direction, let us remark that in our example the support of the distribution $\sum c_{l}e^{ilx}$ has measure zero, but it is not difficult to modify the example so that the support would have positive measure. We find this interesting because this distribution is so inherently singular. The support must always be nowhere dense, see lemma \ref{lem:K} below.
\subsection{Measures}
It is interesting to note that the proofs of theorems \ref{thm:doubly exponentially} and \ref{thm:dim} do not use the Riemann function in any way. In fact, the only element of classic uniqueness theory that appears in the proofs is the localisation principle, in the form of Rajchman (see \S \ref{subsec:Rajchman}). Thus the proof of theorem \ref{thm:doubly exponentially} is also a new proof of Cantor's classic result. In the 150 years that have passed since its original publication, the only other attempt we are aware of is \cite{A89}, which gives a proof of Cantor's theorem using one formal integration rather than two. To give the reader a taste of the ideas in the proofs of theorems \ref{thm:doubly exponentially} and \ref{thm:dim}, let us apply the same basic scheme to prove a simpler result: that no such construction is possible with $c_{l}$ being the Fourier coefficients of a measure. \begin{prop} \label{prop:measure}Let $\mu$ be a measure on $[0,2\pi]$ with $\widehat{\mu}(l)\to0$ and let $n_{k}$ be a sequence such that \[ \lim_{k\to\infty}\sum_{l=-n_{k}}^{n_{k}}\widehat{\mu}(l)e^{ilx}=0\qquad\forall x\in[0,2\pi]. \] Then $\mu=0$. \end{prop}
\begin{proof} Denote $S_{n}(x)=\sum_{l=-n}^{n}\widehat{\mu}(l)e^{ilx}$. For every
$x\in\supp\mu$ there exists an $M(x)$ such that $|S_{n_{k}}(x)|\le M(x)$ for all $k$ (certainly $M$ exists also for $x\not\in\supp\mu$ but we will not need it). By the Baire category theorem there is an interval $I$ and a value $M$ such that the set $\{x:M(x)\le M\}$ is dense in $I\cap\supp\mu$ (and $I\cap\supp\mu\ne\emptyset$). Note that we are using the Baire category theorem on the support of $\mu$, which is compact (here and below support will always mean in the distributional sense, and in particular will be compact). By continuity, in fact $M(x)\le M$ for all $x\in I\cap\supp\mu$. Let $\varphi$ be a smooth function supported on $I$ (and $\varphi(x)\ne0$ for all $x\in I^{\circ}$). We apply the localisation principle (see, e.g.\ \cite[theorem IX.4.9]{Z}) and get that the series \[ \varphi(x)\sum\widehat{\mu}(l)e^{ilx}\qquad\text{and}\qquad\sum\widehat{\varphi\mu}(l)e^{ilx} \] are uniformly equiconvergent. Hence $\varphi\mu$ satisfies the same property as $\mu$ i.e. \[ \lim_{k\to\infty}\sum_{l=-n_{k}}^{n_{k}}\widehat{\varphi\mu}(l)e^{ilx}=0\qquad\forall x\in[0,2\pi] \] and further, this convergence is bounded on $\supp\varphi\mu$ (since $\supp\varphi\mu=I\cap\supp\mu$). If $\mu\ne0$ then also $\varphi\mu\ne0$.
The conclusion of the previous paragraph is that we could have, without loss of generality, assumed to start with that $S_{n_{k}}$ is bounded on $\supp\mu$. Let us therefore make this assumption (so we do not have to carry around the notation $\varphi$). We now argue as follows: \[
\sum_{l=-n_{k}}^{n_{k}}|\widehat{\mu}(l)|^{2}=\sum_{l=-\infty}^{\infty}\overline{\widehat{S_{n_{k}}}(l)}\cdot\widehat{\mu}(l)=\int\overline{S_{n_{k}}}(x)\,d\mu(x)\le M||\mu|| \]
where the second equality is due to Parseval and where $M$ is again the maximum of $|S_{n_{k}}|$ on $\supp\mu$. Since this holds for all $k$, we get that $\sum|\widehat{\mu}(l)|^{2}<\infty$, so $\mu$ is in fact an $L^{2}$ function. But this is clearly impossible: the Fourier series of an $L^{2}$ function converges to it in measure, while $S_{n_{k}}\to0$ everywhere, so $\mu$ would have to vanish. \end{proof} The crux of the proof is that $S_{n_{k}}$ is small where $\mu$ is supported. The proofs of theorems \ref{thm:doubly exponentially} and \ref{thm:dim} replace $\mu$ with a different partial sum, $S_{s}$ for some carefully chosen $s$ (roughly, for $s\approx n_{k}^{3/2}$) and show that $S_{n_{k}}$ is small where $S_{s}$ is essentially supported. The details are below.
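The Parseval identity at the heart of the computation above is easy to check numerically. The following sketch is not part of the argument; it assumes NumPy and uses a toy atomic measure with arbitrary atoms and weights (the identity $\sum_{|l|\le n}|\widehat{\mu}(l)|^{2}=\int\overline{S_{n}}\,d\mu$ holds for any finite measure, even though such an atomic $\mu$ does not satisfy the hypothesis $\widehat{\mu}(l)\to0$):

```python
import numpy as np

def e(x):
    """e(x) = exp(2 pi i x), the paper's convention on [0, 1]."""
    return np.exp(2j * np.pi * x)

# A toy atomic measure mu = sum_i w_i * delta_{x_i} on [0, 1).
xs = np.array([0.1, 0.37, 0.81])   # atom locations (arbitrary)
ws = np.array([0.5, -0.2, 0.3])    # real weights (arbitrary)

def mu_hat(l):
    # Fourier coefficient of mu: the integral of e(-lx) against d mu(x).
    return np.sum(ws * e(-l * xs))

n = 40
ls = np.arange(-n, n + 1)
coeffs = np.array([mu_hat(l) for l in ls])

def S_n(x):
    # Partial sum S_n(x) = sum_{|l| <= n} mu_hat(l) e(lx).
    return np.sum(coeffs * e(ls * x))

lhs = np.sum(np.abs(coeffs) ** 2)                  # sum |mu_hat(l)|^2
rhs = np.sum(ws * np.conj([S_n(x) for x in xs]))   # int conj(S_n) d mu
assert np.isclose(lhs, rhs)
```

For an atomic measure the integral against $\mu$ is just a weighted sum over the atoms, which is what the last line computes.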
Let us remark that the only place where the condition $\widehat{\mu}(l)\to0$ was used in the proof of proposition \ref{prop:measure} is in the application of the localisation principle. This can be circumvented with a slightly more involved argument; see the details in \S \ref{sec:Localisation}. Similarly, theorems \ref{thm:doubly exponentially} and \ref{thm:dim} may be generalised from $c_{l}\to0$ to $c_{l}$ bounded, at the expense of a more involved use of the localisation principle.
\section{Construction}
It will be convenient to work in the interval $[0,1]$ and not carry around $\pi$-s, so define \[ e(x)=e^{2\pi ix}. \] For an integrable function $f$ we define the usual Fourier partial sums, \[ S_{n}(f;x)=\sum_{l=-n}^{n}\widehat{f}(l)e(lx). \] In this paper ``smooth'' means $C^{2}$, but the proofs work equally well with higher smoothness (up to the quasianalytic threshold). We use $C$ and $c$ to denote arbitrary constants, whose value might change from line to line or even inside the same line. We use $C$
for constants which are large enough, and $c$ for constants which are small enough. We use $||\cdot||$ for the $L^{2}$ (or $\ell^{2}$) norm; other $L^{p}$ norms are denoted by $||\cdot||_{p}$ (except one place in the introduction where we used $||\mu||$ for the total variation norm of the measure $\mu$). For a set $E\subset[0,1]$ we denote by $|E|$ the Lebesgue measure of $E$.
\subsection{The localisation principle\label{subsec:Rajchman}}
Let us recall Riemann's localisation principle: as formulated by Riemann, it states that the convergence of a trigonometric series at a point $x$ depends only on the behaviour of the Riemann function in a neighbourhood of $x$. See \cite[\S IX.4]{Z}. Rajchman found a formulation of the principle which does not use the Riemann function and has a simple proof. It states that for any $c_{l}\to0$ and any smooth function $\varphi$, \begin{equation} \varphi(x)\sum c_{l}e(lx)\text{ and }\sum(c*\widehat{\varphi})(l)e(lx)\text{ are uniformly equiconvergent}\label{eq:Rajchman} \end{equation} where $c*\widehat{\varphi}$ is a discrete convolution. See \cite[theorem IX.4.9]{Z}, or the proof of theorem \ref{thm:local} below, which follows Rajchman's approach precisely. We will use Rajchman's theorem both on and off the support of $\sum c_{l}e(lx)$ (denote this support by $K$). Off $K$, it has the following nice formulation: if $c_{l}\to0$ then \begin{equation} \sum c_{l}e(lx)=0\qquad\forall x\not\in K,\label{eq:KS} \end{equation} and further, the convergence is uniform on any closed interval disjoint from $K$. To the best of our knowledge, this precise formulation first appeared in \cite[Proposition 1, \S V.3, page 54]{KS94}.
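Rajchman's formulation rests on the elementary fact that multiplication by $\varphi$ on the function side corresponds to the discrete convolution $c*\widehat{\varphi}$ on the coefficient side. A numerical sketch of this fact for trigonometric polynomials, where no convergence issues arise (NumPy assumed; the degrees and random seed are arbitrary choices, not from the paper):

```python
import numpy as np

N = 128                      # grid size, large enough to avoid aliasing
rng = np.random.default_rng(1)

# Coefficient arrays indexed mod N: phi has few low modes ("smooth"),
# f has a larger (but still small) spectrum.
c_phi = np.zeros(N, dtype=complex)
c_f = np.zeros(N, dtype=complex)
for k in range(-3, 4):
    c_phi[k % N] = rng.standard_normal() + 1j * rng.standard_normal()
for k in range(-20, 21):
    c_f[k % N] = rng.standard_normal() + 1j * rng.standard_normal()

def values(c):
    # Sample sum_k c[k] e(kx) at x = j/N; this equals N * ifft(c).
    return np.fft.ifft(c) * N

# Coefficients of the pointwise product phi * f, read off by the DFT.
c_product = np.fft.fft(values(c_f) * values(c_phi)) / N

# Discrete (circular) convolution of the coefficient sequences; it agrees
# with the true convolution here since deg f + deg phi = 23 < N/2.
c_conv = np.array([sum(c_f[l] * c_phi[(k - l) % N] for l in range(N))
                   for k in range(N)])
assert np.allclose(c_product, c_conv)
```

The content of (\ref{eq:Rajchman}) is that, for $c_{l}\to0$, this algebraic identity survives the passage to partial sums up to a uniformly small error.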
\subsection{First estimates} \begin{lem} \label{lem:willthisbetheendofthisproblematlast}For every $\epsilon>0$ there exists a smooth function $u:[0,1]\to\mathbb{R}$ with $u(0)=u(1)=0$,
$u(x)\in[0,1]$, and $||\widehat{u-1}||_{\infty}<\epsilon$. \end{lem}
When we say that $u$ is smooth we mean also when extended periodically (or when extended by $0$, which is the same under the conditions above). \begin{proof} Take any standard construction of a smooth function satisfying $u(0)=u(1)=0$,
$u(x)\in[0,1]$ and $u(x)=1$ for all $x\in[\frac{1}{2}\epsilon,1-\frac{1}{2}\epsilon]$. The condition on the Fourier coefficients then follows by $||\widehat{u-1}||_{\infty}\le||u-1||_{1}$. \end{proof} \begin{lem} \label{lem:vandermonde}For every $\epsilon>0$ there exists a smooth function $h:[0,1]\to\mathbb{R}$ and an $n\in\mathbb{N}$ such that \begin{enumerate} \item $\widehat{h}(0)=1$ \item $\supp h\subset[0,\frac{1}{2}]$
\item For all $x\in[0,\frac{1}{2}]$, $|S_{n}(h;x)|<\epsilon$. \end{enumerate} \end{lem}
\begin{proof} Let $P$ be an arbitrary trigonometric polynomial satisfying
$\widehat{P}(0)=1$ and $|P(x)|<\epsilon$ for all $x\in[0,\frac{1}{2}]$. Let $n=\deg P$, let $m=2n+1$ and let $q$ be a smooth function supported on $[0,\nicefrac{1}{2m}]$ with $\widehat{q}(k)\ne0$ for all $|k|\le n$. Examine a function $h$ of the type \[ h(x)=\sum_{j=0}^{m-1}a_{j}q\Big(x-\frac{j}{2m}\Big). \] Then $h$ is smooth, supported on $[0,\frac{1}{2}]$, and its Fourier coefficients are given by \[ \widehat{h}(k)=\widehat{q}(k)\sum_{j=0}^{m-1}a_{j}e(-jk/2m). \] The matrix $\{e(-jk/2m):j\in\{0,\dotsc,m-1\},k\in\{-n,\dotsc,n\}\}$ is a Vandermonde matrix hence invertible, so one may find $a_{j}$ such that $\sum a_{j}e(-jk/2m)\linebreak[0]=\widehat{P}(k)/\widehat{q}(k)$ for all $k\in\{-n,\dotsc,n\}$. With these $a_{j}$ our $h$ satisfies
$\widehat{h}(k)=\widehat{P}(k)$ for all $k$ such that $|k|\le n$ so $S_{n}(h)=P$ which has the required properties. \end{proof} \begin{rem*} The coefficients of the $h$ given by lemma \ref{lem:vandermonde}
are typically large. The reason is the Vandermonde matrix: we need to invert it, and its inverse has a large norm, exponential in $n$ (the inverse of a Vandermonde matrix has an explicit formula). To counterbalance this last sentence a little, let us remark that $n$, the degree of the polynomial $P$ used in the proof, can be taken to be logarithmic in $\epsilon$. This requires choosing a good $P$. For this purpose we apply the following theorem of Szeg\H{o}: for every compact $K\subset\mathbb{C}$ there exist monic polynomials $Q_{n}$ with $\max_{x\in K}|Q_{n}(x)|=(\capa(K)+o(1))^{n}$. See \cite[corollary 5.5.5]{R95}. We apply Szeg\H{o}'s theorem with
$K=\{e(x):x\in[0,\nicefrac{1}{2}]\}$ and then define $P_{n}(x)=\textrm{Re}(e(-nx)Q_{n}(e(x)))$. We get that $\widehat{P_{n}}(0)=1$ and $\max_{x\in[0,1/2]}|P_{n}(x)|\le(\capa(K)+o(1))^{n}$. The capacity of $K$ can be calculated by writing explicitly a Riemann mapping between $\mathbb{C}\setminus K$ and $\{z:|z|>1\}$ and is $\nicefrac{1}{\sqrt{2}}$, and in particular smaller than 1 (see \cite[theorem 5.2.3]{R95} for the connection to Riemann mappings). Hence it is enough to take $n=C\log\nicefrac{1}{\epsilon}$ to ensure that $P$ would satisfy
$|P(x)|\le\epsilon$ for all $x\in[0,\nicefrac{1}{2}]$. With this $P$ the norm of $h$ would be polynomial in $\epsilon$. \end{rem*}
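The linear-algebra step in the proof of lemma \ref{lem:vandermonde}, namely choosing the $a_{j}$ so that $\sum_{j}a_{j}e(-jk/2m)=\widehat{P}(k)/\widehat{q}(k)$, amounts to solving a square Vandermonde system. A sketch with toy values (the target vector is random, standing in for the hypothetical $\widehat{P}(k)/\widehat{q}(k)$; NumPy assumed):

```python
import numpy as np

n = 3                       # degree of the polynomial P (toy value)
m = 2 * n + 1               # number of translates of q
ks = np.arange(-n, n + 1)   # frequencies -n, ..., n to be matched
js = np.arange(m)

# M[k, j] = e(-jk/2m): a Vandermonde matrix in the distinct nodes
# e(-k/2m), hence invertible.
M = np.exp(-2j * np.pi * np.outer(ks, js) / (2 * m))

rng = np.random.default_rng(2)
# Stand-in for the target values P_hat(k) / q_hat(k).
target = rng.standard_normal(m) + 1j * rng.standard_normal(m)

a = np.linalg.solve(M, target)   # the coefficients a_j of the translates
assert np.allclose(M @ a, target)
```

For larger $n$ this system becomes badly conditioned, in line with the remark above that the resulting coefficients of $h$ are typically large.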
\subsection{Reducing the coefficients}
In the next lemma we reduce the Fourier coefficients using a method inspired by a proof of the Menshov representation theorem (see \cite{O85}). We separate the interval $[0,1]$ into many small pieces and on each put a copy of the $h$ above, scaled differently. Unlike in typical applications of Menshov's approach, we do not have each copy of $h$ sit in a distinct ``spectral interval'' but they are rather intertwined. The details are below. Still, like in other applications of Menshov's technique, the resulting set is divided into many small intervals in a way that pushes the dimension up. This is why we are unable to construct an example supported on a set with dimension less than $1$. \begin{lem} \label{lem:no g}For every $\epsilon>0$ there exists a smooth function $f:[0,1]\to\mathbb{R}$ and an $n\in\mathbb{N}$ with the following properties: \begin{enumerate} \item $\widehat{f}(0)=1$.
\item For all $k\ne0$, $|\widehat{f}(k)|<\epsilon$.
\item For every $x\in\supp f,$ $|S_{n}(f;x)|<\epsilon$. \end{enumerate} \end{lem}
\begin{proof} We may assume without loss of generality that $\epsilon<\frac{1}{2}$, and it is enough to replace requirement (i) by the weaker requirement
$|\widehat{f}(0)-1|<\epsilon$ (and then normalise).
\emph{1. }Let $h$ be the function given by lemma \ref{lem:vandermonde} with $\epsilon_{\text{lemma \ref{lem:vandermonde}}}=\epsilon/4$, and denote $m=n_{\text{lemma \ref{lem:vandermonde}}}$. In other words, $h$ satisfies \begin{gather*} \widehat{h}(0)=1\qquad\qquad\supp h\subset[0,\tfrac{1}{2}]\\
|S_{m}(h;x)|<\tfrac{1}{4}\epsilon\quad\forall x\in[0,\tfrac{1}{2}]. \end{gather*}
Let $a>2||h||_{1}/\epsilon$ be some integer. Let $u$ be the function given by lemma \ref{lem:willthisbetheendofthisproblematlast} with $\epsilon_{\textrm{lemma \ref{lem:willthisbetheendofthisproblematlast}}}=\epsilon/2$ i.e.\ $u$ is smooth from $[0,1]$ to $[0,1]$, $u(0)=u(1)=0$ and $u$ satisfies \[
||\widehat{u-1}||_{\infty}<{\textstyle \frac{1}{2}}\epsilon. \] Let $v(x)=u(xa)$ (extended to zero outside $[0,1/a]$). Let $r$ be a large integer parameter to be fixed later, depending on all previously defined quantities ($\epsilon$, $h$, $m$, $a$ and $u$). Define \[ f(x)=\sum_{j=0}^{a-1}v\Big(x-\frac{j}{a}\Big)h(x(r^{3}+jr)). \] The role of the quantities $r^{3}+jr$ will become evident later.
Let us see that $f$ satisfies all required properties. It will be easier to consider trigonometric polynomials rather than smooth functions so define \begin{gather} \begin{aligned}H & \mathrel{\mathop:}= S_{\lfloor r/2\rfloor-1}(h)\qquad\qquad\qquad & V & \mathrel{\mathop:}= S_{\lfloor r/2\rfloor-1}(v)\end{aligned} \nonumber \\ F(x)\mathrel{\mathop:}=\sum_{j=0}^{a-1}V\Big(x-\frac{j}{a}\Big)H(x(r^{3}+jr)).\label{eq:def f'} \end{gather}
The smoothness of $v$ and $h$ imply that $||\widehat{v-V}||_{1}$
and $||\widehat{h-H}||_{1}$ can be taken arbitrarily small as $r\to\infty$. Since \[
||\widehat{f-F}||_{1}\le\sum_{j=0}^{a-1}||\widehat{v-V}||_{1}||\widehat{h}||_{1}+||\widehat{V}||_{1}||\widehat{h-H}||_{1} \]
we may take $r$ sufficiently large and get $||\widehat{f-F}||_{1}<\frac{1}{2}\epsilon$ (but do not fix the value of $r$ yet). Thus, with such an $r$, we need only show \begin{enumerate}
\item $||\widehat{F-1}||_{\infty}<\frac{1}{2}\epsilon$
\item For every $x\in\supp f$, $|S_{n}(F;x)|<\frac{1}{2}\epsilon$ (note that we take $x$ in $\supp f$ and not in $\supp F$). \end{enumerate} \emph{2}. We start with the estimate of $\widehat{F-1}$. Examine one summand in the definition of $F$, (\ref{eq:def f'}). Denoting $G_{j}=V(x-j/a)H(x(r^{3}+jr))$ we have \begin{equation} \widehat{G_{j}}(l)=\begin{cases}
\widehat{V}(p)\widehat{H}(q)e(-pj/a) & l=p+q(r^{3}+jr),\;|p|,|q|<r/2\\ 0 & \text{otherwise}. \end{cases}\label{eq:gj hat} \end{equation} In particular, $l$ and $j$ determine $p$ and $q$ uniquely. An immediate corollary is: \begin{equation}
||\widehat{G_{j}}||_{\infty}=||\widehat{V}||_{\infty}||\widehat{H}||_{\infty}\le||v||_{1}||h||_{1}\le\frac{||h||_{1}}{a}<\frac{\epsilon}{2}\label{eq:gj hat infty} \end{equation} where the last inequality is from the definition of $a$. Assume now that $r>a$. Then we can extract another corollary from (\ref{eq:gj hat}): that the different $G_{j}$ have disjoint spectra, except at $(-r/2,r/2)$. Hence \begin{equation}
|\widehat{F}(l)|=\max_{j}|\widehat{G_{j}}(l)|\stackrel{\textrm{(\ref{eq:gj hat infty})}}{<}\frac{\epsilon}{2}\qquad\forall|l|\ge r/2.\label{eq:l>r/2} \end{equation}
Finally, for $l\in(-r/2,r/2)$ we have that $F$ ``restricted spectrally to $(-r/2,\linebreak[0]r/2)$'' is simply $\sum_{j}V(x-j/a)$, so its Fourier spectrum is simply that of $u$ spread out. Since $||\widehat{u-1}||_{\infty}<\frac{\epsilon}{2}$ we get, also in this case, $|\widehat{F-1}(l)|<\frac{1}{2}\epsilon$. For those who prefer formulas, just note in (\ref{eq:gj hat}) that if $l\in(-r/2,r/2)$ then $q=0$ and since $\widehat{H}(0)=1$ we get \[
|\widehat{aV-1}(l)|\le|\widehat{av-1}(l)|=|\widehat{u-1}(l/a)|<\tfrac{1}{2}\epsilon. \]
With (\ref{eq:l>r/2}) we get $||\widehat{F-1}||_{\infty}<\frac{1}{2}\epsilon$, as needed.
\emph{3}. Finally, we need to define $n$ and see that $S_{n}(F)$ is small on $\supp f$. Assume $r>m$ and define \[ n=m(r^{3}+r^{2}). \] This value of $n$ has the property that \begin{align*} n & >m(r^{3}+jr)+r/2\\ n & <(m+1)(r^{3}+jr)-r/2 \end{align*} for all $j\in\{0,\dotsc,a-1\}$. We now see why it was important to choose the spacings of the arithmetic progressions to be $r^{3}+jr$: these spacings need to be different to have separation of the spectra of the different $G_{j}$ (and they must be different by at least $r$, because the spectra of the $G_{j}$ are arranged in blocks of size $r$), but they need to be sufficiently close that it would still be possible to ``squeeze'' an $n$ between all the terms that correspond to the $m^{\textrm{th}}$ block in all $G_{j}$ and all the terms that correspond to the $(m+1)^{\textrm{st}}$ blocks. The $r^{3}$ in the spacings ensures that.
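The two displayed inequalities are pure arithmetic and can be checked mechanically. A sketch with arbitrary toy values of $m$, $a$ and $r$ (any $r>\max(a,m)$ works; these particular numbers are illustrative, not from the paper):

```python
# Check that n = m(r^3 + r^2) separates the m-th spectral block of every
# G_j from the (m+1)-st one:
#   m(r^3 + jr) + r/2 < n < (m+1)(r^3 + jr) - r/2   for j = 0, ..., a-1.
m, a = 5, 7       # toy values for the parameters of the construction
r = 50            # any r > max(a, m) works
n = m * (r**3 + r**2)
for j in range(a):
    spacing = r**3 + j * r
    assert m * spacing + r / 2 < n
    assert n < (m + 1) * spacing - r / 2
```

Indeed, $n-m(r^{3}+jr)=mr(r-j)\ge mr>r/2$ and $(m+1)(r^{3}+jr)-n\ge r(r^{2}-mr)\ge r>r/2$ whenever $r>\max(a,m)$, which is what the loop verifies.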
Using (\ref{eq:gj hat}) gives that \[ S_{n}(G_{j};x)=S_{m}\big(H;x(r^{3}+jr)\big)\cdot V\Big(x-\frac{j}{a}\Big). \] At this point it will be easier to compare to $v$ rather than to $V$, so write \[ S_{n}(G_{j};x)=S_{m}\big(H;x(r^{3}+jr)\big)\cdot v\Big(x-\frac{j}{a}\Big)+E_{j} \] and note that for $r$ sufficiently large $E_{j}$ can be taken to be arbitrarily small. Take $r$ so large as to have \begin{equation}
\bigg|S_{n}(F;x)-\sum_{j=0}^{a-1}S_{m}(H;x(r^{3}+jr))v\Big(x-\frac{j}{a}\Big)\bigg|<\tfrac{1}{4}\epsilon\qquad\forall x\in[0,1].\label{eq:h tag u notag} \end{equation} This is our last requirement from $r$ and we may fix its value now.
For every $x\in[0,1]$ there is at most one $j_{0}$ such that $v(x-j_{0}/a)\ne0$, namely $j_{0}=\lfloor ax\rfloor$ (recall that $v$ is supported on $[0,1/a]$). If $x\in\supp f$ then it must be the case that $x(r^{3}+j_{0}r)\in[0,\frac{1}{2}]$ mod 1. But in this case, by our definition, \[
|S_{m}(H;x(r^{3}+j_{0}r))|<\tfrac{1}{4}\epsilon. \] We get \[
x\in\supp f\implies\bigg|\sum_{j=0}^{a-1}S_{m}(H;x(r^{3}+jr))\cdot v\Big(x-\frac{j}{a}\Big)\bigg|<\tfrac{1}{4}\epsilon, \]
and with (\ref{eq:h tag u notag}) we get $|S_{n}(F;x)|<\frac{1}{2}\epsilon$, as needed. \end{proof} \begin{lem} \label{lem:yes g}Let $f:[0,1]\to\mathbb{R}$ be smooth, $\epsilon>0$ and $N\in\mathbb{N}$. Then there exists a smooth function $g:[0,1]\to\mathbb{R}$ satisfying \begin{enumerate} \item $\supp g\subseteq\supp f$.
\item \label{enu:g-f^}For all $n\in\mathbb{Z}$, $|\widehat{g}(n)-\widehat{f}(n)|<\epsilon$ \item \label{enu:Sng small}For some $n>N$ we have \[
|S_{n}(g;x)|<\epsilon\qquad\forall x\in\supp g. \] \end{enumerate} \end{lem}
\begin{proof}
Let $h$ be the function from lemma \ref{lem:no g} with $\epsilon_{\text{lemma \ref{lem:no g}}}=\epsilon/(2||\widehat{f}||_{1})$. Denote by $m$ the integer output of lemma \ref{lem:no g} i.e.\ the number such that $|S_{m}(h;x)|<\epsilon/(2||\widehat{f}||_{1})$ for all $x\in\supp h$. Let $r$ be large enough so that \[
\sum_{|k|\ge r/2}|\widehat{f}(k)|<\epsilon/(2||\widehat{h}||_{1}) \] and such that $r(m+1/2)>N$ (let $r$ be even). Denote \begin{align*} g(x) & \mathrel{\mathop:}= f(x)h(rx)\qquad n\mathrel{\mathop:}= r(m+1/2) \end{align*} where $h$ is extended periodically to $\mathbb{R}$. Let us see that $g$ and $n$ satisfy the requirements of the lemma. The smoothness of $g$ follows from those of $f$ and $h$. Condition (\ref{enu:g-f^}) follows because \[ \widehat{g}(k)-\widehat{f}(k)=\sum_{l}\widehat{h-1}(l)\widehat{f}(k-lr) \]
and because $||\widehat{h-1}||_{\infty}\le\epsilon/(2||\widehat{f}||_{1})$. Finally, to see condition (\ref{enu:Sng small}) write \[ F\mathrel{\mathop:}= S_{r/2}(f)\qquad G(x)\mathrel{\mathop:}= F(x)h(rx) \]
and note that $||\widehat{g-G}||_{1}\le||\widehat{f-F}||_{1}||\widehat{h}||_{1}<\frac{1}{2}\epsilon$. To estimate $S_{n}(G)$, note that if $x\in\supp g$ then $rx\in\supp h$
mod 1 and hence $|S_{m}(h;rx)|<\epsilon/(2||\widehat{f}||_{1})$. But \[ S_{n}(G;x)=F(x)S_{m}(h;rx) \] and since $|F(x)|\le||\widehat{F}||_{1}\le||\widehat{f}||_{1}$ we get \[
|S_{n}(G;x)|\le||\widehat{f}||_{1}\frac{\epsilon}{2||\widehat{f}||_{1}}=\frac{\epsilon}{2} \] finishing the lemma. \end{proof}
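The spectral identity driving this proof is that multiplying by the dilated function $h(rx)$ convolves the coefficients of $f$ with those of $h$ spread out by the factor $r$: $\widehat{g}(k)=\sum_{l}\widehat{h}(l)\widehat{f}(k-lr)$, which is the identity for $\widehat{g}-\widehat{f}$ displayed in the proof once $\widehat{h}(0)$ is separated out. A numerical sketch with toy degrees (NumPy assumed; degrees, $r$ and the seed are arbitrary):

```python
import numpy as np

N = 512                     # DFT size, large enough to avoid aliasing
r = 16                      # the dilation factor (toy value)
rng = np.random.default_rng(3)

deg_f, deg_h = 5, 4         # stand-ins for smooth f and h
c_f = np.zeros(N, dtype=complex)
c_h = np.zeros(N, dtype=complex)
for k in range(-deg_f, deg_f + 1):
    c_f[k % N] = rng.standard_normal() + 1j * rng.standard_normal()
for l in range(-deg_h, deg_h + 1):
    c_h[l % N] = rng.standard_normal() + 1j * rng.standard_normal()

x = np.arange(N) / N
f = sum(c_f[k % N] * np.exp(2j * np.pi * k * x)
        for k in range(-deg_f, deg_f + 1))
h_r = sum(c_h[l % N] * np.exp(2j * np.pi * l * r * x)   # h(rx)
          for l in range(-deg_h, deg_h + 1))

c_g = np.fft.fft(f * h_r) / N   # coefficients of g(x) = f(x) h(rx)
for k in range(-(deg_h * r + deg_f), deg_h * r + deg_f + 1):
    expected = sum(c_h[l % N] * c_f[(k - l * r) % N]
                   for l in range(-deg_h, deg_h + 1))
    assert np.isclose(c_g[k % N], expected)
```

The spectrum of $g$ thus consists of translates of the spectrum of $f$ placed at the multiples of $r$, which is exactly why the cut at $n=r(m+1/2)$ falls between blocks and yields $S_{n}(G;x)=F(x)S_{m}(h;rx)$.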
\subsection{Proof of theorem \ref{thm:example}}
The coefficients $c_{l}$ will be constructed by inductively applying lemma \ref{lem:yes g}. Define therefore $f_{1}=1$ and $n_{1}=2$, and for all $k\ge1$ define $f_{k+1}=g_{\text{lemma \ref{lem:yes g}}}$ and $n_{k+1}=n_{\text{lemma \ref{lem:yes g}}}$ where lemma \ref{lem:yes g} is applied with $f_{\text{lemma \ref{lem:yes g}}}=f_{k}$, $\epsilon_{\text{lemma \ref{lem:yes g}}}=2^{-k}/n_{k}$ and $N_{\text{lemma \ref{lem:yes g}}}=n_{k}+1$ (this last parameter merely ensures that the $n_{k}$ are increasing). We now claim that $\widehat{f_{k}}(l)$ converges as $k\to\infty$, and that the limit, $c_{l}$, satisfies the requirements of the theorem.
The fact that $\lim_{k\to\infty}\widehat{f_{k}}(l)$ exists is clear, because $|\widehat{f_{k+1}}(l)-\widehat{f_{k}}(l)|<2^{-k}/n_{k}$. Denote \[ c_{l}=\lim_{k\to\infty}\widehat{f_{k}}(l). \] This also shows that $c_{l}\to0$.
Denote now $S_{n}=\sum_{l=-n}^{n}c_{l}e(lx).$ To see that $S_{n_{k}}(x)\to0$ for all $x$ we separate into $x\in\cap\supp f_{k}$ and the rest. Note that $\cap\supp f_{k}$ contains the support of the distribution $\delta:=\sum c_{l}e(lx)$. Indeed, if $\varphi$ is a Schwartz test function supported outside $\cap\supp f_{k}$ then $\supp\varphi\cap\supp f_{k}$ is a sequence of compact sets decreasing to the empty set (recall that $\supp f_{k+1}\subseteq\supp f_{k}$) so for some finite $k_{0}$
we already have $\supp\varphi\cap\supp f_{k}=\emptyset$ for all $k>k_{0}$. This of course implies that $\langle\varphi,f_{k}\rangle=0$. Taking the limit $k\to\infty$ we get $\langle\varphi,\delta\rangle=0$ (we may take the limit since $||\widehat{f_{k}-\delta}||_{\infty}\to0$ while $\widehat{\varphi}\in l_{1}$). Since this holds for any $\varphi$ supported outside $\cap\supp f_{k}$ we get $\supp\delta\subset\cap\supp f_{k}$, as claimed.
Now, for $x\not\in\cap\supp f_{k}$ we use the localisation principle in the form (\ref{eq:KS}) and get \begin{equation} \lim_{n\to\infty}S_{n}(x)=0\qquad\forall x\not\in\bigcap\supp f_{k}\label{eq:outside support} \end{equation} i.e.\ outside the support it is not necessary to take a subsequence.
Finally, examine $x\in\supp f_{k}$. By clause (\ref{enu:Sng small}) of lemma \ref{lem:yes g} \begin{equation}
|S_{n_{k}}(f_{k};x)|<\frac{1}{2^{k-1}n_{k-1}}.\label{eq:nk at k} \end{equation}
For any $j\ge k$, the condition $|\widehat{f_{j+1}}(k)-\widehat{f_{j}}(k)|<2^{-j}/n_{j}\le2^{-j}/n_{k}$ means that \[
|S_{n_{k}}(f_{j+1};x)-S_{n_{k}}(f_{j};x)|<3\cdot2^{-j} \] which we sum (also with (\ref{eq:nk at k})) to get \[
|S_{n_{k}}(f_{j};x)|<8\cdot2^{-k}\qquad\forall j\ge k \] and taking limit as $j\to\infty$ gives \[
\big|S_{n_{k}}(x)\big|<8\cdot2^{-k}\qquad\forall x\in\supp f_{k}. \] We conclude \[ \lim_{k\to\infty}S_{n_{k}}(x)=0\qquad\forall x\in\bigcap\supp f_{k}. \] With (\ref{eq:outside support}), the theorem is proved.\qed \begin{rem*} The observant reader probably noticed that we use smooth functions as our building blocks rather than trigonometric polynomials, and hence our construction does not naturally have large spectral gaps, unlike many constructions of null series. This is not a coincidence: it is not possible to have many large spectral gaps in any series that satisfies the requirements of Theorem \ref{thm:example}. Precisely, a theorem of Beurling states that any tempered distribution $\sum c_{l}e^{ilt}$ whose support is not the whole interval (and our $c_{l}$ satisfy that, see Lemma \ref{lem:K} below) cannot have $c_{l}=0$ on an increasing sequence of intervals $[a_{k},b_{k}]$ satisfying $\sum(b_{k}-a_{k})^{2}/a_{k}^{2}=\infty$. See e.g.~\cite[Theorem 4]{B84}. \end{rem*}
\section{Proof of theorems \ref{thm:doubly exponentially} and \ref{thm:dim}}
The following lemma summarises some properties of the support of the distribution. \begin{lem} \label{lem:K}Let $c_{l}\to0$ and $n_{k}\to\infty$ be such that \[ \lim_{k\to\infty}S_{n_{k}}(x)=0\qquad\forall x,\qquad\text{where }S_{n}(x)=\sum_{l=-n}^{n}c_{l}e(lx). \] Let $K$ be the support of the distribution $\sum c_{l}e(lx)$. Then \begin{enumerate} \item \label{enu:critical}$K=\{x:\forall\epsilon>0,S_{n_{k}}\text{ is unbounded in }(x-\epsilon,x+\epsilon)\}$. \item \label{enu:nowhere dense}$K$ is nowhere dense. \end{enumerate} \end{lem}
\begin{proof} We start with clause (\ref{enu:critical}). On the one hand, if $x\not\in K$ then the localisation principle (\ref{eq:KS}) tells us that $S_{n}\to0$ uniformly in some neighbourhood of $x$. On the other hand, if $S_{n_{k}}$ is bounded in some neighbourhood $I$ of $x$ then for any smooth test function $\varphi$ supported on $I$ we have \[ \langle\varphi,\sum c_{l}e(lx)\rangle=\sum_{l=-\infty}^{\infty}c_{l}\widehat{\varphi}(l)=\lim_{k\to\infty}\sum_{l=-n_{k}}^{n_{k}}c_{l}\widehat{\varphi}(l)=\lim_{k\to\infty}\int\varphi S_{n_{k}} \] but the integral on the right-hand side tends to zero from the bounded convergence theorem. This shows (\ref{enu:critical}).
To see clause (\ref{enu:nowhere dense}) examine the function $N(x)=\sup_{k}|S_{n_{k}}(x)|$ and apply the Baire category theorem to the sets $\{x:N(x)\ge M\}$ for all integer $M$. We get, in every interval $I$, an open interval $J\subset I$ and an $M$ such that $N(x)\le M$ on a dense subset of $J$. Continuity of the $S_{n_{k}}$ shows that in fact $N(x)\le M$ on all of $J$ and hence $J\cap K=\emptyset$, as needed. \end{proof} \begin{rem*} Without the condition $c_{l}\to0$ it still holds that \[ K\subset\{x:\forall\epsilon>0,S_{n_{k}}\text{ is unbounded in }(x-\epsilon,x+\epsilon)\} \] and that $K$ is nowhere dense. The proof is the same. \end{rem*} We will now make a few assumptions that will make the proof less cumbersome. First we assume that $c_{-l}=\overline{c_{l}}$ (or, equivalently, that the $S_{n}$ are real). It is straightforward to check that this assumption may be made without loss of generality in both theorems \ref{thm:doubly exponentially} and \ref{thm:dim}. Our next assumption is:
\begin{assumption}In the next lemma we assume that $S_{n_{k}}$ is bounded on $K,$ the support of the distribution $\sum c_{l}e(lx)$. Further, whenever we write ``$C$'', the constant is allowed to depend on $\sup\{|S_{n_{k}}(x)|:x\in K,k\}$. \end{assumption}
As in the proof of proposition \ref{prop:measure}, we will eventually remove this assumption by a simple localisation argument. \begin{lem} \label{lem:new riemann}Let $c_{l}$, $n_{k}$ and $S_{n}$ be as in the previous lemma. Let $r$ be a sufficiently large number in our sequence (i.e.\ $r=n_{k}$ for some $k$) and let $s>r^{3/2}\log^{4}r$ be a number not necessarily in the sequence. Then \begin{equation}
||S_{s}||\ge c||S_{r}||^{2}.\label{eq:no dim} \end{equation} \end{lem}
Lemma \ref{lem:new riemann} is used in the proof of theorem \ref{thm:doubly exponentially}. We will also need a version of lemma \ref{lem:new riemann} for theorem \ref{thm:dim}, but that version is somewhat clumsy to state, so rather than doing it now, we postpone it to the end of the proof of the lemma; the impatient can jump to (\ref{eq:dim-2}) to see it. The only point worth making now is that we will need a result that holds for all $s>r$, so throughout the proof of lemma \ref{lem:new riemann} we will note when we use the assumption $s>r^{3/2}\log^{4}r$ and when $s>r$ is enough.
It might be tempting to think that lemma \ref{lem:new riemann} is a lemma on trigonometric polynomials, i.e.\ that it would have been possible to simply formulate it for $S_{r}$ being the Fourier partial sum of $S_{s}$. However, as the proof will show, we need to have the full distribution acting ``in the background'' restricting both what $S_{r}$ and $S_{s}$ may do. \begin{proof}
Fix $r$ and $s>r$. It will be convenient to assume $s/\log^{4}s\ge r$, so let us make this assumption until further notice. Denote $K=\supp\sum c_{l}e(lx)$
and let $I$ be a component of $K^{c}$ with $|I|>(2\log^{3}s)/s$. Let $\varphi_{I}$ be a function with the following properties: \begin{enumerate} \item If $I=(a,b)$ then $\varphi_{I}$ restricted to $[a+(\log^{3}s)/s,b-(\log^{3}s)/s]$ is identically $1$. \item $\supp\varphi_{I}\subset I$ (note that $I$ is open, so this inclusion must be strict). \item $\varphi_{I}(x)\in[0,1]$ for all $x\in[0,1]$.
\item $|\widehat{\varphi_{I}}(l)|\le C\exp\Big(-c\sqrt{(|l|\log^{3}s)/s}\Big)$. \end{enumerate} It is easy to see that such a $\varphi_{I}$ exists \textemdash{} take a standard construction of a $C^{\infty}$ function $\psi:\mathbb{R}\to[0,1]$
with $\psi|_{(-\infty,0)}\equiv0$, $\psi|_{[1,\infty)}\equiv1$ and
$||\psi^{(k)}||_{\infty}\le C(k!)^{2}$ (see e.g.\ \cite[\S V.2]{K04}), define $\varphi$ by mapping $\psi$ (restricted to an appropriate interval) linearly to each half of $I$ and estimate $\widehat{\varphi}(l)$
by writing $|\widehat{\varphi}(l)|\le l^{-k}\cdot||\varphi^{(k)}||_{\infty}$ and optimising over $k$. We skip any further details.
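Numerically, the rapid Fourier decay forced by this kind of smooth cutoff is easy to observe. The following sketch is an illustration only, not part of the proof; the interval $(0.25,0.75)$, the transition width, and the standard $e^{-1/x}$ step are choices made here for concreteness.

```python
import numpy as np

def smooth_step(t):
    """C-infinity step: 0 for t <= 0, 1 for t >= 1 (standard exp(-1/t) construction)."""
    def f(u):
        # exp(-1/u) for u > 0, and exactly 0 otherwise
        return np.where(u > 0, np.exp(-1.0 / np.maximum(u, 1e-12)), 0.0)
    return f(t) / (f(t) + f(1.0 - t))

# Build a phi_I for I = (0.25, 0.75) with transition width 0.1 on each side,
# mirroring the construction in the text: identically 1 on the middle part,
# supported strictly inside I, values in [0, 1].
N = 4096
x = np.linspace(0.0, 1.0, N, endpoint=False)
w = 0.1
phi = smooth_step((x - 0.25) / w) * smooth_step((0.75 - x) / w)

# |phi-hat(l)| for l >= 0, approximated by the DFT of the samples.
coef = np.abs(np.fft.rfft(phi)) / N

# Smoothness forces the high Fourier modes to be tiny next to phi-hat(0).
print(coef[0], coef[200:300].max())
```

The printed ratio illustrates the sub-exponential decay $|\widehat{\varphi_I}(l)|\le C\exp(-c\sqrt{\cdot})$ claimed in property 4, though of course a finite computation proves nothing about the asymptotics.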
Let \[ \varphi=\sum_{I}\varphi_{I} \]
where the sum is taken over all $I$ as above, i.e.\ $I$ is a component of $K^{c}$ with $|I|>(2\log^{3}s)/s$. Our lemma is based on the following decomposition \[
||S_{r}||^{2}=\int S_{s}\cdot S_{r}=\int S_{s}\cdot S_{r}\cdot\varphi+\int S_{s}\cdot S_{r}\cdot(1-\varphi). \] To estimate the first summand, first note that \begin{align*}
|\widehat{S_{r}\cdot\varphi_{I}}(n)| & \le\sum_{l=-r}^{r}|c_{l}\widehat{\varphi_{I}}(n-l)|\le C\sum_{l=-r}^{r}\exp\left(-c\sqrt{\frac{|n-l|\log^{3}s}{s}}\right)\\
& \stackrel{\mathclap{{(*)}}}{\le}Cr\exp\Big(-c\sqrt{(|n|\log^{3}s)/s}\Big). \end{align*}
The inequality marked by $(*)$ is a simple exercise, but let us remark on it anyway. If $|n|<2s/\log^{3}s$ then both sides of $(*)$ are
$\approx r$ and it holds. If $|n|\ge2s/\log^{3}s$ then, because we assumed $s/\log^{4}s>r$, we get that $\frac{1}{2}|n|>r\ge|l|$
so $|n-l|\ge\frac{1}{2}|n|$ and $(*)$ holds again.
Summing over $I$ gives \[
|\widehat{S_{r}\cdot\varphi}(n)|\le Crs\exp\Big(-c\sqrt{(|n|\log^{3}s)/s}\Big). \] Next, because $S_{r}\cdot\varphi$ is supported outside $K$ we have \[ \sum_{l=-\infty}^{\infty}c_{l}\widehat{S_{r}\cdot\varphi}(l)=0 \] so \[
\int S_{s}\cdot S_{r}\cdot\varphi=-\sum_{|l|>s}c_{l}\widehat{S_{r}\cdot\varphi}(l) \] and then \begin{align}
\Big|\int S_{s}\cdot S_{r}\cdot\varphi\Big| & \le\sum_{|l|>s}|c_{l}|\cdot|\widehat{S_{r}\cdot\varphi}(l)|\le C\sum_{|l|>s}rs\exp\Big(-c\sqrt{(|l|\log^{3}s)/s}\Big)\nonumber \\
& \le C\exp(-c\log^{3/2}s)\label{eq:SrSsphi} \end{align} which is negligible (the last inequality can be seen, say, by dividing into blocks of size $s$, getting the expression $Crs^{2}\sum_{k=1}^{\infty}\exp(-ck\log^{3/2}s)$ which is clearly comparable to its first term $\exp(-c\log^{3/2}s)$, and finally noting that the term $rs^{2}\le s^{3}$ may be dropped at the price of changing the constants $C$ and $c$).
We move to the main term, $\int S_{r}S_{s}(1-\varphi)$, which we will estimate using Cauchy-Schwarz \[
\Big|\int S_{s}\cdot S_{r}\cdot(1-\varphi)\Big|\le||S_{s}||\cdot||S_{r}(1-\varphi)||. \]
Hence we need to estimate $||S_{r}(1-\varphi)||$. For this we do not need the smoothness of $\varphi$ so define $E:=\supp(1-\varphi)$ and replace $1-\varphi$ with $\mathbbm{1}_{E}$. Thus the lemma will be proved once we show \begin{claim*}
If $s>r^{3/2}\log^{4}r$ then $||S_{r}\mathbbm{1}_{E}||\le C$. \end{claim*} To show the claim, we need the following definition. Let $I$ be a component of $K^{c}$ (not necessarily large, any component) and denote, for each such $I$ and for each $M$, \[
A_{I,M}\mathrel{\mathop:}=|\{x\in I\cap E:|S_{r}'|\in[M,2M]\}|. \]
We need a simple bound for the values of $M$ that interest us, and we use that $|S_{r}'|\le Cr^{2}$ always (simply because the $c_{l}$ are bounded). For any $x\in I\cap E$ we may then estimate $S_{r}$ itself by integrating $S_{r}'$ from the closest point of $K$ up to $x$. We get \begin{equation}
|S_{r}(x)|\le C+\sum_{\substack{M=1\\ \text{scale} } }^{Cr^{2}}2MA_{I,M}\label{eq:SraIM-1} \end{equation} where the word ``scale'' below the $\Sigma$ means that $M$ runs through powers of $2$ (i.e.\ it is equivalent to $\sum_{m=0}^{\lfloor\log_{2}Cr^{2}\rfloor}$
and $M=2^{m}$). Note that (\ref{eq:SraIM-1}) uses our assumption that $\max_{x\in K}|S_{r}(x)|\le C$ for a constant $C$ independent of $r$ (and the additive constant $C$ in (\ref{eq:SraIM-1}) is the same $C$). Rewriting (\ref{eq:SraIM-1}) as \[
|S_{r}\cdot\mathbbm{1}_{E}|\le\sum_{I}\mathbbm{1}_{I\cap E}\Big(C+\sum_{M\textrm{ scale}}^{Cr^{2}}2MA_{I,M}\Big) \] gives \begin{align}
||S_{r}\mathbbm{1}_{E}|| & \le C\Big\Vert\sum_{I}\mathbbm{1}_{I\cap E}\Big\Vert+\sum_{M\textrm{ scale}}^{Cr^{2}}\Big\Vert\sum_{I}2MA_{I,M}\mathbbm{1}_{I\cap E}\Big\Vert\nonumber \\
& =C\sqrt{|E|}+\sum_{M\textrm{ scale}}^{Cr^{2}}2M\sqrt{\sum_{I}|I\cap E|A_{I,M}^{2}}.\label{eq:norm Sr} \end{align}
To estimate the sum notice that $A_{I,M}\le|I\cap E|\le2(\log s)^{3}/s$ so \begin{align*}
\sum_{I}|I\cap E|A_{I,M}^{2} & \le4\frac{\log^{6}s}{s^{2}}\sum_{I}A_{I,M}\le4\frac{\log^{6}s}{s^{2}}|\{x:|S_{r}'(x)|\ge M\}|\\
& \stackrel{\mathclap{{(*)}}}{\le}4\frac{\log^{6}s}{s^{2}}\frac{||S_{r}'||^{2}}{M^{2}}\le4\frac{\log^{6}s}{s^{2}}\frac{||S_{r}||^{2}r^{2}}{M^{2}} \end{align*} where the inequality marked by $(*)$ follows by Chebyshev's inequality. The sum over scales in (\ref{eq:norm Sr}) has only $C\log r\le C\log s$ terms, so we get \[
||S_{r}\mathbbm{1}_{E}||\le C\Big(\sqrt{|E|}+\frac{||S_{r}||r\log^{4}s}{s}\Big). \]
This finishes the claim, since we assumed $s>r^{3/2}\log^{4}r$ and since $||S_{r}||\le C\sqrt{r}$ because the coefficients $c_{l}$ are bounded. \qed
Let us recall how the claim implies the lemma: using Cauchy-Schwarz and $(1-\varphi)\le\mathbbm{1}_{E}$ gives \begin{equation}
\Big|\int S_{s}\cdot S_{r}\cdot(1-\varphi)\Big|\le C||S_{s}||\Big(\sqrt{|E|}+\frac{||S_{r}||r\log^{4}s}{s}\Big).\label{eq:norm Sr final}
Recall that (\ref{eq:SrSsphi}) showed that the other term in $||S_{r}||^{2}$
is negligible, so we get the same kind of estimate for $||S_{r}||^{2}$: \begin{equation}
||S_{r}||^{2}\le C||S_{s}||\bigg(\sqrt{|E|}+\frac{||S_{r}||r\log^{4}s}{s}\bigg).\label{eq:dim} \end{equation}
With $s>r^{3/2}\log^{4}r$ and $||S_{r}||\le C\sqrt{r}$ equation
(\ref{eq:dim}) translates to $||S_{r}||^{2}\le C||S_{s}||$, as needed.
Before putting the q.e.d.\ tombstone, though, let us reformulate (\ref{eq:dim}) in a way that will be useful in the proof of theorem \ref{thm:dim}. We no longer assume $s>r^{3/2}\log^{4}r$ (though we cannot yet remove the assumption $s/\log^{4}s>r$ from the beginning of the proof, as it was used to reach (\ref{eq:dim})). Recall that $E=\supp(1-\varphi)$, that $\varphi=\sum\varphi_{I}$ and that each $\varphi_{I}$ is $1$ except in a $(\log^{3}s)/s$ neighbourhood of $K$. Hence $E\subset K+[-(\log s)^{3}/s,(\log s)^{3}/s]$ (the sum here is the Minkowski sum of two sets) and (\ref{eq:dim}) can be written as \begin{equation}
||S_{r}||^{2}\le C||S_{s}||\bigg(\sqrt{\bigg|K+\left[-\frac{\log^{3}s}{s},\frac{\log^{3}s}{s}\right]\bigg|}+\frac{||S_{r}||r\log^{4}s}{s}\bigg).\label{eq:dim-2} \end{equation} Finally, note that (\ref{eq:dim-2}) does not actually require the assumption $s/\log^{4}s>r$ because in the other case it holds trivially. Hence (\ref{eq:dim-2}) holds for all $s>r$. Now we can put the tombstone. \end{proof}
\begin{proof}
[Proof of theorem \ref{thm:doubly exponentially}] Let $K$ be the support of the distribution $\sum c_{l}e(lx)$. We first claim that we can assume without loss of generality that $S_{n_{k}}$ is bounded on $K$. This uses the localisation principle exactly like we did in the proof of proposition \ref{prop:measure}, but let us do it in detail nonetheless. Since $S_{n_{k}}(x)\to0$ everywhere, $\sup_{k}|S_{n_{k}}(x)|$ is finite everywhere. Applying the Baire category theorem to the function
$\sup_{k}|S_{n_{k}}(x)|$ on $K$ we see that there is an open interval $I$ such that $S_{n_{k}}$ is bounded on a dense subset of $K\cap I$, and $K\cap I\ne\emptyset$. Continuity of $S_{n_{k}}$ shows that they are in fact bounded on the whole of $K\cap I$. By the definition of support of a distribution, we can find a smooth test function $\varphi$ supported on $I$ such that $\sum\widehat{\varphi}(l)c_{l}$ is not zero. Let $d_{l}=(c*\widehat{\varphi})(l)$ (and hence $d$ is not zero either). Then by the localisation principle (\ref{eq:Rajchman}), $\sum_{-n_{k}}^{n_{k}}d_{l}e(lx)$ converges everywhere to zero and is bounded on $K\cap I$, which contains the support of $\sum d_{l}e(lx)$. Hence we can rename $d_{l}$ to $c_{l}$ and simply assume that $S_{n_{k}}$ is bounded on $K$.
We now construct a sequence $r_{i}$ as follows: we take $r_{1}=n_{1}$ and for each $i\ge1$ let $r_{i+1}$ be the first element of the sequence $n_{k}$ which is larger than $r_{i}^{7/4}$. Because $n_{k+1}=n_{k}^{1+o(1)}$ we will have in fact that $r_{i+1}=r_{i}^{7/4+o(1)}$ and hence \begin{equation} r_{i}=\exp((7/4+o(1))^{i}).\label{eq:ridoubly} \end{equation} We now apply lemma \ref{lem:new riemann} with $r_{\text{lemma \ref{lem:new riemann}}}=r_{i}$ and $s_{\text{lemma \ref{lem:new riemann}}}=r_{i+1}$. We get \[
||S_{r_{i+1}}||\ge c||S_{r_{i}}||^{2}. \]
Denote this last constant by $\lambda$ for clarity (i.e.\ $||S_{r_{i+1}}||\ge\lambda||S_{r_{i}}||^{2}$). Iterating the inequality $||S_{r_{i+1}}||\ge\lambda||S_{r_{i}}||^{2}$
starting from some $i_{0}$ such that $||S_{r_{i_{0}}}||>e/\lambda$ gives \[
||S_{r_{i}}||\ge(\lambda||S_{r_{i_{0}}}||)^{2^{i-i_{0}}}>\exp(2^{i-i_{0}}). \] Together with (\ref{eq:ridoubly}) we get \[
||S_{r_{i}}||\ge\exp((\log r_{i})^{1.2386+o(1)}) \] (the number is $\approx\log2/\log\nicefrac{7}{4}$) which certainly contradicts the boundedness of the $c_{l}$. \end{proof}
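The exponent can be checked numerically. The sketch below is an illustration only, not part of the argument; it takes $\lambda=1$ and a starting value $e$ in the toy iteration.

```python
import math

# The iteration ||S_{r_{i+1}}|| >= lambda * ||S_{r_i}||^2 doubles log ||S_{r_i}||
# at every step, while log r_i grows like (7/4)^i by (eq:ridoubly).  Comparing
# the two growth rates gives log ||S_{r_i}|| of order (log r_i)^alpha with:
alpha = math.log(2) / math.log(7 / 4)
print(alpha)  # ~ 1.2386

# Toy iteration (lambda = 1, starting value e) exhibiting the doubling:
norm = math.e
logs = []
for i in range(5):
    logs.append(math.log(norm))
    norm = norm ** 2
print(logs)  # approximately [1, 2, 4, 8, 16]
```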
\begin{proof} [Proof of theorem \ref{thm:dim}]Denote $d=\dim_{\Mink}(K)$ (recall that this is the upper Minkowski dimension). Assume by contradiction that $c_{l}\not\equiv0$ and without loss of generality assume that $c_{0}=1$ (if $c_{0}=0$, shift the sequence $c_{l}$ and note that the condition $c_{l}\to0$ ensures that $S_{n_{k}}(x)\to0$ even for the shifted sequence).
Fix $s\in\mathbb{N}$ and let $\varphi$ be as in the proof of lemma \ref{lem:new riemann}; let us recall its most important properties: \begin{enumerate} \item $\supp\varphi\cap K=\emptyset$; \item $\supp(1-\varphi)\subset K+[-(\log^{3}s)/s,(\log^{3}s)/s]$; \item $\varphi(x)\in[0,1]$ for all $x\in[0,1]$; and
\item $|\widehat{\varphi}(l)|\le Cs\exp\Big(-c\sqrt{(|l|\log^{3}s)/s}\Big)$. \end{enumerate}
From this we can get a lower bound for $||S_{s}||$. From $\supp\varphi\cap K=\emptyset$ we get \[ \sum_{l=-\infty}^{\infty}c_{l}\widehat{\varphi}(l)=0 \] so \[
\int S_{s}\varphi=\sum_{l=-s}^{s}c_{l}\widehat{\varphi}(l)=-\sum_{|l|>s}c_{l}\widehat{\varphi}(l) \] giving \[
\Big|\int S_{s}\varphi\Big|\le\sum_{|l|>s}Cs\exp\Big(-c\sqrt{(|l|\log^{3}s)/s}\Big)\le C\exp(-c\log^{3/2}s). \] By assumption $\int S_{s}=c_{0}=1$ so for $s$ sufficiently large \[
\Big|\int S_{s}(1-\varphi)\Big|=1-O(\exp(-c\log^{3/2}s))>\nicefrac{1}{2}. \] Using Cauchy-Schwarz gives \begin{align*}
\nicefrac{1}{2} & <||S_{s}||\sqrt{|\supp(1-\varphi)|}\\
& \le||S_{s}||\sqrt{\left|K+\Big[-\frac{\log^{3}s}{s},\frac{\log^{3}s}{s}\Big]\right|}\le||S_{s}||\cdot\sqrt{s^{d-1+o(1)}} \end{align*} where in the last inequality we covered $K$ by intervals of size $1/s$ \textemdash{} no more than $s^{d+o(1)}$ by the definition of upper Minkowski dimension \textemdash{} and inflated each one by $(\log^{3}s)/s$. We conclude that \begin{equation}
||S_{s}||\ge s^{(1-d)/2+o(1)}\label{eq:lower bound} \end{equation} as $s\to\infty$.
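The covering count used in the last inequality is just the definition of upper Minkowski dimension. As a concrete illustration (the middle-thirds Cantor set here is only an example, unrelated to the $K$ of the proof), one can count the intervals of length $1/s$ needed at scale $s=3^{\text{depth}}$:

```python
import math

def cantor_intervals(depth):
    """Intervals of the depth-th approximation of the middle-thirds Cantor set."""
    intervals = [(0.0, 1.0)]
    for _ in range(depth):
        nxt = []
        for a, b in intervals:
            third = (b - a) / 3
            nxt.append((a, a + third))         # keep the left third
            nxt.append((b - third, b))          # keep the right third
        intervals = nxt
    return intervals

depth = 10
s = 3 ** depth                        # covering scale 1/s
count = len(cantor_intervals(depth))  # intervals of length 1/s needed to cover
d = math.log(count) / math.log(s)     # log 2 / log 3, the Minkowski dimension
print(count, d)
```

Here the covering count is $s^{d}$ with $d=\log2/\log3\approx0.63$, exactly the quantity $s^{d+o(1)}$ appearing in the proof.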
In the other direction, fix some $r$ in our sequence and use (\ref{eq:dim-2}) to get: \[
||S_{r}||^{2}\le C||S_{s}||\bigg(s^{(d-1)/2+o(1)}+\frac{||S_{r}||r\log^{4}s}{s}\bigg). \]
Choose $s=(r||S_{r}||)^{2/(d+1)}$ (this makes the summands approximately equal) and get \begin{align}
\frac{||S_{s}||}{\sqrt{s}} & \ge||S_{r}||^{2}\cdot(r||S_{r}||)^{{\displaystyle \Big(-\frac{d}{d+1}+o(1)\Big)}}\nonumber \\
& \stackrel{\textrm{\ensuremath{\mathclap{{(*)}}}}}{\ge}r^{{\displaystyle \Big(-\frac{d}{d+1}+\frac{1-d}{2}\cdot\frac{d+2}{d+1}+o(1)\Big)}}\label{eq:power of r} \end{align}
where the inequality marked by $(*)$ follows from $||S_{r}||\ge r^{(1-d)/2+o(1)}$, which is (\ref{eq:lower bound}) with $s$ replaced by $r$. When $d<\frac{1}{2}(\sqrt{17}-3)$ the power of $r$ in (\ref{eq:power of r})
is positive. This means that $||S_{s}||/\sqrt{s}\to\infty$, contradicting the boundedness of the coefficients $c_{l}$. \end{proof}
\section{\label{sec:Localisation}Localisation with bounded coefficients}
Our last remark is that there is a version of the localisation principle suitable even when the coefficients of the series do not converge to zero, but are still bounded. Let us state it first. \begin{thm} \label{thm:local}Let $c_{l}$ be bounded, let $n_{k}$ be some sequence, and let $\varphi$ be a smooth function. Then there exists a subsequence $m_{k}$ and two functions $a$ and $b$ such that \[ \varphi(x)\sum_{l=-m_{k}}^{m_{k}}c_{l}e(lx)-\sum_{l=-m_{k}}^{m_{k}}(c*\widehat{\varphi})(l)e(lx)+e^{im_{k}x}a(x)+e^{-im_{k}x}b(x) \] converges to zero uniformly.
Further, $a$ and $b$ have some smoothness that depends on $\varphi$ as follows: \[
|\widehat{a}(l)|\le\sum_{|j|>|l|}|\widehat{\varphi}(j)|, \] and the same bound holds for $b$. \end{thm}
(Recall that in the classic Rajchman formulation $a\equiv b\equiv0$ and $m_{k}$ can be taken to be $n_{k}$; one does not need to take a subsequence.) \begin{proof} Denote \[ E_{n}(x)=\varphi(x)\sum_{l=-n}^{n}c_{l}e(lx)-\sum_{l=-n}^{n}(c*\widehat{\varphi})(l)e(lx). \]
For $|j|>n$ only the first term appears in $\widehat{E_{n}}(j)$ and we get \[
\widehat{E_{n}}(j)=\sum_{l=-\infty}^{\infty}c_{j-l}\widehat{\varphi}(l)\mathbbm{1}\{|j-l|\le n\} \]
and in particular $|\widehat{E_{n}}(n+r)|\le C\sum_{s\ge r}|\widehat{\varphi}(s)|$, and similarly for $\widehat{E_{n}}(-n-r)$. For $|j|\le n$ the second term also appears, but since it is simply the sum without the restriction
$|j-l|\le n$ the difference takes the following simple form: \[
\widehat{E_{n}}(j)=-\sum_{l=-\infty}^{\infty}c_{j-l}\widehat{\varphi}(l)\mathbbm{1}\{|j-l|>n\}. \]
Again we get $|\widehat{E_{n}}(n-r)|\le C\sum_{|s|\ge r}|\widehat{\varphi}(s)|$ and similarly for $\widehat{E_{n}}(-n+r)$.
These uniform bounds for $|\widehat{E_{n_{k}}}(\pm n_{k}+r)|$ allow us to use compactness to take a subsequence $m_{k}$ of $n_{k}$ such that both $\widehat{E_{m_{k}}}(m_{k}+r)$ and $\widehat{E_{m_{k}}}(-m_{k}+r)$ converge for all $r$. Defining \begin{align*} a(x) & =-\sum_{r=-\infty}^{\infty}e(rx)\lim_{k\to\infty}\widehat{E_{m_{k}}}(m_{k}+r)\\ b(x) & =-\sum_{r=-\infty}^{\infty}e(rx)\lim_{k\to\infty}\widehat{E_{m_{k}}}(-m_{k}+r) \end{align*} the theorem is proved. \end{proof} Theorem \ref{thm:local} can be used to strengthen both theorems \ref{thm:doubly exponentially} and \ref{thm:dim} to hold for bounded coefficients rather than for coefficients tending to zero. But let us skip these applications and show only how to use it to strengthen proposition \ref{prop:measure}. \begin{thm} Let $\mu$ be a measure and let $n_{k}$ be a sequence such that \[ \lim_{k\to\infty}S_{n_{k}}(\mu;x)=0\qquad\forall x. \] Then $\mu=0$. \end{thm}
\begin{proof} Let $K$ be the support of $\mu$ and let, as in the proof of proposition \ref{prop:measure}, $I$ be an interval such that $S_{n_{k}}(\mu)$ is bounded on $I$ and $I\cap K\ne\emptyset$. Let $\varphi$ be a smooth function supported on all of $I$. We use theorem \ref{thm:local} to find a subsequence $m_{k}$ of $n_{k}$ and an $a$ and a $b$ such that \begin{equation} \varphi S_{m_{k}}(\mu)-S_{m_{k}}(\varphi\mu)+e^{im_{k}x}a+e^{-im_{k}x}b\to0.\label{eq:a and b} \end{equation} This has two applications. First we conclude that $\varphi\mu\not\in L^{2}$. Indeed, if we had that $\varphi\mu\in L^{2}$ then we would get that $\varphi S_{m_{k}}(\mu)\to0$ pointwise while $S_{m_{k}}(\varphi\mu)\to\varphi\mu$ in measure, which can only hold if $\varphi\mu\equiv0$ (also $a$ and $b$ need to be zero, but we do not need this fact). This contradicts our assumption that $I\cap K\ne\emptyset$ and that $\varphi$ is supported on all of $I$.
Our second conclusion from (\ref{eq:a and b}) is that $S_{m_{k}}(\varphi\mu)$ is bounded on $I\cap K$, which is the support of $\varphi\mu$. From here the proof continues as in the proof of proposition \ref{prop:measure}. \end{proof}
\end{document} |
\begin{document}
\title{Space-Efficient Construction of Compressed Indexes in Deterministic Linear Time} \author{
J. Ian Munro\thanks{Cheriton School of Computer Science, University of Waterloo. Email: {\tt [email protected]}.}
\and
Gonzalo Navarro\thanks{CeBiB --- Center of Biotechnology and Bioengineering, Department of Computer Science, University of Chile. Email: {\tt [email protected]}. Funded with Basal Funds FB0001, Conicyt, Chile.} \and
Yakov Nekrich\thanks{Cheriton School of Computer Science, University of Waterloo.
Email: {\tt [email protected]}.}
} \date{} \maketitle
\thispagestyle{empty} \begin{abstract} We show that the compressed suffix array and the compressed suffix tree of a string $T$ can be built in $O(n)$ deterministic time using $O(n\log\sigma)$ bits of space, where $n$ is the string length and $\sigma$ is the alphabet size. Previously described deterministic algorithms either run in time that depends on the alphabet size or need $\omega(n\log \sigma)$ bits of working space. Our result has immediate applications to other problems, such as yielding the first deterministic linear-time LZ77 and LZ78 parsing algorithms that use $O(n \log\sigma)$ bits.
\end{abstract}
\setcounter{page}{1}
\section{Introduction} \label{sec:intro} In the string indexing problem we pre-process a string $T$ so that for any query string $P$ all occurrences of $P$ in $T$ can be found efficiently. Suffix trees and suffix arrays are the two most popular solutions to this fundamental problem. A suffix tree is a compressed trie on suffixes of $T$; it enables us to find
all occurrences of a string $P$ in $T$ in time $O(|P|+\mathrm{occ})$ where $\mathrm{occ}$ is the number of times $P$ occurs in $T$ and $|P|$ denotes the length of $P$. In addition to indexing, suffix trees also support a number of other, more sophisticated, queries. The suffix array of a string $T$ is the lexicographically sorted array of its suffixes. Although suffix arrays do not support all queries that can be answered by the suffix tree, they use less space and are more popular in practical implementations. While the suffix tree occupies $O(n\log n)$ bits of space, the suffix array can be stored in $n\log n$ bits.
During the last twenty years there has been a significant increase in interest in compressed indexes, i.e., data structures that keep $T$ in compressed form and support string matching queries. The compressed suffix array (CSA)~\cite{GrossiV05,FerraginaM05,Sad03} and the compressed suffix tree (CST)~\cite{Sadakane07} are compressed counterparts of the suffix array and the suffix tree respectively. Many compressed indexes rely on these two data structures or their variants. Both the CSA and the CST can be stored in $O(n\log \sigma)$ bits or less; we refer to e.g.~\cite{BelazzouguiN14} or~\cite{NM06} for an overview of compressed indexes.
It is well known that both the suffix array and the suffix tree can be constructed in $O(n)$ time~\cite{McCreight76,Ukkonen95,Weiner73, KSPP05}. The first algorithm that constructs the suffix tree in linear time independently of the alphabet size was presented by Farach~\cite{Farach97}. There are also algorithms that directly construct the suffix array of $T$ in $O(n)$ time~\cite{KarkkainenSB06,KoA05}. If the (uncompressed) suffix tree is available, we can obtain CST and CSA in $O(n)$ time. However this approach requires $O(n\log n)$ bits of working space. The situation is different if we want to construct compressed variants of these data structures using only $O(n\log \sigma)$ bits of space. Within this space the algorithm of Hon et al.~\cite{HonSS09} constructs the CST in $O(n\log^{\varepsilon}n)$ time for an arbitrarily small constant $\varepsilon>0$. In the same paper the authors also showed that CSA can be constructed in $O(n\log\log \sigma)$ time. The algorithm of Okanohara and Sadakane constructs the CSA in linear time, but needs $O(n\log\sigma\log\log n)$ bits of space~\cite{OkanoharaS09}. Belazzougui~\cite{Belaz14} described randomized algorithms that build both CSA and CST in $O(n)$ time and $O(n\log\sigma)$ bits of space. His approach also provides deterministic algorithms with runtime $O(n\log\log \sigma)$~\cite{Belaz14arx}. In this paper we show that randomization is not necessary in order to construct CSA and CST in linear time. Our algorithms run in $O(n)$ deterministic time and require $O(n\log\sigma)$ bits of space.
Suffix trees, in addition to being an important part of many compressed indexes, also play an important role in many string algorithms. One prominent example is Lempel-Ziv parsing of a string using $O(n\log\sigma)$ bits. The best previous solutions for this problem either take $O(n\log\log\sigma)$ deterministic time or $O(n)$ randomized time~\cite{KS16,BelazzouguiP16}. For instance K{\"o}ppl and Sadakane~\cite{KS16} showed how we can obtain LZ77- and LZ78-parsing for a string $T$ in $O(n)$ deterministic time and $O(n\log\sigma)$ bits, provided that the CST of $T$ is constructed. Thus our algorithm, combined with their results, leads to the first linear-time deterministic LZ-parsing algorithm that needs $O(n\log\sigma)$ bits of space.
\paragraph{Overview.} The main idea of our approach is the use of batch processing. Certain operations, such as rank and select queries on sequences, are a bottleneck of previous deterministic solutions. Our algorithms are divided into a large number of small tasks that can be executed independently. Hence, we can collect large batches of queries and answer all queries in a batch. This approach speeds up the computation because, as will be shown later, answering all queries in a batch takes less time than answering the same set of queries one-by-one. For example, our algorithm for generating the Burrows-Wheeler Transform of a text $T$ works as follows. We cut the original text into slices of $\Delta=\log_{\sigma}n$ symbols. The BWT sequence is constructed by scanning all slices in the right-to-left order. All slices are processed at the same time. That is, the algorithm works in $\Delta$ steps and during the $j$-th step, for $0\le j \le \Delta-1$, we process all suffixes that start at position $i\Delta-j-1$ for all $1\le i\le n/\Delta$. Our algorithm maintains the sorted list of suffixes and keeps information about those suffixes in a symbol sequence $B$. For every suffix $S_i=T[i\Delta -j-1 ..]$ processed during step $j$, we must find its position in the sorted list of suffixes. Then the symbol $T[i\Delta-j-2]$ is inserted at the position that corresponds to $S_i$ in $B$. Essentially we can find the position of every new suffix $S_i$ by answering a rank query on the sequence $B$. Details are given in Section~\ref{sec:lintimebwt}. Next we must update the sequence by inserting the new symbols into $B$. Unfortunately we need $\Omega(\log n/\log \log n)$ time in general to answer rank queries on a dynamic sequence~\cite{FS89}. Even if we do not have to update the sequence, we need $\Omega(\log\log \sigma)$ time to answer a rank query~\cite{BelazzouguiN15}. In our case, however, the scenario is different: there is no need to answer queries one-by-one.
We must provide answers to a large \emph{batch} of $n/\Delta$ rank queries with one procedure. In this paper we show that the lower bounds for rank queries can be circumvented in the batched scenario: we can answer the batch of queries in $O(n/\Delta)$ time, i.e., in constant time per query. We also demonstrate that a batch of $n/\Delta$ insertions can be processed in $O(n/\Delta)$ time. This result is of independent interest.
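To make the batched-query idea concrete, here is a minimal sketch of ours, purely illustrative (the function name is hypothetical, and the actual data structures are far more refined and also handle updates): if a batch of rank queries over a static sequence is sorted by position, a single left-to-right scan answers all of them, so the amortized cost per query is constant even though a single arbitrary query needs $\Omega(\log\log\sigma)$ time.

```python
def batched_ranks(seq, queries):
    """Answer rank queries rank(c, i) = # occurrences of symbol c in seq[0..i-1].

    `queries` is a list of (i, c) pairs.  One scan over seq answers the whole
    batch.  We use the built-in sort for brevity; in the paper's setting the
    query positions can be radix-sorted in linear time, giving O(n + q) total.
    """
    q = len(queries)
    order = sorted(range(q), key=lambda t: queries[t][0])  # batch sorted by position
    counts = {}          # symbol -> occurrences seen so far
    answers = [0] * q
    ptr = 0
    for pos in range(len(seq) + 1):
        # answer every query whose position is the current scan position
        while ptr < q and queries[order[ptr]][0] == pos:
            _, c = queries[order[ptr]]
            answers[order[ptr]] = counts.get(c, 0)
            ptr += 1
        if pos < len(seq):
            counts[seq[pos]] = counts.get(seq[pos], 0) + 1
    return answers

print(batched_ranks("abracadabra", [(5, "a"), (11, "a"), (11, "b"), (0, "r")]))
# [2, 5, 2, 0]
```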
Data structures that answer batches of rank queries and support batched updates are described in Sections~\ref{sec:batchrank},~\ref{sec:listlabel}, and~\ref{sec:batchdynseq}. This is the most technically involved aspect of our result. In Section~\ref{sec:batchrank} we show how answers to a large batch of queries can be provided. In Section~\ref{sec:listlabel} we describe a special labeling scheme that assigns monotonically increasing labels to elements of a list. We conclude this portion in Section~\ref{sec:batchdynseq}, where we show how the static data structure can be dynamized. Next we turn to the problem of constructing the compressed suffix tree. First we describe a data structure that answers partial rank queries in constant time and uses $O(n\log\log \sigma)$ additional bits in Section~\ref{sec:partrank}; unlike previous solutions, our data structure can be constructed in $O(n)$ deterministic time. This result is plugged into the algorithm of Belazzougui~\cite{Belaz14} to obtain the suffix tree topology in $O(n)$ deterministic time. Finally we show how the permuted LCP array (PLCP) can be constructed in $O(n)$ time, provided that we have already built the suffix array and the suffix tree topology; the algorithm is described in Section~\ref{sec:permlcp}. Our algorithm for constructing PLCP is also based on batch processing of rank queries. To make this paper self-contained we provide some background on compressed data structures and indexes in Section~\ref{sec:prelim}.
We denote by $T[i..]$ the suffix of $T$ starting at position $i$ and we denote by $T[i..j]$ the substring of $T$ that begins with $T[i]$ and ends with $T[j]$, $T[i..]=T[i]T[i+1]\ldots T[n-1]$ and $T[i..j]=T[i]T[i+1]\ldots T[j-1]T[j]$. We assume that the text $T$ ends with a special symbol \$ and \$ lexicographically precedes all other symbols in $T$. The alphabet size is $\sigma$ and symbols are integers in $[0..\sigma-1]$ (so \$ corresponds to $0$). In this paper, as in the previous papers on this topic, we use the word RAM model of computation. A machine word consists of $\log n$ bits and we can execute standard bit operations, addition and subtraction in constant time. We will assume for simplicity that the alphabet size $\sigma\le n^{1/4}$. This assumption is not restrictive because for $\sigma> n^{1/4}$ linear-time algorithms that use $O(n\log\sigma)=O(n\log n)$ bits are already known.
\section{Linear Time Construction of the Burrows-Wheeler Transform} \label{sec:lintimebwt} In this section we show how the Burrows-Wheeler transform (BWT) of a text $T$ can be constructed in $O(n)$ time using $O(n\log \sigma)$ bits of space. Let $\Delta=\log_{\sigma}n$. We can assume w.l.o.g. that the text length is divisible by $\Delta$ (if this is not the case we can pad the text $T$ with $\ceil{n/\Delta}\Delta - n$ \$-symbols). The BWT of $T$ is a sequence $B$ defined as follows:
if $T[k..]$ is the $(i+1)$-th lexicographically smallest suffix, then $B[i]=T[k-1]$\footnote{So $B[0]$ has the lexicographically smallest suffix ($i+1=1$) and so on. The exact formula is $B[i]=T[(k-1) \mathrm{mod}\, n]$. We will write $B[i]=T[k-1]$ to avoid tedious details.}. Thus the symbols of $B$ are the symbols that precede the suffixes of $T$, sorted in lexicographic order. We will say that $T[k-1]$ \emph{represents} the suffix $T[k..]$ in $B$.
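As a concrete illustration of this definition, a naive quadratic-time construction can be sketched as follows (a toy check for small inputs only, with a function name of our choosing; this is not the linear-time algorithm of this section):

```python
def naive_bwt(t):
    """Naive BWT: sort all suffixes of t (which must end with the unique
    smallest symbol '$') and output the symbols preceding them (cyclically)."""
    n = len(t)
    order = sorted(range(n), key=lambda k: t[k:])   # suffixes in lexicographic order
    # B[i] = symbol preceding the (i+1)-th smallest suffix; index (k-1) mod n
    return ''.join(t[(k - 1) % n] for k in order)
```

For instance, the BWT of banana\$ computed this way is annb\$aa.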
Our algorithm divides the suffixes of $T$ into $\Delta$ classes and constructs $B$ in $\Delta$ steps. We say that a suffix $S$ is a $j$-suffix for $0\le j< \Delta$ if $S=T[i\Delta-j-1..]$ for some $i$, and denote by ${\cal S}_j$ the set of all $j$-suffixes, ${\cal S}_j=\{\,T[i\Delta-j-1..]\,|\, 1\le i\le n/\Delta\,\}$. During the $j$-th step we process all $j$-suffixes and insert symbols representing $j$-suffixes at appropriate positions of the sequence $B$.
\paragraph{Steps $0-1$.} We sort suffixes in ${\cal S}_0$ and ${\cal S}_1$ by constructing a new text and representing it as a sequence of $n/\Delta$ meta-symbols. Let $T_1=T[n-1]T[0]T[1]\ldots T[n-2]$ be the text $T$ rotated by one symbol to the right and let $T_2=T[n-2]T[n-1]T[0]\ldots T[n-3]$ be the text obtained by rotating $T_1$ one symbol to the right. We represent $T_1$ and $T_2$ as sequences of length $n/\Delta$ over meta-alphabet $\sigma^{\Delta}$ (each meta-symbol corresponds to a string of length $\Delta$). Thus we view $T_1$ and $T_2$ as $$T_1=\boxed{T[n-1]\ldots T[\Delta-2]}\boxed{T[\Delta-1]\ldots T[2\Delta-2]}\boxed{T[2\Delta-1]\ldots T[3\Delta-2]}\boxed{T[3\Delta-1]\ldots }\ldots $$ $$T_2=\boxed{T[n-2]\ldots T[\Delta-3]}\boxed{T[\Delta-2]\ldots T[2\Delta-3]}\boxed{T[2\Delta-2]\ldots T[3\Delta-3]}\boxed{T[3\Delta-2]\ldots}\ldots $$
Let $T_3=T_1\circ T_2$ denote the concatenation of $T_1$ and $T_2$. To sort the suffixes of $T_3$, we sort the meta-symbols of $T_3$ and rename them with their ranks. Since meta-symbols correspond to $(\log n)$-bit integers, we can sort them in time $O(n)$ using radix sort. Then we apply a linear-time and linear-space suffix array construction algorithm~\cite{KarkkainenSB06} to $T_3$. We thus obtain a sorted list $L$ of the suffixes of the meta-symbol sequence $T_3$. Suffixes of $T_3$ correspond to the suffixes from ${\cal S}_0\cup {\cal S}_1$ in the original text $T$: the suffix $T[i\Delta-1..]\in {\cal S}_0$ corresponds to the suffix of $T_3$ starting with the meta-symbol $\boxed{T[i\Delta-1]T[i\Delta]\ldots}$ and the suffix $T[i\Delta-2..]\in {\cal S}_1$ corresponds to the suffix of $T_3$ starting with $\boxed{T[i\Delta-2]T[i\Delta-1]\ldots}$. Since we assume that the special symbol \$ is smaller than all other symbols, this correspondence is order-preserving. Hence, by sorting the suffixes of $T_3$ we obtain the sorted list $L'$ of suffixes in ${\cal S}_0\cup {\cal S}_1$. Now we are ready to insert the symbols representing these suffixes into $B$: initially $B$ is empty; then the list $L'$ is traversed and for every suffix $T[k..]$ that appears in $L'$ we add the symbol $T[k-1]$ at the end of $B$.
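The order-preserving reduction to meta-symbols can be sketched as follows (illustration only: Python's comparison sort stands in for the radix sort and the linear-time suffix array construction, we sort the meta-suffixes of a single text rather than of $T_3=T_1\circ T_2$, and the function name is ours):

```python
def sort_by_metasymbols(t, delta):
    """Sort the suffixes of t that start at multiples of `delta` by
    viewing t as a sequence of length-`delta` meta-symbols."""
    assert len(t) % delta == 0
    meta = [t[k:k + delta] for k in range(0, len(t), delta)]
    # rename meta-symbols by their ranks (order-preserving)
    ranks = {m: r for r, m in enumerate(sorted(set(meta)))}
    coded = [ranks[m] for m in meta]
    # sorting the renamed meta-suffixes matches the character-level order
    order = sorted(range(len(coded)), key=lambda i: coded[i:])
    return [i * delta for i in order]   # starting positions in t
```

For t = "cabbage\$" and delta = 2, the returned order [4, 2, 0, 6] matches the lexicographic order of the character-level suffixes starting at even positions.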
When suffixes in ${\cal S}_0$ and ${\cal S}_1$ are processed, we need to record some information for the next step of our algorithm. For every suffix $S\in {\cal S}_1$ we keep its position in the sorted list of suffixes. The position of suffix $T[i\Delta-2..]$ is stored in the entry $W[i]$ of an auxiliary array $W$, which at the end of the $j$-th step will contain the positions of the suffixes $T[i\Delta-j-1..]$. We also keep an auxiliary array $\mathit{Acc}$ of size $\sigma$: $\mathit{Acc}[a]$ is equal to the number of occurrences of symbols $a'\le a-1$ in the current sequence $B$.
\paragraph{Step $j$ for $j\ge 2$.} Suppose that suffixes from ${\cal S}_0$, $\ldots$, ${\cal S}_{j-1}$ are already processed. The symbols that precede suffixes from these sets are stored in the sequence $B$; the $k$-th symbol $B[k]$ in $B$ is the symbol that precedes the $k$-th lexicographically smallest suffix from $\cup_{t=0}^{j-1}{\cal S}_t$. For every suffix $T[i\Delta-j..]$, we know its position $W[i]$ in $B$. Every suffix
$S_i=T[i\Delta-j-1..]\in {\cal S}_j$ can be represented as $S_i=aS'_i$ for some symbol $a$ and the suffix $S'_i=T[i\Delta-j..]\in {\cal S}_{j-1}$. We look up the position $t_i=W[i]$ of $S'_i$ and answer the rank query $r_i=\idrm{rank}_a(t_i,B)$. We need $\Omega(\log\frac{\log \sigma}{\log\log n})$ time to answer a single rank query on a static sequence~\cite{BelazzouguiN15}. If updates are to be supported, then we need $\Omega(\log n/\log \log n)$ time to answer such a query~\cite{FS89}. However, in our case the scenario is different: we perform a \emph{batch} of $n/\Delta$ queries to the sequence $B$, i.e., we have to find $r_i$ for \emph{all} $t_i$. During Step $2$ the number of queries is equal to $|B|/2$ where $|B|$ denotes the number of symbols in $B$. During Step $j$ the number of queries is $|B|/j\ge |B|/\Delta$. We will show in Section~\ref{sec:batchrank} that such a large batch of rank queries can be answered in $O(1)$ time per query. Now we can find the rank $p_i$ of $S_i$ among $\cup_{t=1}^{j}{\cal S}_t$: there are exactly $p_i$ suffixes in $\cup_{t=1}^{j}{\cal S}_t$ that are smaller than $S_i$, where $p_i=\mathit{Acc}[a]+r_i$. Correctness of this computation can be proved as follows. \begin{proposition}
\label{prop:sufrank} Let $S_i=aS'_i$ be an arbitrary suffix from the set ${\cal S}_j$. For every occurrence of a symbol $a'<a$ in the sequence $B$, there is exactly one suffix $S_p< S_i$ in $\cup_{t=1}^j {\cal S}_t$, such that $S_p$ starts with $a'$. Further, there are exactly $r_i$ suffixes $S_v$ in $\cup_{t=1}^j {\cal S}_t$ such that $S_v\le S_i$ and $S_v$ starts with $a$. \end{proposition} \begin{proof}
Suppose that a suffix $S_p$ from ${\cal S}_t$, such that $j\ge t\ge 1$, starts with $a'<a$. Then $S_p=a'S'_p$ for some $S'_p\in {\cal S}_{t-1}$. By definition of the sequence $B$, there is exactly one occurrence of $a'$ in $B$ for every such $S'_p$. Now suppose that a suffix $S_v\in {\cal S}_t$, such that $j\ge t\ge 1$, starts with $a$ and $S_v\le S_i$. Then $S_v=aS'_v$ for $S'_v\in {\cal S}_{t-1}$ and $S'_v\le S'_i$. For every such $S'_v$ there is exactly one occurrence of the symbol $a$ in $B[0..t_i]$, where $t_i$ is the position of $S'_i$ in $B$. \end{proof} The above calculation did not take into account the suffixes from ${\cal S}_0$. We compute the number of suffixes $S_k\in {\cal S}_0$ such that $S_k<S_i$ using the approach of Steps $0-1$. Let $T_1$ be the text obtained by rotating $T$ one symbol to the right. Let $T'$ be the text obtained by rotating $T$ $j+1$ symbols to the right. We can sort the suffixes of ${\cal S}_0$ and ${\cal S}_j$ by concatenating $T_1$ and $T'$, viewing the resulting text $T''$ as a sequence of $2n/\Delta$ meta-symbols and constructing the suffix array for $T''$. When suffixes in ${\cal S}_0\cup {\cal S}_j$ are sorted, we traverse the sorted list of suffixes; for every suffix $S_i\in {\cal S}_j$ we know the number $q_i$ of lexicographically smaller suffixes from ${\cal S}_0$.
We then modify the sequence $B$: we sort the new suffixes $S_i$ by $o_i=p_i+q_i$. Next we insert the symbol $T[i\Delta-j-2]$, which precedes $S_i=T[i\Delta-j-1..]$, at position $o_i-1$ in $B$ (assuming the first index of $B$ is $B[0]$); insertions are performed in increasing order of $o_i$. We will show that this procedure also takes $O(1)$ time per update for a large batch of insertions. Finally we record the position of every new suffix from ${\cal S}_j$ in the sequence $B$. Since the positions of suffixes from ${\cal S}_{j-1}$ are not needed any more, we use the entry $W[i]$ of $W$ to store the position of $T[i\Delta-j-1..]$. The array $\mathit{Acc}$ is also updated.
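The rank computation $p_i=\mathit{Acc}[a]+r_i$ underlying this step can be sketched with naive $O(|B|)$-time ranks (a toy helper of ours, not the batched procedure of Section~\ref{sec:batchrank}). In the toy scenario below, $B$ = "nna" lists the symbols preceding the already processed suffixes a\$, ana\$, na\$ of banana\$ in sorted order:

```python
def count_smaller(B, Acc, a, t):
    """Rank of a new suffix S = a + S' among the processed suffixes,
    following the Proposition: Acc[a] counts occurrences of symbols
    smaller than a in B, and we add the occurrences of a in B[0..t],
    where t is the position representing the tail S' in B.
    Naive O(|B|) rank; in the paper the ranks are answered in batches."""
    return Acc[a] + sum(1 for x in B[:t + 1] if x == a)

# toy scenario: processed suffixes of "banana$" are a$, ana$, na$ (sorted);
# B lists their preceding symbols, Acc[a] = occurrences of symbols < a in B
B, Acc = "nna", {'a': 0, 'n': 1}
```

Here count_smaller(B, Acc, 'a', 2) returns 1 for the suffix ana\$ = a + na\$ (only a\$ is smaller), and count_smaller(B, Acc, 'n', 1) returns 3 for nana\$ = n + ana\$.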
When Step $\Delta-1$ is completed, the sequence $B$ contains $n$ symbols and $B[i]$ is the symbol that precedes the $(i+1)$-th smallest suffix of $T$. Thus we have obtained the BWT of $T$. Steps $0-1$ of our algorithm use $O((n/\Delta)\log n)=O(n\log \sigma)$ bits. For all the following steps we need to maintain the sequence $B$ and the array $W$. $B$ uses $O(\log\sigma)$ bits per symbol and $W$ needs $O((n/\Delta)\log n)=O(n\log\sigma)$ bits. Hence our algorithm uses $O(n\log \sigma)$ bits of workspace. Procedures for querying and updating $B$ are described in the following section. Our result can be summed up as follows. \begin{theorem}
\label{theor:bwt} Given a string $T[0..n-1]$ over an alphabet of size $\sigma$, we can construct the BWT of $T$ in $O(n)$ deterministic time using $O(n\log\sigma)$ bits. \end{theorem}
\section{Batched Rank Queries on a Sequence} \label{sec:batchrank} In this section we show how a batch of $m$ rank queries for $\frac{n}{\log^2 n} \le m \le n$ can be answered in $O(m)$ time on a sequence $B$ of length $n$. We start by describing a static data structure. A data structure that supports batches of queries and batches of insertions will be described later. We will assume $\sigma\ge \log^4n$; if this is not the case, the data structure from~\cite{FMMN07} can be used to answer rank queries in time $O(1)$.
Following previous work~\cite{GolynskiMR06}, we divide $B$ into chunks of size $\sigma$ (except for the last chunk that contains at most $\sigma$ symbols). For every symbol $a$ we keep a binary sequence $M_a=1^{d_1}01^{d_2}0\ldots 1^{d_f}$ where $f$ is the total number of chunks and $d_i$ is the number of occurrences of $a$ in the $i$-th chunk. We keep the following information for every chunk $C$. Symbols in a chunk $C$ are represented as pairs $(a,i)$: we store a pair $(a,i)$ if and only if $C[i]=a$. These pairs are sorted by symbols and pairs representing the same symbol $a$ are sorted by their positions in $C$; all sorted pairs from a chunk are kept in a sequence $R$. The array $F$ consists of $\sigma$ entries; $F[a]$ contains a pointer to the first occurrence of a symbol $a$ in $R$ (or {\em null} if $a$ does not occur in $C$). Let $R_a$ denote the subsequence of $R$ that contains all pairs $(a,\cdot)$ for some symbol $a$. If $R_a$ contains at least $\log^2 n$ pairs, we split $R_a$ into groups $H_{a,r}$ of size $\Theta(\log^2 n)$. For every group, we keep its first pair in the sequence $R'$. Thus $R'$ is also a subsequence of $R$. For each pair $(a',i')$ in $R'$ we also store the partial rank of $C[i']$ in $C$, $\idrm{rank}_{C[i']}(i',C)$.
All pairs in $H_{a,r}$ are kept in a data structure $D_{a,r}$ that contains the second components of pairs $(a,i)\in H_{a,r}$. Thus $D_{a,r}$ contains positions of $\Theta(\log^2 n)$ consecutive symbols $a$. If $R_a$ contains less than $\log^2 n$ pairs, then we keep all pairs starting with symbol $a$ in one group $H_{a,0}$. Every $D_{a,r}$ contains $O(\log^2 n)$ elements. Hence we can implement $D_{a,r}$ so that predecessor queries are answered in constant time: for any integer $q$, we can find the largest $x\in H_{a,r}$ satisfying $x\le q$ in $O(1)$ time~\cite{FW94}. We can also find the number of elements $x\in H_{a,r}$ satisfying $x\le q$ in $O(1)$ time. This operation on $H_{a,r}$ can be implemented using bit techniques similar to those suggested in~\cite{NavarroN13}; details are to be given in the full version of this paper.
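A toy sketch of the chunk-counting sequences $M_a$, and of how one $\idrm{select}_0$ query on $M_a$ yields the number of occurrences of $a$ in the first $j$ chunks (the reduction used at the end of this section); all names are ours and the bit vectors are plain Python lists rather than succinct structures:

```python
def build_M(B, symbols, chunk):
    """Build M_a = 1^{d_1} 0 1^{d_2} 0 ... 1^{d_f}: d_i is the number of
    occurrences of a in the i-th chunk of B; no 0 after the last chunk."""
    M = {a: [] for a in symbols}
    for start in range(0, len(B), chunk):
        for a in symbols:
            M[a].extend([1] * B[start:start + chunk].count(a))
            M[a].append(0)             # chunk separator
    for a in symbols:
        M[a].pop()                     # drop the separator after the last chunk
    return M

def occ_before_chunk(M_a, j):
    """Occurrences of a in the first j chunks: select_0(j, M_a) - j,
    with a naive (linear scan) select_0 returning a 1-based position."""
    if j == 0:
        return 0
    zeros = [p for p, bit in enumerate(M_a) if bit == 0]
    return (zeros[j - 1] + 1) - j      # ones preceding the j-th zero
```

For B = "ababbaaa" with chunks of size 4, M_a = 110111 and occ_before_chunk(M['a'], 1) = 2, the number of occurrences of a in the first chunk.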
\paragraph{Queries on a Chunk.} Now we are ready to answer a batch of queries in $O(1)$ time per query. First we describe how queries on a chunk can be answered. Answering a query $\idrm{rank}_a(i,C)$ on a chunk $C$ is equivalent to counting the number of pairs $(a,j)$ in $R$ such that $j\le i$. Our method works in three steps. We start by sorting the sequence of all queries on $C$. Then we ``merge'' the sorted query sequence with $R'$. That is, we find for every $\idrm{rank}_a(i,C)$ the rightmost pair $(a,j')$ in $R'$, such that $j'\le i$. Pair $(a,j')$ provides us with an approximate answer to $\idrm{rank}_a(i,C)$ (up to an additive $O(\log^2 n)$ term). Then we obtain the exact answer to each query by searching in some data structure $D_{a,j}$. Since $D_{a,j}$ contains only $O(\log^2 n)$ elements, the search can be completed in $O(1)$ time. A more detailed description follows.
Suppose that we must answer $v$ queries $\idrm{rank}_{a_1}(i_1,C)$, $\idrm{rank}_{a_2}(i_2,C)$, $\ldots$, $\idrm{rank}_{a_v}(i_v,C)$ on a chunk $C$. We sort the sequence of queries by pairs $(a_j,i_j)$ in increasing order. This sorting step takes $O(\sigma/\log^2 n + v)$ time: if $v<\sigma/\log^3n$, we sort in $O(v\log n)=O(\sigma/\log^2 n)$ time; if $v\ge \sigma/\log^3 n$, we sort in $O(v)$ time using radix sort (e.g., with radix $\sqrt{\sigma}$). Then we simultaneously traverse the sorted sequence of queries and $R'$; for each query pair $(a_j,i_j)$ we identify the pair $(a_t,p_t)$ in $R'$ such that either (i) $p_t\le i_j\le p_{t+1}$ and $a_j=a_t=a_{t+1}$ or (ii) $p_t\le i_j$, $a_j=a_t$, and $a_t\not=a_{t+1}$. That is, we find the largest $p_t\le i_j$ such that $(a_j,p_t)\in R'$ for every query pair $(a_j,i_j)$. If $(a_t,p_t)$ is found, we search in the group $H_{a_t,p_t}$ that starts with the pair $(a_t,p_t)$. If the symbol $a_j$ does not occur in $R'$, then we search in the leftmost group $H_{a_j,0}$. Using $D_{a_t,p_t}$ (resp.\ $D_{a_j,0}$), we find the largest position $x_t\in H_{a_t,p_t}$ such that $x_t\le i_j$. Thus $x_t$ is the largest position in $C$ satisfying $x_t\le i_j$ and $C[x_t]=a_j$. We can then compute $\idrm{rank}_{a_j}(x_t,C)$ as follows: let $n_1$ be the partial rank of $C[p_t]$, $n_1=\idrm{rank}_{C[p_t]}(p_t,C)$. Recall that we explicitly store this information for every position in $R'$. Let $n_2$ be the number of positions $i\in H_{a_t,p_t}$ satisfying $i\le x_t$. We can compute $n_2$ in $O(1)$ time using $D_{a_t,p_t}$. Then $\idrm{rank}_{a_j}(x_t,C)=n_1+n_2$. Since $C[x_t]$ is the rightmost occurrence of $a_j$ up to $C[i_j]$, $\idrm{rank}_{a_j}(i_j,C)=\idrm{rank}_{a_j}(x_t,C)$. The time needed to traverse the sequence $R'$ is $O(\sigma/\log^2 n)$ for all the queries. Other computations take $O(1)$ time per query. Hence a sequence of $v$ queries on a chunk is answered in $O(v+\sigma/\log^2 n)$ time.
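The essence of the chunk procedure (sort the queries once, then answer all of them by a single merge with the sorted pair list) can be sketched as follows; simplified in that we merge with the full pair sequence $R$ instead of the sampled $R'$ and the small structures $D_{a,r}$, and the function name is ours:

```python
def batched_rank(C, queries):
    """Answer a batch of rank queries on a chunk C.
    queries: list of (a, i) pairs; returns rank_a(i, C) = number of
    occurrences of a in C[0..i] for each query, in input order."""
    R = sorted((a, k) for k, a in enumerate(C))   # pairs sorted by (symbol, position)
    first = {}
    for pos, (a, _) in enumerate(R):
        first.setdefault(a, pos)                  # F[a]: first pair of symbol a in R
    order = sorted(range(len(queries)), key=lambda q: queries[q])
    answers = [0] * len(queries)
    j = 0
    for q in order:                               # merge sorted queries with R
        a, i = queries[q]
        while j < len(R) and R[j] < (a, i + 1):
            j += 1
        # j = number of pairs smaller than (a, i+1); subtract pairs of symbols < a
        answers[q] = j - first.get(a, j)
    return answers
```

For example, batched_rank(list("abracadabra"), [('a', 4), ('b', 8), ('z', 5)]) returns [2, 2, 0].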
\paragraph{Global Sequence.} Now we consider the global sequence of queries $\idrm{rank}_{a_1}(i_1,B)$, $\ldots$, $\idrm{rank}_{a_m}(i_m,B)$. First we assign queries to chunks (e.g., by sorting all queries by $\floor{i_j/\sigma}$ using radix sort). We answer the batch of queries on the $j$-th chunk in $O(m_j+\sigma/\log^2 n)$ time where $m_j$ is the number of queries on the $j$-th chunk. Since $\sum m_j=m$, all $m$ queries are answered in $O(m +n/\log^2 n)=O(m)$ time. Now we know the rank $n_{j,2}=\idrm{rank}_{a_j}(i'_j,C)$, where $i'_j=i_j-\floor{i_j/\sigma}\sigma$ is the relative position of $B[i_j]$ in its chunk $C$.
The binary sequences $M_a$ allow us to reduce rank queries on $B$ to rank queries on a chunk $C$. All sequences $M_a$ together contain $n+\floor{n/\sigma}\sigma$ bits; hence they use $O(n)$ bits of space. We can compute the number of occurrences of $a$ in the first $j$ chunks in $O(1)$ time by answering one select query. Consider a rank query $\idrm{rank}_{a_j}(i_j,B)$ and suppose that $n_{j,2}$ is already known. We compute $n_{j,1}$, where $n_{j,1}=\idrm{select}_0(\floor{i_j/\sigma},M_{a_j})-\floor{i_j/\sigma}$ is the number of times $a_j$ occurs in the first $\floor{i_j/\sigma}$ chunks. Then we compute $\idrm{rank}_{a_j}(i_j,B)=n_{j,1}+n_{j,2}$. \begin{theorem}
\label{theor:batchstat} We can keep a sequence $B[0..n-1]$ over an alphabet of size $\sigma$ in $O(n\log\sigma)$ bits of space so that a batch of $m$ $\idrm{rank}$ queries can be answered in $O(m)$ time, where $\frac{n}{\log^2 n} \le m\le n$. \end{theorem} The static data structure of Theorem~\ref{theor:batchstat} can be dynamized so that batched queries and batched insertions are supported. Our dynamic data structure supports a batch of $m$ queries in time $O(m)$ and a batch of $m$ insertions in amortized time $O(m)$ for any $m$ that satisfies $\frac{n}{\log_{\sigma}n}\le m\le n$. We describe the dynamic data structure in Sections~\ref{sec:listlabel} and~\ref{sec:batchdynseq}. \section{Building the Suffix Tree}
Belazzougui proved the following result~\cite{Belaz14}: if we are given the BWT $B$ of a text $T$ and if we can report all the distinct symbols in a range of $B$ in optimal time, then in $O(n)$ time we can: (i) enumerate all the suffix array intervals corresponding to internal nodes of the suffix tree and (ii) for every internal node list the labels of its children and their intervals. Further, he showed that if we can enumerate all the suffix tree intervals in $O(n)$ time, then we can build the suffix tree topology~\cite{Sadakane07} in $O(n)$ time. The algorithms need only $O(n)$ additional bits of space. We refer to Lemmas 4 and 1 and their proofs in~\cite{Belaz14} for details.
In Section~\ref{sec:partrank} we show that a partial rank data structure can be built in $O(n)$ deterministic time. This can be used to build the desired structure that reports the distinct symbols in a range, in $O(n)$ time and using $O(n\log\log\sigma)$ bits. The details are given in Section~\ref{sec:colrep}. Therefore, we obtain the following result.
\begin{lemma}
\label{theor:suftreetopol} If we already constructed the BWT of a text $T$, then we can build the suffix tree topology in $O(n)$ time using $O(n\log\log \sigma)$ additional bits. \end{lemma}
In Section~\ref{sec:permlcp} we show that the permuted LCP array of $T$ can be constructed in $O(n)$ time using $O(n\log\sigma)$ bits of space. Thus we obtain our main result on building compressed suffix trees.
\begin{theorem}
\label{theor:cst} Given a string $T[0..n-1]$ over an alphabet of size $\sigma$, we can construct the compressed suffix tree of $T$ in $O(n)$ deterministic time using $O(n\log\sigma)$ additional bits. \end{theorem}
\section{Constructing the Permuted LCP Array} \label{sec:permlcp} The permuted LCP array is defined as $PLCP[i]=j$ if and only if $SA[r]=i$ and the longest common prefix of $T[SA[r]..]$ and $T[SA[r-1]..]$ is of length $j$. In other words $PLCP[i]$ is the length of the longest common prefix of $T[i..]$ and the suffix that precedes it in the lexicographic ordering. In this section we show how the permuted LCP array $PLCP[0..n-1]$ can be built in linear time.
\paragraph{Preliminaries.} For $i=0,1,\ldots,n-1$ let $\ell_i=PLCP[i]$. It is easy to observe that $\ell_i\le \ell_{i+1}+1$: if the longest common prefix of $T[i..]$ and $T[j..]$ is of length $q$, then the longest common prefix of $T[i+1..]$ and $T[j+1..]$ is of length at least $q-1$. Let $\Delta'=\Delta\log\log\sigma$ for $\Delta=\log_{\sigma}n$. By the same argument, $\ell_{i}\le \ell_{i+\Delta'}+\Delta'$. To simplify the description we will further assume that $\ell_{-1}=0$. It can also be shown that $\sum_{i=0}^{n-1}(\ell_i-\ell_{i-1})=O(n)$.
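These properties are easy to check against a naive PLCP computation (quadratic-time toy code of ours, for small inputs only):

```python
def plcp(t):
    """Naive PLCP: PLCP[i] = length of the longest common prefix of T[i..]
    and its lexicographic predecessor (0 if T[i..] is the smallest suffix)."""
    n = len(t)
    sa = sorted(range(n), key=lambda k: t[k:])    # suffix array by brute force
    res = [0] * n
    for r in range(1, n):
        i, j = sa[r], sa[r - 1]                   # suffix and its predecessor
        l = 0
        while i + l < n and j + l < n and t[i + l] == t[j + l]:
            l += 1
        res[i] = l
    return res
```

For banana\$ this gives [0, 3, 2, 1, 0, 0, 0], and indeed $\ell_i\le \ell_{i+1}+1$ holds at every position.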
We will denote by $B$ the BWT sequence of $T$; ${\overline B}$ denotes the BWT of the reversed text ${\overline T}=T[n-1]T[n-2]\ldots T[1]T[0]$. Let $p$ be a factor (substring) of $T$ and let $c$ be a character. The operation $\mathtt{extendright}(p,c)$ computes the suffix interval of $pc$ in $B$ and the suffix interval of $\overline{pc}$ in ${\overline B}$ provided that the intervals of $p$ and ${\overline p}$ are known. The operation $\mathtt{contractleft}(cp)$ computes the suffix intervals of $p$ and ${\overline p}$ provided that the suffix intervals of factors $cp$ and $\overline{cp}$ are known\footnote{Throughout this paper reverse strings are overscored. Thus $\overline{p}$ and $\overline{pc}$ are reverse strings of $p$ and $pc$ respectively.}. It was demonstrated~\cite{SchnattingerOG12,BelazzouguiCKM13} that both operations can be supported by answering $O(1)$ rank queries on $B$ and ${\overline B}$.
Belazzougui~\cite{Belaz14} proposed the following algorithm for successively computing $\ell_0$, $\ell_1$, $\ldots$, $\ell_{n-1}$. Suppose that $\ell_{i-1}$ is already known. We already know the rank $r_{i-1}$ of $T[i-1..]$, the interval of $T[i-1..i+\ell_{i-1}-1]$ in $B$, and the interval of $\overline{T[i-1..i+\ell_{i-1}-1]}$ in ${\overline B}$. We compute the rank $r_i$ of $T[i.. ]$. If $r_{i-1}$ is known, we can compute $r_i$ in $O(1)$ time by answering one select query on $B$; see Section~\ref{sec:prelim}. Then we find the interval $[r_s,r_e]$ of $T[i..i+\ell_{i-1}-1]$ in $B$ and the interval $[r'_s,r'_e]$ of $\overline{T[i..i+\ell_{i-1}-1]}$ in ${\overline B}$. These two intervals can be computed by $\mathtt{contractleft}$. In the special case when $i=0$ or $\ell_{i-1}=0$, we set $[r_s,r_e]=[r'_s,r'_e]=[0,n-1]$. Then for $j=1,2,\ldots $ we find the intervals for $T[i..i+(\ell_{i-1}-1)+j]$ and $\overline{T[i..i+(\ell_{i-1}-1)+j]}$. Every following pair of intervals is found by operation $\mathtt{extendright}$. We stop when the interval of $T[i..i+\ell_{i-1}-1+j]$ is $[r_{s,j},r_{e,j}]$ such that $r_{s,j}=r_i$. For all $j'$, such that $0\le j'<j$, we have $r_{s,j'}<r_i$. It can be shown that $\ell_i=\ell_{i-1}+j-1$; see the proof of \cite[Lemma 2]{Belaz14}. Once $\ell_i$ is computed, we increment $i$ and find the next $\ell_i$ in the same way. All $\ell_i$ are computed by $O(n)$ $\mathtt{contractleft}$ and $\mathtt{extendright}$ operations.
\paragraph{Implementing $\mathtt{contractleft}$ and $\mathtt{extendright}$.} We create the succinct representation of the suffix tree topology both for $T$ and ${\overline T}$; they will be denoted by ${\cal T}$ and ${\overline {\cal T}}$ respectively. We keep both $B$ and ${\overline B}$ in the data structure that supports access in $O(1)$ time. We also store $B$ in the data structure that answers $\idrm{select}$ queries in $O(1)$ time. The array $\mathit{Acc}$ keeps information about accumulated frequencies of symbols: $\mathit{Acc}[i]$ is the number of occurrences of all symbols $a\leq i-1$ in $B$. Operation $\mathtt{contractleft}$ is implemented as follows. Suppose that we know the interval $[i,j]$ for a factor $cp$ and the interval $[i',j']$ for the factor $\overline{cp}$. We can compute the interval $[i_1,j_1]$ of $p$ by finding $l=\idrm{select}_c(i-\mathit{Acc}[c],B)$ and $r=\idrm{select}_c(j-\mathit{Acc}[c],B)$. Then we find the lowest common ancestor $x$ of leaves $l$ and $r$ in the suffix tree ${\cal T}$. We set $i_1=\mathtt{leftmost\_leaf}(x)$ and $j_1=\mathtt{rightmost\_leaf}(x)$. Then we consider the number of distinct symbols in $B[i_1..j_1]$. If $c$ is the only symbol that occurs in $B[i_1..j_1]$, then all factors $p$ in $T$ are preceded by $c$. Hence all factors $\overline{p}$ in ${\overline T}$ are followed by $c$ and $[i'_1,j'_1]=[i',j']$. Otherwise we find the lowest common ancestor $y$ of leaves $i'$ and $j'$ in ${\overline {\cal T}}$. Then we identify $y'=\mathtt{parent}(y)$ in ${\overline {\cal T}}$ and let $i'_1=\mathtt{leftmost\_leaf}(y')$ and $j'_1=\mathtt{rightmost\_leaf}(y')$. Thus $\mathtt{contractleft}$ can be supported in $O(1)$ time.
Now we consider the operation $\mathtt{extendright}$. Suppose that $[i,j]$ and $[i',j']$ are intervals of $p$ and $\overline{p}$ in $B$ and ${\overline B}$ respectively. We compute the interval of $\overline{pc}$ by using the standard BWT machinery. Let $i'_1=\idrm{rank}_c(i'-1,{\overline B})+\mathit{Acc}[c]$ and $j'_1=\idrm{rank}_c(j',{\overline B})+\mathit{Acc}[c]-1$. We check whether $c$ is the only symbol in ${\overline B}[i'..j']$. If this is the case, then all occurrences of ${\overline p}$ in ${\overline T}$ are preceded by $c$ and all occurrences of $p$ in $T$ are followed by $c$. Hence the interval of $pc$ in $B$ is $[i_1,j_1]=[i,j]$. Otherwise there is at least one other symbol besides $c$ that can follow $p$. Let $x$ denote the lowest common ancestor of leaves $i$ and $j$. If $y$ is the child of $x$ that is labeled with $c$, then the interval of $pc$ is $[i_1,j_1]$ where $i_1=\mathtt{leftmost\_leaf}(y)$ and $j_1=\mathtt{rightmost\_leaf}(y)$.
We can find the child $y$ of $x$ that is labeled with $c$ by answering rank and select queries on two additional sequences, $L$ and $D$. The sequence $L$ contains labels of children for all nodes of ${\cal T}$; labels are ordered by nodes and labels of the same node are ordered lexicographically. We encode the degrees of all nodes in a sequence $D=1^{d_1}01^{d_2}0\ldots 1^{d_n}$, where $d_i$ is the degree of the $i$-th node.
We compute $v=\idrm{select}_0(x,D)-x$, $p_1=\idrm{rank}_{c}(v,L)$, $p_2=\idrm{select}_c(p_1+1,L)$, and $j=p_2-v$. Then $y$ is the $j$-th child of $x$. The bottleneck of $\mathtt{extendright}$ is the computation of $p_1$, $i_1'$, and $j_1'$ because we need $\Omega(\log\frac{\log \sigma}{\log\log n})$ time to answer a rank query on $L$ (resp.\ on ${\overline B}$); all other calculations can be executed in $O(1)$ time.
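The interval computation $i'_1=\idrm{rank}_c(i'-1,{\overline B})+\mathit{Acc}[c]$ and $j'_1=\idrm{rank}_c(j',{\overline B})+\mathit{Acc}[c]-1$ is the standard backward-search step on a BWT; a minimal sketch with naive ranks (function name ours):

```python
def backward_step(bwt, Acc, lo, hi, c):
    """One backward-search step on a BWT: given the suffix interval
    [lo, hi] of a pattern q, return the interval of c+q, using
    Acc[c] + rank_c(lo-1, bwt) and Acc[c] + rank_c(hi, bwt) - 1,
    where rank_c(i, bwt) counts occurrences of c in bwt[0..i]."""
    def rank(c, i):
        return sum(1 for x in bwt[:i + 1] if x == c)   # naive O(n) rank
    return Acc[c] + rank(c, lo - 1), Acc[c] + rank(c, hi) - 1
```

For the BWT annb\$aa of banana\$, extending the interval $[1,3]$ of the pattern a by the symbol n on the left gives the interval $(5, 6)$, i.e., the two suffixes starting with na.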
\paragraph{Our Approach.} Our algorithm follows the technique of~\cite{Belaz14} that relies on operations $\mathtt{extendright}$ and $\mathtt{contractleft}$ for building the PLCP. We implement these two operations as described above; hence we will have to perform $\Theta(n)$ $\idrm{rank}$ queries on sequences $L$ and ${\overline B}$. Our method creates large batches of queries; each query in a batch is answered in $O(1)$ time using Theorem~\ref{theor:batchstat}.
During the pre-processing stage we create the machinery for supporting operations $\mathtt{extendright}$ and $\mathtt{contractleft}$. We compute the BWT $B$ of $T$ and the BWT ${\overline B}$ for the reverse text ${\overline T}$. We also construct the suffix tree topologies ${\cal T}$ and ${\overline {\cal T}}$. When $B$ is constructed, we record the positions in $B$ that correspond to suffixes $T[i\cdot\Delta'..]$ for $i=0,\ldots, \floor{n/\Delta'}$. PLCP construction is divided into three stages: first we compute the values of $\ell_i$ for selected evenly spaced indices $i$, $i=j\cdot \Delta'$ and $j=0,1$,$\ldots$,$\floor{n/\Delta'}$. We use a slow algorithm for computing lengths that takes $O(\Delta')$ extra time for every $\ell_i$. During the second stage we compute all remaining values of $\ell_i$. We use the method from~\cite{Belaz14} during Stage 2. The key to a fast implementation is ``parallel'' computation. We divide all lengths into groups and assign each group of lengths to a \emph{job}. At any time we process a list containing at least $2n/\log^2 n$ jobs. We answer $\idrm{rank}$ queries in batches: when a job $J_i$ must answer a slow $\idrm{rank}$ query on $L$ or ${\overline B}$, we pause $J_i$ and add the rank query to the corresponding pool of queries. When a pool of queries on $L$ or the pool of queries on ${\overline B}$ contains $n/\log^2 n$ items, we answer the batch of queries in $O(n/\log^2 n)$ time. The third stage starts when the number of jobs becomes smaller than $2n/\log^2n$. All lengths that were not computed earlier are computed during Stage 3 using the slow algorithm. Stage 2 can be executed in $O(n)$ time because $\idrm{rank}$ queries are answered in $O(1)$ time per query. Since the number of lengths that we compute during the first and the third stages is small, Stage 1 and Stage 3 also take time $O(n)$. A more detailed description follows.
\paragraph{Stage 1.} Our algorithm starts by computing $\ell_i$ for $i=j\cdot\Delta'$ and $j=0,1,\ldots, \floor{n/\Delta'}$. Let $j=0$ and $f=j\Delta'$. We already know the rank $r_f$ of $S_f=T[j\Delta'..]$ in $B$ ($r_f$ was computed and recorded when $B$ was constructed). We can also find the starting position $f'$ of the suffix $S'$ of rank $r_f-1$, $S'=T[f'..]$. Since $f'$ can be found by employing the function LF at most $\Delta'$ times, we can compute $f'$ in $O(\Delta')$ time; see Section~\ref{sec:prelim}\footnote{A faster computation is possible, but we do not need it here.}. When $f$ and $f'$ are known, we scan $T[f..]$ and $T[f'..]$ until the first symbol $T[f+p_f]\not=T[f'+p_f]$ is found. By definition, $\ell_0=p_f$. Suppose that $\ell_{s\Delta'}$ for $s=0$, $\ldots$, $j-1$ are already computed and we have to compute $\ell_f$ for $f=j\Delta'$ and some $j\ge 1$. We already know the rank $r_f$ of suffix $T[f..]$. We find $f'$ such that the suffix $T[f'..]$ is of rank $r_f-1$ in time $O(\Delta')$. We showed above that $\ell_f\ge \ell_{(j-1)\Delta'}-\Delta'$. Hence the first $o_f$ symbols in $T[f..]$ and $T[f'..]$ are equal, where $o_f=\max(0,\ell_{(j-1)\Delta'}-\Delta')$. We scan $T[f+o_f..]$ and $T[f'+o_f..]$ until the first symbol $T[f+o_f+p_f]\not=T[f'+o_f+p_f]$ is found. By definition, $\ell_f=o_f+p_f$. Hence we compute $\ell_f$ in $O(\Delta'+ p_f)$ time for $f=j\Delta'$ and $j=1$, $\ldots$, $\floor{n/\Delta'}$. It can be shown that $\sum_f p_f=O(n)$. Hence the total time needed to compute all selected $\ell_f$ is $O((n/\Delta')\Delta'+ \sum_f p_f)=O(n)$. For every $f=j\Delta'$ we also compute the interval of $T[j\Delta'..j\Delta'+\ell_f]$ in $B$ and the interval of $\overline{T[j\Delta'..j\Delta'+\ell_{f}]}$ in ${\overline B}$. We show in Section~\ref{sec:intervals} that all needed intervals can be computed in $O(n)$ time.
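Stage 1 can be sketched as follows (toy code of ours: the predecessor suffix is located via a full suffix sort instead of the $O(\Delta')$ LF-walk, but the skipping of $o_f=\max(0,\ell_{f-\Delta'}-\Delta')$ known-equal symbols is as described):

```python
def sampled_plcp(t, dprime):
    """Compute l_f only for f = 0, dprime, 2*dprime, ..., using the bound
    l_f >= l_{f-dprime} - dprime to skip known-equal symbols."""
    n = len(t)
    sa = sorted(range(n), key=lambda k: t[k:])    # brute-force suffix array
    rank = [0] * n
    for r, k in enumerate(sa):
        rank[k] = r
    ell = {}
    prev = 0                                      # l at the previous sample
    for f in range(0, n, dprime):
        if rank[f] == 0:                          # lexicographically smallest suffix
            ell[f], prev = 0, 0
            continue
        fp = sa[rank[f] - 1]                      # start of the predecessor suffix
        l = max(0, prev - dprime)                 # first l symbols already match
        while f + l < n and fp + l < n and t[f + l] == t[fp + l]:
            l += 1
        ell[f], prev = l, l
    return ell
```

For example, sampled_plcp("banana\$", 2) returns the PLCP values at the even positions, {0: 0, 2: 2, 4: 0, 6: 0}.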
\paragraph{Stage 2.} We divide $\ell_i$ into groups of size $\Delta'-1$ and compute the values of $\ell_k$ in every group using a \emph{job}. The $i$-th group contains lengths $\ell_{k+1}$, $\ell_{k+2}$, $\ldots$, $\ell_{k+\Delta'-1}$ for $k=i\Delta'$ and $i=0,1,\ldots$. All $\ell_k$ in the $i$-th group will be computed by the $i$-th \emph{job} $J_i$. Every $J_i$ is either active or paused. Thus originally we start with a list of $n/\Delta'$ jobs and all of them are active. All active jobs are executed at the same time. That is, we scan the list of active jobs, spend $O(1)$ time on every active job, and then move on to the next job. When a job must answer a rank query, we pause it and insert the query into a \emph{query list}. There are two query lists: $Q_l$ contains rank queries on the sequence $L$ and $Q_b$ contains rank queries on ${\overline B}$. When $Q_l$ or $Q_b$ contains $n/\log^2 n$ queries, we answer all queries in $Q_l$ (resp.\ in $Q_b$). The batch of queries is answered using Theorem~\ref{theor:batchstat}, so that every query is answered in $O(1)$ time. Answers to queries are returned to jobs, corresponding jobs are re-activated, and we continue scanning the list of active jobs. When all $\ell_k$ for $i\Delta' \le k <(i+1)\Delta'$ are computed, the $i$-th job is finished; we remove this job from the pool of jobs and decrement by $1$ the number of jobs. See Fig.~\ref{fig:jobs-ex}. \begin{figure}
\caption{Computing lengths during Stage 2. Groups corresponding to paused jobs are shown shaded by slanted lines. Only selected groups are shown. The $i$-th job $J_i$ is paused because we have to answer a $\idrm{rank}$ query on ${\overline B}$; the job $J_1$ is paused because we have to answer a $\idrm{rank}$ query on $L$. When $Q_l$ or $Q_b$ contains $n/\log^2 n$ queries, we answer a batch of $\idrm{rank}$ queries contained in $Q_l$ or $Q_b$.}
\label{fig:jobs-ex}
\end{figure}
Every job $J_i$ computes $\ell_{k+1}$, $\ell_{k+2}$, $\ldots$, $\ell_{k+\Delta'-1}$ for $k=i\Delta'$ using the algorithm of Belazzougui~\cite{Belaz14}. When the interval of $T[k..k+\ell_k]$ in $B$ and the interval of $\overline{T[k..k+\ell_k]}$ in ${\overline B}$ are known, we compute $\ell_{k+1}$. The procedure for computing $\ell_{k+1}$ must execute one operation $\mathtt{contractleft}$ and $\ell_{k+1}-\ell_{k}+1$ operations $\mathtt{extendright}$. Operations $\mathtt{contractleft}$ and $\mathtt{extendright}$ are implemented as described above. We must answer two rank queries on ${\overline B}$ and one rank query on $L$ for every $\mathtt{extendright}$. Ignoring the time for these three rank queries, $\mathtt{extendright}$ takes constant time. Rank queries on ${\overline B}$ and $L$ are answered in batches, so that each $\idrm{rank}$ query takes $O(1)$ time. Hence every operation $\mathtt{extendright}$ needs $O(1)$ time. The job $J_i$ needs $O(\ell_{i\Delta'+j}-\ell_{i\Delta'}+j)$ time to compute $\ell_{i\Delta'+1}$, $\ell_{i\Delta'+2}$, $\ldots$, $\ell_{i\Delta'+j}$. All $J_i$ are executed in $O(n)$ time.
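The pause/resume discipline of Stage 2 can be modeled with coroutines. The toy below illustrates only the scheduling mechanics, not the time bounds: jobs yield their $\idrm{rank}$ queries, queries are collected and answered in batches, and jobs resume with the answers. The naive rank oracle and the batch size are illustrative stand-ins for the batched structure of Theorem~\ref{theor:batchstat}.

```python
# Toy model of the Stage-2 scheduling: jobs are coroutines that yield
# rank queries; a job stays paused until its query is answered as part
# of a batch. Here a job simply sums the answers to its queries.

def make_job(positions, symbol, results):
    def job():
        total = 0
        for p in positions:
            total += (yield (symbol, p))   # pause on a rank query
        results.append(total)
    return job()

def naive_rank(seq, symbol, p):
    return sum(1 for c in seq[:p + 1] if c == symbol)

def run_jobs(seq, jobs, batch_size):
    active = [(j, next(j)) for j in jobs]  # start every job
    while active:
        batch, rest = active[:batch_size], active[batch_size:]
        answers = [naive_rank(seq, s, p) for (_, (s, p)) in batch]
        nxt = []
        for (j, _), a in zip(batch, answers):
            try:
                nxt.append((j, j.send(a)))  # resume with the answer
            except StopIteration:
                pass                        # job finished, drop it
        active = nxt + rest
```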
\paragraph{Stage 3.} ``Parallel processing'' of jobs terminates when the number of jobs in the pool becomes smaller than $2n/\log^2n$. Since every job computes $\Delta'$ values of $\ell_i$, there are at most $2n(\log\log\sigma/(\log n\log\sigma))< 2n/\log n$ unknown values of $\ell_i$ at this point. We then switch to the method of Stage 1 to compute the values of unknown $\ell_i$. All remaining $\ell_i$ are sorted by $i$ and processed in order of increasing $i$. For every unknown $\ell_i$ we compute the rank $r$ of $T[i .. ]$ in $B$. For the suffix $S'$ of rank $r-1$ we find its starting position $f'$ in $T$, $S'=T[f'..]$. Then we scan $T[f'+\ell_{i-1}-1..]$ and $T[i+\ell_{i-1}-1..]$ until the first symbol $T[f'+\ell_{i-1}-1+j]\not=T[i+\ell_{i-1}-1+j]$ is found. We set $\ell_i=\ell_{i-1}-1+j$ and continue with the next unknown $\ell_i$. We spend $O(\Delta'+\ell_i)$ additional time for every remaining $\ell_i$; hence the total time needed to compute all $\ell_i$ is $O(n+(n/\log n)\Delta')=O(n)$.
Every job during Stage 2 uses $O(\log n)$ bits of workspace. The total number of jobs in the job list does not exceed $n/\Delta'$. The total number of queries stored at any time in lists $Q_l$ and $Q_b$ does not exceed $n/\log^2n$. Hence our algorithm uses $O(n\log \sigma)$ bits of workspace. \begin{lemma} \label{lemma:permlcp}
If the BWT of a string $T$ and the suffix tree topology for $T$ are already known, then we can compute the permuted LCP array in $O(n)$ time and $O(n\log\sigma)$ bits. \end{lemma}
\section{Conclusions} \label{sec:concl} We have shown that the Burrows-Wheeler Transform (BWT), the Compressed Suffix Array (CSA), and the Compressed Suffix Tree (CST) can be built in deterministic $O(n)$ time by an algorithm that requires $O(n\log\sigma)$ bits of working space. Belazzougui independently developed an alternative solution, which builds, within the same time and space, the simpler part of our structures, that is, the BWT and the CSA, but not the CST. His solution, which uses different techniques, is described in the updated version of his arXiv report~\cite{Belaz14arx} that extends his conference paper~\cite{Belaz14}.
Our results have many interesting applications. For example, we can now construct an FM-index \cite{FerraginaM05,FMMN07} in $O(n)$ deterministic time using $O(n\log\sigma)$ bits. Previous results need $O(n\log\log \sigma)$ time or rely on randomization~\cite{HonSS09,Belaz14}. Furthermore Theorem~\ref{theor:partrank} enables us to support the function LF in $O(1)$ time on an FM-index. In Section~\ref{sec:index} we describe a new index based on these ideas.
Another application is that we can now compute the Lempel-Ziv 77 and 78 parsings \cite{LZ76,ZL77,ZL78} of a string $T[0..n-1]$ in deterministic linear time using $O(n\log\sigma)$ bits: K{\"o}ppl and Sadakane \cite{KS16} recently showed that, if one has a compressed suffix tree on $T$, then only $O(n)$ additional deterministic time and $O(z\log n)$ bits are needed to produce the parsing, where $z$ is the resulting number of phrases. Since $z \le n/\log_\sigma n$, the space is $O(n\log\sigma)$ bits. With the suffix tree, their method needs to compute any $\Psi(i)$ in constant time and to move in constant time from a suffix tree node to its $i$-th child. The former is easily supported as the inverse of the LF function using constant-time select queries on $B$ \cite{GolynskiMR06}; the latter is also easily obtained with current topology representations using parentheses \cite{NS14}.
Yet another immediate application of our algorithm is index data structures for dynamic document collections. If we use our compressed index, described in Section~\ref{sec:index}, and apply Transformation 2 from~\cite{MunroNV15}, then we obtain an index data structure for a dynamic collection of documents that uses $nH_k + o(n\log\sigma)+O(n\frac{\log n}{s})$ bits, where $H_k$ is the $k$-th order entropy and $s$ is a parameter. This index can count how many times a query pattern $P$ occurs in a collection in $O(|P|\log\log n + \log\log \sigma\log\log n)$ time; every occurrence can then be reported in time $O(s)$. An insertion or a deletion of some document $T_u$ is supported in $O(|T_u|\log^{\varepsilon}n)$ and $O(|T_u|(\log^{\varepsilon}n+s))$ deterministic time, respectively.
We believe that our technique can also improve upon some of the recently presented results on bidirectional FM-indices~\cite{SchnattingerOG12,BelazzouguiCKM13} and other scenarios where compressed suffix trees are used~\cite{BelazzouguiCKM16}. \paragraph{Acknowledgment.} The authors wish to thank an anonymous reviewer of this paper for careful reading and helpful comments.
\appendix \renewcommand\thesection{A.\arabic{section}}
\section{Preliminaries} \label{sec:prelim} \paragraph{Rank and Select Queries} The following two kinds of queries play a crucial role in compressed indexes and other succinct data structures. Consider a sequence $B[0..n-1]$ of symbols over an alphabet of size $\sigma$. The rank query $\idrm{rank}_a(i,B)$ counts how many times $a$ occurs among the first $i+1$ symbols
in $B$, $\idrm{rank}_a(i,B)=|\{\,j\,|\, B[j]=a \text{ and } 0\le j\le i\,\}|$. The select query $\idrm{select}_a(i,B)$ finds the position in $B$ where $a$ occurs for the $i$-th time, $\idrm{select}_a(i,B)=j$ where $j$ is such that $B[j]=a$ and $\idrm{rank}_a(j,B)=i$. The third kind of query is the access query, $\idrm{access}(i,B)$, which returns the $(i+1)$-th symbol in $B$, $B[i]$. If insertions and deletions of symbols in $B$ must be supported, then $\idrm{rank}$ and $\idrm{select}$ queries require $\Omega(\log n/\log\log n)$ time~\cite{FS89}. If the sequence $B$ is static, then we can answer select queries in $O(1)$ time and the cost of $\idrm{rank}$ queries is reduced to $\Theta(\log\frac{\log \sigma}{\log\log n})$ \cite{BelazzouguiN15}.\footnote{If we aim to use $n\log\sigma +o(n\log\sigma)$ bits, then either $\idrm{select}$ or $\idrm{access}$ must cost $\omega(1)$. If, however, $(1+\epsilon)n\log\sigma$ bits are available, for any constant $\epsilon>0$, then we can support both queries in $O(1)$ time.} One important special case of $\idrm{rank}$ queries is the partial rank query, $\idrm{rank}_{B[i]}(i,B)$. Thus a partial rank query asks how many times $B[i]$ occurs in $B[0..i]$. Unlike general rank queries, partial rank queries can be answered in $O(1)$ time~\cite{BelazzouguiN15}. In Section~\ref{sec:partrank} we describe a data structure for partial rank queries that can be constructed in $O(n)$ deterministic time. Better results can be achieved in the special case when the alphabet size is $\sigma=\log^{O(1)}n$; in this case we can represent $B$ so that $\idrm{rank}$, $\idrm{select}$, and $\idrm{access}$ queries are answered in $O(1)$ time~\cite{FMMN07}.
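As a concrete reference for these conventions (inclusive rank, 1-based select), here are naive implementations of the three queries; succinct representations answer exactly the same queries within the time bounds cited above.

```python
# Naive reference semantics for rank, select and access on a static
# sequence B. Real succinct structures match these answers in o(n) space.

def rank(a, i, B):
    """Occurrences of a in B[0..i] (inclusive)."""
    return sum(1 for j in range(i + 1) if B[j] == a)

def select(a, i, B):
    """Position of the i-th occurrence of a (1-based), or -1 if absent."""
    seen = 0
    for j, c in enumerate(B):
        if c == a:
            seen += 1
            if seen == i:
                return j
    return -1

def access(i, B):
    """The (i+1)-th symbol of B."""
    return B[i]
```

Note the identity $\idrm{rank}_a(\idrm{select}_a(i,B),B)=i$, which the definitions above satisfy.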
\paragraph{Suffix Tree and Suffix Array.} A suffix tree for a string $T[0..n-1]$ is a compacted tree on the suffixes of $T$. The suffix array is an array $SA[0..n-1]$ such that $SA[i]=j$ if and only if $T[j..]$ is the $(i+1)$-th lexicographically smallest suffix of $T$. All occurrences of a substring $p$ in $T$ correspond to suffixes of $T$ that start with $p$; these suffixes occupy a contiguous interval in the suffix array $SA$.
\paragraph{Compressed Suffix Array.} A compressed suffix array (CSA) is a compact data structure that provides the same functionality as the suffix array. The main component of a CSA is the function $\Psi$, defined by the equality $SA[\Psi(i)]=(SA[i]+1)\!\!\mod n$. It is possible to regenerate the suffix array from $\Psi$. We refer to~\cite{NM06} and references therein for a detailed description of CSA and for trade-offs between space usage and access time. \paragraph{Burrows-Wheeler Transform and FM-index.} The Burrows-Wheeler Transform (BWT) of a string $T$ is obtained by sorting all possible rotations of $T$ and writing the last symbol of every rotation (in sorted order). The BWT is related to the suffix array as follows: $BWT[i]=T[(SA[i]-1)\!\!\mod n]$. Hence, we can build the BWT by sorting the suffixes and writing the symbols that precede the suffixes in lexicographical order. This method is used in Section~\ref{sec:lintimebwt}.
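The identity $BWT[i]=T[(SA[i]-1)\!\!\mod n]$ translates directly into code, assuming $T$ ends with a unique smallest sentinel \$ so that sorting suffixes and sorting rotations coincide:

```python
# BWT via suffix sorting, following BWT[i] = T[(SA[i]-1) mod n].
# Assumes T ends with a unique sentinel '$' smaller than every symbol.

def bwt(t):
    n = len(t)
    sa = sorted(range(n), key=lambda i: t[i:])   # toy suffix array
    return "".join(t[(sa[i] - 1) % n] for i in range(n))
```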
The FM-index uses the BWT for efficient searching in $T$. It consists of the following three main components: \begin{itemize} \item The BWT of $T$. \item The array $\mathit{Acc}[0..\sigma-1]$ where $\mathit{Acc}[i]$ holds the total number of symbols $a\leq i-1$ in $T$ (or equivalently, the total number of symbols $a\leq i-1$ in $B$). \item A sampled array $SAM_b$ for a sampling factor $b$: $SAM_b$ contains values of $SA[i]$ if and only if $SA[i]\!\!\mod b =0$ or $SA[i]=n-1$. \end{itemize}
The search for a substring $P$ of length $m$ is performed backwards: for $i=m-1,m-2,\ldots$, we identify the interval of $P[i..m-1]$ in the BWT. Let $B$ denote the BWT of $T$. Suppose that we know the interval $B[i_1..j_1]$ that corresponds to $P[i+1..m-1]$. Then the interval $B[i_2..j_2]$ that corresponds to $P[i..m-1]$ is computed as $i_2=\idrm{rank}_c(i_1-1,B)+\mathit{Acc}[c]$ and $j_2=\idrm{rank}_c(j_1,B)+\mathit{Acc}[c]-1$, where $c=P[i]$. Thus the interval of $P$ is found by answering $2m$ $\idrm{rank}$ queries. We observe that the interval of $P$ in $B$ is exactly the same as the interval of $P$ in the suffix array $SA$.
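Backward search can be written directly from the two counting formulas above. The sketch below uses a naive rank on the BWT and a brute-force suffix array, so only the search logic, not the running time, reflects an actual FM-index; it again assumes a unique sentinel \$ terminates $T$.

```python
# Counting occurrences of p in t by backward search over the BWT,
# with Acc[c] = number of symbols in t smaller than c.

def fm_count(t, p):
    n = len(t)
    sa = sorted(range(n), key=lambda i: t[i:])
    b = "".join(t[(sa[i] - 1) % n] for i in range(n))     # the BWT
    acc = {c: sum(1 for x in t if x < c) for c in set(t)}

    def rank(a, i):                                       # naive inclusive rank
        return sum(1 for j in range(i + 1) if b[j] == a)

    lo, hi = 0, n - 1
    for c in reversed(p):
        if c not in acc:
            return 0
        lo = acc[c] + (rank(c, lo - 1) if lo > 0 else 0)
        hi = acc[c] + rank(c, hi) - 1
        if lo > hi:
            return 0
    return hi - lo + 1
```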
Another important component of an FM-index is the function $LF$, defined as follows: if $SA[j]=i+1$, then $SA[LF(j)]=i$. $LF$ can be computed by answering $\idrm{rank}$ queries on $B$. Using $LF$ we can find the starting position of the $r$-th smallest suffix, $SA[r]$, in $O(b)$ applications of $LF$, where $b$ is the sampling factor; we refer to~\cite{NM06} for details. It is also possible to compute the function $\Psi$ by using $\idrm{select}$ queries on the BWT~\cite{LeeP07}. Therefore the BWT can be viewed as a variant of the CSA. Using $\Psi$ we can consecutively obtain positions of suffixes $T[i..]$ in the suffix array: Let $r_i$ denote the position of $T[i..]$ in $SA$. Since $T[n-1..]=\$$ is the smallest suffix, $r_0=\Psi(0)$. For $i\ge 1$, $r_i=\Psi(r_{i-1})$ by definition of $\Psi$. Hence we can consecutively compute each $r_i$ in $O(1)$ time if we have constant-time $\idrm{select}$ queries on the BWT.
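The interplay of $SA$, $LF$ and $\Psi$ can be checked on small inputs. Below both functions are simply tabulated from an explicit suffix array, a brute-force stand-in for the compressed representations; it makes concrete that $\Psi$ is the inverse of $LF$ and that iterating $\Psi$ from $\Psi(0)$ enumerates the ranks $r_0, r_1, \ldots$ of $T[0..], T[1..], \ldots$

```python
# Tabulate SA, LF and Psi for a sentinel-terminated string:
# LF maps the rank of T[i..] to the rank of T[i-1..]; Psi is its inverse.

def sa_lf_psi(t):
    n = len(t)
    sa = sorted(range(n), key=lambda i: t[i:])
    rank = [0] * n
    for r, i in enumerate(sa):
        rank[i] = r
    lf = [rank[(sa[r] - 1) % n] for r in range(n)]
    psi = [rank[(sa[r] + 1) % n] for r in range(n)]
    return sa, lf, psi
```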
\paragraph{Compressed Suffix Tree.} A compressed suffix tree consists of the following components: \begin{itemize} \item The compressed suffix array of $T$. We can use the FM-index as an implementation. \item The suffix tree topology. This component can be stored in $4n+o(n)$ bits~\cite{Sadakane07}. \item The permuted LCP array, or PLCP. The longest common prefix array $LCP$ is defined as follows: $LCP[r]=j$ if and only if the longest common prefix between the suffixes of rank $r$ and $r-1$ is of length $j$. The permuted LCP array is defined as follows: $PLCP[i]=j$ if and only if the rank of $T[i..]$ is $r$ and $LCP[r]=j$. A careful implementation of $PLCP$ occupies $2n+o(n)$ bits~\cite{Sadakane07}. \end{itemize}
\section{Monotone List Labelling with Batched Updates} \label{sec:listlabel}
A direct attempt to dynamize the data structure of Section~\ref{sec:batchrank} encounters one significant difficulty. An insertion of a new symbol $a$ into a chunk $C$ changes the positions of all the symbols that follow it. Since symbols are stored in pairs $(a_j,i)$ grouped by symbol, even a single insertion into $C$ can lead to a linear number of updates. Thus it appears that we cannot support the batch of updates on $C$ in less than $\Theta(|C|)$ time. In order to overcome this difficulty we employ a monotone labeling method and assign labels to positions of symbols. Every position $i$ in the chunk is assigned an integer label $\idrm{lab}(i)$ satisfying $0\le\idrm{lab}(i)\le \sigma\cdot n^{O(1)}$ and $\idrm{lab}(i_1)< \idrm{lab}(i_2)$ if and only if $i_1< i_2$. Instead of pairs $(a,i)$ the sequence $R$ will contain pairs $(a,\idrm{lab}(i))$.
When a new element is inserted, we have to change the labels of some other elements in order to maintain the monotonicity of the labeling. Existing labeling schemes~\cite{Willard92,BCDFCZ02,DS87} require $O(\log^2 n)$ or $O(\log n)$ changes of labels after every insertion. In our case, however, we have to process large batches of insertions. We can also assume that at most $\log n$ batches need to be processed. In our scenario $O(1)$ amortized modifications per insertion can be achieved, as shown below.
In this section we denote by $C$ an ordered set that contains between $\sigma$ and $2\sigma$ elements. Let $x_1\le x_2\le\ldots\le x_t$ denote the elements of $C$. Initially we assign the label $\idrm{lab}(x_i)=i\cdot d$ to the $i$-th smallest element $x_i$, where $d=4n$. We associate an interval $[\idrm{lab}(x_i),\idrm{lab}(x_{i+1})-1]$ with $x_i$. Thus initially the interval of $x_i$ is $[id,(i+1)d-1]$. We assume that $C$ also contains a dummy element $x_0=-\infty$ and $\idrm{lab}(-\infty)=0$. Thus all labels are non-negative integers bounded by $O(\sigma\cdot n)$.
Suppose that the $k$-th batch of insertions consists of $m$ new elements $y_1\le y_2\le\ldots\le y_m$. Since at most $\log n$ batches of insertions must be supported, $1\le k\le\log n$. We say that an element $y_j$ is in an interval $I=[\idrm{lab}(x_s),\idrm{lab}(x_e)]$ if $x_s< y_j < x_e$. We denote by $new(I)$ the number of inserted elements in $I$. The parameter $\rho(I)$ for an interval $I$ is defined as the ratio of old to new elements in $I=[\idrm{lab}(x_s),\idrm{lab}(x_e)]$, $\rho(I)=\frac{e-s+1}{new(I)}$. We identify the set of non-overlapping intervals $I_1$, $\ldots$, $I_r$ such that every new element $y_t$ is in some interval $I_j$, and $1 \le \rho(I_j) \le 2$ for all $j$, $1\le j\le r$. (This is always possible if $m \le |C|$; otherwise we simply merge the insertions with $C$ in $O(|C|+m)=O(m)$ time and reassign all the labels from scratch.) We can find $I_1$, $\ldots$, $I_r$ in $O(m)$ time. For every $I_j$, $1\le j\le r$, we evenly distribute the labels of old and new elements in the interval $I'_j\subseteq I_j$. Suppose that $f$ new elements $y_p$, $\ldots$, $y_{p+f-1}$ are inserted into interval $I_j=[\idrm{lab}(x_s),\idrm{lab}(x_e)]$ so that now there are $v=f+(e-s)+1$ elements in this interval. We assign the label $\idrm{lab}(x_s)+ d_j\cdot(i-1)$ to the $i$-th smallest element in $I_j$ where $d_j=\frac{\idrm{lab}(x_e)-\idrm{lab}(x_s)}{v-1}$. By our choice of $I_j$, $f\le e-s+1$ and the number of elements in $I_j$ has at most doubled. Hence the minimal distance between two consecutive labels does not decrease by more than a factor of $2$ after insertion of new elements into $I_j$. We inserted $f$ new elements into $I_j$ and changed the labels of at most $2f$ old elements. Hence the amortized number of labels that we must change after every insertion is $O(1)$. The initial distance between labels is $d=4n$ and this distance shrinks by at most a factor of two after every batch of insertions.
Hence the distance between consecutive labels is an integer larger than 2 during the first $\log n$ batches.
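The per-interval redistribution can be sketched as follows. This is a simplified greedy variant for illustration, not the exact scheme of the text: each run of new elements sharing the same old predecessor is spread, together with as many following old elements, evenly over the affected label range. It assumes new elements fall strictly between the first and last old elements and that labels are still well spaced, as guaranteed during the first $\log n$ batches.

```python
import bisect

def insert_batch(pairs, new_elems):
    """pairs: sorted list of [element, label]; new_elems: sorted elements
    to insert. Returns (merged sorted list, number of relabeled old
    elements). Greedy choice of intervals keeps 1 <= rho(I_j) <= 2."""
    elems = [e for e, _ in pairs]
    out = [list(p) for p in pairs]
    i = relabeled = 0
    while i < len(new_elems):
        s = bisect.bisect_right(elems, new_elems[i]) - 1    # old predecessor
        j = i
        while j < len(new_elems) and bisect.bisect_right(elems, new_elems[j]) - 1 == s:
            j += 1
        f = j - i                                           # run length
        e = min(s + f, len(out) - 1)                        # interval end
        lab_s, lab_e = out[s][1], out[e][1]
        merged = sorted([out[k][0] for k in range(s, e + 1)] + new_elems[i:j])
        d = (lab_e - lab_s) // (len(merged) - 1)            # even spacing
        spread = [[x, lab_s + d * k] for k, x in enumerate(merged)]
        relabeled += e - s + 1
        out = out[:s] + spread + out[e + 1:]
        elems = [x for x, _ in out]
        i = j
    return out, relabeled
```

Inserting a run of $f$ elements relabels at most $f+1\le 2f$ old elements, matching the amortized $O(1)$ bound above.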
One remaining problem with our scheme is the large range of the labels. Since labels are integers bounded by $4|C|n$, we need $\Theta(\log\sigma+\log n)$ bits per label. To solve this problem, we will split the chunk $C$ into blocks and assign the same label to all the symbols in a block. A label assigned to the symbols in a block will be stored only once. Details are provided in Section~\ref{sec:batchdynseq}.
\section{Batched Rank Queries and Insertions on a Sequence} \label{sec:batchdynseq}
In this section we describe a dynamic data structure that supports both batches of rank queries and batches of insertions. First we describe how queries and updates on a chunk $C$ are supported.
The linked list $L$ contains all the symbols of $C$ in the same order as they appear in $C$. Each node of $L$ stores a block of $\Theta(\log_{\sigma}n)$ symbols, containing at most $(1/4)\log_{\sigma}n$ of them. We will identify list nodes with the blocks they contain; however, the node storing block $b$ also stores the total number of symbols in all preceding blocks and a label $\idrm{lab}(b)$ for the block. Labels are assigned to blocks with the method described in Section~\ref{sec:listlabel}. The pointer to (the list node containing) block $b$ will be called $p_b$; these pointers use $O(\log\sigma)$ bits.
We also maintain a data structure that can answer rank queries on any block. The data structure for a block supports queries and insertions in $O(1)$ time using a look-up table: Since $\sigma\le n^{1/4}$ and the block size is $(1/4)\log_{\sigma}n$, we can keep pre-computed answers to all rank queries for all possible blocks in a table $Tbl[0..n^{1/4}-1][0..n^{1/4}-1][0..\log_\sigma n-1]$. The entry $Tbl[b][a][i]$ contains the answer to the query $\idrm{rank}_a(i,b)$ on a block $b$. $Tbl$ contains $O(n^{1/2}\log_{\sigma}n)=o(n)$ entries and can be constructed in $o(n)$ time. Updates can be supported by a similar look-up table or by bit operations on the block $b$.
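The universal-table idea behind $Tbl$ can be shown at toy scale: precompute the answer to every rank query over every possible block of a tiny alphabet, then answer block-local rank queries by pure table lookup. The dictionary below plays the role of the three-dimensional array $Tbl$.

```python
from itertools import product

# Precompute rank answers for all sigma^blen possible blocks:
# tbl[(block, a, i)] = number of occurrences of a in block[0..i].

def build_tbl(sigma, blen):
    tbl = {}
    for block in product(range(sigma), repeat=blen):
        for a in range(sigma):
            cnt = 0
            for i, c in enumerate(block):
                cnt += (c == a)
                tbl[(block, a, i)] = cnt
    return tbl
```

With $\sigma=2$ and block length $3$ the table has $2^3\cdot 2\cdot 3=48$ entries; in the paper's setting the analogous count is $o(n)$.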
We also use sequences $R$ and $R'$, defined in Section~\ref{sec:batchrank}, but we make the following modifications. For every occurrence $C[i]=a$ of a symbol $a$ in $C$, the sequence $R$ contains pair $(a,p_b)$, where $p_b$ is a pointer to the block $b$ of $L$ that contains $C[i]$. Pairs are sorted by symbol in increasing order, and pairs with the same symbol are sorted by their position in $C$. Unlike in Section~\ref{sec:batchrank}, the chunk $C$ can be updated and we cannot maintain the exact position $i$ of $C[i]$ for all symbols in $C$; we only maintain the pointers $p_b$ in the pairs $(a,p_b)\in R$.
Note that we cannot use block pointers for searching in $L$ (or in $C$). Instead, block labels are monotonically increasing: $\idrm{lab}(b_1)< \idrm{lab} (b_2)$ if the block $b_2$ follows $b_1$ in $L$. Hence block labels will be used for searching and answering rank queries. Block labels $\idrm{lab}(b)$ use $\Theta(\log n)$ bits of space, so we store them only once with the list nodes $b$ and access them via the pointers $p_b$.
Groups $H_{a,j}$ are defined as in Section~\ref{sec:batchrank}; each $H_{a,j}$ contains all the pairs of $R$ that are between two consecutive elements of $R'_a$ for some $a$. The data structure $D_{a,j}$ that permits searching in $H_{a,j}$ is defined as follows. Suppose that $H_{a,j}$ contains pairs $(a,p_{b_1})$, $\ldots$, $(a,p_{b_f})$. We then keep a Succinct SB-tree data structure~\cite{GrossiORR09} on $\idrm{lab}(b_1)$, $\ldots$, $\idrm{lab}(b_f)$. This data structure requires $O(\log\log n)$ additional bits per label. For any integer $q$, it can find the largest block label $\idrm{lab}(b_i) < q$ in $O(1)$ time or count the number of blocks $b_i$ such that $\idrm{lab}(b_i) < q$ in $O(1)$ time (because our sets $H_{a,r}$ contain a logarithmic number of elements). The search procedure needs to access one block label, which we read from the corresponding block pointer.
\paragraph{Queries.} Suppose that we want to answer queries $\idrm{rank}_{a_1}(i_1,C)$, $\idrm{rank}_{a_2}(i_2,C)$, $\ldots$, $\idrm{rank}_{a_t}(i_t,C)$ on a chunk $C$. We traverse all the blocks of $L$ and find for every $i_j$ the label $l_j$ of the block $b_j$ that contains the $i_j$-th symbol, $l_j=\idrm{lab}(b_j)$. We also compute $r_{j,1}=\idrm{rank}_{a_j}(i'_j,b_j)$ using $Tbl$, where $i'_j$ is the relative position of the $i_j$-th symbol in $b_j$. Since we know the total number of symbols in all the blocks that precede $b_j$, we can compute $i'_j$ in $O(1)$ time.
We then represent the queries by pairs $(a_j,l_j)$ and sort these pairs stably in increasing order of $a_j$. Then we traverse the list of query pairs $(a_j,l_j)$ and the sequence $R'$. For every query $(a_j,l_j)$ we find the rightmost pair $(a_j,p_j)\in R'$ satisfying $\idrm{lab}(p_j)\le l_j$. Let $r_{j,2}$ denote the rank of $(a_j,p_j)$ in $R_{a_j}$, i.e., the number of pairs $(a_j,i)\in R$ preceding $(a_j,p_j)$. We keep this information for every pair in $R'$ using $O(\log\sigma)$ additional bits. Then we use the succinct SB-tree $D_{a_j,p_j}$, which contains information about the pairs in $H_{a_j,p_j}$ (i.e., the pairs in the group starting with $(a_j,p_j)$). The structure finds in constant time the largest $\idrm{lab}(b_g)\in D_{a_j,p_j}$ such that $\idrm{lab}(b_g)< l_j$, as well as the number $r_{j,3}$ of pairs from the beginning of $H_{a_j,p_j}$ up to the pair with label $\idrm{lab}(b_g)$. The answer to the $j$-th rank query is then $\idrm{rank}_{a_j}(i_j,C)=r_{j,1}+r_{j,2}+r_{j,3}$.
The total query time is then $O(\sigma/\log_\sigma n + t)$.
\paragraph{Insertions.} Suppose that symbols $a_1$, $\ldots$, $a_t$ are to be inserted at positions $i_1$, $\ldots$, $i_t$, respectively. We traverse the list $L$ and identify the nodes where new symbols must be inserted. We simultaneously update the information about the number of preceding elements, for all nodes. All this is done in time $O(\sigma/\log_\sigma n + t)$. We also perform the insertions into the blocks. If, as a result, some block contains more than $(1/4)\log_{\sigma} n$ symbols, we split it into an appropriate number of blocks, so that each block contains $\Theta(\log_{\sigma}n)$ symbols, but no more than $(1/4)\log_{\sigma}n$. Nodes for the new blocks are allocated\footnote{Constant-time allocation is possible because we use fixed-size nodes, leaving the maximum possible space, $(1/4)\log n$ bits, for the block contents.}, linked to the list $L$, and assigned appropriate labels using the method described in Section~\ref{sec:listlabel}. After $t$ insertions, we create at most $O(t/\log_{\sigma}n)$ new blocks (in the amortized sense, i.e., if we consider the insertions from the beginning). Each such new block $b'$, coming from splitting an existing block $b$, requires that we change all the corresponding pointers $p_b$ from the pairs $(a_z,p_b)$ in $R$ (and $R'$), so that they become $(a_z,p_{b'})$. To find those pairs efficiently, the list node holding $b$ also contains the $O(\log_\sigma n)$ pointers to those pairs (using $O(\log\sigma)$ bits each); we can then update the required pointers in $O(t)$ total time.
The new blocks also require creating their labels. Those $O(t/\log_{\sigma}n)$ label insertions also trigger $O(t/\log_{\sigma}n)$ changes of other labels, with the technique of Section~\ref{sec:listlabel}. If the label of a block $b$ was changed, we visit all pairs $(a_z,p_{b})$ in $R$ that point to $b$. Each such $(a_z,p_b)$ is kept in some group $H_{a_z,k}$ and in some succinct SB-tree $D_{a_z,k}$. We then delete the old label of $b$ from $D_{a_z,k}$ and insert the new modified label. The total number of updates is thus bounded by $O(t)$. While not mentioned in the original paper \cite{GrossiORR09}, one can easily perform constant-time insertions and deletions of labels in a succinct SB-tree: The structure is a two-level B-tree of arity $\sqrt{\log n}$ holding encoded Patricia trees on the bits of the keys, and storing at the leaves the positions of the keys in $H_{a,r}$ using $O(\log\log n)$ bits each. To insert or delete a label we follow the usual B-tree procedures. The insertion or deletion of a key in a B-tree node is done in constant time with a precomputed table that, in the same spirit as $Tbl$, yields the resulting Patricia tree if we delete or insert a certain node; this is possible because internal nodes store only $O(\sqrt{\log n}\log\log n)=o(\log n)$ bits. Similarly, we can delete or insert a key at the leaves of the tree.
Apart from handling the block overflows, we must insert in $R$ the pairs corresponding to the new $t$ symbols we are actually inserting. We perform $t$ rank queries $\idrm{rank}_{a_1}(i_1,C)$, $\ldots$, $\idrm{rank}_{a_t}(i_t,C)$, just as described above, and sort the symbols to insert by those ranks using radix sort. We then traverse $R'$ and identify the groups $H_{a_1,j_1}$, $\ldots$, $H_{a_t,j_t}$ where new symbols must be inserted; the counters of preceding pairs stored with the pairs in $R'$ are easily updated along the way. We allocate the pairs $(a_k,p_{b_k})$ that will belong to $H_{a_k,j_k}$ and insert the labels $\idrm{lab}(b_k)$ in the corresponding data structures $D_{a_k,j_k}$, for all $1\le k\le t$. If some groups $H_{a_k,j_k}$ become larger than permitted, we split them as necessary and insert the corresponding pairs in $R'$. We can answer the rank queries, traverse $R$, and update the groups $H_{a_k,j_k}$ all in $O(\sigma/\log_{\sigma}n + t)$ time.
\paragraph{Global Sequence.} In addition to chunk data structures, we keep a static bitvector $M_a=1^{d_1}0\ldots 1^{d_s}$ for every symbol $a$; $d_i$ denotes the number of times $a$ occurs in the $i$-th chunk.
Given a global sequence of $m\ge n/\log_{\sigma}n$ queries, $\idrm{rank}_{a_1}(i_1,B)$, $\ldots$, $\idrm{rank}_{a_m}(i_m,B)$ on $B$, we can assign them to chunks in $O(m)$ time. Then we answer queries on chunks as shown above. If $m_j$ queries are asked on chunk $C_j$, then these queries are processed in $O(m_j+\sigma/\log_{\sigma}n)$ time. Hence all queries on all chunks are answered in $O(m+n/\log_{\sigma}n)=O(m)$ time. We can answer a query $\idrm{rank}_{a_k}(i_k,B)$ by answering a rank query on the chunk that contains $B[i_k]$ and $O(1)$ queries on the sequence $M_{a_k}$ \cite{GolynskiMR06}. Queries on $M_{a_k}$ are supported in $O(1)$ time because the bitvector is static. Hence the total time to answer $m$ queries on $B$ is $O(m)$.
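A small model of this query decomposition: a global rank query is split into a prefix count read off the bitvector $M_a$ plus one chunk-local query. Naive counting replaces the constant-time structures, and the function names are illustrative.

```python
# Decompose rank_a(i, B) into: ones before the j-th 0 of M_a (occurrences
# of a in chunks 0..j-1) plus a local count inside chunk j.

def build_global(B, chunk):
    chunks = [B[i:i + chunk] for i in range(0, len(B), chunk)]
    M = {a: "".join("1" * c.count(a) + "0" for c in chunks) for a in set(B)}
    return chunks, M

def select0(s, j):
    """Position just past the j-th 0 of s; 0 if j == 0."""
    cnt = pos = 0
    while cnt < j:
        if s[pos] == "0":
            cnt += 1
        pos += 1
    return pos

def global_rank(a, i, chunks, M, chunk):
    if a not in M:
        return 0
    j, off = divmod(i, chunk)
    ones_before = M[a][:select0(M[a], j)].count("1")   # a's in chunks 0..j-1
    return ones_before + chunks[j][:off + 1].count(a)  # plus the local count
```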
When a batch of symbols is inserted, we update the corresponding chunks as described above. If some chunk contains more than $4\sigma$ symbols, we split it into several chunks of size $\Theta(\sigma)$ using standard techniques. Finally we update the global sequences $M_a$, both because of the insertions and due to the possible chunk splits. We simply rebuild the bitvectors $M_a$ from scratch; this is easily done in $O(n_a/\log n)$ time, where $n_a$ is the number of bits in $M_a$; see e.g.~\cite{MunroNV14}. This adds up to $O(m/\log n)$ time.
Hence the total amortized cost for a batch of $m\ge n/\Delta$ insertions is $O(m)$.
\begin{theorem}
\label{theor:batchdyn}
We can keep a sequence $B[0..n-1]$ over an alphabet of size $\sigma$ in $O(n\log\sigma)$ bits of space so that a batch of $m$ $\idrm{rank}$ queries can be answered in $O(m)$ time and a batch of $m$ insertions is supported in $O(m)$ amortized time, for $\frac{n}{\log_{\sigma}n}\le m\le n$. \end{theorem}
\section{Sequences with Partial Rank Operation} \label{sec:partrank} If $\sigma=\log^{O(1)}n$, then we can keep a sequence $S$ in $O(n\log\sigma)$ bits so that select and rank queries (including partial rank queries) are answered in constant time~\cite{FMMN07}. In the remaining part of this section we will assume that $\sigma\ge \log^3 n$.
\begin{lemma}
\label{lemma:partrank} Let $\sigma\le m\le n$. We can support partial rank queries on a sequence $C[0..m-1]$ over an alphabet of size $\sigma$ in time $O(1)$. The data structure needs $O(m\log\log m)$ additional bits and can be constructed in $O(m)$ deterministic time. \end{lemma} \begin{proof} Our method employs the idea of buckets introduced in~\cite{BelazzouguiBPV09}. Our structure does not use monotone perfect hashing, however. Let $I_a$ denote the set of positions where a symbol $a$ occurs in $C$, i.e., $I_a$ contains all integers $i$ satisfying $C[i]=a$. If $I_a$ contains more than $2\log^2m$ integers, we divide $I_a$ into buckets $B_{a,s}$ of size $\log^2 m$. Let $p_{a,s}$ denote the longest common prefix of all integers (seen as bit strings) in the bucket $B_{a,s}$ and let $l_{a,s}$ denote the length of $p_{a,s}$. For every element $C[i]$ in the sequence we keep the value of $l_{C[i],t}$ where $B_{C[i],t}$ is the bucket containing $i$. If $I_{C[i]}$ was not divided into buckets, we assume $l_{C[i],t}=$ {\em null}, a dummy value. We will show below how the index $t$ of $B_{C[i],t}$ can be identified if $l_{C[i],t}$ is known. For every symbol $C[i]$ we also keep the rank $r$ of $i$ in its bucket $B_{C[i],t}$. That is, for every $C[i]$ we store the value of $r$ such that $i$ is the $r$-th smallest element in its bucket $B_{C[i],t}$. Both $l_{C[i],t}$ and $r$ can be stored in $O(\log\log m)$ bits. The partial rank of $C[i]$ in $C$ can be computed from $t$ and $r$, $\idrm{rank}_{C[i]}(i,C)=t\log^2m +r$.
It remains to describe how the index $t$ of the bucket containing $C[i]$ can be found. Our method uses $o(m)$ additional bits. First we observe that $p_{a,i}\not= p_{a,j}$ for any fixed $a$ and $i\not=j$; see~\cite{BelazzouguiBPV09} for a proof. Let $T_w$ denote the full binary trie on the interval $[0..m-1]$. Nodes of $T_w$ correspond to all possible bit prefixes of integers $0,\ldots,m-1$. We say that a bucket $B_{a,j}$ is assigned to a node $u\in T_w$ if $p_{a,j}$ corresponds to the node $u$. Thus many different buckets can be assigned to the same node $u$. But for any symbol $a$ at most one bucket $B_{a,k}$ is assigned to $u$. If a bucket is assigned to a node $u$, then there are at least $\log^2 m$ leaves below $u$. Hence buckets can be assigned to nodes of height at least $2\log\log m$; such nodes will be further called bucket nodes. We store all buckets assigned to bucket nodes of $T_w$ using the structure described below.
We order the nodes $u$ level-by-level starting at the top of the tree. Let $m_j$ denote the number of buckets assigned to $u_j$. The data structure $G_j$ contains all symbols $a$ such that some bucket $B_{a,k_a}$ is assigned to $u_j$. For every symbol $a$ in $G_j$ we can find in $O(1)$ time the index $k_a$ of the bucket $B_{a,k_a}$ that is assigned to $u_j$. We implement each $G_j$ as a deterministic dictionary of Hagerup et al.~\cite{HagerupMP01}. $G_j$ uses $O(m_j\log \sigma)$ bits and can be constructed in $O(m_j\log\sigma)$ time.
We store $G_j$ only for bucket nodes $u_j$ such that $m_j>0$. We also keep an array $W[1..\frac{m}{\log^2 m}]$ whose entries correspond to bucket nodes of $T_w$: $W[j]$ contains a pointer to $G_j$ or {\em null} if $G_j$ does not exist.
Using $W$ and $G_j$ we can answer a partial rank query $\idrm{rank}_{C[i]}(i,C)$. Let $C[i]=a$. Although the bucket $B_{a,t}$ containing $i$ is not known, we know the length $l_{a,t}$ of the prefix $p_{a,t}$. Hence $p_{a,t}$ can be computed by extracting the first $l_{a,t}$ bits of $i$. We can then find the index $j$ of the node $u_j$ that corresponds to $p_{a,t}$, $j=(2^{l_{a,t}}-1)+p_{a,t}$. We look up the address of the data structure $G_j$ in $W[j]$. Finally the index $t$ of the bucket $B_{a,t}$ is computed as $t=G_j[a]$.
A data structure $G_j$ consumes $O(m_j\log m)$ bits. Since $\sum_j m_j\le \frac{m}{\log^2 m}$, all $G_j$ use $O(m/\log m)$ bits of space. The array $W$ also uses $O(m/\log m)$ bits. Hence our data structure uses $O(\log\log m)$ additional bits per symbol. \end{proof}
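The node-index arithmetic behind this lookup can be illustrated with a toy Python sketch; plain dictionaries stand in for the succinct array $W$ and the deterministic dictionaries $G_j$, and the concrete values are hypothetical:

```python
# Toy model of the bucket-index lookup; dicts stand in for the succinct
# array W and the deterministic dictionaries G_j.

def node_index(prefix_bits, length):
    """Index of the trie node for a bit prefix: j = (2^l - 1) + p."""
    return (2 ** length - 1) + prefix_bits

# Suppose the bucket of symbol 'a' containing position i corresponds to
# a prefix of length l = 2 of the 4-bit value i = 0b1011.
i, l = 0b1011, 2
p = i >> (4 - l)                  # the first l bits of i
j = node_index(p, l)              # index of the trie node u_j

W = {j: {'a': 3}}                 # W[j] points to G_j; G_j maps a -> bucket index
t = W[j]['a']                     # the bucket containing position i is B_{a,3}
```

In the actual structure $W$ is an array over bucket nodes and each $G_j$ answers its lookup in $O(1)$ worst-case time.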
\begin{theorem}
\label{theor:partrank} We can support partial rank queries on a sequence $B$ using $O(n\log\log \sigma)$ additional bits. The underlying data structure can be constructed in $O(n)$ deterministic time. \end{theorem} \begin{proof}
We divide the sequence $B$ into chunks of size $\sigma$ (except for the last chunk that contains $n-(\floor{n/\sigma}\sigma)$ symbols). Global sequences $M_a$ are defined in the same way as in Section~\ref{sec:batchrank}. A partial rank query on $B$ can be answered by a partial rank query on a chunk and two queries on $M_a$. \end{proof}
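The chunking argument can be mirrored in a short sketch; naive counting stands in for the succinct chunk structures, and per-chunk symbol offsets model (very loosely) the role of the global sequences $M_a$:

```python
# Conceptual sketch: a partial rank query = chunk-local count + a global
# per-chunk offset.  Plain lists replace the succinct structures.

def build(B, chunk_size):
    chunks = [B[k:k + chunk_size] for k in range(0, len(B), chunk_size)]
    offsets, seen = [], {}        # offsets[c][a] = occurrences of a before chunk c
    for ch in chunks:
        offsets.append(dict(seen))
        for a in ch:
            seen[a] = seen.get(a, 0) + 1
    return chunks, offsets

def partial_rank(chunks, offsets, chunk_size, i):
    """rank_{B[i]}(i, B): occurrences of B[i] in B[0..i]."""
    c, off = divmod(i, chunk_size)
    a = chunks[c][off]
    local = chunks[c][:off + 1].count(a)      # chunk-local partial rank
    return offsets[c].get(a, 0) + local

B = list("abracadabra")
chunks, offsets = build(B, 4)
assert partial_rank(chunks, offsets, 4, 7) == 4   # B[7]='a' is the 4th 'a'
```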
\section{Reporting All Symbols in a Range} \label{sec:colrep}
We prove the following lemma in this section.
\begin{lemma} \label{lemma:colrep} Given a sequence $B[0..n-1]$ over an alphabet of size $\sigma$, we can build in $O(n)$ time a data structure that uses $O(n\log\log \sigma)$ additional bits and answers the following queries: for any range $[i..j]$, report the $\mathrm{occ}$ distinct symbols that occur in $B[i..j]$ in $O(\mathrm{occ})$ time and, for every reported symbol $a$, give its frequency in $B[i..j]$ and its frequency in $B[0..i-1]$. \end{lemma}
The proof is the same as that of Lemma 3 in~\cite{Belaz14}, but we use the result of Theorem~\ref{theor:partrank} to answer partial rank queries. This allows us to construct the data structure in $O(n)$ deterministic time (the data structure in~\cite{Belaz14} achieves the same query time, but its construction algorithm requires randomization). For completeness we sketch the proof below.
Augmenting $B$ with $O(n)$ additional bits, we can report all distinct symbols occurring in $B[i..j]$ in $O(\mathrm{occ})$ time using the idea originally introduced by Sadakane~\cite{Sadakane07doc}. For every reported symbol we can find its leftmost and its rightmost occurrences in $B[i..j]$ in $O(1)$ time. Suppose $i_a$ and $j_a$ are the leftmost and rightmost occurrences of $a$ in $B[i..j]$. Then the frequencies of $a$ in $B[i..j]$ and $B[0..i-1]$ can be computed as $\idrm{rank}_a(j_a,B)-\idrm{rank}_a(i_a,B)+1$ and $\idrm{rank}_a(i_a,B)-1$ respectively. Since $\idrm{rank}_a(i_a,B)$ and $\idrm{rank}_a(j_a,B)$ are partial rank queries, they are answered in $O(1)$ time. The data structure that reports the leftmost and the rightmost occurrences can be constructed in $O(n)$ time. Details and references can be found in~\cite{BelazzouguiNV13}. Partial rank queries are answered by the data structure of Theorem~\ref{theor:partrank}. Hence the data structure of Lemma~\ref{lemma:colrep} can be built in $O(n)$ deterministic time. We can also use the data structure of Theorem~\ref{theor:partrank} to determine in $O(1)$ time whether the range $B[i..j]$ contains only one distinct symbol, by the following observation. If $B[i..j]$ contains only one symbol, then $B[i]=B[j]$ and $\idrm{rank}_{B[i]}(j,B)-\idrm{rank}_{B[i]}(i,B)=j-i$. Hence we can find out whether $B[i..j]$ contains exactly one symbol in $O(1)$ time by answering two partial rank queries. This observation will be helpful in Section~\ref{sec:permlcp}.
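The constant-time single-symbol test can be sketched as follows; Python's `count` stands in for the $O(1)$ partial rank queries, with the inclusive convention that $\idrm{rank}_a(i,B)$ counts occurrences in $B[0..i]$ (under which the difference equals $j-i$):

```python
# "Single distinct symbol" test via two partial rank queries; naive
# counting replaces the O(1) partial rank data structure.

def rank(B, a, i):
    """Inclusive rank: occurrences of a in B[0..i]."""
    return B[:i + 1].count(a)

def single_symbol(B, i, j):
    """True iff B[i..j] contains only one distinct symbol."""
    return B[i] == B[j] and rank(B, B[i], j) - rank(B, B[i], i) == j - i

B = list("aaabba")
assert single_symbol(B, 0, 2)
assert not single_symbol(B, 1, 4)
```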
\section{Computing the Intervals} \label{sec:intervals} The algorithm for constructing PLCP, described in Section~\ref{sec:permlcp}, requires that we compute the intervals of $T[j\Delta'..j\Delta'+\ell_i]$ and $\overline{T[j\Delta'..j\Delta'+\ell_{i}]}$ for $i=j\Delta'$ and $j=0,1,\ldots, n/\Delta'$. We will show in this section how all necessary intervals can be computed in linear time when $\ell_i$ for $i=j\Delta'$ are known. Our algorithm uses the suffix tree topology. We construct some additional data structures and pointers for selected nodes of the suffix tree ${\cal T}$. First, we will describe auxiliary data structures on ${\cal T}$. Then we show how these structures can be used to find all needed intervals in linear time.
\paragraph{Marking Nodes in a Tree.} We use the marking scheme described in~\cite{NavarroN12}. Let $d=\log n$. A node $u$ of ${\cal T}$ is \emph{heavy} if it has at least $d$ leaf descendants and \emph{light} otherwise. We say that a heavy node $u$ is a \emph{special} or marked node if $u$ has at least two heavy children. If a non-special heavy node $u$ has more than $d$ children and exactly one of them is heavy, then we keep the index of that heavy child in $u$.
We keep all children of a node $u$ in the data structure $F_u$, so that the child of $u$ that is labeled by a symbol $a$ can be found efficiently. If $u$ has at most $d+1$ children, then $F_u$ is implemented as a fusion tree~\cite{FW94}; we can find the child of $u$ labeled by any symbol $a$ in $O(1)$ time. If $u$ has more than $d+1$ children, then $F_u$ is implemented as a van Emde Boas data structure and we can find the child labeled by $a$ in $O(\log\log \sigma)$ time. If the node $u$ is special, we keep the labels of its heavy children in the data structure $D_u$. $D_u$ is implemented as a dictionary data structure~\cite{HagerupMP01} so that we can find any heavy child of a special node in $O(1)$ time. We will say that a node $u$ is \emph{difficult} if $u$ is light but the parent of $u$ is heavy. We can quickly navigate from a node $u\in {\cal T}$ to its child $u_i$ unless the node $u_i$ is difficult. \begin{proposition} \label{prop:descendant}
We can find the child $u_i$ of $u$ that is labeled with a symbol $a$ in $O(1)$ time unless the node $u_i$ is difficult. If $u_i$ is difficult, we can find $u_i$ in $O(\log\log \sigma)$ time. \end{proposition} \begin{proof}
Suppose that $u_i$ is heavy. If $u$ is special, we can find $u_i$ in $O(1)$ time using $D_u$. If $u$ is not special and it has at most $d+1$ children, then we find $u_i$ in $O(1)$ time using $F_u$. If $u$ is not special and it has more than $d+1$ children, then $u_i$ is the only heavy child of $u$ and its index $i$ is stored with the node $u$. Suppose that $u_i$ is light and $u$ is also light. Then $u$ has at most $d$ children and we can find $u_i$ in $O(1)$ time using $F_u$. If $u$ is heavy and $u_i$ is light, then $u_i$ is a difficult node. In this case we can find the index $i$ of $u_i$ in $O(\log\log \sigma)$ time using $F_u$. \end{proof} \begin{proposition} \label{prop:path}
Any path from a node $u$ to its descendant $v$ contains at most one difficult node. \end{proposition} \begin{proof}
Suppose that $u$ is a heavy node and its descendant $v$ is a light node. Let $u'$ denote the first light node on the path from $u$ to $v$. Then all descendants of $u'$ are light nodes and $u'$ is the only difficult node on the path from $u$ to $v$. If $u$ is light or $v$ is heavy, then there are clearly no difficult nodes between $u$ and $v$. \end{proof}
\paragraph{Weiner Links.} A Weiner link (or w-link) $\mathtt{wlink}(v,c)$ connects a node $v$ of the suffix tree ${\cal T}$ labeled by the path $p$ to the node $u$ such that $u$ is the locus of $cp$. If $\mathtt{wlink}(v,c)=u$, we say that $u$ is the target node and $v$ is the source node of $\mathtt{wlink}(v,c)$, and that $c$ is the label of $\mathtt{wlink}(v,c)$. If the target node $u$ is labeled by $cp$, we say that the w-link is explicit. If $u$ is labeled by some path $cp'$ such that $cp$ is a proper prefix of $cp'$, then the Weiner link is implicit. We classify Weiner links using the same technique that was applied to nodes of the suffix tree above. Weiner links that share the same source node are called sibling links. A Weiner link from $v$ to $u$ is \emph{heavy} if the node $u$ has at least $d$ leaf descendants and \emph{light} otherwise. A node $v$ is \emph{w-special} iff at least two heavy w-links start at $v$. For every w-special node $v$ the dictionary $D'_v$ contains the labels $c$ of all heavy w-links $\mathtt{wlink}(v,c)$. For every $c$ such that $\mathtt{wlink}(v,c)$ is heavy, we also keep the target node $u=\mathtt{wlink}(v,c)$. $D'_v$ is implemented as in \cite{HagerupMP01} so that queries are answered in $O(1)$ time. Suppose that $v$ is the source node of at least $d+1$ w-links, but $u=\mathtt{wlink}(v,c)$ is the only heavy link that starts at $v$. In this case we say that $\mathtt{wlink}(v,c)$ is \emph{unique} and we store the target node $u$ and the symbol $c$ in $v$. Summing up, we explicitly store only heavy w-links that start in a w-special node and unique w-links. All other w-links are not stored explicitly; if they are needed, we compute them using the additional data structures described below.
Let $B$ denote the BWT of $T$. We split $B$ into intervals $G_j$ of size $4d^2$. For every $G_j$ we keep the dictionary $A_j$ of symbols that occur in $G_j$. For each symbol $a$ that occurs in $G_j$, the data structure $G_{j,a}$ contains all positions of $a$ in $G_j$. Using $A_j$, we can find out whether a symbol $a$ occurs in $G_j$. Using $G_{j,a}$, we can find for any position $i$ the smallest $i'\ge i$ such that $B[i']=a$ and $B[i']$ is in $G_j$ (or the largest $i''\le i$ such that $B[i'']=a$ and $B[i'']$ is in $G_j$). We implement both $A_j$ and $G_{j,a}$ as fusion trees~\cite{FW94} so that queries are answered in $O(1)$ time. Data structures $A_j$ and $G_{j,a}$ for a fixed $j$ need $O(d^2\log\sigma)$ bits. We also keep (1) the data structure from~\cite{GolynskiMR06} that supports $\idrm{select}$ queries on $B$ in $O(1)$ time and rank queries on $B$ in $O(\log\log\sigma)$ time and (2) the data structure from Theorem~\ref{theor:partrank} that supports partial rank queries in $O(1)$ time. All additional data structures on the sequence $B$ need $O(n\log\sigma)$ bits.
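The block structures $A_j$ and $G_{j,a}$ can be modeled as follows; sorted position lists and binary search stand in for the fusion trees, which answer the same queries in $O(1)$ time:

```python
# Toy model of the G_j block structures over the BWT: per-block symbol ->
# sorted positions; bisect replaces the O(1)-time fusion trees.
import bisect

def build_blocks(B, block):
    pos = []
    for s in range(0, len(B), block):
        d = {}
        for k in range(s, min(s + block, len(B))):
            d.setdefault(B[k], []).append(k)
        pos.append(d)
    return pos

def next_in_block(pos, block, a, i):
    """Smallest i' >= i with B[i'] = a inside i's block, or None."""
    d = pos[i // block]
    if a not in d:
        return None
    k = bisect.bisect_left(d[a], i)
    return d[a][k] if k < len(d[a]) else None

B = list("abacabad")
pos = build_blocks(B, 4)
assert next_in_block(pos, 4, 'a', 1) == 2
assert next_in_block(pos, 4, 'd', 0) is None
```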
\begin{proposition} \label{prop:heavyspec} The total number of heavy w-links that start in w-special nodes is $O(n/d)$. \end{proposition} \begin{proof}
Suppose that $u$ is a w-special node and let $p$ be the label of $u$. Let $c_1$, $\ldots$, $c_s$ denote the labels of heavy w-links with source node $u$. This means that each of $c_1p$, $c_2p$, $\ldots$, $c_sp$ occurs at least $d$ times in $T$. Consider the suffix tree ${\overline {\cal T}}$ of the reverse text ${\overline T}$. ${\overline {\cal T}}$ contains the node $\overline{u}$ that is labeled with $\overline{p}$. The node $\overline{u}$ has (at least) $s$ children $\overline{u}_1$, $\ldots$, $\overline{u}_s$. The edge connecting $\overline{u}$ and $\overline{u}_i$ is a string that starts with $c_i$. In other words, each $\overline{u}_i$ is the locus of $\overline{p}c_i$. Since $c_ip$ occurs at least $d$ times in $T$, $\overline{p}c_i$ occurs at least $d$ times in ${\overline T}$. Hence each $\overline{u}_i$ has at least $d$ leaf descendants. Thus every w-special node in ${\cal T}$ corresponds to a special node in ${\overline {\cal T}}$ and every heavy w-link outgoing from a w-special node corresponds to some heavy child of a special node in ${\overline {\cal T}}$. Since the number of heavy children of special nodes in a suffix tree is $O(n/d)$, the number of heavy w-links starting in a w-special node is also $O(n/d)$. \end{proof}
\begin{proposition} \label{prop:heavy2}
The total number of unique w-links is $O(n/d)$. \end{proposition} \begin{proof}
A Weiner link $\mathtt{wlink}(v,a)$ is unique only if $\mathtt{wlink}(v,a)$ is heavy, all other w-links outgoing from $v$ are light, and there are at least $d$ light outgoing w-links from $v$. Hence there are at least $d$ w-links for every explicitly stored target node of a unique Weiner link. \end{proof}
We say that $\mathtt{wlink}(v,a)$ is difficult if its target node $u=\mathtt{wlink}(v,a)$ is light and its source node $v$ is heavy. \begin{proposition} \label{prop:wdescendant}
We can compute $u=\mathtt{wlink}(v,a)$ in $O(1)$ time unless $\mathtt{wlink}(v,a)$ is difficult. If $\mathtt{wlink}(v,a)$ is difficult, we can compute $u=\mathtt{wlink}(v,a)$ in $O(\log\log \sigma)$ time. \end{proposition} \begin{proof}
Suppose that $u$ is heavy. If $v$ is w-special, we can find $u$ in $O(1)$ time using $D'_v$. If $v$ is not w-special and it is the source of at most $d+1$ w-links, then we find $u$ in $O(1)$ time using the data structures on $B$. Let $[l_v,r_v]$ denote the suffix range of $v$. The suffix range of $u$ is $[l_u,r_u]$, where $l_u=\mathit{Acc}[a]+\idrm{rank}_a(l_v-1,B)+1$ and $r_u=\mathit{Acc}[a]+\idrm{rank}_a(r_v,B)$. We can find $\idrm{rank}_a(r_v,B)$ as follows. Since $v$ has at most $d$ light w-links, the rightmost occurrence of $a$ in $B[l_v,r_v]$ is within distance $d^2$ of $r_v$. Hence we can find the rightmost $i_a\le r_v$ such that $B[i_a]=a$ by searching in the interval $G_j$ that contains $r_v$ or in the preceding interval $G_{j-1}$. When $i_a$ is found, $\idrm{rank}_a(r_v,B)=\idrm{rank}_a(i_a,B)$ can be computed in $O(1)$ time because partial rank queries on $B$ are supported in time $O(1)$. We can compute $\idrm{rank}_a(l_v-1,B)$ in the same way. When the $\idrm{rank}$ queries are answered, we can find $l_u$ and $r_u$ in constant time. Then we can identify the node $u$ by computing the lowest common ancestor of the $l_u$-th and $r_u$-th leaves in ${\cal T}$.
If $v$ is not w-special and it is the source of more than $d+1$ w-links, then $u$ is the only heavy target node of a w-link starting at $v$; hence $u$ is stored in the node $v$. Suppose now that $u$ is light and $v$ is also light. Then the suffix range $[l_v,r_v]$ of $v$ has length at most $d$. $B[l_v,r_v]$ intersects at most two intervals $G_j$. Hence we can find $\idrm{rank}_a(l_v-1,B)$ and $\idrm{rank}_a(r_v,B)$ in constant time. Then we can find the range $[l_u,r_u]$ of the node $u$ and identify $u$ in $O(1)$ time as described above. If $v$ is heavy and $u$ is light, then $\mathtt{wlink}(v,a)$ is a difficult w-link. In this case we need $O(\log\log \sigma)$ time to compute $\idrm{rank}_a(l_v-1,B)$ and $\idrm{rank}_a(r_v,B)$. Then we find the range $[l_u,r_u]$ and the node $u$ as described above. \end{proof} \begin{proposition} \label{prop:wpath}
Any sequence of nodes $u_1$, $\ldots$, $u_t$ where $u_i=\mathtt{wlink}(u_{i-1},a_{i-1})$ for some symbol $a_{i-1}$ contains at most one difficult w-link. \end{proposition} \begin{proof}
Let $\pi$ denote the path of w-links that contains nodes $u_1$, $\ldots$, $u_t$. Suppose that a node $u_1$ is a heavy node and $u_t$ is a light node. Let $u_l$ denote the first light node on the path $\pi$. Then all nodes on the path from $u_l$ to $u_t$ are light nodes and $\mathtt{wlink}(u_{l-1},a_{l-1})$ is the only difficult w-link on the path from $u_1$ to $u_t$. If $u_1$ is light or $u_t$ is heavy, then all nodes on $\pi$ are light nodes (resp. all nodes on $\pi$ are heavy nodes). In this case there are apparently no difficult w-links between $u_1$ and $u_t$. \end{proof}
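The rank arithmetic behind an explicit Weiner link (the formulas of Proposition~\ref{prop:wdescendant}, here in a 0-based variant with exclusive rank) can be checked on a tiny hand-built BWT; naive counting replaces the $O(1)$ rank structures:

```python
# A Weiner-link range computation (one backward-search step).
# B is the BWT of "banana$"; Acc[a] = occurrences of symbols smaller than a.

B = "annb$aa"
Acc = {'$': 0, 'a': 1, 'b': 4, 'n': 5}

def rank_excl(a, i):
    """Occurrences of a in B[0..i-1]."""
    return B[:i].count(a)

def wlink_range(a, l, r):
    """Suffix range of a+p, given the suffix range [l, r] of p."""
    return (Acc[a] + rank_excl(a, l), Acc[a] + rank_excl(a, r + 1) - 1)

# The range of "a" is [1,3]; prepending 'n' gives the range of "na".
assert wlink_range('n', 1, 3) == (5, 6)
```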
\paragraph{Pre-processing.} Now we show how to construct the auxiliary data structures described above in linear time. We start by generating the suffix tree topology and creating the data structures $F_u$ and $D_u$ for all nodes $u$. For every node $u$ in the suffix tree we create the list of its children $u_i$ and their labels in $O(n)$ time. For every tree node $u$ we can find the number of its leaf descendants using standard operations on the suffix tree topology. Hence we can determine whether $u$ is a heavy or a light node and whether $u$ is a special node. When this information is available, we generate the data structures $F_u$ and $D_u$.
We can create the data structures necessary for navigating along w-links in a similar way. We visit all nodes $u$ of ${\cal T}$. Let $l_u$ and $r_u$ denote the indexes of the leftmost and rightmost leaves in the subtree of $u$. Let $B$ denote the BWT of $T$. Using the method of Lemma~\ref{lemma:colrep}, we can generate the list of distinct symbols in $B[l_u..r_u]$ and count how many times every symbol occurs in $B[l_u..r_u]$ in $O(1)$ time per symbol. If a symbol $a$ occurs more than $d$ times, then $\mathtt{wlink}(u,a)$ is heavy. Using this information, we can identify w-special nodes and create the data structures $D'_u$. Using the method of~\cite{Ruzic08}, we can construct $D'_u$ in $O(n_u\log\log n_u)$ time, where $n_u$ is the number of elements in $D'_u$. By Proposition~\ref{prop:heavyspec} the total number of target nodes in all $D'_u$ is $O(n/d)$; hence we can construct all $D'_u$ in $o(n)$ time.
We can also find all nodes $u$ with a unique w-link. All dictionaries $D'_u$ and all unique w-links need $O((n/d)\log n)=O(n)$ bits of space.
\paragraph{Supporting a Sequence of $\mathtt{extendright}$ Operations.} \begin{lemma} \label{lemma:extendsequence} If we know the suffix interval of a right-maximal factor $T[i..i+j]$ in $B$ and the suffix interval of $\overline{T[i..i+j]}$ in ${\overline B}$, then we can find the intervals of $T[i..i+j+t]$ and $\overline{T[i..i+j+t]}$ in $O(t+\log\log\sigma)$ time. \end{lemma} \begin{proof}
Let ${\cal T}$ denote the suffix tree of the text $T$ and let ${\overline {\cal T}}$ denote the suffix tree of the reverse text ${\overline T}$.
We keep the data structure for navigating the suffix tree ${\cal T}$, described in Proposition~\ref{prop:descendant} and the data structure for computing Weiner links described in Proposition~\ref{prop:wdescendant}. We also keep the same data structures for ${\overline {\cal T}}$.
Let $[\ell_{0,s},\ell_{0,e}]$ denote the suffix interval of $T[i..i+j]$; let $[\ell'_{0,s},\ell'_{0,e}]$ denote the suffix interval of $\overline{T[i..i+j]}$. We navigate down the tree following the symbols $T[i+j+1]$, $\ldots$, $T[i+j+t]$. Let $a=T[i+j+k]$ for some $k$ such that $1\le k\le t$ and suppose that the suffix interval $[\ell_{k-1,s},\ell_{k-1,e}]$ of $T[i..i+j+k-1]$ and the suffix interval $[\ell'_{k-1,s},\ell'_{k-1,e}]$ of $\overline{T[i..i+j+k-1]}$ are already known. First, we check whether our current location is a node of ${\cal T}$. If ${\overline B}[\ell'_{k-1,s},\ell'_{k-1,e}]$ contains only one symbol $T[i+j+k]$, then the range of $T[i..i+j+k]$ is identical with the range of $T[i..i+j+k-1]$. We can calculate the range of $\overline{T[i..i+j+k]}$ in a standard way by answering two rank queries on ${\overline B}$ and $O(1)$ arithmetic operations; see Section~\ref{sec:prelim}. Since ${\overline B}[\ell'_{k-1,s},\ell'_{k-1,e}]$ contains only one symbol, $\idrm{rank}$ queries that we need to answer are partial rank queries. Hence we can find the range of $\overline{T[i..i+j+k]}$ in time $O(1)$. If ${\overline B}[\ell'_{k-1,s},\ell'_{k-1,e}]$ contains more than one symbol, then there is a node $u\in {\cal T}$ that is labeled with $T[i..i+j+k-1]$; $u=lca(\ell_{k-1,s},\ell_{k-1,e})$ where $lca(f,g)$ denotes the lowest common ancestor of the $f$-th and the $g$-th leaves. We find the child $u'$ of the node $u$ in ${\cal T}$ that is labeled with $a=T[i+j+k]$. We also compute the Weiner link $\overline{u'}=\mathtt{wlink}(\overline{u},a)$ for a node $\overline{u'}=lca(\ell'_{k-1,s},\ell'_{k-1,e})$ in ${\overline {\cal T}}$. Then $\ell'_{k,s}=\mathtt{leftmost\_leaf}(\overline{u'})$ and $\ell'_{k,e}=\mathtt{rightmost\_leaf}(\overline{u'})$. We need to visit at most $t$ nodes of ${\cal T}$ and at most $t$ nodes of ${\overline {\cal T}}$ in order to find the desired interval. 
By Proposition~\ref{prop:descendant} and Proposition~\ref{prop:path}, the total time needed to move down in ${\cal T}$ is $O(t + \log\log \sigma)$. By Proposition~\ref{prop:wdescendant} and Proposition~\ref{prop:wpath}, the total time to compute all necessary w-links in ${\overline {\cal T}}$ is also $O(t +\log\log \sigma)$. \end{proof}
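One right-extension step can be sketched as follows: the symbols that follow the current factor $p$ in $T$ are exactly the symbols in the ${\overline B}$-interval of the reversed factor, so both intervals can be updated by counting in that segment (naive counting stands in for the $O(1)$ rank structures; 0-based exclusive-rank variant):

```python
# One bidirectional right-extension step on hand-built BWT data.

def extend_right(c, i, j, ri, rj, Bbar, AccBar):
    seg = Bbar[ri:rj + 1]                     # symbols following p in T
    smaller = sum(1 for b in seg if b < c)    # occurrences of p·b with b < c
    ni, nj = i + smaller, i + smaller + seg.count(c) - 1
    nri = AccBar[c] + Bbar[:ri].count(c)      # backward step on Bbar
    nrj = AccBar[c] + Bbar[:rj + 1].count(c) - 1
    return ni, nj, nri, nrj

# T = "banana$", reverse text "ananab$", Bbar = BWT("ananab$") = "bnn$aaa".
AccBar = {'$': 0, 'a': 1, 'b': 4, 'n': 5}
# Intervals of "an" in T and of its reverse "na" in the reverse text are
# [2,3] and [5,6]; extending by 'a' yields the intervals of "ana".
assert extend_right('a', 2, 3, 5, 6, "bnn$aaa", AccBar) == (2, 3, 2, 3)
```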
\paragraph{Finding the Intervals.} The algorithm for computing PLCP, described in Section~\ref{sec:permlcp}, assumes that we know the intervals of $T[j\Delta'..j\Delta'+\ell_i]$ and $\overline{T[j\Delta'..j\Delta'+\ell_{i}]}$ for $i=j\Delta'$ and $j=0,1,\ldots, n/\Delta'$. These values can be found as follows. We start by computing the intervals of $T[0..\ell_0]$ and $\overline{T[0..\ell_0]}$. Suppose that the intervals of $T[j\Delta'..j\Delta'+\ell_i]$ and $\overline{T[j\Delta'..j\Delta'+\ell_{i}]}$ are known. We can compute $\ell_{(j+1)\Delta'}$ as shown in Section~\ref{sec:permlcp}. We find the intervals of $T[(j+1)\Delta'..j\Delta'+\ell_{i}]$ and $\overline{T[(j+1)\Delta'..j\Delta'+\ell_{i}]}$ in time $O(\Delta')$ by executing $\Delta'$ operations $\mathtt{contractleft}$. Each operation $\mathtt{contractleft}$ takes constant time. Then we calculate the intervals of $T[(j+1)\Delta'..(j+1)\Delta'+\ell_{i+1}]$ and $\overline{T[(j+1)\Delta'..(j+1)\Delta'+\ell_{i+1}]}$ in $O(\log\log\sigma+(\ell_{i+1}-\ell_i+\Delta'))$ time using Lemma~\ref{lemma:extendsequence}. We know from Section~\ref{sec:permlcp} that $\sum (\ell_{i+1}-\ell_i)=O(n)$. Hence we compute all necessary intervals in time $O(n+(n/\Delta')\log\log \sigma)=O(n)$.
\section{Compressed Index} \label{sec:index} In this section we show how our algorithms can be used to construct a compact index in deterministic linear time. We prove the following result. \begin{theorem}
\label{theor:index}
We can construct an index for a text $T[0..n-1]$ over an alphabet of size $\sigma$ in $O(n)$ deterministic time using $O(n\log\sigma)$ bits of working space. This index occupies $nH_k+ o(n\log\sigma)+O(n\frac{\log n}{d})$ bits of space for a parameter $d>0$. All occurrences of a query substring $P$ can be counted in $O(|P|+\log\log \sigma)$ time; all $\mathrm{occ}$ occurrences of $P$ can be reported in $O(|P|+\log\log\sigma + \mathrm{occ}\cdot d)$ time. An arbitrary substring $P$ of $T$ can be extracted in $O(|P|+d)$ time. \end{theorem}
An uncompressed index by Fischer and Gawrychowski~\cite{FG13} also supports counting queries in $O(|P|+\log\log \sigma)$ time; however, their data structure uses $\Theta(n\log n)$ bits. We refer to~\cite{BelazzouguiN14} for the latest results on compressed indexes. \paragraph{Interval Rank Queries.} We start by showing how a compressed data structure that supports access queries can be extended to support a new kind of query that we dub \emph{small interval rank queries}. An interval rank query $\idrm{rank}_a(i,j,B)$ asks for $\idrm{rank}_a(i',B)$ and $\idrm{rank}_a(j',B)$, where $i'$ and $j'$ are the leftmost and rightmost occurrences of the symbol $a$ in $B[i..j]$; if $a$ does not occur in $B[i..j]$, we return {\em null}. An interval rank query $\idrm{rank}_a(i,j,B)$ is a small interval query if $j-i\le 2\log^2\sigma$. Our compressed index relies on the following result. \begin{lemma}
\label{lemma:interrank} Suppose that we are given a data structure that supports $\idrm{access}$ queries on a sequence $C[0..m]$ in time $t_{\idrm{access}}$. Then, using $O(m\log \log \sigma)$ additional bits, we can support small interval rank queries on $C$ in $O(t_{\idrm{access}})$ time. \end{lemma} \begin{proof}
We split $C$ into groups $G_i$ so that every group contains $\log^2\sigma$ consecutive symbols of $C$, $G_i=C[i\log^2\sigma .. (i+1)\log^2\sigma-1]$. Let $A_i$ denote the set of symbols that occur in $G_i$. We would need $\log \sigma$ bits per symbol to store $A_i$. Therefore we keep only a dictionary $A'_i$ implemented as a succinct SB-tree~\cite{GrossiORR09}. A succinct SB-tree needs $O(\log\log m)$ bits per symbol; using the SB-tree, we can determine whether a query symbol $a$ is in $A_i$ in constant time, provided that we can access elements of $A_i$. We can identify every $a\in A_i$ by its leftmost position in $G_i$. Since $G_i$ consists of $\log^2\sigma$ consecutive symbols, a position within $G_i$ can be specified using $O(\log\log \sigma)$ bits. Hence we can access any symbol of $A_i$ in $O(1)$ time. For each $a\in A_i$ we also keep a data structure $I_{a,i}$ that stores all positions where $a$ occurs in $G_i$. Positions are stored as differences with the left border of $G_i$: if $C[j]=a$, we store the difference $j-i\log^2\sigma$. Hence elements of $I_{a,i}$ can be stored in $O(\log\log\sigma)$ bits per symbol. $I_{a,i}$ is also implemented as an SB-tree.
Using the data structures $A'_i$ and $I_{a,i}$, we can answer small interval rank queries. Consider a group $G_t=C[t\log^2\sigma..(t+1)\log^2\sigma-1]$, an index $i$ such that $t\log^2\sigma \le i \le (t+1)\log^2\sigma$, and a symbol $a$. We can find the largest $j\le i$ such that $C[j]=a$ and position $j$ is in $G_t$: first we look for the symbol $a$ in $A'_t$; if $a\in A'_t$, we find the predecessor of $i$ in $I_{a,t}$. An interval $C[i..j]$ of length at most $\log^2 \sigma$ intersects at most two groups $G_t$ and $G_{t-1}$ (an interval of length at most $2\log^2\sigma$ intersects at most three groups and is handled in the same way). We can find the rightmost occurrence of a symbol $a$ in $[i,j]$ as follows. First we look for the rightmost occurrence $j'\le j$ of $a$ in $G_t$; if $a$ does not occur in $C[t\log^2\sigma.. j]$, we look for the rightmost occurrence $j'\le t\log^2\sigma-1$ of $a$ in $G_{t-1}$. We can find the leftmost occurrence $i'$ of $a$ in $C[i..j]$ using a symmetric procedure. When $i'$ and $j'$ are found, we can compute $\idrm{rank}_a(i',C)$ and $\idrm{rank}_a(j',C)$ in $O(1)$ time by answering partial rank queries. Using the result of Theorem~\ref{theor:partrank}, we can support partial rank queries in $O(1)$ time and $O(m\log\log\sigma)$ bits.
Our data structure takes $O(m\log\log m)$ additional bits: Dictionaries $A'_i$ need $O(\log\log m)$ bits per symbol. Data structures $I_{a,t}$ and the structure for partial rank queries need $O(m\log\log \sigma)$ bits. We can reduce the space usage from $O(m\log\log m)$ to $O(m\log\log \sigma)$ using the same method as in Theorem~\ref{theor:partrank}. \end{proof}
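A toy version of the group structures $A'_i$ and $I_{a,i}$ can be sketched as follows; dictionaries and position lists replace the succinct SB-trees, and the query searches only the (at most two) groups a small interval intersects:

```python
# Toy group structures for small interval rank: per-group symbol ->
# sorted positions.  Group size g models log^2(sigma).

def build_groups(C, g):
    groups = []
    for s in range(0, len(C), g):
        d = {}
        for k in range(s, min(s + g, len(C))):
            d.setdefault(C[k], []).append(k)
        groups.append(d)
    return groups

def rightmost(groups, g, a, i, j):
    """Largest j' <= j with C[j'] = a, searching the <= 2 groups that a
    small interval [i..j] intersects (None if a is absent)."""
    for t in (j // g, j // g - 1):
        if t < 0 or t < i // g:
            break
        for p in reversed(groups[t].get(a, [])):
            if i <= p <= j:
                return p
    return None

C = list("xayxaxzy")
G = build_groups(C, 3)
assert rightmost(G, 3, 'a', 0, 5) == 4
assert rightmost(G, 3, 'z', 0, 3) is None
```

Once the rightmost (or, symmetrically, leftmost) occurrence is located, a partial rank query on it completes the interval rank query.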
\paragraph{Compressed Index.} We mark nodes of the suffix tree ${\cal T}$ using the method of Section~\ref{sec:intervals}, but we set $d=\log \sigma$. Nodes of ${\cal T}$ are classified into heavy, light, and special as defined in Section~\ref{sec:intervals}. For every special node $u$, we construct a dictionary data structure $D_u$ that contains the labels of all heavy children of $u$: if $u_j$ is a heavy child of $u$ and the first symbol on the edge from $u$ to $u_j$ is $a_j$, then we keep $a_j$ in $D_u$. For every $a_j\in D_u$ we store the index $j$ of the child $u_j$. If a heavy node $u$ has only one heavy child $u_j$ and more than $d$ light children, then we also store the data structure $D_u$ for such a node $u$. If a heavy node has fewer than $d$ children and one heavy child, then we keep the index of the heavy child using $O(\log d)=O(\log\log \sigma)$ bits.
The second component of our index is the Burrows-Wheeler Transform ${\overline B}$ of the reverse text ${\overline T}$. We keep the data structure that supports partial rank, select, and access queries on ${\overline B}$. Using, e.g., the result from~\cite{BCGNNalgor13}, we can support $\idrm{access}$ queries in $O(1)$ time, while $\idrm{rank}$ and $\idrm{select}$ queries are answered in $O(\log\log \sigma)$ time. Moreover, we construct the data structure, described in Lemma~\ref{lemma:interrank}, that supports rank queries on a small interval in $O(1)$ time. We also keep the data structure of Lemma~\ref{lemma:colrep} on ${\overline B}$; using this data structure, we can find in $O(1)$ time whether an arbitrary interval ${\overline B}[l..r]$ contains exactly one symbol. Finally, we explicitly store the answers to selected rank queries. Let ${\overline B}[l_u..r_u]$ denote the range of ${\overline P}_u$, where $P_u$ is the string that corresponds to a node $u$ and ${\overline P}_u$ is the reverse of $P_u$. For all data structures $D_u$ and for every symbol $a\in D_u$ we store the values of $\idrm{rank}_a(l_u-1,{\overline B})$ and $\idrm{rank}_a(r_u,{\overline B})$.
We will show later in this section that each $\idrm{rank}$ value can be stored in $O(\log\sigma)$ bits. Thus $D_u$ needs $O(\log\sigma)$ bits per element. The total number of elements in all $D_u$ is equal to the number of special nodes plus the number of heavy nodes with one heavy child and at least $d$ light children. Hence all $D_u$ contain $O(n/d)$ symbols and use $O((n/d)\log\sigma)=O(n)$ bits of space. Indexes of heavy children for nodes with only one heavy child and at most $d$ light children can be kept in $O(\log\log\sigma)$ bits.
The data structure that supports $\idrm{select}$, $\idrm{rank}$, and $\idrm{access}$ queries on ${\overline B}$ uses $nH_k(T)+o(n\log\sigma)$ bits. The auxiliary data structures on ${\overline B}$ need $O(n)+O(n\log\log\sigma)$ bits. Finally, we need $O(n\frac{\log n}{d})$ bits to retrieve the position of a suffix in ${\overline T}$ in $O(d)$ time. Hence the space usage of our data structure is $nH_k(T)+o(n\log\sigma)+O(n)+O(n\frac{\log n}{d})$.
\paragraph{Queries.}
Given a query string $P$, we will find in $O(|P|+\log\log \sigma)$ time the range of the reversed string ${\overline P}$ in ${\overline B}$. We will show below how to find the range of $\overline{P[0..i]}$ when the range of $\overline{P[0..i-1]}$ is known. Let $[l_j..r_j]$ denote the range of $\overline{P[0..j]}$, i.e., $\overline{P[0..j]}$ is the longest common prefix of all suffixes in ${\overline B}[l_j..r_j]$. We can compute $l_j$ and $r_j$ from $l_{j-1}$ and $r_{j-1}$ as $l_j=\mathit{Acc}[a]+\idrm{rank}_a(l_{j-1}-1,{\overline B})+1$ and $r_j=\mathit{Acc}[a]+\idrm{rank}_a(r_{j-1},{\overline B})$ for $a=P[j]$ and $j=0,\ldots, |P|-1$. Here $\mathit{Acc}[a]$ denotes the total number of occurrences of symbols smaller than $a$. Using our auxiliary data structures on ${\overline B}$ and additional information stored in the nodes of the suffix tree ${\cal T}$, we can answer the necessary $\idrm{rank}$ queries in constant time (with one exception). At the same time we traverse a path in the suffix tree ${\cal T}$ until the locus of $P$ is found or a light node is reached. A more detailed description is given below.
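The per-symbol update above is the standard backward-search loop. A 0-based sketch with exclusive rank and naive counting follows; the example BWT is built by hand for $T=\texttt{banana\$}$, whose reverse text is $\texttt{ananab\$}$:

```python
# Backward search on the BWT of the reverse text: the pattern is
# consumed left to right and we obtain the range of reversed P.

def count_occurrences(P, Bbar, Acc):
    l, r = 0, len(Bbar) - 1
    for a in P:
        l = Acc[a] + Bbar[:l].count(a)           # rank_a(l-1) + 1, 0-based
        r = Acc[a] + Bbar[:r + 1].count(a) - 1   # rank_a(r)
        if l > r:
            return 0
    return r - l + 1

# Bbar = BWT("ananab$") = "bnn$aaa"; Acc[a] = occurrences of symbols < a.
Acc = {'$': 0, 'a': 1, 'b': 4, 'n': 5}
assert count_occurrences("ana", "bnn$aaa", Acc) == 2
assert count_occurrences("nab", "bnn$aaa", Acc) == 0
```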
Our procedure starts at the root node of ${\cal T}$ and we set $l_{-1}=0$, $r_{-1}=n-1$, and $i=0$.
We compute the ranges ${\overline B}[l_i..r_i]$ that correspond to $\overline{P[0..i]}$ for $i=0,\ldots, |P|-1$. Simultaneously we move down in the suffix tree until we reach a light node. Let $u$ denote the last visited node of ${\cal T}$ and let $a=P[i]$. We denote by $u_a$ the next node that we must visit in the suffix tree, i.e., $u_a$ is the locus of $P[0..i]$. We can compute $l_i$ and $r_i$ in $O(1)$ time if $\idrm{rank}_a(r_{i-1},{\overline B})$ and $\idrm{rank}_a(l_{i-1}-1,{\overline B})$ are known. We will show below that these queries can be answered in constant time because either (a) the answers to the $\idrm{rank}$ queries are explicitly stored in $D_u$ or (b) the $\idrm{rank}$ query that must be answered is a small interval $\idrm{rank}$ query. The only exception is the situation when we move from a heavy node to a light node in the suffix tree; in this situation the $\idrm{rank}$ query takes $O(\log\log\sigma)$ time. For ease of description we distinguish between the following four cases. \\ ({\bf i}) Node $u$ is a heavy node and $a\in D_u$. In this case we identify the heavy child $u_j$ of $u$ that is labeled with $a$. We can also find $l_i$ and $r_i$ in $O(1)$ time because $\idrm{rank}_a(l_{i-1}-1,{\overline B})$ and $\idrm{rank}_a(r_{i-1},{\overline B})$ are stored in $D_u$.\\ ({\bf ii}) Node $u$ is a heavy node and $a\not\in D_u$, or we do not keep the dictionary $D_u$ for the node $u$. In this case $u$ has at most one heavy child and at most $d$ light children. If $u_a$ is a heavy node (case {\bf iia}), then the leftmost occurrence of $a$ in ${\overline B}[l_{i-1}..r_{i-1}]$ is within $d^2$ symbols of $l_{i-1}$ and the rightmost occurrence of $a$ in ${\overline B}[l_{i-1}..r_{i-1}]$ is within $d^2$ symbols of $r_{i-1}$. Hence we can find $l_i$ and $r_i$ by answering the small interval rank queries $\idrm{rank}_a(l_{i-1},l_{i-1}+d^2)$ and $\idrm{rank}_a(r_{i-1}-d^2,r_{i-1})$ respectively. 
\\ If $u_a$ is a light node (case {\bf iib}), we answer two standard rank queries on ${\overline B}$ in order to compute $l_i$ and $r_i$. \\ ({\bf iii}) If $u$ is a light node, then $P[0..i-1]$ occurs at most $d$ times. Hence $\overline{P[0..i-1]}$ also occurs at most $d$ times and $r_{i-1}-l_{i-1}\le d$. Therefore we can compute $r_i$ and $l_i$ in $O(1)$ time by answering small interval rank queries. \\ ({\bf iv}) We are on an edge of the suffix tree between a node $u$ and some child $u_j$ of $u$. In this case all occurrences of $P[0..i-1]$ are followed by the same symbol $a=P[i]$. Hence all occurrences of $\overline{P[0..i-1]}$ are preceded by $P[i]$ in the reverse text. Therefore ${\overline B}[l_{i-1}..r_{i-1}]$ contains only one symbol $a=P[i]$. In this case $\idrm{rank}_a(r_{i-1},{\overline B})$ and $\idrm{rank}_a(l_{i-1}-1,{\overline B})$ are partial rank queries; hence $l_i$ and $r_i$ can be computed in $O(1)$ time.
In all cases, except for case (iib), we can answer $\idrm{rank}$ queries and compute $l_i$ and $r_i$ in $O(1)$ time. In case (iib) we need $O(\log\log \sigma)$ time to answer $\idrm{rank}$ queries. However, case (iib) only takes place when the node $u$ is heavy and its child $u_a$ is light. Since all descendants of a light node are light, case (iib) occurs at most once while the pattern $P$ is processed. Hence the total time to find the range of $\overline{P}$ in ${\overline B}$ is $O(|P|+\log\log \sigma)$. When the range is known, we can count and report all occurrences of ${\overline P}$ in the standard way.
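The range-update loop above is the standard backward-search recurrence on a BWT. The following plain-Python sketch is only an illustration, not the paper's data structures: `occ` is a naive $O(n)$ scan standing in for the constant-time rank machinery, and it searches a pattern right to left on the BWT of the text itself rather than left to right on ${\overline B}$. It shows how each rank query turns one range $[l,r)$ into the next.

```python
def bwt(text):
    """Naive Burrows-Wheeler transform via sorted rotations;
    `text` must end with a unique, smallest sentinel such as '$'."""
    n = len(text)
    rotations = sorted(text[i:] + text[:i] for i in range(n))
    return "".join(rot[-1] for rot in rotations)

def backward_search(B, P):
    """Return the half-open range [l, r) of sorted rotations prefixed by P,
    updating the range one symbol at a time.  occ(a, i) counts a in B[0:i];
    here it is a naive O(n) scan standing in for O(1)-time rank queries."""
    n = len(B)
    C = {a: sum(1 for c in B if c < a) for a in set(B)}  # symbols smaller than a
    occ = lambda a, i: B[:i].count(a)
    l, r = 0, n
    for a in reversed(P):
        if a not in C:
            return 0, 0
        l = C[a] + occ(a, l)
        r = C[a] + occ(a, r)
        if l >= r:
            return 0, 0
    return l, r

B = bwt("mississippi$")
l, r = backward_search(B, "ssi")
print(r - l)  # number of occurrences of "ssi" in "mississippi" -> 2
```

The size of the final range is exactly the number of occurrences, which is all the counting step needs.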
\paragraph{Construction Algorithm.} We can construct the suffix tree ${\cal T}$ and the BWT ${\overline B}$ in $O(n)$ deterministic time. Then we can visit all nodes of ${\cal T}$ and identify all nodes $u$ for which the data structure $D_u$ must be constructed. We keep information about nodes for which $D_u$ will be constructed in a bit vector. For every such node we also store the list of its heavy children with their labels. To compute additional information for $D_u$, we traverse the nodes of ${\cal T}$ one more time using a variant of depth-first search. When a node $u\in {\cal T}$ is reached, we know the interval $[l_u,r_u]$ of $\overline{s_u}$ in ${\overline B}$, where $s_u$ is the string that labels the path from the root to a node $u\in {\cal T}$. We generate the list of all children $u_i$ of $u$ and their respective labels $a_i$. If we store a data structure $D_u$ for the node $u$, we identify labels $a_h$ of heavy children $u_h$ of $u$. For every $a_h$ we compute $\idrm{rank}_{a_h}(l_u-1,{\overline B})$ and $\idrm{rank}_{a_h}(r_u,{\overline B})$ and add this information to $D_u$. Then we generate the intervals that correspond to all strings $\overline{s_ua_i}$ in ${\overline B}$ and keep them in a list $List(u)$. Since intervals in $List(u)$ are disjoint, we can store $List(u)$ in $O(\sigma \log n)$ bits.
We can organize our traversal in such a way that only $O(\log n)$ lists $List(u)$ need to be stored. Let $num(u)$ denote the number of leaves in the subtree of a node $u$. We say that a child $u_i$ of a node $u$ is \emph{small} if $num(u_i)\le num(u)/2$ and \emph{big} otherwise. Every node can have at most one big child. When a node $u$ is processed and $List(u)$ is generated, we visit the small children $u_i$ of $u$ in arbitrary order. When all small children $u_i$ are visited and processed, we discard the list $List(u)$. Finally, if $u$ has a big child $u_b$, we visit $u_b$. If a node $u$ is not the root node and we keep $List(u)$, then $num(u)\le num(parent(u))/2$. Therefore we keep $List(u)$ for at most $O(\log n)$ nodes $u$. Thus the space we need to store all $List(u)$ is $O(\sigma\log^2 n)=o(n)$ for $\sigma\le n^{1/2}$. Hence the total workspace used by our algorithm is $O(n\log\sigma)$. The total number of $\idrm{rank}$ queries that we need to answer is $O(n/d)$ because all $D_u$ contain $O(n/d)$ elements. We need $O((n/d)\log\log\sigma)$ time to construct all $D_u$ and to answer all $\idrm{rank}$ queries. The total time needed to traverse ${\cal T}$ and collect the necessary data about heavy nodes and special nodes is $O(n)$. Therefore our index can be constructed in $O(n)$ time.
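The traversal order can be sketched as follows. This hypothetical Python model (a tree given as a dict from node to children) only tracks which lists $List(u)$ are alive at any moment and checks the logarithmic bound on a toy example; the actual list contents are elided.

```python
import math

def leaves(tree, u):
    """Number of leaves in the subtree of u; `tree` maps a node to its children."""
    children = tree.get(u, [])
    return 1 if not children else sum(leaves(tree, c) for c in children)

def traverse(tree, u, live, stats):
    """Emulate the construction order: generate List(u) (modelled by adding u
    to `live`), recurse into the small children first, discard List(u), and
    only then descend into the big child (a child holding more than half of
    u's leaves), whose own list replaces the discarded one."""
    live.add(u)
    stats["max_live"] = max(stats["max_live"], len(live))
    half = leaves(tree, u) / 2
    small = [c for c in tree.get(u, []) if leaves(tree, c) <= half]
    big = [c for c in tree.get(u, []) if leaves(tree, c) > half]  # at most one
    for c in small:
        traverse(tree, c, live, stats)
    live.discard(u)          # all small subtrees done: List(u) can be dropped
    for c in big:
        traverse(tree, c, live, stats)

# hypothetical caterpillar tree with 5 leaves; node 0 is the root
tree = {0: [1, 2], 2: [3, 4], 4: [5, 6], 6: [7, 8]}
stats = {"max_live": 0}
traverse(tree, 0, set(), stats)
assert stats["max_live"] <= math.log2(leaves(tree, 0)) + 1
print(stats["max_live"])  # -> 2
```

Each live list other than the deepest one belongs to an ancestor whose small-child subtree we are currently inside, so the leaf counts along the live chain at least halve at every step; this is the $O(\log n)$ bound.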
It remains to show how we can store selected precomputed answers to $\idrm{rank}$ queries in $O(\log\sigma)$ bits per query. We divide the sequence ${\overline B}$ into chunks of size $\sigma^2$. For each chunk and for every symbol $a$ we encode the number of occurrences of $a$ per chunk in a binary sequence $A_a=1^{d_1}01^{d_2}0\ldots1^{d_i}0\ldots$, where $d_i$ is the number of times $a$ occurs in the $i$-th chunk. If a symbol ${\overline B}[i]$ is in the chunk $Ch$, then we can answer $\idrm{rank}_a(i,{\overline B})$ by $O(1)$ queries on $A_a$ and a rank query on $Ch$; see e.g.,~\cite{GolynskiMR06}. Suppose that we need to store a pre-computed answer to a query $\idrm{rank}_a(i,{\overline B})$; we store the answer to $\idrm{rank}_a(i',Ch)$, where $Ch$ is the chunk that contains $i$ and $i'$ is the relative position of ${\overline B}[i]$ in $Ch$. Since a chunk contains $\sigma^2$ symbols, $\idrm{rank}_a(i',Ch)\le \sigma^2$ and we can store the answer to $\idrm{rank}_a(i',Ch)$ in $O(\log \sigma)$ bits. When the answer to the rank query on $Ch$ is known, we can compute the answer to $\idrm{rank}_a(i,{\overline B})$ in $O(1)$ time.
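A toy sketch of this chunk scheme (with a small chunk size instead of $\sigma^2$, and per-chunk counts kept in plain lists rather than the unary sequences $A_a$) shows how a stored chunk-relative answer combines with per-chunk counts to recover a global rank:

```python
def chunk_counts(B, chunk_size):
    """d[a][j] = number of occurrences of symbol a in the j-th chunk of B.
    The paper stores each row d[a] in unary, A_a = 1^{d_1} 0 1^{d_2} 0 ...,
    so chunk-prefix sums cost O(1); plain lists suffice for this sketch."""
    n_chunks = (len(B) + chunk_size - 1) // chunk_size
    d = {a: [0] * n_chunks for a in set(B)}
    for i, a in enumerate(B):
        d[a][i // chunk_size] += 1
    return d

def rank_from_stored_answer(d, chunk_size, a, i, stored):
    """Recover rank_a(i, B), the number of occurrences of a in B[0..i], from
    chunk-prefix counts plus `stored` = rank_a(i', Ch), the precomputed answer
    relative to the chunk Ch containing position i.  Since stored <= chunk_size,
    it fits in O(log chunk_size) bits, as in the text."""
    j = i // chunk_size                  # index of the chunk containing i
    before = sum(d[a][:j])               # occurrences of a in earlier chunks
    return before + stored

B = "abracadabra"
cs = 4                                   # toy chunk size; the paper uses sigma^2
d = chunk_counts(B, cs)
i = 9
stored = B[(i // cs) * cs : i + 1].count("a")   # the chunk-relative answer
print(rank_from_stored_answer(d, cs, "a", i, stored))  # -> 4
```

Only the small chunk-relative number is stored per query; the chunk-prefix part is shared by all queries through the sequences $A_a$.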
\no{ The set of all special nodes and their heavy children induces a subtree ${\cal T}'$ of ${\cal T}$; every leaf of ${\cal T}$ is heavy and every internal node has at least two children. Hence the total number of special nodes and their heavy children is $O(n/d)$.
Every $d$-th leaf of ${\cal T}$ is marked. If a node $u$ has at least two children that have marked descendants, then $u$ is also marked. This marking scheme has several properties that will be used by our algorithm. We will say that a node $v$ is \emph{special} if the parent of $v$ is marked, $v$ itself is not marked, and $v$ has at least one marked descendant. We say that a node $v$ is \emph{difficult} if the parent of $v$ is marked, but $v$ is not marked and not special. Thus all children of a marked node are either marked, or special, or difficult. We will show later in this section that we can quickly navigate from a node $u\in {\cal T}$ to its child $u_i$ unless the node $u_i$ is difficult. \begin{proposition} \label{prop:descendant}
If $u$ is a difficult node, then there are at most $d$ nodes in the subtree rooted at $u$. \end{proposition}
\begin{proposition} \label{prop:unmarked}
If a node $u$ is not marked and its parent is not marked, then $u$ has at most $2d$ siblings. \end{proposition} \begin{proof}
If $u$ has more than $2d$ siblings, then at least two of these siblings are marked or at least two of them have marked descendants. Then the parent of $u$ is marked too. \end{proof}
\begin{proposition}
\label{prop:path} Every path from a node $u$ to its descendant $u'$ contains at most one difficult node. \end{proposition} \begin{proof}
Let $v$ be the first difficult node encountered on a path from $u$ to $u'$. Since $v$ is difficult, it has no marked descendants. Hence there are no difficult nodes below $v$. \end{proof}
Now we estimate the number of marked and special nodes. The number of marked leaves is $O(n/d)$. All marked nodes induce a subtree ${\cal T}'$ of ${\cal T}$; every internal node of ${\cal T}'$ has at least two children and every leaf of ${\cal T}'$ is a marked leaf of ${\cal T}$. Hence there are $O(n/d)$ marked nodes. \begin{proposition} \label{prop:special}
Every special node has exactly one direct marked descendant $u'$. \end{proposition} \begin{proof}
That is, for every special node $u$ there is exactly one marked node $u'$, such that $u'$ is a descendant of $u$ and there are no marked nodes on the path from $u$ to $u'$. Suppose that $u$ has two direct marked descendants $u'$ and $u''$. Then the lowest common ancestor $u_1$ of $u'$ and $u''$ is also marked. The node $u_1$ is either identical with $u$ or is a proper descendant of $u$. Since $u$ is not marked, $u_1$ is a proper descendant of $u$ that is an ancestor of both $u'$ and $u''$. Hence neither $u'$ nor $u''$ is a direct marked descendant of $u$, a contradiction. \end{proof} It follows from Proposition~\ref{prop:special} that the total number of special nodes does not exceed the total number of marked nodes.
We can identify all marked nodes and all special nodes in linear time. Let $M[0..2n-2]$ and $Sp[0..2n-2]$ denote bit sequences that keep positions of marked and special nodes, $M[i]=1$ (or $Sp[i]=1$) if and only if the $i$-th node is a marked node (resp.\ the $i$-th node is a special node). Let $A[0..2n-2]$ be a sequence of length $2n-1$ over the alphabet $\{\,0,1,2\,\}$. Suppose that $u$ is the $i$-th node in ${\cal T}$. Then $A[i]=0$ if no children of $u$ have marked descendants; $A[i]=1$ if exactly one child of $u$ has a marked descendant; $A[i]=2$ if at least two children of $u$ have marked descendants. We can compute $A[i]$ for the $i$-th node $u$ if the entries of $A$ that contain information about children of $u$ are already known. Hence we can perform an Euler tour of ${\cal T}$ and compute $A$ in $O(n)$ time. When $A$ is known, we can calculate $M$ and $Sp$: $M[i]=1$ if and only if $A[i]=2$; $Sp[i]=1$ if and only if $A[i]=1$ and $A[j]=2$, where $j$ is the index of the parent of the $i$-th node. Hence all marked and special nodes are found in linear time.
\paragraph{Auxiliary Data Structures on Suffix Tree Topology.} For every node $u$, we keep special and marked children of $u$ in a dictionary data structure $D_u$. We also keep all children of $u$ in a data structure $F_u$. If $u$ has at most $d$ children, $F_u$ is organized as the data structure of~\cite{FW94}. If $u$ has more than $d$ children, $F_u$ is organized as the van Emde Boas data structure. Using $D_u$, we can find for any symbol $a$ the special or marked child $u_i$ of $u$ such that the edge from $u$ to $u_i$ is labelled with $a$. Using $F_u$, we can find for any symbol $a$ the child $u_i$ of $u$ such that the edge from $u$ to $u_i$ is labelled with $a$. Since the total number of marked and special nodes is $O(n/d)$, all $D_u$ contain $O(n/d)$ elements. We can construct $D_u$ in $O(n_u\log n_u)$ time, where $n_u$ is the number of marked and special children of $u$. Hence all $D_u$ can be pre-processed in $O((n/d)\log n)=O(n)$ time. }
\end{document}
\begin{document}
\title[Examples of mixing subalgebras of von Neumann algebras]{Examples of mixing subalgebras of von Neumann algebras and their normalizers}
\author{Paul Jolissaint}
\address{ Universit\'e de Neuch\^atel,
Institut de Math\'ematiques,
Emile-Argand 11,
CH-2000 Neuch\^atel, Switzerland}
\email{[email protected]} \thanks{To appear in the \textit{Bulletin of the Belgian Mathematical Society Simon Stevin}}
\subjclass[2010]{Primary 46L10; Secondary 22D25}
\date{\today}
\keywords{Finite von Neumann algebras, relative weak mixing subalgebras, relative weak asymptotic homomorphism property, discrete groups}
\begin{abstract} We discuss different mixing properties for triples of finite von Neumann algebras $B\subset N\subset M$, and we introduce families of triples of groups $H<K<G$ whose associated von Neumann algebras $L(H)\subset L(K)\subset L(G)$ satisfy $\mathcal{N}_{L(G)}(L(H))''=L(K)$. It turns out that the latter equality is implied by two conditions: the equality $\mathcal{N}_G(H)=K$ and the above-mentioned mixing properties. Our families of examples also allow us to exhibit examples of pairs $H<G$ such that $L(\mathcal{N}_G(H))\not=\mathcal{N}_{L(G)}(L(H))''$. \end{abstract}
\maketitle
\section{Weakly and strongly mixing finite von Neumann algebras}
The main purpose of the present paper is to present families of triples of groups $H\triangleleft K<G$ whose associated von Neumann algebras have mixing properties which imply that $L(K)$ is the von Neumann algebra generated by the normalizer of $L(H)$ in $L(G)$. Thus this section is devoted to the discussion of mixing properties for arbitrary finite von Neumann algebras. \par Let $1\in B\subset N\subset M$ be finite von Neumann algebras endowed with a normal, finite, faithful, normalized trace $\tau$. The normalizer of $B$ in $M$, denoted by $\mathcal{N}_M(B)$, is the group of all unitary elements $u\in U(M)$ such that $uBu^*=B$. We assume for simplicity that $M$ has separable predual. We denote by $\mathbb{E}_B$ (resp.\ $\mathbb{E}_N$) the trace-preserving conditional expectation from $M$ onto $B$ (resp.\ $N$), and we set $M\ominus N=\{x\in M:\mathbb{E}_N(x)=0\}$. For $v\in U(B)$, let $\sigma_v\in \mathrm{Aut}(M)$ be defined by $\sigma_v(x)=vxv^*$. This gives an action of $U(B)$ on $M$ that preserves the subspace $M\ominus N$. If $G$ is a group, every element $x$ of the associated von Neumann algebra $L(G)$ admits a Fourier series decomposition
$x=\sum_{g\in G}x(g)\lambda_g$, where $x(g)=\tau(x\lambda_{g}^{-1})$ and $\sum_g|x(g)|^2=\Vert x\Vert_2^2$. If $H$ is a subgroup of $G$, then $L(H)$ is identified with the von Neumann subalgebra of $L(G)$ formed by all elements $y$ such that $y(g)=0$ for every $g\in G\setminus H$. The corresponding conditional expectation $\mathbb{E}_{L(H)}$ satisfies $\mathbb{E}_{L(H)}(x)=\sum_{h\in H}x(h)\lambda_h$ for every $x\in L(G)$. \par
Our first definition is an extension of Definitions 2.1 and 3.4 of \cite{JS} to the case of triples as above.
\begin{definition} Consider a triple $1\in B\subset N\subset M$ of finite von Neumann algebras as above and the action $\sigma$ of $U(B)$ on $M$ by conjugation. \begin{enumerate} \item [(1)] We say that $B$ is \textbf{weakly mixing in $M$ relative to} $N$ if, for every finite set $F\subset M\ominus N$ and every $\varepsilon>0$, one can find $v\in U(B)$ such that $$ \Vert \mathbb{E}_{B}(x\sigma_v(y))\Vert_2=\Vert \mathbb{E}_{B}(xvy)\Vert_2<\varepsilon\quad\forall x,y\in F. $$ If $B=N$, we say that $B$ is \textbf{weakly mixing} in $M$. \item [(2)] If $B$ is diffuse, we say that $B$ is \textbf{strongly mixing in} $M$ \textbf{relative to} $N$ if $$ \lim_{n\to\infty}\Vert \mathbb{E}_{B}(xu_ny)\Vert_2=0 $$ for all $x,y\in M\ominus N$ and all sequences $(u_n)\subset U(B)$ which converge to 0 in the weak operator topology. If $B=N$, we say that $B$ is \textbf{strongly mixing in} $M$. \end{enumerate} \end{definition}
The next definition introduces a relative version of the so-called \textit{weak asymptotic homomorphism property}; the latter was first used by Robertson, Sinclair and Smith in \cite{RSS} for MASAs in order to get an easily verifiable criterion for singularity. It was then proved by Sinclair, Smith, White and Wiggins in \cite{SSWW} that, conversely, any singular MASA has the weak asymptotic homomorphism property. The relative version of the above property was introduced by Chifan in \cite{Chi} in order to prove that, if $A$ is a MASA in a separable type $\textrm{II}_1$ factor $M$, then the triple $$ A\subset \mathcal{N}_M(A)''\subset M $$ has the relative weak asymptotic homomorphism property. He used it to prove that, if $(M_i)_{i\geq 1}$ is a sequence of finite von Neumann algebras and if $(A_i)_{i\geq 1}$ is a sequence such that $A_i\subset M_i$ is a MASA for every $i$, then $$ \overline{\bigotimes}_{i\geq 1}\mathcal{N}_{M_i}(A_i)''=(\mathcal{N}_{\overline{\bigotimes}_i M_i}(\overline{\bigotimes}_i A_i))''. $$ This relative property is related to one-sided quasi-normalizers, as was proved by Fang, Gao and Smith in \cite{FGS}. See Theorem 1.4 below.
\begin{definition} Let $1\in B\subset N\subset M$ be a triple of finite von Neumann algebras as in Definition 1.1 above. \begin{enumerate}
\item [(1)] The triple of algebras $1\in B\subset N\subset M$ has the \textbf{relative weak asymptotic homomorphism property} if there exists a net of unitaries $(u_i)_{i\in I}$ in $B$ such that $$ \lim_{i\in I}\Vert \mathbb{E}_B(xu_iy)-\mathbb{E}_B(\mathbb{E}_N(x)u_i\mathbb{E}_N(y))\Vert_2=0 $$ for all $x,y\in M$. \item [(2)] The \textbf{one-sided quasi-normalizer} of $B$ in $M$ is the set of all elements $x\in M$ for which there exists a finite set $\{x_1,\ldots,x_n\}\subset M$ such that $$ Bx\subset \sum_{i=1}^n x_iB. $$ Following \cite{FGS}, we denote the set of these elements by $q\mathcal{N}_{M}^{(1)}(B)$. \end{enumerate} \end{definition}
\begin{remark} (1) If $B\subset N\subset M$ and if $B$ is diffuse and strongly mixing in $M$ relative to $N$, then it is obviously weakly mixing in $M$ relative to $N$. \par\noindent (2) If $B$ is strongly mixing in $M$ relative to $N$, then every diffuse von Neumann algebra $1\in D\subset B$ is also strongly mixing in $M$ relative to $N$. \par\noindent (3) The following identity $$ \mathbb{E}_B(xuy)-\mathbb{E}_B(\mathbb{E}_N(x)u\mathbb{E}_N(y))=\mathbb{E}_B([x-\mathbb{E}_N(x)]u[y-\mathbb{E}_N(y)]), $$ which holds for every $u\in U(B)$ and all $x,y\in M$, implies that the relative weak mixing property and the relative weak asymptotic homomorphism property are equivalent. \end{remark}
The following theorem is the main result of \cite{FGS}.
\begin{theorem} (J. Fang, M. Gao and R. R. Smith) Let $1\in B\subset N\subset M$ be a triple of finite von Neumann algebras with separable predual. Then the following conditions are equivalent: \begin{enumerate}
\item [(1)] The triple $B\subset N\subset M$ has the relative weak asymptotic homomorphism property, or, equivalently, $B$ is weakly mixing in $M$ relative to $N$.
\item [(2)] The one-sided quasi-normalizer $q\mathcal{N}_{M}^{(1)}(B)$ is contained in $N$. \end{enumerate} \end{theorem}
The next technical result is inspired by Proposition 4.1 of \cite{Popa83} and Lemma 2.1 of \cite{RSS}; it is also reminiscent of the heredity properties of relative weak mixing established in \cite{FGS}.
\begin{proposition} Let $B\subset N\subset M$ be a triple as above. \begin{enumerate} \item [(1)] Assume that $B$ is weakly mixing in $M$ relative to $N$. Then for every nonzero projection $e\in B$, the reduced algebra $eBe$ is weakly mixing in $eMe$ relative to $eNe$. Moreover, one has for every $u\in U(M)$: $$ \Vert \mathbb{E}_{B}-\mathbb{E}_{uBu^*}\Vert_{\infty,2}\geq \Vert u-\mathbb{E}_{N}(u)\Vert_2. $$ \item [(2)] If $B$ is strongly mixing in $M$ relative to $N$, then for every diffuse unital von Neumann subalgebra $D$ of $B$, one has for every $u\in U(M)$: $$ \Vert \mathbb{E}_D-\mathbb{E}_{uDu^*}\Vert_{\infty,2}\geq \Vert u-\mathbb{E}_{N}(u)\Vert_2. $$ \item [(3)] If $B_i\subset N_i\subset M_i$ are triples of finite von Neumann algebras, $i=1,2$, and if $B_i$ is weakly mixing in $M_i$ relative to $N_i$, then $B_1\overline{\otimes}B_2$ is weakly mixing in $M_1\overline{\otimes}M_2$ relative to $N_1\overline{\otimes}N_2$. \end{enumerate} \end{proposition} \begin{proof} The first part of claim (1) is a straightforward consequence of Corollary 6.3 of \cite{FGS} and claim (3) follows from Proposition 6.1 of the same article. \par We prove claim (2) because the proof of the last assertion in (1) is similar to that of statement (2). \par Thus fix a diffuse von Neumann subalgebra $D$ of $B$, a unitary operator $u\in U(M)$, and let us consider $x=u^*-\mathbb{E}_{N}(u^*)$ and $y=u-\mathbb{E}_{N}(u)\in M\ominus N$ and let $\varepsilon>0$. By the above remark, one has for every $v\in U(D)$: $$ \mathbb{E}_{D}(xvy) = \mathbb{E}_{D}(u^*vu)-\mathbb{E}_{D}(\mathbb{E}_{N}(u^*)v\mathbb{E}_{N}(u)) $$ As $D$ is diffuse, there exists a sequence of unitaries $(v_n)\subset U(D)$ which converges to 0 with respect to the weak operator topology. Since $B$ is strongly mixing in $M$ relative to $N$, there exists a positive integer $n$ such that $\Vert \mathbb{E}_{B}(xv_ny)\Vert_2\leq \varepsilon$. 
As $\mathbb{E}_D=\mathbb{E}_D\mathbb{E}_{B}$, we have $\Vert \mathbb{E}_{D}(xv_ny)\Vert_2\leq \varepsilon$ as well. The above computations give $$ \Vert \mathbb{E}_{D}(u^*v_nu)\Vert_2\leq \Vert \mathbb{E}_{D}(\mathbb{E}_{N}(u^*)v_n\mathbb{E}_{N}(u))\Vert_2 +\varepsilon \leq \Vert \mathbb{E}_{N}(u)\Vert_2 +\varepsilon. $$ We then get: \begin{eqnarray*} \Vert \mathbb{E}_{D}-\mathbb{E}_{uDu^*}\Vert_{\infty,2}^2 & \geq & \Vert v_n-\mathbb{E}_{uDu^*}(v_n)\Vert_2^2\\ &=& \Vert u^*v_nu-\mathbb{E}_{D}(u^*v_nu)\Vert_2^2\\ &=& 1-\Vert \mathbb{E}_{D}(u^*v_nu)\Vert_2^2\\ &\geq & 1-(\Vert \mathbb{E}_{N}(u)\Vert_2+\varepsilon)^2\\ &=& 1-\Vert \mathbb{E}_{N}(u)\Vert_2^2-2\varepsilon\Vert \mathbb{E}_{N}(u)\Vert_2-\varepsilon^2\\ &=& \Vert u-\mathbb{E}_{N}(u)\Vert_2^2-2\varepsilon\Vert \mathbb{E}_{N}(u)\Vert_2-\varepsilon^2. \end{eqnarray*} As $\varepsilon$ is arbitrary, we get the conclusion. \end{proof}
We end the present section with a first class of examples of relative strongly mixing algebras; its proof is inspired by that of Lemma 2.2 in \cite{DSS}.
\begin{proposition} Let $1\in B\subset N$ and $Q$ be arbitrary finite von Neumann algebras with separable preduals. Assume moreover that $B$ is diffuse. Then $B$ is strongly mixing relative to $N$ in the free product algebra $M=N*Q$. \end{proposition} \begin{proof} Let us recall that the free product $N*Q$ is the von Neumann algebra generated by the unital $*$-algebra $$ P:=\C1\oplus \bigoplus_{n\geq 1}\left(\bigoplus_{i_1\not=\cdots\not=i_n}M_{i_1}^0\otimes \cdots \otimes M_{i_n}^0\right) $$ where $M_1^0=\{x\in N:\tau(x)=0\}$ and $M_2^0=\{x\in Q:\tau(x)=0\}$. (We have chosen and fixed finite, normal, faithful normalized traces on $N$ and $Q$.) Thus every element of $P$ is a finite linear combination of words $w$ of the following form: either $w\in N$, or $w$ is a finite product of letters of zero trace and at least one letter belongs to $Q$. \par Then fix a sequence $(u_n)\subset U(B)$ which converges weakly to zero and two words $x,y\in P$ as above. If $x,y\in N$, then $\mathbb{E}_{B}(xu_ny)-\mathbb{E}_{B}(\mathbb{E}_{N}(x)u_n\mathbb{E}_{N}(y))=0$. If $x,y\in Q^0$, then $x,y\in M\ominus N$ and $xu_ny$ decomposes as a sum $xu_ny=x(u_n-\tau(u_n))y+\tau(u_n)xy$ where the first term is a reduced word, hence $\mathbb{E}_{B}(x(u_n-\tau(u_n))y)=0$, and $$
\Vert \mathbb{E}_{B}(xu_ny)\Vert_2\leq |\tau(u_n)|\Vert xy\Vert_2\to 0 $$ as $n\to\infty$. If $x=x_1b_1$ and $y=b_2y_2$, where $b_1,b_2\in N$ and $x_1$ ends with an element in $Q^0$ and $y_2$ starts with an element in $Q^0$, then \begin{eqnarray*} \mathbb{E}_{B}(xu_ny)-\mathbb{E}_{B}(\mathbb{E}_{N}(x)u_n\mathbb{E}_{N}(y)) & = & \mathbb{E}_{B}(x_1b_1u_nb_2y_2)\\ & & -\mathbb{E}_{B}(\mathbb{E}_{N}(x_1)b_1u_nb_2\mathbb{E}_{N}(y_2))\\ & = & \mathbb{E}_{B}(x_1\{b_1u_nb_2-\tau(b_1u_nb_2)\}y_2)\\ & & +\tau(b_1u_nb_2)\mathbb{E}_{B}(x_1y_2)\\ &= & \tau(b_1u_nb_2)\mathbb{E}_{B}(x_1y_2). \end{eqnarray*} As the words $x_1\{b_1u_nb_2-\tau(b_1u_nb_2)\}y_2$, $x_1$ and $y_2$ are reduced, they belong to the kernels of the conditional expectations $\mathbb{E}_{B}$ and $\mathbb{E}_{N}$. Hence we get $$
\Vert \mathbb{E}_{B}(xu_ny)-\mathbb{E}_{B}(\mathbb{E}_{N}(x)u_n\mathbb{E}_{N}(y))\Vert_2\leq |\tau(b_1u_nb_2)|\Vert x_1y_2\Vert_2 \to 0 $$ as $n\to\infty$. The remaining case, when exactly one of $x$ and $y^*$ ends with a letter from $Q$, is dealt with as in the previous case. \end{proof}
\section{The case of group algebras}
As will be seen below, the associated von Neumann algebras of suitable triples of groups $H<K<G$ give rise to examples of von Neumann algebras which satisfy the relative weak or strong mixing properties. Our next definition generalizes the so-called \emph{conditions (SS)} and \emph{(ST)} of \cite{JS} to not necessarily abelian groups. We are indebted to the referee for having suggested the following more intuitive formulation.
\begin{definition} Let $G$ be a countable group and let $H<K$ be two infinite subgroups of $G$. \begin{enumerate} \item [(a)] We say that the triple $H<K<G$ satisfies \textbf{condition (SS)} if all orbits of the natural action $H\curvearrowright (G\setminus K)/H$ are infinite. \item [(b)] The triple $H<K<G$ satisfies \textbf{condition (ST)} if all stabilizers of the natural action $H\curvearrowright (G\setminus K)/H$ are finite. \end{enumerate} When $H<G$ we say that the pair $H<G$ satisfies condition (SS) (resp. (ST)) if it is the case for $H=K<G$. \end{definition}
We present below technical characterizations of both properties. In order to do that, recall on the one hand that all orbits of $H\curvearrowright (G\setminus K)/H$ are infinite if and only if, for every finite subset $Y\subset G\setminus K$, one can find $h\in H$ such that $hY\cap Y=\emptyset$ (see for instance Lemma 2.2 in \cite{KT}). \par On the other hand, for $g,h\in G\setminus K$, set $$ E(g,h)=\{\gamma\in H: g\gamma h\in H\}=g^{-1}Hh^{-1}\cap H $$ and $E(g)=E(g^{-1},g)=gHg^{-1}\cap H$. The latter is a subgroup of $G$. Then it is easy to see that, for arbitrary $\gamma_0\in E(g,h)$, one has $E(g,h)\subset E(g^{-1})\gamma_0$. \par Making use of these observations, the proof of the following lemma is straightforward.
\begin{lemma} Let $H<K<G$ be three infinite, countable groups. \begin{enumerate} \item[(a)] The triple $H<K<G$ satisfies condition (SS) if and only if for every nonempty finite set $F\subset G\setminus K$, there exists $h\in H$ such that $FhF\cap H=\emptyset$. \item[(b)] The following conditions are equivalent: \begin{enumerate} \item [(1)] $H<K<G$ satisfies condition (ST); \item [(2)] for every finite set $F\subset G\setminus K$, there exists an exceptional finite set $E\subset H$ such that $FhF\cap H=\emptyset$ for every $h\in H\setminus E$. \item [(3)] $E(g,h)$ is a finite set for all $g,h\in G\setminus K$; \item [(4)] $E(g)$ is a finite group for every $g\in G\setminus K$. \end{enumerate} \end{enumerate} \end{lemma}
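The sets $E(g,h)$ can be computed mechanically. In the situations of interest here the groups are infinite (in a finite group every $E(g,h)$ is trivially finite, so nothing about conditions (SS) or (ST) is being tested), but a toy permutation-group computation, with hypothetical choices of $G$ and $H$, illustrates the set arithmetic and the coset relation $E(g,h)\subset E(g^{-1})\gamma_0$:

```python
from itertools import permutations

def compose(p, q):
    """(p∘q)(i) = p(q(i)) for permutations stored as tuples."""
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

# Toy (hypothetical) choices: G = S_3 and H = <(0 1)>.
G = set(permutations(range(3)))
H = {(0, 1, 2), (1, 0, 2)}

def E(g, h):
    """E(g, h) = {gamma in H : g*gamma*h in H} = g^{-1}Hh^{-1} ∩ H."""
    return {gam for gam in H if compose(compose(g, gam), h) in H}

g = (1, 2, 0)                       # a 3-cycle, so g is not in H
print(E(g, inverse(g)))             # the group E(g^{-1}); here only the identity

# coset relation: E(a, b) ⊂ E(a^{-1})·gamma_0 for any gamma_0 in E(a, b)
for a in G - H:
    for b in G - H:
        Eab = E(a, b)
        if Eab:
            gam0 = next(iter(Eab))
            assert Eab <= {compose(e, gam0) for e in E(a, inverse(a))}
```

Here `E(a, inverse(a))` is $E(a^{-1})=a^{-1}Ha\cap H$ in the notation of the text, so the loop checks exactly the inclusion stated before the lemma.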
Using the same type of arguments as in \cite{JS}, one proves the following generalization of Proposition 2.3 and of Theorem 3.5 in \cite{JS}:
\begin{proposition} Let $H<K<G$ be a triple of countable groups. Then: \begin{enumerate} \item [(a)] The triple $H<K<G$ satisfies condition (SS) if and only if $L(H)$ is weakly mixing in $L(G)$ relative to $L(K)$. \item [(b)] The triple $H<K<G$ satisfies condition (ST) if and only if $L(H)$ is strongly mixing in $L(G)$ relative to $L(K)$. \end{enumerate} Moreover, if $H<K<G$ satisfies condition (SS) (resp. (ST)) and if $\sigma:G\rightarrow \mathrm{Aut}(Q,\tau)$ is a trace-preserving action of $G$ on some finite von Neumann algebra $(Q,\tau)$, then the crossed product algebra $Q\rtimes_\sigma H$ is weakly (resp. strongly) mixing in $Q\rtimes_\sigma G$ relative to $Q\rtimes_\sigma K$. \end{proposition}
\begin{remark} (1) An infinite subgroup $H$ of a group $G$ is called \textit{malnormal} if, for every $g\in G\setminus H$, one has $H\cap gHg^{-1}=\{1\}$. Hence such a pair $H<G$ satisfies condition (ST). More generally, $H$ is said to be \textit{almost malnormal} if, for every $g\in G\setminus H$, the subgroup $H\cap gHg^{-1}$ is finite. Thus, if $H<K<G$ is a triple that satisfies condition (ST), one can say equivalently that $H$ is \textit{almost malnormal in $G$ relative to $K$.} \newline (2) Following \cite{V10}, we say that a subgroup $H$ of a group $G$ is \emph{relatively malnormal} if there exists an intermediate subgroup $K<G$ of infinite index such that $gHg^{-1}\cap H$ is finite for all $g\in G\setminus K$. This means precisely that the triple $H<K<G$ satisfies condition (ST). \end{remark}
If $G$ is a group and $H$ is a subgroup of $G$, there is a straightforward analogue of the one-sided quasi-normalizer of a subalgebra: we denote by $q\mathcal{N}_G^{(1)}(H)$ the set of elements $g\in G$ for which there exist finitely many elements $g_1,\ldots,g_n\in G$ such that $Hg\subset \bigcup_{i=1}^n g_iH$. In view of Theorem 1.3, it is natural to ask whether the triple of algebras $L(H)\subset L(K)\subset L(G)$ has the relative weak asymptotic homomorphism property if and only if $q\mathcal{N}_G^{(1)}(H)\subset K$. In the special case $H=K$, the authors of \cite{FGS} proved that it is indeed true, but their proof relied heavily on the main result of the article (\emph{i.e.} Theorem 1.3 in the present notes). We succeeded in providing a self-contained proof in the context of group algebras in \cite{JWAHP}, and we recall some characterizations in the next theorem (where $\lambda$ denotes the left regular representation of $G$ on $\ell^2(G)$).
\begin{theorem} (\cite{JWAHP}, Theorem 2.1) Let $H<K<G$ be a triple of groups and let $B=L(H)\subset N=L(K)\subset M=L(G)$ be their associated von Neumann algebras. Then the following conditions are equivalent: \begin{enumerate}
\item [(1)] There exists a net $(h_i)_{i\in I}\subset H$ such that, for all $x,y\in M\ominus N$, one has
$$
\lim_{i\in I}\Vert \mathbb{E}_B(x\lambda_{h_i} y)\Vert_2=0, $$ i.e. the net of unitaries in the relative weak asymptotic homomorphism property may be chosen in the subgroup $\lambda(H)$ of $U(B)$.
\item [(2)] The triple $B\subset N\subset M$ has the relative weak asymptotic homomorphism property.
\item [(3)] The subspace of $H$-fixed vectors $\ell^2(G/H)^H$ in the quasi-regular representation $\bmod\ H$ is contained in $\ell^2(K/H)$.
\item [(4)] The one-sided quasi-normalizer $q\mathcal{N}_G^{(1)}(H)$ is contained in $K$.
\item [(5)] The triple $H<K<G$ satisfies condition (SS), i.e. for every non empty finite set $F\subset G\setminus K$, there exists $h\in H$ such that $FhF\cap H=\emptyset$. \end{enumerate} \end{theorem}
\begin{remark} (1) Let $G$ be a group and let $H$ be a subgroup of $G$. Then it is interesting to note the following description of the one-sided quasi-normalizer of $H$ in $G$; it is certainly known to specialists: $$
q\mathcal{N}_G^{(1)}(H)=\{g\in G: [H:H\cap gHg^{-1}]<\infty\}. $$ Indeed, let $g\in G$ be arbitrary; let $u_1,\ldots,u_n,\ldots\in H$ be such that $$ H=\bigsqcup_j u_j(H\cap gHg^{-1}). $$ Then it is easy to see that $$ HgHg^{-1}=\bigsqcup_j u_jgHg^{-1}. $$ It implies that $HgH=\bigsqcup_j u_jgH$, hence that $[H:H\cap gHg^{-1}]<\infty$ if and only if $g\in q\mathcal{N}_G^{(1)}(H)$. \\ (2) Let $H<K<G$ be a triple of groups. Condition (SS) is equivalent to the following apparently weaker \textit{condition (wSS)}: \par \textit{For every nonempty finite set $F\subset G\setminus K$ and for every $g\in G\setminus K$, there exists $h\in H$ such that $Fhg\cap H=\emptyset$.} \par Indeed, it is easy to see that condition (wSS) is a reformulation of condition (4) of Theorem 2.5, namely that $q\mathcal{N}_G^{(1)}(H)\subset K$. \end{remark}
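The coset argument in (1) can be checked mechanically in a small group. The following sketch uses toy choices of $G$, $H$ and $g$ (in a finite group the finite-index condition is vacuous, so only the decomposition itself is being tested): coset representatives $u_j$ of $H\cap gHg^{-1}$ in $H$ indeed give $HgH=\bigsqcup_j u_jgH$.

```python
from itertools import permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

# Toy (hypothetical) choices: G = S_3, H = <(0 1)>, g a 3-cycle not in H.
G = set(permutations(range(3)))
H = {(0, 1, 2), (1, 0, 2)}
g = (1, 2, 0)

# K = H ∩ gHg^{-1}: h lies in gHg^{-1} iff g^{-1}*h*g lies in H.
K = {h for h in H if compose(compose(inverse(g), h), g) in H}

def coset_reps(S, K):
    """Representatives u_j with S = ⊔_j u_j K (left cosets of K in S)."""
    reps, seen = [], set()
    for s in sorted(S):
        if s not in seen:
            reps.append(s)
            seen |= {compose(s, k) for k in K}
    return reps

reps = coset_reps(H, K)
pieces = [{compose(compose(u, g), h) for h in H} for u in reps]   # u_j g H
HgH = {compose(compose(h1, g), h2) for h1 in H for h2 in H}

assert HgH == set().union(*pieces)              # HgH = ∪_j u_j gH
assert sum(map(len, pieces)) == len(HgH)        # and the union is disjoint
print(len(reps), len(HgH))                      # -> 2 4
```

In this toy case $H\cap gHg^{-1}$ is trivial, so both elements of $H$ are representatives and $HgH$ splits into two disjoint cosets $u_jgH$, exactly as in the argument above.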
We use Theorem 2.5 to give examples of triples of algebras $B\subset N\subset M$ with $N=\mathcal{N}_M(B)''$ in the case of group von Neumann algebras and crossed products.
\begin{theorem} Let $H<K<G$ be a triple of groups such that $K=\mathcal{N}_G(H)$ is the normalizer of $H$ in $G$. We denote by $W\sp\ast(q\mathcal{N}_{L(G)}^{(1)}(L(H)))$ the von Neumann algebra generated by the one-sided quasi-normalizer of $L(H)$ in $L(G)$. Then the following two conditions are equivalent: \begin{enumerate}
\item [(1)] $L(K)=W\sp\ast(q\mathcal{N}_{L(G)}^{(1)}(L(H)))$;
\item [(2)] $H<K<G$ satisfies condition (SS). \end{enumerate} Thus, if the triple $H<K<G$ satisfies condition (SS), then $\mathcal N_{L(G)}(L(H))''=L(K)$, and, moreover, if $G$ acts on some finite von Neumann algebra $Q$, then $$ Q\rtimes K=\mathcal{N}_{Q\rtimes G}(Q\rtimes H)''. $$ \end{theorem} \begin{proof} (1) $\Rightarrow$ (2). By hypothesis, the triple of algebras $L(H)\subset L(K)\subset L(G)$ has the relative weak asymptotic homomorphism property, hence, by Theorem 2.5, $H<K<G$ satisfies condition (SS). \newline (2) $\Rightarrow$ (1). By assumption, one has the following chain of inclusions and equalities: $$ K=\mathcal{N}_G(H)\subset q\mathcal{N}_{G}^{(1)}(H)\subset K $$ where the first inclusion follows from the definition and the second one from condition (SS). At the level of von Neumann algebras, this gives: $$ L(K)=L(\mathcal{N}_G(H))\subset \mathcal{N}_{L(G)}(L(H))''\subset W^*(q\mathcal{N}_{L(G)}^{(1)}(L(H)))\subset L(K) $$ where the last inclusion follows from the relative weak asymptotic homomorphism property. \par For the case of crossed products, the unitary groups $U(Q)$ and $K$ are contained in $\mathcal N_{Q\rtimes G}(Q\rtimes H)$, hence $Q\rtimes K\subset \mathcal N_{Q\rtimes G}(Q\rtimes H)''$. As the latter algebra is trivially contained in $W^*(q\mathcal N_{Q\rtimes G}^{(1)}(Q\rtimes H))$, the assertion follows from Proposition 2.3 above, and from Theorem 3.1 of \cite{FGS}. \end{proof}
\begin{remark} (1) Let $H<G$ be a pair of groups; one may ask whether it is possible to have $\mathcal N_{L(G)}(L(H))''\not=W^*(q\mathcal{N}_{L(G)}^{(1)}(L(H)))$. It is indeed the case, as explained below. \par Following Section 5 of \cite{FGS}, we denote by $H_1$ the quasi-normalizer of $H$ in $G$, \textit{i.e.}, the maximal subgroup of $q\mathcal N_G^{(1)}(H)\cap q\mathcal N_G^{(1)}(H)^{-1}$, and we let $H_2$ denote the subgroup of $G$ generated by $q\mathcal N_G^{(1)}(H)$. Finally, let $q\mathcal N_{L(G)}(L(H))''$ be the quasi-normalizer algebra of $L(H)$ in $L(G)$, \textit{i.e.}, the two-sided version of the von Neumann subalgebra $W^*(q\mathcal{N}_{L(G)}^{(1)}(L(H)))$. Then Corollary 5.2 of \cite{FGS} shows that $$ q\mathcal N_{L(G)}(L(H))''=L(H_1) $$ and that $$ W\sp\ast(q\mathcal{N}_{L(G)}^{(1)}(L(H)))=L(H_2). $$ Example 5.3 of the same article shows that it may happen that $H_1\not=H_2$, hence that it is possible to have a triple $H<K=H_2<G$ that satisfies condition (SS) (by Corollaries 4.1 and 5.2 of \cite{FGS}) but with $\mathcal N_{L(G)}(L(H))''\subsetneqq W^*(q\mathcal{N}_{L(G)}^{(1)}(L(H)))$. \newline (2) Let $H<G$ be a pair of groups as above, $H$ being abelian. Corollary 5.7 of \cite{FGS} shows that if, furthermore, $L(H)$ is a MASA in $L(G)$, then $$ \mathcal{N}_{L(G)}(L(H))''=L(\mathcal{N}_G(H)). $$ As will be seen in the next section, the last equality can fail if we do not assume that $L(H)$ is a MASA in $L(G)$. \par In fact, the family of examples of triples of groups in the next section allows us to propose examples as well as counterexamples to the above equality, namely, we exhibit triples $H<K<G$ such that $K=\mathcal{N}_G(H)$ and $L(K)=\mathcal N_G(L(H))''$ on the one hand, and, on the other hand, we will see that there are triples $H<K<G$ with $K=\mathcal{N}_G(H)$ but $L(K)\subsetneqq \mathcal{N}_{L(G)}(L(H))''$. \end{remark}
We end the present section with a result that applies in the framework of condition (ST), but its relationship with strong mixing is still unclear. It is a generalization of Corollary 2.6 of \cite{Popa83}. In order to state it, we recall the definition of commuting squares. \par Let $M$ be a finite von Neumann algebra endowed with a normal, finite, faithful, normalized trace $\tau$ and let $B_0$ and $B_1$ be von Neumann subalgebras of $M$ with the same unit. The diagram $$ \begin{array}{ccc} B_0 & \subset & M\\ \cup & & \cup\\ B_0\cap B_1 & \subset & B_1 \end{array} $$ is a \textit{commuting square} if $$ \mathbb{E}_{B_0\cap B_1}(b_0b_1)=\mathbb{E}_{B_0\cap B_1}(b_0)\mathbb{E}_{B_0\cap B_1}(b_1) $$ for all $b_j\in B_j$, $j=0,1$, or, equivalently, if $\mathbb{E}_{B_0}\mathbb{E}_{B_1}=\mathbb{E}_{B_1}\mathbb{E}_{B_0}=\mathbb{E}_{B_0\cap B_1}$. See Chapter 4 in \cite{GHJ} for other equivalent conditions and further details. As is well known, if $G_1$ and $G_2$ are subgroups of $G$, the system of inclusions $$ \begin{array}{ccc} L(G_1)& \subset & L(G)\\ \cup & & \cup\\ L(G_1\cap G_2) & \subset & L(G_2) \end{array} $$ is a commuting square. Thus, for every subgroup $H$ of $G$ and for every $g\in G$, the following diagram is a commuting square $$ \begin{array}{ccc} L(gHg^{-1})& \subset & L(G)\\ \cup & & \cup\\ L(gHg^{-1}\cap H) & \subset & L(H). \end{array} $$ In particular, if $H<K<G$ is a triple that satisfies condition (ST), if $g\in G\setminus K$, the commuting square $$ \begin{array}{ccc} L(gHg^{-1})& \subset & L(G)\\ \cup & & \cup\\ L(gHg^{-1}\cap H) & \subset & L(H) \end{array} $$ satisfies the hypotheses of the following proposition since $L(gHg^{-1}\cap H)$ is finite-dimensional, hence atomic for every $g\in G\setminus K$. Thus, every such $g$ is orthogonal to $\mathcal N_{L(G)}(L(H))''$.
\begin{proposition} Let $M$ be a finite von Neumann algebra endowed with some normal, faithful, finite, normalized trace $\tau$, let $1\in B\subset M$ be a diffuse von Neumann subalgebra of $M$ and let $u\in U(M)$ be such that $$ \begin{array}{ccc} uBu^*& \subset & M\\ \cup & & \cup\\ uBu^*\cap B & \subset & B \end{array} $$ is a commuting square, and assume that $uBu^*\cap B$ has the following properties: its center is atomic and its relative commutant $(uBu^*\cap B)'\cap M$ is diffuse (the latter conditions are automatically satisfied if $uBu^*\cap B$ itself is atomic). Then $u$ is orthogonal to $\mathcal{N}_M(B)''$. \end{proposition} \begin{proof} Let us first fix $v\in\mathcal{N}_M(B)$ and let us prove that $\tau(vu)=0$. As $vBv^*=B$, the diagram $$ \begin{array}{ccc} vuBu^*v^*& \subset & M\\ \cup & & \cup\\ vuBu^*v^*\cap B & \subset & B \end{array} $$ is a commuting square as well, $C:=vuBu^*v^*\cap B=v(uBu^*\cap B)v^*$ has atomic center and its relative commutant $C'\cap M$ is still diffuse. Moreover, recall that one has $\mathbb{E}_C(xb)=\mathbb{E}_C(bx)$ for all $b\in C'\cap B$, $x\in M$, and that $\mathbb{E}_C(b)\in Z(C)$. \par If $(z_j)_{j\geq 1}$ is the set of minimal projections of the center $Z(C)$, then each reduced algebra $Cz_j$ is a finite subfactor of the reduced von Neumann algebra $z_jMz_j$, and its relative commutant is the diffuse algebra $z_j(C'\cap B)z_j=(Cz_j)'\cap z_jBz_j$. If $\tau_j$ is the normalized trace on $z_jMz_j$ defined by $\tau_j(z_jxz_j)=\frac{1}{\tau(z_j)}\tau(z_jxz_j)$ for all $x\in M$, then the associated conditional expectation $\mathbb{E}_{Cz_j}$ satisfies the following identity: $\mathbb{E}_{Cz_j}(z_jxz_j)=\mathbb{E}_C(z_jxz_j)z_j$ for all $x\in M$. In particular, one has $\mathbb{E}_{Cz_j}(y)=\tau_j(y)z_j$ for every $y\in z_j(C'\cap B)z_j$. \par Let us fix $\varepsilon>0$. 
We claim that there exists a partition of unity $(e_i)_{1\leq i\leq n}$ in $C'\cap B$ so that $\Vert \mathbb{E}_C(e_i)\Vert\leq\varepsilon$ for every $i$. Indeed, choose a positive integer $m$ such that $2^{-m}\leq\varepsilon$ and then, for every $j$, a partition of unity $(e_{j,i})_{1\leq i\leq 2^m}\subset z_j(C'\cap B)z_j$ such that $\tau_j(e_{j,i})=2^{-m}$ for all $i$; such a partition exists because $z_j(C'\cap B)z_j$ is diffuse. Finally, set $n=2^m$ and $e_i=\sum_je_{j,i}$. Then $(e_i)_{1\leq i\leq n}$ is a partition of unity in $C'\cap B$ and, by the above considerations, $$ \mathbb{E}_C(e_i)=\sum_j\tau_j(e_{j,i})z_j=2^{-m},\quad\text{hence}\quad \Vert\mathbb{E}_C(e_i)\Vert=2^{-m}\leq\varepsilon\quad\forall i. $$ Let $D$ be the abelian von Neumann algebra generated by the projections $(e_i)$. We recall that $$ \mathbb{E}_{D'\cap M}(x)=\sum_i e_ixe_i\quad\forall x\in M, $$ and we will make use of the following identity $$ \mathbb{E}_C(vue_iu^*v^*e_i)=\mathbb{E}_C(vue_iu^*v^*)\mathbb{E}_C(e_i) $$ which is true since $vue_iu^*v^*\in vuBu^*v^*$, $e_i\in B$ and by the commuting square condition. One has: \begin{eqnarray*}
|\tau(vu)|^2 &=&
|\tau(\mathbb{E}_{D'\cap M}(vu))|^2\leq \Vert \mathbb{E}_{D'\cap M}(vu)\Vert_2^2\\ &= &
\tau(|\mathbb{E}_{D'\cap M}(vu)|^2)=
\tau(\mathbb{E}_C(|\mathbb{E}_{D'\cap M}(vu)|^2)). \end{eqnarray*} But, \begin{eqnarray*}
\mathbb{E}_C(|\mathbb{E}_{D'\cap M}(vu)|^2) &=&
\mathbb{E}_C\left(\left|\sum_i e_ivue_i\right|^2\right)\\ &=& \mathbb{E}_C(\sum_{i,j}e_ivue_ie_ju^*v^*e_j)\\ &=& \sum_i \mathbb{E}_C(e_ivue_iu^*v^*e_i)\\ &=&\sum_i \mathbb{E}_C(vue_iu^*v^*)\mathbb{E}_C(e_i)\\ &=& \sum_i \mathbb{E}_C(vue_iu^*v^*)^{1/2}\mathbb{E}_C(e_i)\mathbb{E}_C(vue_iu^*v^*)^{1/2}\\ &\leq& \varepsilon \sum_i\mathbb{E}_C(vue_iu^*v^*)=\varepsilon \mathbb{E}_C(vuu^*v^*)=\varepsilon. \end{eqnarray*}
Thus, one has $|\tau(vu)|^2\leq \varepsilon$ for every $\varepsilon>0$, hence $\tau(vu)=0$ for every $v\in\mathcal{N}_M(B)$. By linearity and weak density, we get $\tau(xu)=0$ for every $x\in\mathcal{N}_M(B)''$, that is, $u$ is orthogonal to $\mathcal{N}_M(B)''$. \end{proof}
\begin{remark} Let $1\in C\subset M$ be a pair of finite von Neumann algebras. As we have seen in the proof above, the existence, for every $\varepsilon>0$, of partitions of unity $(e_i)_{1\leq i\leq n}$ in $C'\cap M$ such that $\Vert \mathbb{E}_C(e_i)\Vert \leq \varepsilon$ for all $i$, is implied by two conditions: (i) the center $Z(C)$ is atomic, and (ii) the relative commutant $C'\cap M$ is diffuse. In fact, the existence of such partitions of unity requires both conditions. Indeed, suppose for instance that $M$ is a type II$_1$ factor; firstly, if $C$ is a MASA in $M$, then its center and its relative commutant are equal and diffuse, and $\mathbb{E}_C(e)=e$ for every nonzero projection $e\in C'\cap M=C$, thus $\Vert \mathbb{E}_C(e)\Vert=1$ for all such projections. Secondly, if $C$ is a subfactor of finite index in $M$, then its center is atomic, but its relative commutant is finite-dimensional. Hence there exists a constant $c>0$ such that $$ \Vert \mathbb{E}_C(e)\Vert =\tau(e)\geq c $$ for every nonzero projection $e\in C'\cap M$. \end{remark}
\section{Families of examples}
As indicated above, we are going to give examples of triples of groups which satisfy conditions (SS) or (ST). They will be based on semidirect products. \par Thus, let $K$ be a group, let $H<K$ be a subgroup of $K$ and assume that $K$ acts on some group $A$ through an action $\alpha$. Put $G=A\rtimes K$, and identify $K$ with the subgroup $\{e\}\times K$ of $G$. For future use, we put $A^*=A\setminus\{e\}$. \par A case that might be of interest comes from generalized wreath product groups as in the next example.
\begin{example} Let $K$ be an arbitrary countable group. Assume that $K$ acts on some countable set $X$, and take any nontrivial group $Z$. Let $A=Z^{(X)}$ be the group of all maps $a:X\rightarrow Z$ such that $a(x)=e$ except for some finite subset of $X$. Then $K$ acts by left translation on $A$, and the corresponding group $G$ is the generalized wreath product group $Z\wr_X K$. \end{example}
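For concreteness, we add the smallest instance of this construction as an illustration; the verification below is elementary and standard.

```latex
\begin{example}
Take $K=\mathbb{Z}$ acting on $X=\mathbb{Z}$ by translation and $Z=\mathbb{Z}/2\mathbb{Z}$,
so that $A=(\mathbb{Z}/2\mathbb{Z})^{(\mathbb{Z})}$ and
$G=(\mathbb{Z}/2\mathbb{Z})\wr_{\mathbb{Z}}\mathbb{Z}$ is the lamplighter group.
For $H=K=\mathbb{Z}$ and $a\in A^*$, the support of $a$ is a nonempty finite subset of
$\mathbb{Z}$, so the translates $\alpha_h(a)$, $h\in\mathbb{Z}$, are pairwise distinct:
every orbit $H\cdot a$ is infinite and every stabilizer $\{h\in H:\alpha_h(a)=a\}$ is trivial.
\end{example}
```

In particular, for this triple all the orbit and stabilizer conditions on $A^*$ considered below are satisfied.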
We first present a condition which implies that $K$ is the normalizer of $H$ in $G$.
\begin{proposition} Let $H<K<G=A\rtimes K$ be a triple as above. Assume furthermore that $H$ is a normal subgroup of $K$ and that $e\in A$ is the only element $a\in A$ such that $\alpha_h(a)=a$ for all $h\in H$. Then $\mathcal N_G(H)=K$. \end{proposition} \begin{proof} One has obviously $K\subset\mathcal N_G(H)$. Conversely, if $g=(a,k)\in \mathcal N_G(H)$, then one has for every $h\in H$: \begin{eqnarray*} ghg^{-1} &=& (a,k)(e,h)(\alpha_{k^{-1}}(a^{-1}),k^{-1})=(a,kh)(\alpha_{k^{-1}}(a^{-1}),k^{-1})\\ &=& (a\alpha_{khk^{-1}}(a^{-1}),khk^{-1}) \end{eqnarray*} which belongs to $H$ if and only if $\alpha_{khk^{-1}}(a)=a$, since $khk^{-1}\in H$ by the normality of $H$ in $K$. As $h\mapsto khk^{-1}$ is a bijection of $H$, this holds for every $h\in H$ if and only if $\alpha_h(a)=a$ for every $h\in H$. Hence $a=e$ and $g=(e,k)\in K$. \end{proof}
Let us now see under which conditions the triple $H<K<G=A\rtimes K$ satisfies condition (SS) or (ST). The following theorem is partly inspired by Theorem 2.2 of \cite{RSS}. See also Proposition 4.6 of \cite{JS}.
\begin{theorem} Let $H<K<G=A\rtimes K$ be a triple as above. \begin{enumerate}
\item[(a)] The triple $H<K<G$ satisfies condition (ST) if and only if, for every $a\in A^*$, $|\{h\in H:\alpha_h(a)=a\}|<\infty$. \item[(b)] The triple $H<K<G$ satisfies condition (SS) if and only if, for every finite set $E\subset A^*$, there exists $h\in H$ such that $E\cap \alpha_h(E)=\emptyset$. Equivalently, for every $a\in A^*$, the $H$-orbit $H\cdot a$ is infinite. \end{enumerate} \end{theorem} \begin{proof} (a) $(\Rightarrow)$ Let $a\in A^*$. Then $(a,e)\in G\setminus K$, and $H\cap (a,e)H(a^{-1},e)$ is finite. But, if $h\in H$, one has $(a,e)(e,h)(a^{-1},e)=(a\alpha_h(a^{-1}),h)\in H$ if and only if $\alpha_h(a)=a$. The set of such elements $h\in H$ must therefore be finite.\\ $(\Leftarrow)$ Let $g\in G\setminus K$; let us prove that $H\cap gHg^{-1}$ is finite. Put $g=(a,k)$. If $h\in H$, one has \begin{eqnarray*} ghg^{-1} &=& (a,k)(e,h)(\alpha_{k^{-1}}(a^{-1}),k^{-1})\\ &=& (a,kh)(\alpha_{k^{-1}}(a^{-1}),k^{-1})\\ &=& (a\alpha_{khk^{-1}}(a^{-1}),khk^{-1}). \end{eqnarray*} Thus, $ghg^{-1}\in H$ if and only if $khk^{-1}\in H$ and $\alpha_{khk^{-1}}(a)=a$. Since $g\notin K$, one has $a\in A^*$, so the set $\{h'\in H:\alpha_{h'}(a)=a\}$ is finite by hypothesis; as $h\mapsto khk^{-1}$ is injective, it follows that $H\cap gHg^{-1}$ is finite. \par
\noindent (b) $(\Rightarrow)$ Let $E\subset A^*$ be finite. Replacing $E$ by $E\cup E^{-1}$ if necessary, we may assume that $E=E^{-1}$. Then $F:=\{(a,e): a\in E \}\subset G\setminus K$ and there exists $h\in H$ such that $F(e,h)F\cap H=\emptyset$. In particular, for all $a_1,a_2\in E$ one has $(a_1,e)(e,h)(a_2,e)=(a_1\alpha_h(a_2),h)\notin H$, hence $a_1\alpha_h(a_2)\neq e$, that is, $\alpha_h(a_2)\neq a_1^{-1}$. Since $E=E^{-1}$, this gives $E\cap\alpha_h(E)=\emptyset$.\\ $(\Leftarrow)$ Let $F\subset G\setminus K$ be finite. There exist $F_1\subset A^*$ and $F_2\subset K$ finite such that $F\subset F_1\times F_2$. Observe that, for $(a_j,k_j)\in F_1\times F_2$, $j=1,2$, and for $h\in H$, one has $$ (a_1,k_1)(e,h)(a_2,k_2)=(a_1,k_1h)(a_2,k_2)=(a_1\alpha_{k_1h}(a_2),k_1hk_2) $$ hence, if it belongs to $H$, one must have $a_1\alpha_{k_1}(\alpha_h(a_2))=e$, or equivalently, $\alpha_h(a_2)=\alpha_{k_1^{-1}}(a_1^{-1})$. Taking $E=F_1\cup\{\alpha_{k^{-1}}(a^{-1}): k\in F_2, a\in F_1 \}$, if $h\in H$ is such that $E\cap \alpha_h(E)=\emptyset$, then necessarily $F(e,h)F\cap H=\emptyset$. \end{proof}
\begin{remark}
Let us consider the case where $A=Z^{(X)}$ as in Example 3.1: $K$ acts on the countable set $X$. Then $|\{h\in H:\alpha_h(a)=a\}|<\infty$ for every $a\in A^*$ if and only if, for every $x\in X$, the stabilizer $H_x=\{h\in H: h\cdot x=x\}$ is finite. By Proposition 2.3 of \cite{KT}, it is equivalent to the fact that the action of $H$ by generalized Bernoulli shifts on $(Y^X,\nu^X)$ is mixing, where $(Y,\nu)$ is some standard probability space. \par In the same vein, for every nonempty, finite set $E\subset A^*$ one can find $h\in H$ such that $E\cap \alpha_h(E)=\emptyset$ if and only if the action of $H$ on $X$ has infinite orbits. By Proposition 2.1 of \cite{KT}, it is equivalent to the fact that the corresponding Bernoulli shift action of $H$ on $(Y^X,\nu^X)$ is weakly mixing. \end{remark}
The following corollary is a straightforward consequence of Theorem 2.7, Proposition 3.2 and Theorem 3.3.
\begin{corollary} Let $H, K$ and $A$ be as above, denote by $H<K<G$ the associated triple of groups and assume that: \begin{enumerate}
\item [(1)] for every $a\in A^*$, there exists $h\in H$ such that $\alpha_h(a)\not=a$;
\item [(2)] for every finite set $E\subset A^*$, there exists $h\in H$ such that $E\cap \alpha_h(E)=\emptyset$. \end{enumerate} Then $L(K)=\mathcal{N}_{L(G)}(L(H))''$. More generally, if $G$ acts on some finite von Neumann algebra $(Q,\tau)$, then $Q\rtimes K=\mathcal{N}_{Q\rtimes G}(Q\rtimes H)''$. \end{corollary}
As promised in the preceding section, we give examples of triples $H<K<G$ such that $\mathcal{N}_G(H)=K$ but $L(K)\subsetneqq \mathcal{N}_{L(G)}(L(H))''$.
\begin{example} Let $H\triangleleft K$ be infinite, countable groups, let $A$ be an infinite, countable group endowed with an action of $K$, and assume that: \begin{enumerate}
\item [(i)] for every $a\in A^*$, there exists $h\in H$ such that $\alpha_h(a)\not=a$;
\item [(ii)] there exists $a_0\in A^*$ whose orbit $H\cdot a_0$ is finite. \end{enumerate} Let $H<K<G$ be the associated triple of groups. Then the first condition implies that $\mathcal{N}_G(H)=K$ by Proposition 3.2, and the second one implies that the triple does not satisfy condition (SS) by Theorem 3.3. Let us prove that $L(K)\subsetneqq \mathcal{N}_{L(G)}(L(H))''$. Put $F=\{(\alpha_h(a_0),e): h\in H\}$ which is a finite set by the second condition above. Define $$ x=\sum_{g\in F\cup F^{-1}}\lambda_g. $$ Then $x$ is a selfadjoint element of $L(G)\ominus L(K)$ since $F\cap K=\emptyset$. Furthermore, it is easy to see that $x\in L(H)'\cap L(G)$. As $x\notin L(K)$, there exists a spectral projection $e$ of $x$ which does not belong to $L(K)$ either. Then $u:=2e-1$ is a unitary element of $L(H)'\cap L(G)$ hence it belongs to the normalizer $\mathcal{N}_{L(G)}(L(H))$, but $u\notin L(K)$. Thus $L(K)\subsetneqq \mathcal{N}_{L(G)}(L(H))''$. Observe that $H$ can be an abelian group, and this shows that, in order to have the equality $\mathcal{N}_{L(G)}(L(H))''=L(\mathcal{N}_G(H))$ in Corollary 5.7 of \cite{FGS}, one must assume that $L(H)$ is a MASA in $L(G)$. \end{example}
\end{document} |
\begin{document}
\title[Non-convex play operator]{Continuity of the non-convex play operator \\ in the space of rectifiable curves}
\author{Jana Kopfov\'a, Vincenzo Recupero} \thanks{The second author is a member of GNAMPA-INdAM}
\address{\textbf{Jana Kopfov\'a} \\ Mathematical Institute of the Silesian University\\ Na Rybn\' i\v cku 1, CZ-74601 Opava\\ Czech Republic.
\newline
{\rm E-mail address:}
{\tt [email protected]}}
\address{\textbf{Vincenzo Recupero} \\
Dipartimento di Scienze Matematiche \\
Politecnico di Torino \\
Corso Duca degli Abruzzi 24 \\
I-10129 Torino \\
Italy. \newline
{\rm E-mail address:}
{\tt [email protected]}}
\subjclass[2010]{34G25, 34A60, 47J20, 49J52, 74C05} \keywords{Evolution variational inequalities, Play operator, Sweeping processes, Functions of bounded variation, Prox-regular sets}
\begin{abstract} In this paper we prove that the vector play operator with a uniformly prox-regular characteristic set of constraints is continuous with respect to the ${\textsl{BV}\hspace{0.17ex}}$-norm and to the ${\textsl{BV}\hspace{0.17ex}}$-strict metric in the space of continuous functions of bounded variation. We do not assume any further regularity of the characteristic set. We also prove that the non-convex play operator is rate independent. \end{abstract}
\maketitle
\thispagestyle{empty}
\section{Introduction}
Several phenomena in elasto-plasticity, ferromagnetism, and phase transitions are modeled by the following evolution variational inequality in a real Hilbert space $\H$ with the inner product $\duality{\cdot}{\cdot}$: \begin{eqnarray}\label{var in-intro}
\duality{z - u(t) + y(t)}{y'(t)} \le 0 && \forall z \in \Z, \quad t \in \clint{0,T},\\ \label{constraint} u(t) - y(t) \in \Z && \forall t \in \clint{0,T}. \end{eqnarray} Here $u : \clint{0,T} \function \H$ is a given ``input'' function, $T > 0$ being the final time of evolution, and $y : \clint{0,T} \function \H$ is the unknown function, $y'$ being its derivative. It is assumed that the set $\Z$ in the constraint \eqref{constraint} is a closed convex subset of $ \H,$ and it is usually called \emph{the characteristic set}. We refer to the monographs \cite{KrPo,Ma,Vi,BrSp,Kre96,MieRou15} for surveys on these physical models. It is well-known (see, e.g., \cite{Kre96}), that if $u$ is absolutely continuous, then there exists a unique absolutely continuous solution $y$ of \eqref{var in-intro}-\eqref{constraint} together with the given initial condition \begin{equation}\label{initial cond-intro}
u(0) - y(0) = z_{0} \in \Z. \end{equation} If we set ${\mathsf{P}}(u,z_0) := y$, we obtain a solution operator ${\mathsf{P}} : {\textsl{W}\hspace{0.17ex}}^{1,1}(\clint{0,T};\H) \times \Z \function {\textsl{W}\hspace{0.17ex}}^{1,1}(\clint{0,T};\H)$ which is called the \emph{play operator}. Here ${\textsl{W}\hspace{0.17ex}}^{1,1}(\clint{0,T};\H)$ denotes the space of $\H$-valued absolutely continuous functions defined on $\clint{0,T}$ (precise definitions will be given in Sections \ref{S:Preliminaries} and \ref{S:state main result}). An important feature of ${\mathsf{P}}$ is its \emph{rate independence}, i.e. \begin{equation}\label{rate ind}
{\mathsf{P}}(u \circ \phi) = {\mathsf{P}}(u) \circ \phi \end{equation} whenever $\phi : \clint{0,T} \function \clint{0,T}$ is an increasing surjective Lipschitz continuous reparametrization of time. The play operator can be extended to continuous functions of bounded variation, i.e. to inputs $u \in {\textsl{C}\hspace{0.18ex}}(\clint{0,T};\H) \cap {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H)$ (\cite{Kre96}). This can be done by reformulating \eqref{var in-intro} as an integral variation inequality: \begin{equation}\label{play BV-integral inequality}
\int_{0}^{T} \duality{z(t) - u(t) + y(t)}{\de y(t)} \le 0, \qquad \forall z \in {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\Z), \end{equation} where the integral can be interpreted as a Riemann-Stieltjes integral (see, e.g., \cite{Kre96}), but also as a Lebesgue integral with respect to the differential measure $\De y$, the distributional derivative of $y$ (see \cite{Rec11} for the equivalence of the two formulations). By \cite{Kre96} for every $u \in {\textsl{C}\hspace{0.18ex}}(\clint{0,T};\H) \cap {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H)$ there exists a unique $y \in {\textsl{C}\hspace{0.18ex}}(\clint{0,T};\H) \cap {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H)$ such that \eqref{play BV-integral inequality}, \eqref{constraint}, \eqref{initial cond-intro} hold. Therefore the play operator can be extended to the operator ${\mathsf{P}} : {\textsl{C}\hspace{0.18ex}}(\clint{0,T};\H) \cap {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H) \times \Z \to {\textsl{C}\hspace{0.18ex}}(\clint{0,T};\H) \cap {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H).$ Its domain of definition is naturally endowed with the strong ${\textsl{BV}\hspace{0.17ex}}$-norm defined by \begin{equation}\label{def BVnorm}
\norm{u}{{\textsl{BV}\hspace{0.17ex}}} := \norm{u}{\infty} + \V(u,\clint{0,T}), \qquad u \in {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H), \end{equation} where $\norm{u}{\infty}$ is the supremum norm of $u$ and $\V(u,\clint{0,T})$ is the total variation of $u$. For absolutely continuous inputs the ${\textsl{BV}\hspace{0.17ex}}$-norm is exactly the standard ${\textsl{W}\hspace{0.17ex}}^{1,1}$-norm, and the continuity of ${\mathsf{P}}$ on ${\textsl{W}\hspace{0.17ex}}^{1,1}(0,T;\H)$ in this special case was proved in \cite{Kre91} for finite dimensional $\H$ and in \cite{Kre96} for separable Hilbert spaces. For such spaces $\H$, assuming $\Z$ has a smooth boundary, the ${\textsl{BV}\hspace{0.17ex}}$-norm continuity of ${\mathsf{P}}$ on ${\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H) \cap {\textsl{C}\hspace{0.18ex}}(\clint{0,T};\H)$ (respectively on ${\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H)$) was proved in \cite{BroKreSch04} (respectively in \cite{KrRo}). Under this additional regularity of $\Z$, in \cite{BroKreSch04, KrRo} it is also shown that ${\mathsf{P}}$ is locally Lipschitz continuous. In \cite{KopRec16} we were able to drop the regularity of $\Z$ and we proved that ${\mathsf{P}}$ is ${\textsl{BV}\hspace{0.17ex}}$-norm continuous on ${\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H)$ for an arbitrary characteristic set $\Z$.
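For the reader's orientation we recall the classical scalar example; the explicit update formula below is standard (see, e.g., \cite{BrSp}), and we assume a partition $0=t_0<\dots<t_N=T$ into intervals on which the input $u$ is monotone (e.g. a piecewise monotone input).

```latex
\begin{equation*}
\H=\mathbb{R},\qquad \Z=\clint{-\rho,\rho}\ (\rho>0),\qquad
y(t)=\max\bigl\{u(t)-\rho,\ \min\{u(t)+\rho,\ y(t_{i-1})\}\bigr\},
\quad t\in\clint{t_{i-1},t_i},
\end{equation*}
```

with $y(0)=u(0)-z_0$: the output $y$ stays constant as long as $u-y$ remains in the interior of $\Z$, and follows $u$ at distance $\rho$ when the constraint is active.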
Another relevant topology in ${\textsl{BV}\hspace{0.17ex}}$ is the one induced by the so-called \emph{strict metric}, which is defined by \begin{equation}\label{def strictBV}
d_{s}(u,v) := \norm{u - v}{\infty} + |\V(u,\clint{0,T}) - \V(v,\clint{0,T})|, \qquad
u, v \in {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H); \end{equation} indeed, every $u \in {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H)$ can be approximated by a sequence $u_n \in {\textsl{AC}\hspace{0.17ex}}(\clint{0,T};\H)$ converging to $u$ in the strict metric. In \cite{Kre96} it is proved that ${\mathsf{P}}$ is continuous on ${\textsl{C}\hspace{0.18ex}}(\clint{0,T};\H) \cap {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H)$ with respect to the strict metric (shortly, ``strictly continuous''), provided $\Z$ has a smooth boundary. In \cite{Rec11} this regularity assumption is dropped and it is proved that ${\mathsf{P}}$ is continuous on ${\textsl{C}\hspace{0.18ex}}(\clint{0,T};\H) \cap {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H)$ with respect to the strict metric for every characteristic convex set $\Z$. In \cite{Rec11} it is also proved that in general ${\mathsf{P}}$ is not strictly continuous on the whole ${\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H)$. For other results on the continuity properties of ${\mathsf{P}}$ we refer to \cite{Re4, Rec15a, KleRec16}.
The previous results are concerned with the case of a convex set $\Z$; however, in some applications, e.g. in problems of crowd motion modeling (see \cite{Venel}), the characteristic set of constraints can be non-convex.
In the following we will restrict ourselves to uniformly prox-regular sets, that is, closed sets having a neighborhood where the projection exists and is unique. For the notion of prox-regularity we refer the reader to \cite{Fed, vial, ClaSteWol95, PolRocThi00, ColThi10}. Following e.g. \cite{ColMon03, KreMonRec22a, KreMonRec22b}, we see that the proper formulation of \eqref{play BV-integral inequality} in the case of a prox-regular set $\Z$ reads \begin{equation}\label{BVnonconv}
\int_{0}^T\duality{z(t) - u(t) + y(t)}{\de y(t)}
\le
\frac{1}{2r} \int_0^T \norm{z(t)-u(t)+y(t)}{}^2 \de V_y(t)\qquad \forall z \in {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\Z), \end{equation} where $V_y(t) = \V(y,\clint{0,t})$ for $t \in \clint{0,T}$ and $\norm{\cdot}{}$ is the norm in $\H$. It is well-known (cf., e.g., \cite{EdmThi06} or \cite{KreMonRec22a}) that for every $u \in {\textsl{C}\hspace{0.18ex}}(\clint{0,T};\H) \cap {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H)$ there exists a unique $y = {\mathsf{P}}(u,z_0) \in {\textsl{C}\hspace{0.18ex}}(\clint{0,T};\H) \cap {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H)$ which satisfies \eqref{BVnonconv}, \eqref{constraint}, \eqref{initial cond-intro}. Thus also in the non-convex case the solution operator \[ {\mathsf{P}} : {\textsl{C}\hspace{0.18ex}}(\clint{0,T};\H) \cap {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H) \times \Z \to {\textsl{C}\hspace{0.18ex}}(\clint{0,T};\H) \cap {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H), \] of problem \eqref{BVnonconv}, \eqref{constraint}, \eqref{initial cond-intro} can be defined, and we will call it the \emph{non-convex play operator}. In \cite{KreMonRec22b} it is proved that in ${\textsl{W}\hspace{0.17ex}}^{1,1}(\clint{0,T};\H)$ the operator ${\mathsf{P}}$ is continuous (and also locally Lipschitz continuous) with respect to the strong ${\textsl{BV}\hspace{0.17ex}}$-norm under a suitable regularity assumption on $\Z$; to be more precise, it is required that $\Z$ is the sublevel set of a Lipschitz continuous function. In the present paper we prove that ${\mathsf{P}}$ is ${\textsl{BV}\hspace{0.17ex}}$-norm continuous on the larger space ${\textsl{C}\hspace{0.18ex}}(\clint{0,T};\H) \cap {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H)$ and for every characteristic prox-regular set $\Z$. We also prove that it is continuous with respect to the strict metric on the space of continuous functions of bounded variation.
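Let us also point out, as a consistency check (this formal observation is not needed in the sequel), that \eqref{BVnonconv} degenerates to the convex formulation \eqref{play BV-integral inequality} as $r\to\infty$: for fixed $u$, $y$ and $z$ the right-hand side satisfies

```latex
\begin{equation*}
0\ \le\ \frac{1}{2r}\int_0^T \norm{z(t)-u(t)+y(t)}{}^2 \de V_y(t)
\ \le\ \frac{C^2}{2r}\,\V(y,\clint{0,T})\ \longrightarrow\ 0
\qquad\text{as } r\to\infty,
\end{equation*}
```

where $C:=\sup_{t\in\clint{0,T}}\norm{z(t)-u(t)+y(t)}{}$ is finite because $u$, $y$ and $z$ are bounded.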
The technique of our proof consists in reducing the problem to the space of Lipschitz continuous functions, where it is considerably easier. In order to perform the reduction we use the rate independence of ${\mathsf{P}}$, which, to the best of our knowledge, is proved here for the first time in the non-convex case. The question of the ${\textsl{BV}\hspace{0.17ex}}$-norm continuity on the whole space ${\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H)$ will be addressed in a future paper: in that case the presence of jumps makes the problem considerably more difficult and the reparametrization method studied in \cite{Rec16a,Rec20} is needed.
The plan of the paper is the following: In Section 2 we recall the preliminaries needed to prove our results, which are stated in Section 3. In Section 4 we give the proofs.
\section{Preliminaries}\label{S:Preliminaries}
The set of integers greater than or equal to $1$ will be denoted by $\en$.
\subsection{Prox-regular sets}
Throughout this paper we assume that \begin{equation}\label{H-prel} \begin{cases}
\text{$\H$ is a real Hilbert space with the inner product
$\duality{x}{y}$}, \\
\H \neq \{0\}, \\
\norm{x}{} := \duality{x}{x}^{1/2} \qquad \text{for $x \in \H$}. \end{cases} \end{equation} If $\S \subseteq \H$ and $x \in \H$ we set $\d_\S(x) := \inf\{\norm{x-s}{}\ :\ s \in \S\}$.
\begin{Def} If $\K$ is a closed subset of $\H$, $\K \neq \void$, and $y \in \H$, we define the \emph{set of projections of $y$ onto $\K$} by setting \begin{equation}
\Proj_\K(y) := \left\{x \in \K\ :\ \norm{x-y}{} = \inf_{z \in \K} \norm{z-y}{}\right\} \end{equation} and the \emph{(exterior) normal cone of $\K$ at $x$} by \begin{equation}\label{normal cone}
N_\K(x) := \{\lambda(y-x) \ :\ x \in \Proj_\K(y),\ y \in \H,\ \lambda \ge 0\}. \end{equation} \end{Def}
We recall the notion of prox-regularity (see \cite[Theorem 4.1-(d)]{ClaSteWol95}), which can be regarded as a form of ``mild non-convexity''.
\begin{Def} If $\K$ is a closed subset of $\H$ and if $r \in \opint{0,\infty}$, we say that $\K$ is \emph{$r$-prox-regular} if for every $y \in \{v \in \H\ :\ 0 < \d_{\K}(v) < r\}$ we have that $\Proj_\K(y) \neq \void$ and \[
x \in \Proj_\K\left(x+r\frac{y-x}{\norm{y-x}{}}\right), \qquad \forall x \in \Proj_\K(y). \] \end{Def}
It is well-known and easy to prove that if $x \in \Proj_\K(y_0)$ for some $y_0 \in \H$, then $\Proj_\K(y) = \{x\}$ for every $y$ lying in the segment with endpoints $y_0$ and $x$. Thus it follows that if $\K$ is $r$-prox-regular for some $r > 0$, then $\Proj_\K(y)$ is a singleton for every $y \in \{v \in \H\ :\ 0 < \d_{\K}(v) < r\}$.
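A standard non-convex example may help to fix the ideas: the complement of an open ball. We include the short verification, which only uses the definitions above.

```latex
\begin{equation*}
\K:=\{x\in\H\ :\ \norm{x}{}\ge r\}.
\end{equation*}
If $0<\d_\K(y)<r$, i.e. $0<\norm{y}{}<r$, then $\Proj_\K(y)=\{x\}$ with
$x=ry/\norm{y}{}$, and, since $y-x$ is a negative multiple of $x/\norm{x}{}$,
\begin{equation*}
x+r\,\frac{y-x}{\norm{y-x}{}}\;=\;x-r\,\frac{x}{\norm{x}{}}\;=\;0,
\qquad x\in\Proj_\K(0)=\{z\in\H\ :\ \norm{z}{}=r\},
\end{equation*}
so $\K$ is $r$-prox-regular, while it is clearly not convex.
\end{equation*}
```

Here the displayed computation checks precisely the condition of the definition for the critical points $y$ with $0<\d_\K(y)<r$.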
Prox-regularity can be characterized by means of a variational inequality, indeed in \cite[Theorem 4.1]{PolRocThi00} and in \cite[Theorem 16]{ColThi10} one can find the proof of the following:
\begin{Thm}\label{charact proxreg} Let $\K$ be a closed subset of $\H$ and let $r \in \opint{0,\infty}$. Then $\K$ is $r$-prox-regular if and only if for every $x \in \K$ and $n \in N_\K(x)$ we have \[
\duality{n}{z-x} \le \frac{\norm{n}{}}{2r}\norm{z-x}{}^2, \qquad \forall z \in \K. \] \end{Thm}
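As a sanity check on this characterization (added for illustration), observe how convex sets fit into it with an arbitrary parameter.

```latex
If $\K$ is closed and convex, then every $n\in N_\K(x)$ satisfies
$\duality{n}{z-x}\le 0$ for all $z\in\K$, so the inequality of
Theorem \ref{charact proxreg} holds for every $r\in\opint{0,\infty}$:
a closed convex set is $r$-prox-regular for every $r>0$. Conversely,
letting $r\to\infty$ in the inequality formally recovers the classical
variational characterization $\duality{n}{z-x}\le 0$ of the normal cone
of a convex set.
```

This is consistent with the intuition that $1/r$ measures the defect of convexity of $\K$.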
\subsection{Functions of bounded variation}
Let $I$ be an interval of $\re$. The set of $\H$-valued continuous functions defined on $I$ is denoted by ${\textsl{C}\hspace{0.18ex}}(I;\H)$. For a function $f : I \function \H$ and for $S \subseteq I$ we write
$\Lipcost(f,S) := \sup\{\norm{f(t)-f(s)}{}/|t-s|\ :\ s, t \in S,\ s \neq t\}$, $\Lipcost(f) := \Lipcost(f,I)$, the Lipschitz constant of $f$, and ${\textsl{Lip}\hspace{0.15ex}}(I;\H) := \{f : I \function \H\ :\ \Lipcost(f) < \infty\}$, the set of $\H$-valued Lipschitz continuous functions on $I$.
\begin{Def} Given an interval $I \subseteq \re$, a function $f : I \function \H$, and a subinterval $J \subseteq I$, the \emph{variation of $f$ on $J$} is defined by \begin{equation}\notag
\pV(f,J) :=
\sup\left\{
\sum_{j=1}^{m} \norm{f(t_{j})-f(t_{j-1})}{}\ :\ m \in \en,\ t_{j} \in J\ \forall j,\ t_{0} < \cdots < t_{m}
\right\}. \end{equation} If $\pV(f,I) < \infty$ we say that \emph{$f$ is of bounded variation on $I$} and we set \[
{\textsl{BV}\hspace{0.17ex}}(I;\H) := \{f : I \function \H\ :\ \pV(f,I) < \infty\}. \] \end{Def}
It is well known that the completeness of $\H$ implies that every $f \in {\textsl{BV}\hspace{0.17ex}}(I;\H)$ admits one sided limits $f(t-), f(t+)$ at every point $t \in I$, with the convention that $f(\inf I-) := f(\inf I)$ if $\inf I \in I$, and that $f(\sup I+) := f(\sup I)$ if $\sup I \in I$. If $I$ is bounded we have ${\textsl{Lip}\hspace{0.15ex}}(I;\H) \subseteq {\textsl{BV}\hspace{0.17ex}}(I;\H)$.
\subsection{Differential measures}\label{differential measures}
Given an interval $I$ of the real line $\mathbb{R}$, the family of Borel sets in $I$ is denoted by $\mathscr{B}(I)$. If $\mu : \mathscr{B}(I) \function \clint{0,\infty}$ is a measure, $p \in \clint{1,\infty}$, then the space of $\H$-valued functions which are $p$-integrable with respect to $\mu$ will be denoted by ${\textsl{L}\hspace{0.17ex}}^p(I, \mu; \H)$ or simply by ${\textsl{L}\hspace{0.17ex}}^p(\mu; \H)$. For the theory of integration of vector valued functions we refer, e.g., to \cite[Chapter VI]{Lan93}. When $\mu = \leb^1$, where $ \leb^1$ is the one dimensional Lebesgue measure, we write ${\textsl{L}\hspace{0.17ex}}^p(I; \H) := {\textsl{L}\hspace{0.17ex}}^p(I,\mu; \H)$.
We recall that a \emph{$\H$-valued measure on $I$} is a map $\nu : \mathscr{B}(I) \function \H$ such that $\nu(\bigcup_{n=1}^{\infty} B_{n})$ $=$ $\sum_{n = 1}^{\infty} \nu(B_{n})$ for every sequence $(B_{n})$ of mutually disjoint sets in $\mathscr{B}(I)$. The \emph{total variation of $\nu$} is the positive measure $\vartot{\nu} : \mathscr{B}(I) \function \clint{0,\infty}$ defined by \begin{align}\label{tot var measure}
\vartot{\nu}(B)
:= \sup\left\{\sum_{n = 1}^{\infty} \norm{\nu(B_{n})}{}\ :\
B = \bigcup_{n=1}^{\infty} B_{n},\ B_{n} \in \mathscr{B}(I),\
B_{h} \cap B_{k} = \varnothing \text{ if } h \neq k\right\}. \notag \end{align} The vector measure $\nu$ is said to be \emph{with bounded variation} if $\vartot{\nu}(I) < \infty$. In this case the equality $\norm{\nu}{} := \vartot{\nu}(I)$ defines a complete norm on the space of measures with bounded variation (see, e.g. \cite[Chapter I, Section 3]{Din67}).
If $\mu : \mathscr{B}(I) \function \clint{0,\infty}$ is a positive bounded Borel measure and if $g \in {\textsl{L}\hspace{0.17ex}}^1(I,\mu;\H)$, then $g\mu : \mathscr{B}(I) \function \H$ denotes the vector measure defined by \begin{equation}
g\mu(B) := \int_B g\de \mu, \qquad B \in \mathscr{B}(I). \notag \end{equation} In this case we have that \[
\vartot{g\mu}(B) = \int_B \norm{g(t)}{}\de \mu \qquad \forall B \in \mathscr{B}(I) \] (see \cite[Proposition 10, p. 174]{Din67}).
Assume that $\nu : \mathscr{B}(I) \function \H$ is a vector measure with bounded variation and $f : I \function \H$ and $\phi : I \function \mathbb{R}$ are two \emph{step maps with respect to $\nu$}, i.e. there exist $f_{1}, \ldots, f_{m} \in \H$, $\phi_{1}, \ldots, \phi_{m} \in \mathbb{R}$ and $A_{1}, \ldots, A_{m} \in \mathscr{B}(I)$ mutually disjoint such that $\vartot{\nu}(A_{j}) < \infty$ for every $j$ and $f = \sum_{j=1}^{m} \indicator_{A_{j}} f_{j}$, $\phi = \sum_{j=1}^{m} \indicator_{A_{j}} \phi_{j},$ where $\indicator_{S} $ is the characteristic function of a set $S$, i.e. $\indicator_{S}(x) := 1$ if $x \in S$ and $\indicator_{S}(x) := 0$ if $x \not\in S$. For such step maps we define $\int_{I} \duality{f}{\de\nu} := \sum_{j=1}^{m} \duality{f_{j}}{\nu(A_{j})} \in \mathbb{R}$ and $\int_{I} \phi \de \nu := \sum_{j=1}^{m} \phi_{j} \nu(A_{j}) \in \H$.
If ${\textsl{St}\hspace{0.17ex}}(\vartot{\nu};\H)$ (resp. ${\textsl{St}\hspace{0.17ex}}(\vartot{\nu})$) is the set of $\H$-valued (resp. real valued) step maps with respect to $\nu$, then the maps ${\textsl{St}\hspace{0.17ex}}(\vartot{\nu};\H)$ $\function$ $\mathbb{R} : f \longmapsto \int_{I} \duality{f}{\de\nu}$ and ${\textsl{St}\hspace{0.17ex}}(\vartot{\nu})$ $\function$ $\H : \phi \longmapsto \int_{I} \phi \de \nu$ are linear and continuous when ${\textsl{St}\hspace{0.17ex}}(\vartot{\nu};\H)$ and ${\textsl{St}\hspace{0.17ex}}(\vartot{\nu})$ are endowed with the ${\textsl{L}\hspace{0.17ex}}^{1}$-seminorms $\norm{f}{{\textsl{L}\hspace{0.17ex}}^{1}(\vartot{\nu};\H)} := \int_I \norm{f}{} \de \vartot{\nu}$ and
$\norm{\phi}{{\textsl{L}\hspace{0.17ex}}^{1}(\vartot{\nu})} := \int_I |\phi| \de \vartot{\nu}$. Therefore they admit unique continuous extensions $\mathsf{I}_{\nu} : {\textsl{L}\hspace{0.17ex}}^{1}(\vartot{\nu};\H) \function \mathbb{R}$ and $\mathsf{J}_{\nu} : {\textsl{L}\hspace{0.17ex}}^{1}(\vartot{\nu}) \function \H$, and we set \[
\int_{I} \duality{f}{\de \nu} := \mathsf{I}_{\nu}(f), \quad
\int_{I} \phi\, \de\nu := \mathsf{J}_{\nu}(\phi),
\qquad f \in {\textsl{L}\hspace{0.17ex}}^{1}(\vartot{\nu};\H),\quad \phi \in {\textsl{L}\hspace{0.17ex}}^{1}(\vartot{\nu}). \]
If $\mu$ is a bounded positive measure and $g \in {\textsl{L}\hspace{0.17ex}}^{1}(\mu;\H)$, arguing first on step functions, and then taking limits, it is easy to check that \[
\int_I\duality{f}{\de(g\mu)} = \int_I \duality{f}{g}\de \mu, \qquad \forall f \in {\textsl{L}\hspace{0.17ex}}^{\infty}(\mu;\H). \] The following results (cf., e.g., \cite[Section III.17.2-3, p. 358-362]{Din67}) provide a connection between functions with bounded variation and vector measures which will be implicitly used in this paper.
\begin{Thm}\label{existence of Stietjes measure} For every $f \in {\textsl{BV}\hspace{0.17ex}}(I;\H)$ there exists a unique vector measure of bounded variation $\nu_{f} : \mathscr{B}(I) \function \H$ such that \begin{align}
\nu_{f}(\opint{c,d}) = f(d-) - f(c+), \qquad \nu_{f}(\clint{c,d}) = f(d+) - f(c-), \notag \\
\nu_{f}(\clsxint{c,d}) = f(d-) - f(c-), \qquad \nu_{f}(\cldxint{c,d}) = f(d+) - f(c+), \notag \end{align} whenever $\inf I \le c < d \le \sup I$ and the left hand side of each equality makes sense.
\noindent Conversely, if $\nu : \mathscr{B}(I) \function \H$ is a vector measure with bounded variation, and if $f_{\nu} : I \function \H$ is defined by $f_{\nu}(t) := \nu(\clsxint{\inf I,t} \cap I)$, then $f_{\nu} \in {\textsl{BV}\hspace{0.17ex}}(I;\H)$ and $\nu_{f_{\nu}} = \nu$. \end{Thm}
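As a simple example, let $I = \clint{0,1}$, $a \in \H$ with $a \neq 0$, $c \in \opint{0,1}$, and $f := \indicator_{\clint{c,1}}\, a$. Then $\nu_f = a \delta_c$, where $\delta_c$ is the Dirac measure concentrated at $c$: indeed, for every $d \in \opint{c,1}$,
\[
\nu_{f}(\clint{c,d}) = f(d+) - f(c-) = a, \qquad
\nu_{f}(\cldxint{c,d}) = f(d+) - f(c+) = 0,
\]
consistently with the fact that the only jump of $f$ occurs at $c$.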
\begin{Prop} Let $f \in {\textsl{BV}\hspace{0.17ex}}(I;\H)$, let $g : I \function \H$ be defined by $g(t) := f(t-)$, for $t \in \Int(I)$, and by $g(t) := f(t)$, if $t \in \partial I$, and let $V_{g} : I \function \mathbb{R}$ be defined by $V_{g}(t) := \pV(g, \clint{\inf I,t} \cap I)$. Then $\nu_{g} = \nu_{f}$ and $\vartot{\nu_{f}} = \nu_{V_{g}} = \pV(g,I)$. \end{Prop}
The measure $\nu_{f}$ is called the \emph{Lebesgue-Stieltjes measure} or \emph{differential measure} of $f$. Let us see the connection of the differential measure with the distributional derivative. If $f \in {\textsl{BV}\hspace{0.17ex}}(I;\H)$ and if $\overline{f}: \mathbb{R} \function \H$ is defined by \begin{equation}\label{extension to R}
\overline{f}(t) :=
\begin{cases}
f(t) & \text{if $t \in I$}, \\
f(\inf I) & \text{if $\inf I \in \mathbb{R}$, $t \not\in I$, $t \le \inf I$},\\
f(\sup I) & \text{if $\sup I \in \mathbb{R}$, $t \not\in I$, $t \ge \sup I,$}
\end{cases} \end{equation} then, as in the scalar case, it turns out (cf. \cite[Section 2]{Rec11}) that $\nu_{f}(B) = \De \overline{f}(B)$ for every $B \in \mathscr{B}(\mathbb{R})$, where $\De\overline{f}$ is the distributional derivative of $\overline{f}$, i.e. \[
- \int_\mathbb{R} \varphi'(t) \overline{f}(t) \de t = \int_{\mathbb{R}} \varphi \de \De \overline{f},
\qquad \forall \varphi \in {\textsl{C}\hspace{0.18ex}}_{c}^{1}(\mathbb{R};\mathbb{R}), \] where ${\textsl{C}\hspace{0.18ex}}_{c}^{1}(\mathbb{R};\mathbb{R})$ is the space of continuously differentiable functions on $\mathbb{R}$ with compact support. Observe that $\De \overline{f}$ is concentrated on $I$: $\De \overline{f}(B) = \nu_f(B \cap I)$ for every $B \in \mathscr{B}(\mathbb{R})$, hence in the remainder of the paper, if $f \in {\textsl{BV}\hspace{0.17ex}}(I;\H)$ then we will simply write \begin{equation}
\De f := \De\overline{f} = \nu_f, \qquad f \in {\textsl{BV}\hspace{0.17ex}}(I;\H), \end{equation} and from the previous discussion it follows that \begin{equation}\label{D-TV-pV}
\norm{\De f}{} = \vartot{\De f}(I) = \norm{\nu_f}{} = \pV(f,I), \qquad \forall f \in {\textsl{BV}\hspace{0.17ex}}(I;\H). \end{equation} If $I$ is bounded and $p \in \clint{1,\infty}$, then the classical Sobolev space ${\textsl{W}\hspace{0.17ex}}^{1,p}(I;\H)$ consists of those functions $f \in {\textsl{C}\hspace{0.18ex}}(I;\H)$ for which $\De f = g\leb^1$ for some $g \in {\textsl{L}\hspace{0.17ex}}^p(I;\H)$ and ${\textsl{W}\hspace{0.17ex}}^{1,\infty}(I;\H) = {\textsl{Lip}\hspace{0.15ex}}(I;\H)$. Let us also recall that if $f \in {\textsl{W}\hspace{0.17ex}}^{1,1}(I;\H)$ then the derivative $f'(t)$ exists $\leb^1$-a.e. in $t \in I$, $\De f = f' \leb^1$, and $\V(f,I) = \int_I\norm{f'(t)}{}\de t$ (cf., e.g. \cite[Appendix]{Bre73}).
\section{Main results}\label{S:state main result}
From now on we will assume that \begin{equation}\label{Z}
\text{$\Z$ is an $r$-prox-regular subset of $\H$ for some $r > 0$}, \end{equation} and \begin{equation}\label{T}
T > 0. \end{equation} We will consider on ${\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H)$ the classical complete ${\textsl{BV}\hspace{0.17ex}}$-norm defined by \eqref{def BVnorm}, where \[
\norm{f}{\infty} := \sup\{\norm{f(t)}{}\ :\ t \in \clint{0,T}\}. \] The norm \eqref{def BVnorm} is equivalent to the norm defined by \[
\interleave f \interleave_{{\textsl{BV}\hspace{0.17ex}}} := \norm{f(0)}{} + \V(f,\clint{0,T}), \qquad f \in {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H). \] From \eqref{D-TV-pV} it also follows that \[ \norm{f}{{\textsl{BV}\hspace{0.17ex}}} = \norm{f}{\infty} + \norm{\De f}{} = \norm{f}{\infty} + \vartot{\De f}(\clint{0,T}),\qquad \forall f \in {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H), \] where $\De f$ is the differential measure of $f$ and $\vartot{\De f}$ is the total variation measure of $\De f$.
\noindent We also have \[
\norm{f}{{\textsl{BV}\hspace{0.17ex}}} = \norm{f}{\infty} + \int_0^T\norm{f'(t)}{}\de t \qquad \forall f \in {\textsl{W}\hspace{0.17ex}}^{1,1}(\clint{0,T};\H). \] On ${\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H)$ we will also consider the so-called \emph{strict metric} defined by \eqref{def strictBV}. We say that \emph{$f_n \to f$ strictly on $\clint{0,T}$} if $\d_{s}(f_n,f) \to 0$ as $n \to \infty$. Let us recall that $\d_{s}$ is not complete and the topology induced by $\d_{s}$ is not linear.
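Let us also point out that the strict topology is weaker than the ${\textsl{BV}\hspace{0.17ex}}$-norm topology; here is a standard example (with $\H = \re$ and $T = 1$). If $u(t) := t$ and
\[
u_n(t) := t + \frac{1 - \cos(2\pi n t)}{2\pi n}, \qquad t \in \clint{0,1},
\]
then $u_n'(t) = 1 + \sin(2\pi n t) \ge 0$, hence $\V(u_n,\clint{0,1}) = u_n(1) - u_n(0) = 1 = \V(u,\clint{0,1})$, and $\norm{u_n - u}{\infty} \le 1/(\pi n) \to 0$, so that $\d_s(u_n,u) \to 0$; nevertheless
\[
\V(u_n - u,\clint{0,1}) = \int_0^1 |\sin(2\pi n t)| \de t = \frac{2}{\pi} \not\to 0,
\]
so that $u_n \not\to u$ in the ${\textsl{BV}\hspace{0.17ex}}$-norm.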
We now state the main problem of our paper. Its solution operator is classically called ``play operator'' when the characteristic set of constraints $\Z$ is convex. We will keep the name ``play operator'' in the non-convex case as well, occasionally using the term ``non-convex play operator'' in order to emphasize the non-convexity of the constraint set.
\begin{Pb}\label{CBVplay} Assume that \eqref{H-prel}, \eqref{Z}, \eqref{T} hold. For any $u \in {\textsl{C}\hspace{0.18ex}}(\clint{0,T};\H) \cap {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H)$ and any $z_0 \in \Z$ one has to find $y = {\mathsf{P}}(u,z_0) \in {\textsl{C}\hspace{0.18ex}}(\clint{0,T};\H) \cap {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H)$ such that \begin{align}
& u(t) - y(t) \in \Z \quad \forall t \in \clint{0,T}, \label{CBV x in Z}\\
& \int_{\clint{0,T}} \duality{z(t)-u(t)+y(t)}{\de\De y(t)} \le
\frac{1}{2r}\int_{\clint{0,T}} \norm{z(t)-u(t)+y(t)}{}^2 \de \vartot{\De y}(t) \notag\\
& \hspace{45ex}\forall z \in {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H), z(\clint{0,T}) \subseteq \Z, \label{CBV v.i.} \\
& u(0) - y(0) = z_0. \label{i.c.} \end{align} \end{Pb}
The integrals in \eqref{CBV v.i.} are Lebesgue integrals with respect to the measures $\De y$ and $\vartot{\De y}$. The inequality can be equivalently written using Riemann-Stieltjes integrals, by virtue of \cite[Lemma A.9]{Rec11} and the discussion in Section \ref{differential measures}.
Problem \ref{CBVplay} can be equivalently stated as a differential inclusion. Indeed we have the following:
\begin{Prop}\label{int form} Assume that \eqref{H-prel}, \eqref{Z}, \eqref{T} hold and that $u \in {\textsl{C}\hspace{0.18ex}}(\clint{0,T};\H) \cap {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H)$ and $z_0 \in \Z$. Then a function $y \in {\textsl{C}\hspace{0.18ex}}(\clint{0,T};\H) \cap {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H)$ is a solution to Problem \ref{CBVplay} if and only if there exist a measure $\mu : \mathscr{B}(\clint{0,T}) \function \clsxint{0,\infty}$ and a function $v \in {\textsl{L}\hspace{0.17ex}}^1(\mu;\H)$ such that \begin{align}
& \De y = v \mu, \label{Dy=vmu}\\
& u(t) - y(t) \in \Z \qquad \forall t \in \clint{0,T} \\
& -v(t) \in N_{u(t)-\Z}(y(t)) \qquad \text{for $\mu$-a.e. $t \in \clint{0,T}$} \\
& u(0) - y(0) = z_0. \label{i.c.2} \end{align} \end{Prop}
\begin{proof} First of all let us show that \eqref{CBV v.i.} can be equivalently written either with $z \in {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H)$ or with a right-continuous $z \in {\textsl{Reg}\hspace{0.17ex}}(\clint{0,T};\H)$, the space of $\H$-valued regulated functions on $\clint{0,T}$, i.e. those functions $z : \clint{0,T} \function \H$ for which there exist one-sided limits $z(t-)$ and $z(t+)$ for every $t \in \clint{0,T}$, with the convention that $z(0-) = z(0)$ and $z(T+) = z(T)$. Indeed if $y \in {\textsl{C}\hspace{0.18ex}}(\clint{0,T};\H) \cap {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H)$ satisfies \eqref{CBV v.i.} and if $z \in {\textsl{Reg}\hspace{0.17ex}}(\clint{0,T};\H)$, then by \cite[Theorem 3, Section 2.1]{Bou58} there exists a sequence of step functions $z_n \in {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H)$ such that $z_n \to z$ uniformly on $\clint{0,T}$. Hence taking the limit in \eqref{CBV v.i.} with $z$ replaced by $z_n$, we get that \eqref{CBV v.i.} holds with any right-continuous $z \in {\textsl{Reg}\hspace{0.17ex}}(\clint{0,T};\H)$ such that $z(\clint{0,T}) \subseteq \Z$. Now we can conclude by recalling that by \cite[Theorem 3.7]{KreMonRec22b} we have that \eqref{CBV x in Z}--\eqref{i.c.} is equivalent to the existence of a positive measure $\mu$ and of a function $v \in {\textsl{L}\hspace{0.17ex}}^1(\mu;\H)$ such that \eqref{Dy=vmu}--\eqref{i.c.2} hold. \end{proof}
The first proof of the existence of a solution to the non-convex Problem \ref{CBVplay} can be found in \cite{EdmThi06} (but see also \cite{CasMon96,ColGon99, Ben00, BouThi05}). To be more precise, in \cite[Corollary 3.1]{EdmThi06} it is proved that there exists a unique solution to \eqref{Dy=vmu}--\eqref{i.c.2}. Thus by virtue of Proposition \ref{int form} we have the following:
\begin{Thm}\label{CBVExist} Assume that \eqref{H-prel}, \eqref{Z}, \eqref{T} hold. Then Problem \ref{CBVplay} has a unique solution for any $u \in {\textsl{C}\hspace{0.18ex}}(\clint{0,T};\H) \cap {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H)$ and any $z_0 \in \Z$. \end{Thm}
Another proof of Theorem \ref{CBVExist} can be found in \cite{KreMonRec22a}, exclusively within the framework of the integral formulation.
\begin{Def} The solution operator ${\mathsf{P}} : {\textsl{C}\hspace{0.18ex}}(\clint{0,T};\H) \cap {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H) \times \Z \function {\textsl{C}\hspace{0.18ex}}(\clint{0,T};\H) \cap {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H)$ associating to every $(u,z_0) \in {\textsl{C}\hspace{0.18ex}}(\clint{0,T};\H) \cap {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H) \times \Z$ the unique solution $y = {\mathsf{P}}(u,z_0)$ of Problem \ref{CBVplay}, is called the \emph{(non-convex) play operator}. \end{Def}
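As a classical illustration, consider $\H = \re$ and $\Z = \clint{-\rho,\rho}$ for some $\rho > 0$: being closed and convex, $\Z$ is $r$-prox-regular for every $r > 0$, and ${\mathsf{P}}$ reduces to the scalar play operator with threshold $\rho$. For instance, if $u \in {\textsl{W}\hspace{0.17ex}}^{1,1}(\clint{0,T};\re)$, then $y = {\mathsf{P}}(u,z_0)$ is the unique function such that $y(0) = u(0) - z_0$, $u(t) - y(t) \in \clint{-\rho,\rho}$ for every $t \in \clint{0,T}$, and
\[
y'(t) =
\begin{cases}
u'(t) & \text{if $u(t) - y(t) = \rho$ and $u'(t) > 0$,} \\
u'(t) & \text{if $u(t) - y(t) = -\rho$ and $u'(t) < 0$,} \\
0 & \text{otherwise}
\end{cases}
\]
for $\leb^1$-a.e. $t \in \clint{0,T}$: the output $y$ stands still until the constraint $u - y \in \Z$ becomes active and pushes it.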
When the ``input'' function $u$ of Problem \ref{CBVplay} is more regular, we have the following well-known characterization of solutions (see, e.g., \cite[Corollary 6.3]{KreMonRec22a}).
\begin{Prop} Assume that \eqref{H-prel}, \eqref{Z}, \eqref{T} hold. If $u \in {\textsl{W}\hspace{0.17ex}}^{1,p}(\clint{0,T};\H)$, $z_0 \in \Z$, and if $y = {\mathsf{P}}(u,z_0)$ is the solution of Problem \ref{CBVplay}, then $y \in {\textsl{W}\hspace{0.17ex}}^{1,p}(\clint{0,T};\H)$ and \begin{align}
& u(t) - y(t) \in \Z \quad \forall t \in \clint{0,T}, \\
& \duality{z-u(t)+y(t)}{y'(t)} \le \frac{\norm{y'(t)}{}}{2r} \norm{z-u(t)+y(t)}{}^2 \quad
\text{for $\leb^1$-a.e. $t \in \clint{0,T}$, $\forall z \in \Z$,} \label{W1p v.i.} \\
& u(0) - y(0) = z_0. \label{W1p i.c.} \end{align} Moreover $y$ is the unique function in ${\textsl{W}\hspace{0.17ex}}^{1,p}(\clint{0,T};\H)$ such that \eqref{W1p v.i.}--\eqref{W1p i.c.} holds. \end{Prop}
Now we can state our main theorems. The first result states that ${\mathsf{P}}$ is continuous with respect to the ${\textsl{BV}\hspace{0.17ex}}$-norm on ${\textsl{C}\hspace{0.18ex}}(\clint{0,T};\H) \cap {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H)$.
\begin{Thm}\label{T:BVnorm cont} Assume that \eqref{H-prel}, \eqref{Z}, \eqref{T} hold. The play operator \newline ${\mathsf{P}} : {\textsl{C}\hspace{0.18ex}}(\clint{0,T};\H) \cap {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H) \times \Z \function {\textsl{C}\hspace{0.18ex}}(\clint{0,T};\H) \cap {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H)$ is continuous with respect to the ${\textsl{BV}\hspace{0.17ex}}$-norm \eqref{def BVnorm}, i.e. if \[
\norm{u-u_n}{{\textsl{BV}\hspace{0.17ex}}} \to 0, \quad \norm{z_0 - z_{0n}}{} \to 0 \qquad \text{as $n \to \infty$}, \] then \[
\norm{{\mathsf{P}}(u,z_0) - {\mathsf{P}}(u_n,z_{0n})}{{\textsl{BV}\hspace{0.17ex}}} \to 0 \qquad \text{as $n \to \infty$} \] whenever $u, u_n \in {\textsl{C}\hspace{0.18ex}}(\clint{0,T};\H) \cap {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H)$ and $z_0, z_{0,n} \in \Z$ for every $n \in \en$. \end{Thm}
We will also prove that the play operator is continuous with respect to the strict metric.
\begin{Thm}\label{T:BVstrict cont} Assume that \eqref{H-prel}, \eqref{Z}, \eqref{T} hold. The play operator ${\mathsf{P}} : {\textsl{C}\hspace{0.18ex}}(\clint{0,T};\H) \cap {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H) \times \Z \function {\textsl{C}\hspace{0.18ex}}(\clint{0,T};\H) \cap {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H)$ is continuous with respect to the strict metric $\d_s$ \eqref{def strictBV}, i.e. if \[
\d_s(u,u_n)\to 0, \quad \norm{z_0 - z_{0n}}{} \to 0 \qquad \text{as $n \to \infty$}, \] then \[
\d_s({\mathsf{P}}(u,z_0), {\mathsf{P}}(u_n,z_{0n})) \to 0 \qquad \text{as $n \to \infty$} \] whenever $u, u_n \in {\textsl{C}\hspace{0.18ex}}(\clint{0,T};\H) \cap {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H)$ and $z_0, z_{0,n} \in \Z$ for every $n \in \en$. \end{Thm}
The proofs of our main theorems are strongly based on the fact that the play operator is rate independent, which is the property \eqref{P r.i.} of ${\mathsf{P}}$ proved in the following theorem.
\begin{Thm}\label{th:P r.i.} Assume that \eqref{H-prel}, \eqref{Z}, \eqref{T} hold, $u \in {\textsl{C}\hspace{0.18ex}}(\clint{0,T};\H) \cap {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H)$, and $z_0 \in \Z$. If $\phi : \clint{0,T} \function \clint{0,T}$ is a continuous function such that $(\phi(t) - \phi(s))(t-s) \ge 0$ for every $s, t \in \clint{0,T}$ and $\phi(\clint{0,T}) = \clint{0,T}$, then \begin{equation}\label{P r.i.}
{\mathsf{P}}(u \circ \phi,z_0) = {\mathsf{P}}(u,z_0) \circ \phi. \end{equation} \end{Thm}
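For instance, taking $\phi(t) := t^2/T$, which is continuous, nondecreasing, and maps $\clint{0,T}$ onto itself, \eqref{P r.i.} reads
\[
{\mathsf{P}}(u \circ \phi, z_0)(t) = {\mathsf{P}}(u,z_0)(t^2/T), \qquad t \in \clint{0,T},
\]
i.e. traversing the input $u$ at a different speed produces the same output run at the new time scale; in particular the hysteresis diagram $t \mapsto (u(t), {\mathsf{P}}(u,z_0)(t))$ is invariant under such time changes.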
We will prove Theorems \ref{T:BVnorm cont}, \ref{T:BVstrict cont}, and \ref{th:P r.i.} in Section \ref{S:proofs}.
\section{Proofs}\label{S:proofs}
Let us start by proving that ${\mathsf{P}}$ is rate independent.
\begin{proof}[Proof of Theorem \ref{th:P r.i.}]
Set $y := {\mathsf{P}}(u,z_0)$, and recall that $V_y(t) = \V(y,\clint{0,t})$ for every $t \in \clint{0,T}$. Hence $\vartot{\De y} = \De V_y$, and by the vectorial Radon--Nikodym theorem (\cite[Corollary VII.4.2]{Lan93}) there exists $v \in {\textsl{L}\hspace{0.17ex}}^1(\vartot{\De y};\H)$ such that $\De y = v \De V_y$. Let us fix $z \in {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H)$ such that $z(\clint{0,T}) \subseteq \Z$ and recall the following well-known formula holding for any measure $\mu : \mathscr{B}(\clint{0,T}) \function \clsxint{0,\infty}$, $g \in {\textsl{L}\hspace{0.17ex}}^1(\mu;\H)$, and $A \in \mathscr{B}(\clint{0,T})$: \[
\int_{\phi^{-1}(A)} g(\phi(t)) \de\mu(t) = \int_{A} g(\tau) \de(\phi_* \mu)(\tau), \] where $\phi_* \mu : \mathscr{B}(\clint{0,T}) \function \clsxint{0,\infty}$ is the measure defined by $\phi_* \mu(B) := \mu(\phi^{-1}(B))$ for $B \in \mathscr{B}(\clint{0,T})$ (this formula can be proved by approximating $g$ by a sequence of step functions and then taking the limit). If $0 \le \alpha \le \beta \le T$ we have \[ \phi_*(\De\ \!(V_y\circ \phi))(\clint{\alpha,\beta}) = \De\ \!(V_y \circ \phi)(\phi^{-1}(\clint{\alpha,\beta})) = \De V_y(\clint{\alpha,\beta}), \] hence \[ \phi_*(\De\ \!(V_y\circ \phi)) = \De V_y, \] and for $0 \le a \le b \le T$ we find \begin{align}
\De\ \!(y\circ \phi)(\clint{a,b})
& = y(\phi(b)) - y(\phi(a)) = \De y(\clint{\phi(a),\phi(b)})\notag \\
& = \int_{\clint{\phi(a),\phi(b)}} v(\tau)\de \De V_y(\tau) =
\int_{\clint{\phi(a),\phi(b)}} v(\tau)\de (\phi_*\De\ \!(V_y \circ \phi))(\tau) \notag \\
& = \int_{\clint{a,b}} v(\phi(t))\de \De\ \!(V_y \circ \phi)(t) =
(v \circ \phi)\De\ \!(V_y \circ \phi)(\clint{a,b}), \notag \end{align} so that \[
\De\ \!(y\circ \phi) = (v \circ \phi)\De\ \!(V_y \circ \phi), \qquad
\vartot{\De\ \!(y\circ \phi)} = \norm{v \circ \phi}{} \De\ \!(V_y \circ \phi). \] If $\psi(\tau) := \inf \phi^{-1}(\tau)$, then $\psi$ is increasing and $\tau = \phi(\psi(\tau))$. Therefore, since $\De\ \!(V_y\circ \phi)=0$ on every interval where $\phi$ is constant, we find that for every $h \in {\textsl{C}\hspace{0.18ex}}(\re^2)$ we have \[ \int_{\clint{0,T}} h(z(t),\phi(t))\de\De\ \!(V_y\circ \phi)(t) = \int_{\clint{0,T}} h(z(\psi(\phi(t)), \phi(t)) \de\De\ \!(V_y\circ \phi)(t). \] Hence \begin{align}
&\int_{\clint{0,T}} \duality{z(t)-u(\phi(t))+y(\phi(t))}{\de\De\ \!(y\circ \phi)(t)} \notag\\
&= \int_{\clint{0,T}} \duality{z(t)-u(\phi(t))+y(\phi(t))}{v(\phi(t))}\de\De\ \!(V_y\circ \phi)(t) \notag \\
&= \int_{\clint{0,T}} \duality{z(\psi(\phi(t)))-u(\phi(t))+y(\phi(t))}{v(\phi(t))}\de\De\ \!(V_y\circ \phi)(t) \notag \\
&= \int_{\clint{0,T}} \duality{z(\psi(\tau))-u(\tau)+y(\tau)}{v(\tau)}\de\De V_y(\tau) \notag \\
& = \int_{\clint{0,T}} \duality{z(\psi(\tau))-u(\tau)+y(\tau)}{\de\De y(\tau)}, \label{r-i1} \end{align} and \begin{align} & \int_{\clint{0,T}} \norm{z(t)-u(\phi(t))+y(\phi(t))}{}^2 \de \vartot{\De\ \!(y\circ \phi)}(t) \notag \\ & = \int_{\clint{0,T}} \norm{z(t)-u(\phi(t))+y(\phi(t))}{}^2 \norm{v(\phi(t))}{} \de \De\ \!(V_y\circ \phi)(t) \notag \\ & = \int_{\clint{0,T}} \norm{z(\psi(\phi(t)))-u(\phi(t))+y(\phi(t))}{}^2 \norm{v(\phi(t))}{}
\de \De\ \!(V_y\circ \phi)(t) \notag \\ & = \int_{\clint{0,T}} \norm{z(\psi(\tau))-u(\tau)+y(\tau)}{}^2 \norm{v(\tau)}{} \de \De V_y(\tau) \notag \\ & = \int_{\clint{0,T}} \norm{z(\psi(\tau))-u(\tau)+y(\tau)}{}^2 \de \vartot{\De y}(\tau) \label{r-i2}. \end{align} Since $y = {\mathsf{P}}(u,z_0)$, we can apply \eqref{CBV v.i.} with the test function $\tau \mapsto z(\psi(\tau))$, which belongs to ${\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H)$ and takes values in $\Z$; hence the right hand side of \eqref{r-i1} is less than or equal to $\frac{1}{2r}$ times the right hand side of \eqref{r-i2}, and this implies that \begin{align} & \int_{\clint{0,T}} \duality{z(t)-u(\phi(t))+y(\phi(t))}{\de\De\ \!(y\circ \phi)(t)} \notag \\ & \le \frac{1}{2r}\int_{\clint{0,T}} \norm{z(t)-u(\phi(t))+y(\phi(t))}{}^2 \de \vartot{\De\ \!(y\circ \phi)}(t), \end{align} which is what we wanted to prove. \end{proof}
In the next result we prove a normality rule for the non-convex play operator, thereby generalizing \cite[Proposition 3.9]{Kre96} to the non-convex case. The idea of the proof is analogous to the one of \cite[Proposition 3.9]{Kre96}.
\begin{Prop}\label{S and Q} Assume that \eqref{H-prel}, \eqref{Z}, \eqref{T} hold, $u \in {\textsl{Lip}\hspace{0.15ex}}(\clint{0,T};\H)$, $z_0 \in \Z$, and that $y = {\mathsf{P}}(u,z_0)$. Let $x = {\mathsf{S}}(u, z_0) : \clint{0,T} \function \H$ and $w = {\mathsf{Q}}(u, z_0) : \clint{0,T} \function \H$ be defined by \begin{align}
x(t) := {\mathsf{S}}(u, z_0)(t) := u(t) - y(t), \qquad \text{$t \in \clint{0,T}$,} \label{def S}\\
w(t) := {\mathsf{Q}}(u, z_0)(t) := y(t) - x(t), \qquad \text{$t \in \clint{0,T}$.} \label{def Q} \end{align} Then $w = {\mathsf{Q}}(u, z_0) \in {\textsl{Lip}\hspace{0.15ex}}(\clint{0,T};\H)$, $x = {\mathsf{S}}(u, z_0) \in {\textsl{Lip}\hspace{0.15ex}}(\clint{0,T};\H)$, $x(t) \in \Z$ for every $t \in \clint{0,T}$, and \begin{equation}\label{y'.x'=0}
\duality{y'(t)}{x'(t)} = 0 \qquad \text{for $\leb^1$-a.e. $t \in \clint{0,T}$,} \end{equation} and
\begin{equation}\label{|w'|=|u'|}
\norm{w'(t)}{} = \norm{u'(t)}{} \qquad \text{for $\leb^1$-a.e. $t \in \clint{0,T}$.} \end{equation} \end{Prop}
\begin{proof} Let $t \in \clint{0,T}$ be a point where both $x$ and $y$ are differentiable. Taking $z = x(t+h) \in \Z$ for sufficiently small $h > 0$, we have \[
\frac{1}{h}\duality{y'(t)}{x(t) - x(t+h)} \ge -\frac{\norm{y'(t)}{}}{2rh}\norm{x(t) - x(t+h)}{}^2 \] therefore letting $h \to 0$ we get \begin{equation}\label{y'.x'<0}
\duality{y'(t)}{-x'(t)} \ge 0. \end{equation} Taking $z = x(t-h)$ we also have \[
\frac{1}{h}\duality{y'(t)}{x(t) - x(t-h)} \ge -\frac{\norm{y'(t)}{}}{2rh}\norm{x(t) - x(t-h)}{}^2 \] therefore letting $h \to 0$ we get \[
\duality{y'(t)}{x'(t)} \ge 0, \] which together with \eqref{y'.x'<0} yields \eqref{y'.x'=0}. This formula implies that
\begin{equation}\label{|w'|}
\norm{w'(t)}{}^2 = \norm{y'(t) - x'(t)}{}^2 = \duality{y'(t) - x'(t)}{y'(t) - x'(t)}
= \norm{y'(t)}{}^2 + \norm{x'(t)}{}^2, \end{equation} and
\begin{equation}\label{|u'|}
\norm{u'(t)}{}^2 = \norm{y'(t) + x'(t)}{}^2 = \duality{y'(t) + x'(t)}{y'(t) + x'(t)}
= \norm{y'(t)}{}^2 + \norm{x'(t)}{}^2, \end{equation}
therefore \eqref{|w'|=|u'|} follows. \end{proof}
Let us observe that the geometrical meaning of \eqref{|w'|}--\eqref{|u'|} in the previous proposition is that $w'(t)$ and $u'(t)$ are the diagonals of the rectangle with sides $x'(t)$ and $y'(t)$, so that \eqref{|w'|=|u'|} follows.
Now we prove the continuity of the play operator with respect to the ${\textsl{BV}\hspace{0.17ex}}$-norm on the space ${\textsl{Lip}\hspace{0.15ex}}(\clint{0,T};\H)$. Our proof is based on the normality rule of Proposition \ref{S and Q} and on a standard weak-convergence argument in ${\textsl{L}\hspace{0.17ex}}^2(\clint{0,T};\H)$.
\begin{Thm} Assume that \eqref{H-prel}, \eqref{Z}, \eqref{T} hold. The non-convex play operator restricted to ${\textsl{Lip}\hspace{0.15ex}}(\clint{0,T};\H) \times \Z$, i.e. ${\mathsf{P}} : {\textsl{Lip}\hspace{0.15ex}}(\clint{0,T};\H) \times \Z \function {\textsl{Lip}\hspace{0.15ex}}(\clint{0,T};\H)$, is continuous with respect to the ${\textsl{BV}\hspace{0.17ex}}$-norm. More precisely let $z_0 \in \Z$, $u \in {\textsl{Lip}\hspace{0.15ex}}(\clint{0,T};\H)$, $z_{0,n} \in \Z$, $u_n \in {\textsl{Lip}\hspace{0.15ex}}(\clint{0,T};\H)$ for every $n \in \en$, and let $y := {\mathsf{P}}(u, z_0)$ and $y_n := {\mathsf{P}}(u_n, z_{0,n}).$
\noindent If $\norm{z_0 - z_{0,n}}{} \to 0$ and \[
\norm{u - u_n}{\infty} + \norm{u' - u_n'}{{\textsl{L}\hspace{0.17ex}}^1(\clint{0,T};\H)} \to 0
\qquad \text{as $n \to \infty$}, \] then \[
\norm{y - y_n}{\infty} + \norm{y' - y_n'}{{\textsl{L}\hspace{0.17ex}}^1(\clint{0,T};\H)} \to 0
\qquad \text{as $n \to \infty$}. \] \end{Thm}
\begin{proof} For every $n \in \en$ let us set $w := {\mathsf{Q}}(u,z_0)$ and $w_n := {\mathsf{Q}}(u_n,z_{0,n})$ according to formulas \eqref{def S}--\eqref{def Q}. From Proposition \ref{S and Q} we find that \begin{equation}
\norm{w_n'}{{\textsl{L}\hspace{0.17ex}}^2(\clint{0,T};\H)} = \norm{u_n'}{{\textsl{L}\hspace{0.17ex}}^2(\clint{0,T};\H)} \to
\norm{u'}{{\textsl{L}\hspace{0.17ex}}^2(\clint{0,T};\H)} = \norm{w'}{{\textsl{L}\hspace{0.17ex}}^2(\clint{0,T};\H)} \qquad \text{as $n \to \infty$.} \end{equation} In particular $\{w_n'\}$ is bounded in ${\textsl{L}\hspace{0.17ex}}^2(\clint{0,T};\H)$, therefore there exists $\eta \in {\textsl{L}\hspace{0.17ex}}^2(\clint{0,T};\H)$ such that, at least for a subsequence, \begin{equation}
w_n' \rightharpoonup \eta \qquad \text{in ${\textsl{L}\hspace{0.17ex}}^2(\clint{0,T};\H)$}. \end{equation} But $y_n \to y$ uniformly on $\clint{0,T}$ (see the proof of Theorem 5.5 in \cite{KreMonRec22a}), hence $w_n \to w$ uniformly on $\clint{0,T}$, therefore \begin{equation}
w_n' \rightharpoonup w' \qquad \text{in ${\textsl{L}\hspace{0.17ex}}^2(\clint{0,T};\H)$}. \end{equation} Since ${\textsl{L}\hspace{0.17ex}}^2(\clint{0,T};\H)$ is a Hilbert space, weak convergence together with the convergence of the norms implies strong convergence, so that \begin{equation}
w_n' \to w' \qquad \text{in ${\textsl{L}\hspace{0.17ex}}^2(\clint{0,T};\H)$}, \end{equation} which implies that \begin{equation}
w_n' \to w' \qquad \text{in ${\textsl{L}\hspace{0.17ex}}^1(\clint{0,T};\H)$}, \end{equation} so that $y_n' \to y'$ in ${\textsl{L}\hspace{0.17ex}}^1(\clint{0,T};\H)$ and we are done. \end{proof}
Our proof of the ${\textsl{BV}\hspace{0.17ex}}$-norm continuity of ${\mathsf{P}}$ on ${\textsl{C}\hspace{0.18ex}}(\clint{0,T};\H) \cap {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H)$ essentially consists in reducing the problem to the Lipschitz continuous case by means of a reparametrization by the arc length. We need two auxiliary results, the first of which is the following:
\begin{Prop}\label{P:ftilde} Assume that \eqref{H-prel} holds. For every $f \in {\textsl{C}\hspace{0.18ex}}(\clint{0,T};\H) \cap {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H)$, let $\ell_f : \clint{0,T} \function \re$ be defined by \begin{equation}\label{ell_f}
\ell_f(t) =
\begin{cases}
\dfrac{T}{\V(f, \clint{0,T})} \V(f,\clint{0,t}) & \text{if $\V(f, \clint{0,T}) \neq 0$}, \\
\ \\
0 & \text{if $\V(f, \clint{0,T}) = 0$},
\end{cases} \end{equation} which we call \emph{normalized arc-length of $f$}. Then there exists $\widetilde{f} \in {\textsl{Lip}\hspace{0.15ex}}(\clint{0,T};\H),$ the \emph{re\-pa\-ra\-me\-tri\-za\-tion of $f$ by the normalized arc-length}, such that \begin{equation}\label{f=ftilde(l)}
f = \widetilde{f} \circ \ell_f. \end{equation} Moreover there exists a $\leb^1$-representative $\widetilde{f}'$ of the distributional derivative of $\widetilde{f}$ such that
\begin{equation}\label{|f'|=1}
\norm{\widetilde{f}'(\sigma)}{} = \frac{\V(f,\clint{0,T})}{T}, \qquad \forall \sigma \in \clint{0,T}. \end{equation} \end{Prop}
\begin{proof} The existence of a function $\widetilde{f} \in {\textsl{Lip}\hspace{0.15ex}}(\clint{0,T};\H)$ satisfying \eqref{f=ftilde(l)} is easy to prove (see, e.g., \cite[Proposition 3.1]{Rec08}). Moreover we know from \cite[Lemma 4.3]{Rec11} that if $g$ is a $\leb^1$-representative of the distributional derivative of $\widetilde{f}$, then $\norm{g(\sigma)}{} = \V(f,\clint{0,T})/T$ for every $\sigma \in F$, for some $F \subseteq \clint{0,T}$ with
full measure in $\clint{0,T}$. Thus \eqref{|f'|=1} follows if we define the following Lebesgue representative of the derivative of $\widetilde{f}$: \[
\widetilde{f}'(\sigma) :=
\begin{cases}
g(\sigma) & \text{if $\sigma \in F$}, \\
\ \\
\dfrac{\V(f,\clint{0,T})}{T}\, e_0 & \text{if $\sigma \not\in F$,}
\end{cases} \] where $e_0 \in \H$ is chosen so that $\norm{e_0}{} = 1$. \end{proof}
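As a concrete example, let $\H = \re$ and $f(t) := \min\{t, T/2\}$ for $t \in \clint{0,T}$: then $\V(f,\clint{0,t}) = \min\{t, T/2\}$, hence
\[
\ell_f(t) = \frac{T}{T/2}\,\min\{t, T/2\} = \min\{2t, T\}, \qquad \widetilde{f}(\sigma) = \frac{\sigma}{2},
\]
and indeed $\widetilde{f}(\ell_f(t)) = f(t)$ and $\norm{\widetilde{f}'(\sigma)}{} = 1/2 = \V(f,\clint{0,T})/T$ for every $\sigma \in \clint{0,T}$, in accordance with \eqref{|f'|=1}; observe that $\ell_f$ is constant on $\clint{T/2,T}$, where $f$ does not vary.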
Then, as in the Lipschitz case, we need to introduce the operator ${\mathsf{Q}}$ defined by ${\mathsf{Q}}(v,z_0) = 2{\mathsf{P}}(v,z_0) - v$ for $v \in {\textsl{C}\hspace{0.18ex}}(\clint{0,T};\H) \cap {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H)$ and $z_0 \in \Z$.
\begin{Lem}\label{L:Q(v)} Assume that $v \in{\textsl{C}\hspace{0.18ex}}(\clint{0,T};\H) \cap {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H)$, $z_0 \in \Z$, and let ${\mathsf{Q}} : {\textsl{C}\hspace{0.18ex}}(\clint{0,T};\H) \cap {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H) \times \Z \function {\textsl{C}\hspace{0.18ex}}(\clint{0,T};\H) \cap {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H)$ be defined by \begin{equation}\label{Q in BV}
{\mathsf{Q}}(v,z_0) := 2{\mathsf{P}}(v,z_0) - v, \qquad v \in {\textsl{C}\hspace{0.18ex}}(\clint{0,T};\H) \cap {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H). \end{equation} Then ${\mathsf{Q}}$ is rate independent, i.e. \begin{equation}\label{Q r.i.}
{\mathsf{Q}}(v \circ \phi ,z_0) = {\mathsf{Q}}(v,z_0) \circ \phi \qquad
\forall v \in {\textsl{C}\hspace{0.18ex}}(\clint{0,T};\H) \cap {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H) \end{equation} for every continuous function $\phi : \clint{0,T} \function \clint{0,T}$ such that $(\phi(t) - \phi(s))(t-s) \ge 0$ and $\phi(\clint{0,T}) = \clint{0,T}$. Moreover if $\ell_v$ is the arc-length defined in \eqref{ell_f}, then \begin{equation}\label{DQ = Q'Dl}
\De {\mathsf{Q}}(v, z_0) = (({\mathsf{Q}}(\widetilde{v}, z_0))'\circ \ell_v)\De\ell_v, \end{equation} i.e. \begin{equation}\label{DQ = Q'Dl-2}
\De{\mathsf{Q}}(v, z_0)(B) = \int_B ({\mathsf{Q}}(\widetilde{v}, z_0))'(\ell_v(t)) \de \De\ell_v(t), \qquad
\forall B \in \mathscr{B}(\clint{0,T}), \end{equation} where formulas \eqref{DQ = Q'Dl}--\eqref{DQ = Q'Dl-2} hold with any $\leb^1$-representative $({\mathsf{Q}}(\widetilde{v}, z_0))'$ of the distributional derivative of ${\mathsf{Q}}(\widetilde{v}, z_0)$. Finally we can take such an $\leb^1$-representative so that
\begin{equation}\label{|Q'|=1}
\norm{({\mathsf{Q}}(\widetilde{v}, z_0))'(\sigma)}{} = \frac{\V(v,\clint{0,T})}{T},\qquad \forall \sigma \in \clint{0,T}. \end{equation} \end{Lem}
\begin{proof} From Theorem \ref{th:P r.i.} it follows that \[
{\mathsf{Q}}(v \circ \phi,z_0) = 2{\mathsf{P}}(v\circ \phi,z_0) - v \circ \phi =
2{\mathsf{P}}(v,z_0) \circ \phi - v \circ \phi = {\mathsf{Q}}(v, z_0) \circ \phi, \] which is \eqref{Q r.i.}. Moreover, since $\widetilde{v}$ is Lipschitz continuous, we have that ${\mathsf{Q}}(\widetilde{v}, z_0) \in {\textsl{Lip}\hspace{0.15ex}}(\clint{0,T};\H)$, therefore by \cite[Theorem A.7]{Rec11} we infer that, if
${\mathsf{Q}}(\widetilde{v}, z_0)'$ is any $\leb^1$-representative of the distributional derivative of ${\mathsf{Q}}(\widetilde{v}, z_0)$, then the bounded measurable function ${\mathsf{Q}}(\widetilde{v}, z_0)'\circ \ell_v$ is a density of $\De{\mathsf{Q}}(v, z_0)$ with respect to the measure $\De\ell_v$, i.e. \eqref{DQ = Q'Dl} holds. Finally \eqref{|Q'|=1} follows from
\eqref{|w'|=|u'|} of Proposition \ref{S and Q} and from \eqref{|f'|=1} of Proposition \ref{P:ftilde}. \end{proof}
Now we can prove our first main result.
\begin{proof}[Proof of Theorem \ref{T:BVnorm cont}] Let us consider $u \in {\textsl{C}\hspace{0.18ex}}(\clint{0,T};\H) \cap {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H)$, $u_n \in {\textsl{C}\hspace{0.18ex}}(\clint{0,T};\H) \cap {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H)$, and $z_0, z_{0n} \in \Z$ for every $n \in \en$, and assume that $\norm{u_n - u}{{\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H)} \to 0$ and $\norm{z_{0n} - z_0}{} \to 0$ as $n \to \infty$. Then let $\ell := \ell_u$ and $\ell_n := \ell_{u_n}$ be the normalized arc-length functions defined in \eqref{ell_f}, so that we have \[
u = \widetilde{u} \circ \ell, \quad u_n = \widetilde{u}_n \circ \ell_n \qquad \forall n \in \en. \] Let us also set \begin{equation}\label{def w wn}
w := {\mathsf{Q}}(u,z_0), \quad w_n := {\mathsf{Q}}(u_n, z_{0,n}), \qquad n \in \en, \end{equation} where the operator ${\mathsf{Q}}$ is defined in Lemma \ref{L:Q(v)}. By the proof of \cite[Theorem 5.5]{KreMonRec22a} we have that ${\mathsf{P}}(u_n,z_{0n}) \to {\mathsf{P}}(u,z_{0})$ uniformly on $\clint{0,T}$, because $\norm{u_n -u}{\infty} \to 0$ and $\norm{z_{0n} - z_0}{} \to 0$ as $n \to \infty$. Therefore from formula \eqref{Q in BV} it follows that \begin{equation}\label{wn->w unif}
w_n \to w \qquad \text{uniformly on $\clint{0,T}$}. \end{equation} Let us observe that ${\mathsf{Q}}(\widetilde{u}, z_0)$ and ${\mathsf{Q}}(\widetilde{u}_n, z_{0n})$ are Lipschitz continuous for every $n \in \en$ and let us define the bounded measurable functions $h: \clint{0,T} \function \H$ and $h_n: \clint{0,T} \function \H$ by \begin{equation}
h(t) := ({\mathsf{Q}}(\widetilde{u}, z_0))'(\ell_u(t)), \quad h_n(t) := ({\mathsf{Q}}(\widetilde{u}_n, z_{0n}))'(\ell_n(t)),
\qquad t \in \clint{0,T}, \end{equation}
where, by Lemma \ref{L:Q(v)}, formula \eqref{|Q'|=1}, we have that the $\leb^1$-representatives of the distributional derivatives of ${\mathsf{Q}}(\widetilde{u}, z_0)$ and ${\mathsf{Q}}(\widetilde{u}_n, z_{0n})$ can be chosen in such a way that \[
\norm{({\mathsf{Q}}(\widetilde{u}, z_0))'(\sigma)}{} = \frac{\V(u,\clint{0,T})}{T}, \quad \norm{({\mathsf{Q}}(\widetilde{u}_n, z_{0n}))'(\sigma)}{} = \frac{\V(u_n,\clint{0,T})}{T}, \qquad \forall\sigma \in \clint{0,T}. \] Therefore we have that
\begin{equation}\label{|gn(t)|, |g(t)|}
\norm{h(t)}{} = \frac{\V(u,\clint{0,T})}{T}, \quad
\norm{h_n(t)}{} = \frac{\V(u_n,\clint{0,T})}{T},
\qquad \forall t \in \clint{0,T}, \ \forall n \in \en. \end{equation} Since $u_n \to u$ in ${\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H)$, from the inequality \begin{equation}\label{V(f)-V(g)}
|\V(u,\clint{a,b}) - \V(u_n,\clint{a,b})| \le \V(u-u_n,\clint{a,b}), \end{equation} holding for $0 \le a \le b \le T$, we infer that $\V(u_n,\clint{0,T}) \to \V(u,\clint{0,T})$ as $n \to \infty$, hence we have that the sequence
$\{\V(u_n,\clint{0,T})\}$ is bounded. Therefore from \eqref{|gn(t)|, |g(t)|} we infer that there exists $C > 0$ such that
\begin{equation}\label{|g_n|infty bdd}
\sup\{\norm{h_n(t)}{}\ :\ t \in \clint{0,T}\} \le C \qquad
\ \forall n \in \en \end{equation} and \begin{equation}
\lim_{n \to \infty} \norm{h_n(t)}{} =
\lim_{n \to \infty} \frac{\V(u_n,\clint{0,T})}{T} =
\frac{\V(u,\clint{0,T})}{T} = \norm{h(t)}{}, \qquad \forall t \in \clint{0,T}. \end{equation} It follows that \begin{align}
\lim_{n \to \infty} \int_{\clint{0,T}} \norm{h_{n}(t)}{}^2\de\De\ell(t)
& = \lim_{n \to \infty} \int_{\clint{0,T}} \left(\frac{\V(u_n,\clint{0,T})}{T}\right)^2\de\De\ell(t) \notag \\
& = \int_{\clint{0,T}} \left(\frac{\V(u,\clint{0,T})}{T}\right)^2\de\De\ell(t) \notag \\
& = \int_{\clint{0,T}} \norm{h(t)}{}^2\de\De\ell(t), \notag \end{align} hence \begin{equation}\label{norm to norm}
\lim_{n \to \infty} \norm{h_n}{L^2(\De\ell;\H)}^2 = \norm{h}{L^2(\De\ell;\H)}^2. \end{equation} Now let us observe that from Lemma \ref{L:Q(v)}, formula \eqref{DQ = Q'Dl-2}, and from \eqref{def w wn} we have that \begin{equation}
\De w = h\De\ell, \qquad \De w_n = h_n \De \ell_n. \end{equation} Let us also recall that the vector space of (vector) measures $\nu : \mathscr{B}(\clint{0,T}) \function \H$ can be endowed with the complete norm $\norm{\nu}{} := \vartot{\nu}(\clint{0,T})$, where $\vartot{\nu}$ is the total variation measure of $\nu$. Moreover from the definition of variation, inequality \eqref{V(f)-V(g)}, and the triangle inequality, we infer that \begin{equation}\label{Dl -Dln}
\norm{\De\ell - \De\ell_n}{} = \vartot{\De\ \!(\ell - \ell_n)}(\clint{0,T}) = \V(\ell - \ell_n, \clint{0,T}) \to 0 \qquad \text{as $n \to \infty$}. \end{equation}
From \eqref{|g_n|infty bdd} it follows that \begin{equation}
\vartot{\De w_n}(B) = \int_B \norm{h_{n}(t)}{} \de \De\ell_n(t) \le C \vartot{\De \ell_n}(B), \qquad
\forall B \in \mathscr{B}(\clint{0,T}), \end{equation} therefore, since $\De\ell_n \to \De \ell$ in the space of real measures, we infer that for every $\varepsilon > 0$ there exists $\delta > 0$ such that \[
\vartot{\De \ell}(B) < \delta \ \Longrightarrow\ \sup_{n \in \en} \vartot{\De w_n}(B) < \varepsilon \] for every $B \in \mathscr{B}(\clint{0,T})$. This allows us to apply the Dunford--Pettis weak sequential compactness theorem for vector measures (cf. \cite[Theorem 5, p. 105, Theorem 1, p. 101]{DieUhl77}) and we deduce that, at least for a subsequence, $\De w_n$ is weakly convergent to some measure $\nu : \mathscr{B}(\clint{0,T}) \function \H$. Hence thanks to \eqref{wn->w unif} and to \cite[Lemma 7.1]{KopRec16} we infer that \begin{equation}
\De w_n \ \text{is weakly convergent to}\ \De w, \end{equation} in particular, for every bounded Borel function $\varphi : \clint{0,T} \function \H$, since the functional $\nu \longmapsto \int_{\clint{0,T}} \duality{\varphi(t)}{\de \nu(t)}$ is linear and continuous on the space of measures with bounded variation, we have \[
\lim_{n \to \infty} \int_{\clint{0,T}} \duality{\varphi(t)}{\de \De w_n(t)} =
\int_{\clint{0,T}} \duality{\varphi(t)}{\de \De w(t)}, \] that is \begin{equation}\label{gn Dwn -> gDw}
\lim_{n \to \infty} \int_{\clint{0,T}} \duality{\varphi(t)}{h_{n}(t)} \de \De \ell_n(t) =
\int_{\clint{0,T}} \duality{\varphi(t)}{h(t)}\de \De \ell(t). \end{equation}
On the other hand, by \eqref{|g_n|infty bdd} there exists $\eta \in {\textsl{L}\hspace{0.17ex}}^{2}(\De\ell;\H)$ such that, at least for a subsequence which we do not relabel, $h_{n}$ is weakly convergent to $\eta$ in ${\textsl{L}\hspace{0.17ex}}^2(\De\ell;\H)$, therefore if we set $\psi_n(t) := \duality{\varphi(t)}{h_{n}(t)}$ and $\psi(t) := \duality{\varphi(t)}{\eta(t)}$ for $t \in \clint{0,T}$, we have that $\psi_n$ is weakly convergent to $\psi$ in ${\textsl{L}\hspace{0.17ex}}^2(\De\ell;\re)$, and \begin{align}
& \sp \left| \int_{\clint{0,T}} \psi_n(t) \de \De \ell_n(t) - \int_{\clint{0,T}} \psi(t) \de \De \ell(t) \right| \notag \\
& \le \int_{\clint{0,T}} |\psi_n(t)| \de \vartot{\De\ \!(\ell_n - \ell)}(t) +
\left| \int_{\clint{0,T}} (\psi_n(t) - \psi(t)) \de \De \ell(t) \right| \notag \\
& \le \norm{\varphi}{\infty}\norm{h_{n}}{\infty} \vartot{\De\ \!(\ell_n - \ell)}(\clint{0,T}) +
\left| \int_{\clint{0,T}} (\psi_n(t) - \psi(t)) \de \De \ell(t) \right| \to 0 \end{align}
as $n \to \infty$, because \eqref{|g_n|infty bdd} and \eqref{Dl -Dln} hold, and $\psi_n$ is weakly convergent to $\psi$ in ${\textsl{L}\hspace{0.17ex}}^2(\De\ell;\re)$. Therefore we have found that \[
\lim_{n \to \infty} \int_{\clint{0,T}} \duality{\varphi(t)}{h_{n}(t)} \de \De \ell_n(t)\ =
\int_{\clint{0,T}} \duality{\varphi(t)}{\eta(t)}\de \De \ell(t), \] hence, by \eqref{gn Dwn -> gDw}, \begin{equation}\label{g dl = z dl weakly}
\int_{\clint{0,T}} \duality{\varphi(t)}{\de(h\De \ell)(t)} =
\int_{\clint{0,T}} \duality{\varphi(t)}{\de (\eta\De \ell)(t)}. \end{equation} The arbitrariness of $\varphi$ and \eqref{g dl = z dl weakly} implies that $\eta\De \ell = h\De \ell$ (cf. \cite[Proposition 35, p. 326]{Din67}), hence $\eta(t) = h(t)$ for $\De\ell$-a.e. $t \in \clint{0,T}$ and we have found that \begin{equation}\label{gn to g deb}
h_{n} \rightharpoonup h \qquad \text{in ${\textsl{L}\hspace{0.17ex}}^2(\De\ell;\H)$}. \end{equation} Since ${\textsl{L}\hspace{0.17ex}}^2(\De\ell;\H)$ is a Hilbert space, from \eqref{norm to norm} and \eqref{gn to g deb} we deduce that \begin{equation}
h_{n} \to h \qquad \text{in ${\textsl{L}\hspace{0.17ex}}^2(\De\ell;\H)$} , \end{equation} and, since $\De\ell(\clint{0,T})$ is finite, \begin{equation}
h_{n} \to h \qquad \text{in ${\textsl{L}\hspace{0.17ex}}^1(\De\ell;\H)$}. \end{equation} Hence, at least for a subsequence which we do not relabel, $h_n(t) \to h(t)$ for $\De\ell$-a.e. $t \in \clint{0,T}$, thus \begin{align}
\V(w_n - w, \clint{0,T})= \norm{\De\ \!(w_n - w)}{}
& = \norm{\De w_n - \De w}{} =
\norm{h_{n} \De \ell_n - h \De \ell}{} \notag \\
& \le \norm{h_{n}\De\ \!(\ell_n - \ell)}{} + \norm{(h_{n} -h) \De\ell}{} \notag \\
& \le C \norm{\De\ \!(\ell_n - \ell)}{} +
\int_{\clint{0,T}}\norm{h_{n}(t) - h(t)}{} \de \De\ell(t) \to 0\notag \end{align} as $n \to \infty$ and we have proved that $\norm{w - w_n}{{\textsl{BV}\hspace{0.17ex}}} \to 0$ as $n\to \infty$. We can conclude recalling \eqref{def w wn} and that ${\mathsf{Q}}(v,z_0) = 2{\mathsf{P}}(v,z_0) - v$ for every $v \in {\textsl{C}\hspace{0.18ex}}(\clint{0,T};\H) \cap {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H)$.
We can finally infer the strict continuity of the play operator on ${\textsl{C}\hspace{0.18ex}}(\clint{0,T};\H) \cap {\textsl{BV}\hspace{0.17ex}}(\clint{0,T};\H)$.
\begin{proof}[Proof of Theorem \ref{T:BVstrict cont}] The proof of Theorem \ref{T:BVstrict cont} is now a consequence of Theorem \ref{T:BVnorm cont} and \cite[Theorem 3.4]{Rec11}. \end{proof}
\end{document} |
\begin{document}
\onehalfspacing
\title{\bf
Berry--Esseen bounds for design-based causal inference with possibly diverging treatment levels and varying group sizes } \author{Lei Shi and Peng Ding \footnote{Lei Shi, Division of Biostatistics, University of California, Berkeley, CA 94720 (E-mail: [email protected]). Peng Ding, Department of Statistics, University of California, Berkeley, CA 94720 (E-mail: [email protected]). } } \date{}
\maketitle
\begin{abstract} \citet{neyman1923application} introduced the randomization model, which uses the notation of potential outcomes to define causal effects and provides a framework for large-sample inference based on the design of the experiment. However, the existing theory for this framework is far from complete, especially when the number of treatment levels diverges and the group sizes vary greatly across treatment levels. We provide a unified discussion of statistical inference under the randomization model with general group sizes across treatment levels. We formulate the estimator in terms of a linear permutational statistic and use results based on Stein's method to derive various Berry--Esseen bounds on the linear and quadratic functions of the estimator. These new Berry--Esseen bounds serve as the basis for design-based causal inference with possibly diverging treatment levels and diverging dimension of causal effects. We also fill an important gap by proposing novel variance estimators for experiments with possibly many treatment levels without replications. Equipped with the newly developed results, design-based causal inference in general settings becomes more convenient with stronger theoretical guarantees. \end{abstract}
\noindent {\bf Keywords}: Central limit theorem; permutation; potential outcome; Stein's method; randomized experiment
\section{Motivation: randomization-based causal inference} \label{sec:motivation}
\subsection{Existing results}
In a seminal paper, \citet{neyman1923application} introduced the notation of potential outcomes to define causal effects. More importantly, he also proposed a framework for statistical inference of causal effects based on the design of the experiment. In particular, \citet{neyman1923application} considered an experiment with $N$ units and $Q$ treatment arms, where the number of units under treatment $q$ equals $N_q$, with $\sum_{q=1}^Q N_q= N$. Corresponding to treatment level $q $, unit $i$ has the potential outcome $Y_i(q)$, where $i=1, \ldots, N$ and $q = 1, \ldots, Q$. Despite its simplicity, the following completely randomized experiment has been widely used in practice and has generated rich theoretical results. Definition \ref{def:cr} below characterizes the joint distribution of ${{\boldsymbol{Z}}} = (Z_1, \ldots, Z_N)$ under complete randomization, where $Z_i \in \{1, \ldots, Q\}$ is the treatment indicator for unit $i$.
\begin{definition}[Complete randomization]\label{def:cr} $ {\mathbb{P}}( {\boldsymbol{Z}} = {\boldsymbol{z}}) = N_1 ! \cdots N_Q! / N! $ for all ${\boldsymbol{z}} = (z_1, \ldots, z_N ) $ with $(N_1, \ldots, N_Q)$ units receiving treatment level $(1,\ldots,Q)$, respectively. \end{definition}
\citet{neyman1923application} formulated complete randomization based on an urn model, which is equivalent to Definition \ref{def:cr}. Under complete randomization, ${\boldsymbol{Z}}$ is from a random permutation of $N_1$ $1$'s, $\ldots $, $N_Q$ $Q$'s. The experiment reveals one of the potential outcomes, which is the observed outcome $ Y_i = Y_i(Z_i) = \sum_{q=1}^Q Y_i(q) \ind{Z_i = q}$ for each unit $i$.
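As a concrete numerical sketch (our own illustration; the variable names and simulated potential outcomes are hypothetical, not from the original formulation), drawing ${\boldsymbol{Z}}$ under Definition \ref{def:cr} amounts to permuting a fixed vector of treatment labels, after which the experiment reveals $Y_i = Y_i(Z_i)$:

```python
import numpy as np

rng = np.random.default_rng(0)

N, Q = 12, 3
group_sizes = [4, 4, 4]  # N_1, ..., N_Q with sum N

# Complete randomization: a uniformly random permutation of
# N_1 copies of label 1, ..., N_Q copies of label Q.
labels = np.repeat(np.arange(1, Q + 1), group_sizes)
Z = rng.permutation(labels)

# Fixed potential outcomes table Y_pot[i, q-1] = Y_i(q); arbitrary numbers here.
Y_pot = rng.normal(size=(N, Q))

# The experiment reveals only the observed outcome Y_i = Y_i(Z_i).
Y_obs = Y_pot[np.arange(N), Z - 1]

# Group sizes are preserved by the permutation.
assert sorted(np.bincount(Z)[1:].tolist()) == sorted(group_sizes)
```

Every permutation of the label vector is equally likely, matching the probability $N_1!\cdots N_Q!/N!$ assigned to each assignment ${\boldsymbol{z}}$ in Definition \ref{def:cr}.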
In \citet{neyman1923application}'s framework, all potential outcomes are fixed and only the treatment indicators are random according to Definition \ref{def:cr}. \citet[][Chapter 9]{scheffe1959analysis} called it the {\it randomization model}. Under this model, it is conventional to call the resulting inference {\it randomization inference} or {\it design-based inference}. It has become increasingly popular in both theory and practice \citep[e.g.,][]{kempthorne52, copas1973randomization, robins1988confidence, rosenbaum2002observational, hinkelmann07, freedman2008Aregression, freedman2008Bregression, lin2013agnostic, dasgupta2015causal, imbens15, ATHEY201773, fogarty2018regression, guo2021generalized}.
A central goal in \citet{neyman1923application}'s framework is to use the observed data $(Z_i, Y_i)_{i=1}^N$ to make inference about causal effects defined by the potential outcomes. Define $$ \overline{Y}(q) = N^{-1}\sum_{i=1}^N Y_i(q) ,\quad S(q,q') = (N-1)^{-1} \sum_{i=1}^N(Y_i(q)-\overline{Y}(q))(Y_i(q')-\overline{Y}(q')) $$ as the average value of the potential outcomes under treatment $q$ and the covariance of the potential outcomes under treatments $q$ and $q'$, respectively. Define the average potential outcome vector as $\overline{Y} = (\overline{Y}(1), \ldots, \overline{Y}(Q))^\top \in {\mathbb{R}}^Q$, and define the covariance matrix of the potential outcomes as $S = (S(q,q'))_{q,q'=1,\ldots, Q}$. The parameter of interest is a linear transformation of $\overline{Y}$: \begin{align*}
\gamma = F^\top \overline{Y} \end{align*} for a pre-specified $F = (f_{qh})\in{\mathbb{R}}^{Q\times H}$, which is called the contrast matrix when its columns are orthogonal to $(1,\ldots,1)^\top$. Despite the simple form of $\gamma$, it can answer questions from a wide range of applications. For instance, \citet{neyman1923application} considered pairwise differences in means, and \citet{dasgupta2015causal} and \citet{mukerjee2018using} considered linear combinations of the mean vector. Recently, \citet{li2017general} unified the literature by studying the properties of the moment estimator for $\gamma$ under complete randomization. In particular, define the sample mean and variance of the observed $Y_i(q)$'s as \begin{align}\label{eqn:hS}
{\widehat{Y}}_q = N_q^{-1}\sum_{Z_i = q}Y_i,\quad {\widehat{S}}(q,q) = (N_q-1)^{-1} \sum_{Z_i = q} (Y_i - {{\widehat{Y}}_q} )^2, \end{align} respectively. Define \begin{align}\label{eqn:hY-hV} {\widehat{Y}} = ({\widehat{Y}}_1,\ldots,{\widehat{Y}}_Q)^\top\in{\mathbb{R}}^Q,\quad {\widehat{V}}_{\widehat{Y}} = \diag{N_q^{-1}{\widehat{S}}(q,q)}_{q\in[Q]} \in {\mathbb{R}}^{Q\times Q} \end{align} as the vector of sample averages and the diagonal matrix of the sample variances across all arms, respectively. \citet{li2017general} showed that $ {\widehat{Y}} $ has mean and covariance \begin{align}\label{eqn:mean-var-Gamma} {\mathbb{E}}\{{\widehat{Y}}\} = \overline{Y} ,\quad \Var{{\widehat{Y}}} = V_{\widehat{Y}} = \diag{N_q^{-1}S(q,q)}_{q\in[Q]} - N^{-1}S, \end{align} and moreover, ${\widehat{V}}_{\widehat{Y}}$ is a conservative estimator for $V_{\widehat{Y}}$ in the sense that $ {\mathbb{E}}\{{\widehat{V}}_{\widehat{Y}}\} - V_{\widehat{Y}}$ is positive semi-definite. An immediate consequence is that
\begin{align}\label{eqn:estimates}
{\widehat{\gamma}} = F^\top {\widehat{Y}}, \quad {\widehat{V}}_{\widehat{\gamma}} = F^\top {\widehat{V}}_{\widehat{Y}} F
\end{align} are an unbiased point estimator for $ \gamma$ and a conservative covariance estimator for $V_{\widehat{\gamma}} = \Var{{\widehat{\gamma}}} = F^\top V_{\widehat{Y}} F$, respectively. \citet{li2017general} also used the established combinatorial or rank central limit theorems (CLTs) \citep{hajek1960limiting, hoeffding1951combinatorial, fraser1956vector} to prove the asymptotic Normality of $ {\widehat{\gamma}} $ and the validity of the associated large-sample Wald-type inference, under certain regularity conditions.
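To make the estimators concrete, here is a minimal simulation sketch (the data and variable names are our own hypothetical choices, not from the paper) computing ${\widehat{\gamma}} = F^\top {\widehat{Y}}$ and ${\widehat{V}}_{\widehat{\gamma}} = F^\top {\widehat{V}}_{\widehat{Y}} F$ for pairwise comparisons against arm $1$:

```python
import numpy as np

rng = np.random.default_rng(1)
N, Q = 300, 3
Z = rng.permutation(np.repeat([1, 2, 3], N // Q))     # complete randomization
Y_pot = rng.normal(loc=[0.0, 0.5, 1.0], size=(N, Q))  # fixed potential outcomes
Y = Y_pot[np.arange(N), Z - 1]                        # observed outcomes

# Arm-wise sample means and variances, as in the display defining Y_hat_q, S_hat(q,q).
Y_hat = np.array([Y[Z == q].mean() for q in range(1, Q + 1)])
S_hat = np.array([Y[Z == q].var(ddof=1) for q in range(1, Q + 1)])
N_q = np.array([(Z == q).sum() for q in range(1, Q + 1)])
V_hat_Y = np.diag(S_hat / N_q)

# Contrast matrix for (arm 2 - arm 1, arm 3 - arm 1);
# each column is orthogonal to the all-ones vector.
F = np.array([[-1.0, -1.0],
              [ 1.0,  0.0],
              [ 0.0,  1.0]])

gamma_hat = F.T @ Y_hat          # unbiased point estimator
V_hat_gamma = F.T @ V_hat_Y @ F  # conservative covariance estimator
```

Because ${\widehat{V}}_{\widehat{Y}}$ is diagonal with nonnegative entries, ${\widehat{V}}_{\widehat{\gamma}}$ is automatically positive semi-definite, consistent with its role as a conservative covariance estimator.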
\subsection{Open questions} \label{sec::open-questions}
Despite the long history of \citet{neyman1923application}'s randomization model, the theory for randomization-based causal inference is far from complete. Technically, \citet{li2017general}'s review only covered the first regime \ref{regime:R1} below, and even there, finer results such as Berry--Esseen bounds (BEBs) have not been rigorously established for the most general setting (see \citet{wang2021rerandomization} for some recent results on BEBs for treatment-control experiments). For other regimes below, many basic results are still missing in the literature. Table \ref{tab:many-settings} provides a list of important regimes and reviews the established and missing theoretical results. The discussion below gives more details.
\begin{table}[t] \centering \caption{Theoretical results for multi-armed experiments under the randomization model. The regimes \ref{regime:R1}--\ref{regime:R4} correspond to nearly uniform designs by Definition \ref{def:uniform-design}, whereas the regime \ref{regime:R5} corresponds to non-uniform designs by Definition \ref{def:non-uniform-design}. } \label{tab:many-settings} \begin{tabular}{M{1.0cm}M{1.0cm}M{3cm}M{6cm}} \toprule
Regime & $Q$ & $N_q$ & CLT, variance estimation, and BEB \\ \hline
\ref{regime:R1} & Small & Large & CLT and variance estimation; no BEB \\ \ref{regime:R2} & Large & Large & Seems similar to \ref{regime:R1} but not studied \\
\ref{regime:R3} & Large & Small but $N_q\ge$ 2 & Not studied \\
\ref{regime:R4} & Large & $N_q = 1$ & Not studied; variance estimation is nontrivial \\ \hline
\ref{regime:R5} & \multicolumn{2}{c}{{Mixture of the above}} & Not studied \\ \bottomrule \end{tabular} \end{table}
\begin{enumerate}[label={({R\arabic*})}]
\item\label{regime:R1} Small $Q$ and large $N_q$'s. In this regime, the number of arms is small and the sample size in each arm is large. Asymptotically, as $N \to \infty$, we have that $Q$ is a fixed integer and $N_q/N\to e_q\in(0,1)$ for all $q=1,\ldots, Q$.
\cite{li2017general} showed that, under \ref{regime:R1} and some regularity conditions on the potential outcomes, we have \begin{equation}\label{eq::basic-asymptotic-inference}
V_{\widehat{\gamma}}^{-1/2}({\widehat{\gamma}} - \gamma) \rightsquigarrow {\mathcal{N}}(0, I_H), \quad N{\widehat{V}}_{\widehat{\gamma}} - N{\mathbb{E}}\{{\widehat{V}}_{\widehat{\gamma}}\} = o_{\mathbb{P}}(1), \end{equation} which ensure that the large-sample Wald-type inference based on the Normal approximation is conservative. \cite{li2017general}'s results are asymptotic. An important theoretical question is to quantify the finite-sample properties of ${\widehat{\gamma}}$ by deriving non-asymptotic results.
\item\label{regime:R2} Large $Q$ and large $N_q$'s. In this regime, each arm has adequate units for the variance estimation, but the number of arms is also large. Asymptotically, as $N \to \infty$, we have $Q\to \infty$ and $N_q \to \infty$ for all $q=1,\ldots, Q$. Consequently, the limiting values of some $N_q/N$'s must be 0. The point estimates and variance estimators in \eqref{eqn:estimates} are still well-defined in this regime. We might expect that the asymptotic results in \eqref{eq::basic-asymptotic-inference} still hold because of large $N_q$'s. However, previous theoretical results do not cover this seemingly easy case due to the possibly diverging dimension of $F$.
\item\label{regime:R3} Large $Q$ and small $N_q$'s. In this regime, the number of arms is large but the sample size within each arm is small. Asymptotically, as $N \to \infty$, we have $Q\to \infty$ and $2\le N_q\le \overline{n} $ for some fixed $\overline{n} \ge 2$. This regime is well suited for many factorial experiments (see Example \ref{eg::factorial-design} below), in which the total number of factor combinations can be much larger than the number of replications in each combination \citep[e.g.,][]{mukerjee2006modern, wu2011experiments}. Although the point estimate and variance estimator in \eqref{eqn:estimates} are still well-defined, we do not expect a simple CLT based on the joint asymptotic Normality of ${\widehat{Y}}$ due to the small $N_q$'s. Nevertheless, $\widehat{\gamma} = F^\top {\widehat{Y}}$, as a linear transformation of ${\widehat{Y}}$, can still satisfy the CLT for some choice of $F$. This regime is reminiscent of the so-called {\it proportional asymptotics} in regression analysis, and even there, statistical inference is still not satisfactory in general \citep[e.g.,][]{el2013robust, lei2018asymptotics, el2018can}. Technically, we need to analyze $ F^\top {\widehat{Y}}$ with the dimension of ${\widehat{Y}}$ proportional to the sample size under the randomization model. This is a gap in the literature.
\item\label{regime:R4} Large $Q$ and $N_q = 1 $ for all $q = 1, \ldots, Q$. This regime is much harder than \ref{regime:R3} because the variance estimator in \eqref{eqn:estimates} is not even well defined due to the lack of replications within each arm \citep[e.g.,][]{espinosa2016bayesian}. Therefore, we need to answer two fundamental questions. First, does $ F^\top {\widehat{Y}}$ still satisfy the CLT for some $F$? Second, how do we estimate the variance of $ F^\top {\widehat{Y}}$? These two questions are the basis for large-sample Wald-type inference in this regime. Neither has been covered by existing results.
\item\label{regime:R5} Mixture of \ref{regime:R1}--\ref{regime:R4}. In the most general case, it is possible that the number of treatment levels diverges and the group sizes within different treatment arms vary a lot. Theoretically, we can partition the treatment levels into different types corresponding to the four regimes above. Understanding \ref{regime:R5} relies on understanding \ref{regime:R1}--\ref{regime:R4}. Due to the difficulties in \ref{regime:R1}--\ref{regime:R4} mentioned above, a rigorous analysis of \ref{regime:R5} requires deeper understanding of the randomization model. This is another gap in the literature.
\end{enumerate}
For descriptive convenience, we define \ref{regime:R1}--\ref{regime:R4} as \textit{nearly uniform designs} and \ref{regime:R5} as \textit{non-uniform designs}, respectively, based on the heterogeneity of the group sizes across treatment arms. Definitions \ref{def:uniform-design} and \ref{def:non-uniform-design} below make the intuition more precise.
\begin{definition}[Nearly uniform design]\label{def:uniform-design} There exists a positive integer $N_0 > 0$ and absolute constants $\underline{c} \le \overline{c}$, such that $ N_q = c_q {N}_0$ with $\underline{c}\le c_q\le \overline{c}$, for all $q=1,\ldots, Q$. \end{definition}
\begin{definition}[Non-uniform design]\label{def:non-uniform-design} Partition the treatment arms as $\{ 1,\ldots, Q \}= {\mathcal{Q}}_{\textsc{s}}\cup{\mathcal{Q}}_{\textsc{l}}$ with detailed descriptions below.
(i) ${\mathcal{Q}}_{\textsc{l}}$ contains the arms with large sample sizes. There exists a positive integer $N_0$ and absolute constants $\underline{c} \le \overline{c}$, such that $N_q = c_q N_0$ with $\underline{c}\le c_q\le \overline{c}$, for all $q\in{\mathcal{Q}}_{\textsc{l}}$.
(ii) ${\mathcal{Q}}_{\textsc{s}}$ contains the arms with small sample sizes. There exists a fixed integer $\overline{n} $ such that
$N_q \le \overline{n} $ for all $q\in{\mathcal{Q}}_{\textsc{s}}$. Further partition ${\mathcal{Q}}_{\textsc{s}}$ as ${\mathcal{Q}}_{\textsc{s}} = {\mathcal{Q}}_{\textsc{u}} \cup {\mathcal{Q}}_{\textsc{r}}$ where
\begin{itemize}
\item ${\mathcal{Q}}_{\textsc{r}}$ contains the arms with replications, that is, $ 2\le N_q \le \overline{n}$ for all $q\in{\mathcal{Q}}_{\textsc{r}}$;
\item ${\mathcal{Q}}_{\textsc{u}}$ contains the arms without replications, that is, $N_q = 1$ for all $q\in{\mathcal{Q}}_{\textsc{u}}$.
\end{itemize} \end{definition}
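As an illustration (the helper name and the cutoff $\overline{n}=3$ are our own choices for the sketch, not prescribed by the definition), the partition ${\mathcal{Q}}_{\textsc{u}} \cup {\mathcal{Q}}_{\textsc{r}} \cup {\mathcal{Q}}_{\textsc{l}}$ can be read off directly from the group sizes:

```python
def partition_arms(group_sizes, n_bar=3):
    """Partition arms by group size, following Definition (non-uniform design):
    Q_u: unreplicated arms (N_q == 1),
    Q_r: small arms with replications (2 <= N_q <= n_bar),
    Q_l: large arms (N_q > n_bar)."""
    Q_u = [q for q, n in enumerate(group_sizes, start=1) if n == 1]
    Q_r = [q for q, n in enumerate(group_sizes, start=1) if 2 <= n <= n_bar]
    Q_l = [q for q, n in enumerate(group_sizes, start=1) if n > n_bar]
    return Q_u, Q_r, Q_l

# A design mixing unreplicated, small, and large arms.
Q_u, Q_r, Q_l = partition_arms([1, 1, 2, 3, 50, 60])
```

Here arms 1 and 2 are unreplicated, arms 3 and 4 are small with replications, and arms 5 and 6 are large; setting $|{\mathcal{Q}}_{\textsc{r}}| = |{\mathcal{Q}}_{\textsc{l}}| = 0$ recovers the unreplicated design.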
For simplicity, we will use $|{\mathcal{Q}}_{\star}|$ and $N_\star = \sum_{q\in{\mathcal{Q}}_\star} N_q$ to denote the number of arms and the sample size in ${\mathcal{Q}}_\star$, respectively, where $\star \in \{ \textsc{s},\textsc{u},\textsc{r},\textsc{l}\}$. As a special case of Definition \ref{def:non-uniform-design}, $|{\mathcal{Q}}_{\textsc{r}}| = |{\mathcal{Q}}_{\textsc{l}}| = 0$ corresponds to \textit{unreplicated designs} in which each treatment level has only one observation.
We will also use the $2^K$ factorial design as a canonical example for many theoretical results throughout. We review the basic setup of the $2^K$ factorial design in Example \ref{eg::factorial-design} below \citep{dasgupta2015causal, lu2016covariate, zhao2021regression}.
\begin{example}[Factorial design] \label{eg::factorial-design} A $2^K$ factorial design has $K$ binary factors which generate $Q=2^K$ possible treatment levels. Index the potential outcomes $Y_i(q)$'s also as $Y_i(z_1, \ldots, z_K)$'s, where $q = 1,\ldots, Q$ and $z_1,\ldots, z_K = 0,1$. The parameter of interest $ \gamma = F^\top \overline{Y} $ may consist of a subset of the factorial effects. The contrast matrix $F$ has orthogonal columns and $\pm Q^{-1}$ entries; see \citet{dasgupta2015causal} for precise definitions of main effects and interactions. \end{example}
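For instance, the main-effect columns of the contrast matrix can be generated mechanically (a sketch with $\pm Q^{-1}$ entries as in Example \ref{eg::factorial-design}; the sign convention is our assumption and may differ from the precise definitions in \citet{dasgupta2015causal}):

```python
import itertools
import numpy as np

K = 3
Q = 2 ** K

# Rows: the Q treatment combinations (z_1, ..., z_K) in {0,1}^K.
levels = np.array(list(itertools.product([0, 1], repeat=K)))  # shape (Q, K)

# Columns: the K main-effect contrasts, mapping 0 -> -1/Q and 1 -> +1/Q.
F = (2 * levels - 1) / Q

# Each column sums to zero (orthogonal to the all-ones vector),
# and distinct columns are mutually orthogonal by the balance of the design.
assert np.allclose(F.sum(axis=0), 0)
assert np.allclose(F.T @ F, np.eye(K) / Q)
```

Interaction contrasts can be obtained analogously as entrywise products of the corresponding $\pm 1$ sign columns, rescaled by $Q^{-1}$.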
By definition, the factorial design can either be a nearly uniform design or a non-uniform design, depending on the group sizes across treatment levels. Previous asymptotic results only covered factorial designs under \ref{regime:R1} with fixed $K$ and large sample sizes for all treatment levels. This asymptotic regime can be a poor approximation to finite-sample properties of factorial designs with even a moderate $K$ (for example, if $K=10$ then $Q=2^K > 1000$). Based on simulation, \citet[][Appendix D]{zhao2021regression} showed that CLTs are likely to hold even with diverging $K$ and small sample sizes for all treatment levels. Allowing for a diverging $K$, \citet[][Theorem A1]{li2017general} derived the CLT for a single factorial effect under the sharp null hypothesis of no treatment effects for any units whatsoever, i.e., $Y_i(1) = \cdots = Y_i(Q)$ for all $i=1,\ldots, N$. However, deriving general asymptotic results for the factorial design has been an open problem in the literature.
\subsection{Our contributions}
Section \ref{sec::open-questions} has reviewed various designs and the associated open problems. In this paper, we will give a unified study of all the designs above. We further the literature in the following ways.
First, we formulate the inference problem under the randomization model in terms of linear permutational statistics. This formulation allows us to build upon the existing results in probability theory \citep{Bolthausen1984AnEO, chatterjee2008multivariate} to derive BEBs on the point estimator of the causal effect. In particular, our analysis emphasizes the dependence on the number of treatment levels and the dimension of the causal effects of interest. Importantly, we derive BEBs that can deal with non-uniform designs with varying group sizes.
Second, we establish a novel BEB on quadratic forms of the linear estimator under the randomization model. Importantly, this BEB allows that the number of treatment levels diverges, the sample sizes across treatment levels vary, and the dimension of the causal effects of interest diverges. It serves as the basis for the $\chi^2$ approximation for large-sample Wald-type inference.
Third, we propose variance estimators for unreplicated designs and mixture designs that allow for the group size to be one in many treatment levels. To the best of our knowledge, the variance estimators are new in the literature of design-based causal inference, although they share some features with those in finely stratified survey sampling \citep[e.g.,][]{cochran1977sampling, wolter2007introduction, breidt2016nonparametric} and experiments \citep[e.g.,][]{abadie2008estimation, fogarty2018mitigating}. However, the theoretical analysis of the new variance estimators is much more challenging because of the dependence of the treatment indicators under the randomization model. We also study their probability limits and establish a complete theory that allows for large-sample Wald-type inference.
Fourth, in the process of achieving the above three sets of results, we establish some intermediate theoretical results that are potentially useful for other problems. For instance, we prove a novel BEB for linear permutational statistics over convex sets, building upon a recent result based on Stein's method \citep{fang2015rates}. We also obtain fine results on the sample moments under the randomization model. Due to the space limit, we relegate them to Appendices A and C in the supplementary material.
\subsection{Notation} We use $C$ to denote generic constants that may vary. Let $\Phi(t)$ denote the cumulative distribution function of a standard Normal distribution. For two sequences of numbers, $a_N$ and $b_N$, let $a_N = O(b_N)$ denote $ a_N \le Cb_N$ for some positive constant $C>0$, and let $a_N = o(b_N) $ denote $a_N/b_N \to 0 $ as $N\to\infty$. For a positive integer $N$, define $[N] = \{1,\cdots,N\}$. Let ${\boldsymbol{0}}_N$ and ${\boldsymbol{1}}_N$ denote, respectively, vectors of all zeros and ones in ${\mathbb{R}}^N$. For two random variables $X$ and $X'$, we use $X\lesssim X'$ or $X'\gtrsim X$ to represent that $X'$ stochastically dominates $X$, i.e., ${\mathbb{P}}\{X' \le t\} \le {\mathbb{P}}\{X \le t\}$ for all $t\in{\mathbb{R}}$. For any covariance matrix $V$, let $V^\star$ denote the corresponding correlation matrix.
Consider a matrix $M=(M(h,l))\in{\mathbb{R}}^{H\times H}$. Let $M(\cdot,l)\in {\mathbb{R}}^{H\times 1}$ and $M(h, \cdot)\in{\mathbb{R}}^{1\times H}$ denote its $l$-th column and $h$-th row, respectively. Let $\varrho_k(M)$ denote its $k$-th largest singular value. In particular, let $\varrho_{\max}(M)$ and $\varrho_{\min}(M)$ denote the largest and smallest singular values, respectively. Define its condition number as the ratio of its largest and smallest singular values: $ \kappa(M) = {\varrho_{\max}(M)}/{\varrho_{\min}(M)}
$. Let $\|M\|_\textsc{f} = (\sum_{h=1}^H\sum_{l=1}^H M_{hl}^2)^{1/2}$, $\|M\|_{\operatorname{op}} = \{\varrho_{\max}(M^\top M)\}^{1/2}$, $\|M\|_{p,q}= (\sum_{l=1}^H \|M(\cdot,l)\|_p^q)^{1/q} = \{\sum_{l=1}^H(\sum_{h=1}^H |M_{hl}|^p)^{q/p}\}^{1/q}$ ($1\le p < \infty$), $\|M\|_\infty = \max_{h,l\in [H]} |M_{hl}|$ be, respectively, the Frobenius norm, the operator norm, the $L_{p,q}$ norm and the vectorized $\ell_\infty$ norm.
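These norms and the condition number can be checked numerically on a small example (a sketch with an arbitrary matrix of our choosing):

```python
import numpy as np

M = np.array([[3.0, 0.0],
              [4.0, 5.0]])

fro = np.linalg.norm(M, 'fro')           # Frobenius norm
op = np.linalg.norm(M, 2)                # operator norm = largest singular value
linf = np.max(np.abs(M))                 # vectorized ell_infinity norm
sv = np.linalg.svd(M, compute_uv=False)  # singular values, in descending order
kappa = sv[0] / sv[-1]                   # condition number

# The Frobenius norm equals the root sum of squared singular values,
# so the operator norm never exceeds it.
assert np.isclose(fro, np.sqrt(np.sum(sv ** 2)))
assert op <= fro + 1e-12
```

For this $M$, $M^\top M$ has eigenvalues $45$ and $5$, so the singular values are $\sqrt{45}$ and $\sqrt{5}$ and $\kappa(M) = 3$.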
Design-based results rely crucially on conditions on $$
M_N(q) = \max_{i\in[N]}|Y_i(q)-\overline{Y}(q)|, \quad (q=1,\ldots, Q) $$ which is the maximum absolute deviation of the potential outcomes $Y_i(q)$ from their mean. \citet{hajek1960limiting} used it in proving the CLT for simple random sampling, and \citet{li2017general} used it in proving CLTs for design-based causal inference. It will also appear frequently in our presentation below.
\section{BEBs for the moment estimator under completely randomized experiments}\label{sec:PCLT-projection} This section presents the BEBs for the moment estimator ${\widehat{\gamma}}$ in \eqref{eqn:estimates} under completely randomized experiments. Section \ref{sec::BEBs-linear-estimators} presents general BEBs for linear projections of ${\widehat{\gamma}}$. Sections \ref{sec::BEB-nearly-uniform-design} and \ref{sec::BEB-non-uniform-design} then apply them to derive useful BEBs for nearly uniform and non-uniform designs, respectively.
\subsection{BEBs on the moment estimator}\label{sec::BEBs-linear-estimators}
To simplify the presentation, standardize $ {\widehat{\gamma}} = F^\top {\widehat{Y}}$: \begin{align}\label{eqn:tilde-gamma}
\widetilde{\gamma} = V_{\widehat{\gamma}}^{-1/2}({\widehat{\gamma}} - \gamma) \quad \text{ with } \quad \E{\widetilde{\gamma}} = 0 \text{ and } \Var{\widetilde{\gamma}} = I_H. \end{align} The standardization \eqref{eqn:tilde-gamma} assumes that the covariance matrix $V_{\widehat{\gamma}}$ is not singular; we assume this for convenience and without loss of generality. When the contrast matrix $F$ has linearly dependent columns and $V_{\widehat{\gamma}}$ becomes degenerate, we can focus on a subset of linearly independent columns of $F$.
Our key results are BEBs on linear projections of $ \widetilde{\gamma}$. Condition \ref{cond:well-conditioned} below will facilitate the discussion.
\begin{condition}[Non-degenerate covariance matrix of ${\widehat{\gamma}}$]\label{cond:well-conditioned} There exists $\sigma_F \ge 1$ such that \begin{align}\label{eqn:well-conditioned}
F^\top \diag{N_q^{-1}S(q,q)} F \preceq \sigma^2_F F^\top V_{\widehat{Y}} F . \end{align} \end{condition}
Theorem \ref{thm:be-proj-standard} below gives a general BEB for linear projections of $\widetilde{\gamma}$.
\begin{theorem}[BEBs for linear projections of $\widetilde{\gamma}$]\label{thm:be-proj-standard} Assume complete randomization. (i) There exists a universal constant $C>0$, such that for any $b\in{\mathbb{R}}^H$
with $ \|b\|_2 = 1$, we have \begin{align*}
\sup_{t\in{\mathbb{R}}}\left|{\mathbb{P}}\{b^\top \widetilde{\gamma} \le t\} - \Phi(t)\right| \le C\left\|b^\top V_{\widehat{\gamma}}^{-1/2} F^\top\right\|_\infty \cdot \max_{q\in [Q]} N_q^{-1} M_N(q) .
\end{align*} (ii) Further assume Condition \ref{cond:well-conditioned}. There exists a universal constant $C>0$, such that \begin{align}\label{eqn:uniform-be}
\sup_{b\in{\mathbb{R}}^H,\|b\|_2 =1}\sup_{t\in{\mathbb{R}}}\left|{\mathbb{P}}\{b^\top \widetilde{\gamma} \le t\} - \Phi(t)\right|
\le C \max_{i\in[N],q\in[Q]} \min\left\{\mathrm{I}(i,q), \mathrm{II}(i,q)\right\}, \end{align} where \begin{align}\label{eqn::terms1and2-be-proj-standard}
\mathrm{I}(i,q) = {\sigma_F} \left|\frac{ Y_i(q)-\overline{Y}(q)}{\sqrt{N_q S(q,q)}}\right|,\quad \mathrm{II}(i,q) = \frac{ \|F(q, \cdot)\|_2 \cdot N_q^{-1}|Y_i(q)-\overline{Y}(q)|}{\sqrt{\varrho_{\min}\{F^\top V_{\widehat{Y}} F\}}}. \end{align} \end{theorem}
The upper bound in Theorem \ref{thm:be-proj-standard}(i) depends on the choice of $b$, whereas the upper bound in Theorem \ref{thm:be-proj-standard}(ii) is uniform over all $b$. The first step of the proof of Theorem \ref{thm:be-proj-standard} is to formulate $\widetilde{\gamma}$ as a linear permutational statistic so that we can apply an existing BEB by \citet{Bolthausen1984AnEO}. The second step of the proof is to derive the upper bounds in terms of I and II in \eqref{eqn::terms1and2-be-proj-standard} from two different perspectives, which, to the best of our knowledge, is non-trivial. See Appendix D for technical details. The second step is the key step that makes Theorem \ref{thm:be-proj-standard} applicable to a wide range of designs.
Before discussing the useful consequences of Theorem \ref{thm:be-proj-standard}(ii), we first comment on Condition \ref{cond:well-conditioned}, which plays a key role in deriving the upper bound in Theorem \ref{thm:be-proj-standard}(ii). Condition \ref{cond:well-conditioned}, coupled with \eqref{eqn:mean-var-Gamma}, implies that \begin{align*}
\sigma_F^{-2} F^\top \diag{N_q^{-1}S(q,q)} F \preceq V_{\widehat{\gamma}} \preceq F^\top \diag{N_q^{-1}S(q,q)} F, \end{align*} i.e., the covariance matrix $V_{\widehat{\gamma}}$ is upper and lower bounded by $F^\top \diag{N_q^{-1}S(q,q)} F$, up to constants. Lemma \ref{lem:suff-conds} below gives two sufficient conditions for Condition \ref{cond:well-conditioned} to aid the interpretation.
\begin{lemma}[Sufficient conditions for Condition \ref{cond:well-conditioned}]\label{lem:suff-conds} (i) Condition \ref{cond:well-conditioned} holds with $\sigma_F = 1$ if the individual causal effects are constant, that is, $ F^\top\{(Y_i(q))_{q=1}^Q - \overline{Y}\}= 0$ for all $i\in[N]$. (ii) Condition \ref{cond:well-conditioned} holds with $\sigma_F = c\sigma$ if $\max_{q\in[Q]}N_q \le (1-c)N$ for some $0<c<1$ and the condition number of the correlation matrix corresponding to $V_{\widehat{Y}}$ is upper bounded by $\sigma^2$. \end{lemma}
The sufficient conditions in Lemma \ref{lem:suff-conds} are somewhat standard in the literature, especially under \ref{regime:R1}. \citet[][Corollary 2]{li2017general} gives a CLT under the assumption of constant individual causal effects, which is a special case of Lemma \ref{lem:suff-conds}(i). \citet[][Theorem 5]{li2017general} proves a CLT under the assumption that $S$ has a finite limiting value. When the limit is positive definite, $V_{\widehat{Y}}$ also converges to a positive definite matrix, which becomes a special case of Lemma \ref{lem:suff-conds}(ii). The general forms of Condition \ref{cond:well-conditioned} and Lemma \ref{lem:suff-conds} are more useful for \ref{regime:R2}--\ref{regime:R5}.
Now we comment on Theorem \ref{thm:be-proj-standard}(ii). The upper bound in \eqref{eqn:uniform-be} depends on the minimum of two terms. It is convenient to apply these two terms to different treatment arms based on the structure of the design. We elaborate on this idea by revisiting \ref{regime:R1} to \ref{regime:R5}. \begin{itemize}
\item For \ref{regime:R1} and \ref{regime:R2}, because the $N_q$'s are large, we can use term I in \eqref{eqn::terms1and2-be-proj-standard} and obtain a sufficient condition for a vanishing upper bound.
\item For \ref{regime:R3} and \ref{regime:R4}, the $N_q$'s are bounded and term I in \eqref{eqn::terms1and2-be-proj-standard} has constant order. However, term II in \eqref{eqn::terms1and2-be-proj-standard} is small under mild conditions on $F$. For instance, in the factorial design in Example \ref{eg::factorial-design}, the following algebraic facts hold: \begin{equation} \label{eq::contrast-matrix}
\|F\|_{\infty} = Q^{-1},\quad \| F(q, \cdot) \|_2 = Q^{-1} \sqrt{H}, \quad \varrho_{\min}(F^\top F) = Q^{-1}. \end{equation} Combining Condition \ref{cond:well-conditioned} and \eqref{eq::contrast-matrix}, we have \begin{align}
\varrho_{\min}(F^\top V_{\widehat{Y}} F) &\ge \sigma_F^{-2} \varrho_{\min}(F^\top \diag{N_q^{-1}S(q,q)} F) \notag\\
&\ge \sigma_F^{-2}\min_{q\in[Q]} \{N_q^{-1}S(q,q)\}\cdot \varrho_{\min}(F^\top F) \notag\\
&= Q^{-1} \sigma_F^{-2} \min_{q\in[Q]} \{N_q^{-1}S(q,q)\}.\label{eqn:factorial-F} \end{align} If we assume $\max_{q\in[Q]}M_N(q)^2/\min_{q\in[Q]}S(q,q)$ is of constant order, because the $N_q$'s are bounded, term II in \eqref{eqn::terms1and2-be-proj-standard} has order $O(\sqrt{H/Q})$, which is small if $H/ Q \rightarrow 0$.
\item For \ref{regime:R5}, we can partition the treatment arms based on the sizes of $N_q$'s to achieve a trade-off between terms I and II in \eqref{eqn::terms1and2-be-proj-standard}. In particular, for non-uniform designs in Definition \ref{def:non-uniform-design}, a natural partition is $ [Q] = {\mathcal{Q}}_{\textsc{l}} \cup {\mathcal{Q}}_{\textsc{s}}$. On the one hand, the arms in ${\mathcal{Q}}_{\textsc{l}}$ contain many units, so term I in \eqref{eqn::terms1and2-be-proj-standard} vanishes asymptotically. On the other hand, ${\mathcal{Q}}_{\textsc{s}}$ contains many arms which makes term II in \eqref{eqn::terms1and2-be-proj-standard} small under mild conditions on $F$. \end{itemize}
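The algebraic facts \eqref{eq::contrast-matrix} are easy to verify numerically. The sketch below builds a hypothetical contrast matrix for a $2^K$ factorial design by stacking the $\pm 1$ main-effect and two-way-interaction columns scaled by $Q^{-1}$ (our assumed convention, chosen to match $\|F\|_{\infty} = Q^{-1}$):

```python
import itertools
import numpy as np

K = 3
Q, H = 2**K, K*(K+1)//2                     # number of arms and number of contrasts
signs = np.array(list(itertools.product([-1, 1], repeat=K)))    # Q x K design matrix
cols = [signs[:, k] for k in range(K)]                          # main-effect columns
cols += [signs[:, a] * signs[:, b]
         for a, b in itertools.combinations(range(K), 2)]       # two-way interactions
F = np.column_stack(cols) / Q               # Q x H contrast matrix

assert F.shape == (Q, H)
max_entry = np.abs(F).max()                 # ||F||_inf = 1/Q
row_norm = np.linalg.norm(F[0])             # ||F(q,.)||_2 = sqrt(H)/Q, same for every q
eig_min = np.linalg.eigvalsh(F.T @ F).min() # rho_min(F^T F) = 1/Q
```

The three computed quantities equal $Q^{-1}$, $Q^{-1}\sqrt{H}$, and $Q^{-1}$, respectively, because the $\pm 1$ columns of a full factorial are mutually orthogonal with squared norm $Q$.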
We will provide rigorous results in the next two sections by applying Theorem \ref{thm:be-proj-standard} to obtain useful BEBs for different designs.
\subsection{A BEB with a proper contrast in nearly uniform designs}\label{sec::BEB-nearly-uniform-design}
In \ref{regime:R1} with a fixed $Q$ and large $N_q$'s, it is intuitive to have CLTs for linear transformations of ${\widehat{Y}}$ because ${\widehat{Y}}$ itself has a CLT. In other regimes, for instance \ref{regime:R4}, CLTs for linear transformations of ${\widehat{Y}}$ are less intuitive. Consider a diverging $Q$ and bounded $N_q$'s. If $F = (1,0,\dots,0)^\top \in {\mathbb{R}}^Q$, then the CLT for $F^\top {\widehat{Y}} = {\widehat{Y}}_1$ does not hold due to the bounded sample size in treatment arm 1. As another toy example, if $F = ({\boldsymbol{1}}_Q, {\boldsymbol{1}}_Q) \in{\mathbb{R}}^{Q\times 2}$, then $F^\top{\widehat{Y}}$ has a degenerate covariance structure and Theorem \ref{thm:be-proj-standard} cannot be directly applied. Therefore, CLTs should be established for proper contrast matrices. Corollary \ref{cor:uniform-design-be} below gives a positive result on the BEBs for proper contrasts. We first introduce Condition \ref{condition::proper} below on $F$.
\begin{condition}[Proper contrast] \label{condition::proper}
The contrast matrix $F$ satisfies $ \|F\|_\infty \le cQ^{-1}$ and $\varrho_{\min}\{F^\top F\} \ge c'Q^{-1}$ for some constants $c,c'>0$. \end{condition}
Condition \ref{condition::proper} appears to depend on the scale of $F$, although the BEB itself does not, due to the standardization of ${\widehat{\gamma}}$. We present the above form of Condition \ref{condition::proper} to facilitate the discussion of the factorial design in Example \ref{eg::factorial-design}, in which the scale of $F$ is motivated by scientific questions of interest. When $Q$ is fixed, Condition \ref{condition::proper} holds if $F$ has full column rank. So in \ref{regime:R1}, Condition \ref{condition::proper} does not impose any additional assumptions beyond the standard ones. When $Q$ diverges, Condition \ref{condition::proper} rules out sparse $F$ that only yields linear combinations of ${\widehat{Y}}$ over a small number of treatment arms. Also, the minimum eigenvalue condition in Condition \ref{condition::proper} ensures a non-degenerate covariance structure. Example \ref{eg::factorial-proper-F} below gives a more detailed discussion of the factorial design.
We then give Corollary \ref{cor:uniform-design-be} below.
\begin{corollary}[BEB for nearly uniform designs]\label{cor:uniform-design-be} Assume complete randomization that satisfies Definition \ref{def:uniform-design}, Conditions \ref{cond:well-conditioned} and \ref{condition::proper}. There exists a universal constant $C>0$, such that \begin{align}\label{eqn:uniform-design-be}
\sup_{b\in{\mathbb{R}}^H,\|b\|_2 =1}\sup_{t\in{\mathbb{R}}}\left|{\mathbb{P}}\{b^\top \widetilde{\gamma} \le t\} - \Phi(t)\right|
\le
C\sigma_F \frac{ \max_{ q\in[Q]} M_N(q) }{ \{ \min_{q\in[Q]} S(q,q) \}^{1/2} } \sqrt{ \frac{H}{N} }
. \end{align}
\end{corollary}
We make several comments on Corollary \ref{cor:uniform-design-be}. First, technically, we derive the upper bound in Corollary \ref{cor:uniform-design-be} based on the upper bound from term II in \eqref{eqn::terms1and2-be-proj-standard} in Theorem \ref{thm:be-proj-standard}(ii), which is uniform over $b$. Therefore, the upper bound in Corollary \ref{cor:uniform-design-be} preserves the uniformity and does not depend on $b$. Second, the upper bound in \eqref{eqn:uniform-design-be} reveals the interplay of several parameters: the number of contrasts $H$, the number of units $N$, the scale of the potential outcomes $M_N(q)$, the minimum second moment $\min_{q\in[Q]} S(q,q)$, as well as the structure of $F$. Third, the upper bound in \eqref{eqn:uniform-design-be} decreases at the rate of $(H / N)^{1/2}$, which covers {\ref{regime:R1}--\ref{regime:R4}}.
\begin{remark} The denominator of \eqref{eqn:uniform-design-be} depends on $\min_{q\in[Q]} S(q,q)$, which is useful when the variances of the potential outcomes are lower bounded. For ease of presentation, we do not discuss more complicated cases in which some $S(q,q)$'s are small. We can slightly modify the proof of Corollary \ref{cor:uniform-design-be} to cover scenarios where some $S(q,q)$'s are close or equal to zero. \end{remark}
We conclude this subsection with an example on the nearly uniform factorial design.
\begin{example}[Nearly uniform factorial design]\label{eg::factorial-proper-F} Recall Example \ref{eg::factorial-design} and assume it satisfies Definition \ref{def:uniform-design}. Let $F\in{\mathbb{R}}^{Q\times H}$ with $H = K+K(K-1)/2={K(K+1)}/{2}$ be the contrast matrix for all main effects and two-way interactions. Assume Condition \ref{cond:well-conditioned} and recall \eqref{eq::contrast-matrix}. Corollary \ref{cor:uniform-design-be} implies
\begin{align} \label{eqn:factorial-BE}
\sup_{b\in{\mathbb{R}}^H,\|b\|_2 =1}\sup_{t\in{\mathbb{R}}}\left|{\mathbb{P}}\{b^\top \widetilde{\gamma} \le t\} - \Phi(t)\right|
\le
C\sigma_F \frac{ \max_{q\in[Q]} M_N(q) }{ \{ \min_{q\in[Q]} S(q,q) \}^{1/2} } \sqrt{\frac{K^2}{N}} . \end{align} From \eqref{eqn:factorial-BE}, we can obtain a sufficient condition for the upper bound to converge to 0, which implies a CLT of $\widetilde{\gamma}$.
\end{example}
\subsection{A BEB for non-uniform designs}\label{sec::BEB-non-uniform-design}
Now consider the non-uniform design in Definition \ref{def:non-uniform-design}. We can apply Theorem \ref{thm:be-proj-standard} to establish the following BEB for non-uniform designs:
\begin{corollary}[BEB for non-uniform designs]\label{cor:non-uniform-design-be} Assume complete randomization that satisfies Definition \ref{def:non-uniform-design} and Conditions \ref{cond:well-conditioned} and \ref{condition::proper}. Also assume \begin{align}\label{eqn:small-H}
Q \ge 2(c^2/c')H|{\mathcal{Q}}_{\textsc{l}}| \end{align} recalling the constants $c$ and $c'$ in Condition \ref{condition::proper}. There exists a universal constant $C>0$, such that \begin{align}\label{eqn:non-uniform-design-be}
&\sup_{b\in{\mathbb{R}}^H,\|b\|_2 =1}\sup_{t\in{\mathbb{R}}}\left|{\mathbb{P}}\{b^\top \widetilde{\gamma} \le t\} - \Phi(t)\right| \\
\le& C \sigma_F \max\left\{ \max_{q\in{\mathcal{Q}}_{\textsc{l}}}\frac{ M_N(q)}{\sqrt{N_q S(q,q)}}, \frac{ \max_{q\in {\mathcal{Q}}_{\textsc{s}}} M_N(q)}{ \{ \min_{q\in {\mathcal{Q}}_{\textsc{s}}} S(q,q) \}^{1/2} }\cdot \sqrt{\frac{H}{N_{\textsc{s}}}}\right\}. \notag \end{align} \end{corollary}
There are two key steps in applying Theorem \ref{thm:be-proj-standard} to prove Corollary \ref{cor:non-uniform-design-be}. The first step is to partition $[Q]$ into ${\mathcal{Q}}_{\textsc{s}} \cup {\mathcal{Q}}_{\textsc{l}}$ based on the size of the arms. The second step is to simplify the denominator of term II in \eqref{eqn::terms1and2-be-proj-standard}. The obtained upper bound \eqref{eqn:non-uniform-design-be} is uniform over all $b$. Moreover, it depends on the sizes of the treatment arms in a subtle way. On the one hand, for $q\in{\mathcal{Q}}_{\textsc{l}}$, the $N_q$'s are large, so the first part of \eqref{eqn:non-uniform-design-be} converges to zero if the following ``local'' condition holds for all $q\in{\mathcal{Q}}_{\textsc{l}}$: $$ \frac{ M_N(q)^2 }{ S(q,q) } = o(N_q) . $$ On the other hand, for $q\in{\mathcal{Q}}_{\textsc{s}}$, the $N_q$'s are small, but the second part of \eqref{eqn:non-uniform-design-be} still converges to zero if the following ``global'' condition holds: $$ \frac{ \max_{q\in {\mathcal{Q}}_{\textsc{s}}} M_N(q)^2 }{ \min_{q\in {\mathcal{Q}}_{\textsc{s}}} S(q,q) } = o\left( \frac{ N_{\textsc{s}} }{H} \right). $$
We conclude this section with an example on the non-uniform factorial design.
\begin{example}[Non-uniform factorial design] Recall Example \ref{eg::factorial-design}. Assume the baseline arm $q=1$ contains a large number of units possibly due to lower cost while the other arms have $N_q \le \overline{n}$ for some fixed $\overline{n}$. This gives a non-uniform design by Definition \ref{def:non-uniform-design} with ${\mathcal{Q}}_{\textsc{l}} = \{1\}$ and ${\mathcal{Q}}_{\textsc{s}} = \{2, \ldots, Q\}$. Let $F\in{\mathbb{R}}^{Q\times H}$ with $H={K(K+1)}/{2}$ be the contrast matrix for all main effects as well as two-way interactions. Assume Condition \ref{cond:well-conditioned} and recall \eqref{eq::contrast-matrix}. Applying Corollary \ref{cor:non-uniform-design-be}, we have \begin{align}\label{eqn:non-uniform-factorial-be}
&\sup_{b\in{\mathbb{R}}^H,\|b\|_2 =1}\sup_{t\in{\mathbb{R}}}\left|{\mathbb{P}}\{b^\top \widetilde{\gamma} \le t\} - \Phi(t)\right| \\
\le& C \sigma_F \max\left\{\frac{ M_N(1)}{\sqrt{N_1 S(1,1)}},
\frac{ \max_{ q \geq 2} M_N(q)}{ \{ \min_{q\geq 2} S(q,q) \}^{1/2} } \sqrt{ \frac{K^2}{N_\textsc{s}} } \right\}. \notag \end{align} From \eqref{eqn:non-uniform-factorial-be}, if $K\to\infty, N_1\to\infty$, and $$ \frac{M_N(1)}{\sqrt{S(1,1)}} = o(N_1^{1/2}) ,\quad \frac{ \max_{q\geq 2} M_N(q)}{ \{ \min_{q\geq 2} S(q,q)\}^{1/2} } = o( { N_\textsc{s}^{1/2} } / K), $$ then the upper bound in \eqref{eqn:non-uniform-factorial-be} vanishes asymptotically. \end{example}
\section{Design-based causal inference}\label{sec:inference}
Now we turn to the central task of design-based causal inference under complete randomization. We focus on the large-sample Wald-type inference based on the quadratic form $$ \widehat{T} = ({\widehat{\gamma}} - \gamma)^\top {\widehat{V}}_{{\widehat{\gamma}}}^{-1} ({\widehat{\gamma}} - \gamma), $$ recalling the point estimator ${\widehat{\gamma}}$ and the variance estimator ${\widehat{V}}_{{\widehat{\gamma}}}$ in \eqref{eqn:estimates}. In \ref{regime:R1} with fixed $(Q, H)$ and large $N_q$'s, the standard asymptotic argument suggests that we can use $q_\alpha$, the upper $\alpha$-quantile of $\chi^2(H)$, as the critical value for the quadratic form. For simplicity, we say that the corresponding confidence set is asymptotically valid if $ \lim_{N\to\infty} \mathbb{P}\{ \widehat{T} \le q_\alpha \} \ge 1-\alpha$.
The rigorous theoretical justification for the above Wald-type inference procedure typically follows from two steps: \begin{enumerate} \item [(Step 1)]\label{step1} First, analyze the asymptotic distribution of the corresponding quadratic form with the true covariance matrix \begin{align}\label{eqn:quad-form}
T = ({\widehat{\gamma}} - \gamma)^\top V_{{\widehat{\gamma}}}^{-1} ({\widehat{\gamma}} - \gamma). \end{align}
\item [(Step 2)]\label{step2} Second, construct a consistent or conservative estimator ${\widehat{V}}_{{\widehat{\gamma}}}$ for the true covariance matrix $V_{{\widehat{\gamma}}}$. \end{enumerate}
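To illustrate Step 1, the sketch below simulates the quadratic form $T$ in \eqref{eqn:quad-form} under complete randomization, replacing the true covariance $V_{{\widehat{\gamma}}}$ by a Monte Carlo estimate (the population and contrasts are hypothetical assumptions for illustration); with large arms, $T$ is approximately $\chi^2(H)$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, Q, n = 240, 4, 60
Y = rng.normal(size=(N, Q)) + np.arange(Q)        # hypothetical potential outcomes
F = np.array([[1., -1., 0., 0.],
              [0., 0., 1., -1.]]).T / 2.          # H = 2 contrasts
gamma = F.T @ Y.mean(axis=0)

def gamma_hat():
    perm = rng.permutation(N)
    Yhat = np.array([Y[perm[n*q:n*(q+1)], q].mean() for q in range(Q)])
    return F.T @ Yhat

draws = np.array([gamma_hat() for _ in range(5000)])
V = np.cov(draws.T)                               # Monte Carlo stand-in for V_gammahat
Vinv = np.linalg.inv(V)
T = np.einsum('ij,jk,ik->i', draws - gamma, Vinv, draws - gamma)
# chi2(2) is Exp(mean 2), so its 0.95 quantile is -2*log(0.05)
crit = -2.0 * np.log(0.05)
coverage = np.mean(T <= crit)                     # close to 0.95 when T ~ chi2(2)
```

The empirical coverage is close to the nominal $0.95$, and the sample mean of $T$ is close to $H = 2$, consistent with the $\chi^2(H)$ reference distribution.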
Under regime \ref{regime:R1}, both Steps 1 and 2 have rigorous theoretical justification ensured by \eqref{eq::basic-asymptotic-inference}. Beyond \ref{regime:R1}, it is challenging to derive the asymptotic distribution of the quadratic form in Step 1 especially when $H$ and thus the degrees of freedom of $T$ diverge. To achieve the requirement in Step 1, we use results based on Stein's method to derive BEBs on quadratic forms of linear permutational statistics. To avoid excessive notation, we present the results that are most relevant to our inference problem in the main paper and relegate more general yet more complicated results to Appendices A and C. Moreover, the sample variances ${\widehat{S}}(q,q)$'s and thus the variance estimator ${\widehat{V}}_{{\widehat{\gamma}}}$ in \eqref{eqn:estimates} are not even well defined when some treatment arms do not have replications of the outcome. Without replications in all arms, we must find an alternative form of ${\widehat{V}}_{{\widehat{\gamma}}}$ to estimate $V_{{\widehat{\gamma}}}$. This is a salient problem for \ref{regime:R4} and \ref{regime:R5}. Finally, in all regimes \ref{regime:R1}--\ref{regime:R5}, we need to study the properties of ${\widehat{V}}_{{\widehat{\gamma}}}$ to achieve the requirement in Step 2.
Due to the different levels of technical complexities, we divide this section into three subsections. Section \ref{sec:var-uniform} discusses nearly uniform designs with replications in all arms. Section \ref{sec:var-unreplicate} discusses unreplicated designs. Section \ref{sec::non-uniform-design-inference} discusses the general non-uniform designs. In every subsection, we first present a BEB on the quadratic form in Step 1, then present the properties of the covariance estimator ${\widehat{V}}_{{\widehat{\gamma}}}$, and finally present the formal result to justify the Wald-type inference.
To facilitate the discussion, we introduce the following notation $$ T_0 = \xi_H^\top \xi_H \quad \text{ where } \xi_H\sim{\mathcal{N}}(0,I_H) $$ for a $\chi^2(H)$ random variable with possibly diverging degrees of freedom. It has mean $H$ and variance $2H$. Both $\widehat{T}$ and $T$ are related to $T_0$. Moreover, we introduce the following moment condition on the potential outcomes.
\begin{condition}[Bounded fourth moment of the potential outcomes]\label{cond:moments} There exists a $\Delta>0$ such that $
\max_{q\in[Q]} N^{-1} \sum_{i=1}^N \{Y_i(q) - \overline{Y}(q)\}^4 \le \Delta^4. $ \end{condition}
\subsection{Nearly uniform design with replications in all arms}\label{sec:var-uniform}
In this subsection, we study the Wald-type inference for nearly uniform designs given by Definition \ref{def:uniform-design}. First, we present a BEB for $T$ in \eqref{eqn:quad-form} in Theorem \ref{thm:quad-be-nearly-uniform} below.
\begin{theorem}[BEB for the quadratic form $T$ for nearly uniform designs with replications]\label{thm:quad-be-nearly-uniform} Assume complete randomization that satisfies Definition \ref{def:uniform-design}. Assume Conditions \ref{cond:well-conditioned} and \ref{condition::proper}. There exists a universal constant $C>0$, such that \begin{align}\label{eqn:quad-be-nearly-uniform}
\sup_{t\in{\mathbb{R}}} |{\mathbb{P}}(T\le t)-{\mathbb{P}}(T_0 \le t)| \le \frac{C\max_{q\in[Q]}M_N(q)^3 }{ \{ \min_{q\in[Q]}S(q,q) \}^{3/2}}\cdot \frac{H^{19/4}}{N^{1/2}}. \end{align} \end{theorem}
Theorem \ref{thm:quad-be-nearly-uniform} bounds the difference between $T$ and $T_0$ with possibly diverging $H$. Its upper bound is more useful when $H^{19/2} / N \rightarrow 0$, which restricts the number of contrasts of interest. The condition $H^{19/2} / N \rightarrow 0$ holds naturally in the factorial design in Example \ref{eg::factorial-design} under regime \ref{regime:R4} if only the main effects and two-way interactions are of interest with $H = O(K^2) = O( (\log N)^2 ) $.
Second, we discuss variance estimation. Recall ${\widehat{S}}(q,q)$ and ${\widehat{V}}_{\widehat{Y}}$ defined in \eqref{eqn:hS} and \eqref{eqn:hY-hV}. Consider the point estimator ${\widehat{\gamma}}$ and covariance estimator ${\widehat{V}}_{\widehat{\gamma}}$ in \eqref{eqn:estimates}. We have Theorem \ref{thm:uniform-var} below.
\begin{theorem}[Variance estimation in nearly uniform designs]\label{thm:uniform-var} Consider designs that satisfy Definition \ref{def:uniform-design} with $\min_{q\in[Q]} N_q \ge 2$. Assume Condition \ref{cond:moments}. \begin{enumerate}[label = (\roman*)]
\item\label{thm:uniform-var-1}
${\mathbb{E}}\{{\widehat{V}}_{{\widehat{\gamma}}}\} \succeq V_{\widehat{\gamma}}$.
\item\label{thm:uniform-var-2}
$ \|{\widehat{V}}_{{\widehat{\gamma}}}-{\mathbb{E}}\{{\widehat{V}}_{{\widehat{\gamma}}}\}\|^2_{\infty} = O_{{\mathbb{P}}}\left({\|F\|_\infty^4 Q^4 N^{-3}\Delta^4 H^2}\right).$
\item \label{thm:uniform-var-3}
$ \|{\widehat{V}}_{{\widehat{\gamma}}}-{\mathbb{E}}\{{\widehat{V}}_{{\widehat{\gamma}}}\}\|^2_{{\operatorname{op}}} = O_{{\mathbb{P}}}\left({ \|F\|_\infty^4 Q^4 N^{-3}\Delta^4 H^4}\right).$
\end{enumerate}
\end{theorem}
Theorem \ref{thm:uniform-var}\ref{thm:uniform-var-1} states that the covariance estimator ${\widehat{V}}_{{\widehat{\gamma}}}$ is conservative, which is well-known in design-based causal inference \citep{neyman1923application, imbens15, li2017general}. Theorem \ref{thm:uniform-var}\ref{thm:uniform-var-2} and \ref{thm:uniform-var-3} are novel results on the stochastic orders of the estimation error of the covariance estimator in the $L_{\infty}$ norm and the operator norm, respectively. In Example \ref{eg::factorial-design} of the factorial design with $\|F\|_\infty = O(Q^{-1})$, if $\Delta$ is constant, then Theorem \ref{thm:uniform-var} simplifies to $$
N\|{\widehat{V}}_{{\widehat{\gamma}}}-{\mathbb{E}}\{{\widehat{V}}_{{\widehat{\gamma}}}\}\|_{\infty} =O_{{\mathbb{P}}}\left( H/N^{1/2} \right),\quad
N\|{\widehat{V}}_{{\widehat{\gamma}}}-{\mathbb{E}}\{{\widehat{V}}_{{\widehat{\gamma}}}\}\|_{\operatorname{op}} = O_{{\mathbb{P}}}\left( H^2/N^{1/2} \right). $$ The estimation error shrinks to zero quickly if only the main effects and two-way interactions are of interest. The results in Theorem \ref{thm:uniform-var} suffice for inference, and we relegate the finer probability tail bound for ${\widehat{V}}_{{\widehat{\gamma}}}$ to the supplementary material.
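To illustrate the conservativeness in Theorem \ref{thm:uniform-var}\ref{thm:uniform-var-1}, the sketch below assumes the plug-in form ${\widehat{V}}_{{\widehat{\gamma}}} = F^\top \diag{N_q^{-1}{\widehat{S}}(q,q)} F$ with within-arm sample variances (our reading of \eqref{eqn:estimates}; the population and contrasts are hypothetical) and checks that ${\mathbb{E}}\{{\widehat{V}}_{{\widehat{\gamma}}}\} - V_{{\widehat{\gamma}}}$ is positive semi-definite up to Monte Carlo error:

```python
import numpy as np

rng = np.random.default_rng(2)
N, Q, n = 90, 3, 30
Y = rng.normal(size=(N, Q)) * np.array([1., 2., 3.])   # heteroskedastic hypothetical arms
F = np.array([[1., -1., 0.],
              [0., 1., -1.]]).T                         # H = 2 contrasts

def one_draw():
    perm = rng.permutation(N)
    Yhat, Vy = np.zeros(Q), np.zeros((Q, Q))
    for q in range(Q):
        obs = Y[perm[n*q:n*(q+1)], q]                   # observed outcomes in arm q
        Yhat[q] = obs.mean()
        Vy[q, q] = obs.var(ddof=1) / n                  # plug-in hat S(q,q) / N_q
    return F.T @ Yhat, F.T @ Vy @ F

pairs = [one_draw() for _ in range(4000)]
EVhat = np.mean([p[1] for p in pairs], axis=0)          # approximates E[Vhat_gamma]
Vtrue = np.cov(np.array([p[0] for p in pairs]).T)       # approximates V_gamma
gap_eigs = np.linalg.eigvalsh(EVhat - Vtrue)            # all >= 0 when conservative
```

The smallest eigenvalue of the gap stays positive, reflecting the conservativeness of the plug-in covariance estimator under complete randomization.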
Third, we present formal results on inference. To simplify the presentation, we impose Condition \ref{cond:easy-spec} below.
\begin{condition}\label{cond:easy-spec} (i) The absolute values of the centered potential outcomes are upper bounded by a constant $\nu>0$. (ii) The variances of the potential outcomes are lower bounded by a constant $\underline{S} > 0$.
\end{condition}
The upper bound on the potential outcomes in Condition \ref{cond:easy-spec}(i) is not necessary but is convenient for the theory. We can relax it to allow for light tails of the potential outcomes. Similarly, the lower bound in Condition \ref{cond:easy-spec}(ii) can be replaced by a more general requirement such as light-tailed potential outcomes. We omit these results due to the additional technicalities.
Theorem \ref{thm:wald-uniform} below justifies the Wald-type inference under the nearly uniform design with replications, where $H$ can be either fixed or diverging.
\begin{theorem}[Wald-type inference under replicated nearly uniform design]\label{thm:wald-uniform} Consider the nearly uniform design given by Definition \ref{def:uniform-design} with $\min_{q\in[Q]}N_q\ge 2$. Assume Conditions \ref{cond:well-conditioned}-\ref{cond:easy-spec}.
Define $
W_N = V_{\widehat{\gamma}}^{1/2}\E{{\widehat{V}}_{\widehat{\gamma}}}^{-1}V_{\widehat{\gamma}}^{1/2} \in {\mathbb{R}}^{H\times H}. $ Let $N\rightarrow \infty$. \begin{enumerate}[label = (\roman*)]
\item For a fixed $H$, assume there exists a {$W_\infty\in{\mathbb{R}}^{H\times H}$} such that
$ \lim_{N\to\infty}W_N = W_\infty $.
Use ${\mathcal{L}}$ to denote the distribution of
$
\xi_H^\top W_\infty \xi_H , \text{ where } \xi_H \sim {\mathcal{N}}(0,I_H).
$
We have
\begin{align*}
({\widehat{\gamma}} - \gamma )^\top {\widehat{V}}_{{\widehat{\gamma}}}^{-1} ({\widehat{\gamma}} - \gamma) \rightsquigarrow {\mathcal{L}}
\end{align*}
and ${\mathcal{L}} \lesssim \chi^2(H)$.
Moreover, the Wald-type confidence set is asymptotically valid.
\item For a diverging $H$ with $H\rightarrow \infty$ and $H^{19/4}N^{-1/2} \to 0$, we have
\begin{align*}
\frac{({\widehat{\gamma}} - \gamma)^\top {\widehat{V}}_{{\widehat{\gamma}}}^{-1} ({\widehat{\gamma}} - \gamma) - \trace{W_N}}{\sqrt{2\trace{W_N^2}}} \rightsquigarrow {\mathcal{N}}(0,1) .
\end{align*}
Moreover, the Wald-type confidence set is asymptotically valid. \end{enumerate} \end{theorem}
Theorem \ref{thm:wald-uniform}(i) recovers the known result for \ref{regime:R1} with a fixed $H$. Theorem \ref{thm:wald-uniform}(ii) is a novel result that allows for diverging $Q$ and $H$. It relies crucially on the BEB on the quadratic form $T$ in Theorem \ref{thm:quad-be-nearly-uniform} and the stochastic properties of ${\widehat{V}}_{\widehat{\gamma}}$ in Theorem \ref{thm:uniform-var}.
\subsection{Unreplicated design}\label{sec:var-unreplicate}
In this subsection, we study inference for unreplicated designs with $N_q = 1$ for $q=1,\ldots, Q$. On the one hand, the BEB on $T$ is identical to that in Theorem \ref{thm:quad-be-nearly-uniform}. We give the formal result in Theorem \ref{thm:quad-be-unreplicated} below for completeness.
\begin{theorem}[BEB for the quadratic form in unreplicated designs]\label{thm:quad-be-unreplicated} Assume complete randomization with $N_q = 1$ for $q=1,\ldots, Q$. Assume Conditions \ref{cond:well-conditioned} and \ref{condition::proper}. The BEB \eqref{eqn:quad-be-nearly-uniform} holds. \end{theorem}
On the other hand, covariance estimation without replications is a fundamentally challenging problem as reviewed in Section \ref{sec:motivation}. The commonly used covariance estimator ${\widehat{V}}_{\widehat{\gamma}}$ in \eqref{eqn:estimates} is not well defined. We must construct a new estimator. In unreplicated designs, the observed allocation $Z_i$ and the arm $q$ are in one-to-one correspondence. Hence we can denote the single observed outcome in arm $q$ by $Y_q$. The point estimator still has the form ${\widehat{\gamma}} = F^\top {\widehat{Y}}$ where ${\widehat{Y}} = (Y_1,\ldots, Y_Q)^\top$ is simply the observed outcome vector. Without replications, we cannot calculate ${\widehat{S}}(q,q)$ based on only the single observation within arm $q \in {\mathcal{Q}}_\textsc{u} = \{ 1,\ldots, Q\} $. With a slight abuse of notation, we still consider the covariance estimator of the form: \begin{align}\label{eqn:hV-hY-abuse}
{\widehat{V}}_{{\widehat{\gamma}}} = F^\top {\widehat{V}}_{\widehat{Y}} F, \end{align} where ${\widehat{V}}_{\widehat{Y}}$ is a $Q\times Q$ diagonal matrix. The key is to construct its diagonal elements ${\widehat{V}}_{\widehat{Y}}(q,q)$ for all $q$'s.
To obtain substitutes for ${\widehat{S}}(q,q)$, we must borrow information across treatment arms. This motivates us to consider the following grouping strategy.
\begin{definition}[Grouping] \label{def::Grouping}
Partition ${\mathcal{Q}}_\textsc{u}$ as $ {\mathcal{Q}}_{\textsc{u}} = \cup_{g=1}^G {\mathcal{Q}}_{\textsc{u},g}$ where ${\mathcal{Q}}_{\textsc{u},g} \cap {\mathcal{Q}}_{\textsc{u},g'} = \varnothing$ for all $g\neq g'$ and $ | {\mathcal{Q}}_{\textsc{u},g}| \ge 2$ for all $g \in [G]$. The partition does not depend on the observed data. \end{definition}
Definition \ref{def::Grouping} does not allow for data-dependent grouping, which can cause theoretical complications. Examples \ref{exp:pairing} and \ref{exp:regression} below are special cases of Definition \ref{def::Grouping}. By the construction in Definition \ref{def::Grouping}, the ${\mathcal{Q}}_{\textsc{u},g}$'s have no overlap, so we can also use $\langle g \rangle$ to denote ${\mathcal{Q}}_{\textsc{u},g}$ and ${\mathcal{G}} = \{ \langle g \rangle\}_{g=1}^G$ to denote the grouping strategy without causing confusion. Moreover, $|\langle g \rangle|$ must be at least two so that there are at least two treatment levels in each $\langle g \rangle$. In general, we use $ \langle g \rangle_q$ to indicate the group $\langle g \rangle$ that contains arm $q$, but when no confusion would arise, we simplify the notation to $ \langle g \rangle$ if the corresponding $q$ is clear from the context.
Define \begin{align*}
{\widehat{Y}}_{\langle g \rangle} = \frac{1}{|\langle g \rangle|} \sum_{q\in\langle g \rangle} Y_q, \end{align*}
as the group-specific average, and construct
\begin{align}\label{eqn:hV-QU} {\widehat{V}}_{\widehat{Y}}(q,q) = \mu_{\langle g \rangle}(Y_q - {\widehat{Y}}_{{\langle g \rangle}})^2 ,\quad \text{ if } q\in{\langle g \rangle} \end{align} as the $q$th diagonal element of ${\widehat{V}}_{\widehat{Y}}$, where \begin{align}\label{eqn:mu-bg}
\mu_{\langle g \rangle} = (1-2N^{-1})^{-1}(1-|{\langle g \rangle}|^{-1})^{-2} \end{align} is a correction factor that is motivated by the theory below. Although the mean of $ {\widehat{Y}}_{\langle g \rangle} $ has a simple formula, the mean of ${\widehat{V}}_{\widehat{Y}}(q,q) $ has a cumbersome form. We present a lower bound on $ {\mathbb{E}}\{{\widehat{V}}_{\widehat{Y}}(q,q)\} $ below and relegate the complete formula to the supplementary material. The results require Condition \ref{cond:cond-N} below on the largest eigenvalue of the population correlation of the potential outcomes in group $\langle g \rangle$, defined as \begin{align}\label{eqn:rho-max-S}
\varrho_{\langle g \rangle} = \varrho_{\max}\{(S^\star(q,q'))_{q,q'\in\langle g\rangle}\} . \end{align}
\begin{condition}[Bound on $ \varrho_{\langle g \rangle} $]\label{cond:cond-N} $
N- \varrho_{\langle g \rangle} - (|{\langle g \rangle}|-1) \ge 0 $ for all $g\in{\mathcal{G}}$. \end{condition}
Condition \ref{cond:cond-N} reflects a trade-off between $N$, $|\langle g\rangle|$, and $\varrho_{\langle g \rangle}$. It is more likely to hold with smaller correlations between arms within the same group and smaller subgroup sizes. By the natural bound $\varrho_{\langle g \rangle} \le |{\langle g \rangle}|$, Condition \ref{cond:cond-N} holds if $|{\langle g \rangle}| \le {(N + 1)}/{2}$ for all $g\in[G]$. Examples \ref{exp:pairing} and \ref{exp:regression} below satisfy Condition \ref{cond:cond-N} automatically. With Condition \ref{cond:cond-N}, we can present Lemma \ref{lemma::mean-variance-grouping} below.
\begin{lemma} [Sample mean and variance under grouping] \label{lemma::mean-variance-grouping} Assume grouping ${\mathcal{G}}$. We have \begin{align*}
\E{{\widehat{Y}}_{{\langle g \rangle}}} = \overline{Y}_{\langle g \rangle} ,\quad \text{ where } \overline{Y}_{\langle g \rangle} = \frac{1}{|{\langle g \rangle}|N} \sum_{q\in{\langle g \rangle}} \sum_{i=1}^NY_i(q) = \frac{1}{|{\langle g \rangle}|} \sum_{q\in{\langle g \rangle}} \overline{Y}(q). \end{align*} Further assume Condition \ref{cond:cond-N}. We have \begin{align*}
{\mathbb{E}}\{{\widehat{V}}_{\widehat{Y}}(q,q)\} \ge S(q,q) + \underbrace{{\Omega}(q,q)}_{\text{\em term III}} + \underbrace{\mu_{\langle g \rangle} (\overline{Y}(q) - \overline{Y}_{\langle g \rangle})^2}_{\text{\em term IV}}, \end{align*} where \begin{align} \label{eq::Omega-qq}
{\Omega}(q,q) &= \mu_{\langle g \rangle}|\langle g \rangle|^{-2}\left(1-\frac{\varrho_{\langle g \rangle}}{N} - \frac{|\langle g \rangle|-1}{N}\right)\sum_{q'\in{\langle g \rangle},q'\neq q}S(q',q') \geq 0. \end{align} \end{lemma}
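The grouped variance estimator \eqref{eqn:hV-QU} with the correction factor \eqref{eqn:mu-bg} is straightforward to implement. A minimal sketch follows, in which the observed outcomes and the pairing are hypothetical:

```python
import numpy as np

def grouped_var_estimates(Y_obs, groups, N):
    """Diagonal entries of Vhat_Y via grouping, following eqns (hV-QU) and (mu-bg).

    Y_obs : dict mapping arm q -> its single observed outcome Y_q
    groups: partition of the arms, each block of size >= 2
    N     : total number of units
    """
    V = {}
    for g in groups:
        Ybar_g = np.mean([Y_obs[q] for q in g])            # group-specific average
        mu_g = 1.0 / ((1 - 2/N) * (1 - 1/len(g))**2)       # correction factor mu_<g>
        for q in g:
            V[q] = mu_g * (Y_obs[q] - Ybar_g)**2
    return V

# toy unreplicated design with Q = N = 6 arms, grouped in pairs
Y_obs = {q: y for q, y in enumerate([1.0, 1.4, 2.0, 2.6, 3.0, 3.1], start=1)}
Vhat = grouped_var_estimates(Y_obs, groups=[[1, 2], [3, 4], [5, 6]], N=6)
```

For the pair $\{1,2\}$ with $N=6$, the correction factor is $\{(1-2/6)(1-1/2)^2\}^{-1} = 6$, so ${\widehat{V}}_{\widehat{Y}}(1,1) = 6\times 0.2^2 = 0.24$.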
By Lemma \ref{lemma::mean-variance-grouping}, ${\widehat{V}}_{\widehat{Y}}(q,q)$, as an estimator for $S(q,q) $, is conservative, and the conservativeness depends on the variation of other arms $q'$ that belong to ${\langle g \rangle}_q$ (term III) and the between-arm heterogeneity in means within ${\langle g \rangle}_q$ (term IV). We comment on some special cases below. \begin{itemize}
\item If we assume homogeneity in means within subgroups, i.e., \begin{align}\label{eqn:weak-means}
\overline{Y}(q) = \overline{Y}_{\langle g \rangle},~ \text{ for all } q\in\langle g \rangle, \end{align} then term IV vanishes.
\item If we assume homoskedasticity across treatment arms within the same subgroup, i.e., \begin{align}\label{eqn:weak-vars}
S(q,q) = S(q',q'),~ \text{ for all } q,q'\in \langle g \rangle, \end{align} then term III becomes \begin{align}\label{eqn:strong-po}
{\Omega}(q,q) = \mu_{\langle g \rangle}(|{\langle g \rangle}|-1)|{\langle g \rangle}|^{-2}\left(1-\frac{\varrho_{\langle g \rangle}}{N} - \frac{|{\langle g \rangle}|-1}{N}\right)S(q,q) . \end{align} Then we can combine \eqref{eqn:strong-po} with $S(q,q)$ and use a smaller correction factor \begin{align*}
\mu'_{\langle g \rangle} = (1-|{\langle g \rangle}|^{-1})^{-1}\{(1-|{\langle g \rangle}|^{-1})(1-2N^{-1}) + |{\langle g \rangle}|^{-1}(1 - (2|{\langle g \rangle}| - 1)/N) \}^{-1} \le \mu_{\langle g \rangle} \end{align*} to reduce the conservativeness of variance estimation.
\item If we assume the strong null hypothesis within subgroups, i.e.,
\begin{align*}
Y_i(q) = Y_i(q'), \text{ for all }i\in[N] \text{ and } q,q'\in {\langle g \rangle},
\end{align*}
then both \eqref{eqn:weak-means} and \eqref{eqn:weak-vars} hold. Applying the correction factor $\mu'_{\langle g \rangle}$, we can show $
{\mathbb{E}}\{{\widehat{V}}_{\widehat{Y}}(q,q)\} = S(q,q). $ \end{itemize}
Lemma \ref{lemma::mean-variance-grouping} suggests that, ideally, we should group treatment arms based on prior knowledge of the means and variances of the potential outcomes. While more general grouping strategies are possible, we give two examples chosen for their simplicity of implementation. Both target the factorial design in Example \ref{eg::factorial-design}.
\begin{example}[Pairing by the lexicographic order]\label{exp:pairing} Recall Example \ref{eg::factorial-design}. We order the treatment levels lexicographically, then group the $(2k-1)$-th level with the $(2k)$-th level $(1\le k\le 2^{K-1})$. When $K=3$, the grouping reduces to
\begin{align*}
\langle 1 \rangle= \{ (000), (001) \}, \quad
\langle 2 \rangle = \{ (010), (011) \}, \quad
\langle 3 \rangle = \{ (100), (101) \} , \quad
\langle 4 \rangle = \{ (110), (111)\} .
\end{align*}
If the last factor has a small effect on the outcome, then we expect small differences in the mean potential outcomes within each group.
As a sanity check, Condition \ref{cond:cond-N} holds under this grouping strategy. \end{example}
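The pairing above can be generated programmatically; the following sketch (our own helper, assuming binary factor levels encoded as strings) reproduces the $K=3$ grouping:

```python
from itertools import product

def lexicographic_pairing(K):
    """Pair the 2^K factor levels in lexicographic order: the (2k-1)-th with the (2k)-th."""
    levels = [''.join(bits) for bits in product('01', repeat=K)]  # already sorted
    return [levels[2 * k: 2 * k + 2] for k in range(2 ** (K - 1))]

groups = lexicographic_pairing(3)
# groups: [['000', '001'], ['010', '011'], ['100', '101'], ['110', '111']]
```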
\begin{example}[Grouping based on a subset of the factors]\label{exp:regression} Recall Example \ref{eg::factorial-design} again. If we have prior knowledge that $K_0 < K$ factors are the most important ones, we can group the treatment levels based on these factors. Without loss of generality, assume that the first $K_0$ factors are the important ones. In particular, we can create $ G = 2^{K_0} < Q$ groups, with each group $ {\langle g \rangle}$ corresponding to treatment levels with the same important factors. Example \ref{exp:pairing} above is a special case with the first $K-1$ factors as the important ones. Also, Condition \ref{cond:cond-N} holds under this grouping strategy. \end{example}
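A minimal sketch of this factor-based grouping (our own helper; levels are encoded as binary strings, and the first $K_0$ factors are assumed to be the important ones):

```python
from itertools import product
from collections import defaultdict

def group_by_factors(K, K0):
    """Group the 2^K treatment levels by the first K0 (assumed important) factors."""
    groups = defaultdict(list)
    for bits in product('01', repeat=K):
        level = ''.join(bits)
        groups[level[:K0]].append(level)   # same important-factor prefix -> same group
    return dict(groups)

g = group_by_factors(K=3, K0=2)            # G = 2^{K0} = 4 groups of size 2^{K-K0} = 2
```

Setting $K_0 = K - 1$ recovers the pairing of Example \ref{exp:pairing}.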
Now we turn to the theoretical analysis of \eqref{eqn:hV-QU}. Its properties depend on how successful the grouping ${\mathcal{G}}$ is, as quantified by Condition \ref{cond:bounded-bgv} below.
\begin{condition}[Bound on the within-group variation in potential outcome means]\label{cond:bounded-bgv} There exists a $\zeta>0$, such that $
\max_{g\in [G]} \max_{q\in{\langle g \rangle}} |\overline{Y}(q) - \overline{Y}_{\langle g \rangle}| \le \zeta. $ \end{condition}
The $\zeta$ in Condition \ref{cond:bounded-bgv} bounds the between-arm distance of the mean potential outcomes under grouping ${\mathcal{G}}$. It plays a key role in Theorem \ref{thm:unreplicated-var} below.
\begin{theorem}[Variance estimation for unreplicated designs]\label{thm:unreplicated-var} Consider designs that satisfy Definition \ref{def:non-uniform-design} and the covariance estimator in \eqref{eqn:hV-QU}.
\begin{enumerate}[label = (\roman*)]
\item\label{thm:unreplicated-var-1} Assume Condition \ref{cond:cond-N}. We have
\begin{align*}
{\mathbb{E}}\{{\widehat{V}}_{\widehat{Y}}\}
= V_{\widehat{Y}} + \Omega + \diag{\mu_{\langle g \rangle}(\overline{Y}(q) - \overline{Y}_{\langle g \rangle})^2}_{q\in{\mathcal{Q}}_{\textsc{u}}} + N^{-1} (\Theta + S)
\end{align*}
with $\Omega = \diag{ \Omega(q,q) }_{q\in{\mathcal{Q}}_{\textsc{u}}}$ and $ \Theta = \diag{\Theta(q,q)}_{q\in{\mathcal{Q}}_\textsc{U}}$, where the $\Omega(q,q) $'s are defined in \eqref{eq::Omega-qq} and the $\Theta(q,q)$'s are bounded by $0\le \Theta(q,q) \le 5\mu_{\langle g \rangle}\max_{q'\in \langle g \rangle} S(q',q').$
Therefore, ${\mathbb{E}}\{F^\top{\widehat{V}}_{\widehat{Y}} F\} \succeq V_{\widehat{\gamma}} $.
\item \label{thm:var-unreplicated-2} Assume Conditions \ref{cond:moments} and \ref{cond:bounded-bgv}. We have
\begin{align*}
\|{\widehat{V}}_{{\widehat{\gamma}}}-{\mathbb{E}}\{{\widehat{V}}_{{\widehat{\gamma}}}\}\|^2_{\infty} & = O_{\mathbb{P}}\left\{(\max_{g\in[G]}\mu_{\langle g \rangle})^2 \|F\|_\infty^4\Delta^2
(\Delta^2+\zeta^2)NH^2\right\}.
\end{align*}
\item \label{thm:var-unreplicated-3}
Assume Conditions \ref{cond:moments} and \ref{cond:bounded-bgv}. We have
\begin{align*}
\|{\widehat{V}}_{{\widehat{\gamma}}}-{\mathbb{E}}\{{\widehat{V}}_{{\widehat{\gamma}}}\}\|^2_{\operatorname{op}} & = O_{\mathbb{P}}\left\{(\max_{g\in[G]}\mu_{\langle g \rangle})^2 \|F\|_\infty^4\Delta^2
(\Delta^2+\zeta^2)NH^4\right\}.
\end{align*} \end{enumerate}
\end{theorem}
Theorem \ref{thm:unreplicated-var}\ref{thm:unreplicated-var-1} demonstrates that, based on \eqref{eqn:hV-QU}, the covariance estimator ${\widehat{V}}_{\widehat{Y}}$ is conservative for $V_{\widehat{Y}}$, which implies that $F^\top{\widehat{V}}_{\widehat{Y}} F$ is conservative for the true covariance matrix of ${\widehat{\gamma}}$. The conservativeness, however, has a more complex pattern compared to the setting with replications within all arms \citep{neyman1923application, imbens15, li2017general}. Theorem \ref{thm:unreplicated-var}\ref{thm:unreplicated-var-1} shows three sources of conservativeness. The first part, captured by $\Omega $, is due to the between-arm heteroskedasticity within each subgroup; it is hard to remove because it is fundamentally difficult to estimate each $S(q,q)$ without replications. The second part, captured by $\diag{\mu_{\langle g \rangle}(\overline{Y}(q) - \overline{Y}_{\langle g \rangle})^2}_{q\in{\mathcal{Q}}_{\textsc{u}}}$, is due to the between-arm heterogeneity in means within each subgroup. This part will be small if the grouping strategy ensures that the grouped arms have similar population averages of potential outcomes.
The third part, captured by $N^{-1} (\Theta + S)$, is due to the difficulty of estimating $S$, in particular the off-diagonal terms of $S$. The difficulty of estimating $S$ has been well documented ever since \citet{neyman1923application}, even in experiments with replications in each arm. It is possible to reduce this part, but doing so requires additional assumptions, for example, that the individual causal effects are constant.
Theorem \ref{thm:unreplicated-var}\ref{thm:var-unreplicated-2} and \ref{thm:var-unreplicated-3} give the stochastic order of the estimation error of the covariance estimator ${\widehat{V}}_{{\widehat{\gamma}}}$ under the $L_\infty$ norm and operator norm, respectively. If $\max_{g\in[G]}\mu_{\langle g \rangle}$, $\Delta$ and $\zeta$ are all constants, then $$
N\|{\widehat{V}}_{{\widehat{\gamma}}}-{\mathbb{E}}\{{\widehat{V}}_{{\widehat{\gamma}}}\}\|_{\infty} = O_{\mathbb{P}} (H / {N}^{1/2} ) , \quad
N \|{\widehat{V}}_{{\widehat{\gamma}}}-{\mathbb{E}}\{{\widehat{V}}_{{\widehat{\gamma}}}\}\|_{\operatorname{op}} = O_{\mathbb{P}} (H^2 / {N}^{1/2} ), $$ which gives sufficient conditions on $H$ to ensure the convergence of ${\widehat{V}}_{{\widehat{\gamma}}}$ in $L_\infty$ norm and operator norm, respectively.
Finally, equipped with the BEB on the quadratic form $T$ in \eqref{eqn:quad-form} and the conservative variance estimator studied in Theorem \ref{thm:unreplicated-var}, it is immediate to establish Theorem \ref{thm:wald-unreplicated} below for inference, which parallels Theorem \ref{thm:wald-uniform}.
\begin{theorem}[Wald-type inference under unreplicated design]\label{thm:wald-unreplicated} Consider the unreplicated design with $N_q =1$ for all $q = 1,\ldots, Q$. Assume Conditions \ref{cond:well-conditioned}--\ref{cond:bounded-bgv}. Then the conclusion of Theorem \ref{thm:wald-uniform} holds if we use the covariance estimator in \eqref{eqn:hV-QU}. \end{theorem}
\subsection{Non-uniform design}\label{sec::non-uniform-design-inference}
In this section, we consider non-uniform designs in Definition \ref{def:non-uniform-design}. First, we show a BEB on $T$ in \eqref{eqn:quad-form} in Theorem \ref{thm:quad-be-non-uniform} below.
\begin{theorem}[Quadratic form BEB for non-uniform designs]\label{thm:quad-be-non-uniform} Assume complete randomization that satisfies Definition \ref{def:non-uniform-design}. Assume Conditions \ref{cond:well-conditioned}, \ref{condition::proper}, \eqref{eqn:small-H} and $N = O(Q)$. There exists a universal constant $C>0$, such that \begin{align}\label{eqn:quad-be-non-uniform}
\sup_{t\in{\mathbb{R}}} |{\mathbb{P}}(T\le t)-{\mathbb{P}}(T_0\le t)| \le C \frac{\max_{q\in[Q]}M_N(q)^3 }{\{ \min_{q\in{\mathcal{Q}}_\textsc{S}}S(q,q) \}^{3/2}}\cdot \frac{H^{19/4}}{N^{1/2}}. \end{align} \end{theorem}
Theorem \ref{thm:quad-be-non-uniform} assumes that $N$ has the same order as $Q$, which is helpful for establishing the root-$N$ convergence in the BEB. We can relax this assumption if we only need the CLT rather than the BEB with an explicit rate. For ease of presentation, we omit the general results. A subtle feature of the upper bound in \eqref{eqn:quad-be-non-uniform} is that $\max_{q\in[Q]}M_N(q)$ is the maximum of the $M_N(q)$'s over all treatment arms, whereas $\min_{q\in{\mathcal{Q}}_\textsc{S}}S(q,q)$ is the minimum of the $S(q,q)$'s over treatment arms in ${\mathcal{Q}}_\textsc{S}$ only.
Second, we construct a covariance estimator. It is a combination of the covariance estimators discussed in Sections \ref{sec:var-uniform} and \ref{sec:var-unreplicate}. For the treatment arms with replications, we can calculate sample variances of the potential outcomes based on the observed data. For the treatment arms without replications ${\mathcal{Q}}_\textsc{u}$, we need the grouping strategy in Definition \ref{def::Grouping}. Therefore, we construct a diagonal covariance estimator ${\widehat{V}}_{\widehat{Y}}$ with the $q$-th diagonal term \begin{align*}
{\widehat{V}}_{\widehat{Y}}(q,q) = \left\{
\begin{array}{cc}
\mu_{\langle g \rangle} (Y_q - {\widehat{Y}}_{\langle g \rangle})^2, &\quad q\in{\mathcal{Q}}_{\textsc{u}} \\
\widehat{S}(q,q), & \quad q\in{\mathcal{Q}}_{\textsc{r}}\cup{\mathcal{Q}}_{\textsc{l}}.
\end{array}
\right. \end{align*}
In a matrix form, it is equivalent to \begin{align}\label{eqn:composite-var}
{\widehat{V}}_{\widehat{Y}} = \begin{pmatrix}
{\widehat{V}}_{{\widehat{Y}},\textsc{u}}& 0& 0\\
0& {\widehat{V}}_{{\widehat{Y}},\textsc{r}}& 0\\
0& 0& {\widehat{V}}_{{\widehat{Y}},\textsc{l}}
\end{pmatrix}, \end{align} where ${\widehat{V}}_{{\widehat{Y}},\textsc{u}}, {\widehat{V}}_{{\widehat{Y}},\textsc{r}}, {\widehat{V}}_{{\widehat{Y}},\textsc{l}}$ correspond to the diagonal covariance estimators for treatment arms ${\mathcal{Q}}_{\textsc{u}}, {\mathcal{Q}}_{\textsc{r}}, {\mathcal{Q}}_{\textsc{l}}$, respectively. Partition the contrast matrix $F$ accordingly as $$
F =
\begin{pmatrix}
F_{\textsc{s}}\\
F_{\textsc{l}}
\end{pmatrix}
\quad \text{ where }
F_{\textsc{s}} =
\begin{pmatrix}
F_{\textsc{u}}\\
F_{\textsc{r}}
\end{pmatrix}. $$ Construct the final covariance estimator below: \begin{align}\label{eqn:composite-var-2}
{\widehat{V}}_{\widehat{\gamma}} = F^\top {\widehat{V}}_{{\widehat{Y}}} F = F_{\textsc{u}}^\top {\widehat{V}}_{{\widehat{Y}},\textsc{u}} F_{\textsc{u}} + F_{\textsc{r}}^\top {\widehat{V}}_{{\widehat{Y}},\textsc{r}} F_{\textsc{r}} + F_{\textsc{l}}^\top {\widehat{V}}_{{\widehat{Y}},\textsc{l}} F_{\textsc{l}}. \end{align} The decomposition in \eqref{eqn:composite-var-2} allows us to characterize the statistical properties of $ {\widehat{V}}_{\widehat{Y}} $ by combining the results from Sections \ref{sec:var-uniform} and \ref{sec:var-unreplicate}.
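The decomposition in \eqref{eqn:composite-var-2} can be checked numerically. The sketch below (with hypothetical diagonal estimates and a random contrast matrix, purely for illustration) verifies that summing the three block contributions matches the direct computation with the full block-diagonal matrix:

```python
import numpy as np

def composite_covariance(V_u, V_r, V_l, F_u, F_r, F_l):
    """F^T diag(V_u, V_r, V_l) F, computed as the sum of three block contributions."""
    return (F_u.T @ np.diag(V_u) @ F_u
            + F_r.T @ np.diag(V_r) @ F_r
            + F_l.T @ np.diag(V_l) @ F_l)

rng = np.random.default_rng(0)
V_u, V_r, V_l = rng.random(4), rng.random(3), rng.random(2)   # per-arm diagonal estimates
F = rng.standard_normal((9, 2))                               # contrasts, rows ordered (U, R, L)
F_u, F_r, F_l = F[:4], F[4:7], F[7:]
direct = F.T @ np.diag(np.concatenate([V_u, V_r, V_l])) @ F   # full block-diagonal route
```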
\begin{theorem}[Covariance estimation for non-uniform designs]\label{thm:non-uniform-var} Consider designs in Definition \ref{def:non-uniform-design} and the covariance estimator in \eqref{eqn:composite-var-2}. Assume Conditions \ref{cond:moments}, \ref{cond:cond-N}, and \ref{cond:bounded-bgv}.
Assume $\max_{g\in[G]}\mu_{\langle g \rangle}, \Delta$ and $\zeta$ are constants. \begin{enumerate}[label = (\roman*)]
\item\label{thm:non-uniform-var-1}
${\mathbb{E}}\{{\widehat{V}}_{\widehat{\gamma}}\} \succeq V_{\widehat{\gamma}}$.
\item\label{thm:non-uniform-var-2}
We have
$$
\|{\widehat{V}}_{{\widehat{\gamma}}}-{\mathbb{E}}\{{\widehat{V}}_{{\widehat{\gamma}}}\}\|^2_{\infty} =
O_{\mathbb{P}} ( \|F_{\textsc{u}}\|_\infty^4
|{\mathcal{Q}}_{\textsc{u}}|H^2 +
\|F_{\textsc{r}}\|_\infty^4 |{\mathcal{Q}}_{\textsc{r}}| H^2 + { \|F_{\textsc{l}}\|_\infty^4 |{\mathcal{Q}}_{\textsc{l}}|^4 N_\textsc{l}^{-3} H^2} ).
$$
\item\label{thm:non-uniform-var-3}
We have
$$
\|{\widehat{V}}_{{\widehat{\gamma}}}-{\mathbb{E}}\{{\widehat{V}}_{{\widehat{\gamma}}}\}\|^2_{\operatorname{op}} =
O_{\mathbb{P}} ( \|F_{\textsc{u}}\|_\infty^4
|{\mathcal{Q}}_{\textsc{u}}|H^4 +
\|F_{\textsc{r}}\|_\infty^4 |{\mathcal{Q}}_{\textsc{r}}| H^4 + { \|F_{\textsc{l}}\|_\infty^4 |{\mathcal{Q}}_{\textsc{l}}|^4 N_\textsc{l}^{-3} H^4} ) .
$$ \end{enumerate} \end{theorem}
In Theorem \ref{thm:non-uniform-var}, we assume $\max_{g\in[G]}\mu_{\langle g \rangle}, \Delta$ and $\zeta$ to be constants to simplify the presentation. Without this assumption, we can derive results similar to those in Theorem \ref{thm:unreplicated-var} but relegate finer results to the supplementary material. Theorem \ref{thm:non-uniform-var}\ref{thm:non-uniform-var-1} shows the conservativeness of ${\widehat{V}}_{\widehat{\gamma}}$ as a direct consequence of Theorems \ref{thm:uniform-var}\ref{thm:uniform-var-1} and \ref{thm:unreplicated-var}\ref{thm:unreplicated-var-1}. Theorem \ref{thm:non-uniform-var}\ref{thm:non-uniform-var-2} and \ref{thm:non-uniform-var-3} show the stochastic order of the estimation error of ${\widehat{V}}_{\widehat{\gamma}}$ in $L_\infty$ norm and operator norm, respectively. We only discuss Theorem \ref{thm:non-uniform-var}\ref{thm:non-uniform-var-2} below.
If $\| F \|_{\infty} = O(Q^{-1})$, as in the factorial design in Example \ref{eg::factorial-design}, the bound reduces to \begin{align}\label{eqn:V-infty}
\left\|{\widehat{V}}_{{\widehat{\gamma}}} - {\mathbb{E}}\{{\widehat{V}}_{\widehat{\gamma}}\}\right\|_\infty = O_{\mathbb{P}}\left\{ Q^{-2}H(|{\mathcal{Q}}_{\textsc{u}}|^{1/2} + |{\mathcal{Q}}_{\textsc{r}}|^{1/2} + |{\mathcal{Q}}_{\textsc{l}}|^{2}N_{\textsc{l}}^{-3/2})\right\}. \end{align}
Therefore, if $N$ and $Q$ are of the same order, then $N\|{\widehat{V}}_{{\widehat{\gamma}}}-{\mathbb{E}}\{{\widehat{V}}_{{\widehat{\gamma}}}\}\|_{\infty} = O_{\mathbb{P}} ( HN^{-1/2} ) $. Besides, when one or two of ${\mathcal{Q}}_{\textsc{u}},{\mathcal{Q}}_{\textsc{r}},{\mathcal{Q}}_{\textsc{l}}$ are small or absent, the stochastic orders in Theorem \ref{thm:non-uniform-var} still hold because the large terms in \eqref{eqn:V-infty} will dominate the rest. In particular, if $|{\mathcal{Q}}_\textsc{u}| = |{\mathcal{Q}}_\textsc{r}| = 0$, then $
\|{\widehat{V}}_{{\widehat{\gamma}}} - {\mathbb{E}}\{{\widehat{V}}_{\widehat{\gamma}}\}\|_\infty = O_{\mathbb{P}} ( HN_{\textsc{l}}^{-3/2}) , $
which gives the same rate as Theorem \ref{thm:uniform-var}. If $|{\mathcal{Q}}_\textsc{l}| = 0$, then we should interpret $0\cdot \infty = 0$ in Theorem \ref{thm:non-uniform-var}\ref{thm:non-uniform-var-2} to obtain $
\|{\widehat{V}}_{{\widehat{\gamma}}} - {\mathbb{E}}\{{\widehat{V}}_{\widehat{\gamma}}\} \|_\infty = O_{\mathbb{P}} ( HQ^{-3/2} ) = O_{\mathbb{P}} ( HN_{\textsc{s}}^{-3/2} ) , $ which also agrees with Theorem \ref{thm:uniform-var}.
Finally, the BEB on the quadratic form $T$ and the conservativeness of the covariance estimator ensure Theorem \ref{thm:wald-non-uniform} below for inference.
\begin{theorem}[Wald-type inference under non-uniform designs]\label{thm:wald-non-uniform} Consider the non-uniform design in Definition \ref{def:non-uniform-design} with \eqref{eqn:small-H} and $N=O(Q)$. Assume Conditions \ref{cond:well-conditioned}--\ref{cond:bounded-bgv}. Then the conclusion of Theorem \ref{thm:wald-uniform} holds if we use the covariance estimator in \eqref{eqn:composite-var-2}. \end{theorem}
This concludes our discussion of design-based causal inference with possibly diverging number of treatment levels and varying group sizes across treatment levels.
\section{Simulation}\label{sec:simulation}
In this section, we evaluate the finite-sample properties of the point estimates and the proposed variance estimator in factorial experiments. We mainly consider non-uniform designs because nearly uniform designs have already been studied extensively in previous numerical work.
\subsection{Practical implementation}\label{section::simulation-implementation}
For illustration purposes, we focus on conducting inference for the main effects in non-uniform factorial designs. To do this, we need grouping strategies to implement the proposed variance estimator \eqref{eqn:composite-var}. As we discussed in Section \ref{sec:var-unreplicate}, the structure of factorial designs can provide some practical guidance on the choice of grouping strategy. Besides, our theoretical results in Theorem \ref{thm:unreplicated-var} also give insights into reducing the conservativeness of the variance estimator. In our simulation, we compare three variance estimation strategies: \begin{enumerate}[label = (\roman*)]
\item \textit{Pairing according to the lexicographical order.} This corresponds to our discussion in Example \ref{exp:pairing}. If arms with similar factor combinations have close means, pairing based on the lexicographical order can guarantee small between-arm discrepancy in means and reduce the conservativeness.
Moreover, pairing strategies have another benefit in factorial experiments. We can use a smaller correction factor $\widetilde{\mu}_{\langle g\rangle}$ for variance estimation if our goal is to conduct inference marginally (i.e., construct confidence intervals for each $\gamma_h$ separately). The reason is that, while it is hard to control the $\Omega$ matrix in Theorem \ref{thm:unreplicated-var}\ref{thm:unreplicated-var-1} in general, we can control the diagonals of $F_{\textsc{u}}^\top \Omega F_{\textsc{u}}$ because $F_\textsc{u}$ has elements $\pm Q^{-1}$. We can get more intuition by noticing that
$
\sum_{q'\in{\langle g \rangle_q}, q'\neq q} S(q',q')
$
is the core of $\Omega(q,q)$ and that the following algebraic fact holds under pairing:
\begin{align}\label{eqn:fact-on-Omega}
\sum_{q\in{\mathcal{Q}}_{\textsc{u}}}\sum_{q'\in{\langle g \rangle_q}, q'\neq q} S(q',q')
&= \sum_{q\in{\mathcal{Q}}_{\textsc{u}}} S(q,q).
\end{align} The identity \eqref{eqn:fact-on-Omega} enables us to transform the diagonals of $F_{\textsc{u}}^\top \Omega F_{\textsc{u}}$ from a source of conservativeness into part of the true variance. Hence it allows us to choose a smaller correction factor: \begin{align*}
\widetilde{\mu}_{\langle g \rangle} = (1-|{\langle g \rangle}|^{-1})^{-1}(1-3N^{-1})^{-1} = 2(1-3N^{-1})^{-1} , \end{align*} which is approximately one half of $\mu_{\langle g \rangle}=4(1-2N^{-1})^{-1}$ in \eqref{eqn:mu-bg} when $N$ is large.
\item \textit{Regression-based variance estimation with the target factors as regressors.} The regression-based approach is a commonly used strategy for analyzing factorial experiments. For non-uniform designs, \cite{zhao2021regression} pointed out that ordinary least squares (OLS) with unsaturated model specifications can give biased point estimates and variance estimators. Instead, one should apply weighted least squares (WLS) with sandwich variance estimation.
\item \textit{Regression-based variance estimation with the target factors and their higher-order interactions as regressors.} This strategy differs from strategy (ii) in whether the interactions are included. If all possible two-way interactions of the target factors are specified in the regression model and the true $k$-way ($k\ge 3$) interactions are zero, then this strategy is equivalent to the general factor-based grouping strategy introduced in Example \ref{exp:regression}. \end{enumerate}
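The two correction factors for pairing can be compared numerically. In the sketch below, \texttt{mu\_pair} and \texttt{mu\_tilde\_pair} are our own helpers implementing the formulas above with $|\langle g \rangle| = 2$:

```python
def mu_pair(N):
    """mu_<g> with |<g>| = 2: (1 - 2/N)^{-1} (1 - 1/2)^{-2} = 4 (1 - 2/N)^{-1}."""
    return 4 / (1 - 2 / N)

def mu_tilde_pair(N):
    """Smaller factor for marginal inference under pairing: 2 (1 - 3/N)^{-1}."""
    return 2 / (1 - 3 / N)

ratio = mu_tilde_pair(1780) / mu_pair(1780)   # N = 1780, roughly one half for large N
```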
In the next section, we provide more details on implementing the above strategies in the simulation.
\subsection{Simulation settings} We set up a $2^{10}$ non-uniform experiment ($K=10$) according to Definition \ref{def:non-uniform-design}, with the basic parameters specified as follows: \begin{itemize}
\item unreplicated arms: $|{\mathcal{Q}}_{\textsc{u}}| = 660$ and $N_q = 1$ for each $q\in{\mathcal{Q}}_{\textsc{u}}$.
\item replicated small arms: $|{\mathcal{Q}}_{\textsc{r}}| = 350$ and $N_q = 2$ for each $q\in{\mathcal{Q}}_{\textsc{r}}$.
\item large arms: $|{\mathcal{Q}}_{\textsc{l}}| = 14$ and $N_q = 30$ for each $q\in{\mathcal{Q}}_{\textsc{l}}$. \end{itemize} The above setup results in a population with $N = 1780$ units. Generate the potential outcomes independently from a shifted exponential distribution: \begin{align*}
Y_i(q) \sim \text{EXP}(\lambda_q) - 1/\lambda_q + \mu_q, \end{align*} where the $\lambda_q$'s are randomly set to $1$ or $2$ with equal probability to induce heteroskedasticity. We generate two sets of $\mu_q$'s and set up two numerical studies, one with small factorial effects and the other with large effects. In both studies, the main effects for factor $F_k$ with $k=1,4,7,10$ are set to zero. A random subset of the two-way interactions is set to zero as well. All the $k$-way ($k\ge 3$) interactions are zero.
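The outcome model can be sketched as follows (a minimal simulation of the shifted exponential draws; \texttt{draw\_potential\_outcomes}, the toy $Q=3$, and the chosen $\mu_q$'s are illustrative, not the exact code used for the reported studies). The shift $-1/\lambda_q + \mu_q$ centers each arm at $\mu_q$:

```python
import numpy as np

def draw_potential_outcomes(N, Q, mu, seed=0):
    """Y_i(q) ~ Exp(lambda_q) - 1/lambda_q + mu_q, so that E{Y_i(q)} = mu_q."""
    rng = np.random.default_rng(seed)
    lam = rng.choice([1.0, 2.0], size=Q)                  # heteroskedasticity across arms
    Y = rng.exponential(1 / lam, size=(N, Q)) - 1 / lam + mu
    return Y, lam

Y, lam = draw_potential_outcomes(N=200_000, Q=3, mu=np.array([0.0, 0.5, -0.5]))
```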
We run simulations for the two studies below. \begin{description}
\item[Study 1.] \label{study:small} Generate the nonzero main effects and two-way interactions from $ \text{Unif}([-0.5,-0.1]\cup[0.1,0.5])$.
\item[Study 2.] \label{study:large} Generate the nonzero main effects from $\text{Unif}([-1,-0.5]\cup[0.5,1])$. Generate the nonzero two-way interactions in the same way as in Study 1. \end{description}
In each study, we focus on estimating the main factorial effects for factor $F_{2l}$ for $l=1,\dots,5$. We apply the point estimates ${\widehat{\gamma}}$ in \eqref{eqn:estimates} and compare three variance estimation strategies discussed in Section \ref{section::simulation-implementation} above: \begin{enumerate} \item LEX: We use the grouping strategy based on pairing by the lexicographical order. \item $\text{WLS}_0$: We use the sandwich variance estimators based on WLS with the target factors: \begin{align*}
Y \sim F_2 + F_4 + F_6 + F_8 + F_{10}, \text{ with weights } w_i = N_{Z_i}^{-1}. \end{align*} \item $\text{WLS}_1$: We use the sandwich variance estimators based on WLS with the target factors and their two-way interactions: \begin{align*}
Y \sim F_2 + F_4 + F_6 + F_8 + F_{10} + \text{Interaction}_2(F_2, F_4, F_6, F_8, F_{10}), \text{ with weights } w_i = N_{Z_i}^{-1}. \end{align*}
\end{enumerate}
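For concreteness, a minimal WLS point estimator can be sketched as below (our own helper built from the normal equations; the sandwich covariance step is omitted). With equal weights, WLS reduces to OLS, which the toy check exploits:

```python
import numpy as np

def wls(X, y, w):
    """Weighted least squares: argmin_b sum_i w_i (y_i - x_i^T b)^2."""
    Xw = X * w[:, None]                        # row-scale X by the weights
    return np.linalg.solve(Xw.T @ X, Xw.T @ y) # solve X^T W X b = X^T W y

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(50), rng.standard_normal(50)])
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.standard_normal(50)
beta_wls = wls(X, y, np.ones(50))              # equal weights -> should match OLS
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
```

In the simulation, the weights would be $w_i = N_{Z_i}^{-1}$ as displayed above.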
\subsection{Simulation results} We repeat each study $1000$ times. Figure \ref{fig:simulation} shows the violin plots of the differences between the point estimates and the true parameters. Table \ref{tab:cover-reject} compares the aforementioned variance estimators based on two criteria: the coverage rate of $95\%$ confidence intervals and the rejection rate of the null hypothesis that the main effects are zero, corresponding to the ``Coverage'' and ``Rejection'' columns, respectively.
Figure \ref{fig:simulation} shows that, even in a highly non-uniform design, the point estimates are centered around the truth and asymptotic normality holds when the total population $N$ is sufficiently large. Table \ref{tab:cover-reject} shows that the confidence intervals based on all three variance estimators are valid and robust in both the small-effects and large-effects settings. The variance estimator based on pairing is less conservative than the sandwich variance estimators, because the between-group variation induced by grouping tends to be smaller with finer groups (see Theorem \ref{thm:unreplicated-var} and the relevant discussion). For the sandwich variance estimator, including more terms in the regression can mitigate the conservativeness. In terms of the rejection rate, when the true effect size $|\gamma_\cdot|$ is large (say $|\gamma_\cdot|\ge 3(\Var{{\widehat{\gamma}}_\cdot})^{1/2}$), the power of the tests is high despite the conservativeness of the variance estimation. However, if the effects are too small, the rejection rate can be negatively impacted. For example, in Study 1, the true main effect for factor $F_6$ is $\gamma_6 = 0.058$ (around $2.6\,(\Var{{\widehat{\gamma}}_6})^{1/2}$). From Table \ref{tab:cover-reject}, the rejection rates of all methods for the null $\gamma_6=0$ are smaller than 1, suggesting that the tests are underpowered.
\begin{figure}
\caption{Violin plots of the differences between the estimators and true parameters for the five
target effects. Figure \ref{fig:small-effects} corresponds to Study 1 and Figure \ref{fig:large-effects} corresponds to Study 2, respectively. }
\label{fig:small-effects}
\label{fig:large-effects}
\label{fig:simulation}
\end{figure}
\begin{table}[!htbp] \centering \caption{Coverage and rejection rates based on three variance estimators} \label{tab:cover-reject}
\begin{tabular}{|c|c|ccc|ccc|} \hline
\multirow{2}{*}{} & \multirow{2}{*}{Effects} & \multicolumn{3}{c|}{Coverage} & \multicolumn{3}{c|}{Rejection} \\ \cline{3-8}
& & LEX & WLS-0 & WLS-1 & LEX & WLS-0 & WLS-1 \\ \hline \multirow{5}{*}{Study 1} & $F_2$ & 0.963 & 0.977 & 0.973 & 1.000 & 1.000 & 1.000 \\
& $F_4$ & 0.968 & 0.980 & 0.976 & 0.026 & 0.015 & 0.018 \\
& $F_6$ & 0.974 & 0.985 & 0.982 & 0.665 & 0.600 & 0.622 \\
& $F_8$ & 0.974 & 0.984 & 0.982 & 1.000 & 1.000 & 1.000 \\
& $F_{10}$ & 0.973 & 0.985 & 0.980 & 0.032 & 0.016 & 0.018 \\ \hline \multirow{5}{*}{Study 2} & $F_2$ & 0.977 & 0.996 & 0.995 & 1.000 & 1.000 & 1.000 \\
& $F_4$ & 0.970 & 0.996 & 0.995 & 0.030 & 0.004 & 0.005 \\
& $F_6$ & 0.969 & 0.994 & 0.993 & 1.000 & 1.000 & 1.000 \\
& $F_8$ & 0.974 & 0.994 & 0.993 & 1.000 & 1.000 & 1.000 \\
& $F_{10}$ & 0.969 & 0.996 & 0.995 & 0.031 & 0.004 & 0.005 \\ \hline \end{tabular}
\end{table}
\section{Discussion}\label{sec::discussion}
We focused on scalar outcomes. Results for vector outcomes are also important in both theory and practice. \citet{li2017general} reviewed CLTs and many applications with vector outcomes under the regime of a fixed number of treatment levels and large sample sizes within all treatment levels. We include an extension of the BEB for vector outcomes under a general regime; see Section \ref{sec:vec-outcome} in the supplementary material.
Asymptotic results for design-based inference are often criticized because the population of interest is finite whereas the asymptotic theory requires a growing sample size. Establishing BEBs is an important theoretical step toward characterizing the finite-sample performance of the statistics. Alternatively, it is also desirable to derive non-asymptotic concentration inequalities for the estimators under the randomization model. This requires a deeper understanding of sampling without replacement and permutational statistics. We leave it to future research.
\textbf{Funding.} The authors were partially supported by the U.S. National Science Foundation (\# 1945136).
\textbf{Supplementary material.} The supplementary material contains additional results on general linear permutational statistics, randomization-based inference, and all the technical proofs.
\begin{appendix}
\setcounter{page}{1} \renewcommand{\thepage}{S\arabic{page}}
\setcounter{equation}{0} \renewcommand{\theequation}{S\arabic{equation}}
\setcounter{theorem}{0} \renewcommand{\thetheorem}{S\arabic{theorem}}
\setcounter{lemma}{0} \renewcommand{\thelemma}{S\arabic{lemma}}
\setcounter{proposition}{0} \renewcommand{\theproposition}{S\arabic{proposition}}
\setcounter{corollary}{0} \renewcommand{\thecorollary}{S\arabic{corollary}}
\setcounter{table}{0} \renewcommand{\thetable}{S\arabic{table}}
\setcounter{example}{0} \renewcommand{\theexample}{S\arabic{example}}
\setcounter{condition}{0} \renewcommand{\thecondition}{S\arabic{condition}}
\setcounter{definition}{0} \renewcommand{\thedefinition}{S\arabic{definition}}
\setcounter{remark}{0} \renewcommand{\theremark}{S\arabic{remark}}
\begin{center} \Huge Supplementary materials \end{center}
Appendix \ref{sec:general-BE} reviews existing BEBs and develops new ones for linear permutational statistics.
Appendix \ref{sec::proof-linear-permutational} gives the proofs of the results in Appendix \ref{sec:general-BE}.
Appendix \ref{sec:additional} presents additional results for design-based causal inference.
Appendix \ref{sec:main-proof} gives the proofs of the results in the main paper and Appendix \ref{sec:additional}.
In addition to the notation used in the main paper, we introduce further notation.
For a positive integer $N$, let ${\mathbb{S}}_N$ denote the set of permutations over $[N]$. We use $\pi \in {\mathbb{S}}_N$ to denote a permutation, which is a bijection from $[N]$ to $[N]$ with $\pi(i)$ denoting the integer on index $i$ after permutation. We also use the same notation $\pi$ to denote a random permutation, which is uniformly distributed over ${\mathbb{S}}_N$.
For a matrix $M=(M(h,l))\in{\mathbb{R}}^{H\times H}$, define its column, row and all-entry sums as $$ {M}(+, l) = \sum_{h=1}^H M(h, l),\quad {M}(h,+) = \sum_{l=1}^H M(h,l),\quad M(+,+) = \sum_{h=1}^H\sum_{l=1}^H M(h,l), $$ respectively. For two matrices $M,M'\in{\mathbb{R}}^{H\times H}$, define the trace inner product as $$ \tr{M}{M'} = \operatorname{trace}(M^\top M') = \sum_{h=1}^H\sum_{l=1}^H M(h,l)M'(h,l). $$ Vectorize $M$ as $\myvec{M}$ by stacking its column vectors. We will use the following basic result on matrix norms: \begin{align}\label{eqn:op-inf}
\|M\|_{\operatorname{op}} \le {H}\|M\|_{\infty}. \end{align}
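Interpreting $\|M\|_\infty$ as the entrywise maximum, \eqref{eqn:op-inf} follows from $\|M\|_{\operatorname{op}} \le \|M\|_F \le H\|M\|_\infty$ and can be spot-checked numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
checks = []
for _ in range(200):
    H = int(rng.integers(1, 8))
    M = rng.standard_normal((H, H))
    op = np.linalg.norm(M, ord=2)        # operator norm: largest singular value
    entry_max = np.abs(M).max()          # ||M||_inf as the entrywise maximum
    checks.append(op <= H * entry_max + 1e-12)
```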
\section{General combinatorial Berry--Esseen bounds for linear permutational statistics}\label{sec:general-BE}
Appendix \ref{sec:general-BE} presents general BEBs on multivariate linear permutational statistics. Section \ref{sec:formulation} provides a unified formulation for linear permutational statistics, which includes the point estimates in the main paper as a special case. Section \ref{sec:linear-projection} discusses BEBs for linear projections of multivariate permutational statistics. Section \ref{sec:fine-BE} provides dimension-dependent BEBs over convex sets, which are the basic tools for proving the BEBs for the quadratic forms of linear permutational statistics.
\subsection{Multivariate permutational statistics}\label{sec:formulation}
To analyze estimates of the form \eqref{eqn:estimates}, we need a general formulation of multivariate permutational statistics. Let $P\in{\mathbb{R}}^{N\times N}$ be a random permutation matrix, which is obtained by randomly permuting the columns (or rows) of the identity matrix $I_N$. Also define $M_1,\ldots,M_H$ as $H$ deterministic $N \times N$ matrices. We want to study the random vector \citep{chatterjee2008multivariate} \begin{align}\label{eqn:rv-target}
\Gamma = \left(\trace{M_1 P},\ldots, \trace{M_H P}\right)^\top. \end{align}
Each random permutation matrix $P$ can also be represented by a random permutation $\pi$.
Then \begin{align*}
\trace{M_h P} = \sum_{i=1}^N M_h(i,\pi(i)), \qquad (h=1,\ldots,H). \end{align*}
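The identity above is easy to check numerically. The following minimal sketch (assuming NumPy; the data are hypothetical random matrices) builds the permutation matrix $P$ whose column $i$ is the canonical basis vector $e_{\pi(i)}$, and verifies $\operatorname{Tr}(M_h P) = \sum_{i=1}^N M_h(i,\pi(i))$ with 0-based indices:

```python
import numpy as np

rng = np.random.default_rng(0)
N, H = 6, 3
pi = rng.permutation(N)                # a random permutation of {0,...,N-1}
P = np.zeros((N, N))
P[pi, np.arange(N)] = 1.0              # column i of P is the basis vector e_{pi(i)}

M = rng.standard_normal((H, N, N))     # H deterministic matrices (hypothetical data)
for h in range(H):
    trace_form = np.trace(M[h] @ P)
    sum_form = sum(M[h][i, pi[i]] for i in range(N))
    assert np.isclose(trace_form, sum_form)
```

The equivalence holds because $(M_h P)(i,i) = \sum_r M_h(i,r)P(r,i) = M_h(i,\pi(i))$.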
Example \ref{exp:revisit-CR} below revisits complete randomization.
\begin{example}[Revisiting complete randomization]\label{exp:revisit-CR} Under complete randomization, the treatment vector ${\boldsymbol{Z}} = (Z_1,\cdots, Z_N)$ corresponds to a permutation matrix $P$. As a toy example, consider an experiment with $Q = 2$, $N_1 = 1$ and $N_2 = 2$. One can label the rows and columns of $P$ as follows: \begin{align*}
\bordermatrix{~ & i=1 & i=2 & i=3 \cr
q=1 & 0 & 1 & 0 \cr
q=2 & 1 & 0 & 0 \cr
q=2 & 0 & 0 & 1}. \end{align*} The pattern of $1$'s indicates exactly the treatment allocation. Generally, if we let the rows of $P$ represent the treatment arms and view the columns as indicator vectors of individuals, then a permutation of the columns corresponds to a treatment allocation for all units. We can use \eqref{eqn:rv-target} to reformulate the sample mean vector ${\widehat{Y}}$ as $ \Gamma = (\Gamma_1, \ldots, \Gamma_Q)^\top$, where $\Gamma_q = \trace{M_q P}$ with
\begin{align}\label{eqn:pop} M_q = \bordermatrix{~ & Z = 1 & \cdots & Z = q & { \cdots } & Z=Q \cr 1 & {\boldsymbol{0}}^\top_{N_{1}} & \cdots & N_{q}^{-1}Y_1(q)\cdot{\boldsymbol{1}}^\top_{N_{q}} & \cdots & {\boldsymbol{0}}^\top_{N_{Q}} \cr 2 & {\boldsymbol{0}}^\top_{N_{1}} & \cdots & N_{q}^{-1}Y_2(q)\cdot{\boldsymbol{1}}^\top_{N_{q}} & \cdots & {\boldsymbol{0}}^\top_{N_{Q}} \cr \cdots & \cdots & \cdots & \cdots & \cdots & \cdots \cr N & {\boldsymbol{0}}^\top_{N_{1}} & \cdots & N_{q}^{-1}Y_N(q)\cdot{\boldsymbol{1}}^\top_{N_{q}} & \cdots & {\boldsymbol{0}}^\top_{N_{Q}} \cr} . \end{align} \end{example}
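The structure of \eqref{eqn:pop} can be verified numerically. In the sketch below (assuming NumPy; the group sizes $N_1 = N_2 = 3$ and the potential outcomes $Y_i(q)$ are hypothetical), columns are grouped into slots for each arm, $M_q$ has the block $N_q^{-1}Y_i(q)$ in arm $q$'s columns, and $\operatorname{Tr}(M_q P)$ recovers the sample mean of arm $q$:

```python
import numpy as np

rng = np.random.default_rng(1)
N, Q = 6, 2
N_arm = [3, 3]                         # hypothetical group sizes N_1, N_2
Y = rng.standard_normal((N, Q))        # hypothetical potential outcomes Y_i(q)

# Columns 0..2 are the slots of arm 1, columns 3..5 of arm 2.
slots = np.repeat(np.arange(Q), N_arm)

def M_of(q):
    # Build M_q as in the displayed matrix: arm-q columns hold Y_i(q)/N_q.
    M = np.zeros((N, N))
    M[:, slots == q] = Y[:, [q]] / N_arm[q]
    return M

pi = rng.permutation(N)                # slot pi[i] is occupied by unit i
P = np.zeros((N, N))
P[pi, np.arange(N)] = 1.0              # P(s, i) = 1 iff unit i occupies slot s

Z = slots[pi]                          # arm assigned to unit i
for q in range(Q):
    gamma_q = np.trace(M_of(q) @ P)
    mean_q = Y[Z == q, q].mean()       # sample mean of arm q
    assert np.isclose(gamma_q, mean_q)
```

Since $\pi$ is a bijection, exactly $N_q$ units land in arm $q$'s slots, so the trace sums $Y_i(q)/N_q$ over precisely the treated-with-$q$ units.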
Lemma \ref{lem:mean-var} below gives the mean and covariance of $\Gamma$: \begin{lemma}[Mean and covariance of $\Gamma$]\label{lem:mean-var} \quad \begin{enumerate}[label = (\roman*)] \item For a random permutation matrix $P$, we have \begin{gather}
{\mathbb{E}}\{P(\cdot, i)\} = \frac{1}{N} {\boldsymbol{1}}_N,\quad {\mathbb{E}}\{P(\cdot, i) P(\cdot, i)^\top\} = \frac{1}{N}I_N\label{eqn:EP}
\end{gather} for all $i$, and
\begin{gather} {\mathbb{E}}\{P(\cdot, i)P(\cdot, j)^\top\} = \frac{1}{N(N-1)}({\boldsymbol{1}}_{N\times N} - I_N) \label{eqn:EPiPj} \end{gather} for $ i\neq j$. \item For the random vector $\Gamma$ defined in \eqref{eqn:rv-target}, we have \begin{align}
{\mathbb{E}}\{\Gamma_h\} = \frac{1}{N}\sum_{i=1}^N\sum_{j=1}^NM_h(i,j)\label{eqn:EGamma}
\end{align} for all $h$, and
\begin{align}
{\mathbb{E}}\{\Gamma_h\Gamma_l\} &= \frac{1}{N-1}\tr{M_h}{M_l} + \frac{1}{N(N-1)}{M}_h(+,+){M}_l(+,+)\notag\\
& - \frac{1}{N(N-1)} \sum_{k=1}^N M_h(+,k) {M}_l(+,k) - \frac{1}{N(N-1)} \sum_{k=1}^N {M}_h(k,+) {M}_l(k,+)\label{eqn:EGammakl} \end{align} for all $h, l\in[H]$. \end{enumerate} \end{lemma}
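Because the permutation distribution is finite, \eqref{eqn:EGamma} and \eqref{eqn:EGammakl} can be checked exactly for small $N$ by enumerating all $N!$ permutations. A minimal sketch (assuming NumPy; the matrices are hypothetical random data):

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
N, H = 4, 2
M = rng.standard_normal((H, N, N))   # hypothetical population matrices

# Exact moments by enumerating all N! permutations.
perms = list(itertools.permutations(range(N)))
G = np.array([[sum(M[h][i, p[i]] for i in range(N)) for h in range(H)]
              for p in perms])
E_Gamma = G.mean(axis=0)
E_G0G1 = (G[:, 0] * G[:, 1]).mean()

# Compare with the closed forms of the lemma.
for h in range(H):
    assert np.isclose(E_Gamma[h], M[h].sum() / N)          # mean formula

row = lambda A: A.sum(axis=1)   # row sums  A(k, +)
col = lambda A: A.sum(axis=0)   # column sums  A(+, k)
formula = ((M[0] * M[1]).sum() / (N - 1)
           + M[0].sum() * M[1].sum() / (N * (N - 1))
           - (col(M[0]) * col(M[1])).sum() / (N * (N - 1))
           - (row(M[0]) * row(M[1])).sum() / (N * (N - 1)))
assert np.isclose(E_G0G1, formula)                         # second-moment formula
```

Here $(M_0 * M_1).\mathrm{sum}()$ is the trace inner product $\langle M_h, M_l\rangle$, and the remaining terms match the row-, column- and all-entry sums in \eqref{eqn:EGammakl} term by term.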
Special cases of Lemma \ref{lem:mean-var} have appeared in previous works under certain simplifications. For example, \cite{hoeffding1951combinatorial} computed the mean and variance for scalar $\Gamma$ with $H=1$. \cite{chatterjee2008multivariate} did the calculation under the conditions of zero row and column sums as well as orthogonality of the population matrices. \cite{bolthausen1993rate} relaxed the orthogonality constraint and presented the covariance formula under only the zero column sum condition.
As an application, we can obtain the mean and covariance of ${\widehat{Y}}$: \begin{example}[Mean and covariance matrix of ${\widehat{Y}}$]\label{exp:multi-avg} Based on \eqref{eqn:pop}, we can verify that \begin{align*}
\tr{M_q}{M_l} = 0, \text{ if } q\neq l. \end{align*} Using \eqref{eqn:EGamma}, we can compute \begin{align*}
{\mathbb{E}}\{\Gamma_q\} = \frac{1}{N}\sum_{i=1}^N Y_i(q), \end{align*} and \begin{gather*}
{\mathbb{E}}\{(\Gamma_q - {\mathbb{E}}\Gamma_q)^2\} = \left(\frac{1}{N_q} - \frac{1}{N}\right)S(q,q), \\
{\mathbb{E}}\{(\Gamma_q - {\mathbb{E}}\Gamma_q)(\Gamma_l - {\mathbb{E}}\Gamma_l)\} = - \frac{1}{N}S(q,l). \end{gather*}
\end{example}
From now on, for ease of discussion, we assume Condition \ref{cond:str-Mk} below: \begin{condition}[Standardized orthogonal structure of $M_h$'s]\label{cond:str-Mk} For each $h\in[H]$, the row and column sums of $M_h$ are zero and \begin{align*}
\operatorname{Tr}(M_h^\top M_h) = N - 1. \end{align*} The $M_h$'s are mutually orthogonal with respect to the trace inner product: \begin{align*}
\operatorname{Tr}(M_h^\top M_l) = 0, \text{ for } h\neq l. \end{align*} \end{condition}
Lemma \ref{lem:reformulate} below ensures that imposing Condition \ref{cond:str-Mk} causes no loss of generality.
\begin{lemma}[Reformulation of the multivariate permutational statistics]\label{lem:reformulate}
Let $P$ be a random $N\times N$ permutation matrix, and let $M_1,\ldots,M_H$ be $H$ deterministic $N\times N$ matrices. Let ${\mathbb{E}}\{\Gamma\}, V = \Var{\Gamma}, V^\star = \operatorname{Corr}(\Gamma)$ be respectively the expectation, covariance and correlation of $\Gamma$ defined in \eqref{eqn:rv-target}. Let $\widetilde{V} = V^{-1/2}$. Define $\{M'_h\}_{h=1}^H$ by $$
M'_h(i,j) = {M}_h(i,j) - N^{-1}{M_h}(i,+) - N^{-1}{M_h}(+,j) + N^{-2} {M_h}(+,+),
$$
and then define $\{M''_h\}_{h=1}^H$ by
$$
M''_h(i,j) = \sum_{l=1}^H \widetilde{V}_{hl}M'_l(i,j) .
$$
(i) ${M}''_1, \ldots, {M}''_H$ satisfy Condition \ref{cond:str-Mk} and \begin{align*} V^{-1/2}(\Gamma - {\mathbb{E}}\{\Gamma\}) = \left(\operatorname{Tr}({M}''_1P), \ldots, \operatorname{Tr}({M}''_HP)\right)^\top. \end{align*}
(ii) We have \begin{align}\label{eqn:standardM-bd}
\max_{h\in[H]}\max_{i,j\in[N]} | {M}''_h(i,j)| \le \varrho_{\min}(V)^{-1/2} \sqrt{H} \max_{h\in[H]}\max_{i,j\in[N]} |M'_h(i,j)|. \end{align}
\end{lemma}
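The two-step reformulation of the lemma (double-centering to $M'_h$, then whitening by $V^{-1/2}$ to $M''_h$) is straightforward to verify numerically. A sketch assuming NumPy, with hypothetical random matrices and the exact covariance $V$ computed by enumerating all $N!$ permutations:

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)
N, H = 5, 2
M = rng.standard_normal((H, N, N))   # hypothetical raw matrices

# Step 1: double-center each M_h so that all row and column sums are zero.
def center(A):
    return A - A.mean(axis=1, keepdims=True) - A.mean(axis=0, keepdims=True) + A.mean()

Mp = np.array([center(M[h]) for h in range(H)])

# Step 2: exact covariance V of Gamma over all N! permutations, then whiten
# with the symmetric inverse square root V^{-1/2}.
perms = list(itertools.permutations(range(N)))
G = np.array([[sum(M[h][i, p[i]] for i in range(N)) for h in range(H)]
              for p in perms])
V = np.cov(G.T, bias=True)                # exact population covariance
w, U = np.linalg.eigh(V)
Vih = U @ np.diag(w ** -0.5) @ U.T        # V^{-1/2}
Mpp = np.einsum('hl,lij->hij', Vih, Mp)   # M''_h = sum_l (V^{-1/2})_{hl} M'_l

# Check the standardized orthogonal structure of the M''_h's.
for h in range(H):
    assert np.allclose(Mpp[h].sum(axis=0), 0)
    assert np.allclose(Mpp[h].sum(axis=1), 0)
gram = np.einsum('hij,lij->hl', Mpp, Mpp)  # Tr(M''_h^T M''_l)
assert np.allclose(gram, (N - 1) * np.eye(H))
```

The final assertion is exactly the conclusion of part (i): the Gram matrix of the $M''_h$'s under the trace inner product equals $(N-1)I_H$.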
\subsection{BEBs for linear projections}\label{sec:linear-projection}
In this subsection, we establish BEBs for linear permutational statistics. \cite{Bolthausen1984AnEO} established a BEB for univariate permutational statistics, which is a basic tool for our proofs.
\begin{lemma}[Main theorem of \cite{Bolthausen1984AnEO}]\label{lem:bolthausen-1984} There exists an absolute constant $C>0$, such that \begin{align*}
\sup_{t\in{\mathbb{R}}} |{\mathbb{P}}\{\Gamma_1 \le t\} - \Phi(t)| \le \frac{C}{N}\sum_{i,j\in[N]} |M_1(i,j)|^3. \end{align*} \end{lemma}
We can use Lemma \ref{lem:bolthausen-1984} to prove Theorem \ref{thm:linear-projection} below.
\begin{theorem}\label{thm:linear-projection}
Assume Condition \ref{cond:str-Mk}. Let $b\in{\mathbb{R}}^H$ be a vector with $\|b\|_2 = 1$. Then there exists an absolute constant $C > 0$, such that \begin{align*}
\sup_{t\in{\mathbb{R}}}|{\mathbb{P}}\{b^\top\Gamma \le t\} - \Phi(t)| \le C{\max_{i,j\in[N]} \left|\sum_{h=1}^Hb_hM_h(i,j)\right|}. \end{align*}
\end{theorem}
The proof of Theorem \ref{thm:linear-projection} is straightforward based on the theorem of \cite{Bolthausen1984AnEO}. It is more interesting to compute the upper bound in specific examples, which we will do in Appendix \ref{sec:additional}. Theorem \ref{thm:linear-projection} is a finite-sample result. It implies a CLT when the upper bound vanishes: \begin{align}\label{eqn:BE-to-clt}
\max_{i,j\in[N]} \left|\sum_{h=1}^Hb_hM_h(i,j)\right| \to 0, \text{ as } N\to\infty. \end{align} We can further upper bound the left hand side of \eqref{eqn:BE-to-clt}: \begin{align*}
\max_{i,j\in[N]} \left|\sum_{h=1}^Hb_hM_h(i,j)\right| \le \max_{i,j\in[N],h\in[H]} |M_h(i,j)| \cdot \|b\|_1 \le \sqrt{H}\max_{i,j\in[N],h\in[H]} |M_h(i,j)|. \end{align*}
Hence Theorem \ref{thm:linear-projection} reveals a trade-off between $H$ and $\max_{i,j\in[N],h\in[H]} |M_h(i,j)|$. Alternatively, we can use the Cauchy-Schwarz inequality to obtain another bound: \begin{align*}
\max_{i,j\in[N]} \left|\sum_{h=1}^Hb_hM_h(i,j)\right| \le \max_{i,j\in[N]} \left\{\sum_{h=1}^H|M_h(i,j)|^2\right\}^{1/2} \cdot \|b\|_2 \le \max_{i,j\in[N]} \left\{\sum_{h=1}^H|M_h(i,j)|^2\right\}^{1/2}, \end{align*} which can be a tighter bound for some $M_h$'s.
Besides, the combinatorial CLT of \citet[][Theorem 3]{hoeffding1951combinatorial} establishes the following sufficient condition for $b^\top\Gamma$ converging to a standard Normal distribution: \begin{lemma}[Combinatorial CLT by Theorem 3 of \cite{hoeffding1951combinatorial}] $b^\top\Gamma$ is asymptotically Normal if \begin{align}\label{eqn:hoeffding-cond}
\frac{\max_{i,j\in[N]} \left\{\sum_{h=1}^Hb_hM_h(i,j)\right\}^2}{N^{-1}\sum_{i,j\in[N]}\left\{\sum_{h=1}^Hb_hM_h(i,j)\right\}^2} \to 0. \end{align} \end{lemma}
Under Condition \ref{cond:str-Mk}, we have \begin{align*}
\sum_{i,j\in[N]}\left\{\sum_{h=1}^Hb_hM_h(i,j)\right\}^2 = N-1. \end{align*} Hence \eqref{eqn:hoeffding-cond} is equivalent to \eqref{eqn:BE-to-clt}. But from Theorem \ref{thm:linear-projection}, \eqref{eqn:hoeffding-cond} implies not only convergence in distribution, but also an upper bound on the convergence rate in the Kolmogorov distance.
\subsection{A permutational BEB over convex sets} \label{sec:fine-BE}
With independent random variables, the BEBs over convex sets match the optimal rate $N^{-1/2}$ \citep{nagaev1976estimate, bentkus2005lyapunov}.
We achieve the same order for linear permutational statistics by using a result based on Stein's method \citep{fang2015rates}.
\begin{definition}[Exchangeable pair] \label{def::exchangeable-pair} $(\Gamma,\Gamma')$ is an exchangeable pair if $(\Gamma,\Gamma')$ and $(\Gamma',\Gamma)$ have the same distribution. \end{definition}
\begin{definition}[Stein coupling, Definition 2.1 of \cite{fang2015rates}]\label{def:stein-coupling} A triple of square integrable $H$-dimensional random vectors $ (\Gamma,\Gamma',G) $ is called an $H$-dimensional Stein coupling if \begin{align*}
{\mathbb{E}}\{G^\top f(\Gamma') - G^\top f(\Gamma)\} = {\mathbb{E}}\{\Gamma^\top f(\Gamma)\} \end{align*} for all $f: {\mathbb{R}}^H \to {\mathbb{R}}^H$ provided that the expectations exist. \end{definition}
\citet[][Remark 2.3]{fang2015rates} made a connection between Definitions \ref{def::exchangeable-pair} and \ref{def:stein-coupling}, shown below.
\begin{lemma} [Remark 2.3 of \cite{fang2015rates}] \label{lemma-coupling-1} If $(\Gamma,\Gamma')$ is an exchangeable pair and $
{\mathbb{E}}(\Gamma' - \Gamma\mid \Gamma) = -\Lambda \Gamma $ for some invertible $\Lambda$, then $(\Gamma,\Gamma',\frac{1}{2}\Lambda^{-1}(\Gamma'-\Gamma))$ is a Stein coupling. \end{lemma}
\cite{fang2015rates} established the following BEB based on multivariate Stein coupling.
\begin{lemma}[Theorem 2.1 of \cite{fang2015rates}]\label{lem:bounded-pair} Let $(\Gamma, \Gamma', G)$ be an $H$-dimensional Stein coupling. Assume $\operatorname{Cov}(\Gamma) = {I}_H$. Let $\xi_H$ be an $H$-dimensional standard Normal random vector. With $D = \Gamma' - \Gamma$, suppose that there are positive constants $\alpha$ and $\beta$ such that
$\|G\|_2\le\alpha$ and $\|D\|_2 \le \beta$. Then there exists a universal constant $C$, such that \begin{align*}
&\sup_{A\in{\mathcal{A}}}|{\mathbb{P}}\{\Gamma\in A\} - {\mathbb{P}}\{\xi_H \in A\}| \\
&\le C(H^{7/4} \alpha \mathbb{E}\|D\|_2^2 + H^{1/4}\beta + H^{7/8}\alpha^{1/2}B_1^{1/2} + H^{3/8}B_2 + H^{1/8}B_3^{1/2}), \end{align*} where \begin{gather*}
B_1^2 = \Var{{\mathbb{E}}(\|D\|_2^2\mid \Gamma)}, \\
B_2^2 = \sum_{h=1}^H\sum_{l=1}^H \Var{{\mathbb{E}}(G_hD_l\mid \Gamma)}, \\
B_3^2 = \sum_{h=1}^H\sum_{l=1}^H\sum_{m=1}^H \Var{{\mathbb{E}}(G_hD_lD_m\mid \Gamma)}. \end{gather*}
\end{lemma}
Our construction of exchangeable pairs for linear permutational statistics is motivated by \cite{chatterjee2007multivariate}. For $\Gamma$, we construct a coupled random vector $\Gamma'$ by applying a random transposition to the original permutation. Here a random transposition is defined as follows: \begin{definition}[Random transposition]\label{def:transposition} The set of transpositions ${\mathbb{T}}_N = \{(t_1t_2)\}$ is defined as the subset of permutations over $[N]$ that only switch two indices $t_1$ and $t_2$ among $\{1,\ldots,N\}$ while keeping the others fixed. A random transposition $\tau$ is uniformly distributed on ${\mathbb{T}}_N$. \end{definition}
If a random transposition $\tau$ and a random permutation $\pi$ are independent, their composite $\pi' = \tau\circ \pi$ is also a random permutation over $[N]$. As we discussed in Section \ref{sec:formulation}, $\pi$ and $\pi'$ can be represented as random permutation matrices $P$ and $P'$. Let \begin{align}\label{eqn:target-vec-prime}
\Gamma' = \left(\operatorname{Tr}(M_1 P'),\ldots, \operatorname{Tr}(M_H P') \right)^\top. \end{align} Now $(\Gamma, \Gamma')$ is an exchangeable pair and has the following basic property.
\begin{lemma} [Lemma 8 of \cite{chatterjee2007multivariate}] \label{lemma-coupling-2} $ {\mathbb{E}}\{\Gamma'-\Gamma\mid \pi\} = -\frac{2}{N-1}\Gamma$. \end{lemma}
By Lemmas \ref{lemma-coupling-1} and \ref{lemma-coupling-2}, $(\Gamma,\Gamma',-\frac{N-1}{4}(\Gamma-\Gamma'))$ with $G = -\frac{N-1}{4}(\Gamma-\Gamma')$ is a Stein coupling.
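The linearity identity of Lemma \ref{lemma-coupling-2} can be checked exactly for a fixed $\pi$ by averaging over all $\binom{N}{2}$ transpositions. A sketch assuming NumPy (the matrix is hypothetical random data, double-centered so that its row and column sums are zero as in Condition \ref{cond:str-Mk}; without the centering the identity picks up an additive constant). Swapping two entries of $\pi$ realizes the transposed permutation, and averaging over all pairs is the conditional expectation given $\pi$:

```python
import itertools
import numpy as np

rng = np.random.default_rng(4)
N = 5
M1 = rng.standard_normal((N, N))
# Double-center: zero row and column sums, as required by the condition.
M1 = M1 - M1.mean(axis=1, keepdims=True) - M1.mean(axis=0, keepdims=True) + M1.mean()

pi = list(rng.permutation(N))
gamma = sum(M1[i, pi[i]] for i in range(N))

# Average Gamma' - Gamma over all N(N-1)/2 transpositions (t1 t2).
diffs = []
for t1, t2 in itertools.combinations(range(N), 2):
    pip = pi.copy()
    pip[t1], pip[t2] = pip[t2], pip[t1]
    diffs.append(sum(M1[i, pip[i]] for i in range(N)) - gamma)

# Lemma: E{Gamma' - Gamma | pi} = -2/(N-1) * Gamma.
assert np.isclose(np.mean(diffs), -2.0 / (N - 1) * gamma)
```

This is exactly the linear regression condition of Lemma \ref{lemma-coupling-1} with $\Lambda = \frac{2}{N-1}I_H$.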
We prove the following result based on Lemma \ref{lem:bounded-pair}: \begin{theorem}[Permutational BEB over convex sets]\label{thm:be-bounded}
Assume $|M_h(i,j)|\le B_N$ for $h\in[H]$ and $i,j\in[N]$. Assume Condition \ref{cond:str-Mk}. Then there exists a universal constant $C>0$, such that \begin{align}\label{eqn:be-bounded}
&\sup_{A\in{\mathcal{A}}}|{\mathbb{P}}\{\Gamma\in A\} - {\mathbb{P}}\{\xi_H \in A\}| \\
&\le CH^{13/4}NB_N(B_N^2 + N^{-1}) + C H^{3/4}B_N + CH^{13/8}N^{1/4}B_N^{3/2}
+ CH^{11/8}N^{1/2}B_N^2. \notag \end{align} When $B_N = O(N^{-1/2})$, the upper bound \eqref{eqn:be-bounded} becomes \begin{align}\label{eqn:special-case}
\sup_{A\in{\mathcal{A}}}|{\mathbb{P}}\{\Gamma\in A\} - {\mathbb{P}}\{\xi_H \in A\}| \le \frac{CH^{13/4}}{{N}^{1/2}}. \end{align} \end{theorem}
To end this subsection, we briefly comment on the literature of multivariate permutational BEBs and compare the existing results with Theorem \ref{thm:be-bounded}. \cite{bolthausen1993rate} proved a multivariate permutational BEB under some conditions, but their bound did not specify the explicit dependence on the dimension ($H$ in our notation). \cite{chatterjee2007multivariate} proposed methods based on exchangeable pairs for multivariate normal approximation and applied them to permutational distributions. However, their methods only allow one to establish the following result: \begin{align*}
\sup_{g\in C^2({\mathbb{R}}^H)}|\E{g(\Gamma)} - \E{g(\xi_H)}| \le \frac{CH^3}{N^{1/2}}, \end{align*} where $C^2({\mathbb{R}}^H)$ represents the collection of second-order continuously differentiable functions on ${\mathbb{R}}^H$. While the rate over $H$ is slightly better than \eqref{eqn:special-case}, the function class $C^2({\mathbb{R}}^H)$ cannot cover the indicator functions. \cite{raic2015multivariate} conjectured the following result: \begin{align}\label{eqn:raic}
\sup_{A\in{\mathcal{A}}}|{\mathbb{P}}\{\Gamma\in A\} - {\mathbb{P}}\{\xi_H\in A\}| \le C\frac{H^{1/4}}{N}\sum_{i\in[N]}\sum_{j\in[N]}\left(\sum_{h\in[H]}M_h(i,j)^2\right)^{3/2}. \end{align} When $B_N = O(N^{-1/2})$, \eqref{eqn:raic} has order $O(H^{7/4}N^{-1/2})$. However, \cite{raic2015multivariate} did not provide any proof for \eqref{eqn:raic}. \cite{wang2021rerandomization} proved a BEB for randomized experiments with a binary treatment using the coupling method, but with a dependence on $N$ slower than $N^{-1/2}$. The dependence on $H$ may be further improved, but this is beyond the scope of the current work.
\section{Proofs of the results in Appendix \ref{sec:general-BE}}\label{sec::proof-linear-permutational}
In this section, we prove the results in Appendix \ref{sec:general-BE}. Section \ref{sec:pre-lemma} presents several lemmas that are essential to the proofs. The main proofs start from Section \ref{sec:pf-linear-proj}.
\subsection{Lemmas}\label{sec:pre-lemma}
Lemma \ref{lem:cond-check} below gives the conditional moments of the exchangeable pair $(\Gamma,\Gamma')$ constructed in \eqref{eqn:rv-target} and \eqref{eqn:target-vec-prime}.
\begin{lemma}[Lemma 8 in \cite{chatterjee2007multivariate}]\label{lem:cond-check}
Construct an exchangeable pair $(\Gamma, \Gamma')$ based on \eqref{eqn:rv-target} and \eqref{eqn:target-vec-prime}. \begin{enumerate}[label = (\roman*)] \item We restate Lemma \ref{lemma-coupling-2}: \begin{align*}
{\mathbb{E}}\{\Gamma'-\Gamma\mid \pi\} = -\frac{2}{N-1}\Gamma . \end{align*}
\item For the $h$-th coordinate $(\Gamma_h, \Gamma_h')$, we have \begin{align*}
{\mathbb{E}}\{(\Gamma_h - \Gamma_h')^2\mid\pi\}
&= \frac{2(N+1)}{N(N-1)}\sum_{i=1}^N M_h(i,\pi(i))^2 + \frac{2}{N} + \frac{2}{N(N-1)}\Gamma_h^2 \\
&+ \frac{2}{N(N-1)}\sum_{i\neq j} M_h(i,\pi(j))M_h(j,\pi(i)). \end{align*}
\item For the $h$-th coordinate $(\Gamma_h, \Gamma_h')$ and $l$-th coordinate $(\Gamma_l, \Gamma_l')$, we have \begin{align*}
{\mathbb{E}}\{(\Gamma_h - \Gamma_h')(\Gamma_l - \Gamma_l')\mid\pi\}
&= \frac{2(N+1)}{N(N-1)}\sum_{i=1}^N M_h(i,\pi(i))M_l(i,\pi(i)) + \frac{2}{N(N-1)}\Gamma_h\Gamma_l \\
&+ \frac{2}{N(N-1)}\sum_{i\neq j} M_h(i,\pi(j))M_l(j,\pi(i)). \end{align*} \end{enumerate}
\end{lemma}
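Part (ii) of the lemma is an exact combinatorial identity and can be verified numerically for a fixed $\pi$ by averaging $D_h^2$ over all transpositions. A sketch assuming NumPy, with a hypothetical random matrix standardized to satisfy Condition \ref{cond:str-Mk} for a single coordinate (zero row and column sums and $\operatorname{Tr}(M_1^\top M_1) = N-1$, which produces the constant $2/N$ term):

```python
import itertools
import numpy as np

rng = np.random.default_rng(5)
N = 5
M1 = rng.standard_normal((N, N))
M1 = M1 - M1.mean(axis=1, keepdims=True) - M1.mean(axis=0, keepdims=True) + M1.mean()
M1 *= np.sqrt((N - 1) / (M1 ** 2).sum())   # standardize: Tr(M1^T M1) = N - 1

pi = list(rng.permutation(N))
gamma = sum(M1[i, pi[i]] for i in range(N))

# Left side: average of (Gamma - Gamma')^2 over all transpositions.
sq = []
for t1, t2 in itertools.combinations(range(N), 2):
    pip = pi.copy()
    pip[t1], pip[t2] = pip[t2], pip[t1]
    sq.append((sum(M1[i, pip[i]] for i in range(N)) - gamma) ** 2)
lhs = np.mean(sq)

# Right side: the closed form of part (ii) of the lemma.
diag = sum(M1[i, pi[i]] ** 2 for i in range(N))
cross = sum(M1[i, pi[j]] * M1[j, pi[i]]
            for i in range(N) for j in range(N) if i != j)
rhs = (2 * (N + 1) / (N * (N - 1)) * diag + 2 / N
       + 2 / (N * (N - 1)) * gamma ** 2 + 2 / (N * (N - 1)) * cross)
assert np.isclose(lhs, rhs)
```

The four terms of `rhs` map one-to-one onto the four terms of the displayed formula.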
Lemma \ref{lem:key-var-bds} below bounds the variances of linear permutational statistics.
\begin{lemma}\label{lem:key-var-bds} We have the following variance bounds for $N$ large enough. \begin{enumerate}[label = (\roman*)]
\item $\Var{\sum_{h=1}^H X_h}\le H\sum_{h=1}^H \Var{X_h}.$
\item If $|M_0(i,j)|\le B_N$, then
\begin{align}
&\Var{\sum_{i=1}^N M_0(i,\pi(i))} \notag\\
&= (N-1)^{-1}\sum_{i=1}^N\sum_{j=1}^N\{ M_0(i,j) - N^{-1}{M}_0(i,+) - N^{-1}{M}_0(+,j) + N^{-2}{M}_0(+,+)\}^2\notag\\
&\le 32NB_N^2.\label{eqn:var-bd}
\end{align}
\item Suppose $M_1=(a_{ij})$ and $M_2=(b_{ij})$ have zero column and row sums. If $|a_{ij}|\le B_N, |b_{ij}|\le B'_N$, then
\begin{align*}
\Var{\sum_{i\neq j}^N a_{i\pi(i)}b_{j\pi(j)}}\le 54 N^2B_N^2B_N'^2.
\end{align*}
\item Suppose $M_1=(a_{ij})$ and $M_2=(b_{ij})$ have zero column and row sums. If $|a_{ij}|\le B_N, |b_{ij}|\le B'_N$, then
\begin{align*}
\Var{\sum_{i\neq j}^N a_{i\pi(j)}b_{j\pi(i)} } \le 54 N^2B_N^2B_N'^2.
\end{align*}
\item Suppose $M_1=(a_{ij}), M_2=(b_{ij}), M_3=(c_{ij})$ all have zero column and row sums. If $|a_{ij}|\le B_N, |b_{ij}|\le B'_N, |c_{ij}|\le B''_N$, then
\begin{align*}
\Var{\sum_{i\neq j}^N a_{i\pi(i)}b_{j\pi(j)}c_{i\pi(j)} }\le 15N^3B_N^2B_N'^2B_N''^2.
\end{align*}
\item Suppose $M_1=(a_{ij}), M_2=(b_{ij}), M_3=(c_{ij})$ all have zero column and row sums. If $|a_{ij}|\le B_N, |b_{ij}|\le B'_N, |c_{ij}|\le B''_N$, then
\begin{align*}
\Var{\sum_{i\neq j}^N a_{i\pi(i)}b_{j\pi(i)}c_{i\pi(j)} } \le 15N^3B_N^2B_N'^2B_N''^2.
\end{align*} \end{enumerate} \end{lemma}
\begin{proof}[Proof of Lemma \ref{lem:key-var-bds}] \begin{enumerate}[label = (\roman*)]
\item This is a standard result by the Cauchy-Schwarz inequality.
\item This is due to the variance formula of linear permutational statistics. See Lemma \ref{lem:mean-var}.
\item We calculate
\begin{align*}
&{\mathbb{E}}\left\{\sum_{i\neq j}a_{i\pi(i)}b_{j\pi(j)}\right\}^2\\
&= {\mathbb{E}}\left\{\sum_{i,k}\sum_{j\neq i, l\neq k}a_{i\pi(i)}b_{j\pi(j)}a_{k\pi(k)}b_{l\pi(l)}\right\}\\
&= \frac{1}{N(N-1)}\sum_{i\neq j,m\neq n} \left\{a_{im}^2b_{jn}^2 + a_{in}b_{in}a_{jm}b_{jm}\right\}\\
& + \frac{1}{N(N-1)(N-2)}\sum_{i\neq j\neq k}\sum_{m\neq n\neq o} \left\{a_{im}^2b_{jn}b_{ko} + a_{im}a_{jn}b^2_{ko} + a_{im}b_{im}a_{jn}b_{ko} +
a_{im}b_{jn}a_{ko}b_{ko}\right\}\\
& +
\frac{1}{N(N-1)(N-2)(N-3)}\sum_{i\neq j\neq k \neq l}\sum_{m\neq n\neq o\neq p} \left\{
a_{im}b_{jn}a_{ko}b_{lp}\right\}\\
& = \text{I} + \text{II} + \text{III}.
\end{align*}
For I, we have
\begin{align}\label{eqn:3.I-bd}
N(N-1)\text{I} \le N^2(N-1)^2 \cdot 2B_N^2B_N'^2 = {2N^2(N-1)^2B_N^2B_N'^2}.
\end{align}
For II, using the property of zero column and row sums, we have
\begin{align}\label{eqn:3.II-bd}
N(N-1)(N-2)\text{II} \le 16N^2(N-1)^2B_N^2B_N'^2.
\end{align}
To see why \eqref{eqn:3.II-bd} is true, consider the first part of the summation:
\begin{align*}
&\left|\sum_{i\neq j\neq k}\sum_{m\neq n\neq o} a_{im}^2b_{jn}b_{ko}\right| \\
=&\left|\sum_{i\neq j\neq k}\sum_{m\neq n} a_{im}^2b_{jn}(-b_{km}-b_{kn})\right|\\
=&\left|\sum_{i\neq j}\sum_{m\neq n} a_{im}^2b_{jn}(b_{im}+b_{jm}+b_{in}+b_{jn})\right|\\
\le& 4N^2(N-1)^2B_N^2B_N'^2.
\end{align*}
Similar results hold for the other parts of the summation. Adding terms together, we obtain \eqref{eqn:3.II-bd}.
For III, using the zero column and row sum property again, we have
\begin{align}\label{eqn:3.III-bd}
N(N-1)(N-2)(N-3) \text{III} \le 36N^2(N-1)^2B_N^2B_N'^2.
\end{align}
Summing up \eqref{eqn:3.I-bd}--\eqref{eqn:3.III-bd}, we obtain
\begin{align*}
\Var{\sum_{i\neq j}^N a_{i\pi(i)}b_{j\pi(j)} } \le 54 N^2B_N^2B_N'^2.
\end{align*}
\item We calculate
\begin{align*}
&{\mathbb{E}}\left\{\sum_{i\neq j}a_{i\pi(j)}b_{j\pi(i)}\right\}^2\\
&= {\mathbb{E}}\left\{\sum_{i,k}\sum_{j\neq i, l\neq k}a_{i\pi(j)}b_{j\pi(i)}a_{k\pi(l)}b_{l\pi(k)}\right\}\\
&= \frac{1}{N(N-1)}\sum_{i\neq j,m\neq n} \left\{a_{im}^2b_{jn}^2 + a_{im}b_{im}a_{jn}b_{jn}\right\}\\
& + \frac{1}{N(N-1)(N-2)}\sum_{i\neq j\neq k}\sum_{m\neq n\neq o} \left\{a_{in}b_{jm}a_{io}b_{km} + a_{in}a_{jo}b_{jm}b_{kn} + a_{in}a_{km}b_{jm}b_{io} +
a_{in}a_{kn}b_{jm}b_{jo}\right\}\\
& +
\frac{1}{N(N-1)(N-2)(N-3)}\sum_{i\neq j\neq k \neq l}\sum_{m\neq n\neq o\neq p} \left\{
a_{in}a_{kp}b_{jm}b_{lo}\right\}\\
& = \text{I} + \text{II} + \text{III}.
\end{align*}
The rest of the analysis is nearly identical to part (iii). We omit the details.
\item We calculate
\begin{align*}
&{\mathbb{E}}\left\{\sum_{i\neq j}^N a_{i\pi(i)}b_{j\pi(j)}c_{i\pi(j)}\right\}^2\\
=&{\mathbb{E}}\left\{\sum_{i\neq j}\sum_{k\neq l} a_{i\pi(i)}b_{j\pi(j)}c_{i\pi(j)}a_{k\pi(k)}b_{l\pi(l)}c_{k\pi(l)}\right\}\\
=&\frac{1}{N(N-1)}\sum_{i\neq j}\sum_{m\neq n}\{a^2_{im}b^2_{jn}c^2_{in} + a_{im}b_{jn}c_{in}a_{jn}b_{im}c_{jm}\} \\
+& \frac{1}{N(N-1)(N-2)}\sum_{i\neq j\neq k}\sum_{m\neq n\neq o}
\{a_{im}^2b_{jn}c_{in}b_{ko}c_{io}+
a_{im}b_{jn}c_{in}a_{ko}b_{im}c_{km}\\ &\phantom{\frac{1}{N(N-1)(N-2)}\sum_{i\neq j\neq k}\sum_{m\neq n\neq o}}+
a_{im}b_{jn}c_{in}a_{jn}b_{ko}c_{jo} +
a_{im}b_{jn}c_{in}a_{ko}b_{jn}c_{kn}
\}\\
+& \frac{1}{N(N-1)(N-2)(N-3)}\sum_{i\neq j\neq k \neq l}\sum_{m\neq n\neq o\neq p} a_{im}b_{jn}c_{in}a_{ko}b_{lp}c_{kp}\\
=& \text{I} + \text{II} + \text{III}.
\end{align*}
For I, using the triangle inequality, we have
\begin{align}\label{eqn:5.I-bd}
N(N-1) \text{I} \le 2N^2(N-1)^2B_N^2B_N'^2B_N''^2.
\end{align}
For II, using the triangle inequality, we have
\begin{align}\label{eqn:5.II-bd}
N(N-1)(N-2) \text{II} \le 4N^2(N-1)^2(N-2)^2B_N^2B_N'^2B_N''^2.
\end{align}
For III, expanding along the indices $l$ and $o$, we have
\begin{align}\label{eqn:5.III-bd}
N(N-1)(N-2)(N-3) \text{III} \le 9N^6B_N^2B_N'^2B_N''^2.
\end{align}
Sum up \eqref{eqn:5.I-bd}--\eqref{eqn:5.III-bd} to get the final result.
\item We calculate
\begin{align*}
&{\mathbb{E}}\left\{\sum_{i\neq j}^N a_{i\pi(i)}b_{j\pi(i)}c_{i\pi(j)}\right\}^2\\
=&{\mathbb{E}}\left\{\sum_{i\neq j}\sum_{k\neq l} a_{i\pi(i)}b_{j\pi(i)}c_{i\pi(j)}a_{k\pi(k)}b_{l\pi(k)}c_{k\pi(l)}\right\}\\
=&\frac{1}{N(N-1)}\sum_{i\neq j}\sum_{m\neq n}\{a^2_{im}b^2_{jm}c^2_{in} + a_{im}b_{jm}c_{in}a_{jn}b_{in}c_{jm}\} \\
+& \frac{1}{N(N-1)(N-2)}\sum_{i\neq j\neq k}\sum_{m\neq n\neq o}
\{a_{im}^2b_{jm}c_{in}b_{km}c_{io}+
a_{im}b_{jm}c_{in}a_{ko}b_{io}c_{km}\\ &\phantom{\frac{1}{N(N-1)(N-2)}\sum_{i\neq j\neq k}\sum_{m\neq n\neq o}}+
a_{im}b_{jm}c_{in}a_{jn}b_{kn}c_{jo} +
a_{im}b_{jm}c_{in}a_{ko}b_{jo}c_{kn}
\}\\
+& \frac{1}{N(N-1)(N-2)(N-3)}\sum_{i\neq j\neq k \neq l}\sum_{m\neq n\neq o\neq p} a_{im}b_{jm}c_{in}a_{ko}b_{lo}c_{kp}\\
=& \text{I} + \text{II} + \text{III}.
\end{align*}
For I, using the triangle inequality, we have
\begin{align}\label{eqn:6.I-bd}
N(N-1) \text{I} \le 2N^2(N-1)^2B_N^2B_N'^2B_N''^2.
\end{align}
For II, using the triangle inequality, we have
\begin{align}\label{eqn:6.II-bd}
N(N-1)(N-2) \text{II} \le 4N^2(N-1)^2(N-2)^2B_N^2B_N'^2B_N''^2.
\end{align}
For III, expanding along the indices $l$ and $p$, we have
\begin{align}\label{eqn:6.III-bd}
N(N-1)(N-2)(N-3) \text{III} \le 9N^6B_N^2B_N'^2B_N''^2.
\end{align}
Sum up \eqref{eqn:6.I-bd}--\eqref{eqn:6.III-bd} to get the final result.
\end{enumerate} \end{proof}
\subsection{Proof of Lemma \ref{lem:mean-var}}
The proof follows from combining the permutational distribution of $P$ with matrix algebra. \begin{proof}[Proof of Lemma \ref{lem:mean-var}]
\textbf{Proof of \eqref{eqn:EP}.} Use the fact that each column of $P$ follows a uniform distribution over the canonical bases.
\textbf{Proof of \eqref{eqn:EPiPj}.} Use the fact that $P(\cdot,i)P(\cdot,j)^\top$ is uniformly distributed over all the $N(N-1)$ off-diagonal positions.
\textbf{Proof of \eqref{eqn:EGamma}.} \eqref{eqn:EGamma} follows from Lemma \ref{lem:mean-var}(i) and the linearity of $\trace{\cdot}$.
\textbf{Proof of \eqref{eqn:EGammakl}.} For \eqref{eqn:EGammakl}, we have \begingroup \allowdisplaybreaks \begin{align*}
{\mathbb{E}}\{\Gamma_h\Gamma_l\} &= {\mathbb{E}}\left\{\left(\sum_{i=1}^N M_h(i,\cdot) P(\cdot, i)\right)\left(\sum_{i=1}^N M_l(i,\cdot) P(\cdot, i)\right)\right\} \\
& = {\mathbb{E}}\left\{ \sum_{i=1,j=1}^N M_h(i,\cdot)P(\cdot,i)P(\cdot,j)^\top M_l(j,\cdot)^\top \right\} \\
& = \frac{1}{N}\sum_{i=1}^N M_h(i,\cdot)M_l(i,\cdot)^\top + \frac{1}{N(N-1)} \sum_{i \neq j}^N M_h(i,\cdot) ({\boldsymbol{1}}_{N\times N} - I_N) M_l(j,\cdot)^\top\\
& = \frac{1}{N-1}\sum_{i=1}^NM_h(i,\cdot)M_l(i,\cdot)^\top + \frac{1}{N(N-1)}\sum_{i\neq j}^N M_h(i,\cdot){\boldsymbol{1}}_{N\times N} M_l(j,\cdot)^\top\\
& - \frac{1}{N(N-1)}\left\{\sum_{i=1}^N M_h(i,\cdot)\right\} \left\{\sum_{i=1}^N M_l(i,\cdot)^\top\right\} \\
& = \frac{1}{N-1}\tr{M_h}{M_l} + \frac{1}{N(N-1)}\sum_{i\neq j} {M}_h(i,+){M}_l(j,+) - \frac{1}{N(N-1)} \sum_{k=1}^N{M}_h(+,k){M}_l(+,k)\\
& = \frac{1}{N-1}\tr{M_h}{M_l} + \frac{1}{N(N-1)}\sum_{i=1,j=1}^N {M}_h(i,+){M}_l(j,+)\\
& - \frac{1}{N(N-1)} \sum_{k=1}^N{M}_h(+,k){M}_l(+,k) - \frac{1}{N(N-1)} \sum_{k=1}^N{M}_h(k,+){M}_l(k,+)\\
& = \frac{1}{N-1}\tr{M_h}{M_l} + \frac{1}{N(N-1)}{M}_h(+,+){M}_l(+,+)\\
& - \frac{1}{N(N-1)} \sum_{k=1}^N{M}_h(+,k){M}_l(+,k) - \frac{1}{N(N-1)} \sum_{k=1}^N{M}_h(k,+){M}_l(k,+). \end{align*} \endgroup \end{proof}
\subsection{Proof of Lemma \ref{lem:reformulate}} \begin{proof}[Proof of Lemma \ref{lem:reformulate}] (i) By definition, \begin{align*} {\Gamma_h - {\mathbb{E}}\{\Gamma_h\}} &= \sum_{i=1}^N {M}_h(i,\pi(i)) - N^{-1}\sum_{i=1}^N\sum_{j=1}^N {M}_h(i,j)\\ &= \sum_{i=1}^N \left\{{M}_h(i,\pi(i)) - N^{-1}{M_h}(i,+) - N^{-1}{M_h}(+,\pi(i)) + N^{-2} {M_h}(+,+)\right\}. \end{align*} Now introduce a new matrix $M_h'$ with entries \begin{align}\label{eqn:centering-1}
{M}_h'(i,j) = {M}_h(i,j) - N^{-1}{M_h}(i,+) - N^{-1}{M_h}(+,j) + N^{-2} {M_h}(+,+). \end{align} Let $\widetilde{V} = V^{-1/2}$ and let $\widetilde{\Gamma} = \widetilde{V}(\Gamma - {\mathbb{E}}\{\Gamma\})$ with $\operatorname{Var}\{\widetilde{\Gamma}\} = I_H, ~{\mathbb{E}}\{\widetilde{\Gamma}\} = 0$. Define $M''_h = \sum_{l=1}^H \widetilde{V}_{hl}M'_l$. Because $M'_h$'s have zero row and column sums, we can verify that $M''_h$'s also satisfy: \begin{align*}
{M''_h}(i,+) = 0, ~{M''_h}(+,j) = 0, ~\forall~ i,j\in[N]. \end{align*}
Besides, \begin{align*}
\widetilde{\Gamma}_h = \sum_{l=1}^H \widetilde{V}_{hl}(\operatorname{Tr}(M'_lP) - {\mathbb{E}}\{\operatorname{Tr}(M'_lP)\}) = \operatorname{Tr}(M''_h P). \end{align*}
Hence, by Lemma \ref{lem:mean-var}, we have \begin{align}
{\mathbb{E}}\{\widetilde{\Gamma}_h\widetilde{\Gamma}_l\} &= \frac{1}{N-1}\tr{M''_h}{M''_l} + \frac{1}{N(N-1)}{M''_h}(+,+){M''_l}(+,+)\notag\\
& - \frac{1}{N(N-1)} \sum_{k=1}^N{M''_h}(+,k){M''_l}(+,k) - \frac{1}{N(N-1)} \sum_{k=1}^N{M''_h}(k,+){M''_l}(k,+)\notag\\
& = \frac{1}{N-1} \tr{M''_h}{M''_l}.\label{eqn:tGamma-hl-1} \end{align} Recall \begin{align}\label{eqn:tGamma-hl-2}
\E{\widetilde{\Gamma}_h\widetilde{\Gamma}_l} = \left\{
\begin{array}{cc}
1, & h=l; \\
0, & h\neq l.
\end{array}
\right. \end{align} Combining \eqref{eqn:tGamma-hl-1} and \eqref{eqn:tGamma-hl-2}, we conclude \begin{align*}
\frac{1}{N-1} \tr{M''_h}{M''_l} = \left\{
\begin{array}{cc}
1, & h=l; \\
0, & h\neq l.
\end{array}
\right. \end{align*} Therefore, Condition \ref{cond:str-Mk} holds for $M''_h$'s.
(ii)
For $i,j\in[N]$, define the vectors \begin{align*}
\boldsymbol{c}' = [M'_1(i,j),\ldots,M'_H(i,j)]^\top\in{\mathbb{R}}^H \end{align*} and \begin{align*}
\boldsymbol{c}'' = [M''_1(i,j),\ldots,M''_H(i,j)]^\top\in{\mathbb{R}}^H. \end{align*}
We have \begin{align*}
\max_{h=1, \cdots, H} |M''_h(i,j)| \le \|{\boldsymbol{c}}''\|_2 \le \varrho_{\min}(V)^{-1/2} \|{\boldsymbol{c}}'\|_2 \le \varrho_{\min}(V)^{-1/2} \sqrt{H} \max_{h=1,\ldots,H} |M'_h(i,j)|. \end{align*}
\end{proof}
\subsection{Proof of Theorem \ref{thm:linear-projection}}\label{sec:pf-linear-proj}
Proving Theorem \ref{thm:linear-projection} reduces to checking the conditions of Lemma \ref{lem:bolthausen-1984}.
\begin{proof}[Proof of Theorem \ref{thm:linear-projection}] We have \begin{align*}
b^\top \Gamma = \sum_{h=1}^H b_h\trace{M_h P} = \trace{\left(\sum_{h=1}^Hb_hM_h\right) P}. \end{align*} Define \begin{align*}
M' = \sum_{h=1}^Hb_hM_h. \end{align*} We can verify that the row sums and column sums of $M'$ are all zero. Also, using Condition \ref{cond:str-Mk}, \begin{align}\label{eqn:tildeM-Fnorm}
\tr{M'}{M'} = \sum_{h=1,l=1}^H b_hb_l\tr{M_h}{M_l} = (N-1)\sum_{h=1}^H b_h^2 = N-1. \end{align}
Applying Lemma \ref{lem:bolthausen-1984}, there exists an absolute constant $C>0$, such that \begin{align*}
\sup_{t\in{\mathbb{R}}}|{\mathbb{P}}\{b^\top\Gamma \le t\} - \Phi(t)| \le \frac{C}{N} \sum_{i,j}|M'(i,j)|^3 \le \frac{C(N-1)}{N}\max_{i,j\in [N]}|M'(i,j)| \le C\max_{i,j\in[N]} |M'(i,j)|. \end{align*}
\end{proof}
\subsection{Proof of Theorem \ref{thm:be-bounded}}
\begin{proof}[Proof of Theorem \ref{thm:be-bounded}]
We will apply Lemma \ref{lem:bounded-pair}. The key step is to determine the orders of $B_1,B_2,B_3$ in Lemma \ref{lem:bounded-pair}. One can upper bound $\Var{{\mathbb{E}}(\cdot\mid \Gamma)}$ by $\Var{{\mathbb{E}}(\cdot\mid \mathcal{F})}$ if $\sigma(\Gamma)\subset \mathcal{F}$. This is a standard trick in Stein's method and will be used without further mention.
Now we compute the quantities involved in Lemma \ref{lem:bounded-pair}. Recall we use the random transposition $\tau = (IJ)$ to construct exchangeable pairs. The $h$-th coordinate of $D = \Gamma' - \Gamma$ equals \begin{align*}
D_h = M_h(I,\pi(I)) + M_h(J,\pi(J)) - M_h(I,\pi(J)) - M_h(J,\pi(I)). \end{align*} Hence \begin{align*}
|D_h| \le 4 B_N, \quad |G_h| \le (N-1)B_N, \quad \|D\|_2 \le 4\sqrt{H}B_N, \quad \|G\|_2 \le (N-1)\sqrt{H}B_N. \end{align*}
To apply Lemma \ref{lem:bounded-pair}, we need to bound the following quantities: \begin{enumerate}[label = (\roman*)]
\item ${\mathbb{E}}\{\|D\|_2^2\mid \pi\}$ and ${\mathbb{E}}\{\|D\|_2^2\}$.
\item $B_1 = \sqrt{\Var{{\mathbb{E}}(\|D\|_2^2\mid \Gamma)}}$ and $B_2 = \sqrt{\sum_{h,l=1}^H \Var{{\mathbb{E}}(G_hD_l\mid \Gamma)}}$.
\item $B_3 = \sqrt{\sum_{h,l,m=1}^H \Var{{\mathbb{E}}(G_hD_lD_m\mid \Gamma)}}$. \end{enumerate}
\noindent\textbf{(i) Bound ${\mathbb{E}}\{\|D\|_2^2\mid \pi\}$ and ${\mathbb{E}}\{\|D\|_2^2\}$}.
By Lemma \ref{lem:cond-check}, \begin{align*}
{\mathbb{E}}\{\|D\|_2^2\mid \pi\}
&= \sum_{h=1}^H {\mathbb{E}}\{D_h^2\mid\pi\} = \sum_{h=1}^H {\mathbb{E}}\{(\Gamma_h - \Gamma_h')^2\mid\pi\}\\
&= \frac{2(N+1)}{N(N-1)}\sum_{h=1}^H \sum_{i=1}^N M_h(i,\pi(i))^2 + \frac{2H}{N} + \frac{2}{N(N-1)}\sum_{h=1}^H \Gamma_h^2 \\
&+ \frac{2}{N(N-1)}\sum_{h=1}^H \sum_{i\neq j} M_h(i,\pi(j))M_h(j,\pi(i))\\
&\le \frac{2(N+1)HB_N^2}{N-1} + \frac{2H}{N} + \frac{2HNB_N^2}{(N-1)} + {2HB_N^2} \le 12HB_N^2 + \frac{2H}{N}. \end{align*} This implies \begin{align*}
{\mathbb{E}}\{\|D\|_2^2\} = {\mathbb{E}}_{\pi}{\mathbb{E}}\{\|D\|_2^2\mid \pi\} \le 12HB^2_N + \frac{2H}{N}. \end{align*}
\noindent\textbf{(ii) Bound $B_1$ and $B_2$}.
We prove the following result: there exists a universal constant $C>0$, such that \begin{align*}
B_1\le CHN^{-1/2}B_N^2, ~ B_2 \le CHN^{1/2}B_N^2. \end{align*}
By Lemma \ref{lem:cond-check}, \begin{align*}
{\mathbb{E}}\{D_h^2\mid\pi\}
&= \frac{2(N+1)}{N(N-1)}\sum_{i=1}^N M_h(i,\pi(i))^2 + \frac{2}{N} + \frac{2}{N(N-1)}\Gamma_h^2 \\
&+ \frac{2}{N(N-1)}\sum_{i\neq j} M_h(i,\pi(j))M_h(j,\pi(i))\\
&= \frac{2(N+2)}{N(N-1)}\sum_{i=1}^N M_h(i,\pi(i))^2 + \frac{2}{N(N-1)}\sum_{i\neq j} M_h(i,\pi(i))M_h(j,\pi(j)) \\
&+ \frac{2}{N(N-1)}\sum_{i\neq j} M_h(i,\pi(j))M_h(j,\pi(i))+ \frac{2}{N}\\
& = \text{I} + \text{II} + \text{III} + \frac{2}{N}. \end{align*} For $h\neq l$, \begin{align*}
{\mathbb{E}}\{D_hD_l\mid\pi\}
&= \frac{2(N+1)}{N(N-1)}\sum_{i=1}^N M_h(i,\pi(i))M_l(i,\pi(i)) + \frac{2}{N(N-1)}\Gamma_h\Gamma_l \\
&+ \frac{2}{N(N-1)}\sum_{i\neq j} M_h(i,\pi(j))M_l(j,\pi(i))\\
& = \frac{2(N+2)}{N(N-1)}\sum_{i=1}^N M_h(i,\pi(i))M_l(i,\pi(i)) + \frac{2}{N(N-1)}\sum_{i\neq j} M_h(i,\pi(i))M_l(j,\pi(j)) \\
&+ \frac{2}{N(N-1)}\sum_{i\neq j} M_h(i,\pi(j))M_l(j,\pi(i))\\
& = \text{IV} + \text{V} + \text{VI}. \end{align*}
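The closed form for ${\mathbb{E}}\{D_h^2\mid\pi\}$ can be confirmed numerically by exact enumeration over the $N(N-1)$ ordered pairs $(I,J)$. The snippet below is an illustrative check only: \texttt{a} is a generic matrix with zero row and column sums, scaled so that $\sum_{i,j}a_{ij}^2 = N-1$ to mimic the standardization in Condition \ref{cond:str-Mk}.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 7
a = rng.standard_normal((N, N))
a -= a.mean(axis=0, keepdims=True)      # zero column sums
a -= a.mean(axis=1, keepdims=True)      # zero row sums (column sums stay zero)
a *= np.sqrt((N - 1) / (a ** 2).sum())  # standardize: sum of squares = N - 1

pi = rng.permutation(N)
A = a[np.arange(N), pi]                 # A_i = a_{i, pi(i)}
Gamma = A.sum()

# Left-hand side: average of D^2 over all ordered pairs I != J.
lhs = np.mean([(A[i] + A[j] - a[i, pi[j]] - a[j, pi[i]]) ** 2
               for i in range(N) for j in range(N) if i != j])

# Right-hand side: the closed form from Lemma lem:cond-check.
cross = sum(a[i, pi[j]] * a[j, pi[i]]
            for i in range(N) for j in range(N) if i != j)
rhs = (2 * (N + 1) / (N * (N - 1)) * (A ** 2).sum()
       + 2 / N
       + 2 / (N * (N - 1)) * Gamma ** 2
       + 2 / (N * (N - 1)) * cross)
assert np.isclose(lhs, rhs)
```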
For $B_1$, using Lemma \ref{lem:key-var-bds}(ii)--(iv), we have \begin{align*}
\Var{ \text{I} } &\le \frac{4(N+2)^2}{N^2(N-1)^2}\cdot 32 N B_N^4 \le \frac{256B_N^4}{N},\\
\Var{ \text{II} } &\le \frac{4}{N^2(N-1)^2}\cdot 54 N^2 B_N^4\le \frac{216B_N^4}{N},\\
\Var{ \text{III} } &\le \frac{4}{N^2(N-1)^2}\cdot 54 N^2 B_N^4\le \frac{216B_N^4}{N}. \end{align*} Now apply Lemma \ref{lem:key-var-bds}(i) to obtain \begin{align*}
B_1^2 &= {\Var{{\mathbb{E}}(\|D\|_2^2\mid \Gamma)}}
= {\Var{{\mathbb{E}}\left(\sum_{h=1}^H D_h^2\mid \Gamma\right)}}
\le {H\sum_{h=1}^H \Var{{\mathbb{E}}\left( D_h^2\mid \Gamma\right)}}
\le CH^2N^{-1}B_N^4. \end{align*}
Similarly, for $B_2$, we have \begin{align*}
B_2^2 = {\sum_{h,l=1}^H \Var{{\mathbb{E}}(G_hD_l\mid \Gamma)}}
= \left(\frac{N-1}{4}\right)^2{\sum_{h,l=1}^H \Var{{\mathbb{E}}\left(D_hD_l\mid \Gamma\right)}}
\le CH^2NB_N^4. \end{align*}
\noindent\textbf{(iii) Bound $B_3$}.
We prove the following result: there exists a universal constant $C>0$, such that \begin{align*}
B_3 \le C{H^{3/2}N^{1/2}B_N^3}. \end{align*}
For simplicity, we write $a_{ij} = M_h(i,j),b_{ij} = M_l(i,j)$ and $c_{ij} = M_m(i,j)$. Recall $G_h = (N-1)D_h/4 $. We have \begin{align}\label{eqn:DDD}
{\mathbb{E}}\{D_hD_lD_m\mid\pi\}
= {\mathbb{E}}\{&
(a_{I\pi(I)} + a_{J\pi(J)} -a_{I\pi(J)} -a_{J\pi(I)}) \\
&\cdot (b_{I\pi(I)} + b_{J\pi(J)} -b_{I\pi(J)} -b_{J\pi(I)})\notag\\
&\cdot (c_{I\pi(I)} + c_{J\pi(J)} -c_{I\pi(J)} -c_{J\pi(I)})\mid\pi\}.\notag \end{align} The expansion of \eqref{eqn:DDD} has $4^3=64$ terms, which can be characterized by the following categories of $a_{i_aj_a}b_{i_bj_b}c_{i_cj_c}$: \begin{itemize}
\item $a_{I\pi(I)}b_{I\pi(I)}c_{I\pi(I)}$. There are $2$ terms in total. We have
\begin{align*}
{\mathbb{E}}\{a_{I\pi(I)}b_{I\pi(I)}c_{I\pi(I)}\mid \pi\} = \frac{1}{N} \sum_{i=1}^N a_{i\pi(i)}b_{i\pi(i)}c_{i\pi(i)}.
\end{align*}
Because $|a_{i\pi(i)}b_{i\pi(i)}c_{i\pi(i)}|\le B_N^3$, by \eqref{eqn:var-bd}, we have
\begin{align*}
\Var{{\mathbb{E}}\{a_{I\pi(I)}b_{I\pi(I)}c_{I\pi(I)}\mid \pi\}} \le \frac{32NB_N^6}{N^2} = \frac{32B_N^6}{N}.
\end{align*}
\item $a_{I\pi(J)}b_{I\pi(J)}c_{I\pi(J)}$. There are $2$ terms in total. We have
\begin{align}\label{eqn:second-ctg}
{\mathbb{E}}\{a_{I\pi(J)}b_{I\pi(J)}c_{I\pi(J)}\mid \pi\} &= \frac{1}{N(N-1)} \sum_{i\neq j}^N a_{i\pi(j)}b_{i\pi(j)}c_{i\pi(j)} \notag\\
&= \frac{1}{N(N-1)}\sum_{j=1}^N \sum_{i\neq j} a_{i\pi(j)}b_{i\pi(j)}c_{i\pi(j)}.
\end{align}
\eqref{eqn:second-ctg} can be viewed as a univariate linear permutation statistic coming from the population matrix
\begin{align*}
d_{kl} = \sum_{m\neq k} a_{ml}b_{ml}c_{ml}.
\end{align*}
Because $|d_{kl}| \le (N-1)B_N^3$, by Lemma \ref{lem:key-var-bds} (ii), we have
\begin{align*}
\Var{{\mathbb{E}}\{a_{I\pi(J)}b_{I\pi(J)}c_{I\pi(J)}\mid \pi\}} &\le \frac{16N \cdot \{(N-1)B_N^3\}^2}{N^2(N-1)^2} \le \frac{32B_N^6}{N}.
\end{align*}
\item $a_{I\pi(I)}b_{I\pi(I)}c_{J\pi(I)}$. There are $6$ terms in total. We have
\begin{align*}
{\mathbb{E}}\{a_{I\pi(I)}b_{I\pi(I)}c_{J\pi(I)}\mid \pi\} &= \frac{1}{N(N-1)} \sum_{i\neq j}^N a_{i\pi(i)}b_{i\pi(i)}c_{j\pi(i)}\\
&= \frac{1}{N(N-1)} \sum_{i=1}^N\left\{\sum_{j\neq i} a_{i\pi(i)}b_{i\pi(i)}c_{j\pi(i)}\right\}\\
& = \frac{1}{N(N-1)} \sum_{i=1}^N \{-a_{i\pi(i)}b_{i\pi(i)}c_{i\pi(i)}\} \\
& \text{(since the column sums of $c_{ij}$ are all zero)}.
\end{align*}
Apply Lemma \ref{lem:key-var-bds} (ii) to obtain
\begin{align*}
\Var{{\mathbb{E}}\{a_{I\pi(I)}b_{I\pi(I)}c_{J\pi(I)}\mid \pi\}}\le \frac{16NB_N^6}{N^2(N-1)^2}\le\frac{32B_N^6}{N^3}.
\end{align*}
\item $a_{I\pi(I)}b_{I\pi(I)}c_{I\pi(J)}$. There are $6$ terms in total. This part is similar to the last one:
\begin{align*}
\Var{{\mathbb{E}}\{a_{I\pi(I)}b_{I\pi(I)}c_{I\pi(J)}\mid \pi\}}\le \frac{16NB_N^6}{N^2(N-1)^2}\le\frac{32B_N^6}{N^3}.
\end{align*}
\item $a_{I\pi(I)}b_{J\pi(I)}c_{J\pi(I)}$. There are $6$ terms in total. We have
\begin{align}\label{eqn:third-ctg}
{\mathbb{E}}\{a_{I\pi(I)}b_{J\pi(I)}c_{J\pi(I)}\mid \pi\} &= \frac{1}{N(N-1)} \sum_{i\neq j}^N a_{i\pi(i)}b_{j\pi(i)}c_{j\pi(i)}\notag\\
&= \frac{1}{N(N-1)} \sum_{i=1}^N \left\{\sum_{j\neq i}a_{i\pi(i)}b_{j\pi(i)}c_{j\pi(i)}\right\}.
\end{align}
\eqref{eqn:third-ctg} can be viewed as a univariate linear permutation statistic from a population matrix with entries
\begin{align*}
d_{kl} = a_{kl}\sum_{m\neq k}b_{ml}c_{ml}.
\end{align*}
Because $|d_{kl}| \le (N-1)B_N^3$, we have
\begin{align*}
\Var{{\mathbb{E}}\{a_{I\pi(I)}b_{J\pi(I)}c_{J\pi(I)}\mid \pi\}} \le \frac{16N \cdot \{(N-1)B_N^3\}^2}{N^2(N-1)^2} \le \frac{16B_N^6}{N}.
\end{align*}
\item $a_{I\pi(I)}b_{I\pi(J)}c_{I\pi(J)}$. There are $6$ terms in total. We can check (by using $\pi^{-1}$) that this term is similar to the last one:
\begin{align*}
\Var{{\mathbb{E}}\{a_{I\pi(I)}b_{I\pi(J)}c_{I\pi(J)}\mid \pi\}} \le \frac{16N \cdot \{(N-1)B_N^3\}^2}{N^2(N-1)^2} \le \frac{16B_N^6}{N}.
\end{align*}
\item $a_{I\pi(I)}b_{I\pi(I)}c_{J\pi(J)}$. There are $6$ terms in total.
Let $(d_{kl}) = (a_{kl}b_{kl})$ and $d^\star_{kl} = d_{kl} - d_{\cdot l} - d_{k\cdot} + d_{\cdot\cdot}$ be the centered version with $|d^\star_{kl}|\le 4B_N^2$. We have
\begin{align*}
{\mathbb{E}}\{a_{I\pi(I)}b_{I\pi(I)}c_{J\pi(J)}\mid \pi\} &= \frac{1}{N(N-1)} \sum_{i\neq j}^N a_{i\pi(i)}b_{i\pi(i)}c_{j\pi(j)}\\
&= \frac{1}{N(N-1)} \left\{\sum_{i\neq j}^N d^\star_{i\pi(i)}c_{j\pi(j)} + \sum_{i\neq j}^N (d_{\cdot\pi(i)} - d_{\cdot\cdot} )c_{j\pi(j)} + \sum_{i\neq j}^N d_{i\cdot}c_{j\pi(j)}\right\}\\
& = \text{I} + \text{II} + \text{III}.
\end{align*}
For I, by Lemma \ref{lem:key-var-bds} (iii), we know
\begin{align*}
\Var{\text{I}}\le \frac{54N^2 \cdot (4B_N^2)^2 \cdot (B_N^2)}{N^2(N-1)^2} \le \frac{864B_N^6}{(N-1)^2}.
\end{align*}
For II, by re-indexing $k=\pi(i),l=\pi(j)$, we have
\begin{align*}
\text{II} = \frac{1}{N(N-1)}\sum_{l=1}^N\sum_{k\neq l}(d_{\cdot k} - d_{\cdot\cdot} )c_{\pi^{-1}(l)l}.
\end{align*}
Let $e_{kl} = \sum_{m\neq l}(d_{\cdot m} - d_{\cdot\cdot} )c_{kl}$. By Lemma \ref{lem:key-var-bds} (ii) and the fact that $|e_{kl}|\le 2(N-1)B_N^3$, we have
\begin{align*}
\Var{\text{II}} \le \frac{32N\{2(N-1)B_N^3\}^2}{N^2(N-1)^2} \le \frac{128B_N^6}{N}.
\end{align*}
For III, the analysis is similar to II:
\begin{align*}
\Var{\text{III}} \le \frac{32N\{(N-1)B_N^3\}^2}{N^2(N-1)^2} \le \frac{32B_N^6}{N}.
\end{align*}
Since $\Var{\text{I}}$ is of lower order than $\Var{\text{II}}$ and $\Var{\text{III}}$, we have
\begin{align*}
\Var{{\mathbb{E}}\{a_{I\pi(I)}b_{I\pi(I)}c_{J\pi(J)}\mid \pi\}} \le \frac{600B_N^6}{N}.
\end{align*}
\item $a_{I\pi(J)}b_{I\pi(J)}c_{J\pi(I)}$. There are $6$ terms in total.
The analysis for this part is similar to the last part, except that we apply Lemma \ref{lem:key-var-bds} (iv) instead of (iii) to bound the variance of a term analogous to term I in the previous part. Since the upper bounds in (iii) and (iv) are the same, we obtain
\begin{align*}
\Var{{\mathbb{E}}\{a_{I\pi(J)}b_{I\pi(J)}c_{J\pi(I)}\mid \pi\}}\le \frac{600B_N^6}{N}.
\end{align*}
\item $a_{I\pi(I)}b_{J\pi(J)}c_{I\pi(J)}$. There are $12$ terms in total. We have
\begin{align*}
{\mathbb{E}}\{a_{I\pi(I)}b_{J\pi(J)}c_{I\pi(J)}\mid \pi\} &= \frac{1}{N(N-1)} \sum_{i\neq j}^N a_{i\pi(i)}b_{j\pi(j)}c_{i\pi(j)}.
\end{align*}
By Lemma \ref{lem:key-var-bds} (v), we have
\begin{align*}
\Var{{\mathbb{E}}\{a_{I\pi(I)}b_{J\pi(J)}c_{I\pi(J)}\mid \pi\}} \le \frac{15N^3B_N^6}{N^2(N-1)^2} \le \frac{30B_N^6}{N}.
\end{align*}
\item $a_{I\pi(I)}b_{J\pi(I)}c_{I\pi(J)}$. There are $12$ terms in total. We have
\begin{align*}
{\mathbb{E}}\{a_{I\pi(I)}b_{J\pi(I)}c_{I\pi(J)}\mid \pi\} &= \frac{1}{N(N-1)} \sum_{i\neq j}^N a_{i\pi(i)}b_{j\pi(i)}c_{i\pi(j)}.
\end{align*}
By Lemma \ref{lem:key-var-bds} (vi), we have
\begin{align*}
\Var{{\mathbb{E}}\{a_{I\pi(I)}b_{J\pi(I)}c_{I\pi(J)}\mid \pi\}} \le \frac{15N^3B_N^6}{N^2(N-1)^2} \le \frac{30B_N^6}{N}.
\end{align*}
\end{itemize}
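The zero-column-sum reduction used repeatedly in the bullet points (e.g.\ for the $a_{I\pi(I)}b_{I\pi(I)}c_{J\pi(I)}$ category) admits a quick numerical confirmation. The snippet below is illustrative only; \texttt{a}, \texttt{b}, \texttt{c} are generic stand-ins for $M_h, M_l, M_m$, and only the zero column sums of \texttt{c} are enforced, since only that property is used.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 6
# Generic stand-ins for M_h, M_l, M_m.
a, b, c = (rng.standard_normal((N, N)) for _ in range(3))
c -= c.mean(axis=0, keepdims=True)      # zero column sums for c

pi = rng.permutation(N)
lhs = sum(a[i, pi[i]] * b[i, pi[i]] * c[j, pi[i]]
          for i in range(N) for j in range(N) if i != j)
# Summing c_{j, pi(i)} over j != i gives -c_{i, pi(i)} because column sums vanish.
rhs = -sum(a[i, pi[i]] * b[i, pi[i]] * c[i, pi[i]] for i in range(N))
assert np.isclose(lhs, rhs)
```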
Now summing up the bullet points above, we have \begin{align*}
\Var{{\mathbb{E}}\{D_hD_lD_m\mid\pi\}} \le \frac{CB_N^6}{N}, \end{align*} for some absolute constant $C>0$.
\noindent\textbf{(iv) Summary of (i)--(iii).}
As a brief review, we have proved the following results: when $N$ is large, \begin{enumerate}[label = (\arabic*)]
\item $\|G\|_2\le \alpha = C(N-1)H^{1/2}B_N$, $\|D\|_2 \le \beta = CH^{1/2}B_N$.
\item ${\mathbb{E}}\{\|D\|_2^2\} \le C(HB_N^2 + HN^{-1})$.
\item $B_1 = \sqrt{\Var{{\mathbb{E}}(\|D\|_2^2\mid \Gamma)}} \le CHN^{-1/2}B_N^2$.
\item $B_2 = \sqrt{\sum_{h,l=1}^H \Var{{\mathbb{E}}(G_hD_l\mid \Gamma)}}\le CHN^{1/2}B_N^2$.
\item $B_3 = \sqrt{\sum_{h,l,m=1}^H \Var{{\mathbb{E}}(G_hD_lD_m\mid \Gamma)}} \le CH^{3/2}N^{1/2}B_N^3$. \end{enumerate} Using Lemma \ref{lem:bounded-pair} with (1)--(5), we have \begin{align*}
&\sup_{A\in{\mathcal{A}}}|{\mathbb{P}}\{\Gamma\in A\} - {\mathbb{P}}\{\xi_H\in A\}| \\
&\le C(H^{7/4} \alpha \mathbb{E}\|D\|_2^2 + H^{1/4}\beta + H^{7/8}\alpha^{1/2}B_1^{1/2} + H^{3/8}B_2 + H^{1/8}B_3^{1/2})\\
&\le CH^{13/4}NB_N(B_N^2 + N^{-1}) + C H^{3/4}B_N + CH^{13/8}N^{1/4}B_N^{3/2} \\
&+ CH^{11/8}N^{1/2}B_N^2 + CH^{7/8}N^{1/4}B_N^{3/2}. \end{align*}
\end{proof}
\section{Additional results for randomization-based causal inference}\label{sec:additional}
Appendix C presents some additional results for randomization-based causal inference. Section \ref{sec:BE-quad} presents general BEBs for quadratic forms, which are derived from the results in Appendix \ref{sec:general-BE}. Section \ref{sec:high-moments} establishes bounds on the higher-order moments of the sample averages. Sections \ref{sec:tail-uniform} and \ref{sec:tail-non-uniform} provide more delicate tail bounds for linear combinations of sample variances. Section \ref{sec:vec-outcome} extends the BEBs to vector potential outcomes.
\subsection{BEBs for quadratic forms}\label{sec:BE-quad}
In Section \ref{sec:PCLT-projection}, we proved the BEBs for linear projections of multivariate permutational statistics in randomized experiments. In this subsection, we study a more general type of distance: \begin{align}\label{eqn:dcA}
d_{\mathcal{A}}(\widetilde{\gamma}, \xi_H) = \sup_{A\in{\mathcal{A}}} |\Prob{\widetilde{\gamma} \in A} - \Prob{\xi_H \in A}|, \end{align*} where ${\mathcal{A}}$ is the collection of all Borel convex sets, $\widetilde{\gamma}$ is defined as \eqref{eqn:standard-est}, and $\xi_H$ is a random vector in ${\mathbb{R}}^H$ with the standard multivariate Normal distribution. ${\mathcal{A}}$ covers many specific convex classes. For example, the set of ellipsoids defined as follows is a subset of ${\mathcal{A}}$: \begin{align}\label{eqn:ellipsoids}
{\mathcal{A}}_{2}(\lambda, t) = \left\{\gamma\in{\mathbb{R}}^H: \sum_{h=1}^H\lambda_h^2\gamma_h^2 \le t \right\}, ~\lambda_h > 0,~ t > 0. \end{align} \eqref{eqn:ellipsoids} is useful for deriving BEBs for quadratic forms of $\widetilde{\gamma}$.
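To see why \eqref{eqn:ellipsoids} is relevant for quadratic forms: after rotating by the eigenvectors of a positive definite $W$, the set $\{\gamma:\gamma^\top W\gamma\le t\}$ is exactly an ellipsoid ${\mathcal{A}}_2(\lambda,t)$ with $\lambda_h^2$ the eigenvalues of $W$. A small numerical illustration (arbitrary $W$, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(3)
H = 4
G = rng.standard_normal((H, H))
W = G @ G.T + H * np.eye(H)             # an arbitrary positive definite W
lam2, U = np.linalg.eigh(W)             # W = U diag(lam2) U^T, lam2 > 0

gamma = rng.standard_normal(H)
# gamma^T W gamma equals the ellipsoid form evaluated at the rotated point
# U^T gamma, so {gamma^T W gamma <= t} = {U^T gamma in A_2(lam, t)}.
quad = gamma @ W @ gamma
rotated = U.T @ gamma
ellipsoid = (lam2 * rotated ** 2).sum()
assert np.isclose(quad, ellipsoid)
```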
Recall $\widetilde{\gamma}$ from \eqref{eqn:tilde-gamma}. We study the asymptotic distribution of the following random variable: \begin{align*}
T = \widetilde{\gamma}^\top W \widetilde{\gamma}, \end{align*} where $W$ is a given positive definite matrix in ${\mathbb{R}}^{H\times H}$. We also write $T_0$ as the counterpart with $\widetilde{\gamma}$ replaced by Normal vectors $\xi_H$: \begin{align}\label{eqn:E-V-TZ}
T_0 = \xi_H^\top W \xi_H,~ \xi_H \sim {\mathcal{N}}(0,I_H), \end{align} with \begin{equation}\label{eqn::moments-chi-squares} \E{T_0} = \trace{W}, \quad \Var{T_0} = 2\trace{W^2}. \end{equation} For the problems in the main paper, we need to deal with the class of ellipsoids \eqref{eqn:ellipsoids}, which are convex. Applying Theorem \ref{thm:be-bounded}, we obtain the following result: \begin{theorem}[Permutational BEBs for quadratic forms]\label{thm:quad-clt} We have \begin{align}
& \sup_{t\in{\mathbb{R}}} |{\mathbb{P}}(T\le t)-{\mathbb{P}}(T_0\le t)|\notag\\
& \le C(H^{13/4}NB_N(B_N^2 + N^{-1}) + H^{3/4}B_N + H^{13/8}N^{1/4}B_N^{3/2}
+ H^{11/8}N^{1/2}B_N^2),\label{eqn:quad-clt} \end{align} where \begin{align}\label{eqn:BN}
B_N = \varrho_{\min}(V_{{{\widehat{\gamma}}}})^{-1/2}\sqrt{H} \max_{h\in[H]}\max_{i\in[N],q\in[Q]}|f_{qh}N_{q}^{-1}(Y_i(q)- \overline{Y}(q))|. \end{align} When $B_N \le C H^{1/2}N^{-1/2}$, we have \begin{align}\label{eqn:BN-special}
\sup_{t\in{\mathbb{R}}} |{\mathbb{P}}(T\le t)-{\mathbb{P}}(T_0\le t)| \le \frac{CH^{19/4}}{N^{1/2}}. \end{align} \end{theorem}
\begin{remark} The proof of Theorem \ref{thm:quad-clt} is an application of the bound over convex sets in Theorem \ref{thm:be-bounded}. In general, the bound \eqref{eqn:quad-clt} might not be sharp for quadratic forms. While the BEB achieves the rate $N^{-1/2}$, which is analogous to the i.i.d.\ scenario, the rate in $H$ might not be optimal. However, we do not pursue the best possible bound here. In many cases, \eqref{eqn:quad-clt} suffices for establishing asymptotics. For example, in factorial experiments, if we focus on lower-order effects, $H$ is approximately of order $\log(N)$. Therefore, we can justify the asymptotic Normality as $N\to\infty$ using \eqref{eqn:quad-clt}. \end{remark}
We discuss how to bound $B_N$ to obtain a usable result from \eqref{eqn:quad-clt}. The following lemma covers nearly uniform designs and non-uniform designs:
\begin{lemma}\label{lem:BN} Assume Conditions \ref{cond:well-conditioned} and \ref{condition::proper}. \begin{enumerate}[label = (\roman*)]
\item For nearly uniform design with either replicated or unreplicated arms, there exists a constant $C = C(c,c',\underline{c},\overline{c})$ that only depends on the constants in Definition \ref{def:uniform-design} and Condition \ref{condition::proper}, such that
\begin{align*}
\sup_{t\in{\mathbb{R}}} |{\mathbb{P}}(T\le t)-{\mathbb{P}}(T_0\le t)| \le C\frac{\max_{ q\in[Q]} M_N(q)^3 }{ \{ \min_{q\in[Q]} S(q,q) \}^{3/2}}\cdot\frac{H^{19/4}}{N^{1/2}}.
\end{align*}
Moreover, under Condition \ref{cond:easy-spec}, we have
\begin{align*}
B_N \le \frac{2c^{1/2}\underline{c}^{-1}\nu}{(\overline{c}^{-1}\underline{S})^{1/2}} \cdot \left(\frac{H}{QN_0}\right)^{1/2}
\end{align*}
and the BEB \eqref{eqn:BN-special} holds.
\item For non-uniform design, assume \eqref{eqn:small-H}. There exists some constant $C = C(c,c',\underline{c},\overline{c})$ that only depends on the constants in Definition \ref{def:non-uniform-design} and Condition \ref{condition::proper}, such that
\begin{align*}
\sup_{t\in{\mathbb{R}}} |{\mathbb{P}}(T\le t)-{\mathbb{P}}(T_0\le t)| \le C \frac{ \max_{ q\in[Q]} M_N(q)^3 }{ \{ \overline{n}^{-1}\min_{q\in{\mathcal{Q}}_\textsc{s}} S(q,q) \} ^{3/2}}\cdot\frac{H^{19/4}}{N^{1/2}}. \end{align*}
Moreover, under Condition \ref{cond:easy-spec} and $N=O(Q)$, we have
\begin{align*}
B_N \le \frac{2vc H^{1/2}}{(c'\overline{n}^{-1}\underline{S})^{1/2}Q^{1/2}}
\end{align*} and the BEB \eqref{eqn:BN-special} holds.
\end{enumerate}
\end{lemma}
We have a thorough understanding of the distribution of $T_0$. By eigenvalue decomposition, $$ T_0 \sim \sum_{h=1}^H \varrho_h {\widetilde{\xi}}_{0,h}^2 \lesssim \varrho_1\chi^2(H),
$$
where $\varrho_1\ge\cdots\ge\varrho_H$ are the eigenvalues of $W$ and ${\widetilde{\xi}}_{0,1},\ldots,{\widetilde{\xi}}_{0,H}$ are i.i.d.\ $\mathcal{N}(0,1)$. Therefore, $T_0$ is stochastically dominated by $\varrho_1\chi^2(H)$. When $H$ is fixed, the asymptotic distribution of $T$ follows immediately. When $H$ diverges, we need to further use the asymptotic distribution for sums of independent random variables based on the Lindeberg--L\'{e}vy CLT and classical BEBs. Corollary \ref{cor:quad-clt-v2} below summarizes the results.
\begin{corollary}[Limiting distribution of the quadratic form]\label{cor:quad-clt-v2} Let $N\rightarrow \infty$. Assume the upper bound in \eqref{eqn:quad-clt} vanishes: \begin{align*}
CH^{13/4}NB_N(B_N^2 + N^{-1}) + C H^{3/4}B_N + CH^{13/8}N^{1/4}B_N^{3/2}
+ CH^{11/8}N^{1/2}B_N^2 \to 0. \end{align*} \begin{enumerate}
\item If $H$ is fixed, then $
T \rightsquigarrow T_0 .
$
\item If $H$ diverges, then
\begin{align*}
\frac{T - \operatorname{Tr}( W)}{\sqrt{2\operatorname{Tr}( W^2)}} \rightsquigarrow {\mathcal{N}}(0,1).
\end{align*} \end{enumerate} \end{corollary}
\subsection{High order moments of ${\widehat{Y}}$} \label{sec:high-moments} In this subsection, we present some delicate characterizations of the higher-order moments of the sample average ${\widehat{Y}}$, which are crucial for the proofs of our main results and might be of independent interest for other problems.
\begin{lemma}[High order moments of ${\widehat{Y}}$]\label{lem:high-moment} Assume complete randomization and Condition \ref{cond:moments}. \begin{enumerate}[label=(\roman*)]
\item ${\mathbb{E}}\{({\widehat{Y}}_q - \overline{Y}(q))^2\} \le \frac{C\Delta^2}{N_q} $;
\item ${\mathbb{E}}\{({\widehat{Y}}_q - \overline{Y}(q))^4\} \le \frac{C\Delta^4}{N_q^2} $;
\item $\Cov{({\widehat{Y}}_q - \overline{Y}(q))^2}{({\widehat{Y}}_{q'} - \overline{Y}(q'))^2} \le \frac{C(N_q + N_{q'})\Delta^4}{N_qN_{q'}(N-1)} $. \end{enumerate} \end{lemma}
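Underlying Lemma \ref{lem:high-moment}(i) is the classical fact that, under complete randomization, the group receiving level $q$ is a simple random sample, so $\Var{{\widehat{Y}}_q} = (1-N_q/N)S(q,q)/N_q$ exactly. For a tiny population this can be confirmed by enumerating all samples; the code below is an illustration with arbitrary numbers, not tied to the paper's data.

```python
import itertools
import numpy as np

# Tiny population of potential outcomes Y_i(q) for one arm q.
Y = np.array([1.3, -0.7, 2.1, 0.4, -1.1])
N, N_q = len(Y), 2
S_qq = Y.var(ddof=1)                    # finite-population variance S(q,q)

# Exact variance of the sample mean over all size-N_q subsets
# (complete randomization makes the treated group a simple random sample).
means = [np.mean(Y[list(s)]) for s in itertools.combinations(range(N), N_q)]
exact_var = np.mean([(m - Y.mean()) ** 2 for m in means])

assert np.isclose(exact_var, (1 - N_q / N) * S_qq / N_q)
```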
\begin{lemma}[High order moments under unreplicated designs]\label{lem:moments-U} Assume the potential outcomes are centered: $\overline{Y}(q) = 0$ for all $q\in[Q]$. Assume complete randomization and Condition \ref{cond:moments}. For the unreplicated design in Definition \ref{def:non-uniform-design}, there exists a universal constant $C>0$, such that \begin{enumerate}[label = (\roman*)]
\item for $q_1\in[Q]$, $\Var{Y_{q_1}^2} = (1-N^{-1})S_{Y^2}(q_1,q_1) \le C\Delta^4$;
\item for $q_1\neq q_2$, $\Cov{Y_{q_1}^2}{Y_{q_2}^2} = -N^{-1}S_{Y^2}(q_1,q_2)$ and $|\Cov{Y_{q_1}^2}{Y_{q_2}^2}| = N^{-1}|S_{Y^2}(q_1,q_2)| \le \frac{C\Delta^4}{N} $;
\item for $q_1\neq q_2$, $ |\Cov{Y_{q_1}^2}{Y_{q_1}Y_{q_2}}| \le \frac{C\Delta^4}{N}$;
\item for $q_1\neq q_2\neq q_3$, $ |\Cov{Y_{q_1}^2}{Y_{q_2}Y_{q_3}}| \le \frac{C\Delta^4}{N}$;
\item for $q_1\neq q_2\neq q_3 \neq q_4$, $ |\Cov{Y_{q_1}Y_{q_2}}{Y_{q_3}Y_{q_4}}| \le \frac{C\Delta^4}{N^2}$. \end{enumerate} \end{lemma}
We assume the potential outcomes are centered in Lemma \ref{lem:moments-U} to simplify the formulas. Without this assumption, all results in Lemma \ref{lem:moments-U} hold if we subtract the means of the potential outcomes from the corresponding observed outcomes.
\subsection{Tail probability of variance estimation for nearly uniform design} \label{sec:tail-uniform}
For an arbitrary set of indices ${\mathcal{Q}} \subset [Q]$, define \begin{align*}
{\widehat{v}} = \sum_{q\in{\mathcal{Q}}}w_q N_q^{-1} \widehat{S}(q,q) \end{align*} if $N_q \geq 2$ for all $q \in {\mathcal{Q}}.$ Lemma \ref{lem:tail} below gives the tail probability of ${\widehat{v}}$.
\begin{lemma}[Tail probability of variance estimation]\label{lem:tail} Consider the nearly uniform design satisfying Definition \ref{def:uniform-design}. Assume Condition \ref{cond:moments} and $\min_{q\in[Q]}~N_q \ge 2 $. Assume $(w_q)_{q\in[Q]}$ is a sequence of bounded real numbers: \begin{align*}
\max_{q\in[Q]}|w_q| \le \overline{w}. \end{align*} Then there exists a universal constant $C>0$, such that
\begin{align*}
{\mathbb{P}}\left\{\left|{\widehat{v}} - \E{{\widehat{v}}}\right|\ge t\right\} \le \frac{C\overline{c}\underline{c}^{-4} \overline{w}^2 |{\mathcal{Q}}| N_0^{-3}\Delta^4}{t^2}. \end{align*} \end{lemma}
\subsection{Tail probability of variance estimation for unreplicated arms} \label{sec:tail-non-uniform}
Recall the notation in Section \ref{sec:var-unreplicate}. Define \begin{align}
\widehat{v} = \sum_{{\langle g \rangle}\in{{\mathcal{G}}}}\sum_{q\in{\langle g \rangle}}{w_q} \left (Y_q - {\widehat{Y}}_{{\langle g \rangle}}\right)^2 .
\label{eqn:var-est-3-U} \end{align}
Lemma \ref{lem:hv-U} below gives the tail probability of $ \widehat{v} $.
\begin{lemma}[Analysis of $\widehat{v}$ under unreplicated arms]\label{lem:hv-U}
Assume Conditions \ref{cond:moments} and \ref{cond:bounded-bgv}. Assume $(w_q)_{q\in{\mathcal{Q}}_{\textsc{u}}}$ is a sequence of bounded real numbers: \begin{align*}
\max_{q\in{\mathcal{Q}}_{\textsc{u}}}|w_q| \le \overline{w}. \end{align*}
Then there exists a universal constant $C>0$, such that \begin{align*}
\Prob{|\widehat{v} - \E{{\widehat{v}}}|\ge t}
\le \frac{C\overline{w}^2(\Delta^4+\Delta^2\zeta^2) N_{\textsc{u}}}{t^2}. \end{align*} \end{lemma}
\subsection{Extension to vector potential outcomes} \label{sec:vec-outcome} In some settings, we are interested in vector potential outcomes. \cite{li2017general} proved some CLTs for vector outcomes. For binary treatments, \cite{wang2021rerandomization} proved some BEBs based on the coupling method. However, the general theory for BEBs is still incomplete.
Let $\{{\boldsymbol{Y}}_i(q)\in{\mathbb{R}}^p:i\in[N],q\in[Q]\}$ be a collection of potential outcomes. Let ${\boldsymbol{F}}_1,\ldots,{\boldsymbol{F}}_Q$ be $Q$ coefficient matrices in ${\mathbb{R}}^{H\times p}$. Define $\gamma = \sum_{q=1}^Q {\boldsymbol{F}}_q \overline{{\boldsymbol{Y}}}(q)$, and the moment estimator is ${\widehat{\gamma}} = \sum_{q=1}^Q {\boldsymbol{F}}_q \widehat{{\boldsymbol{Y}}}_q$. \cite{li2017general} calculated the mean and covariance of ${\widehat{\gamma}}$: \begin{gather*}
{\mathbb{E}}\{{\widehat{\gamma}}\} = \sum_{q=1}^Q {\boldsymbol{F}}_q\overline{{\boldsymbol{Y}}}(q),\quad
\text{Var}({\widehat{\gamma}}) = \sum_{q=1}^Q N_q^{-1}{\boldsymbol{F}}_q{\boldsymbol{S}}(q,q){\boldsymbol{F}}_q^\top - N^{-1}{\boldsymbol{S}}_{\boldsymbol{F}} := {\boldsymbol{V}}_{\widehat{\gamma}}, \end{gather*} where \begin{gather*}
{\boldsymbol{S}}(q,q') = (N-1)^{-1}\sum_{i=1}^N ({\boldsymbol{Y}}_i(q) - \overline{{\boldsymbol{Y}}}(q))({\boldsymbol{Y}}_i(q') - \overline{{\boldsymbol{Y}}}(q'))^\top,~ q,q'\in[Q],\\
{\boldsymbol{S}}_{\boldsymbol{F}} = (N-1)^{-1} \sum_{i=1}^N (\gamma_i - \overline{\gamma})(\gamma_i - \overline{\gamma})^\top, \quad
\gamma_i = \sum_{q=1}^Q{\boldsymbol{F}}_q{\boldsymbol{Y}}_i(q), \quad
\overline{\gamma} = N^{-1}\sum_{i=1}^N \gamma_i. \end{gather*}
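Since complete randomization over $N=6$ units with two arms has only $\binom{6}{3}=20$ equally likely assignments, the mean and covariance formulas above can be confirmed by exact enumeration. The snippet below is an illustrative check with arbitrary simulated potential outcomes and coefficient matrices, not part of the paper's derivation.

```python
import itertools
import numpy as np

rng = np.random.default_rng(5)
N, Q, p, H = 6, 2, 2, 2
Y = rng.standard_normal((Q, N, p))      # potential outcomes Y_i(q)
F = rng.standard_normal((Q, H, p))      # coefficient matrices F_q
N_q = [3, 3]

Ybar = Y.mean(axis=1)                   # population means, shape (Q, p)
S = lambda q, qp: (Y[q] - Ybar[q]).T @ (Y[qp] - Ybar[qp]) / (N - 1)
gam_i = np.einsum('qhp,qip->ih', F, Y)  # gamma_i = sum_q F_q Y_i(q), (N, H)
S_F = (gam_i - gam_i.mean(0)).T @ (gam_i - gam_i.mean(0)) / (N - 1)

# Exact enumeration over all C(6,3) = 20 complete-randomization assignments.
ests = []
for treated in itertools.combinations(range(N), N_q[0]):
    g1 = list(treated)
    g2 = [i for i in range(N) if i not in treated]
    hatY = [Y[0][g1].mean(axis=0), Y[1][g2].mean(axis=0)]
    ests.append(sum(F[q] @ hatY[q] for q in range(Q)))
ests = np.array(ests)

V_formula = (sum(F[q] @ S(q, q) @ F[q].T / N_q[q] for q in range(Q))
             - S_F / N)
assert np.allclose(ests.mean(axis=0), sum(F[q] @ Ybar[q] for q in range(Q)))
assert np.allclose(np.cov(ests.T, bias=True), V_formula)
```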
Define \begin{align*}
\breve{{\boldsymbol{Y}}}_i(q) = {\boldsymbol{Y}}_i(q)-\overline{{\boldsymbol{Y}}}(q) \text{ for all } q\in[Q]. \end{align*} Theorem \ref{thm:be-proj-standard-vec} below gives BEBs for projections of the standardized ${\widehat{\gamma}}$. \begin{theorem}[BEB for projections of the standardized ${\widehat{\gamma}}$]\label{thm:be-proj-standard-vec} Let \begin{align*}
\widetilde{\gamma} = \{{\boldsymbol{V}}_{\widehat{\gamma}}\}^{-1/2}({\widehat{\gamma}} - {\mathbb{E}}\{{\widehat{\gamma}}\}). \end{align*}
Assume complete randomization. (i) There exists a universal constant $C>0$, such that for any $b\in{\mathbb{R}}^H$ with $\|b\|_2 = 1$, we have \begin{align*}
\left|{\mathbb{P}}\{b^\top\widetilde{\gamma} \le t\} - \Phi(t)\right| \le C\max_{i\in[N],q\in[Q]}\left|b^\top {\boldsymbol{V}}_{\widehat{\gamma}}^{-1/2}N_{q}^{-1}{\boldsymbol{F}}_{q}\breve{{\boldsymbol{Y}}}_i({q}) \right|. \end{align*} (ii) If there exists $\sigma_F \ge 1$, such that \begin{align}\label{eqn:well-conditioned-vec}
\sum_{q=1}^Q N_q^{-1}{\boldsymbol{F}}_q {\boldsymbol{S}}(q,q) {\boldsymbol{F}}_q^\top \preceq \sigma^2_F {\boldsymbol{V}}_{\widehat{\gamma}}, \end{align} then \begin{align}\label{eqn:uniform-be-vec}
&\sup_{b\in{\mathbb{R}}^H,\|b\|_2 =1}\sup_{t\in{\mathbb{R}}}\left|{\mathbb{P}}\{b^\top\widetilde{\gamma} \le t\} - \Phi(t)\right| \notag\\
\le& C \max_{i\in[N],q\in[Q]} \min\left\{2 {\sigma_F} \sqrt{N_q^{-1}\breve{{\boldsymbol{Y}}}_i(q)^\top {\boldsymbol{S}}(q,q)^{-1} \breve{{\boldsymbol{Y}}}_i(q)},~ \frac{ \|{\boldsymbol{F}}_q\|_{2,1}\cdot N_q^{-1}\|\breve{{\boldsymbol{Y}}}_i(q)\|_\infty}{\sqrt{\varrho_{\min}\{ {\boldsymbol{V}}_{\widehat{\gamma}}\}}}\right\}. \end{align}
\end{theorem}
Theorem \ref{thm:be-proj-standard-vec} extends Theorem \ref{thm:be-proj-standard} to vector potential outcomes. If $p=1$, Theorem \ref{thm:be-proj-standard-vec} recovers Theorem \ref{thm:be-proj-standard}. The novel part of the extension is choosing the appropriate vector and matrix norms in the upper bound for the vector potential outcomes $\breve{{\boldsymbol{Y}}}_i(q)$'s and the coefficient matrices ${\boldsymbol{F}}_q$'s. The proof provides more insights into these choices. Moreover, we can derive many corollaries from Theorem \ref{thm:be-proj-standard-vec} as in the main paper. To avoid repetition, we omit the details.
\section{Proofs of the results in the main paper and Appendix \ref{sec:additional}}\label{sec:main-proof}
\subsection{Proof of Theorem \ref{thm:be-proj-standard}}
The proof of Theorem \ref{thm:be-proj-standard} is based on Theorem \ref{thm:linear-projection}. There are two key steps: (i) formulate $\widetilde{\gamma}$ as a linear permutational statistic that satisfies the conditions of Theorem \ref{thm:linear-projection}; (ii) find explicit bounds for the BEB in Theorem \ref{thm:linear-projection}.
\begin{proof}[Proof of Theorem \ref{thm:be-proj-standard}] Recall \begin{align}\label{eqn:standard-est}
\widetilde{\gamma} = (V_{\widehat{\gamma}})^{-1/2}({\widehat{\gamma}} - \gamma) = \Var{F^\top{\widehat{Y}}}^{-1/2}(F^\top{\widehat{Y}} - {\mathbb{E}}\{F^\top{\widehat{Y}}\}). \end{align} \textbf{Step 1: Reformulate $\widetilde{\gamma}$ as a multivariate linear permutational statistic.}
We show that, there exist population matrices $M''_1,\ldots, M''_H$ that satisfy Condition \ref{cond:str-Mk}, such that $\widetilde{\gamma} = \left(\trace{M''_h P}\right)_{h=1}^H$.
\textit{1. Construction of $M''_h$'s.} Define \begin{align}\label{eqn:center-PO}
\breve{Y}_i(q) = Y_i(q) - \overline{Y}(q), ~\breve{\tau}_{hi} = N^{-1}\sum_{q'=1}^Q f_{q'h} \breve{Y}_i(q') . \end{align} For each $i,j$, define \begin{align*}
M_h'(i,j) = N_q^{-1} f_{qh}\breve{Y}_i(q) - \breve{\tau}_{hi}, ~ \sum_{q'=1}^{q-1} N_{q'} + 1\le j \le \sum_{q'=1}^{q} N_{q'} \end{align*} such that $M'_h$ is the centered version of
\begin{align}\label{eqn:potentials} M_{h} =
\bordermatrix{
& & Z = 1 & \cdots & Z = q & { \cdots } & Z = Q \cr 1 & & f_{1h}N_{1}^{-1}Y_1(1)\cdot{\boldsymbol{1}}^\top_{N_{1}} & \cdots & f_{qh}N_{q}^{-1}Y_1(q)\cdot{\boldsymbol{1}}^\top_{N_{q}} & \cdots & f_{Qh}N_{Q}^{-1}Y_1(Q)\cdot{\boldsymbol{1}}^\top_{N_{Q}} \cr 2 & & f_{1h}N_{1}^{-1}Y_2(1)\cdot{\boldsymbol{1}}^\top_{N_{1}} & \cdots & f_{qh}N_{q}^{-1}Y_2(q)\cdot{\boldsymbol{1}}^\top_{N_{q}} & \cdots & f_{Qh}N_{Q}^{-1}Y_2(Q)\cdot{\boldsymbol{1}}^\top_{N_{Q}} \cr
\cdots & & \cdots & \cdots & \cdots & \cdots \cr N & & f_{1h}N_{1}^{-1}Y_N(1)\cdot{\boldsymbol{1}}^\top_{N_{1}} & \cdots & f_{qh}N_{q}^{-1}Y_N(q)\cdot{\boldsymbol{1}}^\top_{N_{q}} & \cdots & f_{Qh}N_{Q}^{-1}Y_N(Q)\cdot{\boldsymbol{1}}^\top_{N_{Q}} \cr
}. \end{align}
Observe that \begin{align}\label{eqn:hgamma-vecM}
{\widehat{\gamma}} - \gamma = \left(\trace{M_1'P}, \dots,\trace{M_H'P}\right)^\top =
\begin{pmatrix}
\{\myvec{M'_1}\}^\top\\
\vdots\\
\{\myvec{M'_H}\}^\top
\end{pmatrix}
\myvec{P}. \end{align}
Construct $M''_h$'s as follows: \begin{align}\label{eqn:vecM}
\begin{pmatrix}
\{\myvec{M''_1}\}^\top\\
\vdots\\
\{\myvec{M''_H}\}^\top
\end{pmatrix}
=
V_{{\widehat{\gamma}}}^{-1/2}
\begin{pmatrix}
\{\myvec{M'_1}\}^\top\\
\vdots\\
\{\myvec{M'_H}\}^\top
\end{pmatrix}. \end{align} Combining \eqref{eqn:vecM} and \eqref{eqn:hgamma-vecM}, we can show $\widetilde{\gamma} = \left(\trace{M_1''P}, \dots, \trace{M_H''P}\right)^\top$. The next step is to show $M''_h$'s satisfy Condition \ref{cond:str-Mk}.
\textit{2. Verify Condition \ref{cond:str-Mk}.} To verify that the $M_h''$'s have zero row and column sums, notice that summing the $j$-th column (or row) corresponds to a linear mapping from ${\mathbb{R}}^{N\times N}$ to ${\mathbb{R}}$ defined by the trace inner product: \begin{align*}
\sum_{i=1}^N M''_h(i,j) = \trace{{M_h''}^{\top} T_j} \text{ with } T_j = (0,\dots,\underbrace{{\boldsymbol{1}}_N}_{\text{column $j$}},\dots,0). \end{align*} Given that the $M'_h$'s are row and column centered, we can use \eqref{eqn:vecM} to show that \begin{align*}
\begin{pmatrix}
\{\myvec{M''_1}\}^\top\\
\vdots\\
\{\myvec{M''_H}\}^\top
\end{pmatrix}\myvec{T_j}
=
V_{{\widehat{\gamma}}}^{-1/2}
\begin{pmatrix}
\{\myvec{M'_1}\}^\top\\
\vdots\\
\{\myvec{M'_H}\}^\top
\end{pmatrix}\myvec{T_j} = 0. \end{align*}
To show $M''_h$'s are standardized and mutually orthogonal, we notice \begin{align}\label{eqn:V-tgamma-1}
\Var{\widetilde{\gamma}} = I_H. \end{align} Now using Lemma \ref{lem:mean-var}(ii), we have \begin{align}\label{eqn:V-tgamma-2}
\Var{\widetilde{\gamma}} = \left(\frac{1}{N-1}\tr{M''_h}{M''_l}\right)_{h,l\in[H]}. \end{align} Comparing \eqref{eqn:V-tgamma-1} and \eqref{eqn:V-tgamma-2}, we obtain the desired conclusion.
\vskip 2mm \noindent\textbf{Step 2: Apply Theorem \ref{thm:linear-projection} by finding explicit bounds for the BEB.} \vskip 2mm
Apply Theorem \ref{thm:linear-projection} to obtain that: for any $b\in{\mathbb{R}}^H$ with $\|b\|_2 = 1$, we have \begin{align}\label{eqn:BE-M}
\sup_{t\in{\mathbb{R}}}|{\mathbb{P}}\{b^\top\widetilde{\gamma} \le t\} - \Phi(t)| \le C{\max_{i,j\in[N]} \left|\sum_{h=1}^Hb_hM''_h(i,j)\right|}. \end{align}
Each column of the matrix \eqref{eqn:potentials} corresponds to a treatment group $q$. For ease of presentation, it is convenient to highlight this connection with notation $q_j$, meaning the $j$-th column is constructed based on potential outcomes from treatment level $q_j$.
Based on \eqref{eqn:vecM}, we have \begin{align}\label{eqn:bM}
\left|\sum_{h=1}^Hb_hM''_h(i,j)\right| &=
\left|b^\top V_{{\widehat{\gamma}}}^{-1/2}
\begin{pmatrix}
N_{q_j}^{-1} f_{q_j1}\breve{Y}_i({q_j}) - \breve{\tau}_{1i} \\
\vdots\\
N_{q_j}^{-1} f_{{q_j}H}\breve{Y}_i({q_j}) - \breve{\tau}_{Hi}
\end{pmatrix}\right|\\
& =
\left|\underbrace{b^\top V_{{\widehat{\gamma}}}^{-1/2}
\begin{pmatrix}
N_{q_j}^{-1} f_{q_j1}\breve{Y}_i({q_j}) \\
\vdots\\
N_{q_j}^{-1} f_{{q_j}H}\breve{Y}_i({q_j})
\end{pmatrix}}_{\text{term I}}
-
\underbrace{b^\top V_{{\widehat{\gamma}}}^{-1/2}
\begin{pmatrix}
\breve{\tau}_{1i} \\
\vdots \\
\breve{\tau}_{Hi}
\end{pmatrix}}_{\text{term II}}\right|. \end{align}
From the definition \eqref{eqn:center-PO}, term II is the average of term I over $j\in[N]$. Therefore, if we can bound term I for all $i,j$, then we can also bound term II by the triangle inequality. We now use two ways to bound term I.
\textbf{First Bound for term I:} For $b\in{\mathbb{R}}^H$ with $\|b\|_2 = 1$, construct $b_0 = V_{{\widehat{\gamma}}}^{ - 1/2} b / \| V_{{\widehat{\gamma}}}^{ - 1/2} b \|_2 \in{\mathbb{R}}^H $ with $\|b_0\|_2 = 1$. We can verify that
\begin{align*}
b = \frac{V_{{\widehat{\gamma}}}^{1/2}b_0}{\sqrt{b_0^\top V_{{\widehat{\gamma}}} b_0}}. \end{align*}
We have \begin{align*}
\left|b^\top V_{{\widehat{\gamma}}}^{-1/2}
\begin{pmatrix}
N_{q_j}^{-1} f_{{q_j}1}\breve{Y}_i({q_j}) \\
\vdots\\
N_{q_j}^{-1} f_{{q_j}H}\breve{Y}_i({q_j})
\end{pmatrix}\right|
&=
\left|b^\top V_{{\widehat{\gamma}}}^{-1/2}
\begin{pmatrix}
f_{{q_j}1} \\
\vdots\\
f_{{q_j}H}
\end{pmatrix} \cdot {N_{q_j}^{-1}\breve{Y}_i({q_j})}\right|\\
&=
\left|b_0^\top
\begin{pmatrix}
f_{{q_j}1} \\
\vdots\\
f_{{q_j}H}
\end{pmatrix} \right| \cdot \left|\frac{N_{q_j}^{-1}\breve{Y}_i({q_j})}{\sqrt{b_0^\top V_{{\widehat{\gamma}}} b_0}}\right|\\
&=
\left|F({q_j}, \cdot)b_0\right| \cdot \left|\frac{N_{q_j}^{-1}\breve{Y}_i({q_j})}{\sqrt{b_0^\top V_{{\widehat{\gamma}}} b_0}}\right|. \end{align*}
To get a uniform bound, we need to bound $\left|F({q_j}, \cdot)b_0\right|$ and $b_0^\top V_{{\widehat{\gamma}}} b_0$. We can show \begin{align*}
\left|F({q_j}, \cdot)b_0\right| \le \|F({q_j}, \cdot)\|_2 , \quad
{b_0^\top V_{{\widehat{\gamma}}} b_0} \ge {\varrho_{\min}\{V_{{\widehat{\gamma}}}\}}. \end{align*}
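Both inequalities follow from $\|b_0\|_2 = 1$: the first is the Cauchy--Schwarz inequality, and the second is the Rayleigh-quotient characterization of the smallest eigenvalue,
\begin{align*}
b_0^\top V_{{\widehat{\gamma}}} b_0 \ge \varrho_{\min}\{V_{{\widehat{\gamma}}}\}\|b_0\|_2^2 = \varrho_{\min}\{V_{{\widehat{\gamma}}}\}.
\end{align*}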
Hence, \begin{align}\label{eqn:first-bd}
\left|F({q_j}, \cdot)b_0\right| \cdot \left|\frac{N_{q_j}^{-1}\breve{Y}_i({q_j})}{\sqrt{b_0^\top V_{{\widehat{\gamma}}} b_0}}\right| \le \frac{ \|F({q_j}, \cdot)\|_2\cdot N_{q_j}^{-1}|Y_i({q_j})-\overline{Y}({q_j})|}{\sqrt{\varrho_{\min}\{V_{{\widehat{\gamma}}}\}}}. \end{align}
\textbf{Second bound for term I:} We have \begin{align}
\left|b^\top V_{{\widehat{\gamma}}}^{-1/2}
\begin{pmatrix}
N_{q_j}^{-1} f_{{q_j}1}\breve{Y}_i({q_j}) \\
\vdots\\
N_{q_j}^{-1} f_{{q_j}H}\breve{Y}_i({q_j})
\end{pmatrix}\right|
&=
\left|b^\top V_{{\widehat{\gamma}}}^{-1/2}
\begin{pmatrix}
f_{{q_j}1} \\
\vdots\\
f_{{q_j}H}
\end{pmatrix} \cdot \sqrt{N_{q_j}^{-1}S({q_j},{q_j})} \cdot \frac{N_{q_j}^{-1}\breve{Y}_i({q_j})}{\sqrt{N_{q_j}^{-1}S({q_j},{q_j})}}\right|\notag\\
&\le
\left|b^\top V_{{\widehat{\gamma}}}^{-1/2}
\begin{pmatrix}
f_{{q_j}1} \\
\vdots\\
f_{{q_j}H}
\end{pmatrix} \cdot \sqrt{N_{q_j}^{-1}S({q_j},{q_j})}\right| \cdot \left|\frac{N_{q_j}^{-1}\breve{Y}_i({q_j})}{\sqrt{N_{q_j}^{-1}S({q_j},{q_j})}}\right|\notag\\
&\le
\left\|b^\top V_{{\widehat{\gamma}}}^{-1/2}
F^\top \diag{N_q^{-1}S(q,q)}^{1/2}\right\|_\infty \cdot \left|\frac{N_{q_j}^{-1}\breve{Y}_i({q_j})}{\sqrt{N_{q_j}^{-1}S({q_j},{q_j})}}\right|. \label{eqn:part2-bd1} \end{align} The infinity norm is upper bounded by the $\ell_2$ norm: \begin{align}
&\left\|b^\top V_{{\widehat{\gamma}}}^{-1/2}
F^\top \diag{N_q^{-1}S(q,q)}^{1/2}\right\|_\infty \notag\\
\le &\left\|b^\top V_{{\widehat{\gamma}}}^{-1/2}
F^\top \diag{N_q^{-1}S(q,q)}^{1/2}\right\|_2\notag\\
= & \sqrt{b^\top V_{{\widehat{\gamma}}}^{-1/2}
F^\top \diag{N_q^{-1}S(q,q)}F V_{{\widehat{\gamma}}}^{-1/2}b}\notag\\
\le &\sqrt{b^\top V_{{\widehat{\gamma}}}^{-1/2}
(\sigma_F^2V_{{\widehat{\gamma}}}) V_{{\widehat{\gamma}}}^{-1/2}b}
\see{by Condition \ref{cond:well-conditioned}} \notag\\
\le & \sigma_F. \label{eqn:part2-bd2} \end{align} Combining \eqref{eqn:part2-bd1} and \eqref{eqn:part2-bd2}, we have \begin{align}\label{eqn:second-bd}
\left|\sum_{h=1}^Hb_hM''_h(i,j)\right|\le 2 {\sigma_F} \left|\frac{ \breve{Y}_i({q_j})}{\sqrt{N_{q_j} S({q_j},{q_j})}}\right|. \end{align}
Combining \eqref{eqn:first-bd} and \eqref{eqn:second-bd}, we have \begin{align}\label{eqn:combined-bd}
\left|\sum_{h=1}^Hb_hM''_h(i,j)\right|\le 2\min\left\{ {\sigma_F} \left|\frac{ Y_i({q_j})-\overline{Y}({q_j})}{\sqrt{N_{q_j} S({q_j},{q_j})}}\right|, \frac{ \|F({q_j}, \cdot)\|_2\cdot N_{q_j}^{-1}|Y_i({q_j})-\overline{Y}({q_j})|}{\sqrt{\varrho_{\min}\{V_{{\widehat{\gamma}}}\}}}\right\}. \end{align}
Now we can take maximum over $i,j\in[N]$ in \eqref{eqn:combined-bd} and use \eqref{eqn:BE-M} to conclude the proof. \end{proof}
\subsection{Proof of Theorem \ref{thm:quad-be-nearly-uniform}} The proof is an application of Lemma \ref{lem:BN} (i).
\subsection{Proof of Lemma \ref{lem:suff-conds}}
\begin{proof}[Proof of Lemma \ref{lem:suff-conds}]
(i) Suppose the individual causal effects are constant. Then \begin{align*}
F^\top \diag{N_q^{-1}S(q,q)}F = V_{\widehat{\gamma}}. \end{align*}
(ii) Suppose the condition number of the correlation matrix $V_{{\widehat{Y}}}^\star$ corresponding to $V_{\widehat{Y}}$ is upper bounded by $\sigma^2$. Because the eigenvalues of the correlation matrix sum to its trace $Q$, we have \begin{align*}
Q = \sum_{q=1}^Q \varrho_{q}(V_{{\widehat{Y}}}^\star) \le Q\cdot \varrho_{\max}(V^\star_{{\widehat{Y}}}) \le \sigma^2 Q\cdot \varrho_{\min}(V_{{\widehat{Y}}}^\star), \end{align*} which implies $\varrho_{\min}(V_{{\widehat{Y}}}^\star) \ge \sigma^{-2}.$ Let $D = \diag{(N_q^{-1} - N^{-1})S(q,q)}$. Then \begin{align*}
V_{\widehat{\gamma}} &= F^\top V_{{\widehat{Y}}} F \\
& = F^\top D^{1/2} V_{{\widehat{Y}}}^\star D^{1/2} F\\
& \succeq F^\top D^{1/2} (\sigma^{-2} I_Q) D^{1/2} F \\
& \succeq \sigma^{-2}F^\top \diag{(N_q^{-1} - N^{-1})S(q,q)} F \\
& \succeq c\sigma^{-2}F^\top \diag{N_q^{-1} S(q,q)} F\\
&\see{using $N_q \le (1-c)N$, which gives $N_q^{-1} - N^{-1} = N_q^{-1}(1 - N_q/N) \ge cN_q^{-1}$}. \end{align*} \end{proof}
\commenting{ \subsection{Proof of Corollary \ref{cor:non-uniform-bound}} \begin{proof}[Proof of Corollary \ref{cor:non-uniform-bound}] The insight for this proof is to apply the bound \eqref{eqn:uniform-be} according to different subset of arms. Under the conditions of Theorem \ref{thm:be-proj-standard}, \begin{align*}
& \sup_{b\in{\mathbb{R}}^H,\|b\|_2 =1}\sup_{t\in{\mathbb{R}}}\left|{\mathbb{P}}\{b^\top \widetilde{\gamma} \le t\} - \Phi(t)\right|\\
\le & C \max_{i\in[N],q\in[Q]} \min\left\{{\sigma_F} \left|\frac{ Y_i(q)-\overline{Y}(q)}{\sqrt{N_q S(q,q)}}\right|, \frac{ \|F(q, \cdot)\|_2 \cdot N_q^{-1}|Y_i(q)-\overline{Y}(q)|}{\sqrt{\varrho_{\min}\{F^\top V_{\widehat{Y}} F\}}}\right\}, \end{align*} which is bounded by the maximum of following two parts: \begin{itemize}
\item For those arms in ${\mathcal{Q}}_1$, we keep the first term in the upper bound \eqref{eqn:uniform-be}:
\begin{align*}
\max_{i\in[N],q\in{\mathcal{Q}}_1} \sigma_F \left|\frac{ Y_i(q)-\overline{Y}(q)}{\sqrt{N_q S(q,q)}}\right|;
\end{align*}
\item For those arms in ${\mathcal{Q}}_2$, we apply the second term in the upper bound \eqref{eqn:uniform-be}:
\begin{align*}
\max_{i\in[N],q\in{\mathcal{Q}}_2} \frac{ \|F(q, \cdot)\|_2 \cdot N_q^{-1}|Y_i(q)-\overline{Y}(q)|}{\sqrt{\varrho_{\min}\{F^\top V_{\widehat{Y}} F\}}}.
\end{align*} \end{itemize} Hence we conclude the proof.
\end{proof} }
\subsection{Proof of Corollary \ref{cor:uniform-design-be}} \begin{proof}[Proof of Corollary \ref{cor:uniform-design-be}] It suffices to further control term II of the upper bound in \eqref{eqn:uniform-be}: \begin{align*}
\frac{\|F(q, \cdot)\|_2\cdot N_q^{-1}|Y_i(q)-\overline{Y}(q)|}{\sqrt{\varrho_{\min}\{\Var{F^\top{\widehat{Y}}}\}}}. \end{align*}
Because $F(q, \cdot)$ has $H$ entries, $\|F(q, \cdot)\|_2 \le \sqrt{H}\|F\|_\infty$. By Definition \ref{def:uniform-design}, $N_q^{-1} \le \underline{c}^{-1}N_0^{-1}$. Under Condition \ref{cond:well-conditioned}, \begin{align*}
\varrho_{\min}\left( \Var{F^\top{\widehat{Y}}} \right )
\ge& \sigma_F^{-2} \varrho_{\min}\{F^\top \diag{N_q^{-1}S(q,q)}F\} \\
\ge & \sigma_F^{-2} \varrho_{\min}\{F^\top F\} \min_{q\in[Q]}\left\{N_q^{-1}S(q,q)\right\} \\
\ge & \overline{c}^{-1}N_0^{-1}\sigma_F^{-2} \varrho_{\min}\{F^\top F\} \min_{q\in[Q]} S(q,q) . \end{align*} Now \eqref{eqn:uniform-design-be} is obtained by using \eqref{eq::contrast-matrix} and plugging in these results. \end{proof}
\subsection{Proof of Corollary \ref{cor:non-uniform-design-be}}
The key idea is to find explicit bounds on the BEB given by Theorem \ref{thm:be-proj-standard} for non-uniform designs. We partition the arms into ${\mathcal{Q}}_{\textsc{s}} \cup {\mathcal{Q}}_{\textsc{l}}$, and apply the two parts of the general BEB in \eqref{eqn::terms1and2-be-proj-standard} to these two groups respectively.
\begin{proof}[Proof of Corollary \ref{cor:non-uniform-design-be}] Recall the bound \eqref{eqn::terms1and2-be-proj-standard} in Theorem \ref{thm:be-proj-standard}. The two parts in the upper bound shall be applied to different categories of arms from Definition \ref{def:non-uniform-design}. Because each $N_q$ is large for $q\in{\mathcal{Q}}_{\textsc{l}} $, we keep the first part of \eqref{eqn:uniform-be} for ${\mathcal{Q}}_{\textsc{l}} $: \begin{align}\label{eqn:bd-QL}
\max_{i\in[N], q\in {\mathcal{Q}}_{\textsc{l}}} {\sigma_F} \left|\frac{ Y_i(q)-\overline{Y}(q)}{\sqrt{N_q S(q,q)}}\right|. \end{align}
For the small groups in $ {\mathcal{Q}}_{\textsc{s}} $, we apply the second part of \eqref{eqn:uniform-be}. First, we have $N_q^{-1} \le 1$. Besides, under Condition \ref{cond:well-conditioned}, \begin{align}\label{eqn:bd-QS}
\varrho_{\min}\left( \Var{F^\top{\widehat{Y}}} \right) &\ge \sigma_F^{-2} \varrho_{\min}\{F^\top \diag{N_q^{-1}S(q,q)}_{q\in[Q]} F\}\notag\\
&\ge \sigma_F^{-2} \varrho_{\min}\left\{F_{\textsc{s}}^\top \diag{N_q^{-1}S(q,q)}_{q\in {\mathcal{Q}}_{\textsc{s}}}
F_{\textsc{s}}
\right\}\notag\\
& \ge \sigma_F^{-2} \min_{q\in {\mathcal{Q}}_{\textsc{s}}} \{N_q^{-1}S(q,q)\}\varrho_{\min}\{ F_{\textsc{s}}^\top F_{\textsc{s}}\} \notag\\
& \ge \sigma_F^{-2}\overline{n}^{-1} \min_{q\in {\mathcal{Q}}_{\textsc{s}}} \{S(q,q)\}\varrho_{\min}\{ F_{\textsc{s}}^\top F_{\textsc{s}}\}. \end{align} Hence for $q\in{\mathcal{Q}}_{\textsc{s}}$, we keep the second part of \eqref{eqn:uniform-be} and use \eqref{eqn:bd-QS} to obtain upper bound: \begin{align*}
\frac{\underline{c}^{-1}\overline{n}^{-1} \sigma_F\|F(q, \cdot)\|_2 \cdot |Y_i(q)-\overline{Y}(q)|}{ (\overline{n}^{-1}\min_{q\in {\mathcal{Q}}_{\textsc{s}}} \{S(q,q)\}\varrho_{\min}\{ F_{\textsc{s}}^\top F_{\textsc{s}}\})^{1/2}}. \end{align*}
Under Condition \ref{condition::proper}, we have $$
\|F(q,\cdot)\|_2 \le cQ^{-1}\sqrt{H}
$$
and
\begin{align*}
\varrho_{\min}\{ F_{\textsc{s}}^\top F_{\textsc{s}}\} &= \varrho_{\min}\{F^\top F - F_{\textsc{l}}^\top F_{\textsc{l}}\}\\
&\ge \varrho_{\min}\{F^\top F\} - \varrho_{\max}\{F_{\textsc{l}}^\top F_{\textsc{l}}\} \\
&\ge c'Q^{-1} - c^2H|{\mathcal{Q}}_{\textsc{l}}|Q^{-2}
\end{align*}
because $\varrho_{\max}\{F_{\textsc{l}}^\top F_{\textsc{l}}\} \le \trace{F_{\textsc{l}}^\top F_{\textsc{l}}} = c^2{H|{\mathcal{Q}}_{\textsc{l}}|Q^{-2}} $. Hence, \begin{align}\label{eqn:F-2-over-sqrt-rho}
\frac{\|F(q,\cdot)\|_2}{(\varrho_{\min}\{ F_{\textsc{s}}^\top F_{\textsc{s}}\})^{1/2}} \le \sqrt{\frac{c^2H}{c'Q - c^2H|{\mathcal{Q}}_{\textsc{l}}|}}. \end{align}
Because we assumed $Q \ge 2(c^2/c')H|{\mathcal{Q}}_{\textsc{l}}|$, we have $c^2H|{\mathcal{Q}}_{\textsc{l}}| \le c'Q/2$ and hence $c'Q - c^2H|{\mathcal{Q}}_{\textsc{l}}| \ge c'Q/2$, so \eqref{eqn:F-2-over-sqrt-rho} implies \begin{align}\label{eqn:F-2-over-sqrt-rho-2}
\frac{\|F(q,\cdot)\|_2}{(\varrho_{\min}\{ F_{\textsc{s}}^\top F_{\textsc{s}}\})^{1/2}} \le \sqrt{\frac{2c^2H}{c'Q}}. \end{align}
Putting \eqref{eqn:bd-QL}, \eqref{eqn:bd-QS} and \eqref{eqn:F-2-over-sqrt-rho-2} into \eqref{eqn:uniform-be} concludes the proof.
\end{proof}
\subsection{Proof of Theorem \ref{thm:uniform-var}}\label{sec:pf-uniform-var} \begin{proof}[Proof of Theorem \ref{thm:uniform-var}]
\begin{enumerate}[label=(\roman*)]
\item This is a well-known result, so we omit the details.
\item
For the stochastic order in $L_\infty$ norm, we shall apply Lemma \ref{lem:tail} with ${\mathcal{Q}} = [Q]$. We have \begin{align*}
{\widehat{V}}_{{\widehat{\gamma}}}(h,h') &= \sum_{q \in {\mathcal{Q}}} F(h,q)F(h',q) N_q^{-1} \widehat{S}(q,q)\\
&= \sum_{q \in {\mathcal{Q}}} w_q N_q^{-1}\widehat{S}(q,q), \end{align*} where \begin{align*}
w_q = F(h,q)F(h',q),\quad |w_q| \le \|F\|_\infty^2. \end{align*} Applying Lemma \ref{lem:tail} with ${\widehat{v}} = {\widehat{V}}_{{\widehat{\gamma}}}(h,h')$, we have \begin{align*}
{\mathbb{P}}\left\{\left|{\widehat{v}} - \E{{\widehat{v}}}\right|\ge t\right\} \le \frac{C\overline{c}\underline{c}^{-4} \|F\|_\infty^4 Q N_0^{-3}\Delta^4}{t^2} := \circledast_1, \end{align*} which implies \begin{align*}
\forall h,h'\in[H],~{\mathbb{P}}\left\{|{\widehat{V}}_{{\widehat{\gamma}}}(h,h') - {\mathbb{E}}\{{\widehat{V}}_{{\widehat{\gamma}}}(h,h')\}|> t\right\}
\le \circledast_1. \end{align*} Taking union bound over $h,h'\in[H]$, we have \begin{align*}
{\mathbb{P}}\left\{\max_{h,h'\in[H]}|{\widehat{V}}_{{\widehat{\gamma}}}(h,h') - {\mathbb{E}}\{{\widehat{V}}_{{\widehat{\gamma}}}(h,h')\}|> t\right\}
\le \circledast_1 \cdot H^2. \end{align*} Therefore, \begin{align*}
\|{\widehat{V}}_{{\widehat{\gamma}}}-{\mathbb{E}}\{{\widehat{V}}_{{\widehat{\gamma}}}\}\|^2_{\infty} = O_{{\mathbb{P}}}\left(\circledast_1 \cdot H^2\right). \end{align*}
\item
It follows from \eqref{eqn:op-inf}.
\end{enumerate}
\end{proof}
\subsection{Proof of Theorem \ref{thm:wald-uniform}} \begin{proof}[Proof of Theorem \ref{thm:wald-uniform}]
We prove the ``fixed $H$'' and ``diverging $H$'' scenarios separately. In each scenario, we apply BEBs to obtain CLTs with the true variances, and then apply the variance estimation results to justify the statistical properties after plugging in the variance estimators.
\begin{enumerate}[label = (\roman*)]
\item Consider fixed $H$. By Corollary \ref{cor:uniform-design-be}, under Conditions \ref{cond:well-conditioned}, \ref{condition::proper} and \ref{cond:easy-spec}, the property of joint asymptotic Normality holds:
\begin{align*}
V_{{\widehat{\gamma}}}^{-1/2}({\widehat{\gamma}}-\gamma) \rightsquigarrow {\mathcal{N}}(0, I_H).
\end{align*}
The continuous mapping theorem implies
\begin{gather*}
({\widehat{\gamma}} - \gamma)^\top V_{{\widehat{\gamma}}}^{-1} ({\widehat{\gamma}} - \gamma) \rightsquigarrow \chi^2_H, \\
({\widehat{\gamma}} - \gamma)^\top {\mathbb{E}}\{{\widehat{V}}_{{\widehat{\gamma}}}\}^{-1} ({\widehat{\gamma}} - \gamma) = ({\widehat{\gamma}} - \gamma)^\top V_{{\widehat{\gamma}}}^{-1/2} W_N V_{{\widehat{\gamma}}}^{-1/2}({\widehat{\gamma}} - \gamma) \rightsquigarrow {\mathcal{L}}.
\end{gather*}
Because ${\mathbb{E}}\{{\widehat{V}}_{{\widehat{\gamma}}}\} \succeq V_{\widehat{\gamma}}$, we have $I_H \succeq W_\infty$ and ${\mathcal{L}} \lesssim \chi^2_H$.
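To make the stochastic dominance explicit, write $\xi_H \sim {\mathcal{N}}(0, I_H)$, so that ${\mathcal{L}}$ is the law of $\xi_H^\top W_\infty \xi_H$. Since $W_\infty \preceq I_H$,
\begin{align*}
\xi_H^\top W_\infty \xi_H \le \xi_H^\top \xi_H \sim \chi^2_H
\end{align*}
pointwise, which justifies comparing the quantiles of ${\mathcal{L}}$ with those of $\chi^2_H$.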
For variance estimation, under Conditions \ref{cond:well-conditioned}, \ref{condition::proper}, \ref{cond:moments} and \ref{cond:easy-spec}, the stochastic order in Theorem \ref{thm:uniform-var} implies
$N{\widehat{V}}_{{\widehat{\gamma}}} - N{\mathbb{E}}\{{\widehat{V}}_{{\widehat{\gamma}}}\} = o_{\mathbb{P}}(1)$. Hence $N^{-1}{\widehat{V}}_{{\widehat{\gamma}}}^{-1} - N^{-1}{\mathbb{E}}\{{\widehat{V}}_{{\widehat{\gamma}}}\}^{-1} = o_{\mathbb{P}}(1)$.
Moreover, because ${\mathbb{E}}\|{\widehat{\gamma}} - \gamma\|_2^2 = \trace{V_{\widehat{\gamma}}}$, Markov's inequality gives
\begin{align*}
\|{\widehat{\gamma}} - \gamma\|_2^2 = O_{\mathbb{P}}(\trace{V_{\widehat{\gamma}}}) = O_{\mathbb{P}}(\trace{F^\top\diag{N_q^{-1}S(q,q)} F}) = O_{\mathbb{P}}(\Delta^2N_0^{-1}\trace{F^\top F}).
\end{align*}
Using
\begin{align*}
\trace{F^\top F} \le c^2Q^{-2} \cdot (QH) = c^2Q^{-1}H,
\end{align*}
we can derive
\begin{align*}
{N}\|{\widehat{\gamma}} - \gamma\|_2^2 = O_{\mathbb{P}}(H).
\end{align*}
Therefore,
\begin{align}
({\widehat{\gamma}} - \gamma)^\top {\widehat{V}}_{{\widehat{\gamma}}}^{-1} ({\widehat{\gamma}} - \gamma) &= ({\widehat{\gamma}} - \gamma)^\top \E{{\widehat{V}}_{{\widehat{\gamma}}}}^{-1} ({\widehat{\gamma}} - \gamma) \notag\\
&+ ({\widehat{\gamma}} - \gamma)^\top \left( {\widehat{V}}_{{\widehat{\gamma}}}^{-1} - \E{{\widehat{V}}_{{\widehat{\gamma}}}}^{-1}\right) ({\widehat{\gamma}} - \gamma), \label{eqn:infer-uniform-1}
\end{align}
where \eqref{eqn:infer-uniform-1} has the order
\begin{align*}
({\widehat{\gamma}} - \gamma)^\top \left( {\widehat{V}}_{{\widehat{\gamma}}}^{-1} - \E{{\widehat{V}}_{{\widehat{\gamma}}}}^{-1}\right) ({\widehat{\gamma}} - \gamma) = O_{\mathbb{P}}(H) \cdot o_{\mathbb{P}}(1) = o_{\mathbb{P}}(1).
\end{align*}
Now using Slutsky's theorem, we prove the results.
\item Consider diverging $H$. We use a quadratic form CLT stated in Corollary \ref{cor:quad-clt-v2}. By Corollary \ref{cor:quad-clt-v2}, under Conditions \ref{cond:well-conditioned}, \ref{condition::proper} and \ref{cond:easy-spec}, when $H^{19/4}N^{-1/2} \to 0$, with $W_N = V_{{\widehat{\gamma}}}^{1/2} \E{{\widehat{V}}_{{\widehat{\gamma}}}}^{-1} V_{{\widehat{\gamma}}}^{1/2} $, we have
\begin{align} \label{eqn::qclt-true-variance}
\frac{({\widehat{\gamma}} - \gamma)^\top \E{{\widehat{V}}_{{\widehat{\gamma}}}}^{-1} ({\widehat{\gamma}} - \gamma) - \trace{W_N}}{\sqrt{2\trace{W_N^2}}} \rightsquigarrow {\mathcal{N}}(0,1).
\end{align}
Because ${\mathbb{E}}\{{\widehat{V}}_{{\widehat{\gamma}}}\} \succeq V_{\widehat{\gamma}}$, we have
\begin{align}\label{eqn:trQ-Q2}
\trace{W_N} \le H,\quad \trace{W_N^2} \le H.
\end{align}
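To see \eqref{eqn:trQ-Q2}, note that ${\mathbb{E}}\{{\widehat{V}}_{{\widehat{\gamma}}}\} \succeq V_{\widehat{\gamma}}$ implies
\begin{align*}
W_N = V_{{\widehat{\gamma}}}^{1/2} \E{{\widehat{V}}_{{\widehat{\gamma}}}}^{-1} V_{{\widehat{\gamma}}}^{1/2} \preceq I_H,
\end{align*}
so all eigenvalues of $W_N$ lie in $[0,1]$, and therefore $\trace{W_N^2} \le \trace{W_N} \le H$.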
Now we consider the difference induced by plugging in the variance estimator:
\begin{align*}
&|({\widehat{\gamma}} - \gamma)^\top \left({\widehat{V}}_{{\widehat{\gamma}}}^{-1} - {\mathbb{E}}\{{\widehat{V}}_{{\widehat{\gamma}}}\}^{-1}\right) ({\widehat{\gamma}} - \gamma)|\\
\le & |({\widehat{\gamma}} - \gamma)^\top V_{{\widehat{\gamma}}}^{-1}({\widehat{\gamma}} - \gamma)|\cdot \|V_{{\widehat{\gamma}}}\|_{\text{op}} \cdot \|{\widehat{V}}_{{\widehat{\gamma}}}^{-1} - {\mathbb{E}}\{{\widehat{V}}_{{\widehat{\gamma}}}\}^{-1}\|_{\operatorname{op}}.
\end{align*} Using the matrix identity
\begin{align*}
{\widehat{V}}_{{\widehat{\gamma}}}^{-1} - {\mathbb{E}}\{{\widehat{V}}_{{\widehat{\gamma}}}\}^{-1} = -{\widehat{V}}_{{\widehat{\gamma}}}^{-1} ({\widehat{V}}_{{\widehat{\gamma}}} - {\mathbb{E}}\{{\widehat{V}}_{{\widehat{\gamma}}}\}) {\mathbb{E}}\{{\widehat{V}}_{{\widehat{\gamma}}}\}^{-1} ,
\end{align*} we can verify that
\begin{align*}
\|{\widehat{V}}_{{\widehat{\gamma}}}^{-1} - {\mathbb{E}}\{{\widehat{V}}_{{\widehat{\gamma}}}\}^{-1}\|_{\operatorname{op}} &= \|{\widehat{V}}_{{\widehat{\gamma}}}^{-1} ({\widehat{V}}_{{\widehat{\gamma}}} - {\mathbb{E}}\{{\widehat{V}}_{{\widehat{\gamma}}}\}) {\mathbb{E}}\{{\widehat{V}}_{{\widehat{\gamma}}}\}^{-1}\|_{\operatorname{op}}\\
&\le \|{\widehat{V}}_{{\widehat{\gamma}}}^{-1}\|_{\operatorname{op}} \|{\widehat{V}}_{{\widehat{\gamma}}} - {\mathbb{E}}\{{\widehat{V}}_{{\widehat{\gamma}}}\}\|_{\operatorname{op}}\|{\mathbb{E}}\{{\widehat{V}}_{{\widehat{\gamma}}}\}^{-1}\|_{\operatorname{op}}.
\end{align*}
Theorem \ref{thm:uniform-var} ensures
\begin{gather*}
N\|{\widehat{V}}_{{\widehat{\gamma}}} - {\mathbb{E}}\{{\widehat{V}}_{{\widehat{\gamma}}}\}\|_{\operatorname{op}} = O_{{\mathbb{P}}}({H^{2}N^{-1/2}}), \\ \|{\mathbb{E}}\{{\widehat{V}}_{{\widehat{\gamma}}}\}^{-1}\|_{\operatorname{op}} = O(N), \quad \|{\widehat{V}}_{{\widehat{\gamma}}}^{-1}\|_{\operatorname{op}} = O_{{\mathbb{P}}}(N), \quad \|V_{{\widehat{\gamma}}}\|_{\operatorname{op}} = O_{{\mathbb{P}}}(N^{-1}).
\end{gather*}
Corollary \ref{cor:quad-clt-v2} ensures
\begin{align*}
({\widehat{\gamma}} - \gamma)^\top V_{{\widehat{\gamma}}}^{-1}({\widehat{\gamma}} - \gamma) = O_{{\mathbb{P}}}(H).
\end{align*}
Under Condition \ref{cond:well-conditioned}, we have
\begin{align*}
\trace{W_N^2} &= \trace{V_{\widehat{\gamma}}^{1/2}\E{{\widehat{V}}_{\widehat{\gamma}}}^{-1}V_{\widehat{\gamma}}\E{{\widehat{V}}_{\widehat{\gamma}}}^{-1}V_{\widehat{\gamma}}^{1/2}} \\
&\ge \sigma_F^{-2}\trace{V_{\widehat{\gamma}}^{1/2}\E{{\widehat{V}}_{\widehat{\gamma}}}^{-1} V_{\widehat{\gamma}}^{1/2}} \\
& = \sigma_F^{-2}\trace{\E{{\widehat{V}}_{\widehat{\gamma}}}^{-1} V_{\widehat{\gamma}}} \\
&\ge \sigma_F^{-4} \trace{I_H} = \sigma_F^{-4} H.
\end{align*}
Using these results, we obtain
\begin{align}\label{eqn:small-dev}
\frac{|({\widehat{\gamma}} - \gamma)^\top \left({\widehat{V}}_{{\widehat{\gamma}}}^{-1} - {\mathbb{E}}\{{\widehat{V}}_{{\widehat{\gamma}}}\}^{-1}\right) ({\widehat{\gamma}} - \gamma)|}{\sqrt{2\trace{ W_N^2}}} = O_{{\mathbb{P}}}(H^{5/2}N^{-1/2}),
\end{align}
which converges to $0$ if $H^{19/4}N^{-1/2} \to 0$. Combine \eqref{eqn::qclt-true-variance} and \eqref{eqn:small-dev} to establish the desired CLT.
To prove the validity of the confidence set, we notice that
\begin{align}
&\Prob{({\widehat{\gamma}} - \gamma)^\top {\widehat{V}}_{{\widehat{\gamma}}}^{-1} ({\widehat{\gamma}} - \gamma) \ge q_\alpha} \notag\\
\le &\Prob{({\widehat{\gamma}} - \gamma)^\top {\mathbb{E}}\{{\widehat{V}}_{{\widehat{\gamma}}}\}^{-1} ({\widehat{\gamma}} - \gamma) + |({\widehat{\gamma}} - \gamma)^\top \left({\widehat{V}}_{{\widehat{\gamma}}}^{-1} - {\mathbb{E}}\{{\widehat{V}}_{{\widehat{\gamma}}}\}^{-1}\right) ({\widehat{\gamma}} - \gamma)| \ge q_\alpha} \notag\\
\le &\Prob{({\widehat{\gamma}} - \gamma)^\top {\mathbb{E}}\{{\widehat{V}}_{{\widehat{\gamma}}}\}^{-1} ({\widehat{\gamma}} - \gamma)\ge q_\alpha - cH^4N^{-1/2}} \label{eqn:main-tail}\\
+ &\Prob{|({\widehat{\gamma}} - \gamma)^\top \left({\widehat{V}}_{{\widehat{\gamma}}}^{-1} - {\mathbb{E}}\{{\widehat{V}}_{{\widehat{\gamma}}}\}^{-1}\right) ({\widehat{\gamma}} - \gamma)| \ge cH^4N^{-1/2} }. \label{eqn:small-tail}
\end{align}
Using \eqref{eqn:trQ-Q2} and \eqref{eqn:small-dev}, \eqref{eqn:small-tail} converges to zero if $ H^{19/4}N^{-1/2}\to 0 $.
For \eqref{eqn:main-tail}, using Lemma \ref{lem:BN} (i),
\begin{align}\label{eqn:main-tail-1}
\left|\Prob{({\widehat{\gamma}} - \gamma)^\top {\mathbb{E}}\{{\widehat{V}}_{{\widehat{\gamma}}}\}^{-1} ({\widehat{\gamma}} - \gamma)\ge q_\alpha - cH^4N^{-1/2}} - \Prob{\xi_H^\top W_N \xi_H\ge q_\alpha - cH^4N^{-1/2}}\right| = o(1).
\end{align}
Now because $W_N \preceq I_H$,
\begin{align}\label{eqn:main-tail-2}
\Prob{\xi_H^\top W_N \xi_H\ge q_\alpha - cH^4N^{-1/2}} \le \Prob{\xi_H^\top \xi_H\ge q_\alpha - cH^4N^{-1/2}}.
\end{align}
Moreover,
\begin{align}\label{eqn:main-tail-3}
\sup_{t\in{\mathbb{R}}}\left|\Prob{\frac{\xi_H^\top \xi_H - H}{\sqrt{2H}}\le t} - \Phi(t)\right| = o(1) \text{ as } H \to \infty.
\end{align}
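Display \eqref{eqn:main-tail-3} is the classical central limit theorem: $\xi_H^\top \xi_H = \sum_{h=1}^H \xi_h^2$ is a sum of $H$ i.i.d.\ $\chi^2_1$ variables with mean $1$ and variance $2$, so
\begin{align*}
\frac{\xi_H^\top \xi_H - H}{\sqrt{2H}} \rightsquigarrow {\mathcal{N}}(0,1) \text{ as } H \to \infty,
\end{align*}
and the convergence of the distribution functions is uniform in $t$ because the limit $\Phi$ is continuous (P\'olya's theorem).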
Hence,
\begin{align}\label{eqn:main-tail-4}
&\left|\Prob{\frac{\xi_H^\top \xi_H - H}{\sqrt{2H}}\ge \frac{q_\alpha - H - cH^4N^{-1/2} }{\sqrt{2H}}} - \Prob{\frac{\xi_H^\top \xi_H - H}{\sqrt{2H}}\ge \frac{q_\alpha - H}{\sqrt{2H}}}\right| \notag\\
= & \left| \Phi\left(\frac{q_\alpha - H - cH^4N^{-1/2} }{\sqrt{2H}}\right) - \Phi\left(\frac{q_\alpha - H}{\sqrt{2H}}\right)\right| + o(1)
\le \frac{1}{\sqrt{2\pi}}H^{7/2}N^{-1/2} + o(1) = o(1).
\end{align}
Combining \eqref{eqn:main-tail-1}--\eqref{eqn:main-tail-4}, we conclude that, if $ H^{19/4}N^{-1/2}\to 0$, then
\begin{align}\label{eqn:main-tail-5}
\limsup_{H,N\to\infty}\Prob{({\widehat{\gamma}} - \gamma)^\top {\mathbb{E}}\{{\widehat{V}}_{{\widehat{\gamma}}}\}^{-1} ({\widehat{\gamma}} - \gamma)\ge q_\alpha - cH^4N^{-1/2}} \le \alpha.
\end{align}
From \eqref{eqn:main-tail} and \eqref{eqn:small-tail}, we conclude the asymptotic validity of the Wald-type inference. \end{enumerate}
\end{proof}
\subsection{Proof of Theorem \ref{thm:quad-be-unreplicated}}
The proof is an application of Lemma \ref{lem:BN} (i).
\subsection{Proof of Lemma \ref{lemma::mean-variance-grouping} and Theorem \ref{thm:unreplicated-var}}
Lemma \ref{lemma::mean-variance-grouping} is a special case of Theorem \ref{thm:unreplicated-var}\ref{thm:unreplicated-var-1}. We first give a proof for Theorem \ref{thm:unreplicated-var}, then add some discussions on improving variance estimation under more assumptions.
\begin{proof}[Proof of Theorem \ref{thm:unreplicated-var}]
\begin{enumerate}[label = (\roman*)]
\item We first compute the expectation of \eqref{eqn:hV-QU}: \begin{align}
&{\mathbb{E}}\{ (Y_q - {\widehat{Y}}_{\langle g \rangle})^2\} \notag\\
=& {\mathbb{E}}\{ (Y_q - \overline{Y}(q) + \overline{Y}(q) - \overline{Y}_{\langle g \rangle} + \overline{Y}_{\langle g \rangle} - {\widehat{Y}}_{\langle g \rangle})^2\} \notag\\
=& {\mathbb{E}}\{ (Y_q - \overline{Y}(q) + \overline{Y}_{\langle g \rangle} - {\widehat{Y}}_{\langle g \rangle})^2\} + {\mathbb{E}}\{ (\overline{Y}(q) - \overline{Y}_{\langle g \rangle})^2\}\notag\\
= & (1-|{\langle g \rangle}|^{-1})^2 {\mathbb{E}}\{(Y_q - \overline{Y}(q))^2\}\notag\\
+ & |{\langle g \rangle}|^{-2} {\mathbb{E}}\left[\left\{\sum_{q'\in{\langle g \rangle},q'\neq q}(Y_{q'} - \overline{Y}(q'))\right\}^2\right] \notag\\
- & 2 (1-|{\langle g \rangle}|^{-1})|{\langle g \rangle}|^{-1} \sum_{q'\in{\langle g \rangle},q'\neq q} {\mathbb{E}}\{(Y_q - \overline{Y}(q))(Y_{q'} - \overline{Y}(q'))\}\notag\\
+ & {\mathbb{E}}\{(\overline{Y}(q) - \overline{Y}_{\langle g \rangle})^2\} \notag\\
= & (1-|{\langle g \rangle}|^{-1})^2 (1-N^{-1})S(q,q)\label{eqn:var-q}\\
+ & |{\langle g \rangle}|^{-2} {\mathbb{E}}\left[\left\{\sum_{q'\in{\langle g \rangle},q'\neq q}(Y_{q'} - \overline{Y}(q'))\right\}^2\right] \label{eqn:var-qprime}\\
- & 2 (1-|{\langle g \rangle}|^{-1})|{\langle g \rangle}|^{-1}N^{-1} \sum_{q'\in{\langle g \rangle},q'\neq q} S(q,q')\label{eqn:cov-q}\\
+ & (\overline{Y}(q) - \overline{Y}_{\langle g \rangle})^2 .\label{eqn:var-bg} \end{align}
\eqref{eqn:var-q} reflects the within group variation for arm $q$. \eqref{eqn:var-qprime} reflects the pooled variation for the arms except $q$ in group ${\langle g \rangle}$. \eqref{eqn:cov-q} captures the correlation between arm $q$ and the rest in ${\langle g \rangle}$. \eqref{eqn:var-bg} represents the between group variation.
For \eqref{eqn:var-qprime}, we have \begin{align}
& |{\langle g \rangle}|^{-2} {\mathbb{E}}\left[\left\{\sum_{q'\in{\langle g \rangle},q'\neq q}(Y_{q'} - \overline{Y}(q'))\right\}^2\right] \notag\\
= & |{\langle g \rangle}|^{-2} {\boldsymbol{1}}^\top[\diag{S(q',q')}_{q'\in{\langle g \rangle},q'\neq q} - N^{-1}(S(q',q''))_{q',q''\in\langle g\rangle\backslash \{q\}}]{\boldsymbol{1}} \notag\\
= & |{\langle g \rangle}|^{-2} (1-N^{-1}\varrho_{\langle g \rangle})\sum_{q'\in{\langle g \rangle},q'\neq q} S(q',q') + N^{-1}\mu_{\langle g \rangle}^{-1} \Theta_1(q,q), \end{align} where \begin{gather}
\Theta_1(q,q) = \mu_{\langle g \rangle} |\langle g \rangle|^{-2}{\boldsymbol{1}}^{\top}\{\varrho_{\langle g \rangle}\diag{S(q',q')}_{q'\in{\langle g \rangle},q'\neq q} - (S(q',q''))_{q',q''\in\langle g\rangle\backslash \{q\}}\}{\boldsymbol{1}} \ge 0. \label{eqn:Theta-1} \end{gather}
We can upper bound $\Theta_1(q,q)$ as follows: \begin{align*}
\Theta_1(q,q) &\le \mu_{\langle g \rangle} |\langle g \rangle|^{-2}{\boldsymbol{1}}^{\top}\{\varrho_{\langle g \rangle}\diag{S(q',q')}_{q'\in{\langle g \rangle},q'\neq q}\}{\boldsymbol{1}} \\
& = \mu_{\langle g \rangle} \cdot \frac{\varrho_{\langle g \rangle}}{|\langle g \rangle|} \cdot |\langle g \rangle|^{-1} \sum_{q'\in{\langle g \rangle},q'\neq q} S(q',q') \\
& \le \mu_{\langle g \rangle} |\langle g \rangle|^{-1} \sum_{q'\in{\langle g \rangle}} S(q',q') \le \mu_{\langle g \rangle} \max_{q'\in{\langle g \rangle}} S(q',q'). \end{align*}
For \eqref{eqn:cov-q}, we have \begin{align}
&2 (1-|{\langle g \rangle}|^{-1})|{\langle g \rangle}|^{-1}N^{-1} \sum_{q'\in{\langle g \rangle},q'\neq q} S(q,q') \notag\\
\le & 2 (1-|{\langle g \rangle}|^{-1})|{\langle g \rangle}|^{-1}N^{-1} \sum_{q'\in{\langle g \rangle},q'\neq q} \sqrt{S(q,q)S(q',q')} \see{by the Cauchy-Schwarz inequality} \notag\\
\le & 2 (1-|{\langle g \rangle}|^{-1})|{\langle g \rangle}|^{-1}N^{-1} \sum_{q'\in{\langle g \rangle},q'\neq q} \left\{\frac{S(q,q) + S(q',q')}{2}\right\} \notag\\
\le &
(1-|{\langle g \rangle}|^{-1})|{\langle g \rangle}|^{-1}N^{-1} (|{\langle g \rangle}|-1) S(q,q)
+
(1-|{\langle g \rangle}|^{-1})|{\langle g \rangle}|^{-1}N^{-1} \sum_{q'\in{\langle g \rangle},q'\neq q} S(q',q') \notag\\
= & (1-|{\langle g \rangle}|^{-1})^2 N^{-1} S(q,q)
+
(1-|{\langle g \rangle}|^{-1})|{\langle g \rangle}|^{-1}N^{-1} \sum_{q'\in{\langle g \rangle},q'\neq q} S(q',q'). \label{eqn:cov-q-2} \end{align}
Define \begin{align}\label{eqn:Theta-2}
\Theta_2(q,q) = \mu_{\langle g \rangle} (1-|{\langle g \rangle}|^{-1})|{\langle g \rangle}|^{-1}\sum_{q'\in{\langle g \rangle},q'\neq q} \left\{{S(q,q) + S(q',q') - 2S(q,q')}\right\} \ge 0. \end{align}
We can upper bound $\Theta_2(q,q)$ by \begin{align*}
\Theta_2(q,q) \le 4\mu_{\langle g \rangle} \max_{q'\in \langle g \rangle} S(q',q'). \end{align*}
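This bound follows from the Cauchy--Schwarz inequality: each summand in \eqref{eqn:Theta-2} satisfies
\begin{align*}
S(q,q) + S(q',q') - 2S(q,q') \le S(q,q) + S(q',q') + 2\sqrt{S(q,q)S(q',q')} \le 2\{S(q,q) + S(q',q')\} \le 4\max_{q''\in \langle g \rangle} S(q'',q''),
\end{align*}
and the prefactor satisfies $(1-|{\langle g \rangle}|^{-1})|{\langle g \rangle}|^{-1}(|{\langle g \rangle}|-1) = (1-|{\langle g \rangle}|^{-1})^2 \le 1$.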
Now using \eqref{eqn:rho-max-S} and \eqref{eqn:cov-q-2}, we have \begin{align*}
{\mathbb{E}}\{{\widehat{V}}_{\widehat{Y}}(q,q)\}
= & \mu_{\langle g \rangle} (1-|{\langle g \rangle}|^{-1})^2 (1- 2N^{-1})S(q,q) + \mu_{\langle g \rangle} {\mathbb{E}}\{(\overline{Y}(q) - \overline{Y}_{\langle g \rangle})^2\} \\
+ & \mu_{\langle g \rangle} |{\langle g \rangle}|^{-2}N^{-1} \{N- \varrho_{\langle g \rangle} - (|{\langle g \rangle}|-1)\}\sum_{q'\in{\langle g \rangle}, q'\neq q} S(q',q')\\
+ & N^{-1}\Theta(q,q), \end{align*} where \begin{align*}
\Theta(q,q) = \Theta_1(q,q) + \Theta_2(q,q), \quad 0\le \Theta(q,q) \le 5\mu_{\langle g \rangle} \max_{q'\in \langle g \rangle} S(q',q'). \end{align*}
Using $\mu_{\langle g \rangle} = (1-|{\langle g \rangle}|^{-1})^{-2} (1- 2N^{-1})^{-1}$ and Condition \ref{cond:cond-N}, we obtain that \begin{align*}
{\mathbb{E}}\{{\widehat{V}}_{\widehat{Y}}(q,q)\} \ge S(q,q) + \mu_{\langle g \rangle} (\overline{Y}(q) - \overline{Y}_{\langle g \rangle})^2 \ge S(q,q). \end{align*}
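Here the correction factor is chosen exactly so that the leading coefficient equals one,
\begin{align*}
\mu_{\langle g \rangle} (1-|{\langle g \rangle}|^{-1})^2 (1- 2N^{-1}) = 1,
\end{align*}
while Condition \ref{cond:cond-N} guarantees the nonnegativity of the remaining terms, in particular the coefficient $N - \varrho_{\langle g \rangle} - (|{\langle g \rangle}|-1)$.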
\item
We can show that \begin{align*}
{\widehat{V}}_{{\widehat{\gamma}}}(h,h') =& \sum_{q \in {\mathcal{Q}}} F(h,q)F(h',q) {\widehat{V}}_{\widehat{Y}}(q,q) \\
=&
\sum_{g\in[G]}\sum_{q\in{\langle g \rangle}}w_q \left (Y_q - {\widehat{Y}}_{{\langle g \rangle}}\right)^2 \end{align*} where \begin{align*}
w_q = \mu_{\langle g \rangle} F(h,q)F(h',q), \text{ if $q\in{\langle g \rangle}$} \end{align*} satisfies \begin{align*}
|w_q| \le (\max_{g\in[G]}\mu_{\langle g \rangle}) \|F\|_\infty^2 := \overline{w}. \end{align*}
Applying Lemma \ref{lem:hv-U}, we have \begin{align*}
&\Prob{ |{\widehat{V}}_{{\widehat{\gamma}}}(h,h') - \E{{\widehat{V}}_{{\widehat{\gamma}}}(h,h')}|\ge t}\\
\le &\frac{C\overline{w}^2\Delta^2
(\Delta^2+\zeta^2)N}{t^2} =
\frac{C(\max_{g\in[G]}\mu_{\langle g \rangle})^2 \|F\|_\infty^4\Delta^2
(\Delta^2+\zeta^2)N}{t^2} := \circledast_2 . \end{align*}
Taking union bound over $h,h'\in[H]$, we obtain \begin{align*}
\Prob{\|{\widehat{V}}_{{\widehat{\gamma}}} - \E{{\widehat{V}}_{{\widehat{\gamma}}}} \|_\infty \ge t}
\le \circledast_2\cdot H^2. \end{align*}
\item It follows from \eqref{eqn:op-inf}. \end{enumerate}
\end{proof}
\textbf{More discussions on the conservativeness of ${\widehat{V}}_{\widehat{Y}}$}. Theorem \ref{thm:unreplicated-var}\ref{thm:unreplicated-var-1} shows \begin{align*}
{\mathbb{E}}\{{\widehat{V}}_{\widehat{Y}}\}
= V_{\widehat{Y}} + \Omega + \diag{\mu_{\langle g \rangle}(\overline{Y}(q) - \overline{Y}_{\langle g \rangle})^2}_{q\in{\mathcal{Q}}_{\textsc{u}}} + N^{-1} (\Theta + S). \end{align*}
Following Lemma \ref{lemma::mean-variance-grouping}, we commented that the conservativeness can be reduced under different assumptions: \begin{itemize}
\item If we assume homogeneity in means within subgroups, i.e., \begin{align}\label{eqn:weak-means-supp}
\overline{Y}(q) = \overline{Y}_{\langle g \rangle},~ \text{ for all } q\in\langle g \rangle, \end{align} then the term $$ \diag{\mu_{\langle g \rangle}(\overline{Y}(q) - \overline{Y}_{\langle g \rangle})^2}_{q\in{\mathcal{Q}}_{\textsc{u}}} $$ vanishes.
\item If we assume homoskedasticity across treatment arms within the same subgroup, i.e., \begin{align}\label{eqn:weak-vars-supp}
S(q,q) = S(q',q'),~ \text{ for all } q,q'\in \langle g \rangle, \end{align} then $ \Omega $ has diagonals: \begin{align*}
{\Omega}(q,q) = \mu_{\langle g \rangle}(|g|-1)|g|^{-2}\left(1-\frac{\varrho_{\langle g \rangle}}{N} - \frac{|g|-1}{N}\right)S(q,q), \end{align*} which also contributes a multiple of $S(q,q)$ and suggests that a smaller correction factor $\mu'_{\langle g \rangle}$ can be used to reduce the conservativeness: \begin{align*}
\mu'_{\langle g \rangle} = (1-|g|^{-1})^{-1}\{(1-|g|^{-1})(1-2N^{-1}) + |g|^{-1}(1 - (2|g| - 1)/N) \}^{-1} \le \mu_{\langle g \rangle}. \end{align*}
When $|g|$ is large (say of the same order as $N$), $\mu'_{\langle g \rangle}$ is close to $ \mu_{\langle g \rangle}$ because $|g|^{-1}(1 - (2|g| - 1)/N)$ is small. When $|g|$ is small, say for pairing, $|g| = 2$, $$ \mu'_{\langle g \rangle} \le 2(1-3N^{-1})^{-1}, \quad \mu_{\langle g \rangle} = 4(1-2N^{-1})^{-1}. $$ Hence $ \mu'_{\langle g \rangle} $ induces much less conservativeness than $ \mu_{\langle g \rangle} $ under stronger assumptions.
\item If we assume the strong null hypothesis within subgroups, i.e.,
\begin{align}\label{eqn:strong-po-supp}
Y_i(q) = Y_i(q'), \text{ for all }i\in[N] \text{ and } q,q'\in {\langle g \rangle},
\end{align}
then both \eqref{eqn:weak-means-supp} and \eqref{eqn:weak-vars-supp} are satisfied. Then
\begin{gather*}
\diag{\mu_{\langle g \rangle}(\overline{Y}(q) - \overline{Y}_{\langle g \rangle})^2}_{q\in{\mathcal{Q}}_{\textsc{u}}} = 0,\\
\Theta = 0 \see{ by the definitions of $\Theta_1$ in \eqref{eqn:Theta-1} and $\Theta_2$ in \eqref{eqn:Theta-2}}.
\end{gather*}
Applying the correction factor $\mu'_{\langle g \rangle}$, we can show
\begin{align*}
{\mathbb{E}}\{{\widehat{V}}_{\widehat{Y}}(q,q)\} = S(q,q).
\end{align*} \end{itemize}
\subsection{Proof of Theorem \ref{thm:wald-unreplicated}}
Based on Corollary \ref{cor:uniform-design-be} and Theorem \ref{thm:unreplicated-var}, the proof proceeds similarly to that of Theorem \ref{thm:wald-uniform}. We omit the details here.
\subsection{Proof of Theorem \ref{thm:quad-be-non-uniform}} The proof is an application of Lemma \ref{lem:BN} (ii).
\subsection{Proof of Theorem \ref{thm:non-uniform-var}} \begin{proof}[Proof of Theorem \ref{thm:non-uniform-var}]
\begin{enumerate}[label = (\roman*)]
\item Combining the decomposition \eqref{eqn:composite-var-2} and the results from Theorems \ref{thm:uniform-var} and \ref{thm:unreplicated-var}, we have \begin{align*}
\E{{\widehat{V}}_{\widehat{\gamma}}} &= \E{F_{\textsc{u}}^\top {\widehat{V}}_{{\widehat{Y}},\textsc{u}} F_{\textsc{u}} + F_{\textsc{r}}^\top {\widehat{V}}_{{\widehat{Y}},\textsc{r}} F_{\textsc{r}} + F_{\textsc{l}}^\top {\widehat{V}}_{{\widehat{Y}},\textsc{l}} F_{\textsc{l}}} \\
&\succeq F_{\textsc{u}}^\top \diag{S(q,q)}_{q\in{\mathcal{Q}}_{\textsc{u}}} F_{\textsc{u}} + F_{\textsc{u}}^\top\Omega F_{\textsc{u}} + F_{\textsc{u}}^\top \diag{\mu_{\langle g \rangle}(\overline{Y}(q)-\overline{Y}_{\langle g \rangle})^2}_{q\in{\mathcal{Q}}_{\textsc{u}}} F_{\textsc{u}} \\
&+ F_{\textsc{r}}^\top \diag{N_q^{-1}S(q,q)}_{q\in{\mathcal{Q}}_{\textsc{r}}} F_{\textsc{r}} + F_{\textsc{l}}^\top \diag{N_q^{-1}S(q,q)}_{q\in{\mathcal{Q}}_{\textsc{l}}} F_{\textsc{l}} . \end{align*}
Therefore, ${\mathbb{E}}\{{\widehat{V}}_{\widehat{\gamma}}\} \succeq F^\top V_{\widehat{Y}} F \succeq V_{\widehat{\gamma}}$.
\item Decompose ${\widehat{V}}_{{\widehat{\gamma}}}(h,h')$ into three terms: \begin{align*}
{\widehat{V}}_{{\widehat{\gamma}}}(h,h') =& \sum_{q \in {\mathcal{Q}}} F(h,q)F(h',q) {\widehat{V}}_{\widehat{Y}}(q,q) \\
=&
\underbrace{ \sum_{g\in[G]}\sum_{q\in{\langle g \rangle}}F_{\textsc{u}}(h,q)F_{\textsc{u}}(h',q) \mu_{\langle g \rangle}\left (Y_q - {\widehat{Y}}_{{\langle g \rangle}}\right)^2}_{{\widehat{v}}_\text{I}}\\
+& \underbrace{ \sum_{q\in{\mathcal{Q}}_{\textsc{r}}}F_{\textsc{r}}(h,q)F_{\textsc{r}}(h',q)N_q^{-1}\widehat{S}(q,q)}_{{\widehat{v}}_{\text{II}}}\\
+& \underbrace{ \sum_{q\in{\mathcal{Q}}_{\textsc{l}}} F_{\textsc{l}}(h,q)F_{\textsc{l}}(h',q) N_q^{-1}\widehat{S}(q,q)}_{{\widehat{v}}_{\text{III}}}. \end{align*}
Applying Lemma \ref{lem:hv-U}, we have \begin{align*}
\Prob{ |{\widehat{v}}_\text{I} - \E{{\widehat{v}}_\text{I}}|\ge t}
\le
\frac{C(\max_{g\in[G]}\mu_{\langle g \rangle})^2 \|F_{\textsc{u}}\|_\infty^4\Delta^2
(\Delta^2+\zeta^2)|{\mathcal{Q}}_{\textsc{u}}|}{t^2} := \circledast_4 . \end{align*}
Applying Lemma \ref{lem:tail} with ${\mathcal{Q}}={\mathcal{Q}}_{\textsc{r}}$, $\overline{c} = \overline{n}$, $\underline{c} = 1$, $N_0 = 1$, we have \begin{align*}
\Prob{ |{\widehat{v}}_\text{II} - \E{{\widehat{v}}_\text{II}}|\ge t}
\le
\frac{C\overline{n} \|F_{\textsc{r}}\|_\infty^4 |{\mathcal{Q}}_{\textsc{r}}| \Delta^4}{t^2} := \circledast_5 . \end{align*}
Applying Lemma \ref{lem:tail} with ${\mathcal{Q}}={\mathcal{Q}}_{\textsc{l}}$, we have \begin{align*}
\Prob{ |{\widehat{v}}_\text{III} - \E{{\widehat{v}}_\text{III}}|\ge t}
\le
\frac{C\overline{c}\underline{c}^{-4} \|F_{\textsc{l}}\|_\infty^4 |{\mathcal{Q}}_{\textsc{l}}| N_0^{-3} \Delta^4}{t^2} := \circledast_6 . \end{align*}
Therefore, \begin{align*}
&\Prob{ |{\widehat{V}}_{\widehat{\gamma}}(h,h') - \E{{\widehat{V}}_{\widehat{\gamma}}(h,h')}|\ge t} \\
\le & \Prob{\{|{\widehat{v}}_{\text{I}} - \E{{\widehat{v}}_{\text{I}}}|\ge t/3\}\cup\{|{\widehat{v}}_{\text{II}} - \E{{\widehat{v}}_{\text{II}}}|\ge t/3\}\cup\{|{\widehat{v}}_{\text{III}} - \E{{\widehat{v}}_{\text{III}}}|\ge t/3\}} \\
\le & 9 (\circledast_4 + \circledast_5 + \circledast_6). \end{align*}
Taking a union bound over $h,h'\in[H]$, we have \begin{align*}
\Prob{ \|{\widehat{V}}_{\widehat{\gamma}} - \E{{\widehat{V}}_{\widehat{\gamma}}}\|_\infty\ge t}
\le 9H^2 (\circledast_4 + \circledast_5 + \circledast_6). \end{align*}
\item It follows from \eqref{eqn:op-inf}. \end{enumerate}
\end{proof}
\subsection{Proof of Theorem \ref{thm:wald-non-uniform}}
Based on Corollary \ref{cor:non-uniform-design-be} and Theorem \ref{thm:non-uniform-var}, the proof is similar to that of Theorem \ref{thm:wald-uniform}. We omit the details here.
\subsection{Proof of Theorem \ref{thm:quad-clt}}
\begin{proof}[Proof of Theorem \ref{thm:quad-clt}] For a given matrix $W$, let ${\mathbb{B}}_t(x; W) = \{y\in{\mathbb{R}}^H : (y-x)^\top W (y-x) \le t\}$, which is convex. By Theorem \ref{thm:be-bounded}, \begin{align*}
\sup_{t\in{\mathbb{R}}} |{\mathbb{P}}(T\le t)-{\mathbb{P}}(T_0\le t)| &= \sup_{t\in{\mathbb{R}}} |{\mathbb{P}}\{\widetilde{\gamma} \in {\mathbb{B}}_t(0; W)\}-{\mathbb{P}}\{\xi_H \in {\mathbb{B}}_t(0; W)\}|\\
&\le \sup_{A\in{\mathcal{A}}} |{\mathbb{P}}\{\widetilde{\gamma} \in A\}-{\mathbb{P}}\{\xi_H \in A\}|\\
& \le CH^{13/4}NB_N(B_N^2 + N^{-1}) + C H^{3/4}B_N + CH^{13/8}N^{1/4}B_N^{3/2} \\
&+ CH^{11/8}N^{1/2}B_N^2 + CH^{7/8}N^{1/4}B_N^{3/2}, \end{align*}
where $B_N = \max_{h\in[H]}\max_{i,j\in[N]}|M''_h(i,j)|$. Here $M''_h(i,j)$ is the standardized population matrix given by Lemma \ref{lem:reformulate}. Now applying \eqref{eqn:standardM-bd} in Lemma \ref{lem:reformulate}, we can further upper bound $B_N$: \begin{align}\label{eqn:bd-BN}
B_N \le \varrho_{\min}(V_{{\widehat{\gamma}}})^{-1/2}\sqrt{H} \max_{h\in[H]}\max_{i,q\in[N]}|f_{qh}N_{q}^{-1}(Y_i(q)- \overline{Y}(q))|. \end{align}
\end{proof}
\subsection{Proof of Lemma \ref{lem:BN}} \begin{proof}[Proof of Lemma \ref{lem:BN}]
We derive upper bounds on $B_N$ by bounding the quantities $ \varrho_{\min}(V_{{\widehat{\gamma}}})$ and $ \max_{i,q\in[N]}|f_{qh}N_{q}^{-1}(Y_i(q)- \overline{Y}(q))|$. Once such bounds are obtained, the BEB follows as a direct application of Theorem \ref{thm:quad-clt}. \begin{enumerate}[label = (\roman*)] \item Under Conditions \ref{cond:well-conditioned} and \ref{condition::proper}, we have \begin{gather*}
\varrho_{\min}(V_{{\widehat{\gamma}}}) \ge \varrho_{\min}(F^\top F)\cdot \min_{q\in[Q]} N_q^{-1} S(q,q), \\
\max_{i,q\in[N]}|f_{qh}N_{q}^{-1}(Y_i(q)- \overline{Y}(q))| \le 2\|F\|_{\infty}\cdot\underline{c}^{-1}N_0^{-1}\max_{i\in[N],q\in[Q]}|Y_i(q) - \overline{Y}(q)|. \end{gather*} Now use Condition \ref{condition::proper} and the upper bound for $B_N$ \eqref{eqn:bd-BN} to obtain \begin{align*}
B_N \le \frac{2c^{1/2}\underline{c}^{-1}\max_{i\in[N],q\in[Q]}|Y_i(q) - \overline{Y}(q)|}{(\overline{c}^{-1}\min_{q\in[Q]}S(q,q))^{1/2}} \cdot \left(\frac{H}{QN_0}\right)^{1/2}. \end{align*}
Then we can apply Theorem \ref{thm:quad-clt} to derive the BEB.
If we further assume Condition \ref{cond:easy-spec}, then $B_N = O(H^{1/2}N^{-1/2})$, so that \eqref{eqn:BN-special} in Theorem \ref{thm:quad-clt} holds.
\item In non-uniform designs, we first give a lower bound on $\varrho_{\min}(V_{\widehat{\gamma}})$: \begin{align}\label{eqn:non-unif-BN-0}
\varrho_{\min}(V_{\widehat{\gamma}}) \ge \varrho_{\min}(F^\top_{\textsc{s}} F_{\textsc{s}}) \cdot (\overline{n}^{-1}\min_{q\in{\mathcal{Q}}_{\textsc{s}}}S(q,q)). \end{align} Use Condition \ref{condition::proper} to obtain \begin{align*}
\varrho_{\min}\{ F_{\textsc{s}}^\top F_{\textsc{s}}\} &= \varrho_{\min}\{F^\top F - F_{\textsc{l}}^\top F_{\textsc{l}}\}\\
&\ge \varrho_{\min}\{F^\top F\} - \varrho_{\max}\{F_{\textsc{l}}^\top F_{\textsc{l}}\} \\
&\ge c'Q^{-1} - c^2H|{\mathcal{Q}}_{\textsc{l}}|Q^{-2}\\
&\see{because $\varrho_{\max}\{F_{\textsc{l}}^\top F_{\textsc{l}}\} \le \trace{F_{\textsc{l}}^\top F_{\textsc{l}}} \le c^2{H|{\mathcal{Q}}_{\textsc{l}}|Q^{-2}} $}\\
& \ge (c'/2)Q^{-1}\\
&\see{because we assumed $ Q \ge 2(c^2/c')H|{\mathcal{Q}}_{\textsc{l}}|$}. \end{align*}
Then we bound the maximum part of $B_N$ in \eqref{eqn:BN} by considering arms in ${\mathcal{Q}}_{\textsc{s}}$ and ${\mathcal{Q}}_{\textsc{l}}$ separately. For $q\in{\mathcal{Q}}_{\textsc{s}}$, because $N_q\ge 1$, under Condition \ref{condition::proper} we have \begin{align}\label{eqn:non-unif-BN-1}
\max_{h\in[H]}\max_{i\in[N],q\in{\mathcal{Q}}_{\textsc{s}}}|f_{qh}N_{q}^{-1}(Y_i(q)- \overline{Y}(q))| \le 2 cQ^{-1} \max_{i\in[N],q\in{\mathcal{Q}}_{\textsc{s}}}|Y_i(q) - \overline{Y}(q)|. \end{align} For $q\in{\mathcal{Q}}_{\textsc{l}}$, we have \begin{align}\label{eqn:non-unif-BN-3}
\max_{h\in[H]}\max_{i\in[N],q\in{\mathcal{Q}}_{\textsc{l}}}|f_{qh}N_{q}^{-1}(Y_i(q)- \overline{Y}(q))| \le 2 c\underline{c}^{-1} Q^{-1}N_0^{-1} \max_{i\in[N],q\in{\mathcal{Q}}_{\textsc{l}}}|Y_i(q) - \overline{Y}(q)|. \end{align} Now plugging \eqref{eqn:non-unif-BN-0}--\eqref{eqn:non-unif-BN-3} into \eqref{eqn:BN}, we have \begin{align*}
B_N \le& \frac{2c H^{1/2}\max_{i\in[N],q\in[Q]}|Y_i(q) - \overline{Y}(q)|}{(c'\overline{n}^{-1}\min_{q\in{\mathcal{Q}}_{\textsc{s}}}S(q,q))^{1/2}}\cdot \max\left\{\frac{1}{Q}, \frac{1}{\underline{c}QN_0}\right\} \\
\le& \frac{2c\max_{i\in[N],q\in[Q]}|Y_i(q) - \overline{Y}(q)|}{(c'\overline{n}^{-1}\min_{q\in{\mathcal{Q}}_{\textsc{s}}}S(q,q))^{1/2}}\frac{H^{1/2}}{Q^{1/2}}. \end{align*} \end{enumerate}
Now we can apply Theorem \ref{thm:quad-clt} to derive the BEB.
If we further assume Condition \ref{cond:easy-spec} and $N=O(Q)$, then $B_N = O(H^{1/2}N^{-1/2})$. Then \eqref{eqn:BN-special} in Theorem \ref{thm:quad-clt} holds.
\end{proof}
\subsection{Proof of Corollary \ref{cor:quad-clt-v2}} \begin{proof}[Proof of Corollary \ref{cor:quad-clt-v2}] Whether or not $H$ increases with $N$, the conditions and Theorem \ref{thm:quad-clt} imply that as $N\to\infty$, \begin{align*}
\sup_{t\in{\mathbb{R}}} |{\mathbb{P}}(T\le t)-{\mathbb{P}}(T_0\le t)| = o(1). \end{align*} \begin{enumerate}[label = (\roman*)]
\item When $H$ is fixed, the proof is complete.
\item When $H$ is increasing to infinity, by the classical Lindeberg CLT, we have for a standard Normal variable $Z$, \begin{align*}
\sup_{t\in{\mathbb{R}}} |{\mathbb{P}}\{T_0\le t\}-{\mathbb{P}}\left\{\sqrt{\Var{T_0}}Z + {\mathbb{E}}(T_0)\le t\right\}| = o(1). \end{align*} Using the expectation and variance calculation for $T_0$ in \eqref{eqn::qclt-true-variance}, we conclude the second part. \end{enumerate}
\end{proof}
\subsection{Proof of Lemma \ref{lem:high-moment}}
\begin{proof}[Proof of Lemma \ref{lem:high-moment}]
Without loss of generality, we assume the potential outcomes are centered: $\overline{Y}(q) = 0$ for all $q\in[Q]$.
(i) The first part follows from the variance formula of ${\widehat{Y}}_q$.
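Part (i) can be verified by brute force for a toy configuration. The sketch below (arbitrary centered outcomes, hypothetical values) enumerates all assignments of a size-$N_q$ arm and checks ${\mathbb{E}}\{\widehat{Y}_q\} = 0$ and ${\mathbb{E}}\{\widehat{Y}_q^2\} = (N_q^{-1} - N^{-1})S(q,q)$.

```python
# Brute-force check of the variance formula for the sample mean under a
# completely randomized design: the set of units receiving arm q is uniform
# over all size-Nq subsets.  Outcomes are centered, matching the proof.
from itertools import combinations

Y = [0.9, -1.4, 0.5, 1.1, -0.3, -0.8]       # arbitrary, sums to 0
N, Nq = len(Y), 2
subsets = list(combinations(range(N), Nq))
means = [sum(Y[i] for i in S) / Nq for S in subsets]

EY = sum(means) / len(subsets)
EY2 = sum(m * m for m in means) / len(subsets)
S_qq = sum(y * y for y in Y) / (N - 1)      # S(q,q) for centered outcomes

assert abs(EY) < 1e-12
assert abs(EY2 - (1 / Nq - 1 / N) * S_qq) < 1e-12
```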
(ii) Now we bound the fourth moment of $\widehat{Y}_q$: \begin{align*}
{\mathbb{E}}\left\{\widehat{Y}_q^4\right\}= & \underbrace{\frac{1}{N_q^4}{\mathbb{E}}\left\{\sum_{i=1}^N Y_i(q)^4\ind{Z_i = q}\right\}}_{\text{II.2-1}}\\
+ & \underbrace{\frac{4}{N_q^4}{\mathbb{E}}\left\{\sum_{i\neq j}^N Y_i(q)^3 Y_j(q)\ind{Z_i = Z_j = q}\right\}}_{\text{II.2-2}}\\
+ & \underbrace{\frac{3}{N_q^4}{\mathbb{E}}\left\{\sum_{i \neq j}^N Y_i(q)^2 Y_j(q)^2\ind{Z_i = Z_j = q}\right\}}_{\text{II.2-3}}\\
+ & \underbrace{\frac{6}{N_q^4}{\mathbb{E}}\left\{\sum_{i\neq j \neq k}^N Y_i(q) Y_j(q) Y_k(q)^2\ind{Z_i = Z_j = Z_k = q}\right\}}_{\text{II.2-4}}\\
+ & \underbrace{\frac{1}{N_q^4}{\mathbb{E}}\left\{\sum_{i \neq j \neq k \neq l}^N Y_i(q) Y_j(q) Y_k(q) Y_l(q)\ind{Z_i = Z_j = Z_k = Z_l = q}\right\}}_{\text{II.2-5}}, \end{align*}
where the coefficients $(1,4,3,6,1)$ are the multinomial counts for the corresponding ordered sums.
Compute \begingroup \allowdisplaybreaks \begin{align*}
\text{II.2-1} &= \frac{1}{N_q^3N}\sum_{i=1}^N Y_i(q)^4,\\
\text{II.2-2} &= \frac{4}{N_q^4}{\mathbb{E}}\left\{\sum_{i\neq j}^N Y_i(q)^3 Y_j(q)\ind{Z_i = Z_j = q}\right\} \\
&= \frac{4(N_q-1)}{N_q^3N(N-1)} \sum_{i\neq j}^N Y_i(q)^3 Y_j(q) \\
&= -\frac{4(N_q-1)}{N_q^3N(N-1)} \sum_{i=1}^N Y_i(q)^4,\\
\text{II.2-3} &= \frac{3(N_q-1)}{N_q^3N(N-1)} \sum_{i \neq j}^N Y_i(q)^2 Y_j(q)^2,\\
\text{II.2-4} & = \frac{6(N_q-1)(N_q-2)}{N_q^3N(N-1)(N-2)} \sum_{i\neq j \neq k}^N Y_i(q) Y_j(q) Y_k(q)^2 \\
& = \frac{6(N_q-1)(N_q-2)}{N_q^3N(N-1)(N-2)} \sum_{ j \neq k}^N -(Y_j(q) + Y_k(q)) Y_j(q) Y_k(q)^2 \\
& = -\frac{6(N_q-1)(N_q-2)}{N_q^3N(N-1)(N-2)} \sum_{ j \neq k}^N Y_j(q)^2 Y_k(q)^2
+ \frac{6(N_q-1)(N_q-2)}{N_q^3N(N-1)(N-2)} \sum_{ k}^N Y_k(q)^4, \\
\text{II.2-5} &= \frac{N_q(N_q-1)(N_q-2)(N_q-3)}{N_q^4 N(N-1)(N-2)(N-3)} \sum_{i \neq j \neq k \neq l}^N Y_i(q) Y_j(q) Y_k(q) Y_l(q) \\
&= -\frac{3N_q(N_q-1)(N_q-2)(N_q-3)}{N_q^4 N(N-1)(N-2)(N-3)} \sum_{i \neq j \neq k}^N Y_i(q) Y_j(q) Y_k(q)^2\\
&= \frac{3N_q(N_q-1)(N_q-2)(N_q-3)}{N_q^4 N(N-1)(N-2)(N-3)}\sum_{ j \neq k}^N Y_j(q)^2 Y_k(q)^2 \\
&- \frac{3N_q(N_q-1)(N_q-2)(N_q-3)}{N_q^4 N(N-1)(N-2)(N-3)} \sum_{ k}^N Y_k(q)^4. \end{align*} \endgroup Now bound these terms: \begin{gather*}
|\text{II.2-1}| \le \frac{\Delta^4}{N_q^3}, \quad |\text{II.2-2}| \le \frac{4\Delta^4}{N_q^2N}, \\
|\text{II.2-3}| \le \frac{6\Delta^4 }{N_q^2 } ~\left(\text{using $\sum_{i\neq j} Y_i(q)^2Y_j(q)^2 \le \sum_iY_i(q)^2\sum_{j}Y_j(q)^2$}\right),\\
|\text{II.2-4}| \le \frac{12\Delta^4}{N_q(N-2)} + \frac{6\Delta^4}{N_q(N-1)(N-2)}, \\
|\text{II.2-5}| \le \frac{6\Delta^4}{(N-2)(N-3)} + \frac{3\Delta^4}{(N-1)(N-2)(N-3)}. \end{gather*} Choose $N$ large enough to obtain \begin{align*}
{\mathbb{E}}\left\{\widehat{Y}_q^4\right\} \le \frac{C\Delta^4}{N_q^2}. \end{align*}
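The multinomial decomposition of ${\mathbb{E}}\{\widehat{Y}_q^4\}$ can be checked by brute-force enumeration. Note that the ordered triple sum $\sum_{i\neq j\neq k}Y_iY_jY_k^2$ carries coefficient $6 = 12/2$: each unordered configuration contributes $4!/2! = 12$ terms of the expansion and appears twice in the ordered sum. A Python sketch with arbitrary (hypothetical) outcomes, which need not be centered for this identity:

```python
# Brute-force check of E[Yhat_q^4] against the term-by-term decomposition
# with ordered-sum coefficients (1, 4, 3, 6, 1).
from itertools import combinations, permutations

Y = [0.7, -1.3, 2.1, 0.4, -0.9, 1.6]   # arbitrary illustrative outcomes
N, Nq = len(Y), 3

subsets = list(combinations(range(N), Nq))
lhs = sum((sum(Y[i] for i in S) / Nq) ** 4 for S in subsets) / len(subsets)

def falling(n, m):
    # falling factorial n(n-1)...(n-m+1); gives E[indicator products]
    out = 1
    for t in range(m):
        out *= n - t
    return out

def osum(k, f):
    # sum over ordered k-tuples of pairwise distinct indices
    return sum(f(*p) for p in permutations(range(N), k))

rhs = (falling(Nq, 1) / falling(N, 1) * sum(y ** 4 for y in Y)
       + 4 * falling(Nq, 2) / falling(N, 2) * osum(2, lambda i, j: Y[i] ** 3 * Y[j])
       + 3 * falling(Nq, 2) / falling(N, 2) * osum(2, lambda i, j: Y[i] ** 2 * Y[j] ** 2)
       + 6 * falling(Nq, 3) / falling(N, 3) * osum(3, lambda i, j, k: Y[i] * Y[j] * Y[k] ** 2)
       + falling(Nq, 4) / falling(N, 4)
         * osum(4, lambda i, j, k, l: Y[i] * Y[j] * Y[k] * Y[l])) / Nq ** 4

assert abs(lhs - rhs) < 1e-9
```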
(iii) Then we compute the covariance terms: \begin{align*}
&{\mathbb{E}}\left\{\widehat{Y}_q^2 \widehat{Y}_{q'}^2\right\} - {\mathbb{E}}\left\{\widehat{Y}_q^2\right\} {\mathbb{E}}\left\{\widehat{Y}_{q'}^2\right\}\\
=& \Bigg\{\underbrace{\frac{1}{N_q^2N_{q'}^2}\sum_{i\neq k}Y_i(q)^2Y_k(q')^2 \frac{N_qN_{q'}}{N(N-1)}}_{\text{II.2-1}}\\
&+\underbrace{\frac{1}{N_q^2N_{q'}^2}\sum_{i\neq j \neq k}Y_i(q)Y_j(q)Y_k(q')^2 \frac{N_q(N_q-1)N_{q'}}{N(N-1)(N-2)}}_{\text{II.2-2}}\\
&+\underbrace{\frac{1}{N_q^2N_{q'}^2}\sum_{i\neq k \neq l}Y_i(q)^2Y_k(q')Y_l(q') \frac{N_qN_{q'}(N_{q'}-1)}{N(N-1)(N-2)}}_{\text{II.2-3}}\\
&+\underbrace{\frac{1}{N_q^2N_{q'}^2}\sum_{i \neq j \neq k \neq l}Y_i(q)Y_j(q)Y_k(q')Y_l(q') \frac{N_q(N_q-1)N_{q'}(N_{q'}-1)}{N(N-1)(N-2)(N-3)}}_{\text{II.2-4}}\Bigg\}\\
&- \underbrace{\left\{\frac{1}{N_q} - \frac{1}{N}\right\}S(q,q)\cdot \left\{\frac{1}{N_{q'}} - \frac{1}{N}\right\}S(q',q')}_{\text{II.2-5}}. \end{align*} For II.2-1 and II.2-5: \begin{align*}
&\left|\frac{1}{N_q^2N_{q'}^2}\sum_{i\neq k}Y_i(q)^2Y_k(q')^2 \frac{N_qN_{q'}}{N(N-1)} - \frac{1}{(N-1)^2}\left(\frac{N-N_q}{N_qN}\right) \left(\frac{N-N_{q'}}{N_{q'}N}\right)\left\{\sum_{i=1}^NY_i(q)^2\right\}\left\{\sum_{k=1}^NY_i(q')^2\right\}\right| \\
=& \Bigg|\left\{\frac{1}{N_qN_{q'}N(N-1)} - \frac{1}{(N-1)^2}\left(\frac{N-N_q}{N_qN}\right) \left(\frac{N-N_{q'}}{N_{q'}N}\right)\right\}\left\{\sum_{i=1}^NY_i(q)^2\right\}\left\{\sum_{i=1}^NY_i(q')^2\right\}\\
-& \frac{1}{N_qN_{q'}N(N-1)}\sum_{i=1}^NY_i(q)^2Y_i(q')^2\Bigg| \\
\le& \frac{(N_q+N_{q'}+1)N-N_qN_{q'}}{N^2(N-1)^2N_qN_{q'}} N^2\Delta^4 + \frac{\Delta^4}{N_qN_{q'}(N-1)} \le \frac{7(N_q + N_{q'})\Delta^4}{N_qN_{q'}(N-1)}. \end{align*}
For II.2-2: \begin{align*}
&\left|\frac{1}{N_q^2N_{q'}^2}\sum_{i\neq j \neq k}Y_i(q)Y_j(q)Y_k(q')^2 \frac{N_q(N_q-1)N_{q'}}{N(N-1)(N-2)}\right|\\
=& \left|-\frac{1}{N_q^2N_{q'}^2}\sum_{i \neq k}Y_i(q)\{Y_i(q)+Y_k(q)\}Y_k(q')^2 \frac{N_q(N_q-1)N_{q'}}{N(N-1)(N-2)}\right|\\
\le& \frac{N_q-1}{N_qN_{q'}N(N-1)(N-2)} \sum_{i\neq k} \left\{\frac{1}{2}Y_i(q)^4 +\frac{1}{2}Y_k(q')^4 + \frac{1}{4}Y_i(q)^4 + \frac{1}{4}Y_k(q)^4 + \frac{1}{2}Y_k(q')^4 \right\}\\
\le& \frac{N_q-1}{N_qN_{q'}N(N-1)(N-2)} \{N(N-1)\cdot 2\Delta^4\} \le \frac{2(N_q + N_{q'})\Delta^4}{N_qN_{q'}(N-2)}. \end{align*} II.2-3 is similar to II.2-2: \begin{align*}
&\left|\frac{1}{N_q^2N_{q'}^2}\sum_{i\neq k \neq l}Y_i(q)^2Y_k(q')Y_l(q') \frac{N_qN_{q'}(N_{q'}-1)}{N(N-1)(N-2)}\right|\le\frac{2(N_q + N_{q'})\Delta^4}{N_qN_{q'}(N-2)}. \end{align*} For II.2-4: \begin{align*}
&\left|\frac{1}{N_q^2N_{q'}^2}\sum_{i \neq j \neq k \neq l}Y_i(q)Y_j(q)Y_k(q')Y_l(q') \frac{N_q(N_q-1)N_{q'}(N_{q'}-1)}{N(N-1)(N-2)(N-3)}\right|\\
= &\left|-\frac{1}{N_q^2N_{q'}^2}\sum_{i \neq j \neq k}Y_i(q)Y_j(q)Y_k(q')\{Y_i(q')+Y_j(q')+Y_k(q')\} \frac{N_q(N_q-1)N_{q'}(N_{q'}-1)}{N(N-1)(N-2)(N-3)}\right|\\
\le & \frac{(N_q-1)(N_{q'}-1)}{N_qN_{q'}N(N-1)(N-2)(N-3)}\cdot{6N(N-1) \Delta^4} \see{reduce terms like II.2-2}\\
\le &\frac{3(N_q-1)(N_{q'}-1)\Delta^4}{N_qN_{q'}(N-2)(N-3)}. \end{align*} Summarizing II.2-1 to II.2-5, \begin{align}\label{eqn:cov-bound}
\left|\text{Cov}\left\{\widehat{Y}_q^2, \widehat{Y}_{q'}^2\right\}\right| \le \frac{C(N_q + N_{q'})\Delta^4}{N_qN_{q'}N}. \end{align}
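The covariance bound \eqref{eqn:cov-bound} can be probed numerically by exact enumeration of the joint assignment; the constant $C = 10$ below is a generous hypothetical choice, not the constant tracked by the proof.

```python
# Exact enumeration: S gets arm q (size Nq), then T, drawn from the rest,
# gets arm q' (size Nq2); the pair (S, T) is uniform over such splits.
from itertools import combinations

Y = [0.6, -1.1, 0.3, 0.9, -0.4, -0.3, 1.2, -1.2, 0.5, -0.5, 0.8, -0.8]  # centered
N, Nq, Nq2 = len(Y), 4, 4
Delta = max(abs(y) for y in Y)

prods = m1 = m2 = 0.0
count = 0
for S in combinations(range(N), Nq):
    a = (sum(Y[i] for i in S) / Nq) ** 2
    rest = [i for i in range(N) if i not in S]
    for T in combinations(rest, Nq2):
        b = (sum(Y[i] for i in T) / Nq2) ** 2
        prods += a * b
        m1 += a
        m2 += b
        count += 1
cov = prods / count - (m1 / count) * (m2 / count)

C = 10.0   # hypothetical generous constant
assert abs(cov) <= C * (Nq + Nq2) * Delta ** 4 / (Nq * Nq2 * N)
```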
\end{proof}
\subsection{Proof of Lemma \ref{lem:moments-U}} \begin{proof}[Proof of Lemma \ref{lem:moments-U}] Parts (i) and (ii) can be shown by constructing new potential outcomes $\{Y_i(q)^2\}$ and applying the variance formula for the sample average; thus we omit the proofs.
For Part (iii), we have \begin{align*}
|\Cov{Y_{q_1}^2}{Y_{q_1}Y_{q_2}}| &= |\E{Y_{q_1}^3Y_{q_2}} - \E{Y_{q_1}^2}\E{Y_{q_1}Y_{q_2}}|\\
&= |\frac{1}{(N)_2}\sum_{i\neq j} Y_i(q_1)^3Y_j(q_2) + (1-N^{-1})S_{Y}(q_1,q_1) \cdot N^{-1}S(q_1,q_2)| \le \frac{C\Delta^4}{N}. \end{align*}
For Part (iv), we have \begin{align*}
|\Cov{Y_{q_1}^2}{Y_{q_2}Y_{q_3}}| &= |\E{(Y_{q_1}^2 - \E{Y_{q_1}^2})Y_{q_2}Y_{q_3}}|\\
&= \left|\E{\sum_{i\neq j\neq k} \{Y_i(q_1)^2 - N^{-1}\sum_{i\in[N]}Y_i(q_1)^2\}Y_j(q_2)Y_k(q_3)\ind{Z_i = q_1,Z_j = q_2, Z_k = q_3}}\right| \\
&= \left|\frac{1}{(N)_3}\sum_{i\neq j\neq k} \{Y_i(q_1)^2 - N^{-1}\sum_{i\in[N]}Y_i(q_1)^2\}Y_j(q_2)Y_k(q_3)\right| \\
& = \left|-\frac{1}{(N)_3}\sum_{j\neq k} \{Y_j(q_1)^2 + Y_k(q_1)^2 - 2 N^{-1}\sum_{i\in[N]}Y_i(q_1)^2\}Y_j(q_2)Y_k(q_3)\right| \le \frac{C\Delta^4}{N}. \end{align*}
For Part (v), we have \begin{align*}
&|\Cov{Y_{q_1}Y_{q_2}}{Y_{q_3}Y_{q_4}}| \\
=& \left|\frac{1}{(N)_4}\sum_{i\neq j \neq k \neq l} \left\{Y_i(q_1)Y_j(q_2) - \frac{1}{(N)_2}\sum_{i\neq j} Y_i(q_1)Y_j(q_2)\right\}Y_k(q_3)Y_l(q_4)\right| \\
=& \left|-\frac{1}{(N)_4}\sum_{i\neq j \neq k} \left\{Y_i(q_1)Y_j(q_2) - \frac{1}{N(N-1)}\sum_{i\neq j} Y_i(q_1)Y_j(q_2)\right\}Y_k(q_3)(Y_i(q_4) + Y_j(q_4) + Y_k(q_4))\right|. \end{align*} Further we have \begin{align*}
&\frac{1}{(N)_4}\left|\sum_{i\neq j\neq k} \left\{Y_i(q_1)Y_j(q_2) - \frac{1}{(N)_2}\sum_{i\neq j} Y_i(q_1)Y_j(q_2)\right\}Y_k(q_3)Y_i(q_4)\right|\\
= &\left|-\frac{1}{(N)_4}\sum_{i\neq j} \left\{Y_i(q_1)Y_j(q_2) - \frac{1}{(N)_2}\sum_{i\neq j} Y_i(q_1)Y_j(q_2)\right\}(Y_i(q_3) + Y_j(q_3))Y_i(q_4)\right|
\le \frac{C\Delta^4}{N^2}. \end{align*} A similar bound holds for the summation \begin{align*}
\left|\frac{1}{(N)_4}\sum_{i\neq j\neq k} \left\{Y_i(q_1)Y_j(q_2) - \frac{1}{(N)_2}\sum_{i\neq j} Y_i(q_1)Y_j(q_2)\right\}Y_k(q_3)Y_j(q_4)\right| \le \frac{C\Delta^4}{N^2}. \end{align*} Last, it remains to bound \begin{align*}
&\left|\frac{1}{(N)_4}\sum_{i\neq j\neq k} \left\{Y_i(q_1)Y_j(q_2) - \frac{1}{(N)_2}\sum_{i\neq j} Y_i(q_1)Y_j(q_2)\right\}Y_k(q_3)^2\right|\\
= &\left|\frac{1}{(N)_4}\sum_{j\neq k} \left\{-(Y_j(q_1) + Y_k(q_1))Y_j(q_2) + \frac{N-2}{(N)_2}\sum_{i} Y_i(q_1)^2\right\}Y_k(q_3)^2\right|\\
\le & \frac{C\Delta^4}{N^2}. \end{align*} Hence we conclude the proof by combining the above parts. \end{proof}
\subsection{Proof of Lemma \ref{lem:tail}} \begin{proof}[Proof of Lemma \ref{lem:tail}] The proof is based on Chebyshev's inequality and bounding the variance of \begin{align*}
\sum_{q\in{\mathcal{Q}}}w_q N_q^{-1} \widehat{S}(q,q)
=& \underbrace{\sum_{q\in{\mathcal{Q}}}w_q N_q^{-1}(N_q-1)^{-1} \sum_{i: Z_i=q}(Y_i-\overline{Y}(q))^2 }_{\text{II}.1} \\
-& \underbrace{ \sum_{q\in{\mathcal{Q}}}w_q (N_q-1)^{-1}\left(\widehat{Y}(q)-\overline{Y}(q)\right)^2}_{\text{II}.2} . \end{align*} The above decomposition ensures that we can assume $Y_i(q)$'s are centered without loss of generality. For II.1, we have \begin{align}\label{eqn:II.1}
\Var{\text{II.1}} \le \sum_{q\in{\mathcal{Q}}} w_q^{2}(N_q-1)^{-1}N_q^{-2} S_{Y^2}(q,q) \le 4\underline{c}^{-3}\overline{w}^{2}|{\mathcal{Q}}| N_0^{-3} \Delta^4. \end{align} For II.2, we have \begin{align}
\text{Var}\{\text{II.2}\} &\le \sum_{q\in{\mathcal{Q}}} w_q^2 (N_q-1)^{-2}\Var{\widehat{Y}_q^2} \notag\\
& + \sum_{q\neq q'\in{\mathcal{Q}}} w_q w_{q'} (N_q-1)(N_{q'} - 1)\Cov{\widehat{Y}_q^2}{\widehat{Y}_{q'}^2} \notag\\
&\le \sum_{q\in{\mathcal{Q}}} w_q^2 (N_q-1)^{-2}{\mathbb{E}}\left\{ \widehat{Y}_q^4 \right\} \notag\\
& + \sum_{q\neq q'\in{\mathcal{Q}}} w_q w_{q'} (N_q-1)(N_{q'} - 1)\Cov{\widehat{Y}_q^2}{\widehat{Y}_{q'}^2} \notag\\
&\le \sum_{q\in{\mathcal{Q}}} w_q^2 (N_q-1)^{-2}(C\Delta^4 N_q^{-2}) \label{eqn:var-II.2}\\
& + \sum_{q\neq q'\in{\mathcal{Q}}} w_q w_{q'} (N_q-1)^{-1}(N_{q'} - 1)^{-1} \frac{C(N_q + N_{q'})\Delta^4}{N_qN_{q'}N}\label{eqn:cov-II.2}\\
&\see{By Lemma \ref{lem:high-moment}}.\notag \end{align} For \eqref{eqn:var-II.2}, we have \begin{align}\label{eqn:var-II.2-1}
\sum_{q\in{\mathcal{Q}}} w_q^2 (N_q-1)^{-2}(C\Delta^4 N_q^{-2}) \le C\underline{c}^{-4}\overline{w}^2|{\mathcal{Q}}| N_0^{-4}\Delta^4. \end{align} For \eqref{eqn:cov-II.2}, we have \begin{align}
&\left|\sum_{q\neq q'\in{\mathcal{Q}}} w_q w_{q'} (N_q-1)^{-1}(N_{q'} - 1)^{-1} \frac{C(N_q + N_{q'})\Delta^4}{N_qN_{q'}N}\right|\notag\\
\le & \sum_{q\neq q'\in{\mathcal{Q}}}\overline{w}^2 \cdot 4(\underline{c}N_0)^{-4} \cdot \frac{C\overline{c}N_0\Delta^4}{\underline{c}QN_0}\notag\\
\le & \overline{w}^2 \cdot 4(\underline{c}N_0)^{-4} \cdot \frac{C\overline{c} |{\mathcal{Q}}|^2 \Delta^4}{\underline{c}Q }\notag\\
\le &
C\underline{c}^{-4}\overline{w}^2 (\overline{c}/\underline{c}) |{\mathcal{Q}}| N_0^{-4} \Delta^4\notag\\
\le & C\overline{c}\underline{c}^{-4} \overline{w}^2 |{\mathcal{Q}}| N_0^{-3}\Delta^4 ,\label{eqn:cov-II.2-1} \end{align} where in the last inequality \eqref{eqn:cov-II.2-1} we use the fact that $\underline{c}N_0$, the lower bound on the arm sizes, is bounded below by an absolute constant (in many cases one can simply take $1$).
Combining \eqref{eqn:II.1}--\eqref{eqn:cov-II.2-1}, we have \begin{align*}
\Var{\sum_{q\in{\mathcal{Q}}}w_q N_q^{-1} \widehat{S}(q,q)} \le C\overline{c}\underline{c}^{-4} \overline{w}^2 |{\mathcal{Q}}| N_0^{-3}\Delta^4. \end{align*}
We apply Chebyshev's inequality to conclude the proof.
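The II.1 $-$ II.2 decomposition at the start of this proof rests on the elementary recentering identity $\sum_i (y_i - \bar y)^2 = \sum_i (y_i - c)^2 - n(\bar y - c)^2$ for any constant $c$; a minimal numerical sketch with arbitrary values:

```python
# Recentering identity behind the II.1 - II.2 decomposition of S-hat:
# for any constant c, sum (y_i - ybar)^2 = sum (y_i - c)^2 - n (ybar - c)^2.
y = [2.3, -0.7, 1.1, 0.4, -1.6]
c = 0.37                                   # arbitrary recentering constant
n = len(y)
ybar = sum(y) / n

lhs = sum((v - ybar) ** 2 for v in y)
rhs = sum((v - c) ** 2 for v in y) - n * (ybar - c) ** 2
assert abs(lhs - rhs) < 1e-12
```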
\end{proof}
\subsection{Proof of Lemma \ref{lem:hv-U}} \begin{proof}[Proof of Lemma \ref{lem:hv-U}] The proof is based on Chebyshev's inequality and bounding the variance of ${\widehat{v}}$.
For any ${\langle g \rangle}$, we have \begin{align}
\sum_{q\in{\langle g \rangle}}w_q \left (Y_q - {\widehat{Y}}_{{\langle g \rangle}}\right)^2
& = \sum_{q\in{\langle g \rangle}}w_q \left (Y_q - \overline{Y}(q) + \overline{Y}(q) - \overline{Y}_{\langle g \rangle} + \overline{Y}_{\langle g \rangle} - {\widehat{Y}}_{{\langle g \rangle}}\right)^2 \notag\\
& = \underbrace{\sum_{q\in{\langle g \rangle}}w_q(Y_q - \overline{Y}(q))^2}_{\text{Term I}}
+
\underbrace{\sum_{q\in{\langle g \rangle}}w_q(\overline{Y}(q) - \overline{Y}_{\langle g \rangle})^2}_{\text{Term II}}
+
\underbrace{\sum_{q\in{\langle g \rangle}}w_q(\overline{Y}_{\langle g \rangle} - {\widehat{Y}}_{{\langle g \rangle}})^2}_{\text{Term III}} \label{eqn:decomp-var-U-1}\\
&
+
2 \underbrace{\sum_{q\in{\langle g \rangle}}w_q \left \{(Y_q - \overline{Y}(q)) (\overline{Y}(q) - \overline{Y}_{\langle g \rangle}) \right\}}_{\text{Term IV}} \label{eqn:decomp-var-U-2}\\
&
+ 2\underbrace{(\overline{Y}_{\langle g \rangle} - {\widehat{Y}}_{{\langle g \rangle}}) \sum_{q\in{\langle g \rangle}}w_q \left \{(Y_q - \overline{Y}(q)) \right\}}_{\text{Term V}} \label{eqn:decomp-var-U-3}\\
&
+
2\underbrace{(\overline{Y}_{\langle g \rangle} - {\widehat{Y}}_{{\langle g \rangle}}) \sum_{q\in{\langle g \rangle}}w_q \left \{(\overline{Y}(q) - \overline{Y}_{\langle g \rangle}) \right\}}_{\text{Term VI}} . \label{eqn:decomp-var-U-4} \end{align}
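The six-term expansion above can be verified numerically for a toy group; all numbers below are arbitrary illustrative values.

```python
# Sketch verifying sum_q w_q (Y_q - Yhat_g)^2 = I + II + III + 2(IV + V + VI),
# with Y_q - Yhat_g = (Y_q - Ybar(q)) + (Ybar(q) - Ybar_g) + (Ybar_g - Yhat_g).
Yq   = [1.3, -0.2, 0.8]        # observed Y_q for the arms in one group
Ybar = [1.0, 0.1, 0.6]         # population means Ybar(q)
w    = [0.2, 0.5, 0.3]         # weights w_q
Ybar_g = sum(Ybar) / len(Ybar) # group-level population mean
Yhat_g = sum(Yq) / len(Yq)     # group-level sample mean

lhs = sum(wq * (yq - Yhat_g) ** 2 for wq, yq in zip(w, Yq))

I   = sum(wq * (yq - yb) ** 2 for wq, yq, yb in zip(w, Yq, Ybar))
II  = sum(wq * (yb - Ybar_g) ** 2 for wq, yb in zip(w, Ybar))
III = sum(wq * (Ybar_g - Yhat_g) ** 2 for wq in w)
IV  = sum(wq * (yq - yb) * (yb - Ybar_g) for wq, yq, yb in zip(w, Yq, Ybar))
V   = (Ybar_g - Yhat_g) * sum(wq * (yq - yb) for wq, yq, yb in zip(w, Yq, Ybar))
VI  = (Ybar_g - Yhat_g) * sum(wq * (yb - Ybar_g) for wq, yb in zip(w, Ybar))

assert abs(lhs - (I + II + III + 2 * IV + 2 * V + 2 * VI)) < 1e-12
```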
There are six terms in \eqref{eqn:decomp-var-U-1} to \eqref{eqn:decomp-var-U-4}. We deal with them separately.
\textbf{Bound summations involving Terms I, IV and VI.} We first show upper bounds for the variances of Terms I, IV and VI (summed over $g\in[G]$): \begin{gather}
\Var{\sum_{g\in[G]}\sum_{q\in{\langle g \rangle}}w_q(Y_q - \overline{Y}(q))^2} \le C\sum_{q\in[Q]} w_q^2 \Delta^4,\label{eqn:V-term-I}\\
\Var{\sum_{g\in[G]}\sum_{q\in{\langle g \rangle}}w_q \left \{(Y_q - \overline{Y}(q)) (\overline{Y}(q) - \overline{Y}_{\langle g \rangle}) \right\}} \le C\sum_{q\in[Q]} w_q^2 \Delta^2 \zeta^2, \label{eqn:V-term-IV}\\
\Var{\sum_{g\in[G]}(\overline{Y}_{\langle g \rangle} - {\widehat{Y}}_{{\langle g \rangle}}) \sum_{q\in{\langle g \rangle}}w_q \left \{(\overline{Y}(q) - \overline{Y}_{\langle g \rangle}) \right\}} \le C\sum_{q\in[Q]} w_q^2\Delta^2\zeta^2.\label{eqn:V-term-VI} \end{gather}
The key idea for proving \eqref{eqn:V-term-I}--\eqref{eqn:V-term-VI} is to treat the summations as linear combinations of sample averages and apply Lemma \ref{lem:moments-U}. Take \eqref{eqn:V-term-I} for example. We can treat $Y'_i(q ) = (Y_i(q) - \overline{Y}(q))^2$ as pseudo potential outcomes and obtain: \begin{align*}
\Var{\sum_{g\in[G]}\sum_{q\in{\langle g \rangle}}w_q(Y_q - \overline{Y}(q))^2} \le \sum_{g\in[G]}\sum_{q\in\langle g \rangle} w_q^2 S_{Y'}(q,q)
\le
C\sum_{q\in[Q]} w_q^2 \Delta^4. \end{align*} Similar derivation holds for \eqref{eqn:V-term-IV} and \eqref{eqn:V-term-VI}.
\textbf{Bound summations involving Term II.} Term II is a non-random quantity and therefore does not contribute to the variance.
\textbf{Bound summations involving Term III.} Now we bound \begin{gather}
\Var{\sum_{g\in[G]}|{\langle g \rangle}| \overline{w}_{\langle g \rangle}(\overline{Y}_{\langle g \rangle} - {\widehat{Y}}_{{\langle g \rangle}})^2}, \see{where $\overline{w}_{\langle g \rangle} = |{\langle g \rangle}|^{-1}\sum_{q\in{\langle g \rangle}}w_q$} \label{eqn:hard-I}. \end{gather} We calculate \begin{align*}
\eqref{eqn:hard-I} = &\underbrace{\sum_{g\in[G]} |{\langle g \rangle}|^2 \overline{w}_{\langle g \rangle}^2 \Var{({\widehat{Y}}_{\langle g \rangle} - \overline{Y}_{\langle g \rangle})^2} }_{\text{Term III.1}}\\
+ & \underbrace{\sum_{g\neq g'\in[G]} |{\langle g \rangle}||{\langle g \rangle}'| \overline{w}_{\langle g \rangle} \overline{w}_{{\langle g \rangle}'} \Cov{({\widehat{Y}}_{\langle g \rangle} - \overline{Y}_{\langle g \rangle})^2}{({\widehat{Y}}_{{\langle g \rangle}'} - \overline{Y}_{{\langle g \rangle}'})^2}}_{\text{Term III.2}}. \end{align*} For Term III.1, we can show \begin{align}
&\sum_{g\in[G]} |{\langle g \rangle}|^2 \overline{w}_{\langle g \rangle}^2 \Var{({\widehat{Y}}_{\langle g \rangle} - \overline{Y}_{\langle g \rangle})^2}\notag \\
= & \sum_{g\in[G]} |{\langle g \rangle}|^2 \overline{w}_{\langle g \rangle}^2 \Cov{({\widehat{Y}}_{\langle g \rangle} - \overline{Y}_{\langle g \rangle})^2}{({\widehat{Y}}_{\langle g \rangle} - \overline{Y}_{\langle g \rangle})^2}\notag \\
= & \sum_{g\in[G]}|{\langle g \rangle}|^2\overline{w}_{\langle g \rangle}^2 |{\langle g \rangle}|^{-4} \Bigg\{\sum_{q\in{\langle g \rangle}}\Var{(Y_q - \overline{Y}(q))^2} + \sum_{q_1 \neq q_2\in{\langle g \rangle}} \Cov{(Y_{q_1} - \overline{Y}(q_1))^2}{(Y_{q_2} - \overline{Y}(q_2))^2}\notag \\
& \phantom{=\sum_{g\in[G]}|{\langle g \rangle}|^2\overline{w}_{\langle g \rangle}^2 } +\sum_{q_1\neq q_2\in{\langle g \rangle}} \Cov{(Y_{q_1} - \overline{Y}(q_1))^2}{(Y_{q_1} - \overline{Y}(q_1))(Y_{q_2} - \overline{Y}(q_2))}\notag \\
& \phantom{=\sum_{g\in[G]}|{\langle g \rangle}|^2\overline{w}_{\langle g \rangle}^2 } +\sum_{q_1\neq q_2\neq q_3\in{\langle g \rangle}} \Cov{(Y_{q_1} - \overline{Y}(q_1))^2}{(Y_{q_2} - \overline{Y}(q_2))(Y_{q_3} - \overline{Y}(q_3))}\notag \\
& \phantom{=\sum_{g\in[G]}|{\langle g \rangle}|^2\overline{w}_{\langle g \rangle}^2 } +\sum_{q_1\neq q_2\neq q_3 \neq q_4\in{\langle g \rangle}} \Cov{(Y_{q_1} - \overline{Y}(q_1))(Y_{q_2} - \overline{Y}(q_2))}{(Y_{q_3} - \overline{Y}(q_3))(Y_{q_4} - \overline{Y}(q_4))}\Bigg\}\notag \\
\le & \sum_{g\in[G]}|{\langle g \rangle}|^{-2}\overline{w}_{\langle g \rangle}^2 \left\{C|{\langle g \rangle}|\Delta^4 +
C|{\langle g \rangle}|^2\Delta^4/N +
C|{\langle g \rangle}|^2\Delta^4/N + C|{\langle g \rangle}|^3\Delta^4/N + C|{\langle g \rangle}|^4\Delta^4/N^2\right\}\notag \\
\le & C\sum_{g\in[G]} \overline{w}_{\langle g \rangle}^2 \Delta^4 \le C\overline{w}^2\Delta^4G \le C\overline{w}^2\Delta^4N_{\textsc{u}}. \label{eqn:Term-III.1} \end{align} To bound Term III.2, we first obtain the following bound using Lemma \ref{lem:moments-U}: \begin{gather}\label{eqn:Cov-g-gprime}
\left|\Cov{({\widehat{Y}}_{\langle g \rangle} - \overline{Y}_{\langle g \rangle})^2}{({\widehat{Y}}_{{\langle g \rangle}'} - \overline{Y}_{{\langle g \rangle}'})^2}\right| \le \frac{C\Delta^4(|{\langle g \rangle}| + |{\langle g \rangle}'|)}{|{\langle g \rangle}||{\langle g \rangle}'|N}, \forall~ g\neq g'\in[G]. \end{gather} The derivation is similar to what we did when handling Term III.1, thus we omit the details here. Using \eqref{eqn:Cov-g-gprime}, we have \begin{align}\label{eqn:Term-III.2}
\left|\sum_{g\neq g'\in[G]} |{\langle g \rangle}||{\langle g \rangle}'| \overline{w}_{\langle g \rangle} \overline{w}_{{\langle g \rangle}'} \Cov{({\widehat{Y}}_{\langle g \rangle} - \overline{Y}_{\langle g \rangle})^2}{({\widehat{Y}}_{{\langle g \rangle}'} - \overline{Y}_{{\langle g \rangle}'})^2}\right|
\le \frac{CN_{\textsc{u}}^2\overline{w}^2\Delta^4}{N} \le C\overline{w}^2\Delta^4 N_{\textsc{u}}. \end{align}
Combine \eqref{eqn:Term-III.1} and \eqref{eqn:Term-III.2} to obtain \begin{align}\label{eqn:hard-I-bd}
\text{\eqref{eqn:hard-I}} \le C\overline{w}^2\Delta^4 N_{\textsc{u}}. \end{align*}
\textbf{Bound summations involving Term V.} Now we bound \begin{gather}
\Var{\sum_{g\in[G]}(\overline{Y}_{\langle g \rangle} - {\widehat{Y}}_{{\langle g \rangle}}) \sum_{q\in{\langle g \rangle}}w_q \left \{(Y_q - \overline{Y}(q)) \right\}}. \label{eqn:hard-II} \end{gather}
We can show \begin{align*}
\eqref{eqn:hard-II} &= \sum_{g\in[G]} \Var{(\overline{Y}_{\langle g \rangle} - {\widehat{Y}}_{{\langle g \rangle}}) \sum_{q\in{\langle g \rangle}}w_q (Y_q - \overline{Y}(q)) }\\
& + \sum_{g\neq g'\in[G]} \Cov{(\overline{Y}_{\langle g \rangle} - {\widehat{Y}}_{{\langle g \rangle}}) \sum_{q\in{\langle g \rangle}}w_q (Y_q - \overline{Y}(q))}{(\overline{Y}_{{\langle g \rangle}'} - {\widehat{Y}}_{{\langle g \rangle}'}) \sum_{q\in{\langle g \rangle}'}w_q (Y_q - \overline{Y}(q))}. \end{align*} The analysis is very similar to \eqref{eqn:hard-I}. We omit the proof and directly state the conclusion: \begin{gather}
\sum_{g\in[G]} \Var{(\overline{Y}_{\langle g \rangle} - {\widehat{Y}}_{{\langle g \rangle}}) \sum_{q\in{\langle g \rangle}}w_q (Y_q - \overline{Y}(q)) } \le C\overline{w}^2\Delta^4 G \le C\overline{w}^2\Delta^4 N_{\textsc{u}},\notag\\
\sum_{g\neq g'\in[G]} \Cov{(\overline{Y}_{\langle g \rangle} - {\widehat{Y}}_{{\langle g \rangle}}) \sum_{q\in{\langle g \rangle}}w_q (Y_q - \overline{Y}(q))}{(\overline{Y}_{{\langle g \rangle}'} - {\widehat{Y}}_{{\langle g \rangle}'}) \sum_{q\in{\langle g \rangle}'}w_q (Y_q - \overline{Y}(q))} \le C\overline{w}^2\Delta^4N_{\textsc{u}},\notag\\
\text{\eqref{eqn:hard-II}} \le C\overline{w}^2\Delta^4 N_{\textsc{u}}. \label{eqn:hard-II-bd} \end{gather}
\textbf{Summarize results.} Combining \eqref{eqn:V-term-I}, \eqref{eqn:V-term-IV}, \eqref{eqn:V-term-VI}, \eqref{eqn:hard-I-bd} and \eqref{eqn:hard-II-bd}, for the unreplicated design, we have \begin{align*}
\Var{{\widehat{v}}} \le C\overline{w}^2(\Delta^4+\Delta^2\zeta^2) N_{\textsc{u}}. \end{align*} Now the tail bound can be obtained by Chebyshev's inequality. \end{proof}
\subsection{Proof of Theorem \ref{thm:be-proj-standard-vec}}
\begin{proof}[Proof of Theorem \ref{thm:be-proj-standard-vec}]
The proof extends that of Theorem \ref{thm:be-proj-standard}. The main difference is that we need to choose the norms carefully and obtain more delicate bounds. By Lemma \ref{lem:reformulate}, there are population matrices $M''_1,\ldots, M''_H$ that satisfy Condition \ref{cond:str-Mk} and $\widetilde{\gamma} = \left(\trace{M''_h P}\right)_{h=1}^H$. We apply Theorem \ref{thm:linear-projection} to obtain that for any $b\in{\mathbb{R}}^H$ with $\|b\|_2 = 1$, \begin{align*}
\sup_{t\in{\mathbb{R}}}|{\mathbb{P}}\{b^\top\widetilde{\gamma} \le t\} - \Phi(t)| \le C{\max_{i,j\in[N]} \left|\sum_{h=1}^Hb_hM''_h(i,j)\right|}. \end{align*} Here following the proof of Lemma \ref{lem:reformulate}, $M''_h$ is obtained through the following definition of $M'_h$. Define \begin{align}\label{eqn:vec-center-PO}
\breve{{\boldsymbol{Y}}}_i(q) = {\boldsymbol{Y}}_i(q) - \overline{{\boldsymbol{Y}}}(q), ~\breve{\gamma}_{i} = N^{-1}\sum_{q'=1}^Q {\boldsymbol{F}}_{q'} \breve{{\boldsymbol{Y}}}_i(q') . \end{align} For each $i,j\in[N]$, define \begin{align}\label{eqn:vec-M}
M_h'(i,j) = N_q^{-1} {\boldsymbol{F}}_q(h,\cdot)\breve{{\boldsymbol{Y}}}_i(q) - \breve{\gamma}_{hi}, ~ \sum_{q'=0}^{q-1} N_{q'} + 1\le j \le \sum_{q'=0}^{q} N_{q'}. \end{align} \eqref{eqn:vec-M} indicates a natural mapping from column $j$ to a particular treatment level ${q_j}$. Then \begin{align*}
b^\top
\begin{pmatrix}
\{\myvec{M''_1}\}^\top\\
\vdots\\
\{\myvec{M''_H}\}^\top
\end{pmatrix}
=
b^\top {\boldsymbol{V}}_{\widehat{\gamma}}^{-1/2}
\begin{pmatrix}
\{\myvec{M'_1}\}^\top\\
\vdots\\
\{\myvec{M'_H}\}^\top
\end{pmatrix}. \end{align*} Hence \begin{align*}
\left|\sum_{h=1}^Hb_hM''_h(i,j)\right| &=
\left|b^\top {\boldsymbol{V}}_{\widehat{\gamma}}^{-1/2}
\begin{pmatrix}
N_{q_j}^{-1} {\boldsymbol{F}}_{q_j}(1,\cdot)\breve{{\boldsymbol{Y}}}_i({q_j}) - \breve{\gamma}_{1i} \\
\vdots\\
N_{q_j}^{-1} {\boldsymbol{F}}_{q_j}(H,\cdot)\breve{{\boldsymbol{Y}}}_i({q_j}) - \breve{\gamma}_{Hi}
\end{pmatrix}\right|\\
& =
\left| b^\top {\boldsymbol{V}}_{\widehat{\gamma}}^{-1/2}
\begin{pmatrix}
N_{q_j}^{-1} {\boldsymbol{F}}_{q_j}(1,\cdot)\breve{{\boldsymbol{Y}}}_i({q_j}) \\
\vdots\\
N_{q_j}^{-1} {\boldsymbol{F}}_{q_j}(H,\cdot)\breve{{\boldsymbol{Y}}}_i({q_j})
\end{pmatrix}
-
b^\top {\boldsymbol{V}}_{\widehat{\gamma}}^{-1/2}
\begin{pmatrix}
\breve{\gamma}_{1i} \\
\vdots \\
\breve{\gamma}_{Hi}
\end{pmatrix}\right| \\
& = |\underbrace{b^\top {\boldsymbol{V}}_{\widehat{\gamma}}^{-1/2}N_{q_j}^{-1}{\boldsymbol{F}}_{q_j}\breve{{\boldsymbol{Y}}}_i({q_j}) }_{\text{term I}}- \underbrace{b^\top {\boldsymbol{V}}_{\widehat{\gamma}}^{-1/2}\breve{\gamma}_i }_{\text{term II}}|. \end{align*}
From \eqref{eqn:vec-center-PO}, term II is the average of term I over $j\in[N]$. Therefore, we can bound term I uniformly over $i$ and $q$ and use the triangle inequality to obtain a bound for term II.
\textbf{First bound for term I}. For $b\in{\mathbb{R}}^H$ with $\|b\|_2 = 1$, construct $b_0 = {\boldsymbol{V}}_{\widehat{\gamma}}^{-1/2} b / \| {\boldsymbol{V}}_{\widehat{\gamma}}^{-1/2} b \|_2 \in{\mathbb{R}}^H $ with $\|b_0\|_2 = 1$. We can verify that \begin{align*}
b = \frac{ {\boldsymbol{V}}_{\widehat{\gamma}}^{1/2}b_0}{\sqrt{b_0^\top {\boldsymbol{V}}_{\widehat{\gamma}} b_0}}. \end{align*} Then \begin{align*}
\left|b^\top {\boldsymbol{V}}_{\widehat{\gamma}}^{-1/2}N_{q_j}^{-1}{\boldsymbol{F}}_{q_j}\breve{{\boldsymbol{Y}}}_i({q_j}) \right|
&=
\left|b_0^\top N_{q_j}^{-1}{\boldsymbol{F}}_{q_j}\breve{{\boldsymbol{Y}}}_i({q_j}) \right| \cdot \left|\frac{1}{\sqrt{b_0^\top {\boldsymbol{V}}_{\widehat{\gamma}} b_0}}\right|
\le
\left\|b_0^\top {\boldsymbol{F}}_{q_j} \right\|_1 \cdot \frac{N_{q_j}^{-1}\|\breve{{\boldsymbol{Y}}}_i({q_j})\|_\infty}{\sqrt{b_0^\top {\boldsymbol{V}}_{\widehat{\gamma}} b_0}}. \end{align*}
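For completeness, we verify the expression for $b$ used above: writing $c = \| {\boldsymbol{V}}_{\widehat{\gamma}}^{-1/2} b \|_2$, we have $b_0 = {\boldsymbol{V}}_{\widehat{\gamma}}^{-1/2} b/c$, so that \begin{align*} b_0^\top {\boldsymbol{V}}_{\widehat{\gamma}} b_0 = \frac{b^\top {\boldsymbol{V}}_{\widehat{\gamma}}^{-1/2}{\boldsymbol{V}}_{\widehat{\gamma}}{\boldsymbol{V}}_{\widehat{\gamma}}^{-1/2} b}{c^2} = \frac{\|b\|_2^2}{c^2} = \frac{1}{c^2}, \qquad \frac{{\boldsymbol{V}}_{\widehat{\gamma}}^{1/2} b_0}{\sqrt{b_0^\top {\boldsymbol{V}}_{\widehat{\gamma}} b_0}} = \frac{b/c}{1/c} = b. \end{align*}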
This gives a bound that depends on the choice of $b$.
To get a uniform bound, we need to bound $\left\|b_0^\top {\boldsymbol{F}}_{q_j} \right\|_1$ and $b_0^\top {\boldsymbol{V}}_{\widehat{\gamma}} b_0$. We can show \begin{align*}
\|b_0^\top {\boldsymbol{F}}_{q_j} \|_1 = \sum_{k\in[p]} |b_0^\top {\boldsymbol{F}}_{q_j}(\cdot,k)| \le \sum_{k\in[p]} \|{\boldsymbol{F}}_{q_j}(\cdot,k)\|_2 =\|{\boldsymbol{F}}_{q_j}\|_{2,1},~
{b_0^\top {\boldsymbol{V}}_{\widehat{\gamma}} b_0} \ge {\varrho_{\min}\{ {\boldsymbol{V}}_{\widehat{\gamma}}\}}. \end{align*}
Hence \begin{align}\label{eqn:second-bd-vec}
\left\|b_0^\top {\boldsymbol{F}}_{q_j}\right\|_1 \cdot \frac{N_{q_j}^{-1}\|\breve{{\boldsymbol{Y}}}_i({q_j})\|_\infty}{\sqrt{b_0^\top {\boldsymbol{V}}_{\widehat{\gamma}} b_0}} \le \frac{ \max_{q\in[Q]}\|{\boldsymbol{F}}_{q}\|_{2,1}\cdot N_{q_j}^{-1}\|{\boldsymbol{Y}}_i({q_j})-\overline{{\boldsymbol{Y}}}({q_j})\|_\infty}{\sqrt{\varrho_{\min}\{ {\boldsymbol{V}}_{\widehat{\gamma}}\}}}. \end{align}
\textbf{Second bound for term I}. Revisit term I. We have \begin{align}\label{eqn:vec-1}
\left|b^\top {\boldsymbol{V}}_{\widehat{\gamma}}^{-1/2}N_{q_j}^{-1}{\boldsymbol{F}}_{q_j}\breve{{\boldsymbol{Y}}}_i({q_j}) \right|
&=\left|b^\top {\boldsymbol{V}}_{\widehat{\gamma}}^{-1/2} {\boldsymbol{F}}_{q_j}\{N_{q_j}^{-1}{\boldsymbol{S}}({q_j},{q_j})\}^{1/2}\{N_{q_j}^{-1}{\boldsymbol{S}}({q_j},{q_j})\}^{-1/2}\{N_{q_j}^{-1}\breve{{\boldsymbol{Y}}}_i({q_j})\} \right| \notag\\
&\le
\left\|b^\top {\boldsymbol{V}}_{\widehat{\gamma}}^{-1/2}{\boldsymbol{F}}_{q_j}\{N_{q_j}^{-1}{\boldsymbol{S}}({q_j},{q_j})\}^{1/2}\right\|_2 \cdot \left\|\{N_{q_j}^{-1}{\boldsymbol{S}}({q_j},{q_j})\}^{-1/2}\{N_{q_j}^{-1}\breve{{\boldsymbol{Y}}}_i({q_j})\} \right\|_2. \end{align} We further bound the first term in \eqref{eqn:vec-1} as follows: \begin{align}\label{eqn:vec-2}
&\left\|b^\top {\boldsymbol{V}}_{\widehat{\gamma}}^{-1/2} {\boldsymbol{F}}_{q_j}\{N_{q_j}^{-1}{\boldsymbol{S}}({q_j},{q_j})\}^{1/2}\right\|_2^2 \\
\le & {\sum_{q=1}^Q \left\|b^\top {\boldsymbol{V}}_{\widehat{\gamma}}^{-1/2} {\boldsymbol{F}}_q\{N_q^{-1}{\boldsymbol{S}}(q,q)\}^{1/2}\right\|_2^2 } \notag\\
\le & {\sum_{q=1}^Q b^\top {\boldsymbol{V}}_{\widehat{\gamma}}^{-1/2} {\boldsymbol{F}}_q\{N_q^{-1}{\boldsymbol{S}}(q,q)\}{\boldsymbol{F}}_q^\top {\boldsymbol{V}}_{\widehat{\gamma}}^{-1/2} b } \notag\\
\le &{b^\top {\boldsymbol{V}}_{\widehat{\gamma}}^{-1/2}
(\sigma_F^2 {\boldsymbol{V}}_{\widehat{\gamma}}) {\boldsymbol{V}}_{\widehat{\gamma}}^{-1/2}b}
\see{by Condition \eqref{eqn:well-conditioned-vec}}\notag\\
\le & \sigma_F^2.\notag \end{align} Combining \eqref{eqn:vec-1} and \eqref{eqn:vec-2}, we have \begin{align}\label{eqn:first-bd-vec}
\left|\sum_{h=1}^Hb_hM''_h(i,j)\right|^2\le 4 {\sigma_F^2} {N_{q_j}^{-1}\breve{{\boldsymbol{Y}}}_i({q_j})^\top {\boldsymbol{S}}({q_j},{q_j})^{-1} \breve{{\boldsymbol{Y}}}_i({q_j})}. \end{align}
Combining \eqref{eqn:second-bd-vec} and \eqref{eqn:first-bd-vec}, we have \begin{align*}
\left|\sum_{h=1}^Hb_hM''_h(i,j)\right|\le \min\left\{2 {\sigma_F} \sqrt{N_{q_j}^{-1}\breve{{\boldsymbol{Y}}}_i({q_j})^\top {\boldsymbol{S}}({q_j},{q_j})^{-1} \breve{{\boldsymbol{Y}}}_i({q_j})}, \frac{ \|{\boldsymbol{F}}_{q_j}\|_{2,1}\cdot N_{q_j}^{-1}\|{\boldsymbol{Y}}_i({q_j})-\overline{{\boldsymbol{Y}}}({q_j})\|_\infty}{\sqrt{\varrho_{\min}\{ {\boldsymbol{V}}_{\widehat{\gamma}}\}}}\right\}. \end{align*}
\end{proof}
\end{appendix}
\end{document}
\begin{document}
\title{Approximate $C^*$-ternary ring homomorphisms} \author{Mohammad Sal Moslehian} \address{Department of Mathematics, Ferdowsi University, P. O. Box 1159, Mashhad 91775, Iran.} \email{[email protected]} \subjclass[2000]{Primary 39B82; Secondary 39B52, 46L05.} \keywords{generalized Hyers--Ulam--Rassias stability, $C^*$-ternary ring, $C^*$-ternary homomorphism, Trif's functional equation} \begin{abstract} In this paper, we establish the generalized Hyers--Ulam--Rassias stability of $C^*$-ternary ring homomorphisms associated to the Trif functional equation \begin{eqnarray*} d \cdot C_{d-2}^{l-2} f(\frac{x_1+\cdots +x_d}{d})+ C_{d-2}^{l-1}\sum_{j=1}^d f(x_j) = l \cdot \sum_{1\leq j_1< \cdots < j_l\leq d} f(\frac{x_{j_1} + \cdots + x_{j_l}}{l}). \end{eqnarray*} \end{abstract} \maketitle
\section {Introduction and preliminaries} A {\it ternary ring of operators} (TRO) is a closed subspace of the space $B({\mathcal H}, {\mathcal K})$ of bounded linear operators between Hilbert spaces ${\mathcal H}$ and ${\mathcal K}$ which is closed under the ternary product $[xyz] := xy^{\ast}z$. This concept was introduced by Hestenes \cite{HES}. The class of TROs includes Hilbert $C^*$-modules via the ternary product $[xyz] := \langle x, y\rangle z$. It is remarkable that every TRO is isometrically isomorphic to a corner $p {\mathcal A} (1-p)$ of a $C^*$-algebra ${\mathcal A}$, where $p$ is a projection. A structure closely related to TROs is the so-called $JC^*$-triple, a norm-closed subspace of $B({\mathcal H})$ closed under the triple product $[xyz]=(xy^*z + zy^*x)/2$; cf. \cite{HAR}. It is also true that a commutative TRO, i.e. a TRO with the property $xy^*z=zy^*x$, is an associative $JC^*$-triple.
Following \cite{ZET}, a {\it $C^*$-ternary ring} is defined to be a Banach space ${\mathcal A}$ with a ternary product $(x, y, z)\mapsto [xyz]$ from ${\mathcal A}\times{\mathcal A}\times{\mathcal A}$ into ${\mathcal A}$ which is linear in the outer variables, conjugate linear in the middle variable, and associative in the sense that $[xy[zts]]=[x[tzy]s]=[[xyz]ts]$, and satisfies
$\|[xyz]\|\leq\|x\|\|y\|\|z\|$ and $\|[xxx]\|=\|x\|^{3}$. For instance, any TRO is a $C^*$-ternary ring under the ternary product $[xyz]= xy^*z$. A linear mapping $\varphi$ between $C^*$-ternary rings is called a {\it homomorphism} if $\varphi([xyz])=[\varphi(x)\varphi(y)\varphi(z)]$ for all $x,y,z\in {\mathcal A}$.
The stability problem of functional equations originated from a question of Ulam \cite{ULA}, posed in 1940, concerning the stability of group homomorphisms. In the next year, Hyers \cite{HYE} gave a partial affirmative answer to the question of Ulam in the context of Banach spaces. In 1978, Th. M. Rassias \cite{RAS1} extended the theorem of Hyers by considering the unbounded Cauchy difference
$\|f(x+y)-f(x)-f(y)\|\leq \varepsilon(\|x\|^p+\|y\|^p)$, where $\varepsilon>0$ and $p\in [0,1)$ are constants. The result of Th. M. Rassias has had a great influence on the development of what we now call {\it Hyers--Ulam--Rassias stability} of functional equations. In 1994, a generalization of Rassias' result, the so-called generalized Hyers--Ulam--Rassias stability, was obtained by G\u avruta \cite{GAV} by following the same approach as in \cite{RAS1}. During the last decades several stability problems of functional equations have been investigated in the spirit of Hyers--Ulam--Rassias--G\u avruta. See \cite{CZE, H-I-R, JUN, RAS2, MOS1} and references therein for more detailed information on stability of functional equations.
As far as the author knows, \cite{BOU} is the first paper dealing with stability of (ring) homomorphisms. Another related result is that of Johnson \cite{JOH}, in which he introduced the notion of almost algebra $*$-homomorphism between two Banach $*$-algebras. In fact, many interesting results on the stability of homomorphisms have been obtained by mathematicians; see \cite{RAS3} for a comprehensive account on the subject. In \cite{B-M} the stability of homomorphisms between $J^*$-algebras associated to the Cauchy equation $f(x+y)=f(x)+f(y)$ was investigated. Some results on the stability of ternary homomorphisms may be found in \cite{A-M, M-S}.
Trif \cite{TRI} proved the generalized stability for the so-called Trif functional equation \begin{eqnarray*} d \cdot C_{d-2}^{l-2} f(\frac{x_1+\cdots +x_d}{d})+ C_{d-2}^{l-1}\sum_{j=1}^d f(x_j) = l \cdot \sum_{1\leq j_1< \cdots < j_l\leq d} f(\frac{x_{j_1} + \cdots + x_{j_l}}{l}), \end{eqnarray*} deriving from an inequality of Popoviciu \cite{POP} for convex functions (here, $C^k_r$ denotes $\frac{r!}{k!(r-k)!}$). Hou and Park \cite{P-H} applied the result of Trif to study $*$-homomorphisms between unital $C^*$-algebras. Further, Park investigated the stability of Poisson $JC^*$-algebra homomorphisms associated with Trif's equation (see \cite{PAR1}).
In this paper, using some strategies from \cite{B-M, L-S, P-H, PAR1, TRI}, we establish the generalized Hyers--Ulam--Rassias stability of $C^*$-ternary homomorphisms associated to the Trif functional equation. If a $C^*$-ternary ring $({\mathcal A}, [\;])$ has an identity, i.e. an element $e$ such that $x = [xee] = [eex]$ for all $x\in {\mathcal A}$, then it is easy to verify that $x\odot y := [xey]$ and $x^*:= [exe]$ make ${\mathcal A}$ into a unital $C^*$-algebra (due to the fact that $\|x\odot x^*\odot x\| = \|x\|^3$). Conversely, if $({\mathcal A}, \odot)$ is a (unital) $C^*$-algebra, then $[xyz] := x\odot y^*\odot z$ makes ${\mathcal A}$ into a $C^*$-ternary ring (with the unit $e$ such that $x\odot y = [xey]$) (see \cite{MOS2}). Thus our approach may be applied to investigate the stability of homomorphisms between unital $C^*$-algebras.
Throughout this paper, ${\mathcal A}$ and ${\mathcal B}$ denote $C^*$-ternary rings. In addition, let $q=\frac{l(d-1)}{d-l}$ and $r = -\frac{l}{d-l}$ for positive integers $l, d$ with $2\leq l\leq d-1$. By an {\it approximate $C^*$-ternary ring homomorphism associated to the Trif equation} we mean a mapping $f: {\mathcal A}\to {\mathcal B}$ for which there exists a control function $\varphi: {\mathcal A}^{d+3}\to [0, \infty)$ such that, setting \begin{eqnarray*}
D_\mu f(x_1, \cdots, x_d, u, v, w)&=&\|d \cdot C_{d-2}^{l-2} f(\frac{\mu x_1+\cdots + \mu x_d}{d} + \frac{[uvw]}{d \cdot C_{d-2}^{l-2}}) + C_{d-2}^{l-1} \sum_{j=1}^d \mu f(x_j) \\ &&- l \cdot \sum_{1\leq j_1< \cdots < j_l\leq d} \mu f(\frac{x_{j_1}
+ \cdots + x_{j_l}}{l}) - [f(u)f(v)f(w)]\|, \end{eqnarray*} we have \begin{eqnarray}\label{trifapp} D_\mu f(x_1, \cdots, x_d, u, v, w)\leq\varphi(x_1, \cdots, x_d, u, v, w), \end{eqnarray} for all scalars $\mu$ in a subset ${\mathbb E}$ of ${\mathbb C}$ and all $x_1, \cdots, x_d, u, v, w\in {\mathcal A}$.
It is not hard to see that a function $T : X \to Y$ between linear spaces satisfies Trif's equation if and only if there is an additive mapping $S : X \to Y$ such that $T(x) = S(x) + T(0)$ for all $x \in X$. In fact, $S(x) := (1/2)(T(x) - T(-x))$; see \cite{TRI}.
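For orientation, consider the smallest admissible case $d=3$, $l=2$ (a routine specialization, recorded here for the reader's convenience). The Trif equation then reads \begin{eqnarray*} 3 f\Big(\frac{x_1+x_2+x_3}{3}\Big)+ \sum_{j=1}^3 f(x_j) = 2\sum_{1\leq j_1<j_2\leq 3} f\Big(\frac{x_{j_1}+x_{j_2}}{2}\Big), \end{eqnarray*} the functional-equation form of Popoviciu's inequality, and the constants above become $q=4$ and $r=-2$. In general one has $q+(d-1)r=0$ and $q+(l-1)r=l$; these identities underlie the substitution $x_1=qx$, $x_2=\cdots=x_d=rx$ used repeatedly in the next section.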
\section{Main Results}
In this section, we are going to establish the generalized Hyers--Ulam--Rassias stability of homomorphisms in $C^*$-ternary rings associated with the Trif functional equation. We start our work with investigating the case in which an approximate $C^*$-ternary ring homomorphism associated to the Trif equation is an exact homomorphism.
\begin{proposition} Let $T:{\mathcal A} \to {\mathcal B}$ be an approximate $C^*$-ternary ring homomorphism associated to the Trif equation with ${\mathbb E}={\mathbb C}$ and a control function $\varphi$ satisfying \begin{eqnarray*} \lim_{n\to\infty}q^{-n}\varphi(q^nx_1, \cdots, q^nx_d, q^n u, q^nv, q^nw)=0, \end{eqnarray*} for all $x_1, \cdots, x_d, u, v, w\in {\mathcal A}$. Suppose that $T(qx)=qT(x)$ for all $x\in {\mathcal A}$. Then $T$ is a $C^*$-ternary homomorphism. \end{proposition}
\begin{proof} $T(0)=0$, because $T(0)=qT(0)$ and $q>1$. We have \begin{eqnarray*} D_1 T(x_1, \cdots, x_d, 0, 0, 0)&=& q^{-n}D_1 T(q^nx_1, \cdots, q^nx_d, 0, 0, 0)\\ &\leq& q^{-n}\varphi(q^nx_1, \cdots, q^nx_d, 0, 0, 0). \end{eqnarray*} Taking the limit as $n\to\infty$ we conclude that $T$ satisfies Trif's equation. Hence $T$ is additive. It follows from \begin{eqnarray*}
D_\mu T(q^nx, \cdots, q^nx, 0, 0, 0) &=& q^n\|d \cdot C_{d-2}^{l-2} (T(\mu x) -\mu T(x))\| \leq \varphi(q^nx, \cdots, q^nx, 0, 0, 0), \end{eqnarray*} upon dividing by $q^n$ and letting $n\to\infty$, that $T(\mu x)=\mu T(x)$, i.e. $T$ is homogeneous.
Set $x_1=\cdots=x_d=0$ and replace $u, v, w$ by $q^nu, q^nv, q^nw$, respectively, in (\ref{trifapp}). Since $T$ is homogeneous, we have \begin{eqnarray*}
\|T([uvw])-[T(u)T(v)T(w)]\|&=& q^{-3n}\|T([q^nu q^nv q^nw])-
[T(q^nu)T(q^nv)T(q^nw)]\| \\ &\leq& q^{-n}\varphi(0, \cdots, 0, q^nu, q^nv, q^nw), \end{eqnarray*} for all $u, v, w\in {\mathcal A}$. The right hand side tends to zero as $n\to\infty$. Hence $T([uvw])=[T(u)T(v)T(w)]$ for all $u, v, w\in {\mathcal A}$. \end{proof}
\begin{theorem}\label{main} Let $f:{\mathcal A} \to {\mathcal B}$ be an approximate $C^*$-ternary ring homomorphism associated to the Trif equation with ${\mathbb E}={\mathbb T}$ and a control function $\varphi :{\mathcal A}^{d+3} \to [0, \infty)$ satisfying \begin{eqnarray}\label{phi} \widetilde{\varphi}(x_1, \cdots, x_d, u, v, w):=\sum_{j=0}^{\infty} q^{-j} \varphi(q^jx_1, \cdots, q^jx_d, q^ju, q^jv, q^jw) < \infty , \end{eqnarray} for all $x_1, \cdots, x_d, u, v, w\in{\mathcal A}$. If $f(0)= 0$, then there exists a unique $C^*$-ternary ring homomorphism $T:{\mathcal A} \to {\mathcal B}$ such that \begin{eqnarray*}
\|f(x) - T(x)\|\leq \frac{1}{l \cdot C_{d-1}^{l-1}} \widetilde{\varphi}(qx, rx, \cdots, rx, 0, 0, 0), \end{eqnarray*} for all $x\in{\mathcal A}$. \end{theorem} \begin{proof} Set $u=v=w=0, \mu =1$ and replace $x_1, \cdots ,x_d$ by $qx, rx, \cdots, rx$ in (\ref{trifapp}). Then \begin{eqnarray*}
\|C_{d-2}^{l-1}f(qx)-l \cdot C_{d-1}^{l-1} f(x)\|\leq \varphi(qx, rx, \cdots, rx, 0, 0, 0) \quad (x \in {\mathcal A}). \end{eqnarray*} One can use induction to show that \begin{eqnarray}\label{approx}
&&\|q^{-n}f(q^nx)-q^{-m}f(q^mx)\|\nonumber\\ &\leq& \frac{1}{l \cdot C_{d-1}^{l-1}}\sum_{j=m}^{n-1}q^{-j} \varphi\big(q^j(qx), q^j(rx), \cdots, q^j(rx), 0, 0, 0\big), \end{eqnarray} for all nonnegative integers $m<n$ and all $x \in {\mathcal A}$. Hence the sequence $\{q^{-n}f(q^nx)\}_{n\in {\mathbb N}}$ is Cauchy for all $x\in {\mathcal A}$. Therefore we can define the mapping $T:{\mathcal A} \to {\mathcal B}$ by \begin{eqnarray}\label{lim} T(x) := \lim_{n\to\infty}\frac{1}{q^n} f(q^nx)\quad (x\in{\mathcal A}). \end{eqnarray} Since \begin{eqnarray*} D_1T(x_1, \cdots ,x_d, 0, 0, 0) &=& \lim_{n\to\infty} q^{-n}D_1f(q^nx_1,\cdots, q^nx_d, 0, 0, 0)\\ &\leq& \lim_{n\to\infty} q^{-n}\varphi(q^nx_1,\cdots, q^nx_d, 0, 0, 0)\\ &=& 0, \end{eqnarray*} we conclude that $T$ satisfies the Trif equation and so it is additive (note that (\ref{lim}) implies that $T(0)=0$). It follows from (\ref{lim}) and (\ref{approx}) with $m=0$ that \begin{eqnarray*}
\|f(x)- T(x)\| \leq \frac{1}{l \cdot C_{d-1}^{l-1}} \widetilde{\varphi}(qx, rx, \cdots, rx, 0, 0, 0), \end{eqnarray*} for all $x\in {\mathcal A}$.
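For the reader's convenience, we record the one-step estimate behind \eqref{approx}: since $q\,C_{d-2}^{l-1}=l\cdot C_{d-1}^{l-1}$ (a direct computation from $q=\frac{l(d-1)}{d-l}$), the inequality $\|C_{d-2}^{l-1}f(qx)-l \cdot C_{d-1}^{l-1} f(x)\|\leq \varphi(qx, rx, \cdots, rx, 0, 0, 0)$ can be rewritten as \begin{eqnarray*} \Big\|\frac{1}{q}f(qx)-f(x)\Big\|\leq \frac{1}{l \cdot C_{d-1}^{l-1}}\varphi(qx, rx, \cdots, rx, 0, 0, 0) \quad (x \in {\mathcal A}); \end{eqnarray*} replacing $x$ by $q^jx$, dividing by $q^j$ and summing over $m\leq j\leq n-1$ yields \eqref{approx}.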
We use the strategy of \cite{TRI} to show the uniqueness of $T$. Let $T'$ be another additive mapping fulfilling \begin{eqnarray*}
\|f(x)- T'(x)\| \leq \frac{1}{l \cdot C_{d-1}^{l-1}} \widetilde{\varphi}(qx, rx, \cdots, rx, 0, 0, 0), \end{eqnarray*} for all $x\in {\mathcal A}$. We have \begin{eqnarray*}
\|T(x)- T'(x)\|&=&q^{-n}\|T(q^nx)-T'(q^nx)\|\\
&\leq& q^{-n}\|T(q^nx)-f(q^nx)\|+ q^{-n}\|f(q^nx)-T'(q^nx)\|\\ &\leq& \frac{2q^{-n}}{l \cdot C_{d-1}^{l-1}}\widetilde{\varphi}\big(q^n(qx), q^n(rx), \cdots, q^n(rx), 0, 0, 0\big)\\ &\leq& \frac{2}{l \cdot C_{d-1}^{l-1}}\sum_{j=n}^\infty q^{-j} \varphi\big(q^j(qx),q^j(rx), \cdots, q^j(rx), 0, 0, 0\big), \end{eqnarray*} for all $x\in{\mathcal A}$. Since the right hand side tends to zero as $n\to\infty$, we deduce that $T(x)=T'(x)$ for all $x\in{\mathcal A}$.
Let $\mu\in{\mathbb T}^1$. Setting $x_1= \cdots = x_d = x$ and $u=v=w=0$ in (\ref{trifapp}) we get \begin{eqnarray*}
\| d \cdot C_{d-2}^{l-2} \big(f(\mu x) -\mu f(x)\big)\| \leq \varphi(x, \cdots, x, 0, 0, 0), \end{eqnarray*} for all $x\in{\mathcal A}$. Hence \begin{eqnarray*}
q^{-n} \| d \cdot C_{d-2}^{l-2} \big(f(\mu q^n x) -\mu f(q^n x)\big)\| \leq q^{-n} \varphi(q^nx, \cdots, q^nx, 0, 0, 0), \end{eqnarray*} for all $x\in{\mathcal A}$. Since the right hand side tends to zero as $n\to\infty$, we have \begin{eqnarray*}
\lim_{n \to \infty}q^{-n}\|f(\mu q^n x) -\mu f(q^n x)\| = 0, \end{eqnarray*} for all $\mu\in{\mathbb T}^1$ and all $x\in{\mathcal A}$. Hence \begin{eqnarray*} T(\mu x) = \lim_{n\to \infty}\frac{f(q^n \mu x)}{q^n}= \lim_{n\to \infty}\frac{\mu f(q^nx)}{q^n} = \mu T(x), \end{eqnarray*} for all $\mu\in{\mathbb T}^1$ and all $x\in{\mathcal A}$.
Obviously, $T(0x)=0=0T(x)$. Next, let $\lambda \in {\mathbb C} \;\;(\lambda \neq 0)$, and let $M$ be a natural number greater than
$|\lambda|$. By an easy geometric argument, one can conclude that there exist two numbers $\mu_1, \mu_2 \in {\mathbb T}$ such that $2\frac{\lambda}{M}=\mu_1+\mu_2$. By the additivity of $T$ we get $T\big(\frac{1}{2}x\big)=\frac{1}{2}T(x)$ for all $ x\in {\mathcal A}$. Therefore \begin{eqnarray*} T(\lambda x)& = & T\big(\frac{M}{2}\cdot 2 \cdot \frac{\lambda}{M}x\big)=MT\big(\frac{1}{2}\cdot 2\cdot \frac{\lambda}{M}x\big) =\frac{M}{2}T\big(2\cdot \frac{\lambda}{M}x\big)\\ & = & \frac{M}{2}T(\mu_1x+\mu_2x) =\frac{M}{2}\big(T(\mu_1x)+T(\mu_2x)\big) \\ & = & \frac{M}{2}(\mu_1+\mu_2)T(x) =\frac{M}{2}\cdot 2\cdot \frac{\lambda}{M}T(x)\\ &=&\lambda T(x), \end{eqnarray*} for all $x \in {\mathcal A}$, so that $T$ is a ${\mathbb C}$-linear mapping.
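The geometric argument above can be made explicit (the following choice is one of many). Since $M>|\lambda|$ and $\lambda\neq 0$, one may take \begin{eqnarray*} \mu_{1,2}=\frac{\lambda}{M}\pm {\bf i}\sqrt{1-\frac{|\lambda|^2}{M^2}}\,\frac{\lambda}{|\lambda|}, \end{eqnarray*} which are unimodular, since $|\mu_{1,2}|^2 = \frac{|\lambda|^2}{M^2} + \big(1-\frac{|\lambda|^2}{M^2}\big) = 1$, and satisfy $\mu_1+\mu_2=\frac{2\lambda}{M}$.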
Set $\mu =1$ and $x_1=\cdots=x_d=0$, and replace $u, v, w$ by $q^nu, q^nv, q^nw$, respectively, in (\ref{trifapp}) to get \begin{eqnarray*}
\frac{1}{q^{3n}}\big\|d \cdot C_{d-2}^{l-2} f\big(\frac{q^{3n}}{d \cdot C_{d-2}^{l-2}}[uvw]\big)-\big[f(q^nu)f(q^nv)f(q^nw)\big]\big\|\\ \leq q^{-3n}\varphi(0, \cdots, 0, q^nu, q^nv, q^nw), \end{eqnarray*} for all $u, v, w\in {\mathcal A}$. Then by applying the continuity of the ternary product $(x,y,z)\mapsto [xyz]$ we deduce \begin{eqnarray*} T([uvw])&=& d \cdot C_{d-2}^{l-2} T\big(\frac{1}{d \cdot C_{d-2}^{l-2}}[uvw]\big)\\ &=&\lim_{n\to\infty}\frac{d \cdot C_{d-2}^{l-2}}{q^{3n}} f\big(\frac{q^{3n}}{d \cdot C_{d-2}^{l-2}}[uvw]\big)\\ &=&\lim_{n\to\infty}\big[\frac{f(q^nu)}{q^n}\frac{f(q^nv)}{q^n}\frac{f(q^nw)}{q^n}\big]\\ &=& [T(u)T(v)T(w)], \end{eqnarray*} for all $u, v, w\in {\mathcal A}$. Thus $T$ is a $C^*$-ternary homomorphism. \end{proof}
\begin{example} Let $S:{\mathcal A} \to {\mathcal A}$ be a (bounded) $C^*$-ternary homomorphism, and let $f:{\mathcal A} \to {\mathcal A}$ be defined by
$$f(x)=\left\{\begin{array}{ll}S(x) & \|x\|<1,\\ 0 & \|x\|\geq 1, \end{array}\right.$$ and $$\varphi(x_1, \cdots, x_d, u, v, w) := \delta, $$ where $\delta := d \cdot C_{d-2}^{l-2} + d \cdot C_{d-2}^{l-1} + l \cdot C_d^l + 1$. Then \begin{eqnarray*} \widetilde{\varphi}(x_1, \cdots, x_d, u, v, w)&=&\sum_{n=0}^\infty q^{-n}\cdot \delta = \frac{\delta q}{q-1}, \end{eqnarray*} and \begin{eqnarray*} D_\mu f(x_1, \cdots, x_d, u, v, w)\leq \varphi (x_1, \cdots, x_d, u, v, w), \end{eqnarray*} for all $\mu\in {\mathbb T}^1$ and all $x_1, \cdots, x_d, u, v, w\in {\mathcal A}$. Note also that $f$ is not linear. It follows from Theorem \ref{main} that there is a unique $C^*$-ternary ring homomorphism $T: {\mathcal A} \to {\mathcal A}$ such that \begin{eqnarray*}
\|f(x)-T(x)\|\leq \frac{1}{l \cdot C_{d-1}^{l-1}}\, \widetilde{\varphi}(qx, rx, \cdots, rx, 0, 0, 0) \qquad (x\in {\mathcal A}). \end{eqnarray*} Further, $T(0)=\lim_{n\to\infty}\frac{f(0)}{q^n}=0$ and for $x\neq 0$ we have \begin{eqnarray*} T(x)=\lim_{n\to\infty}\frac{f(q^nx)}{q^n} =\lim_{n\to\infty}\frac{0}{q^n}=0, \end{eqnarray*}
since for sufficiently large $n, \|q^nx\|\geq 1$. Thus $T$ is identically zero. \end{example}
\begin{corollary} Let $f:{\mathcal A} \to {\mathcal B}$ be a mapping with $f(0)= 0$ for which there exist constants $\varepsilon \geq 0$ and $p\in[0, 1)$ such that \begin{eqnarray*} D_\mu f(x_1, \cdots, x_d, u, v, w)\leq \varepsilon (\sum_{j=1}^d
\|x_j\|^p + \|u\|^p + \|v\|^p + \|w\|^p), \end{eqnarray*} for all $\mu\in{\mathbb T}^1$ and all $x_1, \cdots, x_d, u, v, w\in{\mathcal A}$. Then there exists a unique $C^*$-ternary ring homomorphism $T:{\mathcal A} \to {\mathcal B}$ such that \begin{eqnarray*}
\|f(x) - T(x)\|\leq \frac{q^{1-p}(q^p+(d-1)|r|^p)\varepsilon }{l
\cdot C_{d-1}^{l-1}(q^{1-p}-1)}\|x\|^p, \end{eqnarray*} for all $x\in{\mathcal A}$. \end{corollary}
\begin{proof} Define $\varphi(x_1, \cdots, x_d, u, v, w) = \varepsilon
(\sum_{j=1}^d \|x_j\|^p + \|u\|^p + \|v\|^p + \|w\|^p)$, and apply Theorem \ref{main}. \end{proof}
The following proposition can be applied in the case that our ternary ring is linearly generated by its `idempotents', i.e. elements $u$ with $[uuu] = u$. \begin{proposition} Let ${\mathcal A}$ be linearly spanned by a set $S\subseteq {\mathcal A}$ and let $f:{\mathcal A} \to {\mathcal B}$ be a mapping satisfying $f(q^{2n}[s_1s_2z]) = [f(q^ns_1)f(q^ns_2)f(z)]$ for all sufficiently large positive integers $n$, and all $s_1,s_2\in S, z\in{\mathcal A}$. Suppose that there exists a control function $\varphi :{\mathcal A}^{d} \to [0, \infty)$ satisfying \begin{eqnarray*} \widetilde{\varphi}(x_1, \cdots, x_d):=\sum_{j=0}^{\infty} q^{-j} \varphi(q^jx_1, \cdots, q^jx_d) < \infty \quad (x_1, \cdots, x_d \in{\mathcal A}). \end{eqnarray*} If $f(0)=0$ and \begin{eqnarray*}
\|d \cdot C_{d-2}^{l-2} f(\frac{\mu x_1+\cdots + \mu x_d}{d}) + C_{d-2}^{l-1} \sum_{j=1}^d \mu f(x_j)\\ - l \cdot \sum_{1\leq j_1< \cdots < j_l\leq d} \mu f(\frac{x_{j_1} +
\cdots + x_{j_l}}{l})\|\leq \varphi(x_1, \cdots, x_d), \end{eqnarray*} for all $\mu \in{\mathbb T}^1$ and all $x_1, \cdots, x_d \in {\mathcal A}$, then there exists a unique $C^*$-ternary ring homomorphism $T:{\mathcal A} \to {\mathcal B}$ such that \begin{eqnarray*}
\|f(x) - T(x)\|\leq \frac{1}{l \cdot C_{d-1}^{l-1}}\, \widetilde{\varphi}(qx, rx, \cdots, rx), \end{eqnarray*} for all $x\in{\mathcal A}$. \end{proposition} \begin{proof} Applying the same argument as in the proof of Theorem \ref{main}, there exists a unique linear mapping $T:{\mathcal A} \to {\mathcal B}$ given by \begin{eqnarray*} T(x) := \lim_{n\to\infty}\frac{1}{q^n} f(q^nx) \quad (x\in{\mathcal A}) \end{eqnarray*} such that \begin{eqnarray*}
\|f(x) - T(x)\|\leq \frac{1}{l \cdot C_{d-1}^{l-1}} \widetilde{\varphi}(qx, rx, \cdots, rx), \end{eqnarray*} for all $x\in{\mathcal A}$. We have \begin{eqnarray*} T([s_1s_2z]) &=& \lim_{n\to\infty}\frac{1}{q^{2n}} f([(q^ns_1)(q^ns_2)z])\\ &=& \lim_{n\to\infty}\big[\frac{f(q^ns_1)}{q^n}\frac{f(q^ns_2)}{q^n}f(z)\big]\\ &=& [T(s_1)T(s_2)f(z)]. \end{eqnarray*} By the linearity of $T$ we have $T([xyz]) = [T(x)T(y)f(z)]$ for all $x, y, z\in {\mathcal A}$. Therefore $q^nT([xyz])= T([xy(q^nz)]) = [T(x)T(y)f(q^nz)]$, and so \begin{eqnarray*} T([xyz])= \lim_{n\to\infty}\frac{1}{q^n}[T(x)T(y)f(q^nz)]=\big[T(x)T(y)\lim_{n\to\infty}\frac{f(q^nz)}{q^n}\big ]= [T(x)T(y)T(z)], \end{eqnarray*} for all $x,y,z\in{\mathcal A}$. \end{proof}
\begin{theorem} Suppose that $f:{\mathcal A} \to {\mathcal B}$ is an approximate $C^*$-ternary ring homomorphism associated to the Trif equation with ${\mathbb E}=\{1, {\bf i}\}$ and a control function $\varphi: {\mathcal A}^{d+3}\to [0, \infty)$ fulfilling (\ref{phi}). If $f(0)=0$ and for each fixed $x\in {\mathcal A}$ the mapping $t\mapsto f(tx)$ is continuous on ${\mathbb R}$, then there exists a unique $C^*$-ternary homomorphism $T:{\mathcal A} \to {\mathcal B}$ such that \begin{eqnarray*}
\|f(x)-T(x)\|\leq \frac{1}{l \cdot C_{d-1}^{l-1}}\, \widetilde{\varphi}(qx, rx, \cdots, rx, 0, 0, 0), \end{eqnarray*} for all $x\in{\mathcal A}$. \end{theorem} \begin{proof} Put $u=v=w=0$ and $\mu=1$ in (\ref{trifapp}). Using the same argument as in the proof of Theorem \ref{main} we deduce that there exists a unique additive mapping $T:{\mathcal A} \to {\mathcal B}$ given by \begin{eqnarray*} T(x)=\lim_{n\to\infty}\frac{f(q^nx)}{q^n} \quad (x\in {\mathcal A}). \end{eqnarray*} By the same reasoning as in the proof of the main theorem of \cite{RAS1}, the mapping $T$ is ${\mathbb R}$-linear.
Putting $x_1= \cdots = x_d = x$, $\mu={\bf i}$ and $u=v=w=0$ in (\ref{trifapp}) we get \begin{eqnarray*}
\|d \cdot C_{d-2}^{l-2} (f({\bf i} x) -{\bf i} f(x))\| \leq \varphi(x, \cdots, x, 0, 0, 0) \quad (x\in {\mathcal A}). \end{eqnarray*} Hence \begin{eqnarray*}
q^{-n}\|f(q^n{\bf i}x)-{\bf i}f(q^nx)\|\leq q^{-n}\varphi(q^nx, \cdots, q^nx, 0, 0, 0) \quad (x\in {\mathcal A}). \end{eqnarray*} The right hand side tends to zero as $n\to\infty$, hence \begin{eqnarray*} T({\bf i}x)=\lim_{n\to\infty}\frac{f(q^n{\bf i}x)}{q^n}=\lim_{n\to\infty}\frac{{\bf i}f(q^nx)}{q^n}={\bf i}T(x) \quad (x\in {\mathcal A}). \end{eqnarray*} For every $\lambda\in {\mathbb C}$ we can write $\lambda=\alpha_1+{\bf i}\alpha_2$ in which $\alpha_1,\alpha_2\in{\mathbb R}$. Therefore \begin{eqnarray*} T(\lambda x)&=&T(\alpha_1x+{\bf i}\alpha_2x)=\alpha_1T(x)+\alpha_2T({\bf i}x)\\ &=&\alpha_1T(x)+{\bf i}\alpha_2T(x)=(\alpha_1+{\bf i}\alpha_2)T(x)\\ &=&\lambda T(x), \end{eqnarray*} for all $x\in {\mathcal A}$. Thus $T$ is ${\mathbb C}$-linear. \end{proof}
\end{document}
\begin{document}
\subjclass[2010]{17D99} \keywords{Leibniz, triangular, nilradical, classification, Lie}
\doublespacing \title{Solvable Leibniz Algebras with Triangular Nilradical}
\begin{abstract}
A classification exists for Lie algebras whose nilradical is the triangular Lie algebra $T(n)$. We extend this result to a classification of all solvable Leibniz algebras with nilradical $T(n)$. As an example we show the complete classification of all Leibniz algebras whose nilradical is $T(4)$. \end{abstract}
\section{Introduction}\label{intro}
Leibniz algebras were defined by Loday in 1993 \cite{loday, loday2}. In recent years it has been a common theme to extend various results from Lie algebras to Leibniz algebras \cite{ao, ayupov, omirov}. Several authors have proven results on nilpotency and related concepts which can be used to help extend properties of Lie algebras to Leibniz algebras.
Specifically, variations of Engel's theorem for Leibniz algebras have been proven by different authors \cite{barnesengel, jacobsonleib} and Barnes has proven Levi's theorem for Leibniz algebras \cite{barneslevi}. Additionally, Barnes has shown that left-multiplication by any minimal ideal of a Leibniz algebra is either zero or anticommutative \cite{barnesleib}.
In an effort to classify Lie algebras, many authors place various restrictions on the nilradical \cite{cs, nw, rw, wld}. In \cite{tw}, Tremblay and Winternitz study solvable Lie algebras with triangular nilradical. It is the goal of this paper to extend these results to the Leibniz setting.
Recent work has been done on classification of certain classes of Leibniz algebras \cite{aor, chelsie-allison, heisen, clok, clok2}. In \cite{heisen}, a subset of the authors of this work found a complete classification of all Leibniz algebras whose nilradical is Heisenberg. In particular, this includes a classification of all Leibniz algebras whose nilradical is the triangular Lie algebra $T(3)$, since $T(3)$ is the three-dimensional Heisenberg algebra. For this reason our primary example will be Leibniz algebras whose nilradical is the triangular algebra $T(4)$.
\section{Preliminaries}
A Leibniz algebra, $L$, is a vector space over a field (which we will take to be $\mathbb{C}$ or $\mathbb{R}$) with a bilinear operation (which we will call multiplication) defined by $[x,y]$ which satisfies the Leibniz identity \begin{equation}\label{Jacobi} [x,[y,z]] = [[x,y],z] + [y,[x,z]] \end{equation} for all $x,y,z \in L$. In other words $L_x$, left-multiplication by $x$, is a derivation. Some authors choose to impose this property on $R_x$, right-multiplication by $x$, instead. Such an algebra is called a ``right'' Leibniz algebra, but we will consider only ``left'' Leibniz algebras (which satisfy \eqref{Jacobi}). $L$ is a Lie algebra if additionally $[x,y]=-[y,x]$.
The derived series of a Leibniz (Lie) algebra $L$ is defined by $L^{(1)}=[L,L]$, $L^{(n+1)}=[L^{(n)},L^{(n)}]$ for $n\ge 1$. $L$ is called solvable if $L^{(n)}=0$ for some $n$. The lower-central series of $L$ is defined by $L^2 = [L,L]$, $L^{n+1}=[L,L^n]$ for $n>1$. $L$ is called nilpotent if $L^n=0$ for some $n$. It should be noted that if $L$ is nilpotent, then $L$ must be solvable.
The nilradical of $L$ is defined to be the (unique) maximal nilpotent ideal of $L$, denoted by $\nr(L)$. It is a classical result that if $L$ is solvable, then $L^2 = [L,L] \subseteq \nr(L)$. From \cite{mubar}, we have that \begin{equation}\label{dimension} \dim (\nr(L)) \geq \frac{1}{2} \dim (L). \end{equation}
The triangular algebra $T(n)$ is the $\frac{1}{2}n(n-1)$-dimensional Lie algebra whose basis is the set of strictly upper-triangular matrices, $\{N_{ik} \vert 1 \leq i < k \leq n \}$ defined by multiplications \begin{equation}\label{tri} [N_{ik},N_{ab}]=\delta_{ka}N_{ib} - \delta_{bi}N_{ak}. \end{equation}
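For example (as noted in Section \ref{intro}), $T(3)$ has basis $\{N_{12}, N_{13}, N_{23}\}$, and \eqref{tri} gives the single non-trivial product \begin{equation*} [N_{12},N_{23}] = N_{13} = -[N_{23},N_{12}], \end{equation*} with all other products of basis elements equal to zero; this is exactly the three-dimensional Heisenberg algebra.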
The left-annihilator of a Leibniz algebra $L$ is the ideal $\Ann_\ell(L) = \left\{x\in L\mid [x,y]=0\ \forall y\in L\right\}$. Note that the elements $[x,x]$ and $[x,y] + [y,x]$ are in $\Ann_\ell(L)$, for all $x,y\in L$, because of \eqref{Jacobi}.
An element $x$ in a Leibniz algebra $L$ is nilpotent if both $(L_x)^n = (R_x)^n = 0$ for some $n$. In other words, for all $y$ in $L$ \begin{equation*} [x,\cdots[x,[x,y]]] = 0 = [[[y,x],x]\cdots,x]. \end{equation*}
A set of matrices $\{X^\alpha\}$ is called linearly nilindependent if no non-zero linear combination of them is nilpotent. In other words, if \begin{equation*} X = \displaystyle\sum_{\alpha=1}^f c_\alpha X^\alpha, \end{equation*} then $X^n=0$ implies that $c_\alpha=0$ for all $\alpha$. A set of elements of a Leibniz algebra $L$ is called linearly nilindependent if no non-zero linear combination of them is a nilpotent element of $L$.
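For example, in $F^{2\times 2}$ the single matrix $\left(\begin{smallmatrix} 1 & 0\\ 0 & 0 \end{smallmatrix}\right)$ is linearly nilindependent, since no non-zero scalar multiple of it is nilpotent, whereas $\left(\begin{smallmatrix} 0 & 1\\ 0 & 0 \end{smallmatrix}\right)$ is not, being itself nilpotent.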
\section{Classification}
Let $T(n)$ be the $\frac{1}{2}n(n-1)$-dimensional triangular (Lie) algebra over the field $F$ ($\mathbb{C}$ or $\mathbb{R}$) with basis $\{N_{ik} \vert 1 \leq i < k \leq n \}$ and products given by \eqref{tri}. We will extend $T(n)$ to a solvable Leibniz algebra $L$ of dimension $\frac{1}{2}n(n-1) + f$ by appending linearly nilindependent elements $\{X^1, \ldots, X^f\}$. In doing so, we will construct an indecomposable Leibniz algebra whose nilradical is $T(n)$.
We construct the vector $N = (N_{12} N_{23} \cdots N_{(n-1)n} N_{13} \cdots N_{(n-2)n} \cdots N_{1n})^T$ whose components are the basis elements of the nilradical ordered along consecutive off-diagonals ($N_{i(i+1)}$ in order, then $N_{i(i+2)}$ in order, \ldots). Then since $[L,L] \subseteq \nr(L)$, the brackets of $L$ are given by \eqref{tri} and \begin{align*}
[X^\alpha,N_{ik}] &= A^\alpha_{ik,pq} N_{pq}\\
[N_{ik},X^\alpha] &= B^\alpha_{ik,pq} N_{pq}\\
[X^\alpha,X^\beta] &= \sigma^{\alpha\beta}_{pq} N_{pq}, \end{align*} using Einstein summation notation on repeated indices (from here onward), where $1\leq \alpha, \beta \leq f$ and $A^\alpha_{ik,pq}, B^\alpha_{ik,pq}, \sigma^{\alpha \beta}_{pq} \in F$. Note that $A^\alpha, B^\alpha \in F^{r \times r}$ and $N \in T(n)^{r \times 1}$, where $r = \frac{1}{2}n(n-1)$.
To classify Leibniz algebras $L(n,f)$ we must classify the matrices $A^\alpha$ and $B^\alpha$ and the constants $\sigma^{\alpha \beta}_{pq}$. The Jacobi identities for the triples $\{X^\alpha, N_{ik}, N_{ab}\}$, $\{N_{ik}, N_{ab}, X^\alpha\}$, $\{N_{ik}, X^\alpha, N_{ab}\}$ with $1 \leq \alpha \leq f$, $1 \leq i < k \leq n$, $1 \leq a < b \leq n$ give us respectively
\begin{align} \tag{4a}\label{2.14}\delta_{ka} A^\alpha_{ib,pq} N_{pq} - \delta_{bi} A^\alpha_{ak,pq} N_{pq} + A^\alpha_{ik,bq} N_{aq} - A^\alpha_{ik,pa} N_{pb} - A^\alpha_{ab,kq} N_{iq} + A^\alpha_{ab,pi} N_{pk} &= 0\\ \tag{4b}\label{un2.14}\delta_{ka} B^\alpha_{ib,pq} N_{pq} - \delta_{bi} B^\alpha_{ak,pq} N_{pq} + B^\alpha_{ik,bq} N_{aq} - B^\alpha_{ik,pa} N_{pb} - B^\alpha_{ab,kq} N_{iq} + B^\alpha_{ab,pi} N_{pk} &= 0\\ \tag{4c}\label{2.14twist}\delta_{ka} A^\alpha_{ib,pq} N_{pq} - \delta_{bi} A^\alpha_{ak,pq} N_{pq} + A^\alpha_{ik,bq} N_{aq} - A^\alpha_{ik,pa} N_{pb} + B^\alpha_{ab,kq} N_{iq} - B^\alpha_{ab,pi} N_{pk} &= 0. \end{align} \addtocounter{equation}{1}
As a consequence of \eqref{2.14} and \eqref{2.14twist}, we also have that $$A^\alpha_{ab,pi} N_{pk} - A^\alpha_{ab,kq} N_{iq} = - (B^\alpha_{ab,pi} N_{pk} - B^\alpha_{ab,kq} N_{iq}).$$ Thus $A^\alpha_{ab,pi} = - B^\alpha_{ab,pi}$ if $p<i$ and $A^\alpha_{ab,kq} = - B^\alpha_{ab,kq}$ if $k<q$. Therefore \begin{equation}\label{2.14cor} A^\alpha_{ab,ik} = - B^\alpha_{ab,ik} \quad\forall ab,ik \text{ except } ik=1n. \end{equation}
Similarly the Jacobi identities for the triples $\{X^\alpha, X^\beta, N_{ab}\}$, $\{X^\alpha, N_{ik}, X^\beta\}$, $\{N_{ik}, X^\alpha, X^\beta\}$ with $1 \leq \alpha, \beta \leq f$ and $1 \leq i < k \leq n$ give us respectively
\begin{align} \tag{6a}\label{2.15}[A^\alpha, A^\beta]_{ik,pq} N_{pq} =& \phantom{-(}\sigma^{\alpha \beta}_{kq} N_{iq} - \sigma^{\alpha \beta}_{pi} N_{pk}\\ \tag{6b}\label{2.15twist}[A^\alpha, B^\beta]_{ik,pq} N_{pq} =& - (\sigma^{\alpha \beta}_{kq} N_{iq} - \sigma^{\alpha \beta}_{pi} N_{pk})\\ \tag{6c}\label{un2.15}(B^\beta A^\alpha + B^\alpha B^\beta)_{ik,pq} N_{pq} =& \phantom{-(}\sigma^{\alpha \beta}_{kq} N_{iq} - \sigma^{\alpha \beta}_{pi} N_{pk}. \end{align} \addtocounter{equation}{1} Unlike the Lie case, these give nontrivial relations even for $f=1$, and for $\alpha=\beta$ when $f>1$.
The Jacobi identity for the triple $\{X^\alpha, X^\beta, X^\gamma\}$ with $1 \leq \alpha, \beta, \gamma \leq f$ gives us \begin{equation}\label{2.16} \sigma^{\beta \gamma}_{pq} A^\alpha_{pq,ik} - \sigma^{\alpha \beta}_{pq} B^\gamma_{pq,ik} - \sigma^{\alpha \gamma}_{pq} A^\beta_{pq,ik} = 0. \end{equation} Again, we do not require $\alpha, \beta, \gamma$ to be distinct, which in particular gives nontrivial relations even when $f = 1$.
In order to simplify the matrices $A^\alpha$ and $B^\alpha$ and the constants $\sigma^{\alpha \beta}_{pq}$ we will make use of several transformations which leave the commutation relations \eqref{tri} invariant. Namely \begin{itemize} \item Redefining the elements of the extension: \begin{equation}\label{2.17} \begin{split} &\hspace{23pt}X^\alpha \longrightarrow X^\alpha + \mu^\alpha_{pq} N_{pq}, \quad \mu^\alpha_{pq} \in F \\ \Rightarrow &\begin{cases} A^\alpha_{ik,ab} \longrightarrow A^\alpha_{ik,ab} + \delta_{kb}\mu^\alpha_{ai} - \delta_{ia}\mu^\alpha_{kb}\\ B^\alpha_{ik,ab} \longrightarrow B^\alpha_{ik,ab} - \delta_{kb}\mu^\alpha_{ai} + \delta_{ia}\mu^\alpha_{kb}. \end{cases} \end{split} \end{equation}
\item Changing the basis of $\nr(L)$: \begin{equation}\label{2.18} \begin{split} &\hspace{17pt}N \longrightarrow GN, \quad G \in GL(r,F) \\ \Rightarrow &\begin{cases} A^\alpha \longrightarrow GA^\alpha G^{-1}\\ B^\alpha \longrightarrow GB^\alpha G^{-1}. \end{cases} \end{split} \end{equation}
\item Taking a linear combination of the elements $X^\alpha$. \end{itemize} The matrix $G$ must satisfy certain restrictions discussed later in order to preserve the commutation relations \eqref{tri} of $\nr(L)$.
Note that $N_{1n}$ is not used in \eqref{2.17} since it commutes with all the elements in $\nr(L)$. Since \eqref{2.16} gives relations between the matrices $A^\alpha$, $B^\alpha$ and the constants $\sigma^{\alpha \beta}_{pq}$, the unused constant $\mu^\alpha_{1n}$ can be used to scale the constants $\sigma^{\alpha \beta}_{pq}$ when $f \geq 2$: \begin{equation}\label{2.19} \begin{split} X^\alpha &\longrightarrow X^\alpha + \mu^\alpha_{1n} N_{1n}, \quad \mu^\alpha_{1n} \in F \\ \Rightarrow \sigma^{\alpha \beta}_{pq} &\longrightarrow \sigma^{\alpha \beta}_{pq} + \mu^\beta_{1n}A^\alpha_{1n,pq} + \mu^\alpha_{1n}B^\beta_{1n,pq}. \end{split} \end{equation} In this transformation $A^\alpha$ is invariant, so we will be able to simplify some constants $\sigma^{\alpha \beta}_{pq}$.
\section{Extensions of $T(4)$}
In this paper we will focus on triangular algebras $T(n)$ with $n\geq 4$ because: \begin{itemize} \item $T(2)$ is a one-dimensional algebra (hence by \eqref{dimension} $L$ has dimension at most 2), and the Jacobi identity gives that the only family of solvable non-Lie Leibniz algebras with one-dimensional nilradical is $L(c)= \langle a,b \rangle$ with $[a,a]=[a,b]=0$, $[b,a]=ca$, $[b,b]=a$, where $0 \neq c \in F$. \item $T(3)$ is a Heisenberg Lie algebra, and Leibniz algebras with Heisenberg nilradical were classified in \cite{heisen}. \end{itemize}
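The family $L(c)$ can be checked directly from its structure constants; a computational sketch verifying the Leibniz identity \eqref{Jacobi} on all basis triples, at the sample value $c=2$:

```python
import itertools
import numpy as np

def leibniz_holds(C):
    """Check [x,[y,z]] = [[x,y],z] + [y,[x,z]] on all basis triples,
    where C[i,j,k] are the structure constants [e_i, e_j] = sum_k C[i,j,k] e_k."""
    br = lambda x, y: np.einsum('i,j,ijk->k', x, y, C)
    e = np.eye(C.shape[0])
    return all(np.allclose(br(x, br(y, z)), br(br(x, y), z) + br(y, br(x, z)))
               for x, y, z in itertools.product(e, e, e))

# L(c) = <a, b> with [b,a] = c a, [b,b] = a (basis order: a, b).
c = 2.0
C = np.zeros((2, 2, 2))
C[1, 0, 0] = c     # [b, a] = c a
C[1, 1, 0] = 1.0   # [b, b] = a
is_leibniz = leibniz_holds(C)
is_lie = np.allclose(C[1, 1], 0)   # Lie would require [b,b] = 0
print(is_leibniz, is_lie)  # True False
```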
Now we will consider the case when $n = 4$. In particular, $N = (N_{12} N_{23} N_{34} N_{13} N_{24} N_{14})^T$ and $r = 6$.
We can proceed by considering the relations in \eqref{2.14} and \eqref{un2.14} for $1 \leq i < k \leq 4$, $1 \leq a < b \leq 4$, $k \neq a$, $b \neq i$
\begin{align} \tag{11a}\label{3.3}A^\alpha_{ik,bq} N_{aq} - A^\alpha_{ik,pa} N_{pb} - A^\alpha_{ab,kq} N_{iq} + A^\alpha_{ab,pi} N_{pk} &= 0\\ \tag{11b}\label{un3.3}B^\alpha_{ik,bq} N_{aq} - B^\alpha_{ik,pa} N_{pb} - B^\alpha_{ab,kq} N_{iq} + B^\alpha_{ab,pi} N_{pk} &= 0.
\end{align} \addtocounter{equation}{1} Similarly for $1 \leq i < k = a < b \leq 4$, we obtain
\begin{align} \tag{12a}\label{3.2}A^\alpha_{ib,pq} N_{pq} + A^\alpha_{ik,bq} N_{kq} - A^\alpha_{ik,pk} N_{pb} - A^\alpha_{kb,kq} N_{iq} + A^\alpha_{kb,pi} N_{pk} &= 0\\ \tag{12b}\label{un3.2}B^\alpha_{ib,pq} N_{pq} + B^\alpha_{ik,bq} N_{kq} - B^\alpha_{ik,pk} N_{pb} - B^\alpha_{kb,kq} N_{iq} + B^\alpha_{kb,pi} N_{pk} &= 0.
\end{align} \addtocounter{equation}{1}
Using the linear independence of the $N_{ik}$ with equation \eqref{3.3}, we can obtain relationships among the entries of the matrices $A^\alpha$, summarized in the matrix below. For example, letting $ik = 12$ and $ab = 34$, the coefficient of $N_{14}$ gives $A^\alpha_{12,13} + A^\alpha_{34,24} = 0$, the coefficient of $N_{13}$ gives $A^\alpha_{34,23} = 0$, and the coefficient of $N_{24}$ gives $A^\alpha_{12,23} = 0$. Using equation \eqref{un3.3}, we obtain the same relationships among the entries of $B^\alpha$.
\begin{align*} A^\alpha =& \begin{pmatrix} *&0&A^\alpha_{12,34}&A^\alpha_{12,13}&*&*\\ 0&*&0&*&*&*\\ A^\alpha_{34,12}&0&*&*&-(A^\alpha_{12,13})&*\\ 0&0&0&*&(A^\alpha_{12,34})&*\\ 0&0&0&(A^\alpha_{34,12})&*&*\\ 0&0&0&0&0&* \end{pmatrix} \end{align*}
Applying \eqref{3.2} and \eqref{un3.2} in the same way, $A^\alpha$ becomes:
\begin{align*} A^\alpha =& \begin{pmatrix} A^\alpha_{12,12}&0&0&A^\alpha_{12,13}&*&*\\ &A^\alpha_{23,23}&0&A^\alpha_{23,13}&A^\alpha_{23,24}&*\\ &&A^\alpha_{34,34}&*&-(A^\alpha_{12,13})&*\\ &&&A^\alpha_{12,12} + A^\alpha_{23,23}&0&(A^\alpha_{23,24})\\ &&&&A^\alpha_{23,23} + A^\alpha_{34,34}&(A^\alpha_{23,13})\\ &&&&&A^\alpha_{12,12} + A^\alpha_{23,23} + A^\alpha_{34,34} \end{pmatrix} \end{align*} As before, $B^\alpha$ has the same form as above. This, together with \eqref{2.14cor}, implies that $B^\alpha_{13,14} = B^\alpha_{23,24} = -A^\alpha_{23,24} = -A^\alpha_{13,14}$. Similarly, $B^\alpha_{24,14} = -A^\alpha_{24,14}$ and $B^\alpha_{14,14} = -A^\alpha_{14,14}$.
We can further simplify matrix $A^\alpha$ by performing the transformation specified in \eqref{2.17}. Choosing $\mu^\alpha_{12} = -A^\alpha_{23,13}$, $\mu^\alpha_{23} = A^\alpha_{12,13}$, $\mu^\alpha_{34} = A^\alpha_{23,24}$, $\mu^\alpha_{13} = -A^\alpha_{34,14}$, and $\mu^\alpha_{24} = -A^\alpha_{12,14}$ leads to $$A^\alpha_{23,13} = A^\alpha_{12,13} = A^\alpha_{23,24} = A^\alpha_{34,14} = A^\alpha_{12,14} = 0.$$
This gives us the matrices
\begin{align*} A^\alpha =& \begin{pmatrix} A^\alpha_{12,12}&0&0&0&A^\alpha_{12,24}&0\\ &A^\alpha_{23,23}&0&0&0&A^\alpha_{23,14}\\ &&A^\alpha_{34,34}&A^\alpha_{34,13}&0&0\\ &&&A^\alpha_{13,13}&0&0\\ &&&&A^\alpha_{24,24}&0\\ \phantom{-A^\alpha_{12,12}}&\phantom{-A^\alpha_{23,23}}&\phantom{-A^\alpha_{34,34}}&\phantom{-A^\alpha_{13,13}}&\phantom{-A^\alpha_{24,24}}&\,\,A^\alpha_{14,14}\,\, \end{pmatrix}\\ B^\alpha =& \begin{pmatrix} -A^\alpha_{12,12}&0&0&0&-A^\alpha_{12,24}&B^\alpha_{12,14}\\ &-A^\alpha_{23,23}&0&0&0&B^\alpha_{23,14}\\ &&-A^\alpha_{34,34}&-A^\alpha_{34,13}&0&B^\alpha_{34,14}\\ &&&-A^\alpha_{13,13}&0&0\\ &&&&-A^\alpha_{24,24}&0\\ &&&&&-A^\alpha_{14,14} \end{pmatrix} \end{align*}
where the diagonal entries satisfy $$A^\alpha_{ik,ik} = \sum^{k-1}_{p = i}A^\alpha_{p(p+1),p(p+1)}.$$
Note that $A^\alpha_{12,12}$, $A^\alpha_{23,23}$, and $A^\alpha_{34,34}$ cannot simultaneously equal 0, otherwise the nilradical would no longer be $T(4)$. The nilindependence among the $A^\alpha$ implies that $T(4)$ can have at most a three-dimensional extension, since there are three parameters on the diagonal.
The form of the matrices $A^\alpha$ implies that the only nonzero elements of $[A^\alpha, A^\beta]$ are $$[A^\alpha, A^\beta]_{12,24},\quad [A^\alpha, A^\beta]_{23,14},\quad [A^\alpha, A^\beta]_{34,13}.$$ The linear independence of the $N_{ik}$ with equation \eqref{2.15} yields \begin{align} \label{Acommute}[A^\alpha,A^\beta] =& \ 0\\ \label{3.12}[X^\alpha,X^\beta] =& \ \sigma^{\alpha \beta}_{14} N_{14}. \end{align} For example, letting $ik = 12$ in equation \eqref{2.15}, the coefficient of $N_{24}$ implies that $[A^\alpha,A^\beta]_{12,24}=0$ and the coefficient of $N_{14}$ implies that $\sigma^{\alpha \beta}_{24} = [A^\alpha,A^\beta]_{12,14} = 0$. Henceforth we will abbreviate $\sigma^{\alpha \beta}_{14} = \sigma^{\alpha \beta}$, since all other $\sigma^{\alpha \beta}_{pq} = 0$. Since the $A^\alpha$ commute by \eqref{Acommute}, \eqref{2.15} and \eqref{2.15twist} imply \begin{equation}\label{ABcommute} [A^\alpha,B^\beta]=0. \end{equation}
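The stated vanishing pattern of $[A^\alpha,A^\beta]$ can be confirmed numerically; a sketch using two random matrices of the displayed form, with diagonals obeying the sum rule:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_A():
    # Basis order N12, N23, N34, N13, N24, N14; diagonal entries obey
    # A_{ik,ik} = A_{i(i+1),i(i+1)} + ... + A_{(k-1)k,(k-1)k}.
    d12, d23, d34, p, q, r = rng.standard_normal(6)
    M = np.diag([d12, d23, d34, d12 + d23, d23 + d34, d12 + d23 + d34])
    M[0, 4] = p  # A_{12,24}
    M[1, 5] = q  # A_{23,14}
    M[2, 3] = r  # A_{34,13}
    return M

Aa, Ab = random_A(), random_A()
K = Aa @ Ab - Ab @ Aa
nonzero_positions = {(i, j) for i in range(6) for j in range(6)
                     if abs(K[i, j]) > 1e-12}
# The commutator is supported only at (12,24), (23,14), (34,13).
print(nonzero_positions <= {(0, 4), (1, 5), (2, 3)})  # True
```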
Considering \eqref{ABcommute} componentwise, we find that $B^\beta_{12,14}(A^\alpha_{14,14}-A^\alpha_{12,12}) = B^\beta_{34,14}(A^\alpha_{14,14}-A^\alpha_{34,34}) = (A^\beta_{23,14} + B^\beta_{23,14})(A^\alpha_{14,14}-A^\alpha_{23,23}) = 0$.
Furthermore, by \eqref{ABcommute} and \eqref{un2.15}, $0=B^\alpha A^\beta+B^\beta B^\alpha=(A^\beta+B^\beta)B^\alpha$. Componentwise, this tells us that \begin{equation}\label{LindseyLemma}
0 = B^\beta_{12,14}A^\alpha_{14,14} = B^\beta_{34,14}A^\alpha_{14,14} = (A^\beta_{23,14} + B^\beta_{23,14})A^\alpha_{14,14}. \end{equation}
In particular, if $B^\beta$ has a nontrivial off-diagonal entry, then \begin{equation}\label{offdiag} \begin{cases} B^\beta_{12,14}\ne 0 &\Rightarrow A^\alpha_{12,12}=A^\alpha_{14,14}=0,\ \forall\alpha \\ B^\beta_{34,14}\ne 0 &\Rightarrow A^\alpha_{34,34}=A^\alpha_{14,14}=0,\ \forall\alpha \\ B^\beta_{23,14}\ne -A^\beta_{23,14} &\Rightarrow A^\alpha_{23,23}=A^\alpha_{14,14}=0,\ \forall\alpha. \end{cases} \end{equation}
The form of the matrices $A^\alpha$ and $B^\alpha$ imply that \eqref{2.16} becomes \begin{equation}\label{3.13} \sigma^{\alpha \beta}A^\gamma_{14,14}-\sigma^{\alpha \gamma}A^\beta_{14,14}+\sigma^{\beta \gamma}A^\alpha_{14,14}=0. \end{equation} By adding equations of form \eqref{3.13}, we get \begin{equation}\label{Lieish} (\sigma^{\beta \gamma} + \sigma^{\gamma \beta})A^\alpha_{14,14} = 0. \end{equation} For example $(\sigma^{12}+\sigma^{21})A^\alpha_{14,14}=0$ is obtained by adding \eqref{3.13} with $\beta=1, \gamma=2$ to \eqref{3.13} with $\beta=2, \gamma=1$.
On a related note, since $[X^\beta,X^\beta] \in \Ann_\ell(L)$, we have $0=[[X^\beta,X^\beta],X^\alpha]=[\sigma^{\beta \beta}N_{14},X^\alpha]= - \sigma^{\beta \beta} A^\alpha_{14,14} N_{14}$. Thus, \begin{equation}\label{Lieish2} \sigma^{\beta \beta} A^\alpha_{14,14} = 0. \end{equation}
As a consequence of \eqref{2.14cor}, \eqref{3.12}, \eqref{LindseyLemma}, \eqref{Lieish}, and \eqref{Lieish2}, we have the following result. \begin{lemma}\label{megalem} If $A^\alpha_{14,14}\ne0$ for some $\alpha\in\{1,\ldots,f\}$, then the Leibniz algebra is a Lie algebra. \end{lemma}
The form of the matrices $A^\alpha$ and $B^\alpha$ imply that the transformation \eqref{2.19} becomes \begin{equation}\label{3.14} \sigma^{\alpha \beta} \longrightarrow \sigma^{\alpha \beta} + \mu^\beta_{14}A^\alpha_{14,14} - \mu^\alpha_{14}A^\beta_{14,14}. \end{equation}
For $f=2$, suppose $A^\alpha_{14,14} \neq 0$ for some $\alpha$, and without loss of generality assume that $\alpha=1$. Then choosing $\mu^1_{14}=0$ and $\mu^2_{14}=-\frac{\sigma^{12}}{A^1_{14,14}}$, the transformation \eqref{3.14} makes $\sigma^{12}=0$. For $f=3$, suppose $A^\alpha_{14,14} \neq 0$ for some $\alpha$, and without loss of generality assume that $\alpha=1$. Then choosing $\mu^1_{14}=0$ and $\mu^\beta_{14}=-\frac{\sigma^{1\beta}}{A^1_{14,14}}$ for $\beta=2,3$, the transformation \eqref{3.14} makes $\sigma^{1\beta}=0$. By \eqref{3.13}, we also have $\sigma^{23}=0$. Combining these results for $f=2$, $3$ and employing \eqref{Lieish2} for $f=1$, we have: \begin{equation}\label{3.15} [X^\alpha,X^\beta] = \begin{cases} \sigma^{\alpha \beta}N_{14} & \text{if }A^1_{14,14}=\cdots=A^f_{14,14}=0 \\ 0 & \text{otherwise.} \end{cases} \end{equation}
Now we will utilize matrices $G$ to simplify the structure of matrices $A^\alpha$ and $B^\alpha$. Perform transformation \eqref{2.18}, given by $N\longrightarrow G_1 N$, with $$G_1 = \begin{pmatrix} 1&0&0&0&g_1&g_0\\
&1&0&0&0&g_2 \\
& &1&g_3&0&g_4\\
& & &1&0&0\\
& & & &1&0\\
& & & & &1 \end{pmatrix}.$$ Observe that $G_1$ leaves the commutation relations \eqref{tri} invariant. It does, however, transform the matrices $A^\alpha$ and $B^\alpha$ by $A^\alpha\longrightarrow G_1 A^\alpha G_1^{-1}$ and $B^\alpha\longrightarrow G_1 B^\alpha G_1^{-1}$, respectively. In particular, $G_1$ transforms the following components \begin{eqnarray*} \begin{cases}
A^\alpha_{12,24} \longrightarrow A^\alpha_{12,24} + g_1(A^\alpha_{24,24}-A^\alpha_{12,12}) \\
A^\alpha_{23,14} \longrightarrow A^\alpha_{23,14} + g_2(A^\alpha_{14,14}-A^\alpha_{23,23}) \\
A^\alpha_{34,13} \longrightarrow A^\alpha_{34,13} + g_3(A^\alpha_{13,13}-A^\alpha_{34,34}) \end{cases}\\ \begin{cases}
B^\alpha_{12,14} \longrightarrow B^\alpha_{12,14} - g_0(A^\alpha_{14,14}-A^\alpha_{12,12}) \\
B^\alpha_{23,14} \longrightarrow B^\alpha_{23,14} - g_2(A^\alpha_{14,14}-A^\alpha_{23,23}) \\
B^\alpha_{34,14} \longrightarrow B^\alpha_{34,14} - g_4(A^\alpha_{14,14}-A^\alpha_{34,34}). \end{cases} \end{eqnarray*} We use the matrix $G_1$ to eliminate some entries in $A^\alpha$ and $B^\alpha$. However, if $B^\alpha_{12,14}$ or $B^\alpha_{34,14}$ is not zero, then by \eqref{offdiag}, $G_1$ leaves that entry of $B^\alpha$ invariant. Hence, we can use $g_1$, $g_2$, and $g_3$ to eliminate at most 3 off-diagonal elements.
Note: In this step, we consider only transformations of the form $G_1=\begin{pmatrix} I&*\\0&I\end{pmatrix}$. Any other transformation which leaves \eqref{tri} and the form of $A^\alpha$ invariant, but eliminates $B^\alpha_{12,14}$ or $B^\alpha_{34,14}$, would provide an isomorphism from a non-Lie Leibniz algebra to a Lie algebra. It is, however, possible to scale such entries, which we will consider in the next case.
Let the diagonal matrix $G_2$ be $$G_2=\begin{pmatrix} g_{12}&&&&&\\ &g_{23}&&&&\\ &&g_{34}&&&\\ &&&g_{12}g_{23}&&\\ &&&&g_{23}g_{34}&\\ &&&&&g_{12}g_{23}g_{34} \end{pmatrix},\quad g_{ik}\in F\backslash\{0\}.$$ Note that $G_2$ preserves commutation relations \eqref{tri}. Our transformation of $\nr(L)$ will be defined by $G=G_2 G_1$. Observe that $G_2$ transforms $A^\alpha$ and $B^\alpha$ by $A^\alpha_{ik,ab}\longrightarrow \dfrac{g_{ik}}{g_{ab}}A^\alpha_{ik,ab}$ and $B^\alpha_{ik,ab}\longrightarrow \dfrac{g_{ik}}{g_{ab}}B^\alpha_{ik,ab}$, respectively, where $g_{ik}=(G_2)_{ik}=\prod\limits^{k-1}_{j=i}g_{j(j+1)}$. Hence, we can scale up to three nonzero off-diagonal elements to 1. In the case of Lie algebras, it may be necessary to scale to $\pm 1$ over $F=\mathbb{R}$. This issue does not arise in Leibniz algebras of non-Lie type, because we have greater restrictions on the number of nonzero entries.
\subsection{Leibniz algebras $L(4,1)$} The Lie cases for $\nr(L)=T(4)$, with $f=1$, have been previously classified in \cite{tw}, so we will focus on the Leibniz algebras of non-Lie type. We know that all such algebras will have $A^1_{14,14}=0$ by Lemma \ref{megalem}, where $A=A^1$ will be of the form found in \cite{tw}. Since $A$ is not nilpotent and $A_{14,14}=0$, we know that there is at most 1 nonzero off-diagonal entry in $A$. Altogether, there are 10 classes of Leibniz algebras of non-Lie type. Of these, there are 2 two-dimensional families, and 8 one-dimensional families. The matrices $A$ and $B$ for these can be found in Table \ref{L41} in the appendix.
\subsection{Leibniz algebras $L(4,2)$} Again, all Lie cases for $\nr(L)=T(4)$, with $f=2$, were classified in \cite{tw}. So, focusing on Leibniz algebras of non-Lie type, we again require that $A^1_{14,14}=A^2_{14,14}=0$ by Lemma \ref{megalem}. As a result, if $B^\alpha_{ik,14}\ne-A^\alpha_{ik,14}$, for any $\alpha$ or pair $ik$, then $A^\beta_{ik,ik}=A^\beta_{14,14}=0$ $\forall \beta$, which makes it impossible to have two linearly nilindependent matrices $A^1$ and $A^2$. Therefore, there is 1 four-dimensional family of Leibniz algebras of non-Lie type, and their matrices $A^1=-B^1$ and $A^2=-B^2$ can be found in Table \ref{L42} in the appendix.
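As a sanity check (a computational sketch, not part of the classification proof), this family can be verified directly from its structure constants; here we sample $\sigma^{11}=1$ with the remaining $\sigma$'s zero, which satisfies the stated non-vanishing condition:

```python
import itertools
import numpy as np

# Basis order: N12, N23, N34, N13, N24, N14, X1, X2 (indices 0..7).
pairs = [(1, 2), (2, 3), (3, 4), (1, 3), (2, 4), (1, 4)]
idx = {p: i for i, p in enumerate(pairs)}
dim = 8

# Diagonal matrices A^1, A^2 of the family, with B^alpha = -A^alpha;
# sample point sigma^{11} = 1, all other sigma's zero.
A = [np.diag([1.0, 0.0, -1.0, 1.0, -1.0, 0.0]),
     np.diag([0.0, 1.0, -1.0, 1.0, 0.0, 0.0])]
sigma = np.zeros((2, 2))
sigma[0, 0] = 1.0

C = np.zeros((dim, dim, dim))   # [e_i, e_j] = sum_k C[i,j,k] e_k
for (i, k), (a, b) in itertools.product(pairs, pairs):   # triangular brackets
    if k == a:
        C[idx[(i, k)], idx[(a, b)], idx[(i, b)]] += 1.0
    if b == i:
        C[idx[(i, k)], idx[(a, b)], idx[(a, k)]] -= 1.0
for al in range(2):
    for p in pairs:
        C[6 + al, idx[p], :6] = A[al][idx[p]]            # [X, N] = A N
        C[idx[p], 6 + al, :6] = -A[al][idx[p]]           # [N, X] = -A N
for al in range(2):
    for be in range(2):
        C[6 + al, 6 + be, idx[(1, 4)]] = sigma[al, be]   # [X^a, X^b] = sigma N14

br = lambda x, y: np.einsum('i,j,ijk->k', x, y, C)
e = np.eye(dim)
leibniz = all(np.allclose(br(x, br(y, z)), br(br(x, y), z) + br(y, br(x, z)))
              for x, y, z in itertools.product(e, e, e))
lie_check = np.allclose(br(e[6], e[6]), 0)
print(leibniz, lie_check)  # True False: [X^1, X^1] = N_14, so non-Lie
```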
\subsection{Leibniz algebras $L(4,3)$} There is only one Lie algebra that is a three-dimensional extension of $T(4)$, again given in \cite{tw}. Since we cannot have three linearly nilindependent matrices of form $A^\alpha$ with $A^1_{14,14} = A^2_{14,14} = A^3_{14,14} = 0$, it is impossible to have a three-dimensional extension of $T(4)$ that is of non-Lie type, by Lemma \ref{megalem}.
\begin{theorem}\label{L4f} Every Leibniz algebra $L(4,f)$ is either of Lie type, or is isomorphic to precisely one algebra represented in Table \ref{L41} or Table \ref{L42}. \end{theorem}
\section{Solvable Leibniz algebras $L(n,f)$ for $n\ge4$}
We are now going to consider Leibniz algebras $L$ with $\nr(L)=T(n)$. Recall from \eqref{2.14cor}, $A^\alpha_{ik,ab} = -B^\alpha_{ik,ab}$ for all $ab\ne 1n$. We have the following result: \begin{lemma}\label{Astructure} Matrices $A^\alpha=(A^\alpha_{ik,ab})$ and $B^\alpha=(B^\alpha_{ik,ab})$, $1\le i<k\le n$, $1\le a < b\le n$ have the following properties. \begin{enumerate}
\item[i.] $A^\alpha$ and $B^\alpha$ are upper-triangular.
\item[ii.] The only off-diagonal elements of $A^\alpha$ and $B^\alpha$ which may not be eliminated by an appropriate transformation on $X^\alpha$ are:
\begin{align*}\label{lem1.2A}
A^\alpha_{12,2n},\quad A^\alpha_{j(j+1),1n} \ (2\le j\le n-2),\quad A^\alpha_{(n-1)n,1(n-1)}, \\
B^\alpha_{12,2n},\quad B^\alpha_{j(j+1),1n} \ (1\le j\le n-1),\quad B^\alpha_{(n-1)n,1(n-1)}.
\end{align*}
\item[iii.] The diagonal elements $A^\alpha_{i(i+1),i(i+1)}$ and $B^\alpha_{i(i+1),i(i+1)}$, $1\le i\le n-1$, are free. The remaining diagonal elements of $A^\alpha$ and $B^\alpha$ satisfy
\begin{equation*}\label{lem1.3}
A^\alpha_{ik,ik} = \sum\limits^{k-1}_{j=i}A^\alpha_{j(j+1),j(j+1)},\quad B^\alpha_{ik,ik} = \sum\limits^{k-1}_{j=i}B^\alpha_{j(j+1),j(j+1)},\quad k>i+1.
\end{equation*} \end{enumerate} \end{lemma} \begin{proof}
The form of the matrices $A^\alpha$ given in Lemma \ref{Astructure} follows from \eqref{2.14} by induction on $n$, as shown in \cite{tw}. Similarly, properties $i.$ and $iii.$ follow for $B^\alpha$ from \eqref{un2.14}. Property $ii.$ for matrices $B^\alpha$ follows from \eqref{2.14cor}. \end{proof} As a consequence of property $iii.$ and \eqref{2.14cor}, we have that $A^\alpha_{1n,1n}=-B^\alpha_{1n,1n}$.
Lemma \ref{Astructure} asserts that $A^\alpha$ has $n-1$ free entries on the diagonal and nonzero off-diagonal entries possible in only $n-1$ locations, represented by $*$ in the matrix below. The form of $B^\alpha$ is the same, save for nonzero off-diagonal entries possible in two additional locations, represented by $b_1$ and $b_2$ in the matrix below.
$
\left(
\begin{array}{ccccc|ccccc}
* &&&& & && &*&b_1 \\ & * &&& & && &&* \\ && \ddots && & && &&\vdots \\ &&& * & & && && * \\ &&&& * & && *&&b_2 \\ \hline &&&& & *&&&&\\ &&&& & & \ddots &&& \\ &&&& & &&*&&\\ &&&& & &&&*&\phantom{\ddots}\\ \phantom{\ddots}&\phantom{\ddots}&\phantom{\ddots}&\phantom{\ddots}&\phantom{\ddots}&\phantom{\ddots}&\phantom{\ddots}&\phantom{\ddots}&\phantom{\ddots}&* \end{array} \right)
$
\begin{lemma} The maximum degree of an extension of $T(n)$ is $f=n-1$. \end{lemma} \begin{proof}
The proof follows from the fact that the $A^\alpha$ are nilindependent and that we have, at most, $n-1$ parameters along the diagonal. \end{proof}
The form of the matrices $A^\alpha$ implies that the only nonzero elements of $[A^\alpha, A^\beta]$ are $$[A^\alpha,A^\beta]_{12,2n},\quad [A^\alpha,A^\beta]_{j(j+1),1n} \ (2\le j\le n-2),\quad [A^\alpha,A^\beta]_{(n-1)n,1(n-1)}.$$ As before, the linear independence of the $N_{ik}$ with equations \eqref{2.15} and \eqref{2.15twist}, yields \begin{align} \label{Acommute2}[A^\alpha,A^\beta] =& \ 0\\ \label{ABcommute2}[A^\alpha,B^\beta]=& \ 0\\
\nonumber [X^\alpha,X^\beta] =& \ \sigma^{\alpha \beta}_{1n} N_{1n}. \end{align}
From Lemma \ref{Astructure}, \eqref{2.16} becomes \begin{equation*}\label{3.13b} \sigma^{\alpha \beta}A^\gamma_{1n,1n}-\sigma^{\alpha \gamma}A^\beta_{1n,1n}+\sigma^{\beta \gamma}A^\alpha_{1n,1n}=0. \end{equation*}
\begin{lemma}\label{commutelem} Matrices $A^\alpha$ and $B^\alpha$ can be transformed to a canonical form satisfying \begin{align*} [A^\alpha, A^\beta] &= 0 \\ [A^\alpha, B^\beta] &= 0 \\ [X^\alpha,X^\beta] &=
\begin{cases}
\sigma^{\alpha \beta}N_{1n} & \text{if }A^1_{1n,1n}=\cdots=A^f_{1n,1n}=0 \\
0 & \text{otherwise.}
\end{cases} \end{align*} \end{lemma} \begin{proof} The first two identities of this lemma have already been shown. The argument for the third identity is the same as \eqref{3.15}, using $(\sigma^{\beta \gamma} + \sigma^{\gamma \beta})A^\alpha_{1n,1n} = 0$ and $\sigma^{\beta \beta} A^\alpha_{1n,1n} = 0$. \end{proof}
\subsection{Change of basis in $\nr(L(n,f))$}
As before, we perform the transformation \eqref{2.18} on $N$ by use of the matrix $G_1$, with $G_1$ defined to be all zeroes except $(G_1)_{ik,ik} = 1$, and $(G_1)_{12,1n}, (G_1)_{(n-1)n,1n}, (G_1)_{ab,ik} \in F$ where $\{ab,ik\}=\{12,2n\}, \{(n-1)n,1(n-1)\}, \{j(j+1),1n\}$ for $2 \leq j \leq n-2$. In other words, the off-diagonal entries of $G_1$ are allowed to be nonzero precisely in the positions where $B^\alpha$ may have nonzero off-diagonal entries. Note that the transformation given by $G_1$ preserves the commutation relations \eqref{tri}. Matrices $A^\alpha$ and $B^\alpha$ are transformed by conjugation with $G_1$, leaving the diagonal elements invariant and giving \begin{eqnarray*}
\begin{split} A^\alpha_{ik,ab} &\longrightarrow A^\alpha_{ik,ab} + (G_1)_{ik,ab}(A^\alpha_{ab,ab}-A^\alpha_{ik,ik}) \\ B^\alpha_{ik,ab} &\longrightarrow B^\alpha_{ik,ab} - (G_1)_{ik,ab}(A^\alpha_{ab,ab}-A^\alpha_{ik,ik}). \end{split} \end{eqnarray*}
By \eqref{ABcommute2}, we have that $0=[A^\alpha,B^\beta]_{12,1n} = B^\beta_{12,1n}(A^\alpha_{12,12}-A^\alpha_{1n,1n})$ and $0=[A^\alpha,B^\beta]_{(n-1)n,1n} = B^\beta_{(n-1)n,1n}(A^\alpha_{(n-1)n,(n-1)n}-A^\alpha_{1n,1n})$. Consequently, $G_1$ cannot eliminate the entries $B^\beta_{12,1n}$ and $B^\beta_{(n-1)n,1n}$.
\begin{lemma}\label{lemma4} Matrices $A^\alpha$ and $B^\alpha$ will have a nonzero off-diagonal entry $A^\alpha_{ik,ab}$ or $B^\alpha_{ik,ab}$, respectively, only if $$A^\beta_{ik,ik}=A^\beta_{ab,ab},\quad \forall\beta=1,\ldots,f.$$ \end{lemma} \begin{proof} For $A^\alpha$, the proof of Lemma \ref{lemma4} follows from \eqref{Acommute2}, as shown in \cite{tw}. Similarly for $B^\alpha$, the proof of Lemma \ref{lemma4} follows from considering \eqref{ABcommute2} componentwise. Namely, we find that $0=[A^\beta,B^\alpha]_{12,1n}=B^\alpha_{12,1n}(A^\beta_{1n,1n}-A^\beta_{12,12})$, and similar identities for $B^\alpha_{12,2n}$ and $B^\alpha_{(n-1)n,1(n-1)}$.
The commutation relation for $[A^\alpha,B^\beta]_{j(j+1),1n}$ gives that $B^\alpha_{j(j+1),1n}(A^\beta_{1n,1n}-A^\beta_{j(j+1),j(j+1)})=-A^\beta_{j(j+1),1n}(A^\alpha_{1n,1n}-A^\alpha_{j(j+1),j(j+1)})=0$. \end{proof}
Now consider a second transformation $G_2$ given by $N \longrightarrow G_2 N$, where $G_2$ is the diagonal matrix $(G_2)_{ik,ik} = g_{ik}$ and $g_{ik}=\prod\limits_{j=i}^{k-1} g_{j(j+1)}$. The matrices $A^\alpha$ and $B^\alpha$ are transformed by conjugation by $G_2$. Thus $A^\alpha_{ik,ab} \longrightarrow \dfrac{g_{ik}}{g_{ab}} A^\alpha_{ik,ab}$ and $B^\alpha_{ik,ab} \longrightarrow \dfrac{g_{ik}}{g_{ab}} B^\alpha_{ik,ab}$. This transformation can be used to scale up to $n-1$ nonzero off-diagonal elements to 1. For Lie algebras over the field $F=\mathbb{R}$ it may be that some entries have to be scaled to $-1$.
Since the only non-Lie cases occur when $A^\beta_{1n,1n}=0$ for all $\beta$, then $B^\alpha_{ik,1n} \ne -A^\alpha_{ik,1n}$ implies $A^\beta_{ik,ik}=0$ for all $\beta$ by Lemma \ref{lemma4}. Since the extensions $X^\alpha$ are required to be nilindependent, this imposes restrictions on the degree $f$ of non-Lie extensions of $T(n)$. In particular, this implies that in the maximal case $f=n-1$, $L(n,n-1)$ must be Lie. Such algebras have been classified in \cite{tw}, and in fact there is a unique algebra $L(n,n-1)$ where all $A^\alpha$ are diagonal and the $X^\alpha$ commute.
\begin{theorem} Every solvable Leibniz algebra $L(n,f)$ with triangular nilradical $T(n)$ has dimension $d=\frac{1}{2}n(n-1) + f$ with $1 \leq f \leq n-1$. It can be written in a basis $\{X^\alpha, N_{ik}\}$ with $\alpha= 1, \ldots, f$, $1 \leq i < k \leq n$ satisfying $$[N_{ik},N_{ab}]=\delta_{ka}N_{ib} - \delta_{bi}N_{ak}$$ $$[X^\alpha,N_{ik}]=A^\alpha_{ik,pq}N_{pq}$$ $$[N_{ik},X^\alpha]=B^\alpha_{ik,pq}N_{pq}$$ $$[X^\alpha,X^\beta]=\sigma^{\alpha \beta} N_{1n}.$$
Furthermore, the matrices $A^\alpha$ and $B^\alpha$ and the constants $\sigma^{\alpha \beta}$ satisfy: \begin{enumerate} \item[i.] The matrices $A^\alpha$ are linearly nilindependent and $A^\alpha$ and $B^\alpha$ have the form specified in Lemma \ref{Astructure}. $A^\alpha$ commutes with all these matrices, i.e. $[A^\alpha,A^\beta]=[A^\alpha,B^\beta]=0$. \item[ii.] $B^\alpha_{ik,ab}=-A^\alpha_{ik,ab}$ for $ab \neq 1n$, and $B^\alpha_{1n,1n}=-A^\alpha_{1n,1n}$. \item[iii.] $L$ is Lie and all $\sigma^{\alpha \beta}=0$, unless $A^\gamma_{1n,1n}=0$ for $\gamma=1, \ldots, f$. \item[iv.] The remaining off-diagonal elements $A^\alpha_{ik,ab}$ and $B^\alpha_{ik,ab}$ are zero, unless $A^\beta_{ik,ik}=A^\beta_{ab,ab}$ for $\beta=1, \ldots, f$. \item[v.] In the maximal case $f=n-1$, there is only one algebra, which is isomorphic to the Lie algebra with all $A^\alpha$ diagonal where all $X^\alpha$ commute. \end{enumerate} \end{theorem}
\noindent{\bf Acknowledgements.}
The authors gratefully acknowledge the support of the Departments of Mathematics at Spring Hill College, the University of Texas at Tyler, and West Virginia University: Institute of Technology.
\begin{table}[b] \tiny \caption{The Leibniz algebras $L(4,1)$ of non-Lie type} \begin{tabular}{lllc} \hline
No. & $A$ & $B$ & $\sigma$, parameters \\ \hline\hline
(1) & $ \left(\begin{array}{cccccc}
\ 1 & & &&& \\
& a & &&& \\
& & -1-a &&&\\
& & & 1+a && \\
& & & &-1& \\
& & & && 0\\ \end{array}\right)$ & $B=-A$ &
$\begin{array}{l}\sigma^{11} \neq 0 \in F\\ a\ne-1 \in F\end{array}$ \\ \hline
(2) & $ \left(\begin{array}{cccccc}
\ 1 & & &&& \\
& 0 & \ &&& \\
& & -1\ &&&\\
& & & 1 && \\
& & &&-1& \\
& & & && 0\\ \end{array}\right)$ & $ \left(\begin{array}{cccccc}
\ -1 & & &&& \\
& 0 & \ &&& 1\\
& & 1\ &&&\\
& & & -1 && \\
& & & &1& \\
& & & && 0\\ \end{array}\right)$ & $\sigma^{11}\in F$ \\ \hline
(3) & $ \left(\begin{array}{cccccc}
\ 1 & & &&& \\
& -1 & &&& \\
& & 0 &&&\\
& & & 0 && \\
& & &&-1& \\
& & & && 0\\ \end{array}\right)$ & $ \left(\begin{array}{cccccc}
\ -1 & & &&& \\
& 1& &&& \\
& & 0 &&&1\\
& & & 0&& \\
& & & &1& \\
& & & && 0\\ \end{array}\right)$ & $\sigma^{11} \in F$ \\ \hline
(4) & $ \left(\begin{array}{cccccc}
0 & & & & & \\
& 1 & & & & \\
& & -1 & & & \\
& & & 1 & & \\
& & & & 0 & \\
& & & & & 0 \\ \end{array}\right)$ & $ \left(\begin{array}{cccccc}
0 & & & & &1 \\
& -1 & & & & \\
& & 1 & & & \\
& & & -1 & & \\
& & & & 0 & \\
& & & & & 0 \\ \end{array}\right)$ & $\sigma^{11} \in F$ \\ \hline
(5) & $ \left(\begin{array}{cccccc}
0 & & & & & \\
& 1 & & & & \\
& & -1 & & & \\
& & & 1 & & \\
& & & &0 & \\
& & & & & 0 \\ \end{array}\right)$ & $B = -A$ & $\sigma^{11} \neq 0 \in F$ \\ \hline
(6) & $ \left(\begin{array}{cccccc}
0 & & & & 1 & \\
& 1 & & & & \\
& & -1 & & & \\
& & & 1 & & \\
& & & & 0 & \\
& & & & & 0 \\ \end{array}\right)$ & $B = -A$ & $\sigma^{11} \neq 0 \in F$ \\ \hline
(7) & $ \left(\begin{array}{cccccc}
0 & & & & 1 & \\
& 1 & & & & \\
& & -1 & & & \\
& & & 1 & & \\
& & & &0 & \\
& & & & & 0 \\ \end{array}\right)$ & $ \left(\begin{array}{cccccc}
0 & & & & -1 & 1 \\
& -1 & & & & \\
& & 1 & & & \\
& & & -1 & & \\
& & & & 0 & \\
& & & & & 0 \\ \end{array}\right)$ & $\sigma^{11} \in F$ \\ \hline
(8) & $ \left(\begin{array}{cccccc}
1 & & & & & \\
& 0 & & & & 1 \\
& & -1 & & & \\
& & & 1 & & \\
& & & & -1 & \\
& & & & & 0 \\ \end{array}\right)$ & $ \left(\begin{array}{cccccc}
-1 & & & & & \\
& 0 & & & & b \\
& & 1 & & & \\
& & & -1 & & \\
& & & & 1 & \\
& & & & & 0 \\ \end{array}\right)$ & $\begin{array}{c} \sigma^{11}, b \in F\\ \\ \sigma^{11}, b+1 \text{ not both zero} \end{array} $ \\ \hline
(9) & $ \left(\begin{array}{cccccc}
1 & & & & & \\
& -1 & & & & \\
& & 0 & 1 & & \\
& & & 0 & & \\
& & & & -1 & \\
& & & & & 0 \\ \end{array}\right)$ & $B= -A$ & $\sigma^{11} \neq 0 \in F$ \\ \hline
(10) & $ \left(\begin{array}{cccccc}
1 & & & & & \\
& -1 & & & & \\
& & 0 & 1 & & \\
& & & 0 & & \\
& & & & -1 & \\
& & & & & 0 \\ \end{array}\right)$ & $ \left(\begin{array}{cccccc}
-1 & & & & & \\
& 1 & & & & \\
& & 0 & -1 & & 1 \\
& & & 0 & & \\
& & & & 1 & \\
& & & & & 0 \\ \end{array}\right)$ & $\sigma^{11} \in F$ \\ \hline\hline \end{tabular}
\label{L41} \end{table}
\begin{table}[th] \tiny \caption{The Leibniz algebras $L(4,2)$ of non-Lie type}
\begin{tabular}{lllc} \hline
No. & $A^1=-B^1$ & $A^2=-B^2$ & $\sigma$ \\ \hline\hline
(11) & $\left(\begin{array}{cccccc}
1 & & & & & \\
& 0 & & & & \\
& & -1& & & \\
& & & 1 & & \\
& & & & -1& \\
& & & & & 0 \\ \end{array}\right)$ & $\left(\begin{array}{cccccc}
0 & & & & & \\
& 1 & & & & \\
& & -1& & & \\
& & & 1 & & \\
& & & & 0 & \\
& & & & & 0 \\ \end{array}\right)$ & $\begin{array}{c} \sigma^{11}, \sigma^{22}, \sigma^{12}, \sigma^{21} \in F\\ \\ \sigma^{11}, \sigma^{22}, \sigma^{12}+\sigma^{21}\text{ not all zero} \end{array} $ \\ \hline\hline \end{tabular} \label{L42}
\end{table}
\end{document} |
\begin{document}
\title{From Vectors to Geometric Algebra}
\begin{abstract} Geometric algebra is the natural outgrowth of the concept of
a vector and the addition of vectors. After reviewing the properties of the addition of vectors, a multiplication of vectors is introduced in such a way that it encodes the famous Pythagorean theorem. Synthetic proofs of theorems in Euclidean geometry can then be replaced by powerful algebraic proofs. Whereas we largely limit our attention to 2 and 3 dimensions, geometric algebra is applicable in any number of dimensions, and in both Euclidean and non-Euclidean geometries.
\end{abstract}
\section*{0\quad Introduction}
The evolution of the concept of number, which is at the heart of mathematics,
has a long and fascinating history that spans many centuries and the rise and
fall of many civilizations \cite{TD1967}.
Regarding the introduction of negative and complex numbers, Gauss remarked in 1831 that
``... these advances, however, have always been made at first with timorous and hesitating steps''.
In this work, we lay down for the uninitiated reader the most basic ideas and methods of geometric algebra. Geometric algebra, the natural generalization of the real and complex number systems to include new quantities called {\it directed numbers}, was discovered by William Kingdon Clifford (1845-1879) shortly before his death \cite{WKC1882}.
In Section 1, we extend the real number system $\mathbb{R} $ to include {\it vectors} which are {\it directed line segments} having both {\it length} and {\it direction}. Since the geometric significance of the addition of vectors, and the multiplication of vectors by real numbers or {\it scalars}, are well understood, we only provide a short review. We wish to emphasize that the concept of a vector as a directed line segment in a flat space is independent of any coordinate system, or the dimension of the space. What is important is that the {\it location} of the directed line segment in flat space is unimportant, since a vector at a point can be translated to a parallel vector at any other point, and have the same length and direction.
Section 2 deals with the {\it geometric multiplication} of vectors.
Since we can both {\it add} and {\it multiply} real numbers, if the real number system is to be truly extended to include vectors, then we must be able to {\it multiply} as well as to {\it add} vectors. For guidance on how to geometrically multiply vectors, we recall the two-millennia-old Pythagorean Theorem relating the sides of a right triangle. By only giving up the law of {\it universal commutativity} of multiplication, we discover that the product of orthogonal vectors is anti-commutative and defines a new directed number called a {\it bivector}.
The {\it inner} and {\it outer products} are defined in terms of the {\it symmetric} and {\it anti-symmetric} parts of the geometric product of vectors, and various important relationships between these three products are investigated.
In Section 3, we restrict ourselves to the most
basic geometric algebras $\mathbb{G}_2$ of the Euclidean plane $\mathbb{R}^2$, and the geometric algebra $\mathbb{G}_3$ of Euclidean space $\mathbb{R}^3$. These geometric algebras offer concrete examples and calculations based upon the familiar rectangular coordinate systems of two and three dimensional space, although the much more general discussion of the previous sections should not be forgotten. At the end of the 19th century, the great {\it quaternion} versus standard {\it Gibbs-Heaviside} vector algebra debate was fought \cite{MJC1985}. We show how the standard {\it cross product} of two vectors is the natural {\it dual} to the outer product of those vectors, as well as its relationship to other well-known identities in standard vector analysis. These ideas can easily be generalized to higher dimensional geometric algebras of both Euclidean and non-Euclidean spaces, which are used extensively in Einstein's famous theories of relativity \cite{DL07}, across mathematics \cite{H/S,LP97}, and in the engineering fields \cite{ECGS01,HD02}.
In Section 4, we treat elementary ideas from analytic geometry, including the vector equation of a line and the vector equation of a plane. Along the way, formulas for the decomposition of a vector into parallel and perpendicular components to a line and plane are derived, as well as formulas for the reflection and rotation of a vector in 2, 3 and higher dimensional spaces.
In Section 5, the flexibility and power of geometric algebra is fully revealed by discussing {\it stereographic projection} of the unit 2-sphere centered at the origin onto the Euclidean 2-plane. Stereographic projection, and its generalization to higher dimensions, has profound applications in many areas of mathematics and physics. For example, the fundamental $2$-component spinors used in quantum mechanics have a direct interpretation in the stereographic projection of the $2$-sphere \cite{Shopf2015}.
It is remarkable that almost 140 years after its discovery, this powerful geometric number system, the natural completion of the real number system to include the concept of direction, is not universally known by the wider scientific community, although there have been many developments and applications of the language at the advanced levels in mathematics, theoretical physics, and more recently in the computer science and robotics communities. We feel that the main reason for this regrettable state of affairs has been the lack of a concise, yet rigorous introduction at the most fundamental level. For this reason we pay careful attention to introducing the inner and outer products, and developing the basic identities, in a clear and direct manner, and in such a way that generalization to higher dimensional Euclidean and non-Euclidean geometric algebras presents no new obstacles for the reader. We give careful references to more advanced material, which the interested reader can pursue at their leisure.
\section{Geometric addition of vectors}
Natural numbers, or counting numbers, are used to express quantities of objects, such as 3 cows, 4 pounds, or 5 steps to the north. Historically, natural numbers have been gradually extended to include fractions, negative numbers, and all numbers on the one-dimensional number line.
{\it Vectors}, or {\it directed line-segments}, are a new kind of number which include the notion of direction. A vector $\mathbf{v}=|\mathbf{v}|\hat \mathbf{v}$ has {\it length} $|\mathbf{v}|$ and a {\it unit direction} $\hat \mathbf{v}$, pictured in Figure \ref{vectoradd}. Also pictured is the sum of vectors $\mathbf{w} =\mathbf{u} + \mathbf{v} $. \begin{figure}
\caption{Vector addition.}
\label{vectoradd}
\end{figure}
\begin{figure}\label{prop1and2}
\end{figure}
\begin{figure}
\caption{Geometric properties of addition of vectors.}
\label{prop3and4}
\end{figure} Let $\mathbf{a} $, $\mathbf{b} $ and $\mathbf{c} $ be vectors. Each of the pictures in Figure \ref{prop3and4} expresses a basic geometric property of the addition of vectors, together with its translation into a corresponding algebraic rule. For example, the {\it negative} of a vector $\mathbf{a} $ is the vector $-\mathbf{a}$, which has the same length as the vector $\mathbf{a}$ but the {\it opposite direction} or {\it orientation}, shown in Figure \ref{prop3and4}:\ 1). We now summarize the algebraic rules for the geometric addition of vectors, and for multiplication by real numbers.
\renewcommand{(P\arabic{enumi})}{(A\arabic{enumi})}
\begin{enumerate}
\item
$\mathbf{a} + (-\mathbf{a})= 0 \mathbf{a} = \mathbf{a} 0 =0$
{\it Additive inverse of a vector}
\item
$\mathbf{a} + \mathbf{b} = \mathbf{b} + \mathbf{a}$
{\it Commutative law of vector addition}
\item
$(\mathbf{a} + \mathbf{b}) + \mathbf{c} = \mathbf{a} + (\mathbf{b} + \mathbf{c}) := \mathbf{a} + \mathbf{b} + \mathbf{c} $
{\it Associative law of}
\linebreak \strut
{\it vector addition}
\item For each $\alpha \in \mathbb{R} $,\
$\alpha \mathbf{a} = \mathbf{a} \alpha$
{\it Real numbers commute with vectors}
\item
$\mathbf{a} -\mathbf{b} := \; \mathbf{a} + (-\mathbf{b})$
{\it Definition of vector subtraction}
\end{enumerate}
In Property (A1), the same symbol ${0}$ represents both the zero vector and the zero scalar. Property (A4) tells us that the multiplication of a vector with a real number is a commutative operation.
Note that the rules for the addition of vectors are the same as those for the addition of real numbers.
Whereas vectors are usually introduced in
terms of a coordinate system, we wish to emphasize that their geometric properties are independent of any coordinate system. In Section 4, we carry out explicit calculations in the geometric algebras $\mathbb{G}_2$ and $\mathbb{G}_3$, by using the usual orthonormal coordinate systems of $\mathbb{R}^2$ and $\mathbb{R}^3$, respectively.
\section{Geometric multiplication of vectors}
The geometric significance of the addition of vectors is pictured in Figures \ref{vectoradd} and \ref{prop3and4}, and formalized in the rules (A1) - (A5). But what about the {\it multiplication} of vectors? We both add and multiply real numbers, so why can't we do the same for vectors? Let's see if we can discover how to multiply vectors in a geometrically meaningful way.
First recall that any vector can be written $\mathbf{a} = |\mathbf{a} | \hat \mathbf{a} $. Squaring this vector gives
\begin{equation} \mathbf{a}^2 = (|\mathbf{a} | \hat \mathbf{a})( |\mathbf{a} | \hat \mathbf{a}) = | \mathbf{a} |^2 \hat \mathbf{a}^2 = | \mathbf{a} |^2 .
\label{vecsquared} \end{equation} In the last step, we have introduced the {\it new rule} that a unit vector squares to $+1$. This is always true for unit {\it Euclidean vectors}, the vectors with which we are most familiar.\footnote{{\it Space-time vectors} in Einstein's {\it relativity theory}, as well as vectors in other
{\it non-Euclidean geometries}, have unit vectors with square $-1$.} With this assumption it directly follows that a Euclidean vector squared is its {\it magnitude} or {\it length} squared, $\mathbf{a}^2 = | \mathbf{a} |^2 \ge 0$, and is equal to zero only when it has zero length.
Dividing both sides of equation (\ref{vecsquared}) by
$|\mathbf{a} |^2$, gives
\begin{equation} \frac{\mathbf{a}^2}{|\mathbf{a} |^2} = \mathbf{a} \frac{\mathbf{a}}{|\mathbf{a} |^2}= \frac{\mathbf{a}}{|\mathbf{a} |^2}\mathbf{a} = 1, \label{vecinverse} \end{equation} or \[ \mathbf{a} \mathbf{a}^{-1} = \mathbf{a}^{-1} \mathbf{a} = 1 \] where
\begin{equation} \mathbf{a}^{-1}:=\frac{1}{ \mathbf{a} }= \frac{\mathbf{a}}{|\mathbf{a} |^2} = \frac{\hat\mathbf{a}}{|\mathbf{a} |} \label{inversevec} \end{equation} is the {\it multiplicative inverse} of the vector $\mathbf{a} $. Of course, the inverse of a vector is only defined for nonzero vectors. \begin{figure}
\caption{Right triangle with sides $\mathbf{a} + \mathbf{b} = \mathbf{c} $. }
\label{righttriangle}
\end{figure}
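The inverse formula (\ref{inversevec}) is easy to verify numerically. The sketch below (plain Python; the helper names \verb|dot| and \verb|inverse| are our own, not from the text) checks that $\mathbf{a} \mathbf{a}^{-1} = 1$, using the fact that the geometric product of the parallel vectors $\mathbf{a}$ and $\mathbf{a}^{-1}$ reduces to their dot product:

```python
import math

def dot(u, v):
    """Euclidean inner product of two component lists."""
    return sum(ui * vi for ui, vi in zip(u, v))

def inverse(a):
    """Multiplicative inverse a / |a|^2 of a nonzero vector a."""
    n2 = dot(a, a)
    if n2 == 0:
        raise ZeroDivisionError("the zero vector has no inverse")
    return [ai / n2 for ai in a]

a = [3.0, 4.0]
ainv = inverse(a)
# a a^{-1} = a . a / |a|^2 = 1, since a^{-1} is parallel to a
assert math.isclose(dot(a, ainv), 1.0)
# |a^{-1}| = 1 / |a|
assert math.isclose(math.sqrt(dot(ainv, ainv)), 1.0 / 5.0)
```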
Now consider the right triangle in Figure \ref{righttriangle}. The vectors $\mathbf{a} , \mathbf{b} , \mathbf{c} $ along its sides satisfy the equation
\begin{equation} \mathbf{a} + \mathbf{b} = \mathbf{c} . \label{rtriangle} \end{equation}
The most famous theorem of ancient Greek mathematics, the Pythagorean Theorem, tells us that the lengths $|\mathbf{a}|,|\mathbf{b}|,|\mathbf{c}|$ of the sides of this right triangle satisfy the relationship $|\mathbf{a}|^2 + |\mathbf{b}|^2 = |\mathbf{c}|^2$. Assuming the usual rules for the addition and multiplication of real numbers, except for the commutative law of multiplication, we square both sides of the vector equation (\ref{rtriangle}) to get
\[ (\mathbf{a} + \mathbf{b} )^2=\mathbf{a}^2 +\mathbf{a} \mathbf{b} + \mathbf{b} \mathbf{a} + \mathbf{b}^2=\mathbf{c}^2 \ \ \iff \ \ |\mathbf{a}|^2 + \mathbf{a} \mathbf{b} + \mathbf{b} \mathbf{a} + |\mathbf{b} |^2= |\mathbf{c} |^2, \] from which it follows that $\mathbf{a} \mathbf{b} = -\mathbf{b} \mathbf{a} $, if the Pythagorean Theorem is to remain valid. We have discovered that the geometric product of the orthogonal vectors $\mathbf{a} $ and $\mathbf{b} $ must {\it anti-commute} if this venerable theorem is to remain true.
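The anti-commutativity just derived can be modeled concretely. One standard model (our own illustration, not used elsewhere in this text) represents the unit vectors of the plane by the real matrices $\mathbf{e}_1 \mapsto \left(\begin{smallmatrix}1&0\\0&-1\end{smallmatrix}\right)$ and $\mathbf{e}_2 \mapsto \left(\begin{smallmatrix}0&1\\1&0\end{smallmatrix}\right)$, which square to the identity and anti-commute under matrix multiplication:

```python
def matmul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

E1 = [[1, 0], [0, -1]]   # unit vector e1: squares to the identity
E2 = [[0, 1], [1, 0]]    # unit vector e2, orthogonal to e1
I  = [[1, 0], [0, 1]]

assert matmul(E1, E1) == I                 # e1^2 = 1
assert matmul(E2, E2) == I                 # e2^2 = 1
# orthogonal vectors anti-commute: e1 e2 = -e2 e1
E12 = matmul(E1, E2)
E21 = matmul(E2, E1)
assert E12 == [[-x for x in row] for row in E21]
# the product e1 e2 (a bivector) squares to -1
assert matmul(E12, E12) == [[-1, 0], [0, -1]]
```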
For the orthogonal vectors $\mathbf{a} $ and $\mathbf{b} $, let us go further and give the new quantity $\mathbf{B} := \mathbf{a} \mathbf{b} $ the geometric interpretation of a {\it directed plane segment}, or {\it bivector}, having the direction of the plane in which the vectors lie. The bivector $\mathbf{B} $ and its {\it additive inverse} $ \mathbf{b} \mathbf{a} = -\mathbf{a} \mathbf{b} = - \mathbf{B} $ are pictured in Figure \ref{fig:bivector}. Just as the orientation of a vector is determined by the direction of the line segment, the {\it orientations} of the bivectors $\mathbf{B} = \mathbf{a} \mathbf{b} $ and $-\mathbf{B} = \mathbf{b} \mathbf{a} $ are determined by the orientation of their sides, as shown in Figure \ref{fig:bivector}. \begin{figure}
\caption{The bivectors $\mathbf{a} \mathbf{b} $ and $\mathbf{b} \mathbf{a} $ defined by the orthogonal vectors $\mathbf{a} $ and $\mathbf{b} $.}
\label{fig:bivector}
\end{figure}
We have seen that a vector $\mathbf{v} = |\mathbf{v} |\hat \mathbf{v} $ has the unit direction $\hat \mathbf{v} $ and length $|\mathbf{v} |$, and that $\mathbf{v}^2 = |\mathbf{v} |^2$. Squaring the bivector $\mathbf{B} = \mathbf{a} \mathbf{b} $ gives \begin{equation} \mathbf{B}^2 = (\mathbf{a} \mathbf{b})(\mathbf{a} \mathbf{b})= - \mathbf{a} \mathbf{b} \mathbf{b} \mathbf{a}
=- \mathbf{a}^2 \mathbf{b}^2 =-|\mathbf{a} |^2|\mathbf{b} |^2= -|\mathbf{B} |^2, \label{bivectorsquared}\end{equation} which is the {\it negative} of the area squared of the rectangle with the sides defined by the orthogonal vectors $\mathbf{a} $ and $\mathbf{b} $. It follows that
\begin{equation} \mathbf{B} =|\mathbf{B}| \hat \mathbf{B}, \label{defbivector} \end{equation} where $|\mathbf{B} |=|\mathbf{a} ||\mathbf{b} |$ is the area of the directed plane segment, and its direction is the {\it unit bivector} $\hat \mathbf{B} = \hat \mathbf{a} \hat \mathbf{b} $, with \[ \hat \mathbf{B}^2 =(\hat \mathbf{a} \hat \mathbf{b})(\hat \mathbf{a} \hat \mathbf{b})
=\hat \mathbf{a} (\hat \mathbf{b} \hat \mathbf{a}) \hat \mathbf{b} =-\hat \mathbf{a}^2 \hat \mathbf{b}^2 = -1 . \]
\subsection{The inner product}
Consider now the general triangle in Figure \ref{coslaw},
with the vectors $\mathbf{a} $, $\mathbf{b} $, $\mathbf{c} $ along its sides satisfying the vector equation $ \mathbf{a} + \mathbf{b} = \mathbf{c} $. Squaring this equation gives
\begin{figure}
\caption{Law of Cosines.}
\label{coslaw}
\end{figure}
\[ (\mathbf{a} + \mathbf{b} )^2 =\mathbf{a}^2 + \mathbf{a} \mathbf{b} + \mathbf{b} \mathbf{a} + \mathbf{b}^2 =\mathbf{c}^2\ \ \iff \ \
|\mathbf{a}|^2 + 2 \mathbf{a} \cdot \mathbf{b} + |\mathbf{b}|^2 = |\mathbf{c}|^2 , \]
known as the {\it Law of Cosines}, where
\begin{equation} \mathbf{a} \cdot \mathbf{b} := \frac{1}{2}(\mathbf{a} \mathbf{b} + \mathbf{b} \mathbf{a} ) = |\mathbf{a} ||\mathbf{b} |\cos \theta ,\label{reverseab} \end{equation}
is the {\it inner product} or {\it dot product} of the
vectors $\mathbf{a} $ and $\mathbf{b} $. In Figure \ref{coslaw}, the angle $-\pi \le \theta \le \pi$ is measured from the vector
$\mathbf{a} $ to the vector $\mathbf{b} $, and
\[\cos \theta = - \cos (\pi -\theta) =-\cos C = \cos (-\theta), \]
so the sign of the angle is unimportant. Note that (\ref{reverseab}) allows us to reverse the order of the geometric product,
\begin{equation} \mathbf{b} \mathbf{a} = - \mathbf{a} \mathbf{b} + 2 \mathbf{a} \cdot \mathbf{b} . \label{reverseba} \end{equation}
We have used the usual rules for the multiplication of real
numbers, except that we have not assumed that the multiplication of vectors is
universally commutative. Indeed, the Pythagorean Theorem tells us that $|\mathbf{a}|^2+ |\mathbf{b}|^2 = |\mathbf{c}|^2$ only for a right triangle when $\mathbf{a} \cdot \mathbf{b} =0$,
or equivalently, when the vectors $\mathbf{a} $ and $\mathbf{b} $ are orthogonal and anti-commute.
Now is a good place to summarize the rules which we have developed for the geometric multiplication of vectors. For vectors $\mathbf{a} $, $\mathbf{b} $, and $\mathbf{c}$,
\renewcommand{(P\arabic{enumi})}{(P\arabic{enumi})}
\begin{enumerate}
\item $ \mathbf{a}^2 = |\mathbf{a} |^2 $ \quad {\it The square of a vector is its magnitude squared} \item $\mathbf{a} \mathbf{b} =- \mathbf{b} \mathbf{a} $ \quad {\it defines the bivector $\mathbf{B} =\mathbf{a} \mathbf{b} $ when $\mathbf{a} $ and $\mathbf{b} $ are orthogonal vectors.}
\item
$\mathbf{a} ( \mathbf{b} + \mathbf{c} ) = \mathbf{a}\mathbf{b} + \mathbf{a}\mathbf{c}$ \quad {\it Left distributivity}
\item
$( \mathbf{b} + \mathbf{c})\mathbf{a} = \mathbf{b}\mathbf{a} + \mathbf{c}\mathbf{a}$ \quad {\it Right distributivity}
\item
$\mathbf{a} (\mathbf{b}\mathbf{c} ) =(\mathbf{a}\mathbf{b} )\mathbf{c} = \mathbf{a}\mathbf{b}\mathbf{c}$ \quad {\it Product associativity}
\item $0 \mathbf{a} = 0 = \mathbf{a} 0 $ \quad {\it Multiplication of a vector by zero is zero}
\item
$\alpha\mathbf{a} = \mathbf{a}\alpha, \ \ {\rm for} \ \ \alpha \in \mathbb{R}$ \quad {\it Multiplication of a vector times a scalar is commutative}
\end{enumerate}
\subsection{The outer product}
So far, all is well, fine and good. The inner product of two vectors has been identified
as one half the symmetric product of those vectors. To discover the geometric interpretation of the anti-symmetric product of the two vectors $\mathbf{a} $ and $\mathbf{b} $, we write
\begin{equation} \mathbf{a} \mathbf{b} = \frac{1}{2}(\mathbf{a} \mathbf{b} + \mathbf{b} \mathbf{a})+ \frac{1}{2}(\mathbf{a} \mathbf{b} - \mathbf{b} \mathbf{a}) =
\mathbf{a} \cdot \mathbf{b} + \mathbf{a} \wedge \mathbf{b} , \label{geoproductab} \end{equation}
where $\mathbf{a} \wedge \mathbf{b} := \frac{1}{2}(\mathbf{a} \mathbf{b} - \mathbf{b} \mathbf{a})$ is called
the {\it outer product}, or {\it wedge product} between $\mathbf{a} $ and $\mathbf{b} $. The outer
product is {\it antisymmetric}, since $\mathbf{b} \wedge \mathbf{a} = - \mathbf{a} \wedge \mathbf{b}$.
Indeed, when $\mathbf{a} \cdot \mathbf{b} =0$ the geometric product reduces to the outer product, {\it i.e.}
\begin{equation} \mathbf{a} \mathbf{b} = \mathbf{a} \cdot \mathbf{b} + \mathbf{a} \wedge \mathbf{b} = \mathbf{a} \wedge \mathbf{b} = - \mathbf{b} \mathbf{a} . \label{reducedprod} \end{equation}
It is natural to give $\mathbf{a} \wedge \mathbf{b}$ the interpretation of a {\it directed plane segment} or {\it bivector}. To see this, write $\mathbf{b} = \mathbf{b}_\parallel + \mathbf{b}_\perp$,
where $\mathbf{b}_\parallel = s \mathbf{a} $, for $s\in \mathbb{R}$, is the vector part of $\mathbf{b}$ which is parallel to $\mathbf{a}$, and $\mathbf{b}_\perp$ is the vector part of $\mathbf{b} $ which is perpendicular to $\mathbf{a}$. Calculating $\mathbf{a} \mathbf{b} $, we find
\[ \mathbf{a} \mathbf{b} = \mathbf{a} (\mathbf{b}_\parallel + \mathbf{b}_\perp ) = \mathbf{a} \mathbf{b}_\parallel +\mathbf{a} \mathbf{b}_\perp =
s \mathbf{a} ^2 + \mathbf{a} \mathbf{b}_\perp = \mathbf{a} \cdot \mathbf{b} + \mathbf{a} \wedge \mathbf{b}. \]
Equating scalar and bivector parts, gives
\begin{equation} \mathbf{a} \cdot \mathbf{b} = s \mathbf{a}^2 \quad {\rm and} \quad \mathbf{a} \wedge \mathbf{b} = \mathbf{a} \mathbf{b}_\perp.
\label{perpbivector} \end{equation}
It follows that $\mathbf{a} \wedge \mathbf{b} = \mathbf{a} \mathbf{b}_\perp $ is the bivector which is the product of the
orthogonal vectors $\mathbf{a} $ and $\mathbf{b}_\perp$, shown in Figure \ref{coslaw}.
The bivector defined by the oriented parallelogram $\mathbf{a} \wedge \mathbf{b} $, with sides $\mathbf{a} $ and $\mathbf{b} $, has exactly the same orientation and directed area as the bivector defined by the oriented rectangle $\mathbf{a} \mathbf{b}_\perp$, with the sides
$\mathbf{a} $ and $\mathbf{b}_\perp$.
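The decomposition $\mathbf{b} = \mathbf{b}_\parallel + \mathbf{b}_\perp$ and the identity $\mathbf{a} \wedge \mathbf{b} = \mathbf{a} \mathbf{b}_\perp$ in (\ref{perpbivector}) can be spot-checked with plane vectors. The sketch below uses the component formula $a_1 b_2 - a_2 b_1$ for the $\mathbf{e}_{12}$-coefficient of the outer product, derived explicitly in Section 3; the helper names are our own:

```python
import math

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def wedge_area(u, v):
    """Signed coefficient of e1 e2 in u ^ v, for plane vectors."""
    return u[0] * v[1] - u[1] * v[0]

a = [2.0, 1.0]
b = [0.5, 3.0]

# b_parallel = s a with s = (a . b) / a^2, and b_perp = b - b_parallel
s = dot(a, b) / dot(a, a)
b_par = [s * a[0], s * a[1]]
b_perp = [b[0] - b_par[0], b[1] - b_par[1]]

assert math.isclose(dot(a, b_perp), 0.0, abs_tol=1e-12)   # perpendicular part
assert math.isclose(dot(a, b), s * dot(a, a))             # a . b = s a^2
# a ^ b = a b_perp: both give the same bivector coefficient
assert math.isclose(wedge_area(a, b), wedge_area(a, b_perp))
```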
\begin{figure}\label{bivecorient}
\end{figure}
We have seen that
the square of a vector is its magnitude squared, $\mathbf{a} ^2 = |\mathbf{a} |^2$.
What about the square of the bivector $(\mathbf{a} \wedge \mathbf{b})$? Using (\ref{perpbivector}), we find that
\begin{equation} (\mathbf{a} \wedge \mathbf{b} )^2 = (\mathbf{a} \mathbf{b}_\perp)^2 = - \mathbf{a}^2 \mathbf{b}_\perp^2 =-|\mathbf{a} \wedge \mathbf{b}|^2, \label{areabivector2} \end{equation}
in agreement with (\ref{bivectorsquared}). If the bivector is in the $xy$-plane of the unit bivector $\mathbf{e}_1 \mathbf{e}_2$, where the unit vectors $\mathbf{e}_1$ and $\mathbf{e}_2$ lie along the orthogonal $x$- and $y$-axes, respectively, then
$\mathbf{a} \wedge \mathbf{b} =\mathbf{e}_{12}|\mathbf{a} ||\mathbf{b} | \sin \theta$, see Figure \ref{bivecorient}. The geometric product in $\mathbb{R}^2$ and $\mathbb{R}^3$ is further discussed in Section 3.
\begin{figure}
\caption{The wedge product is distributive over the addition of vectors.}
\label{fig:distpropwedge}
\end{figure}
Just as the sum of vectors is a vector,
the sum of bivectors is a bivector. Figure \ref{fig:distpropwedge} shows
the sum of the bivectors
\[ \mathbf{a} \wedge \mathbf{c} + \mathbf{b} \wedge \mathbf{c} =( \mathbf{a} + \mathbf{b} )\wedge \mathbf{c}, \]
and also shows the {\it distributive property} of the outer product
over the sum of the
vectors $\mathbf{a}$ and $\mathbf{b}$.
\subsection{Properties of the inner and outer products}
Since the triangle in Figure \ref{coslaw} satisfies the vector equation
\[ \mathbf{a} + \mathbf{b} = \mathbf{c}, \]
wedging both sides of this equation with $\mathbf{a}$, $\mathbf{b} $ and $\mathbf{c}$ gives
\[ \mathbf{a} \wedge \mathbf{b} = \mathbf{c} \wedge \mathbf{b}, \ \ \mathbf{b} \wedge \mathbf{a} = \mathbf{c} \wedge \mathbf{a} , \ \ {\rm and} \ \ \mathbf{c} \wedge \mathbf{a} =
\mathbf{b} \wedge \mathbf{c} , \]
or equivalently,
\[ \mathbf{a} \wedge \mathbf{b} = \mathbf{c} \wedge \mathbf{b} = \mathbf{a} \wedge \mathbf{c} . \]
Note that the area of the triangle is given by $\frac{1}{2}|\mathbf{a} \wedge \mathbf{b}|$,
which is one half of the area of the parallelogram $\mathbf{a} \wedge \mathbf{b} $, so the last equation reflects the corresponding relationship between the parallelograms.
Dividing each term of the last equality by $|\mathbf{a} || \mathbf{b} ||\mathbf{c} |$, gives
\[ \frac{\hat \mathbf{a} \wedge \hat \mathbf{b} }{|\mathbf{c} |} = \frac{\hat \mathbf{c} \wedge \hat \mathbf{b} }{|\mathbf{a} |} =
\frac{\hat \mathbf{a} \wedge \hat \mathbf{c} }{|\mathbf{b} |} \ \ \Longrightarrow \frac{|\hat \mathbf{a} \wedge \hat \mathbf{b}| }{|\mathbf{c} |} = \frac{|\hat \mathbf{c} \wedge \hat \mathbf{b}| }{|\mathbf{a} |} =
\frac{|\hat \mathbf{a} \wedge \hat \mathbf{c} |}{|\mathbf{b} |}. \]
For the angles $0 \le A,B,C \le \pi$,
\[ |\hat \mathbf{a} \wedge \hat \mathbf{b}| =\sin C = \sin(\pi -C), \ \ |\hat \mathbf{c} \wedge \hat \mathbf{b}|= \sin A, \ \ {\rm and} \ \ |\hat \mathbf{a} \wedge \hat \mathbf{c}| = \sin B, \]
from which it follows that
\[ \frac{\sin A}{|\mathbf{a} |} =\frac{\sin B}{|\mathbf{b}| }=\frac{\sin C}{|\mathbf{c}|} \]
known as the {\it Law of Sines},
see Figure \ref{sinlaw}.
\begin{figure}
\caption{Law of Sines.}
\label{sinlaw}
\end{figure}
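Both the equality of the wedge products around the triangle and the resulting Law of Sines can be verified numerically. The following sketch (plain Python, with our own helper names) builds a triangle from $\mathbf{a} + \mathbf{b} = \mathbf{c}$ and checks the chain of equal ratios:

```python
import math

def norm(u):
    return math.hypot(u[0], u[1])

def wedge_area(u, v):
    """Signed coefficient of e1 e2 in u ^ v, for plane vectors."""
    return u[0] * v[1] - u[1] * v[0]

# a triangle with sides satisfying a + b = c
a = [3.0, 0.0]
b = [1.0, 2.0]
c = [a[0] + b[0], a[1] + b[1]]

# |a ^ b| = |c ^ b| = |a ^ c|: twice the triangle's area, however computed
assert math.isclose(abs(wedge_area(a, b)), abs(wedge_area(c, b)))
assert math.isclose(abs(wedge_area(a, b)), abs(wedge_area(a, c)))

# sin C / |c| etc., with sin C = |a ^ b| / (|a||b|) and so on
sinC_over_c = abs(wedge_area(a, b)) / (norm(a) * norm(b) * norm(c))
sinA_over_a = abs(wedge_area(c, b)) / (norm(c) * norm(b) * norm(a))
sinB_over_b = abs(wedge_area(a, c)) / (norm(a) * norm(c) * norm(b))
assert math.isclose(sinC_over_c, sinA_over_a)
assert math.isclose(sinC_over_c, sinB_over_b)
```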
In (\ref{geoproductab}), we discovered that the geometric product of two vectors splits into two parts, a symmetric {\it scalar} part $\mathbf{a} \cdot \mathbf{b} $ and an anti-symmetric {\it bivector} part $\mathbf{a} \wedge \mathbf{b}$. It is natural to ask whether the geometric product of a vector $\mathbf{a}$ with a bivector $\mathbf{b} \wedge \mathbf{c} $ has a similar decomposition. Analogous to (\ref{geoproductab}), we write
\begin{equation} \mathbf{a} (\mathbf{b} \wedge \mathbf{c}) = \mathbf{a} \cdot (\mathbf{b} \wedge \mathbf{c}) + \mathbf{a} \wedge (\mathbf{b} \wedge \mathbf{c}), \label{gaproductabc} \end{equation}
where in this case
\begin{equation} \mathbf{a} \cdot (\mathbf{b} \wedge \mathbf{c}):=\frac{1}{2}\Big( \mathbf{a} (\mathbf{b} \wedge \mathbf{c}) - (\mathbf{b} \wedge \mathbf{c})\mathbf{a} \Big)
=: - (\mathbf{b} \wedge \mathbf{c})\cdot \mathbf{a}
\label{adotbc} \end{equation}
is {\it antisymmetric}, and
\begin{equation} \mathbf{a} \wedge (\mathbf{b} \wedge \mathbf{c}):=\frac{1}{2}\Big( \mathbf{a} (\mathbf{b} \wedge \mathbf{c}) + (\mathbf{b} \wedge \mathbf{c})\mathbf{a} \Big)
=: (\mathbf{b} \wedge \mathbf{c})\wedge \mathbf{a}
\label{awedgebc} \end{equation}
is {\it symmetric}.
To better understand this decomposition, we consider each part separately. Starting with
$\mathbf{a} \cdot (\mathbf{b} \wedge \mathbf{c})=\mathbf{a}_\parallel (\mathbf{b} \wedge \mathbf{c} ) $, where $\mathbf{a}_\parallel$ is the component of $\mathbf{a}$ lying in the plane of $\mathbf{b} \wedge \mathbf{c}$ (the perpendicular component commutes with the bivector and so drops out of the anti-symmetric part), we first show that
\begin{equation} \mathbf{a} \cdot (\mathbf{b} \wedge \mathbf{c}) =(\mathbf{a} \cdot \mathbf{b})\mathbf{c} - (\mathbf{a} \cdot \mathbf{c} )\mathbf{b} . \label{adotbc1} \end{equation}
Decomposing the left side of this equation, using (\ref{adotbc}) and (\ref{geoproductab}), gives
\[ \mathbf{a} \cdot (\mathbf{b} \wedge \mathbf{c}) = \frac{1}{2}\Big(\mathbf{a} (\mathbf{b} \wedge \mathbf{c} )-(\mathbf{b} \wedge \mathbf{c} )\mathbf{a} \Big)
= \frac{1}{4}(\mathbf{a} \mathbf{b} \mathbf{c} - \mathbf{a} \mathbf{c} \mathbf{b} -\mathbf{b} \mathbf{c} \mathbf{a} + \mathbf{c} \mathbf{b} \mathbf{a}) . \]
Decomposing the right side, gives \[ (\mathbf{a} \cdot \mathbf{b})\mathbf{c} - (\mathbf{a} \cdot \mathbf{c} )\mathbf{b}=\frac{1}{2}\Big( (\mathbf{a} \cdot \mathbf{b})\mathbf{c} + \mathbf{c} (\mathbf{a} \cdot \mathbf{b}) - (\mathbf{a} \cdot \mathbf{c} )\mathbf{b} - \mathbf{b} (\mathbf{a} \cdot \mathbf{c} )\Big)\] \[=\frac{1}{4}\Big( (\mathbf{a} \mathbf{b} + \mathbf{b} \mathbf{a} )\mathbf{c} + \mathbf{c} (\mathbf{a} \mathbf{b} + \mathbf{b} \mathbf{a} ) - (\mathbf{a} \mathbf{c}+\mathbf{c} \mathbf{a} )\mathbf{b} - \mathbf{b} (\mathbf{a} \mathbf{c} + \mathbf{c} \mathbf{a} )\Big) \] \[= \frac{1}{4}(\mathbf{a} \mathbf{b} \mathbf{c} - \mathbf{a} \mathbf{c} \mathbf{b} -\mathbf{b} \mathbf{c} \mathbf{a} + \mathbf{c} \mathbf{b} \mathbf{a}) , \] which is in agreement with the left side. The geometric interpretation of (\ref{adotbc}) is given in the Figure \ref{apawc}. \begin{figure}\label{apawc}
\end{figure}
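Identity (\ref{adotbc1}) can be spot-checked numerically in $\mathbb{R}^3$: in standard Gibbs-Heaviside language, the right-hand side $(\mathbf{a} \cdot \mathbf{b})\mathbf{c} - (\mathbf{a} \cdot \mathbf{c})\mathbf{b}$ is the BAC-CAB combination, equal to $-\mathbf{a} \times (\mathbf{b} \times \mathbf{c})$. A sketch with plain Python lists (helper names our own):

```python
import math

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def cross(u, v):
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

a, b, c = [1.0, 2.0, -1.0], [0.0, 3.0, 1.0], [2.0, -1.0, 4.0]

# a . (b ^ c) = (a . b) c - (a . c) b   (a vector-valued quantity)
lhs = [dot(a, b) * ci - dot(a, c) * bi for bi, ci in zip(b, c)]
# in cross-product language this equals -a x (b x c)
rhs = [-x for x in cross(a, cross(b, c))]
assert all(math.isclose(l, r) for l, r in zip(lhs, rhs))
```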
Regarding the triple wedge product (\ref{awedgebc}), we need to show the associative
property, $\mathbf{a} \wedge (\mathbf{b} \wedge \mathbf{c})= (\mathbf{a} \wedge \mathbf{b})\wedge \mathbf{c}$. Decomposing both sides of this equation, using (\ref{awedgebc}) and the definition of the outer product, gives
\[ \mathbf{a} \wedge (\mathbf{b} \wedge \mathbf{c}):=\frac{1}{2}\Big( \mathbf{a} (\mathbf{b} \wedge \mathbf{c}) + (\mathbf{b} \wedge \mathbf{c})\mathbf{a} \Big)
= \frac{1}{4}\Big( \mathbf{a} (\mathbf{b} \mathbf{c}- \mathbf{c} \mathbf{b}) + (\mathbf{b} \mathbf{c}- \mathbf{c} \mathbf{b})\mathbf{a} \Big) , \]
and
\[ (\mathbf{a} \wedge \mathbf{b}) \wedge \mathbf{c}:=\frac{1}{2}\Big( (\mathbf{a} \wedge \mathbf{b} ) \mathbf{c} + \mathbf{c} (\mathbf{a} \wedge \mathbf{b}) \Big)
= \frac{1}{4}\Big(( \mathbf{a} \mathbf{b} - \mathbf{b} \mathbf{a})\mathbf{c} + \mathbf{c} (\mathbf{a} \mathbf{b} - \mathbf{b} \mathbf{a} ) \Big) . \]
To finish the argument, we have
\[\mathbf{a} \wedge (\mathbf{b} \wedge \mathbf{c})- (\mathbf{a} \wedge \mathbf{b})\wedge \mathbf{c} = \frac{1}{4}\Big( - \mathbf{a} \mathbf{c} \mathbf{b} -\mathbf{c} \mathbf{a} \mathbf{b} +
\mathbf{b} \mathbf{c} \mathbf{a} +\mathbf{b}\mathbf{a} \mathbf{c} \Big) \]
\[ =\frac{1}{2}\Big(-(\mathbf{a} \cdot \mathbf{c} )\mathbf{b} + \mathbf{b} (\mathbf{a} \cdot \mathbf{c}) \Big) =0. \]
The {\it trivector} or {\it directed volume} $\mathbf{a} \wedge \mathbf{b} \wedge \mathbf{c}$ is pictured in
Figure \ref{trivector}.
There are many more similar identities in higher dimensional geometric algebras \cite{H/S,SNF}.
\begin{figure}\label{trivector}
\end{figure}
\noindent {\bf Exercise:} Using the properties (\ref{awedgebc}) and (\ref{adotbc1}), prove the Associative Law (P5) for the geometric product of vectors,
\[ \mathbf{a} ( \mathbf{b} \mathbf{c} )= (\mathbf{a} \mathbf{b} )\mathbf{c}. \]
\section{The geometric algebras $\mathbb{G}_1$, $\mathbb{G}_2$ and $\mathbb{G}_3$.}
In the previous section, we discovered two general principles for the multiplication
of Euclidean vectors $\mathbf{a} $ and $\mathbf{b} $:
\begin{itemize}
\item[1)] The square of a vector is its length
squared, $\mathbf{a}^2 = |\mathbf{a} |^2$.
\item[2)] If the vectors $\mathbf{a} $ and $\mathbf{b} $ are orthogonal to each other, i.e., the angle between
them is $90$ degrees, then they anti-commute $\mathbf{a} \mathbf{b} = -\mathbf{b} \mathbf{a} $ and define the
bivector given in (\ref{defbivector}).
\end{itemize}
These two general rules hold for Euclidean vectors, independent of the dimension
of the space in which they lie.
The simplest Euclidean geometric algebra is obtained by extending the real number system $\mathbb{R} $ to include a single new square
root of $+1$, giving the geometric algebra
\[ \mathbb{G}_1 := \mathbb{R}(\mathbf{e}), \]
where $\mathbf{e}^2=1$. A geometric number in $\mathbb{G}_1$ has the form \[ g= x + y \mathbf{e}, \] where $x,y \in \mathbb{R} $, and defines the hyperbolic number plane \cite{S1}.
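The arithmetic of $\mathbb{G}_1$ is easy to sketch: multiplying $g = x_1 + y_1 \mathbf{e}$ and $h = x_2 + y_2 \mathbf{e}$ with $\mathbf{e}^2 = 1$ gives $(x_1 x_2 + y_1 y_2) + (x_1 y_2 + y_1 x_2)\mathbf{e}$, the hyperbolic analogue of complex multiplication. A minimal check in plain Python (representation and helper name our own):

```python
import math

def hmul(g, h):
    """Product of hyperbolic numbers g = (x, y) ~ x + y e, with e^2 = +1."""
    x1, y1 = g
    x2, y2 = h
    return (x1 * x2 + y1 * y2, x1 * y2 + y1 * x2)

# e squares to +1, unlike the imaginary unit i with i^2 = -1
e = (0.0, 1.0)
assert hmul(e, e) == (1.0, 0.0)

# the analogue of |z|^2 is the indefinite quadratic form x^2 - y^2
g = (3.0, 2.0)
conj = (g[0], -g[1])                   # hyperbolic conjugate x - y e
prod = hmul(g, conj)
assert math.isclose(prod[0], 3.0**2 - 2.0**2) and prod[1] == 0.0
```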
We now apply what we have learned about the general geometric addition and multiplication of vectors to vectors in the two
dimensional plane $\mathbb{R}^2$, and in the three dimensional space $\mathbb{R}^3$ of experience. The $2$-dimensional {\it coordinate plane} is defined by
\begin{equation} \mathbb{R}^2 := \{(x,y)| \ \ x,y \in \mathbb{R} \}. \label{coorplane2} \end{equation}
By laying out two {\it orthonormal unit vectors} $\{\mathbf{e}_1, \mathbf{e}_2\}$ along the $x$- and $y$-axes, respectively, each point
\begin{equation} (x,y)\in \mathbb{R}^2 \quad \longleftrightarrow \quad \mathbf{x} = x \mathbf{e}_1 + y \mathbf{e}_2\in \mathbb{R}^2 \label{abusexy} \end{equation}
becomes a {\it position vector} $\mathbf{x} = |\mathbf{x} |\hat \mathbf{x} $ from the origin, shown in
Figure \ref{unitcircle} with the unit circle. The point $\hat\mathbf{x} =(\cos \theta, \sin \theta)$ on the unit circle $S^1$, where the
angle $\theta$ is measured from the $x$-axis, becomes the unit vector
\[ \hat \mathbf{x}=\cos (\theta)\mathbf{e}_1 +\sin (\theta)\mathbf{e}_2 . \]
In equation (\ref{abusexy}), we have abused notation by equating the coordinate point $(x,y)\in \mathbb{R}^2$ with the {\it position vector} $\mathbf{x} = x \mathbf{e}_1 + y \mathbf{e}_2$ from the origin of $\mathbb{R}^2$.
\begin{figure}
\caption{The unit circle $S^1$ in the $xy$-plane.}
\label{unitcircle}
\end{figure}
Calculating the geometric product of the two vectors
\[ \mathbf{a}=(a_1,a_2)=a_1 \mathbf{e}_{1} + a_2 \mathbf{e}_{2}, \quad
\mathbf{b}=(b_1,b_2)=b_1 \mathbf{e}_{1} + b_2 \mathbf{e}_{2}, \]
in the $xy$-plane, we obtain
\[ \mathbf{a} \mathbf{b} = (a_1 \mathbf{e}_{1} + a_2 \mathbf{e}_{2})(b_1 \mathbf{e}_{1} + b_2 \mathbf{e}_{2}) \]
\[ = a_1 b_1 \mathbf{e}_1^2 + a_2 b_2 \mathbf{e}_2^2 + a_1b_2 \mathbf{e}_1 \mathbf{e}_2+a_2 b_1\mathbf{e}_2 \mathbf{e}_1 \]
\begin{equation} = (a_1b_1 +a_2 b_2) +(a_1 b_2 -a_2 b_1 )\mathbf{e}_1 \mathbf{e}_2 = \mathbf{a} \cdot \mathbf{b} + \mathbf{a} \wedge \mathbf{b} ,\label{dotouter2} \end{equation}
where the inner product $\mathbf{a} \cdot \mathbf{b} = a_1b_1 +a_2 b_2 =|\mathbf{a}||\mathbf{b}|\cos \theta$, and the
outer product
\[ \mathbf{a} \wedge \mathbf{b} =(a_1 b_2 -a_2 b_1 )\mathbf{e}_{12} = \mathbf{e}_{12} |\mathbf{a} ||\mathbf{b} | \sin \theta \]
for $\mathbf{e}_{12}:= \mathbf{e}_1 \mathbf{e}_2 = \mathbf{e}_1 \wedge \mathbf{e}_2$.
The bivector $\mathbf{a} \wedge \mathbf{b}$ is pictured in Figure \ref{geoprod}, together
with a picture proof that the magnitude $|\mathbf{a} \wedge \mathbf{b}|= |a_1b_2 -a_2 b_1|$, as expected.
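The component formulas in (\ref{dotouter2}) can be confirmed numerically against the trigonometric expressions $|\mathbf{a}||\mathbf{b}|\cos\theta$ and $|\mathbf{a}||\mathbf{b}|\sin\theta$. A sketch in plain Python (the function name is our own):

```python
import math

def geo_product_2d(a, b):
    """Return (scalar, bivector) parts of the geometric product a b in G_2."""
    inner = a[0] * b[0] + a[1] * b[1]          # a . b
    outer = a[0] * b[1] - a[1] * b[0]          # coefficient of e12 in a ^ b
    return inner, outer

a = [2.0, 0.5]
b = [-1.0, 3.0]
inner, outer = geo_product_2d(a, b)

# compare with |a||b| cos(theta) and |a||b| sin(theta)
theta = math.atan2(b[1], b[0]) - math.atan2(a[1], a[0])
na, nb = math.hypot(*a), math.hypot(*b)
assert math.isclose(inner, na * nb * math.cos(theta))
assert math.isclose(outer, na * nb * math.sin(theta))
# Lagrange identity: (a . b)^2 + |a ^ b|^2 = |a|^2 |b|^2
assert math.isclose(inner**2 + outer**2, (na * nb) ** 2)
```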
\begin{figure}
\caption{The outer product $\mathbf{a} \wedge \mathbf{b} $ in 2-dimensions.}
\label{geoprod}
\end{figure}
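The decomposition (\ref{dotouter2}) is easy to check numerically. The following Python sketch (an illustrative aside, not part of the formal development) computes the symmetric and antisymmetric parts of the geometric product of two vectors in the plane and compares them with the $|\mathbf{a}||\mathbf{b}|\cos\theta$ and $|\mathbf{a}||\mathbf{b}|\sin\theta$ formulas:

```python
import math

def geo_product_2d(a, b):
    """Geometric product ab of 2D vectors, returned as (scalar, e12) parts."""
    dot = a[0]*b[0] + a[1]*b[1]       # symmetric part: a . b
    wedge = a[0]*b[1] - a[1]*b[0]     # antisymmetric part: coefficient of e12
    return dot, wedge

a, b = (3.0, 1.0), (1.0, 2.0)
dot, wedge = geo_product_2d(a, b)

na, nb = math.hypot(*a), math.hypot(*b)
theta = math.atan2(b[1], b[0]) - math.atan2(a[1], a[0])
assert math.isclose(dot, na*nb*math.cos(theta))     # a . b = |a||b| cos(theta)
assert math.isclose(wedge, na*nb*math.sin(theta))   # e12 part = |a||b| sin(theta)
```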
By introducing the unit vectors $\{\mathbf{e}_1,\mathbf{e}_2\}$ along the coordinate axes of $\mathbb{R}^2$, and using properties of the geometric product, we have found explicit formulas for the dot and outer products of any two vectors $\mathbf{a}$ and $\mathbf{b}$ in $\mathbb{R}^2$. The geometric product of the orthogonal unit vectors $\mathbf{e}_1$ and $\mathbf{e}_2$ gives the unit bivector
$\mathbf{e}_{12}$, already pictured in Figure \ref{bivecorient}. Squaring $\mathbf{e}_{12}$ gives
\[ \mathbf{e}_{12}^2 = (\mathbf{e}_1 \mathbf{e}_2)(\mathbf{e}_1 \mathbf{e}_2)= -\mathbf{e}_1^2 \mathbf{e}_2^2 = -1, \]
which because of (\ref{bivectorsquared}) and (\ref{areabivector2}) is no surprise.
The most general geometric number of the $2$-dimensional Euclidean
plane $\mathbb{R}^2$ is
\[ g= g_0 + g_1 \mathbf{e}_1 + g_2 \mathbf{e}_2 + g_3 \mathbf{e}_{12}, \]
where $g_\mu \in \mathbb{R}$ for $\mu = 0,1,2,3$. The set of all geometric numbers $g$, together with the two operations of geometric addition and multiplication, make up the {\it geometric algebra} $\mathbb{G}_2$ of the Euclidean plane $\mathbb{R}^2$,
\[ \mathbb{G}_2 := \{ g| \ \ g= g_0 + g_1 \mathbf{e}_1 + g_2 \mathbf{e}_2 + g_3 \mathbf{e}_{12}\} =\mathbb{R}(\mathbf{e}_1,\mathbf{e}_2) . \]
The formal rules for the geometric addition and multiplication of the geometric numbers in $\mathbb{G}_2$
are exactly the same as the rules for addition and multiplication of real numbers, except we give up universal commutativity to express the
anti-commutativity of orthogonal vectors.
The geometric algebra $\mathbb{G}_2$ decomposes by grade into scalar, vector, and bivector parts, which regroup into even and odd parts,
\[ \mathbb{G}_2 = \mathbb{G}_2^0 + \mathbb{G}_2^1 + \mathbb{G}_2^2 =\mathbb{G}_2^+ + \mathbb{G}_2^-, \]
where the {\it even part}, consisting of {\it scalars} (real numbers) and bivectors,
\[ \mathbb{G}_2^+ :=\mathbb{G}_2^{0+2}= \{ x + y\mathbf{e}_{12}| \ \ x,y \in \mathbb{R} \}\ \widetilde{=} \ \mathbb{C} \]
is algebraically closed and isomorphic to the complex numbers $\mathbb{C}$,
and the {\it odd part},
\[ \mathbb{G}_2^- := \mathbb{G}_2^1 = \{ \mathbf{x} | \ \ \mathbf{x} = x \mathbf{e}_1 + y \mathbf{e}_2 \}\equiv \mathbb{R}^2 \]
for $x,y \in \mathbb{R}$, consists of vectors in the $xy$-plane $\mathbb{R}^2$.
The geometric algebra $\mathbb{G}_2$ unites the vector plane $\mathbb{G}_2^-$ and the complex number plane $\mathbb{G}_2^+$ into a unified geometric number system $\mathbb{G}_2$ of the plane.
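The isomorphism $\mathbb{G}_2^+\,\widetilde{=}\,\mathbb{C}$ can be checked directly: multiplying two even elements using $\mathbf{e}_{12}^2=-1$ reproduces complex multiplication under $x+y\mathbf{e}_{12}\leftrightarrow x+yi$. A small Python sketch (illustrative only):

```python
def even_product(g, h):
    """Product of even elements g = (x, y), meaning x + y*e12, using e12^2 = -1."""
    return (g[0]*h[0] - g[1]*h[1], g[0]*h[1] + g[1]*h[0])

g, h = (2.0, 3.0), (-1.0, 4.0)
z = complex(*g) * complex(*h)       # the same product under x + y*e12 <-> x + y*i
assert even_product(g, h) == (z.real, z.imag)
```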
By introducing a third unit vector $\mathbf{e}_3$ into $\mathbb{R}^2$, along
the $z$-axis, we get the $3$-dimensional space $\mathbb{R}^3$. All of the formulas found in $\mathbb{R}^2$ can then
be extended to $\mathbb{R}^3$, and by the same process, to any higher $n$-dimensional space $\mathbb{R}^n$ for $n>3$. Geometric algebras can always be extended to higher dimensional geometric algebras simply by introducing additional orthogonal anti-commuting unit vectors with square $\pm 1$, \cite{GeoReal,Hyprevisit}.
Let us see how the
formulas (\ref{dotouter2}) work out explicitly in
\begin{equation} \mathbb{R}^3 := \{ \mathbf{x}| \quad \mathbf{x} = (x,y,z ) = x \mathbf{e}_1+y \mathbf{e}_2 + z \mathbf{e}_3 \}, \label{defR3} \end{equation}
for $x,y,z \in \mathbb{R}$. For vectors
\[ \mathbf{a} = a_1 \mathbf{e}_1 + a_2 \mathbf{e}_2+a_3 \mathbf{e}_3, \quad \mathbf{b} = b_1 \mathbf{e}_1 + b_2 \mathbf{e}_2+b_3 \mathbf{e}_3,\] we calculate
\[ \mathbf{a} \mathbf{b} =( a_1 \mathbf{e}_1 + a_2 \mathbf{e}_2+a_3 \mathbf{e}_3) ( b_1 \mathbf{e}_1 + b_2 \mathbf{e}_2+b_3 \mathbf{e}_3) \]
\[ = a_1b_1 \mathbf{e}_1^2 + a_2 b_2 \mathbf{e}_2^2 + a_3 b_3 \mathbf{e}_3^2 \]
\[ + a_1b_2 \mathbf{e}_1 \mathbf{e}_2 + a_2 b_1 \mathbf{e}_2 \mathbf{e}_1 + a_2 b_3 \mathbf{e}_2 \mathbf{e}_3 + a_3 b_2 \mathbf{e}_3 \mathbf{e}_2 + a_1 b_3 \mathbf{e}_1 \mathbf{e}_3 + a_3 b_1 \mathbf{e}_3 \mathbf{e}_1 \]
\[ =( a_1b_1 + a_2 b_2 + a_3 b_3) + ( a_1b_2 - a_2 b_1) \mathbf{e}_{12} +
( a_2 b_3 - a_3 b_2) \mathbf{e}_{23} + (a_1 b_3 - a_3 b_1) \mathbf{e}_{13} \]
where the {\it dot} or {\it inner product},
\[ \mathbf{a} \cdot \mathbf{b} =a_1b_1 + a_2 b_2 + a_3 b_3=|\mathbf{a} ||\mathbf{b} |\cos \theta , \]
and the outer product (\ref{perpbivector}),
\begin{equation} \mathbf{a} \wedge \mathbf{b} =( a_1b_2 - a_2 b_1) \mathbf{e}_{12} +
( a_2 b_3 - a_3 b_2) \mathbf{e}_{23} + (a_1 b_3 - a_3 b_1) \mathbf{e}_{13}=|\mathbf{a} ||\mathbf{b} |\hat \mathbf{B} \sin \theta . \label{awbarea} \end{equation}
The sum of the three bivector components, which are projections onto the coordinate planes, is shown in Figure \ref{3dbivector}.
\begin{figure}
\caption{Bivector decomposition in 3D space.}
\label{3dbivector}
\end{figure}
In $\mathbb{R}^3$, the outer product $\mathbf{a} \wedge \mathbf{b} $ can be expressed in terms of the well known {\it cross product} of the century old, pre-Einstein Gibbs-Heaviside vector analysis.
The vector cross product of the vectors $\mathbf{a} $ and $\mathbf{b}$ is defined by
\[ \mathbf{a} \times \mathbf{b} := \det \pmatrix{\mathbf{e}_1 & \mathbf{e}_2 & \mathbf{e}_3 \cr
a_1 & a_2 & a_3 \cr b_1 & b_2 & b_3 }
= ( a_2b_3 - a_3 b_2) \mathbf{e}_{1} -
( a_1 b_3 - a_3 b_1) \mathbf{e}_{2} + (a_1 b_2 - a_2 b_1) \mathbf{e}_{3} \]
\begin{equation} = |\mathbf{a} || \mathbf{b} | \sin \theta \, \hat \mathbf{n} ,\label{vec-cross-product} \end{equation} where $ \hat \mathbf{n} :=
\frac{\mathbf{a} \times \mathbf{b} }{|\mathbf{a} \times \mathbf{b} |}$ .
Defining the {\it unit trivector} or {\it pseudoscalar} of $\mathbb{R}^3$,
\begin{equation} I := \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3 = \mathbf{e}_{123}, \label{pseudoi} \end{equation}
the relationship (\ref{vec-cross-product}) and (\ref{pseudoi}) can be combined into
\begin{equation} \mathbf{a} \wedge \mathbf{b} = I (\mathbf{a} \times \mathbf{b}) = |\mathbf{a} || \mathbf{b} | \sin \theta \, I \hat \mathbf{n} , \label{crosswedge} \end{equation}
as can be easily verified. We say that
the vector $\mathbf{a} \times \mathbf{b}$ is {\it dual} to, or the {\it right hand normal} of, the bivector $\mathbf{a} \wedge \mathbf{b}$, shown in Figure \ref{crossproductab}. Note that we are using the symbol $I=\mathbf{e}_{123}$ for the unit trivector or {\it pseudoscalar} of $\mathbb{G}_3$ to
distinguish it from $i=\mathbf{e}_{12}$, the unit bivector of $\mathbb{G}_{2}$.
\begin{figure}
\caption{The vector cross product $\mathbf{a} \times \mathbf{b}$ is the {\it right hand normal} dual to the
bivector $\mathbf{a} \wedge \mathbf{b} =I(\mathbf{a} \times \mathbf{b}) $. Also, $|\mathbf{a} \times \mathbf{b} |=|\mathbf{a} \wedge \mathbf{b} |$.}
\label{crossproductab}
\end{figure}
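The duality (\ref{crosswedge}) can be verified component-wise: since $I\mathbf{e}_1=\mathbf{e}_{23}$, $I\mathbf{e}_2=-\mathbf{e}_{13}$, and $I\mathbf{e}_3=\mathbf{e}_{12}$, the coefficients of $\mathbf{a}\wedge\mathbf{b}$ are a signed rearrangement of those of $\mathbf{a}\times\mathbf{b}$. A Python sketch (illustrative only) checking this on sample vectors:

```python
def cross(a, b):
    """Gibbs-Heaviside cross product a x b."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def wedge(a, b):
    """Coefficients of a ^ b on the bivector basis (e12, e23, e13)."""
    return (a[0]*b[1] - a[1]*b[0],
            a[1]*b[2] - a[2]*b[1],
            a[0]*b[2] - a[2]*b[0])

a, b = (1.0, 2.0, 3.0), (-2.0, 0.5, 1.0)
c, w = cross(a, b), wedge(a, b)
# Since I e1 = e23, I e2 = -e13, I e3 = e12, the duality a^b = I(a x b) reads:
assert w == (c[2], c[0], -c[1])
```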
We have seen in (\ref{geoproductab}) that the geometric product of two vectors
decomposes into two parts, a scalar part and a bivector part. We now calculate the
geometric product of three vectors $\mathbf{a},\mathbf{b}, \mathbf{c}$.
\[ \mathbf{a} \mathbf{b} \mathbf{c} = \mathbf{a} (\mathbf{b} \cdot \mathbf{c} + \mathbf{b} \wedge \mathbf{c} ) = (\mathbf{b} \cdot \mathbf{c}) \mathbf{a} + \mathbf{a} (\mathbf{b} \wedge \mathbf{c}) \]
\[ =(\mathbf{b} \cdot \mathbf{c}) \mathbf{a} + \mathbf{a}\cdot (\mathbf{b} \wedge \mathbf{c}) + \mathbf{a} \wedge \mathbf{b} \wedge \mathbf{c} . \]
This shows that the geometric product of three vectors consists of a vector part
\[ (\mathbf{b} \cdot \mathbf{c} )\mathbf{a} + \mathbf{a} \cdot (\mathbf{b} \wedge \mathbf{c}) = (\mathbf{b} \cdot \mathbf{c} )\mathbf{a} + (\mathbf{a} \cdot \mathbf{b}) \mathbf{c} - (\mathbf{a} \cdot \mathbf{c} )\mathbf{b} \]
and the trivector part $\mathbf{a} \wedge \mathbf{b} \wedge \mathbf{c}$. For the vectors
\[ \mathbf{a} = a_1 \mathbf{e}_1 + a_2 \mathbf{e}_2 + a_3 \mathbf{e}_3, \ \mathbf{b} = b_1 \mathbf{e}_1 + b_2 \mathbf{e}_2 + b_3 \mathbf{e}_3, \ \mathbf{c} = c_1 \mathbf{e}_1 + c_2 \mathbf{e}_2 + c_3 \mathbf{e}_3 , \]
the trivector
\begin{equation} \mathbf{a} \wedge \mathbf{b} \wedge \mathbf{c} = \det{\pmatrix{a_1 & a_2 & a_3 \cr
b_1 & b_2 & b_3 \cr c_1 & c_2 & c_3}} \mathbf{e}_{123}
=\Big(( \mathbf{a} \times \mathbf{b}) \cdot \mathbf{c} \Big)I . \label{detabc} \end{equation}
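The identity (\ref{detabc}), equating the trivector coefficient with the scalar triple product $(\mathbf{a}\times\mathbf{b})\cdot\mathbf{c}$, admits a quick numerical check (an illustrative sketch, not part of the text):

```python
def det3(a, b, c):
    """Determinant of the 3x3 matrix with rows a, b, c (cofactor expansion)."""
    return (a[0]*(b[1]*c[2] - b[2]*c[1])
          - a[1]*(b[0]*c[2] - b[2]*c[0])
          + a[2]*(b[0]*c[1] - b[1]*c[0]))

def triple(a, b, c):
    """Scalar triple product (a x b) . c."""
    ab = (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
    return ab[0]*c[0] + ab[1]*c[1] + ab[2]*c[2]

a, b, c = (1, 2, 0), (0, 1, 3), (2, -1, 1)
assert det3(a, b, c) == triple(a, b, c) == 16
```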
By the {\it standard basis} of the
geometric algebra $\mathbb{G}_3$ of the $3$-dimensional Euclidean space $\mathbb{R}^3$, we mean
\[ \mathbb{G}_3 := span_\mathbb{R} \{ 1, \mathbf{e}_1 ,\mathbf{e}_2, \mathbf{e}_3, \mathbf{e}_{12},\mathbf{e}_{13}, \mathbf{e}_{23}, \mathbf{e}_{123} \}= \mathbb{R}(\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3) . \]
A general geometric number of $\mathbb{G}_3$ is
\[ g= g_0 + \mathbf{v} + B + T \]
where $g_0 \in \mathbb{R} $, $\mathbf{v} = v_1 \mathbf{e}_1 + v_2 \mathbf{e}_2 + v_3 \mathbf{e}_3$ is a vector,
$B = b_{12}\mathbf{e}_{12} + b_{23}\mathbf{e}_{23} + b_{13}\mathbf{e}_{13}$ is a bivector,
and $T=t I$, for $t\in \mathbb{R} $, is a {\it trivector} or {\it directed volume element}. Note that just like
the unit bivector $i=\mathbf{e}_{12}$ has square $i^2=-1$, the unit trivector
$I=\mathbf{e}_{123}$ of space has square $I^2=-1$, as follows from the calculation
\[ I^2 = (\mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3)(\mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3) =(\mathbf{e}_1 \mathbf{e}_2) (\mathbf{e}_1 \mathbf{e}_2)
\mathbf{e}_3^2 = (-1)(+1)=-1. \]
Another important property of the pseudoscalar $I$ is that it commutes with all vectors in $\mathbb{R}^3$, and hence with all geometric numbers in $\mathbb{G}_3$.
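The algebra $\mathbb{G}_3$ has a standard concrete model in which $\mathbf{e}_1,\mathbf{e}_2,\mathbf{e}_3$ are represented by the three Pauli matrices and the geometric product becomes matrix multiplication. The following Python sketch (an aside for the reader, not part of the text) verifies in this model that the unit vectors square to $1$ and anticommute, that $I^2=-1$, and that $I$ commutes with every vector:

```python
def mm(A, B):
    """Product of 2x2 complex matrices."""
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

e1 = [[0, 1], [1, 0]]        # Pauli matrix sigma_1
e2 = [[0, -1j], [1j, 0]]     # Pauli matrix sigma_2
e3 = [[1, 0], [0, -1]]       # Pauli matrix sigma_3
ident = [[1, 0], [0, 1]]

for e in (e1, e2, e3):
    assert mm(e, e) == ident                 # each unit vector squares to 1
assert mm(e1, e2) == [[-z for z in row] for row in mm(e2, e1)]  # anticommutation

I = mm(mm(e1, e2), e3)       # pseudoscalar I = e123 (here i times the identity)
assert mm(I, I) == [[-1, 0], [0, -1]]        # I^2 = -1
for e in (e1, e2, e3):
    assert mm(I, e) == mm(e, I)              # I commutes with every vector
```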
\section{Analytic Geometry}
\begin{figure}
\caption{The vector $\mathbf{x} $ is decomposed into parallel and perpendicular components with respect to the vector $\hat \mathbf{a} $.}
\label{decompvec}
\end{figure}
Given a vector $\mathbf{x}$ and a unit vector $\hat \mathbf{a}$, we wish to express $\mathbf{x} =\mathbf{x}_\parallel + \mathbf{x}_\perp $ where $\mathbf{x}_\parallel$ is parallel to $\hat \mathbf{a}$, and $\mathbf{x}_\perp$ is perpendicular to $\hat \mathbf{a} $, as shown in Figure \ref{decompvec}. Since $\hat \mathbf{a} \hat \mathbf{a}= 1$, and using the associative law,
\begin{equation}
\mathbf{x} = (\mathbf{x} \hat \mathbf{a}) \hat \mathbf{a} = (\mathbf{x} \cdot \hat \mathbf{a} )\hat \mathbf{a} + (\mathbf{x} \wedge \hat \mathbf{a} )\hat \mathbf{a} = \mathbf{x}_\parallel + \mathbf{x}_\perp, \label{vecdeomp} \end{equation} where
\[ \mathbf{x}_\parallel = (\mathbf{x} \cdot \hat \mathbf{a} )\hat \mathbf{a} \quad {\rm and} \quad \mathbf{x}_\perp = (\mathbf{x} \wedge \hat \mathbf{a})\hat \mathbf{a} = \mathbf{x} - \mathbf{x}_\parallel . \] We could also accomplish this decomposition by writing \[ \mathbf{x} =\hat \mathbf{a} ( \hat \mathbf{a} \mathbf{x}) = \hat \mathbf{a} (\hat \mathbf{a} \cdot \mathbf{x} ) + \hat \mathbf{a} (\hat \mathbf{a} \wedge \mathbf{x} ) = \mathbf{x}_\parallel + \mathbf{x}_\perp . \] It follows that $\mathbf{x}_\parallel = (\mathbf{x} \cdot \hat \mathbf{a}) \hat \mathbf{a} =\hat \mathbf{a} (\mathbf{x} \cdot \hat \mathbf{a} ) $ as expected, and
\[ \mathbf{x}_\perp = (\mathbf{x} \wedge \hat \mathbf{a} )\cdot \hat \mathbf{a} = \hat \mathbf{a} \cdot (\hat \mathbf{a} \wedge \mathbf{x} )=
-\hat \mathbf{a} \cdot (\mathbf{x} \wedge \hat \mathbf{a}) , \]
in agreement with (\ref{adotbc1}).
\begin{figure}\label{lineeq}
\end{figure}
One of the simplest problems in analytic geometry is the following: given a vector $\mathbf{a} $ and a point $\mathbf{x}_0$, what is the equation of the line passing through the point $\mathbf{x}_0$ in the direction of the vector $\mathbf{a}$? The line $L_{\mathbf{x}_0}(\mathbf{a})$ is given by
\[ L_{\mathbf{x}_0}(\mathbf{a}):= \{ \mathbf{x}| \ \ (\mathbf{x}-\mathbf{x}_0)\wedge \mathbf{a} =0 \} .\] The equation \[ (\mathbf{x}-\mathbf{x}_0)\wedge \mathbf{a} =0 \ \ \iff \ \ \mathbf{x} = \mathbf{x}_0 + t \mathbf{a} , \] for $t \in \mathbb{R}$, see Figure \ref{lineeq}. \begin{figure}\label{distance}
\end{figure}
Given the line $L_{\mathbf{x}_0}(\mathbf{a} )$, and a point $\mathbf{p} $, let us find the point $\mathbf{x} $ on the line $L_{\mathbf{x}_0}(\mathbf{a} )$ which is closest to the point $\mathbf{p}$, and the distance $|\mathbf{x} - \mathbf{p} |$ from $\mathbf{x} $ to $\mathbf{p} $. Referring to Figure \ref{distance}, and using the decomposition (\ref{vecdeomp}) to project $\mathbf{p} - \mathbf{x}_0$ onto the vector $\hat \mathbf{a} $, we find \[ \mathbf{x} = \mathbf{x}_0 + [(\mathbf{p} - \mathbf{x}_0 )\cdot \hat \mathbf{a} ] \hat \mathbf{a} , \] so, with the help of (\ref{geoproductab}) and (\ref{vecdeomp}), \begin{equation} \mathbf{x} - \mathbf{p}=
(\mathbf{x}_0 - \mathbf{p} )- [(\mathbf{x}_0 - \mathbf{p} )\cdot \hat \mathbf{a} ] \hat \mathbf{a} =(\mathbf{x}_0 - \mathbf{p} )_\perp , \label{perpdist} \end{equation}
where $(\mathbf{x}_0 - \mathbf{p})_\perp$ is the component of $\mathbf{x}_0 - \mathbf{p}$ perpendicular to $\mathbf{a} $. Using (\ref{perpdist}), the distance of the point $\mathbf{p}$ to the line is
\[ | \mathbf{x} - \mathbf{p}| =\sqrt{(\mathbf{x} - \mathbf{p})^2}
= \sqrt{(\mathbf{x}_0-\mathbf{p} )^2- \big((\mathbf{x}_0 - \mathbf{p} )\cdot \hat \mathbf{a}\big)^2}=| (\mathbf{x}_0 - \mathbf{p} )_\perp|, \]
see Figure \ref{distance}.
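The projection and distance formulas above translate directly into code. A Python sketch (illustrative only; the function name and sample data are ours) computing the closest point and checking the perpendicular-distance formula:

```python
import math

def closest_point(x0, ah, p):
    """Point on the line through x0 with unit direction ah that is closest to p."""
    t = sum((pi - qi)*ai for pi, qi, ai in zip(p, x0, ah))
    return [qi + t*ai for qi, ai in zip(x0, ah)]

x0, ah, p = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [4.0, 2.0, 0.0]
x = closest_point(x0, ah, p)
assert x == [1.0, 2.0, 0.0]

# |x - p| = sqrt((x0 - p)^2 - ((x0 - p) . ah)^2), the perpendicular distance
d2 = sum((qi - pi)**2 for qi, pi in zip(x0, p))
dproj = sum((qi - pi)*ai for qi, pi, ai in zip(x0, p, ah))
assert math.isclose(math.dist(x, p), math.sqrt(d2 - dproj**2))
assert math.isclose(math.dist(x, p), 3.0)
```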
\subsection{The exponential function and rotations}
The Euler exponential function arises naturally from the geometric product (\ref{geoproductab}). With the help of (\ref{reverseab}) and (\ref{crosswedge}), and noting that $(I\hat\mathbf{n})^2 = -1 $, the geometric product of two unit vectors $\hat\mathbf{a}$ and $\hat\mathbf{b} $ in $\mathbb{R}^3$ is \begin{equation} \hat \mathbf{a} \hat \mathbf{b} = \hat \mathbf{a} \cdot \hat \mathbf{b} + \hat \mathbf{a} \wedge \hat \mathbf{b} = \cos \theta +I \hat \mathbf{n} \sin \theta = e^{\theta I \hat \mathbf{n} }, \label{abrot} \end{equation} where $\cos \theta := \hat \mathbf{a} \cdot \hat \mathbf{b} $. Similarly, \begin{equation} \hat \mathbf{b} \hat \mathbf{a} = \hat \mathbf{b} \cdot \hat \mathbf{a} + \hat \mathbf{b} \wedge \hat \mathbf{a} = \cos \theta -I \hat \mathbf{n} \sin \theta = e^{-\theta I \hat \mathbf{n} }. \label{barot} \end{equation}
\begin{figure}\label{rota}
\end{figure}
Let $\hat \mathbf{a} , \hat \mathbf{b} , \hat \mathbf{c} $ be unit vectors in $\mathbb{R}^3$. The equation \begin{equation} (\hat \mathbf{b} \hat \mathbf{a} ) \hat \mathbf{a} = \hat \mathbf{b} (\hat \mathbf{a} \hat \mathbf{a} ) = \hat \mathbf{b} = (\hat \mathbf{a} \hat \mathbf{a}) \hat \mathbf{b} = \hat \mathbf{a} (\hat \mathbf{a} \hat \mathbf{b}), \label{fullangle} \end{equation} shows that when $\hat \mathbf{a} $ is multiplied on the right by $\hat \mathbf{a} \hat \mathbf{b} = e^{\theta I \hat \mathbf{n}}$, or on the left by $\hat \mathbf{b} \hat \mathbf{a} = e^{-\theta I \hat \mathbf{n}}$, it rotates the vector $\hat \mathbf{a} $ through the angle $\theta$ into the vector $\hat \mathbf{b} $.
The composition of rotations can be pictured as the composition of arcs on the unit sphere. The composition of the arc $\widetilde{\hat\mathbf{a} \hat\mathbf{b}}$ on the great circle connecting the points $\hat\mathbf{a}$ and $\hat\mathbf{b}$, with the arc $\widetilde{\hat \mathbf{b} \hat \mathbf{c} }$ connecting $\hat\mathbf{b}$ and $\hat\mathbf{c} $, gives the arc $\widetilde{\hat \mathbf{a} \hat \mathbf{c}}$ connecting $\hat\mathbf{a}$ and $\hat\mathbf{c} $, as shown in Figure \ref{rota}. Symbolically, \[ \widetilde{\hat \mathbf{a} \hat\mathbf{b}}\widetilde{\hat\mathbf{b} \hat\mathbf{c}} := (\hat\mathbf{a} \hat \mathbf{b}) (\hat \mathbf{b} \hat \mathbf{c})=\hat \mathbf{a} \hat \mathbf{c} =: \widetilde{\hat\mathbf{a} \hat\mathbf{c}} . \]
\begin{figure}
\caption{The parallel component $\mathbf{x}_\parallel $ of $\mathbf{x} $ in the plane of $\mathbf{a} \wedge \mathbf{b} $ is rotated through the angle $\theta$, leaving the perpendicular component $\mathbf{x}_\perp$ unchanged.}
\label{xparallelrot}
\end{figure}
By taking the square roots of both sides of equations (\ref{abrot}) and (\ref{barot}), it follows that \[ \sqrt{\hat \mathbf{a} \hat \mathbf{b} } = e^{\frac{1}{2}\theta I \hat \mathbf{n} }, \quad {\rm and} \quad \sqrt{\hat \mathbf{b} \hat \mathbf{a} } = e^{- \frac{1}{2}\theta I \hat \mathbf{n} }. \] Note also that \begin{equation} \hat \mathbf{b} = (\hat \mathbf{b} \hat \mathbf{a}) \hat \mathbf{a} = (\sqrt{\hat\mathbf{b} \hat \mathbf{a} })^2 \hat \mathbf{a} = \sqrt{\hat\mathbf{b} \hat \mathbf{a} }\ \hat \mathbf{a} \, \sqrt{\hat\mathbf{a} \hat \mathbf{b} } = e^{-\frac{1}{2}\theta I \hat \mathbf{n} }\hat \mathbf{a} \, e^{\frac{1}{2}\theta I \hat \mathbf{n} }. \label{halfanglerot} \end{equation} The advantage of the equation (\ref{halfanglerot}) over (\ref{fullangle}) is that it can be applied to rotate any vector $\mathbf{x}$. For $\mathbf{x} =\mathbf{x}_\parallel + \mathbf{x}_\perp $, where $\mathbf{x}_\parallel$ is in the plane of $\mathbf{a} \wedge \mathbf{b} $, and $\mathbf{x}_\perp$ is perpendicular to the plane, we get with the help of (\ref{adotbc}) and (\ref{awedgebc}), \begin{equation} \mathbf{x}_{rot} :=\sqrt{\hat\mathbf{b} \hat \mathbf{a} }\, \mathbf{x} \, \sqrt{\hat\mathbf{a} \hat \mathbf{b} }= e^{-\frac{1}{2}\theta I \hat \mathbf{n} }\, ( \mathbf{x}_\parallel + \mathbf{x}_\perp ) e^{\frac{1}{2}\theta I \hat \mathbf{n} } = e^{-\theta I \hat \mathbf{n} } \mathbf{x}_\parallel + \mathbf{x}_\perp , \label{rotatex} \end{equation} see Figure \ref{xparallelrot}. Formula (\ref{rotatex}) is known as the {\it half angle} representation of a rotation \cite[p.55]{SNF}. A rotation can also be expressed as the composition of two reflections.
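The half angle formula (\ref{rotatex}) can be tested in the Pauli-matrix model of $\mathbb{G}_3$, where the rotor $e^{\pm\frac{1}{2}\theta I\hat\mathbf{n}}=\cos\frac{\theta}{2}\pm\sin\frac{\theta}{2}\, I\hat\mathbf{n}$ is an explicit matrix. The sketch below (illustrative only) rotates $\mathbf{e}_1$ by $90^\circ$ in the $\mathbf{e}_{12}$-plane and confirms the result is $\mathbf{e}_2$:

```python
import math

def mm(A, B):
    """Product of 2x2 complex matrices."""
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Pauli-matrix model of e1, e2, e3
e1 = [[0, 1], [1, 0]]
e2 = [[0, -1j], [1j, 0]]
e3 = [[1, 0], [0, -1]]

def rotor(theta, B):
    """exp(theta*B/2) for a unit bivector B (B^2 = -1): cos + sin * B."""
    c, s = math.cos(theta/2), math.sin(theta/2)
    return [[c*(i == j) + s*B[i][j] for j in range(2)] for i in range(2)]

B = mm(e1, e2)                 # the unit bivector e12 = I e3 (rotation plane)
theta = math.pi/2
x_rot = mm(mm(rotor(-theta, B), e1), rotor(theta, B))
# Rotating e1 through 90 degrees in the e12-plane gives e2:
assert all(abs(x_rot[i][j] - e2[i][j]) < 1e-12 for i in range(2) for j in range(2))
```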
\subsection{Reflections}
A bivector characterizes the direction of a plane. The equation of a plane passing through the origin in the direction of the bivector $\mathbf{a} \wedge \mathbf{b}$ is
\begin{equation} Plane_0(\mathbf{a} \wedge \mathbf{b}) = \{ \mathbf{x} | \ \ \mathbf{x} \wedge \mathbf{a} \wedge \mathbf{b} =0\}. \label{planeab} \end{equation}
The condition that $\mathbf{x} \wedge \mathbf{a} \wedge \mathbf{b} =0$ tells us that $\mathbf{x} $ is in the plane of the bivector $\mathbf{a} \wedge \mathbf{b}$, or
\[ \mathbf{x} = t_a \mathbf{a} + t_b \mathbf{b} ,\]
where $t_a, t_b \in \mathbb{R}$. This is the parametric equation of a plane passing through the origin having the direction of the bivector $\mathbf{a} \wedge \mathbf{b}$. If, instead, we want the
equation of a plane passing through a given point $\mathbf{x}_0$ and having the direction of
the bivector $\mathbf{a} \wedge \mathbf{b}$, we have
\begin{equation} Plane_{\mathbf{x}_0} (\mathbf{a} \wedge \mathbf{b}) = \{ \mathbf{x} | \ \ (\mathbf{x}-\mathbf{x}_0) \wedge \mathbf{a} \wedge \mathbf{b} =0\}, \label{planeabx0} \end{equation}
with the corresponding parametric equation
\[ \mathbf{x} = \mathbf{x}_0 + t_a \mathbf{a} + t_b \mathbf{b}. \]
For a plane in $\mathbb{R}^3$, when $\mathbf{x}=(x,y,z)$ and $\mathbf{x}_0 = (x_0,y_0,z_0)$, using (\ref{detabc}) and (\ref{planeabx0}),
\[ Plane_{\mathbf{x}_0}(\mathbf{a} \wedge \mathbf{b} )
= \{ \mathbf{x} | \ \ \det{\pmatrix{x-x_0 & y-y_0 & z - z_0 \cr
a_1 & a_2 & a_3 \cr b_1 & b_2 & b_3} } =0\}, \]
which is equivalent to the well known equation of a plane through the point $\mathbf{x}_0$,
\[ (\mathbf{x} - \mathbf{x}_0)\cdot \mathbf{n} = 0 , \]
where $\mathbf{n} = \mathbf{a} \times \mathbf{b} $ is the {\it normal vector} to the bivector
$\mathbf{a} \wedge \mathbf{b} $ of the plane, see Figure \ref{planeeq}.
\begin{figure}
\caption{The point $\mathbf{x}$ is in the plane passing through the point $\mathbf{x}_0$ and
having the direction of the bivector $\mathbf{a} \wedge \mathbf{b}$. }
\label{planeeq}
\end{figure}
Given a vector $\mathbf{x} $ and a unit bivector $\mathbf{a} \wedge \mathbf{b} $, we decompose $\mathbf{x} $ into a part $\mathbf{x}_\parallel$ parallel to $\mathbf{a} \wedge \mathbf{b} $, and a part $\mathbf{x}_\perp$ perpendicular to $\mathbf{a} \wedge \mathbf{b}$. Since by (\ref{awedgebc})
\[ \mathbf{x}_\parallel \wedge \mathbf{a} \wedge \mathbf{b} =\frac{1}{2}\Big(\mathbf{x}_\parallel (\mathbf{a} \wedge \mathbf{b})+(\mathbf{a} \wedge \mathbf{b})\mathbf{x}_\parallel \Big) = 0 ,\]
and by (\ref{adotbc}),
\[ \mathbf{x}_\perp \cdot ( \mathbf{a} \wedge \mathbf{b}) =\frac{1}{2}\Big(\mathbf{x}_\perp (\mathbf{a} \wedge \mathbf{b})-(\mathbf{a} \wedge \mathbf{b})\mathbf{x}_\perp \Big) = 0, \]
it follows that the parallel and perpendicular parts of $\mathbf{x} $ anti-commute and
commute, respectively, with the bivector $\mathbf{a}\wedge \mathbf{b}$. Remembering that
$(\mathbf{a} \wedge \mathbf{b} )^2 = -1$, it follows that
\begin{equation} (\mathbf{a} \wedge \mathbf{b} )\mathbf{x} (\mathbf{a} \wedge \mathbf{b})= (\mathbf{a} \wedge \mathbf{b} )(\mathbf{x}_\parallel + \mathbf{x}_\perp)(\mathbf{a} \wedge \mathbf{b})=\mathbf{x}_\parallel - \mathbf{x}_\perp \label{awb-mirror}. \end{equation}
This is the general formula for the reflection of a vector $\mathbf{x}$ in a mirror
in the plane of the unit bivector $\mathbf{a} \wedge \mathbf{b}$.
When we are in the $3$-dimensional space $\mathbb{R}^3$, the unit bivector
\[ \mathbf{a} \wedge \mathbf{b} = I (\mathbf{a} \times \mathbf{b} )= I \hat \mathbf{n} . \]
In this case, the reflection (\ref{awb-mirror}) takes the form
\begin{equation} (\mathbf{a} \wedge \mathbf{b} )\mathbf{x} (\mathbf{a} \wedge \mathbf{b})= - \hat \mathbf{n} \mathbf{x} \hat \mathbf{n} = -\hat \mathbf{n} (\mathbf{x}_\parallel+ \mathbf{x}_\perp)\hat \mathbf{n} = \mathbf{x}_\parallel - \mathbf{x}_\perp. \label{axb-mirror} \end{equation}
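Since $\hat\mathbf{n}\mathbf{x}=2\hat\mathbf{n}\cdot\mathbf{x}-\mathbf{x}\hat\mathbf{n}$, formula (\ref{axb-mirror}) unwinds to the classical reflection $-\hat\mathbf{n}\mathbf{x}\hat\mathbf{n}=\mathbf{x}-2(\hat\mathbf{n}\cdot\mathbf{x})\hat\mathbf{n}$. A Python sketch (illustrative only) confirms this agreement in the Pauli-matrix model:

```python
import math

def mm(A, B):
    """Product of 2x2 complex matrices."""
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

E = ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])  # e1, e2, e3

def vec(v):
    """The vector v1 e1 + v2 e2 + v3 e3 in the Pauli-matrix model of G3."""
    return [[sum(v[k]*E[k][i][j] for k in range(3)) for j in range(2)]
            for i in range(2)]

n = [1/math.sqrt(2), 0.0, 1/math.sqrt(2)]   # unit normal of the mirror plane
x = [1.0, 2.0, 3.0]

lhs = [[-z for z in row] for row in mm(mm(vec(n), vec(x)), vec(n))]  # -n x n
d = sum(xi*ni for xi, ni in zip(x, n))
refl = [xi - 2*d*ni for xi, ni in zip(x, n)]     # x - 2(n.x)n
rhs = vec(refl)
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12 for i in range(2) for j in range(2))
```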
Since a rotation in $\mathbb{R}^3$ is generated by two consecutive reflections about two planes with normal unit vectors $\hat \mathbf{n}_1 $ and $\hat \mathbf{n}_2$, we have
\begin{equation} \mathbf{x}_{rot} = - \hat\mathbf{n}_2 (- \hat \mathbf{n}_1 \mathbf{x} \hat \mathbf{n}_1) \hat \mathbf{n}_2 =
(\hat \mathbf{n}_2 \hat \mathbf{n}_1 ) \mathbf{x} (\hat \mathbf{n}_1 \hat \mathbf{n}_2 ). \label{rotinR3} \end{equation}
Letting $\hat\mathbf{n}_1 \hat \mathbf{n}_2=e^{\frac{1}{2} \theta I \hat \mathbf{n} }$ where
\[ \hat \mathbf{n} := \frac{\hat \mathbf{n}_1\times \hat \mathbf{n}_2}{|\hat \mathbf{n}_1\times \hat \mathbf{n}_2| }, \] the formula for the rotation (\ref{rotinR3}) becomes \begin{equation} \mathbf{x}_{rot} = (\hat \mathbf{n}_2 \hat \mathbf{n}_1 ) \mathbf{x} (\hat \mathbf{n}_1 \hat \mathbf{n}_2 ) =e^{-\frac{1}{2} \theta I \hat \mathbf{n} }\mathbf{x} e^{\frac{1}{2} \theta I \hat \mathbf{n} } =e^{-\frac{1}{2} \theta I \hat \mathbf{n} }\mathbf{x}_\parallel e^{\frac{1}{2} \theta I \hat \mathbf{n} } + \mathbf{x}_\perp , \label{rotinR3a} \end{equation} which is equivalent to (\ref{rotatex}).
\section{Stereographic projection and a bit of quantum mechanics}
\begin{figure}\label{sterox}
\end{figure}
As a final demonstration of the flexibility and power of geometric algebra, we discuss stereographic projection from the unit sphere $S^2 \subset \mathbb{R}^3$, defined by
\[ S^2 := \{\hat \mathbf{a} | \quad \hat \mathbf{a}^2 =1 \ \ {\rm and} \ \ \hat \mathbf{a} \in \mathbb{R}^3 \}, \] onto $\mathbb{R}^2$. The mapping $\mathbf{x} = f(\hat \mathbf{a})\in \mathbb{R}^2 $ defining stereographic projection is
\begin{equation} \mathbf{x} = f(\hat \mathbf{a}) := \frac{2}{\hat \mathbf{a} + \mathbf{e}_3}-\mathbf{e}_3 , \ \ {\rm where} \ \
\hat \mathbf{a} \in S^2,\label{stereoproj} \end{equation} and is pictured in Figure \ref{sterox}. A 2-D cut away in the plane of the great circle, defined by the points $\mathbf{e}_3, \hat \mathbf{a}$, and the origin, is shown in Figure \ref{sterox2}.
Stereographic projection is an example of a {\it conformal mapping}, which preserves angles, and has many important applications in mathematics, physics, and more recently in robotics \cite{ECGS01,Sob2012}.
\begin{figure}
\caption{A 2-D cut away in the plane of the great circle through the points
$\mathbf{e}_3, \hat \mathbf{a} $, and $-\hat \mathbf{a} $ on $S^2$. }
\label{sterox2}
\end{figure}
In working with the mapping (\ref{stereoproj}), it is convenient to use the new variable $\mathbf{m} = \mathbf{x} + \mathbf{e}_3$, in which case the mapping takes the simpler form
\begin{equation} \mathbf{m} = \frac{2}{\hat \mathbf{a} + \mathbf{e}_3} = \frac{2(\hat \mathbf{a} + \mathbf{e}_3) }
{ (\hat \mathbf{a} + \mathbf{e}_3)^2} = \frac{\hat \mathbf{a} + \mathbf{e}_3 }{1+
\hat \mathbf{a} \cdot \mathbf{e}_3} . \label{stereoprojm} \end{equation}
The effect of this change of variable is to map points $\mathbf{x} \in \mathbb{R}^2$ into corresponding
points $\mathbf{m}$ in the plane $Plane_{\mathbf{e}_3}(\mathbf{e}_{12})$ passing through the point $\mathbf{e}_3$ and parallel to $\mathbb{R}^2= Plane_0(\mathbf{e}_{12} )$. Noting that
\[ \mathbf{e}_3 \cdot \mathbf{m} =\mathbf{e}_3 \cdot \Big( \frac{\hat \mathbf{a} + \mathbf{e}_3 }{1+
\hat \mathbf{a} \cdot \mathbf{e}_3}\Big) = 1, \]
and solving the equation (\ref{stereoprojm}) for $\hat \mathbf{a} $, gives with the help of (\ref{inversevec}) and (\ref{reverseba}),
\[ \hat \mathbf{a} = \frac{2}{\mathbf{m}}-\mathbf{e}_3= \mathbf{m}^{-1}\big(2- \mathbf{m} \mathbf{e}_3 \big) \]
\begin{equation} = \frac{\hat \mathbf{m} }{|\mathbf{m} |}\big(2+\mathbf{e}_3 \mathbf{m} -2 \mathbf{e}_3\cdot \mathbf{m} \big)
=\hat \mathbf{m} \mathbf{e}_3 \hat \mathbf{m} . \label{ahateqn} \end{equation}
We also have
\begin{equation} \hat \mathbf{a}=\hat \mathbf{m} \mathbf{e}_3 \hat \mathbf{m} =(\hat \mathbf{m} \mathbf{e}_3)\mathbf{e}_3(\mathbf{e}_3 \hat \mathbf{m}) =(-I\hat \mathbf{m}) \mathbf{e}_3 (I\hat \mathbf{m}) , \label{ahateqn3} \end{equation}
showing that $\hat \mathbf{a}$ is obtained by a rotation of $\mathbf{e}_3$ in the plane of $\hat \mathbf{m}\wedge \mathbf{e}_3$ through an
angle of $2\theta$ where $\cos\theta:=\mathbf{e}_3\cdot \hat \mathbf{m}$, or equivalently,
by a rotation of $\mathbf{e}_3$ in the plane of $I \hat \mathbf{m}$ through an angle of $\pi$.
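Formula (\ref{ahateqn}) can be tested numerically using only vector algebra, since the sandwich $\hat\mathbf{m}\mathbf{e}_3\hat\mathbf{m}$ equals $2(\hat\mathbf{m}\cdot\mathbf{e}_3)\hat\mathbf{m}-\mathbf{e}_3$. The sketch below (illustrative only) projects a point of $S^2$, then recovers it from $\mathbf{m}$:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c*c for c in v))
    return [c/n for c in v]

a_hat = normalize([1.0, 2.0, 2.0])          # a point on the unit sphere S^2
e3 = [0.0, 0.0, 1.0]

# m = (a_hat + e3)/(1 + a_hat . e3), from (stereoprojm)
denom = 1 + a_hat[2]
m = [(ai + ei)/denom for ai, ei in zip(a_hat, e3)]
m_hat = normalize(m)

# m_hat e3 m_hat = 2 (m_hat . e3) m_hat - e3, so (ahateqn) predicts:
d = m_hat[2]
recovered = [2*d*mi - ei for mi, ei in zip(m_hat, e3)]
assert all(abs(r - ai) < 1e-12 for r, ai in zip(recovered, a_hat))
```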
Quantum mechanics displays many surprising, amazing, and almost magical properties, which defy the classical mechanics of everyday experience.
If the {\it quantum spin state} of an electron is put into a spin state $\hat \mathbf{a} \in S^2 $ by a strong magnetic field at a given time, then the {\it probability of observing the electron's spin} in the spin state $\hat\mathbf{b} \in S^2$ at a time immediately thereafter is
\begin{equation} prob_{\hat \mathbf{a}}^+(\hat \mathbf{b} ) := \frac{1}{2}(1 + \hat \mathbf{a} \cdot \hat \mathbf{b} )=
1- \frac{(\mathbf{m}_a - \mathbf{m}_b)^2 }{\mathbf{m}_a^2 \mathbf{m}_b^2 }, \label{probab} \end{equation}
where
\[ \hat \mathbf{a} = \frac{2}{\mathbf{m}_a }- \mathbf{e}_3 \quad {\rm and} \quad
\hat \mathbf{b} = \frac{2}{\mathbf{m}_b }- \mathbf{e}_3, \]
see \cite{SNov16,SMar17}.
On the other hand, the {\it probability of a photon being emitted}
by an electron prepared in a spin state $\hat \mathbf{b} $, when it is forced by a magnetic field into the spin state $\hat \mathbf{a} $ is
\begin{equation} prob_{\hat \mathbf{a}}^-(\hat \mathbf{b} ) := \frac{1}{2}(1 - \hat \mathbf{a} \cdot \hat \mathbf{b} )=
\frac{(\mathbf{m}_a - \mathbf{m}_b)^2 }{\mathbf{m}_a^2 \mathbf{m}_b^2 }. \label{probabminus} \end{equation}
Whenever a photon is emitted, it has {\it exactly the same energy}, regardless of the angle
$\theta$ between the spin states $\hat\mathbf{a}$ and $\hat \mathbf{b} $, \cite{SLN06,SLNY}.
A plot of these two probability functions is given in Figure \ref{plotplusminus}.
The equalities in (\ref{probab}) and (\ref{probabminus}) show that $prob_{\hat \mathbf{a} }^\pm(\hat \mathbf{b} )$ is directly related to the Euclidean distances between the points $\mathbf{m}_a,\mathbf{m}_b \in Plane_{\mathbf{e}_3}(\mathbf{e}_{12})$.
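The second equality in (\ref{probabminus}) is an exact identity of stereographic projection, since $\mathbf{m}^2=2/(1+\hat\mathbf{a}\cdot\mathbf{e}_3)$. A Python sketch (illustrative only) checks it on sample spin states:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c*c for c in v))
    return [c/n for c in v]

def stereo_m(a_hat):
    """m = (a_hat + e3)/(1 + a_hat . e3), valid for a_hat != -e3."""
    d = 1 + a_hat[2]
    return [a_hat[0]/d, a_hat[1]/d, (a_hat[2] + 1)/d]

a = normalize([1.0, 2.0, -0.5])
b = normalize([-2.0, 0.5, 1.0])
ma, mb = stereo_m(a), stereo_m(b)

dot_ab = sum(x*y for x, y in zip(a, b))
diff2 = sum((x - y)**2 for x, y in zip(ma, mb))
ma2 = sum(x*x for x in ma)
mb2 = sum(x*x for x in mb)
# (probabminus): (1 - a.b)/2 = (m_a - m_b)^2 / (m_a^2 m_b^2)
assert math.isclose((1 - dot_ab)/2, diff2/(ma2*mb2))
```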
The case when
\[ \mathbf{m}_{a}=\mathbf{m}=\mathbf{x} + \mathbf{e}_3, \ \ \hat \mathbf{b} =-\hat \mathbf{a}, \ \ {\rm and} \ \
\mathbf{m}_{b} = \mathbf{m}_\perp = - \frac{1}{\mathbf{x}}+ \mathbf{e}_3 \]
is pictured in Figure \ref{sterox2}.
\begin{figure}\label{plotplusminus}
\end{figure}
\end{document}
\begin{document}
\begin{abstract} For a finite alphabet $\mathcal{A}$ and shift $X\subseteq\mathcal{A}^{\mathbb{Z}}$ whose factor complexity function grows at most linearly, we study the algebraic properties of the automorphism group ${\rm Aut}(X)$. For such systems, we show that every finitely generated subgroup of ${\rm Aut}(X)$ is virtually $\mathbb{Z}^d$, in contrast to the behavior when the complexity function grows more quickly. With additional dynamical assumptions we show more: if $X$ is transitive, then ${\rm Aut}(X)$ is virtually $\mathbb{Z}$; if $X$ has dense aperiodic points, then ${\rm Aut}(X)$ is virtually $\mathbb{Z}^d$. We also classify all finite groups that arise as the automorphism group of a shift. \end{abstract}
\maketitle
\section{Introduction} Given a finite alphabet $\mathcal{A}$, a shift system $(X,\sigma)$ is a closed set $X\subseteq\mathcal{A}^\mathbb{Z}$ that is invariant under the left shift $\sigma\colon \mathcal{A}^\mathbb{Z}\to\mathcal{A}^\mathbb{Z}$ and its automorphism group ${\rm Aut}(X)$ is the group of homeomorphisms of $X$ that commute with $\sigma$ (these notions are made precise in Section~\ref{sec:notation}). For general shift systems, while ${\rm Aut}(X)$ is countable, it can be quite complicated: for the full shift~\cite{Hedlund2} or for mixing shifts of finite type~\cite{BLR}, ${\rm Aut}(X)$ is not finitely generated and is not amenable (see also~\cite{BK, KRW, FF, ward, hochman}). The assumption of topological mixing can be used to construct a rich collection of subgroups of the automorphism group. For example, the automorphism group contains isomorphic copies of all finite groups, the direct sum of countably many copies of $\mathbb{Z}$, and the free group on two generators. In these examples, the topological entropy is positive, and the complexity function $P_X(n)$, which counts the number of nonempty cylinder sets of length $n$ taken over all elements $x\in X$, grows quickly.
When the complexity function of a shift system grows slowly, the automorphism group is often much simpler, and the main goal of this paper is to study the algebraic properties of ${\rm Aut}(X)$ in this setting. In contrast to mixing shifts, we study general shifts of low complexity, without an assumption of minimality or transitivity. We show that the automorphism group of any shift of low complexity is amenable, yet its behavior can still be quite complicated.
As $P_X(n)$ is non-decreasing, boundedness is the slowest possible growth property that $P_X(n)$ can have. As
expected, this case is simple: the Morse-Hedlund Theorem~\cite{MH} implies that if there exists $n\in\mathbb{N}$ such that $P_X(n)\leq n$, then $X$ is comprised entirely of periodic points. Thus ${\rm Aut}(X)$ is a finite group (and we classify all finite groups that arise in this way in Section~\ref{sec:periodic2}). It follows that if $(X,\sigma)$ is a shift for which $P_X(n)/n\xrightarrow{n\to\infty}0$, then $|{\rm Aut}(X)|<\infty$.
It is thus natural to study shifts for which $P_X(n) > n$ for all $n\in\mathbb{N}$. The first nontrivial growth rate that such a system can have is linear, by which we mean $$ 0<\limsup_{n\to\infty}\frac{P_X(n)}{n}<\infty. $$ In previous work~\cite{CK3}, we studied the algebraic properties of ${\rm Aut}(X)$ for transitive shifts of subquadratic growth and showed that ${\rm Aut}(X)/\langle\sigma\rangle$ is a periodic group. In particular, this holds for transitive shifts of linear growth. Periodic groups, however, can be quite complicated: for example, a periodic group need not be finitely generated, and there are finitely generated, nonamenable periodic groups. In this paper, we study ${\rm Aut}(X)$ for general (not necessarily transitive) shifts of linear growth. In the transitive case, we prove a stronger result than is implied by~\cite{CK3}, showing that ${\rm Aut}(X)/\langle\sigma\rangle$ is finite. However, the main novelty of this work is that our techniques remain valid even without the assumption of transitivity.
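A concrete instance of the linear regime is the Fibonacci word, a standard Sturmian example whose complexity function is exactly $P(n)=n+1$. The following Python sketch (an illustration only, not part of the paper) generates a long prefix by substitution and counts its distinct factors:

```python
def fib_word(n_iter):
    """Prefix of the infinite Fibonacci word via the substitution 0 -> 01, 1 -> 0."""
    w = "0"
    for _ in range(n_iter):
        w = "".join("01" if c == "0" else "0" for c in w)
    return w

def complexity(w, n):
    """Number of distinct length-n factors (subwords) of the finite word w."""
    return len({w[i:i+n] for i in range(len(w) - n + 1)})

w = fib_word(15)          # a prefix of length 1597, long enough for small n
for n in range(1, 11):
    assert complexity(w, n) == n + 1   # Sturmian complexity: P(n) = n + 1
```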
Depending on dynamical assumptions on the system, shift systems with linear growth exhibit different behavior. Our most general result is: \begin{theorem}\label{thm:main} Suppose $(X,\sigma)$ is a shift system for which there exists $k\in\mathbb{N}$ such that $$ \limsup_{n\to\infty}P_X(n)/n<k. $$ Then every finitely generated subgroup of ${\rm Aut}(X)$ is virtually $\mathbb{Z}^d$ for some $d<k$. \end{theorem}
Let $[\sigma]$ denote the full group of a shift $(X,\sigma)$ (see Section~\ref{sec:full-group} for the definition). With the additional assumption that $(X,\sigma)$ has a dense set of aperiodic points, we have: \begin{theorem} \label{th:finitely-generated} Suppose $(X,\sigma)$ is a shift system for which there exists $k\in\mathbb{N}$ such that $$ \limsup_{n\to\infty}P_X(n)/n<k. $$ If $X$ has a dense set of aperiodic points, then ${\rm Aut}(X)\cap[\sigma]\cong\mathbb{Z}^d$ for some $d<k$ and ${\rm Aut}(X)/{\rm Aut}(X)\cap[\sigma]$ is finite. In particular, ${\rm Aut}(X)$ is virtually $\mathbb{Z}^d$. \end{theorem}
For a shift $(X, \sigma)$, let $\langle\sigma\rangle$ denote the subgroup of ${\rm Aut}(X)$ generated by $\sigma$. With the additional assumption that $(X,\sigma)$ is topologically transitive, meaning there exists a point whose orbit is dense in $X$, we show: \begin{theorem} \label{thm:transitive} Suppose $(X,\sigma)$ is a transitive shift system for which $$ 0<\limsup_{n\to\infty}P_X(n)/n<\infty. $$ Then ${\rm Aut}(X)/\langle\sigma\rangle$ is finite. In particular, ${\rm Aut}(X)$ is virtually $\mathbb{Z}$. \end{theorem}
For minimal shifts, meaning shifts such that every point has dense orbit, we show (note the growth condition on the complexity only assumes $\liminf$ instead of $\limsup$): \begin{theorem} \label{theorem:minimal} Suppose $(X,\sigma)$ is a minimal shift for which there exists $k\in\mathbb{N}$ satisfying $$ \liminf_{n\to\infty}P_X(n)/n<k. $$
Then ${\rm Aut}(X)/\langle\sigma\rangle$ is finite and $|{\rm Aut}(X)/\langle\sigma\rangle|<k$. \end{theorem} For periodic minimal shifts, it is easy to see that ${\rm Aut}(X)\cong\mathbb{Z}/n\mathbb{Z}$ where $n$ is the minimal period. Salo and T\"orm\"a~\cite{SaTo} asked if the automorphism group of any linearly recurrent shift is virtually $\mathbb{Z}$. Linearly recurrent shifts are minimal and the factor complexity function grows at most linearly, and so Theorem~\ref{theorem:minimal} gives an affirmative answer to their question.
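For a concrete instance of Theorem~\ref{theorem:minimal}, consider a Sturmian shift $(X,\sigma)$: it is minimal and satisfies $P_X(n)=n+1$ for all $n\in\mathbb{N}$, so the hypothesis holds with $k=2$. The theorem then gives $|{\rm Aut}(X)/\langle\sigma\rangle|<2$, and hence ${\rm Aut}(X)=\langle\sigma\rangle$.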
Roughly speaking, the proof of Theorem~\ref{thm:main} splits into two parts. We start by studying shifts with a dense set of aperiodic points in Section~\ref{sec:aperiodic}, showing that the automorphism group is locally a group of polynomial growth, with the polynomial growth rate depending on the linear complexity assumption on the shift. We sharpen this result to understand transitive shifts of linear growth, leading to the proof of Theorem~\ref{thm:transitive} in Section~\ref{subsec:transitive}. We then combine this with information on existence of aperiodic points, completing the proof of Theorem~\ref{thm:main} in Section~\ref{sec:general-linear}. The proof of Theorem~\ref{theorem:minimal} in Section~\ref{sec:minimal} proceeds in a different manner, relying on a version of a lemma of Boshernitzan used to bound the number of ergodic probability measures on a shift with linear growth, which we use to bound the number of words in the language of the system that have multiple extensions.
For some of these results, we are able to give examples showing that they are sharp. These examples are included in Section~\ref{sec:examples}.
While writing up these results, we became aware of related work by Donoso, Durand, Maass, and Petit~\cite{DDMP}. While some of the results obtained are the same, the methods are different and each method leads to new open directions.
\section{Background and notation}\label{sec:notation} \subsection{Shift systems} We assume throughout that $\mathcal{A}$ is a fixed finite set endowed with the discrete topology. If $x\in\mathcal{A}^{\mathbb{Z}}$, we denote the value of $x$ at $n\in\mathbb{Z}$ by $x(n)$.
The metric $d(x,y):=2^{-\inf\{|n|\colon x(n)\neq y(n)\}}$ generates the product topology on $\mathcal{A}^{\mathbb{Z}}$ and endowed with this metric, $\mathcal{A}^\mathbb{Z}$ is a compact metric space; henceforth we assume this metric structure on $\mathcal{A}^{\mathbb{Z}}$.
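For example, if $x,y\in\mathcal{A}^{\mathbb{Z}}$ agree on $\{-3,\dots,3\}$ and differ at $n=4$, then $d(x,y)=2^{-4}$; in general, two points are close in this metric precisely when they agree on a large window around the origin.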
The {\em left shift} $\sigma\colon\mathcal{A}^{\mathbb{Z}}\to\mathcal{A}^{\mathbb{Z}}$ is the map defined by $(\sigma x)(n):=x(n+1)$ and is a homeomorphism from $\mathcal{A}^{\mathbb{Z}}$ to itself. If $X\subseteq\mathcal{A}^{\mathbb{Z}}$ is a closed, $\sigma$-invariant subset, then the pair $(X,\sigma)$ is called a {\em subshift of $\mathcal{A}^{\mathbb{Z}}$}, or just a {\em shift of $\mathcal{A}^{\mathbb{Z}}$}. If the alphabet $\mathcal{A}$ is clear from the context, we refer to $(X,\sigma)$ as just a {\em shift}.
The set $$ \mathcal{O}(x):=\{\sigma^nx\colon n\in\mathbb{Z}\} $$ is the {\em orbit of $x$} and we use $\overline{\mathcal{O}}(x)$ to denote its closure. The shift $(X,\sigma)$ is {\em transitive} if there exists some $x\in X$ such that $\overline{\mathcal{O}}(x) = X$ and it is {\em minimal} if $\overline{\mathcal{O}}(x) = X$ for all $x\in X$. A point $x\in X$ is {\em periodic} if there exists some $n\in\mathbb{N}$ such that $\sigma^nx = x$ and otherwise it is said to be {\em aperiodic}.
\subsection{Complexity of shifts} For a shift $(X,\sigma)$ and $w=(a_{-m+1},\dots,a_{-1},a_0,a_1,$ $\dots,a_{m-1})\in\mathcal{A}^{2m+1}$, the {\em central cylinder set $[w]_0$ determined by $w$} is defined to be $$ [w]_0:=\left\{x\in X\colon x(n)=a_n\text{ for all }-m<n<m\right\}. $$ The collection of central cylinder sets forms a basis for the topology of $X$. If $w=(a_0,\dots,a_{m-1})\in\mathcal{A}^m$, then the {\em one sided cylinder set $[w]_0^+$ determined by $w$} is given by $$ [w]_0^+:=\left\{x\in X\colon x(n)=a_n\text{ for all }0\leq n<m\right\}. $$ For $m\in\mathbb{N}$, define the set of {\em words $\mathcal{L}_m(X)$ of length $m$ in $X$} by $$ \mathcal{L}_m(X):=\left\{w\in\mathcal{A}^m\colon[w]_0^+\neq\emptyset\right\} $$
and define the {\em language $\mathcal{L}(X)$} of $X$ to be $\mathcal{L}(X):=\bigcup_{m=1}^{\infty}\mathcal{L}_m(X)$. For $w\in\mathcal{L}(X)$, we denote the length of $w$ by $|w|$. A word in $x\in X$ is also referred to as a {\em factor} of $x$.
A measure of the complexity of $X$ is the {\em (factor) complexity function} $P_X\colon\mathbb{N}\to\mathbb{N}$, which counts the number of words of length $n$ in the language of $X$: $$
P_X(n):=|\mathcal{L}_n(X)|. $$ If $P_x(n)$ is the complexity function of a fixed $x\in X$, meaning the number of distinct factors of length $n$ occurring in $x$, then $P_X(n) \geq \sup_{x\in X}P_x(n)$, with equality when $X$ is a transitive shift.
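For example, the full shift $X=\mathcal{A}^{\mathbb{Z}}$ satisfies $P_X(n)=|\mathcal{A}|^n$, since every word over $\mathcal{A}$ occurs, while the orbit closure of a single periodic point of period $p$ satisfies $P_X(n)\leq p$ for all $n\in\mathbb{N}$.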
\subsection{The automorphism group of a shift} Let ${\rm Hom}(X)$ denote the group of homeomorphisms from $X$ to itself. If $h_1,\dots,h_n\in{\rm Hom}(X)$, then $\langle h_1,\dots,h_n\rangle$ denotes the subgroup of ${\rm Hom}(X)$ generated by $h_1,\dots,h_n$. The shift $\sigma$ is an element of ${\rm Hom}(X)$, and its centralizer in ${\rm Hom}(X)$ is called the {\em automorphism group} of $(X,\sigma)$. We denote the automorphism group of $(X,\sigma)$ by ${\rm Aut}(X)$ and endow it with the discrete topology.
A map $\varphi\colon X\to X$ is a {\em sliding block code} if there exists $R\in\mathbb{N}\cup\{0\}$ such that for any $w\in\mathcal{L}_{2R+1}(X)$ and any $x,y\in[w]_0$, we have $(\varphi x)(0)=(\varphi y)(0)$. Any number $R\in\mathbb{N}\cup\{0\}$ for which this property holds is called a {\em range} for $\varphi$. The {\em minimal range} of $\varphi$ is its smallest range.
If $\varphi\colon X\to X$ is a sliding block code of range $R$, there is a natural map (which, by abuse of notation, we also denote by $\varphi$) taking $\bigcup_{m=2R+1}^{\infty}\mathcal{L}_m(X)$ to $\mathcal{L}(X)$. To define this extension of $\varphi$, let $m>2R$ and let $w=(a_0,\dots,a_{m-1})\in\mathcal{L}_m(X)$. For $0\leq i<m-2R$, choose $x_i\in[(a_i,\dots,a_{i+2R})]_0$ and define $$ \varphi(w):=\bigl((\varphi x_0)(0),(\varphi x_1)(0),\dots,(\varphi x_{m-2R-1})(0)\bigr). $$
Therefore if $w$ is a word of length at least $2R+1$, then $\varphi(w)$ is a word of length $|w|-2R$.
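To illustrate the word map with a code chosen purely for illustration, take $X=\{0,1\}^{\mathbb{Z}}$ and let $\varphi$ be the sliding block code of range $R=1$ given by $(\varphi x)(0):=x(-1)+x(1)\pmod2$. For the word $w=(1,0,1,1,0)$ of length $5$, $$ \varphi(w)=(1+1,\,0+1,\,1+0)\bmod2=(0,1,1), $$ a word of length $|w|-2R=3$.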
The elements of ${\rm Aut}(X)$ have a concrete characterization: \begin{theorem}[Curtis-Hedlund-Lyndon Theorem~\cite{Hedlund2}] \label{th:CHL} If $(X,\sigma)$ is a shift, then any element of ${\rm Aut}(X)$ is a sliding block code. \end{theorem}
For $R\in\mathbb{N}\cup\{0\}$, we let ${\rm Aut}_R(X)\subseteq {\rm Aut}(X)$ denote the automorphisms of $(X,\sigma)$ for which $R$ is a (not necessarily minimal) range. Thus ${\rm Aut}(X)=\bigcup_{R=0}^{\infty}{\rm Aut}_R(X)$. We observe that if $\varphi_1\in{\rm Aut}_{R_1}(X)$ and $\varphi_2\in{\rm Aut}_{R_2}(X)$, then $\varphi_1\circ\varphi_2\in{\rm Aut}_{R_1+R_2}(X)$.
In general, the automorphism group of a shift can be complicated, but Theorem~\ref{th:CHL} implies that ${\rm Aut}(X)$ is always countable.
\subsection{Automorphisms and the full group} \label{sec:full-group}
The {\em full group} $[\sigma]$ of a shift $(X,\sigma)$ is the subgroup of ${\rm Hom}(X)$ comprised of the orbit preserving homeomorphisms: $$ [\sigma]:=\left\{\psi\in{\rm Hom}(X):\psi(x)\in\mathcal{O}(x)\text{ for all }x\in X\right\}. $$ Thus if $\psi\in[\sigma]$, then there is a function $k_{\psi}\colon X\to\mathbb{Z}$ such that $\psi(x)=\sigma^{k_{\psi}(x)}(x)$ for all $x\in X$.
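For example, each power $\sigma^m$ lies in ${\rm Aut}(X)\cap[\sigma]$, with $k_{\sigma^m}$ the constant function $m$; for a general element of $[\sigma]$, the function $k_{\psi}$ may vary from point to point.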
It follows from the definitions that the group ${\rm Aut}(X)\cap[\sigma]$ is the centralizer of $\sigma$ in $[\sigma]$. We note two basic facts about ${\rm Aut}(X)\cap[\sigma]$ which we will need in order to study ${\rm Aut}(X)/{\rm Aut}(X)\cap[\sigma]$ in Section~\ref{sec:dense-aperiodic}.
\begin{lemma} \label{lemma:normal} If $(X,\sigma)$ is a shift, then ${\rm Aut}(X)\cap[\sigma]$ is normal in ${\rm Aut}(X)$. \end{lemma} \begin{proof} Let $\varphi\in{\rm Aut}(X)$ and suppose $\psi\in{\rm Aut}(X)\cap[\sigma]$. Let $k_{\psi}\colon X\to\mathbb{Z}$ be a function such that $\psi(x)=\sigma^{k_{\psi}(x)}(x)$ for all $x\in X$. Fix $x\in X$ and observe that since $\varphi$ and $\sigma$ commute, $$ \varphi\circ\psi\circ\varphi^{-1}(x)=\varphi\circ\sigma^{k_{\psi}(\varphi^{-1}(x))}\circ\varphi^{-1}(x)=\sigma^{k_{\psi}(\varphi^{-1}(x))}(x). $$ As this holds for any $x\in X$, it follows that $\varphi\circ\psi\circ\varphi^{-1}\in{\rm Aut}(X)\cap[\sigma]$. Since $\varphi\in{\rm Aut}(X)$ and $\psi\in{\rm Aut}(X)\cap[\sigma]$ are arbitrary, we have $$\varphi\cdot\left({\rm Aut}(X)\cap[\sigma]\right)\cdot\varphi^{-1}\subseteq{\rm Aut}(X)\cap[\sigma]$$ for all $\varphi\in{\rm Aut}(X)$. So ${\rm Aut}(X)\cap[\sigma]$ is normal in ${\rm Aut}(X)$. \end{proof}
\begin{lemma}\label{lemma:abelian} If $(X,\sigma)$ is a shift, then ${\rm Aut}(X)\cap[\sigma]$ is abelian. \end{lemma} \begin{proof} Suppose $\varphi_1, \varphi_2\in{\rm Aut}(X)\cap[\sigma]$. For $i=1,2$, let $k_{\varphi_i}\colon X\to\mathbb{Z}$ be functions such that $\varphi_i(x)=\sigma^{k_{\varphi_i}(x)}(x)$ for all $x\in X$. For any $x\in X$, \begin{eqnarray*} \varphi_1\circ\varphi_2(x)&=&\varphi_1\circ\sigma^{k_{\varphi_2}(x)}(x)=\sigma^{k_{\varphi_2}(x)}\circ\varphi_1(x) \\ &=&\sigma^{k_{\varphi_2}(x)}\circ\sigma^{k_{\varphi_1}(x)}(x)=\sigma^{k_{\varphi_1}(x)}\circ\sigma^{k_{\varphi_2}(x)}(x) \\ &=&\sigma^{k_{\varphi_1}(x)}\circ\varphi_2(x)=\varphi_2\circ\sigma^{k_{\varphi_1}(x)}(x) \\ &=&\varphi_2\circ\varphi_1(x). \end{eqnarray*} Therefore $\varphi_1\circ\varphi_2=\varphi_2\circ\varphi_1$. \end{proof}
\subsection{Summary of group theoretic terminology} For convenience, we summarize the algebraic properties that we prove ${\rm Aut}(X)$ may have. We say that a group $G$ is {\em locally $P$} if every finitely generated subgroup of $G$ has property $P$. The group $G$ is {\em virtually $H$} if $G$ contains $H$ as a subgroup of finite index. The group $G$ is {\em $K$-by-$L$} if there exists a normal subgroup $H$ of $G$ which is $K$ and such that the quotient $G/H$ is $L$.
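For example, the infinite dihedral group $\mathbb{Z}\rtimes(\mathbb{Z}/2\mathbb{Z})$ is virtually $\mathbb{Z}$, as its subgroup of translations is infinite cyclic of index two.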
\section{Shifts of linear growth with a dense set of aperiodic points} \label{sec:transitive}
\subsection{Cassaigne's characterization of linear growth}
Linear growth can be characterized in terms of the (first) difference of the complexity function: \begin{theorem}[Cassaigne~\cite{C}]\label{theorem:cassaigne} A shift $(X,\sigma)$ satisfies $P_X(n)=O(n)$ if and only if the difference function $P_X(n+1)-P_X(n)$ is bounded. \end{theorem}
\begin{definition}
Let $w=(a_0,\dots,a_{|w|-1})\in\mathcal{L}_{|w|}(X)$. For fixed $m\in\mathbb{N}$, we say that $w$ {\em extends uniquely $m$ times to the right} if there is exactly one word $\widetilde{w}=(b_0,\dots,b_{|w|+m-1})\in\mathcal{L}_{|w|+m}(X)$ such that $a_i=b_i$ for all $0\leq i<|w|$. \end{definition}
\begin{corollary}\label{corollary:extend} Assume $(X,\sigma)$ satisfies $P_X(n)=O(n)$. Then for any $m,n\in\mathbb{N}$, the number of words of length $n$ that do not extend uniquely $m$ times to the right is at most $Bm$, where $B=\max_{n\in\mathbb{N}}\bigl(P_X(n+1)-P_X(n)\bigr)$. \end{corollary}
Note that it follows from Cassaigne's Theorem that $B$ is finite.
\begin{proof} For any $N\in\mathbb{N}$, the quantity $P_X(N+1)-P_X(N)$ is an upper bound on the number of words of length $N$ that do not extend uniquely to the right. For any word $w$ of length $n$ which does not extend uniquely $m$ times to the right, there exists $0\leq k<m$ such that $w$ extends uniquely $k$ times to the right, but not $k+1$ times. For fixed $k$, the number of words for which this is the case is at most the number of words of length $n+k$ that do not extend uniquely to the right. So the number of words of length $n$ that fail to extend uniquely $m$ times to the right is at most \begin{equation*} \sum_{k=1}^m\bigl(P_X(n+k)-P_X(n+k-1)\bigr)\leq Bm.
\qedhere \end{equation*} \end{proof}
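As a sanity check on Corollary~\ref{corollary:extend}, consider a Sturmian shift: there $P_X(n+1)-P_X(n)=1$ for all $n$, so $B=1$, and the corollary asserts that at most $m$ words of each length fail to extend uniquely $m$ times to the right, consistent with the fact that a Sturmian shift has exactly one right-special word of each length.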
\subsection{Assuming a dense set of aperiodic points} \label{sec:aperiodic}
We start by considering shifts with a dense set of aperiodic points. This assumption holds in particular when the shift has no isolated points: if $X$ has no isolated points then, for any fixed period, the set of periodic points with that period is finite and so has empty interior. Then the Baire Category Theorem implies that the set of all periodic points has empty interior. In particular, the set of aperiodic points is dense. The converse fails: for example, if $x\in\{0,1\}^{\mathbb{Z}}$ is the point with $x(0)=1$ and $x(n)=0$ for all $n\neq0$, then $X=\overline{\mathcal{O}}(x)$ has a dense set of aperiodic points, yet every point of $\mathcal{O}(x)$ is isolated.
\begin{lemma}\label{lemma:k-transitive} Suppose $(X,\sigma)$ is a shift with a dense set of aperiodic points and there exists $k\in\mathbb{N}$ such that $$ \limsup_{n\to\infty}\frac{P_X(n)}{n}<k. $$ Then there exist $x_1,\dots,x_{k-1}\in X$ such that $$ X=\overline{\mathcal{O}}(x_1)\cup\overline{\mathcal{O}}(x_2)\cup\dots\cup\overline{\mathcal{O}}(x_{k-1}). $$ \end{lemma} \begin{proof} Suppose not and let $x_1\in X$ be aperiodic (such a point exists since the aperiodic points are dense in $X$). Since $\overline{\mathcal{O}}(x_1)\neq X$, there is a word $w_1\in\mathcal{L}(X)$ such that $[w_1]_0^+\cap\overline{\mathcal{O}}(x_1)=\emptyset$. Choose an aperiodic point $x_2\in[w_1]_0^+$. Let $i<k$ and suppose that we have constructed aperiodic points $x_1,\dots,x_i\in X$ and words $w_1,\dots,w_{i-1}\in\mathcal{L}(X)$ such that $[w_{j_1}]_0^+\cap\overline{\mathcal{O}}(x_{j_2})=\emptyset$ whenever $j_2\leq j_1$. Since $\overline{\mathcal{O}}(x_1)\cup\dots\cup\overline{\mathcal{O}}(x_i)\neq X$, there is a word $w_i\in\mathcal{L}(X)$ such that $[w_i]_0^+\cap\bigl(\overline{\mathcal{O}}(x_1)\cup\dots\cup\overline{\mathcal{O}}(x_i)\bigr)=\emptyset$. Let $x_{i+1}\in[w_i]_0^+$ be aperiodic, and continue this construction until the points $x_1,\dots,x_k$ and the words $w_1,\dots,w_{k-1}$ have been constructed.
Let $N>\max_{1\leq i<k}|w_i|$ be a fixed large integer (to be specified later). Since $x_1$ is aperiodic, there are at least $N+1$ distinct factors of length $N$ in $\overline{\mathcal{O}}(x_1)$. Therefore there are at least $N+1$ distinct factors of length $N$ in $X$ which do not contain the words $w_1,\dots,w_{k-1}$. We claim that for $1\leq i<k$,
there are at least $N-|w_i|$ distinct factors in $\overline{\mathcal{O}}(x_{i+1})$ which contain the word $w_i$ but do not contain any of the words $w_{i+1},w_{i+2},\dots,w_{k-1}$. Assuming this claim, then for any sufficiently large $N$ we have $P_X(N)\geq kN-\sum_{i=1}^{k-1}|w_i|$, a contradiction of the complexity assumption.
We are left with proving the claim. Let $1\leq i<k$ be fixed. By construction, the word $w_i$ appears in $\overline{\mathcal{O}}(x_{i+1})$ but $[w_j]_0^+\cap\overline{\mathcal{O}}(x_{i+1})=\emptyset$ for any $j>i$. If $w_i$ appears syndetically in $x_{i+1}$ then so long as $N$ is sufficiently large, every factor of $x_{i+1}$ of length $N$ contains the word $w_i$. In this case, since $x_{i+1}$ is aperiodic, there are at least $N+1$ distinct factors in $\overline{\mathcal{O}}(x_{i+1})$ which contain $w_i$ but not $w_j$ for any $j>i$. Otherwise $w_i$ does not appear syndetically in $x_{i+1}$ and so there are arbitrarily long factors in $x_{i+1}$ which do not contain $w_i$. Since $w_i$ appears at least once in $x_{i+1}$, it follows that there are arbitrarily long words which appear in $x_{i+1}$ which contain exactly one occurrence of $w_i$ and we can assume that $w_i$ occurs as either the rightmost or leftmost subword. Without loss, we assume that there exists a word $w$ of length $N$ which contains $w_i$ as its rightmost subword and has no other occurrences of $w_i$. Choose $j\in\mathbb{Z}$ such that $$
w=\bigl(x_{i+1}(j),x_{i+1}(j+1),\dots,x_{i+1}(j+|w|-1)\bigr). $$
By construction, if $0\leq s<|w|-|w_i|$ then the word $$
w^{(s)}:=\bigl(x_{i+1}(j+s),x_{i+1}(j+s+1),\dots,x_{i+1}(j+s+|w|-1)\bigr) $$
is a word of length $N$ for which the smallest $t\in\{0,\dots,|w|-|w_i|\}$ such that $$
w_i=\bigl(x_{i+1}(j+s+t),x_{i+1}(j+s+t+1),\dots,x_{i+1}(j+s+t+|w_i|-1)\bigr) $$
is $t=|w|-|w_i|-s$. Therefore, the words $w^{(s)}$ are pairwise distinct and each contains $w_i$ as a subword. By construction, they do not contain $w_j$ for any $j>i$, thus establishing the claim. \end{proof}
\begin{proposition}\label{prop:polynomial} Suppose that $(X,\sigma)$ is a shift with a dense set of aperiodic points and there exists $k\in\mathbb{N}$ such that $$ \limsup_{n\to\infty}\frac{P_X(n)}{n}<k. $$ Then ${\rm Aut}(X)$ is locally a group of polynomial growth, with polynomial growth rate at most $k-1$. Moreover, if $q\in\mathbb{N}$ is the smallest cardinality of a set $x_1,\dots,x_q\in X$ such that $\mathcal{O}(x_1)\cup\mathcal{O}(x_2)\cup\cdots\cup\mathcal{O}(x_q)$ is dense in $X$, then the polynomial growth rate of any finitely generated subgroup of ${\rm Aut}(X)$ is at most $q$. \end{proposition}
In Section~\ref{sec:large-poly-growth}, we give an example showing that the growth rate given in this proposition is optimal.
\begin{proof} By Lemma~\ref{lemma:k-transitive}, there exist $y_1,\dots,y_{k-1}\in X$ such that the union of the orbits $\mathcal{O}(y_1)\cup\mathcal{O}(y_2)\cup\cdots\cup\mathcal{O}(y_{k-1})$ is dense in $X$. Let $x_1,\dots,x_q\in X$ be a set of minimum cardinality for which $\mathcal{O}(x_1)\cup\mathcal{O}(x_2)\cup\cdots\cup\mathcal{O}(x_q)$ is dense.
For $1\leq i\leq q$, define the constant $$
C_i:=\inf\{|w|\colon\text{$w$ is a subword of $x_i$ and $[w]_0^+$ contains precisely one element}\} $$ and define $C_i:=0$ if no such subword exists. Define \begin{equation}\label{eq:constant} C:=\max_{1\leq i\leq q}C_i. \end{equation} Fix $R\in\mathbb{N}$. For $i=1,\ldots, q$, let $\tilde{w}_i$ be a factor of $x_i$ such that
\begin{enumerate}
\item $|\tilde{w}_i|\geq3R+1$;
\item \label{it:two} for all $u\in\mathcal{L}_{2R+1}(X)$, there exists $i$ such that $u$ is a factor of $\tilde{w}_i$.
\end{enumerate} Note that~\eqref{it:two} is possible since $\mathcal{O}(x_1)\cup\mathcal{O}(x_2)\cup\cdots\cup\mathcal{O}(x_q)$ is dense.
Without loss of generality, we can assume that there exists $M_1\geq0$ such that $[\tilde{w}_i]_0^+$ contains precisely one element for all $i\leq M_1$ and contains at least two elements for all $i>M_1$ (otherwise reorder $x_1,\dots,x_q$). For each $i>M_1$, either there exists $a\geq0$ such that $\tilde{w}_i$ extends uniquely to the right $a$ times but not $a+1$ times, or there exists $a\geq0$ such that $\tilde{w}_i$ extends uniquely to the left $a$ times but not $a+1$ times. Again, reordering if necessary, we can assume that there exists $M_2\geq M_1$ such that the former occurs for all $M_1<i\leq M_2$ and the latter occurs when $i>M_2$. For $i=1, \ldots, q$, we define words $w_1,\dots,w_q$ as follows:
\begin{enumerate}
\item For $i=1, \ldots, M_1$, the set $[\tilde{w}_i]_0^+$ contains precisely one element. This must be a shift of $x_i$,
and without loss, we can assume it is $x_i$ itself. In this case, we define $u_i$ to be the shortest subword of $x_i$ with the property that $[u_i]_0^+$ contains precisely one element and define $w_i$ to be the (unique) extension $2R+2$ times both to the right and to the left of $u_i$. Observe that if $\varphi, \varphi^{-1}\in{\rm Aut}_R(X)$, then $\varphi^{-1}(\varphi(w_i))=u_i$. Since $\varphi^{-1}$ is injective and sends every element of $[\varphi(w_i)]_0^+$ to the one point set $[u_i]_0^+$, it follows that $[\varphi(w_i)]_0^+$ contains precisely one element and the word $\varphi(w_i)$ uniquely determines the word $\varphi(\tilde{w}_i)$.
Moreover, $|u_i|\leq C$, where $C$ is the constant in~\eqref{eq:constant}, and so $|w_i|\leq C+4R+4$.
\item For $i=M_1+1, \dots, M_2$, there exists $a_i\geq0$ such that $\tilde{w}_i$ extends uniquely to the right $a_i$ times but not $a_i+1$ times. Define $w_i$ to be the (unique) word of length $|\tilde{w}_i|+a_i$ which has $\tilde{w}_i$ as its leftmost factor. By choice of the ordering, $w_i$ does not extend uniquely to its right.
\item For $i=M_2+1, \ldots, q$, there exists $a_i\geq0$ such that $\tilde{w}_i$ extends uniquely to the left $a_i$ times but not $a_i+1$ times. Define $w_i$ to the be (unique) word of length $|\tilde{w}_i|+a_i$ which has $\tilde{w}_i$ as its rightmost factor. By choice of the ordering, $w_i$ does not extend uniquely to its left.
\end{enumerate}
For $\varphi\in {\rm Aut}_R(X)$, we have that $\varphi(w_i)$ determines the word $\varphi(\tilde{w}_i)$ and so the block code determines what $\varphi$ does to every word in $\mathcal{L}_{2R+1}(X)$.
Thus the map $\Phi\colon{\rm Aut}_R(X)\to\mathcal{L}_{|w_1|-2R}(X)\times\mathcal{L}_{|w_2|-2R}(X)\times\cdots\times\mathcal{L}_{|w_q|-2R}(X)$ defined by $$ \Phi(\varphi)=\bigl(\varphi(w_1),\varphi(w_2),\dots,\varphi(w_q)\bigr) $$ is injective. We claim that for $1\leq i\leq q$, we have \begin{equation}\label{eq:words}
\left|\left\{\varphi(w_i)\colon\varphi,\varphi^{-1}\in{\rm Aut}_R(X)\right\}\right|\leq Bk(C+2)(R+1) , \end{equation} where $B$ is the constant appearing in Corollary~\ref{corollary:extend} and $C$ is the constant in~\eqref{eq:constant}.
Before proving the claim, we show how to deduce the proposition from this estimate. It follows from~\eqref{eq:words} that $|\{\Phi(\varphi)\colon\varphi, \varphi^{-1}\in{\rm Aut}_R(X)\}|\leq(Bk(C+2))^q(R+1)^q$. Since $\Phi$ is injective, it follows that we have the bound \begin{equation}\label{eq:growth-estimate}
|\{\varphi\in{\rm Aut}_R(X)\colon\varphi^{-1}\in{\rm Aut}_R(X)\}|\leq(Bk(C+2))^q(R+1)^q. \end{equation}
Given $\varphi_1,\dots,\varphi_m\in{\rm Aut}(X)$, choose $R\in\mathbb{N}$ such that $\varphi_1,\dots,\varphi_m,\varphi_1^{-1},\dots,\varphi_m^{-1}\in{\rm Aut}_R(X)$. Then for any $n\in\mathbb{N}$, any $e_1,\dots,e_n\in\{-1,1\}$, and any $f_1,\dots,f_n\in\{1,\dots,m\}$, we have $$ \varphi_{f_1}^{e_1}\circ\varphi_{f_2}^{e_2}\circ\cdots\circ\varphi_{f_n}^{e_n}\in{\rm Aut}_{nR}(X). $$ In particular, if $\mathcal{S}:=\{\varphi_1,\dots,\varphi_m,\varphi_1^{-1},\dots,\varphi_m^{-1}\}$ is a (symmetric) generating set for $\langle\varphi_1,\dots,\varphi_m\rangle$, then any reduced word of length $n$ (with respect to $\mathcal{S}$) is an element of $\{\varphi\in{\rm Aut}_{nR}(X)\colon\varphi^{-1}\in{\rm Aut}_{nR}(X)\}$. By~\eqref{eq:growth-estimate}, there are at most $(Bk(C+2))^q(nR+1)^q$ such words. Therefore $\langle\varphi_1,\dots,\varphi_m\rangle$ is a group of polynomial growth and its polynomial growth rate is at most $q$. This holds for any finitely generated subgroup of ${\rm Aut}(X)$ (where the parameter $R$ depends on the subgroup and choice of generating set, but $B$, $C$, $k$, and $q$ depend only on the shift $(X,\sigma)$). As $q\leq k-1$, the proposition follows.
We are left with showing that~\eqref{eq:words} holds. There are three cases to consider, depending on the interval in which $i$ lies.
\begin{enumerate} \item Suppose $1\leq i\leq M_1$.
Then $|w_i|\leq C+4R+4$ and so $\varphi(w_i)$ is a word of length $C+2R+2$. Therefore, there are $$ P_X(C+2R+2)\leq k\cdot(C+2R+2)\leq k(C+2)(R+1) $$ possibilities for the word $\varphi(w_i)$.
\item \label{case:two}
Suppose $M_1<i\leq M_2$. Then $w_i$ does not extend uniquely to its right. If $\varphi\in{\rm Aut}(X)$ is such that $\varphi, \varphi^{-1}\in{\rm Aut}_R(X)$, then the word $\varphi(w_i)\in\mathcal{L}_{|w_i|-2R}(X)$ cannot extend uniquely $R+1$ times to its right (as otherwise this extended word would have length at least $2R+1$ and applying $\varphi^{-1}$ to it would show that there is only one possible extension of $w_i$ to its right). By Corollary~\ref{corollary:extend}, there are at most $B(R+1)$ such words. Therefore $\{\varphi(w_i)\colon\varphi,\varphi^{-1}\in{\rm Aut}_R(X)\}$ has at most $B(R+1)$ elements.
\item Suppose $i>M_2$. Then $w_i$ does not extend uniquely to its left. As in Case~\eqref{case:two}, if $\varphi\in{\rm Aut}(X)$ is such that $\varphi, \varphi^{-1}\in{\rm Aut}_R(X)$, then $\varphi(w_i)$ cannot extend uniquely $R+1$ times to its left. By Corollary~\ref{corollary:extend}, there are at most $B(R+1)$ such words. Therefore $\{\varphi(w_i)\colon\varphi,\varphi^{-1}\in{\rm Aut}_R(X)\}$ has at most $B(R+1)$ elements. \end{enumerate} This establishes~\eqref{eq:words}, and thus the proposition. \end{proof}
\subsection{The automorphism group of a transitive shift} \label{subsec:transitive}
\begin{lemma}\label{lemma:transitive-finite-index} Suppose $(X,\sigma)$ is a transitive shift and there exists $k\in\mathbb{N}$ such that $$ \limsup_{n\to\infty}\frac{P_X(n)}{n}<k. $$ If $x_0\in X$ has a dense orbit, then the set $$ \left\{\varphi(x_0)\colon\varphi\in{\rm Aut}(X)\right\} $$ is contained in the union of finitely many distinct orbits. \end{lemma} \begin{proof} Suppose not. Let $\varphi_1,\varphi_2,\ldots\in{\rm Aut}(X)$ be such that $\varphi_i(x_0)\notin\mathcal{O}(\varphi_j(x_0))$ whenever $i\neq j$. For $N\in\mathbb{N}$, let $R(N)$ be the smallest integer such that we have $\varphi_1,\dots,\varphi_N,\varphi_1^{-1},\dots,\varphi_N^{-1}\in{\rm Aut}_{R(N)}(X)$. For $1\leq i\leq N$, $n\in\mathbb{N}$, and $-n\leq j\leq n$, we have $\varphi_i^{\pm1}\circ\sigma^j\in{\rm Aut}_{R(N)+n}(X)$. As automorphisms take aperiodic points to aperiodic points, for fixed $i$, the set $$ \{\varphi_i\circ\sigma^j\colon-n\leq j\leq n\} $$ contains $2n+1$ elements. If $i_1\neq i_2$ and $-n\leq j_1,j_2\leq n$, then $\varphi_{i_1}\circ\sigma^{j_1}(x_0)\notin\mathcal{O}(\varphi_{i_2}\circ\sigma^{j_2}(x_0))$. Thus the set $$ \{\varphi_i\circ\sigma^j\colon 1\leq i\leq N\text{ and }-n\leq j\leq n\} $$ contains $2Nn+N$ elements. Therefore, $$
|\{\varphi\in{\rm Aut}_{R(N)+n}(X)\colon\varphi^{-1}\in{\rm Aut}_{R(N)+n}(X)\}|\geq2Nn+N. $$ It follows that $$
\limsup_{R\to\infty}\frac{|\{\varphi\in{\rm Aut}_R(X)\colon\varphi^{-1}\in{\rm Aut}_R(X)\}|}{R}\geq 2N. $$ Since $N\in\mathbb{N}$ was arbitrary, we have \begin{equation}\label{eq:slow-growth}
\limsup_{R\to\infty}\frac{|\{\varphi\in{\rm Aut}_R(X)\colon\varphi^{-1}\in{\rm Aut}_R(X)\}|}{R}=\infty. \end{equation}
On the other hand, since $(X,\sigma)$ is transitive, the parameter $q$ in the conclusion of Proposition~\ref{prop:polynomial} is $1$. Then by~\eqref{eq:growth-estimate}, we have
$$|\{\varphi\in{\rm Aut}_R(X)\colon\varphi^{-1}\in{\rm Aut}_R(X)\}|\leq Bk(C+2)(R+1), $$ where $B,k,C$ are as in Proposition~\ref{prop:polynomial}, which depend only on the shift $(X,\sigma)$ and not on $R$. This estimate holds for any $R\in\mathbb{N}$, a contradiction of~\eqref{eq:slow-growth}. \end{proof}
We use this to complete the proof of Theorem~\ref{thm:transitive}, characterizing the automorphism group of transitive shifts of linear growth: \begin{proof}[Proof of Theorem~\ref{thm:transitive}] Assume $(X,\sigma)$ is a transitive shift satisfying $$\limsup_{n\to\infty}P_X(n)/n<k$$ for some $k\in\mathbb{N}$. An automorphism in a transitive shift is determined by the image of a point whose orbit is dense, and so Lemma~\ref{lemma:transitive-finite-index} implies that the group ${\rm Aut}(X)/{\rm Aut}(X)\cap[\sigma]$ is finite (Lemma~\ref{lemma:normal} implies that ${\rm Aut}(X)\cap[\sigma]$ is normal in ${\rm Aut}(X)$). However, the only orbit preserving automorphisms in a transitive shift are elements of $\langle\sigma\rangle$, since such an automorphism acts like a power of the shift on a point whose orbit is dense. \end{proof}
Theorem~\ref{thm:transitive} shows that if $(X,\sigma)$ is transitive and has low enough complexity, then ${\rm Aut}(X)$ is highly constrained. One might hope to have a converse to this theorem: if $(X,\sigma)$ is transitive and is above some ``complexity threshold'' then ${\rm Aut}(X)$ is nontrivial. In Section~\ref{sec:no-complexity-threshold}, we give an example showing that no such converse holds.
\subsection{The automorphism group of a shift with dense aperiodic points} \label{sec:dense-aperiodic}
\begin{lemma}\label{lemma:aperiodic-finite} Suppose $(X,\sigma)$ has a dense set of aperiodic points and there exists $k\in\mathbb{N}$ such that $$ \limsup_{n\to\infty}\frac{P_X(n)}{n}<k. $$ Let $x_1,\dots,x_q\in X$ be a set (of minimal cardinality) such that $\mathcal{O}(x_1)\cup\cdots\cup\mathcal{O}(x_q)$ is dense in $X$. Then for each $1\leq i\leq q$, the set $$ \left\{\varphi(x_i)\colon\varphi\in{\rm Aut}(X)\right\} $$ is contained in the union of finitely many distinct orbits. \end{lemma} \begin{proof} By minimality of the set $\{x_1,\dots,x_q\}$, we have $$ x_i\notin\bigcup_{j\neq i}\overline{\mathcal{O}}(x_j) $$ for any $1\leq i\leq q$. Therefore there exists $w_i\in\mathcal{L}(X)$ such that $[w_i]_0^+\cap\mathcal{O}(x_i)\neq\emptyset$ but $[w_i]_0^+\cap\bigcup_{j\neq i}\overline{\mathcal{O}}(x_j) =\emptyset$. This implies that $[w_i]_0^+\subseteq\overline{\mathcal{O}}(x_i)$.
Let $\varphi\in{\rm Aut}(X)$ and note that $\varphi$ is determined by $\varphi(x_1),\dots,\varphi(x_q)$. If for some $1\leq i\leq q$ we have $\mathcal{O}(\varphi(x_j))\cap[w_i]_0^+=\emptyset$ for all $j$, then $\varphi(X)\cap[w_i]_0^+=\emptyset$ and $\varphi$ is not surjective, a contradiction. Therefore, for each $i$ there exists $1\leq j_i\leq q$ such that $\mathcal{O}(\varphi(x_{j_i}))\cap[w_i]_0^+\neq\emptyset$. By construction, if $\mathcal{O}(\varphi(x_{j_i}))\cap[w_i]_0^+\neq\emptyset$, then $\varphi(x_{j_i})\in\overline{\mathcal{O}}(x_i)$ and so $\mathcal{O}(\varphi(x_{j_i}))\cap[w_k]_0^+=\emptyset$ for any $k\neq i$. That is, the map $i\mapsto j_i$ is a permutation on the set $\{1,2,\dots,q\}$. Let $\pi_{\varphi}\in S_q$, where $S_q$ is the symmetric group on $q$ letters, denote this permutation.
Let $$ H:=\{\pi_{\varphi}\colon\varphi\in{\rm Aut}(X)\}\subseteq S_q. $$ For each $h\in H$, choose $\varphi_h\in{\rm Aut}(X)$ such that $h=\pi_{\varphi_h}$. Then if $\varphi\in{\rm Aut}(X)$ and $h=\pi_{\varphi}$, the permutation induced by $\varphi_h^{-1}\circ\varphi$ is the identity. It follows that $\varphi_h^{-1}\circ\varphi$ preserves each of the sets $\overline{\mathcal{O}}(x_1),\dots,\overline{\mathcal{O}}(x_q)$. Consequently, for each $1\leq i\leq q$, the restriction of $\varphi_h^{-1}\circ\varphi$ to $\overline{\mathcal{O}}(x_i)$ is an automorphism of the (transitive) subsystem $\left(\overline{\mathcal{O}}(x_i),\sigma\right)$. By Lemma~\ref{lemma:transitive-finite-index}, the set $\{\psi(x_i)\colon\psi\in{\rm Aut}(\overline{\mathcal{O}}(x_i))\}$ is contained in the union of finitely many distinct orbits. Therefore, the set $\{\varphi_{\pi_{\varphi}}^{-1}\circ\varphi(x_i)\colon\varphi\in{\rm Aut}(X)\}$ is contained in the union of finitely many distinct orbits. Since $$ \{\varphi(x_i)\colon\varphi\in{\rm Aut}(X)\}\subseteq\bigcup_{h\in H}\varphi_h\left(\{\varphi_{\pi_{\varphi}}^{-1}\circ\varphi(x_i)\colon\varphi\in{\rm Aut}(X)\}\right) $$ and automorphisms take orbits to orbits, it follows that $\{\varphi(x_i)\colon\varphi\in{\rm Aut}(X)\}$ is contained in the union of finitely many distinct orbits. \end{proof}
\begin{lemma}\label{lemma:rank} Let $(X,\sigma)$ be a shift with a dense set of aperiodic points and assume that there exists $k\in\mathbb{N}$ such that $$ \limsup_{n\to\infty}\frac{P_X(n)}{n}<k. $$ Then ${\rm Aut}(X)\cap[\sigma]\cong\mathbb{Z}^d$ for some $d<k$. \end{lemma} \begin{proof} By Lemmas~\ref{lemma:normal} and~\ref{lemma:abelian}, ${\rm Aut}(X)\cap[\sigma]$ is abelian and normal in ${\rm Aut}(X)$. By Lemma~\ref{lemma:k-transitive}, there exist points $x_1,\dots,x_{k-1}\in X$ such that $\mathcal{O}(x_1)\cup\cdots\cup\mathcal{O}(x_{k-1})$ is dense in $X$. If $\varphi\in{\rm Aut}(X)\cap[\sigma]$, then there exist $e_1(\varphi),\dots,e_{k-1}(\varphi)\in\mathbb{Z}$ such that $\varphi(x_i)=\sigma^{e_i(\varphi)}(x_i)$ for all $1\leq i\leq k-1$. As an automorphism is determined by the images of $x_1,\dots,x_{k-1}$, the map $\varphi\mapsto(e_1(\varphi),\dots,e_{k-1}(\varphi))$ is an injective homomorphism from ${\rm Aut}(X)\cap[\sigma]$ to $\mathbb{Z}^{k-1}$. Since every subgroup of $\mathbb{Z}^{k-1}$ is free abelian of rank at most $k-1$, the claim follows. \end{proof}
\begin{proof}[Proof of Theorem~\ref{th:finitely-generated}] By Lemma~\ref{lemma:k-transitive}, there exist $x_1,\dots,x_{k-1}\in X$ such that $\mathcal{O}(x_1)\cup\cdots\cup\mathcal{O}(x_{k-1})$ is dense in $X$. If $\varphi\in{\rm Aut}(X)$, then $\varphi$ is determined by the values of $\varphi(x_1),\dots,\varphi(x_{k-1})$. By Lemma~\ref{lemma:aperiodic-finite}, the set $\{\varphi(x_i)\colon\varphi\in{\rm Aut}(X)\}$ is contained in the union of finitely many distinct orbits in $X$. Therefore, modulo orbit preserving automorphisms, there are only finitely many choices for $\varphi(x_1),\dots,\varphi(x_{k-1})$. It follows that the group ${\rm Aut}(X)/{\rm Aut}(X)\cap[\sigma]$ is finite. By Lemma~\ref{lemma:rank}, ${\rm Aut}(X)\cap[\sigma]\cong\mathbb{Z}^d$ for some $d<k$, and so ${\rm Aut}(X)$ is virtually $\mathbb{Z}^d$. \end{proof}
\section{General shifts of linear growth} \label{sec:general-linear}
\begin{lemma}\label{lemma:per-finite} Suppose $(X,\sigma)$ is a shift and $w\in\mathcal{L}(X)$ is such that $[w]_0^+$ is infinite. Then there exists an aperiodic $x_w\in X$ such that $x_w\in[w]_0^+$. \end{lemma} \begin{proof} Either $w$ occurs syndetically, with a uniform bound on the gaps, in every element of $[w]_0^+$, or there exists $y_w\in[w]_0^+$ in which the gaps between consecutive occurrences of $w$ are unbounded.
In the first case, the subsystem $$ \overline{\left\{\sigma^ix\colon x\in[w]_0^+\text{, }i\in\mathbb{Z}\right\}} $$ is infinite and so contains an aperiodic point $x_w$. Since $w$ occurs syndetically with the same bound in every element of $[w]_0^+$, it also occurs syndetically in any limit taken along elements of $[w]_0^+$, and in particular in $x_w$.
In the second case, there is an element $x_w\in\overline{\mathcal{O}(y_w)}\cap[w]_0^+$ such that, in one of the semi-infinite words $\{x_w(n)\colon n\geq0\}$ or $\{x_w(n)\colon n\leq0\}$, the word $w$ occurs either only finitely many times or infinitely many times with gaps tending to infinity. In either case, $x_w$ is aperiodic. \end{proof}
We use this to complete the proof of Theorem~\ref{thm:main}, characterizing the finitely generated subgroups of a shift of linear growth:
\begin{proof}[Proof of Theorem~\ref{thm:main}] Let $(X,\sigma)$ be a shift and assume there exists $k\in\mathbb{N}$ such that $$ \limsup_{n\to\infty}\frac{P_X(n)}{n}<k. $$ Let $$ X_{NP}:=\overline{\left\{x\in X\colon\sigma^i(x)\neq x\text{ for all }i\neq0\right\}} $$ be the closure of the set of aperiodic points in $X$. As automorphisms take aperiodic points to aperiodic points, every element of ${\rm Aut}(X)$ preserves $X_{NP}$. Consequently, restriction to $X_{NP}$ defines a natural homomorphism $h\colon{\rm Aut}(X)\to{\rm Aut}(X_{NP})$.
Let $\varphi_1,\dots,\varphi_N\in{\rm Aut}(X)$ and choose $R\in\mathbb{N}$ such that $\varphi_1,\dots,\varphi_N,\varphi_1^{-1},\dots,\varphi_N^{-1}\in{\rm Aut}_R(X)$. By Lemma~\ref{lemma:k-transitive}, there exist $x_1,\dots,x_{k-1}\in X_{NP}$ such that $$ \mathcal{O}(x_1)\cup\cdots\cup\mathcal{O}(x_{k-1}) $$ is dense in $X_{NP}$. Passing to a subcollection if necessary, let $\{x_1,\dots,x_q\}\subseteq X_{NP}$, with $q\leq k-1$, be a set of minimal cardinality with the property that $$ \mathcal{O}(x_1)\cup\cdots\cup\mathcal{O}(x_q) $$ is dense in $X_{NP}$. Then for any $\varphi\in\langle\varphi_1,\dots,\varphi_N\rangle$, the restriction of $\varphi$ to $X_{NP}$ is determined by $\varphi(x_1),\dots,\varphi(x_q)$. By Lemma~\ref{lemma:aperiodic-finite}, for each $1\leq j\leq q$, the set $$ \{\varphi(x_j)\colon\varphi\in\langle\varphi_1,\dots,\varphi_N\rangle\} $$ is contained in the union of finitely many distinct orbits. Therefore there exists a finite collection of automorphisms $\psi_1,\dots,\psi_M\in\langle\varphi_1,\dots,\varphi_N\rangle$ such that for any $\varphi\in\langle\varphi_1,\dots,\varphi_N\rangle$, there exists $1\leq t(\varphi)\leq M$ such that for all $1\leq j\leq q$, we have $$ \varphi(x_j)\in\mathcal{O}(\psi_{t(\varphi)}(x_j)). $$ Thus the restriction of $\psi_{t(\varphi)}^{-1}\circ\varphi$ to $X_{NP}$ is orbit preserving. Let $$ K:=\{\varphi\in\langle\varphi_1,\dots,\varphi_N\rangle\colon\text{the restriction of $\varphi$ to $X_{NP}$ is orbit preserving}\}. $$ Clearly $K$ is a subgroup of $\langle\varphi_1,\dots,\varphi_N\rangle$.
For each $1\leq i\leq N$, we have that $\varphi_i$ is a block code of range $R$. Let $$ \mathcal{W}_R:=\left\{w\in\mathcal{L}_{2R+1}(X)\colon[w]_0^+\cap X_{NP}=\emptyset\right\}. $$ Then by Lemma~\ref{lemma:per-finite}, the set $$ Y:=\bigcup_{w\in\mathcal{W}_R}[w]_0^+ $$ is finite. Since every element of $Y$ is periodic and automorphisms preserve the minimal period of periodic points, the ($\langle\varphi_1,\dots,\varphi_N\rangle$-invariant) set $$ Z:=\left\{\varphi_{i_1}^{e_1}\circ\cdots\circ\varphi_{i_S}^{e_S}(y)\colon i_1,\dots,i_S\in\{1,\dots,N\}\text{, }e_1,\dots,e_S\in\{-1,1\},S\in\mathbb{N}\text{, }y\in Y\right\} $$ is finite. For any $1\leq i\leq N$, the restriction of $\varphi_i$ to $X_{NP}$ uniquely determines the restriction of $\varphi_i$ to $X\setminus Z$ (since all words of length $2R+1$ that occur in elements of $X\setminus Z$ also occur in $X_{NP}$). Since $\varphi_1,\dots,\varphi_N$ are automorphisms that preserve $Z$, they take elements of $X\setminus Z$ to elements of $X\setminus Z$. Thus for any $\varphi\in\langle\varphi_1,\dots,\varphi_N\rangle$, the restriction of $\varphi$ to $X_{NP}$ uniquely determines the restriction of $\varphi$ to $X\setminus Z$. In particular, this holds for all $\varphi\in K$. So there exists a finite collection of automorphisms $\alpha_1,\dots,\alpha_T\in K$ such that for all $\varphi\in K$, there is an integer $1\leq s(\varphi)\leq T$ such that $\alpha_{s(\varphi)}^{-1}\circ\varphi$ acts trivially on $Z$.
With the functions $t(\varphi)$ and $s(\varphi)$ defined as above, we have that for any $\varphi\in\langle\varphi_1,\dots,\varphi_N\rangle$, the automorphism $$ \alpha^{-1}_{s\left(\psi^{-1}_{t(\varphi)}\circ\varphi\right)}\circ\psi^{-1}_{t(\varphi)}\circ\varphi $$ acts trivially on $Z$ and its restriction to $X_{NP}$ is orbit preserving. Define $H\subseteq\langle\varphi_1,\dots,\varphi_N\rangle$ to be the subgroup of elements $\varphi\in\langle\varphi_1,\dots,\varphi_N\rangle$ such that $\varphi$ acts trivially on $Z$ and the restriction of $\varphi$ to $X_{NP}$ is orbit preserving. Every element of $H$ is uniquely determined by its restriction to $X_{NP}$, and so $H$ is isomorphic to a subgroup of ${\rm Aut}(X_{NP})\cap[\sigma]$. By Lemma~\ref{lemma:rank}, this subgroup is isomorphic to $\mathbb{Z}^d$ for some $d<k$. On the other hand, for any $\varphi\in\langle\varphi_1,\dots,\varphi_N\rangle$, there exist $1\leq t\leq M$ and $1\leq s\leq T$ such that $\alpha_s^{-1}\circ\psi_t^{-1}\circ\varphi\in H$. Therefore $H$ has finite index in $\langle\varphi_1,\dots,\varphi_N\rangle$.
Finally, if $\varphi\in H$, then there is a function $k\colon X_{NP}\to\mathbb{Z}$ such that for all $x\in X_{NP}$ we have $\varphi(x)=\sigma^{k(x)}(x)$. Thus if $\psi\in\langle\varphi_1,\dots,\varphi_N\rangle$ and $x\in X_{NP}$, we have $$ \psi\circ\varphi\circ\psi^{-1}(x)=\psi\circ\sigma^{k(\psi^{-1}(x))}\circ\psi^{-1}(x)=\sigma^{k(\psi^{-1}(x))}(x), $$ and if $z\in Z$, we have $$ \psi\circ\varphi\circ\psi^{-1}(z)=z. $$ Therefore $\psi\circ\varphi\circ\psi^{-1}\in H$ and so $H$ is a normal subgroup of $\langle\varphi_1,\dots,\varphi_N\rangle$.
It follows that $\langle\varphi_1,\dots,\varphi_N\rangle$ is virtually $\mathbb{Z}^d$ for some $d<k$. \end{proof}
\section{Minimal shifts of linear growth} \label{sec:minimal} For minimal shifts, we need more information on the words that are uniquely extendable:
\begin{definition} For $x\in X$, define $$ x_R:=\{y\in X\colon y(i)=x(i)\text{ for all }i\geq0\}. $$ For $x,y\in X$, we write $x\sim_Ry$ if $x_R=y_R$ and define $X_R:=X/\!\sim_R$ to be $X$ modulo this relation. \end{definition} It is easy to check that $\sim_R$ is an equivalence relation on $X$, and so $X_R$ is well defined. We view $(X_R,\sigma)$ as a one-sided shift. If $\varphi\in{\rm Aut}(X)$, then $\varphi$ is a block code, say of minimal range $N$, and so it determines an endomorphism of $(X_R,\sigma)$ as follows: if $y\in x_R$, then $\varphi(x_R):=\left(\sigma^N\circ\varphi(y)\right)_R$. It is easy to check that $\varphi(x_R)$ is well defined.
\begin{definition} For $x\in X$, we say that $x_R$ is {\em uniquely left extendable} if it has a unique preimage in $X_R$ and {\em nonuniquely left extendable} otherwise.
If $w\in\mathcal{L}_n(X)$ is a word of length $n$ in the language of $X$, we say that $w$ is {\em uniquely left extendable} if there is a unique $\hat{w}\in\mathcal{L}_{n+1}(X)$ that ends with $w$. \end{definition}
Boshernitzan~\cite{Bos} showed that if $(X,\sigma)$ is minimal and there exists $k\in\mathbb{N}$ such that $$ \liminf_{n\to\infty}P_X(n)-kn=-\infty, $$ then the number of ergodic probability measures on $(X,\sigma)$ is finite. In his proof, he makes use of a counting lemma and we use an infinite version of this lemma to study minimal shifts of linear growth:
\begin{lemma}[Infinite version of Boshernitzan's Lemma] \label{lemma-boshernitzan} Let $(X,\sigma)$ be a shift for which there exists $k\in\mathbb{N}$ such that \begin{equation}\label{eq1} \liminf_{n\to\infty}P_X(n)-kn=-\infty. \end{equation} Then there are at most $k-1$ distinct elements of $(X_R,\sigma)$ which are nonuniquely left extendable. \end{lemma} \begin{proof} We first claim that for infinitely many $n$, the number of words of length $n$ that are nonuniquely left extendable is at most $k-1$. If not, let $L_n$ be the number of words of length $n$ that do not extend uniquely to their left. Then by assumption there exists $N\in\mathbb{N}$ such that for all $n\geq N$ we have $L_n\geq k$. However, $$ P_X(n+1)\geq P_X(n)+L_n, $$ and so $P_X(n)\geq P_X(N)+k\cdot(n-N)$ for all $n\geq N$. This contradicts~\eqref{eq1}, and the claim follows.
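For completeness, the inequality $P_X(n+1)\geq P_X(n)+L_n$ used above follows from a standard count: every word of length $n+1$ ends in a unique word of length $n$, so $$ P_X(n+1)=\sum_{w\in\mathcal{L}_n(X)}\ell(w)\geq P_X(n)+L_n, $$ where $\ell(w)\geq1$ denotes the number of letters $a$ such that $aw\in\mathcal{L}_{n+1}(X)$, and $\ell(w)\geq2$ for each of the $L_n$ words that are nonuniquely left extendable.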
We use this to show that there are at most $k-1$ elements in $X_R$ which are nonuniquely left extendable. If not, there exist distinct elements $x_1,\dots,x_k\in X_R$ which are all nonuniquely left extendable. Choose $M\in\mathbb{N}$ such that for any $1\leq i<j\leq k$, there exists $0\leq m<M$ such that $x_i(m)\neq x_j(m)$. By the first claim, there exists $n>M$ such that there are at most $k-1$ words of length $n$ that are nonuniquely left extendable. For all $1\leq i\leq k$, the word $$ \left(x_i(0),x_i(1),\dots,x_i(n-2),x_i(n-1)\right) $$ is a word of length $n$ that is nonuniquely left extendable, and these $k$ words are pairwise distinct since $n>M$, leading to a contradiction. Thus the number of elements of $(X_R,\sigma)$ that are nonuniquely left extendable is at most $k-1$. \end{proof}
\begin{notation}\label{NLE-def} We write ${\Upsilon}_0\subseteq X_R$ for the collection of nonuniquely left extendable points in $X_R$. For $m\in\mathbb{N}$, we write ${\Upsilon}_m:=\sigma^m({\Upsilon}_0)$ for the collection of elements of $X_R$ whose preimage under $m$ iterates of $\sigma$ contains more than one point. \end{notation}
\begin{lemma}\label{lemma-extension} If $y\in X_R\setminus\bigcup_{m=0}^{\infty}{\Upsilon}_m$, then there is a unique $z\in X$ for which $y=z_R$. \end{lemma} \begin{proof} If not, there exist distinct $z_1, z_2\in X$ with $y=(z_1)_R=(z_2)_R$. Thus there exists $i\in\mathbb{N}$ such that $z_1(-i)\neq z_2(-i)$. Set $i_0$ to be the minimal such $i$. Then $\sigma^{-i_0+1}y=(\sigma^{-i_0+1}z_1)_R=(\sigma^{-i_0+1}z_2)_R$, but $(\sigma^{-i_0}z_1)_R\neq(\sigma^{-i_0}z_2)_R$. Thus $\sigma^{-i_0+1}y\in {\Upsilon}_0$, which implies that $y\in {\Upsilon}_{i_0-1}$, a contradiction. \end{proof}
\begin{lemma}\label{lemma-finite-extension} If $(X,\sigma)$ is a shift, $\varphi\in{\rm Aut}(X)$, and $y\in {\Upsilon}_0$, then there exists $m\geq 0$ such that $\varphi(y)\in {\Upsilon}_m$. \end{lemma} \begin{proof} If not, then $\varphi(y)\in X_R\setminus\bigcup_{m=0}^{\infty}{\Upsilon}_m$ and so Lemma~\ref{lemma-extension} implies that there is a unique $z\in X$ such that $\varphi(y)=z_R$. Since $\varphi$ is an automorphism, it follows that $\varphi^{-1}(z)$ is the only solution to the equation $y=x_R$, contradicting $y\in {\Upsilon}_0$. \end{proof}
We use this to complete the characterization of the automorphism group for minimal aperiodic shifts with linear growth:
\begin{proof}[Proof of Theorem~\ref{theorem:minimal}] Assume $(X,\sigma)$ is an aperiodic minimal shift such that there exists $k\in\mathbb{N}$ with $\liminf_{n\to\infty}P_X(n)/n<k$. Note that this implies $\liminf_{n\to\infty}\left(P_X(n)-kn\right)=-\infty$, so Lemma~\ref{lemma-boshernitzan} applies to $(X,\sigma)$.
Fix $y\in {\Upsilon}_0$ and let $\varphi\in{\rm Aut}(X)$. By Lemma~\ref{lemma-finite-extension}, there exists $m\in\mathbb{N}$ such that $\varphi(y)\in {\Upsilon}_m$. Let $m_{\varphi}\geq 0$ be the smallest non-negative integer for which $\varphi(y)\in {\Upsilon}_{m_{\varphi}}$. Then there exists $z_{\varphi}\in {\Upsilon}_0$ such that $\sigma^{m_{\varphi}}(z_{\varphi})=\varphi(y)$.
Now suppose $\varphi_1, \varphi_2\in{\rm Aut}(X)$ and $z_{\varphi_1}=z_{\varphi_2}$. We claim that $\varphi_1$ and $\varphi_2$ project to the same element in ${\rm Aut}(X)/\langle\sigma\rangle$. Without loss, suppose $m_{\varphi_1}\leq m_{\varphi_2}$. Then $$ \varphi_2(y)=\sigma^{m_{\varphi_2}}(z_{\varphi_2})=\sigma^{(m_{\varphi_2}-m_{\varphi_1})}\circ\sigma^{m_{\varphi_1}}(z_{\varphi_1})=\sigma^{(m_{\varphi_2}-m_{\varphi_1})}\circ\varphi_1(y). $$ By minimality, every word of every length occurs syndetically in every element of $(X,\sigma)$. It follows that all words occur syndetically in every element of $(X_R,\sigma)$, and in particular, all words occur syndetically in $y$. Both $\varphi_2$ and $\sigma^{(m_{\varphi_2}-m_{\varphi_1})}\circ\varphi_1$ are sliding block codes. Since $\varphi_2(y)=\sigma^{(m_{\varphi_2}-m_{\varphi_1})}\circ\varphi_1(y)$, it follows that $\varphi_2$ and $\sigma^{(m_{\varphi_2}-m_{\varphi_1})}\circ\varphi_1$ have the same image on every word, meaning that they define the same block code. In other words, $\varphi_1$ and $\varphi_2$ project to the same element in ${\rm Aut}(X)/\langle\sigma\rangle$, proving the claim.
By Lemma~\ref{lemma-boshernitzan}, we have $|{\Upsilon}_0|\leq k-1$, so there are at most $k-1$ distinct elements of $(X_R,\sigma)$ that arise as $z_{\varphi}$ for some $\varphi\in{\rm Aut}(X)$. Therefore, there are at most $k-1$ distinct elements of ${\rm Aut}(X)/\langle\sigma\rangle$. \end{proof}
This can be used to characterize the automorphism groups for particular systems. We note the simplest case of a Sturmian shift for later use (see~\cite[Example 4.1]{Olli}): \begin{corollary} \label{cor:olli} If $(X,\sigma)$ is a Sturmian shift, then ${\rm Aut}(X)=\langle\sigma\rangle$. \end{corollary} \begin{proof}
For a Sturmian shift, $(X,\sigma)$ is minimal, aperiodic, and $P_X(n)=n+1$ for all $n\in\mathbb{N}$. Applying Theorem~\ref{theorem:minimal} with $k=2$, we have that $\left|{\rm Aut}(X)/\langle\sigma\rangle\right|=1$. \end{proof}
More generally: \begin{corollary} If $(X,\sigma)$ is aperiodic, minimal and there exists $k\in\mathbb{N}$ such that $$ \liminf_{n\to\infty}P_X(n)-kn=-\infty, $$ then ${\rm Aut}(X)$ is the semi-direct product of a finite group and $\mathbb{Z}$. \end{corollary} \begin{proof} By Theorem~\ref{theorem:minimal}, ${\rm Aut}(X)/\langle\sigma\rangle$ is finite. Since $\langle\sigma\rangle$ has infinite order and is contained in the center of ${\rm Aut}(X)$, it follows from the classification of virtually cyclic groups (see~\cite{SW}) that ${\rm Aut}(X)$ is the semi-direct product of a finite group and $\mathbb{Z}$. \end{proof}
\section{Examples} \label{sec:examples}
\subsection{Automorphism group with large polynomial growth} \label{sec:large-poly-growth} Proposition~\ref{prop:polynomial} shows that if $(X,\sigma)$ is a shift satisfying $$ \limsup_{n\to\infty}\frac{P_X(n)}{n}<k, $$ then ${\rm Aut}(X)$ is locally a group of polynomial growth, with polynomial growth rate at most $k-1$. The following proposition shows that this estimate of the polynomial growth rate of ${\rm Aut}(X)$ is optimal. \begin{proposition} Let $k\in\mathbb{N}$ be fixed and let $\mathcal{A}=\{0,1\}\times\{1,\dots,k\}$. There is a shift $X\subseteq\mathcal{A}^{\mathbb{Z}}$ with a dense set of aperiodic points such that $P_X(n)=kn+k$ and ${\rm Aut}(X)\cong\mathbb{Z}^k$. \end{proposition} \begin{proof} Recall that a Sturmian shift is an aperiodic, minimal shift of $\{0,1\}^{\mathbb{Z}}$ whose complexity function satisfies $P_X(n)=n+1$ for all $n$. There are uncountably many Sturmian shifts, and any particular Sturmian shift factors onto only countably many other Sturmian shifts (since the factor map must be a sliding block code, of which there are only countably many). Therefore there exist $k$ Sturmian shifts $X_1, X_2,\dots,X_k$ such that there exists a sliding block code taking $X_i$ to $X_j$ if and only if $i=j$. We identify $X_i$ in a natural way with a shift of $\mathcal{A}^{\mathbb{Z}}$ by writing the elements of $X_i$ with the letters $(0,i)$ and $(1,i)$, and we abuse notation by also referring to this shift as $X_i$. Let $X:=X_1\cup\cdots\cup X_k$ (which is clearly shift invariant, and is closed because the minimum distance between a point in $X_i$ and a point in $X_j$ is $1$ whenever $i\neq j$).
Let $\varphi\in{\rm Aut}(X)$. As $\varphi$ is given by a sliding block code, $\varphi$ must preserve the sets $X_1,\dots,X_k$. Therefore ${\rm Aut}(X)\cong{\rm Aut}(X_1)\times\cdots\times{\rm Aut}(X_k)$. By Corollary~\ref{cor:olli} we have ${\rm Aut}(X_i)=\langle\sigma\rangle\cong\mathbb{Z}$ for $i=1,\dots,k$. So ${\rm Aut}(X)\cong\mathbb{Z}^k$. \end{proof}
\subsection{Quickly growing transitive shifts with trivial automorphism group} \label{sec:no-complexity-threshold} Next we describe a general process which takes a minimal shift of arbitrary growth rate and produces a transitive shift with essentially the same growth, but whose automorphism group consists only of powers of the shift. This shows that there is no ``complexity threshold'' above which the automorphism group of a transitive shift must be nontrivial.
\begin{lemma}\label{lemma:dense-orbit} If $(X,\sigma)$ is a transitive shift with precisely one dense orbit, then ${\rm Aut}(X)=\langle\sigma\rangle$. \end{lemma} \begin{proof} Suppose that there exists $x_0\in X$ such that $$ \{y\in X\colon y\text{ has a dense orbit}\}=\mathcal{O}(x_0). $$ If $\varphi\in{\rm Aut}(X)$, then $\varphi(x_0)$ has a dense orbit and so there exists $k\in\mathbb{Z}$ such that $\varphi(x_0)=\sigma^k(x_0)$. It follows that $\varphi$ and $\sigma^k$ agree on the (dense) orbit of $x_0$. Since both functions are continuous, they agree everywhere. \end{proof}
\begin{example}
Let $\mathcal{A}=\{0,1,2,\dots,n-1\}$ and let $X\subseteq\mathcal{A}^{\mathbb{Z}}$ be a minimal shift. Let $\tilde{\mathcal{A}}=\mathcal{A}\cup\{n\}$, where we add the symbol $n\notin\mathcal{A}$ to the alphabet. Fix $x_0\in X$ and define $\tilde{x}_0\in\tilde{\mathcal{A}}^{\mathbb{Z}}$ by: $$ \tilde{x}_0(i)=\begin{cases} x_0(i) & \text{ if }i\neq 0; \\ n & \text{ if }i=0. \end{cases} $$ Let $\tilde{X}\subseteq\tilde{\mathcal{A}}^{\mathbb{Z}}$ be the orbit closure of $\tilde{x}_0$. Then $\tilde{X}=X\cup\mathcal{O}(\tilde{x}_0)$, $(\tilde{X},\sigma)$ is transitive, $P_{\tilde{X}}(n)=P_X(n)+n$ for all $n\in\mathbb{N}$, and $\tilde{X}$ has precisely one dense orbit. By Lemma~\ref{lemma:dense-orbit}, ${\rm Aut}(\tilde{X})=\langle\sigma\rangle$. \end{example}
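To check the complexity claim in this example, note that a word of length $m$ in $\tilde{X}$ either avoids the symbol $n$, in which case it already occurs in $X$, or contains the symbol $n$ exactly once, in which case it is a window of $\tilde{x}_0$ and is completely determined by which of its $m$ positions carries the symbol $n$. These two families are disjoint, giving exactly $P_X(m)+m$ words of length $m$.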
\subsection{${\rm Aut}(X)$ and ${\rm Aut}(X)/{\rm Aut}(X)\cap[\sigma]$ are not always finitely generated} \label{sec:not-global}
Theorem~\ref{thm:main} shows that every finitely generated subgroup of ${\rm Aut}(X)$ is virtually $\mathbb{Z}^d$. When $X$ has a dense set of aperiodic points, Theorem~\ref{th:finitely-generated} shows that ${\rm Aut}(X)/{\rm Aut}(X)\cap[\sigma]$ is finite. In this section we show that the result of Theorem~\ref{th:finitely-generated} cannot be extended to the general case, and the words ``every finitely generated subgroup'' cannot be removed from the statement of Theorem~\ref{thm:main}. We begin with an example to set up our construction.
\begin{example} Let $\mathcal{A}=\{0,1\}$ and for $n\in\mathbb{N}$, let $x_n\in\mathcal{A}^{\mathbb{Z}}$ be the periodic point
$$
x_n(i)=\begin{cases}
1 & \text{ if }i\equiv0\text{ (mod $2^n$)}; \\
0 & \text{ otherwise}.
\end{cases}
$$ Let $X$ be the closure of the set $\{x_n\colon n\in\mathbb{N}\}$ under $\sigma$. If we define
$$
x_{\infty}(i)=\begin{cases}
1 & \text{ if }i=0; \\
0 & \text{ otherwise},
\end{cases}
$$ and ${\bf 0}$ to be the $\mathcal{A}$-coloring of all zeros, then we have $$ X=\{{\bf 0}\}\cup\mathcal{O}(x_{\infty})\cup\bigcup_{n=1}^{\infty}\mathcal{O}(x_n). $$ Suppose $R\in\mathbb{N}$ is fixed and $\varphi\in{\rm Aut}_R(X)$. Since $\varphi$ preserves the period of periodic points, $\varphi({\bf 0})={\bf 0}$. In particular, the block code $\varphi$ takes the block consisting of all zeros to $0$. It follows that there exists $k\in[-R,R]$ such that $\varphi(x_{\infty})=\sigma^k(x_{\infty})$. For any $m>2R+1$, the blocks of length $2R+1$ occurring in $x_m$ are identical to those appearing in $x_{\infty}$ and so $\varphi(x_m)=\sigma^k(x_m)$ for all such $m$.
Now let $\varphi_1,\dots,\varphi_n\in{\rm Aut}(X)$ and find $R\in\mathbb{N}$ such that $\varphi_1,\dots,\varphi_n,\varphi_1^{-1},\dots,\varphi_n^{-1}\in{\rm Aut}_R(X)$. For $1\leq i\leq n$, let $k_i\in[-R,R]$ be such that for all $m>2R+1$ we have $\varphi_i(x_m)=\sigma^{k_i}(x_m)$. Then for any $N\in\mathbb{N}$, any $e_1,\dots,e_N\in\{1,\dots,n\}$, any $\epsilon_1,\dots,\epsilon_N\in\{-1,1\}$, and any $m>2R+1$, we have $$ \left(\varphi_{e_1}^{\epsilon_1}\circ\varphi_{e_2}^{\epsilon_2}\circ\cdots\circ\varphi_{e_N}^{\epsilon_N}\right)(x_m)=\sigma^{(\epsilon_1\cdot k_{e_1}+\epsilon_2\cdot k_{e_2}+\cdots+\epsilon_N\cdot k_{e_N})}(x_m). $$ Then if $\varphi\in{\rm Aut}(X)$ is the automorphism that acts like $\sigma$ on $\mathcal{O}(x_{R+1})$ and acts trivially on $X\setminus\mathcal{O}(x_{R+1})$ (this map is continuous because $x_{R+1}$ is isolated), then $\varphi\notin\langle\varphi_1,\dots,\varphi_n\rangle$. Therefore $\langle\varphi_1,\dots,\varphi_n\rangle\neq{\rm Aut}(X)$. Since $\varphi_1,\dots,\varphi_n\in{\rm Aut}(X)$ were arbitrary, it follows that ${\rm Aut}(X)$ is not finitely generated.
On the other hand, a direct count of the words of length $n$ in $X$ shows that $$ P_X(n)=2n-1<3n $$ for all $n\geq2$, so $P_X(n)$ grows linearly. We also remark that ${\rm Aut}(X)={\rm Aut}(X)\cap[\sigma]$ for this shift. \end{example}
\begin{proposition}\label{ex:infinitely-generated} There exists a shift $(X,\sigma)$ of linear growth that has a dense set of periodic points and is such that none of the groups ${\rm Aut}(X)$, ${\rm Aut}(X)\cap[\sigma]$, and ${\rm Aut}(X)/{\rm Aut}(X)\cap[\sigma]$ are finitely generated. \end{proposition} \begin{proof} Let $X_1$ be the shift of $\{0,1\}^{\mathbb{Z}}$ constructed in the previous example. Let $X_2$ be the same shift, constructed over the alphabet $\{2,3\}$ (by identifying $0$ with $2$ and $1$ with $3$). Let $X=X_1\cup X_2$ and observe that $d(X_1,X_2)=1$. Since ${\rm Aut}(X_i)\cap[\sigma]={\rm Aut}(X_i)$ for $i=1,2$, we have ${\rm Aut}(X)\cap[\sigma]\cong{\rm Aut}(X_1)\times{\rm Aut}(X_2)$. Therefore ${\rm Aut}(X)\cap[\sigma]$ is not finitely generated. On the other hand, $$ P_X(n)=P_{X_1}(n)+P_{X_2}(n)=2\cdot P_{X_1}(n)<6n $$ so $X$ is a shift of linear growth (and has a dense set of periodic points).
We claim that ${\rm Aut}(X)/{\rm Aut}(X)\cap[\sigma]$ is not finitely generated. Define $\delta\in{\rm Aut}(X)$ to be the range $0$ involution that exchanges $0$ with $2$ and $1$ with $3$. For each $m\in\mathbb{N}$, let $\delta_m\in{\rm Aut}(X)$ be the involution which exchanges the (unique) orbit of period $2^m$ in $X_1$ with the (unique) orbit of period $2^m$ in $X_2$ by exchanging $0$ with $2$ and $1$ with $3$ in these orbits only (and fixing the remainder of $X$). For $i\in\mathbb{N}$, let $\tilde{\delta}_i$ be the projection of $\delta_i$ to ${\rm Aut}(X)/{\rm Aut}(X)\cap[\sigma]$ and let $\tilde{\delta}$ be the projection of $\delta$. These involutions commute pairwise and the set $\{\tilde{\delta}_i\colon i\in\mathbb{N}\}\cup\{\tilde{\delta}\}$ is clearly independent.
Now let ${\bf x}\in X_1$ be the point with ${\bf x}(i)=1$ if and only if $i=0$, and let ${\bf y}\in X_2$ be the point with ${\bf y}(i)=3$ if and only if $i=0$. Let $\varphi\in{\rm Aut}(X)$ be fixed and observe that either $\varphi({\bf x})\in\mathcal{O}({\bf x})$ or $\varphi({\bf x})\in\mathcal{O}({\bf y})$. In the former case define $\epsilon:=0$ and in the latter case define $\epsilon:=1$, so that $\varphi\circ\delta^{\epsilon}$ preserves the orbit of ${\bf x}$ (hence also the orbit of ${\bf y}$). As $\varphi\circ\delta^{\epsilon}$ is given by a block code which carries the block of all $0$'s to $0$, there are at most finitely many $m$ such that $\varphi\circ\delta^{\epsilon}$ does not preserve the orbit of the (unique) periodic orbit of period $2^m$ in $X_1$. Let $m_1<\cdots<m_n$ be the set of $m$ for which it does not preserve the orbit. Then $$ \varphi\circ\delta^{\epsilon}\circ\delta_{m_1}\circ\cdots\circ\delta_{m_n}\in{\rm Aut}(X)\cap[\sigma]. $$ Therefore ${\rm Aut}(X)/{\rm Aut}(X)\cap[\sigma]$ is the group generated by $\tilde{\delta}, \tilde{\delta}_1, \tilde{\delta}_2, \tilde{\delta}_3,\dots$. This group is isomorphic to $\bigoplus_{i=1}^{\infty}\mathbb{Z}_2$, so ${\rm Aut}(X)/{\rm Aut}(X)\cap[\sigma]$ is not finitely generated. Finally, as ${\rm Aut}(X)$ factors onto a group that is not finitely generated, it is not finitely generated either. \end{proof}
\section{Automorphisms of periodic shifts} \label{sec:periodic2}
We characterize which finite groups arise as automorphism groups of shifts. \begin{definition} For $n>1$ and $m\in\mathbb{N}$, let $$ \mathbb{Z}_n^m:=\underbrace{\mathbb{Z}_n\times\mathbb{Z}_n\times\cdots\times\mathbb{Z}_n}_{m\text{ times}} $$ where $\mathbb{Z}_n$ denotes $\mathbb{Z}/n\mathbb{Z}$. Let $S_m$ denote the symmetric group on $m$ letters and define a homomorphism $\psi\colon S_m\to{\rm Aut}(\mathbb{Z}_n^m)$ by $$ \left(\psi(\pi)\right)(i_1,\dots,i_m):=(i_{\pi(1)},\dots,i_{\pi(m)}). $$ Then the {\em generalized symmetric group} is defined as in~\cite{Osima} to be $$ S(n,m):=\mathbb{Z}_n^m\rtimes_{\psi} S_m. $$ Equivalently, $S(n,m)$ is the wreath product $\mathbb{Z}_n\wr S_m$. \end{definition}
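Two degenerate cases may help illustrate this definition: $S(n,1)\cong\mathbb{Z}_n$ and $S(1,m)\cong S_m$. In general, $|S(n,m)|=n^m\cdot m!$; for instance, $S(2,m)$ is the hyperoctahedral group of signed permutations of $\{1,\dots,m\}$.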
\begin{theorem} Suppose $G$ is a finite group. There exists a shift $(X,\sigma)$ for which ${\rm Aut}(X)\cong G$ if and only if there exist $s\in\mathbb{N}$, $n_1<n_2<\cdots<n_s$, and $m_1, m_2, \dots,m_s\in\mathbb{N}$ such that $$ G\cong S(n_1,m_1)\times S(n_2,m_2)\times\cdots\times S(n_s,m_s). $$ \end{theorem} \begin{proof} Suppose $(X,\sigma)$ is a shift for which ${\rm Aut}(X)$ is finite. Since $\sigma\in{\rm Aut}(X)$, there exists $k\in\mathbb{N}$ such that $\sigma^k(x)=x$ for all $x\in X$. That is, $X$ is comprised entirely of periodic points such that the minimal period of each point is a divisor of $k$. Since a shift can have only finitely many such points, $X$ is finite. Let $x_1,\dots,x_N\in X$ be representatives of the orbits in $X$, meaning that $\mathcal{O}(x_i)\cap\mathcal{O}(x_j)=\emptyset$ whenever $i\neq j$ and for all $x\in X$ there exist $i,k\in\mathbb{N}$ such that $x=\sigma^k(x_i)$. For $i=1, \ldots, N$, let $p_i$ be the minimal period of $x_i$ and, without loss, assume that $p_1\leq p_2\leq\cdots\leq p_N$. Define $n_1:=p_1$ and inductively define $n_2,n_3,\dots,n_s$ by $$ n_{i+1}:=\min\{p_j\colon p_j>n_i\}, $$ where $s$ is the number of steps before the construction terminates. Define $$ m_i:=\vert\{j\colon p_j=n_i\}\vert. $$ Let $\varphi\in{\rm Aut}(X)$. Then for $1\leq i\leq s$, $\varphi$ induces a permutation on the set of periodic points of minimal period $n_i$. More precisely, for fixed $1\leq i\leq s$, we can define $\pi_i^{\varphi}\in S_{m_i}$ to be the permutation that sends $j\in\{1,2,\dots,m_i\}$ to the unique integer $k\in\{1,2,\dots,m_i\}$ such that $\varphi(x_{m_1+\cdots+m_{i-1}+j})\in\mathcal{O}(x_{m_1+\cdots+m_{i-1}+k})$. For $1\leq j\leq m_i$, choose $k_j^i\in\mathbb{Z}_{n_i}$ such that $$ \varphi(x_{m_1+\cdots+m_{i-1}+j})=\sigma^{k_j^i}(x_{m_1+\cdots+m_{i-1}+\pi_i^{\varphi}(j)}). 
$$ Then the map $$ \varphi\mapsto(k_1^1,k_2^1,\dots,k_{m_1}^1,\pi_1^{\varphi},k_1^2,\dots,k_{m_2}^2,\pi_2^{\varphi},\dots,k_1^s,\dots,k_{m_s}^s,\pi_s^{\varphi}) $$ is a homomorphism from ${\rm Aut}(X)$ to $S(n_1,m_1)\times\cdots\times S(n_s,m_s)$. The kernel of this map is trivial, since an automorphism in the kernel fixes each $x_i$ and hence, as it commutes with $\sigma$, fixes every point of $X$. To check that it is surjective, if $\pi_1,\dots,\pi_s$ are permutations ($\pi_i\in S_{m_i}$ for all $i$), then define $$ \varphi_{\pi_1,\dots,\pi_s}(x_{m_1+\cdots+m_{i-1}+j}):=x_{m_1+\cdots+m_{i-1}+\pi_i(j)} $$ and extend this to an automorphism of $(X,\sigma)$. Similarly, for $1\leq i\leq N$, define $$ \varphi_i(\sigma^k(x_j)):=\sigma^{k+\delta_{i,j}}(x_j), $$ where $\delta_{i,j}$ is the Kronecker delta. Note that each of these maps is given by a block code, where the range is the smallest $R$ such that $\sigma^R(x)=x$ for all $x\in X$. Taken together, this shows that the map is surjective and thus is an isomorphism.
Conversely, suppose that $n_1<\cdots<n_s$ are given and $m_1,\dots,m_s\in\mathbb{N}$. For $1\leq i\leq s$ and $1\leq j\leq m_i$, define $$ x_{i,j}(k):=\begin{cases} j & \text{ if }k\equiv0\text{ (mod $n_i$)}; \\ 0 & \text{ otherwise}. \end{cases} $$ Let $$ X^{\prime}:=\{x_{i,j}\colon 1\leq i\leq s\text{, }1\leq j\leq m_i\} $$ and let $X$ be the closure of $X^{\prime}$ under $\sigma$. Then $X$ consists of periodic points, with precisely $m_i$ distinct orbits of minimal period $n_i$, for $1\leq i\leq s$. Thus \begin{equation*} {\rm Aut}(X)\cong S(n_1,m_1)\times\cdots\times S(n_s,m_s).
\qedhere \end{equation*} \end{proof}
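To illustrate the theorem, if $X$ consists of one orbit of minimal period $2$ and two orbits of minimal period $3$, then $s=2$, $(n_1,m_1)=(2,1)$, $(n_2,m_2)=(3,2)$, and $$ {\rm Aut}(X)\cong S(2,1)\times S(3,2)\cong\mathbb{Z}_2\times\left(\mathbb{Z}_3^2\rtimes_{\psi}S_2\right), $$ a group of order $2\cdot 3^2\cdot 2!=36$.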
\noindent {\bf Acknowledgment}: We thank Jim Davis for pointing us to reference~\cite{SW}.
\end{document}
\begin{document}
\begin{abstract}
We study rationality properties of geodesic cycle integrals of meromorphic modular forms associated to positive definite binary quadratic forms. In particular, we obtain finite rational formulas for the cycle integrals of suitable linear combinations of these meromorphic modular forms. \end{abstract}
\title{Meromorphic modular forms with rational cycle integrals}
\section{Introduction} \label{sec:introduction}
One of the fundamental results in the classical theory of modular forms is the fact that the vector spaces of modular forms are spanned by forms with rational Fourier coefficients. Besides that, there are other natural rational structures on these spaces, for example coming from the rationality of periods or cycle integrals of modular forms. This was first shown by Kohnen and Zagier in \cite{kohnenzagierrationalperiods}, where they proved the rationality of the even periods of the cusp forms \begin{align}\label{fkdef}
f_{k,D}(z) := \frac{|D|^{k-\frac{1}{2}}}{\pi}\sum_{Q \in \mathcal{Q}_{D}}Q(z,1)^{-k} \end{align} of weight $2k$ for $\Gamma(1) = \mathrm{SL}_{2}(\mathbb{Z})$, for $k \geq 2$ and all discriminants $D > 0$. Here the sum runs over the set $\mathcal{Q}_{D}$ of all integral binary quadratic forms of discriminant $D$. These cusp forms were introduced by Zagier while investigating the Doi-Naganuma lift in \cite{zagierdoinaganuma}, and they played a prominent role in the explicit description of the Shimura-Shintani correspondence in \cite{kohnenzagiercriticalstrip}. The aforementioned rationality result of Kohnen and Zagier was generalized to Fuchsian groups of the first kind by Katok \cite{katok}. Periods and cycle integrals of other types of modular forms, such as weakly holomorphic modular forms, harmonic Maass forms, or meromorphic modular forms, have been the object of active research over the last years, see for example \cite{bringmannfrickekent, bringmannguerzhoykane, bif, bifl, dit}.
If we allow negative discriminants $D < 0$ in \eqref{fkdef} and restrict the summation to positive definite forms $Q \in \mathcal{Q}_{D}$, then we obtain meromorphic modular forms $f_{k,D}$ of weight $2k$ for $\Gamma(1)$ with poles of order $k$ at the CM points of discriminant $D$. These forms recently attracted some attention, starting with the work of Bengoechea \cite{bengoecheapaper} on the rationality properties of their Fourier coefficients. Their regularized inner products and connections to locally harmonic Maass forms were investigated by Bringmann, Kane, and von Pippich \cite{bringmannkanevonpippich} and the first author \cite{loebrich}. Furthermore, Zemel \cite{zemel} used them to prove a higher-dimensional analogue of the Gross-Kohnen-Zagier theorem \cite{grosskohnenzagier}, which hints at a deeper geometric meaning of the meromorphic $f_{k,D}$. Recently, Alfes-Neumann, Bringmann, and the second author in \cite{anbs} established modularity properties of the generating series of traces of cycle integrals of $f_{k,D}$ and used this to show the rationality of suitable linear combinations of these traces.
It is natural to ask whether the individual cycle integrals of the meromorphic modular forms $f_{k,D}$ for $D < 0$ have nice rationality properties too. For an indefinite integral binary quadratic form $A = [a,b,c]$ of non-square discriminant the cycle integral of $f_{k,D}$ along the closed geodesic corresponding to $A$ is defined by \begin{align*}
\mathcal{C}(f_{k,D},A) := \int_{\Gamma(1)_{A}\backslash S_{A}}f_{k,D}(z)A(z,1)^{k-1}dz, \end{align*} where \[
S_{A} := \{z \in \mathbb{H} : a|z|^{2}+b\mathrm{Re}(z)+c = 0\} \] is a semi-circle centered at the real line and $\Gamma(1)_{A}$ denotes the stabilizer of $A$ in $\Gamma(1)$. Note that, due to the modularity of $f_{k,D}$, the cycle integral depends only on the $\Gamma(1)$-equivalence class of $A$. If $f_{k,D}$ has a pole on $S_A$, the cycle integral can be defined as a Cauchy principal value, see Section~\ref{section cycle integrals}. Numerical integration yields the following approximations for $k \in \{2,4,6\}$ and $D = -3$. \begin{align*}
\renewcommand{\arraystretch}{1.2}
\begin{array}{|c||c|c|c|c|c|c|}
\hline
A & [1,1,-1] & [1,0,-2] & [1,1,-3] & [1,1,-4] & [1,1,-5] & [1,0,-6] \\
\hline \hline
\mathcal{C}(f_{2,-3},A) & 4 & 8 & 12 & 28 & 10 & 16\\
\hline
\mathcal{C}(f_{4,-3},A) & 20 & 48 & 92 & 452 & 170 & 288 \\
\hline
\mathcal{C}(f_{6,-3},A) & 142.36448 & 411.27103 & 1049.99067 & 12351.27103 & 5635.65417 & 8944.31786\\
\hline
\end{array} \end{align*} It seems that the cycle integrals of $f_{2,-3}$ and $f_{4,-3}$ are integers, but there is little reason to believe that the cycle integrals of $f_{6,-3}$ are rational numbers.
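The closed geodesic $c_A = \Gamma(1)_A\backslash S_A$ is compact because the stabilizer $\Gamma(1)_{A}$ of $A$ is infinite; up to sign it is generated by the automorph of $A$ attached to the fundamental solution of the Pell equation $t^{2}-Du^{2}=4$. The following minimal Python sketch (illustrative only; the helper names are ours, not from the literature) computes this generator:

```python
def fundamental_pell(D):
    """Smallest solution (t, u) with u >= 1 of t^2 - D*u^2 = 4."""
    u = 1
    while True:
        t2 = D * u * u + 4
        t = int(round(t2 ** 0.5))
        if t * t == t2:
            return t, u
        u += 1

def automorph(a, b, c):
    """Generator (up to sign) of the stabilizer of A = [a,b,c] in SL_2(Z)."""
    D = b * b - 4 * a * c
    t, u = fundamental_pell(D)
    # M = ((t+bu)/2, cu; -au, (t-bu)/2) preserves the form A
    return ((t + b * u) // 2, c * u, -a * u, (t - b * u) // 2)

# Example: A = [1,1,-1] of discriminant D = 5
p, q, r, s = automorph(1, 1, -1)
assert p * s - q * r == 1                      # M lies in SL_2(Z)
A = lambda x, y: x * x + x * y - y * y         # A(x, y) = x^2 + xy - y^2
for x, y in [(1, 0), (0, 1), (2, 3)]:
    assert A(p * x + q * y, r * x + s * y) == A(x, y)
```

For $A = [1,1,-1]$ one obtains the automorph $\left(\begin{smallmatrix}2 & -1 \\ -1 & 1\end{smallmatrix}\right)$.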
The main aim of the present work is to investigate the rationality of the cycle integrals of $f_{k,D}$ for $D < 0$. As we will see, the failure of rationality of these cycle integrals is due to the existence of cusp forms of weight $2k$. In particular, we have to take certain linear combinations of cycle integrals of a fixed $f_{k,D}$ or a fixed cycle integral of linear combinations of forms $f_{k,D}$ to obtain convenient rationality results. We also treat forms of higher level $\Gamma_{0}(N)$, as well as the case $k = 1$. We remark that our results generalize the rationality results of \cite{anbs} in several aspects, using a very different proof.
\section{Statement of results}\label{section results}
Let $N$ and $k$ be positive integers and let $\Gamma = \Gamma_{0}(N)$. For any $D \in \mathbb{Z}$ the group $\Gamma$ acts on the set $\mathcal{Q}_{D}$ of (positive definite if $D < 0$) integral binary quadratic forms $Q = [a,b,c]$ of discriminant $D = b^{2}-4ac$ with $N \mid a$, with finitely many orbits if $D \neq 0$. We write $[Q_{0}]$ for the $\Gamma$-class of $Q_{0} \in \mathcal{Q}_{D}$. For $D \neq 0$ and $k \geq 2$ we define the associated function \[
f_{k,Q_{0}}(z) := \frac{|D|^{k-\frac{1}{2}}}{\pi}\sum_{Q \in [Q_{0}]}Q(z,1)^{-k} \] on $\mathbb{H}$. For $k = 1$ the function $f_{1,Q_{0}}(z)$ is defined using Hecke's trick, see Section~\ref{section modular forms quadratic forms}. Throughout, we let $A \in \mathcal{Q}_{D}$ denote an indefinite quadratic form of non-square discriminant $D > 0$, and $P \in \mathcal{Q}_{d}$ a positive definite quadratic form of discriminant $d < 0$. Then $f_{k,A}$ is a cusp form of weight $2k$ for $\Gamma$, and $f_{k,P}$ is a meromorphic modular form of weight $2k$ for $\Gamma$ which has poles of order $k$ at the CM points $\tau_{Q} \in \mathbb{H}$ (defined by $Q(\tau_{Q},1) =0$) for $Q \in [P]$.
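The finiteness of the orbits for $d < 0$ is classical reduction theory: every positive definite form is $\Gamma(1)$-equivalent to a unique reduced form $[a,b,c]$ with $|b| \leq a \leq c$ (and $b \geq 0$ if $|b| = a$ or $a = c$). A short Python sketch (level $N = 1$, for illustration only) counts the classes by enumerating reduced forms:

```python
def class_number(d):
    """h(d): number of reduced positive definite forms of discriminant d < 0."""
    assert d < 0 and d % 4 in (0, 1)
    h, b = 0, d % 2
    while 3 * b * b <= -d:                # reduced forms satisfy 3b^2 <= |d|
        q = (b * b - d) // 4              # q = a*c for candidate forms [a,b,c]
        a = max(b, 1)
        while a * a <= q:
            if q % a == 0:
                c = q // a                # reduced form [a,b,c] with 0 <= b <= a <= c
                h += 1
                if 0 < b < a < c:
                    h += 1                # [a,-b,c] is a distinct reduced form
            a += 1
        b += 2
    return h

assert class_number(-3) == 1 and class_number(-4) == 1
assert class_number(-23) == 3             # classes [1,1,6] and [2,+-1,3]
```

In particular there is a single class of discriminant $-3$, represented by $P = [1,1,1]$, which is the case appearing in the numerical table of the introduction.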
Our explicit formulas for the cycle integrals of $f_{k,P}$ will be given in terms of the following function. For $k \geq 2$, an indefinite quadratic form $A \in \mathcal{Q}_{D}$ of non-square discriminant $D > 0$, and $\tau \in \mathbb{H}$ not lying on any of the semi-circles $S_{Q}$ for $Q \in [A]$, we define the function \begin{align}\label{eq def local polynomial} \begin{split} \mathcal{P}_{k,A}(\tau) &:=D^{k-\frac12}\frac{(-1)^k\zeta_{\Gamma,A}(k)+ \zeta_{\Gamma,-A}(k)}{2^{k-2}(2k-1)\mathrm{Im}(\tau)^{k-1}}
\\
& \qquad +2 \left(-i\sqrt{D}\right)^{k-1}\sum_{\substack{Q = [a,b,c] \in [A] \\\tau \in \Int(S_{Q})}}\operatorname{sgn}(a) P_{k-1}\left(\frac{i(a|\tau|^{2}+b\mathrm{Re}(\tau)+c)}{\mathrm{Im}(\tau)\sqrt{D}}\right), \end{split} \end{align} where the zeta function $\zeta_{\Gamma,A}(s)$ is defined in \eqref{zetadef}, $P_{k-1}$ denotes the usual Legendre polynomial, and $\Int(S_{Q})$ denotes the bounded component of $\mathbb{H} \setminus S_{Q}$. For $k = 1$ the function $\mathcal{P}_{1,A}(\tau)$ is defined analogously, but the first line has to be omitted. If $\tau \in \mathbb{H}$ does lie on one of the semi-circles $S_{Q}$ for $Q \in [A]$, we define the value of $\mathcal{P}_{k,A}$ at $\tau$ by the average value \[ \mathcal{P}_{k,A}(\tau) := \lim_{\varepsilon \to 0}\frac{1}{2}\big(\mathcal{P}_{k,A}(\tau+i\varepsilon)+\mathcal{P}_{k,A}(\tau-i\varepsilon)\big). \] Note that the sum in the second line of \eqref{eq def local polynomial} is finite and $\mathcal{P}_{k,A}(\tau)$ has discontinuities along the semi-circles $S_{Q}$ for $Q \in [A]$. From the properties of $\zeta_{\Gamma,A}(s)$ given in Section~\ref{section zeta functions} it easily follows that the special values \[
|d|^{\frac{k-1}{2}}\mathcal{P}_{k,A}(\tau_{P}) \] at CM points $\tau_{P} \in \mathbb{H}$ associated to positive definite forms $P \in \mathcal{Q}_{d}$ are rational numbers.
Our first rationality result concerns linear combinations of cycle integrals of a fixed $f_{k,P}$.
\begin{theorem}\label{theorem traces rationality}
Let $\mathcal{Q}$ be a finite family of indefinite quadratic forms
of non-square discriminants and $a_{A} \in \mathbb{Z}$ for $A \in \mathcal{Q}$ such that $\sum_{A \in \mathcal{Q}}a_{A}f_{k,A}= 0$ in $S_{2k}(\Gamma)$. Furthermore, let $P \in \mathcal{Q}_{d}$ be a positive definite quadratic form of discriminant $d$. Then we have the formula \[
\sum_{A \in \mathcal{Q}}a_{A}\mathcal{C}(f_{k,P},A) = \frac{|d|^{\frac{k-1}{2}} }{|\overline{\Gamma}_{P}|}\sum_{A \in \mathcal{Q}} a_{A} \mathcal{P}_{k,A}(\tau_P), \] where $\overline{\Gamma}_{P}$ is the stabilizer of $P$ in $\Gamma/\{\pm 1\}$.
In particular, this linear combination of cycle integrals is a rational number whose denominator is bounded by a constant depending only on $k$ and $N$. \end{theorem}
We would like to emphasize that the formula on the right-hand side can be evaluated exactly, giving the precise rational value of the linear combination of cycle integrals on the left-hand side.
The proof of Theorem~\ref{theorem traces rationality} uses the fact that the cycle integral $\mathcal{C}(f_{k,P},A)$ equals the special value at the CM point $\tau_{P}$ of (the iterated derivative of) a so-called locally harmonic Maass form $\mathcal{F}_{1-k,A}(\tau)$, see Corollary~\ref{corollary main identity}. This function was introduced by Bringmann, Kane, and Kohnen in \cite{bringmannkanekohnen}. They showed that $\mathcal{F}_{1-k,A}$ can be decomposed into a sum of a certain local polynomial (whose iterated derivative is $\mathcal{P}_{k,A}$), and holomorphic and non-holomorphic Eichler integrals of the cusp form $f_{k,A}$. Taking suitable linear combinations as in the theorem, one can achieve that the Eichler integrals cancel out, which yields the formula in Theorem~\ref{theorem traces rationality}. We refer to Section~\ref{section proof theorem traces rationality} for the details of the proof.
\begin{example} Let $N = 1$. Since there are no non-trivial cusp forms of weight less than $12$ or weight $14$ for $\Gamma(1)$, the functions $f_{k,A}$ for $k \leq 5$ and $k=7$ vanish identically for every indefinite quadratic form $A$. Thus it follows from Theorem~\ref{theorem traces rationality} that the cycle integrals $\mathcal{C}(f_{k,P},A)$ are rational for $k \leq 5$ and $k=7$ for every choice of $P$ and $A$. This explains the rationality of the cycle integrals of $f_{2,-3}$ and $f_{4,-3}$ that we observed in the introduction. In contrast, we have seen in the introduction that the cycle integrals $\mathcal{C}(f_{6,-3}, A)$ do not seem to be rational. This corresponds to the fact that $f_{6,A}$ is a cusp form of weight $12$ which does not usually vanish identically. However, using results of \cite{zagqf}, one can prove the relations \[ 2 f_{6,[1,1,-1]} + f_{6,[1,0,-2]}= 11 f_{6,[1,1,-1]} + f_{6,[1,1,-3]} = f_{6,[1,0,-2]} -f_{6,[1,1,-4]} =0. \] Now Theorem \ref{theorem traces rationality} asserts that for any positive definite quadratic form $P$, the corresponding linear combinations \begin{align*} 2\mathcal{C}(f_{6,P}, [1,1,-1]) + \mathcal{C}(f_{6,P}, [1,0,-2]),& \\ 11\mathcal{C}(f_{6,P}, [1,1,-1]) + \mathcal{C}(f_{6,P}, [1,1,-3]),& \\
\mathcal{C}(f_{6,P}, [1,0,-2]) - \mathcal{C}(f_{6,P}, [1,1,-4]),& \end{align*} of cycle integrals of $f_{6,P}$ are rational numbers. For example, for $f_{6,[1,1,1]} = f_{6,-3}$ we have \begin{align*} 2\mathcal{C}(f_{6,-3}, [1,1,-1]) + \mathcal{C}(f_{6,-3}, [1,0,-2]) &=696,\\ 11\mathcal{C}(f_{6,-3}, [1,1,-1]) + \mathcal{C}(f_{6,-3}, [1,1,-3]) &=2616,\\ \mathcal{C}(f_{6,-3}, [1,0,-2]) - \mathcal{C}(f_{6,-3}, [1,1,-4]) &=-11940. \end{align*}
\end{example}
Next, we consider cycle integrals of certain linear combinations of forms $f_{k,P}$ over a single geodesic. Following \cite{grosszagier}, we call a sequence $\underline{\lambda} = (\lambda_{m})_{m=1}^{\infty} \subset \mathbb{Z}$ of integers a \emph{relation} for $S_{2k}(\Gamma)$ if
\begin{enumerate}
\item $\lambda_{m} = 0$ for almost all $m$,
\item $\sum_{m=1}^{\infty}\lambda_{m}c_{f}(m) = 0$ for every cusp form $f(z) = \sum_{m=1}^{\infty}c_{f}(m)q^{m} \in S_{2k}(\Gamma)$, and
\item $\lambda_{m} = 0$ whenever $(m,N) > 1$.
\end{enumerate}
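For $N = 1$ and $2k = 12$ the space $S_{12}(\Gamma(1))$ is one-dimensional, spanned by $\Delta = \sum_{m \geq 1}\tau(m)q^{m}$, so a sequence $\underline{\lambda}$ is a relation precisely when $\sum_{m}\lambda_{m}\tau(m) = 0$ (condition (3) is vacuous for $N=1$). A Python sketch (illustrative only) computing $\tau(m)$ from the eta product $\Delta = q\prod_{n \geq 1}(1-q^{n})^{24}$ and checking the relation $\underline{\lambda} = (24,1,0,\dots)$ used below:

```python
def delta_coefficients(n_max):
    """Fourier coefficients tau(m), 1 <= m < n_max, of Delta = q*prod(1-q^n)^24."""
    poly = [0] * n_max
    poly[0] = 1
    for n in range(1, n_max):
        for _ in range(24):               # multiply by (1 - q^n), 24 times
            for i in range(n_max - 1, n - 1, -1):
                poly[i] -= poly[i - n]
    return [0] + poly[:n_max - 1]         # shift by q: tau(m) = poly[m-1]

tau = delta_coefficients(10)              # tau[m] = tau(m) for 1 <= m <= 9
assert tau[1] == 1 and tau[2] == -24
# lambda = (24, 1, 0, ...) is a relation for S_12: 24*tau(1) + tau(2) = 0
assert 24 * tau[1] + tau[2] == 0
```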
For a meromorphic function $f$ on $\mathbb{H}$ which transforms like a modular form of weight $2k$ for $\Gamma$ we define its \emph{Hecke translate} corresponding to a relation $\underline{\lambda}$ by \[
f|T_{\underline{\lambda}} := \sum_{m=1}^{\infty}\lambda_{m}f|T_{m}, \]
where $T_{m}$ denotes the usual $m$-th Hecke operator of level $N$, see \eqref{eq definition Tn}. Bengoechea \cite{bengoecheapaper} showed that the Fourier coefficients of $f_{k,P}|T_{\underline{\lambda}}$ are algebraic multiples of $\pi^{k-1}$ for every positive definite quadratic form $P$ and every relation $\underline{\lambda}$ for $S_{2k}(\Gamma)$. We obtain a rationality result for the cycle integrals of $f_{k,P}|T_{\underline{\lambda}}$.
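In level $1$ the action of $T_{m}$ on Fourier expansions is given by the classical formula $c_{f|T_{m}}(n) = \sum_{r \mid (m,n)} r^{2k-1}c_{f}(mn/r^{2})$ (a special case of the level-$N$ operator \eqref{eq definition Tn}; we state it here only for illustration). A Python sketch checking it against the Hecke eigenform $\Delta$, whose eigenvalues are its coefficients $\tau(m)$:

```python
from math import gcd

# Fourier coefficients tau(1..10) of the weight-12 eigenform Delta
tau = {1: 1, 2: -24, 3: 252, 4: -1472, 5: 4830,
       6: -6048, 7: -16744, 8: 84480, 9: -113643, 10: -115920}

def hecke(c, m, n, k):
    """Coefficient c_{f|T_m}(n) for level 1 and weight 2k."""
    return sum(r ** (2 * k - 1) * c[m * n // (r * r)]
               for r in range(1, gcd(m, n) + 1) if gcd(m, n) % r == 0)

# Delta is an eigenform: c_{Delta|T_m}(n) = tau(m)*tau(n)
k = 6
assert hecke(tau, 2, 2, k) == tau[2] * tau[2]   # tau(4) + 2^11 = tau(2)^2
assert hecke(tau, 2, 3, k) == tau[2] * tau[3]
assert hecke(tau, 3, 3, k) == tau[3] * tau[3]
```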
\begin{theorem}\label{theorem traces rationality 2}
Let $A \in \mathcal{Q}_{D}$ be an indefinite quadratic form of non-square discriminant $D$. Furthermore, let $P \in \mathcal{Q}_{d}$ be a positive definite quadratic form of discriminant $d$ and let $\underline{\lambda} = (\lambda_{m})$ be a relation for $S_{2k}(\Gamma)$. Then we have the formula
\begin{align*}
\mathcal{C}\left(f_{k,P}|T_{\underline{\lambda}},A\right) &= \frac{|d|^{\frac{k-1}{2}} }{|\overline{\Gamma}_{P}|}\sum_{m \geq 1} \lambda_{m}m^{k-1}\sum_{\substack{\alpha \delta =m \\ \delta>0}}\sum_{\beta\!\!\!\!\!\pmod{\delta}} \mathcal{P}_{k,A}\left(\frac{\alpha \tau_P+\beta}{\delta}\right).
\end{align*}
In particular, the cycle integrals of $f_{k,P}|T_{\underline{\lambda}}$ are rational numbers whose denominators are bounded by a constant depending only on $k$ and $N$. \end{theorem}
The idea of the proof is similar to that of Theorem~\ref{theorem traces rationality}. See Section~\ref{section proof theorem traces rationality 2} for the details.
\begin{example}\label{Heckexp} Let $N = 1$. For $k=6$, we have the relation $\underline{\lambda} = (24,1,0,0,\dots)$ for $S_{12}$. By Theorem~\ref{theorem traces rationality 2} applied to $f_{6,-3} = f_{6,[1,1,1]}$, the function \[
f_{6,-3}|T_{\underline{\lambda}} =f_{6,-3}|T_2 + 24 f_{6,-3} = f_{6,-12} -8 f_{6,-3} \] has rational cycle integrals. Here we used the action of $T_{p}$ on $f_{k,D}$ as stated in \cite{bengoecheapaper}. Indeed, we have \begin{align*} \renewcommand{\arraystretch}{1.2}
\begin{array}{|c||c|c|c|c|c|c|} \hline A & [1,1,-1] & [1,0,-2] & [1,1,-4] & [1,0,-6] & [1,1,-7] & [1,1,-8] \\ \hline\hline
\mathcal{C}(f_{6,-3}|T_{\underline{\lambda}},A) & 5952 & 44112 & 1128096 & 1186056 & 2349504 & 4070304\\ \hline \end{array} \end{align*} Similarly, for $f_{6,-7} = f_{6,[1,1,2]}$, we have \[
f_{6,-7}|T_{\underline{\lambda}} = f_{6,-7}|T_{2}+24f_{6,-7} = f_{6,-28} +56 f_{6,-7} \] and \begin{align*} \renewcommand{\arraystretch}{1.2}
\begin{array}{|c||c|c|c|c|c|c|} \hline A & [1,1,-1] & [1,0,-3] & [1,1,-3] & [1,1,-4]& [1,1,-5] & [1,0,-6] \\ \hline\hline
\mathcal{C}(f_{6,-7}|T_{\underline{\lambda}},A) & 228704 & 2728656 & 7282240 & 17047968 & 15937488 & 26668656 \\ \hline \end{array} \end{align*} \end{example}
Finally, we consider cycle integrals of linear combinations of the forms $f_{k,D}$ and their twisted analogs $f_{k,\Delta,\delta}$, which we define now. For simplicity, we now assume that $N$ is odd and square-free. Let $k \geq 1$, let $\Delta$ be a discriminant with $(-1)^{k}\Delta > 0$, and let $\delta$ be a fundamental discriminant with $(-1)^{k}\delta < 0$, such that $\delta$ is a square modulo $4N$. Let $\chi_{\delta}$ be the generalized genus character on $\mathcal{Q}_{\Delta \delta}$ as defined in \cite{grosskohnenzagier}. For $k \geq 1$ we define the twisted function \[ f_{k,\Delta,\delta}(z) := \sum_{P \in \mathcal{Q}_{\Delta \delta}/\Gamma}\chi_{\delta}(P)f_{k,P}(z). \] Then $f_{k,\Delta,\delta}$ is a meromorphic modular form of weight $2k$ for $\Gamma$. Suppose that \[ F(\tau) = \sum_{m \gg -\infty}c_{F}(m)q^m \] is a weakly holomorphic modular form of weight $\frac{3}{2}-k$ for $\Gamma_{0}(4N)$ satisfying the Kohnen plus space condition, such that the Fourier coefficients $c_{F}(m)$ are rational for all $m < 0$. We will show in Proposition~\ref{RatCoef} that the Fourier coefficients of the meromorphic modular form \begin{align}\label{eq twisted sum}
\sum_{(-1)^{k}\Delta > 0}c_{F}(-|\Delta|)f_{k,\Delta,\delta}(z) \end{align} are algebraic multiples of $\pi^{k-1}$. Based on extensive numerical experiments, we arrived at the following conjecture.
\begin{conjecture}\label{RatCycint} The function in \eqref{eq twisted sum} has rational cycle integrals if it has no poles on the cycle. \end{conjecture}
It seems that the methods used to prove Theorem~\ref{theorem traces rationality} and Theorem~\ref{theorem traces rationality 2} are not suitable to prove the conjecture. In particular, numerical computations suggest that the cycle integrals of the function in \eqref{eq twisted sum} cannot be expressed in a simple way in terms of the functions $\mathcal{P}_{k,A}$. However, we are able to prove the conjecture in the case $k = 1$, using different methods.
\begin{theorem}\label{weight 2} Conjecture~\ref{RatCycint} is true for $k = 1$. \end{theorem}
The proof relies on the fact that the function $\pi i f_{1,\Delta,\delta}(z)dz$ is the canonical differential of the third kind for its residue divisor on the compactified modular curve $X_{0}(N)$. Together with a rationality criterion of Scholl \cite{scholl} for such differentials we obtain Theorem~\ref{weight 2}. We refer to Section~\ref{section proof weight 2} for the proof. We remark that, unfortunately, the proof does not yield finite rational formulas for the cycle integrals of the linear combination \eqref{eq twisted sum}.
\begin{example} We give some numerical examples of Conjecture~\ref{RatCycint}. Let $N = 1$. If $k$ is odd, we can pick $\delta=1$, so that there is no twist. The first odd $k$ for which there are nontrivial weight $2k$ cusp forms is $k=9$ with $S_{18} = \mathbb{C}\Delta E_6$. The space $S_{18}$ is isomorphic to the Kohnen plus space of weight $9+\frac{1}{2}$ under the Shimura correspondence, and the latter space is spanned by the cusp form \[ q^3 - 2q^4 - 16q^7 + 36q^8 + O(q^{11}). \] This implies that there is a weakly holomorphic modular form with principal part $q^{-4} + 2q^{-3}+O(1)$ in the Kohnen plus space of weight $\frac32-9$. In this case, Conjecture~\ref{RatCycint} predicts that the linear combination \[ g := f_{9,-4}+ 2f_{9,-3} \] has rational cycle integrals. One can easily see that, since $k$ is odd, we have $\mathcal{C}(g,A)=0$ whenever the form $A$ is $\Gamma(1)$-equivalent to $-A$. But for quadratic forms that are not equivalent to their negatives, we obtain numerically: \begin{align*} \renewcommand{\arraystretch}{1.2}
\begin{array}{|c||c|c|c|c|c|c|} \hline A & [1,1,-5] & [1,0,-6] & [1,1,-8] & [1,0,-11] &[1,0,-14] & [1,1,-14] \\ \hline\hline \mathcal{C}(g,A) & 3343284 & 235476 & 4350060 & 116285048 & 255683332 & 254947680 \\ \hline \end{array} \end{align*}
If $k$ is even, we have to introduce a twist, since $\delta <0$. Here we consider $k=6$ and $\delta =-3$. The weight $6+\frac12$ cusp form corresponding to $\Delta \in S_{12}$ under the Shimura correspondence is given by \[ q - 56q^4 +120q^5 -240q^8 +9q^9 + O(q^{12}), \] so for example the functions \[ g_{1}:=f_{6,4,-3} + 56 f_{6,1,-3} \qquad \text{and} \qquad g_{2}:=f_{6,5,-3} -120 f_{6,1,-3} \]
should have rational cycle integrals. Indeed, it is easy to check that the function $g_{1}$ coincides with $f_{6,-3}|T_{\underline{\lambda}}$ from Example \ref{Heckexp}, so it does have rational cycle integrals by Theorem~\ref{theorem traces rationality 2}. In contrast, $g_{2}$ cannot be obtained by acting with Hecke operators, since $-15$ is squarefree. However, for this function we obtain numerically: \begin{align*}
\renewcommand{\arraystretch}{1.5}
\begin{array}{|c||c|c|c|c|c|c|c|}
\hline
A & [1,1,-1] & [1,0,-2] & [1,1,-3]& [1,1,-4] & [1,1,-5] & [1,1,-7] & [1,1,-8] \\
\hline\hline
\mathcal{C}(g_{2},A) & -51012 & -126816 & 57876 & -2108352 & 134946 & 3813312 & -7458750 \\
\hline
\end{array} \end{align*} \end{example}
The work is organized as follows. In Section \ref{section preliminaries} we introduce the necessary functions and notation. Then we relate the cycle integrals $\mathcal{C}(f_{k,P},A)$ to locally harmonic Maass forms in Section \ref{locally}. The proofs of Theorems~\ref{theorem traces rationality}, \ref{theorem traces rationality 2}, and \ref{weight 2} are given in the remaining sections.
\section*{Acknowledgments} We thank Jan Bruinier for insightful discussions on the proof of Theorem~\ref{weight 2}. Furthermore, we thank Kathrin Bringmann for helpful comments on an earlier draft of this paper.
\section{Preliminaries}\label{section preliminaries}
\subsection{Weight $2$ Eisenstein series}
For $z = x+iy \in \mathbb{H}$ we define the quasimodular weight $2$ Eisenstein series for $\Gamma=\Gamma_0(N)$ associated to the cusp $i\infty$ by the conditionally convergent series \[
E_{2,\Gamma}(z) :=1 + \sum_{\substack{M = \left(\begin{smallmatrix}*&*\\c&d\end{smallmatrix}\right)\in \Gamma_\infty\backslash\Gamma \\ c\geq 1}}1|_{2}M, \]
where $\Gamma_{\infty} := \{\pm\left(\begin{smallmatrix}1 & n \\ 0 & 1 \end{smallmatrix} \right): n\in \mathbb{Z}\}$ and $\left(f|_{k}\left(\begin{smallmatrix}a & b \\ c & d \end{smallmatrix} \right)\right)(z) := (cz +d)^{-k}f\big(\frac{az + b}{cz + d}\big)$ denotes the usual weight $k$ slash operator. The function $E_{2,\Gamma}$ has a non-holomorphic modular completion \[ E_{2,\Gamma}^*(z) := -\frac{3}{\pi [\Gamma(1):\Gamma]y} + E_{2,\Gamma}(z) \] that has constant term $1$ at $i\infty$ and $0$ at all other cusps. The following lemma expresses $E_{2,\Gamma}^*$ in terms of the Eisenstein series for the full modular group and follows from eq.~(9) on p.~546 of \cite{grosskohnenzagier}.
\begin{lemma}\label{E2N} We have \begin{align*}
E_{2,\Gamma}^*(z) &= \prod_{p|N }\left(1-p^{-2}\right)^{-1}\sum_{d|N}\frac{\mu(d)}{d^2}E_{2, \Gamma(1)}^*\left(\frac{N}{d}z\right) \\
&=-\frac{3}{\pi [\Gamma(1):\Gamma]y} + 1 - 24\prod_{p|N}\left(1-p^{-2}\right)^{-1}\sum_{d|N}\frac{\mu(d)}{d^2}\sum_{n\geq 1}\sigma\left(\frac{dn}{N}\right)q^n, \end{align*} where $\sigma$ denotes the divisor sum function and we set $\sigma(x):=0$ for $x\notin\mathbb{Z}$. \end{lemma}
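The $q$-expansion in Lemma~\ref{E2N} is easy to tabulate. The following Python sketch (illustrative only; exact rational arithmetic via `Fraction`) computes the coefficient of $q^{n}$ in the holomorphic part, $-24\prod_{p\mid N}(1-p^{-2})^{-1}\sum_{d\mid N}\frac{\mu(d)}{d^{2}}\sigma(dn/N)$:

```python
from fractions import Fraction

def sigma(x):
    """Divisor sum sigma(x), with sigma(x) := 0 for non-integral x."""
    if x != int(x) or x < 1:
        return 0
    x = int(x)
    return sum(r for r in range(1, x + 1) if x % r == 0)

def mu(n):
    """Moebius function."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

def e2_coefficient(N, n):
    """Coefficient of q^n in the holomorphic part of E^*_{2,Gamma_0(N)}."""
    primes = {p for p in range(2, N + 1)
              if N % p == 0 and all(p % r for r in range(2, p))}
    pref = Fraction(1)
    for p in primes:
        pref /= 1 - Fraction(1, p * p)
    return -24 * pref * sum(Fraction(mu(d), d * d) * sigma(Fraction(d * n, N))
                            for d in range(1, N + 1) if N % d == 0)

# N = 1 recovers the classical E_2 = 1 - 24 sum_{n >= 1} sigma(n) q^n
assert [e2_coefficient(1, n) for n in (1, 2, 3)] == [-24, -72, -96]
```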
\subsection{Petersson's Poincar\'e series}\label{section petersson poincare}
For $z = x+iy,\tau =u+iv \in \mathbb{H}$ and $k \in \mathbb{Z}$ with $k \geq 2$ we define Petersson's Poincar\'e series \begin{align*}
H_{k}(z,\tau) &:=\sum_{M \in \Gamma}\left(\frac{(z-\tau)(z-\overline{\tau})}{v}\right)^{-k}\Biggl|_{2k, z} M =\sum_{M \in \Gamma}\left(\frac{(z-\tau)(z-\overline{\tau})}{v}\right)^{-k}\Biggl|_{0, \tau} M, \end{align*} which has weight $2k$ in $z$ for $\Gamma$ and weight $0$ in $\tau$ for $\Gamma$. Furthermore, it is meromorphic as a function of $z$, and an eigenfunction of the invariant Laplace operator $\Delta_{0}$ with eigenvalue $k(1-k)$ as a function of $\tau$ for $\tau$ not lying in the $\Gamma$-orbit of $z$. The series does not converge for $k=1$. However, we can apply Hecke's trick as in \cite{bringmannkaneweight0} and define for $\mathrm{Re}(s)>0$ \[
H_{1,s}(z,\tau) :=\sum_{M \in \Gamma}\left(\left(\frac{(z-\tau)(z-\overline{\tau})}{v}\right)^{-1}\left(\frac{|z-\tau||z-\overline{\tau}|}{vy}\right)^{-s}\right)\Biggl|_{0, \tau} M. \] One can show that $H_{1,s}(z,\tau)$ has an analytic continuation $H_{1}^*(z,\tau)$ to $s=0$. It is not meromorphic in $z$ anymore, but the function \begin{equation}\label{H1E2} H_{1}(z,\tau) := H_{1}^*(z,\tau)-2\pi E_{2,\Gamma}^*(z) \end{equation} is a meromorphic modular form of weight $2$ in $z$ and a harmonic Maass form of weight $0$ in $\tau$ for $\Gamma$. The function $z\mapsto H_{1}(z,\tau)$ has a simple pole when $z$ is $\Gamma$-conjugate to $\tau$.
Similarly, we define for $k\geq 2$ and $\ell \in\mathbb{Z}$ the function \begin{align*} H_{k,\ell}(z,\tau)
&:= \sum_{M \in \Gamma}v^{k+\ell}\left((z-\tau)^{\ell-k}(z-\overline{\tau})^{-\ell-k}\right)\Bigl|_{2k, z} M \\
&= \sum_{M \in \Gamma}v^{k+\ell}\left((z-\tau)^{\ell-k}(z-\overline{\tau})^{-\ell-k}\right)\Bigl|_{-2\ell, \tau} M. \end{align*} It has weight $2k$ in $z$ and weight $-2\ell$ in $\tau$ for $\Gamma$, and it also behaves nicely under the raising and lowering operators \begin{align*} R_{\kappa} := 2i\frac{\partial}{\partial \tau} + \kappa v^{-1}, \qquad L_{\kappa} := -2i v^{2}\frac{\partial}{\partial \overline{\tau}}, \end{align*} which raise and lower the weight of an automorphic form of weight $\kappa$ by $2$, respectively. The following lemma can be checked by a direct computation.
\begin{lemma}\label{raislow}
For $k \geq 2$ and $\ell \in \mathbb{Z}$ we have
\begin{align*}
R_{-2\ell,\tau}\left(H_{k,\ell}(z,\tau)\right) &= (k-\ell)H_{k,\ell-1}(z,\tau), \\
L_{-2\ell,\tau}\left(H_{k,\ell}(z,\tau)\right) &= (k+\ell)H_{k,\ell+1}(z,\tau).
\end{align*} \end{lemma}
We are particularly interested in the function $H_{k,k-1}(z,\tau)$ (with $H_{1,0}(z,\tau):=H_{1}(z,\tau)$), which has weight $2k$ in $z$ and $2-2k$ in $\tau$. It is meromorphic in $z$ and harmonic in $\tau$ for $\tau$ not lying in the $\Gamma$-orbit of $z$, and as a function of $\tau$ it is bounded at the cusps (and vanishes at $i\infty$ if $k = 1$, compare Lemma 5.4 in \cite{bringmannkaneweight0}). Furthermore, by Lemma~\ref{raislow} it is related to $H_{k}(z,\tau)$ by \begin{align}\label{eq Hk and Hkk-1} R_{2-2k,\tau}^{k-1}\left(H_{k,k-1}(z,\tau)\right) = (k-1)!H_{k}(z,\tau), \end{align} where $R_{2-2k}^{k-1} := R_{-2} \circ \cdots \circ R_{2-2k}$ is an iterated version of the raising operator.
\subsection{Modular forms associated to quadratic forms}\label{section modular forms quadratic forms}
In the introduction we defined the function $f_{k,Q_{0}}$ associated to an integral binary quadratic form $Q_{0}$ of discriminant $D \neq 0$ for $k\geq 2$. We briefly explain the definition for $k = 1$. For $s \in \mathbb{C}$ with $\mathrm{Re}(s) > 1$ we consider the series
\begin{align*}
f_{1,Q_{0},s}(z) := \frac{|D|^{\frac{s+1}{2}}}{2^s\pi}y^{s}\sum_{Q \in [Q_{0}]}Q(z,1)^{-1}|Q(z,1)|^{-s}.
\end{align*}
It converges absolutely and has a holomorphic continuation to $s = 0$. If $Q_{0} = A$ is indefinite and $D$ not a square, then
\[
f_{1,A}(z) := f_{1,A,0}(z)
\]
is a cusp form of weight $2$ for $\Gamma$ (see \cite{grosskohnenzagier}, p. 517).
If $Q_{0} = P$ is positive definite, then it follows from the following lemma and \eqref{H1E2} that
\begin{equation}\label{fPmero}
f_{1,P}(z) := f_{1,P,0}(z) -\frac{2}{|\overline{\Gamma}_P|} E_{2, \Gamma}^{*}(z)
\end{equation}
is a meromorphic modular form of weight $2$ for $\Gamma$.
\begin{lemma}\label{lemma fkP and Hk}
For $k\geq 2$ and $z$ not lying in the $\Gamma$-orbit of the CM point $\tau_{P}$ we have
\[
f_{k,P}(z) = \frac{2^{k-1}|d|^{\frac{k-1}{2}}}{|\overline{\Gamma}_{P}|\pi}H_{k}(z,\tau_{P})
\]
and
$$
f_{1,P,0}(z) = \frac{1}{|\overline{\Gamma}_{P}|\pi}H_{1}^*(z,\tau_{P}).
$$ \end{lemma}
\begin{proof}
We have the formula
\[
P(z,1) = \sqrt{|d|}\frac{(z-\tau_{P})(z-\overline{\tau}_{P})}{2\im(\tau_{P})}.
\]
Hence, for $k \geq 2$ we get
\begin{align*}
H_{k}(z,\tau_{P}) &= \sum_{M \in \Gamma}j(M,z)^{-2k}\left(\frac{(Mz-\tau_{P})(Mz-\overline{\tau}_{P})}{\im(\tau_{P})}\right)^{-k} \\
&= 2^{-k}|d|^{\frac{k}{2}}\sum_{M \in \Gamma}j(M,z)^{-2k}P(Mz,1)^{-k}
= 2^{1-k}|d|^{\frac{1-k}{2}}|\overline{\Gamma}_{P}|\pi f_{k,P}(z).
\end{align*}
For $k=1$ we can show in the same way that $H_{1,s}(z,\tau_P)$ is a multiple of $f_{1,P,s}(z)$ and then use analytic continuation. \end{proof}
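The factorization of $P(z,1)$ used at the start of the proof can also be checked numerically. A small Python sketch for $P = [1,1,1]$, $d = -3$, $\tau_{P} = \frac{-1+i\sqrt{3}}{2}$ (illustrative only):

```python
import math

a, b, c = 1, 1, 1                           # P = [1,1,1], discriminant d = -3
d = b * b - 4 * a * c
tau = (-b + 1j * math.sqrt(-d)) / (2 * a)   # CM point: P(tau, 1) = 0

for z in (0.3 + 0.7j, -1.2 + 2.5j, 0.01 + 0.1j):
    lhs = a * z * z + b * z + c                                   # P(z, 1)
    rhs = math.sqrt(-d) * (z - tau) * (z - tau.conjugate()) / (2 * tau.imag)
    assert abs(lhs - rhs) < 1e-12
```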
We will also need the Fourier expansion of $f_{k,P}$. The proof of the following formula is analogous to the proof of Proposition 2.2 from \cite{bengoecheapaper} (correcting a sign error), but additionally uses \eqref{fPmero} and Lemma \ref{E2N} in the case $k=1$.
\begin{proposition}\label{prop fkP Fourier expansion}
For $k \geq 2$ and $z \in \mathbb{H}$ with $y > \sqrt{|d|}/2$ we have the Fourier expansion
\[
f_{k,P}(z) = \sum_{n \geq 1}c_{f_{k,P}}(n)e^{2\pi i n z},
\]
where
\[
c_{f_{k,P}}(n) = \frac{(-1)^{k}2^{k+\frac{1}{2}}\pi^{k}}{(k-1)!}|d|^{\frac{k}{2}-\frac{1}{4}}n^{k-\frac{1}{2}}\sum_{\substack{a \geq 1 \\ N \mid a}}a^{-\frac{1}{2}}S_{a,P}(n)I_{k-\frac{1}{2}}\left( \frac{\pi n\sqrt{|d|}}{a}\right),
\]
with the usual $I$-Bessel function and the exponential sum
\[
S_{a,P}(n) := \sum_{\substack{b \!\!\!\!\pmod{2a} \\ b^{2} \equiv d \!\!\!\!\pmod{4a} \\ \left[a,b,\frac{b^{2}-d}{4a}\right] \in [P]}} e\left(\frac{n b}{2a}\right).
\]
For $k = 1$ the formula is analogous, but we have to add
\[
\frac{12}{|\overline{\Gamma}_P|}\prod_{p|N}\left(1-p^{-2}\right)^{-1}\sum_{d|N}\frac{\mu(d)}{d^2}\sigma\left(\frac{dn}{N}\right)
\]
to $c_{f_{1,P}}(n)$, and we get a constant term $c_{f_{1,P}}(0) = -\frac{2}{|\overline{\Gamma}_P|}$. \end{proposition}
\subsection{Zeta functions associated to indefinite quadratic forms} \label{section zeta functions}
Let $A \in \mathcal{Q}_{D}$ be an indefinite quadratic form of non-square discriminant $D > 0$. We define the associated zeta function \begin{equation}\label{zetadef}
\zeta_{\Gamma,A}(s) := \sum_{\substack{\left(\begin{smallmatrix}m & m_{0} \\ n & n_{0} \end{smallmatrix}\right) \in \Gamma_{A}\backslash \Gamma/\Gamma_{\infty} \\ A(m,n) > 0}}A(m,n)^{-s} =\sum_{\substack{(m,n) \in \mathbb{Z}^{2}/\Gamma_{A}^{t} \\ N| n, (m,n) = 1\\ A(m,n) > 0}}A(m,n)^{-s}. \end{equation} The series converges absolutely for $\mathrm{Re}(s) > 1$ and it only depends on the $\Gamma$-equivalence class of $A$. For $N = 1$ and a fundamental discriminant $D > 0$ the function $\zeta(2s)\zeta_{\Gamma(1),A}(s)$ is the usual zeta function of the ideal class in $\mathbb{Q}(\sqrt{D})$ associated to $A$. The above zeta function can also be written as a Dirichlet series
\[
\zeta_{\Gamma,A}(s) = \sum_{\substack{a > 0 \\ N \mid a}}\frac{n_{A}(a)}{a^{s}}, \qquad n_{A}(a) := \#\left\{b \!\!\!\! \pmod{2a}, b^{2}\equiv D \!\!\!\! \pmod{4a}, \left[a,b,\frac{b^{2}-D}{4a}\right] \in [A]\right\},
\]
compare \cite{zagierzetafunctions}, Proposition 3 (i).
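The counts $n_{A}(a)$ are easy to compute. For $N = 1$ and a discriminant of narrow class number one, such as $D = 5$, the class condition in the definition of $n_{A}(a)$ is vacuous, and the resulting Dirichlet series satisfies $\zeta(2s)\sum_{a}n_{A}(a)a^{-s} = \zeta(s)L(s,\chi_{5})$, i.e.\ coefficientwise $\sum_{e^{2}\mid a}n_{A}(a/e^{2}) = \sum_{r \mid a}\chi_{5}(r)$. A Python sketch (illustrative only) verifying this identity:

```python
def n_count(a, D=5):
    """#{b mod 2a : b^2 = D (mod 4a)}; equals n_A(a) when the class number is one."""
    return sum(1 for b in range(2 * a) if (b * b - D) % (4 * a) == 0)

def chi5(r):
    """Kronecker character mod 5 (= Legendre symbol, since 5 = 1 mod 4)."""
    return [0, 1, -1, -1, 1][r % 5]

# zeta(2s) * sum_a n_A(a) a^{-s} = zeta(s) L(s, chi_5), coefficient by coefficient:
for a in range(1, 61):
    lhs = sum(n_count(a // (e * e)) for e in range(1, a + 1) if a % (e * e) == 0)
    rhs = sum(chi5(r) for r in range(1, a + 1) if a % r == 0)
    assert lhs == rhs
```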
We first express $\zeta_{\Gamma,A}(s)$ in terms of zeta functions $\zeta_{\Gamma(1),A_{d}}(s)$ for level $1$ and suitable quadratic forms $A_{d}$.
\begin{lemma}\label{lemma zeta level lowering}
For $\mathrm{Re}(s) > 1$ we have
\[
\zeta_{\Gamma,A}(s) = N^{-s}\prod_{p \mid N}(1-p^{-2s})^{-1}\sum_{d \mid N}\frac{\mu(d)}{d^{s}}\left[\Gamma(1)_{A_{N/d}}:\Gamma_{0}(d)_{A_{N/d}}\right]\zeta_{\Gamma(1),A_{N/d}}(s),
\]
where $A_{d} := [a/d,b,dc]$ for $A = [a,b,c]$. \end{lemma}
\begin{proof}
Let $\zeta_{N}(s) := \sum_{\substack{\alpha \geq 1 \\ (\alpha,N) = 1}}\alpha^{-s}$. We have
\begin{align*}
\zeta_{N}(2s)\zeta_{\Gamma,A}(s) &= \sum_{\substack{\alpha = 1 \\ (\alpha,N) = 1}}^{\infty}\sum_{\substack{(m,n) \in \mathbb{Z}^{2}/\Gamma_{0}(N)_{A}^{t} \\ N\mid n, (m,n) = 1 \\ A(m,n) > 0}}A(\alpha m, \alpha n)^{-s} = \sum_{\substack{(m,n) \in \mathbb{Z}^{2}/\Gamma_{0}(N)_{A}^{t} \\ N\mid n, (m,N) =1 \\ A(m,n) > 0}}A(m, n)^{-s} \\
&= \sum_{d \mid N}\mu(d)\sum_{\substack{(m,n) \in \mathbb{Z}^{2}/\Gamma_{0}(N)_{A}^{t} \\ N\mid n, d \mid m \\ A(m,n) > 0}}A(m, n)^{-s}= \sum_{d \mid N}\frac{\mu(d)}{d^{2s}}\sum_{\substack{(m,n) \in \mathbb{Z}^{2}/\Gamma_{0}(N)_{A}^{t} \\ \frac{N}{d}\mid n \\ A(m,n) > 0}}A(m, n)^{-s}.
\end{align*}
If $(a,b)$ runs through $\mathbb{Z}^{2}/\Gamma_{0}(d)_{A_{N/d}}^{t}$ then $(m,n) = (a,\frac{N}{d}b)$ runs through $\mathbb{Z}^{2}/\Gamma_{0}(N)_{A}^{t}$ with $\frac{N}{d}\mid n$. Hence we obtain
\begin{align*}
\sum_{\substack{(m,n) \in \mathbb{Z}^{2}/\Gamma_{0}(N)_{A}^{t} \\ \frac{N}{d}\mid n, \, A(m,n) > 0}}A(m, n)^{-s}
&= \sum_{\substack{(m,n) \in \mathbb{Z}^{2}/\Gamma_{0}(d)_{A_{N/d}}^{t} \\ A(m,\frac{N}{d}n) > 0}}A(m, \tfrac{N}{d}n)^{-s} \\
&= \left(\frac{N}{d}\right)^{-s}\sum_{\substack{(m,n) \in \mathbb{Z}^{2}/\Gamma_{0}(d)_{A_{N/d}}^{t} \\ A_{N/d}(m,n) > 0}}A_{N/d}(m,n)^{-s} \\
&= \left[\Gamma(1)_{A_{N/d}}:\Gamma_{0}(d)_{A_{N/d}}\right]N^{-s}d^{s}\zeta(2s)\zeta_{\Gamma(1),A_{N/d}}(s).
\end{align*}
Using $\frac{\zeta(2s)}{\zeta_{N}(2s)} =\prod_{p \mid N}(1-p^{-2s})^{-1}$ we obtain the stated formula. \end{proof}
The following result concerns the rationality of the special values of $\zeta_{\Gamma,A}(s)$ at positive integers.
\begin{proposition}\label{proposition zeta level lowering}
The expression
\[
D^{k-\frac{1}{2}}\left(\zeta_{\Gamma,A}(k)+(-1)^{k}\zeta_{\Gamma,-A}(k) \right)
\]
is rational for any $k \geq 2$, any non-square discriminant $D > 0$, and $N \in \mathbb{N}$. \end{proposition}
\begin{proof}
We set $\widehat{\zeta}_{A}(s) := \zeta(2s)\zeta_{\Gamma(1),A}(s)$. It is well known that for $k \geq 2$ and any non-square discriminant $D > 0$ we have the functional equation \[ D^{k-\frac{1}{2}}\left(\widehat{\zeta}_{A}(k)+(-1)^{k}\widehat{\zeta}_{-A}(k)\right) = \frac{2^{2k-1}\pi^{2k}}{(k-1)!^{2}}\widehat{\zeta}_{A}(1-k), \] compare \cite{kohnenzagierrationalperiods}, p. 230. Furthermore, $\widehat{\zeta}_{A}(1-k)$ is rational by Theorem 8 in \cite{kohnenzagierrationalperiods}. Dividing by $\zeta(2k) = (-1)^{k+1}\frac{B_{2k}(2\pi)^{2k}}{2(2k)!}$ on both sides, we see that \[ D^{k-\frac{1}{2}}\left(\zeta_{\Gamma(1),A}(k)+(-1)^{k}\zeta_{\Gamma(1),-A}(k)\right) \] is rational, too. Using Lemma~\ref{lemma zeta level lowering} we obtain the result for all $N \in \mathbb{N}$. \end{proof}
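The evaluation $\zeta(2k) = (-1)^{k+1}\frac{B_{2k}(2\pi)^{2k}}{2(2k)!}$ used in the proof can be verified numerically. A Python sketch (Bernoulli numbers via the standard recurrence $\sum_{j \leq m}\binom{m+1}{j}B_{j} = [m=0]$; illustrative only):

```python
from fractions import Fraction
from math import comb, factorial, pi

def bernoulli(n):
    """Bernoulli numbers B_0, ..., B_n via the recurrence sum_j C(m+1,j) B_j = 0."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        B.append(-sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1))
    return B

B = bernoulli(12)
assert B[2] == Fraction(1, 6) and B[12] == Fraction(-691, 2730)

for k in (2, 3, 4, 5, 6):
    closed = ((-1) ** (k + 1) * float(B[2 * k]) * (2 * pi) ** (2 * k)
              / (2 * factorial(2 * k)))
    series = sum(m ** (-2 * k) for m in range(1, 2000))
    assert abs(closed - series) < 1e-9        # compare with a truncated zeta sum
```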
\begin{remark}
It follows from the explicit formula in Theorem 8 of \cite{kohnenzagierrationalperiods} that the denominator of $\widehat{\zeta}_{A}(1-k)$ is bounded by a constant only depending on $k$, but not on $A$. This formula can also be used to evaluate $D^{k-\frac{1}{2}}\left(\zeta_{\Gamma,A}(k)+(-1)^{k}\zeta_{\Gamma,-A}(k) \right)$ explicitly as a rational number.
\end{remark}
Finally, we relate the expression from Proposition~\ref{proposition zeta level lowering} to the cycle integrals of the Eisenstein series $E_{2k,\Gamma}$ of weight $2k$ for the cusp $i\infty$ of $\Gamma$, which is normalized such that its Fourier expansion at $i\infty$ has constant term $1$. The following result can be proven by a similar computation as on pp. 240--241 of \cite{kohnenzagierrationalperiods}.
\begin{proposition}
Let $k\geq 2$ and let $A \in \mathcal{Q}_{D}$ be an indefinite quadratic form of non-square discriminant $D > 0$. Then
\[
\mathcal{C}(E_{2k,\Gamma},A) = (-1)^{k}\frac{(k-1)!^{2}}{(2k-1)!}D^{k-\frac{1}{2}}\left(\zeta_{\Gamma,A}(k)+(-1)^{k}\zeta_{\Gamma,-A}(k) \right).
\] \end{proposition}
Although we will not use this formula in the proofs of our main results, we decided to include it since it gives an interesting interpretation of the expression from Proposition~\ref{proposition zeta level lowering}.
\subsection{Cycle integrals of meromorphic modular forms}\label{section cycle integrals}
Let $A = [a,b,c] \in \mathcal{Q}_{D}$ be an indefinite quadratic form of non-square discriminant $D > 0$. Then the set \[
S_{A} :=\{z \in \mathbb{H}: a|z|^{2}+b\mathrm{Re}(z)+c = 0\} \] is a semi-circle centered on the real axis. Let $f: \mathbb{H} \to \mathbb{C}$ be a meromorphic function which transforms like a modular form of weight $2k$ for $\Gamma$. If the poles of $f$ do not meet the semi-circle $S_{A}$, then we define the cycle integral of $f$ along the closed geodesic $c_{A} = \Gamma_{A}\backslash S_{A}$ by \[ \mathcal{C}(f,A) := \int_{c_{A}}f(z)A(z,1)^{k-1}dz, \] where $\Gamma_{A}$ denotes the stabilizer of $A$ in $\Gamma$. It only depends on the $\Gamma$-equivalence class of $A$. If some poles of $f$ do lie on $S_{A}$, we modify $S_{A}$ by circumventing these poles and all of their $\Gamma$-translates on small arcs of radius $\varepsilon > 0$ above and below the poles. Thereby we obtain two paths $S_{A,\varepsilon}^{+}$ and $S_{A,\varepsilon}^{-}$ and corresponding geodesics $c_{A,\varepsilon}^{+}$ and $c_{A,\varepsilon}^{-}$ which avoid the poles of $f$. We define the regularized cycle integral of $f$ along $c_{A}$ by the Cauchy principal value \begin{align}\label{eq Cauchy principal value} \mathcal{C}(f,A) := \lim_{\varepsilon \to 0}\frac{1}{2}\left(\int_{c_{A,\varepsilon}^{+}}f(z)A(z,1)^{k-1}dz+\int_{c_{A,\varepsilon}^{-}}f(z)A(z,1)^{k-1}dz\right). \end{align} Note that, since $f$ is meromorphic, the integrals on the right-hand side are actually independent of $\varepsilon$ for $\varepsilon > 0$ small enough, so the limit exists.
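Completing the square shows that $S_{A}$ has center $-\frac{b}{2a}$ on the real axis and radius $\frac{\sqrt{D}}{2|a|}$. As a quick numerical illustration (not needed for the arguments below), the Python sketch here, with ad hoc helper names, verifies that points parametrized this way satisfy the defining equation of $S_{A}$.

```python
from math import cos, isclose, sin, sqrt

def point_on_SA(A, theta):
    # point of S_A at angle theta in (0, pi), for A = [a, b, c]
    # with discriminant D = b^2 - 4ac > 0
    a, b, c = A
    D = b * b - 4 * a * c
    center, radius = -b / (2 * a), sqrt(D) / (2 * abs(a))
    return complex(center + radius * cos(theta), radius * sin(theta))

def defining_value(A, z):
    # the quantity a|z|^2 + b Re(z) + c whose vanishing defines S_A
    a, b, c = A
    return a * abs(z) ** 2 + b * z.real + c

# sample forms of non-square discriminant (D = 8, 13, 8)
for A in [(1, 0, -2), (3, 1, -1), (-1, 2, 1)]:
    for j in range(1, 6):
        assert isclose(defining_value(A, point_on_SA(A, 0.5 * j)), 0.0, abs_tol=1e-12)
```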
\subsection{Maass Poincar\'e series}
Throughout this section we let $N$ be odd and square-free.
One can construct harmonic Maass forms of half-integral weight as special values of Maass Poincar\'e series, see \cite{millerpixton}, for example. In this way, one obtains for every integer $k \geq 1$ and $n < 0$ with $(-1)^{k}n \equiv 0,3 \pmod 4$ a harmonic Maass form $\mathcal{P}_{\frac{3}{2}-k,n}(\tau)$ of weight $\frac{3}{2}-k$ for $\Gamma_{0}(4N)$ which satisfies the Kohnen plus space condition, whose Fourier expansion at $i\infty$ starts with $q^{-|n|}+O(1)$, and which is bounded at the other cusps.
The holomorphic part of $\mathcal{P}_{\frac{3}{2}-k,n}$ has a Fourier expansion of the shape \[
\mathcal{P}_{\frac{3}{2}-k,n}^+ (\tau) = q^{-|n|} + \sum_{\substack{m \geq 0 \\ (-1)^{k}m \equiv 0,3 \!\!\!\!\pmod{4}}} c^+ _{\mathcal{P}_{\frac{3}{2}-k,n}}(m)q^m, \] whose coefficients of positive index are given as follows.
\begin{theorem}[Theorem 2.1 in \cite{millerpixton}]\label{theorem Poincare Fourier expansion} Let $n<0$ and $m>0$ with $(-1)^{k}n,(-1)^{k}m\equiv 0,3\pmod{4}$. Then \begin{align*} c_{\mathcal{P}_{\frac{3}{2}-k,n}}^{+}(m) &=
-(-1)^{\left\lfloor\frac{k}{2}\right\rfloor}\pi\sqrt{2} \left(\frac{m}{|n|}\right)^{\frac14 -\frac{k}{2}}\sum_{\substack{a >0 \\ N \mid a}}\frac{K^{+}((-1)^{k+1}n, (-1)^{k+1}m,a)}{a}I_{k-\frac12}\left(\frac{\pi\sqrt{m|n|}}{a}\right), \end{align*} with the half-integral weight Kloosterman sum \[ K^{+}(m,n,a) := \frac{1-i}{4}\left(1+\left(\frac{4}{a}\right)\right)\sum_{\nu\!\!\!\!\pmod{4a}^*}\left(\frac{4a}{\nu}\right)\left(\frac{-4}{\nu}\right)^{\frac12}e\left(\frac{m \nu+n\overline{\nu}}{4a}\right). \] \end{theorem}
Let $\Delta,\delta \in \mathbb{Z}$ be discriminants and assume that $\delta$ is fundamental. For $a,n \in \mathbb{Z}$ we consider the \emph{Sali\'e sum} \[ S_{a,\Delta, \delta}(n):=\sum_{\substack{b \!\!\!\!\pmod{2a} \\ b^2 \equiv \delta\Delta \!\!\!\!\pmod{4a}}}\chi_\delta\left(\left[a,b,\frac{b^2-\delta\Delta}{4a}\right]\right)e\left(\frac{nb}{2a}\right), \] where $\chi_{\delta}$ is the generalized genus character of $\mathcal{Q}_{\Delta\delta}$ as defined in \cite{grosskohnenzagier}. It is related to the half-integral weight Kloosterman sum by the following formula. \begin{proposition}[Proposition 3 in \cite{dit}]\label{prop salie sum} Let $\Delta,\delta \in \mathbb{Z}$ be discriminants and assume that $\delta$ is fundamental. Then for $a,n \in \mathbb{Z}$ we have the identity \begin{align*}
S_{a,\Delta, \delta}(n)&= \sum_{m|(n,a)}\left(\frac{\delta}{m}\right)\sqrt{\frac{m}{a}}
K^{+}\left(\Delta,\frac{n^2}{m^2}\delta,\frac{a}{m}\right). \end{align*} \end{proposition}
\section{Locally harmonic Maass forms}\label{locally}
The key to proving Theorems~\ref{theorem traces rationality} and \ref{theorem traces rationality 2} is relating the cycle integrals of $f_{k,P}$ to certain {\it locally harmonic Maass forms} introduced by Bringmann, Kane, and Kohnen in \cite{bringmannkanekohnen} and H\"ovel in \cite{hoevel}. Namely, for $k\geq 2$, $\tau = u+iv \in \mathbb{H}$, and an indefinite quadratic form $A \in \mathcal{Q}_{D}$ of non-square discriminant $D > 0$, these are defined by the series \begin{equation}\label{curlyfexp}
\mathcal{F}_{1-k, A}(\tau) : = \frac{(-1)^{k}D^{\frac{1}{2}-k}}{\binom{2k-2}{k-1}\pi}\sum_{Q \in [A]}\sgn(Q_{\tau})Q(\tau,1)^{k-1}\psi\left( \frac{Dv^{2}}{|Q(\tau,1)|^{2}}\right), \end{equation}
where $Q_{\tau} := \frac{1}{v}(a|\tau|^{2}+bu + c)$ and \[ \psi(v) :=\frac{1}{2}\beta\left(v;k-\frac{1}{2},\frac{1}{2}\right) = \frac{1}{2}\int_{0}^{v}t^{k-\frac{3}{2}}(1-t)^{-\frac{1}{2}}dt \] is a special value of the incomplete $\beta$-function. For $k=1$ one can define a weight $0$ analogue $\mathcal{F}_{0,A}$ of \eqref{curlyfexp} using the Hecke trick as in \cite{brikavia, ehlenguerzhoykanerolen}. By adding a suitable constant we can normalize $\mathcal{F}_{0,A}$ such that it vanishes at $i\infty$.
The function $\mathcal{F}_{1-k,A}$ transforms like a modular form of weight $2-2k$ for $\Gamma$, is harmonic on $\mathbb{H} \setminus \bigcup_{Q\in[A]}S_{Q}$ and bounded at the cusps, and has discontinuities along the semi-circles $S_{Q}$ for $Q \in [A]$. Its value at a point $\tau$ lying on $S_{Q}$ for $Q \in [A]$ is given by the average value \begin{align}\label{eq average value} \mathcal{F}_{1-k,A}(\tau) = \lim_{\varepsilon \to 0}\frac{1}{2}\big(\mathcal{F}_{1-k,A}(\tau+i\varepsilon)+\mathcal{F}_{1-k,A}(\tau-i\varepsilon)\big). \end{align}
Furthermore, outside the singularities $\mathcal{F}_{1-k,A}$ is related to the cusp form $f_{k,A} \in S_{2k}(\Gamma)$ by the differential equations \begin{align*} \xi_{2-2k}(\mathcal{F}_{1-k,A}) &= (-1)^{k}\frac{D^{\frac{1}{2}-k}}{\binom{2k-2}{k-1}}f_{k,A}, \\ \mathcal{D}^{2k-1}(\mathcal{F}_{1-k,A}) &= (-1)^{k+1}D^{\frac{1}{2}-k}\frac{(k-1)!^{2}}{(4\pi)^{2k-1}}f_{k,A}, \end{align*} where $\xi_{\kappa} := 2iv^{\kappa}\overline{\frac{\partial}{\partial \overline{\tau}}}$ and $\mathcal{D} := \frac{1}{2\pi i}\frac{\partial}{\partial \tau}$. Note that our normalization of $f_{k,A}$ differs from the one used in \cite{bringmannkanekohnen}, which explains the different constants in the above differential equations.
Recall that the non-holomorphic and holomorphic \emph{Eichler integrals} of a cusp form $f = \sum_{n\geq 1}c_{f}(n)q^{n} \in S_{2k}(\Gamma)$ are defined by \begin{align*} f^{*}(\tau) := (-2i)^{1-2k}\int_{-\overline{\tau}}^{i\infty}\overline{f(-\overline{z})}(z+\tau)^{2k-2}dz,\qquad\mathcal{E}_{f}(\tau) := \sum_{n\geq 1}\frac{c_{f}(n)}{n^{2k-1}}q^{n}. \end{align*} They satisfy \[ \xi_{2-2k}(f^{*}) = f, \qquad \mathcal{D}^{2k-1}(f^{*}) = 0, \qquad \xi_{2-2k}(\mathcal{E}_{f}) = 0, \qquad \mathcal{D}^{2k-1}(\mathcal{E}_{f}) = f. \] The following decomposition of $\mathcal{F}_{1-k,A}$ was derived by Bringmann, Kane and Kohnen for $N = 1$ and $k\geq 2$ in \cite{bringmannkanekohnen}, but the same methods work for all $N \geq 1$ and $k = 1$ (see also \cite{ehlenguerzhoykanerolen} for $k = 1$).
\begin{theorem}[Theorem 7.1 of \cite{bringmannkanekohnen}]\label{local} For $\tau$ not lying on any of the semi-circles $S_{Q}$ for $Q \in [A]$ we have \[ \mathcal{F}_{1-k,A}(\tau) = P_{1-k,A}(\tau) + (-1)^{k}\frac{D^{\frac{1}{2}-k}}{\binom{2k-2}{k-1}}f^{*}_{k,A}(\tau)+(-1)^{k+1}D^{\frac{1}{2}-k}\frac{(k-1)!^{2}}{(4\pi)^{2k-1}}\mathcal{E}_{f_{k,A}}(\tau), \] where $P_{1-k,A}(\tau)$ is locally a polynomial of degree at most $2k-2$. More precisely, it is a polynomial on each connected component of $\mathbb{H} \setminus \bigcup_{Q \in [A]}S_{Q}$, which is given by \[ P_{1-k,A}(\tau) := c_{k}(A) + (-1)^{k-1}2^{2-2k}D^{\frac{1}{2}-k}\sum_{\substack{Q = [a,b,c] \in [A] \\\tau \in \Int(S_{Q})}}\sgn(a)Q(\tau,1)^{k-1}, \] where $c_{1}(A) :=0 $ and \[ c_{k}(A):= -\frac{\zeta_{\Gamma,A}(k)+(-1)^{k}\zeta_{\Gamma,-A}(k)}{2^{2k-2}(2k-1)\binom{2k-2}{k-1}} \] for $k \geq 2$, and $\Int(S_{Q})$ denotes the bounded component of $\mathbb{H}\setminus S_{Q}$. \end{theorem}
The main goal of this section is to show that $\mathcal{F}_{1-k,A}(\tau)$ can be written as a cycle integral of Petersson's Poincar\'e series $H_{k,k-1}(z,\tau)$.
\begin{theorem}\label{theorem locally harmonic maass form identity}
We have \begin{align*} \mathcal{F}_{1-k,A}(\tau) = \frac{D^{\frac12-k}}{2\pi}\mathcal{C}\left( H_{k,k-1}(\cdot,\tau), A\right). \end{align*} If $\tau$ lies on a semi-circle $S_{Q}$ for $Q \in [A]$ the left-hand side has to be interpreted as the average value \eqref{eq average value}, and the cycle integral on the right-hand side is defined as the Cauchy principal value \eqref{eq Cauchy principal value}.
\end{theorem}
Before we come to the proof of the theorem we state an important corollary, which immediately follows from Theorem~\ref{theorem locally harmonic maass form identity} together with Lemma~\ref{lemma fkP and Hk} and the identity \eqref{eq Hk and Hkk-1}.
\begin{corollary}\label{corollary main identity} We have \[
\mathcal{C}\left(f_{k,P},A\right) = \frac{2^{k}|d|^{\frac{k-1}{2}}D^{k-\frac12}}{(k-1)! \,|\overline{\Gamma}_{P}|}R_{2-2k}^{k-1}\left(\mathcal{F}_{1-k,A}\right)(\tau_{P}), \] where $R_{2-2k}^{k-1}$ denotes the iterated raising operator defined in Section~\ref{section petersson poincare}. \end{corollary}
Note that a harmonic function on $\mathbb{H}$ which transforms like a modular form of weight $2-2k$ and is bounded at the cusps has to be a constant (and therefore vanishes if $k > 1$). Hence, in order to prove Theorem~\ref{theorem locally harmonic maass form identity} in the case that $\tau$ does not lie on $S_{Q}$ for $Q \in [A]$ it suffices to show that both sides in the theorem have the same singularities on $\mathbb{H}$ and are bounded at the cusps (and vanish at $i\infty$ if $k = 1$).
We say that a function $f$ has a \emph{singularity of type $g$} at a point $\tau_{0}$ if there exists a neighborhood $U$ of $\tau_{0}$ such that $f$ and $g$ are defined on a dense subset of $U$ and $f-g$ can be extended to a harmonic function on $U$. For example, Theorem~\ref{local} shows that the function $\mathcal{F}_{1-k,A}(\tau)$ has a singularity of type
\begin{align*}
(-1)^{k}2^{1-2k}D^{\frac{1}{2}-k} \sum_{\substack{Q = [a,b,c]\in [A]\\ \tau_0 \in S_Q}}\sgn(Q_{\tau})Q(\tau,1)^{k-1}
\end{align*}
at each point $\tau_{0} \in \mathbb{H}$, which easily follows from the fact that $\tau \in \Int(S_{Q})$ is equivalent to $\sgn(a)\sgn(Q_{\tau}) < 0$.
\begin{lemma}\label{jumps} The function $\mathcal{C}( H_{k,k-1}(\cdot,\tau), A)$ is harmonic on $\mathbb{H}\setminus \bigcup_{Q \in [A]}S_{Q}$ and bounded at the cusps. For $k = 1$ it vanishes at $i\infty$. At a point $\tau_{0} \in \mathbb{H}$ it has a singularity of type
\[ (-1)^{k}2^{2-2k}\pi \sum_{\substack{Q = [a,b,c] \in [A] \\ \tau_0 \in S_Q}}\sgn(Q_{\tau})Q(\tau,1)^{k-1}.
\] \end{lemma}
\begin{proof} Since the function $\tau \mapsto H_{k,k-1}(z,\tau)$ is harmonic on $\mathbb{H} \setminus \Gamma z$, the function $\mathcal{C}(H_{k,k-1}(\cdot,\tau), A)$ is harmonic on $\mathbb{H} \setminus \bigcup_{Q \in [A]}S_{Q}$. Moreover, $\mathcal{C}( H_{k,k-1}(\cdot,\tau), A)$ is bounded at the cusps (and vanishes at $i\infty$ if $k = 1$) because the same is true for $\tau \mapsto H_{k,k-1}(z,\tau)$.
To determine the singularities, we keep $\tau_{0}\in\mathbb{H}$ fixed and consider the function
\begin{align*}
G_{\tau_0}(z, \tau) &:= \sum_{\substack{Q\in [A] \\ \tau_0 \in S_Q}}\sum_{M\in \Gamma_{Q}} \left(\frac{v^{2k-1}}{(z-\tau)(z-\overline{\tau})^{2k-1}}\right)\Big|_{2k, z} M.
\end{align*} Note that the sum over $Q \in [A]$ with $\tau_{0} \in S_{Q}$ is finite, and the group $\overline{\Gamma}_{Q}$ is infinite cyclic. It is not hard to show that the series converges absolutely and locally uniformly for all $k \geq 1$, and is meromorphic in $z$ and harmonic in $\tau$ for $\tau$ not lying in the $\Gamma$-orbit of $z$. We split the cycle integral into \[ \mathcal{C}(H_{k,k-1}(\cdot,\tau), A) =\mathcal{C}(H_{k,k-1}(\cdot,\tau) - G_{\tau_0}(\cdot,\tau), A) + \mathcal{C}( G_{\tau_0}(\cdot, \tau), A). \] The function
\[ \tau \mapsto \mathcal{C}(H_{k,k-1}(\cdot,\tau) - G_{\tau_{0}}(\cdot,\tau), A) \] is harmonic in a neighborhood of $\tau_0$. For the second summand we compute for any $\tau\notin \Gamma S_A$ \begin{align*}
\mathcal{C}( G_{\tau_0}(\cdot, \tau), A) &= \int_{c_A}\sum_{\substack{Q\in [A]\\ \tau_0 \in S_Q}} \sum_{M\in \Gamma_Q} \left(\left(\frac{v^{2k-1}}{(z-\tau)(z-\overline{\tau})^{2k-1}}\right)\Big|_{2k, z} M \right) A(z,1)^{k-1}dz \\ &=2\sum_{\substack{Q\in [A]\\ \tau_0 \in S_Q}}\int_{S_Q}\frac{v^{2k-1}}{(z-\tau)(z-\overline{\tau})^{2k-1}} Q(z,1)^{k-1}dz. \end{align*} Note that the integrand is meromorphic in $z$. The integral is oriented counterclockwise if $a>0$ and clockwise if $a<0$. We complete $S_Q$ to a closed path by adding the horizontal line connecting the two real endpoints $w < w'$ of $S_Q$. The function \[ \tau\mapsto \int_{w}^{w'}\frac{v^{2k-1}}{(x-\tau)(x-\overline{\tau})^{2k-1}} Q(x,1)^{k-1}dx \] is harmonic on $\mathbb{H}$, so it does not contribute to the singularity. From the residue theorem we obtain that the integral over the closed path equals $0$ if $\tau\notin \text{Int}(S_Q)$ and \begin{align*} 2\pi i \sgn(a) \operatorname{Res}_{z=\tau} \left(\frac{v^{2k-1}}{(z-\tau)(z-\overline{\tau})^{2k-1}} Q(z,1)^{k-1}\right) = (-1)^{k-1}2^{2-2k}\pi \sgn(a)Q(\tau,1)^{k-1} \end{align*} if $\tau\in \text{Int}(S_Q)$. This yields the claimed singularity.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{theorem locally harmonic maass form identity}] By what we have said above, Theorem~\ref{theorem locally harmonic maass form identity} for $\tau$ not lying on $S_{Q}$ for any $Q \in [A]$ follows from the above lemma. By a similar idea as in the proof of the lemma above we find that for $\tau$ lying on a semi-circle $S_{Q}$ for $Q \in [A]$ we have
\begin{align*}
\mathcal{C}(H_{k,k-1}(\cdot,\tau),A)
&= \lim_{\varepsilon \to 0}\frac{1}{2}\big(\mathcal{C}(H_{k,k-1}(z,\tau+i\varepsilon),A)+\mathcal{C}(H_{k,k-1}(z,\tau-i\varepsilon),A) \big),
\end{align*}
where the cycle integral on the left-hand side is defined as the Cauchy principal value \eqref{eq Cauchy principal value}. This implies that Theorem~\ref{theorem locally harmonic maass form identity} is also true for $\tau$ lying on $S_{Q}$ for some $Q \in [A]$.
\end{proof}
\section{The Proof of Theorem \ref{theorem traces rationality}}\label{section proof theorem traces rationality}
By Corollary~\ref{corollary main identity} we have the identity \begin{align}\label{eq main identity}
\mathcal{C}\left(f_{k,P},A\right) = \frac{2^{k}|d|^{\frac{k-1}{2}}D^{k-\frac12}}{(k-1)! \,|\overline{\Gamma}_{P}|}R_{2-2k}^{k-1}\left(\mathcal{F}_{1-k,A}\right)(\tau_{P}). \end{align} Let $\mathcal{Q}$ be a finite family of indefinite quadratic forms $A \in \mathcal{Q}_{D}$ of non-square discriminants $D_{A} > 0$, and let $a_{A} \in \mathbb{Z}$ for $A \in \mathcal{Q}$ such that $\sum_{A \in \mathcal{Q}}a_{A}f_{k,A} = 0$. If we multiply \eqref{eq main identity} by $a_{A}$ and sum over $A \in \mathcal{Q}$, and then plug in the splitting of $\mathcal{F}_{1-k,A}$ from Theorem~\ref{local}, we see that the Eichler integrals $f_{k,A}^{*}$ and $\mathcal{E}_{f_{k,A}}$ cancel out due to the assumption $\sum_{A \in \mathcal{Q}}a_{A}f_{k,A} = 0$. Hence we obtain \begin{align*}
\sum_{A \in \mathcal{Q}}a_{A}\mathcal{C}\left(f_{k,P},A\right) = \frac{2^{k}|d|^{\frac{k-1}{2}}}{(k-1)! \,|\overline{\Gamma}_{P}|}\sum_{A \in \mathcal{Q}}a_{A }D_{A}^{k-\frac12}R_{2-2k}^{k-1}\left(P_{1-k,A}\right)(\tau_{P}), \end{align*} where $P_{1-k,A}$ is the local polynomial defined in Theorem~\ref{local}. The action of the iterated raising operator on $P_{1-k,A}$ has been computed in Lemmas 5.3 and 5.4 of \cite{anbs}, and is given as follows.
\begin{lemma}\label{raisingP} For $\tau \in \mathbb{H} \setminus \bigcup_{Q \in [A]}S_{Q}$ we have \begin{align*}
R_{2-2k}^{k-1}\left(P_{1-k,A}\right)(\tau) = \frac{(k-1)!}{2^{k}D^{k-\frac{1}{2}}}\mathcal{P}_{k,A}(\tau),
\end{align*} where $\mathcal{P}_{k,A}(\tau)$ is the function defined in \eqref{eq def local polynomial}. \end{lemma}
We arrive at \begin{align*}
\sum_{A \in \mathcal{Q}}a_{A}\mathcal{C}\left(f_{k,P},A\right) = \frac{|d|^{\frac{k-1}{2}}}{|\overline{\Gamma}_{P}|}\sum_{A \in \mathcal{Q}}a_{A}\mathcal{P}_{k,A}(\tau_{P}), \end{align*} which is the formula from Theorem~\ref{theorem traces rationality}. Finally, we show that the right-hand side is rational.
\begin{lemma}\label{lemma P rational} For any CM-point $\tau_P\in\mathbb{H}$ of discriminant $d < 0$, we have \[
|d|^{\frac{k-1}{2}}\mathcal{P}_{k,A}(\tau_P)\in\mathbb{Q}. \] \end{lemma}
\begin{proof} If $P = [a,b,c]$ with $a > 0$, then $\tau_{P}$ is given by \[
\tau_{P} = \frac{-b+i\sqrt{|d|}}{2a}. \]
In particular, $\frac{\sqrt{|d|}}{\mathrm{Im}(\tau_P)}$ and $\sqrt{|d|}Q_{\tau_{P}}$ are rational. We have seen in Proposition~\ref{proposition zeta level lowering} that \[ D^{k-\frac12}\left(\zeta_{\Gamma,A}(k)+(-1)^k \zeta_{\Gamma,-A}(k)\right) \] is a rational number for $k \geq 2$ (and this expression does not occur in $\mathcal{P}_{k,A}$ for $k = 1$). Moreover, the Legendre polynomial $P_{k-1}$ is odd if $k$ is even and even if $k$ is odd. Hence \[
|d|^{\frac{k-1}{2}}(i\sqrt{D})^{k-1}P_{k-1}\left(\frac{iQ_{\tau_P}}{\sqrt{D}}\right) \]
is rational. Combining all these facts we see that $|d|^{\frac{k-1}{2}}\mathcal{P}_{k,A}(\tau_P)$ is a rational number. \end{proof}
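The two rationality facts used in the proof can be illustrated numerically: $\tau_{P}$ is a root of $P(X,1)$, and for $P=[a,b,c]$ and $A=[a',b',c']$ one computes $\sqrt{|d|}\,Q_{\tau_{P}} = 2a'c - b'b + 2ac'$, an integer. The Python sketch below (an illustration, with ad hoc helper names) checks this for a sample pair of forms.

```python
from math import isclose, sqrt

def cm_point(P):
    # tau_P = (-b + i sqrt(|d|)) / (2a) for P = [a, b, c], a > 0, d = b^2 - 4ac < 0
    a, b, c = P
    d = b * b - 4 * a * c
    return complex(-b, sqrt(-d)) / (2 * a)

def Q_tau(A, tau):
    # Q_tau = (a |tau|^2 + b Re(tau) + c) / Im(tau) for A = [a, b, c]
    a, b, c = A
    return (a * abs(tau) ** 2 + b * tau.real + c) / tau.imag

P = (1, 1, 1)    # CM point of discriminant d = -3
A = (1, 0, -2)   # indefinite form of discriminant D = 8
tau = cm_point(P)

# tau_P is a root of P(X, 1)
assert isclose(abs(P[0] * tau ** 2 + P[1] * tau + P[2]), 0.0, abs_tol=1e-12)

# sqrt(|d|) * Q_{tau_P} equals the integer 2 a' c - b' b + 2 a c'
d = P[1] ** 2 - 4 * P[0] * P[2]
lhs = sqrt(-d) * Q_tau(A, tau)
assert isclose(lhs, 2 * A[0] * P[2] - A[1] * P[1] + 2 * P[0] * A[2], abs_tol=1e-9)
```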
\section{The Proof of Theorem \ref{theorem traces rationality 2}}\label{section proof theorem traces rationality 2}
For $(m,N) = 1$ the $m$-th Hecke operator $T_{m}$ on a function $f$ transforming like a modular form of weight $2k$ for $\Gamma$ is defined by \begin{align} \label{eq definition Tn}
f| T_{m} := m^{k-1}\sum_{M\in\Gamma\backslash \mathcal{M}_m(N)}f|_{2k}M, \end{align}
where $\mathcal{M}_m (N)$ is the set of integral $2$ by $2$ matrices of determinant $m$ whose lower left entry is divisible by $N$, and the slash operator is defined by $(f|_{2k}M)(z) := \det(M)^{k}j(M,z)^{-2k}f(Mz)$. It acts on the Fourier expansion of a cusp form $f(z) = \sum_{n=1}^{\infty}c_{f}(n)q^{n} \in S_{2k}(\Gamma)$ by \[
(f|T_{m})(z) = \sum_{n=1}^{\infty}\sum_{d\mid (m,n)}d^{2k-1}c_{f}(mn/d^{2})q^{n}. \]
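The coefficient formula for $f|T_{m}$ can be implemented directly. As an independent sanity check (not used in the proof; helper names are ad hoc), the Python sketch below applies it to the discriminant function $\Delta = q\prod_{n\geq 1}(1-q^{n})^{24} \in S_{12}(\mathrm{SL}_{2}(\mathbb{Z}))$, a Hecke eigenform with $\Delta|T_{2} = \tau(2)\Delta = -24\,\Delta$.

```python
def delta_coeffs(N):
    # tau(1), ..., tau(N): coefficients of Delta = q prod_{n>=1} (1 - q^n)^24
    P = [0] * N          # coefficients of prod (1 - q^n)^24 up to q^{N-1}
    P[0] = 1
    for n in range(1, N):
        for _ in range(24):
            # multiply the truncated series by (1 - q^n), in place
            for i in range(N - 1, n - 1, -1):
                P[i] -= P[i - n]
    tau = [0] * (N + 1)
    for i in range(1, N + 1):
        tau[i] = P[i - 1]   # shift by one power of q
    return tau

def hecke_coeff(c, m, k, n):
    # n-th coefficient of f | T_m for f = sum_{n>=1} c[n] q^n of weight 2k:
    # sum over d | (m, n) of d^{2k-1} c(m n / d^2)
    return sum(d ** (2 * k - 1) * c[m * n // (d * d)]
               for d in range(1, min(m, n) + 1) if m % d == 0 and n % d == 0)

tau = delta_coeffs(40)
assert tau[1:6] == [1, -24, 252, -1472, 4830]

# Delta has weight 12 (so k = 6 here) and T_2-eigenvalue tau(2) = -24
for n in range(1, 21):
    assert hecke_coeff(tau, 2, 6, n) == -24 * tau[n]
```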
In order to show Theorem~\ref{theorem traces rationality 2} we would like to use the splitting of $\mathcal{F}_{1-k,A}$ from Theorem~\ref{local} and get rid of the Eichler integrals by taking suitable linear combinations. To this end, the following well-known lemma is useful.
\begin{lemma}\label{cusprelation}
If $\underline{\lambda}$ is a relation for $S_{2k}(\Gamma)$, then $f|T_{\underline{\lambda}} = 0$ for every $f \in S_{2k}(\Gamma)$. \end{lemma}
\begin{proof}
We have
\[
(f|T_{\underline{\lambda}})(z) = \sum_{n=1}^{\infty}\lambda_{n}\sum_{m=1}^{\infty}\sum_{d \mid (m,n)}d^{2k-1}c_{f}(mn/d^{2})q^{m} = \sum_{m=1}^{\infty}\sum_{n=1}^{\infty}\lambda_{n}\sum_{d \mid (m,n) }d^{2k-1}c_{f}(mn/d^{2})q^{m}.
\]
Since the innermost sum is just the $n$-th coefficient of the cusp form $f|T_{m}$, the sum over $n$ vanishes by the definition of a relation for $S_{2k}(\Gamma)$. \end{proof}
An important ingredient in the proof of Theorem~\ref{theorem traces rationality 2} is the fact that Petersson's Poincar\'e series $H_{k}(z,\tau)$ behaves nicely under the action of Hecke operators.
\begin{lemma}\label{HkHecke} For $k \geq 1$ and $(m,N) = 1$ we have \[
H_k(z,\tau)|_ {z} T_m = m^{k} H_k(z,\tau)|_ {\tau} T_m. \] \end{lemma}
\begin{proof}
For $k \geq 2$ we plug in the definition of $H_{k}(z,\tau)$ and $T_{m}$ and write
\[
H_{k}(z,\tau)|_{z}T_{m} = m^{k-1}\sum_{M \in \mathcal{M}_{m}(N)}\left(\frac{(z-\tau)(z-\overline{\tau})}{v} \right)^{-k}\Bigg|_{2k,z}M.
\]
Now a short calculation gives
\[
\left(\frac{(z-\tau)(z-\overline{\tau})}{v} \right)^{-k}\Bigg|_{2k,z}M = \left(\frac{(z-\tau)(z-\overline{\tau})}{v} \right)^{-k}\Bigg|_{0,\tau}M^{'},
\]
where $\left(\begin{smallmatrix}a & b \\ c & d \end{smallmatrix} \right)^{'} = \left(\begin{smallmatrix}d & -b \\ -c & a \end{smallmatrix} \right)$. Since $M^{'}$ also runs through $\mathcal{M}_{m}(N)$ we obtain the stated identity for $k\geq 2$. For $k = 1$ and $\mathrm{Re}(s) > 0$ we compute analogously
\[
H_{1,s}(z,\tau)|_{z}T_{m} = m H_{1,s}(z,\tau)|_{\tau}T_{m}.
\]
Using the well-known fact that
\[
E_{2,\Gamma}^{*}(z)|_{z}T_{m} = \sigma(m)E_{2,\Gamma}^{*}(z) = |\Gamma \backslash \mathcal{M}_{m}(N)| E_{2,\Gamma}^{*}(z) = mE_{2,\Gamma}^{*}(z)|_{\tau}T_{m}
\]
and analytic continuation we also obtain the result for $k = 1$.
\end{proof}
We now come to the proof of Theorem~\ref{theorem traces rationality 2}. Using Lemmas~\ref{lemma fkP and Hk} and \ref{HkHecke} we compute \begin{align*}
\mathcal{C}(f_{k,P}|T_{m},A) &= \frac{2^{k-1}|d|^{\frac{k-1}{2}}}{|\overline{\Gamma}_{P}|\pi}\mathcal{C}(H_{k}(\cdot,\tau_{P})|_{z}T_{m},A) = \frac{2^{k-1}|d|^{\frac{k-1}{2}}}{|\overline{\Gamma}_{P}|\pi}m^{k}\big(\mathcal{C}(H_{k}(\cdot,\tau),A)|_{\tau}T_{m}\big)(\tau_{P}) . \end{align*} By \eqref{eq Hk and Hkk-1} and Theorem \ref{theorem locally harmonic maass form identity} we obtain \begin{align*}
\mathcal{C}(H_{k}(\cdot,\tau),A)|_{\tau}T_{m} &= \frac{1}{(k-1)!}\big(R_{2-2k,\tau}^{k-1}\big(\mathcal{C}(H_{k,k-1}(\cdot,\tau),A)\big)\big)|_{\tau}T_{m} \\
&=\frac{2\pi D^{k-\frac12}}{(k-1)!} \big(R_{2-2k}^{k-1}(\mathcal{F}_{1-k,A})\big) | T_m \\
&=\frac{2\pi D^{k-\frac12}}{(k-1)!}m^{k-1}R_{2-2k}^{k-1}\left(\mathcal{F}_{1-k,A} | T_m \right). \end{align*}
Since every coset in $\Gamma\backslash\mathcal{M}_m(N)$ is represented by a matrix $M$ with $Mi\infty =i\infty$, we have for any $f\in S_{2k}(\Gamma)$ \[
\mathcal{E}_{f}|T_m = m^{1-2k}\mathcal{E}_{f| T_m} \qquad\text{and}\qquad f^{*}| T_m = m^{1-2k} (f|T_m)^*. \] This implies \[
\mathcal{F}_{1-k,A}| T_m = P_{1-k,A}| T_m+ m^{1-2k}(-1)^{k}\frac{D^{\frac{1}{2}-k}}{\binom{2k-2}{k-1}}\left(f_{k,A}| T_m\right)^*-m^{1-2k}(-1)^{k}D^{\frac{1}{2}-k}\frac{(k-1)!^2}{(4\pi)^{2k-1}}\mathcal{E}_{f_{k,A}| T_m}. \]
It follows from Lemma \ref{cusprelation} that $\sum_{m>0}\lambda_{m}f_{k,A}| T_m =0$, and therefore \begin{align*}
\sum_{m>0}\lambda_{m} \mathcal{C}(f_{k,P}|T_{m},A)
&=\frac{2^k |d|^{\frac{k-1}{2}}D^{k-\frac12}}{(k-1)!|\overline{\Gamma}_{P}|} \sum_{m>0}\lambda_{m}m^{2k-1}\left(R_{2-2k}^{k-1}\left(P_{1-k,A}| T_m\right)\right)(\tau_{P})\\
&=\frac{2^k |d|^{\frac{k-1}{2}}D^{k-\frac12}}{(k-1)!|\overline{\Gamma}_{P}|} \sum_{m>0}\lambda_{m}m^{k}\left(R_{2-2k}^{k-1}(P_{1-k,A})|T_m\right)(\tau_{P}). \end{align*} The expression $R_{2-2k}^{k-1}(P_{1-k,A})$ can be rewritten using Lemma \ref{raisingP}. We plug in the definition of $T_{m}$ and choose as a system of representatives for $\Gamma \backslash \mathcal{M}_{m}(N)$ the matrices $\left(\begin{smallmatrix} \alpha & \beta \\ 0 & \delta \end{smallmatrix} \right)$ with $\alpha,\beta,\delta \in \mathbb{Z}, \alpha > 0, \alpha\delta = m$, and $\beta \pmod \delta$. This yields the formula in Theorem~\ref{theorem traces rationality 2}. Note that $\frac{\alpha \tau_{P}+\beta}{\delta}$ is a CM point of discriminant $\delta^{2}d$. Hence Lemma~\ref{lemma P rational} implies that the expression \[
|d|^{\frac{k-1}{2}}\mathcal{P}_{k,A}\left(\frac{\alpha \tau_{P}+\beta}{\delta} \right) \] is rational. This finishes the proof of Theorem~\ref{theorem traces rationality 2}.
\section{The Proof of Theorem \ref{weight 2}}\label{section proof weight 2}
Throughout this section we assume that $N$ is odd and square-free. Furthermore, we let $\Delta$ be a discriminant with $(-1)^{k}\Delta > 0$ and $\delta$ a fundamental discriminant with $(-1)^{k}\delta < 0$ such that $\delta$ is a square modulo $4N$. Finally, let $F(\tau) = \sum_{m \gg -\infty}c_{F}(m)q^{m}$ be a weakly holomorphic modular form of weight $\frac{3}{2}-k$ for $\Gamma_{0}(4N)$ in the Kohnen plus space with rational coefficients $c_{F}(m)$ for $m < 0$. We first show that the Fourier coefficients of the meromorphic modular form \eqref{eq twisted sum} are algebraic multiples of $\pi^{k-1}$.
\begin{proposition}\label{RatCoef} For $k \geq 1$ the meromorphic modular form \[
\pi^{1-k}|\delta|^{\frac12-k}\sum_{(-1)^k\Delta > 0} c_{F}(-|\Delta|)f_{k,\Delta,\delta} \] has rational Fourier coefficients. \end{proposition}
For the proof we write the coefficients of $f_{k,\Delta,\delta}$ as linear combinations of coefficients of half-integral weight Maass Poincar\'e series.
\begin{lemma}\label{fPcoef} Let $k \geq 1$. For $n \geq 1$ we have \begin{multline*}
c_{f_{k,\Delta, \delta}}(n) = -\frac{(-1)^{\left[\frac{k}{2}\right]}2^{k}\pi^{k-1}|\delta|^{k-\frac12}n^{2k-1}}{(k-1)!}\sum_{m|n} \left(\frac{\delta}{m}\right) m^{-k} c^+_{\mathcal{P}_{\frac{3}{2}-k,-|\Delta|}}\left(\frac{n^{2}|\delta|}{m^{2}}\right)\\
\qquad+\delta_{k=1}12\prod_{p|N}\left(1-p^{-2}\right)^{-1}\sum_{d|N}\frac{\mu(d)}{d^2}\sigma\left(\frac{dn}{N}\right)\sum_{P \in \mathcal{Q}_{\Delta \delta}/\Gamma}\frac{\chi_{\delta}(P)}{|\overline{\Gamma}_P|}. \end{multline*} \end{lemma}
\begin{proof}
This identity follows from a straightforward calculation using Proposition~\ref{prop fkP Fourier expansion}, Theorem~\ref{theorem Poincare Fourier expansion} and Proposition~\ref{prop salie sum}. It could alternatively be derived from the fact that $f_{k,\Delta,\delta}$ is a theta lift of $\mathcal{P}_{\frac{3}{2}-k,-|\Delta|}$, compare \cite{bringmannkanevonpippich, zemel}. \end{proof}
\begin{proof}[Proof of Proposition~\ref{RatCoef}]
Looking at the formula for $c_{f_{k,\Delta, \delta}}(n)$ in Lemma \ref{fPcoef}, we see that the second summand on the right-hand side is rational if $k=\delta=1$ and vanishes otherwise. It remains to show that the coefficients of \begin{align}\label{Pschmodda}
\sum_{(-1)^k \Delta > 0} c_{F}(-|\Delta|)\sum_{n \geq 1}n^{2k-1}\sum_{m|n} \left(\frac{\delta}{m}\right) m^{-k} c^+_{\mathcal{P}_{\frac{3}{2}-k,-|\Delta|}}\left(\frac{n^{2}|\delta|}{m^{2}}\right)q^n \end{align} are rational.
Let $F$ be a weakly holomorphic modular form of weight $\frac{3}{2}-k$. Then so is the function
\[
\widetilde{F}(\tau) := \sum_{m < 0}c_{F}(m)\mathcal{P}_{\frac{3}{2}-k, m}(\tau).
\]
If $k > 1$, we have $F = \widetilde{F}$ since there are no holomorphic modular forms of negative weight. In particular, since the space of weakly holomorphic modular forms of weight $\frac{3}{2}-k$ has a basis consisting of forms with rational coefficients and the principal part of $F$ is rational, we find that all coefficients of $F$ are rational for $k > 1$. However, for $k = 1$ the functions $F$ and $\widetilde{F}$ may differ by a holomorphic modular form. Note that every $\mathcal{P}_{\frac{3}{2}-k,m}$ is orthogonal to cusp forms with respect to the regularized Petersson inner product and has rational principal part. Hence the same is true for $\widetilde{F}$. It now follows from Proposition~3.2 in \cite{bruinierschwagenscheidt} that all Fourier coefficients of $\widetilde{F}$ are rational. Now we see that \eqref{Pschmodda} equals
\[
\sum_{n \geq 1}n^{2k-1}\sum_{m|n} \left(\frac{\delta}{m}\right) m^{-k} c_{\widetilde{F}}\left(\frac{n^{2}|\delta|}{m^{2}}\right)q^n,
\] which has rational Fourier coefficients. This finishes the proof.
\end{proof}
We now proceed to the proof of Theorem~\ref{weight 2}. For the rest of this section we let $k = 1$ and $\delta > 0$ a fundamental discriminant which is a square modulo $4N$. We can assume without loss of generality that the coefficients $c_{F}(\Delta)$ for $\Delta < 0$ are integers. We consider the differential \[ \eta_{\delta}(F) := \pi i \sum_{\Delta < 0}c_{F}(\Delta)f_{1,\Delta,\delta}(z)dz \] on $X_{0}(N)$. For $P \in \mathcal{Q}_{\Delta\delta}$ we have \[ \Res_{z = \tau_{P}}(f_{1,\Delta,\delta}(z)) = \frac{\chi_{\delta}(P)}{\pi i}, \] so $\eta_{\delta}(F)$ has simple poles with integral residues. In particular, $\eta_{\delta}(F)$ is a differential of the third kind on $X_{0}(N)$.
Following \cite{bruinieronoheegner}, we define the twisted Heegner divisor \[
Z_{\delta}(F) := \sum_{\Delta < 0}c_{F}(\Delta)Z_{\delta}(\Delta), \qquad Z_{\delta}(\Delta) := \sum_{P \in \mathcal{Q}_{\delta\Delta}/\Gamma}\frac{\chi_\delta(P)}{|\overline{\Gamma}_{P}|}[\tau_{P}], \] associated to $F$, and the corresponding degree $0$ divisor \[ y_{\delta}(F) := Z_{\delta}(F)-\deg(Z_{\delta}(F))\cdot[i\infty]. \] By \cite{bruinieronoheegner}, Lemma 5.1, $y_{\delta}(F)$ is defined over $\mathbb{Q}(\sqrt{\delta})$. Note that $y_{\delta}(F)$ is precisely the residue divisor of $\eta_{\delta}(F)$ on $X_{0}(N)$. Moreover, we have the following result.
\begin{lemma}
The differential $\eta_{\delta}(F)$ is the canonical differential of the third kind for $y_{\delta}(F)$, i.e., the unique differential of the third kind with residue divisor $y_{\delta}(F)$ such that
\[
\mathrm{Re}\left(\int_{\gamma}\eta_{\delta}(F)\right) = 0
\]
for all cycles $\gamma \in H_{1}(X_{0}(N)\setminus y_{\delta}(F),\mathbb{Z})$. \end{lemma}
\begin{proof}
One can see from Theorem~\ref{local} that $\mathcal{F}_{0,A}(\tau)\in\mathbb{R}$ for all $\tau\in\mathbb{H}$ not lying on any of the semi-circles $S_{Q}$ for $Q \in [A]$. It follows from Corollary~\ref{corollary main identity} that \[ \mathrm{Re}\left(\int_{\gamma}\eta_{\delta}(F)\right) = 0 \] if $\gamma$ is any cycle of the form $c_A$ which does not meet any poles of $\eta_{\delta}(F)$. It is well-known that the group $H_{1}(X_{0}(N)\setminus y_{\delta}(F),\mathbb{Z})$ is generated by these cycles, which yields the result. \end{proof}
The crucial ingredient for the proof of Theorem~\ref{weight 2} is the following rationality result of Scholl \cite{scholl} for differentials of the third kind (see also Theorem 3.3 of \cite{bruinieronoheegner}).
\begin{theorem}[Scholl]
Let $D$ be a divisor of degree $0$ on $X_{0}(N)$ defined over a number field $K$. Let $\eta_{D}$ be the canonical differential of the third kind associated to $D$ and write $\eta_{D} = 2\pi i f dz$. If all the Fourier coefficients of $f$ are contained in $K$, then some non-zero multiple of $D$ is a principal divisor. \end{theorem}
It follows from Proposition~\ref{RatCoef} that the Fourier coefficients of $\frac{1}{2\pi i}\eta_{\delta}(F)$ are contained in $\mathbb{Q}(\sqrt{\delta})$, which is also the field of definition of the divisor $y_{\delta}(F)$. In particular, the above criterion of Scholl implies that some non-zero multiple of $y_{\delta}(F)$, say $m\cdot y_{\delta}(F)$ for some $m \in \mathbb{Z}$, is the divisor of a meromorphic function $g$ on $X_{0}(N)$.
Fix some point $z_{0} \in \mathbb{H}$ which is not a pole of $\eta_{\delta}(F)$. For $z \in \mathbb{H}$ not being a pole of $\eta_{\delta}(F)$ we consider the function \[ \Psi_{\delta}(F,z) := \exp\left(m\int_{z_{0}}^{z}\eta_{\delta}(F)\right), \] where the integral is over any path from $z_{0}$ to $z$ in $\mathbb{H}$ avoiding the poles of $\eta_{\delta}(F)$. Since the residues of $\eta_{\delta}(F)$ are integers, this does not depend on the choice of the path. Note that $\Psi_\delta(F,z)$ is meromorphic on $\mathbb{H}$ and has the same divisor as $g$. Thus their quotient is constant on $\mathbb{H}$ and $\Psi_\delta(F,z)$ is $\Gamma$-invariant.
For any $M\in\Gamma$, we have \[ \Psi_{\delta}(F,Mz) = \exp\left(m\int_{z_{0}}^{Mz}\eta_{\delta}(F)\right) = \exp\left(m\int_{z_{0}}^{Mz_0}\eta_{\delta}(F)\right)\Psi_{\delta}(F,z), \] so for $\Psi_{\delta}(F,z)$ to be $\Gamma$-invariant, the integral \[ \frac{m}{2\pi i}\int_{z_{0}}^{Mz_0}\eta_{\delta}(F) = \frac{m}{2}\int_{z_0} ^{Mz_0} \sum_{\Delta < 0}c_{F}(\Delta)f_{1,\Delta,\delta}(z)dz \] has to be an integer. If we choose $z_0$ to lie on a geodesic $S_A$ and $M$ to be a generator of $\Gamma_A$, then this implies that the cycle integral of $\sum_{\Delta < 0}c_{F}(\Delta)f_{1,\Delta,\delta}$ along $c_{A}$ is a rational number. This finishes the proof of Theorem~\ref{weight 2}.
\end{document} |
\begin{document}
\title{Compromise Solutions for Robust Combinatorial Optimization with Variable-Sized Uncertainty}
\author[1]{Andr\'{e} Chassein\thanks{Email: [email protected]}} \author[2]{Marc Goerigk\thanks{Corresponding author. Email: [email protected]}}
\date{} \affil[1]{Fachbereich Mathematik, Technische Universit\"at Kaiserslautern, Germany} \affil[2]{Department of Management Science, Lancaster University, United Kingdom}
\maketitle
\begin{abstract} In classic robust optimization, it is assumed that a set of possible parameter realizations, the uncertainty set, is modeled in a previous step and part of the input. As recent work has shown, finding the most suitable uncertainty set is in itself already a difficult task. We consider robust problems where the uncertainty set is not completely defined. Only the shape is known, but not its size. Such a setting is known as variable-sized uncertainty.
In this work we present an approach to find a single robust solution that performs well on average over all possible uncertainty set sizes. We demonstrate that this approach can be solved efficiently for min-max robust optimization, but is more involved in the case of min-max regret, for which we provide positive and negative complexity results for the selection problem, the minimum spanning tree problem, and the shortest path problem. We introduce an iterative solution procedure, and evaluate its performance in an experimental comparison.
\end{abstract}
{\bf Keywords:} robust combinatorial optimization; min-max regret; variable-sized uncertainty
\section{Introduction}
Classic optimization settings assume that the problem data are known exactly. Robust optimization, like stochastic optimization, instead assumes some degree of uncertainty in the problem formulation. Most approaches in robust optimization formalize this uncertainty by assuming that all uncertain parameters $\xi$ are described by a set of possible outcomes $\mathcal{U}$, the uncertainty set.
For general overviews on robust optimization, we refer to \cite{Aissi2009,RObook,GoeSchoe13-AE,bertsimas-survey}.
While the discussion of properties of the robust problem for different types of uncertainty sets $\mathcal{U}$ has always played a major role in the research community, only recently the data-driven design of useful sets $\mathcal{U}$ has become a focus of research. In \cite{bertsimas2013data}, the authors discuss the design of $\mathcal{U}$ taking problem tractability and probabilistic guarantees of feasibility into account. The paper \cite{bertsimas2009constructing} discusses the relationship between risk measures and uncertainty sets.
In distributionally robust optimization, one assumes that a probability distribution on the data is roughly known; however, this distribution itself is subject to an uncertainty set $\mathcal{U}$ of possible outcomes (see \cite{goh2010distributionally,wiesemann2014distributionally}).
Another related approach is the globalized robust counterpart, see \cite{RObook}. The idea of this approach is that a relaxed feasibility should be maintained, even if a scenario occurs that is not specified in the uncertainty set. The larger the distance of $\xi$ to $\mathcal{U}$, the further relaxed becomes the feasibility requirement of the robust solution.
In this work we present an alternative to constructing a specific uncertainty set $\mathcal{U}$. Instead, we only assume knowledge of a nominal (undisturbed) scenario, and consider a set of possible uncertainty sets of varying size based on this scenario. That is, a decision maker does not need to determine the size of uncertainty (a task that is usually outside their expertise). Our approach constructs a solution for which the worst-case objective with respect to any possible uncertainty set performs well on average over all uncertainty sizes.
The basic idea of variable-sized uncertainty was recently introduced in \cite{variable}. There, the aim is to construct a set of robust candidate solutions that requires the decision maker to choose the one that suits them best. In our setting, we consider all uncertainty sizes simultaneously, and generate a single solution as a compromise approach to the unknown uncertainty. We call this setting the \textit{compromise approach to variable-sized uncertainty}.
We focus on combinatorial optimization problems with uncertainty in the objective function, and consider both min-max and min-max regret robustness (see \cite{kasperski2016robust}).
This work is structured as follows. In Section~\ref{sec:var}, we briefly formalize the setting of variable-sized uncertainty. We then introduce our new compromise approach for min-max robustness in Section~\ref{sec:comp}, and for the more involved case of min-max regret robustness in Section~\ref{sec:comp2}. We present complexity results for the selection problem, the minimum spanning tree problem, and the shortest path problem in Section~\ref{sec:probs}. In Section~\ref{sec:exp}, we evaluate our approach in a computation experiment, before concluding this paper in Section~\ref{sec:conc}.
\section{Variable-Sized Uncertainty}\label{sec:var}
We briefly summarize the setting of \cite{variable}. Consider an uncertain combinatorial problem of the form \[ \min\ \{ c^t x: x\in\mathcal{X} \} \tag{P($c$)} \] with $\mathcal{X}\subseteq\{0,1\}^n$, and an uncertainty set $\mathcal{U}(\lambda)\subseteq\mathbb{R}^n_+$ that is parameterized by some size $\lambda \in \Lambda$. For example, \begin{itemize} \item interval-based uncertainty $\mathcal{U}(\lambda) = \prod_{i\in[n]} [(1-\lambda)\hat{c}_i,(1+\lambda)\hat{c}_i]$ with $\Lambda \subseteq [0,1]$, or \item ellipsoidal uncertainty $\mathcal{U}(\lambda) = \{ c : c = \hat{c} + C\xi, \Vert \xi \Vert_2 \le \lambda \}$ with $\Lambda \subseteq \mathbb{R}_+$. \end{itemize} We call $\hat{c}$ the \textit{nominal scenario}, and any $\hat{x}\in\mathcal{X}$ that is a minimizer of P($\hat{c}$) a \textit{nominal solution}.
In variable-sized uncertainty, we want to find a set of solutions $\mathcal{S}\subseteq \mathcal{X}$ that contains an optimal solution to each robust problem over all $\lambda$. Here, the robust problem is either given by the min-max counterpart \[ \min_{x\in\mathcal{X}} \max_{c\in\mathcal{U}(\lambda)} c^t x \] or the min-max regret counterpart \[ \min_{x\in\mathcal{X}} \max_{c\in\mathcal{U}(\lambda)} \left( c^t x - \min_{y\in\mathcal{X}} c^ty \right). \]
In the case of min-max robustness, such a set can be found through methods from multi-objective optimization in $\mathcal{O}(|\mathcal{S}|\cdot T)$, where $T$ denotes the complexity of the nominal problem, for many reasonable uncertainty sets. However, $\mathcal{S}$ may be exponentially large. Furthermore, in some settings, a set of solutions that would require a decision maker to make the final choice may not be desirable; instead, a single solution that represents a good trade-off over all values of $\lambda$ is sought.
\section{Compromise Solutions in the Min-Max Model}\label{sec:comp}
In this paper we are interested in finding one single solution that performs well over all possible uncertainty sizes $\lambda\in \Lambda$. To this end, we consider the problem \[ \min_{x\in\mathcal{X}} val(x) \hspace{5mm} \text{with} \hspace{5mm} val(x) = \int_\Lambda w(\lambda) \left( \max_{c\in\mathcal{U}(\lambda)} c^tx \right) d\lambda \tag{C} \] for some weight function $w: \Lambda \to \mathbb{R}_+$, such that $val(x)$ is well-defined. We call this problem the compromise approach to variable-sized uncertainty. The weight function $w$ can be used to include decision maker preferences; e.g., it is possible to give more weight to smaller disturbances and less to large disturbances. If a probability distribution over the uncertainty size were known, it could be used to determine $w$. In the following, we consider (C) for different shapes of $\mathcal{U}(\lambda)$.
\begin{theorem}\label{th1} Let $\mathcal{U}(\lambda) = \prod_{i\in[n]} [(1-\lambda)\hat{c}_i,(1+\lambda)\hat{c}_i]$ be an interval-based uncertainty set with $\lambda\in\Lambda\subseteq[0,1]$. Then, a nominal solution $\hat{x}$ is an optimal solution of (C). \end{theorem} \begin{proof} As $\max_{c\in\mathcal{U}(\lambda)} c^tx = (1+\lambda)\hat{c}^t x$, we get \begin{align*} val(x) &= \int_\Lambda w(\lambda) \left( (1+\lambda) \hat{c}^tx \right) d\lambda \\ &= \left( \int_\Lambda (1+\lambda) w(\lambda) d\lambda\right) \hat{c}^tx. \end{align*} Therefore, a minimizer of the nominal problem with costs $\hat{c}$ is also a minimizer of (C). \end{proof}
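Theorem~\ref{th1} can be checked numerically: for interval uncertainty with $w \equiv 1$ and $\Lambda = [0,1]$, the inner maximum is $(1+\lambda)\hat{c}^tx$, so $val(x)$ is a constant multiple ($3/2$) of the nominal cost. The following Python sketch uses made-up costs `c_hat` and an arbitrary feasible solution `x`, and integrates with the midpoint rule.

```python
# Numerical sanity check for the interval-uncertainty case (toy data):
# under U(lambda) = prod [(1-l)c_i, (1+l)c_i] and w == 1, the worst case
# is (1+l) * c^T x, so val(x) = (integral of (1+l) dl over [0,1]) * c^T x.
c_hat = [3.0, 1.0, 4.0, 1.5]   # nominal costs (illustrative)
x = [1, 0, 1, 0]               # some feasible 0/1 solution (illustrative)

def worst_case(x, lam):
    # worst-case objective of x over U(lam): all chosen costs are raised
    return sum((1 + lam) * ci * xi for ci, xi in zip(c_hat, x))

# midpoint rule over Lambda = [0, 1] with w(lambda) = 1
K = 100000
val = sum(worst_case(x, (k + 0.5) / K) for k in range(K)) / K

nominal = sum(ci * xi for ci, xi in zip(c_hat, x))
print(val, 1.5 * nominal)  # val(x) equals (3/2) * c^T x
```

Since $val(x)$ is a positive multiple of $\hat{c}^tx$ for every $x$, the nominal ranking of solutions is preserved, which is exactly the statement of the theorem.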
\begin{lemma}\label{lem1} For an ellipsoidal uncertainty set $\mathcal{U}(\lambda) = \{\hat{c} + C\xi : \Vert \xi \Vert_2 \le \lambda \}$ with $\lambda\in\mathbb{R}_+$, it holds that \[ \max_{c\in\mathcal{U}(\lambda)} c^tx = \hat{c}^t x + \lambda \Vert C^t x\Vert_2 \] \end{lemma} \begin{proof} This result has been shown in \cite{ben1999robust} for $\lambda = 1$. The proof holds analogously. \end{proof}
\begin{theorem}\label{th2} Let $\mathcal{U}(\lambda) = \{ \hat{c} + C\xi : \Vert \xi \Vert_2 \le \lambda \}$ be an ellipsoidal uncertainty set with $\lambda\in\Lambda\subseteq\mathbb{R}_+$. Then, an optimal solution to (C) can be found by solving a single robust problem with ellipsoidal uncertainty. \end{theorem} \begin{proof} Using Lemma~\ref{lem1}, we find \begin{align*} val(x) &= \int_\Lambda w(\lambda) \Big( \hat{c}^tx + \lambda\Vert C^t x\Vert_2 \Big) d\lambda \\ &= \left( \int_\Lambda w(\lambda) d\lambda \right) \hat{c}^t x+ \left( \int_\Lambda \lambda w(\lambda) d\lambda \right)\Vert C^t x\Vert_2 \end{align*} To find a minimizer of $val(x)$, we can therefore solve the robust counterpart of (P) using an uncertainty set $\mathcal{U}(\lambda')$ with $\lambda' = (\int_\Lambda \lambda w(\lambda) d\lambda)/(\int_\Lambda w(\lambda) d\lambda)$.
\end{proof} Note that $ (\int_0^1 \lambda w(\lambda) d\lambda)/(\int_0^1 w(\lambda) d\lambda) = \frac{1}{2}$ if $w(\lambda) = 1$, i.e., the compromise solution simply hedges against the average size of the uncertainty. In general, $\lambda'$ is the center of mass of the weight function $w$ over $\Lambda$.
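The reduction in Theorem~\ref{th2} only requires computing the single number $\lambda'$. A minimal numerical sketch (the weight functions below are illustrative choices, not taken from the text):

```python
# Sketch: the effective uncertainty size lambda' for the ellipsoidal case
# is the w-weighted mean of lambda over Lambda (here Lambda = [0,1]).
def effective_size(w, K=100000):
    lams = [(k + 0.5) / K for k in range(K)]   # midpoint grid on [0,1]
    num = sum(l * w(l) for l in lams) / K      # integral of lambda * w(lambda)
    den = sum(w(l) for l in lams) / K          # integral of w(lambda)
    return num / den

print(effective_size(lambda l: 1.0))       # uniform weight  -> 1/2
print(effective_size(lambda l: 1.0 - l))   # favor small sizes -> 1/3
```

With the triangular weight $w(\lambda) = 1-\lambda$, which emphasizes small disturbances, the compromise problem reduces to a single robust problem with $\lambda' = 1/3$ instead of $1/2$.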
The results of Theorems~\ref{th1} and \ref{th2} show that compromise solutions are easy to compute, as the resulting problems have a simple structure. This is due to the linearity of the robust objective value in the uncertainty size $\lambda$. Such linearity does not exist for min-max regret, as is discussed in the following section.
\section{Compromise Solutions in the Min-Max Regret Model}\label{sec:comp2}
We now consider the compromise approach in the min-max regret setting. In classic min-max regret, one considers the problem \[ \min_{x\in\mathcal{X}} \max_{c\in\mathcal{U}(\lambda)} c^t x - opt(c) \] with $opt(c) = \min_{y\in\mathcal{X}} c^ty$. In the following, we restrict the analysis to the better-researched interval uncertainty sets $\mathcal{U}(\lambda) = \prod_{i\in[n]} [(1-\lambda)\hat{c}_i,(1+\lambda)\hat{c}_i]$. Ellipsoidal uncertainty sets have been introduced in min-max regret only recently (see \cite{chassein2016min}).
\noindent The compromise approach to variable-sized uncertainty becomes \[ \min val(x)\hspace{5mm} \text{with} \hspace{5mm} val(x) = \int_\Lambda w(\lambda) \left( \max_{c\in\mathcal{U}(\lambda)} c^tx - opt(c) \right) d\lambda \] To simplify the presentation, we assume $\Lambda = [0,1]$ and $w(\lambda) = 1$ for all $\lambda\in\Lambda$ in the following. All results can be directly extended to piecewise linear functions $w$ with polynomially many changepoints.
\subsection{Structure of the Objective Function}
We first discuss the objective function $val(x)$ for some fixed $x\in\mathcal{X}$. Note that \[ reg(x,\lambda) := \max_{c\in\mathcal{U}(\lambda)} c^tx - opt(c) = \max_{y\in\mathcal{X}} (1+\lambda)\hat{c}^tx - \sum_{i\in[n]} \hat{c}_i (1-\lambda + 2\lambda x_i) y_i. \] Hence, $reg(x,\lambda)$ is a piecewise linear function in $\lambda$, where every possible regret solution $y$ defines an affine linear regret function $c^t(x,\lambda)(x-y)$, with \[ c_i(x,\lambda) = \begin{cases} (1+\lambda)\hat{c}_i & \text{ if } x_i = 1 \\ (1-\lambda)\hat{c}_i & \text{ if } x_i = 0 \end{cases}.\]
\begin{figure}
\caption{Illustration of structure of $val(x)$.}
\label{fig1}
\end{figure}
Figure~\ref{fig1} illustrates the objective function. In red is the maximum over all regret functions, which defines $val(x)$. On the interval $[0,\lambda_1]$, the regret of some solution $x$ is defined through $y^2$, while solution $y^3$ defines the regret on $[\lambda_1,\lambda_2]$, and $y^4$ defines the regret on $[\lambda_2,1]$. In this case, we can hence compute \begin{align*} val(x) &= \int_0^1 reg(x,\lambda) d\lambda = \int_0^1 \left( \max_{c\in\mathcal{U}(\lambda)} c^tx - \min_{y\in\mathcal{X}} c^t y \right) d\lambda \\ &= \int_0^{\lambda_1} \max_{c\in\mathcal{U}(\lambda)} c^tx - c^t y^2 d\lambda + \int_{\lambda_1}^{\lambda_2} \max_{c\in\mathcal{U}(\lambda)} c^tx - c^t y^3 d\lambda \\ & \hspace{1cm} + \int_{\lambda_2}^1 \max_{c\in\mathcal{U}(\lambda)} c^tx - c^t y^4 d\lambda \\ &= \int_0^{\lambda_1} c^t(x,\lambda) (x- y^2) d\lambda + \int_{\lambda_1}^{\lambda_2} c^t(x,\lambda)(x - y^3) d\lambda + \int_{\lambda_2}^1 c^t(x,\lambda)(x - y^4) d\lambda \\ &= \lambda_1 c^t(x,\frac{\lambda_1}{2}) (x-y^2) + (\lambda_2-\lambda_1) c^t(x,\frac{\lambda_1+\lambda_2}{2}) (x-y^3) \\ &\hspace{1cm} + (1-\lambda_2) c^t(x,\frac{\lambda_2+1}{2}) (x-y^4) \end{align*} In general, to compute $val(x)$, we need to determine all relevant regret solutions $y$, and the intersections of the resulting regret functions.
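The piecewise linear structure of $reg(x,\cdot)$ can be made concrete on a toy selection instance (choose $p=1$ out of $n=2$ items; the costs are hypothetical). For the data below, one can check by hand that $reg(x,\lambda) = \max(0,\, 5\lambda - 1)$ with a single changepoint at $\lambda = 0.2$, so $val(x) = \int_{0.2}^{1}(5\lambda-1)\,d\lambda = 1.6$:

```python
from itertools import combinations

# Brute-force evaluation of val(x) on a toy selection instance:
# enumerate all regret solutions y and numerically integrate the
# piecewise linear upper envelope reg(x, lambda) over [0, 1].
c_hat = [2.0, 3.0]
n, p = 2, 1
X = [tuple(1 if i in S else 0 for i in range(n))
     for S in combinations(range(n), p)]          # all feasible 0/1 solutions
x = (1, 0)                                        # fixed candidate solution

def cost(x, lam):
    # worst-case scenario for x: costs raised on chosen items, lowered elsewhere
    return [(1 + lam) * c if xi else (1 - lam) * c for c, xi in zip(c_hat, x)]

def reg(x, lam):
    c = cost(x, lam)
    return max(sum(ci * (xi - yi) for ci, xi, yi in zip(c, x, y)) for y in X)

K = 100000
val = sum(reg(x, (k + 0.5) / K) for k in range(K)) / K
print(val)  # matches the analytic value 1.6
```

This brute-force evaluation is exponential in $n$; the formulations and the algorithm in the following subsections avoid the full enumeration of $\mathcal{X}$.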
\subsection{Problem Formulation}
Let $\overline{\Lambda}(x)\subseteq\Lambda$ be the set of changepoints of the piecewise linear function $reg(x,\cdot)$. To formulate problem (C) as a linear integer program, we use a set $\overline{\Lambda} \supseteq \cup_{x\in\mathcal{X}} \overline{\Lambda}(x)$. As $\mathcal{X}$ is a finite set, there always exists a set $\overline{\Lambda}$ that is finite. In general, it may contain exponentially many elements.
For the ease of notation, we assume $\overline{\Lambda}=\{\lambda_1,\ldots,\lambda_K\}$ with $\lambda_i \le \lambda_{i+1}$ and $\lambda_{K+1} := 1$. As a first approach, we model $val(x)$ by using all possible regret solutions $y\in\mathcal{X}$. \begin{align} \min\ & \sum_{\lambda_j \in\overline{\Lambda}} (\lambda_{j+1} - \lambda_j) z_j \label{p1}\\ \text{s.t. } & z_j \ge \sum_{i\in[n]} (1+\overline{\lambda}_j)\hat{c}_i x_i - \sum_{i\in[n]} (1-\overline{\lambda}_j + 2\overline{\lambda}_jx_i)\hat{c}_i y^\ell_i & \forall \lambda_j \in\overline{\Lambda}, y^\ell\in\mathcal{X} \nonumber\\ & x\in\mathcal{X} \nonumber \end{align} where $\overline{\lambda}_{j} = \frac{1}{2}(\lambda_j + \lambda_{j+1})$.
If (P) has a duality gap of zero, i.e., if solving the dual of the linear relaxation also gives an optimal solution to (P), this formulation can be simplified. Examples where this is the case include the shortest path problem, or the minimum spanning tree problem. Let us assume that the linear relaxation of $\mathcal{X}$ is given by \[ \overline{\mathcal{X}} = \{x\in\mathbb{R}_+^n : Ax\ge b\} \]
For the regret problem $\min_{x\in\mathcal{X}} \{ c^t(x,\lambda)x - opt(c(x,\lambda)) \}$ we may then write the following equivalent problem (see \cite{Aissi2009}): \[ \min \{ c^t(x,\lambda)x - b^tu : x\in\mathcal{X}, u\in\overline{\mathcal{Y}} \} \text{ with } \overline{\mathcal{Y}} = \{u\ge 0 : A^tu \le c(x,\lambda)\} \] Using this reformulation, we find the following program for (C) \begin{align} \min\ & \sum_{\lambda_j \in\overline{\Lambda}} (\lambda_{j+1} - \lambda_j) \Big( c^t(x,\overline{\lambda}_j)x - b^t u^j \Big) \label{p2}\\ \text{s.t. } & A^tu^j \le c(x,\overline{\lambda}_j) & \forall \lambda_j\in\overline{\Lambda} \nonumber\\ & x\in\mathcal{X} \nonumber\\ & u^j \in \overline{\mathcal{Y}} & \forall \lambda_j\in\overline{\Lambda} \nonumber \end{align} For binary variables $x$, the product $c^t(x,\overline{\lambda}_j)x$ can then be linearized. If a set $\overline{\Lambda}$ can be found that is of polynomial size, this is a compact formulation. In general, constraints and variables can be added in an iterative algorithm that generates new candidate values for $\lambda$ in Problem~\eqref{p2}. If the zero duality gap assumption does not hold, we can use Formulation~\eqref{p1}, where both values for $\lambda$ and regret solutions $y$ need to be generated. This approach is explained in the following section.
\subsection{General Algorithm}\label{sec:algo}
In the following we describe how to compute the set $\overline{\Lambda}(x)$ of changepoints of $reg(x,\cdot)$. This is then used to solve Formulation~\eqref{p2} in the case of a zero duality gap for (P) as described in Algorithm~\ref{alg:c}.
\begin{algorithm}[htb] \caption{Exact algorithm for (C)} \label{alg:c} \begin{algorithmic}[1] \Require{An instance of (C).} \State $\overline{\Lambda} \leftarrow \{\frac{1}{2}\}$, $k\leftarrow 0$ \label{a1s1} \State Solve Formulation~\eqref{p2} using $\overline{\Lambda}$. Let the solution be $x^k$, and the objective value $LB^k$.\label{algret} \State Compute $val(x^k)$. Let the resulting changepoints be $\overline{\Lambda}(x^k)$, and the objective value $UB^k$. \label{a1s3} \If{$UB^k = LB^k$} \State \textbf{END}: $x^k$ is an optimal solution. \label{a1s5} \Else \State $\overline{\Lambda} \leftarrow \overline{\Lambda} \cup \overline{\Lambda}(x^k)$ \label{a1s7} \State $k \leftarrow k+1$ \State Go to \ref{algret} \EndIf \end{algorithmic} \end{algorithm} The algorithm begins with a starting set $\overline{\Lambda} = \{\frac{1}{2}\}$ as a guess for relevant changepoints (any other set could be used here). Using the current set $\overline{\Lambda}$, it then solves Formulation~\eqref{p2}. As not all constraints of the problem are present, this is a problem relaxation. Accordingly, the objective value that is found is only a lower bound $LB^k$ on the true optimal objective value of the problem. To evaluate the resulting candidate solution $x^k$, we compute $val(x^k)$ in Step~\ref{a1s3}. The sub-algorithm for this purpose is explained below. As $x^k$ is a feasible solution to (C), $val(x^k)$ gives an upper bound $UB^k$ on the optimal objective value. Hence, if lower and upper bound coincide, an optimal solution has been found. Otherwise, we extend the set $\overline{\Lambda}$ in Step~\ref{a1s7} and repeat the procedure.
If (P) has a duality gap larger than zero, the same algorithm can be used with the slight adjustment that Problem~\eqref{p1} is solved in Step~\ref{algret}. To this end, also regret solutions $\mathcal{Y}(x^k)$ generated in the computation of $val(x^k)$ need to be collected.
We describe the procedure to compute $val(x)$ in Algorithm~\ref{alg:val}. \begin{algorithm}[htb] \caption{Algorithm to compute $val(x)$} \label{alg:val} \begin{algorithmic}[1] \Require{An instance of (C), a fixed solution $x\in\mathcal{X}$.} \State $\overline{\Lambda}(x) \leftarrow \{0,1\}$, $\overline{\Lambda}^{\text{new}}(x) \leftarrow \overline{\Lambda}(x)$ \State $\mathcal{Y}(x) \leftarrow \emptyset$ \ForAll{$\lambda \in \overline{\Lambda}^{\text{new}}(x)$} \label{j1} \State Solve (P) with costs $c(x,\lambda)$. Let $y$ be the resulting solution. \State $\mathcal{Y}(x) \leftarrow \mathcal{Y}(x) \cup \{y\}$ \State $\overline{\Lambda}^{\text{new}}(x) \leftarrow \overline{\Lambda}^{\text{new}}\setminus\{\lambda\}$ \EndFor \State change $\leftarrow$ false \ForAll{$y^i,y^j\in\mathcal{Y}(x)$} \State Calculate $\lambda$ as the point where the affine linear regret functions defined by $y^i$ and $y^j$ intersect. \If{$\lambda\not\in\overline{\Lambda}(x)$} \State $\overline{\Lambda}(x) \leftarrow \overline{\Lambda}(x) \cup \{\lambda\}$ \State $\overline{\Lambda}^{\text{new}}(x) \leftarrow \overline{\Lambda}^{\text{new}}(x) \cup \{\lambda\}$ \State change $\leftarrow$ true \EndIf \EndFor \If{change $=$ true} \State Go to \ref{j1} \EndIf \State Reduce $\overline{\Lambda}(x)$ and $\mathcal{Y}(x)$ such that only changepoints and regret functions remain that define the maximum over all affine linear functions. \State \Return $\overline{\Lambda}(x)$, $\mathcal{Y}(x)$ \end{algorithmic} \end{algorithm} We begin with only two regret functions, for the extreme points $\overline{\Lambda}(x) = \{0,1\}$. The resulting two regret functions will intersect at one new candidate changepoint $\lambda$. We find the regret solution $y$ maximizing the regret at this point by solving a problem of type (P). We then repeat this process by iteratively calculating the regret at all current intersection points. 
Note that if there exists any regret function that is larger than all current regret functions at some point $\lambda$, then it is also larger than all current functions at the intersection point between two of them. Hence, Algorithm~\ref{alg:val} finds all relevant changepoints $\lambda$. As it may also produce unnecessary candidates $\lambda$, we reduce the solution sets at the end to contain only those changepoints and regret functions that define the maximum.
\section{Min-Max Regret Compromise Solutions for Specific Problems}\label{sec:probs}
\subsection{Minimum Selection}
The minimum selection problem is given by \[ \min\ \left\{ c^tx : \sum_{i\in[n]} x_i = p,\ x\in\{0,1\}^n\right\} \] and has been frequently studied in the literature on min-max regret optimization (see \cite{kasperski2016robust}). The min-max regret problem can be solved in $\mathcal{O}(n \cdot\min\{p,n-p\})$ time, see \cite{conde2004improved}.
We show that also problem (C) can be solved in polynomial time. \begin{theorem} Let $\mathcal{U} = \prod_{i\in[n]} [(1-\lambda)\hat{c}_i,(1+\lambda)\hat{c}_i]$ for a fixed $\lambda$. Then $\hat{x}$ is an optimal solution to the min-max regret selection problem. \end{theorem} \begin{proof} We assume that items are sorted with respect to $\hat{c}$. Let $\tilde{x}$ be an optimal solution with $\tilde{x}_i = 0$ for an item $i\le p$. We assume $i$ is the smallest such item. Then there exists some $j>p$ with $\tilde{x}_j = 1$. Consider the solution $x'$ with $x'_k = \tilde{x}_k$ for $k\neq i,j$ and $x'_i = 1$, $x'_j = 0$.
Let $\tilde{y}$ be a regret solution for $\tilde{x}$. We can assume that $\tilde{y}_i = 1$, as $(1-\lambda)\hat{c}_i$ must be one of the $p$ cheapest items. We can also assume $\tilde{y}_j = 0$, as $(1+\lambda)\hat{c}_j$ is not among the $p$ cheapest items. Let $y'$ be the regret solution for $x'$.
The solutions $\tilde{x}$ and $x'$ differ only on the two items $i$ and $j$. Hence, the following cases are possible: \begin{itemize} \item Case $y'_i = 1$ and $y'_j = 0$, i.e., $\tilde{y} = y'$. We have \begin{align*} Reg(\tilde{x}) - Reg(x') =& (1+\lambda)\hat{c}_j - (1+\lambda)\hat{c}_i - (1-\lambda)\hat{c}_i + (1+\lambda)\hat{c}_i \\ =& (1+\lambda)\hat{c}_j - (1-\lambda)\hat{c}_i \ge 0 \end{align*} \item Case $y_i'=1$ and $y'_j = 1$, $y'_k = 0$ for some $k > i$ with $\tilde{y}_k = 1$. Note that this means $(1+\lambda)\hat{c}_j \ge (1-\lambda+2\tilde{x}_k\lambda)\hat{c}_k$, as otherwise, the regret solution $\tilde{y}$ could be improved. Hence, \begin{align*} Reg(\tilde{x}) - Reg(x') =& (1+\lambda)\hat{c}_j - (1+\lambda)\hat{c}_i - (1-\lambda)\hat{c}_i + (1+\lambda)\hat{c}_i \\ & - (1-\lambda+2\lambda\tilde{x}_k)\hat{c}_k + (1-\lambda)\hat{c}_j \\ =& (1-\lambda)(\hat{c}_j - \hat{c}_i) + (1+\lambda)\hat{c}_j - (1-\lambda+2\lambda\tilde{x}_k)\hat{c}_k \ge 0 \end{align*} \item Case $y'_i = 0$ and $y'_\ell =1$ for some $\ell>i$ with $\tilde{y}_\ell = 0$. As the costs of item $i$ have increased by using solution $x'$ instead of $\tilde{x}$, the resulting two cases are analogous to the two cases above.
\end{itemize}
Overall, solution $x'$ has regret less than or equal to the regret of $\tilde{x}$. Repeating this exchange argument shows that the nominal solution $\hat{x}$, which selects the $p$ items of smallest nominal cost, is optimal. \end{proof}
Note that this result does not hold for general interval uncertainty sets, where the problem is NP-hard. It also does not necessarily hold for other combinatorial optimization problems; e.g., a counter-example for the assignment problem can be found in \cite{variable}.
Finally, it remains to show that $val(x)$ can also be computed in polynomial time.
\begin{theorem}
For the compromise min-max regret problem of minimum selection it holds that $|\overline{\Lambda}(x)| \in\mathcal{O}(\min\{p,n-p\})$ for any fixed $x\in\mathcal{X}$, and there is a set $\overline{\Lambda}$ with $|\overline{\Lambda}|\in\mathcal{O}(n^2)$. \end{theorem} \begin{proof} If $x$ is fixed, then there are $p$ items $i$ with costs $(1+\lambda)\hat{c}_i$, and $(n-p)$ items $i$ with costs $(1-\lambda)\hat{c}_i$. The regret solution is determined by the $p$ smallest items. Accordingly, when $\lambda$ increases, the regret solution only changes if an item $i$ with $x_i=1$, that used to be among the $p$ smallest items, moves to the $(n-p)$ largest items, and another item $j$ with $x_j=0$ becomes part of the $p$ smallest items. There are at most $\min\{p,n-p\}$ values for $\lambda$ where this is the case.
We define $\overline{\Lambda}$ to consist of all $\lambda\in[0,1]$ such that \[ (1-\lambda) \hat{c}_i = (1+\lambda) \hat{c}_j \]
for some $i,j\in[n]$, as only at such values of $\lambda$ can an optimal regret solution change. Hence, $|\overline{\Lambda}|\in\mathcal{O}(n^2)$.
\end{proof}
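The candidate set $\overline{\Lambda}$ from the proof is easy to generate: solving $(1-\lambda)\hat{c}_i = (1+\lambda)\hat{c}_j$ gives $\lambda = (\hat{c}_i - \hat{c}_j)/(\hat{c}_i + \hat{c}_j)$. A short sketch with illustrative costs:

```python
# Candidate changepoints for the selection problem (illustrative costs):
# (1 - lam) * c_i = (1 + lam) * c_j  =>  lam = (c_i - c_j) / (c_i + c_j),
# giving at most O(n^2) candidates in [0, 1].
c_hat = [2.0, 3.0, 5.0]

candidates = set()
for ci in c_hat:
    for cj in c_hat:
        lam = (ci - cj) / (ci + cj)
        if 0 <= lam <= 1:
            candidates.add(lam)
print(sorted(candidates))
```

For these three costs the candidate set is $\{0,\ 1/5,\ 1/4,\ 3/7\}$; only pairs with $\hat{c}_i \ge \hat{c}_j$ yield values in $[0,1]$.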
As the size of $\overline{\Lambda}(x)$ is polynomially bounded, $val(x)$ can be computed in polynomial time, and we get the following conclusion. \begin{corollary} The compromise min-max regret problem of minimum selection can be solved in polynomial time. \end{corollary}
\subsection{Minimum Spanning Tree}\label{sec:mst}
The min-max regret spanning tree problem in a graph $G=(V,E)$ has previously been considered, e.g., in \cite{yaman2001robust,kasperski2011approximability}. The regret of a fixed solution can be computed in polynomial time, but it is NP-hard to find an optimal solution. We now consider the compromise min-max regret counterpart (C).
Let any spanning tree $x$ be fixed. To compute $val(x)$, we begin with $\lambda = 0$ and calculate a regret spanning tree by solving a nominal problem with costs $\hat{c}$. Recall that this can be done using Kruskal's algorithm that considers edges successively according to an increasing sorting
\[ \hat{c}_{e_1} \le \ldots \le \hat{c}_{e_{|E|}} \]
with respect to costs. If $\lambda$ increases, edges that are included in $x$ have costs $(1+\lambda)\hat{c}_e$ (i.e., their costs increase) and edges not in $x$ have costs $(1-\lambda)\hat{c}_e$ (i.e., their costs decrease). Kruskal's algorithm will only find a different solution if the sorting of edges changes. As there are $|V|-1$ edges with increasing costs, and $|E|-|V|+1$ edges with decreasing costs, the sorting can change at most $(|V|-1)(|E|-|V|+1) = \mathcal{O}(|E|^2)$ times (note that two edges with increasing costs or two edges with decreasing costs cannot change relative positions). We have therefore shown:
\begin{theorem} A solution to the compromise min-max regret problem of minimum spanning tree can be evaluated in polynomial time. \end{theorem}
If the solution $x$ is not known, we can still construct a set $\overline{\Lambda}$ with size $\mathcal{O}(|E|^2)$ that contains all possible changepoints along the same principle. We can conclude: \begin{theorem} There exists a compact mixed-integer programming formulation for the compromise min-max regret problem of minimum spanning tree. \end{theorem}
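The changepoint candidates for the spanning tree case follow the same crossing argument: a tree edge $e$ with cost $(1+\lambda)\hat{c}_e$ and a non-tree edge $f$ with cost $(1-\lambda)\hat{c}_f$ swap positions in the sorting at $\lambda = (\hat{c}_f - \hat{c}_e)/(\hat{c}_e + \hat{c}_f)$. A sketch with illustrative costs (here $|V|-1 = 2$ tree edges and two non-tree edges):

```python
# Candidate values of lambda where Kruskal's sorting can change
# (illustrative instance): a tree edge e has cost (1+lam)*c_e (increasing),
# a non-tree edge f has cost (1-lam)*c_f (decreasing); they cross at
# lam = (c_f - c_e) / (c_e + c_f), and only such pairs can swap positions.
tree_costs = [1.0, 2.0]        # costs of edges in the fixed tree x
nontree_costs = [3.0, 4.0]     # costs of the remaining edges

crossings = sorted(
    (cf - ce) / (ce + cf)
    for ce in tree_costs for cf in nontree_costs
    if 0 <= (cf - ce) / (ce + cf) <= 1
)
print(crossings)  # at most (|V|-1)*(|E|-|V|+1) = 4 candidates
```

Between two consecutive crossings the sorted order of edge costs is constant, so the regret spanning tree computed by Kruskal's algorithm cannot change there.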
However, we show in the following that solving the compromise problem is NP-hard. To this end, we use the following result:
\begin{theorem}{\cite{averbakh2004interval}} The min-max regret spanning tree problem is NP-hard, even if all intervals of uncertainty are equal to $[0,1]$. \end{theorem}
\noindent Note that if all intervals are of the form $[a,b]$, then \begin{align*} reg(x) &= \sum_{e\in E} b x_e - \min_{y\in\mathcal{X}} \left( \sum_{e\in E\atop x_e = 1} by_e + \sum_{e\in E\atop x_e = 0} ay_e\right) \\
&= (|V|-1)b - \min_{y\in\mathcal{X}} \left( (|V|-1)b - \sum_{e\in E\atop x_e = 0} (b-a) y_e\right) \\ &= (b-a) \max_{y\in\mathcal{X}} \sum_{e\in E\atop x_e = 0} y_e \end{align*} Therefore, the min-max regret problem with costs $[0,1]$ is equivalent to the min-max regret problem with any other costs $[a,b]$, in the sense that objective values only differ by a constant factor and both problems have the same set of optimal solutions. In particular, a solution $y$ that maximizes the regret of $x$ with respect to cost intervals $[a,b]$ is also a maximizer of the regret for any other cost intervals $[a',b']$. We can conclude:
\begin{theorem} The compromise problem of min-max regret minimum spanning tree is NP-hard, even if $w(\lambda) = 1$ for all $\lambda\in[0,1]$. \end{theorem} \begin{proof} Let an instance of the min-max regret spanning tree problem with cost intervals $[0,1]$ be given. Consider an instance of the compromise problem with $\hat{c}_e = 1$ for all $e\in E$, and $w(\lambda) = 1$. Then \[ val(x) = \int_0^1 reg(x,\lambda) d\lambda = \int_0^1 \left( 2\lambda \max_{y\in\mathcal{X}} \sum_{e\in E\atop x_e = 0} y_e \right) d\lambda = \max_{y\in\mathcal{X}} \sum_{e\in E\atop x_e = 0} y_e \] Hence, any minimizer of $val(x)$ is also an optimal solution to the min-max regret spanning tree problem. \end{proof}
\subsection{Shortest Path}
For the shortest path problem, we consider $\mathcal{X}$ as the set of all simple $s-t$ paths in a graph $G=(V,E)$ (for the min-max regret problem, see, e.g., \cite{averbakh2004interval}). As for the minimum spanning tree problem, the regret of a fixed solution can be computed in polynomial time, but it is NP-hard to find an optimal solution.
\noindent For the compromise problem (C), we have: \begin{align*} reg(x,\lambda) &= \sum_{e\in E\atop x_e = 1} (1+\lambda)\hat{c}_e - \left( \min_{y\in\mathcal{X}}\ \sum_{e\in E\atop x_e = 1} (1+\lambda)\hat{c}_e y_e + \sum_{e\in E\atop x_e = 0} (1-\lambda)\hat{c}_e y_e \right) \\ &= \sum_{e\in E\atop x_e = 1} (1+\lambda)\hat{c}_e - \left( \min_{y\in\mathcal{X}}\ \lambda \sum_{e\in E\atop x_e = 1} 2\hat{c}_ey_e + (1-\lambda) \sum_{e\in E} \hat{c}_e y_e \right) \end{align*} We can interpret the minimization problem as a weighted sum of the bicriteria problem \[ \min \left\{ \begin{pmatrix} \sum_{e\in E\atop x_e = 1} 2\hat{c}_ey_e \\ \sum_{e\in E} \hat{c}_e y_e \end{pmatrix} : y\in \mathcal{X} \right\} \] The number of solutions we need to generate to compute $val(x)$ can therefore be bounded by the number of solutions we can find through such weighted sum computations (the set of extreme efficient solutions $\mathcal{E}$).
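The weighted-sum view can be illustrated on a hypothetical two-path instance: the fixed path $x = s\text{-}a\text{-}t$ (two edges of nominal cost $1$) competes with a direct edge $s\text{-}t$ of nominal cost $3$. Each path $y$ reduces to its bicriteria value $(A,B)$, and the inner problem becomes $\min_y\ \lambda A + (1-\lambda)B$:

```python
# Weighted-sum interpretation for the shortest path regret (toy instance):
# for fixed x, each path y has bicriteria value (A, B) with
#   A = sum over edges of y that lie in x of 2*c_e,
#   B = sum over all edges of y of c_e,
# and the inner minimization at size lambda is min_y lambda*A + (1-lambda)*B.
paths = {
    "s-a-t": {"A": 2 * (1.0 + 1.0), "B": 1.0 + 1.0},  # lies entirely in x
    "s-t":   {"A": 0.0,             "B": 3.0},        # disjoint from x
}

def inner_min(lam):
    return min(paths, key=lambda p: lam * paths[p]["A"] + (1 - lam) * paths[p]["B"])

minimizers = {inner_min(lam / 100) for lam in range(101)}
print(minimizers)  # both paths appear as weighted-sum minimizers
```

Here the minimizer switches from $s$-$a$-$t$ to $s$-$t$ at $\lambda = 0.2$, so both paths are extreme efficient, and $\overline{\Lambda}(x)$ contains this single changepoint.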
\begin{lemma}
For the compromise shortest path problem, it holds that $|\overline{\Lambda}(x)| \le |\mathcal{E}|$. \end{lemma}
Depending on the graph $G$, the following bounds on the number of extreme efficient solutions $\mathcal{E}$ (see, e.g., \cite{ehrgott2006multicriteria}) can be taken from the literature \cite{carstensen1983complexity,variable}: \begin{itemize}
\item for series-parallel graphs, $\mathcal{E} \in \mathcal{O}(|E|)$ \item for layered graphs with width $w$ and length $\ell$, $\mathcal{E} \in \mathcal{O}(2^{\log w \log (\ell+1)})$
\item for acyclic graphs, $\mathcal{E} \in \mathcal{O}(|V|^{\log|V|})$
\item for general graphs, instances with $\mathcal{E}\in 2^{\Omega(\log^2|V|)}$ exist (a lower bound) \end{itemize}
We can conclude: \begin{corollary} A solution to the compromise min-max regret problem of shortest path can be evaluated in polynomial time on series-parallel graphs and layered graphs with fixed width or length. \end{corollary}
Note that the number of extreme efficient solutions is only an upper bound on $\overline{\Lambda}(x)$. Unfortunately, we cannot hope to find a better performance than this bound, as the following result demonstrates.
\begin{theorem}
For any bicriteria shortest path instance with costs $(a,b)$, $a_e>0$ for all $e\in E$, there is an instance of (C) and a solution $x$ where $|\overline{\Lambda}(x)| = |\mathcal{E}|$. \end{theorem} \begin{proof} Let an instance of the bicriteria shortest path problem be given, i.e., a directed graph $G=(V,E)$ with arc costs $c_e = (a_e,b_e)$ for all $e\in E$. As $a_e > 0$ for all $e\in E$, we can assume w.l.o.g. that $2a_e \ge b_e$ for all $e\in E$. We create the following instance of (C).
Every arc $e=(i,j)\in E$ is substituted by three arcs $e'=(i,i'(e))$, $e''=(i'(e),j'(e))$ and $e'''=(j'(e),j)$. We set $\hat{c}_{e'} = a_e - \frac{b_e}{2}$, $\hat{c}_{e''} = \frac{b_e}{2}$ and $\hat{c}_{e'''} = 0$ (see Figure~\ref{fignp} for an example of such a transformation). Let $E'$, $E''$ and $E'''$ contain all edges of the respective type. Additionally, we choose an arbitrary order of edges $(e_1,\ldots,e_m)$, and create arcs $E_M = \{ (s,i'(e_1)), (j'(e_1),i'(e_2)), \ldots, (j'(e_m),t)\}$. We set costs of these arcs to be a sufficiently large constant $M$. Finally, let $x$ be the path that follows all edges in $E_M$ and $E''$. Note that edges in $E'''$ can be contracted, but are shown for better readability.
\begin{figure}
\caption{Example for transformation.}
\label{fignp}
\end{figure}
Note that $M$ is sufficiently large so that no regret path $y$ will use any edge in $E_M$. Hence, if $y$ uses an edge in $E'$, it will also have to use the following edges in $E''$ and $E'''$, i.e., $y$ corresponds to a path in the original graph $G=(V,E)$. The regret of $x$ is \begin{align*} reg(x,\lambda) &= \sum_{e\in E_M} (1+\lambda)M + \sum_{e\in E''} (1+\lambda)\hat{c}_e - \min_{y\in\mathcal{X}} \left( \sum_{e\in E\atop x_e = 1} (1+\lambda)\hat{c}_e y_e + \sum_{e\in E\atop x_e = 0} (1-\lambda)\hat{c}_e y_e \right) \\ &= (1+\lambda)\cdot const. - \min_{y\in\mathcal{X}} \sum_{e\in E} \left( (1+\lambda)\frac{b_e}{2} + (1-\lambda)\left(a_e - \frac{b_e}{2}\right)\right) y_e\\ &= (1+\lambda)\cdot const. - \min_{y\in\mathcal{X}} \sum_{e\in E} \left( a_e +\lambda (b_e - a_e) \right) y_e \end{align*} Therefore, as $\lambda$ ranges from $0$ to $1$, all extreme efficient paths in the original graph $G$ are used to calculate $reg(x,\lambda)$. \end{proof}
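The cost identity at the heart of the gadget can be verified numerically: a regret path traversing the gadget of arc $e$ pays $(1-\lambda)\hat{c}_{e'} + (1+\lambda)\hat{c}_{e''} + (1-\lambda)\cdot 0$, which collapses to $a_e + \lambda(b_e - a_e)$. A minimal sketch, with toy costs of our own choosing:

```python
import random

# Toy bicriteria costs (a_e, b_e) with 0 < b_e <= 2*a_e (w.l.o.g., as in the proof).
random.seed(7)
bicriteria_arcs = [(a, random.uniform(0.0, 2 * a))
                   for a in (random.uniform(1, 10) for _ in range(20))]

def gadget_costs(a, b):
    """Nominal costs of the three replacement arcs e', e'', e'''."""
    return (a - b / 2, b / 2, 0.0)

def gadget_regret_cost(a, b, lam):
    """What a regret path y pays across the gadget of arc e:
    e'' lies on x (factor 1+lam), e' and e''' do not (factor 1-lam)."""
    c1, c2, c3 = gadget_costs(a, b)
    return (1 - lam) * c1 + (1 + lam) * c2 + (1 - lam) * c3

for a, b in bicriteria_arcs:
    for lam in (0.0, 0.25, 0.5, 1.0):
        assert abs(gadget_regret_cost(a, b, lam) - (a + lam * (b - a))) < 1e-9
```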
We now consider the complexity of finding a solution $x\in\mathcal{X}$ that minimizes $val(x)$. Note that the reduction in \cite{averbakh2004interval} uses interval costs of the form $[0,1]$ and $[1,1]$, which does not fit into our cost framework $[(1-\lambda)\hat{c}_e,(1+\lambda)\hat{c}_e]$. Instead, we make use of the following result:
\begin{theorem}{\cite{variable}} The min-max regret shortest path problem is NP-hard for layered graphs with interval costs $[0,1]$. \end{theorem} Note that for layered graphs, all paths have the same cardinality. Hence, $reg(x) = (b-a) \max_{y\in\mathcal{X}} \sum_{e\in E \atop x_e = 0} y_e$ (see Section~\ref{sec:mst}), and the problem with costs $[0,1]$ is equivalent to the problem with costs $[a,b]$ for any $a<b$. Analogously to the last section, we can therefore conclude:
\begin{theorem} Finding an optimal solution to the compromise shortest path problem is NP-hard on layered graphs, even if $w(\lambda)=1$ for all $\lambda\in[0,1]$. \end{theorem}
\section{Experiments}\label{sec:exp}
\subsection{Setup}
In this section we present two experiments on compromise solutions to min-max regret problems with variable-sized uncertainty. The first experiment is concerned with the computational effort to find such a solution using the iterative algorithms presented in Section~\ref{sec:algo}. In the second experiment we compare these solutions to the alternatives of using classic regret solutions for various uncertainty set sizes.
Both experiments are conducted on shortest path instances of two types.
The first type consists of complete layered graphs. We parameterize such instances by the number of layers, the width, and the cost type. Each graph consists of a source node, a sink node, and node layers of equal width in between. For $N+1$ layers of width $k$, an instance has a total of $(N+1)k+2$ nodes and $Nk^2+2k$ edges. We use $N=5$ to $N=55$ in steps of size 5, and $k=5,10,15,20$. Graph sizes thus vary from 32 nodes and 135 edges to 1,122 nodes and 22,040 edges.
Edges connect all nodes of one layer to the nodes of the next layer. Source and sink are completely connected to the first and last layer, respectively. We considered two types of cost structures to generate $\hat{c}$. For type A, all costs are chosen uniformly from the interval $[1,100]$. For type B costs, we generate nominal costs in $[1,30]\cup[70,100]$, i.e., they are either low or high.
In total, there are $11\cdot 4 \cdot 2 = 88$ parameter combinations. For each combination, we generate 20 instances, i.e., a total of 1,760 instances.
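The layered-graph construction can be sketched as an instance generator. Node numbering and the exact low/high split for type B costs are our assumptions; the paper does not specify them.

```python
import random

def layered_graph(N, k, cost_type="A", seed=0):
    """Complete layered graph: source 0, sink 1, N+1 layers of width k.
    Type A: costs uniform in [1,100]; type B: low [1,30] or high [70,100]
    (a 50/50 split between low and high is assumed here)."""
    rng = random.Random(seed)
    def cost():
        if cost_type == "A":
            return rng.randint(1, 100)
        return rng.randint(1, 30) if rng.random() < 0.5 else rng.randint(70, 100)
    def node(layer, i):                       # 0 and 1 are reserved for s and t
        return 2 + layer * k + i
    edges = {}
    for i in range(k):                        # source -> first layer, last -> sink
        edges[(0, node(0, i))] = cost()
        edges[(node(N, i), 1)] = cost()
    for layer in range(N):                    # complete bipartite between layers
        for i in range(k):
            for j in range(k):
                edges[(node(layer, i), node(layer + 1, j))] = cost()
    return (N + 1) * k + 2, edges

n, E = layered_graph(5, 5)
assert n == 32 and len(E) == 5 * 5**2 + 2 * 5   # (N+1)k+2 nodes, Nk^2+2k edges
```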
The second type consists of graphs with two paths that are linked by diagonal edges. For some length parameter $L$, we generate two separate paths from a node $s$ to a node $t$, each with $L$ nodes in between. We then generate diagonal edges in the following way. On one of the two paths, we choose the $i$th node uniformly at random. We then connect this node with the $j$th node on the other path, where $j>i$. The $j$th node is chosen with probability $\frac{3}{4}(\frac{1}{4})^{j-i-1}$, i.e., long diagonal edges are unlikely (truncated so that $j$ is at most $L$).
Edges along the two base paths have length chosen uniformly from the interval $[1,100]$. For diagonal edges, we determine their length by sampling from the same interval $(j-i)$ times, and adding these values, i.e., all paths have the same expected length.
We generate instances with length $L$ from 50 to 850 in steps of 100, and set the number of diagonal edges to be $d\cdot L$ for $d\in\{0.05,0.10,0.15\}$. The smallest instances therefore contain 102 nodes and 105 edges; the largest instances contain 1,702 nodes and 1,830 edges. For each parameter combination, we generate 20 instances, i.e., a total of $9\cdot 3 \cdot 20 = 540$ instances.
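The two-path construction can be sketched as follows. Node naming and the exact truncation of the geometric tail at $L$ are our assumptions.

```python
import random

def two_path_instance(L, d, seed=0):
    """Two node-disjoint s-t paths with L inner nodes each, linked by
    int(d*L) diagonal edges; the far endpoint j lies j-i positions ahead
    with probability (3/4)(1/4)^(j-i-1), capped so that j <= L."""
    rng = random.Random(seed)
    edges = []  # (tail, head, cost) triples
    for p in (0, 1):
        chain = ["s"] + [("p", p, i) for i in range(1, L + 1)] + ["t"]
        for u, v in zip(chain, chain[1:]):
            edges.append((u, v, rng.randint(1, 100)))
    for _ in range(int(d * L)):
        p = rng.randint(0, 1)                 # pick one of the two paths
        i = rng.randint(1, L - 1)             # i-th node, uniformly
        gap = 1
        while rng.random() >= 0.75 and i + gap < L:
            gap += 1                          # geometric tail: continue w.p. 1/4
        j = i + gap
        # diagonal cost: (j-i) samples summed, so expected path lengths match
        cost = sum(rng.randint(1, 100) for _ in range(gap))
        edges.append((("p", p, i), ("p", 1 - p, j), cost))
    return 2 * L + 2, edges

n, E = two_path_instance(50, 0.05)
assert n == 102 and len(E) == 2 * 51 + int(0.05 * 50)
```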
The classic min-max regret shortest path problem on instances of both types is known to be NP-hard, see \cite{andre-diss}. We investigate both types, as we expect the nominal solution to show a different performance: For layered graphs, the nominal solution is also optimal for $\mathcal{U}(1)$, as for every path there also exists a disjoint path. Therefore, the regret of a path $P$ with respect to $\mathcal{U}(1)$ is $\sum_{e\in P} 2\hat{c}_e$. For the second type of instances, a good solution with respect to min-max regret can be expected to intersect with as many other paths as possible. We can therefore expect the nominal solution to differ from the optimal solution of $\mathcal{U}(1)$.
All experiments were conducted using one core of a computer with an Intel Xeon E5-2670 processor, running at 2.60 GHz with 20MB cache, with Ubuntu 12.04 and Cplex v.12.6.
\subsection{Experiment 1: Computational Effort}
\subsubsection{Layered Graphs}
We solve the compromise approach to variable-sized uncertainty for each instance using the algorithms described in Section~\ref{sec:algo} and record the computation times. Average computation times in seconds are presented in Table~\ref{tab1}. In each column we average over all instances for which this parameter is fixed; e.g., in column ``width 5'' we show the results over all 440 instances that have a width of 5, broken down into classes of different length. The results indicate that computation times are still reasonable given the complexity of the problem, and mostly depend on the size of the instance (width parameter) and the density of the graph, while the cost structure has no significant impact on computation times.
\begin{table}[htbp] \begin{center}
\begin{tabular}{rr|rrrr|rr}
& & \multicolumn{4}{|c|}{Width} & \multicolumn{2}{|c}{Costs} \\ & & 5 & 10 & 15 & 20 & A & B \\ \hline \parbox[t]{2mm}{\multirow{11}{*}{\rotatebox[origin=c]{90}{Layers}}} & 5 & 0.05 & 0.13 & 0.21 & 0.40 & 0.20 & 0.20 \\ & 10 & 0.22 & 0.49 & 0.89 & 1.43 & 0.75 & 0.77 \\ & 15 & 0.47 & 0.99 & 1.78 & 2.98 & 1.47 & 1.64 \\ & 20 & 1.17 & 2.31 & 3.61 & 6.78 & 3.45 & 3.49 \\ & 25 & 1.99 & 4.17 & 7.53 & 11.47 & 5.97 & 6.61 \\ & 30 & 3.77 & 7.86 & 13.13 & 21.14 & 11.50 & 11.45 \\ & 35 & 6.05 & 11.08 & 19.87 & 35.51 & 18.28 & 17.97 \\ & 40 & 9.46 & 21.85 & 35.37 & 49.58 & 28.64 & 29.48 \\ & 45 & 13.77 & 29.64 & 56.30 & 85.48 & 47.23 & 45.37 \\ & 50 & 21.58 & 46.37 & 67.88 & 141.14 & 66.33 & 72.16 \\ & 55 & 26.61 & 69.56 & 125.95 & 193.30 & 105.94 & 101.76 \end{tabular} \caption{Average computation times to solve (C) in seconds.}\label{tab1} \end{center} \end{table}
We present more details in Tables~\ref{tab2} and \ref{tab3}, where the number of iterations (i.e., how often was the relaxation of (C) solved in Line~\ref{algret} of Algorithm~\ref{alg:c}) and the size of $\overline{\Lambda}$ at the end of the algorithm are presented, respectively.
We find that the average number of iterations is stable and small, with around two iterations on average (the maximum number of iterations is three). This value seems largely independent of the problem size. For the number of generated changepoints $|\overline{\Lambda}|$, however, this is different. It increases with the number of layers, but decreases with the width of the graph. Recall that the regret of a solution $x$ is roughly determined by the number of edges a regret path $y$ has in common with $x$. With increasing width, regret paths are less likely to use the same edges, which explains why the size of $\overline{\Lambda}(x)$ decreases. As before, we find that the cost structure does not have a significant impact on the performance of the solution algorithm.
\begin{table}[htbp] \begin{center}
\begin{tabular}{rr|rrrr|rr}
& & \multicolumn{4}{|c|}{Width} & \multicolumn{2}{|c}{Costs} \\ & & 5 & 10 & 15 & 20 & A & B \\
\hline \parbox[t]{2mm}{\multirow{11}{*}{\rotatebox[origin=c]{90}{Layers}}} & 5 & 1.98 & 1.98 & 1.73 & 1.80 & 1.89 & 1.85 \\ & 10 & 2.02 & 1.98 & 1.98 & 2.00 & 1.98 & 2.01 \\ & 15 & 2.08 & 2.02 & 1.98 & 1.98 & 2.01 & 2.01 \\ & 20 & 2.08 & 2.08 & 2.00 & 2.08 & 2.06 & 2.05 \\ & 25 & 2.00 & 2.02 & 2.02 & 2.05 & 2.04 & 2.01 \\ & 30 & 2.15 & 2.10 & 2.08 & 2.05 & 2.11 & 2.08 \\ & 35 & 2.10 & 2.02 & 2.10 & 2.05 & 2.06 & 2.08 \\ & 40 & 2.10 & 2.15 & 2.10 & 2.02 & 2.11 & 2.08 \\ & 45 & 2.12 & 2.15 & 2.12 & 2.05 & 2.10 & 2.12 \\ & 50 & 2.17 & 2.08 & 2.05 & 2.15 & 2.05 & 2.17 \\ & 55 & 2.10 & 2.10 & 2.02 & 2.12 & 2.10 & 2.08 \end{tabular} \caption{Average numbers of iterations.}\label{tab2} \end{center} \end{table}
\begin{table}[htbp] \begin{center}
\begin{tabular}{rr|rrrr|rr}
& & \multicolumn{4}{|c|}{Width} & \multicolumn{2}{|c}{Costs} \\ & & 5 & 10 & 15 & 20 & A & B \\
\hline \parbox[t]{2mm}{\multirow{11}{*}{\rotatebox[origin=c]{90}{Layers}}} & 5 & 3.05 & 2.95 & 2.17 & 2.27 & 2.59 & 2.64 \\ & 10 & 5.05 & 3.75 & 3.33 & 3.10 & 3.71 & 3.90 \\ & 15 & 6.72 & 4.95 & 4.33 & 4.10 & 4.92 & 5.12 \\ & 20 & 9.00 & 6.53 & 5.03 & 5.20 & 6.40 & 6.47 \\ & 25 & 10.47 & 7.70 & 6.65 & 5.60 & 7.42 & 7.79 \\ & 30 & 11.68 & 9.43 & 7.28 & 6.70 & 9.00 & 8.54 \\ & 35 & 13.10 & 9.32 & 7.62 & 7.10 & 8.99 & 9.59 \\ & 40 & 14.95 & 11.05 & 9.35 & 7.78 & 10.95 & 10.61 \\ & 45 & 16.10 & 11.35 & 10.35 & 8.53 & 11.85 & 11.31 \\ & 50 & 18.73 & 12.57 & 10.05 & 9.47 & 12.56 & 12.85 \\ & 55 & 19.77 & 14.40 & 11.43 & 9.62 & 13.68 & 13.94 \end{tabular} \caption{Average size of $\overline{\Lambda}$ at the end of the algorithm.}\label{tab3} \end{center} \end{table}
\subsubsection{Two-Path Graphs}
Tables~\ref{tab4}, \ref{tab5} and \ref{tab6} correspond to Tables~\ref{tab1}, \ref{tab2} and \ref{tab3} from the last experiment, respectively. Computation times are sensitive to the parameter $d$, i.e., the number of diagonal edges. For small values of $d$, the computational effort for problem (C) scales well with the length of the graph; for larger values of $d$, however, computation times grow rapidly.
\begin{table}[htbp] \begin{center}
\begin{tabular}{rr|rrr}
& & \multicolumn{3}{|c}{$d$} \\
& & 0.05 & 0.10 & 0.15 \\ \hline \parbox[t]{2mm}{\multirow{8}{*}{\rotatebox[origin=c]{90}{Length}}} & 50 & 0.04 & 0.05 & 0.08 \\ & 150 & 0.13 & 0.29 & 0.67 \\ & 250 & 0.22 & 0.79 & 2.23 \\ & 350 & 0.48 & 2.07 & 11.76 \\ & 450 & 0.78 & 5.37 & 28.22 \\ & 550 & 1.33 & 12.01 & 57.44 \\ & 650 & 2.06 & 19.65 & 165.17 \\ & 750 & 3.01 & 36.70 & 488.51 \\ & 850 & 3.84 & 73.42 & 3186.18 \end{tabular} \caption{Average computation times to solve (C) in seconds.}\label{tab4} \end{center} \end{table}
While the number of iterations is relatively small overall, as in the last experiment, the size of $\overline{\Lambda}$ increases with $d$, which makes the master problems larger and more difficult to solve.
\begin{table}[htbp] \begin{center}
\begin{tabular}{rr|rrr}
& & \multicolumn{3}{|c}{$d$} \\
& & 0.05 & 0.10 & 0.15 \\ \hline \parbox[t]{2mm}{\multirow{8}{*}{\rotatebox[origin=c]{90}{Length}}}
& 50 & 2.00 & 2.05 & 2.00 \\
& 150 & 2.05 & 2.15 & 2.20 \\
& 250 & 2.05 & 2.10 & 2.30 \\
& 350 & 2.10 & 2.20 & 2.45 \\
& 450 & 2.10 & 2.35 & 2.45 \\
& 550 & 2.20 & 2.30 & 2.40 \\
& 650 & 2.15 & 2.30 & 2.55 \\
& 750 & 2.20 & 2.35 & 2.50 \\
& 850 & 2.05 & 2.50 & 2.35 \end{tabular} \caption{Average numbers of iterations.}\label{tab5} \end{center} \end{table}
\begin{table}[htbp] \begin{center}
\begin{tabular}{rr|rrr}
& & \multicolumn{3}{|c}{$d$} \\
& & 0.05 & 0.10 & 0.15 \\ \hline \parbox[t]{2mm}{\multirow{8}{*}{\rotatebox[origin=c]{90}{Length}}}
& 50 & 2.75 & 3.55 & 3.95 \\
& 150 & 4.20 & 6.45 & 8.90 \\
& 250 & 5.15 & 9.15 & 13.15 \\
& 350 & 6.95 & 12.25 & 18.05 \\
& 450 & 8.05 & 16.25 & 21.40 \\
& 550 & 9.95 & 18.45 & 27.80 \\
& 650 & 11.20 & 20.20 & 30.25 \\
& 750 & 12.35 & 22.45 & 32.70 \\
& 850 & 13.05 & 27.00 & 39.00 \end{tabular} \caption{Average size of $\overline{\Lambda}$ at the end of the algorithm.}\label{tab6} \end{center} \end{table}
\subsection{Experiment 2: Comparison of Solutions}
\subsubsection{Layered Graphs} \label{sec:plots}
In our second experiment, we compare the compromise solution to the nominal solution (which is also the min-max regret solution with respect to the uncertainty sets $\mathcal{U}(0)$ and $\mathcal{U}(1)$), and to the min-max regret solutions with respect to $\mathcal{U}(0.3)$, $\mathcal{U}(0.5)$ and $\mathcal{U}(0.7)$.
To compare solutions, we calculate the regret of the compromise solution for values of $\lambda$ in $[0,1]$. We take this regret as the baseline. For all other solutions, we also calculate the regret depending on $\lambda$, and compute the difference to the baseline. We then compute the average differences for fixed $\lambda$ over all instances of the same size. The resulting average differences are shown in Figure~\ref{fig:exp} for four instance sizes. To put the differences into perspective, the captions report the average regret of the compromise solutions over the range from $\mathcal{U}(0)$ to $\mathcal{U}(1)$.
\begin{figure}
\caption{Difference in regret compared to nominal solution depending on $\lambda$.}
\label{fig:exp}
\end{figure}
By construction, a min-max regret solution with respect to $\mathcal{U}(\bar{\lambda})$ has the smallest regret for this $\bar{\lambda}$. Generally, all presented solutions have higher regret than the nominal solution for small and for large values of $\lambda$, and perform better in between. By construction, the compromise solution has the smallest integral under the shown curve. It can be seen that it presents an interesting alternative to the other solutions by having a relatively small regret for small and large values of $\lambda$, but also a relatively good performance in between.
\subsubsection{Two-Path Graphs}
We generate the same plots as in Section~\ref{sec:plots} using the two-path instances. Recall that in this case, the nominal solution is not necessarily an optimal solution with respect to $\mathcal{U}(1)$. We therefore include an additional line for $\mathcal{U}(1)$ in Figure~\ref{fig:exp2}.
\begin{figure}
\caption{Difference in regret compared to nominal solution depending on $\lambda$.}
\label{fig:exp2}
\end{figure}
It can be seen that the nominal solution performs differently from the last experiment; its regret increases with $\lambda$ at such a rate that part of its curve had to be cut off from the plot for better readability. The solution to $\mathcal{U}(0.5)$ performs very close to the compromise solution overall. Additionally, the scale of the plots shows that differences in regret are much larger than in the previous experiment. Overall, using a robust solution plays a more significant role than in the previous experiment, as the nominal solution shows poor performance. The solutions that hedge against large uncertainty sets ($\mathcal{U}(0.7)$ and $\mathcal{U}(1.0)$) are relatively expensive for small uncertainty sets and vice versa. The compromise solution (as $\mathcal{U}(0.5)$, in this case) presents a reasonable trade-off over all uncertainty set sizes.
\section{Conclusion}\label{sec:conc}
Classic robust optimization approaches assume that the uncertainty set $\mathcal{U}$ is part of the input, i.e., it is produced using some expert knowledge in a previous step. If the modeler has access to a large set of data, it is possible to follow recently developed data-driven approaches to design a suitable set $\mathcal{U}$. In our approach, we remove the necessity of defining $\mathcal{U}$ by using a single nominal scenario, and considering all uncertainty sets generated by deviating coefficients of different size simultaneously. The aim of the compromise approach is to find a single solution that performs well on average in the robust sense over all possible uncertainty set sizes.
For min-max combinatorial problems, we showed that our approach can be reduced to solving a classic robust problem of particular size. The setting is more involved for min-max regret problems, where the regret objective is a piecewise linear function in the uncertainty size. We presented a general solution algorithm for this problem, which is based on a reduced master problem, and the iterative solution of subproblems of nominal structure.
For specific problems, positive and negative complexity results were demonstrated. The compromise selection problem can be solved in polynomial time. Solutions to the compromise minimum spanning tree problem can be evaluated in polynomial time, but it is NP-hard to find an optimal solution. For compromise shortest path problems, the same results hold in case of layered graphs; however, for general graphs, it is still an open problem if there exist instances where exponentially many regret solutions are involved in the evaluation problem.
In computational experiments we highlighted the value of our approach in comparison with different min-max regret solutions, and showed that computation times can be within few minutes for instances with up to 22,000 edges.
\end{document} |
\begin{document}
\title{SWIS - Shared Weight bIt Sparsity for Efficient Neural Network Acceleration}
\author{Shurui Li, Wojciech Romaszkan, Alexander Graening, Puneet Gupta} \email{[email protected], [email protected], [email protected], [email protected]} \affiliation{
\institution{University of California, Los Angeles}
\streetaddress{420 Westwood Plaza}
\city{Los Angeles}
\state{California}
\country{USA}
\postcode{90095} }
\begin{abstract} Quantization is spearheading the increases in performance and efficiency of neural network computing systems, which are making headway into commodity hardware. We present SWIS - Shared Weight bIt Sparsity, a quantization framework for efficient neural network inference acceleration that delivers improved performance and storage compression through an offline weight decomposition and scheduling algorithm. SWIS can achieve up to 54.3 (19.8) percentage points of accuracy improvement compared to weight truncation when quantizing MobileNet-v2 to 4 (2) bits post-training (with retraining), showing the strength of leveraging shared bit-sparsity in weights. The SWIS accelerator gives up to $6\times$ speedup and $1.9\times$ energy improvement over state-of-the-art bit-serial architectures. \end{abstract}
\maketitle
\section{Introduction} \label{sec:intro}
Creating custom silicon for a particular application requires a robust economic case due to the immense costs of such endeavors. Deep neural networks (DNNs) have created such a case in a span of a few short years and both training and inference accelerators are proliferating in server and edge-class devices \cite{Jouppi2017NnTpu}. Many of these accelerators double down on further specialization to improve efficiency, frequently through the use of quantization going as low as 4-bit or binarized precision \cite{Nvidia2020A100}. However, only a subset of applications can take advantage of such aggressive precision reduction. \par
Recently, a lot of research has gone into hardware support for configurable levels of quantization, for example bit-serial and decomposable arithmetic \cite{Judd2017NnStripes, Ryu2019NnBitBlade, Sharma2018NnBitFusion}. Recent works on bit-serial arithmetic have attempted to avoid unnecessary computations with zero-valued bits in activations at runtime \cite{Albericio2017NnBitPragma, Delmas2018NnDpredStripes}. Those approaches lead to limited latency improvements \cite{Delmas2018NnDpredStripes}, significant hardware overheads \cite{Albericio2017NnBitPragma, Delmas2018NnDpredStripes}, no storage compression \cite{Delmas2018NnDpredStripes}, or non-trivial scheduling issues \cite{Albericio2017NnBitPragma}. Moreover, most existing bit-serial, precision-scalable architectures show benefits when quantizing from 16-bit networks \cite{Judd2017NnStripes, Delmas2018NnDpredStripes}. However, recent efforts have shown that 8-bit quantization does not lose accuracy for most networks \cite{JacobQuantizationInference}, so the value of precision-scalable approaches must be shown {\em below} a bitwidth of 8.
To address these issues, we propose SWIS - Shared Weight bIt Sparsity Scheduling, a methodology for training, compressing, and executing convolutional neural networks on bit-serial hardware that can significantly reduce the effective required bitwidth. SWIS achieves this through configurable, non-consecutive shift values on a very fine granularity of small groups of weights. This results in efficient hardware implementation and a more compressed representation. With offline profiling of weights, SWIS can achieve significant storage compression and efficient scheduling, which is not achievable in accelerators that process activations in a bit-serial manner. \par
The main contributions of this work are as follows. \begin{itemize}
\item We show that \textit{Shared bit sparsity} achieves up to $3.7\times$ neural network weight compression compared to conventional quantization approaches at similar inference accuracy.
\item The proposed SWIS architecture gives up to $6\times$ ($1.8\times$) improvement in inference latency (energy) compared to state-of-the-art bit-serial accelerators of the same size.
\item We develop \textit{filter scheduling} approaches that maximize the benefits of SWIS by optimizing the distribution of shift cycles among filters on a fine granularity, giving up to 4.6 p.p. accuracy improvement over the unscheduled version for ResNet-18. \end{itemize}
\section{SWIS Quantization} \label{sec:swiss_motiv}
\subsection{What Should be Quantized?} \label{sec:swiss_motiv:choice}
Quantization and reduced precision have proven to be low-hanging fruit for improving the efficiency of neural network inference \cite{Rastegari2016NnXnornet, Zhou2016NnQuantDorefa, Judd2017NnStripes, Sharma2018NnBitFusion}. When these techniques are applied, a question arises: which values should be quantized, weights or activations? Commodity hardware, like CPUs or GPUs, will often enforce symmetric quantization, with both weights and activations using the same precision, while conventional bit-serial hardware can only effectively quantize one of the two \cite{Judd2017NnStripes}. Most bit-serial work has opted for reducing the precision of activations while keeping weight precision unchanged \cite{Judd2017NnStripes, Albericio2017NnBitPragma}. We argue that this approach is flawed and that reducing the precision of weights should be prioritized in such architectures. \par
Firstly, prior works have shown that weights can be quantized much more aggressively than activations without significant accuracy drops \cite{Zhou2016NnQuantDorefa, Rastegari2016NnXnornet}. With quantization-aware training, weight precision can be reduced to just 1 or 2 bits, and results for post-training quantization also suggest quantizing weights to lower precision is better than doing the same for activations in most cases \cite{Banner2019Nn4bitQuant}. Unlike activations, weights are not input-dependent; thus, they can be quantized offline at a much finer granularity without inducing hardware overheads. Architectures that use different precision weights and activations have opted to reduce precision more on the weight side \cite{Sharma2018NnBitFusion}, except for the aforementioned bit-serial accelerators. \par
Secondly, there are performance considerations. In modern DNNs, the overall number of weights will often dwarf the number of intermediate activations generated. Consider the ratio of external memory weight to activation accesses in the ResNet-18 model, shown in Figure \ref{fig:dram_ratio}, for a systolic array accelerator. For some convolutional layers, there can be two orders of magnitude more weight than activation accesses. Considering how system performance can be dominated by memory accesses, reducing the precision of weights can yield much greater improvements than doing this for activations.
We will now describe SWIS - a computation scheme that can quantize weights in a much more efficient manner than traditional bit-serial approaches. \par
\begin{figure}
\caption{Ratio of DRAM weight to activation accesses (RD+WR) in different convolutional layers of ResNet-18 in a systolic array accelerator.}
\label{fig:dram_ratio}
\end{figure}
\subsection{Shared Weight Bit-Sparsity} \label{sec:swiss_motiv:form}
The multiply-accumulate (MAC) operation, which is the workhorse of deep neural networks, between an activation vector $\vec{a}$ and weight vector $\vec{w}$ can be written as:
\begin{equation} \label{eq:base_mac}
\vec{a} \cdot \vec{w} = \sum_{i=0}^{M-1}a_{i} \times w_{i} \end{equation}
Where $a_{i}$ and $w_{i}$ are the i-th elements of vectors $\vec{a}$ and $\vec{w}$ respectively, and $M$ is the width of the multiply-accumulate. We will refer to $M$ as the group size from now on. Each weight $w_{i}$ can be further decomposed into its bit-wise form:
\begin{equation} \label{eq:wgt_bin}
w_{i} = Sign(w_{i}) \times \sum_{j=0}^{B-1}2^{j} \times w_{i}[j] \end{equation}
Where $w_{i}[j]$ is the j-th bit (from LSB) of weight $w_{i}$, and $B$ is the bitwidth of the weight. Equation \ref{eq:base_mac} can now be rewritten as: \begin{equation} \label{eq:inv_mac} \begin{split}
\vec{a} \cdot \vec{w} = \sum_{j=0}^{B-1} 2^{j} \sum_{i=0}^{M-1} Sign(w_{i}) \times a_{i} \times w_{i}[j] \end{split} \end{equation}
If we consider that multiplication by a single bit is a bit-wise AND operation (\&), and multiplication by a power of 2 is a logical shift operation ($<<$), Equation \ref{eq:inv_mac} can be rewritten as:
\begin{equation} \label{eq:inv_shft_mac}
\vec{a} \cdot \vec{w} = \sum_{j=0}^{B-1} \left(\sum_{i=0}^{M-1} Sign(w_{i}) \times (a_{i}\& w_{i}[j])\right) << j \end{equation}
This formulation is used in bit-serial accelerators, although most prior works use activations in their bit-serial representation and weights in their parallel representation \cite{Judd2017NnStripes, Delmas2018NnDpredStripes}. This allows activations to be positive and negative. We now explain why the weight bit-serial formulation, as in Equation \ref{eq:inv_shft_mac}, can be much better. \par
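The weight bit-serial formulation of Equation \ref{eq:inv_shft_mac} can be sketched in executable form: one AND-reduce-accumulate pass per weight bit position, with sign-magnitude weights. Function name and group values are illustrative, not from the paper.

```python
def bit_serial_mac(a, w, B=8):
    """Weight-bit-serial MAC (Equation 4): for each bit position j,
    AND the activations with the j-th weight bit plane, reduce with
    the weight signs, then shift-accumulate by j."""
    acc = 0
    for j in range(B):                        # one cycle per bit position
        plane = sum((1 if wi >= 0 else -1) * ai * ((abs(wi) >> j) & 1)
                    for ai, wi in zip(a, w))  # sum_i Sign(w_i) * (a_i & w_i[j])
        acc += plane << j                     # << j
    return acc

a = [3, -5, 7, 2]
w = [6, 2, -3, 127]
exact = sum(ai * wi for ai, wi in zip(a, w))
assert bit_serial_mac(a, w) == exact          # exact after all B cycles
```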
Naive implementation of bit-serial multiplication requires going through all bits of one of the operands. However, as multiple previous works have pointed out, every bit equal to 0 will not contribute to the final result, effectively wasting computation cycles \cite{Delmas2018NnDpredStripes}. One solution is to clip all MSB and LSB positions containing zeroes and only process bits within that clipped range \cite{Delmas2018NnDpredStripes}. However, that does not eliminate zero-bits within the clipped range. For example, the above scheme applied to a value of 129, represented as an 8-bit value (\textit{1000\_0001} in binary), results in no cycle savings, despite 75\% of bits not contributing to the result. \par
Further, this will cause synchronization problems that are difficult to solve in highly-parallel architectures unless the above scheme is applied on a group basis \cite{Albericio2017NnBitPragma}. However, when applied to a group of values, clipping is constrained by the worst-case number, reducing the achievable benefits. Consider grouping 129 (\textit{1000\_0001} in binary) with 8 (\textit{0000\_1000}). The former requires processing all 8 bit positions, while the latter only requires a single one. Overall, over 80\% of the computation would effectively be wasted. While more sophisticated techniques of removing all activation zero-bit computations have been proposed, they suffer from the above synchronization issue and significant hardware overheads \cite{Albericio2017NnBitPragma}. While training optimizations for such architectures have recently been proposed, they do not fully solve the scheduling issues \cite{Zhao2020NnBitSerialPruning}. \par
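The waste under group-wise clipping can be quantified directly; this small sketch (helper names are ours) reproduces the 129/8 example above.

```python
def clipped_cycles(values, B=8):
    """Cycles when MSB/LSB zero positions are clipped for a whole group:
    everyone processes the bit range spanned by the worst-case values.
    Assumes nonzero magnitudes."""
    lo = min((v & -v).bit_length() - 1 for v in values)   # lowest set bit
    hi = max(v.bit_length() for v in values)              # highest set bit + 1
    return hi - lo

def useful_bits(values):
    """Bit-slots that actually contribute to the result."""
    return sum(bin(v).count("1") for v in values)

# 129 = 1000_0001: clipping saves nothing, although 6 of 8 bits are zero.
assert clipped_cycles([129]) == 8 and useful_bits([129]) == 2

# Grouped with 8 = 0000_1000: 2 values x 8 cycles, only 3 useful bit-slots.
group = [129, 8]
wasted = 1 - useful_bits(group) / (len(group) * clipped_cycles(group))
assert wasted > 0.8   # over 80% of the computation is wasted
```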
What limits the efficacy of the methods described above is that they are attempting what is effectively ``lossless compression'' of computation, requiring the representation of exact values. We argue that through careful pre-processing, a much more hardware-friendly ``lossy compression'' can be achieved without significantly reducing inference accuracy, as we will show in Section \ref{sec:eval_results:acc}. However, pre-processing implies that it can only be applied to weights and not activations, which are input dependent. This insight, together with the reasons outlined in Section \ref{sec:swiss_motiv:choice}, justifies our ``reverse'' weight bit-serial formulation in Equation \ref{eq:inv_shft_mac}. Furthermore, these existing approaches quantize using consecutive bit positions (usually truncating the LSBs). Next, we show how the SWIS approach leverages the sparsity in bit representations of weights. \par
Let us assume we constrain a group of weights to only use a specific subset of \emph{active} bit positions, while all the other \emph{inactive} positions are assumed to be 0. We can define a supporting vector $\vec{s}$:
\begin{equation} \label{eq:shift_vec}
\vec{s} = (s_{0}, s_{1}, \ldots, s_{N-1}),\quad s_{i} \in \{0, \ldots, B-1\} \end{equation}
We can then rewrite Equation \ref{eq:wgt_bin} as:
\begin{equation} \label{eq:wgt_bin_sparse}
w_{i} = Sign(w_{i}) \times \sum_{j=0}^{N-1}2^{s_{j}} \times m_{i}[j] \end{equation}
Where $m_{i}[j]$ is a \emph{mask} bit indicating whether weight $w_{i}$ has an active bit at position $s_{j}$. After combining Equations \ref{eq:inv_shft_mac} and \ref{eq:wgt_bin_sparse}, we arrive at the shared weight bit sparsity formulation, the foundation of the SWIS methodology:
\begin{equation} \label{eq:inv_shft_mac_sparse}
\vec{a} \cdot \vec{w} = \sum_{j=0}^{N-1} \left(\sum_{i=0}^{M-1} Sign(w_{i}) \times (a_{i} \& m_{i}[j])\right) << s_{j} \end{equation}
The stark similarity between Equations \ref{eq:inv_shft_mac} and \ref{eq:inv_shft_mac_sparse} means that SWIS is fully compatible with bit-serial MAC processing elements (PEs). There are three crucial differences between bit-serial and SWIS processing. First is the change in the outer loop bound from $B$ (weight bitwidth) to $N$ (size of the support vector). Second is the sparse (non-consecutive) nature of the supporting vector - most prior bit-serial architectures either constrained themselves to consecutive shift ranges \cite{Judd2017NnStripes}, or ran into non-trivial scheduling problems when attempting to exploit bit-sparsity in dynamic activations \cite{Albericio2017NnBitPragma}. SWIS does not have this problem as long as the number of active bits, henceforth referred to as \textit{shifts}, is the same for all computations scheduled at the same time. \par
The third difference is the flexibility to select shifts at the granularity of an individual group. Traditional bit-serial approaches constrain themselves to per-layer profiling of consecutive shifts, which, as we will show in Section \ref{sec:swiss_sched:gran}, can be overly restrictive. We refer to this approach as \textit{layer-wise static quantization}. Through careful shift selection and scheduling, described in Sections \ref{sec:swiss_sched:shsel} and \ref{sec:swiss_sched:sched}, SWIS can ensure that $N \ll B$ without sacrificing inference accuracy. \par
Recent works have shown that using a consecutive subset of bits of a given value, where that subset can differ between weight values, can also yield acceptable accuracy for certain datasets and networks \cite{Gupta2020NnQuantL2l}. SWIS can support such consecutive bit subsets by treating them as a series of shifts, without any additional overheads. It can also take advantage of the higher weight compression ratio enabled by it, since only a single \textit{shift offset} needs to be stored per group of weights, instead of individual sparse shift values. We refer to this configuration as SWIS-Consecutive (SWIS-C). The important distinction between SWIS-C and typical quantization approaches is that the \textit{offset} being used can be set on a very fine granularity of a group of weights, instead of a per-kernel or per-layer basis, hence allowing more aggressive quantization without sacrificing accuracy.
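As an illustration, the shared bit-sparsity MAC of Equation \ref{eq:inv_shft_mac_sparse} can be sketched in a few lines of Python. This is an illustrative model, not the hardware implementation; the bitwise AND with a replicated mask bit is modeled as a conditional select, and all names are ours.

```python
# Illustrative sketch of the SWIS shared-bit-sparsity MAC: a group of M
# weights shares N active bit positions s[j]; m[i][j] says whether weight i
# uses position j. The hardware's (a_i & m_i[j]) is modeled as a select.

def swis_dot(a, w_sign, m, s):
    """a: activations, w_sign: +/-1 per weight, m[i][j]: mask bits,
    s: shared shift positions. Returns the approximate dot product."""
    total = 0
    for j, shift in enumerate(s):                       # outer loop over N shifts
        partial = sum(w_sign[i] * (a[i] if m[i][j] else 0)
                      for i in range(len(a)))           # masked, signed adder tree
        total += partial << shift                       # power-of-2 "multiply"
    return total

# Weights 5 (=101b) and -3 (=011b) share shifts {0, 2}: 5 = 2^2 + 2^0 is
# exact, while 3 can only be approximated as 2^0 = 1 with these positions.
a = [2, 4]
w_sign = [1, -1]
s = [0, 2]
m = [[1, 1],   # weight 0: active bits at positions 0 and 2 -> 5
     [1, 0]]   # weight 1: active bit at position 0 only    -> 1
print(swis_dot(a, w_sign, m, s))  # 2*5 - 4*1 = 6
```

Note how the quantization error (3 approximated as 1) is the price of sharing shift positions across the group; Section \ref{sec:swiss_sched:shsel} describes how the positions are chosen to minimize it.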
\subsection{Granularity of Weight Quantization} \label{sec:swiss_sched:gran}
We discuss the relative accuracy of three quantization approaches in this section, namely layer-wise static quantization, SWIS-C, and SWIS. To establish the superiority of both SWIS methods, we first discuss their approximation ability, which can be reflected by the probability of losslessly quantizing an 8-bit integer $A$ into $\bar{A}$ using a given number of shifts $N$. The 8-bit number is assumed to be randomly generated, so that each bit has a 50\% probability of being 1. In reality, the bit distribution tends to shift towards the lower end, since most weights are around zero, but taking this factor into account would make the analytical calculation of the probability impractical. For simplicity of analysis, we stick with random bits and a group size of one for the lossless quantization probability calculation, and we factor in the actual weight distribution and group size in subsequent analysis. \par
First, for SWIS, as the bit selection is sparse, the quantization is lossless if the number of bits that are 1 in $A$ is less than or equal to $N$. The probability of lossless quantization for SWIS given $N$ can be formulated using cumulative binomial distribution:
\begin{equation} P_{SWIS}(A == \bar{A}) = \sum_{n=0}^{N}{8\choose n}\cdot 0.5^8 \end{equation}
Second, for SWIS-C, the probability can be calculated based on the probability of SWIS, multiplied by the fraction of total bit permutations that can be losslessly quantized. The probability of lossless quantization of SWIS-C for given $N$ can therefore be formulated by:
\begin{equation} P_{SWIS-C}(A == \bar{A}) = \sum_{n=0}^{N}{8\choose n}\cdot 0.5^8\cdot \frac{{N\choose n}(9-N)-(8-N){N-1\choose n}}{{8\choose n}} \end{equation}
Last, for layer-wise static quantization, the bit selection is fixed for the entire layer, therefore the probability of lossless quantization of an individual 8-bit value is:
\begin{equation}
P_{layer-wise}(A == \bar{A}) = \sum_{n=0}^{N}{8\choose n}\cdot 0.5^8\cdot \frac{{N\choose n}}{{8\choose n}} \end{equation}
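The three probabilities above are straightforward to compute numerically; the following Python sketch (ours, for illustration) transcribes them directly:

```python
from math import comb

B = 8  # underlying bitwidth

def p_swis(n_shifts):
    # SWIS: lossless iff the value has at most N set bits (any positions).
    return sum(comb(B, n) for n in range(n_shifts + 1)) / 2**B

def p_swis_c(n_shifts):
    # SWIS-C: a consecutive window of N bits chosen per value; the bracketed
    # term counts, by inclusion-exclusion over the (9-N) window positions,
    # the n-bit patterns that fit in some window.
    N = n_shifts
    return sum(comb(N, n) * (B + 1 - N) - (B - N) * comb(N - 1, n)
               for n in range(N + 1)) / 2**B

def p_layerwise(n_shifts):
    # Layer-wise: N positions fixed for the whole layer, so the value must
    # use only those bits; the sum collapses to 2^N / 2^B.
    return sum(comb(n_shifts, n) for n in range(n_shifts + 1)) / 2**B

for N in range(1, 9):
    print(N, p_swis(N), p_swis_c(N), p_layerwise(N))
```

A quick sanity check: at $N=1$ both SWIS and SWIS-C are lossless exactly for the nine values with at most one set bit ($9/256$), while layer-wise quantization only covers two values ($2/256$); at $N=8$ all three reach probability 1.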
Figure \ref{fig:lossless_prob} shows the computed probability of lossless quantization for all three approaches at every $N$. The results are as expected: SWIS outperforms the other two by a large margin in most cases due to its bit sparsity, while SWIS-C also noticeably outperforms layer-wise quantization, since it allows a finer quantization granularity.
\begin{figure}
\caption{Probability of lossless quantization of an 8-bit integer using layer-wise static quantization, SWIS-C, and SWIS.}
\label{fig:lossless_prob}
\end{figure}
The relative accuracy of lossless quantization also holds for lossy quantization. We use root mean square error (RMSE), instead of probability, to compare the above three methods. Table \ref{tab:quantcompareresnet} shows quantization RMSE against the original weights for a typical layer of 8-bit ResNet-18 \cite{He2016NnResNet} and MobileNet-v2, for different numbers of shift values and group sizes. A group size of 1 shows the ideal-case performance, while a group size of 4 shows results for a more realistic case, which we explore further in Section \ref{sec:swiss_sched:swiss_group}. Both networks show a similar trend, and the very large RMSE of static layer-wise quantization (implemented using LSB truncation) suggests that it does not work well for lower bit widths. SWIS outperforms SWIS-C in all cases, and the gap is large for the combination of a hard-to-quantize network (MobileNet-v2) and a small number of shift values. This trend holds for larger group sizes, but the difference between SWIS and SWIS-C becomes smaller, suggesting SWIS-C can be considered an alternative for some use cases, with better weight compression.
\begin{footnotesize}
\begin{table}[htbp]
\centering
\caption{RMSE of three weight quantization methods for typical layers of 8-bit ResNet-18 and MobileNet-v2, for group sizes of 1 and 4.}
\begin{tabularx}{\linewidth}{X|X|X|X|X|X}
\toprule
&\multicolumn{2}{c|}{Group size = 1} &\multicolumn{2}{c|}{Group size = 4}& \\
\midrule
\# shifts &SWIS & SWIS-C &SWIS & SWIS-C & layer-wise trunc.\\
\midrule
\multicolumn{6}{c}{ResNet-18 first convolution layer} \\
\midrule
5 shifts & 0.0013 & 0.0020 &0.0022 & 0.0027 & 0.0168 \\
4 shifts & 0.0019 & 0.0037 &0.0044 & 0.0053 & 0.0314 \\
3 shifts & 0.0038 & 0.0070 &0.0091 & 0.0103 & 0.0556 \\
2 shifts & 0.0094 & 0.0146 &0.0197 &0.0214 & 0.0895 \\
\midrule
\multicolumn{6}{c}{MobileNet-v2 first point-wise convolution layer} \\
\midrule
5 shifts & 0.0007 & 0.003 &0.0039 & 0.005& 0.0158 \\
4 shifts & 0.0023 & 0.0055 &0.0078 & 0.0095 & 0.0227 \\
3 shifts & 0.0051 & 0.0112 &0.0162 & 0.019 & 0.0394 \\
2 shifts & 0.0126 & 0.0208 &0.0358 & 0.0401 & 0.0774 \\
\bottomrule
\end{tabularx}
\label{tab:quantcompareresnet} \end{table}
\end{footnotesize}
\section{Architecture} \label{sec:arch} We architect the SWIS accelerator as a bit-serial systolic array, with each processing element (PE) and the dataflow optimized to leverage SWIS quantization.
\subsection{SWIS PE} \label{sec:arch:pe}
The conventional processing element (PE) implementation of Equation \ref{eq:inv_shft_mac_sparse} would consist of $M$ (group size) parallel bitwise AND operations (masking), conditional sign inversion, an adder tree for summing the masked activations, a barrel shifter for power-of-2 multiplication, and a serial accumulator, similar to the one proposed in \cite{Judd2017NnStripes}. It computes one of the operands one bit at a time. While the group size is fixed for a given hardware design, the number of shifts used can be configured at runtime and can differ for each PE. We refer to this style of bit-serial PE as a \textit{single-shift} PE. While inverting the order of addition and multiplication yields certain efficiency gains, bit-serial processing by itself does not provide higher throughput per area or energy efficiency compared to conventional fixed-point when processing all of the bits. Only by aggressively reducing the number of bits (shifts) being used and maximizing the PE group size can performance improvements over fixed-point be achieved. While such improvements are easy to attain when 16-bit fixed-point precision is used as a baseline, they are much harder when the baseline is reduced to 8 bits, the de-facto standard precision in quantized networks today \cite{Judd2017NnStripes}. \par
To quantify the possible benefits of using bit-serial computation, we designed an 8-bit fixed-point PE and single-shift bit-serial PEs with different group sizes (2-16) in Verilog RTL and synthesized them using a commercial 28nm TSMC library and the Cadence Genus synthesis tool. Since we intend to use them in a systolic-array-style accelerator, all PEs include activation and weight buffers. We then compared their area, energy per MAC, and throughput per area for different numbers of shifts used in the bit-serial version (2/4/6). Results, normalized to the fixed-point PE with the same group size, are shown in Figure \ref{eval:pe}. The single-shift PE only comes out ahead in terms of energy and throughput per area when fewer than 4 shifts are used. When using conventional quantization approaches, this level of precision reduction might not be tolerable, as we will show in Section \ref{sec:eval_results:acc}. SWIS, with its ability to implement sparse quantization at a much finer granularity, can reduce the number of shifts required much more aggressively than those approaches. \par
However, even with SWIS, improving the performance requires using PEs with large group sizes, as shown in Figure \ref{eval:pe}. Below a group size of 8, performance improvements, even with a low number of shifts used, are modest at best. This limited improvement is due to overheads which cannot be reduced compared to fixed-point PEs. To recover accuracy for larger group sizes, more shifts are required, and as shown in Figure \ref{eval:pe}, efficiency gains are quickly lost when more than 4 shifts are used. Therefore, a way to improve hardware efficiency is needed. To better amortize the fixed costs mentioned above, we propose to process multiple bits (shifts) simultaneously. By computing, for example, two shifts at the same time, performance break-even points compared to fixed-point can be improved, through amortizing the cost of buffering the activations and sign inversion.\par
We show the performance comparison of this \textit{double-shift} PE in Figure \ref{eval:pe}, for the same group sizes and numbers of shifts as the single-shift one. It achieves lower normalized energy per MAC and higher throughput per area than a single-shift PE with double the group size. This means we can effectively halve the group size while improving both performance and inference accuracy. For that reason, we opt to use double-shift PEs in our SWIS accelerator architecture, as shown in Figure \ref{fig:inv_mac_diagram}. However, double-shift processing comes at the cost of increased rigidity in the number of shifts used. Using an odd number of shifts would result in underutilization of the available compute: going from four to three shifts would therefore not improve inference latency. However, SWIS allows us to assign the number of shifts at a sub-layer granularity, meaning that the \textit{effective} number of shifts is not constrained to even numbers. For example, if half of the kernels in a given layer use 2 shifts and the other half use 4 shifts, the effective, layer-wise number of shifts is 3. See Section \ref{sec:swiss_sched:sched} for network accuracy when using a scheduled odd effective number of shifts on the \textit{double-shift} configuration. \par
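The cycle-count argument above can be made concrete with a small Python sketch (our behavioral assumption, not the paper's RTL): a double-shift PE consumes two shift positions per cycle, so an odd per-kernel shift count wastes a slot, while mixing shift counts across kernels recovers a fractional effective average.

```python
from math import ceil

# Behavioral sketch of double-shift processing: two shifts per cycle,
# so an odd shift count wastes a slot (3 shifts still take 2 cycles).

def double_shift_cycles(n_shifts):
    return ceil(n_shifts / 2)

# Scheduling half the kernels of a layer at 2 shifts and half at 4 gives
# an effective 3 shifts without per-kernel underutilization:
kernels = [2] * 8 + [4] * 8
effective = sum(kernels) / len(kernels)
cycles = sum(double_shift_cycles(n) for n in kernels)
print(effective, cycles)  # 3.0 shifts on average, 24 cycles for 16 kernels
```

Assigning all 16 kernels a flat 3 shifts would instead cost 32 cycles on double-shift hardware, the same as 4 shifts, which is exactly the rigidity the scheduling in Section \ref{sec:swiss_sched:sched} avoids.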
\begin{figure*}
\caption{ Single and double-shift 8-bit SWIS PE area (a), per-MAC energy (b) and throughput/area (c) for different PE widths, normalized to a conventional fixed-point PE with the same group size.}
\label{eval:pe}
\end{figure*}
\subsection{SWIS Systolic Array and Dataflow} \label{sec:arch:sysarr} We use a systolic array as the baseline architecture, shown in Figure \ref{fig:inv_mac_diagram}, due to its simple scheduling, low-complexity processing element architecture, and low bandwidth requirements when processing convolutional layers \cite{Jouppi2017NnTpu}. We assume the same structure, consisting of the systolic array itself together with activation, weight, and output buffers, as described in \cite{Samajdar2018NnScaleSim}. While the systolic array itself is a 2D array of PEs, each individual PE processes weights in groups, effectively adding a third dimension to the dataflow. That being said, SWIS is not inherently tied to a particular implementation and could be used in any accelerator that supports bit-serial processing. \par
Compared to a conventional systolic array, where each element consists of a single multiplier and accumulator, the SWIS systolic array uses group-wise PEs, where multiple MAC operations are executed in parallel on a vector of activations and a corresponding vector of weights, one shift at a time. For simplicity, we assume that all such vectors are depth-wise: all activations and weights have the same x and y positions but correspond to different input channels. We also assume that those vectors are packed in memory and that on-chip buffers have interfaces scaled by a factor equal to the group size. These assumptions are easy to satisfy for commonly used convolutional layers, where the number of input channels is a power of 2. For depthwise-separable convolutions, such as those used by MobileNet, we underutilize the PEs in the systolic array for the sake of simple scheduling. We plan to explore a more efficient implementation of such layers in future work. \par
\begin{figure}
\caption{N-wide double shift bit-serial MAC unit (a) and systolic array accelerator (b) used by the SWIS methodology.}
\label{fig:inv_mac_diagram}
\end{figure}
In terms of scheduling, we use the output-stationary (OS) dataflow, as it has been shown to provide the best performance and a minimal number of memory accesses in most cases \cite{Samajdar2018NnScaleSim}. There are several ways bit-serial computation can be scheduled in a systolic array. The most naive would be to perform a full computational pass for each shift. While straightforward to implement in the OS dataflow, it would also increase the number of on-chip memory accesses roughly in proportion to the number of shifts being used. Another alternative is to send all shift masks to the PE at the same time and execute each operation over multiple cycles. Unfortunately, this would require scaling both the weight buffer interface and the PE weight buffers to support the worst case, 8 shifts, drastically increasing their area. Instead, we opt for a "staggered" approach, where weights (shifts) flow through the array normally, but each activation is fed in repeatedly over a number of cycles equal to the number of shifts being used. Such an approach requires minimal control and buffering overhead, without over-provisioning the PE buffers or increasing the number of activation buffer accesses. It also enables efficient reuse of activations for different shifts, as they do not need to be fetched multiple times. For SWIS-C, we assume that a shift (offset) is fetched only once and incremented outside of the array, incurring negligible area overheads. \par
\subsection{SWIS Compression} \label{sec:swiss_sched:comp}
The performance of a computing system cannot be evaluated without considering the impact of memory. Increasingly, memory bandwidth and access energy have the dominant impact on overall latency and energy \cite{Horowitz2014CompEnergy}. Approaches that rely solely on point improvements to arithmetic efficiency will quickly fall victim to diminishing returns. One of the main advantages of SWIS is the weight storage compression it offers. Assuming 8-bit underlying precision, for each group of weights we only need to store their signs (one bit per weight), shift values (3 bits per group, per shift), and shift masks (1 bit per weight, per shift). \par
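From the per-group storage costs just listed, a back-of-the-envelope compression ratio can be computed. The sketch below (ours; it is a simplified accounting and is not expected to reproduce the profiled figures in Figure \ref{eval:wgt_compress} exactly) compares SWIS, which stores $N$ shift values per group, with SWIS-C, which stores a single 3-bit offset per group:

```python
# Back-of-the-envelope weight compression for 8-bit weights:
# signs: 1 bit/weight; shift values: 3 bits each; masks: 1 bit/weight/shift.

def swis_ratio(group, n_shifts):
    stored = group + 3 * n_shifts + group * n_shifts
    return 8 * group / stored

def swis_c_ratio(group, n_shifts):
    # SWIS-C stores one 3-bit offset per group instead of N shift values.
    stored = group + 3 + group * n_shifts
    return 8 * group / stored

# Group of 4 weights, 2 shifts:
print(round(swis_ratio(4, 2), 2), round(swis_c_ratio(4, 2), 2))  # 1.78 2.13
```

The sketch shows the expected trend: SWIS-C's single shared offset gives it the better compression ratio at equal group size and shift count, and both ratios grow as the number of shifts shrinks.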
The resulting weight compression ratios for different number of shifts and group sizes are shown in Figure \ref{eval:wgt_compress}. We compare our compression scheme of 8-bit weights to the one used by DPRed \cite{Delmas2018NnDpredStripes}, profiled across one example convolution layer, for different groups sizes. DPRed stores weights using per-group bitwidth, determined by the highest active bit position in a given group. We also show compression ratios for SWIS-C, which only needs to store one shift value per group. \par
Those compression ratios further translate to external memory bandwidth reduction. Comparing with an iso-area, 8-bit fixed-point accelerator, SWIS can require up to $2.3\times$ lower DRAM bandwidth, while for SWIS-C bandwidth reduction can go as high as $3.3\times$, at similar accuracy (within 1\% of 8-bit Resnet-18).
It is important to note that, unlike SWIS, DPRed compression is lossless (it retains all information); however, it is too restrictive, at least at 8-bit precision, to deliver any significant storage savings. Meanwhile, SWIS and SWIS-C can deliver close to a $3.7\times$ reduction in weight storage when large groups are used with an aggressive reduction in the number of shifts. For a group size of 4, which we use in our architecture, compression varies from $1.1\times$ to $2.9\times$ for SWIS and from $1.5\times$ to $2.9\times$ for SWIS-C. Accuracy-performance trade-offs between the number of shifts and group sizes are explored in Section \ref{sec:eval_results}.
\begin{figure}
\caption{Weight storage compression ratio for different number of shifts and PE sizes, for SWIS, SWIS-C, and DPRed.}
\label{eval:wgt_compress}
\end{figure}
\section{SWIS Scheduling \& Grouping} \label{sec:swiss_sched}
\subsection{SWIS Shift Selection} \label{sec:swiss_sched:shsel} \subsubsection{Selection Algorithm} The shift selection process for SWIS consists of selecting the optimal shift values $s_j$ for each group and generating the bitmasks $m_i$ for individual weights to minimize the quantization error for the given number of shifts. As the total number of possible combinations of selecting $N$ shift values out of 8 is manageable, we use an enumeration algorithm for best results. For each group, we quantize the weights using all possible shift value combinations and select the combination with the least error based on our error metric (Section \ref{sec:errormetric}) over the entire group. For each shift value combination, the corresponding values for all possible bitmasks are generated, and each weight is quantized to the nearest value (bitmask). This enumeration algorithm ensures that the optimal shift values and bitmasks are selected for every group and every weight to minimize the error. \par
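The enumeration just described can be sketched compactly in Python. This is our illustrative paraphrase, not the authors' code; for brevity it uses plain squared error in place of the MSE++ metric introduced next, and handles signs by quantizing magnitudes:

```python
from itertools import combinations, product

# Enumerate every combination of N shift positions out of 8, quantize each
# weight in the group to its nearest representable magnitude, and keep the
# combination with the lowest total error over the group.

def quantize_group(weights, n_shifts, bitwidth=8):
    best = None
    for shifts in combinations(range(bitwidth), n_shifts):
        # All magnitudes reachable with this shift set (every bitmask).
        levels = sorted({sum(b << s for b, s in zip(bits, shifts))
                         for bits in product((0, 1), repeat=n_shifts)})
        quant = [min(levels, key=lambda v: abs(v - abs(w))) *
                 (1 if w >= 0 else -1) for w in weights]
        err = sum((w - q) ** 2 for w, q in zip(weights, quant))
        if best is None or err < best[0]:
            best = (err, shifts, quant)
    return best

# Weights 5 (=101b) and -3 (=011b) cannot share 2 bit positions exactly;
# the enumeration settles on positions {0, 2}, approximating -3 as -4.
err, shifts, quant = quantize_group([5, -3], n_shifts=2)
print(shifts, quant, err)  # (0, 2) [5, -4] 1
```

With $\binom{8}{N}$ shift combinations and $2^N$ bitmasks per combination, the search space is small enough that this exhaustive approach is practical as a one-time pre-processing step.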
\subsubsection{Error Metric} \label{sec:errormetric} We introduce an error metric for SWIS shift value selection based on mean square error (MSE), called MSE++. Although MSE provides decent baseline results, it only considers the absolute error. MSE++ includes a signed error term to reduce drift in the average value of a multiply-accumulate due to quantization rounding errors. The formulation we use for the signed error is shown in Equation \ref{eq:signederror}, where $N$ is the group size:
\begin{equation} \label{eq:signederror} \mathrm{Signed\;Error}= \sum_{i=1}^{N}\left(X_{i}-\hat{X}_{i}\right) \end{equation}
For MSE++, we square the signed error term to guarantee a positive value and to scale its magnitude closer to MSE, so that the overall error is not dominated by the signed error. We also add a coefficient to the signed error term to allow us to fine-tune its contribution for each network. The complete equation for MSE++ is shown in Equation \ref{eq:msepp}, where $\alpha$ is the coefficient term:
\begin{equation} \label{eq:msepp} \mathrm{MSE++}=\frac{1}{N} \left(\alpha \left(\sum_{i=1}^{N}\left(X_{i}-\hat{X}_{i}\right)\right)^{2} + \sum_{i=1}^{N}\left(X_{i}-\hat{X}_{i}\right)^{2}\right) \end{equation}
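Equations \ref{eq:signederror} and \ref{eq:msepp} transcribe directly to code; the short sketch below (ours) also shows why the signed term matters, by comparing two quantizations with identical MSE:

```python
# Direct transcription of the MSE++ metric; alpha is the tunable
# coefficient from the text (alpha = 1 when fine-tuning is not practical).

def mse_pp(x, x_hat, alpha=1.0):
    n = len(x)
    signed = sum(a - b for a, b in zip(x, x_hat))
    squared = sum((a - b) ** 2 for a, b in zip(x, x_hat))
    return (alpha * signed ** 2 + squared) / n

# Two quantizations with identical per-element MSE: MSE++ prefers the one
# whose errors cancel instead of drifting the accumulated MAC sum.
x = [1.0, 2.0, 3.0, 4.0]
drift = [1.5, 2.5, 3.5, 4.5]   # every error is -0.5, sum drifts by -2
cancel = [1.5, 1.5, 3.5, 3.5]  # errors -0.5, +0.5, -0.5, +0.5, sum drifts by 0
print(mse_pp(x, drift), mse_pp(x, cancel))  # 1.25 0.25
```

Both candidates have MSE $0.25$, but the drifting one is penalized by the squared signed term, which is exactly the accumulation-drift behavior MSE++ is designed to suppress.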
Using MSE++ resulted in direct quantization inference accuracy improvements of 0.5\% to 10\% over MSE for each evaluated network and nearly all combinations of group size, number of shifts, and SWIS configuration. When fine-tuning is not practical, MSE++ still outperforms pure MSE with the coefficient set to one.
\subsection{SWIS Grouping} \label{sec:swiss_sched:swiss_group} The previous analysis of different quantization granularities assumes that the group size is one, but that does not result in an efficient hardware implementation or storage compression. However, increasing the group size increases the quantization error and impacts network accuracy, as the shift values need to be shared by the entire group of weights. Figure \ref{fig:resnet_comparision} shows the inference accuracy of ResNet-18 on ImageNet with different group sizes and numbers of shift values. As expected, inference accuracy drops as group size increases, but the exact amount differs significantly for different numbers of shift values. SWIS performs better than SWIS-C when the number of shift values is small, but their performance converges as the number of shift values increases, which verifies the analysis in Section \ref{sec:swiss_sched:gran}. For a group size of 4, which tends to be a good accuracy/efficiency trade-off point, we need 3 shifts to maintain performance similar to the 8-bit baseline. In the next section, we discuss how to obtain even finer granularity in the number of shifts being used. \par
\begin{figure}
\caption{ResNet-18 top-1 inference accuracy for different group sizes and number of shift values.}
\label{fig:resnet_comparision}
\end{figure}
\subsection{SWIS Scheduling} \label{sec:swiss_sched:sched} Within a layer, not all filters are equally sensitive to the loss of precision. SWIS scheduling takes advantage of this to decrease the quantization error calculated using MSE++ for a given layer compared to the quantization error achieved by naively quantizing the entire layer to the same number of shifts. We do this by increasing the number of shifts for some filters while decreasing it for others to keep the total number of shifts constant for the layer. This scheduling approach's main benefit is that it allows us to choose an average quantization level that would not be possible without filter scheduling. For instance, it allows the \textit{double-shift} architecture to use a target number of shifts that is not an even number without under-utilizing the hardware.\par The SWIS scheduling heuristic starts by placing all filters at a number of shifts higher than the target value. We then calculate the MSE++ cost of decreasing the number of shifts used to quantize each filter by one shift. The filters are then sorted based on this cost, and the lowest cost $n$ filters are moved down to the next lowest shift. The new cost for the filters which changed their number of shift values are then recomputed, and the filter costs are sorted again to find the $n$ lowest cost filters. This process is repeated until the average number of shifts in the layer is equal to the target number of shifts. At this point, the filters are sorted based on their number of shifts. \par The above method does not guarantee that all filters scheduled simultaneously on the systolic array have the same number of shifts, a restriction that is necessary to ensure simple scheduling and the absence of synchronization issues. To enforce such behavior, the second part of the algorithm assigns the number of shifts to each group of filters that are scheduled simultaneously, based on previous ordering. 
We first enumerate the possible per-filter-group number of shift assignment sequences that are nondecreasing and guarantee the desired overall average number of shifts per layer. For each sequence, we compute the quantization error and select the combination with the lowest error.\par All SWIS variations benefit from scheduling on all benchmarks and the benefit is larger for lower base accuracy. Accuracy improvement using SWIS scheduling for ResNet-18 \textit{single-shift} is shown in Table \ref{tab:schedule_accuracy}.
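The first, greedy phase of the scheduling heuristic can be sketched as follows. This is our paraphrase of the procedure described above, with illustrative cost tables; the second phase (grouping filters for simultaneous scheduling) is omitted for brevity:

```python
import heapq

# Greedy sketch of SWIS scheduling, phase one: start every filter above
# the target shift count, then repeatedly demote the filter whose next
# demotion increases quantization error (MSE++ in the paper) the least,
# until the layer-wide average reaches the target.

def schedule_shifts(costs, start, target_avg):
    """costs[f][n]: quantization error of filter f when using n shifts.
    Returns per-filter shift counts averaging target_avg."""
    n_filters = len(costs)
    shifts = [start] * n_filters
    heap = [(costs[f][start - 1] - costs[f][start], f) for f in range(n_filters)]
    heapq.heapify(heap)
    while sum(shifts) / n_filters > target_avg:
        _, f = heapq.heappop(heap)           # cheapest demotion
        shifts[f] -= 1
        if shifts[f] > 1:                    # recompute this filter's next cost
            heapq.heappush(heap, (costs[f][shifts[f] - 1] - costs[f][shifts[f]], f))
    return shifts

# Hypothetical error tables (index = shift count): filters 1 and 3 are
# sensitive to demotion, filters 0 and 2 are not.
costs = [[99, 9, 4, 1, 0],
         [99, 50, 20, 5, 0],
         [99, 8, 3, 1, 0],
         [99, 60, 25, 6, 0]]
print(schedule_shifts(costs, start=4, target_avg=3))  # [2, 4, 2, 4]
```

The insensitive filters absorb the precision loss (dropping to 2 shifts) while the sensitive ones keep 4, reaching the target average of 3 shifts at a lower total error than a uniform assignment.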
\begin{small} \begin{table}[htbp]
\centering
\caption{ResNet-18 top-1 accuracy with SWIS scheduling for single- and double-shift PEs, compared to a single-shift PE accuracy with no scheduling for different systolic array (SA) sizes. PE group size is 4.}
\begin{tabularx}{\linewidth}{X|X|X|X|X|X|X}
\toprule
\multicolumn{1}{c|}{} & \multicolumn{3}{c|}{2 Shift \% Accuracy} & \multicolumn{3}{c}{2.5 Shift \% Accuracy} \\
\midrule
\multicolumn{1}{l|}{SA} & \multicolumn{1}{l|}{Single} & \multicolumn{1}{l|}{Double} & \multicolumn{1}{l|}{None} & \multicolumn{1}{l|}{Single} & \multicolumn{1}{l|}{Double} & \multicolumn{1}{l}{None} \\
\midrule
8 & 65.9 & 66.0 & 61.4 & 68.5 & 67.9 & N/A \\
16 & 65.0 & 63.9 & 61.4 & 68.3 & 67.7 & N/A \\
\midrule
\multicolumn{1}{c|}{} & \multicolumn{3}{c|}{3 Shift \% Accuracy} & \multicolumn{3}{c}{4 Shift \% Accuracy} \\
\midrule
\midrule
8 & 69.2 & 68.6 & 68.3 & 69.5 & 69.5 & 69.05 \\
16 & 69.1 & 68.3 & 68.3 & 69.4 & 69.4 & 69.05 \\
\bottomrule
\end{tabularx}
\label{tab:schedule_accuracy} \end{table} \end{small}
\section{Evaluation \& Results} \label{sec:eval_results}
All PE area, power, and latency numbers are derived from synthesis results in a commercial 28nm library using the Cadence Genus tool. We used SCALE-Sim, a systolic array simulator, to obtain cycle-accurate execution traces \cite{Samajdar2018NnScaleSim}. For all experiments, we use an $8\times8$ bit-serial systolic array with 64KB activation and weight buffers and a 16KB output buffer. The PE group size is set to 4, as it provides a good balance between performance and accuracy. We compare the following versions of SWIS: single-shift SWIS-SS, double-shift SWIS-DS, single-shift consecutive SWIS-C-SS, and double-shift consecutive SWIS-C-DS.
As a baseline, we use a systolic array with conventional (\textit{single-shift}) bit-serial PEs using per-layer activation truncation. Computation is done in the same way as in \cite{Judd2017NnStripes}; however, the accelerator organization is different. We also compare to the same architecture but using weight truncation. Further, we compare SWIS to BitFusion, a systolic array using decomposable arithmetic \cite{Sharma2018NnBitFusion}. The area and energy numbers have been scaled appropriately to 28nm whenever necessary. We evaluate BitFusion using 4-bit weights and 8-bit activations, as the architecture is constrained to power-of-2 precision. Finally, we include conventional 8-bit fixed-point numbers for reference. All configurations have the same amount of on-chip memory. All comparison points use the same systolic array size ($8\times8$), as it allows us to isolate the benefits coming from each scheme. We evaluate performance only on the convolutional layers of the tested networks, as they dominate overall performance and latency. We leave SWIS optimizations targeting fully-connected layers for future work. \par
For network accuracy evaluation, we use PyTorch and implement all custom quantization functions using PyTorch's built-in functions. Table \ref{tab:directquanttab} lists the networks and datasets we used as benchmarks and their baseline accuracies. We select ResNet-18 and MobileNet-v2 on ImageNet 2012 and VGG-16 \cite{Simonyan2014VeryRecognition} on CIFAR-100 to evaluate the results. For MobileNet-v2, the floating-point weights are downloaded from PyTorch's model zoo and then retrained for 10 epochs with 8-bit quantization to generate the 8-bit baseline weights, as MobileNet-v2 performs poorly with post-training INT8 quantization. For ResNet-18, the 8-bit baseline is the layer-wise static INT8 quantization of PyTorch's pretrained floating-point weights. For VGG-16, the network structure is adjusted slightly to fit the CIFAR-100 dataset and trained from scratch for 100 epochs to obtain the baseline accuracy. For quantization-aware retraining, all baseline results are trained for 10 epochs with learning rate decay. Some SWIS variants also fine-tune based on the scheduling algorithm's output to enable odd numbers of shifts (for DS) and half shifts. All activations are also quantized to 8 bits unless specified. \par
We use the method introduced in Section \ref{sec:swiss_sched:shsel} for SWIS weight quantization. To simulate the activation quantization in \cite{Judd2017NnStripes, Delmas2018NnDpredStripes}, we implemented a layer-wise LSB truncation algorithm on all activations, where the last $8-N$ bits are truncated and $N$ is the number of shifts allowed. When reporting the number of shifts for a given configuration, we report the "effective" number of shifts across the whole network, averaged across all of the weights. \par
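The layer-wise LSB truncation used as the activation baseline amounts to keeping the top $N$ of 8 bits; a minimal model (our assumption of the behavior) is:

```python
# Layer-wise LSB truncation model: keep the top N of 8 bits by zeroing
# the bottom 8-N bits of each (unsigned) activation value.

def truncate_lsb(value, n_shifts, bitwidth=8):
    drop = bitwidth - n_shifts
    return (value >> drop) << drop

print(truncate_lsb(0b10110111, 4))  # 0b10110000 = 176
```

Because the kept bit positions are fixed for the entire layer, this corresponds to the layer-wise static quantization analyzed in Section \ref{sec:swiss_sched:gran}, which is why its accuracy degrades so quickly at low shift counts in Table \ref{tab:directquanttab}.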
\subsection{Network Accuracy Evaluation} \label{sec:eval_results:acc}
\subsubsection{Post-training Quantization} In this section we compare the accuracy of the four SWIS configurations to layer-wise activation truncation (similar to the approach used in \cite{Judd2017NnStripes}) and layer-wise weight truncation + clipping, which is a standard baseline method for weight quantization. Table \ref{tab:directquanttab} shows the post-training quantization accuracy for all SWIS configurations on three networks, along with baselines for 32-bit floating point and 8-bit integer quantization. All SWIS/SWIS-C results are after scheduling. SWIS configurations outperform weight and activation truncation in all cases. In general, SWIS outperforms SWIS-C, and SS slightly outperforms DS due to its better scheduling flexibility. In most cases, the accuracy difference between DS and SS is small, and DS is preferred due to its better hardware efficiency. The accuracy difference between SWIS and SWIS-C depends on the network: the gap is relatively small for more redundant networks like VGG-16 on CIFAR-100, while it is large for MobileNet-v2, where SWIS shows the advantage of its bit-sparsity quantization. Post-training activation quantization (as in \cite{Judd2017NnStripes}) below 8 bits has unusably low accuracy. Even for weight quantization, for example at 4 bits (or shifts), SWIS has 9.3\%, 54.3\%, and 1.5\% higher accuracy than conventional quantization for ResNet-18, MobileNet-v2, and VGG-16, respectively.
\begin{footnotesize} \begin{table}[htbp]
\centering
\caption{Post-training quantization top-1 accuracy of the three networks, using different algorithm and hardware setups. Wgt. and Act. denote weight truncation and activation truncation, respectively. Results for weight and activation truncation with 6 and 7 shifts are included for reference.}
\begin{tabularx}{\linewidth}{X|X|X|X|X|X|X}
\toprule
& \multicolumn{2}{c|}{SWIS} & \multicolumn{2}{c|}{SWIS-C} & \multicolumn{2}{c}{Trunc.}\\
N\_shift & SS & DS & SS & DS & Wgt. & Act. \\
\midrule
\multicolumn{7}{c}{Resnet-18 ImageNet (Baseline FP32: 69.6 and INT8: 69.5)} \\
\midrule
\midrule
2 & 65.9 & 66.0 & 62.2 & 62.5 & 3.6 & 0.1 \\
2.5 & 68.5 & 67.9 & 66.8 & 66.6 & N/A & N/A \\
3 & 69.1 & 68.6 & 68.6 & 68.0 & 30.8 & 0.1\\
4 & 69.5 & 69.5 & 69.4 & 69.3 & 60.2 & 45.9 \\
6 & / & / & / & / & 69.2 & 66.7 \\
7 & / & / & / & / & 69.5 & 69.1 \\
\midrule
\multicolumn{7}{c}{MobileNet-v2 ImageNet (Baseline FP32: 71.9 and INT8: 70.1)} \\
\midrule
\midrule
3 & 58.3 & 28.8 & 41.2 & 30.5 & 0.6 & 0.1 \\
3.5 & 65.6 & 55.8 & 47.4 & 43.4 & N/A & N/A \\
4 & 67.5 & 67.2 & 65.4 & 67.2 & 13.2 & 0.3 \\
5 & 69.9 & 68.3 & 68.4 & 68.3 & 60.6 & 25.8 \\
6 & / & / & / & / & 68.0 & 60.3 \\
7 & / & / & / & / & 70.1 & 68.1 \\
\midrule
\multicolumn{7}{c}{VGG-16 CIFAR100 (Baseline FP32: 64.8 and INT8: 64.8)} \\
\midrule
\midrule
2 & 61.3 & 61.4 & 56.1 & 57.9 & 31.1 & 1.0 \\
2.5 & 63.6 & 62.7 & 61.5 & 60.8 & N/A & N/A \\
3 & 64.5 & 63.3 & 63.4 & 62.6 & 60.5 & 3.6 \\
4 & 64.7 & 64.7 & 64.5 & 64.7 & 63.2 & 24.7 \\
6 & / & / & / & / & 64.7 & 62.8 \\
7 & / & / & / & / & 64.8 & 64.1 \\
\bottomrule
\end{tabularx}
\label{tab:directquanttab} \end{table} \end{footnotesize} \begin{scriptsize}
\begin{table*}[htbp]
\centering
\caption{Energy (Frames/J) and latency (Frames/s) comparison between different SWIS configurations, bit-serial with activation and weight truncation, BitFusion, and 8-bit fixed-point, at different accuracy points for different network models and datasets. The best Fr/J and Fr/s for each accuracy point are highlighted. "S" indicates the number of shifts used.}
\begin{tabular}{l|rrrrrrrrrrrrrrrrrrrrr|rr}
\toprule
Archi- & \multicolumn{6}{c|}{SWIS} & \multicolumn{6}{c|}{SWIS-C} & \multicolumn{6}{c|}{Trunc} & \multicolumn{3}{c|}{Bit} & \multicolumn{2}{c}{8-bit } \\
tecture & \multicolumn{3}{c|}{SS} & \multicolumn{3}{c|}{DS} & \multicolumn{3}{c|}{SS} & \multicolumn{3}{c|}{DS} & \multicolumn{3}{c|}{Act} & \multicolumn{3}{c|}{Wgt} & \multicolumn{3}{c|}{Fusion $4\times8$} & \multicolumn{2}{c}{FXP} \\
\midrule
Area [$mm^2$] & \multicolumn{3}{c|}{0.54} & \multicolumn{3}{c|}{0.55} & \multicolumn{3}{c|}{0.54} & \multicolumn{3}{c|}{0.55} & \multicolumn{3}{c|}{0.54} & \multicolumn{3}{c|}{0.54} & \multicolumn{3}{c|}{0.57} & \multicolumn{2}{c}{0.54} \\
\midrule
Network & \multicolumn{23}{c}{ResNet-18 ImageNet} \\
\midrule
\midrule
Accuracy & S & F/J & \multicolumn{1}{r|}{F/s} & S & F/J & \multicolumn{1}{r|}{F/s} & S & F/J & \multicolumn{1}{r|}{F/s} & S & F/J & \multicolumn{1}{r|}{F/s} & S & F/J & \multicolumn{1}{r|}{F/s} & S & F/J & \multicolumn{1}{r|}{F/s} & S & F/J & F/s & F/J & F/s \\
\midrule
>69.1\% & 3 & 317.8 & \multicolumn{1}{r|}{28.6} & 4 & 292.5 & \multicolumn{1}{r|}{\textbf{42.9}} & 4 & 326.3 & \multicolumn{1}{r|}{21.4} & 4 & \textbf{353.6} & \multicolumn{1}{r|}{\textbf{42.9}} & 7 & 215.8 & \multicolumn{1}{r|}{12.2} & 6 & 230.7 & \multicolumn{1}{r|}{14.3} & - & - & - & 238.5 & 23.2 \\
>60.2\% & 2 & 390.8 & \multicolumn{1}{r|}{42.9} & 2 & 416.5 & \multicolumn{1}{r|}{\textbf{85.7}} & 2 & 410.6 & \multicolumn{1}{r|}{42.9} & 2 & \textbf{439.1} & \multicolumn{1}{r|}{\textbf{85.7}} & 6 & 230.7 & \multicolumn{1}{r|}{14.3} & 4 & 267.7 & \multicolumn{1}{r|}{21.4} & 4 & 218.9 & 42.9 & - & - \\
\midrule
Network & \multicolumn{23}{c}{MobileNet V2 ImageNet} \\
\midrule
\midrule
Accuracy & S & F/J & \multicolumn{1}{r|}{F/s} & S & F/J & \multicolumn{1}{r|}{F/s} & S & F/J & \multicolumn{1}{r|}{F/s} & S & F/J & \multicolumn{1}{r|}{F/s} & S & F/J & \multicolumn{1}{r|}{F/s} & S & F/J & \multicolumn{1}{r|}{F/s} & S & F/J & F/s & F/J & F/s \\
\midrule
>68.0\% & 5 & 475.6 & \multicolumn{1}{r|}{4.0} & 5 & 490.0 & \multicolumn{1}{r|}{\textbf{8.0}} & 5 & \textbf{496.3} & \multicolumn{1}{r|}{4.0} & 6 & 495.8 & \multicolumn{1}{r|}{6.7} & 7 & 456.1 & \multicolumn{1}{r|}{2.9} & 6 & 466.1 & \multicolumn{1}{r|}{3.3} & - & - & - & 391.2 & 6.1 \\
>60.3\% & 3.5 & 511.4 & \multicolumn{1}{r|}{5.7} & 4 & 511.6 & \multicolumn{1}{r|}{\textbf{10.0}} & 4 & 515.8 & \multicolumn{1}{r|}{10.0} & 4 & \textbf{529.4} & \multicolumn{1}{r|}{10.0} & 6 & 466.1 & \multicolumn{1}{r|}{3.3} & 5 & 476.6 & \multicolumn{1}{r|}{4.0} & - & - & - & - & - \\
\midrule
Network & \multicolumn{23}{c}{VGG-16 CIFAR-100} \\
\midrule
\midrule
Accuracy & S & F/J & \multicolumn{1}{r|}{F/s} & S & F/J & \multicolumn{1}{r|}{F/s} & S & F/J & \multicolumn{1}{r|}{F/s} & S & F/J & \multicolumn{1}{r|}{F/s} & S & F/J & \multicolumn{1}{r|}{F/s} & S & F/J & \multicolumn{1}{r|}{F/s} & S & F/J & F/s & F/J & F/s \\
\midrule
>64.1\% & 3 & 763.6 & \multicolumn{1}{r|}{124.7} & 4 & 626.5 & \multicolumn{1}{r|}{\textbf{187.1}} & 4 & 815.1 & \multicolumn{1}{r|}{93.5} & 4 & \textbf{843.5} & \multicolumn{1}{r|}{\textbf{187.1}} & 7 & 553.0 & \multicolumn{1}{r|}{53.4} & 6 & 569.5 & \multicolumn{1}{r|}{62.4} & - & - & - & 522.3 & 94 \\
>62.5\% & 2.5 & 878.2 & \multicolumn{1}{r|}{149.7} & 2.5 & 905.6 & \multicolumn{1}{r|}{299.3} & 3 & 942.1 & \multicolumn{1}{r|}{124.7} & 3 & \textbf{980.3} & \multicolumn{1}{r|}{\textbf{299.3}} & 6 & 569.5 & \multicolumn{1}{r|}{62.4} & 4 & 605.6 & \multicolumn{1}{r|}{93.5} & 4 & 799.8 & \multicolumn{1}{r|}{187.1} & - & - \\
\bottomrule
\end{tabular}
\label{eval:perf} \end{table*} \end{scriptsize} \subsubsection{Quantization-aware Retraining} Though our focus is on the energy/latency benefits of SWIS for post-training quantization, retraining can reduce the number of shift values needed by 1-3 shifts. This is especially helpful for MobileNet-v2, since it needs more shifts than the other networks to maintain accuracy under post-training quantization. During retraining, the shift value selection is treated as a special quantization and is updated for each input batch. The shift value selection is applied to quantize the weights in the forward pass, and the error is back-propagated to update the weights, as in conventional quantization-aware training. Table \ref{tab:retraintab} shows the retraining results: all SWIS configurations outperform weight truncation in all cases (5\%, 19.8\%, and 4.5\% point accuracy gains over conventional quantization at 2 shifts for the three networks, respectively). For ResNet-18, SWIS at 2 shifts in all its variants is {\em far superior} in accuracy to conventional quantization at 3 shifts.
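The shift-based weight representation that the retraining step selects can be illustrated with a small sketch. The following greedy routine is our own illustrative construction (not the paper's exact selection algorithm): it approximates each weight by a sum of \verb|n_shifts| signed powers of two on a fixed-point grid, the general form of quantization SWIS operates on; the \verb|frac_bits| parameter and the greedy residual loop are assumptions made here for concreteness.

```python
import numpy as np

def quantize_shifts(w, n_shifts, frac_bits=4):
    """Approximate each weight by a sum of `n_shifts` signed powers of
    two on a fixed-point grid with `frac_bits` fractional bits.
    Greedy residual-matching sketch, not the paper's exact rule."""
    scale = 2.0 ** frac_bits
    residual = w * scale              # work on the integer grid
    q = np.zeros_like(residual)
    for _ in range(n_shifts):
        mag = np.abs(residual)
        # nearest power-of-two exponent, clipped to the representable range
        shift = np.clip(np.round(np.log2(np.maximum(mag, 1e-12))), 0, frac_bits)
        # add the signed power of two, skipping residuals below half an LSB
        term = np.sign(residual) * 2.0 ** shift * (mag > 0.5)
        q += term
        residual -= term
    return q / scale
```

For example, with \verb|frac_bits=4| the weight $-0.8125 = -13/16$ is recovered exactly with 3 shifts ($-16+4-1$), while 2 shifts leave a one-LSB error, mirroring the accuracy-versus-shifts tradeoff visible in the tables.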
\begin{footnotesize} \begin{table}[htbp]
\centering
\caption{Retraining top-1 accuracy of the three networks, using different algorithm and network setups}
\begin{tabularx}{\linewidth}{X|X|X|X|X|X}
\toprule
& \multicolumn{2}{c|}{SWIS} & \multicolumn{2}{c|}{SWIS-C} & Trunc. \\
N\_shift & SS & DS & SS & DS & Wgt. \\
\midrule
\multicolumn{6}{c}{Resnet-18 ImageNet} \\
\midrule
\midrule
2 & 68.3 & 68.3 & 68.1 & 68.1 & 63.3 \\
3 & 69.1 & 68.7 & 68.4 & 68.3 & 66.3 \\
\midrule
\multicolumn{6}{c}{MobileNet-v2 ImageNet} \\
\midrule
\midrule
2 & 67.4 & 67.4 & 65.5 & 65.5 & 47.6 \\
2.5 & 68.0 & 67.8 & 66.9 & 66.0 & N/A \\
3 & 69.3 & 68.5 & 69.0 & 67.2 & 65.8 \\
\midrule
\multicolumn{6}{c}{VGG-16 CIFAR100} \\
\midrule
\midrule
2 & 64.1 & 64.1 & 64 & 64 & 59.6 \\
\bottomrule
\end{tabularx}
\label{tab:retraintab} \end{table} \end{footnotesize}
\subsection{Performance Comparison} \label{sec:eval_results:perf}
Performance results, in terms of frames per Joule (F/J) and frames per second (F/s), for each evaluated configuration are listed in Table \ref{eval:perf}. Each SWIS configuration is evaluated at 2 accuracy points, with corresponding activation- and weight-truncation results, as well as BitFusion $4\times8$ where applicable. First, we show that SWIS-SS can be between $1.75\times$ and $4.8\times$ faster than activation-truncation bit-serial. For SWIS-DS that speedup ranges from $2.8\times$ to $6\times$. SWIS can also improve energy efficiency by 1.04-1.7$\times$ and 1.1-1.9$\times$ for SWIS-SS and SWIS-DS respectively, due to weight compression and more efficient computation. When using the same number of shifts, SWIS-C has higher energy efficiency than SWIS, but that benefit is often offset when additional shifts are required to maintain iso-accuracy. \par Even when comparing to weight truncation, SWIS offers up to $1.6\times$ and $3.2\times$ speedup for SWIS-SS and SWIS-DS respectively, with up to $1.6\times$ reduction in energy across all SWIS configurations. Compared with iso-accuracy BitFusion, SWIS can have up to $2\times$ lower latency and up to $1.9\times$ lower energy consumption, thanks to SWIS's ability to reduce the number of bits used much more aggressively, improving both storage compression and computation energy efficiency.
\section{Conclusion} In this work, we propose SWIS, a framework for neural network quantization for efficient inference on edge devices. We show that conventional bit-serial designs do not fully utilize their flexibility, as most of them apply serial computation only to activations. We utilize the bit-level sparsity inherent in weights to quantize them beyond the conventional "prefix" or "suffix" style truncation. For example, SWIS quantization can achieve MobileNet-v2 accuracy within 1\% of INT8 with 5 effective bits {\em without} any retraining, and with 3 bits after retraining. For bit-serial architectures, SWIS compresses weights and improves latency and energy by as much as $6\times$ and $1.9\times$, respectively, without loss of accuracy. Based on SWIS, we further propose SWIS-C and double-shift SWIS (SWIS-DS), one for better weight compression and the other for better hardware efficiency. Further, we develop a filter scheduling algorithm to allow a fine-grained tradeoff between accuracy and energy/latency. Our ongoing work includes design space exploration of SWIS systolic array architectures as well as approaches for efficient SWIS execution of fully connected layers.
\end{document}
\begin{document}
\begin{titlepage}
\title{Robust Phase Transitions for Heisenberg and Other Models on General Trees} \author{Robin Pemantle\thanks{Research partially supported by a Presidential Faculty Fellowship and a Sloan Foundation Fellowship.} \and Jeffrey E. Steif\thanks{Research supported by grants from the Swedish Natural Science Research Council and from the Royal Swedish Academy of Sciences.}
\and \\ \it University of Wisconsin-Madison
and Chalmers University of Technology } \date{} \maketitle
\begin{abstract} We study several statistical mechanical models on a general tree. Particular attention is devoted to the classical Heisenberg models, where the state space is the $d$--dimensional unit sphere and the interactions are proportional to the cosines of the angles between neighboring spins. The phenomenon of interest here is the classification of phase transition (non-uniqueness of the Gibbs state) according to whether it is {\it robust}. In many cases, including all of the Heisenberg and Potts models, occurrence of robust phase transition is determined by the geometry (branching number) of the tree in a way that parallels the situation with independent percolation and usual phase transition for the Ising model. The critical values for robust phase transition for the Heisenberg and Potts models are also calculated exactly. In some cases, such as the $q\ge 3$ Potts model, robust phase transition and usual phase transition do not coincide, while in other cases, such as the Heisenberg models, we conjecture that robust phase transition and usual phase transition are equivalent. In addition, we show that symmetry breaking is equivalent to the existence of a phase transition, a fact believed but not known for the rotor model on $Z\!\!\!Z^2$. \end{abstract}
\noindent AMS 1991 subject classifications. Primary 60K35, 82B05, 82B26. \\ Key words and phrases. phase transitions, symmetry breaking, Heisenberg models.\\ Running head: Phase transitions for Heisenberg models. \end{titlepage}
\setcounter{equation}{0}
\section{Definition of the model and main results} \label{sec:one}
Particle systems on trees have produced the first and most tractable examples of certain qualitative phenomena. For example, the contact process on a tree has multiple phase transitions (\cite{Pem,Lig2,Sta}), and the critical temperature for the Ising model on a tree is determined by its branching number or Hausdorff dimension (\cite{Ly1,EKPS,PP}), which makes the Ising model intimately related to independent percolation, whose critical value is also determined by the branching number (see \cite{Ly2}). In this paper we study several models on general infinite trees, including the classical Heisenberg and Potts models. Our aim is to exhibit a distinction between two kinds of phase transitions, {\it robust} and {\it non-robust}, as well as to investigate conditions under which robust phase transitions occur.
In many cases, including the Heisenberg and Potts models, the existence of a robust phase transition is determined by the branching number. However, in some cases (including the $q > 2$ Potts model), the critical temperature for the existence of usual phase transition is not determined by the branching number. Thus robust phase transition behaves in a more universal manner than non-robust phase transition, being a function of the branching number alone, as it is for usual phase transition for independent percolation and the Ising model. Although particle systems on trees do not always predict the qualitative behavior of the same particle system on high-dimensional lattices, it seems likely that there is a lattice analogue of non-robust phase transition, which would make an interesting topic for further research. Another unresolved question is whether there is ever a non-robust phase transition for the Heisenberg models (see Conjecture~\ref{conj:PS}).
We proceed to define the general statistical ensemble on a tree and to state the main results of the paper. Let $G$ be a compact metrizable group acting transitively by isometries on a compact metric space $({\bf S},d)$. It is well known that there exists a unique $G$--invariant probability measure on ${\bf S}$, which we denote by $d{\bf x}$. An {\bf energy function} is any nonconstant function $H : {\bf S} \times {\bf S} \rightarrow \hbox{I\kern-.2em\hbox{R}}$ that is symmetric, continuous, and $d$--invariant in that $H(x,y)$ depends only on $d(x,y)$. This implies that $$ H(x,y)=H(gx,gy) \,\,\forall \, x,y\in {\bf S}, \,g\in G. $$
${\bf S}$ together with its $G$--action and the function $H$ will be called a {\bf statistical ensemble}. Several examples with which we will be concerned are as follows.
\begin{eg} \label{eg:ising} The Ising model. Here ${\bf S} = \{ 1 , -1 \}$ acted on by itself (multiplicatively), $d$ is the usual discrete metric, $d{\bf x}$ is uniform on ${\bf S}$, and $H (x , y) = - xy$. \end{eg}
\begin{eg} \label{eg:potts} The Potts model. Here ${\bf S} = \{ 0 , 1 , \ldots , q-1 \}$ for some integer $q > 1$, $G$ is the symmetric group $S_q$ with its natural action, $d$ is the usual discrete metric, $d{\bf x}$ is uniform on ${\bf S}$, and $H (x , y) = 1 - 2 \delta_{x,y}$. This reduces to the Ising model when $q = 2$. \end{eg}
\begin{eg} \label{eg:rotor} The rotor model. Here ${\bf S}$ is the unit circle, acted on by itself by translations, $d(\theta , \phi) = 1- \cos (\theta - \phi)$, $d{\bf x}$ is normalized Lebesgue measure, and $H (\theta , \phi) = - \cos (\theta - \phi)$. \end{eg}
\begin{eg} \label{eg:spherical} The Heisenberg models for $d \ge 1$. In the $d$--dimensional Heisenberg model, ${\bf S}$ is the unit sphere $S^d$, $G$ is the special orthogonal group with its natural action; $d(x,y)$ is $1-x\cdot y$, $d{\bf x}$ is normalized surface measure, and $H (x , y)$ is again the negative of the dot product of $x$ and $y$. When $d=1$, we recover the rotor model. \end{eg}
Let $A$ be any finite graph, with vertex and edge sets denoted by $V(A)$ and $E(A)$ respectively, and let ${\cal J} : E (A) \rightarrow \hbox{I\kern-.2em\hbox{R}}^+$ be a function mapping the edge set of $A$ to the nonnegative reals which we call {\bf interaction strengths}. We now assume that ${\bf S}$, $G$ and $H$ are given and fixed.
\begin{defn} \label{defn:Gibbs} The {\bf Gibbs measure} with interaction strengths ${\cal J}$ is the probability measure $\mu=\mu^{{\cal J}}$ on ${\bf S}^{V(A)}$ whose density with respect to product measure $d{\bf x}^{V(A)}$ is given by $$ { \exp (- H^{\cal J} (\eta)) \over Z},\,\,\,\, \eta\in {\bf S}^{V(A)} $$ where $$ H^{\cal J}(\eta) = \sum_{e = \overline{xy} \in E(A)}
{\cal J} (e) H(\eta (x) , \eta (y)) ,$$ and $Z = \int \exp (- H^{\cal J}(\eta)) \, d{\bf x}^{V(A)}$ is a normalization. \end{defn}
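As a short worked example (ours, for concreteness), consider the Ising model of Example~\ref{eg:ising} on a graph with a single edge $e = \overline{xy}$ and ${\cal J}(e) = J$. Then $H^{\cal J}(\eta) = -J\,\eta(x)\eta(y)$, the normalization is $Z = \int e^{J\eta(x)\eta(y)}\,d{\bf x}^{V(A)} = \cosh J$, and since $d{\bf x}^{V(A)}$ gives each of the four configurations mass $1/4$, the Gibbs measure assigns probability
$$
\mu^{\cal J}(\eta) \;=\; \frac{e^{\,J\,\eta(x)\eta(y)}}{4\cosh J}\,,
\qquad \eta \in \{1,-1\}^{\{x,y\}} ,
$$
so aligned pairs $\eta(x)=\eta(y)$ are favored by a factor $e^{2J}$ over anti-aligned pairs.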
In statistical mechanics, one wants to define Gibbs measures on infinite graphs $A$, in which case the above definition of course does not make sense. We follow the usual approach (see~\cite{Ge}), in which one introduces boundary conditions and takes weak limits along finite subgraphs increasing to $A$. Since the precise nature of the boundary conditions plays a role here (we know this to be true at least for the Potts model with $q > 2$), we handle boundary conditions with extra care and, unfortunately, notation. We give definitions in the case of a rooted tree, though the extensions to general locally finite graphs are immediate. By a {\bf tree}, we mean any connected loopless graph $\Gamma$ where every vertex has finite degree. One fixes a vertex $o$ of $\Gamma$, which we call the {\bf root}, obtaining a {\bf rooted tree}.
The vertex set of $\Gamma$ is denoted by $V(\Gamma)$. If $x$ is a vertex, we write $|x|$ for the number of edges on the shortest path from $o$ to $x$ and for two vertices $x$ and $y$, we write $|x-y|$ for the number of edges on the shortest path from $x$ to $y$. For vertices $x$ and $y$, we write $x \le y$ if $x$ is on the shortest path from $o$ to $y$, $x < y$ if $x \le y$ and $x \ne y$, and $x \to y$ if $x \le y$ and
$|y|=|x|+1$. For $x \in V(\Gamma)$, the tree $\Gamma (x)$ denotes the subtree of $\Gamma$ rooted at $x$ consisting of $x$ and all of its descendants. We also define $\partial\Gamma$, which we refer to as the boundary of $\Gamma$, to be the set of infinite self-avoiding paths starting from $o$. Throughout the paper, the following assumption is in force.
\noindent{\bf ASSUMPTION:} For all trees considered in this paper, the number of children of the vertices will be assumed bounded and we will denote this bound by $B$.
A {\bf cutset} $C$ is a finite set of vertices not including $o$ such that every self-avoiding infinite path from $o$ intersects $C$ and such that there is no pair $x , y \in C$ with $x < y$. Given a cutset $C$, $\Gamma \backslash C$ has one finite component (which contains $o$) which we denote by $C^i$ (``i'' for inside) and we let $C^o$ (``o'' for outside) denote the union of the infinite components of $\Gamma \backslash C$. We say that a sequence $\{ C_n \}$ of cutsets approaches $\infty$ if for all $v \in \Gamma$, $v \in C_n^i$ for all sufficiently large $n$.
Boundary conditions will take the form of specifications of the value of $\eta$ at some cutset $C$. Let $\delta$ be any element of ${\bf S}^C$. The Gibbs measure with boundary condition $\delta$ is the probability measure $\mu^\delta_C = \mu^{{\cal J} , \delta}_C$ on ${\bf S}^{C^i}$ whose density with respect to product measure $d{\bf x}^{C^i}$ is given by \begin{equation} \label{eq:Gibbs} { \exp (- H^{{\cal J} , \delta}_C (\eta)) \over Z},\,\,\,\, \eta\in {\bf S}^{C^i} \end{equation} where $$ H^{{\cal J} , \delta}_C (\eta) = \sum_{e = \overline{xy} \in E(\Gamma)
\atop x,y \in C^i} {\cal J} (e) H(\eta (x) , \eta (y))
+ \sum_{e = \overline{xy} \in E(\Gamma) \atop x \in C^i, y \in C}
{\cal J} (e) H(\eta (x) , \delta (y)) $$ and $Z = \int \exp (- H^{{\cal J} , \delta}_C (\eta)) \, d{\bf x}^{C^i}$ is a normalization. When we don't include the second summand above, we call this the {\it free} Gibbs measure on $C^i$, denoted by $\mu^{\rm free}_C$, where ${\cal J}$ is suppressed in the notation. As we will see in Lemma~\ref{lem:free}, the free measure does not depend on $C$ except for its domain of definition, so we can later also suppress $C$ in the notation.
\begin{defn} \label{defn:gibbstree} A probability measure $\mu$ on ${\bf S}^{V(\Gamma)}$ is called a {\bf Gibbs state} for the interactions ${\cal J}$ if for each cutset $C$, the conditional distribution on $C^i$ given the configuration $\delta'$ on $C\cup C^o$ is given by $\mu_C^{{\cal J} , \delta}$ where $\delta$ is the restriction of $\delta'$ to $C$. (A similar definition is used for general graphs.) Both in the case of lattices and trees (or for any graph), we say that a statistical ensemble {\bf exhibits a phase transition (PT) for the interaction strengths ${\cal J}$} if there is more than one Gibbs state for the interaction strengths ${\cal J}$. \end{defn}
In the next section we will prove \begin{lem} \label{lem:free} Fix interaction strengths ${\cal J}$ and let $C$ and $D$ be any two cutsets of $\Gamma$. Then the projections of $\mu^{\rm free}_C$ and $\mu^{\rm free}_D$ to ${\bf S}^{C^i \cap D^i}$ are equal. Hence the measures $\mu^{\rm free}_C$ have a weak limit as $C \rightarrow \infty$, denoted $\mu^{\rm free}$. \end{lem}
For general graphs, the measures $\mu^{\rm free}_C$ are not compatible in this way. Also, one has the following fact, which follows from Theorems~4.17 and~7.12 in \cite{Ge}.
\begin{lem} \label{lem:limits} If $\{C_n\}$ is a sequence of cutsets approaching $\infty$ and if for each $n$, $\delta_n\in {\bf S}^{C_n}$, then any weak subsequential limit of the sequence $\{\mu_{C_n}^{{\cal J},\delta_n}\}_{n\ge 1}$ is a Gibbs state for the interactions ${\cal J}$. In addition, if all such possible limits are the same, then there is no phase transition. (A similar statement holds for graphs other than trees.) \end{lem}
We pause for a few remarks about more general graphs, before restricting our discussion to trees for the rest of the paper. Lemma~\ref{lem:free} does not apply to graphs with cycles, so the existence of a unique weak limit $\mu^{\rm free}$ is not guaranteed there, but Lemma~\ref{lem:limits} together with compactness tells us that there always is at least one Gibbs state. The state of knowledge about the rotor model (Example~\ref{eg:rotor}) on more general graphs is somewhat interesting. It is known (see~\cite{Ge}, p.178 and p.434) that for $Z\!\!\!Z^d$, $d \leq 2$, all Gibbs states are rotationally invariant when ${\cal J}\equiv J$ for any $J$ (and it is believed but not known that there is a unique Gibbs state for the rotor model in this case) while for $d \geq 3$, there are values of $J$ for which the rotor model with ${\cal J}\equiv J$ has a Gibbs state whose distribution at the origin is not rotationally invariant (and hence there is more than one Gibbs state). In statistical mechanics, this latter phenomenon is referred to as a {\it continuous symmetry breaking} since we have a continuous state space (the circle) where the interactions are invariant under a certain continuous symmetry (rotations) but there are Gibbs states which are not invariant under this symmetry. We also mention that it is proved in~\cite{C} that for the rotor model with ${\cal J}\equiv J$ for any $J$ on any graph of bounded degree for which simple random walk is recurrent, all the Gibbs states are rotationally invariant. (This was then extended in~\cite{MW} where the condition of boundedness of the degree is dropped and the group involved is allowed to be more general than the circle.) This however is not a sharp criterion: in~\cite{E}, a graph (in fact a tree) is constructed for which simple random walk is transient but such that there is no phase transition in the rotor model when ${\cal J}\equiv J$ for any $J$. 
(This will also follow from Theorem~\ref{th:0hd} below together with the easy fact that there are trees with branching number 1 for which simple random walk is transient.) However, Y.\ Peres has conjectured a sharp criterion, Conjecture~\ref{conj:peres} below, for which our Corollary~\ref{cor:perestree} together with the discussion following it provides some corroboration.
For the rest of this paper, we will restrict to trees. It is usually in this context that the most explicit results can be obtained and our basic goal is to determine whether there is a phase transition by comparing the interaction strengths with the ``size'' (branching number) of our tree. It turns out that we can only partially answer this question but the question which we can answer more completely is whether there is a {\it robust} phase transition, a concept which we will introduce shortly.
\begin{defn} \label{defn:notation} Given ${\cal J},C$ and $\delta$ defined on $C$, let $f^{{\cal J},\delta}_{C , o}$ (or $f^{\delta}_{C , o}$ if ${\cal J}$ is understood) denote the marginal density of $\mu^{{\cal J} , \delta}_{C}$ at the root $o$. \end{defn}
For any tree, recall that $\Gamma (v)$ denotes the subtree rooted at $v$, so that the tree $\Gamma(v)$ has vertex set $\{ w \in \Gamma : v \leq w\}$. If $v\in C^i$ and we intersect $C$ with $\Gamma (v)$, we obtain a cutset $C(v)$ for $\Gamma(v)$. We now extend Definition~\ref{defn:notation} to other marginals as follows.
\begin{defn} \label{defn:marginals} With ${\cal J},C$ and $\delta$ as in Definition~\ref{defn:notation} and $v \in C^i$, define $f_{C,v}^{{\cal J} , \delta}$ by replacing $\Gamma$ by $\Gamma (v)$, $C$ with $C(v)$, ${\cal J}$ with ${\cal J}$ restricted to $E (\Gamma (v))$, $\delta$ with $\delta$ restricted to $C(v)$ and $o$ with $v$ in Definition~\ref{defn:notation}. \end{defn}
It is important to note that $f_{C,v}^{{\cal J},\delta}$ is not the density of the projection of $\mu_C^{{\cal J},\delta}$ onto vertex $v$, but rather the density of a Gibbs measure with similar boundary conditions on the smaller graph $\Gamma (v)$.
\begin{defn} \label{defn:SB} A statistical ensemble on a tree $\Gamma$ exhibits a {\bf symmetry breaking (SB) for the interactions ${\cal J}$} if there exists a Gibbs state such that the marginal distribution at some vertex $v$ is not $G$--invariant (or equivalently is not $d{\bf x}$). \end{defn}
The following proposition, which will be proved in Section~\ref{sec:prelims}, is interesting since it establishes the equivalence of PT and SB for general trees and general statistical ensembles, something not known for general graphs; see the remark below.
\begin{prop} \label{prop:SB=} Consider a statistical ensemble on a tree $\Gamma$ with interactions ${\cal J}$. The following four conditions are equivalent. \\ (i) There exists a vertex $v$ such that for any sequence of cutsets $C_n\to\infty$, there exist boundary conditions $\delta_n$ on $C_n$ such that $$
\inf_n \|f_{C_n,v}^{\delta_n}-1\|_\infty \neq 0. $$ (ii) There exists a vertex $v$, a sequence of cutsets $C_n\to\infty$ and boundary conditions $\delta_n$ on $C_n$ such that $$
\inf_n \|f_{C_n,v}^{\delta_n}-1\|_\infty \neq 0. $$ (iii) The system satisfies SB. \\ (iv) The system satisfies PT. \end{prop}
We now fix a distinguished element in ${\bf S}$, hereafter denoted ${\hat{0}}$. The notation $\mu^{{\cal J} , +}_C$ denotes $\mu^{{\cal J} , \delta}_C$ when $\delta$ is the constant function ${\hat{0}}$. In the case ${\cal J} \equiv J$, we denote this simply $\mu^{J,+}_C$. We will be particularly concerned about whether $\mu^{{\cal J} , +}_C \rightarrow \mu^{\rm free}$ weakly, as $C \rightarrow \infty$.
\begin{defn} \label{defn:SB+} A statistical ensemble on a tree $\Gamma$ exhibits a {\bf symmetry breaking with plus boundary conditions (SB+) for the interactions ${\cal J}$} if there exists a vertex $v$ and a sequence of cutsets $C_n\to\infty$ such that $$
\inf_n \|f_{C_n,v}^{{\cal J},+}-1\|_\infty \neq 0. $$ \end{defn}
Note that by symmetry, SB+ does not depend on which point of ${\bf S}$ is chosen to be ${\hat{0}}$.
In Section~\ref{sub:spherical} we will prove:
\begin{pr} \label{pr:rotor equiv} For the rotor model on a tree, SB is equivalent to SB+. \end{pr}
We conjecture but cannot prove the stronger statement: \begin{conj} \label{conj:SB} For any Heisenberg model on any graph, SB is equivalent to SB+. \end{conj}
\noindent{\em Remarks:} $(i)$ By Proposition~\ref{prop:SB=}, we have that SB+ implies SB for any statistical ensemble on a tree. While Proposition~\ref{prop:SB=} tells us that PT and SB are equivalent for any statistical ensemble on a tree, we note that such a result is not even known for the rotor model on $Z\!\!\!Z^2$ where it has been established that for all $J$, all Gibbs states are rotationally invariant for ${\cal J}\equiv J$ but where it has not been established that there is no phase transition. A weaker form of the above conjecture would be that SB+ and SB are equivalent for all Heisenberg models on trees. This is Problem~\ref{pblm:all spheres} in~Section~\ref{sec:anal}. An extension to graphs with cycles would seem to entail a different kind of reasoning, perhaps similar to the inequalities of Monroe and Pearce~\cite{MP} which fall just short of proving Conjecture~\ref{conj:SB} for the rotor model. \\ \noindent{$(ii)$} The fact that PT and SB+ are equivalent when the rotor model is replaced by the Ising model is an immediate consequence of the fact that the probability measure is stochastically increasing in the boundary conditions. More generally, it is also the case that PT and SB+ are equivalent for the Potts models (see \cite{ACCN}).
We now consider the idea of a {\it robust phase transition}, where we investigate whether the boundary conditions on a cutset have a nontrivial effect on the root even when the interactions along the cutset are made arbitrarily small but fixed.
Given parameters $J>0$ and $J' \in (0,J]$ and a cutset $C$ of $\Gamma$, let $ {\cal J} ( J', J , C)$ be the function on $E(\Gamma)$ which is $J$ on edges in $C^i$ and $J'$ on edges connecting $C^i$ to $C$ (the values elsewhere being irrelevant). Let $f^{J',J , +}_{C_n , o}$ denote the marginal at the root $o$ of the measure $\mu^{J',J, +}_C:=\mu^{{\cal J} (J', J , C) , +}_C$.
\begin{defn} \label{defn:robustPT} The statistical ensemble on the tree $\Gamma$ has a {\bf robust phase transition (RPT) for the parameter $J>0$} if for every $J'\in (0,J]$ $$
\inf_C \|f^{J',J , +}_{C , o} - 1\|_\infty \neq 0 \, $$ where the $\inf$ is taken over all cutsets $C$. \end{defn}
\noindent{\em Remarks:} In the case ${\cal J} \equiv J$, by taking $J'=J$, it is clear that a RPT implies SB+ (which in turn implies SB and PT). Note that in this case, RPT is stronger than SB+ not only because $J'$ can be any number in $(0,J]$ and the root $o$ must play the role of $v$ but also because in SB+, we only require that for {\it some} sequence of cutsets going to infinity, the marginal at the vertex $v$ stays away from uniform while in RPT, we require this for {\it all} cutsets going to infinity. We note also that with some care, this definition makes sense for general graphs, and that the issue of robustness of phase transition on general graphs is worth investigating, although we do not do so here.
Our first theorem gives criteria based on $J$ and the branching number of $\Gamma$ (which will now be defined) for robust phase transition to occur for the Heisenberg models. A little later on, we will have an analogous result for the Potts models. In \cite{F}, Furstenberg introduced the notion of the Hausdorff dimension of a tree (or more accurately of the boundary of the tree). This was further investigated by Lyons~(\cite{Ly2}) using the term branching number instead. The {\bf branching number} of a tree $\Gamma$, denoted $\textstyle {br} (\Gamma)$, is a real number greater than or equal to one that measures the average number of branches per vertex of the tree. More precisely, the {\bf branching number} of $\Gamma$ is defined by $$\textstyle{br}\,\Gamma:=\inf\left\{\lambda>0;\inf\limits_{C}
\sum_{x \in C}\lambda^{-|x|} = 0 \right \} \;$$ where the second infimum is over all cutsets $C$. It is less than or equal to $\liminf_{n \to \infty} M_n^{1/n}$, where
$M_n := | \left \{x \in \Gamma ; |x| = n \right \}|$, and takes more of the structure of $\Gamma$ into account than does this latter growth rate. For sufficiently regular trees, such as homogeneous trees or, more generally, Galton-Watson trees, $\textstyle{br}\, \Gamma = \lim_{n\to\infty} M_n^{1/n}$ (\cite{Ly2}). We also mention that the branching number is the exponential of the Hausdorff dimension of $\partial\Gamma$ where the latter is endowed with the metric which gives distance $e^{-k}$ to two paths which split off after $k$ steps. As indicated earlier, the branching number has been an important quantity in previous investigations. More specifically, in \cite{Ly1} and \cite{Ly2}, the critical values for independent percolation and for phase transition in the Ising model on general trees are explicitly computed in terms of the branching number.
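For a quick illustrative check of the definition (ours, not in the sources cited), take the homogeneous tree in which every vertex has exactly $b$ children. Using the level cutsets $C_n = \{ x : |x| = n \}$,
$$
\sum_{x \in C_n} \lambda^{-|x|} \;=\; \left( \frac{b}{\lambda} \right)^{\! n} \;\longrightarrow\; 0
\qquad \mbox{if and only if } \lambda > b ,
$$
while for $\lambda \le b$ every cutset $C$ satisfies $\sum_{x\in C}\lambda^{-|x|} \ge \sum_{x\in C} b^{-|x|} \ge 1$ (by conservation of the unit flow that splits equally at every vertex), so $\textstyle{br}\,\Gamma = b$, in agreement with the growth rate $\lim_n M_n^{1/n} = b$.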
For each $J\ge 0$, define a continuous strictly positive probability density function $K_J : {\bf S} \rightarrow \hbox{I\kern-.2em\hbox{R}}^+$ by \begin{equation} \label{eq:KJ} K_J (u): = C(J)^{-1} \exp (- J H (u , {\hat{0}})) \end{equation} where $C(J) = \int \exp (- J H(w,{\hat{0}})) \, d{\bf x} (w)$ is a normalizing constant, and more generally let $K_{J,y} : {\bf S} \rightarrow \hbox{I\kern-.2em\hbox{R}}^+$ be given by \begin{equation} \label{eq:KJy} K_{J,y} (u): = C(J)^{-1} \exp (- J H (u , y)) \end{equation} (noting that $K_{J,{\hat{0}}}=K_{J}$). Let ${\cal K}_J$ denote the convolution operator on the space $L^2 ({\bf S} , d{\bf x})$ given by the formula \begin{equation} \label{eq:conv}
{\cal K}_J f (u) : = \int_{{\bf S}} f(x) K_{J,x} (u) d{\bf x}(x) \,\, . \end{equation} Note that by the assumed invariance $\int_{{\bf S}} \exp (- J H(w,y)) \, d{\bf x} (w)$ is independent of $y$ and that $f\ge 0$ and $\int_{{\bf S}} f(x) d{\bf x}(x)=1$ imply that ${\cal K}_J f\ge 0$ and $\int_{{\bf S}} {\cal K}_J f(x) d{\bf x}(x)=1$. We extend the above notation to cover the case where $f$ is a pointmass $\delta_y$ at $y$ by defining in that case \begin{equation} \label{eq:pointconv} {\cal K}_J \delta_y (u) : = K_{J , y}(u) . \end{equation}
We will now give the exact critical parameter $J$ for RPT for the Heisenberg models. For any $d\ge 1$, let $$ \rho^d(J):= {\int_{-1}^1 r e^{Jr}(1-r^2)^{{d \over 2}-1} dr \over \int_{-1}^1 e^{Jr}(1-r^2)^{{d \over 2}-1} dr }. $$
When $d=1$ (rotor model), this is (by a change of variables) the first Fourier coefficient of $K_J$ ($\int_{{\bf S}} K_J(\theta) \cos (\theta) d\theta$) which is perhaps more illustrative. When $d=2$, this is the first Legendre coefficient of $e^{Jr}$ (properly normalized) and for $d\ge 3$, this is the first so-called ultraspherical coefficient of $e^{Jr}$ (properly normalized).
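A midpoint-rule evaluation of $\rho^d(J)$ (the grid size is ours, for illustration) makes both monotonicity properties used below visible: $\rho^d(J)$ decreases in the dimension $d$ and increases in $J$.

```python
import math

# Numerical sketch: evaluate rho^d(J) by the midpoint rule on [-1, 1]
# (midpoints avoid the integrable endpoint singularity when d = 1).

def rho(d, J, n=100000):
    num = den = 0.0
    for k in range(n):
        r = -1.0 + (2.0 * k + 1.0) / n       # midpoint of the k-th subinterval
        wgt = math.exp(J * r) * (1.0 - r * r) ** (d / 2.0 - 1.0)
        num += r * wgt
        den += wgt
    return num / den

vals_d = [rho(d, 1.0) for d in (1, 2, 3)]
print(all(x > y for x, y in zip(vals_d, vals_d[1:])))   # decreasing in d
print(rho(1, 0.5) < rho(1, 1.0) < rho(1, 2.0))          # increasing in J
```

For $d=1$ the values agree with the ratio of modified Bessel functions $I_1(J)/I_0(J)$.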
\begin{th} \label{th:main} Let $d\ge 1$. \\ (i) If $\textstyle {br} (\Gamma) \rho^d(J) <1$, then the $d$--dimensional Heisenberg model on $\Gamma$ with parameter $J$ does not exhibit a robust phase transition. \\ (ii) If $\textstyle{br}(\Gamma) \rho^d(J) >1$, then the $d$--dimensional Heisenberg model on $\Gamma$ with parameter $J$ exhibits a robust phase transition. \end{th}
\noindent{\em Remark:} It is easy to see that $\lim_{d\to\infty} \rho^d(J)=0$ which says that it is harder to obtain a robust phase transition on higher dimensional spheres. This is consistent with the fact that it is in some sense harder to have a phase transition for the rotor model than in the Ising model (0-dimensional sphere); this latter fact can be established using the ideas in \cite{PS}.
A simple computation shows that the derivative of $\rho^d(J)$ with respect to $J$ is the variance of a random variable whose density function is proportional to $e^{Jr}(1-r^2)^{d/2 -1}$ on $[-1,1]$; since this variance is strictly positive, we obtain the following lemma. \begin{lem} \label{lem:inc} For any $d\ge 1$, we have that $\rho^d(J)$ is a strictly increasing function of $J$. \end{lem}
Theorem \ref{th:main} and Lemma \ref{lem:inc} together with the fact that for any $d\ge 1$, $\rho^d(J)$ is a continuous function of $J$ which approaches 0 as $J\to 0$ and approaches 1 as $J\to \infty$ give us the following corollary.
\begin{cor} \label{cor:critical} For any Heisenberg model with $d\ge 1$ and any tree $\Gamma$ with branching number larger than 1, let $J_c=J_c(\Gamma, d)$ be such that $\textstyle{br}(\Gamma) \rho^d(J_c)=1$. Then there is a robust phase transition for the $d$--dimensional Heisenberg model on $\Gamma$ if $J> J_c$ and there is no such robust phase transition for $J< J_c$. \end{cor}
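Since $\rho^d$ is continuous and strictly increasing, the implicitly defined $J_c$ can be located by bisection. A sketch (the choices ${\textstyle{br}}(\Gamma)=2$, i.e., the binary tree, and $d=1$ are ours, for illustration):

```python
import math

# Locate J_c solving br * rho^d(J_c) = 1 by bisection; rho^d is evaluated
# by the midpoint rule as before.  Illustrative case: br = 2, d = 1 (rotor).

def rho(d, J, n=50000):
    num = den = 0.0
    for k in range(n):
        r = -1.0 + (2.0 * k + 1.0) / n
        wgt = math.exp(J * r) * (1.0 - r * r) ** (d / 2.0 - 1.0)
        num += r * wgt
        den += wgt
    return num / den

def critical_J(br, d, tol=1e-3):
    lo_, hi_ = 0.0, 1.0
    while br * rho(d, hi_) < 1.0:            # rho^d(J) -> 1 as J -> infinity
        hi_ *= 2.0
    while hi_ - lo_ > tol:
        mid = 0.5 * (lo_ + hi_)
        if br * rho(d, mid) < 1.0:
            lo_ = mid
        else:
            hi_ = mid
    return 0.5 * (lo_ + hi_)

jc = critical_J(2.0, 1)
print(1.0 < jc < 1.3)                        # J_c is approximately 1.16 here
```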
For the Heisenberg models, we believe that phase transition and robust phase transition coincide and therefore we have the following conjecture.
\begin{conj} \label{conj:PS} For any $d\ge 1$, if $\textstyle {br} (\Gamma) \rho^d(J) <1$, then the $d$--dimensional Heisenberg model on $\Gamma$ with parameter $J$ does not exhibit a phase transition. \end{conj}
We can, however, obtain the following weaker form of this conjecture, which is valid for all statistical ensembles.
\begin{th} \label{th:0hd} If $\textstyle {br} (\Gamma) = 1$, then there is no phase transition for any statistical ensemble on $\Gamma$ with bounded ${\cal J}$. \end{th}
Theorems \ref{th:main}(ii) and \ref{th:0hd} together with the facts that RPT implies PT and that for any $d\ge 1$, $\lim_{J\to\infty} \rho^d(J) = 1$ immediately yield the following corollary.
\begin{cor} \label{cor:perestree} For any Heisenberg model with $d\ge 1$ and for any tree $\Gamma$, there is a phase transition for the tree $\Gamma$ for some value of the parameter $J$ if and only if $\textstyle {br} (\Gamma)>1$. \end{cor}
Since it is known (see \cite{Ly2}) that $\textstyle {br} (\Gamma)>1$ if and only if there is some $p< 1$ with the property that when performing independent percolation on $\Gamma$ with parameter $p$, there exists a.s.\ an infinite cluster on which simple random walk is transient, the above corollary establishes the following conjecture of Y.~Peres in the special case of trees of bounded degree.
\begin{conj} \label{conj:peres} For any graph $A$, the rotor model exhibits a phase transition for some $J$ if and only if there is some $p< 1$ with the property that performing independent bond percolation on $A$ with parameter $p$, there exists a.s.\ an infinite cluster on which simple random walk is transient. \end{conj}
Recall that the rotor model on the graph $A$ exhibits no SB for any parameter $J$ if $A$ is recurrent for simple random walk, which is of course consistent with the above conjecture. Note that, on the other hand, the standard Ising model does exhibit a phase transition on $Z\!\!\!Z^2$, a graph which is recurrent (as are its subgraphs) for simple random walk.
The next result states the critical value for RPT for the Potts models.
\begin{th} \label{th:potts} Consider the Potts model with $q \ge 2$ and let $$\alpha_J = {e^J - e^{-J} \over e^J + (q-1) e^{-J}} \, .$$ (i) If $\textstyle {br} (\Gamma) \alpha_J <1$, then the Potts model on $\Gamma$ with parameter $J$ does not exhibit a robust phase transition. \\ (ii) If $\textstyle{br}(\Gamma) \alpha_J >1$, then the Potts model on $\Gamma$ with parameter $J$ exhibits a robust phase transition. \end{th}
\noindent{\em Remarks:}\\ $(i)$ $d\alpha_J/dJ >0$ and so there is a critical value of $J$ depending on $\textstyle{br}(\Gamma)$ analogous to that in Corollary~\ref{cor:critical} for the Heisenberg models. \\ \noindent{$(ii)$} Note that when $q=2$ (the Ising model), this formula agrees with the formula for the Heisenberg models when one formally sets $d=0$ in the formula $$ \rho^d(J)=\int_{S^d} (x \cdot {\hat{0}}) K_J (x) \, d{\bf x}(x), $$ the latter being obtained by a change of variables.
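For the Potts model the critical equation ${\textstyle{br}}(\Gamma)\,\alpha_J=1$ can be solved in closed form: it rearranges to $e^{2J}({\textstyle{br}}-1)={\textstyle{br}}+q-1$, so $J_c=\frac{1}{2}\log\big(({\textstyle{br}}+q-1)/({\textstyle{br}}-1)\big)$. A quick check (the values ${\textstyle{br}}=2$, $q=3$ are ours, for illustration):

```python
import math

# Closed-form Potts threshold: br * alpha_J = 1 rearranges to
# e^{2J}(br - 1) = br + q - 1.  Illustrative parameters br = 2, q = 3.

def alpha(J, q):
    return (math.exp(J) - math.exp(-J)) / (math.exp(J) + (q - 1) * math.exp(-J))

def potts_Jc(br, q):
    return 0.5 * math.log((br + q - 1) / (br - 1))

br, q = 2.0, 3
jc = potts_Jc(br, q)
print(abs(br * alpha(jc, q) - 1.0) < 1e-12)                 # br * alpha_{J_c} = 1
print(abs(potts_Jc(br, 2) - math.atanh(1.0 / br)) < 1e-12)  # q = 2: Ising value
```

At $q=2$ this recovers the familiar Ising threshold $\mathop{\rm arctanh}(1/{\textstyle{br}})$.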
To point out the subtlety involved in Conjecture \ref{conj:PS}, we continue to discuss the Potts model, a case in which the analogue of Conjecture~\ref{conj:PS} fails. Our final result tells us that phase transitions (unlike robust phase transitions) in the Potts model with $q > 2$ cannot be determined by the branching number.
\begin{th} \label{th:2trees} Given any integer $q >2$, there exist trees $\Gamma_1$ and $\Gamma_2$ and a nontrivial interval $I$ such that $\textstyle {br} (\Gamma_1) < \textstyle {br} (\Gamma_2)$ and for any $J\in I$, there is a phase transition for the $q$--state Potts model with parameter $J$ on $\Gamma_1$ but no such phase transition on $\Gamma_2$. \end{th}
\noindent{\em Remarks:}\\ $(i)$ $\Gamma_1$ and $\Gamma_2$ can each be taken to be spherically symmetric, which means that for all $k$, all vertices at the $k$th generation have the same number of children. \\ \noindent{$(ii)$} In the case $q=2$, more is known. In \cite{Ly1}, the critical value for phase transition in the Ising model is found and corresponds to what is obtained in Theorem~\ref{th:potts} above. It follows that there is never a non-robust phase transition except possibly at the critical value. However, a sharp capacity criterion exists~\cite{PP} for phase transition for the Ising model (settling the issue of phase transition at the critical parameter) and using this criterion, one can show that phase transition and robust phase transition coincide even at criticality. The arguments of~\cite{PP} cannot be extended to the Potts model for $q > 2$ because the operator ${\cal K}_J$, acting on a certain likelihood function, when conjugated by the logarithm is not concave in this case. Theorems~\ref{th:potts} and~\ref{th:2trees} together tell us that there is indeed a non-robust phase transition when $q > 2$ for a nontrivial interval of $J$.
The rest of the paper is devoted to the proofs of the above results. In Section~\ref{sec:prelims}, we collect several lemmas that apply to general statistical ensembles, including the basic recursion formula (Lemma~\ref{lem:rec}) that allows us to analyze general statistical ensembles on trees, prove Lemma~\ref{lem:free} and Proposition~\ref{prop:SB=} as well as provide some background concerning Heisenberg models (showing that they satisfy the more general hypotheses of Theorems~\ref{th:gen ii} and~\ref{th:gen i} given later on) and the more general notion of distance regular spaces. Section~\ref{sec:proofs} is devoted to the proofs of Theorems~\ref{th:gen ii} and~\ref{th:gen i}. In Section~\ref{sec:anal}, we use these theorems to find the critical parameters for robust phase transition in the Heisenberg and Potts models, Theorems~\ref{th:main} and~\ref{th:potts}, as well as prove Proposition~\ref{pr:rotor equiv}. Section~\ref{sec:zero} discusses the special case of trees of branching number 1, proving Theorem~\ref{th:0hd}. Finally, in Section~\ref{sec:potts}, Theorem~\ref{th:2trees} is proved.
\setcounter{equation}{0}
\section{Basic background results} \label{sec:prelims} In this section, we collect various background results which will be needed to prove the results described in the introduction. We begin with a subsection describing results pertaining to trees that hold for general statistical ensembles. After discussing the concept of a distance regular space in Section~\ref{sub:drs}, we specialize to Heisenberg models (the most relevant family of continuous distance regular models) in Section~\ref{sub:sphere} and then to distance regular graphs in Section~\ref{sub:finite}.
\subsection{The fundamental recursion and other lemmas} \label{sub:rec}
We start off with two lemmas exploiting the recursive structure of trees.
Let $({\bf S},G,H)$ be a statistical ensemble. Let $A_1$ and $A_2$ be two disjoint finite graphs, with distinguished vertices $v_1 \in V(A_1)$ and $v_2 \in V(A_2)$. Let ${\cal J}_1$ and ${\cal J}_2$ be interaction functions for $A_1$ and $A_2$, i.e., positive functions on $E(A_1)$ and $E(A_2)$ respectively. For any $C_1 \subseteq V(A_1) \setminus \{ v_1 \}$ (possibly empty) and any $C_2 \subseteq V(A_2)$, and for any $\delta_1 \in {\bf S}^{C_1}$ and $\delta_2 \in {\bf S}^{C_2}$, we have measures $\mu_i := \mu^{{\cal J}_i , \delta_i}_{C_i}$, $i = 1, 2$ on ${\bf S}^{V(A_i)\setminus C_i}$ defined (essentially) by~(\ref{eq:Gibbs}). Abbreviate $H_{C_i}^{{\cal J}_i,\delta_i}$ (which has the obvious meaning) by $H_i$. Let $A$ be the union of $A_1$ and $A_2$ together with an edge connecting $v_1$ and $v_2$. Let $C = C_1 \cup C_2$, let ${\cal J}$ extend each ${\cal J}_i$ with the new edge given the value $J$, let $\delta$ extend each $\delta_i$, and denote $\mu^{{\cal J} , \delta}_C$ (a probability measure on ${\bf S}^{(V(A_1)\setminus C_1)\cup (V(A_2)\setminus C_2)}$) by $\mu$ and $H^{{\cal J} , \delta}_C$ (again having the obvious meaning) by $H$. The identity \begin{equation} \label{eq:Hdecomp} H = H_1 + H_2 + J H (\eta (v_1) , \eta (v_2)) \end{equation} leads to the following lemma.
\begin{lem} \label{lem:decomps} The measure $\mu$ satisfies \begin{equation} \label{eq:mudecomp} {d\mu \over d (\mu_1 \times \mu_2)} = c \exp [- J H(\eta_1 (v_1) ,
\eta_2 (v_2))] , \end{equation} where $$c = \left [ \int \int \exp (- J H (\eta_1 (v_1) , \eta_2 (v_2)))
\, d\mu_1 (\eta_1) \, d\mu_2 (\eta_2) \right ]^{-1}$$ is a normalizing constant. Let $f_i$ denote the marginal density of $\mu_i$ at $v_i$, $i = 1 , 2$, and let $f$ denote the marginal density of $\mu$ at $v_1$. Then the projection $\mu^{(1)}$ of $\mu$ onto ${\bf S}^{V(A_1)\setminus C_1}$ satisfies \begin{equation} \label{eq:mudecomp2} \mu^{(1)} = c \int \int \mu_{1 , y} f_1 (y) f_2 (z) \exp (- J H(y,z))
\, d{\bf x} (z) \, d{\bf x} (y) \end{equation} for some normalizing constant $c$, where $\mu_{1 , y}$ denotes the conditional distribution of $\mu_1$ given $\eta (v_1) = y$. Consequently, \begin{equation} \label{eq:fdecomp} f (y) = c f_1 (y) \int f_2 (z) \exp (- J H(y,z)) \, d{\bf x} (z) \, , \end{equation} where $c$ normalizes $f$ to be a probability density. \end{lem}
\noindent{\bf Proof.} The relation~(\ref{eq:mudecomp}) follows from~(\ref{eq:Hdecomp}) and the defining equation~(\ref{eq:Gibbs}). From this it follows that the measure $\mu$ on pairs $(\eta_1 , \eta_2)$ makes $\eta_1$ and $\eta_2$ conditionally independent given $\eta_1 (v_1)$ and $\eta_2 (v_2)$. Hence the conditional distribution of $\mu^{(1)}$ given $\eta_1 (v_1) = y$ and $\eta_2 (v_2) = z$ is just $\mu_{1 , y}$. Next, (\ref{eq:mudecomp}) and the last fact yield~(\ref{eq:mudecomp2}). The marginal of $\mu_{1, y}$ at $v_1$ is just $\delta_y$, and so~(\ref{eq:mudecomp2}) yields~(\ref{eq:fdecomp}). $
\Box$
A tree $\Gamma$ may be built up from isolated vertices by the joining operation described in the previous lemma. The decompositions in Lemma~\ref{lem:decomps} may be applied inductively to derive a fundamental recursion for marginals. This recursion, Lemma~\ref{lem:rec} below, expresses the marginal distribution at the root of $\Gamma$ as a pointwise product of marginals at the roots of each of the generation 1 subtrees, each convolved with a kernel $K_J$. The normalized pointwise product will be ubiquitous throughout what follows, so we introduce notation for it.
\begin{defn} If $f_1 , \ldots , f_k$ are nonnegative functions on ${\bf S}$ with $\int f_i \, d{\bf x} = 1$ for each $i$, let
\\ ${\bigodot}_k (f_1 , \ldots , f_k)$ denote the normalized pointwise product, $${\bigodot}_k (f_1 , \ldots , f_k) (x) = {\prod_{i=1}^k f_i (x) \over
\int \prod_{i=1}^k f_i (y) \, d{\bf x} (y)}$$ whenever this makes sense, e.g., when each $f_i$ is in $L^k (d{\bf x})$ and the product is not almost everywhere zero. Let ${\bigodot}$ denote the operator which for each $k$ is ${\bigodot}_k$ on each $k$-tuple of functions. There is an obvious associativity property, namely ${\bigodot} ({\bigodot} (f,g) , h) = {\bigodot} (f,g,h)$, which may be extended to arbitrarily many arguments. \end{defn}
\begin{lem}[Fundamental recursion] \label{lem:rec} Given a tree $\Gamma$, a cutset $C$, interactions ${\cal J}$, boundary condition $\delta$ and $v\in C^i$, let $\{ w_1 , \ldots , w_k \}$ be the children of $v$. Let $J_1 , \ldots , J_k$ denote the values of ${\cal J} (v , w_1) , \ldots , {\cal J} (v , w_k)$. Then \begin{equation} \label{eq:recurse} f_{C,v}^{{\cal J},\delta} = {\bigodot} ({\cal K}_{J_1} f^{{\cal J} , \delta}_{C , w_1} ,
\ldots , {\cal K}_{J_k} f^{{\cal J} , \delta}_{C , w_k}) \, , \end{equation} where when $w_i\in C$, $f^{{\cal J} , \delta}_{C , w_i}$ is taken to be the point mass at $\delta(w_i)$ and convention~(\ref{eq:pointconv}) is in effect. \end{lem}
\noindent{\bf Proof.} Passing to the subtree $\Gamma (v)$, we may assume without loss of generality that $v = o$. Also assume without loss of generality that $w_1 , \ldots , w_k$ are numbered so that for some $s$, $w_i \in C^i$ for $i \leq s$ and $w_i \in C$ for $i > s$. For $i\le s$, let $C(w_i) = C \cap \Gamma (w_i)$. For such $i$, by definition, $f_i := f^{{\cal J} , \delta}_{C,w_i}$ is the marginal at $w_i$ of the measure $\mu_i := \mu^{{\cal J}, \delta}_{C(w_i) , w_i}$ on configurations on $\Gamma (w_i)\cap C^i$, where ${\cal J}$ and $\delta$ are restricted to $E (\Gamma (w_i))$ and $C(w_i)$ respectively. Let $\Gamma_r$ denote the induced subgraph of $\Gamma$ whose vertices are the union of $\{ o \}$, $\Gamma (w_1) , \ldots , \Gamma (w_r)$. We prove by induction on $r$ that the density $g_r$ at the root of $\Gamma_r$ of the analogue of $\mu^{{\cal J} , \delta}_C$ for $\Gamma_r$ is equal to $${\bigodot} ({\cal K}_{J_1} f^{{\cal J} , \delta}_{C , w_1} , \ldots ,
{\cal K}_{J_r} f^{{\cal J} , \delta}_{C , w_r}) \, ,$$ the case $r = k$ being the desired conclusion.
To prove the $r=1$ step, use~(\ref{eq:fdecomp}) with $v_1 = o$, $A_1 = \{ o \}$, $C_1 = \emptyset$, $v_2 = w_1$, $A_2 = \Gamma (w_1)$ and $C_2 = C(w_1)$. If $w_1 \in C$, the $r=1$ case is trivially true, so assume $s \geq 1$. The measure $\mu_1$ is uniform on ${\bf S}$ since $C_1 = \emptyset$. Thus from~(\ref{eq:fdecomp}) we find that $$g_1 (y) = c \int e^{-J_1 H (y,z)} f_1 (z) \, d{\bf x} (z) = ({\cal K}_{J_1} f_1) (y)$$ which proves the $r = 1$ case.
For $1 < r \leq s$, use~(\ref{eq:fdecomp}) with $A_1 = \Gamma_{r-1}$, $v_1 = o$, $C_1 = \Gamma_{r-1} \cap C$, $A_2 = \Gamma (w_r)$, $v_2 = w_r$ and $C_2 = \Gamma (w_r) \cap C$. Using~(\ref{eq:fdecomp}) we find that \begin{eqnarray*} g_r (y) & = & c g_{r-1} (y) \int e^{-J_r H (y,z)}
f_r (z) \, d{\bf x} (z) \\[1ex] & = & c g_{r-1} (y) ({\cal K}_{J_r} f_r) (y) \\[1ex] & = & ({\bigodot} (g_{r-1} , {\cal K}_{J_r} f_r)) (y) \, . \end{eqnarray*} By associativity of ${\bigodot}$ the induction step is completed for $r \leq s$.
Finally, if $r > s$, then the difference between $H (\eta)$ on $\Gamma_{r-1}$ and $H (\eta)$ on $\Gamma_r$ is just $- J_r H(\eta (o) , \delta (w_r))$, so $$g_r (y) = c g_{r-1} (y) \exp (- J_r H(y , \delta (w_r))) = \left ( {\bigodot}
(g_{r-1} , {\cal K}_{J_r} f_r) \right ) (y)$$ by the convention~(\ref{eq:pointconv}), and associativity of ${\bigodot}$ completes the induction as before. $
\Box$
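A minimal instance of the recursion~(\ref{eq:recurse}), discretized for the rotor model with $H(u,v)=-\cos(u-v)$ (an assumption of ours, consistent with the form of $\rho^1$): the root $o$ has two children, both in the cutset $C$ with boundary spin $0$, so that by convention~(\ref{eq:pointconv}), $f_{C,o}={\bigodot}(K_{J,0},K_{J,0})$, the normalized square of $K_J$.

```python
import math

# Fundamental recursion, smallest nontrivial case: root with two boundary
# children of spin 0.  The marginal at the root is the normalized product
# of two copies of K_{J,0}, hence again a density peaked at spin 0.

N = 360
w = 1.0 / N
ts = [2 * math.pi * k / N for k in range(N)]

def K_point(J, y):
    """K_{J,y}(u) = C(J)^{-1} exp(J cos(u - y)) -- convention (eq:pointconv)."""
    vals = [math.exp(J * math.cos(t - y)) for t in ts]
    C = sum(vals) * w
    return [v / C for v in vals]

def odot(fs):
    prod = [math.prod(v) for v in zip(*fs)]
    s = sum(prod) * w
    return [p / s for p in prod]

J = 1.0
f_root = odot([K_point(J, 0.0), K_point(J, 0.0)])
print(abs(sum(f_root) * w - 1.0) < 1e-9)   # a probability density
print(f_root.index(max(f_root)) == 0)      # peaked at the boundary spin 0
```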
Another consequence of Lemma~\ref{lem:decomps} is Lemma~\ref{lem:free}, giving the existence of a natural and well defined free boundary measure.
\noindent{\bf Proof of Lemma~\protect{\ref{lem:free}}.} Observe that in~(\ref{eq:mudecomp2}), if $f_2 \equiv 1$ then the integral against $z$ is independent of $y$, so one has $\mu^{(1)} = \mu_1$. Let $F$ be any cutset and $w \in F^i$ be chosen so each of its children $v_1 , \ldots , v_k$ is in $F$. Applying our observation inductively to eliminate each child of $w$ in turn, we see that the projection of $\mu^{\rm free}_F$ onto ${\bf S}^{F^i \setminus \{ w \}}$ is just $\mu^{\rm free}_{F'}$ where $F' = F \cup \{ w \} \setminus \{ v_1 , \ldots , v_k \}$.
Given cutsets $C$ and $D$ with $D \cap C^i \neq \emptyset$, choose $v \in D \cap C^i$ and $w \geq v$ maximal in $C^i$. Then all children of $w$ are in $C$. Applying the previous paragraph with $F = C$, we see that $\mu^{\rm free}_C$ agrees with $\mu^{\rm free}_{F'}$. Continually reducing in this way, we conclude that on $C^i \cap D^i$, $\mu^{\rm free}_C$ agrees with $\mu^{\rm free}_Q$ where $Q$ is the exterior boundary of $C^i \cap D^i$. The same argument shows that $\mu^{\rm free}_D$ agrees with $\mu^{\rm free}_Q$, which finishes the proof of the lemma. $
\Box$
According to Lemma~\ref{lem:rec}, if, for $J>0$, we define ${\cal P} (J)$ to be the smallest class of densities containing each $K_{J' , y}$ for $J' \in (0,J]$ and $y \in {\bf S}$ and closed under ${\cal K}_{J'}$ for $J' \in (0,J]$ and ${\bigodot}$, then, when ${\cal J}$ is strictly positive and bounded by $J$, each density $f^{{\cal J} , \delta}_{C , v}$ is an element of ${\cal P}(J)$. Similarly, if ${\cal P}_+(J)$ is taken to be the smallest class of densities containing each $K_{J'}$ for $J' \in (0,J]$ and closed under ${\cal K}_{J'}$ for $J' \in (0,J]$ and ${\bigodot}$, then, when ${\cal J}$ is strictly positive and bounded by $J$, each density $f^{{\cal J} , +}_{C , v}$ is an element of ${\cal P}_+(J)$. We also let ${\cal P}:=\bigcup_{J> 0}{\cal P}(J)$ and ${\cal P}_+:=\bigcup_{J> 0}{\cal P}_+(J)$.
This leads to the following lemma whose proof is left to the reader.
\begin{lem} \label{lem:unifbd} Suppose the interaction strengths $\{ {\cal J} (e) \}$ are bounded above by some constant. Then there exist constants $0 < B_{\rm min} < B_{\rm max}$ such that for every $C , \delta$ and $v \in C^i$, the one-dimensional marginal of $\mu^{\delta}_C$ at $v$ is absolutely continuous with respect to $d{\bf x}$ with a density function in $[B_{\rm min} , B_{\rm max}]$. It follows, since the above properties are closed under convex combinations, that all one-dimensional marginals of any Gibbs state have densities in $[B_{\rm min} , B_{\rm max}]$. Similarly, the $k$-dimensional marginals have densities in the interval $[B_{\rm min}^{(k)} , B_{\rm max}^{(k)}]$ for some constants $0< B_{\rm min}^{(k)} < B_{\rm max}^{(k)}$. In addition, the family of all one--dimensional densities which arise as above is an equicontinuous family. \end{lem}
The usefulness of the equicontinuity property is that the following easily proved lemma (whose proof is also left to the reader) tells us that in determining weak convergence to $d{\bf x}$, it is equivalent to look to see if there is convergence in $L^\infty$ of the associated densities to 1.
\begin{lem} \label{lem:converge} Let $(X,d)$ be a compact metric space and $\mu$ a probability measure on $X$ with full support. If $\{f_n\}$ is an equicontinuous family of probability densities (with respect to $\mu$), then $$
\lim_{n\to\infty} \|f_n-1\|_{\infty} = 0 \mbox{ if and only if } \lim_{n\to\infty} f_n d\mu = \mu \mbox{ weakly }. $$ \end{lem}
Using this, we can prove the equivalence of phase transition and symmetry breaking on trees (Proposition~\ref{prop:SB=}).
\noindent{\bf Proof of Proposition~\ref{prop:SB=}.} (i) implies (ii) is trivial. For (ii) implying (iii), assume we have a vertex $v$, a sequence of cutsets $C_n\to\infty$ and boundary conditions $\delta_n$ on $C_n$ such that $$
\inf_n \|f_{C_n,v}^{\delta_n}-1\|_\infty \neq 0. $$ Clearly we obtain the same result if we change $\delta_n$ on $C_n\setminus \Gamma(v)$ to anything, in particular, if we take no (i.e., free) boundary condition there. We then take any weak limit of these measures as $n\to\infty$. This will yield a Gibbs state and by the first line of the proof of Lemma~\ref{lem:free}, together with Lemma~\ref{lem:converge}, the marginal density at $v$ of this Gibbs state is not 1, which proves (iii). (iii) implies (iv) is also trivial of course. To see that (iv) implies (i), note that if there is PT, then there exists an extremal Gibbs state $\mu\neq \mu^{\rm free}$. Choose a cutset $C$ such that $\mu\neq \mu^{\rm free}$ when restricted to $C^i$. If (i) fails, then for all $v\in C$, there exists a sequence of cutsets $C_n\to\infty$ such that for all boundary conditions $\delta_n$ on $C_n$ we have that \begin{equation} \label{eq:ivgivesi}
\inf_n \|f_{C_n,v}^{\delta_n}-1\|_\infty = 0. \end{equation} Clearly, because of the geometry, $\{C_n\}$ can be chosen independent of $v$. Since $\mu$ is extremal, it is known (see Theorem 7.12(b) in \cite{Ge}, p. 122) that there exist boundary conditions $\delta_n'$ on $C_n$ so that $\mu_{C_n}^{\delta_n'} \rightarrow \mu$ weakly. However, by (\ref{eq:ivgivesi}) and Lemma~\ref{lem:rec}, $\mu$ must equal $\mu^{\rm free}$ on $C^i$, a contradiction. $
\Box$
\subsection{Distance regular spaces} \label{sub:drs}
Our primary interest in this paper is in the Heisenberg models. Nevertheless, it turns out that many of the properties of the Heisenberg model hold in the more general context of distance regular spaces. A {\bf distance regular graph} is a finite graph for which the size of the set $\{ z : d(x,z) = a , d (y,z) = b \}$ depends on $x$ and $y$ only through the value of $d(x,y)$ where $d(x,y)$ is the usual graph distance between $x$ and $y$. We generalize this by saying that the metric space $({\bf S},d)$ with probability measure $d{\bf x}$ is {\bf distance regular} if the law of the pair $(d(x,Z) , d(y,Z))$ when $Z$ has law $d{\bf x}$ depends only on $d(x,y)$. In particular, when the action of $G$ on ${\bf S}$ is distance transitive (in addition to preserving $d$ and $d{\bf x}$), meaning that $(x,y)$ can be mapped to any $(x' , y')$ with $d(x,y) = d(x' , y')$, it follows easily that $({\bf S},d, d{\bf x})$ is distance regular. All the examples we have mentioned so far are distance transitive (and hence distance regular) except for the rotor model which is still distance regular. (For an example of a graph showing that the full automorphism group acting distance transitively is strictly stronger than the assumption of distance regularity, see~\cite{AVLF} or {\it Additional Result} {\bf 23b} of~\cite{Big}.)
We present some of the background in this generality not because we are fond of gratuitous generalization but because we find the reasoning clearer, and because it seems reasonable that someone in the future might study a particle system whose spin states are elements of some distance regular space, such as real projective space or the discrete $n$-cube. The primary consequence of distance regularity is that it allows one to define a commutative convolution on a certain subspace of $L^2$.
\begin{defn} Let $L^2 ({\bf S})$ denote the space $L^2 (d{\bf x})$, and let $L^2 ({\bf S}/{\hat{0}})$ denote the space of functions $f \in L^2 ({\bf S})$ for which $f(x)$ depends only on $d(x , {\hat{0}})$. For $f \in L^2 ({\bf S}/{\hat{0}})$, define a function $\overline{f}$ on $\{d({\hat{0}},y)\}_{y\in {\bf S}}$ by $\overline{f} (r) := f(x)$ where $x$ is such that $d({\hat{0}},x) = r$. \end{defn}
\begin{defn} If $({\bf S},d{\bf x})$ is distance regular, define a commutative convolution operation on $L^2 ({\bf S}/{\hat{0}}) \times L^2 ({\bf S}/{\hat{0}})$ by $$f * h (x) := \int_{{\bf S}} h(y) \overline{f} (d(x,y)) \, d{\bf x} (y) =
\int_{[0,\infty)^2} \overline{f} (u) \overline{h} (v) \, d\pi_x(u,v)$$ where $\pi_x$ is the law of $(d(x,Z) , d({\hat{0}} , Z))$ for a variable $Z$ with law $d{\bf x}$. It is clear from the definition of a distance regular space that $(d(x,Z) , d({\hat{0}},Z))$ and $(d({\hat{0}},Z) , d(x,Z))$ are equal in distribution implying that $f * h =h * f$ and that, since $\pi_x$ only depends on $d(x,{\hat{0}})$, $f,h\in L^2 ({\bf S}/{\hat{0}})$ implies that $f * h \in L^2 ({\bf S}/{\hat{0}})$. \end{defn}
The following lemma is straightforward and left to the reader.
\begin{lem} \label{lem:dt} For all $J\ge 0$, $K_J\in L^2 ({\bf S}/{\hat{0}})$ and for all $h\in L^2 ({\bf S})$, ${\cal K}_J(h)(x)$ (defined in~(\ref{eq:conv})) is equal to $\int_{{\bf S}} h(y) \overline{K_J} (d(x,y)) \, d{\bf x} (y)$. In particular, if $({\bf S} , d{\bf x})$ is distance regular, then the operators ${\cal K}_J$ map $L^2 ({\bf S}/{\hat{0}})$ into itself and ${\cal K}_J(h) =K_J * h$ for all $h \in L^2 ({\bf S}/{\hat{0}})$. \end{lem}
We believe that for most distance regular spaces, one can verify the necessary hypotheses of Theorems~\ref{th:gen ii} and~\ref{th:gen i} below in the same way as we will do for the Heisenberg models in detail in the next section. Doing this however would take us too far afield and so we content ourselves with pointing out to the reader that much of this probably can be done, and after analyzing the Heisenberg models in Section~\ref{sub:sphere}, explain how to carry much of this out in the context of distance regular graphs in Section~\ref{sub:finite}.
\subsection{Heisenberg models} \label{sub:sphere}
In this subsection, we consider Example \ref{eg:spherical} in Section~\ref{sec:one} and so we have ${\bf S} = S^d$, $d \geq 1$, the unit sphere in $(d+1)$--dimensional Euclidean space with the corresponding $G, d, d{\bf x}$ and $H$. Recall that this is distance transitive for $d\ge 2$ (and hence distance regular) and distance regular for $d=1$. The following lemma allows us to set up coordinates in which our bookkeeping will be manageable. It is certainly well known.
\begin{lem} \label{lem:spherical} For any $d\ge 1$, there exist real--valued functions $\psi_0 , \psi_1 , \psi_2 , \ldots \in L^2 ({\bf S}/{\hat{0}})$ $({\bf S}=S^d)$, orthogonal under the inner product $\langle f,g \rangle = \int_{{\bf S}} f \overline{g} \, d{\bf x}$, such that $\psi_n$ is a polynomial of degree exactly $n$ in $x\cdot {\hat{0}}$, and such that the following properties hold. \\ (1) $\psi_0 (x) \equiv 1$ and $\psi_1 (x) = x \cdot {\hat{0}}$. \\
(2) $1 = \psi_j ({\hat{0}}) = \sup_{x \in {\bf S}} |\psi_j (x)|$, for all $j$. \\ (3) $\psi_i \psi_j = \sum_{r\ge 0} q^r_{ij} \psi_r$, where the coefficients
$q^r_{ij}$ are nonnegative and $\sum_r q^r_{ij} = 1$. \\ (4) $\psi_i * \psi_j = \gamma_j \delta_{ij} \psi_j$, where
$\gamma_j := \psi_j * \psi_j ({\hat{0}}) = \int \psi_j^2(x) \, d{\bf x}(x)$. \\ (5) The functions $\psi_j$ are eigenfunctions of any convolution
operator, that is, $f * \psi_j = c \psi_j$ for any $f \in L^2 ({\bf S}/{\hat{0}})$, where the constant $c$ depends on $f$ and $j$. \\ (6) Any $f \in L^2 ({\bf S}/{\hat{0}})$ can be written as a convergent series
$f(x) = \sum_{j\ge 0} a_j (f) \psi_j(x)$ (in the $L^2$ sense), where the coefficients $a_j(f)$ are given by $a_j(f) : = \gamma_j^{-1} \int f(x) \psi_j (x) \, d{\bf x} (x) .$ \\ (7) For $f,g \in L^2 ({\bf S}/{\hat{0}})$, we have $a_j(f * g)= \gamma_j a_j(f) a_j(g)$.
\end{lem}
\noindent{\bf Proof.} For each $\alpha, \beta >-1$, define the Jacobi polynomials $\{{\bf P}^{(\alpha , \beta)}_n(r)\}_{n\ge 0}$ by \begin{equation} \label{eq:rod} (1 - r)^\alpha (1 + r)^\beta {\bf P}_n^{(\alpha , \beta)} (r) =
{(-1)^n \over 2^n n!} {d^n \over dr^n} \left [ (1 - r)^{n + \alpha}
(1 + r)^{n + \beta} \right ] \, . \end{equation} (The Jacobi polynomials are usually defined differently in which case~(\ref{eq:rod}) becomes what is known as Rodrigues' formula but we shall use~(\ref{eq:rod}) as our definition; when $\alpha=\beta$, which is the case relevant to us, these are the ultraspherical polynomials.)
For any given $d\ge 1$, we let, for $n\ge 0$, $$ \psi_n(x):= {{\bf P}^{({d \over 2}-1 ,{d \over 2}-1)}_n (x\cdot {\hat{0}}) \over {\bf P}^{({d \over 2}-1 ,{d \over 2}-1)}_n (1)}. $$ By p.254 in \cite{R}, ${\bf P}^{(\alpha , \beta)}_n$ is a polynomial of degree exactly $n$. By p.259 in \cite{R}, the polynomials $\{{\bf P}^{(\alpha , \beta)}_n\}_{n\ge 0}$ are orthogonal on $[-1,1]$ with respect to the weight function $(1-r)^\alpha (1+r)^\beta$. A change of variables then shows that the $\psi_n$'s are orthogonal in $L^2 ({\bf S})$.
(1) is then an easy calculation; the first equality in (2) is trivial while the second equality is in \cite{R}, p.278 and 281. (3) is in~\cite{Askey74}, p.41. (4) and (5) follow from the Funk--Hecke Theorem (\cite{N}, p.195) (the calculation of $\gamma_j$ being trivial). Since the subspace generated by the $\{{\bf P}^{({d \over 2}-1 ,{d \over 2}-1)}_n(r)\}$'s is uniformly dense in $C([-1,1])$ by the Stone-Weierstrass Theorem, it easily follows that the subspace generated by the $\psi_n$'s is uniformly dense in $L^2 ({\bf S}/{\hat{0}})\cap C({\bf S})$. Hence the $\psi_n$'s are a basis for $L^2 ({\bf S}/{\hat{0}})$ and (6) follows. Finally, (4) and (6) together yield (7). $
\Box$
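For $d=2$ the $\psi_n$ reduce, in the variable $r = x\cdot{\hat{0}}$, to the Legendre polynomials $P_n$ (the weight $(1-r^2)^{d/2-1}$ is flat and $P_n(1)=1$). A numerical sketch (Bonnet's recurrence; the grid size is ours) checking the orthogonality and property (2) of the lemma:

```python
import math

# Generate Legendre polynomials P_n by Bonnet's recurrence
# (n+1) P_{n+1}(r) = (2n+1) r P_n(r) - n P_{n-1}(r)
# and check orthogonality on [-1, 1] (flat weight, midpoint rule) together
# with the normalization sup |P_n| = P_n(1) = 1.

def legendre(n, r):
    if n == 0:
        return 1.0
    p_prev, p = 1.0, r
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * r * p - k * p_prev) / (k + 1)
    return p

M = 4000
rs = [-1.0 + (2.0 * k + 1.0) / M for k in range(M)]

def inner(i, j):
    return sum(legendre(i, r) * legendre(j, r) for r in rs) * (2.0 / M)

print(abs(inner(1, 2)) < 1e-3)                       # orthogonality
print(abs(inner(2, 3)) < 1e-3)
print(max(abs(legendre(3, r)) for r in rs) <= 1.0)   # property (2): sup bound
print(abs(legendre(3, 1.0) - 1.0) < 1e-12)           # psi_3(0hat) = 1
```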
Note that for all $f,g\in L^2 ({\bf S}/{\hat{0}})$, we have that $fg\in L^2 ({\bf S}/{\hat{0}})$ provided $fg\in L^2 ({\bf S})$. Since $\psi_n$ is a polynomial of degree exactly $n$ in $x\cdot {\hat{0}}$, the greatest $r$ for which $q^r_{ij} \neq 0$ must be $i + j$. From this and the nonnegativity of the $q^r_{ij}$'s, it follows that for $\lambda > 0$ the function $e^{\lambda \psi_1(x)} = \sum_{n \geq 0} \lambda^n \psi_1 (x)^n / n!$ has \begin{equation} \label{eq:viii} a_j (e^{\lambda \psi_1}) > 0 , \mbox{ for all } j\ge 0. \end{equation} It follows from Lemmas~\ref{lem:rec}, \ref{lem:dt} and \ref{lem:spherical}(3,4) that ${\cal P}_+\subseteq L^2 ({\bf S}/{\hat{0}})$ and that for all $g \in {\cal P}_+$, \begin{equation} \label{eq:ix} a_j (g) > 0, \mbox{ for all } j\ge 0. \end{equation}
\begin{defn} Define the $A$ norm on $L^2 ({\bf S}/{\hat{0}})$ by
$$||f||_A = \sum_{j\ge 0} |a_j (f)| ,$$ provided it is finite. \end{defn}
From the fact that $\sum_{r\ge 0} q^r_{ij} = 1$, one can easily show that for all $f,g\in L^2 ({\bf S}/{\hat{0}})$ with $fg\in L^2 ({\bf S}/{\hat{0}})$, \begin{equation} \label{eq:submult}
||fg||_A \leq ||f||_A ||g||_A , \end{equation} and that equality holds if $f , g \in {\cal P}_+$. An easy computation also
shows that $||e^{\lambda \psi_1(x)}||_A =e^{\lambda} <\infty$ for all $\lambda \ge 0$ and hence by Lemmas~\ref{lem:rec} and~\ref{lem:spherical}(4)
and~(\ref{eq:submult}), $||f||_A<\infty$ for all $f\in{\cal P}_+$. Also, it follows from~(\ref{eq:ix}), Lemma~\ref{lem:spherical}(2,6), the fact that $\int f \, d{\bf x} = 1$ for all $f \in {\cal P}_+$ and the fact that ${\cal P}_+\subseteq L^2 ({\bf S}/{\hat{0}})$ that for $f \in {\cal P}_+$, \begin{equation} \label{eq:x}
1 + ||f - 1||_A = ||f||_A = f ({\hat{0}}) = ||f||_\infty =
1 + ||f - 1||_\infty . \end{equation} The last equality is obtained by observing that $\le$ is clear while
$ ||g||_\infty \le ||g ||_A $ for all $g\in L^2 ({\bf S}/{\hat{0}})$ is also clear.
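A numerical check of~(\ref{eq:x}) on the circle ($d=1$), where $\psi_j$ corresponds to $\cos(j\theta)$ and $\gamma_j = 1/2$ for $j\ge 1$. We take $f = K_J \in {\cal P}_+$, truncate the $A$-norm at $j=40$ (the neglected coefficients are astronomically small for this $J$), and again assume $H(u,v)=-\cos(u-v)$:

```python
import math

# Check of (eq:x) for the rotor model: for f = K_J all coefficients a_j(f)
# are positive (eq:ix), so ||f||_A = f(0hat) = ||f||_infty.

N = 2000
w = 1.0 / N
ts = [2 * math.pi * k / N for k in range(N)]
J = 1.5

vals = [math.exp(J * math.cos(t)) for t in ts]
C = sum(vals) * w
f = [v / C for v in vals]                       # f = K_J on the grid

a = [sum(f) * w]                                # a_0 = 1
for j in range(1, 41):                          # a_j = 2 * int f cos(j t) d-bf-x
    a.append(2 * sum(fi * math.cos(j * t) for fi, t in zip(f, ts)) * w)

A_norm = sum(abs(c) for c in a)
print(all(c > 0 for c in a[:10]))               # (eq:ix): positive low modes
print(abs(A_norm - f[0]) < 1e-6)                # ||f||_A = f(0hat)
print(abs(max(f) - f[0]) < 1e-12)               # f(0hat) = ||f||_infty
```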
\begin{lem} \label{lem:taylor} There exists a function $o$ with $\displaystyle{\lim_{h \to 0} {o(h) \over h} = 0}$ such that for all $h_1 , \ldots , h_k \in {\cal P}_+$ with $k \leq B$, \begin{equation} \label{eq:xi}
|| {\bigodot} (h_1 , \ldots , h_k) - 1 - \sum_{i=1}^k (h_i - 1)||_A
\leq o(\max_i ||h_i - 1||_A) , \end{equation}
provided $\max_i ||h_i - 1||_A\le 1$. \end{lem}
\noindent{\bf Proof.} Write \begin{equation} \label{eq:new01}
|| \prod_{i=1}^k h_i - 1 - \sum_{i=1}^k (h_i - 1)||_A =
|| \sum_{{A\subseteq\{1,\ldots,k\}\atop |A|\ge 2}} \prod_{i\in A}(h_i-1)||_A. \end{equation}
The hypothesis $\max_i ||h_i - 1||_A \leq 1$ and the submultiplicativity~(\ref{eq:submult})
of $|| \cdot ||_A$ imply that this is at most
$$ 2^k (\max_i ||h_i - 1||_A)^2 .$$ Next, since $\int (h_i - 1) \, d{\bf x} = 0$ for $1 \leq i \leq k$, we similarly obtain
$$\left|\int \prod_{i=1}^k h_i - 1\right| \leq 2^k (\max_i ||h_i - 1||_A)^2.$$ We then have $$
|| {\bigodot} (h_1 , \ldots , h_k) - \prod_{i=1}^k h_i||_A =
{1\over \int \prod_{i=1}^k h_i}\left|\int \prod_{i=1}^k h_i - 1\right|
|| \prod_{i=1}^k h_i ||_A
\leq 4^k (\max_i ||h_i - 1||_A)^2 ,$$
since $|| \prod_{i=1}^k h_i ||_A\le 2^k$ and $\int \prod_{i=1}^k h_i\ge 1$ by the positivity of the $q^r_{ij}$ and~(\ref{eq:ix}). A use of the triangle inequality completes the proof. $
\Box$
We note five facts that follow easily from the above, but which will be useful later on in generalizing our results.
Let $\langle\PP_+\rangle$ be the linear subspace of $L^2 ({\bf S}/{\hat{0}})$ spanned by ${\cal P}_+$, $\langle\PP_+(J)\rangle$ be the linear subspace of $L^2 ({\bf S}/{\hat{0}})$ spanned by ${\cal P}_+(J)$
and $|| {\cal K}_{J'} ||_A$ denote the operator norm of ${\cal K}_{J'}$ on
$(\langle\PP_+\rangle,||\,\,||_A)$. \begin{equation} \label{eq:xii}
\lim_{J' \to 0} ||K_{J'} - 1||_A = 0 ; \end{equation}
\begin{equation} \label{eq:present iii}
c_1:=\sup_{f\in\langle\PP_+\rangle ,f \neq 1} {||f - 1||_\infty \over ||f - 1||_A} < \infty ; \end{equation}
\begin{equation} \label{eq:present iii'}
c_2:=\inf_{f\in{\cal P}_+,f\neq 1} {||f - 1||_\infty \over ||f - 1||_A} > 0 ; \end{equation}
\begin{equation} \label{eq:xiii}
\mbox{ For all } J' \geq 0, \,\,\, || {\cal K}_{J'} ||_A \leq 1; \end{equation}
There exist $a,b\in {\bf S}$ such that for all $f\in {\cal P}_+$, \begin{equation} \label{eq:xiiii} f(a)=\sup_{x\in{\bf S}} f(x) \mbox{ and } f(b)=\inf_{x\in{\bf S}} f(x). \end{equation}
Property~(\ref{eq:xiii}), for example, follows immediately from Lemmas~\ref{lem:dt}
and~\ref{lem:spherical}(7) and the fact that $|\gamma_n a_n(g)|\le 1$ for any probability density function $g\in L^2 ({\bf S}/{\hat{0}})$.
The results on Heisenberg models presented thus far are parallel to the results obtainable for any finite distance regular graph (see the next subsection). One useful result that is not true for general distance regular models depends on the following obvious geometric property of the sphere:
$$|\{ z : d(x,z) \leq a , d(y,z) \leq b \}|$$ is a nonincreasing function of $d(x,y)$ for any fixed $a$ and $b$ where
$|\,\,|$ denotes surface measure. [Proof: For $S^1$, this is obvious. For $S^d$, $d\ge 2$, by symmetry, we can assume that $x=(0,\ldots,0,1)$ and $y=(\cos\theta,0,\ldots, 0, \sin\theta)$ (both vectors with $d+1$ coordinates). Write $S^d$ as $$ \cup_{u\in [-1,1]^{d-1}} A_u $$ where $$ A_u:=S^d\cap\{(a_1,\ldots,a_{d+1}):(a_2,\ldots,a_{d})=u\}. $$ Each $A_u$ is a circle (or is empty) and so essentially by the 1--dimensional case, we have the desired behaviour on each $A_u$ (using 1--dimensional Lebesgue measure) and by Fubini's Theorem, we obtain the desired result on $S^d$.]
Calling a function $f\in L^2 ({\bf S}/{\hat{0}})$ nonincreasing if the corresponding $\overline{f}$ is nonincreasing, the latter can be seen to be equivalent to the property that ${\bf 1}_{d(x , {\hat{0}}) \leq a} * {\bf 1}_{d(x , {\hat{0}}) \leq b}$ is nonincreasing, and by taking linear combinations, this is equivalent to $f * g$ being nonincreasing for all nonincreasing $f$ and $g$ in $L^2 ({\bf S}/{\hat{0}})$. Since $K_J$ is nonincreasing for all $J$, it follows from the fundamental recursion that
\begin{equation} \label{eq:xiv} f \in {\cal P}_+ \Rightarrow f \mbox{ is nonincreasing} . \end{equation}
\begin{lem} \label{lem:incr} For any positive nonincreasing $f \in L^2 ({\bf S}/{\hat{0}})$, $$
\left|\int_{{\bf S}} f \psi_n \, d{\bf x}\right| \le \int_{{\bf S}} f \psi_1 \, d{\bf x} $$ for all $n \ge 1 $. \end{lem}
\noindent{\bf Proof.} It suffices to prove this for functions of the form $f(x) = {\bf 1}_{\{x \cdot {\hat{0}} \geq t\}}$ with $t\in [-1,1]$. We rely on explicit formulae for the functions $\{ \psi_n \}$. Letting $\alpha = d/2 - 1$, a change of variables yields $$\int_{{\bf S}} f \psi_n \, d{\bf x} = s_d^{-1}\int_t^1 {P_n^{(\alpha , \alpha)} (r) \over P_n^{(\alpha ,
\alpha)} (1)} (1 - r^2)^\alpha \, dr,$$ where $$ s_d=\int_{-1}^1 (1-r^2)^\alpha dr $$ and $P_n^{(\alpha , \alpha)}$ is the Jacobi polynomial defined earlier.
Taking the indefinite integral of each side in~(\ref{eq:rod}) with $\beta = \alpha$ yields \begin{eqnarray*} \int (1 - r)^\alpha (1 + r)^\alpha P_n^{(\alpha , \alpha)} (r) \, dr & = &
{(-1)^n \over 2^n n!} {d^{n-1} \over dr^{n-1}} \left [ (1 - r)^{n + \alpha}
(1 + r)^{n + \alpha} \right ] \\[2ex] & = & {- 1 \over 2n} (1 - r^2)^{\alpha + 1} P_{n-1}^{(\alpha + 1 ,
\alpha + 1)} (r)\, . \end{eqnarray*} Evaluating at 1 and $t$ gives \begin{eqnarray*} \int_{{\bf S}} f \psi_n \, d{\bf x}
& = & s_d^{-1}\int_t^1 {P_n^{(\alpha , \alpha)} (r) (1 - r^2)^\alpha \over
P_n^{(\alpha , \alpha)} (1)} \, dr \\[2ex] & = & s_d^{-1} {P_{n-1}^{(\alpha + 1 , \alpha + 1)} (t) (1 - t^2)^{\alpha + 1} \over
2 n P_n^{(\alpha , \alpha)} (1)} . \end{eqnarray*} When $n = 1$, using~(\ref{eq:rod}), this is just $s_d^{-1}(1 - t^2)^{\alpha + 1} / (2(1+\alpha))$. Dividing, we get $${ \int_{{\bf S}} f \psi_n \, d{\bf x} \over \int_{{\bf S}} f \psi_1 \, d{\bf x} } = {P_{n-1}^{(\alpha + 1 , \alpha + 1)} (t) (1+\alpha)
\over n P_n^{(\alpha , \alpha)} (1)} =
{P_{n-1}^{(\alpha + 1 , \alpha + 1)} (t) \over
P_{n-1}^{(\alpha + 1 , \alpha + 1)} (1)} \cdot
{P_{n-1}^{(\alpha + 1 , \alpha + 1)} (1) \over
n P_n^{(\alpha , \alpha)} (1)}\cdot(1+\alpha).$$ The first term in the product is bounded in absolute value by 1. By \cite{Askey74}, p.7, $$P_n^{(\alpha , \alpha)} (1) = {\alpha + n \choose n} ,$$ and so we see that the second term is $1 / (\alpha + 1)$, completing the proof of the lemma. $
\Box$
\noindent{\em Remark:} The case $d = 1$ can also be handled by a rearrangement lemma.
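For $d = 2$ (so $\alpha = 0$ and $P_n^{(0,0)}$ is the Legendre polynomial $P_n$, already equal to $1$ at $r=1$), the inequality of Lemma~\ref{lem:incr} for indicator functions can be confirmed numerically using exact polynomial antiderivatives. This is only an illustration of the statement; the ranges of $n$ and $t$ below are arbitrary.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

# d = 2: alpha = d/2 - 1 = 0, so the weight (1 - r^2)^alpha is identically 1.
def moment(n, t):
    """Integral of P_n(r) over [t, 1], proportional to the pairing of
    the indicator of {x . 0hat >= t} with psi_n."""
    Q = Legendre.basis(n).integ()
    return Q(1.0) - Q(t)

# |moment(n, t)| <= moment(1, t) for all n >= 1, as in the lemma
worst = 0.0
for t in np.linspace(-0.95, 0.95, 39):
    m1 = moment(1, t)
    for n in range(1, 15):
        worst = max(worst, abs(moment(n, t)) / m1)
print(worst)   # stays below 1 on this grid
```

Here $\texttt{moment}(1,t) = (1-t^2)/2 > 0$ for $|t|<1$, so the ratios are well defined.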
\begin{defn} \label{defn:op} Define a linear functional $L$ on $L^2 ({\bf S}/{\hat{0}})$ by $L (g):= \int_{{\bf S}}g(x)\psi_1(x) d{\bf x}(x)$ $(= \gamma_1 a_1 (g))$ and set ${\bf Op}_J = L(K_J)$. (Recall that $\psi_1,\gamma_1$ and $a_1$ are defined in Lemma~\ref{lem:spherical}.) \end{defn}
It follows from Lemmas~\ref{lem:dt},~\ref{lem:spherical}(7) and~\ref{lem:incr},~(\ref{eq:xiv}) and an easy computation that \begin{equation} \label{eq:ixix}
||{\cal K}_J f-1||_A\le {\bf Op}_J ||f-1||_A \mbox{ for all } f\in{\cal P}_+(J). \end{equation}
In the following inequalities, we denote $\rho := {\bf Op}_J$. For $f \in {\cal P}_+(J)$, it also follows easily that \begin{equation} \label{eq:(a)} L ({\cal K}_J f - 1 ) \geq \rho L (f - 1) \end{equation} and that there is a constant $c_3$ such that for all $f\in\,\,\langle\PP_+(J)\rangle$, \begin{equation} \label{eq:(c)}
|L(f)| \leq c_3 ||f||_A . \end{equation} (We can of course take $c_3$ to be 1, but we leave the condition written in this more general form for use as a hypothesis in Theorem~\ref{th:gen i}.)
Putting together the results of Lemmas~\ref{lem:spherical} and~\ref{lem:incr}, as well as~(\ref{eq:viii}),~(\ref{eq:ix}) and~(\ref{eq:xiv}), gives the following corollary.
\begin{cor} \label{cor:(b)} For all $J\ge 0$, there is a constant $c_4 > 0$ such that for all $f \in {\cal P}_+(J)$, \begin{equation} \label{eq:(b)}
L (f) \geq c_4 ||f - 1||_A . \end{equation} \end{cor}
\noindent{\bf Proof.} Fix $f \in {\cal P}_+(J)$. If $f = K_{J'}$ for some $J' \in (0, J]$, we argue as follows. As
$||e^{\lambda \psi_1(x)}||_A =e^{\lambda}$ (which we mentioned earlier) and $K_{J'}(x)=e^{J'\psi_1(x)}/\int e^{J'\psi_1(x)}d{\bf x}(x)$, we have \begin{eqnarray*}
||K_{J'} - 1||_A & = & ||K_{J'} ||_A -1 \\[2ex] & = & {e^{J'}\over \int e^{J'\psi_1(x)}d{\bf x}(x)} -1 \\[2ex] & \le & e^{2J'} -1. \end{eqnarray*} Next, \begin{eqnarray*} L (K_{J'}) & = & {1\over \int e^{J'\psi_1(x)}d{\bf x}(x)} \int e^{J'\psi_1(x)}\psi_1(x)d{\bf x}(x) \\[2ex] & = & {1\over \int e^{J'\psi_1(x)}d{\bf x}(x)} \sum_{k=0}^\infty {(J')^k\over k!} \int \psi_1^{k+1}(x)d{\bf x}(x). \end{eqnarray*} By Lemma~\ref{lem:spherical}(3), all terms in the sum are nonnegative and by Lemma~\ref{lem:spherical}(4), the $k=1$ term is $J'\gamma_1$. Hence $L (K_{J'})\ge J'\gamma_1/e^{J'}$. Since $$ \inf_{J'\in (0,J]} {J'\gamma_1 \over e^{J'}(e^{2J'} -1) } >0, $$ we can find a $c_4$ in this case.
Otherwise, by the fundamental recursion, we may represent $f$ as ${\bigodot} ({\cal K}_{J_1} h_1 , \ldots , {\cal K}_{J_k} h_k)$ with each $h_i$ either in ${\cal P}_+(J)$ or equal to $\delta_{{\hat{0}}}$ and each $J_i\in (0,J]$. Define $g_i = {\cal K}_{J_i} h_i - 1$. Let $m :=\inf_{0< J'\le J} a_1 (K_{J'}) / \sum_{n > 0} a_n (K_{J'})$ which is strictly positive by the above. It follows that if $h_i\in {\cal P}_+(J)$ (the case $h_i=\delta_{{\hat{0}}}$ is already done),
$${L (g_i) \over ||g_i ||_A} = {a_1 (K_{J_i}) a_1 (h_i) \gamma_1^2\over
\sum_{n > 0} a_n (K_{J_i}) a_n (h_i) \gamma_n} \geq m\gamma_1$$ by Lemma~\ref{lem:spherical}(7) and since $a_1(h_i)\gamma_1 \ge a_n(h_i)\gamma_n$ for all $n\ge 1$ by Lemma~\ref{lem:incr} and~(\ref{eq:xiv}). Let $h = \prod_{i=1}^k {\cal K}_{J_i} h_i$. Then $L (h) = L(1 + \sum_{i=1}^k g_i + Q)$, where $Q$ is a sum of monomials in $\{ g_i \}$. Using $q^r_{ij} \ge 0$ and~(\ref{eq:ix}), we have that $L(Q) \geq 0$, and hence \begin{equation} \label{eq:new02}
L(h) \geq \sum_{i=1}^k L(g_i) \geq m \gamma_1\sum_{i=1}^k ||g_i ||_A . \end{equation} On the other hand, for any $B$ and $M$, there is $C = C(M,B)$ such that if $x_1 , \ldots , x_k \in (0,M)$ with $k\le B$, then $$ -1 + \prod_{i=1}^k (1 + x_i) \leq C \sum_{i=1}^k x_i .$$ Next, the positivity of the $q^r_{ij}$ implies $\int_{{\bf S}} h(x) \, d{\bf x}(x) = a_0 (h) \geq 1$. It follows that
$$||h - 1||_A = -1+ ||h ||_A =
-1 + \prod_{i=1}^k ||g_i + 1||_A \leq C \sum_{i=1}^k
||g_i||_A $$
for some constant $C$ since $||g_i+1 ||_A=||g_i ||_A +1$ and
$||g_i+1 ||_A$ clearly has a universal upper bound. [To see the latter statement, one notes that $$
\sup_{0<J' \le J} ||K_{J'}||_A <\infty, $$ $$
||K_{J'} *f||_A \le ||K_{J'} ||_A $$ for any probability density function $f\in L^2 ({\bf S}/{\hat{0}})$ (by Lemma~\ref{lem:spherical}(7)),~(\ref{eq:submult}) and the fact that we never have more than $B$ terms in our pointwise products imply that $$
\sup_{f\in {\cal P}_+(J)}||f||_A \le
\left(\sup_{0<J' \le J} ||K_{J'}||_A \right)^B <\infty.] $$ Putting this together with~(\ref{eq:new02}) gives
$${L(h) \over ||h - 1||_A} \geq {m \gamma_1\over C} \, .$$
Finally, letting $f = h / \left(\int_{{\bf S}} h(x) \, d{\bf x}(x)\right)$, we obtain \begin{eqnarray*}
||h - 1||_A & \ge & \sum_{n\ge 1} a_n(h) \\[2ex] & = & \sum_{n\ge 1} \left[\int_{{\bf S}} h(x)d{\bf x}(x)\right] a_n(f) \\[2ex]
& = & \left[\int_{{\bf S}} h(x)d{\bf x}(x)\right] ||f - 1||_A. \end{eqnarray*} Hence $$
{L(f)\over ||f - 1||_A} \ge {L(h)\over ||h - 1||_A} \ge {m \gamma_1\over C} $$ and we're done. $
\Box$
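On the circle, the quantities in the first case of the proof are explicit: $L(K_{J'}) = \gamma_1 a_1(K_{J'})$ with $\gamma_1 = 1/2$. The sketch below checks the two bounds used there, and the resulting uniform lower bound on $L(K_{J'})/\|K_{J'}-1\|_A$, over a sample of $J' \in (0,J]$ with $J = 2$. The grid, the truncation order, and the test constant $0.2$ are ad hoc choices, not quantities from the text.

```python
import numpy as np

theta = np.linspace(-np.pi, np.pi, 4096, endpoint=False)

def k_stats(Jp):
    """Return (||K_{J'} - 1||_A, L(K_{J'})) for the rotor model."""
    K = np.exp(Jp * np.cos(theta))
    K /= K.mean()                                    # normalized K_{J'}
    a = [(K * np.cos(n * theta)).mean() / (1.0 if n == 0 else 0.5)
         for n in range(60)]
    return sum(abs(x) for x in a[1:]), 0.5 * a[1]    # gamma_1 = 1/2

for Jp in np.linspace(0.05, 2.0, 40):
    dist, L = k_stats(Jp)
    assert dist <= np.exp(2 * Jp) - 1 + 1e-9         # ||K_{J'} - 1||_A <= e^{2J'} - 1
    assert L >= Jp * 0.5 / np.exp(Jp) - 1e-9         # L(K_{J'}) >= J' gamma_1 / e^{J'}
    assert L / dist > 0.2                            # ratio bounded away from zero
```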
\subsection{Distance regular graphs} \label{sub:finite}
For the remainder of this section, we suppose that ${\bf S}$ is the vertex set of a finite, connected, distance regular graph, that $d(x,y)$ is the graph distance, and that the energy $H (x,y)$ depends only on $d(x,y)$. The Potts models fit into this framework, with the underlying graphs being the complete graphs $K_q$ on $q$ vertices. All the results we need follow in fact from an even weaker assumption, namely that ${\bf S}$ is an {\it association scheme}. For the definition of association schemes and the proofs of the relevant results, see~\cite{BCN} or~\cite{Ter98}. By developing the analogue of Lemma~\ref{lem:spherical} for distance regular graphs, we will illustrate the extent to which our results are independent of the special properties of the Heisenberg model.
We have a distinguished element ${\hat{0}} \in {\bf S}$ and the measure $d{\bf x}$ will of course be normalized counting measure $|{\bf S}|^{-1} \sum_{x \in {\bf S}} \delta_x$. The spaces $L^2 ({\bf S})$ and $L^2 ({\bf S}/{\hat{0}})$ are then simply finite dimensional vector spaces with respective dimensions $|{\bf S}|$ and $1+D$, where $D$ is the diameter of the graph ${\bf S}$.
Denote by $M({\bf S})$ the space of matrices with rows and columns indexed by ${\bf S}$, thought of as linear maps from $L^2 ({\bf S})$ to $L^2 ({\bf S})$. Associated with each function $f \in L^2 ({\bf S}/{\hat{0}})$ is the matrix $M_f \in M({\bf S})$ whose $(x,y)$ entry is $\overline{f} (d(x,y))$, whence the matrix $M_f$ corresponds to the linear operator $h \mapsto h * f$ given in Section~\ref{sub:drs}. The following analogue of Lemma~\ref{lem:spherical} is derived from Section~2.4 of~\cite{Ter98}; a published reference is Section~2.3 of~\cite{BCN}.
\begin{lem} \label{lem:scheme} There exists a basis of real--valued functions $\psi_0 , \ldots , \psi_D$ of $L^2 ({\bf S}/{\hat{0}})$ orthogonal under the inner product
$\langle f , g\rangle = |{\bf S}|^{-1} \sum_x f(x) \overline{g(x)}$ with the following properties. \\ (1) $\psi_0 (x) \equiv 1$. \\
(2) $\psi_j ({\hat{0}}) = 1 = \sup_x |\psi_j (x)|$ for all $j$. \\
(3) $\psi_i \psi_j = \sum_{r=0}^D q^r_{ij} \psi_r$ for some nonnegative coefficients
$q^r_{ij}$ with $\sum_r q^r_{ij} = 1$. \\ (4) $\psi_i * \psi_j = \gamma_j \delta_{ij} \psi_j$, where $\gamma_j : = \psi_j * \psi_j
({\hat{0}}) = |{\bf S}|^{-1} \sum_x \psi_j (x)^2$. \\ (5) The functions $\psi_j$ are eigenfunctions of any convolution
operator, that is, $M_f \psi_j = c(f,j)\, \psi_j$ for any $f \in L^2 ({\bf S}/{\hat{0}})$. \\
(6) For $f \in L^2 ({\bf S}/{\hat{0}})$, we have $f = \sum_{j=0}^D a_j(f) \psi_j$, where
$a_j : = \gamma_j^{-1} |{\bf S}|^{-1} \sum_x f(x) \psi_j (x)$. \\ (7) For $f,g \in L^2 ({\bf S}/{\hat{0}})$, we have $a_j(f * g)= \gamma_j a_j(f) a_j(g)$. \\ (8) For $f\in L^2 ({\bf S}/{\hat{0}})$ which is positive and nonincreasing,
$|\langle f , \psi_i\rangle|\le \langle f , \psi_1\rangle $ for each $i\ge 1$.
\end{lem}
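For the smallest example, the complete graph $K_q$ (the Potts case, where $D = 1$), the basis of Lemma~\ref{lem:scheme} is just $\psi_0 \equiv 1$ together with $\psi_1$ taking the value $1$ at ${\hat{0}}$ and $-1/(q-1)$ elsewhere, and properties (3) and (4) can be checked by direct computation. The sketch below does this for the arbitrary choice $q = 5$, encoding $f \in L^2 ({\bf S}/{\hat{0}})$ by its two values $\overline{f}(0), \overline{f}(1)$.

```python
import numpy as np

q = 5                                   # arbitrary number of states for the test
D = 1 - np.eye(q)                       # distance matrix of K_q: d(x,y) in {0,1}

def mat(fbar):
    """M_f: the (x, y) entry is fbar[d(x, y)]."""
    return np.where(D == 0, fbar[0], fbar[1])

def conv(fbar, gbar):
    """f * g as a function of distance to 0hat (first row of M_f M_g / |S|)."""
    C = mat(fbar) @ mat(gbar) / q
    return np.array([C[0, 0], C[0, 1]])

psi0 = np.array([1.0, 1.0])
psi1 = np.array([1.0, -1.0 / (q - 1)])

# Property (3): psi1^2 = q0 * psi0 + q1 * psi1 with q0, q1 >= 0 summing to 1
sq = psi1 ** 2
q1c = (sq[0] - sq[1]) / (psi1[0] - psi1[1])
q0c = sq[0] - q1c
# Property (4): psi1 * psi1 = gamma1 * psi1, gamma1 = |S|^{-1} sum_x psi1(x)^2
gamma1 = (psi1[0] ** 2 + (q - 1) * psi1[1] ** 2) / q
```

For $q = 5$ this gives $q^0_{11} = 1/4$, $q^1_{11} = 3/4$ and $\gamma_1 = 1/(q-1) = 1/4$.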
If we place the norm $\sum_{j=0}^D |a_j(f)|$ on $\langle\PP_+(J)\rangle$, essentially all of the hypotheses in Theorems~\ref{th:gen ii} and~\ref{th:gen i} (to come later) are immediate noting that all norms are equivalent on finite dimensional spaces. If the analogue of~(\ref{eq:xiv}) holds, then letting
$L (g):= |{\bf S}|^{-1}\sum_{x\in{\bf S}}g(x)\psi_1(x)$ and taking both ${\bf Op}_J $ and $\rho$ to be $L(K_J)$, one can easily show that {\it all} of the hypotheses in Theorems~\ref{th:gen ii} and~\ref{th:gen i} hold. As for~(\ref{eq:xiv}), it holds trivially for the complete graph, where the diameter $D$ is equal to 1; in any case, the reader is left with only one condition to check.
\setcounter{equation}{0}
\section{Two Technical Theorems} \label{sec:proofs} We now state two general results from which Theorems~\ref{th:main} and~\ref{th:potts} will follow.
\begin{th} \label{th:gen ii} Let $\Gamma$ be any tree (with bounded degree). For the $d$--dimensional Heisenberg model with $d\ge 1$, if $J > 0$ and $$\textstyle {br} (\Gamma) \cdot {\bf Op}_J < 1,$$ then there is no robust phase transition for the parameter $J$, where ${\bf Op}_J$ is given in Definition~\ref{defn:op} (${\bf Op}_J$ implicitly depends on $d$). More generally, if $J>0$ and if $({\bf S} , G , H)$ is any statistical ensemble with a norm
$|| \cdot ||$ on $\langle\PP_+(J)\rangle$ satisfying~(\ref{eq:xi}),~(\ref{eq:xii}),~(\ref{eq:present iii}) and~(\ref{eq:xiii}) and there exists a number ${\bf Op}_J \in (0,1)$ satisfying (\ref{eq:ixix}) and $\textstyle {br} (\Gamma) \cdot {\bf Op}_J < 1$, then there is no robust phase transition for the parameter $J$. \end{th}
\begin{th} \label{th:gen i} Let $\Gamma$ be any tree (with bounded degree). For the $d$--dimensional Heisenberg model with $d\ge 1$, if $J > 0$ and $$\textstyle {br} (\Gamma) \cdot {\bf Op}_J > 1,$$ then there is a robust phase transition for the parameter $J$, where ${\bf Op}_J$ is as above. More generally, if $J>0$ and if $({\bf S} , G , H)$ is any statistical ensemble with a norm
$|| \cdot ||$ on $\langle\PP_+(J)\rangle$ satisfying~(\ref{eq:xi}),~(\ref{eq:present iii}), (\ref{eq:present iii'}),~(\ref{eq:xiii}) and~(\ref{eq:xiiii}), and if $L$ is a linear functional on $\langle\PP_+(J)\rangle$ which vanishes on the constants and satisfies~(\ref{eq:(a)}),~(\ref{eq:(c)}) and~(\ref{eq:(b)}) for a constant $\rho > 0$, then $\textstyle {br} (\Gamma) \cdot \rho > 1$ implies a robust phase transition for the parameter $J$. \end{th}
To prove these results, we begin with a purely geometric lemma on the existence of cutsets of uniformly small content below the branching number.
\begin{lem} \label{lem:globalcut} Assume that $\textstyle {br} (\Gamma) < d$. Then for all $\epsilon >0$, there exists a cutset $C$ such that $$
\sum_{x\in C}({1 \over d})^{|x|} \le \epsilon $$ and for all $v\in C^i\cup C$, \begin{equation}\label{eqn:goodcut}
\sum_{x\in C\cap \Gamma (v)}({1 \over d})^{|x|-|v|} \le 1. \end{equation} \end{lem}
\noindent{\bf Proof.} Since $\textstyle {br} (\Gamma) < d$, for any given $\epsilon >0$, there exists a cutset $C$ such that $$
\sum_{x\in C}({1 \over d})^{|x|} \le \epsilon. $$ We can assume that $C$ is a minimal cutset with this property with respect to the partial order $C_1 \preceq C_2$ if for all $v\in C_1$, there exists $w\in C_2$ such that $v\le w$. We claim that this cutset satisfies~(\ref{eqn:goodcut}). If this property failed for some $v$, we let $C'$ be the modified cutset obtained by replacing $C\cap \Gamma (v)$ by $v$ (and leaving $C\cap \Gamma^c_v$ unchanged). As~(\ref{eqn:goodcut}) clearly holds for $w\in C$, we must have that $v\not\in C$ in which case $C'\neq C$. We then have \begin{eqnarray*}
\sum_{x\in C'}({1 \over d})^{|x|} & = &
\sum_{x\in C\cap \Gamma (v)^c}({1 \over d})^{|x|}+({1 \over d})^{|v|} \\[1ex]
& < & \sum_{x\in C\cap \Gamma (v)^c}({1 \over d})^{|x|}+
({1 \over d})^{|v|} \sum_{x\in C\cap \Gamma (v)}({1 \over d})^{|x|-|v|} \\[1ex]
& = & \sum_{x\in C\cap \Gamma (v)^c}({1 \over d})^{|x|}+
\sum_{x\in C\cap \Gamma (v)}({1 \over d})^{|x|} \\[1ex]
& = & \sum_{x\in C}({1 \over d})^{|x|} \\[1ex] & \le & \epsilon, \end{eqnarray*} contradicting the minimality of $C$ since clearly $C'\preceq C$. $
\Box$
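The replacement step in this proof can be seen on a small concrete tree. In the sketch below (exact rational arithmetic; the tree and the value of $d$ are invented for illustration), a vertex $v$ at depth 1 has five children in the cutset, so~(\ref{eqn:goodcut}) fails at $v$ when $d = 3$, and replacing $C \cap \Gamma (v)$ by $v$ strictly decreases the cutset's weight.

```python
from fractions import Fraction

d = Fraction(3)

def weight(depths):
    """Sum of (1/d)^{|x|} over a cutset given by the depths of its vertices."""
    return sum(Fraction(1) / d ** k for k in depths)

# Root o has two children; its child v (depth 1) has five children of its own.
C_bad = [2] * 5 + [1]     # the five children of v, plus the root's other child
C_good = [1, 1]           # replace C intersected with Gamma(v) by v itself

# (eqn:goodcut) fails at v: sum over its children of (1/d)^{|x|-|v|} = 5/3 > 1
violation = weight([2] * 5) / (Fraction(1) / d)
print(violation, weight(C_bad), weight(C_good))
```

Here the bad cutset has weight $8/9$ and the modified one $2/3$, matching the strict inequality in the proof.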
We now proceed with the proofs of Theorems~\ref{th:gen ii} and~\ref{th:gen i}.
\noindent{\bf Proof of Theorem \ref{th:gen ii}.} Since in Section~\ref{sub:sphere}
the Heisenberg models have been shown to satisfy all of the more general hypotheses of this theorem, we need only prove the last statement of the theorem where we have a given $J>0$, a given $\|\,\,\|$ on $\langle\PP_+(J)\rangle$ and a given ${\bf Op}_J$ satisfying the required conditions. By~(\ref{eq:xi}), for any $\epsilon > 0$, there is an $\epsilon_0 > 0$ such that for all $k \leq B$ and all $h_1 , \ldots ,
h_k \in {\cal P}_+(J)$ with $\|h_i - 1\| \leq \epsilon_0$ for all $i$, we have that \begin{equation} \label{eq:star2}
\| {\bigodot}_k (h_1 , \ldots , h_k) - 1 \| \leq (1 + \epsilon) \sum_{i=1}^k
\|h_i - 1\| \, . \end{equation} Choose $\epsilon > 0$ so that $(1 + \epsilon)^{-1} > \textstyle {br} (\Gamma) \cdot {\bf Op}_J$ and choose $\epsilon_0$ as above. By~(\ref{eq:xii}), we can choose $J'>0$ small enough
so that $\| K_{J'} -1 \| \leq \epsilon_0 {\bf Op}_J$. Use Lemma~\ref{lem:globalcut} to choose a sequence of cutsets $\{ C_n \}$ for which $$\lim_{n \rightarrow \infty} \sum_{x \in C_n} [(1 + \epsilon)
{\bf Op}_J ]^{|x|} = 0$$ and for all $n$ and all $v \in C_n^i \cup C_n$, \begin{equation} \label{eq:star43} \sum_{x \in C_n \cap \Gamma (v)}
[(1 + \epsilon) {\bf Op}_J ]^{|x| - |v|} \leq 1. \end{equation} We now show by induction that for all $n$ and all $v \in C_n^i$, \begin{equation} \label{eq:ind}
\|f^{J' ,J, +}_{C_n , v} - 1\| \leq \epsilon_0 \sum_{x \in C_n \cap \Gamma (v)}
[(1 + \epsilon) {\bf Op}_J ]^{|x| - |v|} \, . \end{equation} Indeed, from Lemma~\ref{lem:rec}, letting $w_1 , \ldots , w_k$ be the children of $v$,
$$\|f^{J',J , +}_{C_n , v} - 1\|
= \| {\bigodot} ({\cal K}_{J_1''} f^{J',J , +}_{C_n , w_1},
\ldots , {\cal K}_{J_k''} f^{J',J , +}_{C_n , w_k}) - 1 \| \, $$ where $J_i''$ is $J$ if $w_i\in C_n^i$ and $J'$ otherwise. When $w_i \in C_n$, the choice of $J'$ guarantees that
$\|{\cal K}_{J_i''} f^{J',J , +}_{C_n , w_i} - 1\| \leq \epsilon_0{\bf Op}_J\leq \epsilon_0$, while when $w_i \notin C_n$, the induction hypothesis together with~(\ref{eq:star43}) guarantees that
$\|f^{J',J , +}_{C_n , w_i} - 1\| \le \epsilon_0$ which implies that
$\|{\cal K}_{J_i''} f^{J',J , +}_{C_n , w_i} - 1\| \le \epsilon_0$ by~(\ref{eq:xiii}). Hence, from~(\ref{eq:star2}),
$$\|f^{J',J , +}_{C_n , v} - 1\| \leq
(1 + \epsilon) \sum_{w_i \in C_n} \| {\cal K}_{J'} f^{J',J , +}_{C_n , w_i} - 1 \|
+ (1 + \epsilon) \sum_{w_i \notin C_n}
\|{\cal K}_J f^{J',J , +}_{C_n , w_i} - 1\|.$$
The summands in the first sum are at most $\epsilon_0 {\bf Op}_J$ while those in the second sum are by~(\ref{eq:ixix}) at
most ${\bf Op}_J \|f^{J',J , +}_{C_n , w_i} - 1\| $. Therefore using the induction hypothesis on the second term, we obtain \begin{eqnarray*}
\|f^{J',J , +}_{C_n , v} - 1\| & \leq & \sum_{i=1}^k (1 + \epsilon) \epsilon_0 {\bf Op}_J \sum_{x \in C_n \cap
\Gamma (w_i)} \left [ (1 + \epsilon ) {\bf Op}_J \right ]^{|x| - |w_i|} \\[2ex] & = & \epsilon_0 \sum_{x \in C_n \cap \Gamma (v)} \left [ (1 + \epsilon)
{\bf Op}_J \right ]^{|x| - |v|} , \end{eqnarray*} completing the induction. Finally, the theorem follows by taking $v = o$, letting $n \rightarrow \infty$, and using~(\ref{eq:present iii}). $
\Box$
For the proof of Theorem~\ref{th:gen i}, it is easiest to isolate the following two lemmas.
\begin{lem} \label{lem:first} Under the more general hypotheses of Theorem~\ref{th:gen i}
(with a given $J >0$, a given $\|\,\,\|$ on $\langle\PP_+(J)\rangle$, a given $L$ and a given $\rho$ satisfying the required conditions), for all $\alpha>0$, there exists $\beta>0$ so that if $h_1,\ldots, h_k\in {\cal P}_+(J)$ with
$k\le B$ and $\|h_i-1\| < \beta$ for each $i$, then $$ L \left [ ({\bigodot}_k ({\cal K}_J h_1 , \ldots , {\cal K}_J h_k)) - 1 \right ]
\geq {1 \over 1 + \alpha} \sum_{i=1}^k L ({\cal K}_J h_i - 1). $$ \end{lem}
\noindent{\bf Proof.} In~(\ref{eq:xi}), choose $\beta<1$ so that $$o(h)\le h \left(1-{1 \over (1+\alpha)}\right) {c_4 \over c_3} $$ for all $h\in (0,\beta)$, with $c_3$ and $c_4$ as in~(\ref{eq:(c)}) and~(\ref{eq:(b)}). If $h_1,\ldots, h_k\in {\cal P}_+(J)$ are such that
$\|h_i-1\| < \beta$, then $\|{\cal K}_J h_i-1\| < \beta$ by~(\ref{eq:xiii}). We can now write \begin{equation} \label{eq:U1} {\bigodot}_k({\cal K}_J h_1,\ldots,{\cal K}_J h_k)-1- {1 \over (1+\alpha)}\sum_{i=1}^k ({\cal K}_J h_i-1) \end{equation} as \begin{equation} \label{eq:U2} \left(1-{1 \over (1+\alpha)}\right)\sum_{i=1}^k ({\cal K}_J h_i-1) +U \end{equation} where by assumption, \begin{eqnarray} \label{eq:new03}
\|U\| & \le & o(\max_i \|{\cal K}_J h_i-1 \|) \\[2ex] & \le & \left(1-{1 \over (1+\alpha)}\right) {c_4 \over c_3}
\max_i \|{\cal K}_J h_i-1 \| \nonumber \\[2ex] & \le & \left(1-{1 \over (1+\alpha)}\right){c_4 \over c_3}
\sum_{i=1}^k \|{\cal K}_J h_i-1 \|. \nonumber \end{eqnarray} Letting $a$ be the quantity~(\ref{eq:U1}), we see that \begin{eqnarray*} L(a) & = & L \left [ \left ( 1 - {1 \over (1+\alpha)} \right )
\sum_{i=1}^k ({\cal K}_J h_i-1) \right ] + L(U) \\ & \ge & \left ( 1 - {1 \over (1+\alpha)} \right ) c_4 \sum_{i=1}^k
\|{\cal K}_J h_i-1\| -c_3 \|U\| \\ & \geq & 0 \end{eqnarray*} by~(\ref{eq:(c)}),~(\ref{eq:(b)}) and~(\ref{eq:new03}), which is the conclusion of the lemma. $
\Box$
The next lemma tells us that in ``one step'', we can't move from being ``far away'' from uniform to being ``very close'' to uniform.
\begin{lem} \label{lem:second} Under the more general hypotheses of Theorem~\ref{th:gen i}
(with a given $J >0$, a given $\|\,\,\|$ on $\langle\PP_+(J)\rangle$, a given $L$ and a given $\rho$ satisfying the required conditions), for all $\beta>0$ and $J'\in (0,J]$, there exists a $\gamma<\beta$ such that if
$\| {\bigodot}_k ( {\cal K}_{J_1''} h_1 , \ldots , {\cal K}_{J_k''} h_k) - 1 \| <\gamma$ with $h_1,\ldots, h_k\in {\cal P}_+(J)\cup \{\delta_{{\hat{0}}}\}$ and $k\le B$
and with $J_i''$ being $J$ if $h_i\in {\cal P}_+(J)$ and $J'$ if $h_i=\delta_{{\hat{0}}}$, then each $h_i$ is not $\delta_{{\hat{0}}}$ and $\sum_{i=1}^k \| h_i - 1\| < \beta$. \end{lem}
\noindent{\bf Proof.} Choose $\gamma\in (0,\min\{\beta,1/c_1\})$ so that $$ {2c_1 c_3 B\gamma \over\rho c_2 c_4 (1-c_1\gamma)} < \beta $$ and $$
\min\{||K_J-1||,||K_{J'}-1||\} > {2c_1 \gamma \over (1-c_1\gamma)c_2} $$ where $c_1,c_2,\rho,c_3$ and $c_4$ come from~(\ref{eq:present iii}),~(\ref{eq:present iii'}),~(\ref{eq:(a)}),~(\ref{eq:(c)}) and~(\ref{eq:(b)}) respectively. We first show that if $h_1,\ldots,h_k\in{\cal P}_+(J)$, with $k\le B$, then
$\| {\bigodot}_k (h_1 , \ldots , h_k) - 1 \| <\gamma< 1/c_1$ implies that for all $i$ $$
\| h_i - 1 \| <{2c_1\gamma \over (1-c_1\gamma)c_2}. $$ [Proof: $$
||h_i-1||\le c_2^{-1}||h_i-1||_\infty \le c_2^{-1}\left({\max h_i\over \min h_i}-1\right) $$ $$ \le c_2^{-1}\left({\max \prod_i h_i\over \min \prod_i h_i}-1\right) = c_2^{-1}\left({\max {\bigodot}_k (h_1 , \ldots , h_k)\over\min {\bigodot}_k (h_1 , \ldots , h_k) }-1\right) $$
where the second inequality is straightforward and the third inequality comes from~(\ref{eq:xiiii}). Next, $\| {\bigodot}_k (h_1 , \ldots , h_k) - 1 \| <\gamma< 1/c_1$ implies
$|| {\bigodot}_k (h_1 , \ldots , h_k) - 1 ||_\infty\le c_1\gamma$ which implies the last expression is at most $$ c_2^{-1}\left({1+c_1\gamma\over 1-c_1\gamma}-1\right)= c_2^{-1}{2c_1\gamma\over 1-c_1\gamma}.] $$ It follows that if
$\| {\bigodot}_k ( {\cal K}_{J_1''} h_1 , \ldots , {\cal K}_{J_k''} h_k) - 1 \| <\gamma$, then $$
\| {\cal K}_{J_i''} h_i-1 \| <{2c_1 \gamma \over (1-c_1\gamma)c_2} $$ for each $i$ which implies that $h_i\in{\cal P}_+(J)$ (as opposed to being $\delta_{{\hat{0}}}$). Hence $J_i''$ is $J$ for all $i$.
Now from~(\ref{eq:(a)})--(\ref{eq:(b)}) we have
$$||{\cal K}_{J} h_i - 1|| \geq {\rho c_4 ||h_i - 1|| \over c_3}$$ and we obtain the conclusion of the lemma. $
\Box$
\noindent{\bf Proof of Theorem~\ref{th:gen i}.} Since in Section~\ref{sub:sphere}
the Heisenberg models have been shown to satisfy all of the more general hypotheses of this theorem, we need only prove the last statement of the theorem, where we have a given $J >0$, a given $\|\,\,\|$ on $\langle\PP_+(J)\rangle$, a given $L$ and a given $\rho$ satisfying the required conditions. Choose an $\alpha > 0$ so that $\textstyle {br} (\Gamma) \cdot \rho > 1 + \alpha$. Choosing $\beta$ from Lemma \ref{lem:first}, we have, under our assumptions, that for all $h_1 , \ldots , h_k \in {\cal P}_+(J)$ with
$k\le B$ and $\|h_i-1\| < \beta$ for each $i$, \begin{equation} \label{eq:star4old} L \left [ ({\bigodot}_k ({\cal K}_J h_1 , \ldots , {\cal K}_J h_k)) - 1 \right ]
\geq {1 \over 1 + \alpha} \sum_{i=1}^k L ({\cal K}_J h_i - 1)
\geq {\rho \over 1 + \alpha} \sum_{i=1}^k L (h_i - 1). \end{equation} Now, if there is no robust phase transition, then by~(\ref{eq:present iii'}) there must exist $J'\in (0,J]$
and a sequence of cutsets $\{ C_n \}$ going to infinity such that $\lim_{n \rightarrow \infty} \|f^{J',J , +}_{C_n , o} - 1\| = 0$. Using Lemma \ref{lem:second}, choose $\gamma<\beta$ corresponding to $\beta$ and $J'$. Next, by our choice of $\alpha$, we have
$$I := \inf_C \sum_{x \in C} \left ( {\rho \over 1 + \alpha} \right)^{|x|}
> 0$$ where the infimum is over all cutsets. We now choose $n$ so that $$
\|f^{J',J , +}_{C_n , o} - 1\| < \min \{ \gamma , {c_4 \gamma I \over c_3} \}, $$ where $c_3$ and $c_4$ come from~(\ref{eq:(c)}) and~(\ref{eq:(b)}) respectively. We then define $\Gamma'$ to be the component of the set
$$\{ v \in C_n^i : \|f^{J',J , +}_{C_n , v} - 1\| < \gamma \}$$ that contains $o$ and let $C$ be the exterior boundary of $\Gamma'$ (that is, the set of $x \notin \Gamma'$ neighboring some $y \in \Gamma'$). By the choice of $\gamma$, $C \subseteq C_n^i$ and for each $v\in C^i\cup C$, the density $f^{J',J , +}_{C_n , v}$ is in
$$ {\cal P}_+(J) \cap \{f: \|f-1\| < \beta\}. $$ Using~(\ref{eq:star4old}) and induction, we see that $$L (f^{J',J , +}_{C_n , o} - 1) \geq \sum_{x \in C} \left (
{\rho \over 1 + \alpha} \right )^{|x|} L (f^{J',J , +}_{C_n , x} - 1) .$$ By definition of $\Gamma' , C$ and $I$ and the fact that $L (f-1) \geq c_4
\|f - 1\|$ on ${\cal P}_+(J)$, we see that $$L (f^{J',J , +}_{C_n , o} - 1) \geq c_4 \gamma I .$$ Hence
$$\|f^{J',J , +}_{C_n , o} - 1\| \geq {c_4 \over c_3} \gamma I .$$ This contradicts the choice of $n$, proving that there is indeed a robust phase transition. $
\Box$
\setcounter{equation}{0}
\section{Analysis of specific models} \label{sec:anal}
\subsection{Heisenberg models} \label{sub:spherical}
For the Heisenberg models, recall that ${\bf S} = S^d$, $d \geq 1$, and $H(x,y) = - x \cdot y$. The operator ${\cal K}_J$ is convolution with the function $K_J (x) = c e^{ J x \cdot {\hat{0}}}$, where $c$ is a normalizing constant.
\noindent{\bf Proof of Theorem~\protect{\ref{th:main}}.} A change of variables shows that $L(K_J)=\rho^d(J)$ and so the result follows from Theorems~\ref{th:gen ii} and~\ref{th:gen i}. $
\Box$
For the rotor model, we now prove the equivalence of SB and SB+.
\noindent{\bf Proof of Proposition~\protect{\ref{pr:rotor equiv}}.} We have already seen the representation $$f = \sum_{n \geq 0} a_n (f) \psi_n, $$ for functions $f \in L^2 ({\bf S}/{\hat{0}})$. In the case of the rotor model, where ${\bf S} = S^1$ and we take ${\hat{0}}$ to be $(1,0)$, the space $L^2 ({\bf S}/{\hat{0}})$ is the space of even functions of $\theta \in [-\pi , \pi]$ and $\psi_n = \cos (n \theta)$. We now turn to the full Fourier decomposition $f = \sum_{n \in Z\!\!\!Z} b_n (f) e^{i n \theta}$, where $b_n (f) = \int_0^{2\pi} f(\theta) e^{-i n \theta} \, d\theta/ (2 \pi)$.
Let $C$ be any cutset and $\delta$ be a set of boundary conditions on $C$. Let ${\cal J}$ be any set of interaction strengths. It suffices to show that
$$||f^{{\cal J} , \delta}_{C,w} - 1||_\infty \leq ||f^{{\cal J} , +}_{C,w}-1||_\infty$$ for all $w\in C^i$.
For $v \in C$ and $n\in Z\!\!\!Z$, let $x_{v,n} = b_n (K_{{\cal J}(e) , \delta (v)})$, where $e$ is the edge from $v$ to its parent.
\noindent{\em Claim}: For all $y\in C^i$, the Fourier coefficients $\int_0^{2\pi} e^{i n \theta} \, d\mu^{{\cal J} , \delta}_{C,y}(\theta)$, which we denote by $\{ u_{y , n} : n \in Z\!\!\!Z \}$, are sums of monomials in $\{ x_{v,n} \}_{v\in C, n\in Z\!\!\!Z}$ with nonnegative coefficients. {\it Proof:} Let $w \in C^i$ have children $w_1 , \ldots , w_r \in C^i$ and $w_{r+1} , \ldots , w_k \in C$. Then the Fourier coefficients $\{ u_{w,n} : n \in Z\!\!\!Z \}$ are the convolution of the $k - r$ series $\{ x_{v , n} : n \in Z\!\!\!Z \}$ as $v$ ranges over $w_{r+1} , \ldots , w_k$, also convolved with the series $\{ b_n (K_{{\cal J} (\overline{wv})}) u_{v,n} : n \in Z\!\!\!Z \}$ as $v$ ranges over $w_1 , \ldots , w_r$. Since $b_n (K_J) \geq 0$, this establishes the claim via induction and the fundamental recursion.
Now write $x_{v,n}^+$ for the Fourier coefficients $b_n (K_{{\cal J} (e)})$ where $e$ is as before. Since $K_{J , e^{i \alpha}} (x) = K_J (e^{-i \alpha} x)$, it follows that
$$|x_{v,n}| = |x_{v,n}^+| .$$ But $x_{v,n}^+$ is real because $K_J$ is even, and has been shown to be nonnegative. Thus
$$|x_{v,n}| = x_{v,n}^+ ,$$ and it follows from the claim that each $u_{w,n}$ has modulus bounded above by the corresponding $u_{w,n}^+$ when plus boundary conditions are taken. Hence $$
||f^{{\cal J} , \delta}_{C,w} - 1||_\infty \le ||f^{{\cal J} , \delta}_{C,w} - 1||_A
\leq \sum_{n \neq 0} |u_{w,n}|
\leq \sum_{n \neq 0} u_{w,n}^+ = ||f^{{\cal J} , +}_{C,w} - 1||_A
= ||f^{{\cal J} , +}_{C,w} - 1||_\infty,$$ proving the proposition. $\Box$
\noindent{\em Remark:} Although we have used special properties of the Fourier decomposition on $L^2 (S^1)$, there exist similar decompositions for $S^d$. We believe that a parallel argument can be constructed, bounding the modulus of the sum of the coefficients of spherical harmonics of a given order by the coefficients one obtains for the analogous monomials in the values $a_n (K_{{\cal J} (e)})$, whose coefficients are necessarily nonnegative by the nonnegativity of the connection coefficients $q^r_{ij}$. Thus we are led to state:
\begin{pblm} \label{pblm:all spheres} Prove a version of Proposition~\ref{pr:rotor equiv} for general Heisenberg models on trees. \end{pblm}
\subsection{The Potts model} \label{sub:potts}
\noindent{\bf Proof of Theorem}~\ref{th:potts}. We will obtain this result from Theorems~\ref{th:gen ii} and~\ref{th:gen i}. For (i), letting $||\,\,||$ be the $L_\infty$ norm on $\langle\PP_+(J)\rangle$ and ${\bf Op}_J=\alpha_J$, all of the hypotheses in Theorem~\ref{th:gen ii} except~(\ref{eq:ixix}) are clear. The function $K_J$ is given by $$K_J (x) = c \exp (J (2 \delta_{x,0} - 1))$$ where $c = (e^J + (q-1) e^{-J})^{-1}$. The operator ${\cal K}_J$ is linear and $${\cal K}_J \delta_j = c e^J \delta_j + \sum_{i \neq j} c e^{-J} \delta_i \, .$$ Hence in the basis $\delta_0 , \ldots , \delta_{q-1}$, the matrix representation of ${\cal K}_J$ is $c (e^J - e^{-J}) I + c e^{-J} M$ where $M$ is the matrix of all ones. On the orthogonal complement of the constant functions, ${\cal K}_J$ is $c (e^J - e^{-J}) I$, and~(\ref{eq:ixix}) follows, proving (i) by an application of Theorem~\ref{th:gen ii}.
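As a consistency check on the displayed matrix representation, note that on the constant functions ${\cal K}_J$ has eigenvalue
$$c (e^J - e^{-J}) + q c e^{-J} = c \left( e^J + (q-1) e^{-J} \right) = 1 \, ,$$
as must be the case for an operator taking probability densities to probability densities.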
For (ii), let $||\,\,||$ be the same as above, $\rho=\alpha_J$ and $L(h)=h(0)-h(1)$. It is then immediate to check that all of the hypotheses in Theorem~\ref{th:gen i} hold, and we may conclude (ii) by an application of Theorem~\ref{th:gen i}. $\Box$
\setcounter{equation}{0}
\section{Proof of Theorem~\protect{\ref{th:0hd}}.} \label{sec:zero}
By Proposition~\ref{prop:SB=} and the fact that any subtree of a tree with branching number 1 also has branching number 1, it suffices to show: \begin{quote} For any $\Gamma$ with $\textstyle {br} (\Gamma) = 1$, and any bounded ${\cal J}$, there is a sequence of cutsets $\{ C_n \}$ such that for any sequence $\{ \delta_n \}$ of boundary conditions on $\{ C_n \}$, $$
\lim_{n\to\infty}\| f^{{\cal J} , \delta_n}_{C_n , o} - 1\|_\infty = 0 . $$ \end{quote}
It is convenient to work with a different measure of size, the {\em Max/Min} measure, defined as follows. (This arose already in the proof of Lemma~\ref{lem:second}.) For any continuous strictly positive function $f$ on ${\bf S}$, let
$$\|f\|_M := {\max_{x \in {\bf S}} f(x) \over \min_{x \in {\bf S}} f(x)} \, .$$ The following is immediate:
\begin{lem} \label{lem:mmequiv}
For any sequence $\{ h_n \}$ of continuous probability densities, $\|h_n - 1\|_\infty \rightarrow 0$ if and only if
$\log \|h_n\|_M \rightarrow 0$. \end{lem}
Next, we examine the effect of ${\cal K}_J$ on $\|f\|_M$.
\begin{lem} \label{lem:unifmm} For any statistical ensemble $({\bf S} , G , H)$, any $J_{\rm max}$ and any $T > 0$ there is an $\epsilon > 0$ such that for any continuous strictly positive function $f$ with
$\|f\|_M \leq T$, and any $J \leq J_{\rm max}$,
$$\log \|{\cal K}_J f\|_M \leq (1 - \epsilon) \log \| f \|_M \, .$$ \end{lem}
\noindent{\bf Proof.} Fix $H, J$ and $f$ and assume without loss of generality that $\int f \, d{\bf x} = 1$ since the {\em Max/Min} measure is unaffected by multiplicative constants. Let $[a,b]$ be the smallest closed interval containing the range of $f$ and $[c,d]$ contain the range of $K_J$ with $a,c >0$. Since $f$ is a probability density, $a < 1 < b$ (we rule out the trivial case $f \equiv 1$). Since $K_J = c + (1-c) g$ for some probability density $g$, it follows that for any $x \in {\bf S}$, $$c + (1-c) a \leq {\cal K}_J f(x) \leq c + (1-c) b .$$ As $J$ varies over $[0 , J_{\rm max}]$, $\min_x K_J (x)$ is bounded below by some $c_0 > 0$, so for all such $J$, $$c_0 + (1-c_0) a \leq {\cal K}_J f(x) \leq c_0 + (1-c_0) b$$ and so
$$\| {\cal K}_J f \|_M \leq {c_0 + (1 - c_0) b \over c_0 + (1 - c_0) a} \, .$$
Setting $R = \|f\|_M - 1$, we have $b = (1+R) a$ and so
$$\| {\cal K}_J f\|_M \leq {c_0 + (1 - c_0) (1 + R) a \over c_0 + (1 - c_0) a}
= 1 + R {(1 - c_0) a \over c_0 + (1 - c_0) a}
\leq 1 + R (1 - c_0) \, .$$ Thus \begin{equation} \label{eq:u}
\|{\cal K}_J f\|_M \leq 1 + (1 - c_0) \left ( \|f\|_M - 1 \right ) \, . \end{equation} The function $\log (1 + (1 - c_0) u) / \log (1 + u)$ is bounded above by some $1 - \epsilon < 1$ as $u$ varies over $(0 , T-1]$, and setting
$u = \|f\|_M - 1$ in~(\ref{eq:u}) gives
$$\log \| {\cal K}_J f\|_M \leq \log (1 + (1 - c_0) (\|f\|_M - 1)) \leq
(1 - \epsilon) \log \|f\|_M \, ,$$ proving the lemma. $\Box$
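For completeness, we indicate why the $\epsilon$ in the last step exists. Writing $g(u) := \log (1 + (1 - c_0) u) / \log (1+u)$, we have for $c_0 \in (0,1)$
$$\lim_{u \rightarrow 0^+} g(u) = 1 - c_0 < 1 \, , \qquad g(u) < 1 \mbox{ for all } u > 0 \, ,$$
the second statement because $(1 - c_0) u < u$. Hence $g$ extends continuously to the compact interval $[0 , T-1]$, and its maximum there may be taken as $1 - \epsilon$.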
Proceeding with the proof of Theorem~\ref{th:0hd}, let $C$ be a cutset with no vertices in the first generation, $$\partial C = \{ v \in C^i : \exists w \in C~{\rm with }\,\, v \to w \}, $$ and $\delta$ be defined on $C$. Clearly, for continuous strictly positive functions $h_1 , \ldots , h_k$, $$
\| {\bigodot} (h_1 , \ldots , h_k) \|_M \leq \prod_{i=1}^k \| h_i \|_M \, . $$ We have also previously seen (Lemma~\ref{lem:unifbd}) that all densities that arise are uniformly bounded away from 0 and $\infty$ and hence there is a uniform bound on the
quantities $\| \,\, \|_M$ that arise. We can therefore choose $\epsilon$ as in Lemma~\ref{lem:unifmm}. Next, for any $v\in C^i\setminus \partial C$, applying the fundamental recursion gives \begin{eqnarray*}
\log \|f^{{\cal J} , \delta}_{C,v} \|_M & = & \log \| {\bigodot} ({\cal K}_{{\cal J} (\overline{vw_1})}
f^{{\cal J} , \delta}_{C , w_1} , \ldots , {\cal K}_{{\cal J} (\overline{vw_k})}
f^{{\cal J} , \delta}_{C , w_k} ) \|_M \\[2ex]
& \leq & \sum_{i=1}^k \log \| {\cal K}_{{\cal J} (\overline{vw_i})}
f^{{\cal J} , \delta}_{C , w_i} \|_M \\[2ex]
& \leq & \sum_{i=1}^k (1 - \epsilon) \log \| f^{{\cal J} , \delta}_{C , w_i} \|_M. \end{eqnarray*} Working backwards, we find that for any cutset $C$,
$$\log \|f^{{\cal J} , \delta}_{C , o}\|_M \leq \sum_{w \in \partial C}
(1 - \epsilon)^{|w|} \log \|f^{{\cal J} , \delta}_{C , w}\|_M \, .$$ Since $\textstyle {br} (\Gamma) = 1$ one can choose a sequence of cutsets $\{ C_n \}$
such that $\sum_{w \in \partial C_n} (1 - \epsilon)^{|w|} \rightarrow 0$. The uniform bound on
$\|f^{{\cal J} , \delta}_{C , w}\|_M$ implies that for any sequence of functions $\delta_n$ on $C_n$, $$
\lim_{n\to\infty}\log \|f^{{\cal J} , \delta_n}_{C_n , o} \|_M =0, $$ which along with Lemma~\ref{lem:mmequiv} proves the theorem. $\Box$
Olle H\"aggstr\"om pointed out to us that this result could also be obtained using ideas from disagreement percolation.
\setcounter{equation}{0}
\section{Proof of Theorem {\protect{\ref{th:2trees}}}.} \label{sec:potts}
While we assume that $q$ is an integer, the case of nonintegral $q$ can be made sense of via the random cluster representation, and it is worth noting here that the qualitative break occurs just above $q=2$: the behavior at $q = 2 + \epsilon$ already differs from that at $q = 2$. See~\cite{Ha} for a discussion of the qualitative differences between the random cluster model on a tree when $q \leq 2$ as opposed to $q > 2$.
\begin{lem} \label{lem:robust} Assume that all of the hypotheses of Theorem~\ref{th:gen ii} are in force (in particular,~(\ref{eq:ixix}) and $\textstyle {br} (\Gamma) \cdot {\bf Op}_J < 1$ hold and so there is no RPT for the parameter $J$) and in addition that
$\sup_{y\in {\bf S}} \|K_{J,y}\| < \infty$ and~(\ref{eq:ixix}) holds for all $f\in{\cal P}(J)$ (instead of just ${\cal P}_+(J)$). Then there is a tree $\Gamma'$ with $\textstyle {br} (\Gamma') = \textstyle {br} (\Gamma)$ such that $\Gamma'$ has no PT for the parameter $J$.
\end{lem}
\noindent{\bf Proof.} We mimic the proof of Theorem~\ref{th:gen ii}. Choose $\epsilon$, $\epsilon_0$ and cutsets $\{ C_n \}$ as in the proof of Theorem~\ref{th:gen ii} where we can assume that the cutsets $\{ C_n \}$ are disjoint. Choose an integer $m$ sufficiently large so that the $m$-fold iterated convolution operator ${\cal K}_J^m$ satisfies
$||{\cal K}_J^m \delta_y-1|| \leq \epsilon_0 {\bf Op}_J$. For each increasing sequence $\{ n(k) : k = 1 , 2 , \ldots \}$ of integers, define a tree $\Gamma'$ by replacing each edge from an element of $C_{n(k)}$ to its parent by $m$ edges in series, for all cutsets in the sequence $\{ C_{n(k)} \}$. It is not too great an abuse of notation to let $C_n$ denote the cutset of $\Gamma'$ consisting of the same vertices as before. It is now possible to establish~(\ref{eq:ind}) for all $v \in D$, where $D$ is the set of vertices in $\Gamma'$ that are in $C^i$ and in $\Gamma$ (i.e., are not in a chain of edges in series that was added). The only adjustment in the proof is as follows. Use Lemma~\ref{lem:rec} to represent $f^{J , +}_{C_n , v}$ in terms of $f^{J,+}_{C_n , w}$ where $w$ are the children of $v$ in $\Gamma$ rather than in $\Gamma'$, i.e., we leap the whole chain of $m$ edges at once. Then the case $w \in C_n$ that was handled by the choice of $J'$ is replaced by a case $w \in \Gamma' \setminus \Gamma$, which is handled by the choice of $m$. In fact, (\ref{eq:ind}) holds when + is replaced by any boundary condition, as the exact same proof shows. By choosing $\{ n(k) \}$ sufficiently sparse, we can ensure that $\textstyle {br} (\Gamma') = \textstyle {br} (\Gamma)$. Fixing any such choice of $\{ n(k) \}$, it follows that there is no phase transition by the above together with Proposition~\ref{prop:SB=}. $\Box$
We proceed now with the description of a counterexample. For $\Gamma_1$, we choose the homogeneous binary tree, where each vertex has precisely 2 children. Recall from Section~\ref{sub:potts} that under + boundary conditions, the functions $f^{J , +}_{C , v}$ all lie in a one-dimensional set. The most convenient parameterization for the segment is by the log-likelihood ratio of state ${\hat{0}}$ to the other states. Thus the probability measure $a \delta_0 + \sum_{i=1}^{q-1} ((1-a)/(q-1)) \delta_i$ is mapped to the value $\log [(q-1) a / (1-a)]$. Let $g(v)$ denote the log-likelihood ratio at $v$ under some interaction strength and boundary conditions. The recursion~(\ref{eq:recurse}) of Lemma~\ref{lem:rec} boils down to $$g(v) = \sum_{v \rightarrow w} \phi (g(w)) ; \;\;\;\; \phi (z) := \log
{ p e^z + 1-p \over {1-p \over q-1} e^z + (1 - {1-p \over q-1})} \, ,$$ where \begin{equation} \label{eq:p} p := e^J / (e^J + (q-1) e^{-J}) \, . \end{equation}
Taking a Taylor expansion to the second order gives $$\phi (z) = \left ( p - {1-p \over q-1} \right ) z + {1-p \over 2 (q-1)^2}
[p(q-1)^2 - (q-1) + (1-p)] z^2 + O(z^3) .$$ To see that the second derivative is positive at 0 for $q > 2$, first take the $q$-derivative of the $z^2$ coefficient which is $[q+2p-3](1-p)/(2(q-1)^3)$. The definition of $p$ and the fact that $J > 0$ imply that $p > 1/q \geq 1/(2(q-1))$. Since $x+1/(x-1) -3 >0$ on $(2,\infty)$ and $2p> 1/(q-1)$, it follows that the $z^2$ coefficient has a positive $q$-derivative for $q \ge 2$, and is therefore positive for all $q > 2$. (This also implies that for $q\in (2-\delta,2)$ for some $\delta$, the function $\phi$ is concave (see~\cite{PP} for a detailed analysis of the critical case $q=2$).)
The Taylor expansion gives $\phi'(0) = p - (1-p)/(q-1)$. Note that $p_0: = (q+1)/(2q)$ satisfies $p_0 - (1 - p_0) / (q-1) = 1/2$. The value of $p_0$ is chosen to make $\phi' (0) = 1/2$; by convexity of $\phi$ near zero, there is an interval $I := (p_0 - \epsilon , p_0)$ such that for $p \in I$, the equation $\phi (z) = z/2$ has a positive solution, call it $z(p)$. Take $\epsilon>0$ so small that $p_0 -\epsilon > 1/q$. For any $1>p > 1/q$ there is a unique $J > 0$ such that~(\ref{eq:p}) holds. If $p \in I$, then $z(p)$ is a fixed point for the function $2 \phi$ and it is easy to see by induction that under + boundary conditions on the binary tree, one will always have $g(v) \geq z(p)$. Thus we have shown that $\Gamma_1$ has a phase transition for any $J$ such that $p \in I$.
To find $\Gamma_2$, we examine the connection between $p_0$ and
$\| {\cal K}_J \|$ where for the rest of the proof, the operator norm refers to the $L^\infty$ norm on the orthogonal complement of the constants. Observe that $$p - {1-p \over q-1} = {e^J \over e^J + (q-1) e^{-J}} - {e^{-J} \over
e^J + (q-1) e^{-J}} = \| {\cal K}_J \|$$
by the computation in Section~\ref{sub:potts}. Thus $p_0$ is chosen to make $\| {\cal K}_J \| = 1/2$ and for any $p \in I$, $\| {\cal K}_J \| < 1/2$. Fix any $J$ so that $p \in I$, and let $\Gamma$ be any tree with $$2 = \textstyle {br} (\Gamma_1) <
\textstyle {br} (\Gamma) < \| {\cal K}_J \|^{-1}.$$ Let $\Gamma'$ be as in Lemma~\ref{lem:robust} and set $\Gamma_2 = \Gamma'$. Then there is no phase transition on $\Gamma_2$ for the chosen parameters, and since we have seen there is a phase transition for $\Gamma_1$, this completes the proof of Theorem~\ref{th:2trees}. $\Box$
\noindent {\bf Acknowledgements.} We thank Richard Askey for discussions and showing us the proof of Lemma~\ref{lem:incr}, J\"{o}ran Bergh, Yuval Peres and Paul Terwilliger for discussions, Anton Wakolbinger for providing us with reference \cite{E} and the referee for a correction and some suggestions.
\noindent \begin{tabbing} enoughs \= fffffffffffffffffffffenoughennnnnnnnnnnnnnn \= \kill \> Robin Pemantle \> Jeffrey E.~Steif \\ \> Department of Mathematics \> Department of Mathematics \\ \> University of Wisconsin-Madison \> Chalmers University of Technology \\ \> Van Vleck Hall \> S--41296 Gothenburg \\ \> 480 Lincoln Drive \> Sweden \\ \> Madison, WI 53706 \> [email protected] \\ \> [email protected] \end{tabbing}
\end{document}
\begin{document}
\newtheorem{theorem}[subsection]{Theorem} \newtheorem{proposition}[subsection]{Proposition} \newtheorem{lemma}[subsection]{Lemma} \newtheorem{corollary}[subsection]{Corollary} \newtheorem{conjecture}[subsection]{Conjecture} \newtheorem{prop}[subsection]{Proposition} \numberwithin{equation}{section}
\newcommand{\half}{\tfrac{1}{2}} \newcommand{\hhalf}{\tfrac{1}{2}}
\newcommand{\sumstar}{\sideset{}{^*}\sum} \newcommand{\sumprime}{\sideset{}{'}\sum} \newcommand{\sumprimeprime}{\sideset{}{''}\sum} \newcommand{\sumflat}{\sideset{}{^{\flat}}\sum} \newcommand{\sumSTAR}{\sideset{}{^{\star}}\sum}
\newcommand{\V}{V\left(\frac{nm}{q^2}\right)} \newcommand{\leg}[2]{\left(\frac{#1}{#2}\right)}
\makeatletter \def\imod#1{\allowbreak\mkern7mu({\operator@font mod}\,\,#1)} \makeatother
\title[Value-distribution of quartic Hecke $L$-functions]{Value-distribution of quartic Hecke $L$-functions}
\date{\today} \author{Peng Gao and Liangyi Zhao}
\begin{abstract} Set $K=\mathbb{Q}(i)$ and suppose that $c\in \mathbb{Z}[i]$ is a square-free algebraic integer with $c\equiv 1 \imod{\langle16\rangle}$. Let $L(s,\chi_{c})$ denote the Hecke $L$-function associated with the quartic residue character modulo $c$. For $\sigma>1/2$, we prove the existence of an asymptotic distribution function $F_{\sigma}$ for the values of the logarithm of \begin{equation*} L_c(s)= L(s,\chi_c)L(s,\overline{\chi}_{c}), \end{equation*}
as $c$ varies. Moreover, the characteristic function of $F_{\sigma}$ is expressed explicitly as a product over the prime ideals of $\mathbb{Z}[i]$. \end{abstract}
\maketitle
\noindent {\bf Mathematics Subject Classification (2010)}: 11M41, 11R42 \newline
\noindent {\bf Keywords}: value-distribution, logarithm of $L$-functions, quartic characters
\section{Introduction} \label{sec1}
Let $d$ be a non-square integer such that $d\equiv 0,1\imod 4$ and $\chi_d=\left(\frac{d}{\cdot}\right)$ be the Kronecker symbol. In the early 1950s, S. Chowla and P. Erd\H{o}s studied the distribution of values of quadratic Dirichlet $L$-functions $L(s, \chi_d)$. They proved in \cite{chowla-erdos} that for $\sigma>3/4$, the limit $$\lim_{x\rightarrow \infty} \frac{\#\{0<d\leq x;~d\equiv 0, 1\imod{4}~{\rm and}~ L(\sigma, \chi_d)\leq z\}}{x/2}=G(z)$$ exists and the distribution function $G(z)$ is continuous and strictly increasing, satisfying $G(0)=0$, $G(\infty)=1$. This result was further strengthened by P. D. T. A. Elliott for $\sigma=1$ in \cite{elliott-0}. \newline
A systematic study of the value-distribution of the logarithm and the logarithmic derivative of $L$-functions on the half-plane $\Re(s)>1/2$ has been carried out by Y. Ihara and K. Matsumoto (see for example \cite{I-M1} and \cite{I-M}). Based on the approach in \cite{I-M}, M. Mourtada and V. K. Murty proved (\cite[Theorem 2]{M-M}), assuming the Generalized Riemann Hypothesis (GRH) for $L(s, \chi_d)$, that for any $\sigma>1/2$, there exists a probability density function $Q_\sigma$ such that \begin{equation*} \lim _{Y\rightarrow\infty} \frac{1}{\#\mathcal{F}(Y)} \# \left\{ d\in \mathcal{F}(Y), \frac {L^{\prime}(\sigma, \chi_d)}{L(\sigma, \chi_d)} \leq z \right\}= \int\limits_{-\infty}^{z} Q_\sigma(t) \mathrm{d} t. \end{equation*} Here $\mathcal{F}(Y)$ denotes the set of the fundamental discriminants in the interval $[-Y, Y]$. \newline
If $d$ is a fundamental discriminant, $L(s, \chi_d)=\zeta_{\mathbb{Q}(\sqrt{d})}(s)/\zeta(s)$, with $\zeta_{\mathbb{Q}(\sqrt{d})}(s)$ denoting the Dedekind zeta function of $\mathbb{Q}(\sqrt{d})$ and $\zeta(s)$ the Riemann zeta function. A. Akbary and A. Hamieh studied an analogue of the above result of Mourtada and Murty. Let $F=\mathbb{Q}(\zeta_3)$ and ${\mathfrak{O}}_F=\mathbb{Z}[\zeta_3]$ be the ring of integers of $F$, where $\zeta_3=\exp(2\pi i/3)$. Let $\mathcal{D}(Y)$ denote the set of square-free elements $d$ such that $d \equiv 1 \pmod 9$ and ${\mathpzc{N}}(d) \leq Y$. Further define $L_{d, F}(s)=\zeta_{F(d^{1/3})}(s)/\zeta(s)$, where $\zeta_{F(d^{1/3})}(s)$ is the Dedekind zeta function of $F(d^{1/3})$. Then Akbary and Hamieh \cite[Theorem 1.4]{AH} proved that, without assuming GRH, for either $\mathcal{L}_{d}(s)=\log L_{d, F}(s)$ or $(L_{d, F}'/L_{d, F})(s)$, there exists a corresponding probability density function $D_\sigma$ such that for every $\sigma>1/2$, \begin{equation*} \lim _{Y\rightarrow\infty} \frac{1}{\#\mathcal{D}(Y)} \# \left\{ d \in \mathcal{D}(Y), \mathcal{L}_{d}(\sigma) \leq z \right\}= \int\limits_{-\infty}^{z} D_{\sigma}(t) \mathrm{d} t. \end{equation*}
We note that $L_{d, F}(s)$ can be decomposed as a product of Hecke $L$-functions. In fact, it is shown in the paragraph below \cite[(2)]{AH} that \begin{equation} \label{Ld} L_{d, F}(s)= L(s,\chi_d)L(s,\overline{\chi}_{d}), \end{equation}
where $L(s,\chi_{d})$ is the Hecke $L$-function associated with the cubic residue symbol $\chi_{d}=\left(\frac{\cdot}{d}\right)_3$. \newline
Motivated by the above result, we consider the value-distribution of the logarithm of the product of quartic Hecke $L$-functions in this paper. Set $K=\mathbb{Q}(i)$ and $\mathcal{O}_K =\mathbb{Z}[i]$, the ring of integers of $K$. Let \begin{align*}
\mathcal{C}:=\left\{c\in \mathcal{O}_K : ~c\neq 1 \text{ is square-free and } c\equiv 1 \imod{\langle 16 \rangle} \right\}. \end{align*} In the same spirit as \eqref{Ld}, we define \begin{equation*}
L_c(s)= L(s,\chi_c)L(s,\overline{\chi}_{c}), \quad \mathcal{L}_{c}(s)=\log L_{c}(s), \end{equation*}
where $L(s,\chi_{c})$ is the Hecke $L$-function associated with the quartic residue symbol $\chi_{c}=\left(\frac{\cdot}{c}\right)_4$. Our result is the following. \begin{theorem}\label{mainthrm} Let $\sigma>1/2$ and \[\mathcal{S}(Y) = \#\left\{{c}\in \mathcal{C}: {\mathpzc{N}}(c)\leq Y \right\}. \] Then there is a smooth density function $M_{\sigma}$ such that \[\lim_{Y\to\infty}\frac{1}{\mathcal{S}(Y)}\#\left\{{c}\in \mathcal{C}: {\mathpzc{N}}(c)\leq Y\;\; \text{and}\;\; \mathcal{L}_{c}(\sigma) \leq z \right\}=\int\limits_{-\infty}^{z}M_{\sigma}(t)\; \mathrm{d} t.\] Furthermore, $M_\sigma$ can be constructed as the inverse Fourier transform of the characteristic function
\begin{equation} \label{phiydef} \varphi_{\sigma}(y)=\exp\left(-2iy\log(1-2^{-\sigma})\right)\prod_{{\mathfrak{p}}\nmid\langle2\rangle}
\left( \frac{1}{{\mathpzc{N}}({\mathfrak{p}})+1}+\frac{1}{4}\frac{{\mathpzc{N}}({\mathfrak{p}})}{{\mathpzc{N}}({\mathfrak{p}})+1}\sum_{j=0}^{3}\exp\left(-2iy\log\left|1-\frac{i^{j}}{{\mathpzc{N}}({\mathfrak{p}})^{\sigma}}\right|\right) \right). \end{equation}
\end{theorem}
Our proof of Theorem~\ref{mainthrm} closely follows the treatment of Theorem 1.3 in \cite{AH}. We shall rely on the following two propositions for the proof of Theorem~\ref{mainthrm}.
\begin{prop}\label{mainprop1} Set \[\mathcal{S}^{*}(Y)=\sum_{c\in\mathcal{C}}\exp \left( -{\mathpzc{N}}(c)/Y \right).\] Fix $\sigma>1/2$ and $y\in\mathbb{R}$. We have \begin{equation*} \lim_{Y\to\infty}\frac{1}{\mathcal{S}^{*}(Y)} \sumSTAR_{{{c}}\in \mathcal{C}}\exp\left(iy\mathcal{L}_{{c}} (\sigma)\right)\exp \left( -{\mathpzc{N}}({c})/Y \right)=\varphi_{\sigma}(y), \end{equation*} where $\varphi_{\sigma}(y)$ is defined in \eqref{phiydef} and henceforth $\sum^{\star}$ indicates that the sum is over $c$ for which $L_c(\sigma)\neq 0$. \end{prop}
\begin{prop}\label{mainprop2}
Let $\delta>0$ be given and $\sigma>1/2$ be fixed. For sufficiently large values of $y$, we have $$\left|\varphi_{\sigma}(y)\right|\leq\exp\left(-C|y|^{1/\sigma-\delta}\right),$$ where $C$ is a positive constant depending on $\delta$ and $\sigma$. \end{prop}
The above two propositions clearly have their cubic analogues in \cite{AH}. Theorem~\ref{mainthrm} follows from the above two propositions by the same arguments as in Section 2 of \cite{AH}. Thus, we shall devote the remainder of the paper to the proofs of Propositions \ref{mainprop1} and \ref{mainprop2}, which are, following some preparatory work in Section~\ref{sec 2}, presented in Sections \ref{pfofProp1} and \ref{sec:good-density}, respectively.
\subsection{Notations} The following notations and conventions are used throughout the paper.\\
$f =O(g)$ or $f \ll g$ means $|f| \leq cg$ for some unspecified positive constant $c$. \newline
$\epsilon$ denotes an arbitrary positive number, whereas $\epsilon_0$ denotes a fixed positive constant. \newline $K=\mathbb{Q}(i)$ and $\mathcal{O}_K$ is its ring of integers. \newline The Gothic letters ${\mathfrak{a}}$, ${\mathfrak{b}}$, $\cdots$ represent ideals of $\mathcal{O}_K$. \newline The norm of an integer $a \in \mathcal{O}_K$ is written as ${\mathpzc{N}}(a)$. The norm of an ideal ${\mathfrak{a}}$ is written as ${\mathpzc{N}}({\mathfrak{a}})$. \newline $\mu_{[i]}$ denotes the M\"obius function on $\mathcal{O}_K$. \newline $\zeta_{K}(s)$ is the Dedekind zeta function for the field $K$.
\section{Preliminaries} \label{sec 2}
\subsection{Distribution and characteristic functions}
A function $F: \mathbb{R} \rightarrow [0, 1]$ is said to be a distribution function if $F$ is non-decreasing, right-continuous with $F(-\infty)=0$ and $F(+\infty)=1$. For example, $$F(z)=\int\limits_{-\infty}^{z} M(t) \mathrm{d} t,$$ where $M(t)$ is non-negative and $\int_{-\infty}^{\infty} M(t) \, \mathrm{d} t=1$. In this case, $M$ is called the density function of $F$. The characteristic function of $F$, $\varphi_{F}(y)$, is the Fourier transform of the measure $\mathrm{d} F(z)$, i.e. $$\varphi_F(y):= \int\limits_{-\infty}^{\infty} e^{iy z} \mathrm{d} F(z).$$ For a more detailed discussion, we refer the reader to \cite{AH} and the references therein.
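As a standard illustration of these notions (not needed in the sequel), the Gaussian density $M(t)=e^{-t^{2}/2}/\sqrt{2\pi}$ yields the distribution function of the normal law, whose characteristic function is \begin{equation*} \varphi_F(y)= \int\limits_{-\infty}^{\infty} e^{iyz} \frac{e^{-z^{2}/2}}{\sqrt{2\pi}} \, \mathrm{d} z = e^{-y^{2}/2}. \end{equation*} A distribution function is uniquely determined by its characteristic function; this is how $M_{\sigma}$ in Theorem~\ref{mainthrm} is recovered from $\varphi_{\sigma}$ in \eqref{phiydef}.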
\subsection{Quartic residue symbol} \label{sec2.0}
The symbol $(\frac{\cdot}{n})_4$ is the quartic residue symbol in the ring $\ensuremath{\mathbb Z}[i]$. For a prime $\varpi \in \ensuremath{\mathbb Z}[i]$ with ${\mathpzc{N}}(\varpi) \neq 2$, the quartic character is defined for $a \in \ensuremath{\mathbb Z}[i]$, $(a, \varpi)=1$ by $\leg{a}{\varpi}_4 \equiv a^{({\mathpzc{N}}(\varpi)-1)/4} \pmod{\varpi}$, with $\leg{a}{\varpi}_4 \in \{
\pm 1, \pm i \}$. When $\varpi | a$, we define $\leg{a}{\varpi}_4 =0$. Then the quartic character can be extended to any composite $n$ with $({\mathpzc{N}}(n), 2)=1$ multiplicatively. We extend the definition of $\leg{\cdot }{n}_4$ to $n=1$ by setting $\leg{\cdot}{1}_4=1$. \newline
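As a small illustration of the definition, take $\varpi=1+2i$, so that ${\mathpzc{N}}(\varpi)=5$, and $a=2$. Then \begin{align*} \leg{2}{1+2i}_4 \equiv 2^{(5-1)/4} = 2 \equiv i \pmod{1+2i}, \end{align*} since $i-2=i(1+2i)$, whence $\leg{2}{1+2i}_4=i$; in particular, $2$ is a quartic (indeed quadratic) non-residue modulo $1+2i$. \newline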
Note that in $\mathbb{Z}[i]$, every ideal co-prime to $2$ has a unique generator congruent to 1 modulo $(1+i)^3$. Such a generator is called primary. Recall that the quartic reciprocity law \cite[Theorem 6.9]{Lemmermeyer} states that for two primary integers $m, n \in \ensuremath{\mathbb Z}[i]$, \begin{align} \label{quarticrec}
\leg{m}{n}_4 = \leg{n}{m}_4(-1)^{(({\mathpzc{N}}(n)-1)/4)(({\mathpzc{N}}(m)-1)/4)}. \end{align}
From the supplement theorem to the quartic reciprocity law (see for example, Lemma 8.2.1 and Theorem 8.2.4 in \cite{BEW}), we have, for primary $n=a+bi$, \begin{align*}
\leg {i}{n}_4=i^{(1-a)/2} \qquad \mbox{and} \qquad \hspace{0.1in} \leg {1+i}{n}_4=i^{(a-b-1-b^2)/4}. \end{align*}
It follows that for any $c \equiv 1 \pmod {16}$, \begin{align*}
\chi_c(i)=\chi_c(1+i)=1. \end{align*}
The above shows that $\chi_c$ is trivial on units, hence it can be regarded as a primitive quartic character of the $\langle c \rangle$-ray class group of $K$ when $c$ is square-free.
\subsection{Evaluation of $\mathcal{S}^*(Y)$ and $\mathcal{S}(Y)$} \label{C}
To evaluate $\mathcal{S}^*(Y)$ and $\mathcal{S}(Y)$, defined in the statements of Proposition~\ref{mainprop1} and Theorem~\ref{mainthrm}, we note the following estimate from \cite[p. 7]{G&Zhao1}. \begin{lemma}\label{lem:luo-lemma} As $Y\to\infty$, we have for $({\mathfrak{a}}, 2)=1$, $$\sum_{\substack{c\in\mathcal{C}\\\gcd(\langle c \rangle,{\mathfrak{a}})=1}}\exp\left(-\frac{{\mathpzc{N}}(c)}{Y}\right)= C_{{\mathfrak{a}}}Y+O_\epsilon(Y^{1/2+\epsilon}{\mathpzc{N}}({\mathfrak{a}})^{\epsilon}),$$
where $$C_{{\mathfrak{a}}}=\frac{{\mathrm{res}}_{s=1}\zeta_{K}(s)}{\left| H_{\langle16\rangle} \right|\zeta_{K}(2)}\prod_{\substack{{\mathfrak{p}}|2{\mathfrak{a}}\\{\mathfrak{p}}\; \text{prime}}} \left(1+{\mathpzc{N}}({\mathfrak{p}})^{-1}\right)^{-1},$$ and $\mathrm{res}_{s=1}\zeta_{K}(s)$ denotes the residue of $\zeta_{K}(s)$ at $s=1$, while $H_{\langle 16\rangle}$ denotes the $\langle 16 \rangle$-ray class group of $K$. \end{lemma}
We deduce from Lemma \ref{lem:luo-lemma}, by setting ${\mathfrak{a}}=\langle1 \rangle$, that as $Y\rightarrow \infty$, \begin{equation} \label{N*}
\mathcal{S}^*(Y)= \sum_{\substack{c\in\mathcal{C}}}\exp\left(-\frac{{\mathpzc{N}}(c)}{Y}\right) \sim \frac{2}{3}\frac{{\mathrm{res}}_{s=1}\zeta_{K}(s)}{\left| H_{\langle16\rangle} \right|\zeta_{K}(2)} Y, \end{equation}
and
\[\mathcal{S}(Y)= \#\left\{c\in\mathcal{C}:{\mathpzc{N}}(c)\leq Y\right\}
\sim \frac{2}{3}\frac{\mathrm{res}_{s=1}\zeta_{K}(s)}{\left| H_{\langle 16\rangle}\right|\zeta_{K}(2)}Y.\]
\subsection{A zero density theorem}
For $c\in \mathcal{C}$, the Hecke $L$-function associated with $\chi_c$ is defined by the Dirichlet series $$L(s, \chi_c)=\sum_{0\neq \mathfrak{a} \subset \mathcal{O}_K } \frac{\chi_c(\mathfrak{a})}{{\mathpzc{N}}(\mathfrak{a})^s}, \; \Re(s)>1 .$$ $L(s, \chi_c)$ can be analytically continued to the entirety of $\mathbb{C}$ and satisfies a functional equation relating its values at $s$ and at $1-s$. We shall need the following zero density theorem for $L(s, \chi_c)$.
\begin{lemma}{\cite[Corollary 1.6]{BGL}}
\label{lem:zer-density}
For $1/2< \sigma \leq 1$, $T\geq 1$ and $c\in\mathcal{C}$, let $N(\sigma,T, c)$
be the number of zeros $\rho =\beta +i\gamma$ of $L(s, \chi_{c})$ in the rectangle $\sigma\leq\beta\leq 1$, $|\gamma|\leq T$. Then \[\sum_{\substack{c\in\mathcal{C}\\{\mathpzc{N}}(c)\leq Y}}N(\sigma,T,c)\ll Y^{g(\sigma)}T^{1+\frac{2-2\sigma}{3-2\sigma}}(YT)^{\varepsilon},\] where \[g(\sigma)=\begin{cases} \displaystyle \frac{8(1-\sigma)}{7-6\sigma},& \frac{1}{2}<\sigma\leq \frac56,\\ \\ \displaystyle \frac{2(10\sigma-7)(1-\sigma)}{24\sigma-12\sigma^2-11}, & \frac56< \sigma\leq1.\end{cases}\]
\end{lemma}
\subsection{The large sieve with quartic symbols and a P\'olya-Vinogradov type inequality}
In the course of the proof of Theorem \ref{mainthrm}, we need the following large sieve type inequality for quartic residue symbols, which is a special case of \cite[Theorem 1.3]{BGL} and an improvement of \cite[Theorem 1.1]{G&Zhao}: \begin{lemma} \label{largesieve} Let $\varepsilon>0$ and let $(b_\alpha)_{\alpha \in \mathcal{O}_K}$ be an arbitrary sequence of complex numbers. Then \begin{equation} \label{eqn:large-sieve}
\sumflat_{\substack{\lambda\equiv1\imod{\langle (1+i)^3 \rangle}\\{\mathpzc{N}}(\lambda)\leq M}}\left| \ \sumflat_{\substack{\alpha\equiv1\imod{\langle (1+i)^3 \rangle}\\
{\mathpzc{N}}(\alpha)\leq N}}b_{\alpha}~ \left(\frac{\alpha}{\lambda}\right)_4 \right|^2\ll_{\epsilon} \left( M+N+(MN)^{2/3} \right)(MN)^{\varepsilon}
\sumflat_{{\mathpzc{N}}(\alpha)\leq N}|b_{\alpha}|^{2}, \end{equation} where $\sum^{\flat}$ means that the summation runs over the square-free elements of $\mathcal{O}_K$. \end{lemma}
We shall also need the following P\'olya-Vinogradov type inequality for $\mathfrak{f}$-ray class characters of $K$. \begin{lemma}{\cite[Lemma 3.1]{G&Zhao}} \label{H-P} Let $K=\mathbb{Q}(i)$ and $\chi$ be a non-trivial character (not necessarily primitive) of the $\mathfrak{f}$-ray class group of $K$. Then for $Y>1$ and $\varepsilon>0$, we have \begin{equation}\label{eqn:polya}\sum_{a\equiv 1 \imod{\langle (1+i)^3 \rangle}}\chi(a)\exp \left( -\frac{{\mathpzc{N}}(a)}{Y} \right) \ll_{\epsilon}{\mathpzc{N}}(\mathfrak{f})^{1/2+\varepsilon}. \end{equation} \end{lemma}
\subsection{A Dirichlet series representation for $\exp\left(iy\mathcal{L}_{c} (s)\right)$}
For $u \in \mathbb{C}$ and any non-negative integer $r$, we define the function $H_{r}(u)$ by $H_{0}(u)=1$ and for $r\geq1$, \[H_{r}(u)=\frac{1}{r!}u(u+1)\cdots(u+r-1).\]
This implies that \begin{equation} \label{H}
\exp\left(-u\log(1-t)\right)=\sum_{r=0}^{\infty}H_{r}(u)t^{r},\quad\text{for } |t|<1. \end{equation}
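For instance, $H_{1}(u)=u$ and $H_{2}(u)=u(u+1)/2$, in accordance with the expansion
\[
(1-t)^{-u}=1+ut+\frac{u(u+1)}{2}t^{2}+\cdots;
\]
in general, $H_{r}(u)=\binom{u+r-1}{r}$ is the coefficient of $t^{r}$ in $(1-t)^{-u}$.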
We further define the arithmetic function $\lambda_{y}({\mathfrak{a}})$ on the integral ideals of $K$ as follows:
\begin{equation} \label{lambdadef}
\lambda_y({\mathfrak{a}})=\prod_{\mathfrak{p}} \lambda_y({\mathfrak{p}}^{\alpha_\mathfrak{p}})\quad\text{ and}\quad\lambda_y(\mathfrak{p} ^{\alpha_\mathfrak{p}} ) =H_{\alpha_\mathfrak{p}} \left(iy \right). \end{equation}
In analogy with \cite[Lemma 4.1]{AH}, the following lemma gives a Dirichlet series representation for $\exp\left(iy\mathcal{L}_{c} (s)\right)$. \begin{lemma}\label{lem:exp(iyLc)} Let $y\in\mathbb{R}$ and $s\in\mathbb{C}$ with $\Re(s)>1$. Then \[ \exp\left(iy\mathcal{L}_{c}(s)\right)=\sum_{\substack{{\mathfrak{a}},{\mathfrak{b}}\subset\mathcal{O}_K \\ {\mathfrak{a}},{\mathfrak{b}} \neq 0}}\frac{\lambda_{y}({\mathfrak{a}})\lambda_{y}({\mathfrak{b}})\chi_{c}({\mathfrak{a}}{\mathfrak{b}}^{3})}{{\mathpzc{N}}({\mathfrak{a}}{\mathfrak{b}})^{s}}, \] where $\lambda_{y}$ is given in \eqref{lambdadef}. Moreover, the above series is absolutely convergent. \end{lemma}
We omit the proof of Lemma \ref{lem:exp(iyLc)} as it is similar to the proof of \cite[Lemma 4.1]{AH}. Moreover, it is shown in \cite[p. 92]{I-M} that for any $\varepsilon, R>0$ and all $|y|\leq R$, we have \begin{equation}\label{eqn:lambda-bound} \lambda_{y}({\mathfrak{a}})\ll_{\varepsilon,R}{\mathpzc{N}}({\mathfrak{a}})^{\varepsilon}. \end{equation}
\section{PROOF OF PROPOSITION \ref{mainprop1}} \label{pfofProp1}
To establish Proposition~\ref{mainprop1}, we first prove, in the next three sections, Proposition~\ref{newprop}, which gives a Dirichlet series representation (see \eqref{M}) for the limit in Proposition~\ref{mainprop1}. Then in Section \ref{sec:prod}, we prove Proposition \ref{lem:euler}, which renders a product representation for the aforementioned Dirichlet series, showing that it coincides with $\varphi_{\sigma}(y)$ defined in \eqref{phiydef}. \newline
\begin{prop} \label{newprop} Fix $\sigma=1/2+\varepsilon_0$ for some $\varepsilon_0>0$. Then for all $y\in\mathbb{R}$ we have \begin{equation*} \label{main-limit} \lim_{Y\to\infty}\frac{1}{\mathcal{N}^{*}(Y)} \sumSTAR_{c\in \mathcal{C}}\exp\left(iy\mathcal{L}_{c} (\sigma)\right)\exp(-{\mathpzc{N}}(c)/Y)=\widetilde{M}_{\sigma}(y). \end{equation*} Here $\widetilde{M}_{\sigma}(y)$ is given by the following absolutely convergent Dirichlet series \begin{align} \label{M} \widetilde{M}_{\sigma}(y)=\sum_{r_{1},r_{2}\geq0}\frac{\lambda_{y}(\langle1+i\rangle^{r_1})\lambda_{y}(\langle1+i\rangle^{r_2})}{2^{(r_1+r_{2})\sigma}} \sum_{\substack{{\mathfrak{a}},{\mathfrak{b}},{\mathfrak{m}}\subset\mathcal{O}_K\\\gcd({\mathfrak{a}}{\mathfrak{b}}{\mathfrak{m}},\langle2\rangle)=1\\\gcd({\mathfrak{a}},{\mathfrak{b}})=1}}
\frac{\lambda_{y}({\mathfrak{a}}^4{\mathfrak{m}})\lambda_{y}({\mathfrak{b}}^4{\mathfrak{m}})}{{\mathpzc{N}}({\mathfrak{a}}^4{\mathfrak{b}}^4{\mathfrak{m}}^2)^{\sigma}\displaystyle{\prod_{\substack{{\mathfrak{p}}|{\mathfrak{a}}{\mathfrak{b}}{\mathfrak{m}}\\{\mathfrak{p}}\; \text{prime}}} \left(1+{\mathpzc{N}}({\mathfrak{p}})^{-1}\right)}}. \end{align} \end{prop}
\subsection{Application of the zero density estimate}\label{sec:zero-density} Let $A>0$ be fixed and $R_{Y, \varepsilon, A}$ be the rectangle with the vertices $1\pm i(\log{Y})^A$ and $(1+\varepsilon)/2 \pm i(\log{Y})^A$. Let $\mathcal{Z}^c$ be the set consisting of $c \in \mathcal{C}$ such that $L(s, \chi_c)$ does not vanish in $R_{Y, \varepsilon, A}$. Also, set $\mathcal{Z} = \mathcal{C} \setminus \mathcal{Z}^c$. Note that $\mathcal{Z}$ and $\mathcal{Z}^c$ vary with $Y$, $\varepsilon$, and $A$. \newline
Using arguments similar to those in the proof of \cite[Lemma 4.3]{AH}, we see that for $\sigma> 1/2$ as fixed in Proposition \ref{newprop} and a sufficiently small $\varepsilon>0$, we have \begin{equation} \label{main1} \sumSTAR_{c\in \mathcal{C}}\exp\left(iy\mathcal{L}_{c} (\sigma)\right)\exp(-{\mathpzc{N}}(c)/Y)= \sum_{c\in \mathcal{Z}^c} \exp\left(iy\mathcal{L}_{c} (\sigma)\right)\exp(-{\mathpzc{N}}(c)/Y)+ O(Y^\delta). \end{equation}
Furthermore, for $c\in {\mathcal{Z}}^c$, $y\in\mathbb{R}$, and $1/2<\sigma\leq1$, we use Lemma \ref{lem:exp(iyLc)} and arguments similar to those in the proof of \cite[Lemma 4.4]{AH} to derive the following lemma, which represents $\exp\left(iy\mathcal{L}_{c} (\sigma)\right)$ as the sum of an infinite series and a certain contour integral. \begin{lemma} \label{rep}
Let $\varepsilon>0$ be given with $\sigma\geq 1/2+\varepsilon$. Suppose that $\sigma\leq1$. If $c\in \mathcal{Z}^c$, then $$\exp\left(iy\mathcal{L}_{c} (\sigma)\right)= \sum_{\substack{{\mathfrak{a}},{\mathfrak{b}}\subset\mathcal{O}_K \\ {\mathfrak{a}},{\mathfrak{b}} \neq 0}}\frac{\lambda_{y}({\mathfrak{a}})\lambda_{y}({\mathfrak{b}})\chi_{c}({\mathfrak{a}}{\mathfrak{b}}^{3})}{{\mathpzc{N}}({\mathfrak{a}})^{{\sigma}}{\mathpzc{N}}({\mathfrak{b}})^{\sigma}}\exp\left(-\frac{{\mathpzc{N}}({\mathfrak{a}}{\mathfrak{b}})}{X}\right)-\frac{1}{2\pi i} \int\limits_{L_{Y, \epsilon, A}} \exp\left(iy\mathcal{L}_{c} (\sigma+u)\right)\Gamma(u) X^u \mathrm{d} u, $$ where $L_{Y, \epsilon, A}$ is the contour that connects, by straight line segments, the points $(1-\sigma+\varepsilon/2) + i \infty$, $(1-\sigma+\varepsilon/2) + i (\log Y)^A$, $- \varepsilon/2 + i (\log Y)^A$, $-\varepsilon/2 - i(\log Y)^A$, $(1-\sigma+\varepsilon/2) - i (\log Y)^A$ and $(1-\sigma+\varepsilon/2) - i \infty$ . \end{lemma}
Inserting the above lemma into \eqref{main1}, we get, for $\sigma\leq1$, \begin{equation} \label{main2} \sumSTAR_{c\in \mathcal{C}}\exp\left(iy\mathcal{L}_{c} (\sigma)\right)\exp(-{\mathpzc{N}}(c)/Y)= (I)-(II)+(III)+ O(Y^\delta), \end{equation} where \begin{equation} \label{one} (I)=\sum _{c\in \mathcal{C}} \left ( \sum_{\substack{{\mathfrak{a}},{\mathfrak{b}}\subset\mathcal{O}_K \\ {\mathfrak{a}},{\mathfrak{b}} \neq 0}}\frac{{\lambda}_{y}({\mathfrak{a}})\lambda_{y}({\mathfrak{b}})\chi_{c}({\mathfrak{a}}{\mathfrak{b}}^{3})}{{\mathpzc{N}}({\mathfrak{a}})^{{\sigma}}{\mathpzc{N}}({\mathfrak{b}})^{\sigma}}\exp\left(-\frac{{\mathpzc{N}}({\mathfrak{a}}{\mathfrak{b}})}{X}\right)\right) \exp\left(-\frac{{\mathpzc{N}}(c)}{Y}\right), \end{equation} \begin{equation*}
(II)= \sum _{c\in \mathcal{Z}} \left ( \sum_{\substack{{\mathfrak{a}},{\mathfrak{b}}\subset\mathcal{O}_K \\ {\mathfrak{a}},{\mathfrak{b}} \neq 0}}\frac{{\lambda}_{y}({\mathfrak{a}})\lambda_{y}({\mathfrak{b}})\chi_{c}({\mathfrak{a}}{\mathfrak{b}}^{3})}{{\mathpzc{N}}({\mathfrak{a}})^{{\sigma}}{\mathpzc{N}}({\mathfrak{b}})^{\sigma}}\exp\left(-\frac{{\mathpzc{N}}({\mathfrak{a}}{\mathfrak{b}})}{X}\right)\right) \exp\left(-\frac{{\mathpzc{N}}(c)}{Y}\right), \end{equation*} \begin{equation*}
(III)= \sum _{c\in \mathcal{Z}^c} \left ( -\frac{1}{2\pi i} \int\limits_{L_{Y, \epsilon, A}} \exp\left(iy\mathcal{L}_{c} (\sigma+u)\right)\Gamma(u) X^u \mathrm{d} u \right) \exp\left(-\frac{{\mathpzc{N}}(c)}{Y}\right). \end{equation*}
Letting $A>1$ and $X=Y^\eta$ for $\eta>0$, and using arguments analogous to those in \cite{AH}, we deduce that \begin{equation} \label{two-estimate} (II)+(III)\ll Y^\delta X^{1-\sigma+\varepsilon}+Y X^{-\varepsilon/2} . \end{equation}
\subsection{Evaluation of (I)}\label{sec:mean-value}
It still remains to prove an asymptotic formula for $(I)$ given in \eqref{one}. \begin{lemma}\label{lem:luo-calcs} Set \begin{equation} \label{Cdef}
\widetilde{C}_{\sigma}(y)=\frac{2}{3}\frac{{\mathrm{res}}_{s=1}\zeta_{K}(s)}{\left| H_{\langle16\rangle} \right|\zeta_{K}(2)}\widetilde{M}_{\sigma}(y), \end{equation} with $\widetilde{M}_{\sigma}(y)$ defined in \eqref{M}. Then \begin{equation*} (I)=\widetilde{C}_{\sigma}(y)Y+O\left( YX^{\varepsilon-\varepsilon_0}+Y^{1/2+\varepsilon}+Y^{1/2+2\varepsilon}X^{5/4-\varepsilon_{0}+4\varepsilon} \right), \end{equation*} for any sufficiently small $\varepsilon>0$. \end{lemma} \begin{proof} Starting with \eqref{one}, \begin{align*}(I)&=\sum_{\substack{{\mathfrak{a}},{\mathfrak{b}}\subset\mathcal{O}_K \\ {\mathfrak{a}},{\mathfrak{b}} \neq 0}}\frac{\lambda_{y}({\mathfrak{a}})\lambda_{y}({\mathfrak{b}})}{{\mathpzc{N}}({\mathfrak{a}}{\mathfrak{b}})^{\sigma}}\exp\left(-\frac{{\mathpzc{N}}({\mathfrak{a}}{\mathfrak{b}})}{X}\right)\sum_{c\in\mathcal{C}}\chi_{c}({\mathfrak{a}}{\mathfrak{b}}^{3})\exp\left(-\frac{{\mathpzc{N}}(c)}{Y}\right)\\& =\sum_{r_{1},r_{2}\geq0}\frac{\lambda_{y}(\langle1+i\rangle^{r_{1}})\lambda_{y}(\langle1+i\rangle^{r_2})}{2^{r_1\sigma+r_2\sigma}}\sum_{\substack{{\mathfrak{a}},{\mathfrak{b}}\subset\mathcal{O}_K\\\gcd({\mathfrak{a}}{\mathfrak{b}},\langle2\rangle)=1}}\frac{\lambda_{y}({\mathfrak{a}})\lambda_{y}({\mathfrak{b}})}{{\mathpzc{N}}({\mathfrak{a}}{\mathfrak{b}})^{\sigma}}\exp\left(-\frac{2^{r_{1}+r_{2}}{\mathpzc{N}}({\mathfrak{a}}{\mathfrak{b}})}{X}\right) \sum_{c\in\mathcal{C}}\chi_{c}({\mathfrak{a}}{\mathfrak{b}}^{3})\exp\left(-\frac{{\mathpzc{N}}(c)}{Y}\right).
\end{align*} Rearranging the above sum over ${\mathfrak{a}}$ and ${\mathfrak{b}}$ with ${\mathfrak{m}}=\gcd({\mathfrak{a}},{\mathfrak{b}})$ and recalling that $\chi_c({\mathfrak{m}}^4)=1$ if ${\mathfrak{m}}$ is prime to $\langle c\rangle$, we obtain \begin{equation} \label{I} \begin{split} (I)&=\sum_{r_{1},r_{2}\geq0}\frac{\lambda_{y}(\langle1+i \rangle^{r_1})\lambda_{y}(\langle1+i\rangle^{r_2})}{2^{r_1\sigma+r_2\sigma}}\sum_{\substack{{\mathfrak{a}},{\mathfrak{b}},{\mathfrak{m}}\subset\mathcal{O}_K\\\gcd({\mathfrak{a}}{\mathfrak{b}}{\mathfrak{m}},\langle2\rangle)=1\\\gcd({\mathfrak{a}},{\mathfrak{b}})=1}}\frac{\lambda_{y}({\mathfrak{a}}{\mathfrak{m}})\lambda_{y}({\mathfrak{b}}{\mathfrak{m}})}{{\mathpzc{N}}({\mathfrak{a}}{\mathfrak{b}})^{\sigma}{\mathpzc{N}}({\mathfrak{m}})^{2\sigma}}\exp\left(-\frac{2^{r_{1}+r_{2}}{\mathpzc{N}}({\mathfrak{a}}{\mathfrak{b}}{\mathfrak{m}}^2)}{X}\right)\\& \hspace{4in}\times\sum_{\substack{c\in\mathcal{C}\\\gcd(\langle c\rangle,{\mathfrak{m}})=1}}\chi_{c}({\mathfrak{a}}{\mathfrak{b}}^{3})\exp\left(-\frac{{\mathpzc{N}}(c)}{Y}\right). \end{split} \end{equation}
The part of \eqref{I} contributed by the fourth powers is \[ \sum_{r_{1},r_{2}\geq0}\frac{\lambda_{y}(\langle1+i\rangle^{r_1})\lambda_{y}(\langle1+i\rangle^{r_2})}{2^{r_1\sigma+r_2\sigma}} \sum_{\substack{{\mathfrak{a}},{\mathfrak{b}},{\mathfrak{m}}\subset\mathcal{O}_K\\\gcd({\mathfrak{a}}{\mathfrak{b}}{\mathfrak{m}},\langle2\rangle)=1\\\gcd({\mathfrak{a}},{\mathfrak{b}})=1}}\frac{\lambda_{y}({\mathfrak{a}}^4{\mathfrak{m}})\lambda_{y}({\mathfrak{b}}^4{\mathfrak{m}})} {{\mathpzc{N}}({\mathfrak{a}}{\mathfrak{b}})^{4\sigma}{\mathpzc{N}}({\mathfrak{m}})^{2\sigma}}\exp\left(-\frac{2^{r_{1}+r_{2}}{\mathpzc{N}}({\mathfrak{a}}^4{\mathfrak{b}}^4{\mathfrak{m}}^2)}{X}\right) \sum_{\substack{c\in\mathcal{C}\\\gcd(\langle c \rangle,{\mathfrak{a}}{\mathfrak{b}}{\mathfrak{m}})=1}}\exp\left(-\frac{{\mathpzc{N}}(c)}{Y}\right). \] Utilizing Lemma \ref{lem:luo-lemma} and the estimate \eqref{eqn:lambda-bound} for $\lambda_y$, the above expression can be estimated by \begin{align} \label{4powercontrib} Y\sum_{r_{1},r_{2}\geq0}\frac{\lambda_{y}(\langle1+i\rangle^{r_1})\lambda_{y}(\langle1+i\rangle^{r_2})}{2^{r_1\sigma+r_2\sigma}} \sum_{\substack{{\mathfrak{a}},{\mathfrak{b}},{\mathfrak{m}}\subset\mathcal{O}_K\\\gcd({\mathfrak{a}}{\mathfrak{b}}{\mathfrak{m}},\langle2\rangle)=1\\\gcd({\mathfrak{a}},{\mathfrak{b}})=1}} \frac{C_{{\mathfrak{a}}{\mathfrak{b}}{\mathfrak{m}}}\lambda_{y}({\mathfrak{a}}^4{\mathfrak{m}})\lambda_{y}({\mathfrak{b}}^4{\mathfrak{m}})}{{\mathpzc{N}}({\mathfrak{a}}{\mathfrak{b}})^{4\sigma}{\mathpzc{N}}({\mathfrak{m}})^{2\sigma}}\exp\left(-\frac{2^{r_{1}+r_{2}} {\mathpzc{N}}({\mathfrak{a}}^4{\mathfrak{b}}^4{\mathfrak{m}}^2)}{X}\right), \end{align} with an error that is $O\left( Y^{1/2+\varepsilon} \right)$. Here $C_{{\mathfrak{a}}{\mathfrak{b}}{\mathfrak{m}}}$ is defined in Lemma \ref{lem:luo-lemma}. Choose $\varepsilon$ small enough so that $0<\varepsilon < \varepsilon_0$.
Inserting the formula $$\exp\left(-\frac{2^{r_{1}+r_{2}}{\mathpzc{N}}({\mathfrak{a}}^4{\mathfrak{b}}^4{\mathfrak{m}}^2)}{X}\right)=\frac 1{2\pi i} \int\limits_{(1)}\Gamma(u)\left(\frac{2^{r_{1}+r_{2}}{\mathpzc{N}}({\mathfrak{a}}^4{\mathfrak{b}}^4{\mathfrak{m}}^2)}{X}\right)^{-u}\; \mathrm{d} u$$ into \eqref{4powercontrib} and shifting the line of integration to $\Re(u)=\varepsilon-\varepsilon_0$, we conclude that the contribution of fourth powers to $(I)$ is \begin{align}\label{llast}Y\sum_{r_{1},r_{2}\geq0}\frac{\lambda_{y}(\langle1+i\rangle^{r_1})\lambda_{y}(\langle1+i\rangle^{r_2})}{2^{r_1\sigma+r_2\sigma}}\sum_{\substack{{\mathfrak{a}},{\mathfrak{b}},{\mathfrak{m}}\subset\mathcal{O}_K\\\gcd({\mathfrak{a}}{\mathfrak{b}}{\mathfrak{m}},\langle2\rangle)=1\\\gcd({\mathfrak{a}},{\mathfrak{b}})=1}}\frac{C_{{\mathfrak{a}}{\mathfrak{b}}{\mathfrak{m}}}\lambda_{y}({\mathfrak{a}}^4{\mathfrak{m}})\lambda_{y}({\mathfrak{b}}^4{\mathfrak{m}})}{{\mathpzc{N}}({\mathfrak{a}}{\mathfrak{b}})^{4\sigma}{\mathpzc{N}}({\mathfrak{m}})^{2\sigma}}+O\left( YX^{\varepsilon-\varepsilon_0} + Y^{1/2+\varepsilon} \right).\end{align} Note that the coefficient of $Y$ in \eqref{llast} agrees with $\widetilde{C}_\sigma(y)$ given by \eqref{Cdef}. \newline
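For clarity, we record the standard Mellin identity underlying this step: for $x>0$,
\[
\exp(-x)=\frac{1}{2\pi i}\int\limits_{(1)}\Gamma(u)\,x^{-u}\,\mathrm{d} u.
\]
Moving the line of integration from $\Re(u)=1$ to $\Re(u)=\varepsilon-\varepsilon_0$ crosses only the simple pole of $\Gamma(u)$ at $u=0$, whose residue $1$ produces the main term in \eqref{llast}; on the new line, each term of \eqref{4powercontrib} gains a factor $\ll X^{\varepsilon-\varepsilon_{0}}\left(2^{r_1+r_2}{\mathpzc{N}}({\mathfrak{a}}^4{\mathfrak{b}}^4{\mathfrak{m}}^2)\right)^{\varepsilon_{0}-\varepsilon}$, and by \eqref{eqn:lambda-bound} the resulting series still converges, which yields the error term $O\left(YX^{\varepsilon-\varepsilon_{0}}\right)$.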
To estimate the contribution of non-fourth powers to $(I)$, we write ${\mathfrak{a}}=\langle a\rangle$ and ${\mathfrak{b}}=\langle b\rangle$ with $a,\;b\in\mathbb{Z}[i]$ being primary for ideals ${\mathfrak{a}}$ and ${\mathfrak{b}}$ with $\gcd({\mathfrak{a}}{\mathfrak{b}},\langle2\rangle)=1$. We then have, by the quartic reciprocity law \eqref{quarticrec}, \begin{equation} \label{preRS} \begin{split} \sum_{c\in \mathcal{C}}\chi_{c}({\mathfrak{a}}{\mathfrak{b}}^3)\exp\left(-\frac{{\mathpzc{N}}(c)}{Y}\right) &=\sum_{c\equiv1\imod{\langle 16 \rangle}}\chi_{c}({\mathfrak{a}}{\mathfrak{b}}^3)\exp\left(-\frac{{\mathpzc{N}}(c)}{Y}\right)
\sum_{\substack{d^{2}|c\\d\equiv1\imod{\langle(1+i)^3\rangle}}}\mu_{[i]}(d) \\ &=\sum_{c\equiv1\imod{\langle16\rangle}}\chi_{ab^3}(c)\exp\left(-\frac{{\mathpzc{N}}(c)}{Y}\right)
\sum_{\substack{d^{2}|c\\d\equiv1\imod{\langle (1+i)^3\rangle}}}\mu_{[i]}(d), \end{split} \end{equation} where we use $\mu_{[i]}$, the M\"obius function in $\mathbb{Z}[i]$, to detect the square-free condition on $c$. Here $\mu_{[i]}(d)$ for any $d \in \ensuremath{\mathbb Z}[i]$ is defined to be $1$ if the ideal generated by $d$ equals $\ensuremath{\mathbb Z}[i]$ and to be $(-1)^r$ if the ideal generated by $d$ equals a product of $r$ distinct prime ideals. For other values of $d$, $\mu_{[i]}(d)$ is defined to be $0$. \newline
Now we split the last expression in \eqref{preRS} into two parts to get \begin{align*}\sum_{c\in \mathcal{C}}\chi_{c}({\mathfrak{a}}{\mathfrak{b}}^3)\exp\left(-\frac{{\mathpzc{N}}(c)}{Y}\right) &=R+S, \end{align*}
where \begin{align*} R &=\sum_{\substack{d\equiv1\imod{\langle (1+i)^3 \rangle}\\{\mathpzc{N}}(d)\leq B}}\mu_{[i]}(d)\chi_{ab^3}(d^2)\sum_{c\equiv\overline{d}^2\imod{\langle 16 \rangle}} \chi_{ab^3}(c)\exp\left(-\frac{{\mathpzc{N}}(d^2c)}{Y}\right), \\
S &=\sum_{d_1\equiv1\imod{\langle(1+i)^3\rangle}}\chi_{ab^3}(d_{1}^2)\sum_{\substack{d|d_1\\d\equiv1\imod{\langle(1+i)^3\rangle}\\{\mathpzc{N}}(d)>B}}\mu_{[i]}(d) \sumflat_{c\equiv\overline{d}_1^2\imod{\langle16\rangle}}\chi_{ab^3}(c)\exp\left(-\frac{{\mathpzc{N}}(d_1^2c)}{Y}\right). \end{align*} Here $B$ is a parameter to be optimized later, $\overline{d}$ (respectively $\overline{d}_1$) is the multiplicative inverse of $d$ (respectively $d_1$) modulo $\langle 16 \rangle$ and $\sum^{\flat}$ denotes summation over square-free elements of $\mathcal{O}_K$. \newline
We have \begin{align*} R&=\sum_{\substack{d\equiv1\imod{\langle(1+i)^3\rangle}\\{\mathpzc{N}}(d)\leq B}}\mu_{[i]}(d)\chi_{ab^3}(d^2)\sum_{c\equiv\overline{d}^2\imod{\langle16\rangle}} \chi_{ab^3}(c)\exp\left(-\frac{{\mathpzc{N}}(d^2c)}{Y}\right)\\
&=\frac{1}{\left|H_{\langle16\rangle}\right|}\sum_{\substack{d\equiv1\imod{\langle(1+i)^3\rangle}\\{\mathpzc{N}}(d)\leq B}}\mu_{[i]}(d)\chi_{ab^3}(d^2) \sum_{c\equiv1\imod{\langle(1+i)^3\rangle}}\chi_{ab^3}(c)\sum_{\chi\imod{\langle16\rangle}}\chi(d^2c)\exp\left(-\frac{{\mathpzc{N}}(d^2c)}{Y}\right)\\
&=\frac{1}{\left|H_{\langle16\rangle}\right|}\sum_{\chi\imod{\langle16\rangle}}\sum_{\substack{d\equiv1\imod{\langle(1+i)^3 \rangle}\\{\mathpzc{N}}(d)\leq B}}\mu_{[i]}(d)\chi_{ab^3}\chi(d^2)\sum_{c\equiv1\imod{\langle(1+i)^3\rangle}}\chi_{ab^3}\chi(c)\exp\left(-\frac{{\mathpzc{N}}(d^2c)}{Y}\right). \end{align*} The bound in \eqref{eqn:polya} gives \begin{equation*} R\ll B{\mathpzc{N}}(ab^3)^{1/2+\varepsilon}. \end{equation*} Therefore, the summands in \eqref{I} involving $R$ can be majorized by \begin{align*}
&\ll B\sum_{r_{1},r_{2}}\frac{\left|\lambda_{y}(\langle1+i\rangle^{r_{1}})\right|\left|\lambda_{y}(\langle1+i\rangle^{r_{2}})\right|}{2^{r_{1}\sigma+r_{2}\sigma}}
\sum_{{\mathfrak{a}},{\mathfrak{b}},{\mathfrak{m}}}\frac{{\mathpzc{N}}({\mathfrak{a}}{\mathfrak{b}}^{3})^{1/2+\varepsilon}\left|\lambda_{y}({\mathfrak{a}}{\mathfrak{m}})\right|\left|\lambda_{y}({\mathfrak{b}}{\mathfrak{m}})\right|}{{\mathpzc{N}}({\mathfrak{a}})^{\sigma} {\mathpzc{N}}({\mathfrak{b}})^{\sigma}{\mathpzc{N}}({\mathfrak{m}})^{2\sigma}}\exp\left(-\frac{2^{r_{1}+r_{2}}{\mathpzc{N}}({\mathfrak{a}}{\mathfrak{b}}{\mathfrak{m}}^{2})}{X}\right)\\ &\ll B\sum_{{\mathfrak{a}},{\mathfrak{b}}}{\mathpzc{N}}({\mathfrak{a}})^{-\sigma+1/2+2\varepsilon}{\mathpzc{N}}({\mathfrak{b}})^{-\sigma+3/2+4\varepsilon}\exp\left(-\frac{{\mathpzc{N}}({\mathfrak{a}}{\mathfrak{b}})}{X}\right). \end{align*} With a change of variables, the last expression is recast as
\[ B\sum_{{\mathfrak{a}}}{\mathpzc{N}}({\mathfrak{a}})^{-\sigma+1/2+2\varepsilon}\sum_{{\mathfrak{b}}|{\mathfrak{a}}}{\mathpzc{N}}({\mathfrak{b}})^{1+2\varepsilon}\exp\left(-\frac{{\mathpzc{N}}({\mathfrak{a}})}{X}\right) \ll B\sum_{{\mathfrak{a}}}{\mathpzc{N}}({\mathfrak{a}})^{-\sigma+3/2+5\varepsilon}\exp\left(-\frac{{\mathpzc{N}}({\mathfrak{a}})}{X}\right)\ll BX^{5/2-\sigma+6\varepsilon}. \]
Next, the summands in \eqref{I} involving $S$ are precisely \begin{align*}
&\sum_{r_{1},r_{2}\geq 0}\frac{\lambda_{y}(\langle1+i\rangle^{r_{1}})\lambda_{y}(\langle1+i\rangle^{r_{2}})}{2^{r_{1}\sigma+r_{2}\sigma}} \sum_{\substack{a\equiv1\imod{\langle(1+i)^3\rangle}\\b\equiv1\imod{\langle(1+i)^3\rangle}\\m\equiv1\imod{\langle(1+i)^3\rangle} \\(\langle a\rangle,\langle b\rangle)=1}}\frac{\lambda_{y}(\langle am\rangle)\lambda_{y}(\langle bm\rangle)}{{\mathpzc{N}}(a)^{\sigma}{\mathpzc{N}}(b)^{\sigma} {\mathpzc{N}}(m)^{2\sigma}}\exp\left(-\frac{2^{r_{1}+r_{2}}{\mathpzc{N}}(abm^2)}{X}\right)\\
&\hspace{1in}\times\sum_{d_1\equiv1\imod{\langle(1+i)^3\rangle}}\chi_{ab^3}(d_{1}^2)\sum_{\substack{d|d_1\\d\equiv1\imod{\langle(1+i)^3\rangle}\\ {\mathpzc{N}}(d)>B}}\mu_{[i]}(d)\sumflat_{c\equiv\overline{d_1}^2\imod{\langle16\rangle}}\chi_{ab^3}(c)\exp\left(-\frac{{\mathpzc{N}}(d_1^2c)}{Y}\right).\nonumber \end{align*} Now we rewrite $a$ as $a=a_{1}a_{2}^{2}$, where $a_{1}$ is the square-free part of $a$. Also note that we may assume that \[ {\mathpzc{N}}(a_1)\ll \frac{X^{1+\varepsilon}}{{\mathpzc{N}}(a_2^2b)^{1+\varepsilon}}, \; {\mathpzc{N}}(c)\ll\frac{Y^{1+\varepsilon}}{{\mathpzc{N}}(d_{1}^2)^{1+\varepsilon}} \] and $B<{\mathpzc{N}}(d_1)<\sqrt{Y}$. Proceeding in a manner similar to the treatment of $S$ in \cite{AH} by using the Cauchy--Schwarz inequality and the large sieve inequality \eqref{eqn:large-sieve}, we see that the contribution of $S$ to \eqref{I} is \begin{equation*} \ll (XY)^{1/2+2\varepsilon}+Y^{1+2\varepsilon}\frac{X^{1-\sigma+\varepsilon}}{B}+Y^{5/6+3\varepsilon/2}\frac{X^{1-\sigma+\varepsilon}}{B^{2/3}}. \end{equation*} The combined contribution of $R$ and $S$ to $(I)$ is \begin{equation*} \ll BX^{5/2-\sigma+6\varepsilon}+(XY)^{1/2+2\varepsilon}+Y^{1+2\varepsilon}\frac{X^{1-\sigma+\varepsilon}}{B}+Y^{5/6+3\varepsilon/2}\frac{X^{1-\sigma+\varepsilon}}{B^{2/3}}. \end{equation*} Upon taking $B=Y^{1/2+\varepsilon}X^{-3/4-5\varepsilon/2}$, the above is \begin{equation*} \ll Y^{1/2+2\varepsilon}X^{7/4-\sigma+4\varepsilon}. \end{equation*} Combining this estimate with \eqref{llast}, we obtain
\begin{equation}\label{one-estimate} (I)=\widetilde{C}_{\sigma}(y)Y+O(YX^{\varepsilon-\varepsilon_0}+Y^{1/2+\varepsilon}+Y^{1/2+2\varepsilon}X^{5/4-\varepsilon_{0}+4\varepsilon}) \end{equation} and complete the proof of the lemma. \end{proof}
\subsection{Proof of Proposition \ref{newprop}} \begin{proof}
As the treatment for $\sigma>1$ is analogous to the one given in the proof of \cite[Proposition 4.2]{AH}, we may assume that $\sigma\leq1$. Inserting \eqref{two-estimate} and \eqref{one-estimate} into \eqref{main2} yields
\begin{equation*} \sumSTAR_{c\in \mathcal{C}}\exp\left(iy\mathcal{L}_{c} (\sigma)\right)\exp(-{\mathpzc{N}}(c)/Y)=\widetilde{C}_{\sigma}(y) Y+O\left(YX^{\epsilon-\epsilon_0}+Y^{\frac{1}{2}+\epsilon}+Y^{1/2+2\varepsilon}X^{5/4-\varepsilon_{0}+4\varepsilon} +Y^\delta X^{1/2-\varepsilon_0+\varepsilon} + Y X^{-\varepsilon/2}\right). \end{equation*} Now choosing $X=Y^\eta$ for a sufficiently small positive constant $\eta$ and using \eqref{N*} give the result. \end{proof}
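We remark that the above choice of $\eta$ indeed works: since necessarily $\delta<1$ (otherwise the error term in \eqref{main1} would already exceed the main term), each error term above is $o(Y)$ for sufficiently small $\eta>0$, as
\[
YX^{\varepsilon-\varepsilon_{0}}=Y^{1-\eta(\varepsilon_{0}-\varepsilon)},\qquad
YX^{-\varepsilon/2}=Y^{1-\eta\varepsilon/2},\qquad
Y^{1/2+2\varepsilon}X^{5/4-\varepsilon_{0}+4\varepsilon}=Y^{1/2+2\varepsilon+\eta(5/4-\varepsilon_{0}+4\varepsilon)},
\]
and the exponent in the last expression is below $1$ once $\eta<(1/2-2\varepsilon)/(5/4-\varepsilon_{0}+4\varepsilon)$.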
\subsection{The product formula for $\widetilde{M}_{\sigma}(y)$} \label{sec:prod}
\begin{prop}\label{lem:euler}
Let $\widetilde{M}_{\sigma}(y)$ be given in \eqref{M}. We have $\widetilde{M}_{\sigma}(y)=\varphi_{\sigma}(y)$, where $\varphi_{\sigma}(y)$ is defined in Theorem \ref{mainthrm}. \end{prop} \begin{proof} Using \eqref{H} yields \begin{equation} \label{sumoverr} \begin{split} \sum_{r_{1},r_{2}\geq0}\frac{\lambda_{y}(\langle1+i\rangle^{r_1})\lambda_{y}(\langle1+i\rangle^{r_2})}{2^{(r_1+r_{2})\sigma}} & =\sum_{r_{1}\geq0} \frac{\lambda_{y}(\langle1+i\rangle^{r_1})}{2^{r_1\sigma}}\sum_{r_{2}\geq0} \frac{\lambda_{y}(\langle1+i\rangle^{r_2})}{2^{r_2\sigma}} \\ & =\sum_{r_{1}\geq0}\frac{H_{r_1}(iy)}{2^{r_1\sigma}}\sum_{r_{2}\geq0}\frac{H_{r_2}(iy)}{2^{r_2\sigma}} =\exp\left(-2iy\log(1-2^{-\sigma})\right). \end{split} \end{equation}
In the sequel, let ${\mathfrak{p}}$ denote a prime ideal and we adopt the convention that all products over ${\mathfrak{p}}$ are restricted to odd prime ideals, i.e.\ prime ideals coprime to $\langle2\rangle$. Set \begin{equation} \label{N1} \begin{split} \widetilde{N}_{\sigma}(y)&:=\sum_{\substack{{\mathfrak{a}},{\mathfrak{b}},{\mathfrak{m}}\subset\mathcal{O}_K\\\gcd({\mathfrak{a}}{\mathfrak{b}}{\mathfrak{m}},\langle2\rangle)=1\\\gcd({\mathfrak{a}},{\mathfrak{b}})=1}}\frac{\lambda_{y}({\mathfrak{a}}^4{\mathfrak{m}})
\lambda_{y}({\mathfrak{b}}^4{\mathfrak{m}})}{{\mathpzc{N}}({\mathfrak{a}}{\mathfrak{b}})^{4\sigma}{\mathpzc{N}}({\mathfrak{m}})^{2\sigma}}\displaystyle{\prod_{\substack{{\mathfrak{p}}|{\mathfrak{a}}{\mathfrak{b}}{\mathfrak{m}}}}\left(1+{\mathpzc{N}}({\mathfrak{p}})^{-1}\right)^{-1}} \\
&= \sideset{}{^{\sharp}}\sum_{{\mathfrak{m}}} \frac{1}{{\mathpzc{N}}({\mathfrak{m}})^{2\sigma}}\prod_{{\mathfrak{p}}|{\mathfrak{m}}}(1+{\mathpzc{N}}({\mathfrak{p}})^{-1})^{-1}\sum_{{\mathfrak{a}}}\frac{\lambda_{y}
({\mathfrak{a}}^{4}{\mathfrak{m}})}{{\mathpzc{N}}({\mathfrak{a}})^{4\sigma}}\prod_{\substack{{\mathfrak{p}}|{\mathfrak{a}}\\{\mathfrak{p}}\nmid{\mathfrak{m}}}}(1+{\mathpzc{N}}({\mathfrak{p}})^{-1})^{-1}
\sum_{{\mathfrak{b}}}\frac{\lambda_{y}({\mathfrak{b}}^{4}{\mathfrak{m}})}{{\mathpzc{N}}({\mathfrak{b}})^{4\sigma}}\prod_{\substack{{\mathfrak{p}}|{\mathfrak{b}}\\{\mathfrak{p}}\nmid{\mathfrak{a}}{\mathfrak{m}}}}(1+{\mathpzc{N}}({\mathfrak{p}})^{-1})^{-1}. \end{split} \end{equation} Here $\sum^{\sharp}$ denotes that the sum runs over ${\mathfrak{m}}$'s that are free of fourth powers. \newline
We need an Euler product for $\widetilde{N}_{\sigma}(y)$. To this end, we first find an Euler product for the innermost sum over ${\mathfrak{b}}$ in the last expression of \eqref{N1}. Let $\nu_{\mathfrak{p}}({\mathfrak{m}})$ denote the multiplicity of a prime ideal ${\mathfrak{p}}$ in an ideal ${\mathfrak{m}}$, i.e.\ the largest integer $e$ such that ${\mathfrak{p}}^{e}$ divides ${\mathfrak{m}}$. We have \begin{equation} \label{N2}
\sum_{{\mathfrak{b}}}\frac{\lambda_{y}({\mathfrak{b}}^{4}{\mathfrak{m}})}{{\mathpzc{N}}({\mathfrak{b}})^{4\sigma}} \prod_{\substack{{\mathfrak{p}}|{\mathfrak{b}}\\{\mathfrak{p}}\nmid{\mathfrak{a}}{\mathfrak{m}}}}(1+{\mathpzc{N}}({\mathfrak{p}})^{-1})^{-1} =\prod_{{\mathfrak{p}}} \mathcal{F}({\mathfrak{p}}) \frac{\displaystyle \prod_{{\mathfrak{p}}|{\mathfrak{a}}{\mathfrak{m}}}\left(\sum_{j=0}^{\infty}\frac{\lambda_{y}({\mathfrak{p}}^{4j+\nu_{{\mathfrak{p}}}({\mathfrak{m}})})}{{\mathpzc{N}}({\mathfrak{p}})^{4j\sigma}}\right)}
{\displaystyle \prod_{{\mathfrak{p}}|{\mathfrak{a}}{\mathfrak{m}}} \mathcal{F}({\mathfrak{p}}) } , \end{equation} where \[ \mathcal{F}({\mathfrak{p}}) = 1+\sum_{j=1}^{\infty}\frac{\lambda_{y}({\mathfrak{p}}^{4j})}{{\mathpzc{N}}({\mathfrak{p}})^{4j\sigma}}\left(1+{\mathpzc{N}}({\mathfrak{p}})^{-1}\right)^{-1} . \]
Inserting \eqref{N2} into \eqref{N1}, we arrive at \begin{align} \label{N3}
\widetilde{N}_{\sigma}(y)=\prod_{{\mathfrak{p}}} \mathcal{F} ({\mathfrak{p}}) \widetilde{P}_{\sigma}(y), \quad \mbox{where} \quad \widetilde{P}_{\sigma}(y) = \sideset{}{^{\sharp}}\sum_{{\mathfrak{m}}}\frac{1}{{\mathpzc{N}}({\mathfrak{m}})^{2\sigma}}\prod_{{\mathfrak{p}}|{\mathfrak{m}}} \frac{ \mathcal{G}({\mathfrak{p}}, \nu_{{\mathfrak{p}}}({\mathfrak{m}})) }
{\mathcal{F}({\mathfrak{p}}) } \sum_{{\mathfrak{a}}}\frac{\lambda_{y}({\mathfrak{a}}^{4}{\mathfrak{m}})}{{\mathpzc{N}}({\mathfrak{a}})^{4\sigma}}\prod_{\substack{{\mathfrak{p}}|{\mathfrak{a}}\\{\mathfrak{p}}\nmid{\mathfrak{m}}}} \frac{\mathcal{G}({\mathfrak{p}}, \nu_{{\mathfrak{p}}}({\mathfrak{m}}))} {\mathcal{F}({\mathfrak{p}})} , \end{align} where \[ \mathcal{G}({\mathfrak{p}}, l) = \sum_{j=0}^{\infty}\frac{\lambda_{y}({\mathfrak{p}}^{4j+l})}{{\mathpzc{N}}({\mathfrak{p}})^{4j\sigma}}\left(1+{\mathpzc{N}}({\mathfrak{p}})^{-1}\right)^{-1} . \] Now we have \begin{equation} \label{N4} \begin{split}
\widetilde{P}_{\sigma}(y) & =\sideset{}{^{\sharp}}\sum_{{\mathfrak{m}}} \frac{1}{{\mathpzc{N}}({\mathfrak{m}})^{2\sigma}} \prod_{{\mathfrak{p}}|{\mathfrak{m}}} \frac{\mathcal{G}({\mathfrak{p}}, \nu_{{\mathfrak{p}}}({\mathfrak{m}}))}{\mathcal{F}({\mathfrak{p}}) } \prod_{{\mathfrak{p}}\nmid{\mathfrak{m}}} \left(1+\sum_{k=1}^{\infty} \frac{\lambda_{y}({\mathfrak{p}}^{4k})}
{{\mathpzc{N}}({\mathfrak{p}})^{4k\sigma}} \frac{\mathcal{G}({\mathfrak{p}}, 0)}{ \mathcal{F}({\mathfrak{p}})} \right) \prod_{{\mathfrak{p}}|{\mathfrak{m}}} \left(\sum_{k=0}^{\infty}\frac{\lambda_{y}({\mathfrak{p}}^{4k+\nu_{{\mathfrak{p}}}({\mathfrak{m}})})}{{\mathpzc{N}}({\mathfrak{p}})^{4k\sigma}}\right) \\ &=\prod_{{\mathfrak{p}}}\left(1+\sum_{k=1}^{\infty}\frac{\lambda_{y}({\mathfrak{p}}^{4k})}{{\mathpzc{N}}({\mathfrak{p}})^{4k\sigma}}
\frac{ \mathcal{G}({\mathfrak{p}}, 0) }{\mathcal{F}({\mathfrak{p}}) } \right) \sideset{}{^{\sharp}}\sum_{{\mathfrak{m}}} \frac{1}{{\mathpzc{N}}({\mathfrak{m}})^{2\sigma}}\frac{\displaystyle\prod_{{\mathfrak{p}}|{\mathfrak{m}}} \frac{\mathcal{G}({\mathfrak{p}}, \nu_{{\mathfrak{p}}}({\mathfrak{m}}))}{\mathcal{F}({\mathfrak{p}})}
\displaystyle \prod_{{\mathfrak{p}}|{\mathfrak{m}}}\left(\displaystyle\sum_{k=0}^{\infty}\frac{\lambda_{y}({\mathfrak{p}}^{4k+\nu_{{\mathfrak{p}}}({\mathfrak{m}})})}{{\mathpzc{N}}({\mathfrak{p}})^{4k\sigma}}\right)}
{\displaystyle \prod_{{\mathfrak{p}}|{\mathfrak{m}}}\left(\displaystyle 1+ \sum_{k=1}^{\infty}\frac{\lambda_{y}({\mathfrak{p}}^{4k})}{{\mathpzc{N}}({\mathfrak{p}})^{4k\sigma}} \frac{\displaystyle \mathcal{G}({\mathfrak{p}}, 0)}{\mathcal{F}({\mathfrak{p}})} \right)}. \end{split} \end{equation}
The summation over ${\mathfrak{m}}$ in the above expression can be recast as the Euler product \begin{align} \label{N5} \prod_{{\mathfrak{p}}}\left(1+\sum_{l=1}^{3}\frac{1}{{\mathpzc{N}}({\mathfrak{p}})^{2l\sigma}} \frac{\frac{\displaystyle \mathcal{G}({\mathfrak{p}}, l)} {\displaystyle \mathcal{F}({\mathfrak{p}}) } \displaystyle \sum_{k=0}^{\infty}\frac{\lambda_{y}({\mathfrak{p}}^{4k+l})}{{\mathpzc{N}}({\mathfrak{p}})^{4k\sigma}}} {1+\displaystyle \sum_{k=1}^{\infty}\frac{\lambda_{y}({\mathfrak{p}}^{4k})}{{\mathpzc{N}}({\mathfrak{p}})^{4k\sigma}} \frac{\displaystyle \mathcal{G}({\mathfrak{p}}, 0)} {\displaystyle \mathcal{F}({\mathfrak{p}}) }} \right). \end{align}
Inserting \eqref{N5} into \eqref{N4} and using the resulting expression for $\widetilde{P}_{\sigma}(y)$ in \eqref{N3}, we infer that \begin{align*} \widetilde{N}_{\sigma}(y)=\prod_{{\mathfrak{p}}}\widetilde{M}_{\sigma,{\mathfrak{p}}}(y), \quad \widetilde{M}_{\sigma,{\mathfrak{p}}}(y)=1-\left(1+{\mathpzc{N}}({\mathfrak{p}})^{-1}\right)^{-1} +\left(1+{\mathpzc{N}}({\mathfrak{p}})^{-1}\right)^{-1}\sum_{l=0}^{3}\sum_{\substack{j=0\\k=0}}^{\infty}\frac{\lambda_{y}({\mathfrak{p}}^{4j+l})\lambda_{y}({\mathfrak{p}}^{4k+l})}{{\mathpzc{N}}({\mathfrak{p}})^{(4j+l)\sigma}{\mathpzc{N}}({\mathfrak{p}})^{(4k+l)\sigma}}. \end{align*}
The definition of $\lambda_{y}$ in \eqref{lambdadef}, together with the relation \begin{align} \label{ortho} \sum_{l=0}^{3}i^{lk}=\begin{cases} 4 & \text{ if }k \equiv 0 \pmod {4}, \\ 0 & \text{ otherwise},
\end{cases} \end{align} gives that \begin{align*}
\sum_{l=0}^{3}\sum_{\substack{j=0\\k=0}}^{\infty}\frac{\lambda_{y}({\mathfrak{p}}^{4j+l})\lambda_{y}({\mathfrak{p}}^{4k+l})}{{\mathpzc{N}}({\mathfrak{p}})^{(4j+l)\sigma}{\mathpzc{N}}({\mathfrak{p}})^{(4k+l)\sigma}} &=\sum_{l=0}^{3}\sum_{j=0}^{\infty}\frac{H_{4j+l}\left(iy\right)}{{\mathpzc{N}}({\mathfrak{p}})^{(4j+l)\sigma}} \sum_{k=0}^{\infty}\frac{H_{4k+l}\left(iy\right)}{{\mathpzc{N}}({\mathfrak{p}})^{(4k+l)\sigma}}, \\ &=\frac 1{16}\sum_{l=0}^{3}\sum_{r=0}^{\infty}\frac{H_{r}\left(iy\right)}{{\mathpzc{N}}({\mathfrak{p}})^{r\sigma}} \sum^{3}_{n=0}i^{(r-l)n}\sum_{r'=0}^{\infty}\frac{H_{r'}\left(iy\right)}{{\mathpzc{N}}({\mathfrak{p}})^{r'\sigma}}\sum^{3}_{m=0}i^{(r'-l)m} \nonumber \\ &=\frac1{16}\sum_{l=0}^{3}\sum_{\substack{n=0\\m=0}}^{3}\frac{1}{i^{l(n+m)}}\exp\left(-iy \left(\log \left(1-\frac{i^{n}} {{\mathpzc{N}}({\mathfrak{p}})^{\sigma}}\right )+\log \left(1-\frac{i^{m}}{{\mathpzc{N}}({\mathfrak{p}})^{\sigma}}\right)\right) \right ), \nonumber \end{align*}
where the last expression above follows from \eqref{H}. \newline
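Explicitly, for absolutely convergent coefficients $a_{r}$, the relation \eqref{ortho} extracts an arithmetic progression modulo $4$ via
\[
\sum_{j=0}^{\infty}a_{4j+l}=\frac{1}{4}\sum_{r=0}^{\infty}a_{r}\sum_{n=0}^{3}i^{(r-l)n},
\]
since the inner sum over $n$ equals $4$ when $r\equiv l\pmod{4}$ and vanishes otherwise; above, this is applied with $a_{r}=H_{r}(iy){\mathpzc{N}}({\mathfrak{p}})^{-r\sigma}$.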
Further using the relation \eqref{ortho} for $k=m+n$, we conclude that \begin{align*}
\sum_{l=0}^{3}\sum_{\substack{j=0\\k=0}}^{\infty}\frac{\lambda_{y}({\mathfrak{p}}^{4j+l})\lambda_{y}({\mathfrak{p}}^{4k+l})}{{\mathpzc{N}}({\mathfrak{p}})^{(4j+l)\sigma}{\mathpzc{N}}({\mathfrak{p}})^{(4k+l)\sigma}}&=\frac14\sum_{j=0}^{3}\exp\left(-2iy\log\left|1-\frac{i^{j}}{{\mathpzc{N}}({\mathfrak{p}})^{\sigma}}\right|\right). \end{align*} Therefore, $\widetilde{N}_{\sigma}(y)$ takes the form \begin{align} \label{NM}
\widetilde{N}_{\sigma}(y)=\prod_{{\mathfrak{p}} \nmid\langle2\rangle}\widetilde{M}_{\sigma,{\mathfrak{p}}}(y), \quad \widetilde{M}_{\sigma,{\mathfrak{p}}}(y)=\left( \frac{1}{{\mathpzc{N}}({\mathfrak{p}})+1}+\frac{1}{4}\left(\frac{{\mathpzc{N}}({\mathfrak{p}})}{{\mathpzc{N}}({\mathfrak{p}})+1}\right)\sum_{j=0}^{3}\exp\left(-2iy\log\left|1-\frac{i^{j}}{{\mathpzc{N}}({\mathfrak{p}})^{\sigma}}\right|\right)\right). \end{align}
The assertion of Proposition \ref{lem:euler} now follows from this and \eqref{sumoverr}. \end{proof}
\section{PROOF OF PROPOSITION \ref{mainprop2}} \label{sec:good-density}
Since $\overline{\varphi}_{\sigma}(y)=\varphi_{\sigma}(-y)$, we can assume, without loss of generality, that $y>0$. It follows from Proposition \ref{lem:euler} that \begin{equation*} \varphi_{\sigma}(y)=\exp\left(-2iy\log(1-2^{-\sigma})\right)\prod_{{\mathfrak{p}}\nmid\langle2\rangle}\widetilde{M}_{\sigma,{\mathfrak{p}}}(y), \end{equation*}
where $\widetilde{M}_{\sigma,{\mathfrak{p}}}(y)$ is given in \eqref{NM}. Note first that for all $y$ we have $|\widetilde{M}_{\sigma,{\mathfrak{p}}}(y)|\leq 1$. Further note that \begin{align*}
\widetilde{Q}_{\sigma,{\mathfrak{p}}}(y)&:=\sum_{j=0}^{3}\exp\left(-2iy\log\left|1-\frac{i^{j}}{{\mathpzc{N}}({\mathfrak{p}})^{\sigma}}\right|\right) \\&=\exp\left(-2iy\log(1-{\mathpzc{N}}({\mathfrak{p}})^{-\sigma})\right)\left(1+2\exp\left(-2iy\log\frac{\sqrt{{\mathpzc{N}}({\mathfrak{p}})^{2\sigma}+1}}{{\mathpzc{N}}({\mathfrak{p}})^{\sigma}-1}\right)+ \exp\left(-2iy\log\frac{{\mathpzc{N}}({\mathfrak{p}})^{\sigma}+1}{{\mathpzc{N}}({\mathfrak{p}})^{\sigma}-1}\right)\right). \end{align*}
Thus, \begin{align*}
\left|\widetilde{Q}_{\sigma,{\mathfrak{p}}}(y)\right|&=\left|1+2\exp\left(-2iy\log\frac{\sqrt{{\mathpzc{N}}({\mathfrak{p}})^{2\sigma}+1}}{{\mathpzc{N}}({\mathfrak{p}})^{\sigma}-1}\right)+\exp\left(-2iy\log\frac{{\mathpzc{N}}({\mathfrak{p}})^{\sigma}+1}{{\mathpzc{N}}({\mathfrak{p}})^{\sigma}-1}\right)\right|\\
& \leq \left|1+2\exp\left(-2iy\log\frac{\sqrt{{\mathpzc{N}}({\mathfrak{p}})^{2\sigma}+1}}{{\mathpzc{N}}({\mathfrak{p}})^{\sigma}-1}\right)\right|+1 =\sqrt{1+8\cos^{2}\left(y\log\frac{\sqrt{{\mathpzc{N}}({\mathfrak{p}})^{2\sigma}+1}}{{\mathpzc{N}}({\mathfrak{p}})^{\sigma}-1}\right)}+1. \end{align*} Given any $\varepsilon>0$ and sufficiently large $y$, consider the prime ideals ${\mathfrak{p}}$ with \begin{equation}\label{eqn:condition2} 1.35-\varepsilon\leq y\log\frac{\sqrt{{\mathpzc{N}}({\mathfrak{p}})^{2\sigma}+1}}{{\mathpzc{N}}({\mathfrak{p}})^{\sigma}-1}\leq 1.77+\varepsilon . \end{equation} Upon taking $\varepsilon$ small enough, we can ensure that
\[ \left|\cos\left(y\log\frac{\sqrt{{\mathpzc{N}}({\mathfrak{p}})^{2\sigma}+1}}{{\mathpzc{N}}({\mathfrak{p}})^{\sigma}-1}\right) \right|\leq 0.22 . \]
This implies that $\left|\widetilde{Q}_{\sigma,{\mathfrak{p}}}(y)\right|\leq 2.2$. It follows that for all ${\mathfrak{p}}$ satisfying (\ref{eqn:condition2}), we have
\[\left|\widetilde{M}_{\sigma,{\mathfrak{p}}}(y)\right|\leq \frac{1}{{\mathpzc{N}}({\mathfrak{p}})+1}+0.55\left(\frac{{\mathpzc{N}}({\mathfrak{p}})}{{\mathpzc{N}}({\mathfrak{p}})+1}\right)\leq 0.8.\] Observe that $\frac{2y}{3.54}\leq {\mathpzc{N}}({\mathfrak{p}})^{\sigma}\leq \frac{2y}{2.7}$ is equivalent to
\[ \frac{2.7}{2}{\mathpzc{N}}({\mathfrak{p}})^{\sigma}\log\tfrac{\sqrt{{\mathpzc{N}}({\mathfrak{p}})^{2\sigma}+1}}{{\mathpzc{N}}({\mathfrak{p}})^{\sigma}-1}\leq y\log\tfrac{\sqrt{{\mathpzc{N}}({\mathfrak{p}})^{2\sigma}+1}}{{\mathpzc{N}}({\mathfrak{p}})^{\sigma}-1}\leq \frac{3.54}{2}{\mathpzc{N}}({\mathfrak{p}})^{\sigma}\log\tfrac{\sqrt{{\mathpzc{N}}({\mathfrak{p}})^{2\sigma}+1}}{{\mathpzc{N}}({\mathfrak{p}})^{\sigma}-1}.\]
Since
$$\displaystyle{\lim_{{\mathpzc{N}}({\mathfrak{p}})\to\infty}{\mathpzc{N}}({\mathfrak{p}})^{\sigma}\log\tfrac{\sqrt{{\mathpzc{N}}({\mathfrak{p}})^{2\sigma}+1}}{{\mathpzc{N}}({\mathfrak{p}})^{\sigma}-1}}=1,$$
we get that, for sufficiently large $y$, the condition \[\frac{2y}{3.54}\leq {\mathpzc{N}}({\mathfrak{p}})^{\sigma}\leq \frac{2y}{2.7} \] implies \[ 1.35-\varepsilon\leq y\log\frac{\sqrt{{\mathpzc{N}}({\mathfrak{p}})^{2\sigma}+1}}{{\mathpzc{N}}({\mathfrak{p}})^{\sigma}-1}\leq 1.77 +\varepsilon.\] Let $\Pi(x)$ be the number of prime ideals in $K$ with norms not exceeding $x$, and let $\Pi_{\sigma}(y)$ be the number of prime ideals satisfying \eqref{eqn:condition2}. It follows from the above consideration that if $y$ is large enough, then \[\Pi_{\sigma}(y)>\Pi\left(\left(\frac{2y}{2.7}\right)^{1/\sigma}\right)-\Pi\left(\left(\frac{2y}{3.54}\right)^{1/\sigma}\right) \gg_{\sigma} y^{1/\sigma-\delta}\] for all sufficiently small $\delta>0$. Consequently,
\[\left|\varphi_{\sigma}(y)\right|=\prod_{{\mathfrak{p}}\nmid\langle2\rangle}\left|\widetilde{M}_{\sigma,{\mathfrak{p}}}(y)\right|\leq 0.8^{\Pi_{\sigma}(y)}=\exp\left(-\log\left(\frac{1}{0.8}\right)\Pi_{\sigma}(y)\right)\leq\exp\left(-Cy^{1/\sigma-\delta}\right),\] where $C$ can be chosen according to the values of $\sigma$ and $\delta$. This completes the proof of Proposition \ref{mainprop2}. \newline
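The explicit numerical constants used in the proof above can be double-checked by a short computation; the following Python sketch only illustrates the inequalities and is not part of the argument.

```python
import math

# 1) On [1.35, 1.77] the maximum of |cos| is attained at an endpoint,
#    since the interval contains pi/2, where cos vanishes.
assert 1.35 < math.pi / 2 < 1.77
assert max(abs(math.cos(1.35)), abs(math.cos(1.77))) <= 0.22

# 2) |1 + 2 e^{2it}| = sqrt(1 + 8 cos^2 t); with |cos t| <= 0.22 this gives
#    the bound |Q| <= sqrt(1 + 8 * 0.22^2) + 1 <= 2.2.
t = 0.9                                   # arbitrary sample angle
lhs = abs(1 + 2 * complex(math.cos(2 * t), math.sin(2 * t)))
assert abs(lhs - math.sqrt(1 + 8 * math.cos(t) ** 2)) < 1e-12
assert math.sqrt(1 + 8 * 0.22 ** 2) + 1 <= 2.2

# 3) 1/(N+1) + 0.55 N/(N+1) <= 0.8 for every norm N >= 3 (odd prime
#    ideals have norm at least 3, and the bound decreases in N).
assert all(1 / (N + 1) + 0.55 * N / (N + 1) <= 0.8 for N in range(3, 10**4))

# 4) x log(sqrt(x^2+1)/(x-1)) -> 1 as x -> infinity.
x = 1e6
assert abs(x * math.log(math.sqrt(x * x + 1) / (x - 1)) - 1) < 1e-5
```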
\noindent{\bf Acknowledgments.} P. G. is supported in part by NSFC grant 11871082 and L. Z. by the FRG grant PS43707 and the Faculty Silverstar Award PS49334. Parts of this work were done when P. G. visited the University of New South Wales (UNSW) in August 2018. He wishes to thank UNSW for the invitation, financial support and warm hospitality during his pleasant stay. Finally, the authors would like to thank the anonymous referee for his/her comments and suggestions.
\vspace*{.5cm}
\noindent\begin{tabular}{p{8cm}p{8cm}} School of Mathematical Sciences & School of Mathematics and Statistics \\ Beihang University & University of New South Wales \\ Beijing 100191 China & Sydney NSW 2052 Australia \\ Email: {\tt [email protected]} & Email: {\tt [email protected]} \\ \end{tabular}
\end{document} |
\begin{document}
\title[] {Examples of Ricci-mean curvature flows}
\author{Hikaru Yamamoto} \address{Department of Mathematics, Faculty of Science, Tokyo University of Science, 1-3 Kagurazaka, Shinjuku-ku, Tokyo 162-8601, Japan} \email{[email protected]}
\begin{abstract} Let $\pi:\mathbb{P}(\mathcal{O}(0)\oplus \mathcal{O}(k))\to \mathbb{P}^{n-1}$ be a projective bundle over $\mathbb{P}^{n-1}$ with $1\leq k \leq n-1$. In this paper, we show that the lens space $L(k\, ;1)(r)$ of radius $r$ embedded in $\mathbb{P}(\mathcal{O}(0)\oplus \mathcal{O}(k))$ is a self-similar solution, where $\mathbb{P}(\mathcal{O}(0)\oplus \mathcal{O}(k))$ is endowed with the $U(n)$-invariant gradient shrinking Ricci soliton structure. We also prove that there exists a pair of critical radii $r_{1}<r_{2}$ satisfying the following. The lens space $L(k\, ;1)(r)$ is a self-shrinker if $r<r_{2}$ and a self-expander if $r_{2}<r$, and the Ricci-mean curvature flow emanating from $L(k\, ;1)(r)$ collapses to the zero section of $\pi$ if $r<r_{1}$ and to the $\infty$-section of $\pi$ if $r_{1}<r$. This gives explicit examples of Ricci-mean curvature flows. \end{abstract}
\keywords{mean curvature flow, self-similar solution, Ricci flow, Ricci soliton}
\subjclass[2010]{53C42, 53C44}
\maketitle
\section{Background}\label{BCG} Let $M$ and $N$ be manifolds, $\mathcal{G}=\{\,g_{t}\mid t\in [0,T)\,\}$ be a smooth 1-parameter family of Riemannian metrics on $N$ and $\mathcal{F}=\{\,F_{t}:M\to N\mid t\in [0,T')\,\}$ be a smooth 1-parameter family of immersions with $T'\leq T$. \begin{definition} The pair $(\mathcal{G},\mathcal{F})$ is called a {\it Ricci-mean curvature flow} if it satisfies \begin{align}\label{RMCFeq} \begin{aligned} \frac{\partial g_{t}}{\partial t}=&-2\mathop{\mathrm{Ric}}(g_{t})\\ \frac{\partial F_{t}}{\partial t}=&H_{g_{t}}(F_{t}), \end{aligned} \end{align} where $H_{g_{t}}(F_{t})$ denotes the mean curvature vector field of $F_{t}$ calculated with the ambient metric $g_{t}$ at each time $t$. \end{definition} The first equation of (\ref{RMCFeq}) is just the Ricci flow equation on $N$ and it does not depend on the existence of $\mathcal{F}$. The second equation of (\ref{RMCFeq}) is the mean curvature flow equation, though it is affected by the evolution of the ambient metrics $g_{t}$. This is a coupled flow of the Ricci flow and the mean curvature flow.
The Ricci-mean curvature flow equation has already appeared in several contexts. For example, Smoczyk \cite{Smoczyk2}, Han-Li \cite{HanLi} and Lotay-Pacini \cite{LotayPacini} consider the Lagrangian mean curvature flow coupled with the K\"ahler-Ricci flow, and generalize several results which hold in Calabi-Yau manifolds to this moving ambient setting. Other contexts appear in, for example, Lott \cite{Lott} and Magni-Mantegazza-Tsatis \cite{MagniMantegazzaTsatis}. Huisken \cite{Huisken} introduced a monotonicity formula for mean curvature flows in Euclidean space, and the latter authors generalized it to Ricci-mean curvature flows coupled with Ricci flows constructed from gradient shrinking Ricci solitons.
A {\it gradient shrinking Ricci soliton} is a pair of a Riemannian manifold $(N,g)$ and a function $f$ on $N$, called a potential function, which satisfies \[\mathop{\mathrm{Ric}}(g)+\mathop{\mathrm{Hess}}f=g. \] Given a gradient shrinking Ricci soliton $(N,g,f)$ and an arbitrary fixed time $T\in (0,\infty)$, a solution of the Ricci flow which survives on $[0,T)$ is obtained by setting $g_{t}=2(T-t)\Phi_{t}^{*}g$, where $\Phi_{t}$ is the 1-parameter family of diffeomorphisms of $N$ generated by $\frac{1}{2(T-t)}\nabla f$ with $\Phi_{0}=\mathrm{id}$.
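As a simple illustration of this construction, consider the Gaussian soliton $(\mathbb{R}^{n},g_{\mathrm{eucl}},|x|^{2}/2)$, for which $\mathop{\mathrm{Ric}}=0$ and $\mathop{\mathrm{Hess}}f=g$. The Python sketch below (a sanity check, not taken from the paper) integrates the generating ODE numerically and confirms that $g_{t}=2(T-t)\Phi_{t}^{*}g$ is the constant flat metric $2Tg$, as a Ricci flow with vanishing Ricci curvature must be.

```python
import math

# Gaussian soliton: f(x) = |x|^2/2, grad f = x, so Phi_t solves
#   dx/dt = x / (2(T - t)),  Phi_0 = id,
# with closed form Phi_t(x) = sqrt(T/(T-t)) x.  We verify this with RK4.
T, x0 = 1.0, 2.0

def rhs(t, x):
    return x / (2.0 * (T - t))

def rk4_step(t, x, h):
    k1 = rhs(t, x)
    k2 = rhs(t + h / 2, x + h / 2 * k1)
    k3 = rhs(t + h / 2, x + h / 2 * k2)
    k4 = rhs(t + h, x + h * k3)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

steps, t_end = 5000, 0.5
h = t_end / steps
x = x0
for i in range(steps):
    x = rk4_step(i * h, x, h)
assert abs(x - math.sqrt(T / (T - t_end)) * x0) < 1e-8

# Pullback by Phi_t scales the flat metric by T/(T-t), hence
# g_t = 2(T-t) Phi_t^* g = 2T g for all t: a static flat Ricci flow.
for t in (0.0, 0.25, 0.5, 0.75):
    assert abs(2 * (T - t) * (T / (T - t)) - 2 * T) < 1e-12
```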
Motivated by their works, the author generalized the rescaling procedure due to Huisken to Ricci-mean curvature flows in \cite{Yamamoto2}. It states that if a Ricci-mean curvature flow coupled with a Ricci flow $g_{t}=2(T-t)\Phi_{t}^{*}g$ develops singularities of type I at the same time $T$, its rescaling limit is a self-shrinker. Self-shrinkers are defined as follows. \begin{definition}\label{defofselsim} An immersion $F: M\to N$ to a gradient shrinking Ricci soliton $(N,g,f)$ is called a {\it self-similar solution} with coefficient $\lambda\in\mathbb{R}$ if it satisfies \begin{align}\label{self3} H_{g}(F)=\lambda {\nabla f}^{\bot}. \end{align} If $\lambda<0$ or $\lambda>0$, it is called a {\it self-shrinker} or {\it self-expander}, respectively. \end{definition} If $\lambda=0$, it is a minimal immersion. Moreover, a self-similar solution with coefficient $\lambda$ is a minimal immersion in $N$ with respect to the conformally rescaled metric $e^{2\lambda f/m}g$, where $m=\dim M$. Hence, self-similar solutions can be considered as a kind of generalization of minimal submanifolds.
When $(N,g,f)$ is the Gaussian soliton, that is, the Euclidean space with the standard metric and potential function $f=|x|^2/2$, the equation (\ref{self3}) is written as $H(F)=\lambda x^{\bot}$. Thus, Definition \ref{defofselsim} coincides with the ordinary notion of self-similar solutions in the Euclidean space. The original form of Definition \ref{defofselsim} for general Ricci solitons appeared in \cite{Lott}.
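In the Gaussian case the model example is the round sphere: $S^{n-1}(r)\subset\mathbb{R}^{n}$ has $H=-\frac{n-1}{r^{2}}x$, so it is a self-shrinker with $\lambda=-\frac{n-1}{r^{2}}$ for every radius, and under the ordinary mean curvature flow its radius obeys $\dot r=-(n-1)/r$. A small numerical sketch (illustrative only, not from the paper):

```python
import math

# Shrinking-sphere check in R^n: dr/dt = -(n-1)/r has the exact solution
# r(t) = sqrt(r0^2 - 2(n-1)t), singular at t = r0^2/(2(n-1)).
n, r0 = 3, 2.0
T_sing = r0 ** 2 / (2 * (n - 1))          # = 1.0 here

def r_exact(t):
    return math.sqrt(r0 ** 2 - 2 * (n - 1) * t)

# forward-Euler integration up to t_end < T_sing
steps, t_end = 200000, 0.9
h = t_end / steps
r = r0
for _ in range(steps):
    r -= h * (n - 1) / r
assert abs(r - r_exact(t_end)) < 1e-4

# the coefficient lambda = -(n-1)/r^2 is negative for every radius, i.e.
# every round sphere is a self-shrinker, never a self-expander.
assert all(-(n - 1) / rr ** 2 < 0 for rr in (0.5, 1.0, 5.0))
```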
It is well-known that a mean curvature flow in a fixed Riemannian manifold is a backward $L^2$-gradient flow of the volume functional, and a Ricci flow can be regarded as a gradient flow of Perelman's $\mathcal{W}$-entropy functional. As mentioned above, Ricci-mean curvature flows share some properties with mean curvature flows in Euclidean spaces or, more generally, Ricci-flat spaces. However, it is unclear whether a Ricci-mean curvature flow can be considered as a backward $L^2$-gradient flow of some functional. To the author's knowledge, the problem of finding an appropriate functional whose gradient flow is a Ricci-mean curvature flow is still open.
If $M$ and $N$ are compact, then for any Riemannian metric $g$ on $N$ and immersion $F:M\to N$, the short-time existence and uniqueness of the Ricci-mean curvature flow equation (\ref{RMCFeq}) with initial condition $(g,F)$ are guaranteed. Indeed, one first constructs the unique short-time solution of the Ricci flow on $N$ with initial metric $g$, and then solves the second equation of (\ref{RMCFeq}) for a short time with initial immersion $F$. Hence, there are infinitely many examples of Ricci-mean curvature flows.
However, no explicit examples of Ricci-mean curvature flows have been known so far. In this paper, we consider a lens space $L(k\,;1)(r)$ embedded in $\mathbb{P}(\mathcal{O}(0)\oplus \mathcal{O}(k))$ endowed with a gradient shrinking Ricci soliton structure, and investigate how it moves along the Ricci-mean curvature flow. The analysis is done by reducing the PDE (\ref{RMCFeq}) to the ODE (\ref{ODEradi2}). This example shows how the evolution of the ambient metrics affects the motion of submanifolds. To the author's knowledge, this gives the first non-trivial explicit example of a Ricci-mean curvature flow, and the author hopes that it inspires further research on Ricci-mean curvature flows. \\
\noindent {\bf Organization of this paper.} Section \ref{BCG} provides the background of the study of Ricci-mean curvature flows, together with the definitions of Ricci-mean curvature flows and self-similar solutions. In Section \ref{sectmain}, we briefly summarize the main results of this paper. In Section \ref{caoconst}, we review the construction of a gradient shrinking K\"ahler Ricci soliton structure on $\mathbb{P}(\mathcal{O}(0)\oplus \mathcal{O}(k))$. In Section \ref{lenssp}, we see that lens spaces $L(k\,;1)(r)$ with radius $r$ are naturally embedded in $\mathbb{P}(\mathcal{O}(0)\oplus \mathcal{O}(k))$ and that these are self-similar solutions. In Section \ref{motion}, we investigate the motion of the Ricci-mean curvature flow emanating from $L(k\,;1)(r)$. In Section \ref{motionh}, we compare it to the ordinary mean curvature flow emanating from $L(k\,;1)(r)$.
\section{Main Results}\label{sectmain} We give a summary of the results of this paper in the following. The ambient space is the $k$-twisted projective-line bundle $\mathbb{P}(\mathcal{O}(0)\oplus \mathcal{O}(k))$ over $\mathbb{P}^{n-1}$, where $n\geq 2$ and $1\leq k \leq n-1$, and we denote it by $N_{k}^{n}$. It can be shown that $N_{k}^{n}$ contains $(\mathbb{C}^{n}\setminus\{0\})/\mathbb{Z}_{k}$ as an open dense subset, and its complement is the disjoint union of $S_{0}$ and $S_{\infty}$, where these denote the images of the $0$-section and the $\infty$-section, respectively. The deleted origin and the points at infinity of $\mathbb{C}^{n}\setminus\{0\}$ are replaced by $S_{0}$ and $S_{\infty}$, respectively. See Figure \ref{fig1}.
It is known, by Cao \cite{Cao} and Koiso \cite{Koiso}, that there exists a unique $U(n)$-invariant gradient shrinking K\"ahler Ricci soliton structure on $N_{k}^{n}$. We denote its Riemannian metric and potential function by $g$ and $f$, respectively.
\begin{figure}
\caption{$N_{k}^{n}$}
\label{fig1}
\end{figure}
For $0<r<\infty$, we consider the $\mathbb{Z}_{k}$-quotient of $S^{2n-1}(r)$, the sphere with radius $r$ in $\mathbb{C}^{n}$, and denote it by $L(k\,;1)(r)$. It is in fact a lens space, and it is embedded in $N_{k}^{n}$. We denote its inclusion map by \[\iota_{r}:L(k\,;1)(r)\hookrightarrow N_{k}^{n}. \] Then, the first theorem states that these are self-similar solutions, and whether each is a self-shrinker or a self-expander is determined by whether its radius is smaller or larger than a critical radius $r_{2}$. \begin{theorem}\label{sumofmain1} For each $0<r<\infty$, the inclusion map $\iota_{r}:L(k\,;1)(r)\hookrightarrow N_{k}^{n}$ is a compact self-similar solution. Furthermore, there exists a unique radius $r_{2}$ which satisfies the following. \begin{itemize} \item If $r<r_{2}$, $\iota_{r}:L(k\,;1)(r)\hookrightarrow N_{k}^{n}$ is a non-minimal self-shrinker. \item If $r=r_{2}$, $\iota_{r}:L(k\,;1)(r)\hookrightarrow N_{k}^{n}$ is a minimal embedding. \item If $r>r_{2}$, $\iota_{r}:L(k\,;1)(r)\hookrightarrow N_{k}^{n}$ is a non-minimal self-expander. \end{itemize} \end{theorem}
\begin{remark} In particular, Theorem \ref{mainthm} claims that there exists a compact self-expander in $N_{k}^{n}$. To contrast with the case that the ambient space is a Euclidean space, we remark that there exists no compact self-expander in $\mathbb{R}^{n}$. This can be proved in several ways; see, for instance, Proposition 5.3 in \cite{CaoLi}. In $\mathbb{R}^{n}$, the sphere $S^{n-1}(r)$ is a self-shrinker for every radius $r$. However, intuitively, we get a self-expander in $(N_{k}^{n},\omega, f)$ by taking the radius sufficiently large, because the neighborhood of $\{\infty\}$ in $(\mathbb{C}^{n}\setminus\{0\})/\mathbb{Z}_{k}$ is bent and compactified. \end{remark}
Fix a time $0<T<\infty$. Then, we will check that the 1-parameter family of diffeomorphisms $\Phi_{t}$ generated by $\frac{1}{2(T-t)}\nabla f$ with $\Phi_{0}=\mathrm{id}$ is given by \[\Phi_{t}(z):=\left(\frac{T}{T-t}\right)^{\frac{c}{2}}z\] for $t\in[0,T)$ on $(\mathbb{C}^{n}\setminus\{0\})/\mathbb{Z}_{k}$. Here, $c$ is a positive constant defined in the construction of the soliton structure; we skip its explanation here. Then, we obtain a Ricci flow \[g_{t}:=2(T-t)\Phi_{t}^{*}g\] which survives on the time interval $[0,T)$. We remark that, since $c$ is positive, $\Phi_{t}$ is expanding, and $\Phi_{t}(z)$ converges to a point in $S_{\infty}$ as $t\to T$ if $z$ is not contained in $S_{0}$. See Figure \ref{fig1}.
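That this $\Phi_{t}$ is the flow of $\frac{1}{2(T-t)}\nabla f$ can be checked by differentiating in $t$, since the radial coefficient of the generator is $\frac{c}{2(T-t)}$. The following Python sketch verifies this with a finite difference; the value $c=0.7$ is an arbitrary placeholder for the unspecified soliton constant $0<c<1$.

```python
# Check that Phi_t(z) = (T/(T-t))^{c/2} z satisfies the generating ODE
#   d(Phi_t)/dt = c/(2(T-t)) * Phi_t,   Phi_0 = id,
# via a central finite difference.  The value c = 0.7 is an arbitrary
# placeholder for the soliton constant 0 < c < 1, which is not explicit.
T, c, z0 = 1.0, 0.7, 1.5

def Phi(t):
    return (T / (T - t)) ** (c / 2) * z0

for t in (0.1, 0.4, 0.7):
    h = 1e-6
    dPhi = (Phi(t + h) - Phi(t - h)) / (2 * h)
    assert abs(dPhi - c / (2 * (T - t)) * Phi(t)) < 1e-6

assert Phi(0.0) == z0                      # Phi_0 = id
assert Phi(0.9) > Phi(0.5) > Phi(0.1)      # Phi_t is expanding as t -> T
```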
For a fixed radius $r$, we take the solution of the Ricci-mean curvature flow $F_{t}:L(k\,;1)(r)\to N_{k}^{n}$ along $g_{t}$ with initial condition $F_{0}=\iota_{r}$. We assume that $F_{t}$ exists on $[0,T')$ and that $T'(=T'(r))$ is the maximal existence time of the solution. Note that $T'\leq T$ in general. Then, the following is a summary of Theorem \ref{mainRM}. \begin{theorem}\label{sumofmain2} There exists a unique radius $r_{1}$ with $r_{1}<r_{2}$ which satisfies the following. \begin{itemize} \item $T'=T$ if and only if $r=r_{1}$. \item If $r\leq r_{1}$, $F_{t}:L(k\,;1)(r)\to N_{k}^{n}$ collapses to $S_{0}$ as $t\to T'$. \item If $r>r_{1}$, $F_{t}:L(k\,;1)(r)\to N_{k}^{n}$ collapses to $S_{\infty}$ as $t\to T'$. \end{itemize} \end{theorem}
Theorem \ref{mainRM} contains further information about the blow-up rate of the solution. Actually, we see that the blow-up rate is type I in each case. The above theorem reveals how a lens space $L(k\,;1)(r)$ moves and what it converges to under a Ricci-mean curvature flow along the Ricci flow $g_{t}$. On the other hand, in Section \ref{motionh}, we investigate the evolution of $L(k\,;1)(r)$ under the ordinary mean curvature flow in the fixed Riemannian manifold $(N_{k}^{n},g)$. Then, we prove that if $r<r_{2}$ ($r>r_{2}$) it collapses to $S_{0}$ ($S_{\infty}$) in finite time and its blow-up rate is also type I. Of course, if $r=r_{2}$, $L(k\,;1)(r)$ does not move, since $L(k\,;1)(r)$ is minimal. Thus, the critical radius $r_{1}$, which determines whether a lens space tends to the $S_{0}$-side or the $S_{\infty}$-side under the Ricci-mean curvature flow, is smaller than the minimal radius $r_{2}$. See Figure \ref{fig1}. We summarize the situation in Table \ref{tab1}.
\begin{table}[h] \caption{Ricci-mean curvature flow and mean curvature flow} \label{tab1}
\begin{tabular}{|c||c|c|c|c|c|} \multicolumn{6}{c}{Ricci-mean curvature flow}\\ \hline
Radius $r$ & $r<r_{1}$ & $r_{1}$ & $r_{1}<r<r_{2}$ & $r_{2}$ & $r_{2}<r$ \\
\hline Maximal time $T'$ & $T'<T$ & $T'=T$ & \multicolumn{3}{|c|}{$T'<T$} \\
\hline Collapse to & \multicolumn{2}{|c|}{$S_{0}$} & \multicolumn{3}{|c|}{$S_{\infty}$} \\
\hline Blow-up rate & \multicolumn{5}{|c|}{Type I}\\
\hline \multicolumn{6}{c}{}\\ \multicolumn{6}{c}{Mean curvature flow}\\ \hline
Radius $r$ & $r<r_{1}$ & $r_{1}$ & $r_{1}<r<r_{2}$ & $r_{2}$ & $r_{2}<r$ \\
\hline Maximal time $T'$ & \multicolumn{3}{|c|}{$T'<\infty$} & $T'=\infty$ & $T'<\infty$ \\
\hline Collapse to & \multicolumn{3}{|c|}{$S_{0}$} & --- & $S_{\infty}$ \\
\hline Blow-up rate & \multicolumn{3}{|c|}{Type I} & --- & Type I \\ \hline \end{tabular} \end{table}
\section{Quick review of Cao's construction}\label{caoconst} The first example of a non-trivial compact gradient shrinking Ricci soliton was found by Koiso \cite{Koiso} and independently by Cao \cite{Cao}, and it is actually a K\"ahler Ricci soliton. In this section, we quickly review its construction following Section 4 in \cite{Cao} and also Section 7.2 in \cite{Chow}. Fix integers $n$, $k$ with $n\geq 2$ and $1\leq k \leq n-1$, and consider the $k$-twisted projective-line bundle \[\pi:\mathbb{P}(\mathcal{O}(0)\oplus \mathcal{O}(k))\to \mathbb{P}^{n-1}\] over $\mathbb{P}^{n-1}$, where $\mathcal{O}(k)$ denotes the $k$-th tensor power of the hyperplane bundle $\mathcal{O}(1)$ over $\mathbb{P}^{n-1}$. The gradient shrinking K\"ahler Ricci soliton is constructed on $\mathbb{P}(\mathcal{O}(0)\oplus \mathcal{O}(k))$. Let $(z^{1}:\cdots:z^{n})$ be the homogeneous coordinates on $\mathbb{P}^{n-1}$, and put $U_{j}:=\{\,z^{j}\neq 0\,\}\subset\mathbb{P}^{n-1}$ for $j=1,\dots ,n$. Then $\{U_{1},\dots,U_{n}\}$ gives an open covering of $\mathbb{P}^{n-1}$, and the transition functions of $\mathcal{O}(k)$ are given by \begin{align}\label{trans} y^{i}={\left(\frac{z^{i}}{z^{j}}\right)}^{k}y^{j} \end{align} over $U_{i}\cap U_{j}$, where $y^{i}\in\mathbb{C}$ is the standard coordinate of a fiber of $\mathcal{O}(k)$ over $U_{i}$. For a point $(z^{1},\dots,z^{n}) \in \mathbb{C}^{n}\setminus\{0\}$ with $z^{j}\neq 0$, we define \[\psi(z^{1},\dots,z^{n}):=((z^{1}:\cdots:z^{n}),(1:(z^{j})^{k}))\in U_{j}\times\mathbb{P}^1 \cong \pi^{-1}(U_{j}). \] This definition is compatible for $(z^{1},\dots,z^{n}) \in \mathbb{C}^{n}\setminus\{0\}$ with $z^{i}\neq 0$ and $z^{j}\neq 0$, since $(1:(z^{j})^{k})\in \mathbb{P}(\mathbb{C}\oplus\mathbb{C})$ in the fiber over $U_{j}$ is identified with $(1:(z^{i})^{k})\in \mathbb{P}(\mathbb{C}\oplus\mathbb{C})$ in the fiber over $U_{i}$ by the relation (\ref{trans}). Hence we have a smooth map
\[\psi:\mathbb{C}^{n}\setminus\{0\}\to \mathbb{P}(\mathcal{O}(0)\oplus \mathcal{O}(k)). \] It is clear that $\psi(z^{1},\dots,z^{n})=\psi(z'^{1},\dots,z'^{n})$ if and only if $z'=e^{2\pi i\frac{\ell}{k}} z$ for some $\ell\in\mathbb{Z}$. Hence $\psi$ induces an open dense embedding \[\hat{\psi}:(\mathbb{C}^{n}\setminus\{0\})/\mathbb{Z}_{k} \hookrightarrow \mathbb{P}(\mathcal{O}(0)\oplus \mathcal{O}(k)), \] where the $\mathbb{Z}_{k}$-action on $\mathbb{C}^{n}\setminus\{0\}$ is defined by $([\ell],z)\mapsto e^{2\pi i\frac{\ell}{k}} z$, and the complement of the image of $\hat{\psi}$ is $S_{0}\sqcup S_{\infty}$, where $S_{0}$ and $S_{\infty}$ denote the images of the $0$-section and the $\infty$-section of $\pi$, respectively.
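The $\mathbb{Z}_{k}$-invariance of $\psi$ is immediate from $(e^{2\pi i\ell/k}z^{j})^{k}=(z^{j})^{k}$; a short numerical confirmation (illustrative only, with arbitrary sample values):

```python
import cmath

# psi only sees the homogeneous coordinates and the k-th power (z^j)^k,
# and both are unchanged under z -> e^{2 pi i l/k} z (the homogeneous
# coordinates are all multiplied by the same scalar), so psi factors
# through (C^n \ {0}) / Z_k.  Sample values below are arbitrary.
k, l = 3, 2
zj = 0.7 + 0.4j
wj = cmath.exp(2j * cmath.pi * l / k) * zj
assert abs(wj ** k - zj ** k) < 1e-12
```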
From now on, we denote $\mathbb{P}(\mathcal{O}(0)\oplus \mathcal{O}(k))$ by $N_{k}^{n}$, and we identify $(\mathbb{C}^{n}\setminus\{0\})/\mathbb{Z}_{k}$ with its image under $\hat{\psi}$. Thus, we have an open dense subset \[(\mathbb{C}^{n}\setminus\{0\})/\mathbb{Z}_{k}\subset N_{k}^{n}. \] The K\"ahler Ricci soliton structure is constructed on $(\mathbb{C}^{n}\setminus\{0\})/\mathbb{Z}_{k}$, and it actually extends smoothly to $S_{0}$ and $S_{\infty}$. Let $u:\mathbb{R}\to\mathbb{R}$ be a smooth function which satisfies \begin{align}\label{udash} u'(s)>0\quad\mathrm{and}\quad u''(s)>0, \end{align} and has the following asymptotic expansions \begin{align}\label{uasy} \begin{aligned} u(s)=&(n-k)s+a_{1}e^{ks}+a_{2}e^{2ks}+\cdots \quad(s\to-\infty)\\ u(s)=&(n+k)s+b_{1}e^{-ks}+b_{2}e^{-2ks}+\cdots \quad(s\to\infty)\\ \end{aligned} \end{align} with $a_{1}>0$ and $b_{1}>0$. Define a $U(n)$-invariant smooth function $\Phi:\mathbb{C}^{n}\setminus\{0\}\to\mathbb{R}$ by
\[\Phi(z):=u(s)\quad\mathrm{with}\quad s=\log |z|^2. \] Since $\Phi$ is $\mathbb{Z}_{k}$-invariant, it induces a smooth function on $(\mathbb{C}^{n}\setminus\{0\})/\mathbb{Z}_{k}$, and we continue to denote it by $\Phi$. By the positivity conditions (\ref{udash}), we get a K\"ahler form \begin{align}\label{omega} \omega=\sqrt{-1}\frac{\partial^2 \Phi}{\partial z^{\alpha}\partial \bar{z}^{\beta}}dz^{\alpha}\wedge d\bar{z}^{\beta} \end{align} on $(\mathbb{C}^{n}\setminus\{0\})/\mathbb{Z}_{k}$, where $(z^{1},\dots,z^{n})$ are the ($k$-to-one) global holomorphic coordinates. By the asymptotic conditions (\ref{uasy}), the K\"ahler form $\omega$ extends smoothly to $S_{0}$ and $S_{\infty}$, and we get a global K\"ahler structure on $N_{k}^{n}$. The Ricci form of $\omega$ is \[\mathop{\mathrm{Ric}}(\omega)=-\sqrt{-1}\frac{\partial^2 \log\det(g)}{\partial z^{\alpha}\partial \bar{z}^{\beta}}dz^{\alpha}\wedge d\bar{z}^{\beta}, \] where $g=(g_{\alpha\bar{\beta}})$ is a matrix given by \begin{align}\label{g} g_{\alpha\bar{\beta}}=\frac{\partial^2 \Phi}{\partial z^{\alpha}\partial \bar{z}^{\beta}}=e^{-s}u'(s)\delta_{\alpha\bar{\beta}}+e^{-2s}\bar{z}^{\alpha}z^{\beta}(u''(s)-u'(s)) \end{align}
for $s=\log|z|^2$. One can easily check that \begin{align*} g^{\alpha\bar{\beta}}(z)=&\frac{e^{s}}{u'(s)}\delta^{\alpha\bar{\beta}}+z^{\alpha}\bar{z}^{\beta}\left(\frac{1}{u''(s)}-\frac{1}{u'(s)}\right), \\ \det(g(z))=&e^{-ns}(u'(s))^{n-1}u''(s). \end{align*} Define a real valued smooth function $P:\mathbb{R}\to\mathbb{R}$ by \begin{align}\label{f} \begin{aligned} P(s):=&\log\left(e^{-ns}(u'(s))^{n-1}u''(s)\right)+u(s)\\ =&-ns+(n-1)\log u'(s)+\log u''(s)+u(s), \end{aligned} \end{align} and a $U(n)$-invariant real valued smooth function $f:\mathbb{C}^{n}\setminus\{0\}\to\mathbb{R}$ by \begin{align}\label{F}
f(z):=P(s)=\log\det(g(z))+\Phi(z) \quad\mathrm{with}\quad s=\log |z|^2. \end{align} Since $f$ is $\mathbb{Z}_{k}$-invariant, it induces a smooth function on $(\mathbb{C}^{n}\setminus\{0\})/\mathbb{Z}_{k}$, and we continue to denote it by $f$. Then, we have \[\mathop{\mathrm{Ric}}(\omega)+\sqrt{-1}\partial\bar{\partial}f=\omega. \] This equation is just the (1,1)-part of the gradient shrinking Ricci soliton equation \begin{align}\label{ksoliton} \mathop{\mathrm{Ric}}+\mathop{\mathrm{Hess}}f=g, \end{align} where $g$ is the associated Riemannian metric of $\omega$ and $\mathop{\mathrm{Ric}}$ is the Ricci 2-tensor of $g$. Thus, the property that $f$ satisfies (\ref{ksoliton}) is equivalent to $\nabla f$ being a holomorphic vector field. The coefficient of $\partial/\partial z^{\alpha}$ of $\nabla f$ is given by \begin{align*} g^{\alpha\bar{\beta}}\frac{\partial f}{\partial{\bar{z}}^{\beta}}=g^{\alpha\bar{\beta}}\left(P'(s)e^{-s}z^{\beta}\right)=\frac{P'(s)}{u''(s)}z^{\alpha}. \end{align*} Hence, $\nabla f$ is holomorphic if and only if \begin{align}\label{c1} \frac{P'(s)}{u''(s)}=c \end{align} for some constant $c\in\mathbb{R}$. Substituting (\ref{f}) into (\ref{c1}), we have the following third order ODE: \begin{align}\label{ODE} \frac{u'''}{u''}+\left(\frac{n-1}{u'}-c\right)u''=n-u'. \end{align} Hence, we get a $U(n)$-invariant gradient shrinking K\"ahler Ricci soliton structure on $N^{n}_{k}$ when we find a solution $u$ of (\ref{ODE}) which satisfies conditions (\ref{udash}) and (\ref{uasy}) for some $c\in\mathbb{R}$. Then, Cao \cite{Cao} proved the following. \begin{theorem}[\cite{Cao}]\label{caothm} There exists one and only one pair $(u,c)$ such that $u$ and $c$ satisfy (\ref{udash}), (\ref{uasy}) and (\ref{ODE}). Additionally, it follows that $0<c<1$.
\end{theorem} Thus, there exists a unique $U(n)$-invariant gradient shrinking K\"ahler Ricci soliton structure on $N_{k}^{n}$, and K\"ahler form $\omega$ and potential function $f$ are written as (\ref{omega}) and (\ref{F}) on $(\mathbb{C}^{n}\setminus\{0\})/\mathbb{Z}_{k}$, respectively.
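The closed-form inverse $g^{\alpha\bar{\beta}}$ and the determinant formula above can be verified numerically at a sample point. In the Python sketch below, the test potential $u(s)=e^{s}+s^{2}$ is an arbitrary choice with $u'>0$ and $u''>0$ near the sample point; it does not solve the soliton ODE, and the check is purely algebraic.

```python
import math

# Spot-check, at a sample point z in C^2, of the formulas
#   g_{a b*} = e^{-s} u' delta + e^{-2s} zbar^a z^b (u'' - u'),
#   g^{a b*} = e^{s}/u' delta + z^a zbar^b (1/u'' - 1/u'),
#   det(g)   = e^{-ns} (u')^{n-1} u'',
# with s = log|z|^2.  The test potential u(s) = e^s + s^2 is an arbitrary
# choice with u' > 0, u'' > 0 here; the identities hold for any such u.
n = 2
z = [1.0 + 0.5j, -0.3 + 0.8j]
s = math.log(sum(abs(w) ** 2 for w in z))
es = math.exp(s)
up, upp = es + 2 * s, es + 2              # u'(s), u''(s)

def delta(a, b):
    return 1.0 if a == b else 0.0

g = [[up / es * delta(a, b)
      + z[a].conjugate() * z[b] / es ** 2 * (upp - up)
      for b in range(n)] for a in range(n)]
ginv = [[es / up * delta(a, b)
         + z[a] * z[b].conjugate() * (1 / upp - 1 / up)
         for b in range(n)] for a in range(n)]

# contraction over the barred index gives the identity matrix
for a in range(n):
    for b in range(n):
        entry = sum(g[a][k] * ginv[b][k] for k in range(n))
        assert abs(entry - delta(a, b)) < 1e-12

# determinant formula for n = 2
det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
assert abs(det - es ** (-n) * up ** (n - 1) * upp) < 1e-12
```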
\section{Lens spaces in $N^{n}_{k}$ as self-similar solutions}\label{lenssp} In this section we see that a lens space $L(k\,;1)(r)$ with radius $r$ is embedded in $(N_{k}^{n},\omega,f)$ as a self-similar solution, and that whether it is a self-shrinker or a self-expander is determined by its radius $r$. Actually, we prove that there exists a specific radius $r_{2}$ such that $L(k\,;1)(r)$ is a self-shrinker or a self-expander if $r<r_{2}$ or $r>r_{2}$, respectively.
Let $p$, $q_{1},\dots,q_{n}$ be integers such that each $q_{i}$ is coprime to $p$, and let $r$ be a positive constant. Then, the lens space $L(p\,;q_{1},\dots,q_{n})(r)$ with radius $r$ is the quotient of $S^{2n-1}(r) \subset \mathbb{C}^{n}$, the sphere with radius $r$, by the free $\mathbb{Z}_{p}$-action defined by \[[\ell]\cdot (z^{1},\dots,z^{n}):=(e^{2\pi i\ell\frac{q_{1}}{p}}z^{1},\dots,e^{2\pi i\ell\frac{q_{n}}{p}}z^{n}). \] We restrict ourselves to the case that the given integers $n$ and $k$ satisfy $n\geq 2$ and $1\leq k \leq n-1$. We write \[L(k\,;1)(r):=L(k\,;\overbrace{1,\dots,1}^{n})(r), \] for short. It is clear that $L(k\,;1)(r)$ is embedded in $(\mathbb{C}^{n}\setminus\{0\})/\mathbb{Z}_{k}$, and $U(n)$ acts on $L(k\,;1)(r)$ transitively, since the $\mathbb{Z}_{k}$-action defined by \[[\ell]\cdot (z^{1},\dots,z^{n}):=(e^{2\pi i\ell\frac{1}{k}}z^{1},\dots,e^{2\pi i\ell\frac{1}{k}}z^{n})\] and the $U(n)$-action commute.
Let $(N_{k}^{n},\omega)$ be the unique $U(n)$-invariant gradient shrinking K\"ahler Ricci soliton with potential function $f$ given in Theorem \ref{caothm}. As explained in Section \ref{caoconst}, we have an open dense subset \[(\mathbb{C}^{n}\setminus\{0\})/\mathbb{Z}_{k} \subset N_{k}^{n}. \] Via this identification, we embed $L(k\,;1)(r)$ into $N_{k}^{n}$, and denote its inclusion map by \[\iota_{r}:L(k\,;1)(r) \hookrightarrow N_{k}^{n}. \] Actually, $L(k\,;1)(r)$ is given as a level set of the potential function $f$. \begin{lemma} We have \[L(k\,;1)(r)=\{\,f=\gamma\,\}, \] where $\gamma:=P(\log r^2)$ and $P$ is given by (\ref{f}). \end{lemma} \begin{proof}
It is clear that $L(k\,;1)(r)$ is contained in $\{\,f=\gamma\,\}$ by the relation $f(z)=P(s)$ with $s=\log|z|^2$. To show the converse inclusion, it is sufficient to see that $P$ is strictly increasing. This is true since $P'>0$ by the equality (\ref{c1}), the positivity condition (\ref{udash}) and the fact that $0<c<1$ stated in Theorem \ref{caothm}. \end{proof}
Since $L(k\,;1)(r)$ is a level set of $f$, the second fundamental form $A$ and the mean curvature vector field $H$ of $\iota_{r}:L(k\,;1)(r) \hookrightarrow N_{k}^{n} $ are given by \begin{align}\label{AH}
A(\iota_{r})=-\frac{\nabla f}{|\nabla f|^2}\mathop{\mathrm{Hess}}f \quad\mathrm{and}\quad H(\iota_{r})=-\frac{\nabla f}{|\nabla f|^2}\mathop{\mathrm{tr}^{\top}}\mathop{\mathrm{Hess}}f, \end{align} where $\nabla f$ and $\mathop{\mathrm{Hess}}f$ are the gradient and the Hessian of $f$ with respect to the ambient Riemannian metric $g$, and $\mathrm{tr}^{\top}$ is the trace restricted to $T_{p}L(k\,;1)(r)$ at each point $p$ in $L(k\,;1)(r)$. Since $L(k\,;1)(r)$ and the K\"ahler structure on $N_{k}^{n}$ are invariant under the $U(n)$-action and it acts transitively on $L(k\,;1)(r)$, the function
\[-\frac{1}{|\nabla f|^2}\mathop{\mathrm{tr}^{\top}}(\mathop{\mathrm{Hess}}f)\] on $L(k\,;1)(r)$ is actually a constant, and we denote this constant by $\lambda(r)$. Thus, the embedding $\iota_{r}:L(k\,;1)(r) \hookrightarrow N_{k}^{n}$ is a self-similar solution with \[H(\iota_{r})=\lambda(r){\nabla f}^{\bot}. \] Here we used that $\nabla f$ is normal to $L(k\,;1)(r)$, that is, ${\nabla f}^{\bot}=\nabla f$. It remains to determine the sign of $\lambda(r)$. By the $U(n)$-invariance, it suffices to compute
\[-\frac{1}{|\nabla f|^2}\mathop{\mathrm{tr}^{\top}}(\mathop{\mathrm{Hess}}f)\] at a point $p=(r,0,\dots,0)$ in $L(k\,;1)(r)$. Put $s:=\log r^2$ and \[v_{1}:=\frac{e^{\frac{s}{2}}}{\sqrt{2u''(s)}}\frac{\partial}{\partial y^{1}}=\sqrt{-1}\frac{e^{\frac{s}{2}}}{\sqrt{2u''(s)}}\left(\frac{\partial}{\partial z^{1}}-\frac{\partial}{\partial \bar{z}^{1}}\right). \] Furthermore, put \begin{align*} w_{\alpha}:=&\frac{e^{\frac{s}{2}}}{\sqrt{2u'(s)}}\frac{\partial}{\partial x^{\alpha}}=\frac{e^{\frac{s}{2}}}{\sqrt{2u'(s)}}\left(\frac{\partial}{\partial z^{\alpha}}+\frac{\partial}{\partial \bar{z}^{\alpha}}\right)\\ Jw_{\alpha}:=&\frac{e^{\frac{s}{2}}}{\sqrt{2u'(s)}}\frac{\partial}{\partial y^{\alpha}}=\sqrt{-1}\frac{e^{\frac{s}{2}}}{\sqrt{2u'(s)}}\left(\frac{\partial}{\partial z^{\alpha}}-\frac{\partial}{\partial \bar{z}^{\alpha}}\right), \end{align*} for $\alpha=2,\dots,n$. Then, by (\ref{g}), one can check that $\{\, v_{1},w_{2},Jw_{2},\dots, w_{n},Jw_{n}\,\}$ is an orthonormal basis of $T_{p}L(k\,;1)(r)$ at $p=(r,0,\dots,0)$. Here, we have \begin{align}\label{hesses} \begin{aligned} \mathop{\mathrm{Hess}}f(v_{1},v_{1})=&\frac{e^{s}}{u''(s)}\frac{\partial^2 f}{\partial z^{1}\partial \bar{z}^{1}}(p)=\frac{P''(s)}{u''(s)}\\ \mathop{\mathrm{Hess}}f(w_{\alpha},w_{\alpha})=&\mathop{\mathrm{Hess}}f(Jw_{\alpha},Jw_{\alpha})= \frac{e^{s}}{u'(s)}\frac{\partial^2 f}{\partial z^{\alpha}\partial \bar{z}^{\alpha}}(p)=\frac{P'(s)}{u'(s)}. \end{aligned} \end{align} Thus, we have \begin{align*} \mathop{\mathrm{tr}^{\top}}\mathop{\mathrm{Hess}}f=&\mathop{\mathrm{Hess}}f(v_{1},v_{1}) +\sum_{\alpha=2}^{n}\mathop{\mathrm{Hess}}f(w_{\alpha},w_{\alpha})+\sum_{\alpha=2}^{n}\mathop{\mathrm{Hess}}f(Jw_{\alpha},Jw_{\alpha})\\ =&\frac{P''(s)}{u''(s)}+2(n-1)\frac{P'(s)}{u'(s)}.
\end{align*} By $P'=cu''$, we have \begin{align*} \frac{P''}{u''}+2(n-1)\frac{P'}{u'}=&c\frac{u'''}{u''}+2c(n-1)\frac{u''}{u'}\\ =&c\left( \left(\frac{u'''}{u''}+(n-1)\frac{u''}{u'} \right)+(n-1)\frac{u''}{u'} \right)\\ =&c\left(n-u'+cu''+(n-1)\frac{u''}{u'}\right), \end{align*} where we used the ODE (\ref{ODE}) in the last equality. Furthermore, it is clear that \begin{align}\label{nab}
\nabla f=ce^{\frac{s}{2}}\frac{\partial}{\partial x^{1}} \quad\mathrm{and}\quad |\nabla f|^2=2c^2u''(s) \end{align} at $p=(r,0,\dots,0)\in L(k\,;1)(r)$. Thus, we have \[\lambda(r)=\frac{-1}{2cu''(s)}\left( n-u'(s)+cu''(s)+(n-1)\frac{u''(s)}{u'(s)} \right), \] where $s=\log r^2$.
To capture the behavior of $\lambda(r)$, we need the following lemma. The radius $r_{2}$ in the statement (2) of the following lemma is needed to determine whether $L(k\,;1)(r)$ is a self-shrinker or a self-expander, and $r_{1}$ determines whether the Ricci-mean curvature flow of $L(k\,;1)(r)$ converges to $S_{0}$ or $S_{\infty}$. \begin{lemma}\label{2radii} \indent \begin{enumerate} \item It holds that \begin{align*} &\lambda(r)\to -\infty \quad \mathrm{and} \quad \lambda(r)=\mathcal{O}(r^{-2k}) \quad \mathrm{as} \quad r\to 0, \\ &\lambda(r)\to \infty \quad \mathrm{and} \quad \lambda(r)=\mathcal{O}(r^{2k}) \quad \mathrm{as} \quad r\to \infty. \end{align*} \item There exists a unique pair of radii $r_{1}<r_{2}$ which satisfies the following. \begin{itemize} \item $\lambda(r)\in (-\infty,-1)$ for $r\in(0,r_{1})$. \item $\lambda(r_{1})=-1$. \item $\lambda(r)\in(-1,0)$ for $r\in(r_{1},r_{2})$. \item $\lambda(r_{2})=0$. \item $\lambda(r)\in(0,\infty)$ for $r\in (r_{2},\infty)$. \end{itemize} \end{enumerate} \end{lemma} \begin{proof} By the asymptotic conditions (\ref{uasy}), we have \begin{align*} &n-u'(s)+cu''(s)+(n-1)\frac{u''(s)}{u'(s)} \to k \quad (s\to -\infty)\\ &n-u'(s)+cu''(s)+(n-1)\frac{u''(s)}{u'(s)} \to -k \quad (s\to \infty), \end{align*} and also have \begin{align*} &u''(s) \to 0 \quad \mathrm{and}\quad u''(s)=\mathcal{O}(e^{ks}) \quad (s\to -\infty)\\ &u''(s) \to 0 \quad \mathrm{and}\quad u''(s)=\mathcal{O}(e^{-ks}) \quad (s\to \infty). \end{align*} Thus, we have proved the statement (1).
To prove the statement (2), we show that the derivative of $\lambda(r)$ is positive at every $r$ such that $\lambda(r)=-1$ or $\lambda(r)=0$. Combined with the statement (1), this immediately implies that $\lambda(r)$ takes each of the values $-1$ and $0$ exactly once.
Define $\Lambda(s)$ by \[\Lambda(s):=\lambda(r)=\frac{-1}{2cu''(s)}\left( n-u'(s)+cu''(s)+(n-1)\frac{u''(s)}{u'(s)} \right)\] with $s=\log r^2$. Then, we have \[\frac{d}{d r}\lambda(r)=2e^{-\frac{s}{2}}\frac{d}{d s}\Lambda(s). \] Hence, the positivity of $d\lambda/dr$ is equivalent to the positivity of $d\Lambda/ds$. By a straightforward computation, we have \begin{align}\label{eq31} \begin{aligned} \frac{d}{d s}\Lambda(s)=&\frac{-1}{2cu''(s)}\biggl(-u''(s)+cu'''(s)\\ &+(n-1)\frac{u'''(s)}{u'(s)}-(n-1)\frac{(u''(s))^2}{(u'(s))^2}\biggr)-\Lambda(s)\frac{u'''(s)}{u''(s)}\\ =&\frac{1}{2c}+\frac{(n-1)u''(s)}{2c(u'(s))^2}\\ &+\frac{1}{2cu'(s)}\biggl(-c\Bigl(1+2\Lambda(s)\Bigr)u'(s)-(n-1)\biggr)\frac{u'''(s)}{u''(s)}. \end{aligned} \end{align} By ODE (\ref{ODE}), we have \begin{align}\label{eq32} \begin{aligned} \frac{u'''(s)}{u''(s)}=&n-u'(s)-\left(\frac{n-1}{u'(s)}-c\right)u''(s)\\ =&\left(n-u'(s)+cu''(s)+(n-1)\frac{u''(s)}{u'(s)}\right)-2(n-1)\frac{u''(s)}{u'(s)}\\ =&-2cu''(s)\Lambda(s)-2(n-1)\frac{u''(s)}{u'(s)}\\ =&2\Bigl(-c\Lambda(s) u'(s)-(n-1)\Bigr)\frac{u''(s)}{u'(s)}. \end{aligned} \end{align} Substituting (\ref{eq32}) into (\ref{eq31}), we have \begin{align*} \frac{d}{d s}\Lambda(s)=&\frac{1}{2c}+\frac{(n-1)u''(s)}{2c(u'(s))^2}\\ &+\frac{1}{c}\biggl(-c\Bigl(1+2\Lambda(s)\Bigr)u'(s)-(n-1)\biggr)\biggl(-c\Lambda(s) u'(s)-(n-1)\biggr)\frac{u''(s)}{(u'(s))^2}. \end{align*} We remark that since $c, u''>0$, \[\frac{1}{2c}+\frac{(n-1)u''(s)}{2c(u'(s))^2}>0. \] Since $\Lambda(s)\to -\infty$ as $s\to -\infty$ and $\Lambda(s)\to \infty$ as $s\to \infty$, there exists an $s\in\mathbb{R}$ such that $\Lambda(s)=0$, and for such $s$ we have \begin{align*} &\frac{1}{c}\biggl(-c\Bigl(1+2\Lambda(s)\Bigr)u'(s)-(n-1)\biggr)\biggl(-c\Lambda(s) u'(s)-(n-1)\biggr)\frac{u''(s)}{(u'(s))^2}\\ =&\frac{n-1}{c}\Bigl(cu'(s)+(n-1)\Bigr)\frac{u''(s)}{(u'(s))^2}>0, \end{align*} since $c, u',u''>0$. 
Thus, we have proved that \[\frac{d}{ds}\Lambda(s)>\frac{1}{2c}+\frac{(n-1)u''(s)}{2c(u'(s))^2}>0\] for $s$ such that $\Lambda(s)=0$. Similarly, there exists another $s\in\mathbb{R}$ such that $\Lambda(s)=-1$, and for such $s$ we have \begin{align*} &\frac{1}{c}\biggl(-c\Bigl(1+2\Lambda(s)\Bigr)u'(s)-(n-1)\biggr)\biggl(-c\Lambda(s) u'(s)-(n-1)\biggr)\frac{u''(s)}{(u'(s))^2}\\ =&\frac{1}{c}\Bigl(cu'(s)-(n-1)\Bigr)^2\frac{u''(s)}{(u'(s))^2}\geq 0 \end{align*} since $c, u''>0$. Thus, we have proved that \[\frac{d}{ds}\Lambda(s)\geq \frac{1}{2c}+\frac{(n-1)u''(s)}{2c(u'(s))^2}>0\] for $s$ such that $\Lambda(s)=-1$. Consequently, we have proved that \[\frac{d}{dr}\lambda(r)>0\] for $r$ such that $\lambda(r)=0$ or $\lambda(r)=-1$. By this property and the statement (1), the statement (2) follows. \end{proof}
By Lemma \ref{2radii}, we obtain the following theorem, which restates Theorem \ref{sumofmain1}.
\begin{theorem}\label{mainthm} For every $0<r<\infty$, the embedding \[\iota_{r}:L(k\,;1)(r) \hookrightarrow N_{k}^{n}\] is a compact self-similar solution with \[H(\iota_{r})=\lambda(r)(\nabla f)^{\bot}, \] and there exists a unique radius $r_{2}$ such that $\iota_{r}:L(k\,;1)(r) \hookrightarrow N_{k}^{n}$ is a non-minimal self-shrinker, a minimal submanifold, or a non-minimal self-expander when $r<r_{2}$, $r=r_{2}$ or $r_{2}<r$, respectively. \end{theorem}
For use in the following sections, we compute here the norm of $A(\iota_{r})$. It is easy to see that $A(\iota_{r})$ is diagonalized by the orthonormal basis $\{\, v_{1},w_{2},Jw_{2},\dots, w_{n},Jw_{n} \,\}$. Hence, we have
\[|A(\iota_{r})|^{2}=|A(v_{1},v_{1})|^2+\sum_{\alpha=2}^{n}|A(w_{\alpha},w_{\alpha})|^2+\sum_{\alpha=2}^{n}|A(Jw_{\alpha},Jw_{\alpha})|^2. \] By (\ref{AH}), (\ref{hesses}) and (\ref{nab}), with $s=\log r^2$, we have \begin{align*}
|A(\iota_{r})|^{2}=\frac{1}{2c^{2}u''(s)}\left(\left(\frac{P''(s)}{u''(s)}\right)^2+2(n-1)\left(\frac{P'(s)}{u'(s)}\right)^2\right). \end{align*} By $P'=cu''$ and ODE (\ref{ODE}), we have \begin{align*}
|A(\iota_{r})|^{2}=\frac{1}{2u''(s)}\left(\left(n-u'(s)-\left(\frac{n-1}{u'(s)}-c\right)u''(s)\right)^2+2(n-1)\left(\frac{u''(s)}{u'(s)}\right)^2\right). \end{align*} By an argument similar to the proof of the statement (1) of Lemma \ref{2radii}, we can prove the following. \begin{lemma}\label{rateofA} It holds that \begin{align*}
&|A(\iota_{r})|^{2}\to \infty \quad \mathrm{and} \quad |A(\iota_{r})|^{2}=\mathcal{O}(r^{-2k}) \quad \mathrm{as}\quad r\to 0, \\
&|A(\iota_{r})|^{2}\to \infty \quad \mathrm{and} \quad |A(\iota_{r})|^{2}=\mathcal{O}(r^{2k}) \quad \mathrm{as}\quad r\to \infty. \end{align*} \end{lemma}
\section{The motion by Ricci-mean curvature flow of a lens space in $N^{n}_{k}$}\label{motion} In this section we observe how a lens space $L(k\,;1)(r)$ in $N^{n}_{k}$ moves by Ricci-mean curvature flow and what it converges to. Continuing from Section \ref{lenssp}, let $(N_{k}^{n},\omega)$ be the unique $U(n)$-invariant gradient shrinking K\"ahler Ricci soliton with potential function $f$ given in Theorem \ref{caothm}. Then we have \[\nabla f=cr\frac{\partial}{\partial r}, \]
where $r=|z|$. Fix $T\in (0,\infty)$. One can easily see that \[\Phi_{t}:(\mathbb{C}^{n}\setminus\{0\})/\mathbb{Z}_{k}\to (\mathbb{C}^{n}\setminus\{0\})/\mathbb{Z}_{k}\] defined by \[\Phi_{t}(z):=\kappa(t)z\quad\mathrm{with}\quad\kappa(t):=\left(\frac{T}{T-t}\right)^{\frac{c}{2}}\] for $t\in [0,T)$ is the 1-parameter family of automorphisms of $N_{k}^{n}$ generated by $\frac{1}{2(T-t)}\nabla f$ with $\Phi_{0}=\mathrm{id}$. Then, it follows that $g_{t}:=2(T-t)\Phi_{t}^{*}g$ satisfies the Ricci flow equation: \[\frac{\partial}{\partial t}g_{t}=-2\mathrm{Ric}(g_{t}). \] Fix $r\in(0,\infty)$ and let $\iota_{r}:L(k\,;1)(r) \hookrightarrow N_{k}^{n}$ be a lens space with radius $r$ and \[F_{t}:L(k\,;1)(r)\to N_{k}^{n}\quad (t\in[0,T'))\] be the solution of Ricci-mean curvature flow along $g_{t}=2(T-t)\Phi_{t}^{*}g$ with initial condition $F_{0}=\iota_{r}$. We assume that $T'(=T'(r))$ is the maximal time of existence of the solution. By the rotational symmetry of $L(k\,;1)(r)\subset N_{k}^{n}$, the solution $F_{t}$ is written as \[F_{t}(p):=h(t)p, \] for some positive smooth function $h:[0,T')\to \mathbb{R}$. Then the Ricci-mean curvature flow equation for $F_{t}$ is reduced to an ODE for $h(t)$. \begin{proposition} The 1-parameter family of immersions $F_{t}$ is the solution of the Ricci-mean curvature flow coupled with $g_{t}$ with initial condition $F_{0}=\iota_{r}$ if and only if the positive smooth function $h:[0,T')\to \mathbb{R}$ satisfies the following ODE with initial condition: \begin{align}\label{ODEradi} \begin{aligned}
&h(0)=1\\ &\frac{h'(t)}{h(t)}=\frac{c}{2(T-t)}\lambda(\kappa(t)h(t)r), \end{aligned} \end{align} where $\lambda$ and $\kappa$ are given functions. \end{proposition} \begin{proof} Recall that the Ricci-mean curvature flow equation is \[\frac{\partial}{\partial t}F_{t}=H_{g_{t}}(F_{t}), \] where $H_{g_{t}}(F_{t})$ is the mean curvature vector field of $F_{t}$ computed with the Riemannian metric $g_{t}=2(T-t)\Phi_{t}^{*}g$. It is easy to see that \begin{align*} H_{g_{t}}(F_{t})=\frac{1}{2(T-t)}H(\iota_{\kappa(t)h(t)r})=\frac{c}{2(T-t)}\lambda(\kappa(t)h(t)r)\left(r\frac{\partial}{\partial r}\right), \end{align*} where $H$ without subscript $g_{t}$ denotes the mean curvature vector field with respect to the original ambient metric $g$. Since \[\frac{\partial}{\partial t}F_{t}=\frac{h'(t)}{h(t)}\left(r\frac{\partial}{\partial r}\right), \] the proposition is proved. \end{proof}
Put \begin{align}\label{Rh} R(t):=\kappa(t)h(t)r=\left(\frac{T}{T-t}\right)^{\frac{{c}}{2}}h(t)r. \end{align} Then, we have \[\frac{R'(t)}{R(t)}=\frac{c}{2(T-t)}+\frac{h'(t)}{h(t)}. \] Thus, we have the following.
\begin{lemma} The ODE (\ref{ODEradi}) for $h(t)$ with initial condition is equivalent to the following ODE for $R(t)$ with initial condition: \begin{align}\label{ODEradi2} \begin{aligned}
&R(0)=r\\ &\frac{R'(t)}{R(t)}=\frac{c}{2(T-t)}\Bigl(\lambda(R(t))+1\Bigr). \end{aligned} \end{align} \end{lemma}
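The equivalence in the lemma above is immediate from substituting the ODE (\ref{ODEradi}) for $h$ into the logarithmic derivative of $R$; for completeness:

```latex
\frac{R'(t)}{R(t)}
  =\frac{c}{2(T-t)}+\frac{h'(t)}{h(t)}
  =\frac{c}{2(T-t)}+\frac{c}{2(T-t)}\lambda(\kappa(t)h(t)r)
  =\frac{c}{2(T-t)}\Bigl(\lambda(R(t))+1\Bigr),
```

using $R(t)=\kappa(t)h(t)r$ in the last step.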
Therefore, the analysis of the Ricci-mean curvature flow emanating from $L(k\,;1)(r)$ reduces to the analysis of $R(t)$. Let $r_{1}$ be the specific radius introduced in Lemma \ref{2radii}. Then, we have the following.
\begin{lemma}\label{mainlemma} \indent \begin{enumerate} \item If $r<r_{1}$, then $T'<T$ and the solution $R(t)$ of (\ref{ODEradi2}) satisfies $R(t)\to 0$ and $h(t)\to 0$ as $t\to T'$. Furthermore, we have \[(R(t))^{-2k}=\mathcal{O}\left(\frac{1}{T'-t}\right) \quad \mathrm{as} \quad t\to T'. \] \item If $r=r_{1}$, then $T'=T$ and $R(t)=r_{1}$ is the stationary solution of (\ref{ODEradi2}) and $h(t)=\kappa^{-1}(t)\to 0$ as $t \to T$. \item If $r_{1}<r$, then $T'<T$ and the solution $R(t)$ of (\ref{ODEradi2}) satisfies $R(t)\to \infty$ and $h(t)\to \infty$ as $t\to T'$. Furthermore, we have \[(R(t))^{2k}=\mathcal{O}\left(\frac{1}{T'-t}\right) \quad \mathrm{as} \quad t\to T'. \] \end{enumerate} \end{lemma} \begin{proof} The proof is done by a standard argument for the bifurcation phenomenon of an ODE. First, we prove the statement (1). Assume that $r<r_{1}$. In this case, by Lemma \ref{2radii}, there exists a constant $\alpha=\alpha(r)<-1$ such that $\lambda(\rho)\leq \alpha$ for all $\rho\in (0,r]$. At $t=0$, we have $R'(0)<0$ by ODE (\ref{ODEradi2}). If there exists some $t_{0}\in (0,T')$ such that $R(t_{0})=r$, it follows that \[R'(t_{0})=\frac{c}{2(T-t_{0})}\Bigl(\lambda(r)+1\Bigr)r< 0. \] This means that $R(t)\in(0,r]$ for all $t\in [0,T')$ and $R(t)$ is monotonically decreasing. By ODE (\ref{ODEradi2}), we have \begin{align}\label{int0} \frac{R'(t)}{R(t)\Bigl(\lambda(R(t))+1\Bigr)}=\frac{c}{2(T-t)}, \end{align} and integrating both sides from $t=0$ to $t=T'-0$ we have \begin{align}\label{intint} \int_{r}^{R(T'-0)}\frac{1}{R\left(\lambda(R)+1\right)}dR=\int_{0}^{T'-0}\frac{c}{2(T-t)}dt. \end{align} By (1) of Lemma \ref{2radii}, we have \begin{align}\label{order0} \frac{1}{R\left(\lambda(R)+1\right)}=\mathcal{O}(R^{2k-1})\quad\mathrm{as}\quad R\to 0. \end{align} Thus, the left hand side of (\ref{intint}) is integrable, and we have proved that $T'<T$. 
If $\lim_{t\to T'}R(t)>0$ then $R'(t)$ is bounded as $t\to T'$ by ODE (\ref{ODEradi2}), and this contradicts that $T'$ is the maximal time of existence of the solution. Thus, it holds that $R(t)\to 0$ as $t\to T'$. Integrating both sides of (\ref{int0}) from $t$ to $T'$ and combining the estimate of order (\ref{order0}) and $R(t)\to 0$ as $t\to T'$, we have \[C(R(t))^{2k}\geq \int_{t}^{T'}\frac{c}{2(T-t)}dt\geq \frac{c}{2T}(T'-t)\] with some constant $C>0$ and $t$ sufficiently close to $T'$. Thus, we have \[(R(t))^{-2k}=\mathcal{O}\left(\frac{1}{T'-t}\right) \quad \mathrm{as} \quad t\to T'. \] Since $R(t)=\kappa(t)h(t)r$, $R(t)\to 0$ as $t\to T'$ and $\kappa(t)$ is bounded on $[0,T')$, it holds that $h(t)\to 0$ as $t\to T'$. Hence, we have proved the statement (1).
The statement (2) is clear since $\lambda(r_{1})+1=0$.
Finally, we prove the statement (3). The argument is very similar to the proof of the statement (1). Assume that $r_{1}<r$. In this case, by Lemma \ref{2radii}, there exists a constant $\alpha=\alpha(r)>-1$ such that $\lambda(\rho)\geq \alpha$ for all $\rho\in [r,\infty)$. At $t=0$, we have $R'(0)>0$ by ODE (\ref{ODEradi2}). If there exists some $t_{0}\in (0,T')$ such that $R(t_{0})=r$, it follows that \[R'(t_{0})=\frac{c}{2(T-t_{0})}\Bigl(\lambda(r)+1\Bigr)r>0. \] This means that $R(t)\in[r,\infty)$ for all $t\in [0,T')$ and $R(t)$ is monotonically increasing. By (1) of Lemma \ref{2radii}, we have \begin{align}\label{orderinfty} \frac{1}{R\left(\lambda(R)+1\right)}=\mathcal{O}(R^{-2k-1})\quad\mathrm{as}\quad R\to \infty. \end{align} Thus, the left hand side of (\ref{intint}) is integrable, and we have proved that $T'<T$. If $\lim_{t\to T'}R(t)<\infty$ then $R'(t)$ is bounded as $t\to T'$ by ODE (\ref{ODEradi2}), and this contradicts that $T'$ is the maximal time of existence of the solution. Thus, it holds that $R(t)\to \infty$ as $t\to T'$. Integrating both sides of (\ref{int0}) from $t$ to $T'$ and combining the order estimate (\ref{orderinfty}) and $R(t)\to \infty$ as $t\to T'$, we have \[C(R(t))^{-2k}\geq \int_{t}^{T'}\frac{c}{2(T-t)}dt\geq\frac{c}{2T}(T'-t)\] with some constant $C>0$ and $t$ sufficiently close to $T'$. Thus, we have \[(R(t))^{2k}=\mathcal{O}\left(\frac{1}{T'-t}\right) \quad \mathrm{as} \quad t\to T'. \] Since $R(t)=\kappa(t)h(t)r$, $R(t)\to \infty$ as $t\to T'$ and $\kappa(t)$ is bounded on $[0,T')$, it holds that $h(t)\to \infty$ as $t\to T'$. Hence, we have proved the statement (3). \end{proof}
\begin{remark}\label{T} In the case $r<r_{1}$, by integrating both sides of (\ref{int0}) from $t=0$ to $t=T'$ and straightforward computation, the maximal time $T'(=T'(r))$ is explicitly given as \[T'=T-T\exp\left( \frac{2}{c}\int_{0}^{r}\frac{1}{R\left(\lambda(R)+1\right)}dR \right). \] Similarly, in the case $r_{1}<r$, the maximal time $T'(=T'(r))$ is explicitly given as \[T'=T-T\exp\left( \frac{-2}{c}\int_{r}^{\infty}\frac{1}{R\left(\lambda(R)+1\right)}dR \right). \] \end{remark}
By Lemma \ref{mainlemma}, we can prove the following theorem. This contains Theorem \ref{sumofmain2}.
\begin{theorem}\label{mainRM} \indent \begin{enumerate} \item If $r<r_{1}$, then $T'<T$ and $F_{t}:L(k\,;1)(r)\to N_{k}^{n}$ converges pointwise to $S_{0}$ as $t\to T'$.
Furthermore, we have $|A_{g_{t}}(F_{t})|_{g_{t}}^2\to \infty$ as $t\to T'$ and there exists some constant $C>0$ such that
\[|A_{g_{t}}(F_{t})|_{g_{t}}^2\leq \frac{C}{T'-t}\quad \mathrm{on} \quad [0,T'). \] \item If $r=r_{1}$, then $T'=T$ and $F_{t}:L(k\,;1)(r)\to N_{k}^{n}$ is given by $F_{t}(p)=\kappa^{-1}(t)p$ and converges pointwise to $S_{0}$ as $t\to T$.
Furthermore, we have $|A_{g_{t}}(F_{t})|_{g_{t}}^2\to \infty$ as $t\to T$ and there exists some constant $C>0$ such that
\[|A_{g_{t}}(F_{t})|_{g_{t}}^2=\frac{C}{T'-t}\quad \mathrm{on} \quad [0,T'). \] \item If $r_{1}<r$, then $T'<T$ and $F_{t}:L(k\,;1)(r)\to N_{k}^{n}$ converges pointwise to $S_{\infty}$ as $t\to T'$.
Furthermore we have $|A_{g_{t}}(F_{t})|_{g_{t}}^2\to \infty$ as $t\to T'$ and there exists some constant $C>0$ such that
\[|A_{g_{t}}(F_{t})|_{g_{t}}^2\leq\frac{C}{T'-t}\quad \mathrm{on} \quad [0,T'). \] \end{enumerate} \end{theorem} \begin{proof} It is easy to see that \begin{align}\label{AR}
|A_{g_{t}}(F_{t})|_{g_{t}}^2=\frac{1}{2(T-t)}|A(\iota_{\kappa(t)h(t)r})|^2=\frac{1}{2(T-t)}|A(\iota_{R(t)})|^2, \end{align}
where $|A_{g_{t}}|_{g_{t}}$ and $|A|$ are the norms of the second fundamental form with respect to the ambient metrics $g_{t}$ and $g$, respectively. In the case (1), we have $h(t)\to 0$ as $t\to T'$ by Lemma \ref{mainlemma}. Let $p=(z^{1},\dots,z^{n})$ be a point in $ L(k\,;1)(r) \subset (\mathbb{C}^{n}\setminus\{0\})/\mathbb{Z}_{k}$ and assume that $z^{j}\neq 0$ for some $j$. Then, $F_{t}(p)=h(t)p$ is identified with \[((h(t)z^{1}:\dots:h(t)z^{n}),(1:(h(t)z^{j})^{k}))=((z^{1}:\dots:z^{n}),(1:(h(t)z^{j})^{k}))\] in $U_{j}\times\mathbb{P}^{1}$ via $\psi$. Hence, it is clear that $F_{t}:L(k\,;1)(r)\to N_{k}^{n}$ converges pointwise to $S_{0}$, the 0-section, as $t\to T'$. By the formula (\ref{AR}), Lemma \ref{rateofA} and Lemma \ref{mainlemma}, the rest of the statement (1) is clear. The proof of (3) is similar.
In the case (2), by (\ref{AR}) with $R(t)\equiv r_{1}$ and $T'=T$, it is enough to put $C:=\frac{1}{2}|A(\iota_{r_{1}})|^2$. \end{proof}
\begin{remark} Consider the standard Hopf fibration $S^{2n-1}(r)\to \mathbb{P}^{n-1}$. When $r\to 0$, the total space $S^{2n-1}(r)$ collapses to $\mathbb{P}^{n-1}$, and this collapsing is caused precisely by the degeneration of the $S^{1}$-fibers. From this viewpoint, the collapsing of $L(k\,;1)(r)$ to $S_{0}$ or $S_{\infty}$ (both diffeomorphic to $\mathbb{P}^{n-1}$) can be considered as a $\mathbb{Z}_{k}$-quotient analog of the collapsing of the Hopf fibration. \end{remark}
\section{The motion by mean curvature flow of a lens space in $N^{n}_{k}$}\label{motionh} In this section, we observe how a lens space in $N^{n}_{k}$ moves by mean curvature flow. As in Section \ref{motion}, let $(N_{k}^{n},\omega)$ be the unique $U(n)$-invariant gradient shrinking K\"ahler Ricci soliton with potential function $f$ given in Theorem \ref{caothm}, and for a given $r>0$ let $\iota_{r}:L(k\,;1)(r) \hookrightarrow N_{k}^{n}$ be a lens space with radius $r$. Then, by rotational symmetry, the solution \[F_{t}:L(k\,;1)(r) \to N_{k}^{n} \] of the mean curvature flow equation \[\frac{\partial}{\partial t}F_{t}=H(F_{t})\] is given by \[F_{t}(p)=h(t)p\] with some positive smooth function $h:[0,T')\to\mathbb{R}$ which satisfies the following ODE with initial condition: \begin{align}\label{ODEradih} \begin{aligned}
&h(0)=1\\ &\frac{h'(t)}{h(t)}=c\lambda(h(t)r). \end{aligned} \end{align} We remark that this is an autonomous differential equation. We assume that $T'(=T'(r))$ is the maximal time of existence of the solution. Let $r_{2}$ be the specific radius introduced in Lemma \ref{2radii}. Then, by arguments similar to those in the proof of Lemma \ref{mainlemma}, one can prove the following. \begin{lemma}\label{mainlemma2} \indent \begin{enumerate} \item If $r<r_{2}$, then $T'<\infty$ and the solution $h(t)$ of (\ref{ODEradih}) satisfies \[h(t)\to 0 \quad \mathrm{and} \quad (h(t))^{-2k}=\mathcal{O}\left(\frac{1}{T'-t}\right) \quad \mathrm{as} \quad t\to T'. \] \item If $r=r_{2}$, then $T'=\infty$ and $h(t)=1$ is the stationary solution of (\ref{ODEradih}). \item If $r_{2}<r$, then $T'<\infty$ and the solution $h(t)$ of (\ref{ODEradih}) satisfies \[h(t)\to \infty \quad \mathrm{and} \quad (h(t))^{2k}=\mathcal{O}\left(\frac{1}{T'-t}\right) \quad \mathrm{as} \quad t\to T'. \] \end{enumerate} \end{lemma} Furthermore, as in the proof of Theorem \ref{mainRM}, combining Lemma \ref{rateofA} and Lemma \ref{mainlemma2}, we can prove the following. \begin{theorem}\label{mainM} \indent \begin{enumerate} \item If $r<r_{2}$, then $T'<\infty$ and $F_{t}:L(k\,;1)(r)\to N_{k}^{n}$ converges pointwise to $S_{0}$ as $t\to T'$.
Furthermore, we have $|A(F_{t})|^2\to \infty$ as $t\to T'$ and there exists some constant $C>0$ such that
\[|A(F_{t})|^2\leq \frac{C}{T'-t}\quad \mathrm{on} \quad [0,T'), \] that is, $F_{t}$ develops singularities of type I. \item If $r=r_{2}$, then $F_{t}\equiv F_{0}:L(k\,;1)(r)\to N_{k}^{n}$ $(t\in[0,\infty))$ is the stationary solution of the mean curvature flow since $F_{0}$ is a minimal immersion. \item If $r_{2}<r$, then $T'<\infty$ and $F_{t}:L(k\,;1)(r)\to N_{k}^{n}$ converges pointwise to $S_{\infty}$ as $t\to T'$.
Furthermore we have $|A(F_{t})|^2\to \infty$ as $t\to T'$ and there exists some constant $C>0$ such that
\[|A(F_{t})|^2\leq\frac{C}{T'-t}\quad \mathrm{on} \quad [0,T'), \] that is, $F_{t}$ develops singularities of type I. \end{enumerate} \end{theorem}
\begin{remark} As in Remark \ref{T}, the maximal time $T'(=T'(r))$ is explicitly given as follows. When $r<r_{2}$, \[T'=\frac{-1}{c}\int_{0}^{r}\frac{1}{R\lambda(R)}dR, \] and when $r_{2}<r$, \[T'=\frac{1}{c}\int_{r}^{\infty}\frac{1}{R\lambda(R)}dR. \] \end{remark}
\end{document} |
\begin{document}
\title[K\"ahler manifolds and secondary curvature operator]{K\"ahler manifolds and the curvature operator of the second kind}
\author{Xiaolong Li}\thanks{The author's research is partially supported by Simons Collaboration Grant \#962228 and a start-up grant at Wichita State University} \address{Department of Mathematics, Statistics and Physics, Wichita State University, Wichita, KS, 67260} \email{[email protected]}
\subjclass[2020]{53C55, 53C21}
\keywords{Curvature operator of the second kind, orthogonal bisectional curvature, holomorphic sectional curvature, rigidity theorems}
\begin{abstract} This article aims to investigate the curvature operator of the second kind on K\"ahler manifolds. The first result states that an $m$-dimensional K\"ahler manifold with $\frac{3}{2}(m^2-1)$-nonnegative (respectively, $\frac{3}{2}(m^2-1)$-nonpositive) curvature operator of the second kind must have constant nonnegative (respectively, nonpositive) holomorphic sectional curvature. The second result asserts that a closed $m$-dimensional K\"ahler manifold with $\left(\frac{3m^3-m+2}{2m}\right)$-positive curvature operator of the second kind has positive orthogonal bisectional curvature, thus being biholomorphic to $\mathbb{CP}^m$. We also prove that $\left(\frac{3m^3+2m^2-3m-2}{2m}\right)$-positive curvature operator of the second kind implies positive orthogonal Ricci curvature. Our approach is pointwise and algebraic. \end{abstract}
\maketitle
\section{Introduction} The Riemann curvature tensor on a Riemannian manifold $(M^n, g)$ induces a self-adjoint operator $\overline{R}:S^2(T_pM) \to S^2(T_pM)$ via \begin{equation*}
\overline{R}(\varphi)_{ij}=\sum_{k,l=1}^n R_{iklj}\varphi_{kl}, \end{equation*} where $S^2(T_pM)$ is the space of symmetric two-tensors on the tangent space $T_pM$. \textit{The curvature operator of the second kind}, denoted by $\mathring{R}$ throughout this article, refers to the symmetric bilinear form $$\mathring{R}:S^2_0(T_pM)\times S^2_0(T_pM) \to \mathbb{R}$$ obtained by restricting $\overline{R}$ to $S^2_0(T_pM)$, the space of traceless symmetric two-tensors. See \cite{CGT21} or \cite{Li21} for a detailed discussion. This terminology is due to Nishikawa \cite{Nishikawa86}, who conjectured in 1986 that a closed Riemannian manifold with positive (respectively, nonnegative) curvature operator of the second kind is diffeomorphic to a spherical space form (respectively, Riemannian locally symmetric space).
Nishikawa's conjecture had remained open for more than three decades before its positive part was resolved by Cao, Gursky, and Tran \cite{CGT21} recently and its nonnegative part was settled by the author \cite{Li21} shortly after. The key observation in \cite{CGT21} is that two-positive curvature operator of the second kind implies the strictly PIC1 condition (i.e., $M\times \mathbb{R}$ has positive isotropic curvature). This is sufficient since earlier work of Brendle \cite{Brendle08} has shown that a solution to the normalized Ricci flow starting from a strictly PIC1 metric on a closed manifold exists for all time and converges to a metric of constant positive sectional curvature. Soon after that, the author \cite{Li21} proved that strictly PIC1 is implied by three-positivity of $\mathring{R}$, thus getting an immediate improvement to the result in \cite{CGT21}. Furthermore, the author was able to resolve the nonnegative case of Nishikawa's conjecture under three-nonnegativity of $\mathring{R}$. More recently, the conclusion has been strengthened by Nienhaus, Petersen, and Wink \cite{NPW22}, who ruled out irreducible compact symmetric spaces by proving that an $n$-dimensional closed Riemannian manifold with $\frac{n+2}{2}$-nonnegative $\mathring{R}$ is either flat or a rational homology sphere. Combining these works together, we have \begin{theorem}\label{thm 3 positive} Let $(M^n,g)$ be a closed Riemannian manifold of dimension $n\geq 3$. \begin{enumerate}
\item If $M$ has three-positive curvature operator of the second kind, then $M$ is diffeomorphic to a spherical space form.
\item If $M$ has three-nonnegative curvature operator of the second kind, then $M$ is either flat or diffeomorphic to a spherical space form. \end{enumerate} \end{theorem}
To prove sharper results, the author \cite{Li22JGA} introduced the notion of $\alpha$-positive curvature operator of the second kind for $\alpha \in [1,N]$, where $N=\dim(S^2_0(T_p M))$. Hereafter, $\lfloor x \rfloor$ denotes the floor function defined by \begin{equation*}
\lfloor x \rfloor := \max \{k \in \mathbb{Z}: k \leq x\}. \end{equation*} \begin{definition*} A Riemannian manifold $(M^n,g)$ is said to have $\alpha$-positive (respectively, $\alpha$-nonnegative) curvature operator of the second kind if for any $p\in M$ and any orthonormal basis $\{\varphi_i\}_{i=1}^N$ of $S^2_0(T_pM)$, it holds that \begin{equation}\label{alpha positive def}
\sum_{i=1}^{\lfloor \alpha \rfloor} \mathring{R}(\varphi_i,\varphi_i) +(\alpha -\lfloor \alpha \rfloor) \mathring{R}(\varphi_{\lfloor \alpha \rfloor+1},\varphi_{\lfloor \alpha \rfloor+1}) > (\text{respectively,} \geq) \ 0. \end{equation} Similarly, $(M^n,g)$ is said to have $\alpha$-negative (respectively, $\alpha$-nonpositive) curvature operator of the second kind if the reversed inequality holds. \end{definition*}
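Since $\mathring{R}$ is a symmetric bilinear form, the infimum of the left-hand side of (\ref{alpha positive def}) over all orthonormal bases of $S^2_0(T_pM)$ is attained at an eigenbasis (a Ky Fan-type minimum principle), so in concrete examples $\alpha$-positivity can be tested on the sorted eigenvalues of $\mathring{R}$. A minimal sketch of this test (the function name is ours):

```python
import math

def alpha_sum(eigenvalues, alpha):
    """Minimal value over orthonormal bases of the weighted sum in the
    alpha-positivity test: the floor(alpha) smallest eigenvalues plus the
    fractional part of alpha times the next smallest eigenvalue."""
    eigs = sorted(eigenvalues)
    k = math.floor(alpha)
    frac = alpha - k
    total = sum(eigs[:k])
    if frac > 0:
        total += frac * eigs[k]
    return total

# Eigenvalues of the curvature operator of the second kind on
# S^2 x S^1 are {-1/3, 0, 0, 1, 1} (Example 2.6 of [Li21]): the sum of
# the three smallest is -1/3, so 3-positivity fails, while the weighted
# sum vanishes exactly at alpha = 3 + 1/3.
```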
In \cite{Li22JGA}, the author proved that $\left(n+\frac{n-2}{n}\right)$-positive (respectively, $\left(n+\frac{n-2}{n}\right)$-nonnegative) curvature operator of the second kind implies positive (respectively, nonnegative) Ricci curvature in all dimensions. Combined with Hamilton's work \cite{Hamilton82, Hamilton86}, this immediately leads to an improvement of Theorem \ref{thm 3 positive} in dimension three: a closed three-manifold with $3\frac{1}{3}$-positive $\mathring{R}$ is diffeomorphic to a spherical space form and a closed three-manifold with $3\frac{1}{3}$-nonnegative $\mathring{R}$ is either flat, or diffeomorphic to a spherical space form, or diffeomorphic to a quotient of $\mathbb{S}^2 \times \mathbb{R}$. Note that the number $3\frac{1}{3}$ is optimal, as the eigenvalues of $\mathring{R}$ on $\mathbb{S}^2 \times \mathbb{S}^1$ are given by $\{-\frac{1}{3}, 0, 0, 1, 1\}$ (see \cite[Example 2.6]{Li21}).
Another result obtained in \cite{Li22JGA} states that $4\frac{1}{2}$-positive (respectively, $4\frac{1}{2}$-nonnegative) curvature operator of the second kind implies positive (respectively, nonnegative) isotropic curvature in dimensions four and above. This was previously shown in \cite{CGT21} under the stronger condition of four-positivity (respectively, four-nonnegativity) of $\mathring{R}$. In view of Micallef and Moore's work \cite{MM88}, one concludes that a closed Riemannian manifold of dimension $n\geq 4$ with $4\frac{1}{2}$-positive $\mathring{R}$ is homeomorphic to a spherical space form. Moreover, the ``homeomorphism'' can be upgraded to ``diffeomorphism'' if either $n=4$ or $n\geq 12$, using Hamilton's work \cite{Hamilton97} or that of Brendle \cite{Brendle19}, respectively. The number $4\frac{1}{2}$ is optimal in dimension four as both $\mathbb{CP}^2$ and $\mathbb{S}^3\times \mathbb{S}^1$ have $\alpha$-positive $\mathring{R}$ for any $\alpha>4\frac{1}{2}$. In addition, a rigidity result for closed manifolds with $4\frac{1}{2}$-nonnegative $\mathring{R}$ was also obtained in \cite[Theorem 1.4]{Li22JGA} (see also \cite[Theorem 1.4]{Li22product} for an improvement).
This article aims to investigate the curvature operator of the second kind on K\"ahler manifolds. Such study dates at least back to Bourguignon and Karcher \cite{BK78}, who computed in 1978 that the eigenvalues of $\mathring{R}$ on $(\mathbb{CP}^m,g_{FS})$, the complex projective space with the Fubini-Study metric normalized with constant holomorphic sectional curvatures $4$, are given by: $-2$ with multiplicity $m^2-1$ and $4$ with multiplicity $m(m+1)$. One immediately sees that $\mathbb{CP}^m$ has $\alpha$-positive $\mathring{R}$ for any $\alpha > \frac{3}{2}(m^2-1)$ but not for any $\alpha \leq \frac{3}{2}(m^2-1)$. As $\mathbb{CP}^m$ is often considered as the K\"ahler manifold with ``most positive curvature", one may speculate that there might not be any K\"ahler manifold with $\alpha$-positive $\mathring{R}$ for $\alpha \leq \frac{3}{2}(m^2-1)$. Our first main result confirms this speculation. \begin{theorem}\label{thm flat} Let $(M^m,g,J)$ be a K\"ahler manifold of complex dimension $m\geq 2$. \begin{enumerate}
\item If $M$ has $\alpha$-nonnegative (respectively, $\alpha$-nonpositive) curvature operator of the second kind for some $\alpha < \frac{3}{2}(m^2-1)$, then $M$ is flat.
\item If $M$ has $\frac{3}{2}(m^2-1)$-nonnegative (respectively, $\frac{3}{2}(m^2-1)$-nonpositive) curvature operator of the second kind, then $M$ has constant nonnegative (respectively, nonpositive) holomorphic sectional curvature. \end{enumerate} \end{theorem}
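As a consistency check on the threshold $\frac{3}{2}(m^2-1)$: from the Bourguignon--Karcher eigenvalues of $\mathring{R}$ on $(\mathbb{CP}^m,g_{FS})$ quoted above ($-2$ with multiplicity $m^2-1$ and $4$ with multiplicity $m(m+1)$), the weighted sum of the smallest eigenvalues vanishes exactly when $-2(m^2-1)+4\bigl(\alpha-(m^2-1)\bigr)=0$, that is, at $\alpha=\frac{3}{2}(m^2-1)$. A quick exact-arithmetic sketch (function name is ours):

```python
from fractions import Fraction

def cpm_critical_alpha(m):
    """Value of alpha at which the alpha-sum of the eigenvalues of the
    curvature operator of the second kind on CP^m crosses zero:
    solve -2*(m^2 - 1) + 4*(alpha - (m^2 - 1)) = 0 for alpha."""
    neg_mult = m * m - 1  # multiplicity of the eigenvalue -2
    return neg_mult + Fraction(2 * neg_mult, 4)

for m in range(2, 8):
    # the critical alpha equals (3/2)(m^2 - 1)
    assert cpm_critical_alpha(m) == Fraction(3 * (m * m - 1), 2)
    # the multiplicities fill out dim S^2_0(R^{2m}) = 2m^2 + m - 1
    assert (m * m - 1) + m * (m + 1) == 2 * m * m + m - 1
```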
Previously, the author \cite[Theorem 1.9]{Li21} proved that K\"ahler manifolds with four-nonnegative $\mathring{R}$ are flat, and Nienhaus, Petersen, Wink, and Wylie \cite{NPWW22} proved the following result. \begin{theorem}\label{thm NPWW} Let $(M^m,g,J)$ be a K\"ahler manifold of complex dimension $m \geq 2$. Set \begin{equation*}
A=\begin{cases} 3m\frac{m+1}{m+2}, & \text{if } m \text{ is even}, \\
3m\frac{(m+1)(m^2-1)}{(m+2)(m^2+1)}, & \text{if } m \text{ is odd}.
\end{cases} \end{equation*} If the curvature operator of the second kind of $M$ is $\alpha$-nonnegative or $\alpha$-nonpositive for some $\alpha <A$, then $M$ is flat. \end{theorem}
Theorem \ref{thm flat} improves Theorem \ref{thm NPWW}. Moreover, the number $\frac{3}{2}(m^2-1)$ is sharp as $(\mathbb{CP}^m,g_{FS})$ has $\frac{3}{2}(m^2-1)$-nonnegative $\mathring{R}$ and $(\mathbb{CH}^m, g_{\text{stand}})$, the complex hyperbolic space with constant negative holomorphic sectional curvature, has $\frac{3}{2}(m^2-1)$-nonpositive $\mathring{R}$. Theorem \ref{thm flat} has two immediate corollaries. \begin{corollary}\label{cor 1} For $\alpha \leq \frac{3}{2}(m^2-1)$, there do not exist $m$-dimensional K\"ahler manifolds with $\alpha$-positive or $\alpha$-negative curvature operator of the second kind. \end{corollary}
\begin{corollary}\label{cor 3} Let $(M^m,g,J)$ be a complete non-flat K\"ahler manifold of complex dimension $m$. If $M$ has $\frac{3}{2}(m^2-1)$-nonnegative (respectively, $\frac{3}{2}(m^2-1)$-nonpositive) curvature operator of the second kind, then $M$ is isometric to $(\mathbb{CP}^m, g_{FS})$ (respectively, a quotient of $(\mathbb{CH}^m, g_{\text{stand}})$), up to scaling. \end{corollary}
In K\"ahler geometry, there are several positivity conditions on curvatures that characterize $\mathbb{CP}^m$ among closed K\"ahler manifolds up to biholomorphism. For instance, a closed K\"ahler manifold with positive bisectional curvature is biholomorphic to $\mathbb{CP}^m$. This was known as the Frankel conjecture \cite{Frankel61} and it was proved independently by Mori \cite{Mori79} and Siu and Yau \cite{SY80}. A weaker condition, called positive orthogonal bisectional curvature, also characterizes $\mathbb{CP}^m$. This is due to Chen \cite{Chen07} and Gu and Zhang \cite{GZ10} (see also Wilking \cite{Wilking13} for an alternative proof using Ricci flow). Therefore, it is natural to seek a positivity condition on $\mathring{R}$ that characterizes $\mathbb{CP}^m$. Regarding this question, we prove that \begin{theorem}\label{thm OB +}
A closed K\"ahler manifold of complex dimension $m\geq 2$ with $\alpha_m$-positive curvature operator of the second kind, where
\begin{equation}\label{eq alpha m def}
\alpha_m:= \frac{3m^3-m+2}{2m},
\end{equation}
is biholomorphic to $\mathbb{CP}^m$. \end{theorem}
Note that when $m=2$, $\alpha_2=6$ is the best constant for Theorem \ref{thm OB +} to hold, as $\mathbb{CP}^1 \times \mathbb{CP}^1$ has $(6+\epsilon)$-positive $\mathring{R}$ for $\epsilon >0$. For K\"ahler surfaces, the author proved in \cite{Li22PAMS} that a closed K\"ahler surface with six-positive $\mathring{R}$ is biholomorphic to $\mathbb{CP}^2$, and a closed nonflat K\"ahler surface with six-nonnegative $\mathring{R}$ is either biholomorphic to $\mathbb{CP}^2$ or isometric to $\mathbb{CP}^1 \times \mathbb{CP}^1$, up to scaling. The two different proofs in \cite{Li22PAMS} only work for complex dimension two.
The number $\alpha_m$, however, does not seem to be optimal for $m\geq 3$. An ambitious question is \begin{question} What is the largest number $C_m$ so that a closed $m$-dimensional K\"ahler manifold with $C_m$-positive curvature operator of the second kind is biholomorphic to $\mathbb{CP}^m$? \end{question}
Theorem \ref{thm OB +} implies $\alpha_m \leq C_m$. We point out that \begin{equation}\label{eq beta m def}
C_m \leq \b_m:=\frac{3m^3+2m^2-3m-2}{2m}, \end{equation} as the product manifold $\mathbb{CP}^{m-1}\times \mathbb{CP}^1$ has $\alpha$-positive $\mathring{R}$ for any $\alpha > \b_m$ (see \cite{Li22product}). This determines the leading term of $C_m$ to be $\frac{3}{2}m^2$. In particular, we have \begin{equation*}
\frac{40}{3} \leq C_3 \leq \frac{44}{3}. \end{equation*} It remains an interesting question to determine $C_m$ for $m\geq 3$.
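For the reader's convenience, the two bounds for $m=3$ follow by direct substitution into \eqref{eq alpha m def} and \eqref{eq beta m def}:
\begin{equation*}
\alpha_3=\frac{3\cdot 3^3-3+2}{2\cdot 3}=\frac{80}{6}=\frac{40}{3}, \qquad \b_3=\frac{3\cdot 3^3+2\cdot 3^2-3\cdot 3-2}{2\cdot 3}=\frac{88}{6}=\frac{44}{3}.
\end{equation*}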
In the next result, we show that $\b_m$-positivity of $\mathring{R}$ implies positive orthogonal Ricci curvature. \begin{theorem}\label{thm Ric perp} Let $(M^m,g,J)$ be a K\"ahler manifold of complex dimension $m\geq 2$. If $M$ has $\b_m$-positive curvature operator of the second kind with $\b_m$ defined in \eqref{eq beta m def}, then $M$ has positive orthogonal Ricci curvature, namely \begin{equation*}
\operatorname{Ric}(X,X)-R(X,JX,X,JX)/|X|^2 >0 \end{equation*} for any $0 \neq X\in T_pM$ and any $p\in M$. If $M$ is further assumed to be closed, then $M$ has $h^{p,0}=0$ for any $1\leq p \leq m$, and in particular, $M$ is simply-connected and projective. \end{theorem}
The orthogonal Ricci curvature $\operatorname{Ric}^\perp$ is defined as \begin{equation*}
\operatorname{Ric}^\perp(X,X)=\operatorname{Ric}(X,X)-R(X,JX,X,JX)/|X|^2 \end{equation*} for $0 \neq X\in T_pM$. This notion of curvature was introduced by Ni and Zheng \cite{NZ18} in the study of Laplace comparison theorems on K\"ahler manifolds. We refer the reader to \cite{NZ19}, \cite{NWZ21}, and \cite{Ni21} for a more detailed account of it. The constant $\b_m$ in Theorem \ref{thm Ric perp} is optimal, as $\mathbb{CP}^{m-1}\times \mathbb{CP}^1$ has $\b_m$-nonnegative $\mathring{R}$ and it has nonnegative (but not positive) orthogonal Ricci curvature.
In addition, we prove a result similar to Theorem \ref{thm Ric perp}, which states that \begin{theorem}\label{thm Ricci} Let $(M^m,g,J)$ be a K\"ahler manifold of complex dimension $m\geq 2$. Suppose $M$ has $\gamma_m$-positive curvature operator of the second kind, where \begin{equation}\label{eq gamma m def}
\gamma_m:= \frac{3m^2+2m-1}{2}. \end{equation} Then for any $p\in M$ and $0 \neq X\in T_pM$, it holds that \begin{equation}\label{eq mixed curvature}
2\operatorname{Ric}(X,X)-R(X,JX,X,JX)/|X|^2 >0. \end{equation} If $M$ is further assumed to be closed, then $M$ has $h^{p,0}=0$ for any $1\leq p \leq m$, and in particular, $M$ is simply-connected and projective. \end{theorem}
Chu, Lee, and Tam \cite{CLT20} introduced a family of curvature conditions for K\"ahler manifolds called mixed curvature. They are defined as
$$\mathcal{C}_{a,b}(X):=a \operatorname{Ric}(X,\bar{X})+b R(X,\bar{X},X,\bar{X})/|X|^2$$ for $a,b\in \mathbb{R}$. Theorem \ref{thm Ricci} establishes a connection between $\mathring{R}$ and the mixed curvature condition $\mathcal{C}_{2,-1}$.
Finally, let us outline our strategy for proving the above-mentioned results. We will work pointwise and establish relationships between the curvature operator of the second kind and other frequently used curvature notions in K\"ahler geometry, such as holomorphic sectional curvature, orthogonal bisectional curvature, and orthogonal Ricci curvature. Theorems \ref{thm flat}, \ref{thm OB +}, \ref{thm Ric perp}, and \ref{thm Ricci} follow immediately from parts (1), (2), (3), and (4) of the following theorem, respectively. \begin{theorem}\label{thm algebra R}
Let $(V,g,J)$ be a complex Euclidean vector space with complex dimension $m\geq 2$ and $R$ be a K\"ahler algebraic curvature operator on $V$ (see Definition \ref{def Kahler algebraic curvature operator}). Then the following statements hold:
\begin{enumerate}
\item If $R$ has $\frac{3}{2}(m^2-1)$-nonnegative (respectively, $\frac{3}{2}(m^2-1)$-nonpositive) curvature operator of the second kind, then $R$ has constant nonnegative (respectively, nonpositive) holomorphic sectional curvature.
\item If $R$ has $\alpha_m$-nonnegative (respectively, $\alpha_m$-positive, $\alpha_m$-nonpositive, $\alpha_m$-negative) curvature operator of the second kind with $\alpha_m$ defined in \eqref{eq alpha m def}, then $R$ has nonnegative (respectively, positive, nonpositive, negative) orthogonal bisectional curvature and nonnegative (respectively, positive, nonpositive, negative) holomorphic sectional curvature.
\item If $R$ has $\b_m$-nonnegative (respectively, $\b_m$-positive, $\b_m$-nonpositive, $\b_m$-negative) curvature operator of the second kind with $\b_m$ defined in \eqref{eq beta m def}, then $R$ has nonnegative (respectively, positive, nonpositive, negative) orthogonal Ricci curvature.
\item If $R$ has $\gamma_m$-nonnegative (respectively, $\gamma_m$-positive, $\gamma_m$-nonpositive, $\gamma_m$-negative) curvature operator of the second kind with $\gamma_m$ defined in \eqref{eq gamma m def}, then the expression
\begin{equation*}
2\operatorname{Ric}(X,X)-R(X,JX,X,JX)/|X|^2
\end{equation*}
is nonnegative (respectively, positive, nonpositive, negative) for any $0 \neq X \in V$.
\end{enumerate} \end{theorem}
The strategy to prove a statement in Theorem \ref{thm algebra R} is to choose a model space and apply $\mathring{R}$ to the eigenvectors of the curvature operator of the second kind on this model space. A good model space leads to a sharp result. The model spaces we use for parts (1), (3), and (4) of Theorem \ref{thm algebra R} are $\mathbb{CP}^m$, $\mathbb{CP}^{m-1}\times \mathbb{CP}^1$, and $\mathbb{CP}^{m-1}\times \mathbb{C}$, respectively. For part (2) of Theorem \ref{thm algebra R}, we use $\mathbb{CP}^m$ as the model space, but the result does not seem to be sharp for $m\geq 3$. Finally, it is worth mentioning that this strategy has been used successfully by the author in several works \cite{Li21, Li22JGA, Li22PAMS, Li22product} with $\mathbb{CP}^2$, $\mathbb{CP}^1 \times \mathbb{CP}^1$, $\mathbb{S}^{n-1}\times \mathbb{S}^1$, $\mathbb{S}^{k}\times \mathbb{S}^{n-k}$, and $\mathbb{CP}^k \times \mathbb{CP}^{m-k}$ as model spaces.
We emphasize that our approach is pointwise; therefore, many of our results are of pointwise nature and the completeness of the metric is not needed. Another feature is that our proofs are purely algebraic and work equally well for nonpositivity conditions on $\mathring{R}$.
This article is organized as follows. Section 2 consists of three subsections. We fix some notation and conventions in subsection 2.1 and give an introduction to the curvature operator of the second kind in subsection 2.2. In subsection 2.3, we review some basics about K\"ahler algebraic curvature operators. In Section 3, we collect some identities that will be frequently used in this paper. In Section 4, we construct an orthonormal basis of the space of traceless symmetric two-tensors on a complex Euclidean vector space and calculate the diagonal elements of the matrix representing $\mathring{R}$ with respect to this basis. The proofs of Theorems \ref{thm flat}, \ref{thm OB +}, \ref{thm Ric perp}, and \ref{thm Ricci} are given in Sections 5, 6, 7, and 8, respectively.
\section{Preliminaries} \subsection{Notation and Conventions}
In the following, $(V,g)$ is a real Euclidean vector space of dimension $n \geq 2$ and $\{e_i\}_{i=1}^n$ is an orthonormal basis of $V$. We always identify $V$ with its dual space $V^*$ via the metric. \begin{itemize}
\item $S^2(V)$ and $\Lambda^2(V)$ denote the space of symmetric two-tensors on $V$ and two-forms on $V$, respectively.
\item $S^2_0(V)$ denotes the space of traceless symmetric two-tensors on $V$. Note that $S^2(V)$ splits into $O(V)$-irreducible subspaces as
\begin{equation*}
S^2(V)=S^2_0(V)\oplus \mathbb{R} g.
\end{equation*}
\item $S^2(\Lambda^2 V)$, the space of symmetric two-tensors on $\Lambda^2(V)$, has the orthogonal decomposition
\begin{equation*}
S^2(\Lambda^2 V) =S^2_B(\Lambda^2 V) \oplus \Lambda^4 V,
\end{equation*}
where $S^2_B(\Lambda^2 V)$ consists of all tensors $R\in S^2(\Lambda^2(V))$ that also satisfy the first Bianchi identity. The space $S^2_B(\Lambda^2(V))$ is called the space of algebraic curvature operators (or tensors) on $V$.
\item The tensor product is defined via
\begin{equation*}
(e_i\otimes e_j)(e_k,e_l)=\delta_{ik}\delta_{jl}.
\end{equation*}
\item $\odot$ denotes the symmetric product defined by
\begin{equation*}
u \odot v=u\otimes v +v \otimes u.
\end{equation*}
\item $\wedge$ denotes the wedge product defined by
\begin{equation*}
u \wedge v=u\otimes v - v \otimes u.
\end{equation*}
\item The inner product on $S^2(V)$ is given by
\begin{equation*}
\langle A, B \rangle =\operatorname{tr}(A^T B).
\end{equation*}
If $\{e_i\}_{i=1}^n$ is an orthonormal basis of $V$, then $\{\frac{1}{\sqrt{2}}e_i \odot e_j\}_{1\leq i<j\leq n} \cup \{\frac{1}{2}e_i \odot e_i\}_{1\leq i\leq n}$ is an orthonormal basis of $S^2(V)$.
\item The inner product on $\Lambda^2(V)$ is given by
\begin{equation*}
\langle A, B \rangle =\frac{1}{2}\operatorname{tr}(A^T B).
\end{equation*}
If $\{e_i\}_{i=1}^n$ is an orthonormal basis of $V$, then $\{e_i \wedge e_j\}_{1\leq i<j\leq n}$ is an orthonormal basis of $\Lambda^2(V)$.
\end{itemize}
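As a quick check of the normalization in $S^2(V)$: for $i\neq j$, the matrix of $e_i\odot e_j$ has exactly two nonzero entries, both equal to $1$, while $e_i\odot e_i=2\,e_i\otimes e_i$, so
\begin{equation*}
\|e_i\odot e_j\|^2=\operatorname{tr}\big((e_i\odot e_j)^T(e_i\odot e_j)\big)=2 \quad \text{and} \quad \|e_i\odot e_i\|^2=4,
\end{equation*}
which explains the normalizing factors $\frac{1}{\sqrt{2}}$ and $\frac{1}{2}$ above.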
\subsection{The Curvature Operator of the Second Kind} Given $R\in S^2_B(\Lambda^2(V))$, the induced self-adjoint operator $\hat{R}:\Lambda^2 (V) \to \Lambda^2(V)$ given by
\begin{equation*}
\hat{R}(\omega)_{ij}=\frac{1}{2}\sum_{k,l=1}^n R_{ijkl}\omega_{kl},
\end{equation*} is called the curvature operator (or the curvature operator of the first kind by Nishikawa \cite{Nishikawa86}). The most famous result concerning $\hat{R}$ is perhaps the differentiable sphere theorem stating that a closed Riemannian manifold with two-positive curvature operator is diffeomorphic to a spherical space form. This is due to Hamilton \cite{Hamilton82} in dimension three, Hamilton \cite{Hamilton86} and Chen \cite{Chen91} in dimension four, and B\"ohm and Wilking \cite{BW08} in all higher dimensions. Rigidity results for closed manifolds with two-nonnegative curvature operator are obtained by Hamilton \cite{Hamilton86} in dimension three, Hamilton \cite{Hamilton86} and Chen \cite{Chen91} in dimension four, and Ni and Wu \cite{NW07} in all higher dimensions. For other important results regarding $\hat{R}$, see for example \cite{Meyer71}, \cite{GM75}, \cite{Tachibana74}, \cite{PW21} and the references therein.
By the symmetries of $R\in S^2_B(\Lambda^2(V))$ (not including the first Bianchi identity), $R$ also induces a self-adjoint operator $\overline{R}:S^2(V) \to S^2(V)$ via \begin{equation*}
\overline{R}(\varphi)_{ij}=\sum_{k,l=1}^n R_{iklj}\varphi_{kl}. \end{equation*} However, the nonnegativity of this operator is too strong in the sense that $\overline{R}: S^2(V) \to S^2(V)$ is nonnegative if and only if $R=0$. Therefore, one usually considers the restriction of $\overline{R}$ to the space of traceless symmetric two-tensors, i.e., the induced symmetric bilinear form $\mathring{R}:S^2_0(V)\times S^2_0(V) \to \mathbb{R}$ given by
\begin{equation*}
\mathring{R}(\varphi,\psi)=\sum_{i,j,k,l=1}^n R_{ijkl}\varphi_{il}\psi_{jk}.
\end{equation*} Following Nishikawa's terminology \cite{Nishikawa86}, we call the symmetric bilinear form $\mathring{R}$ \textit{the curvature operator of the second kind}.
The action of the Riemann curvature tensor on symmetric two-tensors indeed has a long history. It appeared for K\"ahler manifolds in the study of the deformation of complex analytic structures by Calabi and Vesentini \cite{CV60}. They introduced the self-adjoint operator $\xi_{\alpha \b} \mapsto R^{\rho}_{\ \alpha\b}{}^{\sigma} \xi_{\rho \sigma}$ from $S^2(T^{1,0}_p M)$ to itself, and computed the eigenvalues of this operator on Hermitian symmetric spaces of classical type, with the exceptional ones handled shortly after by Borel \cite{Borel60}. In the Riemannian setting, the operator $\overline{R}$ arises naturally in the context of deformations of Einstein structures by Berger and Ebin \cite{BE69} (see also \cite{Koiso79a, Koiso79b} and \cite{Besse08}). In addition, it appears in the Bochner-Weitzenb\"ock formulas for symmetric two-tensors (see for example \cite{MRS20}), for differential forms in \cite{OT79}, and for Riemannian curvature tensors in \cite{Kashiwada93}. In another direction, curvature pinching estimates for $\overline{R}$ were studied by Bourguignon and Karcher \cite{BK78}, and they calculated eigenvalues of $\overline{R}$ on the complex projective space with the Fubini-Study metric and the quaternionic projective space with its canonical metric. Nevertheless, the operators $\overline{R}$ and $\mathring{R}$ are significantly less investigated than $\hat{R}$.
Let $N=\dim(S^2_0(V))=\frac{(n-1)(n+2)}{2}$ and $\{\varphi_i\}_{i=1}^N$ be an orthonormal basis of $S^2_0(V)$. The $N\times N$ matrix $\mathring{R}(\varphi_i, \varphi_j)$ is called the matrix representation of $\mathring{R}$ with respect to the orthonormal basis $\{\varphi_i\}_{i=1}^N$. The eigenvalues of $\mathring{R}$ refer to the eigenvalues of any of its matrix representations. Note that the eigenvalues of $\mathring{R}$ are independent of the choice of orthonormal basis.
For an integer $1\leq k \leq N$, we say $R\in S^2_B(\Lambda^2(V))$ has $k$-nonnegative curvature operator of the second kind if the sum of the smallest $k$ eigenvalues of $\mathring{R}$ is nonnegative. This was extended to all real $\alpha\in [1,N]$ in \cite{Li22JGA} as follows. \begin{definition} Let $N=\frac{(n-1)(n+2)}{2}$ and $\alpha \in [1, N]$. \begin{enumerate}
\item We say $R\in S^2_B(\Lambda^2(V))$ has $\alpha$-nonnegative curvature operator of the second kind ($\mathring{R}$ is $\alpha$-nonnegative for short) if for any orthonormal basis $\{\varphi_i\}_{i=1}^{N}$ of $S^2_0(V)$, it holds that \begin{equation*}\label{eq def R}
\sum_{i=1}^{\lfloor \alpha \rfloor} \mathring{R}(\varphi_i,\varphi_i) +(\alpha -\lfloor \alpha \rfloor) \mathring{R}(\varphi_{\lfloor \alpha \rfloor+1},\varphi_{\lfloor \alpha \rfloor+1}) \geq 0. \end{equation*} If the inequality is strict, then $R$ is said to have $\alpha$-positive curvature operator of the second kind ($\mathring{R}$ is $\alpha$-positive for short).
\item We say $R\in S^2_B(\Lambda^2(V))$ has $\alpha$-nonpositive (respectively, $\alpha$-negative) curvature operator of the second kind if $-R$ has $\alpha$-nonnegative (respectively, $\alpha$-positive) curvature operator of the second kind. \end{enumerate} \end{definition} Note that when $\alpha=k$ is an integer, this agrees with the usual definition. We always omit $\alpha$ when $\alpha=1$. Clearly, $\alpha$-nonnegativity of $\mathring{R}$ implies $\b$-nonnegativity of $\mathring{R}$ if $\alpha \leq \b$. The same holds for positivity, negativity, and nonpositivity.
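To illustrate the definition with a hypothetical spectrum, suppose $\mathring{R}$ has the eigenvalue $-1$ with multiplicity one and the eigenvalue $2$ with multiplicity $N-1$. Taking $\{\varphi_i\}_{i=1}^N$ to be an eigenbasis ordered by increasing eigenvalue (which minimizes the expression in the definition), for $1\leq \alpha \leq 2$ the relevant quantity is
\begin{equation*}
-1+(\alpha-1)\cdot 2,
\end{equation*}
which is nonnegative if and only if $\alpha\geq \frac{3}{2}$. Hence this $\mathring{R}$ is $\alpha$-nonnegative precisely for $\alpha\geq \frac{3}{2}$.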
\begin{definition} A Riemannian manifold $(M^n,g)$ is said to have $\alpha$-nonnegative (respectively, $\alpha$-positive, $\alpha$-nonpositive, $\alpha$-negative) curvature operator of the second kind if $R_p \in S^2_B(\Lambda^2 T_pM)$ has $\alpha$-nonnegative (respectively, $\alpha$-positive, $\alpha$-nonpositive, $\alpha$-negative) curvature operator of the second kind for each $p\in M$. \end{definition}
The generalized definition is motivated by geometric examples. For instance, $\mathbb{S}^{n-1}\times \mathbb{S}^1$ has $\alpha$-nonnegative curvature operator of the second kind for any $\alpha \geq \left(n+\frac{n-2}{n}\right)$, but not for any $\alpha< \left(n+\frac{n-2}{n}\right)$. Another example is $(\mathbb{CP}^m,g_{FS})$, whose curvature operator of the second kind is $\alpha$-nonnegative for any $\alpha \geq \frac{3}{2}(m^2-1)$, but not for any $\alpha < \frac{3}{2}(m^2-1)$.
\subsection{K\"ahler Algebraic Curvature Operators} Throughout this subsection, let $(V,g,J)$ be a complex Euclidean vector space of complex dimension $m\geq 1$. In other words, $(V,g)$ is a real Euclidean vector space of real dimension $2m$ and $J:V\to V$ is an endomorphism of $V$ satisfying the following two properties: \begin{enumerate}
\item $J^2 =-\operatorname{id} $ on $V$,
\item $g(X,Y)=g(JX,JY)$ for all $X,Y\in V$. \end{enumerate} $J$ is called a complex structure on $V$.
\begin{definition}\label{def Kahler algebraic curvature operator} $R\in S^2_B(\Lambda^2 V )$ is called a K\"ahler algebraic curvature operator if it satisfies \begin{equation*}
R(X,Y,Z,W)=R(X,Y,JZ,JW), \end{equation*} for all $X,Y,Z,W \in V$. \end{definition}
Note that a K\"ahler algebraic curvature operator $R$ satisfies \begin{eqnarray}\label{eq R J-inv}
&&R(X,Y,Z,W)=R(JX,JY,Z,W)\\ &=& R(X,Y,JZ,JW)=R(JX,JY,JZ,JW),\nonumber \end{eqnarray} and \begin{equation}\label{eq bisectional curvature}
R(X,JX,Y,JY)=R(X,Y,X,Y) + R(X,JY,X,JY), \end{equation} for all $X,Y,Z,W \in V$. \eqref{eq R J-inv} follows from the symmetries of $R$ and \eqref{eq bisectional curvature} is a consequence of the first Bianchi identity and \eqref{eq R J-inv}. The expression on the left-hand side of \eqref{eq bisectional curvature} is called bisectional curvature (see \cite{GK67}), holomorphic sectional curvature if $X=Y$ (see for instance \cite{YZ19}), and orthogonal bisectional curvature if $g(X,Y)=g(X,JY)=0$ (see for example \cite{LN20}). We will use \eqref{eq R J-inv} and \eqref{eq bisectional curvature} frequently in the rest of this paper.
In view of \eqref{eq R J-inv} and \eqref{eq bisectional curvature}, the Ricci tensor of a K\"ahler algebraic curvature operator $R$ is given by \begin{equation}\label{eq Ricci curvature}
\operatorname{Ric}(X,Y)=\sum_{i=1}^m R(X,JY,e_i, Je_i), \end{equation} and the scalar curvature of $R$, denoted by $S$, is given by \begin{equation}\label{eq scalar curvature}
S=2\sum_{i,j=1}^m R(e_i,Je_i,e_j,Je_j), \end{equation} where $\{e_1,\cdots, e_m, Je_1, \cdots, Je_m\}$ is an orthonormal basis of $V$.
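As a consistency check, \eqref{eq scalar curvature} agrees with taking the trace of \eqref{eq Ricci curvature}: by \eqref{eq R J-inv} one has $\operatorname{Ric}(Je_i,Je_i)=\operatorname{Ric}(e_i,e_i)$, and hence
\begin{equation*}
S=\sum_{i=1}^m \left(\operatorname{Ric}(e_i,e_i)+\operatorname{Ric}(Je_i,Je_i)\right)=2\sum_{i=1}^m \operatorname{Ric}(e_i,e_i)=2\sum_{i,j=1}^m R(e_i,Je_i,e_j,Je_j).
\end{equation*}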
Next, we recall some definitions. \begin{definition} A K\"ahler algebraic curvature operator $R$ is said to have \begin{enumerate} \item nonnegative holomorphic sectional curvature if for any $X\in V$, \begin{equation*}
R(X,JX,X,JX)\geq 0. \end{equation*} \item nonnegative orthogonal bisectional curvature if for any $X, Y\in V$ with $g(X, Y)=g(X,JY)=0$, \begin{equation*}
R(X,JX,Y,JY)\geq 0. \end{equation*} \item nonnegative orthogonal Ricci curvature if for any $0\neq X\in V$, \begin{equation*}
\operatorname{Ric}^\perp(X,X):=\operatorname{Ric}(X,X)-R(X,JX,X,JX)/|X|^2 \geq 0. \end{equation*} \end{enumerate} \end{definition} Analogously, one defines the positivity, negativity, and nonpositivity of holomorphic sectional curvature, orthogonal bisectional curvature, and orthogonal Ricci curvature. Finally, a K\"ahler manifold $(M^m,g,J)$ is said to satisfy a curvature condition if $R_p \in S^2_B(\Lambda^2 T_pM)$ satisfies the curvature condition at every $p\in M$.
\section{Identities}
In this section, we collect some identities that will be frequently used in subsequent sections. Many of them have been used explicitly or implicitly in earlier works such as \cite{OT79}, \cite{CGT21}, \cite{Li21, Li22JGA, Li22PAMS}, and \cite{NPW22}.
\begin{lemma}\label{lemma 3.0} Let $(V,g)$ be a real Euclidean vector space of dimension $n \geq 2$ and $\{e_i\}_{i=1}^n$ be an orthonormal basis of $V$. Then we have \begin{equation}\label{eq g on basis}
\langle e_i \odot e_j ,e_k \odot e_l \rangle = 2(\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}), \end{equation} and \begin{equation}\label{eq R on basis}
\mathring{R}(e_i \odot e_j ,e_k \odot e_l)= 2(R_{iklj}+R_{ilkj}),
\end{equation} for all $1\leq i,j,k,l \leq n$. \end{lemma} \begin{proof} Using $(e_i\otimes e_j)(e_k,e_l)=\delta_{ik}\delta_{jl}$, we compute that \begin{eqnarray*}
&& \langle e_i \odot e_j ,e_k \odot e_l \rangle \\
&=& \sum_{p,q=1}^n (e_i \odot e_j)(e_p,e_q) \cdot (e_k \odot e_l)(e_p,e_q) \\
&=& \sum_{p,q=1}^n(\delta_{ip}\delta_{jq}+\delta_{iq}\delta_{jp})(\delta_{kp}\delta_{lq}+\delta_{kq}\delta_{lp}) \\
&=& 2(\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}). \end{eqnarray*} This proves \eqref{eq g on basis}. To prove \eqref{eq R on basis}, we calculate that \begin{eqnarray*} && \mathring{R}(e_i \odot e_j ,e_k \odot e_l) \\ &=& \sum_{p,q, r, s=1}^n R_{prsq}\cdot (e_i \odot e_j)(e_p,e_q) \cdot (e_k \odot e_l)(e_r,e_s) \\ &=& \sum_{p,q, r, s=1}^n R_{prsq}(\delta_{ip}\delta_{jq}+\delta_{iq}\delta_{jp})(\delta_{kr}\delta_{ls}+\delta_{ks}\delta_{lr}) \\ &=& R_{iklj}+R_{ilkj}+R_{jkli}+R_{jlki}\\ &=& 2(R_{iklj}+R_{ilkj}). \end{eqnarray*} \end{proof}
\begin{lemma}\label{lemma ijkl}
Let $\{e_i,e_j,e_k,e_l\}$ be an orthonormal four-frame in a Euclidean vector space $(V,g)$ of dimension $n\geq 4$. Define the following traceless symmetric two-tensors:
\begin{eqnarray*}
h^{\pm}_1 &=& \frac{1}{2}\left(e_i\odot e_j \pm e_k \odot e_l \right), \\
h_2 &=& \frac{1}{4}\left(e_i\odot e_i +e_j \odot e_j -e_k \odot e_k -e_l \odot e_l \right) .
\end{eqnarray*}
Then we have $\|h^{\pm}_1\|=\|h_2\|=1$ and \begin{eqnarray*}
\mathring{R}(h^{\pm}_1,h^{\pm}_1) &=& \frac{1}{2} \left( R_{ijij} +R_{klkl} \right) \pm R_{iklj} \pm R_{ilkj}, \\
\mathring{R}(h_2,h_2) &=& \frac{1}{2}\left( -R_{ijij}-R_{klkl}+R_{ikik}+R_{ilil}+R_{jkjk}+R_{jljl}\right). \end{eqnarray*} \end{lemma} \begin{proof}
One easily verifies using \eqref{eq g on basis} that $\|h^{\pm}_1\|=\|h_2\|=1$. We compute that \begin{eqnarray*}
&& 4 \mathring{R}(h^{\pm}_1,h^{\pm}_1) \\
&=&\mathring{R}(e_i\odot e_j \pm e_k \odot e_l, e_i\odot e_j \pm e_k \odot e_l) \\
&=& \mathring{R}(e_i\odot e_j , e_i\odot e_j) +\mathring{R}(e_k \odot e_l, e_k \odot e_l) \pm 2 \mathring{R}(e_i\odot e_j, e_k \odot e_l) \\
&=& 2R_{ijij} +2R_{klkl} \pm 4 \left(R_{iklj}+R_{ilkj} \right), \end{eqnarray*} where we have used \eqref{eq R on basis} in getting the last line.
Using \eqref{eq R on basis} again, we obtain that \begin{eqnarray*}
&& 16 \mathring{R}(h_2,h_2) \\
&=&\mathring{R}(e_i\odot e_i+e_j\odot e_j, e_i \odot e_i+ e_j\odot e_j) \\
&&+\mathring{R}(e_k\odot e_k+e_l\odot e_l, e_k \odot e_k+ e_l\odot e_l) \\
&&-2\mathring{R}(e_i\odot e_i+e_j\odot e_j, e_k \odot e_k+ e_l\odot e_l) \\
&=& 2 \mathring{R}(e_i\odot e_i, e_j\odot e_j) +2 \mathring{R}(e_k\odot e_k, e_l\odot e_l) \\
&& -2 \mathring{R}(e_i\odot e_i, e_k\odot e_k) -2 \mathring{R}(e_i\odot e_i, e_l\odot e_l) \\
&& -2 \mathring{R}(e_j\odot e_j, e_k\odot e_k) -2 \mathring{R}(e_j\odot e_j, e_l\odot e_l) \\
&=& -8(R_{ijij} +R_{klkl}) +8\left(R_{ikik}+R_{ilil}+R_{jkjk}+R_{jljl} \right). \end{eqnarray*} \end{proof}
\begin{lemma}\label{lemma ij} Let $\{e_i,e_j\}$ be an orthonormal two-frame in a Euclidean vector space $(V,g)$ of dimension $n\geq 2$. Define the following traceless symmetric two-tensors: \begin{eqnarray*}
h_3 &=& \frac{1}{2\sqrt{2}}\left(e_i\odot e_i -e_j \odot e_j \right), \\
h_4 &=& \frac{1}{\sqrt{2}} e_i \odot e_j.
\end{eqnarray*}
Then we have $\|h_3\|=\|h_4\| =1$ and \begin{equation*}
\mathring{R}(h_3,h_3)= \mathring{R}(h_4,h_4) = R_{ijij}. \end{equation*} \end{lemma} \begin{proof} This is a straightforward computation using \eqref{eq g on basis} and \eqref{eq R on basis}. \end{proof}
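Explicitly, the computation for $h_4$ reads
\begin{equation*}
\mathring{R}(h_4,h_4)=\frac{1}{2}\,\mathring{R}(e_i\odot e_j,e_i\odot e_j)=R_{iijj}+R_{ijij}=R_{ijij},
\end{equation*}
since $R_{iijj}=0$. The computation for $h_3$ is analogous, using $\mathring{R}(e_i\odot e_i,e_i\odot e_i)=\mathring{R}(e_j\odot e_j,e_j\odot e_j)=0$ together with $\mathring{R}(e_i\odot e_i,e_j\odot e_j)=4R_{ijji}=-4R_{ijij}$.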
\begin{lemma} Let $\{e_1, \cdots, e_m, Je_1, \cdots, Je_m\}$ be an orthonormal basis of a complex Euclidean vector space $(V,g,J)$ of complex dimension $m\geq 1$. Then for any $1 \leq i, j \leq m$, we have \begin{equation}\label{eq R iiJiJi}
\mathring{R}(e_i \odot e_i +Je_i \odot Je_i, e_j \odot e_j +Je_j \odot Je_j) =-8R(e_i,Je_i,e_j,Je_j). \end{equation} \end{lemma} \begin{proof} This follows from a routine computation using \eqref{eq R on basis}, \eqref{eq R J-inv}, and \eqref{eq bisectional curvature} as follows: \begin{eqnarray*} && \mathring{R}(e_i \odot e_i +Je_i \odot Je_i, e_j \odot e_j +Je_j \odot Je_j) \\ &=& \mathring{R}(e_i \odot e_i, e_j \odot e_j) +\mathring{R}(e_i \odot e_i, Je_j \odot Je_j) \\ && +\mathring{R}(Je_i \odot Je_i, e_j \odot e_j)+\mathring{R}(Je_i \odot Je_i, Je_j \odot Je_j) \\ &=& -4R(e_i,e_j,e_i,e_j)-4R(e_i,Je_j,e_i,Je_j)\\ && -4R(Je_i,e_j,Je_i,e_j)-4R(Je_i,Je_j,Je_i,Je_j) \\ &=& -8R(e_i,e_j,e_i,e_j)-8R(e_i,Je_j,e_i,Je_j)\\ &=& -8R(e_i,Je_i,e_j,Je_j). \end{eqnarray*} \end{proof}
The author observed in \cite[Proposition 4.1]{Li21} (see also \cite[Proposition 1.2]{NPW22}) that the trace of $\mathring{R}$ is equal to $\frac{n+2}{2n}S$, where $S$ denotes the scalar curvature. That is to say, if $\{\varphi_i\}_{i=1}^N$ is an orthonormal basis of $S^2_0(V)$, then \begin{equation}\label{trace R}
\sum_{i=1}^N \mathring{R}(\varphi_i,\varphi_i) =\frac{n+2}{2n}S. \end{equation} This implies that \begin{lemma}\label{lemma R and S} $R\in S^2_B(\Lambda^2 V)$ has $\frac{(n-1)(n+2)}{2}$-nonnegative (respectively, $\frac{(n-1)(n+2)}{2}$-nonpositive) curvature operator of the second kind if and only if $R$ has nonnegative (respectively, nonpositive) scalar curvature $S$. \end{lemma} \begin{lemma}\label{S flat implies R flat} Suppose that $R\in S^2_B(\Lambda^2 V)$ has $\alpha$-nonnegative (respectively, $\alpha$-nonpositive) curvature operator of the second kind for some $\alpha < \frac{(n-1)(n+2)}{2}$. If $S=0$, then $R=0$. \end{lemma}
\section{An orthonormal basis for $S^2_0(V)$} Below we construct an orthonormal basis for $S^2_0(V)$ on a complex Euclidean vector space $(V,g,J)$.
\begin{lemma}\label{lemma basis +} Let $\{e_1, \cdots, e_m, Je_1, \cdots, Je_m\}$ be an orthonormal basis of a complex Euclidean vector space $(V,g,J)$. Let \begin{equation*}
E^+=\operatorname{span}\{u \odot v -Ju \odot Jv: u,v \in V \}. \end{equation*} Define \begin{eqnarray*} \varphi^+_{ij} &=& \frac{1}{2} \left( e_i \odot e_j - Je_i \odot Je_j \right), \text{ for } 1 \leq i < j \leq m, \\ \psi^{+}_{ij} &=& \frac{1}{2} \left( e_i \odot J e_j + Je_i \odot e_j \right), \text{ for } 1 \leq i < j \leq m, \\
\theta_{i} &=& \frac{1}{2\sqrt{2}} \left( e_i \odot e_i -Je_i \odot Je_i \right), \text{ for } 1\leq i \leq m, \\
\theta_{m+i} &=& \frac{1}{\sqrt{2}} e_i \odot J e_i, \text{ for } 1\leq i \leq m. \end{eqnarray*} Then \begin{equation}\label{eq basis}
\mathcal{E}^+= \{\varphi^{+}_{ij}\}_{1\leq i < j \leq m} \cup\{\psi^{+}_{ij}\}_{1\leq i < j \leq m} \cup \{\theta_i \}_{i=1}^{2m} \end{equation} forms an orthonormal basis of $E^+$. In particular, $\dim(E^+)=m(m+1)$. \end{lemma} \begin{proof} Clearly, $\mathcal{E}^+ \subset E^+$. Using \eqref{eq g on basis}, one verifies that $\mathcal{E}^+$ is an orthonormal subset of $E^+$. That $\mathcal{E}^+$ spans $E^+$ follows from the following observation. If \begin{eqnarray*}
u &=& \sum_{i=1}^m x_i e_i +\sum_{i=1}^m y_i Je_i , \\
v &=& \sum_{i=1}^m z_i e_i +\sum_{i=1}^m w_i Je_i, \end{eqnarray*} then \begin{eqnarray*}
&& u\odot v -Ju \odot Jv \\
&=& \sum_{i,j=1}^m (x_iz_j-y_iw_j)(e_i\odot e_j -Je_i \odot Je_j) \\
&&+ \sum_{i,j=1}^m (x_iw_j+y_iz_j)(e_i\odot Je_j + Je_i \odot e_j) \\
&=& 2\sum_{1\leq i < j\leq m} (x_iz_j+x_jz_i-y_iw_j-y_jw_i)\varphi^+_{ij} \\
&& +2\sum_{1\leq i < j\leq m} (x_iw_j+x_jw_i+y_iz_j+y_jz_i)\psi^+_{ij} \\
&& + 2\sqrt{2} \sum_{i=1}^m (x_iz_i-y_iw_i)\theta_{i} + 2\sqrt{2} \sum_{i=1}^{m} (x_iw_i+y_iz_i)\theta_{m+i}. \end{eqnarray*} Thus, $\mathcal{E}^+$ forms an orthonormal basis of $E^+$ and $\dim(E^+)=m(m+1)$. \end{proof}
\begin{lemma}\label{lemma basis -} Let $\{e_1, \cdots, e_m, Je_1, \cdots, Je_m\}$ be an orthonormal basis of a complex Euclidean vector space $(V,g,J)$. Let $E^-=(E^+)^\perp$ be the orthogonal complement of $E^+$, where $E^+$ is the subspace of $S^2_0(V)$ defined in Lemma \ref{lemma basis +}. Define \begin{eqnarray*} \varphi^{-}_{ij} &=& \frac{1}{2} \left( e_i \odot e_j + Je_i \odot Je_j \right), \text{ for } 1 \leq i < j \leq m, \\ \psi^{-}_{ij} &=& \frac{1}{2} \left( e_i \odot J e_j - Je_i \odot e_j \right), \text{ for } 1 \leq i < j \leq m, \end{eqnarray*} and \begin{eqnarray*}
\eta_k &=& \frac{k}{\sqrt{8k(k+1)}} (e_{k+1}\odot e_{k+1} +Je_{k+1} \odot Je_{k+1}) \\
&& - \frac{1}{\sqrt{8k(k+1)}}\sum_{i=1}^{k}(e_i \odot e_i +Je_i \odot Je_i), \end{eqnarray*} for $1\leq k \leq m-1$. Then \begin{equation}
\mathcal{E}^-=\{\varphi^{-}_{ij}\}_{1\leq i < j \leq m} \cup \{\psi^{-}_{ij}\}_{1\leq i < j \leq m} \cup \{\eta_k\}_{k=1}^{m-1} \end{equation} forms an orthonormal basis of $E^-$. In particular, $\dim(E^-)=m^2-1$. \end{lemma}
\begin{proof} Since $S^2_0(V)=E^+\oplus E^-$, we have that \begin{eqnarray*}
\dim(E^-) &=& \dim(S^2_0(V))-\dim(E^+) \\
&=& (2m-1)(m+1) -m(m+1)\\
&=&m^2-1. \end{eqnarray*} As the number of traceless symmetric two-tensors in $\mathcal{E}^-$ is equal to $\dim(E^-)$, it suffices to verify that $\mathcal{E}^+ \cup \mathcal{E}^-$ is an orthonormal subset of $S^2_0(V)$, which is a straightforward computation using \eqref{eq g on basis}. \end{proof}
We remark that on $(\mathbb{CP}^m, g_{FS})$, $E^+$ is the eigenspace associated with the eigenvalue $4$, and $E^-$ is the eigenspace associated with the eigenvalue $-2$. The subspace $E^-$ is spanned by traceless symmetric two-tensors of the form $u \odot v +Ju \odot Jv$ with $g(u,v)=0$ and $u\odot u +Ju \odot Ju -v\odot v -Jv\odot Jv$ with $g(u,u)=g(v,v)$. See \cite[page 84]{BK78}.
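In particular, this eigenvalue data recovers the threshold $\frac{3}{2}(m^2-1)$ for $(\mathbb{CP}^m,g_{FS})$ mentioned in subsection 2.2: the $m^2-1$ smallest eigenvalues of $\mathring{R}$ equal $-2$ and the remaining $m(m+1)$ eigenvalues equal $4$, so for $\alpha\geq m^2-1$ the quantity in the definition of $\alpha$-nonnegativity equals
\begin{equation*}
-2(m^2-1)+4\left(\alpha-(m^2-1)\right),
\end{equation*}
which is nonnegative if and only if $\alpha\geq \frac{3}{2}(m^2-1)$, while for $\alpha< m^2-1$ it equals $-2\alpha<0$.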
The next step is to calculate the matrix representation of $\mathring{R}$ with respect to the orthonormal basis $\mathcal{E}^+ \cup \mathcal{E}^-$ for $S^2_0(V)$. We only need the diagonal elements of this matrix. \begin{lemma}\label{lemma R basis +} For the basis $\mathcal{E}^+$ of $E^+\subset S^2_0(V)$ defined in Lemma \ref{lemma basis +}, we have \begin{equation}\label{eq R positive basis vp psi}
\mathring{R}(\varphi^+_{ij}, \varphi^+_{ij}) =\mathring{R}(\psi^+_{ij}, \psi^+_{ij})= 2R(e_i, Je_i, e_j, Je_j) \end{equation} for $1\leq i < j \leq m$, and \begin{equation}\label{eq R positive basis alpha}
\mathring{R}(\theta_{i}, \theta_{i}) =\mathring{R}(\theta_{m+i}, \theta_{m+i})= R(e_i,Je_i, e_i, Je_i), \end{equation} for $1\leq i \leq m$. Moreover, \begin{eqnarray}\label{R sum positive eigenvalues}
\sum_{1\leq i< j \leq m} \left( \mathring{R}(\varphi^+_{ij}, \varphi^+_{ij}) + \mathring{R}(\psi^+_{ij}, \psi^+_{ij}) \right) + \sum_{i=1}^{2m} \mathring{R}(\theta_{i}, \theta_{i})
= S. \end{eqnarray} \end{lemma} \begin{proof} Applying Lemma \ref{lemma ijkl} to the orthonormal four-frame $\{e_i,e_j,Je_i,Je_j\}$ yields \begin{eqnarray*}
\mathring{R}(\varphi^+_{ij}, \varphi^+_{ij}) &=& \frac{1}{2} \left( R(e_i,e_j,e_i,e_j) +R(Je_i,Je_j,Je_i,Je_j) \right) \\
&& -R(e_i, Je_i, Je_j, e_j) - R(e_i, Je_j, Je_i, e_j) \\
&=& R(e_i,e_j,e_i,e_j) +R(e_i, Je_i, e_j, Je_j) + R(e_i, Je_j, e_i, Je_j)\\
&=& 2R(e_i, Je_i, e_j, Je_j), \end{eqnarray*} where we have used \eqref{eq R J-inv} and \eqref{eq bisectional curvature}. Similarly, we get \begin{eqnarray*}
\mathring{R}(\psi^+_{ij}, \psi^+_{ij}) &=& \frac{1}{2} \left( R(e_i,Je_j,e_i,Je_j) +R(Je_i,e_j,Je_i,e_j) \right) \\
&& +R(e_i, Je_i, e_j, Je_j) + R(e_i, e_j, Je_i, Je_j) \\
&=& R(e_i, Je_j, e_i, Je_j)+R(e_i, Je_i, e_j, Je_j) +R(e_i,e_j,e_i,e_j) \\
&=& 2R(e_i, Je_i, e_j, Je_j). \end{eqnarray*} Now, \eqref{eq R positive basis vp psi} is proved.
For $1\leq i \leq m$, we apply Lemma \ref{lemma ij} to the orthonormal two-frame $\{e_i,Je_i\}$ and get \begin{equation*}
\mathring{R}(\theta_{i}, \theta_{i}) =\mathring{R}(\theta_{m+i}, \theta_{m+i})= R(e_i,Je_i, e_i, Je_i). \end{equation*} This proves \eqref{eq R positive basis alpha}.
Finally, \eqref{R sum positive eigenvalues} follows from \eqref{eq R positive basis vp psi}, \eqref{eq R positive basis alpha}, and \eqref{eq scalar curvature}.
\end{proof}
\begin{lemma}\label{lemma R basis -} For the basis $\mathcal{E}^-$ of $E^-\subset S^2_0(V)$ defined in Lemma \ref{lemma basis -}, we have \begin{eqnarray}\label{eq sum of negative eigenvaleus}
\sum_{1\leq i <j \leq m} \left( \mathring{R}(\varphi^-_{ij}, \varphi^-_{ij}) + \mathring{R}(\psi^-_{ij}, \psi^-_{ij}) \right) +\sum_{k=1}^{m-1} \mathring{R}(\eta_k,\eta_k)
= -\frac{m-1}{2m} S. \end{eqnarray} \end{lemma}
\begin{proof} Since $\mathcal{E}^+ \cup \mathcal{E}^-$ is an orthonormal basis for $S^2_0(V)$, \eqref{eq sum of negative eigenvaleus} follows immediately from \eqref{R sum positive eigenvalues}, \eqref{trace R}, and \eqref{eq scalar curvature}. \end{proof}
\section{Flatness} We prove Theorem \ref{thm flat} in this section. The key ingredient is \begin{proposition}\label{prop flat}
Let $R$ be a K\"ahler algebraic curvature operator on a complex Euclidean vector space $(V,g,J)$ of complex dimension $m\geq 2$.
\begin{enumerate}
\item If $R$ has $\frac{3}{2}(m^2-1)$-nonnegative (respectively, $\frac{3}{2}(m^2-1)$-nonpositive) curvature operator of the second kind, then $R$ has constant nonnegative (respectively, nonpositive) holomorphic sectional curvature.
\item If $R$ has $\alpha$-nonnegative (respectively, $\alpha$-nonpositive) curvature operator of the second kind for some $\alpha < \frac{3}{2}(m^2-1)$, then $R$ is flat.
\end{enumerate} \end{proposition}
We need an elementary lemma. \begin{lemma}\label{lemma average} Let $N$ be a positive integer and $A$ be a collection of $N$ real numbers. Denote by $a_i$ the $i$-th smallest number in $A$ for $1\leq i \leq N$. Define a function $f(A,x)$ by
\begin{equation*}
f(A,x)=\sum_{i=1}^{\lfloor x \rfloor} a_i +(x-\lfloor x \rfloor) a_{\lfloor x \rfloor+1},
\end{equation*}
for $x\in [1,N]$. Then we have
\begin{equation}\label{eq function}
f(A,x) \leq x \bar{a},
\end{equation}
where $\bar{a}:=\frac{1}{N}\sum_{i=1}^N a_i$ is the average of all numbers in $A$.
Moreover, the equality holds for some $x\in [1,N)$ if and only if $a_i=\bar{a}$ for all $1\leq i\leq N$. \end{lemma} \begin{proof}
We first show that Lemma \ref{lemma average} holds when $x=k$ is an integer. This is obvious if $k=1$ or $k=N$. If $2\leq k \leq N-1$, we have \begin{eqnarray*}
N(f(A,k)-k\bar{a})
&=& N\sum_{i=1}^{k} a_i - k \sum_{i=1}^N a_i \\
&=& (N-k)\sum_{i=1}^{k} a_i -k \sum_{i=k+1}^N a_i . \end{eqnarray*} Note that $\sum_{i=1}^{k} a_i \leq ka_{k+1}$ with equality if and only if $a_1=\cdots =a_{k+1}$ and $\sum_{i=k+1}^N a_i \geq (N-k)a_{k+1}$ with equality if and only if $a_{k+1}=\cdots =a_{N}$. So, we get \begin{equation*}
N(f(A,k)-k\bar{a})
\leq (N-k)k a_{k+1} -k(N-k)a_{k+1} = 0. \end{equation*} Moreover, the equality holds if and only if $a_i=\bar{a}$ for all $1\leq i\leq N$.
Next, for $x \in (1,N)$, we have \begin{eqnarray*}
f(A,x) &=& \sum_{i=1}^{\lfloor x \rfloor} a_i +(x-\lfloor x \rfloor) a_{\lfloor x \rfloor+1} \\
&=& (x-\lfloor x \rfloor)\sum_{i=1}^{\lfloor x \rfloor+1} a_i +(1-x+\lfloor x \rfloor) \sum_{i=1}^{\lfloor x \rfloor} a_i \\
&=& (x-\lfloor x \rfloor) f(A, \lfloor x \rfloor+1) +(1-x+\lfloor x \rfloor) f(A,\lfloor x \rfloor) \\
&\leq & (x-\lfloor x \rfloor) (\lfloor x \rfloor+1)\bar{a} +(1-x+\lfloor x \rfloor) \lfloor x \rfloor \bar{a} \\
&=& x \bar{a}. \end{eqnarray*} Here we have used $f(A, \lfloor x \rfloor+1) \leq (\lfloor x \rfloor+1) \bar{a}$ and $f(A,\lfloor x \rfloor) \leq \lfloor x \rfloor \bar{a}$ in getting the inequality step.
If the equality holds in \eqref{eq function} for $x\in [1,N)$, then we must have $f(A, \lfloor x \rfloor+1) = (\lfloor x \rfloor+1) \bar{a}$ and $f(A,\lfloor x \rfloor) = \lfloor x \rfloor \bar{a}$, which implies $a_i=\bar{a}$ for all $1\leq i\leq N$. This completes the proof. \end{proof}
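Lemma \ref{lemma average} is elementary and easy to double-check numerically. The following sketch (an illustration we add, with our own helper names, not part of the proof) implements $f(A,x)$ and tests the bound \eqref{eq function} on random collections:

```python
import math
import random

def f(A, x):
    """f(A, x): sum of the floor(x) smallest values of A, plus the
    fractional part of x times the next smallest value."""
    a = sorted(A)
    k = math.floor(x)
    frac = x - k
    return sum(a[:k]) + (frac * a[k] if k < len(a) else 0)

random.seed(0)
for _ in range(1000):
    N = random.randint(2, 10)
    A = [random.uniform(-5, 5) for _ in range(N)]
    x = random.uniform(1, N)
    abar = sum(A) / N
    # Lemma: f(A, x) <= x * abar (tolerance for floating-point rounding)
    assert f(A, x) <= x * abar + 1e-9

# Equality case: if all values are equal, f(A, x) = x * abar exactly.
assert f([2.0] * 6, 3.5) == 3.5 * 2.0
```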
We are now ready to prove Proposition \ref{prop flat}. \begin{proof}[Proof of Proposition \ref{prop flat}] (1). Let $\{e_1, \cdots, e_m, Je_1, \cdots, Je_m\}$ be an orthonormal basis of $V$ and let $\mathcal{E}^+\cup \mathcal{E}^-$ be the orthonormal basis of $S^2_0(V)$ constructed in Lemma \ref{lemma basis +} and Lemma \ref{lemma basis -}.
Let $A$ be the collection of the values $\mathring{R}(\theta_i,\theta_i)$ for $1\leq i \leq 2m$, $\mathring{R}(\varphi^{+}_{ij},\varphi^{+}_{ij})$ and $\mathring{R}(\psi^{+}_{ij},\psi^{+}_{ij})$ for $1\leq i < j \leq m$. By \eqref{R sum positive eigenvalues}, the sum of all values in $A$ is equal to $S$, and $\bar{a}$, the average of all values in $A$, is given by \begin{equation*}
\bar{a}=\frac{S}{m(m+1)}. \end{equation*} By Lemma \ref{lemma average}, we have \begin{equation}\label{eq f1}
f\left(A,\frac{1}{2}(m^2-1)\right) \leq \frac{1}{2}(m^2-1) \bar{a} =\frac{m-1}{2m}S, \end{equation} where $f(A,x)$ is the function defined in Lemma \ref{lemma average}.
If $R$ has $\frac{3}{2}(m^2-1)$-nonnegative curvature operator of the second kind, then \begin{eqnarray*}
0 \leq \sum_{1\leq i <j \leq m} \left( \mathring{R}(\varphi^-_{ij}, \varphi^-_{ij}) + \mathring{R}(\psi^-_{ij}, \psi^-_{ij}) \right) +\sum_{k=1}^{m-1} \mathring{R}(\eta_k,\eta_k) +f\left(A,\frac{1}{2}(m^2-1)\right). \end{eqnarray*} Substituting \eqref{eq sum of negative eigenvaleus} into the above inequality yields \begin{eqnarray}\label{eq 4.1}
\frac{m-1}{2m} S \leq f\left(A,\frac{1}{2}(m^2-1)\right). \end{eqnarray} Therefore, \eqref{eq f1} must hold as equality and we conclude from Lemma \ref{lemma average} that all values in $A$ are equal to $\frac{S}{m(m+1)}$. In particular, \begin{equation*}
R(e_i,Je_i,e_i,Je_i)=\frac{S}{m(m+1)} \end{equation*} for all $1\leq i \leq m$. Since the orthonormal basis $\{e_1,\cdots, e_m, Je_1, \cdots, Je_m\}$ is arbitrary, we have \begin{equation*}
R(X,JX,X,JX)=\frac{S}{m(m+1)} \end{equation*} for any unit vector $X\in V$. Hence, $R$ has constant holomorphic sectional curvature. Finally, we notice that $S\geq 0$ by Lemma \ref{lemma R and S}.
If $\mathring{R}$ is $\frac{3}{2}(m^2-1)$-nonpositive, we apply the result just proved to $-R$ and conclude that $R$ has constant nonpositive holomorphic sectional curvature.
(2). $R$ has constant holomorphic sectional curvature by part (1). Thus, $R=c R_{\mathbb{CP}^m}$ for some $c\in \mathbb{R}$, where $R_{\mathbb{CP}^m}$ denotes the Riemann curvature tensor of $ (\mathbb{CP}^m, g_{FS})$. The desired conclusion follows from the fact that $c R_{\mathbb{CP}^m}$ has $\alpha$-nonnegative or $\alpha$-nonpositive curvature operator of the second kind for some $\alpha < \frac{3}{2}(m^2-1)$ if and only if $c=0$ (see \cite{BK78}).
Alternatively, one can argue as follows. If $\mathring{R}$ is $\left(\frac{3}{2}(m^2-1)-\epsilon\right)$-nonnegative for some $0<\epsilon <1$, then we would get in \eqref{eq 4.1} that \begin{equation*}
0 \leq -\frac{m-1}{2m} S + f\left(A,\frac{1}{2}(m^2-1)-\epsilon \right)
\leq \frac{-\epsilon \ S}{m(m+1)}. \end{equation*} So, we have $S=0$. By Lemma \ref{S flat implies R flat}, $R=0$. Similarly, $\mathring{R}$ is $\alpha$-nonpositive for some $\alpha < \frac{3}{2}(m^2-1)$ if and only if $R=0$. \end{proof}
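The eigenvalue count behind the threshold $\frac{3}{2}(m^2-1)$ can be verified directly: the proof sums the $m^2-1$ diagonal entries coming from $\mathcal{E}^-$ and then applies $f$ with argument $\frac{1}{2}(m^2-1)$, for a total of $\frac{3}{2}(m^2-1)$ slots. A quick numerical confirmation of the cardinalities involved (our sketch, not part of the source):

```python
from fractions import Fraction

for m in range(2, 100):
    # V has real dimension 2m, so dim S^2_0(V) = 2m(2m+1)/2 - 1.
    dim_S20 = (2 * m) * (2 * m + 1) // 2 - 1
    size_plus = 2 * (m * (m - 1) // 2) + 2 * m   # |E^+|: varphi+, psi+, theta
    size_minus = m * (m - 1) + (m - 1)           # |E^-|: varphi-, psi-, eta
    assert size_plus == m * (m + 1)
    assert size_minus == m**2 - 1
    assert size_plus + size_minus == dim_S20
    # Slots tested in part (1): |E^-| + (m^2 - 1)/2 = (3/2)(m^2 - 1).
    assert size_minus + Fraction(m**2 - 1, 2) == Fraction(3 * (m**2 - 1), 2)
```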
We now prove Theorem \ref{thm flat} and its corollaries.
\begin{proof}[Proof of Theorem \ref{thm flat}] Theorem \ref{thm flat} follows immediately from Proposition \ref{prop flat} and Schur's lemma for K\"ahler manifolds (see for instance \cite[Theorem 7.5]{KN69}). \end{proof}
\begin{proof}[Proof of Corollary \ref{cor 1}]
If $\mathring{R}$ is $\alpha$-positive or $\alpha$-negative for some $\alpha\leq \frac{3}{2}(m^2-1)$, then $R$ must have constant holomorphic sectional curvature by Proposition \ref{prop flat}. Thus, $R=c R_{\mathbb{CP}^m}$ for some $c\in \mathbb{R}$. This contradicts the fact that $c R_{\mathbb{CP}^m}$ does not have $\alpha$-positive or $\alpha$-negative curvature operator of the second kind for any $\alpha\leq \frac{3}{2}(m^2-1)$. \end{proof}
\begin{proof}[Proof of Corollary \ref{cor 3}] By Theorem \ref{thm flat}, $M$ has constant holomorphic sectional curvature. The conclusion follows from the classification of complete simply-connected K\"ahler manifolds with constant holomorphic sectional curvature. \end{proof}
\section{Orthogonal Bisectional Curvature} Throughout this section, $\alpha_m$ is the number defined in \eqref{eq alpha m def}, i.e., \begin{equation*}
\alpha_m=\frac{3m^3-m+2}{2m}. \end{equation*} We restate part (2) of Theorem \ref{thm algebra R} as two propositions. \begin{proposition}\label{prop OB}
Let $R$ be a K\"ahler algebraic curvature operator on a complex Euclidean vector space $(V,g,J)$ of complex dimension $m\geq 2$.
If $R$ has $\alpha_m$-nonnegative (respectively, $\alpha_m$-positive, $\alpha_m$-nonpositive, $\alpha_m$-negative) curvature operator of the second kind, then $R$ has nonnegative (respectively, positive, nonpositive, negative) orthogonal bisectional curvature. \end{proposition} \begin{proposition}\label{prop H}
Let $R$ be a K\"ahler algebraic curvature operator on a complex Euclidean vector space $(V,g,J)$ of complex dimension $m\geq 2$. If $R$ has $\alpha_m$-nonnegative (respectively, $\alpha_m$-positive, $\alpha_m$-nonpositive, $\alpha_m$-negative) curvature operator of the second kind, then $R$ has nonnegative (respectively, positive, nonpositive, negative) holomorphic sectional curvature. \end{proposition}
\begin{proof}[Proof of Proposition \ref{prop OB}] Let $\{e_1, \cdots, e_m, Je_1, \cdots, Je_m\}$ be an orthonormal basis of $V$ and let $\mathcal{E}^+\cup \mathcal{E}^-$ be the orthonormal basis of $S^2_0(V)$ constructed in Lemma \ref{lemma basis +} and Lemma \ref{lemma basis -}.
Let $A$ be the collection of the values $\mathring{R}(\theta_i,\theta_i)$ for $1\leq i \leq m$. By Lemma \ref{lemma R basis +}, $A$ consists of one copy of $R(e_i,Je_i,e_i,Je_i)$ for each $1\leq i\leq m$, and its average $\bar{a}$ is given by \begin{equation*} \bar{a}=\frac{1}{m}\sum_{i=1}^m R(e_i,Je_i,e_i,Je_i). \end{equation*}
Let $B$ be the collection of the values $\mathring{R}(\varphi^{+}_{ij},\varphi^{+}_{ij})$ and $\mathring{R}(\psi^{+}_{ij},\psi^{+}_{ij})$ for $1\leq i < j \leq m$ with $(i,j)\neq (1,2)$. According to Lemma \ref{lemma R basis +}, $B$ consists of two copies of $2R(e_i,Je_i,e_j,Je_j)$ for each $1\leq i < j \leq m$ with $(i,j)\neq (1,2)$. The average of all values in $B$, denoted by $\bar{b}$, is given by \begin{equation*}
\bar{b}=\frac{4}{(m-2)(m+1)}\left( \sum_{1\leq i< j\leq m} R(e_i,Je_i,e_j,Je_j) - R(e_1,Je_1,e_2,Je_2)\right). \end{equation*}
Next, let $f(A,x)$ and $f(B,x)$ be defined as in Lemma \ref{lemma average}. By Lemma \ref{lemma average}, we have \begin{equation}\label{eq a bar}
f(A,m-1) \leq (m-1)\bar{a}=\frac{m-1}{m}\sum_{i=1}^m R(e_i,Je_i,e_i,Je_i), \end{equation} and \begin{eqnarray}\label{eq b bar}
&& f\left(B, \frac{(m-2)(m^2-1)}{2m}\right)
\leq \frac{(m-2)(m^2-1)}{2m} \bar{b} \\
&=& \frac{2(m-1)}{m}\left( \sum_{1\leq i< j\leq m} R(e_i,Je_i,e_j,Je_j) - R(e_1,Je_1,e_2,Je_2)\right). \nonumber \end{eqnarray}
Note that \begin{equation*}
\alpha_m=(m^2-1)+2+(m-1)+\frac{(m-2)(m^2-1)}{2m}. \end{equation*} If $R$ has $\alpha_m$-nonnegative curvature operator of the second kind, then we have \begin{eqnarray*}\label{eq R nonnegative ob}
0 &\leq & \sum_{1\leq i <j \leq m} \left( \mathring{R}(\varphi^-_{ij}, \varphi^-_{ij}) + \mathring{R}(\psi^-_{ij}, \psi^-_{ij}) \right) +\sum_{k=1}^{m-1} \mathring{R}(\eta_k,\eta_k) \\
&& + \mathring{R}(\varphi^+_{12},\varphi^+_{12})+\mathring{R}(\psi^+_{12},\psi^+_{12}) \nonumber \\
&& + f(A, m-1) +f\left(B, \frac{(m-2)(m^2-1)}{2m}\right). \nonumber \end{eqnarray*} Substituting \eqref{eq sum of negative eigenvaleus}, \eqref{eq R positive basis vp psi}, \eqref{eq a bar}, and \eqref{eq b bar}
into the above inequality yields \begin{eqnarray*}
0 &\leq & -\frac{m-1}{2m} S + 4R(e_1,Je_1,e_2,Je_2) + \frac{m-1}{m}\sum_{i=1}^m R(e_i,Je_i,e_i,Je_i) \\ && +\frac{2(m-1)}{m}\left( \sum_{1\leq i< j\leq m} R(e_i,Je_i,e_j,Je_j) - R(e_1,Je_1,e_2,Je_2)\right) \\
&=& \frac{2(m+1)}{m} R(e_1,Je_1,e_2,Je_2), \end{eqnarray*} where we have used \eqref{eq scalar curvature}. The arbitrariness of the orthonormal frame allows us to conclude that $R(e_1,Je_1,e_2,Je_2) \geq 0$ for any orthonormal two-frame $\{e_1,e_2\}$. Hence, $R$ has nonnegative orthogonal bisectional curvature.
If $R$ has $\alpha_m$-positive curvature operator of the second kind, then the last two inequalities in the above argument become strict and one concludes that $R$ has positive orthogonal bisectional curvature. Applying the results to $-R$ then yields the statements for $\alpha_m$-negativity and $\alpha_m$-nonpositivity. \end{proof}
\begin{proof}[Proof of Proposition \ref{prop H}] The idea is the same as in the proof of Proposition \ref{prop OB}. The main difference is that we need to separate the terms involving $R(e_1,Je_1,e_1,Je_1)$. As before, let $\{e_1, \cdots, e_m, Je_1, \cdots, Je_m\}$ be an orthonormal basis of $V$ and let $\mathcal{E}^+\cup \mathcal{E}^-$ be the orthonormal basis of $S^2_0(V)$ constructed in Lemma \ref{lemma basis +} and Lemma \ref{lemma basis -}.
Let $A$ be the collection of the values $\mathring{R}(\theta_i,\theta_i)$ for $2 \leq i \leq m$ and $m+2\leq i \leq 2m$, and the values $\mathring{R}(\varphi^{+}_{ij},\varphi^{+}_{ij})$ and $\mathring{R}(\psi^{+}_{ij},\psi^{+}_{ij})$ for $1\leq i < j \leq m$. By Lemma \ref{lemma R basis +}, $A$ consists of two copies of $R(e_i,Je_i,e_i,Je_i)$ for each $2\leq i \leq m$ and two copies of $2R(e_i,Je_i,e_j,Je_j)$ for each $1\leq i < j \leq m$. The average of all values in $A$, denoted by $\bar{a}$, is given by \begin{eqnarray*}
\bar{a} &=& \frac{1}{(m-1)(m+2)}\left(2\sum_{i=2}^m R(e_i,Je_i,e_i,Je_i) +4\sum_{1\leq i < j \leq m} R(e_i,Je_i,e_j,Je_j)\right)\\
&=& \frac{1}{(m-1)(m+2)}\left(S-2R(e_1,Je_1,e_1,Je_1) \right), \end{eqnarray*} where we have used \eqref{eq scalar curvature}.
Let $f(A,x)$ be the function defined in Lemma \ref{lemma average}, then we have by Lemma \ref{lemma average} that \begin{eqnarray}\label{eq a bar H}
&& f\left(A,\frac{(m-1)^2(m+2)}{2m}\right)
\leq \frac{(m-1)^2(m+2)}{2m} \bar{a} \\
& = & \frac{m-1}{2m} \left(S-2R(e_1,Je_1,e_1,Je_1)\right). \nonumber \end{eqnarray}
Note that \begin{equation*}
\alpha_m=(m^2-1)+2+\frac{(m-1)^2(m+2)}{2m}. \end{equation*} Since $\mathring{R}$ is $\alpha_m$-nonnegative, we obtain \begin{eqnarray*}
0 &\leq & \sum_{1\leq i <j \leq m} \left( \mathring{R}(\varphi^-_{ij}, \varphi^-_{ij}) + \mathring{R}(\psi^-_{ij}, \psi^-_{ij}) \right) +\sum_{k=1}^{m-1} \mathring{R}(\eta_k,\eta_k) \\
&& + \mathring{R}(\theta_{1},\theta_{1}) +\mathring{R}(\theta_{m+1},\theta_{m+1}) \\
&& +f\left(A,\frac{(m-1)^2(m+2)}{2m}\right).
\end{eqnarray*} Substituting \eqref{eq sum of negative eigenvaleus}, \eqref{eq R positive basis alpha}, and \eqref{eq a bar H}
into the above inequality yields \begin{eqnarray*}
0 &\leq & -\frac{m-1}{2m}S +2R(e_1,Je_1,e_1,Je_1) \\
&& +\frac{m-1}{2m} (S-2R(e_1,Je_1,e_1,Je_1)) \\ &=& \frac{m+1}{m} R(e_1,Je_1,e_1,Je_1). \end{eqnarray*} Since the orthonormal frame $\{e_1, \cdots, e_m, Je_1, \cdots, Je_m\}$ is arbitrary, we conclude that $R$ has nonnegative holomorphic sectional curvature. Other statements can be proved similarly. \end{proof}
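Both decompositions of $\alpha_m$ used above, namely $\alpha_m=(m^2-1)+2+(m-1)+\frac{(m-2)(m^2-1)}{2m}$ in the proof of Proposition \ref{prop OB} and $\alpha_m=(m^2-1)+2+\frac{(m-1)^2(m+2)}{2m}$ in the proof of Proposition \ref{prop H}, agree with the definition $\alpha_m=\frac{3m^3-m+2}{2m}$. A quick exact-arithmetic check (an illustration we add, not from the source):

```python
from fractions import Fraction

for m in range(2, 200):
    alpha = Fraction(3 * m**3 - m + 2, 2 * m)   # definition of alpha_m
    # splitting used for orthogonal bisectional curvature
    split_ob = (m**2 - 1) + 2 + (m - 1) + Fraction((m - 2) * (m**2 - 1), 2 * m)
    # splitting used for holomorphic sectional curvature
    split_h = (m**2 - 1) + 2 + Fraction((m - 1)**2 * (m + 2), 2 * m)
    assert alpha == split_ob == split_h
```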
\begin{proof}[Proof of Theorem \ref{thm OB +}] By Proposition \ref{prop OB}, $M$ has positive orthogonal bisectional curvature. $M$ is biholomorphic to $\mathbb{CP}^m$ by \cite{Chen07} and \cite{GZ10} (or \cite{Wilking13}). \end{proof}
\section{Orthogonal Ricci Curvature} We prove part (3) of Theorem \ref{thm algebra R} in this section. For convenience, we restate it in an equivalent way with the only difference being shifting the dimension from $m$ to $m+1$.
\begin{proposition}\label{prop Ric perp m+1}
Let $R$ be a K\"ahler algebraic curvature operator on a complex Euclidean vector space $(V,g,J)$ of complex dimension $(m+1)\geq 2$.
Let \begin{equation*}
\tilde{\b}_m:=\b_{m+1}=\frac{m(m+2)(3m+5)}{2(m+1)}.
\end{equation*} If $R$ has $\tilde{\b}_m$-nonnegative (respectively, $\tilde{\b}_m$-positive, $\tilde{\b}_m$-nonpositive, $\tilde{\b}_m$-negative) curvature operator of the second kind, then $R$ has nonnegative (respectively, positive, nonpositive, negative) orthogonal Ricci curvature. \end{proposition}
\begin{proof} The key idea is to use $\mathbb{CP}^m \times \mathbb{CP}^1$ as a model space. The eigenvalues and their associated eigenvectors of the curvature operator of the second kind on $\mathbb{CP}^m \times \mathbb{CP}^1$ are determined in \cite{Li22product}.
We construct an orthonormal basis of $S^2_0(V)$. Let $\{e_0, \cdots, e_m, Je_0, \cdots, Je_m\}$ be an orthonormal basis of $V$. Let \begin{align*}
V_0 &=\operatorname{span}\{e_0,Je_0\}, \\
V_1 &=\operatorname{span}\{e_1, \cdots, e_m, Je_1, \cdots, Je_m \}. \end{align*} Then $V=V_0\oplus V_1$.
Notice that $h \in S^2_0(V_i)$ can be viewed as an element in $S^2_0(V)$ via \begin{equation*}
h(X, Y)=h(\pi_i(X),\pi_i(Y)) \end{equation*} where $\pi_i:V \to V_i$ is the projection map for $i=0,1$. Thus, $S^2_0(V_i)$ can be identified with a subspace of $S^2_0(V)$ for $i=0,1$.
Let $\mathcal{E}^+\cup \mathcal{E}^-$ be the orthonormal basis of $S^2_0(V_1)$ constructed in Lemma \ref{lemma basis +} and Lemma \ref{lemma basis -}. An orthonormal basis of $S^2_0(V_0)$ is given by \begin{eqnarray*}
\tau_1 &=& \frac{1}{2\sqrt{2}} \left(e_0 \odot e_0 -Je_0 \odot Je_0 \right), \\
\tau_2 &=& \frac{1}{\sqrt{2}} \left( e_0 \odot Je_0 \right). \end{eqnarray*} Next, we define the following traceless symmetric two-tensors on $V$: \begin{eqnarray*}
h_i &=& \frac{1}{\sqrt{2}} e_0 \odot e_i \text{ for } 1\leq i\leq m, \\
h_{m+i} &=& \frac{1}{\sqrt{2}} e_0 \odot J e_i \text{ for } 1\leq i\leq m, \\
h_{2m+i} &=& \frac{1}{\sqrt{2}} Je_0 \odot e_i \text{ for } 1\leq i\leq m, \\
h_{3m+i} &=& \frac{1}{\sqrt{2}} Je_0 \odot J e_i \text{ for } 1\leq i\leq m. \end{eqnarray*} We also define \begin{equation*}
\zeta= \frac{1}{\sqrt{8m(m+1)}} \left(m(e_0\odot e_0 +Je_0 \odot Je_0) -\sum_{i=1}^m (e_i\odot e_i +Je_i \odot Je_i)\right). \end{equation*} Then it is a straightforward computation to verify that $$\mathcal{E}^+\cup \mathcal{E}^- \cup \{\tau_i\}_{i=1}^2 \cup \{h_i\}_{i=1}^{4m} \cup \{\zeta\} $$ forms an orthonormal basis of $S^2_0(V)$. This corresponds to the orthogonal decomposition \begin{equation*}
S^2_0(V)=S^2_0(V_1) \oplus S^2_0(V_0) \oplus \operatorname{span}\{u \odot v: u \in V_0, v\in V_1 \} \oplus \mathbb{R} \zeta. \end{equation*}
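As a consistency check on this decomposition (ours, not part of the source), the pieces account for the full dimension of $S^2_0(V)$ when $V$ has complex dimension $m+1$:

```python
for m in range(1, 100):
    # real dimension of V is 2m + 2
    dim_S20 = (2 * m + 2) * (2 * m + 3) // 2 - 1
    # E^+ u E^- spans S^2_0(V_1): 2m^2 + m - 1 elements;
    # plus tau_1, tau_2, the 4m mixed tensors h_i, and zeta.
    assert (2 * m**2 + m - 1) + 2 + 4 * m + 1 == dim_S20
```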
The next step is to calculate the diagonal elements of the matrix representation of $\mathring{R}$ with respect to this basis. By Lemma \ref{lemma ij}, we have \begin{equation}\label{eq R gamma}
\mathring{R}(\tau_1,\tau_1)=\mathring{R}(\tau_2,\tau_2)=R(e_0,Je_0,e_0,Je_0). \end{equation} Using Lemma \ref{lemma ij} again, we obtain \begin{eqnarray*}
\sum_{i=1}^{4m} \mathring{R}(h_i,h_i)
&=& \sum_{i=1}^{m}R(e_0,e_i,e_0,e_i) +\sum_{i=1}^{m}R(e_0,Je_i,e_0,Je_i) \\
&& + \sum_{i=1}^{m}R(Je_0,e_i,Je_0,e_i) +\sum_{i=1}^{m}R(Je_0,Je_i,Je_0,Je_i) \\
&=& 2 \sum_{i=1}^{m} \left( R(e_0,e_i,e_0,e_i) + R(e_0,Je_i,e_0,Je_i) \right) \\
&=& 2 \sum_{i=1}^{m}R(e_0,Je_0,e_i,Je_i) \\
&=& 2\operatorname{Ric}^\perp (e_0,e_0). \end{eqnarray*} We calculate using \eqref{eq R iiJiJi} that \begin{eqnarray*}
\mathring{R}(\zeta,\zeta) &=& \frac{m^2}{8m(m+1)} \mathring{R}(e_0\odot e_0 +Je_0 \odot Je_0, e_0\odot e_0 +Je_0 \odot Je_0) \\
&& - \frac{2m}{8m(m+1)} \sum_{i=1}^m\mathring{R}(e_0\odot e_0 +Je_0 \odot Je_0, e_i\odot e_i +Je_i \odot Je_i) \\
&& +\frac{1}{8m(m+1)} \sum_{i,j=1}^m \mathring{R}(e_i\odot e_i +Je_i \odot Je_i, e_j\odot e_j +Je_j \odot Je_j)\\
&=& -\frac{m}{(m+1)}R(e_0,Je_0,e_0,Je_0) +\frac{2}{m+1} \operatorname{Ric}^\perp (e_0,e_0) \\
&& -\frac{1}{m(m+1)}\sum_{i,j=1}^m R(e_i,Je_i,e_j,Je_j). \end{eqnarray*}
Let $A$ be the collection of the values $\mathring{R}(\tau_i,\tau_i)$ for $i=1,2$, $\mathring{R}(\theta_i,\theta_i)$ for $1\leq i \leq 2m$, $\mathring{R}(\varphi^{+}_{ij},\varphi^{+}_{ij})$ and $\mathring{R}(\psi^{+}_{ij},\psi^{+}_{ij})$ for $1\leq i < j \leq m$. By Lemma \ref{lemma R basis +} and \eqref{eq R gamma}, we know that $A$ contains two copies of $R(e_i,Je_i,e_i,Je_i)$ for each $0\leq i \leq m$ and two copies of $2R(e_i,Je_i,e_j,Je_j)$ for each $1\leq i < j \leq m$. Therefore, $\bar{a}$, the average of all values in $A$, is given by \begin{eqnarray*}
\bar{a} = \frac{2}{m^2+m+2} \left( \sum_{i,j=1}^m R(e_i,Je_i,e_j,Je_j) +R(e_0,Je_0,e_0,Je_0)\right) . \end{eqnarray*}
Since $R$ has $\tilde{\b}_m$-nonnegative curvature operator of the second kind with \begin{equation*}
\tilde{\b}_m=(m^2-1)+4m+1+\frac{m(m^2+m+2)}{2(m+1)}, \end{equation*} we obtain that \begin{eqnarray}\label{eq R nonnegative Ricci perp}
0 &\leq & \sum_{1\leq i <j \leq m} \left( \mathring{R}(\varphi^-_{ij}, \varphi^-_{ij}) + \mathring{R}(\psi^-_{ij}, \psi^-_{ij}) \right) +\sum_{k=1}^{m-1} \mathring{R}(\eta_k,\eta_k) \\
&& +\sum_{i=1}^{4m} \mathring{R}(h_i,h_i) + \mathring{R}(\zeta,\zeta) + f\left(A, \frac{m(m^2+m+2)}{2(m+1)}\right), \nonumber \end{eqnarray} where $f(A,x)$ is the function defined in Lemma \ref{lemma average}. By Lemma \ref{lemma average}, we have \begin{eqnarray}\label{eq f estimate}
&& f\left(A, \frac{m(m^2+m+2)}{2(m+1)}\right)
\leq \frac{m(m^2+m+2)}{2(m+1)} \bar{a} \\
&=& \frac{m}{m+1}\left( \sum_{i,j=1}^m R(e_i,Je_i,e_j,Je_j) +R(e_0,Je_0,e_0,Je_0)\right). \nonumber \end{eqnarray} By \eqref{eq sum of negative eigenvaleus} and \eqref{eq scalar curvature}, we have \begin{eqnarray}\label{eq sum of negative eigenvalues new}
&& \sum_{1\leq i <j \leq m} \left( \mathring{R}(\varphi^-_{ij}, \varphi^-_{ij}) + \mathring{R}(\psi^-_{ij}, \psi^-_{ij}) \right) +\sum_{k=1}^{m-1} \mathring{R}(\eta_k,\eta_k) \\
&=& -\frac{m-1}{m}\sum_{i,j=1}^m R(e_i,Je_i,e_j,Je_j). \nonumber \end{eqnarray} Substituting \eqref{eq sum of negative eigenvalues new}, the expressions for $\sum_{i=1}^{4m} \mathring{R}(h_i,h_i)$ and $\mathring{R}(\zeta,\zeta)$, and \eqref{eq f estimate} into \eqref{eq R nonnegative Ricci perp} yields \begin{eqnarray*}
0 &\leq & -\frac{m-1}{m}\sum_{i,j=1}^m R(e_i,Je_i,e_j,Je_j)+ 2\operatorname{Ric}^\perp (e_0,e_0) \\ &&-\frac{m}{m+1}R(e_0,Je_0,e_0,Je_0) +\frac{2}{m+1} \operatorname{Ric}^\perp(e_0,e_0) \\ && -\frac{1}{m(m+1)}\sum_{i,j=1}^m R(e_i,Je_i,e_j,Je_j)\\ && +\frac{m}{m+1}\left( \sum_{i,j=1}^m R(e_i,Je_i,e_j,Je_j) +R(e_0,Je_0,e_0,Je_0)\right) \\ &=& \frac{2(m+2)}{m+1}\operatorname{Ric}^\perp(e_0,e_0). \end{eqnarray*} Since the orthonormal frame $\{e_0, \cdots, e_m, Je_0, \cdots, Je_m\}$ is arbitrary, we conclude that $\operatorname{Ric}^\perp \geq 0$. Other statements can be proved similarly. \end{proof}
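The splitting of $\tilde{\b}_m$ used above agrees with its closed form $\frac{m(m+2)(3m+5)}{2(m+1)}$; an exact-arithmetic check (added for illustration, not from the source):

```python
from fractions import Fraction

for m in range(1, 200):
    beta = Fraction(m * (m + 2) * (3 * m + 5), 2 * (m + 1))        # closed form
    # splitting used in the proof: |E^-| + 4m h_i's + zeta + f-argument
    split = (m**2 - 1) + 4 * m + 1 + Fraction(m * (m**2 + m + 2), 2 * (m + 1))
    assert beta == split
```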
\begin{proof}[Proof of Theorem \ref{thm Ric perp}] By Proposition \ref{prop Ric perp m+1}, $M$ has positive orthogonal Ricci curvature. If $M$ is in addition closed, one can use \cite[Theorem 2.2]{Ni21} to conclude that $h^{p,0}=0$ for all $1\leq p\leq m$. In particular, $M$ is simply-connected and projective. \end{proof}
By the classification of closed three-dimensional K\"ahler manifolds with positive orthogonal Ricci curvature in \cite{NWZ21}, we get the following corollary of Theorem \ref{thm Ric perp}. \begin{corollary}\label{corollary Ric perp 3D} A closed three-dimensional K\"ahler manifold with $\frac{44}{3}$-positive curvature operator of the second kind is either biholomorphic to $\mathbb{CP}^3$ or biholomorphic to $\mathbb{Q}^3$, the smooth quadric hypersurface in $\mathbb{CP}^4$. \end{corollary}
\section{Mixed Curvature}
Part (4) of Theorem \ref{thm algebra R} follows immediately from the following proposition. \begin{proposition}\label{prop mixed curvature}
Let $(V,g,J)$ be a complex Euclidean vector space with complex dimension $(m+1)\geq 2$. Let $R$ be a K\"ahler algebraic curvature operator on $V$.
Let $$\tilde{\gamma}_m:=\gamma_{m+1}=\frac{3m^2+8m+4}{2}.$$
If $R$ has $\tilde{\gamma}_m$-nonnegative (respectively, $\tilde{\gamma}_m$-positive, $\tilde{\gamma}_m$-nonpositive, $\tilde{\gamma}_m$-negative) curvature operator of the second kind, then the expression
\begin{equation*}
2\operatorname{Ric}(X,X)-R(X,JX,X,JX)/|X|^2
\end{equation*}
is nonnegative (respectively, positive, nonpositive, negative) for any nonzero vector $X$ in $V$. \end{proposition}
\begin{proof}[Proof of Proposition \ref{prop mixed curvature}] The proof is similar to that of Proposition \ref{prop Ric perp m+1}, but with the different model space $\mathbb{CP}^m \times \mathbb{C}$. The eigenvalues and their associated eigenvectors of the curvature operator of the second kind on $\mathbb{CP}^m \times \mathbb{C}$ are determined in \cite{Li22product}.
Let $\{e_0, \cdots, e_m, Je_0, \cdots, Je_m\}$ be an orthonormal basis of $V$. Then $V=V_0\oplus V_1$, where $V_0 =\operatorname{span}\{e_0,Je_0\}$ and $ V_1 =\operatorname{span}\{e_1, \cdots, e_m, Je_1, \cdots, Je_m \}$. Let $$\mathcal{E}^+\cup \mathcal{E}^- \cup \{\tau_i\}_{i=1}^2 \cup \{h_i\}_{i=1}^{4m} \cup \{\zeta\} $$ be the orthonormal basis of $S^2_0(V)$ constructed in the proof of Proposition \ref{prop Ric perp m+1}.
Let $A$ be the collection of the values $\mathring{R}(\theta_i,\theta_i)$ for $1\leq i \leq 2m$, $\mathring{R}(\varphi^{+}_{ij},\varphi^{+}_{ij})$ and $\mathring{R}(\psi^{+}_{ij},\psi^{+}_{ij})$ for $1\leq i < j \leq m$. By Lemma \ref{lemma R basis +}, we know that $A$ contains two copies of $R(e_i,Je_i,e_i,Je_i)$ for each $1 \leq i \leq m$ and two copies of $2R(e_i,Je_i,e_j,Je_j)$ for each $1\leq i < j \leq m$. Therefore, $\bar{a}$, the average of all values in $A$, is given by \begin{eqnarray*}
\bar{a} = \frac{2}{m(m+1)} \left( \sum_{i,j=1}^m R(e_i,Je_i,e_j,Je_j) \right). \end{eqnarray*}
Let $f(A,x)$ be the function defined in Lemma \ref{lemma average}. Then we have \begin{eqnarray}\label{eq 5.1}
f\left(A, \frac{1}{2}m^2\right) \leq \frac{1}{2}m^2 \bar{a}=\frac{m}{m+1}\sum_{i,j=1}^m R(e_i,Je_i,e_j,Je_j). \end{eqnarray}
Note that \begin{equation*}
\tilde{\gamma}_m=(m^2-1)+4m+3 + \frac{1}{2}m^2. \end{equation*} Since $R$ has $\tilde{\gamma}_m$-nonnegative curvature operator of the second kind, we obtain that \begin{eqnarray*}
0 &\leq & \sum_{1\leq i <j \leq m} \left( \mathring{R}(\varphi^-_{ij}, \varphi^-_{ij}) + \mathring{R}(\psi^-_{ij}, \psi^-_{ij}) \right) +\sum_{k=1}^{m-1} \mathring{R}(\eta_k,\eta_k) \\
&& +\sum_{i=1}^{4m} \mathring{R}(h_i,h_i) + \mathring{R}(\zeta,\zeta) + \mathring{R}(\tau_1,\tau_1)+ \mathring{R}(\tau_2,\tau_2) \nonumber \\
&& + f\left(A, \frac{1}{2}m^2\right). \nonumber \end{eqnarray*} Substituting \eqref{eq sum of negative eigenvalues new}, \eqref{eq 5.1}, \eqref{eq R gamma}, and the expressions for $\sum_{i=1}^{4m} \mathring{R}(h_i,h_i)$ and $\mathring{R} (\zeta,\zeta)$ into the above inequality produces \begin{eqnarray*}\label{eq R nonnegative Ricci perp mixed}
0 &\leq & -\frac{m-1}{m} \sum_{i,j=1}^m R(e_i,Je_i,e_j,Je_j) + 2\operatorname{Ric}^\perp(e_0,e_0) \\ &&-\frac{m}{m+1}R(e_0,Je_0,e_0,Je_0) +\frac{2}{m+1} \operatorname{Ric}^\perp(e_0,e_0) \\ && -\frac{1}{m(m+1)}\sum_{i,j=1}^m R(e_i,Je_i,e_j,Je_j) +2R(e_0,Je_0,e_0,Je_0) \\ && +\frac{m}{m+1}\sum_{i,j=1}^m R(e_i,Je_i,e_j,Je_j)\\ &=& \frac{m+2}{m+1}\left(2\operatorname{Ric}(e_0,e_0)-R(e_0,Je_0,e_0,Je_0) \right). \end{eqnarray*} Finally, the arbitrariness of the orthonormal frame $\{e_0, \cdots, e_m, Je_0, \cdots, Je_m\}$ allows us to conclude that
$$2\operatorname{Ric}(X,X)-R(X,JX,X,JX)/|X|^2 \geq 0$$ for any nonzero vector $X \in V$. Other statements can be proved similarly.
\end{proof}
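The splitting $\tilde{\gamma}_m=(m^2-1)+4m+3+\frac{1}{2}m^2$ used in the proof matches the closed form $\frac{3m^2+8m+4}{2}$; a quick exact-arithmetic check (an illustration we add):

```python
from fractions import Fraction

for m in range(1, 200):
    gamma = Fraction(3 * m**2 + 8 * m + 4, 2)            # closed form
    split = (m**2 - 1) + 4 * m + 3 + Fraction(m**2, 2)   # splitting in the proof
    assert gamma == split
```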
\section*{Acknowledgments} I am grateful to the anonymous referees for their valuable suggestions and comments. In particular, one referee pointed out the simple proof of \eqref{eq sum of negative eigenvaleus}, saving two pages of computations.
\end{document} |
\begin{document}
\begin{abstract}
We prove that the Catalan Lie idempotent $D_n(a,b)$, introduced in
[Menous {\it et al.}, Adv. Appl. Math. 51 (2013), 177] can be
refined by introducing $n$ independent parameters $a_0,\ldots,a_{n-1}$
and that the coefficient of each monomial is itself a Lie idempotent
in the descent algebra. These new idempotents are multiplicity-free
sums of subsets of the Poincaré-Birkhoff-Witt basis of the Lie module.
These results are obtained by embedding noncommutative symmetric functions into the
dual noncommutative Connes-Kreimer algebra, which also allows us to interpret, and rederive in a simpler way,
Chapoton's results on a two-parameter tree expanded series. \end{abstract}
\maketitle
\small \tableofcontents \normalsize
\section{Introduction}
Lie idempotents are idempotents of the symmetric group algebra which act on words as projectors onto the free Lie algebra. Thus, they are in particular elements of the Lie module ${\rm Lie}(n)$, spanned by complete bracketings of standard words, such as $[[1,3],[2,4]]$, which can be represented as complete binary trees with leaves labelled $1,2,\ldots,n$.
Of course, these elements are not linearly independent, but the trees such that for each internal node, the smallest label is in the left subtree, and the greatest label is in the right subtree do form a basis, called the Poincaré-Birkhoff-Witt basis \cite{ST}. Such labellings are said to be admissible. These basis elements are denoted by $t(\sigma)$, where $t$ is a complete binary tree, and $\sigma$ the permutation obtained by reading its leaves from left to right.
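This description of the Poincaré-Birkhoff-Witt basis can be checked by brute force for small $n$: enumerating all complete binary trees with $n$ leaves together with all leaf labellings, and keeping the admissible ones, recovers $\dim{\rm Lie}(n)=(n-1)!$. The sketch below (our illustration; the helper names and encodings are ours) does exactly this:

```python
from itertools import permutations
from math import factorial

def trees(n):
    """Yield all complete binary trees with n leaves:
    a leaf is None, an internal node is a pair (left, right)."""
    if n == 1:
        yield None
    else:
        for i in range(1, n):
            for left in trees(i):
                for right in trees(n - i):
                    yield (left, right)

def admissible(tree, labels):
    """True iff, at every internal node, the smallest label of its
    subtree sits in the left child and the greatest in the right."""
    def rec(t, pos):
        # returns (ok, min_label, max_label, next_leaf_position)
        if t is None:
            return True, labels[pos], labels[pos], pos + 1
        ok_l, mn_l, mx_l, pos = rec(t[0], pos)
        ok_r, mn_r, mx_r, pos = rec(t[1], pos)
        ok = ok_l and ok_r and mn_l < mn_r and mx_l < mx_r
        return ok, min(mn_l, mn_r), max(mx_l, mx_r), pos
    return rec(tree, 0)[0]

def count_pbw(n):
    """Number of admissible pairs (t, sigma): expected to be (n-1)!."""
    return sum(1 for t in trees(n)
               for sigma in permutations(range(1, n + 1))
               if admissible(t, sigma))

for n in range(1, 6):
    assert count_pbw(n) == factorial(n - 1)
```

For instance, for $n=3$ the only admissible pairs are $[[1,2],3]$ and $[1,[2,3]]$, in agreement with $\dim{\rm Lie}(3)=2$.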
The direct sum ${\rm Lie} = \bigoplus_{n\ge 0}{\rm Lie}(n)$ can be interpreted as the operad ${\mathcal Lie}$. It is also a Lie algebra for the Malvenuto-Reutenauer convolution product of permutations, which allows us to regard it as contained in ${\bf FQSym}$, a permutation $\sigma$ being interpreted as the basis element ${\bf G}_\sigma$. Then, it is (strictly) contained in the primitive Lie algebra of ${\bf FQSym}$.
It turns out that the elements ${\sf c}_t$, defined for complete binary trees $t$ by the sum over admissible labellings \begin{equation} {\sf c}_t = \sum_{\sigma\ {\rm admissible}}t(\sigma) \end{equation} span a Lie subalgebra ${\mathfrak C}$ of ${\rm Lie}$, which might be called the Catalan Lie algebra.
It has proved convenient to label its basis elements by plane trees instead of binary trees: we set $C_T={\sf c}_t$ where the plane tree $T$ is the right-branch rotation of the incomplete binary tree $t'$ obtained by removing the leaves of $t$ (so that the maximal element of the Tamari order is the corolla).
This provides us with elements $C_T$ of ${\bf FQSym}$, labelled by plane trees. The noncommutative Connes-Kreimer algebra ${\mathcal H}_{NCK}$ \cite{Foi00,Foi01} is the free associative algebra generated by variables $Y_T$ indexed by plane trees, endowed with the coproduct of admissible cuts. Its basis $Y_F$ is therefore indexed by plane forests $F$. Its dual ${\mathcal H}_{NCK}^*$ admits a natural embedding into ${\bf FQSym}$, and if $X_F$ denotes the dual basis of $Y_F$, it turns out that \begin{equation}
C_T = \sum_{T'\le T}X_{T'} \end{equation} where the sum is over the Tamari order. Moreover, if one denotes by $\tau=\bar T$ the underlying non-plane rooted tree of $T$, the sums \begin{equation}
x_\tau = |{\rm Aut}(\tau)|\sum_{\bar T=\tau}X_T \end{equation} span a pre-Lie subalgebra, which is free on the generator $x_\bullet$, and $x_\tau$ coincides with its Chapoton-Livernet basis.
The aim of this paper is to investigate the expansions in the $X$ and $C$ bases of various noncommutative symmetric functions, regarded as elements of ${\bf FQSym}$. Our first result concerns the family of Catalan idempotents $D_n(a,b)$. Originally introduced as noncommutative symmetric functions on the ribbon basis in \cite{MNT}, these elements were identified in \cite{FMNT} as simple weighted sums of the basis $C_T$. However, the calculations of \cite{FMNT} are rather tricky, and it is by no means obvious that such sums belong to the descent algebra. We present here a new approach, relying on the Birkhoff factorisation of a simple character of $QSym$ with values in an algebra of Laurent series. This approach immediately produces the expansion on $X_F$ of a grouplike series $\sigma_{a(z)}^+$ which by definition belongs to the descent algebra. It is then relatively straightforward to check that the original Catalan idempotents are obtained by choosing $a(z)=\frac{a}{z}+\frac{b}{1-z}$ and taking the residue, the general case giving rise to new refined idempotents indexed by partitions of $n-1$.
Finally, we show how the embedding of ${\mathcal H}_{NCK}^*$ into ${\bf FQSym}$ can be used to determine the $X$-expansion of various noncommutative symmetric functions, including the Eulerian idempotents and the two-parameter series of Chapoton \cite{Cha3}.
This paper is a continuation of \cite{FMNT}, to which the reader is referred for background and notation.
{\bf Acknowledgements. } This research has been partially supported by the project CARPLO of the Agence Nationale de la recherche (ANR-20-CE40-0007).
\section{The noncommutative Connes-Kreimer Hopf algebra ${\mathcal H}_{NCK}$}
The noncommutative Connes-Kreimer Hopf algebra ${\mathcal H}_{NCK}$, introduced by Foissy \cite{Foi00,Foi01}, is, as a graded vector space, spanned by plane forests, the degree being the number of nodes. We denote by $Y_F$ its natural basis, indexed by the set of plane forests \begin{equation} {\mathcal F}=\lbrace \emptyset,\bullet, \arbuga,\bullet \bullet, \arbdga ,\arbdgb ,\bullet \arbuga,\arbuga \bullet, \bullet \bullet \bullet,\dots \rbrace. \end{equation}
It is then freely generated by variables $Y_T$ indexed by plane trees. The product is concatenation, and the coproduct is given by admissible cuts, which can be conveniently defined directly for the iterated coproducts in terms of labellings.
Trees will be drawn with the root at the top in this paper. The canonical labelling of a tree is obtained by visiting it in postorder, so that the labels of each subtree form an interval, with the maximum at its root.
{\footnotesize For instance, for the tree $\arbtgb$, we get the labelling { \newcommand{\nodea}{\node[draw,circle] (a) {$2$} ;}\newcommand{\nodeb}{\node[draw,circle] (b) {$1$} ;}\newcommand{\nodec}{\node[draw,circle] (c) {$3$} ;}\newcommand{\noded}{\node[draw,circle] (d) {$3$} ;}\newcommand{\nodee}{\node[draw,circle] (e) {$6$} ;}\newcommand{\nodef}{\node[draw,circle] (f) {$4$} ;}\newcommand{\nodeg}{\node[draw,circle] (g) {$5$} ;}\begin{tikzpicture}[auto] \matrix[column sep=.2cm, row sep=.2cm,ampersand replacement=\&]{
\& \nodef \& \\
\nodea \& \& \nodec \\
\nodeb \& \& \\ };
\path[ultra thick, black] (f) edge (a)
(f) edge (c)
(a) edge (b); \end{tikzpicture}} }
A forest $F$ is similarly labelled, by first grafting it on a common root -- that is, considering the tree $T=B_+(F)$ -- labelling $T$, and removing this labelled root afterwards.
Such a labelled forest is regarded as the Hasse diagram of a poset.
With this labelling, the $r$-fold iterated coproduct of an element $Y_F$ of degree $n$ can be described as follows: \begin{equation}
\Delta^r Y_F = \sum_{u\in C(F)\cap [r]^n}Y_{F_{(1)}}\otimes Y_{F_{(2)}}\otimes\cdots\otimes Y_{F_{(r)} } \end{equation} where $C(F)$ is the set of words $u$ such that $i<_Fj\Rightarrow u_i\le u_j$, and $F_{(i)}$ is the restriction of $F$ to the vertices labelled $i$.
{\footnotesize For instance, for the previous tree $T=\arbtgb$ and $r=2$, \begin{equation} C(T)\cap \lbrace 1,2 \rbrace^4=\lbrace 2222, 2212, 1222, 1122, 1212, 1112, 1111 \rbrace \end{equation} gives the coproduct: \begin{equation} \Delta\arbtgb = 1\otimes\arbtgb + \bullet\otimes \arbdga + \bullet\otimes \arbdgb
+\arbuga\otimes\arbuga
+\bullet\bullet\otimes \arbuga+
\arbuga\bullet\otimes\bullet+ \arbtgb\otimes1 \end{equation} }
As it will be useful in the following sections, let us also recall that the Polish code of a plane forest is obtained by labelling each node by the number of its children, and traversing it in prefix order. For the previous tree $T$ we get the Polish code $2100$, and its reverse Polish code is $0012$.
It has been shown in \cite[3.5]{FNT} that ${\mathcal H}_{NCK}$ admits an embedding $\pi$ into ${\bf WQSym}$. It is actually an embedding into ${\bf FQSym}$, given by $F\mapsto \Gamma_F(A)$, where $\Gamma_P(A)$ denotes the free generating function of a poset $P$ \cite{NCSF6}, that is, the sum of its linear extensions \begin{equation}
\Gamma_P(A) = \sum_{\sigma\in L(P)}{\bf F}_\sigma = \sum_{u\in C(P)}{\bf M}_u \in{\bf FQSym},
\end{equation}
where $C(P)$ is the set of packed words $u$ such that $i<_P j\Rightarrow u_i\le u_j$. Indeed, the linear extensions of a poset are precisely those permutations $\sigma$ such that $i<_P j\Rightarrow \sigma^{-1}(i)<\sigma^{-1}(j)$.
The linear extensions of such a labelled forest form an initial interval of the right weak order \cite{BW2}.
{\footnotesize For example, \begin{equation}
\Gamma_{\arbtgb} = {\bf F}_{3124}+{\bf F}_{1324}+{\bf F}_{1234}={\bf S}^{2314}=\check{\bf S}^{3124} \end{equation} where \cite[(6.4), (6.12)]{NCSF7} \begin{equation}
{\bf S}^\sigma=\sum_{\tau\le_L \sigma}{\bf G}_{\tau}=:\check {\bf S}^{\sigma^{-1}}. \end{equation} Then, $Y_F$ can be identified with $\Gamma_F=\check{\bf S}^{\sigma_F}$, where $\sigma_F$ is the maximal linear extension of $F$. For example \begin{equation} \Gamma_{\arbuga}\Gamma_\bullet = {\bf S}^{12}{\bf S}^1 = {\bf S}^{231} = \check{\bf S}^{312} = \Gamma_{\arbuga\bullet}. \end{equation} As for the coproduct, $\Gamma_{\arbtgb}=\check{\bf S}^{3124}$, and {\small \begin{equation} \Delta\check{\bf S}^{3124}=
1 \otimes \check{\bf S}^{3124} + \check{\bf S}^{1} \otimes \check{\bf S}^{123} + \check{\bf S}^{1} \otimes \check{\bf S}^{213} +
\check{\bf S}^{1 2} \otimes \check{\bf S}^{1 2} + \check{\bf S}^{21} \otimes \check{\bf S}^{12} + \check{\bf S}^{312} \otimes \check{\bf S}^{1} +
\check{\bf S}^{3124} \otimes 1, \end{equation} } which corresponds term by term to \begin{equation} \Delta\arbtgb = 1\otimes\arbtgb + \bullet\otimes \arbdga + \bullet\otimes \arbdgb
+\arbuga\otimes\arbuga
+\bullet\bullet\otimes \arbuga+
\arbuga\bullet\otimes\bullet+ \arbtgb\otimes1 \end{equation} }
Indeed, the coproduct formula \cite[(6.13)]{NCSF7} \begin{equation} \Delta\check{\bf S}^\sigma=
\sum_{u\cdot v\le\sigma}\langle\sigma|u\,\shuf\, v\rangle \check{\bf S}^{{\rm std}(u)}\otimes\check{\bf S}^{{\rm std}(v)}\,, \end{equation} (sum over pairs of complementary subwords whose concatenation is smaller than $\sigma$ in the right weak order) implies that if a value $\sigma_i$ goes into $v$, all greater values on its right must also go into $v$, so as not to create new inversions. Thus, the words $u$ and $v$ encode admissible cuts.
\section{Dual noncommutative Connes-Kreimer algebra ${\mathcal H}_{NCK}^*$}
\subsection{An embedding of ${\bf Sym}$, and its dual} Let $X_F$ be the dual basis of $Y_F$. According to our description of the coproduct of $Y_F$, the coefficient of $X_F$ in the product $X_{F'}X_{F''}$ is equal to the number of labellings of $F$ by words over $\{1,2\}$, nondecreasing from bottom to top, and such that
$F_{(1)}=F'$ and $F_{(2)}=F''$.
The coproduct of $X_F$ is deconcatenation, so that trees $X_T$ are primitive. The elements \begin{equation}\label{eq:SymInH}
\Lambda_n := X_{\bullet\bullet\cdots\bullet} \quad\text{($n$ vertices) and }\quad S_n:=\sum_{|F|=n}X_F \end{equation} form sequences of divided powers, and both define the same embedding of ${\bf Sym}$ into ${\mathcal H}_{NCK}^*$. One easily checks that, indeed, \begin{equation}
\left(\sum_{n\ge 0}(-1)^n X_{\bullet\bullet\cdots\bullet}\right)^{-1}=\sum_{n\ge 0}\sum_{|F|=n}X_F. \end{equation}
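{\footnotesize Let us check this identity in degree $2$. The product rule gives $X_\bullet X_\bullet = 2X_{\bullet\bullet}+X_{\arbuga}$, since $\bullet\bullet$ admits the two labellings $12$ and $21$, while $\arbuga$ only admits the labelling with $1$ at the leaf. The degree-$2$ term of the left-hand side is therefore \begin{equation} X_\bullet X_\bullet - X_{\bullet\bullet} = X_{\bullet\bullet}+X_{\arbuga} = S_2, \end{equation} as expected. }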
{\footnotesize Representing forests by their Polish codes, we have: \begin{align*} R_{1 1} & = X_{00}\\ R_{2} & = X_{00}+ X_{10}\\ R_3 &= X_{000}+X_{100}+X_{010}+X_{200}+X_{110}\\ R_{21} &= 2X_{000}+ X_{100}+ X_{010}\\ R_{12}&= 2X_{000}+ X_{100}+ X_{010}+X_{200}\\ R_{111}&=X_{000}\\ R_4&= X_{0000} + X_{0010} + X_{0100} + X_{1000} + X_{1010} + X_{0200} + X_{2000}\\ & + X_{1100} + X_{0110} + X_{1110} + X_{1200} + X_{2100} + X_{2010} + X_{3000} \\ R_{31}&= (X_{1100}+ X_{0110})+ 2(X_{1000}+ X_{0100}+ X_{0010})\\ &\ + (X_{2000}+ X_{0200})+ X_{1010}+ 3X_{0000}\\ R_{22}&= (X_{1100}+ X_{0110})+ 3(X_{1000}+ X_{0100}+ X_{0010})\\ &\ + 2(X_{2000}+ X_{0200})+ 2X_{1010} + 2X_{3000}+(X_{2100}+X_{2010})+5X_{0000}\\ R_{13}&= (X_{1100}+ X_{0110})+ 2(X_{1000}+ X_{0100}+ X_{0010})\\ &\ + 2(X_{2000}+ X_{0200})+ X_{1010} + 2X_{3000}+(X_{2100}+X_{2010})+X_{1200}+3X_{0000}\\ R_{211}&= (X_{1000}+ X_{0100}+ X_{0010})+3X_{0000}\\ R_{121}&= 2(X_{1000}+ X_{0100}+ X_{0010})+ (X_{2000}+ X_{0200})+ X_{1010}+5X_{0000}\\ R_{112}&=(X_{1000}+ X_{0100}+ X_{0010})+(X_{2000}+ X_{0200})+X_{3000}+3X_{0000}\\ R_{1111}&= X_{0000} \end{align*} }
\begin{proposition} The coefficient of $ X_F$ in $R_I$ is equal to the number of linear extensions of $F$ which are of ribbon shape $I$. \end{proposition}
\noindent \it Proof -- \rm The product rule of the $X$-basis implies immediately that the coefficient $\langle Y_F,S^I\rangle$ of $X_F$ in $S^I$ is the number of nondecreasing labellings of $F$ by words of evaluation $I$, which is also the coefficient of $M_I$ in $\Gamma_F(X)$. The dual map of the embedding of ${\bf Sym}$ into ${\mathcal H}_{NCK}^*$ is an epimorphism $\pi:\ {\mathcal H}_{NCK}\rightarrow QSym$, and $\langle Y_F,S^I\rangle$ is also the coefficient of $M_I$ in $\pi(Y_F)$.
Thus, $\pi$ is the restriction of the canonical projection (commutative image) of ${\bf FQSym}$ onto $QSym$, so that the coefficient of $X_F$ in $R_I$ is equal to the coefficient of $F_I$ in $\pi(\Gamma_F)$, which is by definition the number of linear extensions of $F$ of ribbon shape $I$. \qed
{\footnotesize For instance, if $T=\arbtgb=2100$, we have $\Gamma_{T} = {\bf F}_{3124}+{\bf F}_{1324}+{\bf F}_{1234}$ and $(3124)$, $(1324)$, $(1234)$ have respective ribbon shape $13$, $22$ and $4$, so that $X_{2100}$ appears with a coefficient $1$ in $R_{13}$, $R_{22}$ and $R_{4}$. }
The product rule of the $X$-basis also implies that the coefficient of $X_F$ in $\Lambda^I$ is the number of strictly decreasing labellings of $F$ by words of evaluation $I$.
We have therefore proved: \begin{theorem}\label{th:coeffX} The coefficient of $X_F$ in $\sigma_1(XA)$ is \begin{equation} \sum_I\<Y_F,S^I\>M_I(X) = \Gamma_F(X) \end{equation} and that of $X_F$ in $\lambda_1(XA)$ is
\begin{equation}\label{eq:chi}
\sum_I\<Y_F,\Lambda^I\>M_I(X) = (-1)^{|F|}\Gamma_F(-X) =: \chi_F(X). \end{equation} \end{theorem} \qed
Recall from \cite{NCSF2} that the involution $X\mapsto -X$ of $QSym$ is the adjoint of $A\mapsto -A$ in ${\bf Sym}$, so that \begin{equation}\label{eq:def-X}
M_I(-X)= (-1)^{\ell(I)}\sum_{J\le I}M_J(X)\quad\text{and}\quad F_I(-X)=(-1)^{|I|}F_{\bar I^\sim}(X). \end{equation}
\subsection{Dendriform structure} One can also describe the product $X_{F_1}X_{F_2}$ in the following way. Endow $F_1$ with its canonical labelling, and $F_2$ with its canonical labelling shifted by the number $n_1$ of vertices of $F_1$. Then, the coefficient of $X_F$ in the product is equal to the number of standard decreasing (from the roots towards the leaves) labellings of $F$ whose restriction to $[1,n_1]$ is $F_1$, and whose restriction to $[n_1+1,n_1+n_2]$ is $F_2$.
This allows one to define dendriform half-products: $X_{F_1}\prec X_{F_2}$ consists of the forests whose first label in the postfix order is $n_1+1$, and $X_{F_1}\succ X_{F_2}$ of those whose first label is $1$. In particular, \begin{equation} X_T\prec X_F = X_{FT}. \end{equation} Actually, ${\mathcal H}_{NCK}^*$ can be identified with the Loday-Ronco Hopf algebra ${\bf PBT}$ \cite{Foi00,Foi01}. It is easily seen that its dendriform products are induced by those of ${\bf FQSym}$, so that the coefficient of $X_F$ in ${\bf P}_t$ is equal to the number of linear extensions of $F$ whose decreasing tree has shape $t$.
\subsection{PreLie and brace structures} The product rule shows that for two trees, $X_{T_1T_2}$ and $X_{T_2T_1}$ have the same coefficient in $X_{T_1}X_{T_2}$. Thus, $[X_{T_1},X_{T_2}]$ is a linear combination of trees, and the primitive Lie algebra admits the $X_T$ as a basis.
The commutator $[X_{T_1},X_{T_2}]$ is the difference between the sum of the $X_T$ obtained by grafting $T_1$ on a node of $T_2$ and that of those obtained by grafting $T_2$ on a node of $T_1$. If one denotes by $X_{T_1}\triangleright X_{T_2}$ the first sum, then $\triangleright$ defines a right preLie product. There is also a left preLie product, $a\triangleleft b=b\triangleright a$.
{\footnotesize For example, \begin{equation}
[X_\bullet,X_{\arbuga} ] = 2X_{\arbdgb} = X_\bullet\triangleright X_{\arbuga}-X_{\arbuga}\triangleright X_\bullet. \end{equation} }
The preLie algebra generated by $X_\bullet$ is free. For a non-plane rooted tree $\tau$, the element \begin{equation}
x_\tau = |{\rm Aut}(\tau)|\sum_{\bar T= \tau}X_T \end{equation} (where $\bar T=\tau$ means that the rooted tree obtained by forgetting the planar structure of $T$ is $\tau$) gets identified with the Chapoton-Livernet basis of the free preLie algebra on one generator.
The brace product which extends the preLie product and induces the associative product is \cite{Foi3} \begin{equation}
\langle X_{T_1\cdots T_r}, X_T\rangle_\triangleright = \sum_{T'}X_{T'} \end{equation} where the sum runs over all trees $T'$ obtained by grafting $T_1,\ldots,T_r$ on nodes of $T$, respecting their order. One has then, writing $B(X_F)$ for $X_{B_+(F)}$, $B(X_FX_{F'})=\<X_F, B(X_{F'})\rangle$ and \begin{equation}
\<X_F, \<X_{F'},X_T\rangle\rangle =\<X_FX_{F'},X_T\rangle. \end{equation} The primitive Lie algebra of ${\mathcal H}_{NCK}^*$ is the free brace algebra on one generator.
In terms of the dendriform operations, the preLie product is $a\triangleright b =a\succ b-b\prec a$, and one has then as usual $\Lambda_n=X_\bullet\prec\Lambda_{n-1}$ and $S_n=S_{n-1}\succ X_\bullet$.
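{\footnotesize For example, in degree $2$, $X_\bullet \prec X_\bullet = X_{\bullet\bullet}$ while $X_\bullet \succ X_\bullet = X_{\bullet\bullet}+X_{\arbuga}$, since in the decreasing labelling of $\arbuga$ the first label in the postfix order is the $1$ carried by the leaf. This recovers $\Lambda_2 = X_{\bullet\bullet}$ and $S_2 = X_{\bullet\bullet}+X_{\arbuga}$, and gives for the preLie product \begin{equation} X_\bullet\triangleright X_\bullet = X_\bullet\succ X_\bullet - X_\bullet\prec X_\bullet = X_{\arbuga}, \end{equation} the grafting of $\bullet$ on the unique node of $\bullet$. }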
\subsection{A quotient of ${\bf FQSym}$} Let ${\bf M}_\sigma$ be the dual basis of ${\bf S}^\sigma$. The above embedding of ${\mathcal H}_{NCK}$ into ${\bf FQSym}$ allows one to identify $X_F$ with the image of ${\bf M}_{\sigma_F^{-1}}$ in the quotient of ${\bf FQSym}$ by the relations ${\bf M}_\sigma\equiv 0$ if $\sigma$ contains the pattern $132$.
{\footnotesize For example, \begin{equation} X_{\arbuga}X_{\arbuga} = 2X_{\arbuga\arbuga}+X_{\arbtgb}+X_{\arbtgd}+X_{\arbtge} \end{equation} and \begin{equation} \begin{aligned} {\bf M}_{12}{\bf M}_{12}& =
{\bf M}_{1 2 3 4} + 2{\bf M}_{{\bf 1 3 2} 4} + {\bf M}_{{\bf 1 3} 4 \bf 2} + {\bf M}_{{\bf 1 4 2} 3}\\ &+ {\bf M}_{2 3 1 4}
+ {\bf M}_{{\bf 2 4} 1 \bf 3} + {\bf M}_{ 3 1 2 4} + {\bf M}_{ 3 \bf 1 4 2} + 2{\bf M}_{3 4 1 2}\\ &\equiv 2{\bf M}_{3 4 1 2} + {\bf M}_{3124} + {\bf M}_{2314} +{\bf M}_{1 2 3 4}\\ & = 2{\bf M}_{\sigma_{\arbuga\arbuga}^{-1}}+{\bf M}_{\sigma_{\arbtgb}^{-1}}+{\bf M}_{\sigma_{\arbtgd}^{-1}}+{\bf M}_{\sigma_{\arbtge}^{-1}} \end{aligned} \end{equation} }
To reconstruct the forest $F$ from its maximal linear extension $\sigma_F$, one must construct the binary search tree of its mirror image $\overline{\sigma_F}$ and take its right branch.
{\footnotesize For example, the tree $3100200$
{ \newcommand{\nodea}{\node[draw,circle] (a) {$7$} ;}\newcommand{\nodeb}{\node[draw,circle] (b) {$2$} ;}\newcommand{\nodec}{\node[draw,circle] (c) {$1$} ;}\newcommand{\noded}{\node[draw,circle] (d) {$3$} ;}\newcommand{\nodee}{\node[draw,circle] (e) {$6$} ;}\newcommand{\nodef}{\node[draw,circle] (f) {$4$} ;}\newcommand{\nodeg}{\node[draw,circle] (g) {$5$} ;}\begin{tikzpicture}[auto] \matrix[column sep=.2cm, row sep=.2cm,ampersand replacement=\&]{
\& \nodea \& \& \& \\
\nodeb \& \noded \& \& \nodee \& \\
\nodec \& \& \nodef \& \& \nodeg \\ };
\path[ultra thick, black] (b) edge (c)
(e) edge (f) edge (g)
(a) edge (b) edge (d) edge (e); \end{tikzpicture}}
has $\sigma_T = 5463127$, and the binary search tree of $\overline{\sigma_T }$ is
{ \newcommand{\nodea}{\node[draw,circle] (a) {$7$} ;}\newcommand{\nodeb}{\node[draw,circle] (b) {$2$} ;}\newcommand{\nodec}{\node[draw,circle] (c) {$1$} ;}\newcommand{\noded}{\node[draw,circle] (d) {$3$} ;}\newcommand{\nodee}{\node[draw,circle] (e) {$6$} ;}\newcommand{\nodef}{\node[draw,circle] (f) {$4$} ;}\newcommand{\nodeg}{\node[draw,circle] (g) {$5$} ;}\begin{tikzpicture}[auto] \matrix[column sep=.2cm, row sep=.2cm,ampersand replacement=\&]{
\& \& \& \& \& \& \& \& \& \nodea \& \\
\& \nodeb \& \& \& \& \& \& \& \& \& \\
\nodec \& \& \& \noded \& \& \& \& \& \& \& \\
\& \& \& \& \& \& \& \nodee \& \& \& \\
\& \& \& \& \& \nodef \& \& \& \& \& \\
\& \& \& \& \& \& \nodeg \& \& \& \& \\ };
\path[ultra thick, black] (f) edge (g)
(e) edge (f)
(d) edge (e)
(b) edge (c) edge (d)
(a) edge (b); \end{tikzpicture}}
}
\section{A multivariate version of the Catalan family} \subsection{A generic factorisation of $\sigma_a$}
The derivation of the Catalan idempotents presented in \cite[Sec. 10]{MNT} can be interpreted as a Birkhoff factorisation of the character of $QSym$ defined by \begin{equation}\label{eq:defphi} \varphi(M_I)=\begin{cases}a^n&\text{if $I=(n)$}\\ 0&\text{otherwise,}\end{cases} \end{equation} for a certain choice of $a$ in a Rota-Baxter algebra of functions on ${\mathbb R}$, the Rota-Baxter map being the multiplication by the indicator function of ${\mathbb R}^+$.
Let us now look at the generic factorisation of this character, for an arbitrary Rota-Baxter algebra ${\mathcal A}={\mathcal A}_+\oplus {\mathcal A}_-$, with projectors $P_+$ and $P_-$. Under the embedding \eqref{eq:SymInH} of ${\bf Sym}$ into ${\mathcal H}_{NCK}^*$, we have \begin{equation} \sigma_a = \sum_{n\ge 0}a^n S_n =\sum_{F\in{\mathcal F}}\varphi(Y_F)X_F. \end{equation} Writing the Birkhoff factorisation $\varphi^+=\varphi^-\star\varphi$ as \begin{equation}
\sigma_a^+ =\sigma_a^-\sigma_a, \quad \sigma_a^\pm =\sum_{F\in{\mathcal F}}\varphi^\pm(Y_F)X_F, \end{equation} we can easily calculate $\varphi^\pm$ by remarking that \begin{equation} \lambda_{-a} = \sum_{n\ge 0}(-a)^nX_{\underbrace{\bullet\bullet\cdots\bullet}_{n}}=:\sum_{F\in{\mathcal F}}\alpha(Y_F)X_F \end{equation} where the character $\alpha$ is defined \textbf{on trees} by \begin{equation} \alpha(Y_T)=\begin{cases}-a&\text{if $T=\bullet$}\\0&\text{otherwise.}\end{cases} \end{equation} Since $\sigma_a^+\lambda_{-a}=\sigma_a^-$, we have $\varphi^+\star \alpha=\varphi^-$, which gives, for $T=B_+(F)$, \begin{equation} \varphi^+(Y_T)+\varphi^+(Y_F)\alpha(Y_\bullet) = \varphi^+(Y_T)-\varphi^+(Y_F)a=\varphi^-(Y_T) \end{equation} and applying $P_+$ and $P_-$, we obtain the recursive formulas \begin{align}\label{recphi} \varphi^+(Y_T) &= P_+(\varphi^+(Y_F)a),\\ \varphi^-(Y_T)&= -P_-(\varphi^+(Y_F)a). \end{align}
\subsection{Example: polar part of a Laurent series} Let us now take ${\mathcal A}= {\mathbb C}[z^{-1},z]]$ with ${\mathcal A}_+=z^{-1}{\mathbb C}[z^{-1}]$ and ${\mathcal A}_-={\mathbb C}[[z]]$, and let \begin{equation} a = a(z) = \sum_{n\ge 0}a_nz^{n-1}. \end{equation} Here, $P_+(f)$ is defined as the polar part of $f$.
{\footnotesize We have, writing $a_{i_1\cdots i_r}$ for $a_{i_1}\cdots a_{i_r}$, \begin{align*} \varphi^+(Y_\bullet) &= P_+(a)=\frac{a_0}{z} \\ \varphi^+(Y_{\bullet\bullet}) &=P_+(a)^2=\frac{a_{00}}{z^2} \\ \varphi^+(Y_{\arbuga}) &= P_+\left(\frac{a_0}{z}\left(\frac{a_0}{z}+a_1+\cdots\right)\right) = \frac{a_{00}}{z^2}+\frac{a_{01}}{z} \\ \varphi^+(Y_{\arbdgb}) &= P_+\left(\frac{a_{00}}{z^2}\left(\frac{a_0}{z}+a_1+a_2z+\cdots\right)\right)= \frac{a_{000}}{z^3}+\frac{a_{010}}{z^2}+\frac{a_{002}}{z} \\ \varphi^+(Y_{\arbdga}) &= \frac{a_{000}}{z^3} + \frac{a_{010}}{z^2} + \frac{a_{001}}{z^2} + \frac{a_{011}}{z} + \frac{a_{002}}{z}. \end{align*} } On these examples, we can observe the following explicit description: \begin{theorem}\label{th:phiX} For any tree $T$, the value of the character $\varphi^+$ on $Y_T$ is given by
\begin{equation} \varphi^+(Y_T)=\sum_{F\ge T}a_F z^{-r(F)}\label{eq:phi+tamari} \end{equation} where the sum is over the Tamari order on plane forests, $r(F)$ denotes the number of roots of $F$ and $a_F=a_{c_1}\cdots a_{c_n}$ if $c_1\cdots c_n$ is the reverse Polish code of $F$. \end{theorem}
\noindent \it Proof -- \rm This is an immediate consequence of the recursive description of Tamari intervals given below.
\qed
\subsection{A recursive description of the Tamari order}
\begin{theorem} For a forest $F$, let $f(F)$ be the formal sum \begin{equation} f(F)=\sum_{G\ge F}G. \end{equation} Then for a tree $T=B_+(F)$, \begin{equation} f(T)= \sum_{F=F_1F_2}f(F_1)B_+(f(F_2)). \end{equation} \end{theorem}
In other words, the formal sum of the reverse Polish codes of the forests $G\ge B_+(F)$ is obtained by the following process: for each tree $T'=B_+(F')\ge T$, write down the reverse Polish code $a_{F'}=a_{c_1}\cdots a_{c_{n-1}}$, and take the sum of the words $a_{F'}a_i$ for $i=0,\ldots,r$, where $r$ is the number of connected components of $F'$. This amounts to encoding $F'$ by $\frac{a_{F'}}{z^r}$ and taking the polar part of $\frac{a_{F'}}{z^r}a(z)$, which implies Theorem \ref{th:phiX}.
{\footnotesize For example with $T= \arbtgc$, the codes of the trees $T'\ge T$ are $0021, 0102, 0003$. The above process gives for each of them \begin{align*} 0021 &\rightarrow 0020 + 0021,\\ 0102 &\rightarrow 0100 + 0101 + 0102,\\ 0003&\rightarrow 0000+0001+0002+0003, \end{align*} which are indeed the codes of the 9 forests $G \ge T$. }
\noindent \it Proof -- \rm Recall the cover relation of the Tamari order on plane trees: starting from a tree $T$ and a vertex $x$ that is neither its root nor a leaf, the trees $T'>T$ covering $T$ are obtained by cutting off the leftmost subtree of $x$ and grafting it back on the left of the parent of $x$.
So all elements in the Tamari order above a given element are obtained by a sequence of such moves which can be encoded by a sequence of numbers recording on which node each cut is done.
We shall prove the result for a forest containing a single tree since the proof works in the same way with a general forest. We shall actually prove a stronger result: all trees above a given tree can be obtained by a sequence of cuts where no cut is done inside a subtree that was already cut.
To see that, number the internal nodes of a tree in prefix order, so that any node has a label smaller than those of its descendants. Now, the path from a tree to a tree above it corresponds to a word on these labels recording in which order the nodes were cut. Assume that there is somewhere a factor $1i$ where $i>1$. Then we shall see that this factor can be rewritten either as $i1$ if $i$ is not the leftmost child of $1$ at this step, or as $i11$ if it is.
First, if $i$ is the leftmost child of the root, applying $1$ then $i$ leads to a forest containing three trees: the left subtree of $i$, the remaining part of the tree of root $i$ without its left child and the remaining part of the whole tree without its left child. One easily checks that we get the same result by applying $i11$ to the tree.
Moreover, if $i$ is not the leftmost child of the root, then $1$ and $i$ commute since they operate on separate parts of the tree.
So by induction, any word sending a tree $T$ to a tree $T'$ above it can be rewritten as a word where its 1s are at the end. The same argument applies to the other labels, whence the result. \qed
\subsection{Grouplikes and primitives in ${\bf Sym}$}
By definition, the series \begin{equation}
{\sf C}:=\sigma_a^+|_{z=1} \quad \text{and}\quad {\sf D} = \operatorname{Res}_{z=0}\sigma_a^+ \end{equation} are in ${\bf Sym}$. We shall see that ${\sf D}$ is a multiparameter version of the Catalan idempotent of \cite{MNT,FMNT}, which is obtained by the choice $a(z)=\frac{a}{z}+\frac{b}{1-z}$.
Indeed, recall that the basis $C_F$ of ${\mathcal H}_{NCK}^*$ is defined by \begin{equation} C_F=\sum_{G\le F}X_G. \end{equation} Thus, \begin{equation} {\sf C} = \sum_F\left(\sum_{G\ge F}a_G\right)X_F = \sum_G a_G\sum_{F\le G}X_F =\sum_G a_G C_G, \end{equation} where $a_G$ is the (commutative) product of the code of $G$, and since taking the residue amounts to restricting the sum to trees, \begin{equation} {\sf D}= \sum_T a_T C_T \end{equation} which gives back the expression obtained in \cite{FMNT} for $a(z)=\frac{a}{z}+\frac{b}{1-z}$.
The possible values of $a_T$ correspond to partitions $\lambda$ of $n-1$. The sums \begin{equation} {\sf D}_\lambda := \sum_{a_T=a_\lambda} C_T \end{equation} are therefore Lie quasi-idempotents of the descent algebra.
{\footnotesize For example, \begin{align*} {\sf D}_{(3)} &= C_{3000} = \bar\Psi_4,\\ {\sf D}_{(21)} & = C_{2100}+C_{2010}+C_{1200} = R_4-R_{22}+R_{121}-R_{111}+\Psi_4+\bar\Psi_4,\\ {\sf D}_{(111)} &= \Psi_4. \end{align*} }
\subsection{Expansions in ${\bf Sym}$}
To compute the expansions of ${\sf C}$ and ${\sf D}$ on the usual bases of ${\bf Sym}$, we start with the Birkhoff recurrence \cite{Man} \begin{align} \varphi^-(x)&=-P_-\big(\varphi(x)+\sum_{(x)}\varphi^-(x')\varphi(x'')\big)\label{eq:recBirk1}\\ \varphi^+(x)&=P_+\big(\varphi(x)+\sum_{(x)}\varphi^-(x')\varphi(x'')\big)\label{eq:recBirk2}. \end{align} This gives immediately \begin{align} \varphi^-(M_n)=-P_-(a^n),\ \varphi^+(M_n)=P_+(a^n),\\ \varphi^-(M_{ij})=P_-(P_-(a^i)a^j),\ \varphi^+(M_{ij})=-P_+(P_-(a^i)a^j),\ldots \end{align} and by induction, we arrive at the proposition below.
Let $a$ be an element of a Rota-Baxter algebra ${\mathcal A}$. We set $P_{\emptyset}^{ \emptyset}(a)=1_{{\mathbb K}}$ and for $I=(i_1,\dots,i_n)$, $\tmmathbf{\varepsilon} = (\varepsilon_1, \ldots, \varepsilon_n) \in \{+,-\}^n$, \begin{equation} P^{I}_{\tmmathbf{\varepsilon}}(a)=P_{\varepsilon_n}\left(P^{I'}_{\tmmathbf{\varepsilon}'}(a)a^{i_n}\right) \end{equation} where $I'=(i_1,\dots,i_{n-1})$ and $\tmmathbf{\varepsilon}'=(\varepsilon_1, \ldots, \varepsilon_{n-1})$. We also write for short $P_{\tmmathbf{\varepsilon}}(a)=P_{\varepsilon_1, \ldots, \varepsilon_n}(a)$ for the element of ${\mathcal A}_{\varepsilon_n}$ equal to $P^{I}_{\tmmathbf{\varepsilon}}(a)$ where $i_1=i_2=\dots=i_n=1$.
{\footnotesize For instance,\[ P^{1,2,3}_{+,-,-}(a)=P_-(P_-(P_+(a)a^2)a^3)\text{ and } P_{+,+,-}(a)=P_-(P_+(P_+(a)a)a). \] }
\begin{proposition}\label{prop:basisexp} Let $a$ be an element of a Rota-Baxter algebra ${\mathcal A}$. Then, \begin{align}
\sigma_a^+&=\sum_I (-1)^{l(I)-1}P^{I}_{(-)^{l(I)-1},+}(a)S^I\\
&=\sum_I (-1)^{|I|+l(I)}P^{I}_{(+)^{l(I)}}(a)\Lambda^I\\
&=1+\sum_{\tmmathbf{\varepsilon}\in \mathcal{E}}P_{\tmmathbf{\varepsilon},+}(a)R_{\tmmathbf{\varepsilon},\bullet} \end{align} and \begin{align}
\sigma_a^-&=\sum_I (-1)^{l(I)}P^{I}_{(-)^{l(I)}}(a)S^I\\
&=\sum_I (-1)^{|I|+l(I)-1}P^{I}_{(+)^{l(I)-1},-}(a)\Lambda^I\\
&=1-\sum_{\tmmathbf{\varepsilon}\in \mathcal{E}}P_{\tmmathbf{\varepsilon},-}(a)R_{\tmmathbf{\varepsilon},\bullet} \end{align} \end{proposition}
We use in the last equation the {\it signed ribbon basis} of ${\bf Sym}$ (see \cite{MNT}),
which is a slight modification of the noncommutative ribbon Schur functions: for any sequence of signs $\mathbf{\varepsilon}=(\varepsilon_1,\dots,\varepsilon_{n-1})$, \begin{equation} R_{\mathbf{\varepsilon} \bullet}=(-1)^{l(I)-1}R_I\quad (R_\emptyset=1,\ R_{\bullet}=R_1) \end{equation} where $I=(i_1,\dots,i_r)$ is the composition of $n$ such that \begin{equation} D(I):=\{i_1,i_1+i_2,\dots,i_1+\dots+i_{r-1}\}=\{1\leq i \leq n-1 \ ;\ \varepsilon_i=-\}. \end{equation} \noindent \it Proof -- \rm The expansions on the $S^I$ follow immediately from the recurrence \eqref{eq:recBirk1}-\eqref{eq:recBirk2}. The other ones admit an interesting explanation in terms of the free Rota-Baxter algebra on one generator, which can be realized as a subalgebra of the algebra of sequences of multivariate polynomials, with pointwise addition and product.
Let ${\bf x}$ be a sequence of variables ${\bf x} = (x_1,x_2,x_3,\ldots)$ and for a sequence ${\bf z}$, define \begin{equation}
R({\bf z}) = (0,z_1,z_1+z_2,z_1+z_2+z_3,\ldots). \end{equation} This is a Rota-Baxter operator of weight $1$. It generates from ${\bf x}$ the free Rota-Baxter algebra
$\mathfrak{A}({\bf x})$ \cite{Rota}. Define \begin{equation}
P_+ =-R,\quad P_- = I-P_+, \end{equation} which are now of weight $-1$. Set also $V({\bf z})=(z_2,z_3,\ldots)$.
The subalgebra generated by the $R({\bf x}^n)$ is isomorphic to $Sym$: \begin{equation} R({\bf x}^n) = (p_n(0),p_n(x_1),p_n(x_1,x_2),\ldots):=\tilde p_n, \end{equation} but there is also an embedding of $QSym$ in $\mathfrak{A}({\bf x})_+:=P_+(\mathfrak{A}({\bf x}))$ given by the same rule \begin{equation} f\mapsto \tilde f :=(f(0),f(x_1), f(x_1,x_2),\ldots). \end{equation} Its image under $V$ gives an embedding of $QSym_+$ into $\mathfrak{A}({\bf x})_-$.
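{\footnotesize For instance, the image of $M_{11}$ can be computed directly from $R$: \begin{equation} R(R({\bf x}){\bf x}) = R\big( (0,\ x_1x_2,\ (x_1+x_2)x_3,\ \ldots)\big) = (0,\ 0,\ x_1x_2,\ x_1x_2+x_1x_3+x_2x_3,\ \ldots) = \tilde M_{11}. \end{equation} }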
It is now easy to show by induction that for $I=(i_1,\ldots,i_r)$, \begin{equation}
\tilde M_I = R(\tilde M_{i_1\cdots i_{r-1}}{\bf x}^{i_r}). \end{equation} If we number the applications of $R$ in the above expression of $M_I$,
{\footnotesize \noindent for example \begin{equation} \tilde M_{i_1i_2i_3i_4}=R_4(R_3(R_2(R_1(a^{i_1}){\bf x}^{i_2}){\bf x}^{i_3}){\bf x}^{i_4}) \end{equation} }
\noindent and replace some $R_j$ by $R'=I+R=P_-$, the result is now $\tilde M_I+\tilde M_{I'}$, where $I'=(i_1,\ldots,i_j+i_{j+1},i_{j+2},\ldots)$.
{\footnotesize For example, \begin{equation} \tilde M_{i_1i_2i_3i_4}\mapsto R_4(R_3((I+R_2)(R_1({\bf x}^{i_1}){\bf x}^{i_2}){\bf x}^{i_3}){\bf x}^{i_4})
=\tilde M_{i_1i_2i_3i_4}+\tilde M_{i_1,i_2+i_3,i_4}. \end{equation} }
Applying this to $M_{1^n}$, the replacement of $R_i$ by $R'_i$ has the effect of removing $i$ from the descent set of $1^n$. Iterating, we see that replacing $R_{d_1},\ldots,R_{d_k}$ by $R'$ yields all compositions whose descent set contains the complement of $\{d_1,\ldots,d_k\}$. The result is therefore $\tilde F_{\bar I^\sim}$.
Let us now look at the coefficient of $S^I$ in $\sigma_{\bf x}^+$. For example, for $I=(i,j,k)$, this is \begin{align*}
(-1)^3P^{ijk}_{--+}({\bf x})&= R(((I+R)((I+R){\bf x}^i){\bf x}^j){\bf x}^k)\\
&=R({\bf x}^{i+j+k}+R(R({\bf x}^i){\bf x}^{j+k}+R({\bf x}^{i+j}){\bf x}^k+R(R({\bf x}^i){\bf x}^j){\bf x}^k)\\
&= \tilde M_{i+j+k}+\tilde M_{i,j+k}+\tilde M_{i+j,k}+\tilde M_{ijk}\\
&= \sum_{J\le I}\tilde M_J\\
&=(-1)^3 \widetilde{M_{ijk}(-X)}. \end{align*}
Thus, the Birkhoff factorisation of the character \eqref{eq:defphi} of $QSym$ is given by
\begin{equation}
\varphi^+(M_I)=\widetilde{M_I(-X)},\quad
\varphi^-(M_I)=V\widetilde{M_I(-X)}, \end{equation} and \begin{equation}
\varphi^+(F_I)=\tilde F_I(-X)=(-1)^{|I|}\tilde F_{\bar I^\sim},\quad \varphi^-(F_I) =V\varphi^+(F_I). \end{equation} Now, the coefficient of $\Lambda^I$ in $\sigma_{\bf x}^+$ is $\tilde M_I(X)$, whence the second equalities in Prop. \ref{prop:basisexp}, and that of $R_I$ is, up to sign, $\tilde F_{\bar I^\sim}(X)$, which can be expressed as $\pm P_{\tmmathbf{\varepsilon},+}({\bf x})$. \qed
\subsection{Combinatorial interpretation of the coefficients}
Evaluating the above expression for $\varphi^+(M_I)$ on ${\bf x}=a(z)$, but without assuming that the $a_i$ commute, yields a set of words which can be characterized by certain inequalities involving partial sums of the subscripts. Recall that
\begin{equation} \sigma_a^+ = \sum_{I} \varphi^+(M_I) S^I, \end{equation} where $\varphi^+(M_n)=P_+(a^n)$, and for $I=(I',i_p)$, \begin{equation} \left\{ \begin{array}{l} \varphi^+(M_I) = P_+ (\varphi^-(M_{I'}) a^{i_p}) \\ \varphi^-(M_I) = - P_- (\varphi^-(M_{I'}) a^{i_p}). \end{array} \right. \end{equation}
So if $I=(i_1,\dots,i_p)$, the evaluation of $\varphi^+(M_I)$ amounts to computing $P_-(a^{i_1})$, then multiplying by $a^{i_2}$, applying $P_-$ again, multiplying the result by $a^{i_3}$, and so on, up to the last step, where one applies $P_+$ instead of $P_-$.
Thus, up to a global sign $(-1)^{\ell(I)-1}$, $\varphi^+(M_I)$ is a sum of monomials in $z^{-1}$ and the $a_i$. Such a monomial is a product of $n$ terms of the series $a$ which survive the sequence of $P_-$ and the final $P_+$. Writing this product as a word, considering that the $a_i$ do not commute, and replacing $a_i$ with $i$ (ignoring the power of $z$ that can be reconstituted in the end), we obtain a word $w=w_1\dots w_n$ over the integers such that \begin{equation} \left\{ \begin{array}{l} w_1+\dots + w_{d_k} \geq d_k, \text{\ for all $k<p$}, \\ w_1+\dots + w_{i_1+\dots+i_p} < i_1+\dots+i_p, \end{array} \right. \end{equation} where $\{d_1=i_1,d_2=i_1+i_2,\dots,d_{p-1}=i_1+\dots+i_{p-1}\}$ is the \emph{descent set} $D(I)$ of $I$. Denote this set of words by $S(I)$.
{\footnotesize Let us check this observation on $\varphi^+(M_{112})$, which is \begin{equation} \frac{a_3 a_0^3}{z} + \frac{4a_2a_1a_0^2}{z} + \frac{2a_1^3a_0}{z} + \frac{a_2a_0^3}{z^2} + \frac{a_1^2a_0^2}{z^2}. \end{equation} It is indeed obtained from the $9$ words $w=w_1w_2w_3w_4$ satisfying \begin{equation} w_1\geq 1, \quad w_1+w_2\geq 2, \quad w_1+\dots+w_4 < 4, \end{equation} that are \begin{equation} 3000,\ 2100,\ 2010,\ 2001,\ 1200,\ 1110,\ 1101,\ 2000,\ 1100, \end{equation} by sending each value $i$ to $a_i z^{i-1}$. }
For a word over the integers, define \begin{equation} w_{1:k}:=\sum_{i=1}^kw_i \end{equation} and let \begin{equation}
W(I)=\{w|w_{1:k}\ge k \ \text{if $k\in D(I)$ and}\ w_{1:k}<k\ \text{otherwise}\}, \end{equation} so that \begin{equation} S(I)=\bigsqcup_{J\ge I} W(J). \end{equation} Thus, if one writes as an intermediate expression for $\varphi^+(M_I)$ the sum \begin{equation} \begin{split} \sum_I (-1)^{\ell(I)-1} S^I \sum_{w\in S(I)} w &= \sum_I (-1)^{\ell(I)-1 }S^I \sum_{J\ge I}\sum_{w\in W(J)} w\\ &=\sum_J\sum_{w\in W(J)} w\sum_{I\le J}(-1)^{\ell(I)-1}S^I \end{split} \end{equation} one can see that the coefficient of a word $w\in W(J)$ is, up to a sign $(-1)^{\ell(J)-1}$, the ribbon $R_J$.
So the expansion of $\sigma_a^+$ in the ribbon basis is obtained by listing the words $w=w_1\dots w_n$ satisfying $w_1+\dots+w_n<n$ (there are $\binom{2n-1}{n}$ of them). Each such $w$ belongs to a unique $W(I)$, which determines its coefficient $(-1)^{\ell(I)-1}R_I$ and a factor $z^{w_{1:n}-n}$.
{\footnotesize For example, here are all possible words for $n=3$ with the corresponding compositions: \begin{equation} \begin{array}{llllllllll} 000 & 001 & 010 & 100 & 002 & 020 & 200 & 011 & 101 & 110 \\
3 & 3 & 3 & 12 & 3 & 21 & 111 & 3 & 12 & 111 \end{array} \end{equation}
For $n=4$, here is the complete list of all words contributing to each $R_I$:
\begin{equation} \begin{array}{ll} 4 & 0000,\ 0100,\ 0010,\ 0001,\ 0110,\ 0101,\ 0020,\ 0011,\ 0002,\\
& 0111,\ 0102,\ 0021,\ 0012,\ 0003, \\ 31 & 0120,\ 0030, \\ 22 & 0200,\ 0201, \\ 13 & 1000,\ 1010,\ 1001,\ 1011,\ 1002, \\ 211 & 0300,\ 0210, \\ 121 & 1020, \\ 112 & 2000,\ 1100,\ 2001,\ 1101, \\ 1111 & 3000,\ 2100,\ 2010,\ 1200,\ 1110. \end{array} \end{equation} }
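The classification of words by compositions can be checked mechanically. The following Python sketch (ours, assuming only the descent characterization of $W(I)$ above) reproduces the $n=4$ table:

```python
from itertools import product
from collections import defaultdict

def composition_of(w):
    """Composition I of n = len(w) whose descent set is
    {k < n : w_1 + ... + w_k >= k}; the word w then lies in W(I)."""
    n = len(w)
    descents, total = [], 0
    for k in range(1, n):
        total += w[k - 1]
        if total >= k:
            descents.append(k)
    # rebuild the composition from its descent set
    parts, prev = [], 0
    for d in descents + [n]:
        parts.append(d - prev)
        prev = d
    return tuple(parts)

# Group all words of length 4 with sum < 4 by their composition.
table = defaultdict(set)
for w in product(range(4), repeat=4):
    if sum(w) < 4:
        table[composition_of(w)].add(w)

assert sum(len(v) for v in table.values()) == 35   # binom(2n-1, n) for n = 4
assert len(table[(4,)]) == 14
assert table[(1, 1, 1, 1)] == {(3, 0, 0, 0), (2, 1, 0, 0), (2, 0, 1, 0),
                               (1, 2, 0, 0), (1, 1, 1, 0)}
```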
We already know from previous works \cite{MNT,FMNT} that if $a_0=a, a_i=b$ for $i>0$, the coefficient of a $R_I$ is (up to a global sign) a product of Narayana polynomials. Since the coefficients in the general case are sums of monomials with the same sign, this implies that the cardinalities of the sets $W(I)$ are products of Catalan numbers. This can be seen directly as follows.
Recall the correspondence between Łukasiewicz words (Polish codes of plane trees) and Dyck paths. The code of a plane tree is obtained by labelling each node by its number of children, and traversing it in prefix order.
{\footnotesize An example would be \begin{equation} w= 40201200010 \end{equation} }
These codes are characterized by the following property: if one forms a word $u$ by subtracting 1 from each entry of $w$, the partial sums $u_{1:i}$ are all nonnegative, except for the last one, which is $-1$.
{\footnotesize On our example, \setcounter{MaxMatrixCols}{20} \begin{equation} \begin{matrix} 4 & 0 & 2 & 0 & 1 & 2 & 0 & 0 & 0 & 1 & 0\\ 3 &-1 & 1 &-1 & 0 & 1 &-1 &-1 &-1 & 0 &-1\\ 3 & 2 & 3 & 2 & 2 & 3 & 2 & 1 & 0 & 0 & -1 \end{matrix} \end{equation} }
This characterization means that if one replaces each integer $i$ by the word $a^ib$, one obtains a word $wb$, where $w$ is a Dyck word\footnote{Here, the letter $a$ stands for an upstep and $b$ for a downstep.}.
{\footnotesize On our example, this yields \begin{equation} aaaab.b.aab.b.ab.aab.b.b.b.ab\cdot b \end{equation} }
The partial sums $u_{1:i}$ give the height of the corresponding Dyck path after the $i$th $b$.
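As a sanity check, a few lines of Python (ours) verify this characterization on the example $w=40201200010$:

```python
# Check that w = 40201200010 is the Polish code of a plane tree:
# with u_i = w_i - 1, all partial sums are >= 0 except the last, which is -1.
w = [4, 0, 2, 0, 1, 2, 0, 0, 0, 1, 0]
u = [x - 1 for x in w]
partial, s = [], 0
for x in u:
    s += x
    partial.append(s)
assert partial == [3, 2, 3, 2, 2, 3, 2, 1, 0, 0, -1]
assert all(p >= 0 for p in partial[:-1]) and partial[-1] == -1

# Replacing each entry i by a^i b yields a Dyck word followed by a final b
# (a = upstep, b = downstep): the prefix stays nonnegative and ends at 0.
word = "".join("a" * i + "b" for i in w)
h = 0
for c in word[:-1]:
    h += 1 if c == "a" else -1
    assert h >= 0
assert h == 0
```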
This description can be extended to the sets $W(I)$. The word obtained by replacing each entry $k$ by $a^kb$ in $w$ encodes a lattice path starting at the origin, and ending at $(2n+1,-1)$. Applying the transformation $u_i=w_i-1$ to $W(I)$ results in the set of words \begin{equation} U(I)=
\{u|u_{1:k}\ge 0 \ \text{if $k\in D(I)$ and}\ u_{1:k}<0\ \text{otherwise}\}. \end{equation} Again, the partial sums $u_{1:i}$ of such words record the heights attained by the lattice path associated with $w$ after the $i$th $b$.
Represent a composition $I=(i_1,\dots,i_p)$ of $n$ as a sequence of $n$ symbols $+$ and $-$ with a $-$ in position $k$ if $k$ is a descent of $I$, and a $+$ otherwise.
{\footnotesize For example, $312$ is represented as $++--++$ and $3111$ as $++---+$. }
Then, the cardinality of $W(I)$ is $\prod_i C_{i}$, where $i$ runs over the lengths of the blocks of identical signs.
{\footnotesize For example, $W(312)$ contains $C_2^3=8$ words and $W(3111)$ has $C_2C_3=10$ elements. }
Indeed, the blocks of symbols $+$ correspond to sections of the path associated with $w$ lying under the horizontal axis, and the blocks of $-$ to sections where it remains above the axis. The sections of the path determined by these blocks are alternately Dyck paths or negatives of Dyck paths, whence the product of Catalan numbers. Counting them by number of peaks gives back the products of Narayana polynomials already mentioned.
{\footnotesize For example, let us decompose $W(4111)$. The corresponding signed word is $+++---+$. There should be $25$ such words. Let us write these as a $5\times5$ square where words in the same column have the same first three values. \begin{equation} \begin{array}{lllll} 0006000 & 0015000 & 0105000 & 0024000 & 0114000 \\ 0005100 & 0014100 & 0104100 & 0023100 & 0113100 \\ 0005010 & 0014010 & 0104010 & 0023010 & 0113010 \\ 0004200 & 0013200 & 0103200 & 0022200 & 0112200 \\ 0004110 & 0013110 & 0103110 & 0022110 & 0112110 \end{array} \end{equation} The path corresponding to $0004200$ is $bbbaaa.abaabb.b$, and that corresponding to $0112200$ is $bababa.abaabb.b$. One can check that all pairs of Dyck paths are obtained. Note that in each row, the values $(w_4,w_5,w_6)$ are the same if one replaces the fourth one by $w_4+(w_1+w_2+w_3)-3$. The sequence of these values becomes \begin{equation} 300, 210, 201, 120, 111, \end{equation} which is indeed the set of the first three values associated with the composition $1111$, and the Polish codes of plane trees with 4 nodes except for their final 0. }
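The Catalan-product formula for the cardinalities can be confirmed by brute-force enumeration; the following Python sketch (ours) checks the counts quoted above:

```python
from math import comb

def W_size(I):
    """Brute-force cardinality of W(I): words w of length n = |I| such that
    w_1 + ... + w_k >= k exactly when k is a descent of I (and < k otherwise)."""
    n = sum(I)
    descents, d = set(), 0
    for part in I[:-1]:
        d += part
        descents.add(d)

    def count(k, s):
        # k letters placed so far, with partial sum s; all constraints up to k hold
        if k == n:
            return 1
        total = 0
        for w in range(n - s):          # keeps every partial sum below n
            t = s + w
            if (t >= k + 1) == ((k + 1) in descents):
                total += count(k + 1, t)
        return total

    return count(0, 0)

catalan = lambda m: comb(2 * m, m) // (m + 1)
assert W_size((3, 1, 2)) == catalan(2) ** 3 == 8               # blocks ++ / -- / ++
assert W_size((3, 1, 1, 1)) == catalan(2) * catalan(3) == 10   # blocks ++ / --- / +
assert W_size((4, 1, 1, 1)) == catalan(3) ** 2 == 25           # blocks +++ / --- / +
```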
\section{Lie idempotents of the descent algebra}
We shall now describe the expansions of several Lie idempotents of the descent algebra on the $X$-basis. To this aim, we shall need several versions of the $(1-q)$-transform.
Recall that for ordinary symmetric functions, the alphabet $\frac{X}{1-q}$ is the set $\{q^ix_j|i\ge 0, x_j\in X\}$. It can be extended to noncommutative symmetric functions by choosing a total order of the products $q^ia_j$, which can of course be done in infinitely many ways, but only four of them are natural: take the lexicographic order on the pairs $(q^i,a_j)$ or $(a_j,q^i)$, keeping the original order on $A$ and ordering the $q^i$ in ascending or descending order of the exponents. This leads to four possible definitions of the $(1-q)$-transform as the respective inverses of the above transforms. In the sequel we shall define them directly by specifying the image of the $S_n$.
\subsection{Dynkin} \begin{proposition} The right Dynkin $\bar\Psi_n=[1,[2,[3,\ldots[n-1,n]\ldots]]]$ is the sum of all trees \begin{equation}
\bar\Psi_n = \sum_{|T|=n}X_T, \end{equation} and the left Dynkin $\Psi_n=[\ldots[[1,2],3],\ldots,n]$ is the linear tree $X_{L_n}$: \begin{equation} \Psi_n = X_{L_n} = ((X_\bullet\triangleright X_\bullet)\cdots)\triangleright X_\bullet. \end{equation} \end{proposition}
\noindent \it Proof -- \rm We first apply Theorem \ref{th:coeffX} to $X=1-q$, defined by \begin{equation} S_n((1-q)A)= (1-q)\sum_{k=0}^n(-q)^k R_{1^k,n-k}(A), \end{equation}
so that $\Psi_n(A) = \frac1{1-q}S_n((1-q)A)|_{q=1}$, and $F_I(1-q)$ is nonzero only for $I$ of the type $(1^k,n-k)$.
Every forest with $k+1$ leaves has a unique maximal linear extension of this shape, obtained by reading its leaves from right to left and then taking the postorder reading of the remaining nodes. It has therefore ${k\choose i}$ linear extensions of shape $(1^i,n-i)$ for $0\le i\le k$, so that $\Gamma_F(1-q)=(1-q)^{k+1}$ is divisible by $(1-q)^2$ except for $k=0$, which means that $F=L_n$ is a linear tree.
To deal with $\bar \Psi_n$, we need another version of the $1-q$ transform, denoted by $1+(-q)$, and defined\footnote{
This strange notation is justified by the fact that addition of alphabets is not commutative, and that $X-Y$ is defined as $(-Y)+X$, {\it cf.} \cite{NCSF2}.} by $F_I(1+(-q))=(1-q)(-q)^k$ if $I=(n-k,1^k)$ and 0 otherwise,
so that $\bar \Psi_n(A) = \frac1{1-q}S_n((1+(-q))A)|_{q=1}$. A permutation of shape $I=(n-k,1^k)$ cannot be a linear extension of a tree, unless $k=0$, in which case it is the identity, the common linear extension of all trees. Thus, $\bar\Psi_n$ is the sum of all trees with $n$ nodes. \qed
\subsection{Eulerian idempotents}
Take the binomial alphabet $\alpha$ defined by $\sigma_1(\alpha A)=\sigma_1^\alpha$, so that $M_I(\alpha)={\alpha\choose\ell(I)}$, and $F_I(\alpha)={\alpha+n-r\choose n}$ where $n=|I|$ and $r=\ell(I)$. Then, the Solomon idempotent $\varphi$ (often denoted by $\Omega$, and also known as the first Eulerian idempotent) is given by \begin{equation}
\varphi := \log \sigma_1 = \left.\frac{d}{d\alpha}\right|_{\alpha=0}\exp{\alpha\varphi} =\left.\frac{d}{d\alpha}\right|_{\alpha=0}\sigma_1(\alpha A), \end{equation} so that the coefficient of $X_T$ in $\varphi$ is \begin{equation}
\left.\frac{d}{d\alpha}\right|_{\alpha=0}\Gamma_T(\alpha). \end{equation}
Equivalently, with the notation of Theorem \ref{th:coeffX} \begin{equation} \sum_F \chi_F(\alpha)X_F = \lambda_1(A)^\alpha =\exp\left\{\alpha\sum_{n\ge 1}(-1)^{n-1}\varphi_n\right\} \end{equation} and for a forest of degree $n$, \begin{equation}
\left.\frac{d}{d\alpha}\right|_{\alpha=0}\chi_F(\alpha) = (-1)^{n-1}( Y_F,\varphi_n) \end{equation} so that \begin{equation}
\varphi_n = (-1)^{n-1}\left.\frac{d}{d\alpha}\right|_{\alpha=0}
\sum_{|T|=n} \chi_T(\alpha)X_T \end{equation} which contains only trees, since $\varphi$ is a Lie series.
The polynomial $\chi_T(t)$ is the evaluation of the tree $T$ obtained by putting $t$ in each leaf, applying the operator ``discrete integral of the product of the subtrees'' \begin{equation} \Sigma:\ t^p\mapsto \Sigma_0^t s^p\delta s = \frac{B_{p+1}(t)-B_{p+1}(0)}{p+1} \end{equation} at each internal node, and multiplying the result by $(-1)^{n-1}$ (the $B_k$ are the Bernoulli polynomials).
Indeed, if $T=B_+(T_1\cdots T_k)$, $\chi_T(t)$ satisfies the difference equation \begin{equation}
\Delta \chi_T(t) =\chi_{T_1}(t)\cdots \chi_{T_k}(t) \end{equation} which can be seen as follows. First, $\chi_T(t)=\<Y_T,\lambda_1^t\rangle$, so that \begin{align*}
\Delta\chi_T(t) &= \<Y_T,\lambda_1^t(\lambda_1-1)\rangle =\langle\Delta Y_T,\lambda_1^t\otimes(\lambda_1-1)\rangle\\
&= \sum_{(T)}\<Y_{T(1)}\otimes Y_{T(2)},\lambda_1^t\otimes(X_\bullet+X_{\bullet\bullet}+\cdots)\rangle\\
&= \<Y_{T_1}\cdots Y_{T_k},\lambda_1^t\rangle \quad\text{since the only nonzero term is obtained for $T(2)=\bullet$}\\
&= \chi_{T_1}(t)\cdots \chi_{T_k}(t). \end{align*}
This formula was first obtained in \cite{WZ} by a more complicated argument.
The coefficients of the polynomials
$(-1)^{|T|}\chi_T(t)$ give the expansion of the other Eulerian idempotents in the forest basis. This is equivalent to the description of the ``formal flow'' given in \cite{WZ}. The coefficient of $\alpha^k$ in $\sigma_1^\alpha$ is \begin{equation} \frac1{k!}\sum_{\ell(I)=k} \varphi^I \end{equation} hence the coefficient of $X_F$ in \begin{equation} e_n^{(k)}= \frac1{k!}\sum_{I\vDash n,\ \ell(I)=k}\varphi^I \end{equation} is ({\it cf.} Eq. \eqref{eq:chi}) \begin{equation} [\alpha^k]\Gamma_F(\alpha)=
(-1)^{|F|}[\alpha^k]\chi_F(\alpha). \end{equation}
\subsection{$q$-idempotents and a two-parameter series}
In \cite{NCSF2}, it is proved that, for the usual definition of $\frac{A}{1-q}$ \begin{equation}
\varphi_n(q) = \frac{1-q^n}{n}\Psi_n\left(\frac{A}{1-q}\right) =
{1 \over n} \ \sum_{|I|=n} \ { (-1)^{\ell (I)-1} \over \qbin{n-1}{\ell (I) -1}} \ q^{{\rm maj}(I) - \ssbin{\ell (I)}{2}} \ R_I(A) \end{equation} is a Lie idempotent, interpolating between the Solomon idempotent $\varphi_n$ (for $q=1$), the two Dynkin idempotents (for $q=0,\infty$) and the Klyachko idempotent ($q=e^{2i\pi/n}$). Its expansion on the preLie basis $x_\tau$ (hence also on $X_T$) was obtained by Chapoton in \cite{Cha1}.
One way to recover this result is to apply Theorem \ref{th:coeffX} to the virtual alphabet \begin{equation}
\frac{1-qt|}{|1-q}= (1-qt)\times\frac1{1-q} \end{equation} defined by \cite{NCSF2} \begin{equation}
S_n\left(\frac{1-qt|}{|1-q}A\right)=(1-qt)\sum_{k=0}^n(-qt)^k R_{1^k,n-k}\left(\frac{A}{1-q}\right) \end{equation} so that \begin{equation}
\Psi_n\left(\frac{A}{1-q}\right)= \frac1{1-qt}\left.S_n\left(\frac{1-qt|}{|1-q}A\right)\right|_{t=\frac1q}. \end{equation}
The series denoted by $\sympawn$ in \cite{Cha3} is essentially $\sigma_1\left(\frac{1-qt|}{|1-q}A\right)$. Actually, Chapoton takes the opposite order on the alphabet of powers of $q$, and to recover the same coefficients, we have to define $\sympawn$ as \begin{equation}\label{eq:defsqx}
\sympawn = \sigma_1(X_{q,t}A) := \prod_{i\ge 0}^\rightarrow\sigma_{q^i}(A)\prod_{j\ge 0}^\leftarrow\lambda_{-q^jt}(A). \end{equation} The functional equation satisfied by $f(t) := \sigma_1(X_{q,t}A)$ is then \begin{equation}
f(qt)=f(t)\sigma_{qt}(A) \end{equation} which is equivalent to \cite[(8)]{Cha3} after setting $t=1+(q-1)x$.
The coefficient of $\frac\tau{|{\rm Aut}(\tau)|}$ in $\sympawn$ is thus obtained by setting $t=1+(q-1)x$ in $\Gamma_T\left(X_{q,t}\right)$.
For example, with $T=10$, $\Gamma_T(A)={\bf M}_{12}+{\bf M}_{11}$, hence $\Gamma_T(X)=M_2+M_{11}=h_2$ is a symmetric function, and \begin{equation} h_2\left(\frac{1-qt}{1-q}\right)=\frac{(1-qt)(1-q^2t)}{(1-q)(1-q^2)}=\frac{(1+qx)(1+q+q^2x)}{1+q}. \end{equation} Dividing by $1+qx$, and setting $x=-1/q$, one finds $\frac1{1+q}$, which is indeed the coefficient of $X_{10}$ in the series $\bar\Omega_q$ defined in \cite[(45)]{Cha3}.
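The rational-function identity above can be double-checked numerically at sample points with exact arithmetic (a small Python sketch we add, using the substitution $t=1+(q-1)x$):

```python
from fractions import Fraction

# Check h_2((1-qt)/(1-q)) after the substitution t = 1 + (q-1)x:
#   (1-qt)(1-q^2 t) / ((1-q)(1-q^2))  ==  (1+qx)(1+q+q^2 x) / (1+q),
# then, dividing by 1+qx and setting x = -1/q, the value 1/(1+q).
for q in (Fraction(2), Fraction(3), Fraction(1, 5)):
    for x in (Fraction(0), Fraction(1), Fraction(-2, 3)):
        t = 1 + (q - 1) * x
        lhs = (1 - q * t) * (1 - q ** 2 * t) / ((1 - q) * (1 - q ** 2))
        rhs = (1 + q * x) * (1 + q + q ** 2 * x) / (1 + q)
        assert lhs == rhs
    x = Fraction(-1) / q
    assert (1 + q + q ** 2 * x) / (1 + q) == 1 / (1 + q)
```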
\subsection{Examples}
One can easily compute $\Gamma_T(A)$ by the recurrence (obvious from the definition in terms of linear extensions) \begin{equation} \Gamma_{B_+(T_1\cdots T_k)}(A) = B(\Gamma_{T_1}\cdots\Gamma_{T_k}), \end{equation}
where $B({\bf F}_{\sigma}):={\bf F}_{\sigma n}={\bf F}_\sigma\succ{\bf F}_1$ ($n=|T_1|+\cdots+|T_k|+1$), which yields by projection onto $QSym$ \begin{equation} \Gamma_{B_+(T_1\cdots T_k)}(X) = B(\Gamma_{T_1}\cdots\Gamma_{T_k}),\quad\text{where $B(F_{i_1,i_2,\ldots,i_r}):=F_{i_1,i_2,\ldots,i_r+1}$.} \end{equation}
{\footnotesize For example, \begin{align} \Gamma_{\arbuga}(X) &= F_2 \rightarrow {\alpha+1\choose 2}\\ \Gamma_{\arbdga}(X) &= F_3 \rightarrow {\alpha+2\choose 3}\\ \Gamma_{\arbdgb}(X) &= F_{12}+F_3 \rightarrow {\alpha+2\choose 3}+{\alpha+1\choose 3}\\ \Gamma_{\arbtge}(X) &= F_4 \rightarrow {\alpha+3\choose 4} \\ \Gamma_{\arbtgc}(X) &= F_{13} + F_4 \rightarrow {\alpha+3\choose 4}+ {\alpha+2\choose 4}\\ \Gamma_{\arbtgd}(X) &= F_{22} + F_{13} + F_4 \rightarrow {\alpha+3\choose 4}+ 2{\alpha+2\choose 4} \\ \Gamma_{\arbtgb}(X) &= F_{22} + F_{13} + F_4 \rightarrow {\alpha+3\choose 4}+ 2{\alpha+2\choose 4}\\ \Gamma_{\arbtga}(X) &= F_{112} + 2F_{22} + 2F_{13} + F_4\rightarrow {\alpha+3\choose 4}+ 4{\alpha+2\choose 4}+{\alpha+1\choose 4} \end{align} which gives for the Eulerian idempotents \begin{align} e_4^{(1)}&= \frac1{4!}\left(6 X_{1110} + 4 X_{1200} + 2 X_{2010} + 2 X_{2100} \right)=\varphi_4\\ e_4^{(2)}&= \frac1{4!}\left(9 X_{2100} + 6 X_{1010} + 6 X_{3000} + 10 X_{1200} + 9 X_{2010}\right. \nonumber\\
&+ \left. 4 X_{2000} + 4 X_{0200} + 8 X_{1100} + 11 X_{1110} + 8 X_{0110} \right)\\ e_4^{(3)}&= \frac1{4!}\left( 10 X_{2010} + 12 X_{0200} + 6 X_{1110} + 12 X_{0010} + 8 X_{1200} + 12 X_{0110}\right. \nonumber\\
&+ \left. 12 X_{3000} + 12 X_{1100} + 12 X_{2000} + 12 X_{1010} + 12 X_{0100} + 10 X_{2100} + 12 X_{1000} \right)\\ e_4^{(4)}&= \frac1{4!} \left( 3 X_{2100} + 8 X_{0200} + 8 X_{2000} + 2 X_{1200} + 4 X_{0110} + 4 X_{1100}\right.\nonumber\\
&\left. + 3 X_{2010} + 12 X_{1000} + 12 X_{0100} + 6 X_{1010} + 6 X_{3000} + 24 X_{0000} + 12 X_{0010} + X_{1110} \right) \end{align} }
To recover Chapoton's coefficients for the two-parameter series, one has to use the other version of the $X$-basis, defined by duality with the opposite coproduct on ${\mathcal H}_{NCK}$. This amounts to replacing $\Gamma(X)$ by $\Gamma'(X)=\omega(\Gamma(X))$, that is, \begin{equation}
\Gamma_T(X_{q,t})=\omega(\Gamma_T)\left(\frac{1-qt|}{|1-q} \right). \end{equation} {\footnotesize \begin{align}
\Gamma'_{\arbuga}\left(\frac{1-qt|}{|1-q} \right) &=\frac{{\left(q^{2} x + q + 1\right)} {\left(q x + 1\right)}}{q + 1} \\
\Gamma'_{\arbdga}\left(\frac{1-qt|}{|1-q} \right) &=\frac{{\left(q^{3} x + q^{2} + q + 1\right)} {\left(q^{2} x + q + 1\right)} {\left(q x + 1\right)}}{{\left(q^{2} + q + 1\right)} {\left(q + 1\right)}} \\
\Gamma'_{\arbdgb}\left(\frac{1-qt|}{|1-q} \right) &= \frac{{\left(q^{3} x + q^{2} x + q^{2} + q + 1\right)} {\left(q^{2} x + q + 1\right)} {\left(q x + 1\right)}}{{\left(q^{2} + q + 1\right)} {\left(q + 1\right)}}\\
\Gamma'_{\arbtge}\left(\frac{1-qt|}{|1-q} \right) &= \frac{{\left(q^{4} x + q^{3} + q^{2} + q + 1\right)} {\left(q^{3} x + q^{2} + q + 1\right)} {\left(q^{2} x + q + 1\right)} {\left(q x + 1\right)}}{{\left(q^{2} + q + 1\right)} {\left(q^{2} + 1\right)} {\left(q + 1\right)}^{2}} \\
\Gamma'_{\arbtgc}\left(\frac{1-qt|}{|1-q} \right) &= \frac{{\left(q^{3} x + q^{2} + q + 1\right)} {\left(q^{3} x + q^{2} + 1\right)} {\left(q^{2} x + q + 1\right)} {\left(q x + 1\right)}}{{\left(q^{2} + q + 1\right)} {\left(q^{2} + 1\right)} {\left(q + 1\right)}}\\
\Gamma'_{\arbtgd}\left(\frac{1-qt|}{|1-q} \right) &= \frac{{\left(q^{4} x + q^{3} x + q^{3} + q^{2} x + q^{2} + q + 1\right)} {\left(q^{3} x + q^{2} + q + 1\right)} {\left(q^{2} x + q + 1\right)} {\left(q x + 1\right)}}{{\left(q^{2} + q + 1\right)} {\left(q^{2} + 1\right)} {\left(q + 1\right)}^{2}} \\ \end{align}
\begin{align}
\Gamma'_{\arbtgb}\left(\frac{1-qt|}{|1-q} \right) &= \frac{{\left(q^{4} x + q^{3} x + q^{3} + q^{2} x + q^{2} + q + 1\right)} {\left(q^{3} x + q^{2} + q + 1\right)} {\left(q^{2} x + q + 1\right)} {\left(q x + 1\right)}}{{\left(q^{2} + q + 1\right)} {\left(q^{2} + 1\right)} {\left(q + 1\right)}^{2}} \\
\Gamma'_{\arbtga}\left(\frac{1-qt|}{|1-q} \right) &= \frac{{\left(q^{6} x^{2} + q^{5} x^{2} + 2 \, q^{5} x + q^{4} x^{2} + 2 \, q^{4} x + q^{4} + 3 \, q^{3} x + q^{3} + 2 \, q^{2} x + 2 \, q^{2} + q + 1\right)} {\left(q^{2} x + q + 1\right)} {\left(q x + 1\right)}}{{\left(q^{2} + q + 1\right)} {\left(q^{2} + 1\right)} {\left(q + 1\right)}} \end{align} }
\subsection{Appendix: noncommutative Ehrhart polynomials}
In the introduction of \cite{Cha3}, Chapoton mentions that the coefficients of the series $\sympawn$ are $q$-analogues of Ehrhart polynomials (according to his definition given in \cite{Cha4}). These are actually specialisations of the noncommutative Ehrhart polynomials, which are defined only for the order polytopes of posets on $[n]$ \cite{BK,Wh}.
Recall the definition of the free generating function of a poset $P$ \begin{equation} \Gamma_P(A) = \sum_{\sigma\in L(P)}{\bf F}_\sigma \in{\bf FQSym} \end{equation} where $L(P)\subseteq{\mathfrak S}_n$ is the set of linear extensions of $P$. It is a morphism from the Malvenuto-Reutenauer Hopf algebra of special posets towards ${\bf FQSym}$. In the sequel, we will only consider posets satisfying $i<_P j\Rightarrow i<j$.
The order polytope $Q_P$ of $P$ is defined by the inequalities $0\le x_i\le 1$ for $i\in P$ and $i<_Pj\Rightarrow x_i\le x_j$.
The Ehrhart polynomial $E_Q(t)$ gives, for $t=n$, the number of integral points of the dilate $nQ$. Moreover, $(-1)^nE_Q(-n)$ is the number of interior integral points of $nQ$.
Since $nQ_P$ is the intersection of the cone $C_P$ defined by $x_i\ge 0$ and $i<_Pj\Rightarrow x_i\le x_j$, and of a hypercube, one can form in ${\bf WQSym}$ the sum of the packed words of its integer points. The noncommutative Ehrhart polynomial of $Q_P$ is \begin{equation} \sum_{u\in C(P)}{\bf M}_u = \Gamma_P(A) \end{equation} where $C(P)$ is the set of packed words satisfying $i<_Pj\Rightarrow u_i\le u_j$, if one embeds ${\bf FQSym}$ into ${\bf WQSym}$ by \begin{equation} {\bf G}_\sigma(A) = \sum_{{\rm std}(u)=\sigma}{\bf M}_u. \end{equation} Indeed, the linear extensions of $P$ are precisely the permutations such that $i<_P j\Rightarrow \sigma^{-1}(i)<\sigma^{-1}(j)$.
If one specializes $A$ to the alphabet $A_{n+1}=\{a_0,a_1\ldots,a_n\}$, $\Gamma_P(A_{n+1})$ becomes the sum of the integral points of $Q_P$. Their number is therefore $E_{Q_P}(n)=\Gamma_P(n+1)$.
The change of sign $A\mapsto -A$ of the alphabet is defined on symmetric functions by means of the $\lambda$-ring structure: $p_n(-X)=-p_n(X)$, and one defines more generally the multiplication of the alphabet by an element $\alpha$ of binomial type by
$p_n(\alpha X)=\alpha p_n(X)$.
These transformations can be naturally extended to quasi-symmetric functions. One first defines them on ${\bf Sym}$ by setting $\sigma_t(\alpha A)=\sigma_t(A)^\alpha$, then one extends to $QSym$ by defining $\sigma_t(X\alpha \cdot A)=\sigma_t(XA)*\sigma_1(\alpha A)$.
These transformations can then be extended to ${\bf WQSym}$ by means of the internal product of ${\bf WQSym}^*$ \cite{NTsuper}. One obtains \begin{equation} {\bf M}_u(-A) = (-1)^{\max(u)}\sum_{v\le u}{\bf M}_v(A) \end{equation} where the sum runs over the refinement order on packed words\footnote{$v\le u$ iff the set composition encoded by $v$ is obtained by merging adjacent blocks of that encoded by $u$.}.
If one sets $A=\{a_0,a_1,a_2\ldots\}$ and $A'=\{a_1,a_2,\ldots\}$, one has \begin{equation} (-1)^n\Gamma_P(-A') = \sum_{v\in \dot{C}(P)}{\bf M}_v(A') \end{equation} where $\dot C(P)$ is the set of packed words satisfying $i<_P j\Rightarrow v_i<v_j$, in other words, the packed words of the interior points of the cone. The interior points of the polytope $nQ_P$ are obtained by evaluating on the alphabet $\{a_1,\ldots,a_{n-1}\}$.
The number of interior points is thus $(-1)^{n}\Gamma_P(1-n)=(-1)^{n}E_{Q_P}(-n)$; in this particular case we therefore have a noncommutative lift of the Ehrhart reciprocity formula.
For example, \begin{equation}
\begin{split}
\Gamma_{\arbdgb}(A) &={\bf F}_{123}+{\bf F}_{213}={\bf G}_{123}+{\bf G}_{213}\\
&={\bf M}_{123}+ {\bf M}_{122}+ {\bf M}_{112}+ {\bf M}_{111}+ {\bf M}_{213}+ {\bf M}_{212}
\end{split} \end{equation} has as commutative image $F_{12}+F_3$ and as evaluation on a scalar
$ {\alpha+2\choose 3}+{\alpha+1\choose 3}$ so that the Ehrhart polynomial of the order polytope $Q=\{0\le x_1,x_2\le x_3\}$ is \begin{equation}
E_Q(x)= {x+3\choose 3}+{x+2\choose 3} = \frac{(x+1)(x+2)(2x+3)}{6} \end{equation} which is indeed the specialization $q=1$ of \begin{equation} \Gamma_{\arbdgb}(X_{q,t})= \frac{{\left(q^{3} x + q^{2} x + q^{2} + q + 1\right)} {\left(q^{2} x + q + 1\right)} {\left(q x + 1\right)}}{{\left(q^{2} + q + 1\right)} {\left(q + 1\right)}} \end{equation} The specialization $x=[n]_q$ gives the $q$-counting of the integral points of $nQ$ by sum of the coordinates. Indeed, this amounts to setting $t=q^n$ in \eqref{eq:defsqx}, so that by \cite[Prop. 8.4]{NCSF2} \begin{equation}
\sympawn \mapsto \sigma_1(X_{q,q^n}A) := \prod_{0\le i\le n}^\rightarrow\sigma_{q^i}(A) = \sum_I M_I(1,q,\ldots,q^{n})S^I, \end{equation} that is, \begin{equation}
\Gamma_P(X_{q,q^n})=\sum_{(x_1,\ldots,x_d)\in nQ\cap{\mathbb Z}^d}q^{x_1+x_2+\cdots+x_d}. \end{equation} For example, the 14 integral points of $2Q$ are $$000,\ 001,\ 011,\ 101,\ 002,\ 111,\ 012,\ 102,\ 112,\ 022,\ 202,\ 122, \ 212,\ 222$$ and $\Gamma_{\arbdgb}(X_{q,q^2})=1+q+3q^2+3q^3+3q^4+2q^5+q^6$, as expected.
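Both the point count and the $q$-count are immediate to verify by enumeration (a Python sketch we add, for the order polytope $Q=\{0\le x_1,x_2\le x_3\le 1\}$ of this example):

```python
from itertools import product
from collections import Counter

# Integral points of nQ for Q = {0 <= x1, x2 <= x3 <= 1}:
# the dilate nQ is cut out by 0 <= xi <= n, x1 <= x3, x2 <= x3.
def points(n):
    return [p for p in product(range(n + 1), repeat=3)
            if p[0] <= p[2] and p[1] <= p[2]]

# Ehrhart polynomial computed above: E(x) = (x+1)(x+2)(2x+3)/6
E = lambda x: (x + 1) * (x + 2) * (2 * x + 3) // 6
assert len(points(2)) == E(2) == 14

# q-count by sum of coordinates: 1 + q + 3q^2 + 3q^3 + 3q^4 + 2q^5 + q^6
count = Counter(sum(p) for p in points(2))
assert [count[d] for d in range(7)] == [1, 1, 3, 3, 3, 2, 1]
```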
Now, \begin{equation}
(-1)^3\Gamma_{\arbdgb}(-A)={\bf M}_{123}+{\bf M}_{213}+{\bf M}_{112} \end{equation} which predicts correctly that for $n=3$ the only interior point of $3Q$ is $(1,1,2)$. Setting $t=q^{-n}$ in \eqref{eq:defsqx} results in \begin{equation}
\sympawn \mapsto \sigma_1(X_{q,q^{-n}}A) := \prod_{1\le i\le n-1}^\rightarrow\lambda_{-q^{-i}}(A) \end{equation}
so that $\Gamma_P(X_{q,q^{-n}})$ is obtained by evaluating $\Gamma_P(-A)$ on the alphabet $\{x_i=q^{-i}|i=1,\ldots,n-1\}$, in accordance with \cite[Theorem 2.5]{Cha4}. On our example, setting $x=[-3]_q$ yields $-q^{-4}$, corresponding to the interior point $(1,1,2)$.
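The reciprocity statement for this example can also be confirmed directly (a Python sketch we add; the interior of $nQ$ is cut out by the strict versions of the facet inequalities):

```python
from itertools import product

# Interior integral points of nQ for Q = {0 <= x1, x2 <= x3 <= 1}:
# strict inequalities 0 < x1, x2, x3 < n, x1 < x3, x2 < x3.
def interior(n):
    return [p for p in product(range(1, n), repeat=3)
            if p[0] < p[2] and p[1] < p[2]]

# Ehrhart reciprocity: #interior(nQ) = (-1)^3 E(-n)
E = lambda x: (x + 1) * (x + 2) * (2 * x + 3) / 6
assert interior(3) == [(1, 1, 2)]
assert len(interior(3)) == (-1) ** 3 * E(-3) == 1
```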
\footnotesize
\end{document} |
\begin{document}
\title{A Nonparametric Statistical Approach to Content Analysis of Items}
\begin{abstract} In order to use psychometric instruments to assess a multidimensional construct, we may decompose it into dimensions and develop a set of items for each dimension, so that the construct may be assessed as a whole by assessing its dimensions. In this scenario, the content analysis of items aims to verify whether the developed items assess the dimension they are supposed to. In the content analysis process, it is customary to request the judgement of specialists in the studied construct about the dimension that each developed item assesses, which makes it a subjective process, as it relies upon the personal opinion of the specialists. This paper develops a nonparametric statistical approach to the content analysis of items, presenting a practical method to assess the consistency of the content analysis process through a statistical test that seeks to determine whether all the specialists have the same capability to judge the items. A simulation study is conducted to assess the consistency of the test, and it is applied to a real validation process. \end{abstract}
\keywords{Nonparametric statistics; Applied statistics; Content validity; Psychometric instruments; Psychometrics.}
\section{Introduction}
Psychometric instruments play an important role in research in the areas of psychology and education; thus it is necessary that they be thoroughly developed and validated, so that no erroneous results are obtained from their application. Psychometric instruments are developed in order to assess psychological constructs that cannot be operationally defined and, consequently, cannot be objectively assessed. According to \cite{law1998}, a construct is said to be multidimensional when it consists of a number of interrelated attributes or dimensions and exists in multidimensional domains. In order to develop a psychometric instrument to assess a multidimensional construct, a set of items assessing each dimension is developed, so that the construct may be assessed as a whole. The validation process of an instrument must guarantee that each item assesses its dimension correctly, according to the desirable characteristics of a psychometric instrument, e.g., reliability and trustworthiness \cite{haynes1995}.
The validity of an instrument is divided into four categories: predictive validity, concurrent validity, content validity and construct validity. The first two may together be considered as criterion-oriented validation processes \cite{cronbach1955}. Predictive validity is studied when the instrument aims to predict a criterion: the instrument is applied and a construct correlated to the criterion is assessed, providing a prediction for the criterion of interest. Concurrent validity is studied when the instrument is proposed as a substitute for another \cite{cronbach1955}. The study of the construct validity of a psychometric instrument is necessary when the result of the instrument is the measure of an attribute or a characteristic that is not operationally defined. According to the construct validity, an instrument is valid when it is possible to determine which construct accounts for the variance of the instrument performance.
Content validity is established by showing that the instrument items are a sample of a universe in which the investigator is interested. Content validity is ordinarily to be established deductively, by defining a universe of items and sampling systematically within this universe to establish the instrument \cite{cronbach1955}. Another definition for content validity is that it is the degree to which elements of an assessment instrument are relevant to and representative of the targeted construct for a particular assessment purpose \cite{haynes1995}.
A list consisting of thirty-five procedures for content validation was proposed by \cite{haynes1995}. Among these procedures are matching each item to the dimension of the construct that it assesses and requesting the judgement of specialists in the construct, also called judges, about the developed items. The accomplishment of these procedures is imperative to verify whether the developed items are a sample of the universe that the instrument aims to assess. These procedures, components of the theoretical analysis of items, are subjective, for they rely upon the personal opinions of specialists and researchers. Indeed, the theoretical analysis of items is done by judges and aims to establish the comprehension of the items (semantic analysis) and their pertinence to the attribute that they propose to assess.
This paper aims to propose a nonparametric statistical approach to the content analysis of items in order to assess its consistency and reliability. Therefore, our approach does not seek to establish the validity of the instrument, but rather to assess the consistency of the content analysis process, so that its ruling about the instrument may be trusted. Thus, this approach must be applied alongside other instrument validation methods, quantitative and qualitative, e.g., semantic analysis, pretrial and factorial analysis, in order to ensure the reliability, consistency, validity and trustworthiness of the psychometric instrument.
\section{Method}
The researcher, supported by the theory of the construct that the instrument aims to assess, develops \textit{m} items and for each item assigns a theoretical dimension according to the theory and/or his opinion about which dimension the item assesses. Although the items and their dimensions have theoretical foundations, it is necessary to test them in order to determine if every item is indeed assessing the dimension it is supposed to.
In order to perform such a test, the items are sent to \textit{s} specialists in the construct, so that they may judge the items according to the dimension they assess. The items should be sent to at least six specialists and should be presented to them in a random order and without their theoretical dimensions, so that the judgement is not biased.
A condition for an item to be excluded from the instrument is determined based on the judgement of the specialists. This condition must exclude the items that do not belong to the universe that the instrument aims to assess, so that the items not excluded are a sample of such universe. A possible way to proceed is to determine a \textit{Concordance Index} (\textit{CI}) stating that all items for which less than $c$\% of the specialists agree on the dimension that they assess must be excluded. One may also take the \textit{Content Validity Ratio} (\textit{CVR}), as proposed by \cite{lawshe1975}, as a condition to exclude items that do not belong to the universe that the instrument aims to assess.
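For concreteness, the two exclusion criteria can be computed as follows (a Python sketch we add; Lawshe's $CVR=(n_e-N/2)/(N/2)$, here with $n_e$ taken as the number of specialists agreeing with the item's theoretical dimension, which is an adaptation of his original "essential" rating):

```python
def concordance_index(n_agree, n_specialists):
    """Percentage of specialists agreeing on the item's dimension (the CI)."""
    return 100 * n_agree / n_specialists

def cvr(n_agree, n_specialists):
    """Lawshe's Content Validity Ratio for one item, ranging from -1 to 1."""
    half = n_specialists / 2
    return (n_agree - half) / half

# e.g. 7 of 8 specialists agree: CI = 87.5%, CVR = 0.75
assert concordance_index(7, 8) == 87.5
assert cvr(7, 8) == 0.75
```

An item would then be excluded when, say, its CI falls below the chosen threshold $c$.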
The method to be developed in this paper aims to determine whether all specialists have the same capability to judge the items according to their dimensions, through the analysis of the judgement of the specialists about the items that were not excluded by the established condition. However, the method does not rank the specialists according to their capabilities, but only determines whether all specialists have the same capability. Therefore, it is not possible to identify the specialists with low capability.
If there is no evidence that the capabilities of the specialists are different, their judgement is accepted and the items not excluded by the established condition are used in the next steps of the instrument validation process. Indeed, if all the specialists have the same capability, it may happen that they are all highly capable or all barely capable of judging the items, and the proposed method will not be able to differentiate between the two cases. Nevertheless, the two scenarios may be differentiated by a qualitative analysis of the specialists' judgements, by observing whether they agree with the theoretical dimension of the items and, when they do not agree, whether there is some theory that supports their choice. Therefore, if their judgements are consistent with some theory, then the specialists may be regarded as being all highly capable of judging the items, given that they all have the same capability to judge them.
On the other hand, if it is determined that the specialists do not all have the same capability to judge the items, then at least one specialist is less capable of judging them than the others, which may bias the validation of the instrument. Therefore, in such a scenario, we propose two approaches to avoid a biased validation process. First, we propose that the specialists' judgement be disregarded and a new group of specialists be requested to judge the items. However, this approach may be impractical in some cases, as time and resources may be too limited to repeat the cycle of specialists' judgements more than once. As a more practical alternative, we propose applying the method to all subgroups of specialists of size $s^{*}$, $6 \leq s^{*} < s$, of the original group of specialists, and then choosing the judgement of a subgroup whose specialists all have the same capability to judge the items. This approach will be presented in more detail in the application section.
\section{Notation and Definitions}
Let $C = \{C_{1},\dots, C_{n}\}$ be a construct divided into \textit{n} dimensions and \textit{U} be the universe of all the items that assess the dimensions of \textit{C}. A set $I = \{i_{1}, \dots, i_{m}\}$ of items is developed based on the theory about \textit{C} and then a subset $I^{*} \subset I$ of items, which we believe to be a subset of \textit{U}, is determined by the following process.
Denote $E = \{e_{1},\dots,e_{s}\}$ a set of $s$ specialists and let $C_{c(i_{l})} \in C$ be the dimension that the item $i_{l} \in I^{*}$ assesses. Let the random variables $\{X_{i_{l}}(e_{j}):i_{l} \in I,e_{j} \in E\}$, defined on $(\Omega, \mathbb{F},\mathbb{P})$, be so that $X_{i_{l}}(e_{j}) = k$ if the specialist $e_{j}$ judged the item $i_{l}$ at the \textit{kth} dimension of \textit{C}. Note that if $i_{l} \in I^{*}$ and $X_{i_{l}}(e_{j}) = c(i_{l})$, then the specialist $e_{j}$ judged the item $i_{l}$ correctly.
The capability of the specialist $e_{j}$ to judge the items is defined as \begin{equation*} \boldsymbol{P}(e_{j}) = (P_{i_{l}}(e_{j}): i_{l} \in I^{*}) \end{equation*}
in which $P_{i_{l}}(e_{j}) = \mathbb{P}\{X_{i_{l}}(e_{j}) = c(i_{l})\}, \forall i_{l} \in I^{*}$ and $\forall e_{j} \in E$. In the proposed approach, we are interested in developing a hypothesis test to determine whether $\boldsymbol{P}(e_{j}) = \boldsymbol{p} \in [0,1]^{|I^{*}|}, \forall e_{j} \in E$, i.e., whether all specialists have the same capability to judge the items.
For this purpose, let a random sample of the judgement of the specialist $e_{j}$ about the items of $I$ be given by $\boldsymbol{x}_{e_{j}} = \{x_{i_{1}}(e_{j}), \dots, x_{i_{m}}(e_{j})\}$ and let $\boldsymbol{X}$ be the space of all possible random samples $\{\boldsymbol{x}_{e_j}: e_{j} \in E\}$. Define the random sets $\{M_{i_{l}}: i_{l} \in I\}$ as \begin{equation*} M_{i_{l}} = \arg\max\limits_{k \in \{1, \dots, n\}} \Bigg\{\sum_{e_{j} \in E} \mathds{1}_{\{k\}} \Big(X_{i_{l}}(e_{j})\Big) \Bigg\} \end{equation*} in which $\mathds{1}_{\{A\}}(\cdot)$ is the indicator function of the set $A$. Note that $M_{i_{l}}$ is the set containing the indices of the dimensions in which the largest number of specialists judged the item $i_{l} \in I$. Given a random sample $\{\boldsymbol{x}_{e_j}: e_{j} \in E\} \in \boldsymbol{X}$ and a subset $I^{*} \subset I$ of items, the set $\{m_{i_{l}}: i_{l} \in I^{*}\}$, determined from the sample values $\{\boldsymbol{x}_{e_j}: e_{j} \in E\}$, is a random sample of $\{M_{i_{l}}: i_{l} \in I^{*}\}$.
The subset $I^{*}$ may be defined by a condition function, a function of the sample $\{\boldsymbol{x}_{e_j}: e_{j} \in E\}$, given by $f: \boldsymbol{X} \to \mathcal{P}(I)$, in which $\mathcal{P}(\cdot)$ is the power set operator. The condition function must be such that if $\{m_{i_{l}}: i_{l} \in I^{*}\}$ is determined from $\{\boldsymbol{x}_{e_j}: e_{j} \in E\} \in \boldsymbol{X}$ and $I^{*} = f(\boldsymbol{x}_{e_j}: e_{j} \in E)$, then $|m_{i_{l}}| = 1, \forall i_{l} \in I^{*}$. The \textit{CI} for $c > 50$ and the \textit{CVR} are condition functions. From now on, it will be supposed that the condition function may be expressed as a \textit{CI}.
The condition function is based on the assumption that an item is in the universe of items that assess the construct of interest if the majority of the specialists agree on the dimension it assesses. Of course, one may take a different criterion to exclude the items that do not assess the construct of interest, although our method may be applied only if the criterion can be expressed as a condition function, for it relies on $M_{i_{l}}$ reducing to a single dimension for the items kept.
Finally, define \begin{equation*} W_{i_{l}}(e_{j}) = \mathds{1}_{\{M_{i_{l}}\}} \Big(X_{i_{l}}(e_{j})\Big), \end{equation*} as the random variable that indicates if the specialist $e_{j}$ judged the item $i_{l}$ at the same dimension as the majority of the specialists. Given a random sample $\{\boldsymbol{x}_{e_j}: e_{j} \in E\} \in \boldsymbol{X}$ and a subset $f(\boldsymbol{x}_{e_j}: e_{j} \in E) = I^{*} \subset I$ of items, the set $\{w_{i_{l}}(e_{j}): i_{l} \in I^{*}, e_{j} \in E\}$, determined from the sample values $\{\boldsymbol{x}_{e_j}: e_{j} \in E\}$, is a random sample of $\{W_{i_{l}}(e_{j}): i_{l} \in I^{*}, e_{j} \in E\}$.
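As an illustration of the objects just defined, the majority sets $M_{i_{l}}$, the \textit{CI} condition function and the agreement indicators $W_{i_{l}}(e_{j})$ can be sketched in Python (a hypothetical helper, standard library only, not the paper's supplementary R code; the item and dimension labels are arbitrary):

```python
from collections import Counter

def majority_set(votes):
    """M_il: the dimensions chosen by the largest number of specialists."""
    counts = Counter(votes)
    top = max(counts.values())
    return {dim for dim, n in counts.items() if n == top}

def concordance_filter(judgements, c=50):
    """Condition function based on the Concordance Index: keep an item only
    if at least c% of the specialists agree on the dimension it assesses."""
    s = len(next(iter(judgements.values())))      # number of specialists
    kept = {}
    for item, votes in judgements.items():
        dim, n = Counter(votes).most_common(1)[0]
        if 100.0 * n / s >= c:
            kept[item] = dim                      # |m_il| = 1 for kept items
    return kept

def w_sample(judgements, kept):
    """W_il(e_j) = 1 iff specialist j judged i_l at the majority dimension."""
    return {item: [int(v == dim) for v in judgements[item]]
            for item, dim in kept.items()}
```

For instance, `concordance_filter({'i1': ['P', 'P', 'P', 'T']}, c=50)` keeps `i1` with majority dimension `'P'`, and `w_sample` then yields the row `[1, 1, 1, 0]` of agreement indicators used by the test developed below.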
On the one hand, although we observe the values of the random variables $\{X_{i_{l}}(e_{j}):i_{l} \in I,e_{j} \in E\}$, we do not know whether the specialists judged the items correctly, for the dimension that an item really assesses (if any) is unknown. Therefore, it is not possible to differentiate the specialists by, for example, the number of items they judged correctly.
On the other hand, from the random variables $\{W_{i_{l}}(e_{j}): i_{l} \in I, e_{j} \in E\}$, we know the concordance of the specialists on the judgement of the items, which gives us a relative measure of the capability of the specialists to judge the items. Therefore, we are able to test whether all the specialists have the same capability to judge the items, although we cannot determine the capability of each one.
\section{Assumptions}
The development of the items and the judgement of the specialists must satisfy two assumptions so that the method to be presented below may be applied: \begin{enumerate}
\item[\textbf{1.}] Each item $i_{l} \in I^{*}$ assesses one, and only one, dimension $C_{c(i_{l})} \in C$.
\item[\textbf{2.}] The random variables $\{X_{i_{l}}(e_{j}): i_{l} \in I^{*}, e_{j} \in E\}$ are independent. \end{enumerate}
Assumption \textbf{1} establishes that the items that were not excluded by the condition function, i.e., the items in $I^{*}$, are well constructed and assess only one dimension of \textit{C}, while assumption \textbf{2} imposes that the specialists judge the items independently of each other and that the judgement of a specialist about one item does not depend on his judgement about any other item. These assumptions are not strong, for they are expected to hold if the items were well constructed. Indeed, the better the condition function is at determining which items are not in \textit{U}, the better the quality of the items in $I^{*}$ will be. Therefore, the assumptions above are closely related to the condition function. If, in fact, $I^{*} \subset U$, then the first assumption is immediately satisfied, for there is no intersection between two dimensions of a construct, and the second assumption may also hold, for the items are well defined.
\section{Mathematical Deduction}
Given a random sample $\{\boldsymbol{x}_{e_j}: e_{j} \in E\} \in \boldsymbol{X}$, it is not trivial to estimate the capabilities $\{\boldsymbol{P}(e_{j}): e_{j} \in E \}$, for the dimension that each item assesses is unknown. Examining such a random sample, it is known that the specialist $e_{j}$ judged the item $i_{l}$ at the dimension $C_{k}$, but it is not possible to determine, with probability 1, whether he judged the item correctly. Therefore, the problem is, given a random sample $\{\boldsymbol{x}_{e_j}: e_{j} \in E\} \in \boldsymbol{X}$, to determine random variables that allow us to test whether the capability of all the specialists is the same. It will be shown that if the random variables $\{W_{i_{l}}(e_{j}): e_{j} \in E \}$ are not identically distributed for some $i_{l} \in I^{*}$, then the specialists do not all have the same capability to judge the items. Indeed, in order to test whether the capability of all specialists is the same, we will consider the following null hypothesis: \begin{equation*} H_{0}: \begin{cases}
\text{\textbf{1.} } & \boldsymbol{P}(e_{j}) = (p^{(i_{1})}, \dots, p^{(i_{|I^{*}|})}) = \boldsymbol{p} \in [0,1]^{|I^{*}|}, \forall e_{j} \in E\\ \text{\textbf{2.} } & \Big(\mathbb{P}\{X_{i_{l}}(e_{j}) = 1\}, \dots, \mathbb{P}\{X_{i_{l}}(e_{j}) = n\}\Big) \text{ is a permutation of } \\ & \Big(p^{(i_{l})},p_{1}^{(i_{l})}, \dots, p_{n-1}^{(i_{l})}\Big) \in [0,1]^{n}, p^{(i_{l})} + \sum_{k=1}^{n-1} p_{k}^{(i_{l})} = 1, \forall i_{l} \in I^{*}, \forall e_{j} \in E \\ \end{cases} \end{equation*} Of course, we are only interested in testing the first part of $H_{0}$, which refers to the capability of the specialists, i.e., that all specialists have the same capability to judge the items. However, the second part is needed to develop a test statistic for $H_{0}$. It will be argued that for large values of $p^{(i_{l})}$ the hypothesis that is actually being tested is the first one.
The propositions below set the scenario for the nonparametric test that will be used to test $H_{0}$.
\begin{proposition}
\label{P1}
The random variables $\{W_{i_{l}}(e_{j}): i_{l} \in I^{*}\}$ are independent $\forall e_{j} \in E$, but the random variables $\{W_{i_{l}}(e_{j}): e_{j} \in E\}$ are dependent $\forall i_{l} \in I^{*}$. \end{proposition}
\begin{proof}
On the one hand, the random variables $\{W_{i_{l}}(e_{j}): i_{l} \in I^{*}\}$ are each, by assumption \textbf{2}, functions of independent random variables, and are therefore independent. On the other hand, note that $\sum\limits_{e_{j} \in E} W_{i_{l}}(e_{j}) \geq \lceil \frac{cs}{100} \rceil$, for at least $c\%$ of the specialists must agree on the dimension an item in $I^{*}$ assesses, which establishes a dependence. \end{proof}
\begin{proposition}
\label{P2}
Under $H_{0}$, the random variables $\{W_{i_{l}}(e_{j}): e_{j} \in E\}$ are identically distributed for all $i_{l} \in I^{*}$. \end{proposition}
\begin{proof}
We have that
\begin{align*}
\mathbb{P}\{W_{i_{l}}(e_{j}) = 1\} & = \mathbb{P}\{X_{i_{l}}(e_{j}) = M_{i_{l}}\} \\ & = \mathbb{P}\{X_{i_{l}}(e_{j}) = c(i_{l}), M_{i_{l}} = c(i_{l})\} + \mathbb{P}\{X_{i_{l}}(e_{j}) = M_{i_{l}}, M_{i_{l}} \neq c(i_{l})\}.
\end{align*}
Now let $X^{(i_{l})} \sim Binomial(s-1,p^{(i_{l})})$ and $X_{k}^{(i_{l})} \sim Binomial(s-1,p_{k}^{(i_{l})}), k \in \{1, \dots, n-1\}$, be independent random variables, and let $f^{*} = \lfloor \frac{cs}{100} \rfloor$, in which \textit{c} is the \textit{CI}. Then,
\begin{align*}
\mathbb{P}\{X_{i_{l}}(e_{j}) = c(i_{l}), M_{i_{l}} = c(i_{l})\} & = \mathbb{P}\{ M_{i_{l}} = c(i_{l})|X_{i_{l}}(e_{j}) = c(i_{l})\}\mathbb{P}\{X_{i_{l}}(e_{j}) = c(i_{l})\}\\ & = \mathbb{P}\{X^{(i_{l})} \geq f^{*}\} p^{(i_{l})}
\end{align*}
and
\begin{align*}
\mathbb{P}\{X_{i_{l}}(e_{j}) = M_{i_{l}}, M_{i_{l}} \neq c(i_{l})\} & = \sum\limits_{\substack{k=1 \\ k \neq c(i_{l})}}^{n} \mathbb{P}\{X_{i_{l}}(e_{j}) = k, M_{i_{l}} = k\} \\ & = \sum\limits_{\substack{k=1 \\ k \neq c(i_{l})}}^{n} \mathbb{P}\{ M_{i_{l}} = k|X_{i_{l}}(e_{j}) = k\}\mathbb{P}\{X_{i_{l}}(e_{j}) = k\} \\ & = \sum\limits_{k=1}^{n-1} \mathbb{P}\{X_{k}^{(i_{l})} \geq f^{*}\} p_{k}^{(i_{l})}.
\end{align*}
Hence,
\begin{align*}
\mathbb{P}\{W_{i_{l}}(e_{j}) = 1\} & = \mathbb{P}\{X^{(i_{l})} \geq f^{*}\} p^{(i_{l})} + \sum\limits_{k=1}^{n-1} \mathbb{P}\{X_{k}^{(i_{l})} \geq f^{*}\} p_{k}^{(i_{l})}
\end{align*}
that does not depend on $e_{j}$ and the result follows. \end{proof}
It is important to note that if all $p^{(i_{l})}$ are close to $1$, then $\mathbb{P}\{W_{i_{l}}(e_{j}) = 1\} \approx p^{(i_{l})} \mathbb{P}\{X^{(i_{l})} \geq f^{*}\}$ and the hypothesis that is really being tested is the first part of $H_{0}$. Therefore, it is reasonable to test $H_{0}$ in order to determine whether the specialists all have the same capability to judge the items, for, if that is indeed true, we expect all $p^{(i_{l})}$ to be large, so that the second part of $H_{0}$ will hardly lead to the rejection of $H_{0}$ when the capability is the same.
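The expression derived above is straightforward to evaluate numerically. The sketch below (Python, standard library only; the paper's supplementary code is in R, and the parameter values here are illustrative assumptions) computes $\mathbb{P}\{W_{i_{l}}(e_{j}) = 1\}$ and checks that, for $p^{(i_{l})}$ close to 1, it is well approximated by $p^{(i_{l})}\mathbb{P}\{X^{(i_{l})} \geq f^{*}\}$:

```python
from math import comb

def binom_tail(n, p, k):
    """P{Binomial(n, p) >= k}."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def prob_w_one(s, c, p, p_others):
    """P{W_il(e_j) = 1} from the proof of Proposition 2; p is p^(il) and
    p_others are the probabilities of the other n - 1 dimensions."""
    f_star = (c * s) // 100                        # f* = floor(cs / 100)
    return (binom_tail(s - 1, p, f_star) * p
            + sum(binom_tail(s - 1, pk, f_star) * pk for pk in p_others))

# Illustrative values: s = 9 specialists, a CI of c = 50 and p^(il) = 0.95.
exact = prob_w_one(s=9, c=50, p=0.95, p_others=[0.025, 0.025])
approx = 0.95 * binom_tail(8, 0.95, 4)             # p^(il) * P{X^(il) >= f*}
```

For these values the off-dimension terms are of the order of $10^{-6}$, so the exact probability and the approximation agree to several decimal places, illustrating why mainly the first part of $H_{0}$ drives the test when the common capability is high.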
This test may be used as a diagnostic for the content analysis of items. If $H_{0}$ is not rejected, then there is no evidence that the capabilities of the specialists are different. However, if $H_{0}$ is rejected, we do not know whether it is the first or the second part (or both) of $H_{0}$ that is not satisfied by the judgement of the specialists. Nevertheless, we may disregard the judgement of those specialists in either case, for either their capabilities are not the same, or they are the same but some $p^{(i_{l})}$ are small, which leads to the rejection of $H_{0}$ by its second part.
\section{Hypothesis Testing}
The Cochran's Q test may be applied to the random sample $\{w_{i_{l}}(e_{j}): i_{l} \in I^{*}, e_{j} \in E\}$ determined from $\{\boldsymbol{x}_{e_j}: e_{j} \in E\}$ as a way to test $H_{0}$ \cite{cochran1950}. The assumptions of the Cochran's Q test, using the notation of this paper, are: \begin{itemize}
\item[\textbf{(a)}] The items of $I^{*}$ were randomly selected from the items that form the universe \textit{U} that the instrument aims to assess.
\item[\textbf{(b)}] The random variables $\{W_{i_{l}}(e_{j}): i_{l} \in I^{*}, e_{j} \in E\}$ are dichotomous.
\item[\textbf{(c)}] The random variables $\{W_{i_{l}}(e_{j}): i_{l} \in I^{*}\}$ are independent. \end{itemize}
From the usual scenario in which such a test is applied, the items may be seen as the \textit{blocks} and the specialists as the \textit{treatments}. The Cochran's Q test evaluates whether the random variables $\{W_{i_{l}}(e_{j}): e_{j} \in E\}$ are identically distributed for all $i_{l} \in I^{*}$. Therefore, if we reject the null hypothesis of the test, we conclude that $\{W_{i_{l}}(e_{j}): e_{j} \in E\}$ are not identically distributed for some $i_{l} \in I^{*}$ and, by Proposition \ref{P2}, $H_{0}$ is also rejected. Thus, the hypothesis tested by the Cochran's Q test is indeed $H_{0}$.
The statistic of the test is calculated from Table \ref{T1}, in which $I^{*} = \{i^{*}_{1}, \dots,i^{*}_{v}\}$, and may be expressed as \begin{equation*} Q = \sum\limits_{r =1}^{s} \dfrac{s(s-1)\Big(D_{r} - \frac{N}{s}\Big)^{2}}{\sum_{l=1}^{v} R_{l}(s-R_{l})} \end{equation*}
\begin{table}[H]
\centering
\caption{Table of the observed random sample.}
\label{T1}
\begin{tabular}{c|ccc|c}
\hline
\multirow{2}{*}{Item} & \multicolumn{3}{c|}{Specialist} & \multirow{2}{*}{Total} \\ \cline{2-4}
& $e_{1}$ & $\cdots$ & $e_{s}$ & \\
\hline
$i^{*}_{1}$ & $w_{i^{*}_{1}}(e_{1})$ & $\cdots$ & $w_{i^{*}_{1}}(e_{s})$ & $R_{1} = \sum\limits_{e_{j} \in E} w_{i^{*}_{1}}(e_{j})$ \\
$\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ \\
$i^{*}_{v}$ & $w_{i^{*}_{v}}(e_{1})$ & $\cdots$ & $w_{i^{*}_{v}}(e_{s})$ & $R_{v} = \sum\limits_{e_{j} \in E} w_{i^{*}_{v}}(e_{j})$ \\
\hline
Total & $D_{1} = \sum\limits_{i_{l} \in I^{*}} w_{i_{l}}(e_{1})$ & $\cdots$ & $D_{s} = \sum\limits_{i_{l} \in I^{*}} w_{i_{l}}(e_{s})$ & $N = \sum\limits_{i_{l} \in I^{*}} \sum\limits_{e_{j} \in E} w_{i_{l}}(e_{j})$ \\
\hline
\end{tabular} \end{table}
The exact distribution of the $Q$ statistic may be calculated by the method presented by \cite{patil1975}, although a large sample approximation may be used instead. If $|I^{*}|$ is large, then the distribution of $Q$ is approximately $\chi^{2}$ with $(s-1)$ degrees of freedom \cite{conover1998}.
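As an illustration (Python, standard library only; a sketch rather than the paper's supplementary R code), the $Q$ statistic and its large-sample p-value can be computed as follows. The closed-form survival function below is valid only for an even number of degrees of freedom, which covers $s - 1 = 8$ for the nine specialists of the application; the toy data are arbitrary:

```python
from math import exp, factorial

def cochran_q(w):
    """Cochran's Q for {item: list of 0/1 agreement indicators per specialist}."""
    rows = list(w.values())
    s = len(rows[0])                                  # specialists (treatments)
    R = [sum(r) for r in rows]                        # item (block) totals R_l
    D = [sum(r[j] for r in rows) for j in range(s)]   # specialist totals D_r
    N = sum(R)
    denom = sum(r * (s - r) for r in R)
    return s * (s - 1) * sum((d - N / s) ** 2 for d in D) / denom

def chi2_sf(x, df):
    """P{chi-square(df) > x}; closed form valid for even df."""
    assert df % 2 == 0
    m = df // 2
    return exp(-x / 2) * sum((x / 2) ** i / factorial(i) for i in range(m))

# Toy sample with s = 3 specialists and 3 kept items:
q = cochran_q({'a': [1, 1, 0], 'b': [1, 0, 0], 'c': [1, 1, 1]})
p_value = chi2_sf(q, df=2)
```

Note that items on which all specialists agree (rows of ones) contribute zero to both the numerator shifts and the denominator, so they do not affect $Q$, in line with the usual treatment of constant blocks in Cochran's test.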
It is worth mentioning that the random variables $\{W_{i_{l}}(e_{j}): e_{j} \in E\}$ being identically distributed for all $i_{l} \in I^{*}$ does not imply that the specialists all have the same capability to judge the items; it only means that there is no evidence that their capabilities are different. If there is no evidence that the capabilities of the specialists to judge the items are different, their judgement may be accepted.
If it is determined that the random variables $\{W_{i_{l}}(e_{j}): e_{j} \in E\}$ are not identically distributed for some $i_{l} \in I^{*}$, then the judgement of the specialists is disregarded, for $H_{0}$ is rejected. The items may then be judged by different groups of specialists until they are judged by a group in which all the specialists have the same capability to judge the items. Those groups may be formed by new specialists or may be a subgroup of size $s^{*}$, $6 \leq s^{*} < s$, of the specialists for which $H_{0}$ was rejected.
\section{Simulation Study}
As the Cochran's Q test is not a powerful one, i.e., its Type II error may be too great, a simulation study will be conducted to estimate the power of the test in some specific cases. The power of a statistical test is defined as the probability of $H_{0}$ being rejected when it is false, and depends on the real scenario, i.e., on the real values of the parameters considered in $H_{0}$. Therefore, the power of Cochran's Q test in testing $H_{0}$ depends on the real capability of each specialist to judge the items, so the simulation study considers \textit{10} distinct scenarios and is conducted as follows.
For each scenario, we will simulate \textit{50,000} judgements of the same items by the specialists and then determine the proportion of the simulations in which $H_{0}$ was rejected at a significance level, i.e., Type I error, of 5\%. This proportion will be regarded as an estimate of the power of the test in the considered scenario. A \textit{CI} of \textit{50\%} will be used to determine $I^{*}$ in each simulation. Analysing the results of all \textit{10} scenarios, we will have a wide picture of the power of the test and will know for which scenarios it is more powerful.
We will consider in all scenarios nine specialists judging \textit{30} items into three dimensions, which is the framework of the application in the next section. We will also consider that the capability of each specialist is the same for all items, i.e., that $\mathbb{P}\{X_{i_{l}}(e_{j}) = c(i_{l})\} = p_{j}$ for all $j \in \{1,\dots,9\}$ and $l \in \{1,\dots,|I^{*}|\}$. Finally, we will assume that $\mathbb{P}\{X_{i_{l}}(e_{j}) = k\} = (1 -p_{j})/2$ for all $k \neq c(i_{l})$, $j \in \{1,\dots,9\}$ and $l \in \{1,\dots,|I^{*}|\}$. The scenarios and their estimated test power are displayed in Table \ref{simulations}.
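The simulation scheme just described can be sketched as follows (Python, standard library only; an illustrative reimplementation under the stated assumptions, with Cochran's $Q$ and the $\chi^{2}$ approximation inlined, not the paper's supplementary R code; the number of replications and the seed are arbitrary choices here):

```python
import random
from collections import Counter
from math import exp, factorial

def chi2_sf(x, df):
    """Chi-square survival function; closed form valid for even df."""
    m = df // 2
    return exp(-x / 2) * sum((x / 2) ** i / factorial(i) for i in range(m))

def cochran_q(rows, s):
    """Cochran's Q for a list of 0/1 rows (one row per kept item)."""
    R = [sum(r) for r in rows]
    D = [sum(r[j] for r in rows) for j in range(s)]
    N = sum(R)
    denom = sum(r * (s - r) for r in R)
    if denom == 0:                      # every kept row constant: no information
        return 0.0
    return s * (s - 1) * sum((d - N / s) ** 2 for d in D) / denom

def simulate_power(p_list, n_items=30, n_dims=3, c=50,
                   n_sims=500, alpha=0.05, seed=7):
    """Proportion of simulated judgements in which H0 is rejected: specialist
    j judges each item correctly with probability p_list[j] and splits errors
    evenly over the other dimensions; items are kept by a CI of c%."""
    rng = random.Random(seed)
    s = len(p_list)                      # s - 1 must be even for chi2_sf above
    rejected = 0
    for _ in range(n_sims):
        rows = []
        for item in range(n_items):
            true_dim = item % n_dims
            votes = [true_dim if rng.random() < p
                     else rng.choice([d for d in range(n_dims) if d != true_dim])
                     for p in p_list]
            dim, n = Counter(votes).most_common(1)[0]
            if 100.0 * n / s >= c:       # item not excluded by the CI
                rows.append([int(v == dim) for v in votes])
        if rows and chi2_sf(cochran_q(rows, s), s - 1) < alpha:
            rejected += 1
    return rejected / n_sims

# Scenario 1: eight specialists with p = 0.9 and one with p = 0.45.
power_scenario_1 = simulate_power([0.45] + [0.9] * 8)
```

Running the remaining scenarios only requires changing `p_list`; the estimates will differ slightly from Table \ref{simulations}, since details such as the number of replications are assumptions of this sketch.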
\begin{table}[ht]
\centering
\caption{The estimated power of the test for each scenario.}
\label{simulations}
\begin{tabular}{c|lcc}
\hline
Scenario & Description & Items$^{*}$ & Power \\
\hline
1 & $p_{j} = 0.9$, $j \neq 1$ and $p_{1} = 0.45$ & 30 & 0.9931 \\
2 & $p_{j} = 0.9$, $j \notin \{1,2,3\}$ and $p_{1} = p_{2} = p_{3} = 0.45$ & 30 & 0.9999 \\
3 & $p_{j} = 0.9$, $j \notin \{1,2,3\}$ and $p_{1} = 0.45, p_{2} = 0.35, p_{3} = 0.25$ & 29 & 1 \\
4 & $p_{j} = 0.9$, $j \neq 1$ and $p_{1} = 0.8$ & 30 & 0.1595 \\
5 & $p_{j} = 0.9$, $j \notin \{1,2\}$ and $p_{1} = p_{2} = 0.8$ & 30 & 0.2413 \\
6 & $p_{j} = 0.6$, $j \notin \{1,2,3\}$ and $p_{1} = p_{2} = p_{3} = 0.75$ & 29 & 0.2413 \\
7 & $p_{j} = 0.3$, $j \notin \{1,2,3\}$ and $p_{1} = p_{2} = p_{3} = 0.75$ & 13 & 0.3167 \\
8 & $(p_{1},\dots,p_{9}) = (0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9)$ & 15 & 0.4571 \\
9 & $p_{j} = 0.9, \forall j$, but the second part of $H_{0}$ is not true & 30 & 0.0456 \\
10 & $p_{j} = 0.6, \forall j$, but the second part of $H_{0}$ is not true & 26 & 0.0553 \\
\hline
\multicolumn{4}{l}{$^{*}$ The mean number of items not excluded by the \textit{CI}.}
\end{tabular} \end{table}
On the one hand, we see in Table \ref{simulations} that the power of the test is high when the majority of the specialists have the same high capability while a few specialists have a low capability, as is the case in scenarios 1, 2 and 3. On the other hand, the power of the test is quite low when some of the specialists have the same high capability but the specialists with lower capability are almost as capable as them, as is the case in scenarios 4, 5 and 6.
In scenarios 7 and 8 we see that the power of the test is low when there are specialists with capability below \textit{0.5}. This happens because the specialists hardly agree on the dimension that each item assesses (as some of them are not capable), so that many items are excluded by the \textit{CI} and, on the items that remain, the less capable specialists agree with the highly capable ones, so that they seem to have high capability. Indeed, in scenarios 7 and 8 the mean number of non-excluded items is the lowest of all scenarios, so that a low concordance among the specialists is evidence of the existence of low-capability specialists, given that the items were well constructed.
Finally, as pointed out in the Mathematical Deduction section, we see in scenarios 9 and 10 that the hypothesis actually being tested when all the specialists are highly and equally capable is the first part of $H_{0}$, as the power of the test is close to the significance level (Type I error), as must be the case when the hypothesis is true.
The simulation study sheds light on some interesting facts about the proposed method in the considered scenarios. On the one hand, if the majority of the specialists have a homogeneous high capability and a few specialists have a very low capability, then the power of the test is high. However, if the specialists all have high, but different, capabilities, then the power of the test is low. On the other hand, if the majority of the specialists have a low capability, then a great number of items will be excluded by the \textit{CI} and, given that the items were well constructed, we may conclude that the specialists have a low capability to judge the items, even though the power of the test is low. Finally, if only the first part of $H_{0}$ is satisfied and the capability of the specialists is high, then the power of the test is low, so that the hypothesis that is really being tested is the first part of $H_{0}$.
\section{Application: Perception About the Evaluation of the Teaching-Learning}
In this section we will apply the developed method to a real validation process, in order to analyse the content of the items of an instrument that aims to assess the perception of teachers and students of higher education institutions about the teaching-learning process, a construct that may be divided into three dimensions: process (P), judgement (J) and teaching-learning (T).
The evaluation of the teaching-learning has a process dimension, as it must have a well-defined beginning, middle and end and must have a continuous, cumulative and systematic character. Indeed, it is a systematic mechanism for gathering information over time, with well-defined levels, which characterizes it as a process. Also, the evaluation of the teaching-learning has a judgement dimension, because it must issue a judgement of value, or assign a score, through the analysis of educational results obtained from the information gathered over time. Finally, the evaluation of the teaching-learning has a teaching-learning dimension for, as its name indicates, it must evaluate not only the learning, but also the teaching: it should evaluate not only what the student has learnt, but also what the teacher has taught. Therefore, the evaluation of the teaching-learning is a process of data gathering in which an individual judges or is judged according to the teaching-learning.
In order to develop an instrument to assess this construct, \textit{30} items were developed and sent to nine specialists, who judged each item according to the dimension that, in their opinion, it assesses. The condition defined for excluding an item is the \textit{CI} with $c = 50$. The judgements of the specialists are presented in Table \ref{judgement}, the table for the Cochran's Q test is displayed in Table \ref{Q}, and a translation of the items, which were originally written in Portuguese, is presented in the Appendix.
\begin{table}[ht]
\centering
\caption{Judgement of the specialists about each item, i.e., the sample $\{\boldsymbol{x}_{e_j}: e_{j} \in E\}$.}
\label{judgement}
\begin{tabular}{c|ccccccccc|cc}
\hline
\multirow{2}{*}{Item}& \multicolumn{9}{c|}{Specialist} & \multirow{2}{*}{Dimension$^{*}$} & \multirow{2}{*}{Theoretical} \\ \cline{2-10}
& 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & & \\
\hline
1 & T & P & P & T & T & T & T & P & P & T & P \\
2 & P & J & T & T & P & P & T & P & T & - & T \\
3 & T & T & J & P & P & P & P & J & J & - & J \\
4 & J & P & P & P & P & P & P & J & J & P & P \\
5 & T & J & T & T & T & T & J & T & P & T & P \\
6 & P & T & T & P & T & T & T & P & T & T & P \\
7 & J & J & J & J & J & J & J & J & J & J & J \\
8 & T & P & P & T & P & P & T & P & T & P & P \\
9 & J & P & T & J & T & T & P & P & P & - & T \\
10 & J & J & J & J & J & J & J & J & J & J & J \\
11 & J & T & J & J & J & J & J & J & J & J & J \\
12 & P & T & T & P & P & P & P & J & P & P & P \\
13 & J & P & P & T & J & J & J & J & T & J & J \\
14 & T & T & T & T & P & P & T & P & T & T & P \\
15 & J & P & J & J & J & J & J & J & J & J & J \\
16 & T & P & T & T & P & P & J & P & T & - & P \\
17 & P & P & P & P & T & T & P & T & P & P & T \\
18 & J & T & T & T & P & P & J & T & J & - & T \\
19 & T & T & T & T & P & P & T & J & P & T & T \\
20 & P & P & P & T & P & P & P & J & J & P & P \\
21 & J & J & J & J & J & J & J & J & P & J & P \\
22 & P & P & P & P & P & P & T & T & P & P & P \\
23 & J & J & J & J & J & J & J & J & J & J & J \\
24 & T & J & P & T & P & P & T & J & T & - & J \\
25 & J & J & J & J & J & J & J & J & J & J & J \\
26 & T & P & T & T & P & P & P & P & T & P & T \\
27 & P & P & P & T & P & P & T & J & P & P & T \\
28 & T & P & P & J & P & P & J & P & T & P & T \\
29 & T & T & P & T & P & P & P & P & T & P & T \\
30 & T & J & J & J & T & T & J & J & P & J & J \\
\hline
\multicolumn{12}{l}{$^{*}$ The dimension on which at least 50\% of the specialists agree that} \\
\multicolumn{12}{l}{the item assesses.}
\end{tabular} \end{table}
\begin{table}[ht]
\centering
\caption{The table for the Cochran's Q test, i.e., the sample $\{w_{i_{l}}(e_{j}): i_{l} \in I^{*}, e_{j} \in E\}$.}
\label{Q}
\begin{tabular}{c|ccccccccc|c}
\hline
\multirow{2}{*}{Item}& \multicolumn{9}{c|}{Specialist} & \multirow{2}{*}{Total} \\ \cline{2-10}
& 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & \\
\hline
1 & 1 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 5 \\
2 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 4 \\
3 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 4 \\
4 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 6 \\
5 & 1 & 0 & 1 & 1 & 1 & 1 & 0 & 1 & 0 & 6 \\
6 & 0 & 1 & 1 & 0 & 1 & 1 & 1 & 0 & 1 & 6 \\
7 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 9 \\
8 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 5 \\
9 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 4 \\
10 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 9 \\
11 & 1 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 8 \\
12 & 1 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 1 & 6 \\
13 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 5 \\
14 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 0 & 1 & 6 \\
15 & 1 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 8 \\
16 & 1 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 4 \\
17 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 0 & 1 & 6 \\
18 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 4 \\
19 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 5 \\
20 & 1 & 1 & 1 & 0 & 1 & 1 & 1 & 0 & 0 & 6 \\
21 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 8 \\
22 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 7 \\
23 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 9 \\
24 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 4 \\
\hline
Total & 18 & 14 & 17 & 18 & 17 & 17 & 18 & 12 & 13 & 144 \\
\hline
\end{tabular} \end{table}
The statistic of the Cochran's Q test for the data in Table \ref{Q} is $Q = 8.7$ and the test p-value is $0.36$, so that there is no evidence that $H_{0}$ is not true at a significance level of 5\%. Furthermore, as the majority of the specialists agreed on the dimension assessed by \textit{24} out of \textit{30} items (80\%), we also have no evidence that the capability of the specialists is low. Therefore, based on the proposed method, there is no reason to disregard the judgement of the specialists.
Nevertheless, in order to illustrate the proposed approach for the case in which $H_{0}$ is rejected, we apply the test to every subgroup of size $6 \leq s^{*} < 9$ of specialists, which amounts to \textit{129} subgroups, and see for which subgroups the capability of the specialists is the same. $H_{0}$ was rejected for \textit{13} of those subgroups at a significance level of 5\%. The $Q$ statistic and the p-value for the \textit{10} subgroups with the greatest p-values are displayed in Table \ref{subgroups}. If $H_{0}$ had been rejected for the group of nine specialists, we could look for a subgroup of those specialists for which $H_{0}$ is not rejected and, with the help of a qualitative analysis, choose a subgroup of those specialists instead of disregarding their judgements as a whole and sending the items to other specialists.
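The subgroup search can be sketched as follows (Python; `restrict` is a hypothetical helper, and the toy judgement row is arbitrary). The \textit{CI} condition and the Q test must be reapplied to each restricted table, since the majority dimensions may change within a subgroup:

```python
from itertools import combinations

specialists = list(range(1, 10))                  # e_1, ..., e_9
subgroups = [g for size in range(6, 9)            # sizes 6 <= s* < 9
             for g in combinations(specialists, size)]
# C(9,6) + C(9,7) + C(9,8) = 84 + 36 + 9 = 129 proper subgroups.

def restrict(judgements, subgroup):
    """Judgement table restricted to the columns of a subgroup of specialists;
    the CI filter and Cochran's Q test are then reapplied to this table."""
    return {item: [votes[j - 1] for j in subgroup]
            for item, votes in judgements.items()}

toy = {'i1': ['P', 'P', 'P', 'T', 'T', 'P', 'J', 'P', 'P']}
sub = restrict(toy, subgroups[0])                 # first subgroup: (1,...,6)
```

Iterating `restrict` over `subgroups` and keeping the subgroups whose restricted samples do not lead to the rejection of $H_{0}$ reproduces the search whose best results are shown in Table \ref{subgroups}.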
\begin{table}[ht]
\centering
\caption{The result of the Cochran's Q test for the subgroups of specialists with the largest p-values.}
\label{subgroups}
\begin{tabular}{l|cc}
\hline
Specialists & Q & p-value \\
\hline
(2,3,5,6,7,8) & 0.778 & 0.978 \\
(2,3,5,6,8,9) & 0.789 & 0.978 \\
(1,2,3,4,5,6,8,9) & 2.478 & 0.929 \\
(2,3,5,6,7,9) & 1.772 & 0.880 \\
(2,3,4,5,7,8,9) & 2.441 & 0.875 \\
(2,3,4,6,7,8,9) & 2.441 & 0.875 \\
(1,3,4,5,6,7) & 1.923 & 0.860 \\
(2,4,5,7,8,9) & 1.991 & 0.850 \\
(2,4,6,7,8,9) & 1.991 & 0.850 \\
(1,3,5,6,8,9) & 2.069 & 0.840 \\
\hline
\end{tabular} \end{table}
\FloatBarrier \section{Final Remarks}
The Cochran's Q test is not a powerful one; thus the method must be used with caution. The validation of a psychometric instrument is a process comprising various procedures, and therefore must not be restricted to the content analysis of items and the method developed in this paper. It is important to apply other validation techniques, both qualitative and quantitative, so that the instrument may be properly validated.
The method may be improved in order to decrease even further the subjectivity of the content analysis of items, especially through the development of more powerful tests than the one established here and the definition of other random variables that enable the comparison of the specialists' judgements. This paper does not exhaust the subject, but presents a nonparametric statistical approach that aims to decrease the subjectivity of a subjective process and that may be applied not only to the content analysis of items, but also to any statistical application that enables the definition of variables such as those of this paper.
\section{Supplementary Materials}
The \textbf{R} \cite{R} code used in the simulation study and in the application section is available as supplementary material for this paper and can be accessed at \textit{www.ime.usp.br/$\sim$dmarcondes}.
\appendix \section*{Appendix} A translation of the constructed items, whose responses are given on a Likert scale, is presented below together with their theoretical dimensions.
\begin{enumerate}
\item The evaluation is an instrument strategically used to help with difficulties (Process).
\item The evaluation assumes a formative role on the teaching-learning (Teaching-Learning).
\item The proposed evaluation methods are fair and appropriate (Judgement).
\item The time available for evaluation is sufficient (Process).
\item The evaluation offers recovery strategies for students that have difficulties (Process).
\item The instructions given for the assignments subjected to evaluation are useful (Process).
\item The evaluation is a tool for punishing the student in some manner (Judgement).
\item The evaluation is an essential tool for the teaching-learning process (Process).
\item The evaluation is an essential tool for the understanding of the taught subject (Teaching-Learning).
\item The evaluation is a process that ranks the students in some manner (Judgement).
\item The evaluation is a process that, in a particular way, builds a hierarchy among the students (Judgement).
\item The evaluation is a process that follows the student during all his academic life (Process).
\item The evaluation has different meanings for who evaluate and for who is evaluated (Judgement).
\item The evaluation is used to find out where and how the teaching-learning may be improved (Process).
\item The evaluation is a tool to reward the student in some manner (Judgement).
\item The evaluation is a tool to diagnose the teaching-learning process (Process).
\item The evaluation is a tool with technical and pedagogical characteristics (Teaching-Learning).
\item The evaluation aims to identify how much the student has learnt the subjects (Teaching-Learning).
\item The evaluation aims to identify which paths lead to knowledge (Teaching-Learning).
\item The evaluation is a systematic evidence gathering process (Process).
\item The evaluation is a process of outlining, obtaining and providing information that permits judging among decision alternatives (Process).
\item The evaluation is a process with continuous, cumulative and systematic, but not episodic, character (Process).
\item Evaluate means to provide a judgement of value or to assign a score to whom is being evaluated (Judgement).
\item The evaluation is a tool that makes it possible to inquire to what extent the defined objectives are being achieved (Judgement).
\item The evaluation has an authoritarian and classificatory role inside the process of teaching-learning (Judgement).
\item The evaluation is an educational component that can facilitate the teaching-learning (Teaching-Learning).
\item The teaching-learning and the evaluation are not isolated parts of the education process (Teaching-Learning).
\item The evaluation is the most adequate path to make excellent teaching-learning feasible (Teaching-Learning).
\item The evaluation stimulates the acts of teaching and learning as a simultaneous process (Teaching-Learning).
\item The evaluation involves the intentional judgement of a process developed by an individual during his or her learning (Judgement). \end{enumerate}
\end{document}
\begin{document}
\title{$3j$-symbols for representations of the Lie algebra $\mathfrak{gl}_3$}
\renewcommand{\abstractname}{}
\begin{abstract} In this paper a simple explicit formula for an arbitrary $3j$-symbol for the Lie algebra $\mathfrak{gl}_3$ is given. It is expressed as a ratio of values of hypergeometric functions obtained by substituting $\pm 1$ for all of their arguments. The problem of calculating an arbitrary $3j$-symbol is equivalent to the problem of calculating an arbitrary Clebsch-Gordan coefficient for the algebra $\mathfrak{gl}_3$. These coefficients play an important role in quantum mechanics in the theory of quarks. \end{abstract}
\section{Introduction}
Consider a tensor product of irreducible representations $V$ and $W$ of the algebra $\mathfrak{gl}_3$ and let us split it into a sum of irreducibles:
\begin{equation}
\label{rzl}
V\otimes W=\sum_{U,s} U^s, \end{equation}
where $U$ denotes the possible types of irreducible representations that occur in this decomposition and the symbol $s$ indexes the irreducible representations $U^s$ of type $U$ occurring in the decomposition \footnote{ A precise definition of the index $s$ is the following. We write the decomposition as $V\otimes W=\sum_{U} M_U\otimes U$, where $M_U$ is a linear space called the multiplicity space. Let $\{e_i\}$ be its basis; then $U^s:=e_s\otimes U$. }.
Let us choose bases $\{v_{\mu}\}$, $\{w_{\nu}\}$, $\{u^s_{\rho}\}$ in these representations. The Clebsch-Gordan coefficients are the numerical coefficients $C^{U,\rho,s}_{V,W;\mu,\nu}\in\mathbb{C}$ appearing in the decomposition
\begin{equation} \label{kg1}
v_{\mu}\otimes w_{\nu}=\sum_{s,\rho} C^{U,\rho,s}_{V,W;\mu,\nu} u_{\rho}^s. \end{equation}
We also use the term Clebsch-Gordan coefficients for the coefficients $D^{U,\rho,s}_{V,W;\mu,\nu}\in\mathbb{C}$ occurring in the decomposition
\begin{equation} \label{kg2}
u_{\rho}^s = \sum_{\mu,\nu} D^{U,\rho,s}_{V,W;\mu,\nu} v_{\mu}\otimes w_{\nu}. \end{equation}
In the case of the algebras $\mathfrak{gl}_2$ and $\mathfrak{gl}_3$ these coefficients play an important role in quantum mechanics. The Clebsch-Gordan coefficients for the algebra $\mathfrak{gl}_2$ are used in the theory of spin (see \cite{blb}), and the Clebsch-Gordan coefficients for the algebra $\mathfrak{gl}_3$ are used in the theory of quarks (see \cite{GrM}). The problem of their calculation in the case $\mathfrak{gl}_2$ is quite simple: there are explicit formulas, first obtained by van der Waerden \cite{Gkl0}. The Clebsch-Gordan coefficients for the algebra $\mathfrak{sl}_2$ allow one to construct new realizations of representations of the real Lie groups associated with this Lie algebra ($SU(2)$ in \cite{blb}, \cite{Go}, \cite{GoGo}; $SO(3)$ in \cite{GM}, \cite{GoGo}, \cite{Go}). They also appear in elasticity theory (see \cite{Sel}, \cite{Sel2}, where in fact the group $SO(3)$ is considered).
In applications it is especially important to find the Clebsch-Gordan coefficients in the case when the Gelfand-Tsetlin basis is taken in the representations.
The problem of calculating Clebsch-Gordan coefficients for $\mathfrak{gl}_n$ with $n\geq 3$ is much more difficult than in the case $n=2$. In the case $n=3$ they were first calculated in a series of papers by Biedenharn, Louck and Baird \cite{bb1963}, \cite{bl1968}, \cite{bl1970}, \cite{bl19731}, \cite{bl19732}. These papers consider the general case $\mathfrak{gl}_n$, but only in the case $n=3$ do their calculations allow one to obtain, in principle, a formula for a general Clebsch-Gordan coefficient. The calculations are based on the following principle. Consider tensor operators between two representations of $\mathfrak{gl}_n$, i.e. collections $\{f_u\}$ of mappings
$$ f_{u}: V \rightarrow W $$
between representations of $\mathfrak{gl}_n$, where the mappings $\{f_u\}$ are in one-to-one correspondence with vectors of a representation $U$ of $\mathfrak{gl}_n$ and a certain compatibility condition relating the actions of $\mathfrak{gl}_n$ on these representations must hold. The matrix elements of tensor operators are closely related to the Clebsch-Gordan coefficients (the Wigner-Eckart theorem). In the papers \cite{bb1963}-\cite{bl19732} an explicit realization of these operators is given (different papers consider different representations $U$). Such an explicit realization does not allow one to obtain explicit formulas for the matrix elements of the tensor operator corresponding to a given $U$, but it allows one to express them through the matrix elements of the tensor operators corresponding to the representations $U$ considered in previous papers.
Unfortunately, their papers contain no explicit formula; it is only clear that one can be obtained.
Thus the problem of finding an {\it explicit and simple} formula for a general Clebsch-Gordan coefficient for the algebra $\mathfrak{gl}_3$ still remained unsolved. A review of earlier papers can be found in \cite{a1}.

In \cite{a1} a truly {\it explicit} formula for a general Clebsch-Gordan coefficient for $\mathfrak{gl}_3$ in the decomposition \eqref{kg2} was obtained for the first time; that is, an explicit formula of the form $D^{U,\gamma,s}_{V,W;\alpha,\beta}=...$ was derived. Unfortunately this formula is quite cumbersome, so the problem of deriving a {\it simple} formula remained unsolved.
For the calculation of Clebsch-Gordan coefficients it is necessary to choose an explicit realization of representations of $\mathfrak{gl}_3$.
A realization that is very convenient for calculations was suggested in \cite{bb1963}, where the following is proved. If one uses a realization of a representation in the space of functions on the Lie group $GL_3$, then the functions corresponding to Gelfand-Tsetlin basis vectors can be expressed through the Gauss hypergeometric function (see a modern viewpoint in \cite{a2}). This idea was used in \cite{a1}.
In the present paper we modify the approach of \cite{a1}, and this allows us to obtain a much simpler result. Instead of the coefficients in the decomposition \eqref{kg1} we calculate the $3j$-symbols (see their definition and their relation to Clebsch-Gordan coefficients in Section \ref{3jcg}). We express the $3j$-symbols through the values of a hypergeometric function. As a result we manage to obtain an {\it explicit and simple} formula for a general Clebsch-Gordan coefficient for the algebra $\mathfrak{gl}_3$. The main result is formulated in Theorem \ref{ost}, where a formula for a $3j$-symbol is given (see \eqref{osntf2}).

In Appendix \ref{dop} we give the selection rules for Clebsch-Gordan coefficients and $3j$-symbols.
\section{ The basic notions}
\subsection{ $A$-hypergeometric functions}
Information about $\Gamma$-series can be found in \cite{GG}.
Let $B\subset \mathbb{Z}^N$ be a lattice, let $\mu\in \mathbb{Z}^N$ be a fixed vector. Define a {\it hypergeometric
$\Gamma$-series } in variables $z_1,...,z_N$ by formulas
\begin{equation} \label{gmr} \mathcal{F}_{\mu}(z,B)=\sum_{b\in
B}\frac{z^{b+\mu}}{\Gamma(b+\mu+1)}, \end{equation} where $z=(z_1,...,z_N)$, and we use the multiindex notations
$$ z^{b+\mu}:=\prod_{i=1}^N z_i^{b_i+\mu_i},\,\,\,\Gamma(b+\mu+1):=\prod_{i=1}^N\Gamma(b_i+\mu_i+1). $$
Note that if at least one component of the vector $b+\mu$ is a negative integer, then the corresponding summand in \eqref{gmr} vanishes. Thus the $\Gamma$-series considered below contain only finitely many terms. Below we also write factorials instead of $\Gamma$-functions.
A $\Gamma$-series satisfies the Gelfand-Kapranov-Zelevinsky (GKZ) system. Let us write it in the case $z=(z_1,z_2,z_3,z_4)$, $B=\mathbb{Z}<(1,-1,-1,1)>$:
\begin{align}
\begin{split}
\label{gkzs}
\Big(\frac{\partial^2}{\partial z_1\partial z_4}-\frac{\partial^2}{\partial z_2\partial z_3}\Big)F_{\mu,B}&=0, \\
z_1\frac{\partial}{\partial z_1}F_{\mu,B}+z_2\frac{\partial}{\partial z_2}F_{\mu,B}&=(\mu_1+\mu_2)F_{\mu,B}, \\
z_1\frac{\partial}{\partial z_1}F_{\mu,B}+z_3\frac{\partial}{\partial z_3}F_{\mu,B}&=(\mu_1+\mu_3)F_{\mu,B}, \\
z_1\frac{\partial}{\partial z_1}F_{\mu,B}-z_4\frac{\partial}{\partial z_4}F_{\mu,B}&=(\mu_1-\mu_4)F_{\mu,B}.
\end{split}
\end{align}
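Since the $\Gamma$-series used here contain only finitely many terms, both the series and the first equation of the system above can be verified by direct computation. Below is a minimal Python sketch (the function names and the sample shift vector $\mu=(0,2,2,0)$ are illustrative assumptions, not part of the paper); a polynomial is stored as a dictionary mapping exponent tuples to rational coefficients:

```python
from fractions import Fraction
from math import factorial

def gamma_series_poly(mu, tmax=10):
    """Finite Gamma-series F_mu(z, B) for the lattice B = Z*(1,-1,-1,1),
    returned as {exponent tuple: rational coefficient}.  Terms where some
    component of b + mu is a negative integer vanish (pole of Gamma)."""
    poly = {}
    for t in range(-tmax, tmax + 1):
        exps = (t + mu[0], -t + mu[1], -t + mu[2], t + mu[3])
        if min(exps) < 0:
            continue
        coef = Fraction(1)
        for e in exps:
            coef /= factorial(e)
        poly[exps] = coef
    return poly

def diff(poly, i):
    """Partial derivative with respect to z_i in the dictionary representation."""
    out = {}
    for exps, c in poly.items():
        if exps[i] > 0:
            new = list(exps)
            new[i] -= 1
            out[tuple(new)] = c * exps[i]
    return out

F = gamma_series_poly((0, 2, 2, 0))   # three surviving terms: t = 0, 1, 2
assert diff(diff(F, 0), 3) == diff(diff(F, 1), 2)   # first GKZ equation
```

The three Euler equations of the system can be checked in the same representation by comparing exponent sums term by term.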
\subsection{ A functional realization of a Gelfand-Tsetlin base}
Throughout the paper, Lie algebras and Lie groups over $\mathbb{C}$ are considered.
Functions on $GL_3$ form a representation of the group $GL_3$: an element $X\in GL_{3}$ acts on a function $f(g)$, $g\in GL_3$, by right shifts
\begin{equation} \label{xf} (Xf)(g)=f(gX). \end{equation}
Passing to an infinitesimal action we obtain that on the space of all functions on $GL_3$ there exists an action of $\mathfrak{gl}_3$.
Every finite-dimensional irreducible representation can be realized as a subrepresentation of the space of functions. Let $[m_{1},m_2,m_{3}]$ be a highest weight; then in the space of functions there is a highest vector with this weight, which can be written explicitly as follows.
Let $a_{i}^{j}$, $i,j=1,2,3$, be the matrix element functions on the group $GL_{3}$, where $j$ is a row index and $i$ is a column index. Also put
\begin{equation} \label{dete} a_{i_1,...,i_k}:=det(a_i^j)_{i=i_1,...,i_k}^{j=1,...,k}, \end{equation}
where we take a determinant of a submatrix in a matrix $(a_i^j)$, formed by rows with indices $1,...,k$ and columns with indices $i_1,...,i_k$.
The operator $E_{i,j}$ acts onto determinants by transforming their column indices
\begin{equation} \label{edet1} E_{i,j}a_{i_1,...,i_k}=a_{\{i_1,...,i_k\}\mid_{j\mapsto i}}, \end{equation}
where $.\mid_{j\mapsto i}$ denotes the operation of substituting $i$ for $j$; if $j$ does not occur among $\{i_1,...,i_k\}$, then the result is zero.
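The rule \eqref{edet1} can be made concrete by representing a determinant $a_{i_1,...,i_k}$ by its tuple of column indices. A minimal Python sketch (the function name and the None-for-zero convention are illustrative assumptions; a substitution producing a repeated column is also sent to zero, since such a determinant vanishes):

```python
def E(i, j, cols):
    """Action of E_{i,j} on the determinant a_{i_1,...,i_k}, represented by
    its tuple of column indices `cols`: substitute i for j.  Returns None to
    encode the zero result (j absent, or a repeated column is created)."""
    if j not in cols:
        return None
    new = list(cols)
    new[new.index(j)] = i
    if len(set(new)) < len(new):
        return None
    return tuple(new)

# E_{1,3} a_{2,3} = a_{2,1};  E_{1,2} a_{1,2} = 0 (repeated column)
```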
One sees that a raising operator $E_{i,j}$, $i<j$, tries to change an index to a smaller one. Thus the function \begin{equation} \label{stv} v_0=\frac{a_{1}^{m_{1}-m_{2}}}{(m_1-m_2)!}\frac{a_{1,2}^{m_{2}-m_{3}}}{(m_2-m_3)!}\frac{a_{1,2,3}^{m_{3}}}{m_{3}!} \end{equation}
is a highest vector for the algebra $\mathfrak{gl}_{3}$ with the weight $[m_{1},m_{2},m_3]$.
Let us write a formula for the function corresponding to a Gelfand-Tsetlin diagram for $\mathfrak{gl}_3$ (see the definition of this basis in \cite{zh}). A diagram is an integer table of the following type in which the betweenness conditions hold
\begin{align*} \begin{pmatrix} m_{1} && m_{2} &&0\\ &k_{1}&& k_{2}\\&&s \end{pmatrix} \end{align*} A formula for the function corresponding to this diagram is given in the following theorem, proved in \cite{bb1963}.
\begin{thm}\label{vec3} Put $B_{GC}=\mathbb{Z}<(0,1,-1,-1,1,0)>$, $\mu=(m_1-k_1,s-m_{2},k_{1}-s,m_{2}-k_{2},0,k_2)$; then to the diagram there corresponds the function $$ \mathcal{F}_{\mu}(a_3,a_{1},a_{2},a_{1,3},a_{2,3},a_{1,2},B_{GC}). $$
\end{thm}
In \cite{bb1963} an expression involving the Gauss hypergeometric function is given. A modern form of this formula, involving a $\Gamma$-series, was presented in \cite{a2}.
To shorten notations, when we use a $\Gamma$-series corresponding to the lattice $B_{GC}$ from Theorem \ref{vec3} we omit the notation $B_{GC}$ and write just $\mathcal{F}_{\mu}(a)$. When we use another lattice we do not omit it.
Note also that the first equation from the GKZ system looks as follows
\begin{equation} \label{gkz} \mathcal{O}\mathcal{F}=0,\,\,\,\mathcal{O}=\frac{\partial^2}{\partial a_1\partial a_{2,3}}-\frac{\partial^2}{\partial a_2\partial a_{1,3}}. \end{equation}
\subsection{ The A-GKZ system and its solutions. The A-GKZ realization }
\subsubsection{ The A-GKZ system. The bases $F_{\mu}$ and $\tilde{F}_{\mu}$ in its solution space}
Instead of the determinants $a_X$, $X\subset \{1,2,3\}$, which satisfy the Plücker relations, let us introduce variables $A_X$, $X\subset \{1,2,3\}$; we suppose that $A_X$ is a skew-symmetric function of $X$.
Let us return to the GKZ system \eqref{gkz} and change the differential operator that defines it. Consider functions $F(A)$ that satisfy the equation
\begin{equation} \label{agkz} \bar{\mathcal{O}}_{A}F=0, \,\,\, \bar{\mathcal{O}}_{A}=\frac{\partial^2}{\partial A_1\partial A_{2,3}}-\frac{\partial^2}{\partial A_2\partial A_{1,3}}+\frac{\partial^2}{\partial A_3\partial A_{1,2}} \end{equation}
This system is called the antisymmetrized Gelfand-Kapranov-Zelevinsky system (A-GKZ for short).
Let us find a basis in the space of polynomial solutions of this system.
Using equality (65) from \cite{a1} one can show that the following function is a solution:
\begin{equation} \label{Fmu} F_{\mu}(A):=\sum_{s\in\mathbb{Z}_{\geq 0}} q^{\mu}_s \zeta_A^{s}\mathcal{F}_{\mu-s(e_3+e_{1,2})}(A), \end{equation}
where \begin{align}\begin{split}\label{cs} &t^{\mu}_0=1, \,\,\,\,\, t^{\mu}_s=\frac{1}{s(s+1)+s(\mu_1+\mu_2+\mu_{1,3}+\mu_{2,3})} \text{ for }s>0,\\& q^{\mu}_s=\frac{t_s^{\mu}}{\sum_{s'\in\mathbb{Z}_{\geq 0}} t_{s'}^{\mu}}, \,\,\,\,\, \zeta_A=A_1A_{2,3}-A_{2}A_{1,3}. \end{split}\end{align}
One has $F_{\mu}=\mathcal{F}_{\mu}\cdot const$ modulo the Plücker relations.
In the space of solutions of the system \eqref{agkz} one can construct another basis. Put $$ v=e_1+e_{2,3}-e_{2}-e_{1,3},\,\,\,\, r=e_3+e_{1,2}-e_1-e_{2,3}, $$ and for all $s\in\mathbb{Z}_{\geq 0}$ consider the function
\begin{equation} \mathcal{F}^s_{\mu}(A):=\sum_{t\in \mathbb{Z}}\frac{(t+1)...(t+s-1)A^{\mu+tv}}{\Gamma(\mu+tv+1)} \end{equation}
One proves directly that
\begin{equation} \label{Ftildmu} \tilde{F}_{\mu}(A)=\sum_{s\in\mathbb{Z}_{\geq 0}} \frac{(-1)^{s}}{s!} \mathcal{F}^s_{\mu-sr}(A) \end{equation}
is also a solution of \eqref{agkz}. Let us introduce an order on shift vectors considered $\mathrm{mod}\, B_{GC}$: \begin{equation} \label{por} \mu\preceq \nu \Leftrightarrow \mu=\nu-sr \,\,\, \mathrm{mod}\, B_{GC},\,\,\, s\in\mathbb{Z}_{\geq 0}. \end{equation}
Then, considering the supports of the functions $F_{\mu}(A)$, $\tilde{F}_{\mu}(A)$, one comes to the conclusion that the basis $\tilde{F}_{\mu}$ is related to the collection of functions $F_{\mu}$ by a linear transformation that is lower-unitriangular with respect to the ordering \eqref{por}. That is,
$$ \tilde{F}_{\mu}=\sum_{s\in \mathbb{Z}_{\geq 0}}d_s F_{\mu-sr},\,\,\, d_0=1. $$
Hence the functions $\tilde{F}_{\mu}(A)$ form a basis in the solution space of the system \eqref{agkz}.
\subsubsection{ A realization of a representation in the space of solutions of the A-GKZ system} Define an action of the algebra $\mathfrak{gl}_3$ on the variables $A_X$ by the rule:
$$ E_{i,j}A_X=\begin{cases}A_{X\mid_{j\mapsto i}}, \text{ if }j\in X, \,\,\,\text{see } \eqref{edet1}, \\0\text{ otherwise. }\end{cases} $$
One easily checks that this is an action of the Lie algebra. One extends this action to polynomials in $A_X$ by the Leibniz rule.
The operator $ \bar{\mathcal{O}}_{A}$ commutes with the action of $\mathfrak{gl}_3$. Hence the solution space of the system \eqref{agkz} is a representation of $\mathfrak{gl}_3$.
When one applies the Plücker relations (i.e. changes $A_X\mapsto a_X$), the considered realization is transformed into the functional realization. Thus the functions $F_{\mu}$ for shift vectors $\mu$ corresponding to all possible Gelfand-Tsetlin diagrams of an irreducible representation with highest weight $[m_1,m_2,0]$ (see Theorem \ref{vec3}) form a representation with this highest weight.
The obtained realization is called the A-GKZ realization.
One easily checks that if one subtracts the vector $r$ from $\mu$, then the diagram transforms as follows: $k_1\mapsto k_1-1$, $k_2\mapsto k_2+1$. Thus the $\tilde{F}_{\mu}$ also form a basis of the A-GKZ realization.
\subsubsection{ An explicit form of an invariant scalar product in the A-GKZ realization}
In the A-GKZ realization one can write an invariant scalar product explicitly. For two monomials the scalar product is defined as follows:
\begin{equation} <A_{X_1}^{\alpha_1}...A_{X_n}^{\alpha_n},A_{Y_1}^{\beta_1}...A_{Y_m}^{\beta_m}> \end{equation} is nonzero if and only if $n=m$ and (after a permutation if needed) $X_1=Y_1$ and $\alpha_1=\beta_1$, ..., $X_n=Y_n$ and $\alpha_n=\beta_n$. In this case
\begin{equation} <A_{X_1}^{\alpha_1}...A_{X_n}^{\alpha_n},A_{X_1}^{\alpha_1}...A_{X_n}^{\alpha_n}>=\alpha_1!\cdot...\cdot\alpha_n! \end{equation}
One easily proves that this product is invariant.
We denote a function in the variables $A_X$, $X\subset \{1,2,3\}$, simply by $f(A)$. The scalar product can then be rewritten using multi-index notations as follows. Define an action
\begin{equation} \label{dve} f(A)\curvearrowright h(A):=f(\frac{d}{dA})h(A), \end{equation} then
\begin{equation} \label{skd} <f(A),h(A)>= f(A)\curvearrowright h(A)\mid_{A=0}. \end{equation}
Due to the symmetry of the scalar product one can write $ <f(A),h(A)>= h(A)\curvearrowright f(A)\mid_{A=0}.$
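The agreement between the monomial formula above and the differential formula \eqref{skd} is easy to check: applying $f(d/dA)$ to $h(A)$ and setting $A=0$ kills every pair of monomials whose exponents differ, and produces the product of factorials of the exponents otherwise. A minimal Python sketch (the dictionary representation and the function name are illustrative assumptions):

```python
from math import factorial

def inner(f, h):
    """Scalar product <f, h> = f(d/dA) h |_{A=0} for polynomials stored as
    dictionaries {exponent tuple: coefficient}.  Only pairs of monomials
    with identical exponents survive the evaluation at A = 0."""
    total = 0
    for ef, cf in f.items():
        ch = h.get(ef)
        if ch is None:
            continue
        term = cf * ch
        for e in ef:
            term *= factorial(e)   # (d/dA)^e applied to A^e gives e!
        total += term
    return total

# <A_1^2 A_2, A_1^2 A_2> = 2! * 1! = 2
```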
\subsubsection{ A relation between the bases $F_{\mu}$ and $\tilde{F}_{\mu}$ of the A-GKZ realization}
Using the constructed scalar product one can find a relation between the two bases of the A-GKZ realization.
Note that $\mathcal{F}_{\mu}=const\, F_{\mu}+pl$, where $pl=0$ modulo the Plücker relations. Then for every function $h(A)$ that is a solution of the A-GKZ system one has
$$ <h,\mathcal{F}_{\mu}>=const<h, F_{\mu}>. $$
Thus to prove that
$$ \tilde{F}_{\mu}=\sum_{s\in \mathbb{Z}_{\geq 0}}d_s F_{\mu-sr} $$
it is sufficient to show that
\begin{equation} \label{ur} <\tilde{F}_{\mu},\mathcal{F}_{\nu}>=\sum_{s\in \mathbb{Z}_{\geq 0}}d_s <F_{\mu-sr},\mathcal{F}_{\nu}> \end{equation}
Using formula \eqref{dve} and the definitions of $\tilde{F}_{\mu}$, $\mathcal{F}_{\nu}$, one finds that the scalar product of these functions is nonzero if and only if $\nu \preceq \mu$, that is, $\nu=\mu-sr\,\,\, \mathrm{mod}\, B_{GC}$. Under this condition one has
$$ <\tilde{F}_{\mu},\mathcal{F}_{\mu-sr}>=\frac{(-1)^{s}}{s!}\mathcal{F}_{\mu-sr}^s(1), $$ where on the right-hand side one writes the result of substituting $1$ for all arguments of $\mathcal{F}_{\mu-sr}^s(A)$.
Now let us find the scalar product $<F_{\mu},\mathcal{F}_{\nu}>$. Note that $<\zeta^k h(A),\mathcal{F}_{\nu}>$, where $k>0$, equals $0$ (by formula \eqref{dve} and the fact that $\zeta$ acts as a GKZ operator, which sends the function $\mathcal{F}_{\nu}$ to $0$). Thus only the scalar product with the first summand in \eqref{Fmu} is nonzero, and therefore
$$ <F_{\mu},\mathcal{F}_{\nu}>=<\mathcal{F}_{\mu},\mathcal{F}_{\nu}> $$
Using formula \eqref{dve}, one obtains that this expression is nonzero only if $\mu=\nu \,\,\mathrm{mod}\, B_{GC}$, in which case it equals $\mathcal{F}_{\mu}(1)$.
Thus \eqref{ur} gives that
\begin{equation} \label{ds} \frac{(-1)^{s}}{s!}\mathcal{F}_{\mu-sr}^s(1)=d_s\mathcal{F}_{\mu-sr}(1) \Rightarrow d_s=\frac{(-1)^{s} \mathcal{F}_{\mu-sr}^s(1) }{s!\,\mathcal{F}_{\mu-sr}(1) } \end{equation}
We also need the inverse of this relation:
\begin{equation} \label{fs0} F_{\mu}=\sum_{s\in \mathbb{Z}_{\geq 0}}f_s\tilde{F}_{\mu-sr}. \end{equation}
Thus we need to invert the lower-unitriangular matrix mentioned above. One has
$$ \begin{pmatrix} 1&0&0...\\ \frac{ - \mathcal{F}_{\mu-r}^1(1) }{\mathcal{F}_{\mu-r}(1) } &1 &0...\\ \frac{ \mathcal{F}_{\mu-2r}^2(1) }{2!\,\mathcal{F}_{\mu-2r}(1) } &... &1\\ ...\\ \end{pmatrix}^{-1}=\begin{pmatrix} 1&0&0...\\ \frac{ \mathcal{F}_{\mu-r}^1(1) }{\mathcal{F}_{\mu-r}(1) } &1 &0...\\ \frac{ - \mathcal{F}_{\mu-2r}^2(1) }{2!\,\mathcal{F}_{\mu-2r}(1) } &... &1\\ ...\\ \end{pmatrix} $$
Thus in \eqref{fs0} one has
\begin{equation} \label{kfs} f^{\mu}_s=\frac{(-1)^{s+1} \mathcal{F}_{\mu-sr}^s(1) }{s!\,\mathcal{F}_{\mu-sr}(1) } \end{equation}
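The inverse of a lower-unitriangular change-of-basis matrix, such as the one relating $\{F_\mu\}$ and $\{\tilde F_\mu\}$, can in general be computed by forward substitution. A minimal Python sketch of this generic procedure (the function name and the numeric test matrix are illustrative assumptions, not the matrix of the paper):

```python
from fractions import Fraction

def invert_unitriangular(T):
    """Invert a lower-unitriangular matrix by forward substitution:
    the entries of the inverse S are found row by row from T S = I."""
    n = len(T)
    S = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for i in range(n):
        for j in range(i):
            S[i][j] = -sum(Fraction(T[i][k]) * S[k][j] for k in range(j, i))
    return S

T = [[1, 0, 0], [2, 1, 0], [3, 4, 1]]
S = invert_unitriangular(T)   # S = [[1,0,0], [-2,1,0], [5,-4,1]]
```

Exact rational arithmetic (`Fraction`) keeps the entries of the inverse exact, which matters when the entries are ratios of hypergeometric values as above.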
\section{ A solution of the multiplicity problem for the Clebsh-Gordan coefficients } \label{krtn}
In the case of the algebra $\mathfrak{gl}_2$, different representations $U^s$ occurring in the decomposition \eqref{rzl} have different highest weights, so one can use the highest weight as the index $s$. In the case of $\mathfrak{gl}_3$ the situation is much more difficult: there appears the multiplicity problem, i.e. in the decomposition \eqref{rzl} a representation $U$ of a given highest weight can occur with some multiplicity.
In the paper \cite{a1} the following solution of the problem of an explicit description of the representations $U^s$ occurring in \eqref{rzl} was given. One realizes $V\otimes W$ in the space of functions on the product of groups $GL_3\times GL_3$. The matrix element functions on the first factor are denoted by $a_i^j$, and on the second by $b_i^j$. Introduce the following functions on $GL_3\times GL_3$:
\begin{align} \begin{split} \label{aab} &(ab)_{i_1,i_2}:=det\begin{pmatrix} a_i^1\\ b_i^1
\end{pmatrix}_{i=i_1,i_2},\,\,\,\, (aabb)_{i_1,i_2,i_3,i_4}:=a_{i_1,i_2}b_{i_3,i_4}-a_{i_3,i_4}b_{i_1,i_2},\\
&(aab)=det\begin{pmatrix} a_i^1\\ a_i^2\\b_i^1
\end{pmatrix}_{i=1,2,3}, (abb)=det\begin{pmatrix} a_i^1\\ b_i^1\\b_i^2
\end{pmatrix}_{i=1,2,3} \end{split}\end{align}
Consider a tensor product $V\otimes W$ of representations with highest weights $[m_1,m_2,0]$ and $[m'_1,m'_2,0]$\footnote{Below we put $m_3=0$, $m'_3=0$.}. Then a basis in the space of $\mathfrak{gl}_3$-highest vectors is formed by the following functions. Put
\begin{equation} \label{foo} f(\omega,\varphi,\psi,\theta):=a_1^{\alpha}b_1^{\beta}a_{1,2}^{\gamma}b_{1,2}^{\delta}(ab)_{1,2}^{\omega}(abb)^{\varphi}(aab)^{\psi}(aabb)_{1,2,1,3}^{\theta}, \end{equation}
where
\begin{align} \begin{split} \label{usl0} &\alpha+\omega+\varphi=m_1-m_2,\,\,\,\gamma+\theta+\psi=m_2,\\ &\beta+\omega+\psi=m'_1-m'_2,\,\,\,\delta+\varphi+\theta=m'_2. \end{split} \end{align}
The function \eqref{foo} is not indexed by all of its exponents, since the exponents $\alpha,\beta,\gamma,\delta$ can be recovered from \eqref{usl0}.
\begin{prop}
\label{mlt}
In the space of $\mathfrak{gl}_3$-highest vectors there is a basis consisting of the functions of type $f(0,\varphi,\psi,\theta)$ and $f(\omega,\varphi,\psi,0)$. \end{prop}
Thus the index $s$ from \eqref{rzl} runs through the set of functions $f(0,\varphi,\psi,\theta)$ and $f(\omega,\varphi,\psi,0)$, where $f$ is defined in \eqref{foo} and the exponents satisfy the conditions \eqref{usl0}. One can identify a function with its exponents $\alpha,...,\theta$.
\section{$3j$-symbols and Clebsh-Gordan coefficients}
\subsection{A relation to the Clebsh-Gordan coefficients} \label{3jcg}
Suppose we are given representations $V$, $W$, $U$ of the Lie algebra $\mathfrak{gl}_3$, and choose in them bases $\{v_{\mu}\}$, $\{w_{\nu}\}$, $\{u_{\rho}\}$. Then a $3j$-symbol is a collection of numbers
\begin{equation} \label{3j} \begin{pmatrix} V& W& U\\ v_{\mu} & w_{\nu} & u_{\rho} \end{pmatrix}^s, \end{equation}
such that the value
$$
\sum_{\mu,\nu,\rho}\begin{pmatrix}
V& W& U\\ v_{\mu} & w_{\nu} & u_{\rho}
\end{pmatrix}^s v_{\mu} \otimes w_{\nu} \otimes u_{\rho}
$$
is a $\mathfrak{gl}_3$ semi-invariant. The $3j$-symbols with the same inner indices form a linear space, and the index $s$ enumerates basis $3j$-symbols with the same inner indices.
These coefficients are closely related to the Clebsch-Gordan coefficients. Indeed, suppose we are given a decomposition into a sum of irreducible representations:
\begin{equation}
\label{rzl9}
V\otimes W=\sum_s U^s. \end{equation}
Take bases $\{v_{\mu}\}$, $\{w_{\nu}\}$, $\{u^s_{\rho}\}$. The Clebsch-Gordan coefficients are the coefficients in the decomposition
\begin{equation} \label{ur2}
u_{\rho'}^s = \sum_{\mu,\nu} D^{U,\rho',s}_{V,W;\mu,\nu} v_{\mu}\otimes w_{\nu}. \end{equation}
Consider the representation $\bar{U}$, contragredient to $U$, and take in $\bar{U}$ the basis $\bar{u}_{\rho}$ dual to $u_{\rho}$. There exists a mapping $U\otimes \bar{U} \rightarrow {\bf 1}$ onto the trivial representation such that $u_{\rho'}\otimes \bar{u}_{\rho}\mapsto \delta_{\rho,\rho'}$, where $\delta_{\rho,\rho'}$ is the Kronecker symbol.
Multiplying \eqref{ur2} by $\bar{u}_{\rho'}$ and taking the sum over $\rho'$, one gets
$$ {\bf 1}=\sum_{\mu,\nu,\rho'}D^{U,\rho',s}_{V,W;\mu,\nu} v_{\mu}\otimes w_{\nu}\otimes \bar{u}_{\rho'}. $$
Thus one has
$$ D^{U,\rho,s}_{V,W;\mu,\nu}=\begin{pmatrix} V& W& \bar{U}\\ v_{\mu} & w_{\nu} & \bar{u}_{\rho} \end{pmatrix}^s. $$ This formula allows one to identify the multiplicity spaces for the Clebsch-Gordan coefficients and for the $3j$-symbols.
Thus the problems of calculating the Clebsch-Gordan coefficients and the $3j$-symbols are equivalent.
\subsection{$3j$-symbols in functional realization}
In the functional realization a $3j$-symbol for representations $V$, $W$, $U$ is described as follows. Let the representations be realized in the space of functions on $GL_3\times GL_3\times GL_3$; we may put $m_3=0$, $m'_3=0$. The matrix element functions on the three factors $GL_3$ are denoted by $a_i^j$, $b_i^j$, $c_i^j$, and analogous letters denote the determinants of matrices composed of these matrix elements.
Decompose the tensor product $V\otimes W\otimes U$ into a sum of irreducible representations and take one of the occurring trivial representations:
$$V\otimes W\otimes U={\bf 1}^s\oplus...,$$
where ${\bf 1}^s$ is one of the occurring trivial representations. More precisely, one can write $V\otimes W\otimes U=\Big ( {\bf 1}\otimes M \Big ) \oplus...,$ where $M$ is a linear space called the multiplicity space. Choose a basis $\{e_i\}$ in $M$ and denote ${\bf 1}^s:={\bf 1}\otimes e_s$. Several trivial representations can occur in this decomposition, and $s$ is their index \footnote{Let us mention the papers \cite{kl}, \cite{tk}, \cite{tk1}, where an analogous approach to the decomposition of a double tensor product is used}.
Let the basis vectors be indexed by diagrams:
\begin{align}
\begin{split}
\label{3d}
&v_{\mu}=\begin{pmatrix}
m_{1} && m_{2} &&0\\ &k_{1}&& k_{2}\\&&s
\end{pmatrix},\,\,\, w_{\nu}=\begin{pmatrix}
m'_{1} && m'_{2} &&0\\ &k'_{1}&& k'_{2}\\&& s'
\end{pmatrix},\\
&u_{\rho}=\begin{pmatrix} M_1 &&M_2 &&0\\ &K_1 &&K_2\\&&S
\end{pmatrix}
\end{split}
\end{align}
One has
\begin{equation}
\label{3ddu} \bar{ u}_{\rho}=\begin{pmatrix}
-M_3 &&-M_2 &&-M_1\\ &-K_2 &&-K_1\\&&-S
\end{pmatrix}
\end{equation}
\subsection{ An explicit form of a $\mathfrak{gl}_3$-invariant in $V\otimes W\otimes U$}
Let us show that the highest vector of the representation ${\bf 1}^s$ must be of the following type (or a linear combination of such expressions):
\begin{equation}
\label{skob} g=\prod_i (\underbrace{a\cdots a}_{k^i_1}\underbrace{b\cdots b}_{k^i_2}\underbrace{c\cdots c}_{k^i_3}),\end{equation}
where, by analogy with \eqref{aab}, we introduce the determinant expressions $(abc)$, $(aac)$, $(acc)$, $(bcc)$, $(bbc)$, and
\begin{align*}&(aabbcc):=(\tilde{a}\tilde{b}\tilde{c}),\,\,\,\tilde{a}_1^1:=a_{2,3},\,\,\,\tilde{a}_2^1:=-a_{1,3},\,\,\,\tilde{a}_3^1:=a_{1,2},\end{align*} where $\tilde{b}_i^1$, $\tilde{c}_i^1$ are defined analogously.
The following conditions must hold
\begin{align}
\begin{split}
\label{usl} & m_1-m_2=\# \{ i : \,\,\, k_1^i=1 \} ,\,\,\,\, m_2=\# \{ i : \,\,\, k_1^i=2 \} ,\,\,\,\, 0=\# \{ i : \,\,\, k_1^i=3 \},\\ & m'_1-m'_2=\# \{ i : \,\,\, k_2^i=1 \} ,\,\,\,\, m'_2=\# \{ i : \,\,\, k_2^i=2 \} ,\,\,\,\, 0=\# \{ i : \,\,\, k_2^i=3 \},\\ & M_1-M_2=\# \{ i : \,\,\, k_3^i=1 \} ,\,\,\,\, M_2-M_3=\# \{ i : \,\,\, k_3^i=2 \} ,\,\,\,\, M_3=\# \{ i : \,\,\, k_3^i=3 \}.\\
\end{align}
These conditions ensure that $g\in V\otimes W\otimes U$ (see \cite{zh}).
Indeed, on the one hand the index $s$ indexing different $3j$-symbols \begin{equation} \begin{pmatrix} V& W& U\\ v_{\mu} & w_{\nu} & u_{\rho} \end{pmatrix}^s \end{equation}
with the same inner indices coincides with the index enumerating different Clebsch-Gordan coefficients
$$ C^{\bar{U},\bar{\gamma},s}_{V,W;\alpha,\beta}.$$
On the other hand, to the index $s$ of a Clebsch-Gordan coefficient there corresponds a function \eqref{foo}, to which there corresponds an expression of type \eqref{skob} constructed as follows. To the factors of \eqref{foo} there correspond factors from \eqref{skob} by the rule:
\begin{align*}
&a_1 \mapsto (acc),\,\,\, a_{1,2} \mapsto (aac),\,\,\, b_1 \mapsto (bcc),\,\,\, b_{1,2} \mapsto (bbc),\,\,\, \\
& (ab)\mapsto (abc),\,\,\, (aabb)_{1,2,1,3} \mapsto (aabbcc),\,\,\,\\
&(aab)\mapsto (aab),\,\,\, (abb) \mapsto (abb).
\end{align*}
Thus, starting from the expression \eqref{foo} we have constructed an expression \eqref{skob}; this construction is one-to-one, that is, we have an isomorphism.
Hence we have described all $\mathfrak{gl}_3$-invariants of a triple tensor product.
\section{A formula for a $3j$-symbol}
Let us find an expression for a $3j$-symbol. A $3j$-symbol has a multiplicity index $s$, which is a function \eqref{foo}. To this function there corresponds a trivial representation in the triple tensor product with the highest vector
\begin{equation}
\label{goo} g(\omega,\varphi,\psi,\theta):=\frac{(acc)^{\alpha}(bcc)^{\beta}(aac)^{\gamma}(bbc)^{\delta}(abc)^{\omega}(abb)^{\varphi}(aab)^{\psi}(aabbcc)^{\theta}}{\alpha!\beta!\gamma!\delta!\omega!\varphi!\psi!\theta!}.
\end{equation}
We have modified the expression \eqref{skob} by dividing by the factorials of the exponents.
\subsection{ Lattices $B'_1$ and $B''_1$}
Consider independent variables corresponding to the summands in the determinants $(aac),(acc),...,(aabbcc)$. Denote these variables as follows:
\begin{align} \begin{split} \label{perem} &Z=\{[[c_1a_{2,3}] ,[c_2a_{1,3}] ,[c_3a_{1,2}] , [a_1c_{2,3}] , [a_{2}c_{1,3}] ,[a_3c_{1,2}] , [c_1b_{2,3}] , [c_2b_{1,3}] , [c_3b_{1,2}] ,\\& [b_1c_{2,3}] ,[b_{2}c_{1,3}] ,[b_3c_{1,2}] , [b_1a_{2,3}] , [b_2a_{1,3}] , [b_3a_{1,2}], [a_1b_{2,3}] , [a_{2}b_{1,3}] ,[a_3b_{1,2}] ,\\& [a_1b_2c_3],[a_2b_3c_1], [a_3b_1c_2], [a_2b_1c_3],[a_1b_3c_2], [a_3b_2c_1] \\& [a_{2,3}b_{1,3}c_{1,2}] , [a_{1,3}b_{1,2}c_{2,3}] , [a_{1,2}b_{2,3}c_{1,3}] , [a_{1,3}b_{2,3}c_{1,2}] , [a_{2,3}b_{1,2}c_{1,3}] , [a_{1,2}b_{1,3}c_{2,3}] \}. \end{split} \end{align}
These variables are coordinates in a $30$-dimensional space. When one expands the determinants occurring in $g$, there appear monomials in the variables \eqref{perem}. The vectors of exponents of these monomials are vectors in this $30$-dimensional space.
Consider a vector
$$ v_0=(\gamma,0,0,\alpha,0,0,\delta ,0,0,\beta,0,0,\psi,0,0,\varphi,0,0,\omega,0,0,0,0,0,\theta,0,0,0,0,0) $$
Such a vector of exponents is obtained in the case when one takes the first summand in each determinant. Note that either $\omega=0$, or $\theta=0$.
If one changes the choice of summands, then to the vector of exponents $v_0$ one needs to add one of the following vectors:
\begin{align*} &p_1= e_{[c_1a_{2,3}]}-e_{ [c_2a_{1,3}] }, & p_2=e_{[c_1a_{2,3}]}-e_{ [c_3a_{1,2}] }, \\&p_3= e_{[a_1c_{2,3}] }-e_{[a_2c_{1,3}] }, & p_4= e_{[a_1c_{2,3}] }-e_{[a_3c_{1,2}] },\\ &p_5= e_{[c_1b_{2,3}]}-e_{ [c_2b_{1,3}] }, & p_6=e_{[c_1b_{2,3}]}-e_{ [c_3b_{1,2}] }, \\& p_7= e_{[b_1c_{2,3}] }-e_{[b_2c_{1,3}] }, & p_8= e_{[b_1c_{2,3}] }-e_{[b_3c_{1,2}] },\\ &p_9= e_{[a_1b_{2,3}]}-e_{ [a_2b_{1,3}] }, & p_{10}=e_{[a_1b_{2,3}]}-e_{ [a_3b_{1,2}] }, \\& p_{11}= e_{[b_1a_{2,3}] }-e_{[b_2a_{1,3}] }, & p_{12}= e_{[b_1a_{2,3}] }-e_{[b_3a_{1,2}] },\\& p_{13}=e_{[a_1b_2c_3]}-e_{[a_2b_3c_1]} & p_{14}=e_{[a_1b_2c_3]}-e_{[a_3b_1c_2]} \\ & p_{15}=e_{[a_1b_2c_3]}-e_{[a_2b_1c_3]} & p_{16}=e_{[a_1b_2c_3]}-e_{[a_1b_3c_2]} \\& p_{17}=e_{[a_1b_2c_3]}-e_{[a_3b_2c_1]}, &p_{18}= e_{[a_{2,3}b_{1,3}c_{1,2}]}-e_{ [a_{1,3}b_{1,2}c_{2,3}] }, \\& p_{19}= e_{[a_{2,3}b_{1,3}c_{1,2}]}-e_{ [a_{1,2}b_{2,3}c_{1,3}] }, & p_{20}= e_{[a_{2,3}b_{1,3}c_{1,2}]}-e_{ [a_{1,3}b_{2,3}c_{1,2}] }, \\& p_{21}= e_{[a_{2,3}b_{1,3}c_{1,2}]}-e_{ [a_{1,2}b_{1,3}c_{2,3}] }, & p_{22}= e_{[a_{2,3}b_{1,3}c_{1,2}]}-e_{ [a_{2,3}b_{1,2}c_{1,3}] }, \end{align*}
Define projectors
$$ pr_a,pr_b,pr_c:\mathbb{C}^{30}\rightarrow \mathbb{C}^6, $$
which operate as follows. Given a vector of exponents for variables \eqref{perem} they construct vectors of exponents for variables $a_X$, $b_X$, $c_X$.
Consider a vector $v$ of exponents of a monomial in the variables \eqref{perem} which corresponds to an arbitrary choice of summands in the determinants. Let us find which vectors $\tau\in\mathbb{Z}<p_1,...,p_{22}>$ have the following property: when $\tau$ is added to $v$, vectors proportional to $(0,1,-1,-1,1,0)$ are added to the projections $pr_a$, $pr_b$, $pr_c$.
\begin{defn} {\it An elementary cycle} is the following object. Take two determinants from $(caa),...,(aabbcc)$ and draw two arrows: from a symbol $x=a,b,c$ in one determinant to a symbol $xx=aa,bb,cc$ in the other determinant, or from a symbol $x$ in one determinant to $x$ in the other determinant, or from a symbol $xx$ in one determinant to a symbol $xx$ in the other determinant.
The following two conditions must be satisfied: one arrow goes from the first determinant to the second and the other goes from the second to the first, and one of the arrows goes from $x$ to $xx$. \end{defn}
Here are examples of cycles:
\[ (a\tikzmark{0} bb\tikzmark{11} ) (\tikzmark{00} aa \tikzmark{1} b),\,\,\,
(a\tikzmark{5} b\tikzmark{66} c) (\tikzmark{55} aa \tikzmark{6} bb cc),\,\,\,(a\tikzmark{90} bb\tikzmark{911} ) (\tikzmark{900} a \tikzmark{91} bc),\,\,\, (a\tikzmark{95} bb\tikzmark{966} ) (\tikzmark{955} aa \tikzmark{96} bb cc). \]
\begin{tikzpicture}[remember picture, overlay, bend left=45, -latex, blue] \draw ([yshift=2ex]pic cs:0) to ([yshift=2ex]pic cs:00); \draw ([yshift=0ex]pic cs:1) to ([yshift=0ex]pic cs:11);
\draw ([yshift=2ex]pic cs:5) to ([yshift=2ex]pic cs:55); \draw ([yshift=0ex]pic cs:6) to ([yshift=0ex]pic cs:66); \draw ([yshift=2ex]pic cs:90) to ([yshift=2ex]pic cs:900); \draw ([yshift=0ex]pic cs:91) to ([yshift=0ex]pic cs:911); \draw ([yshift=2ex]pic cs:95) to ([yshift=2ex]pic cs:955); \draw ([yshift=0ex]pic cs:96) to ([yshift=0ex]pic cs:966); \end{tikzpicture}
All other examples of cycles are obtained from these by applying a permutation of the symbols $a,b,c$.
To an elementary cycle there corresponds a vector $\tau\in\mathbb{Z}<p_1,...,p_{22}>$ by the following rule. To the elementary cycles written above there correspond the vectors
\begin{align} \begin{split} \label{vec} & u_1=-e_{[a_1b_{2,3}]}+e_{[a_2b_{1,3}]}-e_{[b_1a_{2,3}]}+e_{[b_2a_{1,3}]},\\ & u_2=-e_{[a_1b_{2}c_{3}]}+e_{[a_2b_{1}c_{3}]}-e_{[a_{2,3}b_{1,3}c_{1,2}]}+e_{[a_{1,3}b_{2,3}c_{1,2}]},\\ & u_3=-e_{[a_1b_{2,3}]}+e_{[a_2b_{1,3}]}-e_{[a_{2,3}b_{1,3}c_{1,2}]}+e_{[a_{1,3}b_{2,3}c_{1,2}]},\\ & u_4=-e_{[a_1b_{2,3}]}+e_{[a_2b_{1,3}]}-e_{[a_{2,3}b_{1,3}c_{1,2}]}+e_{[a_{1,3}b_{2,3}c_{1,2}]},\\ \end{split} \end{align}
Let $B_1$ be the integer lattice spanned by the vectors corresponding to elementary cycles. One has
\begin{prop}
\label{prp}
The lattice $B_1$ is generated by the vectors corresponding to elementary cycles of the types
\[
(a\tikzmark{995} bb\tikzmark{9966} ) (\tikzmark{9955} aa \tikzmark{996} bb cc),\,\,\,
(a\tikzmark{85} bb\tikzmark{866} ) (\tikzmark{855} a \tikzmark{86} b c)
\]
\begin{tikzpicture}[remember picture, overlay, bend left=45, -latex, blue]
\draw ([yshift=2ex]pic cs:995) to ([yshift=2ex]pic cs:9955);
\draw ([yshift=0ex]pic cs:996) to ([yshift=0ex]pic cs:9966);
\draw ([yshift=2ex]pic cs:85) to ([yshift=2ex]pic cs:855);
\draw ([yshift=0ex]pic cs:86) to ([yshift=0ex]pic cs:866);
\end{tikzpicture} \end{prop}
\proof
One proves directly that a vector corresponding to an elementary cycle can be expressed through these vectors.
\endproof
One has $B_1\subset B$.
In Proposition \ref{prp} two series of generators of the lattice $B_1$ are listed. Let $B'_1$ be a sublattice of $B_1$, generated by elementary cycles of the first type. Let $B''_1$ be a sublattice of $B_1$, generated by elementary cycles of the second type.
In both cases the generators form a basis. The fact that the selected vectors are linearly independent follows from the fact that each vector contains coordinate vectors that are not involved in the other vectors.
Thus in $B'_1$ there exists a basis
\begin{align} \begin{split} \label{bb1} &v_1^a=-e_{[a_1b_{2,3}]}+e_{[a_2b_{1,3}]}-e_{[a_{2,3}b_{1,3}c_{1,2}]}+e_{[a_{1,3}b_{2,3}c_{1,2}]},\,\,\, v_2^a=-e_{[a_1c_{2,3}]}+e_{[a_2c_{1,3}]}-e_{[a_{2,3}b_{1,2}c_{1,3}]}+e_{[a_{1,3}b_{1,2}c_{2,3}]}\\ &v_1^b=-e_{[b_1a_{2,3}]}+e_{[b_2a_{1,3}]}-e_{[a_{1,3}b_{2,3}c_{1,2}]}+e_{[a_{2,3}b_{1,3}c_{1,2}]},\,\,\, v_2^b=-e_{[b_1c_{2,3}]}+e_{[b_2c_{1,3}]}-e_{[a_{1,2}b_{2,3}c_{1,3}]}+e_{[a_{1,2}b_{1,3}c_{2,3}]}\\ &v_1^c=-e_{[c_1a_{2,3}]}+e_{[c_2a_{1,3}]}-e_{[a_{1,3}b_{1,2}c_{2,3}]}+e_{[a_{2,3}b_{1,2}c_{1,3}]},\,\,\, v_2^c =-e_{[c_1b_{2,3}]}+e_{[c_2b_{1,3}]}-e_{[a_{1,2}b_{1,3}c_{2,3}]}+e_{[a_{1,2}b_{2,3}c_{1,3}]}
\end{split} \end{align}
In the case when \eqref{goo} contains $(aabbcc)$ but does not contain $(abc)$ one uses $B'_1$. In the case when \eqref{goo} contains $(abc)$ but does not contain $(aabbcc)$ one uses $B''_1$.
\begin{prop}
Let \eqref{goo} contain $(aabbcc)$ and not contain $(abc)$. Take the $30$-dimensional vectors $\varpi$, $\varpi'$ of exponents of two monomials in the variables $Z$ which one obtains when one opens the brackets in \eqref{goo}. Let $[\mu,\nu,\rho]=[pr_a(\varpi),pr_b(\varpi),pr_c(\varpi)]$, $[\mu',\nu',\rho']=[pr_a(\varpi'),pr_b(\varpi'),pr_c(\varpi')]$.
Then $[\mu',\nu',\rho'] =[\mu,\nu,\rho]+b_1$, $b_1\in B'_1$ if and only if simultaneously
\begin{align*}& \mu=\mu' \mod (0,-1,1,1,-1,0),\\
& \nu=\nu' \mod (0,-1,1,1,-1,0),\\
& \rho=\rho' \mod (0,-1,1,1,-1,0).\end{align*}
In the case when \eqref{goo} contains $(abc)$ but does not contain $(aabbcc)$ the same is true if one changes $B'_1$ to $B''_1$.
\end{prop}
\begin{proof}
Consider the case when \eqref{goo} contains $(aabbcc)$ but does not contain $(abc)$.
From $[\mu',\nu',\rho'] =[\mu,\nu,\rho] +b_1$, $b_1\in B'_1$ it follows that
$\mu=\mu' \mod (0,-1,1,1,-1,0)$,
$\nu=\nu' \mod (0,-1,1,1,-1,0)$,
$\rho=\rho' \mod (0,-1,1,1,-1,0)$. This can be proved by direct computation.
Let us prove the converse. One changes the choice of summands in the determinants so that $\mu=\mu' \mod (0,-1,1,1,-1,0)$, $\nu=\nu' \mod (0,-1,1,1,-1,0)$, $\rho=\rho' \mod (0,-1,1,1,-1,0)$. For example, we change $a_1b_{2,3}$ from the determinant $(abb)$ to $a_2b_{1,3}$. The exponent of $b_{2,3}$ decreases by $1$ and the exponent of $b_{1,3}$ increases by $1$. The vector of exponents of the determinants $b$ must belong to the set $\delta+ (0,-1,1,1,-1,0)$. Thus the exponent of $b_1$ must increase by $1$ and the exponent of $b_2$ must decrease by $1$. This can take place in the case when in another determinant, say in $(bcc)$, the summand $b_1c_{2,3}$ is changed to $b_2c_{1,3}$. Then we consider the change of the exponents of $c_{2,3}$, $c_{1,3}$, and so on. Finally we must return to the determinant $(abb)$ and obtain the change of the exponents of $a_1$ and $a_2$ which takes place due to the change of a summand in the first determinant.
This construction corresponds to the shift of the vector of exponents by a vector from $B'_1$.
\end{proof}
\subsubsection{Scalar products}
Consider the case when \eqref{goo} contains $(aabbcc)$ and does not contain $(abc)$.
Let us calculate the scalar products. For $s_1,s_2,s_3\in\mathbb{Z}_{\geq 0 }$ introduce a hypergeometric-type series in the variables $Z$:
\begin{align} \begin{split} \label{defz} &\mathcal{F}_{\varpi}^{s_1,s_2,s_3}(Z,B'_1):=\sum_{t\in \mathbb{Z}^6} \binom{t_1^a+t_2^a+s_1}{s_1}
\binom{t_1^b+t_2^b+s_2}{s_2}
\binom{t_1^c+t_2^c+s_3}{s_3}
\frac{Z^{\varpi+tv}}{ (\varpi+tv)! },\\ &t=(t_1^a,t_2^a,t_1^b,t_2^b,t_1^c,t_2^c),\,\,\,\,tv=t^a_1v^a_1+t^a_2v^a_2+...,\,\,\, \varpi\in\mathbb{Z}^{30}\\
\end{split} \end{align}
Introduce vectors
\begin{align*} & f_a=-e_{[a_1c_{2,3}]}+e_{[a_3c_{1,2}]}-e_{[a_{2,3}b_{1,3}c_{1,2}]}-e_{ [a_{1,2}b_{1,3}c_{2,3} ] },\\ &f_b=-e_{[b_1c_{2,3}]}+e_{[b_3c_{1,2}]}-e_{[a_{1,3}b_{2,3}c_{1,2}]}-e_{ [a_{1,3}b_{1,2}c_{2,3} ] },\\ &f_c=-e_{[a_1b_{2,3}]}+e_{[a_3b_{1,2}]}-e_{[a_{1,2}b_{2,3}c_{1,3}]}-e_{ [a_{2,3}b_{1,2}c_{1,3} ] }. \end{align*}
One has (see \eqref{vr})
\begin{align} \begin{split} \label{prabc} &pr_a(\tau+f_a)=pr_a(\tau)+r,\\ &pr_b(\tau+f_a)=pr_b(\tau),\\ &pr_c(\tau+f_a)=pr_c(\tau), \end{split} \end{align}
and one has analogous conditions for $f_b,f_c$.
Below we consider vectors $\varpi\in\mathbb{Z}^{30}$ and vectors $\mu,\nu,\rho$ that satisfy the following relations
\begin{align} \begin{split} \label{vrph} &\varpi=v_0+tv^{abc}+s_1f_a+s_2f_b+s_3f_c,\,\,\, t\in\mathbb{Z}^{9},\,\,\, s_1,s_2,s_3\in\mathbb{Z},\\ &\mu=pr_a(\varpi),\,\,\, \nu=pr_b(\varpi),\,\,\, \rho=pr_c(\varpi). \end{split} \end{align}
\begin{prop}
\label{prp1} Let $\varpi, \mu,\nu,\rho$ satisfy the relations \eqref{vrph}. Then
\begin{equation}
\label{p1}
<g,\mathcal{F}_{\mu}^{s_1}(A)\mathcal{F}_{\nu}^{s_2}(B)\mathcal{F}_{\rho}^{s_3}(C)>=
\mathcal{F}_{\varpi}^{s_1,s_2,s_3}(\pm 1,B'_1).
\end{equation}
Instead of those variables from $Z$ (see \eqref{perem}) which occur in a determinant with the sign $+$, one substitutes $+1$, and instead of those variables which occur in a determinant with the sign $-$, one substitutes $-1$.
\end{prop}
\proof
Consider a product $\frac{A^x}{x!}\frac{B^y}{y!}\frac{C^z}{z!}$ occurring in $\mathcal{F}_{\mu}^{s_1}(A)\mathcal{F}_{\nu}^{s_2}(B)\mathcal{F}_{\rho}^{s_3}(C)$. The coefficient of this product is defined as follows. If $x=\mu+\tau_1v$, $y=\nu+\tau_2 v$, $z=\rho+\tau_3v$, then the coefficient equals
\begin{equation} \label{kf1}
\binom{\tau_1+s_1}{s_1}
\binom{\tau_2+s_2}{s_2}
\binom{\tau_3+s_3}{s_3}
\end{equation}
Now consider a scalar product
\begin{equation} \label{gabc} <g,\frac{A^x}{x!}\frac{B^y}{y!}\frac{C^z}{z!}>. \end{equation}
This scalar product can be calculated as follows. In \eqref{goo} the brackets are opened. One obtains an expression which is a $\Gamma$-series in the variables $Z$, where the variables occurring in a determinant with the sign $-$ are taken with the sign $-$. After this the variables $Z$ are replaced by products of the variables $A_X,B_X,C_X$ in the obvious way.
Let $h$ be a vector of exponents of a monomial in the variables $Z$ which appears when one opens the brackets in \eqref{goo}. Such a monomial is divided by the factorials of its exponents and is multiplied by a sign $\pm$.
The scalar product \eqref{gabc} is non-zero if for some $h$ one has $[pr_a(h),pr_b(h),pr_c(h)]=[x,y,z]$. In this case \eqref{gabc} is obtained by changing all the variables $Z$ to $1$ in a suitable monomial and summing over all suitable monomials.
Finally let us write $x=\mu+\tau_1v$, $y=\nu+\tau_2v$, $z=\rho+\tau_3v$. In the case $\tau_1=\tau_2=\tau_3=0$ one has $[\mu,\nu,\rho]=[pr_a(\varpi),pr_b(\varpi),pr_c(\varpi)]$. Thus $[x,y,z]=[pr_a(\varpi'),pr_b(\varpi'),pr_c(\varpi')]$, where
\begin{align*} & \varpi'=\varpi+t^a_1 v^a_1+t^a_2v^a_2+ t^b_1 v^b_1+t^b_2v^b_2+ t^c_1 v^c_1+t^c_2v^c_2,\\ &\tau_1=t^a_1+t^a_2,\,\,\, \tau_2=t^b_1+t^b_2,\,\,\, \tau_3=t^c_1+t^c_2. \end{align*}
Let us take into account the coefficient \eqref{kf1} of $\frac{A^x}{x!}\frac{B^y}{y!}\frac{C^z}{z!}$ occurring in $\mathcal{F}_{\mu}^{s_1}(A)\mathcal{F}_{\nu}^{s_2}(B)\mathcal{F}_{\rho}^{s_3}(C)$. As a result one gets the expression on the right-hand side of \eqref{p1}.
\endproof
\begin{prop}
\label{prp2}
Let $\varpi, \mu,\nu,\rho$ satisfy the relations \eqref{vrph}. Then
$$
<g,\mathcal{F}_{\mu-s_1r}^{s_1}(A)\mathcal{F}_{\nu-s_2r}^{s_2}(B)\mathcal{F}_{\rho-s_3r}^{s_3}(C)>=
\mathcal{F}_{\varpi-s_1f_a-s_2f_b-s_3f_c}^{s_1,s_2,s_3}(\pm 1,B'_1).
$$ Instead of those variables from $Z$ (see \eqref{perem}) which occur in a determinant with the sign $+$, one substitutes $+1$, and instead of those variables which occur in a determinant with the sign $-$, one substitutes $-1$.
\end{prop}
\proof
This proposition follows immediately from Proposition \ref{prp1} and the formula \eqref{prabc}.
\endproof
Introduce a function
\begin{equation} \label{defz1} \tilde{F}_{\varpi}(Z,B'_1):=\sum_{s_1,s_2,s_3\in\mathbb{Z}_{\geq 0}}(-1)^{s_1+s_2+s_3} \mathcal{F}_{\varpi-s_1f_a-s_2f_b-s_3f_c}^{s_1,s_2,s_3}(Z,B'_1). \end{equation}
Note that the function $\mathcal{F}_{\varpi}^{0,0,0}(Z,B'_1)$ satisfies the GKZ equation and the function $\tilde{F}_{\varpi}(Z,B'_1)$ satisfies the A-GKZ equation, which consists of 3 equations of the type
$$ (\frac{\partial^2}{\partial [a_1c_{2,3}]\partial [a_{2,3}b_{1,2}c_{1,3}]}-\frac{\partial^2}{\partial [a_2c_{1,3}]\partial [a_{1,3}b_{1,2}c_{2,3}]}+\frac{\partial^2}{\partial [c_3a_{1,2}]\partial [a_{1,2}b_{1,3}c_{2,3}]})\tilde{F}_{\varpi}(Z,B'_1)=0. $$
As a direct consequence of Proposition \ref{prp2} one gets
\begin{prop}
Let $\varpi, \mu,\nu,\rho$ satisfy the relations \eqref{vrph}. Then
$$<g,\tilde{F}_{\mu}(A)\tilde{F}_{\nu}(B)\tilde{F}_{\rho}(C)>=\tilde{F}_{\varpi}(\pm 1,B'_1).$$
Instead of those variables from $Z$ (see \eqref{perem}) which occur in a determinant with the sign $+$, one substitutes $+1$, and instead of those variables which occur in a determinant with the sign $-$, one substitutes $-1$.
\end{prop}
Introduce a function
\begin{equation} \label{funs} F_{\varpi}(Z,B'_1):=\sum_{s_1,s_2,s_3\in\mathbb{Z}_{\geq 0}}f^{pr_a(\varpi)}_{s_1}f^{pr_b(\varpi)}_{s_2}f^{pr_c(\varpi)}_{s_3}F_{\varpi-s_1f_a-s_2f_b-s_3f_c}(Z,B'_1), \end{equation}
where coefficients $f$ are defined in \eqref{kfs}. Using the relation \eqref{fs0}, one gets the following statement
\begin{prop} Let \eqref{goo} contain $(aabbcc)$ and not contain $(abc)$, and let $\varpi, \mu,\nu,\rho$ satisfy the relations \eqref{vrph}. Then
$$<g,F_{\mu}(A)F_{\nu}(B)F_{\rho}(C)>=F_{\varpi}(\pm 1,B'_1)$$
Let \eqref{goo} contain $(abc)$ and not contain $(aabbcc)$; then
$$<g,F_{\mu}(A)F_{\nu}(B)F_{\rho}(C)>=F_{\varpi}(\pm 1,B''_1)$$
Instead of those variables from $Z$ (see \eqref{perem}) which occur in a determinant with the sign $+$, one substitutes $+1$, and instead of those variables which occur in a determinant with the sign $-$, one substitutes $-1$. \end{prop}
\subsection{ $3j$-symbols}
\subsection{Selection rules for $3j$-symbols} Let us answer the following question: which products $\mathcal{F}_{\mu}(a)\mathcal{F}_{\nu}(b)\mathcal{F}_{\rho}(c)$ can have a non-zero scalar product with a function $g$ of type \eqref{goo}?
Let us consider the A-GKZ realization. In this realization the vector $\mathcal{F}_{\mu}(a)\mathcal{F}_{\nu}(b)\mathcal{F}_{\rho}(c)$ is represented by an expression of type $F_{\mu}(A)F_{\nu}(B)F_{\rho}(C)$. The function $g$ is represented by an expression of type $g+pl$, where $pl$ is proportional to $A_1A_{2,3}-A_{2}A_{1,3}+A_{3}A_{1,2}$, and so on. Since the functions $F_{\mu}(A)$, $F_{\nu}(B)$, $F_{\rho}(C)$ are solutions of the A-GKZ system, one has
$$ <F_{\mu}(A)F_{\nu}(B)F_{\rho}(C), g+pl>=<F_{\mu}(A)F_{\nu}(B)F_{\rho}(C), g>. $$
Thus we need to consider the scalar product $<F_{\mu}(A)F_{\nu}(B)F_{\rho}(C), g>$.
The support of a function presented as a power series is the set of vectors of exponents of its monomials. Note that the support of the function $g$ defined in \eqref{goo}, considered as a function of the determinants $a_X,b_X,c_X$, is the following
$$ supp(g)=(\kappa+B)\cap (\mathbb{Z}_{\geq 0}^{18}) $$ for some vector $\kappa\in\mathbb{Z}_{\geq 0}^{18}$ and some lattice $B$. The vector $\kappa$ and the generators of $B$ can be written in the following manner:
\begin{align*} &\kappa=[pr_a(v_0),pr_b(v_0),pr_c(v_0)], & B=\mathbb{Z}<\pi_1,...,\pi_{22}>,\,\,\,\,\, \pi_i=[pr_a(p_i),pr_b(p_i),pr_c(p_i)] \end{align*}
The function $F_{\mu}(A)F_{\nu}(B)F_{\rho}(C)$ can have a non-zero scalar product with $g$ only if the following intersection is non-empty
$$ supp(F_{\mu}(A)F_{\nu}(B)F_{\rho}(C)) \cap supp(g) $$
This condition is written explicitly as follows
\begin{align} \begin{split} \label{potb} &(\bigcup_{s_1,s_2,s_3\in \mathbb{Z}_{\geq 0}} [\mu+B_{GC}-s_1r,\, \nu+B_{GC}-s_2r,\, \rho+B_{GC}-s_3r ]\cap (\mathbb{Z}_{\geq 0}^{18})) \cap supp(g)\neq \emptyset \end{split} \end{align}
The condition on a vector $\varpi$ that guarantees that the vector $[\mu,\nu,\rho]=[pr_a(\varpi),pr_b(\varpi),pr_c(\varpi)]$ satisfies \eqref{potb} is just the condition \eqref{vrph}.
\subsection{A formula for a $3j$-symbol}
Let us write \begin{equation} g=\sum_{\mu',\nu',\rho'} c_{\mu',\nu',\rho'} \mathcal{F}_{\mu'}(a)\mathcal{F}_{\nu'}( b)\mathcal{F}_{\rho'}(c), \end{equation} where $ c_{\mu',\nu',\rho'}$ is a $3j$-symbol \eqref{3j}. Change the determinants $a_X$, $b_X$, $c_X$ to independent variables $A_X$, $B_X$, $C_X$. Then the equality holds only modulo the Pl\"ucker relations.
Take the scalar product of both sides of this equality with $F_{\mu}(A)F_{\nu}(B)F_{\rho}(C)$. Since these functions are solutions of the A-GKZ system, one can ignore the fact that the previous equality holds only modulo the Pl\"ucker relations.
Using the previous calculations one obtains the Theorem.
\begin{thm}
\label{ost}
Let us be given representations with the highest weights $[m_1,m_2,0]$, $[m'_1,m'_2,0]$, $[M_1,M_2,M_3]$. Let Gelfand--Tsetlin base vectors be given; the $\Gamma$-series that correspond to them have shift vectors $\mu,\nu,\rho$ (for the formula for the shift vectors see Theorem \ref{vec3}). Fix a function of type \eqref{goo} with exponents satisfying the conditions of Proposition \ref{mlt}.
Take a vector $\varpi$ which is related by the equalities \eqref{vrph} to the vectors $\mu,\nu,\rho$ (if this is not possible, then the $3j$-symbol equals zero).
Then a $3j$-symbol \eqref{3j} equals
\begin{align}
\begin{split}
\label{osntf2}
&\frac{F_{\varpi}(\pm 1,B'_1)}{\mathcal{F}_{\mu}(1)\mathcal{F}_{\nu}(1)\mathcal{F}_{\rho}(1)}, \text{ if \eqref{goo} does not contain $(abc)$},\\
&\frac{F_{\varpi}(\pm 1,B''_1)}{\mathcal{F}_{\mu}(1)\mathcal{F}_{\nu}(1)\mathcal{F}_{\rho}(1)}, \text{ if \eqref{goo} does not contain $(aabbcc)$},
\end{split}
\end{align} where the function occurring in the numerator is defined in \eqref{funs} (see also \eqref{defz1}, \eqref{defz}).
In the function $F_{\varpi}$ in the numerator we substitute $+1$ for those $Z$ (see \eqref{perem}) that occur in determinants in \eqref{goo} with the sign $+$, and $-1$ for those $Z$ that occur in determinants in \eqref{goo} with the sign $-$.
In the denominator the $\Gamma$-series introduced in Theorem \ref{vec3} occur. We substitute $1$ for all their arguments.
\end{thm}
\section{Appendix: simpler selection rules} \label{dop}
In this section we present conditions under which a Clebsch--Gordan coefficient or a $3j$-symbol can be non-zero.
\subsection{Selection rules for Clebsch--Gordan coefficients}
Let us find necessary conditions on the elements of the diagrams \eqref{3d} under which the Clebsch--Gordan coefficient $C^{U,\gamma,s}_{V,W;\alpha,\beta}$ can be non-zero.
Let us find conditions on the upper rows, which are the $\mathfrak{gl}_3$-highest weights. Let the index $s$ correspond to the function \eqref{foo}. In terms of the exponents of \eqref{foo} these upper rows are written as follows
\begin{align} \begin{split} \label{s3d} &[m_1,m_2,0]=[\alpha+\gamma+\omega+\varphi+\psi+\theta,\gamma+\psi+\theta,0],\\ &[m'_1,m'_2,0]=[\beta+\delta+\omega+\varphi+\psi+\theta, \delta+\varphi+\theta ,0]\\ &[M_1,M_2,M_3]=[\alpha+\beta+\gamma+\delta+\omega+\varphi+\psi+2\theta, \gamma+\delta+\omega+\varphi+\psi+\theta,\varphi+\psi+\theta]\\ \end{split} \end{align}
One sees that between these rows one has a relation \begin{align} \begin{split} \label{s3} &[m_1,m_2,0]+[m'_1,m'_2,0]+\omega[-1,1,0]+(\varphi+\psi)[-1,0,1]+\theta[0,-1,1]=\\&=[M_1,M_2,M_3]. \end{split} \end{align}
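For completeness, the relation \eqref{s3} can be verified componentwise from \eqref{s3d}:
\begin{align*}
&m_1+m'_1-\omega-(\varphi+\psi)=\alpha+\beta+\gamma+\delta+\omega+\varphi+\psi+2\theta=M_1,\\
&m_2+m'_2+\omega-\theta=\gamma+\delta+\omega+\varphi+\psi+\theta=M_2,\\
&0+0+(\varphi+\psi)+\theta=\varphi+\psi+\theta=M_3.
\end{align*}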
Moreover this relation is sufficient for existence of $\alpha,...,\omega$, for which the conditions \eqref{s3d} hold.
Let us find the selection rules for the second rows of the diagrams \eqref{3d}. One can obtain them by considering the selection rules for the Clebsch--Gordan coefficients for the algebra $\mathfrak{gl}_2$. One has a tensor product of representations of the algebra $\mathfrak{gl}_2$ with highest weights $[k_1,k_2]$ and $[k'_1,k'_2]$. Which highest weights $[K_1,K_2]$ occur in the decomposition of this tensor product?
A base in the space of $\mathfrak{gl}_2$-highest vectors in the functional realization is formed by functions of type
\begin{equation} \label{foo2} f=a_1^{\alpha}a_{1,2}^{\beta}b_1^{\gamma}b_{1,2}^{\delta}(ab)^{\omega}. \end{equation}
Then
\begin{align} \begin{split} \label{2d} &[k_1,k_2]=[\alpha+\beta+\omega,\beta],\\ &[k'_1,k'_2]=[\gamma+\delta+\omega,\delta],\\ &[K_1,K_2]=[\alpha+\beta+\gamma+\delta+\omega,\beta+\delta+\omega]. \end{split} \end{align}
One sees that the following condition holds \begin{align} \begin{split} \label{s2} [k_1,k_2]+[k'_1,k'_2]+\omega[-1,1]=[K_1,K_2]. \end{split} \end{align}
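Indeed, \eqref{s2} can be checked componentwise using \eqref{2d}:
\begin{align*}
&k_1+k'_1-\omega=(\alpha+\beta+\omega)+(\gamma+\delta+\omega)-\omega=\alpha+\beta+\gamma+\delta+\omega=K_1,\\
&k_2+k'_2+\omega=\beta+\delta+\omega=K_2.
\end{align*}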
The relations \eqref{s2} are sufficient for existence of $\alpha,...,\omega$, for which the conditions \eqref{2d} hold.
Considering $E_{1,1}$-weights of diagrams one concludes that for the third rows one has
\begin{equation} \label{s1} s+s'=S. \end{equation}
Thus we have proved the following.
\begin{prop}
A Clebsch--Gordan coefficient can be non-zero only if the conditions \eqref{s3}, \eqref{s2}, \eqref{s1} hold for some non-negative integers $\omega$, $\varphi$, $\psi$, $\theta$. \end{prop}
\end{document} |
\begin{document}
\title{A general $q$-expansion formula based on matrix inversions and its applications}
\dedicatory{ \textsc{Jin Wang~\dag}\\[1mm]
Department of Mathematics, Soochow University\\
Suzhou 215006, P.~R.~China\\ Email:~\emph{[email protected]} \thanks{\dag~Corresponding author. This work was supported by NSFC (Grant~No. 11471237)}}
\subjclass[2010]{Primary 33D15 ; Secondary 05A30.} \keywords{$q$-Series; Expansion formula; Coefficient; Transformation; Summation; Matrix inversion; Lagrange--B\"{u}rmann inversion; Formal power series.}
\begin{abstract} In this paper, by use of matrix inversions, we establish a general $q$-expansion formula for an arbitrary formal power series $F(z)$ with respect to the base
$$
\left\{z^n\frac{\poq{az}{n}}{\poq{bz}{n}}\bigg|n=0,1,2,\ldots\right\}.$$
Some concrete expansion formulas and their applications to $q$-series identities are presented, including Carlitz's $q$-expansion formula, a new partial theta function identity and a coefficient identity of Ramanujan's ${}_1\psi_1$ summation formula as special cases. \end{abstract}
\maketitle\thispagestyle{empty} \markboth{ J. Wang}{A general $q$-expansion formula based on matrix inversions and its applications}
\section{Introduction} Throughout the present paper, we adopt the standard notation and
terminology for $q$-series from the book \cite{10}. As customary, the $q$-shifted factorials of complex variable $z$
with the base $q: |q|<1$ are given by \begin{eqnarray} (z;q)_\infty :=\prod_{n=0}^{\infty}(1-zq^n)\qquad\mbox{and}\quad (z;q)_n:=\frac{(z;q)_\infty}{(zq^n;q)_\infty}\label{guiding} \end{eqnarray} for all integers $n$. For integer $m\geq 1$, we use
the multi-parameter compact notation \[(a_1,a_2,\ldots,a_m;q)_n:=(a_1;q)_n (a_2;q)_n\ldots (a_m;q)_n.\] Also, the ${}_{r+1}\phi_r$ series with the base $q$ and the argument $z$ is defined to be \begin{align*} {}_{r+1}\phi_r\bigg[\genfrac{}{}{0pt}{}{a_1,a_2,\cdots,a_{r+1}}{b_1,b_2,\cdots,b_{r}};q,z\bigg]&:=\sum _{n=0} ^{\infty} \frac{\poq{a_1,a_2,\ldots,a_{r+1}}{n}}{\poq{q,b_1,b_2,\ldots,b_r}{n}}z^{n}. \end{align*} For any $f(z)=\sum_{n\geq 0}a_nz^n\in \mathbb{C}[[z]]$, where $\mathbb{C}[[z]]$ denotes the ring of formal power series in the variable $z$, we shall employ the coefficient functional $$\boldsymbol\lbrack z^n\boldsymbol\rbrack \{f(z)\}:=a_n\,\,\mbox{and}\,\, a_0=f(0).$$ We also follow the summation convention that for any integers $m$ and $n$, $$\sum_{k=m}^{n}a_k=-\sum_{k=n+1}^{m-1}a_k.$$
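The $q$-shifted factorials above are easy to experiment with numerically. The following sketch (illustrative Python, not part of the paper; all names are ours) implements $(z;q)_n$ and checks the defining relation \eqref{guiding} at a sample point, truncating the infinite product, which converges for $|q|<1$:

```python
def qpoch(z, q, n):
    """Finite q-shifted factorial (z; q)_n = prod_{k=0}^{n-1} (1 - z*q^k)."""
    p = 1.0
    for k in range(n):
        p *= 1.0 - z * q**k
    return p

def qpoch_inf(z, q, terms=200):
    """Truncation of (z; q)_infty; the factors tend to 1 geometrically for |q| < 1."""
    return qpoch(z, q, terms)

# Check the defining relation (z; q)_n = (z; q)_infty / (z*q^n; q)_infty.
z, q, n = 0.5, 0.3, 4
assert abs(qpoch(z, q, n) - qpoch_inf(z, q) / qpoch_inf(z * q**n, q)) < 1e-12
```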
In their paper \cite{ono}, G.H. Coogan and K. Ono presented the following identity which leads to the generating functions for values of certain expressions of Hurwitz zeta functions at non-positive integers.
\begin{yl}[cf.\mbox{\cite[Proposition 1.1]{ono}}]\label{ma-id11} For $|z|<1$, it holds \begin{align} \sum_{n=0}^\infty\,z^n\frac{\poq{z}{n}}{\poq{-z}{n+1}}= \sum_{n=0}^{\infty}(-1)^nz^{2n}q^{n^2}.\label{id11} \end{align} \end{yl} The appearance of \eqref{id11} reminds us of the famous Rogers--Fine identity \cite[Eq. (17.6.12)]{dlmf}.
\begin{yl}For $|z|<1$, it holds \begin{align} (1-z)\sum_{n=0}^\infty\,z^n\frac{\poq{aq}{n}}{\poq{bq}{n}}= \sum_{n=0}^{\infty}(1-azq^{2n+1})(bz)^nq^{n^2}\frac{\poq{aq,azq/b}{n}}{\poq{bq,zq}{n}}.\label{id117} \end{align} \end{yl} In fact, Identity \eqref{id11} can be easily deduced from \eqref{id117} via setting $aq=z=-b$. Moreover, by setting $a=z=-b$ in \eqref{id117}, we obtain another similar identity.
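Treated numerically rather than formally, \eqref{id117} and its specialization \eqref{id11} can be spot-checked for $|z|,|q|<1$. The sketch below (illustrative Python, not part of the paper; truncation orders are chosen ad hoc) compares both sides of each identity at a sample point:

```python
def qpoch(z, q, n):
    """(z; q)_n = prod_{k=0}^{n-1} (1 - z*q^k)."""
    p = 1.0
    for k in range(n):
        p *= 1.0 - z * q**k
    return p

def rogers_fine_lhs(z, q, a, b, N=200):
    # (1 - z) * sum_{n>=0} z^n (aq; q)_n / (bq; q)_n
    return (1.0 - z) * sum(z**n * qpoch(a*q, q, n) / qpoch(b*q, q, n)
                           for n in range(N))

def rogers_fine_rhs(z, q, a, b, N=50):
    # sum_{n>=0} (1 - a*z*q^{2n+1}) (bz)^n q^{n^2} (aq, azq/b; q)_n / (bq, zq; q)_n
    return sum((1.0 - a*z*q**(2*n + 1)) * (b*z)**n * q**(n*n)
               * qpoch(a*q, q, n) * qpoch(a*z*q/b, q, n)
               / (qpoch(b*q, q, n) * qpoch(z*q, q, n))
               for n in range(N))

z, q = 0.3, 0.4
assert abs(rogers_fine_lhs(z, q, 0.7, 0.5) - rogers_fine_rhs(z, q, 0.7, 0.5)) < 1e-10

# Identity (id11): sum_n z^n (z; q)_n / (-z; q)_{n+1} = sum_n (-1)^n z^{2n} q^{n^2}
lhs11 = sum(z**n * qpoch(z, q, n) / qpoch(-z, q, n + 1) for n in range(200))
rhs11 = sum((-1)**n * z**(2*n) * q**(n*n) for n in range(50))
assert abs(lhs11 - rhs11) < 1e-10
```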
\begin{yl}\label{ma-id1177}For $|z|<1$, it holds \begin{align} \sum_{n=0}^\infty\,z^n\frac{\poq{z}{n+1}}{\poq{-zq}{n}}=1+ 2\sum_{n=1}^{\infty}(-1)^nz^{2n}q^{n^2}.\label{id1177} \end{align} \end{yl} It is these identities, once treated as formal power series in $z$, that lead us to investigate in full generality the problem of representing formal power series in terms of the sequence $$
\left\{z^n\frac{\poq{az}{n}}{\poq{bz}{n}}\bigg|n=0,1,2,\ldots\right\},$$ which is just a base of the ring $\mathbb{C}[[z]]$. This fact asserts that for any $F(z)\in \mathbb{C}[[z]]$, there exists the series expansion \begin{align} F(z)=\sum_{n=0}^\infty\,c_nz^n\frac{\poq{az}{n}}{\poq{bz}{n}},\label{import1-0} \end{align}
where the coefficients $c_n$ must be independent of $z$ but may depend on the parameters $a$ and $b$. In this respect, particularly noteworthy is that in \cite{xrma}, X.R. Ma established a (formal) generalized Lagrange--B\"{u}rmann inversion formula. We record it for direct reference. \begin{yl}[cf.\mbox{\cite[Theorem 2.1]{xrma}}]\label{analogue}
Let $\{\phi_n(z)\}_{n=0}^{\infty}$ be an arbitrary sequence of formal power series with $\phi_n(0)=1$. Then for any $F(z)\in \mathbb{C}[[z]]$, we have
\begin{subequations}\label{expan-one} \begin{equation} F(z)=\sum_{n=0}^{\infty}\frac{c_nz^{n}}{\prod_{i=0}^{n}\phi_i(z)},\label{expan15a} \end{equation} where the coefficients \begin{eqnarray} c_n=\sum_{k=0}^n\boldsymbol\lbrack z^{k}\boldsymbol\rbrack\{F(z)\}\sum_{\stackrel{i\leq j_i}{k=j_{0}\leq j_1\leq j_2\leq\cdots\leq j_{n} \leq j_{n+1}=n}}\prod_{i=0}^n\boldsymbol\lbrack z^{j_{i+1}-j_{i}}\boldsymbol\rbrack\{\phi_i(z)\}.\label{expan-one-ii} \end{eqnarray} \end{subequations} \end{yl} For further information on Lemma \ref{analogue}, we refer the reader to \cite{xrma}. As for the classical Lagrange--B\"{u}rmann inversion formula, the reader might consult the book \cite[p. 629]{andrews4} by G.E. Andrews, R. Askey, and R. Roy. For its various $q$-analogues, we refer the reader to the papers \cite{andrews} by G.E. Andrews, \cite{carliz} by L. Carlitz, \cite{gessel-stan} by I. Gessel and D. Stanton, and \cite{kratt} by Ch. Krattenthaler, and especially to the survey \cite{11} of D. Stanton for more comprehensive information.
A simple expression for the coefficients $c_n$ seems unlikely in the case of \eqref{expan15a}. Without doubt, such an explicit formula is the key step to the successful use of this expansion formula.
But in contrast, as far as \eqref{import1-0} is concerned, we are able to establish the following explicit expression for $c_n$ via the use of matrix inversions (see Definition \ref{ddd000} below). This is just the theme of the present paper. \begin{dl}\label{analogue-two} For $F(z)\in \mathbb{C}[[z]]$, there exists the series expansion
\begin{subequations}\label{expan-two} \begin{equation} F(z)=\sum_{n=0}^{\infty}c_nz^{n}\frac{\poq{az}{n}}{\poq{bz}{n}}\label{expan-two-1} \end{equation} with the coefficients \begin{align} c_n=\boldsymbol\lbrack z^{n}\boldsymbol\rbrack\bigg\{ F(z)\frac{\poq{bz}{n-1}}{\poq{az}{n}}\bigg\}-a\sum_{k=0}^{n-1}B_{n-k,1}(a,b)q^{(n-k)k}\boldsymbol\lbrack z^{k}\boldsymbol\rbrack\bigg\{F(z)\frac{\poq{bz}{k}}{\poq{az}{k+1}}\bigg\},\label{coeformula} \end{align} where $B_{n,1}(a,b)$ are given by \begin{align} z=\sum_{n=1}^\infty B_{n,1}(a,b)z^n\frac{\poq{az}{n}}{\poq{bz}{n}}.\label{180} \end{align} \end{subequations} \end{dl} As a direct application of this expansion formula, we further set up a general transformation concerning the Rogers--Fine identity \eqref{id117}. \begin{dl}\label{tlidentity} For $G(z)=\sum_{n=0}^{\infty}t_nz^n\in \mathbb{C}[[z]]$, it always holds \begin{subequations} \begin{align} \frac{\poq{az}{\infty}}{\poq{bz}{\infty}}G(z)&= \sum_{n=0}^{\infty}\frac{\poq{aq/b,az}{n}}{\poq{q,bz}{n}}(bz)^nq^{n(n-1)}\nonumber\\ &\qquad\times\big(\widetilde{G}(zq^n;a,b)- azq^{2n}\widetilde{G}(zq^{n+1};a/q,b/q)\big), \label{expan-two-lzlzlz} \end{align} where \begin{align} \widetilde{G}(z;a,b)&:=\sum_{n=0}^\infty t_nz^n\frac{\poq{az}{n}}{\poq{bz}{n}}.\label{dy} \end{align} \end{subequations} \end{dl}
The rest of this paper is organized as follows. In Section 2, we shall prove Theorem \ref{analogue-two}. For this purpose, a series of preliminary results will be established. Section 3 is devoted to the proof of Theorem \ref{tlidentity}. Some applications of these two theorems to $q$-series are further discussed. Among these applications, there are a new partial theta function identity and a coefficient identity of Ramanujan's ${}_1\psi_1$ summation formula.
\section{The proof of Theorem \ref{analogue-two}} In this section, we proceed to show Theorem \ref{analogue-two}, which amounts to finding the coefficients $c_n$. For this purpose, we need the concept of matrix inversions and a series of preliminary lemmas. \begin{dy}(cf.\cite[Chapters 2 and 3]{riodan} or \cite[Definition 3.1.1]{egobook})\label{ddd000} A pair of infinite lower-triangular matrices $A=(A_{n,k})$ and $B=(B_{n,k})$ is said to be mutually inverse if and only if for any integers $n, k\geq 0,$ \begin{align} \sum_{i=k}^n\,A_{n,i}B_{i,k}=\sum_{i=k}^n\,B_{n,i}A_{i,k} =\left\{ \begin{array}{ll} 0, & n\neq k; \\
1, & n=k, \end{array} \right. \end{align} where $A_{n,k}=B_{n,k}=0$ if $n<k.$ As usual, we also say that $A$ and $B$ are invertible and write $A^{-1}$ for $B$. \end{dy} Consider now a particular matrix $A=(A_{n,k})$ with the entries $A_{n,k}$ given by \begin{align} z^k\frac{\poq{az}{k}}{\poq{bz}{k}}=\sum_{n=k}^\infty\,A_{n,k}z^n.\label{eq1-13} \end{align} It is easy to check that $A=(A_{n,k})$ is invertible. In what follows, let us assume its inverse $$A^{-1}=(B_{n,k}(a,b)).$$ As such, we see that \eqref{eq1-13} is equivalent to \begin{align} z^k=\sum_{n=k}^\infty B_{n,k}(a,b)z^n\frac{\poq{az}{n}}{\poq{bz}{n}}.\label{18} \end{align}
Next, we shall focus on two kinds of generating functions of the entries $B_{n,k}(a,b)$ of the matrix $A^{-1}$. \begin{yl}\label{ddd} Let $B_{n,k}(a,b)$ be the same as above.
Then we have \begin{align}B_{n,k+1}(a,b)+(b-a)\sum_{i=k+2}^{n}b^{i-k-2} B_{n,i}(a,b)=q^{n-k-1}B_{n-1,k}(a,b).\label{impor-1-added} \end{align} \end{yl} \noindent {\it Proof.} First, replacing $z$ by $zq$ in \eqref{18}, we have \begin{align*} (zq)^k=\sum_{n=k}^\infty B_{n,k}(a,b)(zq)^n\frac{\poq{azq}{n}}{\poq{bzq}{n}}. \end{align*} Multiplying both sides by $z(1-az)/(1-bz)$ and shifting $n$ to $n-1$, we obtain \begin{align} q^k z^{k+1}\bigg(1+(b-a)z\sum_{i= 0}^{\infty}b^{i}z^{i}\bigg)=\sum_{n=k+1}^\infty B_{n-1,k}(a,b)q^{n-1}z^n\frac{\poq{az}{n}}{\poq{bz}{n}}.\label{1888} \end{align} In view of the definition \eqref{18}, we find that \eqref{1888} can be recast as \begin{align} &q^k\sum_{n=k+1}^\infty B_{n,k+1}(a,b)z^n\frac{\poq{az}{n}}{\poq{bz}{n}}\nonumber\\ &+(b-a)q^k\sum_{i= 0}^{\infty}b^{i}\sum_{n=k+i+2}^\infty B_{n,k+i+2}(a,b)z^n\frac{\poq{az}{n}}{\poq{bz}{n}} \nonumber\\ &=\sum_{n=k+1}^\infty B_{n-1,k}(a,b)q^{n-1}z^n\frac{\poq{az}{n}}{\poq{bz}{n}}.\label{18888} \end{align} Thus, by the uniqueness of the coefficients with respect to the basis $\{z^n\poq{az}{n}/\poq{bz}{n}\}_{n=0}^\infty$, it holds \begin{align*} q^k B_{n,k+1}(a,b)+(b-a)q^k\sum_{i=0}^{n-k-2}b^{i} B_{n,k+i+2}(a,b)=q^{n-1}B_{n-1,k}(a,b). \end{align*} After slight simplification, we obtain \begin{align*} B_{n,k+1}(a,b)+(b-a)\sum_{i=k+2}^{n}b^{i-k-2} B_{n,i}(a,b)=q^{n-k-1} B_{n-1,k}(a,b). \end{align*} Hence \eqref{impor-1-added} follows.
\rule{4pt}{7pt}
By use of Lemma \ref{ddd}, it is easy to set up a bivariate generating function of $\{B_{n,k}(a,b)\}_{n\geq k\geq 0}$. \begin{yl} \label{bgf} Let $B_{n,k}(a,b)$ be defined by \eqref{18}. Then we have \begin{align} G(y,z)=\sum_{n=0}^\infty\frac{\poq{b/y}{n}}{\poq{a/y}{n}}(yz)^n+\sum_{n=0}^\infty (bzq^n-aG_1(zq^n))\frac{\poq{b/y}{n}}{\poq{a/y}{n+1}}(yz)^n,\label{impor-1} \end{align} where \begin{align} G(y,z)&:=\sum_{k=0}^\infty G_k(z)y^k,\label{impor-2}\\ G_k(z)&:=\sum_{n=k}^\infty B_{n,k}(a,b)z^n.\label{impor-3} \end{align} \end{yl} \noindent {\it Proof.} It suffices to multiply both sides of \eqref{impor-1-added} by $b$. Then
we get \begin{align}bB_{n,k+1}(a,b)+(b-a)\sum_{i=k+2}^{n}b^{i-k-1} B_{n,i}(a,b)= bq^{n-k-1}B_{n-1,k}(a,b).\label{sansan-5} \end{align} Shifting $k$ to $k-1$ in \eqref{impor-1-added} gives rise to \begin{align} B_{n,k}(a,b)+(b-a)\sum_{i=k+1}^{n}b^{i-k-1} B_{n,i}(a,b)= q^{n-k}B_{n-1,k-1}(a,b).\label{sansan-6} \end{align} Subtracting \eqref{sansan-5} from \eqref{sansan-6}, we arrive at \begin{align} B_{n,k}(a,b)-aB_{n,k+1}(a,b)=q^{n-k}B_{n-1,k-1}(a,b)-bq^{n-k-1}B_{n-1,k}(a,b).\label{sansan-007} \end{align}
After multiplying \eqref{sansan-007} by $z^{n}$ and summing over $n$ for $n\geq k$, then we have \begin{align*} \sum_{n=k}^\infty B_{n,k}(a,b)z^{n}&-a\sum_{n=k}^\infty B_{n,k+1}(a,b)z^{n}\\ &=zq^{1-k}\sum_{n=k}^\infty B_{n-1,k-1}(a,b)(qz)^{n-1}-bzq^{-k}\sum_{n=k}^\infty B_{n-1,k}(a,b)(qz)^{n-1}. \end{align*} In terms of $G_k(z)$ defined by \eqref{impor-3}, this relation can be expressed as \begin{align} G_k(z)-aG_{k+1}(z)=zq^{1-k}G_{k-1}(qz)-bzq^{-k}G_k(qz).\label{sansan-77777} \end{align} Then, by multiplying $y^{k+1}$ and then summing over $k$ for $k\geq 1$ on both sides of \eqref{sansan-77777}, we further obtain \begin{align*} (y-a)G(y,z)+(a-y)G_0(z)+ayG_1(z)=yz(y-b)G(y/q,qz)+byzG_0(qz), \end{align*} where $G(y,z)$ is given by \eqref{impor-2}. Observe that $G_0(z)=1$. Then \begin{align*} (y-a)G(y,z)-yz(y-b)G(y/q,qz)=y-a+byz-ayG_1(z), \end{align*} namely, \begin{align} G(y,z)-yz\frac{1-b/y}{1-a/y}G(y/q,zq)=d(y,z),\label{123} \end{align} where, for clarity, we define \[d(y,z):=1+\frac{y}{y-a}(bz-aG_1(z)).\] Setting $(y,z)\to(y/q^n,zq^n)$ in \eqref{123}, we obtain its equivalent version as below \begin{align} X(n)-yz\frac{1-bq^n/y}{1-aq^n/y}X(n+1)=d(y/q^n,zq^n),\label{123-123} \end{align} where \[X(n):=G(y/q^n,zq^n).\] Iterating \eqref{123-123} $m$ times, we find \begin{align*} X(n)&=d(y/q^n,zq^n)+yz\frac{1-bq^n/y}{1-aq^n/y}X(n+1)\\ &=d(y/q^n,zq^n)+yz\frac{1-bq^n/y}{1-aq^n/y}d(y/q^{n+1},zq^{n+1})\\ &\qquad+(yz)^2\frac{(1-bq^n/y)(1-bq^{n+1}/y)}{(1-aq^n/y)(1-aq^{n+1}/y)}X(n+2)\\ &=\cdots\\ &=\sum_{k=0}^{m-1} d(y/q^{n+k},zq^{n+k})\frac{\poq{bq^n/y}{k}}{\poq{aq^n/y}{k}}(yz)^k+\frac{\poq{bq^n/y}{m}}{\poq{aq^n/y}{m}}(yz)^mX(n+m). 
\end{align*} Regarding the solution of this recurrence relation, we may guess and then show by induction on $m$ (set $n=0$) that \begin{align*} G(y,z)&=\sum_{k=0}^\infty d(y/q^k,zq^k)\frac{\poq{b/y}{k}}{\poq{a/y}{k}}(yz)^k\\ &=\sum_{k=0}^\infty\frac{\poq{b/y}{k}}{\poq{a/y}{k}}(yz)^k+\sum_{k=0}^\infty \frac{bzq^k-aG_1(zq^k)}{1-aq^k/y}\frac{\poq{b/y}{k}}{\poq{a/y}{k}}(yz)^k\\ &=\sum_{k=0}^\infty\frac{\poq{b/y}{k}}{\poq{a/y}{k}}(yz)^k+\sum_{k=0}^\infty (bzq^k-aG_1(zq^k))\frac{\poq{b/y}{k}}{\poq{a/y}{k+1}}(yz)^k. \end{align*} So the lemma is proved.
\rule{4pt}{7pt}
There also exists a finite univariate generating function of $\{B_{n,k}(a,b)\}_{k=0}^n$. \begin{tl}\label{eee} Let $B_{n,k}(a,b)$ be defined by \eqref{18}. Then for integer $n\geq 1$, we have \begin{align}\sum_{k=0}^nB_{n,k}(a,b)y^k= \frac{\poq{b/y}{n-1}}{\poq{a/y}{n}}y^{n}-a\sum_{k=0}^{n-1} B_{n-k,1}(a,b)q^{(n-k)k} \frac{\poq{b/y}{k}}{\poq{a/y}{k+1}}y^k.\label{fff} \end{align} \end{tl} \noindent {\it Proof.} It is an immediate consequence of Lemma \ref{bgf}. To be precise, by Lemma \ref{bgf}, we see \begin{align*} G(y,z)&=\sum_{n\geq k\geq 0}B_{n,k}(a,b)y^kz^n\\ &=\sum_{n=0}^\infty\frac{\poq{b/y}{n}}{\poq{a/y}{n}}(yz)^n+\sum_{n=0}^\infty (bzq^n-aG_1(zq^n))\frac{\poq{b/y}{n}}{\poq{a/y}{n+1}}(yz)^n. \end{align*} By equating the coefficients of $z^n$ on both sides, it is easy to calculate that for $n\geq 1$, \begin{align*} \sum_{k=0}^nB_{n,k}(a,b)y^k&=\frac{\poq{b/y}{n}}{\poq{a/y}{n}}y^n+b\frac{\poq{b/y}{n-1}}{\poq{a/y}{n}}(qy)^{n-1}\\ &-a\sum_{k=0}^n B_{n-k,1}(a,b)q^{(n-k)k}\frac{\poq{b/y}{k}}{\poq{a/y}{k+1}}y^k\\ &=\frac{\poq{b/y}{n-1}}{\poq{a/y}{n}}y^{n}-a\sum_{k=0}^{n-1}B_{n-k,1}(a,b)q^{(n-k)k} \frac{\poq{b/y}{k}}{\poq{a/y}{k+1}}y^k. \end{align*} The corollary is thus proved.
\rule{4pt}{7pt}
\begin{remark} Evidently, the left-hand side of \eqref{fff} is a polynomial in $y$ while the right-hand side does not appear to be one. In fact, the coefficients $B_{n-k,1}(a,b)$ given by \eqref{180}, i.e., $$ z=\sum_{n=1}^\infty B_{n,1}(a,b) z^n\frac{\poq{az}{n}}{\poq{bz}{n}}, $$ just satisfy $$S_n(aq^k)=0\,\,\,(0\leq k\leq n-1),$$ where $S_n(y)$ is given by \begin{align} \frac{S_n(y)}{\prod_{k=0}^{n-1}(y-aq^k)}:=\frac{\poq{b/y}{n-1}}{\poq{a/y}{n}}y^{n}-a\sum_{k=0}^{n-1} B_{n-k,1}(a,b)q^{(n-k)k}\frac{\poq{b/y}{k}}{\poq{a/y}{k+1}}y^k. \end{align} This fact guarantees that the right-hand side of \eqref{fff} is really a polynomial in $y$. \end{remark} Corollary \ref{eee} leads us to a general matrix inversion, which will play a crucial role in our main result, i.e., Theorem \ref{analogue-two}. \begin{dl}[Matrix inversion]\label{matrixinversion} Let $A=(A_{n,k})$ be the infinite lower-triangular matrix with the entries \begin{align} A_{n,k}=\boldsymbol\lbrack z^{n-k}\boldsymbol\rbrack\bigg\{\frac{\poq{az}{k}}{\poq{bz}{k}}\bigg\}\label{eq1-13-inverse} \end{align} and assume $A^{-1}=(B_{n,k}(a,b))$. Then \begin{align}B_{n,k}(a,b)&=\boldsymbol\lbrack z^{n-k}\boldsymbol\rbrack\bigg\{ \frac{\poq{bz}{n-1}}{\poq{az}{n}}\bigg\}-a\sum_{i=k}^{n-1}B_{n-i,1}(a,b)q^{(n-i)i} \boldsymbol\lbrack z^{i-k}\boldsymbol\rbrack\bigg\{\frac{\poq{bz}{i}}{\poq{az}{i+1}}\bigg\}.\label{ssss} \end{align} \end{dl} \noindent {\it Proof.} It is clear that \eqref{ssss} is valid for $n=k=0$ or $k=0$. Thus we only need to show \eqref{ssss} for $n\geq 1$. To that end, we first set $y\to 1/t$ in \eqref{fff} and then multiply both sides by $t^n$.
We obtain \begin{align}\sum_{k=0}^nB_{n,k}(a,b)t^{n-k}= \frac{\poq{bt}{n-1}}{\poq{at}{n}}-a\sum_{k=0}^{n-1}B_{n-k,1}(a,b) \frac{\poq{bt}{k}}{\poq{at}{k+1}}q^{(n-k)k}t^{n-k}.\label{hhh} \end{align} A comparison of the coefficients of $t^{n-k}$ yields \begin{align*}B_{n,k}(a,b)&=\boldsymbol\lbrack t^{n-k}\boldsymbol\rbrack\bigg\{ \frac{\poq{bt}{n-1}}{\poq{at}{n}}\bigg\}-a\sum_{i=0}^{n-1}B_{n-i,1}(a,b)q^{(n-i)i} \boldsymbol\lbrack t^{n-k}\boldsymbol\rbrack\bigg\{\frac{\poq{bt}{i}}{\poq{at}{i+1}}t^{n-i}\bigg\}\\ &=\boldsymbol\lbrack t^{n-k}\boldsymbol\rbrack\bigg\{ \frac{\poq{bt}{n-1}}{\poq{at}{n}}\bigg\}-a\sum_{i=k}^{n-1}B_{n-i,1}(a,b)q^{(n-i)i} \boldsymbol\lbrack t^{i-k}\boldsymbol\rbrack\bigg\{\frac{\poq{bt}{i}}{\poq{at}{i+1}}\bigg\}. \end{align*} Thus \eqref{ssss} is confirmed.
\rule{4pt}{7pt}
As byproducts of our analysis, we find two interesting properties for $\{B_{n,k}(a,b)\}_{n\geq k\geq 0}$ as follows. \begin{xinzhi} Let $B_{n,k}(a,b)$ be given by \eqref{18}. Then for integer $n\geq 1$ and $t\in \mathbb{C}$, we have \begin{align} B_{n,k}(at,bt)&=B_{n,k}(a,b)t^{n-k},\label{maxinrong} \\ \boldsymbol\lbrack z^{n}\boldsymbol\rbrack\bigg\{ \frac{\poq{bz}{n-1}}{\poq{az}{n}}\bigg\}&=a\sum_{i=0}^{n-1}B_{n-i,1}(a,b)q^{(n-i)i} \boldsymbol\lbrack z^{i}\boldsymbol\rbrack\bigg\{\frac{\poq{bz}{i}}{\poq{az}{i+1}}\bigg\}.\label{maxinrong-1}\end{align} \end{xinzhi} \noindent {\it Proof.} To establish \eqref{maxinrong}, it suffices to take $(a,b)\to (at,bt)$ in \eqref{18}. Then it follows that \begin{align*} z^k=\sum_{n=k}^\infty\,B_{n,k}(at,bt)z^n\frac{\poq{atz}{n}}{\poq{btz}{n}}. \end{align*} On the other hand, on setting $z\to t\,z$ in \eqref{18}, we have \begin{align*} z^k=\sum_{n=k}^\infty\,B_{n,k}(a,b)t^{n-k}z^n\frac{\poq{atz}{n}}{\poq{btz}{n}}. \end{align*} By the uniqueness of series expansion, we obtain \eqref{maxinrong}. Identity \eqref{maxinrong-1} is the special case $k=0$ of \eqref{ssss}, noting that $B_{n,0}(a,b)=0$ for $n\geq 1$.
\rule{4pt}{7pt}
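The homogeneity \eqref{maxinrong} is easy to observe numerically. The sketch below (our own illustration, with arbitrary parameter values) computes $A^{-1}$ for a pair $(a,b)$ and for the rescaled pair $(at,bt)$ by the same truncated-series approach and compares entries.

```python
# Illustrative numerical check of B_{n,k}(at,bt) = B_{n,k}(a,b) t^{n-k}.
N = 7
q, t = 0.5, 1.3   # arbitrary test values

def inverse_matrix(a, b):
    # B = A^{-1} for the matrix A[n][k] = [z^{n-k}] (az;q)_k/(bz;q)_k
    def mul(f, g):
        return [sum(f[i] * g[n - i] for i in range(n + 1)) for n in range(N)]
    def inv(f):
        g = [1.0] + [0.0] * (N - 1)
        for n in range(1, N):
            g[n] = -sum(f[i] * g[n - i] for i in range(1, n + 1))
        return g
    def poch(x, k):
        f = [1.0] + [0.0] * (N - 1)
        for i in range(k):
            f = [f[n] - x * q**i * (f[n - 1] if n else 0.0) for n in range(N)]
        return f
    A = [[0.0] * N for _ in range(N)]
    for k in range(N):
        r = mul(poch(a, k), inv(poch(b, k)))
        for n in range(k, N):
            A[n][k] = r[n - k]
    B = [[0.0] * N for _ in range(N)]
    for k in range(N):
        B[k][k] = 1.0
        for n in range(k + 1, N):
            B[n][k] = -sum(A[n][i] * B[i][k] for i in range(k, n))
    return B

B1 = inverse_matrix(0.3, 0.7)
B2 = inverse_matrix(0.3 * t, 0.7 * t)
```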
After these preliminaries we are prepared to show Theorem \ref{analogue-two}.
\noindent {\it Proof.} The existence of \eqref{expan-two-1} is evident, because $$
\left\{z^n\frac{\poq{az}{n}}{\poq{bz}{n}}\,\bigg|\,n=0,1,2,\ldots\right\}$$
forms a basis of $\mathbb{C}[[z]]$.
Thus it only remains to evaluate the coefficients $c_n$ in \eqref{expan-two-1}. To do this, by Theorem \ref{matrixinversion}, we have \begin{align*} c_n&=\sum_{k=0}^n \boldsymbol\lbrack z^{k}\boldsymbol\rbrack\left\{F(z)\right\} B_{n,k}(a,b)\\ &=\sum_{k=0}^n\boldsymbol\lbrack z^{k}\boldsymbol\rbrack\left\{F(z)\right\}\boldsymbol\lbrack z^{n-k}\boldsymbol\rbrack\bigg\{ \frac{\poq{bz}{n-1}}{\poq{az}{n}}\bigg\}\\ &-a\sum_{i=0}^{n-1}B_{n-i,1}(a,b)q^{(n-i)i} \sum_{k=0}^i\boldsymbol\lbrack z^{k}\boldsymbol\rbrack\{F(z)\}[z^{i-k}]\bigg\{\frac{\poq{bz}{i}}{\poq{az}{i+1}}\bigg\}\\ &=\boldsymbol\lbrack z^{n}\boldsymbol\rbrack\bigg\{ F(z)\frac{\poq{bz}{n-1}}{\poq{az}{n}}\bigg\}-a\sum_{i=0}^{n-1}B_{n-i,1}(a,b)q^{(n-i)i}\boldsymbol\lbrack z^{i}\boldsymbol\rbrack\bigg\{ F(z)\frac{\poq{bz}{i}}{\poq{az}{i+1}}\bigg\}. \end{align*} The conclusion is proved.
\rule{4pt}{7pt}
\begin{remark} It is worth mentioning that in \cite{garsia-1} A.M. Garsia and J. Remmel set up a $q$-Lagrange inversion formula, which asserts that for any formal power series $F(z)=\sum_{n=1}^{\infty}F_nz^n$ and $f(z)=\sum_{n=1}^{\infty}f_nz^n$ with $F_1f_1\neq 0$, \begin{eqnarray} \sum_{n=1}^\infty f_nF(z)F(zq)\cdots F(zq^{n-1})=z\label{1} \end{eqnarray} holds if and only if \begin{eqnarray} \sum_{n=1}^\infty F_nf(z)f(z/q)\cdots f(z/q^{n-1})=z.\label{2} \end{eqnarray} However, to the author's knowledge, it is still hard to find out explicit expressions of $f_n:=B_{n,1}(a,b)$ from \eqref{1} even if $F(z)=z(1-az)/(1-bz)$. \end{remark} In the following, we shall examine a few specific formal expansion formulas covered by Theorem \ref{analogue-two}. As a first consequence, when $a=0$ we recover Carlitz's $q$-expansion formula {\cite[p. 206, Eq. (1.11)]{carliz}}. \begin{tl}For any $F(z)\in \mathbb{C}[[z]]$, we have
\begin{subequations}\label{expan-three} \begin{equation} F(z)=\sum_{n=0}^{\infty}\frac{c_nz^{n}}{\poq{bz}{n}},\label{expan-three-1} \end{equation} where \begin{equation} c_n=\boldsymbol\lbrack z^{n}\boldsymbol\rbrack\big\{F(z)\poq{bz}{n-1}\big\}.\label{expan-three-2} \end{equation} \end{subequations} \end{tl}
We remark that Carlitz's $q$-expansion formula is a useful $q$-analogue of the Lagrange--B\"{u}rmann inversion formula. The reader may consult the survey \cite{11} of D. Stanton concerning this topic.
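The expansion \eqref{expan-three-1}--\eqref{expan-three-2} can be tested numerically. The sketch below is our own illustration, not from Carlitz's paper: it takes $F(z)=1/(1-z)$ (all coefficients $1$) with arbitrary values of $b$ and $q$, computes $c_n=[z^n]\{F(z)\poq{bz}{n-1}\}$ from truncated series, and reconstructs $F(z)$ from $\sum_n c_n z^n/\poq{bz}{n}$.

```python
# Illustrative numerical check of Carlitz's q-expansion formula for F(z) = 1/(1-z).
N = 12
b, q = 0.6, 0.4   # arbitrary test values

def mul(f, g):
    return [sum(f[i] * g[n - i] for i in range(n + 1)) for n in range(N)]

def inv(f):
    g = [1.0] + [0.0] * (N - 1)
    for n in range(1, N):
        g[n] = -sum(f[i] * g[n - i] for i in range(1, n + 1))
    return g

def poch(x, k):
    # coefficients of (xz;q)_k; for k = -1 only the constant term 1 matters here
    f = [1.0] + [0.0] * (N - 1)
    for i in range(max(k, 0)):
        f = [f[n] - x * q**i * (f[n - 1] if n else 0.0) for n in range(N)]
    return f

F = [1.0] * N                                   # F(z) = 1/(1-z)
c = [mul(F, poch(b, n - 1))[n] for n in range(N)]   # c_n = [z^n]{F(z)(bz;q)_{n-1}}

recon = [0.0] * N                               # sum_n c_n z^n / (bz;q)_n
for n in range(N):
    s = inv(poch(b, n))
    for m in range(n, N):
        recon[m] += c[n] * s[m - n]
```

For this $F$, one finds by hand $c_1=1$ and $c_2=1-b$, and the reconstructed series agrees with $F$ to the truncation order.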
A second interesting consequence occurs when $b=0$. \begin{tl} Let $B_{n,1}(a,0)$ be given by \eqref{180}. Then \begin{subequations}\label{expan-two-11} \begin{equation} F(z)=\sum_{n=0}^{\infty}c_nz^{n}\poq{az}{n},\label{expan-two-22} \end{equation} where the coefficients \begin{eqnarray} c_n=\boldsymbol\lbrack z^{n}\boldsymbol\rbrack\bigg\{\frac{F(z)}{\poq{az}{n}}\bigg\} -a\sum_{k=0}^{n-1}B_{n-k,1}(a,0)q^{(n-k)k}\boldsymbol\lbrack z^{k}\boldsymbol\rbrack\bigg\{\frac{F(z)}{\poq{az}{k+1}}\bigg\}. \end{eqnarray} \end{subequations} \end{tl}
As a third consequence, the special case $b=aq$ leads us to \begin{tl}\label{coro310} Let $F(z)=\sum_{n=0}^{\infty}a_nz^n$ and let $F_k(z)=\sum_{i=0}^ka_iz^i$ be the $k$-th truncation of $F(z)$. Suppose that
\begin{subequations}\label{expan-four} \begin{equation} \frac{F(z)}{1-az}=\sum_{n=0}^{\infty}\frac{c_nz^{n}}{1-azq^n}.\label{expan-four-1-0} \end{equation} Then $c_0=a_0$ and, for $n\geq1$, \begin{equation} c_n=\sum_{k=0}^{n-1}g_{n-k}(q)q^{(n-k)k}\boldsymbol\lbrack z^{n}\boldsymbol\rbrack\bigg\{ \frac{F(z)-F_k(z)}{1-az}\bigg\},\label{expan-four-2-0} \end{equation} \end{subequations} where $g_n(q)$ are polynomials in $q$ given recursively by \begin{align*}g_n(q)=1 -\sum_{i=1}^{n-1}g_{n-i}(q)q^{(n-i)i}. \end{align*} \end{tl} \noindent {\it Proof.} In this case, we first solve the recurrence relation \eqref{ssss} with $k=1$ for $B_{n,1}(a,aq)$, viz. \begin{align*}B_{n,1}(a,aq)=a^{n-1} -\sum_{i=1}^{n-1}B_{n-i,1}(a,aq)q^{(n-i)i} a^{i}. \end{align*} The solution is recursively given by \begin{align} \left\{
\begin{array}{ll}
&B_{n,1}(a,aq)=g_n(q)a^{n-1},\\ &\\ &g_n(q)=\displaystyle1-\sum_{i=1}^{n-1}g_{n-i}(q)q^{(n-i)i}.\label{crucial}
\end{array} \right. \end{align}
By virtue of \eqref{crucial}, we are now able to calculate $c_n$. To do this, by Theorem \ref{analogue-two} we have \begin{align*} c_n&=\boldsymbol\lbrack z^{n}\boldsymbol\rbrack\bigg\{ \frac{F(z)}{1-az}\bigg\}-\sum_{k=0}^{n-1}g_{n-k}(q)q^{(n-k)k}a^{n-k}\boldsymbol\lbrack z^{k}\boldsymbol\rbrack\bigg\{\frac{F(z)}{1-az}\bigg\}\\ &=\boldsymbol\lbrack z^{n}\boldsymbol\rbrack\bigg\{ \frac{F(z)}{1-az}\bigg\}-\sum_{k=0}^{n-1}g_{n-k}(q)q^{(n-k)k}\boldsymbol\lbrack z^{n}\boldsymbol\rbrack\bigg\{\frac{F_k(z)}{1-az}\bigg\}\\ &=\boldsymbol\lbrack z^{n}\boldsymbol\rbrack\bigg\{\sum_{k=0}^{n-1}g_{n-k}(q)q^{(n-k)k}\frac{F(z)-F_k(z)}{1-az}\bigg\}. \end{align*} In the penultimate equality we have used the fact that \begin{align*} a^{n-k}\boldsymbol\lbrack z^{k}\boldsymbol\rbrack\bigg\{\frac{F(z)}{1-az}\bigg\}=\boldsymbol\lbrack z^{n}\boldsymbol\rbrack\bigg\{\frac{F_k(z)}{1-az}\bigg\} \end{align*} and in the last equality, we have invoked \eqref{crucial} again. The conclusion is proved.
\rule{4pt}{7pt}
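The recursion for $g_n(q)$ in \eqref{crucial} is straightforward to implement. The sketch below (our own illustration) represents each $g_n(q)$ as a dictionary mapping exponents to integer coefficients; unwinding the recursion by hand gives $g_1(q)=1$, $g_2(q)=1-q$ and $g_3(q)=1-2q^2+q^3$, which the code reproduces.

```python
# Hypothetical helper (our own, not from the paper): compute the polynomials
# g_n(q) via g_n(q) = 1 - sum_{i=1}^{n-1} g_{n-i}(q) q^{(n-i)i}.
def g_polys(nmax):
    g = {1: {0: 1}}                      # g_1(q) = 1
    for n in range(2, nmax + 1):
        p = {0: 1}
        for i in range(1, n):
            shift = (n - i) * i          # exponent of q^{(n-i)i}
            for e, c in g[n - i].items():
                p[e + shift] = p.get(e + shift, 0) - c
        g[n] = {e: c for e, c in p.items() if c != 0}
    return g

g = g_polys(5)
```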
It is also of interest to note that if $F(z)$ is a polynomial of degree $m+1$, say $$ F(z)=(1-az)\prod_{n=1}^{m}(1-t_nz), $$ so that $F_k(z)=F(z)$ for $k\geq m+1$, then Corollary \ref{coro310} reduces to \begin{tl} With the same notation as in Corollary \ref{coro310}, we have
\begin{subequations} \begin{equation} \prod_{n=1}^{m}(1-t_nz)=\sum_{n=0}^\infty \frac{c_nz^{n}}{1-azq^n},\label{exexpan-two-1-0} \end{equation} where $c_0=1$ and, for $n\geq 1$, \begin{equation} c_n=\sum_{k=0}^{\min\{m,n-1\}} g_{n-k}(q)q^{(n-k)k}\boldsymbol\lbrack z^{n}\boldsymbol\rbrack\bigg\{\prod_{i=1}^{m}(1-t_iz)-\frac{F_k(z)}{1-az}\bigg\}.\label{mistake-wangjin} \end{equation} \end{subequations} \end{tl}
\section{Applications to $q$-series theory} Unlike the preceding section, we now focus our attention on applications of Theorem \ref{analogue-two} to $q$-series theory. In this context, we assume that all results are subject to appropriate convergence conditions, unless otherwise stated.
Let us begin with the proof of Theorem \ref{tlidentity}.
\noindent {\it Proof.} We only need to make use of Theorem \ref{analogue-two} as well as the $q$-binomial theorem \cite[(II.3)]{10} to get \begin{align*} \frac{\poq{az}{\infty}}{\poq{bz}{\infty}}G(z)=\sum_{n=0}^{\infty}c_nz^{n}\frac{\poq{az}{n}}{\poq{bz}{n}} =S_1-aS_2, \end{align*} where \begin{align*}
S_1&:=\sum_{n=0}^{\infty}\sum_{i=0}^{n}\frac{\poq{aq/b}{i}}{\poq{q}{i}}b^iq^{(n-1)i}t_{n-i}z^{n}\frac{\poq{az}{n}}{\poq{bz}{n}},\\
S_2&:=\sum_{n=0}^{\infty}\sum_{k=0}^{n-1}B_{n-k,1}(a,b)q^{(n-k)k}\sum_{i=0}^k \frac{\poq{aq/b}{i}}{\poq{q}{i}}(bq^k)^it_{k-i}z^{n}\frac{\poq{az}{n}}{\poq{bz}{n}}. \end{align*} After a mere series rearrangement, we get \begin{align*}
S_1&=\sum_{i=0}^{\infty}\frac{\poq{aq/b}{i}}{\poq{q}{i}}b^iq^{i^2-i}z^{i}\frac{\poq{az}{i}}{\poq{bz}{i}}\sum_{n= 0}^{\infty}q^{ni}t_{n}z^{n}\frac{\poq{azq^i}{n}}{\poq{bzq^i}{n}}\\
&=\sum_{i= 0}^{\infty}\frac{\poq{aq/b,az}{i}}{\poq{q,bz}{i}}b^iq^{i^2-i}z^{i}\widetilde{G}(zq^i;a,b). \end{align*} Hereafter, as given by \eqref{dy}, \[\widetilde{G}(z;a,b)=\sum_{n=0}^{\infty}t_{n}z^{n}\frac{\poq{az}{n}}{\poq{bz}{n}}.\] In a similar way, it is easily found that \begin{align*}
S_2&=\sum_{i=0}^{\infty}\frac{\poq{aq/b}{i}}{\poq{q}{i}}b^i \sum_{k=i}^{\infty}t_{k-i}z^k\frac{\poq{az}{k}}{\poq{bz}{k}}q^{ki}\Delta_{k}, \end{align*} where \[ \Delta_{k}:=\sum_{n= k+1}^{\infty}B_{n-k,1}(a,b)(zq^{k})^{n-k}\frac{\poq{azq^k}{n-k}}{\poq{bzq^k}{n-k}}=\sum_{n= 1}^{\infty}B_{n,1}(a,b)(zq^{k})^{n}\frac{\poq{azq^k}{n}}{\poq{bzq^k}{n}}= zq^k. \] The last equality is based on \eqref{180}. Therefore, \begin{align*}
S_2&=z\sum_{i=0}^{\infty}\frac{\poq{aq/b,az}{i}}{\poq{q,bz}{i}}(bzq^{i+1})^i \sum_{k=0}^{\infty}t_{k}\frac{\poq{azq^i}{k}}{\poq{bzq^i}{k}}(zq^{i+1})^{k}\\ &=z\sum_{i=0}^{\infty}\frac{\poq{aq/b,az}{i}}{\poq{q,bz}{i}}(bzq^{i+1})^i\widetilde{G}(zq^{i+1};a/q,b/q). \end{align*} Finally, we achieve \begin{align*} \frac{\poq{az}{\infty}}{\poq{bz}{\infty}}G(z)&= \sum_{n=0}^{\infty}\frac{\poq{aq/b,az}{n}}{\poq{q,bz}{n}}(bz)^nq^{n(n-1)}\big(\widetilde{G}(zq^n;a,b)- azq^{2n}\widetilde{G}(zq^{n+1};a/q,b/q)\big). \end{align*} This gives the complete proof of the theorem.
\rule{4pt}{7pt}
With regard to applications of Theorem \ref{tlidentity} to $q$-series, we first need to set up
\begin{tl}\label{tlnew}For integer $r\geq 0$ and $|cz|<1$, it holds \begin{align} &\frac{\poq{azq}{\infty}}{\poq{bz}{\infty}}{_{r+1}\phi_r}\bigg[\genfrac{}{}{0pt}{}{A_1,A_2,\ldots,A_{r+1}}{B_1,B_2,\ldots,B_{r}};q,cz\bigg]\nonumber\\ &=\lim_{x,y\to\infty}\sum_{n=0}^{\infty}\frac{\poq{az,(az)^{1/2}q,-(az)^{1/2}q,aq/b,x,y}{n}}{\poq{q,(az)^{1/2},-(az)^{1/2},bz,azq/x,azq/y}{n}}\bigg(\frac{bz}{xy}\bigg)^n\nonumber\\ &\qquad\quad\times{_{r+3}\phi_{r+2}}\bigg[\genfrac{}{}{0pt}{}{azq^n,azq^{2n+1},A_1,A_2,\ldots,A_{r+1}}{bzq^n,azq^{2n},B_1,B_2,\ldots,B_{r}};q,czq^n\bigg].\label{expan-two-5lz} \end{align} \end{tl} \noindent {\it Proof.} It suffices to set in Theorem \ref{tlidentity} $$G(z)={_{r+1}\phi_r}\bigg[\genfrac{}{}{0pt}{}{A_1,A_2,\ldots,A_{r+1}}{B_1,B_2,\ldots,B_{r}};q,cz\bigg],$$ which means
$$t_k=\frac{\poq{A_1,A_2,\ldots,A_{r+1}}{k}}{\poq{q,B_1,B_2,\ldots,B_{r}}{k}}c^k.$$ In the sequel, it is routine to compute \begin{align*} H_n(z;a,b)&:=\frac{\widetilde{G}(zq^n;a,b)- azq^{2n}\widetilde{G}(zq^{n+1};a/q,b/q)}{1-azq^{2n}}\\ &=\sum_{k=0}^\infty\frac{\poq{azq^n,azq^{2n+1}}{k}}{\poq{bzq^n,azq^{2n}}{k}}(zq^n)^k t_k\\ &={_{r+3}\phi_{r+2}}\bigg[\genfrac{}{}{0pt}{}{azq^n,azq^{2n+1},A_1,A_2,\ldots,A_{r+1}}{bzq^n,azq^{2n},B_1,B_2,\ldots,B_{r}};q,czq^n\bigg]. \end{align*} This reduces \eqref{expan-two-lzlzlz} of Theorem \ref{tlidentity} to \begin{align} &\frac{\poq{azq}{\infty}}{\poq{bz}{\infty}}{_{r+1}\phi_r}\bigg[\genfrac{}{}{0pt}{}{A_1,A_2,\ldots,A_{r+1}}{B_1,B_2,\ldots,B_{r}};q,cz\bigg]\nonumber\\ &\qquad= \sum_{n=0}^{\infty}\frac{\poq{aq/b,az}{n}}{\poq{q,bz}{n}}(bz)^nq^{n(n-1)}\frac{1-azq^{2n}}{1-az}H_{n}(z;a,b).\label{xuyao} \end{align} Finally, using the basic relations \begin{align*} \frac{1-azq^{2n}}{1-az} =\frac{\poq{(az)^{1/2}q,-(az)^{1/2}q}{n}}{\poq{(az)^{1/2},-(az)^{1/2}}{n}} \end{align*} and \begin{align*} \lim_{x,y\to\infty} \frac{\poq{x,y}{n}}{\poq{azq/x,azq/y}{n}}\bigg(\frac{1}{xy}\bigg)^n =q^{n(n-1)}, \end{align*} we derive \eqref{expan-two-5lz} from \eqref{xuyao} directly.
\rule{4pt}{7pt}
The following are two special instances of Theorem \ref{tlidentity}. \begin{lz}The following transformation formulas are valid. \begin{align} \frac{\poq{az}{\infty}}{\poq{bz}{\infty}}&= \sum_{n=0}^{\infty}\frac{\poq{aq/b,az}{n}}{\poq{q,bz}{n}}(bz)^nq^{n(n-1)}(1-azq^{2n}),\label{expan-two-lzlz}\\ \frac{\poq{azq,ABz}{\infty}}{\poq{bz,Bz}{\infty}}&= \sum_{n=0}^{\infty}\frac{\poq{(az)^{1/2}q,-(az)^{1/2}q,aq/b,az}{n}}{\poq{q,(az)^{1/2},-(az)^{1/2},bz}{n}} (bz)^nq^{n(n-1)}\nonumber\\ &\qquad\times{_3\phi_2}\bigg[\genfrac{}{}{0pt}{}{azq^n,azq^{2n+1},A}{bzq^n,azq^{2n}};q,Bzq^n\bigg].\label{expan-two-4lz} \end{align} \end{lz} \noindent {\it Proof.} Identity \eqref{expan-two-lzlz} comes from taking $G(z)=1$ in Theorem \ref{tlidentity}, and \eqref{expan-two-4lz} follows by taking $G(z)=\poq{ABz}{\infty}/\poq{Bz}{\infty}$, i.e., $t_k=B^k\poq{A}{k}/\poq{q}{k}$ in Theorem \ref{tlidentity} or $r=0$ in Corollary \ref{tlnew}.
\rule{4pt}{7pt}
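Identity \eqref{expan-two-lzlz} converges rapidly for $|q|,|z|<1$ and is easy to confirm numerically. The following sketch (our own illustration; the parameter values are arbitrary) truncates the infinite products at 200 factors and the sum at 40 terms.

```python
# Illustrative numerical check of
# (az;q)_oo/(bz;q)_oo = sum_n (aq/b;q)_n (az;q)_n / ((q;q)_n (bz;q)_n)
#                              * (bz)^n q^{n(n-1)} (1 - a z q^{2n}).
a, b, q, z = 0.3, 0.6, 0.4, 0.5   # arbitrary values with |q|, |z| < 1

def poq(x, n=200):
    # (x;q)_n; the default n = 200 approximates the infinite product
    p = 1.0
    for i in range(n):
        p *= 1.0 - x * q**i
    return p

lhs = poq(a * z) / poq(b * z)
rhs = sum(
    poq(a * q / b, n) * poq(a * z, n) / (poq(q, n) * poq(b * z, n))
    * (b * z) ** n * q ** (n * (n - 1)) * (1.0 - a * z * q ** (2 * n))
    for n in range(40)
)
```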
The next conclusion shows how Theorem \ref{tlidentity} can be applied to known transformation formulas for finding new results.
\begin{tl} For $|z|<1$, we have
\begin{align} {}_{2}\phi _{1}\left[\begin{matrix}A,B\\ C\end{matrix} ; q, z\right]=&\sum_{n=0}^{\infty} \frac{\poq{ABq/C,ABz/C}{n}} {\poq{q,z}{n}} z^n q^{n(n-1)}\left(1-\frac{ABzq^{2n}}{C}\right)\nonumber\\ &\times{}_{4}\phi _{3}\left[\begin{matrix}ABzq^n/C,ABzq^{2n+1}/C,C/A,C/B\\ C,zq^n,ABzq^{2n}/C\end{matrix} ; q, \frac{ABzq^n}{C}\right]. \end{align} \end{tl} \noindent {\it Proof.} Performing as above, we choose in Theorem \ref{tlidentity} \begin{align*} G(z)={}_{2}\phi _{1}\left[\begin{matrix}C/A,C/B\\ C\end{matrix} ; q, \frac{ABz}{C}\right],\end{align*} which corresponds to \begin{align*} t_k=\frac{\poq{C/A,C/B}{k}}{\poq{q,C}{k}}\left(\frac{AB}{C}\right)^k. \end{align*}In this case, it is clear that \begin{align*} H_n(z;a,b):&=\frac{\widetilde{G}(zq^n;a,b)- azq^{2n}\widetilde{G}(zq^{n+1};a/q,b/q)}{1-azq^{2n}}\\ &={}_{4}\phi _{3}\left[\begin{matrix}azq^n,azq^{2n+1},C/A,C/B\\ C,bzq^n,azq^{2n}\end{matrix} ; q, \frac{ABzq^n}{C}\right]. \end{align*} As a result, from Theorem \ref{tlidentity} it follows \begin{align*} \frac{\poq{az}{\infty}}{\poq{bz}{\infty}}{}_{2}\phi _{1}\left[\begin{matrix}C/A,C/B\\ C\end{matrix} ; q, \frac{ABz}{C}\right]&= \sum_{n=0}^{\infty}\frac{\poq{aq/b,az}{n}}{\poq{q,bz}{n}}(bz)^nq^{n(n-1)}(1-azq^{2n})H_n(z;a,b). 
\end{align*} In this form, taking $a=AB/C$ and $b=1$, we obtain \begin{align} &\frac{\poq{ABz/C}{\infty}}{\poq{z}{\infty}}{}_{2}\phi _{1}\left[\begin{matrix}C/A,C/B\\ C\end{matrix} ; q, \frac{ABz}{C}\right]\nonumber\\ &=\sum_{n=0}^{\infty}\frac{\poq{ABq/C,ABz/C}{n}}{\poq{q,z}{n}}z^nq^{n(n-1)}(1-ABzq^{2n}/C)H_n(z;AB/C,1).\label{need} \end{align} By combining \eqref{need} with Heine's third transformation \cite[(III.3)]{10} \begin{align*} {}_{2}\phi _{1}\left[\begin{matrix}A,B\\ C\end{matrix} ; q, z\right]=&\frac{\poq{ABz/C}{\infty}}{\poq{z}{\infty}}{}_{2}\phi _{1}\left[\begin{matrix}C/A,C/B\\ C\end{matrix} ; q, \frac{ABz}{C}\right], \end{align*} then we reformulate \eqref{need} in standard notation of $q$-series as \begin{align*} {}_{2}\phi _{1}\left[\begin{matrix}A,B\\ C\end{matrix} ; q, z\right]=&\sum_{n=0}^{\infty} \frac{\poq{ABq/C,ABz/C}{n}} {\poq{q,z}{n}} z^n q^{n(n-1)}\left(1-\frac{ABzq^{2n}}{C}\right)\nonumber\\ &\times{}_{4}\phi _{3}\left[\begin{matrix}ABzq^n/C,ABzq^{2n+1}/C,C/A,C/B\\ C,zq^n,ABzq^{2n}/C\end{matrix} ; q, \frac{ABzq^n}{C}\right]. \end{align*} The conclusion is proved.
\rule{4pt}{7pt}
Perhaps the most interesting case is the following partial theta function identity. It can be derived from
Theorem \ref{tlidentity} with the help of two Coogan-Ono type identities \eqref{id11} and \eqref{id1177}. \begin{tl}[Partial theta function identity]Let $\theta(z;q)$ be the partial theta function given by \[\sum_{n=0}^\infty(-1)^nq^{n(n-1)/2}z^{n}.\] Then \begin{align} &\frac{\poq{zq}{\infty}}{\poq{-zq}{\infty}}+\sum_{n=0}^{\infty}\frac{\poq{-1,z}{n}}{\poq{q,-zq}{n}}(-z)^nq^{n^2+n} \label{expan-two-lzlzlz-new} \\ &\qquad=\sum_{n=0}^{\infty}\frac{\poq{-1,z}{n}}{\poq{q,-zq}{n}}\big(1+q^n+zq^{n}-zq^{2n}\big)(-z)^nq^{n^2}\theta(z^2q^{2n+1};q^2).\nonumber \end{align} \end{tl} \noindent {\it Proof.} Recall that Lemma \ref{ma-id11} gives \begin{align} \sum_{k=0}^\infty z^{k}\frac{\poq{z}{k}}{\poq{-zq}{k}}=(1+z)\sum_{k=0}^\infty(-1)^kz^{2k}q^{k^2}.\label{idnewadded-1} \end{align} Lemma \ref{ma-id1177} can be restated as \begin{align} \sum_{k=0}^\infty\,z^k\frac{\poq{z}{k}}{\poq{-zq}{k}}(1-zq^k)=1+ 2\sum_{k=1}^{\infty}(-1)^kz^{2k}q^{k^2}.\label{idnewadded-0} \end{align} Subtracting \eqref{idnewadded-0} from \eqref{idnewadded-1}, we obtain \begin{align} z\sum_{k=0}^\infty\,(qz)^k\frac{\poq{z}{k}}{\poq{-zq}{k}}&=z+(z-1)\sum_{k=1}^{\infty}(-1)^kz^{2k}q^{k^2},\nonumber\\ \mbox{i.e.,}\,\,\,\sum_{k=0}^\infty\,(qz)^k\frac{\poq{z}{k}}{\poq{-zq}{k}}&=\sum_{k=0}^{\infty}(-1)^kz^{2k}q^{k^2}-\sum_{k=1}^{\infty}(-1)^kz^{2k-1}q^{k^2}. \label{idnewadded-2} \end{align} Using \eqref{idnewadded-1} and \eqref{idnewadded-2}, as well as referring to \eqref{dy} with $t_k=1$, we thus obtain \begin{align} \widetilde{G}(z;1,-q)&=(1+z)\sum_{k=0}^\infty(-1)^kz^{2k}q^{k^2},\\ \widetilde{G}(zq;1/q,-1)&=\sum_{k=0}^{\infty}(-1)^kz^{2k}q^{k^2}-\sum_{k=1}^{\infty}(-1)^kz^{2k-1}q^{k^2}. 
\end{align} Thus it is easy to check that \begin{align*}\displaystyle &\widetilde{G}(zq^n;1,-q)- zq^{2n}\widetilde{G}(zq^{n+1};1/q,-1)\\ &=(1+zq^n)\sum_{k=0}^\infty(-1)^kz^{2k}q^{k^2+2kn}-zq^{2n}\sum_{k=0}^{\infty}(-1)^kz^{2k}q^{k^2+2kn}+q^n\sum_{k=1}^{\infty}(-1)^kz^{2k}q^{k^2+2kn}\\ &=-q^n+(1+q^n+zq^n-zq^{2n} )\sum_{k=0}^{\infty}(-1)^kz^{2k}q^{k^2+2kn}. \end{align*} Note that the summation on the right-hand side can be recast in terms of $\theta(z;q)$. We thus obtain \begin{align*}\displaystyle &\widetilde{G}(zq^n;1,-q)- zq^{2n}\widetilde{G}(zq^{n+1};1/q,-1) =-q^n+(1+q^n+zq^n-zq^{2n} )\theta(z^2q^{2n+1};q^2). \end{align*} This reduces the whole equation \eqref{expan-two-lzlzlz} to \begin{align*} \frac{\poq{zq}{\infty}}{\poq{-zq}{\infty}}&+\sum_{n=0}^{\infty}\frac{\poq{-1,z}{n}}{\poq{q,-zq}{n}}(-z)^nq^{n^2+n} \\ &=\sum_{n=0}^{\infty}\frac{\poq{-1,z}{n}}{\poq{q,-zq}{n}}(-z)^nq^{n^2}\big(1+q^n+zq^{n}-zq^{2n}\big)\theta(z^2q^{2n+1};q^2). \end{align*} Thus Identity \eqref{expan-two-lzlzlz-new} is proved.
\rule{4pt}{7pt}
In the case $z=q^{-m}$, $m\geq 1$, \eqref{expan-two-lzlzlz-new} reduces to a finite summation of $\theta(z;q).$ \begin{lz}For $m\geq 1$, we have \begin{align} &\sum_{n=0}^{m}\frac{\poq{-1}{n}}{\poq{-q^{1-m}}{n}} \bigg[\genfrac{}{}{0pt}{}{m}{n}\bigg]_qq^{3n^2/2+n/2-2nm}\label{qqq}\\ &=\sum_{n=0}^{m}\frac{\poq{-1}{n}}{\poq{-q^{1-m}}{n}} \bigg[\genfrac{}{}{0pt}{}{m}{n}\bigg]_qq^{3n^2/2-n/2-2nm} \big(1+q^n+q^{n-m}-q^{2n-m}\big)\theta(q^{2n-2m+1};q^2),\nonumber \end{align} where $\bigg[\genfrac{}{}{0pt}{}{m}{n}\bigg]_q$ is the usual $q$-binomial coefficient. \end{lz} \noindent {\it Proof.} It suffices to take $z=q^{-m}$ in \eqref{expan-two-lzlzlz-new} and simplify the result by using the facts that for integer $m\geq 1$, $\poq{q^{1-m}}{\infty}=0$ and \begin{align*} \frac{\poq{q^{-m}}{n}}{\poq{q}{n}}= \bigg[\genfrac{}{}{0pt}{}{m}{n}\bigg]_q(-1)^nq^{n(n-1)/2-mn}. \end{align*}
\rule{4pt}{7pt}
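Since the exponents $3n^2/2\pm n/2=n(3n\pm 1)/2$ in \eqref{qqq} are always integers, the identity is easy to check numerically. The sketch below (our own illustration, at the arbitrary value $q=0.3$) verifies \eqref{qqq} for $m=1,2,3$, truncating the partial theta function at 60 terms; for $m=1$ both sides equal $2$.

```python
# Illustrative numerical check of the finite partial theta identity above.
q = 0.3

def poq(x, n):
    # (x;q)_n = prod_{i=0}^{n-1} (1 - x q^i)
    p = 1.0
    for i in range(n):
        p *= 1.0 - x * q**i
    return p

def qbin(m, n):
    # q-binomial coefficient [m, n]_q
    r = 1.0
    for i in range(n):
        r *= (1.0 - q ** (m - i)) / (1.0 - q ** (i + 1))
    return r

def theta(z, qq, K=60):
    # partial theta function  sum_{k>=0} (-1)^k qq^{k(k-1)/2} z^k
    return sum((-1) ** k * qq ** (k * (k - 1) // 2) * z ** k for k in range(K))

results = []
for m in (1, 2, 3):
    lhs = sum(poq(-1.0, n) / poq(-q ** (1 - m), n) * qbin(m, n)
              * q ** (n * (3 * n + 1) // 2 - 2 * n * m) for n in range(m + 1))
    rhs = sum(poq(-1.0, n) / poq(-q ** (1 - m), n) * qbin(m, n)
              * q ** (n * (3 * n - 1) // 2 - 2 * n * m)
              * (1.0 + q ** n + q ** (n - m) - q ** (2 * n - m))
              * theta(q ** (2 * n - 2 * m + 1), q * q) for n in range(m + 1))
    results.append((lhs, rhs))
```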
It would be natural to expect that Theorem \ref{analogue-two} can be applied to bilateral $q$-series. The reader is referred to \cite[Eq. (5.1.2)]{10} or \eqref{guiding} for the definition of bilateral $q$-series. As an interesting example, we now set up a coefficient identity of the famous Ramanujan ${}_1\psi_1$ summation formula \cite[(II.29)]{10}.
\begin{tl}Let $B_{n,1}(a,b)$ be given by \eqref{180}. For $|b/a|<|z|<1,$ and integer $n\geq 0$, it holds \begin{align} &\boldsymbol\lbrack z^{n}\boldsymbol\rbrack\bigg\{\frac{(aqz^2,1/az^2;q)_{\infty}} {(z,b/az,bq^nz,1/az;q)_{\infty}\poq{aqz}{n}}\bigg\}\label{313}=\frac{1}{(q,b/a;q)_{\infty}}\\ &\quad+aq\sum_{k=0}^{n-1}B_{n-k,1}(aq,bq)q^{(n-k)k}\boldsymbol\lbrack z^{k}\boldsymbol\rbrack\bigg\{\frac{(aqz^2,1/az^2;q)_{\infty}} {(z,b/az,bq^{k+1}z,1/az;q)_{\infty}\poq{aqz}{k+1}}\bigg\}.\nonumber \end{align} \end{tl}
\noindent {\it Proof.} Observe that Ramanujan's $\,_1\psi_1$ sum states that for $|b/a|<|z|<1,$ \begin{eqnarray}\sum_{k=-\infty}^{\infty}\frac{(a;q)_k}{(b;q)_k}z^k= \frac{(az,q/az,q,b/a;q)_{\infty}} {(z,b/az,b,q/a;q)_{\infty}}.
\end{eqnarray} Set $(a,b)\to (aqz,bqz)$. Then we arrive at \begin{eqnarray}\sum_{k=-\infty}^{\infty}\frac{(aqz;q)_k}{(bqz;q)_k}z^k= \frac{(aqz^2,1/az^2,q,b/a;q)_{\infty}} {(z,b/az,bqz,1/az;q)_{\infty}},
\end{eqnarray}
which can be reformulated in the form
\begin{align}
f(z)+g(1/z)=F(z), \label{bilateral}
\end{align}
where we define
\begin{align*} f(z)&:=\sum_{k=0}^{\infty}\frac{(aqz;q)_k}{(bqz;q)_k}z^k,\,\,g(z):=
\sum_{k=1}^{\infty}\frac{(z/b;q)_k}{(z/a;q)_k}\bigg(\frac{bz}{a}\bigg)^k,\\ F(z)&:=\frac{(aqz^2,1/az^2,q,b/a;q)_{\infty}} {(z,b/az,bqz,1/az;q)_{\infty}}.
\end{align*}
Now we apply the expansion formula in Theorem \ref{analogue-two} to $f(z)$. It follows from \eqref{coeformula} that for $n\geq 0$,
\begin{align} 1=\boldsymbol\lbrack z^{n}\boldsymbol\rbrack\bigg\{ f(z)\frac{\poq{bqz}{n-1}}{\poq{aqz}{n}}\bigg\}-aq\sum_{k=0}^{n-1}B_{n-k,1}(aq,bq)q^{(n-k)k}\boldsymbol\lbrack z^{k}\boldsymbol\rbrack\bigg\{f(z)\frac{\poq{bqz}{k}}{\poq{aqz}{k+1}}\bigg\}.\label{coeformula1890} \end{align} Next, observe that \begin{align*} \boldsymbol\lbrack z^{n}\boldsymbol\rbrack\bigg\{f(z)\frac{\poq{bqz}{n-1}}{\poq{aqz}{n}}\bigg\}=\sum_{k=0}^n\boldsymbol\lbrack z^{k}\boldsymbol\rbrack\{f(z)\}\times\boldsymbol\lbrack z^{n-k}\boldsymbol\rbrack\bigg\{ \frac{\poq{bqz}{n-1}}{\poq{aqz}{n}}\bigg\},
\end{align*}
while for $k\geq 0$, due to \eqref{bilateral}, it holds $$\boldsymbol\lbrack z^{k}\boldsymbol\rbrack\{f(z)\}=\boldsymbol\lbrack z^{k}\boldsymbol\rbrack\{F(z)\}.$$ We immediately obtain \begin{align*} \boldsymbol\lbrack z^{k}\boldsymbol\rbrack\bigg\{ f(z)\frac{\poq{bqz}{k-1}}{\poq{aqz}{k}}\bigg\}=\boldsymbol\lbrack z^{k}\boldsymbol\rbrack\bigg\{F(z)\frac{\poq{bqz}{k-1}}{\poq{aqz}{k}}\bigg\}. \end{align*} This simplifies \eqref{coeformula1890} to \begin{align*} 1&=\boldsymbol\lbrack z^{n}\boldsymbol\rbrack\bigg\{ F(z)\frac{\poq{bqz}{n-1}}{\poq{aqz}{n}}\bigg\}-aq\sum_{k=0}^{n-1}B_{n-k,1}(aq,bq)q^{(n-k)k}\boldsymbol\lbrack z^{k}\boldsymbol\rbrack\bigg\{F(z)\frac{\poq{bqz}{k}}{\poq{aqz}{k+1}}\bigg\}\\ &=\boldsymbol\lbrack z^{n}\boldsymbol\rbrack\bigg\{\frac{(aqz^2,1/az^2,q,b/a;q)_{\infty}} {(z,b/az,bq^nz,1/az;q)_{\infty}\poq{aqz}{n}}\bigg\}\\ &-aq\sum_{k=0}^{n-1}B_{n-k,1}(aq,bq)q^{(n-k)k}\boldsymbol\lbrack z^{k}\boldsymbol\rbrack\bigg\{\frac{(aqz^2,1/az^2,q,b/a;q)_{\infty}} {(z,b/az,bq^{k+1}z,1/az;q)_{\infty}\poq{aqz}{k+1}}\bigg\}. \end{align*} It turns out to be \eqref{313}.
\rule{4pt}{7pt}
We conclude our paper with a coefficient identity of the Coogan-Ono identity \eqref{id11}, which can easily be derived by using \eqref{18}. \begin{tl} Let $B_{n,k}(a,b)$ be given by \eqref{18}. Then for any integer $n\geq 0$, we have \begin{align} \sum_{k=0}^{\lfloor n/2\rfloor}B_{n,2k}(1,-q)(-1)^kq^{k^2}+\sum_{k=0}^{\lfloor (n-1)/2\rfloor}B_{n,2k+1}(1,-q)(-1)^kq^{k^2}=1, \end{align} where $\lfloor x\rfloor$ denotes the usual floor function. \end{tl} \noindent {\it Proof.} It suffices to take $a=1,b=-q$ in Theorem \ref{analogue-two} and $$F(z)=(1+z)\sum_{n=0}^{\infty}(-1)^nz^{2n}q^{n^2}:=\sum_{n=0}^{\infty}a_nz^n.$$ This recovers the series expansion \begin{align*} F(z)=\sum_{n=0}^\infty\,z^n\frac{\poq{z}{n}}{\poq{-zq}{n}}. \end{align*} Thus, by \eqref{18} instead of \eqref{coeformula}, we obtain \begin{align*} 1=\sum_{k=0}^nB_{n,k}(1,-q)a_{k}&=\sum_{2k=0}^nB_{n,2k}(1,-q)(-1)^kq^{k^2}\\ &+\sum_{2k+1=1}^nB_{n,2k+1}(1,-q)(-1)^kq^{k^2}. \end{align*}
\rule{4pt}{7pt}
\end{document}
\begin{document}
\title{Crystal Analysis of type $C$ Stanley Symmetric Functions}
\begin{abstract} Combining results of T.K. Lam and J. Stembridge, the type $C$ Stanley symmetric function $F_w^C(\mathbf{x})$, indexed by an element $w$ in the type $C$ Coxeter group, has a nonnegative integer expansion in terms of Schur functions. We provide a crystal theoretic explanation of this fact and give an explicit combinatorial description of the coefficients in the Schur expansion in terms of highest weight crystal elements.
\noindent \textbf{Keywords:} Stanley symmetric functions, crystal bases, Kra\'skiewicz insertion, mixed Haiman insertion, unimodal tableaux, primed tableaux \end{abstract}
\section{Introduction}
Schubert polynomials of type $B$ and type $C$ were independently introduced by Billey and Haiman~\cite{Billey.Haiman.1995} and Fomin and Kirillov~\cite{Fomin.Kirillov.1996}. Stanley symmetric functions~\cite{Stanley.1984} are stable limits of Schubert polynomials, designed to study properties of reduced words of Coxeter group elements. In his Ph.D. thesis, T.K. Lam~\cite{Lam.1995} studied properties of Stanley symmetric functions of types $B$ (and similarly $C$) and $D$. In particular he showed, using Kra\'skiewicz insertion~\cite{Kraskiewicz.1989,Kraskiewicz.1995}, that the type $B$ Stanley symmetric functions have a positive integer expansion in terms of $P$-Schur functions. On the other hand, Stembridge~\cite{Stembridge.1989} proved that the $P$-Schur functions expand positively in terms of Schur functions. Combining these two results, it follows that Stanley symmetric functions of type $B$ (and similarly type $C$) have a positive integer expansion in terms of Schur functions.
Schur functions $s_\lambda(\mathbf{x})$, indexed by partitions $\lambda$, are ubiquitous in combinatorics and representation theory. They are the irreducible characters of polynomial representations of the general linear group and can also be interpreted as characters of type $A$ crystals. In~\cite{Morse.Schilling.2016}, this was exploited to provide a combinatorial interpretation in terms of highest weight crystal elements of the coefficients in the Schur expansion of Stanley symmetric functions in type $A$. In this paper, we carry out a crystal analysis of the Stanley symmetric functions $F_w^C(\mathbf{x})$ of type $C$, indexed by a Coxeter group element $w$. In particular, we use Kra\'skiewicz insertion~\cite{Kraskiewicz.1989,Kraskiewicz.1995} and Haiman's mixed insertion~\cite{Haiman.1989} to find a crystal structure on primed tableaux, which in turn implies a crystal structure $\mathcal{B}_w$ on signed unimodal factorizations of $w$ for which $F^C_w(\mathbf{x})$ is a character. Moreover, we present a type $A$ crystal isomorphism $\Phi \colon \mathcal{B}_w \rightarrow \bigoplus_\lambda \mathcal{B}_{\lambda}^{\oplus g_{w\lambda}}$ for some combinatorially defined nonnegative integer coefficients $g_{w\lambda}$; here $\mathcal{B}_\lambda$ is the type $A$ highest weight crystal of highest weight $\lambda$. This implies the desired decomposition $F^C_w(\mathbf{x}) = \sum_\lambda g_{w\lambda} s_\lambda (\mathbf{x})$ (see Corollary~\ref{corollary.main2}) and similarly for type $B$.
The paper is structured as follows. In Section~\ref{section.background}, we review type $C$ Stanley symmetric functions and type $A$ crystals. In Section~\ref{section.isomorphism} we describe our crystal isomorphism by combining a slight generalization of the Kra\'skiewicz insertion~\cite{Kraskiewicz.1989,Kraskiewicz.1995} and Haiman's mixed insertion~\cite{Haiman.1989}. The main result regarding the crystal structure under Haiman's mixed insertion is stated in Theorem~\ref{theorem.main2}. The combinatorial interpretation of the coefficients $g_{w\lambda}$ is given in Corollary~\ref{corollary.main2}. In Section~\ref{section.semistandard}, we provide an alternative interpretation of the coefficients $g_{w\lambda}$ in terms of semistandard unimodal tableaux. Appendices~\ref{section.proof main2} and~\ref{section.proof main3} are reserved for the proofs of Theorems~\ref{theorem.main2} and~\ref{theorem.main3}.
\subsection*{Acknowledgments} We thank the anonymous referee for pointing out reference~\cite{Liu.2017} and furthermore the connections between our crystal operators and those obtained by intertwining crystal operators on words with Haiman's symmetrization of shifted mixed insertion~\cite[Section 5]{Haiman.1989} and the conversion map~\cite[Proposition~14]{SW.2001} as outlined in Remark~\ref{remark.doubling}. We thank Toya Hiroshima for pointing out that the definition of the reading word of a primed tableau was misleading in a previous version of this paper.
\section{Background} \label{section.background}
\subsection{Type $C$ Stanley symmetric functions}
The \defn{Coxeter group} $W_C$ of type $C_n$ (or type $B_n$), also known as the hyperoctahedral group or the group of signed permutations, is a finite group generated by $\{s_0, s_1, \ldots, s_{n-1}\}$ subject to the quadratic relations
$s_i^2 = 1$ for all $i \in I = \{0,1,\ldots,n-1\}$, the commutation relations $s_i s_j = s_j s_i$ provided $|i-j|>1$, and the braid relations $s_i s_{i+1} s_i = s_{i+1} s_i s_{i+1}$ for all $i>0$ and $s_0 s_1 s_0 s_1 = s_1 s_0 s_1 s_0$.
It is often convenient to write down an element of a Coxeter group as a sequence of indices of $s_i$ in the product representation of the element. For example, the element $w = s_2 s_1 s_2 s_0 s_1 s_0 s_1$ is represented by the word ${\bf w} = 2120101$. A word of shortest length $\ell$ is referred to as a \defn{reduced word} and $\ell(w):=\ell$ is referred to as the length of $w$. The set of all reduced words of the element $w$ is denoted by $R(w)$.
\begin{example} The set of reduced words for $w = s_2 s_1 s_2 s_0 s_1 s_0$ is given by $$R(w) = \{ 210210, 212010, 121010, 120101, 102101 \}.$$ \end{example}
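As a sanity check, one can verify directly that these five words all represent the same signed permutation, by letting $s_0$ negate the first entry of the one-line notation and $s_i$ swap the entries in positions $i$ and $i+1$. A small Python sketch (not part of the formal development; the fixed composition order is immaterial when merely testing equality of the five products):

```python
from functools import reduce

def apply_gen(state, i):
    """Act by the generator s_i on a signed permutation in one-line notation:
    s_0 negates the first entry, s_i (i >= 1) swaps entries i and i+1."""
    state = list(state)
    if i == 0:
        state[0] = -state[0]
    else:
        state[i - 1], state[i] = state[i], state[i - 1]
    return tuple(state)

def evaluate(word, n=3):
    """Apply the generators of `word` (a string of indices) in a fixed order,
    starting from the identity (1, 2, ..., n)."""
    return reduce(apply_gen, (int(c) for c in word), tuple(range(1, n + 1)))

R_w = ["210210", "212010", "121010", "120101", "102101"]
assert len({evaluate(word) for word in R_w}) == 1   # one and the same element
```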
We say that a reduced word $a_1 a_2 \ldots a_\ell$ is \defn{unimodal} if there exists an index $v$, such that $$a_1 > a_2 > \cdots > a_v < a_{v+1} < \cdots < a_\ell.$$
Consider a reduced word $\textbf{a} = a_1 a_2 \ldots a_{\ell(w)}$ of a Coxeter group element $w$. A \defn{unimodal factorization} of $\textbf{a}$ is a factorization $\mathbf{A} = (a_1 \ldots a_{\ell_1}) (a_{\ell_1+1} \ldots a_{\ell_2}) \cdots (a_{\ell_r + 1} \ldots a_{\ell(w)})$ such that each factor $(a_{\ell_i+1} \ldots a_{\ell_{i+1}})$ is unimodal. Factors can be empty.
For a fixed Coxeter group element $w$, consider all reduced words $R(w)$, and denote the set of all unimodal factorizations for reduced words in $R(w)$ as $U(w)$. Given a factorization $\mathbf{A} \in U(w)$, define the \defn{weight} of a factorization $\mathrm{wt}(\mathbf{A})$ to be the vector consisting of the number of elements in each factor. Denote by $\mathrm{nz}(\mathbf{A})$ the number of non-empty factors of $\mathbf{A}$.
\begin{example} For the factorization $\mathbf{A} = (2102)()(10) \in U(s_2 s_1 s_2 s_0 s_1 s_0)$, we have $\mathrm{wt}(\mathbf{A}) = (4,0,2)$ and $\mathrm{nz}(\mathbf{A}) = 2$. \end{example}
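These definitions are easy to check by machine. The following Python sketch (function names are ours) verifies the example above:

```python
def is_unimodal(word):
    """Check a_1 > ... > a_v < ... < a_l: strictly decreasing, then strictly
    increasing, meeting at the unique minimum."""
    v = word.index(min(word))
    dec, inc = word[:v + 1], word[v:]
    return all(x > y for x, y in zip(dec, dec[1:])) and \
           all(x < y for x, y in zip(inc, inc[1:]))

def wt(A):
    """Weight of a factorization: the number of letters in each factor."""
    return tuple(len(f) for f in A)

def nz(A):
    """Number of non-empty factors."""
    return sum(1 for f in A if f)

A = [[2, 1, 0, 2], [], [1, 0]]          # the factorization (2102)()(10)
assert all(is_unimodal(f) for f in A if f)
assert wt(A) == (4, 0, 2) and nz(A) == 2
```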
Following~\cite{Billey.Haiman.1995, Fomin.Kirillov.1996, Lam.1995}, the \defn{type $C$ Stanley symmetric function} associated to $w\in W_C$ is defined as \begin{equation} \label{equation.StanleyC}
F^C_w(\mathbf{x}) = \sum_{\mathbf{A} \in U(w)} 2^{\mathrm{nz}(\mathbf{A})}
\mathbf{x}^{\mathrm{wt}(\mathbf{A})}. \end{equation} Here $\mathbf{x} = (x_1, x_2, x_3, \ldots)$ and $\mathbf{x}^{\mathbf{v}} = x_1^{v_1} x_2^{v_2} x_3^{v_3} \cdots$. It is not obvious from the definition why the above functions are symmetric. We refer the reader to~\cite{Billey.2014}, where this fact follows easily from an alternative definition.
\defn{Type $B$ Stanley symmetric functions} are also labeled by $w\in W_C$ (as the type $B$ and $C$ Coxeter groups coincide) and differ from $F_w^C(\mathbf{x})$ by an overall factor $2^{-o(w)}$ \[
F_w^B(\mathbf{x}) = 2^{-o(w)} F_w^C(\mathbf{x}), \] where $o(w)$ is the number of zeroes in a reduced word for $w$. Loosely speaking, our combinatorial interpretation in the type $C$ case respects this power of 2 -- that is, we will get a valid combinatorial interpretation in the type $B$ case by dividing by $2^{o(w)}$.
\subsection{Type $A$ crystal of words}
Crystal bases~\cite{kashiwara.1994} play an important role in many areas of mathematics. For example, they make it possible to analyze representation theoretic questions using combinatorial tools. Here we only review the crystal of words in type $A_n$ and refer the reader for more background on crystals to~\cite{Bump.Schilling.2017}.
Consider the set of words $\mathcal{B}_n^h$ of length $h$ in the alphabet $\{1,2,\ldots,n+1\}$. We impose a crystal structure on $\mathcal{B}_n^h$ by defining lowering operators $f_i$ and raising operators $e_i$ for $1\leqslant i \leqslant n$ and a weight function. The weight of $\mathbf{b} \in \mathcal{B}_n^h$ is the tuple $\mathrm{wt}(\mathbf{b}) = (a_1,\ldots, a_{n+1})$, where $a_i$ is the number of letters $i$ in $\mathbf{b}$. The crystal operators $f_i$ and $e_i$ only depend on the letters $i$ and $i+1$ in $\mathbf{b}$. Consider the subword $\mathbf{b}^{\{i,i+1\}}$ of $\mathbf{b}$ consisting only of the letters $i$ and $i+1$. Successively bracket any adjacent pairs $(i+1) i$ and remove these pairs from the word. The resulting word is of the form $i^a (i+1)^b$ with $a,b\geqslant 0$. Then $f_i$ changes this subword within $\mathbf{b}$ to $i^{a-1} (i+1)^{b+1}$ if $a>0$ leaving all other letters unchanged and otherwise annihilates $\mathbf{b}$. The operator $e_i$ changes this subword within $\mathbf{b}$ to $i^{a+1} (i+1)^{b-1}$ if $b>0$ leaving all other letters unchanged and otherwise annihilates $\mathbf{b}$.
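The bracketing procedure above can be sketched in Python (an illustration under our own naming conventions; `pair_letters` returns the positions of the unbracketed letters):

```python
def pair_letters(word, i):
    """Bracket adjacent (i+1, i) pairs in the subword of i's and (i+1)'s.
    Returns (unpaired_i, unpaired_i1): positions of the unbracketed letters,
    which read i^a (i+1)^b in position order."""
    unpaired_i, stack = [], []            # stack: unmatched positions of i+1
    for pos, letter in enumerate(word):
        if letter == i + 1:
            stack.append(pos)
        elif letter == i:
            if stack:
                stack.pop()               # bracket this i with a preceding i+1
            else:
                unpaired_i.append(pos)
    return unpaired_i, stack

def f(word, i):
    """Lowering operator: change the last unbracketed i to i+1, or annihilate."""
    unpaired_i, _ = pair_letters(word, i)
    if not unpaired_i:
        return None
    word = list(word)
    word[unpaired_i[-1]] = i + 1
    return tuple(word)

def e(word, i):
    """Raising operator: change the first unbracketed i+1 to i, or annihilate."""
    _, unpaired_i1 = pair_letters(word, i)
    if not unpaired_i1:
        return None
    word = list(word)
    word[unpaired_i1[0]] = i
    return tuple(word)

assert f((1, 1, 2, 2), 1) == (1, 2, 2, 2)
assert e((1, 1, 2, 2), 1) == (1, 1, 1, 2)
assert f((2, 1), 1) is None and e((2, 1), 1) is None   # fully bracketed
```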
We call an element $\mathbf{b}\in \mathcal{B}_n^h$ \defn{highest weight} if $e_i(\mathbf{b})=\mathbf{0}$ for all $1\leqslant i\leqslant n$ (meaning that all $e_i$ annihilate $\mathbf{b}$).
\begin{theorem} \cite{Kashiwara.Nakashima.1994}
A word $\mathbf{b} = b_1 \ldots b_h \in \mathcal{B}_n^h$ is highest weight if and only if it is a Yamanouchi word.
That is, for any index $k$ with $1 \leqslant k \leqslant h$ the weight of a subword $b_k b_{k+1} \ldots b_h$ is a partition.
\end{theorem}
\begin{example} The word $85744234654333222211111$ is highest weight. \end{example}
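The Yamanouchi criterion is straightforward to test: scanning from the right, the suffix weight must remain a partition after every letter. A Python sketch verifying the example word:

```python
from collections import Counter

def is_yamanouchi(word):
    """Check that the weight of every suffix is a partition, scanning from
    the right; only the pair (letter-1, letter) can fail at each step."""
    counts = Counter()
    for letter in reversed(word):
        counts[letter] += 1
        if letter > 1 and counts[letter] > counts[letter - 1]:
            return False
    return True

assert is_yamanouchi([int(c) for c in "85744234654333222211111"])
assert not is_yamanouchi([2, 1, 2])     # the suffix "2" is not a partition weight
```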
Two crystals $\mathcal{B}$ and $\mathcal{C}$ are said to be \defn{isomorphic} if there exists a bijective map $\Phi \colon \mathcal{B} \rightarrow \mathcal{C}$ that preserves the weight function and commutes with the crystal operators $e_i$ and $f_i$. A \defn{connected component} $X$ of a crystal is a set of elements where for any two $\mathbf{b},\mathbf{c} \in X$ one can reach $\mathbf{c}$ from $\mathbf{b}$ by applying a sequence of $f_i$ and $e_i$.
\begin{theorem} \cite{Kashiwara.Nakashima.1994} Each connected component of $\mathcal{B}_n^h$ has a unique highest weight element. Furthermore, if $\mathbf{b}, \mathbf{c} \in \mathcal{B}_n^h$ are highest weight elements such that $\mathrm{wt}(\mathbf{b}) = \mathrm{wt}(\mathbf{c})$, then the connected components generated by $\mathbf{b}$ and $\mathbf{c}$ are isomorphic. \end{theorem}
We denote a connected component with a highest weight element of highest weight $\lambda$ by $\mathcal{B}_\lambda$. The \defn{character} of the crystal $\mathcal{B}$ is defined to be a polynomial in the variables $\mathbf{x}=(x_1,x_2,\ldots,x_{n+1})$ $$\chi_{\mathcal{B}} (\mathbf{x}) = \sum_{\mathbf{b} \in \mathcal{B}} \mathbf{x}^{\mathrm{wt}(\mathbf{b})}.$$
\begin{theorem}[\cite{Kashiwara.Nakashima.1994}] The character of $\mathcal{B}_{\lambda}$ is equal to the Schur polynomial $s_\lambda (\mathbf{x})$ (or Schur function in the limit $n\to \infty$). \end{theorem}
\section{Crystal isomorphism} \label{section.isomorphism}
In this section, we combine a slight generalization of the Kra\'skiewicz insertion, reviewed in Section~\ref{section.kraskiewicz}, and Haiman's mixed insertion, reviewed in Section~\ref{section.implicit}, to provide an isomorphism of crystals between the crystal of words $\mathcal{B}^h$ and certain sets of primed tableaux. The main result of this section is stated in Theorem~\ref{theorem.main0}, which asserts that the recording tableau under mixed insertion is constant on connected components of $\mathcal{B}^h$.
\subsection{Kra\'skiewicz insertion} \label{section.kraskiewicz}
In this section, we describe the Kra\'skiewicz insertion. To do so, we first need to define the \defn{Edelman--Greene insertion}~\cite{Edelmann.Greene.1987}. It is defined for a word $\mathbf{w} = w_1 \ldots w_\ell$ and a letter $k$ such that the concatenation $w_1 \ldots w_\ell k$ is an $A$-type reduced word. The Edelman--Greene insertion of a letter $k$ into an {\it increasing} word $\mathbf{w} = w_1 \ldots w_\ell$, denoted by $\mathbf{w} \leftsquigarrow k$, is constructed as follows: \begin{enumerate}
\item If $w_\ell < k$, then $\mathbf{w} \leftsquigarrow k = \mathbf{w'},$ where
$\mathbf{w'} = w_1 w_2 \ldots w_\ell\ k$.
\item If $k>0$ and $k\, k+1 = w_i \, w_{i+1}$ for some $1\leqslant i < \ell$, then
$\mathbf{w} \leftsquigarrow k = k+1 \leftsquigarrow\mathbf{w}$.
\item Else let $w_i$ be the leftmost letter in $\mathbf{w}$ such that $w_i>k$. Then
$\mathbf{w} \leftsquigarrow k = w_i \leftsquigarrow \mathbf{w'}$, where
$\mathbf{w'} = w_1 \ldots w_{i-1}\ k\ w_{i+1} \ldots w_\ell$. \end{enumerate} In the cases above, when $\mathbf{w} \leftsquigarrow k = k' \leftsquigarrow\mathbf{w'}$, the symbol $k' \leftsquigarrow\mathbf{w'}$ indicates a word $\mathbf{w'}$ together with a ``bumped'' letter $k'$.
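A minimal Python sketch of this insertion step (the test cases are ours, read off the three rules; note that we apply rule (2) whenever $k$ already occurs in $\mathbf{w}$, with no sign restriction on $k$, since the insertions into negated words in the Kra\'skiewicz algorithm below require this):

```python
def eg_insert(w, k):
    """Edelman-Greene insertion of k into a strictly increasing word w (a list).
    Returns (bumped, new_word); bumped is None when k is simply appended.
    Rule (2) is applied whenever k already occurs in w, regardless of sign."""
    if not w or w[-1] < k:                              # rule (1): append
        return None, w + [k]
    if k in w:                                          # rule (2): k, k+1 adjacent
        return k + 1, list(w)                           # bump k+1, word unchanged
    i = next(j for j, x in enumerate(w) if x > k)       # rule (3): leftmost x > k
    return w[i], w[:i] + [k] + w[i + 1:]

assert eg_insert([1, 3, 4], 2) == (3, [1, 2, 4])        # rule (3)
assert eg_insert([1, 2], 1) == (2, [1, 2])              # rule (2)
assert eg_insert([1, 2], 3) == (None, [1, 2, 3])        # rule (1)
```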
Next we consider a reduced unimodal word $\mathbf{a} = a_1 a_2 \ldots a_\ell$ with $a_1 > a_2 >\cdots > a_v < a_{v+1} < \cdots < a_\ell$. The \defn{Kra\'skiewicz row insertion} \cite{Kraskiewicz.1989,Kraskiewicz.1995} is defined for a unimodal word $\mathbf{a}$ and a letter $k$ such that the concatenation $a_1 a_2 \ldots a_\ell k$ is a $C$-type reduced word. The Kra\'skiewicz row insertion of $k$ into $\mathbf{a}$ (denoted similarly as $\mathbf{a} \leftsquigarrow k$), is performed as follows: \begin{enumerate}
\item If $k=0$ and there is a subword $101$ in $\mathbf{a}$, then $\mathbf{a} \leftsquigarrow 0 =
0 \leftsquigarrow \mathbf{a}$.
\item If $k \neq 0$ or there is no subword $101$ in $\mathbf{a}$, denote the decreasing part $a_1 \ldots a_v$
as $\mathbf{d}$ and the increasing part $a_{v+1} \ldots a_\ell$ as $\mathbf{g}$. Perform the Edelman-Greene
insertion of $k$ into $\mathbf{g}$.
\begin{enumerate}
\item If $a_\ell < k$, then $\mathbf{g} \leftsquigarrow k = a_{v+1} \ldots a_\ell k =: \mathbf{g'}$ and
$\mathbf{a} \leftsquigarrow k = \mathbf{d} \mathbf{g} \leftsquigarrow k = \mathbf{d\ g'} =: \mathbf{a'}$.
\item If there is a bumped letter and $\mathbf{g} \leftsquigarrow k = k' \leftsquigarrow \mathbf{g'}$, negate all
the letters in $\mathbf{d}$ (call the resulting word $-\mathbf{d}$) and perform the Edelman-Greene insertion
$-\mathbf{d} \leftsquigarrow -k'$. Note that there will always be a bumped letter, and so
$-\mathbf{d} \leftsquigarrow -k' = -k'' \leftsquigarrow -\mathbf{d'}$ for some decreasing word $\mathbf{d'}$.
The result of the Kra\'skiewicz insertion is: $\mathbf{a} \leftsquigarrow k = \mathbf{d}[\mathbf{g} \leftsquigarrow k]
= \mathbf{d}[k' \leftsquigarrow \mathbf{g'}] = - [\mathbf{-d} \leftsquigarrow -k']\ \mathbf{g'} =
[k'' \leftsquigarrow \mathbf{d'}]\mathbf{g'} = k'' \leftsquigarrow \mathbf{a'}$, where $\mathbf{a'} := \mathbf{d'g'}$.
\end{enumerate} \end{enumerate}
\begin{example} \begin{equation*}
31012 \leftsquigarrow 0 =0 \leftsquigarrow 31012, \quad 3012 \leftsquigarrow 0 = 0 \leftsquigarrow 3102, \end{equation*} \begin{equation*}
31012 \leftsquigarrow 1 = 1 \leftsquigarrow 32012, \quad 31012 \leftsquigarrow 3 = 310123. \end{equation*} \end{example}
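The row insertion can be implemented directly on top of the Edelman--Greene step; the four displayed examples serve as tests. (A Python sketch under our conventions; in the Edelman--Greene step, rule (2) is applied without a sign restriction, which the insertions into the negated word $-\mathbf{d}$ require.)

```python
def eg_insert(w, k):
    """Edelman-Greene insertion of k into a strictly increasing word;
    returns (bumped, new_word), with bumped None when k is appended."""
    if not w or w[-1] < k:
        return None, w + [k]
    if k in w:
        return k + 1, list(w)
    i = next(j for j, x in enumerate(w) if x > k)
    return w[i], w[:i] + [k] + w[i + 1:]

def k_insert(a, k):
    """Kraskiewicz row insertion of k into a unimodal word a (a list).
    Returns (bumped, new_word); bumped is None when the algorithm terminates."""
    if k == 0 and any(a[i:i + 3] == [1, 0, 1] for i in range(len(a) - 2)):
        return 0, list(a)                               # rule (1)
    v = a.index(min(a))                                 # end of decreasing part d
    d, g = a[:v + 1], a[v + 1:]
    kp, g_new = eg_insert(g, k)                         # insert into increasing part
    if kp is None:
        return None, d + g_new
    kpp, d_neg = eg_insert([-x for x in d], -kp)        # insert -k' into -d
    return -kpp, [-x for x in d_neg] + g_new

# The four examples displayed above:
assert k_insert([3, 1, 0, 1, 2], 0) == (0, [3, 1, 0, 1, 2])
assert k_insert([3, 0, 1, 2], 0) == (0, [3, 1, 0, 2])
assert k_insert([3, 1, 0, 1, 2], 1) == (1, [3, 2, 0, 1, 2])
assert k_insert([3, 1, 0, 1, 2], 3) == (None, [3, 1, 0, 1, 2, 3])
```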
The insertion is constructed to ``commute'' a unimodal word with a letter: If $\mathbf{a} \leftsquigarrow k = k' \leftsquigarrow \mathbf{a'}$, the two elements of the type $C$ Coxeter group corresponding to concatenated words $\mathbf{a}\ k$ and $k' \mathbf{a'}$ are the same.
The type $C$ Stanley symmetric functions~\eqref{equation.StanleyC} are defined in terms of unimodal factorizations. To put the formula on a completely combinatorial footing, we need to treat the powers of $2$ by introducing signed unimodal factorizations. A \defn{signed unimodal factorization} of $w\in W_C$ is a unimodal factorization $\mathbf{A}$ of $w$, in which every non-empty factor is assigned either a $+$ or $-$ sign. Denote the set of all signed unimodal factorizations of $w$ by $U^{\pm} (w)$.
For a signed unimodal factorization $\mathbf{A} \in U^{\pm} (w)$, define $\mathrm{wt}(\mathbf{A})$ to be the vector with $i$-th coordinate equal to the number of letters in the $i$-th factor of $\mathbf{A}$. Notice from~\eqref{equation.StanleyC} that \begin{equation} \label{equation.Upm}
F^C_{w}(\mathbf{x}) = \sum_{\mathbf{A} \in U^{\pm}(w)} \mathbf{x}^{\mathrm{wt}(\mathbf{A})}. \end{equation}
We will use the Kra\'skiewicz insertion to construct a map between signed unimodal factorizations of a Coxeter group element $w$ and pairs of certain types of tableaux $(\mathbf{P},\mathbf{T})$. We define these types of tableaux next.
A \defn{shifted diagram} $\mathcal{S}(\lambda)$ associated to a partition $\lambda$ with distinct parts is the set of boxes in positions $\{(i,j) \mid \ 1\leqslant i\leqslant \ell(\lambda), \ i\leqslant j\leqslant \lambda_i+i-1\}$. Here, we use English notation, where the box $(1,1)$ is always top-left.
Let $X^\circ_n$ be an ordered alphabet of $n$ letters $X^\circ_n = \{0< 1 < 2< \cdots < n-1\}$, and let $X'_n$ be an ordered alphabet of $n$ letters together with their primed counterparts as $X'_n = \{1' < 1 < 2'< 2< \cdots <n' < n\}$.
Let $\lambda$ be a partition with distinct parts. A \defn{unimodal tableau} $\mathbf{P}$ of shape $\lambda$ on $n$ letters is a filling of $\mathcal{S}(\lambda)$ with letters from the alphabet $X^\circ_n$ such that the word $P_i$, obtained by reading the $i$th row from the top of $\mathbf{P}$ from left to right, is a unimodal word, and $P_i$ is the longest unimodal subword in the concatenated word $P_{i+1} P_i$ \cite{Billey.2014} (cf. also decomposition tableaux~\cite{Serrano.2010,Cho.2013}). The \defn{reading word} of a unimodal tableau $\mathbf{P}$ is given by $\pi_{\mathbf{P}} = P_\ell P_{\ell-1} \ldots P_1$. A unimodal tableau is called \textit{reduced} if $\pi_{\mathbf{P}}$ is a type $C$ reduced word corresponding to the Coxeter group element $w_{\mathbf{P}}$. Given a fixed Coxeter group element $w$, denote the set of reduced unimodal tableaux $\mathbf{P}$ of shape $\lambda$ with $w_{\mathbf{P}} = w$ as $\mathcal{UT}_w (\lambda)$.
A \defn{signed primed tableau} $\mathbf{T}$ of shape $\lambda$ on $n$ letters (cf. semistandard $Q$-tableau~\cite{Lam.1995}) is a filling of $\mathcal{S}(\lambda)$ with letters from the alphabet $X'_n$ such that: \begin{enumerate}
\item The entries are weakly increasing along each column and each row of $\mathbf{T}$.
\item Each row contains at most one $i'$ for every $i = 1,\ldots,n$.
\item Each column contains at most one $i$ for every $i = 1,\ldots,n$. \end{enumerate} The reason for using the word ``signed'' in the name is to distinguish the set of primed tableaux above from the ``unsigned'' version described later in the paper.
Denote the set of signed primed tableaux of shape $\lambda$ by $\mathcal{PT^{\pm}} (\lambda)$. Given an element $\mathbf{T} \in \mathcal{PT^{\pm}} (\lambda)$, define the weight of the tableau $\mathrm{wt}(\mathbf{T})$ as the vector with $i$-th coordinate equal to the total number of letters in $\mathbf{T}$ that are either $i$ or $i'$.
\begin{example} $\Bigg(\young(43201,:212,::0),\ \young(11\twop\threep3,:\twop2\threep,::4)\Bigg)$ is a pair consisting of a unimodal tableau and a signed primed tableau both of shape $(5,3,1)$. \end{example}
For a reduced unimodal tableau $\mathbf{P}$ with rows $P_\ell, P_{\ell-1}, \ldots, P_1$, the Kra\'skiewicz insertion of a letter $k$ into tableau $\mathbf{P}$ (denoted again by $\mathbf{P} \leftsquigarrow k$) is performed as follows: \begin{enumerate}
\item Perform Kra\'skiewicz insertion of the letter $k$ into the unimodal word $P_1$. If there is no bumped letter and
$P_1 \leftsquigarrow k = P'_1$, the algorithm terminates and the new tableau $\mathbf{P'}$ consists of rows
$P_\ell, P_{\ell-1}, \ldots, P_2, P'_1$. If there is a bumped letter and $P_1 \leftsquigarrow k = k' \leftsquigarrow P'_1$,
continue the algorithm by inserting $k'$ into the unimodal word $P_2$.
\item Repeat the previous step for the rows of $\mathbf{P}$ until either the algorithm terminates, in which case the new
tableau $\mathbf{P}'$ consists of rows $P_\ell, \ldots, P_{s+1}, P'_s, \ldots, P'_1$, or,
the insertion continues until we bump a letter $k_e$ from $P_\ell$, in which case we then put $k_e$ on a new row of
the shifted shape of $\mathbf{P'}$, so that the resulting tableau $\mathbf{P'}$ consists of rows
$k_e, P'_\ell, \ldots, P'_1$. \end{enumerate}
\begin{example} $$\young(43201,:212,::0) \leftsquigarrow 0 = \young(43210,:210,::01),$$ since the insertions row by row are given by $43201 \leftsquigarrow 0 =0 \leftsquigarrow 43210$, $212 \leftsquigarrow 0 = 1 \leftsquigarrow 210$, and $0 \leftsquigarrow 1 = 01$. \end{example}
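Iterating the row insertion down the rows gives a sketch of the tableau insertion, tested against the example above (the row insertion helpers follow the text's rules, under our conventions):

```python
def eg_insert(w, k):
    """Edelman-Greene insertion into an increasing word, as in the text
    (rule (2) applied regardless of the sign of k)."""
    if not w or w[-1] < k:
        return None, w + [k]
    if k in w:
        return k + 1, list(w)
    i = next(j for j, x in enumerate(w) if x > k)
    return w[i], w[:i] + [k] + w[i + 1:]

def k_insert(a, k):
    """Kraskiewicz row insertion of k into a unimodal word a."""
    if k == 0 and any(a[i:i + 3] == [1, 0, 1] for i in range(len(a) - 2)):
        return 0, list(a)
    v = a.index(min(a))
    d, g = a[:v + 1], a[v + 1:]
    kp, g_new = eg_insert(g, k)
    if kp is None:
        return None, d + g_new
    kpp, d_neg = eg_insert([-x for x in d], -kp)
    return -kpp, [-x for x in d_neg] + g_new

def tableau_insert(P, k):
    """Kraskiewicz insertion into a unimodal tableau P = [P_1, ..., P_l],
    listed from the top row down."""
    P = [list(row) for row in P]
    for r in range(len(P)):
        k, P[r] = k_insert(P[r], k)
        if k is None:
            return P
    return P + [[k]]                   # bumped off the last row: start a new row

P = [[4, 3, 2, 0, 1], [2, 1, 2], [0]]
assert tableau_insert(P, 0) == [[4, 3, 2, 1, 0], [2, 1, 0], [0, 1]]
```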
\begin{lemma} \cite{Kraskiewicz.1989} Let $\mathbf{P}$ be a reduced unimodal tableau with reading word $\pi_\mathbf{P}$ for an element $w\in W_C$. Let $k$ be a letter such that $\pi_\mathbf{P}k$ is a reduced word. Then the tableau $\mathbf{P'} = \mathbf{P} \leftsquigarrow k$ is a reduced unimodal tableau, for which the reading word $\pi_{\mathbf{P'}}$ is a reduced word for $w s_k$. \end{lemma}
\begin{lemma} \cite[Lemma 3.17]{Lam.1995}
\label{lemma.ins}
Let $\mathbf{P}$ be a unimodal tableau, and $\mathbf{a}$ a unimodal word such that $\pi_{\mathbf{P}}\mathbf{a}$ is reduced. Let $(x_1,y_1), \ldots, (x_r, y_r)$ be the (ordered) list of boxes added when $\mathbf{P} \leftsquigarrow {\mathbf{a}}$ is computed. Then there exists an index $v$, such that $x_1 < \cdots < x_v \geqslant \cdots \geqslant x_r $ and $y_1 \geqslant \cdots \geqslant y_v < \cdots < y_r$. \end{lemma}
Let $\mathbf{A} \in U^{\pm} (w)$ be a signed unimodal factorization with unimodal factors $\mathbf{a}_1, \mathbf{a}_2, \ldots, \mathbf{a}_n$. We recursively construct a sequence $(\emptyset, \emptyset) = (\mathbf{P}_0, \mathbf{T}_0),\ (\mathbf{P}_1, \mathbf{T}_1), \ldots, (\mathbf{P}_n, \mathbf{T}_n) = (\mathbf{P}, \mathbf{T})$ of tableaux, where $\mathbf{P}_s \in \mathcal{UT}_{(\mathbf{a}_1 \mathbf{a}_2 \ldots \mathbf{a}_s)} (\lambda^{(s)})$ and $\mathbf{T}_s \in \mathcal{PT}^{\pm} (\lambda^{(s)})$ are tableaux of the same shifted shape $\lambda^{(s)}$.
To obtain the \defn{insertion tableau} $\mathbf{P}_s$, insert the letters of $\mathbf{a}_s$ one by one from left to right, into $\mathbf{P}_{s-1}$. Denote the shifted shape of $\mathbf{P}_{s}$ by $\lambda^{(s)}$. Enumerate the boxes in the skew shape $\lambda^{(s)} / \lambda^{(s-1)}$ in the order they appear in $\mathbf{P}_s$. Let these boxes be $(x_1,y_1), \ldots, (x_{\ell_s}, y_{\ell_s})$.
Let $v$ be the index that is guaranteed to exist by Lemma~\ref{lemma.ins} when we compute $\mathbf{P_{s-1}} \leftsquigarrow {\mathbf{a_s}}$. The \defn{recording tableau} $\mathbf{T}_{s}$ is a primed tableau obtained from $\mathbf{T}_{s-1}$ by adding the boxes $(x_1, y_1), \ldots, (x_{v-1}, y_{v-1})$, each filled with the letter $s'$, and the boxes $(x_{v+1},y_{v+1}), \ldots, (x_{\ell_s}, y_{\ell_s})$, each filled with the letter $s$. The special case is the box $(x_v,y_v)$, which could contain either $s'$ or $s$. The letter is determined by the sign of the factor $\mathbf{a}_s$: If the sign is $-$, the box is filled with the letter $s'$, and if the sign is $+$, the box is filled with the letter $s$. We call the resulting map the \defn{primed Kra\'skiewicz map} $\mathrm{KR}'$.
\begin{example} Given a signed unimodal factorization $\mathbf{A} = (-0) (+212) (-43201)$, the sequence of tableaux is $$ (\emptyset,\emptyset), \quad (\ \young(0),\young(\onep)\ ), \quad \Big( \ \young(212,:0), \young(\onep\twop2,:2)\ \Big), \quad \Bigg(\ \young(43201,:212,::0),\young(\onep\twop2\threep3,:2\threep3,::\threep)\ \Bigg). $$ \end{example}
If the recording tableau is constructed, instead, by simply labeling its boxes with $1,2,3,\ldots$ in the order these boxes appear in the insertion tableau, we recover the original Kra\'skiewicz map \cite{Kraskiewicz.1989,Kraskiewicz.1995}, which is a bijection \begin{equation*}
\mathrm{KR}\colon R(w) \rightarrow
\bigcup_{\lambda} \big[\mathcal{UT}_w (\lambda) \times \mathcal{ST} (\lambda)\big], \end{equation*}
where $\mathcal{ST}(\lambda)$ is the set of \defn{standard shifted tableaux} of shape $\lambda$, i.e., the set of fillings of $\mathcal{S} (\lambda)$ with the letters
$1,2, \ldots,|\lambda|$ such that each letter appears exactly once and the entries increase along each row and each column.
\begin{theorem} \label{theorem.KR} The primed Kra\'skiewicz map is a bijection \begin{equation*}
\mathrm{KR}'\colon U^{\pm}(w) \rightarrow
\bigcup_{\lambda} \big[\mathcal{UT}_w (\lambda) \times \mathcal{PT}^{\pm} (\lambda)\big]. \end{equation*} \end{theorem}
\begin{proof} First we show that the map is well-defined: Let $\mathbf{A} \in U^{\pm}(w)$ such that $\mathrm{KR}'(\mathbf{A}) = (\mathbf{P}, \mathbf{Q})$. The fact that $\mathbf{P}$ is a unimodal tableau follows from the fact that $\mathrm{KR}$ is well-defined. On the other hand, $\mathbf{Q}$ satisfies Condition (1) in the definition of signed primed tableaux since its entries are weakly increasing with respect to the order the associated boxes are added to $\mathbf{P}$. Now fix an $s$ and consider the insertion $\mathbf{P_{s-1}} \leftsquigarrow {\mathbf{a_s}}$. Refer to the set-up in Lemma~\ref{lemma.ins}. Then, $x_1<\cdots<x_v$ implies there is at most one $s'$ in each row, and $y_v< \cdots < y_{\ell_s}$ implies there is at most one $s$ in each column, so Conditions (2) and (3) of the definition have been verified, implying that indeed $\mathbf{Q}$ is a signed primed tableau.
Now suppose $(\mathbf{P},\mathbf{Q}) \in \bigcup_{\lambda} \big[\mathcal{UT}_w (\lambda) \times \mathcal{PT}^{\pm} (\lambda)\big]$. The ordering of the alphabet $X'$ induces a partial order on the set of boxes of $\mathbf{Q}$. Refine this ordering as follows: Among boxes containing an $s'$, box $b$ is greater than box $c$ if box $b$ lies below box $c$. Among boxes containing an $s$, box $b$ is greater than box $c$ if box $b$ lies to the right of box $c$. Let the standard shifted tableau induced by the resulting total order be denoted $\mathbf{Q}^*$.
Let $w=\mathrm{KR}^{-1}(\mathbf{P},\mathbf{Q}^*)$. Divide $w$ into factors, where the size of the $s$-th factor is equal to the $s$-th entry in $\mathrm{wt}(\mathbf{Q})$. Let $\textbf{A} =\textbf{a}_1 \ldots \textbf{a}_n$ be the resulting factorization, where the sign of $\mathbf{a}_s$ is determined as follows: Consider the lowest leftmost box in $\mathbf{Q}$ that contains an $s$ or $s'$ (such a box must exist if $\textbf{a}_s \neq \emptyset$). If this box contains
an $s$, give $\mathbf{a}_s$ a positive sign, and otherwise a negative sign. Let $b_1,\ldots, b_{|\textbf{a}_s|}$ denote the boxes of $\mathbf{Q}^*$ corresponding to $\textbf{a}_s$ under $\mathrm{KR}^{-1}$. The construction of $\mathbf{Q}^*$ and the fact that $\mathbf{Q}$ is a primed shifted tableau imply that the coordinates of these boxes satisfy the hypothesis of Lemma \ref{lemma.ins}. Since these are exactly the boxes that appear when we compute $\mathbf{P_{s-1}} \leftsquigarrow \mathbf{a}_s$, Lemma \ref{lemma.ins} implies that $\mathbf{a}_s$ is unimodal. It follows that $\mathbf{A}$ is a signed unimodal factorization mapping to $(\mathbf{P},\mathbf{Q})$ under $\mathrm{KR}'$. It is not hard to see that $\mathbf{A}$ is unique. \end{proof}
Theorem~\ref{theorem.KR} and Equation~\eqref{equation.Upm} imply the following relation: \begin{equation} \label{equation.PTpm}
F^C_{w}(\mathbf{x}) = \sum_{\lambda} \big|\mathcal{UT}_w (\lambda) \big| \sum_{\mathbf{T} \in
\mathcal{PT}^{\pm}(\lambda)} \mathbf{x}^{\mathrm{wt}(\mathbf{T})}. \end{equation}
\begin{remark} The sum $\sum_{\mathbf{T} \in \mathcal{PT}^{\pm}(\lambda)} \mathbf{x}^{\mathrm{wt}(\mathbf{T})}$ is also known as the $Q$-Schur function. The expansion~\eqref{equation.PTpm}, with a slightly different interpretation of $Q$-Schur function, was shown in~\cite{Billey.Haiman.1995}. \end{remark}
At this point, we are halfway there to expand $F^C_{w}(\mathbf{x})$ in terms of Schur functions. In the next section we introduce a crystal structure on the set $\mathcal{PT} (\lambda)$ of unsigned primed tableaux.
\subsection{Mixed insertion} \label{section.implicit}
Set $\mathcal{B}^h = \mathcal{B}^h_{\infty}$. Similar to the well-known RSK-algorithm, mixed insertion~\cite{Haiman.1989} gives a bijection between $\mathcal{B}^h$ and the set of pairs of tableaux $(\mathbf{T}, \mathbf{Q})$, but in this case $\mathbf{T}$ is an (unsigned) primed tableau of shape $\lambda$ and $\mathbf{Q}$ is a standard shifted tableau of the same shape.
An \defn{(unsigned) primed tableau} of shape $\lambda$ (cf. semistandard $P$-tableau~\cite{Lam.1995} or semistandard marked shifted tableau~\cite{Cho.2013}) is a signed primed tableau $\mathbf{T}$ of shape $\lambda$ with only unprimed elements on the main diagonal. Denote the set of primed tableaux of shape $\lambda$ by $\mathcal{PT}(\lambda)$. The weight function $\mathrm{wt}(\mathbf{T})$ of $\mathbf{T} \in \mathcal{PT}(\lambda)$ is inherited from the weight function of signed primed tableaux, that is, it is the vector with $i$-th coordinate equal to the number of letters $i'$ and $i$ in $\mathbf{T}$. We can simplify~\eqref{equation.PTpm} as \begin{equation} \label{equation.PT}
F^C_{w}(\mathbf{x}) = \sum_{\lambda} 2^{\ell(\lambda)} \big|\mathcal{UT}_w (\lambda) \big|
\sum_{\mathbf{T} \in \mathcal{PT}(\lambda)} \mathbf{x}^{\mathrm{wt}(\mathbf{T})}. \end{equation}
\begin{remark} The sum $\sum_{\mathbf{T} \in \mathcal{PT}(\lambda)} \mathbf{x}^{\mathrm{wt}(\mathbf{T})}$ is also known as a $P$-Schur function. \end{remark}
Given a word $b_1 b_2 \ldots b_h$ in the alphabet $X = \{1<2<3<\cdots\}$, we recursively construct a sequence of tableaux $(\emptyset, \emptyset) = (\mathbf{T}_0, \mathbf{Q}_0),$ $(\mathbf{T}_1, \mathbf{Q}_1), \ldots, (\mathbf{T}_h, \mathbf{Q}_h) = (\mathbf{T}, \mathbf{Q})$, where $\mathbf{T}_s \in \mathcal{PT}(\lambda^{(s)})$ and $\mathbf{Q}_s \in \mathcal{ST}(\lambda^{(s)})$. To obtain the tableau $\mathbf{T}_{s}$, insert the letter $b_s$ into $\mathbf{T}_{s-1}$ as follows. First, insert $b_s$ into the first row of $\mathbf{T}_{s-1}$, bumping out the leftmost element $y$ that is strictly greater than $b_s$ in the alphabet $X' = \{1' < 1 < 2' < 2< \cdots \}$. \begin{enumerate}
\item If $y$ is not on the main diagonal and $y$ is not primed, then insert it into the next row, bumping out the leftmost
element that is strictly greater than $y$ from that row.
\item If $y$ is not on the main diagonal and $y$ is primed, then insert it into the next column to the right, bumping out
the topmost element that is strictly greater than $y$ from that column.
\item If $y$ is on the main diagonal, then it must be unprimed. Prime $y$ and insert it into the column on the right,
bumping out the topmost element that is strictly greater than $y$ from that column. \end{enumerate} If a bumped element exists, treat it as a new $y$ and repeat the steps above -- if the new $y$ is unprimed, row-insert it into the row below its original cell, and if the new $y$ is primed, column-insert it into the column to the right of its original cell.
The insertion process terminates either by placing a letter at the end of a row or column, bumping no new element, or by forming a new row with the last bumped element.
\begin{example} Under mixed insertion, $$\young(22\threep3,:33) \leftarrow 1 = \young(1\twop\threep3,:2\threep,::3).$$ Let us explain each step in detail. The letter $1$ is inserted into the first row bumping out the $2$ from the main diagonal, making it a $2'$, which is then inserted into the second column. The letter $2'$ bumps out $2$, which we insert into the second row. Then $3$ from the main diagonal is bumped from the second row, making it a $3'$, which is then inserted into third column. The letter $3'$ bumps out the 3 on the second row, which is then inserted as the first element in the third row. \end{example}
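These rules can be sketched in Python, encoding a primed letter $k'$ as $k-0.5$ and a shifted tableau as a dictionary from (row, column) pairs, both starting at $1$, to entries; the worked example above serves as the test case (an illustration under our conventions, not a definitive implementation):

```python
def mixed_insert(T, b):
    """One step of Haiman's mixed insertion.  T: dict {(row, col): entry};
    a primed letter k' is encoded as k - 0.5.  Returns the new tableau."""
    T = dict(T)
    mode, y, pos = "row", b, 1
    while True:
        if mode == "row":                # insert y into row `pos`
            hits = sorted(c for (i, c) in T if i == pos and T[(i, c)] > y)
            if not hits:                 # place y at the end of the row
                end = max([c for (i, c) in T if i == pos], default=pos - 1)
                T[(pos, end + 1)] = y
                return T
            c = hits[0]                  # leftmost entry strictly greater than y
            y, T[(pos, c)] = T[(pos, c)], y
            if pos == c:                 # bumped from the main diagonal: prime it
                y -= 0.5
            if y != int(y):              # primed: column-insert to the right
                mode, pos = "col", c + 1
            else:                        # unprimed: row-insert into the next row
                pos += 1
        else:                            # insert y into column `pos`
            hits = sorted(i for (i, c) in T if c == pos and T[(i, c)] > y)
            if not hits:                 # place y at the bottom of the column
                bottom = max([i for (i, c) in T if c == pos], default=0)
                T[(bottom + 1, pos)] = y
                return T
            i = hits[0]                  # topmost entry strictly greater than y
            y, T[(i, pos)] = T[(i, pos)], y
            if i == pos:                 # diagonal entries are unprimed: prime it
                y -= 0.5
            if y != int(y):
                mode, pos = "col", pos + 1
            else:
                mode, pos = "row", i + 1

# The worked example: insert 1 into the tableau with rows [2, 2, 3', 3], [3, 3].
T = {(1, 1): 2, (1, 2): 2, (1, 3): 2.5, (1, 4): 3, (2, 2): 3, (2, 3): 3}
assert mixed_insert(T, 1) == {(1, 1): 1, (1, 2): 1.5, (1, 3): 2.5, (1, 4): 3,
                              (2, 2): 2, (2, 3): 2.5, (3, 3): 3}
```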
The shapes of $\mathbf{T}_{s-1}$ and $\mathbf{T}_s$ differ by one box. Add that box to $\mathbf{Q}_{s-1}$ with a letter $s$ in it, to obtain the standard shifted tableau $\mathbf{Q}_s$.
\begin{example} For a word $332332123$, some of the tableaux in the sequence $(\mathbf{T}_i, \mathbf{Q}_i)$ are $$\Big(\ \young(23',:3),\young(12,:3) \ \Big), \quad \Big(\ \young(22\threep3,:33),\young(1245,:36) \ \Big), \quad \Bigg(\ \young(1\twop2\threep3,:2\threep3,::3),\young(12459,:368,::7) \ \Bigg).$$ \end{example}
\begin{theorem} \cite{Haiman.1989} The construction above gives a bijection \begin{equation*}
\mathrm{HM} \colon \mathcal{B}^h \rightarrow \bigcup_{\lambda\vdash h} \big[ \mathcal{PT}(\lambda) \times
\mathcal{ST}(\lambda) \big]. \end{equation*} \end{theorem}
The bijection $\mathrm{HM}$ is called a \defn{mixed insertion}. If $\mathrm{HM}(\mathbf{b}) = (\mathbf{T},\mathbf{Q})$, denote $P_{\mathrm{HM}} (\mathbf{b}) = \mathbf{T}$ and $R_{\mathrm{HM}}(\mathbf{b}) = \mathbf{Q}$.
Just as for the RSK-algorithm, the mixed insertion has the property of preserving the recording tableau within each connected component of the crystal $\mathcal{B}^h$.
\begin{theorem} \label{theorem.main0} The recording tableau $R_{\mathrm{HM}} (\cdot)$ is constant on each connected component of the crystal $\mathcal{B}^h$. \end{theorem}
Before we provide the proof of Theorem~\ref{theorem.main0}, we need to define one more insertion from~\cite{Haiman.1989}, which serves as a dual to the previously discussed mixed insertion.
We use the notion of \defn{generalized permutations}. Similar to a regular permutation in two-line notation, a generalized permutation $w$ consists of two lines $\binom{a_1 a_2\cdots a_h}{b_1 b_2 \cdots b_h}$, which gives a correspondence between $a_s$ and $b_s$, except that letters may now repeat. We order the pairs $(a_s, b_s)$ by making the top line weakly increasing $a_1 \leqslant\cdots \leqslant a_h$, and forcing $b_{s} \leqslant b_{s+1}$ whenever $a_s = a_{s+1}$. The inverse $w^{-1}$ of a generalized permutation consists of the pairs $(b_s, a_s)$, ordered appropriately. Given a word $\mathbf{b} = b_1\ldots b_h$, it can be represented as a generalized permutation $w$ by setting the first line of the permutation to be $1\ 2\ \ldots h$ and the second line to be $b_1\ b_2\ \ldots b_h$. Since the inverse of the generalized permutation $w$ exists, this also defines $\mathbf{b}^{-1}$.
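As a quick illustration (a sketch, not part of the formal development), the inverse $\mathbf{b}^{-1}$ of a word can be computed by sorting the pairs $(b_s, s)$:

```python
def inverse(b):
    """Inverse of the word b = b_1...b_h viewed as the generalized permutation
    with top line 1 2 ... h: collect the pairs (b_s, s) and reorder them so the
    new top line is weakly increasing, bottoms increasing under equal tops."""
    pairs = sorted((bs, s) for s, bs in enumerate(b, start=1))
    return [bs for bs, s in pairs], [s for bs, s in pairs]
```

For $\mathbf{b} = 332332123$ this returns the two lines $1\,2\,2\,2\,3\,3\,3\,3\,3$ and $7\,3\,6\,8\,1\,2\,4\,5\,9$.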
Now, let $w=\binom{a_1 a_2\cdots a_h}{b_1 b_2 \cdots b_h}$ be a generalized permutation on the alphabet $X$, where the second line consists of distinct letters. We recursively construct a sequence of tableaux $(\emptyset, \emptyset) = (\mathbf{Q}_0, \mathbf{T}_0),$ $(\mathbf{Q}_1, \mathbf{T}_1), \ldots, (\mathbf{Q}_h, \mathbf{T}_h)
= (\mathbf{Q}, \mathbf{T})$, where $\mathbf{Q}_s \in \mathcal{ST}(\lambda^{(s)})$ and $\mathbf{T}_s
\in \mathcal{PT}(\lambda^{(s)})$. To obtain the tableau $\mathbf{Q}_{s}$, insert the letter $b_s$ into $\mathbf{Q}_{s-1}$
as follows: \begin{itemize} \item Insert $b_s$ into the first row of $\mathbf{Q}_{s-1}$, and insert each bumped element into the next row until either an element is inserted into an empty cell and the algorithm terminates, or an element $b$ has been bumped from the diagonal. In the latter case, insert $b$ into the column to its right and continue bumping by columns, until an empty cell is filled. \item The shapes of $\mathbf{Q}_{s-1}$ and $\mathbf{Q}_s$ differ by one box. Add that box to $\mathbf{T}_{s-1}$ with a letter $a_s$ in it. Prime that letter if a diagonal element has been bumped in the process of inserting $b_s$ into $\mathbf{Q}_{s-1}$. \end{itemize}
The above insertion process is called the \defn{Worley--Sagan insertion algorithm}. The insertion tableau $\mathbf{Q}$ is denoted by $P_{\mathrm{WS}} (w)$ and the recording tableau $\mathbf{T}$ by $R_{\mathrm{WS}} (w)$.
\begin{theorem} \cite[Theorem 6.10 and Corollary 6.3]{Haiman.1989} \label{theorem.insertion dual} Given $\mathbf{b} \in \mathcal{B}^h$, we have $R_{\mathrm{HM}} (\mathbf{b}) = P_{\mathrm{WS}} (\mathbf{b}^{-1})$. \end{theorem} Next, we want to find out when the Worley--Sagan insertion tableau is preserved. Fortunately, other results from~\cite{Haiman.1989} provide this description. \begin{theorem} \cite[Corollaries 5.8 and 6.3]{Haiman.1989} \label{theorem.haiman WS} If two words with distinct letters $\mathbf{b}$ and $\mathbf{b}'$ are related by a shifted Knuth transformation, then $P_{\mathrm{WS}} (\mathbf{b}) = P_{\mathrm{WS}} (\mathbf{b}')$. \end{theorem}
Here, a \defn{shifted Knuth transformation} is an exchange of consecutive letters in one of the following forms: \begin{enumerate} \item Knuth transformations: $cab \leftrightarrow acb$ or $bca \leftrightarrow bac$, where $a<b<c$, \item Worley--Sagan transformation: $xy \leftrightarrow yx$, where $x$ and $y$ are the first two letters of the word. \end{enumerate}
We are now ready to prove the theorem.
\begin{proof}[Proof of Theorem~\ref{theorem.main0}] If $\mathbf{b}$ and $\mathbf{b}'$ are two words in the same connected component of $\mathcal{B}^h$, their RSK-recording tableaux $R_{\mathrm{RSK}} (\mathbf{b})$ and $R_{\mathrm{RSK}} (\mathbf{b}')$ are the same. Thus, $P_{\mathrm{RSK}} (\mathbf{b}^{-1})$ and $P_{\mathrm{RSK}} (\mathbf{b}'^{-1})$ are the same, and the second lines of $\mathbf{b}^{-1}$ and $\mathbf{b}'^{-1}$ are related by a sequence of Knuth transformations. By Theorem~\ref{theorem.haiman WS}, this implies $P_{\mathrm{WS}} (\mathbf{b}^{-1}) = P_{\mathrm{WS}} (\mathbf{b}'^{-1})$, and hence $R_{\mathrm{HM}} (\mathbf{b}) = R_{\mathrm{HM}} (\mathbf{b}')$ by Theorem~\ref{theorem.insertion dual}. \end{proof}
Let us fix a recording tableau $\mathbf{Q}_{\lambda} \in \mathcal{ST} (\lambda)$. Define a map $\Psi_\lambda \colon \mathcal{PT}(\lambda) \rightarrow \mathcal{B}^{h}$ as $\Psi_\lambda (\mathbf{T}) = \mathrm{HM}^{-1} (\mathbf{T}, \mathbf{Q}_\lambda)$. By Theorem~\ref{theorem.main0}, the set $\mathrm{Im}(\Psi_{\lambda})$ consists of several connected components of $\mathcal{B}^h$. The map $\Psi_{\lambda}$ can thus be taken as a crystal isomorphism, and we can define the crystal operators and weight function on $\mathcal{PT}(\lambda)$ as \begin{equation} \label{equation.ef}
e_i(\mathbf{T}) := (\Psi_\lambda^{-1} \circ e_i \circ \Psi_\lambda) (\mathbf{T}), \quad f_i(\mathbf{T})
:= (\Psi_\lambda^{-1} \circ f_i \circ \Psi_\lambda) (\mathbf{T}), \quad \mathrm{wt}(\mathbf{T}) := (\mathrm{wt} \circ \Psi_\lambda) (\mathbf{T}). \end{equation}
Although it is not clear that the crystal operators constructed above are independent of the choice of $\mathbf{Q}_\lambda$, in the next section we will construct explicit crystal operators on the set $\mathcal{PT}(\lambda)$ that satisfy the relations above and do not depend on the choice of $\mathbf{Q}_\lambda$.
\begin{example} For $\mathbf{T} = \young(1\twop2\threep3,:2\threep3,::3)$, choose $\mathbf{Q}_{\lambda} = \young(12345,:678,::9)$. Then $\Psi_\lambda (\mathbf{T}) = 333332221$ and $e_1 \circ \Psi_\lambda (\mathbf{T}) = 333331221$. Thus, \begin{equation*} \ e_1 (\mathbf{T}) = (\Psi_\lambda^{-1} \circ e_1 \circ \Psi_\lambda) (\mathbf{T}) = \young(112\threep3,:2\threep3,::3), \quad f_1(\mathbf{T}) = f_2(\mathbf{T}) = \mathbf{0}. \end{equation*} \end{example}
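The crystal operators on words used in this example follow the bracketing rule in which each letter $i+1$ opens a bracket and each letter $i$ closes one, matched left to right like parentheses. A minimal Python sketch (not part of the formal development; words are lists of integers and $\mathbf{0}$ is returned as \texttt{None}):

```python
def unbracketed(word, i):
    """Match letters i+1 (openers) with letters i (closers) left to right, like
    parentheses; return positions of unbracketed i's and unbracketed (i+1)'s."""
    opens, free_i = [], []
    for pos, b in enumerate(word):
        if b == i + 1:
            opens.append(pos)
        elif b == i:
            if opens:
                opens.pop()
            else:
                free_i.append(pos)
    return free_i, opens

def f_word(word, i):
    """Lowering operator: change the rightmost unbracketed i into i+1."""
    free_i, _ = unbracketed(word, i)
    if not free_i:
        return None
    w = list(word)
    w[free_i[-1]] = i + 1
    return w

def e_word(word, i):
    """Raising operator: change the leftmost unbracketed i+1 into i."""
    _, free_j = unbracketed(word, i)
    if not free_j:
        return None
    w = list(word)
    w[free_j[0]] = i
    return w
```

On $\mathbf{b} = 333332221$ this gives $e_1(\mathbf{b}) = 333331221$ and $f_1(\mathbf{b}) = f_2(\mathbf{b}) = \mathbf{0}$, matching the example.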
To summarize, we obtain a crystal isomorphism between the crystal $(\mathcal{PT}(\lambda), e_i, f_i, \mathrm{wt})$, denoted again by $\mathcal{PT}(\lambda)$, and a direct sum $\bigoplus_\mu \mathcal{B}_{\mu}^{\oplus h_{\lambda\mu}}$. We will provide a combinatorial description of the coefficients $h_{\lambda\mu}$ in the next section. This implies the relation on characters of the corresponding crystals $\chi_{\mathcal{PT}(\lambda)} = \sum_\mu h_{\lambda\mu} s_\mu$. Thus we can rewrite~\eqref{equation.PT} one last time \begin{equation*}
F^C_{w}(\mathbf{x}) = \sum_{\lambda} 2^{\ell(\lambda)} \big|\mathcal{UT}_w (\lambda) \big| \sum_{\mu}
h_{\lambda\mu} s_\mu = \sum_\mu \Big( \sum_\lambda 2^{\ell(\lambda)} \big|\mathcal{UT}_w (\lambda)
\big|\ h_{\lambda\mu} \Big) s_\mu. \end{equation*}
\section{Explicit crystal operators on shifted primed tableaux} \label{section.explicit}
We consider the alphabet $X'=\{1' < 1 < 2' < 2 < 3' < \cdots\}$ of primed and unprimed letters. It is useful to think about the letter $(i+1)'$ as a number $i + 0.5$. Thus, we say that letters $i$ and $(i+1)'$ differ by half a unit and letters $i$ and $(i+1)$ differ by a whole unit.
Given an (unsigned) primed tableau $\mathbf{T}$, we construct the \defn{reading word} $\mathrm{rw}(\mathbf{T})$ as follows: \begin{enumerate} \item List all primed letters in the tableau, column by column, from top to bottom within each column, moving from the rightmost column to the left, and with all the primes removed (i.e. all letters are increased by half a unit). (Call this part of the word the \defn{primed reading word}.) \item Then list all unprimed elements, row by row, from left to right within each row, moving from the bottommost row to the top. (Call this part of the word the \defn{unprimed reading word}.) \end{enumerate}
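The two-part reading word can be sketched in Python (an illustration, not part of the formal development; a primed letter $k'$ is encoded as the half-integer $k-0.5$, and the tableau as a dictionary over $0$-indexed $(\text{row},\text{col})$ cells):

```python
def reading_word(T):
    """Reading word of a primed tableau T, given as {(row, col): value} with a
    primed letter k' stored as k - 0.5.  Primed letters are read column by
    column, top to bottom, rightmost column first (primes dropped, i.e. raised
    by half a unit); then unprimed letters row by row, left to right, bottom
    row first."""
    primed = sorted(((c, r) for (r, c), v in T.items() if v != int(v)),
                    key=lambda t: (-t[0], t[1]))
    unprimed = sorted(((r, c) for (r, c), v in T.items() if v == int(v)),
                      key=lambda t: (-t[0], t[1]))
    return ([int(T[(r, c)] + 0.5) for c, r in primed]
            + [int(T[(r, c)]) for r, c in unprimed])
```

For instance, the tableau with rows $(1, 2', 2, 3')$ and $(2, 3', 3)$ has reading word $3322312$.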
To find the letter on which the crystal operator $f_i$ acts, apply the bracketing rule for letters $i$ and $i+1$ within the reading word $\mathrm{rw}(\mathbf{T})$. If all letters $i$ are bracketed in $\mathrm{rw}(\mathbf{T})$, then $f_i(\mathbf{T}) = \mathbf{0}$. Otherwise, the rightmost unbracketed letter $i$ in $\mathrm{rw}(\mathbf{T})$ corresponds to an $i$ or an $i'$ in $\mathbf{T}$, which we call \defn{bold unprimed} $i$ or \defn{bold primed} $i$ respectively.
If the bold letter $i$ is unprimed, denote the cell it is located in as $x$.
If the bold letter $i$ is primed, we \textit{conjugate} the tableau $\mathbf{T}$ first.
The \defn{conjugate} of a primed tableau $\mathbf{T}$ is obtained by reflecting the tableau over the main diagonal, changing all primed entries $k'$ to $k$ and changing all unprimed elements $k$ to $(k+1)'$ (i.e. increase the entries of all boxes by half a unit). The main diagonal is now the North-East boundary of the tableau. Denote the resulting tableau as $\mathbf{T}^*$.
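If a primed letter $k'$ is encoded as the half-integer $k-0.5$ and the tableau as a dictionary over $0$-indexed $(\text{row},\text{col})$ cells, conjugation becomes a one-line operation, since both replacements $k' \to k$ and $k \to (k+1)'$ amount to adding half a unit (a sketch, not part of the formal development):

```python
def conjugate(T):
    """Reflect a primed tableau over the main diagonal and raise every entry by
    half a unit: primed k' (stored as k - 0.5) becomes k, and unprimed k
    becomes (k+1)' (stored as k + 0.5)."""
    return {(c, r): v + 0.5 for (r, c), v in T.items()}
```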
Under the transformation $\mathbf{T} \to \mathbf{T}^*$, the bold primed $i$ is transformed into bold unprimed $i$. Denote the cell it is located in as $x$.
Given any cell $z$ in a shifted primed tableau $\mathbf{T}$ (or conjugated tableau $\mathbf{T}^*$), denote by $c(z)$ the entry contained in cell $z$. Denote by $z_E$ the cell to the right of $z$, $z_W$ the cell to its left, $z_S$ the cell below, and $z_N$ the cell above. Denote by $z^*$ the corresponding conjugated cell in $\mathbf{T}^*$ (or in $\mathbf{T}$). Now, consider the box $x_E$ (in $\mathbf{T}$ or in $\mathbf{T}^*$) and notice that $c(x_E) \geqslant (i+1)'$.\\
\noindent \textbf{Crystal operator $f_i$ on primed tableaux:}
\begin{enumerate}
\item If $c(x_E) = (i+1)'$, the box $x$ must lie outside of the main diagonal and the box immediately below $x_E$ cannot
contain $(i+1)'$. Change $c(x)$ to $(i+1)'$ and
change $c(x_E)$ to $(i+1)$ (i.e. increase the entry in cell $x$ and $x_E$ by half a unit).
\item If $c(x_E) \neq (i+1)'$ or $x_E$ is empty, then there is a
maximal connected ribbon (expanding in South and West directions) with the following properties:
\begin{enumerate}
\item The North-Eastern most box of the ribbon (the tail of the ribbon) is $x$.
\item The entries of all boxes within a ribbon besides the tail are either $(i+1)'$ or $(i+1)$.
\end{enumerate}
Denote the South-Western most box of the ribbon (the head) as $x_H$.
\begin{enumerate}
\item If $x_H = x$, change $c(x)$ to $(i+1)$ (i.e. increase the
entry in cell $x$ by a whole unit).
\item If $x_H \neq x$ and $x_H$ is on the main diagonal (in case of a tableau $\mathbf{T}$), change $c(x)$
to $(i+1)'$ (i.e. increase the entry in cell $x$ by half a unit).
\item Otherwise, $c(x_H)$ must be $(i+1)'$ due to the bracketing rule. We change $c(x)$ to $(i+1)'$
and change $c(x_H)$ to $(i+1)$ (i.e. increase the entry in cell $x$ and $x_H$ by half a unit).
\end{enumerate} \end{enumerate}
In the case when the bold $i$ in $\mathbf{T}$ is unprimed, we apply the above crystal operator rules directly to $\mathbf{T}$ to find $f_i(\mathbf{T})$.
\begin{example} We apply operator $f_2$ on the following tableaux. The bold letter is marked if it exists: \begin{enumerate} \item $\mathbf{T} = \young(1\twop23',:2\threep3)\ $, $\mathrm{rw}(\mathbf{T}) = 3322312$, thus $f_2(\mathbf{T}) = \mathbf{0}$;\\
\item $\mathbf{T} = \young(12'\boldtwo3',:2\threep4)\ $, $\mathrm{rw}(\mathbf{T}) = 3322412$, thus $f_2(\mathbf{T}) = \young(12'\threep3,:2\threep4)$ by Case (1). \\
\item $\mathbf{T} = \young(112\mathbf{2},:3\fourp4)\ $, $\mathrm{rw}(\mathbf{T}) = 4341122$, thus $f_2(\mathbf{T}) = \young(1123,:3\fourp4)$ by Case (2a).\\
\item $\mathbf{T} = \young(112'\boldtwo3,:223',::33)$, $\mathrm{rw}(\mathbf{T}) = 3233221123$, thus $f_2(\mathbf{T}) = \young(112'\threep3,:223',::33)$ by Case~(2b).\\
\item $\mathbf{T} = \young(111\boldtwo3,:223',::34')$, $\mathrm{rw}(\mathbf{T}) = 3432211123$, thus $f_2(\mathbf{T}) = \young(111\threep3,:223,::34')$ by Case~(2c). \end{enumerate} \end{example}
In the case when the bold $i$ is primed in $\mathbf{T}$, we first conjugate $\mathbf{T}$ and then apply the above crystal operator rules on $\mathbf{T}^*$, before reversing the conjugation. Note that Case~(2b) is impossible for $\mathbf{T}^*$, since the main diagonal is now on the North-East.
\begin{example}
\begin{equation*}
\text{Let} \
\mathbf{T} = \young(1\boldtwop23,:34',::4)\ ,
\quad
\text{then} \
\mathbf{T}^* = \young(2',\boldtwo4',\threep45',4')
\quad
\text{and} \
f_2 (\mathbf{T}) = \young(12\threep3,:34',::4)\ .
\end{equation*} \end{example}
\begin{theorem} \label{theorem.main2}
For any $\mathbf{b} \in \mathcal{B}^h$ with $P_{\mathrm{HM}}(\mathbf{b}) = \mathbf{T}$ and
$f_i(\mathbf{b})\neq \mathbf{0}$, the operator $f_i$ defined above satisfies
\begin{equation*}
P_{\mathrm{HM}}(f_i(\mathbf{b})) = f_i(\mathbf{T}).
\end{equation*}
Also, $f_i(\mathbf{b}) = \mathbf{0}$ if and only if $f_i(\mathbf{T})=\mathbf{0}$. \end{theorem}
The proof of Theorem~\ref{theorem.main2} is quite technical and is relegated to Appendix~\ref{section.proof main2}. It implies that the explicit operators $f_i$ in this section are indeed equal to those defined in~\eqref{equation.ef} and that they are independent of the choice of $\mathbf{Q}_\lambda$. We also immediately obtain:
\begin{proof}[Second proof of Theorem~\ref{theorem.main0}] Given a word $\mathbf{b}=b_1\ldots b_h$, let $\mathbf{b}'= f_i(\mathbf{b}) = b'_1 \ldots b'_h$, so that $b_m \neq b'_m$ for some $m$ and $b_s = b'_s$ for all $s \neq m$. We show that $R_{\mathrm{HM}} (\mathbf{b}) = R_{\mathrm{HM}} (\mathbf{b}')$.
Denote $\mathbf{b}^{(s)} = b_1\ldots b_s$ and similarly $\mathbf{b}'^{(s)} = b'_1\ldots b'_s$. Due to the construction of the recording tableau $R_{\mathrm{HM}}$, it suffices to show that $P_{\mathrm{HM}}(\mathbf{b}^{(s)})$ and $P_{\mathrm{HM}}(\mathbf{b}'^{(s)})$ have the same shape for any $1 \leqslant s \leqslant h$.
If $s < m$, this is immediate. If $s \geqslant m$, note that $\mathbf{b}'^{(s)}=f_i(\mathbf{b}^{(s)})$. Using Theorem~\ref{theorem.main2}, one can see that $P_{\mathrm{HM}} (\mathbf{b}'^{(s)}) = P_{\mathrm{HM}}(f_i(\mathbf{b}^{(s)})) = f_i(P_{\mathrm{HM}}(\mathbf{b}^{(s)}))$ has the same shape as $P_{\mathrm{HM}}(\mathbf{b}^{(s)})$. \end{proof}
The next step is to describe the raising operators $e_i (\mathbf{T})$. Consider the reading word $\mathrm{rw}(\mathbf{T})$ and apply the bracketing rule on the letters $i$ and $i+1$. If all letters $i+1$ are bracketed in $\mathrm{rw}(\mathbf{T})$, then $e_i(\mathbf{T}) = \mathbf{0}$. Otherwise, the leftmost unbracketed letter $i+1$ in $\mathrm{rw}(\mathbf{T})$ corresponds to an $i+1$ or an $(i+1)'$ in $\mathbf{T}$, which we will call bold unprimed $i+1$ or bold primed $i+1$, respectively. If the bold $i+1$ is unprimed, denote the cell it is located in by $y$. If the bold $i+1$ is primed, conjugate $\mathbf{T}$ and denote the cell with the bold $i+1$ in $\mathbf{T}^*$ by $y$.\\
\noindent \textbf{Crystal operator $e_i$ on primed tableaux:} \begin{enumerate}
\item If $c(y_W) = (i+1)'$, then change $c(y)$ to $(i+1)'$ and
change $c(y_W)$ to $i$ (i.e. decrease the entry in cell $y$ and $y_W$ by half a unit).
\item If $c(y_W) < (i+1)'$ or $y_W$ is empty, then there is a
maximal connected ribbon (expanding in North and East directions) with the following properties:
\begin{enumerate}
\item The South-Western most box of the ribbon (the head of the ribbon) is $y$.
\item The entries of all boxes within the ribbon besides the head are either $i$ or $(i+1)'$.
\end{enumerate}
Denote the North-Eastern most box of the ribbon (the tail) as $y_T$.
\begin{enumerate}
\item If $y_T = y$, change $c(y)$ to $i$ (i.e. decrease the
entry in cell $y$ by a whole unit).
\item If $y_T \neq y$ and $y_T$ is on the main diagonal (in case of a conjugate tableau $\mathbf{T}^*$),
then change $c(y)$ to
$(i+1)'$ (i.e. decrease the entry in cell $y$ by half a unit).
\item If $y_T \neq y$ and $y_T$ is not on the diagonal, the entry of cell $y_T$ must be $(i+1)'$
and we change $c(y)$ to $(i+1)'$ and change $c(y_T)$ to $i$ (i.e. decrease the entry of cell $y$
and $y_T$ by half a unit).
\end{enumerate} \end{enumerate} When the bold $i+1$ is unprimed, $e_i(\mathbf{T})$ is obtained by applying the rules above to $\mathbf{T}$. When the bold $i+1$ is primed, we first conjugate $\mathbf{T}$, then apply the raising crystal operator rules on $\mathbf{T}^*$, and then reverse the conjugation.
\begin{proposition} Let $\mathbf{b} \in \mathcal{B}^h$ with $P_{\mathrm{HM}}(\mathbf{b}) = \mathbf{T}$. Then
\begin{equation*}
e_i (\mathbf{b}) = \mathbf{0} \quad \text{if and only if} \quad e_i (\mathbf{T}) = \mathbf{0}.
\end{equation*} \end{proposition}
\begin{proof} According to Lemma~\ref{lemma.main}, the number of unbracketed letters $i$ in $\mathbf{b}$ is equal to the number of unbracketed letters $i$ in $\mathrm{rw}(\mathbf{T})$. Since the total numbers of letters $i$ and of letters $j=i+1$ are the same in $\mathbf{b}$ and in $\mathrm{rw}(\mathbf{T})$, the number of unbracketed letters $j$ in $\mathbf{b}$ is also equal to the number of unbracketed letters $j$ in $\mathrm{rw}(\mathbf{T})$. Thus, there are no unbracketed letters $j$ in $\mathbf{b}$ if and only if there are no unbracketed letters $j$ in $\mathrm{rw}(\mathbf{T})$. \end{proof}
\begin{theorem} \label{theorem.main3}
Given a primed tableau $\mathbf{T}$ with $f_i(\mathbf{T}) \neq \mathbf{0}$, for the operators $e_i$ defined
above we have the following relation:
\begin{equation*}
e_i(f_i(\mathbf{T})) = \mathbf{T}.
\end{equation*} \end{theorem}
The proof of Theorem~\ref{theorem.main3} is relegated to Appendix~\ref{section.proof main3}.
\begin{corollary} \label{theorem.main4}
For any $\mathbf{b} \in \mathcal{B}^h$ with $\mathrm{HM}(\mathbf{b}) = (\mathbf{T},\mathbf{Q})$, the operator
$e_i$ defined above satisfies
\begin{equation*}
\mathrm{HM}(e_i(\mathbf{b})) = (e_i(\mathbf{T}), \mathbf{Q}),
\end{equation*}
provided the left-hand side is well-defined, that is, $e_i(\mathbf{b}) \neq \mathbf{0}$. \end{corollary}
The consequence of Theorem~\ref{theorem.main2}, as discussed in Section~\ref{section.implicit}, is a crystal isomorphism $\Psi_\lambda \colon \mathcal{PT}(\lambda) \rightarrow \bigoplus_\mu \mathcal{B}_{\mu}^{\oplus h_{\lambda\mu}}$. Now, to determine the nonnegative integer coefficients $h_{\lambda\mu}$, it is enough to count the highest weight elements in $\mathcal{PT}(\lambda)$ of given weight $\mu$.
\begin{proposition} \label{proposition.highest}
A primed tableau $\mathbf{T} \in \mathcal{PT}(\lambda)$ is a highest weight element if and only if its reading word
$\mathrm{rw}(\mathbf{T})$ is a Yamanouchi word. That is, for any suffix of $\mathrm{rw}(\mathbf{T})$, its weight
is a partition. \end{proposition}
Thus we define $h_{\lambda\mu}$ to be the number of primed tableaux $\mathbf{T}$ of shifted shape $\mathcal{S}(\lambda)$ and weight $\mu$ such that $\mathrm{rw}(\mathbf{T})$ is Yamanouchi.
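The Yamanouchi condition on a reading word is straightforward to test (a sketch, not part of the formal development; words are lists of positive integers):

```python
from collections import Counter

def is_yamanouchi(word):
    """Every suffix must have partition weight: scanning right to left, the
    running count of each letter k may never exceed that of k - 1."""
    counts = Counter()
    for b in reversed(word):
        counts[b] += 1
        if b > 1 and counts[b] > counts[b - 1]:
            return False
    return True
```

For instance, the word $2343221111$ is Yamanouchi, while $12$ is not, since its suffix $2$ has weight $(0,1)$, which is not a partition.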
\begin{example}
Let $\lambda = (5,3,2)$ and $\mu = (4,3,2,1)$. There are three primed tableaux of shifted shape
$\mathcal{S}((5,3,2))$ and weight $(4,3,2,1)$ with a Yamanouchi reading word, namely
\begin{equation*}
\young(11112',:223',::34')
\ , \quad
\young(11113',:222,::34')
\quad \text{and} \quad
\young(11114',:222,::33)\ .
\end{equation*}
Therefore $h_{(5,3,2)(4,3,2,1)} = 3$. \end{example}
We summarize our results for the type $C$ Stanley symmetric functions as follows. \begin{corollary} \label{corollary.main2}
The expansion of $F^C_w(\mathbf{x})$ in terms of Schur symmetric functions is
\begin{equation}
\label{equation.FC}
F^C_w(\mathbf{x}) = \sum_\lambda g_{w\lambda} s_\lambda (\mathbf{x}), \quad
\text{where} \quad
g_{w\lambda} = \sum_\mu 2^{\ell(\mu)} \big|\mathcal{UT}_w (\mu) \big| \ h_{\mu\lambda}\ .
\end{equation} \end{corollary}
Replacing $\ell(\mu)$ by $\ell(\mu)-o(w)$ gives the Schur expansion of $F^B_w(\mathbf{x})$. Note that since any row of a unimodal tableau contains at most one zero, $\ell(\mu)-o(w)$ is nonnegative. Thus the given expansion makes sense combinatorially.
\begin{example}\label{exa} Consider the word $w=0101=1010$. There is only one unimodal tableau corresponding to $w$, namely $\mathbf{P} = \young(101,:0)$, which belongs to $\mathcal{UT}_{0101} (3,1)$. Thus, $g_{w\lambda} = 4h_{(3,1)\lambda}$. There are only three possible highest weight primed tableaux of shape $(3,1)$, namely $\young(111,:2),\ \young(112',:2)$ and $\young(113',:2)$, which implies that $h_{(3,1)(3,1)}= h_{(3,1)(2,2)} = h_{(3,1)(2,1,1)} = 1$ and $h_{(3,1)\lambda} = 0$ for other weights $\lambda$. The expansion of $F^C_{0101}(\mathbf{x})$ is thus \begin{equation*}
F^C_{0101} = 4s_{(3,1)} + 4s_{(2,2)} + 4s_{(2,1,1)}. \end{equation*} \end{example}
\begin{remark} \label{remark.doubling} In~\cite[Section 5]{Haiman.1989}, Haiman showed that shifted mixed insertion can be understood in terms of nonshifted mixed insertion operators that produce a symmetric tableau, which can subsequently be cut along the diagonal. More precisely, starting with a word $\mathbf{b}$, consider its doubling $\mathrm{double}(\mathbf{b})$ by replacing each letter $\ell$ by $-\ell \;\ell$. By~\cite[Proposition 6.8]{Haiman.1989} the mixed insertion of $\mathrm{double}(\mathbf{b})$ is the symmetrized version of $P_{\mathrm{HM}}(\mathbf{b})$. This symmetrized version can also be obtained by first applying usual insertion to obtain $P(\mathrm{double}(\mathbf{b}))$ and then applying conversion~\cite[Proposition 14]{SW.2001}. Since both doubling (where the operators are also replaced by their doubled versions) and regular insertion commute with crystal operators, it follows that our crystal operators $f_i$ on primed tableaux can be described as follows: To apply $f_i$ to $\mathbf{T}$, first form the symmetrization of $\mathbf{T}$ and then apply inverse conversion (changing primed entries to negatives). Next apply the doubled operator $f_if_{-i}$, and then convert ``forwards'' (negatives to primes). This produces a symmetric tableau, which can then be cut along the diagonal to obtain $f_i(\mathbf{T})$. \end{remark}
\section{Semistandard unimodal tableaux} \label{section.semistandard}
Many of the results of this paper have counterparts which involve the notion of semi\-standard unimodal tableaux in place of primed tableaux. We give a brief overview of these results, mostly without proof.
First, let us define semistandard unimodal tableaux. We say that a word $a_1 a_2 \ldots a_h \in \mathcal{B}^h$ is \defn{weakly unimodal} if there exists an index $v$, such that \[
a_1 > a_2 > \cdots > a_v \leqslant a_{v+1} \leqslant \cdots \leqslant a_h. \] A \defn{semistandard unimodal tableau} $\mathbf{P}$ of shape $\lambda$ is a filling of $\mathcal{S}(\lambda)$ with letters from the alphabet $X$ such that the $i^{th}$ row of $\mathbf{P}$, denoted by $P_i$, is weakly unimodal, and such that $P_i$ is the longest weakly unimodal subword in the concatenated word $P_{i+1} P_i$. Denote the set of semistandard unimodal tableaux of shape $\lambda$ by $\mathcal{SUT}(\lambda)$.
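The weak unimodality condition on a row can be checked greedily, since the valley must sit at the end of the maximal strictly decreasing prefix: if the strict decrease continued past a candidate valley, the weak increase after it would fail. A sketch (not part of the formal development):

```python
def is_weakly_unimodal(word):
    """True if the word strictly decreases to a valley and then weakly increases."""
    v = 0
    while v + 1 < len(word) and word[v] > word[v + 1]:
        v += 1                      # v = end of the strictly decreasing prefix
    return all(word[s] <= word[s + 1] for s in range(v, len(word) - 1))
```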
Let $\mathbf{a}=a_1\ldots a_h \in \mathcal{B}^h$. The alphabet $X$ imposes a partial order on the entries of $\mathbf{a}$. We can extend this to a total order by declaring that if $a_i=a_j$ as elements of $X$, and $i<j$, then as entries of $\mathbf{a}$, $a_i<a_j$. For each entry $a_i$, denote its numerical position in the total ordering on the entries of $\mathbf{a}$ by $n_i$ and define the \defn{standardization} of $\mathbf{a}$ to be the word with superscripts, $n_1^{a_1} \ldots n_h^{a_h}$. Since its entries are distinct, $n_1 \ldots n_h$ can be considered as a reduced word. Let $(\mathbf{R},\mathbf{S})$ be the Kra\'skiewicz insertion and recording tableaux of $n_1 \ldots n_h$, and let $\mathbf{R}^*$ be the tableau obtained from $\mathbf{R}$ by replacing each $n_i$ by $a_i$. One checks that setting $\mathrm{SK}(\mathbf{a})=(\mathbf{R}^*,\mathbf{S})$ defines a map, \[
\mathrm{SK} \colon \mathcal{B}=\bigoplus_{h \in \mathbb{N}} \mathcal{B}^h \rightarrow \bigcup_{\lambda}
\big[\mathcal{SUT} (\lambda) \times \mathcal{ST} (\lambda)\big]. \] In fact, this map is a bijection \cite{Serrano.2010,Lam.1995}. It follows that the composition $\mathrm{SK} \circ \mathrm{HM}^{-1}$ gives a bijection \[
\bigcup_{\lambda} \big[\mathcal{PT} (\lambda) \times \mathcal{ST} (\lambda)\big] \rightarrow \bigcup_{\lambda}
\big[\mathcal{SUT} (\lambda) \times \mathcal{ST} (\lambda)\big]. \] The following remarkable fact, which appears as \cite[Proposition 2.23]{Serrano.2010}, can be deduced from \cite[Theorem 3.32]{Lam.1995}, which itself utilizes results of \cite{Haiman.1989}.
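The standardization used in the definition of $\mathrm{SK}$ above can be sketched as follows (an illustration, not part of the formal development; we return only the ranks $n_1 \ldots n_h$, since the superscripts are just the original letters $a_s$):

```python
def standardize(word):
    """Rank the entries of the word under the total order that refines the order
    on the alphabet X by breaking ties left to right."""
    order = sorted(range(len(word)), key=lambda s: (word[s], s))
    ranks = [0] * len(word)
    for rank, s in enumerate(order, start=1):
        ranks[s] = rank
    return ranks
```

For example, the word $332332123$ standardizes to $562783149$, a word with distinct letters.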
\begin{theorem} \label{theorem.same}
For any word $\mathbf{a}\in \mathcal{B}^h$, $Q_{\mathrm{SK}}(\mathbf{a}) = Q_{\mathrm{HM}}(\mathbf{a})$. \end{theorem}
This allows us to define a bijective map $\Phi_{\mathbf{Q}} \colon \mathcal{PT} (\lambda) \rightarrow \mathcal{SUT} (\lambda)$ as follows. Choose a standard shifted tableau $\mathbf{Q}$ of shape $\lambda$. Then, given a primed tableau $\mathbf{P}$ of shape $\lambda$ set $(\mathbf{R}, \mathbf{Q}) = \mathrm{SK}(\mathrm{HM}^{-1}(\mathbf{P},\mathbf{Q}))$, and let $\Phi_{\mathbf{Q}}(\mathbf{P})=\mathbf{R}$.
For any filling of a shifted shape $\lambda$ with letters from $X$, associating this filling to its reading word (the element of
$\mathcal{B}^{|\lambda|}$ obtained by reading rows left to right, bottom to top) induces crystal operators on the set of all fillings of this shape. In particular, we can apply these induced operators to any element of $\mathcal{SUT} (\lambda)$ (although, a priori, it is not clear that the image will remain in $\mathcal{SUT} (\lambda)$). We now summarize our main results for SK insertion and its relation to this induced crystal structure.
\begin{theorem} \label{theorem.main2'}
For any $\mathbf{b} \in \mathcal{B}^h$ with $\mathrm{SK}(\mathbf{b}) = (\mathbf{T},\mathbf{Q})$ and
$f_i(\mathbf{b})\neq \mathbf{0}$, the induced operator $f_i$ described above satisfies
\begin{equation*}
\mathrm{SK}(f_i(\mathbf{b})) = (f_i(\mathbf{T}), \mathbf{Q}).
\end{equation*}
Also, $f_i(\mathbf{b}) = \mathbf{0}$ if and only if $f_i(\mathbf{T})=\mathbf{0}$. \end{theorem}
\begin{corollary}
$\mathcal{SUT} (\lambda)$ is closed under the induced crystal operators described above. \end{corollary}
Replacing $\mathrm{HM}$ by $\mathrm{SK}$ in the second proof of Theorem~\ref{theorem.main0}, or combining Theorem~\ref{theorem.main0} with Theorem~\ref{theorem.same}, yields:
\begin{theorem} \label{theorem.main0'} The recording tableau under $\mathrm{SK}$ insertion is constant on each connected component of the crystal $\mathcal{B}^h$. \end{theorem}
The upshot of all this is the following theorem.
\begin{theorem} \label{theorem.upshot} With respect to the crystal operators we have defined on primed tableaux and the induced operators on semistandard unimodal tableaux described above, the map $\Phi_{\mathbf{Q}}$ is a crystal isomorphism. \end{theorem}
\begin{proof} This says no more than that $\Phi_{\mathbf{Q}}$ is a bijection (which we have established) and that it commutes with the crystal operations on primed tableaux and semistandard unimodal tableaux. But this is simply combining Theorem~\ref{theorem.main0} with Theorem~\ref{theorem.main0'}. \end{proof}
Theorem~\ref{theorem.upshot} immediately gives us another combinatorial interpretation of the coefficients $g_{w \lambda}$. Let $k_{\mu \lambda}$ be the number of semistandard unimodal tableaux of shape $\mu$ and weight $\lambda$, whose reading words are Yamanouchi (that is, tableaux that are the highest weight elements of $\mathcal{SUT}(\mu)$).
\begin{corollary} \label{corollary.main2'}
The expansion of $F^C_w(\mathbf{x})$ in terms of Schur symmetric functions is
\begin{equation*}
F^C_w(\mathbf{x}) = \sum_\lambda g_{w\lambda} s_\lambda (\mathbf{x}), \quad
\text{where} \quad
g_{w\lambda} = \sum_\mu 2^{\ell(\mu)} \big|\mathcal{UT}_w (\mu) \big| \ k_{\mu\lambda}\ .
\end{equation*} \end{corollary}
Again, replacing $\ell(\mu)$ by $\ell(\mu)-o(w)$ gives the Schur expansion of $F^B_w(\mathbf{x})$.
\begin{example} According to Example~\ref{exa}, we should find three highest weight semistandard unimodal tableaux of shape $(3,1)$, one for each of the weights $(3,1)$, $(2,2)$, and $(2,1,1)$. These are $\young(211,:1),\ \young(211,:2)$ and $\young(321,:1)$. \end{example}
\section{Outlook}
There are several other generalizations of the results in this paper that one could pursue. First of all, it would be interesting to consider affine Stanley symmetric functions of type $B$ or $C$. As in affine type $A$, this would involve a generalization of crystal bases as the expansion is no longer in terms of Schur functions. Another possible extension is to consider $K$-theoretic analogues of Stanley symmetric functions, such as the (dual) stable Grothendieck polynomials. In type $A$, a crystal theoretic analysis of dual stable Grothendieck polynomials was carried out in~\cite{galashin.2015}. Type $D$ should also be considered from this point of view. Finally, the definition of the reading word $\mathrm{rw}$ of Section~\ref{section.explicit} and the characterization of highest weight elements in Proposition~\ref{proposition.highest} are very similar to the reading words in~\cite[Section 3.2]{Liu.2017} in the analysis of Kronecker coefficients.
\appendix
\section{Proof of Theorem~\ref{theorem.main2}} \label{section.proof main2}
In this appendix, we provide the proof of Theorem~\ref{theorem.main2}.
\subsection{Preliminaries}
We use the fact from \cite{Haiman.1989} that restricting the word $\mathbf{b}$ to its letters less than or equal to $i+1$ and applying mixed insertion corresponds to restricting the tableau $\mathbf{T}$ to its entries $\leqslant i+1$. Thus, it is enough to prove the theorem for a ``truncated'' word $\mathbf{b}$ without any letters greater than $i+1$. To shorten the notation, we set $j= i+1$ in this appendix. We sometimes also restrict to just the letters $i$ and $j$ in a word $w$. We call this the \defn{$\{i,j\}$-subword} of $w$.
First, in Lemma~\ref{lemma.main} we justify the notion of the reading word $\mathrm{rw}(\textbf{T})$ and provide the reason to use a bracketing rule on it. After that, in Section~\ref{section.main.proof} we prove that the action of the crystal operator $f_i$ on $\mathbf{b}$ corresponds to the action of $f_i$ on $\mathbf{T}$ after the insertion.
Given a word $\mathbf{b}$, we apply the crystal bracketing rule for its $\{i,j\}$-subword and globally declare the rightmost unbracketed $i$ in $\mathbf{b}$ (i.e. the letter the crystal operator $f_i$ acts on) to be a bold $i$. Insert the letters of $\mathbf{b}$ via Haiman insertion to obtain the insertion tableau $\mathbf{T}$. During this process, we keep track of the position of the bold $i$ in the tableau via the following rules. When the bold $i$ from $\mathbf{b}$ is inserted into $\mathbf{T}$, it is inserted as the rightmost $i$ in the first row of $\mathbf{T}$ since by definition it is unbracketed in $\mathbf{b}$ and hence cannot bump a letter $j$. From this point on, the tableau $\mathbf{T}$ has a \defn{special} letter $i$ and we track its position:
\begin{enumerate} \item If the special $i$ is unprimed, it is always the rightmost $i$ in its row. When a letter $i$ is bumped from this row, it is one of the non-special letters $i$, unless the special $i$ is the only $i$ in the row. When the non-diagonal special $i$ is bumped from its row, it is inserted as the rightmost $i$ in the next row. \item When the diagonal special $i$ is bumped from its row to the column to its right, it is inserted as the bottommost $i'$ in the next column. \item If the special $i$ is primed, it is always the bottommost $i'$ in its column. When a letter $i'$ is bumped from this column, it is one of the non-special letters $i'$, unless the special $i'$ is the only $i'$ in the column. When the primed special $i$ is bumped from its column, it is inserted as the bottommost $i'$ in the next column. \item When $i$ is inserted into a row containing the special unprimed $i$, the rightmost $i$ becomes special. \item When $i'$ is inserted into a column containing the special primed $i$, the bottommost $i'$ becomes special. \end{enumerate}
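As a small illustration of the bracketing rule (our own example, not needed for the proof), take $i=1$, $j=2$ and $\mathbf{b} = 122112$. Scanning from the left, the letters $b_4$ and $b_5$ are bracketed with the letters $2$ at $b_3$ and $b_2$, respectively, while $b_1$ is an unbracketed $1$ (no letter $2$ precedes it) and $b_6$ is an unbracketed $2$. Hence the bold $i$ is $b_1$ and
\begin{equation*}
f_1(122112) = 222112.
\end{equation*}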
\begin{lemma} \label{lemma.main} Using the rules above, after the insertion process of $\mathbf{b}$, the special $i$ in $\mathbf{T}$ is the same as the rightmost unbracketed $i$ in the reading word $\mathrm{rw}(\mathbf{T})$ (i.e. the definition of the bold $i$ in $\mathbf{T}$). Moreover, the number of unbracketed letters $i$ in $\mathbf{b}$ is equal to the number of unbracketed letters $i$ in $\mathrm{rw}(\mathbf{T})$. \end{lemma}
\begin{proof} First, note that since both the number of letters $i$ and the number of letters $j$ are equal in $\mathbf{b}$ and $\mathrm{rw}(\mathbf{T})$, the fact that the number of unbracketed letters $i$ is the same implies that the number of unbracketed letters $j$ must also be the same. We use induction on $1 \leqslant s \leqslant h$, where the letters $b_1 \ldots b_s$ of $\mathbf{b}=b_1 b_2 \ldots b_h$ have been inserted using Haiman mixed insertion with the above rules. That is, we check that at each step of the insertion algorithm the statement of our lemma stays true.
The induction step is as follows: Consider the word $b_1 \ldots b_{s-1}$ with a corresponding insertion tableau $\mathbf{T}^{(s-1)}$. If the bold $i$ in $\mathbf{b}$ is not in $b_1\ldots b_{s-1}$, then $\mathbf{T}^{(s-1)}$ does not contain a special letter $i$. Otherwise, by the induction hypothesis, assume that the bold $i$ in $b_1\ldots b_{s-1}$ corresponds, via the above rules, to the special $i$ in $\mathbf{T}^{(s-1)}$, that is, it is in the position corresponding to the rightmost unbracketed $i$ in the reading word $\mathrm{rw}(\mathbf{T}^{(s-1)})$. Then we need to prove that for $b_1 \ldots b_s$, the special $i$ in $\mathbf{T}^{(s-1)}$ ends up in the position corresponding to the rightmost unbracketed $i$ in the reading word of $\mathbf{T}^{(s)} = \mathbf{T}^{(s-1)} \leftsquigarrow b_s$. We also need to verify that the second part of the lemma remains true for $\mathbf{T}^{(s)}$.
Remember that we are only considering ``truncated'' words $\mathbf{b}$ with all letters $\leqslant j$.
\noindent \textbf{Case 1.} Suppose $b_s = j$. In this case $j$ is inserted at the end of the first row of $\mathbf{T}^{(s-1)}$, and $\mathrm{rw}(\mathbf{T}^{(s)})$ has $j$ attached at the end. Thus, both statements of the lemma are unaffected.
\noindent \textbf{Case 2.} Suppose $b_s = i$ and $b_s$ is unbracketed in $b_1 \ldots b_{s-1} b_s$. Then there is no special $i$ in tableau $\mathbf{T}^{(s-1)}$, and $b_s$ might be the bold $i$ of the word $\mathbf{b}$. Also, there are no unbracketed letters $j$ in $b_1 \ldots b_{s-1}$, and thus all $j$ in $\mathrm{rw}(\mathbf{T}^{(s-1)})$ are bracketed. Thus, there are no letters $j$ in the first row of $\mathbf{T}^{(s-1)}$, and $i$ is inserted in the first row of $\mathbf{T}^{(s-1)}$, possibly bumping the letter $j'$ from column $c$ into an empty column $c+1$ in the process. Note that if $j'$ is bumped, moving it to column $c+1$ of $\mathbf{T}^{(s)}$ does not change the reading word, since column $c$ of $\mathbf{T}^{(s-1)}$ does not contain any primed letters other than $j'$. The reading word of $\mathbf{T}^{(s)}$ is thus the same as $\mathrm{rw}(\mathbf{T}^{(s-1)})$ except for an additional unbracketed $i$ at the end. The number of unbracketed letters $i$ in both $\mathrm{rw}(\mathbf{T}^{(s)})$ and $b_1 \ldots b_{s-1} b_s$ is thus increased by one compared to $\mathrm{rw}(\mathbf{T}^{(s-1)})$ and $b_1 \ldots b_{s-1}$. If $b_s$ is the bold $i$ of the word $\mathbf{b}$, the special $i$ of tableau $\mathbf{T}^{(s)}$ is the rightmost $i$ on the first row and corresponds to the rightmost unbracketed $i$ in $\mathrm{rw}(\mathbf{T}^{(s)})$.
\noindent \textbf{Case 3.} Suppose $b_s = i$ and $b_s$ is bracketed with a $j$ in the word $b_1\ldots b_{s-1}$. In this case, according to the induction hypothesis, $\mathrm{rw}(\mathbf{T}^{(s-1)})$ has an unbracketed $j$. There are two options.
\noindent \textbf{Case 3.1.} If the first row of $\mathbf{T}^{(s-1)}$ does not contain $j$, $b_s$ is inserted at the end of the first row of $\mathbf{T}^{(s-1)}$, possibly bumping $j'$ in the process. Regardless, $\mathrm{rw}(\mathbf{T}^{(s)})$ does not change except for attaching an $i$ at the end (see Case 2). This $i$ is bracketed with one unbracketed $j$ in $\mathrm{rw}(\mathbf{T}^{(s)})$. The special $i$ (if there was one in $\mathbf{T}^{(s-1)}$) does not change its position and the statement of the lemma remains true.
\noindent \textbf{Case 3.2.} If the first row of $\mathbf{T}^{(s-1)}$ does contain a $j$, inserting $b_s$ into $\mathbf{T}^{(s-1)}$ bumps $j$ (possibly bumping $j'$ beforehand) into the second row, where $j$ is inserted at the end of the row. So, if the first row contains $n \geqslant 0$ elements $i$ and $m \geqslant 1$ elements $j$, the reading word $\mathrm{rw}(\mathbf{T}^{(s-1)})$ ends with $\ldots i^n j^m$, and $\mathrm{rw}(\mathbf{T}^{(s)})$ ends with $\ldots j i^{n+1} j^{m-1}$. Thus, the number of unbracketed letters $i$ does not change and if there was a special $i$ in the first row, it remains there and it still corresponds to the rightmost unbracketed $i$ in $\mathrm{rw}(\mathbf{T}^{(s)})$.
\noindent \textbf{Case 4.} Suppose $b_s < i$. Inserting $b_s$ could change both the primed reading word and unprimed reading word of $\mathbf{T}^{(s-1)}$. As long as neither $i$ nor $j$ is bumped from the diagonal, we can treat primed and unprimed changes separately.
\noindent \textbf{Case 4.1.} Suppose neither $i$ nor $j$ is bumped from the diagonal during the insertion. This means that there are no transitions of letters $i$ or $j$ between the primed and the unprimed parts of the reading word. Thus, it is enough to track the bracketing relations in the unprimed reading word; the bracketing relations in the primed reading word can be verified the same way via the transposition. After we make sure that the number of unbracketed letters $i$ and $j$ changes neither in the primed nor unprimed reading word, it is enough to consider the case when the special $i$ is unprimed, since the case when it is primed can again be checked using the transposition. To avoid going back and forth, we combine these two processes in each of the subcases that follow.
\noindent \textbf{Case 4.1.1.} If there are no letters $i$ and $j$ in the bumping sequence, the unprimed $\{i,j\}$-subword of $\mathrm{rw}(\mathbf{T}^{(s)})$ is the same as in $\mathrm{rw}(\mathbf{T}^{(s-1)})$. The special $i$ (if there is one) remains in its position, and thus the statement of the lemma remains true.
\noindent \textbf{Case 4.1.2.} Now consider the case when there is a $j$ in the bumping sequence, but no $i$. Let that $j$ be bumped from the row $r$. Since there is no $i$ bumped, row $r$ does not contain any letters $i$. Thus, bumping $j$ from row $r$ to the end of row $r+1$ does not change the $\{i,j\}$-subword of $\mathrm{rw}(\mathbf{T}^{(s-1)})$, so the statement of the lemma remains true.
\noindent \textbf{Case 4.1.3.} Consider the case when there is an $i$ in the bumping sequence. Let that $i$ be bumped from the row $r$.
\noindent \textbf{Case 4.1.3.1.} If there is a (non-diagonal) $j$ in row $r+1$, it is bumped into row $r+2$ ($j'$ may have been bumped in the process). Note that in this case the $i$ bumped from row $r$ could not have been a special one. If there are $n \geqslant 0$ elements $i$ and $m \geqslant 1$ elements $j$ in row $r+1$, the part of the reading word $\mathrm{rw}(\mathbf{T}^{(s-1)})$ with $\ldots i^n j^m i \ldots$ changes to $\ldots j i^{n+1} j^{m-1} \ldots$ in $\mathrm{rw}(\mathbf{T}^{(s)})$. The bracketing relations remain the same, and if row $r+1$ contained a special $i$, it would remain there and would correspond to the rightmost unbracketed $i$ in $\mathrm{rw}(\mathbf{T}^{(s)})$.
\noindent \textbf{Case 4.1.3.2.} If there are no letters $j$ in row $r+1$, and $j'$ in row $r+1$ does not bump a $j$, the $\{i,j\}$-subword does not change and the statement of the lemma remains true.
\noindent \textbf{Case 4.1.3.3.} Now suppose there are no letters $j$ in row $r+1$ and $j'$ from row $r+1$ bumps a $j$ from another row. This can only happen if, before the $i$ was bumped, there was only one $i$ in row $r$ of $\mathbf{T}^{(s-1)}$, there is a $j'$ immediately below it, and there is a $j$ in the column to the right of $i$ and in row $r' \leqslant r$.
If $r'=r$, then after the insertion process, $i$ and $j$ are bumped from row $r$ to row $r+1$. Since there was only one $i$ in row $r$ and there are no letters $j$ in row $r+1$, the $\{i,j\}$-subword of $\mathrm{rw}(\mathbf{T}^{(s-1)})$ does not change and the statement of the lemma remains true.
Otherwise $r' < r$. Then there are no letters $i$ in row $r'$ and by assumption there is no letter $j$ in row $r+1$. Thus, moving $i$ to row $r+1$ and moving $j$ to the row $r'+1$ does not change the $\{i,j\}$-subword of $\mathrm{rw}(\mathbf{T}^{(s-1)})$ and the statement of the lemma remains true.
\noindent \textbf{Case 4.2.} Suppose $i$ or $j$ (or possibly both) are bumped from the diagonal in the insertion process.
\noindent \textbf{Case 4.2.1.} Consider the case when the insertion sequence ends with $\quad\cdots \rightarrow z \rightarrow j [j']$ with $z<i$ and possibly $ \rightarrow j$ right after it. Let the bumped diagonal $j$ be in column $c$. Then columns $1,2, \ldots, c$ of $\mathbf{T}^{(s-1)}$ could only contain elements $\leqslant z$, except for the $j$ on the diagonal. Thus, the bumping process just moves $j$ from the unprimed reading word to the primed reading word without changing the overall order of the $\{i,j\}$-subword.
\noindent \textbf{Case 4.2.2.} Consider the case when the insertion sequence ends with $\quad \cdots \rightarrow i' \rightarrow i \rightarrow j[j']$ and possibly $\rightarrow j$. Let the bumped diagonal $j$ be in row (and column) $r$. Note that $r$ must be the last row of $\mathbf{T}^{(s-1)}$. Then $i$ has to be bumped from row $r-1$ (and, say, column $c$) and $i'$ also has to be in row $r-1$ (moreover, it has to be the only $i'$ in column $c-1$). Also, since there are no letters $j'$ in column $c$ (otherwise it would be in row $r$, which is impossible), bumping $i'$ to column $c$ does not change the $\{i,j\}$-subword of $\mathrm{rw}(\mathbf{T}^{(s-1)})$. Note that after $i'$ moves to column $c$, there are no $i'$ or $j'$ in columns $1,\ldots, r$, and thus priming $j$ and moving it to column $r+1$ does not change the $\{i,j\}$-subword. If the last row $r$ contains $n$ elements $j$, the $\{i,j\}$-subword of $\mathbf{T}^{(s-1)}$ contains $\ldots j^n i \ldots$ and after the insertion it becomes $\ldots j i j^{n-1} \ldots$, where the left $j$ is from the primed subword. Thus, the number of bracketed letters $i$ does not change. Also, if we moved the special $i$ in the process, it could only have been the bumped $i'$. Its position in the reading word is unaffected.
\noindent \textbf{Case 4.2.3.} The case when the insertion sequence does not contain $i'$, does not bump $i$ from the diagonal, but contains $i$ and bumps $j$ from the diagonal is analogous to the previous case.
\noindent \textbf{Case 4.2.4.} Suppose both $i$ and $j$ are bumped from the diagonal. That could only be the case with diagonal $i$ bumped from row (and column) $r$, bumping another letter $i$ from the row $r$ and column $r+1$, and bumping $j$ from row (and column) $r+1$ (and possibly bumping $j$ to row $r+2$ at the end). Let the number of letters $i'$ in column $r+1$ be $n$ and let the number of letters $j$ in row $r+1$ be $m$.
\noindent \textbf{Case 4.2.4.1.} Let $m\geqslant 2$. Then the $\{i,j\}$-subword of $\mathrm{rw}(\mathbf{T}^{(s-1)})$ contains $\ldots i^n j^m ii \ldots$ and after the insertion it becomes $\ldots j i^{n+1} j i j^{m-2} \ldots$. The number of unbracketed letters $i$ stays the same. Since $m \geqslant 2$, the special $i$ of $\mathbf{T}^{(s-1)}$ could not have been involved in the bumping procedure. However, the special $i$ might have been the bottommost $i'$ in column $r+1$ of $\mathbf{T}^{(s-1)}$, and after the insertion the special $i$ would still be the bottommost $i'$ in column $r+1$ and would correspond to the rightmost unbracketed $i$ in $\mathrm{rw}(\mathbf{T}^{(s)})$: \begin{equation*} \young(\cdot\cdoti'\cdot,:ii\cdot,::jj) \quad \mapsto \quad \young(\cdot\cdoti'\cdot,:\cdoti'\cdot,::ij',:::j) \end{equation*}
\noindent \textbf{Case 4.2.4.2.} Let $m=1$. Then the $\{i,j\}$-subword of $\mathbf{T}^{(s-1)}$ contains $\ldots i^n j ii \ldots$ and after the insertion it becomes $\ldots j i^{n+1} i$. The number of unbracketed letters $i$ stays the same. If the special $i$ was in row $r$ and column $r+1$, then after the insertion it becomes a diagonal one, and it would still correspond to the rightmost unbracketed $i$ in $\mathrm{rw}(\mathbf{T}^{(s)})$.
\noindent \textbf{Case 4.2.5.} Suppose only $i$ is bumped from the diagonal (let that $i$ be on row and column $r$). Note that there cannot be an $i'$ in column $r$.
\noindent \textbf{Case 4.2.5.1.} Suppose $i$ from the diagonal bumps another $i$ from column $r+1$ and row $r$. In that case there are no letters $j$ in row $r+1$. No letters $j$ or $j'$ are affected and thus the $\{i,j\}$-subword of $\mathbf{T}^{(s)}$ does not change, and the special $i$ in $\mathbf{T}^{(s)}$ (if there is one) still corresponds to the rightmost unbracketed $i$ in $\mathrm{rw}(\mathbf{T}^{(s)})$.
\noindent \textbf{Case 4.2.5.2.} Suppose $i$ from the diagonal bumps $j'$ from column $r+1$ and row $r$. Note that $j'$ must be the only $j'$ in column $r+1$. Denote the number of letters $i'$ in column $r+1$ of $\mathbf{T}^{(s-1)}$ by $n$. If there is a $j$ in row $r+1$ of $\mathbf{T}^{(s-1)}$, then the $\{i,j\}$-subword of $\mathbf{T}^{(s-1)}$ contains $\ldots i^n jji \ldots$ and after the insertion it becomes $\ldots ji^{n+1}j \ldots$. If there is no $j$ in row $r+1$ of $\mathbf{T}^{(s-1)}$, then the $\{i,j\}$-subword of $\mathbf{T}^{(s-1)}$ contains $\ldots i^n ji \ldots$ and after the insertion it becomes $\ldots ji^{n+1} \ldots$. The number of unbracketed letters $i$ is unaffected. If the special $i$ of $\mathbf{T}^{(s-1)}$ was the bottommost $i'$ in column $r+1$ of $\mathbf{T}^{(s-1)}$, after the insertion the special $i$ is still the bottommost $i'$ in column $r+1$ and corresponds to the rightmost unbracketed $i$ in $\mathrm{rw}(\mathbf{T}^{(s)})$. \end{proof}
\begin{corollary} \label{corollary.f annihilate}
\begin{equation*}
f_i (\mathbf{b}) = \mathbf{0} \quad \text{if and only if} \quad f_i (\mathbf{T}) = \mathbf{0}.
\end{equation*} \end{corollary}
\subsection{Proof of Theorem~\ref{theorem.main2}} \label{section.main.proof} By Lemma~\ref{lemma.main}, the cell $x$ in the definition of the operator $f_i$ corresponds to the bold $i$ in the tableau $\mathbf{T}$. Furthermore, we know how the bold $i$ moves during the insertion procedure. We assume that the bold $i$ exists in both $\mathbf{b}$ and $\mathbf{T}$, meaning that $f_i(\mathbf{b}) \neq \mathbf{0}$ and $f_i(\mathbf{T}) \neq \mathbf{0}$ by Corollary~\ref{corollary.f annihilate}. We prove Theorem~\ref{theorem.main2} by induction on the length of the word $\mathbf{b}$.
\noindent \textbf{Base.} The base case consists of words $\mathbf{b}$ whose last letter is the bold $i$ (i.e. the rightmost unbracketed $i$). Let $\mathbf{b} = b_1 \ldots b_{h-1} b_h$ and $f_i(\mathbf{b}) = b_1 \ldots b_{h-1} b'_h$, where $b_h = i$ and $b'_h = j$. Denote the mixed insertion tableau of $b_1 \ldots b_{h-1}$ by $\mathbf{T}_0$, the insertion tableau of $b_1 \ldots b_{h-1} b_h$ by $\mathbf{T}$, and the insertion tableau of $b_1 \ldots b_{h-1} b'_h$ by $\mathbf{T}'$. Note that $\mathbf{T}_0$ does not have letters $j$ in the first row. If the first row of $\mathbf{T}_0$ ends with $\ldots j'$, then the first row of $\mathbf{T}$ ends with $\ldots \mathbf{i} j'$ and the first row of $\mathbf{T}'$ ends with $\ldots j' j$. If the first row of $\mathbf{T}_0$ does not contain $j'$, the first row of $\mathbf{T}$ ends with $\ldots \mathbf{i}$ and the first row of $\mathbf{T}'$ ends with $\ldots j$, and the cell $x_S$ is empty. In both cases $f_i(\mathbf{T}) = \mathbf{T}'$.
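The first of these two cases can be illustrated schematically (our own diagram, showing only the tail of the first row): if the first row of $\mathbf{T}_0$ ends with $j'$, then
\begin{equation*}
\young(\cdot\boldsymbol{i}j') \quad \mapsto \quad \young(\cdotj' j)
\end{equation*}
where the left tableau is the end of the first row of $\mathbf{T}$ and the right tableau is the end of the first row of $\mathbf{T}' = f_i(\mathbf{T})$.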
\noindent \textbf{Induction step.} Now, let $\mathbf{b} = b_1 \ldots b_h$ with operator $f_i$ acting on the letter $b_s$ in $\mathbf{b}$ with $s < h$. Denote the mixed insertion tableau of $b_1 \ldots b_{h-1}$ as $\mathbf{T}$ and the insertion tableau of $f_i(b_1 \ldots b_{h-1})$ as $\mathbf{T}'$. By the induction hypothesis, we know that $f_i(\mathbf{T}) = \mathbf{T}'$. We want to show that $f_i(\mathbf{T} \leftsquigarrow b_h) = \mathbf{T}' \leftsquigarrow b_h$. In Cases 1--3 below, we assume that the bold letter $i$ is unprimed. Since almost all results from the case with unprimed $i$ are transferable to the case with primed bold $i$ via the transposition of the tableau $\mathbf{T}$, we just need to cover the differences in Case 4.
\noindent \textbf{Case 1.} Suppose $\mathbf{T}$ falls under Case (1) of the rules for $f_i$: the bold $i$ is in the non-diagonal cell $x$ in row $r$ and column $c$ and the cell $x_E$ in the same row and column $c+1$ contains the entry $j'$. Consider the insertion path of $b_h$.
\noindent \textbf{Case 1.1.} If the insertion path of $b_h$ in $\mathbf{T}$ contains neither cell $x$ nor cell $x_E$, the insertion path of $b_h$ in $\mathbf{T}'$ also does not contain cells $x$ and $x_E$. Thus, $f_i(\mathbf{T} \leftsquigarrow b_h) = \mathbf{T}' \leftsquigarrow b_h$.
\noindent \textbf{Case 1.2.} Suppose that during the insertion of $b_h$ into $\mathbf{T}$, the bold $i$ is row-bumped by an unprimed element $d < i$ or is column-bumped by a primed element $d' \leqslant i'$. This could only happen if the bold $i$ is the unique $i$ in row $r$ of $\mathbf{T}$. During the insertion process, the bold $i$ is inserted into row $r+1$. Since there are no letters $i$ in row $r$ of $\mathbf{T}'$, inserting $b_h$ into $\mathbf{T}'$ inserts $d$ in cell $x$, bumps $j'$ to cell $x_E$, and bumps $j$ into row $r+1$. Thus we are in a situation similar to the induction base. It is easy to check that row $r+1$ does not contain any letters $j$ in $\mathbf{T}$. If it contains $j'$, this $j'$ is bumped back into row $r+1$. Similar to the induction base, $f_i(\mathbf{T} \leftsquigarrow b_h) = \mathbf{T}' \leftsquigarrow b_h$.
\noindent \textbf{Case 1.3.} Suppose that during the insertion of $b_h$ into $\mathbf{T}$, an unprimed $i$ is inserted into row $r$. Note that in this case, row $r$ in $\mathbf{T}$ must contain a $j$ (or else the $i$ from row $r$ would not be the rightmost unbracketed $i$ in $\mathrm{rw}(\mathbf{T})$). Thus inserting $i$ into row $r$ in $\mathbf{T}$ shifts the bold $i$ to column $c+1$, shifts $j'$ to column $c+2$ and bumps $j$ to row $r+1$. Inserting $i$ into row $r$ in $\mathbf{T}'$ shifts $j'$ to column $c+1$ with a $j$ to the right of it, and bumps $j$ into row $r+1$. Thus $f_i(\mathbf{T} \leftsquigarrow b_h) = \mathbf{T}' \leftsquigarrow b_h$.
\noindent \textbf{Case 1.4.} Suppose that during the insertion of $b_h$ into $\mathbf{T}$, the $j'$ in cell $x_E$ is column-bumped by a primed element $d'$ and the cell $x$ is unaffected. Note that in order for $\mathbf{T} \leftsquigarrow b_h$ to be a valid primed tableau, $i$ must be smaller than $d'$, and thus $d'$ could only be $j'$. On the other hand, $j'$ cannot be inserted into column $c+1$ of $\mathbf{T}'$ in order for $\mathbf{T}' \leftsquigarrow b_h$ to be a valid primed tableau. Thus this case is impossible.
\noindent \textbf{Case 2.} Suppose tableau $\mathbf{T}$ falls under Case (2a) of the crystal operator rules for $f_i$. This means that for a bold $i$ in cell $x$ (in row $r$ and column $c$) of tableau $\mathbf{T}$, the cell $x_E$ contains the entry $j$ or is empty and cell $x_S$ is empty. Tableau $\mathbf{T}'$ has all the same elements as $\mathbf{T}$, except for a $j$ in the cell $x$. We are interested in the case when inserting $b_h$ into either $\mathbf{T}$ or $\mathbf{T}'$ bumps the element from cell $x$.
\noindent \textbf{Case 2.1.} Suppose that the non-diagonal bold $i$ in $\mathbf{T}$ (in row $r$) is row-bumped by an unprimed element $d < i$ or column-bumped by a primed element $d' < j'$. Element $d$ (or $d'$) bumps the bold $i$ into row $r+1$ of $\mathbf{T}$, while in $\mathbf{T}'$ (since there are no letters $i$ in row $r$ of $\mathbf{T}'$) it bumps $j$ from cell $x$ into row $r+1$. Thus we are in the situation of the induction base and $f_i(\mathbf{T} \leftsquigarrow b_h) = \mathbf{T}' \leftsquigarrow b_h$.
\noindent \textbf{Case 2.2.} Suppose $x$ is a non-diagonal cell in row $r$, and during the insertion of $b_h$ into $\mathbf{T}$, an unprimed $i$ is inserted into the row $r$. In this case, row $r$ in $\mathbf{T}$ must contain a letter $j$. The insertion process shifts the bold $i$ one cell to the right in $\mathbf{T}$ and bumps a $j$ into row $r+1$, while in $\mathbf{T}'$ it just bumps $j$ into the row $r+1$. We end up in Case (2a) of the crystal operator rules for $f_i$ with bold $i$ in the cell $x_E$.
\noindent \textbf{Case 2.3.} Suppose that during the insertion of $b_h$ into $\mathbf{T}'$, the $j$ in the non-diagonal cell $x$ is column-bumped by a $j'$. This means that $j'$ was previously bumped from column $c-1$ and row $\geqslant r$. Thus the cell $x_{SW}$ (cell to the left of an empty $x_{S}$) is non-empty. Moreover, right before inserting $j'$ into the column $c$, the cell $x_{SW}$ contains an entry $< j'$. Inserting $j'$ into column $c$ of $\mathbf{T}$ just places $j'$ into the empty cell $x_S$. Inserting $j'$ into column $c$ of $\mathbf{T}'$ places $j'$ into $x$, and bumps $j$ into the empty cell $x_S$. Thus, we end up in Case (2c) of the crystal operator rules after the insertion of $b_h$ with $y = x_S$.
\noindent \textbf{Case 2.4.} Suppose that $x$ in $\mathbf{T}$ is a diagonal cell (in row $r$ and column $r$) and that it is row-bumped by an element $d<i$. Note that in this case there cannot be any letter $j$ in row $r+1$. Also, since $d$ is inserted into cell $x$, there cannot be any letters $i'$ in columns $1,\ldots, r$, and thus there cannot be any letters $j'$ in column $r+1$ (otherwise the $i$ in cell $x$ would not be bold). The bumped bold $i$ in tableau $\mathbf{T}$ is inserted as a primed bold $i'$ into the cell $z$ of column $r+1$.
\noindent \textbf{Case 2.4.1.} Suppose that there are no letters $i$ in column $r+1$ of $\mathbf{T}$. In this case, the cell $z$ in $\mathbf{T}$ either contains $j$ (and then that $j$ would be bumped to the next row) or is empty. Inserting $b_h$ into tableau $\mathbf{T}'$ bumps the diagonal $j$ in cell $x$, which is inserted as a $j'$ into cell $z$, possibly bumping $j$ after that. Thus, $\mathbf{T} \leftsquigarrow b_h$ falls under Case (2a) of the ``primed'' crystal rules with the bold $i'$ in cell $z$ (note that there cannot be any $j'$ in cell $(z^*)_E$ of the tableau $(\mathbf{T} \leftsquigarrow b_h)^*$). Since $\mathbf{T} \leftsquigarrow b_h$ and $\mathbf{T}' \leftsquigarrow b_h$ differ only by the cell $z$, $f_i(\mathbf{T} \leftsquigarrow b_h) = \mathbf{T}' \leftsquigarrow b_h$.
\noindent \textbf{Case 2.4.2.} Suppose that there is a letter $i$ in cell $z$ of column $r+1$ of $\mathbf{T}$. Note that cell $z$ can only be in rows $1, \ldots, r-1$ and thus $z_{SW}$ contains an element $< i$. Thus, during the insertion process of $b_h$ into $\mathbf{T}$, diagonal bold $i$ from cell $x$ is inserted as bold $i'$ into cell $z$, bumping the $i$ from cell $z$ into cell $z_S$ (possibly bumping $j$ afterwards). On the other hand, inserting $b_h$ into $\mathbf{T}'$ bumps the diagonal $j$ from cell $x$ into cell $z_S$ as a $j'$ (possibly bumping $j$ afterwards). Thus, $\mathbf{T} \leftsquigarrow b_h$ falls under Case (1) of the ``primed'' crystal rules with the bold $i'$ in cell $z$, and so $f_i(\mathbf{T} \leftsquigarrow b_h) = \mathbf{T}' \leftsquigarrow b_h$.
\noindent \textbf{Case 2.5.} Suppose that $x$ is a diagonal cell (in row $r$ and column $r$) and that during the insertion of $b_h$ into $\mathbf{T}$, an unprimed $i$ is inserted into row $r$. In this case, the entry in cell $x_E$ has to be $j$ and the diagonal cell $x_{ES}$ must be empty. Inserting $i$ into row $r$ of $\mathbf{T}$ bumps a $j$ from cell $x_E$ into cell $x_{ES}$. On the other hand, inserting $i$ into row $r$ of $\mathbf{T}'$ bumps a $j$ from the diagonal cell $x$, which in turn is inserted as a $j'$ into cell $x_E$, which bumps $j$ from cell $x_E$ into cell $x_{ES}$. Thus, $\mathbf{T} \leftsquigarrow b_h$ falls under Case (2b) of the crystal rules with bold $i$ in cell $x_E$ and $y= x_{ES}$, and so $f_i(\mathbf{T} \leftsquigarrow b_h) = \mathbf{T}' \leftsquigarrow b_h$.
\noindent \textbf{Case 3.} Suppose that $\mathbf{T}$ falls under Case (2b) or (2c) of the crystal operator rules. That means $x_E$ contains the entry $j$ or is empty and $x_S$ contains the entry $j'$ or $j$. There is a chain of letters $j'$ and $j$ in $\mathbf{T}$ starting from $x_S$ and ending on a box $y$. According to the induction hypothesis, $y$ is either on the diagonal and contains the entry $j$ or $y$ is not on the diagonal and contains the entry $j'$. The tableau $\mathbf{T}' = f_i (\mathbf{T})$ has $j'$ in cell $x$ and $j$ in cell $y$. We are interested in the case when inserting $b_h$ into $\mathbf{T}$ affects cell $x$ or affects some element of the chain. Let $r_x$ and $c_x$ be the row and the column index of cell $x$, and $r_y$, $c_y$ are defined accordingly. Note that during the insertion process, $j'$ cannot be inserted into columns $c_y,\ldots, c_x$ and $j$ cannot be inserted into rows $r_x +1,\ldots, r_y$, since otherwise $\mathbf{T} \leftsquigarrow b_h$ would not be a primed tableau.
\noindent \textbf{Case 3.1.} Suppose the bold $i$ in cell $x$ (of row $r_x$ and column $c_x$) of $\mathbf{T}$ is row-bumped by an unprimed element $d < i$ or column-bumped by a primed element $d' < i$. Note that in this case, bold $i$ in row $r_x$ is the only $i$ in this row, so row $r_x+1$ cannot contain any letter $j$. Therefore the entry in cell $x_S$ must be $j'$. In tableau $\mathbf{T}$, the bumped bold $i$ is inserted into cell $x_S$ and $j'$ is bumped from cell $x_S$ into column $c_x+1$, reducing the chain of letters $j'$ and $j$ by one. Notice that since $x_E$ either contains a $j$ or is empty, $j'$ cannot be bumped into a position to the right of $x_S$, so Case (1) of the crystal rules for $\mathbf{T} \leftsquigarrow b_h$ cannot occur. As for $\mathbf{T}'$, inserting $d$ into row $r_x$ (or inserting $d'$ into column $c_x$) just bumps $j'$ into column $c_x+1$, thus reducing the length of the chain by one in that tableau as well. Note that in the case when the length of the chain is one (i.e. $y=x_S$), we would end up in Case (2a) of the crystal rules after the insertion. Otherwise, we are still in Case (2b) or (2c). In both cases, $f_i(\mathbf{T} \leftsquigarrow b_h) = \mathbf{T}' \leftsquigarrow b_h$.
\noindent \textbf{Case 3.2.} Suppose a letter $i$ is inserted into the same row as $x$ (in row $r_x$). In this case, $x_E$ must contain a $j$ (otherwise the bold $i$ would not be in cell $x$). After inserting $b_h$ into $\mathbf{T}$, the bold $i$ moves to cell $x_E$ (note that there cannot be a $j'$ to the right of $x_E$) and $j$ from $x_E$ is bumped to cell $x_{ES}$, thus the chain now starts at $x_{ES}$. As for $\mathbf{T}'$, inserting $i$ into the row $r_x$ moves $j'$ from cell $x$ to the cell $x_E$ and moves $j$ from cell $x_E$ to cell $x_{ES}$. Thus, $f_i(\mathbf{T} \leftsquigarrow b_h) = \mathbf{T}' \leftsquigarrow b_h$.
\noindent \textbf{Case 3.3.} Consider the chain of letters $j$ and $j'$ in $\mathbf{T}$. Suppose an element of the chain $z \neq x,y$ is row-bumped by an element $d < j$ or is column-bumped by an element $d'<j'$. The bumped element $z$ (of row $r_z$ and column $c_z$) must be a ``corner'' element of the chain, i.e. in $\mathbf{T}$ the entry of $z$ must be $j'$, the entry of $z_E$ must be $j$, and the entry of $z_S$ must be either $j$ or $j'$. Therefore, inserting $b_h$ into $\mathbf{T}$ bumps $j'$ from box $z$ to box $z_E$ and bumps $j$ from box $z_E$ to box $z_{ES}$, and inserting $b_h$ into $\mathbf{T}'$ has exactly the same effect. Thus, there is still a chain of letters $j$ and $j'$ from $x_S$ to $y$ in $\mathbf{T}$ and $\mathbf{T}'$, and $f_i(\mathbf{T} \leftsquigarrow b_h) = \mathbf{T}' \leftsquigarrow b_h$.
\noindent \textbf{Case 3.4.} Suppose $\mathbf{T}$ falls under Case (2c) of the crystal rules (i.e. $y$ is not a diagonal cell) and during the insertion of $b_h$ into $\mathbf{T}$, $j'$ in cell $y$ is row-bumped (resp. column-bumped) by an element $d<j'$ (resp. $d'<j'$). Since $y$ is the end of the chain of letters $j$ and $j'$, $y_S$ must be empty. Also, since it is bumped, the entry in $y_E$ must be $j$. Thus, inserting $b_h$ into $\mathbf{T}$ bumps $j'$ from cell $y$ to cell $y_E$ and bumps $j$ from cell $y_E$ into row $r_y+1$ and column $\leqslant c_y$. On the other hand, inserting $b_h$ into $\mathbf{T}'$ bumps $j$ from cell $y$ into row $r_y+1$ and column $\leqslant c_y$. The chain of letters $j$ and $j'$ now ends at $y_E$ and $f_i(\mathbf{T} \leftsquigarrow b_h) = \mathbf{T}' \leftsquigarrow b_h$.
\noindent \textbf{Case 3.5.} Suppose $\mathbf{T}$ falls under Case (2b) of the crystal rules (i.e. $y$ with entry $j$ is a diagonal cell) and during the insertion of $b_h$ into $\mathbf{T}$, $j$ in cell $y$ is row-bumped by an element $d < j$. In this case, the cell $y_E$ must contain the entry $j$. Thus, inserting $b_h$ into $\mathbf{T}$ bumps $j$ from cell $y$ (making it $j'$) to cell $y_E$ and bumps $j$ from cell $y_E$ to the diagonal cell $y_{ES}$. On the other hand, inserting $b_h$ into $\mathbf{T}'$ has exactly the same effect. The chain of letters $j$ and $j'$ now ends at the diagonal cell $y_{ES}$, so $\mathbf{T}\leftsquigarrow b_h$ falls under Case (2b) of the crystal rules and $f_i(\mathbf{T} \leftsquigarrow b_h) = \mathbf{T}' \leftsquigarrow b_h$.
\noindent \textbf{Case 4.} Suppose the bold $i$ in tableau $\mathbf{T}$ is a primed $i$. We use the transposition operation on $\mathbf{T}$, and the resulting tableau $\mathbf{T}^*$ falls under one of the cases of the crystal operator rules. When $b_h$ is inserted into $\mathbf{T}$, we can easily translate the insertion process to the transposed tableau $\mathbf{T}^*$ so that $[\mathbf{T}^* \leftsquigarrow (b_h+1)'] = [\mathbf{T} \leftsquigarrow b_h]^*$: the letter $(b_h+1)'$ is inserted into the first column of $\mathbf{T}^*$, and all other insertion rules stay exactly the same, with one exception -- when the diagonal element $d'$ is column-bumped from the diagonal cell of $\mathbf{T}^*$, the element $d'$ becomes $(d-1)$ and is inserted into the row below. Notice that the primed reading word of $\mathbf{T}$ becomes an unprimed reading word of $\mathbf{T}^*$. Thus, the bold $i$ in tableau $\mathbf{T}^*$ corresponds to the rightmost unbracketed $i$ in the \textit{unprimed} reading word of $\mathbf{T}^*$. Therefore, everything we have deduced in Cases 1--3 from the fact that bold $i$ is in the cell $x$ will remain valid here. Given $f_i(\mathbf{T}^*) = \mathbf{T}'^*$, we want to make sure that $f_i(\mathbf{T}^* \leftsquigarrow (b_h+1)') = \mathbf{T}'^* \leftsquigarrow (b_h+1)'$.
The insertion process of $(b_h+1)'$ into $\mathbf{T}^*$ falls under one of the cases above and the proof of $f_i(\mathbf{T}^* \leftsquigarrow (b_h+1)') = \mathbf{T}'^* \leftsquigarrow (b_h+1)'$ is exactly the same as the proof in those cases. We only need to check the cases in which the diagonal element might be affected differently in the insertion process of $(b_h+1)'$ into $\mathbf{T}^*$ compared to the insertion process of $(b_h+1)'$ into $\mathbf{T}'^*$. Fortunately, this never happens: in Case 1 neither $x$ nor $x_E$ can be a diagonal cell; in Cases 2 and 3, $x$ cannot be on the diagonal, and if $x_E$ is on the diagonal, it must be empty. Following the proof of those cases, $f_i(\mathbf{T}^* \leftsquigarrow (b_h+1)') = \mathbf{T}'^* \leftsquigarrow (b_h+1)'$.
\section{Proof of Theorem~\ref{theorem.main3}} \label{section.proof main3}
This appendix provides the proof of Theorem~\ref{theorem.main3}. Throughout this section we set $j=i+1$. We begin with two preliminary lemmas.
\subsection{Preliminaries}
\begin{lemma} \label{lemma.chains}
Consider a shifted tableau $\mathbf{T}$.
\begin{enumerate}
\item Suppose tableau $\mathbf{T}$ falls under Case (2c) of the $f_i$ crystal operator rules, that is, there is a chain of
letters $j$ and $j'$ starting from the bold $i$ in cell $x$ and ending at $j'$ in cell $x_H$.
Then for any cell $z$ of the chain containing $j$, the cell $z_{NW}$ contains $i$.
\item Suppose tableau $\mathbf{T}$ falls under Case (2b) of the $f_i$ crystal operator rules, that is, there is a chain of
letters $j$ and $j'$ starting from the bold $i$ in cell $x$ and ending at $j$ in the diagonal cell $x_H$.
Then for any cell $z$ of the chain containing $j$ or $j'$, the cell $z_{NW}$ contains $i$ or $i'$ respectively.
\end{enumerate} \end{lemma} \Yboxdim 13pt \begin{equation*} \young(\cdot\cdot\cdot\cdot\cdot\cdot\cdot\boldsymbol{i},:\cdot\cdot\cdot iii{j'},::\cdot\cdot{j'} jjj,:::\cdot{j'}) \qquad \young(\cdot\cdot\cdot\cdot{i'}\boldsymbol{i},:\cdot{i'} ii{j'},::i{j'} jj,:::j) \end{equation*}
\begin{proof} The proof of the first part is based on the observation that every $j$ in the chain must be bracketed with some $i$ in the reading word $\mathrm{rw}(\mathbf{T})$. Moreover, if the bold $i$ is located in row $r_x$ and rows $r_x, r_x+1,\ldots, r_z$ contain $n$ letters $j$, then rows $r_x, r_x +1,\ldots, r_z-1$ must contain exactly $n$ non-bold letters $i$. To prove that these elements $i$ must be located in the cells to the North-West of the cells containing $j$, we proceed by induction on $n$. When we consider the next cell $z$ containing $j$ in the chain that must be bracketed, notice that the columns $c_z, c_z+1,\ldots, c_x$ already contain an $i$, and thus we must put the next $i$ in column $c_z -1$; the only row in which it can be placed is $r_z-1$. Thus, $z_{NW}$ must contain an $i$.
This line of logic also works for the second part of the lemma. We can show that for any cell $z$ of the chain containing $j$, the cell $z_{NW}$ must contain an $i$. As for cells $z$ containing $j'$, we can again use the fact that the corresponding letters $j$ in the primed reading word of $\mathbf{T}$ must be bracketed. Notice that these letters $j'$ cannot be bracketed with unprimed letters $i$, since all unprimed letters $i$ are already bracketed with unprimed letters $j$. Thus, $j'$ must be bracketed with some $i'$ from a column to its left. Let columns $1,2, \ldots, c_z$ contain $m$ elements $j'$. Using the same induction argument as in the previous case, we can show that $z_{NW}$ must contain $i'$. \end{proof}
Next we need to determine how the cell $y$ appearing in the raising crystal operator rules for $e_i$ is related to the cells appearing in the lowering operator rules for $f_i$.
\begin{lemma} \label{lemma.y} Consider a pair of tableaux $\mathbf{T}$ and $\mathbf{T}' = f_i(\mathbf{T})$.
\begin{enumerate}
\item If tableau $\mathbf{T}$ (in the case when the bold $i$ in $\mathbf{T}$ is unprimed) or $\mathbf{T}^*$
(if the bold $i$ is primed) falls under Case (1) of the $f_i$ crystal operator rules, then cell $y$ of
the $e_i$ crystal operator rules is cell $x_E$ of $\mathbf{T}'$ or $(\mathbf{T}')^*$, respectively.
\item If tableau $\mathbf{T}$ (in the case when the bold $i$ in $\mathbf{T}$ is unprimed) or $\mathbf{T}^*$
(if the bold $i$ is primed) falls under Case (2a) of the $f_i$ crystal operator rules, then cell $y$ of
the $e_i$ crystal operator rules is cell $x$ of $\mathbf{T}'$ or $(\mathbf{T}')^*$, respectively.
\item If tableau $\mathbf{T}$ falls under Case (2b) of the $f_i$ crystal operator rules, then cell $y$ of
the $e_i$ crystal operator rules is cell $x^*$ of $(\mathbf{T}')^*$.
\item If tableau $\mathbf{T}$ (in the case when the bold $i$ in $\mathbf{T}$ is unprimed) or $\mathbf{T}^*$
(if the bold $i$ is primed) falls under Case (2c) of the $f_i$ crystal operator rules, then cell $y$ of
the $e_i$ crystal operator rules is cell $x_H$ of $\mathbf{T}'$ or $(\mathbf{T}')^*$, respectively.
\end{enumerate} \end{lemma}
\begin{proof} In all the cases above, we need to compare reading words $\mathrm{rw}(\mathbf{T})$ and $\mathrm{rw}(\mathbf{T}')$. Since $f_i$ affects at most two boxes of $\mathbf{T}$, it is easy to track how the reading word $\mathrm{rw}(\mathbf{T})$ changes after applying $f_i$. We want to check where the bold $j$ under $e_i$ ends up in $\mathrm{rw}(\mathbf{T}')$ and in $\mathbf{T}'$, which allows us to determine the cell $y$ of the $e_i$ crystal operator rules.
\noindent \textbf{Case 1.1.} Suppose $\mathbf{T}$ falls under Case (1) of the $f_i$ crystal operator rules, that is, the bold $i$ in cell $x$ is to the left of $j'$ in cell $x_E$. Furthermore, $f_i$ acts on $\mathbf{T}$ by changing the entry in $x$ to $j'$ and by changing the entry in $x_E$ to $j$. In the reading word $\mathrm{rw}(\mathbf{T})$, this corresponds to moving the $j$ corresponding to $x_E$ to the left and changing the bold $i$ (the rightmost unbracketed $i$) corresponding to cell $x$ to $j$ (that then corresponds to $x_E$). Moving a bracketed $j$ in $\mathrm{rw}(\mathbf{T})$ to the left does not change the $\{i,j\}$ bracketing, and thus the $j$ corresponding to $x_E$ in $\mathrm{rw}(\mathbf{T}')$ is still the leftmost unbracketed $j$. Therefore, this $j$ is the bold $j$ of $\mathbf{T}'$ and is located in cell $x_E$.
\noindent \textbf{Case 1.2.} Suppose the bold $i$ in $\mathbf{T}$ is primed and $\mathbf{T}^*$ falls under Case (1) of the $f_i$ crystal operator rules. After applying the lowering crystal operator rules to $\mathbf{T}^*$ and conjugating back, the bold primed $i$ in cell $x^*$ of $\mathbf{T}$ changes to an unprimed $i$, and the unprimed $i$ in cell $(x^*)_S$ of $\mathbf{T}$ changes to $j'$. In terms of the reading word of $\mathbf{T}$, this means moving the bracketed $i$ (in the unprimed reading word) corresponding to $(x^*)_S$ to the left so that it corresponds to $x^*$, and then changing the bold $i$ (in the primed reading word) corresponding to $x^*$ into the letter $j$ corresponding to $(x^*)_S$. The first operation does not change the bracketing relations between $i$ and $j$, and thus the leftmost unbracketed $j$ in $\mathrm{rw}(\mathbf{T}')$ corresponds to $(x^*)_S$. Hence the bold unprimed $j$ is in cell $x_E$ of $(\mathbf{T}')^*$.
\noindent \textbf{Case 2.1.} If $\mathbf{T}$ falls under Case (2a) of the $f_i$ crystal operator rules, $f_i$ just changes the entry in $x$ from $i$ to $j$. The rightmost unbracketed $i$ in the reading word of $\mathbf{T}$ changes to the leftmost unbracketed $j$ in $\mathrm{rw}(\mathbf{T}')$. Thus, the bold $j$ in $\mathrm{rw}(\mathbf{T}')$ corresponds to cell $x$.
\noindent \textbf{Case 2.2.} The case when $\mathbf{T}^*$ falls under Case (2a) of the $f_i$ crystal operator rules is the same as the previous case.
\noindent \textbf{Case 3.} Suppose $\mathbf{T}$ falls under Case (2b) of the $f_i$ crystal operator rules. Then there is a chain starting from cell $x$ (of row $r_x$ and column $c_x$) and ending at the diagonal cell $z$ (of row and column $r_z$) consisting of elements $j$ and $j'$. Applying $f_i$ to $\mathbf{T}$ changes the entry in $x$ from $i$ to $j'$. In $\mathrm{rw}(\mathbf{T})$ this implies moving the bold $i$ in the unprimed reading word to the left through elements $i$ and $j$ corresponding to rows $r_x, r_x +1,\ldots, r_z$, then through elements $i$ and $j$ in the primed reading word corresponding to columns $c_z-1, \ldots, c_x$, and then changing that $i$ to $j$ which corresponds to cell $x$. But according to Lemma~\ref{lemma.chains}, the letters $i$ and $j$ in these rows and columns are all bracketed with each other, since for every $j$ or $j'$ in the chain there is a corresponding $i$ or $i'$ in the North-Western cell. (Notice that there cannot be any other letter $j$ or $j'$ outside of the chain in rows $r_x +1,\ldots, r_z$ and in columns $c_z-1, \ldots, c_x$.) Thus, moving the bold $i$ to the left in $\mathrm{rw}(\mathbf{T})$ does not change the bracketing relations. Changing it to $j$ makes it the leftmost unbracketed $j$ in $\mathrm{rw}(\mathbf{T}')$. Therefore, the bold $j$ in $\mathrm{rw}(\mathbf{T}')$ corresponds to the primed $j$ in cell $x$ of $\mathbf{T}'$, and the cell $y$ of the $e_i$ crystal operator rules is thus cell $x^*$ in $(\mathbf{T}')^*$.
\noindent \textbf{Case 4.1.} Suppose $\mathbf{T}$ falls under Case (2c) of the $f_i$ crystal operator rules. There is a chain starting from cell $x$ (in row $r_x$ and column $c_x$) and ending at cell $x_H$ (in row $r_H$ and column $c_H$) consisting of elements $j$ and $j'$. Applying $f_i$ to $\mathbf{T}$ changes the entry in $x$ from $i$ to $j'$ and changes the entry in $x_H$ from $j'$ to $j$. Moving $j'$ from cell $x_H$ to cell $x$ moves the corresponding bracketed $j$ in the reading word $\mathrm{rw}(\mathbf{T})$ to the left, and thus does not change the $\{i,j\}$ bracketing relations in $\mathrm{rw}(\mathbf{T}')$. On the other hand, moving the bold $i$ from cell $x$ to cell $x_H$ and then changing it to $j$ moves the bold $i$ in $\mathrm{rw}(\mathbf{T})$ to the right through elements $i$ and $j$ corresponding to rows $r_x, r_x +1,\ldots, r_H$, and then changes it to $j$. Note that according to Lemma~\ref{lemma.chains}, each $j$ in rows $r_x+1, r_x +2,\ldots, r_H$ has a corresponding $i$ from rows $r_x, r_x +1,\ldots, r_H - 1$ that it is bracketed with, and vice versa. Thus, moving the bold $i$ to the position corresponding to $x_H$ does not change the fact that it is the rightmost unbracketed $i$ in $\mathrm{rw}(\mathbf{T})$. Thus, the bold $j$ in $\mathrm{rw}(\mathbf{T}')$ corresponds to the unprimed $j$ in cell $x_H$ of $\mathbf{T}'$.
\noindent \textbf{Case 4.2.} Suppose $\mathbf{T}$ has a primed bold $i$ and $\mathbf{T}^*$ falls under Case (2c) of the $f_i$ crystal operator rules. This means that there is a chain (expanding in North and East directions) in $\mathbf{T}$ starting from $i'$ in cell $x^*$ and ending in cell $x_H^*$ with entry $i$ consisting of elements $i$ and $j'$. The crystal operator $f_i$ changes the entry in cell $x^*$ from $i'$ to $i$ and changes the entry in $x_H^*$ from $i$ to $j'$. For the reading word $\mathrm{rw}(\mathbf{T})$ this means moving the bracketed $i$ in the unprimed reading word to the right (which does not change the bracketing relations) and moving the bold $i$ in the primed reading word through letters $i$ and $j$ corresponding to columns $c_x, c_x +1 ,\ldots, c_H$, which are bracketed with each other according to Lemma~\ref{lemma.chains}. Thus, after the bold $i$ is changed to $j$, it becomes the leftmost unbracketed $j$ in $\mathrm{rw}(\mathbf{T}')$. Hence the bold primed $j$ in $\mathbf{T}'$ corresponds to cell $x_H^*$. Therefore $y$ from the $e_i$ crystal operator rules is cell $x_H$ of $(\mathbf{T}')^*$. \end{proof}
\subsection{Proof of Theorem~\ref{theorem.main3}} Let $\mathbf{T}' = f_i(\mathbf{T})$.
\noindent \textbf{Case 1.} If $\mathbf{T}$ (or $\mathbf{T}^*$) falls under Case (1) of the $f_i$ crystal operator rules, then according to Lemma~\ref{lemma.y}, $e_i$ acts on $\mathbf{T}'$ (or on $(\mathbf{T}')^*$) by changing the entry in cell $y_W = x$ back to $i$ and changing the entry in $y = x_E$ back to $j'$. Thus, the statement of the theorem is true.
\noindent \textbf{Case 2.} If $\mathbf{T}$ (or $\mathbf{T}^*$) falls under Case (2a) of the $f_i$ crystal operator rules, then according to Lemma~\ref{lemma.y}, $e_i$ acts on $\mathbf{T}'$ (or on $(\mathbf{T}')^*$) by changing the entry in the cell $y = x$ back to $i$. Thus, the statement of the theorem is true.
\noindent \textbf{Case 3.} If $\mathbf{T}$ falls under Case (2b) of the $f_i$ crystal operator rules, then according to Lemma~\ref{lemma.y}, $e_i$ acts on cell $y=x^*$ of $(\mathbf{T}')^*$. Note that according to Lemma~\ref{lemma.chains}, there is a maximal chain of letters $i$ and $j'$ in $(\mathbf{T}')^*$ starting at $y$ and ending at a diagonal cell $y_T$. Thus, $e_i$ changes the entry in cell $y=x^*$ in $(\mathbf{T}')^*$ from $j$ to $j'$, so the entry in cell $x$ in $\mathbf{T}'$ goes back from $j'$ to $i$. Thus, the statement of the theorem is true.
\noindent \textbf{Case 4.} If $\mathbf{T}$ (or $\mathbf{T}^*$) falls under Case (2c) of the $f_i$ crystal operator rules, then according to Lemma~\ref{lemma.y}, $e_i$ acts on cell $y=x_H$ of $\mathbf{T}'$ (or of $(\mathbf{T}')^*$). Note that according to Lemma~\ref{lemma.chains}, there is a maximal (since $c(x_E) \neq j'$ and $c(x_E) \neq i$) chain of letters $i$ and $j'$ in $\mathbf{T}'$ (or $(\mathbf{T}')^*$) starting at $y$ and ending at cell $y_T = x$. Thus, $e_i$ changes the entry in cell $y=x_H$ in $(\mathbf{T}')^*$ from $j$ back to $j'$ and changes the entry in $y_T = x$ from $j'$ back to $i$. Thus, the statement of the theorem is true.
\end{document} |
\begin{document}
\title {Reflected BSDEs with monotone generator} \author {Tomasz Klimsiak} \date{} \maketitle \begin{abstract} We give a necessary and sufficient condition for existence and uniqueness of $\mathbb{L}^{p}$-solutions of reflected BSDEs with continuous barrier, generator monotone with respect to $y$ and Lipschitz continuous with respect to $z$, and with data in $\mathbb{L}^{p}$, $p\ge 1$. We also prove that the solutions may be approximated by the penalization method. \end{abstract}
\footnotetext{{\em Mathematics Subject Classifications (2010):} Primary 60H20; Secondary 60F25.}
\footnotetext{{\em Key words and phrases:} Reflected backward stochastic differential equation, monotone generator, $\mathbb{L}^{p}$-solutions.}
\footnotetext{Research supported by the Polish Minister of Science and Higher Education under Grant N N201 372 436.}
\nsubsection{Introduction}
Let $B$ be a standard $d$-dimensional Brownian motion defined on some probability space $(\Omega,{\mathcal{F}},P)$ and let $\{{\mathcal{F}}_t\}$ denote the augmentation of the natural filtration generated by $B$. In the present paper we study the problem of existence, uniqueness and approximation of ${\mathbb L}^p$-solutions of reflected backward stochastic differential equations (RBSDEs for short) with monotone generator of the form \begin{equation} \label{eq1.1} \left\{ \begin{array}{l} Y_{t}=\xi+{\int_{t}^{T}} f(s,Y_{s},Z_{s})\,ds -{\int_{t}^{T}} dK_{s} -{\int_{t}^{T}} Z_{s}\,dB_{s},\quad t\in [0,T],\\ Y_{t}\ge L_{t},\quad t\in [0,T], \\ K\mbox{ is continuous, increasing, }K_{0}=0,\,\int_{0}^{T} (Y_{t}-L_{t})\,dK_{t}=0. \end{array} \right. \end{equation} Here $\xi$ is an ${\mathcal{F}}_T$-measurable random variable called the terminal condition, $f:[0,T]\times\Omega\times{\mathbb R}\times{\mathbb R}^d\rightarrow{\mathbb R}$ is the generator (or coefficient) of the equation and an $\{{\mathcal{F}}_t\}$-adapted continuous process $L=\{L_t,t\in[0,T]\}$ such that $L_T\le\xi$ $P$-a.s. is called the obstacle (or barrier). A solution of (\ref{eq1.1}) is a triple $(Y,Z,K)$ of $\{{\mathcal{F}}_t\}$-progressively measurable processes having some integrability properties depending on the assumptions imposed on the data $\xi,f,L$ and satisfying (\ref{eq1.1}) $P$-a.s.
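For orientation, note that in the special case $f\equiv 0$ (and with square-integrable data, as in \cite{EKPPQ}) the first component of the solution of (\ref{eq1.1}) reduces to the classical Snell envelope of the obstacle, i.e.
\[
Y_{t}=\mathop{\mathrm{ess\,sup}}_{\tau\in\mathcal{T}_{t}} E\big(L_{\tau}\mathbf{1}_{\{\tau<T\}}+\xi\mathbf{1}_{\{\tau=T\}}\,|\,{\mathcal{F}}_{t}\big),\quad t\in[0,T],
\]
where $\mathcal{T}_{t}$ denotes the set of all stopping times with values in $[t,T]$. Thus $Y$ is the value process of an optimal stopping problem, which explains the role of the minimality condition $\int_{0}^{T}(Y_{t}-L_{t})\,dK_{t}=0$.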
Equations of the form (\ref{eq1.1}) were introduced in El Karoui et al. \cite{EKPPQ}. At present it is widely recognized that they provide a useful and efficient tool for studying problems in different mathematical fields, such as mathematical finance, stochastic control and game theory, partial differential equations and others (see, e.g., \cite{CK,EKPPQ,EPQ,H,Kl2}).
In \cite{EKPPQ} existence and uniqueness of square-integrable solutions of (\ref{eq1.1}) are proved under the assumption that
$\xi$, $\int^T_0|f(t,0,0)|\,dt$ and $L^*_T=\sup_{t\le T}|L_t|$ are square-integrable, $f$ satisfies the linear growth condition and is Lipschitz continuous with respect to both variables $y$ and $z$. These assumptions are too strong for many interesting applications. Therefore many attempts have been made to prove existence and uniqueness of solutions of RBSDEs under less restrictive assumptions on the data. Roughly speaking, one can distinguish two types of results here: for RBSDEs with less regular barriers (see, e.g., \cite{PengXu}) and for equations with continuous barriers whose generators or terminal conditions satisfy weaker assumptions than in \cite{EKPPQ}. We are interested in the second direction of investigation of (\ref{eq1.1}).
In the paper we consider $\mathbb{L}^{p}$-integrable data with $p\ge 1$ and we assume that the generator is continuous and monotone in $y$ and Lipschitz continuous with respect to $z$. Assumptions of that type were considered in \cite{A,HP,LMX,RS} but it is worth mentioning that the case where the generator is monotone and at the same time the data are $\mathbb{L}^{p}$-integrable for some $p\in [1,2)$ was considered previously only in \cite{A,RS} (to be exact, in \cite{A} the author considers the case $p\in (1,2)$ but for generalized RBSDEs). Let us also mention that in the case $p=2$ existence and uniqueness results are known for equations with generators satisfying even weaker regularity conditions. For instance, in \cite{C} continuous generators satisfying the linear growth conditions are considered, in \cite{ZZ} it is assumed that the generator is left-Lipschitz continuous and possibly discontinuous in $y$, and in \cite{K} equations with generators satisfying the superlinear growth condition with respect to $y$, the quadratic growth condition with respect to $z$ and with data ensuring boundedness of the first component $Y$ are considered. In all these papers except for \cite{RS} the authors consider the so-called general growth condition which says that \begin{align}\label{i4}
|f(t,y,0)|\le |f(t,0,0)|+\varphi(|y|),\quad t\in[0,T],\,y\in\mathbb{R}, \end{align} where $\varphi:\mathbb{R}^{+}\rightarrow \mathbb{R}^{+}$ is a continuous increasing function or a continuous function which is bounded on bounded subsets of $\mathbb{R}$. In \cite{RS} a condition weaker than (\ref{i4}), of the form \begin{align}\label{i5}
\forall_{r>0}\quad \sup_{|y|\le r}|f(\cdot,y,0)-f(\cdot,0,0)|\in \mathbb{L}^{1}(0,T), \end{align} is assumed. Condition (\ref{i5}) seems to be the best possible growth condition on $f$ with respect to $y$. It was used earlier in the paper \cite{BDHPS} devoted to ${\mathbb L}^p$-solutions of usual (non-reflected) BSDEs with monotone generators. A similar condition is widely used in the theory of partial differential equations (see \cite{Betal.} and the references given there). Let us point out, however, that in contrast to the case of usual BSDEs with monotone generators, in general assumption (\ref{i4}) (or
(\ref{i5})) together with $\mathbb{L}^{p}$-integrability of the data (integrability of $\xi$, $L^*_T$, $\int^T_0|f(t,0,0)|\,dt$ in our case) does not guarantee existence of $\mathbb{L}^{p}$-integrable solutions of (\ref{eq1.1}). For existence, an additional assumption relating the growth of $f$ to that of the barrier is required. In \cite{A,LMX} existence of solutions is proved under the assumption that
$E|\varphi(\sup_{t\le T}e^{\mu t} L^{+}_{t})|^2<+\infty$, where $\varphi$ is the function of condition (\ref{i4}) and $\mu$ is the monotonicity coefficient of $f$. In \cite{RS} it is shown that it suffices to assume that \begin{align}\label{i7}
E\Big(\int_{0}^{T}|f(t,\sup_{s\le t}
L^{+}_{s},0)|\,dt\Big)^{p}<+\infty. \end{align}
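To see that (\ref{i5}) is indeed weaker than (\ref{i4}), consider the following simple example (ours, not taken from the papers cited above): let
\[
f(t,y,z)=-\frac{y^{3}}{\sqrt{t}},\quad t\in(0,T].
\]
Then $\sup_{|y|\le r}|f(t,y,0)-f(t,0,0)|=r^{3}t^{-1/2}$ belongs to $\mathbb{L}^{1}(0,T)$ for every $r>0$, so (\ref{i5}) holds (and $f$ is monotone in $y$ with $\mu=0$), whereas (\ref{i4}) fails for every choice of $\varphi$, since $|f(t,y,0)|$ is unbounded in $t$ for each fixed $y\neq 0$.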
Condition (\ref{i7}) is still not the best possible. In our main result of the paper we give a necessary and sufficient condition for existence and uniqueness of an $\mathbb{L}^{p}$-integrable solution of RBSDE (\ref{eq1.1}) under the assumptions that the data are ${\mathbb L}^p$-integrable, $f$ is monotone in $y$ and Lipschitz continuous in $z$, and (\ref{i5}) is satisfied. Moreover, our condition is not only weaker than (\ref{i7}) but also much easier to check than (\ref{i7}) in the case, very important in applications, of Markov type RBSDEs with obstacles of the form $L=h(\cdot,X)$, where $h:[0,T]\times{\mathbb R}^d\rightarrow{\mathbb R}$ is a measurable function and $X$ is a Hunt process associated with some Markov semigroup. In the case of Markov RBSDEs, which appear for instance in applications to variational problems for PDEs (see, e.g., \cite{EKPPQ,Kl2}), our condition can be formulated in terms of $f,h$ only. We prove the main result for $p\ge1$. Moreover, we show that for $p\ge1$ a unique solution of RBSDE (\ref{eq1.1}) can be approximated via penalization. The last result strengthens the corresponding result in \cite{RS}, proved there in the case $p>1$ for general generators and in the case $p=1$ for generators not depending on $z$.
In the last part of the paper we study (\ref{eq1.1}) in the case where $\xi$, $L^{+,*}$, $\int^T_0|f(t,0,0)|\,dt$ are ${\mathbb L}^p$-integrable for some $p\ge 1$ but our weaker form of (\ref{i7}) is not satisfied. We have already mentioned that in this case there are no ${\mathbb L}^p$-integrable solutions of (\ref{eq1.1}). We show that there still exist solutions of (\ref{eq1.1}), but with weaker regularity properties.
The paper is organized as follows. Section \ref{sec2} contains notation and main hypotheses used in the paper. In Section \ref{sec3} we show basic a priori estimates for solutions of BSDEs. In Section \ref{sec4} we prove comparison results as well as some useful results on c\`adl\`ag regularity of monotone limits of semimartingales and uniform estimates of monotone sequences. In Section \ref{sec5} we prove our main existence and uniqueness result for $p>1$, and in Section \ref{sec6} for $p=1$. Finally, in Section \ref{sec7} we deal with nonintegrable solutions.
\nsubsection{Notation and hypotheses} \label{sec2}
Let $B=\{B_{t}, t\ge 0\}$ be a standard $d$-dimensional Brownian motion defined on some complete probability space $(\Omega,{\mathcal{F}},P)$ and let $\{{\mathcal{F}}_{t}, t\ge 0\}$ be the augmented filtration generated by $B$. In the whole paper all notions whose definitions are related to some filtration are understood with respect to the filtration $\{{\mathcal{F}}_{t}\}$.
Given a stochastic process $X$ on $[0,T]$ with values in
$\mathbb{R}^{n}$ we set $X^{*}_{t}=\sup_{0\le s\le t}|X_{s}|$,
$t\in[0,T]$, where $|\cdot|$ denotes the Euclidean norm on $\mathbb{R}^{n}$. By $\mathcal{S}$ we denote the set of all progressively measurable continuous processes. For $p>0$ we denote by $\mathcal{S}^{p}$ the set of all processes $X\in\mathcal{S}$ such that \[
\|X\|_{\mathcal{S}^{p}} =(E\sup_{t\in [0,T]}
|X_{t}|^{p})^{1\wedge1/p}<+\infty. \] $M$ is the set of all progressively measurable processes $X$ such that \[
P(\int_{0}^{T}|X_{t}|^{2}\,dt<+\infty)=1 \] and for $p>0$, $M^{p}$ is the set of all processes $X\in M$ such that \[
(E(\int_{0}^{T}|X_{t}|^{2}\,dt)^{p/2})^{1\wedge 1/p} <+\infty. \] For $p,q>0$, $\mathbb{L}^{p,q}({\mathcal{F}})$ (resp. $\mathbb{L}^{p}({\mathcal{F}}_{T})$) denotes the set of all progressively measurable processes (${\mathcal{F}}_T$ measurable random variables) $X$ such that \[
(E(\int_{0}^{T}|X_{t}|^{p}\,dt)^{q/p})^{1\wedge 1/q}<+\infty \quad \left(\mbox{resp. }(E|X|^{p})^{1/p}<+\infty \right). \] For brevity we denote $\mathbb{L}^{p,p}({\mathcal{F}})$ by $\mathbb{L}^{p}({\mathcal{F}})$. By $\mathbb{L}^{1}(0,T)$ we denote the space of Lebesgue integrable real valued functions on $[0,T]$.
${\mathcal{M}}_{c}$ (resp. ${\mathcal{M}}_{c,loc}$) is the set of all continuous martingales (resp. continuous local martingales) and ${\mathcal{M}}^{p}_{c}$, $p\ge1$, is the set of all martingales $M\in{\mathcal{M}}_{c}$ such that $E(\langle M \rangle_{T})^{p/2}<+\infty$. $\mathcal{V}_{c}$ (resp. $\mathcal{V}^{+}_{c}$) is the set of all continuous progressively measurable processes of finite variation (resp. increasing processes) and $\mathcal{V}^{p}_{c}$ (resp. $\mathcal{V}^{+,p}_{c}$) is the set of all processes $V\in\mathcal{V}_{c}$ (resp. $V\in\mathcal{V}^{+}_{c}$) such that
$E|V|^{p}_{T}<+\infty$. We put ${\mathcal{H}}^{p}_{c}={\mathcal{M}}_{c}^{p}+\mathcal{V}_{c}^{p}$.
For a given measurable process $Y$ of class (D) (i.e. such that the family $\{Y_{\tau};\,\tau\in\mathcal{T}\}$ is uniformly integrable) we denote \[
\|Y\|_{1}=\sup\{E|Y_{\tau}|;\,\tau\in\mathcal{T}\}, \] where $\mathcal{T}$ is the set of all stopping times $\tau$ such that $\tau\le T$.
In what follows $f:[0,T]\times\Omega\times\mathbb{R}\times{\mathbb{R}^{d}}\rightarrow \mathbb{R}$ is a measurable function with respect to $Prog\times\mathcal{B}(\mathbb{R})\times\mathcal{B}({\mathbb{R}^{d}})$, where $Prog$ denotes the $\sigma$-field of progressive subsets of $[0,T]\times\Omega$.
In the whole paper all equalities and inequalities between random elements are understood to hold $P$-a.s.
Let $p\ge 1$. In the paper we consider the following hypotheses.
\begin{enumerate}
\item[(H1)] $E|\xi|^{p}+E(\int_{0}^{T}|f(t,0,0)|\,dt)^{p}<\infty$.
\item[(H2)] There exists $\lambda>0$ such that $|f(t,y,z)-f(t,y,z')|
\le \lambda |z-z'|$ for every $t\in [0,T], y\in\mathbb{R}, z,z'\in{\mathbb{R}^{d}}$. \item[(H3)] There exists $\mu\in\mathbb{R}$ such that $(f(t,y,z)-f(t,y',z))(y-y')\le \mu(y-y')^{2}$ for every $t\in [0,T], y, y'\in\mathbb{R}, z,z'\in{\mathbb{R}^{d}}$. \item [(H4)] For every $(t,z)\in[0,T]\times{\mathbb{R}^{d}}$ the mapping $\mathbb{R}\ni y\rightarrow f(t,y,z)$ is continuous. \item[(H5)] For every $r>0$ the mapping
$[0,T]\ni t\rightarrow\sup_{|y|\le r}|f(t,y,0)-f(t,0,0)|$ belongs to $\mathbb{L}^{1}(0,T)$. \item[(H6)]$L$ is a continuous, progressively measurable process such that $L_T\le\xi$. \item [(H7)]There exists a semimartingale $X$ such that $X\in {\mathcal{H}}^{p}_{c}$ for some $p>1$, $X_{t}\ge L_{t}$, $t\in [0,T]$ and $E(\int_{0}^{T}f^{-}(s,X_{s},0)\,ds)^{p}<\infty$. \item [(H7*)] There exists a semimartingale $X$ of class (D) such that $X\in\mathcal{V}^{1}_{c}+{\mathcal{M}}^{q}_{c}$ for every $q\in (0,1)$, $X_{t}\ge L_{t}$, $t\in [0,T]$ and $E\int_{0}^{T}f^{-}(s,X_{s},0)\,ds<+\infty$. \item[(A)]There exist $\mu\in\mathbb{R}$ and $\lambda \ge 0$ such that \[
\hat{y}f(t,y,z)\le f_{t}+\mu|y|+\lambda |z| \] for every $t\in[0,T]$, $y\in\mathbb{R}$, $z\in\mathbb{R}^d$, where
$\hat{y}=\mathbf{1}_{\{y\neq 0\}}\frac{y}{|y|}$ and $\{f_{t};\,t\in[0,T]\}$ is a nonnegative progressively measurable process. \item[(Z)]There exist $\alpha\in(0,1)$, $\gamma\ge 0$ and a nonnegative process $g\in\mathbb{L}^{1}({\mathcal{F}})$ such that \[
|f(t,y,z)-f(t,y,0)|\le \gamma (g_{t}+|y|+|z|)^{\alpha} \] for every $t\in[0,T]$, $y\in\mathbb{R}$, $z\in\mathbb{R}^d$. \end{enumerate}
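To illustrate the above hypotheses on a concrete generator (our example, not taken from the references), take $f(t,y,z)=-y^{3}+\lambda|z|$. Conditions \mbox{\rm(H2)--(H5)} are satisfied with $\mu=0$, and since
\[
\hat{y}f(t,y,z)=-|y|^{3}+\lambda\hat{y}|z|\le\lambda|z|,\quad t\in[0,T],\ y\in\mathbb{R},\ z\in\mathbb{R}^{d},
\]
condition \mbox{\rm(A)} holds with $f_{t}=0$ and $\mu=0$, although $f$ is neither Lipschitz continuous nor of linear growth in $y$.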
\nsubsection{A priori estimates} \label{sec3}
In this section $K$ denotes an arbitrary but fixed process of the class $\mathcal{V}^{+}_{c}$ such that $K_{0}=0$.
The following version of It\^o's formula will be frequently used in the paper.
\begin{stw} \label{prop.ito} Let $p\ge 1$ and let $X$ be a progressively measurable process of the form \begin{align*} X_{t}=X_{0}+\int_{0}^{t} dK_{s}+\int_{0}^{t} Z_{s}\,dB_{s},\quad t\in [0,T], \end{align*} where $Z\in M$. Then there is $L\in \mathcal{V}^{+}_{c}$ such that \begin{align*}
|X_{t}|^{p}-|X_{0}|^{p}&=p\int_{0}^{t}
|X_{s}|^{p-1}\hat{X}_{s}\,dK_{s}
+p\int_{0}^{t}|X_{s}|^{p-1}\hat{X}_{s}Z_{s}\,dB_{s}\nonumber\\
&\quad+c(p)\int^t_0\mathbf{1}_{\{X_{s}\neq 0\}}|X_{s}|^{p-2}|Z_{s}|^{2}\,ds +L_{t}\mathbf{1}_{\{p=1\}} \end{align*} with $c(p)=p(p-1)/2$. \end{stw} \begin{dow} The proof is a matter of a slight modification of the proof of \cite[Lemma~2.2]{BDHPS}. \end{dow}
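As a consistency check, observe that for $p=2$ Proposition \ref{prop.ito} reduces to the classical It\^o formula: $|X_{s}|\hat{X}_{s}=X_{s}$, $c(2)=1$ and
\[
|X_{t}|^{2}-|X_{0}|^{2}=2\int_{0}^{t}X_{s}\,dK_{s}+2\int_{0}^{t}X_{s}Z_{s}\,dB_{s}+\int_{0}^{t}|Z_{s}|^{2}\,ds,
\]
while for $p=1$ the additional increasing process $L$ is the local time of $X$ at zero, and the formula reduces to the It\^o--Tanaka formula.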
\begin{df} We say that a pair $(Y,Z)$ of progressively measurable processes is a solution of BSDE$(\xi,f+dK)$ iff $Z\in M$, the mapping $[0,T]\ni t\mapsto f(t,Y_{t},Z_{t})$ belongs to $\mathbb{L}^{1}(0,T)$, $P$-a.s. and \begin{equation} \label{eq3.02} Y_{t}=\xi+\int_{t}^{T}f(s,Y_{s},Z_{s})\,ds +\int_{t}^{T}dK_{s}-\int_{t}^{T} Z_{s}\,dB_{s},\quad t\in[0,T]. \end{equation} \end{df}
\begin{lm}\label{lm1} Let $(Y,Z)$ be a solution of \mbox{\rm BSDE}$(\xi,f+dK)$. Assume that \mbox{\rm(H3)} is satisfied and there exists a progressively measurable process $X$ such that $X_{t}\ge Y_{t}$, $t\in [0,T]$ and the mappings $[0,T]\ni t\mapsto X^{+}_t$, $[0,T]\ni t\mapsto f^{-}(t,X_{t},0)$ belong to $\mathbb{L}^{1}(0,T)$, $P$-a.s. \begin{enumerate} \item[\rm(i)]If \mbox{\rm(H2)} is satisfied then for every stopping time $\tau\le T$ and $a\ge \mu$, \begin{align*}
\int_{0}^{\tau} e^{at}dK_{t}&\le |e^{a\tau}Y_{\tau}|
+|Y_{0}|+\int_{0}^{\tau}e^{as}Z_{s}\,dB_{s}
+\lambda\int_{0}^{\tau}e^{as}|Z_{s}|\,ds\\&\quad +\int_{0}^{\tau}e^{as}f^{-}(s,X_{s},0)\,ds +\int_{0}^{\tau}a^{+}e^{as}X_{s}^{+}\,ds. \end{align*} \item[\rm(ii)] If \mbox{\rm(Z)} is satisfied then for every stopping time $\tau\le T$ and $a\ge\mu$, \begin{align*}
\int_{0}^{\tau} e^{at}dK_{t}&\le |e^{a\tau}Y_{\tau}|+|Y_{0}| +\int_{0}^{\tau}e^{as}Z_{s}\,dB_{s}+\gamma\int_{0}^{\tau}e^{as}(g_{s}
+|Y_{s}|+|Z_{s}|)^{\alpha}\,ds\\ &\quad+\int_{0}^{\tau}e^{as}f^{-}(s,X_{s},0)\,ds +\int_{0}^{\tau}a^{+}e^{as}X_{s}^{+}\,ds. \end{align*} \end{enumerate} \end{lm} \begin{dow} Assume that $\mu\le 0$. Then $f^{-}(s,Y_{s},0)\le f^{-}(s,X_{s},0)$, $s\in [0,T]$ and from (\ref{eq3.02}) and (H2) it follows that \begin{align*} K_{\tau}\le -Y_{\tau}+Y_{0}+\int_{0}^{\tau}Z_{s}\,dB_{s}
+\lambda\int_{0}^{\tau}|Z_{s}|\,ds-\int_{0}^{\tau} f(s,Y_{s},0)\,ds, \end{align*} which implies (i) with $a=0$. Now, let $a\ge\mu$ and let $\tilde{Y}_{t}=e^{at}Y_{t}$, $\tilde{Z}_{t}=e^{at}Z_{t}$ and $\tilde{\xi}=e^{aT}\xi$, $\tilde{f}(t,y,z) =e^{at}f(t,e^{-at}y,e^{-at}z)-ay$, $d\tilde{K}_{t}=e^{at}\,dK_{t}$. Then $\tilde{f}$ satisfies (H3) with $\mu=0$ and by It\^o's formula, \[ \tilde{Y}_{t}=\tilde{\xi} +\int_{t}^{T}\tilde{f}(s,\tilde{Y}_{s},\tilde{Z}_{s})\,ds +\int_{t}^{T}d\tilde{K}_{s}-\int_{t}^{T}\tilde{Z}_{s}\,dB_{s}, \quad t\in [0,T], \] from which in the same manner as before we obtain (i) for $a\ge\mu$.
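For the reader's convenience let us verify directly that $\tilde{f}$ satisfies \mbox{\rm(H3)} with constant $0$: putting $u=e^{-at}y$, $u'=e^{-at}y'$ we have, by \mbox{\rm(H3)} for $f$,
\begin{align*}
(\tilde{f}(t,y,z)-\tilde{f}(t,y',z))(y-y')
&=e^{2at}\big(f(t,u,e^{-at}z)-f(t,u',e^{-at}z)\big)(u-u')-a(y-y')^{2}\\
&\le \mu(y-y')^{2}-a(y-y')^{2}\le 0,
\end{align*}
since $a\ge\mu$.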
To prove (ii) let us observe that from (\ref{eq3.02}) and (Z) it follows immediately that \begin{align*} K_{\tau}\le -Y_{\tau}+Y_{0}+\int_{0}^{\tau}Z_{s}\,dB_{s}
+\gamma\int_{0}^{\tau}(g_{s}+|Y_{s}|+|Z_{s}|)^{\alpha}\,ds -\int_{0}^{\tau} f(s,Y_{s},0)\,ds. \end{align*} Therefore repeating arguments from the proof of (i) we get (ii). \end{dow}
\begin{lm}\label{lm2} Assume \mbox{\rm(A)} and let $(Y,Z)$ be a solution of \mbox{\rm BSDE}$(\xi,f+dK)$. If $Y\in\mathcal{S}^{p}$ for some $p>0$ and \[ E(\int_{0}^{T}X^{+}_{s}\,ds)^{p} +E(\int_{0}^{T}f^{-}(s,X_{s},0)\,ds)^{p}
+E(\int_{0}^{T}|f(s,0,0)|\,ds)^{p}<+\infty \] for some progressively measurable process $X$ such that $X_{t}\ge Y_{t}$, $t\in [0,T]$, then $Z\in M^{p}$ and there exists $C$ depending only on $\lambda,p,T$ such that for every $a\ge \mu+\lambda^{2}$, \begin{align*}
&E\bigg((\int_{0}^{T} e^{2as}|Z_{s}|^{2}\,ds)^{p/2}+(\int_{0}^{T} e^{as}\,dK_{s})^{p}\bigg) \le CE\bigg(\sup_{t\le T}
e^{apt}|Y_{t}|^{p}\\
&\qquad+(\int_{0}^{T}e^{as}|f(s,0,0)|\,ds)^{p} +(\int_{0}^{T}e^{as}f^{-}(s,X_{s},0)\,ds)^{p} + (\int_{0}^{T}a^{+}e^{as}X_{s}^{+}\,ds)^{p}\bigg). \end{align*} \end{lm} \begin{dow} By standard arguments we may assume that $\mu+\lambda^{2}\le 0$ and take $a=0$. For each $k\in\mathbb{N}$ let us consider the stopping time \begin{equation}
\label{eq3.01} \tau_k=\inf\{t\in [0,T];\int_0^t|Z_{s}|^{2}\,ds\ge k\}\wedge T. \end{equation} Then as in the proof of Eq. (5) in \cite{BDHPS} we get \begin{align*}
(\int_{0}^{\tau_{k}}|Z_{s}|^{2}\,ds)^{p/2}\le c_{p}\bigg(|Y^{*}_{T}|^{p} +(\int_{0}^{T} f_{s}\,ds)^{p}
+|\int_{0}^{\tau_{k}}Y_{s}Z_{s}\,dB_{s}|^{p/2}
+(\int_{0}^{\tau_{k}}|Y_{s}|\,dK_{s})^{p/2}\bigg), \end{align*} and hence, repeating arguments following Eq. (5) in \cite{BDHPS} we show that \begin{align}\label{eq2.1}
E(\int_{0}^{\tau_{k}}|Z_{s}|^{2}\,ds)^{p/2}\le c_{p}E\bigg(|Y^{*}_{T}|^{p} +(\int_{0}^{T} f_{s}\,ds)^{p}
+(\int_{0}^{\tau_{k}}|Y_{s}|\,dK_{s})^{p/2}\bigg). \end{align} By Lemma \ref{lm1} and the Burkholder-Davis-Gundy inequality, \begin{align}\label{eq2.2}
EK^{p}_{\tau_{k}}\le c'(p,\lambda,T)E\{|Y^{*}_{T}|^{p}
+(\int_{0}^{\tau_{k}}|Z_{s}|^{2}\,ds)^{p/2} +(\int_{0}^{T}f^{-}(s,X_{s},0)\,ds)^{p}\}. \end{align} Moreover, applying Young's inequality we conclude from (\ref{eq2.1}) that for every $\alpha>0$, \begin{align}\label{eq2.3}
&E(\int_{0}^{\tau_{k}}|Z_{s}|^{2}\,ds)^{p/2} \nonumber\\
&\quad\le c''(p,\alpha)E\{|Y^{*}_{T}|^{p}+(\int_{0}^{T} f_{s}\,ds)^{p}+(\int_{0}^{T}f^{-}(s,X_{s},0)\,ds)^{p}\}+\alpha EK_{\tau_{k}}^{p}. \end{align} Taking $\alpha=(2c'(p,\lambda,T))^{-1}$ and combining (\ref{eq2.2}) with (\ref{eq2.3}) we obtain \[
E(\int_{0}^{\tau_{k}}|Z_{s}|^{2}\,ds)^{p/2} \le C(p,\lambda,T)E\{|Y^{*}_{T}|^{p}+(\int_{0}^{T} f_{s}\,ds)^{p} +(\int_{0}^{T}f^{-}(s,X_{s},0)\,ds)^{p}\}. \] Applying Fatou's lemma we conclude from the above inequality and (\ref{eq2.2}) that \begin{align*}
E(\int_{0}^{T}|Z_{s}|^{2}\,ds)^{p/2}+EK_{T}^{p} \le CE\{|Y^{*}_{T}|^{p}+(\int_{0}^{T} f_{s}\,ds)^{p} +(\int_{0}^{T}f^{-}(s,X_{s},0)\,ds)^{p}\}, \end{align*} which is the desired estimate. \end{dow}
\begin{uw}\label{uw1} Observe that if $f$ does not depend on $z$ then the constant $C$ of Lemma \ref{lm2} depends only on $p$. This follows from the fact that in this case $c'$ in the key inequality (\ref{eq2.2}) depends only on $p$. \end{uw}
\begin{stw}\label{stw1} Assume that \mbox{\rm(A)} is satisfied and \[ E(\int_{0}^{T}f^{-}(s,X_{s},0)\,ds)^{p}
+E(\int_{0}^{T}|f(s,0,0)|\,ds)^{p}<+\infty \] for some $p>1$ and $X^{+}\in \mathcal{S}^{p}$ such that $X_{t}\ge Y_{t}$, $t\in [0,T]$. If $(Y,Z)$ is a solution of \mbox{\rm BSDE}$(\xi,f+dK)$ such that $Y\in\mathcal{S}^{p}$, then there exists $C$ depending only on $\lambda,p,T$ such that for every $a\ge\mu+\lambda^{2}/[1\wedge(p-1)]$ and every stopping time $\tau\le T$, \begin{align*}
&E\sup_{t\le \tau}e^{apt}|Y_{t}|^{p}+E(\int_{0}^{\tau}
e^{2as}|Z_{s}|^{2}\,ds)^{p/2} +E(\int_{0}^{\tau}e^{as}\,dK_{s})^{p}\\
&\qquad\le CE\bigg(e^{ap\tau}|Y_{\tau}|^{p}
+(\int_{0}^{\tau}e^{as}|f(s,0,0)|\,ds)^{p}+\sup_{t\le \tau}
|e^{at}X^{+}_{t}|^{p}\\ &\qquad\quad+(\int_{0}^{\tau}e^{as}f^{-}(s,X_{s},0)\,ds)^{p} +(\int_{0}^{\tau}a^{+}e^{as}X^{+}_{s}\,ds)^{p}\bigg). \end{align*} Assume additionally that $f$ does not depend on $z$. If $p=1$ and $X^{+},Y$ are of class \mbox{\rm(D)} then for every $a\ge\mu$, \begin{align*}
\|e^{a\cdot} Y\|_{1} +E\int_{0}^{T}e^{as}\,dK_{s}&\le E\bigg(e^{aT}|\xi|
+\int_{0}^{T}e^{as}|f(s,0)|\,ds\\ &\quad+ \int_{0}^{T}e^{as}f^{-}(s,X_{s})\,ds
+\int_{0}^{T}a^{+}e^{as}X^{+}_{s}\,ds\bigg)+\|e^{a\cdot}X^{+}\|_{1}\,. \end{align*} \end{stw} \begin{dow} To shorten notation we prove the proposition in the case where $\tau=T$. The proof of the general case requires only minor technical changes. Moreover, by the change of variables used at the beginning of the proof of Lemma \ref{lm1} we can reduce the proof to the case where $a=0$ and $\mu+\lambda^{2}/[1\wedge(p-1)]\le 0$. Therefore we will assume that $a,\mu,\lambda$ satisfy the last two conditions.
By It\^o's formula (see Proposition \ref{prop.ito}), \begin{align*}
|Y_{t}|^{p}&+c(p)\int_{t}^{T}|Y_{s}|^{p-2}\mathbf{1}_{\{Y_{s} \neq 0\}}|Z_{s}|^{2}\,ds= |\xi|^{p}
+p{\int_{t}^{T}}|Y_{s}|^{p-1}\hat{Y_{s}}f(s,Y_{s},Z_{s})\,ds\\
&+p{\int_{t}^{T}}|Y_{s}|^{p-1}\hat{Y}_{s}\,dK_{s}
-p{\int_{t}^{T}}|Y_{s}|^{p-1}\hat{Y}_{s}Z_{s}\,dB_{s},\quad t\in [0,T]. \end{align*} By the same method as in the proof of Eq. (6) in \cite{BDHPS} we deduce from the above that \begin{align}\label{eq2.4}
\nonumber|Y_{t}|^{p}+\frac{c(p)}{2}
\int_{t}^{T}|Y_{s}|^{p-2}\mathbf{1}_{\{Y_{s}\neq 0\}}|Z_{s}|^{2}\,ds
&\le H-p{\int_{t}^{T}}|Y_{s}|^{p-1}\hat{Y_{s}}Z_{s}\,dB_{s}\\
&\quad+p{\int_{t}^{T}}|Y_{s}|^{p-1}\hat{Y}_{s}\,dK_{s},\quad t\in [0,T], \end{align}
where $H=|\xi|^{p}+p\int_{0}^{T}|Y_{s}|^{p-1}f_{s}\,ds$. Since the mapping $\mathbb{R}\ni y\mapsto|y|^{p-1}\hat{y}$ is increasing, \[
{\int_{t}^{T}}|Y_{s}|^{p-1}\hat{Y}_{s}\,dK_{s} \le
{\int_{t}^{T}}|X^{+}_{s}|^{p-1}\hat X^{+}_{s}\,dK_{s},\quad t\in [0,T]. \] From this and (\ref{eq2.4}), \begin{equation} \label{eq3.06}
|Y_{t}|^{p}+\frac{c(p)}{2}\int_{t}^{T}|Y_{s}|^{p-2}\mathbf{1}_{\{Y_{s}
\neq 0\}}|Z_{s}|^{2}\,ds\le H'
-p{\int_{t}^{T}}|Y_{s}|^{p-1}\hat{Y_{s}}Z_{s}\,dB_{s}, \end{equation}
where $H'=|\xi|^{p}+p\int_{0}^{T}|Y_{s}|^{p-1}f_{s}\,ds
+p\int_{0}^{T}|X^{+}_{s}|^{p-1}\,dK_{s}$. As in the proof of \cite[Proposition 3.2]{BDHPS} (see (7) and the second inequality following (8) in \cite{BDHPS}), using the Burkholder-Davis-Gundy inequality we conclude from (\ref{eq3.06}) that \begin{equation}\label{eq2.5}
E|Y^{*}_{T}|^{p}\le d_{p} EH'. \end{equation} Applying Young's inequality we get \begin{align}\label{eq2.6}
pd_{p}E\int_{0}^{T}|Y_{s}|^{p-1}f_{s}\,ds \le pd_{p}E(|Y^{*}_{T}|^{p-1}\int_{0}^{T}f_{s}\,ds) \le
\frac14E|Y^{*}_{T}|^{p}+d'_{p}E(\int_{0}^{T}f_{s}\,ds)^{p} \end{align} and \begin{align}\label{eq2.7}
pd_{p}E\int_{0}^{T}|X^{+}_{t}|^{p-1}\,dK_{t} \le d'(p,\alpha)E|X^{+,*}_{T}|^{p}+\alpha EK^{p}_{T}\,. \end{align} By Lemma \ref{lm1}, there exists $d(p,\lambda,T)>0$ such that \[
EK_{T}^{p}\le d(p,\lambda,T)E\{|Y^{*}_{T}|^{p}
+(\int_{0}^{T}|Z_{s}|^{2}\,ds)^{p/2} +(\int_{0}^{T}f^{-}(s,X_{s},0)\,ds)^{p}\}. \] From this and Lemma \ref{lm2} we see that there exists $c(p,\lambda,T)>0$ such that \begin{align}
\label{eq2.8} EK^{p}_{T}\le c(p,\lambda,T)E\{|Y^{*}_{T}|^{p} +(\int_{0}^{T}f_{s}\,ds)^{p} +(\int_{0}^{T}f^{-}(s,X_{s},0)\,ds)^{p}\}. \end{align} Put $\alpha=(4c(p,\lambda,T))^{-1}$. Then from (\ref{eq2.5})--(\ref{eq2.8}) it follows that there is $C(p,\lambda,T)$ such that \begin{align*}
E|Y_{T}^{*}|^{p}&\le C(p,\lambda,T)E\{|\xi|^{p}
+(\int_{0}^{T}|f(s,0,0)|\,ds)^{p} \\ &\quad+ (\int_{0}^{T}f^{-}(s,X_{s},0)\,ds)^{p}+\sup_{t\le T}
|X^{+}_{t}|^{p}\}. \end{align*} Hence, by (\ref{eq2.8}) and Lemma \ref{lm2}, \begin{align*}
E|Y^{*}_{\tau}|^{p}+E(\int_{0}^{\tau}|Z_{s}|^{2}\,ds)^{p/2}
+EK_{\tau}^{p}&\le CE\bigg(|Y_{\tau}|^{p}
+(\int_{0}^{\tau}|f(s,0,0)|\,ds)^{p}
+|X^{+,*}_{\tau}|^{p}\\ &\quad+(\int_{0}^{\tau}f^{-}(s,X_{s},0)\,ds)^{p}\bigg). \end{align*} From this the first assertion follows. Now suppose that $f$ does not depend on $z$. As in the first part of the proof we may assume that $\mu\le 0$ and $a=0$. Applying It\^o's formula (see Proposition \ref{prop.ito}) we conclude that for any stopping times $\sigma\le\tau\le T$, \begin{align}\label{eq2.9}
|Y_{\sigma}|\le|Y_{\tau}|+\int_{\sigma}^{\tau}f(s,Y_{s})\hat{Y}_{s}\,ds +\int_{\sigma}^{\tau}\hat{Y}_{s}\,dK_{s} -\int_{\sigma}^{\tau}Z_{s}\hat{Y}_{s}\,dB_{s}. \end{align} Let us define $\tau_k$ by (\ref{eq3.01}). Then $\int_{0}^{\tau_{k}\wedge\cdot}Z_{s}\hat{Y}_{s}\,dB_{s}$ is a uniformly integrable martingale. Using this, the fact that
$Y$ is of class (D) and monotonicity of $f$ with respect to $y$ we deduce from (\ref{eq2.9}) that $|Y_{\sigma}|\le E(|\xi|+\int_{0}^{T}|f(s,0)|\,ds +K_{T}|\mathcal{F}_{\sigma})$, hence that \begin{equation} \label{eq2.10}
\|Y\|_{1}\le E(|\xi|+\int_{0}^{T}|f(s,0)|\,ds+K_{T}). \end{equation} On the other hand, $-f(t,Y_{t})\le -f(t,X_{t})$ for $t\in [0,T]$ since $Y_{t}\le X_{t}$, $t\in [0,T]$. Therefore \begin{align*} K_{\tau}&=Y_{0}-Y_{\tau}-\int_{0}^{\tau}f(s,Y_{s})\,ds +\int_{0}^{\tau}Z_{s}\,dB_{s}\\ &\le X_{0}-Y_{\tau} -\int_{0}^{\tau}f(s,X_{s})\,ds +\int_{0}^{\tau}Z_{s}\,dB_{s}. \end{align*} Taking $\tau=\tau_{k}$ and using the fact that $Y$ is of class (D) we deduce from the above inequality that \[
EK_{T}\le EX^{+}_{0}+E|\xi|+E\int_{0}^{T}f^{-}(s,X_{s})\,ds. \] Combining this with (\ref{eq2.10}) we get the desired result. \end{dow}
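Let us also justify the claim made in the proof above that $\int_{0}^{\tau_{k}\wedge\cdot}Z_{s}\hat{Y}_{s}\,dB_{s}$ is a uniformly integrable martingale: since $|\hat{Y}_{s}|\le1$, by the definition (\ref{eq3.01}) of $\tau_{k}$ and the Burkholder-Davis-Gundy inequality,
\[
E\sup_{t\le T}\Big|\int_{0}^{t\wedge\tau_{k}}Z_{s}\hat{Y}_{s}\,dB_{s}\Big|
\le cE\Big(\int_{0}^{\tau_{k}}|Z_{s}|^{2}\,ds\Big)^{1/2}\le c\sqrt{k}.
\]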
\begin{uw} If $f$ does not depend on $z$ then the constant $C$ of the first assertion of Proposition \ref{stw1} depends only on $p$. To see this it suffices to observe that if $f$ does not depend on $z$ then the constant $c$ in the key inequality (\ref{eq2.8}) depends only on $p$ (see Remark \ref{uw1}). \end{uw}
\nsubsection{Some useful tools} \label{sec4}
We begin with a useful comparison result for solutions of (\ref{eq3.02}) with $K\equiv0$.
\begin{stw}\label{stw2} Let $(Y^{1},Z^{1}), (Y^{2},Z^{2})$ be solutions of \mbox{\rm BSDE}$(\xi^{1},f^{1})$, \mbox{\rm BSDE}$(\xi^{2},f^{2})$, respectively. Assume that $(Y^{1}-Y^{2})^{+}\in\mathcal{S}^{q}$ for some $q>1$. If $\xi^{1}\le \xi^{2}$ and for a.e. $t\in [0,T]$ either \begin{equation} \label{eq4.01} \mathbf{1}_{\{Y^{1}_{t}>Y^{2}_{t}\}} (f^{1}(t,Y^{1}_{t},Z^{1}_{t})-f^{2}(t,Y^{1}_{t},Z^{1}_{t}))\le 0,\quad f^{2}\mbox{ satisfies }\mbox{\rm(H2)}, \mbox{\rm(H3)} \end{equation} or \begin{equation} \label{eq4.02} \mathbf{1}_{\{Y^{1}_{t}>Y^{2}_{t}\}} (f^{1}(t,Y^{2}_{t},Z^{2}_{t})-f^{2}(t,Y^{2}_{t},Z^{2}_{t}))\le 0, \quad f^{1}\mbox{ satisfies }\mbox{\rm(H2)}, \mbox{\rm(H3)} \end{equation} is satisfied then $Y^{1}_{t}\le Y^{2}_{t}$, $t\in [0,T]$. \end{stw} \begin{dow} We show the proposition in case (\ref{eq4.01}) is satisfied. If
(\ref{eq4.02}) is satisfied, the proof is analogous. Without loss of generality we may assume that $\mu\le 0$. By the It\^o-Tanaka formula, for every $p\in(1,q)$ and every stopping time $\tau\le T$, \begin{align}
\label{eq3.1} &|(Y^{1}_{t\wedge\tau}-Y^{2}_{t\wedge\tau})^{+}|^{p} +\frac{p(p-1)}{2}\int_{t\wedge\tau}^{\tau}\mathbf{1}_{\{Y^{1}_{s}
\neq Y^{2}_{s}\}}|(Y^{1}_{s}-Y^{2}_{s})^{+}|^{p-2}
|Z^{1}_{s}-Z^{2}_{s}|^{2}\,ds\nonumber\\
&\qquad=|(Y^{1}_{\tau}-Y^{2}_{\tau})^{+}|^{p}
+p\int_{t\wedge\tau}^{\tau}|(Y^{1}_{s}-Y^{2}_{s})^{+}|^{p-1} (f^{1}(s,Y^{1}_{s},Z^{1}_{s})-f^{2}(s,Y^{2}_{s},Z^{2}_{s}))\,ds \nonumber\\
&\qquad\quad-p\int_{t\wedge\tau}^{\tau}|(Y^{1}_{s}-Y^{2}_{s})^{+}|^{p-1} (Z^{1}_{s}-Z^{2}_{s})\,dB_{s}. \end{align} By (\ref{eq4.01}), \begin{align*} &\mathbf{1}_{\{Y^{1}_{t}>Y^{2}_{t}\}} (f^{1}(t,Y^{1}_{t},Z^{1}_{t})-f^{2}(t,Y^{2}_{t},Z^{2}_{t}))\\ &\qquad=\mathbf{1}_{\{Y^{1}_{t}>Y^{2}_{t}\}} (f^{1}(t,Y^{1}_{t},Z^{1}_{t})-f^{2}(t,Y^{1}_{t},Z^{1}_{t}))\\ &\qquad\quad+\mathbf{1}_{\{Y^{1}_{t}>Y^{2}_{t}\}} (f^{2}(t,Y^{1}_{t},Z^{1}_{t}) -f^{2}(t,Y^{2}_{t},Z^{2}_{t}))\\ &\qquad\le\mathbf{1}_{\{Y^{1}_{t}>Y^{2}_{t}\}} (f^{2}(t,Y^{1}_{t},Z^{1}_{t})-f^{2}(t,Y^{2}_{t},Z^{1}_{t}))\\ &\qquad\quad+\mathbf{1}_{\{Y^{1}_{t}>Y^{2}_{t}\}} (f^{2}(t,Y^{2}_{t},Z^{1}_{t})-f^{2}(t,Y^{2}_{t},Z^{2}_{t}))\\ &\qquad\le\lambda\mathbf{1}_{\{Y^{1}_{t}>Y^{2}_{t}\}}
|Z^{1}_{t}-Z^{2}_{t}|. \end{align*} From this, (\ref{eq3.1}) and Young's inequality, \begin{align*}
&|(Y^{1}_{t\wedge\tau}-Y^{2}_{t\wedge\tau})^{+}|^{p}+\frac{p(p-1)}{2}
\int_{t\wedge\tau}^{\tau}\mathbf{1}_{\{Y^{1}_{s}\neq Y^{2}_{s}\}}|(Y^{1}_{s}-Y^{2}_{s})^{+}|^{p-2}
|Z^{1}_{s}-Z^{2}_{s}|^{2}\,ds\\
&\qquad\le |(Y^{1}_{\tau}-Y^{2}_{\tau})^{+}|^{p}
+p\lambda\int_{t\wedge\tau}^{\tau}|(Y^{1}_{s}
-Y^{2}_{s})^{+}|^{p-1}|Z^{1}_{s}-Z^{2}_{s}|\,ds\\
&\qquad\quad-p\int_{t\wedge\tau}^{\tau}|(Y^{1}_{s}-Y^{2}_{s})^{+}|^{p-1} (Z^{1}_{s}-Z^{2}_{s})\,dB_{s}\\
&\qquad\le|(Y^{1}_{\tau}-Y^{2}_{\tau})^{+}|^{p} +\frac{p\lambda^{2}}{p-1}\int_{t\wedge\tau}^{\tau}
|(Y^{1}_{s}-Y^{2}_{s})^{+}|^{p}\,ds\\ &\qquad\quad+\frac{p(p-1)}{4}\int_{t\wedge\tau}^{\tau}\mathbf{1}_{\{Y^{1}_{s} \neq Y^{2}_{s}\}}
|(Y^{1}_{s}-Y^{2}_{s})^{+}|^{p-2}|Z^{1}_{s}-Z^{2}_{s}|^{2}\,ds
\\&\qquad\quad-p\int_{t\wedge\tau}^{\tau}|(Y^{1}_{s}-Y^{2}_{s})^{+}|^{p-1} (Z^{1}_{s}-Z^{2}_{s})\,dB_{s}. \end{align*} Let $\tau_{k}=\inf\{t\in [0,T];\int_{0}^{t}
|(Y^{1}_{s}-Y^{2}_{s})^{+}|^{2(p-1)}
|Z^{1}_{s}-Z^{2}_{s}|^{2}\,ds\ge k\}\wedge T$. From the above estimate it follows that \[
E|(Y^{1}_{t\wedge\tau_{k}}-Y^{2}_{t\wedge\tau_{k}})^{+}|^{p} \le E|(Y^{1}_{\tau_{k}}-Y^{2}_{\tau_{k}})^{+}|^{p} +\frac{p\lambda^{2}}{p-1}E\int_{t\wedge\tau_{k}}^{\tau_{k}}
|(Y^{1}_{s}-Y^{2}_{s})^{+}|^{p}\,ds. \] Since $(Y^{1}-Y^{2})^{+}\in \mathcal{S}^{q}$, letting $k\rightarrow\infty$ and using the assumptions we get \[
E|(Y^{1}_{t}-Y^{2}_{t})^{+}|^{p} \le \frac{p\lambda^{2}}{p-1}E\int_{t}^{T}
|(Y^{1}_{s}-Y^{2}_{s})^{+}|^{p}\,ds,\quad t\in [0,T]. \]
By Gronwall's lemma, $E|(Y^{1}_{t}-Y^{2}_{t})^{+}|^{p}=0,\, t\in [0,T]$, from which the desired result follows. \end{dow}
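For the reader's convenience we record the Young-type inequality used in the second estimate of the proof above: with $a=(Y^{1}_{s}-Y^{2}_{s})^{+}>0$ and $b=|Z^{1}_{s}-Z^{2}_{s}|$,
\[
p\lambda a^{p-1}b=p(\lambda a^{p/2})(a^{p/2-1}b)
\le\frac{p\lambda^{2}}{p-1}\,a^{p}+\frac{p(p-1)}{4}\,a^{p-2}b^{2},
\]
which follows from the elementary bound $xy\le\varepsilon x^{2}+(4\varepsilon)^{-1}y^{2}$ with $\varepsilon=1/(p-1)$, $x=\lambda a^{p/2}$, $y=a^{p/2-1}b$.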
\begin{lm}\label{lm3} Assume that $\{(X^{n},Y^{n},K^{n})\}$ is a sequence of real-valued c\`adl\`ag progressively measurable processes such that \begin{enumerate} \item [\rm (a)] $Y^{n}_{t}=-K^{n}_{t}
+X^{n}_{t}$, $t\in [0,T]$, where $K^{n}$ is increasing and $K^{n}_{0}=0$, \item [\rm (b)] $Y^{n}_{t}\uparrow Y_{t}$, $t\in [0,T]$, $Y^{1},Y$ are of class (D), \item [\rm (c)] there exists a c\`adl\`ag process $X$ such that for some subsequence $\{n'\}$, $X^{n'}_{\tau}\rightarrow X_{\tau}$ weakly in $\mathbb{L}^{1}({\mathcal{F}}_{T})$ for every stopping time $\tau\le T$. \end{enumerate} Then $Y$ is c\`adl\`ag and there exists a c\`adl\`ag increasing process $K$ such that $K^{n'}_{\tau}\rightarrow K_{\tau}$ weakly in $\mathbb{L}^{1}({\mathcal{F}}_{T})$ for every stopping time $\tau\le T$ and \[ Y_{t}=-K_{t}+X_{t},\quad t\in [0,T]. \] \end{lm} \begin{dow} From (b) it follows that $Y^{n'}_{\tau}\rightarrow Y_{\tau}$ weakly in $\mathbb{L}^{1}({\mathcal{F}}_{T})$ for every stopping time $\tau\le T$. Set $K_{t}=X_{t}-Y_{t}$. By the above and (c), $K^{n'}_{\tau}\rightarrow K_{\tau}$ weakly in $\mathbb{L}^{1}({\mathcal{F}}_{T})$ for every stopping time $\tau\le T$. If $\sigma,\tau$ are stopping times such that $\sigma\le\tau\le T$ then $K_{\sigma}\le K_{\tau}$ since $K^{n}_{\sigma}\le K^{n}_{\tau}$, $n\in{\mathbb N}$. Therefore $K$ is increasing. The fact that $Y,K$ are c\`adl\`ag processes follows easily from \cite[Lemma 2.2]{Peng}. \end{dow}
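For the reader's convenience, the monotonicity of $K$ stated in the proof above follows from the weak convergence: for stopping times $\sigma\le\tau\le T$ and $A\in{\mathcal{F}}_{T}$,
\[
E\big(\mathbf{1}_{A}(K_{\tau}-K_{\sigma})\big)
=\lim_{n'\rightarrow+\infty}E\big(\mathbf{1}_{A}(K^{n'}_{\tau}-K^{n'}_{\sigma})\big)\ge0,
\]
so that $K_{\sigma}\le K_{\tau}$ $P$-a.s.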
In what follows we say that a sequence $\{\tau_{k}\}$ of stopping times is stationary if \[ P(\liminf_{k\rightarrow+\infty} \{\tau_{k}=T\})=1. \]
\begin{lm}\label{lm4}
Assume that $\{Y^{n}\}$ is a nondecreasing sequence of continuous processes such that $\sup_{n\ge1}E|Y^{n,*}_{T}|^{q}<\infty$ for some $q>0$. Then there exists a stationary sequence $\{\tau_{k}\}$
of stopping times such that $Y^{n,*}_{\tau_{k}}\le k\vee|Y^{n}_{0}|$, $P$-a.s. for every $k\in\mathbb{N}$. \end{lm} \begin{dow} Set $V^{n}_{t}=\sup_{0\le s\le t}(Y^{n}_{s}-Y^{1}_{s})$. Then $V^{n}$ is nonnegative and $V^{n}\in\mathcal{V}^{+}_{c}$. Since $\{Y^{n}\}$ is nondecreasing, there exists an increasing process $V$ such that $V^{n}_{t}\uparrow V_{t}$, $t\in [0,T]$. By Fatou's lemma, \[
EV^{q}_{T}\le \liminf_{n\rightarrow+\infty} E|V^{n}_{T}|^{q}\le c(q)\sup_{n\ge1}E|Y^{n,*}_{T}|^{q}<\infty. \] Now, set $V'_{t}=\inf_{t<t'\le T}V_{t'}$, $t\in [0,T]$ and then $\tau_{k}=\inf\{t\in [0,T]; Y^{1,*}_{t}+V'_{t}>k\}\wedge T$. It is known that $V'$ is a progressively measurable c\`adl\`ag process. Since $V_{T}$ is integrable, the sequence $\{\tau_{k}\}$ is stationary. From the above it follows that if $\tau_{k}>0$ then \[ Y^{n,*}_{\tau_{k}}=Y^{n,*}_{\tau_{k}-} \le V'_{\tau_{k}-}+Y^{1,*}_{\tau_{k}-}\le k,\quad k\in\mathbb{N}, \] and the proof is complete. \end{dow}
\begin{lm}\label{lm5} If $\{Z^{n}\}$ is a sequence of progressively measurable processes such that
$\sup_{n\ge1}E(\int_{0}^{T}|Z^{n}_{t}|^{2}\,dt)^{p/2}<\infty$ for some $p>1$, then there exists $Z\in M^{p}$ and a subsequence $\{n'\}$ such that for every stopping time $\tau\le T$, $\int_{0}^{\tau}Z^{n'}_{t}\,dB_{t}\rightarrow \int_{0}^{\tau}Z_{t}\,dB_{t}$ weakly in $\mathbb{L}^{p}({\mathcal{F}}_{T})$. \end{lm} \begin{dow} Since $\{Z^{n}\}$ is bounded in $\mathbb{L}^{2,p}({\mathcal{F}})$ and the space $\mathbb{L}^{2,p}({\mathcal{F}})$ is reflexive, there exists a subsequence (still denoted by $\{n\}$) and $Z\in\mathbb{L}^{2,p}({\mathcal{F}})$ such that $Z^{n}\rightarrow Z$ weakly in $\mathbb{L}^{2,p}({\mathcal{F}})$. It is known that if $\xi\in\mathbb{L}^{p'}({\mathcal{F}}_{T})$, where $p'=p/(p-1)$, then there exists $\eta\in \mathbb{L}^{2,p'}({\mathcal{F}})=(\mathbb{L}^{2,p}({\mathcal{F}}))^{*}$ such that \begin{equation} \label{eq3.2} \xi=E\xi+\int_{0}^{T}\eta_{t}\,dB_{t}. \end{equation} Let $f\in (\mathbb{L}^{p}({\mathcal{F}}_{T}))^{*}$. Then there exists $\xi\in\mathbb{L}^{p'}({\mathcal{F}}_{T})$ such that $f(\zeta)=E\zeta\xi$ for every $\zeta\in\mathbb{L}^{p}({\mathcal{F}}_{T})$. Let $\eta\in\mathbb{L}^{2,p'}({\mathcal{F}})$ be such that (\ref{eq3.2}) is satisfied. Without loss of generality we may assume that $E\xi=0$. Then by It\^o's isometry, \begin{align*} f(\int_{0}^{T}Z^{n}_{t}\,dB_{t}) &=E\xi\int_{0}^{T} Z^{n}_{t}\,dB_{t} =E\int_{0}^{T}\eta_{t}\,dB_{t}\int_{0}^{T}Z^{n}_{t}\,dB_{t}\\
&=E\int_{0}^{T}\eta_{t}Z^{n}_{t}\,dt\rightarrow E\int_{0}^{T}\eta_{t}Z_{t}\,dt=f(\int_{0}^{T}Z_{t}\,dB_{t}). \end{align*} Since the same reasoning applies to the sequence $\{\mathbf{1}_{\{\cdot\le\tau\}}Z^{n}\}$ in place of $\{Z^n\}$, the lemma follows. \end{dow}
\nsubsection{Existence and uniqueness results for $p>1$} \label{sec5}
First we recall the definition of a solution $(Y,Z,K)$ of (\ref{eq1.1}). Note that a priori we do not impose any integrability conditions on the processes $Y,Z,K$.
\begin{df} We say that a triple $(Y,Z,K)$ of progressively measurable processes is a solution of RBSDE$(\xi,f,L)$ iff \begin{enumerate} \item [\rm(a)]$K$ is an increasing continuous process, $K_{0}=0$, \item [\rm(b)]$Z\in M$ and the mapping $[0,T]\ni t\mapsto f(t,Y_{t},Z_{t})$ belongs to $\mathbb{L}^{1}(0,T),\, P$-a.s., \item [\rm(c)]$Y_{t}=\xi+\int_{t}^{T}f(s,Y_{s},Z_{s})\,ds +\int_{t}^{T}dK_{s}-\int_{t}^{T}Z_{s}\,dB_{s},\quad t\in [0,T],$ \item [\rm(d)]$Y_{t}\ge L_{t},\, t\in [0,T]$, $\int_{0}^{T}(Y_{t}-L_{t})\,dK_{t}=0.$ \end{enumerate} \end{df}
\begin{stw}\label{stw2.5} Let $(Y^{1},Z^{1},K^{1}), (Y^{2},Z^{2},K^{2})$ be solutions of \mbox{\rm RBSDE}$(\xi^{1},f^{1},L^{1})$, \mbox{\rm RBSDE}$(\xi^{2},f^{2},L^{2})$, respectively. Assume that $(Y^{1}-Y^{2})^{+}\in\mathcal{S}^{q}$ for some $q>1$. If $\xi^{1}\le\xi^{2}$, $L^{1}_{t}\le L^{2}_{t}$, $t\in [0,T]$, and either \mbox{\rm (\ref{eq4.01})} or \mbox{\rm(\ref{eq4.02})} is satisfied then $Y^{1}_{t}\le Y^{2}_{t}$, $t\in [0,T]$. \end{stw} \begin{dow} Assume that (\ref{eq4.01}) is satisfied. Let $q>1$ be such that $(Y^{1}-Y^{2})^{+}\in\mathcal{S}^{q}$. Without loss of generality we may assume that $\mu\le 0$. By the It\^o-Tanaka formula, for $p\in(1,q)$ and every stopping time $\tau\le T$, \begin{align}\label{eqt}
\nonumber &|(Y^{1}_{t\wedge\tau}-Y^{2}_{t\wedge\tau})^{+}|^{p} +\frac{p(p-1)}{2}\int_{t\wedge\tau}^{\tau}\mathbf{1}_{\{Y^{1}_{s}
\neq Y^{2}_{s}\}}|(Y^{1}_{s}-Y^{2}_{s})^{+}|^{p-2}
|Z^{1}_{s}-Z^{2}_{s}|^{2}\,ds\nonumber\\
&\quad=|(Y^{1}_{\tau}-Y^{2}_{\tau})^{+}|^{p}
+p\int_{t\wedge\tau}^{\tau}|(Y^{1}_{s}-Y^{2}_{s})^{+}|^{p-1} (f^{1}(s,Y^{1}_{s},Z^{1}_{s})-f^{2}(s,Y^{2}_{s},Z^{2}_{s}))\,ds\nonumber\\
&\qquad+p\int_{t\wedge\tau}^{\tau} |(Y^{1}_{s}-Y^{2}_{s})^{+}|^{p-1 }\, (dK^{1}_{s}-dK^{2}_{s})\nonumber\\
&\qquad-p\int_{t\wedge\tau}^{\tau}|(Y^{1}_{s}-Y^{2}_{s})^{+}|^{p-1} (Z^{1}_{s}-Z^{2}_{s})\,dB_{s}. \end{align}
By monotonicity of the function $x\mapsto \hat{x}|x|^{p-1}$, condition (d) of the definition of a solution of reflected BSDE and the fact that $L^1_t\le L^2_t$ for $t\in[0,T]$, \begin{align*}
\int_{t\wedge\tau}^{\tau}|(Y^{1}_{s}-Y^{2}_{s})^{+}|^{p-1}\, (dK^{1}_{s}-dK^{2}_{s})&\le \int_{t\wedge\tau}^{\tau}
|(Y^{1}_{s}-Y^{2}_{s})^{+}|^{p-1}\,dK^{1}_{s}\\&
\le \int_{t\wedge\tau}^{\tau} |(Y^{1}_{s}-L^{1}_{s})^{+}|^{p-1}\,dK^{1}_{s}=0. \end{align*} Combining this with (\ref{eqt}) we get estimate (\ref{eq3.1}) in Proposition \ref{stw2}. Therefore repeating arguments following (\ref{eq3.1}) in the proof of that proposition we obtain the desired result. The proof in case (\ref{eq4.02}) is satisfied is analogous and therefore left to the reader. \end{dow}
\begin{stw}\label{stw3} If $f$ satisfies \mbox{\rm(H2)}, \mbox{\rm(H3)} then there exists at most one solution $(Y,Z,K)$ of \mbox{\rm RBSDE}$(\xi,f,L)$ such that $Y\in \mathcal{S}^{p}$ for some $p>1$. \end{stw} \begin{dow} Follows immediately from Proposition \ref{stw2.5} and uniqueness of the Doob-Meyer decomposition of semimartingales. \end{dow}
\begin{tw}\label{tw1} Let $p>1$. \begin{enumerate} \item[\rm(i)] Assume \mbox{\rm(H1)}--\mbox{\rm(H6)}. Then there exists a solution $(Y,Z,K)$ of \mbox{\rm RBSDE}$(\xi,f,L)$ such that $(Y,Z,K)\in \mathcal{S}^{p}\otimes M^{p}\otimes\mathcal{V}^{+,p}_{c}$ iff \mbox{\rm(H7)} is satisfied. \item[\rm(ii)]Assume \mbox{\rm(H1)}--\mbox{\rm(H7)}. For $n\in\mathbb{N}$ let $(Y^{n},Z^{n})$ be a solution of the BSDE \begin{equation} \label{eq5.01} Y^{n}_{t}=\xi+\int_{t}^{T}f(s,Y^{n}_{s},Z^{n}_{s})\,ds +\int_{t}^{T}dK^n_s-\int_{t}^{T}Z^{n}_{s}\,dB_{s},\, t\in [0,T] \end{equation} with \begin{equation} \label{eq5.02} K_{t}^{n}=\int_{0}^{t}n(Y^{n}_{s}-L_{s})^{-}\,ds \end{equation} such that $(Y^{n},Z^{n})\in\mathcal{S}^{p}\otimes M^{p}$. Then \begin{equation}
\label{eq5.03} E\sup_{t\le T}|Y^{n}_{t}-Y_{t}|^{p} +E\sup_{t\le T}|K^{n}_{t}-K_{t}|^{p}
+E(\int_{0}^{T}|Z^{n}_{t}-Z_{t}|^{2}dt)^{p/2}\rightarrow 0 \end{equation} as $n\rightarrow +\infty$. \end{enumerate} \end{tw} \begin{dow} Without loss of generality we may assume that $\mu\le0$. Assume that there is a solution $(Y,Z,K)\in\mathcal{S}^{p}\otimes M^{p}\otimes\mathcal{V}^{+,p}_{c} $ of RBSDE$(\xi,f,L)$. Then by \cite[Remark 4.3]{BDHPS}, \begin{equation}\label{eq4.1}
E(\int_{0}^{T}|f(s,Y_{s},Z_{s})|\,ds)^{p}\le cE\{|\xi|^{p} +(\int_{0}^{T}f_{s}\,ds)^{p}+K^{p}_{T}\} \end{equation} which in view of (H2) and the fact that $Y_{t}\ge L_{t}$, $t\in [0,T]$ shows (H7). Conversely, let us assume that (H1)--(H7) are satisfied. Let $(Y^{n},Z^{n})$ be a solution of (\ref{eq5.01}) such that $(Y^{n},Z^{n})\in \mathcal{S}^{p}\otimes M^{p}$. We will show that there exists a process $\overline{X}\in\mathcal{H}^{p}_{c}$ such that $\overline{X}_{t}\ge Y^{n}_{t}$, $t\in [0,T]$ for every $n\in\mathbb{N}$. Since $X\in{\mathcal{H}}^{p}_{c}$, there exist $M\in {\mathcal{M}}^{p}_{c}$ and $V\in\mathcal{V}^{p}_{c}$ such that $X=V+M$. By the representation property of Brownian filtration, there exists $Z'\in M^{p}$ such that \[ X_{t}=X_{T}-{\int_{t}^{T}} dV_{s}-{\int_{t}^{T}} Z'_{s}\,dB_{s},\quad t\in [0,T]. \] The above identity can be rewritten in the form \begin{align*} X_{t}&=X_{T}+{\int_{t}^{T}} f(s,X_{s},Z'_{s})\,ds -{\int_{t}^{T}} (f^{+}(s,X_{s},Z'_{s})\,ds+dV^{+}_{s})\\&\quad +{\int_{t}^{T}} (f^{-}(s,X_{s},Z'_{s})\,ds+dV^{-}_{s}) -{\int_{t}^{T}} Z'_{s}\,dB_{s},\quad t\in [0,T]. \end{align*} By \cite[Theorem 4.2]{BDHPS}, there exists a unique solution $(\overline{X},\overline{Z})\in\mathcal{S}^{p}\otimes M^{p}$ of the BSDE \begin{align*} \overline{X}_{t}=\xi\vee X_T +{\int_{t}^{T}} f(s,\overline{X}_{s},\overline{Z}_{s})\,ds +{\int_{t}^{T}} (f^{-}(s,X_{s},Z'_{s})\,ds+dV^{-}_{s})-{\int_{t}^{T}} \overline{Z}_{s}\,dB_{s}. \end{align*} By Proposition \ref{stw2}, $\overline{X}_{t}\ge X_{t}\ge L_{t},\, t\in [0,T]$. Hence \begin{align*} \overline{X}_{t}&=\xi\vee X_T +{\int_{t}^{T}} f(s,\overline{X}_{s},\overline{Z}_{s})\,ds +{\int_{t}^{T}} n(\overline{X}_{s}-L_{s})^{-}\,ds \\&\quad+{\int_{t}^{T}} (f^{-}(s,X_{s},Z'_{s})\,ds +dV^{-}_{s})-{\int_{t}^{T}} \overline{Z}_{s}\,dB_{s},\quad t\in [0,T], \end{align*} so using once again Proposition \ref{stw2} we see that $\overline{X}_{t}\ge Y^{n}_{t}$, $t\in [0,T]$. By \cite[Remark 4.3]{BDHPS},
$E(\int_{0}^{T}|f(s,\overline{X}_{s},0)|\,ds)^{p}<\infty$. Hence, by Lemma \ref{lm1} and Proposition \ref{stw1}, \begin{align} \label{eq4.2}
&E|Y^{n,*}_{T}|^{p}+E(\int_{0}^{T}|Z^{n}_{s}|^{2}\,ds)^{p/2}
+E|K^{n}_{T}|^{p}\nonumber\\
&\qquad\le C(p,\lambda,T)E\{|\xi|^{p} +(\int_{0}^{T}f_{s}\,ds)^{p}
+(\int_{0}^{T}|f(s,\overline{X}_{s},0)|\,ds)^{p}\}. \end{align} From this and \cite[Remark 4.3]{BDHPS}, \begin{equation}
\label{T1} E(\int_{0}^{T}|f(s,Y^{n}_{s},Z_{s}^{n})|\,ds)^{p}\le C'(p,\lambda,T). \end{equation} By Proposition \ref{stw2} there exists a progressively measurable process $Y$ such that $Y^{n}_{t}\uparrow Y_{t}$, $t\in [0,T]$. Using the monotone convergence of $\{Y^{n}\}$, (H3)--(H5), (\ref{eq4.2}), (\ref{T1}) and the Lebesgue dominated convergence theorem we conclude that \begin{equation} \label{T2}
E(\int_{0}^{T}|f(s,Y^{n}_{s},0)-f(s,Y_{s},0)|\,ds)^{p}\rightarrow 0. \end{equation} Moreover, by (H2) and (\ref{eq4.2}), \[ \sup_{n\ge1}E\int_{0}^{T}
|f(s,Y^{n}_{s},Z^{n}_{s})-f(s,Y^{n}_{s},0)|^{p}\,ds<\infty. \] It follows in particular that there exists a process $\eta\in\mathbb{L}^{p}({\mathcal{F}})$ such that \[ \int_{0}^{\tau}(f(s,Y^{n}_{s},Z^{n}_{s})-f(s,Y^{n}_{s},0))\,ds \rightarrow \int_{0}^{\tau}\eta_{s}\,ds \] weakly in $\mathbb{L}^{1}({\mathcal{F}}_{T})$ for every stopping time $\tau\le T$. Consequently, by Lemmas \ref{lm3} and \ref{lm5}, $Y$ is a c\`adl\`ag process and there exist $Z\in M^{p}$ and a c\`adl\`ag increasing process $K$ such that $K_{0}=0$ and \begin{equation} \label{eq4.3} Y_{t}=\xi+{\int_{t}^{T}} f(s,Y_{s},0)\,ds+{\int_{t}^{T}}\eta_{s}\,ds+{\int_{t}^{T}} dK_{s} -{\int_{t}^{T}} Z_{s}\,dB_{s},\quad t\in [0,T]. \end{equation} From (\ref{eq5.01}), (\ref{eq4.2}), (\ref{T1}) and pointwise convergence of the sequence $\{Y^{n}\}$ one can deduce that $E\int_{0}^{T}(Y_{s}-L_{s})^{-}\,ds=0$, which when combined with (H6) and the fact that $Y$ is c\`adl\`ag implies that $Y_{t}\ge L_{t}$, $t\in [0,T]$. From this, the monotone character of the convergence of the sequence $\{Y^{n}\}$ and Dini's theorem we conclude that \begin{align}\label{eq4.4}
E|(Y^{n}-L)^{-,*}_{T}|^{p}\rightarrow 0. \end{align} By Proposition \ref{prop.ito}, for $n,m\in\mathbb{N}$ we have \begin{align}\label{eq4.5}
\nonumber &|Y^{n}_{t}-Y^{m}_{t}|^{p}+c(p){\int_{t}^{T}}
|Y^{n}_{s}-Y^{m}_{s}|^{p-2} \mathbf{1}_{\{Y^{n}_{s}-Y^{m}_{s}\neq 0\}} |Z^{n}_{s}-Z^{m}_{s}|^{2}\,ds\\
\nonumber &\qquad = p{\int_{t}^{T}} |Y^{n}_{s}-Y^{m}_{s}|^{p-1} \widehat{Y^{n}_{s}-Y^{m}_{s}} (f(s,Y^{n}_{s},Z^{n}_{s})-f(s,Y^{m}_{s},Z^{m}_{s}))\,ds\\
\nonumber &\qquad\quad+p{\int_{t}^{T}}|Y^{n}_{s}-Y^{m}_{s}|^{p-1} \widehat{Y^{n}_{s}-Y^{m}_{s}}(dK^{n}_{s}-dK^{m}_{s})\\
&\qquad\quad -p{\int_{t}^{T}} |Y^{n}_{s}-Y^{m}_{s}|^{p-1} \widehat{Y^{n}_{s}-Y^{m}_{s}}(Z^{n}_{s}-Z^{m}_{s})\,dB_{s},\quad t\in [0,T]. \end{align}
By monotonicity of the function $\mathbb{R}\ni x\mapsto|x|^{p-1}\hat{x}$, \begin{align} \label{eq4.6}
{\int_{t}^{T}}|Y^{n}_{s}-Y^{m}_{s}|^{p-1}\widehat{Y^{n}_{s}-Y^{m}_{s}}\,dK^{n}_{s}
\le {\int_{t}^{T}}|(Y^{m}_{s}-L_{s})^{-}|^{p-1} \widehat{(Y^{m}_{s}-L_{s})^{-}}\,dK^{n}_{s} \end{align} and \begin{align} \label{eq4.7}
-{\int_{t}^{T}}|Y^{n}_{s}-Y^{m}_{s}|^{p-1} \widehat{Y^{n}_{s}-Y^{m}_{s}}\,dK^{m}_{s}
\le {\int_{t}^{T}}|(Y^{n}_{s}-L_{s})^{-}|^{p-1} \widehat{(Y^{n}_{s}-L_{s})^{-}}\,dK^{m}_{s}. \end{align} By (H2), (H3), (\ref{eq4.5})--(\ref{eq4.7}) and H\"older's inequality, \begin{align} \label{eq.tem2012}
\nonumber & E|Y^{n}_{t}-Y^{m}_{t}|^{p}
+c(p)E{\int_{t}^{T}}|Y^{n}_{s}-Y^{m}_{s}|^{p-2} \mathbf{1}_{\{Y^{n}_{s}-Y^{m}_{s}\neq0\}}
|Z^{n}_{s}-Z^{m}_{s}|^{2}\,ds\\ &\nonumber\quad\le p\lambda E\int_{0}^{T}
|Y^{n}_{s}-Y^{m}_{s}|^{p-1}|Z^{n}_{s}-Z^{m}_{s}|\,ds
+(E|(Y^{n}-L)^{-,*}_{T}|^{p})^{(p-1)/p}(E|K^{m}_{T}|^{p})^{1/p} \\&\qquad
+(E|(Y^{m}-L)^{-,*}_{T}|^{p})^{(p-1)/p}(E|K^{n}_{T}|^{p})^{1/p}. \end{align} Since \begin{align*}
p\lambda |Y^{n}_{s}-Y^{m}_{s}|^{p-1}|Z^{n}_{s}-Z^{m}_{s}|
&\le\frac{p\lambda^{2}}{1\wedge(p-1)}|Y^{n}_{s}-Y^{m}_{s}|^{p}\\&
\quad+\frac{c(p)}{2}\mathbf{1}_{\{Y^{n}_{s}-Y^{m}_{s} \neq 0\}}|Y^{n}_{s}-Y^{m}_{s}|^{p-2}|Z^{n}_{s}-Z^{m}_{s}|^{2}, \end{align*} from (\ref{eq.tem2012}) we get \begin{align*}
\nonumber & E|Y^{n}_{t}-Y^{m}_{t}|^{p}
+\frac{c(p)}{2}E{\int_{t}^{T}}|Y^{n}_{s}-Y^{m}_{s}|^{p-2} \mathbf{1}_{\{Y^{n}_{s}-Y^{m}_{s}\neq0\}}
|Z^{n}_{s}-Z^{m}_{s}|^{2}\,ds\\
&\nonumber\quad\le c(p,\lambda) E\int_{0}^{T}|Y^{n}_{s}-Y^{m}_{s}|^{p}\,ds
+(E|(Y^{n}-L)^{-,*}_{T}|^{p})^{(p-1)/p}(E|K^{m}_{T}|^{p})^{1/p} \\&\qquad
+(E|(Y^{m}-L)^{-,*}_{T}|^{p})^{(p-1)/p}(E|K^{n}_{T}|^{p})^{1/p}=:I^{n,m}. \end{align*} From the above, (\ref{eq4.2}), (\ref{eq4.4}) and the monotone convergence of $\{Y^{n}\}$ we get \begin{equation} \label{T3} \lim_{n,m\rightarrow +\infty} I^{n,m}=0 \end{equation} which implies that \begin{equation}\label{eq4.9}
\lim_{n,m\rightarrow+\infty}E\int_{0}^{T}|Y^{n}_{s}-Y^{m}_{s}|^{p-2} \mathbf{1}_{\{Y^{n}_{s}\neq Y^{m}_{s}\}}
|Z^{n}_{s}-Z^{m}_{s}|^{2}\,ds=0. \end{equation} From (\ref{eq4.5}) one can also conclude that \begin{align*}
&E\sup_{0\le t\le T}|Y^{n}_{t}-Y^{m}_{t}|^{p}\\
&\qquad\le c'(p,\lambda) \{I^{n,m}+E\sup_{0\le t\le T}|{\int_{t}^{T}}
|Y^{n}_{s}-Y^{m}_{s}|^{p-1}
\widehat{Y^{n}_{s}-Y^{m}_{s}}(Z^{n}_{s}-Z^{m}_{s})\,dB_{s}|\}. \end{align*} Using the Burkholder-Davis-Gundy inequality and then Young's inequality we deduce from the above that \[
E\sup_{0\le t\le T}|Y^{n}_{t}-Y^{m}_{t}|^{p}\le c''(p,\lambda) \{I^{n,m}+E\int_{0}^{T}\mathbf{1}_{\{Y^{n}_{s}
\neq Y^{m}_{s}\}} |Y^{n}_{s}-Y^{m}_{s}|^{p-2}
|Z^{n}_{s}-Z^{m}_{s}|^{2}\,ds\}. \] Hence, by (\ref{T3}) and (\ref{eq4.9}), \begin{align}\label{eq4.10}
\lim_{n,m\rightarrow+\infty}E\sup_{0\le t\le T}|Y^{n}_{t}-Y^{m}_{t}|^{p}=0, \end{align} which implies that $Y\in\mathcal{S}^{p}$. Our next goal is to show that \begin{equation}\label{eq4.11} \lim_{n,m\rightarrow+\infty}
E(\int_{0}^{T}|Z^{n}_{t}-Z^{m}_{t}|^{2}\,dt)^{p/2}=0. \end{equation}
By It\^o's formula applied to $|Y^{n}-Y^{m}|^{2}$, (H2) and (H3), \begin{align*}
\int_{0}^{T}|Z^{n}_{t}-Z^{m}_{t}|^{2}\,dt &\le 2\lambda\int_{0}^{T}|Y^{n}_{t}-Y^{m}_{t}||Z^{n}_{t}-Z^{m}_{t}|\,dt
+2\int_{0}^{T}|Y^{n}_{t}-Y^{m}_{t}|\,dK^{n}_{t}\\&\quad
+2\int_{0}^{T}|Y^{n}_{t}-Y^{m}_{t}|\,dK^{m}_{t} +\sup_{0\le t\le T}|{\int_{t}^{T}} (Z^{n}_{s}-Z^{m}_{s}) (Y^{n}_{s}-Y^{m}_{s})\,dB_{s}|. \end{align*} Hence, by the Burkholder-Davis-Gundy inequality and Young's inequality, \begin{align*}
&E(\int_{0}^{T}|Z^{n}_{t}-Z^{m}_{t}|^{2}\,dt)^{p/2}\le C(p,\lambda)\{E|(Y^{n}-Y^{m})^{*}_{T}|^{p}\\
&\qquad+(E|(Y^{n}-Y^{m})^{*}_{T}|^{p})^{1/2}(E|K^{n}_{T}|^{p})^{1/2}
+(E|(Y^{n}-Y^{m})^{*}_{T}|^{p})^{1/2}(E|K^{m}_{T}|^{p})^{1/2}\}. \end{align*} From the above inequality, (\ref{eq4.2}) and (\ref{eq4.10}) we get (\ref{eq4.11}). From (\ref{eq4.11}) and (\ref{eq4.3}) it follows immediately that \[ Y_{t}=\xi+\int_{t}^{T}f(s,Y_{s},Z_{s})\,ds+\int_{t}^{T}dK_{s} -\int_{t}^{T}Z_{s}\,dB_{s},\quad t\in [0,T], \] which implies that $K$ is continuous. In fact, by (\ref{eq4.2}), $K\in \mathcal{V}^{+,p}_{c}$. Moreover, from (\ref{eq5.01}), (\ref{T1}), (\ref{T2}), (\ref{eq4.10}), (\ref{eq4.11}) and (H2) we deduce that \begin{align}\label{eq4.12} \lim_{n,m\rightarrow+\infty}E\sup_{0\le t\le T}
|K^{n}_{t}-K^{m}_{t}|^{p}=0. \end{align} Since $\int_{0}^{T}(Y^{n}_{t}-L_{t})\,dK^{n}_{t}\le 0$, it follows from (\ref{eq4.10}), (\ref{eq4.12}) that $\int_{0}^{T}(Y_{t}-L_{t})\,dK_{t}\le0$, which when combined with the fact that $Y_{t}\ge L_{t}$, $t\in [0,T]$ shows that \[ \int_{0}^{T}(Y_{t}-L_{t})\,dK_{t}=0. \] Thus the triple $(Y,Z,K)$ is a solution of RBSDE$(\xi,f,L)$, which completes the proof of (i). Assertion (ii) follows from (\ref{eq4.10})--(\ref{eq4.12}). \end{dow}
\begin{uw} Let $p>1$ and let assumptions (H1)--(H3) hold. If $(Y,Z,K)$ is a solution of RBSDE$(\xi,f,L)$ such that $(Y,Z)\in \mathcal{S}^{p}\otimes M^{p}$ then from \cite[Remark 4.3]{BDHPS} it follows immediately that \[
E(\int_{0}^{T}|f(s,Y_{s},Z_{s})|\,ds)^{p}<+\infty \mbox{ iff } EK^{p}_{T}<+\infty. \] Moreover, if there exists $X\in{\mathcal{H}}^{p}_{c}$ such that $E(\int_{0}^{T}f^{-}(s,X_{s},0)\,ds)^{p}<+\infty$ then \begin{equation} \label{eq5.14} E(\int_{0}^{T}\mathbf{1}_{\{Y_{s}\le X_{s}\}}\, dK_{s})^{p}<+\infty. \end{equation} Indeed, since $X\in{\mathcal{H}}^{p}_{c} $, there exist $M\in{\mathcal{M}}^{p}_{c}$ and $V\in\mathcal{V}^{p}_{c}$ such that $X_{t}=X_{0}+M_{t}+V_{t}$, $t\in [0,T]$. Let $L^{0}(Y-X)$ denote the local time of $Y-X$ at 0. By (H2), (H3) and the It\^o-Tanaka formula applied to $(Y-X)^{-}$, \begin{align*} \int_{0}^{T}\mathbf{1}_{\{Y_{s}\le X_{s}\}}\,dK_{s} &=(Y_{T}-X_{T})^{-}-(Y_{0}-X_{0})^{-} -\int_{0}^{T}\mathbf{1}_{\{Y_{s}\le X_{s}\}}f(s,Y_{s},Z_{s})\,ds \\&\quad -\int_{0}^{T}\mathbf{1}_{\{Y_{s}\le X_{s}\}} dV_{s} -\frac12\int_{0}^{T}dL^{0}_{s}(Y-X) -\int_{0}^{T}\mathbf{1}_{\{Y_{s}\le X_{s}\}} Z_{s}\,dB_{s}\\ &\quad +\int_{0}^{T}\mathbf{1}_{\{Y_{s}\le X_{s}\}}\,dM_{s}\\
&\le2Y^{*}_{T}+2X^{*}_{T}-\int_{0}^{T}\mathbf{1}_{\{Y_{s} \le X_{s}\}}f(s,X_{s},0)\,ds+\lambda\int_{0}^{T}|Z_{s}|\,ds\\
&\quad+\int_{0}^{T}d|V|_{s}-\int_{0}^{T}\mathbf{1}_{\{Y_{s}\le X_{s}\}} Z_{s}\,dB_{s}+\int_{0}^{T}\mathbf{1}_{\{Y_{s}\le X_{s}\}} \,dM_{s}, \end{align*} from which one can easily get (\ref{eq5.14}). \end{uw}
We close this section with an example which shows that assumption (\ref{i7}) is not necessary for the existence of $p$-integrable solutions of reflected BSDEs.
\begin{prz}
Let $V_{t}=\exp(|B_{t}|^{4})$, $t\in [0,T]$. Observe that \[ P(\int_{0}^{T}V_{t}\,dt<+\infty)=1,\quad E\int_{a}^{T}V_{t}\,dt =+\infty,\quad a\in(0,T). \] Now, set $\xi\equiv 0$, $f(t,y)=-(y-(T-t))^{+}V_{t}$, $L_{t}=T-t$, $t\in [0,T]$. Then $\xi,f,L$ satisfy (H1)--(H7) with $p=2$. On the other hand, \begin{align*} E\int_{0}^{T}f^{-}(t,L^{*}_{t})\,dt=E\int_{0}^{T}f^{-}(t,T)\,dt =E\int_{0}^{T}tV_{t}\,dt\ge aE\int_{a}^{T}V_{t}\,dt=+\infty. \end{align*} \end{prz}
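\begin{uw}
The two properties of $V$ used in the above example can be checked by an elementary computation, which we sketch for the reader's convenience. Since $t\mapsto V_{t}$ has continuous trajectories, it is a.s. bounded on $[0,T]$, so $P(\int_{0}^{T}V_{t}\,dt<+\infty)=1$. On the other hand, for every $t>0$,
\[
EV_{t}=E\exp(|B_{t}|^{4})=\int_{-\infty}^{+\infty}\frac{1}{\sqrt{2\pi t}}
\exp\Big(x^{4}-\frac{x^{2}}{2t}\Big)\,dx=+\infty,
\]
because the integrand tends to infinity as $|x|\rightarrow+\infty$. Hence, by Fubini's theorem,
\[
E\int_{a}^{T}V_{t}\,dt=\int_{a}^{T}EV_{t}\,dt=+\infty,\quad a\in(0,T).
\]
\end{uw}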
\nsubsection{Existence and uniqueness results for $p=1$} \label{sec6}
We first prove uniqueness. \begin{stw}\label{stw4} If $f$ satisfies \mbox{\rm(H2)}, \mbox{\rm(H3)} and \mbox{\rm(Z)} then there exists at most one solution $(Y,Z,K)$ of \mbox{\rm RBSDE}$(\xi,f,L)$ such that $Y$ is of class \mbox{\rm(D)} and $Z\in\bigcup_{\beta>\alpha} M^{\beta}$. \end{stw} \begin{dow} Without loss of generality we may assume that $\mu\le 0$. Let
$(Y^{1},Z^{1},K^{1})$, $(Y^{2},Z^{2},K^{2})$ be two solutions to RBSDE$(\xi,f,L)$. By Proposition \ref{stw2.5} it suffices to prove that $|Y^{1}-Y^{2}|\in \mathcal{S}^{p}$ for some $p>1$. Write $Y=Y^{1}-Y^{2}$, $Z=Z^{1}-Z^{2}$, $K=K^{1}-K^{2}$ and $\tau_{k}=\inf\{t\in[0,T];
\int_{0}^{t}(|Z^{1}_{s}|^{2}+|Z^{2}_{s}|^{2})\,ds>k\}\wedge T$. Then by the It\^o formula (see \cite[Corollary 2.3]{BDHPS}), \begin{align*}
|Y_{t\wedge\tau_{k}}|&\le |Y_{\tau_{k}}| +\int_{t\wedge \tau_{k}}^{\tau_{k}}\hat{Y}_{s} (f(s,Y^{1}_{s},Z^{1}_{s})-f(s,Y^{2}_{s},Z^{2}_{s}))\,ds\\&\quad +\int_{t\wedge \tau_{k}}^{\tau_{k}}\hat{Y}_{s}\,dK_{s} -\int_{t\wedge \tau_{k}}^{\tau_{k}}\hat{Y}_{s}Z_{s}\,dB_{s}, \quad t\in [0,T]. \end{align*} By the minimality property (d) of the reaction measures $K^{1}, K^{2}$ in the definition of a solution of RBSDE$(\xi,f,L)$, $\int_{0}^{T}\hat{Y}_{s}\,dK_{s}\le 0$. Hence \begin{align*}
|Y_{t\wedge\tau_{k}}|&\le |Y_{\tau_{k}}| +\int_{t\wedge \tau_{k}}^{\tau_{k}}\hat{Y}_{s} (f(s,Y^{1}_{s},Z^{1}_{s})-f(s,Y^{2}_{s},Z^{2}_{s}))\,ds -\int_{t\wedge \tau_{k}}^{\tau_{k}} \hat{Y}_{s}Z_{s}\,dB_{s}\\
&\le |Y_{\tau_{k}}|
+\int_{0}^{T}|f(s,Y^{1}_{s},Z^{1}_{s})-f(s,Y^{1}_{s},Z^{2}_{s})|\,ds -\int_{t\wedge \tau_{k}}^{\tau_{k}}\hat{Y}_{s}Z_{s}\,dB_{s} \end{align*} for $t\in[0,T]$, the last inequality being a consequence of (H3). Consequently, \begin{align*}
|Y_{t\wedge\tau_{k}}|\le E^{{\mathcal{F}}_{t}}(|Y_{\tau_{k}}|
+\int_{0}^{T}|f(s,Y^{1}_{s},Z^{1}_{s})-f(s,Y^{1}_{s},Z^{2}_{s})|\,ds), \quad t\in [0,T]. \end{align*} Since $Y$ is of class (D), letting $k\rightarrow +\infty$ we conclude from the above that \begin{align*}
|Y_{t}|\le E^{{\mathcal{F}}_{t}}(\int_{0}^{T}
|f(s,Y^{1}_{s},Z^{1}_{s})-f(s,Y^{1}_{s},Z^{2}_{s})|\,ds),\quad t\in [0,T]. \end{align*} By (Z), \[
|Y_{t}|\le 2\gamma E^{{\mathcal{F}}_{t}}(\int_{0}^{T}
(g_{s}+|Y^{1}_{s}|+|Z^{1}_{s}|+|Z^{2}_{s}|)^{\alpha}\,ds). \]
From this it follows that $|Y|\in \mathcal{S}^{p}$ for some $p>1$, which proves the proposition. \end{dow}
\begin{uw}\label{uw.dic} A brief inspection of the proof of Proposition \ref{stw4} reveals that if $f$ does not depend on $z$ and satisfies (H2) then there exists at most one solution $(Y,Z,K)$ of RBSDE$(\xi,f,L)$ such that $Y$ is of class (D). \end{uw}
\begin{uw}\label{uw2} If (H1), (H3), (Z) are satisfied and $(Y,Z)$ is a unique solution of BSDE$(\xi,f)$ such that $Y$ is of class (D) and $Z\in\mathbb{L}^{\alpha}({\mathcal{F}})$ then \[
E\int_{0}^{T}|f(s,Y_{s},Z_{s})|\,ds<+\infty. \] Indeed, by Proposition \ref{prop.ito}, for every stopping time $\tau\le T$, \begin{align*}
|Y_{t\wedge\tau}|\le |Y_{\tau}| +\int_{t\wedge\tau}^{\tau}\hat{Y}_{s}f(s,Y_{s},Z_{s})\,ds -\int_{t\wedge\tau}^{\tau}\hat{Y}_{s}Z_{s}\,dB_{s},\quad t\in [0,T]. \end{align*} Hence \begin{align*} -\int_{t\wedge\tau}^{\tau}\hat{Y}_{s} (f(s,Y_{s},Z_{s})-f(s,0,Z_{s}))\,ds &\le
|Y_{\tau}|-|Y_{t\wedge\tau}|
+\int_{t\wedge\tau}^{\tau}|f(s,0,Z_{s})|\,ds\\&\quad -\int_{t\wedge\tau}^{\tau}\hat{Y}_{s}Z_{s}\,dB_{s}. \end{align*} By the above inequality, (H3) (without loss of generality we may assume that $\mu\le 0$) and (Z), for $t\in[0,T]$ we have \begin{align*}
&E\int_{t\wedge\tau_k}^{\tau_k}|f(s,Y_{s},Z_{s})-f(s,0,Z_{s})|\,ds\\
&\qquad\le E|Y_{\tau_k}|
+\gamma E\int_{t\wedge\tau_k}^{\tau_k}(g_{s}+|Z_{s}|+|Y_{s}|)^{\alpha}\,ds +\int_{t\wedge\tau_k}^{\tau_k}f_{s}\,ds, \end{align*} where $\tau_{k}$ is defined by (\ref{eq3.01}). Since $Y$ is of class (D), letting $k\rightarrow +\infty$ we obtain \begin{align*}
E\int_{0}^{T}|f(s,Y_{s},Z_{s})-f(s,0,Z_{s})|\,ds \le E|\xi|+\gamma E\int_{0}^{T}(g_{s}+|Z_{s}|+|Y_{s}|)^{\alpha}\,ds +\int_{0}^{T}f_{s}\,ds. \end{align*} Using once again (Z) we conclude from the above that \begin{align*}
E\int_{0}^{T}|f(s,Y_{s},Z_{s})|\,ds \le E|\xi|+2\gamma E\int_{0}^{T}(g_{s}+|Z_{s}|+|Y_{s}|)^{\alpha}\,ds+ 2\int_{0}^{T} f_{s}\,ds<+\infty. \end{align*} \end{uw}
\begin{tw}\label{tw2} Let $p=1$. \begin{enumerate} \item[\rm(i)]Assume \mbox{\rm(H1)}--\mbox{\rm(H6)}, \mbox{\rm(Z)}. Then there exists a solution $(Y,Z,K)$ of \mbox{\rm RBSDE}$(\xi,f,L)$ such that $Y$ is of class \mbox{\rm(D)}, $K\in \mathcal{V}^{+,1}_{c}$ and $Z\in\bigcap_{q<1}M^{q}$ iff \mbox{\rm(H7*)} is satisfied. \item[\rm(ii)]Assume \mbox{\rm(H1)}--\mbox{\rm(H6)}, \mbox{\rm(H7*)} and for $n\in{\mathbb N}$ let $(Y^{n},Z^{n})$ be a solution of \mbox{\rm(\ref{eq5.01})} such that $(Y^{n},Z^{n})\in \mathcal{S}^{q}\otimes M^{q}$, $q\in(0,1)$, and $Y^{n}$ is of class \mbox{\rm(D)}. Let $K^{n}$ be defined by \mbox{\rm(\ref{eq5.02})}. Then for every $q\in(0,1)$, \[
E\sup_{t\le T}|Y^{n}_{t}-Y_{t}|^{q} +E\sup_{t\le T}|K^{n}_{t}-K_{t}|^{q}
+E(\int_{0}^{T}|Z^{n}_{t}-Z_{t}|^{2}\,dt)^{q/2}\rightarrow 0 \] as $n\rightarrow +\infty$. \end{enumerate} \end{tw} \begin{dow} (i) Necessity. By Remark \ref{uw2}, if there is a solution $(Y,Z,K)$ of RBSDE$(\xi,f,L)$ such that $(Y,Z)\in \mathcal{S}^{q}\otimes M^{q}$, $q\in (0,1)$, $K\in\mathcal{V}^{+,1}_{c}$ and $Y$ is of class (D) then (H7*) is satisfied with $X=Y$.
\\ Sufficiency. We first show that the sequence $\{Y^{n}\}$ is nondecreasing. To this end, let us put $f_{n}(t,y,z)=f(t,y,z)+n(y-L_{t})^{-}$. Since the exponential change of variable described at the beginning of the proof of Lemma \ref{lm1} does not change the monotonicity of the sequence $\{Y^{n}\}$, we may and will assume that the mapping $\mathbb{R}\ni y\mapsto f_{n}(t,y,0)$ is nonincreasing. By the It\^o-Tanaka formula, for every stopping time $\tau\le T$, \begin{align*} &(Y^{n}_{t\wedge\tau}-Y^{n+1}_{t\wedge\tau})^{+} +\frac12\int_{\tau\wedge t}^{\tau}dL^{0}_{s}(Y^{n}-Y^{n+1})\\ &\qquad=(Y^{n}_{\tau}-Y^{n+1}_{\tau})^{+} +\int_{t\wedge\tau}^{\tau} \mathbf{1}_{\{Y^{n}_{s} > Y^{n+1}_{s}\}}(f_{n}(s,Y^{n}_{s},Z^{n}_{s}) -f_{n+1}(s,Y^{n+1}_{s},Z^{n+1}_{s}))\,ds\\ &\qquad\quad-\int_{t\wedge\tau}^{\tau} \mathbf{1}_{\{Y^{n}_{s} > Y^{n+1}_{s}\}}(Z^{n}_{s}-Z^{n+1}_{s})\,dB_{s}. \end{align*} Taking the conditional expectation with respect to ${\mathcal{F}}_t$ on both sides of the above equality with $\tau$ replaced by $\tau_k=\inf\{t\in [0,T];\int_{0}^{t}
|Z^{n}_{s}-Z^{n+1}_{s}|^{2}\,ds\ge k\}\wedge T$, letting $k\rightarrow +\infty$ and using the fact that $Y$ is of class (D) we obtain \begin{align} \label{T4} (Y^{n}_{t}-Y^{n+1}_{t})^{+}&\le E^{\mathcal{F}_{t}}\int_{t}^{T} \mathbf{1}_{\{Y^{n}_{s}>Y^{n+1}_{s}\}}(f_{n}(s,Y^{n}_{s},Z^{n}_{s}) -f_{n+1}(s,Y^{n+1}_{s},Z^{n+1}_{s}))\,ds. \end{align} From the above inequality and the fact that $f_{n}\le f_{n+1}$ we get \begin{align*} &\int_{t}^{T} \mathbf{1}_{\{Y^{n}_{s} > Y^{n+1}_{s}\}}(f_{n}(s,Y^{n}_{s},Z^{n}_{s}) -f_{n+1}(s,Y^{n}_{s},Z^{n}_{s}))\,ds\\&\quad +\int_{t}^{T} \mathbf{1}_{\{Y^{n}_{s} > Y^{n+1}_{s}\}}(f_{n+1}(s,Y^{n}_{s},Z^{n}_{s}) -f_{n+1}(s,Y^{n+1}_{s},Z^{n+1}_{s}))\,ds \\&\le \int_{t}^{T} \mathbf{1}_{\{Y^{n}_{s} >Y^{n+1}_{s}\}}(f_{n+1}(s,Y^{n}_{s},Z^{n}_{s}) -f_{n+1}(s,Y^{n+1}_{s},Z^{n+1}_{s}))\,ds\\&= \int_{t}^{T}\mathbf{1}_{\{Y^{n}_{s} > Y^{n+1}_{s}\}}(f_{n+1}(s,Y^{n}_{s},Z^{n}_{s}) -f_{n+1}(s,Y^{n}_{s},0))\,ds \\&\quad+\int_{t}^{T}\mathbf{1}_{\{Y^{n}_{s} > Y^{n+1}_{s}\}}(f_{n+1}(s,Y^{n}_{s},0) -f_{n+1}(s,Y^{n+1}_{s},0))\,ds \\&\quad+\int_{t}^{T}\mathbf{1}_{\{Y^{n}_{s} > Y^{n+1}_{s}\}}(f_{n+1}(s,Y^{n+1}_{s},0) -f_{n+1}(s,Y^{n+1}_{s},Z^{n+1}_{s}))\,ds. \end{align*} Since $f_{n}(t,y,z)-f_{n}(t,y,z')=f(t,y,z)-f(t,y,z')$ for every $t\in[0,T]$, $y\in\mathbb{R}$, $z,z'\in\mathbb{R}^{d}$, using the monotonicity of $f_{n+1}$ and assumption (Z) we conclude from the above and (\ref{T4}) that for $t\in[0,T]$, \[ (Y^{n}_{t}-Y^{n+1}_{t})^{+} \le 2\gamma E^{\mathcal{F}_{t}}\int_{0}^{T}
(g_{s}+|Y^{n}_{s}|+|Z_{s}^{n}|+|Y^{n+1}_s|+|Z^{n+1}_s|)^{\alpha}\,ds. \] Since $(Y^{n},Z^{n})\in\mathcal{S}^{q}\otimes M^{q}$ for every $q\in (0,1)$, $n\in\mathbb{N}$, it follows from the above estimate that $(Y^{n}-Y^{n+1})^{+}\in \mathcal{S}^{p}$ for some $p>1$. Hence, by Proposition \ref{stw2}, $Y^{n}_{t}\le Y^{n+1}_{t}$, $t\in [0,T]$. Write \[ Y_{t}=\lim_{n\rightarrow +\infty} Y^{n}_{t},\quad t\in [0,T]. \] We are going to show that there is a process $\overline{X}$ of class (D) such that $\overline{X}\in \mathcal{V}^{1}_{c}+{\mathcal{M}}^{q}_{c}$ for $q\in(0,1)$ and $\overline{X}_{t}\ge Y_{t}$, $t\in [0,T]$. Indeed, since $X$ from assumption (H7*) belongs to $\mathcal{V}^{1}_{c}+{\mathcal{M}}^{q}_{c}$ for $q\in (0,1)$, there exist $M\in{\mathcal{M}}^{q}_{c}$ and $V\in\mathcal{V}^{1}_{c}$ such that $X=V+M$. By the representation property of the Brownian filtration there exists $Z'\in M^{q}$ such that \[ X_{t}=X_{T}-{\int_{t}^{T}} dV_{s}-{\int_{t}^{T}} Z'_{s}\,dB_{s},\quad t\in [0,T], \] which we can write in the form \begin{align*} X_{t}&=X_{T}+{\int_{t}^{T}} f(s,X_{s},Z'_{s})\,ds -{\int_{t}^{T}} (f^{+}(s,X_{s},Z'_{s})\,ds+dV^{+}_{s})\\&\quad +{\int_{t}^{T}} (f^{-}(s,X_{s},Z'_{s})\,ds+dV^{-}_{s}) -{\int_{t}^{T}} Z'_{s}\,dB_{s},\quad t\in [0,T]. \end{align*} By \cite[Theorem 6.3]{BDHPS} and Remark \ref{uw2} there exists a unique solution $(\overline{X},\overline{Z})$ of the BSDE \begin{align*} \overline{X}_{t}=\xi\vee X_T+{\int_{t}^{T}} f(s,\overline{X}_{s},\overline{Z}_{s})\,ds +{\int_{t}^{T}} (f^{-}(s,X_{s},Z'_{s})\,ds+dV^{-}_{s}) -{\int_{t}^{T}} \overline{Z}_{s}\,dB_{s} \end{align*} such that $(\overline{X},\overline{Z})\in\bigcap_{q<1}\mathcal{S}^{q} \otimes M^{q}$, $\overline{X}$ is of class (D) and \begin{equation} \label{T5}
E\int_{0}^{T}|f(t,\bar{X}_{t},\bar{Z}_{t})|\,dt<+\infty. \end{equation} As in the proof of the fact that $(Y^{n}-Y^{n+1})^{+}\in \mathcal{S}^{p}$ one can show that for every stopping time $\tau\le T$, \begin{align*} (X_{t\wedge\tau}-\overline{X}_{t\wedge\tau})^{+} &\le(X_{\tau}-\overline{X}_{\tau})^{+} +\int_{t\wedge\tau}^{\tau}\mathbf{1}_{\{X_{s}>\overline{X}_{s}\}} (f(s,X_{s},Z_{s}')-f(s,\overline{X}_{s},\overline{Z}_{s}))\,ds\\ &\quad-2\int_{t\wedge\tau}^{\tau}\mathbf{1}_{\{X_{s}>\overline{X}_{s}\}} (Z'_{s}-\overline{Z}_{s})\,dB_{s}\\ &\le (X_{\tau}-\overline{X}_{\tau})^{+}
+2\gamma\int_{t\wedge\tau}^{\tau}(g_{s}+|X_{s}|+|\bar{X}_{s}|+|Z'_{s}|+|\overline{Z}_{s}|)^{\alpha}\,ds\\ &\quad-2\int_{t\wedge\tau}^{\tau}\mathbf{1}_{\{X_{s} >\overline{X}_{s}\}}(Z'_{s}-\overline{Z}_{s})\,dB_{s}. \end{align*} Let $\tau_{k}=\inf\{t\in [0,T];\int_{0}^{t}
(|Z^{'}_{s}|^{2}+|\overline{Z}_{s}|^{2})\,ds\ge k\}\wedge T$. Then \[ (X_{t\wedge\tau_{k}}-\overline{X}_{t\wedge\tau_{k}})^{+} \le E^{\mathcal{F}_{t}}(X_{\tau_{k}}-\overline{X}_{\tau_{k}})^{+}
+2\gamma E^{{\mathcal{F}}_{t}}\int_{0}^{T}(g_{s}+|X_{s}|+|\bar{X}_{s}|+|Z'_{s}|+|\overline{Z}_{s}|)^{\alpha}\,ds. \] Since $X,\overline{X}$ are of class (D), letting $k\rightarrow+\infty$ we get \[ (X_{t}-\overline{X}_{t})^{+}\le 2\gamma E^{{\mathcal{F}}_{t}}\int_{0}^{T}
(g_{s}+|X_{s}|+|\bar{X}_{s}|+|Z'_{s}|+|\overline{Z}_{s}|)^{\alpha}\,ds. \] Therefore $(X-\overline{X})^{+}\in \mathcal{S}^{p}$ for some $p>1$ since $Z',\overline{Z}\in M^{q}$, $X,\bar{X}\in\mathcal{S}^{q}$, $q\in (0,1)$. Consequently, by Proposition \ref{stw2}, $X_{t}\le \overline{X}_{t}$, $t\in [0,T]$. Thus, \begin{align*} \overline{X}_{t}&=\xi\vee X_{T} +{\int_{t}^{T}} f(s,\overline{X}_{s},\overline{Z}_{s})\,ds +{\int_{t}^{T}} n(\overline{X}_{s}-L_{s})^{-}\,ds \\&\quad+{\int_{t}^{T}} (f^{-}(s,X_{s},Z'_{s})\,ds +dV^{-}_{s})-{\int_{t}^{T}} \overline{Z}_{s}\,dB_{s},\quad t\in [0,T]. \end{align*} As in the case of the process $(X-\overline{X})^{+}$ one can show that $(Y^{n}-\overline{X})^{+}\in\mathcal{S}^{p}$ for some $p>1$. Hence, by Proposition \ref{stw2}, $Y^{n}_{t}\le \overline{X}_{t}$, $t\in [0,T]$ for every $n\in\mathbb{N}$. Furthermore, since $Y^{1},\overline{X}\in \mathcal{S}^{q}$, $q\in (0,1)$, we have \begin{equation}
\label{eq5.1} \sup_{n\ge1}E|Y^{n,*}_{T}|^{q}<+\infty. \end{equation}
It follows in particular that $\sup_{n\ge1}|Y^{n}_{0}|<\infty$ since $Y^{n}_{0}$ are deterministic. Moreover, by Lemma \ref{lm4}, there exists a stationary sequence $\{\sigma^{1}_{k}\}$ of stopping times such that for every $k\in\mathbb{N}$, \begin{equation}
\label{eq5.2} \sup_{n\ge1}|Y^{n,*}_{\sigma^{1}_{k}}| \le k\vee(\sup_{n\ge1}|Y^{n}_{0}|)<+\infty. \end{equation} Set \[ \sigma^{2}_{k}=\inf\{t\in [0,T]; \max\{Y^{1,*}_{t},\overline{X}^{+,*}_{t}, \int_{0}^{t}f^{-}(s,\overline{X}_{s},0)\,ds,
\int_{0}^{t}|f(s,0,0)|\,ds\}>k\}\wedge T \] and $\tau_{k}=\sigma^{1}_{k}\wedge\sigma^{2}_{k}$. It is easy to see that the sequence $\{\tau_{k}\}$ is stationary. Using this and the fact that $Y^n_{\tau_k}$, $f$, $L$ satisfy the assumptions of Theorem \ref{tw1} on the interval $[0,\tau_k]$ one can show that there exist $Y,K\in\mathcal{S}, Z\in M$ such that $K$ is increasing, $K_{0}=0$ and \begin{equation} \label{eq5.3}
\sup_{0\le t \le T}|Y^{n}_{t}-Y_{t}| +\sup_{0\le t\le T}
|K^{n}_{t}-K_{t}|+\int_{0}^{T}|Z^{n}_{s}-Z_{s}|^{2}\,ds\rightarrow 0\mbox{ in probability }P \end{equation} as $n\rightarrow +\infty$. Moreover, one can show that $Y_{t}\ge L_{t}$, $t\in [0,T]$, \begin{align}\label{eq5.4} Y_{t}=\xi+\int_{t}^{T}f(s,Y_{s},Z_{s})\,ds +\int_{t}^{T}dK_{s}-\int_{t}^{T} Z_{s}\,dB_{s},\quad t\in[0,T] \end{align} and \begin{equation} \label{eq5.5} \int_{0}^{T}(Y_{s}-L_{s})\,dK_{s}=0. \end{equation} Accordingly, the triple $(Y,Z,K)$ is a solution of RBSDE$(\xi,f,L)$. The proof of (\ref{eq5.3})--(\ref{eq5.5}) proceeds as in the proof of Theorem \ref{tw1} (see the reasoning following (\ref{eq4.2}) with $p=2$), the only difference being that now we consider equations on $[0,\tau_{k}]$ with terminal values depending on $n$. However, using (\ref{eq5.2}) and the pointwise convergence of $\{Y^{n}\}$ allows us to overcome this difficulty. Since $Y^{1}_{t}\le Y_{t}\le \overline{X}_{t}$, $t\in [0,T]$, and $Y^{1}, \overline{X}$ are of class (D), it follows that $Y$ is of class (D). By Lemma \ref{lm2}, for every $q\in(0,1)$, \begin{equation} \label{eq6.8}
\sup_{n\ge1}E\big((\int_{0}^{T}|Z^{n}_{t}|^{2}\,dt)^{q/2}
+|K^{n}_{T}|^{q}\big)<+\infty. \end{equation} From this and (\ref{eq5.3}) we conclude that
$Z\in\bigcap_{q<1}M^{q}$ and $E|K_{T}|^{q}<\infty$ for $q\in(0,1)$. To see that $EK_{T}<\infty$ let us define $\tau_{k}$ by (\ref{eq3.01}). Then by (\ref{eq5.4}), \begin{equation} \label{eq6.9} K_{\tau_{k}}=Y_{0}-Y_{\tau_{k}} -\int_{0}^{\tau_{k}}f(s,Y_{s},Z_{s})\,ds +\int_{0}^{\tau_{k}} Z_{s}\,dB_{s}. \end{equation} Since $Y$ is of class (D), using Fatou's lemma, (H2), (Z) and the fact that $Y_{t}\le\overline{X}_{t}$, $t\in [0,T]$ we conclude from (\ref{eq6.9}) that \begin{align*} EK_{T}\le EY^{+}_{0}+E\xi^{-}
+E\int_{0}^{T}f^{-}(s,\overline{X}_{s},0)\,ds+\gamma E\int_{0}^{T}(g_{s}+|Y_{s}|+|Z_{s}|)^{\alpha}\,ds. \end{align*} Hence $EK_{T}<\infty$, because by (\ref{T5}) and (H2),
$E\int_{0}^{T}|f(s,\overline{X}_{s},0)|\,ds<+\infty$.
\\ (ii) Convergence of $\{Y^{n}\}$ in $\mathcal{S}^{q}$ for $q\in (0,1)$ follows from (\ref{eq5.1}) and (\ref{eq5.3}). The desired convergence of $\{Z^{n}\}$ and $\{K^{n}\}$ follows from (\ref{eq5.3}) and (\ref{eq6.8}). \end{dow}
\begin{uw} An important class of generators satisfying (H1)--(H5) together with (Z) consists of the generators satisfying (H1)--(H5) which are bounded or do not depend on $z$. Another class sharing these properties consists of the generators of the form \[
f(t,y,z)=g(t,y)+c(1+|z|)^{q}, \] where $q\in [0,\alpha]$ and $g$ is a progressively measurable function satisfying (H1)--(H5). \end{uw}
\begin{uw} Let assumptions (H1)--(H3), (Z) hold and let $(Y,Z,K)$ be a solution of RBSDE$(\xi,f,L)$ such that $Y$ is of class (D) and $Z\in\bigcup_{\beta>\alpha} M^{\beta}$. Then from Remark \ref{uw1} it follows immediately that \[
E(\int_{0}^{T}|f(s,Y_{s},Z_{s})|\,ds)<+\infty\mbox{ iff } EK_{T}<+\infty. \] If, in addition, there exists a continuous semimartingale $X$ such that (H7*) is satisfied then \[ E\int_{0}^{T}\mathbf{1}_{\{Y_{s}\le X_{s}\}}dK_{s}<+\infty. \] To prove the last estimate let us put $\tau_{k}=\inf\{t\in [0,T];
\langle M\rangle_{t} +\int_{0}^{t}|Z_{s}|^{2}\,ds>k\}\wedge T$. By the It\^o-Tanaka formula and (H2), (H3), \begin{align*} \int_{0}^{\tau_{k}}\mathbf{1}_{\{Y_{s}\le X_{s}\}}\,dK_{s} &=(Y_{\tau_{k}}-X_{\tau_{k}})^{-}-(Y_{0}-X_{0})^{-} -\int_{0}^{\tau_{k}}\mathbf{1}_{\{Y_{s}\le X_{s}\}} f(s,Y_{s},Z_{s})\,ds\\ &\quad-\int_{0}^{\tau_{k}}\mathbf{1}_{\{Y_{s}\le X_{s}\}}dV_{s} -\frac12\int_{0}^{\tau_{k}}dL^{0}_{s}(Y-X)\\ &\quad-\int_{0}^{\tau_{k}}\mathbf{1}_{\{Y_{s}\le X_{s}\}} Z_{s}\,dB_{s}+\int_{0}^{\tau_{k}}\mathbf{1}_{\{Y_{s}\le X_{s}\}}\,dM_{s}. \end{align*} Hence \begin{align*}
E\int_{0}^{\tau_{k}}\mathbf{1}_{\{Y_{s}\le X_{s}\}}\,dK_{s} &\le E|Y_{\tau_{k}}|+ EX^{+}_{\tau_{k}} +E\int_{0}^{T}\mathbf{1}_{\{Y_{s}\le X_{s}\}}
f^{-}(s,X_{s},0)\,ds\\&\quad +\gamma E\int_{0}^{T}(g_{s}+|Z_{s}|+|Y_{s}|)^{\alpha}\,ds
+E\int_{0}^{T}d|V|_{s}. \end{align*} Since $(Y-X)^{-}$ is of class (D), letting $k\rightarrow+\infty$ in the above inequality we get the desired result. \end{uw}
\nsubsection{Nonintegrable solutions of reflected BSDEs} \label{sec7}
In this section we examine existence and uniqueness of solutions of reflected BSDEs in the case where the data satisfy (H1)--(H6) (resp. (H1)--(H6), (Z) for $p=1$) but (H7) (resp. (H7*) in case $p=1$) is not satisfied. In view of Theorems \ref{tw1} and \ref{tw2}, in that case there is no solution $(Y,Z,K)$ in the space $\mathcal{S}^{p}\otimes M^{p}\otimes \mathcal{V}^{+,p}_{c}$ when $p>1$, and no solution in the space $\mathcal{S}^{q}\otimes M^{q}\otimes \mathcal{V}^{+,1}_{c}$, $q\in (0,1)$, with $Y$ of class (D) when $p=1$. We will show that nevertheless there exists a solution with weaker integrability properties. Before proving our main result let us note that in \cite{EKPPQ,HP,C} reflected BSDEs with generator $f$ such that
$|f(t,y,z)|\le M(|f(t,0,0)|+|y|+|z|)$ for some $M\ge0$ are considered. In case $p=2$ it is proved there that if we assume that $\xi, \int_{0}^{T}|f(s,0,0)|\,ds\in \mathbb{L}^{2}({\mathcal{F}}_{T})$, $L$ is continuous and $L^{+}\in \mathcal{S}^{2}$ then there exists a solution $(Y,Z,K)\in \mathcal{S}^{2}\otimes M^{2}\otimes \mathcal{V}^{+,2}_{c}$ of (\ref{eq1.1}) (see \cite{EKPPQ} for the case of Lipschitz continuous generator and \cite{HP,C} for continuous generator). We would like to stress that although in \cite{EKPPQ,HP,C} condition (H7) is not explicitly stated, it is satisfied, because if $f$ satisfies the linear growth condition and $L^{+}\in\mathcal{S}^{2}$ then \[
E(\int_{0}^{T}f^{-}(t,L^{+,*}_{t},0)\,dt)^{2} \le 2M^{2}T^{2}+2T^{2}E|L^{+,*}_{T}|^{2}<+\infty \] and $L_{t}\le L^{+,*}_{t}$, $t\in[0,T]$, $L^{+,*}\in\mathcal{V}^{+,2}_{c}$.
\begin{tw} \label{tw7.1} Let \mbox{\rm (H1)}--\mbox{\rm (H6)} (resp. \mbox{\rm (H1)}--\mbox{\rm(H6)}, \mbox{\rm (Z)}) be satisfied and $L^{+}\in\mathcal{S}^{p}$ for some $p>1$ (resp. $L^{+}$ is of class \mbox{\rm(D)}). Then there exists a solution $(Y,Z,K)\in \mathcal{S}^{p}\otimes M\otimes \mathcal{V}^{+}_{c}$ (resp. $(Y,Z,K)\in \mathcal{S}^{q}\otimes M\otimes \mathcal{V}^{+}_{c}$, $q\in (0,1)$ such that $Y$ is of class \mbox{\rm(D)}) of the \mbox{\rm RBSDE}$(\xi,f,L)$. \end{tw} \begin{dow} We first assume that $p=1$. By \cite[Theorem 6.3]{BDHPS} there exists a unique solution $(Y^{n},Z^{n})\in \bigcap_{q<1}\mathcal{S}^{q}\otimes M^{q}$ of (\ref{eq5.01}) such that $Y^n$ is of class (D). By Proposition \ref{stw2} (see also the reasoning used at the beginning of the proof of Theorem \ref{tw2}), for every $n\in\mathbb{N}$, $Y^{n}_{t}\le Y^{n+1}_{t}$ and $Y^{n}_{t}\le\bar{Y}^{n}_{t}$, $t\in [0,T]$, where $(\bar{Y}^{n},\bar{Z}^{n})\in \bigcap_{q<1}\mathcal{S}^{q}\otimes M^{q}$ is a solution of the BSDE \begin{align*} \bar{Y}^{n}_{t}=\xi+{\int_{t}^{T}} f^{+}(s,\bar{Y}^{n}_{s}, \bar{Z}^{n}_{s})\, ds+{\int_{t}^{T}} n(\bar{Y}^{n}_{s}-L_{s})^{-}\,ds-{\int_{t}^{T}} \bar{Z}^{n}_{s}\,dB_{s},\quad t\in [0,T] \end{align*} such that $\bar{Y}^{n}$ is of class (D). Hence \begin{align}
\label{apx1} |Y^{n}_{t}|\le |Y^{1}_{t}|+|\bar{Y}^{n}_{t}|,\quad t\in [0,T]. \end{align} Put \[
R_{t}(L)=\mathop{\mathrm{ess\,sup}}_{t\le\tau\le T} E(L_{\tau}|{\mathcal{F}}_{t}). \] It is known (see \cite{CK,E}) that $R(L)$ has a continuous version (still denoted by $R(L)$) such that $R(L)$ is a supermartingale of class (D) majorizing the process $L$. Moreover, by the Doob-Meyer decomposition theorem there exist a uniformly integrable continuous martingale $M$ and a process $V\in\mathcal{V}^{+,1}_{c}$ such that $R(L)=M+V$. In particular, by \cite[Lemma 6.1]{BDHPS}, $R(L)\in {\mathcal{M}}^{q}_{c}+\mathcal{V}^{+,1}_{c}$ for every $q\in (0,1)$. Therefore the data $\xi,f^{+},L$ satisfy assumptions (H1)--(H6), (Z) and (H7*) with $X=R(L)$. Hence, by Theorem \ref{tw2}, there exists a unique solution $(\bar{Y},\bar{Z},\bar{K})\in \mathcal{S}^{q}\otimes M^{q}\otimes \mathcal{V}^{+,1}_{c}$, $q\in (0,1)$, of the RBSDE$(\xi,f^{+},L)$ such that $\bar Y$ is of class (D) and \begin{align*} \bar{Y}^{n}_{t}\nearrow \bar{Y}_{t},\quad t\in [0,T]. \end{align*} By the above and (\ref{apx1}), \begin{align}
\label{apx3} |Y^{n}_{t}|\le |Y^{1}_{t}|+|\bar{Y}_{t}|,\quad t\in [0,T]. \end{align} Put $Y_{t}=\sup_{n\ge1} Y^{n}_{t}$, $t\in [0,T]$ and \[ \tau_{k}=\inf\{t\in [0,T]; \int_{0}^{t}f^{-}(s,R_{s}(L),0)\,ds >k\}\wedge T. \] Then $f,L$ satisfy assumptions (H1)--(H6), (Z) and (H7*) with $X=R(L)$ on each interval $[0,\tau_{k}]$. Therefore analysis similar to that in the proof of (\ref{eq5.03}), but applied to the equation \begin{equation} \label{apx4} Y^{n}_{t\wedge\tau_{k}}=Y^{n}_{\tau_{k}} +\int_{t\wedge\tau_{k}}^{\tau_{k}}f(s,Y^{n}_{s},Z^{n}_{s})\,ds +\int^{\tau_{k}}_{t\wedge \tau_{k}}n(Y^{n}_{s}-L_{s})^{-}\,ds-\int_{t\wedge \tau_{k}}^{\tau_{k}} Z^{n}_{s}\, dB_{s} \end{equation} instead of (\ref{eq5.01}), shows that for every $k\in\mathbb{N}$, \begin{align}
\label{apx5} E\sup_{0\le t\le \tau_{k}}|Y^{n}_{t}-Y^{m}_{t}|^{q}
+E(\int_{0}^{\tau_{k}}|Z^{n}_{s}-Z^{m}_{s}|^{2}\,ds)^{q/2}
+E\sup_{0\le t\le\tau_{k}}|K^{n}_{t}-K^{m}_{t}|^{q}\rightarrow0 \end{align} as $n,m\rightarrow +\infty$, where $K^{n}_{t}=\int_{0}^{t}n(Y^{n}_{s}-L_{s})^{-}\,ds$. (The only difference between the proof of (\ref{apx5}) and (\ref{eq5.03}) is caused by the fact that in (\ref{apx4}) the terminal condition $Y^{n}_{\tau_{k}}$ depends on $n$. But in view of (\ref{apx3}), monotonicity of the sequence $\{Y^{n}\}$ and integrability of $Y^{1},\bar{Y}$ the dependence of $Y^{n}_{\tau_{k}}$ on $n$ presents no difficulty). Since the sequence $\{\tau_{k}\}$ is stationary, from (\ref{apx4}), (\ref{apx5}) we conclude that there exist $K\in \mathcal{V}^{+}_{c}$ and $Z\in M$ such that \[ Y_{t}=\xi+{\int_{t}^{T}} f(s,Y_{s},Z_{s})\,ds+{\int_{t}^{T}} dK_{s} -{\int_{t}^{T}} Z_{s}\,dB_{s},\quad t\in [0,T] \] and (\ref{apx5}) holds with $(Y,Z,K)$ in place of $(Y^{n},Z^{n},K^{n})$. From the properties of the sequence $\{(Y^{n},Z^{n},K^{n})\}$ on $[0,\tau_{k}]$ proved in Theorem \ref{tw2} it follows that \[ Y_{t}\ge L_{t},\quad t\in [0,\tau_{k}], \quad \int_{0}^{\tau_{k}}(Y_{s}-L_{s})\,dK_{s}=0 \] for $k\in \mathbb{N}$. Due to stationarity of the sequence $\{\tau_{k}\}$ this implies that \[ Y_{t}\ge L_{t},\quad t\in [0,T],\quad \int_{0}^{T}(Y_{s}-L_{s})\,dK_{s}=0. \] Accordingly, the triple $(Y,Z,K)$ is a solution of RBSDE$(\xi,f,L)$.
In case $p>1$ the proof is similar. As a matter of fact it is simpler because instead of considering the Snell envelope $R(L)$ of the process $L$ it suffices to consider the process $L^{+,*}$. \end{dow}
\begin{uw} From Proposition \ref{stw4} it follows that the solution obtained in Theorem \ref{tw7.1} is unique in its class for $p>1$. In case $p=1$ it is unique in its class if $f$ does not depend on $z$ (see Remark \ref{uw.dic}). \end{uw}
The next example shows that in general the process $K$ of Theorem \ref{tw7.1} may be nonintegrable for any $q>0$. \begin{prz}
Let $f(t,y)=-y^{+}\exp(|B_{t}|^{4})$, $L_{t}\equiv 1$, $\xi\equiv 1$. Then $\xi,f,L$ satisfy (H1)--(H6) and $L\in\mathcal{S}^{p}$ for every $p\ge 1$. So by Theorem \ref{tw7.1} and Proposition \ref{stw2.5} there exists a unique solution $(Y,Z,K)\in \mathcal{S}^{2}\otimes M\otimes \mathcal{V}^{+}_{c}$ of the RBSDE$(\xi,f,L)$. Observe that $EK_{T}^{q}=+\infty$ for any $q>0$. Of course, to check this it suffices to consider the case $q\in(0,1]$. Aiming for a contradiction, suppose that $q\in (0,1]$ and $EK^{q}_{T}<+\infty$. Then by \cite[Lemma 3.1]{BDHPS}, $Z\in M^{q}$, which implies that $E(\int_{0}^{T} f^{-}(t,Y_{t})\,dt)^{q}<+\infty$. On the other hand, since $Y_{t}\ge 1$ for $t\in [0,T]$, it follows from Jensen's inequality that \[ E(\int_{0}^{T} f^{-}(t,Y_{t})\,dt)^{q}\ge T^{q-1}E\int_{0}^{T}(f^{-}(t,1))^{q}\,dt
=T^{q-1}E\int_{0}^{T}\exp(q|B_{t}|^{4})\,dt=+\infty. \]
\end{document} |
\begin{document}
\title{Increasing relative nonclassicality quantified by standard entanglement potentials by dissipation and unbalanced beam splitting} \author{Adam Miranowicz} \affiliation{CEMS, RIKEN, 351-0198 Wako-shi, Japan} \affiliation{Faculty of Physics, Adam Mickiewicz University, 61-614 Pozna\'n, Poland}
\author{Karol Bartkiewicz} \affiliation{Faculty of Physics, Adam Mickiewicz University, 61-614 Pozna\'n, Poland}\affiliation{RCPTM, Joint Laboratory of Optics of Palack\'y University and Institute of Physics of AS CR, Palack\'y University, 17. listopadu 12, 771 46 Olomouc, Czech Republic}
\author{Neill Lambert} \affiliation{CEMS, RIKEN, 351-0198 Wako-shi, Japan}
\author{Yueh-Nan Chen} \affiliation{Department of Physics and National Center for Theoretical Sciences, National Cheng-Kung University, Tainan 701, Taiwan} \affiliation{CEMS, RIKEN, 351-0198 Wako-shi, Japan}
\author{Franco Nori} \affiliation{CEMS, RIKEN, 351-0198 Wako-shi, Japan} \affiliation{Department of Physics, The University of Michigan, Ann Arbor, MI 48109-1040, USA}
\date{\today}
\begin{abstract} If single-mode nonclassical light is combined with the vacuum on a beam splitter, then the output state is entangled. As proposed in [Phys. Rev. Lett. \textbf{94}, 173602 (2005)], by measuring this output-state entanglement for a balanced lossless beam splitter, one can quantify the input-state nonclassicality. These measures of nonclassicality (referred to as entanglement potentials) can be based, in principle, on various entanglement measures, leading to the negativity (NP) and concurrence (CP) potentials, and the potential for the relative entropy of entanglement (REEP). We search for the maximal relative nonclassicality, which can be achieved by comparing two entanglement measures for (i) arbitrary two-qubit states and (ii) those which can be generated from a photon-number qubit via a balanced lossless beam splitter, where the qubit basis states are the vacuum and single-photon states. Surprisingly, we find that the maximal relative nonclassicality, measured by the REEP for a given value of the NP, can be increased (if NP$<$0.527) either by using a tunable beam splitter or by amplitude damping of the output state of the balanced beam splitter. We also show that the maximal relative nonclassicality, measured by the NP for a given value of the REEP, can be increased by phase damping (dephasing). Note that the entanglement itself is not increased by these losses (since they act locally), but the possible ratios of different measures are affected. Moreover, we show that partially-dephased states can be more nonclassical than both pure states and completely-dephased states, by comparing the NP for a given value of the REEP. Thus, one can conclude that not all standard entanglement measures can be used as entanglement potentials. Alternatively, one can infer that a single balanced lossless beam splitter does not always transfer the whole nonclassicality of its input state into the entanglement of its output modes.
The application of a lossy beam splitter can solve this problem at least for the cases analyzed in this paper. \end{abstract}
\pacs{42.50.Xa, 03.67.Mn, 03.67.Bg}
\maketitle
\section{Introduction}
Nonclassical light plays a central role in quantum optics~\cite{VogelBook,PerinaBook} and atom optics~\cite{HarocheBook} leading to various applications in quantum technologies, including quantum cryptography and communication, and optical quantum information processing~\cite{KokBook}.
In a sense, all states of light are quantum, so which of them can be considered classical? It is usually assumed that coherent states are classical. Thus, also their mixtures (e.g., thermal states) are classical. All other states of light are considered nonclassical (or quantum). Formally, this criterion can be given in terms of the Glauber-Sudarshan $P$~function~\cite{Glauber63,Sudarshan63}: A given state $\sigma$ is nonclassical if and only if it is not described by a positive (semidefinite) function $P(\sigma)$. Thus, all finite superpositions (except for the vacuum) of arbitrary Fock states are nonclassical. We note that this definition ``hides some serious problems'', as discussed in, e.g., Ref.~\cite{Wunsche04}.
Various methods and criteria have been devised to test whether a given state of light is nonclassical (see, e.g., Refs.~\cite{VogelBook,DodonovBook,PerinaBook,Miran10,Bartkowiak11}). However, with respect to the above definition, it seems much more interesting physically to quantify the nonclassicality of light rather than only to test (detect) it. For the last thirty years various measures of nonclassicality have been studied (for reviews see Refs.~\cite{DodonovBook,VogelBook}). The most popular of them include entanglement potentials, nonclassical distance~\cite{Hillery87}, and nonclassical depth~\cite{Lee91,Lutkenhaus95}. For a comparative study of these measures see the recent Refs.~\cite{Miran15,Arkhipov15} and references therein. Other measures or parameters of nonclassicality were described in, e.g., the recent Refs.~\cite{Mari11,Gehrke12,Nakano13,Meznaric13}. Many studies have been devoted to the nonclassical volume~\cite{Kenfack04}, which is the volume of the negative part of the Wigner function of a given state. But it should be stressed that this nonclassical volume is not a good measure but only a parameter of nonclassicality, as it vanishes for some nonclassical states, including ideal squeezed states.
In this paper we solely analyze universal nonclassicality measures defined via entanglement potentials, which are closely related to standard entanglement measures. Specifically, as introduced by Asboth \etal~\cite{Asboth05}, one can quantify the nonclassicality of a given single-mode state by measuring the entanglement generated from this state and the vacuum by an auxiliary balanced beam splitter (BS). This approach is operationally much simpler than other nonclassicality measures, including the mentioned nonclassical depth and distance.
To be more specific, the nonclassicality of a single-mode state $\sigma$ can be quantified, according to Ref.~\cite{Asboth05}, by the entanglement of the output state $\rho_{\rm out}$ of an auxiliary lossless balanced BS with the state $\sigma$ and the vacuum $\ket{0}$ at the inputs, i.e., \begin{equation}
\rho_{\rm out} = U_{\rm BS} (\sigma\otimes \ket{0}\bra{0} )U^\dagger_{\rm
BS}, \label{RhoOut1} \end{equation} where $U_{\rm BS}=\exp(-iH\theta)$, with $\hbar=1$ and $\theta=\pi/2$, is the unitary transformation for a balanced BS, which can be given in terms of the Hamiltonian $H=\frac12i(a_{1}^\dagger a_{2}-a_{1} a_{2}^\dagger),$ where $a_{1,2}$ ($a_{1,2}^\dagger$) are the annihilation (creation) operators of the input modes. This transformation $U_{\rm BS}$ is equivalent to that applied in Ref.~\cite{Miran15} up to a local unitary transformation, which does not change the entanglement of $\rho_{\rm out}$, as quantified by any ``good'' entanglement measure. Note that linear transformations (like that performed by a BS) do not change the nonclassicality of a given state. The output state $\rho_{\rm out}$ is entangled if and only if the input state $\sigma$ is nonclassical. In particular, for an input in any finite-dimensional state other than the vacuum, the output state is entangled. It should be stressed that the standard entanglement potentials, as proposed (and numerically verified) in Ref.~\cite{Asboth05}, are based solely on the special case of $\rho_{\rm out}$ for $\theta=\pi/2$, i.e., when the BS transmissivity is equal to 1/2, corresponding to a balanced (50/50) BS.
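As a consistency check, the transformation of Eq.~(\ref{RhoOut1}) can be reproduced numerically. The following minimal sketch (illustrative, not part of the paper) truncates each mode to at most one photon, which is exact here because the total photon number of $\sigma\otimes\ket{0}\bra{0}$ never exceeds one:

```python
# Numerical sketch: reproduce Eq. (RhoOut1) by exponentiating the BS
# Hamiltonian H = (i/2)(a1^dag a2 - a1 a2^dag) on a truncated Fock space.
# Truncating each mode to at most one photon is exact here, since the
# total photon number of sigma (x) |0><0| never exceeds one.
import numpy as np
from scipy.linalg import expm

def bs_unitary(theta=np.pi / 2):
    """U_BS = exp(-i H theta) for the beam-splitter Hamiltonian above."""
    a = np.array([[0.0, 1.0], [0.0, 0.0]])  # truncated annihilation operator
    a1, a2 = np.kron(a, np.eye(2)), np.kron(np.eye(2), a)
    H = 0.5j * (a1.conj().T @ a2 - a1 @ a2.conj().T)
    return expm(-1j * theta * H)

def rho_out(p, x, theta=np.pi / 2):
    """rho_out = U_BS (sigma tensor |0><0|) U_BS^dag for the qubit sigma(p, x)."""
    sigma = np.array([[1 - p, x], [np.conj(x), p]])
    vacuum = np.diag([1.0, 0.0])
    U = bs_unitary(theta)
    return U @ np.kron(sigma, vacuum) @ U.conj().T
```

For the single-photon input, `rho_out(1, 0)` returns the projector onto the singlet $(\ket{10}-\ket{01})/\sqrt{2}$, as expected for a balanced BS.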
In this paper, we compare three standard entanglement potentials of a single-qubit input state $\sigma$ corresponding to the entanglement measures of the two-qubit output state $\rho_{\rm out}$. The analyzed qubit is assumed to be in an arbitrary (coherent or incoherent) superposition of the vacuum and single-photon Fock states, so it can be referred to as a photon-number qubit. We quantify the nonclassicality of $\sigma$ by the negativity potential (NP), concurrence potential (CP), and the potential for the relative entropy of entanglement (REEP): \begin{eqnarray}
{\rm NP}(\sigma) &=& N (\rho_{\rm out}), \label{NP} \\
{\rm CP}(\sigma) &=& C (\rho_{\rm out}), \label{CP} \\
{\rm REEP}(\sigma) &=& E_R (\rho_{\rm out}), \label{REEP} \end{eqnarray} which are defined via the negativity $N$, concurrence $C$, and REE $E_R$ for $\rho_{\rm out}$. Although these entanglement measures are well known, for clarity, we give their definitions and operational interpretations in Appendix~A. The REEP is also referred to as the entropic entanglement potential in the original Ref.~\cite{Asboth05}. We emphasize again that the \emph{entanglement potential} of a given state $\sigma$ refers to \emph{any} entanglement measure applied to the output $\rho_{\rm out}$.
Various entanglement potentials, which are based on this unified approach of measuring nonclassicality and entanglement, have been attracting increasing interest especially during the past year (see, e.g., Refs.~\cite{Vogel14,Mraz14,Miran15,Tasgin15,Killoran15,Ge15}). The ultimate goal of such studies is to demonstrate how nonclassicality can operationally be used as a resource for quantum technologies and quantum information processing. Thus, one of the most natural and fundamental questions in this area addresses the relationship between entanglement and other types of nonclassicality. For example, the problem of faithfully converting single-system nonclassicality into entanglement in general mathematical terms was addressed in Ref.~\cite{Killoran15} with the following conclusions: ``These results generalize and link convertibility properties from the resource theory of coherence, spin coherent states, and optical coherent states, while also revealing important connections between local and non-local quantum correlations.'' A conservation relation of nonclassicality and entanglement in a balanced beam splitter was found for some limited classes of Gaussian states in Ref.~\cite{Ge15}. A related single-mode nonclassicality measure based on the Simon-Peres-Horodecki criterion was described for Gaussian states in Ref.~\cite{Tasgin15}. Refs.~\cite{Vogel14,Mraz14} demonstrated that the rank of the two-mode entanglement of a single-mode state $\sigma$ is equal to the rank of the expansion of $\sigma$ in terms of classical coherent states. It was shown in Ref.~\cite{Miran15} that the CP of a single qubit state $\sigma$ can be interpreted as a Hillery-type nonclassical distance, defined by the Bures distance of $\sigma$ to the vacuum, which is the closest classical state in the single-qubit Hilbert space. 
Moreover, it is known that the statistical mixtures of the vacuum and single-photon states can be more nonclassical than their superpositions, as shown in Ref.~\cite{Miran15} by comparing the NP and CP.
Here we present a comparative study of these three entanglement potentials, to show that such quantified relative nonclassicality, i.e., the nonclassicality of one measure relative to another, can be increased by damping and unbalanced beam splitting. Moreover, in Ref.~\cite{Miran15} we showed that both pure and completely-dephased single-qubit states can be considered the most nonclassical by comparing some nonclassicality measures. Here we show that also some partially-dephased states can be the most relatively nonclassical in terms of the highest negativity potential for a given value of the REE potential.
The paper is organized as follows. In Sec.~II, we calculate the entanglement potentials for single-qubit states. In Sec.~III, we find the most nonclassical single-qubit states via the standard entanglement potentials. In this section and also in Sec.~IV, we show that there are two-qubit states that are more entangled than those that can be generated from a single-qubit state and the vacuum by a balanced lossless BS. In Sec.~V, we show that by applying a tunable and/or lossy BS, we can generate relative entanglement higher than that in the standard approach. We refer to this modified approach as being based on generalized entanglement potentials. Moreover, for completeness and clarity of presentation, we recall known results in the appendices, including the definitions of the standard entanglement measures and the boundary states for arbitrary two-qubit states. We conclude in Sec.~VI.
\section{Entanglement potentials for single-qubit states}
To make our presentation simple and convincing, we analyze, as in Ref.~\cite{Miran15}, the nonclassicality of only single-qubit states: \begin{equation} \sigma(p,x) = \sum_{m,n=0}^1\sigma_{mn} \ket{m} \bra{n} = \left[\begin{array}{cc} 1-p&x\\ x^*&p \end{array}\right],
\label{rho} \end{equation}
which are spanned by the vacuum $\ket{0}$ and the single-photon Fock state $\ket{1}$. Here $p\in[0,1]$ is the mixing parameter, and $x$, with $|x|\in[0,\sqrt{p(1-p)}]$, is often interpreted as a coherence parameter. Analogously, in the context of the NMR spectroscopy of a spin qubit, $x$ and $x^*$ are called the \emph{coherences} between the states $\ket{0}$ and $\ket{1}$~\cite{LevittBook}. The relation between the coherence and entanglement of a partially coherent state of light was recently studied in Ref.~\cite{Svozilik15}. An experimental demonstration of the nonclassicality of the optical qubit states, given in Eq.~(\ref{rho}), was reported in Ref.~\cite{Lvovsky02} based on the nonclassicality criterion of Vogel~\cite{VogelBook}. Note that the only classical state of the form $\sigma(p,x)$ is the vacuum, obtained for $p=0$. Equation~(\ref{RhoOut1}) for $\sigma(p,x)$ simplifies to \begin{equation}
\rho_{\rm out}(p,x) =\left[\begin{array}{cccc} 1-p& -\frac1{\sqrt{2}}x &\frac1{\sqrt{2}}x& 0\\ -\frac1{\sqrt{2}}x^*&\frac12 p&-\frac12p&0\\ \frac1{\sqrt{2}}x^*& -\frac12 p&\frac12p&0\\ 0&0&0&0\\ \end{array}\right]. \label{RhoOut2} \end{equation} Here we study how well the entanglement potentials can serve as measures of nonclassicality for a balanced BS.
As shown in Ref.~\cite{Miran15}, the concurrence potential is given by a simple formula \begin{equation}
{\rm CP}[\sigma(p,x)]=p,
\label{conc} \end{equation} for an arbitrary state given in Eq.~(\ref{rho}). Note that this potential is independent of the coherence parameter $x$. Surprisingly, the negativity potential is given by a much more complicated formula \begin{equation}
{\rm NP}[\sigma(p,x)]=\frac{1}{3} \left[2 {\rm Re}
\left(\sqrt[3]{2 \sqrt{\alpha_1}+2 \alpha_2}\right)+p-2\right],
\label{NPgeneral} \end{equation} for a general state $\sigma(p,x)$, where $\alpha_1$ ($\alpha_2$) is a polynomial of the 6th (3rd) order in $p$ and the 6th (2nd)
order in $|x|$, as explicitly given in Ref.~\cite{Miran15}. Obviously, Eq.~(\ref{NPgeneral}) simplifies considerably for special states, including those studied in the next sections. Equation~(\ref{NPgeneral}) can be obtained from the formula valid for an arbitrary two-qubit state $\rho$~\cite{Bartkiewicz15}: \begin{equation} 48D + 3N^4 + 6N^3 - 6N^2\Pi'_2 - 4N(3\Pi'_2 - 2\Pi'_3 ) = 0, \label{Negativity2} \end{equation} expressing the negativity via the invariant moments $\Pi'_n =\Pi_n-1= {\rm Tr}[(\rho^\Gamma)^n]-1$ and the determinant
$D=\det\rho^\Gamma$, where $\rho^\Gamma$ denotes the partially transposed $\rho$. These moments are directly measurable, as shown in Ref.~\cite{Bartkiewicz15b}. This general formula simplifies for the special two-qubit states $\rho=\rho_{\rm out}$, which are generated by a balanced BS. Thus, we can simply express the coherence parameter $|x|$ as a function of the negativity (or negativity potential) and the mixing parameter $p$ as follows: \begin{equation}
|x| = f(p,N)= \tfrac12 \sqrt{(1+p/N) [2N(N+1)-(N+p)^2]}
\label{f} \end{equation} for any $p\in[N,\sqrt{2N(N+1)}-N]$. Thus, an arbitrary single-qubit state $\sigma(p,x)$ can be given as \begin{eqnarray}
\sigma'(p,N,\phi) &\equiv& \sigma[p,x=f(p,N) \exp(i\phi)], \label{rho_pN} \end{eqnarray} where $\phi={\rm Arg}(x)$ is the phase factor of the coherence parameter $x$. In the context of our nonclassicality analysis, the inclusion of $\phi$ in Eqs.~(\ref{rho}) and~(\ref{rho_pN}) is actually irrelevant, as any ``good'' nonclassicality measures (including the entanglement potentials) do not depend on $\phi$. Thus, for simplicity, we can set $\phi=0$.
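These closed-form results are straightforward to verify numerically. The following sketch (illustrative, not part of the paper) computes the concurrence of $\rho_{\rm out}$ from the Wootters formula and the negativity from the partial transpose, and checks Eq.~(\ref{conc}), the inversion formula of Eq.~(\ref{f}), and the invariant-moment identity of Eq.~(\ref{Negativity2}):

```python
# Illustrative check. Assumption: the negativity is defined as
# N = max(0, -2*lambda_min) of the partially transposed state, normalized
# so that N = 1 for Bell states.
import numpy as np

def rho_out(p, x):
    """Two-qubit output state of a balanced BS for the input qubit sigma(p, x)."""
    s = 1 / np.sqrt(2)
    return np.array([
        [1 - p, -x * s, x * s, 0],
        [-np.conj(x) * s, p / 2, -p / 2, 0],
        [np.conj(x) * s, -p / 2, p / 2, 0],
        [0, 0, 0, 0]])

def partial_transpose(rho):
    """Transpose the second qubit of a two-qubit density matrix."""
    return rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)

def negativity(rho):
    return max(0.0, -2 * np.linalg.eigvalsh(partial_transpose(rho)).min())

def concurrence(rho):
    """Wootters concurrence for a two-qubit state."""
    sy = np.array([[0, -1j], [1j, 0]])
    Y = np.kron(sy, sy)
    ev = np.linalg.eigvals(rho @ Y @ rho.conj() @ Y)
    lam = np.sort(np.sqrt(np.abs(ev)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def f(p, N):
    """Coherence |x| as a function of p and the negativity N."""
    return 0.5 * np.sqrt((1 + p / N) * (2 * N * (N + 1) - (N + p) ** 2))
```

The invariant-moment identity is checked below by evaluating $\Pi'_n$ and $D$ directly from the partially transposed state.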
The calculation of the REE potential is even more demanding, as explained in Appendix~A. We have calculated the REE analytically only for some special states, including pure and completely-dephased single-qubit states, and the completely-dephased output states of a BS. In other cases, the REE potential is calculated only numerically, based on the semidefinite-programming algorithm implemented in Ref.~\cite{Girard15}.
\section{Most relatively nonclassical single-qubit states via entanglement potentials}
Here we address the following question: which single-qubit states are the most nonclassical if quantified by one entanglement potential relative to another. Specifically, we compare the nonclassicality of states for a given entanglement potential assuming that the states have the same nonclassicality in terms of another entanglement potential.
Figure~1 shows such a comparison of the three entanglement potentials for randomly generated single-qubit states $\sigma$. These graphs are obtained as follows: For a given $\sigma$, we calculated the potentials ${\rm NP}(\sigma)$, ${\rm CP}(\sigma)$, and ${\rm REEP}(\sigma)$, and then plotted a point at $[{\rm NP}(\sigma),{\rm CP}(\sigma)]$ in Fig.~1(a), another at $[{\rm REEP}(\sigma),{\rm CP}(\sigma)]$ in Fig.~1(b), and one at $[{\rm REEP}(\sigma),{\rm NP}(\sigma)]$ in Fig.~1(c). The Monte Carlo simulated points occupy limited areas; below we discuss the states on their boundaries. Note that the red curves in Fig.~1 show the boundaries of the entanglement of arbitrary two-qubit states $\rho$ (see Appendix~B), not of $\rho_{\rm out}$ only. Figure~2 shows more explicitly that there is two-qubit entanglement (corresponding to the red regions) which cannot be generated from single-qubit states and the vacuum by a balanced lossless BS.
\begin{figure}
\caption{(Color online) Entanglement potentials for 15,000 single-qubit states $\sigma$ generated via a Monte Carlo simulation. These entanglement potentials correspond to the entanglement measures for the two-qubit states $\rho_{\rm out}$ generated from $\sigma$ by a balanced beam splitter. The solid red curves show the boundaries of the allowed entanglement for arbitrary two-qubit states $\rho$ (discussed in Appendix~B). Note that the simulated states $\rho_{\rm out}$ lie in the area bounded by the solid red curves for the concurrence potential for given values of both (a) the negativity potential and (b) the potential of the relative entropy of entanglement (REE). This is not the case for (c) the negativity potential for a given value of the REE potential, as the states $\rho_{\rm out}$ lie in the area, bounded by the dashed black curves, which is smaller than the area covered by the states $\rho$. The upper (lower) red curves in panels (a) and (b) correspond to the completely-dephased states $\SIGMA{M}$ (pure states $\SIGMA{P}$) as compactly indicated by M (P). The meaning of the red and black curves in panel (c) is more detailed, as shown in Fig.~3.}
\end{figure}
\begin{figure}
\caption{(Color online) Allowed values of the negativity for a given value of the REE for the states $\rho_{\rm out}$, defined in Eq.~(\ref{RhoOut1}), generated from arbitrary single-qubit states (in yellow regions), and those for arbitrary two-qubit states $\rho$ (in red and yellow regions). The main goal of this paper is to show how the entanglement, corresponding to any point in the red regions, can be generated from a single-qubit state.}
\end{figure}
\subsection{When pure states are maximally nonclassical}
The most general single-qubit pure state is given by \begin{equation}
\ket{\psi} = \sqrt{1-p}\ket{0}+e^{i\phi}\sqrt{p}\ket{1},
\label{psi_p} \end{equation} up to an irrelevant global phase. So,
$\SIGMA{P}\equiv\ket{\psi}\bra{\psi}$ is a special case of Eq.~(\ref{rho}) for $|x|=\sqrt{p(1-p)}$ and $\phi={\rm Arg}(x)$. As already mentioned, the nonclassicality measures are insensitive to the relative phase $\phi$, so we can set $\phi=0$. The entanglement potentials for a pure state $\SIGMA{P}$ are simply given by: \begin{eqnarray}
{\rm CP}(\SIGMA{P}) &=& {\rm NP}(\SIGMA{P}) =p,\\
{\rm REEP}(\SIGMA{P}) &=&
h\left(\textstyle{\frac{1}{2}}[1+\sqrt{1-p^2}]\right), \label{REEP_pure} \end{eqnarray} where $h(y)=-y\log_2 y-(1-y)\log_2(1-y)$ is the binary entropy. For clarity, we recall the well-known property that ${\rm REEP}(\SIGMA{P})$ is equal to the entanglement of formation $E_F(\RHO{P})$ and to the von Neumann entropy of a reduced density matrix, say, ${\rm Tr}_1(\RHO{P})$.
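For a pure input, the REE potential can thus be computed directly as the von Neumann entropy of a reduced output state. A minimal numerical check of Eq.~(\ref{REEP_pure}) (illustrative, not part of the paper):

```python
# Illustrative check: for a pure input, REEP equals the entropy of
# entanglement, i.e., the von Neumann entropy of Tr_1(rho_out).
import numpy as np

def rho_out_pure(p):
    """Balanced-BS output for |psi> = sqrt(1-p)|0> + sqrt(p)|1> and the vacuum."""
    # |00> -> |00>,  |10> -> (|10> - |01>)/sqrt(2)
    psi = np.array([np.sqrt(1 - p), -np.sqrt(p / 2), np.sqrt(p / 2), 0.0])
    return np.outer(psi, psi)

def reduced_entropy(rho):
    """Von Neumann entropy (base 2) of the second mode, Tr_1(rho)."""
    r = rho.reshape(2, 2, 2, 2)
    red = np.einsum('ijil->jl', r)      # trace out the first qubit
    ev = np.linalg.eigvalsh(red)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log2(ev)).sum())

def binary_entropy(y):
    return -y * np.log2(y) - (1 - y) * np.log2(1 - y)
```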
We find that pure states are the most nonclassical single-qubit states in terms of: (i) the maximal NP for a given value of ${\rm CP}\in[0,1]$ [see Fig.~1(a)], (ii) the maximal REEP for a given value of ${\rm CP}\in[0,1]$ [see Fig.~1(b)], and (iii) the maximal REEP for a given value of ${\rm NP}\in[N_2,1]$, where $N_2\approx 0.527$ [see Figs.~1(c), 2 and~3]. These results are summarized in Table~I. Note that pure states are very close to the maximally nonclassical states concerning the largest NP for a given ${\rm REEP} \lesssim 0.1$ [see Fig.~3].
\begin{table}[ht] \caption{Maximally-nonclassical states (MNS) of a single-qubit state $\sigma$, maximally-entangled states (MES) $\rho_{\rm out}$ (generated by a lossless balanced BS), and maximally-entangled states $\rho$ (generated by any method) according to one entanglement measure (or entanglement potential) for a given value of another entanglement measure (or entanglement potential). The extra resources make it possible to increase the entanglement of $\rho_{\rm out}$ (and, thus, also the nonclassicality of $\sigma$). These include the amplitude-damping channel (ADC), phase-damping channel (PDC), and tunable beam splitter (TBS). Notation: CP is the concurrence potential, NP is the negativity potential, and REEP is the potential of the relative entropy of entanglement. The special values of $N_i$ and $E_i$ are defined in Fig.~3. The output extremal states read: $\RHO{B}$ are the Bell-diagonal states, $\RHO{P}$ are pure states, $\RHO{H}$ are the Horodecki states corresponding to the completely-dephased single-qubit states $\SIGMA{M}$, $\RHO{Z}$ are the two-qubit states generated from single-qubit optimally-dephased states $\SIGMA{Z}$, and $\RHO{A}$ are the optimal generalized Horodecki states. This is a summary of all the cases in which the nonclassicality of $\sigma$, as quantified by the three standard entanglement potentials, can or cannot be increased. } \begin{tabular}{l l l l l l} \hline\hline
Potential 1 & for a given value & MNS & MES & MES & extra resources \\
& of Potential 2 & $\sigma$ & $\rho_{\rm out}$ & $\rho$ & \\ \hline
CP & NP$\in[0,1]$ & $\SIGMA{M}$ & $\RHO{H}$ & $\RHO{H}$ & --- \\ CP & REEP$\in[0,1]$ & $\SIGMA{M}$ & $\RHO{H}$ & $\RHO{H}$ & --- \\ [3pt] NP & CP$\in[0,1]$ & $\SIGMA{P}$ & $\RHO{P}$ & $\RHO{P}$ & --- \\ NP & REEP$\in[0,E_3)$ & $\SIGMA{Z}$ & $\RHO{Z}$ & $\RHO{B}$ & PDC \\ NP & REEP$\in[E_3,1]$ & $\SIGMA{M}$ & $\RHO{H}$ & $\RHO{B}$ & PDC \\ [3pt] REEP & CP$\in[0,1]$ & $\SIGMA{P}$ & $\RHO{P}$ & $\RHO{P}$ & --- \\ REEP & NP$\in[0,N_1)$ & $\SIGMA{M}$ & $\RHO{H}$ & $\RHO{A}$ & ADC or TBS \\ REEP & NP$\in[N_1,N_2)$&$\SIGMA{P}$ & $\RHO{P}$ & $\RHO{A}$ & ADC or TBS \\ REEP & NP$\in[N_2,1]$ &$\SIGMA{P}$ & $\RHO{P}$ & $\RHO{P}$ & --- \\ \hline\hline \end{tabular} \end{table}
\subsection{When completely-dephased states are maximally nonclassical}
Now we consider the nonclassicality of a statistical mixture of the vacuum $\ket{0}$ and single-photon state $\ket{1}$. This is a special case of Eq.~(\ref{rho}) for the vanishing coherence parameter $x=0$, i.e., \begin{equation}
\SIGMA{M} = \sigma(p,x=0)= (1-p)\ket{0}\bra{0}+p\ket{1}\bra{1}.
\label{RhoM} \end{equation} These mixtures are referred to here as completely-dephased states, but can also be called completely-mixed states~\cite{Miran15} or Fock-diagonal states, to emphasize that they are diagonal in the Fock basis. As shown in Ref.~\cite{Miran15}, these states can be considered the most nonclassical by comparing the CP for a given value of the NP. Here, we analyze the nonclassicality of $\SIGMA{M}$ also with respect to the REE potential.
First we recall that $\SIGMA{M}$ is transformed by the balanced BS (with the vacuum in the other port) into the Horodecki state \begin{equation}
\RHO{H}(p) = \rho_{\rm out}(p,0) = p\ket{\psi^-}\bra{\psi^-}+(1-p)\ket{00}\bra{00},
\label{rhoH} \end{equation} which is a mixture of a maximally-entangled state, here the singlet state $\ket{\psi^-}=(\ket{10}-\ket{01})/\sqrt{2}$, and a separable state (here, the vacuum) orthogonal to it. The entanglement properties of the Horodecki state were studied intensively (see Ref.~\cite{Horodecki09review} for a review), so we can instantly write the entanglement potentials for $\SIGMA{M}$ as: \begin{eqnarray}
{\rm CP}(\SIGMA{M}) &=& p, \notag \\
{\rm NP}(\SIGMA{M}) &=& \sqrt{(1-p)^2+p^2}-(1-p), \label{REEP_H} \\
{\rm REEP}(\SIGMA{M}) &=& (p-2)\log_2 (1-p/2)+(1-p)\log_2 (1-p).\notag \end{eqnarray} Thus, the REE potential can easily be expressed as a function of the other potentials ${\rm NP}=N$ and ${\rm CP}=C$, as follows: \begin{equation}
{\rm REEP}[\SIGMA{M}(C)]
= {\rm REEP}[\SIGMA{M}(\sqrt{2N(1+N)}-N)].
\label{REEP_H2} \end{equation} We observe that the completely-dephased states $\SIGMA{M}$ are the most nonclassical single-qubit states concerning: (i) the maximal CP for a given value of ${\rm NP}\in[0,1]$ [see Fig.~1(a)], (ii) the maximal CP for a given value of ${\rm REEP}\in[0,1]$ [see Fig.~1(b)], (iii) the maximal NP for a given value of the REEP, $E\in[E_3,1]$, where $E_3\approx 0.397$ [see Fig.~3], and (iv) the maximal REEP for a given value of ${\rm NP}\in[0,N_1]$, where $N_1\approx 0.377$. These results are also summarized in Table~I.
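The CP and NP expressions in Eq.~(\ref{REEP_H}), as well as the relation $p=\sqrt{2N(1+N)}-N$ underlying Eq.~(\ref{REEP_H2}), can be verified numerically for the Horodecki state of Eq.~(\ref{rhoH}). A minimal sketch (illustrative, not part of the paper), assuming the negativity is normalized so that $N=1$ for Bell states:

```python
# Illustrative check of the NP expression for the completely-dephased
# states, using the Horodecki output state rho_H(p).
import numpy as np

def rho_H(p):
    """rho_H(p) = p |psi-><psi-| + (1 - p)|00><00|."""
    psi = np.array([0.0, -1.0, 1.0, 0.0]) / np.sqrt(2)
    rho = p * np.outer(psi, psi)
    rho[0, 0] += 1 - p
    return rho

def negativity(rho):
    pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return max(0.0, -2 * np.linalg.eigvalsh(pt).min())
```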
\begin{figure}
\caption{(Color online) (a) Boundary states and special points corresponding to Fig.~2. (b) The inset of panel (a) showing, in greater detail, the boundary states: $\RHO{B}$ are the Bell-diagonal states (corresponding to the upper red dot-dashed curve), $\RHO{P}$ (or equivalently $\SIGMA{P}$) are pure states (solid black curve with diamonds), $\RHO{H}$ are the Horodecki states (solid blue curve with circles), which correspond to the single-qubit completely-dephased states $\SIGMA{M}$, $\RHO{A}$ are the optimal generalized Horodecki states (lower red dot-dashed curve), and $\RHO{Z}$ are the two-qubit states (green solid curve, which is the upper bound of the yellow region) generated from single-qubit optimally-dephased states $\SIGMA{Z}$. Special points are marked at: $(E_1\approx 0.228,N_1\approx 0.377)$, $(E_2 \approx 0.385,N_2 \approx 0.527)$, and $(E_3 \approx 0.397,N_3 \approx 0.6)$. Note that here, and in Fig.~2, we refer to entanglement measures rather than directly to the corresponding entanglement potentials, because some of the marked regions and curves cannot be reached by the states generated by the standard entanglement potentials.}
\end{figure}
\begin{figure}
\caption{(Color online) Mixing and coherence parameters for the optimally-dephased state $\SIGMA{Z}=\sigma(p_{\rm Z},x_{\rm Z})$, completely-dephased state $\SIGMA{M}=\sigma(p_{\rm M},x_{\rm M}=0)$, and pure state $\SIGMA{P}=\sigma(p_{\rm P},x_{\rm P})$ as a function of their negativity potential $N={\rm NP}(\SIGMA{Z})={\rm NP}(\SIGMA{M})={\rm NP}(\SIGMA{P})$. The dotted black line in panel (a) is added to show that $p_Z$ does not depend linearly on $N\in[0,0.6]$. We recall that $\SIGMA{Z}$ exhibits the highest nonclassicality in terms of the maximal negativity potential for a given value of the REEP.}
\end{figure}
\subsection{When partially-dephased states are maximally nonclassical}
A closer analysis of Fig.~3 shows that pure states and completely-dephased states do not always have the greatest entanglement potentials. Thus, let us define the following optimally-dephased states \begin{eqnarray}
\SIGMA{Z}(N) &=&\sigma[p_{\rm opt},x_{\rm opt}=f(p_{\rm
opt},N)], \label{rhoX}\\
\RHO{Z} &\equiv& U_{\rm BS} \big(\SIGMA{Z}\otimes \ket{0}\bra{0} \big)U^\dagger_{\rm
BS}, \end{eqnarray} which are the most nonclassical concerning the largest NP for a given value of the REEP. Here, $f$ is given in Eq.~(\ref{f}), $U_{\rm BS}$ is defined below Eq.~(\ref{RhoOut1}), while the optimal mixing parameter $p_{\rm opt}\equiv p_{Z}$ is found numerically, as ${\rm REEP}\{\sigma[p_{\rm opt},f(p_{\rm opt},N)]\}=\min_p {\rm REEP}\{\sigma[p,f(p,N)]\}$, and shown in Fig.~4(a). Note that the optimal coherence parameter $x_{\rm opt}\equiv x_Z$, which is shown in Fig.~4(b), is simply given by $f(p_{\rm opt},N)$. We numerically found that $\SIGMA{Z}$ becomes $\SIGMA{M}$ for $N\ge N_3\approx 0.6$, as shown in Fig.~3. Thus, these partially-dephased states become completely dephased. Moreover, the $\SIGMA{Z}$ are very close to pure states $\SIGMA{P}$ for ${\rm NP} \lesssim 0.2$. The optimal states $\SIGMA{Z}$ are the most distinct from $\SIGMA{P}$ and $\SIGMA{M}$ for ${\rm NP}$ near $N_1$ [see Fig.~3(b)].
\section{Special values of entanglement potentials}
Here we analyze in detail three characteristic points shown in Fig.~3 corresponding to the negativity (or negativity potential) for a given value of the REE (or the REE potential).
Point 1: For the negativity potential $N_1\approx 0.377$ and the REE potential $E_1\approx 0.228$, it holds that pure and completely-dephased states have the same negativity and REE potentials, i.e., \begin{eqnarray}
{\rm NP}[\SIGMA{P}(N_1)]&=&{\rm NP}[\SIGMA{M}(N_1)]=N_1, \notag\\
{\rm REEP}[\SIGMA{P}(N_1)]&=&{\rm REEP}[\SIGMA{M}(N_1)]=E_1.
\label{point1} \end{eqnarray} Then one observes that \begin{eqnarray}
{\rm NP}(\SIGMA{P}) > {\rm NP}(\SIGMA{M}) & \quad &{\rm for}\; 0<{\rm REEP}<E_1,
\notag \\
{\rm NP}(\SIGMA{P}) < {\rm NP}(\SIGMA{M}) & \quad &{\rm for}\; E_1<{\rm REEP}<1.\quad \label{N09b} \end{eqnarray} Point 2: For the negativity potential $N \ge N_2 \approx 0.527$ corresponding to the REE potential $E \ge E_2 \approx 0.385$, one finds that the optimal generalized Horodecki state $\RHO{A}$, defined in Eq.~(\ref{rhoA}), which maximizes the REE for a given value of the negativity for arbitrary two-qubit states, becomes a two-qubit pure state $\RHO{P}$ corresponding to a single-qubit pure state $\SIGMA{P}$, i.e., \begin{eqnarray}
{\rm NP}[\SIGMA{P}(N)]&=&N[\RHO{A}(N)]=N, \notag\\
{\rm REEP}[\SIGMA{P}(N)]&=&E_R[\RHO{A}(N)]=E,\quad \text{if}\;N\ge N_2.\quad
\label{point2} \end{eqnarray} Point 3: For the negativity potential $N \ge N_3 \approx 0.6$ and the REE potential $E \ge E_3 \approx 0.397$, we find that the optimally-dephased state $\SIGMA{Z}$, which maximizes the negativity potential for a given value of the REE potential, becomes a completely-dephased state $\SIGMA{M}$, i.e., \begin{eqnarray}
{\rm NP}[\SIGMA{M}(N)]&=&{\rm NP}[\SIGMA{Z}(N)]=N, \notag\\
{\rm REEP}[\SIGMA{M}(N)]&=&{\rm REEP}[\SIGMA{Z}(N)]=E,\; \text{if}\;N\ge N_3.\quad\quad
\label{point3} \end{eqnarray} Moreover, although it is not clear on the scale of Fig.~3, the optimally-dephased state $\SIGMA{Z}$ becomes exactly a pure state $\SIGMA{P}$ only for the vacuum and single-photon states, i.e., \begin{eqnarray}
{\rm NP}[\SIGMA{P}(N)]&=&{\rm NP}[\SIGMA{Z}(N)]=N, \notag\\
{\rm REEP}[\SIGMA{P}(N)]&=&{\rm REEP}[\SIGMA{Z}(N)]=E,\; \text{if}\;N =0,1.\quad\quad
\label{point4} \end{eqnarray} Analogously, the optimal generalized Horodecki state $\RHO{A}$ becomes exactly the standard Horodecki state $\RHO{H}$, which can be generated from a completely-dephased state $\SIGMA{M}$, only for the same cases, as in Eq.~(\ref{point4}), i.e., \begin{eqnarray}
N[\RHO{H}(N)]&=&N[\RHO{A}(N)]=N, \notag\\
E_R[\RHO{H}(N)]&=&E_R[\RHO{A}(N)]=E,\quad \text{if}\;N =0,1.\quad
\label{point5} \end{eqnarray} Nevertheless, $\RHO{H}$ is a good approximation of $\RHO{A}$, and $\RHO{P}$ is a good approximation of $\RHO{Z}$ for much larger ranges of $N$, as shown in Fig.~3.
\section{Quantifying nonclassicality by generalized entanglement potentials}
Here we address the question of how to generate, from single-qubit states $\sigma$, the entanglement corresponding to the ``forbidden'' red regions shown in Fig.~2. Thus, we define generalized entanglement potentials, which are the standard entanglement measures calculated not for the output state $\rho_{\rm out}$ of the non-dissipative balanced BS, given in Eq.~(\ref{RhoOut1}), but for the output state $\rho'_{\rm out}$ of a BS that can be dissipative and unbalanced. So, we can write: \begin{eqnarray}
\text{GNP}(\sigma) &=& N(\rho'_{\rm out}), \\
\text{GCP}(\sigma) &= &C(\rho'_{\rm out}), \\
\text{GREEP}(\sigma) &= &E_R(\rho'_{\rm out}), \label{GEP} \end{eqnarray} as a generalization of Eqs.~(\ref{NP})--(\ref{REEP}).
\subsection{How to increase the relative nonclassicality by phase damping}
Here we show that the nonclassicality of a single-qubit state, as quantified by the negativity for a given value of the REE, $E_R\in(0,1)$, can be increased by phase damping.
We recall that a phase-damping channel (PDC) for the $i$th qubit can be described by the following Kraus operators~\cite{NielsenBook}: \begin{equation}
E_{0}(\kappa_{i})=|0\rangle\langle0|+\sqrt{1-\kappa_{i}}|1\rangle\langle1|,\quad E_{1}(\kappa_{i})=\sqrt{\kappa_{i}}|1\rangle\langle1|,\label{Kraus_pdc} \end{equation} where $\kappa_i$ is the phase-damping coefficient and $i=1,2$. Let us analyze a pure state \begin{equation}
|\psi_q \rangle =\sqrt{q} |01\rangle + \sqrt{1-q} |10\rangle, \label{psi_q} \end{equation}
where $q\in[0,1]$. Note that a general single-qubit pure state, given in Eq.~(\ref{rho}) for $|x|^2=p(1-p)$, can be simplified by local rotations to $|\psi_q \rangle$. One can find that a given pure state $|\psi_q\rangle$ is changed by the PDC into a mixed state, which can be given in the Bell-state basis as follows~\cite{Horst13}: \begin{eqnarray}
\RHO{PDC}(q,\kappa_{1},\kappa_{2})=(\textstyle{\frac{1}{2}}-y)|\beta_{1}\rangle\langle\beta_{1}|
+(\textstyle{\frac{1}{2}}+y)|\beta_{2}\rangle\langle\beta_{2}|\nonumber \\
+(q-\textstyle{\frac{1}{2}})(|\beta_{1}\rangle\langle\beta_{2}|+|\beta_{2}\rangle\langle\beta_{1}|), \hspace{5mm} \end{eqnarray} where $\ket{\beta_{1,2}}=\ket{\psi_{\mp}}$ and $y=\sqrt{q(1-q)(1-\kappa_{1})(1-\kappa_{2})}$. Now we set $q=1/2$ or, equivalently, we choose the input state to be $\ket{1}$, which becomes $\rho_{\rm out}(p=1,x=0)$, given by Eq.~(\ref{RhoOut1}). Then after the PDC transformation, $\rho_{\rm out}$ is changed into a Bell-diagonal state \begin{equation}
\RHO{B}=\RHO{PDC}(\tfrac12,\kappa_{1},\kappa_{2})=\lambda_-|\beta_{1}\rangle\langle\beta_{1}|
+\lambda_{+}|\beta_{2}\rangle\langle\beta_{2}|, \hspace{5mm} \label{rhoB1} \end{equation} where $\lambda_{\pm}=[1\pm\sqrt{(1-\kappa_{1})(1-\kappa_{2})}]/2$, which is a special case of Eq.~(\ref{rhoB}). By applying Eq.~(\ref{NCErhoB}), one obtains \begin{eqnarray}
N(\RHO{B}) &=& C(\RHO{B})=\sqrt{(1-\kappa_{1})(1-\kappa_{2})}, \label{NCErhoB2} \end{eqnarray} which can be changed from zero to one by changing the phase-damping coefficients. Equation~(\ref{NCErhoB2}) clearly shows that the PDC can be used for one or both output modes.
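The effect of the PDC on the singlet output can be checked directly. A minimal numerical sketch (illustrative, not part of the paper) applies the Kraus operators of Eq.~(\ref{Kraus_pdc}) to both output qubits and verifies Eq.~(\ref{NCErhoB2}):

```python
# Illustrative check: phase-damping the singlet output rho_out(p=1, x=0)
# yields a Bell-diagonal state with N = C = sqrt((1-k1)(1-k2)).
import numpy as np

def pdc_kraus(k):
    """Kraus operators of the phase-damping channel with coefficient k."""
    E0 = np.diag([1.0, np.sqrt(1 - k)])
    E1 = np.diag([0.0, np.sqrt(k)])
    return [E0, E1]

def apply_channel(rho, kraus1, kraus2):
    """Apply local Kraus channels to qubits 1 and 2."""
    out = np.zeros_like(rho, dtype=float)
    for A in kraus1:
        for B in kraus2:
            K = np.kron(A, B)
            out += K @ rho @ K.conj().T
    return out

def negativity(rho):
    pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return max(0.0, -2 * np.linalg.eigvalsh(pt).min())
```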
We can summarize that it is possible to increase the nonclassicality of an input state by the phase damping of the BS output state. Specifically, one can increase the negativity for a given value of the REE $E_R\in(0,1)$ in comparison to that predicted by the standard entanglement potentials using a balanced BS without damping. This increased nonclassicality is shown by the upper red crescent-shaped region in Fig.~2, where the uppermost curve corresponds to the entanglement of the Bell-diagonal states $\RHO{B}$. However, it should be stressed that the entanglement and nonclassicality measures are not increased by the phase-damping channels (since they act locally on the output), but the possible ratios of different measures can be increased. Specifically, we start from a highly nonclassical state and decrease its entanglement (and nonclassicality) via this phase damping in such a way that the final state has entanglement corresponding to the upper red region in Fig.~2, which cannot be generated from a single-qubit state using a BS without damping.
\subsection{How to increase the relative nonclassicality by amplitude damping}
Now we show that the nonclassicality quantified by the REE for a given value of the negativity, $N\in(0,N_2),$ can be increased by amplitude damping. Here we assume that the BS is balanced (not tunable), but we place an amplitude-damping channel in both (or even one) of the output modes (ports).
An amplitude-damping channel (ADC) for the $i$th qubit can be described by the following Kraus operators~\cite{NielsenBook}: \begin{equation}
E_{0}(\gamma_{i})=|0\rangle\langle0|+\sqrt{1- \gamma_{i}}
|1\rangle\langle1|,\quad E_{1}(\gamma_{i})=\sqrt{\gamma_{i}}|0\rangle\langle1|,\label{Kraus_adc} \end{equation} where $\gamma_{i}$ is the amplitude-damping coefficient, and $i=1,2$. As can easily be verified (see, e.g., Refs.~\cite{Horst13,Bartkiewicz13}), a pure state
$|\psi_{q}\rangle$, given in Eq.~(\ref{psi_q}), is changed by the ADC into the mixed state \begin{eqnarray} \RHO{ADC}(q,\gamma_{1},\gamma_{2})&=& \RHO{GH}(p',q')\nonumber\\
&=& p' |\psi_{q'}\rangle \langle \psi_{q'}| + (1-p')|00\rangle\<00|,\quad \label{rho_adc} \end{eqnarray} which is the generalized Horodecki state, given in Eq.~(\ref{rhoGH}), for $p'=(1-q)(1-\gamma_{1})+q(1-\gamma_{2})$ and $q'=q(1-\gamma_{2})/p'$. Note that $1-p'$ can be considered an effective damping constant of the pure
$|\psi_{q'}\rangle$, given by Eq.~(\ref{psi_q}) but for $q'$ specified above. Note that by choosing properly the parameters $q,\gamma_{1},\gamma_{2}$, the amplitude-damped state $\RHO{ADC}$ can be changed into the optimal generalized Horodecki states $\RHO{A}$, defined in Eq.~(\ref{rhoA}).
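This transformation is easy to verify numerically. A minimal sketch (illustrative, not part of the paper) applies the ADC Kraus operators of Eq.~(\ref{Kraus_adc}) to $|\psi_q\rangle$ and checks the resulting generalized Horodecki form, with $p'=(1-q)(1-\gamma_1)+q(1-\gamma_2)$ and $q'=q(1-\gamma_2)/p'$:

```python
# Illustrative check: amplitude damping of |psi_q> = sqrt(q)|01> + sqrt(1-q)|10>
# produces the generalized Horodecki form p'|psi_q'><psi_q'| + (1-p')|00><00|
# with p' = (1-q)(1-g1) + q(1-g2) and q' = q(1-g2)/p'.
import numpy as np

def adc_kraus(g):
    """Kraus operators of the amplitude-damping channel with coefficient g."""
    E0 = np.diag([1.0, np.sqrt(1 - g)])
    E1 = np.array([[0.0, np.sqrt(g)], [0.0, 0.0]])
    return [E0, E1]

def apply_channel(rho, kraus1, kraus2):
    """Apply local Kraus channels to qubits 1 and 2."""
    out = np.zeros_like(rho)
    for A in kraus1:
        for B in kraus2:
            K = np.kron(A, B)
            out += K @ rho @ K.conj().T
    return out

def psi_q(q):
    """|psi_q> = sqrt(q)|01> + sqrt(1-q)|10> as a state vector."""
    return np.array([0.0, np.sqrt(q), np.sqrt(1 - q), 0.0])
```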
Thus, we have shown that the nonclassicality of an input state can be increased by the amplitude damping of the BS output state in such a way that the REE for a given value of the negativity, $N\in(0,N_2)$, is increased in comparison to the maximum nonclassicality predicted by the standard entanglement potentials. This extra nonclassicality corresponds to the lower red crescent-shaped region in Fig.~2, where the lower boundary corresponds to the optimal generalized Horodecki states $\RHO{A}$.
Analogously to the explanation in Sec. V.A, it is important to clarify that the amplitude damping applied locally cannot increase ``good'' entanglement measures, but it can increase the ratios of different measures. Thus, by starting from a highly nonclassical state, one can decrease its entanglement via amplitude damping in such a manner that the final damped state has the entanglement corresponding to some point in the lower red region in Fig.~2, which cannot be generated in the standard approach, i.e., from a single-qubit state via a balanced lossless BS.
\subsection{How to increase the relative nonclassicality by unbalanced beam splitting}
The effect of amplitude damping can simply be modelled by a tunable lossless BS, as described by $U_{\rm BS}$, given below Eq.~(\ref{RhoOut1}) but for $\theta\neq \pi/2$. Then, by assuming that the single-qubit state $\sigma$ is given by Eq.~(\ref{rho}), the two-qubit output state is simply described by \begin{equation}
\rho^{\theta}_{\rm out}(p,x) = \left[\begin{array}{cccc} 1-p& -xr & xt& 0\\ -x^*r& pr^2&-prt&0\\
x^*t& -prt&pt^{2}&0\\ 0&0&0&0\\ \end{array}\right], \label{RhoOutT} \end{equation} where $t^2 =T=\cos^2(\theta/2)$ is the BS transmissivity and $r^2=R=\sin^2(\theta/2)$ is the BS reflectivity. Then, one can observe that $\rho^{\theta}_{\rm out}$ for $x=0$ reduces to the generalized Horodecki state $\RHO{GH}$, given in Eq.~(\ref{rhoGH}), i.e., \begin{equation}
\rho^{\theta}_{\rm out}(p,x=0) = \RHO{GH}(p,q=R=r^2).
\label{GH_TBS} \end{equation} In analogy to the results of Sec. V.B, by choosing properly the mixing parameter $p$ and the BS reflectivity $R$, one can then obtain the optimal generalized Horodecki state $\RHO{A}$, defined by Eq.~(\ref{rhoA}), as a special case of $\RHO{GH}$. The state $\RHO{A}$ maximizes the REE (or, equivalently, the GREEP) for a given value of the negativity $N$ (or the GNP) for arbitrary two-qubit states, as discussed in Appendix~B.
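For instance, in the balanced case $\theta=\pi/2$, for which $T=R=1/2$, Eq.~(\ref{GH_TBS}) reduces to $\rho^{\pi/2}_{\rm out}(p,x=0)=\RHO{GH}(p,q=1/2)$, which is the standard Horodecki state, given in Eq.~(\ref{rhoH}), since $|\psi_{1/2}\rangle$ is a Bell state.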
Finally, let us stress again that the optimally-dephased state $\RHO{A}$ cannot be obtained from a single-qubit state $\sigma$ by a balanced lossless BS if $N<N_2$ [see Fig.~3]. Thus, such high nonclassicality of $\sigma$ cannot be measured by applying the standard entanglement potentials.
These results are summarized in Table~I.
\section{Conclusions}
This paper addressed the problem of quantifying the nonclassicality of an arbitrary single-qubit optical state in the unified picture of nonclassicality and entanglement using the concept of (standard) entanglement potentials introduced in Ref.~\cite{Asboth05}. The basis states of the analyzed optical qubits are the vacuum and single-photon states. In this approach, the nonclassicality of a single-qubit state is measured by the entanglement, which can be generated by the light combined with the vacuum on a balanced lossless beam splitter.
We applied the three most popular measures of two-qubit entanglement to the states generated with this auxiliary beam splitter. Specifically, we used the following standard entanglement potentials of a single-qubit state based on the negativity (N), concurrence (C), and the relative entropy of entanglement (REE).
We presented a comparative study of these entanglement potentials showing a counterintuitive result that an entanglement potential, for a given value of another entanglement potential, can be increased by phase and amplitude damping, as well as unbalanced beam splitting.
The goal of this work was to find the maximal nonclassicality, corresponding to the maximal value of one entanglement potential for a fixed value of another entanglement potential, for (i) arbitrary two-qubit states and (ii) those states which can be generated from a single-qubit state and the vacuum via a balanced lossless beam splitter.
We found that the maximal relative nonclassicality measured by the REE potential for a fixed value ($\lesssim 0.527$) of the negativity potential can be increased by the amplitude damping of the output state of the balanced beam splitter or, equivalently, by replacing this beam splitter by a tunable lossless one. We also showed that the maximal nonclassicality measured by the negativity potential for a given value (except the extremal values 0 and 1) of the REE potential can be increased by phase damping (dephasing). Thus, we introduced the concept of generalized entanglement potentials in analogy with the standard potentials, but allowing unbalanced beam splitting or dissipation. Of course, the entanglement itself is not increased by these losses (since they act locally on the output), but the possible ratios of different measures are affected.
The physical or operational meaning of the standard and generalized entanglement potentials is closely related to the corresponding standard entanglement measures. So, let us recall the operational meaning of the entanglement measures:
(i) The negativity is a monotonic function of the logarithmic negativity, which has an operational meaning as a PPT entanglement cost, i.e., the entanglement cost under the operations preserving the positivity of the partial transpose (PPT)~\cite{Audenaert03,Ishizaka04}. The logarithmic negativity is an upper bound to the entanglement of distillation $E_D$~\cite{Vidal02}. Note that $E_D$ quantifies the resources required to extract (i.e., distill) the maximum fraction of the Bell states from multiple copies of a given partially-entangled state. The negativity is also a useful estimator of entanglement dimensionality, i.e., the number of entangled degrees of freedom of two subsystems~\cite{Eltschka13}. Then, we can interpret both standard and generalized negativity potentials as the entanglement potentials for the PPT entanglement cost and for estimating the entangled dimensions of the light in the beam-splitter outputs.
(ii) The concurrence is monotonically related to the entanglement of formation, $E_F$~\cite{Wootters98}, which quantifies the resources required to create a given entangled state~\cite{Bennett96}. Thus, both standard and generalized concurrence potentials can also be interpreted as the potentials for the entanglement of formation. We note that the concurrence potential of a single-qubit state $\sigma$ can also be interpreted~\cite{Miran15} as a Hillery-type nonclassical distance~\cite{Hillery87}, defined by the Bures distance of $\sigma$ to the vacuum.
(iii) The REE $E_R$ is a convenient geometric measure of the distinguishability of an entangled state from separable states. Thus, both standard and generalized REE potentials can be used as measures of distinguishability of a nonclassical state from classical states.
Moreover, it is worth noting that the following inequalities hold~\cite{Horodecki09review}: $$E_F(\rho)\ge E_C(\rho)\ge E_R(\rho)\ge E_D(\rho)$$ for an arbitrary two-qubit state $\rho$, where the equalities hold for pure states. Here, $E_C$ is the (true) entanglement cost. Clearly, the same inequalities hold for the corresponding entanglement potentials (EP): $${\rm EP}_F(\sigma)\ge {\rm EP}_C(\sigma)\ge {\rm EP}_R(\sigma)\ge {\rm EP}_D(\sigma),$$ for an arbitrary single-qubit state $\sigma$. Here ${\rm EP}_F(\sigma)=h(\textstyle{\frac{1}{2}}[1+\sqrt{1-{\rm CP}^2(\sigma)}])$ by applying Eq.~(\ref{EoF}). Thus, the REE potential ${\rm EP}_R(\sigma)\equiv {\rm REEP}(\sigma)$ is an upper (lower) bound for the potential of the entanglement of distillation (formation). Analogous conclusions can be drawn for the generalized REEP.
Thus, the maximal nonclassicality measured by the standard negativity potential for a given value of the standard REE potential can be exceeded (except the values 0 and 1) by the corresponding generalized potentials. This conclusion can be rephrased in various ways by recalling multiple physical meanings of these potentials and the corresponding entanglement measures, as explained above.
By contrast to these results, we found that the maximal relative nonclassicality cannot be increased if it is measured by (i) the concurrence potential for any given value of the negativity potential, and vice versa, (ii) the concurrence potential for any fixed value of the REE potential, and vice versa, or (iii) the REE potential for a fixed value $\gtrsim 0.527$ of the negativity potential. This is because, for these three cases, the entanglement generated in the standard approach of Ref.~\cite{Asboth05} is exactly the same as the maximal entanglement of arbitrary two-qubit states.
As discussed in Ref.~\cite{Asboth05}: ``Although we currently lack of a general proof, all examples we checked analytically and numerically indicate that the transmissivity of the optimal BS is 1/2 independent of the input state.'' Our examples indicate that not only can a transmissivity $T\neq 1/2$ lead to higher relative nonclassicality, but that adding dissipation can increase it as well.
It is worth noting that Refs.~\cite{Horst13,Bartkiewicz13}, which are closely related to the present study, showed that amplitude damping and phase damping of pure states can increase the ratios of various measures of entanglement and Bell's nonlocality. Moreover, Refs.~\cite{Bula13,Meyer13} showed how to increase the ratios of entanglement measures of amplitude-damped states by a linear-optical qubit amplifier.
Moreover, it is known that both pure and completely-dephased single-qubit states can be considered the most nonclassical by comparing some entanglement potentials~\cite{Miran15}. Here, we found partially-dephased states, which are the most nonclassical in terms of the highest negativity potential for a given value of the REE potential $\lesssim 0.6$.
On the basis of our results, one can infer that some standard entanglement measures may not be useful for entanglement potentials. Alternatively, one can conclude that a single balanced lossless beam splitter does not always transfer the whole nonclassicality of its input state into the entanglement of its output modes. The concept of generalized entanglement potentials can solve this problem at least for the cases analyzed in this work.
\acknowledgments
The authors thank Anirban Pathak, Jan Pe\v{r}ina Jr., and Mehmet Emre Tasgin for stimulating discussions. A.M. gratefully acknowledges a long-term fellowship from the Japan Society for the Promotion of Science (JSPS). A.M. is supported by the Grant No. DEC-2011/03/B/ST2/01903 of the Polish National Science Centre. K.B. acknowledges the support by the Polish National Science Centre (Grant No. DEC-2013/11/D/ST2/02638) and by the Ministry of Education, Youth and Sports of the Czech Republic (Project No. LO1305). F.N. is partially supported by the RIKEN iTHES Project, the MURI Center for Dynamic Magneto-Optics via the AFOSR award number FA9550-14-1-0040, the IMPACT program of JST, and a Grant-in-Aid for Scientific Research (A).
\appendix
\section{Definitions of standard entanglement measures}
For the completeness of our presentation, we recall the well-known definitions of three popular measures of entanglement of two-qubit states: the negativity, concurrence, and the relative entropy of entanglement (REE), which are applied in our paper.
(i) The negativity of a bipartite state $\rho$ can be defined as~\cite{Zyczkowski98,Vidal02} \begin{equation}
{N}({\rho})=\max \big[0,-2\min{\rm eig}(\rho^{\Gamma})\big],
\label{negativity} \end{equation} which is proportional to the minimal negative eigenvalue of the partially transposed $\rho$, denoted by $\rho^{\Gamma}$. For two-qubit states, the minimization in this definition can be omitted, as $\rho^{\Gamma}$ can have at most a single negative eigenvalue. The negativity is a monotonic function of the logarithmic negativity, $\log_2[N(\rho)+1]$, which can be interpreted as the entanglement cost under the operations preserving the positivity of the partial transpose~\cite{Audenaert03, Ishizaka04}. The negativity can also be interpreted as an estimator of entanglement dimensionality~\cite{Eltschka13}. In this paper, for simplicity, we apply the potential based on the negativity instead of the logarithmic negativity.
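For example, for the singlet state $|\psi^{-}\rangle=(|01\rangle-|10\rangle)/\sqrt{2}$, the partially transposed matrix $\rho^{\Gamma}$ has the eigenvalues $\{1/2,1/2,1/2,-1/2\}$, so Eq.~(\ref{negativity}) yields $N(|\psi^{-}\rangle)=1$, while $N=0$ for any separable state.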
(ii) The concurrence for a two-qubit state $\rho$ can be defined as~\cite{Wootters98}: \begin{equation}
C({\rho})=\max \Big\{0,2\lambda_{\max}-\sum_j\lambda_j\Big\},
\label{concurrence} \end{equation} where $\lambda^2 _{j} = \mathrm{eig}[{\rho }(Y\otimes Y){\rho}^{\ast }(Y\otimes Y)]_j$, $\lambda_{\max}=\max_j\lambda_j$, $Y$ is the Pauli operator, and the asterisk denotes complex conjugation. The concurrence is monotonically related to the entanglement of formation, $E_{F}({\rho})$~\cite{Bennett96}, via the binary entropy $h(x)$, defined below Eq.~(\ref{REEP_pure}), as follows~\cite{Wootters98}: \begin{equation} E_{F}({\rho})= h\left(\textstyle{\frac{1}{2}}[1+\sqrt{1-C^2({\rho})}]\right). \label{EoF} \end{equation} In this paper, we solely apply the concurrence potential rather than the potential based on the entanglement of formation.
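As a simple illustration, for the singlet state $|\psi^{-}\rangle$ one has $(Y\otimes Y)|\psi^{-}\rangle=-|\psi^{-}\rangle$ and $\rho^{\ast}=\rho$, so ${\rho}(Y\otimes Y){\rho}^{\ast}(Y\otimes Y)=\rho$, with the eigenvalues $\{1,0,0,0\}$. Thus, $\lambda_{j}=\{1,0,0,0\}$, $C(|\psi^{-}\rangle)=2\cdot1-1=1$, and Eq.~(\ref{EoF}) gives $E_{F}=h(1/2)=1$, as expected for a maximally entangled state.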
(iii) The REE for a two-qubit state $\rho$ can be defined as~\cite{Vedral97,Vedral98}: \begin{equation}
E_R(\rho)=S(\rho ||\rho_{\rm CSS})=\min_{\rho_{\rm sep}\in {\cal D}} S(\rho ||\rho_{\rm sep}),
\label{REE} \end{equation}
which is the relative entropy $S(\rho ||\rho_{\rm sep} )={\rm Tr}\,( \rho \log_2 \rho -\rho\log_2 \rho_{\rm sep})$ given in terms of the Kullback-Leibler distance minimized over the set ${\cal D}$ of all two-qubit separable states $\rho_{\rm sep}$. Thus, $\rho_{\rm CSS}$ denotes a closest separable state (CSS) for a given $\rho$. Note that the Kullback-Leibler distance is not symmetric and does not fulfill the triangle inequality, thus it is not a true metric. The motivation behind using the Kullback-Leibler distance, instead of other true metrics (like, e.g., the Bures distance) is the following: For pure states, the REE based on the Kullback-Leibler distance reduces to the von Neumann entropy of one of the subsystems. Contrary to the negativity and concurrence, the REE is computationally difficult to calculate for two-qubit states, except for some special classes of states. This problem, formulated in Ref.~\cite{Eisert05}, corresponds to finding an analytical compact formula for the CSS for a general two-qubit mixed state. As explained in, e.g., Refs.~\cite{Ishizaka03,Miran08a,Kim10}, it is very unlikely that this problem can be solved analytically. Surprisingly, there is a compact-form solution of the converse problem: for a given CSS, all the entangled states (with the same CSS) can be found analytically, not only for two qubits~\cite{Ishizaka03,Miran08a} but, in general, for arbitrary multipartite systems of any dimensions~\cite{Friedland11}. Inspired by this approach to the REE, a general method has been developed recently in Ref.~\cite{Girard14} to solve such converse problems instead of finding explicit solutions of convex optimization problems in quantum information theory.
There are various numerical procedures for calculating the two-qubit REE~\cite{Vedral98,Miran08b,Zinchenko10,Girard15}. Probably the most reliable and efficient is the algorithm of Ref.~\cite{Girard15} based on semidefinite programming in CVX~\cite{CVX} (a MATLAB-based modeling system for convex optimization), which is also applied in this paper.
All these measures vanish for separable states and are equal to one for the two-qubit Bell states.
\section{Boundary states for arbitrary two-qubit states}
Here, we recall well-known results~\cite{Eisert99,Verstraete01,Miran04a,Miran04b,Miran08b,Horst13} on the boundary states of one entanglement measure for a given value of another entanglement measure for arbitrary two-qubit states $\rho$. Note that the two-qubit states $\rho_{\rm out}$, which can be generated from single-qubit states $\sigma$ and the vacuum by a balanced BS, are only a subset of the states $\rho$. These boundary states are shown by red solid curves in Fig.~1.
\subsection{Pure states}
A two-qubit pure state, $\ket{\psi}=\sum_{n,m=0,1}c_{nm}\ket{nm}$, including the state \begin{equation}
|\psi_{\rm out} \rangle
=\sqrt{1-p}|00\rangle+\sqrt{\tfrac{p}{2}}(|10\rangle-|01\rangle), \label{PsiRhoOut} \end{equation}
which is generated by a balanced BS from a general single-qubit pure state, can be simplified by local rotations to $|\psi_q
\rangle$, given by Eq.~(\ref{psi_q}). In the case of the state given by Eq.~(\ref{PsiRhoOut}), we have $p=2\sqrt{q(1-q)}$. The negativity and concurrence for $|\psi_q \rangle$ read \begin{equation}
N\equiv N(|\psi _{q}\rangle) =C(|\psi _{q}\rangle)
=2\sqrt{q(1-q)},
\label{N1} \end{equation} and the REE can be given as a function of the negativity (or concurrence) as follows \begin{equation}
E_R(|{\psi}_{q}\rangle) = h\left( \tfrac{1}{2}[1+\sqrt{1-N^{2}}]\right), \label{N05} \end{equation} via the binary entropy $h$.
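The relation $p=2\sqrt{q(1-q)}$ can be verified directly: the concurrence of a two-qubit pure state $|\psi\rangle$ equals $C(|\psi\rangle)=2|c_{00}c_{11}-c_{01}c_{10}|$, which gives $C(|\psi_{\rm out}\rangle)=2\cdot\tfrac{p}{2}=p$ for the state in Eq.~(\ref{PsiRhoOut}) and $C(|\psi_{q}\rangle)=2\sqrt{q(1-q)}$; since the concurrence is invariant under local unitary rotations, the two values must coincide.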
Pure states have the highest entanglement for arbitrary two-qubit states in the following cases: (i) the maximal negativity for a given value of the concurrence $C\in[0,1]$ [see Fig. 1(a)], as shown in Ref.~\cite{Verstraete01}, (ii) the maximal REE for a given value of the concurrence $C\in[0,1]$ [see Fig. 1(b)], as shown in Ref.~\cite{Miran04b}, and (iii) the maximal REE for a given value of the negativity $N\in[N_2,1]$, where $N_2\approx 0.527$ [see Fig. 1(c)] as demonstrated in Ref.~\cite{Miran08b}.
\subsection{Horodecki states}
The Horodecki states, which are defined in Eq.~(\ref{rhoH}), can be generated by the balanced BS transformation. These states have the highest entanglement among arbitrary two-qubit states in the following cases: (i) the maximal concurrence for a given value of the negativity $N\in[0,1]$ [see Fig. 1(a)], as shown in Ref.~\cite{Verstraete01}, and (ii) the maximal concurrence for a given value of the REE $E_R\in[0,1]$ [see Fig. 1(b)], as shown in Ref.~\cite{Miran04b}; moreover, they are (iii) very close to the maximal REE for a given value of the negativity $N\lesssim 0.2$ [see Fig. 1(c)], as discussed in Ref.~\cite{Miran08b}.
In addition to these two classes of states there are also two other classes of boundary states, which cannot be generated by a lossless balanced BS, as discussed below.
\subsection{Generalized Horodecki states}
A generalized Horodecki state $\RHO{GH}$ can be defined as a statistical mixture of a pure state $|\psi_q\rangle$, given by Eq.~(\ref{psi_q}), and a separable state (say the vacuum) orthogonal to it, i.e.,~\cite{Miran08a}: \begin{eqnarray}
\RHO{GH}(p,q) &=& p |\psi_q\rangle \langle \psi_q| + (1-p)|00\rangle\<00|, \label{rhoGH} \end{eqnarray}
where $p,q\in [0,1]$. When the pure state $|\psi_q\rangle$ is a Bell state, then $\RHO{GH}$ becomes the standard Horodecki state, given in Eq.~(\ref{rhoH}). The negativity and concurrence are simply given by: \begin{eqnarray} N(\RHO{GH}) &=& \sqrt{(1-p)^2+4p^2q(1-q)}-(1-p), \label{N20a} \quad \\
C(\RHO{GH}) &=& 2p\sqrt{q(1-q)}. \label{N20b} \end{eqnarray} Unfortunately, no compact-form analytical expression for the REE for the general state $\RHO{GH}$ is known. We can express the parameter $q$ as a function $f_1(p,N)$ [$f_2(p,C)$] of the mixing parameter $p$ and the negativity (concurrence) as follows: \begin{eqnarray}
q &=& f_1(p,N) = \frac{1}{2p}\left[p\pm \sqrt{p^2-N^2-2N(1-p)}\right]
\notag \\
&=& f_2(p,C) = \frac12\Big(1\pm \sqrt{1-(C/p)^2}\Big), \label{N21} \end{eqnarray} by simply inverting formulas in Eqs.~(\ref{N20a}) and~(\ref{N20b}). Thus, one can have an explicit formula for $\RHO{GH}$ as a function of $N$ and $C$ as follows: $\RHO{GH}[p,q=f_1(p,N)]$ for $p\ge \sqrt{2N(1+N)}-N$, and $\RHO{GH}[p,q=f_2(p,C)]$ for $p\ge C$.
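As a consistency check of the inversion, substituting $q=f_2(p,C)$ into Eq.~(\ref{N20b}) gives $q(1-q)=\tfrac14\big[1-(1-(C/p)^2)\big]=[C/(2p)]^{2}$ and, hence, $C(\RHO{GH})=2p\sqrt{[C/(2p)]^{2}}=C$, as required. The conditions $p\ge \sqrt{2N(1+N)}-N$ and $p\ge C$ simply guarantee that the arguments of the square roots in Eq.~(\ref{N21}) are nonnegative.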
The optimal generalized Horodecki state $\RHO{A}$, as shown in Fig.~3, can be defined as the generalized Horodecki state $\RHO{GH}$, which maximizes the REE for a given $N$~\cite{Miran08a}: \begin{equation}
\RHO{A}(N)=\RHO{GH}[\bar p_{\rm opt},f_1(\bar p_{\rm opt},N)],
\label{rhoA} \end{equation} where the optimal mixing parameter $\bar p_{\rm opt}(N)$ is chosen such that \begin{equation}
E_R\{\RHO{GH}[\bar p_{\rm opt},f_1(\bar p_{\rm opt},N)]\}\ =\max_p E_R\{\RHO{GH}[p,f_1(p,N)]\}.
\label{p_opt} \end{equation}
\subsection{Bell-diagonal states}
A general Bell-diagonal state is defined by \begin{eqnarray}
\RHO{B'}=\sum_{i=1}^4 \lambda_i |\beta_{i} \rangle\langle
\beta_{i}|, \label{rhoB} \end{eqnarray}
which is a statistical mixture of the Bell states $|\beta_{i} \rangle$, with the normalized weights $\lambda_i$, i.e., $\sum_j\lambda_j=1$. The entanglement measures for $\RHO{B'}$ are given as follows \begin{eqnarray}
N(\RHO{B'}) &=& C(\RHO{B'})=2\max(0,\Lambda-1/2)\ \equiv N, \notag \\
E_R(\RHO{B'})&=& 1-h[(1+N)/2], \label{NCErhoB} \end{eqnarray} where $\Lambda=\max_j\lambda_j$. A well-studied example of the Bell-diagonal states is the Werner state~\cite{Werner89}: \begin{equation}
\RHO{W}=\frac{1+2N}{3}|\psi^{-}\rangle \langle
\psi^{-}|+\frac{1-N}{6} I, \label{N15} \end{equation}
where $I$ is the two-qubit identity operator and $|\psi^{-}\rangle$ is the singlet state.
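One can check that the parametrization of $\RHO{W}$ in Eq.~(\ref{N15}) is consistent with Eq.~(\ref{NCErhoB}): writing the identity as $I=\sum_{i}|\beta_{i}\rangle\langle\beta_{i}|$, the weight of the singlet state in $\RHO{W}$ is $\Lambda=\frac{1+2N}{3}+\frac{1-N}{6}=\frac{1+N}{2}$, so that $2\max(0,\Lambda-1/2)=N$, i.e., the parameter $N$ in Eq.~(\ref{N15}) is indeed the negativity of the Werner state.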
The Bell-diagonal states exhibit the highest negativity for a given value of the REE, as discussed in Ref.~\cite{Miran04b} and shown by the red uppermost curve in Figs.~1(c) and~3. Note that these states, together with pure states, have the highest negativity for a given value of the concurrence, as discussed in Ref.~\cite{Verstraete01} and shown by the lowest red line in Fig.~1(a). It is important to mention that the Bell-diagonal states cannot be generated by a lossless balanced BS.
\end{document} |
\begin{document}
\title{Strebel differentials on stable curves and Kontsevich's
proof of Witten's conjecture}
\begin{abstract}
We define Strebel differentials for stable complex curves, prove the existence and uniqueness theorem that generalizes Strebel's theorem for smooth curves, prove that Strebel differentials form a continuous family over the moduli space of stable curves, and show how this construction can be applied to clarify a delicate point in Kontsevich's proof of Witten's conjecture.
\end{abstract}
\section{Introduction} \label{Sec:intro}
\subsection{Motivation}
The main motivation of this paper is to clarify a delicate point in Kontsevich's proof of the Witten conjecture~\cite{Kontsevich}. The conjecture concerns the intersection numbers of the first Chern classes of some line bundles ${\cal L}_1, \dots, {\cal L}_n$ over the Deligne-Mumford compactification ${\overline{\cal M}}_{g,n}$ of the moduli space of $n$-pointed genus $g$ curves.
The proof by Kontsevich uses in an essential way the cell decomposition of the space ${\cal M}_{g,n} \times {\mathbb R}_+^n$ given by Strebel differentials. This cell decomposition has a natural closure, and it is necessary to know the relation between this closure and the Deligne-Mumford compactification of the moduli space. In his paper Kontsevich gives the answer, but the proof is only briefly sketched.
A full proof was provided in a very thorough paper by E.~Looijenga~\cite{Looijenga95}, and here we give yet another proof. However, many people are not sure what the exact implications of Looijenga's results are. In particular, S.~P.~Novikov still insists that there is a gap in Kontsevich's proof. Therefore we think that it is useful to give a complete account of the situation. This paper is essentially an overview, although it contains some new results (Theorems~\ref{Thm:contfam} and~\ref{Thm:cont2}, as well as a new proof of Theorem~\ref{Thm:homeomorphism}).
We assume that the notion of moduli space ${\cal M}_{g,n}$ of Riemann surfaces of genus $g$ with $n$ marked and numbered points is known, as well as the notion of a stable curve and that of the Deligne-Mumford compactification ${\overline{\cal M}}_{g,n}$ of the moduli space. We also use the standard notation ${\cal L}_i$ for the line bundle over ${\overline{\cal M}}_{g,n}$ whose fiber over a point $x \in {\overline{\cal M}}_{g,n}$ representing a stable curve $C_x$ is the cotangent line to $C_x$ at the $i$th marked point.
\subsection{The main steps of the argument}
Here is a sketch of what is to be or has been done to justify Kontsevich's expressions of the first Chern classes of the bundles ${\cal L}_i$.
{\bf 1.} A quotient $K{\overline{\cal M}}_{g,n}$ of the Deligne-Mumford compactification ${\overline{\cal M}}_{g,n}$ by some equivalence relation is constructed. The quotient is a compact Hausdorff topological orbifold. The line bundles ${\cal L}_i$ on ${\overline{\cal M}}_{g,n}$ are pull-backs of some line bundles over $K{\overline{\cal M}}_{g,n}$, which we will also denote by ${\cal L}_i$. The quotient $K{\overline{\cal M}}_{g,n}$ has only singularities of real codimension at least~2. Therefore its fundamental homology class is well-defined. The fundamental homology class of ${\overline{\cal M}}_{g,n}$ is sent to the fundamental homology class of $K{\overline{\cal M}}_{g,n}$ under the factorization.
All these facts are quite simple once the equivalence relation is given. They are formulated in Kontsevich's original paper, except for the fact that $K{\overline{\cal M}}_{g,n}$ is Hausdorff, which was proved by Looijenga.
{\bf 2.} $K{\overline{\cal M}}_{g,n} \times {\mathbb R}_+^n$ is homeomorphic to a cell complex $A$. There is a piecewise affine projection from $A$ to ${\mathbb R}_+^n$ that commutes with the projection from $K{\overline{\cal M}}_{g,n} \times {\mathbb R}_+^n$ to ${\mathbb R}_+^n$.
The cell complex and the homeomorphism were constructed by Kontsevich, but he omitted the proof of continuity. The proof is one of the main results of Looijenga's paper. Here we give a different proof.
{\bf 3.} For each line bundle ${\cal L}_i$, one constructs a cell complex ${\cal B}_i$ together with a projection ${\cal B}_i \rightarrow A$ satisfying the following properties. Each fiber of the projection is homeomorphic to a circle. The (real) spherization $({\cal L}_i^* \setminus \mbox{zero section})/{\mathbb R}_+$ of the dual line bundle ${\cal L}_i^*$ is isomorphic to the circle bundle ${\cal B}_i$.
The bundles ${\cal B}_i$ were constructed by Kontsevich, the isomorphism of the two bundles is immediate.
{\bf 4.} A connection on the bundle ${\cal B}_i$ can be explicitly given. Its curvature represents the first Chern class of the bundle.
The connection and the curvature were written out by Kontsevich. However, they are only piecewise smooth forms defined on a cell complex that is not homeomorphic to a smooth orbifold. Therefore one should explain why (and in what sense) the curvature indeed represents the first Chern class correctly. This issue does not seem to have been raised in the literature. We deal with it here in Section~\ref{Sec:Chern}.
Thus the goal of this paper is to give a complete proof of the following theorem.
Consider a point ${\bf p} = (p_1, \dots, p_n ) \in {\mathbb R}_+^n$. Denote by $A_{\bf p}$ the preimage of $\bf p$ under the projection from $A$ to ${\mathbb R}_+^n$. Then $A_{\bf p}$ is also a cell complex.
\begin{theorem} \label{Thm:int=int} To each bundle ${\cal L}_i$ one can assign a piecewise smooth $2$-form $\omega_i$ on the cell complex $A$ in such a way that $$ \int\limits_{{\overline{\cal M}}_{g,n}} c_1({\cal L}_1)^{d_1} \dots c_1({\cal L}_n)^{d_n} \; = \int\limits_{ \begin{array}{c} \mbox{\rm cells of top}\\ \mbox{\rm dimension of } A_{\bf p} \end{array} } \omega_1^{d_1} \dots \omega_n^{d_n} $$ for every ${\bf p} \in {\mathbb R}_+^n$. \end{theorem}
\subsection{The organization of the paper}
This paper is organized as follows.
In Section~\ref{Sec:strebel} we review the notion of Strebel differentials for smooth curves and give, without proof, two examples of its degeneration as a curve tends to a singular stable curve in ${\overline{\cal M}}_{g,n}$.
In Section~\ref{Sec:stable} we review the notion of dualizing sheaf for a stable curve and define Strebel differentials for nonsmooth stable curves. We show that Strebel differentials with given perimeters $p_1, \dots, p_n$ form a continuous section of the vector bundle of quadratic differentials over ${\overline{\cal M}}_{g,n}$. This theorem does not, as far as we know, appear in the literature.
In Section~\ref{Sec:compactifications} we describe the compactification $K{\overline{\cal M}}_{g,n}$ and the cell complex $A$ homeomorphic to $K{\overline{\cal M}}_{g,n} \times {\mathbb R}_+^n$. We prove that they are indeed homeomorphic, and also briefly sketch Looijenga's proof~\cite{Looijenga95}.
In Section~\ref{Sec:Chern} we describe a framework that allows one to work with differential forms on cell complexes. The forms representing the connections on the bundles ${\cal B}_i$ and their curvatures fit into this framework and thus their use is justified.
\section{Strebel's theorem} \label{Sec:strebel}
\subsection{The case of smooth curves: a review} \label{Ssec:smooth}
Let $C$ be a Riemann surface. A {\em simple differential} on $C$ is a meromorphic section of its cotangent bundle. In a local coordinate $z$, it can be written as $f(z) dz$, where $f$ is a meromorphic function. A {\em quadratic differential} is a meromorphic section of the tensor square of the cotangent bundle. In a local coordinate $z$, it can be written as $f(z) dz^2$.
Let $\varphi$ be a quadratic differential and $z_0 \in C$ a point that is neither a pole nor a zero of~$\varphi$. Then $\varphi$ has a square root in the neighborhood of $z_0$: it is a simple differential $\gamma$, unique up to a sign, such that $\gamma^2 = \varphi$. The integral $$ \int_{z_0}^z \gamma $$ is a biholomorphic mapping from a neighborhood of $z_0$ in $C$ to a neighborhood of $0$ in $\mathbb{C}$. The preimages of the horizontal (vertical) lines in $\mathbb{C}$ under this mapping are called {\em horizontal {\rm(}vertical{\rm)} trajectories} of the quadratic differential $\varphi$. Because $\gamma$ is defined only up to a sign, these trajectories do not have a natural orientation.
If $z_0$ is a double pole, then in the neighborhood of $z_0$ the quadratic differential $\varphi$ has an expansion $$ \varphi = a \frac{dz^2}{(z-z_0)^2} + \dots \, . $$ The complex number $a$ is called the {\em residue} of the double pole and does not depend on the choice of the local coordinate.
By a local analysis, it is easy to see that if $z_0$ is a $d$-tuple zero of $\varphi$, then there are $d+2$ horizontal trajectories issuing from $z_0$. Further, if $z_0$ is a simple pole, then there is a unique horizontal trajectory issuing from $z_0$. Finally, if $z_0$ is a double pole whose residue is a negative real number $a = -(p/2\pi)^2$, then $z_0$ is surrounded by closed horizontal trajectories (see Figure~\ref{1}~(b)). These trajectories, together with $z_0$,
form a topological open disc in $C$. In the metric $|\varphi|$ all these trajectories have the same length $p$. The other possible cases are the case of a double pole with a positive real or a nonreal residue and the case of poles of order greater than~$2$. We will not need them, so we leave them to the reader.
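A model example of the last case is the differential $\varphi=-\left(\frac{p}{2\pi}\right)^{2}\frac{dz^{2}}{z^{2}}$ on the punctured neighborhood $0<|z|<1$, with a double pole at $z_0=0$ of residue $-(p/2\pi)^{2}$. Its square root is $\gamma=\frac{ip}{2\pi}\,\frac{dz}{z}$, and the integral $w=\frac{ip}{2\pi}\log z$ maps the circles $|z|={\rm const}$ to horizontal lines. Thus the horizontal trajectories are exactly the circles centered at the pole; on the circle $|z|=r$ we have $\sqrt{|\varphi|}=\frac{p}{2\pi}\,\frac{|dz|}{|z|}$, so each trajectory has length $p$, and the punctured neighborhood is isometric to a semi-infinite cylinder of circumference $p$.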
Now we are ready to formulate Strebel's theorem.
Let $C$ be a connected (not necessarily compact) Riemann surface without boundary with $n \geq 1$ distinct marked and numbered points $z_1, \dots, z_n$. We will be interested only in the case where $C$ is a surface of genus $g$ with a finite number of punctures. (The punctures and the marked points are two different things and are pairwise distinct.) However Theorem~\ref{Thm:Strebel} below is applicable to any hyperbolic surface $C$, i.e., whenever the universal covering of $C \setminus \{ z_1, \dots, z_n \}$ is the Poincar\'e disc. For example, $C$ can be a ring, a torus with a puncture, a surface of genus $2$ with a hole, or a surface of infinite genus.
In his book on quadratic differentials Strebel proves the following theorem (\cite{Strebel},~Theorem~23.5 and Theorem~23.2 for $n=1$).
\begin{theorem} \label{Thm:Strebel} For any positive real numbers $p_1, \dots, p_n$ there exists a unique quadratic differential $\varphi$ on $C$ satisfying the following conditions. (i)~It has double poles at the marked points and no other poles. (ii)~The residue at $z_i$ equals $-(p_i/2\pi)^2$. (iii)~If we denote by $D_i$ the disc domain formed by the closed horizontal trajectories of $\varphi$ surrounding $z_i$, then $$ \bigcup_i \overline{D_i} = C. $$ \end{theorem}
\begin{definition} A quadratic differential satisfying the above conditions is called a {\em Strebel differential.} \end{definition}
If $C$ is a compact surface of genus $g$, then the nonclosed horizontal trajectories of a Strebel differential $\varphi$ form a connected graph embedded into $C$. All its vertices (situated at the zeroes of $\varphi$)
have degrees $\geq 3$. Its edges have natural lengths (measured with the length measure $\sqrt{|\varphi|}$). Its faces are the disc domains
$D_i$ and are in a one-to-one correspondence with the marked points. The perimeter of the $i$th face equals $p_i$. Each face $D_i$, punctured at its marked point, has a natural flat Riemannian metric $|\varphi|$. In this metric it is isometric to a semi-infinite cylinder whose base is a circle of length $p_i$.
\begin{definition} An {\em embedded}\, or {\em ribbon} graph is a connected graph endowed with a cyclic order of the half-edges issuing from each vertex. \end{definition}
It is a standard fact that any given cyclic order allows one to construct a unique embedding of the graph into a surface. If an abstract graph is embedded into a surface, the cyclic order of the half-edges adjacent to a vertex is just the counterclockwise order.
On the set $H$ of the half-edges of the graph we introduce three permutations: $\sigma_0$ is the product of the cyclic permutations assigned to the vertices, $\sigma_1$ is the involution without fixed points that exchanges the half-edges of each edge, $\sigma_2 = \sigma_0^{-1} \sigma_1$ is the permutation whose cycles correspond to the faces of the ribbon graph. These permutations sum up all the information about the ribbon graph.
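This encoding is easy to work with directly. The following sketch (ours) recovers the faces of a ribbon graph from $\sigma_0$ and $\sigma_1$ and computes the perimeters from hypothetical edge lengths; the theta graph (two vertices, three edges) with its planar cyclic orders is an illustrative input, not an example taken from the text.

```python
# Sketch (ours): faces of a ribbon graph from the permutations sigma_0 and
# sigma_1.  Half-edges are integers; 0, 1, 2 issue from one vertex of the
# theta graph and 3, 4, 5 from the other.

def compose(p, q):
    """Product p * q, acting as: apply q first, then p."""
    return {h: p[q[h]] for h in q}

def cycles(p):
    """Decompose a permutation (given as a dict) into its cycles."""
    seen, result = set(), []
    for start in p:
        if start not in seen:
            cyc = [start]
            seen.add(start)
            while p[cyc[-1]] not in seen:
                cyc.append(p[cyc[-1]])
                seen.add(cyc[-1])
            result.append(tuple(cyc))
    return result

sigma0 = {0: 1, 1: 2, 2: 0, 3: 5, 5: 4, 4: 3}   # cyclic orders at the vertices
sigma1 = {0: 3, 3: 0, 1: 4, 4: 1, 2: 5, 5: 2}   # pairing of half-edges into edges
sigma0_inv = {v: k for k, v in sigma0.items()}

# sigma_2 = sigma_0^{-1} sigma_1; its cycles are the faces.
faces = cycles(compose(sigma0_inv, sigma1))

# With (hypothetical) edge lengths, a face's perimeter is the sum of the
# lengths of the edges to which its half-edges belong.
length = {0: 1.0, 3: 1.0, 1: 2.0, 4: 2.0, 2: 3.0, 5: 3.0}
perimeters = [sum(length[h] for h in f) for f in faces]
```

For this input one gets three two-edge faces, and the sum of all perimeters is twice the total edge length, as it must be, since each edge borders two face-sides.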
\begin{proposition} \label{Prop:strips} If we are given a ribbon graph with $n$ numbered faces, endowed with edge lengths, and such that each vertex has degree at least $3$, then there is a unique way to recover a Riemann surface $C$ with $n$ marked points and to determine the perimeters $p_i$ in such a way that the ribbon graph is the graph of nonclosed horizontal trajectories of the corresponding Strebel differential. \end{proposition}
The construction given in the proof is described, for example, in~\cite{Kontsevich}, Section~2.2.
\paragraph{Proof of Proposition~\ref{Prop:strips}.} To find the perimeters $p_i$ we just add up the lengths of the edges surrounding each face.
The Riemann surface is obtained in the following way. To every oriented edge of the ribbon graph we assign a strip $[0,l] \times [0, + i \infty[$ in the complex plane, where $l$ is the length of the edge. This strip inherits the standard complex structure from the complex plane. Now we construct our surface by gluing together the strips corresponding to all the oriented edges (Figure~\ref{Fig:strips}).
First, for every edge, we glue together the two strips that correspond to the two ways of orienting this edge. The segment $[0,l]$ is identified with $[l,0]$ and the complex structure is extended in the natural way. Now we glue together, along the sides $[0 , + i \infty[$, the strips that correspond to neighboring edges in the same face. The complex structure is extended naturally to $]0, + i \infty[$. It remains to extend the complex structure to the vertices of the ribbon graph and to the $n$ punctured points.
At a vertex of degree $k$ there are $2k$ right angles of strips that meet together, that is, in whole, an angle of $k \pi$. Let us place the vertex at the origin of the complex plane and put the strips on the plane one after another around the vertex (so that the 5th strip will overlap with the 1st one, the 6th one with the 2nd one, and so on). If $z$ is the coordinate on the complex plane, we introduce a local coordinate at the neighborhood of the vertex using the function $z^{2/k}$.
Finally, consider a marked point and the semi-infinite cylinder formed by the strips that surround it. Let $h$ be the height of a point in this cylinder and $\theta \in [0, 2\pi[$ its argument (the origin of the angles can be chosen arbitrarily). Then $e^{i\theta - h}$ is a local coordinate in the neighborhood of the marked point.
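As a quick verification (ours, not part of the original argument) that both charts are genuine local coordinates: the map $z \mapsto z^{2/k}$ compresses the total angle $k\pi$ at the vertex to $2\pi$, and the chart $e^{i\theta - h}$ sends the infinite end of the cylinder to the origin:

```latex
% Vertex of degree k: the 2k right angles of strips fill a total angle of k\pi.
\arg\bigl(z^{2/k}\bigr) \;=\; \tfrac{2}{k}\,\arg z \;\in\; [0, 2\pi)
  \qquad\text{for}\qquad \arg z \in [0, k\pi),
% so the glued sectors form a full disc neighborhood of the origin.
% Semi-infinite cylinder: the chart w = e^{i\theta - h} satisfies
\bigl| e^{i\theta - h} \bigr| \;=\; e^{-h} \;\longrightarrow\; 0
  \qquad\text{as}\qquad h \to +\infty,
% so the marked point corresponds to w = 0.
```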
\begin{figure}
\caption{Gluing a Riemann surface from strips. }
\label{Fig:strips}
\end{figure}
The uniqueness of the Riemann surface is proved in the following way. Consider a Strebel differential on a Riemann surface. Let us cut the Riemann surface along the nonclosed horizontal trajectories of the Strebel differential and along its vertical trajectories joining the marked points to the vertices of the ribbon graph (the zeroes of the differential). We obtain the set of strips described above. Therefore our Riemann surface is necessarily glued out of strips as in the above construction. {$\diamond$}
Denote by $B_i$ the polygon (the face of the graph) that forms the boundary of the $i$th disc domain $D_i$. (If $D_i$ is adjacent to both sides of an edge $e$, this edge appears twice in the polygon $B_i$.) Further, denote by $T_i$ the complex line tangent to $C$ at $z_i$ and by $ST_i = (T_i \setminus \{0 \})/{\mathbb R}_+$ its real spherization. (Here and below ${\mathbb R}_+$ is the set of positive real numbers.) Then there exists a canonical identification $$ B_i = ST_i. $$ Indeed, given a direction $u \in ST_i$, there is a unique vertical trajectory of $\varphi$ issuing from $z_i$ in the direction $u$. This trajectory meets the polygon $B_i$ at a unique point, and this point will be identified with $u$.
Thus Strebel's theorem allows us to define $n$ polygonal bundles ${\cal B}_i$ over ${\cal M}_{g,n} \times {\mathbb R}_+^n$ and these bundles can be identified with the circle bundles obtained by a real spherization of the complex line bundles ${\cal L}_i^*$. (Recall that the fiber of the bundle ${\cal L}_i$ is the cotangent line to $C$ at $z_i$.)
We are going to show that these polygonal bundles can be extended to ${\overline{\cal M}}_{g,n} \times {\mathbb R}_+^n$, where ${\overline{\cal M}}_{g,n}$ is the Deligne-Mumford compactification, so that the identification above is preserved.
\subsection{Two examples} \label{Ssec:examples}
To extend the bundles ${\cal B}_i$ over ${\overline{\cal M}}_{g,n}$, we need to extend to stable curves the notion of Strebel differentials. The construction is carried out in the next section. Here we just give two examples, without any proofs.
\begin{example} Consider the case of a torus with one marked point that degenerates into a sphere with one marked point and two identified points (see Figure~\ref{FigTorus}; the marked point is represented as a black dot). Fix a positive real number $p$. On every torus there exists a unique Strebel differential with residue $-(p/2\pi)^2$ at the marked point. It determines a $1$-faced embedded (ribbon) graph composed of the nonclosed horizontal trajectories. This graph is either a hexagon whose opposite edges are glued together in pairs, or a quadrilateral whose opposite edges are glued together in pairs. In the figure, the hexagonal case is represented.
Now, when the torus degenerates into a sphere, the lengths of the edges $l_1$ and $l_2$ tend to $0$. Thus, on the sphere we obtain a graph with only one edge of length $l_3 = p/2$. This edge joins the two identified points. If we put the identified points at $0$ and $\infty$, and the marked point at $1$, then the limit Strebel differential on the sphere equals $$ \varphi = -(p/2\pi)^2 \, \frac{dz^2}{z(z-1)^2}. $$ It has simple poles at the identified points $0$ and $\infty$.
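One can check the stated pole structure directly; the expansions below are our verification, not part of the text. Near the marked point $z = 1$ set $w = z - 1$; near $z = \infty$ set $u = 1/z$, so that $dz^2 = du^2/u^4$:

```latex
\varphi
  = -\Bigl(\tfrac{p}{2\pi}\Bigr)^{2}\,\frac{dw^{2}}{(1+w)\,w^{2}}
  = -\Bigl(\tfrac{p}{2\pi}\Bigr)^{2}\,(1 - w + \cdots)\,\frac{dw^{2}}{w^{2}},
  % residue -(p/2\pi)^2 at the marked point z = 1
\qquad
\varphi
  = -\Bigl(\tfrac{p}{2\pi}\Bigr)^{2}\,(1 + 2z + \cdots)\,\frac{dz^{2}}{z},
  % simple pole at z = 0
\qquad
\varphi
  = -\Bigl(\tfrac{p}{2\pi}\Bigr)^{2}\,\frac{du^{2}}{u\,(1-u)^{2}}.
  % simple pole at u = 0, i.e., at z = \infty
```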
\begin{figure}
\caption{A torus degenerating into a sphere with two identified points. }
\label{FigTorus}
\end{figure}
\end{example}
\begin{example} Now consider a sphere with four marked points that degenerates into a reducible curve consisting of two spherical components intersecting at one point (Figure~\ref{FigSphere}). Assume that the first component contains the marked points $z_1$ and $z_2$, while the second component contains the marked points $z_3$ and $z_4$. Fix four positive real numbers $p_1, p_2, p_3, p_4$. We will assume that $p_1 > p_2$ but $p_3 = p_4$ (in order to obtain two different pictures on the two components).
For any positions of the marked points on the sphere ${\mathbb C} {\rm P}^1$, there is a unique Strebel differential with residues $-(p_i/2\pi)^2$ at the marked points $z_i$. This differential determines a $4$-faced graph on ${\mathbb C} {\rm P}^1$. As the curve approaches the degeneration described above, this graph necessarily takes a particular form. Namely, it will contain a simple cycle, formed by several edges, separating the marked points $z_1$ and $z_2$ from the marked points $z_3$ and $z_4$. In other words, the faces number $1$ and $2$ become adjacent, as do the faces $3$ and $4$. When the curve degenerates, the lengths of all the edges in the above cycle tend to $0$.
On the first component we obtain a graph with 2 vertices. The vertex at the nodal point has degree $1$, while the other vertex has degree $3$. The corresponding quadratic differential has a simple pole at the nodal point and a simple zero at the other vertex (and, of course, double poles at the marked points).
On the second component we obtain a graph with a unique vertex of degree $2$ at the nodal point. The corresponding quadratic differential does not have zeros or poles (except the double poles at the marked points).
\begin{figure}
\caption{A sphere degenerating into a curve with two spherical components. }
\label{FigSphere}
\end{figure}
\end{example}
\section[Differentials on stable curves] {Simple, quadratic, and Strebel differentials on non\-smooth stable curves} \label{Sec:stable}
\subsection{Simple differentials} \label{Ssec:simple}
\begin{definition} Let $C$ be a stable curve with $n$ marked points $z_i$. A {\em simple differential} $\gamma$ on $C$ is a meromorphic differential defined on each component of $C$ and satisfying the following properties. (i)~It has at most simple poles at the marked points and at the nodal points, but no other poles. (ii)~For each nodal point, the sum of the residues of the poles of $\gamma$ on the two components meeting at this point vanishes. \end{definition}
One can readily check that simple differentials form a vector space $V$ of dimension $g+n-1$ (if $n \geq 1$) for any stable curve of arithmetic genus $g$, whether it is smooth or not.
Indeed, consider a stable curve with several irreducible components $C_i$. Suppose $C_i$ is of genus $g_i$, has $n_i$ marked points and $m_i$ nodal points. Then we have $$ n = \sum n_i, \qquad 2-2g = \sum (2 - 2g_i - m_i). $$
Now, according to the Riemann-Roch theorem, the dimension of the space of sections of a line bundle with first Chern class $c$, such that at most simple poles are allowed at $k$ fixed points, is equal to $c+k+1-g$, whenever $c+k \geq 2g-1$. In our case, on $C_i$, $c = 2g_i-2$ (the first Chern class of the cotangent line bundle) and $k = n_i + m_i$. Thus the dimension of the space of sections equals $g_i+n_i+m_i-1$. Adding these numbers for all the irreducible components $C_i$ and subtracting the total number $\frac12 \sum m_i$ of nodal points (because each nodal point gives a linear relation on the residues) we obtain $g+n-1$.
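The bookkeeping in this count is easy to mechanize. The following sketch (ours) checks the formula on two degenerate curves of the kind appearing in the examples above; the component encoding $(g_i, n_i, m_i)$ is our own convention, with a self-node counted twice in $m_i$.

```python
# Sketch (ours): verify dim V = g + n - 1 for sample stable curves.
# A component is a triple (g_i, n_i, m_i): genus, number of marked points,
# number of node branches on that component.

def arithmetic_genus(components):
    # 2 - 2g = sum(2 - 2*g_i - m_i)
    chi = sum(2 - 2*g_i - m_i for g_i, n_i, m_i in components)
    return (2 - chi) // 2

def dim_simple(components):
    # Riemann-Roch on each component gives g_i + n_i + m_i - 1;
    # each node imposes one linear relation on the residues.
    nodes = sum(m_i for _, _, m_i in components) // 2
    return sum(g_i + n_i + m_i - 1 for g_i, n_i, m_i in components) - nodes

# A torus degenerated to a sphere with two identified points:
# one rational component, one marked point, one self-node.
curve1 = [(0, 1, 2)]
# Two spheres meeting at a node, with two marked points on each.
curve2 = [(0, 2, 1), (0, 2, 1)]

for comps in (curve1, curve2):
    g = arithmetic_genus(comps)
    n = sum(n_i for _, n_i, _ in comps)
    assert dim_simple(comps) == g + n - 1
```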
Because the dimensions of the spaces $V$ are the same, these spaces form a holomorphic vector bundle ${\overline{\cal V}}$ over the space ${\overline{\cal M}}_{g,n}$. This follows immediately from algebro-geometric arguments. Indeed, denote by ${\overline {\cal C}}_{g,n}$ the universal curve over ${\overline{\cal M}}_{g,n}$. Then simple differentials form a sheaf on ${\overline {\cal C}}_{g,n}$. It is called the {\em relative dualizing sheaf} and is the sheaf of sections of a line bundle (the relative cotangent line bundle) over ${\overline {\cal C}}_{g,n}$. (See, for example,~\cite{Hartshorne}, Chapter~III, Theorem~7.11, where it is proved that the dualizing sheaf is the sheaf of sections of a line bundle for any algebraic variety that is locally a complete intersection, and an explicit construction of the sheaf is given. See also~\cite{HarMor}, Chapter~3, Section~A.) The direct image of the relative dualizing sheaf on ${\overline{\cal M}}_{g,n}$ has the property that the dimensions of its fibers are the same. Therefore the direct image is itself the sheaf of sections of a vector bundle (see~\cite{Hartshorne}, Exercise 5.8).
The fact that the spaces $V$ form a holomorphic vector bundle can also be understood more intuitively. The space of pairs $(C, \gamma)$, where $C$ is a stable curve and $\gamma$ a simple differential on it, can be given a complex structure in the following way. For any pair $(C,\gamma)$, one can calculate the integral of $\gamma$ over any closed loop that does not contain marked or nodal points. The complex structure is introduced by requiring that these integrals be meromorphic functions on the space of pairs $(C,\gamma)$. (That these integrals can have poles is shown by the following example. Consider a torus whose meridian is contracted, so that it degenerates into a sphere with two identified points. Then the integral of a simple differential over the parallel will tend to $\infty$.)
\subsection{Quadratic differentials} \label{Ssec:quadratic}
Now we repeat the above construction for quadratic differentials.
\begin{definition} Let $C$ be a stable curve with $n$ marked points $z_i$. A {\em quadratic differential} $\varphi$ on $C$ is a meromorphic quadratic differential defined on each component of $C$ and satisfying the following properties. (i)~It has at most double poles at the marked points and at the nodal points, but no other poles. (ii)~For each nodal point, the residues of the poles of $\varphi$ on the two components meeting at this point are equal. \end{definition}
\begin{remark} The residue of a quadratic differential at a pole of order at most $2$ is equal to the coefficient of $dz^2/z^2$ (for any local coordinate $z$). If the order of the pole is actually less than $2$, we let the residue be equal to $0$. \end{remark}
As above, the dimension of the space $W$ of quadratic differentials is the same for any stable curve $C$ and equals $3g-3+2n$.
Indeed, if $C_i$ is an irreducible component of $C$, that has genus $g_i$ and contains $n_i$ marked points and $m_i$ nodal points, then the dimension of the space of quadratic differentials on it is $3g_i-3 + 2n_i + 2 m_i$. Adding these numbers for all the components and subtracting the total number $\frac12 \sum m_i$ of the nodal points (because each nodal point gives a linear relation on the residues), we obtain $3g-3+2n$.
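This count can be verified mechanically in the same spirit; the sketch below is ours, with components again encoded as triples $(g_i, n_i, m_i)$ and a self-node counted twice in $m_i$.

```python
# Sketch (ours): verify dim W = 3g - 3 + 2n for sample stable curves.

def genus(components):
    # 2 - 2g = sum(2 - 2*g_i - m_i)
    chi = sum(2 - 2*g_i - m_i for g_i, n_i, m_i in components)
    return (2 - chi) // 2

def dim_quadratic(components):
    # Each component contributes 3*g_i - 3 + 2*(n_i + m_i);
    # each node removes one dimension (equality of residues).
    nodes = sum(m_i for _, _, m_i in components) // 2
    return sum(3*g_i - 3 + 2*(n_i + m_i) for g_i, n_i, m_i in components) - nodes

# Two sample stable curves: a nodal rational curve of arithmetic genus 1
# with one marked point, and two spheres meeting at a node with two marked
# points on each.
for comps in ([(0, 1, 2)], [(0, 2, 1), (0, 2, 1)]):
    g = genus(comps)
    n = sum(n_i for _, n_i, _ in comps)
    assert dim_quadratic(comps) == 3*g - 3 + 2*n
```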
Since the dimensions of the spaces $W$ are the same, they form a holomorphic vector bundle ${\overline{\cal W}}$ over ${\overline{\cal M}}_{g,n}$. This follows from the same arguments as for simple differentials. The quadratic differentials form a sheaf on the universal curve ${\overline {\cal C}}_{g,n}$: the sheaf of sections of the tensor square of the relative cotangent line bundle. The direct image of this sheaf on ${\overline{\cal M}}_{g,n}$ has the property that all its fibers are of the same dimension. Therefore it is a sheaf of sections of a holomorphic vector bundle.
\subsection{Strebel differentials} \label{Ssec:stabStreb}
Here we define Strebel differentials on stable curves.
Let $C$ be a stable curve with $n$ marked points. Suppose we are given $n$ positive real numbers $p_1, \dots, p_n$.
\begin{definition} \label{Def:unmarked} We say that an irreducible component of a stable curve $C$ is {\em marked} if it contains at least one marked point and {\em unmarked} if it contains no marked points. \end{definition}
\begin{definition} \label{Def:StabStreb} A {\em Strebel differential $\varphi$ on a stable curve $C$} is a quadratic differential on $C$ satisfying the following properties. (i)~It has double poles at the marked points, at most simple poles at the nodal points, and no other poles. (ii)~The residue of the pole at the $i$th marked point $z_i$ equals $-(p_i/2\pi)^2$. (iii)~The differential $\varphi$ vanishes identically on the unmarked components. (iv)~Let $C'$ be a marked component of $C$. Let us puncture $C'$ at the nodal points. For $z_i \in C'$, denote by $D_i$ the disc domain formed by the closed horizontal trajectories of $\varphi$ surrounding $z_i$. Then we have $$
C' = \bigcup_{i \,|\, z_i \in C'} \overline{D_i}. $$ \end{definition}
\begin{remark} Strebel differentials have at most simple poles at the nodes of $C$ (unlike generic quadratic differentials, which may have double poles there). Therefore the condition that the residues of the poles on the two components meeting at a nodal point must be equal is automatically satisfied, since both residues vanish. \end{remark}
\begin{remark} \label{Rem:stabStreb} It follows from Strebel's theorem that (once the positive numbers $p_1, \dots, p_n$ are given) there exists a unique Strebel differential $\varphi$ on any stable curve $C$. Its restriction to unmarked components vanishes. Its restriction to each marked component $C'$ is the Strebel differential on $C'$ with punctures at the nodal points. Indeed, it is easy to see that when we put the punctures back into the component $C'$, the corresponding Strebel differential will have at most simple poles at these points, because there is only a finite number of nonclosed horizontal trajectories issuing from them. \end{remark}
\begin{remark} Let $C_1$ be a smooth compact Riemann surface with $n$ marked points, and let $C_2$ be obtained by puncturing $C_1$ at a finite number of points (different from the marked points). Given a list of positive real parameters $p_1, \dots, p_n$, there is a unique Strebel differential $\varphi_1$ on $C_1$ and a unique Strebel differential $\varphi_2$ on $C_2$. At first sight, one could think that they are the same; but, in general, this is not true. Indeed, if we restrict $\varphi_1$ to $C_2$, we will see that the disc domains of $\varphi_1$ contain punctures in their interior, which is not allowed for a Strebel differential. Conversely, if we try to extend $\varphi_2$ to $C_1$ by putting back the punctured points, we will, in general, obtain simple poles at these points. Again, a Strebel differential is not allowed to have poles outside the marked points.
Thus in the condition (iv) in Definition~\ref{Def:StabStreb} above, it is important to puncture each component $C'$ at the nodal points. \end{remark}
As in the case of smooth compact Riemann surfaces, the nonclosed horizontal trajectories of a Strebel differential on a stable curve form a graph, embedded into the stable curve. More precisely, it is embedded into the union of the marked components of the stable curve. The vertices of the graph have degree $\geq 3$, except for those that lie at the nodal points, which can have any degree $\geq 1$. Its edges have natural lengths (measured, as before, with $\sqrt{|\varphi|}$). Its faces are in a one-to-one correspondence with the marked points, and the perimeter of the $i$th face equals $p_i$. As before, if we denote by $B_i$ the polygon surrounding the $i$th face of the graph, by $T_i$ the complex line tangent to the marked point $z_i$, and by $ST_i = (T_i \setminus \{ 0\})/{\mathbb R}_+$ its real spherization, we have a canonical identification $$ B_i = ST_i. $$
\subsection{Stable ribbon graphs} \label{Ssec:stabribgraphs}
In Section~\ref{Sec:compactifications} we will need a formal definition of a graph formed by the nonclosed horizontal trajectories of a Strebel differential on a stable curve. We give the definition here.
To understand the definition below one must imagine that we have contracted to a point each unmarked component of the stable curve. Thus we have obtained a graph embedded into a new (usually singular) curve.
The unmarked components of the initial stable curve form a not necessarily connected subcurve in it. Each connected component of this subcurve is contracted to a vertex of our graph. On each such vertex we will mark the arithmetic genus of the corresponding contracted component.
If a vertex $v$ of the graph lies at a node of the curve, there is no longer any natural cyclic order on the half-edges issuing from it. Instead, we will have a permutation (with several cycles) acting on these half-edges. Each cycle of this permutation corresponds to a component of the curve at the neighborhood of $v$. The cycle determines the counterclockwise cyclic order of the half-edges on the corresponding component.
\begin{definition} \label{Def:stabgraph} A {\em stable ribbon graph} is a connected graph endowed with the following structure. (i)~A non-negative integer (a {\em genus defect}) is assigned to each vertex. (ii)~A permutation acting on the set of half-edges issuing from each vertex is given.
There are two types of vertices whose genus defect cannot be equal to~$0$: first, the vertices of degree~$1$, second, the vertices of degree two such that the corresponding permutation of the two half-edges is a transposition. \end{definition}
Kontsevich (\cite{Kontsevich}, Appendix B) gives an equivalent definition of stable ribbon graphs. A stable ribbon graph is represented in Figure~\ref{Fig:stabgraph}. Its surface of embedding (that can be uniquely reconstructed from the stable ribbon graph structure) is shown in dotted lines.
\begin{figure}
\caption{A stable ribbon graph and its surface of embedding. }
\label{Fig:stabgraph}
\end{figure}
It is easy to define faces of a stable graph. Let $H$ be the set of all the half-edges of the graph, $\sigma_0$ the product of all the permutations (with disjoint supports) assigned to the vertices, and $\sigma_1$ the involution without fixed points that switches two half-edges of each edge. Then a face is a cycle of the permutation $\sigma_2 = \sigma_0^{-1} \sigma_1$. The permutations $\sigma_0$, $\sigma_1$, and $\sigma_2$ sum up all the structure of a stable ribbon graph, except the genus defect function. We will usually consider stable graphs with $n$ numbered faces.
The {\em genus} of a stable ribbon graph is the arithmetic genus of its surface of embedding, plus the sum of genus defects of all its vertices.
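For an ordinary ribbon graph, the arithmetic genus of the surface of embedding can be read off from the permutations via the Euler characteristic $V - E + F = 2 - 2g$; for a stable ribbon graph one would then add the genus defects of the vertices, and count vertices rather than cycles of $\sigma_0$. The following sketch (ours) illustrates the ordinary case; the one-vertex quadrilateral graph with opposite sides glued is an illustrative input.

```python
# Sketch (ours): genus of an ordinary ribbon graph from sigma_0 and sigma_1.

def cycles(p):
    """Decompose a permutation (given as a dict) into its cycles."""
    seen, result = set(), []
    for start in p:
        if start not in seen:
            cyc = [start]
            seen.add(start)
            while p[cyc[-1]] not in seen:
                cyc.append(p[cyc[-1]])
                seen.add(cyc[-1])
            result.append(tuple(cyc))
    return result

def graph_genus(sigma0, sigma1):
    sigma0_inv = {v: k for k, v in sigma0.items()}
    sigma2 = {h: sigma0_inv[sigma1[h]] for h in sigma1}   # faces
    V = len(cycles(sigma0))
    E = len(sigma1) // 2
    F = len(cycles(sigma2))
    return (2 - (V - E + F)) // 2                         # V - E + F = 2 - 2g

# One vertex with cyclic order (0 1 2 3); edges pair opposite half-edges:
# the square with opposite sides glued, i.e., a torus.
sigma0 = {0: 1, 1: 2, 2: 3, 3: 0}
sigma1 = {0: 2, 2: 0, 1: 3, 3: 1}
```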
In Section~\ref{Sec:compactifications} we will see that stable graphs are obtained from ordinary ribbon graphs by edge contractions.
\begin{proposition} \label{Prop:strips2} Given a stable ribbon graph with $n$ numbered faces and endowed with edge lengths, we can find a set of perimeters $p_1, \dots, p_n$ and a stable curve such that the stable ribbon graph is the graph of nonclosed horizontal trajectories of the corresponding Strebel differential. The marked components of the curve are uniquely determined. \end{proposition}
\paragraph{Proof.} To find the perimeters we simply add up the lengths of the edges surrounding each face. To construct a stable curve we carry out the same operations of strip gluings as in the proof of Proposition~\ref{Prop:strips}. This gives us the complex structure on the marked components. That it is unique is shown as in Proposition~\ref{Prop:strips}. As for the unmarked components, their arithmetic genera are given by the genus defect function of the stable ribbon graph, but their complex structures and even their topologies can be chosen arbitrarily. {$\diamond$}
\subsection{Strebel differentials form a continuous family} \label{Ssec:smoothfamily}
Here we prove the continuity of the map that assigns to a stable curve and a list of perimeters in ${\overline{\cal M}}_{g,n} \times {\mathbb R}_+^n$ the corresponding Strebel differential in the total space of the vector bundle ${\overline{\cal W}}$ of quadratic differentials over ${\overline{\cal M}}_{g,n}$. In particular, Strebel differentials with fixed perimeters form a continuous section of the vector bundle ${\overline{\cal W}}$ over ${\overline{\cal M}}_{g,n}$. This fact does not seem to be stated explicitly in the literature, although it would not surprise a specialist in Teichm\"uller spaces.
We will need a rather well-known characterization of convergence in ${\overline{\cal M}}_{g,n}$ and in the total space of the vector bundle ${\overline{\cal W}}$ of quadratic differentials.
We view a complex structure on a surface as an operator $J$ that multiplies each tangent vector by $i$. We also remind the reader that quadratic differentials have at most double poles at the marked points, at most double poles with equal residues at the nodes, and no other poles.
\begin{definition} \label{Def:deformation} A continuous map $f:C_1 \rightarrow C_2$ from a stable curve to another one is called a {\em deformation} if (i)~the preimage of any node of $C_2$ is either a node of $C_1$ or a simple loop in the smooth part of $C_1$, (ii)~$f$ is an orientation-preserving diffeomorphism outside the nodes and loops, and (iii)~$f$ sends the marked points to the marked points preserving their numbers. \end{definition}
For a summary of various properties of deformations and their relations with augmented Teichm\"uller spaces see~\cite{Bers}. See also~\cite{Abikoff} for related questions on the topology of the Teichm\"uller spaces.
\begin{proposition} \label{Prop:teichmetric} Let $(C_m, \varphi_m) \rightarrow (C, \varphi)$ be a converging sequence in ${\overline{\cal W}}$ and denote by $J_m$ and $J$ the complex structures on $C_m$ and $C$. For all sufficiently large $m$ there exists a sequence of deformations $f_m:C_m \rightarrow C$ such that:
(i)~on any compact set $K \subset (C \setminus \mbox{\rm nodes})$ the sequence of complex structures $(f_m)_* J_m$ converges uniformly to $J$;
(ii)~on any compact set $K \subset (C \setminus \mbox{\rm nodes and marked points})$ the sequence of complex-valued symmetric $2$-forms $(f_m)_* \varphi_m$ converges uniformly to $\varphi$. \end{proposition}
\paragraph{Sketch of a proof.} Consider a point $x \in {\overline{\cal M}}_{g,n}$ and the corresponding stable curve with marked points $C_x$. Let $(U, {\rm Stab} \, x)$ be a sufficiently small chart containing $x$, where $U$ is an open ball in ${\mathbb C}^{3g-3+n}$ and ${\rm Stab} \, x$ a finite group acting on $U$ and stabilizing $x$. The neighborhood of $x$ in ${\overline{\cal M}}_{g,n}$ is identified with $U/{\rm Stab} \, x$. Consider the part of the universal curve ${{\overline{\cal C}}}_U$ that lies over $U$. It is a fiber bundle over $U$ whose fibers are stable curves parameterized by the points of $U$. Even as a smooth manifold ${\overline{\cal C}}_U$ is, of course, not a direct product with $U$, because its fibers have different topologies. However, there exists a continuous function $f: {\overline{\cal C}}_U \rightarrow C_x$ with the following properties. (i)~The restriction of $f$ to any fiber $C_y$ is a deformation $C_y \rightarrow C_x$ in the sense of Definition~\ref{Def:deformation}. (ii)~The function $f$ is smooth on ${\overline{\cal C}}_U$ outside the preimages of the nodes of $C_x$. (iii)~The restriction of $f$ to $C_x$ is the identity map.
The function $f$ is a kind of universal family of deformations over the open set $U$. Such a function can be constructed, for example, in the following way. First of all, let us choose the loops to be contracted by $f$ in each fiber $C_y$. Their free homotopy types are uniquely determined by the property that we must obtain the curve $C_x$ by pinching all these loops. We choose the loops themselves to be the shortest geodesics inside the corresponding homotopy classes, with respect to the unique complete metric of curvature $-1$ on $C_y$, compatible with the conformal structure. Erasing all the loops and the nodes in each fiber we obtain a locally trivial fiber bundle over a contractible base $U$. Therefore we can trivialize it by a diffeomorphism $$ ({\overline{\cal C}}_U \setminus \mbox{nodes and loops}) \; \rightarrow \; U \times (C_x \setminus \mbox{nodes}) $$ commuting with the projections to $U$. We take $f$ to be the second component of this diffeomorphism and we extend it to the loops in every fiber $C_y$ by sending them to the corresponding nodes of $C_x$.
For $m$ big enough, $C_m$ lies in $U$ (or, more precisely, in $U/{\rm Stab} \, x$, but we can choose any lifting of $C_m$ to $U$). For a compact set $K \subset (C \setminus \mbox{nodes})$ there is, on the whole set $f^{-1}(K)$, a smooth linear operator $J_U$ acting on tangent planes to the fibers of ${\overline{\cal C}}_U$. Therefore the sequence $f_* J_m$ converges uniformly on $K$ to the complex structure $J$ of the stable curve $C_x$. This proves Assertion~(i) of the proposition.
Now consider a holomorphic section of the vector bundle ${\overline{\cal W}}_U$ over $U$. It is represented by a holomorphic section of a line bundle over the universal curve ${\overline{\cal C}}_U$, namely, of the tensor square of the relative dualizing bundle. Almost every fiber of this line bundle is naturally identified with the tensor square of the cotangent line to the corresponding stable curve at the corresponding point. The only exceptions are the fibers over the marked and the nodal points. First assume for simplicity that our sequence $(C_m, \varphi_m)$ belongs to (or, more precisely, is a restriction of) some holomorphic section of ${\overline{\cal W}}_U$. Then, exactly as before, we conclude that if $K \subset (C_x \setminus \mbox{nodes and marked points})$ is a compact set, the sequence of quadratic differentials $f_* \varphi_m$ converges uniformly on $K$ to the quadratic differential $\varphi$. In general, the sequence $(C_m, \varphi_m)$ does not belong to a holomorphic section of ${\overline{\cal W}}_U$. Then we have to consider a family of ${\rm rk} \, {\overline{\cal W}}$ holomorphic sections of ${\overline{\cal W}}_U$ over $U$, forming a basis of each of its fibers. The coordinates of the elements of the sequence $(C_m, \varphi_m)$ in the basis formed by the sections converge to the coordinates of $(C, \varphi)$. Applying to each section of the family the above argument, we conclude that the sequence $f_* \varphi_m$ converges to $\varphi$ uniformly on $K$. This proves Assertion~(ii) of the proposition. {$\diamond$}
\begin{theorem} \label{Thm:contfam} The Strebel differentials with fixed parameters $p_1, \dots, p_n$ form a continuous nonvanishing section of the vector bundle ${\overline{\cal W}}$ of quadratic differentials over the Deligne-Mumford compactification ${\overline{\cal M}}_{g,n}$. \end{theorem}
\paragraph{Proof.} Let us fix a stable curve $C$ and consider a sequence of smooth curves $C_m$ that tends to $C$ in ${\overline{\cal M}}_{g,n}$ as $m$ tends to $\infty$. By Strebel's theorem (Theorem~\ref{Thm:Strebel}) and Remark~\ref{Rem:stabStreb} there is a unique Strebel differential $\varphi_m$ on each curve $C_m$ and a unique Strebel differential $\varphi$ on $C$. We will prove that $\varphi_m$ tends to $\varphi$ in the vector bundle ${\overline{\cal W}}$, as $m$ tends to $\infty$. This is enough to prove the theorem, since smooth curves form an open dense subset of ${\overline{\cal M}}_{g,n}$.
First of all, the sequence of quadratic differentials $\varphi_m$ is bounded, therefore it has at least one limit point ${\widetilde \varphi}$. We will prove that ${\widetilde \varphi} = \varphi$, which implies that the limit point is unique and is therefore a true limit. To do that, we study the limit quadratic differential ${\widetilde \varphi}$ and prove that it has all the properties of a Strebel differential.
For brevity, the horizontal trajectories of the differentials will simply be called ``trajectories''.
{\bf 1. The nonclosed trajectories of ${\widetilde \varphi}$ have a finite total length.}
Let $x \in C$ be a regular point that is neither a zero nor a pole of ${\widetilde \varphi}$. Let $x_m \in C_m$ be a sequence of points of the curves $C_m$ that tends to $x$ in the universal curve ${\overline{\cal C}}_{g,n}$ over ${\overline{\cal M}}_{g,n}$. By moving each $x_m$ slightly inside $C_m$ we can assume that each $x_m$ belongs to a closed horizontal trajectory of $\varphi_m$, because the union of closed trajectories is dense in each curve $C_m$. Moreover, by extracting a subsequence we can assume that all these closed trajectories belong to the disc domains $D_i$ for the same $i$. Therefore each closed trajectory has the same length $p_i$. We will prove that $x$ is contained either in a closed trajectory or in a nonclosed trajectory shorter than $p_i$.
Suppose that moving along the trajectory through $x \in C$ we have covered a segment of length $l > p_i$ without encountering a nonregular point (a pole, a node, or a zero) and without passing twice through the same point. The above segment has a compact neighborhood $K$ that does not contain marked points, nodes of $C$, or zeroes of ${\widetilde \varphi}$. Using Proposition~\ref{Prop:teichmetric} we can construct a sequence of deformations $f_m:C_m \rightarrow C$ in such a way that the sequences of complex structures $(f_m)_* J_m$ and of quadratic differentials $(f_m)_* \varphi_m$ converge uniformly on $K$. Therefore, for $m$ big enough, the trajectory through $x_m$ of the quadratic differential $\varphi_m$ will also have a segment of length greater than $p_i$. This is a contradiction. Thus, if $x$ is a regular point of ${\widetilde \varphi}$, the trajectory through $x$ is either closed or nonclosed of finite length.
By compactness of $C$, a nonclosed trajectory of finite length necessarily has two endpoints in $C$. These endpoints can be zeroes of ${\widetilde \varphi}$, simple poles of ${\widetilde \varphi}$ (including possible simple poles at the nodes of $C$), or nodes of $C$ at which ${\widetilde \varphi}$ has no poles. Since the number of such points is finite and there is only a finite number of nonclosed trajectories issuing from each of them, it follows that the total number of nonclosed trajectories is finite.
{\bf 2. Each closed trajectory of ${\widetilde \varphi}$ bounds a disc with a unique marked point inside it.}
If the trajectory $\alpha$ through $x$ is closed, consider a compact tubular neighborhood $K$ of $\alpha$ that does not contain marked points, nodes of $C$, and zeroes of ${\widetilde \varphi}$. We construct a sequence of deformations $f_m:C_m \rightarrow C$ as in Proposition~\ref{Prop:teichmetric}.
For $m$ big enough $f_m^{-1}(K)$ contains a closed trajectory $\alpha_m$ of the Strebel differential $\varphi_m$. Indeed, let $l$ be a real number greater than any of the perimeters $p_i$. If we choose $m$ big enough and a point $x_m \in C_m$ close enough to $f_m^{-1}(x)$, a segment of length $l$ of the trajectory of $\varphi_m$ through $x_m$ will be entirely contained in $f_m^{-1}(K)$. But for a generic choice of $x_m$ the trajectory through $x_m$ is closed of length less than $l$. Therefore we have obtained a closed trajectory $\alpha_m$ entirely contained in $f_m^{-1}(K)$. Its homotopy type is uniquely determined by $K$. Indeed, $\alpha_m$ can neither bound a small disc inside $f_m^{-1}(K)$ (because $\varphi_m$ has no poles inside $f_m^{-1}(K)$), nor have self-intersections.
We know that this closed trajectory belongs to a disc domain $D$ of $\varphi_m$ and that the restriction of $f_m$ to $D$ is a diffeomorphism. Thus $\alpha$, just as $\alpha_m$, surrounds a disc that contains a unique marked point.
Note that we have always assumed that $x$ is not a zero of ${\widetilde \varphi}$. Therefore we still know nothing about the existence of irreducible components of $C$ on which ${\widetilde \varphi}$ vanishes identically.
{\bf 3. The poles of ${\widetilde \varphi}$ at the nodes of $C$ are at most simple.}
Consider a nodal point $x$ of $C$, and let us prove that the pole of ${\widetilde \varphi}$ at $x$ is not double, as for a generic quadratic differential, but at most simple (on both components meeting at $x$). Suppose that this is not true, and both poles of ${\widetilde \varphi}$ are double (and have the same residue, as is always the case for quadratic differentials). Then the common residue is necessarily a negative real number. Indeed, otherwise a neighborhood of $x$ entirely consists of nonclosed trajectories of ${\widetilde \varphi}$ of infinite lengths (see Figure~\ref{1}a), which contradicts {\bf 1}. If the common residue is a negative real number, then the point $x$ is surrounded (on both components) by concentric closed trajectories (Figure~\ref{1}b). Each of these trajectories must surround a unique marked point. But this would mean that $C$ is composed of $2$ spherical irreducible components, each with one marked point and one nodal point. This is impossible because the curve $C$ is stable. Thus ${\widetilde \varphi}$ cannot have double poles at nodal points.
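This dichotomy follows from the local normal form of a double pole. Near a double pole with residue $r$ one can choose a local coordinate in which $$ {\widetilde \varphi} = r\,\frac{dz^2}{z^2}, \qquad \mbox{so that on the circle } z = \rho e^{i\theta} \mbox{ we have } \frac{dz}{z} = i\,d\theta \quad \mbox{and} \quad {\widetilde \varphi} = -r\,d\theta^2. $$ Thus the circles $|z| = \rho$ are horizontal trajectories (${\widetilde \varphi} > 0$ along them) exactly when $r$ is a negative real number; for any other residue the trajectories near the pole are radial or spiral and have infinite length.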
To gain better insight into why double poles are impossible at the nodes of $C$, we have represented (Figures~\ref{2} and~\ref{3}) two families of curves with quadratic differentials, degenerating to a nodal curve $C$ on which the limit quadratic differential has a double pole at a node. All the quadratic differentials in question have only a finite number of nonclosed horizontal trajectories. However, in the first case, the limit curve $C$ is not stable, while in the second case, the quadratic differentials before the limit have cylindric domains.
\begin{figure}
\caption{Horizontal trajectories near a double pole for: (a) a residue that is not a negative real number; (b) a negative real residue. }
\label{1}
\end{figure}
\begin{figure}
\caption{Double poles at a nodal point cannot arise from a disc domain because it would mean that the curve $C$ is not stable. }
\label{2}
\end{figure}
\begin{figure}
\caption{Double poles at a nodal point cannot arise from a cylindric domain because the Strebel differentials $\varphi_m$ do not have cylindric domains. }
\label{3}
\end{figure}
{\bf 4. On components without marked points we have ${\widetilde \varphi} = 0$.}
Now consider an unmarked component $C'$ of $C$ (see Definition~\ref{Def:unmarked}). Suppose ${\widetilde \varphi}$ is not identically equal to zero on this component. Then ${\widetilde \varphi}$ has only a finite number of nonclosed trajectories on $C'$ and these are of finite lengths. But, on the other hand, ${\widetilde \varphi}$ has no closed trajectories, because each closed trajectory surrounds a marked point and $C'$ contains no marked points. This is a contradiction.
{\bf 5. The marked components are covered by the closures of the disc domains of ${\widetilde \varphi}$.}
Finally, consider the restriction of ${\widetilde \varphi}$ to a marked component $C'$ (see Definition~\ref{Def:unmarked}). It has double poles with residues $-(p_i/2\pi)^2$ at the marked points (and therefore it does not vanish). It has at most simple poles at the nodal points. Each of its closed trajectories surrounds a unique marked point and therefore belongs to a disc domain. The total length of its nonclosed trajectories is finite and therefore $C'$ is covered by the closures of the disc domains. Thus (by Strebel's theorem) ${\widetilde \varphi}$ is the unique Strebel differential on $(C' \setminus \mbox{nodes})$ with parameters $p_i$.
We have proved that ${\widetilde \varphi} = \varphi$.
This completes the proof. {
$\diamond$}
\begin{theorem} \label{Thm:cont2} The map ${\overline{\cal M}}_{g,n} \times {\mathbb R}_+^n \rightarrow {\overline{\cal W}}$ that assigns to a stable curve and a list of perimeters the corresponding Strebel differential is continuous. \end{theorem}
\paragraph{Proof.} We consider a sequence of smooth curves $C_m$ tending to a stable curve $C$, together with a sequence of $n$-tuples of positive real numbers $(p_1^{(m)}, \dots, p_n^{(m)})$ tending to an $n$-tuple $(p_1, \dots, p_n)$ of positive real numbers. Now we repeat the proof of Theorem~\ref{Thm:contfam} without modifications. {
$\diamond$}
\begin{remark} K.~Strebel (\cite{Strebel}, Theorem~23.3) proves that Strebel differentials on any connected, not necessarily compact Riemann surface (of finite type and without boundary) with $n$ marked points depend continuously on the parameters $p_1, \dots, p_n$, in the topology of uniform convergence on compact sets outside the marked points. \end{remark}
\section{The ``minimal reasonable compactification'' of ${\cal M}_{g,n}$} \label{Sec:compactifications}
Here we describe a compactification of ${\cal M}_{g,n}$ that is different from the Deligne-Mumford one. This compactification, multiplied by ${\mathbb R}_+^n$, is isomorphic (as a topological orbifold) to a natural closure of the cell decomposition of ${\cal M}_{g,n} \times {\mathbb R}_+^n$ given by Strebel differentials.
\subsection{Cell complexes}
Strebel's theorem allows one to divide the space ${\cal M}_{g,n} \times {\mathbb R}_+^n$ into cells: two Riemann surfaces endowed with perimeters $(C; p_1, \dots, p_n)$ and $(C'; p_1', \dots, p_n')$ belong to the same cell if the nonclosed horizontal trajectories of the corresponding Strebel differentials form isomorphic ribbon graphs (without taking into account the lengths of the edges). The cell corresponding to a ribbon graph $G$ with edge set $E$ is isomorphic to ${\mathbb R}_+^E/{\rm Aut}(G)$, where ${\rm Aut}(G)$ is the (finite) group of automorphisms of the ribbon graph. A cell $\Delta_1$ is a face of another cell $\Delta_2$ iff the corresponding graph $G_1$ can be obtained from the graph $G_2$ by contracting several edges. Gluing such cells together we obtain an orbifold cell complex homeomorphic to ${\cal M}_{g,n} \times {\mathbb R}_+^n$.
Now we construct a bigger cell complex whose cells correspond to stable ribbon graphs of genus $g$ with $n$ numbered faces. Let us first define the operation of edge contracting in stable ribbon graphs. A cell $\Delta_1$ of our new cell complex will be a face of another cell $\Delta_2$ iff the corresponding stable graph $G_1$ can be obtained from the stable ribbon graph $G_2$ by contracting several edges.
All the graphs considered below are stable ribbon graphs of genus $g$ with $n$ numbered faces (see Definition~\ref{Def:stabgraph}). Ordinary ribbon graphs with $n$ numbered faces and such that their vertices have degrees at least~$3$ are particular cases of stable ribbon graphs (the genus defect at all vertices being equal to~$0$).
Let $G$ be a stable ribbon graph and $e$ its edge. We suppose that $e$ does not constitute a face on its own. Recall that $\sigma_0$ is the permutation of the half-edges obtained by multiplying all the permutations assigned to vertices; $\sigma_1$ is the involution exchanging the half-edges of each edge; $\sigma_2 = \sigma_0^{-1} \sigma_1$ is the permutation whose cycles correspond to faces.
\begin{definition} The {\em contraction of the edge}\, $e$ in the stable ribbon graph $G$ gives the following stable ribbon graph.
The underlying combinatorial graph is just the underlying graph of $G$ with the edge $e$ contracted.
If the edge $e$ is not a loop (Figure~\ref{Fig:contraction}a), then the genus defect assigned to the vertex obtained by its contraction is the sum of the genus defects of the two initial vertices of $e$. If $e$ is a loop based at a vertex $v$ and the two half-edges of $e$ belong to the same cycle of the permutation $\sigma_0$ (Figure~\ref{Fig:contraction}b), then the genus defect of $v$ does not change after the contraction. If $e$ is a loop and the half-edges of $e$ belong to two different cycles of $\sigma_0$ (Figure~\ref{Fig:contraction}c), then the genus defect increases by $1$.
Finally, the new permutations $\sigma_0'$, $\sigma_1'$, $\sigma_2'$ are defined as follows. Let $h$ be a half-edge. Then the half-edge $\sigma_2'(h)$ is the first among the half-edges $\sigma_2(h)$, $\sigma_2^2(h)$, \dots that is not a half-edge of $e$. (In other words, from the point of view of a face whose boundary included the edge $e$, this edge simply got contracted.) The permutation $\sigma_1'$ is defined in the obvious way (by excluding from $\sigma_1$ the cycle corresponding to $e$). The permutation $\sigma_0'$ equals $\sigma_1' \sigma_2'^{-1}$. \end{definition}
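The permutation bookkeeping in this definition is easily mechanized. The following Python sketch (our own illustration: it tracks only the three structural permutations, not the genus defects) contracts an edge by the rule that $\sigma_2'(h)$ is the first of $\sigma_2(h), \sigma_2^2(h), \dots$ not lying on $e$, and recovers $\sigma_0' = \sigma_1' \sigma_2'^{-1}$:

```python
def compose(p, q):
    # (p . q)(h) = p[q[h]]; permutations are stored as dicts
    return {h: p[q[h]] for h in q}

def inverse(p):
    return {v: k for k, v in p.items()}

def cycles(p):
    seen, out = set(), []
    for h in p:
        if h not in seen:
            c, x = [], h
            while x not in seen:
                seen.add(x); c.append(x); x = p[x]
            out.append(tuple(c))
    return out

def contract_edge(sigma0, sigma1, e):
    """Contract the edge e = {h, h'}; returns (sigma0', sigma1', sigma2')."""
    sigma2 = compose(inverse(sigma0), sigma1)      # sigma2 = sigma0^{-1} sigma1
    keep = [h for h in sigma1 if h not in e]
    sigma2p = {}
    for h in keep:                                  # skip the half-edges of e
        x = sigma2[h]
        while x in e:
            x = sigma2[x]
        sigma2p[h] = x
    sigma1p = {h: sigma1[h] for h in keep}          # drop the cycle of e
    sigma0p = compose(sigma1p, inverse(sigma2p))    # sigma0' = sigma1' sigma2'^{-1}
    return sigma0p, sigma1p, sigma2p

# Planar "dumbbell": loops {0,1} at u and {2,3} at v, connecting edge {4,5}.
s0 = {0: 4, 4: 1, 1: 0, 2: 5, 5: 3, 3: 2}          # vertex cycles (0 4 1), (2 5 3)
s1 = {0: 1, 1: 0, 2: 3, 3: 2, 4: 5, 5: 4}
s2 = compose(inverse(s0), s1)
s0p, s1p, s2p = contract_edge(s0, s1, {4, 5})
print(len(cycles(s2)), len(cycles(s2p)))           # faces before/after: 3 3
print(len(cycles(s0p)))                            # the two vertices merge: 1
```

Contracting the connecting edge of the dumbbell merges its two vertices into one while preserving the three faces, so the Euler characteristic $V - E + F$ (and hence the genus) is unchanged, as it should be for a non-loop edge.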
\begin{figure}
\caption{Contracting an edge $e$ of $G$. We have represented the neighborhood of the edge $e$ in the involved component of the surface of embedding of $G$. }
\label{Fig:contraction}
\end{figure}
This operation of edge-contracting might look complicated, but actually it is quite natural and describes what happens to the graph of nonclosed horizontal trajectories of a Strebel differential as the length of one of its edges $e$ tends to~$0$. From the point of view of each polygon $B_i$ surrounding a face nothing special happens: if $e$ was part of $B_i$ it simply gets contracted. Simple topological considerations allow one to find what happens to the genus defect. The precise statement of this is given in Theorem~\ref{Thm:homeomorphism} below.
Note that if an edge $e$ is the unique edge that surrounds a face, then it cannot be contracted, because its length must remain equal to the perimeter $p_i$ of the corresponding face.
The two propositions below are simple combinatorial exercises.
\begin{proposition} Edge contracting is commutative; in other words, the result of contracting two edges does not depend on the order in which the contractions are performed. \end{proposition}
Thus it makes sense to talk about contracting a subset of the set of edges of a stable ribbon graph.
Consider a stable ribbon graph $G$. Let $E$ be the set of its edges, $H$ the set of its half-edges, and $\sigma_0, \sigma_1, \sigma_2$ its structural permutations. Decompose $E$ into a disjoint union $E = E_c \sqcup E_r$. We are going to describe the result of the contraction of the edges of $E_c$.
Decompose $E_c$ into the union of connected components, $E_c = E_1 \sqcup \dots \sqcup E_k$. First we introduce a structure of a stable ribbon graph on each of the $E_i$. The underlying graph is just the subgraph of $G$ with edges $E_i$. The permutation $\sigma_1^{(i)}$ is the restriction of $\sigma_1$ to the half-edges of $E_i$. The image of a half-edge $h$ under $\sigma_0^{(i)}$ is the first half-edge among $\sigma_0(h)$, $\sigma_0^2(h)$, \dots to belong to an edge of $E_i$. Finally, $\sigma_2^{(i)} = (\sigma_0^{(i)})^{-1} \sigma_1^{(i)}$. The genus defect function is the restriction of the genus defect function of $G$ to the subgraph.
Now we can introduce a structure of a stable ribbon graph on the set of remaining edges $E_r$. The underlying graph is obtained from $G$ by contracting the edges of $E_c$. The permutation $\sigma_1'$ is the restriction of $\sigma_1$ to the half-edges of $E_r$. The image of a half-edge $h$ under $\sigma_2'$ is the first half-edge among $\sigma_2(h)$, $\sigma_2^2(h)$, \dots to belong to an edge of $E_r$. Finally, $\sigma_0'= \sigma_1' \sigma_2'^{-1}$. If a vertex of the new graph is the result of the contraction of $E_i$, then its genus defect is equal to the genus of the stable ribbon graph $E_i$. The genus defects of the other vertices are the same as they were in the graph $G$.
\begin{proposition} \label{Prop:contraction} The stable ribbon graph $E_r$ described above is the result of a contraction of the edges $E_c$ in the stable ribbon graph $G$. \end{proposition}
Now, using stable ribbon graphs, we can construct a new orbifold cell complex.
\begin{definition} \label{Def:A} Denote by $A$ the following orbifold cell complex. Its cells are in a one-to-one correspondence with stable ribbon graphs with $n$ numbered faces. If $G$ is such a graph and $E$ the set of its edges, the corresponding cell $\Delta$ is isomorphic to ${\mathbb R}_+^E/{\rm Aut}(G)$, where ${\rm Aut}(G)$ is the (finite) group of automorphisms of $G$. A cell $\Delta_1$ is a boundary cell of the cell $\Delta_2$ if the corresponding stable ribbon graph $G_1$ can be obtained from the stable ribbon graph $G_2$ by contracting several edges $e_1, \dots, e_k$. The cell $\Delta_1$ is then glued to $\Delta_2$ along $e_1 = \dots = e_k =0$. \end{definition}
\begin{definition} Denote by ${\cal B}_i$ the orbifold cell complex of pairs $(G,x)$, where $G$ is a stable graph with $n$ numbered faces and $x$ is a point lying on its $i$th face $B_i$. \end{definition}
The fact that ${\cal B}_i$ indeed has a natural structure of a cell complex can be seen in the following way.
Consider a cell $\Delta$ of $A$. Its preimage in the total space of ${\cal B}_i$ (under the projection to $A$ forgetting the point $x$) is naturally subdivided into cells. Each cell is composed of the points $(G,x)$ such that $x$ lies on some given edge of the polygon $B_i$, or such that $x$ coincides with one of the vertices of $B_i$. These cells are then glued to each other in the obvious way.
\subsection{Factorizing ${\overline{\cal M}}_{g,n}$}
In Section~\ref{Ssec:stabStreb} we saw that a Strebel differential on a stable curve vanishes identically on the irreducible components that do not contain marked points. Therefore it is a good idea to contract such components.
Let $C$ be a stable curve with $n$ marked points. Consider the curve $\widetilde C$ obtained from $C$ by contracting to a point each unmarked component (i.e., an irreducible component of $C$ that does not contain marked points).
On the curve $\widetilde C$ we can define a genus defect function. It is a function with a finite support and with positive integer values, defined in the following way.
Consider the subcurve of $C$ composed of its unmarked components. Each connected component of this subcurve is contracted to a point of $\widetilde C$ and we assign to this point the arithmetic genus of the corresponding contracted component (cf. Definition~\ref{Def:stabgraph}).
\begin{definition} \label{Def:contraction} We call the curve $\widetilde C$ endowed with the genus defect function the {\em contraction} of $C$. \end{definition}
\begin{definition}\label{Def:MR} We call the {\em minimal reasonable} compactification of ${\cal M}_{g,n}$ (denoted by $K{\overline{\cal M}}_{g,n}$) the quotient of the Deligne-Mumford compactification ${\overline{\cal M}}_{g,n}$ by the following equivalence relation: two points $x,y \in {\overline{\cal M}}_{g,n}$ are equivalent if the corresponding contractions ${\widetilde C}_x$ and ${\widetilde C}_y$ are isomorphic. \end{definition}
This compactification was defined in Kontsevich's original paper~\cite{Kontsevich}. Looijenga (\cite{Looijenga95}, Lemma~3.1) shows that it is a compact Hausdorff topological orbifold. It is not known whether it can be given a natural algebraic structure.
Note that we have constructed $K{\overline{\cal M}}_{g,n}$ together with a projection $$ {\overline{\cal M}}_{g,n} \rightarrow K{\overline{\cal M}}_{g,n}. $$ This projection contracts some subvarieties of ${\overline{\cal M}}_{g,n}$ of complex codimension at least $1$. Therefore the fundamental homology class of ${\overline{\cal M}}_{g,n}$ is sent to the fundamental homology class of $K{\overline{\cal M}}_{g,n}$.
\begin{proposition} The line bundles ${\cal L}_i$ over ${\overline{\cal M}}_{g,n}$ are pull-backs of some complex line bundles over $K{\overline{\cal M}}_{g,n}$, that we will also denote by ${\cal L}_i$. \end{proposition}
\paragraph{Proof.} This is almost obvious. Indeed, if two curves $C_1$ and $C_2$ are equivalent in the sense of Definition~\ref{Def:MR}, then the cotangent lines $L_i$ to marked points on both curves are naturally identified. {
$\diamond$}
\begin{proposition} The intersection numbers of the first Chern classes $c_1({\cal L}_i)$ are the same on any compactification $X$ of ${\cal M}_{g,n}$ that can be projected on $K{\overline{\cal M}}_{g,n}$ in such a way that the line bundles ${\cal L}_i$ are obtained by pull-back from $K{\overline{\cal M}}_{g,n}$ and the fundamental class of $X$ is sent to the fundamental class of $K{\overline{\cal M}}_{g,n}$. \end{proposition}
\paragraph{Proof.} This is again obvious. Instead of calculating the intersection numbers on $X$ we can calculate them on $K{\overline{\cal M}}_{g,n}$ and pull them back on $X$. {
$\diamond$}
In particular, all the intersection numbers are the same on ${\overline{\cal M}}_{g,n}$ and $K{\overline{\cal M}}_{g,n}$.
It is not known whether the compactifications $K{\overline{\cal M}}_{g,n}$ can be endowed with the structure of singular algebraic varieties. It had been conjectured that this can be achieved using the semi-ampleness of the line bundles ${\cal L}_i$ over ${\overline{\cal M}}_{g,n}$. However, Sean Keel showed in~\cite{Keel}, Section~3 that the semi-ampleness fails for $g \geq 3$ (although in finite characteristic it holds for any $g$ and $n$, see~\cite{Keel2}).
For $g=0$, the algebraic version of the minimal reasonable compactification $K{\overline{\cal M}}_{0,n}$ was constructed by Marco Boggi~\cite{Boggi}. He described it as a solution to a moduli problem using the construction that we give below. He also gave a description of $K{\overline{\cal M}}_{0,n}$ using blow-ups of a projective space and studied the action of the symmetric group on it.
The space $K{\overline{\cal M}}_{0,n}$ was also defined and used by V.~Goryunov and S.~Lando in~\cite{GorLan}, although the authors did not know its exact interpretation as a moduli space.
Let us sum up the algebraic construction of $K{\overline{\cal M}}_{0,n}$. Consider a smooth rational curve with $n$ marked points $({\mathbb C} {\rm P}^1, x_1, \dots, x_n) \in {\cal M}_{0,n}$. Consider a degree~$1$ rational function $f_i$ on ${\mathbb C} {\rm P}^1$ having its pole at $x_i$ and such that $$ f_i(x_1) + \dots + f_i(x_{i-1}) + f_i(x_{i+1}) + \dots + f_i(x_n) = 0. $$ Such a function is unique, up to a multiplicative constant, therefore its values $$ \biggl( f_i(x_1), \dots, f_i(x_{i-1}), f_i(x_{i+1}), \dots, f_i(x_n) \biggr) $$ yield a map $F_i : {\cal M}_{0,n} \rightarrow {\mathbb C} {\rm P}^{n-3}$. Putting together such maps for all $i$, we obtain a map $F : {\cal M}_{0,n} \rightarrow ({\mathbb C} {\rm P}^{n-3})^n$. We claim that the closure of the image of $F$ in $({\mathbb C} {\rm P}^{n-3})^n$ is an algebraic model of the minimal reasonable compactification. More precisely, $F$ can be naturally extended to a holomorphic map $F: {\overline{\cal M}}_{0,n} \rightarrow ({\mathbb C} {\rm P}^{n-3})^n$ that sends two stable curves to the same point if and only if their contractions (Definition~\ref{Def:contraction}) are isomorphic.
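To make the construction concrete, here is a small numerical sketch (our own illustration: the four sample points are arbitrary, and $f_i$ is normalized as $f_i(z) = 1/(z - x_i) + c_i$, which uses up the multiplicative freedom):

```python
# Sketch of the maps F_i for n = 4 (sample points chosen arbitrarily).
x = [0.0, 1.0, 3.0, 7.0]
n = len(x)

def F_i(i):
    """Values (f_i(x_j))_{j != i} for f_i(z) = 1/(z - x_i) + c,
    with c chosen so that the values sum to zero."""
    others = [x[j] for j in range(n) if j != i]
    c = -sum(1.0 / (xj - x[i]) for xj in others) / (n - 1)
    return [1.0 / (xj - x[i]) + c for xj in others]

for i in range(n):
    v = F_i(i)
    print(i, v, sum(v))   # each sum is ~0: the zero-sum condition holds
```

Since $f_i$ is defined only up to a multiplicative constant, each value list is to be read as a point of ${\mathbb C} {\rm P}^{n-3}$; for $n=4$ this is ${\mathbb C} {\rm P}^1$.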
Let us sketch the proof of this fact. First of all, the family of functions $f_i$ can be extended to a family of functions on the compactification ${\overline{\cal M}}_{0,n}$. The function $f_i$ on a stable curve $C$ with $n$ marked points is defined as follows. It is constant on each irreducible component of $C$ that does not contain $x_i$. On the component that contains $x_i$, the function $f_i$ is of degree~$1$ and has a simple pole at $x_i$. And, as before, we have $$ f_i(x_1) + \dots + f_i(x_{i-1}) + f_i(x_{i+1}) + \dots + f_i(x_n) = 0. $$ Such a function $f_i$ is, again, unique, up to a multiplicative constant. It is easy to see that the values of the function $f_i$ at the points $x_j, j \not= i$ allow one to reconstitute the complex structure on the component of $C$ that contains $x_i$, but not on the other components.
Thus the function $F$ can be extended to ${\overline{\cal M}}_{0,n}$ and a point $F(C)$ allows one to reconstitute the complex structure of the marked components of $C$, but not that of the unmarked components.
\subsection{A homeomorphism between $K{\overline{\cal M}}_{g,n} \times {\mathbb R}_+^n$ and the cell complex of stable ribbon graphs}
Consider two stable curves $C_1$ and $C_2$ that are mapped to the same point of $K{\overline{\cal M}}_{g,n}$. Let $p_1, \dots, p_n$ be a given list of perimeters (positive real numbers). It is clear that the stable ribbon graphs formed by the nonclosed horizontal trajectories of the Strebel differentials on $C_1$ and $C_2$ are the same, including the edge lengths. We can therefore define a map $h$ from $K{\overline{\cal M}}_{g,n} \times {\mathbb R}_+^n$ to the cell complex $A$ of stable ribbon graphs with $n$ numbered faces and endowed with edge lengths (see Definition~\ref{Def:A}).
\begin{theorem}\label{Thm:homeomorphism} The map $$ h: K{\overline{\cal M}}_{g,n} \times {\mathbb R}_+^n \rightarrow A $$ is an isomorphism of topological orbifolds. The polygonal bundles ${\cal B}_i$ over $A$ are naturally identified with the real spherizations $({\cal L}_i^* \setminus \mbox{\rm zero section})/{\mathbb R}_+$ of the complex line bundles ${\cal L}_i^*$ dual to ${\cal L}_i$. \end{theorem}
This theorem was formulated by M.~Kontsevich without a proof. It follows from the main theorem (Theorem~8.6) of E.~Looijenga's paper~\cite{Looijenga95}. Here we give a different proof.
\paragraph{Proof of Theorem~\ref{Thm:homeomorphism}.} The identification of $({\cal L}_i^* \setminus \mbox{\rm zero section})/{\mathbb R}_+$ with ${\cal B}_i$ is immediate (see the discussion in the end of Section~\ref{Ssec:stabStreb}).
The bijectivity of $h$ is a reformulation of Proposition~\ref{Prop:strips2}.
The continuity of $h$ and that of $h^{-1}$ are equivalent. Indeed, both spaces $K{\overline{\cal M}}_{g,n} \times {\mathbb R}_+^n$ and $A$ have natural proper projections to ${\mathbb R}_+^n$. We know that the map $h$ is a bijection that commutes with the projections. Therefore if $h$ or its inverse is continuous, then $h$ is a homeomorphism.
Thus the main task is to prove the continuity of $h$, which we will do using Theorem~\ref{Thm:cont2} and Proposition~\ref{Prop:teichmetric}.
{\bf 1. A sequence of Strebel differentials.}
Consider a sequence of stable curves $C_m$ tending to a stable curve $C$, together with a sequence of $n$-tuples $p^{(m)} = (p_1^{(m)}, \dots, p_n^{(m)})$ of positive real numbers tending to an $n$-tuple $p = (p_1, \dots, p_n)$ of positive real numbers.
From Theorem~\ref{Thm:cont2} we know that in the vector bundle ${\overline{\cal W}}$ of quadratic differentials, the sequence of the corresponding Strebel differentials $\varphi_m$ on $C_m$ tends to the Strebel differential $\varphi$ on $C$. Moreover, according to Proposition~\ref{Prop:teichmetric}, there is a sequence of deformations $f_m:C_m \rightarrow C$ such that the sequence $(f_m)_* \varphi_m$ converges to $\varphi$, uniformly on any compact set $K \subset (C \setminus \mbox{marked points and nodes})$.
We must prove that in the cell complex $A$, the sequence of stable graphs with edge lengths corresponding to $\varphi_m$ tends to the stable graph with edge lengths corresponding to $\varphi$.
{\bf 2. For $m$ big enough, the preimage under $f_m$ of any compact subset of a disc domain of $\varphi$ lies inside the corresponding disc domain of $\varphi_m$.}
Let $z=z_i$ be one of the marked points in the curve $C$, let $D$ be the corresponding disc domain consisting of closed horizontal trajectories, and let $K \subset D$ be any compact set. Then, for $m$ big enough, $f_m^{-1}(K)$ is contained in the disc domain of $\varphi_m$ surrounding the marked point $f_m^{-1}(z)$. Indeed, consider a compact annulus $K'$ surrounding $K$. We suppose that $K'$ is composed of closed trajectories of the disc domain $D$ and that $K \cap K'= \emptyset$. The annulus $K'$ does not contain nodes of $C$, nor zeroes or poles of $\varphi$.
In the proof of Theorem~\ref{Thm:contfam}, paragraph {\bf 2}, we proved that, for $m$ big enough, $f_m^{-1}(K')$ necessarily contains a closed horizontal trajectory of $\varphi_m$ that makes exactly one turn around $f_m^{-1}(K')$. This trajectory surrounds $f_m^{-1}(K)$ entirely. Thus $f_m^{-1}(K)$ lies inside a disc domain of $\varphi_m$.
{\bf 3. Cutting the curve $C$ into pieces.}
For brevity, we call a {\em disc neighborhood} of a point $x \in C$ any neighborhood of $x$ homeomorphic to an open disc.
To every marked point $z_i$ on $C$ we assign a disc neighborhood $U_i \subset C$.
To every vertex $v$ of the stable graph of the differential $\varphi$ we assign an open set $U_v \subset C$ as follows. If $v$ is a zero of $\varphi$ at a regular point of $C$, then $U_v$ is just a disc neighborhood of $v$. If $v$ is a node of $C$ such that both irreducible components intersecting at $v$ are marked (contain marked points), then $U_v$ is the union of two disc neighborhoods of $v$ on both components. If $v$ is obtained by contracting one or more unmarked irreducible components of $C$, then $U_v$ is the union of these unmarked components and of the disc neighborhoods of all the nodes at which they meet marked components.
Finally, to every edge $e$ we assign a compact set $K_e \subset C$. It is homeomorphic to a closed disc and contains in its interior the part of $e$ that lies outside $U_v$ and $U_{v'}$, where $v$ and $v'$ are the vertices of $e$. Moreover, we suppose that $K_e$ does not intersect the vertical trajectories of $\varphi$ that join marked points to the vertices $v$ and $v'$.
All these sets are represented in Figure~\ref{Fig:pieces}.
\begin{figure}
\caption{The sets $U_i$, $U_v$, and $K_e$.}
\label{Fig:pieces}
\end{figure}
All the disc neighborhoods above can be chosen arbitrarily small.
We denote by $G = {\rm Gr}(\varphi)$ the stable ribbon graph endowed with edge lengths, assigned to the Strebel differential $\varphi$. Similarly, $G_m = {\rm Gr}(\varphi_m)$ is the stable ribbon graph with edge lengths assigned to $\varphi_m$.
{\bf 4. For $m$ big enough, the vertices of $G_m$ in $C_m$ lie inside the union of the sets $f_m^{-1}(U_v)$ over all the vertices $v$ of $G$.}
Indeed, consider the compact set $K \subset C$ obtained from $C$ by taking away all the open sets $U_i$ and $U_v$. There are no zeroes or poles of $\varphi$ nor nodes of $C$ in $K$. Therefore, for $m$ big enough, there are no vertices of $G_m$ in $f_m^{-1}(K)$. On the other hand, according to {\bf 2}, there are no vertices of $G_m$ in $f_m^{-1}(U_i)$, because $f_m^{-1}(U_i)$ belongs to the $i$th disc domain of $\varphi_m$. Thus every vertex of $G_m$ lies inside $f_m^{-1}(U_v)$ for some vertex $v$ of $G$.
Conversely, let us prove that each set $f_m^{-1}(U_v)$ contains at least one vertex of $G_m$.
Suppose $v$ is a zero of $\varphi$ of multiplicity $k$ at a regular point of $C$. Then we consider a circle surrounding $v$ and lying inside $U_v$. As we go around this circle, the horizontal trajectories of $\varphi$ make $-k$ half-turns with respect to the tangent line to the circle. Therefore the same is true for the horizontal trajectories of $\varphi_m$, for $m$ big enough. Thus $f_m^{-1}(U_v)$ contains one or several zeroes of $\varphi_m$ whose multiplicities sum to $k$.
Suppose $U_v$ contains at least one node of $C$. If $f_m^{-1}(U_v)$ also contains a node of $C_m$ there is nothing to prove, because this node is necessarily a vertex of $G_m$. Suppose that $f_m^{-1}(U_v)$ does not contain nodes. Then it is a smooth Riemann surface with holes. We must prove that $\varphi_m$ has at least one zero on this surface.
First of all, if we are given a quadratic differential without poles on a Riemann surface $S$ of genus $g$ with $d$ holes, the number of its zeroes (taking their multiplicities into account) equals $$
|\mbox{zeroes}| = 4g - 4 + 2d - 2 |\mbox{turns}|, $$
where $|\mbox{turns}|$ is the total number of turns that the horizontal trajectories make with respect to the tangent lines to the boundary circles.
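For instance, for a flat cylinder endowed with the differential $dz^2$, whose horizontal trajectories are parallel to the two boundary circles, the count gives $$ g = 0, \quad d = 2, \quad |\mbox{turns}| = 0, \qquad |\mbox{zeroes}| = 4 \cdot 0 - 4 + 2 \cdot 2 - 2 \cdot 0 = 0, $$ in agreement with the fact that $dz^2$ has no zeroes.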
Recall that $U_v$ contains several unmarked components of $C$ and several small discs surrounding nodes of marked components. As above, we draw a circle inside each such small disc, surrounding the corresponding node. If the node is a vertex of degree $k$, then, as we go around the circle, the horizontal trajectories make $-k$ half-turns with respect to the tangent lines to the circle. Therefore the same is true for $(f_m)_*\varphi_m$, for $m$ big enough. Since $f_m^{-1}(U_v)$ has either a positive genus or at least two holes, and since the number of turns is always negative, $\varphi_m$ has at least one zero in $f_m^{-1}(U_v)$.
{\bf 5. An injection from the set of edges of $G$ to the set of edges of $G_m$.}
Let $e$ be an edge of $G$. We will assign to $e$ an edge of $G_m$. Later we will see that one obtains $G$ by contracting the edges of $G_m$ that are not assigned to any edge of $G$.
A vertical trajectory of a Strebel differential usually joins two marked points (that can happen to be the same) and crosses exactly one nonclosed horizontal trajectory. (The exceptions are those vertical trajectories that join a marked point to a zero of the differential or to a node of the curve.)
Two vertical trajectories intersect the same nonclosed horizontal trajectory if and only if they join the same pair of marked points and, moreover, bound a region in the stable curve that contains no zeroes or poles of the Strebel differential and no nodes of the curve.
Consider the vertical trajectory $\alpha$ of $\varphi$ through any point $x \in K_e$. By the choice of $K_e$, we know that $\alpha$ does not end at a vertex of $G$. Therefore it joins some marked points $z_i$ and $z_j$ (it can happen that $z_i = z_j$). Denote by $\alpha_m$ the vertical trajectory of $\varphi_m$ through $f_m^{-1}(x)$. For $m$ big enough, $f_m(\alpha_m)$ follows $\alpha$ closely enough to enter the neighborhoods $U_i$ and $U_j$. Therefore it necessarily joins $z_i$ and $z_j$, just as $\alpha$ does (because $f_m^{-1}(U_i)$ and $f_m^{-1}(U_j)$ lie inside the corresponding disc domains). According to the above remark, the vertical trajectory $\alpha_m$ crosses exactly one nonclosed horizontal trajectory of $\varphi_m$, in other words, exactly one edge of $G_m$. We denote this edge by $e_m$ and assign it to the edge $e$ of $G$.
For $m$ big enough, the resulting edge does not depend on the choice of $x \in K_e$. Indeed, two vertical trajectories of $\varphi_m$ through two points of $f_m^{-1}(K_e)$ bound a region in $C_m$ that does not contain zeroes or poles of $\varphi_m$ or nodes of $C_m$. Therefore these two vertical trajectories cross the same edge of $G_m$.
Let us prove that $e \mapsto e_m$ is an injection. Consider two vertical trajectories: $\alpha_m$ through $x \in f_m^{-1}(K_e)$ and $\alpha_m'$ through $x' \in f_m^{-1}(K_{e'})$, where $e$ and $e'$ are two different edges. Let us prove that they cannot cross the same edge of $G_m$. If they crossed the same edge of $G_m$, they would join the same pair of marked points and, moreover, bound a region in $C_m$ that contains no zeroes or poles of $\varphi_m$ and no nodes of $C_m$. It is easy to see that if $\alpha_m$ and $\alpha_m'$ join the same pair of marked points and bound a region in $C_m$, then this region contains at least one set $f_m^{-1}(U_v)$ for some vertex $v$ of $G$. But, according to {\bf 4}, this set contains at least one vertex of $G_m$. Thus $\alpha_m$ and $\alpha_m'$ cross different edges of $G_m$. This proves the injectivity of the map $e \mapsto e_m$.
Each edge $e_m$ lies entirely inside $f_m^{-1}(K_e \cup U_v \cup U_{v'})$, where $v$ and $v'$ are the vertices of $e$. Indeed, $e_m$ is entirely contained inside the union of sets $f_m^{-1}(K_*)$ and $f_m^{-1}(U_*)$ over all edges and vertices of $G$, because the complement of this union is covered by disc domains of $\varphi_m$. On the other hand, $e_m$ does not meet $f_m^{-1}(K_{e'})$ for any edge $e' \not= e$, because otherwise it would cross a vertical trajectory of $\varphi_m$ that it should not cross. Thus $e_m$ is contained in $f_m^{-1}(K_e \cup U_v \cup U_{v'})$, because only $f_m^{-1}(U_v)$ and $f_m^{-1}(U_{v'})$ have common points with $f_m^{-1}(K_e)$.
For the same reason, an edge of $G_m$ that does not correspond to an edge of $G$ lies entirely in $f_m^{-1}(U_v)$ for some vertex $v$.
{\bf 7. The difference of lengths between an edge $e$ of $G$ and the corresponding edge $e_m$ of $G_m$ is less than $\varepsilon$. The other edges of $G_m$ are shorter than $\varepsilon$.}
At present, we have proved the following. To each edge $e$ of $G$ joining vertices $v$ and $v'$ we can assign an edge $e_m$ of $G_m$ joining some points inside $f_m^{-1}(U_v)$ and $f_m^{-1}(U_{v'})$. The image of $e_m$ under $f_m$ is contained inside the union of $K_e$, $U_v$, and $U_{v'}$. An edge of $G_m$ that does not correspond to an edge of $G$ lies inside $f_m^{-1}(U_v)$ for some vertex $v$. All this, of course, is only true starting from some $m$.
Consider an edge $e$ and let $l_e$ be its length. If we choose the disc neighborhoods $U_v$ of the vertices small enough, the length of the part of $e$ that lies outside the neighborhoods $U_v$ is greater than $l_e - \varepsilon$. Consider the line $f_m(e_m)$, more precisely, its part lying in $K_e$. By choosing $K_e$ small enough and $m$ big enough, we see that the length of $f_m(e_m) \cap K_e$ measured with the differential $(f_m)_* \varphi_m$ differs from the length of $e \cap K_e$ measured with $\varphi$ by less than $\varepsilon$. Thus the length of $e_m$ is greater than $l_e - 2 \varepsilon$.
But the total sum of lengths of the edges of $G_m$ is fixed: it is equal to the sum of the perimeters $\sum_i p_i^{(m)}$, which is arbitrarily close to $\sum_i p_i$. Therefore, for $m$ big enough, the length of each edge $e_m$ is arbitrarily close to that of $e$, while the lengths of the edges of $G_m$ that do not correspond to edges of $G$ are arbitrarily small.
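In symbols (a sketch; here $E$ and $E_m$ denote the edge sets of $G$ and $G_m$, and we use the convention above that the total edge length equals the total perimeter): since each assigned edge satisfies $l_{e_m} > l_e - 2\varepsilon$,
$$
\sum_{\substack{e' \in E_m \\ e' \neq e_m \ \forall e \in E}} l_{e'}
\;=\; \sum_i p_i^{(m)} \;-\; \sum_{e \in E} l_{e_m}
\;<\; \Big(\sum_i p_i^{(m)} - \sum_i p_i\Big) \;+\; 2\varepsilon\,|E|.
$$
For $m$ big enough the bracketed difference is smaller than $\varepsilon$, so the total length of the edges of $G_m$ not assigned to edges of $G$ is at most $(2|E|+1)\varepsilon$, and, for the same reason, each $l_{e_m}$ exceeds $l_e$ by at most this amount.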
{\bf 8. The genus defect function.}
The genus defect assigned to a vertex $v$ of $G$ is equal to the arithmetic genus of the open set $U_v$, which is actually a singular noncompact complex curve. Using Proposition~\ref{Prop:contraction}, it is easy to see that this genus defect is indeed obtained by contracting the edges of $G_m$ that lie in $f_m^{-1}(U_v)$.
{\bf 9. Conclusion.}
Thus, for $m$ big enough, the stable ribbon graph $G$ is obtained from $G_m$ by contracting some edges of length less than $\varepsilon$ and by changing the lengths of the other edges by less than $\varepsilon$. This means that the graph $G_m$ lies in an $\varepsilon$-neighborhood of $G$ in the cell complex $A$. Since $\varepsilon$ can be chosen arbitrarily small, the sequence $G_m$ tends to $G$ in $A$. {
$\diamond$}
\subsection{Looijenga's results}
This section is a very brief review of Looijenga's paper~\cite{Looijenga95}. We follow, as closely as possible, the notation introduced there.
Looijenga's main result is the continuity of a map similar to the map $h^{-1}$ in our Theorem~\ref{Thm:homeomorphism}. In other words, he proves that when one continuously changes the stable ribbon graph with edge lengths, the corresponding Riemann surface glued from strips also changes continuously. The main problem is that if we consider a sequence of ordinary ribbon graphs $G_m$ converging in $A$, the corresponding sequence of smooth curves does not necessarily converge in the Deligne-Mumford compactification ${\overline{\cal M}}_{g,n}$, but only in the quotient $K{\overline{\cal M}}_{g,n}$. However, there is no simple criterion of convergence in $K{\overline{\cal M}}_{g,n}$. To solve this problem, Looijenga constructs a more complicated cell complex, that turns out to be homeomorphic to ${\overline{\cal M}}_{g,n} \times {\mathbb R}_+^n$. This is done roughly as follows. Consider a stable ribbon graph $G$ and normalize its edge lengths (by multiplying them by a constant) so that their sum equals $1$. Choose a subset $E_1$ of the set of edges $E$, and suppose the lengths of the edges of $E_1$ tend to $0$. Instead of forgetting everything about the lengths of the edges of $E_1$ (as we do when we contract them to obtain a new stable ribbon graph), we normalize their lengths anew, so that their sum equals $1$. Among the edges of $E_1$ there can now be a new subset of edges $E_2$ whose lengths still tend to $0$. We normalize them once again, so that their sum equals $1$. And so on. Using all this information, we can construct a cell complex with a projection onto $A$, and such that a converging sequence in this complex induces a converging sequence of stable curves.
Looijenga's construction is a little more general than what is needed for Kontsevich's proof, because he allows the perimeters $p_1, \dots, p_n$ to vanish (but under the above normalization their sum remains equal to $2$, so at least one perimeter must remain positive) and studies what happens to Strebel differentials and stable ribbon graphs in that case. Therefore his definitions of stable graphs and of the minimal reasonable compactification are slightly different from ours.
Suppose the set of marked points $\{ 1, \dots, n \}$ is divided into two disjoint parts: $V \sqcup Q$, $Q \not= \emptyset$. Here $V$ is the set of points such that the corresponding perimeters vanish, while $Q$ is the set of points such that the corresponding perimeters do not vanish.
\paragraph{Stable ribbon graphs.} The natural modification of the notion of Strebel differentials to this case is to consider the Strebel differentials on our surface punctured at the points of the set $V$ (and with double poles with given residues at the points of $Q$). It is easy to see that if one puts the points of $V$ back into the surface, the Strebel differential will have at most simple poles at these points. Therefore they will be vertices of the graph of nonclosed horizontal trajectories. Thus the new stable graphs, instead of having $n$ numbered faces, have $n$ numbered faces or vertices. We do not give the precise definition of a stable ribbon graph in this setting; it is a formalization of the properties of the graph of nonclosed horizontal trajectories of a Strebel differential.
\paragraph{Minimal reasonable compactifications.} In our definition of the minimal reasonable compactification, two points $x,y \in {\overline{\cal M}}_{g,n}$ are identified if the stable curves $C_x$ and $C_y$ give the same curve when one contracts all of their components that contain no marked points. Analogously, $K_Q{\overline{\cal M}}_{g,n}$ is the quotient of ${\overline{\cal M}}_{g,n}$ in which two points $x,y \in {\overline{\cal M}}_{g,n}$ are identified if the stable curves $C_x$ and $C_y$ give the same curve when one contracts all of their components that contain no marked points {\em of the set $Q$}.
\paragraph{Teichm\"uller spaces.} Looijenga works with Teichm\"uller spaces rather than moduli spaces. The advantage of this approach is that the spaces considered are not orbifolds, but ordinary topological spaces. However, one then has to work with topological spaces that are not locally compact.
Let ${\cal T}_{g,n}$ be the Teichm\"uller space of Riemann surfaces of genus $g$ with $n$ marked points. Its quotient by the action of the mapping class group $\Gamma$ is the moduli space ${\cal M}_{g,n}$. \, Looijenga uses an augmented Teichm\"uller space ${\widehat{\cal T}}_{g,n}$ constructed by Harvey~\cite{Harvey}. The space ${\widehat{\cal T}}_{g,n}$ is endowed with a proper action of the mapping class group $\Gamma$ and there is a natural $\Gamma$-invariant surjective projection from ${\widehat{\cal T}}_{g,n}$ onto the Deligne-Mumford compactification ${\overline{\cal M}}_{g,n}$.
The space ${\widehat{\cal T}}_{g,n}$ is constructed as follows. Let $C$ be a smooth genus $g$ Riemann surface with $n$ punctures. We can choose $3g-3+n$ simple loops in $C$ in such a way that they cut $C$ into $2g-2+n$ ``pants'' ($3$-holed spheres). In the free homotopy class of each loop we can choose the shortest geodesic with respect to the unique complete metric of curvature $-1$ on $C$, compatible with the conformal structure. To each geodesic we can assign its length $l \in {\mathbb R}_+$ and the angle $\theta \in {\mathbb R}$, with respect to some chosen gluing, at which the two pants adjacent to it are glued to each other. These lengths and angles are called the {\em Fenchel-Nielsen coordinates} in the Teichm\"uller space. It is well-known that they determine a real analytic diffeomorphism of the Teichm\"uller space onto the open octant ${\mathbb R}_+^{3g-3+n} \times {\mathbb R}^{3g-3+n}$ (see, for example,~\cite{Abikoff}). Now we simply add to the octant the boundary hyperplanes and endow the obtained closed octant with the usual topology. This corresponds to pinching the geodesics, but retaining the angles at which the adjacent pants were glued to each other. This operation can be carried out for all possible choices of $3g-3+n$ geodesics, and the points we adjoin to the Teichm\"uller space together form the augmented Teichm\"uller space ${\widehat{\cal T}}_{g,n}$. For a smooth surface $S$ with $n$ marked points we define a {\em stable complex structure} to be a complex structure $J$ defined on $S \setminus L$, where $L \subset (S \setminus \mbox{marked points})$ is a finite set of simple loops, such that pinching the loops we obtain a stable curve. Then, as a set, ${\widehat{\cal T}}_{g,n}$ can be identified with the set of all stable complex structures on a given smooth surface $S$ with $n$ marked points, up to diffeomorphisms homotopic to the identity, relative to the marked points.
Looijenga uses the following criterion of convergence on ${\widehat{\cal T}}_{g,n}$. Let $J_m$ be a sequence of complex structures on a smooth surface $S$ with $n$ marked points and let $J$ be a stable complex structure on $S \setminus L$. If $J_m$ converges to $J$ uniformly on every compact set $K \subset (S \setminus L)$, then the sequence of curves $(S, J_m)$ converges to $(S, J)$ in the space ${\widehat{\cal T}}_{g,n}$.
Starting from ${\widehat{\cal T}}_{g,n}$, one can construct the analog of minimal reasonable compactifications for Teichm\"uller spaces. Indeed, there is a $\Gamma$-invariant map ${\widehat{\cal T}}_{g,n} \rightarrow K_Q{\overline{\cal M}}_{g,n}$ for any subset $Q$ of the set of marked points. We denote by $K_Q {\cal T}_{g,n}$ the space obtained from ${\widehat{\cal T}}_{g,n}$ by contracting to one point each connected component of the preimage of each point under this map. This space is still endowed with an action of $\Gamma$, although it is not proper any longer.
\paragraph{The simplicial complex of stable graphs.} Let $S$ be a fixed surface with $n$ marked points. Consider the following infinite simplicial complex $A_{\cal T}$.
Consider the set of homotopy classes (relatively to the marked points) of simple non-oriented arcs in $S$ joining two (possibly coinciding) marked points and avoiding the other marked points. The homotopy class is called trivial if the corresponding arc is contractible in $S$ punctured at the marked points.
A vertex of $A_{\cal T}$ is a nontrivial homotopy class as above. A set of homotopy classes forms a simplex if the classes can be realized by arcs that do not intersect (except at their endpoints).
There is a natural action of the mapping class group $\Gamma$ on $A_{\cal T}$. The quotient of $A_{\cal T}$ by this action is a finite orbifold simplicial complex. One can prove that the quotient complex $A_{\cal T}/\Gamma$ is isomorphic to the complex $A$ defined by stable ribbon graphs. (If a stable graph is just a ribbon graph with $n$ numbered faces and with degrees of vertices $\geq 3$, then the corresponding set of arcs is obtained by considering the dual graph: joining by arcs the centers of adjacent faces. When we contract an edge in the stable ribbon graph we must erase the corresponding arc.)
Thus there are two equivalent ways of defining the same simplicial complex $A$. The definition with isotopy classes of arcs is certainly more elegant, but stable ribbon graphs are needed anyway to make the connection with Strebel differentials.
\paragraph{A quotient of \, ${\widehat{\cal T}}_{g,n} \times \mbox{ simplex}$.}
Finally, let $\Delta_n$ be the standard $n$-simplex. We consider the topological space $|K_{\bullet} {\cal T}|$ obtained from ${\widehat{\cal T}}_{g,n} \times \Delta_n$ by the following factorization. Consider a point $x$ in $\Delta_n$ and let $Q$ be the set of its nonzero coordinates (a subset of $\{ 1, \dots, n\}$). Then the ``layer'' ${\widehat{\cal T}}_{g,n} \times \{ x \}$ is factorized so as to obtain $K_Q {\cal T} \times \{ x\}$.
\begin{theorem} {\rm \cite{Looijenga95}} \label{Thm:looijenga} There is a natural bijective continuous map from the complex
$A_{\cal T}$ to $|K_{\bullet} {\cal T}|$. It is $\Gamma$-invariant and commutes with the projections of both spaces on the simplex $\Delta_n$. \end{theorem}
The above mapping is not a homeomorphism, but if we quotient both spaces by the action of $\Gamma$ it becomes a homeomorphism, because a continuous bijection from a compact space to a Hausdorff space is necessarily a homeomorphism. If, in both spaces, we take the preimage of the interior of the simplex $\Delta_n$ we immediately obtain Theorem~\ref{Thm:homeomorphism}.
Looijenga proves Theorem~\ref{Thm:looijenga} by constructing an explicit trivialization of families of Riemann surfaces as a ribbon graph tends to a stable ribbon graph and using the convergence criterion that we formulated in the paragraph on Teichm\"uller spaces.
\section{The Chern classes $c_1({\cal L}_i)$} \label{Sec:Chern}
Here we recall Kontsevich's expression for the first Chern classes $c_1({\cal L}_i)$. Theorem~\ref{Thm:homeomorphism} allows us to work on the cell complex $A$.\, Kontsevich's expressions are cellwise smooth continuous differential forms, and we explain the framework in which such forms can be used.
\subsection{A connection on the bundles ${\cal B}_i$} \label{Ssec:conncurv}
Once we have found a homeomorphism $h$ between $K{\overline{\cal M}}_{g,n} \times {\mathbb R}_+^n$ and the cell complex $A$, we will no longer use the smooth structure of $K{\overline{\cal M}}_{g,n}$ (defined outside the singularities). Instead, we use the natural piecewise smooth structure of the cell complex $A$. The relation between the two is rather delicate and we won't discuss it here.
Thus we have $n$ polygonal bundles ${\cal B}_i$ over the cell complex $A$ and we want to find their first Chern classes.
(We remind the reader how to define the first Chern class of a topological oriented circle bundle over any topological space $X$ homotopically equivalent to a cell complex. There exists a continuous map $f$ from $X$ to the infinite projective space ${\mathbb C} {\rm P}^{\infty}$ such that the circle bundle over $X$ is isomorphic to the pull-back under $f$ of the canonical circle bundle over ${\mathbb C} {\rm P}^{\infty}$. The first Chern class of the bundle is the pull-back under $f$ of the natural $2$-cohomology class of ${\mathbb C} {\rm P}^{\infty}$. It is an element of $H^2(X, {\mathbb Z})/\mbox{torsion}$.)
Consider one of the polygonal bundles ${\cal B}= {\cal B}_i$. Kontsevich constructs an explicit $1$-form $\alpha$ on each cell of the total space of ${\cal B}$, claiming that $d \alpha$ represents the first Chern class of the line bundle ${\cal L}_i$ over ${\overline{\cal M}}_{g,n} \times {\mathbb R}_+^n$ (see~\cite{Kontsevich}, Lemma~2.1). The $1$-form in question is the following.
Let $p$ be the perimeter of the polygon $B$, $k$ its number of vertices, and $$ 0 \leq \phi_1 < \dots < \phi_k < p $$ the distances from the distinguished point of the polygon to its vertices (as we go around the polygon counterclockwise). Moreover, denote by $l_i$, $1 \leq i \leq k$, the length of the edge that follows the $i$th vertex. Then we have $$ \alpha = \sum_{i=1}^k \frac{l_i}p \; d \! \left( \frac{\phi_i}p \right) \, . $$
\subsection{Differential geometry on polytopal complexes}
Here we introduce a framework for working with cellwise smooth differential forms on cell complexes. It has appeared, for example, in Sullivan's work~\cite{Sullivan}, but we give a complete exposition here, adapted to our needs.
The results of this section can be considered as a far-reaching generalization of the fact that the Newton-Leibniz formula $\int_a^b f'(t) dt = f(b) - f(a)$ holds not only for differentiable functions $f$, but also for continuous piecewise differentiable ones.
First we define polytopal complexes, which are simply spaces glued from affine polytopes.
\begin{definition} A {\em polytope} in a real vector space is an intersection of a finite number of open or closed half-spaces such that its interior is non-empty. Replacing, in the above intersection, some of the closed half-spaces by their boundary hyperplanes, we obtain a {\em face} of the polytope. \end{definition}
Thus a polytope is always convex, but not necessarily closed or bounded. A face is a subset of the polytope.
\begin{definition} A {\em polytopal complex} is a finite set $X$ of polytopes in real vector spaces, together with gluing functions satisfying the following conditions. (i)~Each gluing function is an affine map that identifies a polytope $P_1 \in X$ with a face of another polytope $P_2 \in X$. (For brevity, we will say that $P_1$ is a face of $P_2$.) (ii)~If $P_1$ is a face of $P_2$, which is a face of $P_3$, then $P_1$ is a face of $P_3$, and the corresponding gluing functions form a commutative diagram. (iii)~If $P_1 \in X$ is identified with a face of $P_2 \in X$, no other polytope $P_1' \in X$ can be identified with the same face of $P_2$. \end{definition}
Now we define differential forms on polytopal complexes. A differential form on a polytope is simply a differential form with smooth coefficients defined in some neighborhood of the polytope in the ambient vector space.
\begin{definition} \label{Def:forms} A {\em differential $k$-form} on a polytopal complex is a set of differential $k$-forms defined on all the polytopes such that restricting the $k$-form to a face of a polytope coincides with the $k$-form on the face. \end{definition}
\begin{example} Consider two squares lying in the half-planes $x \geq 0$ and $x \leq 0$ and having a common side on the $y$ axis. They form a polytopal complex. A differential $1$-form on this complex can be given, for example, by $dx+ dy$ in the right-hand square, $-2dx+dy$ in the left-hand square, and $dy$ on their common edge. Moreover, we could have added to our complex a third square (or another polygon) such that the three of them would share a common edge. The $1$-form can then be extended to this new polygon. \end{example}
This example shows that $k$-forms in two adjacent polytopes sharing a common face of dimension $\geq k$ are not independent: they must coincide on the common face.
\begin{definition} The exterior product and the differential $d$ of differential forms on polytopal complexes are defined polytope-wise. \end{definition}
It is obvious that we have $$ d^2\alpha = 0 \quad \mbox{and} \quad d(\alpha \wedge \beta) = (d \alpha) \wedge \beta + (-1)^{\deg\alpha}\alpha \wedge (d \beta), $$ because these identities are true on each polytope. Therefore each polytopal complex possesses a de Rham complex and the de Rham cohomology forms an algebra.
\begin{proposition} The de Rham cohomology groups of a polytopal complex $X$ are canonically identified, as real vector spaces, with its usual cohomology groups over ${\mathbb R}$. \end{proposition}
\paragraph{Proof.} The de Rham complex can be considered as a complex of sheaves on the polytopal complex $X$. It suffices to prove that it is a flasque resolution of the constant sheaf ${\mathbb R}$ on $X$. In other words, we must prove that locally each closed differential form is exact (except for the constant functions considered as $0$-forms). Consider a point $x \in X$ and a closed $k$-form $\alpha$ defined in a sufficiently small neighborhood $U$ of $x$. We will construct a $(k-1)$-form $\beta$ in $U$, such that $d\beta = \alpha$. The value of $\beta$ on $k-1$ vectors tangent to one of the polytopes is obtained by the following standard procedure. We construct on the $k-1$ vectors a small parallelepiped $P$ that fits entirely into the polytope. Then we consider the cone with vertex $x$ and with base $P$. The integral of $\beta$ over $P$ is, by definition, equal to the integral of $\alpha$ over the cone. By letting the sides of $P$ tend to $0$ we find the value of $\beta$ on the $k-1$ vectors. It is obvious that $\beta$ is a $(k-1)$-form on the polytopal complex in the neighborhood of $x$ (in the sense of Definition~\ref{Def:forms}). It is easy to check that if $d \alpha= 0$, then $d\beta = \alpha$. {
$\diamond$}
Now the main task is to prove that the Stokes formula is still true for differential forms on polytopal complexes.
\begin{definition} A {\em $k$-piece} in a polytopal complex $X$ is an affine map from a compact $k$-dimensional polytope to a polytope of $X$. A {\em $k$-chain} in a polytopal complex $X$ is a finite linear combination $C$ of $k$-pieces with real coefficients. The {\em boundary} of a chain and the integral of a $k$-form over a $k$-chain are defined in the obvious way. \end{definition}
\begin{proposition} {\bf (Stokes formula)} Let $X$ be a polytopal complex, $C$ a $k$-chain in $X$, and $\alpha$ a $(k-1)$-form on $X$. Then $$ \int_C d\alpha = \int_{{\partial} C} \alpha. $$ \end{proposition}
\paragraph{Proof.} The formula is obvious if $C$ is composed of a unique $k$-piece, because in that case the piece is contained in a unique polytope of $X$. In the general case the formula is obtained by summing over the pieces of $C$. {
$\diamond$}
\begin{proposition} The algebra structure of the de Rham cohomology of a polytopal complex $X$ (given by the multiplication of forms) coincides with the usual algebra structure of the cohomology of $X$. \end{proposition}
\paragraph{Proof.} Recall the usual definition of the product in the space of cohomologies (see~\cite{Masey}, chapter XIII). Let $X$ be a polytopal complex and consider the polytopal complex $X \times X$ with the two projections, $p_1$ and $p_2$, on $X$.\, For $u,v \in H^*(X, {\mathbb R})$ one defines $u \otimes v \in H^*(X \times X, {\mathbb R})$ by the formula $$ (u \otimes v) (a \times b) = (-1)^{\deg v \deg a} u(a)v(b), $$ where $a$ and $b$ are two cycles in $X$. The product cycles $a \times b$ span the whole homology group of $X \times X$, therefore the above formula defines the class $u \otimes v$ unambiguously. It is clear that if differential forms $\alpha$ and $\beta$ on $X$ represent the classes $u$ and $v$, then the form $p_1^*\alpha \wedge p_2^*\beta$ represents the class $u \otimes v$. Now, the product $uv$ is defined by taking the restriction of $u \otimes v$ to the diagonal of $X \times X$. Thus it is represented by $\alpha \wedge \beta$. {
$\diamond$}
Now we will consider a polytopal equivalent of a circle bundle and prove that its first Chern classes can be expressed as the ``curvature'' of a cellwise smooth ``connection''.
\begin{definition} A {\em morphism} of polytopal complexes $F:X_1 \rightarrow X_2$ is a set $F$ of affine maps $f:P_1 \rightarrow P_2$, where $P_1$ is a polytope of $X_1$ and $P_2$ a polytope of $X_2$. This set must satisfy the following natural conditions. At least one map should be defined on every polytope of $X_1$. For every map $f \in F$ its restrictions to the faces of $P_1$ must belong to $F$. If the image of $P_1$ under $f$ belongs to a face of $P_2$, the map from $P_1$ to this face should also belong to $F$. If a point of $P_1$ has two different images under maps of $F$, these images should be identified by gluing functions of the complex $X_2$. \end{definition}
Note that the image of each polytope under a morphism lies in a unique polytope of the target complex.
Let $F:X \rightarrow Y$ be a morphism of polytopal complexes such that the preimage of each point of $Y$ is homeomorphic to a circle. (Each such circle is naturally subdivided into $0$-cells and $1$-cells, and we do not require that these subdivisions be the same for different fibers.) Suppose that the circle bundle thus obtained is oriented. Let $\alpha$ be a $1$-form on the polytopal complex $X$ such that its integral over each fiber of $F$ equals $1$ and such that $d\alpha$ is a pull-back under $F$ of a $2$-form $\omega$ on $Y$. The Stokes formula allows one to prove that $\omega$ represents the first Chern class of the bundle. More precisely:
\begin{proposition} \label{Prop:Chern} If $S$ is a polytopal complex homeomorphic to a compact $2$-dimensional manifold without boundary, and $G:S \rightarrow Y$ a morphism of complexes, then $\int_S G^*\omega$ is equal to the first Chern class of the pull-back to $S$ of the circle bundle over $Y$. \end{proposition}
\paragraph{Proof.} We can assume that $S$ is connected. Denote by $G^*X$ the pull-back to $S$ of the bundle $X$. Denote by $a$ the corresponding first Chern class. One can easily construct a section of $G^*X$ over the surface $S$ punctured at one point. Over the punctured point the section will wind $a$ times around the fiber. Such a section is a sub-complex of $G^*X$, homeomorphic to a $2$-dimensional surface with boundary. The integral of $\alpha$ over the boundary equals $a$. Thus, according to the Stokes formula, the integral of $d\alpha$ over the whole section also equals $a$. But the integral of $d\alpha$ over the section equals the integral of $\omega$ over $S$. {
$\diamond$}
\subsection{Back to the first Chern classes of ${\cal L}_i$}
The complex $A$ of stable ribbon graphs and the total space of the polygonal bundle ${\cal B}$ are obviously polytopal complexes. The projection ${\cal B} \rightarrow A$ is a morphism of complexes as in Proposition~\ref{Prop:Chern}.
It is straightforward to check that the $1$-form $\alpha$ defined in Section~\ref{Ssec:conncurv} is a $1$-form on the total space of ${\cal B}$ in the sense of Definition~\ref{Def:forms}.
Thus it remains to check that $\alpha$ satisfies the conditions of Proposition~\ref{Prop:Chern}.
\begin{proposition}{\rm \cite{Kontsevich}}
(i)~The integral of $\alpha$ over any fiber of ${\cal B}$ equals $-1$.
(ii)~The $2$-form $d \alpha$ is the lifting of a $2$-form $\omega$ from the base $A$. \end{proposition}
\paragraph{Proof.} \
(i)~As we go around the fiber, the distinguished point goes around the polygon $B$ counterclockwise. The coefficients $l_i/p$ remain constant, while each $\phi_i$ decreases from its initial value to $0$ and then from $p$ back to its initial value. Thus the integral over the fiber of each $d (\phi_i/p)$ equals $-1$ and the sum of the coefficients $l_i/p$ equals $1$.
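In a displayed form: the coefficients $l_i/p$ are constant along the fiber, so
$$
\int_{\mathrm{fiber}} \alpha \;=\; \sum_{i=1}^k \frac{l_i}p \int_{\mathrm{fiber}} d\!\left( \frac{\phi_i}p \right) \;=\; \sum_{i=1}^k \frac{l_i}p \cdot (-1) \;=\; -1 .
$$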
(ii)~A simple calculation gives $$ \omega = d\alpha = \sum_{1 \leq i < j \leq k-1} d \! \left( \frac{l_i}p \right) \wedge d \! \left( \frac{l_j}p \right) $$ on each cell. This $2$-form depends only on the lengths $l_i$ but not on the $\phi_i$s. Therefore it is a lifting of a $2$-form from the base $A$. {
$\diamond$}
Thus $\omega$ is a $2$-form on the polytopal complex $A$ that represents minus the first Chern class of the bundle ${\cal B}$. To calculate the intersection numbers of these Chern classes one can multiply the $2$-forms $\omega$ and integrate them over the cells of highest dimension.
This finishes the proof of Theorem~\ref{Thm:int=int}.
\end{document} |
\begin{document}
\title[Extrapolation in variable Lebesgue spaces] {Extrapolation and weighted norm inequalities in the variable Lebesgue spaces}
\author{David Cruz-Uribe, SFO} \address{Department of Mathematics, Trinity College} \email{[email protected]}
\author{Li-An Daniel Wang} \address{Department of Mathematics, Trinity College} \email{[email protected]}
\thanks{Both authors are supported by the Stewart-Dorwart faculty
development fund at Trinity College, and the first is also
supported by NSF grant 1362425.
The authors would like to thank J.M.~Martell for an enlightening
conversation on limited range extrapolation.}
\subjclass[2010]{42B25, 42B35}
\keywords{variable Lebesgue spaces, weights, Muckenhoupt weights,
maximal operator, singular integrals, fractional integrals,
Rubio de Francia extrapolation}
\date{August 13, 2014}
\begin{abstract} We extend the theory of Rubio de Francia extrapolation, including off-diagonal, limited range and $A_\infty$ extrapolation, to the weighted variable Lebesgue spaces $L^{p(\cdot)}(w)$. As a consequence we are able to show that a number of different operators from harmonic analysis are bounded on these spaces. The proofs of our extrapolation results are developed in a way that outlines a general approach to proving extrapolation theorems on other Banach function spaces. \end{abstract}
\maketitle
\section{Introduction}
The variable Lebesgue spaces $L^{p(\cdot)}$ are a generalization of the classical Lebesgue spaces, obtained by replacing the constant exponent $p$ with an exponent function ${p(\cdot)}$. They are Banach function spaces with the norm
\begin{equation} \label{eqn:defn-norm}
\|f\|_{p(\cdot)} = \|f\|_{L^{p(\cdot)}} = \inf \left\{ \lambda > 0 : \int_{\mathbb R^n\setminus\mathbb R^n_\infty}
\left(\frac{|f(x)|}{\lambda}\right)^{p(x)}\,dx
+ \lambda^{-1}\|f\|_{L^\infty(\mathbb R^n_\infty)} \leq 1 \right\}, \end{equation}
where $\mathbb R^n_\infty = \{ x : p(x)=\infty \}$. These spaces have been the subject of considerable interest since the early 1990s both as function spaces with intrinsic interest and for their applications to problems arising in PDEs and the calculus of variations. For a thorough discussion of these spaces and their history, see~\cite{cruz-fiorenza-book,diening-harjulehto-hasto-ruzicka2010}.
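As a sanity check on the definition: if ${p(\cdot)} \equiv p$ is constant, $1 \leq p < \infty$, then $\mathbb R^n_\infty = \emptyset$ and the infimum is attained at $\lambda = \|f\|_{L^p}$, so the variable norm reduces to the classical one:
$$
\|f\|_{p(\cdot)} = \inf\left\{ \lambda > 0 : \int_{\mathbb R^n} \left(\frac{|f(x)|}{\lambda}\right)^{p}\,dx \leq 1 \right\} = \left( \int_{\mathbb R^n} |f(x)|^p\,dx \right)^{1/p}.
$$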
Recently there has been interest in extending the theory of Muckenhoupt $A_p$ weights to this setting. Recall that given a non-negative, measurable function $w$, for $1<p<\infty$, $w\in A_p$ if
\[ [w]_{A_p} = \sup_B \left(\Xint-_B w(x)\,dx\right) \left(\Xint-_B w(x)^{1-p'}\,dx\right)^{p-1} < \infty, \]
where the supremum is taken over all balls $B \subset \mathbb R^n$, and
$\Xint-_B w\,dx = |B|^{-1}\int_B w\,dx$. We say $w\in A_1$ if
\[ [w]_{A_1} = \sup_B \frac{\Xint-_B w(x)\,dx}{\essinf_{x\in B} w(x)} < \infty. \]
These weights characterize the weighted norm inequalities for the Hardy-Littlewood maximal operator,
\[ Mf(x) = \sup_B \Xint-_B |f(y)|\,dy \cdot \chi_{B}(x). \]
More precisely, $w\in A_p$, $1<p<\infty$, if and only if $M : L^p(w)\rightarrow L^p(w)$. The Muckenhoupt weights also govern the weighted norm inequalities for a large number of operators in harmonic analysis, including singular integrals, commutators and square functions. For details, see~\cite{cruz-martell-perezBook, duoandikoetxea01,grafakos08b}.
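A standard family of examples to keep in mind (see, for instance,~\cite{duoandikoetxea01}) is given by the power weights: for $1<p<\infty$,
\[ |x|^a \in A_p \quad\Longleftrightarrow\quad -n < a < n(p-1). \]
In particular, $|x|^{-1}\in A_2$ whenever $n\geq 2$.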
Weighted norm inequalities for the maximal operator in the variable Lebesgue spaces were proved in~\cite{cruz-diening-hasto2011,MR2927495,
diening-harjulehto-hasto-ruzicka2010} (see also~\cite{diening-hastoPreprint2010} for related results). To show the connection with the classical results we restate them by replacing the weight $w$ by $w^p$ in the definition of $A_p$. In this case we say that $w\in A_p$, $1< p\leq \infty$, if
\[ \sup_B |B|^{-1}\|w\chi_B\|_p \|w^{-1}\chi_B\|_{p'} < \infty, \]
and this is equivalent to the norm inequality
\[ \|(Mf)w\|_p \leq C\|fw\|_p. \]
\begin{remark} Note that in this formulation the inequality holds in the case $p=\infty$; this fact is not well-known but was first proved by Muckenhoupt~\cite{muckenhoupt72}. \end{remark}
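To see the equivalence of this reformulation with the classical condition when $p$ is constant, $1<p<\infty$, note that $|B|^{-1}=|B|^{-1/p}|B|^{-1/p'}$ and $(w^p)^{1-p'}=w^{-p'}$, so
\[ |B|^{-1}\|w\chi_B\|_p \|w^{-1}\chi_B\|_{p'}
= \left(\Xint-_B w(x)^p\,dx\right)^{1/p}
\left(\Xint-_B w(x)^{-p'}\,dx\right)^{1/p'}
= \left[ \left(\Xint-_B w^p\,dx\right)
\left(\Xint-_B (w^p)^{1-p'}\,dx\right)^{p-1} \right]^{1/p}; \]
taking the supremum over all balls $B$ shows that the condition above holds precisely when $[w^p]_{A_p}<\infty$.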
In this form the definition immediately generalizes to the variable Lebesgue spaces. (See below for precise definitions.) We say that a weight $w$ is in the class $A_{p(\cdot)}$ if
\[ \sup_B |B|^{-1}\|w\chi_B\|_{p(\cdot)} \|w^{-1}\chi_B\|_{{p'(\cdot)}} < \infty. \]
When ${p(\cdot)}$ is log-H\"older continuous (${p(\cdot)} \in LH$) and is bounded and bounded away from $1$ (that is, $1<p_-\leq p_+<\infty$), then $w\in A_{p(\cdot)}$ if and only if
\[ \|(Mf)w\|_{p(\cdot)} \leq C\|fw\|_{p(\cdot)}. \]
In this paper we further develop the theory of weighted norm inequalities on the variable Lebesgue spaces. We show that the $A_{p(\cdot)}$ weights govern the weighted norm inequalities for a wide variety of operators in harmonic analysis, including singular and fractional integrals and the Riesz transforms associated to elliptic operators in divergence form. To do this we show that the theory of Rubio de Francia extrapolation holds in this setting. As an immediate consequence we prove, with very little additional work, norm inequalities in weighted $L^{p(\cdot)}$ spaces for any operator that satisfies estimates on $L^p(w)$ when $w$ is a Muckenhoupt $A_p$ weight. The classical theory of extrapolation is a powerful tool in harmonic analysis: for a detailed treatment, see~\cite{cruz-martell-perezBook}. Extrapolation in the scale of the variable Lebesgue spaces was originally developed in~\cite{MR2210118} to prove unweighted inequalities (see also~\cite{cruz-fiorenza-book,cruz-martell-perezBook}). It has found wide application since (see, for instance,~\cite{DCU-dw-P2014,MR2498558,MR3057132,MR2869787}), and the results we present here should be equally useful. We note that our work has already been applied to the study of greedy approximation algorithms on variable Lebesgue spaces in~\cite{dcu-eh-jmm14}.
The remainder of this paper is organized as follows. In Section~\ref{section:main-theorems} we state our extrapolation results, including the precise definitions needed. In Section~\ref{section:applications} we show how to apply extrapolation to prove weighted norm inequalities for several different kinds of operators. Our examples are not exhaustive; rather, they were chosen to illustrate the applicability of extrapolation. In Section~\ref{section:general} we give a general overview of our approach to proving extrapolation theorems. These ideas are not new---they were implicit in~\cite{cruz-martell-perezBook}. However, we think it is worthwhile to make them explicit here, for two reasons. First, they will motivate the technical details in our proofs, particularly Theorem~\ref{thm:limited-var}. Second, they will be helpful to others attempting to prove extrapolation theorems in different settings. Finally, in Section~\ref{section:extrapol-proof} we prove our extrapolation theorems. By following the schema outlined in the previous section, we actually prove more general theorems which yield our main results as special cases.
\section{Main Theorems} \label{section:main-theorems}
We begin with some definitions related to the variable Lebesgue spaces. Throughout we will follow the conventions established in~\cite{cruz-fiorenza-book}. Let $\mathcal P=\mathcal P(\mathbb R^n)$ be the collection of all measurable functions ${p(\cdot)} : \mathbb R^n \rightarrow [1, \infty]$. Given a set $E\subset \mathbb R^n$, we define
\[ p_-(E) = \essinf_{x\in E} p(x), \qquad p_+(E) = \esssup_{x\in E} p(x). \]
If $E=\mathbb R^n$, then for brevity we write $p_-$ and $p_+$. Given ${p(\cdot)}$, the conjugate exponent ${p'(\cdot)}$ is defined pointwise by
\[ \frac{1}{p(x)}+\frac{1}{p'(x)} = 1, \]
with the convention that $1/\infty = 0$.
For our results we need to impose some regularity on the exponent functions ${p(\cdot)}$. The most important condition, one widely used in the study of variable Lebesgue spaces, is log-H\"older continuity. Given ${p(\cdot)}\in \mathcal P$, we say ${p(\cdot)}\in LH_0$ if there exists a constant $C_0$ such that
\begin{equation}\label{def:LH0}
|p(x) - p(y)| \leq \frac{C_0}{-\log(|x - y|)}, \qquad x,\,y \in \mathbb R^n, \qquad |x - y| < 1/2,
\end{equation}
and ${p(\cdot)}\in LH_\infty$ if there exists $p_{\infty}$ and $C_\infty > 0$, such that
\begin{equation}\label{def:LHinf}
|p(x) - p_{\infty}| \leq \frac{C_\infty}{\log(e + |x|)}, \qquad x \in \mathbb R^n. \end{equation}
If ${p(\cdot)}$ satisfies both of these conditions we write ${p(\cdot)}\in LH$. It is immediate that if ${p(\cdot)} \in LH$, then ${p'(\cdot)} \in LH$. A key consequence of log-H\"older continuity is the fact that if $1<p_-$ and ${p(\cdot)}\in LH$, then the maximal operator is bounded on $L^{p(\cdot)}$.
\begin{theorem} \label{thm:max-op}
Given ${p(\cdot)} \in \mathcal P$, suppose $1<p_-\leq p_+<\infty$ and ${p(\cdot)}\in LH$. Then $\|Mf\|_{p(\cdot)} \leq C\|f\|_{p(\cdot)}$. \end{theorem}
However, this condition is not necessary, and there exist exponents ${p(\cdot)}$ which are not log-H\"older continuous but for which the maximal operator is still bounded on $L^{p(\cdot)}$. (See~\cite{cruz-fiorenza-book,diening-harjulehto-hasto-ruzicka2010} for further details.)
Given a weight $w$ (again, a non-negative, measurable function) and
${p(\cdot)}\in \mathcal P$, define the weighted variable Lebesgue space $L^{p(\cdot)}(w)$ to be the set of all measurable functions $f$ such that $fw \in L^{p(\cdot)}$, and we write $\| f \|_{L^{{p(\cdot)}}(w)} = \| fw \|_{{p(\cdot)}}$. We say that an operator $T$ is bounded on $L^{p(\cdot)}(w)$ if $\|(Tf)w\|_{p(\cdot)} \leq C\|fw\|_{p(\cdot)}$ for all $f\in L^{p(\cdot)}(w)$. We are interested in weights in $A_{p(\cdot)}$; we restate their definition here.
\begin{definition} \label{defn:Apvar} Given an exponent ${p(\cdot)}\in \mathcal P$ and a weight $w$ such that $0<w(x)<\infty$ almost everywhere, we say that $w\in A_{p(\cdot)}$ if
\[ [w]_{A_{p(\cdot)}}= \sup_B |B|^{-1} \|w\chi_B\|_{p(\cdot)} \|w^{-1}\chi_B\|_{p'(\cdot)} < \infty, \]
where the supremum is taken over all balls $B\subset \mathbb R^n$. \end{definition}
\begin{remark} Definition \ref{defn:Apvar} has two immediate consequences. First, if $w\in A_{p(\cdot)}$, then $w\in L^{p(\cdot)}_{loc}$ and $w^{-1}\in L^{p'(\cdot)}_{loc}$. Second, if $w \in A_{{p(\cdot)}}$, then $w^{-1} \in A_{{p'(\cdot)}}$. \end{remark}
For our results we will need to assume that the maximal operator is bounded on weighted variable Lebesgue spaces. The following result is from~\cite{MR2927495}.
\begin{theorem} \label{thm:wtd-max} Given ${p(\cdot)}\in \mathcal P$, $1<p_-\leq p_+<\infty$, suppose ${p(\cdot)} \in LH$. Then for every $w\in A_{p(\cdot)}$,
\begin{equation} \label{eqn:wtd-max}
\|Mf\|_{L^{p(\cdot)}(w)} \leq C\|f\|_{L^{p(\cdot)}(w)}. \end{equation}
Conversely, given any ${p(\cdot)}$ and $w$, if \eqref{eqn:wtd-max} holds for $f\in L^{p(\cdot)}(w)$, then $p_->1$ and $w\in A_{p(\cdot)}$. \end{theorem}
For the majority of our extrapolation results we prefer to state the regularity of ${p(\cdot)}$ and $w$ in terms of the boundedness of the maximal operator. Therefore, given ${p(\cdot)}\in \mathcal P$ and a weight $w$, we will say $({p(\cdot)}, w)$ is an $M$-pair if the maximal operator is bounded on $L^{{p(\cdot)}}(w)$ and $L^{{p'(\cdot)}}(w^{-1})$. By Theorem~\ref{thm:wtd-max} we necessarily have $w \in A_{{p(\cdot)}}$ (equivalently, $w^{-1} \in A_{{p'(\cdot)}}$) and $p_- > 1$. Conversely, if ${p(\cdot)}\in LH$, with $p_- > 1$, then for any $w\in A_{p(\cdot)}$, $({p(\cdot)}, w)$ is an $M$-pair.
\begin{remark} \label{remark:duality}
By a very deep result of Diening~\cite{MR2166733,diening-harjulehto-hasto-ruzicka2010}, if $1<p_-\leq p_+<\infty$, $M$ is bounded on $L^{p(\cdot)}$ if and only if it is bounded on $L^{p'(\cdot)}$. We conjecture that the same ``duality'' result holds in the weighted Lebesgue spaces, that is, it suffices to define an $M$-pair only by the boundedness of $M$ on $L^{{p(\cdot)}}(w)$. We also conjecture (see~\cite{MR2927495, diening-hastoPreprint2010}) that if $M$ is bounded on $L^{p(\cdot)}$ and $w\in A_{p(\cdot)}$, then $M$ is bounded on $L^{{p(\cdot)}}(w)$. If these two conjectures are true, then the hypotheses of our results below would be simpler. \end{remark}
Though our goal is to use extrapolation to prove specific operators are bounded on $L^{p(\cdot)}(w)$, we will state our results more abstractly. Following the approach established in~\cite{MR2210118} (see also~\cite{cruz-fiorenza-book,cruz-martell-perezBook}) we will write our extrapolation theorems for pairs of functions $(f,g)$ contained in some family $\mathcal{F}$. Hereafter, if we write
\[ \|f\|_{X} \leq C\|g\|_Y, \qquad (f,g) \in \mathcal{F}, \]
where $X$ and $Y$ are Banach function spaces (e.g., weighted classical or variable Lebesgue spaces), then we mean that this inequality is true for every pair $(f,g)\in \mathcal{F}$ such that the left-hand side of this inequality is finite. We will make the utility of this formulation clear in Section~\ref{section:applications}.
We can now state our main results. The first is a direct generalization of the classical Rubio de Francia extrapolation theorem and an extension of~\cite[Theorem~1.3]{MR2210118} to weighted variable Lebesgue spaces.
\begin{theorem}\label{thm:diag-weightedvar} Suppose that for some $p_0$, $1 < p_0 < \infty$, and every $w_0 \in A_{p_0}$,
\begin{equation}\label{hyp:diag-weightedvar}
\int_{\mathbb R^n} f(x)^{p_0} w_0(x) dx \leq C \int_{\mathbb R^n} g(x)^{p_0}
w_0(x) dx, \qquad (f, g) \in \mathcal{F}. \end{equation}
Then for any $M$-pair $({p(\cdot)}, w)$,
\begin{equation}\label{result:diag-weightedvar}
\| f \|_{L^{p(\cdot)}(w)} \leq C \| g \|_{L^{p(\cdot)}(w)}, \qquad (f, g) \in \mathcal{F}.
\end{equation}
The theorem also holds when $p_0=1$, provided we assume only that the maximal operator is bounded on $L^{{p'(\cdot)}}(w^{-1})$. \end{theorem}
\begin{remark} When $p_0=1$, Theorem~\ref{thm:diag-weightedvar} is still true and is a special case of Theorem~\ref{thm:A1var} below. \end{remark}
Our second result yields off-diagonal inequalities between two different weighted variable Lebesgue spaces. In the constant exponent case this result was first proved in~\cite{harboure-macias-segovia88}, and it was proved in unweighted $L^{p(\cdot)}$ spaces in~\cite[Theorem~1.8]{MR2210118}. To state it, we first define the appropriate weight classes that generalize the $A_p$ weights. In the classical case these weights were introduced in~\cite{muckenhoupt-wheeden74}.
\begin{definition} \label{defn:pq-weights} Given $1<p\leq q<\infty$, we say that $w\in A_{p,q}$ if
\[ \sup_B \left( \frac{1}{|B|} \int_B w(x)^q dx \right)^{1/q} \left(
\frac{1}{|B|} \int_B w(x)^{-p'} dx \right)^{1/p'} < \infty, \]
where the supremum is taken over all balls $B\subset \mathbb R^n$. If $p=1$, then $w\in A_{1,q}$ if
\[ \sup_B \frac{ \Xint-_B w(x)^q \,dx}{\essinf_{x\in B} w(x)^q} < \infty. \] \end{definition}
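For later comparison we recall the standard reformulation of this condition from~\cite{muckenhoupt-wheeden74}: for $1<p\leq q<\infty$, since $(w^q)^{1-r'}=w^{-p'}$ when $r=1+q/p'$, we have
\[ w \in A_{p,q} \quad\Longleftrightarrow\quad w^q \in A_{1+q/p'}; \]
in particular, $w^q\in A_\infty$.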
\begin{definition} \label{defn:pq-var-weights} Let ${p(\cdot)},\,{q(\cdot)} \in \mathcal P$ be such that for some $\gamma$, $0<\gamma<1$,
\[ \frac{1}{p(x)}-\frac{1}{q(x)} = \gamma. \]
Given $w$ such that $0<w(x)<\infty$ almost everywhere,
we say that $w\in A_{{p(\cdot)},{q(\cdot)}}$ if
\[ \sup_B |B|^{\gamma-1}\|w\chi_B\|_{q(\cdot)} \|w^{-1}\chi_B\|_{p'(\cdot)} < \infty, \]
where the supremum is taken over all balls $B\subset \mathbb R^n$. \end{definition}
\begin{theorem}\label{thm:off-weightedvar} Suppose that for some $p_0, q_0$, $1 < p_0 \leq q_0 < \infty$, and every $w_0 \in A_{p_0, q_0}$,
\begin{equation}\label{hyp:off-weightedvar}
\left( \int_{\mathbb R^n} f(x)^{q_0} w_0(x)^{q_0} dx \right)^{1/q_0}
\leq C \left( \int_{\mathbb R^n} g(x)^{p_0} w_0 (x)^{p_0} dx
\right)^{1/p_0}, \qquad (f,g)\in \mathcal{F}.
\end{equation}
Given ${p(\cdot)}, {q(\cdot)} \in \mathcal{P}$, suppose
\[ \frac{1}{p(x)} - \frac{1}{q(x)} = \frac{1}{p_0}-\frac{1}{q_0}. \]
Define $\sigma\geq 1$ by $1/\sigma'=1/p_0-1/q_0$. If $w\in A_{{p(\cdot)},{q(\cdot)}}$ and $({q(\cdot)}/\sigma, w^\sigma)$ is an $M$-pair, then
\begin{equation}\label{result:off-weightedvar}
\| f \|_{L^{{q(\cdot)}} (w)} \leq C \| g \|_{L^{{p(\cdot)}}(w)}, \qquad
(f,g)\in \mathcal{F}. \end{equation}
The theorem also holds when $p_0=1$, provided we assume only that the maximal operator is bounded on $L^{({q(\cdot)}/q_0)'}(w^{-q_0})$. \end{theorem}
\begin{remark} When $\sigma=1$, Theorem~\ref{thm:off-weightedvar} reduces to Theorem~\ref{thm:diag-weightedvar}. Therefore, in proving it we will assume that $\sigma>1$. \end{remark}
Our third result extends the theory of limited range extrapolation to the weighted variable Lebesgue spaces. This concept was introduced by Auscher and Martell~\cite{auscher-martell07} and independently by Duoandikoetxea {\em et
al.}~\cite{duoandikoetxea-moyua-oruetxebarria-seijoP} in a somewhat different form. We generalize both their results. To state our main result we recall a definition: we say $w\in RH_s$ for some $s>1$ if
\[ [w]_{RH_s} = \sup_B \frac{\left(\Xint-_B w(x)^s\,dx\right)^{1/s}} {\Xint-_B w(x)\,dx} < \infty. \]
Given a weight $w$, $w\in A_p$ for some $p\geq 1$ if and only if there exists $s>1$ such that $w\in RH_s$ (see~\cite{duoandikoetxea01}). As given in~\cite{auscher-martell07}, limited range extrapolation in the constant exponent case is the following.
\begin{theorem} \label{thm:limited-const}
Given $1 < q_- < q_+ < \infty$, suppose there exists $p_0$, $q_-
<p_0< q_+$ such that for every $w_0 \in A_{p_0/q_-} \cap
RH_{(q_+/p_0)'}$,
\begin{equation}\label{hyp:limited-var}
\int f(x)^{p_0} w_0 (x) dx \leq c \int g(x)^{p_0} w_0(x) dx,
\qquad (f,g)\in \mathcal{F}. \end{equation}
Then for every $p$, $q_-<p<q_+$ and every $w \in A_{p/q_-} \cap
RH_{(q_+/p)'}$,
\[ \int f(x)^{p} w (x) dx \leq c \int g(x)^{p} w(x) dx,
\qquad (f,g)\in \mathcal{F}. \]
\end{theorem}
In the variable exponent case we have a very different result, one which does not reduce to the constant exponent result, Theorem~\ref{thm:limited-const}.
\begin{theorem}\label{thm:limited-var} Given $1 < q_- < q_+ < \infty$, suppose there exists $p_0$, $q_-
<p_0< q_+$ such that for every $w_0 \in A_{p_0/q_-} \cap
RH_{(q_+/p_0)'}$, \eqref{hyp:limited-var} holds. Then for every ${p(\cdot)} \in LH$ with $q_- < p_- \leq p_+ < q_+$,
\begin{equation}\label{result:limited-var1}
\| f \|_{{p(\cdot)}} \leq C\| g \|_{{p(\cdot)}},
\qquad (f,g)\in \mathcal{F}.
\end{equation}
More generally, there exists $p_*$, $q_-<p_*<q_+$, such
that if we let $\sigma = \frac{p_* q_-}{p_* - q_-}$, then
there exists a constant $c = c(p_-, p_+, q_-, q_+, p_*) \in (0, 1)$, so that for every
weight $w$ with $w^{\sigma} \in A_{\frac{{p(\cdot)}}{c\sigma}}$,
we have
\begin{equation}\label{result:limited-var2}
\| fw \|_{{p(\cdot)}} \leq C \| gw \|_{{p(\cdot)}}.
\end{equation}
\end{theorem}
\begin{remark}\label{remark:limited-constant} The two inequalities
\eqref{result:limited-var1} and \eqref{result:limited-var2} follow
from two special cases of a more general version of the theorem
in Proposition~\ref{prop-limited}. However, the constant exponent
result in Theorem~\ref{thm:limited-const} is from a third
special case, and this reduction is not immediately obvious: see
Remark \ref{remark:lim-reduction} for details. We discuss the
relationship between these cases in
Remark~\ref{remark:limited-cases}.
\end{remark}
\begin{remark} A weaker version of the unweighted
inequality~\eqref{result:limited-var1} in
Theorem~\ref{thm:limited-var} was implicit in Fiorenza
{\em et al.}~\cite{MR3057132}. \end{remark}
\begin{remark} The regularity assumption on ${p(\cdot)}$ in
Theorem~\ref{thm:limited-var} can be weakened. For example, it
follows from the proof of~\eqref{result:limited-var1} that there
exists $s=s(q_-,q_+,p_-,p_+)$ such that it suffices to assume that
$M$ is bounded on $L^{{p(\cdot)}}$ and $L^{({p(\cdot)}/s)'}$. By the duality
property of the maximal operator (see Remark~\ref{remark:duality})
the second assumption is equivalent to assuming $M$ is bounded on
$L^{{p(\cdot)}/s}$. Depending on whether $s>1$ or $s<1$, one of these
assumptions implies the other, since if $M$ is bounded on $L^{p(\cdot)}$, it
is bounded on $L^{r{p(\cdot)}}$ for all $r>1$
(\cite[Theorem~3.38]{cruz-fiorenza-book}). Regarding the constants
in the conclusion: $c$ depends on $p_*$, and as we will see from
the proof, the existence of
$p_*$ is guaranteed if we take it sufficiently close to $q_-$. \end{remark}
\begin{remark} \label{remark:power-wt} The hypotheses on the weight $w$ for inequality~\eqref{result:limited-var2} to hold are restrictive, but there exist weights that satisfy them. We have shown that if ${p(\cdot)}
\in LH$ and $0\leq a < n/p_+$, then $w(x)=|x|^{-a}\in A_{p(\cdot)}$. (This result will appear in~\cite{dcu-lw15}.)
Hence, if $0\leq a < cn/p_+$, $|x|^{-a\sigma}\in
A_{\frac{{p(\cdot)}}{c\sigma}}$. This result can also be used to
construct non-trivial examples of weights that satisfy the
hypotheses of our other results.
\end{remark}
We can also generalize the version of limited range extrapolation from~\cite{duoandikoetxea-moyua-oruetxebarria-seijoP}.
\begin{corollary}\label{cor:limited-corollary} Given $\delta$, $0 < \delta \leq 1$, suppose that for every $w \in A_2$,
\begin{equation}\label{hyp:limited-corollary}
\int f(x) ^2 w(x)^{\delta} dx \leq c \int g(x)^2 w(x)^{\delta}
dx, \qquad (f,g)\in \mathcal{F}.
\end{equation}
Then for every ${p(\cdot)} \in LH$ such that
\begin{equation}\label{cor:limited-pp}
\frac{2}{1 + \delta} < p_- \leq p_+ < \frac{2}{1 - \delta},
\end{equation}
we have that
\begin{equation}\label{result:limited-corollary1}
\| f \|_{{p(\cdot)}} \leq C \| g \|_{{p(\cdot)}}, \qquad (f,g)\in \mathcal{F}.
\end{equation}
More generally, for such a ${p(\cdot)}$ and with the same $\sigma$ as in the previous theorem, there exists a constant $c \in (0, 1)$ such that for every weight $w$ with $w^{\sigma} \in A_{\frac{{p(\cdot)}}{c\sigma}}$, we have
\begin{equation}\label{result:limited-corollary2}
\| fw \|_{{p(\cdot)}} \leq C \| gw \|_{{p(\cdot)}}.
\end{equation}
\end{corollary}
\begin{remark}
For simplicity we have stated Corollary~\ref{cor:limited-corollary}
only assuming a weighted $L^2$ estimate. A more general result is
possible: see~\cite[Remark~3.39]{cruz-martell-perezBook}. An unweighted version of
Corollary~\ref{cor:limited-corollary} that includes this generalization
has recently been proved by Gogatishvili and Kopaliani~\cite{GK14}. \end{remark}
Finally, we give two variants of classical extrapolation. We first consider extrapolation from $A_1$ weights. This result is a generalization of the original extrapolation theorem for variable Lebesgue spaces in~\cite[Theorem~1.3]{MR2210118}. It shows that we can weaken the hypotheses of Theorem~\ref{thm:diag-weightedvar} when $p_0=1$ and also prove results for exponent functions such that $p_-\leq 1$. To state our result we introduce a more general class of exponents: we say ${p(\cdot)} \in \mathcal P_0$ if ${p(\cdot)} :
\mathbb R^n \rightarrow (0,\infty)$. For such ${p(\cdot)}$ we define the ``norm'' $\|\cdot\|_{{p(\cdot)}}$ (actually a quasi-norm: see~\cite{DCU-dw-P2014}) exactly as we do for ${p(\cdot)}\in \mathcal P$.
\begin{theorem} \label{thm:A1var} Suppose that for some $p_0 >0 $ and every $w_0 \in A_1$,
\begin{equation} \label{eqn:A1hyp}
\int_{{\mathbb R}^n} f(x)^{p_0} w_0 (x) dx \leq C\int_{{\mathbb R}^n} g(x)^{p_0} w_0(x) dx, \qquad (f,g) \in \mathcal{F}. \end{equation}
Given ${p(\cdot)}\in \mathcal P_0$ such that $p_- \geq p_0$, suppose that
$w^{p_0}\in A_{{p(\cdot)}/p_0}$ and $M$ is bounded on $L^{({p(\cdot)}/p_0)'}(w^{-p_0})$.
Then
\[ \| f \|_{L^{p(\cdot)}(w)} \leq C\| g \|_{L^{p(\cdot)}(w)}, \qquad (f,g) \in \mathcal{F}.\]
\end{theorem}
\begin{remark} There is an important difference between Theorem~\ref{thm:A1var} (and~\cite[Theorem~1.3]{MR2210118}) and Theorem~\ref{thm:diag-weightedvar}. With the latter we can extrapolate both ``up'' and ``down'': i.e., we can get results for ${p(\cdot)}$ irrespective of whether $p_-$ is larger or smaller than $p_0$. With $A_1$ extrapolation, however, we have the restriction that $p_-\geq p_0$. The same situation holds in the constant exponent case and is to be expected, since the $A_1$ case often governs ``endpoint'' inequalities. This weaker conclusion is balanced by the weaker hypothesis: we do not require $({p(\cdot)}/p_0,
w^{p_0})$ to be an $M$-pair, since in the proof we will only need the
``dual'' inequality for the maximal operator. \end{remark}
\begin{remark}
The hypotheses of Theorem~\ref{thm:A1var} are
redundant, since if $M$ is bounded on $L^{({p(\cdot)}/p_0)'}(w^{-p_0})$, then
$w^{-p_0} \in A_{({p(\cdot)}/p_0)'}$, which in turn implies that $w^{p_0}\in
A_{{p(\cdot)}/p_0}$. Conversely, if we take ${p(\cdot)} \in LH$, then it is enough
to assume $w^{p_0}\in A_{{p(\cdot)}/p_0}$. \end{remark}
Extrapolation can also be applied to inequalities governed by the larger class $A_\infty = \bigcup_{p>1} A_p$. The following result was first proved in~\cite{cruz-uribe-martell-perez04}.
\begin{theorem} \label{thm:Ainfty-extrapol} If for some $p_0>0$ and every $w_0\in A_\infty$,
\begin{equation} \label{eqn:Ainfty-hyp}
\int_{{\mathbb R}^n} f(x)^{p_0} w_0(x)\,dx \leq C \int_{{\mathbb R}^n} g(x)^{p_0} w_0(x)\,dx, \qquad (f,g) \in \mathcal{F}, \end{equation}
then the same inequality holds with $p_0$ replaced by any $p$, $0<p<\infty$. \end{theorem}
$A_\infty$ extrapolation in variable Lebesgue spaces has the following form.
\begin{theorem} \label{thm:Ainfty-extrapolvar} Suppose that for some $p_0>0$ and every $w_0\in A_\infty$, inequality \eqref{eqn:Ainfty-hyp} holds. Given ${p(\cdot)} \in \mathcal P_0$, suppose there exists $s\leq p_-$ such that $w^s \in A_{{p(\cdot)}/s}$ and $M$ is bounded on $L^{({p(\cdot)}/s)'}(w^{-s})$. Then
\[ \|f\|_{L^{p(\cdot)}(w)} \leq C\|g\|_{L^{p(\cdot)}(w)}, \qquad (f,g) \in \mathcal{F}. \]
\end{theorem}
\begin{remark} There is a close connection between $A_1$ and $A_\infty$ extrapolation: see~\cite[Section~3.3]{cruz-martell-perezBook}. We will exploit this fact in our proof. \end{remark}
To make the connection between Theorems~\ref{thm:Ainfty-extrapol} and~\ref{thm:Ainfty-extrapolvar} clearer, we introduce the notation $A_{p(\cdot)}^{var}$ for the weights that satisfy the variable exponent Muckenhoupt condition. Then if ${p(\cdot)}=p$ is a constant, the hypothesis in Theorem~\ref{thm:Ainfty-extrapolvar} is $w^s \in A_{p/s}^{var}$. It follows at once from Definition~\ref{defn:Apvar} that this is equivalent to $w^p \in A_{p/s}\subset A_\infty$. Conversely, the hypothesis in Theorem~\ref{thm:Ainfty-extrapol} is that $w^p\in A_\infty$, i.e., for some $t>1$, $w^p \in A_t$. Fix $s<p$ such that $t=p/s$; then $w^p\in A_{p/s}$, or equivalently, $w^s\in A_{p/s}^{var}$.
As the next proposition shows, the hypotheses of Theorem~\ref{thm:Ainfty-extrapolvar} are weaker than those of Theorem~\ref{thm:diag-weightedvar}.
\begin{prop} \label{prop:Ainfty-weaker} Given ${p(\cdot)} \in \mathcal P$, suppose $w\in A_{p(\cdot)}$. Then for every $s$, $0<s<1$, $w^s\in A_{{p(\cdot)}/s}$. \end{prop}
\section{Norm Inequalities for Operators} \label{section:applications}
In this section we use extrapolation to prove norm inequalities for a variety of operators on the weighted variable Lebesgue spaces. We will first discuss how to prove that an operator $T$ is bounded on $L^{p(\cdot)}(w)$ using Theorem~\ref{thm:diag-weightedvar}. These same ideas can be used to apply the other theorems and the details are left to the reader. Following this, we will give applications to some specific operators. Our goal is not to be exhaustive, but rather to illustrate the utility of extrapolation by concentrating on some key examples. For additional applications, see~\cite{
cruz-fiorenza-book,MR2210118, cruz-martell-perezBook}.
\subsection*{Applying extrapolation} The key to applying Theorem~\ref{thm:diag-weightedvar} is to construct the appropriate family $\mathcal{F}$. This generally requires an approximation argument since we need pairs $(f,g)$ such that $f$ lies in both the appropriate weighted space to apply the hypothesis and in the target weighted variable Lebesgue space. The dense subsets of $L^p(w)$ are well-known: e.g., smooth functions and bounded functions of compact support. These sets are also dense in $L^{p(\cdot)}(w)$.
\begin{lemma} \label{lemma:density} Given ${p(\cdot)} \in \mathcal P$ with $p_+<\infty$, and a weight $w\in L^{p(\cdot)}_{loc}$, then $L^\infty_c$, bounded functions of compact support, and $C_c^\infty$, smooth functions of compact support, are dense in $L^{p(\cdot)}(w)$. \end{lemma}
\begin{proof}
We first prove that $L_c^\infty$ is dense. The proof is essentially the same as in the unweighted case \cite[Theorem~2.72]{cruz-fiorenza-book}; for the convenience of the reader we sketch the details.
Given $f\in L^{p(\cdot)}(w)$, define $f_n(x) =
\sgn(f(x))\min(|f(x)|,n)\chi_{B(0,n)}(x)$. Then $f_n\rightarrow f$
pointwise as $n\rightarrow \infty$, and $|f_n|w \leq |f|w$. Since
$p_+<\infty$, we can apply the dominated convergence
theorem~\cite[Theorem~2.62]{cruz-fiorenza-book} to conclude that
$f_nw\rightarrow fw$ in $L^{p(\cdot)}$; equivalently, $f_n\rightarrow f$ in
$L^{p(\cdot)}(w)$.
The density of $C_c^\infty$ now follows from this. By Lusin's
theorem, given $f\in L_c^\infty$, for every $\epsilon>0$ there
exists a continuous function of compact support $g_\epsilon$ such
that $\|g_\epsilon\|_\infty \leq \|f\|_\infty$ and
$|D_\epsilon| = |\{ x : g_\epsilon(x) \neq f(x) \} | < \epsilon$. But then
\[ \|f-g_\epsilon\|_{L^{p(\cdot)}(w)} \leq 2\|f\|_\infty
\|\chi_{D_\epsilon}w\|_{p(\cdot)}. \]
Since $w\in L^{p(\cdot)}_{loc}$, again by the dominated convergence theorem in $L^{p(\cdot)}$, the right-hand term tends to $0$ as $\epsilon\rightarrow 0$. Hence, continuous functions of compact support are dense in $L^{p(\cdot)}(w)$. Since every continuous function of compact support can be approximated uniformly by smooth functions, $C_c^\infty$ is also dense. \end{proof}
Now suppose that for every $w_0 \in A_{p_0}$ and $f\in L^{p_0}(w_0)$, an operator $T$ satisfies
\begin{equation} \label{eqn:starting-pt}
\int_{{\mathbb R}^n} |Tf(x)|^{p_0}w_0(x)\,dx \leq C\int_{{\mathbb R}^n} |f(x)|^{p_0} w_0(x)\,dx. \end{equation}
We want to show that given an $M$-pair $({p(\cdot)}, w)$, $T$ is bounded on $L^{p(\cdot)}(w)$. Since $w\in L^{p(\cdot)}_{loc}$, by a standard argument
(cf.~\cite[Theorem~5.39]{cruz-fiorenza-book}) it will suffice to show that $\|(Tf)w\|_{p(\cdot)} \leq C\|fw\|_{p(\cdot)}$ for all $f\in L^\infty_c$. Intuitively, we want to define the family $\mathcal{F}$ by
\[ \mathcal{F} = \{ (|Tf|,|f|) : f \in L^\infty_c \}. \]
However, we do not know {\em a priori} that $Tf \in L^{p(\cdot)}(w)$. To overcome this we make a second approximation and define $\langle Tf\rangle_n = \min(|Tf|,n)\chi_{B(0,n)}$. Again since $ w\in
L^{p(\cdot)}_{loc}$, we have that $\langle Tf\rangle_n\in L^{p(\cdot)}(w)$. Furthermore, it is immediate that \eqref{eqn:starting-pt} holds with $|Tf|$ replaced by $\langle Tf\rangle_n$. Therefore, if we define
\[ \mathcal{F} = \{ ( \langle Tf \rangle_n,|f|) : f \in L^\infty_c, n\geq 1 \}, \]
then we can apply Theorem~\ref{thm:diag-weightedvar} and Fatou's lemma in the variable Lebesgue spaces (\cite[Theorem~2.61]{cruz-fiorenza-book}) to conclude that for all $f\in L^\infty_c$,
\[ \|(Tf)w\|_{p(\cdot)} \leq \liminf_{n\rightarrow \infty} \|\langle Tf\rangle_n w \|_{p(\cdot)}
\leq C\|fw\|_{p(\cdot)}. \]
Similar arguments hold if we need to take $f\in C_c^\infty$ or in some other dense set.
\subsection*{The Hardy-Littlewood maximal operator} Although we must assume the boundedness of the maximal operator to apply extrapolation, as an immediate consequence we get vector-valued inequalities for it. It is well-known that for all $p,\,q$, $1< p,\,q < \infty$, and all $w\in A_p$,
\[ \bigg\| \bigg( \sum_{k=1}^\infty (Mf_k)^q \bigg)^{1/q}\bigg
\|_{L^p(w)} \leq C
\bigg\| \bigg( \sum_{k=1}^\infty |f_k|^q \bigg)^{1/q}\bigg
\|_{L^p(w)}. \]
(See, for instance,~\cite{andersen-john80}.) From this we immediately get the following inequality.
\begin{corollary} Given an $M$-pair $({p(\cdot)}, w)$ and $1<q<\infty$,
\[ \bigg\| \bigg( \sum_{k=1}^\infty (Mf_k)^q \bigg)^{1/q}\bigg
\|_{L^{p(\cdot)}(w)} \leq C
\bigg\| \bigg( \sum_{k=1}^\infty |f_k|^q \bigg)^{1/q}\bigg
\|_{L^{p(\cdot)}(w)}. \]
\end{corollary}
This result is not particular to the maximal operator: such vector-valued inequalities are an immediate consequence of extrapolation defined in terms of ordered pairs of functions. This is proved in the constant exponent case in~\cite[Corollary~3.12]{cruz-martell-perezBook}, and the same proof works in our more general setting.
\begin{remark} In the same way, though we do not discuss them here, weak-type inequalities can be proved using extrapolation. See~\cite[Corollary~3.11]{cruz-martell-perezBook} and \cite[Corollary~5.33]{cruz-fiorenza-book} for details. \end{remark}
\begin{remark} Vector-valued inequalities for the maximal operator play an important role in studying functions spaces in the variable exponent setting: see, for example,~\cite{DCU-dw-P2014,MR2498558}. \end{remark}
\subsection*{Singular integral operators} Let $T$ be a convolution type singular integral: $Tf= K*f$, where $K$ is defined on $\mathbb R^n\setminus \{0\}$ and satisfies $\hat{K} \in L^\infty$ and
\[ |K(x)| \leq \frac{C}{|x|^n}, \quad |\nabla K(x)| \leq
\frac{C}{|x|^{n+1}}, \quad x \neq 0. \]
More generally, we can take $T$ to be a Calder\'on-Zygmund singular integral of the type defined by Coifman and Meyer. Then for all $p$, $1<p<\infty$, and $w\in A_p$,
\begin{equation} \label{eqn:sio}
\int_{{\mathbb R}^n} |Tf(x)|^p w(x)\,dx \leq C\int_{{\mathbb R}^n} |f(x)|^pw(x)\,dx. \end{equation}
(See~\cite{duoandikoetxea01,grafakos08b}.) As an immediate consequence we get that singular integrals are bounded on weighted Lebesgue spaces.
\begin{corollary} Let $T$ be a Calder\'on-Zygmund singular integral operator. {Then for any $M$-pair $({p(\cdot)}, w)$,}
\[ \|Tf\|_{L^{p(\cdot)}(w)} \leq C\|f\|_{L^{p(\cdot)}(w)}. \]
\end{corollary}
We can also use extrapolation to prove norm inequalities for operators that are more singular. Given $1<r\leq \infty$, let $\Omega \in L^r(S^{n-1})$ satisfy $\int_{S^{n-1}} \Omega(y)\,d\sigma(y) = 0$, where $S^{n-1}$ is the unit sphere and $\sigma$ is surface measure on $S^{n-1}$. Given the kernel $K$
\[ K(x) = \frac{\Omega(x/|x|)}{|x|^n}, \]
define $T_\Omega f=K*f$. Then for all $p>r'$ and $w\in A_{p/r'}$, \eqref{eqn:sio} holds for $T_\Omega$~\cite{duoandikoetxea93,watson90}. This is a limiting case of Theorem~\ref{thm:limited-var}, with $q_-=r'$ and $q_+=\infty$. However, it is more straightforward to apply Theorem~\ref{thm:diag-weightedvar} by rescaling. If we rewrite \eqref{eqn:sio} as
\begin{equation} \label{eqn:sio-r}
\int_{{\mathbb R}^n} \big(|T_\Omega f(x)|^{r'}\big)^{p/r'} w(x)\,dx
\leq C\int_{{\mathbb R}^n} \big(|f(x)|^{r'}\big)^{p/r'}w(x)\,dx, \end{equation}
then for any $M$-pair $({p(\cdot)}, w)$,
\[ \||T_\Omega f|^{r'}w\|_{L^{p(\cdot)}} \leq C \||f|^{r'}w\|_{L^{p(\cdot)}}. \]
In particular, if we replace $w$ by $w^{r'}$ and ${p(\cdot)}$ by ${p(\cdot)}/r'$, then by dilation we get a variable exponent analog of inequality~\eqref{eqn:sio-r}.
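For the reader's convenience, we spell out the dilation identities being used: with these substitutions,
\[ \big\| |T_\Omega f|^{r'} w^{r'} \big\|_{{p(\cdot)}/r'} = \| (T_\Omega f)\, w \|_{{p(\cdot)}}^{r'}, \qquad
\big\| |f|^{r'} w^{r'} \big\|_{{p(\cdot)}/r'} = \| f w \|_{{p(\cdot)}}^{r'}, \]
so after taking $r'$-th roots we obtain the norm inequality stated in the corollary.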
\begin{corollary} Given ${p(\cdot)}$ and $w$ such that {$({p(\cdot)}/r', w^{r'})$ is an $M$-pair, we have}
\[ \|T_\Omega f\|_{L^{p(\cdot)}(w)} \leq C\|f\|_{L^{p(\cdot)}(w)}. \]
\end{corollary}
\subsection*{Off-diagonal operators} Given $\alpha$, $0<\alpha<n$, the fractional integral operator of order $\alpha$ (also referred to as the Riesz potential) is the positive integral operator
\[ I_\alpha f(x) = \int_{{\mathbb R}^n} \frac{f(y)}{|x-y|^{n-\alpha}}\,dy.\]
The associated fractional maximal operator $M_\alpha$ is defined by
\[ M_\alpha f(x) = \sup_{B} |B|^{\alpha/n} \Xint-_B |f(y)|\,dy \cdot \chi_B(x). \]
Weighted inequalities for both of these operators are governed by the $A_{p,q}$ weights in Definition~\ref{defn:pq-weights}: given $p$, $1<p<n/\alpha$, and $q$ such that $\frac{1}{p}-\frac{1}{q}=\frac{\alpha}{n}$, then for all $w\in A_{p,q}$,
\[ \left(\int_{{\mathbb R}^n} |I_\alpha f(x) w(x)|^q \,dx \right)^{1/q}
\leq C \left(\int_{{\mathbb R}^n} |f(x) w(x)|^p \,dx \right)^{1/p}; \]
the same inequality holds if $I_\alpha$ is replaced by $M_\alpha$~\cite{muckenhoupt-wheeden74}. Therefore, we can apply Theorem~\ref{thm:off-weightedvar} (using the obvious variant of the technical reduction discussed at the beginning of this section) to get the following result.
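To illustrate the exponent relation with a concrete (purely illustrative) choice of parameters, take $n=3$ and $\alpha=1$, so that $\sigma=(n/\alpha)'=3/2$. For the constant exponent $p=2$ we get
\[ \frac{1}{q} = \frac{1}{p} - \frac{\alpha}{n} = \frac12 - \frac13 = \frac16, \]
so $q=6$, and the hypothesis below reduces to requiring that $(q/\sigma, w^{\sigma})=(4, w^{3/2})$ be an $M$-pair: that is, $M$ is bounded on $L^{4}(w^{3/2})$ and on $L^{4/3}(w^{-3/2})$.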
\begin{corollary} Given $\alpha$, $0<\alpha<n$, suppose exponents ${p(\cdot)},\,{q(\cdot)}$ are such that $p_+<n/\alpha$ and $\frac{1}{p(x)}-\frac{1}{q(x)}=\frac{\alpha}{n}$. Let $\sigma=(n/\alpha)'$.
Then for all $M$-pairs $({q(\cdot)}/\sigma, w^{\sigma})$,
\begin{gather*}
\|I_\alpha f\|_{L^{q(\cdot)}(w)} \leq C\|f\|_{L^{p(\cdot)}(w)}, \\
\|M_\alpha f\|_{L^{q(\cdot)}(w)} \leq C\|f\|_{L^{p(\cdot)}(w)}. \end{gather*}
\end{corollary}
\begin{remark} The restriction $p_+<n/\alpha$ is natural for the fractional integral operator, since in the constant exponent case $I_\alpha$ does not map
$L^{n/\alpha}$ to $L^\infty$. On the other hand, $M_\alpha$ does; moreover, in the unweighted case, if $p_+=n/\alpha$, then $\|M_\alpha f\|_{q(\cdot)} \leq C\|f\|_{p(\cdot)}$. (See~\cite{MR2493649, cruz-fiorenza-book}.) Therefore, we conjecture that the same is true in the weighted case; this question is still open even for $\alpha=0$ and $p_+=\infty$. \end{remark}
\subsection*{Coifman-Fefferman type inequalities}
There are a variety of norm inequalities that compare two operators, usually of the form
\[ \int_{{\mathbb R}^n} |Tf(x)|^p w(x)\,dx \leq C\int_{{\mathbb R}^n} |Sf(x)|^p w(x)\,dx, \]
where $w\in A_\infty$. The first such inequality, due to Coifman and Fefferman~\cite{coifman-fefferman74}, compared singular integrals and the Hardy-Littlewood maximal operator, and there have been a number of results proved since: see~\cite[Chapter~9]{cruz-martell-perezBook}. We can use Theorem~\ref{thm:Ainfty-extrapolvar} to extend such inequalities to the weighted variable Lebesgue spaces.
We illustrate this by considering one such inequality in particular, the Fefferman-Stein inequality for the sharp maximal operator. (See~\cite{duoandikoetxea01}.) Recall that the sharp maximal function is defined by
\[ M^\# f(x) = \sup_B \Xint-_B |f(y)-f_B|\,dy \cdot \chi_B(x), \]
where $f_B = \Xint-_B f(x)\,dx$. Though $M^\#f$ is pointwise dominated by $2Mf$, we have that for all $p$, $0<p<\infty$, and $w\in A_\infty$,
\begin{equation*}
\int_{{\mathbb R}^n} Mf(x)^p w(x)\,dx \leq C\int_{{\mathbb R}^n} M^\#f(x)^p w(x)\,dx. \end{equation*}
Then by Theorem~\ref{thm:Ainfty-extrapolvar} we immediately get the following.
\begin{corollary} Given ${p(\cdot)}\in \mathcal P$ and a weight $w$, suppose there exists $s<p_-$ such that $w^{s} \in A_{{p(\cdot)}/s}$ and $M$ is bounded on $L^{({p(\cdot)}/s)'}(w^{-s})$. Then
\[ \|f\|_{L^{p(\cdot)}(w)} \leq C\|M^\# f\|_{L^{p(\cdot)}(w)}. \]
\end{corollary}
In exactly the same way other Coifman-Fefferman type inequalities can be extended to the variable Lebesgue space setting.
\subsection*{Operators with a restricted range of exponents} Certain types of operators are not bounded on $L^p$ for every $p$, $1<p<\infty$, but only for $p$ in some interval, say $q_-<p<q_+$. In this case it is natural to conjecture that such operators are bounded on $L^{p(\cdot)}$ provided that $q_-<p_-\leq p_+<q_+$, and that weighted inequalities hold in the same range for suitable weights $w$. Here we consider two operators: the spherical maximal operator and the Riesz transforms associated with certain elliptic operators.
The spherical maximal operator is defined by
\[ \mathcal M f(x) = \sup_{t>0} \left|\int_{S^{n-1}} f(x-ty)d\sigma(y)\right|,\]
where $S^{n-1}$ is the unit sphere in $\mathbb R^n$ and $d\sigma$ is surface measure on the sphere.
Stein~\cite{MR0420116} proved that for $n\geq 3$, $\mathcal M$ is bounded on $L^p$ if and
only if $p>\frac{n}{n-1}$. Weighted norm inequalities are true for
the same values of $p$, but require strong conditions on the weight.
Cowling~{\em et al.}~\cite{MR1922609} proved that if
\[ \frac{n}{n-1}< p < \infty \quad \text{and} \quad \max\left(0,1-\frac{p}{n}\right) \leq \delta \leq \frac{n-2}{n-1}, \]
and if
\begin{equation} \label{eqn:cowling-wt}
w = u_1^\delta u_2^{\delta(n-1)-(n-2)}, \qquad u_1,\,u_2 \in A_1, \end{equation}
then $\mathcal M : L^p(w) \rightarrow L^p(w)$.
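For example (an illustrative choice on our part, not one drawn from~\cite{MR1922609}), since $|x|^{-a}\in A_1$ for $0\leq a<n$, taking $u_1(x)=|x|^{-a}$ and $u_2\equiv 1$ in \eqref{eqn:cowling-wt} shows that the power weights
\[ w(x) = |x|^{-a\delta}, \qquad 0\leq a<n, \]
are admissible, so that $\mathcal M : L^p(|x|^{-a\delta}) \rightarrow L^p(|x|^{-a\delta})$ for every $p>\frac{n}{n-1}$ and every $\delta$ in the range above.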
If we combine this result with Theorem~\ref{thm:limited-var} we get the following estimates in the variable Lebesgue spaces.
\begin{corollary} \label{cor:fgk} Fix $n\geq 3$ and suppose ${p(\cdot)} \in LH$ satisfies $\frac{n}{n-1}<p_-\leq p_+< (n-1)p_-$. Then
\begin{equation} \label{eqn:fgk1}
\|\mathcal M f\|_{p(\cdot)} \leq C\|f\|_{p(\cdot)}. \end{equation}
Moreover, if for some $\sigma>\frac{n-1}{n-2}p_-$, $w^\sigma \in A_{\frac{{p(\cdot)}}{c\sigma}}$, where $c\in (0,1)$ is as in the statement of Theorem~\ref{thm:limited-var}, then
\begin{equation} \label{eqn:fgk2}
\|(\mathcal M f)w\|_{p(\cdot)} \leq C\|fw\|_{p(\cdot)}. \end{equation}
\end{corollary}
\begin{proof} To apply Theorem~\ref{thm:limited-var} we need to restate the hypotheses of the above weighted norm inequality. By the information encoded in the factorization of $A_p$ weights (see~\cite[Theorems~2.1, 2.3, 5.1]{cruz-uribe-neugebauer95}), if $w$ is given by \eqref{eqn:cowling-wt}, then $w\in A_t\cap RH_{1/\delta}$, where $1-t=\delta(n-1)-(n-2)$ or $t=(n-1)(1-\delta)$. Therefore, if we fix any $p_0>\frac{n}{n-1}$, we have that $w\in A_{p_0/q_-}\cap RH_{(q_+/p_0)'}$, where
\[ q_- = \frac{p_0}{(n-1)(1-\delta)}, \qquad q_+ = \frac{p_0}{1-\delta} = (n-1)q_-. \]
Conversely, if we take any $w\in A_{p_0/q_-}\cap RH_{(q_+/p_0)'}$, then it can be written in the form~\eqref{eqn:cowling-wt}.
Given this reformulation we can apply Theorem~\ref{thm:limited-var}. To prove the unweighted inequality \eqref{eqn:fgk1}, fix ${p(\cdot)}$ such that $\frac{n}{n-1}<p_-\leq p_+<(n-1)p_-$. Note that if we fix $\delta=\frac{n-2}{n-1}$, then $q_-=p_0$, so if we take $p_0=p_-$ and take values of $\delta$ close to $\frac{n-2}{n-1}$ we see that we can get $q_-$ as close to $p_-$ as desired. In particular, we can get $p_+<(n-1)q_-=q_+$. Inequality~\eqref{eqn:fgk1} now follows from inequality~\eqref{result:limited-var2} in Theorem~\ref{thm:limited-var}.
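To see this limiting argument numerically, consider the illustrative case $n=3$, where $\frac{n-2}{n-1}=\frac12$ and $q_+=2q_-$. Taking $p_0=p_-$ and $\delta=\frac12-\epsilon$ for small $\epsilon>0$ gives
\[ q_- = \frac{p_-}{2\big(\tfrac12+\epsilon\big)} = \frac{p_-}{1+2\epsilon}, \qquad
q_+ = 2q_- = \frac{2p_-}{1+2\epsilon}; \]
as $\epsilon\rightarrow 0^+$ we have $q_+\rightarrow 2p_-=(n-1)p_-$, so any exponent with $p_+<(n-1)p_-$ satisfies $p_+<q_+$ once $\epsilon$ is sufficiently small.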
To prove the weighted inequality \eqref{eqn:fgk2}, we argue similarly. Fix ${p(\cdot)}$ and $\sigma>\frac{n-1}{n-2}p_-$. Now choose a value of $p_0$ and fix $\,q_-,\,q_+$ as before. Then we have that
\[ \sigma>\frac{n-1}{n-2}q_- = \frac{(n-1)q_-^2}{(n-1)q_- - q_-}. \]
We now apply limited range extrapolation in the constant exponent case, Theorem~\ref{thm:limited-const}; this shows that we can now take {\em a posteriori} any value $p_0$, $q_-<p_0<q_+=(n-1)q_-$. In particular, we can take $p_0$ as close to $(n-1)q_-$ as we want. Fix $p_0$ so that
\begin{equation} \label{eqn:sigma-bound} \sigma \geq \frac{p_0 q_-}{p_0-q_-}. \end{equation}
By Proposition~\ref{prop:Ainfty-weaker}, if $w^\sigma \in A_{\frac{{p(\cdot)}}{c\sigma}}$, then the same inclusion holds for any smaller
value of $\sigma$, so we may assume without loss of generality that
equality holds in \eqref{eqn:sigma-bound}. But then we can apply
Theorem~\ref{thm:limited-var} starting from our new value of $p_0$
and using this value of $\sigma$ to get~\eqref{eqn:fgk2}. \end{proof}
Inequality~\eqref{eqn:fgk1} in Corollary~\ref{cor:fgk}
was originally proved by Fiorenza~{\em et al.}~\cite{MR3057132};
their proof relied on an extrapolation argument that was a slightly
weaker, unweighted version of Theorem~\ref{thm:limited-var}.
A surprising feature of this result is that while there are weighted inequalities for any value of $p>\frac{n}{n-1}$, variable Lebesgue space bounds only hold for exponents with bounded oscillation. This is not an artifact of the proof: in~\cite{MR3057132} they
also proved that if the spherical maximal operator is bounded on
$L^{p(\cdot)}$, then $p_+\leq np_-$; it is conjectured that this bound is sharp.
To prove this via extrapolation it suffices to show that in
the above weighted norm inequality we could replace the upper bound
on $\delta$ by $\frac{n-1}{n}$. It is unclear if this is
possible, though we note that in~\cite[p.~83]{MR1922609} they
conjectured that one could take weights of the form
$w=u_1^{\frac{n-1}{n}}$, which is a special case.
A second kind of operator that satisfies norm inequalities with a limited range of exponents is the Riesz transform associated to complex elliptic operators in divergence form. We sketch the basic properties of these operators; for complete information, see Auscher~\cite{auscher2007}.
Let $A$ be an $n\times n$, $n\geq 3$, matrix of complex-valued measurable functions, and assume that $A$ satisfies the ellipticity conditions
\[ \lambda | \xi | ^{2}\leq \text{Re}\langle A\xi
,\xi \rangle, \quad
|\langle A\xi ,\eta \rangle |\leq \Lambda |\xi
||\eta |, \quad \xi ,\,\eta \in \mathbb{C}^{n}, \quad 0<\lambda < \Lambda. \]
Let $L= -\text{div}A\nabla$. Then $L$ satisfies an $L^2$ functional calculus, so that the square root operator $L^{1/2}$ is well defined. The Kato conjecture asserted that this operator satisfies
\[ \|L^{1/2} f\|_2 \approx \|\nabla f\|_2, \qquad f \in W^{1,2}. \]
This was proved by Auscher~{\em et
al.}~\cite{auscher-hofmann-lacey-mcintosh-tchamitchian02}. As a consequence of this we have that the Riesz transform associated to $L$, $\nabla L^{-1/2}$, also satisfies $L^2$ bounds:
\[ \|\nabla L^{-1/2} f \|_2 \leq C\|f\|_2. \]
This operator also satisfies weighted $L^p$ bounds for $p$ close to $2$. Auscher and Martell~\cite{auscher-martell06} proved that there exist constants $q_-=q_-(L)<\frac{2n}{n+2}<2$ and $q_+=q_+(L)>2$ such that if $q_-<p<q_+$ and $w\in A_{p/q_-}\cap RH_{(q_+/p)'}$, then
\[ \|\nabla L^{-1/2} f \|_{L^p(w)} \leq C\|f\|_{L^p(w)}. \]
By Theorem~\ref{thm:limited-var} we can extend this result to the variable Lebesgue spaces.
\begin{corollary} Given an elliptic operator $L$ as defined above, suppose the exponent ${p(\cdot)} \in LH$ is such that $q_-(L)<p_-\leq p_+ <q_+(L)$. Then
\[ \|\nabla L^{-1/2} f \|_{p(\cdot)} \leq C\|f\|_{p(\cdot)},\]
and if $w$ is any weight such that $w^n \in A_{\frac{{p(\cdot)}}{cn}}$, then
\[ \|(\nabla L^{-1/2} f)w \|_{p(\cdot)} \leq C\|fw\|_{p(\cdot)}.\]
\end{corollary}
\begin{proof} The unweighted inequality is immediate. For the weighted inequality we take $p_0=2$, and we take a larger value for $\sigma$ (possible by Proposition~\ref{prop:Ainfty-weaker}) by replacing $q_-$ by the upper bound $\frac{2n}{n+2}$. This gives $\sigma=n$. \end{proof}
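More precisely, the computation giving $\sigma=n$ is
\[ \sigma = \frac{p_0\, q_-}{p_0-q_-}
 = \frac{2\cdot\frac{2n}{n+2}}{\,2-\frac{2n}{n+2}\,}
 = \frac{\frac{4n}{n+2}}{\frac{4}{n+2}} = n, \]
where we used \eqref{eqn:sigma-bound} with $p_0=2$ and $q_-$ replaced by its upper bound $\frac{2n}{n+2}$.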
\begin{remark} Bongioanni {\em et al.}~\cite{MR2720705} introduced a class of weights that generalize the Muckenhoupt $A_p$ weights and are the appropriate class for studying weighted norm inequalities for the Riesz transforms related to Schr\"odinger operators which in many cases satisfy limited range inequalities. They also showed that the theory of extrapolation could be extended to these weight classes~\cite{MR3042701}. It would be of interest to determine if their results could be extended to the appropriate scale of weighted variable Lebesgue spaces. \end{remark}
\section{The General Approach to Extrapolation} \label{section:general}
In this section we give a broad overview of the way in which we prove each of our extrapolation theorems. We have chosen to organize the arguments in a way which does not yield the most elegant proof but which does make clearer the process by which we found the proof. This discussion should be seen as a complement to the overview of extrapolation given in~\cite[Chapter~2]{cruz-martell-perezBook}; we believe that it will be useful for attempts to prove extrapolation theorems in other contexts.
All of our proofs use five basic tools: dilation, duality, H\"older's inequality, reverse factorization and the Rubio de Francia algorithm. By dilation we mean the property that for any exponent ${p(\cdot)}$ and any $s>0$,
$\|f\|_{p(\cdot)}^s = \||f|^s\|_{{p(\cdot)}/s}$. For constant exponents this is trivial, and even for general exponent functions it is an immediate consequence of the definition~\eqref{eqn:defn-norm}. By duality (see \cite[Section~2.8]{cruz-fiorenza-book}) we have that given $f\in
L^{p(\cdot)}$, there exists $h\in L^{p'(\cdot)}$, $\|h\|_{p'(\cdot)}=1$, such that
\[ \|f\|_{p(\cdot)} \leq C\int_{{\mathbb R}^n} f(x)h(x)\,dx; \]
conversely, by H\"older's inequality \cite[Section~2.4]{cruz-fiorenza-book}, if $f\in L^{p(\cdot)}$ and $h\in L^{p'(\cdot)}$, then
\[ \int_{{\mathbb R}^n} |f(x)h(x)|\,dx \leq C\|f\|_{p(\cdot)}\|h\|_{p'(\cdot)}. \]
(In both cases the constant depends only on ${p(\cdot)}$.) To construct the weight $w\in A_{p_0}$ needed to apply the hypothesis, we use reverse factorization: the property that if $\mu_1,\,\mu_2 \in A_1$, then $w_0=\mu_1\mu_2^{1-p_0} \in A_{p_0}$. (See~\cite[Prop.~7.2]{duoandikoetxea01}.) Finally, to find the $A_1$ weights we apply the Rubio de Francia extrapolation algorithm in the following form.
\begin{prop}\label{prop:H_j}
Given ${r(\cdot)} \in \mathcal P$, suppose ${\mu}$ is a weight such that $M$ is
bounded on $L^{{r(\cdot)}}({\mu})$. For a positive function $h \in L_{loc}^1$,
with $Mh(x)< \infty$ almost everywhere, define:
\[ \mathcal{R} h(x) = \sum_{k = 0}^{\infty} \frac{M^{k}h(x)}{2^k \| M
\|_{L^{r(\cdot)}({\mu})}^k}. \]
Then: $(1)$ $h(x)\leq \mathcal{R} h(x)$;
$(2)$ $\|\mathcal{R} h\|_{{L^{{r(\cdot)}}(\mu)}} \leq 2\|h\|_{{L^{{r(\cdot)}}(\mu)}}$;
$(3)$ $\mathcal{R} h \in A_1$, with $[\mathcal{R} h]_{A_1} \leq 2 \| M \|_{{L^{{r(\cdot)}}(\mu)}}$.
More generally, for fixed constants $\alpha > 0$ and $\beta \in \mathbb R$, and another weight $w$, define the operator $H$ by
\[ Hh = \mathcal{R} (h^{\alpha} w^{\beta})^{1/\alpha} w^{-\beta/\alpha}. \]
Then: $(1)$ $h(x) \leq Hh(x)$;
$(2)$ let $v = w^{\beta/\alpha} \mu^{1/\alpha}$; then $H$ is bounded on $L^{\alpha {r(\cdot)}}(v)$, with $\|Hh\|_{L^{\alpha {r(\cdot)}}({v})} \leq
2\|h\|_{L^{\alpha {r(\cdot)}}({v})}$; $(3)$ $(Hh)^{\alpha} w^{\beta} \in A_1$, with $[(Hh)^{\alpha}
w^{\beta}]_{A_1} \leq 2 \| M \|_{L^{{r(\cdot)}}({\mu})}$. \end{prop}
\begin{proof}
The proof is straightforward and essentially the same as in the
constant exponent case
(see~\cite[Chapter~2]{cruz-martell-perezBook}): property (1) for $\mathcal{R}$
is immediate; property (2) follows from our assumption that $M$ is
bounded; and property (3) follows from the fact that $M$ is
sublinear. The properties of $H$ are immediate consequences of
dilation and
those for~$\mathcal{R}$. \end{proof}
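For the reader's convenience we sketch the computation behind property $(3)$ for $\mathcal{R}$: since $M$ is (countably) sublinear,
\[ M(\mathcal{R} h)
 \leq \sum_{k=0}^{\infty} \frac{M^{k+1}h}{2^{k}\,\|M\|^{k}_{L^{r(\cdot)}(\mu)}}
 = 2\,\|M\|_{L^{r(\cdot)}(\mu)} \sum_{k=0}^{\infty} \frac{M^{k+1}h}{2^{k+1}\,\|M\|^{k+1}_{L^{r(\cdot)}(\mu)}}
 \leq 2\,\|M\|_{L^{r(\cdot)}(\mu)}\, \mathcal{R} h, \]
which is exactly the $A_1$ condition with constant $2\|M\|_{L^{r(\cdot)}(\mu)}$.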
To prove our extrapolation theorems we use these tools to reduce the quantity we want to estimate (e.g., the lefthand term in \eqref{result:diag-weightedvar}, \eqref{result:off-weightedvar}, \eqref{result:limited-var1}, \eqref{result:limited-var2}) to something we can apply our hypothesis to (e.g., a weighted integral in the form of the lefthand side of \eqref{hyp:diag-weightedvar}, \eqref{hyp:off-weightedvar}, \eqref{hyp:limited-var}). Let us use Theorem~\ref{thm:diag-weightedvar} as an example. We first fix a weight $w$ satisfying our hypotheses and a pair $(f,g)\in \mathcal{F}$. For technical reasons we introduce a new function $h_1$ that depends on both $f$ and $g$: intuitively, $h_1=g$, but we introduce a term involving $f$ so that we can prove that the integral corresponding to the lefthand side of the weighted norm inequality in the hypothesis is finite. We also define it to have uniformly bounded norm. We majorize it by an operator $H_1$ with constants $\alpha_1$ and $\beta_1$ to be determined. If we first apply dilation with an exponent $s>0$ and then duality, we get a function $h_2$, also with uniformly bounded norm, which we majorize by a second operator $H_2$ with constants $\alpha_2$ and $\beta_2$. We multiply and divide by $H_1^\gamma$, $\gamma>0$, and apply H\"older's inequality to get, for example,
\[ \|f\|_{L^{p(\cdot)}(w)}^s \leq \left(\int_{{\mathbb R}^n} f^{p_0} H_1^{-\gamma {(p_0/s)}} \,H_2 w^s\,dx\right)^{s/p_0} \left(\int_{{\mathbb R}^n} H_1^{\gamma(p_0/s)'} \,H_2
w^s\,dx\right)^{1/(p_0/s)'}. \]
Our goal is to show that the second integral is uniformly bounded, and the first is bounded by the righthand side of our desired conclusion. To do so we need to find appropriate values for the six undetermined parameters: $\alpha_j$, $\beta_j$, $1\leq j \leq 2$, $s$ and $\gamma$. These parameters are subject to the following constraints:
\begin{enumerate}
\item Since we know which (unweighted) variable Lebesgue space $h_2$
belongs to (e.g., $h_2 \in L^{(\frac{{p(\cdot)}}{s})'}$), we will assume that $H_2=\mathcal{R}_2 (h_2^{\alpha_2} w^{\beta_2})^{1/\alpha_2} w^{-\beta_2/\alpha_2}$
is bounded there too. We can then use
Proposition~\ref{prop:H_j} ``backwards'' (i.e., set $v=1$,
$(\frac{{p(\cdot)}}{s})'=\alpha_2 {r(\cdot)}$ and solve for $\mu$) to deduce that we need the
maximal operator $M$ bounded on $L^{({p(\cdot)}/s)'/\alpha_2}(w^{-\beta_2})$.
This gives constraints on $\alpha_2$ and $\beta_2$.
\item Similarly, we want $H_1=\mathcal{R}_1 (h_1^{\alpha_1}
w^{\beta_1})^{1/\alpha_1} w^{-\beta_1/\alpha_1}$ to be bounded on
the same space in which $h_1$ is contained, and again by
Proposition~\ref{prop:H_j} (taking $v=w$ and ${p(\cdot)}=\alpha_1{r(\cdot)}$) this
means that we need $M$ to be bounded on
$L^{{p(\cdot)}/\alpha_1}(w^{\alpha_1 - \beta_1})$. This gives constraints
on $\alpha_1$, $\beta_1$ and $\gamma$.
\item Lastly, to apply our hypothesis, we need $H_1^{-\gamma {(p_0/s)}} \,H_2
w^s$ to satisfy the $A_{p_0}$ condition. To apply reverse
factorization (since $H_1$ and $H_2$ both yield $A_1$ weights) we
get more constraints on all the parameters (in particular
on $s$). \end{enumerate}
If we combine all of these constraints we are able to find sufficient conditions on the exponent ${p(\cdot)}$ and the weight $w$ to get the desired conclusion.
In each of the proofs in Section~\ref{section:extrapol-proof} below, we follow this schema. Some of the parameters described above have their values determined, but others are still free. For our first three theorems we will prove a (seemingly) more general result, in the sense that we will show that the desired weighted norm inequality holds for a family of weight classes parameterized by $\beta_1$ (the constant from $H_1$) and $s$ (the constant that determines the dual space). We will get the stated result by choosing appropriate values for these parameters.
For Theorem~\ref{thm:diag-weightedvar} one can see the choice of the parameters as simply what is necessary to get the result that is the obvious analog of the classical Rubio de Francia extrapolation theorem. However, we will also show, in the special case of power weights, that our choice of parameters is in some sense optimal. The proof of off-diagonal extrapolation, Theorem~\ref{thm:off-weightedvar}, will follow the same pattern. However, the proof has some technical difficulties related to the variable Lebesgue space norm, and requires more care in choosing the parameters.
For both Theorems~\ref{thm:diag-weightedvar} and~\ref{thm:off-weightedvar}, the proofs would be simpler if we had simply fixed our parameters initially, without motivating our choices. Indeed, we admit that when we first proved each result we chose our parameters in an {\em ad hoc} fashion, justifying our choices by the fact that we got the desired outcome. However, in proving limited range extrapolation, Theorem~\ref{thm:limited-var}, we discovered that the ``right'' parameters were not obvious: none of our initial choices led to a meaningful result, let alone one analogous to the constant exponent case. Ultimately we used the approach outlined above to discover what was actually going on. We have chosen to retain it here since it both illuminates our final result and makes clear why the constant exponent theorem does not immediately generalize to the variable space setting. To help the reader understand our approach, we have written the previous two proofs in this more general fashion as well.
Finally, extrapolation with $A_\infty$ and $A_1$ weights, Theorems~\ref{thm:Ainfty-extrapolvar} and~\ref{thm:A1var}, requires some minor modification to our general approach; we will make these clear in the course of the proofs.
\section{Proof of Theorems} \label{section:extrapol-proof}
In this section we give the proofs of all the results in Section~\ref{section:main-theorems}.
\subsection*{Proof of Theorem \ref{thm:diag-weightedvar}}
When $p_0=1$, Theorem \ref{thm:diag-weightedvar} is a special case of Theorem~\ref{thm:A1var}, so here we will assume $p_0>1$. We will prove the following proposition.
\begin{prop} \label{prop-diag} Suppose \eqref{hyp:diag-weightedvar} holds for some $p_0>1$. Fix ${p(\cdot)} \in \mathcal P$, $\beta_1 \in \mathbb R$ and choose any $s$ such that
\begin{align}\label{eqn:s-diagonal}
\max\big(0,p_0 - p_-(p_0 - 1)\big) < s < \min(p_-, p_0).
\end{align}
Let $\alpha_1 = \frac{p_0 - s}{p_0 - 1}$ and $\beta_2 = s - \beta_1(1 -
p_0)$. If $M$ is bounded on $L^{{p(\cdot)}/\alpha_1}(w^{\alpha_1 -
\beta_1})$ and $L^{({p(\cdot)}/s)'} (w^{-\beta_2})$, then $\| fw
\|_{{p(\cdot)}} \leq C \| gw \|_{{p(\cdot)}}$.
\end{prop}
The constant $s$ comes from duality and the constants $\alpha_j$ and
$\beta_j$ are from using
Proposition~\ref{prop:H_j} to define $H_1$ and $H_2$; the values and
constraints are the only ones which arise in applying the method
outlined in Section~\ref{section:general}.
To prove Theorem~\ref{thm:diag-weightedvar} it is enough to take $s=1$ and $\beta_1=0$. Then \eqref{eqn:s-diagonal} holds (since $p_0>1$) and the conditions on the maximal operator reduce to saying that $M$ is bounded on $L^{p(\cdot)}(w)$ and $L^{p'(\cdot)}(w^{-1})$: that is, that $({p(\cdot)},w)$ is an $M$-pair. We will consider other choices of parameters in Remark~\ref{remark:optimal} below.
\begin{proof}
Let $(f, g) \in \mathcal{F}$ with $\|f\|_{L^{p(\cdot)}(w)}<\infty$. Without loss of generality we may assume $\|f\|_{L^{p(\cdot)}(w)}>0$ and
$\|g\|_{L^{p(\cdot)}(w)}<\infty$ since otherwise there is nothing to prove. We may also assume $\|g\|_{L^{p(\cdot)}(w)}>0$: otherwise, $g(x)=0$ almost everywhere, and so by our assumption~\eqref{hyp:diag-weightedvar} (perhaps via an approximation argument like the one in Section~\ref{section:applications}) we get that $f(x)=0$ a.e. Define
\[ h_1 = \frac{f}{\| f \|_{L^{p(\cdot)}(w)}} + \frac{g}{\| g \|_{L^{p(\cdot)}(w)}}. \]
Then $h_1 \in L^{{p(\cdot)}}(w)$ and $\|h_1\|_{L^{p(\cdot)}(w)}\leq 2$.
We will use Proposition~\ref{prop:H_j} to define the two operators $H_1$ and $H_2$,
\begin{equation} \label{eqn:H1-H2}
H_1 = \mathcal{R}_1(h_1^{\alpha_1} w^{\beta_1})^{1/\alpha_1}w^{-\beta_1/\alpha_1}, \qquad
H_2 = \mathcal{R}_2(h_2^{\alpha_2} w^{\beta_2})^{1/\alpha_2}w^{{-}\beta_2/\alpha_2}, \end{equation}
where $h_2$ will be fixed momentarily. Fix $s$, $0<s< \min(p_0,p_-)$. By dilation, duality and H\"older's inequality, there exists $h_2 \in L^{({p(\cdot)}/s)'}$, $\|h_2\|_{({p(\cdot)}/s)'}=1$, such that for any $\gamma > 0$,
\begin{align} \label{eqn:first-reduction}
\| fw \|_{{p(\cdot)}}^s
&\leq C\int_{{\mathbb R}^n} f^s w^s h_2 \,dx \leq C\int_{{\mathbb R}^n} f^s H_1^{\gamma} H_1^{-\gamma} H_2 w^s \,dx \\ \notag
&\leq C\left( \int_{{\mathbb R}^n} f^{p_0} H_1^{-\gamma (p_0/s)} H_2 w^s \,dx
\right)^{s/p_0} \left( \int_{{\mathbb R}^n} H_1^{\gamma (p_0/s)'} H_2 w^s \,dx \right)^{1/(p_0/s)'} \\ \notag & = I_1^{s/p_0} I_2^{1/(p_0/s)'}.
\end{align}
We will first find assumptions that let us show that $I_2$ is uniformly bounded. Since $h_1\in
L^{{p(\cdot)}}(w)$ and $h_2 \in L^{({p(\cdot)}/s)'}$, we must have that $H_1$ and
$H_2$ are bounded on these spaces. To get the norm of $H_2$ in
$L^{({p(\cdot)}/s)'}$ we apply H\"older's inequality with exponent
${p(\cdot)}/s$ to get
\[ I_2 \leq C\| H_1^{\gamma (p_0/s)'} w^s \|_{{p(\cdot)}/s} \| H_2 \|_{({p(\cdot)}/s)'}. \]
To use our assumption that $H_1$ is bounded on $L^{{p(\cdot)}}(w)$ we need to fix $\gamma = \frac{s}{(p_0/s)'}$. Then by dilation and the properties of $H_1$ and $H_2$ in Proposition~\ref{prop:H_j} we have that
\begin{gather*}
\| H_1^{\gamma (p_0/s)'} w^s \|_{{p(\cdot)}/s} =
\|H_1 w \|_{{p(\cdot)}}^s \leq 2^s\| h_1 w \|_{{p(\cdot)}}^s \leq 4^s \\ \intertext{and}
\| H_2 \|_{({p(\cdot)}/s)'} \leq 2\| h_2 \|_{({p(\cdot)}/s)'} = 2. \end{gather*}
For $H_1$ and $H_2$ to be bounded on these spaces, by Proposition \ref{prop:H_j}, we must have that the maximal operator satisfies
\begin{equation} \label{eqn:M-constraints} M \text{ bounded on } L^{{p(\cdot)}/\alpha_1} (w^{\alpha_1 - \beta_1}) \text{
and } L^{({p(\cdot)}/s)'/\alpha_2} (w^{-\beta_2}). \end{equation} A necessary condition for this is that $p_-/\alpha_1>1$ and $[({p(\cdot)}/s)'/\alpha_2]_->1$, or equivalently,
\[ p_- > \alpha_1, \qquad (p_+/s)' > \alpha_2. \]
We must now estimate $I_1$; with our choice of $\gamma$ it can be written as
\[ I_1 = \int_{{\mathbb R}^n} f^{p_0} H_1^{s - p_0} H_2 w^s \,dx. \]
In order to apply \eqref{hyp:diag-weightedvar}, we must show that $I_1$ is finite. Since $h_1 \leq H_1$, by H\"{o}lder's inequality
\begin{multline*}
I_1
\leq \int_{{\mathbb R}^n} f^{p_0} \left( \frac{f}{\| fw \|_{{p(\cdot)}}} \right)^{{s} -
p_0} H_2 w^{{s}} \,dx \\
=\| fw \|_{{p(\cdot)}}^{p_0 - {s}} \int_{{\mathbb R}^n} f^{{s}} w^{{s}} H_2 \,dx
\leq \| fw \|_{{p(\cdot)}}^{p_0 - {s}} \| fw \|_{{p(\cdot)}}^s \| H_2 \|_{{({p(\cdot)}/s)'}} < \infty. \end{multline*}
Suppose for the moment that $w_0 = H_1^{s - p_0} H_2 w^s \in A_{p_0}$; then we can use \eqref{hyp:diag-weightedvar} to estimate $I_1$. Again since $h_1 \leq H_1$ and by H\"{o}lder's inequality,
\begin{multline*}
I_1 \leq C\int_{{\mathbb R}^n} g^{p_0} H_1^{{s} - p_0} H_2 w^{{s}} \,dx
\leq C\int_{{\mathbb R}^n} g^{p_0} \left( \frac{g}{\| gw \|_{{p(\cdot)}}} \right)^{{s} - p_0} H_2 w^{{s}} \,dx \\
= C\| gw \|_{{p(\cdot)}}^{p_0 - {s}} \int_{{\mathbb R}^n} g^{{s}} H_2 w^{{s}} \,dx
\leq C\| gw \|_{{p(\cdot)}}^{p_0 - {s}} \| gw \|_{{p(\cdot)}}^{{s}} \| H_2 \|_{{({p(\cdot)}/s)'}}
\leq C \| gw \|_{{p(\cdot)}}^{p_0}. \end{multline*}
If we combine this with the previous inequalities we get the desired norm inequality.
To complete the proof we must determine constraints on the parameters so that $H_1^{s - p_0} H_2 w^s \in A_{p_0}$. By reverse factorization and Proposition~\ref{prop:H_j} we need to fix our parameters so that
\[ H_1^{s - p_0} H_2 w^s = \left[ H_1^{\frac{p_0 - s}{p_0 - 1}}
w^{\beta_1} \right]^{1 - p_0} H_2 w^{s - \beta_1 (1 - p_0)} = \left[ H_1^{\alpha_1}w^{\beta_1} \right]^{1-p_0} H_2^{\alpha_2}w^{\beta_2}. \]
Equating the exponents we get that
\begin{equation} \label{eqn:final-constraints}
\alpha_1 = \frac{p_0 - s}{p_0 - 1}, \quad \beta_1 \in \mathbb R, \quad \alpha_2 = 1, \quad \beta_2 = s - \beta_1 (1 - p_0). \end{equation}
(In other words, there is no constraint on $\beta_1$.) Since above we assumed $s<p_0$, we have $\alpha_1>0$ as required in Proposition~\ref{prop:H_j}. Above, we required that $p_->\alpha_1$; combining this with the new constraint we have that $s > p_0 - p_-(p_0-1)$. With $\alpha_2=1$, the second restriction from above, that $(p_+/s)'>\alpha_2$, always holds.
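For completeness, note that since $p_0>1$ the condition $p_->\alpha_1$ rearranges as
\[ p_- > \frac{p_0-s}{p_0-1}
\iff p_-(p_0-1) > p_0-s
\iff s > p_0-p_-(p_0-1), \]
which is the lower bound in \eqref{eqn:s-diagonal}.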
To summarize: we have shown that given a constant $s$ such that~\eqref{eqn:s-diagonal} holds, and constants $\alpha_j,\,\beta_j$ as in~\eqref{eqn:final-constraints}, and if the maximal operator satisfies~\eqref{eqn:M-constraints}, then the desired weighted norm inequality holds. This completes the proof. \end{proof}
\begin{remark} \label{remark:optimal} As we noted above, if $s=1$
and $\beta_1=0$, then we get a result analogous to the
classical extrapolation theorem. This is enough to motivate our
choice of these parameters. But in some sense this choice is also
optimal.
To see this for $\beta_1$, we will construct
power weights that satisfy the boundedness conditions on the maximal
operator in~\eqref{eqn:M-constraints}. By
Remark~\ref{remark:power-wt} above, if ${p(\cdot)}\in LH$ and $0\leq a <
n/p_+$, then $w(x) = |x|^{-a} \in A_{p(\cdot)}$. Using this, we get from \eqref{eqn:M-constraints} that $w^{\alpha_1-\beta_1} \in A_{{p(\cdot)}/\alpha_1}$ and $w^{\beta_2}\in A_{{p(\cdot)}/s}$. Assume that
$\alpha_1\geq \beta_1$. Then the weight $|x|^{-a}$ satisfies these inclusions if
\[ a(\alpha_1-\beta_1)< \frac{\alpha_1 n}{p_+}, \quad a(s+\beta_1(p_0-1)) < \frac{s n}{p_+}. \]
Clearly, we get the same range for $a$, $a<n/p_+$, in each inequality if $\beta_1=0$, and if $\beta_1\neq 0$ one of the ranges will be smaller than this. Therefore, to maximize the range of exponents we should take $\beta_1=0$.
When $\beta_1=0$ we then have $w^{\alpha_1} \in A_{{p(\cdot)}/\alpha_1}$ and $w^s \in A_{{p(\cdot)}/s}$. If $\alpha_1=s$, then $s=1$ and we get the single condition $w\in A_{p(\cdot)}$. If $\alpha_1>s$, then $s<1$ and so $\alpha_1>1$ and by Proposition~\ref{prop:Ainfty-weaker} we get that $w^{\alpha_1} \in A_{{p(\cdot)}/\alpha_1}$ implies $w \in A_{{p(\cdot)}}$. If $\alpha_1<s$, then $s>1$ and we again get a condition stronger than $w\in A_{p(\cdot)}$. So we have that the choice $s=1$ is in some sense optimal. \end{remark}
\subsection*{Proof of Theorem \ref{thm:off-weightedvar}}
For the proof we need a few propositions. The first gives the relationship between Muckenhoupt $A_p$ weights and $A_{p,q}$ weights. It was first observed in~\cite{muckenhoupt-wheeden74}; the proof follows immediately from the definition.
\begin{prop} \label{prop:Apq-Ar} Given $p,\,q$, $1\leq p<q < \infty$, suppose $w \in A_{p,q}$. Then $w^q\in A_r$ when $r=1+q/p'$. \end{prop}
The next result is not strictly necessary to our proof, but we include it as it is the variable exponent version of Proposition~\ref{prop:Apq-Ar}.
\begin{prop} \label{prop:Apq-Ar-var} Given ${p(\cdot)},\, {q(\cdot)}\in \mathcal P$, $1 < p(x)
\leq q(x) < \infty$, suppose there exists $\sigma >1$ such that
$\frac{1}{p(x)} - \frac{1}{q(x)} = \frac{1}{\sigma'}$. Then $w \in
A_{{p(\cdot)}, {q(\cdot)}}$ if and only if $w^\sigma \in A_{{q(\cdot)}/\sigma}$. \end{prop}
\begin{proof} First note that, writing ${r(\cdot)} = {q(\cdot)}/\sigma$, we have $\sigma{r'(\cdot)} = {p'(\cdot)}$. Indeed, taking reciprocals, we have
\[ \frac{1}{\sigma{r'(\cdot)}} = \frac{1}{\sigma} - \frac{1}{\sigma{r(\cdot)}} = 1 - \frac{1}{\sigma'} - \frac{1}{{q(\cdot)}} = 1 - \frac{1}{{p(\cdot)}} + \frac{1}{{q(\cdot)}} - \frac{1}{{q(\cdot)}} = \frac{1}{{p'(\cdot)}}. \]
The equivalence then follows by dilation and the definition of $A_{r(\cdot)}$ and $A_{{p(\cdot)},{q(\cdot)}}$:
\begin{multline*} |B|^{-1}\| w^\sigma \chi_B \|_{{r(\cdot)}} \|
w^{-\sigma} \chi_B \|_{{r'(\cdot)}} \\
= |B|^{-1}\| w\chi_B \|_{{q(\cdot)}}^\sigma \| w^{-1} \chi_B \|_{{p'(\cdot)}}^\sigma
= \big(|B|^{\frac{1}{\sigma'}-1}\| w\chi_B \|_{{q(\cdot)}} \| w^{-1} \chi_B \|_{{p'(\cdot)}}\big)^\sigma. \end{multline*}
\end{proof}
To state the next result recall that given ${p(\cdot)}\in \mathcal P$, the modular is defined by
\[ \rho_{p(\cdot)}(f) = \rho(f) = \int_{{\mathbb R}^n} |f(x)|^{p(x)}\,dx. \]
In the case of constant exponents, the $L^p$ norm and the modular differ only by an exponent: $\rho_p(f) = \|f\|_p^p$. In the variable Lebesgue spaces their relationship is more subtle, as the next result shows. For a proof see~\cite[Prop.~2.21, Cor.~2.23]{cruz-fiorenza-book}.
\begin{prop}\label{prop:mod-norm} Given ${p(\cdot)}\in \mathcal P$, suppose $p_+ < \infty$. Then:
\begin{enumerate}
\item $\| f \|_{p(\cdot)} = 1$ if and only if $\rho(f) = 1$;
\item if $\rho(f) \leq C$, then $\| f \|_{{p(\cdot)}} \leq \max (C^{1/p_-}, C^{1/p_+})$;
\item if $\| f \|_{{p(\cdot)}} \leq C$, then $\rho(f) \leq \max( C^{p_+}, C^{p_-})$.
\end{enumerate} \end{prop}
We can now prove Theorem \ref{thm:off-weightedvar}. As we noted above, when $\sigma=1$ Theorem~\ref{thm:off-weightedvar} reduces to Theorem~\ref{thm:diag-weightedvar}, so we will assume $\sigma>1$. The proof when $p_0=1$ is closer to that of Theorem~\ref{thm:A1var}, and so we defer this case until after the proof of that theorem. Here we will assume that $p_0>1$. We will actually prove the following more general proposition.
\begin{prop} \label{prop-offdiag} Let $p_0,\,q_0,\,\sigma$ and exponents ${p(\cdot)},\,{q(\cdot)}$ be as in the statement of Theorem \ref{thm:off-weightedvar}. Fix $\beta_1 \in \mathbb R$ and choose any $s$ such that
\begin{equation}\label{eqn:s-off}
q_0-q_-\left(\frac{q_0}{\sigma}-1\right) < s < \min(q_0, q_-).
\end{equation}
Let $r_1 = q_0/\sigma$, and define
\[ \alpha_1 = \frac{q_0 - s}{r_1 - 1} \qquad \text{and} \qquad \beta_2 = s - \beta_1 (1 - r_1). \]
Then if $w$ is a weight such
that $M$ is bounded on $L^{{q(\cdot)}/\alpha_1}(w^{\alpha_1 - \beta_1})$ and
$L^{({q(\cdot)}/s)'} (w^{-\beta_2})$, we have that $\|fw\|_{q(\cdot)} \leq
C\|gw\|_{p(\cdot)}$.
\end{prop}
To prove Theorem \ref{thm:off-weightedvar}, we
take $\beta_1 = 0$ and $s = \sigma$. Since
\[ 1 - \frac{1}{\sigma}
= \frac{1}{p_0} - \frac{1}{q_0} = \frac{1}{p_-}-\frac{1}{q_-}, \]
we have that the second inequality in \eqref{eqn:s-off} holds. The first inequality is equivalent to $\sigma^2-(q_0+q_-)\sigma +q_-q_0>0$, that is, to $(\sigma-q_0)(\sigma-q_-)>0$, which follows from the second inequality. The requirement on the weight $w$ reduces to
$M$ being bounded on $L^{{q(\cdot)}/\sigma}(w^\sigma)$ and
$L^{({q(\cdot)}/\sigma)'}(w^{-\sigma})$, or equivalently, $({q(\cdot)}/\sigma,
w^{\sigma})$ is an $M$-pair.
\begin{proof}
The proof follows an outline similar to that of
Theorem~\ref{thm:diag-weightedvar}; we will concentrate on details
that are different. Fix a pair $(f,g)\in \mathcal{F}$; as before we may
assume without loss of generality that $0< \|f\|_{L^{q(\cdot)}(w)},\,
\|g\|_{L^{p(\cdot)}(w)}<\infty$. Moreover, if $(f,g)$
satisfies~\eqref{result:off-weightedvar}, then so does $(\lambda f,
\lambda g)$ for any $\lambda>0$, so without loss of generality we
may assume that $\|g\|_{L^{p(\cdot)}(w)}=1$. Then by
Proposition~\ref{prop:mod-norm} it will suffice to prove that $\| fw
\|_{{q(\cdot)}} \leq C$.
Define
\[ h_1 = \frac{f}{\| fw \|_{{q(\cdot)}}} + g^{\frac{{p(\cdot)}}{{q(\cdot)}}} w^{\frac{{p(\cdot)}}{{q(\cdot)}} - 1}; \]
we claim that $\| h_1 w \|_{{q(\cdot)}} \leq C$. This follows from Proposition~\ref{prop:mod-norm}:
\[ \rho_{q(\cdot)}(h_1w) \leq 2^{q_+} \int_{{\mathbb R}^n}
\left(\frac{f(x)w(x)}{\|fw\|_{{q(\cdot)}}}\right)^{q(x)}\,dx + 2^{q_+} \int_{{\mathbb R}^n} \big( g(x) w(x) )^{p(x)}\,dx \leq 2^{q_++1}. \]
We again use Proposition~\ref{prop:H_j} to define two operators $H_1$ and $H_2$ as in~\eqref{eqn:H1-H2}. Fix $s$ as in~\eqref{eqn:s-off}; in particular, $0 < s < \min(q_0, q_-)$. Let $r_0 = q_0/s$. Then there exists $h_2 \in L^{({q(\cdot)}/s)'}$,
$\| h_2 \|_{({q(\cdot)}/s)'} = 1$, such that for any $\gamma>0$,
\begin{multline*}
\| fw \|_{{q(\cdot)}}^s
\leq C\int_{{\mathbb R}^n} f^s w^s h_2 \,dx \leq C\int_{{\mathbb R}^n} f^s H_1^{\gamma} H_1^{-\gamma} H_2 w^s \,dx \\
\leq C\left( \int_{{\mathbb R}^n} f^{q_0} H_1^{-\gamma (q_0/s)} H_2 w^s \,dx
\right)^{1/r_0} \left( \int_{{\mathbb R}^n} H_1^{\gamma r_0'} w^s H_2 \,dx \right)^{1/r_0'} = I_1^{1/r_0} \cdot I_2^{1/r_0'}. \end{multline*}
{We start by finding conditions to ensure that $I_2$ is uniformly
bounded. Since $h_1\in L^{{q(\cdot)}}(w)$ and $h_2\in L^{({q(\cdot)}/s)'}$, we
require $H_1$ and $H_2$ to be bounded on these
spaces. We apply H\"older's inequality with exponent ${q(\cdot)}/s$ to get
\[ I_2 \leq C \| H_1^{\gamma r_0'} w^s \|_{{q(\cdot)}/s} \| H_2
\|_{({q(\cdot)}/s)'}. \]
If we let $\gamma = s/r_0'$, then by dilation,
\[ \| H_1^{\gamma r_0'} w^s \|_{{q(\cdot)}/s} = \| H_1 w \|_{{q(\cdot)}}^s \leq 2^s
\| h_1 w \|_{{q(\cdot)}}^s \leq C, \qquad \| H_2 \|_{({q(\cdot)}/s)'} \leq 2\| h_2
\|_{({q(\cdot)}/s)'} = 2. \] }
For $H_1$ and $H_2$ to be bounded on these spaces, by Proposition~\ref{prop:H_j} we must have that the maximal operator satisfies
\begin{equation*} \label{eqn:max-constraint-off} M \text{ bounded on } L^{{q(\cdot)}/\alpha_1} (w^{\alpha_1 - \beta_1}) \text{
and } L^{({q(\cdot)}/s)'/\alpha_2} (w^{-\beta_2}). \end{equation*}
For these to hold we must have that
\begin{equation} \label{eqn:sub-constraint} q_->\alpha_1 \qquad \text{ and } \qquad (q_+/s)'>\alpha_2. \end{equation}
It remains to estimate $I_1$; with our value of $\gamma$ we now have that
\[ I_1 = \int_{\mathbb R^n} f^{q_0} H_1^{-q_0/r_0'} H_2 w^s dx. \]
In order to apply~\eqref{hyp:off-weightedvar} we need to show that $I_1$ is finite. However, this follows from H\"older's inequality and the above estimates for $H_1$ and $H_2$:
\begin{multline*}
I_1 \leq \| f \|_{L^{{q(\cdot)}}(w)}^{q_0} \int H_1^{q_0} H_1^{-q_0/r_0'}
H_2 w^{{s}} \, dx \\
= \| f \|_{L^{{q(\cdot)}}(w)}^{q_0} \int H_1^{{s}} H_2 w^{{s}}\, dx \leq
\| f \|_{L^{{q(\cdot)}}(w)}^{q_0} \| H_1^{{s}} w^{{s}} \|_{{q(\cdot)}/{{s}}} \| H_2
\|_{({q(\cdot)}/{s})'} < \infty.
\end{multline*}
To apply our hypothesis \eqref{hyp:off-weightedvar} we need the
weight $w_0 = (H_1^{-\gamma(q_0/s)} H_2 w^s )^{1/q_0}$ to be in $
A_{p_0,q_0}$, or equivalently by Proposition~\ref{prop:Apq-Ar},
$w_0^{q_0} = H_1^{-(q_0 - s)} H_2 w^s \in A_{r_1}$, where
\[ r_1 = 1+ \frac{q_0}{p_0'} = \frac{q_0}{\sigma}. \]
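Here the second equality is a direct consequence of the definition of $\sigma$: since $\frac{1}{\sigma'} = \frac{1}{p_0} - \frac{1}{q_0}$,
\[ \frac{q_0}{\sigma} = q_0\left(1 - \frac{1}{p_0} + \frac{1}{q_0}\right) = \frac{q_0}{p_0'} + 1 = r_1. \]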
To apply reverse factorization
we write
\[ w_0^{q_0} = \left( H_1^{\frac{q_0 - s}{r_1 - 1}} w^{\beta_1} \right)^{1 - r_1} H_2 w^{s - \beta_1 (1 - r_1)}. \]
By Proposition~\ref{prop:H_j} this gives the following constraints on $\alpha_j, \beta_j$:
\[ \alpha_1 = \frac{q_0 - s}{\frac{q_0}{\sigma} - 1} , \quad \beta_1 \in \mathbb R, \quad \alpha_2 = 1, \quad \beta_2 = s - \beta_1 (1 - q_0/\sigma). \]
If we combine these with the constraints in~\eqref{eqn:sub-constraint} we see that the second one there always holds and the first one holds if
\[ s > q_0 -q_-\left(\frac{q_0}{\sigma}-1\right). \]
We can now apply \eqref{hyp:off-weightedvar}: by the definition of $h_1$ and by H\"{o}lder's inequality with respect to the undetermined exponent $\alpha(\cdot)$, we get
\begin{align*}
I_1^{1/q_0} & \leq C\left( \int_{{\mathbb R}^n} g^{p_0} \big[ H_1^{-q_0/r_0'} w^{{s}} H_2 \big]^{p_0/q_0} \,dx \right)^{1/p_0} \\ &\leq C\left( \int_{{\mathbb R}^n} \left( h_1^{\frac{{q(\cdot)}}{{p(\cdot)}}} w^{\frac{{q(\cdot)}}{{p(\cdot)}} - 1} \right)^{p_0} H_1^{-p_0/r_0'} H_2^{p_0/q_0} w^{{s} p_0/q_0} \,dx\right)^{1/p_0} \\
&\leq C\bigg(\int_{{\mathbb R}^n} H_1^{p_0 (\frac{{q(\cdot)}}{{p(\cdot)}} - \frac{1}{r_0'})} H_2^{p_0/q_0} w^{p_0 (\frac{{s}}{q_0} + \frac{{q(\cdot)}}{{p(\cdot)}} - 1)} \,dx\bigg)^{1/p_0} \\
&\leq C\big\| H_1^{p_0 (\frac{{q(\cdot)}}{{p(\cdot)}} - \frac{1}{r_0'})} w^{p_0
(\frac{{q(\cdot)}}{{p(\cdot)}} - \frac{1}{r_0'})} \big\|_{\alpha'(\cdot)}^{1/p_0}
\| H_2^{p_0/q_0} \|_{\alpha(\cdot)}^{1/p_0} \\ & = CJ_1^{1/p_0} J_2^{1/p_0}.
\end{align*}
If we let $\alpha(\cdot) = \frac{q_0 ({q(\cdot)}/s)'}{p_0}$, then by dilation $J_2$ is uniformly bounded. To show that $J_1$ is uniformly bounded we first note that
\[ p_0 \left( \frac{{q(\cdot)}}{{p(\cdot)}} - \frac{1}{r_0'} \right) \alpha'(\cdot) = {q(\cdot)}. \]
(This is given without proof in the constant case in~\cite[Section~3.5]{cruz-martell-perezBook}. It follows by a tedious but straightforward computation. Though $r_0$ depends on $s$, the argument only uses the fact that $\frac{1}{{p(\cdot)}} - \frac{1}{{q(\cdot)}} = \frac{1}{p_0} - \frac{1}{q_0}$, and does not depend on the value of $s$.) Given this, then
\[ \rho_{\alpha'(\cdot)} \big(H_1^{p_0 (\frac{{q(\cdot)}}{{p(\cdot)}} - \frac{1}{r_0'})} w^{p_0 (\frac{{q(\cdot)}}{{p(\cdot)}} - \frac{1}{r_0'})}\big) = \int_{{\mathbb R}^n} H_1^{{q(\cdot)}} w^{{q(\cdot)}} \,dx = \rho_{{q(\cdot)}} (H_1 w). \]
If we apply Proposition~\ref{prop:mod-norm} twice, since
$\|H_1\|_{L^{q(\cdot)}(w)}\leq 2 \|h_1\|_{L^{q(\cdot)}(w)}$ is uniformly bounded, $\rho_{{q(\cdot)}} (H_1 w)$ is as well, and hence $J_1$ is uniformly bounded. This completes the proof. \end{proof}
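The ``tedious but straightforward computation'' invoked in the proof above can be sketched as follows. Dividing the identity $p_0 \left( \frac{{q(\cdot)}}{{p(\cdot)}} - \frac{1}{r_0'} \right) \alpha'(\cdot) = {q(\cdot)}$ by ${q(\cdot)}\,\alpha'(\cdot)$, it suffices to check that
\[ \frac{p_0}{{q(\cdot)}}\left(\frac{{q(\cdot)}}{{p(\cdot)}} - \frac{1}{r_0'}\right) = \frac{1}{\alpha'(\cdot)}. \]
Since $\frac{1}{r_0'} = 1 - \frac{s}{q_0}$ and, by hypothesis, $\frac{p_0}{{p(\cdot)}} - \frac{p_0}{{q(\cdot)}} = 1 - \frac{p_0}{q_0}$, the left-hand side equals
\[ \frac{p_0}{{p(\cdot)}} - \frac{p_0}{{q(\cdot)}} + \frac{p_0 s}{q_0 {q(\cdot)}}
= 1 - \frac{p_0}{q_0}\left(1 - \frac{s}{{q(\cdot)}}\right)
= 1 - \frac{p_0}{q_0 ({q(\cdot)}/s)'}
= 1 - \frac{1}{\alpha(\cdot)}
= \frac{1}{\alpha'(\cdot)}. \]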
\subsection*{Proof of Theorem \ref{thm:limited-var}}
For the proof we will need a lemma due to Johnson and Neugebauer~\cite{johnson-neugebauer91}.
\begin{lemma} \label{lemma:jn} Given a weight $w$, then $w\in A_p\cap RH_s$, $1<p,\,s<\infty$, if and only if $w^s \in A_{{\tau}}$, where ${\tau}=s(p-1)+1$. \end{lemma}
We again prove a more general result.
\begin{prop} \label{prop-limited} Suppose the hypotheses of Theorem~\ref{thm:limited-var} hold, and let ${p(\cdot)} \in LH$ with $q_- < p_- \leq p_+ < q_+$. Then there exist $p_*$, $q_-<p_*<q_+$, and $s > 0$ such that
\begin{equation}\label{eqn:s-limited}
\max\left( p_- - p_* \left( \frac{p_-}{q_-} - 1 \right), \frac{p_* p_+}{q_+} \right) < s < \min(p_-, p_*). \end{equation}
Define
\[ \tau_0 = \left(\frac{q_+}{p_*}\right)' \left(\frac{p_*}{q_-} - 1\right) + 1. \]
Let $\beta_1 \in \mathbb R$ be any constant and define
\[ \alpha_1 = q_- \left( \frac{p_* - s}{p_* - q_-} \right), \quad \alpha_2 = \left(\frac{q_+}{p_*}\right)', \quad \beta_2 = s\left( \frac{q_+}{p_*} \right)' - \beta_1 (1 - \tau_0). \] Then for any weight $w$ such that
\begin{equation}\label{pair:limited}
w^{\alpha_1 - \beta_1} \in A_{{p(\cdot)}/\alpha_1} \qquad \text{and} \qquad w^{-\beta_2} \in A_{({p(\cdot)}/s)'/\alpha_2},
\end{equation}
we have that
\[ \| f \|_{L^{{p(\cdot)}}(w)} \leq C \| g \|_{L^{{p(\cdot)}}(w)}, \qquad (f, g) \in \mathcal{F}. \]
\end{prop}
\begin{remark} It will follow from the proof that the values of $p_*$ and $s$ are not unique. We will also see that the $A_{p(\cdot)}$ conditions in~\eqref{pair:limited} are well defined. \end{remark}
To prove Theorem \ref{thm:limited-var}, note first that if we take
$w = 1$, then \eqref{pair:limited} holds since ${p(\cdot)} \in LH$ and $p_->1$ imply that ${p(\cdot)}$ has the $K_0$ condition (see Corollary~4.50 in \cite{dcu-af-crm}), and so we get the unweighted inequality \eqref{result:limited-var1}.
To prove the weighted norm inequality \eqref{result:limited-var2}, let $p_*$ and $s$ be any values satisfying \eqref{eqn:s-limited}. We want $\beta_2 = 0$ so that the second condition in \eqref{pair:limited} always holds. This is the case if we let
\[ \beta_1 = \frac{s(q_+/p_*)'}{1 - \tau_0} = \frac{sq_-}{q_- - p_*}
= -\frac{s\sigma}{p_*} < 0, \]
where $\sigma = \frac{p_* q_-}{p_* - q_-}$. Then $\alpha_1 -
\beta_1 = \sigma$, and if we let $c = 1 - \frac{s}{p_*}$,
the first condition in~\eqref{pair:limited} reduces to
$w^{\sigma} \in A_{\frac{{p(\cdot)}}{c\sigma}}$.
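Both identities follow by direct substitution:
\[ \alpha_1 - \beta_1 = \frac{q_-(p_* - s)}{p_* - q_-} + \frac{s q_-}{p_* - q_-} = \frac{p_* q_-}{p_* - q_-} = \sigma, \qquad
\alpha_1 = \frac{q_-(p_* - s)}{p_* - q_-} = \sigma\left(1 - \frac{s}{p_*}\right) = c\sigma. \]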
\begin{proof} Fix an exponent ${p(\cdot)}\in LH$, $q_-<p_-\leq p_+<q_+$, and fix a pair $(f, g)\in \mathcal{F}$. As before, without loss of generality we may assume that
$0<\|f\|_{L^{{p(\cdot)}}{(w)}},\,
\|g\|_{L^{{p(\cdot)}}{(w)}} < \infty$. Define $h_1 \in L^{{p(\cdot)}}{(w)}$,
$\|h_1\|_{L^{{p(\cdot)}}{(w)}} \leq 2$, by
\[ h_1 = \frac{f}{\| f \|_{L^{{p(\cdot)}}{(w)}}} + \frac{g}{\| g \|_{L^{{p(\cdot)}}{(w)}}}. \]
We will use Proposition~\ref{prop:H_j} to define two operators $H_1$
and $H_2$ as in~\eqref{eqn:H1-H2}. By dilation and duality, there exists $h_2 \in L^{({p(\cdot)}/s)'}$, $\|h_2\|_ {({p(\cdot)}/s)'}=1$, such that
\begin{multline*}
\| f {w} \|_{{p(\cdot)}}^s
= \| f^s {w^s} \|_{{p(\cdot)}/s} \leq C\int_{{\mathbb R}^n} f(x)^s h_2 (x) {w^s} \,dx \leq C\int_{{\mathbb R}^n} f(x)^s H_1^{-\gamma} H_1^{\gamma} H_2 {w^s} \,dx \\ \leq C\left( \int_{{\mathbb R}^n} f^{p_0} H_1^{-\gamma r_0} H_2 {w^s}
\,dx \right)^{1/r_0} \left( \int_{{\mathbb R}^n} H_1^{\gamma r_0'} H_2 {w^s} \,dx \right)^{1/r_0'} = CI_1^{1/r_0} I_2^{1/r_0'}, \end{multline*}
where $r_0=p_0/s$.
We first show that $I_2$ is uniformly bounded. As in the proof of Theorem~\ref{thm:diag-weightedvar}, we want $H_1$ to be bounded on $L^{{p(\cdot)}}(w)$ and $H_2$ to be bounded on $L^{({p(\cdot)}/s)'}$. Then by H\"older's inequality and dilation,
\begin{multline*}
I_2 \leq C\|H_1^{\gamma r_0'} w^s\|_{{p(\cdot)}/s}\|H_2\|_{({p(\cdot)}/s)'} \\
\leq C \|H_1 w\|_{\gamma r_0' {p(\cdot)}/s}^{\gamma r_0'} \|H_2\|_{({p(\cdot)}/s)'}
\leq C\|h_1w\|_{\gamma r_0' {p(\cdot)}/s}^{\gamma r_0'} \|h_2\|_{({p(\cdot)}/s)'}. \end{multline*}
The last term will be uniformly bounded if we let $\gamma= s/r_0'$. For $H_1$ and $H_2$ to be bounded on these spaces, by Proposition~\ref{prop:H_j} we must have that
\[ M \text{ bounded on } L^{{p(\cdot)}/\alpha_1}(w^{\alpha_1-\beta_1}) \text{ and } L^{({p(\cdot)}/s)'/\alpha_2}(w^{-\beta_2}). \]
Since ${p(\cdot)}\in LH$, this will be the case if
\begin{equation} \label{eqn:var-constraint} p_->\alpha_1, \qquad (p_+/s)'>\alpha_2. \end{equation}
To bound $I_1$, we want to apply our hypothesis~\eqref{hyp:limited-var}; to do so we need to show that it is finite. But by our assumptions on $H_1$ and $H_2$ and the definition of $h_1$, we have that
\begin{multline*}
I_1 = \int_{{\mathbb R}^n} f^{p_0} H_1^{-(p_0 - s)} H_2 {w^s} \,dx
\leq \int_{{\mathbb R}^n} (\| f {w} \|_{{p(\cdot)}} H_1)^{p_0} H_1^{-(p_0 - s)} H_2 {w^s} \,dx \\
= \| f {w} \|_{{p(\cdot)}}^{p_0} \int_{{\mathbb R}^n} H_1^s H_2 {w^s} \,dx
\leq C\| f {w} \|_{{p(\cdot)}}^{p_0} \| H_1 {w} \|_{{p(\cdot)}}^s \| H_2 \|_{({p(\cdot)}/s)'} < \infty. \end{multline*}
Assume for the moment that $w_0 = H_1^{-(p_0 - s)} H_2 {w^s} \in A_{p_0/q_-} \cap RH_{(q_+/p_0)'}$. Then by~\eqref{hyp:limited-var} and arguing as we did in the previous inequality, we get that
\begin{multline*}
\int_{{\mathbb R}^n} f^{p_0} H_1^{-(p_0 - s)} H_2 {w^s} \,dx \\ \leq C\int_{{\mathbb R}^n} g^{p_0} H_1^{-(p_0 - s)} H_2 {w^s} \,dx
\leq C\| g {w} \|_{{p(\cdot)}}^{p_0} \int_{{\mathbb R}^n} H_1^{p_0} H_1^{-(p_0 - s)} H_2 {w^s} \,dx
\leq C \| g {w} \|_{{p(\cdot)}}^{p_0}. \end{multline*}
If we combine this with the previous estimates we get the desired weighted norm inequality.
We can complete the proof if our various assumptions hold. However, as we will see, this may not be possible with our given value of $p_0$, and so we will introduce a new parameter $p_*$. We first consider the weight $w_0$. We want $w_0 = H_1^{-(p_0 - s)} H_2 {w^s}$ to be in $A_{p_0/q_-} \cap RH_{(q_+/p_0)'}$, which by Lemma~\ref{lemma:jn} is equivalent to $w_0^{(q_+/p_0)'} \in A_{{\tau_0}}$, where ${\tau_{0}} = \left( \frac{q_+}{p_0} \right)' \left( \frac{p_0}{q_-} - 1 \right) + 1$. To apply reverse factorization, we rewrite $w_0$ as
\[
w_0^{(q_+/p_0)'}
= \left[ H_1^{-(p_0 - s)} H_2 {w^s} \right]^{(q_+/p_0)'}
= \left[ H_1^{q_- \frac{p_0 - s}{p_0 - q_-}} {w^{\beta_1}} \right]^{1 - {\tau_0}} H_2^{(q_+/p_0)'} {w^{s(q_+ /p_0)' - \beta_1 (1 - {\tau_{0}})}}. \]
Therefore, by Proposition~\ref{prop:H_j} we must have that
\[ \alpha_1 = q_-\left(\frac{p_0 - s}{p_0 - q_-}\right), \quad \beta_1 \in \mathbb R, \quad \alpha_2 = \left(\frac{q_+}{p_0}\right)', \quad {\beta_2 = s\left(\frac{q_+}{p_0}\right)' - \beta_1 (1 - {\tau_{0}})}. \]
If we combine this with the first constraint in~\eqref{eqn:var-constraint} we see that we need
\[ \frac{p_-}{q_-} \left( \frac{p_0 - q_-}{p_0
- s} \right) > 1; \]
equivalently, we must have that
\[ s > p_- - p_0 \left( \frac{p_-}{q_-} - 1 \right) > 0. \]
Similarly, the second constraint in~\eqref{eqn:var-constraint} implies that we also need
\[ s > \frac{p_0 p_+}{q_+}. \]
However, it need not be the case that we can find such an $s$ that also satisfies $s<\min(p_-,p_0)$. We can overcome this problem by changing the value $p_0$. By limited range extrapolation in the constant exponent case, Theorem~\ref{thm:limited-const}, we have that our hypothesis~\eqref{hyp:limited-var} holds with $p_0$ replaced by any $p_*$, $q_-<p_*<q_+$ provided that $w_0\in A_{p_*/q_-} \cap RH_{(q_+/p_*)'}$.
We can, therefore, repeat the entire argument above with $p_0$ replaced by $p_*$ and we will get our desired conclusion if we can find $p_*$ and $s>0$ such that~\eqref{eqn:s-limited} holds. (The constants $\alpha_j,\,\beta_j,\,\tau_0$ are also redefined as in the statement of Proposition~\ref{prop-limited}.) This is equivalent to the following four inequalities being true:
\begin{align*} (1) & \; p_* > \frac{p_* p_+}{q_+}, \qquad & (3) &\; p_- > p_- - p_* \left( \frac{p_-}{q_-} - 1 \right), \\ (2) &\; p_* > p_- - p_* \left( \frac{p_-}{q_-} - 1 \right), \qquad & (4) & \; p_- > \frac{p_* p_+}{q_+}. \end{align*}
Inequalities (1) and (3) always hold. Inequality (2) is equivalent to $p_- \left( \frac{p_*}{q_-} \right) > p_-$ which is always true. Inequality (4) holds if $p_*$ is such that
\[ q_- < p_* < \frac{q_+}{p_+} p_-<q_+; \]
such a $p_*$ exists since $\frac{p_+}{p_-} < \frac{q_+}{q_-}$. Therefore, we can find the desired value of $p_*$ and $s$ and this completes the proof of Proposition~\ref{prop-limited}. \end{proof}
\begin{remark}\label{remark:lim-reduction}
The limited-range extrapolation theorem with constant exponents does
not follow from Theorem~\ref{thm:limited-var}. However, it does
follow from Proposition~\ref{prop-limited} by choosing a different
set of parameters. We need to prove that if we let
${p(\cdot)} = p$, $q_-<p<q_+$, then the norm inequality $\| fw \|_{p} \leq C \| gw
\|_{p}$ holds provided that the weight $w^p \in A_{p/q_-}\cap
RH_{(q_+/p)'}$, which by Lemma~\ref{lemma:jn} is equivalent to
$w^{p(q_+/p)'} \in
A_{\tau_p}$, where $\tau_p = (\frac{q_+}{p})'
(\frac{p}{q_-} - 1) + 1$. Restating this condition in terms of our variable
weight condition, we need that the norm inequality holds provided
$w$ satisfies
\begin{equation} \label{eqn:const-req} w^{p(q_+/p)'/\tau_p}
\in A_{\tau_p}^{var}. \end{equation}
(See the comments just before Proposition~\ref{prop:Ainfty-weaker} for this notation.) For the two
conditions in~\eqref{pair:limited} to reduce to this one
requirement, we must have that:
\begin{enumerate}
\item The first condition must be the same as
\eqref{eqn:const-req}. This is the case if $\alpha_1 - \beta_1
= p(q_+/p)'/\tau_p$, and $p/\alpha_1 = \tau_p$, or $\alpha_1 = p/\tau_p$ and $\beta_1 = \frac{p}{\tau_p}
\left( 1 - (q_+/p)' \right)$. Therefore, $s$ and $\beta_2$
must satisfy
\begin{align*}
s = \frac{p}{\tau_p} \left( 1 - \frac{p_*}{q_-} \right) + p_*, \qquad \beta_2 = s(q_+/p_*)' - \beta_1 (1 - \tau_{0}).
\end{align*}
\item The second condition must be the `dual' of
\eqref{eqn:const-req}: i.e., $w^{-p(q_+/p)'/\tau_p} \in
A_{\tau_p'}^{var}$. Thus we must have that
\[ \frac{(p/s)'}{\alpha_2} = \tau_p', \qquad \beta_2 = \frac{p}{\tau_p} (q_+/p)'. \]
\end{enumerate}
A lengthy but straightforward computation shows that these two pairs of
values for $s$ and $\beta_2$ are exactly the same.
Finally, we also need to show that $s$ satisfies \eqref{eqn:s-limited}: that is, with $p_- = p = p_+$, we must have
\[ \max\left( p - p_* \left( \frac{p}{q_-} - 1 \right), \frac{p_* p}{q_+} \right) < s < \min(p, p_*). \]
This actually follows from the above computations. First note that by the first condition in (1), we have $s<p_*$ since $p_*>q_-$. By the first condition in (2) we must have $p/s>1$ for $(p/s)'$ to be defined. To prove the lower inequalities, it is easier to look back at the proof to see where they come from. The first comes from the requirement that $p/\alpha_1>1$, which follows from the fact that in this case $p/\alpha_1=\tau_p>1$. The second comes from the requirement that $(p/s)'/\alpha_2>1$, which follows from the fact that this quantity is equal to $\tau_p'$. \end{remark}
\begin{remark}\label{remark:limited-cases}
The computations in the previous remark also show why our
extrapolation theorem is stated in a way that is quite different
from the constant exponent case. In our reduction we need to choose
the constants so that the two conditions on the weight in
\eqref{pair:limited} are actually the same: i.e.,
$\alpha_1 - \beta_1 = \beta_2$ and $({p(\cdot)}/\alpha_1)' =
({p(\cdot)}/s)'/\alpha_2$. But this last equality reduces to
\[ {p(\cdot)} = \frac{s\alpha_2-\alpha_1}{\alpha_2-1} = \frac{s(q_+ - q_-) + q_-(p_* - q_+)}{p_* - q_-},
\] and this can only hold if ${p(\cdot)} = p$ is a constant. However, in obtaining \eqref{result:limited-var2}, we did have two separate conditions from \eqref{pair:limited}, namely $w^{\sigma} \in A_{\frac{{p(\cdot)}}{c\sigma}}$ and $1 \in A_{({p(\cdot)}/s)'/\alpha_2}$, which always holds. It would be of interest to find a different version of Theorem~\ref{thm:limited-var} that did reduce immediately to the constant exponent theorem. \end{remark}
\subsection*{Proof of Corollary \ref{cor:limited-corollary}}
Given $\delta \in (0, 1]$ we can restate our hypothesis \eqref{hyp:limited-corollary} as follows:
\[ \int_{{\mathbb R}^n} f(x)^{2} w_0(x) \,dx \leq c \int_{{\mathbb R}^n} g(x)^{2} w_0(x) \,dx, \]
for all weights $w_0$ such that $w_0^{1/\delta} \in A_2$. By Lemma~\ref{lemma:jn} this is equivalent to $w_0 \in A_{2/q_-} \cap RH_{(q_+/2)'}$, where $q_- = \frac{2}{1 + \delta}$ and $q_+ = \frac{2}{1 - \delta}$. This is the hypothesis \eqref{hyp:limited-var} of Theorem~\ref{thm:limited-var}, and applying this theorem, we get {\eqref{result:limited-corollary1} and \eqref{result:limited-corollary2}} for all ${p(\cdot)}$ satisfying \eqref{cor:limited-pp}.
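To see where the values of $q_-$ and $q_+$ come from: by Lemma~\ref{lemma:jn}, $w_0^{1/\delta} \in A_2$ if and only if $w_0 \in A_p \cap RH_{1/\delta}$, where $\frac{1}{\delta}(p-1)+1 = 2$, that is, $p = 1+\delta$. Writing $p = 2/q_-$ gives $q_- = \frac{2}{1+\delta}$, and writing $\frac{1}{\delta} = (q_+/2)'$ gives
\[ \frac{q_+}{2} = \left(\frac{1}{\delta}\right)' = \frac{1}{1-\delta}, \qquad \text{so} \qquad q_+ = \frac{2}{1-\delta}. \]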
\subsection*{Proof of Theorem~\ref{thm:A1var} and
Theorem~\ref{thm:off-weightedvar} when $p_0=1$}
To prove Theorem~\ref{thm:A1var} we need to modify the general approach outlined in Section~\ref{section:general}. To see why, first consider the proof of Theorem~\ref{thm:diag-weightedvar}. If we take $p_0=1$, then the proof fails, because in order to apply H\"older's inequality we require $s<1$, but later we need the constraint $s>1$ for the maximal operator to be bounded on $L^{{p(\cdot)}/\alpha_1}(w^{\alpha_1-\beta_1})$. This suggests that we should not use H\"older's inequality and not introduce the operator $H_1$ (which leads to this condition on the boundedness of the maximal operator). We can still dualize if we take $s=1$, and this gives us the correct exponent to apply our hypothesis. We can then introduce the operator $H_2$, and argue as before to determine the appropriate values for $\alpha_2$ and $\beta_2$.
This same approach works for general $p_0$. Fix ${p(\cdot)}\in \mathcal P_0$,
$p_-\geq p_0$, and $(f,g)\in \mathcal{F}$. As before, we may assume without loss of generality that $0<\|f\|_{L^{p(\cdot)}(w)},\, \|g\|_{L^{p(\cdot)}(w)} <\infty$. We will use Proposition~\ref{prop:H_j} to define an operator $H_2=\mathcal{R}_2(h_2^{\alpha_2}
w^{\beta_2})^{1/\alpha_2}w^{{-}\beta_2/\alpha_2}$. By dilation and duality, there exists $h_2 \in L^{({p(\cdot)}/p_0)'}$, $\|h_2\|_{({p(\cdot)}/p_0)'}=1$, such that
\[ \|fw\|_{p(\cdot)}^{p_0} \leq C\int_{{\mathbb R}^n} f^{p_0} h_2 w^{p_0}\,dx \leq C\int_{{\mathbb R}^n} f^{p_0} H_2 w^{p_0}\,dx. \]
To apply our hypothesis~\eqref{eqn:A1hyp} we need the right-hand term to be bounded. Since $h_2\in L^{({p(\cdot)}/p_0)'}$, if we assume that $H_2$ is bounded on the same space, then by H\"older's inequality and dilation we have that
\[ \int_{{\mathbb R}^n} f^{p_0} H_2 w^{p_0}\,dx \leq
\|fw\|_{p(\cdot)}^{p_0}\|H_2\|_{({p(\cdot)}/p_0)'} \leq 2 \|fw\|_{p(\cdot)}^{p_0}\|h_2\|_{({p(\cdot)}/p_0)'} < \infty. \]
For $H_2$ to be so bounded, we need $M$ to be bounded on $L^{({p(\cdot)}/p_0)'/\alpha_2}(w^{-\beta_2})$. Furthermore, to apply our hypothesis we also need $H_2 w^{p_0}\in A_1$, so we must have that $\alpha_2=1$ and $\beta_2=p_0$.
Therefore, if $M$ is bounded on $L^{({p(\cdot)}/p_0)'}(w^{-p_0})$, we have that
\[ \int_{{\mathbb R}^n} f^{p_0} H_2 w^{p_0}\,dx \leq C \int_{{\mathbb R}^n} g^{p_0} H_2 w^{p_0}\,dx \leq C \|gw\|_{p(\cdot)}^{p_0}\|H_2\|_{({p(\cdot)}/p_0)'} \leq C \|gw\|_{p(\cdot)}^{p_0}. \]
This completes the proof.
\begin{remark} We note that in this endpoint case we do not have any flexibility in choosing our parameters: at each stage our choice is completely determined by the requirements of the proof. \end{remark}
The proof of Theorem~\ref{thm:off-weightedvar} when $p_0=1$ is nearly identical to the proof of Theorem~\ref{thm:A1var} and can be motivated by exactly the same analysis as we made of the proof of Theorem~\ref{thm:diag-weightedvar}. If we apply dilation and duality with $p_0$ replaced by $q_0$, we get
\[ \|fw\|_{q(\cdot)}^{q_0} \leq C\int_{{\mathbb R}^n} f^{q_0} H_2 w^{q_0}\,dx. \]
Checking the required conditions we see that we can apply our hypothesis if $H_2 w^{q_0} \in A_1$, which is equivalent to $H_2^{1/q_0} w \in A_{1,q_0}$, and this follows if the maximal operator is bounded on $L^{({q(\cdot)}/q_0)'}(w^{-q_0})$. The rest of the proof now continues exactly as before.
\subsection*{Proof of Theorem~\ref{thm:Ainfty-extrapolvar} and
Proposition~\ref{prop:Ainfty-weaker}}
We could prove Theorem~\ref{thm:Ainfty-extrapolvar} by an analysis similar to that used to prove Theorem~\ref{thm:A1var}. However, we can also derive it directly from this result using the connection between $A_1$ and $A_\infty$ extrapolation (cf.~\cite[Proposition~3.20]{cruz-martell-perezBook}). Fix ${p(\cdot)}$ and $s\leq p_-$ as in our hypotheses. Then by Theorem~\ref{thm:Ainfty-extrapol}, we have that \eqref{eqn:Ainfty-hyp} holds with $p_0$ replaced by $s$ and for any $w_0\in A_\infty$. In particular, we can take $w_0\in A_1$, and this gives us the hypothesis~\eqref{eqn:A1hyp} in Theorem~\ref{thm:A1var} with $p_0$ replaced by $s$. The desired conclusion now follows from this result.
Finally, we prove Proposition~\ref{prop:Ainfty-weaker}. Fix a ball $B$ and define the (constant) exponent function ${r(\cdot)}= \frac{1}{1-s}$. Since $\frac{1}{({p(\cdot)}/s)'} = 1 - \frac{s}{{p(\cdot)}}$ and $\frac{s}{{p'(\cdot)}} + \frac{1}{{r(\cdot)}} = s\left(1 - \frac{1}{{p(\cdot)}}\right) + (1-s) = 1 - \frac{s}{{p(\cdot)}}$, it is immediate that
\[ \frac{1}{({p(\cdot)}/s)'} = \frac{s}{{p'(\cdot)}} + \frac{1}{{r(\cdot)}}. \]
Therefore, by dilation and the generalized H\"older's inequality~\cite[Corollary~2.28]{cruz-fiorenza-book},
\begin{multline*}
|B|^{-1} \|w^s\chi_B\|_{{p(\cdot)}/s}\|w^{-s}\chi_B\|_{({p(\cdot)}/s)'} \leq
|B|^{-1}\|w\chi_B\|_{{p(\cdot)}}^s\|w^{-s}\chi_B\|_{{p'(\cdot)}/s}\|\chi_B\|_{r(\cdot)} \\
= |B|^{-1} \|w\chi_B\|_{{p(\cdot)}}^s\|w^{-1}\chi_B\|_{{p'(\cdot)}}^s|B|^{1-s} \leq [w]_{A_{p(\cdot)}}^s. \end{multline*}
Since this is true for all $B$, $w^s\in A_{{p(\cdot)}/s}$.
\end{document} |
\begin{document}
\title[Generalized Spikes]{Generalized spikes with circuits and cocircuits of different cardinalities}
\author[N.~Brettell]{Nick Brettell} \address{School of Mathematics and Statistics\\
Victoria University of Wellington\\ New Zealand} \email{[email protected]}
\author[K.~Grace]{Kevin Grace} \address{Department of Mathematics\\
Vanderbilt University\\ Nashville, Tennessee} \email{[email protected]}
\subjclass{05B35} \date{\today}
\begin{abstract}
We consider matroids with the property that every subset of the ground set of size $s$ is contained in a $2s$-element circuit and every subset of size $t$ is contained in a $2t$-element cocircuit. We say that such a matroid has the \emph{$(s,2s,t,2t)$-property}. A matroid is an \emph{$(s,t)$-spike} if there is a partition of the ground set into pairs such that the union of any $s$ pairs is a circuit and the union of any $t$ pairs is a cocircuit. Our main result is that all sufficiently large matroids with the $(s,2s,t,2t)$-property are $(s,t)$-spikes, generalizing a 2019 result that proved the case where $s=t$. We also present some properties of $(s,t)$-spikes. \end{abstract}
\maketitle
\section{Introduction}
For integers $s$, $u$, $t$, and $v$, with $u \ge s \ge 1$ and $v \ge t \ge 1$, a matroid~$M$ has the \emph{$(s,u,t,v)$-property} if every $s$-element subset of $E(M)$ is contained in a circuit of size~$u$, and every $t$-element subset of $E(M)$ is contained in a cocircuit of size~$v$. Matroids with this property appear regularly in the matroid theory literature: for example, wheels and whirls have the $(1,3,1,3)$-property, and (tipless) spikes have the $(2,4,2,4)$-property. Note that $M$ has the $(s,u,t,v)$-property if and only if $M^*$ has the $(t,v,s,u)$-property. Brettell, Campbell, Chun, Grace, and Whittle~\cite{bccgw2019} studied such matroids, and showed that if $u<2s$ or $v<2t$, then there are only finitely many matroids with the $(s,u,t,v)$-property~\cite[Theorem 3.3]{bccgw2019}. On the other hand, in the case that $s=t$ and $u=v=2t$, any sufficiently large matroid with the $(s,u,t,v)$-property is a member of a class of structured matroids referred to as \emph{$t$-spikes}. In particular, when $t=2$, this is the class typically known simply as \emph{(tipless) spikes}.
Our focus in this paper is also on the case where $u=2s$ and $v=2t$, but we drop the requirement that $s=t$. For positive integers $s$ and $t$, an \emph{$(s,t)$-spike} is a matroid on at least $2\max\{s,t\}$ elements whose ground set has a partition $(S_1,S_2,\ldots,S_n)$ into pairs such that the union of every set of $s$ pairs is a circuit and the union of every set of $t$ pairs is a cocircuit. The following is our main result:
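The definition above can be checked by brute force on a small representable example. The following sketch is purely illustrative and not taken from the paper (the $\mathrm{GF}(2)$ representation, the choice of four pairs, and all identifiers are our own): the columns $e_i$ and $e_i + (1,\ldots,1)$ over $\mathrm{GF}(2)$, for $1 \le i \le 4$, represent a rank-$4$ binary matroid on $8$ elements, and the script verifies that the union of any $2$ of the $4$ natural pairs is both a circuit and a cocircuit, so this matroid is a $(2,2)$-spike, that is, a tipless spike in the classical sense.

```python
from itertools import combinations

N = 4                       # number of pairs; the matroid has rank N
ONES = (1 << N) - 1         # the all-ones vector over GF(2), as a bitmask
cols = []
for i in range(N):
    cols.append(1 << i)            # x_i = e_i
    cols.append((1 << i) ^ ONES)   # y_i = e_i + (1,...,1)
pairs = [(2 * i, 2 * i + 1) for i in range(N)]
E = set(range(2 * N))

def rank(idx):
    """GF(2) rank of the columns indexed by idx (elimination on leading bits)."""
    pivots = {}                    # leading bit -> basis vector with that leading bit
    r = 0
    for j in idx:
        v = cols[j]
        while v:
            h = v.bit_length() - 1
            if h in pivots:
                v ^= pivots[h]     # cancel the leading bit of v
            else:
                pivots[h] = v
                r += 1
                break
    return r

def corank(idx):
    """Rank in the dual matroid: r*(X) = |X| - r(E) + r(E \\ X)."""
    return len(idx) - rank(E) + rank(E - set(idx))

def is_circuit(idx, rk):
    """X is a circuit iff X is dependent but every proper subset is independent."""
    X = set(idx)
    return rk(X) == len(X) - 1 and all(rk(X - {e}) == len(X) - 1 for e in X)

# (s,t)-spike conditions with s = t = 2: the union of any two pairs must be
# a circuit of M, and (testing circuits of the dual) also a cocircuit of M.
for P, Q in combinations(pairs, 2):
    U = set(P) | set(Q)
    assert is_circuit(U, rank) and is_circuit(U, corank)
print("the union of any 2 pairs is a circuit and a cocircuit: a (2,2)-spike")
```

Cocircuits are tested as circuits of the dual matroid via the dual rank function $r^*(X) = |X| - r(E) + r(E \setminus X)$; no single pair is a circuit here, consistent with $s = 2$.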
\begin{theorem} \label{mainthm}
There exists a function $f : \mathbb{N}^2 \rightarrow \mathbb{N}$ such that, if $M$ is a matroid with the $(s,2s,t,2t)$-property and $|E(M)| \ge f(s,t)$, then $M$ is an $(s,t)$-spike. \end{theorem}
\noindent This proves the conjecture of Brettell et al.~\cite[Conjecture~1.2]{bccgw2019}.
Our approach is essentially the same as in \cite{bccgw2019}, but some care is required to generalize the argument. We note also that \cref{modcut} corrects an erroneous lemma \cite[Lemma 6.6]{bccgw2019}.
This paper is one in a developing series on matroids with the $(s,u,t,v)$-property. First, Miller~\cite{miller2014} studied matroids with the $(2,4,2,4)$-property, proving the specialization of \cref{mainthm} to the case where $s=t=2$. As previously mentioned, Brettell et al.~\cite{bccgw2019} considered the more general case where $s=t$ and $u=v=2t$, for any $t \ge 1$. Oxley, Pfeil, Semple, and Whittle considered the case where $s=2$, $u=4$, $t=1$, and $v \in \{3,4\}$, showing that a sufficiently large $v$-connected matroid with the $(2,4,1,v)$-property is isomorphic to $M(K_{v,n})$ for some $n$~\cite{pfeil}. A ``cyclic'' analogue of the $(s,u,t,v)$-property has also been considered, where a cyclic ordering $\sigma$ is imposed on $E(M)$, and only sets that appear consecutively with respect to $\sigma$ and have size~$s$ (or size~$t$) need appear in a circuit of size $u$ (or a cocircuit of size $v$, respectively). The case where $s = u-1$ and $t = v-1$ and $s=t$ was considered by Brettell, Chun, Fife, and Semple~\cite{bcfs2019}; whereas Brettell, Semple, and Toft dropped the requirement that $s=t$~\cite{bst2022}.
This series of papers has been motivated by problems involving matroid connectivity. The well-known Wheels-and-Whirls Theorem of Tutte~\cite{tutte1966} states that wheels and whirls (which have the $(1,3,1,3)$-property) are the only $3$-connected matroids with no elements that can be either deleted or contracted to retain a $3$-connected matroid. Similarly, spikes (which have the $(2,4,2,4)$-property) are the only $3$-connected matroids on at least $13$ elements that have no triangles, no triads, and no pairs of elements that can be either deleted or contracted to preserve $3$-connectivity~\cite{williams2015}.
The following conjecture was stated as \cite[Conjecture 1.3]{bccgw2019}. The case where $t=2$ was proved by Williams~\cite{williams2015}. \begin{conjecture} \label{conj:old}
There exists a function $f : \mathbb{N} \rightarrow \mathbb{N}$ such that if $M$ is a $(2t-1)$-connected matroid with no circuits or cocircuits of size $2t-1$, and $|E(M)| \ge f(t)$, then either
\begin{enumerate}
\item there exists a $t$-element set $X \subseteq E(M)$ such that either $M/X$ or $M \backslash X$ is $(t+1)$-connected, or
\item $M$ is a $(t,t)$-spike.
\end{enumerate} \end{conjecture}
Indeed, sufficiently large $(t,t)$-spikes are $(2t-1)$-connected matroids~\cite[Lemma~6.5]{bccgw2019}, they have no circuits or cocircuits of size $(2t-1)$~\cite[Lemma~6.3]{bccgw2019}, and for every $t$-element subset $X \subseteq E(M)$, neither $M/X$ nor $M \backslash X$ is $(t+1)$-connected. Optimistically, we offer the following generalization of \cref{conj:old}.
\begin{conjecture} \label{conj:new}
There exists a function $f : \mathbb{N}^2 \rightarrow \mathbb{N}$ such that if $M$ is a $(2\min\{s,t\}-1)$-connected matroid with no circuits of size at most $2s-1$ and no cocircuits of size at most $2t-1$, and $|E(M)| \ge f(s,t)$, then either
\begin{enumerate}
\item there exists an $s$-element set $X \subseteq E(M)$ such that $M/X$ is $(s+1)$-connected,
\item there exists a $t$-element set $X \subseteq E(M)$ such that $M \backslash X$ is $(t+1)$-connected, or
\item $M$ is an $(s,t)$-spike.
\end{enumerate} \end{conjecture}
\cref{sec:Preliminaries} recalls some terminology and a Ramsey-theoretic result used later in the paper. In \cref{sec:echidnas}, we recall the definition of echidnas from~\cite{bccgw2019} and show that every matroid with the $(s,2s,t,2t)$-property and having a sufficiently large $s$-echidna is an $(s,t)$-spike. In \cref{sec:t2t}, we prove \cref{mainthm}. Finally, \cref{sec:tspikeprops} describes some properties of $(s,t)$-spikes, as well as a construction that allows us to build an $(s,t+1)$-spike from an $(s,t)$-spike.
\section{Preliminaries} \label{sec:Preliminaries}
Our notation and terminology follow Oxley~\cite{oxbook}. We refer to the fact that a circuit and a cocircuit cannot intersect in exactly one element as ``orthogonality''. A set $S_1$ \emph{meets} a set $S_2$ if $S_1 \cap S_2 \neq \emptyset$. We denote $\{1,2,\dotsc,n\}$ by $\seq{n}$, and, for positive integers $i < j$, we denote $\{i,i+1,\dotsc,j\}$ by $[i,j]$. We denote the set of positive integers by $\mathbb{N}$.
In order to prove \cref{mainthm}, we will use some hypergraph Ramsey Theory~\cite{ramsey1930}. Recall that a hypergraph is \emph{$k$-uniform} if every hyperedge has size~$k$.
\begin{theorem}[Ramsey's Theorem for $k$-uniform hypergraphs]
\label{hyperramsey}
For positive integers $k$ and $n$, there exists an integer $r_k(n)$ such that if $H$ is a $k$-uniform hypergraph on $r_k(n)$ vertices, then $H$ has either a clique on $n$ vertices, or a stable set on $n$ vertices. \end{theorem}
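For instance, in the graph case $k=2$, a clique is a set of pairwise adjacent vertices and a stable set is a set of pairwise non-adjacent vertices, and the classical value $r_2(3)=6$ asserts that every graph on six vertices contains either a triangle or three pairwise non-adjacent vertices.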
\section{Echidnas and \texorpdfstring{$(s,t)$}{(s,t)}-spikes} \label{sec:echidnas}
Recall that $M$ is an $(s,t)$-spike if there is a partition of $E(M)$ into pairs such that the union of any $s$ pairs is a circuit and the union of any $t$ pairs is a cocircuit. In this section, we prove a sufficient condition for $M$ to be an $(s,t)$-spike. Namely, we prove as \cref{lem:swamping} that if $M$ has the $(s,2s,t,2t)$-property, and a subset of $E(M)$ can be partitioned into $u$ pairs such that the union of any $s$ pairs is a circuit, then, when $u$ is sufficiently large, $M$ is an $(s,t)$-spike. Conforming with \cite{bccgw2019}, we call such a partition an $s$-echidna, as defined below.
Let $M$ be a matroid.
A $t$-\emph{echidna} of order $n$ is a partition $(S_1,\ldots, S_n)$ of a subset of $E(M)$ such that
\begin{enumerate}
\item $|S_i|=2$ for all $i \in \seq{n}$, and
\item $\bigcup_{i \in I}S_i$ is a circuit for all $I \subseteq \seq{n}$ with $|I|=t$.
\end{enumerate}
For $i \in \seq{n}$, we say $S_i$ is a \emph{spine}.
We say $(S_1,\ldots,S_n)$ is a \emph{$t$-coechidna} of $M$ if $(S_1,\ldots,S_n)$ is a $t$-echidna of $M^*$.
Let $(S_1,\dotsc,S_n)$ be a $t$-echidna of a matroid $M$. If $(S_1,\dotsc,S_m)$ is a $t$-echidna of $M$, for some $m \ge n$, we say that $(S_1,\dotsc,S_n)$ \emph{extends} to $(S_1,\dotsc,S_m)$. We say that $\pi=(S_1,\dotsc,S_n)$ is \emph{maximal} if $\pi$ extends only to $\pi$.
Note that a matroid~$M$ is an $(s,t)$-spike if there exists a partition $\pi=(A_1,\ldots,A_m)$ of $E(M)$ such that $\pi$ is an $s$-echidna and a $t$-coechidna, for some $m\geq\max\{s,t\}$. In this case, we say that the $(s,t)$-spike~$M$ has \emph{order~$m$}, we call $\pi$ the \emph{associated partition} of the $(s,t)$-spike~$M$, and we say that $A_i$ is an \emph{arm} of the $(s,t)$-spike for each $i \in \seq{m}$. An $(s,t)$-spike with $s=t$ is also called a \emph{$t$-spike}. Note that if $M$ is an $(s,t)$-spike, then $M^*$ is a $(t,s)$-spike.
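For a concrete family of examples, which can be checked directly from the definitions: the circuits of the uniform matroid $U_{2s-1,\,2s+2t-2}$ are exactly its $2s$-element subsets, and its cocircuits are exactly its $2t$-element subsets, so \emph{any} partition of its ground set into $s+t-1$ pairs is simultaneously an $s$-echidna and a $t$-coechidna. Hence $U_{2s-1,\,2s+2t-2}$ is an $(s,t)$-spike of order $s+t-1$, and its dual $U_{2t-1,\,2s+2t-2}$ is a $(t,s)$-spike.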
Throughout this section, we assume that $s$ and $t$ are positive integers.
\begin{lemma}
\label{lem:coechidna}
Let $M$ be a matroid with the $(s,2s,t,2t)$-property.
If $M$ has an $s$-echidna $(S_1,\ldots, S_n)$, where $n\geq s+2t-1$, then $(S_1,\ldots, S_n)$ is also a $t$-coechidna of $M$. \end{lemma}
\begin{proof}
Suppose $M$ has an $s$-echidna $(S_1,\ldots, S_n)$ with $n \ge s+2t-1$, and
let $S_i=\{x_i,y_i\}$ for each $i \in [n]$. We show, for every $t$-element subset $J$ of $[n]$, that $\bigcup_{j \in J} S_j$ is a cocircuit. Without loss of generality, let $J=[t]$. By the $(s,2s,t,2t)$-property, $\{x_1,\ldots,x_{t}\}$ is contained in a $2t$-element cocircuit~$C^*$. Suppose for a contradiction that $C^*\neq\bigcup_{j \in J} S_j$.
Then there is some $i \in [t]$ such that $y_i\notin C^*$. Without loss of generality, say $y_1\notin C^*$.
Let $I$ be an $(s-1)$-element subset of $[t+1,n]$.
For any such $I$, the set $S_1 \cup \bigcup_{i \in I} S_i$ is a circuit that meets $C^*$. By orthogonality, $\bigcup_{i \in I} S_i$ meets $C^*$.
Thus, $C^*$ avoids at most $s-2$ of the $S_i$'s for $i \in [t+1,n]$. In fact, as $C^*$ meets each $S_i$ with $i \in [t]$, the cocircuit~$C^*$ avoids at most $s-2$ of the $S_i$'s for $i \in [n]$. Thus $|C^*| \ge n-(s-2) \ge (s+2t-1) -(s-2) =2t+1 > 2t$, a contradiction.
Therefore, we conclude that $C^*=\bigcup_{j \in J} S_j$, and the result follows. \end{proof}
\sloppy \begin{lemma}
\label{lem:rep-orthog}
Let $M$ be a matroid with the $(s,2s,t,2t)$-property, and let $(S_1,\ldots, S_n)$ be an $s$-echidna of $M$ with $n\geq\max\{s+2t,2s+t\}-1$.
\begin{itemize}
\item[(i)]
Let $I$ be an $(s-1)$-subset of $[n]$. For $z\in E(M)-\bigcup_{i \in I}S_i$, there is a $2s$-element circuit containing $\{z\} \cup \bigcup_{i \in I}S_i$.
\item[(ii)]
Let $I$ be a $(t-1)$-subset of $[n]$. For $z\in E(M)-\bigcup_{i \in I}S_i$, there is a $2t$-element cocircuit containing $\{z\} \cup \bigcup_{i \in I}S_i$. \end{itemize} \end{lemma} \fussy
\begin{proof}
First we prove (i). For $i \in [n]$, let $S_i=\{x_i,y_i\}$.
By the $(s,2s,t,2t)$-property, there is a $2s$-element circuit~$C$ containing $\{z\} \cup \{x_i : i \in I\}$.
Let $J$ be a $(t-1)$-element subset of $[n]$ such that $C$ and $\bigcup_{j \in J}S_j$ are disjoint (such a set exists since $|C|=2s$ and $n \ge 2s+t-1$).
For $i \in I$, let $C^*_i=S_i \cup \bigcup_{j \in J} S_j$, and observe that $x_i \in C^*_i \cap C$, and $C^*_i \cap C \subseteq S_i$.
By \cref{lem:coechidna}, $(S_1,\dotsc,S_n)$ is a $t$-coechidna as well as an $s$-echidna; therefore, $C^*_i$ is a cocircuit.
Now, for each $i \in I$, orthogonality implies that $|C^*_i \cap C| \ge 2$, and hence $y_i \in C$.
So $C$ contains $\{z\} \cup \bigcup_{i \in I}S_i$, as required.
Now, to prove (ii), recall that $(S_1,\dotsc,S_n)$ is a $t$-coechidna by \cref{lem:coechidna}. Therefore, (ii) follows by (i) and duality. \end{proof}
\begin{lemma}
\label{lem:swamping}
Let $M$ be a matroid with the $(s,2s,t,2t)$-property.
If $M$ has an $s$-echidna $\pi=(S_1,\ldots, S_n)$, where $n\geq\max\{s+2t-1,2s+t-1,3s+t-3\}$, then $(S_1,\ldots, S_n)$ extends to a partition of $E(M)$ that is both an $s$-echidna and a $t$-coechidna. \end{lemma}
\begin{proof}
Let $\pi'=(S_1, \dotsc, S_m)$ be a maximal $s$-echidna with $X=\bigcup_{i = 1}^{m} S_i\subseteq E(M)$.
Suppose for a contradiction that $X\neq E(M)$.
Since $\pi'$ is maximal, $m\geq n\geq s+2t-1$.
Therefore, by \cref{lem:coechidna}, $\pi'$ is a $t$-coechidna.
Let $z\in E(M)-X$.
By \cref{lem:rep-orthog}, there is a $2s$-element circuit $C = (\bigcup_{i \in [s-1]} S_i)\cup \{z,z'\}$ for some $z'\in E(M)$.
We claim that $z'\notin X$.
Towards a contradiction, suppose that $z'\in S_k$ for some $k\in [s,m]$.
Let $J$ be a $t$-element subset of $[s,m]$ containing $k$.
Then, since $(S_1,\dotsc,S_m)$ is a $t$-coechidna, $\bigcup_{j \in J}S_j$ is a cocircuit that contains $z'$.
Now, this cocircuit intersects the circuit~$C$ in a single element $z'$, contradicting orthogonality.
Thus, $z'\notin X$, as claimed.
We next show that $(\{z,z'\}, S_{s}, S_{s+1}, \ldots, S_m)$ is a $t$-coechidna.
Since $\pi'$ is a $t$-coechidna, it suffices to show that $\{z,z'\} \cup \bigcup_{i \in I}S_i$ is a cocircuit for each $(t-1)$-element subset~$I$ of $[s,m]$.
Let $I$ be such a set.
\Cref{lem:rep-orthog} implies that there is a $2t$-element cocircuit~$C^*$ of $M$ containing $\{z\} \cup \bigcup_{i\in I}S_i$.
By orthogonality, $|C\cap C^*|>1$. Therefore, $z'\in C^*$. Thus, $(\{z,z'\}, S_{s}, S_{s+1}, \ldots, S_m)$ is a $t$-coechidna.
Since this $t$-coechidna has order $1+m-(s-1)\geq n-s+2\geq2s+t-1$, the dual of \cref{lem:coechidna} implies that $(\{z,z'\}, S_{s}, S_{s+1}, \dotsc, S_m)$ is also an $s$-echidna.
Next we show that $(\{z,z'\}, S_1, S_2, \dotsc, S_m)$ is a $t$-coechidna. Let $I$ be a $(t-1)$-element subset of $[m]$. We claim that $\{z,z'\} \cup \bigcup_{i \in I}S_i$ is a cocircuit.
Let $J$ be an $(s-1)$-element subset of $[s,m]-I$.
Then $C=\{z,z'\} \cup \bigcup_{j \in J}S_j$ is a circuit since $(\{z,z'\}, S_{s}, S_{s+1}, \dotsc, S_m)$ is an $s$-echidna.
By \cref{lem:rep-orthog}, there is a $2t$-element cocircuit~$C^*$ containing $\{z\} \cup \bigcup_{i \in I}S_i$.
By orthogonality between $C$ and $C^*$, we have $z'\in C^*$.
Since $I$ was arbitrarily chosen, $(\{z,z'\}, S_1, S_2, \dotsc, S_m)$ is a $t$-coechidna.
By the dual of \cref{lem:coechidna}, it is also an $s$-echidna, contradicting the maximality of $(S_1,\dotsc,S_m)$. \end{proof}
\section{Matroids with the \texorpdfstring{$(s,2s,t,2t)$}{(s,2s,t,2t)}-property} \label{sec:t2t}
In this section, we prove that every sufficiently large matroid with the $(s,2s,t,2t)$-property is an $(s,t)$-spike. We will show that a sufficiently large matroid with the $(s,2s,t,2t)$-property has a large $s$-echidna or $t$-coechidna; it then follows, by \cref{lem:swamping}, that the matroid is an $(s,t)$-spike. As in the previous section, we assume that $s$ and $t$ are positive integers.
\begin{lemma}
\label{lem:rank-t}
Let $M$ be a matroid with the $(s,2s,t,2t)$-property, and let $X\subseteq E(M)$.
\begin{enumerate}
\item If $r(X)<s$, then $X$ is independent.\label{rt1}
\item If $r(X)=s$, then $M|X\cong U_{s,|X|}$ and $|X|<s+2t$.\label{rt2}
\end{enumerate} \end{lemma}
\begin{proof}
Every subset of $E(M)$ of size at most $s$ is independent since it is contained in a circuit of size $2s$. In particular, \ref{rt1} holds.
Now let $r(X)=s$. Then every $(s+1)$-element subset of $X$ is dependent while, as noted above, every $s$-element subset of $X$ is independent; hence every $(s+1)$-element subset of $X$ is a circuit, and $M|X\cong U_{s,|X|}$.
Suppose for a contradiction that $|X|\geq s+2t$. Let $C^*$ be a $2t$-element cocircuit such that there is some $x\in X\cap C^*$; such a cocircuit exists by the $(s,2s,t,2t)$-property. Then $X-C^*$ is contained in the hyperplane $E(M)-C^*$. Since $x\in X\cap C^*$, we have $r(X-C^*)<r(X)=s$. Therefore, $X-C^*$ is an independent set, so $|X-C^*|<s$. But then $|C^*|\geq|X\cap C^*|=|X|-|X-C^*|>(s+2t)-s=2t$, a contradiction. Thus, \ref{rt2} holds. \end{proof}
\begin{lemma}
\label{lemmaA}
Let $M$ be a matroid with the $(s,2s,t,2t)$-property, and let $C_1^*,C_2^*,\dotsc,C_{s-1}^*$ be a collection of pairwise disjoint cocircuits of $M$.
Let $Y = E(M)-\bigcup_{i \in [s-1]} C_i^*$.
For all $y \in Y$, there is a $2s$-element circuit~$C_y$ containing $y$ such that either
\begin{enumerate}
\item $|C_y \cap C_i^*| = 2$ for all $i \in [s-1]$, or\label{A1}
\item $|C_y \cap C_j^*| = 3$ for some $j \in [s-1]$, and $|C_y \cap C_i^*| = 2$ for all $i \in [s-1]-\{j\}$.\label{A2}
\end{enumerate}
Moreover, if $C_y$ satisfies \ref{A2}, then there are at most $s+2t-1$ elements $w \in Y$ such that $(C_y-y) \cup \{w\}$ is a circuit. \end{lemma}
\begin{proof}
Choose an element $c_i \in C_i^*$ for each $i \in [s-1]$. By the $(s,2s,t,2t)$-property, there is a $2s$-element circuit~$C_y$ containing $\{c_1,c_2,\dotsc,c_{s-1},y\}$, for each $y \in Y$.
By orthogonality, $C_y$ satisfies \ref{A1} or \ref{A2}.
Suppose $C_y$ satisfies \ref{A2}, and let $S =C_y-Y= C_y-\{y\}$.
Let $W = \{w \in Y : S \cup \{w\} \textrm{ is a circuit}\}$.
It remains to prove that $|W| < s+2t$.
Observe that $W \subseteq \cl(S) \cap Y$, and, since $S$ contains $s-1$ elements in pairwise disjoint cocircuits that avoid $Y$, we have $r(\cl(S) \cup Y) \ge r(Y) + (s-1)$. Thus,
\begin{align*}
r(W) &\le r(\cl(S) \cap Y) \\
&\le r(\cl(S)) + r(Y) - r(\cl(S) \cup Y) \\
&\le (2s-1) + r(Y) - (r(Y)+ (s-1)) \\
&=s,
\end{align*}
using submodularity of the rank function at the second line.
Now, by \cref{lem:rank-t}\ref{rt1}, if $r(W) < s$, then $W$ is independent, so $|W| = r(W) < s < s + 2t$.
On the other hand, by \cref{lem:rank-t}\ref{rt2}, if $r(W)=s$, then $M|W \cong U_{s,|W|}$ and $|W|<s+2t$, as required. \end{proof}
\begin{lemma}
\label{lem:disjoint}
Let $t$ be a positive integer. There exists a function $h$ such that if $M$ is a matroid with at least $h(k,d,t)$ $k$-element circuits in which every $t$-element set is contained in a $2t$-element cocircuit, then $M$ has a collection of $d$ pairwise disjoint $2t$-element cocircuits. \end{lemma}
\begin{proof}
By \cite[Lemma 3.2]{bccgw2019}, there is a function $g$ such that if $M$ has at least $g(k,d)$ $k$-element circuits, then $M$ has a collection of $d$ pairwise disjoint circuits.
We define $h(k,d,t) = g(k,dt)$, and claim that a matroid with at least $h(k,d,t)$ $k$-element circuits, and the property that every $t$-element set is contained in a $2t$-element cocircuit, has a collection of $d$ pairwise disjoint $2t$-element cocircuits.
Let $M$ be such a matroid.
Then $M$ has a collection of $dt$ pairwise disjoint circuits.
We partition these into $d$ groups of size $t$:
call this partition $(\mathcal{C}_1,\dotsc,\mathcal{C}_d)$.
Since the $t$ circuits in any cell of this partition are pairwise disjoint, it now suffices to show that, for each $i \in [d]$, there is a $2t$-element cocircuit contained in the union of the members of $\mathcal{C}_i$.
Let $\mathcal{C}_i = \{C_1,\dotsc,C_{t}\}$ for some $i \in [d]$.
Pick some $c_j \in C_j$ for each $j \in [t]$.
Then, since $\{c_1,c_2,\dotsc,c_{t}\}$ is a $t$-element set, it is contained in a $2t$-element cocircuit, which, by orthogonality, is contained in $\bigcup_{j \in [t]}C_j$. \end{proof}
\begin{lemma}
\label{setup}
Let $M$ be a matroid with the $(s,2s,t,2t)$-property such that $r(M)\geq r^*(M)$. There exists a function $g$ such that, if $|E(M)| \ge g(s,t,q)$, then $M$ has $s-1$ pairwise disjoint $2t$-element cocircuits $C_1^*, C_2^*, \dotsc, C_{s-1}^*$, and
there is some $Z \subseteq E(M)-\bigcup_{i \in [s-1]}C_i^*$ such that
\begin{enumerate}
\item $r_{M}(Z) \ge q$, and\label{ps1}
\item for each $z \in Z$, there exists an element $z'\in Z-\{z\}$ such that $\{z,z'\}$ is contained in a $2s$-element circuit~$C$ with $|C \cap C_i^*|=2$ for each $i \in [s-1]$.\label{ps2}
\end{enumerate} \end{lemma}
\begin{proof}
By \cref{lem:disjoint}, there is a function $h$ such that if $M$ has at least $h(k,d,t)$ $k$-element circuits, then $M$ has $d$ pairwise disjoint $2t$-element cocircuits.
Suppose $|E(M)|\geq 2s\cdot h(2s,s-1,t)$. By the $(s,2s,t,2t)$-property, $M$ has at least $h(2s,s-1,t)$ distinct $2s$-element circuits. Therefore, by \cref{lem:disjoint}, $M$ has a collection of $s-1$ pairwise disjoint $2t$-element cocircuits $C_1^*,\dotsc, C_{s-1}^*$.
Let $X = \bigcup_{i \in [s-1]}C_i^*$ and $Y=E(M)-X$.
By \cref{lemmaA}, for each $y \in Y$ there is a $2s$-element circuit~$C_y$ containing $y$ such that $|C_y \cap C_j^*| = 3$ for at most one $j \in [s-1]$ and $|C_y \cap C_i^*| = 2$ otherwise.
Let $W$ be the set of all $w \in Y$ such that $w$ is in a $2s$-element circuit~$C$ with $|C\cap C_j^*|=3$ for some $j \in [s-1]$, and $|C \cap C_i^*|=2$ for all $i \in [s-1]-\{j\}$.
Now, letting $Z=Y-W$, we see that \ref{ps2} is satisfied. It remains to show that \ref{ps1} holds.
Since each $C_i^*$ has size $2t$, there are $(s-1)\binom{2t}{3}\binom{2t}{2}^{s-2}$
sets $X'\subseteq X$ with $|X' \cap C_j^*|=3$ for some $j \in [s-1]$ and $|X' \cap C_i^*|=2$ for all $i \in [s-1]-\{j\}$.
It follows, by \cref{lemmaA}, that $|W| \le f(s,t)$ where \[f(s,t) = (s+2t-1)\left[(s-1)\binom{2t}{3}\binom{2t}{2}^{s-2}\right].\]
We define \[g(s,t,q) = \max\left\{2s\cdot h(2s,s-1,t), 2\big(2t(s-1)+f(s,t)+q\big)\right\}.\]
Suppose that $|E(M)| \ge g(s,t,q)$.
Since $r(M)\geq r^*(M)$ and $|E(M)|\geq2(2t(s-1)+f(s,t)+q)$, we have $r(M) \ge 2t(s-1)+f(s,t)+q$. Then,
\begin{align*}
r_{M}(Z) &\ge r_{M}(Y) - |W| \\
&\ge \big(r(M)-2t(s-1)\big) - f(s,t) \\
&\ge q,
\end{align*}
so \ref{ps1} holds as well. \end{proof}
\sloppy \begin{lemma}
\label{lem:payoff}
Let $M$ be a matroid with the $(s,2s,t,2t)$-property. Suppose $M$ has $s-1$ pairwise disjoint $2t$-element cocircuits $C_1^*, C_2^*, \dotsc, C_{s-1}^*$
and, for some positive integer~$p$, there is a set $Z \subseteq E(M)-\bigcup_{i \in [s-1]}C_i^*$ such that
\begin{enumerate}[label=\rm(\alph*)]
\item $r(Z) \ge \binom{2t}{2}^{s-1}(p + 2(s-1))$, and
\item for each $z \in Z$, there exists an element $z'\in Z-\{z\}$ such that $\{z,z'\}$ is contained in a $2s$-element circuit $C$ of $M$ with $|C\cap C_i^*|=2$ for each $i\in [s-1]$.
\end{enumerate}
Then there exists a subset $Z' \subseteq Z$ and a partition $\pi=( Z_1', \dotsc, Z_p' )$ of $Z'$ into pairs such that
\begin{enumerate}
\item each circuit of $M|Z'$ is a union of pairs in $\pi$, and
\item the union of any $s$ pairs in $\pi$ contains a circuit.
\end{enumerate} \end{lemma} \fussy
\begin{proof}
We first prove the following:
\begin{sublemma}
\label{prelem:payoff}
There exists a $(2s-2)$-element set $X$ such that $|X\cap C_i^*|=2$ for every $i\in[s-1]$ and a set $Z'\subseteq Z$ with a partition $\pi=\{ Z_1', \dotsc, Z_p' \}$ of $Z'$ into pairs such that \begin{enumerate}[label=\rm(\Roman*)]
\item $X \cup Z_i'$ is a circuit, for each $i \in [p]$ and \label{ppo1}
\item $\pi$ partitions the ground set of $(M/X)|Z'$ into parallel classes such that $r_{M/X}\big(\bigcup_{i \in [p]}Z_i'\big)=p$. \label{ppo2}
\end{enumerate}
\end{sublemma}
\begin{subproof}
By (b), for each $z \in Z$, there exists an element $z'\in Z-\{z\}$ and a set $X'$ such that $\{z,z'\} \cup X'$ is a circuit of $M$ and $X'$ is the union of pairs $Y_i$ for $i\in[s-1]$, with $Y_i\subseteq C_i^*$.
Since $|C_i^*|=2t$ for each $i\in[s-1]$, there are $\binom{2t}{2}^{s-1}$ choices for $(Y_1,Y_2,\ldots,Y_{s-1})$. Therefore, for some $m\leq\binom{2t}{2}^{s-1}$, there are $(2s-2)$-element sets $X_1,X_2,\ldots,X_m$, and sets $Z_1,Z_2,\ldots,Z_m$ whose union is $Z$, such that each of $X_1,X_2,\ldots,X_m$ intersects $C_i^*$ in two elements for each $i\in[s-1]$, and such that, for each $j\in[m]$ and each $z_j\in Z_j$, there is an element $z_j'$ such that $\{z_j,z_j'\}\cup X_j$ is a circuit. Since $Z=\bigcup_{i \in [m]}Z_i$, we have $\sum_{i\in[m]}r(Z_i)\geq r(Z)$. Thus, the pigeonhole principle implies that there is some $j\in[m]$ such that \[r(Z_j) \ge \frac{r(Z)}{\binom{2t}{2}^{s-1}} \ge p+2(s-1),\] by (a).
We define $Z' = Z_j$ and $X = X_j$.
Observe that $X \cup \{z,z'\}$ is a circuit, for some pair $\{z,z'\} \subseteq Z'$, if and only if $\{z,z'\}$ is a parallel pair in $M/X$.
Therefore, there is a partition of the ground set of $(M/X)|Z'$ into parallel classes, where every parallel class has size at least two.
Let $\{\{z_1,z_1'\}, \dotsc,\{z_n,z_n'\}\}$ be a collection of pairs from each parallel class such that $\{z_1,z_2,\dotsc,z_n\}$ is an independent set in $(M/X)|Z'$.
Note that $n\geq r_{M/X}(Z') = r(Z' \cup X) -r(X) \ge r(Z') - 2(s-1) \ge p$. For $i\in[p]$, let $Z_i'=\{z_i,z_i'\}$. Then $\pi=\{ Z_1', \dotsc, Z_p' \}$ satisfies \ref{prelem:payoff}.
\end{subproof}
Let $X$, $\pi$, and $Z'$ be as described in \ref{prelem:payoff}, and let $\mathcal{X} = \{X_1,\dotsc,X_{s-1}\}$, where $X_i = \{x_i,x_i'\} = X \cap C_i^*$.
\begin{sublemma}
\label{metamatroid}
Each circuit of $M|(X \cup Z')$ is a union of pairs in $\mathcal{X} \cup \pi$.
\end{sublemma}
\begin{subproof}
Let $C$ be a circuit of $M|(X \cup Z')$.
If $x_i \in C$, for some $\{x_i,x_i'\} \in \mathcal{X}$, then orthogonality with $C_i^*$ implies that $x_i' \in C$.
Assume for a contradiction that $\{z,z'\} \in \pi$ and $C \cap \{z,z'\} = \{z\}$.
Let $W$ be the union of the pairs in $\pi$ containing elements of $(C-\{z\}) \cap Z'$. Then $z \in \cl(X \cup W)$. Hence $z \in \cl_{M/X}(W)$, contradicting \cref{prelem:payoff}\ref{ppo2}.
\end{subproof}
\begin{sublemma}
\label{induct}
Every union of $s$ pairs in $\mathcal{X} \cup \pi$ contains a circuit.
\end{sublemma}
\begin{subproof}
Let $\mathcal{W}$ be a subset of $\mathcal{X} \cup \pi$ of size $s$.
We proceed by induction on the number of pairs in $\mathcal{W} \cap \pi$.
If there is only one pair in $\mathcal{W} \cap \pi$, then the union of the pairs in $\mathcal{W}$ contains a circuit (indeed, is a circuit) by \cref{prelem:payoff}\ref{ppo1}.
Suppose the result holds for any subset containing $k$ pairs in $\pi$, and let $\mathcal{W}$ be a subset containing $k+1$ pairs in $\pi$.
Let $\{x,x'\}$ be a pair in $\mathcal{X}-\mathcal{W}$,
and let $W = \bigcup_{W' \in \mathcal{W}}W'$.
Then $W \cup \{x,x'\}$ is the union of $s+1$ pairs of $\mathcal{X} \cup \pi$, of which $k+1$ are in $\pi$, so, by the induction hypothesis, $W \cup \{x,x'\}$ properly contains a circuit~$C_1$.
If $\{x,x'\} \subseteq E(M)-C_1$, then $C_1 \subseteq W$, in which case the union of the pairs in $\mathcal{W}$ contains a circuit, as desired.
Therefore, we may assume, by \cref{metamatroid}, that $\{x,x'\} \subseteq C_1$.
Since $X$ is independent, there is a pair $\{z,z'\} \subseteq Z' \cap C_1$.
By the induction hypothesis, there is a circuit~$C_2$ contained in $(W-\{z,z'\}) \cup \{x,x'\}$.
Observe that $C_1$ and $C_2$ are distinct, and $\{x,x'\} \subseteq C_1 \cap C_2$.
Circuit elimination on $C_1$ and $C_2$, and \cref{metamatroid}, imply that there is a circuit $C_3 \subseteq (C_1 \cup C_2) - \{x,x'\} \subseteq W$, as desired. The claim now follows by induction.
\end{subproof}
Now, \cref{induct} implies that the union of any $s$ pairs in $\pi$ contains a circuit, and the result follows. \end{proof}
\begin{lemma}
\label{lem:tis1}
If $M$ is a matroid with the $(1,2,t,2t)$-property and at least $t$ elements, then $M$ is a $(1,t)$-spike.
Dually, if $M$ is a matroid with the $(s,2s,1,2)$-property and at least $s$ elements, then $M$ is an $(s,1)$-spike. \end{lemma}
\begin{proof}
By duality, it suffices to consider the case where $M$ has the $(1,2,t,2t)$-property and at least $t$ elements. Since every element of $M$ is contained in a $2$-element circuit, there is a partition of $E(M)$ into parallel classes $P_1,P_2,\ldots,P_n$, where $|P_i|\geq2$ for each $i$. For each $P_i$, let $x_i\in P_i$.
First, we consider the case where $n\geq t$. Let $X$ be a $t$-element subset of $\{x_1,\ldots,x_{n}\}$; for ease of notation, we assume $X=\{x_1,\ldots,x_{t}\}$. By the $(1,2,t,2t)$-property, $X\subseteq C^*$ for some $2t$-element cocircuit $C^*$. Since $P_i$ is a parallel class, $\{x_i,y_i\}$ is a circuit for each $y_i\in P_i-\{x_i\}$. By orthogonality, $y_i\in C^*$ for each such $y_i$, so $P_i\subseteq C^*$. Since $|C^*|=2t$, and $X$ is an arbitrary $t$-element subset of $\{x_1,\ldots,x_{n}\}$, it follows that $|P_i|=2$ for each $i\in[n]$, and that the union of any $t$ of the $P_i$'s is a cocircuit. Thus $M$ is a $(1,t)$-spike.
It remains to consider the case where $n<t$. Since $M$ has at least $t$ elements, let $X$ be any $t$-element set containing $\{x_1,\ldots,x_n\}$. By the $(1,2,t,2t)$-property, there is a $2t$-element cocircuit $C^*$ containing $X$. For $i\in[n]$ and each $y_i\in P_i-\{x_i\}$, orthogonality implies $y_i\in C^*$. Thus, $E(M)=C^*$. It follows that $M\cong U_{1,2t}$, which is a $(1,t)$-spike. \end{proof}
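Both extremes of \cref{lem:tis1} can be realized concretely: the direct sum of $m$ copies of $U_{1,2}$ is a $(1,1)$-spike of order $m$, since each parallel pair is both a circuit and a cocircuit; while $U_{1,2t}$, with its ground set partitioned arbitrarily into $t$ pairs, is a $(1,t)$-spike of order $t$ whose unique cocircuit is its whole ground set.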
We now prove \cref{mainthm}, restated below.
\begin{theorem}
\label{mainthmtake2}
There exists a function $f : \mathbb{N}^2 \rightarrow \mathbb{N}$ such that, if $M$ is a matroid with the $(s,2s,t,2t)$-property and $|E(M)| \ge f(s,t)$, then $M$ is an $(s,t)$-spike.
\end{theorem}
\begin{proof}
If $s=1$ or $t=1$, then, by \cref{lem:tis1}, the theorem holds with $f(s,t) = \max\{s,t\}$.
So we may assume that $\min\{s,t\} \ge 2$.
A matroid is an $(s,t)$-spike if and only if its dual is a $(t,s)$-spike; moreover, a matroid has the $(s,2s,t,2t)$-property if and only if its dual has the $(t,2t,s,2s)$-property. Therefore, by duality, we may also assume that $r(M)\geq r^*(M)$.
Let $r_k(n)$ be the Ramsey number described in \cref{hyperramsey}.
For $k \in [s]$, we define the function $h_k : \mathbb{N}^2 \rightarrow \mathbb{N}$ such that \[h_{s}(s,t)=\max\{s+2t-1,2s+t-1,3s+t-3,s+3t-3\}\]
and such that $h_k(s,t)=r_k(h_{k+1}(s,t))$ for $k\in[s-1]$.
Note that $h_{k}(s,t) \ge h_{k+1}(s,t) \ge h_{s}(s,t)$, for each $k \in [s-1]$.
Let $p = h_1(s,t)$ and let $q(s,t)=\binom{2t}{2}^{s-1}(p + 2(s-1))$.
By \cref{setup}, there exists a function $g$ such that if $|E(M)| \ge g(s,t,q(s,t))$, then $M$ has $s-1$ pairwise disjoint $2t$-element cocircuits $C_1^*, C_2^*, \dotsc, C_{s-1}^*$, and there is some $Z \subseteq E(M)-\bigcup_{i \in [s-1]}C_i^*$ such that
$r_M(Z) \ge q(s,t)$, and,
for each $z \in Z$, there exists an element $z'\in Z-\{z\}$ such that $\{z,z'\}$ is contained in a $2s$-element circuit~$C$ with $|C \cap C_i^*|=2$ for each $i \in [s-1]$.
Let $f(s,t) = g(s,t,q(s,t))$, and suppose that $|E(M)| \ge f(s,t)$.
Then, by \cref{lem:payoff}, after replacing $Z$ with a suitable subset, we may assume that $Z$ has a partition into pairs $\pi = ( Z_1, \dotsc, Z_{p})$ such that
\begin{enumerate}[label=\rm(\Roman*)]
\item each circuit of $M|Z$ is a union of pairs in $\pi$, and
\item the union of any $s$ pairs in $\pi$ contains a circuit.\label{rc2}
\end{enumerate}
Let $m=h_{s}(s,t)$. By \cref{lem:swamping} and its dual,
it suffices to show that $M$ has either an $s$-echidna or a $t$-coechidna of order $m$.
If the smallest circuit in $M|Z$ has size $2s$, then, by \ref{rc2}, $\pi$ is an $s$-echidna of order $p \ge m$.
So we may assume that the smallest circuit in $M|Z$ has size $2j$ for some $j \in [s-1]$.
\begin{sublemma}
\label{iterramsey}
If the smallest circuit in $M|Z$ has size $2j$, for $j \in [s-1]$, and $|\pi| \ge h_j(s,t)$, then either
\begin{enumerate}
\item $M$ has a $t$-coechidna of order $m$,
or\label{ir1}
\item there exists some $Z' \subseteq Z$ that is the union of $h_{j+1}(s,t)$ pairs in $\pi$ for which the smallest circuit in $M|Z'$ has size at least $2(j+1)$.\label{ir2}
\end{enumerate}
\end{sublemma}
\begin{subproof}
We define $H$ to be the $j$-uniform hypergraph with vertex set $\pi$ whose hyperedges are the $j$-subsets of $\pi$ that are partitions of circuits in $M|Z$.
By \cref{hyperramsey} and the definition of $h_k$, since $H$ has at least $h_j(s,t) = r_j(h_{j+1}(s,t))$ vertices, it has either a clique or a stable set on $h_{j+1}(s,t)$ vertices.
If $H$ has a stable set~$\pi'$ on $h_{j+1}(s,t)$ vertices, then clearly \ref{ir2} holds, with $Z' = \bigcup_{P \in \pi'} P$.
Therefore, we may assume that there are $h_{j+1}(s,t)$ pairs in $\pi$ such that the union of any $j$ of these pairs is a circuit.
Let $Z''$ be the union of these $h_{j+1}(s,t)$ pairs.
We claim that the union of any set of $t$ pairs contained in $Z''$ is a cocircuit.
Let $T$ be a transversal of $t$ pairs in $\pi$ contained in $Z''$, and let $C^*$ be a $2t$-element cocircuit containing $T$; such a cocircuit exists by the $(s,2s,t,2t)$-property.
Suppose, for a contradiction, that there exists some pair $P \in \pi$ with $P \subseteq Z''$ such that $|C^* \cap P| = 1$.
Select $j-1$ pairs $Z_1'',\dotsc,Z_{j-1}''$ in $\pi$ that are each contained in $Z''-C^*$ (these exist since $h_{j+1}(s,t) \ge s+2t-1 \ge 2t + j - 1$).
Then $P \cup (\bigcup_{i \in [j-1]}Z_i'')$ is a circuit intersecting $C^*$ in a single element, contradicting orthogonality.
We deduce that the union of any $t$ pairs in $\pi$ that are contained in $Z''$ is a cocircuit.
Thus, $M$ has a $t$-coechidna of order $h_{j+1}(s,t) \ge m$,
satisfying \ref{ir1}.
\end{subproof}
We now apply \cref{iterramsey} iteratively, for a maximum of $s-j$ iterations.
If \ref{ir1} holds, at any iteration, then $M$ has a $t$-coechidna of order $m$,
as required.
Otherwise, we let $\pi'$ be the partition of $Z'$ induced by $\pi$; then, at the next iteration, we relabel $Z=Z'$ and $\pi=\pi'$.
If \ref{ir2} holds for each of $s-j$ iterations, then we obtain a subset $Z'$ of $Z$ such that the smallest circuit in $M|Z'$ has size $2s$.
Then, by \ref{rc2}, $M$ has an $s$-echidna of order $h_{s}(s,t)=m$,
completing the proof. \end{proof}
\section{Properties of \texorpdfstring{$(s,t)$}{(s,t)}-spikes} \label{sec:tspikeprops}
In this section, we prove some properties of $(s,t)$-spikes. In particular, we show that an $(s,t)$-spike has order at least $s+t-1$; an $(s,t)$-spike of order~$m$ has $2m$ elements and rank~$m+s-t$; and the circuits of an $(s,t)$-spike that are not a union of $s$ arms meet all but at most $t-2$ of the arms. We also give some results about the connectivity of $(s,t)$-spikes of sufficiently large order.
We also show that an appropriate concatenation of the associated partition of a $t$-spike is a $(2t-1)$-anemone, following the terminology of~\cite{ao2008}. Finally, we describe a construction that can be used to obtain an $(s,t+1)$-spike from an $(s,t)$-spike of sufficiently large order, and we show that every $(s,t+1)$-spike can be constructed from some $(s,t)$-spike in this way.
We again assume that $s$ and $t$ are positive integers.
\subsection*{Basic properties}
\begin{lemma}
\label{tspikeorder}
Let $M$ be an $(s,t)$-spike with associated partition $(A_1,\ldots,A_m)$. Then $m \ge s+t-1$. \end{lemma} \begin{proof}
By the definition of an $(s,t)$-spike, we have $m\geq\max\{s,t\}$. Let $Y = \bigcup_{j \in [t]}A_j$, and let $y\in Y$. Since $Y$ is a cocircuit, $Z=(E(M)-Y) \cup \{y\}$ spans $M$. Therefore, $r(M)\leq|Z|=2m-2t+1$. Similarly, by duality, $r^*(M)\leq2m-2s+1$. Therefore, \[2m = |E(M)| = r(M) + r^*(M) \le (2m-2t+1)+(2m-2s+1).\] The result follows. \end{proof} \begin{lemma}
\label{lem:rank-matroid}
Let $M$ be an $(s,t)$-spike of order~$m$.
Then $r(M)=m+s-t$ and $r^*(M)=m-s+t$. \end{lemma} \begin{proof}
Let $(A_1,\ldots,A_m)$ be the associated partition of $M$, and let $A_i = \{x_i,y_i\}$ for each $i \in [m]$. Choose $I \subseteq J \subseteq [m]$ such that $|I|=s-1$ and $|J| = m-t$. (This is possible by \cref{tspikeorder}.)
Let $X = \{y_i : i \in I\} \cup \{x_j : j \in J\}$.
Note that $\bigcup_{i \in I\cup J}A_i\subseteq\cl(X)$; indeed, for each $j \in J-I$, the set $A_j \cup \bigcup_{i \in I}A_i$ is a union of $s$ pairs and hence a circuit whose elements other than $y_j$ all lie in $X$, so $y_j\in\cl(X)$.
Since $E(M)-\bigcup_{i \in I\cup J}A_i$ is a cocircuit, $\bigcup_{i \in I\cup J}A_i$ is a hyperplane.
Therefore, $\bigcup_{i \in I\cup J}A_i=\cl(X)$, and we have $r(M)-1=r(X)\leq|X|=|I|+|J|=m+s-t-1$. Thus, $r(M)\leq m+s-t$. Similarly, by duality, $r^*(M)\leq m-s+t$.
Therefore, we have \[2m=|E(M)|=r(M)+r^*(M)\leq(m+s-t)+(m-s+t)=2m.\] Thus, we must have equality, and the result holds. \end{proof}
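As a sanity check, consider $U_{2s-1,\,2s+2t-2}$, whose circuits are exactly the $2s$-element subsets and whose cocircuits are exactly the $2t$-element subsets; with any partition of its ground set into pairs, it is an $(s,t)$-spike of order $m=s+t-1$, and indeed $r(M)=2s-1=m+s-t$ and $r^*(M)=2t-1=m-s+t$, in agreement with \cref{lem:rank-matroid}. Note that this example also attains the bound of \cref{tspikeorder} with equality.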
\sloppy \begin{lemma}
\label{l:circuits}
Let $M$ be an $(s,t)$-spike of order~$m$ with associated partition $(A_1,\ldots,A_m)$, and
let $C$ be a circuit of $M$. Then either
\begin{enumerate}
\item
$C = \bigcup_{j \in J}A_j$ for some $s$-element set $J \subseteq [m]$, or\label{c1}
\item
$\left|\{i \in [m] : A_i \cap C \neq \emptyset\}\right| \ge m-(t-2)$ and
$\left|\{i \in [m] : A_i \subseteq C\}\right| < s$.\label{c2}
\end{enumerate} \end{lemma} \fussy \begin{proof}
Let $S = \{i \in [m] : A_i \cap C \neq \emptyset\}$. Thus, $S$ is the minimal subset of $[m]$ such that $C \subseteq \bigcup_{i \in S}A_i$.
We have $|S| \ge s$, since otherwise $C$ is contained in the union of fewer than $s$ arms, which is independent.
If $|S|=s$, then $C$ satisfies \ref{c1}.
Therefore, we may assume $|S| > s$.
We must have $\left|\{i \in [m] : A_i \subseteq C\}\right| < s$; otherwise $C$ properly contains a circuit.
Thus, there is some $j \in S$ such that $A_j - C \neq \emptyset$.
If $|S| \ge m-(t-2)$, then $C$ satisfies \ref{c2}.
Therefore, we may assume $|S| \le m-(t-1)$.
Let $T = ([m]-S) \cup \{j\}$. Then $|T|\ge t$, implying that $\bigcup_{i \in T}A_i$ contains a cocircuit intersecting $C$ in one element.
This contradicts orthogonality. \end{proof}
In the remainder of the paper, if $(A_1,\ldots,A_m)$ is the associated partition of an $(s,t)$-spike and $J\subseteq[m]$, then we define \[A_J=\bigcup_{j \in J} A_j.\]
\begin{proposition}
\label{pro:rank-func}
Let $\pi=(A_1,\ldots,A_m)$ be the associated partition of an $(s,t)$-spike. If $J\subseteq[m]$, then
\[r(A_J) =
\begin{cases}
2|J| & \textrm{if $|J| < s$,}\\
s+|J|-1 & \textrm{if $s\leq|J| \leq m-t+1$,}\\
m+s-t & \textrm{if $|J| \ge m-t+1$.}
\end{cases}\] \end{proposition}
\begin{proof}
If $|J|<s$, then $A_J$ is properly contained in a circuit and is therefore independent. Thus, $r(A_J)=|A_J|=2|J|$.
We now prove that $r(A_J)=s+|J|-1$ if $s\leq|J| \leq m-t+1$. We proceed by induction on $|J|$. As a base case, if $|J|=s$, then $A_J$ is a circuit. Therefore, $r(A_J)=|A_J|-1=s+|J|-1$. Now, for the inductive step, let $s<|J|\leq m-t+1$, and let $J'\subseteq J$ with $|J'|=|J|-1$. By induction, $r(A_{J'})=s+|J|-2$. Let $\{x_i,y_i\}=A_J-A_{J'}$. By \cref{l:circuits}, since $|J|<m-t+2$, there is no circuit $C$ such that $x_i\in C\subseteq A_{J'}\cup\{x_i\}$. Therefore, $x_i\notin\cl(A_{J'})$, and $r(A_{J'}\cup\{x_i\})=r(A_{J'})+1$. On the other hand, since $|J|>s$, there is a circuit $C$ such that $y_i\in C\subseteq A_{J}$. Therefore, $y_i\in\cl(A_{J'}\cup\{x_i\})$, and $r(A_J)=r(A_{J'})+1=s+|J|-1$.
Note that the preceding argument, along with \cref{lem:rank-matroid}, implies that, if $|J|=m-t+1$, then $A_J$ is spanning. Thus, if $|J|\geq m-t+1$, then $r(A_J)=r(M)=m+s-t$. \end{proof}
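To make the rank function concrete, consider the case $s=t=2$, so that the union of any two arms is a $4$-element circuit and a $4$-element cocircuit. Then \cref{pro:rank-func} specialises to
\[r(A_J) =
\begin{cases}
2 & \textrm{if $|J| = 1$,}\\
|J|+1 & \textrm{if $2\leq|J| \leq m-1$,}\\
m & \textrm{if $|J| \ge m-1$,}
\end{cases}\]
recovering the rank behaviour of a classical tipless spike of rank $m$.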
\subsection*{Connectivity}
Let $M$ be a matroid with ground set $E$. Recall that the \emph{connectivity function} of $M$, denoted by $\lambda$, is defined as \begin{align*}
\lambda(X) = r(X) + r(E - X) - r(M), \end{align*} for all subsets $X$ of $E$. In the case where $M$ is an $(s,t)$-spike of order $m$ and $X=A_J$ for some set $J\subseteq[m]$, this implies \begin{align*}
\lambda(A_J) = r(A_J) + r(A_{[m]-J}) - r(M). \end{align*}
Therefore, \cref{pro:rank-func} allows us to easily compute $\lambda(A_J)$.
\begin{lemma}
\label{lem:conn}
Let $\pi=(A_1,\ldots,A_m)$ be the associated partition of an $(s,t)$-spike, and let $(J,K)$ be a partition of $[m]$ with $|J| \le |K|$.
\begin{enumerate}
\item If $|J|\leq t-1$, then $\lambda(A_J)=r(A_J)$.
\item If $t-1\leq|J|\leq m-s$, then \[\lambda(A_J)=
\begin{cases}
t+|J|-1 & \textrm{if $|J| < s$,}\\
s+t-2 & \textrm{if $s\leq|J|\leq m-t+1$.}
\end{cases}\]
\item If $|J|> m-s$, then $\lambda(A_J)=m-s+t$.
\end{enumerate} \end{lemma}
\begin{proof}
If $|J|\leq t-1$, then $|K|\geq m-t+1$. Therefore, $A_K$ is spanning, and $\lambda(A_J)=r(A_J)+r(A_K)-r(M)=r(A_J)$. Statement (i) follows.
If $t-1\leq|J|\leq m-s$, then $s\leq|K|\leq m-t+1$. Therefore, $\lambda(A_J)=r(A_J)+r(A_K)-r(M)=r(A_J)+s+m-|J|-1-(m+s-t)$. Statement (ii) follows. (Note that we cannot have $|J|>m-t+1$ because otherwise $|K|<t-1\leq|J|$.)
If $|J|> m-s$, then $s>|K|\geq|J|$. Therefore, $\lambda(A_J)=r(A_J)+r(A_K)-r(M)=2|J|+2(m-|J|)-(m+s-t)=m-s+t$. Statement (iii) follows. \end{proof}
Using the terminology of~\cite{ao2008}, \cref{lem:conn} implies the following.
\begin{proposition} \label{pro:anemone}
Let $(A_1,\dotsc,A_m)$ be the associated partition of an $(s,t)$-spike~$M$, and suppose that $(P_1,\dotsc,P_k)$ is a partition of $E(M)$ such that, for each $i \in [k]$, $P_i = \bigcup_{j \in I_i}A_j$ for some subset $I_i$ of $[m]$ with $|I_i| \ge \max\{s-1,t-1\}$. Then $(P_1,\dotsc,P_k)$ is an $(s+t-1)$-anemone. \end{proposition}
We now continue our study of the connectivity of $(s,t)$-spikes.
\begin{lemma} \label{ind-and-coind}
Let $M$ be an $(s,t)$-spike of order $m\geq3\max\{s,t\}-2$, and let $X\subseteq E(M)$ be such that $|X|\leq2\min\{s,t\}-1$. Then $\lambda(X)=|X|$. \end{lemma}
\begin{proof}
By Lemma \ref{l:circuits}, if $X$ is dependent, then either $|X|\geq2s$ or $|X|\geq m-t+2\geq 3\max\{s,t\}-2-t+2=3\max\{s,t\}-t\geq2\max\{s,t\}\geq2s$. However, $|X|\leq2\min\{s,t\}-1<2s$. Therefore, $X$ is independent, which implies that $r(X)=|X|$.
By a similar argument, using the dual of \cref{l:circuits}, $X$ is coindependent, implying that $r(E(M)-X)=r(M)$. Therefore,
\begin{align*}
\lambda(X)&=r(X)+r(E(M)-X)-r(M)\\
&=|X|+r(M)-r(M)\\
&=|X|,
\end{align*} proving the lemma. \end{proof}
\begin{theorem} Let $M$ be an $(s,t)$-spike of order \[m\geq\max\{3s+t,s+3t\}-4,\] where $\min\{s,t\}\geq2$. Then $M$ is $(2\min\{s,t\}-1)$-connected. \end{theorem}
\begin{proof} Because $M^*$ is a $(t,s)$-spike and because $\lambda_{M^*}=\lambda_M$, we may assume without loss of generality that $t\leq s$. Note that $\max\{3s+t,s+3t\}=3\max\{s,t\}+\min\{s,t\}$. Therefore, $m\geq3s+t-4$, and we must show that $M$ is $(2t-1)$-connected.
Now, suppose for a contradiction that $M$ is not $(2t-1)$-connected. Then there is a $k$-separation $(P,Q)$ of $M$, with $|P|\geq|Q|$, for some $k<2t-1$. Therefore, $\lambda(P)=\lambda(Q)<k\leq2t-2$.
First, we consider the case where $A_I \subseteq P$, for some $(t-1)$-element set $I \subseteq [m]$. Let $U = \{u \in [m] : |P \cap A_u|= 1\}$. Then $A_j \subseteq \cl_{M^*}(P)$ for each $j \in U$. For such a $j$, it follows, by the definition of $\lambda_{M^*}$ (which is equal to $\lambda_M=\lambda$), that $\lambda(P \cup A_j) \le \lambda(P)$. We use this repeatedly below; in particular, we see that $\lambda(P\cup A_U)\leq\lambda(P)$.
Let $P' = P\cup A_U$, and let $Q' = E(M)-P'$. Then there is a partition $(J,K)$ of $[m]$, with $|J|\leq|K|$, such that $Q'=A_J$ and $P'=A_K$. Moreover, $\lambda(Q')=\lambda(P')\leq\lambda(P)$.
Suppose $|J|\geq t-1$. Note that $m\geq3s+t-4\geq2s$ since $\min\{s,t\}\geq2$. Therefore, $|J|\leq\frac{1}{2}m=m-\frac{1}{2}m\leq m-\frac{1}{2}(2s)=m-s$. Thus, to determine $\lambda(Q')$, we need only consider Lemma \ref{lem:conn}(ii). If $|J|\geq s$, then by Lemma \ref{lem:conn}(ii), \[\lambda(P)\geq\lambda(P')=\lambda(Q')=s+t-2\geq2t-2,\] a contradiction. Otherwise, $|J|<s$, implying by Lemma \ref{lem:conn}(ii) that \[\lambda(P)\geq\lambda(P')=\lambda(Q')=t+|J|-1\geq t+t-1-1=2t-2,\] another contradiction.
Therefore, $|J|<t-1$. Note that $|Q|\geq2t-2$; otherwise, since $|Q|\leq2t-3\leq2\min\{s,t\}-1$ and $m\geq3s+t-4\geq3\max\{s,t\}-2$, \cref{ind-and-coind} would give $\lambda(Q)=|Q|$, contradicting the fact that $\lambda(Q)<k\leq|Q|$. Let $U'\subseteq U$ be such that $|U'|=|Q|-(2t-2)$. Then $\lambda(P) \ge \lambda\left(P \cup A_{U'}\right) = \lambda\left(Q- A_{U'}\right)$. Since $\left|Q- A_{U'}\right| = 2t-2$ and $m\geq3s+t-4\geq3s-2$, \cref{ind-and-coind} implies that $\lambda\left(Q-A_{U'}\right)=2t-2$, so $\lambda(P) \ge 2t-2$, a contradiction.
Now we consider the case that $|\{i \in [m] : A_i \subseteq P\}| < t-1$. Since $|Q| \le |P|$, it follows that $|\{i \in [m] : A_i \subseteq Q\}| \le |\{i \in [m] : A_i \subseteq P\}| < t-1<s$.
Now, since $|\{i \in [m] : A_i \subseteq P\}| < t-1$, we have $|\{i \in [m] : A_i \cap Q \neq \emptyset\}| > m-(t-1)$. Therefore, $r(Q) \ge m-(t-1)$ by \cref{l:circuits}. Similarly, $r(P) \ge m-(t-1)$. Thus,
\begin{align*}
\lambda(P) &= r(P) + r(Q) - r(M) \\
&\ge (m-(t-1)) + (m-(t-1)) - (m+s-t) \\
&=m-s-t+2 \\
&\ge 3s+t-4-s-t+2 \\
&= 2s-2\\
&\ge 2t-2,
\end{align*}
a contradiction. This completes the proof. \end{proof}
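To make the bound concrete: when $s=t=2$ we have $\max\{3s+t,s+3t\}-4=4$ and $2\min\{s,t\}-1=3$, so the theorem asserts that every $(2,2)$-spike of order at least $4$ is $3$-connected.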
\subsection*{Constructions} In \cite{bccgw2019}, a construction is described that, starting from a $(t,t)$-spike $M_0$, obtains a $(t+1,t+1)$-spike $M_1$. This construction consists of a certain elementary quotient $M_0'$ of $M_0$, followed by a certain elementary lift $M_1$ of $M_0'$. It is shown in \cite{bccgw2019} that $M_1$ is a $(t+1,t+1)$-spike as long as the order of $M_0$ is sufficiently large.
In the process of constructing $M_1$ in this way, the intermediary matroid $M_0'$ is a $(t,t+1)$-spike. For the sake of completeness, we will review this construction in the more general case where $M_0$ is an $(s,t)$-spike, in which case $M_0'$ is an $(s,t+1)$-spike. To construct an $(s+1,t)$-spike, we perform the construction on $M^*$ and dualize. Since $(2,2)$-spikes (and indeed, $(1,1)$-spikes) are well known to exist, this means that $(s,t)$-spikes exist for all positive integers $s$ and $t$.
It is also shown in \cite{bccgw2019} that all $(t,t)$-spikes can be constructed in this manner. We also extend this to the general case of $(s,t)$-spikes below.
Recall that $M_1$ is an \emph{elementary quotient} of $M_0$ if there is a single-element extension $M^+_0$ of $M_0$ by an element~$e$ such that $M_1 = M^+_0 / e$. If $M_1$ is an elementary quotient of $M_0$, then $M_0$ is an \emph{elementary lift} of $M_1$. Also, note that if $M_1$ is an elementary lift of $M_0$, then $M_1^*$ is an elementary quotient of $M_0^*$.
\begin{construction} \label{cons:quotient} Let $M$ be an $(s,t)$-spike of order~$m \ge s+t$, with associated partition $\pi$. Let $M+e$ be a single-element extension of $M$ by an element $e$ such that $e$ blocks each $2t$-element cocircuit that is a union of $t$ arms of $M$. Then let $M'=(M+e)/e$. \end{construction}
In other words, $M+e$ has the property that $e\notin \cl_{M+e}(E(M)-C^*)$ for every $2t$-element cocircuit $C^*$ that is the union of $t$ arms. Note that one possibility is that $M+e$ is the free extension of $M$ by an element $e$. Since $m-t\geq s$, we have $e\notin\cl_{M+e}(C)$ for each $2s$-element circuit $C$. Thus, in $M'$, the union of any $s$ arms of the $(s,t)$-spike $M$ is still a circuit of $M'$. However, since $r(M') = r(M) - 1$, the union of any $t+1$ arms is a $2(t+1)$-element cocircuit. Therefore, $M'$ is an $(s,t+1)$-spike.
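As a consistency check on \cref{cons:quotient}, note that $|E(M')|=2m$ and $r(M')=r(M)-1$, so
\[r(M')=m+s-(t+1)\quad\text{and}\quad r^*(M')=2m-r(M')=m-s+(t+1),\]
which are exactly the rank and corank that \cref{lem:rank-matroid} requires of an $(s,t+1)$-spike.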
Note that $M'$ is not unique; more than one $(s,t+1)$-spike can be constructed from a given $(s,t)$-spike $M$ using \cref{cons:quotient}. Given an $(s+1,t)$-spike~$M'$, we will describe how to obtain an $(s,t)$-spike~$M$ from $M'$ by a specific elementary quotient. This process reverses the dual of \cref{cons:quotient}. This will then imply that every $(s,t)$-spike can be constructed from a $(1,1)$-spike by repeated use of \cref{cons:quotient} and its dual. \cref{modcut} describes the single-element extension that gives rise to the elementary quotient we desire. Intuitively, the extension adds a ``tip'' to the $(s,t)$-spike. In the proof of this lemma, we assume knowledge of the theory of modular cuts (see \cite[Section~7.2]{oxbook}).
The proof of \cref{modcut} will be very similar to the proof of \cite[Lemma 6.6]{bccgw2019}. However, we note that \cite[Lemma 6.6]{bccgw2019} is misstated; what is actually proven in \cite{bccgw2019} is essentially the specialisation of \cref{modcut}, below, to the case $s=t$.
The statement of \cite[Lemma 6.6]{bccgw2019} replaces the condition that $M$ is a $(t,t)$-spike with the weaker condition that $M$ has a $t$-echidna. To demonstrate that this is overly general, consider the rank-$3$ matroid consisting of two disjoint four-point lines. Let these lines be $\{a,b,c,d\}$ and $\{w,x,y,z\}$. Then $(\{a,b\},\{w,x\})$ is a $2$-echidna of order $2$. For \cite[Lemma 6.6]{bccgw2019} to be true, we would need a single-element extension $M^+$ by an element $e$ such that $e\in\cl_{M^+}(\{a,b\})$ but $e\notin\cl_{M^+}(\{c,d\})$. This is impossible since $\cl_M(\{a,b\})=\cl_M(\{c,d\})$.
\begin{lemma}
\label{modcut}
Let $M$ be an $(s,t)$-spike.
There is a single-element extension $M^+$ of $M$ by an element $e$ having the property that, for every $X \subseteq E(M)$, $e \in \cl_{M^+}(X)$ if and only if $X$ contains at least $s-1$ arms of $M$. \end{lemma}
\begin{proof}
Since $M$ is an $(s,t)$-spike, there is a partition $\pi=(S_1,\dotsc,S_m)$ of $E(M)$ that is both an $s$-echidna and a $t$-coechidna. Let $$\mathcal{F} = \left\{\bigcup_{i\in I}S_i : I \subseteq [m] \textrm{ and } |I|=s-1\right\}.$$ By the definition of an $s$-echidna, $\mathcal{F}$ is a collection of flats of $M$. Let $\mathcal{M}$ be the set of all flats of $M$ containing some flat $F \in \mathcal{F}$. We claim that $\mathcal{M}$ is a modular cut. Recall that, for distinct $F_1,F_2 \in \mathcal{M}$, the pair $(F_1,F_2)$ is \emph{modular} if $r(F_1) + r(F_2) = r(F_1 \cup F_2) + r(F_1 \cap F_2)$. To show that $\mathcal{M}$ is a modular cut, it suffices to prove that, for any $F_1,F_2 \in \mathcal{M}$ such that $(F_1,F_2)$ is a modular pair, $F_1 \cap F_2 \in \mathcal{M}$.
For any $F \in \mathcal{M}$, since $F$ contains at least $s-1$ arms of $M$, and the union of any $s$ arms is a circuit, it follows that $F$ is a union of arms of $M$. Thus, let $F_1,F_2 \in \mathcal{M}$ be such that $F_1=\bigcup_{i\in I_1}S_i$ and $F_2=\bigcup_{i\in I_2}S_i$, where $I_1$ and $I_2$ are distinct subsets of $[m]$ with $u_1=|I_1| \ge s-1$ and $u_2=|I_2|\ge s-1$.
Let $q=|I_1 \cap I_2|$. Then $F_1 \cup F_2$ is the union of $u_1 + u_2 - q \ge s-1$ arms, and $F_1\cap F_2$ is the union of $q$ arms. We show that if $q<s-1$, then $(F_1,F_2)$ is not a modular pair.
We consider several cases. First, suppose $u_1,u_2\leq m-t+1$. By \cref{pro:rank-func}, \begin{align*}
r(F_1) + r(F_2) &= (s + u_1 - 1) + (s + u_2 - 1) \\
&>
(s-1 + u_1 + u_2 - q) +2q \\
&= s+|I_1\cup I_2|-1+2|I_1\cap I_2| \\
&\geq r(F_1 \cup F_2) + r(F_1 \cap F_2).
\end{align*}
Next, consider the case where $u_2\leq m-t+1<u_1$. (By symmetry, the argument is the same if $u_1$ and $u_2$ are swapped.) One can check that $u_1+u_2-q>m-t+1$. By \cref{pro:rank-func}, \begin{align*}
r(F_1) + r(F_2) &= (m+s-t) + (s + u_2 - 1) \\
&> (m + s-t)+2q\\
&= r(F_1 \cup F_2) + r(F_1 \cap F_2).
\end{align*}
Finally, consider the case where $u_1,u_2>m-t+1$. We have
\[r(F_1) + r(F_2) = 2m+2s -2t,\] which by \cref{tspikeorder}, is at least
\begin{align*}
m+3s-t-1
&> m+s-t+2q\\
&= r(F_1 \cup F_2) + r(F_1 \cap F_2).
\end{align*}
Thus, in all cases, $(F_1,F_2)$ is not a modular pair. Therefore, we have shown that $\mathcal{M}$ is a modular cut. Now, there is a single-element extension corresponding to the modular cut~$\mathcal{M}$, and this extension satisfies the requirements of the lemma (see, for example, \cite[Theorem~7.2.3]{oxbook}). \end{proof}
\begin{theorem} Let $M$ be an $(s,t)$-spike of order $m\geq s+t$. Then $M$ can be constructed from a $(1,1)$-spike of order $m$ by applying \cref{cons:quotient} $t-1$ times, followed by the dual of \cref{cons:quotient} $s-1$ times. \end{theorem}
\begin{proof} For $s=t=1$, the result is clear. Otherwise, by duality, we may assume without loss of generality that $s>1$. By induction and duality, it suffices to show that $M$ can be constructed from an $(s-1,t)$-spike of order $m$ by applying the dual of \cref{cons:quotient} once.
Let $\pi=(A_1,\dotsc,A_m)$ be the associated partition of $M$. Let $M^+$ be the single-element extension of $M$ by an element~$e$ described in \cref{modcut}.
Let $M'=M^+/e$. We claim that $\pi$ is an $(s-1)$-echidna and a $t$-coechidna that partitions the ground set of $M'$.
Let $X$ be the union of any $s-1$ arms of $\pi$. Then $X$ is independent in $M$, and $X \cup \{e\}$ is a circuit in $M^+$, so $X$ is a circuit in $M'$. Thus, $\pi$ is an $(s-1)$-echidna of $M'$. Now let $C^*$ be the union of any $t$ arms of $\pi$, and let $H=E(M)-C^*$. Then $H$ is the union of at least $s-1$ arms, so $e \in \cl_{M^+}(H)$. Now $H \cup \{e\}$ is a hyperplane in $M^+$, so $C^*$ is a cocircuit in $M^+$ and therefore in $M'$. Hence $\pi$ is a $t$-coechidna of $M'$.
Note that $M'$ is an elementary quotient of $M$, so $M$ is an elementary lift of $M'$ where none of the $2(s-1)$-element circuits of $M'$ are preserved in $M$. So the $(s,t)$-spike $M$ can be obtained from the $(s-1,t)$-spike $M'$ using the dual of \cref{cons:quotient}. \end{proof}
\end{document} |
\begin{document}
\begin{abstract} In this note we make a few remarks about the geometry of the holomorphic symplectic manifold $Z$ constructed in \cite{LLSvS} as a two-step contraction of the variety of twisted cubic curves on a cubic fourfold $Y\subset \mathbb{P}^5$.
We show that $Z$ is birational to a component of the moduli space of stable sheaves in the Calabi-Yau subcategory of the derived category of $Y$.
Using this description we deduce that the twisted cubics contained in a hyperplane section $Y_H = Y \cap H$ of $Y$ give rise to a Lagrangian subvariety $Z_H \subset Z$. For a generic choice of the hyperplane, $Z_H$ is birational to the theta-divisor in the intermediate Jacobian $\IJac{Y_H}$. \end{abstract}
\title{On the geometry of the Lehn--Lehn--Sorger--van Straten eightfold}
\tableofcontents
\section{Introduction}
We work over the field of complex numbers. Throughout the paper $Y\subset \mathbb{P}^5$ is a smooth cubic fourfold not containing a plane. In \cite{LLSvS} the variety $M_3(Y)$ of generalized twisted cubic curves on $Y$ was studied. It was shown that $M_3(Y)$ is 10-dimensional, smooth and irreducible. Starting from this variety an 8-dimensional irreducible holomorphic symplectic (IHS) manifold $Z$ was constructed. More precisely, it was shown that there exist morphisms \begin{equation}\label{eqn_contractions} M_3(Y)\stackrel{a}{\longrightarrow} Z'\stackrel{\sigma}{\longrightarrow} Z, \end{equation} and \begin{equation}\label{eqn_embedding} \mu\colon Y\hookrightarrow Z, \end{equation} where $a$ is a $\mathbb{P}^2$-fibre bundle and $\sigma$ is the blow-up along the image of $\mu$.
It was later shown in \cite{AL} that $Z$ is birational --- and hence deformation equivalent --- to a Hilbert scheme of four points on a K3 surface.
In this paper we present another point of view on $Z$. We show that an open subset of $Z$ can be described as a moduli space of Gieseker stable torsion-free sheaves of rank $3$ on $Y$.
Kuznetsov and Markushevich \cite{KM} have constructed a closed two-form on any moduli space of sheaves on $Y$. Properties of the Kuznetsov-Markushevich form are known to be closely related to the structure of the derived category of $Y$. The bounded derived category $\EuScript{D}^b(Y)$ of coherent sheaves on $Y$ has an exceptional collection $\mathcal O_Y$, $\mathcal O_Y(1)$, $\mathcal O_Y(2)$ with right orthogonal $\mathcal A_Y$, so that $\EuScript{D}^b(Y)=\langle\mathcal A_Y, \mathcal O_Y, \mathcal O_Y(1), \mathcal O_Y(2)\rangle$. The category $\mathcal A_Y$ is a Calabi-Yau category of dimension two, meaning that its Serre functor is the shift by $2$ \cite[Section 4]{K}.
It was shown in \cite{KM} that the two-form on moduli spaces of sheaves on $Y$ is non-degenerate if the sheaves lie in $\mathcal A_Y$. The torsion-free sheaves mentioned above lie in $\mathcal A_Y$. This gives an alternative description of the symplectic form on $Z$:
{\bf Theorem \ref{thm_MMF}.} {\it The component $\mathcal M_F$ of the moduli space of Gieseker stable rank 3 sheaves on $Y$ with Hilbert polynomial $\frac38 n^4 + \frac94 n^3 + \frac{33}{8} n^2 + \frac94 n$ is birational to the IHS manifold $Z$. Under this birational equivalence the symplectic form on $Z$ defined in \cite{LLSvS} corresponds to the Kuznetsov-Markushevich form on $\mathcal M_F$.}
A similar approach relying on the description of an open part of $Z$ as a moduli space was used by Addington and Lehn in \cite{AL} to prove that the variety $Z$ is a deformation of a Hilbert scheme of four points on a K3 surface. In \cite{O} Ouchi considered the case of cubic fourfolds containing a plane. He proved that a birational model of the LLSvS variety can be described as a moduli space of Bridgeland-stable objects in the derived category of a twisted K3 surface. Moreover, in this situation one also has a Lagrangian embedding of the cubic fourfold into the LLSvS variety as in (\ref{eqn_embedding}).
Another similar construction was proposed in \cite{LMS}, where it was proved that $Z$ is birational to a component of the moduli space of stable vector bundles of rank $6$ on $Y$.
Using the birational equivalence between $Z$ and the moduli space of sheaves on $Y$ we show that twisted cubics lying in hyperplane sections $Y_H$ of $Y$ give rise to Lagrangian subvarieties in $Z$ and discuss the geometry of these subvarieties:
{\bf Theorem.} {\it Denote by $Z_H$ the image in $Z$ of twisted cubics lying in a hyperplane section $Y_H = Y\cap H$ under the map $a$ from (\ref{eqn_contractions}). If $Y$ and $H$ are generic, then $Z_H$ is a Lagrangian subvariety of $Z$ which is birational to the theta-divisor of the intermediate Jacobian of $Y_H$.} \begin{proof} See Proposition \ref{prop_Lagr} and Theorem \ref{thm_AJ}. \end{proof}
This is analogous to the case of lines on $Y$: it is well-known that lines on $Y$ form an IHS fourfold, and lines contained in hyperplane sections of $Y$ form Lagrangian surfaces in this fourfold, see for example \cite{V}.
{\bf Acknowledgements} The authors would like to thank Alexander Kuznetsov, Daniel Huybrechts, Christoph Sorger, Manfred Lehn and Yukinobu Toda for useful discussions and remarks.
\section{Twisted cubics and sheaves on a cubic fourfold}
\subsection{Twisted cubics on cubic surfaces and determinantal representations}
Let us recall the structure of the general fibre of the map $a\colon M_3(Y)\to Z'$ in (\ref{eqn_contractions}). We follow \cite{LLSvS} in notation and terminology and we refer to \cite{EPS, LLSvS} for all details about the geometry of twisted cubics.
Consider a cubic surface $S=Y\cap \mathbb{P}^3$ where $\mathbb{P}^3$ is a general linear subspace in $\mathbb{P}^5$. There exist several families of generalized twisted cubics on $S$. Each of the families is isomorphic to $\mathbb{P}^2$ and these are the fibres of the map $a$. The number of families depends on $S$. If the surface is smooth there are 72 families, corresponding to 72 ways to represent $S$ as a blow-up of $\mathbb{P}^2$ (and to the 72 roots in the lattice $E_6$). Each of the families is a linear system which gives a map to $\mathbb{P}^2$. If $S$ is singular, generalized twisted cubics on it can be of two different types. Curves of the first type are arithmetically Cohen-Macaulay (aCM), and those of the second type are non-CM. The detailed description of their geometry on surfaces with different singularity types can be found in \cite{LLSvS}, \S 2. For our purposes it is enough to recall that the image in $Z'$ of non-CM curves under the map $a$ is exactly the exceptional divisor of the blow-up $\sigma\colon Z'\to Z$ in (\ref{eqn_contractions}), see \cite{LLSvS}, Proposition 4.1.
In this section we deal only with aCM curves and we also assume that the surface $S$ has only ADE singularities. In this case every aCM curve belongs to a two-dimensional linear system with smooth general member, just as in the case of smooth $S$ \cite[Theorem 2.1]{LLSvS}. Moreover, these linear systems are in one-to-one correspondence with the determinantal representations of $S$. Let us explain this in detail.
Let $S$ be a cubic surface in $\mathbb{P}^3$ with at most ADE singularities. Let $\alpha \colon S\hookrightarrow \mathbb{P}^3$ denote the embedding and let $p\colon \tilde{S}\to S$ be the minimal resolution of singularities. Take a general aCM twisted cubic $C$ on $S$ and let $\tilde{C} \subset \tilde{S}$ be its proper preimage. Let $\tilde{L}=\mathcal O_{\tilde{S}}(\tilde{C})$ be the corresponding line bundle and let $L=p_*\tilde{L}$ be its direct image.
\begin{lemma}\label{lem_L} The sheaf $L$ has the following properties:
(1) $H^0(S,L)=\mathbb{C}^3$, $H^k(S,L)=0$ for $k \geqslant 1$; $H^{k}(S, L(-1)) = H^k(S, L(-2)) = 0$ for $k\geqslant 0$;
(2) We have the following resolution: \begin{equation}\label{eqn_determinantal} 0\longrightarrow\mathcal O_{\mathbb{P}^3}(-1)^{\oplus 3}\stackrel{A}{\longrightarrow}\mathcal O_{\mathbb{P}^3}^{\oplus 3}\longrightarrow \alpha_*L\longrightarrow 0, \end{equation} where $A$ is given by a $3\times 3$ matrix of linear forms on $\mathbb{P}^3$, and the surface $S$ is the vanishing locus of $\det A$;
(3) $\EuScript{E}xt^k(L,L) = 0$ for $k\geqslant 1$. \end{lemma} \begin{proof} We note that the map $\alpha\circ p\colon \tilde{S}\to \mathbb{P}^3$ is given by the anticanonical linear system on $\tilde{S}$, so we will use the notation $K_{\tilde{S}} = \mathcal O_{\tilde{S}}(-1)$.
{\it (1)} First we observe that $\mathrm{R}^mp_*\tilde{L} = 0$ for $m\geqslant 1$. This follows from the long exact sequence of higher direct images for the triple \begin{equation}\label{eqn_triple} 0\longrightarrow\mathcal O_{\tilde{S}}\longrightarrow\tilde{L}\longrightarrow \mathcal O_{\tilde{C}}\otimes \tilde{L}\longrightarrow 0, \end{equation} because the singularities of $S$ are rational, so that $\mathrm{R}^m p_*\mathcal O_{\tilde{S}} = 0$ for $m \geqslant 1$, and because $p$ induces an embedding of $\tilde{C}$ into $S$, so that $\mathrm{R}^m p_*$ vanishes for $m \geqslant 1$ on sheaves supported on $\tilde{C}$.
Analogously, $\mathrm{R}^mp_*\tilde{L}(-1) = \mathrm{R}^mp_*\tilde{L}(-2) = 0$ for $m\geqslant 1$. Hence it is enough to verify the cohomology vanishing for $\tilde{L}$.
The linear system $|\tilde{L}|$ is two-dimensional and base point free (we refer to \S 2 of \cite{LLSvS}, in particular Proposition 2.5). We also know the intersection products $\tilde{L}\cdot \tilde{L} = 1$, $\tilde{L}\cdot K_{\tilde{S}} = -3$ and $K_{\tilde{S}}\cdot K_{\tilde{S}} = 3$. Using Riemann-Roch we find $\chi(\tilde{L}) = 3$ and $\chi(\tilde{L}(-1)) = \chi(\tilde{L}(-2)) = 0$. We have $H^0(\tilde{S},\tilde{L}(-1)) = H^0(\tilde{S},\tilde{L}(-2)) = 0$
which is clear from (\ref{eqn_triple}) since $\tilde{L}|_{\tilde{C}} = \mathcal O_{\mathbb{P}^1}(1)$ and
$\mathcal O_{\tilde{S}}(1)|_{\tilde{C}}=\mathcal O_{\mathbb{P}^1}(3)$. By Serre duality we have $H^2(\tilde{S},\tilde{L}) = H^0(\tilde{S},\tilde{L}^\vee(-1))^* = 0$, $H^2(\tilde{S},\tilde{L}(-1)) = H^0(\tilde{S},\tilde{L}^\vee)^* = 0$ because $\tilde{L}^\vee$ is the ideal sheaf of $\tilde{C}$, and $H^2(\tilde{S},\tilde{L}(-2)) = H^0(\tilde{S},\tilde{L}^\vee(1))^* = 0$. The last vanishing follows from the fact that $C$ is not contained in any hyperplane in $\mathbb{P}^3$. It follows that $H^1(\tilde{S},\tilde{L}) = H^1(\tilde{S},\tilde{L}(-1)) = H^1(\tilde{S},\tilde{L}(-2)) = 0$.
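For convenience, we record the Riemann--Roch computations explicitly. Since $\tilde{S}$ is rational, $\chi(\mathcal O_{\tilde{S}})=1$, and so
\begin{align*}
\chi(\tilde{L}) &= 1+\tfrac12\,\tilde{L}\cdot(\tilde{L}-K_{\tilde{S}})=1+\tfrac12(1+3)=3,\\
\chi(\tilde{L}(-1)) &= 1+\tfrac12\,(\tilde{L}+K_{\tilde{S}})\cdot\tilde{L}=1+\tfrac12(1-3)=0,\\
\chi(\tilde{L}(-2)) &= 1+\tfrac12\,(\tilde{L}+2K_{\tilde{S}})\cdot(\tilde{L}+K_{\tilde{S}})=1+\tfrac12(1-9+6)=0.
\end{align*}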
{\it (2)} We decompose the sheaf $\alpha_*L$ with respect to the full exceptional collection $\EuScript{D}^b(\mathbb{P}^3) = \langle \mathcal O_{\mathbb{P}^3}(-1),\mathcal O_{\mathbb{P}^3},\\ \mathcal O_{\mathbb{P}^3}(1), \mathcal O_{\mathbb{P}^3}(2)\rangle$. From part {\it (1)} it follows that $\alpha_*L$ is right-orthogonal to $\mathcal O_{\mathbb{P}^3}(2)$ and $\mathcal O_{\mathbb{P}^3}(1)$. The left mutation of $\alpha_*L$ through $\mathcal O_{\mathbb{P}^3}$ is given by a cone of the morphism $\mathcal O_{\mathbb{P}^3}^{\oplus 3}\to \alpha_*L$ induced by the global sections of $L$. This cone is contained in the subcategory generated by the exceptional object $\mathcal O_{\mathbb{P}^3}(-1)$. Hence it must be equal to $\mathcal O_{\mathbb{P}^3}(-1)^{\oplus 3}[1]$, and we obtain the resolution (\ref{eqn_determinantal}) for $\alpha_*L$.
{\it (3)} Since $L$ is a vector bundle outside of the singular points of $S$, the sheaves $\EuScript{E}xt^k(L,L)$ for $k\geqslant 1$ must have zero-dimensional support. Hence it suffices to prove that $\mathrm{Ext}^k(L,L)=0$ for $k\geqslant 1$.
We first compute $\mathrm{Ext}^k(\alpha_*L,\alpha_*L)$. Applying $\mathrm{Hom}(-,\alpha_*L)$ to (\ref{eqn_determinantal}) we get the exact sequence $$ 0\longrightarrow\mathrm{Hom}(\alpha_*L,\alpha_*L)\longrightarrow H^0(\mathbb{P}^3,\alpha_*L)^{\oplus 3}\longrightarrow H^0(\mathbb{P}^3,\alpha_*L(1))^{\oplus 3}\longrightarrow \mathrm{Ext}^1(\alpha_*L,\alpha_*L)\longrightarrow 0, $$ where we use that $H^k(\mathbb{P}^3,\alpha_*L(m)) = 0$ for $k\geqslant 1$, $m\geqslant 0$ which is clear from (\ref{eqn_determinantal}). This also shows that $\mathrm{Ext}^k(\alpha_*L,\alpha_*L) = 0$ for $k\geqslant 2$. We have $\dim\mathrm{Hom}(\alpha_*L,\alpha_*L)=1$ and from the sequence above and (\ref{eqn_determinantal}) we compute $\dim\mathrm{Ext}^1(\alpha_*L,\alpha_*L)=19$.
The object $\mathrm{L} \alpha^*\alpha_*L$ is included into the triangle $\mathrm{L} \alpha^*\alpha_*L\to L\to L(-3)[2]\to \mathrm{L} \alpha^*\alpha_*L[1]$, see \cite{KM}, Lemma 1.3.1. Applying $\mathrm{Hom}(-,L)$ to this triangle and using $\mathrm{Ext}^k(\mathrm{L} \alpha^*\alpha_*L,L) = \mathrm{Ext}^k(\alpha_*L,\alpha_*L)$ we get the exact sequence $$ 0\longrightarrow \mathrm{Ext}^1(L,L)\longrightarrow \mathrm{Ext}^1(\alpha_*L,\alpha_*L)\longrightarrow \mathrm{Hom}(L,L(3))\longrightarrow \mathrm{Ext}^2(L,L)\longrightarrow 0. $$ The arrow in the middle is an isomorphism. To see this note that $\mathrm{Hom}(L,L(3))=H^0(S,N_{S/\mathbb{P}^3})=\mathbb{C}^{19}$ and that all the deformations of $\alpha_*L$ are induced by the deformations of its support $S$. It follows that $\mathrm{Ext}^1(L,L)=\mathrm{Ext}^2(L,L)=0$. As we have mentioned above the sheaves $\EuScript{E}xt^k(L,L)$ have zero-dimensional support for $k\geqslant 1$, and from the local-to-global spectral sequence we see that $\mathrm{Ext}^k(L,L)=H^0(S,\EuScript{E}xt^k(L,L))$ for $k\geqslant 1$. It follows that $\EuScript{E}xt^1(L,L)=\EuScript{E}xt^2(L,L)=0$. To prove the vanishing of higher $\EuScript{E}xt$'s we construct a quasi-periodic free resolution for $L$. From (\ref{eqn_determinantal}) we see that the restriction of the complex $\mathcal O_{\mathbb{P}^3}(-1)^{\oplus 3}\stackrel{A}{\longrightarrow}\mathcal O_{\mathbb{P}^3}^{\oplus 3}$ to $S$ will have cohomology $L$ in degree $0$ and $L(-3)$ in degree $-1$. Hence $L$ is quasi-isomorphic to the complex of the form $$ \ldots\longrightarrow\mathcal O_S(-7)^{\oplus 3}\longrightarrow\mathcal O_S(-6)^{\oplus 3}\longrightarrow\mathcal O_S(-4)^{\oplus 3}\longrightarrow\mathcal O_S(-3)^{\oplus 3}\longrightarrow\mathcal O_S(-1)^{\oplus 3}\longrightarrow\mathcal O_S^{\oplus 3}\longrightarrow 0. $$ This complex is quasi-periodic of period two, with subsequent entries obtained by tensoring by $\mathcal O_S(-3)$. 
Applying $\EuScript{H}om(-,L)$ to this complex we see that $\EuScript{E}xt^k(L,L)$ are also quasi-periodic, and vanishing of the first two of these sheaves implies vanishing of the rest.
\end{proof}
Starting from $L$, we have constructed the determinantal representation of $S$. Conversely, given a sequence (\ref{eqn_determinantal}), generalized twisted cubics corresponding to this determinantal representation can be recovered as vanishing loci of sections of $L$. More detailed discussion of determinantal representations of cubic surfaces with different singularity types can be found in \cite{LLSvS}, \S 3.
\subsection{Moduli spaces of sheaves on a cubic fourfold} Let $S=Y\cap \mathbb{P}^3$ be a linear section of $Y$ with ADE singularities and $L$ a sheaf which gives a determinantal representation of $S$ as in (\ref{eqn_determinantal}). Denote by $i\colon S\hookrightarrow Y$ the embedding. We consider the moduli space of torsion sheaves on $Y$ of the form $i_*L$ to get a description of an open subset of $Z$.
\begin{lemma}\label{lem-unobs} For any $u \in \mathrm{Ext}^1(i_*L, i_*L)$ its Yoneda square $u \circ u \in \mathrm{Ext}^2(i_*L, i_*L)$ is zero, so that the deformations of $i_*L$ are unobstructed. \end{lemma} \begin{proof}
Recall that $L$ is a rank one sheaf on $S$. The unobstructedness is clear when $S$ is smooth, because $L$ is a line bundle in this case. Then the local $\EuScript{E}xt$'s are given by $\EuScript{E}xt^k(i_*L,i_*L) = i_*\Lambda^kN_{S/Y}$ (see \cite{KM}, Lemma 1.3.2 for the proof of this). In the case when $S$ is singular and $L$ is not locally free we can use the same argument as in Lemma 1.3.2 of \cite{KM} to obtain a spectral sequence $E_2^{p,q}=i_*(\EuScript{E}xt^p(L,L)\otimes \Lambda^qN_{S/Y}) \Rightarrow \EuScript{E}xt^{p+q}(i_*L,i_*L)$. Now we can use the second part of Lemma \ref{lem_L} to conclude that in this case $\EuScript{E}xt^k(i_*L,i_*L) = i_*\Lambda^kN_{S/Y}$ as well.
We have $N_{S/Y}=\mathcal O_S(1)^{\oplus 2}$ and $H^m(S,\mathcal O_S(k)) = 0$ for $k\geqslant 0$, $m\geqslant 1$ and from the local-to-global spectral sequence we deduce that $\mathrm{Ext}^k(i_*L,i_*L)=H^0(S,\Lambda^kN_{S/Y})$. The algebra structure is induced by exterior product $\Lambda^kN_{S/Y}\otimes \Lambda^mN_{S/Y}\to \Lambda^{k+m}N_{S/Y}$ (see \cite{KM}, Lemma 1.3.3). The exterior square of any section of $N_{S/Y}$ is zero and unobstructedness follows. \end{proof}
The sheaf $i_*L$ has Hilbert polynomial $P(i_*L,n)=\frac{3}{2} n^2+\frac{9}{2} n +3$ which is easy to compute from (\ref{eqn_determinantal}). Denote by $\mathcal M_L$ the irreducible component of the moduli space of semistable sheaves with this Hilbert polynomial containing $i_*L$.
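Indeed, since $\chi(Y,i_*L(n))=\chi(\mathbb{P}^3,\alpha_*L(n))$, twisting the resolution (\ref{eqn_determinantal}) by $\mathcal O_{\mathbb{P}^3}(n)$ and taking Euler characteristics gives
$$
P(i_*L,n)=3\binom{n+3}{3}-3\binom{n+2}{3}=3\binom{n+2}{2}=\frac32 n^2+\frac92 n+3.
$$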
Let $V$ be a 6-dimensional vector space, so that $Y\subset \mathbb{P}(V)=\mathbb{P}^5$. Denote by $G$ the Grassmannian $\mathrm{Gr}(4,V)$. Recall from \cite{LLSvS} that we have a closed embedding $\mu\colon Y\hookrightarrow Z$, and the open subset $Z\backslash \mu(Y)$ corresponds to aCM twisted cubics. There exists a map $\pi: Z\backslash \mu(Y)\to G$ which sends a twisted cubic to its linear span in $\mathbb{P}^5$. A linear section $S = Y\cap \mathbb{P}^3$ can have non-ADE singularities, but the locus of such linear subspaces has codimension at least 4 in $G$ by Proposition 4.2 and Proposition 4.3 in \cite{LLSvS}. Denote by $G^\circ\subset G$ the open subset consisting of $U\in G$ such that $Y\cap \mathbb{P}(U)$ has only ADE singularities. Let $Z^\circ = \pi^{-1}(G^\circ)$ be the corresponding open subset in $Z\backslash \mu(Y)$. This open subset has complement of codimension at least 4.
\begin{lemma}\label{lem_MML} There exists an open subset $\mathcal M_L^\circ\hookrightarrow \mathcal M_L$ isomorphic to $Z^\circ$. The sheaves on $Y$ corresponding to points of $\mathcal M_L^\circ$ are of the form $i_*L$, where $L$ gives a determinantal representation for a linear section $S=Y\cap\mathbb{P}^3$ with ADE singularities. \end{lemma} \begin{proof}
Denote by $\mathcal U$ the universal subbundle of $\mathcal O_G\otimes V$. Let $p: \mathbb{P}(\mathcal U)\to G$ be the projection and $\mathcal H=\mathcal{H}om_{p}(\mathcal O_{\mathbb{P}(\mathcal U)}(-1)^{\oplus 3},\mathcal O_{\mathbb{P}(\mathcal U)}^{\oplus 3})$. We have $\mathcal H\simeq(\mathcal U^\vee)^{\oplus 9}$. We will denote by the same letter $\mathcal H$ the total space of the bundle $\mathcal H$. By construction, over $\mathcal H\times_G\mathbb{P}(\mathcal U)$ we have the universal morphism $$ \mathcal O_{\mathbb{P}(\mathcal U)}(-1)^{\oplus 3}\stackrel{\mathcal A}{\longrightarrow}\mathcal O_{\mathbb{P}(\mathcal U)}^{\oplus 3}. $$ Denote by $\mathcal H^\circ$ the open subset in the total space of $\mathcal H$ where $\mathrm{det}(\mathcal A)\neq 0$. Consider the closed embedding $j: \mathcal H^\circ\times_G\mathbb{P}(\mathcal U) \hookrightarrow \mathcal H^\circ\times \mathbb{P}(V)$
and the sheaf $\mathcal M = \mathrm{coker}(j_*\mathcal A)$ on $\mathcal H^\circ\times \mathbb{P}(V)$. Let $q: \mathcal H^\circ\times \mathbb{P}(V)\to \mathcal H^\circ$ be the projection. For a point $A\in \mathcal H^\circ$ the restriction $\mathcal M|_{q^{-1}(A)}$ is a sheaf that defines a determinantal representation of a cubic surface in $\mathbb{P}(U)\subset \mathbb{P}(V)$. The condition that this surface is contained in $Y$ defines a closed subvariety $\mathcal W\subset \mathcal H^\circ$.
Let $\beta: \mathcal W\times Y\hookrightarrow \mathcal H^\circ\times \mathbb{P}(V)$ be the closed embedding. Define $\mathcal L=\mathcal M|_{\mathcal W\times Y}$ and recall the open subset $G^\circ\subset G$ of subspaces $U\subset V$ such that $\mathbb{P}(U)\cap Y$ has only ADE singularities. Let $\mathcal W^\circ$ be the preimage of $G^\circ$ under the natural map $\mathcal W\to G$.
The sheaf $\mathcal L$ on $\mathcal W^\circ\times Y$ is flat over $\mathcal W^\circ$ since the Hilbert polynomials of its restrictions to the fibres are all the same (see \cite{H}, chapter III, Theorem 9.9). We obtain a morphism $\psi: \mathcal W^\circ\to \mathcal M_L$; denote its image by $\mathcal M_L^\circ$. Consider the fibre $\mathcal W_U$ of the map $\mathcal W^\circ\to G^\circ$ over a point $U\in G^\circ$ and the restriction of $\mathcal L$ to $\mathcal W_U\times Y$. Over a point $w\in \mathcal W_U$ the sheaf $\mathcal L$ defines a determinantal representation of the surface $Y\cap \mathbb{P}(U)$. The general structure of determinantal representations (see \cite{LLSvS}, \S 3) implies that each connected component of the fibre $\mathcal W_U$ is a single $(\mathrm{GL}_3\times \mathrm{GL}_3)/\mathbb{C}^*$-orbit (\cite{LLSvS}, Corollary 3.7), and the connected components of $\mathcal W_U$ are in one-to-one correspondence with non-isomorphic determinantal representations of $Y\cap \mathbb{P}(U)$. The restriction of $\mathcal L$ to each connected component of $\mathcal W_U\times Y$ is a constant family of sheaves, so the map $\psi$ contracts the connected components of the fibre $\mathcal W_U$. From the explicit description of $Z^\circ$ given above we see that $\mathcal M_L^\circ$ is isomorphic to $Z^\circ$. The properties stated in the lemma are clear from the construction. We also see that $\mathcal W^\circ$ is a $(\mathrm{GL}_3\times \mathrm{GL}_3)/\mathbb{C}^*$-fibre bundle over $Z^\circ$.
The sheaves $i_*L$ are not contained in the subcategory $\mathcal A_Y$. In order to show that the closed 2-form described in \cite{KM} is a symplectic form on $\mathcal M_L^\circ$, we are going to project the sheaves $i_*L$ to $\mathcal A_Y$, and then show that this projection induces an isomorphism of open subsets of moduli spaces respecting the 2-forms (up to a sign).
\begin{lemma}\label{lem_proj} The sheaves $i_*L$ are globally generated and lie in the subcategory $\langle\mathcal A_Y,\mathcal O_Y\rangle$. The space of global sections $H^0(Y, i_*L)$ is three-dimensional, and the sheaf $F_L$, defined by the exact triple \begin{equation}\label{eqn_F} 0\longrightarrow F_L\longrightarrow \mathcal O_Y^{\oplus 3}\longrightarrow i_*L\longrightarrow 0, \end{equation} lies in $\mathcal A_Y$. \end{lemma} \begin{proof} From Lemma \ref{lem_L} we deduce that $i_*L$ is right orthogonal to $\mathcal O_Y(1)$ and $\mathcal O_Y(2)$, so that $i_*L$ lies in $\langle\mathcal A_Y,\mathcal O_Y\rangle$. It also follows from Lemma \ref{lem_L} that $i_*L$ is globally generated, that the space of global sections is three-dimensional and that the higher cohomology groups of $L$ vanish. Thus $F_L$ is (up to a shift) the left mutation of $i_*L$ through the exceptional bundle $\mathcal O_Y$, and in particular it lies in $\mathcal A_Y$. \end{proof}
\begin{lemma}\label{lem_F} Consider the exact triple (\ref{eqn_F}) where $i_*L$ is in $\mathcal M_L^\circ$. Then $F_L$ is a Gieseker-stable rank 3 sheaf contained in $\mathcal A_Y$ with Hilbert polynomial $P(F_L, n) = \frac38 n^4 + \frac94 n^3 + \frac{33}{8} n^2 + \frac94 n$. \end{lemma} \begin{proof} By Lemma \ref{lem_MML} the sheaf $i_*L$ is right-orthogonal to $\mathcal O_Y(2)$ and $\mathcal O_Y(1)$. The sheaf $F_L$ is a shift of the left mutation of $i_*L$ through $\mathcal O_Y$, hence it is contained in $\mathcal A_Y$. The Hilbert polynomial can be computed using the Hirzebruch-Riemann-Roch formula. It remains to check the stability of $F_L$.
The sheaf $F_L$ is a subsheaf of $\mathcal O_Y^{\oplus 3}$, hence it has no torsion. In order to check the stability we consider all proper saturated subsheaves $\mathcal G\subset F_L$. We have to make sure that $p(\mathcal G, n)< p(F_L, n)$ where $p$ is the reduced Hilbert polynomial (see \cite{HL} for all the relevant definitions). We use the convention that the inequalities between polynomials are supposed to hold for $n \gg 0$.
We denote by $P$ the non-reduced Hilbert polynomial. We have $P(\mathcal O_Y, n) = a_0 n^4 + a_1 n^3 +\ldots + a_4$, with the leading coefficient $a_0 = \frac{3}{4!}$. From the exact sequence (\ref{eqn_F}) we see that $P(F_L, n) = 3P(\mathcal O_Y,n) - P(i_*L,n)$. Since $i_*L$ has two-dimensional support, the degree of $P(i_*L,n)$ is two, and hence the leading coefficient of $P(F_L, n)$ equals $3a_0$. So we have \begin{equation}\label{pFF_C} p(F_L, n) = p(\mathcal O_Y, n) - \frac{1}{3a_0}P(i_*L, n). \end{equation}
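As a consistency check, $P(\mathcal O_Y,n)=\binom{n+5}{5}-\binom{n+2}{5}=\frac18\left(n^4+6n^3+15n^2+18n+8\right)$, so that
$$
P(F_L,n)=3P(\mathcal O_Y,n)-P(i_*L,n)=\frac38 n^4+\frac94 n^3+\frac{33}{8} n^2+\frac94 n,
$$
in agreement with the polynomial in the statement of the lemma.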
Let $\tilde{\mathcal G}$ be the saturation of $\mathcal G$ inside $\mathcal O_Y^{\oplus 3}$. Then $\tilde{\mathcal G}$ is a reflexive sheaf and we have a diagram: $$ \begin{tikzcd}[] 0\rar & \mathcal G\dar\rar & \tilde{\mathcal G}\dar\rar & \mathcal H\dar\rar & 0\\ 0\rar & F_L\rar & \mathcal O_Y^{\oplus 3}\rar & i_*L\rar & 0 \end{tikzcd} $$ In this diagram $\mathcal H$ is a torsion sheaf which injects into $i_*L$ because $F_L/\mathcal G$ is torsion-free. Note that $\mathcal O_Y^{\oplus 3}$ is Mumford-polystable, so $c_1(\mathcal G)\leqslant c_1(\tilde{\mathcal G})\leqslant 0$. If $c_1(\mathcal G)< 0$ then $\mathcal G$ is not destabilizing in $F_L$ because $c_1(F_L) = 0$.
Next we consider the case $c_1(\mathcal G)= c_1(\tilde{\mathcal G})= 0$. In this case $\tilde{\mathcal G}=\mathcal O_Y^{\oplus m}$ where $m=1$ or $m=2$. This is clear if $\mathrm{rk}\,{\tilde{\mathcal G}}=1$ since a reflexive sheaf of rank one is a line bundle. If $\mathrm{rk}\,{\tilde{\mathcal G}}=2$ we can consider the quotient $\mathcal O_Y^{\oplus 3}/\tilde{\mathcal G}$ which is torsion-free, globally generated, of rank one and has zero first Chern class. It follows that the quotient is isomorphic to $\mathcal O_Y$ and then $\tilde{\mathcal G}=\mathcal O_Y^{\oplus 2}$.
We have an exact triple $0\longrightarrow\mathcal G\longrightarrow\mathcal O_Y^{\oplus m}\longrightarrow\mathcal H\longrightarrow 0$ with $m$ equal to $1$ or $2$. We see that $p(\mathcal G,n) = p(\mathcal O_Y,n) - \frac{1}{ma_0}P(\mathcal H, n)$. Note that $\mathcal H$ is non-zero (otherwise $\mathcal G=\mathcal O_Y^{\oplus m}$, which is impossible since $\mathrm{Hom}(\mathcal O_Y,F_L) = 0$) and injects into $i_*L$; since the sheaf $L$ on the surface $S$ is torsion-free of rank one, the support of $\mathcal H$ is two-dimensional. Hence the leading coefficient of $P(\mathcal H, n)$ is the same as for $P(i_*L, n)$ and, since $m<3$, this implies $\frac{1}{ma_0}P(\mathcal H, n)> \frac{1}{3a_0}P(i_*L, n)$. From this and (\ref{pFF_C}) we conclude that $p(\mathcal G,n)<p(F_L,n)$, hence $\mathcal G$ is not destabilizing. This completes the proof.
\end{proof}
Let us consider the moduli space of rank 3 semistable sheaves on $Y$ with Hilbert polynomial $P(F_L,n)$. Denote by $\mathcal M_F$ its irreducible component which contains the sheaves $F_L$ from (\ref{eqn_F}).
\begin{lemma}\label{lem_mutation} The left mutation of $i_*L$ through $\mathcal O_Y$ gives an open embedding $\mathcal M_L^\circ\to \mathcal M_F$. \end{lemma} \begin{proof} Recall from the proof of Lemma \ref{lem_MML} that $\mathcal M_L^\circ$ was defined as the image of a map $\mathcal W^\circ\to \mathcal M_L$, where $\mathcal W^\circ$ is a fibre bundle over $Z^\circ$. On $X = \mathcal W^\circ\times Y$ a universal sheaf $\mathcal L$ flat over $\mathcal W^\circ$ was constructed. Denote by $\pi\colon X\to \mathcal W^\circ$ the projection.
By definition of $\mathcal M_L^\circ$ and from Lemma \ref{lem_L} it follows that $\pi_*\mathcal L$ is a rank 3 vector bundle and we have an exact sequence $0\to\mathcal F_\mathcal L\to\pi^*\pi_*\mathcal L\to \mathcal L\to 0$. The family of sheaves $\mathcal F_\mathcal L$ defines a map $\mathcal W^\circ\to \mathcal M_F$ which factors through $\mathcal M_L^\circ\to \mathcal M_F$. We will show that the differential of the latter map is an isomorphism.
For a sheaf $i_*L$ corresponding to a point of $\mathcal M_L^\circ$ and any tangent vector $u\in\mathrm{Ext}^1(i_*L,i_*L)$ we have a unique morphism of triangles \begin{equation}\label{eqn_mutation} \begin{tikzcd}[] F_L \dar{u'}\rar & \mathcal O_Y^{\oplus 3} \dar{0}\rar & i_*L \dar{u}\rar & F_L[1] \dar{u'[1]} \\ F_L[1] \rar & \mathcal O_Y^{\oplus 3}[1] \rar & i_*L[1]\rar & F_L[2] \end{tikzcd} \end{equation} Uniqueness of $u'$ follows from $\mathrm{Ext}^1(\mathcal O_Y,F_L) = 0$. Moreover, $u$ is uniquely determined by $u'$ because $\mathrm{Ext}^1(i_*L,\mathcal O_Y) = \mathrm{Ext}^3(\mathcal O_Y,i_*L(-3))^* = 0$. This shows that the mutation induces an isomorphism between $\mathrm{Ext}^1(i_*L,i_*L)$ and $\mathrm{Ext}^1(F_L,F_L)$.
Finally, let us prove that the map $\mathcal M_L^\circ\to \mathcal M_F$ is injective. It follows from Grothendieck-Verdier duality that $\EuScript{E}xt^2(i_*L,\mathcal O_Y)=i_*L^\vee(2)$. Then from (\ref{eqn_F}) we see that $\EuScript{E}xt^1(F_L,\mathcal O_Y)=i_*L^\vee(2)$ and hence $L$ can be reconstructed from $F_L$. \end{proof}
\subsection{The symplectic form and Lagrangian subvarieties}
Let us recall the description of the two-form on the moduli spaces of sheaves on $Y$ from \cite{KM}.
Given a coherent sheaf $\mathcal F$ on $Y$ we can define its Atiyah class $\Ati{\mathcal F}\in\mathrm{Ext}^1(\mathcal F,\mathcal F\otimes\Omega_Y)$. The Atiyah class is functorial, meaning that for any morphism of sheaves $\alpha\colon \mathcal F\to \mathcal G$ we have $\Ati{\mathcal G}\circ\alpha=(\alpha\otimes \mathrm{id})\circ\Ati{\mathcal F}$.
We define a bilinear form $\sigma$ on the vector space $\mathrm{Ext}^1(\mathcal F,\mathcal F)$. Given two elements $u,v\in \mathrm{Ext}^1(\mathcal F,\mathcal F)$ we consider the composition $\Ati{\mathcal F}\circ u\circ v\in \mathrm{Ext}^3(\mathcal F,\mathcal F\otimes\Omega_Y)$ and apply the trace map $\mathrm{Tr}\colon\mathrm{Ext}^3(\mathcal F,\mathcal F\otimes\Omega_Y)\to \mathrm{Ext}^3(\mathcal O_Y,\Omega_Y)= H^{1,3}(Y)= \mathbb{C}$ to it: \begin{equation}\label{eqn_sigma} \sigma(u,v)=\mathrm{Tr}(\Ati{\mathcal F}\circ u\circ v). \end{equation}
Note that when the Kuranishi space of $\mathcal F$ is smooth, for any $u\in \mathrm{Ext}^1(\mathcal F,\mathcal F)$ we have $u\circ u=0$ and hence $\sigma(u,u)=0$. In this case $\sigma$ is antisymmetric. Hence the formula (\ref{eqn_sigma}) defines a two-form at smooth points of moduli spaces of sheaves on $Y$. This form is closed by \cite{KM}, Theorem 2.2.
\begin{lemma}\label{lem_symplectic} The formula (\ref{eqn_sigma}) defines a symplectic form on $\mathcal M_L^\circ$ which coincides up to a non-zero constant with the restriction of the symplectic form on $Z$ under the isomorphism $\mathcal M_L^\circ\simeq Z^\circ$. \end{lemma} \begin{proof} By Lemma \ref{lem-unobs} the sheaves $i_*L$ from $\mathcal M_L^\circ$ have unobstructed deformations, so that (\ref{eqn_sigma}) indeed defines a two-form.
Recall from Lemma \ref{lem_mutation} that we have an open embedding $\mathcal M_L^\circ\hookrightarrow \mathcal M_F$. Let us show that this embedding respects (up to a sign) symplectic forms on $\mathcal M_L$ and $\mathcal M_F$ given by (\ref{eqn_sigma}). Note that by functoriality of Atiyah classes the following diagram gives a morphism of triangles: $$ \begin{tikzcd}[] F_L \dar{\Ati{F_L}}\rar & \mathcal O_Y^{\oplus 3} \dar{\Ati{\mathcal O_Y^{\oplus 3}} = 0}\rar & i_*L \dar{\Ati{i_*L}}\rar & F_L[1] \dar{\Ati{F_L}[1]} \\ F_L\otimes\Omega_Y[1] \rar & \Omega_Y^{\oplus 3}[1] \rar & i_*L\otimes\Omega_Y[1]\rar & F_L\otimes\Omega_Y[2] \end{tikzcd} $$ For any pair of tangent vectors $u,v\in \mathrm{Ext}^1(i_*L,i_*L)$ we have two morphisms of triangles as in (\ref{eqn_mutation}). If we compose these two morphisms of triangles with the one induced by Atiyah classes then we get the following: $$ \begin{tikzcd}[] F_L \dar{\Ati{F_L}\circ u'\circ v'}\rar & \mathcal O_Y^{\oplus 3} \dar{0}\rar &
i_*L \dar{\Ati{i_*L}\circ u\circ v}\rar & F_L[1] \dar{\Ati{F_L}\circ u'\circ v'[1]} \\ F_L\otimes\Omega_Y[3] \rar & \Omega_Y^{\oplus 3}[3] \rar & i_*L\otimes\Omega_Y[3]\rar & F_L\otimes\Omega_Y[4] \end{tikzcd} $$ This diagram is a morphism of triangles and the additivity of traces implies that $\sigma(u,v)=-\sigma(u',v')$.
By Theorem 4.3 from \cite{KM} the form $\sigma$ on $\mathcal M_F$ is symplectic, because the sheaves $F_L$ are contained in $\mathcal A_Y$. Hence $\sigma$ is a symplectic form on $\mathcal M_L^\circ$. But $\mathcal M_L^\circ$ is embedded into $Z$ as an open subset with complement of codimension four. This implies that the symplectic form on $\mathcal M_L^\circ$ is unique up to a constant, because $Z$ is IHS. This completes the proof. \end{proof}
\begin{thm}\label{thm_MMF} The component $\mathcal M_F$ of the moduli space of Gieseker stable sheaves with Hilbert polynomial $P(F_L, n)$ is birational to the IHS manifold $Z$. Under this birational equivalence the symplectic form on $Z$ defined in \cite{LLSvS} corresponds to the Kuznetsov-Markushevich form on $\mathcal M_F$. \end{thm} \begin{proof} Follows from Lemmas \ref{lem_MML}, \ref{lem_F}, \ref{lem_mutation}, \ref{lem_symplectic}. \end{proof}
Now we explain how hyperplane sections of $Y$ give rise to Lagrangian subvarieties of $Z$.
Let $H\subset \mathbb{P}^5$ be a generic hyperplane, so that $Y_H=Y\cap H$ is a smooth cubic threefold. Twisted cubics contained in $Y_H$ form a subvariety $M_3(Y)_H\subset M_3(Y)$, whose image in $Z$ we denote by $Z_H$. Its open subset $Z_H^\circ = Z_H\cap Z^\circ$ consists of the sheaves $i_*L$ whose support is contained in $H$.
\begin{prop}\label{prop_Lagr} $Z_H$ is a Lagrangian subvariety of $Z$. \end{prop} \begin{proof} It is clear that $Z_H$ has dimension four since the Grassmannian of three-dimensional subspaces in $H$ is $\mathbb{P}^4$. Consider a sheaf $i_*L$ whose support $S$ is smooth and contained in $Y_H$. Since $L$ is a locally free sheaf on $S$ we have $\EuScript{E}xt^k(i_*L,i_*L)=i_*\Lambda^kN_{S/Y}$ (see for example \cite{KM}, Lemma 1.3.2). The higher cohomologies of the sheaves $\EuScript{E}xt^k(i_*L,i_*L)$ vanish for $k\geqslant 0$, because $N_{S/Y}=\mathcal O_S(1)^{\oplus 2}$ and the sheaves $\mathcal O_S(k)$ have no higher cohomologies for $k\geqslant 0$. Hence from the local-to-global spectral sequence we find that $T_{i_*L}\mathcal M_L=\mathrm{Ext}^1(i_*L, i_*L)=H^0(S,N_{S/Y})$. Moreover, the Yoneda multiplication on $\mathrm{Ext}$'s is given by the map $H^0(S,N_{S/Y})\times H^0(S,N_{S/Y})\to H^0(S,\Lambda^2N_{S/Y})$ which is induced from the exterior product morphism $N_{S/Y}\otimes N_{S/Y}\to \Lambda^2N_{S/Y}$ (see \cite{KM}, Lemma 1.3.3). Now, the tangent space to $Z_H$ at $i_*L$ is $H^0(S,N_{S/Y_H})$. But the exterior product $N_{S/Y_H}\otimes N_{S/Y_H}\to \Lambda^2N_{S/Y_H}=0$ vanishes because $N_{S/Y_H}$ is of rank one. So the Yoneda product vanishes on the corresponding subspace of $\mathrm{Ext}^1(i_*L, i_*L)$ and from the definition of the symplectic form (\ref{eqn_sigma}) we conclude that the tangent subspace to $Z_H$ is Lagrangian. This holds on an open subset of $Z_H$, so $Z_H$ is a Lagrangian subvariety. \end{proof}
In the next section we give a description of the subvarieties $Z_H$ in terms of intermediate Jacobians of the threefolds $Y_H$.
\section{Twisted cubics on a cubic threefold}
In this section we assume that the cubic fourfold $Y$ and its hyperplane section $Y_H$ are chosen generically, so that $Y_H$ is smooth and all the surfaces obtained by intersecting $Y_H$ with three-dimensional subspaces have at worst ADE singularities. For general $Y$ and $H$ this will indeed be the case, because the hyperplane sections of a general cubic threefold in $\mathbb{P}^4$ have only ADE singularities. One can see this by a dimension count, considering the codimensions of the loci of cubic surfaces with different singularity types (see for example \cite{LLSvS}, sections 2.2 and 2.3).
The cubic threefold $Y_H$ has an intermediate Jacobian $\IJac{Y_H}$ which is a principally polarized abelian variety.
We will show that for a general hyperplane $H$ the Abel-Jacobi map \[ \mathrm{AJ}\colon Z_H\to \IJac{Y_H} \] defines a closed embedding on the open subset $Z_H^\circ$, while the complement $Z_H\backslash Z_H^\circ$ is contracted to a point. The image of $\mathrm{AJ}$ is the theta-divisor $\Theta \subset \IJac{Y_H}$.
Recall from the description of $Z$ that we have an embedding $\mu\colon Y\hookrightarrow Z$. We have $Z_H^\circ \simeq Z_H\setminus \mu(Y)$ and $Z_H\cap \mu(Y)\simeq Y_H$. Hence the Abel-Jacobi map $\mathrm{AJ}\colon Z_H\to \IJac{Y_H}$ gives a resolution of the unique singular point of the theta-divisor, and the exceptional divisor of this map is isomorphic to $Y_H$. This explicit description of the singularity of the theta-divisor, first obtained in \cite{B2}, implies the Torelli theorem for cubic threefolds.
The fact that $Z_H$ is birational to the theta-divisor in $\IJac{Y_H}$ also follows from \cite{I} (see also \cite[Proposition 4.2]{B1}).
\subsection{Differential of the Abel-Jacobi map}
As before, we will identify the open subset $Z_H^\circ$ with an open subset in the moduli space of sheaves of the form $i_*L$, where $i\colon S\hookrightarrow Y_H$ is a hyperplane section and $L$ is a sheaf which gives a determinantal representation (\ref{eqn_determinantal}) of this section.
The Abel-Jacobi map $\mathrm{AJ}\colon Z_H^\circ\to \IJac{Y_H}$ can be described as follows. We use the Chern classes with values in the Chow ring $\mathrm{CH}(Y_H)$. The second Chern class $c_2(i_*L) \in \mathrm{CH}^2(Y_H)$ is a cycle class of degree $3$. Let $h\in \mathrm{CH}^1(Y_H)$ denote the class of a hyperplane section, then $c_2(i_*L)-h^2$ is a cycle class homologous to zero, and it defines an element in the intermediate Jacobian.
Since $c_2(i_* L)$ can be represented by the corresponding twisted cubics, the map above extends to $\mathrm{AJ}\colon Z_H \to \IJac{Y_H}$.
\begin{lemma}\label{lem_dAJ} The differential of the Abel-Jacobi map $d\mathrm{AJ}_{i_*L}\colon \mathrm{Ext}^1(i_*L,i_*L)\to H^{1,2}(Y_H)$ at the point corresponding to the sheaf $i_*L$ is given by \begin{equation}\label{eqn_dAJ} d\mathrm{AJ}_{i_*L}(u) = \frac12 \mathrm{Tr}(\Ati{i_*L}\circ u), \end{equation} for any $u\in \mathrm{Ext}^1(i_*L,i_*L)$. \end{lemma} \begin{proof} We apply the general formula for the derivative of the Abel-Jacobi map, see Appendix \ref{appendix}, Proposition \ref{prop_dAJ_general}. We have $c_1(i_*L) = 0$, so that $s_2(i_*L) = -2c_2(i_*L)$, which yields the $\frac12$ factor in the statement. \end{proof}
It will be convenient for us to rewrite (\ref{eqn_dAJ}) in terms of the linkage class of a sheaf, see \cite{KM}. We recall its definition in our particular case of the embedding $j\colon Y_H\hookrightarrow \mathbb{P}^4$. If $\mathcal F$ is a sheaf on $Y_H$ then the object $\mathrm{L} j^*j_*\mathcal F\in \EuScript{D}^b(Y_H)$ has non-zero cohomologies only in degrees $-1$ and $0$, equal to $\mathcal F\otimes N_{Y_H/\mathbb{P}^4}^\vee = \mathcal F(-3)$ and $\mathcal F$ respectively. Hence we have the triangle \[ \mathcal F(-3)[1]\longrightarrow \mathrm{L} j^*j_*\mathcal F\longrightarrow \mathcal F\longrightarrow \mathcal F(-3)[2]. \] The last morphism in this triangle is called the linkage class of $\mathcal F$ and will be denoted by $\epsilon_\mathcal F\colon \mathcal F\to \mathcal F(-3)[2]$. The linkage class can also be described as follows (see \cite{KM}, Theorem 3.2): let us denote by $\kappa\in \mathrm{Ext}^1(\Omega_{Y_H},\mathcal O_{Y_H}(-3))$
the extension class of the conormal sequence $0\to\mathcal O_{Y_H}(-3)\to\Omega_{\mathbb{P}^4}|_{Y_H}\to \Omega_{Y_H}\to 0$; then we have $\epsilon_{\mathcal F} = (\mathrm{id}_{\mathcal F}\otimes \kappa)\circ\Ati{\mathcal F}$.
Note that composition with $\kappa$ gives an isomorphism of the vector spaces $H^{1,2}(Y_H)=\mathrm{Ext}^2(\mathcal O_{Y_H},\Omega_{Y_H})$ and $\mathrm{Ext}^3(\mathcal O_{Y_H}, \mathcal O_{Y_H}(-3)) = H^0(Y_H,\mathcal O_{Y_H}(1))^*$. Composing the right hand side of (\ref{eqn_dAJ}) with $\kappa$ and using the fact that taking traces commutes with compositions, we obtain the following expression for $d\mathrm{AJ}(u)$, where $u\in\mathrm{Ext}^1(i_*L,i_*L)$: \begin{equation}\label{eqn_dAJ2} \kappa \circ d\mathrm{AJ}_{i_*L}(u) = \frac12 \mathrm{Tr}(\epsilon_{i_*L}\circ u) \in H^0(Y_H, \mathcal O_{Y_H}(1))^*. \end{equation}
\begin{prop}\label{prop_dAJ} The differential of the Abel-Jacobi map (\ref{eqn_dAJ}) is injective.
\end{prop} \begin{proof} As before, we will denote by $i\colon S\hookrightarrow Y_H$ and $j\colon Y_H\hookrightarrow \mathbb{P}^4$ the embeddings. A point of $Z_H^\circ$ is represented by a sheaf $i_*L$. Let us also use the notation $\mathcal F = i_*L$. It suffices to show that the map $u \mapsto \kappa \circ d\mathrm{AJ}_{i_*L}(u)$ is injective.
The proof is done in three steps.
{\it Step 1.} Let us construct a locally free resolution of $j_*\mathcal F$. We decompose $j_*\mathcal F$ with respect to the exceptional collection $\mathcal O_{\mathbb{P}^4}(-2)$, $\mathcal O_{\mathbb{P}^4}(-1)$, $\mathcal O_{\mathbb{P}^4}$, $\mathcal O_{\mathbb{P}^4}(1)$, $\mathcal O_{\mathbb{P}^4}(2)$. The sheaf $j_*\mathcal F$ is already left-orthogonal to $\mathcal O_{\mathbb{P}^4}(2)$ and $\mathcal O_{\mathbb{P}^4}(1)$ (see Lemma \ref{lem_L}). It is globally generated by (\ref{eqn_determinantal}), and its left mutation is the shift of the sheaf $\mathcal K$ from the exact triple $0\longrightarrow\mathcal K\longrightarrow\mathcal O_{\mathbb{P}^4}^{\oplus 3}\longrightarrow j_*\mathcal F\longrightarrow 0$. From the cohomology exact sequence we see that $H^0(\mathbb{P}^4,\mathcal K(1)) = \mathbb{C}^6$ and $H^k(\mathbb{P}^4,\mathcal K(1))=0$ for $k\geqslant 1$. We can also check that $\mathcal K(1)$ is globally generated (it is in fact Castelnuovo-Mumford $0$-regular, as one can see using (\ref{eqn_determinantal})). The left mutation of $\mathcal K$ through $\mathcal O_{\mathbb{P}^4}(-1)$ is the shifted kernel of the surjection $\mathcal O_{\mathbb{P}^4}(-1)^{\oplus 6}\to \mathcal K$, and it lies in the subcategory generated by $\mathcal O_{\mathbb{P}^4}(-2)$. Since this kernel has rank 3, it is isomorphic to $\mathcal O_{\mathbb{P}^4}(-2)^{\oplus 3}$, which completes the construction of the resolution for $j_*\mathcal F$. The resulting resolution is: \begin{equation}\label{eqn_resjF} 0\longrightarrow\mathcal O_{\mathbb{P}^4}(-2)^{\oplus 3}\longrightarrow \mathcal O_{\mathbb{P}^4}(-1)^{\oplus 6}\longrightarrow \mathcal O_{\mathbb{P}^4}^{\oplus 3}\longrightarrow j_*\mathcal F\longrightarrow 0. \end{equation}
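The dimension count for $\mathcal K(1)$ is Euler-characteristic bookkeeping: using the vanishing of the higher cohomology of $\mathcal K(1)$ and of $j_*\mathcal F(1)$, we have
$$
h^0(\mathbb{P}^4,\mathcal K(1))=\chi(\mathcal K(1))=3\chi(\mathcal O_{\mathbb{P}^4}(1))-\chi(j_*\mathcal F(1))=3\cdot 5-P(i_*L,1)=15-9=6.
$$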
{\it Step 2.} Let us show that the linkage class $\epsilon_\mathcal F$ induces an isomorphism \[ \mathrm{Ext}^1(\mathcal F,\mathcal F) \to \mathrm{Ext}^3(\mathcal F,\mathcal F(-3)). \] The object $\mathrm{L} j^*j_*\mathcal F$ fits into the triangle $$ \mathrm{L} j^*j_*\mathcal F\longrightarrow \mathcal F\stackrel{\epsilon_\mathcal F}{\longrightarrow} \mathcal F(-3)[2]\longrightarrow \mathrm{L} j^*j_*\mathcal F[1]. $$ Applying $\mathrm{Hom}(\mathcal F,-)$ to this triangle we find the following exact sequence: $$ \mathrm{Ext}^1(\mathcal F,\mathrm{L} j^*j_*\mathcal F)\longrightarrow \mathrm{Ext}^1(\mathcal F,\mathcal F)\stackrel{\epsilon_\mathcal F\circ-}{\longrightarrow} \mathrm{Ext}^3(\mathcal F,\mathcal F(-3)) \longrightarrow \mathrm{Ext}^2(\mathcal F,\mathrm{L} j^*j_*\mathcal F). $$ Note that by (\ref{eqn_resjF}) the object $\mathrm{L} j^*j_*\mathcal F$ is represented by a complex of the form $0\to \mathcal O_{Y_H}(-2)^{\oplus 3}\to \mathcal O_{Y_H}(-1)^{\oplus 6}\to \mathcal O_{Y_H}^{\oplus 3}\to 0$. Let us check that $\mathrm{Ext}^2(\mathcal F,\mathrm{L} j^*j_*\mathcal F) = 0$. By Serre duality $\mathrm{Ext}^q(\mathcal F,\mathcal O_{Y_H}(-p)) =\mathrm{Ext}^{3-q}(\mathcal O_{Y_H}(-p),\mathcal F(-2))^* = H^{3-q}(Y_H,\mathcal F(p-2))^*$ and from (\ref{eqn_determinantal}) we see that for $p=0$ and $1$ these cohomology groups vanish, and for $p=2$ the only non-vanishing group corresponds to $q=3$. The spectral sequence computing $\mathrm{Ext}^k(\mathcal F,\mathrm{L} j^*j_*\mathcal F)$, obtained from the complex representing $\mathrm{L} j^*j_*\mathcal F$, implies that $\mathrm{Ext}^k(\mathcal F,\mathrm{L} j^*j_*\mathcal F)=0$ for $k\neq 1$ and $\mathrm{Ext}^1(\mathcal F,\mathrm{L} j^*j_*\mathcal F)=H^0(Y_H,\mathcal F)^*=\mathbb{C}^3$.
We conclude that the map $\mathrm{Ext}^1(\mathcal F,\mathcal F)\stackrel{\epsilon_\mathcal F\circ-}{\longrightarrow} \mathrm{Ext}^3(\mathcal F,\mathcal F(-3))$ is surjective. It is actually an isomorphism, because the vector spaces are of the same dimension. The dimensions can be computed in the same way as in the proof of Lemma \ref{lem_symplectic}.
{\it Step 3.} Let us show that $\mathrm{Tr}: \mathrm{Ext}^3(\mathcal F, \mathcal F(-3)) \to H^3(Y_H, \mathcal O_{Y_H}(-3))$ is injective.
Using Serre duality we identify the dual to the trace map with \[ \mathrm{Tr}^*: H^0(Y_H, \mathcal O_{Y_H}(1)) \to \mathrm{Hom}(\mathcal F, \mathcal F(1)). \] One can show as in the proof of Lemma \ref{lem-unobs} that $\mathrm{Hom}(\mathcal F, \mathcal F(1)) = H^0(S, \mathcal O(1))$ and postcomposing $\mathrm{Tr}^*$ with this isomorphism gives the restriction map \[ H^0(Y_H,\mathcal O_{Y_H}(1))\to H^0(S,\mathcal O_S(1)) \] which is surjective.
We see that the composition \[ \mathrm{Ext}^1(\mathcal F, \mathcal F) \to \mathrm{Ext}^3(\mathcal F, \mathcal F(-3)) \to H^3(Y_H, \mathcal O(-3)) \] is injective and the proof is finished by means of formula (\ref{eqn_dAJ2}). \end{proof}
\subsection{Image of the Abel-Jacobi map}
\begin{thm} \label{thm_AJ} Assume that $Y_H$ is smooth and all its hyperplane sections have at worst ADE singularities. Then the image of the Abel-Jacobi map $\mathrm{AJ}\colon Z_H\to \IJac{Y_H}$ is the theta-divisor $\Theta \subset \IJac{Y_H}$. The map $\mathrm{AJ}$ is an embedding on $Z_H^\circ$ and contracts the divisor $Y_H = Z_H \backslash Z_H^\circ$ to the unique singular point of $\Theta$. \end{thm} \begin{proof} The divisor $Y_H$ is contracted by the Abel-Jacobi map to a point because $Y_H$ is a cubic threefold which has no global one-forms.
To identify the image of $\mathrm{AJ}$ it is enough to check that a general point of $Z_H$ is mapped to a point of $\Theta$. A general point $z\in Z_H$ is represented by a smooth twisted cubic $C$ on a smooth hyperplane section $S\subset Y_H$. Denote by $C_0\subset S$ a hyperplane section of $S$. Then $C-C_0$ is a degree zero cycle on $Y_H$ and $z$ is mapped to the corresponding element of the intermediate Jacobian. The cohomology class $[C-C_0]\in H^2(S,\mathbb{Z})$ is orthogonal to the class of the canonical bundle $K_S$ and has square $-2$. Hence it is a root in the $E_6$ lattice. All such cohomology classes can be represented by differences of pairs of lines $l_1-l_2$ in 6 different ways.
Recall that the Fano variety of lines on the cubic threefold $Y_H$ is a surface which we will denote by $X$. It was shown in \cite{CG} that the theta divisor $\Theta\subset \IJac{Y_H}$ can be described as the image of the map $X\times X\to \IJac{Y_H}$ which sends a pair of lines $(l_1,l_2)$ to the point in $\IJac{Y_H}$ corresponding to the degree zero cycle $l_1-l_2$.
The map $X \times X \to \Theta$ has degree $6$. We get a commutative diagram: $$ \begin{tikzcd}[] X \times X \arrow[dashed]{d}[swap]{6:1} \arrow{r}{6:1} & \Theta \\ Z_H \arrow{ur}[swap]{\mathrm{AJ}} & \end{tikzcd} $$
It follows from the diagram above that $\mathrm{AJ}$ is generically of degree one. Since $\mathrm{AJ}$ is \'etale on $Z_H^\circ$ by Proposition \ref{prop_dAJ} and the theta-divisor $\Theta$ is a normal variety \cite[Proposition 2, \S3]{B2} we deduce that $\mathrm{AJ}: Z_H^\circ \to \Theta$ is an open embedding. This completes the proof. \end{proof}
\appendix
\section{Differential of the Abel-Jacobi map}\label{appendix}
Let $X$ be a smooth complex projective variety of dimension $n$. Recall that the $p$-th intermediate Jacobian of $X$ is the complex torus $$ \mathrm{J}^p(X) = H^{2p-1}(X,\mathbb{C})/(F^pH^{2p-1}(X,\mathbb{C})+H^{2p-1}(X,\mathbb{Z})), $$ where $F^{\sdot}$ denotes the Hodge filtration.
We use the Abel-Jacobi map \cite[Appendix A]{G2} \[ \mathrm{AJ}^p\colon \mathrm{CH}^p(X,\mathbb{Z})_{h}\to \mathrm{J}^p(X) \] where $\mathrm{CH}^p(X,\mathbb{Z})_{h}$ is the group of homologically trivial codimension $p$ algebraic cycles on $X$ up to rational equivalence.
For a coherent sheaf $\mathcal F_0$ on $X$ we consider integral characteristic classes \[ s_p(\mathcal F_0) = p! \cdot ch_p(\mathcal F_0) \in CH^p(X,\mathbb{Z}) \] where $ch_p(\mathcal F_0)$ is the $p$'th component of the Chern character $ch(\mathcal F_0)$. These classes can be expressed in terms of the Chern classes using Newton's formula \cite[\S16]{MS}.
Let us consider a deformation of $\mathcal F_0$ over a smooth base $B$ with base point $0\in B$, that is, a coherent sheaf $\mathcal F$ on $X\times B$ flat over $B$ and with $\mathcal F_0 \simeq \mathcal F|_{\pi_B^{-1}(0)}$. We will denote by $\pi_B$ and $\pi_X$ the two projections from $X\times B$ and by $\mathcal F_t$ the restriction of $\mathcal F$ to $\pi_B^{-1}(t)$, $t\in B$. In this setting the difference $s_p(\mathcal F_t)-s_p(\mathcal F_0)$ is contained in $\mathrm{CH}^p(X, \mathbb{Z})_{h}$ and we get an induced Abel-Jacobi map \[ \mathrm{AJ}^p_{\mathcal F}: B \to J^p(X). \]
Since the classes $s_p$ are additive, it follows that if $0 \to \mathcal F' \to \mathcal F \to \mathcal F'' \to 0$ is a short exact sequence of sheaves on $X \times B$ flat over $B$, then \begin{equation}\label{AJ-add} \mathrm{AJ}^p_{\mathcal F} = \mathrm{AJ}^p_{\mathcal F'} + \mathrm{AJ}^p_{\mathcal F''}. \end{equation}
Recall that a coherent sheaf $\mathcal F_0$ has an Atiyah class $\Ati{\mathcal F_0}\in \mathrm{Ext}^1(\mathcal F_0,\mathcal F_0\otimes\Omega_X)$ \cite[1.6]{KM}. The vector space $\bigoplus_{p,q\geqslant 0}\mathrm{Ext}^q(\mathcal F_0,\mathcal F_0\otimes\Omega_X^p)$ has the structure of a bi-graded algebra with multiplication induced by Yoneda product of $\mathrm{Ext}$'s and exterior product of differential forms and this defines the $p$'th power of the Atiyah class \[ \Ati{\mathcal F_0}^p \in \mathrm{Ext}^p(\mathcal F_0,\mathcal F_0\otimes\Omega^p_X). \]
Given any tangent vector $v\in T_{0}B$ we shall denote its Kodaira-Spencer class by $\mathrm{KS}_{\mathcal F_0}(v)\in \mathrm{Ext}^1(\mathcal F_0,\mathcal F_0)$ and we consider the composition $\Ati{\mathcal F_0}^p \circ \mathrm{KS}_{\mathcal F_0}(v) \in \mathrm{Ext}^{p+1}(\mathcal F_0, \mathcal F_0 \otimes \Omega_X^p)$.
We will also use the trace maps \cite[1.2]{KM} $$\mathrm{Tr}\colon \mathrm{Ext}^q(\mathcal F_0,\mathcal F_0\otimes\Omega_X^p)\to \mathrm{Ext}^q(\mathcal O_X,\Omega_X^p)=H^{p,q}(X).$$
\begin{prop}\label{prop_dAJ_general} In the above setting the differential of the Abel-Jacobi map $\mathrm{AJ}_{\mathcal F}^p: B \to \mathrm{J}^p(X)$, $p \geqslant 2$, at $0 \in B$ is given by \begin{equation}\label{eqn_dAJ_general} d\mathrm{AJ}^p_{\mathcal F,0}(v)=\mathrm{Tr}\bigl( (-1)^{p-1}\Ati{\mathcal F_0}^{p-1} \circ \mathrm{KS}_{\mathcal F_0}(v)\bigr), \end{equation} for any $v\in T_{0}B$. The right hand side is an element of $H^{p-1,p}(X) \subset H^{2p-1}(X,\mathbb{C})/F^pH^{2p-1}(X,\mathbb{C})$. \end{prop} \begin{proof} We argue by induction on the length of a locally free resolution of $\mathcal F$. The base of induction is the case when $\mathcal F_0$ is a vector bundle. Then the result is essentially contained in the paper of Griffiths \cite{G1} (in particular formula 6.8). We will show how to do the induction step. We note that the statement is local, so we may replace the base $B$ by an open neighborhood of $0\in B$ every time it is necessary. In particular we assume that $B$ is affine.
By our assumptions $X$ is projective and we denote by $\mathcal O_X(1)$ an ample line bundle. Then we can find $k$ big enough, so that $\mathcal F(k)$ is generated by global sections and has no higher cohomology. We define a sheaf $\mathcal G$ on $X\times B$ as the kernel of the natural map: $$ 0\longrightarrow\mathcal G\longrightarrow\pi_B^*{\pi_B}_*(\mathcal F(k))\otimes\mathcal O_X(-k)\longrightarrow \mathcal F\longrightarrow 0. $$ Since $\mathcal F$ is flat over $B$ and ${\pi_B}_*(\mathcal F_0(k))$ is a vector bundle on $B$ for $k$ large enough \cite[Proof of Theorem 9.9]{H}, the sheaf $\mathcal G$ is flat over $B$.
It follows from (\ref{AJ-add}) that $\mathrm{AJ}_{\mathcal G}^p = -\mathrm{AJ}_{\mathcal F}^p$. Since the homological dimension of $\mathcal G$ has dropped by one, the induction hypothesis yields the formula (\ref{eqn_dAJ_general}) for $\mathcal G$. It remains to relate the right hand side of (\ref{eqn_dAJ_general}) for $\mathcal G_0$ and for $\mathcal F_0$.
Using functoriality of the Kodaira-Spencer classes we obtain the following morphism of triangles: $$ \begin{tikzcd}[] \mathcal G_0 \dar{u'}\rar & H^0(X,\mathcal F_0(k))\otimes\mathcal O_X(-k) \dar{0}\rar & \mathcal F_0 \dar{u}\rar & \mathcal G_0[1] \dar{u'[1]} \\ \mathcal G_0[1]\rar & H^0(X,\mathcal F_0(k))\otimes\mathcal O_X(-k)[1]\rar & \mathcal F_0[1]\rar & \mathcal G_0[2] \end{tikzcd} $$ where $u = \mathrm{KS}_{\mathcal F_0}(v)\in \mathrm{Ext}^1(\mathcal F_0,\mathcal F_0)$ and $u' = \mathrm{KS}_{\mathcal G_0}(v)\in \mathrm{Ext}^1(\mathcal G_0,\mathcal G_0)$. Composing the vertical arrows with $\Ati{\mathcal F_0}^{p-1}$, $\Ati{\mathcal O_X(-k)}^{p-1}$ and $\Ati{\mathcal F_0}^{p-1}$ respectively and using the additivity of traces we get $\mathrm{Tr}(\Ati{\mathcal F_0}^{p-1}\circ \mathrm{KS}_{\mathcal F_0}(v)) = -\mathrm{Tr}(\Ati{\mathcal G_0}^{p-1}\circ \mathrm{KS}_{\mathcal G_0}(v))$ because the map in the middle is zero. This completes the induction step. \end{proof}
\end{document} |
\begin{document}
\begin{abstract} In the 1995 paper entitled ``Noncommutative symmetric functions,'' Gelfand et al.\ defined two noncommutative symmetric function analogues for the power sum basis of the symmetric functions, along with analogues for the elementary and the homogeneous bases. They did not consider the noncommutative symmetric power sum duals in the quasisymmetric functions, which have since been explored only in passing by Derksen and Malvenuto-Reutenauer. These two distinct quasisymmetric power sum bases are the topic of this paper. In contrast to the simplicity of the symmetric power sums, or the other well-known bases of the quasisymmetric functions, the quasisymmetric power sums have a more complex combinatorial description. As a result, although symmetric function proofs often translate directly to quasisymmetric analogues, this is not the case for quasisymmetric power sums. Neither is there a model for working with the quasisymmetric power sums in the work of Gelfand et al., which relies heavily on quasi-determinants (which can only be exploited by duality for our purposes) and is not particularly combinatorial in nature. This paper therefore offers a first glimpse at working with these two relatively unstudied quasisymmetric bases, avoiding duality where possible to encourage a previously unexplored combinatorial understanding. \end{abstract} \title{Quasisymmetric Power Sums} \setcounter{tocdepth}{1} \tableofcontents
\section{Introduction}
The ring of symmetric functions $\Sym$ has several well-studied bases indexed by integer partitions $\lambda$, such as the monomial basis $m_\lambda$, the elementary basis $e_\lambda$, the complete homogeneous basis $h_\lambda$, the Schur functions $s_\lambda$, and, most relevant here, the power sum basis $p_\lambda$. Two important generalizations of $\Sym$ are $\QS$ (the ring of quasisymmetric functions) and $\NS$ (the ring of noncommutative symmetric functions). These rings share dual Hopf algebra structures, giving a rich interconnected theory with many beautiful algebraic and combinatorial results. In particular, many quasisymmetric and noncommutative symmetric analogues to the familiar symmetric bases have been defined and studied, such as the quasisymmetric monomial basis $M_\alpha$, and the noncommutative elementary and homogeneous bases $\boldsymbol{e}_\alpha$ and $\boldsymbol{h}_\alpha$ \cite{GKLLRT94} (where the indexing set is compositions $\alpha$). Several different analogues of the Schur functions have also been defined, including the quasisymmetric fundamental basis $F_\alpha$ \cite{gessel1984multipartite}, dual to the noncommutative ribbon basis $\boldsymbol{r}_\alpha$; the quasi-Schur basis and its dual in \cite{haglund2011quasisymmetric}; and the immaculate basis and its quasisymmetric dual~\cite{BBSSZ14lift}.
Quasisymmetric analogues of symmetric function bases are useful for a number of reasons. Quasisymmetric functions form a combinatorial Hopf algebra~\cite{Ehr96,gessel1984multipartite,MalReu95} and in fact are the terminal object in the category of combinatorial Hopf algebras~\cite{ABS06}, which explains why they consistently appear throughout algebraic combinatorics. Complicated combinatorial objects often have simpler formulas when expanded into quasisymmetric functions, and translating from symmetric to quasisymmetric functions can provide new avenues for proofs.
Here, we explore the analogs to the power sum bases. In $\Sym$, there is an important bilinear pairing, the Hall inner product, defined by $\<m_\lambda, h_\mu\rangle = \delta_{\lambda, \mu}$. Moreover, the duality between $\QS$ and $\NS$ precisely generalizes the inner product on $\Sym$ so that, for example, $\<M_\lambda, \boldsymbol{h}_\mu\rangle = \delta_{\lambda, \mu}$. With respect to the pairing on $\Sym$, the power sum basis is (up to a constant) self-dual, so analogs to the power sum basis in $\QS$ and $\NS$ should share a similar relationship. Two types of noncommutative power sum bases, $\boldsymbol{\Psi}_\alpha$ and $\boldsymbol{\Phi}_\alpha$, were defined by Gelfand et al.\ \cite{GKLLRT94}. The quasisymmetric duals to one type or the other were also discussed briefly in \cite{der09} and \cite{MalReu95}, but in contrast to the other bases listed above, very little has been said about their structure or their relationship to other bases. The main objective of this paper is to fill this gap in the literature. Namely, we define two types of quasisymmetric power sum bases, which are scaled duals to $\boldsymbol{\Psi}_\alpha$ and $\boldsymbol{\Phi}_\alpha$. The scalars are chosen analogously to the scaled self-duality of the symmetric power sums; moreover, we show that these are exactly the right coefficients to force our bases to refine the symmetric power sums (Theorems~\ref{thm:refine} and~\ref{thm:2refine}). Section~\ref{sec:qsps} develops combinatorial proofs of these refinements. In Section~\ref{sec:btw}, we give transition matrices to other well-understood bases. Section~\ref{sec:products} explores algebraic properties, giving explicit formulas for products of quasisymmetric power sums. Section~\ref{sec:plethysm} gives formulas for plethysm in the quasisymmetric case.
\section{Preliminaries}\label{sec:prelim} In this section, we define the rings $\QS$ of quasisymmetric functions and $\NS$ of noncommutative symmetric functions, and briefly discuss their dual Hopf algebra structures.
We begin with a brief discussion of notation. Due to the nature of this paper, there is a lot of notation to keep track of throughout, and therefore we set aside numbered definitions and notations to help the reader. In general, we use lower case letters (e.g.\ $e, m, h, s$, and $p$) to indicate {\em symmetric functions}, bold lowercase letters (e.g.\ $\boldsymbol{e}$, $\boldsymbol{h}$, and $\boldsymbol{r}$) to indicate {\em noncommutative symmetric functions}, and capital letters (e.g.\ $M$ and $F$) to indicate {\em quasisymmetric functions}. When there is a single clear analogue of a symmetric function basis, we use the same letter for the symmetric functions and their analogue (following \cite{LMvW} rather than \cite{GKLLRT94}). For the two different analogs to the power sums, we echo \cite{GKLLRT94} in using $\boldsymbol{\Psi}$ and $\boldsymbol{\Phi}$ for the noncommutative symmetric power sums, and then $\Psi$ and $\Phi$ as quasisymmetric analogues. We generally follow \cite{LMvW} for the names of the automorphisms on the quasisymmetric and noncommutative symmetric functions. For example, we use $S$ for the antipode map (in particular, see \cite[\S 3.6]{LMvW} for a complete list and a translation to other authors).
\subsection{Quasisymmetric functions}\label{sec:qsym} A formal power series $f \in \mathbb{C}\llbracket x_1,x_2,\ldots \rrbracket$ is a {\em quasisymmetric function} if the coefficient of $x_1^{a_1}x_2^{a_2}\cdots x_k^{a_k}$ in $f$ is the same as the coefficient for $x_{i_1}^{a_1}x_{i_2}^{a_2}\cdots x_{i_k}^{a_k}$ for any $i_1<i_2<\cdots <i_k$. The set of quasisymmetric functions $\QS$ forms a ring. Moreover, this ring has a $\ZZ_{\geq 0}$-grading by degree, so that $\QS=\bigoplus_n\QS_n$, where $\QS_n$ is the set of $f \in \QS$ that are homogeneous of degree $n$. For a comprehensive discussion of $\QS$ see \cite{LMvW,MalReu95,Sta99v2}.
There are a number of common bases for $\QS_n$ as a vector space over $\mathbb{C}$. These bases are indexed by (strong) integer compositions. \begin{defn}[composition, $\alpha\vDash n$] A sequence $\alpha=(\alpha_1,\alpha_2,\ldots,\alpha_k)$ is a \emph{composition} of $n$, denoted $\alpha\vDash n$, if $\alpha_i>0$ for each $i$ and $\sum_i \alpha_i=n$. \end{defn}
\begin{notn}[$|\alpha|$, $\ell(\alpha)$, $\widetilde{\alpha}$]The {\em size} of a composition $\alpha=(\alpha_1,\alpha_2,\ldots,\alpha_k)$ is $|\alpha|=\sum \alpha_i$ and the {\em length} of $\alpha$ is $\ell(\alpha)=k$. Given a composition $\alpha$, we denote by $\widetilde{\alpha}$ the partition obtained by placing the parts of $\alpha$ in weakly decreasing order. \end{notn}
\begin{defn}[refinement, $\beta\preccurlyeq\alpha$, $\beta^{(j)}$]\label{defn:refinement} If $\alpha$ and $\beta$ are both compositions of $n$, we say that $\beta$ {\em refines} $\alpha$ (equivalently, $\alpha$ is a {\em coarsening} of $\beta$), denoted $\beta\preccurlyeq \alpha$, if $$\alpha=(\beta_1+\cdots+\beta_{i_1}, \beta_{i_1+1}+\cdots +\beta_{i_1+i_2}, \ldots, \beta_{i_1+\cdots +i_{k-1}+1}+\cdots + \beta_{i_1+\cdots+i_k}).$$ We will denote by $\beta^{(j)}$ the composition made up of the parts of $\beta$ (in order) that sum to $\alpha_{j}$; namely, $\beta^{(j)}=(\beta_{i_1+\cdots +i_{j-1}+1},\ldots , \beta_{i_1+\cdots+i_j})$. \end{defn} It is worth noting that some authors reverse the inequality, using $\preccurlyeq$ for coarsening as opposed to refinement as we do here. We will repeatedly need the particular parts of $\beta$ that sum to a given part of $\alpha$.
\begin{notn}[$\set(\alpha)$, $\comp(A)$] There is a natural bijection between compositions of $n$ and subsets of $[n-1]$ given by partial sums. (Here $[n]$ is the set $\{1,2, \hdots , n\}$.) Namely, if $\alpha=(\alpha_1,\ldots,\alpha_k)\vDash n$, then $\set(\alpha) = \{\alpha_1, \alpha_1+\alpha_2, \ldots, \alpha_1+\cdots+\alpha_{k-1}\}.$ Similarly, if $A=\{a_1,\ldots,a_j\}\subseteq[n-1]$ with $a_1<a_2<\cdots<a_j$ then $\comp(A)=(a_1,a_2-a_1,\ldots, a_j-a_{j-1},n-a_j)$. \end{notn} We remark that $\alpha \preccurlyeq \beta$ if and only if $\set(\beta)\subseteq \set(\alpha)$.
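The partial-sum bijection and the refinement criterion $\alpha \preccurlyeq \beta$ if and only if $\set(\beta)\subseteq\set(\alpha)$ translate directly into code. The following Python sketch is our illustration (function names are ours, not from the paper):

```python
def comp_to_set(alpha):
    """set(alpha): the partial sums of alpha, excluding the final sum n."""
    sums, total = [], 0
    for part in alpha[:-1]:
        total += part
        sums.append(total)
    return set(sums)

def set_to_comp(A, n):
    """comp(A): successive differences of the sorted elements of A within [n-1]."""
    points = [0] + sorted(A) + [n]
    return tuple(points[i + 1] - points[i] for i in range(len(points) - 1))

def refines(beta, alpha):
    """beta refines alpha iff set(alpha) is contained in set(beta)."""
    return comp_to_set(alpha) <= comp_to_set(beta)
```

For instance, $\set((1,3,2))=\{1,4\}$ and $\comp(\{1,4\})=(1,3,2)$ for $n=6$, and $(1,2,1,2)$ refines $(3,3)$ since $\{3\}\subseteq\{1,3,4\}$.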
Let $\alpha=(\alpha_1,\ldots,\alpha_k)$ be a composition. The {\em quasisymmetric monomial function} indexed by $\alpha$ is \begin{equation}\label{eq:Ms}M_\alpha=\sum_{i_1<i_2<\cdots<i_k} x_{i_1}^{\alpha_1}x_{i_2}^{\alpha_2}\cdots x_{i_k}^{\alpha_k};\end{equation} and the {\em fundamental quasisymmetric function} indexed by $\alpha$ is \begin{equation} \label{M-F} F_\alpha = \sum_{\beta\preccurlyeq \alpha}M_\beta, \qquad \text{so that} \qquad M_\alpha = \sum_{\beta\preccurlyeq\alpha}(-1)^{\ell(\beta)-\ell(\alpha)}F_\beta.\end{equation} Equivalently, $F_{\alpha}$ is defined directly by \begin{equation}\label{eq:Fs}
F_{\alpha} = \sum_{\substack{i_1\leq i_2\leq \cdots \leq i_n\\ i_j<i_{j+1} \text{ if } j \in \set(\alpha)}} x_{i_1}x_{i_2}\cdots x_{i_n}.\end{equation}
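The relation $F_\alpha=\sum_{\beta\preccurlyeq\alpha}M_\beta$ can be checked directly in finitely many variables. The Python sketch below (ours, not part of the paper) expands both bases as dictionaries mapping exponent vectors to coefficients:

```python
from itertools import combinations, combinations_with_replacement

def monomial_qsym(alpha, k):
    """M_alpha in variables x_1..x_k, as a dict {exponent tuple: coefficient}."""
    poly = {}
    for support in combinations(range(k), len(alpha)):
        expo = [0] * k
        for pos, a in zip(support, alpha):
            expo[pos] = a
        poly[tuple(expo)] = poly.get(tuple(expo), 0) + 1
    return poly

def fundamental_qsym(alpha, k):
    """F_alpha in x_1..x_k via the direct definition: weakly increasing words
    with a strict increase after each position in set(alpha)."""
    n = sum(alpha)
    S, s = set(), 0
    for a in alpha[:-1]:
        s += a
        S.add(s)
    poly = {}
    for word in combinations_with_replacement(range(k), n):
        if all(word[j - 1] < word[j] for j in S):
            expo = [0] * k
            for i in word:
                expo[i] += 1
            poly[tuple(expo)] = poly.get(tuple(expo), 0) + 1
    return poly

def refinements(alpha):
    """All compositions beta refining alpha, via supersets of set(alpha)."""
    n = sum(alpha)
    base, s = set(), 0
    for a in alpha[:-1]:
        s += a
        base.add(s)
    rest = [i for i in range(1, n) if i not in base]
    for r in range(len(rest) + 1):
        for extra in combinations(rest, r):
            pts = [0] + sorted(base | set(extra)) + [n]
            yield tuple(pts[i + 1] - pts[i] for i in range(len(pts) - 1))
```

For example, the refinements of $(2,1)$ are $(2,1)$ and $(1,1,1)$, and summing their monomial expansions in three variables reproduces $F_{(2,1)}$.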
In addition to being a graded ring, $\QS$ can be endowed with the structure of a \emph{combinatorial Hopf algebra}. For our purposes, this means that $\QS$ has a product (ordinary polynomial multiplication), a coproduct $\Delta$, a unit and counit, and an antipode map. The ring $\NS$ of noncommutative symmetric functions is dual to $\QS$ with respect to a certain inner product (defined later), and thus also is a combinatorial Hopf algebra. For further details on the Hopf algebra structure of $\QS$ and $\NS$, see~\cite{ABS06,grinberg2014hopf,LMvW}.
\subsection{Noncommutative symmetric functions}\label{sec:nsym} The ring of {\em noncommutative symmetric functions}, denoted $\NS$, is formally defined as a free associative algebra $\mathbb{C} \langle \boldsymbol{e}_1, \boldsymbol{e}_2, \hdots \rangle$, where the $\boldsymbol{e}_i$ are regarded as {\em noncommutative elementary functions} and
\[
\boldsymbol{e}_\alpha = \boldsymbol{e}_{\alpha_1}\boldsymbol{e}_{\alpha_2}\cdots \boldsymbol{e}_{\alpha_k}, \qquad
\text{for a composition } \alpha.\] Define the {\em noncommutative complete homogeneous symmetric functions} as in~\cite[\S 4.1]{GKLLRT94} by \begin{equation}\label{eq:hine} \boldsymbol{h}_n = \sum_{\alpha \vDash n} (-1)^{n-\ell(\alpha)}\boldsymbol{e}_\alpha, \quad \text{ and } \quad
\boldsymbol{h}_\alpha = \boldsymbol{h}_{\alpha_1}\cdots \boldsymbol{h}_{\alpha_k} = \sum_{\beta\preccurlyeq \alpha}(-1)^{|\alpha|-\ell(\beta)}\boldsymbol{e}_\beta.\end{equation} The noncommutative symmetric analogue (dual) to the fundamental quasisymmetric functions is given by the {\em ribbon Schur functions} \begin{equation}\label{eq:rinh} \boldsymbol{r}_\alpha = \sum_{\beta\succcurlyeq \alpha} (-1)^{\ell(\alpha)-\ell(\beta)}\boldsymbol{h}_\beta.\end{equation} \subsubsection{Noncommutative power sums} To define the noncommutative power sums, we begin by recalling the useful exposition in~\cite[\S 2]{GKLLRT94} on the (commuting) symmetric power sums. Namely, the power sums $p_n$ can be defined by the generating function: $$P(X;t)=\sum_{k\geq 1}t^{k-1}p_k[X]=\sum_{i\geq 1}x_i(1-x_it)^{-1}.$$
This generating function can equivalently be defined by either of the following relations, where $H(X;t)$ is the standard generating function for the complete homogeneous functions and $E(X;t)$ is the standard generating function for the elementary symmetric functions: \begin{equation} P(X;t)=\frac{d}{dt}\log H(X;t) = -\frac{d}{dt}\log E(-X;t).\label{eq:pfrome} \end{equation} Unfortunately, there is not a unique sense of logarithmic differentiation for power series (in $t$) with noncommuting coefficients (in $\NS$). Two natural well-defined reformulations of these are \begin{equation}\label{eq:type1gen} \frac{d}{dt}H(X;t)=H(X;t)P(X;t) \quad \text{ or } \quad -\frac{d}{dt}E(X;-t)=P(X;t)E(X;-t), \end{equation} and \begin{equation} H(X;t)=E(X;-t)^{-1} = \exp\left(\int P(X;t) dt\right). \label{eq:type2gen} \end{equation} In $\NS$, these do indeed give rise to \emph{two different analogs} to the power sum basis, introduced in~\cite[\S 3]{GKLLRT94}: the \emph{noncommutative power sums of the first kind} (or \emph{type}) $\boldsymbol{\Psi}_\alpha$ and of the \emph{second kind} (or \emph{type}) $\boldsymbol{\Phi}_\alpha$, with explicit formulas (due to \cite[\S 4]{GKLLRT94}) as follows.
The noncommutative power sums of the first kind are those satisfying essentially the same generating function relation as \eqref{eq:type1gen}, where this time $H(X;t)$, $E(X;t)$, and $P(X;t)$ are taken to be the generating functions for the noncommutative homogeneous, elementary, and type one power sums respectively, and expand as \begin{equation}\label{eq:powerinh} \boldsymbol{\Psi}_n = \sum_{\beta \vDash n} (-1)^{\ell(\beta)-1} \beta_k \boldsymbol{h}_\beta \end{equation} where $\beta=(\beta_1,\ldots,\beta_k)$. \begin{notn}[$\lp(\beta,\alpha)$]Given a composition $\alpha =(\alpha_1, \ldots, \alpha_m)$ and a composition $\beta = (\beta_1,\ldots, \beta_k)$ which refines $\alpha$, we let $\lp(\beta) = \beta_k$ (last part) and $$ \lp(\beta,\alpha) = \prod_{i=1}^{\ell(\alpha)} \lp(\beta^{(i)}).$$ \end{notn} Then \begin{equation}\label{eq:htopsi}\boldsymbol{\Psi}_\alpha =\boldsymbol{\Psi}_{\alpha_1}\cdots\boldsymbol{\Psi}_{\alpha_m}=\sum_{\beta \preccurlyeq \alpha} (-1)^{\ell(\beta)-\ell(\alpha)}\lp(\beta,\alpha)\boldsymbol{h}_\beta. \end{equation} Similarly, the noncommutative power sums of the second kind are those satisfying the analogous generating function relation to \eqref{eq:type2gen}, and expand as \begin{equation}\label{eq:htophi} \boldsymbol{\Phi}_n = \sum_{\alpha\vDash n}(-1)^{\ell(\alpha)-1}\frac{n}{\ell(\alpha)}\boldsymbol{h}_\alpha, \quad \text{and} \quad \boldsymbol{\Phi}_\alpha = \sum_{\beta\preccurlyeq\alpha}(-1)^{\ell(\beta)-\ell(\alpha)}\frac{\prod_i \alpha_i}{\ell(\beta,\alpha)}\boldsymbol{h}_\beta, \end{equation} where $\ell(\beta,\alpha)=\prod_{j=1}^{\ell(\alpha)} \ell(\beta^{(j)})$.
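Under the commutative specialization $\boldsymbol{h}_\beta\mapsto h_\beta$, $\boldsymbol{\Psi}_n$ becomes $p_n$ and (\ref{eq:powerinh}) reduces to the classical expansion $p_n=\sum_{\beta\vDash n}(-1)^{\ell(\beta)-1}\beta_{\ell(\beta)}h_\beta$. This can be verified numerically in a fixed number of variables; the Python sketch below (ours, not part of the paper) does so by brute-force polynomial arithmetic:

```python
from itertools import combinations, combinations_with_replacement

K = 3  # number of variables in the specialization

def h_poly(d):
    """Complete homogeneous polynomial h_d in K variables: {exponent tuple: coeff}."""
    poly = {}
    for word in combinations_with_replacement(range(K), d):
        expo = [0] * K
        for i in word:
            expo[i] += 1
        poly[tuple(expo)] = poly.get(tuple(expo), 0) + 1
    return poly

def mult(p, q):
    """Product of two exponent-dict polynomials."""
    out = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            key = tuple(a + b for a, b in zip(e1, e2))
            out[key] = out.get(key, 0) + c1 * c2
    return out

def compositions(n):
    for r in range(n):
        for cuts in combinations(range(1, n), r):
            pts = (0,) + cuts + (n,)
            yield tuple(pts[i + 1] - pts[i] for i in range(len(pts) - 1))

def p_from_h(n):
    """Commutative image of the Psi_n expansion:
    sum over beta |= n of (-1)^(l(beta)-1) * (last part of beta) * h_beta."""
    total = {}
    for beta in compositions(n):
        coeff = (-1) ** (len(beta) - 1) * beta[-1]
        term = {(0,) * K: 1}
        for part in beta:
            term = mult(term, h_poly(part))
        for e, c in term.items():
            total[e] = total.get(e, 0) + coeff * c
    return {e: c for e, c in total.items() if c}

def power_sum(n):
    """p_n = x_1^n + ... + x_K^n as an exponent-dict polynomial."""
    return {tuple(n if j == i else 0 for j in range(K)): 1 for i in range(K)}
```

For instance, $n=2$ gives $p_2 = 2h_2 - h_{11}$, and the brute-force check confirms the identity for small $n$.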
\subsection{Dual bases}\label{sec:dual}Let $V$ be a vector space over $\mathbb{C}$, and let $V^* = \{ \text{linear } \varphi: V \to \CC\}$ be its dual. Let $\langle,\rangle: V\otimes V^* \rightarrow \mathbb{C}$ be the natural bilinear pairing. Bases of these vector spaces are indexed by a common set, say $I$; and we say bases $\{b_\alpha\}_{\alpha \in I}$ of $V$ and $\{b_\alpha^*\}_{\alpha \in I}$ of $V^*$ are \emph{dual} if $\<b_\alpha,b^*_\beta\rangle=\delta_{\alpha,\beta}$ for all $\alpha, \beta \in I$.
Due to the duality between $\QS$ and $\NS$, we make extensive use of the well-known relationships between change of bases in a vector space and its dual. Namely, if $(A,A^*)$ and $(B,B^*)$ are two pairs of dual bases of $V$ and $V^*$, then for $a_{\alpha}\in A$ and $b_{\beta}^* \in B^*$, we have \[a_\alpha = \sum_{b_\beta \in B}c_\beta^\alpha b_\beta\qquad \text{ if and only if }\qquad b^*_\beta = \sum_{a^*_\alpha \in A^*}c_\beta^\alpha a^*_\alpha.\]
\noindent In particular, the bases $\{M_\alpha\}$ of $\QS$ and $\{\boldsymbol{h}_\alpha\}$ of $\NS$ are dual; as are $\{F_\alpha\}$ and $\{\boldsymbol{r}_\alpha\}$ (see \cite[\S 6]{GKLLRT94}). The primary object of this paper is to explore properties of two $\QS$ bases dual to $\{\boldsymbol{\Psi}_\alpha\}$ or $\{ \boldsymbol{\Phi}_\alpha \}$ (up to scalars) that also refine $p_\lambda$. Malvenuto and Reutenauer~\cite{MalReu95} mention (a rescaled version of) the type 1 version but do not explore its properties; Derksen \cite{der09} describes such a basis for the type 2 version, but a computational error leads to an incorrect formula in terms of the monomial quasisymmetric function expansion.
\section{Quasisymmetric power sum bases}\label{sec:qsps} The symmetric power sums have the property that $\<p_\lambda,p_\mu\rangle = z_\lambda \delta_{\lambda, \mu}$ where $z_\lambda$ is as follows. \begin{notn}[$z_\alpha$]\label{notn:z}
For a partition $\lambda \vdash n$, let $m_i$ be the number of parts of length $i$. Then $$z_\lambda = 1^{m_1}m_1!2^{m_2}m_2!\cdots k^{m_k}m_k!.$$ Namely, $z_\lambda $ is the size of the stabilizer of a permutation of cycle type $\lambda$ under the action of $S_n$ on itself by conjugation. For a composition $\alpha$, we use $z_{\alpha}=z_{\widetilde{\alpha}}$, where $\widetilde{\alpha}$ is the partition rearrangement of $\alpha$ as above. \end{notn}
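The identity $\#\{\sigma\in S_n \text{ of cycle type } \lambda\}=n!/z_\lambda$ is easy to confirm by exhaustive enumeration for small $n$. A Python sketch (ours, not part of the paper):

```python
from collections import Counter
from itertools import permutations
from math import factorial

def z(lam):
    """z_lambda = prod_i i^{m_i} * m_i!, where m_i = multiplicity of i in lambda."""
    out = 1
    for i, m in Counter(lam).items():
        out *= i ** m * factorial(m)
    return out

def cycle_type(perm):
    """Cycle type (sorted descending) of a permutation of {0..n-1} in one-line notation."""
    n = len(perm)
    seen, lengths = [False] * n, []
    for i in range(n):
        if not seen[i]:
            j = i
            c = 0
            while not seen[j]:
                seen[j] = True
                j = perm[j]
                c += 1
            lengths.append(c)
    return tuple(sorted(lengths, reverse=True))
```

For example, $z_{(3,2,1)} = 3\cdot 2\cdot 1 = 6$, so $S_6$ contains $720/6 = 120$ permutations of cycle type $(3,2,1)$.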
We describe two quasisymmetric analogues of the power sums, each of which satisfies a variant of this duality property.
\subsection{Type 1 quasisymmetric power sums} We define the type 1 quasisymmetric power sums to be the basis $\Psi_\alpha$ of $\QS$ such that $$\langle\Psi_\alpha,\boldsymbol{\Psi}_\beta\rangle = z_\alpha \delta_{\alpha,\beta}.$$ While duality makes most of this definition obvious, the scaling is in principle a free choice to be made. However, as we show in Theorem \ref{thm:refine} and Corollary \ref{cor:refine}, our choice of scalar not only generalizes the self-dual relationship of the symmetric power sums, but serves to provide a refinement of those power sums. Moreover, the proof leads to a (best possible) combinatorial interpretation of $\Psi_\alpha$.
In \cite[\S 4.5]{GKLLRT94}, the authors give both the transition matrix from the $\boldsymbol{h}$ basis to the $\boldsymbol{\Psi}$ basis (given above in (\ref{eq:powerinh})) and its inverse. Using the latter and duality, we compute a monomial expansion of $\Psi_{\alpha}$.
\begin{notn}[$\pi(\alpha,\beta)$] Given a composition $\alpha$ refining $\beta$, recall from Definition \ref{defn:refinement} that $\alpha^{(i)}$ is the composition consisting of the parts of $\alpha$ that sum (in order) to $\beta_i$. Define
$$\pi(\alpha)=\prod_{i=1}^{\ell(\alpha)} \sum_{j=1}^i \alpha_j
\qquad\text{and} \qquad
\pi(\alpha,\beta)=\prod_{i=1}^{\ell(\beta)} \pi(\alpha^{(i)}).$$ \end{notn}
Then \[\boldsymbol{h}_\alpha = \sum_{\beta \preccurlyeq \alpha} \frac{1}{\pi(\beta,\alpha)}\boldsymbol{\Psi}_\beta.\] By duality, the polynomial \[\psi_\alpha = \sum_{\beta\succcurlyeq \alpha}\frac{1}{\pi(\alpha,\beta)}M_\beta\] has the property that $\langle\psi_\alpha,\boldsymbol{\Psi}_\beta\rangle=\delta_{\alpha,\beta}$. Then the type 1 quasisymmetric power sums have the following monomial expansion: \begin{equation}\label{eq:PsiM}\Psi_\alpha = z_\alpha \psi_\alpha=z_\alpha\sum_{\beta\succcurlyeq\alpha}\frac{1}{\pi(\alpha,\beta)}M_\beta.\end{equation}
For example \begin{align*} \Psi_{232} &= (2^2 \cdot 2! \cdot 3)(\frac{1}{2 \cdot 3 \cdot 2} M_{232}+\frac{1}{2 \cdot 5 \cdot 2} M_{52} + \frac{1}{2 \cdot 3 \cdot 5} M_{25} + \frac{1}{2 \cdot 5 \cdot 7} M_7) \\ &= 2 M_{232} +\frac{6}{5} M_{52} + \frac{4}{5} M_{25}+\frac{12}{35} M_7. \end{align*}
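The coefficients in the example above can be reproduced mechanically from (\ref{eq:PsiM}). The Python sketch below (ours, not part of the paper; exact arithmetic via \texttt{Fraction}) enumerates the coarsenings $\beta \succcurlyeq \alpha$ together with the blocks $\alpha^{(i)}$ and computes $z_\alpha/\pi(\alpha,\beta)$:

```python
from collections import Counter
from fractions import Fraction
from itertools import combinations
from math import factorial

def z(alpha):
    """z of the partition rearrangement of the composition alpha."""
    out = 1
    for i, m in Counter(alpha).items():
        out *= i ** m * factorial(m)
    return out

def coarsenings(alpha):
    """All beta coarsening alpha, with the blocks alpha^(i) summing to each beta_i."""
    k = len(alpha)
    for r in range(k):
        for cuts in combinations(range(1, k), r):
            pts = (0,) + cuts + (k,)
            blocks = [alpha[pts[i]:pts[i + 1]] for i in range(len(pts) - 1)]
            yield tuple(sum(b) for b in blocks), blocks

def pi(block):
    """pi of a composition: product of its partial sums."""
    out, s = 1, 0
    for a in block:
        s += a
        out *= s
    return out

def psi_monomial_expansion(alpha):
    """Coefficients of Psi_alpha on the monomial basis: beta -> z_alpha / pi(alpha, beta)."""
    za = z(alpha)
    coeffs = {}
    for beta, blocks in coarsenings(alpha):
        denom = 1
        for b in blocks:
            denom *= pi(b)
        coeffs[beta] = Fraction(za, denom)
    return coeffs
```

Running this on $\alpha=(2,3,2)$ returns the coefficients $2$, $6/5$, $4/5$, and $12/35$ for $M_{232}$, $M_{52}$, $M_{25}$, and $M_7$ respectively, matching the display above.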
The remainder of this section is devoted to constructing the ``best possible'' combinatorial formulation of the $\Psi_\alpha$, given in Theorem \ref{thm:combPsi}, followed by the proof of the refinement of the symmetric power sums, given in Theorem \ref{thm:refine} and Corollary \ref{cor:refine}.
\subsubsection{A combinatorial interpretation of $\Psi_\alpha$} We consider the set $S_n$ of permutations of $[n]=\{1,2,\dots,n\}$ both in one-line notation and in cycle notation. For a partition $\lambda=(\lambda_1, \lambda_2, \ldots, \lambda_{\ell})$ of $n$, a permutation $\sigma$ has \emph{cycle type} $\lambda$ if its cycles are of lengths $\lambda_1$, $\lambda_2$, \dots, $\lambda_\ell$. We consider two canonical forms for writing a permutation according to its cycle type.
\begin{defn}[standard and partition forms]\label{def:standard-partition}
A permutation in cycle notation is said to be in \emph{standard form} if each cycle is written with the largest element last and the cycles are listed in increasing order according to their largest element. It is said to be in \emph{partition form} if each cycle is written with the largest element last, the cycles are listed in descending length order, and cycles of equal length are listed in increasing order according to their largest element. \end{defn} For example, for the permutation $(26)(397)(54)(1)(8)$, we have standard form $(1)(45)(26)(8)(739)$ and partition form $(739)(45)(26)(1)(8)$. Note that our definition of standard form differs from that in \cite[\S1.3]{Sta99v1} in the cyclic ordering; they are equivalent, but our convention is more convenient for the purposes of this paper. If we fix an order of the cycles (as we do when writing a permutation in standard and partition forms), the \emph{(ordered) cycle type} is the composition $\alpha \vDash n$ where the $i$th cycle has length $\alpha_i$.
As alluded to in Notation \ref{notn:z}, we have \cite[Prop.\ 1.3.2]{Sta99v1} \begin{equation*}
\frac{n!}{z_\lambda} = \#\{ \sigma \in S_n \text{ of cycle type } \lambda \}. \end{equation*}
We are now ready to define a subset of $S_n$ (which uses one-line notation) needed to prove Proposition~\ref{prop:consistent}.
\begin{notn}[$\splt{\beta}{\sigma}{j}$] Let $\beta \vDash n$ and let $\sigma \in S_n$ be written in one-line notation. First partition $\sigma$ according to $\beta$ (which we draw using $|\!|$), and consider the (disjoint) words $\spl{\beta}{\sigma}=[\sigma^{1}, \dots, \sigma^{\ell}]$, where $\ell = \ell(\beta)$. Let $\splt{\beta}{\sigma}{j}=\sigma^j$. See Table \ref{tbl:consistent}. \end{notn} \begin{defn}[consistent, $\mathrm{Cons}_{\alpha \preccurlyeq \beta}$]\label{defn:consistent} Fix $\alpha \preccurlyeq \beta$ compositions of $n$. Given $\sigma \in S_n$ written in one-line notation, let $\sigma^j=\splt{\beta}{\sigma}{j}$. Then, for each $i = 1, \dots, \ell$, add parentheses to $\sigma^{i}$ according to $\alpha^{(i)}$, yielding disjoint permutations $\bar{\sigma}^{i}$ (of subalphabets of $[n]$) of cycle type $\alpha^{(i)}$. If the resulting subpermutations $\bar{\sigma}^{i}$ are all in standard form, we say $\sigma$ is \emph{consistent} with $\alpha \preccurlyeq \beta$. In other words, we split $\sigma$ according to $\beta$ and consider each subsequence \textit{separately} to see if, for each $j$, the $j$th subsequence is in standard form when further partitioned by $\alpha^{(j)}$. Define \begin{equation*} \mathrm{Cons}_{\alpha \preccurlyeq \beta} = \{\sigma \in S_n \mid \sigma \mbox{ is consistent with } \alpha \preccurlyeq \beta\}. \end{equation*}
\end{defn}
\begin{example}\label{ex:consistent} Fix $\alpha = (1,1,2,1,3,1)$ and $\beta = (2,2,5)$. Table~\ref{tbl:consistent} shows several examples of permutations and the partitioning process.\end{example} \begin{table}[h] $$ \begin{array}{c@{\qquad}c@{\qquad}c@{\qquad}c} \text{permutation $\sigma$} & \text{partition by $\beta$} & \text{add $()$ by $\alpha$} & \text{$\sigma$ consistent?}\\\hline 571423689
& \underbrace{57}_{{\sigma}^{1}}|\!|\underbrace{14}_{{\sigma}^{2}}|\!|\underbrace{23689}_{{\sigma}^{3}}
& \underbrace{(5)(7)}_{\bar{\sigma}^{1}}|\!|\underbrace{(14)}_{\bar{\sigma}^{2}}|\!|\underbrace{(2)(368)(9)}_{\bar{\sigma}^{3}} &\text{yes}\\ 571428369
& \underbrace{57}_{{\sigma}^{1}}|\!|\underbrace{14}_{{\sigma}^{2}}|\!|\underbrace{28369}_{{\sigma}^{3}}
& \underbrace{(5)(7)}_{\bar{\sigma}^{1}}|\!|\underbrace{(14)}_{\bar{\sigma}^{2}}|\!|\underbrace{(2)(\boldsymbol{8}36)(9)}_{\bar{\sigma}^{3}}
&\textbf{no}\\ 571493682
& \underbrace{57}_{{\sigma}^{1}}|\!|\underbrace{14}_{{\sigma}^{2}}|\!|\underbrace{93682}_{{\sigma}^{3}}
& \underbrace{(5)(7)}_{\bar{\sigma}^{1}}|\!|\underbrace{(14)}_{\bar{\sigma}^{2}}|\!|\underbrace{(\boldsymbol{9})(36\boldsymbol{8})(\boldsymbol{2})}_{\bar{\sigma}^{3}}
&\textbf{no} \end{array} $$ \caption{Examples of permutations in $S_9$ and determining if they are in $\mathrm{Cons}_{\alpha\preccurlyeq\beta}$ where $\alpha=(1,1,2,1,3,1)$ and $\beta=(2,2,5)$. Note how $\beta$ subtly influences consistency in the last example.}\label{tbl:consistent} \end{table}
We also consider the set of all permutations consistent with a given $\alpha$ and all possible choices of (a coarser composition) $\beta$, as each will correspond to various monomial terms in the expansion of a given $\Psi_\alpha$. \begin{example} We now consider sets of permutations that are consistent with $\alpha=(1,2,1)$ and each coarsening of $\alpha$. The coarsenings of $\alpha = (1,2,1)$ are $(1,2,1)$, $(3,1)$, $(1,3)$, and $(4)$, and the corresponding sets of consistent permutations in $S_4$ are \begin{align*} \mathrm{Cons}_{(1,2,1) \preccurlyeq (1,2,1)}& = \{1234, 1243, 1342, 2134, 2143, 2341, 3124, 3142, 3241, 4123, 4132, 4231\}, \\\mathrm{Cons}_{(1,2,1) \preccurlyeq (1,3)} &= \{1234, 2134, 3124, 4123\},\\ \mathrm{Cons}_{(1,2,1) \preccurlyeq (3,1)} &= \{1234, 1243, 1342, 2134, 2143, 2341, 3142, 3241\}, \\\mathrm{Cons}_{(1,2,1) \preccurlyeq (4)}& = \{1234, 2134\}. \end{align*} Notice that these sets are not disjoint. \end{example} The following is a salient observation that can be seen from this example. \begin{lemma}\label{lemma:niceobs}If $\sigma$ is consistent with $\alpha\preccurlyeq \beta$ for some choice of $\beta$, then $\sigma$ is consistent with $\alpha \preccurlyeq\gamma$ for all $\alpha \preccurlyeq \gamma \preccurlyeq \beta$.\end{lemma} Note that this implies $\mathrm{Cons}_{\alpha \preccurlyeq \beta} \subseteq \mathrm{Cons}_{\alpha \preccurlyeq \alpha}$ for all $\alpha \preccurlyeq \beta$. With these examples in mind, we will use the following lemma to justify a combinatorial interpretation of $\Psi_\alpha$ in Theorem~\ref{thm:combPsi}.
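The consistency condition, and the four sets in the example above, can be checked by brute force. A minimal Python sketch (function names are ours, not from the text):

```python
from itertools import permutations

def refine_chunks(alpha, beta):
    """Split alpha into consecutive chunks alpha^(i) summing to beta_i."""
    chunks, pos = [], 0
    for b in beta:
        chunk, s = [], 0
        while s < b:
            chunk.append(alpha[pos])
            s += alpha[pos]
            pos += 1
        assert s == b, "alpha does not refine beta"
        chunks.append(chunk)
    return chunks

def is_consistent(sigma, alpha, beta):
    """Is the one-line permutation sigma consistent with alpha <= beta?"""
    pos = 0
    for chunk in refine_chunks(alpha, beta):
        prev_max = 0
        for size in chunk:
            cycle = sigma[pos:pos + size]
            pos += size
            if cycle[-1] != max(cycle):   # each cycle: largest element last
                return False
            if cycle[-1] < prev_max:      # within a block: maxima increase
                return False
            prev_max = cycle[-1]
    return True

def cons_count(alpha, beta):
    """|Cons_{alpha <= beta}| by brute force over S_n."""
    n = sum(alpha)
    return sum(is_consistent(s, alpha, beta)
               for s in permutations(range(1, n + 1)))
```

This reproduces Table \ref{tbl:consistent} and the set sizes $12$, $4$, $8$, $2$ listed in the example.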
\begin{lemma}\label{lem:cons} If $\alpha \preccurlyeq \beta$, we have $$ n!=|\mathrm{Cons}_{\alpha \preccurlyeq \beta}|\cdot \pi(\alpha, \beta).$$ \end{lemma} \begin{proof}Consider the set \[A_\alpha= \bigotimes_{i=1}^{\ell(\beta)}\left( \bigotimes_{j=1}^{\ell(\alpha^{(i)})} \nicefrac{\mathbb{Z}}{a_j^{(i)}\mathbb{Z}}\right), \qquad \text{where } a_j^{(i)} = \sum_{r=1}^j \alpha_r^{(i)}.\] We have
\[|A_\alpha| = \prod_{i=1}^{\ell(\beta)} \prod_{j=1}^{\ell(\alpha^{(i)})}a_j^{(i)} = \pi(\alpha, \beta).\] Construct a map \begin{align*}
\mathrm{Sh}: \mathrm{Cons}_{\alpha\preccurlyeq\beta} \times A_\alpha &\to S_n\\
(\sigma, s) \quad&\mapsto \sigma_s \end{align*}
as follows (see also Example \ref{ex:Rlambdabeta-identity(b)}).
For $s=[s^{(i)}_j]_{i=1\ j=1}^{\ell(\beta)\ \ell(\alpha^{(i)})} \in A_\alpha$ and $\sigma \in \mathrm{Cons}_{\alpha\preccurlyeq\beta}$, construct a permutation $\sigma_s \in S_n$ as follows. \begin{enumerate}[1.] \item Partition $\sigma$ into words $\sigma^{1}, \dots, \sigma^{\ell}$ according to $\beta$ so that $\sigma^{i}=\splt{\beta}{\sigma}{i}$. \item For each $i=1, \ldots, \ell(\beta)$, modify $\sigma^{i}$ by cycling the first $a_j^{(i)}$ values right by $s_j^{(i)}$ for $j=1, \ldots,\ell(\alpha^{(i)})$. Call the resulting word $\sigma^{i}_s$. \item Let $\sigma_s = \sigma^{1}_s \cdots \sigma^{\ell}_s$. \end{enumerate}
\noindent This process is invertible as follows. Let $\tau \in S_n$ be written in one-line notation. \begin{enumerate}[1'.] \item Partition $\tau$ into words $\tau^{1}, \dots, \tau^{\ell}$ according to $\beta$ such that $\tau^i=\splt{\beta}{\tau}{i}$. \item For each $i=1, \ldots, \ell(\beta)$, let $m_i = \ell(\alpha^{(i)})$. Modify $\tau^{i}$ and record $s^{(i)}_j$ for $j=m_i,\ldots,1$ by cycling the first $a_j^{(i)}$ values left until the largest element is last. Let $s_j^{(i)}$ be the number of required shifts left. Call the resulting word $\sigma^{i}$. \item Let $\sigma= \sigma^{1} \cdots \sigma^{\ell}$ and $s =[s^{(i)}_j]_{i=1\ j=1}^{\ell(\beta)\ \ell(\alpha^{(i)})}$. \end{enumerate}
By construction, $s \in A_\alpha$ and $\sigma \in \mathrm{Cons}_{\alpha \preccurlyeq \beta}$. It is straightforward to verify that $\sigma_s = \tau$. Therefore $\mathrm{Sh}^{-1}$ is well-defined, so that $\mathrm{Sh}$ is a bijection, and thus $n!=|\mathrm{Cons}_{\alpha\preccurlyeq\beta}|\cdot \pi(\alpha,\beta)$. \end{proof} \begin{example}\label{ex:Rlambdabeta-identity(b)}
As an example of the construction of $\mathrm{Sh}$ in the proof of Lemma~\ref{lem:cons}, let $\beta=(5,4) \vDash 9$, and let $\alpha=(2,3,2,2) \preccurlyeq \beta$, so that
$$\alpha^{(1)} = (2,3), \quad a_1^{(1)} = 2, \quad a_2^{(1)} = 2+3=5, \quad \text{ and }$$
$$\alpha^{(2)} = (2,2), \quad a_1^{(2)} = 2, \quad a_2^{(2)} = 2+2=4. \phantom{\quad \text{ and }}$$
Fix $\sigma=267394518 \in \mathrm{Cons}_{\alpha \preccurlyeq \beta}$, and $s=(s^{(1)},s^{(2)})=((1,3),(0,1)) \in A_\alpha$. We want to determine $\sigma_s$.
\begin{enumerate}[1.]
\item Partition $\sigma$ according to $\beta$: \quad
$\sigma^{1} = 26739$ and $\sigma^{2} = 4518.$
\item Cycle $\sigma^{i}$ according to $\alpha^{(i)}$:
\begin{align*}
\sigma^{1} = 26739 \xrightarrow{\text{take first $a_1^{(1)} = 2$ terms}} &\ \underline{26}739
\xrightarrow{\text{cycle $s_1^{(1)} = 1$ right}} \ \underline{62}739\\
\xrightarrow{\text{take first $a_2^{(1)} = 5$ terms}} &\ \underline{62739}
\xrightarrow{\text{cycle $s_2^{(1)} = 3$ right}} \ \underline{73962} = \sigma^{1}_s; \end{align*}
\begin{align*}
\sigma^{2} = 4518 \xrightarrow{\text{take first $a_1^{(2)} = 2$ terms}} &\ \underline{45}18
\xrightarrow{\text{cycle $s_1^{(2)} = 0$ right}} \ \underline{45}18\\
\xrightarrow{\text{take first $a_2^{(2)} = 4$ terms}} &\ \underline{4518}
\xrightarrow{\text{cycle $s_2^{(2)} = 1$ right}} \ \underline{8451} = \sigma^{2}_s. \end{align*}
\item Combine to get $\sigma_s = \sigma^{1}_s \sigma^{2}_s = 739628451$.
\end{enumerate}
\noindent Going the other way, start with $\beta=(5,4)\vDash 9$, $\alpha=(2,3,2,2) \preccurlyeq \beta$, and $\tau = 739628451 \in S_9$ in one-line notation, and we want to find $\sigma \in \mathrm{Cons}_{\alpha \preccurlyeq \beta}$ and $s \in A_\alpha$ such that $\sigma_s = \tau$. \begin{enumerate}[1'.] \item Partition $\tau$ according to $\beta$: \quad
$\tau^{1} = 73962$ and $\tau^{2} = 8451.$
\item Cycle $\tau^{i}$ into $\alpha \preccurlyeq \beta$ consistency and record shifts: {\setlength{\fboxsep}{1.5pt}
\begin{align*}
\tau^{1} = 73962 \xrightarrow{\text{take first $a_2^{(1)} = 5$ terms}} &\ \stackrel{(\xleftarrow{~3~}) }{\underline{73\fbox{$9$}62}}
\xrightarrow{\text{cycle left so largest is last, and record}} \underline{6273\fbox{$9$}}, & s_2^{(1)} = 3,\\
\xrightarrow{\text{take first $a_1^{(1)} = 2$ terms}} &\ \stackrel{(\xleftarrow{1})}{\underline{\fbox{$6$}2}}\!\!739
\xrightarrow{\text{cycle left so largest is last, and record}} \underline{2\fbox{$6$}}739 = \sigma^{1}, & s_1^{(1)} = 1;\\ \end{align*}
\begin{align*}
\tau^{2} = 8451 \xrightarrow{\text{take first $a_2^{(2)} = 4$ terms}} & \stackrel{(\xleftarrow{~1~})}{\underline{\fbox{$8$}451}}
\xrightarrow{\text{cycle left so largest is last, and record}} \underline{451\fbox{$8$}}, & s_2^{(2)} = 1,\\
\xrightarrow{\text{take first $a_1^{(2)} = 2$ terms}} &\ \stackrel{\checkmark}{\underline{4\fbox{5}}}18
\xrightarrow{\text{cycle left so largest is last, and record}} \underline{4\fbox{$5$}}18 = \sigma^{2}, & s_1^{(2)} = 0.\\ \end{align*}}
\item Combine to get $\sigma = \sigma^{1} \sigma^{2} = 267394518$ and $s = ((1,3),(0,1))$ as expected.
\end{enumerate} \end{example}
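The two procedures of Lemma \ref{lem:cons}, as illustrated in the example above, can be transcribed directly. A Python sketch (names ours; $\alpha$ is passed pre-split into the chunks $\alpha^{(i)}$ determined by $\beta$):

```python
def cycle_right(word, k, s):
    """Cyclically shift the first k entries of word right by s."""
    s %= k
    return word[k - s:k] + word[:k - s] + word[k:]

def sh(sigma, s, chunks, beta):
    """Forward map Sh: apply the right-shifts s to sigma, block by block."""
    out, pos = [], 0
    for i, chunk in enumerate(chunks):
        word = list(sigma[pos:pos + beta[i]])
        pos += beta[i]
        a = 0
        for j, part in enumerate(chunk):
            a += part                     # partial sum a_j^{(i)}
            word = cycle_right(word, a, s[i][j])
        out += word
    return tuple(out)

def sh_inv(tau, chunks, beta):
    """Inverse map: recover (sigma, s) by left-cycling maxima into place."""
    out, shifts, pos = [], [], 0
    for i, chunk in enumerate(chunks):
        word = list(tau[pos:pos + beta[i]])
        pos += beta[i]
        partials, a = [], 0
        for part in chunk:
            a += part
            partials.append(a)
        s_i = [0] * len(chunk)
        for j in reversed(range(len(chunk))):
            a = partials[j]
            m = word[:a].index(max(word[:a]))
            t = (m + 1) % a               # left shifts so the max is last
            word = word[t:a] + word[:t] + word[a:]
            s_i[j] = t
        out += word
        shifts.append(tuple(s_i))
    return tuple(out), tuple(shifts)
```

On the data of the example ($\beta=(5,4)$, chunks $(2,3)$ and $(2,2)$), these maps exchange $\sigma=267394518$ with $s=((1,3),(0,1))$ and $\tau=739628451$.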
One might reasonably ask for a ``combinatorial'' description of the quasisymmetric power sums, one similar to our description of the monomial and fundamental bases of the quasisymmetric functions as in (\ref{eq:Ms}) and (\ref{eq:Fs}). There is no altogether satisfactory such formula for either of the quasisymmetric power sums, although the previous lemma hints at what appears to be the best possible interpretation in the Type 1 case. We give the formula and its quick proof. Before we begin, we introduce the following notation, which will be useful both here and in the fundamental expansion of the Type 1 power sums.
\begin{notn}[$\widehat{\alpha(\sigma)}$, $\mathrm{Cons}_{\alpha}$]\label{notn:hat} Given a permutation $\sigma$ and a composition $\alpha$, let $\widehat{\alpha(\sigma)}$ denote the coarsest composition $\beta$ with $\beta \succcurlyeq \alpha$ and $\sigma \in \mathrm{Cons}_{\alpha\preccurlyeq\beta}.$ For example, if $\alpha=(3,2,2)$ and $\sigma=1352467$, then $\widehat{\alpha(\sigma)}=(3,4)$. In addition, we write $\sigma \in \mathrm{Cons}_\alpha$ if we are considering $\beta=\alpha$.\end{notn}
\begin{thm}\label{thm:combPsi} Let $m_i(\alpha)$ be the multiplicity of $i$ in $\alpha$. Then
$$\Psi_{(\alpha_1,\cdots,\alpha_k)}(x_1,\cdots,x_m)
=\frac{ \prod_{i=1}^n m_i(\alpha)!}{n!}\sum_{\sigma\in S_n}
\sum_{\substack{1\leq i_1\leq \cdots\leq i_k\leq m\\
i_j=i_{j+1}\Rightarrow\\
\max(\splt{\alpha}{\sigma}{j})<\max(\splt{\alpha}{\sigma}{j+1})}}x_{i_1}^{\alpha_1}\cdots x_{i_k}^{\alpha_k},$$
where $\max(\splt{\alpha}{\sigma}{j})$ is the largest entry of the word $\splt{\alpha}{\sigma}{j}$ defined in Notation 3.4. \end{thm} \begin{proof} First, by Lemmas \ref{lem:cons} and \ref{lemma:niceobs},
\begin{align}
\Psi_\alpha &=\frac{z_\alpha}{n!}\sum_{\alpha\preccurlyeq\beta}|\mathrm{Cons}_{\alpha\preccurlyeq\beta}|M_\beta \nonumber\\ &=\frac{z_\alpha}{n!}\sum_{\sigma\in \mathrm{Cons}_\alpha}\sum_{\alpha\preccurlyeq\delta\preccurlyeq \widehat{\alpha(\sigma)}}M_\delta \label{eq:unscaled}\\&= \frac{ \prod_{i=1}^n m_i(\alpha)!}{n!}\sum_{\sigma\in \mathrm{Cons}_\alpha}\left(\prod_{i}i^{m_i(\alpha)}\right)\sum_{\alpha\preccurlyeq\delta\preccurlyeq \widehat{\alpha(\sigma)}}M_\delta.\nonumber\end{align} The second equality follows by grouping together terms according to the permutation counted rather than the monomial basis. Next, for each $\sigma\in \mathrm{Cons}_\alpha$, we can produce $\left(\prod_{i}i^{m_i(\alpha)}\right)$ permutations by cycling each $\sigma$ (within cycles defined by $\alpha$) in all possible ways. (Thus for all $j$ we cycle $\splt{\alpha}{\sigma}{j}$.) The result for a fixed $\alpha$ is all permutations of $S_n$ (considered in one-line notation by removing the cycle marks). In particular, for any new permutation $\tau$ we may recover the original permutation $\sigma$ by cycling each $\splt{\alpha}{\tau}{j}$ until the maximal term in each cycle is again at the end. Thus we may instead sum over all permutations and consider the largest element (rather than the last element) in each cycle: \begin{align*} \Psi_\alpha &=\frac{ \prod_{i=1}^n m_i(\alpha)!}{n!}
\sum_{\sigma\in S_n}
\sum_{\substack{1\leq i_1\leq \cdots\leq i_k\leq m\\
i_j=i_{j+1}\Rightarrow\\
\max(\splt{\alpha}{\sigma}{j})<\max(\splt{\alpha}{\sigma}{j+1})}}
x_{i_1}^{\alpha_1}\cdots x_{i_k}^{\alpha_k}. \end{align*}
\end{proof} While one might hope to incorporate the multiplicities (perhaps by summing over a different combinatorial object, or by cycling parts and then sorting them by largest last element), there does not seem to be a natural way to do so with previously well-known combinatorial objects; the heart of the problem is that the definition of consistency inherently uses (sub)permutations written in standard form, while $\frac{n!}{z_\alpha}$ counts permutations with cycle type $\alpha$ in partition form. This subtlety blocks simplification of formulas throughout the paper. In practice, we expect (\ref{eq:unscaled}) to be the more useful expression because of this fact, but we work out the details of the other interpretation here as it cleanly expresses this easily overlooked subtlety.
\subsubsection{Type 1 quasisymmetric power sums refine symmetric power sums} We next turn our attention to a proof that the type 1 quasisymmetric power sums refine the symmetric power sums in a natural way. \begin{notn}[$R_{\alpha\beta}$, $\mathcal{O}_{\alpha\beta}$]\label{notn:R} For compositions $\alpha$, $\beta$, let \begin{equation*}
R_{\alpha\beta} = | \mathcal{O}_{\alpha\beta}|, \text{ where } \mathcal{O}_{\alpha\beta} = \left\{ \left. \begin{matrix}\text{ordered set partitions}\\\text{$(B_1,\cdots, B_{\ell(\beta)})$ of $\{1,\cdots,\ell(\alpha)\}$}\end{matrix} ~\right| ~
\beta_j=\sum_{i\in B_j}\alpha_i \text{ for } 1\leq j\leq \ell(\beta) \right\},
\end{equation*} i.e.\ $R_{\alpha\beta}$ is the number of ways to group the parts of $\alpha$ so that the parts in the $j$th (unordered) group sum to $\beta_j$. \end{notn} \begin{example} If $\alpha=(1,3,2,1)$ and $\beta=(3,4)$, then $\mathcal{O}_{\alpha\beta}$ consists of $$(\{2\}, \{1,3,4\}),\ (\{1,3\}, \{2,4\}), \quad \text{and} \quad (\{ 3,4\},\{1,2 \}),$$
corresponding, respectively, to the three possible ways of grouping parts of $\alpha$, $$(\alpha_2 , \alpha_1+\alpha_3+\alpha_4),\ (\alpha_1+\alpha_3 , \alpha_2+\alpha_4), \quad \text{and} \quad ( \alpha_3+\alpha_4 , \alpha_1+\alpha_2).$$ Therefore $R_{\alpha \beta} = 3$. \end{example}
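The count $R_{\alpha\beta}$ is easily computed by brute force over all assignments of parts of $\alpha$ to blocks; a short Python sketch (the function name is ours):

```python
from itertools import product

def R(alpha, beta):
    """R_{alpha beta}: number of ordered set partitions (B_1,...,B_l) of the
    index set of alpha such that the parts indexed by B_j sum to beta_j."""
    count = 0
    for assign in product(range(len(beta)), repeat=len(alpha)):
        sums = [0] * len(beta)
        for i, block in enumerate(assign):
            sums[block] += alpha[i]
        if sums == list(beta):
            count += 1
    return count
```

For $\alpha=(1,3,2,1)$ and $\beta=(3,4)$ this gives $3$, as in the example; note also that reordering $\alpha$ or $\beta$ does not change the count, illustrating $R_{\alpha\beta}=R_{\widetilde{\alpha}\widetilde{\beta}}$ below.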
For partitions $\lambda, \mu$, we have \begin{equation*}\label{eq:p-in-m}
p_\lambda=\sum_{\mu \vdash n}R_{\lambda \mu}m_\mu \end{equation*} (see for example \cite[p.297]{Sta99v2}). Further, if $\widetilde{\alpha}$ is the partition obtained by putting the parts of $\alpha$ in decreasing order as before, then \begin{equation*}\label{eq:p-in-M} R_{\alpha\beta}=R_{\widetilde{\alpha}\widetilde{\beta}}, \qquad \text{ implying }\qquad p_\lambda=\sum_{\alpha \models n}R_{\lambda \alpha}M_\alpha. \end{equation*}
The refinement of the symmetric power sums can be established either by exploiting duality or through a bijective proof. We present the bijective proof first (using it to justify our combinatorial interpretation of the basis) and defer the simpler duality argument to \S~\ref{sec:products}. \begin{thm}\label{thm:refine} Let $\lambda \vdash n$. Then \[p_\lambda = \sum_{\alpha: \widetilde{\alpha}=\lambda} \Psi_\alpha.\] \end{thm} \begin{cor}\label{cor:refine} $\Psi_\alpha=z_\alpha\psi_\alpha=z_{\widetilde{\alpha}}\psi_\alpha$ is the unique rescaling of the $\psi$ basis (that is the dual basis to $\boldsymbol{\Psi}$) which refines the symmetric power sums with unit coefficients. \end{cor}
Recall from \eqref{eq:PsiM} that for a composition $\alpha$, $$\Psi_\alpha = \sum_{\beta \succcurlyeq \alpha}\frac{z_\alpha}{\pi(\alpha,\beta)}M_\beta.$$ Summing over all $\alpha$ rearranging to $\lambda$ and multiplying both sides by ${n!}/{z_\lambda}$, we see that to prove Theorem \ref{thm:refine} it suffices to establish the following for a fixed $\beta\vDash n$.
\begin{prop}\label{prop:consistent} For $\lambda \vdash n$ and $\beta \vDash n$, \[R_{\lambda\beta}\frac{n!}{z_{\lambda}}=\sum_{\substack{\alpha \preccurlyeq \beta \\ \widetilde{\alpha}=\lambda}}\frac{n!}{\pi(\alpha, \beta)}.\] \end{prop}
\begin{proof} The proof is in two steps, with the first being Lemma~\ref{lem:cons}. Let $\beta\vDash n$ and $\lambda \vdash n$. We must establish that \begin{equation}\label{R=cons}R_{\lambda\beta} \frac{n!}{z_{\lambda}} = \sum_{\substack{\alpha \preccurlyeq \beta\\\widetilde{\alpha}=\lambda}}|\mathrm{Cons}_{\alpha\preccurlyeq\beta}|.\end{equation}
Let $\mathcal{O}_{\lambda\beta}$ be the set of ordered set partitions as defined in Notation\ \ref{notn:R}. For each refinement $\alpha \preccurlyeq \beta$, let $C_{\alpha}=\{(\alpha, \sigma) \mid \sigma \mbox{ is consistent with } \alpha \preccurlyeq \beta\}$ and define $ C=\bigcup_{\substack{\alpha \preccurlyeq \beta \\ \widetilde{\alpha}=\lambda}} C_{\alpha}.$ Denote by $S_n^{\lambda}$ the set of permutations of $[n]$ of cycle type $\lambda$. Then we prove \eqref{R=cons} by defining the map
$$\mathrm{Br}: C \to \mathcal{O}_{\lambda\beta} \times S_n^{\lambda}$$ as follows (see also Example \ref{ex:Rlambdabeta-identity(a)}), and showing that it is a bijection. Start with $(\alpha, \sigma)\in C$, with $\sigma$ written in one-line notation. \begin{enumerate}[1.] \item Add parentheses to $\sigma$ according to $\alpha$, and denote the corresponding permutation (now written in cycle notation) $\bar{\sigma}$. \item Sort the cycles of $\bar{\sigma}$ into partition form (as in Definition \ref{def:standard-partition}), and let $c_i$ be the $i$th cycle in this ordering. \item Comparing to $\bar{\sigma}^{1}$, \dots, $\bar{\sigma}^{\ell}$ as in Definition \ref{defn:consistent} of consistent permutations, define $B=(B_1, \dots, B_k)$ by $j \in B_i$ when $c_j$ belongs to $\bar{\sigma}^{i}$, i.e.\ $\bar{\sigma}^{i} = \prod_{j \in B_i} c_j$. \end{enumerate} Define $\mathrm{Br}(\alpha, \sigma)=(B, \bar{\sigma}).$ Since $\alpha$ rearranges to $\lambda$, $\bar{\sigma}$ has (unordered) cycle type $\lambda$. And since $\prod_{j \in B_i} c_j = \bar{\sigma}^{i},$ we have $\sum_{j \in B_i}\lambda_j=\beta_i.$ Thus $\mathrm{Br}: (\alpha, \sigma) \mapsto (B, \bar{\sigma})$ is well-defined.
Next, we show that $\mathrm{Br}$ is invertible, and therefore a bijection. Namely, fix $B = (B_1, \ldots, B_{\ell}) \in \mathcal{O}_{\lambda \beta}$ and $\bar{\sigma} \in S_n^\lambda$, writing $\bar{\sigma} = c_1 c_2 \cdots c_k$ in partition form. Then determine $(\alpha, \sigma) \in C$ as follows. \begin{enumerate}[1'.] \item Let $\bar{\sigma}^{i} = \prod_{j \in B_i} c_j$, and sort $\bar{\sigma}^{i}$ into standard form (as a permutation of the corresponding subalphabet of $[n]$). \item Let $\alpha$ be the (ordered) cycle type of $\bar{\sigma}^{1}\bar{\sigma}^{2}\cdots\bar{\sigma}^{\ell}$. \item Delete the parentheses. Let $\sigma$ be the corresponding permutation written in one-line notation. \end{enumerate} By construction, $\alpha$ refines $\beta$ and is a rearrangement of $\lambda$, and $\sigma$ is ($\alpha \preccurlyeq \beta$)-consistent. And it is straightforward to verify that this process exactly inverts $\mathrm{Br}$. Therefore $\mathrm{Br}^{-1}$ is well-defined. This implies that $\mathrm{Br}$ is a bijection and hence \eqref{R=cons} holds.
Then it follows from Lemma \ref{lem:cons} that \[R_{\lambda\beta}\frac{n!}{z_\lambda}=\sum_{\substack{\alpha\preccurlyeq\beta\\\widetilde{\alpha}=\lambda}}\frac{n!}{\pi(\alpha,\beta)}\] as desired. \end{proof}
\begin{example}\label{ex:Rlambdabeta-identity(a)} As an example of the construction of $\mathrm{Br}$ in the proof of Proposition~\ref{prop:consistent}, let $\beta=(5,4)$, $\alpha=(2,3,2,2)$, and $\sigma=267394518$. We want to determine $\mathrm{Br}(\alpha,\sigma)$.
\begin{enumerate}[1.] \item Add parentheses to $\sigma$ according to $\alpha$: \ \ $\bar{\sigma} = (26)(739)(45)(18)$. \item Partition-sort the cycles of $\bar{\sigma}$: $\bar{\sigma} = \underbrace{(739)}_{c_1}\underbrace{(45)}_{c_2}\underbrace{(26)}_{c_3}\underbrace{(18)}_{c_4}$.
\item Compare to the $\beta$-partitioning: $\underbrace{(26)(739)}_{\bar{\sigma}^{1}}|\!|\underbrace{(45)(18)}_{\bar{\sigma}^{2}}$. So $B=(\{1,3\},\{2,4\})$, since $\bar{\sigma}^{1} = c_1 c_3$ and $\bar{\sigma}^{2} = c_2 c_4$. \end{enumerate}
\noindent Going the other way, start with $B=(\{1,3\},\{2,4\})$ and $\bar{\sigma} = (739)(45)(26)(18)$ written in partition form. \begin{enumerate}[1'.]
\item Place cycles into groups according to $B$: $(739)(26) |\!| (45)(18)$.
\item Sort within parts in ascending order by largest value: $(26) (739)|\!| (45)(18)$\\
Then $\alpha = (2,3,2,2)$. \item Delete parentheses to get $\sigma = 267394518$. \end{enumerate} \end{example}
\subsection{Type 2 quasisymmetric power sums}\label{sec:type2power} In order to describe the second type of quasisymmetric power sums, we introduce the following notation.
\begin{notn}[$\spab(\beta,\alpha)$] For $\beta \preccurlyeq \alpha$, let $\spab(\beta,\alpha) = \prod_i \spab(\beta^{(i)})$, where $\spab(\gamma)=\ell(\gamma)!\prod_j\gamma_j$. \end{notn}
As shown in~\cite{GKLLRT94}, we can write the noncommutative complete homogeneous functions in terms of the noncommutative power sums of type 2 as \[\boldsymbol{h}_\alpha = \sum_{\beta\preccurlyeq\alpha}\frac{1}{\spab(\beta,\alpha)}\boldsymbol{\Phi}_\beta.\]
By duality, the basis dual to $\boldsymbol{\Phi}_\beta$ can be written as a sum of monomial symmetric functions as \[\phi_\alpha = \sum_{\beta\succcurlyeq\alpha}\frac{1}{\spab(\alpha,\beta)}M_\beta.\]
We then define the type 2 quasisymmetric power sums as \[\Phi_\alpha = z_\alpha \phi_\alpha.\]
A similar polynomial $P_\alpha$ is defined in~\cite{MalReu95}, and is related to $\phi_\alpha$ by $\phi_\alpha=\left(\prod_i \alpha_i\right)^{-1} P_\alpha$. Note that this means Malvenuto and Reutenauer's polynomial $P_\alpha$ is not dual to $\boldsymbol{\Phi}_\alpha$ and (by the following results) does not refine the symmetric power sums. For example, $\Phi_{322} = 2M_{322}+M_{52}+M_{34}+\frac{1}{3}M_7$ whereas $P_{322}=M_{322}+\frac{1}{2}M_{52}+\frac{1}{2}M_{34}+\frac{1}{6}M_7$.
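The monomial expansion of $\Phi_\alpha$ can also be checked directly from the definitions. A Python sketch (helper names ours) computing the coefficients $z_\alpha/\spab(\alpha,\beta)$:

```python
from fractions import Fraction
from itertools import product
from math import factorial

def coarsenings(alpha):
    """Yield (beta, chunks): beta coarsens alpha by merging adjacent parts."""
    for breaks in product([False, True], repeat=len(alpha) - 1):
        chunks, cur = [], [alpha[0]]
        for part, brk in zip(alpha[1:], breaks):
            if brk:
                chunks.append(cur)
                cur = [part]
            else:
                cur.append(part)
        chunks.append(cur)
        yield tuple(sum(c) for c in chunks), chunks

def z(alpha):
    """z_alpha = prod_i i^{m_i} m_i!."""
    mult = {}
    for a in alpha:
        mult[a] = mult.get(a, 0) + 1
    out = 1
    for i, m in mult.items():
        out *= i ** m * factorial(m)
    return out

def sp(chunks):
    """sp(alpha, beta) = prod_i ell(alpha^(i))! * prod_j alpha_j."""
    out = 1
    for chunk in chunks:
        out *= factorial(len(chunk))
        for part in chunk:
            out *= part
    return out

def phi2_in_M(alpha):
    """Coefficient of M_beta in Phi_alpha = z_alpha * phi_alpha."""
    za = z(alpha)
    return {beta: Fraction(za, sp(chunks)) for beta, chunks in coarsenings(alpha)}
```

For $\alpha=(3,2,2)$ this recovers $\Phi_{322} = 2M_{322}+M_{52}+M_{34}+\frac{1}{3}M_7$ as above.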
We can obtain a more combinatorial description for $\Phi_\alpha$ by rewriting the coefficients and interpreting them in terms of ordered set partitions.
\begin{notn}[$\OSP(\alpha,\beta)$]\label{notn:OSP} Let $\alpha \preccurlyeq \beta$ and let $\OSP(\alpha,\beta)$ denote the set of ordered set partitions of $\{1,\ldots,\ell(\alpha)\}$ with block sizes $|B_i|=\ell(\alpha^{(i)})$. If $\alpha \not\preccurlyeq \beta$, we set $\OSP(\alpha,\beta)=\emptyset$. \end{notn} \begin{thm}\label{thm:power2} Let $\alpha\vDash n$ and let $m_i$ denote the number of parts of $\alpha$ of size $i$. Then
\[\Phi_\alpha = \binom{\ell(\alpha)}{m_1,m_2,\ldots,m_k}^{-1}\sum_{\beta\succcurlyeq\alpha}|\OSP(\alpha,\beta)|M_\beta.\] \end{thm} \begin{proof} Given $\alpha\vDash n$, let $m_i$ denote the number of parts of $\alpha$ of size $i$. Then \begin{align*} \Phi_\alpha&=\sum_{\beta\succcurlyeq\alpha} \frac{z_\alpha}{\spab(\alpha,\beta)}M_\beta\\ &= \sum_{\beta\succcurlyeq\alpha}\frac{z_\alpha}{\prod_j \alpha_j\prod_i (\ell(\alpha^{(i)}))! } M_\beta\\ &=\frac{z_\alpha}{\ell(\alpha)!\prod_j \alpha_j}\sum_{\beta\succcurlyeq\alpha}\frac{\ell(\alpha)!}{\prod_i(\ell(\alpha^{(i)}))!}M_\beta\\ &=\binom{\ell(\alpha)}{m_1,m_2,\ldots,m_k}^{-1}\sum_{\beta\succcurlyeq\alpha}\frac{\ell(\alpha)!}{\prod_i(\ell(\alpha^{(i)}))!}M_\beta. \end{align*} Note that $\dfrac{\ell(\alpha)!}{\prod_i(\ell(\alpha^{(i)}))!}$ is the number of ordered set partitions of $\{1,\ldots,\ell(\alpha)\}$ with block sizes $|B_i|=\ell(\alpha^{(i)})$. Thus
\[\Phi_\alpha = \binom{\ell(\alpha)}{m_1,m_2,\ldots,m_k}^{-1}\sum_{\beta\succcurlyeq\alpha}|\OSP(\alpha,\beta)|M_\beta.\qedhere\] \end{proof}
\begin{thm}{\label{thm:2refine}} The type 2 quasisymmetric power sums refine the symmetric power sums by $$p_\lambda = \sum_{\widetilde{\alpha}=\lambda}\Phi_\alpha.$$ \end{thm}
Here the proof requires only a single (and less complex) bijection. \begin{lemma}\label{lem:type2refine} Let $\lambda\vdash n$ and $\beta \vDash n$. Let $m_i$ denote the number of parts of $\lambda$ of size $i$. Then
\[\binom{\ell(\lambda)}{m_1,m_2,\ldots,m_k}R_{\lambda\beta} = \sum_{\substack{\alpha\preccurlyeq\beta\\\widetilde{\alpha}=\lambda}}|\OSP(\alpha,\beta)|.\] \end{lemma} \begin{proof} Let $\lambda \vdash n$ and let $m_i$ denote the number of parts of $\lambda$ of size $i$, so that $\ell(\lambda) = \sum_{i=1}^{\lambda_1} m_i$. We can model a composition $\alpha$ that rearranges to $\lambda$ as an ordered set partition $(A_1,\ldots,A_{\lambda_1})$ of $\{1,\cdots,\ell(\lambda)\}$ where
$A_i = \{j ~|~ \alpha_j = i\}$. Thus, if $$\mathcal{A}_\lambda
= \left\{\left. \begin{matrix}\text{ordered set partitions}\\\text{$(A_1,\ldots,A_{\lambda_1})$ of $\{1,\cdots,\ell(\lambda)\}$}\end{matrix} ~\right| ~
|A_i| = m_i \right\},
$$ then the map
$$\gamma: \mathcal{A}_\lambda \to \{\alpha \vDash n ~|~ \tilde{\alpha} = \lambda\}$$
defined by \begin{equation} \gamma(A) \text{ is the composition with $\gamma(A)_j=i$ for all $j \in A_i$} \label{eq:gamma(A)} \end{equation} is a natural bijection.
Further, we have $|\mathcal{A}_\lambda|=\binom{\ell(\lambda)}{m_1,m_2,\ldots,m_{\lambda_1}}$.
Now, fix $\beta \vDash n$. Recall, we have
$$\mathcal{O}_{\lambda\beta}= \left\{\left. \begin{matrix}\text{ordered set partitions}\\\text{$(B_1,\cdots, B_{\ell(\beta)})$ of $\{1,\cdots,\ell(\lambda)\}$}\end{matrix} ~\right| ~
\beta_j=\sum_{i\in B_j}\lambda_i \text{ for } 1\leq j\leq \ell(\beta) \right\},
$$
so that $|\mathcal{O}_{\lambda\beta}| = R_{\lambda \beta}$ (Notation \ref{notn:R}). For $\alpha\preccurlyeq\beta$, we have
$$\OSP(\alpha, \beta) = \left\{\left. \begin{matrix}\text{ordered set partitions}\\\text{$(C_1,\cdots, C_{\ell(\beta)})$ of $\{1,\cdots,\ell(\alpha)\}$}\end{matrix} ~\right| ~ |C_i| = \ell(\alpha^{(i)})\right\}$$ (Notation \ref{notn:OSP}). Informally, $\mathcal{O}_{\lambda\beta}$ tells us how to build $\beta$ as a combination of parts of $\lambda$, while $\OSP(\alpha, \beta)$ records a shuffle of a refinement $\alpha \preccurlyeq \beta$.
For an ordered set partition $P = (P_1, \dots, P_\ell)$ of $\{1, \dots, \ell(\lambda)\}$, let $p_1^{(i)}, \dots, p_{|P_i|}^{(i)}$ be the elements of $P_i$ written in increasing order. Define $w_P$ to be the permutation (in one-line notation) given by $$w_{P_i} = p_1^{(i)} \cdots p_{|P_i|}^{(i)}\qquad \text{ and } \qquad w_P = w_{P_1} \cdots w_{P_{\ell}}.$$
We are now ready to construct a bijection
$$g: \mathcal{A}_\lambda \times \mathcal{O}_{\lambda\beta} \to \bigsqcup_{\substack{\alpha\preccurlyeq\beta\\\widetilde{\alpha}=\lambda}}\{(\alpha, C) ~|~ C \in \OSP(\alpha, \beta)\}.$$ See Example \ref{ex:OSP}.
Let $(A, B) \in \mathcal{A}_\lambda \times \mathcal{O}_{\lambda\beta}$. Initially, set $\alpha' = \gamma(A)$ (where $\gamma$ is the map in \eqref{eq:gamma(A)}), and set $C\in \mathcal{O}_{\alpha'\beta}$ equal to the image of $B$ under the permutation of indices induced by $\lambda \to \gamma(A)$. Namely, if the $i$th part of $\lambda$ is placed into the $j$th part of $\alpha'$ (where parts of equal length are kept in the same relative order), then $i$ in $B$ is replaced by $j$ in $C$. Now, act by $w_C^{-1}$ on the subscripts of $\alpha'$ to get $\alpha$. The result is $\alpha \preccurlyeq \beta$, $\tilde{\alpha} = \lambda$, and $C \in \OSP(\alpha, \beta)$. Let $g((A,B)) = (\alpha, C)$.
To see that this is a bijection, we show that each step in building $g((A,B))$ is invertible as follows. Take $\alpha \preccurlyeq \beta$ such that $\tilde{\alpha} = \lambda$, and let $C \in \OSP(\alpha, \beta)$. Let $\alpha'$ be the result of acting by $w_C$ on the subscripts of $\alpha$. Then we can recover $A = \gamma^{-1}(\alpha')$ from $\alpha'$; and $B$ is the image of $C$ under the permutation of indices induced by $\alpha' \to \lambda$. Namely, if the $j$th part of $\alpha'$ came from the $i$th part of $\lambda$ (where parts of equal length are kept in the same relative order), then $j$ in $C$ is replaced by $i$ in $B$. Then $A \in \mathcal{A}_\lambda$, $B \in \mathcal{O}_{\lambda\beta}$, and setting $g^{-1}((\alpha, C)) = (A,B)$ gives $g(g^{-1}((\alpha, C))) = (\alpha, C)$ and $g^{-1}(g((A,B))) = (A,B)$. \end{proof}
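The counting identity of Lemma \ref{lem:type2refine} can be spot-checked by brute force. A self-contained Python sketch (helper names ours) comparing both sides for small $\lambda$ and $\beta$:

```python
from itertools import permutations, product
from math import factorial

def refine_chunks(alpha, beta):
    """Chunks of alpha summing to the parts of beta, or None if
    alpha does not refine beta."""
    chunks, pos = [], 0
    for b in beta:
        chunk, s = [], 0
        while s < b:
            if pos == len(alpha):
                return None
            chunk.append(alpha[pos])
            s += alpha[pos]
            pos += 1
        if s != b:
            return None
        chunks.append(chunk)
    return chunks if pos == len(alpha) else None

def osp_count(alpha, beta):
    """|OSP(alpha, beta)| = ell(alpha)! / prod_i ell(alpha^(i))!."""
    chunks = refine_chunks(alpha, beta)
    if chunks is None:
        return 0
    out = factorial(len(alpha))
    for c in chunks:
        out //= factorial(len(c))
    return out

def R(lam, beta):
    """R_{lambda beta}, counted over assignments of parts to blocks."""
    count = 0
    for assign in product(range(len(beta)), repeat=len(lam)):
        sums = [0] * len(beta)
        for i, block in enumerate(assign):
            sums[block] += lam[i]
        if sums == list(beta):
            count += 1
    return count

def check_type2(lam, beta):
    """Compare both sides of the refinement identity."""
    mult = {}
    for a in lam:
        mult[a] = mult.get(a, 0) + 1
    binom = factorial(len(lam))
    for m in mult.values():
        binom //= factorial(m)
    lhs = binom * R(lam, beta)
    rhs = sum(osp_count(a, beta) for a in set(permutations(lam)))
    return lhs == rhs
```

For instance, for $\lambda=(2,1,1)$ and $\beta=(3,1)$ both sides equal $6$.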
\begin{example}\label{ex:OSP} Fix $\lambda=(3,2,2,1,1,1,1)$ and $\beta = (5,1,4,1)$. So $m_1 = 4$, $m_2 = 2$, and $m_3 = 1$.
Now consider $$A = (\{1,2,4,7\},\{3,6\},\{5\}) \in \mathcal{A}_{\lambda} \quad \text{ and } \quad B = (\{1,3\},\{4\},\{2,5,7\},\{6\}) \in \mathcal{O}_{\lambda\beta}.$$ Then $\alpha' = \gamma(A) = (1,1,2,1,3,2,1)$, corresponding to the rearrangement $$\TikZ{ \foreach \x in {1,2,...,7}{\coordinate (t\x) at (\x,1); \coordinate (b\x) at (\x,0); } \node[above] at (1,1) {$\begin{matrix}\lambda_1 \\ 3\end{matrix}$}; \node[above] at (2,1) {$\begin{matrix}\lambda_2 \\ 2\end{matrix}$}; \node[above] at (3,1) {$\begin{matrix}\lambda_3 \\ 2\end{matrix}$}; \node[above] at (4,1) {$\begin{matrix}\lambda_4 \\ 1\end{matrix}$}; \node[above] at (5,1) {$\begin{matrix}\lambda_5 \\ 1\end{matrix}$}; \node[above] at (6,1) {$\begin{matrix}\lambda_6 \\ 1\end{matrix}$}; \node[above] at (7,1) {$\begin{matrix}\lambda_7 \\ 1\end{matrix}$}; \node[below] at (1,0) {$\begin{matrix}1\\ \alpha'_1\end{matrix}$}; \node[below] at (2,0) {$\begin{matrix}1\\ \alpha'_2\end{matrix}$}; \node[below] at (3,0) {$\begin{matrix}2\\ \alpha'_3\end{matrix}$}; \node[below] at (4,0) {$\begin{matrix}1\\ \alpha'_4\end{matrix}$}; \node[below] at (5,0) {$\begin{matrix}3\\ \alpha'_5\end{matrix}$}; \node[below] at (6,0) {$\begin{matrix}2\\ \alpha'_6\end{matrix}$}; \node[below] at (7,0) {$\begin{matrix}1\\ \alpha'_7\end{matrix}$}; \draw[thick,->] (t1) to (b5); \draw[thick,->] (t2) to (b3); \draw[thick,->] (t3) to (b6); \draw[thick,->] (t4) to (b1); \draw[thick,->] (t5) to (b2); \draw[thick,->] (t6) to (b4); \draw[thick,->] (t7) to (b7); }, \quad \text{which induces} \quad \TikZ{ \foreach \x in {1,2,...,7}{\node (t\x) at (\x,2) {$\x$}; \node (b\x) at (\x,0) {$\x$}; } \draw[thick,->] (t1) to (b5); \draw[thick,->] (t2) to (b3); \draw[thick,->] (t3) to (b6); \draw[thick,->] (t4) to (b1); \draw[thick,->] (t5) to (b2); \draw[thick,->] (t6) to (b4); \draw[thick,->] (t7) to (b7); }. $$ The image of $B$ under this induced map is $C = (\{5,6\}, \{1\}, \{2, 3, 7\}, \{4\})$. 
So $w_C = 5612374$, and the image of $\alpha'$ under the action of $w_C^{-1}$ on subscripts is $$w_C^{-1} : \ \TikZ{ \foreach \x in {1,2,...,7}{\coordinate (t\x) at (\x,1); \coordinate (b\x) at (\x,0); } \node[above] at (1,1) {$\begin{matrix}\alpha'_1 \\ 1\end{matrix}$}; \node[above] at (2,1) {$\begin{matrix}\alpha'_2 \\ 1\end{matrix}$}; \node[above] at (3,1) {$\begin{matrix}\alpha'_3 \\ 2\end{matrix}$}; \node[above] at (4,1) {$\begin{matrix}\alpha'_4 \\ 1\end{matrix}$}; \node[above] at (5,1) {$\begin{matrix}\alpha'_5 \\ 3\end{matrix}$}; \node[above] at (6,1) {$\begin{matrix}\alpha'_6 \\ 2\end{matrix}$}; \node[above] at (7,1) {$\begin{matrix}\alpha'_7 \\ 1\end{matrix}$}; \node[below] at (1,0) {$\begin{matrix}3\\ \alpha_1\end{matrix}$}; \node[below] at (2,0) {$\begin{matrix}2\\ \alpha_2\end{matrix}$}; \node[below] at (3,0) {$\begin{matrix}1\\ \alpha_3\end{matrix}$}; \node[below] at (4,0) {$\begin{matrix}1\\ \alpha_4\end{matrix}$}; \node[below] at (5,0) {$\begin{matrix}2\\ \alpha_5\end{matrix}$}; \node[below] at (6,0) {$\begin{matrix}1\\ \alpha_6\end{matrix}$}; \node[below] at (7,0) {$\begin{matrix}1\\ \alpha_7\end{matrix}$}; \draw[thick,->] (t5) to (b1); \draw[thick,->] (t6) to (b2); \draw[thick,->] (t1) to (b3); \draw[thick,->] (t2) to (b4); \draw[thick,->] (t3) to (b5); \draw[thick,->] (t7) to (b6); \draw[thick,->] (t4) to (b7); }.$$ And, indeed, we see that $\alpha = (3,2,1,1,2,1,1) \preccurlyeq \beta$ and $C \in \OSP(\alpha, \beta)$.
We can see why it is necessary to record $\alpha$ as follows. For example, we consider inverting $g$ on the same $C$ as above, but now paired with $\alpha = (3,2,1,2,1,1,1)$. Then the image of $\alpha$ under the action of $w_C$ on subscripts is $$w_C: \ \TikZ{ \foreach \x in {1,2,...,7}{\coordinate (t\x) at (\x,1); \coordinate (b\x) at (\x,0); } \node[above] at (1,1) {$\begin{matrix}\alpha_1 \\ 3\end{matrix}$}; \node[above] at (2,1) {$\begin{matrix}\alpha_2 \\ 2\end{matrix}$}; \node[above] at (3,1) {$\begin{matrix}\alpha_3 \\ 1\end{matrix}$}; \node[above] at (4,1) {$\begin{matrix}\alpha_4 \\ 2\end{matrix}$}; \node[above] at (5,1) {$\begin{matrix}\alpha_5 \\ 1\end{matrix}$}; \node[above] at (6,1) {$\begin{matrix}\alpha_6 \\ 1\end{matrix}$}; \node[above] at (7,1) {$\begin{matrix}\alpha_7 \\ 1\end{matrix}$}; \node[below] at (1,0) {$\begin{matrix}1\\ \alpha'_1\end{matrix}$}; \node[below] at (2,0) {$\begin{matrix}2\\ \alpha'_2\end{matrix}$}; \node[below] at (3,0) {$\begin{matrix}1\\ \alpha'_3\end{matrix}$}; \node[below] at (4,0) {$\begin{matrix}1\\ \alpha'_4\end{matrix}$}; \node[below] at (5,0) {$\begin{matrix}3\\ \alpha'_5\end{matrix}$}; \node[below] at (6,0) {$\begin{matrix}2\\ \alpha'_6\end{matrix}$}; \node[below] at (7,0) {$\begin{matrix}1\\ \alpha'_7\end{matrix}$}; \draw[thick,->] (t1) to (b5); \draw[thick,->] (t2) to (b6); \draw[thick,->] (t3) to (b1); \draw[thick,->] (t4) to (b2); \draw[thick,->] (t5) to (b3); \draw[thick,->] (t6) to (b7); \draw[thick,->] (t7) to (b4); }.$$ So $A = (\{1,3,4,7\}, \{2,6\}, \{5\})$, which is different from the $A$ we started with above. The set $B$, though, is left unchanged. \end{example}
\section{Relationships between bases}{\label{sec:btw}}
\subsection{The relationship between the type 1 and type 2 quasisymmetric power sums}
To determine the relationship between the two different types of quasisymmetric power sums, we first use duality to expand the monomial quasisymmetric functions in terms of the type 2 quasisymmetric power sums. Thus, from \eqref{eq:htopsi} and duality we obtain $$M_{\beta} = \sum_{\alpha \succcurlyeq \beta} (-1)^{\ell(\beta)-\ell(\alpha)} \frac{\prod_i \alpha_i}{\ell(\beta,\alpha)} \Phi_{\alpha}.$$
Then we expand the type 1 quasisymmetric power sums in terms of the monomial quasisymmetric functions \eqref{eq:PsiM} and apply substitution to obtain the following expansion of the type 1 quasisymmetric power sums into the type 2 quasisymmetric power sums:
\begin{align*} \Psi_{\alpha} &= \sum_{\beta \succcurlyeq \alpha} \frac{z_{\alpha}}{\pi(\alpha, \beta)}M_{\beta} \\
&= \sum_{\beta \succcurlyeq \alpha} \frac{z_{\alpha}}{\pi(\alpha, \beta)} \sum_{\gamma\succcurlyeq\beta} (-1)^{\ell(\beta)-\ell(\gamma)} \frac{\prod_i \gamma_i}{\ell(\beta, \gamma)} \Phi_{\gamma} \\
&= \sum_{\alpha \preccurlyeq \beta \preccurlyeq \gamma} (-1)^{\ell(\beta)-\ell(\gamma)} \frac{z_{\alpha} \prod_i \gamma_i}{\pi(\alpha, \beta) \ell(\beta, \gamma)} \Phi_{\gamma}.
\end{align*}
A similar process produces
\begin{align*} \Phi_{\alpha} &= \sum_{\beta \succcurlyeq \alpha} \frac{z_\alpha}{sp(\alpha,\beta)} M_{\beta} \\ &= \sum_{\beta \succcurlyeq \alpha} \frac{z_\alpha}{sp(\alpha,\beta)} \sum_{\gamma\succcurlyeq\beta} (-1)^{\ell(\beta)-\ell(\gamma)} lp(\beta, \gamma) \Psi_{\gamma} \\ &= \sum_{\alpha \preccurlyeq \beta \preccurlyeq \gamma} (-1)^{\ell(\beta)-\ell(\gamma)} \frac{z_\alpha lp(\beta, \gamma)}{sp(\alpha, \beta)} \Psi_{\gamma}. \end{align*} \subsection{The relationship between monomial and fundamental quasisymmetric functions} Our next goal is to give the ``cleanest'' possible interpretation of the expansions of the quasisymmetric power sums in the fundamental basis. Towards this goal we first establish a more basic relationship between the $F$ basis and certain sums of monomials.
\begin{notn}[$\alpha^c$, $\alpha\wedge\beta$, $\alpha\vee\beta$]
Given $\alpha\vDash n$, let $\alpha^c=\comp((\set(\alpha))^c)$. Given a second composition $\beta$, $\alpha\wedge\beta$ denotes the finest (i.e.\ with the smallest parts) composition $\gamma$ such that $\gamma\succcurlyeq\alpha$ and $\gamma\succcurlyeq \beta$. Similarly, $\alpha\vee \beta$ denotes the coarsest composition $\delta$ such that $\delta\preccurlyeq\alpha$ and $\delta \preccurlyeq \beta$.
\end{notn} \begin{example} If $\alpha=(2,3,1)$ and $\beta=(1,2,2,1)$, then $\alpha^c=(1,2, 1,2), \alpha\wedge\beta=(5,1)$, and $\alpha\vee \beta=(1,1,1,2,1)$.
\end{example}
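Under the bijection between compositions of $n$ and subsets of $[n-1]$ via partial sums, the three operations above are simply set complement, intersection, and union. The following Python sketch (helper names are ours, not from the text) verifies the example:

```python
from itertools import accumulate

def to_set(comp):
    # set(alpha): partial sums of alpha, excluding the final sum n
    return frozenset(list(accumulate(comp))[:-1])

def to_comp(s, n):
    # comp(S): successive differences of 0, sorted(S), n
    pts = [0] + sorted(s) + [n]
    return tuple(b - a for a, b in zip(pts, pts[1:]))

def complement(comp, n):
    # alpha^c = comp((set(alpha))^c)
    return to_comp(frozenset(range(1, n)) - to_set(comp), n)

def meet(a, b, n):
    # alpha ^ beta: finest common coarsening <-> intersection of sets
    return to_comp(to_set(a) & to_set(b), n)

def join(a, b, n):
    # alpha v beta: coarsest common refinement <-> union of sets
    return to_comp(to_set(a) | to_set(b), n)

n = 6
alpha, beta = (2, 3, 1), (1, 2, 2, 1)
assert complement(alpha, n) == (1, 2, 1, 2)
assert meet(alpha, beta, n) == (5, 1)
assert join(alpha, beta, n) == (1, 1, 1, 2, 1)
```

The set identities $\set(\alpha\wedge\beta)=\set(\alpha)\cap\set(\beta)$ and $\set(\alpha\vee\beta)=\set(\alpha)\cup\set(\beta)$ noted below are built directly into `meet` and `join`.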
\noindent The notation is motivated by the poset of sets ordered by containment (when combined with the bijection from sets to compositions). We note that $\set(\alpha\wedge\beta)=\set(\alpha) \cap \set(\beta)$ and $\set(\alpha \vee \beta)=\set(\alpha)\cup \set(\beta)$.
We begin by writing the sum (over an interval in the refinement partial order) of quasisymmetric monomial functions as a sum of distinct fundamental quasisymmetric functions. \begin{lemma}\label{lem:moninterval} Let $\alpha,\beta\vDash n$ with $\alpha\preccurlyeq\beta$. Then \[\sum_{\delta: \alpha\preccurlyeq\delta\preccurlyeq\beta} M_\delta = \sum_{\beta\vee \alpha^c\preccurlyeq\delta\preccurlyeq\beta}(-1)^{\ell(\beta)-\ell(\delta)}F_\delta.\] \end{lemma}
\begin{proof} Let $\alpha, \beta\vDash n$ with $\alpha \preccurlyeq \beta$. Then \begin{align} \sum_{\alpha\preccurlyeq\delta\preccurlyeq\beta}M_\delta & =\sum_{\alpha\preccurlyeq\delta\preccurlyeq\beta}\sum_{\gamma\preccurlyeq\delta}(-1)^{\ell(\gamma)-\ell(\delta)}F_\gamma\nonumber\\ &=\sum_{\gamma\preccurlyeq\beta}(-1)^{\ell(\gamma)}F_\gamma\left(\sum_{\alpha\wedge\gamma\preccurlyeq\delta\preccurlyeq\beta}(-1)^{\ell(\delta)}\right).\label{eq:fexpansion} \end{align}
Recall (see \cite{Sta99v1}) the M\"obius function for the lattice of subsets of an $(n-1)$-element set, ordered by inclusion. If $S$ and $T$ are subsets of such a set with $T \subseteq S$, then $\mu(T,S)=(-1)^{|S-T|}$ and, for $T \subsetneq S$, $(-1)^{|S-T|}= - \sum_{T\subseteq U\subset S} \mu(T,U)$. Thus, since compositions of $n$ are in bijection with subsets of $[n-1]$ and $\ell(\delta)=|\set(\delta)|+1$, when $\alpha\wedge\gamma \neq \beta$, we can write \begin{align*} \sum_{\alpha\wedge\gamma\preccurlyeq\delta\preccurlyeq\beta} (-1)^{\ell(\delta)} &= (-1)^{\ell(\alpha\wedge\gamma)}+(-1)^{\ell(\beta)}\sum_{\alpha\wedge\gamma\prec\delta\preccurlyeq\beta}(-1)^{\ell(\delta)-\ell(\beta)} \\ &=(-1)^{\ell(\alpha\wedge\gamma)}+(-1)^{\ell(\beta)}\sum_{\alpha\wedge\gamma\prec\delta\preccurlyeq\beta}\mu(\set(\beta),\set(\delta)) \\ &=(-1)^{\ell(\alpha\wedge\gamma)}+(-1)^{\ell(\beta)+1}\mu(\set(\beta), \set(\alpha\wedge\gamma)) \\ &=(-1)^{\ell(\alpha\wedge\gamma)}+(-1)^{\ell(\beta)+1}(-1)^{\ell(\alpha\wedge\gamma)-\ell(\beta)}\\ &=0.\end{align*}
We can now rewrite \eqref{eq:fexpansion} as \[\sum_{\gamma\preccurlyeq\beta}(-1)^{\ell(\gamma)}F_\gamma\left(\sum_{\alpha\wedge\gamma\preccurlyeq\delta\preccurlyeq\beta}(-1)^{\ell(\delta)}\right)=\sum_{\substack{\gamma\preccurlyeq\beta\\\beta=\gamma\wedge\alpha}} (-1)^{\ell(\gamma)+\ell(\alpha\wedge\gamma)}F_\gamma=\sum_{ \alpha^c\vee\beta \preccurlyeq \gamma\preccurlyeq \beta}(-1)^{\ell(\gamma)-\ell(\beta)}F_\gamma.\qedhere\] \end{proof}
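Lemma~\ref{lem:moninterval} is also easy to check by machine. The sketch below (assuming nothing beyond the subset encoding of compositions and the standard expansion $M_\delta=\sum_{\gamma\preccurlyeq\delta}(-1)^{\ell(\gamma)-\ell(\delta)}F_\gamma$) expands both sides in the $F$ basis and compares coefficients for every pair $\alpha\preccurlyeq\beta$ when $n=4$:

```python
from itertools import combinations
from collections import defaultdict

n = 4
ground = frozenset(range(1, n))  # compositions of n <-> subsets of {1,...,n-1}

def subsets(s):
    elems = sorted(s)
    return [frozenset(c) for k in range(len(elems) + 1)
            for c in combinations(elems, k)]

def M_in_F(S):
    # M_delta = sum over refinements gamma of delta of (-1)^{l(gamma)-l(delta)} F_gamma;
    # gamma refines delta  <=>  set(delta) is a subset of set(gamma)
    out = defaultdict(int)
    for extra in subsets(ground - S):
        out[S | extra] += (-1) ** len(extra)
    return out

def check(Sa, Sb):
    # Sa = set(alpha), Sb = set(beta); alpha refines beta means Sb <= Sa
    lhs = defaultdict(int)
    for mid in subsets(Sa - Sb):          # delta ranges over the interval [alpha, beta]
        for G, c in M_in_F(Sb | mid).items():
            lhs[G] += c
    top = Sb | (ground - Sa)              # set(beta v alpha^c)
    rhs = {Sb | extra: (-1) ** len(extra) for extra in subsets(top - Sb)}
    return {k: v for k, v in lhs.items() if v} == rhs

assert all(check(Sa, Sb) for Sa in subsets(ground) for Sb in subsets(Sa))
```

Both sides collapse to the interval $[\beta\vee\alpha^c,\beta]$ with signs $(-1)^{\ell(\beta)-\ell(\delta)}$, exactly as the lemma asserts.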
\subsection{The relationship between type 1 quasisymmetric power sums and fundamental quasisymmetric functions}
Recall Notation \ref{notn:hat} for the following. \begin{thm}\label{thm:psitoF} Let $\alpha\vDash n$. Then
\[\Psi_\alpha = \frac{z_\alpha}{n!}\sum_{\gamma\succcurlyeq\alpha}|\{\sigma\in\mathrm{Cons}_\alpha: \widehat{\alpha(\sigma)}=\gamma\}|\sum_{\eta\succcurlyeq \alpha^c}(-1)^{\ell(\eta)-1}F_{\gamma\vee\eta}.\] \end{thm}
\begin{proof} Let $\alpha\vDash n$. We use $\mathbbold{1}_{\mathcal{R}}$ to denote the characteristic function of the relation $\mathcal{R}$.
Combining the quasisymmetric monomial expansion of $\Psi_\alpha$ given in \eqref{eq:PsiM}, Lemma~\ref{lem:cons}, and Lemma~\ref{lem:moninterval}, we have \begin{align*}
\Psi_\alpha &=\frac{z_\alpha}{n!}\sum_{\alpha\preccurlyeq\beta}|\mathrm{Cons}_{\alpha\preccurlyeq\beta}|M_\beta\\ &=\frac{z_\alpha}{n!}\sum_{\sigma\in \mathrm{Cons}_\alpha}\sum_{\alpha\preccurlyeq\delta\preccurlyeq \widehat{\alpha(\sigma)}}M_\delta \\ &=\frac{z_\alpha}{n!}\sum_{\sigma\in\mathrm{Cons}_\alpha}\sum_{\alpha^c\vee \widehat{\alpha(\sigma)}\preccurlyeq\delta\preccurlyeq\widehat{\alpha(\sigma)}}(-1)^{\ell(\widehat{\alpha(\sigma)})-\ell(\delta)}F_\delta \textrm{ (by Lemma~\ref{lem:moninterval})} \\ &=\frac{z_\alpha}{n!}\sum_{\delta\vDash n}(-1)^{\ell(\delta)}F_\delta \sum_{\sigma\in\mathrm{Cons}_\alpha}(-1)^{\ell(\widehat{\alpha(\sigma)})}\mathbbold{1}_{\alpha^c\vee\widehat{\alpha(\sigma)}\preccurlyeq \delta \preccurlyeq \widehat{\alpha(\sigma)}}\\
&=\frac{z_\alpha}{n!}\sum_{\gamma\succcurlyeq\alpha}|\{\sigma\in\mathrm{Cons}_\alpha:\widehat{\alpha(\sigma)}=\gamma\}|\sum_{\delta \vDash n}(-1)^{\ell(\gamma)-\ell(\delta)}F_\delta \mathbbold{1}_{\alpha^c\vee\gamma\preccurlyeq \delta\preccurlyeq\gamma}, \end{align*} with the last equality holding since the compositions $\widehat{\alpha(\sigma)}$ are coarsenings of $\alpha$. It is straightforward to check that given $\gamma \succcurlyeq \alpha$ and $\delta \vDash n$, there exists $\eta \succcurlyeq \alpha^c$ such that $\delta=\gamma\vee\eta$ if and only if $\delta \succcurlyeq \alpha^c\vee\gamma$. Then \begin{align*}
\Psi_\alpha&=\frac{z_\alpha}{n!}\sum_{\gamma\succcurlyeq\alpha}|\{\sigma\in \mathrm{Cons}_\alpha: \widehat{\alpha(\sigma)}=\gamma\}|(-1)^{\ell(\gamma)}\sum_{\eta\succcurlyeq\alpha^c}(-1)^{\ell(\gamma\vee\eta)}F_{\gamma\vee\eta}\\
&= \frac{z_\alpha}{n!}\sum_{\gamma\succcurlyeq\alpha}|\{\sigma\in\mathrm{Cons}_\alpha: \widehat{\alpha(\sigma)}=\gamma\}|\sum_{\eta\succcurlyeq \alpha^c}(-1)^{\ell(\eta)-1}F_{\gamma\vee\eta}. \end{align*}
The final equality is established by noting that $\set(\gamma)\cap\set(\eta)=\emptyset$, so $\ell(\gamma\vee\eta)=|\set(\gamma\vee\eta)|+1=|\set(\gamma)|+|\set(\eta)|+1=\ell(\gamma)+\ell(\eta)-1$. \end{proof}
\begin{note}The $F_{\gamma \vee \eta}$'s appearing in this sum are distinct, so the coefficient of $F_\delta$ is either $0$ or, when $\delta = \gamma \vee \eta$ for (necessarily unique) $\gamma \succcurlyeq \alpha$ and $\eta \succcurlyeq \alpha^c$, equal to $$\frac{z_\alpha}{n!}\,|\{ \sigma \in \mathrm{Cons}_\alpha ~|~ \widehat{\alpha(\sigma)} = \gamma \}|\, (-1)^{\ell(\eta)-1}.$$ This follows from the fact that we can recover $\gamma$ and $\eta$ from $\gamma \vee \eta$ and $\alpha$, with $$\gamma=\comp(\set(\gamma \vee \eta)\cap \set(\alpha)),$$ $$\eta=\comp(\set(\gamma \vee \eta)\cap \set(\alpha)^c).$$ \end{note}
\subsection{The relationship between type 2 quasisymmetric power sums and fundamental quasisymmetric functions} The expansion of $\Phi_\alpha$ into fundamental quasisymmetric functions is somewhat more straightforward. Let $m_i$ denote the number of parts of $\alpha \vDash n$ that have size $i$.
\begin{thm}\label{thm:phitoF} Let $\alpha\vDash n$. Then
\[\Phi_\alpha=\binom{m_1+\cdots+m_n}{m_1,\ldots,m_n}^{-1}\sum_{\gamma\vDash n} \left(\sum_{\beta\succcurlyeq(\gamma\wedge \alpha)} (-1)^{\ell(\gamma)-\ell(\beta)}|\OSP(\alpha,\beta)|\right) F_{\gamma}.\] \end{thm}
\begin{proof} Let $\alpha\vDash n$. Combining the quasisymmetric monomial expansion of $\Phi_\alpha$ and the fundamental expansion of $M_\beta$ gives \begin{align}
\Phi_\alpha&=\binom{m_1+\cdots+m_n}{m_1,\ldots,m_n}^{-1}\sum_{\beta\succcurlyeq\alpha} |\OSP(\alpha,\beta)|M_\beta\\
&=\binom{m_1+\cdots+m_n}{m_1,\ldots,m_n}^{-1}\sum_{\beta\succcurlyeq\alpha} |\OSP(\alpha,\beta)|\sum_{\beta\succcurlyeq\gamma}(-1)^{\ell(\gamma)-\ell(\beta)}F_\gamma\\
&=\binom{m_1+\cdots+m_n}{m_1,\ldots,m_n}^{-1}\sum_{\gamma\vDash n}F_\gamma \left(\sum_{\beta\succcurlyeq(\gamma\wedge \alpha)} (-1)^{\ell(\gamma)-\ell(\beta)}|\OSP(\alpha,\beta)|\right).\qedhere \end{align} \end{proof} \subsection{The antipode map on quasisymmetric power sums}\label{sec:omega} \begin{defn}[transpose, $\alpha^r,\alpha^t$] Let $\alpha^r$ give the reverse of $\alpha$. Then we call $\alpha^t=(\alpha^c)^r$ the transpose of the composition $\alpha$. \end{defn}
The antipode map $S: \QS\rightarrow\QS$ on the Hopf algebra of quasisymmetric functions is commonly defined by $S(F_\alpha)=(-1)^{|\alpha|}F_{\alpha^t}$. On the noncommutative symmetric functions, it is commonly defined as the anti-automorphism such that $S(\boldsymbol{e}_n)=(-1)^n\boldsymbol{h}_n$. It can equivalently be defined by $S(\boldsymbol{r}_\alpha)=(-1)^{|\alpha|}\boldsymbol{r}_{\alpha^t}$. Thus, for $f$ in $\QS$ and $g\in \NS$, $$\<f,g\rangle=\<S(f),S(g)\rangle.$$ It can be easier to compute $S$ on a multiplicative basis, such as $\boldsymbol{\Psi}$ or $\boldsymbol{\Phi}$, and then use duality to establish the result on the quasisymmetric side.
We start with the expansion of the $\boldsymbol{\Psi}$ and $\boldsymbol{\Phi}$ in terms of the $\boldsymbol{e}$ basis in \cite[\S 4.5]{GKLLRT94}: \begin{equation}\label{eq:powerine} \boldsymbol{\Psi}_n = \sum_{\alpha\vDash n} (-1)^{n-\ell(\alpha)}\alpha_1 \boldsymbol{e}_\alpha. \end{equation} It follows from \eqref{eq:powerinh} and \eqref{eq:powerine} that $S(\boldsymbol{\Psi}_n)=-\boldsymbol{\Psi}_n$. Then \begin{align*} S(\boldsymbol{\Psi}_{\alpha})&=S(\boldsymbol{\Psi}_{\alpha_1}\boldsymbol{\Psi}_{\alpha_2}\cdots \boldsymbol{\Psi}_{\alpha_k})\\ &=S(\boldsymbol{\Psi}_{\alpha_k})S(\boldsymbol{\Psi}_{\alpha_{k-1}})\cdots S(\boldsymbol{\Psi}_{\alpha_1})\\ &=\left(-\boldsymbol{\Psi}_{\alpha_k}\right)\cdots \left(-\boldsymbol{\Psi}_{\alpha_1}\right)\\ &=(-1)^{\ell(\alpha)}\boldsymbol{\Psi}_{\alpha^r}. \end{align*}
This result also follows from the fact that $\boldsymbol{\Psi}_n$ is a primitive element of $\NS$.
\begin{thm}\label{thm:omega} For $\alpha \vDash n$, $S(\Psi_\alpha) = (-1)^{\ell(\alpha)}\Psi_{\alpha^r}.$ \end{thm} \begin{proof} Let $\alpha \vDash n$. Then \[z_\alpha \delta_{\alpha, \beta}=\langle\Psi_\alpha, \boldsymbol{\Psi}_\beta\rangle=\<S(\Psi_\alpha), S(\boldsymbol{\Psi}_\beta)\rangle=\<S(\Psi_\alpha),(-1)^{\ell(\beta)}\boldsymbol{\Psi}_{\beta^r} \rangle=\langle(-1)^{\ell(\beta)}S(\Psi_\alpha),\boldsymbol{\Psi}_{\beta^r} \rangle,\] so $S(\Psi_\alpha)=(-1)^{\ell(\alpha)}\Psi_{\alpha^r}$. \end{proof} Similarly, we have that $$S(\Phi_\alpha)=(-1)^{\ell(\alpha)}\Phi_{\alpha^r}.$$
There are considerable notational differences between various authors on the names of the well known automorphisms of $\QS$ and $\NS$, in part because there are two natural maps which descend to the well known automorphism $\omega$ in the symmetric functions. Following both \cite{GKLLRT94} and \cite{LMvW}, we use $\omega(\boldsymbol{e}_n)=\boldsymbol{h}_n$ (where $\omega$ is an anti-automorphism) and $\omega(F_\alpha)=F_{\alpha^t}$ to define (one choice of) a natural analogue of the symmetric function case. We can see, from the definition of $\omega$ and $S$ on the elementary symmetric functions, that the two maps vary by only a sign on homogeneous polynomials of a given degree. In particular,
$$\omega(\Psi_\alpha)=(-1)^{|\alpha|-\ell(\alpha)}\Psi_{\alpha^r},$$
$$\omega(\Phi_\alpha)=(-1)^{|\alpha|-\ell(\alpha)}\Phi_{\alpha^r}.$$ \section{Products of quasisymmetric power sums}\label{sec:products}
In contrast to the symmetric power sums, the quasisymmetric power sums are not multiplicative bases. Indeed, $\Psi_{(n)}=p_{(n)}=\Phi_{(n)}$, so if either basis were multiplicative we would have $\Psi_\alpha = p_{\tilde{\alpha}}$ (respectively $\Phi_\alpha = p_{\tilde{\alpha}}$) for every composition $\alpha$, which is false since the quasisymmetric power sums are not in general symmetric. Thus the product of two elements of either power sum basis is more complex in the quasisymmetric setting than in the symmetric setting.
\subsection{Products of type 1 quasisymmetric power sums}{\label{sec:product}}
We can exploit the duality of comultiplication in $\NS$ and multiplication in $\QS$. \begin{defn}[shuffle, $\shuffle$] Let $[a_1,\cdots ,a_n]\shuffle[b_1,\cdots,b_m]$ give the set of shuffles of $[a_1,\cdots ,a_n]$ and $[b_1,\cdots,b_m]$; that is, the set of all length $m+n$ words without repetition on $ \{a_1,\cdots ,a_n\}\cup \{b_1,\cdots,b_m\}$ such that for all $i$, $a_i$ occurs before $a_{i+1}$ and $b_i$ occurs before $b_{i+1}$. \end{defn}
Comultiplication for the noncommutative symmetric power sums (type 1) is given in~\cite{GKLLRT94} by \[\Delta ( \boldsymbol{\Psi}_k) = 1 \otimes \boldsymbol{\Psi}_k + \boldsymbol{\Psi}_k \otimes 1.\] Thus \[\Delta(\boldsymbol{\Psi}_\alpha) = \prod_i \Delta (\boldsymbol{\Psi}_{\alpha_i}) = \prod_i (1\otimes \boldsymbol{\Psi}_{\alpha_i}+\boldsymbol{\Psi}_{\alpha_i}\otimes 1) = \sum_{\substack{\gamma,\beta\\\alpha \in \gamma \shuffle \beta}} \boldsymbol{\Psi}_{\gamma}\otimes \boldsymbol{\Psi}_\beta.\] \begin{notn}[$C(\alpha,\beta)$] Let $a_j$ denote the number of parts of size $j$ in $\alpha$ and $b_j$ denote the number of parts of size $j$ in $\beta$. Define $C(\alpha,\beta)=\prod_j \binom{a_j+b_j}{a_j}.$ A straightforward calculation shows that $C(\alpha,\beta) = z_{\alpha\cdot\beta}/(z_\alpha z_\beta)$. \end{notn}
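The claimed identity $C(\alpha,\beta)=z_{\alpha\cdot\beta}/(z_\alpha z_\beta)$ can be sanity-checked numerically; a minimal sketch (helper names are ours), using $z_\alpha = \prod_i i^{m_i}\, m_i!$ where $m_i$ is the number of parts of $\alpha$ of size $i$:

```python
from math import factorial, comb
from collections import Counter

def z(comp):
    # z_alpha depends only on the multiset of parts: prod_i i^{m_i} * m_i!
    out = 1
    for part, mult in Counter(comp).items():
        out *= part ** mult * factorial(mult)
    return out

def C(alpha, beta):
    # C(alpha, beta) = prod_j binom(a_j + b_j, a_j)
    a, b = Counter(alpha), Counter(beta)
    out = 1
    for j in set(a) | set(b):
        out *= comb(a[j] + b[j], a[j])
    return out

# check C(alpha, beta) * z_alpha * z_beta = z_{alpha . beta} on a few pairs
pairs = [((2, 1, 2), (1, 1, 3)), ((1,), (1,)), ((3, 1), (2, 2))]
for alpha, beta in pairs:
    assert C(alpha, beta) * z(alpha) * z(beta) == z(alpha + beta)
```

The check works pair by part size: concatenating $\alpha$ and $\beta$ multiplies the $m_j!$ factors into $(a_j+b_j)!$, and $\binom{a_j+b_j}{a_j}$ is exactly the ratio $(a_j+b_j)!/(a_j!\,b_j!)$.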
\begin{thm}\label{thm:productpower} Let $\alpha$ and $\beta$ be compositions. Then \[\Psi_\alpha \Psi_\beta = \frac{1}{C(\alpha,\beta)}\sum_{\gamma \in \alpha\shuffle\beta} \Psi_\gamma.\] \end{thm}
\begin{proof} Let $\alpha$ and $\beta$ be compositions. Then \begin{align} \Psi_\alpha \Psi_\beta & = (z_\alpha \psi_\alpha)(z_\beta \psi_\beta)\nonumber\\ &= (z_\alpha z_\beta) (\psi_\alpha\psi_\beta). \label{eq:prod} \end{align} Since the $\psi$ are dual to the $\boldsymbol{\Psi}$, we have that $\displaystyle{\psi_\alpha\psi_\beta=\sum_{\gamma \in \alpha\shuffle\beta}\psi_\gamma}$. Note that for any rearrangement $\delta$ of $\gamma$, $z_\delta=z_\gamma$. Thus, we can rewrite \eqref{eq:prod} as \[\Psi_\alpha\Psi_\beta = \frac{z_\alpha z_\beta}{z_{\alpha\cdot\beta}}\sum_{\gamma \in \alpha \shuffle \beta}\Psi_\gamma. \] \end{proof}
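\begin{example}
Two small instances of Theorem~\ref{thm:productpower} illustrate the role of $C(\alpha,\beta)$. For $\alpha=(2)$ and $\beta=(1)$, no part size is shared, so $C(\alpha,\beta)=1$ and
\[\Psi_{(2)}\Psi_{(1)} = \Psi_{(2,1)}+\Psi_{(1,2)},\]
in agreement with Theorem~\ref{thm:refine}, since $p_{(2,1)}=\Psi_{(2,1)}+\Psi_{(1,2)}$. For $\alpha=\beta=(1)$, we have $C(\alpha,\beta)=\binom{2}{1}=2$ and both shuffles of $(1)$ with $(1)$ equal $(1,1)$, so
\[\Psi_{(1)}\Psi_{(1)} = \tfrac{1}{2}\left(\Psi_{(1,1)}+\Psi_{(1,1)}\right)=\Psi_{(1,1)}=p_{(1,1)}.\]
\end{example}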
In addition to this proof based on duality, we note that it is possible to prove this product rule directly using the monomial expansion of the quasisymmetric power sums. We do this by showing that the coefficients in the quasisymmetric monomial function expansions of both sides of the product formula in Theorem~\ref{thm:productpower} are the same. \begin{defn}[overlapping shuffle, $\cshuffle$] $\delta\cshuffle\eta$ is the set of {\em overlapping shuffles} of $\delta$ and $\eta$, that is, shuffles in which a part of $\delta$ and a part of $\eta$ may be added together to form a single part.
\end{defn} \begin{lemma}\label{lem:productpi} Let $\alpha \vDash m$, $\beta \vDash n$, and fix $\xi$ a coarsening of a shuffle of $\alpha$ and $\beta$. Then \[\binom{m+n}{m}\sum_{\substack{\delta\succcurlyeq\alpha,\eta\succcurlyeq \beta\\\text{s.t.\ }\xi\in\delta\cshuffle \eta}}\frac{m!}{\pi(\alpha,\delta)}\frac{n!}{\pi(\beta,\eta)} = \sum_{\substack{\gamma \in \alpha \shuffle \beta\\\gamma \preccurlyeq \xi}}\frac{(m+n)!}{\pi(\gamma,\xi)}.\] \end{lemma} \begin{proof} Let $\alpha \vDash m$, $\beta \vDash n$, and fix $\xi$, a coarsening of a shuffle of $\alpha$ and $\beta$. Then $\xi=\xi_1,\ldots,\xi_k$ where each $\xi_i$ is a (known) sum of parts of $\alpha$ or $\beta$, or both.
Let $Y_m= \{D\subseteq [m+n]: |D|=m\}$ and $B_{\xi,\alpha,\beta}=\{\gamma \in \alpha\shuffle\beta:\gamma\preccurlyeq\xi\}$. We establish a bijection \[f: Y_m\times \bigcup_{\substack{\delta\succcurlyeq\alpha,\eta\succcurlyeq\beta\\\text{s.t.\ }\xi\in\delta\cshuffle\eta}}\left(\mathrm{Cons}_{\alpha\preccurlyeq\delta}\times\mathrm{Cons}_{\beta\preccurlyeq\eta}\right)\rightarrow \bigcup_{\gamma \in B_{\xi,\alpha,\beta}} (\mathrm{Cons}_{\gamma\preccurlyeq\xi}\times\{\gamma\}).\]
Let $(D, \sigma, \tau) \in Y_m\times \bigcup_{\substack{\delta\succcurlyeq\alpha,\eta\succcurlyeq\beta\\\text{s.t.\ }\xi\in\delta\cshuffle\eta}}\left(\mathrm{Cons}_{\alpha\preccurlyeq\delta}\times\mathrm{Cons}_{\beta\preccurlyeq\eta}\right)$. Then $D=\{i_1<i_2<\ldots<i_m\}$. To construct $(\word,\gamma)=f((D,\sigma,\tau))$: \begin{enumerate} \item Create a word $\widetilde{\sigma}$ that is consistent with $\alpha\preccurlyeq\delta$ by replacing each $j$ in $\sigma$ with $i_j$ from $D$. Similarly, use $[m+n]\setminus D$ to create $\widetilde{\tau}$ consistent with $\beta\preccurlyeq\eta$. \item Arrange the parts of $\widetilde{\sigma}$ and $\widetilde{\tau}$ in a single permutation by placing the parts corresponding to $\alpha_i$ (resp. $\beta_i$) in the location they appear in $\xi$. Finally, for all parts within a single part of $\xi$, arrange the sub-permutations so that the final elements of each sub-permutation creates an increasing sequence from left to right. Note that this will keep parts of $\alpha$ in order since $\widetilde{\sigma}$ is consistent with $\alpha\preccurlyeq \delta$ and parts of $\alpha$ occurring in the same part of $\xi$ also occurred in the same part of $\delta$. (An analogous statement is true for parts of $\beta$.) \item The resulting permutation is $\word=f((D,\sigma,\tau))$ and is an element of $\mathrm{Cons}_{\gamma\preccurlyeq\xi}$ where $\gamma$ is determined by the order of parts in $\word$ corresponding to $\alpha$ and $\beta$. \end{enumerate}
Conversely, given $(\word',\gamma) \in \bigcup_{\gamma \in B_{\xi,\alpha,\beta}}(\mathrm{Cons}_{\gamma\preccurlyeq\xi}\times\{\gamma\})$, construct a triple $(D',\sigma',\tau')$ by: \begin{enumerate} \item In $\word'$, the $i$th block corresponds to the $i$th part of $\gamma$. Place the labels in the $i$th block of $\word'$ in $D'$ if the $i$th part of $\gamma$ is from $\alpha$. \item Let $\widetilde{\sigma}'$ be the subword of $\word'$ consisting of blocks corresponding to parts of $\alpha$, retaining the double-lines to show which parts of $\alpha$ were in the same part of $\xi$ to indicate $\delta\succcurlyeq \alpha$. Rewrite as a permutation in $S_m$ by standardizing in the usual way: replace the $i$th smallest entry with $i$ for $1\leq i \leq m$. The resulting permutation $\sigma'$ is consistent with $\alpha\preccurlyeq\delta$. \item Similarly construct $\tau'$ from the subword $\widetilde{\tau}'$ of $\word'$ consisting of parts corresponding to parts of $\beta$. \qedhere \end{enumerate}\end{proof}
\begin{example} Let $\alpha = (2,1,1,2)$, $\beta = (\underline{2},\underline{1})$ and $\xi=(2+1+\underline{2}, \underline{1}, 1+2)$. Then $(\delta,\eta)=((2+1,1+2), (\underline{2},\underline{1}))$.
Choose $D=\{1,2,5,6,7,9\}$, $\sigma = |\!|34|6|\!|2|15|\!| $, and $\tau=|\!|13|\!|2|\!|$. Then $\widetilde{\sigma} = |\!|5 6|9|\!|2|17|\!|$ and $\widetilde{\tau}=|\!|3 8|\!|4|\!|$. Then $\word = |\!|56|38|9|\!|4|\!|2|17|\!|$ and the corresponding shuffle $\gamma=(2,\underline{2},1,\underline{1},1,2)$.
Now, consider $\gamma'=(\underline{2},2,1,\underline{1},1,2)$ and $\word'=|\!|24|16|8|\!|9|\!|5|37|\!|.$ Then $\widetilde{\sigma}'=|\!|16|8|\!|5|37|\!|$ and $\widetilde{\tau}'=|\!|24|\!|9|\!|$. Then $D'=\{1,3,5,6,7,8\}$, $\sigma'=|\!|14|6|\!|3|25|\!|$, and $\tau'=|\!|12|\!|3|\!|$. \end{example}
We now use Lemma~\ref{lem:productpi} to offer a more combinatorial proof of Theorem~\ref{thm:productpower}.
\begin{proof}(of Theorem~\ref{thm:productpower}) Let $\alpha \vDash m$ and $\beta \vDash n$. Then \begin{align} \Psi_\alpha \Psi_\beta & = \left(\sum_{\delta\succcurlyeq \alpha}\frac{z_\alpha}{\pi(\alpha,\delta)}M_\delta\right)\left(\sum_{\eta\succcurlyeq \beta} \frac{z_\beta}{\pi(\beta,\eta)}M_\eta\right)\nonumber\\ &=z_\alpha z_\beta \sum_{\delta\succcurlyeq\alpha}\sum_{\eta\succcurlyeq \beta}\frac{1}{\pi(\alpha,\delta)\pi(\beta,\eta)}M_\delta M_\eta\nonumber\\ &=\frac{z_{\alpha\cdot\beta}}{C(\alpha,\beta)}\sum_{\delta\succcurlyeq\alpha}\sum_{\eta\succcurlyeq \beta}\frac{1}{\pi(\alpha,\delta)\pi(\beta,\eta)}\sum_{\zeta \in \delta \cshuffle\eta}M_\zeta\nonumber\\ &= \frac{z_{\alpha\cdot\beta}}{C(\alpha,\beta)}\sum_{\zeta \vDash m+n}M_\zeta \left(\sum_{\substack{(\delta,\eta): \delta \succcurlyeq \alpha, \eta \succcurlyeq \beta\\\zeta \in \delta\cshuffle\eta}}\frac{1}{\pi(\alpha,\delta)\pi(\beta,\eta)}\right).\label{eq:monex} \end{align}
By Lemma~\ref{lem:productpi} we can rewrite \eqref{eq:monex} as \begin{align*} \Psi_\alpha\Psi_\beta &=\frac{z_{\alpha\cdot\beta}}{C(\alpha,\beta)}\sum_{\zeta \vDash m+n} M_\zeta \sum_{\substack{\gamma\in\alpha\shuffle\beta\\\gamma\preccurlyeq\zeta}}\frac{1}{\pi(\gamma,\zeta)}\\ &=\frac{1}{C(\alpha,\beta)}\sum_{\gamma \in \alpha \shuffle\beta}\sum_{\zeta \succcurlyeq \gamma} \frac{z_{\gamma}}{\pi(\gamma,\zeta)}M_\zeta\\ &=\frac{1}{C(\alpha,\beta)}\sum_{\gamma \in \alpha\shuffle\beta}\Psi_\gamma.\qedhere \end{align*}
\end{proof}
Now that we have a product formula for the quasisymmetric power sums, a more straightforward proof can be given for Theorem~\ref{thm:refine}. \begin{proof}(of Theorem~\ref{thm:refine}) We proceed by induction on $\ell(\lambda)$, the length of $\lambda$. If $\ell(\lambda)=1$, then $\lambda=(n)$ and $p_{(n)}=m_{(n)}=M_{(n)}=\Psi_{(n)}.$ (This is because $\psi_{(n)}=\frac{1}{\pi((n),(n))}M_{(n)}=\frac{1}{n}M_{(n)}$ and $\Psi_{(n)}=z_{(n)}\psi_{(n)}=n\psi_{(n)}$.)
Suppose the theorem holds for partitions of length $k$ and let $\mu$ be a partition with $\ell(\mu)=k+1$. Suppose $\mu_{k+1}=j$ and let $\lambda=(\mu_1, \mu_2, \ldots, \mu_k)$. Let $m_j$ be the number of parts of size $j$ in $\mu$. Then, using the induction hypothesis and Theorem~\ref{thm:productpower}, we have \begin{equation}\label{ind} p_\mu=p_\lambda p_{(j)}= \left( \sum_{\substack{\alpha \vDash |\lambda|\\ \tilde{\alpha}=\lambda}} \Psi_\alpha\right)\Psi_{(j)}= \sum_{\substack{\alpha \vDash|\lambda|\\ \tilde{\alpha}=\lambda}} \left( \Psi_\alpha\Psi_{(j)}\right)=\frac{1}{m_j} \sum_{\substack{\alpha \vDash |\lambda|\\ \tilde{\alpha}=\lambda}} \sum_{\gamma \in \alpha\shuffle (j)} \Psi_\gamma.\end{equation} Here, we used the fact that, if $\tilde{\alpha}=\lambda$, then $\displaystyle C(\alpha, (j))=\binom{m_j}{m_j-1}=m_j$.
Suppose $\gamma\in \alpha \shuffle (j)$ for some $\alpha \vDash |\lambda|$ such that $\tilde{\alpha}=\lambda$. Then $\gamma \vDash |\mu|$ and $\tilde{\gamma}=\mu$. Moreover, every composition $\theta\vDash |\mu|$ with $\tilde{\theta}=\mu$ belongs to $ \alpha \shuffle (j)$ for some $\alpha\vDash |\lambda|$ with $\tilde{\alpha}=\lambda$.
We write $\gamma\vDash |\mu|$ with $\tilde{\gamma}=\mu$ as $\gamma^{(1)},J^{(1)},\gamma^{(2)},J^{(2)},\ldots, \gamma^{(q)},J^{(q)}$ where each $\gamma^{(i)}$ has no part equal to $j$ and each $J^{(i)}$ consists of exactly $r_i$ parts equal to $j$. We refer to $J^{(i)}$ as the $i$th block of parts equal to $j$. Here $r_i>0$ for $i=1, 2, \ldots, q-1$ and $r_q\geq 0$. Moreover, $r_1+r_2+\cdots +r_q=m_j$. Denote by $\alpha(\gamma, i)$ the composition obtained from $\gamma$ by removing the first $j$ in $J^{(i)}$ (if it exists). Then the multiplicity of $\gamma$ in $\alpha(\gamma, i)\shuffle (j)$ equals $r_i$ (since $(j)$ can be shuffled into $r_i$ different positions in the $i$th block of parts equal to $j$ of $\alpha(\gamma, i)$ to obtain $\gamma$). Then the multiplicity of $\gamma$ in the multiset $\displaystyle\bigcup_{\substack{\alpha \vDash |\lambda|\\ \tilde{\alpha}=\lambda}} \alpha \shuffle (j)$ equals $m_j$ and
\[p_\mu=\sum_{\substack{\beta \vDash |\mu|\\ \tilde{\beta}=\mu}} \Psi_\beta.\qedhere\]
\end{proof}
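The counting argument at the end of this proof, namely that each $\gamma$ with $\tilde{\gamma}=\mu$ arises with total multiplicity exactly $m_j$, can be checked directly. A small Python sketch (names are ours) for $\mu=(3,2,2,1)$ and $j=2$:

```python
from itertools import permutations
from collections import Counter

def rearrangements(parts):
    # all distinct compositions with the given multiset of parts
    return set(permutations(parts))

def shuffles_with_single(alpha, j):
    # alpha shuffle (j): insert j into each of len(alpha)+1 gaps, with multiplicity
    return [alpha[:i] + (j,) + alpha[i:] for i in range(len(alpha) + 1)]

mu = (3, 2, 2, 1)          # the partition mu
j = 2                      # remove one part of size j to get lambda
lam = (3, 2, 1)
m_j = mu.count(j)

total = Counter()
for alpha in rearrangements(lam):
    total.update(shuffles_with_single(alpha, j))

# every rearrangement gamma of mu appears with multiplicity exactly m_j
assert set(total) == rearrangements(mu)
assert all(total[gamma] == m_j for gamma in total)
```

Each occurrence of $\gamma$ corresponds to a position of $\gamma$ holding a part equal to $j$ (whose removal determines $\alpha$ and the insertion gap uniquely), and there are $m_j$ such positions.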
\subsection{Products of type 2 quasisymmetric power sums} As with the type 1 quasisymmetric power sums, since $\Delta\boldsymbol{\Phi}_k=1\otimes\boldsymbol{\Phi}_k+\boldsymbol{\Phi}_k\otimes 1$, the product rule is \begin{equation}\label{eqn:productpower2} \Phi_\alpha\Phi_\beta = \frac{1}{C(\alpha,\beta)}\sum_{\gamma\in \alpha\shuffle\beta}\Phi_\gamma.\end{equation}
Again, we can give a combinatorial proof of the product rule. The proof of \eqref{eqn:productpower2} is almost identical to that of Theorem \ref{thm:productpower}, so we omit the details. The significant difference is the analog of Lemma \ref{lem:productpi}, which we give here. \begin{lemma}Let $\alpha \vDash m$, $\beta \vDash n$, and fix $\xi$ a coarsening of a shuffle of $\alpha$ and $\beta$. Then
\[\sum_{\substack{\delta\succcurlyeq\alpha,\eta\succcurlyeq \beta\\\text{s.t.\ }\xi\in\delta\cshuffle \eta}}\frac{1}{\spab(\alpha,\delta)}\frac{1}{\spab(\beta,\eta)} = \sum_{\substack{\gamma \in \alpha \shuffle \beta\\\gamma \preccurlyeq \xi}}\frac{1}{\spab(\gamma,\xi)}.\] \end{lemma} \begin{proof} First, note that $\prod_i\alpha_i$ and $\prod_i\beta_i$ occur in the denominator of every term in the left hand side of our desired equation, but $$\prod_i\alpha_i\prod_i\beta_i=\prod_i \gamma_i \text{ and } \ell(\alpha)+\ell(\beta)=\ell(\gamma)$$ for any $\gamma$ occurring in the right hand sum. Then multiplying by $\ell(\gamma)!\prod_i\alpha_i\prod_i\beta_i $ on the left and right, we need to show
\begin{align*}{\ell(\alpha)+\ell(\beta) \choose \ell(\alpha)}\sum_{\substack{\delta\succcurlyeq\alpha,\eta\succcurlyeq \beta\\\text{s.t.\ }\xi\in\delta\cshuffle \eta}}\frac{\ell(\alpha)!}{\prod_j\ell(\alpha^{(j)})!}\frac{\ell(\beta)!}{\prod_j\ell(\beta^{(j)})!} = \sum_{\substack{\gamma \in \alpha \shuffle \beta\\\gamma \preccurlyeq \xi}}\frac{\ell(\gamma)!}{\prod_j\ell(\gamma^{(j)})!}.
\end{align*}
Equivalently, we need to show
\begin{align}\label{eq:power2eq15}{\ell(\alpha)+\ell(\beta) \choose \ell(\alpha)}\sum_{\substack{\delta\succcurlyeq\alpha,\eta\succcurlyeq \beta\\\text{s.t.\ }\xi\in\delta\cshuffle \eta}}|\OSP(\alpha,\delta)||\OSP(\beta,\eta)| = \sum_{\substack{\gamma \in \alpha \shuffle \beta\\\gamma \preccurlyeq \xi}}|\OSP(\gamma,\xi)|.
\end{align}
For a given choice of $\delta$ and $\eta$ satisfying the conditions on the left, select $S$ and $T$, ordered set partitions in $\OSP(\alpha,\delta)$ and $\OSP(\beta,\eta)$ respectively. Pick a subset $U$ of size $\ell(\alpha)$ from the first $\ell(\alpha)+\ell(\beta)$ positive integers and re-number the elements in $S$ and $T$ in order, such that the elements of $S$ are re-numbered with the elements of $U$ and the elements of $T$ are re-numbered with elements of $U^c$ to form $\tilde{S}$ and $\tilde{T}$ respectively. Going forward, consider each of the subsets as lists, with the elements listed in increasing order. Use the subsets to assign an additional value to each part of $\alpha$ or $\beta$, working in order. Say that $f(\alpha,i)=m$ if $\alpha_i$ occurs as the $k$th element in $\alpha^{(j)}$ and $m$ is the $k$th element in $\tilde{S}_j$. Similarly say that $f(\beta,i)=m$ if $\beta_i$ occurs as the $k$th element in $\beta^{(j)}$ and $m$ is the $k$th element in $\tilde{T}_j$. Note that each choice of $\delta$ and $\eta$ gives a refinement of $\xi$, with each $\xi_i$ a sum of parts of $\alpha$ and $\beta$. Then sort the parts of $\alpha$ and $\beta$ to create $\gamma$, such that parts of $\alpha$ occur in order and parts of $\beta$ occur in order, and the following additional rules are satisfied: Let $\alpha_i$ be one of the parts that forms $\delta_j$ which in turn is used to form $\xi_k$ and $\beta_l$ be one of the parts that forms $\eta_m$ which in turn is used to form $\xi_n$. Then
\begin{itemize}
\item if $k>n$ (i.e.\ $\beta_l$ is an earlier subpart of $\xi$ than $\alpha_i$ is), $\alpha_i$ occurs after $\beta_l$,
\item if $k<n$ (i.e.\ $\alpha_i$ is an earlier subpart of $\xi$ than $\beta_l$ is), $\alpha_i$ occurs before $\beta_l$,
\item if $k=n$ (i.e. $\alpha_i$ and $\beta_l$ eventually make up the same part of $\xi$), $\alpha_i$ is left of $\beta_l$ iff $f(\alpha,i)<f(\beta,l)$.
\end{itemize}
Finally, create an element of $\OSP(\gamma,\xi)$ by placing $p$ in the $q$th part if $f(\alpha,i)=p$ (or $f(\beta,l)=p$) and $\alpha_i$ ($\beta_l$ respectively) is one of the parts used to form $\gamma^{(q)}$. Note that this map is bijective; since the parts of $\alpha$ and $\beta$ which occur in the same part of $\gamma$ are sorted by the value they are assigned in the set partition, we can recover from the set partition which integers were assigned to each part (and what $U$ was, by looking at which numbers are assigned to parts corresponding to $\alpha$).
\end{proof}
\begin{example}Let $\alpha=(1,2,1)$ and $\beta=(\underline{1},\underline{1},\underline{2})$. Let $\delta=(1+2,1)$, $\eta=(\underline{1},\underline{1}+\underline{2})$, and $\xi=(1+2+\underline{1},\underline{1}+\underline{2},1)$. ($\xi$ here is fixed before $\delta$ and $\eta$, but how we write it as an overlapping shuffle depends on their choice.) Finally let $S=(\{1,3\},\{2\})$, $T=(\{3\},\{1,2\})$, and $U=\{1,4,6\}$. Then $\tilde{S}=(\{1,6\},\{4\})$ and $\tilde{T}=(\{5\},\{2,3\})$. Next, we reorder the second and third subparts of $\xi$ to get $\gamma=(1,\underline{1},2,\underline{1},\underline{2},1)$ and the final ordered set partition is $(\{1,5,6\},\{2,3\},\{4\})$.
\end{example} \subsection{The shuffle algebra}
Let $V$ be a vector space with basis $\{v_a \}_{a \in \mathfrak{U}}$ where $\mathfrak{U}$ is a totally ordered set. For our purposes, $\mathfrak{U}$ will be the positive integers. The \emph{shuffle algebra} $Sh(V)$ is the Hopf algebra of the set of all words with letters in $\mathfrak{U}$, where the product is given by the shuffle product $v \shuffle w$ defined above. The shuffle algebra and $QSym$ are isomorphic as graded Hopf algebras~\cite{GriRei14}. We now describe a method for generating $QSym$ through products of the type 1 quasisymmetric power sums indexed by objects called \emph{Lyndon words}; to do this we first need several definitions.
A \emph{proper suffix} of a word $w$ is a word $v$ such that $w=uv$ for some nonempty word $u$. The following total ordering on words with letters in $\mathfrak{U}$ is used to define Lyndon words. We say that $u \le_L v$ if either
\begin{itemize} \item $u$ is a prefix of $v$, \; \; \; or \item $u_j<v_j$, where $j$ is the smallest positive integer such that $u_j \not= v_j$. \end{itemize}
Otherwise $v \le_L u$. If $w=w_1 w_2 \cdots w_k$ is a nonempty word with $w_i \in \mathfrak{U}$ for all $i$, we say that $w$ is a \emph{Lyndon word} if every nonempty proper suffix $v$ of $w$ satisfies $w <_L v$. Let $\mathcal{L}$ be the set of all Lyndon words. Radford's Theorem~\cite{Rad79} (Theorem 3.1.1(e)) states that if $\{ b_a \}_{a \in \mathfrak{U}}$ is a basis for a vector space $V$, then $\{b_w\}_{w \in \mathcal{L}}$ is an algebraically independent generating set for the shuffle algebra $Sh(V)$. To construct a generating set for $Sh(V)$, first define the following operation (which we will call an \emph{index-shuffle}) on basis elements $b_{\alpha}$ and $b_{\beta}$: $$b_{\alpha} \underline{\shuffle} b_{\beta} = \sum_{\gamma \in \alpha \shuffle \beta} b_{\gamma}.$$ Recall that $$\Psi_\alpha \Psi_\beta = \frac{1}{C(\alpha,\beta)}\sum_{\gamma \in \alpha\shuffle\beta} \Psi_\gamma,$$ where $C(\alpha,\beta) = z_{\alpha\cdot\beta}/(z_\alpha z_\beta)$. Then $\Psi_\alpha \underline{\shuffle} \Psi_\beta = C(\alpha, \beta) \Psi_{\alpha} \Psi_{\beta}$. Since Radford's theorem implies that every basis element $b_{\alpha}$ can be written as a linear combination of index-shuffles of basis elements indexed by Lyndon words, every basis element $\Psi_{\alpha}$ can be written as a linear combination of products of type 1 quasisymmetric power sums indexed by Lyndon words, and we have the following result.
\begin{thm}
The set $\{ \Psi_C \mid C \in \mathcal{L} \}$ freely generates $QSym$ as a commutative $\mathbb{Q}$-algebra. \end{thm}
\begin{example} Since $231$ can be written as $23 \shuffle 1 - 2 \shuffle 13 + 132$, $$\Psi_{231} = C(23,1)\Psi_{23} \Psi_1 - C(2,13) \Psi_2 \Psi_{13} + \Psi_{132} = \Psi_{23} \Psi_1 - \Psi_2 \Psi_{13} + \Psi_{132}. $$ \end{example}
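As a check on the Lyndon condition in the example above (a routine verification): $1$, $23$, $13$, and $132$ are all Lyndon words — e.g.\ $132$ satisfies $132<_L 32$ and $132 <_L 2$ — while $231$ is not, since its proper suffix $1$ satisfies $1\le_L 231$.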
\section{Plethysm on the quasisymmetric power sums}\label{sec:plethysm} The symmetric power sum basis $\{p_\lambda\}_{\lambda\vdash n}$ plays a particularly important role in the language of $\Lambda$-rings. It is natural to hope that one of the quasisymmetric power sums might play the same role here, and it was this motivation that initially piqued the authors' interest in the quasisymmetric power sums. This seems not to be the case, so one might take this section as a warning to similarly minded individuals that this does not appear to be a productive path for studying quasisymmetric plethysm. To explain the differences between this and the symmetric function case, we first remind the reader of the symmetric function case. \subsection{Plethysm and symmetric power sums} Recall that plethysm is a natural (indeed even universal in some well defined functorial sense) $\Lambda$-ring structure on the symmetric functions. Following the language of \cite{knutson2006lambda}, recall that a pre-$\Lambda$-ring is a commutative ring $R$ with identity and a set of operations $\lambda^i:R\rightarrow R$ for $i\in \{0,1,2,\ldots\}$ such that for all $r_1,r_2\in R$: \begin{itemize} \item $\lambda^0(r_1)=1$ \item $\lambda^1(r_1)=r_1$ \item $\lambda^n(r_1+r_2)=\sum_{i=0}^n \lambda^i(r_1)\lambda^{n-i}(r_2)$ \end{itemize} To define a $\Lambda$-ring, write $e^X_i$ and $e^Y_i$ for the elementary symmetric functions $e_i$ in the $X$ or $Y$ variables, and define the universal polynomials $P_n$ and $P_{m,n}$ by $$\sum_{n\geq 0}P_n(e^X_1,\cdots,e^X_n;e^Y_1,\cdots, e^Y_n)t^n=\prod_{i,j}(1-x_iy_jt),$$ and $$\sum_{n\geq 0}P_{m,n}(e^X_1,\cdots,e^X_{nm})t^n=\prod_{i_1<\cdots<i_m }(1-x_{i_1}x_{i_2}\cdots x_{i_m}t).$$ Then a pre-$\Lambda$-ring is by definition a $\Lambda$-ring if \begin{itemize} \item for all $i>1$, $\lambda^i(1)=0$; \item for all $r_1,r_2\in R$, $n\geq 0$, $$\lambda^n(r_1r_2)=P_n(\lambda^1 (r_1),\cdots, \lambda^n (r_1);\lambda^1 (r_2),\cdots, \lambda^n (r_2));$$ \item for all $r\in R$, 
$m,n\geq 0$, $$\lambda^m(\lambda^n (r))=P_{m,n}(\lambda^1 (r),\cdots, \lambda^{mn} (r)).$$ \end{itemize} These operations force the $\lambda^i$ to behave like exterior powers of vector spaces (with sums and products of $\lambda^i$ corresponding to exterior powers of direct sums and tensor products of vector spaces), but they are not always so helpful to work with directly. In the classical case, one works more easily indirectly by defining a new series of operations, the Adams operators $\Psi^n:R\rightarrow R$, via the relationship (for all $r\in R$) \begin{equation}\label{eq:adams}\frac{d}{dt}\log \sum_{i\geq 0}t^i\lambda^i(r)=\sum_{i=0}^\infty (-1)^i\Psi^{i+1}(r)t^i.\end{equation} Note that while we use $\Psi$ in this section for both the power sums and the Adams operators, the basis elements have subscripts and the Adams operators superscripts. Standard literature uses $\Psi$ for the Adams operators, so this follows the usual convention. Moreover, there is quite a close connection between the two, as mentioned below. \begin{thm}[\cite{knutson2006lambda}]\label{thm:adams} If $R$ is torsion free, $R$ is a $\Lambda$-ring if and only if for all $r_1,r_2\in R$, \begin{enumerate} \item \label{it:1}$\Psi^i(1)=1$, \item\label{it:2} $\Psi^i(r_1r_2)=\Psi^i(r_1)\Psi^i(r_2)$, and \item \label{it:3} $\Psi^i(\Psi^j(r_1))=\Psi^{ij}(r_1)$. \end{enumerate} \end{thm} Since (\ref{eq:adams}) defining the Adams operators is equivalent to (\ref{eq:pfrome}), this suggests that simple operations on the symmetric power sum functions should give a $\Lambda$-ring. This is exactly the case for the ``free $\Lambda$-ring on one generator'', where we start with $\Lambda$ a polynomial ring in infinitely many variables $\mathbb{Z}[x_1, x_2,\cdots]$ and let (for any symmetric function $f\in\mathbb{Z}[x_1, x_2,\cdots] $) $$\lambda^i(f)=e_i[f].$$ The $[\cdot]$ on the right denotes plethysm. 
One can make an equivalent statement using the Adams operators and the symmetric power sum $p_i$: $$\Psi^i(f)=p_i[f].$$ This implies that for all $i\geq 0$, $f,g\in\mathbb{Z}[x_1, x_2,\cdots]$ (symmetric functions, although $f$ and $g$ can in fact be any integral sum of monomials) \begin{enumerate} \item$p_i[1]=1$, \item $p_i[f+g]=p_i[f]+p_i[g]$ and \item $p_i[fg]=p_i[f]p_i[g]$. \end{enumerate} (Note that the first and third items follow directly from (\ref{it:1}) and (\ref{it:2}) in Theorem \ref{thm:adams}. The second follows from the additive properties of a $\Lambda$-ring.) This is the context in which plethysm is usually directly defined, such that for $f,g$ symmetric functions (and indeed with $f$ allowed much more generally to be a sum of monomials) one more generally calculates $g[f]$ by first expressing $g$ in terms of the power sums and then using the above rules, combined with the following (which allows one to define plethysm as a homomorphism on the power sums): \begin{itemize} \item For any constant $c\in \mathbb{Q}$ (or generalizing, for $c$ in an underlying field $K$), $c[f]=c$. \item For $g_1,g_2$ symmetric functions, $(g_1+g_2)[f]=g_1[f]+g_2[f]$. \item For $g_1,g_2$ symmetric functions, $(g_1g_2)[f]=g_1[f]g_2[f]$. \end{itemize} In this context, $$s_\lambda[s_\mu]=\sum_{\nu}a_{\lambda,\mu}^\nu s_\nu$$ where the $a_{\lambda,\mu}^\nu$ are the plethysm coefficients and $f[x_1+\cdots +x_n]$ is just $f(x_1,\cdots,x_n)$. See \cite{LeuRem2007Computational} for a fantastic exposition, starting from this point of view.
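To make the rules concrete, a standard small computation (well known, included only for illustration): since $p_m[p_n]=p_{mn}$, writing $h_2=\tfrac{1}{2}(p_1^2+p_2)$ gives
\[p_2[h_2]=\tfrac{1}{2}\left(p_2^2+p_4\right),\]
and $p_2[x_1+x_2+\cdots]=x_1^2+x_2^2+\cdots$; that is, when $f$ is a sum of monomials with coefficient $1$, the plethysm $p_2[f]$ replaces each monomial of $f$ by its square.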
\subsection{Quasisymmetric and noncommutative symmetric plethysm} While one can modify the definition slightly to allow evaluation of $g[f]$ when $f$ is not a symmetric function, the case is not so simple when $g$ is no longer symmetric. Krob, Leclerc, and Thibon~\cite{KLT97Noncommutative} were the first to examine this case in detail; they begin by looking at $g$ in the noncommutative symmetric functions, but the case of $g$ in the quasisymmetric functions can be defined from there by a natural duality. (It is not so natural to try to find an analogue directly on the quasisymmetric function side, since one needs an analogue of the elementary symmetric functions to begin.) A peculiarity of this case is that the order of the monomials in $f$ matters (as the evaluation of noncommutative symmetric functions or quasisymmetric functions depends on the order of the variables), and one can meaningfully define a different result depending on the choice of monomial ordering.
As is suggested by the formalism of $\Lambda$-rings, Krob, Leclerc, and Thibon~\cite{KLT97Noncommutative} begin by essentially defining the natural analogue of a pre-$\Lambda$-ring on the homogeneous noncommutative symmetric functions, and thus by extension on the elementary noncommutative symmetric functions. They do this in a way that guarantees the properties of a pre-$\Lambda$-ring as follows. Write $A\oplus B$ for the addition of ordered noncommuting alphabets (with the convention that terms in $A$ precede terms in $B$) and $H(X;t)$ for the generating function of the homogeneous noncommutative symmetric functions on the alphabet $X$. Then define $H(A\oplus B;t)$ as follows: $$H(A\oplus B;t)=\sum_{n\geq 0}\boldsymbol{h}_{n}[A\oplus B]t^n:= H(A;t)H(B;t).$$
Already, this is enough to show that plethysm on the noncommutative power sums (or by duality the quasisymmetric power sums) is not as nice. Writing $\boldsymbol{\Psi}(X;t)$ for the generating function of the noncommutative symmetric power sums, \cite{KLT97Noncommutative} show that $$\boldsymbol{\Psi}(A\oplus B;t)=H(B;t)^{-1}\boldsymbol{\Psi}(A;t)H(B;t)+\boldsymbol{\Psi}(B;t).$$ This is, of course, in contrast to the simple, easily checked symmetric function expansion $$p[X+Y;t] =p[X;t]+p[Y;t],$$ for $p[X;t]$ the symmetric power sum generating function. Moreover, they show that the situation is equally complex in the type 2 case. The takeaway from this computation is that one can define a $\Lambda$-ring or a pre-$\Lambda$-ring in the noncommutative symmetric functions and then define Adams operators by one of two relationships between the power sums and the elementary (or equivalently homogeneous) noncommutative symmetric functions, but the resulting relationships on the Adams operators, that is, the analogue of Theorem \ref{thm:adams}, can be far more complicated than working with plethysm directly using the elementary noncommutative symmetric functions or (by a dual definition) the monomials in the quasisymmetric functions. A succinct resource for the latter is \cite{BKNMPT01overview}.
In theory, one could work in reverse, defining an operation ``plethysm'' on either the type 1 or the type 2 power sums and extending it as a homomorphism to the quasisymmetric or noncommutative symmetric functions. In practice, besides the more complicated plethysm identities on the power sums, most of the natural relationships commonly used in (commuting) symmetric function calculations are naturally generalized by plethysm as defined in \cite{KLT97Noncommutative}. (Here, for example, the addition of alphabets corresponds to the coproduct, as in the symmetric function case.) Therefore the perspective of \cite{KLT97Noncommutative} seems to result in a more natural analogue of plethysm than any choice of homomorphism on the quasisymmetric power sums.
\section{Future directions} We suggest a couple of possible directions for future research. \subsection{Murnaghan-Nakayama style rules} The Murnaghan-Nakayama Rule provides a formula for the product of a power sum symmetric function indexed by a single positive integer and a Schur function expressed in terms of Schur functions. This rule can be thought of as a combinatorial method for computing character values of the symmetric group. Tewari~\cite{Tew16} extends this rule to the noncommutative symmetric functions by producing a rule for the product of a type 1 noncommutative power sum symmetric function and a noncommutative Schur function. LoBue's formula for the product of a quasisymmetric Schur function and a power sum symmetric function indexed by a single positive integer~\cite{LoB15} can be thought of as a quasisymmetric analogue of the Murnaghan-Nakayama rule, although there are several alternative approaches worth exploring.
The quasisymmetric power sum analogues, unlike the power sum symmetric functions and the noncommutative power sum symmetric functions, are not multiplicative, meaning a rule for multiplying a quasisymmetric Schur function and a quasisymmetric power sum indexed by a single positive integer does not immediately result in a rule for the product of a quasisymmetric Schur function and an arbitrary quasisymmetric power sum. It is therefore natural to seek a new rule for such a product.
Also recall that there are several natural quasisymmetric analogues of the Schur functions; in addition to the quasisymmetric Schur functions, the fundamental quasisymmetric functions and the dual immaculate quasisymmetric functions can be thought of as Schur-like bases for $QSym$. Therefore it is worth investigating a rule for the product of a quasisymmetric power sum and either a fundamental quasisymmetric function or a dual immaculate quasisymmetric function.
\subsection{Representation theoretic interpretations} The symmetric power sums play an important role in connecting representation theory to symmetric function theory via the Frobenius characteristic map $\mathcal{F}$. In particular, if $C_\lambda$ is a class function in the group algebra of $S_n$, one can define the Frobenius map by $\mathcal{F}(C_\lambda)=\frac{p_\lambda}{z_\lambda}$. With this definition, one can show that $\mathcal{F}$ maps the irreducible representation of $S_n$ indexed by $\lambda$ to the Schur function $s_\lambda$.
Krob and Thibon~\cite{krob1997noncommutative} define quasisymmetric and noncommutative symmetric characteristic maps; one takes irreducible representations of the $0$-Hecke algebra to the fundamental quasisymmetric basis, the other takes indecomposable representations of the same algebra to the ribbon basis. It would be interesting to understand if these maps could be defined equivalently and usefully on the quasisymmetric and noncommutative symmetric power sums. \begin{akn} We would like to express our gratitude to BIRS and the organizers of Algebraic Combinatorixx II for bringing the authors together and enabling the genesis of this project, and to the NSF for supporting further collaboration via the second author's grant (DMS-1162010). \end{akn}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2}
\end{document}
\begin{document}
\pdfrender{StrokeColor=black,TextRenderingMode=2,LineWidth=.01pt}
\title{Primitive ideals and Jacobson's structure spaces of noncommutative semigroups}
\author{Amartya Goswami}
\address{Department of Mathematics and Applied Mathematics\\ University of Johannesburg\\ P.O. Box 524, Auckland Park 2006\\ South Africa}
\address{National Institute for Theoretical and Computational Sciences (NITheCS)\\ South Africa}
\email{[email protected]}
\begin{abstract} The purpose of this note is to introduce primitive ideals of noncommutative semigroups and study some topological aspects of the corresponding structure spaces. \end{abstract}
\makeatletter \@namedef{subjclassname@2020}{ \textup{2020} Mathematics Subject Classification} \makeatother
\subjclass[2020]{20M12; 20M10; 16W22}
\keywords{semigroup; primitive ideal; Jacobson topology}
\maketitle
\section*{Introduction} Since the introduction of primitive rings in \cite{J45}, primitive ideals have shown their immense importance in understanding structural aspects of rings and modules \cite{J56, R88}, Lie algebras \cite{KPP12}, enveloping algebras \cite{D96,J83}, PI-algebras \cite{J75}, quantum groups \cite{J95}, skew polynomial rings \cite{I79}, and others. In \cite{J451}, Jacobson introduced a hull-kernel topology (also known as the Jacobson topology) on the set of primitive ideals of a noncommutative ring and obtained representations of biregular rings. The Jacobson topology also turns out to play a key role in the representation theory of finite-dimensional Lie algebras (see \cite{D96}).
Compared to the above algebraic structures, after magmas (also known as groupoids), semigroups are the most basic ones. A detailed study of the algebraic theory of semigroups can be found in the early textbooks \cite{CP61} and \cite{CP67} (see also \cite{G01, H92, H95}), whereas specific studies of prime, semiprime, and maximal ideals of semigroups appear in \cite{A53, A81, PK92, S69}. Furthermore, various notions of radicals of semigroups have been studied in \cite{A75, G69, S76}. Readers may consult \cite{AJ84} for a survey of the ideal theory of semigroups.
The next question is that of imposing topologies on various types of ideals of semigroups. To this end, the hull-kernel topology on maximal ideals of (commutative) semigroups has been considered in \cite{A62}, and on minimal prime ideals in \cite{K63}. Using the notion of $x$-ideals introduced in \cite{A62}, a general notion of structure spaces for semigroups was studied in \cite{H66}; however, the assumption of commutativity restricts that study to only certain types of ideals of semigroups, and hence it leaves no scope for primitive ideals.
To the best of the author's knowledge, primitive ideals of semigroups have never been considered. The aim of this paper is to introduce primitive ideals of (noncommutative) semigroups and to endow the set of primitive ideals with the Jacobson topology in order to study some topological aspects of it. To have a notion of primitive ideals of semigroups, we furthermore need a notion of a module over a noncommutative semigroup, which in general has also not been studied much. We hope the notion of primitive ideals introduced here will in future shed some light on the structural aspects of noncommutative semigroups.
\section{Primitive ideals}
A \emph{semigroup} is a tuple $(S, \cdot)$ such that the binary operation $\cdot$ on the set $S$ is associative. For all $a, b\in S$, we shall write $ab$ to mean $a\cdot b$. Throughout this work, semigroups are not assumed to be commutative. If a semigroup $S$ has an identity, we denote it by $1$; it satisfies $s1=s=1s$ for all $s\in S.$ If $A$ and $B$ are subsets of $S$, then by the \emph{set product} $AB$ of $A$ and $B$ we shall mean $AB=\{ab\mid a\in A, b\in B\}.$ If $A=\{a\}$ we write $AB$ as $aB$, and similarly for $B=\{b\}.$ Thus $$AB=\bigcup\{ Ab\mid b\in B\}=\bigcup \{aB\mid a\in A\}.$$
A \emph{left} (\emph{right}) \emph{ideal} of a semigroup $S$ is a nonempty subset $\mathfrak{a}$ of $S$ such that $S\!\mathfrak{a}\subseteq \mathfrak{a}$ ($\mathfrak{a} \,S\subseteq \mathfrak{a}$). A \emph{two-sided ideal} or simply an \emph{ideal} is a subset which is both a left and a right ideal of $S$. In this work the word ``ideal'' without modifiers will always mean a two-sided ideal. If $X$ is a nonempty subset of a semigroup $S$, then the ideal $\langle X\rangle$ \emph{generated by} $X$ is the intersection of all ideals containing $X$. Therefore, $$\langle X\rangle =X\cup XS\cup SX\cup SXS.$$ We say an ideal $\mathfrak{a}$ is of \emph{finite character} if $\mathfrak{a}$ is the set-theoretic union of the ideals generated by the finite subsets of its generating set $X$. We assume all our ideals are of finite character. To define primitive ideals of a semigroup $S,$ we require the notion of a module over $S$, which we introduce now.
A (\emph{left}) \emph{$S$-module} is an abelian group $(M,+,0)$ endowed with a map $S\times M\to M$ (denoted by $(s,m)\mapsto sm$) satisfying the identities: \begin{enumerate}[\upshape (i)] \itemsep -.2em \item $s(m+m')=sm+sm';$ \item $(ss')m=s(s'm);$ \item $s0=0,$ \end{enumerate} for all $s,s'\in S$ and for all $m, m'\in M$. Henceforth the term ``$S$-module'' without modifier will always mean left $S$-module. If $M$, $M'$ are $S$-modules, then an \emph{$S$-module homomorphism} from $M$ into $M'$ is a group homomorphism $f\colon M\to M'$ such that $f(sm)=sf(m)$ for all $s\in S$ and for all $m\in M.$ A subset $N$ of $M$ is called an $S$-\emph{submodule} of the module $M$ if \begin{enumerate}[\upshape (i)] \itemsep -.2em \item $(N,+)$ is a subgroup of $(M,+);$ \item for all $s\in S$ and for all $n\in N$, $sn\in N.$ \end{enumerate} If $\mathfrak{a}$ is an ideal of $S$, then the additive subgroup $\mathfrak{a}M$ of $M$ generated by the elements of the form $\{am \mid a \in \mathfrak{a},m \in M\}$ is an $S$-submodule. An $S$-module $M$ is called \emph{simple} (or \emph{irreducible}) if \begin{enumerate}[\upshape (i)] \itemsep -.2em \item $S\!M=\left\{\sum s_im_i \mid s_i\in S, m_i\in M\right\}\neq 0.$ \item There is no proper $S$-submodule of $M$ other than $0$. \end{enumerate} A (\emph{left}) \emph{annihilator} of an $S$-module $M$ is $\mathrm{Ann}_S(M)=\{ s\in S\mid sm=0\;\;\text{for all}\;\; m\in M\}.$ When $M=\{m\},$ we write $ \mathrm{Ann}_S(\{m\})$ as $ \mathrm{Ann}_S(m)$.
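For a minimal illustrative example (not taken from the original text): let $S=\{1\}$ be the trivial semigroup and let $M=\mathbb{Z}/p\mathbb{Z}$ for a prime $p$, with the action $1m=m$. Then $S\!M=M\neq 0$, and the $S$-submodules of $M$ are precisely its subgroups, so $M$ is a simple $S$-module.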
\begin{lemma} An annihilator $\mathrm{Ann}_S(M)$ is an ideal of $S$. \end{lemma}
\begin{proof} For all $s\in S$ and for all $x\in \mathrm{Ann}_S(M)$ we have $(sx)m=s(xm)=s0=0$, and $(xs)m=x(sm)=0$ since $sm\in M$. \end{proof}
A nonempty proper ideal $\mathfrak{p}$ is said to be \emph{primitive} if $\mathfrak{p}=\mathrm{Ann}_S(M)$ for some simple $S$-module $M$. We denote the set of primitive ideals of a semigroup $S$ by $\mathrm{Prim}(S)$. A nonempty proper ideal $\mathfrak{q}$ of a semigroup $S$ is said to be \emph{prime} if for any two ideals $\mathfrak{a}$, $\mathfrak{b}$ of $S$, $\mathfrak{a}\mathfrak{b}\subseteq \mathfrak{q}$ implies $\mathfrak{a}\subseteq \mathfrak{q}$ or $\mathfrak{b}\subseteq \mathfrak{q}$.
As remarked in \cite{BM58}, it does not matter whether the product $\mathfrak{a}\mathfrak{b}$ of ideals $\mathfrak{a}$ and $\mathfrak{b}$ is defined to be the set of all finite sums $\sum i_{\alpha} j_{\alpha}$ (where $i_{\alpha}\in \mathfrak{a}$, $j_{\alpha}\in \mathfrak{b}$), or the smallest ideal of the semigroup $S$ containing all products $i_{\alpha} j_{\alpha}$, or merely the set of all these products. For rings, the second of these definitions is used in \cite{B56} and the third in \cite{A54}. The following result is easy to verify.
\begin{lemma}\label{dint} If $\mathfrak{a}$ and $\mathfrak{b}$ are any two ideals of a semigroup, then $\mathfrak{a}\mathfrak{b}\subseteq \mathfrak{a}\cap \mathfrak{b}.$ \end{lemma}
The following proposition gives an alternative formulation of prime ideals of semigroups. For a proof, see \cite[Lemma 2.2]{PK92}.
\begin{proposition}\label{alpri} Suppose $S$ is a semigroup. Then the following conditions are equivalent: \begin{enumerate}[\upshape (i)] \itemsep -.2em \item $\mathfrak{q}$ is a prime ideal of $S$. \item $aSb \subseteq \mathfrak{q}$ implies $a\in \mathfrak{q}$ or $b\in\mathfrak{q}$\; for all $a, b \in S.$ \end{enumerate} \end{proposition}
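For a concrete (standard) illustration of Proposition \ref{alpri}: in the multiplicative semigroup $(\mathbb{Z}_{>0},\cdot)$, the set $\mathfrak{q}_p$ of positive multiples of a fixed prime $p$ is an ideal, and if $a\mathbb{Z}_{>0}b\subseteq \mathfrak{q}_p$, then in particular $p\mid a1b=ab$, so $p\mid a$ or $p\mid b$; hence $\mathfrak{q}_p$ is a prime ideal.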
Primitive ideals and prime ideals of a semigroup are related as follows.
\begin{proposition}\label{prtpr} Every primitive ideal of a semigroup is a prime ideal. \end{proposition}
\begin{proof} Suppose $\mathfrak{p}$ is a primitive ideal and $\mathfrak{p}=\mathrm{Ann}_S(M)$ for some simple $S$-module $M$. Let $a, b \notin\mathrm{Ann}_S(M)$. Then $am\neq 0$ and $bm'\neq 0$ for some $m, m'\in M.$ Since $M$ is simple, there exists an $s\in S$ such that $s(bm')=m$. Then $$(asb)m'=a(s(bm'))=am\neq 0,$$ and hence $asb \notin \mathrm{Ann}_S(M)$. Therefore, $\mathrm{Ann}_S(M)$ is a prime ideal by Proposition \ref{alpri}. \end{proof}
In the next section we introduce the Jacobson topology on the set of primitive ideals of a semigroup and discuss some topological properties of the corresponding structure spaces.
\section{Jacobson topology}
We shall introduce the Jacobson topology on $\mathrm{Prim}(S)$ by defining a closure operator for the subsets of $\mathrm{Prim}(S)$. Once we have a closure operator, closed sets are defined as the sets which are fixed by this closure operator. Suppose $X$ is a subset of $\mathrm{Prim}(S)$. Set $\mathcal{D}_X=\bigcap_{\mathfrak{q}\in X}\mathfrak{q}.$ We define the closure of the set $X$ as \begin{equation}\label{clop} \mathcal{C}(X)=\left\{ \mathfrak{p}\in \mathrm{Prim}(S) \mid \mathfrak{p}\supseteq \mathcal{D}_X \right\}. \end{equation}
If $X=\{x\}$, we will write $\mathcal{C}(\{x\})$ as $\mathcal{C}(x)$. We wish to verify that the closure operation defined in (\ref{clop}) satisfies Kuratowski's closure axioms, and this is done in the following
\begin{proposition}\label{ztp} The sets $\{\mathcal{C}(X)\}_{X\subseteq \mathrm{Prim}(S)}$ satisfy the following conditions: \begin{enumerate}[\upshape (i)] \itemsep -.2em \item\label{clee} $\mathcal{C}(\emptyset)=\emptyset$, \item\label{clxx} $\mathcal{C}(X)\supseteq X$, \item\label{clclx} $\mathcal{C}(\mathcal{C}(X))=\mathcal{C}(X),$ \item\label{clxy} $ \mathcal{C}(X\cup Y)=\mathcal{C}(X)\cup \mathcal{C}(Y).$ \end{enumerate} \end{proposition}
\begin{proof} The proofs of (\ref{clee})-(\ref{clclx}) are straightforward, whereas for (\ref{clxy}), it is easy to see that $ \mathcal{C}(X\cup Y)\supseteq\mathcal{C}(X)\cup \mathcal{C}(Y).$ To obtain the other inclusion, let $\mathfrak{p}\in \mathcal{C}(X\cup Y).$ Then $$\mathfrak{p}\supseteq \mathcal{D}_{X\cup Y}=\mathcal{D}_X \cap \mathcal{D}_Y.$$ Since $\mathcal{D}_X$ and $\mathcal{D}_Y$ are ideals of $S$, by Lemma \ref{dint}, it follows that $$\mathcal{D}_X\mathcal{D}_Y\subseteq \mathcal{D}_X \cap \mathcal{D}_Y\subseteq \mathfrak{p}.$$ Since $\mathfrak{p}$ is prime by Proposition \ref{prtpr}, either $\mathcal{D}_X\subseteq \mathfrak{p}$ or $\mathcal{D}_Y\subseteq \mathfrak{p}.$ This means either $\mathfrak{p}\in \mathcal{C}(X)$ or $\mathfrak{p}\in \mathcal{C}(Y)$. Thus $ \mathcal{C}(X\cup Y)\subseteq\mathcal{C}(X)\cup \mathcal{C}(Y).$ \end{proof}
The set $\mathrm{Prim}(S)$ of primitive ideals of a semigroup $S$, topologized by the closure operator defined in (\ref{clop}) (the Jacobson topology), is called the \emph{structure space} of the semigroup $S$. It is evident from (\ref{clop}) that if $\mathfrak{p}\neq \mathfrak{p}'$ for any two $\mathfrak{p}, \mathfrak{p}'\in \mathrm{Prim}(S)$, then $\mathcal{C}(\mathfrak{p})\neq \mathcal{C}(\mathfrak{p}').$ Thus
\begin{proposition}\label{t0a} Every structure space $\mathrm{Prim}(S)$ is a $T_0$-space. \end{proposition}
\begin{theorem}\label{csb} If $S$ is a semigroup with identity then the structure space $\mathrm{Prim}(S)$ is compact. \end{theorem}
\begin{proof} Suppose $\{K_{\lambda}\}_{\lambda \in \Lambda}$ is a family of closed sets of $\mathrm{Prim}(S)$ with $\bigcap_{\lambda\in \Lambda}K_{\lambda}=\emptyset.$ Set $$\mathfrak{a}=\left\langle \bigcup_{\lambda \in \Lambda} \mathcal{D}_{K_{\lambda}}\right\rangle.$$ If $\mathfrak{a}\neq S,$ then there is a maximal ideal $\mathfrak{m}$ of $S$ such that $\mathfrak{a}\subseteq \mathfrak{m}.$ Moreover, $$\mathcal{D}_{K_{\lambda}}\subseteq \mathfrak{a}\subseteq \mathfrak{m},$$ for all $\lambda \in \Lambda.$ Therefore $\mathfrak{m}\in \mathcal{C}(K_{\lambda})=K_{\lambda}$ for all $\lambda \in \Lambda$, contradicting our assumption. Hence $\mathfrak{a}=S,$ and in particular the identity $1\in \mathfrak{a}.$ Since $\mathfrak{a}$ is of finite character, there is a finite subset $\{\lambda_{\scriptscriptstyle 1}, \ldots, \lambda_{\scriptscriptstyle n}\}$ of $\Lambda$ such that $1\in \left\langle\bigcup_{i=1}^n \mathcal{D}_{K_{\lambda_i}}\right\rangle.$ This implies $\bigcap_{i=1}^{n}K_{\lambda_i}=\emptyset,$ and compactness follows from the finite intersection property criterion. \end{proof}
Recall that a nonempty closed subset $K$ of a topological space $X$ is \emph{irreducible} if $K\neq K_{\scriptscriptstyle 1}\cup K_{\scriptscriptstyle 2}$ for any two proper closed subsets $K_{\scriptscriptstyle 1}, K_{\scriptscriptstyle 2}$ of $K$. A maximal irreducible subset of a topological space $X$ is called an \emph{irreducible component} of $X.$ A point $x$ in a closed subset $K$ is called a \emph{generic point} of $K$ if $K = \mathcal{C}(x).$
\begin{lemma}\label{lemprime} The irreducible closed subsets of a structure space $\mathrm{Prim}(S)$ are precisely the sets $\mathcal{C}(\mathfrak{p})$ with $\mathfrak{p}\in \mathrm{Prim}(S)$. \end{lemma}
\begin{proof} Since $\{\mathfrak{p}\}$ is irreducible, so is its closure $\mathcal{C}(\mathfrak{p}).$ Conversely, suppose $\mathcal{C}(\mathfrak{a})$ is an irreducible closed subset of $\mathrm{Prim}(S)$ and $\mathfrak{a}\notin \mathrm{Prim}(S).$ Then there exist ideals $\mathfrak{b}$ and $\mathfrak{c}$ of $S$ such that $\mathfrak{b}\nsubseteq \mathfrak{a}$ and $\mathfrak{c}\nsubseteq \mathfrak{a}$, but $\mathfrak{b}\mathfrak{c}\subseteq \mathfrak{a}$. Then $$\mathcal{C}(\langle \mathfrak{a}, \mathfrak{b}\rangle)\cup \mathcal{C}(\langle \mathfrak{a},\mathfrak{c}\rangle)=\mathcal{C}(\langle \mathfrak{a}, \mathfrak{b}\mathfrak{c}\rangle)=\mathcal{C}(\mathfrak{a}).$$ But $\mathcal{C}(\langle \mathfrak{a}, \mathfrak{b}\rangle)\neq \mathcal{C}(\mathfrak{a})$ and $\mathcal{C}(\langle \mathfrak{a}, \mathfrak{c}\rangle)\neq \mathcal{C}(\mathfrak{a}),$ so $\mathcal{C}(\mathfrak{a})$ is not irreducible, a contradiction. \end{proof}
\begin{proposition} Every irreducible closed subset of $\mathrm{Prim}(S)$ has a unique generic point. \end{proposition}
\begin{proof} The existence of generic point follows from Lemma \ref{lemprime}, and the uniqueness of such a point follows from Proposition \ref{t0a}. \end{proof}
The irreducible components of a structure space can be characterized in terms of minimal primitive ideals, as we show in the following
\begin{proposition}\label{thmirre} The irreducible components of a structure space $\mathrm{Prim}(S)$ are the closed sets $\mathcal{C}(\mathfrak{p})$, where $\mathfrak{p}$ is a minimal primitive ideal of $S$. \end{proposition}
\begin{proof} If $\mathfrak{p}$ is a minimal primitive ideal, then by Lemma \ref{lemprime}, $\mathcal{C}(\mathfrak{p})$ is irreducible. If $\mathcal{C}(\mathfrak{p})$ is not a maximal irreducible subset of $\mathrm{Prim}(S)$, then there exists a maximal irreducible subset $\mathcal{C}(\mathfrak{p}')$ with $\mathfrak{p}'\in \mathrm{Prim}(S)$ such that $\mathcal{C}(\mathfrak{p})\subsetneq \mathcal{C}(\mathfrak{p}')$. This implies that $\mathfrak{p}\in \mathcal{C}(\mathfrak{p}')$ and hence $\mathfrak{p}'\subsetneq \mathfrak{p}$, contradicting the minimality property of $\mathfrak{p}$. \end{proof}
Recall that a semigroup is called \emph{Noetherian} if it satisfies the ascending chain condition, whereas a topological space $X$ is called \emph{Noetherian} if the descending chain condition holds for closed subsets of $X.$ A relation between these two notions is shown in the following
\begin{proposition}\label{fwn} If a semigroup $S$ is Noetherian, then $\mathrm{Prim}(S)$ is a Noetherian space. \end{proposition}
\begin{proof} It suffices to show that the closed sets of $\mathrm{Prim}(S)$ satisfy the descending chain condition. Let $\mathcal{C}(\mathfrak{a}_{\scriptscriptstyle 1})\supseteq \mathcal{C}(\mathfrak{a}_{\scriptscriptstyle 2})\supseteq \cdots$ be a descending chain of closed sets in $\mathrm{Prim}(S)$. Then, $\mathfrak{a}_{\scriptscriptstyle 1}\subseteq \mathfrak{a}_{\scriptscriptstyle 2}\subseteq \cdots$ is an ascending chain of ideals in $S.$ Since $S$ is Noetherian, the chain stabilizes at some $n \in \mathds{N}.$ Hence, $\mathcal{C}(\mathfrak{a}_{\scriptscriptstyle n}) = \mathcal{C}(\mathfrak{a}_{\scriptscriptstyle n+k})$ for all $k\geq 0.$ Thus $\mathrm{Prim}(S)$ is Noetherian. \end{proof}
\begin{corollary} The set of minimal primitive ideals in a Noetherian semigroup is finite. \end{corollary}
\begin{proof} By Proposition \ref{fwn}, $\mathrm{Prim}(S)$ is Noetherian, and thus $\mathrm{Prim}(S)$ has only finitely many irreducible components. By Proposition \ref{thmirre}, the irreducible components of $\mathrm{Prim}(S)$ are precisely the closed sets $\mathcal{C}(\mathfrak{p}),$ where $\mathfrak{p}$ is a minimal primitive ideal. Hence, $S$ has only finitely many minimal primitive ideals. \end{proof}
\begin{proposition}\label{conmap} Suppose $\phi\colon S\to T$ is a semigroup homomorphism and define the map $\phi_*\colon \mathrm{Prim}(T)\to \mathrm{Prim}(S)$ by $\phi_*(\mathfrak{p})=\phi\inv(\mathfrak{p})$, where $\mathfrak{p}\in\mathrm{Prim}(T).$ Then $\phi_*$ is a continuous map. \end{proposition}
\begin{proof} To show $\phi_*$ is continuous, we first show that $\phi\inv(\mathfrak{p})\in \mathrm{Prim}(S),$ whenever $\mathfrak{p}\in \mathrm{Prim}(T)$. Note that $\phi\inv(\mathfrak{p})$ is an ideal of $S$ and a union of $\mathrm{ker}\phi$-classes (see \cite[Proposition 3.4]{G01}). Suppose $\mathfrak{p}=\mathrm{Ann}_{T}(M)$ for some simple $T$-module $M$. Then by the ``change of rings'' property of modules, $\phi\inv(\mathfrak{p})$ is the annihilator of the simple $S$-module $M$ obtained by defining $sm:=\phi(s)m$. Therefore $\phi\inv(\mathfrak{p})\in \mathrm{Prim}(S)$. Now consider a closed subset $\mathcal{C}(\mathfrak{a})$ of $\mathrm{Prim}(S).$ Then for any $\mathfrak{q}\in \mathrm{Prim}(T),$ we have: \begin{align*} \mathfrak{q}\in \phi_*\inv (\mathcal{C}(\mathfrak{a}))\Leftrightarrow \phi\inv(\mathfrak{q})\in \mathcal{C}(\mathfrak{a})\Leftrightarrow \mathfrak{a}\subseteq \phi\inv(\mathfrak{q})\Leftrightarrow \mathfrak{q}\in\mathcal{C}(\langle \phi(\mathfrak{a})\rangle), \end{align*} and this proves the desired continuity of $\phi_*$. \end{proof}
\end{document} |
\begin{document}
\title{Height pairings of 1-motives}
\begin{abstract} The purpose of this work is to generalize, in the context of 1-motives, the $p$-adic height pairings constructed by B. Mazur and J. Tate on abelian varieties. Following their approach, we define a global pairing between the rational points of a 1-motive and its dual. We also provide local pairings between zero-cycles and divisors on a curve, which is done by considering its Picard and Albanese 1-motives. \end{abstract}
\section{Introduction}
In \cite{MT83} Mazur and Tate gave a construction of a global pairing on the rational points of paired abelian varieties over a global field, as well as N\'eron-type local pairings between disjoint zero-cycles and divisors on an abelian variety over a local field. Their approach involved the concept of $\rho$-splittings of biextensions of abelian groups, which they mainly studied in the case of $K$-rational sections of a $\mathbb G_m$-biextension of abelian varieties over a local field. When certain requirements on the base field, the morphism $\rho$, and the abelian varieties are met, they proved the existence of canonical $\rho$-splittings for this type of biextensions, which they later used to construct canonical local pairings between disjoint zero-cycles and divisors on an abelian variety. By considering a global field endowed with a set of places and its respective completions, they were also able to construct a global pairing on the rational points of paired abelian varieties. \\
Of particular interest to us will be the Poincar\'e biextension of an abelian variety and its dual, defined over a non-archimedean local field of characteristic 0. When considering this biextension, there is another method of obtaining $\rho$-splittings, due to Zarhin \cite{ZA90}, starting from splittings of the Hodge filtration of the first de Rham cohomology group of the abelian variety. His construction coincides with Mazur and Tate's in the case that $\rho$ is unramified, or when $\rho$ is ramified and the splitting of the Hodge filtration is the one induced by the unit root subspace. In the latter case, the equality of both constructions is a result of Coleman \cite{CO91}, in the case of ordinary reduction, and of Iovita and Werner \cite{IW03}, in the case of semistable ordinary reduction. \\
For our generalization to 1-motives we will focus on the ramified case. Following Zarhin's approach, we construct $\rho$-splittings of the Poincar\'e biextension of a 1-motive and its dual starting from a pair of splittings of the Hodge filtrations of their de Rham realizations; this is done in Section \ref{sec:ramified}. In order to construct pairings from these $\rho$-splittings, we need them to be compatible with the canonical linearization associated to the biextension; the conditions under which this happens are studied in Section \ref{sec:linearization}. \\
In Section \ref{sec:local_pairing} we consider a semi-normal irreducible curve $C$ over a finite extension of $\mathbb Q_p$ and construct a local pairing between disjoint zero-cycles of degree zero on $C$ and on its regular locus $C_{\text{\rm reg}}$. We do this by considering the Poincar\'e biextension of the Picard and Albanese 1-motives of $C$. This construction generalizes the local pairing of Mazur and Tate \cite[p. 212]{MT83} in the case of elliptic curves. \\
Finally, in Section \ref{sec:global_pairing} we consider a 1-motive $M$ over a number field $F$, a set of places of $F$, and homomorphisms $\rho_v: F_v^* \to \mathbb Q_p$ (almost all vanishing on the units of the valuation ring), with $v$ running through the set of places, as well as a $\rho_v$-splitting $\psi_v$, for each $v$, on the $F_v$-rational sections of the Poincar\'e biextension $P$ of $M$ and its dual $M^\vee$ (satisfying certain properties). With this data we construct a global pairing between the $F$-rational points of $M$ and $M^\vee$ under the condition that, for each ramified $\rho_v$, the $\rho_v$-splitting $\psi_v$ is compatible with the canonical linearization of $P$. The pairing is defined similarly to the case of abelian varieties, hence generalizing the global pairing of Mazur and Tate \cite[Lemma 3.1, p. 214]{MT83} in the case of an abelian variety and its dual.
\section{Preliminaries on abelian varieties and 1-motives}
\subsection{$\rho$-splittings on abelian varieties}
For the definition of biextension of abelian groups and group schemes we refer to \cite{MU69}.
\begin{definition}[{\cite[p. 199]{MT83}}] Let $A$, $B$, $H$, $Y$ be abelian groups and $P$ a biextension of $(A, B)$ by $H$. Let $\rho: H \to Y$ be a homomorphism. A \emph{$\rho$-splitting} of $P$ is a map $\psi: P \to Y$ such that \begin{enumerate}[(i)] \item $\psi(h+x)=\rho(h) + \psi(x)$, for all $h \in H$ and $x \in P$, and \item for each $a \in A$ (resp. $b \in B$) the restriction of $\psi$ to $P_{a, B}$ (resp. $P_{A, b}$) is a group homomorphism, \end{enumerate} where $P_{a, B}$ (resp. $P_{A, b}$) denotes the fiber of $P$ over $\{a\} \times B$ (resp. $A \times \{b\}$). \end{definition} \noindent Thus, a $\rho$-splitting can be seen as a bi-homomorphic map which is compatible with the natural actions of $H$. Moreover, $\psi$ induces a trivialization of the pushout of $P$ along $\rho$, hence its name. \\
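To fix ideas, we note an elementary illustration, which follows directly from the definition (it is not taken from \cite{MT83}). Let $P = A \times B \times H$ be the trivial biextension, with partial group laws $(a, b_{\scriptscriptstyle 1}, h_{\scriptscriptstyle 1}) +_1 (a, b_{\scriptscriptstyle 2}, h_{\scriptscriptstyle 2}) = (a, b_{\scriptscriptstyle 1} + b_{\scriptscriptstyle 2}, h_{\scriptscriptstyle 1} + h_{\scriptscriptstyle 2})$ and the analogous law for $+_2$. Then, for any bi-additive map $\beta: A \times B \to Y$, the formula
\[\psi(a, b, h) := \rho(h) + \beta(a, b)\]
defines a $\rho$-splitting of $P$, and every $\rho$-splitting of the trivial biextension is of this form, with $\beta(a,b) = \psi(a,b,0)$. In particular, $\rho$-splittings are in general far from unique; the content of the results recalled below is the existence of canonical ones. \\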
The context in which these maps were classically studied is the following. Consider a field $K$ which is complete with respect to a place $v$, either archimedean or discrete, $A$ and $B$ abelian varieties over $K$, $P$ a biextension of $(A, B)$ by $\mathbb G_m$, and $\rho: K^* \to Y$ a homomorphism from the group of units of $K$ to an abelian group $Y$. A key result by Mazur and Tate \cite[p. 199]{MT83} states the existence of canonical $\rho$-splittings of the group $P(K)$ of rational points of $P$ in the following cases: \begin{enumerate}[(i)]
\item $v$ is archimedean and $\rho(c) = 0$ for all $c$ such that $|c|_v = 1$, \item $v$ is discrete, $\rho$ is unramified (\textit{i.e.} $\rho(R^*) = 0$, where $R$ is the valuation ring of $K$) and $Y$ is uniquely divisible by $N$, and \item $v$ is discrete, the residue field of $K$ is finite, $A$ has semistable ordinary reduction and $Y$ is uniquely divisible by $M$, \end{enumerate} where $N$ is an integer depending on $A$ and $M$ is an integer depending on $A$ and $B$. We will mainly focus on case (iii). In this case, the $\rho$-splitting of $P(K)$ is obtained by extending a local formal splitting of $P$, which exists and is unique because of the semistable ordinary reduction of $A$. \\
When $B = A^\vee$ is the dual abelian variety of $A$ and $P = P_A$ is the Poincar\'e biextension, there is an alternate method of obtaining $\rho$-splittings of $P(K)$ starting with a splitting of the Hodge filtration of the first de Rham cohomology of $A$. This construction is due to Zarhin \cite{ZA90} and is done as follows. Let $K$ be a field which is the completion of a number field with respect to a discrete place $v$ over a prime $p$ and consider a continuous homomorphism $\rho: K^* \rightarrow \mathbb Q_p$. Recall that, associated to the first de Rham cohomology $K$-vector space of $A$, there is a canonical extension \begin{equation} \label{hodgefil} 0 \rightarrow \operatorname{H}^0(A, \Omega_{A/K}^1) \rightarrow \operatorname{H}_\text{\rm dR}^1(A) \rightarrow \operatorname{H}^1(A, \mathcal O_A) \rightarrow 0 \end{equation} coming from the Hodge filtration of $\operatorname{H}_\text{\rm dR}^1(A)$. It is known that \eqref{hodgefil} can be naturally identified with the exact sequence of Lie algebras induced by the universal vectorial extension $A^{\vee \#}$ of $A^\vee$: \begin{equation} 0 \rightarrow \omega_A \rightarrow A^{\vee \#} \rightarrow A^\vee \rightarrow 0, \end{equation} where $\omega_A$ is the $K$-vector group representing the sheaf of invariant differentials on $A$ (see \cite[Prop. 4.1.7, p. 48]{MM74}). Therefore, it is possible to obtain a (uniquely determined) splitting $\eta: A^\vee(K) \rightarrow A^{\vee \#}(K)$ at the level of groups from any splitting $r: \operatorname{H}^1(A, \mathcal O_A) \rightarrow \operatorname{H}_\text{\rm dR}^1(A)$ of \eqref{hodgefil} (see \cite[Ex. 3.1.5, p. 328]{ZA90} or \cite[Lemma 3.1.1, p. 641]{CO91}). 
Since $A^{\vee}$ represents the functor $\underline{\Ext}_K(A, \mathbb G_{m})$, while $A^{\vee \#}$ represents the functor $\underline{\Extrig}_K(A, \mathbb G_{m})$ of rigidified extensions of $A$ by $\mathbb G_m$, the morphism $\eta$ gives a multiplicative way of associating a rigidification to every extension of $A$ by $\mathbb G_{m}$. Indeed, take a point $a^\vee \in A^\vee(K)$ and let $P_{A,a^\vee}$ be the fiber of the Poincar\'e bundle $P_A$ over $A \times \{a^\vee\}$. Then $\eta(a^\vee)$ corresponds to the extension $P_{A,a^\vee}$ of $A$ by $\mathbb G_m$ endowed with a rigidification or, equivalently, a splitting $$t_{a^\vee}: \operatorname{Lie} P_{A,a^\vee}(K) \rightarrow \operatorname{Lie} \mathbb G_m(K)$$ of the exact sequence of Lie algebras induced by $P_{A,a^\vee}$. The composition $\operatorname{Lie} \rho \circ t_{a^\vee}$ can then be extended to a group homomorphism $P_{A,a^\vee}(K) \to \mathbb Q_p$ (see \cite[Thm. 3.1.7, p. 329]{ZA90}), for every $a^\vee \in A^\vee(K)$, thus obtaining a $\rho$-splitting $$\psi_\rho: P_{A}(K) \rightarrow \mathbb Q_p.$$
When $\rho$ is unramified, $\psi_\rho$ does not depend on the choice of splitting of \eqref{hodgefil}, recovering Mazur and Tate's result for case (ii) (see \cite[Thm. 4.1, p. 331]{ZA90}). On the other hand, when $\rho$ is ramified, $\psi_\rho$ does depend on the chosen splitting of \eqref{hodgefil} (see \cite[Thm. 4.3, p. 333]{ZA90}). Coleman \cite{CO91} demonstrated that, when $A$ has good ordinary reduction, the canonical $\rho$-splitting of $P_A(K)$ constructed by Mazur and Tate comes from the splitting of \eqref{hodgefil} induced by the unit root subspace, which is the subspace of $\operatorname{H}_\text{\rm dR}^1(A)$ on which the Frobenius acts with slope 0. Later, Iovita and Werner \cite{IW03} were able to generalize this result to abelian varieties with semistable ordinary reduction by considering their Raynaud extension, which can be seen as a 1-motive whose abelian part has good ordinary reduction (see also \cite{WE98}).
\subsection{1-motives}
According to Deligne \cite[p. 59]{DE74}, a \emph{1-motive} $M$ over a field $K$ consists of: \begin{enumerate}[(i)] \item a \emph{lattice} $L$ over $K$, \textit{i.e.} a group scheme which, locally for the \'etale topology on $K$, is isomorphic to a finitely generated free abelian constant group; \item a \emph{semi-abelian variety} $G$ over $K$, \textit{i.e.} an extension of an abelian variety $A$ by a torus $T$; and \item a morphism of $K$-group schemes $u: L \rightarrow G$. \end{enumerate} A 1-motive can be considered as a complex of $K$-group schemes with the lattice in degree $-1$ and the semi-abelian variety in degree $0$. A \emph{morphism of 1-motives} can then be defined as a morphism of the corresponding complexes.
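Some standard examples may help fix ideas. Taking two of the three ingredients to be trivial yields the 1-motives $A[0] = [0 \to A]$, $T[0] = [0 \to T]$ and $L[1] = [L \to 0]$ associated to an abelian variety, a torus and a lattice, respectively. A basic mixed example is
\[ [\mathbb Z \xrightarrow{1 \mapsto q} \mathbb G_m], \qquad q \in K^*, \]
with lattice of rank 1 and semi-abelian part a torus; over a $p$-adic field with $0 < |q| < 1$ this is the 1-motive of the Tate curve with parameter $q$.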
\subsubsection{Cartier duality} Associated to a 1-motive $M$ there is a \emph{Cartier dual 1-motive} $M^\vee = [L^\vee \xrightarrow{u^\vee} G^\vee]$ defined as follows (see \cite[p. 67]{DE74}). The lattice $L^\vee := \underline{\Hom}_K(T, \mathbb G_m)$ is the Cartier dual of $T$, the torus $T^\vee := \underline{\Hom}_K(L, \mathbb G_m)$ is the Cartier dual of $L$, the abelian variety $A^\vee$ is the dual abelian variety of $A$, and the semi-abelian variety $G^\vee$ is the image of $v: L \xrightarrow{u} G \to A$ under the natural isomorphism $$\operatorname{Hom}_K(L, A) \xrightarrow{\cong} \operatorname{Ext}_K^1(A^\vee, T^\vee).$$
There is a canonical biextension $P$ of $(M, M^\vee)$ by $\mathbb G_m$, called the \emph{Poincar\'e biextension}, expressing the duality between $M$ and $M^\vee$. It is defined as the pullback to $G \times G^\vee$ of the Poincar\'e biextension $P_A$ of $(A, A^\vee)$. $P$ is naturally endowed with trivializations over $L \times G^\vee$ and $G \times L^\vee$ that coincide over $L \times L^\vee$, making it a biextension of $(M, M^\vee)$ by $\mathbb G_m$ (see \cite[p. 60]{DE74}). Using the fact that the group scheme $G^\vee$ represents the sheaf $\underline{\Ext}_K([L \xrightarrow{v} A], \mathbb G_m)$, it is possible to define the map $u^\vee: L^\vee \to G^\vee$ as \begin{align*} u^\vee: \underline{\Hom}_K(T, \mathbb G_m) & \to \underline{\Ext}_K([L \xrightarrow{v} A], \mathbb G_m) \\ \chi & \mapsto [L \xrightarrow{\xi} P_{A, v^\vee(x^\vee)}], \end{align*} where $x^\vee \in L^\vee$ is the element corresponding to $\chi \in \underline{\Hom}_K(T, \mathbb G_m)$ and $\xi$ is obtained from the trivialization of $P$ over $L \times L^\vee$. \\
\subsubsection{de Rham realization} \label{sec:derham}
A 1-motive is endowed with a de Rham realization defined via its universal vectorial extension (see \cite[p. 58]{DE74}). The \emph{universal vectorial extension} of a 1-motive $M = [L \xrightarrow{u} G]$ over $K$ is a two term complex of $K$-group schemes $$M^\natural = [L \xrightarrow{u^\natural} G^\natural]$$ which is an extension of $M$ by the $K$-vector group $\omega_{G^\vee}$ of invariant differentials on $G^\vee$ \begin{equation} \label{def:UVE} \xymatrix{ 0 \ar[r] & 0 \ar[r] \ar[d] & L \ar@{=}[r] \ar[d]^{u^\natural} & L \ar[r] \ar[d]^{u} & 0 \\ 0 \ar[r] & \omega_{G^\vee} \ar[r] & G^\natural \ar[r] & G \ar[r] & 0} \end{equation} and satisfies the following universal property: for all $K$-vector groups $V$, the map $$\operatorname{Hom}_{\mathcal O_K}(\omega_{G^\vee}, V) \rightarrow \operatorname{Ext}^1_K(M, V),$$
which sends a morphism $\omega_{G^\vee} \to V$ of vector groups to the extension of $M$ by $V$ induced by pushout, is an isomorphism. It is well known that the universal vectorial extension of a 1-motive always exists. The \emph{de Rham realization} of $M$ is then defined as \[\operatorname{T}_\text{\rm dR}(M) = \operatorname{Lie} G^\natural.\] This is endowed with a \emph{Hodge filtration}, defined as follows: \[ F^i \operatorname{T}_\text{\rm dR}(M) = \left\{ \begin{array}{ll} \operatorname{T}_\text{\rm dR}(M) & \mbox{if $i \leq -1$,}\\ \omega_{G^\vee} & \mbox{if $i = 0$,}\\ 0 & \mbox{if $i \geq 1$.} \end{array} \right. \] We mention some properties concerning universal vectorial extensions of subquotients of $M$.
\begin{lemma} \label{lem:UVE} \begin{enumerate}[(i)] \item The group scheme $G^\natural$ represents the fppf-sheaf $$S \mapsto \left\{
\begin{array}{c|c} (g, \nabla) & \text{$g \in G(S)$ and $\nabla$ is a $\natural$-structure on the extension} \\ & \text{$[L^\vee \to P_{g, G^\vee}]$ of $M^\vee$ by $\mathbb G_{m}$ induced by $g$} \end{array} \right\}.$$ \item If we regard the semi-abelian variety $G$ as the 1-motive $G[0] = [0 \to G]$, then its universal vectorial extension is a group scheme $G^\#$ which is an extension of $G$ by the vector group $\omega_{A^\vee}$. Moreover, $G^\#$ represents the fppf-sheaf $$S \mapsto \left\{
\begin{array}{c|c} (g, \nabla) & \text{$g \in G(S)$ and $\nabla$ is a $\natural$-structure on the extension} \\ & \text{of $[L^\vee \xrightarrow{v^\vee} A^\vee]$ by $\mathbb G_{m}$ associated to $g$} \end{array} \right\}.$$ \item If we regard the abelian variety $A$ as the 1-motive $A[0] = [0 \to A]$, then its universal vectorial extension is a group scheme $A^\#$ which is an extension of $A$ by the vector group $\omega_{A^\vee}$. Moreover, $A^\#$ represents the fppf-sheaf $$S \mapsto \left\{
\begin{array}{c|c} (a, \nabla) & \text{$a \in A(S)$ and $\nabla$ is a $\natural$-structure on} \\ & \text{the extension $P_{a, A^\vee}$ of $A^\vee$ by $\mathbb G_{m}$} \end{array} \right\}.$$ \item If we regard the lattice $L$ as the 1-motive $L[1] = [L \to 0]$, then its universal vectorial extension is the complex $[L \to \omega_{T^\vee}]$. Via the identifications $L = \underline{\Hom}_K(T^\vee, \mathbb G_m)$ and $\omega_{T^\vee} = \underline{\Hom}_{\mathcal O_K}(\operatorname{Lie} T^\vee, \mathcal O_K)$, this map is described as \begin{align*} \underline{\Hom}_K(T^\vee, \mathbb G_m) & \to \underline{\Hom}_{\mathcal O_K}(\operatorname{Lie} T^\vee, \mathcal O_K) \\ \chi & \mapsto \operatorname{Lie} \chi . \end{align*} \end{enumerate} \end{lemma}
\begin{proof} Parts (i) and (ii) follow from Proposition 3.8 and Lemma 5.2 in \cite{BE09}, respectively. Part (iii) follows from Proposition 2.6.7 and Proposition 3.2.3 (a) in \cite{MM74} (see also \cite[Thm. 0.3.1, p. 633]{CO91}). And, finally, (iv) follows from Lemma 2.2.2 in \cite{AB05}, once we notice that there is a natural isomorphism $L \otimes_{\mathbb Z} \mathbb G_a \cong \omega_{T^\vee}$ mapping $x \otimes 1 \mapsto \operatorname{Lie} \chi$. \end{proof}
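We also record a routine dimension count, which serves as a consistency check for the definitions of Section \ref{sec:derham}: if $A$ has dimension $g$, $T$ has dimension $t$, and $L$ has rank $l$, then $T^\vee$ has dimension $l$, hence $\omega_{G^\vee}$ has dimension $g + l$, and the sequence \eqref{def:UVE} gives
\[\dim_K \operatorname{T}_\text{\rm dR}(M) = \dim_K \operatorname{Lie} G^\natural = (g + t) + (g + l) = 2g + t + l,\]
with $F^0 \operatorname{T}_\text{\rm dR}(M) = \omega_{G^\vee}$ of dimension $g + l$. In particular, for $M = A[0]$ we recover $\dim_K \operatorname{T}_\text{\rm dR}(A[0]) = 2g$.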
Let $P^\natural$ be the biextension of $(M^\natural, M^{\vee \natural})$ by $\mathbb G_m$ obtained from $P$ by pullback. There is a canonical connection $\nabla$ on $P^\natural$ which endows it with a $\natural$-structure (see \cite[Prop. 3.9, p. 1644]{BE09}). Its curvature is an invariant 2-form on $G^\natural \times G^{\vee \natural}$ and therefore determines an alternating form $R$ on $\operatorname{Lie}(G^\natural \times G^{\vee \natural})$ with values in $\operatorname{Lie} \mathbb G_{m}$. Since the restrictions of $R$ to $\operatorname{Lie} G^\natural$ and to $\operatorname{Lie} G^{\vee \natural}$ vanish, $R$ induces a pairing \[\Phi: \operatorname{Lie} G^\natural \times \operatorname{Lie} G^{\vee \natural} \to \operatorname{Lie} \mathbb G_{m}. \] \emph{Deligne's pairing} is then defined as \[(\, \cdot \,, \, \cdot \,)^{Del}_M :=- \Phi: \operatorname{T}_\text{\rm dR}(M) \times \operatorname{T}_\text{\rm dR}(M^\vee) \to \operatorname{Lie} \mathbb G_{m}. \]
\subsubsection{Albanese and Picard 1-motives} \label{sec:alb_pic} Let $C_0$ be a curve over a field $K$ of characteristic 0, \textit{i.e.} a purely 1-dimensional variety \footnote{Originally, Deligne considered only algebraically closed fields, but these constructions can also be done over an arbitrary field of characteristic 0 (see \cite[p. 87--90]{BS01}).}. Consider the following commutative diagram \[\begin{tikzcd} C' \ar[d, twoheadrightarrow, "\pi"'] \ar[r, hook, "j'"] \ar[dd, bend right=50, "q"'] & \bar C' \ar[d, twoheadrightarrow, "\bar \pi"] \\ C \ar[r, hook, "j"] \ar[d, twoheadrightarrow, "\pi_0"'] & \bar C \\ C_0 & \end{tikzcd}\] where $C'$ is the normalization of $C_0$, $\bar C'$ is a smooth compactification of $C'$, and $\bar C$ (resp. $C$) is the curve obtained from $\bar C'$ (resp. $C'$) by contracting each of the finite sets $q^{-1}(x)$, for $x \in C_0$. Notice that $\bar C$ is projective and $C$ is semi-normal. Let $S$ be the set of singular points of $C$, $S' := \pi^{-1}(S)$, and $F := \bar C' - C' = \bar C - C$. \\
The \emph{cohomological Albanese 1-motive of $C_0$} is defined as \[\operatorname{Alb}^+(C_0) = [u_{\mathrm{Alb}}: \operatorname{Div}^0_F(\bar C') \to \operatorname{Pic}^0(\bar C)], \] where: \begin{enumerate}[(i)] \item $\operatorname{Pic}^0(\bar C)$ denotes the group of isomorphism classes of invertible sheaves on $\bar C$ which are algebraically equivalent to 0. This is a semi-abelian variety: the map $\bar \pi^*: \operatorname{Pic}^0(\bar C) \to \operatorname{Pic}^0(\bar C')$ is surjective and its kernel is a torus. \item $\operatorname{Div}^0_F(\bar C')$ denotes the group of (Cartier) divisors $D$ on $\bar C'$ such that $\operatorname{supp} D \subset F$ and $\mathcal O(D) \in \operatorname{Pic}^0(\bar C')$. \item $u_{\mathrm{Alb}}$ is the map $D \mapsto \mathcal O(D)$ associating a divisor $D$ to the corresponding invertible sheaf $\mathcal O(D)$. \end{enumerate}
The \emph{homological Picard 1-motive of $C_0$} is defined as \[\operatorname{Pic}^-(C_0) = [u_{\mathrm{Pic}}: \operatorname{Div}^0_{S'/S}(\bar C', F) \to \operatorname{Pic}^0(\bar C', F)], \] where: \begin{enumerate}[(i)]
\item $\operatorname{Pic}^0(\bar C', F)$ denotes the group of isomorphism classes of pairs $(\mathcal L, \phi)$, where $\mathcal L$ is an invertible sheaf on $\bar C'$ algebraically equivalent to 0 and $\phi: \mathcal L|_{F} \to \mathcal O_F$ is a trivialization over $F$. This is a semi-abelian variety: the natural map $\operatorname{Pic}^0(\bar C', F) \to \operatorname{Pic}^0(\bar C')$ is surjective and its kernel is a torus. \item $\operatorname{Div}^0_{S'/S}(\bar C', F)$ denotes the group of (Cartier) divisors $D$ on $\bar C'$ which belong to the kernel of $\bar \pi_*: \operatorname{Div}_{S'}(\bar C') \to \operatorname{Div}_S(\bar C)$ and satisfy that $\mathcal O(D) \in \operatorname{Pic}^0(\bar C', F)$. \item $u_{\mathrm{Pic}}$ is the map $D \mapsto \mathcal O(D)$ associating a divisor $D$ to the corresponding invertible sheaf $\mathcal O(D)$. \end{enumerate}
An important fact is that the dual of $\operatorname{Pic}^-(C_0)$ is $\operatorname{Alb}^+(C_0)$ and vice versa.
\section{Linearizations of biextensions} \label{sec:linearization}
In this section, we consider commutative group schemes over a field $K$. We give the following definition.
\begin{definition} \label{def:linearization} Let $C = [A \xrightarrow{u} B], C' = [A' \xrightarrow{u'} B']$ be complexes of commutative group schemes over $K$. Let \begin{align*} \sigma: A \times B & \to B \\ (a, b) & \mapsto u(a) + b \end{align*} be the $A$-action on $B$ induced by $u$, and define $\sigma': A' \times B' \to B'$ analogously. Let $P$ be a biextension of $(B, B')$ by $\mathbb G_m$. We define an \emph{$A \times A'$-linearization} of $P$ as an $A \times A'$-action on $P$ \[\Sigma: (A \times A') \times P \to P\] satisfying the following conditions: \begin{enumerate}[(i)]
\item \emph{$\mathbb G_{m}$-equivariance}: For $a \in A$, $a' \in A'$, $c \in \mathbb G_{m}$ and $x \in P$,
\[\Sigma(a, a', c + x) = c + \Sigma(a, a', x).\]
\item \emph{Compatibility with $\sigma$ and $\sigma'$}: For $a \in A$ and $a' \in A'$, if $x \in P$ lies above $(b, b') \in B \times B'$ then $\Sigma(a, a', x)$ lies above $(\sigma(a, b), \sigma'(a', b'))$.
\item \emph{Compatibility with the partial group structures of $P$}: For $a \in A$, $a'_1, a'_2 \in A'$ and $x_1, x_2 \in P$ lying above $b \in B$,
\[\Sigma(a, a_1' + a_2', x_1 +_1 x_2) = \Sigma(a, a'_1, x_1) +_1 \Sigma(a, a'_2, x_2),\]
and for $a_1, a_2 \in A$, $a' \in A'$ and $x_1, x_2 \in P$ lying above $b' \in B'$,
\[\Sigma(a_1 + a_2, a', x_1 +_2 x_2) = \Sigma(a_1, a', x_1) +_2 \Sigma(a_2, a', x_2).\] \end{enumerate} \end{definition}
\begin{remark} An action $\Sigma: (A \times A') \times P \to P$ satisfying conditions (i) and (ii) is an $A \times A'$-linearization of the line bundle $P$ in the sense of Definition 1.6 in \cite[p. 30]{MF94}; this can be summed up as saying that $\Sigma$ is a ``bundle action'' lifting the actions $\sigma$ and $\sigma'$. Notice that $\sigma$ and $\sigma'$ are homomorphisms, and so condition (iii) may then be interpreted as a lifting to $P$ of the compatibility of $\sigma$ and $\sigma'$ with the group structures of $B$ and $B'$. In the rest of the article, we will only use the term \textit{linearization} in the sense of Definition \ref{def:linearization} above.
\end{remark}
\begin{remark} By considering constant group schemes, we are also able to talk about linearizations of biextensions of abelian groups. \end{remark}
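As an elementary illustration of Definition \ref{def:linearization}, in the setting of abelian groups (the verification is a direct check from the definition): let $P = B \times B' \times \mathbb G_m$ be the trivial biextension, with $+_1$ (resp. $+_2$) given by addition in the second and third (resp. first and third) coordinates. Then
\[\Sigma(a, a', (b, b', c)) := (u(a) + b,\, u'(a') + b',\, c)\]
is an $A \times A'$-linearization of $P$: conditions (i) and (ii) hold by construction, and condition (iii) reduces to the additivity of $u$ and $u'$. The associated trivializations, in the sense of the correspondence described below, are $\tau(a, b') = (u(a), b', 0)$ and $\tau'(b, a') = (b, u'(a'), 0)$.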
Let $C = [A \xrightarrow{u} B], C' = [A' \xrightarrow{u'} B']$ be as in Definition \ref{def:linearization} and consider a biextension $P$ of $(B, B')$ by $\mathbb G_{m}$. Given a biextension structure of $(C, C')$ by $\mathbb G_{m}$ on $P$ with trivializations $$\tau: A \times B' \to P, \quad \tau': B \times A' \to P$$ we can define an $A \times A'$-linearization of $P$ as \begin{align*} \Sigma: (A \times A') \times P & \to P \\ (a,a',x) & \mapsto [\tau'(u(a), a') +_2 \tau'(b, a')] +_1 [\tau(a, b') +_2 x], \end{align*} where $x \in P$ lies above $(b, b') \in B \times B'$. This construction is due to \cite[Thm. 6.8, p. 688]{BL91} (see also \cite[p. 306]{WE98}). Conversely, given an $A \times A'$-linearization \[\Sigma: (A \times A') \times P \to P\] of $P$, we can define a biextension structure of $(C, C')$ by $\mathbb G_{m}$ on $P$ as the one determined by the trivializations \begin{align*}
\tau: A \times B' & \to P \\
(a, b') & \mapsto \Sigma(a, 0, 0_{b'}) \end{align*} \begin{align*}
\tau': B \times A' & \to P \\
(b, a') & \mapsto \Sigma(0, a', 0_b), \end{align*} where $0_b, 0_{b'}$ are the zero elements in the groups $(P_{b, B'}, +_1), (P_{B, b'}, +_2)$, respectively. These constructions are inverses of each other. \\
\begin{proposition} \label{pro:quotient_biext} Let $C$, $C'$ and $P$ be as in Definition \ref{def:linearization} and suppose that $u(K)$ and $u'(K)$ are injective. Then an $A \times A'$-linearization $\Sigma$ of $P$ induces a quotient biextension $Q(K)$ of $(B(K)/A(K), B'(K)/A'(K))$ by $K^*$. \end{proposition}
\begin{proof} Notice that $P(K)$ is a biextension of $(B(K), B'(K))$ by $K^*$ and that $\Sigma(K): (A(K)\times A'(K)) \times P(K) \to P(K)$ is an $A(K) \times A'(K)$-linearization of $P(K)$. We define $Q(K)$ as the set consisting of the orbits
\[[x] := \{\Sigma(a, a', x) | a \in A(K), a' \in A'(K)\}\] of elements $x \in P(K)$ under $\Sigma$. Then $Q(K)$ maps surjectively onto $B(K)/A(K) \times B'(K)/A'(K)$ and is endowed with a $K^*$-action which is free and transitive on fibers. To see that it is a biextension it is then enough to prove that $+_1$ and $+_2$ induce partial group structures on $Q(K)$. For this, take elements $x_1, x_2 \in P(K)$ lying above $(b_1, b_1'), (b_2, b_2') \in B(K) \times B'(K)$, respectively, such that the orbits of $b_1$ and $b_2$ under $\sigma$ are equal. This is equivalent to having $$b_1 = \sigma(a, b_2),$$ for some (unique) $a \in A(K)$. Then $x_1$ and $\Sigma(a, 0, x_2)$ project to $b_1 \in B(K)$ and we are able to define \[[x_1] +_1 [x_2] := [x_1 +_1 \Sigma(a, 0, x_2)].\] This is well defined and commutative. We define the partial group structure $+_2$ analogously. \\ \end{proof}
Consider a pair of 1-motives $M = [L \xrightarrow{u} G]$, $M' = [L' \xrightarrow{u'} G']$ and a biextension $P$ of $(M, M')$ by $\mathbb G_m$. For our purposes, we give the following \begin{definition} The group of \emph{$K$-points} of $M$ over $K$ is defined as $$M(K) := \operatorname{Ext}^1_K(M^\vee, \mathbb G_{m}).$$ \end{definition} \noindent This is inspired by \cite[p. 326]{DE79}. Consider the short exact sequence of complexes \[\xymatrix{ 0 \ar[r] & 0 \ar[r] \ar[d] & L^\vee \ar@{=}[r] \ar[d]^{u^\vee} & L^\vee \ar[r] \ar[d]^{v^\vee} & 0 \\ 0 \ar[r] & T^\vee \ar[r] & G^\vee \ar[r] & A^\vee \ar[r] & 0 }\] and the long exact sequence of abelian groups that it induces \[\ldots \to L(K) \xrightarrow{u(K)} G(K) \to M(K) \to \operatorname{Ext}_K^1(T^\vee, \mathbb G_{m}) \to \ldots.\] It follows that, when $T^\vee$ is split (or, equivalently, when $L$ is constant), the group of $K$-points of $M$ is \begin{equation} M(K) = G(K)/\operatorname{Im}(u(K)). \end{equation}
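For instance, when the formula $M(K) = G(K)/\operatorname{Im}(u(K))$ applies, the 1-motive $A[0]$ of an abelian variety has $A[0](K) = A(K)$, while the 1-motive $[\mathbb Z \xrightarrow{1 \mapsto q} \mathbb G_m]$, for $q \in K^*$, has group of $K$-points
\[K^*/q^{\mathbb Z},\]
which for $0 < |q| < 1$ is the group of $K$-points of the Tate curve with parameter $q$.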
If $L$, $L'$ are constant and $u(K)$, $u'(K)$ are injective then $P(K)$ induces a biextension of $(M(K), M'(K))$ by $K^*$, by Proposition \ref{pro:quotient_biext}. When $M' = M^\vee$ and $P$ is the Poincar\'e biextension, we will denote by $Q_M(K)$ the induced biextension of $(M(K), M^\vee(K))$ by $K^*$. \\
We will now introduce the concept of \emph{compatibility} between a linearization and a $\rho$-splitting of a biextension. First, we recall the following definition from \cite[p. 199]{MT83}. \begin{definition} Let $B$, $B'$, $H$, $Y$ be abelian groups and $P$ a biextension of $(B, B')$ by $H$. Let $\rho: H \to Y$ be a homomorphism. A \emph{$\rho$-splitting} of $P$ is a map $\psi: P \to Y$ such that \begin{enumerate}[(i)] \item $\psi(h+x)=\rho(h) + \psi(x)$, for all $h \in H$ and $x \in P$, and \item for each $b \in B$ (resp. $b' \in B'$) the restriction of $\psi$ to $P_{b, B'}$ (resp. $P_{B, b'}$) is a group homomorphism. \end{enumerate}
\end{definition}
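A basic example: if $P = B \times B' \times H$ is the trivial biextension, with both partial group laws given by addition in the last coordinate, then $\psi(b, b', h) := \rho(h)$ is a $\rho$-splitting. Condition (i) holds since $\psi(h' + (b, b', h)) = \rho(h' + h) = \rho(h') + \psi(b, b', h)$, and each restriction in (ii) is a homomorphism because $\rho$ is.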
\begin{definition} Let $C = [A \xrightarrow{u} B], C' = [A' \xrightarrow{u'} B']$ be complexes of commutative group schemes over $K$ and $P$ a biextension of $(C, C')$ by $\mathbb G_{m}$. Let $Y$ be an abelian group and $\rho: K^* \to Y$ a homomorphism. We will say that a $\rho$-splitting $\psi: P(K) \to Y$ of $P(K)$ is \emph{compatible} with the induced $A \times A'$-linearization $\Sigma$ of $P$ if any of the following equivalent conditions is satisfied: \begin{enumerate}[(i)]
\item $\psi(\Sigma(a, a', x)) = \psi(x)$,
for all $a \in A(K)$, $a' \in A'(K)$ and $x \in P(K)$,
\item $\psi \circ \tau$ and $\psi \circ \tau'$ vanish on $A(K) \times B'(K)$ and $B(K) \times A'(K)$, respectively. \end{enumerate} \end{definition}
\begin{remark} Assuming $u(K)$ and $u'(K)$ injective, $\psi$ is compatible with an $A \times A'$-linearization if and only if it induces a $\rho$-splitting on the quotient biextension $Q(K)$, which exists by Proposition \ref{pro:quotient_biext}. \\ \end{remark}
\section{$\rho$-splittings in the ramified case} \label{sec:ramified}
Let $K$ be a finite extension of $\mathbb Q_p$ and consider a branch $\lambda: K^* \to K$ of the $p$-adic logarithm. For a commutative algebraic group $H$ over $K$, we will denote by $\lambda_H: H(K) \to \operatorname{Lie} H(K)$ the uniquely determined homomorphism of Lie groups extending $\lambda$ as constructed in \cite{ZA96}. Let $M = [L \xrightarrow{u} G]$ be a 1-motive over $K$ with $L$ and $T$ split, and denote by $M^\vee = [L^\vee \xrightarrow{u^\vee} G^\vee]$ its dual; notice that $L^\vee$ and $T^\vee$ are also split. Let $M^\natural = [L \xrightarrow{u^\natural} G^\natural]$ and $M^{\vee \natural} = [L^\vee \xrightarrow{u^{\vee \natural}} G^{\vee \natural}]$ be their corresponding universal vectorial extensions. The group schemes described in Lemma \ref{lem:UVE} fit in the following commutative diagrams with exact rows and columns: \\ \noindent\begin{minipage}{0.55\linewidth} \begin{equation} \label{notation_UVE1} \begin{tikzcd} & 0 \ar[d] & 0 \ar[d] & & \\ 0 \ar[r] & \omega_{A^\vee} \ar[d] \ar[r] & G^\# \ar[d, "\gamma"'] \ar[r, "\theta'"] & G \ar[d, equal] \ar[r] & 0 \\ 0 \ar[r] & \omega_{G^\vee} \ar[d, "\varepsilon"'] \ar[r, "\zeta"] & G^\natural \arrow[ul, phantom, "\ulcorner", very near start] \ar[d, "\sigma"'] \ar[r, "\theta"] & G \ar[r] & 0 \\ & \omega_{T^\vee} \ar[r, equal] \ar[d] & \omega_{T^\vee} \ar[d] & & \\ & 0 & 0 & & \end{tikzcd} \end{equation} \end{minipage} \begin{minipage}{0.5\linewidth} \begin{equation} \label{notation_UVE2} \begin{tikzcd} & & 0 \ar[d] & 0 \ar[d] & \\ & & T \ar[d, "\iota^\#"'] \ar[r, equal] & T \ar[d, "\iota"] & \\ 0 \ar[r] & \omega_{A^\vee} \ar[r] \ar[d, equal] & G^\# \ar[r, "\theta'"] \ar[d, "\pi^\#"'] \arrow[dr, phantom, "\lrcorner", very near start] & G \ar[r] \ar[d, "\pi"] & 0 \\ 0 \ar[r] & \omega_{A^\vee} \ar[r] & A^\# \ar[r, "\theta_A"] \ar[d] & A \ar[r] \ar[d] & 0 \\ & & 0 & 0 & \rlap{\ .} \end{tikzcd} \end{equation} \end{minipage}
We will denote the morphisms in the diagrams for $G^{\vee}$ analogously, so that $\varepsilon$ is defined by pullback along $\iota^\vee:T^\vee \to G^\vee$ and $\operatorname{Lie} \iota^\vee$ is dual to $\varepsilon$. \\
For the rest of this section, we fix splittings of the following exact sequences of vector group schemes over $K$: \begin{equation} \label{es_vg1} \begin{tikzcd} 0 \ar[r] & \omega_{A^\vee} \ar[r] & \omega_{G^\vee} \ar[r, "\varepsilon"'] & \omega_{T^\vee} \ar[r] \ar[l, dashed, bend right, "\bar \varepsilon"'] & 0 \end{tikzcd} \end{equation} \[\begin{tikzcd} 0 \ar[r] & \omega_A \ar[r] & \omega_G \ar[r, "\varepsilon^\vee"'] & \omega_T \ar[r] \ar[l, dashed, bend right, "\bar \varepsilon^\vee"'] & 0 \rlap{\ .} \end{tikzcd}\]
These induce the isomorphisms: \begin{enumerate}[(i)] \item $\omega_G \cong \omega_A \times \omega_T$ of vector group schemes, and similarly for $\omega_{G^\vee}$.
\item $G^\natural \cong \omega_{T^\vee} \times G^\#$ of commutative group schemes induced by the section $\bar \sigma := \zeta \circ \bar \varepsilon$ of $\sigma$, and similarly for $G^{\vee \natural}$. We will denote by $\bar \gamma$ the induced retraction of $\gamma$: \begin{equation} \label{es_Gnat} \begin{tikzcd} 0 \ar[r] & G^\# \ar[r, "\gamma"'] & G^\natural \ar[r, "\sigma"'] \ar[l, dashed, bend right, "\bar \gamma"'] & \omega_{T^\vee} \ar[r] \ar[l, dashed, bend right, "\bar \sigma"'] & 0 \rlap{\ .} \end{tikzcd} \end{equation} Notice that $\bar \gamma$ satisfies $\theta' \circ \bar \gamma = \theta$, by the universal property of the pushout. We fix the analogous notation for $G^{\vee \natural}$.
\item $\operatorname{Lie} G \cong \operatorname{Lie} A \times \operatorname{Lie} T$ of Lie algebras obtained from (i) by duality. We denote $j := \operatorname{Lie} \iota$, $q := \operatorname{Lie} \pi$ and let $\bar j$ be the retraction of $j$ and $\bar q$ the section of $q$ induced by this isomorphism: \begin{equation} \label{es_la} \begin{tikzcd} 0 \ar[r] & \operatorname{Lie} T \ar[r, "j"'] & \operatorname{Lie} G \ar[r, "q"'] \ar[l, dashed, bend right, "\bar j"'] & \operatorname{Lie} A \ar[r] \ar[l, dashed, bend right, "\bar q"'] & 0 \rlap{\ .} \end{tikzcd} \end{equation} We also fix the analogous notation for $G^\vee$. \\
\end{enumerate}
We will continue to denote Deligne's pairing associated to $M$ and its dual as \[(\, \cdot \,, \, \cdot \,)^{Del}_M: \operatorname{T}_\text{\rm dR}(M) \times \operatorname{T}_\text{\rm dR}(M^\vee) = \operatorname{Lie} G^\natural \times \operatorname{Lie} G^{\vee \natural} \to \mathbb G_a .\] Deligne's pairing associated to $A$ and its dual will be denoted as \[(\, \cdot \,, \, \cdot \,)^{Del}_A: \operatorname{T}_\text{\rm dR}(A) \times \operatorname{T}_\text{\rm dR}(A^\vee) = \operatorname{Lie} A^\# \times \operatorname{Lie} A^{\vee \#} \to \mathbb G_a. \] We want to recognize in $(\, \cdot \,, \, \cdot \,)^{Del}_M$ the contribution of the abelian varieties and the tori. With this in mind, we also define the following pairing. \begin{definition} Define $T^\natural:= \omega_{T^\vee} \times T$ and $T^{\vee \natural}:= \omega_{T} \times T^\vee$. Let $\alpha_{T^\vee}$ be the invariant differential of $T^\vee$ over $\omega_{T^\vee}$ which corresponds to the identity map on $\omega_{T^\vee}$, and define $\alpha_T$ analogously. Denote by $\Phi_T$ the pairing on $\operatorname{Lie} T^\natural \times \operatorname{Lie} T^{\vee \natural}$ determined by the curvature of the invariant differential $\alpha_{T^\vee} + \alpha_T$. We define \[(\, \cdot \,, \, \cdot \,)_T:= - \Phi_T: \operatorname{Lie} T^\natural \times \operatorname{Lie} T^{\vee \natural} \to \mathbb G_a. \] \end{definition}
\noindent The following lemma gives an explicit description of $(\, \cdot \,, \, \cdot \,)_T$.
\begin{lemma} \label{lem:Del_pairing_T} Let $L \cong \mathbb Z^r$ and $T \cong \mathbb G_m^d$, so that $L^\vee \cong \mathbb Z^d$ and $T^\vee \cong \mathbb G_m^r$. Then the pairing \[(\, \cdot \,, \, \cdot \,)_T: \operatorname{Lie} T^\natural \times \operatorname{Lie} T^{\vee \natural} \cong (\mathbb G_a^r \times \mathbb G_a^d) \times (\mathbb G_a^d \times \mathbb G_a^r) \to \mathbb G_a \] is given by the matrix \[\Gamma = \phantom{(17') \left\{\vphantom{\dfrac{n}{\eta \cdot \delta}}\right.\hspace{\dimexpr\arraycolsep-\nulldelimiterspace}} \bordermatrix{\hspace{-\arraycolsep} & \overbrace{\hphantom{\hspace{1.5cm}}}^{d} & \overbrace{\hphantom{\hspace{2.3cm}}}^{r} \cr \hspace{-\arraycolsep}\mathllap{r \left\{\vphantom{\begin{matrix} -1 & & -1 \\ & \ddots& \\ -1 & & -1 \end{matrix}}\right.\kern-\nulldelimiterspace} & \begin{matrix} 0 & & 0 \\ & \ddots& \\ 0 & & 0 \end{matrix} & \begin{matrix} -1 & & \quad 0 \\ & \ddots& \\ \quad 0 & & -1 \end{matrix} \cr \hspace{-\arraycolsep}\mathllap{d \left\{\vphantom{\begin{matrix} -1 & & -1 \\ & \ddots& \\ -1 & & -1 \end{matrix}}\right.\kern-\nulldelimiterspace} & \begin{matrix} 1 & & 0 \\ & \ddots& \\ 0 & & 1 \end{matrix} & \begin{matrix} \quad 0 & & \quad 0 \\ & \ddots& \\ \quad 0 & & \quad 0 \end{matrix}}.\] \end{lemma}
\begin{proof} In this case, the global differential $\alpha_{T^\vee} + \alpha_T$ on $T^\natural \times T^{\vee \natural} = (\mathbb G_a^r \times \mathbb G_m^d) \times (\mathbb G_a^d \times \mathbb G_m^r)$ has the expression \[\alpha_{T^\vee} + \alpha_T = \sum_{i=1}^r x_i \frac{dt_i}{t_i} + \sum_{j=1}^d y_j \frac{dz_j}{z_j}, \] where $x_i$ (resp. $y_j$) are the parameters of $\mathbb G_a^r$ (resp. $\mathbb G_a^d$) and $t_i$ (resp. $z_j$) are the parameters of $\mathbb G_m^r$ (resp. $\mathbb G_m^d$) (see \cite[Ex. 4.4, p. 1647]{BE09}), and its curvature is \begin{align*} d(\alpha_{T^\vee} + \alpha_T) & = \sum_{i=1}^r d x_i \wedge \frac{dt_i}{t_i} + \sum_{j=1}^d d y_j \wedge \frac{dz_j}{z_j} \\ & = \sum_{i=1}^r d x_i \wedge \frac{dt_i}{t_i} - \sum_{j=1}^d \frac{dz_j}{z_j} \wedge d y_j \, . \end{align*} From this, it is straightforward that $(\, \cdot \,, \, \cdot \,)_T$ is given by the matrix $\Gamma$.
\end{proof}
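Concretely, writing an element of $\operatorname{Lie} T^\natural \times \operatorname{Lie} T^{\vee \natural}$ as $((x, \zeta), (y, \tau))$ with $x \in \mathbb G_a^r$, $\zeta \in \mathbb G_a^d$, $y \in \mathbb G_a^d$ and $\tau \in \mathbb G_a^r$, and reading $\Gamma$ as the Gram matrix of the pairing in these coordinates, the lemma says that \[((x, \zeta), (y, \tau))_T = \zeta \cdot y - x \cdot \tau;\] in particular, for $r = d = 1$ the pairing is simply $\zeta y - x \tau$.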
\begin{definition} \label{def:jnat+qnat} Define \begin{align*} \iota^\natural & := Id \times \iota^\#: T^\natural = \omega_{T^\vee} \times T \to \omega_{T^\vee} \times G^\# \cong G^\natural, \\ \pi^\natural & := \pi^\# \circ \bar \gamma: G^\natural \to A^\#, \end{align*} and denote $j^\natural := \operatorname{Lie} \iota^\natural$ and $q^\natural := \operatorname{Lie} \pi^\natural$. Define $\iota^{\vee \natural}, \pi^{\vee \natural}, j^{\vee \natural}, q^{\vee \natural}$ analogously. \end{definition}
Notice that the following diagram commutes and the upper and lower rows are exact, which makes the middle row exact as well: \[\begin{tikzcd} & 0 \ar[d] & 0 \ar[d] & & \\ 0 \ar[r] & T \ar[r, "{\iota^\#}"] \ar[d] & G^\# \ar[r, "{\pi^\#}"] \ar[d, "\gamma"] & A^\# \ar[r] \ar[d, equal] & 0 \\ 0 \ar[r] & T^\natural \ar[r, "{\iota^\natural}"] \ar[d] & G^\natural \ar[r, "{\pi^\natural}"] \ar[d, "\sigma"] & A^\# \ar[r] & 0 \\ & \omega_{T^\vee} \ar[r, equal] \ar[d] & \omega_{T^\vee} \ar[d] & & \\ & 0 & 0 & & \rlap{\, .} \end{tikzcd}\] Therefore, $j^\natural$ and $q^\natural$ fit in a short exact sequence of Lie algebras \begin{equation} \label{es_lanat} \begin{tikzcd} 0 \ar[r] & \operatorname{Lie} T^\natural \ar[r, "j^\natural"'] & \operatorname{Lie} G^\natural \ar[r, "q^\natural"'] \ar[l, dashed, bend right, "\bar j^\natural"'] & \operatorname{Lie} A^\# \ar[r] & 0 \end{tikzcd} \end{equation} which has a splitting $\bar j^{\natural}$ induced by $\bar j$ (see diagram \eqref{es_la}). More precisely, $\bar j^\natural$ is given by $$\bar j^\natural := Id \times (\bar j \circ \operatorname{Lie} \theta'): \operatorname{Lie} G^\natural \cong \omega_{T^\vee} \times \operatorname{Lie} G^\# \to \omega_{T^\vee} \times \operatorname{Lie} T = \operatorname{Lie} T^\natural,$$ and similarly for $\bar j^{\vee \natural}$. Indeed, $\bar j^\natural$ is a splitting of \eqref{es_lanat}: $$\bar j^\natural \circ j^\natural = (\bar j \circ \operatorname{Lie} \theta') \circ j^\# = \bar j \circ j = Id.$$ Consider the morphisms \[\operatorname{Lie} T^\natural \times \operatorname{Lie} T^{\vee \natural} \xleftarrow{\bar j^\natural \times \bar j^{\vee \natural}} \operatorname{Lie} G^\natural \times \operatorname{Lie} G^{\vee \natural} \xrightarrow{q^\natural \times q^{\vee \natural}} \operatorname{Lie} A^\# \times \operatorname{Lie} A^{\vee \#} .\] We have the following
\begin{lemma} \label{lem:Del_pairing_M} For all $(h, h^\vee) \in \operatorname{Lie} G^\natural \times \operatorname{Lie} G^{\vee \natural}$, the following equality holds \[(h, h^\vee)^{Del}_M = (\bar j^\natural(h), \bar j^{\vee \natural}(h^\vee))_T + (q^\natural(h), q^{\vee \natural}(h^\vee))^{Del}_A .\] \end{lemma}
\begin{proof} Recall that $P^\natural$ is defined as the pullback of the Poincar\'e biextension $P$ along $\theta \times \theta^\vee: G^\natural \times G^{\vee \natural} \to G \times G^\vee$, and that $\nabla$ is determined by the sum of two differentials associated to the identities of $G^\natural$ and $G^{\vee \natural}$ (see \cite[Prop. 3.9, p. 1644]{BE09}). \\
We will first describe the decomposition of the structure of $\natural$-extension over $G^\natural$ of $P^\natural$ induced by $Id \in G^\natural(G^\natural)$. The split exact sequence \[\begin{tikzcd}[row sep=scriptsize] 0 \ar[r] & G^\# \ar[r, "\gamma"'] & G^\natural \ar[r, "\sigma"'] \ar[l, bend right, dashed, "\bar \gamma"'] & \omega_{T^\vee} \ar[r] \ar[l, bend right, dashed, "\bar \sigma"'] & 0 \end{tikzcd}\] induces an isomorphism \begin{align*} G^\natural(G^\natural) & \cong \omega_{T^\vee}(G^\natural) \oplus G^\#(G^\natural) \\ Id & \mapsto (\sigma, \bar \gamma) \nonumber . \end{align*} By Definition \ref{def:jnat+qnat} we have $\pi^\natural = \pi^\# \circ \bar \gamma$, and so $\bar \gamma \in G^\#(G^\natural)$ and $Id \in A^\#(A^\#)$ map to the same element $\pi^\natural \in A^\#(G^\natural)$ in the diagram below: \[\begin{tikzcd}[row sep=tiny, column sep=tiny] G^\#(G^\natural) \ar[r, "\pi^\# \circ \_"] & A^\#(G^\natural) & A^\#(A^\#) \ar[l, "\_ \circ \pi^\natural"'] \\ \bar \gamma \ar[d, "="{sloped}, phantom] \ar[r, mapsto] & \pi^\# \circ \bar \gamma = \pi^\natural \ar[d, "="{sloped}, phantom] & Id \ar[l, mapsto] \ar[d, "="{sloped}, phantom] \\ ([L^\vee_{G^\natural} \to (\pi^\natural \times Id)^*P_{A^\# \times A^\vee}], (\pi^\natural \times Id)^*\nabla_{A, 2}) & ((\pi^\natural \times Id)^*P_{A^\# \times A^\vee}, (\pi^\natural \times Id)^*\nabla_{A, 2}) & (P_{A^\# \times A^\vee}, \nabla_{A, 2}) \rlap{\ .} \end{tikzcd}\] Hence, if $(P_{A^\# \times A^\vee}, \nabla_{A, 2})$ is the $\natural$-extension of $A^\vee_{A^\#}$ by $\mathbb G_{m, A^\#}$ corresponding to $Id \in A^\#(A^\#)$, by Lemma \ref{lem:UVE} (iii), then $\bar \gamma$ corresponds to $([L^\vee_{G^\natural} \to (\pi^\natural \times Id)^*P_{A^\# \times A^\vee}], (\pi^\natural \times Id)^*\nabla_{A, 2})$. \\
On the other hand, the contribution of $\sigma \in \omega_{T^\vee}(G^\natural)$ is described by the trivial extension of $G^\vee_{G^\natural}$ by $\mathbb G_{m, G^\natural}$ endowed with the connection induced by the invariant differential $\bar \varepsilon \circ \sigma \in \omega_{G^\vee}(G^\natural)$ (see diagram \eqref{es_vg1} for notation). Notice that the invariant differential of $T^\vee$ over $G^\natural$ corresponding to $\sigma \in \omega_{T^\vee}(G^\natural)$ is just the pullback of $\alpha_{T^\vee}$ along $\sigma$. Now, if we consider invariant differentials as morphisms of vector groups, then $\bar \varepsilon \circ \sigma \in \omega_{G^\vee}(G^\natural)$ will correspond to $(\sigma^*\alpha_{T^\vee}) \circ \bar j^\vee$, since we had defined $\bar j^\vee$ as the morphism induced by $\bar \varepsilon$ by duality (see diagram \eqref{es_la} for notation):
\[\begin{tikzcd}[row sep=tiny] \omega_{T^\vee}(\omega_{T^\vee}) \ar[r, "\_ \circ \sigma"] \ar[d, "="{sloped}, phantom] & \omega_{T^\vee}(G^\natural) \ar[r, "\bar \varepsilon \circ \_"] \ar[d, "="{sloped}, phantom] & \omega_{G^\vee}(G^\natural) \ar[d, "="{sloped}, phantom] \\ \underline{\Hom}_{\mathcal O_{\omega_{T^\vee}}}(\operatorname{Lie} T^\vee_{\omega_{T^\vee}}, \mathbb G_{a, \omega_{T^\vee}}) \ar[r, "\sigma^*"] & \underline{\Hom}_{\mathcal O_{G^\natural}}(\operatorname{Lie} T^\vee_{G^\natural}, \mathbb G_{a, G^\natural}) \ar[r, "\_ \circ \bar j^\vee"] & \underline{\Hom}_{\mathcal O_{G^\natural}}(\operatorname{Lie} G^\vee_{G^\natural}, \mathbb G_{a, G^\natural}) \\ \alpha_{T^\vee} \ar[r, mapsto] & \sigma^*\alpha_{T^\vee} \ar[r, mapsto] & (\sigma^*\alpha_{T^\vee}) \circ \bar j^\vee \rlap{\ .} \end{tikzcd}\]
Doing the analogous calculations for $G^{\vee \natural}$, we conclude that
$$(P^\natural, \nabla) = (0, (\sigma^*\alpha_{T^\vee}) \circ \bar j^\vee + (\sigma^{\vee *}\alpha_{T}) \circ \bar j) + (P^\natural, (\pi^\natural \times \pi^{\vee \natural})^*\nabla_{A}),$$ which gives us the desired result. \end{proof}
\begin{definition} Let $\eta: G(K) \to G^\natural(K)$ and $\eta^\vee: G^\vee(K) \to G^{\vee \natural}(K)$ be a pair of splittings of the exact sequences of Lie groups \begin{gather} 0 \to \omega_{G^\vee}(K) \xrightarrow{\zeta} G^\natural(K) \xrightarrow{\theta} G(K) \to 0, \label{UVE1} \\ 0 \to \omega_G(K) \xrightarrow{\zeta^\vee} G^{\vee \natural}(K) \xrightarrow{\theta^\vee} G^\vee(K) \to 0. \label{UVE2} \end{gather} We say that $(\eta, \eta^\vee)$, or also that $(\operatorname{Lie} \eta, \operatorname{Lie} \eta^\vee)$, are \emph{dual} with respect to Deligne's pairing $(\, \cdot \,, \, \cdot \,)^{Del}_M$ if \[(\, \cdot \,, \, \cdot \,)^{Del}_M \circ (\operatorname{Lie} \eta, \operatorname{Lie} \eta^\vee) = 0.\] We define \emph{dual} splittings with respect to $(\, \cdot \,, \, \cdot \,)^{Del}_A$ and $(\, \cdot \,, \, \cdot \,)_T$ analogously. \end{definition}
For the proof of Lemma \ref{lem:etas} we will need the following result, which is a slight generalization of Lemma 3.1.1 in \cite[p. 641]{CO91}. \begin{lemma} \label{lem:spl} Let \[0 \to V \to X \to Y \to 0\] be an exact sequence of algebraic $K$-groups with $V$ a vector group. There is a bijection between splittings of the exact sequence \begin{equation} \label{lem:spl-es1} 0 \to V(K) \to X(K) \to Y(K) \to 0 \end{equation} and splittings of the exact sequence of Lie algebras \begin{equation} \label{lem:spl-es2} 0 \to \operatorname{Lie} V(K) \to \operatorname{Lie} X(K) \to \operatorname{Lie} Y(K) \to 0. \end{equation}
\end{lemma}
\begin{proof} Consider the following commutative diagram \[\begin{tikzcd} 0 \ar[r] & V(K) \ar[d, equal] \ar[r] & X(K) \ar[r] \ar[d, "\lambda_{X}"] & Y(K) \ar[d, "\lambda_Y"] \ar[r] & 0 \\ 0 \ar[r] & \operatorname{Lie} V(K) \ar[r] & \operatorname{Lie} X(K) \ar[r] & \operatorname{Lie} Y(K) \ar[r] & 0 \rlap{\ .} \end{tikzcd}\] If $s: X(K) \to V(K)$ is a splitting of \eqref{lem:spl-es1} then $\operatorname{Lie} s: \operatorname{Lie} X(K) \to \operatorname{Lie} V(K)$ is a splitting of \eqref{lem:spl-es2} that satisfies $\operatorname{Lie} s \circ \lambda_X = s$. For the converse, let $r: \operatorname{Lie} X(K) \to \operatorname{Lie} V(K)$ be a splitting of \eqref{lem:spl-es2}. Then $$s: X(K) \xrightarrow{\lambda_X} \operatorname{Lie} X(K) \xrightarrow{r} \operatorname{Lie} V(K) = V(K)$$ is a splitting of \eqref{lem:spl-es1}. Moreover, by the properties of the logarithm (see \cite[p. 5]{ZA96}), this map is such that $\operatorname{Lie} s = r$. \end{proof}
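To see the role of the branch $\lambda$ in this correspondence, consider the toy case $V = \mathbb G_a$, $X = \mathbb G_a \times \mathbb G_m$, $Y = \mathbb G_m$. A splitting of the Lie algebra sequence is a retraction $r: K \times K \to K$, $r(v, z) = v + cz$ for some $c \in K$, and the recipe of the proof produces the splitting \[s: K \times K^* \to K, \qquad s(v, t) = v + c\lambda(t),\] of the sequence of $K$-points, which visibly depends on the chosen branch $\lambda$ unless $c = 0$.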
\begin{lemma} \label{lem:etas}
Let $\eta: G(K) \to G^\natural(K)$ and $\eta^\vee: G^\vee(K) \to G^{\vee \natural}(K)$ be a pair of splittings of \eqref{UVE1} and \eqref{UVE2}, respectively. Then we can define new splittings $\tilde \eta, \tilde \eta^\vee$ such that \begin{align*} \operatorname{Lie} \tilde \eta := \operatorname{Lie} \eta_T \times \operatorname{Lie} \eta_A & : \operatorname{Lie} G(K) \cong \operatorname{Lie} T(K) \times \operatorname{Lie} A(K) \to \operatorname{Lie} T^\natural(K) \times \operatorname{Lie} A^\#(K) \cong \operatorname{Lie} G^\natural(K), \\ \operatorname{Lie} \tilde \eta^\vee := \operatorname{Lie} \eta_T^\vee \times \operatorname{Lie} \eta_A^\vee & : \operatorname{Lie} G^\vee(K) \cong \operatorname{Lie} T^\vee(K) \times \operatorname{Lie} A^\vee(K) \to \operatorname{Lie} T^{\vee \natural}(K) \times \operatorname{Lie} A^{\vee \#}(K) \cong \operatorname{Lie} G^{\vee \natural}(K), \end{align*} where $\eta_T: T(K) \to T^\natural(K), \eta_T^\vee: T^\vee(K) \to T^{\vee \natural}(K)$ are homomorphic sections of the projections $$pr_2: T^\natural(K) \to T(K), \quad pr_2: T^{\vee \natural}(K) \to T^\vee(K),$$ respectively, and $\eta_A: A(K) \to A^\#(K), \eta_A^\vee: A^\vee(K) \to A^{\vee \#}(K)$ are homomorphic sections of $$\theta_A: A^\#(K) \to A(K), \quad \theta_{A^\vee}: A^{\vee \#}(K) \to A^\vee(K),$$ respectively. Moreover, if $(\eta, \eta^\vee)$ are dual with respect to $(\, \cdot \,, \, \cdot \,)^{Del}_M$ then $(\eta_T, \eta_T^\vee)$ are dual with respect to $(\, \cdot \,, \, \cdot \,)_T$, $(\eta_A, \eta_A^\vee)$ are dual with respect to $(\, \cdot \,, \, \cdot \,)^{Del}_A$, and $(\tilde \eta, \tilde \eta^\vee)$ are dual with respect to $(\, \cdot \,, \, \cdot \,)^{Del}_M$. \end{lemma}
\begin{proof} Define $r_T: \operatorname{Lie} T(K) \to \operatorname{Lie} T^\natural(K)$ and $r_A: \operatorname{Lie} A(K) \to \operatorname{Lie} A^\#(K)$ such that they make the following diagram commute (see diagrams \eqref{es_la}, \eqref{es_lanat} for notation) \[\begin{tikzcd} \operatorname{Lie} T(K) \ar[r, "j"] \ar[d, dashed, "r_T"'] & \operatorname{Lie} G(K) \ar[d, "\operatorname{Lie} \eta"] & \operatorname{Lie} A(K) \ar[l, "\bar q"'] \ar[d, dashed, "r_A"] \\ \operatorname{Lie} T^\natural(K) & \operatorname{Lie} G^\natural(K) \ar[l, "\bar j^\natural"'] \ar[r, "q^\natural"] & \operatorname{Lie} A^\#(K) \rlap{\ .} \end{tikzcd}\] From the definitions of $\bar j^\natural$ and $q^\natural$ we get that $r_T$ and $r_A$ are sections of \begin{gather*} pr_2: \operatorname{Lie} T^\natural(K) \to \operatorname{Lie} T(K), \\ \operatorname{Lie} \theta_A: \operatorname{Lie} A^\#(K) \to \operatorname{Lie} A(K), \end{gather*} respectively. Notice that $r_T: \operatorname{Lie} T(K) \to \operatorname{Lie} T^\natural(K)$ is given by $r_T(z) = (\operatorname{Lie} (\sigma \circ \eta \circ \iota)(z), z)$. By Lemma \ref{lem:spl}, we can extend these homomorphisms in a canonical way to homomorphisms of Lie groups $\eta_T: T(K) \to T^\natural(K)$ and $\eta_A: A(K) \to A^\#(K)$, \textit{i.e.} satisfying $\operatorname{Lie} \eta_T = r_T$ and $\operatorname{Lie} \eta_A = r_A$, in such a way that they are sections of \begin{gather*} pr_2: T^\natural(K) \to T(K), \\ \theta_A: A^\#(K) \to A(K), \end{gather*} respectively. Notice that $\eta_T: T(K) \to T^\natural(K)$ is given by $\eta_T(t) = (\sigma \circ \eta \circ \iota(t), t)$. 
Let $$\tilde r := r_T \times r_A: \operatorname{Lie} G(K) \cong \operatorname{Lie} T(K) \times \operatorname{Lie} A(K) \to \operatorname{Lie} T^\natural(K) \times \operatorname{Lie} A^\#(K) \cong \operatorname{Lie} G^\natural(K)$$ and define $\tilde \eta: G(K) \to G^\natural(K)$ as the morphism such that $\operatorname{Lie} \tilde \eta = \tilde r$. Clearly, $\tilde \eta$ is a section of $\theta$. We define $\eta^\vee_T$, $\eta^\vee_A$ and $\tilde \eta^\vee$ analogously. \\
Now suppose that $(\eta, \eta^\vee)$ are dual with respect to $(\, \cdot \,, \, \cdot \,)^{Del}_M$. We will prove that $(\eta_T, \eta_T^\vee)$ are dual splittings with respect to $(\, \cdot \,, \, \cdot \,)_T$. By Lemma \ref{lem:Del_pairing_M}, we get the following equality for every $z \in \operatorname{Lie} T(K)$ and $z^\vee \in \operatorname{Lie} T^\vee(K)$ \begin{align*} (\operatorname{Lie} \eta \circ j(z), \operatorname{Lie}\eta^\vee \circ j^\vee(z^\vee))^{Del}_M & = (\bar j^\natural \circ \operatorname{Lie} \eta \circ j(z), \bar j^{\vee \natural} \circ \operatorname{Lie} \eta^\vee \circ j^\vee(z^\vee))_T \\ & \quad + (q^\natural \circ \operatorname{Lie} \eta \circ j(z), q^{\vee \natural} \circ \operatorname{Lie} \eta^\vee \circ j^\vee(z^\vee))^{Del}_A . \nonumber \end{align*} Notice that $q^\natural \circ \operatorname{Lie} \eta \circ j: \operatorname{Lie} T(K) \to \operatorname{Lie} A^\#(K)$ becomes zero when composed with $\operatorname{Lie} \theta_A$ (see Definition \ref{def:jnat+qnat} and diagrams \eqref{notation_UVE2} and \eqref{es_Gnat}): \begin{align*} \operatorname{Lie} \theta_A \circ q^\natural \circ \operatorname{Lie} \eta \circ j & = \operatorname{Lie}(\theta_A \circ \pi^\# \circ \bar \gamma \circ \eta) \circ j \\ & = q \circ \operatorname{Lie}(\theta' \circ \bar \gamma \circ \eta) \circ j \\ & = q \circ \operatorname{Lie}(\theta \circ \eta) \circ j \\ & = 0 \, . \end{align*} This means that $$q^\natural \circ \operatorname{Lie} \eta \circ j(z) = (\omega, 0) \in \operatorname{Lie} A^\#(K)$$ corresponds to the trivial extension of $A^\vee$ by $\mathbb G_a$ endowed with a $\natural$-structure coming from an invariant differential $\omega \in \omega_{A^\vee}(K)$. Since the same is true for $q^{\vee \natural} \circ \operatorname{Lie} \eta^\vee \circ j^\vee(z^\vee)$ then, by \cite[Cor. 2.1.1, p. 
638]{CO91}, $$(q^\natural \circ \operatorname{Lie} \eta \circ j(z), q^{\vee \natural} \circ \operatorname{Lie} \eta^\vee \circ j^\vee(z^\vee))^{Del}_A = 0,$$ and so \begin{align*} (\operatorname{Lie} \eta_T(z), \operatorname{Lie} \eta^\vee_T(z^\vee))_T & = (\bar j^\natural \circ \operatorname{Lie} \eta \circ j(z), \bar j^{\vee \natural} \circ \operatorname{Lie} \eta^\vee \circ j^\vee(z^\vee))_T \\ & = (\operatorname{Lie} \eta \circ j(z), \operatorname{Lie}\eta^\vee \circ j^\vee(z^\vee))^{Del}_M \\ & = 0 , \end{align*} \textit{i.e.} $(\eta_T, \eta^\vee_T)$ are dual splittings with respect to $(\, \cdot \,, \, \cdot \,)_T$. The proof that $(\eta_A, \eta^\vee_A)$ are dual splittings with respect to $(\, \cdot \,, \, \cdot \,)^{Del}_A$ is carried out in a similar fashion. Now, to prove that $(\tilde \eta, \tilde \eta^\vee)$ are dual with respect to $(\, \cdot \,, \, \cdot \,)^{Del}_M$ consider the following commutative diagram \begin{equation} \label{weightfil_spl} \begin{tikzcd}[column sep=4em] \operatorname{Lie} T(K) \ar[d, "\operatorname{Lie} \eta_T"'] & \operatorname{Lie} G(K) \ar[l, "\bar j"'] \ar[d, "\operatorname{Lie} \tilde \eta"] \ar[r, "q"] & \operatorname{Lie} A(K) \ar[d, "\operatorname{Lie} \eta_A"] \\ \operatorname{Lie} T^\natural(K) & \operatorname{Lie} G^\natural(K) \ar[l, "\bar j^\natural"'] \ar[r, "q^\natural"] & \operatorname{Lie} A^\#(K) \rlap{\ ,} \end{tikzcd} \end{equation} as well as the corresponding one for $\tilde \eta^\vee$. From this and Lemma \ref{lem:Del_pairing_M} we conclude that for every $(h, h^\vee) \in \operatorname{Lie} G(K) \times \operatorname{Lie} G^\vee(K)$ \begin{align*} (\operatorname{Lie} \tilde \eta (h), \operatorname{Lie} \tilde \eta^\vee (h^\vee))^{Del}_M & = (\operatorname{Lie} \eta_T \circ \bar j(h), \operatorname{Lie} \eta_T^\vee \circ \bar j^\vee(h^\vee))_T \\ & \quad + (\operatorname{Lie} \eta_A \circ q(h), \operatorname{Lie} \eta_A^\vee \circ q^\vee(h^\vee))^{Del}_A \\ & = 0. 
\end{align*}
\end{proof}
\begin{theorem} \label{thm:lambda-spl} Let $r: \operatorname{Lie} G(K) \to \operatorname{Lie} G^\natural(K)$ and $r^\vee: \operatorname{Lie} G^\vee(K) \to \operatorname{Lie} G^{\vee \natural}(K)$ be a pair of splittings of the exact sequences of Lie algebras \begin{gather*} 0 \to \omega_{G^\vee}(K) \xrightarrow{\operatorname{Lie} \zeta} \operatorname{Lie} G^\natural(K) \xrightarrow{\operatorname{Lie} \theta} \operatorname{Lie} G(K) \to 0, \\ 0 \to \omega_G(K) \xrightarrow{\operatorname{Lie} \zeta^\vee} \operatorname{Lie} G^{\vee \natural}(K) \xrightarrow{\operatorname{Lie} \theta^\vee} \operatorname{Lie} G^\vee(K) \to 0 , \end{gather*} respectively, which are dual with respect to $(\, \cdot \,, \, \cdot \,)^{Del}_M$. Then we have an induced $\lambda$-splitting \[\psi: P(K) \to K,\] where $P$ is the Poincar\'e biextension. \end{theorem}
\begin{proof} Let $g \in G(K)$ be a section above $a \in A(K)$. First note that, from the splitting of $\operatorname{Lie} G^\vee$ in \eqref{es_la}, we also obtain a splitting of $\operatorname{Lie} P_{g, G^\vee}$ by pullback \begin{equation} \label{lambda-spl} \begin{tikzcd} & & 0 \ar[d] & 0 \ar[d] & \\ & & \operatorname{Lie} \mathbb G_m \ar[d] \ar[r, equal] & \operatorname{Lie} \mathbb G_m \ar[d] & \\ 0 \ar[r] & \operatorname{Lie} T^\vee \ar[d, "\cong"'] \ar[r] & \operatorname{Lie} P_{g, G^\vee} \arrow[dr, phantom, "\lrcorner", very near start] \ar[d] \ar[r] \ar[l, dashed, bend right] & \operatorname{Lie} P_{a, A^\vee} \ar[d] \ar[r] \ar[l, dashed, bend right] & 0 \\ 0 \ar[r] & \{g\} \times \operatorname{Lie} T^\vee \ar[r, "j^\vee"'] & \{g\} \times \operatorname{Lie} G^\vee \ar[d] \ar[r, "q^\vee"'] \ar[l, dashed, bend right, "\bar j^\vee"'] & \{a\} \times \operatorname{Lie} A^\vee \ar[d] \ar[r] \ar[l, dashed, bend right, "\bar q^\vee"'] & 0 \\ & & 0 & 0 & \end{tikzcd} \rlap{\ .} \end{equation} In a similar way, we induce a splitting of $\operatorname{Lie} P_{G, g^\vee}$, for all $g^\vee \in G^\vee(K)$. \\
Let $\eta: G(K) \to G^\natural(K)$ and $\eta^\vee: G^\vee(K) \to G^{\vee \natural}(K)$ be the splittings of \eqref{UVE1} and \eqref{UVE2}, respectively, such that $\operatorname{Lie} \eta = r$ and $\operatorname{Lie} \eta^\vee = r^\vee$, and let $\eta_T$, $\eta_T^\vee$, $\eta_A$, $\eta_A^\vee$, $\tilde \eta$ and $\tilde \eta^\vee$ be as constructed in Lemma \ref{lem:etas}. Consider the following diagram
\begin{equation} \label{Gnat_eta} \begin{tikzcd} & G(K) \ar[d, "\tilde \eta"] \ar[r, "\pi"] & A(K) \ar[d, "\eta_A"'] \\ \omega_{T^\vee}(K) & G^\natural(K) \ar[l, "\sigma"'] \ar[r, "\pi^\natural"] & A^\#(K) \rlap{\ .} \end{tikzcd} \end{equation}
Denote by $s_{g}^1: \operatorname{Lie} T^\vee \to K$ the morphism of Lie algebras corresponding to the invariant differential $\sigma \circ \tilde \eta(g) \in \omega_{T^\vee}(K)$. By \cite[Thm. 0.3.1, p. 633]{CO91} (see also Lemma \ref{lem:UVE} (iii)) we have that $\pi^\natural \circ \tilde \eta(g) \in A^\#(K)$ is represented by the $\mathbb G_m$-extension $P_{a, A^\vee}$ of $A^\vee$ equipped with a normal invariant differential, which corresponds to a morphism $s_{g}^2: \operatorname{Lie} P_{a, A^\vee} \to K$. We define
\begin{align*} s_g: \operatorname{Lie} P_{g, G^\vee} \cong \operatorname{Lie} T^\vee \times \operatorname{Lie} P_{a, A^\vee} & \to K \\ z = (z^1, z^2) & \mapsto s_g^1(z^1) + s_g^2(z^2) . \end{align*} This is a rigidification of $P_{g, G^\vee}$, considered as an extension of $G^\vee$ by $\mathbb G_m$. For every $g^\vee \in G^\vee(K)$, we let $a^\vee := \pi^\vee(g^\vee)$, and define the rigidification $s_{g^\vee}: \operatorname{Lie} P_{G, g^\vee} \to K$ of $P_{G, g^\vee}$ analogously as \begin{align*} s_{g^\vee}: \operatorname{Lie} P_{G, g^\vee} \cong \operatorname{Lie} T \times \operatorname{Lie} P_{A, a^\vee} & \to K \\ z = (z^1, z^2) & \mapsto s_{g^\vee}^1(z^1) + s_{g^\vee}^2(z^2) , \end{align*} where $s_{g^\vee}^1: \operatorname{Lie} T \to K$ is the morphism corresponding to the invariant differential $\sigma^\vee \circ \tilde \eta^\vee(g^\vee) \in \omega_{T}(K)$, and $s_{g^\vee}^2: \operatorname{Lie} P_{A, a^\vee} \to K$ is the morphism corresponding to the normal invariant differential on $P_{A, a^\vee}$ associated to $\pi^{\vee \natural} \circ \tilde \eta^\vee(g^\vee) \in A^{\vee \#}(K)$. \\
Let $y \in P(K)$ lie above $(g, g^\vee) \in G(K) \times G^\vee(K)$. We define maps $\psi_1, \psi_2: P(K) \to K$ as follows $$\psi_1(y) = s_g \circ \lambda_{P_{g, G^\vee}}(y), \quad \psi_2(y) = s_{g^\vee} \circ \lambda_{P_{G, g^\vee}}(y).$$ \noindent\begin{minipage}{0.55\linewidth} \begin{equation} \label{diag_def_rho-spl1} \begin{tikzcd} K^* \ar[d, hook] \ar[r, "\lambda"] & K \ar[d, hook] \\ P_{g, G^\vee}(K) \ar[r, "\lambda_{P_{g, G^\vee}}"] \ar[d] & \operatorname{Lie} P_{g, G^\vee}(K) \ar[d] \ar[u, dashed, bend right, "s_g"'] \\ \{g\} \times G^\vee(K) \ar[r, "\lambda_{G^\vee}"] & \operatorname{Lie} G^\vee(K) \end{tikzcd} \end{equation} \end{minipage} \begin{minipage}{0.5\linewidth}
\[\begin{tikzcd} K^* \ar[d, hook] \ar[r, "\lambda"] & K \ar[d, hook] \\ P_{G, g^\vee}(K) \ar[r, "\lambda_{P_{G, g^\vee}}"] \ar[d] & \operatorname{Lie} P_{G, g^\vee}(K) \ar[d] \ar[u, dashed, bend right, "s_{g^\vee}"'] \\ G(K) \times \{g^\vee\} \ar[r, "\lambda_{G}"] & \operatorname{Lie} G(K) \end{tikzcd}\]
\end{minipage}
\begin{claim} $\psi_1 = \psi_2$. \end{claim}
\begin{proof} Denote \begin{align*} (z_g^1, z_g^2) & := \lambda_{P_{g, G^\vee}}(y) \in \operatorname{Lie} P_{g, G^\vee} \cong \operatorname{Lie} T^\vee \times \operatorname{Lie} P_{a, A^\vee}, \\ (z_{g^\vee}^1, z_{g^\vee}^2) & := \lambda_{P_{G, g^\vee}}(y) \in \operatorname{Lie} P_{G, g^\vee} \cong \operatorname{Lie} T \times \operatorname{Lie} P_{A, a^\vee}. \end{align*} To prove the claim it suffices to show that \begin{align*} s_{g}^1(z_g^1) & = s_{g^\vee}^1(z_{g^\vee}^1), \\ s_{g}^2(z_g^2) & = s_{g^\vee}^2(z_{g^\vee}^2). \end{align*} \begin{enumerate}[(i)] \item $s_{g}^1(z_g^1) = s_{g^\vee}^1(z_{g^\vee}^1)$: From the commutativity of diagram \eqref{lambda-spl} and the analogous one for $P_{G, g^\vee}$ we get that $$z^1_{g^\vee} = \bar j \circ \lambda_G(g) \in \operatorname{Lie} T (K), \quad z^1_g = \bar j^\vee \circ \lambda_{G^\vee}(g^\vee) \in \operatorname{Lie} T^\vee(K).$$ Therefore, we have \begin{align*} \operatorname{Lie} \eta_T(z^1_{g^\vee}) & = \operatorname{Lie} \eta_T \circ \bar j \circ \lambda_G(g) \\ & = \bar j^\natural \circ \operatorname{Lie} \tilde \eta \circ \lambda_G(g) \\ & = (\sigma \circ \operatorname{Lie} \tilde \eta \circ \lambda_G(g), \bar j \circ \operatorname{Lie} \theta' \circ \operatorname{Lie} \gamma' \circ \operatorname{Lie} \tilde \eta \circ \lambda_G(g)) \\ & = (\sigma \circ \tilde \eta(g), \bar j \circ \operatorname{Lie} \theta \circ \operatorname{Lie} \tilde \eta \circ \lambda_G(g)) \\ & = (\sigma \circ \tilde \eta(g), \bar j \circ \lambda_G(g)) \in \omega_{T^\vee}(K) \times \operatorname{Lie} T(K) = \operatorname{Lie} T^\natural(K), \end{align*} where the second equality comes from the commutativity of diagram \eqref{weightfil_spl} in the proof of Lemma \ref{lem:etas}, the third one from the definition of $\bar j^\natural$ (see diagram \eqref{es_lanat}), the fourth one from the fact that $\theta' \circ \gamma' = Id$, and the last one from the fact that $\theta \circ \tilde \eta = Id$. 
Similarly, $$\operatorname{Lie} \eta^\vee_T(z^1_g) = (\sigma^\vee \circ \tilde \eta^\vee(g^\vee), \bar j^\vee \circ \lambda_{G^\vee}(g^\vee)) \in \omega_{T}(K) \times \operatorname{Lie} T^\vee(K) = \operatorname{Lie} T^{\vee \natural}(K).$$ By Lemma \ref{lem:Del_pairing_T}, we have $$(\operatorname{Lie} \eta_T(z^1_{g^\vee}), \operatorname{Lie} \eta^\vee_T(z^1_g))_T = s^1_g(z^1_g) - s^1_{g^\vee}(z^1_{g^\vee}).$$ Since $(\eta_T, \eta^\vee_T)$ are dual, we get the desired equality.
\item $s_{g}^2(z_g^2) = s_{g^\vee}^2(z_{g^\vee}^2)$: Let $y_A \in P_A(K)$ be the image of $y$. Then, by functoriality of the logarithm, we get $$z^2_g = \lambda_{P_{a, A^\vee}}(y_A), \quad z^2_{g^\vee} = \lambda_{P_{A, a^\vee}}(y_A).$$ Notice that, because of the commutativity of diagram \eqref{Gnat_eta}, we have \begin{align*} \eta_A(a) & = \eta_A \circ \pi (g) \\ & = \pi^\natural \circ \tilde \eta(g) \in A^\#(K). \end{align*} Similarly, $$\eta^\vee_A(a^\vee) = \pi^{\vee \natural} \circ \tilde \eta^\vee(g^\vee) \in A^{\vee \#}(K).$$ Hence, if we denote by $s_a$ the rigidification of $P_{a, A^\vee}$ determined by $\eta_A(a)$ and by $s_{a^\vee}$ the rigidification of $P_{A, a^\vee}$ determined by $\eta^\vee_A(a^\vee)$, then $s_a = s_{g}^2$ and $s_{a^\vee} = s_{g^\vee}^2$. Since $(\eta_A, \eta^\vee_A)$ are dual, the $\lambda$-splittings of $P_A(K)$ obtained from $\eta_A$ and $\eta^\vee_A$ coincide (see Proposition 3.1.2, Corollary 3.1.3 and Proposition 3.2.1 in \cite[p. 642--643]{CO91}). This implies that \begin{align*} s_{g}^2(z_g^2) & = s_a \circ \lambda_{P_{a, A^\vee}}(y_A) \\ & = s_{a^\vee} \circ \lambda_{P_{A, a^\vee}}(y_A) \\ & = s_{g^\vee}^2(z_{g^\vee}^2) . \end{align*} \end{enumerate} \end{proof}
Therefore, we can define $$\psi := \psi_1 = \psi_2.$$ It only remains to check that $\psi$ is indeed a $\lambda$-splitting. Using the definition of $\psi_1$ we get that for all $c \in K^*$ and $y \in P(K)$ lying above $(g, g^\vee) \in G(K) \times G^\vee(K)$ \begin{align*} \psi(c + y) & = s_g \circ \lambda_{P_{g, G^\vee}}(c + y) \\ & = s_g \circ \lambda_{P_{g, G^\vee}}(c) + s_g \circ \lambda_{P_{g, G^\vee}}(y) \\ & = \lambda(c) + \psi(y) , \end{align*} where the last equality holds because of the commutativity of diagram \eqref{diag_def_rho-spl1}. Also, for $y, y' \in P_{g, G^\vee}(K)$, \begin{align*} \psi(y +_1 y') & = s_g \circ \lambda_{P_{g, G^\vee}}(y +_1 y') \\ & = s_g \circ \lambda_{P_{g, G^\vee}}(y) + s_g \circ \lambda_{P_{g, G^\vee}}(y') \\ & = \psi(y) + \psi(y'). \end{align*} Finally, from the definition of $\psi_2$ it follows that $\psi$ is also compatible with the group structure $+_2$ of $P(K)$. \end{proof}
\begin{theorem} In the situation of Theorem \ref{thm:lambda-spl}, assume that $\eta$ and $\eta^\vee$ make the following diagrams commute \[\xymatrix{ L(K) \ar@{=}[r] \ar[d]_{u} & L(K) \ar[d]^{u^\natural} \\ G(K) \ar[r]^{\eta} & G^\natural(K) } \quad \xymatrix{ L^\vee(K) \ar@{=}[r] \ar[d]_{u^\vee} & L^\vee(K) \ar[d]^{u^{\vee \natural}} \\ G^\vee(K) \ar[r]^{\eta^\vee} & G^{\vee \natural}(K)}\] and, moreover, that $\eta = \tilde \eta$, $\eta^\vee = \tilde \eta^\vee$, where $\tilde \eta$ and $\tilde \eta^\vee$ are the morphisms of Lemma \ref{lem:etas}. Then the $\lambda$-splitting $\psi: P(K) \to K$ constructed in Theorem \ref{thm:lambda-spl} is compatible with the $L \times L^\vee$-linearization of $P$. In particular, it induces a $\lambda$-splitting of the biextension $Q_M(K)$ of $(M(K), M^\vee(K))$ by $K^*$ in the case that $u(K)$ and $u^\vee(K)$ are injective. \end{theorem}
\begin{remark} The condition $\eta \circ u = u^\natural$ says that, on $K$-sections, $(Id, \eta)$ is a splitting of the complex $M^\natural$ seen as an extension of $M$ by $\omega_{G^\vee}$; and similarly for $\eta^\vee$. \end{remark}
\begin{proof} We have to prove that the $\lambda$-splitting $\psi: P(K) \to K$ constructed in Theorem \ref{thm:lambda-spl} satisfies $\psi \circ \tau = 0$ and $\psi \circ \tau^\vee = 0$ on $K$-sections. \\
Let $x \in L(K)$ and denote by $\chi: T^\vee \to \mathbb G_m$ the homomorphism corresponding to it. We have the following diagram with exact rows (see \cite[\S 1.2]{AB05}) \[\begin{tikzcd} 0 \ar[r] & T^\vee \ar[r, "\iota^\vee"] \ar[d, "-\chi"'] & G^\vee \ar[r, "\pi^\vee"] \ar[d, "\tau'_x"] & A^\vee \ar[r] \ar[d, equal] & 0 \\ 0 \ar[r] & \mathbb G_m \ar[r] & P_{v(x), A^\vee} \ar[r] & A^\vee \ar[r] & 0 \\ 0 \ar[r] & \mathbb G_m \ar[r] \ar[u, equal] & P_{u(x), G^\vee} \ar[r] \ar[u] & G^\vee \ar[r] \ar[u] & 0 \rlap{\ ,} \end{tikzcd}\] where $v$ is the composition $L \xrightarrow{u} G \xrightarrow{\pi} A$. We also have the corresponding diagram of Lie algebras with exact rows and splittings induced by $\bar j^\vee$ and $\bar q^\vee$: \begin{equation} \label{spl_P} \begin{tikzcd} 0 \ar[r] & \operatorname{Lie} T^\vee \ar[r, "j^\vee"] \ar[d, "-\operatorname{Lie} \chi"'] & \operatorname{Lie} G^\vee \ar[r, "q^\vee"] \ar[d] \ar[l, dashed, bend right, "\bar j^\vee"'] & \operatorname{Lie} A^\vee \ar[r] \ar[d, equal] \ar[l, dashed, bend right, "\bar q^\vee"'] & 0 \\ 0 \ar[r] & \operatorname{Lie} \mathbb G_m \ar[r] & \operatorname{Lie} P_{v(x), A^\vee} \ar[r] \ar[l, dashed, bend right, "\xi"'] & \operatorname{Lie} A^\vee \ar[r] \ar[l, dashed, bend right] & 0 \\ 0 \ar[r] & \operatorname{Lie} \mathbb G_m \ar[r] \ar[u, equal] & \operatorname{Lie} P_{u(x), G^\vee} \ar[r] \ar[u] \ar[l, dashed, bend right] & \operatorname{Lie} G^\vee \ar[r] \ar[u] \ar[l, dashed, bend right] & 0 \rlap{\ .} \end{tikzcd} \end{equation}
By Lemma \ref{lem:UVE} (i), $u^\natural(x) \in G^\natural(K)$ corresponds to the extension $[L^\vee \to P_{u(x), G^\vee}]$ of $M^\vee$ by $\mathbb G_m$ endowed with a $\natural$-structure. We know that the invariant differential $\sigma \circ u^\natural(x) \in \omega_{T^\vee}(K)$ is the one associated to the homomorphism $\operatorname{Lie} \chi \in \operatorname{Hom}_{\mathcal O_K}(\operatorname{Lie} T^\vee, \mathbb G_a)$, by Lemma \ref{lem:UVE} (iv).
On the other hand, $\pi^\natural \circ u^\natural(x) \in A^\#(K)$ is the extension $P_{v(x), A^\vee}$ of $A^\vee$ by $\mathbb G_m$ endowed with the normal invariant differential associated to $\xi: \operatorname{Lie} P_{v(x), A^\vee} \to \operatorname{Lie} \mathbb G_m$. From our hypothesis that $\eta \circ u = u^\natural$, it follows that $$s_{u(x)}^1 = \operatorname{Lie} \chi: \operatorname{Lie} T^\vee \to \operatorname{Lie} \mathbb G_m,$$ since this is the morphism induced by $\sigma \circ \eta(u(x)) = \sigma \circ u^{\natural}(x)$, and $$s_{u(x)}^2 = \xi: \operatorname{Lie} P_{v(x), A^\vee} \to \operatorname{Lie} \mathbb G_m,$$ since this is the morphism induced by $\pi^\natural \circ \eta(u(x)) = \pi^\natural \circ u^\natural(x)$. \\
Let $g^\vee \in G^\vee(K)$. By setting $g = u(x)$, the middle row in diagram \eqref{lambda-spl} provides us with a decomposition $\operatorname{Lie} P_{u(x), G^\vee} \cong \operatorname{Lie} T^\vee \times \operatorname{Lie} P_{v(x), A^\vee}$ identifying $$\lambda_{P_{u(x), G^\vee}}(\tau(x, g^\vee)) = (\bar j^\vee \circ \lambda_{G^\vee}(g^\vee), \lambda_{P_{v(x), A^\vee}} \circ \tau'_x(g^\vee)).$$ Furthermore, the middle row of diagram \eqref{spl_P} allows us to identify $\operatorname{Lie} P_{v(x), A^\vee}$ with $\operatorname{Lie} \mathbb G_m \times \operatorname{Lie} A^\vee$; under this isomorphism, $\lambda_{P_{v(x), A^\vee}} \circ \tau'_x(g^\vee)$ corresponds to $$\lambda_{P_{v(x), A^\vee}} \circ \tau'_x(g^\vee) = (-\operatorname{Lie} \chi \circ \bar j^\vee \circ \lambda_{G^\vee}(g^\vee), \lambda_{A^\vee}(a^\vee)),$$ where $a^\vee \in A^\vee$ is the image of $g^\vee \in G^\vee$ under the canonical projection. Therefore, by \eqref{diag_def_rho-spl1}, we compute \begin{align*} \psi \circ \tau(x, g^\vee) & = s_{u(x)} \circ \lambda_{P_{u(x), G^\vee}}(\tau(x, g^\vee)) \\ & = s^1_{u(x)}(\bar j^\vee \circ \lambda_{G^\vee}(g^\vee)) + s^2_{u(x)}(\lambda_{P_{v(x), A^\vee}} \circ \tau'_x(g^\vee)) \\ & = \operatorname{Lie} \chi \circ \bar j^\vee \circ \lambda_{G^\vee}(g^\vee) + \xi(-\operatorname{Lie} \chi \circ \bar j^\vee \circ \lambda_{G^\vee}(g^\vee), \lambda_{A^\vee}(a^\vee)) \\ & = \operatorname{Lie} \chi \circ \bar j^\vee \circ \lambda_{G^\vee}(g^\vee) - \operatorname{Lie} \chi \circ \bar j^\vee \circ \lambda_{G^\vee}(g^\vee) \\ & = 0 . \end{align*}
The proof of the equality $\psi \circ \tau^\vee(g, x^\vee) = 0$ is carried out in a similar way. \end{proof}
\begin{corollary} Let $\rho: K^* \to \mathbb Q_p$ be a ramified homomorphism and let $r: \operatorname{Lie} G(K) \to \operatorname{Lie} G^\natural(K)$ and $r^\vee: \operatorname{Lie} G^\vee(K) \to \operatorname{Lie} G^{\vee \natural}(K)$ be a pair of splittings of the exact sequences of Lie algebras \[0 \to \omega_{G^\vee}(K) \xrightarrow{\operatorname{Lie} \zeta} \operatorname{Lie} G^\natural(K) \xrightarrow{\operatorname{Lie} \theta} \operatorname{Lie} G(K) \to 0,\] \[0 \to \omega_G(K) \xrightarrow{\operatorname{Lie} \zeta^\vee} \operatorname{Lie} G^{\vee \natural}(K) \xrightarrow{\operatorname{Lie} \theta^\vee} \operatorname{Lie} G^\vee(K) \to 0,\] respectively, which are dual with respect to $(\, \cdot \,, \, \cdot \,)^{Del}_M$. Then: \begin{enumerate}[(i)] \item There is a $\rho$-splitting $\psi: P(K) \to \mathbb Q_p$. \item Let $\eta: G(K) \to G^\natural(K)$ and $\eta^\vee: G^\vee(K) \to G^{\vee \natural}(K)$ be the splittings of \eqref{UVE1} and \eqref{UVE2} such that $\operatorname{Lie} \eta = r$ and $\operatorname{Lie} \eta^\vee = r^\vee$. If the following diagrams commute \[\xymatrix{ L(K) \ar@{=}[r] \ar[d]_{u} & L(K) \ar[d]^{u^\natural} \\ G(K) \ar[r]^{\eta} & G^\natural(K) } \quad \xymatrix{ L^\vee(K) \ar@{=}[r] \ar[d]_{u^\vee} & L^\vee(K) \ar[d]^{u^{\vee \natural}} \\ G^\vee(K) \ar[r]^{\eta^\vee} & G^{\vee \natural}(K) }\] and $\eta = \tilde \eta$, $\eta^\vee = \tilde \eta^\vee$, where $\tilde \eta$ and $\tilde \eta^\vee$ are the morphisms of Lemma \ref{lem:etas}, then the $\rho$-splitting $\psi: P(K) \to \mathbb Q_p$ of (i) is compatible with the $L \times L^\vee$-linearization of $P$. In particular, if $u(K)$ and $u^\vee(K)$ are injective then $\psi$ induces a $\rho$-splitting of the biextension $Q_M(K)$ of $(M(K), M^\vee(K))$ by $K^*$. \end{enumerate} \end{corollary}
\begin{proof} \begin{enumerate}[(i)] \item By \cite[p. 319]{ZA90}, there exists a branch $\lambda: K^* \to K$ of the $p$-adic logarithm and a $\mathbb Q_p$-linear map $\delta: K \to \mathbb Q_p$ such that \[\xymatrix{ K^* \ar[rr]^{\rho} \ar[dr]_{\lambda} && \mathbb Q_p \\ & K \ar[ru]_{\delta} & \rlap{\ .} }\] Let $\psi: P(K) \to K$ be the $\lambda$-splitting constructed as in Theorem \ref{thm:lambda-spl}. Then $\psi_{\rho} := \delta \circ \psi: P(K) \to \mathbb Q_p$ is a $\rho$-splitting of $P(K)$. \item We have that \[\psi_{\rho} \circ \tau = \delta \circ \psi \circ \tau = 0 ,\] and similarly for $\tau^\vee$. Therefore, $\psi_{\rho}$ is compatible with the $L \times L^\vee$-linearization of $P$ and thus induces a $\rho$-splitting of $Q_M(K)$, in the case that $u(K)$ and $u^\vee(K)$ are injective. \end{enumerate} \end{proof}
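% Added illustration (ours, not part of the original argument): the simplest instance of a ramified homomorphism and of the factorization used in part (i).
\begin{remark} To fix ideas, consider the simplest case $K = \mathbb Q_p$ and take for $\rho$ the Iwasawa branch of the $p$-adic logarithm $\log_p: \mathbb Q_p^* \to \mathbb Q_p$, normalized by $\log_p(p) = 0$. This is a ramified homomorphism, since $\log_p$ does not vanish on $\mathbb Z_p^*$ (for instance, $\log_p(1+p) \neq 0$, as $1+p$ is not a root of unity), and the factorization $\rho = \delta \circ \lambda$ of part (i) holds trivially with $\lambda = \log_p$ and $\delta = \operatorname{Id}_{\mathbb Q_p}$. \end{remark}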
\section{Local pairing between zero-cycles} \label{sec:local_pairing}
In this section, we construct a pairing between zero-cycles of degree zero on a curve over a local field and zero-cycles of degree zero on its regular locus, defined for pairs of cycles with disjoint supports; it generalizes the local pairing defined in \cite[p. 212]{MT83} in the case of an elliptic curve (see also \cite{CG89}). \\
Let $K$ be a finite extension of $\mathbb Q_p$ and $C$ a semi-normal irreducible curve over $K$. Consider the following commutative diagram \[\begin{tikzcd} C' \ar[d, twoheadrightarrow, "\pi"'] \ar[r, hook, "j'"] & \bar C' \ar[d, twoheadrightarrow, "\bar \pi"] \\ C \ar[r, hook, "j"] & \bar C \rlap{\ ,} \end{tikzcd}\] where $C'$ is the normalization of $C$, $\bar C'$ is a smooth compactification of $C'$, and $\bar C$ (resp. $C$) is the curve obtained from $\bar C'$ (resp. $C'$) by contracting each of the finite sets $\pi^{-1}(x)$, for $x \in C$. Let $S$ be the set of singular points of $C$, $S' := \pi^{-1}(S)$, and $F := \bar C' - C' = \bar C - C$. We recall from Section \ref{sec:alb_pic} the homological Picard 1-motive of $C$ and the cohomological Albanese 1-motive of $C$:
\[\operatorname{Pic}^-(C) = [u: \operatorname{Div}^0_{S'/S}(\bar C', F) \to \operatorname{Pic}^0(\bar C', F)],\] \[\operatorname{Alb}^+(C) = \operatorname{Pic}^-(C)^\vee = [u^\vee: \operatorname{Div}^0_{F}(\bar C) \to \operatorname{Pic}^0(\bar C)] .\] Denote by $\bar C_{\text{\rm reg}}$ the set of smooth points of $\bar C$ and let $a^+_{x}: \bar C_\text{\rm reg} \to \operatorname{Pic}^0(\bar C)$ be the Albanese mapping, which depends on a base point $x \in \bar C_{\text{\rm reg}}$ (see \cite[p. 50]{BS01}). Extending by linearity, one obtains a mapping $a^+_{\bar C}: Z_0(\bar C_\text{\rm reg})_0 \to \operatorname{Pic}^0(\bar C)$ on the group of zero-cycles of degree zero on $\bar C_\text{\rm reg}$; notice that it does not depend on any base point. As usual, we denote by $P$ the Poincar\'e biextension of $(\operatorname{Pic}^-(C), \operatorname{Alb}^+(C))$ by $\mathbb G_m$. We consider a homomorphism $\rho: K^* \to \mathbb Q_p$ and a $\rho$-splitting $\psi: P(K) \to \mathbb Q_p$ which is compatible with the $\operatorname{Div}^0_{S'/S}(\bar C', F) \times \operatorname{Div}^0_{F}(\bar C)$-linearization of $P$. Our aim is to construct a pairing \[ [\, \cdot \,, \, \cdot \,]_C: (Z_0(C)_0 \times Z_0(C_\text{\rm reg})_0)' \to \mathbb Q_p ,\] where $(Z_0(C)_0 \times Z_0(C_\text{\rm reg})_0)'$ denotes the subset of $Z_0(C)_0 \times Z_0(C_\text{\rm reg})_0$ consisting of pairs of cycles with disjoint support. \\
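% Added toy example (ours): the smallest instance of the 1-motives above, included for orientation.
\begin{remark} As a toy example, suppose that $C = \bar C$ is projective with a single node whose two branches are rational, so that $F = \emptyset$, $S$ consists of the node, and $S' = \{x_0, x_1\}$ of its two preimages. If in addition $\bar C' \cong \mathbb P^1$ (e.g.\ $C$ a nodal plane cubic), then $\operatorname{Pic}^0(\bar C', F) = \operatorname{Pic}^0(\mathbb P^1) = 0$ and $\operatorname{Div}^0_{S'/S}(\bar C', F) = \mathbb Z (x_0 - x_1)$, so that \[\operatorname{Pic}^-(C) = [\mathbb Z \to 0], \qquad \operatorname{Alb}^+(C) = [0 \to \operatorname{Pic}^0(\bar C)] \cong [0 \to \mathbb G_m],\] in agreement with the duality $\operatorname{Alb}^+(C) = \operatorname{Pic}^-(C)^\vee$. \end{remark}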
First, we define a pairing \[[\, \cdot \,, \, \cdot \,]'_C: (\operatorname{Div}^0(\bar C', F) \times Z_0(\bar C_\text{\rm reg})_0)' \to \mathbb Q_p \]
on the set of all pairs $(D, z)$, with $D$ a divisor on $\bar C'$ algebraically equivalent to 0 whose support is contained in $\bar C' \backslash F$, and $z$ a zero-cycle of degree zero on $\bar C_\text{\rm reg}$, satisfying that $\operatorname{supp} D \cap \operatorname{supp} z = \emptyset$. Notice that a divisor $D \in \operatorname{Div}^0(\bar C', F) \subset \operatorname{Div}^0(\bar C')$ corresponds to a line bundle $L(D)$ over $\bar C'$ together with a rational section $s_D: \bar C' \dashrightarrow L(D)$ which is defined on the open subset $\bar C' \setminus \operatorname{supp} D \subset \bar C'$; in particular, $s_D$ is defined on $F$, since $\operatorname{supp} D \cap F = \emptyset$. Moreover, the pullback along $a_{x}^+$ of $P_{[D]}$, the fiber of the Poincar\'e bundle $P$ over $[D] \in \operatorname{Pic}^0(\bar C', F)$, is the restriction of $L(D)$ to $\bar C_{\text{\rm reg}}$, and so $a_{x}^+$ induces a map $a_{x, D}^+: L(D)|_{\bar C_{\text{\rm reg}}} \to P_{[D]}$ by pullback: \[\begin{tikzcd}
L(D)|_{\bar C_{\text{\rm reg}}} \ar[r, "a^+_{x, D}"] \ar[d] \ar[dr, phantom, "\lrcorner", very near start] & P_{[D]} \ar[d] \\
\bar C_{\text{\rm reg}} \ar[r, "a^+_{x}"] \ar[u, dashed, bend left, "s_D|_{\bar C_{\text{\rm reg}}}"] & \{[D]\} \times \operatorname{Pic}^0(\bar C) \rlap{\ .} \end{tikzcd}\] Therefore, we can define \[ [D, \sum n_j x_j]'_C := \sum n_j \psi \circ a_{x, D}^+ \circ s_D(x_j) ,\] where $\sum n_j x_j \in Z_0(\bar C_\text{\rm reg})_0$ is a zero-cycle whose support is disjoint from $\operatorname{supp} D$. Notice that, since $\sum n_j x_j$ has degree zero, $[D, \sum n_j x_j]'_C$ does not depend on the base point $x$. \\
When $D \in \operatorname{Div}^0_{S'/S}(\bar C', F) \subset \operatorname{Div}^0(\bar C', F)$ we have that $a_{x, D}^+ \circ s_D = \tau \circ a_{x}^+$: \[\begin{tikzcd}
L(D)|_{\bar C_{\text{\rm reg}}} \ar[r, "a_{x, D}^+"] \ar[d] \ar[dr, phantom, "\lrcorner", very near start] & P_{u(D)} \ar[d] \\
\bar C_{\text{\rm reg}} \ar[r, "a_{x}^+"] \ar[u, dashed, bend left, "s_D|_{\bar C_{\text{\rm reg}}}"] & \{u(D)\} \times \operatorname{Pic}^0(\bar C) \ar[u, dashed, bend right, "\tau |_{\{D\} \times \operatorname{Pic}^0(\bar C)}"'] \rlap{\ .} \end{tikzcd}\] This implies that $[D, \sum n_j x_j]'_C = 0$ for all $D \in \operatorname{Div}^0_{S'/S}(\bar C', F)$. Notice that, since every closed point in $C'$ is also closed in $\bar C'$, we have $Z_0(C')_0 = \operatorname{Div}^0(\bar C', F)$. Moreover, since $\bar C'$ is irreducible, $\operatorname{Div}^0_{S'/S}(\bar C', F) \subset \operatorname{Div}^0(\bar C', F)$ is the free abelian subgroup generated by cycles of the form $x_0 - x_1$, where $\pi(x_0) = \pi(x_1)$; denote this group by $Z_0(S'/S)_0$.
Recalling that the pushforward of cycles along $\pi$ preserves the degree, we obtain the following exact sequence \[0 \to Z_0(S'/S)_0 \to Z_0(C')_0 \xrightarrow{\pi_*} Z_0(C)_0 \to 0. \] Therefore, $[\, \cdot \,, \, \cdot \,]'$ is a pairing on $(Z_0(C')_0 \times Z_0(\bar C_\text{\rm reg})_0)'$ which is zero when restricted to $(Z_0(S'/S)_0 \times Z_0(\bar C_\text{\rm reg})_0)'$, yielding a pairing \[ [\, \cdot \,, \, \cdot \,]''_C: (Z_0(C)_0 \times Z_0(\bar C_\text{\rm reg})_0)' \to \mathbb Q_p .\] By restricting to $Z_0(C_\text{\rm reg})_0 \subset Z_0(\bar C_\text{\rm reg})_0$ we get the desired pairing \[ [\, \cdot \,, \, \cdot \,]_C: (Z_0(C)_0 \times Z_0(C_\text{\rm reg})_0)' \to \mathbb Q_p .\]
We remark that, since $\bar C'$ is irreducible, $\operatorname{Div}_{F}^0(\bar C) = Z_0(F)_0$, and so the restriction of $a_{\bar C}^+$ to $Z_0(F)_0$ equals $u^\vee$: \[\begin{tikzcd} Z_0(F)_0 \ar[r, equal] \ar[d] & \operatorname{Div}^0_{F}(\bar C) \ar[d, "u^\vee"] \\ Z_0(\bar C_{\text{\rm reg}})_0 \ar[r, "a^+_{\bar C}"] & \operatorname{Pic}^0(\bar C) \rlap{\ .} \end{tikzcd}\] Therefore, $[D, z]'_C = \psi \circ \tau^\vee(z) = 0$ for all $z \in Z_0(F)_0$: \[\begin{tikzcd} & & K^* \ar[d, hook] \ar[dr, bend left, "\rho"] & \\ & & P_{[D]}(K) \ar[r, "\psi"] \ar[d] & \mathbb Q_p \\
Z_0(F)_0 \ar[r, equal] & \{[D]\} \times \operatorname{Div}_{F}^0(\bar C) \ar[r, "u^\vee"] \ar[ur, bend left, "\tau^\vee|_{\{[D]\} \times \operatorname{Div}_{F}^0(\bar C)}"] & \{[D]\} \times \operatorname{Pic}^0(\bar C) & \rlap{\ .} \end{tikzcd}\]
\section{Global pairing on rational points} \label{sec:global_pairing}
We define a global pairing between the rational points of a 1-motive over a global field and those of its dual. The construction, which is given in Proposition \ref{pro:global_pairingM}, generalizes the global pairing defined in \cite[Lemma 3.1, p. 214]{MT83} in the case of abelian varieties (see also \cite[p. 337]{ZA90}). \\
Let $F$ be a number field endowed with a set of places $\mathcal V$ which are either archimedean or discrete, and such that, for each $c \in F^*$, we have $|c|_v = 1$ for almost all $v \in \mathcal V$. For each place $v$, let $F_v$ denote the completion of $F$ with respect to $v$; for $v$ discrete denote by $\mathcal O_{F_v}$ the ring of integers of $F_v$ and let $\pi_v$ be a uniformizer of $\mathcal O_{F_v}$ such that $\pi_v \in F$. Consider a family $\rho = (\rho_v)_{v \in \mathcal V}$ of homomorphisms $$\rho_v: F_v^* \to \mathbb Q_p$$ such that $\rho_v(\mathcal O_{F_v}^*) = 0$ for almost all discrete places $v$, and such that the ``sum formula'' $\sum_v \rho_v(c) = 0$ holds for all $c \in F^*$. \\
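% Added standard example (ours): a family satisfying the above axioms; it is not used in the sequel.
\begin{remark} A standard family satisfying these conditions, for $F = \mathbb Q$ and $\mathcal V$ the set of all places, is the cyclotomic one: $\rho_\infty = 0$, $\rho_\ell(c) = -\operatorname{ord}_\ell(c) \log_p \ell$ for primes $\ell \neq p$, and $\rho_p = \log_p$, the Iwasawa branch of the $p$-adic logarithm. Indeed, $\rho_\ell(\mathbb Z_\ell^*) = 0$ for all $\ell \neq p$, and by multiplicativity the sum formula for $c \in \mathbb Q^*$ reduces to the cases $c = -1$ and $c = p$, where every term vanishes, and $c = q$ a prime different from $p$, where \[\sum_v \rho_v(q) = \rho_q(q) + \rho_p(q) = -\log_p q + \log_p q = 0 .\] \end{remark}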
Let $M_F = [L_F \xrightarrow{u_F} G_F]$ be a 1-motive over $F$, where $G_F$ is an extension of $A_F$ by $T_F$. For each place $v$, denote $M_{F_v} = [L_{F_v} \xrightarrow{u_{F_v}} G_{F_v}]$ its base change to $F_v$, so that $G_{F_v}$ is an extension of $A_{F_v}$ by $T_{F_v}$. Denote by $P_F$ the Poincar\'e biextension of $(M_F, M^\vee_F)$ and by $P_{F_v}$ its base change to $F_v$, which coincides with the Poincar\'e biextension of $(M_{F_v}, M^\vee_{F_v})$. Moreover, denote $$\tau_{F_v}: L_{F_v} \times G_{F_v}^\vee \to P_{F_v}, \ \tau^\vee_{F_v}: G_{F_v} \times L_{F_v}^\vee \to P_{F_v}$$ the trivializations associated to the 1-motive $M_{F_v}$ and its dual. \\
Observe that $M_{F_v}$ has good reduction over $\mathcal O_{F_v}$ for almost all discrete places $v$ (see \cite[Lemma 3.3, p. 309]{ABB17}). This means that there exists an $\mathcal O_{F_v}$-1-motive $M_{\mathcal O_{F_v}} = [L_{\mathcal O_{F_v}} \xrightarrow{u_{\mathcal O_{F_v}}} G_{\mathcal O_{F_v}}]$, with $G_{\mathcal O_{F_v}}$ an extension of an abelian scheme $A_{\mathcal O_{F_v}}$ by a torus $T_{\mathcal O_{F_v}}$, whose generic fiber is $M_{F_v}$. Moreover, the Poincar\'e biextension $P_{\mathcal O_{F_v}}$ of $(M_{\mathcal O_{F_v}}, M^\vee_{\mathcal O_{F_v}})$ has generic fiber equal to $P_{F_v}$ and its trivializations $$\tau_{\mathcal O_{F_v}}: L_{\mathcal O_{F_v}} \times G_{\mathcal O_{F_v}}^\vee \to P_{\mathcal O_{F_v}}, \ \tau^\vee_{\mathcal O_{F_v}}: G_{\mathcal O_{F_v}} \times L_{\mathcal O_{F_v}}^\vee \to P_{\mathcal O_{F_v}}$$ extend $\tau_{F_v}$ and $\tau^\vee_{F_v}$, respectively. \\
Finally, for every $v$ consider a $\rho_v$-splitting $\psi_{v}: P_{F_v}(F_v) \to \mathbb Q_p$ of $P_{F_v}(F_v)$ and assume that, for almost all discrete places $v$ for which $M_{F_v}$ has good reduction, $\psi_v(P_{\mathcal O_{F_v}}(\mathcal O_{F_v})) = 0$. We have the following
\begin{proposition} \label{pro:global_pairingG} There is a pairing \[\langle \, \cdot \,,\, \cdot \, \rangle: G_F(F) \times G^\vee_F(F) \to \mathbb Q_p \] such that if $y \in P_F(F)$ lies above $(g, g^\vee) \in G_F(F) \times G^\vee_F(F)$ then \begin{equation} \label{global_pairingG} \langle g, g^\vee \rangle = \sum_v \psi_v(y). \end{equation} \end{proposition}
\begin{proof} To prove that the right hand side of \eqref{global_pairingG} is a finite sum, we use the fact that the 1-motive $M_F$ has good reduction over $\mathcal O_{F}\left[1/N\right]$ for $N$ sufficiently divisible (see \cite[Lemma 3.3, p. 309]{ABB17}). This means that $M_F$ extends to a 1-motive $M_{\mathcal O_{F}\left[1/N\right]} = [L_{\mathcal O_{F}\left[1/N\right]} \to G_{\mathcal O_{F}\left[1/N\right]}]$ over $\mathcal O_{F}\left[1/N\right]$, and similarly for $M_F^\vee$. Moreover, the Poincar\'e biextension $P_F$ extends as well to a biextension $P_{\mathcal O_{F}\left[1/N\right]}$ over $\mathcal O_{F}\left[1/N\right]$. We then have a tower of biextensions as follows: \[\begin{tikzcd} \mathcal O_{F}\left[1/N\right]^* \ar[r, hookrightarrow] \ar[d, hookrightarrow] & F^* \ar[d, hookrightarrow] \\ P_{\mathcal O_{F}\left[1/N\right]}(\mathcal O_{F}\left[1/N\right]) \ar[r, hookrightarrow] \ar[d, two heads] & P_F(F) \ar[d, two heads] \\ G_{\mathcal O_{F}\left[1/N\right]} \times G^\vee_{\mathcal O_{F}\left[1/N\right]} \ar[r, equal] & G_F(F) \times G^\vee_F(F) \rlap{\ .} \end{tikzcd}\] Therefore, we can always choose $y \in P_{\mathcal O_{F}\left[1/N\right]}(\mathcal O_{F}\left[1/N\right])$ lying above a pair of rational points $(g, g^\vee) \in G_F(F) \times G^\vee_F(F)$. By doing so, we ensure that $y \in P_{\mathcal O_{F_v}}(\mathcal O_{F_v})$ for almost all $v$, and thus $\psi_v(y) = 0$ for almost all $v$. \\
Observe that if $y \in P_F(F)$ lies above $(g, g^\vee)$ then any other element lying above $(g, g^\vee)$ is of the form $c + y$, for $c \in F^*$. From the sum formula we obtain the equalities \begin{align*} \sum_v \psi_v(c + y) & = \sum_v \rho_v(c) + \sum_v \psi_v(y) \\ & = \sum_v \psi_v(y), \end{align*} which proves that the right hand side of \eqref{global_pairingG} indeed defines a map $G_F(F) \times G^\vee_F(F) \to \mathbb Q_p$. It remains to check that it is bilinear. Let $y_1, y_2 \in P_F(F)$ map to $(g_1, g^\vee), (g_2, g^\vee) \in G_F(F) \times G^\vee_F(F)$, respectively. Since the $\psi_v$ are $\rho_v$-splittings, we get that \begin{align*} \langle g_1 + g_2, g^\vee \rangle & = \sum_v \psi_v(y_1 +_2 y_2) \\ & = \sum_v \psi_v(y_1) + \sum_v \psi_v(y_2) \\ & = \langle g_1, g^\vee \rangle + \langle g_2, g^\vee \rangle. \end{align*} In a similar way we verify linearity in $G^\vee_F$. \end{proof}
From now on we will assume that $L_F$ and $T_F$ are split. We assume, moreover, that $\psi_v$ factors through a $\rho_v$-splitting $\psi_{A, v}$ of $P_{A_{F_v}}(F_v)$: $$\psi_v: P_{F_v}(F_v) \to P_{A_{F_v}}(F_v) \xrightarrow{\psi_{A, v}} \mathbb Q_p.$$ Denote by $\mathcal V'$ the set of discrete places $v$ such that $M_{F_v}$ has good reduction and $\psi_v(P_{\mathcal O_{F_v}}(\mathcal O_{F_v})) = 0$. Notice that, necessarily, $\rho_v(\mathcal O_{F_v}^*) = 0$ for all $v \in \mathcal V'$.
\begin{lemma} \label{formula_global_pairing} For every $x^\vee \in L^\vee_F(F)$ and $g \in G_F(F)$ there exists $t \in T_F(F)$ such that \[\sum_v \psi_v \circ \tau^\vee_{F_v}(g, x^\vee) = \sum_{v \in \mathcal V - \mathcal V'} \psi_v \circ \tau^\vee_{F_v}(t^{-1} g, x^\vee),\] and similarly for every $x \in L_F(F)$ and $g^\vee \in G^\vee_F(F)$. \end{lemma}
\begin{proof} Fix $x^\vee \in L^\vee_F(F)$ and $g \in G_F(F)$. Suppose that $L_F^\vee \cong \mathbb Z_F^r$ and let $(m_1, \ldots, m_r) \in \mathbb Z_F^r$ be the element corresponding to $x^\vee$. Notice that this induces an isomorphism $T_F \cong \mathbb G_{m, F}^r$. Consider a discrete place $v$ in $\mathcal V'$. Since $G_{F_v}$ has good reduction, $A_{F_v}(F_v) = A_{\mathcal O_{F_v}}(\mathcal O_{F_v})$, which induces isomorphisms \begin{equation} \label{G_class} \frac{G_{F_v}(F_v)}{G_{\mathcal O_{F_v}}(\mathcal O_{F_v})} \cong \frac{T_{F_v}(F_v)}{T_{\mathcal O_{F_v}}(\mathcal O_{F_v})} \cong \mathbb Z^r . \end{equation} Since $M_{F_v}$ has good reduction, the following diagram commutes \[\begin{tikzcd}[column sep=tiny] & & 0 \ar[rr, hookrightarrow] & & \mathbb Q_p \\
& P_{\mathcal O_{F_v}}(\mathcal O_{F_v}) \ar[d] \ar[ur, "\psi_v |_{P_{\mathcal O_{F_v}}}"] \ar[rr, hookrightarrow] & & P_{F_v}(F_v) \ar[d] \ar[ur, "\psi_v"] & \\ & G_{\mathcal O_{F_v}}(\mathcal O_{F_v}) \times G^\vee_{\mathcal O_{F_v}}(\mathcal O_{F_v}) \ar[rr, hookrightarrow] & & G_{F_v}(F_v) \times G^\vee_{F_v}(F_v) & \\ G_{\mathcal O_{F_v}}(\mathcal O_{F_v}) \times L^\vee_{\mathcal O_{F_v}}(\mathcal O_{F_v}) \ar[ur, "Id \times u^\vee_{\mathcal O_{F_v}}"'] \ar[rr, hookrightarrow] \ar[uur, bend left, "\tau^\vee_{\mathcal O_{F_v}}"] & & G_{F_v}(F_v) \times L^\vee_{F_v}(F_v) \ar[ur, "Id \times u^\vee_{F_v}"'] \ar[uur, bend left, "\tau^\vee_{F_v}"] & & \rlap{\ .} \end{tikzcd}\] This implies that the map $\psi_v \circ \tau^\vee_{F_v}(\, \cdot \,, x^\vee)$ factors through the quotient $G_{F_v}(F_v)/ G_{\mathcal O_{F_v}}(\mathcal O_{F_v})$. Thus, any $t_v \in T_{F_v}(F_v)$ whose class in $T_{F_v}(F_v)/T_{\mathcal O_{F_v}}(\mathcal O_{F_v})$ equals that of $g$ satisfies $$\psi_v \circ \tau^\vee_{F_v}(g, x^\vee) = \psi_v \circ \tau^\vee_{F_v}(t_v, x^\vee),$$ where we identify $t_v$ with the corresponding point in $G_{F_v}(F_v)$. If the class of $g$ corresponds to $(n_1, \ldots, n_r) \in \mathbb Z^r$ under the isomorphism \eqref{G_class}, we may choose $t_v$ of the form $t_v := (\pi_v^{n_{1}}, \ldots, \pi_v^{n_{r}})$; in this way, $t_v$ belongs to $T_F(F)$ and $\psi_w \circ \tau^\vee_{F_w}(t_v, x^\vee) = 0$, for all $w \in \mathcal V'$ such that $w \neq v$.
To prove this last assertion, start by considering any place $w \in \mathcal V$. We have the following commutative diagram with exact rows \[\begin{tikzcd} & & \mathbb G_{m, F_w} \ar[r, equal] \ar[d] & \mathbb G_{m, F_w} \ar[d] & \\ 0 \ar[r] & T_{F_w} \ar[r, "i"] \ar[d, "\cong"] & P_{G_{F_w}, \{x^\vee\}} \ar[r] \ar[d] \ar[dr, phantom, "\lrcorner", very near start] & P_{A_{F_w}, a^\vee} \ar[r] \ar[d] & 0 \\ 0 \ar[r] & T_{F_w} \times \{x^\vee\} \ar[r] & G_{F_w} \times \{x^\vee\} \ar[r] \ar[u, "\tau_{F_w}^\vee", dashed, bend left] & A_{F_w} \times \{a^\vee\} \ar[r] & 0 \rlap{\ ,} \end{tikzcd}\] where $a^\vee \in A_{F_w}^\vee(F_w)$ denotes the image of $x^\vee$ by the composition $L_{F_w}^\vee \xrightarrow{u^\vee_{F_w}} G_{F_w}^\vee \to A_{F_w}^\vee$. The map $i$ is characterized by the property that its composition with $P_{G_{F_w}, \{x^\vee\}} \to G_{F_w} \times \{x^\vee\}$ is the natural injection and its composition with $P_{G_{F_w}, \{x^\vee\}} \to P_{A_{F_w}, a^\vee}$ is zero. Let $\chi: T_F \to \mathbb G_{m, F}$ be the map corresponding to $x^\vee \in L_F^\vee$. With this notation we have $$\tau_{F_w}^\vee(t, x^\vee) = \chi(t) + i(t),$$ for all $t \in T_{F_w}$. In particular, for $w \neq v$ in $\mathcal V'$ and $t = t_v$ we get \begin{align*} \psi_w \circ \tau^\vee_{F_w}(t_v, x^\vee) & = \psi_w(\chi(t_v) + i(t_v)) \\ & = \rho_w(\chi(t_v)) \\ & = \rho_w(\pi_v^{\sum n_i m_i}) \\ & = (n_1m_1 + \ldots + n_rm_r)\rho_w(\pi_v) \\ & = 0, \end{align*} where the second equality is deduced from $\psi_w(i(t_v)) = 0$ (since $\psi_w$ is obtained from a $\rho_w$-splitting of $P_{A_{F_w}}$), and the last one from the facts that $\pi_v \in \mathcal O_{F_w}^*$ and $\rho_w(\mathcal O_{F_w}^*) = 0$, as $w \in \mathcal V'$. \\
Define $$t := \prod_{v \in \mathcal V'} t_v \in T_F(F).$$ Notice that this is a finite product, since $g \in G_{\mathcal O_{F_v}}(\mathcal O_{F_v})$ for almost all $v \in \mathcal V'$. From the previous equalities, we get that $t$ satisfies $$\psi_v \circ \tau^\vee_{F_v}(t, x^\vee) = \psi_v \circ \tau^\vee_{F_v}(g, x^\vee),$$ for every $v \in \mathcal V'$. Therefore, we obtain \begin{align*} \sum_v \psi_v \circ \tau^\vee_{F_v}(g, x^\vee) & = \sum_{v \in \mathcal V - \mathcal V'} \psi_v \circ \tau^\vee_{F_v}(g, x^\vee) + \sum_{v \in \mathcal V'} \psi_v \circ \tau^\vee_{F_v}(g, x^\vee) \\ & = \sum_{v \in \mathcal V - \mathcal V'} \psi_v \circ \tau^\vee_{F_v}(g, x^\vee) + \sum_{v \in \mathcal V'} \psi_v \circ \tau^\vee_{F_v}(t, x^\vee) \\ & = \sum_{v \in \mathcal V - \mathcal V'} \psi_v \circ \tau^\vee_{F_v}(g, x^\vee) - \sum_{v \in \mathcal V - \mathcal V'} \psi_v \circ \tau^\vee_{F_v}(t, x^\vee) \\ & = \sum_{v \in \mathcal V - \mathcal V'} \psi_v \circ \tau^\vee_{F_v}(t^{-1} g, x^\vee) , \end{align*} where the third equality is derived from $$\sum_{v} \psi_v \circ \tau^\vee_{F_v}(t, x^\vee) = \sum_v \rho_v(\chi(t)) = 0.$$ \end{proof}
\begin{proposition} \label{pro:global_pairingM} Suppose that $u_F(F)$ and $u_F^\vee(F)$ are injective, and that the $\rho_v$-splittings $\psi_v$ are compatible with the \sloppy $L_{F_v} \times L_{F_v}^\vee$-linearization of $P_{F_v}$, for every place $v \in \mathcal V - \mathcal V'$. Then the pairing $\langle \, \cdot \,,\, \cdot \, \rangle$ of Proposition \ref{pro:global_pairingG} descends to a pairing \[\langle \, \cdot \,,\, \cdot \, \rangle_M: M_F(F) \times M^\vee_F(F) \to \mathbb Q_p. \] \end{proposition}
\begin{proof} Fix $g \in G_F(F)$ and $x^\vee \in L^\vee_F(F)$, and let $t \in T_F(F)$ be the element constructed in Lemma \ref{formula_global_pairing}. We have \[\sum_v \psi_v \circ \tau^\vee_{F_v}(g, x^\vee) = \sum_{v \in \mathcal V - \mathcal V'} \psi_v \circ \tau^\vee_{F_v}(t^{-1} g, x^\vee) = 0 .\] Since the analogous equality holds for every $x \in L_F(F)$ and $g^\vee \in G^\vee_F(F)$, the pairing $\langle \, \cdot \,, \, \cdot \, \rangle$ vanishes on $G(F) \times \operatorname{Im}(u^\vee(F))$ and on $\operatorname{Im}(u(F)) \times G^\vee(F)$, inducing a pairing \[\langle \, \cdot \,,\, \cdot \, \rangle_M: M_F(F) \times M^\vee_F(F) \to \mathbb Q_p. \] \end{proof}
\end{document}
\begin{document}
\title[The H\"ormander multiplier theorem]{The H\"ormander multiplier theorem, II: The bilinear local $L^2$ case} \author{Loukas Grafakos}
\address{Department of Mathematics, University of Missouri, Columbia MO 65211, USA} \email{[email protected]}
\author{Danqing He}
\address{Department of Mathematics, Sun Yat-sen (Zhongshan) University, Guangzhou, 510275, P.R. China} \email{[email protected]}
\author[Honzik]{Petr Honzik} \address{Faculty of Mathematics and Physics, Charles University in Prague, Ke Karlovu 3, 121 16 Praha 2, Czech Republic} \email{[email protected]}
\thanks{The first author was supported by the Simons Foundation. The third author
was supported by the ERC CZ grant LL1203 of the Czech Ministry of Education}
\thanks{{\it Mathematics Subject Classification:} Primary 42B15. Secondary 42B25} \thanks{{\it Keywords and phrases:} Bilinear multipliers, H\"ormander multipliers, wavelets, multilinear operators} \date{} \begin{abstract} We use wavelets of tensor product type to obtain the boundedness of bilinear multiplier operators on $\mathbb R^n\times \mathbb R^n$ associated with H\"ormander multipliers on $\mathbb R^{2n}$ with minimal smoothness. We focus on the local $L^2$ case and obtain boundedness under the minimal smoothness assumption of $n/2$ derivatives. We also provide counterexamples to obtain necessary
conditions for all sets of indices. \end{abstract}
\maketitle
\section{Introduction}
An $m$-linear $(p_1,\dots, p_m,p)$ multiplier $\sigma(\xi_1,\dots,\xi_m)$ is a function on $\mathbb R^{ n}\times \cdots \times \mathbb R^{ n}$ such that the corresponding $m$-linear operator $$ T_\sigma(f_1,\dots , f_m)(x)= \int_{\mathbb R^{mn}}\sigma(\xi_1,\dots,\xi_m)\widehat f_1(\xi_1)\cdots \widehat f_m(\xi_m) e^{2\pi i x\cdot(\xi_1+\cdots+\xi_m)}d\xi_1\cdots d\xi_m, $$
initially defined on $m$-tuples of Schwartz functions, has a bounded extension from $L^{p_1}(\mathbb R^n)\times\cdots\times L^{p_m}(\mathbb R^n)$ to $L^{p}(\mathbb R^n)$ for appropriate $p_1,\dots, p_m,p$.
It is known from the work in \cite{CM} for $p>1$ and \cite{KS}, \cite{GT} for $p\le 1$, that the classical Mihlin condition on $\sigma$ in $\mathbb R^{mn}$ yields boundedness for $T_\sigma$ from $L^{p_1}(\mathbb R^n)\times\cdots\times L^{p_m}(\mathbb R^n)$ to $L^{p}(\mathbb R^n)$ for all $1<p_1,\dots, p_m\le \infty$, $1/m<p = (1/p_1+\cdots +1/p_m)^{-1}<\infty$. The Mihlin condition in this setting is usually referred to as the Coifman-Meyer condition, and the associated multipliers bear the same name. The Coifman-Meyer condition cannot be weakened to the Marcinkiewicz condition, as the latter fails in the multilinear setting; see \cite{GK}. Related multilinear multiplier theorems with mixed smoothness (but not necessarily minimal) can be found in \cite{MT}, \cite{MPTT1},
\cite{GHNY}.
A natural question on H\"ormander type multipliers is how the minimal smoothness
$s $ interplays with the
range of $p$'s on which boundedness is expected. In the linear case, this question was studied in \cite{CT}, \cite{Seeger}, and \cite{P1}. Let $L^r_s(\mathbb{R}^n)$ be the Sobolev space consisting of all functions $h$ such that $(I-\Delta)^{s/2}(h)\in L^r(\mathbb{R}^n)$, where $\Delta$ is the Laplacian. In the first paper of this series
\cite{P1}, we showed that the conditions $|1/2-1/p|< s/n$ and $rs>n$ imply $L^p(\mathbb R^n)$ boundedness for $1<p<\infty$ for $T_\sigma$ in the linear case $m=1$, when the
multiplier $\sigma$ lies in the Sobolev space $L^r_s(\mathbb{R}^n)$ uniformly over all annuli.
This minimal smoothness problem in the bilinear setting was first studied in \cite{T} and later in \cite{MT} and \cite{GMT}. These references contain necessary conditions on $s$ when the multiplier lies in the Sobolev space $L^r_s$ with $r=2$; other values of $r$ were considered in \cite{GS}.
Our goal here is to pursue the analogous bilinear question. In this paper we focus on the boundedness of $T_\sigma$ in the local $L^2$ case, i.e., the situation where $1\le p_1,p_2\le 2$ and $1\le p= 1/(1/p_1+1/p_2)\le 2$ under minimal smoothness conditions on $s$. It turns out that to express our result in an optimal fashion, we need to work with
$r>2$. We also work with the case $L^2\times L^2\to L^1$ as boundedness in the remaining local $L^2$
indices follows by duality and interpolation.
We achieve our goal via a new technique for studying boundedness of bilinear operators, based on the tensor product wavelet decomposition developed in \cite{GHH}; this technique was recently used to solve other problems as well; see \cite{He}.
The main result of this paper is the following theorem.
\begin{thm}\label{MR2} Suppose $\widehat\psi\in\mathcal C_0^{\infty}(\mathbb{R}^{2n})$ is positive and supported in the annulus
$\{(\xi,\eta):1/2\le|(\xi,\eta)|\le 2\}$, and that $\sum_{j\in\mathbb Z}\widehat\psi_j(\xi,\eta)= \sum_{j}\widehat\psi(2^{-j}(\xi,\eta))=1$ for $(\xi,\eta)\neq0$. Let $ 1<r<\infty$, $s>\max\{n/2, 2n/r\}$, and suppose there is a constant $A$ such that \begin{equation}\label{usb}
\sup_{j}\|\sigma(2^j\cdot)\widehat\psi\|_{L^r_s(\mathbb{R}^{2n})}\le A<\infty. \end{equation} Then there is a constant $C=C(n,\Psi)$ such that the bilinear operator $$ T_{\sigma}(f,g)(x)= \int_{\mathbb{R}^{2n}}\sigma(\xi,\eta)\widehat f(\xi)\widehat g(\eta)e^{2\pi i x\cdot (\xi+\eta)}d\xi d\eta, $$ initially defined on Schwartz functions $f$ and $g$, satisfies \begin{equation}
\|T_{\sigma}(f,g)\|_{L^1(\mathbb{R}^n)}\le CA\|f\|_{L^2(\mathbb{R}^n)}\|g\|_{L^2(\mathbb{R}^n)}. \end{equation} \end{thm}
The optimality of \eqref{usb} in the preceding theorem is contained in the following result. \begin{thm}\label{MR3} Suppose that for $0<p_1,\dots , p_m<\infty$, $p=(1/p_1+\cdots +1/p_m)^{-1}$, we have \begin{equation} \label{B1}
\|T_\sigma\|_{L^{p_1}(\mathbb{R}^n)\times\cdots\times L^{p_m}(\mathbb{R}^n)\to L^p(\mathbb{R}^n)}\le C\sup_{j\in\mathbb Z}\|\sigma(2^j\cdot)\widehat\Psi\|_{L^r_s(\mathbb{R}^{mn})} \end{equation}
for all bounded functions $\sigma$ for which $\sup_{j\in\mathbb Z}\|\sigma(2^j\cdot)\widehat\Psi\|_{L^r_s(\mathbb{R}^{mn})}<\infty$ (for some fixed $r,s>0$). Then we must necessarily have $s\ge \max\{(m-1)n/2,mn/r\}$. \end{thm}
Finally, we have another set of necessary conditions for the boundedness of $m$-linear multipliers. The sufficiency of these conditions is shown in the third paper of this series.
\begin{thm}\label{PL} Suppose there exists a constant $C$ such that \eqref{B1} holds for all $\sigma$ such that the right-hand side is finite. Then we must necessarily have $$ \frac 1 p-\frac 1 2\le \frac s n+\sum_{i\in I}\Big(\frac 1{p_i}-\frac 1 2\Big), $$ where $I$ is an arbitrary subset of $\{1,2,\dots, m\}$ which may also be empty (in which case the sum is understood to be zero). \end{thm}
\section{Preliminaries}
We utilize wavelets with compact support. Their existence is due to Daubechies \cite{Dau} and their construction is contained in Meyer's book \cite{Meyer1} and Daubechies' book \cite{DauB}. For our purposes we need smooth, compactly supported wavelets of product type; the construction we use here can be found in Triebel~\cite[Proposition 1.53]{Tr1}.
\begin{lm}\label{TrDau}
For any fixed $k\in \mathbb N$ there exist real compactly supported functions $\psi_F,\psi_M\in \mathcal C^k(\mathbb R)$, the class of functions with continuous derivatives of order up to $k$, which satisfy that $\|\psi_F\|_{L^2(\mathbb R)}=\|\psi_M\|_{L^2(\mathbb R)}=1$ and $\int_{\mathbb R}x^{\alpha}\psi_M(x)dx=0$ for $0\le\alpha\le k$, such that, if $\Psi^G$ is defined by $$ \Psi^{G}(\vec x\,)=\psi_{G_1}(x_1)\cdots \psi_{G_{2n}}(x_{2n}) $$ for $G=(G_1,\dots, G_{2n})$ in the set $$
\mathcal I :=\Big\{ (G_1,\dots, G_{2n}):\,\, G_i \in \{F,M\}\Big\} \, ,
$$ then the family of functions $$ \bigcup_{\vec \mu \in \mathbb Z^{2n}}\bigg[ \Big\{ \Psi^{(F,\dots, F)} (\vec x-\vec \mu ) \Big\} \cup \bigcup_{\lambda=0}^\infty \Big\{ 2^{\lambda n}\Psi^{G} (2^{\lambda}\vec x-\vec \mu):\,\, G\in \mathcal I\setminus \{(F,\dots , F)\} \Big\}
\bigg] $$ forms an orthonormal basis of $L^2(\mathbb R^{2n})$, where $\vec x= (x_1, \dots , x_{2n})$. \end{lm}
In order to prove our results, we use the wavelet characterization of Sobolev spaces, following Triebel's book \cite{Tr1}. Let us fix the smoothness $s$; for our purposes we always have $s\leq n+1$. Also, we only work with spaces whose integrability index satisfies $r>1$. Take a smooth function $\varphi$ on $\mathbb R^{2n}$ such that $\widehat\varphi$ is supported in the unit annulus and $\sum_{j=0}^\infty\widehat \varphi_j=1$, where $\widehat\varphi_j=\widehat\varphi(2^{-j}\cdot)$ for $j\ge1$ and $\widehat\varphi_0=\sum_{k\le 0}\widehat\varphi(2^{-k}\cdot)$. Then for a distribution $f\in\mathcal S'(\mathbb R^{2n})$ we define the $F^s_{r,q}$ norm as follows: $$
\|f|F^s_{r,q}(\mathbb R^{2n})\|=
\Big\|\big(\sum_{j=0}^\infty 2^{jsq}|(\widehat\varphi_j\widehat f\,)^{\vee}(\cdot)|^q\big)^{1/q}\Big\|_{L^r(\mathbb R^{2n})}. $$ We then pick wavelets with smoothness and cancellation degrees $k=6n.$ This number suffices for the purposes of the following lemma.
\begin{lm}[{\cite[Theorem 1.64]{Tr1}}]\label{TrSo} Let $0<r<\infty,\ 0<q\le\infty, \ s\in\mathbb R$, and for $\lambda\in \mathbb N$ and $\vec\mu\in\mathbb Z^{2n}$ let $\chi_{\lambda\vec\mu}$ be the characteristic function of the cube $Q_{\lambda\vec\mu}$ centered at $2^{-\lambda}\vec\mu$ with sidelength $2^{1-\lambda}$. For a sequence $\gamma=\{\gamma^{\lambda,G}_{\vec\mu}\}$ define the norm $$
\|\gamma|f^s_{r,q}\|=
\Big\|\Big(\sum_{\lambda,G,\vec\mu}2^{\lambda sq}|\gamma^{\lambda,G}_{\vec\mu}\chi_{\lambda\vec\mu}(\cdot)|^q\Big)^{1/q}\Big\|_{L^r(\mathbb R^{2n})}. $$
Let $\mathbb N\ni k>\max \{s,\frac{4n}{\min(r,q)}+n-s\}$. Let $\Psi_{\vec\mu}^{\lambda,G}$ be the $2n$-dimensional Daubechies wavelets with smoothness $k$ according to Lemma \ref{TrDau}. Let $f\in \mathcal S'(\mathbb R^{2n})$. Then $f\in F^s_{r,q}(\mathbb R^{2n})$ if and only if it can be represented as $$ f=\sum_{\lambda,G,\vec\mu}\gamma^{\lambda,G}_{\vec\mu}2^{-\lambda n}\Psi^{\lambda,G}_{\vec\mu} $$
with $\|\gamma|f^s_{r,q}\|<\infty$, the series converging unconditionally in $\mathcal S'(\mathbb{R}^{2n})$. Furthermore this representation is unique, with $$ \gamma_{\vec\mu}^{\lambda,G}=2^{\lambda n}\langle f,\Psi^{\lambda,G}_{\vec\mu}\rangle, $$ and $$ I: f\mapsto\big\{2^{\lambda n}\langle f,\Psi^{\lambda,G}_{\vec\mu}\rangle\big\} $$ is an isomorphism of $F^s_{r,q}(\mathbb R^{2n})$ onto $f^s_{r,q}.$
\end{lm}
In particular, the Sobolev space $L^r_s(\mathbb R^{2n})$ coincides with $F_{r,2}^s(\mathbb R^{2n})$. In the proof of our results, we use for fixed $\lambda$ the following estimate: \begin{equation}\label{1L}
\Big\|\Big(\sum_{G,\vec\mu }|\langle \sigma, \Psi^{\lambda,G}_{{\vec\mu}} \rangle \Psi^{\lambda,G}_{{\vec\mu}}|^2\Big)^{1/2}\Big\|_{L^r}\leq
C\|\sigma\|_{L^r_s}2^{-s\lambda}. \end{equation}
To verify this, by Lemma \ref{TrSo}, we have $$
\bigg\|\sum_{G,{\vec\mu}}2^{\lambda s}|\gamma^{\lambda,G}_{\vec\mu} \chi_{Q_{\lambda,{\vec\mu}}}|\bigg\|_{L^r} \leq C\|\sigma\|_{L^r_s}, $$ with $\gamma^{\lambda,G}_{\vec\mu}=2^{\lambda n}\langle \sigma, \Psi^{\lambda,G}_{\vec\mu}\rangle.$ Notice that $2^{-\lambda n}\Psi^{\lambda,G}_{{\vec\mu}}$ are $L^{\infty}$ normalized wavelets, and there exists an absolute constant $B$ such that the support of
$\Psi^{\lambda,G}_{{\vec\mu}}$ is always contained in $\cup_{|\vec\nu|\le B} Q_{\lambda,{\vec\mu}+\vec\nu}$. This then implies~\eqref{1L}.
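In more detail, here is a sketch of the deduction: since $2^{-\lambda n}\Psi^{\lambda,G}_{\vec\mu}$ is bounded and supported in $\cup_{|\vec\nu|\le B} Q_{\lambda,{\vec\mu}+\vec\nu}$, and $\gamma^{\lambda,G}_{\vec\mu}=2^{\lambda n}\langle \sigma, \Psi^{\lambda,G}_{\vec\mu}\rangle$, passing from the $\ell^2$ sum to the larger $\ell^1$ sum gives the pointwise bound
$$
\Big(\sum_{G,\vec\mu}|\langle \sigma, \Psi^{\lambda,G}_{\vec\mu} \rangle \Psi^{\lambda,G}_{\vec\mu}|^2\Big)^{1/2}
\le C\sum_{|\vec\nu|\le B}\,\sum_{G,\vec\mu}|\gamma^{\lambda,G}_{\vec\mu}|\,\chi_{Q_{\lambda,{\vec\mu}+\vec\nu}},
$$
and each inner sum is a translate by $2^{-\lambda}\vec\nu$ of $\sum_{G,\vec\mu}|\gamma^{\lambda,G}_{\vec\mu}|\chi_{Q_{\lambda,\vec\mu}}$; taking $L^r$ norms and applying the preceding display, with the factor $2^{-s\lambda}$ coming from the weight $2^{\lambda s}$, yields \eqref{1L}.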
\section{The main lemma}\label{MLS}
Let $Q$ denote the cube $[-2,2]^{2n}$ in $\mathbb R^{2n}$, and define the Sobolev space $L^r_s(Q)$ as the space of distributions supported in $Q$ which lie in $L^r_s(\mathbb R^{2n})$.
\begin{lemma}\label{MR} For $r\in (1,\infty)$ let $s>\max(n/2,2n/r)$ and suppose $\sigma \in L^r_s(Q).$ Then $\sigma$ is a bilinear multiplier bounded from $L^2(\mathbb{R}^n)\times L^2(\mathbb{R}^n)$ to $L^1(\mathbb{R}^n)$. \end{lemma}
\begin{proof}
The important inequality is the one for a single generation of wavelets (with $\lambda$ fixed). For a fixed $\lambda$, by the uniform compact supports of the elements in the basis, we can classify the wavelets into finitely many subclasses such that the supports of the elements in each subclass are pairwise disjoint. We denote by
$D_{\lambda,\kappa}$ such a subclass and define the related symbol $$ \sigma_{\lambda,\kappa}=\sum_{\omega\in D_{\lambda,\kappa}}a_\omega \omega, $$
where $a_\omega=\langle \sigma, \omega \rangle .$ The $\omega$'s are $L^2$ normalized, but we change the normalization to $L^r,$ i.e. we consider $\tilde \omega = \omega/\|\omega\|_{L^r}$ and $b_\omega=a_\omega\|\omega\|_{L^r}.$ We have $$ \sigma_{\lambda,\kappa}=\sum_{\omega\in D_{\lambda,\kappa}}b_\omega \tilde \omega $$ and from the Sobolev smoothness and the fact that the supports of the wavelets do not overlap, with the aid of \eqref{1L} we obtain \begin{align*}
B =\bigg( \sum_{\omega\in D_{\lambda,\kappa}} |b_\omega|^r\bigg)^{1/r}
& = \bigg(\sum_{\omega}\int \Big(|a_\omega\omega|^2\Big)^{r/2}dx\bigg)^{1/r} \\
& \le \Big\|\Big(\sum_\omega |a_\omega\omega|^2\Big)^{1/2}\Big\|_{L^r}\\
& \leq C \|\sigma\|_{L^r_s} 2^{-s\lambda}. \end{align*} Now, each $\omega$ in $D_{\lambda,\kappa}$ is of the form $\omega=\omega_k\omega_l$ with ${\vec\mu}=(k,l)$, where $k$ and $l$ both range over index sets $U_1$ and $U_2$ of cardinality at most $C2^{\lambda n}.$ Moreover we denote by $b_{kl}$ the coefficient $b_\omega$, and we have $$ \sigma_{\lambda,\kappa}=\sum_{k\in U_1}\tilde \omega_k\sum_{l\in U_2} b_{k l} \tilde \omega_l.$$
Let $\tau_{\rm max}$ be the integer such that $2n\lambda/r\le \tau_{\max}< 1+2n\lambda/r$. For a nonnegative integer $\tau<\tau_{\rm max}$ and the constant $K=2^{\tau r/2}$ (depending on $\tau$) we introduce the following decomposition: we define the level set according to $b$ as $$
D_{\lambda,\kappa}^{\tau}=\{\omega\in D_{\lambda,\kappa}: B2^{-\tau}<|b_\omega|\leq B2^{-\tau+1}\}, $$ when $\tau<\tau_{\max}$.
We also define the set $$
D_{\lambda,\kappa}^{\tau_{\rm max}}= \{\omega\in D_{\lambda,\kappa}: |b_\omega|\leq B2^{-\tau_{\rm max}+1}\}. $$ We now take the part with heavy columns $$ D_{\lambda,\kappa}^{\tau,1}=\{\omega_k\omega_l\in D_{\lambda,\kappa}^{\tau}: {\rm card}\{s:\omega_k\omega_s\in D_{\lambda,\kappa}^{\tau}\}\geq K\}, $$ and the remainder $$ D_{\lambda,\kappa}^{\tau,2} = D_{\lambda,\kappa}^{\tau}\setminus D_{\lambda,\kappa}^{\tau,1}. $$ We also use the following notation for the index sets: $U_1^{\tau,1}$ is the set of $k$'s such that $\omega_k\omega_l\in D_{\lambda,\kappa}^{\tau,1}$ for some $l$, and for each $k\in U_1^{\tau,1}$ we denote by $U_{2,k}^{\tau,1}$ the set of corresponding second indices $l$ such that $\omega_k\omega_l \in D_{\lambda,\kappa}^{\tau,1}$, whose cardinality is at least $K$. We also denote $$ \sigma_{\lambda,\kappa}^{\tau,1}=\sum_{k\in U_1^{\tau,1}}\tilde \omega_k\sum_{l\in U_{2,k}^{\tau,1}} b_{kl} \tilde \omega_l,$$ thus summing over the wavelets in the set $D_{\lambda,\kappa}^{\tau,1}.$ The symbol $\sigma_{\lambda,\kappa}^{\tau,2}$ is then defined by summation over $D_{\lambda,\kappa}^{\tau,2}.$
We first treat the part $\sigma_{\lambda,\kappa}^{\tau,1}.$
Denote $\gamma={\rm card }\ U_1^{\tau,1}$. For $\tau<\tau_{\rm max}$ the $\ell^r$-norm of the part of the sequence $\{b_{kl}\}$ indexed by the set $D_{\lambda,\kappa}^{\tau,1}$ is comparable to $$ C \bigg(\sum_{k\in U_1^{\tau,1}}\sum_{l\in U_{2,k}^{\tau,1}} (B2^{-\tau})^r\bigg)^{1/r} $$ which is at least as big as $C(\gamma K (B 2^{-\tau})^r)^{1/r}$. However this $\ell^r$-norm is smaller than $B$, therefore we get $\gamma\leq C2^{\tau r}/ K= C2^{\tau r/2}.$ For $\tau=\tau_{\rm max}$ we trivially have that
$\gamma\le C2^{n\lambda}= C2^{\tau_{\max} r/2}.$
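For completeness, the counting argument runs as follows:
$$
B\ \ge\ \Big(\sum_{k\in U_1^{\tau,1}}\sum_{l\in U_{2,k}^{\tau,1}}|b_{kl}|^r\Big)^{\frac1r}\ \ge\ c\,\big(\gamma\, K\,(B2^{-\tau})^r\big)^{\frac1r},
$$
so $1\ge c\,\gamma K2^{-\tau r}$, and hence $\gamma\le C2^{\tau r}/K=C2^{\tau r/2}$.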
For $f,g \in \mathcal S$ we estimate the multiplier norm of $\sigma_{\lambda,\kappa}^{\tau,1}$ as follows: $$ \begin{aligned}
\|{\mathcal F}^{-1}(\sigma_{\lambda,\kappa}^{\tau,1}\widehat f \widehat g\,)\|_{L^1}&\leq \sum_{k\in U_1^{\tau,1}}\| \widehat f \tilde \omega_k\|_{L^2}\| \sum_{l\in U_{2,k}^{\tau,1}} b_{kl} \tilde \omega_l \widehat g\,\|_{L^2} \\&\leq C \sum_{k\in U_1^{\tau,1}}\| \widehat f \tilde \omega_k\|_{L^2} \sup_{l}|b_{kl}|2^{\lambda n/r}\|g\|_{L^2} \\
&\leq C2^{\lambda n/r}\|g\|_{L^2}
\bigg(\sum_k\sup_{l}|b_{kl}|^2\bigg)^{1/2} \bigg(\sum_{k}\| \widehat f \tilde\omega_k\|_{L^2}^2 \bigg)^{1/2}. \end{aligned} $$
In view of orthogonality and of the fact that $\|\tilde \omega_k\|_{L^{\infty}}\approx 2^{\lambda n/r}$ we obtain the inequality $$
\bigg(\sum_{k}\| \widehat f \tilde\omega_k\|_{L^2}^2 \bigg)^{1/2} \leq C 2^{\frac {\lambda n}r}\|f\|_{L^2}. $$ By the definition of $U_1^{\tau,1}$ we have also that $$
\bigg(\sum_k\sup_{l}|b_{kl}|^2\bigg)^{1/2} \leq B 2^{-\tau}\gamma^{\frac 12} . $$ Collecting these estimates, we deduce \begin{equation}\label{rows}
\|{\mathcal F}^{-1}(\sigma_{\lambda,\kappa}^{\tau,1}\widehat f \,\,\widehat g\,)\|_{L^1}\leq
C\|\sigma\|_{L^r_s}\gamma^{\frac 1 2}2^{\lambda( \frac{2n}r-s)} 2^{-\tau}\|f\|_{L^2}\|g\|_{L^2} . \end{equation}
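Explicitly, the bound \eqref{rows} is obtained by combining the three preceding displays with the estimate $B\le C\|\sigma\|_{L^r_s}2^{-s\lambda}$ established earlier:
$$
\|{\mathcal F}^{-1}(\sigma_{\lambda,\kappa}^{\tau,1}\widehat f \,\,\widehat g\,)\|_{L^1}
\le C\,2^{\frac{\lambda n}r}\|g\|_{L^2}\cdot B2^{-\tau}\gamma^{\frac12}\cdot 2^{\frac{\lambda n}r}\|f\|_{L^2}
\le C\|\sigma\|_{L^r_s}\,\gamma^{\frac 12}\,2^{\lambda(\frac{2n}r-s)}\,2^{-\tau}\,\|f\|_{L^2}\|g\|_{L^2}.
$$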
The set $D_{\lambda,\kappa}^{\tau,2}$ has the property that in each column there are at most $K$ elements. Let us denote by $V^2$ the index set of all second indices such that $\tilde\omega_k\tilde\omega_l\in D_{\lambda,\kappa}^{\tau,2}$, and for each $l\in V^2$ set $V^{1,l}$ the corresponding sets of first indices. Thus $$D_{\lambda,\kappa}^{\tau,2}=\{\omega_k\omega_l: l\in V^2, k\in V^{1,l}\}.$$ We then have $$ \begin{aligned}
\|{\mathcal F}^{-1}(\sigma_{\lambda,\kappa}^{\tau,2} \widehat f \,\,\widehat g\, )\|_{L^1}&\leq \sum_{l\in V^2} \big\|\sum_{k\in V^{1,l}}b_{kl} \tilde \omega_k\widehat f\, \big\|_{L^2}
\|\tilde \omega_l\widehat g\, \|_{L^2} \\& \leq \bigg( \sum_{l\in V^2}
\big\|\sum_{k\in V^{1,l}} b_{kl}\tilde \omega_k\widehat f\, \big\|_{L^2}^2 \bigg)^{1/2} \bigg(\sum_{l\in V^2} \|\tilde \omega_l\widehat g\, \|_{L^2}^2\bigg)^{1/2}. \end{aligned} $$ We need to estimate $$ \begin{aligned}
\sum_{l\in V^2}
\Big\|\sum_{k\in V^{1,l}} b_{kl}\tilde \omega_k\widehat f\, \Big\|_{L^2}^2
&\leq C \int_{Q} \sum_{l\in V^2} \sum_{k\in V^{1,l}} B^22^{-2\tau} |\tilde \omega_k|^2 |\widehat f(\xi_1)|^2 d\xi_1\\& \leq C K 2^{\frac {2n\lambda}r} B^{2}2^{-2\tau}\|f\|_{L^2}^2, \end{aligned} $$ since, by the disjointness of the supports of $\tilde \omega_k$,
$\sum_k|\tilde\omega_k|^2\le C2^{2n\lambda/r}$, and for each $k$ the number of indices $l$ with $k\in V^{1,l}$ is at most $K$.
Returning to our estimate, and using orthogonality, we obtain \begin{equation} \label{columns}
\|{\mathcal F}^{-1}(\sigma_{\lambda,\kappa}^{\tau,2}\widehat f \,\,\widehat g\,)\|_{L^1}\leq C
\|\sigma\|_{L^r_s}K^{\frac 12} 2^{-s\lambda} 2^{-\tau} 2^{ \frac{2\lambda n}r} \|f\|_{L^2}\|g\|_{L^2}. \end{equation}
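Spelled out: with $B\le C\|\sigma\|_{L^r_s}2^{-s\lambda}$ and the orthogonality bound $\sum_{l\in V^2}\|\tilde \omega_l\widehat g\,\|_{L^2}^2\le C2^{\frac{2\lambda n}r}\|g\|_{L^2}^2$ (the $\tilde\omega_l$ have disjoint supports and $\|\tilde\omega_l\|_{L^{\infty}}\le C2^{\frac{\lambda n}r}$), the two factors combine as
$$
\|{\mathcal F}^{-1}(\sigma_{\lambda,\kappa}^{\tau,2}\widehat f \,\,\widehat g\,)\|_{L^1}
\le \big(CK2^{\frac{2n\lambda}r}B^22^{-2\tau}\big)^{\frac12}\|f\|_{L^2}\cdot C2^{\frac{\lambda n}r}\|g\|_{L^2}
\le C\|\sigma\|_{L^r_s}K^{\frac12}2^{-s\lambda}2^{-\tau}2^{\frac{2\lambda n}r}\|f\|_{L^2}\|g\|_{L^2}.
$$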
For any $\tau\le\tau_{\rm max}$ the two inequalities~\eqref{rows} and~\eqref{columns} yield the same bound, due to $\gamma\leq C2^{\tau r}/K= C2^{\tau r/2}.$ Therefore, we have \begin{equation}\label{IMP}
\|{\mathcal F}^{-1}(\sigma_{\lambda,\kappa}^{\tau}\widehat f \,\,\widehat g\,)\|_{L^1}\leq C \|\sigma\|_{L^r_s}2^{(\frac r4-1)\tau}2^{\lambda(\frac{ 2n} r -s)} \|f\|_{L^2}\|g\|_{L^2}. \end{equation} The right-hand side has a negative exponent in $\lambda$, since $s>2n/r.$
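To see the merging concretely: substituting $\gamma^{\frac12}\le C2^{\frac{\tau r}4}$ into \eqref{rows}, and using $K^{\frac12}=2^{\frac{\tau r}4}$ together with $2^{-s\lambda}2^{\frac{2\lambda n}r}=2^{\lambda(\frac{2n}r-s)}$ in \eqref{columns}, both bounds become
$$
C\|\sigma\|_{L^r_s}\,2^{(\frac r4-1)\tau}\,2^{\lambda(\frac{2n}r-s)}\,\|f\|_{L^2}\|g\|_{L^2},
$$
which is exactly \eqref{IMP}.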
The behavior in $\tau$ depends on $r$. For $1<r<4$ the bound \eqref{IMP} forms a decaying geometric series in $\tau$, and hence the sum over $0\le\tau\le\tau_{\max}$ and $\lambda\ge0$ is finite.
However, if $r\geq 4,$ we need to use the following observation: $$ \sum_{\tau=0}^{\tau_{\rm max} }2^{(\frac r4-1)\tau}\leq C \tau_{\rm max}2^{(\frac r 4 -1)\tau_{\rm max}} \le C \big(\tfrac{ 2n\lambda}r\big)2^{(\frac r4-1) \frac{2n\lambda}r}. $$ Therefore, by summing over $\tau$ in \eqref{IMP} we obtain $$
\sum_{\tau=0}^{\tau_{\rm max}} \|{\mathcal F}^{-1}(\sigma_{\lambda,\kappa}^{\tau}\widehat f \,\,\widehat g\,)\|_{L^1}\leq C \|\sigma\|_{L^r_s} \big(\tfrac{ 2n\lambda}r\big)2^{(\frac r4-1) (1+\frac{2n\lambda}r)} 2^{\lambda(\frac{ 2n} r -s)} \|f\|_{L^2}\|g\|_{L^2}. $$ Since $ (2n\lambda/r)2^{(r/4-1)2n\lambda/r} 2^{\lambda(2n/r -s)}= (2n\lambda/r) 2^{\lambda(n/2-s)},$ these estimates form a summable series in $\lambda$ only if $s>n/2.$
We have $1\leq \kappa\leq C_{n}$ and $\sigma=\sum_{\lambda=0}^{\infty} \sum_\kappa \sigma_{\lambda,\kappa}.$ Therefore for $s$ and $r$ related as in $s>\max(2n/r, n/2)$ we have convergent series, and we obtain the result by summation in $\tau$ first and then in $\lambda$. \end{proof}
\begin{rmk} We see from the proof (or by an easy dilation argument) that the assumption that $Q$ is $[-2,2]^{2n}$ is not essential, and the statement remains valid when $Q$ is any fixed compact set. \end{rmk}
\section{The proof of Theorem \ref{MR2}}
\begin{proof} We use an idea developed in \cite{GHH}, where we consider off-diagonal and diagonal cases separately. For the former we use the Hardy-Littlewood maximal function and a ``square'' function, and for the latter we use Lemma \ref{MR} in Section~\ref{MLS}.
We introduce the notation needed to study these cases. We define $\sigma_j(\xi,\eta)= \sigma(\xi,\eta)\widehat\psi(2^{-j}(\xi,\eta))$ and write $m_j(\xi,\eta)=\sigma_j(2^j(\xi,\eta))$. We note that all $m_j$ are supported in the annulus $\{1/2\le|(\xi,\eta)|\le2\}$ and that $\|m_j\|_{L^r_s}\le A$ uniformly in $j$ by assumption \eqref{usb}.
By the discussion in the previous section, for each $m_j$ we have the decomposition $m_j(\xi,\eta)=\sum_{\kappa}\sum_{\lambda}\sum_{k,l}b_{k,l}\tilde\omega_k(\xi)\tilde \omega_l(\eta)= \sum_{\lambda}m_{j,\lambda}$ with $\|\tilde \omega_k\|_{L^{\infty}}\approx 2^{\lambda n/r}$
and $(\sum_{k,l}|b_{k,l}|^r)^{1/r}\le CA2^{-\lambda s}$. Assume that both $\psi_F$ and $\psi_M$ are supported in $B(0,N)$ for some large fixed number $N$. We define the off-diagonal parts $$
m_{j,\lambda}^2(\xi,\eta)=\sum_{\kappa}\sum_k\sum_{|l|\le 2\sqrt nN}b_{k,l}\tilde \omega_k(\xi) \tilde \omega_l(\eta) $$ and $$
m_{j,\lambda}^3(\xi,\eta)=\sum_{\kappa}\sum_l\sum_{|k|\le 2\sqrt nN}b_{k,l}\tilde \omega_k(\xi) \tilde \omega_l(\eta), $$ The remainder at the level $\lambda$ is then $m_{j,\lambda}^1(\xi,\eta)=[m_{j,\lambda}-m_{j,\lambda}^2-m_{j,\lambda}^3](\xi,\eta)$, with each wavelet involved supported away from the axes. Moreover for $i=1,2,3$, we define $m_j^i=\sum_{\lambda}m_{j,\lambda}^i$, $\sigma_j^i=m_j^i(2^{-j}\cdot)$, and $\sigma^i=\sum_j\sigma_j^i$. Notice that $\sigma$ is equal to the sum $\sigma^1+\sigma^2+\sigma^3$.
\noindent {\em (i) The Off-diagonal Cases}
We consider the off-diagonal cases $m_{j,\lambda}^2$ and $m_{j,\lambda}^3$ first. By symmetry, it suffices to consider $$ T_{m_{j,\lambda}^2}(f,g)(x)= \int_{\mathbb{R}^{2n}}m_{j,\lambda}^2(\xi,\eta)\widehat f(\xi)\widehat g(\eta)e^{2\pi i x(\xi+\eta)}d\xi d\eta . $$
By the definition $\tilde \omega_l=2^{\lambda n/2}\Psi(2^{\lambda}x-l)/\|\omega_l\|_{L^r}$, we have
$|(\tilde \omega_l\widehat g)^{\vee}(x)|\le C2^{\lambda n/r}M(g)(x)$, where $M(g)(x)$ is the Hardy-Littlewood maximal function. Recalling the boundedness of $b_{k,l}$ and of $\tilde \omega_{k}$, we therefore have $$
|(\sum_{k}b_{k,l}\tilde \omega_k\widehat f\, )^{\vee}|
\le 2^{\lambda(n/r-s)} |(m\widehat f\chi_{1/2\le|\xi|\le2})^{\vee}| $$
with $\|m\|_{L^{\infty}}\le C$. In view of the finiteness of $N$ and the number of $\kappa$'s, we finally obtain a pointwise control $$
|T_{m_{j,\lambda}^2}(f,g)(x)|\le C2^{(2n/r-s)\lambda} |T_m( f')(x)|M(g)(x), $$
where $\widehat{f'}=\widehat f \chi_{1/2\le|\xi|\le2}$.
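The maximal function bound $|(\tilde \omega_l\widehat g)^{\vee}|\le C2^{\lambda n/r}M(g)$ invoked above is the standard convolution estimate; as a sketch, writing $\tilde\omega_l=c\,2^{\lambda n/r}\Psi^G(2^{\lambda}\cdot-l)$ with $\Psi^G$ compactly supported and of class $\mathcal C^k$, $k\ge n+1$, we have
$$
(\tilde \omega_l\widehat g\,)^{\vee}=g*(\tilde \omega_l)^{\vee},
\qquad
|(\tilde \omega_l)^{\vee}(x)|\le C\,2^{\frac{\lambda n}r}\,\frac{2^{-\lambda n}}{(1+|2^{-\lambda}x|)^{n+1}},
$$
and convolution with an $L^1$-normalized radially decreasing dilate is pointwise dominated by a constant multiple of the Hardy--Littlewood maximal function.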
Observe that $$ T_{\sigma_j^2}(f,g)(x)=2^{jn}T_{m_j^2}(f_j,g_j)(2^jx) $$
with $\widehat f_j(\xi)=2^{jn/2}\widehat f(2^j\xi)\chi_{1/2\le|\xi|\le2}$ and $\widehat g_j(\xi)=2^{jn/2}\widehat g(2^j\xi)$. Note that $f_j$ and $g_j$ are not defined in the same way. By a standard argument using the square function characterization of the Hardy space $H^1$, we control
$\|T_{\sigma^2}(f,g)\|_{L^1}$ by \begin{align*}
\Big\|\Big(\sum_j|T_{\sigma_j^2}(f,g)|^2\Big)^{1/2}\Big\|_{L^1}
= & \Big\|\Big(\sum_j|
2^{jn}T_{m_j^2}(f_j,g_j)(2^j\cdot)|^2\Big)^{1/2}\Big\|_{L^1} \\
\le & \sum_{\lambda}2^{(2n/r-s)\lambda}\|g\|_{L^2}\Big(\int \sum_j|\widehat f_j(\xi)|^2d\xi\Big)^{1/2}\, . \end{align*} Because of the definition of $\widehat f_j$, we see that $$
\int \sum_j|\widehat f_j(\xi)|^2d\xi
=\int \sum_j|\widehat f(\xi)|^2\chi_{2^{j-1}\le|\xi|\le2^{j+1}}d\xi\le C\|f\|_{L^2}^2. $$ The exponential decay in $\lambda$ given by the condition $rs>2n$ then concludes the proof of the off-diagonal cases.
\noindent {\em (ii) The Diagonal Case}
This case is relatively simple by an argument similar to the diagonal part in \cite{GHH}, because we have dealt with the key ingredient in Lemma \ref{MR}. We give a brief proof here for completeness. By dilation we have that $$
\|T_{\sigma^1}(f,g)(x)\|_{L^1} \le
\|\sum_j\sum_{\lambda}T_{\sigma_{j,\lambda}^1}(f,g)\|_{L^1}
\le \sum_{\lambda}\sum_j\|2^{jn}T_{m_{j,\lambda}^1}(f_j,g_j)(2^j\cdot)\|_{L^1}, $$
where $\widehat f_j(\xi)=2^{jn/2}\widehat f(2^{j}\xi)\chi_{C2^{-\lambda}\le |\xi|\le 2}(\xi)$
because in the support of $m_{j,\lambda}^1$ we have $C2^{-\lambda}\le |\xi|\le 2$, and $g_j$ is defined similarly. For the last line we apply Lemma \ref{MR} and obtain, when $ r\ge4$, the estimate $$
\sum_{\lambda}C \tfrac{2n\lambda}r 2^{\lambda(n/2-s)}\sum_j\|\widehat f_j\|_{L^2}\|\widehat g_j\|_{L^2}\le
\sum_{\lambda}C \tfrac{2n\lambda}r 2^{\lambda(n/2-s)}\Big(\sum_j\|\widehat f_j\|^2_{L^2}\Big)^{1/2}\Big(\sum_j\|\widehat g_j\|^2_{L^2}\Big)^{1/2} . $$
When $r< 4$, we have a similar control $$
\sum_{\lambda}C 2^{\lambda(2n/r-s)}\sum_j\|\widehat f_j\|_{L^2}\|\widehat g_j\|_{L^2}\le
\sum_{\lambda}C 2^{\lambda(2n/r-s)}\Big(\sum_j\|\widehat f_j\|^2_{L^2}\Big)^{1/2}\Big(\sum_j\|\widehat g_j\|^2_{L^2}\Big)^{1/2} . $$ Observe that $$
\sum_j\|\widehat f_j\|^2_{L^2}=\int|\widehat f(\xi)|^2\sum_j\chi_{2^{-\lambda-j}\le|\xi|\le 2^{1-j}}(\xi)d\xi
\le C(\lambda+1)\|f\|_{L^2}^2, $$ since each $\xi\neq0$ belongs to at most $\lambda+2$ of these annuli. So in either case, under the restriction $s>\max\{n/2, 2n/r\}$, the sum over $\lambda$ is controlled by $C\|f\|_{L^2}\|g\|_{L^2}$. Thus we conclude the proof of the diagonal case and of Theorem \ref{MR2}. \end{proof}
\section{Necessary Conditions}
For a bounded function $\sigma$, let $T_\sigma$ be the $m$-linear multiplier operator with symbol $\sigma$. In this section we obtain examples for $m$-linear multiplier operators that impose restrictions on the indices and the smoothness in order to have \begin{equation} \label{B33}
\|T_\sigma\|_{L^{p_1}(\mathbb{R}^n)\times\cdots\times L^{p_m}(\mathbb{R}^n)\to L^p(\mathbb{R}^n)}\le C\sup_{j\in\mathbb Z}\|\sigma(2^j\cdot)\widehat\Psi\|_{L^r_s(\mathbb{R}^{mn})}. \end{equation} These conditions show in particular that the restriction on $s$ in Theorem \ref{MR2} is necessary.
We first prove Theorem~\ref{MR3} via two counterexamples; these are contained in Proposition \ref{Sq} and Proposition~\ref{SIS}, respectively.
\begin{prop}\label{Sq} Under the hypothesis of Theorem~\ref{MR3}
we must have $s\ge (m-1)n/2$. \end{prop} \begin{proof} We first present the bilinear case in dimension one to demonstrate the idea, and then extend the argument to general $m$ and to higher dimensions.
We fix a Schwartz function $\varphi$ with $\hat \varphi$ supported in $[-1/100,1/100]$. Let $\{a_j(t)\}_{j }$ be a sequence of Rademacher functions indexed by positive integers, and
for $N>1$ define $$ \widehat{ f_N}(\xi_1)=\sum_{j=1}^N a_j(t_1) \hat \varphi (N \xi_1 -j)\, , \qquad \widehat{ g_N}(\xi_2)=\sum_{k=1}^N a_k(t_2) \hat \varphi (N \xi_2 -k). $$ Let $\phi$ be a smooth function supported in $[-\tf1{10},\tf1{10}]$ and equal to $1$ on $[-\tf1{20},\tf1{20}]$. We construct the multiplier $\sigma_N$ of the bilinear operator $T_N$ as follows: \begin{equation}\label{si_N} \sigma_N=\sum_{j=1}^{N}\sum_{k=1}^N a_j(t_1)a_k(t_2) a_{j+k}(t_3)c_{j+k}\phi(N \xi_1-j)\phi(N \xi_2-k), \end{equation} where $c_l=1$ when $9N/10\le l\le 11N/10$ and $c_l=0$ elsewhere. Hence \begin{align*} T_N(f_N,g_N)(x)=&\sum_{j=1}^{N}\sum_{k=1}^N a_{j+k}(t_3)c_{j+k}\tf1 {N^2} \varphi(x/N) \varphi(x/N)e^{2\pi ix(j+k)/N}\\ =&\sum_{l=2}^{2N}\sum_{k=s_l}^{S_l}a_l(t_3)c_{l}\tf1 {N^2} \varphi(x/N) \varphi(x/N)e^{2\pi ixl/N}, \end{align*}
where $s_l=\max(1,l-N)$ and $S_l=\min(N,l-1)$. We estimate $\|f_N\|_{L^{p_1}(\mathbb{R})}$, $\|g_N\|_{L^{p_2}(\mathbb{R})}$,
$\|\sigma_N\|_{L^{r}_s(\mathbb{R}^2)}$ and $\|T_N(f_N,g_N)\|_{L^{p}(\mathbb{R})}$.
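The displayed formula for $T_N(f_N,g_N)$ above merges three elementary observations, which we record for the reader's convenience:
\[
\phi(N\xi_1-j)\equiv 1 \ \text{ on } \ \mathrm{supp}\,\hat\varphi(N\xi_1-j),
\qquad
\big(\hat\varphi(N\cdot-j)\big)^{\vee}(x)=\frac1N\,\varphi(x/N)\,e^{2\pi ixj/N},
\]
and, since Rademacher functions take only the values $\pm1$,
\[
a_j(t_1)^2=a_k(t_2)^2=1,
\]
so the factors $a_j(t_1)a_k(t_2)$ in $\sigma_N$ cancel against the corresponding factors coming from $\widehat{f_N}$ and $\widehat{g_N}$.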
First we prove that
$\int_0^1\|f_N\|_{L^{p_1}(\mathbb{R})}^{p_1}\,dt_1\approx N^{1-\tfrac{p_1}2}$. By Khintchine's inequality we have \begin{align*}
\int_0^1\|f_N\|_{L^{p_1}}^{p_1}dt_1=&
\int_{\mathbb{R}}\int_0^1\big|\sum_{j=1}^Na_j(t_1)\frac{\varphi(x/N)}Ne^{2\pi ixj/N}\big|^{p_1}dt_1dx\\
\approx&\int_{\mathbb{R}}\Big(\sum_{j=1}^N \Big| \frac{\varphi(x/N)}N \Big|^2\Big)^{p_1/2}dx\\
\approx&\,\,N^{-p_1/2}\int_{\mathbb{R}} \big|\varphi(x/N)\big|^{p_1}dx\\ \approx&\,\, N^{1-\tfrac{p_1}2}. \end{align*} Hence
$\|f_N\|_{L^{p_1}(\mathbb{R}\times[0,1],\ dxdt)}\approx N^{\tf1{p_1}-\tf12}$. Similarly
$\|g_N\|_{L^{p_2}(\mathbb{R}\times[0,1],\ dxdt)}\approx N^{\tf1{p_2}-\tf12}$. The same idea gives that \begin{align*}
\int_0^1\|T_N(f_N,g_N)\|_{L^{p}}^{p}dt_3\approx& \int_{\mathbb{R}}\Big(\sum_{l=2}^{2N}
\big|c_l(S_l-s_l)\tf1 {N^2}
\varphi^2(x/N)e^{2\pi ixl/N}\big|^2\Big)^{p/2}dx\\ \approx& \int_{\mathbb{R}} \Big(\sum_{l=9N/10}^{11N/10}(S_l-s_l)^2\Big)^{p/2} \tf1 {N^{2p}}
|\varphi(x/N)|^{2p}dx\\
\approx&\,\,N^{\tfrac{3p}2-2p}\int_{\mathbb{R}}|\varphi(x/N)|^{2p}dx\\ \approx&\,\, N^{1-\tfrac{p}2}. \end{align*} In other words we showed that
$\|T_N(f_N,g_N)\|_{L^{p}(\mathbb{R}\times[0,1],\ dxdt)}\approx N^{\tf1{p}-\tf12}$.
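The middle step in the preceding computation relies on the elementary count
\[
S_l-s_l=\min(N,l-1)-\max(1,l-N)=
\begin{cases}
l-2, & 2\le l\le N,\\
2N-l, & N<l\le 2N,
\end{cases}
\]
so that $S_l-s_l\approx N$ for $9N/10\le l\le 11N/10$, whence
\[
\sum_{l=9N/10}^{11N/10}(S_l-s_l)^2\approx \frac N5\cdot N^2\approx N^3,
\]
producing the factor $N^{3p/2}$; combined with $N^{-2p}$ and $\int_{\mathbb{R}}|\varphi(x/N)|^{2p}dx\approx N$ this yields the exponent $\tfrac{3p}2-2p+1=1-\tfrac p2$.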
As for $\sigma_N$, we have the following result whose proof can be found in \cite[Lemma 4.2]{P1}.
\begin{lm}\label{SN} For the multiplier $\sigma_N$ defined in \eqref{si_N} and any $s\in(0,1)$, there exists a constant $C_s$ such that \begin{equation}
\|\sigma_N\|_{L^r_s(\mathbb{R}^2)}\le C_sN^s. \end{equation} \end{lm}
Applying \eqref{B1} to $f_N$, $g_N$ and $T_N$ defined above, raising both sides to the $p$-th power and integrating with respect to $t_1$, $t_2$ and $t_3$, we obtain $$
\bigg(\int_0^1\int_0^1\int_0^1\|T_N(f_N,g_N)\|_{L^p}^pdt_3dt_1dt_2\bigg)^{1/p}\le
C_sN^s
\bigg(\int_0^1\|f_N\|^p_{L^{p_1}}dt_1\int_0^1\|g_N\|_{L^{p_2}}^pdt_2\bigg)^{1/p}, $$ which, combined with the estimates on $f_N$, $g_N$ and $T_N(f_N,g_N)$ obtained above, implies $$ N^{\tf1{p}-\tf12}\le C_s N^s N^{\tf1{p_1}-\tf12}N^{\tf1{p_2}-\tf12}. $$ Since $\tf1p=\tf1{p_1}+\tf1{p_2}$, this reduces to $N^{1/2}\le C_s N^s$, which can hold as $N\to\infty$ only if $s\ge 1/2$.
We now treat the general case $m\ge2$, still with $n=1$. For $1\le k\le m$ we use $$ \widehat {f_k}(\xi_k)=\sum_{j=1}^N a_j(t_k) \widehat \varphi (N \xi_k -j), $$ and $$ \sigma_N=\sum_{j_1=1}^{N}\cdots\sum_{j_m=1}^N a_{j_1}(t_1)\cdots a_{j_m}(t_m) a_{j_1+\cdots+j_m}(t_{m+1})c_{j_1+\cdots+j_m}\prod_{k=1}^m\phi(N \xi_k-j_k). $$ By an argument similar to the case $m=2$, $n=1$, we have $$
\|f_k\|_{L^{p_k}(\mathbb{R}\times[0,1],\ dxdt)}\approx N^{\tf1{p_k}-\tf12}, $$
$\|\sigma_N\|_{L^r_s}\le C N^s$ and \begin{equation}\label{mo}
\|T_{\sigma_N}(f_1,\dots,f_m)\|_{L^p(\mathbb{R}\times[0,1],\ dxdt)}\approx N^{\frac 1 p-\frac 1 2}, \end{equation} hence we obtain that $s\ge (m-1)/2$.
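Let us spell out the exponent count for general $m$: plugging the three estimates into \eqref{B1}, integrated in $t_1,\dots,t_{m+1}$ as in the bilinear case, gives
\[
N^{\frac1p-\frac12}\le C N^{s}\prod_{k=1}^m N^{\frac1{p_k}-\frac12},
\]
and using the H\"older relation $\frac1p=\sum_{k=1}^m\frac1{p_k}$ this reduces to $N^{\frac{m-1}2}\le CN^{s}$, which forces $s\ge (m-1)/2$ as $N\to\infty$.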
For the higher dimensional cases, we define $$ F_k(x_1,\dots, x_n)=\prod_{\tau=1}^nf_k(x_\tau) , $$
and
$\sigma(\xi_1,\dots, \xi_m)=\prod_{\tau=1}^n\sigma_N(\xi_{1,\tau},\dots,\xi_{m,\tau})$, where $\xi_k=(\xi_{k,1},\dots,\xi_{k,n})\in\mathbb{R}^n$; then $\|F_k\|_{L^{p_k}}\approx N^{n(\tf1{p_k}-\tf12)}$,
$\|\sigma\|_{L^r_s}\le CN^s$, and $$
\|T_{\sigma}(F_1,\dots, F_m)\|_{L^p}\approx N^{n(\tf1{p}-\tf12)} . $$ We therefore obtain the restriction $s\ge (m-1)n/2$. \end{proof}
\begin{comment} For the higher dimensional cases, we define $F_N(x_1,\dots, x_n)=\prod_{\tau=1}^nf_N(x_\tau)$, $G_N(x_1,\dots, x_n)=\prod_{\tau=1}^ng_N(x_\tau)$ and
$\sigma_N(\xi_1,\dots, \xi_n)=\prod_{\tau=1}^n\sigma_N(\xi_\tau)$, then $\|F_N\|\approx N^{n(\tf1{p_1}-\tf12)}$,
$\|G_N\|\approx N^{n(\tf1{p_2}-\tf12)}$, $\|\sigma_N\|_{L^r_s}\le CN^s$, and $\|T(F_N,G_N)\|\approx N^{n(\tf1{p_1}-\tf12)}$. We therefore get the restriction $s\ge n/2$. \end{rmk}
\begin{rmk} We can even use the same idea to obtain that for $m$-linear cases we should have $s\ge (m-1)n/m$, whose details will be typed later. \end{comment}
\begin{comment} It remains to control \eqref{SN1}, for which we recall that $$\frac{\sin(\pi s)}2\int_{-\infty}^{\infty}\frac{1}{\cosh(\pi t)-\cos(\pi s)}dt=1-s$$ and $$ \frac{\sin(\pi s)}2\int_{-\infty}^{\infty}\frac{1}{\cosh(\pi t)+\cos(\pi s)}dt=s. $$ So \begin{align*}
&\frac{\sin(\pi s)}2\int_{-\infty}^{\infty}[\frac{\log\|\varphi\|_{L^{r'}}}{\cosh(\pi t)-\cos(\pi s)}+
\frac{\log\|\varphi\|_{L^{r'}}+2\log N}{\cosh(\pi t)+\cos(\pi s)}]dt \\
& \qquad\qquad\qquad\qquad\qquad =\log\|\varphi\|_{L^{r'}}+2s\log N. \end{align*} What left is that for $0<s<1$ we have
$$\int_{-\infty}^{\infty} \frac{\log|t|}{\cosh(\pi t)-\cos(\pi s)}dt<\infty$$ and
$$\int_{-\infty}^{\infty} \frac{\log|t|}{\cosh(\pi t)+\cos(\pi s)}dt<\infty.$$ We prove that second one while the first one is similar. The second one is reduce to
$\int_{-\infty}^{0}\frac{\log|t|}{\cosh( t)+\cos( s)}dt<\infty$ for $0<s<\pi$. Let $v=e^t$, then \begin{align*}
\int_{-\infty}^{0}\frac{\log|t|}{\cosh( t)+\cos( s)}dt=&2\int_0^\infty\frac{\log t}{e^t+e^{-t}+2\cos s}dt\\ =&2\int_0^\infty\frac{\log\log v}{v^2+2v\cos s+1}dv\\ =&2\int_1^e\frac{\log\log v}{v^2+2v\cos s+1}dv+2\int_e^\infty\frac{\log\log v}{v^2+2v\cos s+1}dv\\ =:&I+II \end{align*}
We observe that $|\log t|\le Ct^{-\epsilon}$ for $0<t<1$, hence
$I\le \int_1^e\frac{C(\log v)^{-\epsilon}}{2+2\cos s}dv\le C(2+2\cos s)^{-1}<\infty$. We can control $II$ by $C\int_e^{\infty}\frac{v^\epsilon}{(v+\cos)^2+\sin^2s}dv<\infty$. \end{comment}
\begin{prop}\label{SIS} Under the hypothesis of Theorem~\ref{MR3}
we must have $ s\ge mn/r$. \end{prop}
\begin{comment} \begin{lm}[MFA Proposition 7.3.6] If
$\|T_\sigma\|_{L^{p_1}(\mathbb{R}^n)\times\cdots\times L^{p_m}(\mathbb{R}^n)\to L^p(\mathbb{R}^n)}\le C$, then $\sigma\in L^{\infty}$. \end{lm}
\begin{lm}[\fbox{Q\qt}\ ]\label{FS} For given $r$ and $s$ with $rs\le mn$, there is an unbounded function $\sigma_0$ in $L^r_s(\mathbb{R}^{mn})$. \end{lm} \end{comment}
\begin{proof}
Let $\varphi$ and $\phi$ be as in Proposition \ref{Sq}. Define $\widehat f_j(\xi_j)=\widehat\varphi(N(\xi_j-a))$ with $|a|=1$, and $\sigma(\xi_1,\dots,\xi_m)=\prod_{j=1}^m \phi(N(\xi_j-a))$; then a direct calculation gives
$\|f_j\|_{L^{p_j}(\mathbb{R}^n)}\approx N^{-n+n/p_j}$ and $\|\sigma\|_{L^r_s(\mathbb{R}^{mn})} \le CN^sN^{-mn/r}$. Moreover, $$ T_{\sigma}(f_1,\dots, f_m)(x)=N^{-mn}(\varphi(x/N)e^{2\pi ix\cdot a})^m . $$ We can therefore obtain that
$\|T_\sigma(f_1,\dots ,f_m)\|_{L^p(\mathbb{R}^n)}\approx N^{-mn+n/p}$. Then we come to the inequality $N^{-mn+n/p}\le CN^sN^{-mn/r}\prod_jN^{-n+n/p_j}$. Since $\sum_j 1/p_j=1/p$, we have $\prod_jN^{-n+n/p_j}=N^{-mn+n/p}$, so the inequality reduces to $1\le CN^{s-mn/r}$, which forces $s-mn/r\ge0$ by letting $N$ go to infinity. \end{proof}
\begin{comment} We have the following proposition. \begin{prop} Suppose that $\sigma$ is a bounded function such that \begin{equation} \label{BB`}
\|T_\sigma\|_{L^{p_1}(\mathbb{R}^n)\times\cdots\times L^{p_m}(\mathbb{R}^n)\to L^p(\mathbb{R}^n)}\le C\sup_{j\in\mathbb Z}\|\sigma(2^j\cdot)\widehat\Psi\|_{L^r_s(\mathbb{R}^{mn})}. \end{equation} Then we must necessarily have $p_1\ge \frac ns$, $p_2\ge \frac ns$ and $p\ge (\frac sn+\f12 )^{-1}$. \end{prop}
\begin{proof} We first consider the case $n=1$. Let us prove that $p\ge ( s +\f12 )^{-1}$. We define $$ \widehat{f_N} (\xi) = \widehat{g_N} (\xi) =\sum_{j=-N}^N\widehat{\varphi} (N\xi-j) $$ where $\varphi$ is a smooth function whose Fourier transform is supported in $[-1/100,1/100]$. We define
the multiplier $\sigma_N$ of the bilinear operator $T_N$ as follows: \begin{equation}\label{si_N2} \sigma_N=\sum_{j=1}^{N}\sum_{k=1}^N a_{j+k}(t )c_{j+k}\phi(N \xi_1-j)\phi(N \xi_2-k), \end{equation} where $c_l=1$ when $9N/10\le l\le 11N/10$ and $0$ elsewhere and where $\phi$ is a smooth function supported in $[-1/10,1/10]$ which is equal to $1$ on $[-1/20,1/20]$.
The calculation in \cite{P1} shows that $\| f_N\|_{L^{p_1} } \le C$ and likewise $\|g_N\|_{L^{p_2} } \le C$. Moreover, we have $$
\| \sigma_N \|_{L^r_s} \le C \, N^s $$ and $$
\int_0^1 \big\| T_{\sigma_N} (f_N,g_N) \big\|_{L^p(\mathbb R)}^p dt \approx N^{\frac 1p-\frac 12} $$ as shown earlier. It follows from these calculations that $\f1p-\f12 \le s$.
We now prove $p_1\ge 1/s$. We use the same definition for $f_n$ as before but define $\widehat g_n(\eta)=\sum_{k=-N}^Na_k(t_1)\widehat\varphi(N\eta-k)$. Correspondingly we change the definition of $\sigma_N(\xi,\eta)$, which will be $$ \sum_{j=1}^{N}\sum_{k=1}^N a_k(t_1) a_{j+k}(t )c_{j+k}\phi(N \xi_1-j)\phi(N \xi_2-k), $$ so that $T_{\sigma_N}(f_N,g_N)$ is the same as the previous example. As a result, calculations we have done before shows that
$\|f_N\|_{L^{p_1}}\le C$, $\|g_N\|_{L^{p_2}}\approx N^{1/p_2-1/2}$,
$\|\sigma_N\|_{L^r_s}\le CN^s$ and $\|T_{\sigma_N}(f_N,g_N)\|_{L^p} \approx N^{1/p-1/2}$. Consequently we obtain that $1/p-1/2\le 1/p_2-1/2+s$, which is equivalent to $1/p_1\le s$. The other restriction $1/p_2\le s$ can be treated similarly.
For higher dimensional cases, we remark that we can always consider the products $\prod_{\ell=1}^nf_N(x_\ell)$, $\prod_{\ell=1}^n\sigma_N(\xi_\ell,\eta_\ell)$ and $g_N$ modified similarly. As a result, we can show that $1/p-1/2\le s/n$, $1/p_1\le s/n$ and $1/p_2\le s/n$ directly.
\end{proof}
\end{comment}
Next, we obtain from \eqref{B33} the restrictions for the indices $p_j$ claimed in Theorem~\ref{PL}.
\begin{proof} (Theorem~\ref{PL}) By symmetry it suffices to consider the case $I=\{1,2,\dots,k\}$ with $k\in\{0,1,\dots ,m\}$, with the convention that $I=\emptyset$ when $k=0$. Define for $\xi\in\mathbb R$ $$ \widehat {f_N}(\xi)=\sum_{j=-N}^N\widehat\varphi(N\xi-j)a_j(t), \quad\quad \widehat {g_N}(\xi)=\sum_{j=-N}^N\widehat\varphi(N\xi-j), $$ and \begin{align*} & \sigma_N(\xi_1,\dots,\xi_m) \\ &=\sum_{j_1=-N}^N\cdots\sum_{j_m=-N}^N a_{j_1+\cdots+j_m}(t)c_{j_1+\cdots+j_m}a_{j_1}(t_1)\cdots a_{j_k}(t_k) \phi(N\xi_1-j_1)\cdots \phi(N\xi_m-j_m). \end{align*} The idea is that, taking in this setting the first $k$ arguments to be $f_N$ and the remaining ones to be $g_N$, we have \begin{align*} &T_{\sigma_N}(\overbrace {f_N,\dots,f_N}^{\textup{$k$ terms}},\overbrace {g_N,\dots,g_N}^{\textup{$m-k$ terms}})(x) \\ = &\sum_{j_1=-N}^N\cdots\sum_{j_m=-N}^N a_{j_1+\cdots+j_m}(t)c_{j_1+\cdots+j_m} N^{-m}[\varphi( x/N)]^me^{2\pi ix(j_1+\cdots+j_m)/N}. \end{align*} This expression is independent of $k$ and by \eqref{mo} we know $$
\|T_{\sigma_N}(f_N,\dots,f_N,g_N,\dots,g_N)\|_{L^p} \approx N^{1/p-1/2} . $$
Previous calculations also show that $\|f_N\|_{L^{p_i}}\approx N^{1/p_i-1/2}$
and $\|\sigma_N\|_{L^r_s}\le CN^s$. Lemma 4.3 in \cite{P1} gives that
$\|g_N\|_{L^{p_i}}\le C_{p_i}$ for $p_i\in(1,\infty]$. Consequently, we have $$ N^{\frac 1 p-\frac 1 2}\le CN^{\sum_{i=1}^k(\f1{p_i}-\f12)}N^s $$ and this verifies our conclusion when $n=1$.
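Making the conclusion explicit: using the H\"older relation $\frac1p=\sum_{i=1}^m\frac1{p_i}$ and letting $N\to\infty$, the last inequality rearranges to
\[
s\;\ge\;\sum_{i=k+1}^{m}\frac1{p_i}+\frac{k-1}{2}.
\]
In particular, for $k=m$ this reads $s\ge \frac{m-1}2$, recovering the one-dimensional case of Proposition~\ref{Sq}.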
For the higher dimensional case, we use tensor products of these functions and a multiplier $\sigma$ similar to the one in Proposition \ref{Sq}, which concludes the proof. \end{proof}
Notice that when $k=m$, Theorem~\ref{PL} coincides with Proposition \ref{Sq}.
\begin{comment} Proposition \ref{Sq} is optimal when $1<p\le 2$, but not sharp when $p>2$ since no positive answer is proved so far. Meanwhile, we realize that the restriction on $s$ could be enhanced via duality, which is contained in the following lemma.
\begin{lm}\label{Dua} If there is a triple $(q_1,q_2,q)$ with $q_1,q_2,q>1$ such that \eqref{B1} is valid, then \eqref{B1} is also valid for $(q',q_2,q_1')$ and $(q_1,q',q_2')$, where $q'$ is the conjugate of $q$, i.e. $\tf1q+\tf1{q'}=1$. \end{lm}
\begin{cor}\label{S2} If \eqref{B1} is valid for all $m\in W^{p,s}(\mathbb{R}^2)$, then $s\ge\max\{1/q,1/2\}$. \end{cor}
\begin{proof} The part $s\ge1/q$ has been proved in Proposition \ref{Sq}.
Now we assume that $s<1/2$. Then $q\ge1/s>2$. By duality argument Lemma \ref{Dua} ,
we can find a triple $(r_1,r_2,r)$ such that $\|T\|_{L^{r_1}\times L^{r_2}\to L^r}\le C\|m\|_{W^{p,s}}$, and $r<2$. Consequently, $rs<1$, which contradicts Proposition \ref{Sq}. \end{proof}
\begin{rmk} The Corollary \ref{S2} reveals an essential difference between linear multipliers and their multilinear analogies. Recall that in the linear case no regularity is required for an $\mathcal M_2$ multiplier. However, a minimum smoothness of order $1/2$ is necessary in multilinear case if we expect a uniform control of the norms of corresponding operators. A related statement is as follows.
\end{rmk}
\begin{cor}\label{BN} We should never expect that for some triple $(q_1,q_2,q)$ the following estimate is valid $$
\|T_m(f,g)\|_{L^q}\le C_{q_1,q_2}\|m\|_{L^{\infty}}\|f\|_{L^{q_1}}\|g\|_{L^{q_2}}. $$ \end{cor} \begin{proof} Let us assume it to be right and choose $s_0<1/2$. Moreover we choose $p_0$ so that $p_0s_0>2$. By the Sobolev embedding theorem $W^{p,s}\subset L^{\infty}$ when $ps>2$, we then have $$
\|T_m(f,g)\|_{L^q}\le C\|m\|_{L^{\infty}}\|f\|_{L^{q_1}}\|g\|_{L^{q_2}}
\le C\|m\|_{W^{p_0,s_0}}\|f\|_{L^{q_1}}\|g\|_{L^{q_2}}, $$ which contradicts Corollary \ref{S2}. \end{proof}
It turns out that the indices $q,s,p$ interact each other in the study of the boundedness of H\"ormander type multilinear operators utilizing Sobolev spaces, and the smoothness $s$ plays an central role as we have seen. So we collect all known positive and negative results in the next theorem while all unknown cases in the conjecture after it.
\begin{thm} (i) When $s<1/2$ or $1<p<2/s$, \eqref{B1} fails everywhere.
(ii) When $1/2\le s<1$ and $p>2/s>2$, then \eqref{B1} fails outside the triangle.
(iii) When $s> 1$ and $2/\min\{s,2\}<p\le 2$, \eqref{B1} fails when $q<1/s$ and is valid when $(1/q_1,1/q_2,1/q)$ is in the rhombus with vertices $(0,0,0)$, $(1,0,1)$, $(0,1,1)$ and $(\min\{s,2\}/2,\min\{s,2\}/2,\min\{s,2\})$.
(iv) When $s>1$ and $p>2$, \eqref{B1} fails when $q<1/s$. \end{thm}
\begin{conj} (ii') What do we have in the triangle?
(iii') What do we have in the remaining two shoulders when $1<s<2$? I guess we should have some counterexample showing that \eqref{B1} fails in these two shoulders.
(iv') What do we have when $q\ge1/s$? We should have at least some special points like $(1/2,1/2,1)$. \end{conj}
\section{Higher Dimensional Results}
\begin{prop}
If for any $m$ such that $A=\sup_{j\in\mathbb Z}\|m(2^j\cdot)\widehat\Psi\|_{L^r_s(\mathbb{R}^{2n})}$, we have \begin{equation} \label{B1}
\|T_m\|_{L^{p_1}(\mathbb{R}^n)\times L^{p_2}(\mathbb{R}^n)\to L^p(\mathbb{R}^n)}\le CA, \end{equation} then $s\ge n/2$. \end{prop} \begin{proof}
Let us take a Schwartz function $\varphi$ with $\hat \varphi$ supported in $[1/2-1/100,1/2+1/100]$. Let $\{a_j(t)\}_{j\in \mathbb Z}$ be a sequence of Randemacher functions, and
for $N>1$ and $\xi=(\xi_1,\dots, \xi_n)\in \mathbb{R}^n$ we denote $E_N=\{j:1\le j_1\le N,\dots, 1\le j_N\le N\}$ and
define $$\hat f_N(\xi)=\sum_{j\in E_N} a_{j_1}(t_1)\cdots a_{j_n}(t_n) \hat \varphi (N \xi_1 -j_1)\cdots \hat \varphi (N \xi_n -j_n)$$ and $$\hat g_N(\xi)=\sum_{k\in E_N} a_{k_1}(t_1)\cdots a_{k_n}(t_n) \hat \varphi (N \xi_1 -k_1)\cdots \hat \varphi (N \xi_n -k_n).$$ Let $\phi$ be a smooth function $\phi$ supported in $[-\tf1{10},\tf1{10}]$ assuming value $1$ in $[-\tf1{20},\tf1{20}]$. We construct the multiplier $\sigma_N$ of the bilinear operator $T_N$ as follows, \begin{equation}\label{\sigma_N} m= \prod_{\rho=1}^n\sum_{j_\rho=1}^N\sum_{k_\rho=1}^N a_{j_\rho}(t_\rho)
a_{k_\rho}(\tau_\rho) a_{j_{\rho}}
c_{j_{\rho}+k_\rho}\phi(N \xi_{\rho}-j_{\rho})\phi(N \eta_\rho-k_\rho), \end{equation} where $c_l=1$ when $9N/10\le l\le 11N/10$ and $0$ elsewhere. Hence \begin{align*} T_N(f_N,g_N)(x)=&\sum_{j\in E_N}\sum_{k\in E_N} a_{j+k}(t_3)c_{j+k}\tf1 {N^2} \varphi(x/N) \varphi(x/N)e^{2\pi ix(j+k)/N}\\ =&\sum_{l=2}^{2N}\sum_{k=s_l}^{S_l}a_l(t_3)c_{l}\tf1 {N^2} \varphi(x/N) \varphi(x/N)e^{2\pi ixl/N}, \end{align*}
where $s_l=\max(1,l-N)$ and $S_l=\min(N,l-1)$. We will estimate $\|f_N\|_{L^{p_1}(\mathbb{R})}$, $\|g_N\|_{L^{p_2}(\mathbb{R})}$,
$\|\sigma_N\|_{L^{r}_s(\mathbb{R}^2)}$ and $\|T_N(f_N,g_N)\|_{L^{p}(\mathbb{R})}$.
First we prove that
$\|f_N\|_{L^{p_1}(\mathbb{R})}\approx N^{1-\tfrac{p_1}2}$. By Khinchin inequality we have \begin{align*}
\int_0^1\|f_N\|_{L^{p_1}}^{p_1}dt_1=&
\int_{\mathbb{R}}\int_0^1\big|\sum_{j=1}^Na_j(t_1)\frac{\varphi(x/N)}Ne^{2\pi ixj/N}\big|^{p_1}dt_1dx\\
\approx&\int_{\mathbb{R}}\big(\sum_{j=1}^N|\frac{\varphi(x/N)}N|^2\big)^{p_1/2}dx\\
\approx&N^{-p_1/2}\int_{\mathbb{R}}|\varphi(x/N)|^{p_1}dx\\ \approx&N^{1-\tfrac{p_1}2}. \end{align*} Hence
$\|f_N\|_{L^{p_1}(\mathbb{R}\times[0,1],\ dxdt)}\approx N^{\tf1{p_1}-\tf12}$. Similarly
$\|g_N\|_{L^{p_1}(\mathbb{R}\times[0,1],\ dxdt)}\approx N^{\tf1{p_2}-\tf12}$. The same idea gives that \begin{align*}
\int_0^1\|T_N(f_N,g_N)\|_{L^{p}}^{p}dt_3\approx& \int_{\mathbb{R}}\Big(\sum_{l=2}^{2N}
\big|c_l(S_l-s_l)\tf1 {N^2}
\varphi^2(x/N)e^{2\pi ixl/N}\big|^2\Big)^{p/2}dx\\ \approx& \int_{\mathbb{R}}(\sum_{l=9N/10}^{11N/10}(S_l-s_l)^2)^{p/2} \tf1 {N^{2p}}
|\varphi(x/N)|^{2p}dx\\
\approx&N^{\tfrac{3p}2-2p}\int_{\mathbb{R}}|\varphi(x/N)|^{2p}dx\\ \approx&N^{1-\tfrac{p}2}. \end{align*} That is to say,
$\|T_N(f_N,g_N)\|_{L^{p}(\mathbb{R}\times[0,1],\ dxdt)}\approx N^{\tf1{p}-\tf12}$.
As for the multiplier $\sigma_N$, we have the following estimate whose proof is given later.
\begin{lm}\label{SN} For the multiplier $\sigma_N$ defined in \eqref{\sigma_N} and any $s\in(0,1)$, there exists a constant $C_s$ such that \begin{equation}
\|\sigma_N\|_{L^r_s(\mathbb{R}^2)}\le C_sN^s. \end{equation} \end{lm}
Apply \eqref{B1} to $f_N,\ g_N$ and $T_N$ defined above and integrate with respect to $t_1$, $t_2$ and $t_3$ on both sides, we have $$
(\int_0^1\int_0^1\int_0^1\|T_N(f_N,g_N)\|_{L^p}^pdt_3dt_1dt_2)^{1/p}\le C_sN^s
(\int_0^1\|f_N\|^p_{L^{p_1}}dt_1\int_0^1\|g_N\|_{L^{p_2}}^pdt_2)^{1/p}, $$ which combining the estimates obtained on $f_N$, $g_N$ and $T_N(f_N,g_N)$ above implies $$ N^{\tf1{p}-\tf12}\le C_s N^s N^{\tf1{p_1}-\tf12}N^{\tf1{p_2}-\tf12}, $$ so we automatically have $N^{1/2}\le C_s N^s$, which is true when $N$ goes to $\infty$ only if $s\ge 1/2$.
\end{proof} \end{comment}
\end{document} |
\begin{document}
\maketitle \begin{abstract} We study the algebraic implications of the non-independence property (NIP) and variants thereof (dp-minimality) on infinite fields, motivated by the conjecture that all such fields which are neither real closed nor separably closed admit a (definable) henselian valuation. Our results mainly focus on Hahn fields and build on Will Johnson's preprint ``dp-minimal fields'', arXiv: 1507.02745v1, July 2015. \end{abstract}
\section*{Introduction}
The classification of $\omega$-stable fields \cite[Theorem 3.1]{PoiGroups} and later of super-stable fields \cite{ChSh} is a cornerstone in the development of the interactions between model theory, algebra and geometry. Ever since, the classification of algebraic structures according to their model-theoretic properties has been a recurring theme in model theory. Despite some success in the classification of groups of finite rank (with respect to various notions of rank), e.g., \cite{EaKrPi}, \cite[Section 4]{WaBook} (essentially, generalising results from the stable context), and most notably in the o-minimal setting (e.g., \cite{HrPePi} and many references therein), little progress has been made in the classification of infinite stable (let alone simple) fields. Indeed, most experts view the conjecture asserting that (super) simple fields are bounded (perfect) PAC, and even the considerably weaker conjecture that stable fields are separably closed, as out of the reach of existing techniques.
In the last decade or so the increasing interest in theories without the independence property (NIP theories), associated usually with the solution of Pillay's conjecture \cite{HrPePi} and with the study of algebraically closed valued fields, led naturally to analogous classification problems in that context. In its full generality, the problem of classifying NIP fields encompasses the classification of stable fields, and may be too ambitious. In \cite{Sh863}, as an attempt to find the right analogue of super-stability in the context of NIP theories, Shelah introduced the notion of \emph{strong NIP}. As part of establishing this analogy, Shelah showed \cite[Claim 5.40]{Sh863} that the theory of a separably closed field that is not algebraically closed is not strongly NIP. In fact, Shelah's proof shows that strongly NIP fields are perfect\footnote{Shelah's proof only uses the simple fact that if $\mathrm{char}(K)=p>0$ then either $K$ is perfect or $[K^\times: (K^\times)^p]$ is infinite. See, e.g., \cite[Remark 2.5]{KrVal}}. Shelah conjectured \cite[Conjecture 5.34]{Sh863} that (interpreting its somewhat vague formulation) strongly NIP fields are real closed, algebraically closed or support a definable non-trivial (henselian) valuation. Recently, this conjecture was proved\footnote{The existence of a definable valuation is implicit in Johnson's work. See Remark \ref{JohnsonDef}.} by Johnson \cite{JohnDPMin} in the special case of dp-minimal fields (and, independently, assuming the definability of the valuation, henselianity is proved in \cite{JaSiWa2015}).
The two main open problems in the field are:
\begin{enumerate}
\item Let $K$ be an infinite (strongly) NIP field that is neither separably closed nor real closed. Does $K$ support a non-trivial definable valuation?
\item Are all (strongly) NIP fields henselian (i.e., admit some non-trivial henselian valuation) or, at least, t-henselian (i.e., elementarily equivalent in the language of rings, to a henselian field)? \end{enumerate}
A positive answer to Question (2) would imply, for example, that strongly\footnote{F. Jahnke and S. Anscombe (private communication) informed us that similar results are obtained for NIP fields.} dependent fields are elementarily equivalent to Hahn fields over well-understood base fields \cite[Theorem 3.11]{HaHaJa}: \begin{description}
\item[Equi-characteristic] $\mathbb R((t^\Gamma))$, $\mathbb C((t^\Gamma))$ or $\overline{\mathbb F}_p((t^\Gamma))$.
\item[Finite residue field] $Q((t^\Gamma))$, where $Q$ is a $p$-adically closed field, if the field admits a henselian valuation with finite residue field.
\item[Kaplansky] $L((t^\Gamma))$ where $L$ is a rank 1 Kaplansky field with residue field as in the equi-characteristic case above. \end{description} In all cases $\Gamma$ is a strongly dependent ordered abelian group (see \cite{HalHas} for the classification of such groups).
In view of the above, a natural strategy for studying Shelah's conjecture would be to, on the one hand, study the conjecture for Hahn fields (with dependent residue fields), as the key example and -- on the other hand -- using the information gained in the study of Hahn fields, try to generalise Johnson's results from dp-minimal fields to the strongly dependent setting.
The simplest extension of Johnson's proof of Shelah's conjecture for dp-minimal fields would be to finite extensions of dp-minimal fields. Section \ref{dp-min} is dedicated to showing that this extension is vacuous; namely, we prove that
a finite extension of a dp-minimal field is again dp-minimal (see Theorem \ref{finite}).
The proof builds heavily on Johnson's classification of dp-minimal fields.
Section \ref{Hahn} is dedicated to the study of (strongly) dependent Hahn fields. Collecting known results of the first author (based on unpublished work of Koenigsmann), we show that Hahn fields that are neither algebraically nor real closed support a definable non-trivial valuation with a $t$-henselian topology. We use Hahn fields to provide examples proving that perfection and boundedness -- the conjectural division lines for simple fields -- are not valid in the NIP case. Building on previous results of Delon \cite{DelHenselian}, B\'elair \cite{BelHenselian} and Jahnke-Simon \cite{JaSiTransfer} we construct the following examples (see Theorem \ref{examples}):
There are NIP fields with the following properties:
\begin{enumerate}
\item A strongly NIP field that is not dp-minimal.
\item A strongly NIP field $K$ such that $[K^\times: (K^\times)^q ]=\infty$ for some prime $q$.
\item A perfect NIP field that is not strongly NIP.
\item An unbounded strongly NIP field.
\end{enumerate}
In the last two sections of the paper we turn to the problem of constructing definable valuations on (strongly) NIP fields. As Johnson's methods of \cite{JohnDPMin} do not seem to generalise easily even to the finite dp-rank case, we study a more general construction due to Koenigsmann.
Provided the field $K$ is neither real closed nor separably closed (and without further model-theoretic assumptions), we give an explicit first-order sentence $\psi_K$ in the language of rings such that $K\models \psi_K$ implies the existence of a non-trivial valuation ring definable (over the same parameters appearing in $\psi_K$) in the language of rings. As we will show (see the discussion following Corollary \ref{Q2toQ1}), if $K$ is $t$-henselian then $K\models \psi_K$. Thus, to provide a positive answer to Question (1) it will suffice to show that $K\models \psi_K$ for every infinite NIP field $K$ that is neither real closed nor separably closed; this is also a necessary condition for a positive answer to Question (2).
Implicit in the work of Koenigsmann \cite{Koe2}, a sentence with roughly the same properties as $\psi_K$ above can certainly be extracted from \cite{Du2016}. However, the sentence $\psi_K$ obtained in Proposition \ref{PropSimplifiedVAxiomsnontrivialdefinable} of this paper is simpler in quantifier depth and in length. As a result, the strategy proposed for tackling Question (1) above can be summarised as follows:
\begin{con}\label{fo}
Let $K$ be an infinite field not separably closed. For any prime $q\neq \mathrm{char}(K)$ let $T_q:=(K^\times)^q+1$. Assume that
\begin{enumerate}
\item $T_q\neq K\setminus \{1\}$
\item $\sqrt{-1}\in K$
\item There exists $\zeta_q\in K$ a primitive $q$-th root of unity.
\end{enumerate}
and at least one of the following holds:
\begin{enumerate}
\item $K\models (\exists a_1,a_2)(\{0\}=a_1T_q\cap a_2 T_q)$
\item $K\models (\forall a_1,a_2\exists b)(b\notin T_q\land b\in (a_1T_q\cap a_2T_q)-(a_1T_q\cap a_2T_q))$
\item $K\models (\forall a_1,a_2\exists b)(b\notin T_q\land b\in (a_1T_q\cap a_2T_q)\cdot (a_1T_q\cap a_2T_q))$
\item $K\models (\forall a_1,a_2\exists x,y)(xy\in a_1T_q\cap a_2T_q\land x\notin a_1T_q\cap a_2T_q\land y\notin a_1T_q\cap a_2T_q)$
\end{enumerate}
Then $K$ has IP. \end{con}
\noindent\emph{Acknowledgements.} We would like to thank I. Efrat, M. Hils, F. Jahnke, M. Kamensky, F.-V. Kuhlmann and P. Simon for several ideas, corrections and suggestions.
\section{Preliminaries}\label{prelims}
Throughout we use standard valued fields terminology and notation: $K, L, F$ will be fields, $\ensuremath{\mathcal{O}} _K$ will denote a valuation ring on $K$ with maximal ideal $\ensuremath{\mathcal{M}} _K$ (we will drop the subscript $K$ if it is clear from the context). Valuations on $K$ will be denoted by $v,w$; $\ensuremath{\mathcal{O}} _v:=\left\{x\in K: v(x)\geq 0\right\}$ and $\ensuremath{\mathcal{M}} _v$ denote the valuation ring associated with $v$ and its maximal ideal, respectively. The reader is referred to any standard textbook on the subject (e.g., \cite{EnPr}) for more details. A non-trivial valuation $v$ on a field $K$ induces a Hausdorff field topology (generated by the open balls $B_\gamma(x):=\{y\in K: v(x-y)>\gamma\}$). It is well known that such topologies can be characterised: \begin{de}\label{Vtop}
A collection of subsets $\ensuremath{\mathcal{N}} $ of $K$ is a basis of 0-neighbourhoods for a \emph{V-topology} on $K$ if it satisfies the following axioms:
\begin{description}
\item[\textbf{\textup{(V\,1)}}] $\bigcap \mathcal{N}:=\bigcap_{U\in\mathcal{N}}U=\left\{0\right\}$ and $\left\{0\right\}\notin\mathcal{N}$;
\item[\textbf{\textup{(V\,2)}}] $\forall\, U,\,V\ \exists\, W\ W\subseteq U\cap V$;
\item[\textbf{\textup{(V\,3)}}] $\forall\, U\ \exists\, V\ V-V\subseteq U$;
\item[\textbf{\textup{(V\,4)}}] $\forall\, U\ \forall\, x,\,y\in K\ \exists\, V\ \left(x+V\right)\cdot\left(y+V\right)\subseteq x\cdot y+U$;
\item[\textbf{\textup{(V\,5)}}] $\forall\, U\ \forall\, x\in K^{\times}\ \exists\, V\ \left(x+V\right)^{-1}\subseteq x^{-1}+U$;
\item[\textbf{\textup{(V\,6)}}] $\forall\, U\ \exists\, V\ \forall\, x,\,y\in K\ (x\cdot y\in V\to (x\in U\, \vee\, y\in U))$.
\end{description} \end{de} It is not hard to check that if $(K,v)$ is a valued field (or a field with an absolute value) then the collection of open balls is a V-topology on $K$. More importantly, the converse is also true (see \cite[Appendix B]{EnPr}): any V-topology on a field $K$ arises in this way.
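As a sample of the verification, consider axiom (V\,6) for the ball basis of a valued field $(K,v)$: given $U=B_\gamma(0)$ with $\gamma>0$, take $V:=B_{2\gamma}(0)$. If $x\cdot y\in V$ then
\[
v(x)+v(y)=v(x\cdot y)>2\gamma,
\]
so $v(x)>\gamma$ or $v(y)>\gamma$, i.e., $x\in U$ or $y\in U$. The remaining axioms are checked in a similarly elementary fashion; e.g., the ultrametric inequality gives (V\,3) with $V=U$.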
In the present paper we will investigate and apply a standard technique for constructing a V-topology on a field $K$ from a multiplicative sub-group $G\le K^\times$. We will be following a construction due to Koenigsmann, \cite{Koe2}, but the general method is well known (see \cite[\S11]{EfBook} and references therein). Fix $K$ an infinite field and let $G$ be a multiplicative subgroup of $K^\times$ with $G\neq K^\times$.
Given a group $G\le K^\times$ we let $\ensuremath{\mathcal{T}} _G$ be the coarsest topology for which $G$ is open and all affine maps $x\mapsto a\cdot x+b$ ($a\in K^\times$, $b\in K$) are continuous. As shown in \cite[Theorem~3.3]{Du2016}, $\ensuremath{\mathcal{S}} _G:=\left\{a\cdot G+b: a\in K^\times, b\in K\right\}$ is a subbase of $\ensuremath{\mathcal{T}} _G$. Hence
\[\ensuremath{\mathcal{B}} _G:=\left\{\left.\bigcap_{i=1}^n\left(a_i\cdot G+b_i\right)\,\right|\, n\in \ensuremath{\mathbb{N}} ,\, a_1,\ldots,a_n\in K^\times, b_1,\ldots,b_n\in K\right\}\] is a base for $\ensuremath{\mathcal{T}} _G$.
A simple calculation shows that \[
\ensuremath{\mathcal{N}} _G:=\left.\left\{U\in \ensuremath{\mathcal{B}} _G \ \right|\ 0\in U\right\}
\:=\left.\left\{\bigcap_{i=1}^n a_i\cdot \left(- G+1\right) \ \right|\ n\in\ensuremath{\mathbb{N}} ,\, a_i\in K^\times\right\} \] is a base of neighbourhoods of zero for $\ensuremath{\mathcal{T}} _G$. If further $-1\in G$ then \[
\ensuremath{\mathcal{N}} _G=\left\{\bigcap_{i=1}^n a_i\cdot \left(G+1\right): \ n\in\ensuremath{\mathbb{N}} ,\, a_i\in K^\times\right\}. \]
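As a guiding example (a standard application of Hensel's lemma, recorded here only for illustration): suppose $(K,v)$ is henselian, $q$ is an odd prime different from the residue characteristic and $G:=(K^\times)^q$. Then $-1=(-1)^q\in G$, and applying Hensel's lemma to the polynomial $x^q-u$ for $u\in 1+\ensuremath{\mathcal{M}} _v$ gives $1+\ensuremath{\mathcal{M}} _v\subseteq G$. Hence
\[
\ensuremath{\mathcal{M}} _v=-\left(1+\ensuremath{\mathcal{M}} _v\right)+1\subseteq G+1,
\]
so every basic neighbourhood $a\cdot\left(G+1\right)$ contains the valuation ball $a\cdot \ensuremath{\mathcal{M}} _v=\left\{y\in K: v(y)>v(a)\right\}$. In this situation, then, $\ensuremath{\mathcal{T}} _G$ is coarser than the topology induced by $v$; the results discussed below give conditions under which the two topologies actually coincide.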
Throughout the paper $U,\,V$ and $W$, possibly with indices, will always denote elements of $\ensuremath{\mathcal{N}} _G$.
It follows from \cite[Lemma~3.6 and Corollary 3.8]{Du2016} that, if $\ensuremath{\mathcal{T}} _G$ is a V-topology (see Fact \ref{corVAxiomsnontrivialdefinable} below) then already
\[\left\{\left(a_1\cdot G+b_1\right)\cap\left(a_2\cdot G+b_2\right): a_1,a_2\in K^\times,\, b_1,b_2\in K\right\}\]
is a base for $\ensuremath{\mathcal{T}} _G$. Hence, if $-1\in G$, \[\ensuremath{\mathcal{N}} _G':=\left\{\left(a_1\cdot \left(G+1\right)\right)\cap\left(a_2\cdot\left( G+1\right)\right): a_1,a_2\in K^\times\right\}\] is a base of the neighbourhoods of zero for $\ensuremath{\mathcal{T}} _G$.
As in most of the paper it will be more convenient to work with arbitrary finite intersections, we will mostly choose to work with $\ensuremath{\mathcal{N}} _G$. The advantage of the basis $\ensuremath{\mathcal{N}} _G'$ is that, if $G$ is definable (as will be the case), it is a definable basis of $0$-neighbourhoods.
The starting point of the present paper is the following result of Koenigsmann\footnote{A valuation $v$ on $K$ is $p$-henselian if it extends uniquely to $K(p)$, the compositum of all Galois extensions of $K$ of degree $p^n$ (any $n$). For the purposes of the present paper the fact that any henselian valuation is $p$-henselian will suffice. For more information see \cite{KoepHens}.}\footnote{As pointed out by the referee, a correct proof of Koenigsmann's result can be found in \cite{JahKo}.}, \cite{KoepHens}:
\begin{fact}\label{corVAxiomsnontrivialdefinable}
Let $K$ be a field of characteristic $p$ (possibly 0) and $q$ a prime different from $p$. Let $G:=(K^\times)^q\subsetneq K^\times $ and assume that $\zeta_q\in K$ for $\zeta_q$ a primitive $q$th root of unity. Then $K$ is $q$-henselian if and only if $\ensuremath{\mathcal{T}} _G$ is a $V$-topology, if and only if the canonical $q$-henselian valuation $v_q$ is $\emptyset$-definable (in which case $\ensuremath{\mathcal{T}} _G$ is the topology induced by $v_q$).
\end{fact} \begin{proof}
By \cite[Theorem 2.1]{KoepHens} $K$ is $q$-henselian if and only if $\ensuremath{\mathcal{T}} _G$ generates the same topology as $v$ for some $q$-henselian valuation $v$. By \cite[Main theorem]{JahKo} if $K$ is $q$-henselian then the canonical $q$-henselian valuation is definable. The statement concerning the topologies also follows from \cite[Theorem 2.1]{KoepHens}. \end{proof}
In the above, and throughout, by \emph{definable} we mean \emph{definable in the language $\ensuremath{\mathcal{L}} $ of rings} and by saying that a valuation $v$ on $K$ is definable we mean that $\ensuremath{\mathcal{O}} _v$ is $\ensuremath{\mathcal{L}} (K)$-definable (where $\ensuremath{\mathcal{L}} (K)$ is the expansion of the language $\ensuremath{\mathcal{L}} $ by constants for all elements of $K$).
Let us now explain how the above fact will be applied. Let $K$ be an NIP field. We aim to find conditions for the existence of a definable non-trivial valuation on $K$. By \cite[Theorem II.4.11]{Sh1} (\cite[Observation 1.4]{Sh863}) if $T$ is (strongly) NIP then so is $T^{eq}$. Thus any finite extension of $K$ is also (strongly) NIP. It will suffice, therefore, to find a definable non-trivial valuation on some finite extension $L\ge K$ (since if $\mathcal O$ is a non-trivial valuation ring on $L$ then $\mathcal O\cap K$ is a non-trivial valuation ring in $K$). It is, therefore, harmless to assume that $\sqrt{-1}\in K$. By \cite[Theorem~4.4]{KaScWa} $K$ is Artin-Schreier closed. So the same is true of any finite extension $L\ge K$. This implies (e.g., \cite[Lemma 2.4]{KrVal}\footnote{Krupinski's argument assumes that the field is perfect to conclude that it is algebraically closed. Discarding this additional assumption, and restricting to separable extensions, the stronger result follows.}) that either $K$ is separably closed, or there exists some finite separable extension $L\ge K$ and $q\neq \mathrm{char}(K)$ such that $(L^\times)^q\neq L^\times$ (in fact, by \cite[Corollary 4.5]{KaScWa} $K$ has no finite separable extensions of degree divisible by $p$). Since $\sqrt{-1}\in L$ it follows that, letting $L(q)$ denote the $q$-closure of $L$, we have $[L(q):L]=\infty$ (\cite[Theorem 4.3.5]{EnPr}). So extending $L$ a little more, there is no harm assuming that there exists $\zeta_q\in L$, a primitive $q$th root of unity. Thus, at the price of, possibly, losing the $\emptyset$-definability of the resulting valuation (because of the passage to the field $L$), the basic assumptions of Fact \ref{corVAxiomsnontrivialdefinable} are easily met. The application of this result thus reduces to proving that for $L$ and $q$ as above, $\ensuremath{\mathcal{N}} _G$ is a $0$-neighbourhood basis for a V-topology on $L$. Thus, we get the following result (see also \cite{Du2016}):
\begin{cor}\label{groups}
Let $K$ be an NIP field that is neither separably closed nor real closed. Then there exists a finite separable field extension $L\ge K$ and a prime $q\neq \mathrm{char}(K)$ such that $(L^\times)^q\neq L^\times$ and $\zeta_q\in L$ for $\zeta_q$ a primitive $q$th root of unity. If $K$ is $t$-henselian then for any such $L\ge K$ and $q$ the group $G_q(L):=(L^\times)^q$ satisfies conditions (V1)-(V6) of Definition \ref{Vtop}. \end{cor} \begin{proof}
Assume first that $K$ is henselian, witnessed by a valuation $v$. Then, by the above discussion, as $K$ is neither real closed nor algebraically closed, there is some finite separable field extension $L\ge K$ and prime $q$ such that $G_q(L):=(L^\times)^q$ is a proper subgroup of $L^\times$ and $\zeta_q\in L$. Fix any such extension $L$. Since $v$ is henselian, it extends to a henselian valuation on $L$ which by abuse of notation we will also denote $v$. By \cite[Theorem 5.18]{Du2016} there exists a definable valuation $w$ on $L$ inducing the same topology as both $v$ and $\mathcal O_{G_q(L)}$. In particular $\mathcal O_{G_q(L)}$ is non-trivial. So, by the above discussion, $\mathcal N_{G_q(L)}$ is a basis for a $V$-topology, i.e., it satisfies (V1)-(V6), as required.
In general, let $\mathcal K\succ K$ be $\aleph_1$-saturated. Since $K$ is $t$-henselian, $\mathcal K$ is henselian. Let $L\ge K$ be a finite separable extension such that $G_{q}(L)$ is a proper subgroup. By the primitive element theorem there exists $\alpha\in L$ such that $L=K(\alpha)$. Let $\mathcal L:=\mathcal K(\alpha)$. Then $\mathcal L\succ L$ and $G_{q}(\mathcal L)$ is a proper subgroup. By what we have already shown the group $G_q(\mathcal L)$ satisfies conditions (V1)-(V6). So
$\mathcal N_{G_q(\mathcal L)}'$ is also a basis for the topology, so it satisfies the corresponding statements (V1)$'$-(V6)$'$. Since those are first order statements without parameters, they are also satisfied by $G_q(L)$, so $G_q(L)$ also satisfies (V1)-(V6), as required. \end{proof}
We also recall that, by a theorem of Schmidt \cite[Theorem 4.4.1]{EnPr}, any two henselian valuations on a non-separably closed field $K$ are dependent (i.e., generate the same $V$-topology). So we get:
\begin{cor}\label{Q2toQ1}
Let $K$ be an NIP field that is neither real closed, nor separably closed. If $K$ is henselian, then $K$ supports a definable non-trivial valuation. Moreover, there exists a finite separable extension $L\ge K$ and a prime $q$ such that $G_q(L)\cap K$ generates the same $V$-topology as any henselian valuation on $K$.
\end{cor} \begin{proof}
There is no harm assuming that $\sqrt{-1}\in K$.
As above, if for all finite separable extensions $L\ge K$ and all primes $q\neq \mathrm{char}(K)$ we have $(L^\times)^q=L^\times$, we get that $K$ is separably closed, contradicting our assumption. So there are a finite separable extension $L\ge K$ and a prime $q$ such that $(L^\times)^q\neq L^\times$. Since $\sqrt{-1}\in K$ we get that $(L(\zeta_q)^\times)^q\neq L(\zeta_q)^\times$ for $\zeta_q$ a primitive $q$th root of unity. So there is no harm assuming $\zeta_q\in L$. Since $K$ is henselian, so is $L$. By the previous corollary, $G_q(L)$ generates on $L$ the same topology as any henselian valuation on $L$. The corollary follows. \end{proof}
Let $L\ge K$ be as provided by the previous corollary. Then $L=K(\alpha)$ for some $\alpha$; let $f(x)$ be its minimal polynomial over $K$ and let $\bar a$ be the tuple of coefficients of $f$. Then $L$ is $K$-interpretable over $\bar a$. So we let $\psi_K$ be the sentence (over $\bar a$) stating that $G:=(L^\times)^q$ satisfies axioms (V1)$'$-(V6)$'$ of a $V$-topology. By Fact \ref{corVAxiomsnontrivialdefinable}, if $K\models \psi_K$ then $K$ supports a non-trivial $\bar a$-definable valuation. And if $K$ happens to be $t$-henselian (and, therefore, so is $L$) then $L$ is $q$-henselian, so Fact \ref{corVAxiomsnontrivialdefinable} implies that $K\models \psi_K$. Therefore, if $K$ is NIP and we assume the conjecture that any infinite NIP field is ($t$-)henselian then $K\models \psi_K$.
Assuming that, in the above discussion we do not have to pass to the separable extension $L$ (i.e., $K$ itself satisfies assumptions (1)-(3) of Conjecture \ref{fo}) we get that $K\models \psi_K$ implies the existence of a non-trivial $p$-henselian valuation on $K$ (for some $p$ explicit in $\psi_K$). It is therefore natural to ask: \begin{qu}
If $(K,v)$ is NIP and $v$ is $p$-henselian (for some $p\neq \mathrm{char}(K)$), is $v$ necessarily henselian? Does this follow, at least, from Shelah's conjecture? \end{qu}
It is worth pointing out that by \cite[Remark 2.3]{KoepHens} there are fields that are $p$-henselian for all primes $p$ but not henselian. For any such field, $K$, the canonical $p$-henselian valuation (any $p$) is definable, inducing the topology $\ensuremath{\mathcal{T}} _G$ for $G=(K^\times)^p$, but this definable valuation is not henselian. So in full generality, the definable valuations discussed in this paper need not be henselian.
Throughout the paper we will be using without further reference the facts that strongly NIP fields are perfect, that NIP fields are Artin-Schreier closed, and that NIP valued fields of characteristic $p>0$ have a $p$-divisible value group (\cite[Proposition 5.4]{KaScWa}).
\section{dp-minimal fields}\label{dp-min} Dp-minimal fields are classified in the main result of \cite{JohnDPMin}\footnote{Specific references to Johnson's paper below refer to the publicly available version of the paper, \cite{JohnDPMinArc}.}: \begin{thm}[Johnson]\label{classification}
A sufficiently saturated field $K$ is dp-minimal if and only if $K$ is perfect and there exists a valuation $v$ on $K$ such that:
\begin{enumerate}
\item $v$ is henselian.
\item $v$ is defectless (i.e., every finite extension of $(K,v)$ is defectless).
\item The residue field $Kv$ is either algebraically closed of characteristic $p$ or elementarily equivalent to a local field of characteristic $0$.
\item The value group $\Gamma_v$ is almost divisible, i.e., $[\Gamma_v: n\Gamma_v]<\infty$ for all $n$.
\item If $\mathrm{char}(Kv)=p\neq \mathrm{char}(K)$ then $[-v(p),v(p)]\subseteq p\Gamma_v$.
\end{enumerate} \end{thm}
Given a dp-minimal field $K$ that is not strongly minimal, Johnson constructs an (externally definable) topology \cite[\S3]{JohnDPMinArc}, which he then proves to be a V-topology \cite[\S3 , \S 4]{JohnDPMinArc}. Pushing these results further he proceeds to show \cite[Theorem 5.14]{JohnDPMinArc} that $K$ admits a henselian topology (not necessarily definable). From this we immediately get:
\begin{cor}
Any dp-minimal field is either real closed, algebraically closed or admits a non-trivial definable henselian valuation. In particular, the V-topology constructed by Johnson is definable and coincides with Koenigsmann's topology, $\ensuremath{\mathcal{T}} _G(L)\cap K$, for some finite extension $L\ge K$ and some (equivalently, any) $G:=(L^\times)^p$ such that $G\neq L^\times$. \end{cor} \begin{proof}
Let $K$ be a dp-minimal field that is neither real closed nor algebraically closed. By \cite[Theorem 5.14]{JohnDPMinArc} $K$ is henselian, and therefore so is any finite extension of $K$. Let $L$ be a finite extension of $K$ and $q\neq \mathrm{char}(K)$ a prime such that $G_q(L)\neq L^\times$ and $L$ contains a primitive $q$th root of unity. Then by Fact \ref{corVAxiomsnontrivialdefinable} and Corollary \ref{groups} we get that $L$ admits a non-trivial definable valuation. So $K$ admits a non-trivial definable valuation, and by \cite[Theorem 5.14]{JohnDPMinArc} all definable valuations on $K$ are henselian.
Since $K$ is not separably closed it follows that $K$ supports a unique non-trivial t-henselian topology so the V-topology constructed by Johnson coincides with the topology associated with the definable henselian valuation, and is therefore definable. \end{proof}
\begin{rem}\label{JohnsonDef}
\begin{enumerate}
\item The above corollary is implicit in Johnson's work. By inspecting his proof of Theorem 1.2 (\cite[\S 6]{JohnDPMinArc}) one sees that unless $K$ is real closed or algebraically closed the valuation ring $\mathcal O_\infty$ appearing in the proof (the intersection of all definable valuation rings on $K$) is non-trivial, implying that $K$ supports a non-trivial definable valuation.
\item The same result can also be inferred from \cite[\S 7]{JaSiWa2015}. In that paper it is shown that a dp-minimal valued field which is neither real closed nor algebraically closed supports a non-trivial henselian valuation definable already in the pure field structure. By Johnson's Theorem 5.14 we know that $K$ admits a henselian valuation, which is externally definable. Since an expansion of a dp-minimal field by externally definable sets is again dp-minimal, the result follows.
\end{enumerate}
\end{rem}
We note that the proof of the first part of the above corollary shows that the same results remain true for finite extensions of dp-minimal fields. This follows also from the following, somewhat surprising, corollary of Theorem \ref{classification}: \begin{thm}\label{finite}
Let $K$ be a dp-minimal field, $L$ a finite extension of $K$. Then $L$ is dp-minimal. \end{thm} \begin{proof}
Since dp-minimality is an elementary property, we may assume that $K$ is saturated. Indeed, since $L$ is a finite extension of $K$ it is interpretable in $K$, and if $K'\succ K$ is saturated, the field $L'$ interpreted in $K'$ by the same interpretation is a saturated elementary extension of $L$. Thus, it will suffice to show that there exists a valuation $v$ on $L$ satisfying conditions (1)-(5) of Theorem \ref{classification}. Since $K$ is saturated, there is such a valuation on $K$, extending uniquely to $L$. By abuse of notation we will let $v$ denote also this extension.
Conditions (1) and (2) of the theorem are automatic and condition (4) is an immediate consequence of the fundamental inequality (e.g., \cite[Theorem 3.3.4]{EnPr}). Condition (3) is automatic if $Kv$ is real closed or algebraically closed. So it remains to check that if $Kv$ is elementarily equivalent to a finite extension of $\mathbb Q_p$ then so is $Lv$. This is probably known, but as we could not find a reference, we give the details.
By Krasner's Lemma any finite extension of $\mathbb Q_p$ is of the form $\mathbb Q_p(\delta)$ for some $\delta$ algebraic over $\mathbb Q$, and $\mathbb Q_p$ has only finitely many extensions of degree $n$ (for any $n$). Denoting by $e(n)$ the number of extensions of $\mathbb Q_p$ of degree $n$, there are irreducible $P_1(x),\dots, P_{e(n)}(x)\in \mathbb Q[x]$ such that any finite extension of $\mathbb{Q}_p$ of degree $n$ is generated by a root of one of $P_1(x), \dots, P_{e(n)}(x)$. As this is clearly an elementary property, we get that the same remains true if $F\equiv \mathbb Q_p$. Of course, all of the above remains true if we replace $\mathbb Q_p$ by some finite extension $L\ge \mathbb Q_p$.
So if $L'\equiv L$ and $F'\ge L'$ is an extension of degree $n$ it must be that $F'=L'(\delta)$ for some $\delta$ realising one of $P_1(x),\dots, P_{e(n)}(x)$, implying that $F'$ is elementarily equivalent to $F$, the algebraic extension of $L$ obtained by realising the same polynomial.
It remains to show that if $(K,v)$ is of mixed characteristic then $[-v(p),v(p)]\subseteq p\Gamma$ where $p=\mathrm{char}(Kv)$. By \cite[Lemma 6.8]{JohnDPMinArc} and the sentence following it, to verify this condition it suffices to show that $[-v(p),v(p)]$ is infinite. Towards that end it will suffice to show that $[-v(p),v(p)]\cap v(K)$ is infinite. Indeed, by assumption $(K,v)$ satisfies (5) of Theorem \ref{classification}, so $[-v(p),v(p)]\subseteq p\Gamma$. As we are in the mixed characteristic case $v(p)>0$. Since $v(p)\in p\Gamma$, there is some $g_1\in \Gamma$ such that $pg_1=v(p):=g_0$. So $0<g_1<g_0$, and by induction, for all $n$ we can find $0<g_n<g_{n-1}<g_0=v(p)$. This shows that $[-v(p), v(p)]\cap v(K)$ is infinite, concluding the proof of the theorem. \end{proof}
As already mentioned in the beginning of this section, the V-topologies constructed by Johnson and Koenigsmann coincide in the dp-minimal case. However, in order to start Koenigsmann's construction we first need to assure that $G_q(K)\neq K^\times$, and for that we may have to pass to a finite extension. Let us now point out that in the dp-minimal case this is not needed: \begin{lem}
Let $K$ be a dp-minimal field that is neither real closed nor algebraically closed. Then $G_q(K)\neq K^\times$ for some $q$. \end{lem} \begin{proof}
Let $v$ be as provided by Theorem \ref{classification}. It will suffice to show that the value group is not divisible. This is clear if the residue field is elementarily equivalent to a finite extension of $\mathbb Q_p$. Indeed, any finite extension $L$ of $\mathbb Q_p$ is henselian with value group isomorphic to $\mathbb Z$, which is not $n$ divisible for any $n>1$. So $G_n(L)\neq L^\times$ for any such $n$. As this is expressible by a first order sentence with no parameters, it remains true in any $L'\equiv L$.
If $Kv\models ACF_0$ or $Kv\models RCF$, the value group cannot be divisible, as then $K$ would be algebraically closed (resp. real closed). If $Kv\models ACF_p$ then, as $v$ is henselian and defectless, $(K,v)$ is algebraically maximal, in which case divisibility of the value group would again imply that $K\models ACF$. \end{proof}
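For instance (a standard computation, included only to illustrate the first case): for an odd prime $p$ and a prime $q\neq p$, Hensel's lemma shows that $1+p\mathbb Z_p$ is $q$-divisible, and $\mathbb Q_p^\times\cong p^{\mathbb Z}\times \mu_{p-1}\times (1+p\mathbb Z_p)$, so
\[
\left[\mathbb Q_p^\times : \left(\mathbb Q_p^\times\right)^q\right]=q\cdot\gcd(q,p-1),
\]
which is finite but greater than $1$. In particular $G_q(\mathbb Q_p)\neq \mathbb Q_p^\times$ for every prime $q\neq p$ (and also for $q=p$, by a similar computation).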
\section{Hahn Series and related constructions}\label{Hahn} Little is known about the construction of simple fields. The situation is different in the NIP setting, where strong transfer principles for henselian valued fields (see, e.g., \cite{JaSiTransfer} and references therein for the strongest such result to date) allow the construction of many examples of NIP fields. In the present section we sharpen some of these results and exploit them to construct various examples.
For the sake of clarity we recall the definition of strong dependence (in the formulation most convenient for our needs; see \cite[\S2]{Sh863} for more details): \begin{de}
A theory $T$ is strongly dependent if whenever $I$ is an infinite linear order, $\{a_t\}_{t\in I}$ an indiscernible sequence (of $\alpha$-tuples, some $\alpha$), and $a$ is a singleton there is an equivalence relation $E$ on $I$ with finitely many convex classes such that for $s\in I$ the sequence $\{a_t: t\in s/E\}$ is $a$-indiscernible.
\end{de}
We show:
\begin{thm}\label{examples}
There are NIP fields with the following properties:
\begin{enumerate}
\item A strongly NIP field that is not dp-minimal.
\item A strongly NIP field $K$ such that $[K^\times: (K^\times)^q ]=\infty$ for some prime $q$.
\item A perfect NIP field that is not strongly NIP.
\item An unbounded strongly NIP field.
\end{enumerate} \end{thm}
Recall that a field is \emph{bounded}\footnote{In the literature (e.g., \cite{PilPoi}, \cite{PiScWa}) a slightly stronger condition is used. The restriction to separable extensions seems, however, more natural and is even implicitly implied in some applications.} if for all $n\in \mathbb N$ it has finitely many separable extensions of degree $n$. Super-simple fields are bounded \cite{PilPoi}, and conjecturally so are all simple fields. As pointed out to us by F. Wagner, it follows, e.g., from \cite[Theorem 5.10]{PoiGroups} that bounded stable fields are separably closed.
For the sake of completeness we give a different proof, essentially due to Krupinski, with a less stability-theoretic flavour. Let $K$ be a bounded stable field. Since stability implies NIP, $K$ and all its finite extensions are, as already mentioned, Artin-Schreier closed. By an easy strengthening of \cite[Lemma 2.4]{KrVal}, it will suffice to show that $K^q=K$ for all primes $q\neq \mathrm{char}(K)$. Boundedness\footnote{In \cite{KrVal} Krupinski introduces the slightly weaker \emph{radical boundedness}, which suffices for the argument.} implies that were this not the case for some $q$ we would have $1<[K^\times: (K^\times)^q]<\infty$. By \cite[Proposition 4.8]{KrSRNIP} this implies that $K$ is unstable (in fact, that the formula $\exists z(x-y=z^q)$ has the order-property).
As we will see in the concluding section of the present paper, boundedness may also have a role to play in the study of the two questions stated in the Introduction. In view of the results of Theorem \ref{examples} it seems natural to look for model theoretic division lines that will separate the bounded NIP fields\footnote{Added in proof: In \cite{HaHaJa} it is shown that Shelah's conjecture implies that a strongly dependent field is bounded if and only if it is dp-minimal.}.
\begin{rem}
In \cite[Corollary 3.13]{KaSh} it is shown that in a strongly dependent field $K$, for all but finitely many primes $q$, we have $[K^\times: (K^\times)^q]<\infty$. Clause (2) of Theorem \ref{examples} shows that this result is optimal. \end{rem}
We will use Hahn series to construct the desired examples. The basic facts that we need are: \begin{fact}\label{delon}
A henselian valued field $(K,v)$ of equi-characteristic $0$ is (strongly) NIP if and only if the value group and the residue field are (strongly) NIP. If $(K,v)$ is dp-minimal then so are the residue field and the value group. \end{fact}
The NIP case of the above fact is due to Delon \cite{DelHenselian} and the strongly NIP case is due to Chernikov \cite{Chernikov}. We get: \begin{lem}
Let $k$ be a field of characteristic $0$, $\Gamma$ an ordered abelian group. Then the Hahn series $k((t^\Gamma))$ is NIP as a valued field if and only if $k$ is NIP as a pure field. It is strongly NIP if and only if $k$ and $\Gamma$ are. \end{lem} \begin{proof}
Hahn series are maximally complete, and therefore henselian. So the result follows from the previous fact. \end{proof}
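Recall that $k((t^\Gamma))$ denotes the field of formal series $f=\sum_{\gamma\in\Gamma} a_\gamma t^\gamma$ (with $a_\gamma\in k$) whose support $\operatorname{supp}(f):=\left\{\gamma\in\Gamma: a_\gamma\neq 0\right\}$ is well-ordered, equipped with the valuation
\[
v(f):=\min \operatorname{supp}(f)
\]
for $f\neq 0$; its residue field is $k$ and its value group is $\Gamma$.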
In order to prove clauses (1) and (3) of Theorem \ref{examples} it will suffice, therefore, to find strongly NIP ordered abelian groups that are not dp-minimal and ones that are not strongly NIP. We start with the latter:
\begin{ex}
Consider $\Gamma:=\mathbb Z^\mathbb N$ as an abelian group (with respect to pointwise addition) with the lexicographic order. Then $\Gamma$ is NIP but not strongly NIP. \end{ex} \begin{proof}
The group $\Gamma$ is ordered abelian, and therefore NIP by \cite{GurSch}. But $[\Gamma:n\Gamma]=\infty$ for all $n>1$, whence $\Gamma$ is not strongly NIP by \cite[Corollary 3.13]{KaSh}.
\end{proof}
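To make the last claim completely explicit: for $i\in\mathbb N$ let $e_i\in\mathbb Z^{\mathbb N}$ denote the sequence with $1$ in the $i$th coordinate and $0$ elsewhere. For $i\neq j$ all coordinates of $e_i-e_j$ lie in $\{-1,0,1\}$, so $e_i-e_j\notin n\Gamma$ whenever $n>1$. Thus the $e_i$ represent infinitely many distinct cosets in $\Gamma/n\Gamma$.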
\begin{rem}
In \cite{Sh863} Shelah considers a closely related example of an ordered abelian group that is not strongly dependent. \end{rem}
\begin{ex}
Let $\Gamma:=\mathbb Z^\mathbb N$. If $k$ is an NIP field of characteristic $0$ then $K:=k((t^\Gamma))$ is NIP by the previous lemma. It is not strongly NIP because $\Gamma$ is not strongly NIP. It is unbounded, since by the fundamental inequality it has infinitely many Kummer extensions of any prime degree $q$. Indeed, for any natural number $n$ let $a_1,\dots, a_n\in \Gamma$ be pairwise non-equivalent modulo $q\Gamma$. Let $c_1,\dots, c_n\in K$ be such that $v(c_i)=a_i$ for all $i$. Let $L\ge K$ be the extension obtained by adjoining $q^{\text{th}}$-roots of all the $c_i$. Let $\Delta=v(L)$, where $v$ is identified with its unique extension to $L$. Then $\Gamma\not\subseteq q\Delta$: otherwise $[q\Delta:q\Gamma]=[q\Delta:\Gamma][\Gamma:q\Gamma]=\infty$, whereas $[\Delta:\Gamma]\le [L:K]$ and $[q\Delta:q\Gamma]\le [\Delta:\Gamma]$, a contradiction. Since $n$ was arbitrary, this shows that $K$ has infinitely many Kummer extensions of degree $q$. \end{ex}
Note that by \cite[Corollary 3.13]{KaSh} and \cite{JaSiWa2015}, if $G$ is an ordered abelian group that is strongly dependent and not dp-minimal then there is at least one, but at most finitely many, primes $q$ such that $[G:qG]=\infty$. So the previous example with $G$ replacing $\Gamma$ will give an example for Theorem \ref{examples}(1), (2) and (4).
The details of the following example can be found in \cite{HalHas}: \begin{fact}\label{poschar}
Let ${}_{(2)}\mathbb Z$ be the localisation of $\mathbb Z$ at $(2)$. Let $B$ be a base for $\mathbb R$ as a vector space over $\mathbb Q$ and let $\langle B \rangle$ be the $\mathbb Z$-module generated by $B$. Let $G:={}_{(2)}\mathbb Z\otimes \langle B \rangle$. Viewed as an additive subgroup of $\mathbb R$ the group $G$ is naturally ordered. It is strongly dependent but not dp-minimal.
In positive characteristic, the situation is slightly different. The basic result is due to B\'elair \cite{BelHenselian}: \begin{fact}\label{Belair}
Let $(K,v)$ be an algebraically maximal Kaplansky field of characteristic $p>0$. Then $K$ is NIP as a valued field if and only if the residue field $k$ is NIP as a pure field. \end{fact}
This generalises to the strongly dependent setting using the following results: \begin{fact}[\cite{Sh863}, Claim 1.17]
Let $T$ be a theory of valued fields in the Denef-Pas language. If $T$ admits elimination of field quantifiers (\cite[Definition 1.14]{Sh863}) then $T$ is strongly dependent if and only if the value group and the residue field are. \end{fact}
\begin{fact}[\cite{BelHenselian}, Theorem 4.4]\label{QE}
Algebraically maximal Kaplansky fields of equi-characteristic $(p,p)$ admit elimination of field quantifiers in the Denef-Pas language. \end{fact} The combination of the last two facts extends Fact \ref{Belair} to the strongly NIP case in analogy with Fact \ref{delon}: \begin{cor}
Let $k$ be an infinite NIP field and $\Gamma$ an ordered abelian group. Then $k((t^\Gamma))$ is NIP provided that, if $p:=\mathrm{char}(k)>0$, the group $\Gamma$ is $p$-divisible. It is strongly NIP if and only if $k$ and $\Gamma$ are.
\begin{rem}
Though in \cite{BelHenselian} B\'elair does not claim Fact \ref{QE} for algebraically maximal Kaplansky fields in mixed characteristic, his proof seems to work equally well in that setting. A more self-contained proof is available in \cite{HalHasQE}. Combined with \cite[Proposition 5.9]{HalHas} we get that for the last sentence in the above corollary to hold (for algebraically maximal Kaplansky fields of any characteristic) we do not need the value group and the residue field to be pure. This gives a strongly dependent version of \cite[Theorem 3.3]{JaSiTransfer}. \end{rem}
It is natural to ask whether all NIP fields constructed as Hahn series satisfy Shelah's conjecture, namely, whether they all support a definable henselian valuation. It follows immediately from Corollary \ref{groups} that: \begin{prop}\label{ConjForHahn}
Let $k$ be an NIP field, $\Gamma$ an ordered abelian group which is $p$-divisible if $\mathrm{char}(k)=p>0$. Then $K:=k((t^\Gamma))$ is either algebraically closed, or real closed or it supports a definable non-trivial valuation. \end{prop}
This answers Question (1) for Hahn fields. Whether NIP Hahn fields support a definable \emph{henselian} valuation is more delicate. In positive characteristic this follows from \cite[Corollary 3.18]{JanKo}. Proposition 4.2 of that same paper provides a positive answer (in any characteristic) in case $K=k((t^\Gamma))$ and $\Gamma$ is not divisible. It seems, however, that the general equi-characteristic 0 case remains open. In some cases we can be even more precise. E.g., Hong \cite{Hong} gives conditions on the value group implying the definability of the natural (Krull) valuation on $k((t^\Gamma))$: \begin{fact}
Let $(K, \mathcal O)$ be a henselian field. If the value group contains a convex $p$-regular subgroup that is not $p$-divisible, then $\mathcal O$ is definable in the language of rings. \end{fact}
In all the examples discussed in the present section, the source of the complexity of the field (unbounded, strongly dependent not dp-minimal etc.) can be traced back to the value group of the natural (power series) valuation. For example, as shown in \cite{JaSiWa2015}, an ordered abelian group $\Gamma$ is dp-minimal if and only if $[\Gamma:p\Gamma]$ is finite for all primes $p$. By Theorem \ref{classification} dp-minimal fields are henselian with dp-minimal value groups. We note that it also follows from the same theorem that dp-minimal fields are bounded. Indeed\footnote{This argument was suggested to us by I. Efrat. Any mistake is, of course, solely ours.}, for any henselian NIP field $(K,v)$ with $\mathrm{char}(K)=\mathrm{char}(Kv)$ we have that $G_K\cong T \rtimes G_k$ where $G_K, G_k$ are the respective absolute Galois groups of $K$ and $k=Kv$, and $T$ is the inertia group (see \cite[Theorem 22.1.1]{EfBook} and use the fact that $K$ has no extensions of degree divisible by $\mathrm{char}(K)$). If $K$ is dp-minimal then $T=\prod_{l\in \Omega} \mathbb Z_l^{\dim_{\mathbb F_l}\Gamma/l\Gamma}$ for a certain set of primes $\Omega$. Since $\Gamma:=vK$ is dp-minimal, this implies that $T$ is small. Since $k$ is either real closed, algebraically closed or elementarily equivalent to a finite extension of $\mathbb Q_p$, the group $G_k$ is small, and hence so is $G_K$. If $(K,v)$ is of mixed characteristic, the exact same argument works if $v$ has no coarsening $w$ of equi-characteristic $0$. Otherwise, decompose $K\xrightarrow{w}Kw\xrightarrow{\bar v} Kv$ and note that $G_{Kw}$ is small by what we have just shown, so $G_K$ is small by our argument for equi-characteristic $0$.
It seems, therefore, natural to ask whether the complexity of the value group in the above examples can be recovered definably. Can any (model theoretic) complexity of an NIP field be traced back to that of an ordered abelian group? \begin{qu}
Let $K$ be a non-separably closed NIP field. Does $K$ interpret a dp-minimal field? If $K$ is not strongly dependent (resp. not dp-minimal), is $K$ either imperfect or does it admit an (externally) definable non-trivial henselian valuation with a non-strongly-dependent (resp. non-dp-minimal) value group? \end{qu}
\section{The Axioms of V-Topologies for $\ensuremath{\mathcal{T}} _G$}\label{AxiomsRevisted}
We now return to the construction of V-topologies from multiplicative subgroups described in Section \ref{prelims}. Throughout this section no model theoretic assumptions are made, unless explicitly stated otherwise.
For ease of reference, we recall the axioms of a V-topology: \begin{de}
A collection of subsets $\ensuremath{\mathcal{N}} $ of a field $K$ is a basis of 0-neighbourhoods for a V-topology on $K$ if it satisfies the following axioms:
\begin{description}
\item[\textbf{\textup{(V\,1)}}] $\bigcap \mathcal{N}:=\bigcap_{U\in\mathcal{N}}U=\left\{0\right\}$ and $\left\{0\right\}\notin\mathcal{N}$;
\item[\textbf{\textup{(V\,2)}}] $\forall\, U,\,V\ \exists\, W\ W\subseteq U\cap V$;
\item[\textbf{\textup{(V\,3)}}] $\forall\, U\ \exists\, V\ V-V\subseteq U$;
\item[\textbf{\textup{(V\,4)}}] $\forall\, U\ \forall\, x,\,y\in K\ \exists\, V\ \left(x+V\right)\cdot\left(y+V\right)\subseteq x\cdot y+U$;
\item[\textbf{\textup{(V\,5)}}] $\forall\, U\ \forall\, x\in K^{\times}\ \exists\, V\ \left(x+V\right)^{-1}\subseteq x^{-1}+U$;
\item[\textbf{\textup{(V\,6)}}] $\forall\, U\ \exists\, V\ \forall\, x,\,y\in K\ x\cdot y\in V\longrightarrow x\in U\, \vee\, y\in U$.
\end{description} \end{de}
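For orientation, we recall the motivating example (a standard fact, not needed in the sequel): if $v$ is a non-trivial valuation on $K$, the sets
\[
U_\gamma:=\left\{x\in K : v(x)>\gamma\right\},\qquad \gamma\in vK,
\]
form a basis of 0-neighbourhoods satisfying \textbf{\textup{(V\,1)}}--\textbf{\textup{(V\,6)}}. For instance, \textbf{\textup{(V\,3)}} holds with $V=U$ by the ultrametric inequality, and \textbf{\textup{(V\,6)}} holds for $U=U_\gamma$ with $V:=U_{\gamma+\gamma}$, since $v(x\cdot y)>\gamma+\gamma$ forces $v(x)>\gamma$ or $v(y)>\gamma$. The metric balls of an archimedean absolute value satisfy the axioms as well.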
\noindent\textbf{Notation:} From now on $G$ will denote a multiplicative subgroup of $K^\times$ with $-1\in G$ and $T:= G+1$. We let $\ensuremath{\mathcal{N}} _G:=\left\{\bigcap_{i=1}^n a_i\cdot T: a_i\in K^\times\right\}$, as defined in the opening paragraphs of Section \ref{prelims}.
In this setting the first part of \textbf{\textup{(V\,1)}} is automatic, and \textbf{\textup{(V\,2)}} holds by definition: \begin{lem}\label{lemV1}\label{lemV2}\begin{enumerate}
\item $\bigcap \ensuremath{\mathcal{N}} _G=\left\{0\right\}$.
\item $\forall\, U,\,V\ \exists\, W\ W\subseteq U\cap V$.
\end{enumerate} \end{lem} \begin{proof}
For every $x\in K^\times$ we have $x\not\in x\cdot T\in \ensuremath{\mathcal{N}} _G$. As $-1\in G$, moreover $0\in x\cdot T$ for every $x\in K^\times$. Hence $\bigcap \ensuremath{\mathcal{N}} _G=\left\{0\right\}$. This proves (1); item (2) holds by the definition of $\ensuremath{\mathcal{N}} _G$. \end{proof}
We will come back to the second part of Axiom~\textbf{\textup{(V\,1)}} later. Axiom~\textbf{\textup{(V\,3)}} is simplified as follows: \begin{lem}\label{lemV3V3'}
The following are equivalent:
\begin{description}
\item[\textbf{\textup{(V\,3)}}] $\forall\, U\ \exists\, V\ V-V\subseteq U$.
\item[\textbf{\textup{(V\,3)$'$}}] $ \exists\, V\ V-V\subseteq T$.
\item[\textbf{\textup{(V\,3)}$^*$}] $\forall\, U\ \exists\, V\ V+V\subseteq U$.
\end{description} \end{lem}
\begin{proof}
The implication \textbf{\textup{(V\,3)}}$\Rightarrow$\textbf{\textup{(V\,3)$'$}} is obvious, as $T=1\cdot T\in \ensuremath{\mathcal{N}} _G$.
By \textbf{\textup{(V\,3)$'$}} there exists $ V=\bigcap_{j=1}^n b_j\cdot T\in \ensuremath{\mathcal{N}} _G$ such that $V-V\subseteq T$.
Let $ U=\bigcap_{i=1}^m a_i\cdot T\in \ensuremath{\mathcal{N}} _G$.
Let $V':=\bigcap_{i=1}^m\bigcap_{j=1}^{n}\left( a_i\cdot b_{j}\cdot T\right)$. Then by direct computation
\[
V'-V'
\subseteq\bigcap_{i=1}^m a_i\cdot\left(V-V\right)
\subseteq \bigcap_{i=1}^m a_i\cdot T= U.
\]
This shows that \textbf{\textup{(V\,3)}} follows from \textbf{\textup{(V\,3)$'$}}. Replacing $V$ with $V\cap (-V)$ (throughout) we may assume that $V=-V$, proving the equivalence with \textbf{\textup{(V\,3)}$^*$}. \end{proof}
In order to simplify Axiom~\textbf{\textup{(V\,4)}} we need: \begin{lem}\label{lemV4xy0case}
If $ \exists\, V\ V\cdot V\subseteq T$ then
$\forall\, U\ \exists\, V\ V\cdot V\subseteq U$. \end{lem}
\begin{proof}
Let $U=\bigcap_{i=1}^ma_i\cdot T\in \ensuremath{\mathcal{N}} _G$.
By assumption there exists $ V=\bigcap_{j=1}^n b_j\cdot T\in \ensuremath{\mathcal{N}} _G$ such that $V\cdot V\subseteq T$.
Let $ V':=\bigcap_{i=1}^{m}\left(\left(\bigcap_{j=1}^n a_i\cdot b_j\cdot T\right)\cap \left(\bigcap_{j=1}^n b_j\cdot T\right)\right)\in\ensuremath{\mathcal{N}} _G$. Then by direct computation
\[
V'\cdot V'
\subseteq\bigcap_{i=1}^{m} a_i\cdot\left(V\cdot V\right)
\subseteq \bigcap_{i=1}^ma_i\cdot T= U.
\]
This proves the claim. \end{proof}
Now we can prove: \begin{lem}
The axiom \\
\textbf{\textup{(V\,4)}}
$\forall\, U\ \forall\, x,\,y\in K\ \exists\, V\ \left(x+V\right)\cdot\left(y+V\right)\subseteq x\cdot y+U$\\
is equivalent to the conjunction of\\
\textbf{\textup{(V\,4)$'$}} $ \exists\, V\ V\cdot V\subseteq T$ and \\
\textbf{\textup{(V\,4)$''$}}
$\forall\, x\in K\ \exists V \left(x+V\right)\cdot\left(1+V\right)\subseteq x+ T$. \end{lem} \begin{proof}
\textbf{\textup{(V\,4)$'$}} and \textbf{\textup{(V\,4)$''$}} are special cases of \textbf{\textup{(V\,4)}}. So we prove the other implication.
Let $x,y\in K$ and $ U=\bigcap_{i=1}^m a_i\cdot T\in \ensuremath{\mathcal{N}} _G$. The case $x=y=0$ is Lemma~\ref{lemV4xy0case}. So we assume that $y\neq 0$. For every $i\in \left\{1,\ldots, m\right\}$ we define $\widetilde{a}_i:=a_i\cdot y^{-1}$ and $x_i:=x\cdot\widetilde{a}_i^{-1}$. By \textbf{\textup{(V\,4)$''$}} there exists $V_i\in \ensuremath{\mathcal{N}} _G$ such that
\begin{equation}\label{eqV4subseteqxj+ T}
\left(x_i+V_i\right)\cdot \left(1+V_i\right)\subseteq x_i+ T.
\end{equation}
Let $ V:=\bigcap_{i=1}^m\left(\left(\widetilde{a}_i\cdot V_i\right)\cap \left(y\cdot V_i\right)\right)\in\ensuremath{\mathcal{N}} _G$. Then
\begin{eqnarray*}
\left(x+V\right)\cdot\left(y+V\right)
&\subseteq& \bigcap_{i=1}^m\big(\widetilde{a}_i\cdot y\cdot \left(x_i+V_i\right)\cdot\left(1+V_i\right)\big)\\
&{\subseteq} &\bigcap_{i=1}^m\big(\widetilde{a}_i\cdot y\cdot\left( x_i+ T\right)\big)= x\cdot y+U.
\end{eqnarray*}
where the last inclusion follows from Equation (\ref{eqV4subseteqxj+ T}). This finishes the proof. \end{proof}
Assuming \textbf{\textup{(V\,3)$'$}} we can simplify further: \begin{lem}\label{lemV4V3'V4'}
The axioms \textbf{\textup{(V\,3)$'$}} and \textbf{\textup{(V\,4)$'$}} imply axiom \textbf{\textup{(V\,4)}}.
\end{lem}
\begin{proof}
By the previous lemma it will suffice to prove the lemma for $U=T$ and $y=1$. The case $x=0$ is automatic from the assumptions and Lemma \ref{lemV3V3'}. So assume $x\in K^\times$.
By Lemma~\ref{lemV3V3'} there exist $V_1,\, V_2$ such that $V_1+V_1\subseteq T$, $V_2+V_2\subseteq V_1$. Further by Lemma~\ref{lemV4xy0case} there exists $V_3$ with $V_3\cdot V_3\subseteq V_2$.
Define $V:= \left(x^{-1}\cdot V_1\right)\cap V_2 \cap V_3\in \ensuremath{\mathcal{N}} _G$.
Let $v,w\in V$.
\begin{eqnarray*}
\left(x+v\right)\cdot \left(1+w\right)
\in x+V+x\cdot V+V\cdot V \subseteq x+V_2+x\cdot x^{-1}\cdot V_1+V_3\cdot V_3 \\ \subseteq x+V_2+V_1+V_2 \subseteq x+V_1+V_1
\subseteq x+ T.
\end{eqnarray*}
Hence $\left(x+V\right)\cdot\left(1+V\right)\subseteq x+ T$, as required.
\end{proof}
The axiom \textbf{\textup{(V\,5)}} holds without further assumptions: \begin{lem}\label{lemV5}
Let $K$ be a field and $G$ a multiplicative subgroup of $K^\times$ with $-1\in G$.
Then \textbf{\textup{(V\,5)}} $\forall\, U\ \forall\, x\in K^{\times}\ \exists\, V\ \left(x+V\right)^{-1}\subseteq x^{-1}+U$ holds. \end{lem}
\begin{proof}
We will first show
\begin{equation}\label{eq:V5 T}
\forall\, x\in K^{\times}\ \exists\, V\ \left(x+V\right)^{-1}\subseteq x^{-1}+ T.
\end{equation}
For $x=-1$ let $V:= T$.
We have
$\left(x+ T\right)^{-1}=\left(-1+G+1\right)^{-1}=G^{-1}
= G
=x^{-1}+ T.$
If $x\in K^\times\setminus\left\{-1\right\}$,
let $b_1=-x^2\cdot \left(1+x\right)^{-1}$, $b_2=-x$ and $V:=b_1\cdot T\cap b_2\cdot T=b_1\cdot\left(G+1\right)\cap b_2\cdot\left(G+1\right)$.
Let $z\in \left(x+V\right)^{-1}$. Let $g_1,g_2 \in G$ such that $z=\left(x+b_1\cdot g_1 +b_1\right)^{-1}=\left(x+b_2\cdot g_2 +b_2\right)^{-1}$.
We have
\begin{equation}\label{eqV5z}
z=\left(x+b_2\cdot g_2 +b_2\right)^{-1}=\left(x-x\cdot g_2 -x\right)^{-1}=-x^{-1}\cdot g_2^{-1}.
\end{equation}
Further we have
$z^{-1}= x+b_1+b_1\cdot g_1$ and therefore $1- b_1\cdot g_1\cdot z= \left(x+b_1\right)\cdot z$. This implies
\begin{eqnarray*}
z&=& \left(1-b_1\cdot g_1\cdot z\right)\cdot \left(x+b_1\right)^{-1}\\
&=& x^{-1}+1+x\cdot g_1\cdot z\\
&\stackrel{\textrm{\scriptsize{(\ref{eqV5z})}}}{=}& x^{-1}+1-x\cdot g_1\cdot x^{-1}\cdot g_2^{-1}\\
&=& x^{-1}+ \left(- g_1\cdot g_2^{-1}\right)+1
\in x^{-1}+G+1.
\end{eqnarray*}
Hence $\left(x+V\right)^{-1}\subseteq x^{-1}+ T$.
This proves Equation~(\ref{eq:V5 T}).
Now let $x\in K^\times$ and $U=\bigcap_{i=1}^m a_i\cdot T\in\ensuremath{\mathcal{N}} _G$.
For every $i\in \left\{1,\ldots, m\right\}$ let $x_i:=a_i\cdot x$. By Equation~(\ref{eq:V5 T}) there exists $V_i$ such that
\begin{equation}\label{eqV5zwei}
\left(x_i+V_i\right)^{-1}\subseteq x_i^{-1}+ T.
\end{equation}
For $ V:=\bigcap_{i=1}^ma_i^{-1}\cdot V_i$ we have
\begin{eqnarray*}
\left(x+V\right)^{-1}
=\bigcap_{i=1}^m a_i\cdot\left(x_i+V_i \right)^{-1}
\stackrel{\textrm{\scriptsize{(\ref{eqV5zwei})}}}{\subseteq} \bigcap_{i=1}^m a_i\cdot \left( x_i^{-1}+ T\right)
= x^{-1}+U.
\end{eqnarray*}
Therefore \textbf{\textup{(V\,5)}} holds. \end{proof}
The axiom \textbf{\textup{(V\,6)}} can be reduced as follows: \begin{lem}\label{lemV6V6'}
The following are equivalent:
\begin{description}
\item[\textbf{\textup{(V\,6)}}] $ \forall\, U\ \exists\,V\ \forall\,x,y\in K\ (x\cdot y\in V\to x\in U\vee y\in U)$
\item[\textbf{\textup{(V\,6)$'$}}] $ \exists\, V\ \forall\,x,y\in K\ (x\cdot y\in V\to x\in T\vee y\in T)$.
\end{description} \end{lem}
\begin{proof}
Since $T=1\cdot T\in \ensuremath{\mathcal{N}} _G$, axiom \textbf{\textup{(V\,6)$'$}} is a special case of \textbf{\textup{(V\,6)}}. We now assume \textbf{\textup{(V\,6)$'$}} and show \textbf{\textup{(V\,6)}}. We will show by induction on $m$ that for all $a_1,\ldots, a_m\in K^\times$, there exists
$V\in \ensuremath{\mathcal{N}} _G$ such that for all $x,y\in K$, if $x\cdot y\in V$ then $x\in \bigcap_{i=1}^m a_iT$ or $y\in \bigcap_{i=1}^m a_iT$.
Let $a_1\in K^\times$ and $U:= a_1\cdot T\in \ensuremath{\mathcal{N}} _G$. By \textbf{\textup{(V\,6)$'$}} there exists $V$ such that for all $x,y\in K$, if $x\cdot y\in V$ then $x\in T$ or $y\in T$. Define $V':={a_1^2}\cdot V\in \ensuremath{\mathcal{N}} _G$. For all $x,y\in K$ such that $x\cdot y\in V'$ we have $ x\cdot a_1^{-1}\cdot y\cdot a_1^{-1}\in a_1^{-2} \cdot V'=V$ and therefore $x\cdot a_1^{-1}\in T$ or $y\cdot a_1^{-1}\in T$ and hence $x\in U$ or $y\in U$.
Now let $a_1,a_2\in K^\times$ and $ U:=\bigcap_{i=1}^2 a_i\cdot T\in \ensuremath{\mathcal{N}} _G$.
By assumption there exists $V$ such that for all $x,y\in K$ if $x\cdot y\in V$ then $x\in T$ or $y\in T$.
Define $ V':=a_1^2\cdot V\cap a_2^2\cdot V\cap a_1\cdot a_2\cdot V $.
Let $x,y\in K$ such that $x\cdot y\in V'$. Then $ x\cdot a_1^{-1}\cdot y\cdot a_1^{-1}\in a_1^{-2}\cdot V'\subseteq V$ and therefore as above
\begin{equation}\label{oneina1}
x\in a_1\cdot T\text{ or }y\in a_1\cdot T
\end{equation}
and
\begin{equation}\label{oneina2}
x\in a_2\cdot T\text{ or }y\in a_2\cdot T.
\end{equation}
If, by way of contradiction, $x\cdot a_1^{-1}\notin T$ and $y\cdot a_2^{-1}\notin T$, then $ x\cdot a_1^{-1}\cdot y\cdot a_2^{-1}\notin V$, implying $ x\cdot y\notin {a_1}\cdot{a_2}\cdot V\supseteq V'$ contradicting the choice of $x$ and $y$.
Therefore
\begin{equation}\label{xina1oryina2}
x\in a_1\cdot T\text{ or } y\in a_2\cdot T
\end{equation}
and, similarly,
\begin{equation}\label{yina1orxina2}
y\in a_1\cdot T\text{ or } x\in a_2\cdot T.
\end{equation}
A straightforward verification shows that equations (\ref{oneina1})--(\ref{yina1orxina2}) imply that if $x\cdot y\in V'$ then either $x\in U$ or $y\in U$.
Now let $m\geq 3$. Assume that for all $a_1,\ldots, a_{m-1}$ there exists $V$ such that for all $x,y\in K$, if $x\cdot y\in V$ then $x\in \bigcap_{i=1}^{m-1}a_i\cdot T$ or $y\in \bigcap_{i=1}^{m-1}a_i\cdot T$.
Let $a_1,\ldots, a_m\in K^\times$ and $ U:=\bigcap_{i=1}^m a_i\cdot T\in \ensuremath{\mathcal{N}} _G$. By induction hypothesis for every $j\in \left\{1,\ldots, m\right\}$ there exists $V_{\neq j}$ such that for all $x,y\in K$, if $x\cdot y\in V_{\neq j}$ then $\displaystyle x\in \bigcap_{\stackrel{i=1}{i\neq j}}^m a_i\cdot T$ or $\displaystyle y\in \bigcap_{\stackrel{i=1}{i\neq j}}^m a_i\cdot T$.
Define $ V:=\bigcap_{i=1}^m V_{\neq i}$. Let $x,y\in K^\times$ such that $x\cdot y\in V$.
If $x\in a_i\cdot T$ for all $i\in \left\{1,\ldots, m\right\}$ then $x\in U$ and we are done.
Otherwise let $j\in \left\{1,\ldots, m\right\}$ with $x\notin a_j\cdot T$. Let $k,\ell\in \left\{1,\ldots, m\right\}\setminus\left\{j\right\}$ with $k\neq \ell$.
We have $ x\cdot y\in \bigcap_{i=1}^m V_{\neq i}\subseteq V_{\neq k}$. As $\displaystyle x\notin a_j\cdot T\supseteq \bigcap_{\stackrel{i=1}{i\neq k}}^m a_i\cdot T$ we have $\displaystyle y\in \bigcap_{\stackrel{i=1}{i\neq k}}^m a_i\cdot T$.
Analogously we show $\displaystyle y\in \bigcap_{\stackrel{i=1}{i\neq \ell}}^m a_i\cdot T$.
Therefore $\displaystyle y\in \bigcap_{\stackrel{i=1}{i\neq k}}^m a_i\cdot T \cap \bigcap_{\stackrel{i=1}{i\neq \ell}}^m a_i\cdot T=U$.
Hence for all $U$ there exists $V$ such that for all $x,y\in K$, if $x\cdot y\in V$ then $x\in U$ or $y\in U$. \end{proof}
Summing up all the simplifications of the present section we obtain:
\begin{prop}\label{PropSimplifiedVAxiomsnontrivialdefinable}
Let $K$ be a field and $q$ a prime with $\mathrm{char}\left(K\right)\neq q$; if $q= 2$ assume further that $K$ is not euclidean. Assume that $K$ contains a primitive $q$th root of unity $\zeta_q$. Let $G:=\left(K^\times\right)^q\neq K^\times$.
Assume that
\begin{description}
\item[\textbf{\textup{(V\,1)$'$}}] $\left\{0\right\}\notin\mathcal{N}_G$;
\item[\textbf{\textup{(V\,3)$'$}}] $ \exists\, V\ V-V\subseteq T$;
\item[\textbf{\textup{(V\,4)$'$}}] $ \exists V \ V\cdot V\subseteq T$;
\item[\textbf{\textup{(V\,6)$'$}}] $ \exists\, V\ \forall\,x,y\in K\ x\cdot y\in V\to x\in T\vee y\in T$.
\end{description}
Then $K$ admits a non-trivial definable valuation. \end{prop}
\begin{proof}
With Lemma~\ref{lemV1}, Lemma~\ref{lemV2}, Lemma~\ref{lemV3V3'}, Lemma~\ref{lemV4V3'V4'}, Lemma~\ref{lemV5} and Lemma~\ref{lemV6V6'} the result follows directly from Corollary~\ref{corVAxiomsnontrivialdefinable}. \end{proof}
\section{Back to NIP fields}\label{secNIP} As already explained in the opening sections, our main motivation in the present paper is to study the existence of definable valuations on (strongly) NIP fields. We also hope that such a project may shed some light on the long-standing open conjecture that stable fields are separably closed. We have already explained that in the stable case this conjecture can be rather easily settled under the further assumption that the field is bounded. It is therefore natural to ask whether the same assumption can help settle the questions stated in the Introduction. In the present section we show how boundedness quite easily yields Axiom \textbf{\textup{(V\,1)$'$}} (stating that $\left\{0\right\}\notin\mathcal{N}_G$).
If $K$ is an infinite NIP field, i.e. a field definable in a monster model satisfying NIP, then by \cite[Corollary~4.2]{KrSRNIP} there is a definable additively and multiplicatively invariant Keisler measure on $K$. Throughout this section, unless stated otherwise, let $K$ be an infinite NIP field and $\mu$ an additively and multiplicatively invariant definable Keisler measure on $K$.
By \cite[Proposition~4.5]{KrSRNIP} for any definable subset $X$ of $K$ with $\mu(X)>0$ and any $a\in K$, we have \[\mu\left(\left(a+X\right)\cap X\right)=\mu\left(X\right).\tag{$\clubsuit$}\]
\begin{lem}\label{propV1ForGgenerel}
Let $a_1,\ldots, a_m\in K^\times$ and $G\subseteq K^\times$ a multiplicative subgroup with $-1\in G$ and $\mu(G)>0$.
Then
$
\bigcap_{i=1}^m a_i T\supsetneq\{0\}.
$ \end{lem}
\begin{proof}
As $-1\in G$ (so in particular $G=-G$) it follows that $0\in\bigcap_{i=1}^m a_i T$.
For any $a\in K^\times$ we further have
\[G+a^{-1}=\{s: 1\in a(G+s)\}\tag{*}\]
(indeed, $1\in a(G+s)$ iff $a^{-1}-s\in G$ iff $s\in a^{-1}+G$, using $G=-G$).
By $(\clubsuit)$ applied to $X=G$, each set $\left(G+a_i^{-1}\right)\cap G$ has measure $\mu(G)$; by additivity of the measure this gives
\[\mu\Big(\bigcap_{i=1}^m \big(\left(G+a_i^{-1}\right)\cap G\big)\Big)=\mu(G)>0. \]
So by the right hand side of $(*)$ there exists $t_0\in \bigcap_{i=1}^m \{s\in G: 1\in a_i(G+s)\}$. Hence $1\in a_i(G+t_0)$ for all $1\le i \le m$, and as $t_0\in G$ we get $t_0^{-1}\in \bigcap_{i=1}^m a_i(G+1)=\bigcap_{i=1}^m a_iT$.
\end{proof}
\begin{cor}\label{thmV3V4V6Existsdefinablenontrivialvaluation}
Let $K$ be an infinite NIP field with $\sqrt{-1}\in K$. Let $G:=\left(K^\times\right)^q\neq K^\times$ for some prime $q\neq \ensuremath{\textrm{char}} (K)$ with $\zeta_q\in K$. Assume that $[K^\times: G]<\infty$ and that for $T:=G+1$ we have:
\begin{description}
\item[\textbf{\textup{(V\,3)$'$}}] $ \exists\, V\ V-V\subseteq T$
\item[\textbf{\textup{(V\,4)$'$}}] $ \exists V \ V\cdot V\subseteq T$
\item[\textbf{\textup{(V\,6)$'$}}] $ \exists\, V\ \forall\,x,y\in K\ x\cdot y\in V\to x\in T\vee y\in T$.
\end{description}
Then $K$ admits a non-trivial $\emptyset$-definable valuation. \end{cor}
\begin{proof}
By additivity and invariance of $\mu$ we get that $\mu(G)=[K^\times:G]^{-1}$. The result now follows directly from Proposition~\ref{PropSimplifiedVAxiomsnontrivialdefinable} using Lemma~\ref{propV1ForGgenerel}. \end{proof}
As mentioned in Section~1, \[\ensuremath{\mathcal{N}} _G':=\left\{\left(a_1\cdot \left(G+1\right)\right)\cap\left(a_2\cdot\left( G+1\right)\right): a_1,a_2\in K^\times\right\}\] is a basis of neighbourhoods of zero for $\ensuremath{\mathcal{T}} _G$. We obtain the following corollary:
\begin{cor}\label{remNG'}
Let $K$ be an infinite NIP field with $\sqrt{-1}\in K$. Let $G:=\left(K^\times\right)^q\neq K^\times$ for some prime $q\neq \ensuremath{\textrm{char}} (K)$ with $\zeta_q\in K$. Assume that $[K^\times: G]<\infty$. Then for $T:=G+1$ we have that
\begin{description}
\item[\textbf{\textup{(V\,3)$'$}}] $ \exists\, V\in \ensuremath{\mathcal{N}} _G \ V-V\subseteq T$
\item[\textbf{\textup{(V\,4)$'$}}] $ \exists V\in \ensuremath{\mathcal{N}} _G \ V\cdot V\subseteq T$
\item[\textbf{\textup{(V\,6)$'$}}] $ \exists\, V\in \ensuremath{\mathcal{N}} _G \ \forall\,x,y\in K\ x\cdot y\in V\to x\in T\vee y\in T$.
\end{description}
if and only if
\begin{description}
\item[\textbf{\textup{(V\,3)$'_2$}}] $ \exists\, \widetilde{V}\in \ensuremath{\mathcal{N}} _G'\ \widetilde{V}-\widetilde{V}\subseteq T$
\item[\textbf{\textup{(V\,4)$'_2$}}] $ \exists \widetilde{V}\in \ensuremath{\mathcal{N}} _G' \ \widetilde{V}\cdot \widetilde{V}\subseteq T$
\item[\textbf{\textup{(V\,6)$'_2$}}] $ \exists\, \widetilde{V}\in \ensuremath{\mathcal{N}} _G'\ \forall\,x,y\in K\ x\cdot y\in \widetilde{V}\to x\in T\vee y\in T$.
\end{description} \end{cor}
\begin{proof}
As $\ensuremath{\mathcal{N}} _G'\subseteq \ensuremath{\mathcal{N}} _G$ it is clear that if \textbf{\textup{(V\,3)$'_2$}}, \textbf{\textup{(V\,4)$'_2$}} and \textbf{\textup{(V\,6)$'_2$}} hold, then so do
\textbf{\textup{(V\,3)$'$}}, \textbf{\textup{(V\,4)$'$}} and \textbf{\textup{(V\,6)$'$}}.
On the other hand, if \textbf{\textup{(V\,3)$'$}}, \textbf{\textup{(V\,4)$'$}} and \textbf{\textup{(V\,6)$'$}} hold, then $\ensuremath{\mathcal{T}} _G$ is a V-topology and $\ensuremath{\mathcal{N}} _G'$ is a 0-neighbourhood basis for $\ensuremath{\mathcal{T}} _G$. Therefore, for any $V\in \ensuremath{\mathcal{N}} _G$ witnessing \textbf{\textup{(V\,i)}} ($i=3,4,6$), there exists $\widetilde{V}\in \ensuremath{\mathcal{N}} _G'$ such that $\widetilde{V}\subseteq V$, and as -- for a fixed $V$ -- the axiom \textbf{\textup{(V\,i)}} is universal, it is automatically satisfied by $\widetilde{V}$.
\end{proof}
Note that \textbf{\textup{(V\,3)$'_2$}}, \textbf{\textup{(V\,4)$'_2$}} and \textbf{\textup{(V\,6)$'_2$}} are first order sentences in the language of rings (appearing explicitly in the statement of Conjecture \ref{fo}). Let us denote their conjunction as $\psi_K$. Thus, if $K$ is a bounded\footnote{As already mentioned, ``radically bounded'' would suffice.} NIP field such that $K\models \psi_K$ then $K$ supports a definable valuation.
\section{Concluding remarks} We have shown in Section \ref{prelims} that if $K$ is infinite NIP there exists a sentence $\psi_K$ (possibly with parameters) such that: \begin{enumerate}
\item If $K\models \psi_K$ then $K$ admits a non-trivial definable valuation.
\item If $K$ is $t$-henselian then $K\models \psi_K$.
\item If $K^q\neq K$ for some prime $q\neq \mathrm{char}(K)$ then $\psi_K$ (and the definable valuation ring) can be taken over $\emptyset$. \end{enumerate} If $K$ is as in (3) above, $\zeta_q\in K$ is a primitive root of unity, and $\sqrt{-1}\in K$, the sentence $\psi_K$ is the statement that $\ensuremath{\mathcal{N}} _G'$ is a neighbourhood basis for a V-topology for $G=(K^\times)^q$. Assuming for simplicity that the predicates $G$, $T:=1+G$ and $aT$ for $a\in K^\times$ are atomic, $\psi_K$ is the conjunction of (V1)--(V6), which is readily checked to be an AEA-sentence. We have shown in Section \ref{AxiomsRevisted} that $\psi_K$ is equivalent to the conjunction of (V1)$'$, (V3)$'$, (V4)$'$ and (V6)$'$ -- which is an EA-sentence. In the last corollary we have shown that if $K$ is bounded NIP then, in fact, (V1)$'$ automatically holds, reducing further the complexity of $\psi_K$.
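With the same convention -- treating membership in $G$, $T$ and $aT$ as atomic, where $t\in aT$ unwinds to the ring formula $\exists s\,\left(s\neq 0\wedge t=a\cdot\left(s^q+1\right)\right)$ -- the EA-shape of $\psi_K$ can be displayed explicitly; e.g. \textbf{\textup{(V\,6)$'_2$}} reads
\[
\exists\, a_1,a_2\in K^\times\ \forall\, x,y\in K\ \bigl(x\cdot y\in a_1\cdot T\cap a_2\cdot T\ \longrightarrow\ x\in T\vee y\in T\bigr),
\]
together with the analogous existential-universal forms of \textbf{\textup{(V\,3)$'_2$}} and \textbf{\textup{(V\,4)$'_2$}}.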
If $K$ does not satisfy (3) above (or the additional assumptions on roots of unity) we have to replace $K$ with a finite extension $L\ge K$ satisfying the necessary assumptions. In that case $\psi_K$ has to be relativised to $L$ -- which, since $L$ is interpretable in $K$ (possibly with parameters), is not a problem, and does not change the complexity of $\psi_K$ -- provided that, as above, the corresponding field operations, the group $G$, the set $T$ and the open sets $aT$ interpreted in $L$ are assumed atomic.
\end{document} |
\begin{document}
\title{Continuous transfer and laser guiding between two cold atom traps} \author{E.~Dimova\inst{1}\thanks{\email{[email protected]}}, O.~Morizot\inst{2}\thanks{\email{[email protected]}}, G.~Stern\inst{1}, C.L.~Garrido Alzar\inst{2}, A.~Fioretti\inst{1}, V.~Lorent\inst{2}, D.~Comparat\inst{1}, H.~Perrin\inst{2} and P.~Pillet\inst{1}} \institute{{Laboratoire Aim{\'e} Cotton, CNRS, B{\^a}timent 505, Universit\'e Paris Sud, F-91405 Orsay, FRANCE} \and {Laboratoire de physique des lasers, CNRS-Universit\'e Paris 13, 99 av. Jean-Baptiste Cl\'ement, F-93430 Villetaneuse, FRANCE}}
\titlerunning{Continuous transfer and laser guiding between two cold atom traps} \authorrunning{E.~Dimova et al.}
\abstract{ We have demonstrated and modeled a simple and efficient method to transfer atoms from a first Magneto-Optical Trap (MOT) to a second one. Two independent setups, with cesium and rubidium atoms respectively, have shown that a high power and slightly diverging laser beam optimizes the transfer between the two traps when its frequency is red-detuned from the atomic transition. This pushing laser extracts a continuous beam of slow and cold atoms out of the first MOT and also provides a guiding to the second one through the dipolar force. In order to optimize the transfer efficiency, the dependence of the atomic flux on the pushing laser parameters (power, detuning, divergence and waist) is investigated. The atomic flux is found to be proportional to the first MOT loading rate. Experimentally, the transfer efficiency reaches $70\,\%$, corresponding to a transfer rate up to $2.7\times10^8$\,atoms/s with a final velocity of 5.5~m/s. We present a simple analysis of the atomic motion inside the pushing--guiding laser, in good agreement with the experimental data. \PACS{ {07.77.Gx, Atomic and molecular beam sources and detectors} \and {32.80.Lg, {Mechanical effects of light on atoms, molecules, and ions}} \and {32.80.Pj, {Optical cooling of atoms; trapping}}~ } }
\date{\today}
\maketitle
\section{Introduction}
The realization of degenerate quantum gases requires the production of an initial dense and cold trapped atomic sample. The lifetime of the trapped atoms must be long enough to allow for appropriate evaporative cooling ramps, lasting up to several tens of seconds. A standard vapour Magneto-Optical Trap (MOT) setup cannot always satisfy this last condition because of the relatively high background pressure of the atomic vapour in the cell. The use of a dispenser~\cite{2001PhRvA..64b3402R} or of a desorption source~\cite{2001PhRvA..63b3404A,2003PhRvA..67e3401A} to load the MOT does not usually provide a trap lifetime longer than a few seconds. To obtain the required lifetime, the MOT has to be placed in an ultra-high vacuum chamber and loaded from a cold atom source, in general a slow and cold atomic beam. One of the demonstrated and widely used methods to create a cold atomic beam is the Zeeman slower technique. However, this solution requires a substantial technical development, involving experimental techniques quite different from those of a MOT setup. In this paper, we therefore concentrate on the transfer of atoms from a first cold source to a trap situated in a second high vacuum chamber.
There are several ways to transfer atoms from a cold atomic source to the high vacuum chamber. Mechanical devices~\cite{JILA} or magnetic guides~\cite{2001PhRvA...63:031401} are used to implement an efficient transfer of atoms initially in a MOT directly to either magnetic, electrostatic or atom chip traps. Other techniques, based on quasi-resonant light forces, allow a faster transfer to a recapture MOT. Beam velocities low enough to allow the capture in a MOT in an ultra-high vacuum chamber can be obtained by the pyramidal MOT \cite{1998OptCo.157..303A,2003OSAJB..20.1161K}, the conical mirror funnel~\cite{2001PhRvA..64a3402K} or the two-dimensional MOT~\cite{1998PhRvA..58.3891D,2002PhRvA..66b3410S,2003OptCo.226..259C,2004PRL...93..093003}. Even simpler devices exist such as the low velocity intense atomic source (LVIS)~\cite{1996PhRvL..77.3331L,2002EPJD...20..107C,2002OptCo.212..307T}. Very high fluxes, up to $3 \times 10^{12}$~atoms/s, have been reported with a transversely cooled candlestick Zeeman slower type of setup~\cite{2004physics...7040S}. However, the counterpart of this large flux is a higher atomic velocity, $116$~m/s in this last experiment, which is by far too high to load a second MOT. A pulsed multiple loading, starting from a three-dimensional MOT, has been performed in Ref.~\cite{1996OptLett.21..290}. The atoms are pushed by a near-resonant laser beam, resulting in a large number of atoms ($1.5 \times 10^{10}$) in the second MOT, a loading rate of $2 \times 10^8$ atoms/s and a lower final velocity of 16~m/s. However, the transfer relies on a hexapole magnetic field, produced by a current above 60~A, which complicates the experiment. Simpler devices, without magnetic guiding, have achieved similar results using a continuous transfer~\cite{Wohlleben2001,Cacciapuoti2001,ChinesePhys}. In these experiments, a thin extraction column is created in the centre of a MOT and, due to a radiation pressure imbalance, a continuous beam of cold atoms is produced.
It is possible to couple these simple devices with a distinct dipolar atomic guide~\cite{2000PhRvA..61c3411M}. We propose here to use the same laser beam for pushing and guiding the atoms, resulting in an even simpler setup.
This paper reports on a double MOT setup combining the ability of a pushing laser to extract the atoms from a first trap (MOT1) and to guide them to a second trap (MOT2). The idea is to merge the leaking MOT technique~\cite{1996PhRvL..77.3331L,Wohlleben2001,Cacciapuoti2001} with the red-detuned far off--resonance optical dipole guide technique~\cite{1999OptCo.166..199P,1999EL.....45..450S,2002NJPh....4...69W}. Two experiments have been simultaneously performed in two different laboratories, with different atoms: $^{133}$Cs at Laboratoire Aim{\'e} Cotton and $^{87}$Rb at Laboratoire de phy\-si\-que des lasers. Our setups are as simple as the one used in the leaking MOT techniques, but provide a higher flux and a lower atomic beam velocity. We can achieve a transfer efficiency up to $70\,\%$ with a mean atomic velocity of 4 to 12~m/s depending on the pushing beam parameters. Our setups are very robust against misalignments of the pushing and guiding laser beam, and small variations of its detuning or power. The only requirement is a sufficiently high laser power (tens of mW) to produce a significant dipolar force to guide the atoms during their flight.
This paper is organized as follows: in Section 2 we give details on the experimental realization of the beam and discuss the role of MOT1 parameters. In Section 3 we describe theoretically the pushing and guiding processes during the atom transfer. Section 4 discusses the experimental parameter dependences of the setup as compared with the theory. Finally we present a comparison with other available techniques.
\section{Experimental realization}
\subsection{Experimental setup}
The vacuum system is similar in both experiments, except for a slight difference in the design of the differential vacuum tubes and the MOT2 cells.
\begin{figure}
\caption{Scheme of the experimental setups. The parameters used in the discussion ($f$, $D$, $z_0$, $w_0$) are labeled on the picture. The vertical $z$ axis is oriented downwards.}
\label{fig:setup}
\end{figure}
For the cesium (resp. rubidium) experiment the setup consists of two cells vertically separated (see Figure~\ref{fig:setup}). The distance between the two traps is $D=57$~cm (resp. $D=72$~cm). A reservoir connected to the upper source cell supplies the atomic vapour. The recapture chamber is a glass cell with $1\times1\times10$~cm$^3$ (resp. $1.25 \times 7.5\times 12$~cm$^3$) volume. A differential pumping tube located $3$~cm (resp. 10~cm) below MOT1 provides a vacuum within the $10^{-11}$~mbar range in the bottom MOT2 cell while in the MOT1 cell it is in the $ 10^{-8}-10^{-9}$\,mbar range. For the cesium experiment, the tube is $18$~cm long and has a conical shape ($3$~mm diameter at its top and $6$~mm at its bottom part) whereas it is cylindrical, 12~cm long and 6~mm diameter in the rubidium experiment.
In both cases, MOT1 runs in a standard magneto-optical trap configuration with a magnetic field gradient around $15$\,G/cm along the horizontal axis of the MOT1 coils. All the laser beams have a $2.5$~cm diameter (clipped by the mounts of the quarter-wave plates) and are provided by laser diodes. In the rubidium experiment, the laser is divided into 3 retroreflected beams carrying 10~mW laser power. They are 10~MHz red-detuned from the $^{87}$Rb $5s(F=2)\rightarrow 5p_{3/2}(F'=3)$ transition. In the cesium experiment, two $5$\,mW radial beams are retroreflected and make an angle $\pm 45^{\circ }$ with the vertical axis. Each of the two (non reflected) axial beams carries $10$\,mW laser power. They are $15\,$MHz red-detuned from the Cs $6s(F=4)\rightarrow 6p_{3/2}(F'=5)$ transition. The 5~mW repumping light, with a frequency on resonance respectively with the Cs transition $6s(F=3)\rightarrow 6p_{3/2}(F'=4)$ and the $^{87}$Rb transition $5s (F=1) \rightarrow 5p_{3/2}\, (F'=2)$, is mixed with all the trapping beams. In MOT2, the trapping beams are limited to about $2R=8$~mm in diameter in both experiments due to the cell dimensions and in order to reduce the scattered light.
In addition to these trapping lasers, the linearly polarized pushing--guiding beam, red-detuned from the MOT ($F \longrightarrow F+1$) transition ($F=4$ for Cs, $F=2$ for $^{87}$Rb) with maximum power of $P_0 = 63$~mW for Cs (resp. $P_0 = 21$~mW for Rb), is aligned vertically into the trap. The parameters used in both experiments are summarized in Table~\ref{tab:param}. In contrast with the similar setups reported in \cite{Wohlleben2001,Cacciapuoti2001}, the pushing lasers are not frequency-locked in our experiments. The detuning is chosen to optimize the transfer efficiency and is found to be such that the number of atoms in MOT1 is roughly reduced by a factor ten when the ``pushing--guiding beam'' is present. The beam is focused at position $z_0=-34$\,cm (resp. $z_0=-13$~cm) before MOT1 by a lens $f=2$\,m (resp. $f=1$~m). It is not perfectly Gaussian; however, the waist at position $z$ is still given by $w(z)=w_{0}\sqrt{1+(z-z_{0})^{2}/z^{2}_{R}}$, where $w_{0}=200~\mu$m (resp. $300~\mu$m) is the measured minimum waist and $z_R= 110$~mm is the estimated Rayleigh length (resp. $z_R=260$~mm, measured value for Rb). It diverges to a $1/e^2$-radius of $w_1=0.65$\,mm (resp. 0.33~mm) in MOT1 and about $1.7$\,mm (resp. 1.0~mm) in MOT2. The larger size of the beam at the position of MOT2 limits the perturbation of the trapping and cooling mechanisms.
\begin{table} \caption{Pushing beam parameters used in cesium and rubidium experiments (see text and Figure~\ref{fig:setup}). All distances are given in mm, the laser power $P$ is in mW. $w_0$, $w_1$ and $w_2$ are the pushing beam radii at $1/e^2$ at the focus, MOT1 and MOT2 positions, respectively.} \label{tab:param} \begin{tabular}{lllllllll} \hline\noalign{\smallskip}
exp. & $D$ & $f$ & $|z_0|$ & $w_0$ & $w_1$ & $w_2$ & $z_R$ & $P$ \\ \noalign{\smallskip}\hline\noalign{\smallskip} Cs & 570 & 2000 & 340 & 0.2 & 0.65 & 1.7 & 110 & $< 63$\\ Rb & 720 & 1000 & 130 & 0.3 & 0.33 & 1.0 & 260 & $< 21$\\ \noalign{\smallskip}\hline \end{tabular} \end{table}
\subsection{Flux from MOT1} Experimentally, the main features of the atomic beam are deduced from the loading characteristics of MOT1 and MOT2, where the number of atoms is determined using a calibrated photodiode monitoring the scattered MOT light. The main goal is to maximize the rate at which atoms are recaptured in the MOT2 region; this incoming flux is obviously related to the characteristics of MOT1.
The extraction process can be summarized as follows \cite{Wohlleben2001,Cacciapuoti2001}. In MOT1 hot atoms are first decelerated by the MOT radiation pressure, then slowly moved to the centre of the trap where they are extracted by the pushing laser. In addition to its pushing effect, the laser beam shifts the atomic levels by a few natural linewidths so that a transverse cooling of the atomic beam takes place during extraction, limiting the initial atomic temperature to about $25~\mu$K for Cs ($40~\mu$K for Rb). Moreover, the trapping forces are reduced and the pushing beam becomes dominant. Hence, atoms are extracted from the trap and accelerated in the direction of MOT2. After the transfer to the second chamber, the atoms are finally recaptured in MOT2 by radiation pressure.
In a first set of experiments, we study the flux of atoms extracted from the upper chamber. This outgoing flux depends on the number of captured atoms in MOT1, which is related to the background pressure of the alkali vapour. As there is no direct access to the background pressure value, we have measured the loading time of MOT1, which, in a large regime of operating parameters, is inversely proportional to the atomic pressure in the source cell. The number of atoms in a MOT in the stationary regime is~\cite{Steane92}
\begin{equation}
N=\frac{L}{\gamma+\gamma_{p}+\beta n} ,
\end{equation} where $L$ represents the loading rate of the MOT, $\gamma$ is the loss rate due to background collisions, $\gamma_p$ gives the loss rate induced by the pushing laser, $\beta$ is the rate of the cold two-body collisions between the trapped atoms, and $n$ is the average atomic density in the MOT. The density in MOT1 is limited to about $10^{10}$~atoms/cm$^3$, so that the term $\beta n$ is negligible in both setups.
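The relative weight of the loss terms is easy to check numerically. The sketch below evaluates the steady-state atom number; the loading rate and the two-body coefficient $\beta$ are assumed orders of magnitude for illustration, not values measured in these experiments:

```python
# Steady-state MOT atom number N = L / (gamma + gamma_p + beta*n).
# Illustrative numbers: L_rate and beta are assumed orders of magnitude.
L_rate  = 1e8      # loading rate [atoms/s] (assumed)
gamma   = 1.0      # background-collision loss rate [1/s] (tau ~ 1 s)
gamma_p = 0.0      # pushing-laser loss rate, zero when the beam is off
beta    = 1e-12    # two-body loss coefficient [cm^3/s] (assumed)
n       = 1e10     # average density [atoms/cm^3] (upper bound from the text)

N = L_rate / (gamma + gamma_p + beta * n)
print(f"beta*n / gamma = {beta * n / gamma:.2%}")  # two-body term ~1% of gamma
print(f"N = {N:.2e} atoms")
```

With a density capped at $10^{10}$~atoms/cm$^3$, the two-body term is indeed a small correction to the background loss rate.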
\begin{figure}
\caption{Dependence of the MOT2 parameters on the MOT1 loading time $\tau$ in the Cs experiment. $N_{1}$ is the number of atoms in MOT1 without pushing beam, $L_{\rm out}$ is the atomic flux, $L_2$ the loading rate of MOT2 and $N_{2}$ the number of recaptured atoms in MOT2.}
\label{fig:fluo2}
\end{figure}
Loading rates in both MOTs are given by the measured initial slope of the number of trapped atoms in the MOT versus time. In MOT2, this measurement is performed after suddenly switching on the pushing laser and waiting for the arrival of the first atoms. In MOT1, the loss rate $\gamma$ is inferred by measuring the $1/e$-loading time $\tau$ ($\gamma=1/\tau$ in a wide range of vapour pressure~\cite{Steane92}) or by dividing the loading rate $L_1$ by the number of atoms $N_1$ measured when the pushing beam is off. When the pushing laser is switched on, the loss processes in MOT1 increase drastically. If $N^p_1$ is the number of atoms in MOT1 in the presence of the pushing beam, then $L_{\rm out}=\gamma_p N^p_1$ is the flux of atoms leaking out of MOT1 through the optical guide. We deduce it from parameters we already measured via the formula: \begin{equation}
L_{\rm out}=L_1-\gamma N^p_1. \end{equation}
To get the data plotted on Figure~\ref{fig:fluo2}, $\tau$ is tuned by varying the background pressure. Whatever the background pressure, the number $N_{1}$ of atoms is approximately constant. At high background pressure ({\em i.e.} low values of $\tau$), the outgoing atomic flux $L_{\rm out}$ increases with the loading time $\tau$ because the number of atoms without pushing beam $N_{1}$ increases slightly with it. Then at relatively low pressure $L_{\rm out}$ decreases, following the behaviour of the MOT1 loading rate $L_1$ (inversely proportional to $\tau$). The loading rate $L_2$ of MOT2 and the number of atoms $N_{2}$ in MOT2 are presented as functions of $\tau$ on Figure~\ref{fig:fluo2}. Their dependence on the MOT1 loading time is similar to that of the atomic flux $L_{\rm out}$. The overall efficiency of the transfer process is defined by the incoming flux in MOT2 divided by the outgoing flux from MOT1, that is $L_2/L_{\rm out}$.
We conclude that a higher MOT2 loading rate requires a relatively high background pressure in MOT1 and a large laser power in the trapping beams (to obtain a higher value of $N_{1}$). For our experimental conditions the optimum is at a MOT1 loading time of about 1--2~s. The data presented here were not taken with optimized pushing--guiding beam parameters; the efficiency is here limited to typically 10\%. Once these parameters are well set, we are able to achieve a maximum transfer efficiency of about $70\%$ for Cs (resp. $50\%$ for $^{87}$Rb), without affecting the overall dependence of the different quantities on the MOT1 loading time.
\section{Pushing and guiding processes}
After leaving the MOT1 region, the atomic beam is no longer affected by the MOT1 lasers and is guided by the attractive dipolar force created by the red-detuned pushing--guiding beam. In this section, we describe the guiding process using an analytical model similar to that given in references \cite{1999OptCo.166..199P,2002NJPh....4...69W}. The total force applied on the atoms is the sum of a radiation pressure ``pushing force'' $ \vec F_{\rm push} $ and of a dipolar ``guiding force'' $ \vec F= - \vec \nabla U$, where $U$ is the guiding potential. The gravitational force plays a minor role in the loading process.
A two-level model, expressed as a function of the laser parameters (power, detuning and waist), qualitatively describes the experimental dependence of the transfer efficiency between the two MOTs. A more detailed quantitative analysis of the processes is proposed in the rest of this section.
\subsection{Two-level model} \label{subsec:2level}
In this first simple model we neglect gravity, the initial velocity and temperature of the atoms, and the divergence of the beam. We consider the atoms as two-level systems with a transition energy $\hbar \omega_0$, a natural linewidth $\Gamma$ ($\Gamma/2\pi = 5.2$~MHz for Cs, 5.9~MHz for Rb) and a saturation intensity $I_s = \frac{1}{6} \hbar c k^3 \frac{ \Gamma}{ 2 \pi} $ (1.1~mW/cm$^2$ for Cs, 1.6~mW/cm$^2$ for Rb). We use here $z$ as the vertical coordinate along the laser beam propagation with origin in the centre of MOT1 and $r$ for the radial cylindrical coordinate (see Figure~\ref{fig:setup}). For this two-level model, the waist $w$ of the pushing--guiding laser is assumed to be constant and equal to its experimental value at MOT1 position $z=0$. The laser beam has a power $P_0$, a wave vector $k = 2\pi/\lambda $, and an angular frequency $\omega$ detuned by $\delta = \omega - \omega_0$ with respect to the atomic transition.
The on-axis light shift is given by $U_{0} = \frac{\hbar \delta}{2} \ln( 1 + s ) $ where $s = (I/I_s)/(1 + 4 \delta^2/\Gamma^2)$ is the saturation parameter and $I= 2 P_0/(\pi w^2)$ is the peak laser intensity. As the laser is far detuned, saturation is always very low and one can simply replace $\ln(1+s)$ by $s$ in this expression. In this limit, the guiding potential is \begin{equation}
U(r) = U_{0} \, e^{-\frac{2r^{2}}{w^{2}}} \, . \end{equation} As the waist is considered constant, the guide does not affect the longitudinal motion. On the contrary, it is crucial for confining the transverse motion.
The atoms absorb and emit spontaneously photons at a rate \begin{equation}
\Gamma' = \frac{\Gamma}{2} \frac{ s } {1+s} = \frac{\Gamma}{2}\frac{I/I_s}{1+4\frac{\delta^2}{\Gamma^2}+I/I_s}, \end{equation}
which gives a pushing force \begin{equation}
F_{\rm push} = \Gamma' \hbar k = \Gamma' M v_{\rm rec}, \end{equation} where $v_{\rm rec}=\hbar k/M$ is the recoil velocity and $M$ the atomic mass. The velocity increases due to photon absorption, and the number of photons scattered to reach the position $z$ is approximately $v(z)/v_{\rm rec}=\sqrt{2\Gamma' z/v_{\rm rec}}$. The pushing process is also responsible for a heating due to random spontaneous emission in all directions. The mean horizontal kinetic plus potential energy per atom, $2 k_B T$, in the 2D confining potential is increased by two thirds of the recoil energy $E_{\rm rec} = M v_{\rm rec}^2/2 = k_B T_{\rm rec}/2$ at each scattering event~\cite{Grimm99,2002NJPh....4...69W}. This gives rise to a horizontal kinetic temperature \begin{equation} T_h(z)= \frac{v(z)}{v_{\rm rec}} \frac{T_{\rm rec}}{6}. \label{eqn:T_horizontal} \end{equation}
To have an efficient pushing--guiding beam we require in this simple two--level approach that the atoms remain trapped in two dimensions inside the guide during the whole transfer. This condition is \begin{equation}
2 k_{\rm B} T_h(z) < |U_0| \mbox{ for all }z. \label{eqn:trapped} \end{equation}
As the horizontal velocity spread increases with $z$, this is equivalent to $2 k_{\rm B} T_h(D) < |U_0|$. A second constraint is that the beam velocity at the MOT2 position ($v_{\rm beam}$) should be lower than the capture velocity ($v_{\rm capture}$) of the MOT \begin{equation} v_{\rm beam} < v_{\rm capture}. \label{eqn:velocity} \end{equation} The value of $v_{\rm capture}$ is of the order of the maximal velocity for which an atom can be stopped over the MOT beam diameter $2R$, that is $v_{\rm capture}=\sqrt{\Gamma R v_{\rm rec}}$~\cite{Steane92}. As a result, we evaluate $v_{\rm capture}$ to be about 21~m/s for cesium and 30~m/s for rubidium.
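These estimates are straightforward to reproduce. The sketch below evaluates $v_{\rm capture}=\sqrt{\Gamma R v_{\rm rec}}$ with the linewidths quoted above; the Rb recoil velocity, not quoted in the text, is taken at its standard value of about $5.9$~mm/s:

```python
import math

# Capture velocity estimate v_capture = sqrt(Gamma * R * v_rec).
R = 4e-3  # MOT2 trapping beam radius [m] (2R = 8 mm)

def v_capture(gamma_hz, v_rec):
    gamma = 2 * math.pi * gamma_hz   # natural linewidth [rad/s]
    return math.sqrt(gamma * R * v_rec)

v_cs = v_capture(5.2e6, 3.5e-3)   # cesium, v_rec = 3.5 mm/s (quoted)
v_rb = v_capture(5.9e6, 5.9e-3)   # rubidium, v_rec assumed ~5.9 mm/s
print(f"Cs: {v_cs:.0f} m/s, Rb: {v_rb:.0f} m/s")  # ~21 m/s and ~30 m/s
```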
The efficiency of the pushing--guiding process is determined by how well conditions~(\ref{eqn:trapped}) and~(\ref{eqn:velocity}) are satisfied. To relate the guiding efficiency qualitatively to these conditions, we represent each condition by a function $f$, equal to zero when the inequality is strongly violated and to 1 when it is fully satisfied, with a continuous transition between these two extremes. The guiding efficiency is then described by the product $f(\frac{2 k_{B}T_{h}(D)}{\left|U_{0}\right|}) \times f(\frac{v_{\rm beam}}{v_{\rm capture}})$ of the two conditional functions. The result is given for Cs in Figure~\ref{fig:pot_2_niv} as a function of the laser detuning, with $v_{\rm capture}=21$~m/s and the function $f$ chosen arbitrarily to be $f(x)=\frac{1}{1+x^{10}}$.
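The behaviour of the arbitrary cross-over function and of the efficiency product can be written compactly (a sketch of the recipe above, not the code used for Figure~\ref{fig:pot_2_niv}):

```python
# Smooth "condition" function f(x) = 1/(1+x^10): close to 1 when the
# inequality x < 1 is well satisfied, close to 0 when strongly violated.
def f(x):
    return 1.0 / (1.0 + x**10)

# Qualitative guiding efficiency: both conditions must hold simultaneously.
def efficiency(heating_ratio, velocity_ratio):
    return f(heating_ratio) * f(velocity_ratio)

print(f(0.5), f(2.0))        # sharp but continuous cross-over around x = 1
print(efficiency(0.8, 0.9))  # both ratios below 1 -> efficiency near 1
```

The steep power makes $f$ act almost as a step function at $x=1$ while remaining differentiable, which is all that is needed for a qualitative comparison with the data.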
A comparison of the two-level model with experimental results (see Figure~\ref{fig:detuning}, left) shows good qualitative agreement, reproducing the presence of an optimal red detuning at a given laser power. The maximum transfer efficiency increases with the power of the pushing beam, while the position of the peak is shifted to larger absolute values of the detuning. This simple model is sufficient to derive the main conclusion: the transfer is more efficient with a far red-detuned and intense laser beam. However, the theory predicts a peak further from resonance than observed experimentally. Moreover, the sensitivity to the laser power is much more pronounced than observed in the experiment. This motivates a more detailed analysis of the processes at work during the travel of the atoms from MOT1 to MOT2. In particular, the effect of optical pumping to the lower hyperfine state has to be considered.
\begin{figure}
\caption{Efficiency (see text) of the pushing--guiding processes versus laser detuning $\delta$ for a $650\,\mu$m waist laser beam with different laser power 10~mW (dashed line $\times 1000$) and 46~mW (solid line). The other parameters are: initial temperature $T_0 = 25\,\mu$K,
$I_{\rm sat}=1.1\,$mW/cm$^2$, $\Gamma = 2\pi \times 5.2\,$MHz, $v_{\rm rec} = 3.5\,$mm/s, $T_{\rm rec} = 0.2\,\mu$K and the two MOT cells are separated by $57\,$cm (the values used are those of the Cs experiment).}
\label{fig:pot_2_niv}
\end{figure}
\subsection{Optical pumping} \label{sec:OptPump}
The absorbed photons can lead to optical pumping between the two hyperfine levels of the ground state, which have different detunings with respect to the pushing laser. Indeed, very quickly after leaving the MOT1 region, the atoms are pumped essentially into the lower ground state $F=3$ for cesium (resp. $F=1$ for rubidium), as there is no repumping laser light superimposed on the pushing laser beam. This optical pumping is essential for a good transfer efficiency, as it greatly reduces the final velocity of the atomic beam (see sections \ref{subsec:TransferEff} and \ref{sec:BeamVelocity}). However, a small population in the other ground state is still present, typically 1 to 3 percent for a linearly polarized beam, as we shall see~\cite{note1}. As the radiation pressure is much larger for atoms in the upper ground state (about 100 times larger for detuning values discussed here), even this small fraction plays a role and both ground state populations have to be taken into account for the estimation of the pushing force. On the contrary, the dipolar force may be estimated by assuming that the atoms are only in the lower ground state, as this force is only about 10 times smaller than in the upper ground state, which is 100 times less populated.
An estimate of the populations in the ground states is obtained by assuming an equal detuning for the transitions from the upper hyperfine ground state to all the hyperfine excited states. We define an ``effective'' detuning $\bar{\delta} \approx \delta + \Delta'_{\rm HFS}/2$, where $\Delta'_{\rm HFS}$ is the total width of the hyperfine structure in the excited state ($\Delta'_{\rm HFS}\simeq 600$~MHz for Cs and $\Delta'_{\rm HFS} \simeq 500$~MHz for $^{87}$Rb). Using this mean detuning $\bar{\delta}$ we calculate the pumping rates between the two hyperfine ground states. This approximation is fairly good for large detunings (above 1~GHz from the cycling transition). To illustrate our results we choose the following typical values: $\delta/2 \pi =- 2$~GHz from the (F=4$\to$F'=5) transition of Cs (\textit{i.e.} $\bar{\delta}/2 \pi = - 1.70$~GHz) and $\delta/2 \pi = - 1$~GHz (\textit{i.e.} $\bar{\delta}/2 \pi = - 750$~MHz) from the (F=2$\to$F'=3) transition of $^{87}$Rb. We also define $\Delta_{\rm HFS}$ as the hyperfine structure interval in the ground state ($2\pi \times 9.2$~GHz for Cs, $2\pi \times 6.8$~GHz for $^{87}$Rb) (see~\cite{D.Steck}).
The ratio of populations in the upper hyperfine ground state $N_{F+1}$ and in the lower one $N_F$ may then be estimated as: \begin{equation} \eta = \frac{N_{F+1}}{N_{F}+ N_{F+1}} \approx \frac{N_{F+1}}{N_{F}} = \alpha \left( \frac{ \bar{\delta} }{ \bar{\delta} - \Delta_{\rm HFS} } \right)^2 \end{equation}
with $\alpha = \displaystyle \frac{2F+3}{2F+1} = \left\{ \begin{array}{l} 9/7 \, \mbox{ for Cs } (F=3)\\ 5/3 \, \mbox{ for $^{87}$Rb } (F=1) \end{array} \right.$.
The factor $\alpha$ is simply the ratio between the numbers of substates in the $F+1$ and $F$ ground states, to which $N_{F+1}/N_F$ should be equal at detunings large compared to the hyperfine splitting $\Delta_{\rm HFS}$; the term involving the detuning is the ratio of the excitation rates from the two hyperfine ground states. The formula leads to $\eta = 3.2$~\% of the atoms in the Cs($6s, F=4$) state and $\eta = 1.6$~\% in the Rb($5s, F=2$) state. These values are in excellent agreement with a full calculation taking into account all the different detunings with respect to the hyperfine excited states.
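The estimate of $\eta$ is easy to reproduce; the following sketch evaluates the formula with the detunings and splittings quoted above (in GHz), yielding $\approx 3$\% for Cs and $1.6$\% for Rb:

```python
# Fraction of atoms left in the upper hyperfine ground state:
# eta ~ alpha * (delta_bar / (delta_bar - Delta_HFS))^2.
def eta(alpha, delta_bar, delta_hfs):
    return alpha * (delta_bar / (delta_bar - delta_hfs))**2

eta_cs = eta(9/7, -1.70, 9.2)   # Cs: F=3 -> alpha = 9/7, delta_bar = -1.70 GHz
eta_rb = eta(5/3, -0.75, 6.8)   # Rb: F=1 -> alpha = 5/3, delta_bar = -750 MHz
print(f"Cs: {eta_cs:.1%}, Rb: {eta_rb:.1%}")  # ~3% and ~1.6%
```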
\subsection{Pushing force} \label{sec: Pushing force}
Another factor should be considered: the laser mode shape. Indeed, a relatively strong divergence is needed in order to both efficiently push and guide atoms in the MOT1 region and not affect the MOT2 operation. The guiding beam waist varies with position, according to \begin{equation} w(z)=w_{0}\sqrt{1+(z-z_{0})^{2}/z^{2}_{R}} \, . \end{equation} The depth $U_0$ is then modified along the atomic trajectory due to the change in the laser intensity and, taking into account the results of the previous section, the pushing force in the centre of the beam may be estimated as follows: \begin{eqnarray}
F_{\rm push}(z) &=& \frac{\Gamma}{2} \hbar k \bar{s}(z) \left( (1 - \eta) + \eta \left( \frac{ \bar{\delta} - \Delta_{\rm HFS} }{ \bar{\delta} } \right)^2 \right) \nonumber \\
& & \simeq \frac{\Gamma}{2} \hbar k \bar{s}(z) (1 + \alpha), \end{eqnarray} where $\bar{s}(z)$ is the saturation parameter calculated for the lower ground state, at detuning $\bar{\delta} - \Delta_{\rm HFS}$. We take into account the linear polarization of the pushing beam by multiplying $\bar{s}$ by a factor 2/3 in all calculations. We neglect, however, the small change in $\bar{\delta}$ due to the light shift, which is further reduced when the waist $w(z)$ becomes larger.
The mean pushing force is reduced due to the Gauss\-ian transverse profile of the pushing beam and the finite size of the atomic cloud. This may be taken into account approximately by dividing the force by a factor 2~\cite{2002NJPh....4...69W}. Note that this underestimates the initial pushing force, when the atoms are still well guided (rms radius less than $w(z)/2$), and overestimates it when the cloud size becomes larger than half the waist. As the mean pushing force now depends only on $z$, it may be written as the derivative of a ``pushing potential'' $U_{\rm push}$: \begin{equation} U_{\rm push}(z) = \frac{\Gamma}{2} \hbar k s_0 z_R (1 + \alpha) \, \arctan \frac{z-z_0}{z_R} \, , \end{equation} where $s_0=\bar{s}(z_0)$. The velocity at each point is then easily calculated by energy conservation (the $z$ axis is oriented downwards):
\begin{eqnarray} \lefteqn{ v(z)= \Big[ v_0^2 + 2 g z + } \label{eqn:vel} \\
& & \left. \Gamma v_{\rm rec} s_0 z_R (1 + \alpha) \, \left( \arctan \frac{z-z_0}{z_R} + \arctan \frac{z_0}{z_R} \right) \right]^{1/2} , \nonumber \end{eqnarray} where $v_0$ is the input velocity in the guide. The effect of gravity is not dominant, but was taken into account through the $2 g z$ term. $v_0$ can be estimated as the output velocity of the MOT1 region. We have calculated it using formula~(\ref{eqn:vel}) assuming that the atoms in the MOT1 region have a zero initial velocity and are in the upper hyperfine ground state due to the presence of the repumping light ($\eta=1$). For instance, using a travel distance $z$ roughly equal to the MOT1 region radius (10~mm) and a laser power of 21~mW, we find that atoms enter the guide with a velocity $v_0\approx9$~m/s for Rb; for the Cs parameters, we obtain in the same way $v_0\approx3.1$~m/s. From Equation~(\ref{eqn:vel}), we also infer the traveling time as $\Delta t = \int_0^D \frac{d z}{v(z)} $.
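As a consistency check, the Rb exit velocity can be evaluated from the upper-state scattering rate (a sketch; the Rb recoil velocity is an assumed standard value, the 2/3 factor is the polarization correction and the 1/2 factor the transverse averaging of the force, both discussed above):

```python
import math

# Rubidium numbers quoted in the text; v_rec is an assumed standard value.
Gamma = 2 * math.pi * 5.9e6    # natural linewidth [rad/s]
I_sat = 16.0                   # saturation intensity [W/m^2] (1.6 mW/cm^2)
v_rec = 5.9e-3                 # recoil velocity [m/s] (assumed)
P     = 21e-3                  # pushing-beam power [W]
w1    = 0.33e-3                # beam radius at MOT1 [m]
delta = -2 * math.pi * 750e6   # effective detuning delta_bar, eta = 1 [rad/s]
g, z  = 9.81, 10e-3            # gravity; MOT1 region radius [m]

I  = 2 * P / (math.pi * w1**2)                          # peak intensity
s  = (2/3) * (I / I_sat) / (1 + 4 * (delta / Gamma)**2) # saturation parameter
a  = 0.5 * (Gamma / 2) * s * v_rec + g                  # mean acceleration
v0 = math.sqrt(2 * a * z)
print(f"v0 = {v0:.1f} m/s")   # ~9 m/s, as quoted for Rb
```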
\begin{figure*}
\caption{Evolution for the $^{87}$Rb experiment of different parameters with the traveling distance $z$ between the two MOTs, at different powers: 15~mW (dashed lines) and 21 mW (solid lines); the initial temperature at the guide entrance was set to $T_0=40~\mu$K. The point $z_{\rm out}$ where the atoms leave the guide is marked by a vertical line. Left: mean horizontal energy $2 k_B T_h$ (thin lines, Equation~\ref{eqn:Th}) and trap depth $|U_0|$ (bold lines, Equation~\ref{eqn:depth}). Right: rms radius of the guided atomic beam. The radius of the MOT2 beams is marked by a horizontal line.}
\label{fig:sizeRb}
\end{figure*}
\subsection{Guiding condition} \label{subsec: Guiding potential}
As previously discussed, the light shift of the lower ground state is dominant in our case. The atoms leaving MOT1 are thus guided by the on-axis light shift potential given by
\begin{equation}
U_{0} (z) = \frac{\hbar (\bar{\delta} - \Delta_{\rm HFS})}{2} \, \bar{s}(z).
\label{eqn:depth} \end{equation}
Equation~(\ref{eqn:trapped}) is still the strongest constraint for the choice of the parameters and becomes more and more difficult to fulfil as $z$ increases, because $|U_{0}|$ is reduced by the beam divergence. The horizontal kinetic temperature $T_h(z)$ evolves due to two competing effects: photon scattering~\cite{note2} is responsible for an increase of $T_h$, while adiabatic cooling tends to lower it as the waist increases. The adiabaticity condition $|d\omega_p/dt| \ll \omega_p^2$, where $\omega_p$ is the transverse oscillation frequency of the guide, is well fulfilled in both experiments except when the atoms move in the non-harmonic part of the potential. This breakdown of adiabaticity occurs only when the atoms are close to leaving the guide. It only marginally affects the guiding condition and will not be taken into account here. $\omega_p$ varies as the inverse squared waist, and one has $\omega_p(z)=\omega_p(0) w^2(0)/w^2(z)=\omega_p(0)\frac{z_0^2 + z_R^2}{(z-z_0)^2 + z_R^2}$. To obtain an expression for $T_h(z)$, valid while the atoms remain guided, we write the change in $T_h$ for a small change $\delta z$ in $z$. As the phase space density is conserved during this adiabatic cooling, the cooling contribution is proportional to the inverse squared waist. Spontaneous scattering is responsible for a supplementary heating term, proportional to the number of photons scattered during $\delta t=\delta z/v$: \begin{equation}
T_h(z+\delta z) = T_h(z) \frac{w^2(z)}{w^2(z+\delta z)} + \frac{\Gamma}{2} \bar{s}(z) \frac{T_{\rm rec}}{6} \frac{\delta z}{v(z)} \, . \end{equation} The temperature increase is $T_{\rm rec}/6$ for each spontaneous scattering event. $\bar{s}(z)$ is proportional to $1/w(z)^2$, just like the oscillation frequency. Using the dependence in $w(z)$, we obtain the following differential equation for $T_h$: \begin{equation}
\frac{dT_h}{dz} = - T_h(z) \frac{2}{w(z)}\frac{dw}{d z} + \frac{T_{\rm rec}}{6}\frac{w^2(0)}{w^2(z)} \frac{\Gamma}{2} \bar{s}(0) \frac{1}{v(z)} \, . \end{equation} Using the explicit form of $w(z)$, the solution of this equation reads: \begin{equation} T_h(z) = \frac{z_0^2 + z_R^2}{(z-z_0)^2 + z_R^2} \left[ T_0 + \frac{T_{\rm rec}}{6} \frac{\Gamma}{2} \bar{s}(0) \int_0^z\frac{dz'}{v(z')} \right] \label{eqn:Th} \end{equation} where $T_0$ is the initial temperature at the guide entrance. The integral in the last term is the time needed for an atom to travel to position $z$. In the range of parameters explored in our experiments, the combination of these two effects makes $T_h$ decrease with $z$, but more slowly than the trap depth. As can be seen on Figure~\ref{fig:sizeRb} (left), the mean horizontal energy $2 k_B T_h$ becomes larger than the trap depth at some position $z_{\rm out}$ before reaching MOT2. However, as will be discussed below, this partial guiding is sufficient to limit the size of the atom cloud to below the MOT2 beam diameter.
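Equation~(\ref{eqn:Th}) can be checked against a direct numerical integration of the differential equation. The sketch below does so for a constant velocity $v$, for which the integral reduces to $z/v$; the heating constant $K=\frac{T_{\rm rec}}{6}\frac{\Gamma}{2}\bar{s}(0)$ is given an arbitrary illustrative value:

```python
import math

z0, zR   = -0.13, 0.26      # focus position and Rayleigh length [m] (Rb values)
T0, K, v = 40e-6, 1e-3, 10.0  # T_h at z=0 [K]; K [K/s] (assumed); speed [m/s]

def w(z):   # waist profile (arbitrary units: only ratios enter the equation)
    return math.sqrt(1 + ((z - z0) / zR)**2)

def dwdz(z):
    return (z - z0) / (zR**2 * w(z))

# Euler integration of dT/dz = -T*(2/w)*dw/dz + K*(w(0)^2/w(z)^2)/v
T, z, dz = T0, 0.0, 1e-5
while z < 0.7:
    T += dz * (-T * 2 * dwdz(z) / w(z) + K * (w(0) / w(z))**2 / v)
    z += dz

T_analytic = (w(0) / w(z))**2 * (T0 + K * z / v)  # Equation (Th), constant v
print(T, T_analytic)   # the two agree to better than a percent
```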
\subsection{Recaptured atoms} \label{subsec:Recaptured}
For a good transfer efficiency, two main criteria have to be fulfilled. First, the atomic beam should stay roughly collimated over a distance long enough to pass through the differential tube, and the transverse cloud radius at the end should be comparable to the capture radius of MOT2. This means that even if they leave the guide before reaching MOT2, the atoms can still be recaptured. Second, the final longitudinal velocity of the atomic beam must not exceed the capture velocity of MOT2. As the atomic beam velocity is in any case lower than the capture velocity of MOT2, the recapture is mostly limited by the matching between the atomic beam size and the size of the capture region of MOT2.
The capture size of MOT2 is limited by the radius $R=4$~mm of the collimated trapping laser beams. According to the previous considerations about heating of the guided atoms (see~\ref{subsec: Guiding potential}), the mean horizontal energy of the cloud is lower than the guiding trap depth over a distance $z_{\rm out} = 38$~cm for a laser power of 63~mW and an initial temperature $T_0=25~\mu$K in the case of Cs (resp. $z_{\rm out} = 28.5$~cm with $T_0=40~\mu$K and a laser power of 21~mW in the case of Rb) (see Figure~\ref{fig:sizeRb}). For simplicity we consider hereafter that all the atoms remain pushed and guided up to that point and then undergo a free ballistic expansion as they keep falling. Including this assumption in our model, we can evaluate the size of the atomic cloud $\Delta r_f$ as it reaches MOT2. While the atoms remain trapped, the cloud size is of the order of $\omega^{-1}_p(z) \sqrt{k_B T_h/m}$. The guiding step ends when $k_B T_h$ reaches $|U_0(z)|/2$, such that the rms size at the guide output is $\Delta r_{\rm out} = \omega^{-1}_p(z)\sqrt{|U_0(z)|/2m} = w(z)/\sqrt{8}$, that is $\Delta r_{\rm out} = 470~\mu$m for Cs (resp. $\Delta r_{\rm out} = 200~\mu$m for Rb). We assume a fixed temperature for the falling atoms, as the adiabatic cooling is not efficient for a non-trapped cloud and the heating rate is also very low after $z_{\rm out}$. $T_h$ is about $10~\mu$K for Cs and $25~\mu$K for Rb. After the remaining falling time of 36~ms (resp. 36~ms for Rb) the atomic beam has a typical standard deviation for the transverse Gaussian atomic density distribution of $\Delta r_f \approx 1~$mm for Cs and $\Delta r_f \approx 1.75~$mm for Rb, smaller than the MOT2 radius, meaning that almost all the atoms are recaptured in MOT2 for both experiments. Note that this model allows one to predict $\Delta r(z)$ at any position $z$, as shown on Figure~\ref{fig:sizeRb}, right.
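For a Gaussian cloud, a natural model (our assumption, consistent with the free ballistic expansion described above) is to add the guide-output size and the thermal spread in quadrature, $\Delta r_f = [\Delta r_{\rm out}^2 + (k_B T_h/M)\,t^2]^{1/2}$. A sketch reproducing the quoted final radii:

```python
import math

kB = 1.380649e-23   # Boltzmann constant [J/K]
u  = 1.66054e-27    # atomic mass unit [kg]

def final_radius(dr_out, T_h, mass, t_fall):
    """Rms cloud radius after ballistic expansion from the guide output."""
    sigma_v = math.sqrt(kB * T_h / mass)          # thermal velocity spread
    return math.sqrt(dr_out**2 + (sigma_v * t_fall)**2)

# Values quoted in the text: output radii, temperatures, 36 ms fall.
dr_cs = final_radius(470e-6, 10e-6, 133 * u, 36e-3)
dr_rb = final_radius(200e-6, 25e-6,  87 * u, 36e-3)
print(f"Cs: {dr_cs*1e3:.2f} mm, Rb: {dr_rb*1e3:.2f} mm")  # ~1 mm, ~1.8 mm
```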
\subsection{Transfer efficiency} \label{subsec:TransferEff}
\begin{figure}
\caption{(Color online) Efficiency (see text) of the pushing--guiding processes versus laser detuning $\delta$, calculated for $^{87}$Rb for the parameter of the pushing beam given in text, with different laser power 10 mW (thin solid line, black), 15 mW (dashed line, red) and 21~mW (thick solid line, blue). The maximal capture velocity in MOT2 has been fixed to $v_{\rm capture} = 30$~m/s, the initial temperature to $T_0=40~\mu$K and the MOT2 beam radius $R$ to 4~mm. The corresponding experimental values are shown on Figure~\ref{fig:detuning}, right.}
\label{fig:efficiency}
\end{figure}
\begin{figure*}
\caption{Dependence of the number of recaptured atoms $N_{2}$ in MOT2 on the pushing beam power. Left: Cs data. The experiment is done at four different frequencies (see the diagram in the centre). Right: $^{87}$Rb data, recorded at -1~GHz detuning from the $5S_{1/2} (F=2) \rightarrow 5P_{3/2} (F^{'}=3)$ transition.}
\label{fig:power@detuning}
\end{figure*}
We come back now to an estimation of the transfer efficiency as discussed in section~\ref{subsec:2level} and presented in Figure~\ref{fig:pot_2_niv}. Within the framework of the refined model presented above, we are able to compute a transfer efficiency in the same spirit. As we have seen in the previous section, the guiding is not required until the end for the whole cloud to be recaptured. We thus retain the following two conditions: (i) the arrival velocity has to be smaller than $v_{\rm capture}$ and (ii) the cloud size must be smaller than the MOT2 beam radius. We then calculate the efficiency $f[\Delta r(D)/R] \times f[v(D)/v_{\rm capture}]$, with the function $f$ previously used in section~\ref{subsec:2level}, and plot it on Figure~\ref{fig:efficiency}. The model predicts a good efficiency in a detuning range between -0.5~GHz and -1.6~GHz, the width of the large-efficiency region shrinking at smaller laser power. These predictions have to be compared with the rubidium experimental data of Figure~\ref{fig:detuning}, right. The agreement is qualitatively good, and reproduces the main features. The two limits of the large-efficiency region have different origins: On the large detuning side, the efficiency drops due to an increase in the atomic cloud size, as the guiding potential is weaker. On the lower detuning side, close to resonance, the efficiency becomes limited by the final velocity, which is larger than the capture velocity of MOT2. On this side, the theory fails to predict the lower measured efficiency at lower laser power, as the mean-detuning $\bar{\delta}$ approximation (section~\ref{sec:OptPump}) is no longer valid. In particular, the efficiency should drop to zero at the resonance with the rubidium $5S_{1/2} (F=2) \rightarrow 5P_{3/2} (F^{'}=1)$ line, situated at $\delta/2\pi = -424$~MHz and marked with a vertical line on Figure~\ref{fig:efficiency}.
\section{Experimental results}
In this section, we present the experimental study of the guiding process and compare it with the above theoretical model. The dependence of the recaptured atom number on the pushing beam parameters is first investigated. We then measure the mean atomic velocity and the traveling time. Throughout the experimental investigation the vapour pressure in MOT1 is kept constant.
\subsection{Pushing Beam Parameters}
The parameters of the pushing beam that we have experimentally optimized are its divergence, waist, power and detuning.
\paragraph{Divergence and waist} \label{sec:DivergenceAndWaist}
In order to optimize the atomic beam characteristics we have first investigated the role of the laser beam waist, which is related to the divergence of the pushing beam and to the pushing force. It is clear that the pushing--guiding beam should diverge, to have a significant effect on MOT1 without disturbing MOT2. Moreover, this divergence provides a horizontal adiabatic cooling of the guided atoms. We have used three different lenses ($f$= 0.75~m ; 1~m ; 2~m) to focus the pushing beam. For each lens the transfer efficiency is studied as a function of the distance of the focus from MOT1. The position of the lens is more critical than its focal length. The optimum is obtained with a lens $f= 2$~m for the Cs experiment (resp. $f= 1$~m for $^{87}$Rb) at a distance of $34$~cm (resp. $\approx 13$~cm) from MOT1, where the beam diameter in the MOT1 region is $\approx 1.3$~mm (resp. $\approx 0.6$~mm). The measured waist at the focal point is $200~\mu$m (resp. $300~\mu$m). It leads to a divergence $w_0/z_R$ of about 2~mrad (resp. 1~mrad).
In conclusion, we found that the best transfer efficiency occurs when the pushing beam, focused before MOT1, has a diameter smaller than 1~mm in MOT1 and a divergence such that the beam diameter at the MOT2 position is less than 3~mm. In this sense our results are similar to those found in references \cite{Wohlleben2001,Cacciapuoti2001}.
\paragraph{Power and Detuning}
\begin{figure*}
\caption{(Color online) Number of atoms recaptured in MOT2 $N_2$ vs. pushing beam detuning for different optical powers. The vertical lines indicate the position of hyperfine resonance frequencies. Left: Cs data. The detuning is given with respect to the $6S_{1/2}(F=4)\rightarrow 6P_{3/2}(F'=5)$ transition. The pushing beam power is 46~mW (squares), 10~mW (stars) or 2~mW (circles). Right: $^{87}$Rb data. The frequency is measured relatively to the $5S_{1/2} (F=2) \rightarrow 5P_{3/2} (F^{'}=3)$ cycling transition. Pushing beam power: 10~mW (squares), 15~mW (circles), and 21~mW (triangles).}
\label{fig:detuning}
\end{figure*}
The number of atoms recaptured in MOT2 at different laser powers of the pushing beam and at different detunings is shown for Cs and $^{87}$Rb resp. on the left and right of Figure~\ref{fig:power@detuning}.
It is first obvious that the best experimental conditions are achieved with a laser frequency red-detuned with respect to all atomic transitions (curve (a) in Figure~\ref{fig:power@detuning}, left). The transfer efficiency is larger for a red-detuned laser frequency than for the other laser frequencies because, after leaving the MOT1 area, the atoms also experience the pushing light as a guide. For such detunings, the atomic flux as well as the number of recaptured atoms $N_{2}$ in MOT2 increase with the power of the pushing light, and saturate at large power when all the atoms are efficiently guided to MOT2 (see also Figure~\ref{fig:power@detuning}, right). At a given detuning, an increase of the laser power leads to a decrease of the transfer efficiency, due to an excessive final velocity, a strong perturbation of both MOTs, and a large heating of the atoms.
In order to optimize the conditions for the atomic beam, the influence of the detuning of the pushing light was investigated in more detail (see Figure~\ref{fig:detuning}, left and right for Cs and $^{87}$Rb respectively). For a frequency close to resonance (corresponding to the best conditions found in references~\cite{Wohlleben2001,Cacciapuoti2001}), the number of atoms recaptured in MOT2 is much smaller than what we could achieve with a much more red-detuned light and a higher power. In conclusion, we find that the best loading of MOT2 is obtained at the highest possible power of the pushing laser beam and, given this power, at the red detuning that optimizes the flux.
\subsection{Atomic beam velocity} \label{sec:BeamVelocity} \begin{figure}
\caption{(Color online) Experimental (points) and theoretical results (solid lines) for the traveling time $\Delta t$ between MOT1 and MOT2 for different pushing beam powers. The beam is red-detuned by 1 GHz from the cycling transition of $^{87}$Rb, $5S_{1/2} (F=2) \rightarrow 5P_{3/2} (F^{'}=3)$. The theoretical calculations are done for both the two-level model approximation (blue lower curve) and for the more detailed model described above (red upper curve). In the calculations the radius of MOT1 trapping region is 10~mm. }
\label{fig:timedelay}
\end{figure}
For a high recapture efficiency, a relatively slow and collimated atomic beam is required (see section~\ref{subsec:Recaptured}). After the pushing and guiding process, the atoms reach MOT2 within a time delay $\Delta t$. This time has been measured in two different ways. First, one can record the MOT2 fluorescence after having suddenly removed the atoms in MOT1 (the MOT1 laser beams are stopped by a mechanical shutter). In this case, one observes the delay after which the number of atoms in MOT2 starts to drop. The second method consists in pulsing the pushing beam through a permanently loaded MOT1. Both methods lead to the same result, $\Delta t \approx 130$~ms, for the Cs experiment at 63~mW power. In the Rb experiment presented in Figure~\ref{fig:timedelay}, the measured time delay as a function of the pushing beam power is obtained using the second method. A similar dependence on the pushing beam power is observed in the cesium experiment. The two--level model is not sufficient to describe the atomic beam velocity accurately, the predicted transfer time $\Delta t$ being far too short (see Figure~\ref{fig:timedelay}, lower curve). On the contrary, the theoretical model presented in section \ref{sec: Pushing force}, Equation~(\ref{eqn:vel}), describes the experimental results well, as demonstrated in Figure~\ref{fig:timedelay}. From the model, we also deduce the final longitudinal velocity of the atomic beam, $v \approx 5.5$~m/s for Cs (12.6~m/s for Rb). Note that this final velocity is not very different from the mean velocity $D/\Delta t$, as the acceleration stage takes place essentially in the MOT1 zone, where the atoms remain in the $F+1$ state thanks to the repumping MOT beams.
\section{Conclusion}
In our work we have studied a very efficient setup to transfer cold atoms from a first MOT to a second one. Our setups have a similar geometry to the ones described in references~\cite{Wohlleben2001,Cacciapuoti2001}, but due to the higher laser power (tens of mW) we could achieve a partial dipolar guide for the atoms at a larger detuning (1~GHz typically). As a result, the mean longitudinal velocity of the atomic beam is lower (4.3-12 m/s) than in these previous experiments (15~m/s). Moreover, thanks to the lower sensitivity of the method to the frequency of the pushing laser, its frequency does not need to be locked (see for instance the detuning dependence in Figure~\ref{fig:detuning}, right) and the setup is much more robust to small misalignments of the pushing beam. The atomic flux is limited only by the number of atoms loaded into MOT1. We estimate the transfer efficiency to MOT2 to be about $70~\%$ for the $^{133}$Cs experiment and about $50~\%$ for the $^{87}$Rb experiment.
We used a two-level system model to describe the pro\-cesses during the atomic transfer. Good qualitative agreement between theory and experiment was found. The transfer efficiency is maximum for a large red detuning, and this maximum efficiency increases with the laser power. A more detailed discussion of the pushing, guiding and recapture processes is presented for a better understanding of the atomic transfer between the two traps. Our theoretical description, which takes into account the optical pumping, the pushing force and the guiding potential, nicely reproduces the experimentally observed traveling time.
In conclusion, we experimentally described and theoretically modelled a method to transfer cold atoms between two traps. Two different setups lead qualitatively to the same optimized parameters -- a large laser power (tens of mW), $\approx$ 1~GHz detuning, $300~\mu$m waist. The implementation of this technique in our setups brought in both cases much better stability and improved loading efficiency compared with the use of a near-resonant laser beam.
\end{document} |
\begin{document}
\input epsf
\def\R{{\mathbb R}}
\def\C{{\mathbb C}}
\newcounter{sec}
\renewcommand{\theequation}{\arabic{sec}.\arabic{equation}}
\newcounter{punct}[sec]
\begin{center} {\Large\bf
Perturbations of Jacobi polynomials and piece-wise hypergeometric orthogonal systems }
\large
\sc Neretin Yu.A. \footnote{Supported by grant NWO--047.017.015}
\end{center}
{\small We construct a family of noncomplete orthogonal systems of functions on the ray $[0,\infty)$; these systems depend on 3 real parameters $\alpha$, $\beta$, $\theta$. The elements of a system are piece-wise hypergeometric functions having a singularity at $x=1$. For $\theta=0$ these functions vanish on $[1,\infty)$ and our system reduces to the Jacobi polynomials $P_n^{\alpha,\beta}$ on the segment $[0,1]$. In the general case, our functions can be considered as an interpretation of $P_{n+\theta}^{\alpha,\beta}$. Our functions are solutions of an exotic Sturm--Liouville boundary problem for the hypergeometric differential operator. We find the spectral measure for this problem. }
{\bf \large 1. Formulation of result}
\addtocounter{sec}{1}
Results of the paper are formulated in Subsections 1.1--\ref{l-expansion}. Next, in \ref{discussion-1}--\ref{attempt}, we discuss existing and hypothetical relations of our phenomenon to some other mathematical topics.
{\bf \refstepcounter{punct}{\arabic{sec}.\arabic{punct}. } Jacobi polynomials. Preliminaries. \label{l-jacobi}} Recall that the Jacobi polynomials $P_n^{\alpha,\beta}$ are the polynomials on the segment $[-1,1]$ orthogonal with respect to the inner product \begin{equation} \langle f,g\rangle= \int_{-1}^1 f(y)\overline{g(y)}(1-y)^{\alpha}(1+y)^{\beta}\,dy ,\qquad \alpha>-1,\beta>-1 \label{jacobi-product} \end{equation} These polynomials are given by explicit formulae, (see \cite{HTF2}, 10.8(16)), \begin{align} P_n^{\alpha,\beta}(y)&= \frac{\Gamma(n+\alpha+1)}{\Gamma(\alpha+1)n!} \,\, F\left[ \begin{matrix} -n,n+\alpha+\beta+1\\ \alpha+1 \end{matrix} ;\frac{1-y}2\right] \label{jacobi-1} \\&= \frac{(-1)^n\Gamma(n+\beta+1)}{\Gamma(\beta+1) n!} \,\, F\left[ \begin{matrix}-n,n+\alpha+\beta+1\\
\beta+1
\end{matrix};
\frac{1+y}2\right]
\label{jacobi-2}
\\ &= \frac{(-1)^n\Gamma(n+\beta+1)}{\Gamma(\beta+1)n!} \,\, \Bigl(\frac{1-y}2\Bigr)^{-\alpha} F\left[ \begin{matrix} n+\beta+1,-\alpha-n \\ \beta+1 \end{matrix}; \frac{1-y}2\right] \label{jacobi-3} \end{align}
Here $F={}_2F_1$ is the Gauss hypergeometric function, $$ F\bigl[a,b;c;x\bigr]= F \left[\begin{matrix} a,b \\ c\end{matrix};x\right]:= \sum_{k=0}^\infty \frac{(a)_k(b)_k}{(c)_k\, k!}\, x^k $$ and $(a)_k:=a(a+1)\dots(a+k-1)$ is the Pochhammer symbol.
The expressions (\ref{jacobi-1}), (\ref{jacobi-2}) are polynomials, since $(-n)_k=0$ for $k>n$. The last expression (\ref{jacobi-3}) is a series; it can be obtained from (\ref{jacobi-2}) by the transformation (see \cite{HTF1}, (2.1.22--23)) \begin{equation} F\bigl[a,b;c;x\bigr]= (1-x)^{c-a-b}F\bigl[c-a,c-b;c;x\bigr] \label{bolza} \end{equation}
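As a quick numerical sanity check, the series definition and the transformation (\ref{bolza}) can be tested in Python; the following sketch uses a truncated Gauss series (valid only for $|x|<1$) together with the known special case $F[1,1;2;x]=-\log(1-x)/x$:

```python
from math import isclose, log

def hyp2f1(a, b, c, x, terms=400):
    """Truncated Gauss series sum_k (a)_k (b)_k / ((c)_k k!) x^k, valid for |x| < 1."""
    s, t = 0.0, 1.0  # t is the k-th term of the series
    for k in range(terms):
        s += t
        t *= (a + k) * (b + k) / ((c + k) * (k + 1)) * x
    return s

# known special case: F[1,1;2;x] = -log(1-x)/x
assert isclose(hyp2f1(1, 1, 2, 0.5), -log(1 - 0.5) / 0.5, rel_tol=1e-12)

# the transformation (bolza): F[a,b;c;x] = (1-x)^(c-a-b) F[c-a,c-b;c;x]
a, b, c, x = 0.3, 0.7, 1.9, 0.4
assert isclose(hyp2f1(a, b, c, x),
               (1 - x) ** (c - a - b) * hyp2f1(c - a, c - b, c, x), rel_tol=1e-10)
```

The sample parameter values are arbitrary; any $a,b,c$ with $c\ne 0,-1,-2,\dots$ and $|x|<1$ would do.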
Norms of the Jacobi polynomials with respect to the inner product (\ref{jacobi-product}) are given by\begin{equation}
\|P_n^{\alpha,\beta}\|^2= \langle P_n^{\alpha,\beta}, P_n^{\alpha,\beta}\rangle= \frac{2^{\alpha+\beta+1}\Gamma(n+\alpha+1)\Gamma(n+\beta+1)}
{(2n+\alpha+\beta+1)\, n!\,\Gamma(n+\alpha+\beta+1)}
\label{jacobi-norms}
\end{equation}
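The norm formula (\ref{jacobi-norms}) and the orthogonality can also be checked numerically; a Python sketch using the terminating series (\ref{jacobi-1}) for $P_n^{\alpha,\beta}$, a midpoint rule, and the sample parameters $\alpha=0.5$, $\beta=0.3$ (chosen so that the weight is bounded):

```python
from math import gamma

def jacobi_P(n, alpha, beta, y):
    """P_n^{(alpha,beta)}(y) via the terminating hypergeometric series (jacobi-1)."""
    u = (1 - y) / 2
    s, t = 0.0, 1.0
    for k in range(n + 1):          # (-n)_k = 0 for k > n, so the series terminates
        s += t
        t *= (-n + k) * (n + alpha + beta + 1 + k) / ((alpha + 1 + k) * (k + 1)) * u
    return gamma(n + alpha + 1) / (gamma(alpha + 1) * gamma(n + 1)) * s

def inner(f, g, alpha, beta, N=100_000):
    """Midpoint rule for int_{-1}^{1} f(y) g(y) (1-y)^alpha (1+y)^beta dy."""
    h = 2.0 / N
    s = 0.0
    for i in range(N):
        y = -1 + (i + 0.5) * h
        s += f(y) * g(y) * (1 - y) ** alpha * (1 + y) ** beta
    return s * h

al, be, n = 0.5, 0.3, 2
P1 = lambda y: jacobi_P(1, al, be, y)
P2 = lambda y: jacobi_P(2, al, be, y)
norm_exact = (2 ** (al + be + 1) * gamma(n + al + 1) * gamma(n + be + 1)
              / ((2 * n + al + be + 1) * gamma(n + 1) * gamma(n + al + be + 1)))
assert abs(inner(P2, P2, al, be) - norm_exact) < 1e-3 * norm_exact  # (jacobi-norms)
assert abs(inner(P1, P2, al, be)) < 1e-3                            # orthogonality
```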
The Jacobi polynomials are eigenfunctions of
the differential operator
\begin{equation}
D:=(1-y^2) \frac {d^2}{dy^2}
+\bigl[\beta-\alpha-(\alpha+\beta+2)y)\bigr]\frac d{dy}
\label{jacobi-operator}
\end{equation}
Precisely,
$$
D P_n^{\alpha,\beta}=-n(n+\alpha+\beta+1) P_n^{\alpha,\beta}
$$
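This eigenvalue relation can be verified numerically by central differences; a Python sketch with sample parameters (the normalizing constant of the polynomial is irrelevant here):

```python
from math import gamma

def jacobi_P(n, alpha, beta, y):
    """P_n^{(alpha,beta)}(y) via the terminating series (jacobi-1)."""
    u = (1 - y) / 2
    s, t = 0.0, 1.0
    for k in range(n + 1):
        s += t
        t *= (-n + k) * (n + alpha + beta + 1 + k) / ((alpha + 1 + k) * (k + 1)) * u
    return gamma(n + alpha + 1) / (gamma(alpha + 1) * gamma(n + 1)) * s

al, be, n, y0, h = 0.5, 0.3, 3, 0.2, 1e-4
P = lambda y: jacobi_P(n, al, be, y)
# central-difference derivatives, then the operator (jacobi-operator)
d1 = (P(y0 + h) - P(y0 - h)) / (2 * h)
d2 = (P(y0 + h) - 2 * P(y0) + P(y0 - h)) / h ** 2
DP = (1 - y0 ** 2) * d2 + (be - al - (al + be + 2) * y0) * d1
assert abs(DP + n * (n + al + be + 1) * P(y0)) < 1e-5  # D P_n = -n(n+a+b+1) P_n
```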
{\bf\refstepcounter{punct}{\arabic{sec}.\arabic{punct}. } Piece-wise hypergeometric orthogonal systems
\label{l-result}.}
Now, fix $\theta\in\C$ such that
\begin{equation}
0\leqslant \mathop{\rm Re}\nolimits\theta<1
\label{restriction-theta}
\end{equation}
Also, fix $\alpha$, $\beta\in\C$ such that
\begin{equation}
-1<\mathop{\rm Re}\nolimits \alpha < 1,\,\,\alpha\ne 0,\qquad \mathop{\rm Re}\nolimits \beta > - 1
\label{conditions-1}
\end{equation}
Consider the space of functions on the half-line $x>0$ equipped with the symmetric bilinear form
\begin{equation}
\{ f,g\}=
\int\limits_0^1 f(x) g(x) (1-x)^\alpha x^\beta dx+
\frac{\sin(\alpha+\theta)\pi}
{\sin \theta\pi}
\int\limits_1^\infty f(x) g(x) (x-1)^\alpha x^{\beta} {dx} \label{bilinear-product} \end{equation}
Denote by $H(x)$ the Heaviside function $$ H(x)= \left\{ \begin{aligned} 1,\qquad x>0;\\ 0, \qquad x<0 \end{aligned} \right. $$ Let $p\in \C$ range over the set \begin{equation} p-\theta\in{\mathbb Z},\qquad \mathop{\rm Re}\nolimits(2p+\alpha+\beta+1)>0 , \quad 1+p+\alpha\ne 0 \label{conditions-2} \end{equation}
Define the piece-wise hypergeometric
functions $\Phi_p(x)$ on the half-line $[0,\infty)$ by \begin{multline} \Phi_p(x)= \frac{\Gamma(2p+\alpha+\beta+2)} {\Gamma(\beta+1)} F\left[ \begin{matrix} -p,p+\alpha+\beta+1\\ \beta+1 \end{matrix}; x \right]\, H(1-x) +\\+ \frac{\Gamma(1+p+\alpha)}{\Gamma(-p)} F\left[ \begin{matrix} p+\alpha+1, p+\alpha+\beta+1\\ 2p+\alpha+\beta+2 \end{matrix}; \frac 1x \right] x^{-\alpha-\beta-p-1}
H(x-1) \label{basis} \end{multline}
{\sc Theorem 1.} {\it The functions $\Phi_p$ are orthogonal with respect to the symmetric bilinear form (\ref{bilinear-product}),\begin{equation} \{ \Phi_p,\Phi_q \}=0 \qquad \text{for $p\ne q$} \label{orthogonality} \end{equation} and} \begin{equation} \{ \Phi_p,\Phi_p \}= \frac{\Gamma^2(2p+\alpha+\beta+2)\Gamma(1+p+\alpha)\Gamma(p+1)} {(2p+\alpha+\beta+1)\Gamma(p+\beta+1)\Gamma(p+\alpha+\beta+1)} \label{norms} \end{equation}
We can also consider the Hermitian inner product
\begin{equation}
\langle f,g\rangle=
\int\limits_0^1 f(x)\overline{ g(x)} (1-x)^\alpha x^\beta dx+
\frac{\sin(\alpha+\theta)\pi}
{\sin \theta\pi}
\int\limits_1^\infty f(x)\overline{ g(x)} (x-1)^\alpha x^{\beta} {dx} \label{hermitian-product} \end{equation} here we must assume $\theta$, $\alpha$, $\beta\in\R$, and $$ 0\leqslant\theta<1, \quad -1<\alpha<1,\qquad \beta>-1,\quad 2p+\alpha+\beta+1>0 $$ By our theorem, the functions $\Phi_p$ are orthogonal with respect to the inner product (\ref{hermitian-product}). If the factor $\sin(\alpha+\theta)\pi/\sin(\theta\pi)$ is positive, then our inner product is also positive definite.
{\sc Remark.} {\it The system $\Phi_p$
is not a basis in our Hilbert space.}
{\bf \refstepcounter{punct}{\arabic{sec}.\arabic{punct}. } Comparison with the Jacobi polynomials. \label{comparison}} Let us show that our construction reduces to the Jacobi polynomials in the case $\theta=0$.
First, substitute $x=(1+y)/2$ in the formulae (\ref{jacobi-product}), (\ref{jacobi-2}). We observe that the first summand in (\ref{basis}) is a Jacobi polynomial. The second summand in (\ref{basis}) is 0, since it contains the factor $\Gamma(-p)^{-1}$, which vanishes for $p=0,1,2,\dots$. Thus, for an integer $p$, $$ \Phi_p(x)= \frac{(-1)^p\,\Gamma(2p+\alpha+\beta+2)\,\Gamma(p+1)}
{\Gamma(p+\beta+1)}
P_p^{\alpha,\beta}(2x-1)H(1-x)
$$
Hence for $\theta=0$ our orthogonality relations
are the orthogonality relations for the Jacobi polynomials.
{\bf \refstepcounter{punct}{\arabic{sec}.\arabic{punct}. } Singular boundary problem. \label{sbp}} Consider the differential
operator \begin{equation} D:=x(1-x)\frac {d^2}{dx^2} + (\beta+1-(\alpha+\beta+2)x)\frac d{dx} \label{D} \end{equation} (this is the operator (\ref{jacobi-operator}) rewritten in the variable $x=(1+y)/2$). The functions $\Phi_p$ satisfy the equation \begin{equation} D\Phi_p=-p(p+\alpha+\beta+1)\Phi_p \label{eigenfunctions} \end{equation} More precisely, the function $\Phi_p$ is given by different Kummer solutions of the hypergeometric equation on the intervals $(0,1)$ and $(1,\infty)$ (see \cite{HTF1}, formulae 2.9(1), (13)).
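Both summands of (\ref{basis}) can be checked numerically to satisfy (\ref{eigenfunctions}); in the Python sketch below we take the hypergeometric operator in the form $x(1-x)\frac{d^2}{dx^2}+(\beta+1-(\alpha+\beta+2)x)\frac{d}{dx}$ (the sign convention matching (\ref{jacobi-operator}) under $y=2x-1$), drop the constant prefactors of (\ref{basis}), and use truncated Gauss series with central differences. The sample values $\alpha=0.5$, $\beta=0.3$, $p=1.25$ satisfy (\ref{conditions-1})--(\ref{conditions-2}) with $\theta=0.25$:

```python
def hyp2f1(a, b, c, x, terms=600):
    """Truncated Gauss series; converges for |x| < 1."""
    s, t = 0.0, 1.0
    for k in range(terms):
        s += t
        t *= (a + k) * (b + k) / ((c + k) * (k + 1)) * x
    return s

# sample parameters: alpha, beta as in (conditions-1), p = 1 + theta with theta = 0.25
al, be, p = 0.5, 0.3, 1.25
lam = -p * (p + al + be + 1)

def phi_left(x):    # first summand of (basis), constant prefactor dropped, 0 < x < 1
    return hyp2f1(-p, p + al + be + 1, be + 1, x)

def phi_right(x):   # second summand of (basis), constant prefactor dropped, x > 1
    return x ** (-al - be - p - 1) * hyp2f1(p + al + 1, p + al + be + 1,
                                            2 * p + al + be + 2, 1 / x)

def residual(f, x, h=1e-4):
    """| x(1-x) f'' + (beta+1-(alpha+beta+2)x) f' - lam f | via central differences."""
    d1 = (f(x + h) - f(x - h)) / (2 * h)
    d2 = (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2
    return abs(x * (1 - x) * d2 + (be + 1 - (al + be + 2) * x) * d1 - lam * f(x))

assert residual(phi_left, 0.4) < 1e-4   # eigenfunction on (0, 1)
assert residual(phi_right, 2.5) < 1e-4  # eigenfunction on (1, infinity)
```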
Let $\alpha\ne 0$. \footnote{Otherwise, below we have logarithmic asymptotics at $x=1$.} Now we define a space $\mathcal E$ of functions on $[0,\infty)$. Its elements are functions $f(x)$ that are smooth outside the singular points $x=0$, $x=1$, $x=\infty$; at the singular points they satisfy the following boundary conditions (the strange element of the problem is condition b; the self-adjointness of the resulting operator is not obvious).
a) {\it The condition at 0.} A function $f$ is smooth at 0.
b) {\it The condition at 1.} There are functions $u(x)$, $v(x)$ smooth at 1 such that \begin{equation} f(x)= \left\{ \begin{aligned} &u(x)+v(x)(1-x)^{-\alpha}, \qquad &x<1;\\ &\frac{\sin\theta\pi}{\sin(\alpha+\theta)\pi} u(x)+v(x)(x-1)^{-\alpha} &\qquad x>1 \end{aligned} \right. \label{uzhas} \end{equation}
c) {\it The condition at $\infty$.} There is a function $w(y)$ smooth at zero, such that $$f(x)=w(1/x)x^{-\alpha-\beta-r-1}\qquad \text{for large $x$}$$ where $r$ is the minimal possible value of $p$.
{\sc Theorem 2.}
a) {\it $\Phi_p\in \mathcal E$.}
b) {\it For $f$, $g\in\mathcal E$,}
$$
\{Df,g\}=\{f,Dg\}
$$
Obviously, this implies the orthogonality relations for $p\ne q$.
Indeed,
$$\{D\Phi_p, \Phi_q\} = \{\Phi_p, D \Phi_q\}
=-p(p+\alpha+\beta+1)\{\Phi_p, \Phi_q\}=
-q(q+\alpha+\beta+1)\{\Phi_p, \Phi_q\}
$$
and hence \footnote{Under our conditions for parameters, $p(p+\alpha+\beta+1)=q(q+\alpha+\beta+1)$ implies $p=q$.} $\{\Phi_p, \Phi_q\}=0$.
Denote by $\mathcal H$ the Hilbert space with the inner product (\ref{hermitian-product}). Obviously, $\mathcal E\subset \mathcal H$.
{\sc Theorem 3.} {\it The operator $D$ is essentially self-adjoint on $\mathcal E$.}
{\sc Remark.} a) We can replace the boundary condition at $\infty$ by the following: $f(x)=0$ for large $x$. Thus, our complicated formulation is not necessary.
b) If $\beta\geqslant 1$, then we can replace the condition at 0 by the following: $f(x)=0$ at some neighborhood of 0.
For $\beta<1$ the latter simplified variant gives a symmetric, but non-self-adjoint operator \footnote{for a discussion of the difference between symmetry and self-adjointness, see any textbook on functional analysis, for instance \cite{DS}}.
Possible self-adjoint conditions are enumerated by points $\lambda:\mu$ of the real projective line; they can be given in the form $$ f(x)=A(\lambda+\mu x^{-\beta})+ x\varphi(x)+ x^{-\beta+1}\psi(x) $$ where $\varphi$, $\psi$ are functions
smooth near 0, $A$ ranges in $ \C$.
The condition given above corresponds to $\mu=0$. Thus, our smoothness requirement is not the absence of a condition; it hides a condition on the asymptotics.
{\bf \refstepcounter{punct}{\arabic{sec}.\arabic{punct}. } Expansion in eigenfunctions
\label{l-expansion}.} Our orthogonal system is not complete; hence our operator has a partially continuous spectrum. In such a case, the usual expansion of a function in a series of Jacobi polynomials must be replaced by the eigenfunction expansion of $D$ in the spirit of Weyl and Titchmarsh (see \cite{DS}).
For $s\in \R$, we define the function $\Psi_s(x)$
on $[0,\infty)$ given by \begin{multline} \Psi_s(x)= F\left[ \begin{matrix} \frac{\alpha+\beta+1}2+is,\frac{\alpha+\beta+1}2-is\\ \beta+1 \end{matrix};x\right]H(1-x) +\\+ \frac{2\Gamma(\beta+1)}{\sin (\theta+\alpha)\pi} \cdot \mathop{\rm Re}\nolimits\Biggl\{ \frac{\Gamma(-2is)\cos(\frac{\alpha+\beta}2+\theta-is)\pi} {\Gamma(\frac{\alpha+\beta+1}2-is) \Gamma(\frac{-\alpha+\beta+1}2-is)} \times\\ \times F\left[ \begin{matrix} \frac{\alpha+\beta+1}2+is,\frac{\alpha-\beta+1}2+is\\ 1+2is \end{matrix};\frac 1x\right] x^{-(\alpha+\beta+1)/2-is}\Biggr\}H(x-1) \label{eigenfunctions-psi} \end{multline}
Obviously, $$\Psi_s(x)=\Psi_{-s}(x)$$
{\sc Remark 1.} Both summands of $\Psi_s$ are solutions of the equation \begin{equation} D f=-(\frac14(\alpha+\beta+1)^2+s^2)f \label{yyy} \end{equation} Indeed, the first summand is the same as above; we substitute $p=-\frac{\alpha+\beta+1}2+is$ into (\ref{basis}). The hypergeometric function in the second summand is a Kummer solution of the equation (\ref{yyy}). Since the coefficients of $D$ are real, the complex conjugate function is also a solution of the same equation.
{\sc Remark 2.} The functions $\Psi_s(x)$ satisfy the boundary condition at $x=1$.
{\sc Remark 3.} The functions $\Phi_p$ with $p-\theta\in{\mathbb Z}$ are all the $L^2$-eigenfunctions of the boundary problem formulated above. Also, the functions $\Psi_s(x)$ are all the remaining generalized eigenfunctions of the same boundary problem; see Section 4 below.
These three remarks easily imply the explicit Plancherel measure for the operator $D$.
Consider the Hilbert space $V$, whose elements are pairs $(a(p), F(s))$, where $a(p)$ is a sequence ($p$ ranges in the same set as above), and $F(s)$ is a function on the half-line $s>0$; the inner product is given by \begin{multline} \bigl[ (a,F);(b,G)\bigr] = \sum_{p}\frac {a(p)\overline{ b(p)}}{\langle\Phi_p,\Phi_p\rangle}+ \\+ \frac {\sin\theta\pi \sin(\theta+\alpha)\pi }{4\pi\Gamma(\beta+1)^2} \int_0^\infty
\left| \frac{\Gamma(\frac{\alpha+\beta+1}2-is) \Gamma(\frac{-\alpha+\beta+1}2-is)}
{\Gamma(2is)\cos(\frac{\alpha+\beta}2 +\theta-is)\pi}\right|^2 F(s)\overline{G(s)} ds \end{multline}
We define the operator $U: f\mapsto (a(p),F(s))$ from $\mathcal H$ to $V$ by \begin{align*} a(p)=\langle f,\Phi_p\rangle_{\mathcal H},\\ F(s)=\langle f,\Psi_s\rangle_{\mathcal H} \end{align*}
{\sc Theorem 4.} {\it The operator $U:\mathcal H\to V$
is a unitary invertible operator.}
In particular, this theorem implies the
{\it inversion formula}, \begin{equation} U^{-1}=U^* \label{inversion} \end{equation}
{\bf \refstepcounter{punct}{\arabic{sec}.\arabic{punct}. } Discussion: shift of the index $n$ for classical orthogonal bases \label{discussion-1}.} Thus, for the Jacobi polynomials $P_n^{\alpha,\beta}$ there is a perturbed nonpolynomial orthogonal system obtained by a shift of the number $n$ by a real $\theta$. Similar deformations are known for several other classical orthogonal systems. At present the general picture looks confusing, and I shall briefly review some known facts.
a) {\it Meixner system.} Perturbed systems were discovered by Vilenkin and Klimyk in \cite{VK}; see also \cite{VK1}, and more details in \cite{GK}.
b) {\it Laguerre system}; see the detailed discussion in \cite{Gro}.
c) {\it Meixner--Pollaczek system}; see \cite{Ner-preprint}.
d) {\it Continuous dual Hahn polynomials},
a (noncomplete) construction was obtained in \cite{Ner-double}.
In all the cases enumerated above, the perturbed systems are orthogonal {\it bases}, indexed by numbers $n+\theta$, where $n$ ranges over ${\mathbb Z}$.
All such deformations are obtained in the following way. Almost all classical orthogonal hypergeometric systems \footnote{The only possible exception is the Wilson polynomials (the author does not know whether they appear in this context).} arise in a natural way in a detailed consideration of highest weight and lowest weight representations of ${\rm SL}_2(\R)$. Repeating the same operations with principal and complementary series of representations, we obtain deformed systems \footnote{For the Hahn system, the group ${\rm SL}_2(\R)$ does not provide a sufficient collection of parameters; nevertheless this method has a heuristic meaning}.
All the basic formulae existing for orthogonal polynomials (see the lists in \cite{HTF2}, \cite{KS}, \cite{NUS}) "survive" for deformed systems; this confirms the nontriviality of the phenomenon under discussion. Moreover, the representation-theoretic interpretation allows one to write such formulae quite easily.
Note that the systems a)--c) can be partially \footnote{Partially, since such bases form "series" imitating "series of representations". Constructions imitating "complementary series" can hardly be observed without representation theory.} produced by very simple operations.
a) Fix real parameters $h$, $\sigma$, $t$. Consider two orthonormal bases $$ e_n(z)=z^n,\qquad f_n(z)= \left(\frac{z\mathop{\rm ch}\nolimits t+\mathop{\rm sh}\nolimits t}{z\mathop{\rm sh}\nolimits t+\mathop{\rm ch}\nolimits t}\right)^n (z\mathop{\rm sh}\nolimits t+\mathop{\rm ch}\nolimits t)^{-h+i\sigma} (z^{-1}\mathop{\rm sh}\nolimits t+\mathop{\rm ch}\nolimits t)^{-1+h+i\sigma} $$ in the space $L^2$ on the circle $|z|=1$. Expanding one basis in the other, we obtain a matrix whose rows are orthogonal in the space $l_2({\mathbb Z})$. These rows form a Meixner-type system.
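The orthonormality of the system $\{f_n\}$ in $L^2$ with respect to the normalized measure $d\theta/2\pi$ on the circle (and hence, by the Parseval identity, the orthogonality of the rows of the transition matrix in $l_2({\mathbb Z})$) can be checked numerically; a Python sketch with arbitrary sample values of $h$, $\sigma$, $t$:

```python
from math import pi, cosh, sinh
import cmath

h_, sig, t = 0.4, 0.7, 0.6   # arbitrary sample parameters
ch, sh = cosh(t), sinh(t)

def f(n, z):
    # z on the unit circle, so z^{-1} = conj(z); Re(z*sh + ch) > 0,
    # hence principal complex powers are safe
    moeb = (z * ch + sh) / (z * sh + ch)   # unimodular Moebius factor
    return (moeb ** n * (z * sh + ch) ** (-h_ + 1j * sig)
            * (1 / z * sh + ch) ** (-1 + h_ + 1j * sig))

def inner(n, m, N=1024):
    # midpoint rule for the inner product in L^2(d theta / 2 pi);
    # spectrally accurate for smooth periodic integrands
    s = 0j
    for i in range(N):
        z = cmath.exp(2j * pi * (i + 0.5) / N)
        s += f(n, z) * f(m, z).conjugate()
    return s / N

assert abs(inner(2, 2) - 1) < 1e-10   # normalization
assert abs(inner(1, -3)) < 1e-10      # orthogonality
```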
b), c) Consider an orthonormal basis in $L^2$ on $\R$ consisting of the functions $$ f_n(x)=(1+ix)^{-n-1+h+i\sigma}(1-ix)^{n-h+i\sigma} $$ where $n$ ranges over ${\mathbb Z}$. Applying the Fourier transform, we obtain a Laguerre-like piece-wise confluent hypergeometric basis.
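The pairwise orthogonality of these functions (with $\langle f_n,f_n\rangle=\pi$ for the plain Lebesgue measure $dx$; the normalization constant is immaterial here) can be checked numerically. A Python sketch, substituting $x=\tan\varphi$ to map $\R$ onto a finite interval:

```python
from math import pi, tan, cos

h_, sig = 0.35, 0.8   # arbitrary sample parameters

def f(n, x):
    # Re(1 +/- ix) = 1 > 0, so principal complex powers are safe
    return (1 + 1j * x) ** (-n - 1 + h_ + 1j * sig) * (1 - 1j * x) ** (n - h_ + 1j * sig)

def inner(n, m, N=4000):
    # substitute x = tan(phi), phi in (-pi/2, pi/2), then use the midpoint rule
    s = 0j
    for i in range(N):
        phi = -pi / 2 + (i + 0.5) * pi / N
        x = tan(phi)
        s += f(n, x) * f(m, x).conjugate() / cos(phi) ** 2
    return s * pi / N

assert abs(inner(0, 0) - pi) < 1e-6   # <f_n, f_n> = pi in this normalization
assert abs(inner(2, -1)) < 1e-6       # orthogonality for n != m
```

(The integrand after the substitution reduces to $e^{-2i(n-m)\varphi}$, which explains the fast convergence.)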
Considering the Mellin transform $f_n(x)\mapsto (F^+_n(t),F^-_n(t))$, $$
F^+_n(t)=\int_0^\infty f_n(x) x^{it-1}\,dx ,\qquad F^-_n(t)= \int_0^\infty f_n(-x) x^{it-1}\,dx $$ of the same functions, we obtain a Meixner--Pollaczek-like system. It appears that this system is an orthonormal basis in a space of $\C^2$-valued functions on $\R$.
d) A Hahn-type system can be obtained from the same functions $f_n$ by applying the bilateral index hypergeometric transform introduced in \cite{Ner-double}. \footnote{Again, there arises a system of $\C^2$-valued functions. There is an interesting general problem concerning the existence of a theory of vector-valued and matrix-valued special functions. Two examples have just been given.}
At present the author knows neither a representation-theoretic interpretation nor a way of "simple production" of Jacobi-type systems. \footnote{The spectrum of the operator $D$ is the same as that of a tensor product of a pair of representations of ${\rm SL}_2(\R)$ contained in the principal and complementary series. Possibly, this coincidence is not accidental.}
{\bf \refstepcounter{punct}{\arabic{sec}.\arabic{punct}. } Discussion: multi-contour boundary problems. \label{discussion-2}} Boundary problems on systems of contours with cross-gluing of asymptotics are not well known. We intend to explain some natural origins of their appearance.
First, consider a model example, the equation $$ \frac{\partial^2 }{\partial x\partial y} f(x,y) =\lambda f(x,y) $$ where $\lambda$ is a constant. Since this equation is similar to $$ \Bigl(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}\Bigr)\,f(x,y)= \lambda f(x,y)$$ let us imitate the radial separation of variables for the Laplace operator.
Let $f$ be a smooth compactly supported function on the plane. For $r>0$ set \begin{equation} g(r,\mu)=\int_{-\infty}^\infty f(re^t,r e^{-t}) e^{-it\mu}\, dt \label{povorot} \end{equation} Conversely,
$$ f(x,y)=\frac 1{2\pi} \int_{-\infty}^\infty g(\sqrt{xy},\mu) \left(\frac xy\right)^{i\mu/2} d\mu $$ Our equation now transforms into \begin{equation} \mathcal D g=\lambda g \label{dlambda} \end{equation} where $$ \mathcal D= \frac 14 \Bigl(\frac {\partial^2}{\partial r^2 } +\frac 1r \frac {\partial}{\partial r } +\frac{\mu^2}{r^2}\Bigr) $$
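The form of the radial operator $\mathcal D$ (in particular, the sign of the $\mu^2/r^2$ term) can be checked on the separated solutions $f(x,y)=x^{s+i\mu/2}\,y^{s-i\mu/2}=r^{2s}e^{i\mu t}$, for which $\partial^2 f/\partial x\,\partial y=(s^2+\mu^2/4)\,f/(xy)$, while $\mathcal D\, r^{2s}=\frac14\bigl(2s(2s-1)+2s+\mu^2\bigr)r^{2s-2}=(s^2+\mu^2/4)\,r^{2s-2}$. A Python sketch with sample values $s=0.4$, $\mu=1.1$:

```python
import cmath

s_, mu = 0.4, 1.1   # sample parameters

def f(x, y):
    # separated solution f = (xy)^s (x/y)^(i mu / 2) = r^(2s) e^(i mu t)
    return (x * y) ** s_ * cmath.exp(0.5j * mu * cmath.log(x / y))

x0, y0, h = 1.3, 0.7, 1e-3
# central-difference approximation of the mixed partial d^2 f / dx dy
mixed = (f(x0 + h, y0 + h) - f(x0 + h, y0 - h)
         - f(x0 - h, y0 + h) + f(x0 - h, y0 - h)) / (4 * h * h)
# d^2 f / dx dy = (s^2 + mu^2/4) f / (xy), matching the radial eigenvalue of D
expected = (s_ ** 2 + mu ** 2 / 4) * f(x0, y0) / (x0 * y0)
assert abs(mixed - expected) < 1e-4
```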
For a fixed $\mu$, we obtain a Sturm--Liouville problem for the operator $\mathcal D$.
Next, for a fixed
$\mu\ne 0$ the function $g(r,\mu)$ has the following asymptotics $$ g(r,\mu)\sim A r^{i\mu} + B r^{-i\mu},\qquad r\to 0, $$ where $$A= \frac \pi 2 \Gamma(-i\mu/2) f(0,0) +\int_0^\infty (f(x,0)- f(0,0)e^{-x^2})x^{-i\mu-1}dx $$ and $B$ is given by a similar expression.
In the integral transform (\ref{povorot}) only the values of $f$ in the quadrant $x\geqslant 0$, $y\geqslant 0$ are involved. Hence we consider the functions $f(-x,y)$, $f(-x,-y)$, $f(x,-y)$ and construct 3 more functions of the form (\ref{povorot}). As a result, we obtain 4 functions, whose asymptotics are shown in Fig.~1c. But in two cases we obtain the equation (\ref{dlambda}) transformed by $\lambda\mapsto-\lambda$. Nevertheless, the same result can be obtained by the substitution $r\mapsto ir$ in the Bessel operator $\mathcal D$. Now, in all 4 cases the operator $\mathcal D$ becomes the same, but the argument $r$ ranges over different contours.
As a result, we obtain a boundary problem of the following type (see Fig.~1d): we have the Bessel operator $\mathcal D$ defined on quadruples of functions, each defined on its own contour, and the asymptotics at zero satisfy a certain cross-gluing condition.
We emphasize that our actions were completely standard. Namely, our equation is invariant with respect to the action of the group $\Gamma$ of hyperbolic rotations $(x,y)\mapsto (xe^t,ye^{-t})$ of the plane, see Fig.~1a). We simply use this invariance for the separation of variables. But the generic orbits of $\Gamma$ on $\R^2$ form a disconnected set consisting of 4 components (see Fig.~1b). This produces a 4-contour problem.
A more interesting example is the problem of the spectral decomposition of an ${\rm SL}_2(\R)$-invariant Laplace operator on the torus
$|z|=1$, $|u|=1$, \begin{multline*} \Delta=-(z-u)^2 \frac{\partial^2} {\partial z \partial u} +\frac 1u (\widetilde\theta u+\widetilde\tau z) (z-u) \frac \partial{\partial z} +\frac 1z (\theta z+\tau u) (u-z) \frac \partial{\partial u}+ \\+ \frac 1{zu} (\widetilde\theta u+\widetilde\tau z) (\theta z+\tau u) \end{multline*}
The operator is self-adjoint in $L^2$ on the torus \footnote{There are several more cases of self-adjointness in other function spaces} in the case
$$ \mathop{\rm Re}\nolimits(\theta+\tau)=1,\,\, \mathop{\rm Im}\nolimits\theta=\mathop{\rm Im}\nolimits\tau,\,\, \mathop{\rm Re}\nolimits(\widetilde\theta+\widetilde\tau)=1,\,\, \mathop{\rm Im}\nolimits\widetilde\theta=\mathop{\rm Im}\nolimits\widetilde\tau $$
This operator is an interesting, complicated and not well-understood object. It was considered in several works (\cite{Puk}, \cite{Mol-tensor}, \cite{Ner-double}, \cite{Gro}). We intend to discuss some details that are absent in these works.
The group ${\rm SL}_2(\R)$ acts on the circle by M\"obius transformations, and hence it acts on the torus, which is a product of circles. Consider a one-parameter subgroup $\Gamma\subset{\rm SL}_2(\R)$ and separate variables using $\Gamma$ as in the previous example.
There are 3 possibilities:
a) $\Gamma=K$ is a subgroup of rotations of the circle.
b) $\Gamma=P$ is parabolic, i.e., it is a one-parameter subgroup having one fixed point on the circle.
c) $\Gamma=H$ is hyperbolic, i.e., $\Gamma$ is a one-parameter subgroup having two fixed points on the circle.
The separations of variables corresponding to these subgroups reduce respectively to Fourier series expansion, the Fourier transform, and the Mellin transform (a circle is a real projective line, so we can apply the Fourier and Mellin transforms).
As a result, we obtain the 3 variants shown in Fig.~2 (we represent the torus as a square).
Numerous nonstandard boundary problems of this kind (in particular, multi-dimensional ones) arise in a natural way \footnote{Miller's treatise \cite{Mil} contains lists of various separations of variables for several classical partial differential equations. Some of them produce multi-contour problems, although this is not mentioned in the book.} in non-commutative harmonic analysis (the topic apparently goes back to \cite{Ten}).
Actually, they are not well studied. Several one-dimensional problems for the Legendre equation were examined by Molchanov \cite{Mol-hyp}, \cite{Mol-tensor}, \cite{Mol2}, \cite{Mol3} and Faraut \cite{Far} in order to obtain the Plancherel formula on rank 1 pseudo-Riemannian symmetric spaces (in particular, this class of spaces includes multi-dimensional hyperboloids).
{\sc Remark}. The examples enumerated above suggest that reasonable multi-contour problems have approximately the following form. We consider the space of all functions as a module over the space of smooth functions. The domain of definition of a PDE system consists of the functions contained in a fixed (explicitly defined) submodule of this module. The possibility of an explicit solution of such problems looks questionable; but there is also no reason to think that roundabout ways are better.
{\bf \refstepcounter{punct}{\arabic{sec}.\arabic{punct}. } Discussion. Degree of rigidity of the problem. \label{l-rigidity}} There are two classical variants of expansion of the hypergeometric differential operator in eigenfunctions. One case gives the expansion in the Jacobi polynomials. Another one gives the Weyl--Olevsky index hypergeometric transform \begin{multline} g(s) =\\ =\frac 1{\sqrt{2\pi}\Gamma(b+c)} \int_0^\infty f(x) \,\,{}_2F_1\Bigl[
\begin{matrix} b+is,\, b-is\\b+c\end{matrix}
; -x\Bigr] x^{b+c-1} (1+x)^{b-c}\,dx \label{def} \end{multline} Recall (see, for instance, \cite{DS}, Chapter XIII) that this transformation is a unitary operator $$ L^2\Bigl(\R_+,x^{b+c-1}(1+x)^{b-c}dx\Bigr)
\to L^2\Bigl(\R_+,\Bigl|\frac
{\Gamma(b+is)\Gamma(c+is)}{\Gamma(2is)}\Bigr|^2 \Bigr) $$
The unitarity condition implies the inversion formula (\ref{inversion}). This transformation is an interesting object in itself, with numerous applications in harmonic analysis and the theory of special functions (see \cite{Koo}, \cite{VK1}, \cite{Ner-index}, \cite{Ner-wilson}).
There is a third variant of this transformation, recently obtained in \cite{Ner-double}; it corresponds to the contour shown in Fig.~2b).
Our Theorem 4 can be interpreted as one more (but strange) analog of the expansion in Jacobi polynomials. Evidently (see, for instance, Subsection \ref{discussion-2}), there are other analogs having natural origins.
On the other hand, we can assign arbitrary multiplicities to the contours $(0,1)$, $(1,\infty)$, $(\infty,0)$ in $\C$; after this there arises a wide (and even too wide) freedom to invent boundary conditions as in (\ref{uzhas}).
As a model example,
consider the same operator $D$ defined in $L^2(0,1)$ with respect to the same weight $x^\beta(1-x)^\alpha$. If $\alpha>1$, $\beta>1$, then the space $\mathcal D(0,1)$
of compactly supported smooth functions on $(0,1)$ is a domain of self-adjointness. If $$-1<\beta<1, \qquad -1<\alpha<1$$
then the deficiency indices of $D$ on $\mathcal D(0,1)$ are $(2,2)$. In fact, both solutions of the equation $Df=\lambda f$ are in $L^2$ for all $\lambda$.
Fix $\mu$, $\nu\in\R$. Let us write the boundary conditions \begin{align*} f(x)&=A\bigl[1-\mu\frac{\Gamma(-\beta)}{\Gamma(\beta)} x^{-\beta}\bigr] + x\varphi_0(x)+x^{-\beta+1}\psi_0(x) \quad &\text{near $x=0$}\\ f(x)&=B\bigl[1+\nu\frac{\Gamma(\alpha)}{\Gamma(-\alpha)} (1-x)^{-\alpha}\bigr] + x\varphi_1(x)+(1-x)^{-\alpha+1}\psi_1(x) \quad &\text{near $x=1$} \end{align*} where $\varphi_0$, $\psi_0$ are smooth near $x=0$ and $\varphi_1$, $\psi_1$ are smooth near $x=1$.
Then the spectrum is discrete, and $\lambda=-p(\alpha+\beta+1+p)$ is a point of the spectrum iff \begin{multline*} \frac 1{\Gamma(\beta+p+1)\Gamma(-p-\alpha)}+ \frac \lambda {\Gamma(p+1)\Gamma(-\alpha-\beta-p)} =\\= \frac\mu{\Gamma(-p)\Gamma(p+\alpha+\beta+1)} +\frac{\mu\nu}{\Gamma(-\beta-p)\Gamma(\alpha+p+1)} \end{multline*} The equation looks nice, but apparently it cannot be solved explicitly.
In any case, not all boundary conditions have equal rights.
{\bf \refstepcounter{punct}{\arabic{sec}.\arabic{punct}. } Discussion. An attempt at an application. \label{attempt}} Denote\footnote{We imitate a simple way of deriving the De Branges--Wilson integral proposed by Koornwinder; see, for instance, \cite{Ner-wilson}.} $$\xi_\mu=(1-x)^\mu H(1-x), \qquad\mu\in\R$$ Equating $$ \langle\xi_\mu,\xi_\nu\rangle=[U\xi_\mu,U\xi_\nu] $$ we obtain the following identity \begin{multline} \pi^{-3}\sin\theta\pi \sin(\theta+\alpha)\pi \times\\ \times \int_0^\infty
\left| \frac{\Gamma(\frac{\alpha+\beta+1}2+is) \Gamma(\frac{-\alpha+\beta+1}2+is) \Gamma(\frac{\alpha+\beta+1}2 +\theta+is) \Gamma(\frac{-\alpha-\beta+1}2 -\theta+is)} {\Gamma(2is) \Gamma(\mu+\frac{\alpha+\beta+3}2+is) \Gamma(\nu+\frac{\alpha+\beta+3}2+is)
}\right|^2
ds
+\\+
\frac 1{\pi^2}
\sin(\theta-\mu)\pi \sin(\theta-\nu)\pi
\sum_p
(2p+\alpha+\beta+1)\times
\\ \times
\Gamma\begin{bmatrix}p+\alpha+\beta+1, p+\beta+1, -\nu+p,-\mu+p\\
p+\alpha+\beta+\mu+2, p+\alpha+\beta+\nu+2,p+\alpha+1,p+1
\end{bmatrix}
=\\=
\frac{\Gamma(\beta+1)\Gamma(\alpha+\mu+\nu+1)}
{\Gamma(\alpha+\beta+\mu+\nu+2)\Gamma(\mu+1)\Gamma(\nu+1)
\Gamma(\alpha+\mu+1)\Gamma(\alpha+\nu+1)} \label{beta} \end{multline} This identity is a kind of a beta-integral, continuous and discrete beta-integrals are well-known, see \cite{Ask}. Our integral has a mixed continuous-discrete form, here an integral and a {\it countable} ${}_6F_5$-sum are present\footnote{ Analytic continuation of beta-integrals with respect to parameters can produce a finite collection of additional summands
on the left-hand side (due to residues). Our integral is of another type.}.
After the substitution $\mu=\theta-1$ in (\ref{beta}), the ${}_6F_5$-sum vanishes, and we obtain the following beta-integral from
\cite{Ner-wilson}, $$ \frac1{2\pi}\int_{-\infty}^\infty
\Bigl| \frac{\prod_{k=1}^3 \Gamma(a_k+is)} {\Gamma(2is)\Gamma(b+is)}
\Bigr|^2 ds = \frac{\Gamma(b-a_1-a_2-a_3)\prod_{1\leqslant k<l\leqslant 3} \Gamma(a_k+a_l)} {\prod_{k=1}^3 \Gamma(b-a_k)} $$
The substitution $\theta=0$ kills the integral term, and we obtain a known summation formula of ${}_5F_4$-type. In fact, this is the famous Dougall ${}_5H_5$-formula (see, for instance, \cite{AAR}). $$ \sum_{n=-\infty}^\infty \frac{\alpha+n} {\prod_{j=1}^4\Gamma(a_j+\alpha+n)
\Gamma(a_j-\alpha-n)} = \frac{\sin 2\pi\alpha}{2\pi} \frac{\Gamma(a_1+a_2+a_3+a_4-3)} {\prod_{1\leqslant j<k\leqslant 4} \Gamma(a_j+a_k-1)} $$ where one parameter is killed by the substitution $a_1=\alpha$.
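As a numerical sanity check (not part of the argument), the displayed Dougall-type bilateral sum can be verified in Python. The summand is evaluated through `lgamma` with explicit sign tracking, since `math.gamma` overflows for arguments beyond roughly 170; the parameter values below are arbitrary illustrative choices satisfying the convergence condition (terms decay like $|n|^{5-2\sum a_j}$).

```python
import math

def inv_gamma_log_sign(y):
    """log|Gamma(y)| and the sign of Gamma(y), for non-integer y."""
    lg = math.lgamma(y)
    if y > 0:
        return lg, 1.0
    return lg, (1.0 if math.floor(y) % 2 == 0 else -1.0)

def dougall_lhs(a, x, N=2000):
    # bilateral sum over n = -N..N; worked in logs to avoid overflow
    total = 0.0
    for n in range(-N, N + 1):
        log_t = math.log(abs(x + n))
        sign = 1.0 if x + n > 0 else -1.0
        for aj in a:
            for y in (aj + x + n, aj - x - n):
                lg, sg = inv_gamma_log_sign(y)
                log_t -= lg      # dividing by Gamma(y)
                sign *= sg       # 1/Gamma(y) has the sign of Gamma(y)
        total += sign * math.exp(log_t)
    return total

def dougall_rhs(a, x):
    log_num = math.lgamma(a[0] + a[1] + a[2] + a[3] - 3)
    log_den = sum(math.lgamma(a[j] + a[k] - 1)
                  for j in range(4) for k in range(j + 1, 4))
    return math.sin(2 * math.pi * x) / (2 * math.pi) * math.exp(log_num - log_den)
```

The truncation error of the sum is negligible for these parameters, so both sides agree to many digits.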
It is interesting that our integral does not majorize the Dougall formula. Apparently, this means that our construction must contain an additional parameter or parameters.
{\bf \refstepcounter{punct}{\arabic{sec}.\arabic{punct}. } Structure of the paper.} In Sections 2 and 3, we give two proofs of the orthogonality relations. In Section 3 we also discuss our boundary problem. In Section 4 we obtain the spectral decomposition of $D$.
{\bf Acknowledgments.} I am grateful to V.~F.~Molchanov for a discussion of this subject.
{\bf \large 2. Calculation}
\addtocounter{sec}{1} \setcounter{equation}{0} \setcounter{punct}{0}
We use the notation $$\Gamma \begin{bmatrix} a_1,\dots, a_k\\b_1,\dots, b_l \end{bmatrix}:= \frac{\Gamma(a_1)\dots \Gamma(a_k)} {\Gamma(b_1)\dots\Gamma(b_l)} $$
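For numerical experiments with such $\Gamma$-brackets, it is convenient to evaluate them through `lgamma`, so that the individual gamma factors cannot overflow. A minimal helper (restricted, for brevity, to positive arguments; this sketch is not part of the text):

```python
import math

def gamma_bracket(tops, bottoms):
    # Gamma[a_1,...,a_k ; b_1,...,b_l] for positive arguments, via lgamma
    log_val = (sum(math.lgamma(a) for a in tops)
               - sum(math.lgamma(b) for b in bottoms))
    return math.exp(log_val)
```

For instance, $\Gamma[2,3\,;\,4]=\Gamma(2)\Gamma(3)/\Gamma(4)=1/3$.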
{\bf \refstepcounter{punct}{\arabic{sec}.\arabic{punct}. } The Mellin transform. \label{l-mellin}} For a function $f$ defined on the semi-line $x>0$,
its {\it Mellin transform} is defined by the formula \begin{equation} \mathfrak M f(s)=\int_0^\infty f(x) x^s dx/x \label{mellin} \end{equation} In the cases considered below, this integral converges in some strip $\sigma<\mathop{\rm Re}\nolimits s <\tau$. The inversion formula is $$ f(x)=\frac 1{2\pi i} \int_{\gamma-i\infty}^{\gamma+i\infty} \mathfrak M f(s) x^{-s}ds $$ where the integration is over an arbitrary contour lying in the strip $\sigma<\mathop{\rm Re}\nolimits s <\tau$.
The multiplicative convolution $f*g$ is defined by
\begin{equation}
f*g(x)=\int_0^\infty f(y)g(x/y)\,dy/y
\label{convolution}
\end{equation} The Mellin transform maps the convolution to the product of functions, \begin{equation} \mathfrak M [f*g](s)= \mathfrak M f(s) \cdot \mathfrak M g(s) \label{convolution-theorem} \end{equation} (if $\mathfrak M f(s)$, $\mathfrak M g(s)$ are defined in a common strip).
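As a quick illustration of (\ref{mellin}): the Mellin transform of $e^{-x}$ is $\Gamma(s)$, and a crude midpoint rule reproduces this. The grid size and the cutoff below are ad hoc illustrative choices.

```python
import math

def mellin_numeric(f, s, xmax=60.0, n=200000):
    # midpoint rule for the truncated Mellin integral: int_0^xmax f(x) x^(s-1) dx
    h = xmax / n
    return sum(f((k + 0.5) * h) * ((k + 0.5) * h) ** (s - 1.0) * h
               for k in range(n))

approx = mellin_numeric(lambda x: math.exp(-x), 2.5)
exact = math.gamma(2.5)
```

The tail beyond the cutoff and the discretization error are both far below the tolerance used here.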
{\bf \refstepcounter{punct}{\arabic{sec}.\arabic{punct}. } The scheme of the proof of orthogonality.} We write two explicit functions $\mathcal K_1(s)$, $\mathcal K_2(s)$ and evaluate their inverse Mellin transforms $K_1$, $K_2$. Next, we write the identity
$$K_1*K_2(1)=\mathfrak M^{-1}[\mathcal K_1 \mathcal K_2](1)$$
and observe that it coincides
with the orthogonality
relations for $\Phi_p$ and $\Phi_q$.
Below, in Subsection 2.7, we explain the origin of the functions $K_1$, $K_2$. The calculation on the formal level (without attention to convergence, the conditions of the convolution theorem, etc.) is performed in Subsection 2.3. In Subsections 2.4--2.6, we fill in the omitted details.
{\bf \refstepcounter{punct}{\arabic{sec}.\arabic{punct}. } Evaluation of the convolution. \label{l-convolution}} We use the following Barnes-type integral (\cite{PBM3}, 8.4.49.1) \begin{multline} \frac 1{2\pi i} \Gamma \begin{bmatrix} c,1-b\\a \end{bmatrix} \int_{-i\infty}^{+i\infty} \Gamma \begin{bmatrix} s,a-s\\s+1-b,c-s \end{bmatrix}x^{-s} ds= \\= F\left[ \begin{matrix} a,b\\c\end{matrix} ;x\right] H(1-x) +\\+ x^{-a} \Gamma \begin{bmatrix} c,1-b\\c-a,1+a-b \end{bmatrix} F\left[ \begin{matrix}a, 1+a-c\\ 1+a-b \end{matrix} ;\frac 1x \right]H(x-1) \label{main-integral} \end{multline} where $x>0$. The integrand has two series of poles $$ s=0,-1,-2,\dots,\qquad s=a,a+1,\dots $$ The integration is over an arbitrary contour lying in the strip $0<\mathop{\rm Re}\nolimits s<\mathop{\rm Re}\nolimits a$ (such a contour separates the two series of poles). The condition of convergence is $\mathop{\rm Re}\nolimits(c-a-b)>-1$.
{\sc Remark.} A reference to tables of integrals is not necessary here, since the identity can easily be proved by the Barnes residue method; see, for instance, \cite{Sla}, \cite{Mar}.
We consider two functions $\mathcal K_1(s)$, $\mathcal K_2(s)$ given by \begin{align*} \mathcal K_1(s)= \Gamma \begin{bmatrix} \beta+1, p+\alpha+1\\ \beta +p+1 \end{bmatrix} \cdot \Gamma \begin{bmatrix} s,\beta+p+1-s\\s+p+\alpha+1,\beta+1-s \end{bmatrix} \\ \mathcal K_2(s):= \Gamma \begin{bmatrix} 2q+\alpha+\beta+2,-\alpha-q\\q+\alpha+\beta+1 \end{bmatrix} \cdot \Gamma \begin{bmatrix} s+\alpha+q, \beta+1-s\\ s, q+\beta+2-s \end{bmatrix} \end{align*} we assume that $p-\theta$, $q-\theta\in {\mathbb Z}$. Using formula (\ref{main-integral}), we evaluate their inverse Mellin transforms, \begin{multline} K_1(x):=F\left[ \begin{matrix} p+\beta+1, -p-\alpha\\ \beta+1 \end{matrix}; x\right] H(1-x) +\\+ x^{-\beta-p-1} \Gamma\left[ \begin{matrix} \beta+1,p+\alpha+1\\-p,2p+\alpha+\beta+2 \end{matrix}\right]\cdot F\left[ \begin{matrix} \beta+p+1,p+1\\ 2p+\alpha+\beta+2 \end{matrix} ;\frac 1x\right] H(x-1) \end{multline}
\begin{multline*} K_2(x)=x^{q+\alpha} F\left[ \begin{matrix} q+\alpha+\beta+1, q+\alpha+1\\ 2q+\alpha+\beta+2 \end{matrix};x\right] H(1-x) +\\+ x^{-\beta-1} \Gamma \begin{bmatrix} 2q+\alpha+\beta+2,-\alpha-q\\q+1, \beta+1 \end{bmatrix} F\left[ \begin{matrix} q+\alpha+\beta+1, -q\\\beta+1 \end{matrix} ;\frac 1x\right] H(x-1) \end{multline*}
Now we write the identity
$$K_1*K_2(1)=\mathfrak M^{-1}[\mathcal K_1 \mathcal K_2](1)$$ and multiply both of its sides by
\begin{equation}
\Gamma
\begin{bmatrix}
2p+\alpha+\beta+2, q+1\\
\beta+1, -\alpha-q
\end{bmatrix} \label{U}
\end{equation}
We obtain the following identity
\begin{multline}
\Gamma
\begin{bmatrix}
2p+\alpha+\beta+2, \vphantom{\Bigl |}\boxed{q+1}\\
\beta+1, \vphantom{\Bigl |}\boxed{-\alpha-q}
\end{bmatrix}
\Gamma \begin{bmatrix} 2q+\alpha+\beta+2,\vphantom{\Bigl |}\boxed{-\alpha-q}\\\vphantom{\Bigl |}\boxed{q+1}, \beta+1 \end{bmatrix} \times \\ \times
\int_0^1
F\left[ \begin{matrix} p+\beta+1, -p-\alpha\\ \beta+1 \end{matrix}; x\right]
x^{\beta+1} F\left[ \begin{matrix} q+\alpha+\beta+1, -q\\\beta+1 \end{matrix} ;x\right] \,dx/x+ \label{A1}
\end{multline}
\begin{multline} +
\Gamma
\begin{bmatrix}
\vphantom{\Bigl |}\boxed{2p+\alpha+\beta+2}, q+1\\
\vphantom{\Bigl |}\boxed{\beta+1}, -\alpha-q
\end{bmatrix} \Gamma\left[ \begin{matrix} \vphantom{\Bigl |}\boxed{\beta+1},p+\alpha+1\\-p,\vphantom{\Bigl |}\boxed{2p+\alpha+\beta+2} \end{matrix}\right] \times \\ \times \int\limits_1^\infty x^{-\beta-p-1} F\left[ \begin{matrix} \beta+p+1,p+1\\ 2p+\alpha+\beta+2 \end{matrix} ;\frac 1x\right] x^{-q-\alpha} F\left[ \begin{matrix} q+\alpha+\beta+1, q+\alpha+1\\ 2q+\alpha+\beta+2 \end{matrix};\frac 1x\right] \,\frac{dx}x \label{A2} \end{multline}
$$=$$
\begin{multline}
\frac 1{2\pi i}
\Gamma
\begin{bmatrix}
2p+\alpha+\beta+2, q+1\\
\vphantom{\Bigl |}\boxed{\beta+1}, \vphantom{\Bigl |}\boxed{-\alpha-q}
\end{bmatrix} \Gamma \begin{bmatrix} \vphantom{\Bigl |}\boxed{\beta+1}, p+\alpha+1\\
\beta +p+1 \end{bmatrix} \Gamma \begin{bmatrix} 2q+\alpha+\beta+2,\vphantom{\Bigl |}\boxed{-\alpha-q}\\q+\alpha+\beta+1 \end{bmatrix} \times \\ \times \int_{-i\infty}^{+i\infty} \Gamma \begin{bmatrix} \vphantom{\Bigl |}\boxed{s},\beta+p+1-s\\s+p+\alpha+1,\vphantom{\Bigl |}\boxed{\beta+1-s} \end{bmatrix} \Gamma \begin{bmatrix} s+\alpha+q, \vphantom{\Bigl |}\boxed{\beta+1-s}\\ \vphantom{\Bigl |}\boxed{s}, q+\beta+2-s \end{bmatrix}\,ds \label{A3} \end{multline} {\sc Remark.} The sixteen boxed $\Gamma$-factors are intentionally not canceled. One of the decisive elements of the calculation is the cancellations in the row (\ref{A3}). The trick of replacing the integer parameters
$m$, $n$ by the shifted real parameters $m+\theta$, $n+\theta$ from Subsection 2.7 guarantees this cancellation.
$\square$
It remains to identify this identity with the following orthogonality identity for $\Phi_p$, $\Phi_q$.
\begin{multline}
\Gamma
\begin{bmatrix}
2p+\alpha+\beta+2\\
\beta+1
\end{bmatrix}
\Gamma \begin{bmatrix} 2q+\alpha+\beta+2\\ \beta+1 \end{bmatrix} \times \\ \times
\int_0^1
F\left[ \begin{matrix} p+\alpha+\beta+1, -p\\ \beta+1 \end{matrix} ;x\right]
F\left[ \begin{matrix} q+\alpha+\beta+1, -q\\ \beta+1 \end{matrix} ;x\right]x^{\beta}(1-x)^\alpha \,dx + \label{B1} \end{multline}
\begin{multline} + \frac{\sin(\alpha+\theta)\pi}{\sin\theta\pi} \,\cdot\, \Gamma \begin{bmatrix} 1+p+\alpha,1+q+\alpha\\-p,-q \end{bmatrix} \times\\ \times \int\limits_1^\infty F\left[ \begin{matrix} p+\alpha+\beta+1, p+\alpha+1\\ 2p+\alpha+\beta+2 \end{matrix};\frac 1x\right] F\left[ \begin{matrix} q+\alpha+\beta+1, q+\alpha+1\\ 2q+\alpha+\beta+2 \end{matrix};\frac 1x\right] \times\\ \times x^{-2\alpha-\beta-p-q-2}(x-1)^\alpha \,{dx}= \label{B2} \end{multline}
\begin{equation} =\frac{\delta_{p-q,0}} {2p+\alpha+\beta+1} \Gamma \begin{bmatrix} 2p+\alpha+\beta+2,2p+\alpha+\beta+2,1+p+\alpha,p+1\\ p+\beta+1,p+\alpha+\beta+1 \end{bmatrix} \label{B3} \end{equation} where $\delta_{p-q,0}$ is the Kronecker symbol.
We identify (\ref{A1})--(\ref{A3}) with (\ref{B1})--(\ref{B3}) line by line.
1. {\it The summand (\ref{A1}) equals the summand (\ref{B1})}. We transform the first $F$-factor of the integrand by (\ref{bolza}).
2. {\it The summand (\ref{A2}) equals the summand (\ref{B2})}. First, we transform the first $F$-factor of the integrand by (\ref{bolza}). Second, we apply the reflection formula $\Gamma(z)\Gamma(1-z)=\pi/\sin(\pi z)$ to the gamma-product in (\ref{A2}): $$ \Gamma
\begin{bmatrix}
q+1,p+\alpha+1\\
-\alpha-q,-p \end{bmatrix}= \frac{\Gamma(1+\alpha+q)\Gamma(1+\alpha+p)\sin(1+\alpha+q)\pi} {\Gamma(-p)\Gamma(-q)\sin (1+q)\pi} $$ Next, we use $n:=q-\theta\in{\mathbb Z}$, $$\frac{ \sin(1+\alpha+q)\pi} {\sin(1+q)\pi}= \frac{ \sin(1+\alpha+n+\theta)\pi} {\sin(1+n+\theta)\pi} =\frac{(-1)^{n+1} \sin(\alpha+\theta)\pi} {(-1)^{n+1}\sin(\theta)\pi} $$
{\sc Remark.} After this, the $\Gamma$-factors in (\ref{A1}), (\ref{A2}) are transformed to the form $$ \mathop{\rm const}\nolimits\cdot u(p)u(q), \qquad \mathop{\rm const}\nolimits\cdot v(p)v(q) $$ with constants that do not depend on
$p$, $q$. This is necessary in order to interpret the identity (\ref{A1})--(\ref{A3}) as an orthogonality relation for functions $\Phi_p$, $\Phi_q$ of a single type.
Certainly, this was achieved by multiplying our identity by the $\Gamma$-factor (\ref{U}). But, a priori, the possibility of such a multiplication is not obvious; as far as I understand, it is not predictable beforehand.
$\square$
3. {\it Right-hand sides, i.e., (\ref{A3}) and (\ref{B3}).}
We apply the Barnes-type integral $$ \frac 1{2\pi i} \int_{-i\infty}^{i\infty} \Gamma \begin{bmatrix} a+s,b-s\\ c+s,d-s \end{bmatrix}\,ds= \Gamma \begin{bmatrix} a+b, c+d-a-b-1\\ c+d-1, c-a, d-b \end{bmatrix} $$ see \cite{PBM3}, 2.2.1.3\footnote{This identity is a particular case of (\ref{main-integral}): we substitute $x=1$ in (\ref{main-integral}) and apply the Gauss summation formula for $F[a,b;c;1]$.}, and obtain \begin{multline} \frac 1{2\pi i} \int_{-i\infty}^{i\infty} \Gamma \begin{bmatrix} \beta+1+p-s,\alpha+q+s\\ 1+p+\alpha+s,q+\beta+2-s \end{bmatrix}\,ds =\\= \Gamma\begin{bmatrix} \alpha+\beta+p+q+1,\,\, 1\\ \alpha+\beta+p+q+2, q-p+1, p-q+1 \end{bmatrix} =\\= \frac 1 {(\alpha+\beta+p+q+1)\Gamma(q-p+1)\Gamma(p-q+1)} =\\= \frac 1 {(\alpha+\beta+p+q+1)} \frac{\sin(q-p)\pi}{\pi(q-p)} \label{delta} \end{multline} Since $q-p\in {\mathbb Z}$, the latter expression is zero if $p\ne q$.
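The Barnes-type integral just used can be spot-checked numerically in the special case $a=b=1/2$, $c=d=3/2$ (an arbitrary choice for which the moduli become elementary): on the contour $s=it$ one has $|\Gamma(\tfrac12+it)|^2=\pi/\cosh\pi t$ and $|\Gamma(\tfrac32+it)|^2=(\tfrac14+t^2)\pi/\cosh\pi t$, and both sides equal $1$. Truncating the contour introduces an error of order $1/(\pi T)$, hence the loose tolerance; this check is not part of the argument.

```python
import math

def integrand(t):
    # |Gamma(1/2+it)|^2 / |Gamma(3/2+it)|^2, written through the elementary moduli
    num = math.pi / math.cosh(math.pi * t)
    den = (0.25 + t * t) * math.pi / math.cosh(math.pi * t)
    return num / den

T, n = 200.0, 400000          # |t| <= 200 keeps cosh within double range
h = 2.0 * T / n
lhs = sum(integrand(-T + (k + 0.5) * h) for k in range(n)) * h / (2.0 * math.pi)
# right-hand side: Gamma[a+b, c+d-a-b-1 ; c+d-1, c-a, d-b]
#                = Gamma(1)Gamma(1) / (Gamma(2)Gamma(1)Gamma(1)) = 1
rhs = 1.0
```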
{\sc Remark.} This step, which finishes the calculation, can look mysterious. But it is almost predictable from the point of view proposed in Subsection 2.7. Otherwise, how could the Jacobi polynomials manage to be orthogonal?
$\square$
{\bf \refstepcounter{punct}{\arabic{sec}.\arabic{punct}. } Convergence of the integrals. \label{l-convergence}}
{\sc Lemma.} {\it Under our conditions (\ref{conditions-1}), (\ref{conditions-2}) the integral \begin{equation} \int_0^1 \Phi_p(x)^2x^\beta(1-x)^\alpha dx + \frac{\sin(\alpha+\theta)\pi}{\sin (\theta\pi)} \int_1^\infty \Phi_p(x)^2 x^\beta (x-1)^\alpha\, dx \label{integral-for-convergence} \end{equation} is absolutely convergent.}
Since
$$|\Phi_p(x)\Phi_q(x)|\leqslant \frac12( |\Phi_p(x)|^2+|\Phi_q(x)|^2) $$ this lemma also implies the absolute convergence of \begin{equation} \int\limits_0^1 \Phi_p(x)\Phi_q(x)x^\beta(1-x)^\alpha dx + \frac{\sin(\alpha+\theta)\pi}{\sin (\theta\pi)} \int\limits_1^\infty \Phi_p(x)\Phi_q(x) x^\beta (x-1)^\alpha\, dx \label{integral-for-convergence-2} \end{equation}
{\sc Proof.} To follow the asymptotics, we use one of the Kummer relations (see \cite{HTF1}, 2.10(1)), \begin{multline} F\left[ \begin{matrix} a,b\\c \end{matrix}; z\right]= \frac{\Gamma(c)\Gamma(c-a-b)}
{\Gamma(c-a)\Gamma(c-b)} F\left[ \begin{matrix} a,b\\a+b-c+1 \end{matrix}; 1-z\right] +\\+ \frac{\Gamma(c)\Gamma(a+b-c)}
{\Gamma(a)\Gamma(b)}
(1-z)^{c-a-b}
F\left[ \begin{matrix} c-a,c-b\\c-a-b+1 \end{matrix}; 1-z\right] \label{kummer-relation} \end{multline}
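The connection formula (\ref{kummer-relation}) is easy to confirm numerically with a truncated Gauss series, since for $z\in(0,1)$ both $z$ and $1-z$ lie inside the unit disk. The parameter values below are arbitrary generic choices; the check is not part of the text.

```python
import math

def hyp2f1(a, b, c, z, nterms=500):
    # truncated Gauss series for 2F1; adequate for |z| < 1 and generic parameters
    term, total = 1.0, 1.0
    for n in range(nterms):
        term *= (a + n) * (b + n) / ((c + n) * (n + 1.0)) * z
        total += term
    return total

a, b, c, z = 0.3, 0.7, 1.9, 0.4
g = math.gamma
lhs = hyp2f1(a, b, c, z)
rhs = (g(c) * g(c - a - b) / (g(c - a) * g(c - b))
       * hyp2f1(a, b, a + b - c + 1, 1.0 - z)
       + g(c) * g(a + b - c) / (g(a) * g(b))
       * (1.0 - z) ** (c - a - b)
       * hyp2f1(c - a, c - b, c - a - b + 1, 1.0 - z))
```

Both sides agree to near machine precision for such parameters.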
The function $\Phi_p$ is continuous at $x=0$, and hence the condition of the convergence of the integral is $\mathop{\rm Re}\nolimits\beta>-1$.
The formula (\ref{kummer-relation}) gives the following asymptotics of $\Phi_p$ as $x\to 1-0$: \begin{equation} C_1+ C_2(1-x)^{-\alpha} \label{as} \end{equation} For $\mathop{\rm Re}\nolimits\alpha>0$ we have $\Phi_p\sim (1-x)^{-\alpha}$, and the condition of convergence of (\ref{integral-for-convergence}) is $\mathop{\rm Re}\nolimits\alpha<1$. For $\mathop{\rm Re}\nolimits\alpha<0$, the function $\Phi_p$ has a finite limit at $1-0$, and the condition of convergence of (\ref{integral-for-convergence}) is $\mathop{\rm Re}\nolimits\alpha>-1$.
Considering the right limit at 1, we obtain the same restrictions for $\alpha$.
Obviously, $$ \Phi_p(x)\sim x^{-\alpha-\beta-p-1},\qquad x\to\infty $$ Thus the condition of the convergence is $\mathop{\rm Re}\nolimits(2p+\alpha+\beta+1)>0$.
We also must avoid a pole in (\ref{basis}), and this gives $\alpha+p+1\ne 0$.
$\square$
Denote $m=p-\theta$, $n=q-\theta$.
{\sc Lemma.} {\it For fixed $m$, $n$, the integral (\ref{integral-for-convergence-2}) depends holomorphically on $\alpha$, $\beta$, $\theta$ in the allowed domain of parameters. }
{\sc Proof.} For each given point $(\alpha_0, \beta_0,\theta_0)$, the convergence of our integral is uniform in a small neighborhood of $(\alpha_0, \beta_0,\theta_0)$
(since our asymptotics are uniform). It remains to refer to Morera's theorem (if each integral over a closed contour is $0$, then the function is holomorphic).
$\square$
{\bf \refstepcounter{punct}{\arabic{sec}.\arabic{punct}. } Restrictions necessary for our calculation. \label{l-restrictions}} First, we used the Mellin transform, and hence our functions $K_1$, $K_2$ must be locally integrable. The only point of discontinuity is $x=1$. We have \begin{align*} K_1(x)&\sim A_1+A_2^\pm(1-x)^{-\alpha},\qquad
&x\to 1\pm 0;\\ K_2(x)&\sim B_1+B_2^\pm(x-1)^{\alpha},\qquad
&x\to 1\pm 0; \end{align*}
This implies $|\mathop{\rm Re}\nolimits\alpha|<1$.
Second, we use the convolution theorem for the Mellin transform.
The Mellin transform (\ref{mellin}) of $K_1$ absolutely converges in the strip $$0<\mathop{\rm Re}\nolimits s < \beta+p+1$$ The Mellin transform of $K_2$ absolutely converges in the strip $$-\alpha-q< \mathop{\rm Re}\nolimits s <\beta+1$$ We can apply the convolution theorem (\ref{convolution-theorem}) if the following conditions are satisfied \begin{align*} &\left. \begin{aligned} 0< \beta+p+1 \\ 0< \beta+\alpha+q+1 \end{aligned} \right\}& \qquad\text{ --- nonemptiness of the strips}\\ &\left. \begin{aligned} 0< \beta+1 \\ 0<p+q+\alpha+ \beta+1 \end{aligned} \right\}& \qquad \begin{matrix}\text{--- nonemptiness of the intersection}\\
\text{of the strips} \end{matrix}
\end{align*}
This domain is nonempty, but it is smaller than the domain
of convergence of (\ref{integral-for-convergence}).
But the orthogonality identities
(\ref{orthogonality}), (\ref{norms})
have holomorphic left-hand sides and right-hand
sides. Hence they are valid in the whole
domain of convergence.
{\bf \refstepcounter{punct}{\arabic{sec}.\arabic{punct}. } Restrictions for $\theta$.}
These restrictions (\ref{restriction-theta}) were not used in the proof. In fact, $\theta$ is defined up to a shift $\theta\mapsto\theta+1$. This shift preserves the orthogonal system $\Phi_p$ but changes the enumeration of the basis elements.
For this reason, I will explain below how the functions $\mathcal K_1$, $\mathcal K_2$ were written.
{\bf \refstepcounter{punct}{\arabic{sec}.\arabic{punct}. } Comments. The origin of the calculations.
\label{l-comments}} We now explain the origin of $K_1$, $K_2$.
The orthogonality relations for the Jacobi polynomials $P_n^{\alpha,\beta}$ are well known but not self-evident. Let us try to prove them using the technique of Barnes integrals, see \cite{Mar}.
Our problem is the evaluation of the integral \begin{equation} \int_0^1 \,\,{}_2F_1\left[ \begin{matrix} -n,n+\alpha+\beta+1\\ \beta+1 \end{matrix}; x\right] \,\,{}_2F_1\left[ \begin{matrix} -m,m+\alpha+\beta+1\\ \beta+1 \end{matrix}; x\right] x^\beta(1-x)^\alpha dx \label{jacobi-integral} \end{equation} Denote \begin{align} L_1(x)= (1-x)^\alpha \,\,{}_2F_1\left[ \begin{matrix} -m,m+\alpha+\beta+1\\ \beta+1 \end{matrix}; x\right] H(1-x) \nonumber \\ L_2(x)= x^{-\beta-1} \,\,{}_2F_1\left[ \begin{matrix} -n,n+\alpha+\beta+1\\ \beta+1 \end{matrix}; \frac 1x\right]H(x-1)+r(x) H(x-1) \label{L2} \end{align} where $r(x)$ is an arbitrary function to be chosen later.
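For integer $m$, $n$ the hypergeometric factors $\,{}_2F_1(-n,n+\alpha+\beta+1;\beta+1;x)$ are polynomials, so the orthogonality integral reduces to a finite combination of beta values and can be confirmed directly. A short sketch (the values of $\alpha$, $\beta$ are arbitrary illustrative choices, and the check is of course not a substitute for the proof):

```python
import math

def f_coeffs(n, alpha, beta):
    # coefficients of the polynomial 2F1(-n, n+alpha+beta+1; beta+1; x)
    c = [1.0]
    for k in range(n):
        c.append(c[-1] * (k - n) * (n + alpha + beta + 1 + k)
                 / ((beta + 1 + k) * (k + 1.0)))
    return c

def beta_fn(a, b):
    return math.exp(math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b))

def overlap(m, n, alpha, beta):
    # int_0^1 F_m(x) F_n(x) x^beta (1-x)^alpha dx, term by term via beta values
    cm, cn = f_coeffs(m, alpha, beta), f_coeffs(n, alpha, beta)
    return sum(ci * cj * beta_fn(beta + i + j + 1, alpha + 1)
               for i, ci in enumerate(cm) for j, cj in enumerate(cn))
```

For $m\ne n$ the value vanishes up to rounding; for $m=n$ it is positive.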
Our integral (\ref{jacobi-integral}) is the convolution $L_1*L_2(x)$ at the point $x=1$. We intend to evaluate it using the Mellin transform.
The Mellin transform of $L_1$ is (see \cite{PBM3}, 8.4.49.1) $$ \mathfrak M L_1(s)= \Gamma \begin{bmatrix} \beta+1, \alpha+m+1\\ \beta+m+1 \end{bmatrix} \cdot \Gamma \begin{bmatrix} s, \beta+m+1-s\\ \alpha+m+1+s, \beta+1-s \end{bmatrix} $$ Then we find a function of the form (\ref{L2}) in the table of inverse Mellin transforms (see \cite{PBM3}, 8.4.49.1). \footnote{ In fact, tables of integrals are not necessary here, since we must write a Barnes integral defining a given hypergeometric function on $[0,1]$, and it is more-or-less clear how to do this.}
We can assume $$ \mathfrak M L_2(s)= \Gamma \begin{bmatrix} 2n+\alpha+\beta+2,-\alpha-n\\n+\alpha+\beta+1 \end{bmatrix} \cdot \Gamma \begin{bmatrix}
\alpha+n+s,\beta+1-s\\s, n+\beta+2-s \end{bmatrix} $$ and after this the desired calculation can be performed.
After this we change $m\to m+\theta$, $n\to n+\theta$.
{\bf \refstepcounter{punct}{\arabic{sec}.\arabic{punct}. } Comments.
Evaluation of the summands in (\ref{B1}), (\ref{B2}). \label{l-summands}} Our orthogonality relations contain a sum of two integrals over different intervals. It is interesting to evaluate each summand separately.
Let $p$, $q\in\C$.
Let us evaluate $$ X:=\int_0^1 \Phi_p(x)\Phi_q(x) x^\beta(1-x)^\alpha dx ,\qquad Y:=\int_1^\infty \Phi_p(x)\Phi_q(x) x^\beta(x-1)^\alpha dx $$ Denote \begin{align*} a(p,q)&:=\frac{1}{p+q+\alpha+\beta+1} \Gamma\bigl[2p+\alpha+\beta+2,2q+\alpha+\beta+2\bigr] \\ b(p,q)&:=\Gamma\begin{bmatrix} q+1,p+\alpha+1\\ p+\beta+1,q+\alpha+\beta+1 \end{bmatrix} \end{align*} We write the equation (\ref{A1})--(\ref{A3}) and the same equation with $p$, $q$ transposed \begin{align*} X+\frac{\sin(\alpha+q)\pi}{\sin q\pi} Y =a(p,q)b(p,q) \frac{\sin(q-p)\pi}{(q-p)\pi} \\
X+\frac{\sin(\alpha+p)\pi}{\sin p\pi} Y =a(p,q)b(q,p) \frac{\sin(q-p)\pi}{(q-p)\pi} \end{align*}
It is a linear system of equations for $X$ and $Y$. Its determinant is $$ \frac{\sin(\alpha+p)\pi}{\sin p\pi}- \frac{\sin(\alpha+q)\pi}{\sin q\pi} = \frac{\sin\alpha\pi \sin(q-p)\pi} {\sin p\pi \sin q\pi} $$ Hence \begin{align} Y&=a(p,q)\frac {\sin p\pi \sin q\pi}
{\pi(q-p)\sin\alpha\pi}\Bigl[b(q,p)-b(p,q)\Bigr]
\label{Y-ravno}
\\ X&=a(p,q)\frac {\sin p\pi \sin q\pi}
{\pi(q-p)\sin\alpha\pi}
\Bigl[\frac{\sin(\alpha+p)\pi}{\sin p\pi}b(p,q)-
\frac{\sin(\alpha+q)\pi}{\sin q\pi}b(q,p)\Bigr]= \nonumber \\ &=\frac {\pi a(p,q)}
{(q-p)\sin\alpha\pi} \Biggl[\frac 1{\Gamma
\bigl[p+\beta+1,q+\alpha+\beta+1,-q,-p-\alpha\bigr]}
-
\nonumber\\ &\qquad\qquad\qquad\qquad- \frac 1{\Gamma
\bigl[q+\beta+1,p+\alpha+\beta+1,-p,-q-\alpha\bigr]}
\Biggr]
\label{X-ravno} \end{align}
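The evaluation of the determinant above rests on the elementary identity $\sin(\alpha+p)\pi\,\sin q\pi-\sin(\alpha+q)\pi\,\sin p\pi=\sin\alpha\pi\,\sin(q-p)\pi$, a two-line product-to-sum computation; a throwaway numerical check (parameter values arbitrary):

```python
import math

def det_lhs(alpha, p, q):
    # numerator of the determinant: sin((alpha+p)pi) sin(q pi) - sin((alpha+q)pi) sin(p pi)
    return (math.sin((alpha + p) * math.pi) * math.sin(q * math.pi)
            - math.sin((alpha + q) * math.pi) * math.sin(p * math.pi))

def det_rhs(alpha, p, q):
    return math.sin(alpha * math.pi) * math.sin((q - p) * math.pi)
```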
{\bf \large 3. The boundary problem}
\addtocounter{sec}{1} \setcounter{equation}{0} \setcounter{punct}{0}
In this section $\alpha\ne 0$.
{\bf\refstepcounter{punct}{\arabic{sec}.\arabic{punct}. } Symmetry of the boundary problem. \label{l-symmetry}} We consider the hypergeometric differential operator $D$ given by (\ref{D}) and the boundary problem for $D$ defined in Subsection \ref{sbp}. We intend to prove the identity \begin{equation} \{Df,g\}=\{f,Dg\},\qquad f,g\in\mathcal E \label{yyyyyy} \end{equation}
Let $$H=a(x)\frac {d^2}{dx^2}+b(x)\frac d{dx}$$ be a differential operator on $[a,b]$ formally symmetric with respect to a weight $\mu(x)$, i.e., for smooth $f$, $g$ that vanish near the ends of the interval, $$\int_a^b Hf(x)\cdot g(x)\,dx=\int_a^b f(x)\cdot H g(x)\,dx$$ Equivalently, $(a\mu)'=b\mu$. Then for general $f$, $g$, we have \begin{align} \int_a^b Hf(x)\cdot g(x)\,dx&-\int_a^b f(x)\cdot H g(x)\,dx= \nonumber \\ &=
\Biggl\{\bigl\{f'(x)g(x)-g'(x)f(x)\bigr\}a(x)\mu(x)\Biggr\}\Biggr|_{a}^b \label{correction} \end{align} We apply this identity to the operator $D$ and to the segment $[a,b]=[0,1-\varepsilon]$. Suppose that on some segment $[1-h,1]$ we have $$ f(x)=u(x)+(1-x)^{-\alpha}v(x),\qquad g(x)=\widetilde u(x)+(1-x)^{-\alpha}\widetilde v(x) $$ with smooth $u(x)$, $v(x)$, $\widetilde u(x)$, $\widetilde v(x)$. Then the correcting term (\ref{correction}) is \begin{multline} \Biggl\{ \det\begin{pmatrix} u'(x)+(1-x)^{-\alpha}v'(x)-\alpha(1-x)^{-\alpha-1}v(x)&
u(x)+(1-x)^{-\alpha} v(x)\\ \widetilde u'(x)+(1-x)^{-\alpha}\widetilde v'(x)-\alpha(1-x)^{-\alpha-1}\widetilde v(x)&
\widetilde u(x)+(1-x)^{-\alpha}\widetilde v(x) \end{pmatrix} \times \\
\times x^{\beta+1}(1-x)^{\alpha+1}\Biggr\}\Biggr|_{x=1-\varepsilon} \end{multline} The last factor gives the power $\varepsilon^{\alpha+1}$; recall that $-1<\alpha<1$. The summands of the determinant have powers $$ 1,\quad \varepsilon^{-\alpha},\quad \varepsilon^{-2\alpha}, \quad \varepsilon^{-\alpha-1},\quad \varepsilon^{-2\alpha-1} $$ But the term with $\varepsilon^{-2\alpha-1}$ in the determinant
is $$ \det\begin{pmatrix} -\alpha v(x)&
v(x)\\ -\alpha \widetilde v(x)&
\widetilde v(x) \end{pmatrix}=0 $$
Hence the leading term of the determinant has the order
$\varepsilon^{-\alpha-1}$ and only this term
gives a contribution to the limit as $\varepsilon\to+0$.
Finally, \begin{equation}
\lim_{\varepsilon\to+0}
\int_{0}^{1-\varepsilon} \bigl(D f(x) g(x)-f(x) Dg(x)\bigr)\,
x^\beta(1-x)^\alpha\,dx= u(1)\widetilde v(1)- v(1)\widetilde u(1) \label{popravka-sleva} \end{equation}
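The formal symmetry condition $(a\mu)'=b\mu$ from the beginning of this subsection can be verified directly for the hypergeometric operator. The sketch below assumes $D=x(1-x)\frac{d^2}{dx^2}+(\beta+1-(\alpha+\beta+2)x)\frac{d}{dx}$ — the form consistent with the eigenvalue $\lambda=-p(p+\alpha+\beta+1)$ — with weight $\mu(x)=x^\beta(1-x)^\alpha$; the sample point and parameter values are arbitrary.

```python
alpha, beta, x, h = 0.3, 0.4, 0.37, 1e-6

def a_mu(t):
    # a(t) mu(t) = t(1-t) * t**beta * (1-t)**alpha = t**(beta+1) * (1-t)**(alpha+1)
    return t ** (beta + 1) * (1 - t) ** (alpha + 1)

# (a mu)'(x) by a central difference, against b(x) mu(x)
deriv = (a_mu(x + h) - a_mu(x - h)) / (2 * h)
b_mu = (beta + 1 - (alpha + beta + 2) * x) * x ** beta * (1 - x) ** alpha
```

Analytically the two expressions coincide identically, so the central difference matches $b\mu$ up to its own $O(h^2)$ error.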
For $x>1$, we have $$ f(x)=\frac{\sin\theta\pi}{\sin(\alpha+\theta)\pi} u(x)+(x-1)^{-\alpha}v(x),\qquad g(x)=\frac{\sin\theta\pi}{\sin(\alpha+\theta)\pi} \widetilde u(x)+(x-1)^{-\alpha}\widetilde v(x) $$ In a similar way, we obtain $$
\lim_{\varepsilon\to+0}
\frac{\sin(\alpha+\theta)\pi}{\sin\theta\pi} \int_{1+\varepsilon}^\infty \bigl(D f(x) g(x)-f(x) Dg(x)\bigr)\, x^\beta(x-1)^\alpha\,dx =-u(1)\widetilde v(1)+ v(1)\widetilde u(1) $$ This finishes the proof of the identity (\ref{yyyyyy}).
{\bf \refstepcounter{punct}{\arabic{sec}.\arabic{punct}. } Verification of the boundary conditions for $\Phi_p$.\label{l-phi-p}} Let us show that $\Phi_p(x)$ satisfies the boundary conditions at $x=1$. This is done by a direct calculation; below we present its details.
We need expressions for $\Phi_p$ of the form \begin{equation} \Phi_p(x)= \left\{ \begin{aligned} A_1(p,x)+B_1(p,x)(1-x)^{-\alpha},\qquad x<1 \\ A_2(p,x)+B_2(p,x)(x-1)^{-\alpha}, \qquad x>1 \end{aligned} \right. \label{expansion-phi-p} \end{equation}
We intend to expand $\Phi_p$ in power series at $x=1$, on the semi-segments $(1-\varepsilon,1]$, $[1,1+\varepsilon)$.
We use the formula (\ref{kummer-relation}) for the left semi-segment and obtain \begin{multline} \Phi_p(x)=\Gamma \begin{bmatrix} 2p+\alpha+\beta+2\\ \beta+1 \end{bmatrix} \Biggl\{ \Gamma \begin{bmatrix} \beta+1,-\alpha\\ p+\beta+1,-p-\alpha \end{bmatrix} F\left[ \begin{matrix}-p,p+\alpha+\beta+1\\ \alpha+1 \end{matrix};\, 1-x\right] +\\+ \Gamma \begin{bmatrix} \beta+1,\alpha\\ -p,p+\alpha+\beta+1 \end{bmatrix} F\left[ \begin{matrix} p+\beta+1,-p-\alpha\\
1-\alpha
\end{matrix};\,
1-x \right](1-x)^{-\alpha}\Biggr\} \label{phi-p-left} \end{multline} for $x<1$.
Next we use the identity \begin{multline} F \left[ \begin{matrix} a,b\\c \end{matrix}; \frac 1x \right] = \Gamma \begin{bmatrix} c,c-a-b\\ c-a,c-b \end{bmatrix}F \left[ \begin{matrix} a,a+1-c\\ a+b+1-c \end{matrix}; 1-x \right] x^a +\\+ \Gamma \begin{bmatrix} c,a+b-c\\ a,b \end{bmatrix} F \left[ \begin{matrix} c-b,1-b\\ c+1-a-b \end{matrix}; 1-x \right] x^a(x-1)^{c-a-b} \label{kummer-2} \end{multline}(this formula is a modified variant of \cite{HTF1}, 1.10(4)). We obtain \begin{multline} \Phi_p(x)= \Gamma \begin{bmatrix} p+\alpha+1,-p \end{bmatrix} \times \\ \times \Biggl\{ \Gamma \begin{bmatrix} 2p+\alpha+\beta+2,-\alpha\\p+1,p+\beta+1 \end{bmatrix} F \left[ \begin{matrix} p+\alpha+\beta+1,-p\\ \alpha+1 \end{matrix}; 1-x \right] +\\+ \Gamma \begin{bmatrix} 2p+\alpha+\beta+2,\alpha\\ p+\alpha+\beta+1,p+\alpha+1 \end{bmatrix} F \left[ \begin{matrix} p+\beta+1,-p-\alpha\\ 1-\alpha \end{matrix}; 1-x \right] (x-1)^{-\alpha} \Biggr\} \label{phi-p-right} \end{multline} for $x>1$.
The expressions (\ref{phi-p-left}),
(\ref{phi-p-right}) are the desired
expansions (\ref{expansion-phi-p}).
We observe that
$$
B_1(p,x)=B_2(p,x);
\qquad A_1(p,x)/A_2(p,x)=\frac{\sin(p+\alpha)\pi}{\sin p\pi}
$$
We have
\begin{equation} \frac{\sin(p+\alpha)\pi}{\sin p\pi}=\frac{\sin(\theta+\alpha)\pi}{\sin \theta\pi} \label{sin-sin} \end{equation} and this implies our boundary condition.
{\sc Remark} (it will be important below in Subsection \ref{l-adjoint}). The property (\ref{sin-sin}) is valid iff $p-\theta\in {\mathbb Z}$. Indeed, the difference between the left-hand side and the right-hand side is $$ \frac{\sin\alpha\pi\sin(\theta-p)\pi}
{\sin p\pi \sin\theta\pi} $$
{\bf \refstepcounter{punct}{\arabic{sec}.\arabic{punct}. } Another proof of the orthogonality relations.\label{l-another}}
For $p\ne q$, the orthogonality follows from the symmetry condition (\ref{yyyyyy}).
Let us evaluate $$X:=\int_0^1 \Phi_p(x)\Phi_q(x)x^\beta (1-x)^\alpha dx$$ We preserve the notation (\ref{expansion-phi-p}). By formula (\ref{popravka-sleva}), \begin{multline*} \lim_{\varepsilon\to +0} \Biggl\{\int_0^{1-\varepsilon} D\Phi_p(x) \cdot\Phi_q(x) x^\beta(1-x)^\alpha dx - \int_0^{1-\varepsilon} \Phi_p(x) \cdot D\Phi_q(x) x^\beta(1-x)^\alpha dx\Biggr\} =\\= A_1(p,1)B_1(q,1)-A_1(q,1)B_1(p,1) \end{multline*} The constants $A_1(p,1)$, etc., are the $\Gamma$-coefficients in (\ref{phi-p-left}) and (\ref{phi-p-right}); thus the right-hand side is known.
Since $\Phi_p$ are eigenfunctions (see (\ref{eigenfunctions})), the left-hand side is $$ \bigl[-p(p+\alpha+\beta+1)+ q(q+\alpha+\beta+1)\bigr]\cdot X= (q-p)(q+p+\alpha+\beta+1)X $$ After simple cancellations we obtain the expression (\ref{X-ravno}).
In the same way, we obtain the expression (\ref{Y-ravno}) for $\int_1^\infty$.
In this way, we can verify our orthogonality relations by a direct calculation. But this again is lengthy.
{\bf \large 4. The spectral measure}
\addtocounter{sec}{1} \setcounter{equation}{0} \setcounter{punct}{0}
We now evaluate the spectral measure for the operator $D$ in the Hilbert space $\mathcal H$ using the Weyl--Titchmarsh machinery, see \cite{DS}.
To avoid logarithmic asymptotics, we assume $\alpha\ne 0$, $\beta\ne 0$.
{\bf \refstepcounter{punct}{\arabic{sec}.\arabic{punct}. } Eigenfunctions of the adjoint operator. } We now discuss the adjoint operator $D^*$ of $D$.
Denote by $\mathop{\rm Dom}\nolimits(A)$ the domain of definition of a linear operator $A$. Recall that $f\in\mathcal H$ is contained in $\mathop{\rm Dom}\nolimits(D^*)$ if there exists a function $h\in\mathcal H$ such that for each $g\in\mathop{\rm Dom}\nolimits(D)$ we have $$ \langle f, Dg\rangle = \langle h, g\rangle $$ In this case, we set $h=D^*f$.
Since $D$ is symmetric, we have $$\mathop{\rm Dom}\nolimits(D^*)\supset \mathop{\rm Dom}\nolimits(D)=\mathcal E$$
A complete description of $\mathop{\rm Dom}\nolimits(D^*)$ is not important for us; we only need a description of the eigenfunctions of $D^*$.
{\sc Lemma.} {\it Let $\Xi$ be an eigenfunction of $D^*$. Then $\Xi$ satisfies the boundary conditions a), b) at $x=0$ and $x=1$ from Subsection \ref{sbp}.}
\
{\sc Proof.}
{\it The condition at $0$.}
Let $D^*\Xi=\lambda\Xi$ and represent $\lambda$ as \begin{equation} \lambda=-p(p+\alpha+\beta+1) \label{lambda-cherez-p} \end{equation} There are two solutions of the hypergeometric equation $Df=\lambda f$ near $x=0$; if $\beta$ is not a non-negative integer, then they are given by \begin{align} S_1&=\Phi_p(x)=F \left[ \begin{matrix}-p,p+\alpha+\beta+1\\ \beta+1 \end{matrix};x\right] \label{nol-1}\\ S_2&=x^{-\beta} F\left[ \begin{matrix} -\beta-p,\alpha+p+1\\ 1-\beta \end{matrix};x\right] \label{nol-2} \end{align}
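These are the standard local solutions of the Gauss hypergeometric equation. If one takes $D$ in the explicit form $Df = x(1-x)f'' + (\beta+1-(\alpha+\beta+2)x)f'$ (this explicit form is our assumption here, chosen to be consistent with the eigenvalue $\lambda=-p(p+\alpha+\beta+1)$), the relation $DS_1=\lambda S_1$ can be checked numerically; a pure-Python sketch using the series definition of $F$:

```python
def F(a, b, c, x, terms=200):
    # Gauss hypergeometric series: sum_k (a)_k (b)_k / ((c)_k k!) x^k, for |x| < 1
    s, t = 1.0, 1.0
    for k in range(terms):
        t *= (a + k) * (b + k) / ((c + k) * (k + 1)) * x
        s += t
    return s

# arbitrary test parameters (alpha, beta nonzero, as assumed in the text)
alpha, beta, p = 0.3, 0.5, 0.7
lam = -p * (p + alpha + beta + 1)
S1 = lambda x: F(-p, p + alpha + beta + 1, beta + 1, x)

# D S1 = x(1-x) S1'' + (beta+1 - (alpha+beta+2) x) S1'  (assumed form of D),
# with derivatives approximated by central differences
x, h = 0.4, 1e-4
d1 = (S1(x + h) - S1(x - h)) / (2 * h)
d2 = (S1(x + h) - 2 * S1(x) + S1(x - h)) / h**2
DS1 = x * (1 - x) * d2 + (beta + 1 - (alpha + beta + 2) * x) * d1
assert abs(DS1 - lam * S1(x)) < 1e-5
print("eigenvalue relation verified at x = 0.4")
```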
If $\beta\geqslant 1$, the second solution is not in $\mathcal H$, and the statement is obvious.\footnote{For integer $\beta>0$, this is also valid.}
Let $-1<\beta<1$.
Let $f\in \mathcal E$, i.e. $f$ is smooth near $0$.
Expand our eigenfunction $\Xi$ as
$$
\Xi=u(x)+x^{-\beta} v(x), \qquad u(x),\,v(x)\in C^\infty
$$
(in fact, $u(x)$ and $v(x)$ are the hypergeometric functions defined by (\ref{nol-1}), (\ref{nol-2}) up to scalar factors).
If $\Xi$ is in $\mathop{\rm Dom}\nolimits(D^*)$, then
\begin{equation}
\langle Df,\Xi\rangle-\langle f, D^*\Xi\rangle=0
\label{xi-in-domain}
\end{equation} Repeating the calculation
of Subsection \ref{l-symmetry},
we obtain that this difference is
$$f(0)v(0)$$
Since $f$ is arbitrary, $v(0)=0$.
But $v(x)=\mathop{\rm const}\nolimits\cdot F[ -\beta-p,\alpha+p+1; 1-\beta;x]$; hence $\mathop{\rm const}\nolimits=0$, i.e., $v\equiv 0$.
{\it The condition at $x=1$.} The proof is similar.
A priori, we know that
$$
\Xi(x)=
\left\{
\begin{aligned}
u_-(x)+v_-(x)(1-x)^{-\alpha}, \qquad x<1\\
u_+(x)+v_+(x)(x-1)^{-\alpha}, \qquad x>1
\end{aligned}
\right.
$$
In fact $u_\pm$ and $v_\pm$ are the hypergeometric functions
in the right-hand sides of (\ref{phi-p-left}), (\ref{phi-p-right})
up to constant factors.
Let $f\in\mathcal E$, i.e., $$ f(x)= \left\{ \begin{aligned}
a(x)+b(x)(1-x)^{-\alpha},\qquad x<1
\\
\frac{\sin \theta\pi} {\sin (\alpha+\theta)\pi}
a(x)+b(x)(x-1)^\alpha,\qquad x>1
\end{aligned}
\right.
$$
where $a(x)$, $b(x)$ are smooth near $x=1$.
If $\Xi\in\mathop{\rm Dom}\nolimits(D^*)$, then the condition
(\ref{xi-in-domain})
is satisfied.
Repeating the considerations of Subsection \ref{l-symmetry}, we obtain that the left-hand side of (\ref{xi-in-domain}) equals
$$
\bigl(
a(1)v_-(1)-b(1)u_-(1)
\bigr)-
\frac{\sin(\alpha+\theta)\pi}
{\sin \theta\pi}
\bigl(\frac {\sin \theta\pi}{\sin(\alpha+\theta)\pi}
a(1)v_+(1)-b(1)u_+(1)\bigr) $$
It vanishes for all $a(1)$, $b(1)$, and hence
$$
v_-(1)=v_+(1),\qquad u_+(1)=
\frac {\sin \theta\pi}{\sin(\alpha+\theta)\pi}
u_-(1)
$$
But a priori we know $v_\pm$ and $u_\pm$ up to constant factors, and this implies our statement. $\square$
{\bf\refstepcounter{punct}{\arabic{sec}.\arabic{punct}. } $L^2$-eigenfunctions of $D^*$.\label{l-adjoint}}
{\sc Lemma.} {\it If $\Xi\in\mathcal H$ is an eigenfunction
of $D^*$, then $\Xi=\Phi_q$ with $q\in\theta+{\mathbb Z}$.}
{\sc Proof.} Let $\lambda$ be an eigenvalue and let $p$ be given by (\ref{lambda-cherez-p}) with $\mathop{\rm Re}\nolimits p>-(\alpha+\beta+1)/2$.
Due to the boundary condition at $0$,
we have
\begin{equation}
\Xi=\mathop{\rm const}\nolimits F[-p,p+\alpha+\beta+1;\beta+1;x]
\label{okolo-nulya}
\end{equation}
for
$x<1$.
Only one solution of the equation $Df=\lambda f$
is contained in $L^2$ at infinity, it has the form $$\mathop{\rm const}\nolimits\cdot F[p+\alpha+1,p+\alpha+\beta+1;2p+\alpha+\beta+2;1/x]x^{-\alpha-\beta-p-1} $$ on $[1,\infty]$. Thus, on the both segments the eigenfunction $\Xi$ coincides with $\Phi_p$ up to scalar factors. The gluing condition is (\ref{sin-sin}). By the last remark of Subsection \ref{l-phi-p}, $p-\theta\in{\mathbb Z}$.
If \begin{equation} \mathop{\rm Re}\nolimits p=-(\alpha+\beta+1)/2 \label{remaining-specter} \end{equation} then no solution of the equation $Df=\lambda f$ is $L^2$ at infinity.
$\square$
{\bf \refstepcounter{punct}{\arabic{sec}.\arabic{punct}. } Self-adjointness.} By the previous lemma, the equations $D^*f=\pm if$ have no solution in $\mathcal H$. This implies the essential self-adjointness of $D$.
{\bf\refstepcounter{punct}{\arabic{sec}.\arabic{punct}. } Spectrum.} The eigenvalues $\lambda=-p(p+\alpha+\beta+1)$ corresponding to the functions $\Phi_p$ form the discrete spectrum. The remaining spectrum corresponds to the semi-line (\ref{remaining-specter}), i.e., $\lambda\geqslant (\alpha+\beta+1)^2/4$.
Indeed, in all the other cases we have precisely one $L^2$-solution $S_0(x)$ of the differential equation $Df=\lambda f$ near $0$, and precisely one $L^2$-solution $S_\infty(x)$ near infinity. Hence we can write the Green kernel (i.e., the kernel of the resolvent) as explained in \cite{DS}. Thus for such $\lambda$ the resolvent exists.
{\bf \refstepcounter{punct}{\arabic{sec}.\arabic{punct}. } Almost $L^2$-eigenfunctions.} Let $$p=-(\alpha+\beta+1)/2+is,\qquad s\in\R$$ and let $\lambda$ be given by (\ref{lambda-cherez-p}).
{\sc Lemma.}
{\it The function $\Psi_s$ given by (\ref{eigenfunctions-psi}) is the unique almost $L^2$-solution of the equation $D\Xi=\lambda \Xi$.}
{\sc Proof.}
Near $x=0$, such a solution must have the form
(\ref{okolo-nulya}).
We write the following basis $\Lambda(s,x)$, $\Lambda(-s,x)$
in the space of solutions of the equation
$Df=\lambda f$,
\begin{equation}
\Lambda(s,x)=
F
\left[
\begin{matrix}
\frac{\alpha+\beta+1}2+is, \frac{\alpha-\beta+1}2+is\\
1+2is
\end{matrix};\frac 1x
\right]x^{-(\alpha+\beta+1)/2-is}
\label{lambda-u-infty}
\end{equation}
Both solutions are almost $L^2$. Now we must satisfy the boundary conditions at $x=1$. For this, we expand the three solutions (\ref{okolo-nulya}) and $\Lambda(\pm s; x)$ near the point $x=1$. It remains to write the gluing conditions at $x=1$. The calculation is long; it reduces to the complement formula for the $\Gamma$-function and elementary trigonometry. We omit it.
$\square$
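The complement formula invoked above is Euler's reflection formula $\Gamma(z)\Gamma(1-z)=\pi/\sin\pi z$; it can be sanity-checked numerically for real non-integer $z$ with the standard library:

```python
import math

def reflection_gap(z):
    # |Gamma(z)Gamma(1-z) - pi/sin(pi z)|, for real non-integer z
    return abs(math.gamma(z) * math.gamma(1 - z) - math.pi / math.sin(math.pi * z))

for z in [0.25, 0.5, 0.9, 1.3, 2.75]:
    assert reflection_gap(z) < 1e-9 * abs(math.pi / math.sin(math.pi * z))
print("reflection formula verified")
```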
The formula for the spectral measure follows from the explicit asymptotics of almost $L^2$-solutions at $\infty$; this is explained in \cite{DS}.
\end{document} |